The ALMS analysis is a framework for measuring the technical difficulty of revising or remixing open educational resources. Over the past year, a rubric has been developed that breaks each component of the framework down into measurable criteria. The results of an inter-rater reliability test of that rubric will be discussed, along with future iterations of the rubric.