Volume 37 Number 1 | February 2023
Kasey Edwardson, PhD, MLS(ASCP)CM, SCCM
Becoming a medical laboratory professional requires knowledge and competence that are typically demonstrated by passing a Board of Certification (BOC) exam. While there are a few certification options, the most popular and recognized are the medical laboratory technician (MLT) and medical laboratory scientist (MLS) certifications through the American Society for Clinical Pathology (ASCP) BOC. Eligibility to sit for either exam is often gained through completion of an accredited MLT or MLS program. On the journey to becoming a certified laboratory professional, an individual experiences several points of evaluation throughout their academic program before reaching the final assessment of the BOC, and these evaluations can be inequitable and exclusive.
Most educational programs, including medical laboratory education programs (MLEPs), use the five-letter grading system built on a 100-point percentage scale. MLEPs typically require minimum letter grades, such as a “C” or “C+,” for admission to, progression through, and completion of the program. In a previous review of ASCLS Region VI MLEPs, I found that the grading scale used to convert percentage scores to letter grades varies among programs, and the minimum score needed to earn a “C” ranged from 70 percent to 80 percent. Laboratory science learners therefore face different grade requirements depending on the program they enter. These variations in grading scales likely reach beyond MLEPs as well, appearing in the general education and pre-program courses used to determine admission into an MLEP. Even before applying to MLEPs, future laboratory professionals experience inequity in the assessment of their learning, which influences their access to MLEPs.
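To make that variation concrete, here is a minimal sketch (in Python) of converting one percentage score to a letter grade under two hypothetical scales, one where a “C” begins at 70 percent and one where it begins at 80 percent. The function name and cutoff values are illustrative assumptions, not any program’s published scale.

```python
# Hypothetical illustration: two grading scales with different minimum-"C" cutoffs.
# The cutoff values below are invented for demonstration and do not represent any
# specific program's actual scale.

def letter_grade(percent: float, cutoffs: dict[str, float]) -> str:
    """Return the highest letter grade whose minimum cutoff the score meets."""
    for letter, minimum in sorted(cutoffs.items(), key=lambda kv: kv[1], reverse=True):
        if percent >= minimum:
            return letter
    return "F"

# Program A: a "C" starts at 70 percent; Program B: a "C" starts at 80 percent.
program_a = {"A": 90, "B": 80, "C": 70, "D": 60}
program_b = {"A": 93, "B": 86, "C": 80, "D": 74}

score = 78.0
print(f"Score of {score}% -> Program A: {letter_grade(score, program_a)}, "
      f"Program B: {letter_grade(score, program_b)}")
# Score of 78.0% -> Program A: C, Program B: D
```

Under these assumed cutoffs, the same 78 percent earns a passing “C” in one program and a “D” in the other.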
Although better standardization across MLEPs would help decrease inequities and promote accessibility, expecting all MLEPs to adopt the same grading scale is unrealistic because programs must abide by institutional, regional, and/or state-level guidelines on student assessment. Even if grading scales were standardized across MLEPs, individual faculty grading practices would continue to contribute to inequities in learner evaluation. Instructors consider different factors while grading,1-4 and many include behavioral factors.5-8 Not only are behavioral factors (e.g., participation and affective objectives) unreliable measures of knowledge, they also introduce inequities in grading because learners’ behaviors are likely assessed differently based on the implicit biases of the instructor.2 This is evidenced by nonwhite and male-identifying learners receiving lower grades yet outperforming white and non-male learners on measures of knowledge and intellect.2 MLEPs are not immune to these effects because many programs include affective and behavioral objectives in their grading practices. Furthermore, traditional grading treats every learner as though they have had the same educational experiences and preparation, which contributes further to inequities in grading.
One method for more equitable and inclusive grading is ungrading, also known as going gradeless, de-grading, or alternative assessment approaches. The ungrading movement isn’t new; it has been used by many elite institutions, including medical schools,9 and it gained wider attention when programs and institutions shifted to pass/fail assessments during the COVID-19 pandemic. Not only does ungrading promote equity, inclusion, and access in higher education, it also mitigates other negative impacts of grading. Focusing on grades actually decreases learning10-14 and is associated with increased extrinsic motivation and decreased intrinsic motivation, increased competition between learners, poor instructor-learner relationships, and increased anxiety and depression.11 All of these factors negatively impact the learning process.
Ungrading, particularly in a field that requires passing a final assessment such as the BOC, raises eyebrows. Yet ungrading simply shifts the focus away from the grade and toward substantial feedback and the learning process. Final letter grades can still be assigned, and if ungrading is used appropriately, those grades should reflect learner knowledge as well as, if not better than, traditional grading practices do.
There are several assessment approaches that can be used in MLEPs, including limited grading, grade-free zones, self-assessment and process letters, and competency-based assessment.
- Limited grading uses fewer grade categories (e.g., pass/fail).
- Grade-free zones involve grading some learning activities but not others.
- Self-assessment and process letters are reflections produced by the learner.
- Competency-based assessment is already commonly used in MLEPs and other health science programs to evaluate a learner’s ability to perform a task at a competent level.
I have used all of these methods in my undergraduate and graduate clinical laboratory science courses and have seen positive outcomes. In doctoral courses, I use grade-free zones by categorizing learning activities as required or optional. In undergraduate courses, I use grade-free zones by applying percentage grades to written and practical exams only. At both academic levels, limited grading is applied to mark learning activities as complete or incomplete.
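As a rough illustration of this structure only, the following Python sketch models a grade-free-zone gradebook in which percentage grades attach to exams alone while other learning activities are simply recorded as complete or incomplete. The class name, fields, and sample entries are hypothetical assumptions, not an actual course rubric.

```python
# Hypothetical sketch of a grade-free-zone gradebook: percentage grades apply only to
# exams, while other learning activities are tracked as complete or incomplete.
# The structure and sample entries are illustrative assumptions, not a real rubric.

from dataclasses import dataclass, field

@dataclass
class CourseRecord:
    exam_scores: list[float] = field(default_factory=list)     # graded with percentages
    activities: dict[str, bool] = field(default_factory=dict)  # True = complete

    def record_exam(self, percent: float) -> None:
        self.exam_scores.append(percent)

    def record_activity(self, name: str, complete: bool) -> None:
        self.activities[name] = complete

    def exam_average(self) -> float:
        """Only exams contribute to the percentage grade."""
        return sum(self.exam_scores) / len(self.exam_scores) if self.exam_scores else 0.0

    def all_activities_complete(self) -> bool:
        return all(self.activities.values())

record = CourseRecord()
record.record_exam(84.0)
record.record_exam(91.0)
record.record_activity("case study reflection", True)
record.record_activity("practice problem set", True)

print(f"Exam average: {record.exam_average():.1f}%")                # Exam average: 87.5%
print(f"All activities complete: {record.all_activities_complete()}")  # True
```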
Whichever approach is used, it is imperative that learners receive substantial feedback to promote learning. Throughout doctoral courses, I have learners complete optional memos. Although they are optional, most learners complete them, and they give me an understanding of the areas in which learners need more development. At the end of each course, at both academic levels, I require learners to complete a self-evaluation and grade their own performance in the course. The self-evaluation allows learners to reflect on and highlight the areas they perceive to be their strengths and weaknesses, and it is a tool that can promote learner motivation.15 Lastly, like many laboratory educators, I use competency-based assessment in lab courses, where learners are assessed as competent or not.
I have been ungrading my doctoral courses longer than my undergraduate courses and plan to publish findings from the doctoral courses this spring. Additionally, a colleague and I ungraded our undergraduate courses this fall and plan to present our findings at a society conference in 2023. So far, learners have expressed that they are able to focus more on learning the material. I believe our grading practices have been more equitable and inclusive because ungrading meets the learner where they are and builds from there, rather than assigning poor grades and setting them up to fight an uphill battle to pass. I encourage other educators to consider alternative assessment approaches. If done well, ungrading can improve the learning process and promote equity, access, and inclusion in laboratory science education.
References
- Guskey, T.R., & Link, L.J. (2019). Exploring the factors teachers consider in determining students’ grades. Assessment in Education: Principles, Policies & Practice, 26(1), 23-30.
- Link, L.J., & Guskey, T.R. (2019). How traditional grading contributes to student inequities and how to fix it. Educational, School, and Counseling Psychology Faculty Publications. 53. http://uknowledge.uky.edu/edp_facpub/53
- McMillan, J.H. (2001). Secondary teachers’ classroom assessment and grading practices. Educational Measurement: Issues and Practice, 20(1), 20-32.
- McMillan, J.H., Myran, S., & Workman, D. (2002). Elementary teachers’ classroom assessment and grading practices. Journal of Educational Research, 95(4), 203-213.
- Brookhart, S.M., Guskey, T.R., Bowers, A.J., McMillan, J.H., Smith, J.K., Smith, L.F., & Welsh, M.E. (2016). A century of grading research: Meaning and value in the most common educational measure. Review of Educational Research, 86(4), 801-848.
- Frary, R.B., Cross, L.H., & weber, L.J. (1993). Testing and grading practices and opinions of secondary teachers of academic subjects: Implications for instruction in measurement. Educational Measurement: Issues & Practice, 12(3), 23-30.
- Gullickson, A.R. (1985). Student evaluation techniques and their relationship to grade and curriculum. Journal of Educational Research, 79(2), 96-100.
- Link, L.J. (2018). Teachers’ perceptions of grading practices: How pre-service training makes a difference. Journal of Research in Education, 28(1), 62-91.
- Blum, S.D. (2020). UNgrading: Why rating students undermines learning (and what to do instead). Morgantown: West Virginia University Press.
- Beck, H.P., Rorrer-Woody, S., & Pierce, L.G. (1991). The relations of learning and grade orientations to academic performance. Teaching of Psychology, 18(1), 35-37.
- Gorichanaz, T. (2022). “It made me feel like it was okay to be wrong”: Student experiences with ungrading. Active Learning in Higher Education, https://doi.org/10.1177/14697874221093640
- Harland, T., McLean, A., Wass, R., Miller, E., & Sim, K.N. (2015). An assessment arms race and its fallout: High-stakes grading and the case for slow scholarship. Assessment & Evaluation in Higher Education, 40(4), 528-541.
- Horowitz, G. (2010). It’s not always just about the grade: Exploring the achievement goal orientations of pre-med students. Journal of Experiential Education, 78(2), 215-245.
- Milton, O., Pollio, H.R., & Eison, J.A. (1986). Making sense of college grades: Why the grading system does not work and what can be done about it. San Francisco, CA: Jossey-Bass.
- McMillan, J.H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40-49.
Kasey Edwardson is a Clinical Assistant Professor, Department of Clinical Laboratory Sciences, School of Health Professions at the University of Kansas Medical Center in Kansas City, Kansas.