Quandaries with Learning Assessments?

By Michael J. Peeters, PharmD, MEd, FCCP, BCPS

Have you ever doubted whether you are using the “right word”? Colleagues have told me they avoid certain words, like “affect” or “effect,” when they write; they do not want to look silly by using them incorrectly. With that in mind, and after hearing some notable assessment-related words (and perhaps their misuse) at a recent AACP meeting, I thought it would be worthwhile to review the “right ways” to use some common assessment terms.

Quandary #1: When is a learning assessment considered to be formative or summative?

Traditionally, learning assessments in education are categorized by two primary purposes. An assessment can be formative, providing feedback to a learner during a learning scenario (eg, a midpoint or snapshot assessment during a student’s/resident’s experiential rotation), or it can be summative, culminating in an evaluation of the student (eg, an end-of-rotation final evaluation). A common misconception is that an assessment must be one or the other; if it involves points in a course, it is commonly regarded as summative.

Clarification #1: Assessments are not always only summative or only formative; an assessment can accomplish both.1 Unfortunately, many faculty instructors focus only on summative assessment, even though, with a little more planning and work, their students could receive feedback and progress even further. We can (and should) leverage summative assessments to provide formative feedback to learners; a summative assessment can become an assessment for those students’ learning. Granted, with the number of pharmacy students in a didactic PharmD class, it can be challenging to provide formative feedback to everyone, but I have found that many students really want helpful, formative feedback when it is well-planned, well-organized, and timely. Merely providing exam scores, even with normative, comparative information on how the rest of the class performed, is not considered feedback.1 One example of combining formative and summative assessment is a review committee’s use of portfolios for decisions on a student’s progression to the next program year. With periodic meetings beforehand, these summative assessments also become an opportunity for faculty advisors/mentors to provide formative feedback that their students can reflect upon and grow from.2

Quandary #2: When is an assessment instrument considered to be valid, reliable, and/or “validated”?

Clarification #2: First, validity and reliability are not characteristics of an instrument, nor are an instrument’s scores themselves valid or reliable. Furthermore, validity is not dichotomous (ie, yes or no); it lies on a continuum and concerns your interpretation of test scores for a particular use.3,4 Validity is based on evidence, and that evidence comes through validation. Validation is the process of generating evidence to support your interpretation of your assessment’s scores for your learners.3 That evidence must come from your own college/school of pharmacy; an instrument is not proven reliable and valid by someone else’s previously published study or studies. Finally, a common misconception is that an instrument itself can be “validated”; only the interpretations arising from an assessment instrument’s use can be validated.

Quandary #3: When and how is it best to classify the validity of an educational assessment?

This one can be very confusing. Many of us have heard of different “types” of validity, such as “criterion validity,” “discriminant validity,” “predictive validity,” or “content validity.” While many of these terms are outdated, they represent concepts that more seasoned educators may remember from their training. Using these terms can be a shorthand way of expressing ideas in fewer words. However, colleagues may not share a similar background (nor recognize this shorthand within the current, widely accepted definition of psychometric validity), and misinterpretation may occur.

Clarification #3: The definition of validity evolved over the 20th century. Kane provided a good summary of this history;5 briefly, the term grew from meaning just “criterion validity,” to the addition of “content validity,” and ultimately to a full-fledged framework with four types of validity (construct, content, concurrent, and predictive). Currently, however, validity theory has progressed further and is understood as one unified validity4 with multiple sources of evidence, such as evidence based on relations to other variables (ie, evidence from the former “criterion validity”) and evidence based on test content (ie, evidence from the former “content validity”). A good tip to remember: instead of speaking about a “type” of validity, simply insert “evidence” before or after it. For example, “predictive validity” easily becomes either predictive evidence for validity or validity evidence for prediction; both convey a unitary concept of validity while retaining the idea of prediction.

Communication is important to advancing science, and using the “right” terminology is imperative to building a full, shared understanding, especially with learning assessments. Hopefully, these reminders will clear up some of those doubts as you continue to boldly communicate your innovative thoughts!


  1. Eva KW, Bordage G, Campbell C, Galbraith R, Ginsburg S, Holmboe E, Regehr G. Towards a program of assessment for health professionals: from training into practice. Adv Health Sci Educ Theory Pract. 2016; 21(4):897-913.
  2. Heeneman S, Pool AO, Schuwirth LWT, van der Vleuten CPM, Driessen EW. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015; 49:487-498.
  3. Peeters MJ, Martin BA. Measurement validation of learning assessments: a primer. Curr Pharm Teach Learn. 2017; 9(5) [epub ahead of print]
  4. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 2014.
  5. Kane MT. Validating the interpretations and uses of test scores. J Educ Meas. 2013; 50(1):1-73.


Michael Peeters is a non-tenure-track faculty member at the University of Toledo College of Pharmacy. His educational scholarship interests include educational psychometrics, learning assessments, the development of learners, and interprofessional education.

Pulses is a scholarly blog supported by Currents in Pharmacy Teaching and Learning.

1 Comment

  1. Great article, Michael!

    I have found useful interpretations or explanations (translations?) for Kane’s and Messick’s validity frameworks in the following two articles:
    Downing SM. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003;37:830-837.
    Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane’s framework. Med Educ. 2015;49:560-575.

    These articles have helped me to identify evidence about how our consortium’s new APPE assessment instrument is working.

    Terri O’Sullivan

