
Competencies: How should we assess competencies?

This article is republished with permission from IntrepidEd News

– Devin Vodicka

There are two fundamentally different types of learning outcomes. Some learning follows a fairly linear pathway where there are clear right and wrong answers. This first type, referred to as “ladder” learning or as a technical problem, can be mastered and typically can be assessed using software. Technical learning tends to be oriented around knowledge acquisition: successfully completing two-digit multiplication, for example, is preceded by competence with one-digit multiplication. In this mode of learning, the outcomes can be binary (mastered/not yet mastered or competent/not yet competent), and these binary determinations are used to inform advancement and to certify competency.

Other learning is much more adaptive and contextual, with multiple possible “solutions” to open-ended challenges. This type of learning, referred to as “knot” learning, typically cannot be mastered, and its competency progressions are nonlinear. Habits such as curiosity or creativity, for example, are deeply contextualized and dynamic. Inputs to inform progress are also more complex, requiring self-reflection, peer observation, educator observation, and even external “expert” observation. As an example, a challenge tied to one of the United Nations Sustainable Development Goals (such as “no poverty”) is unlikely to result in a binary outcome, and the valuable outcomes are more directional than determinative. In this case, feedback is designed to inform ongoing growth in the learner’s knowledge, habits, and skills.

When I engaged in a research project to determine which other forms of input may be helpful, I solicited input from students, teachers, administrators, families, and researchers. In addition to academic assessments, self-reflection, peer feedback, educator observations, and feedback from non-classroom-based “experts” all emerged as valuable perspectives to inform the learner.

What follows are examples of learner-centered metrics that reflect whole-learner outcomes in each of these areas.

Agency

Measuring learner agency requires us to think about expertise as well as to ensure that learners are developing belief in their power to act purposefully toward meaningful goals.

Setting and Achieving Goals

Using tools like the Altitude Learning platform, students can set their own goals, reflect on their progress, and collect evidence of their achievement related to those goals.  

Demonstrating Mastery through Competency-Based Assessment

Demonstrations of mastery can take many forms. While we often think first of knowledge-based outcomes, schools such as Lighthouse Community Public Schools, based in Oakland, California, have not only created a custom set of student-friendly learning targets derived from the Common Core to empower students to drive their own academic learning, but have also paired those learning targets with their HOWLs (Habits of Work and Learning), which guide their social-emotional work.

Another example comes from Mount Vernon School, in Atlanta, Georgia, which has developed a custom set of academic milestones and “Mindsets” that they track and cultivate with students. 

Mindsets
  • Collaborator — Accepts feedback, implements decisions, and shares the credit 
  • Innovator — Builds resilience through risk taking and setbacks
  • Solution Seeker — Sets goals, develops a plan of action, and tests solutions

Evidence can be tagged back to these milestones and mindsets to generate feedback that informs learners, educators, and their families about how to best support their next steps.  

Figure: An example of a competency-based progress view tied to a communication milestone.

Collaboration

Tools such as Minerva’s Forum, which is used in the Minerva Baccalaureate program, collect and share analytics related to student talk time during synchronous learning experiences. This can serve as a proxy for engagement and can also be used by educators to understand how their learning experience design promotes (or does not promote) interaction among students.

Figure: Distribution of student talk time across sections. Students in section A (in blue) talk less in class and show greater variance compared with section B (in orange), with the most talkative section A student speaking almost six times as much as the quietest student. With the exception of two outliers, the section B instructor appears to be facilitating a highly engaged and equitable classroom.

Problem Solving

Solving real-world problems is complex, contextualized, and multi-faceted. For this reason, exhibitions of learning that include multiple forms of evidence from a learning portfolio are one way to demonstrate progress on such challenges. This process requires deep reflection as well as the ability to share insights along the way, informing an ongoing metacognitive journey that helps develop lifelong learning.

We can measure problem-solving through expert feedback. This feedback may come through exhibitions of applied learning, where the learner shares their journey with those who can provide meaningful feedback to validate impact and suggest next steps. Portfolios are helpful references for these exhibitions, particularly because they capture demonstrations over time along with the corresponding evaluations.

Portfolio Defense

The School for Examining Essential Questions of Sustainability (SEEQS) in Honolulu, Hawaii, uses an intensive process of portfolio defenses for its 8th grade students as they are promoted, a capstone experience built on similar experiences embedded across all grade levels. My colleague Dr. Katie Martin wrote about this process and reflected on how powerful it is in creating an environment where students learned to “engage deeply with content to solve real-world problems.”

It is important, when we shift to a competency-based approach, to measure what matters. Whole-learner outcomes such as the development of agency, collaboration, and problem-solving may not all lend themselves to binary or mastery-based metrics, but when we reframe the goal of our feedback as a means to inform the learner, it is clear that there are many compelling ways to draw on important inputs such as self-reflection, peer feedback, and educator observations.

Devin Vodicka is the CEO of Learner-Centered Collaborative and the author of Learner-Centered Leadership. He is also a three-time California Superintendent of the Year (2016 AASA, 2015 ACSA, 2015 Pepperdine), Innovative Superintendent of the Year (2014 Classroom of the Future Foundation), and a nine-time White House invitee, both in recognition of district-wide achievement and to advise and partner with the U.S. Department of Education’s Office of Educational Technology and the Digital Promise League of Innovative Schools.
