Rubrics in Portfolio (CBA) do more than label performance levels. The points behind those levels determine:
- How competency development is visualized in the portfolio
- When a student is considered to pass or fail in the LMS
This article explains that connection, using insights from our collaboration with TU Eindhoven (TU/e).
Why rubric points matter so much
When you set up a developmental portfolio with competencies, three things become tightly connected:
- The points attached to rubric levels in the backbone
- The visual representations of student progress in the portfolio
- The grades and passing thresholds in the LMS (for example, Canvas)
In short:
Rubric points → Percentages → Visualizations and LMS grades
Because of this chain, small changes in point values can have big consequences for:
- How “developed” a student appears in the portfolio
- Whether that same student is technically passing or failing an assignment or course
From points to percentages
In the rubric backbone, each performance level gets a point value. Portfolio (CBA) then converts those points into a 0–100% scale:
- The highest point value is interpreted as 100%.
- 0 points is interpreted as 0% (explicitly or implicitly).
- All other levels become steps between 0% and 100%.
For example, with four levels such as:
- Beginner
- Developing
- Advanced
- Expert
the backbone might map them to something like:
- Beginner → low percentage (close to 0%)
- Developing → middle percentage
- Advanced → high percentage
- Expert → 100%
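As an illustration, this conversion can be sketched in a few lines of Python. The point values (1–4) are an assumption for demonstration; the actual values are whatever you configure in the rubric backbone.

```python
# Hypothetical point values per rubric level; your backbone
# configuration in Portfolio (CBA) may use different numbers.
LEVEL_POINTS = {
    "Beginner": 1,
    "Developing": 2,
    "Advanced": 3,
    "Expert": 4,
}

def level_percentage(level: str, points: dict = LEVEL_POINTS) -> float:
    """Convert a level's points to a 0-100% scale: the highest value = 100%."""
    max_points = max(points.values())
    return points[level] / max_points * 100

for level in LEVEL_POINTS:
    print(f"{level}: {level_percentage(level):.0f}%")
# Beginner: 25%, Developing: 50%, Advanced: 75%, Expert: 100%
```

With these particular values, the levels land at evenly spaced steps of 25%. Choosing different point values changes the spacing, and with it everything downstream.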
These percentages are then used:
- To draw bars and colors in Over time, Snapshot, and Heatmap views
- As the basis for grades that travel from Portfolio (CBA) to the LMS
Visualizations: What you see is what the points say
The portfolio’s visualizations reflect the percentage values coming from the rubric backbone.
Over time
Shows how a student’s competency level changes across activities and assessments. If the step from one level to the next is large (for example, from 0% to 50% to 100%), the graph will show big jumps between levels.
Snapshot
Shows the current “state” of a student’s performance on competencies. Here, the chosen point distances determine how far apart levels look in terms of progress.
Heatmap
Uses color or intensity to compare competencies or learning outcomes. The underlying percentages again determine:
- How “low,” “medium,” and “high” performance bands are distributed
- How subtle or stark the visual differences are
If you want visualizations that clearly show gradual development, your point choices must allow for more nuanced steps between 0% and 100%.
Grading: When a label becomes a pass or a fail
When Portfolio (CBA) is connected to grading in the LMS:
- Student performances on competencies are recorded via the rubric.
- The rubric points are converted into percentages.
- Those percentages are sent to the LMS and placed on its grading scale.
- The LMS applies its own passing threshold (for example, 60%).
This means that:
- The same label (for example, “Developing”) can lead to very different outcomes depending on how many points it carries.
- A student averaging “Developing” might fail in one configuration but pass in another, even though the wording of the rubric is identical.
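This last step can be made concrete with a small sketch. The 10-point scale and the 60% threshold below are illustrative assumptions; the real values depend on your LMS configuration.

```python
def lms_grade(percentage: float, scale_max: float = 10.0) -> float:
    """Place a Portfolio (CBA) percentage on a 10-point LMS grade scale."""
    return percentage / 100 * scale_max

def passes(percentage: float, threshold_pct: float = 60.0) -> bool:
    """Apply the LMS's own passing threshold (assumed here to be 60%)."""
    return percentage >= threshold_pct

# The same "Developing" label under two different point configurations:
print(lms_grade(50.0), passes(50.0))  # 5.0 False
print(lms_grade(60.0), passes(60.0))  # 6.0 True
```

Note that the rubric label never appears in this calculation: the LMS only ever sees the percentage the points produce.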
The TU/e examples make this very clear.
Two TU/e-inspired scenarios
In both scenarios below:
- The rubric levels are:
  - Beginner
  - Developing
  - Advanced
  - Expert
- 0 is implied as the minimum level (0%).
- The student averages “Developing” on all learning outcomes.
The only difference is the point configuration behind the levels.
Scenario 1: “Developing” equals 50% — student does not pass
Here, the point distances are set so that Developing represents 50% of the full development range.
Outcome:
- The student’s average performance ends up at 50/100 = 50%.
- When this reaches the LMS, 50% translates to a 5.0 on a 10-point scale, which falls below a typical passing mark of 5.5 or 6.0.
- Even though the label “Developing” sounds acceptable, it is not a passing level in this configuration.
This scenario fits a policy where:
- “Developing” is still considered insufficient, and
- Students should reach at least Advanced to pass.
Figure 1: Scenario 1: Rubric levels and points and how they translate into Portfolio visualizations and LMS grading. In this configuration, “Developing” corresponds to 50%, which remains below the passing threshold.
Scenario 2: “Developing” equals 60% — student passes
In this setup, the points are chosen so that Developing represents 60%.
Outcome:
- The same “Developing” across all learning outcomes becomes 60/100 = 60%.
- In the LMS, 60% typically meets or exceeds the passing threshold.
- Now, “Developing” is effectively treated as a pass.
This scenario matches a policy where:
- “Developing” is good enough at this stage.
- “Advanced” and “Expert” signal performance above the minimum requirement.
Figure 2: Scenario 2: Rubric levels and points and how they translate into Portfolio visualizations and LMS grading. Here, “Developing” corresponds to 60%, which meets the passing threshold.
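The two scenarios can be reproduced side by side in a short sketch. The point configurations and the 60% threshold are assumptions chosen to mirror the scenarios above, not values taken from an actual TU/e setup.

```python
# Two hypothetical point configurations for the same four labels.
SCENARIO_1 = {"Beginner": 25, "Developing": 50, "Advanced": 75, "Expert": 100}
SCENARIO_2 = {"Beginner": 30, "Developing": 60, "Advanced": 80, "Expert": 100}

PASSING_THRESHOLD = 60  # percent, as set in the LMS

def average_percentage(scores: list[str], config: dict) -> float:
    """Average a student's level labels across learning outcomes as percentages."""
    max_pts = max(config.values())
    return sum(config[s] / max_pts * 100 for s in scores) / len(scores)

student = ["Developing"] * 4  # "Developing" on all four learning outcomes

for name, config in [("Scenario 1", SCENARIO_1), ("Scenario 2", SCENARIO_2)]:
    pct = average_percentage(student, config)
    verdict = "pass" if pct >= PASSING_THRESHOLD else "fail"
    print(f"{name}: {pct:.0f}% -> {verdict}")
# Scenario 1: 50% -> fail
# Scenario 2: 60% -> pass
```

The rubric wording is identical in both runs; only the point values behind “Developing” differ, and that alone flips the outcome from fail to pass.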
Educational questions this raises
The TU/e journey highlights that designing rubric points is not just a technical decision, but a pedagogical one.
Some key questions that naturally arise for learning designers and teachers:
What does each level mean in practice?
Is “Developing” still below expectation, or is it the acceptable target level at this point in the program?
At which level should a student pass?
Should a student with consistent “Developing” pass the assignment or course, or only those at “Advanced” and above?
How should compensation work across learning outcomes?
Can strong performance on one LO compensate for weaker performance on another, or must all competencies reach a minimum level?
How much nuance do you want in the visual story?
Do you want visuals that show fine-grained growth, or a simpler picture of “not yet there” versus “there”?
The answers to these questions should drive how you allocate points to levels, not the other way around.
What this means in practice for CBA users
For institutions working with competency-based assessment and Portfolio (CBA), the main takeaway is:
Rubric design, visualization, and grading are part of one continuous system.
Concretely, this means:
- You cannot treat rubric labels and points as purely local choices.
- The same backbone feeds both how students see their own development and how they are evaluated in the curriculum.
- Piloting configurations (as done with TU/e) and checking actual outcomes for a few hypothetical students can prevent misalignment between intended policy and actual grade behavior.
When these pieces are aligned, Portfolio (CBA) can:
- Tell a coherent story about growth over time
- Support transparent decisions about who passes, who needs more support, and why
- Make rubric levels meaningful to both students and educators