This feature helps teachers quickly identify out-of-the-ordinary situations in teamwork assignments, based on how students rated themselves and their peers. Such situations range from positive (like someone who performed exceptionally well) to a possible concern (like an apparent conflict between two students).

We'll discuss 3 topics:

  1. How to enable this feature

  2. How to use this feature ethically

  3. Overview of labels


How to enable this feature

The feature can be enabled in the settings of "Task 3: Give feedback to peers". Currently, this feature requires the following conditions to be met:

  • Use the Group Member Evaluation tool

  • Enable the self-assessment setting

  • Configure students to review in groups

  • Configure students to review at least 4 peers

  • Keep the "New user-interface design" setting enabled (it is on by default, but required)

We recommend using it for teamwork assignments only. We aim to provide a generalized version of this feature in Peer Review too.


How to use this feature ethically

One might rightfully wonder: "Does this label students and judge them?"

The short answer is: no. These labels are mere indicators of likely areas of interest. Judging students is not the intention of this feature. Though we call them "labels" for clarity and brevity, FeedbackFruits does not support or intend to engage in automated judgement of students. To think of these labels as definitive would be a mischaracterization of the feature. Let's clear up what this feature can and can't do:

Appropriate use

  • Use them to guide attention: again, the labels are just indicators of likely areas of interest.

  • Dive deeper: look at the student’s overview of received review ratings, compare it to the team average and their self-assessment, and browse through some of the comments that were left for context.

  • Get in touch with the student: it’s good to hear their side of the story. If an indicator turns out to be legitimate, such a situation can be a big learning opportunity for them. Help them understand the situation to unlock a powerful learning experience.

Inappropriate use

  • Don’t use the labels to jump to conclusions. They are just indicators of likely areas of interest.

  • Don’t judge a student using the labels. Even if a label turns out to be legitimate right now, it does not reflect the student’s future potential to learn and outgrow it.

  • Don’t use labels for grading without investigating each individual case. We will never support blind automatic grading based on these labels.


Overview of labels

High performer

This occurs when a student received an average rating higher than 70%, and that rating was more than 10 percentage points higher than the overall average of the team as a whole.
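
To make each condition concrete, the labels below are accompanied by minimal Python sketches. Throughout, ratings are assumed to be on a 0-100 percent scale, and all function and parameter names are illustrative; these sketches are not FeedbackFruits' actual implementation.

    def is_high_performer(student_avg: float, team_avg: float) -> bool:
        # Rated above 70% and more than 10 percentage points above the team average.
        return student_avg > 70 and (student_avg - team_avg) > 10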

Low performer

This occurs when a student's overall average rating is less than 50%, which may indicate that they did not contribute to the team's success.
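
Sketched the same way (illustrative names, 0-100 percent scale):

    def is_low_performer(student_avg: float) -> bool:
        # Overall average rating below 50%.
        return student_avg < 50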

Underconfident

This occurs when the overall team rating for a student is greater than 60%, but the student rated themselves at least 20 percentage points lower than this. This could indicate that the student is "underconfident" or too critical of their own contributions.
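
A minimal sketch of this check, under the same assumptions:

    def is_underconfident(peer_avg: float, self_rating: float) -> bool:
        # Peers rate the student above 60%, yet the self-rating sits
        # at least 20 percentage points below the peer average.
        return peer_avg > 60 and (peer_avg - self_rating) >= 20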

Overconfident

This occurs when the overall rating for a student is less than 60%, but the student rated themselves at least 20 percentage points higher than their average rating.
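
This mirrors the previous check:

    def is_overconfident(peer_avg: float, self_rating: float) -> bool:
        # Peers rate the student below 60%, yet the self-rating sits
        # at least 20 percentage points above the peer average.
        return peer_avg < 60 and (self_rating - peer_avg) >= 20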

Manipulator

This occurs when a student appears to be trying to "skew the curve" by giving themselves high ratings while rating the other team members poorly. The student has given themselves an overall rating of 80% or higher, while rating all the other members of their team at least 40 percentage points below this rating (when a student rates more than 5 peers, we measure the difference using the average).
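
A sketch of this condition, including the switch to an average-based comparison when a student rates more than 5 peers (names are again illustrative):

    from statistics import mean

    def is_manipulator(self_rating: float, ratings_given: list[float]) -> bool:
        # Requires a self-rating of 80% or higher.
        if self_rating < 80:
            return False
        if len(ratings_given) > 5:
            # With more than 5 peers, compare against the average rating given.
            return (self_rating - mean(ratings_given)) >= 40
        # Otherwise every individual peer rating must sit 40+ points below.
        return all((self_rating - r) >= 40 for r in ratings_given)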

Conflict

This occurs when a student rated a team member at 40% or less, while the median rating from the rest of the team is 60% or more. This generally indicates a conflict between two team members; see the sketch after Co-conflict below, which covers both sides.

Co-conflict

This occurs when a student got rated at 40% or less by one of their teammates, while the median rating from the rest of the team is 60% or more. This is the other side of the Conflict label: Co-conflict marks the student who received the low rating.
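
Conflict and Co-conflict are two sides of the same detection, so one sketch can flag both (illustrative names; ratings on a 0-100 percent scale):

    from statistics import median

    def is_conflict_pair(rating_given: float, other_ratings_of_ratee: list[float]) -> bool:
        # The rater gave 40% or less while the median rating from the
        # rest of the team is 60% or more. When True, the rater gets
        # "Conflict" and the rated student gets "Co-conflict".
        return rating_given <= 40 and median(other_ratings_of_ratee) >= 60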

Clique

This occurs when the team appears to have split into two non-cooperating groups that rate insiders protectively. This happens when there is significant disagreement between the ratings from various team members, evidenced by the standard deviation of the 0-to-1 normalized ratings given by peers being above 0.23.
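
A sketch of the spread check; we assume here that a population standard deviation is taken over the peer ratings a student received, which the description above does not fully specify:

    from statistics import pstdev

    def is_clique_signal(peer_ratings_received: list[float]) -> bool:
        # Normalize 0-100 ratings to the 0-to-1 range, then flag high disagreement.
        normalized = [r / 100 for r in peer_ratings_received]
        return pstdev(normalized) > 0.23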


Due credits:

This feature is heavily inspired by the great research of Purdue University, specifically their excellent CATME assessment technique. FeedbackFruits subsequently made minor adaptations and reinterpretations based on our experience. Also of note is BuddyCheck.io by Shareworks, who created a similar implementation adapted from CATME.

Tags:

exceptional conditions, special cases, extreme voting, outlier ratings, buddycheck labels, catme
