Group Member Evaluation | Detect outliers

Quickly identify out-of-the-ordinary situations in teamwork assignments, like overconfident students or group conflicts.


Quick Configuration Video

This feature is intended to help teachers quickly identify out-of-the-ordinary situations in teamwork assignments, based on how students rated themselves and their peers. Such situations can range from positive (like someone who performed exceptionally well) to a possible concern (like an apparent conflict between two students).

We'll discuss three topics:

  1. How to enable this feature

  2. How to ethically use this

  3. Overview of indicators


How to enable this feature

The feature can be enabled in the settings of "Task 3: Give feedback to peers". Currently, the following conditions must be met:

  • Use the Group Member Evaluation tool

  • Enable the self-assessment setting

  • Configure students to review in groups

  • Configure students to review at least 4 peers, or 'all'

  • Enable the "New user-interface design" setting (on by default, but required)

We recommend using it for teamwork assignments only. We aim to provide a generalized version of this feature in Peer Review too.


How to ethically use this

One might rightfully wonder: "Does this label and judge students?"

The short answer is: no, unless you use it that way. To prevent this, let's clear up what this feature can and can't do:

Appropriate use

  • Use to guide attention: These indicators are just that: mere indicators of likely areas of interest. Please don't jump to conclusions with this feature. Treating them as definitive would overinterpret the certainty of the indicator.

  • Dive deeper: Look at the student’s overview of received review ratings, and compare it to the team average and their self-assessment. Browse through some of the comments that were left for context.

  • Get in touch with the student: It’s good to hear their side of the story. If an indicator turns out to be legitimate, the situation can be a big learning opportunity; show them the data to provide evidence in the discussion and help them understand how the situation occurred. Also touch upon the limits of data from a single assignment to paint a realistic picture. Helping them understand the "why" unlocks a powerful learning experience to recalibrate their self-perception and motivate them to set self-development goals.

Inappropriate use

  • Don’t jump to conclusions: The indicators merely point to likely areas of interest.

  • Beware of "fixed mindset" interpretations: Even if an indicator turns out to be legitimate right now, it does not necessarily reflect the student's potential to learn and outgrow the behavior in the future.

  • Don’t use indicators for grading without investigating each individual case. We will never support blind automatic grading based on these indicators.


Overview of indicators

High performer

This occurs when a student received an average rating higher than 70% and that rating was more than 10 percentage points higher than the team's overall average.

Low performer

This occurs when a student appears not to have contributed to the team's success: their overall average rating is less than 50%.
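
For concreteness, here is a minimal Python sketch of the two conditions above. FeedbackFruits does not expose these checks as code; the function names are hypothetical, and ratings are assumed to be normalized to the 0-1 range (so 70% = 0.70), in line with the normalization mentioned under "Clique" below.

    def is_high_performer(student_avg: float, team_avg: float) -> bool:
        # Average received rating above 70% AND more than 10 percentage
        # points above the team's overall average.
        return student_avg > 0.70 and student_avg - team_avg > 0.10

    def is_low_performer(student_avg: float) -> bool:
        # Overall average received rating below 50%.
        return student_avg < 0.50

    print(is_high_performer(0.85, 0.70))  # True: 15 points above the team average
    print(is_low_performer(0.45))         # True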

Underconfident

This occurs when the overall team rating for a student is greater than 60%, but the student rated themselves at least 20 percentage points lower than this. This could indicate that the student is "underconfident" or too critical of their own contributions.

Overconfident

This occurs when the overall rating for a student is less than 60%, but the student rated themselves at least 20 percentage points higher than their average rating.
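
The same hedges apply to this sketch of the two confidence indicators: hypothetical function names, ratings normalized to 0-1.

    def is_underconfident(team_rating: float, self_rating: float) -> bool:
        # Team rates the student above 60%, but the self-rating sits at
        # least 20 percentage points below the team's rating.
        return team_rating > 0.60 and team_rating - self_rating >= 0.20

    def is_overconfident(team_rating: float, self_rating: float) -> bool:
        # Team rates the student below 60%, but the self-rating sits at
        # least 20 percentage points above it.
        return team_rating < 0.60 and self_rating - team_rating >= 0.20

    print(is_underconfident(0.75, 0.50))  # True: self-rating 25 points lower
    print(is_overconfident(0.45, 0.70))   # True: self-rating 25 points higher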

Manipulator

This occurs when a student appears to be trying to "skew the curve" by giving themselves high ratings while rating the other team members poorly. The student has given themselves an overall rating of 80% or higher, while rating all the other members of their team at least 40 percentage points below this rating (when rating more than 5 peers, the difference is measured against the average of the ratings given).
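
A sketch of this condition, under the same assumptions as above. The article does not spell out the more-than-5-peers case precisely, so the branch below is a best-effort reading of the parenthetical.

    from statistics import mean

    def is_manipulator(self_rating: float, ratings_given: list[float]) -> bool:
        # Self-rating of 80% or higher, with ratings given to peers at least
        # 40 percentage points below it. With more than 5 peers, compare
        # against the average of the ratings given instead of each rating.
        if self_rating < 0.80:
            return False
        if len(ratings_given) > 5:
            return self_rating - mean(ratings_given) >= 0.40
        return all(self_rating - r >= 0.40 for r in ratings_given)

    print(is_manipulator(0.90, [0.40, 0.35, 0.50]))  # True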

Conflict

This occurs when a student rated a team member at 40% or less, while the median rating from the rest of the team is 60% or more. This generally indicates that there is a conflict between two team members.

Co-conflict

This occurs when a student got rated at 40% or less by one of their teammates, while the median rating from the rest of the team is 60% or more: it is the receiving side of the Conflict indicator above.
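
A sketch of the Conflict check, same assumptions as above; Co-conflict is the identical condition evaluated from the perspective of the student who received the low rating.

    from statistics import median

    def is_conflict(rating_given: float, ratings_from_rest: list[float]) -> bool:
        # One student rates a teammate at 40% or less while the median
        # rating that teammate receives from the rest of the team is
        # 60% or more.
        return rating_given <= 0.40 and median(ratings_from_rest) >= 0.60

    print(is_conflict(0.30, [0.70, 0.65, 0.80]))  # True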

Clique

This occurs when it appears the team might have split into two non-cooperating groups with protective insider ratings. This happens when there is significant disagreement between the ratings from various team members, evidenced by the standard deviation of the 0-to-1 normalized peer ratings being above 0.23.
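
A sketch of this last check. Whether the product uses the population or sample standard deviation is not stated; the population version (pstdev) is assumed here.

    from statistics import pstdev

    def is_clique_suspected(peer_ratings: list[float]) -> bool:
        # Significant disagreement among the 0-to-1 normalized ratings:
        # standard deviation above 0.23.
        return pstdev(peer_ratings) > 0.23

    # Two insiders rate high, two outsiders rate low: pstdev = 0.30.
    print(is_clique_suspected([0.9, 0.9, 0.3, 0.3]))  # True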


Due credit:

This feature is heavily inspired by research from Purdue University, specifically their excellent CATME assessment technique. Pedagogy-wise, minor adaptations and reinterpretations were subsequently made by FeedbackFruits based on our experience. The most substantial differences are in the interaction design of our implementation, intended to bring the user-friendliness up to the level expected of FeedbackFruits. Also of note is the BuddyCheck.io (by Shareworks) adaptation of the same CATME feature, called "Labels".

Tags:

exceptional conditions, special cases, extreme voting, outlier ratings, buddycheck labels, indicators, catme

This concludes the Detect outliers article.
If you have any questions or experience a technical issue, please contact our friendly support team by clicking on the blue chat button (note: support is available 24 hours on weekdays and unavailable on weekends).
