Automated feedback coach 1.0 (beta)

Provide students with real-time tips on the feedback they give to their peers in Group Member Evaluation (GME).

Written by Product Team
Updated over a week ago

NOTE: this article applies to the 1.0 version of Automated Feedback Coach, released in 2019. The feature is being phased out in the US and EU regions and is only available to new users in the Australasia region.

This feature is intended to help students provide better feedback to each other in Group Member Evaluation. Students often struggle to provide constructive feedback to their peers, which can hamper their learning and growth.

This article covers three main topics:

  1. How to enable the feature

  2. Overview of the feedback the model can give

  3. Privacy & Ethics


How to enable the feature

This feature can be enabled in the settings of "Task 3: Give feedback to peers". We recommend using it in teamwork assignments only.


Overview of feedback categories

Too short

When a comment is too short to be constructive, students receive a tip indicating that it may be too short to be useful to their peers.
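
As a rough illustration, a check like this can be as simple as a word-count threshold. The sketch below is a minimal example of the idea only; the function name and the 10-word cutoff are hypothetical, not the coach's actual model.

```python
def is_too_short(comment: str, min_words: int = 10) -> bool:
    """Flag comments that are likely too short to be constructive.

    The 10-word default is an illustrative cutoff, not the
    coach's actual threshold.
    """
    return len(comment.split()) < min_words

print(is_too_short("Good job!"))  # True: probably too short to be useful
print(is_too_short("Your meeting summaries kept everyone on the "
                   "same page throughout the project."))  # False
```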

Unspecific

When feedback is not specific enough, either because it relies on generic words (e.g. good, excellent, bad) or because it lacks elaboration, students receive a tip suggesting that providing an example, or explaining what they appreciated, would be useful.
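
For illustration only, such a check could combine a list of generic words with a search for elaboration cues. Everything in the sketch below (word lists, cues, function name) is a hypothetical simplification, not the actual model.

```python
GENERIC_WORDS = {"good", "excellent", "bad", "great", "nice"}  # illustrative list
ELABORATION_CUES = ("for example", "because", "such as")       # illustrative list

def is_unspecific(comment: str) -> bool:
    """Flag comments that rely on generic praise or criticism
    without elaborating on it."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    uses_generic_word = bool(words & GENERIC_WORDS)
    elaborates = any(cue in comment.lower() for cue in ELABORATION_CUES)
    return uses_generic_word and not elaborates

print(is_unspecific("Excellent work!"))                                       # True
print(is_unspecific("Excellent work, for example your slides were clear."))  # False
```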

Personal attacks

When student feedback is directed at the peer as a person rather than at their behaviour, and is phrased in an unfriendly way, it is flagged as a personal attack. Students are encouraged to reflect on whether they can frame that feedback in another, more motivational way.
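
A naive illustration of this kind of check might look for second-person wording combined with hostile vocabulary. The sketch below is deliberately simplistic, with made-up word lists; the coach's real classifier is more nuanced.

```python
HOSTILE_WORDS = {"lazy", "useless", "stupid", "incompetent"}  # made-up example list

def looks_like_personal_attack(comment: str) -> bool:
    """Very naive heuristic: the comment targets the person
    ("you", "your", "you're") and uses hostile vocabulary."""
    words = {w.strip(".,!?'").lower() for w in comment.split()}
    targets_person = bool(words & {"you", "your", "you're"})
    return targets_person and bool(words & HOSTILE_WORDS)

print(looks_like_personal_attack("You're lazy and useless."))        # True
print(looks_like_personal_attack("The report missed a deadline."))   # False
```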

Overly positive & Overly negative feedback

  1. Overly positive: when feedback is overly positive, students are reminded that pointing out some areas of improvement is constructive and helps their peer keep growing.

  2. Overly negative: when feedback is overly negative, students are encouraged to reflect on how receiving such feedback would affect them, and to add some constructive suggestions for improvement (a rough sketch of a polarity check follows below).
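
As a rough sketch of what a polarity check might look like, one could count positive versus negative words and flag comments that are entirely one-sided. The word lists below are invented for illustration, not the actual model.

```python
POSITIVE = {"great", "excellent", "amazing", "perfect", "fantastic"}  # illustrative
NEGATIVE = {"terrible", "awful", "poor", "sloppy", "careless"}        # illustrative

def polarity(comment: str) -> str:
    """Classify a comment as overly positive, overly negative,
    or mixed, based purely on word counts (illustration only)."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos and not neg:
        return "overly positive"
    if neg and not pos:
        return "overly negative"
    return "mixed or neutral"

print(polarity("Amazing, perfect, fantastic work!"))           # overly positive
print(polarity("Great slides, but the report felt sloppy."))   # mixed or neutral
```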

Repeating the criterion

When a student's feedback comment overlaps heavily with the criterion or the criterion description, without any elaboration, the student is encouraged to think of examples of how or when their peer displayed these behaviours.
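
One common way to detect this kind of overlap is a token-level similarity between the comment and the criterion text. The sketch below uses a simple Jaccard similarity with an invented threshold; it illustrates the idea, not the coach's actual method.

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two texts."""
    ta = {w.strip(".,!?").lower() for w in a.split()}
    tb = {w.strip(".,!?").lower() for w in b.split()}
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def repeats_criterion(comment: str, criterion: str, threshold: float = 0.6) -> bool:
    """Flag comments that mostly restate the criterion text.
    The 0.6 threshold is illustrative only."""
    return jaccard(comment, criterion) >= threshold

criterion = "Communicates clearly with the team."
print(repeats_criterion("He communicates clearly with the team", criterion))  # True
print(repeats_criterion("Your demos made the design decisions easy to follow",
                        criterion))  # False
```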

Good feedback

When students take the time to provide good feedback, this is also communicated to them. This happens when at least one of the following holds (a simplified sketch of these checks follows after the list):

  1. The feedback includes an example or explanation

  2. The feedback is addressed directly to the peer*

  3. The feedback is framed as the giver's own perception

*One condition applies: this signal is not active if, under Task 2: Given Reviews, the teacher sets the "Students see their received feedback" setting to "never".
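
As a simplified sketch, each of the three signals above can be approximated with surface cues: explanation markers, second-person address, and first-person framing. The phrase lists below are invented for illustration; the coach's real checks are more sophisticated.

```python
def good_feedback_signals(comment: str) -> dict:
    """Rough surface-cue approximations of the three signals.
    All phrase lists are illustrative, not the actual model."""
    text = comment.lower()
    words = set(text.replace(",", " ").split())
    return {
        "example_or_explanation": any(
            cue in text for cue in ("for example", "because", "such as")),
        "addressed_to_peer": bool(words & {"you", "your"}),
        "own_perception": any(
            phrase in text for phrase in ("i think", "i feel", "in my view")),
    }

print(good_feedback_signals(
    "I think your summaries helped a lot, because they kept us on track."))
# {'example_or_explanation': True, 'addressed_to_peer': True, 'own_perception': True}
```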


Privacy & Ethics

In this age of increasingly powerful tech and AI, you might wonder whether FeedbackFruits exercises due caution when employing them in your education. We hope to convince you that we take students' interests to heart by being transparent about how we deal with some common concerns.

  1. Can this ever block my students from posting their feedback?

    No. Even when feedback is classified as very poor in quality, we will not block students from submitting it. As with most AI applications, part of the reason is that we currently can't guarantee that we won't mistake quality feedback for low-quality feedback. Even if we could, we are very skeptical that blocking would benefit the learning experience. Going forward, FeedbackFruits will stay very conservative and cautious in designing automated interventions in students' learning, to ensure we keep students' interests at heart.

  2. Can I use this to automatically grade my students?

    No. We don't think it is appropriate to support automated grading with such a young innovation, and we won't enable it for the foreseeable future. FeedbackFruits has a strong stance on algorithmic injustice; going forward we will err on the side of caution and stay very conservative in automating student assessment.

  3. What if the AI makes a mistake?

    It's inevitable that the AI will make a mistake at some point: it can still occasionally mischaracterize proper feedback as low quality. Given the complexity and diversity of language, it's not a matter of "if", but of how frequently. Hence, we believe any tech company should design with this in mind and anticipate failures before they happen. Only when failure is rare enough, and can be corrected quickly when it does happen, does it become of little consequence and permissible in education. We therefore stick to the EU guidelines on AI as much as possible, and have implemented a survey mechanism to close the loop. This early-warning system lets us monitor usefulness ratings per intervention type, so we can address faults in the system quickly and minimize the time between a fault's introduction and its resolution, thereby limiting the number of students affected (a rough sketch of this kind of monitoring follows after this Q&A).

  4. Where does my students' feedback go for analysis? Who sees it?

    Because we're a European company, we adhere to the EU's privacy standards (GDPR). The real-time processing of students' feedback is done fully autonomously, without exposure to employees, exclusively on servers at Leafcloud, a Dutch company in the EU that also adheres to GDPR.

    To learn from problems indicated in the usefulness ratings, employees retroactively sample some of the linked students' feedback for analysis. This is done exclusively with anonymized responses, by a small group of employees who are cleared for this work and granted access to only this data.
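
As a rough illustration of the close-the-loop monitoring described in question 3, the sketch below aggregates hypothetical usefulness ratings per intervention type and flags types whose average rating drops below a threshold. All names, data, and thresholds are assumptions for illustration, not the actual FeedbackFruits implementation.

```python
from collections import defaultdict
from statistics import mean

def flag_interventions(ratings, min_avg=3.5, min_count=20):
    """Flag intervention types whose average usefulness rating falls
    below min_avg (on a hypothetical 1-5 scale), once enough ratings
    have come in. Thresholds are illustrative only."""
    by_type = defaultdict(list)
    for intervention_type, rating in ratings:
        by_type[intervention_type].append(rating)
    return sorted(t for t, rs in by_type.items()
                  if len(rs) >= min_count and mean(rs) < min_avg)

# Hypothetical ratings: 25 low scores for "unspecific", 25 high for "too_short".
ratings = ([("unspecific", 2) for _ in range(25)]
           + [("too_short", 5) for _ in range(25)])
print(flag_interventions(ratings))  # ['unspecific']
```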

This concludes the Automated Feedback Coach 1.0 (beta) article.
If you have any questions or experience a technical issue, please contact our friendly support team by clicking the blue chat button (note: support is available 24 hours a day on weekdays and unavailable on weekends).
