The Automated Feedback Coach 2.0 feature aims to help students write better reviews for their peers in Peer Review and Group Member Evaluation activities. It provides real-time tips that improve the quality of feedback: it not only coaches students on how to write better feedback, but also increases the quality of the feedback they receive from their peers.
This version of the feature is now available to all regions upon request. Please get in touch with your FeedbackFruits representative to enable this feature for you.
What is Automated Feedback Coach 2.0? How is it different from the previous version?
Automated Feedback Coach 2.0 uses advanced Large Language Models provided by Azure OpenAI Service to process students’ input and provide feedback.
We chose to use Azure’s OpenAI Service to deliver this feature because of its enterprise-level security, compliance, and regional availability.
Compared to its predecessor, it is much faster and more scalable, and it can provide more specific, actionable feedback based on the context of reviews, such as the feedback criteria.
Is my data sent to OpenAI to train their language models?
No. Students’ input is only processed by the language models hosted at Azure to generate automated feedback.
The input is never used to train, retrain, or improve any of the Azure OpenAI Service’s models.
Moreover, we use these models as a “plug-and-play” service: we do not retrain or fine-tune them for a specific use case beyond providing custom instructions to the model. In other words, we do not use any student data to train, retrain, or improve our models.
How can I enable this feature in a Peer Review or Group Member Evaluation activity as a teacher?
In the Given reviews module, navigate to the Guiding students section. Enable the feature by toggling on the option for "Automated feedback coach."
What does the Automated feedback coach look like when it's working?
After the teacher has enabled the Automated feedback coach feature for a Peer Review or Group Member Evaluation activity, students enrolled in that activity will see "Automated feedback coach" in their feedback-giving window when they begin to write reviews for other students.
Once the student starts typing their review, the AI coach will process the content and give feedback to the student. The feedback updates in real time as the student adds to or alters their review. Please see the image below for an animated presentation of the functionality.
In Peer Review activities, the coach is also available in the sidebar.
Can I choose to not use this feature as a student, even if my teacher has enabled it for me?
Yes, you can disable or enable it any time in your FeedbackFruits preferences.
What is included in your prompt?
A prompt is a set of specific instructions given to a large language model to generate a particular type of response or carry out a specified task. Prompts prime the AI model so it knows what to respond to and how. Examples of prompts include “Write me a short story about turtles and rabbits”, “Explain the moon landing to me as if I were 10 years old”, or “Help me translate my feedback into Dutch.”
Because prompts contain cues or guidelines that help create a feature, they can also be considered intellectual property. Although we cannot divulge the exact prompt, here are some of the elements included in the prompt for the Automated Feedback Coach 2.0:
Respond in a helpful and coachable way.
Ensure that the feedback addresses the given criteria.
Include the feedback and context (such as the criteria) in a specific format.
Compliment and encourage students to give better feedback.
Do not respond to gibberish, random text, or things that appear unrelated to giving feedback to a peer.
Suggest areas for improvement.
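To illustrate how elements like the ones above might fit together, here is a minimal, purely hypothetical sketch in Python. It is not FeedbackFruits' actual prompt or code: the function name, wording, and format below are assumptions for illustration only, showing one common way of embedding feedback criteria and a student's draft review into a system prompt for a language model.

```python
def build_coach_prompt(criteria, student_feedback):
    """Hypothetical example: assemble a coaching prompt that embeds the
    feedback criteria and the student's draft review in a fixed format.
    This is NOT the real Automated Feedback Coach prompt."""
    criteria_lines = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are a feedback coach. Respond in a helpful, encouraging way.\n"
        "Check whether the student's feedback addresses each criterion below, "
        "compliment what is done well, and suggest areas for improvement.\n"
        "If the input is gibberish, random text, or unrelated to giving "
        "feedback to a peer, do not respond.\n\n"
        f"Criteria:\n{criteria_lines}\n\n"
        f"Student feedback:\n{student_feedback}"
    )

# Example usage with made-up criteria and a draft review:
prompt = build_coach_prompt(
    ["Is the feedback specific?", "Is the tone constructive?"],
    "Great job on the introduction!",
)
print(prompt)
```

The resulting string would then be sent, along with the student's input, to the hosted language model; the exact format and instructions used in production remain confidential.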
In what languages is this feature available?
As of October 2023, the AI coach can give feedback in English, Dutch, and French.
The language of the feedback depends on the student's settings in FeedbackFruits.
Please note that the quality of AI-generated feedback may vary across languages. This is because the training data for advanced language models is drawn largely from public sources that are primarily in English, and the quantity and quality of available data in other languages can differ significantly.