Transparency note: Automated feedback coach 2.0

Understand how your data is processed when you use Automated feedback coach 2.0 with generative AI.

General

  1. What is Automated Feedback Coach 2.0?

    The Automated Feedback Coach 2.0 aims to assist students in writing better reviews for their peers in Peer Review and Group Member Evaluation activities. It provides real-time tips to improve the quality of feedback, not only coaching students on how to write better feedback, but also increasing the quality of the feedback they receive from their peers.

    Automated Feedback Coach 2.0 uses advanced Large Language Models provided by Azure OpenAI Service to process students’ input and provide feedback. Compared to its predecessor, it is much faster and more scalable, and it can provide more specific and actionable feedback based on the context of the review.

    We chose to use Azure OpenAI Service to deliver this feature because of its enterprise-level security, compliance, and regional availability.

  2. Is my data sent to OpenAI to train their language models?

    No. Students’ input is only processed by the language models hosted on Azure to generate automated feedback.

    The input is never used to train, retrain, or improve any of the Azure OpenAI Service’s Models.

    Moreover, we use these models as a “plug-and-play” service: we do not retrain or fine-tune them for our specific use case, beyond providing custom instructions to the model. In other words, we do not use any student data to train, retrain, or improve our models.

Transparency, privacy, and data governance

  1. How is students’ data processed and used by Automated Feedback Coach 2.0?

    Your data is processed in compliance with FeedbackFruits' general Privacy policy.

    Specifically, when you use Automated Feedback Coach 2.0 to improve a review, your data is processed in four separate stages (see figure below):

    • First, your data (including the review text you wrote, the prompt, and the criterion) is sent to the Azure OpenAI Service to be prepared for the model; this preparation step is known as tokenization.

    • Once your data is tokenized and arrives at the model, the model creates a generation based on custom instructions (created by FeedbackFruits) and an input (your “tokens”).

    • After the model completes its generation, the instructions, the input, and the generation are all sent through the Azure OpenAI Service’s Content Moderation System and Abuse Monitoring System to ensure that both the input and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you.

      • In the Abuse Monitoring System, Azure OpenAI Service may store your input data as well as the model generation for up to 30 days.

    • Finally, the model’s generation is returned to FeedbackFruits from Azure in a standard JSON format through their API. This response contains only the completion data and some relevant metadata, including the reason the model stopped and whether an error occurred. (A minimal code sketch of this exchange follows the figure below.)

    (figure: diagram of how your data is transferred from the user interface through the FeedbackFruits API into Azure OpenAI Service)
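
    To make these stages more concrete, here is a minimal, illustrative Python sketch of the request/response exchange, written against Microsoft’s publicly documented chat completions API (via the official openai Python SDK). The endpoint, deployment name, instructions, and review text are placeholders and assumptions, not FeedbackFruits’ actual implementation; tokenization itself happens inside the service.

    ```python
    from openai import AzureOpenAI

    # Placeholder configuration only; none of these values are FeedbackFruits' real settings.
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<api-key>",
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="<deployment-name>",  # the deployed model instance
        messages=[
            # Custom instructions (hypothetical, not the real prompt created by FeedbackFruits).
            {"role": "system", "content": "You are a feedback coach for a student reviewing a peer's work."},
            # The student's input: the criterion and the review text they wrote.
            {"role": "user", "content": "Criterion: clarity\nReview: Your introduction was clear, but ..."},
        ],
    )

    choice = response.choices[0]
    print(choice.message.content)  # the generated coaching tip (the "completion data")
    print(choice.finish_reason)    # metadata: why generation stopped, e.g. "stop" or "content_filter"
    ```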

    In addition, all relevant actions performed by or within the Azure OpenAI Service take place in the region in which you use FeedbackFruits, to comply with data residency requirements and improve latency.

    Students’ input is never used to train, retrain, or improve any of the Azure OpenAI Service’s models. We also do not currently use any students’ input to fine-tune the instance of the model we use.

    To read more about how Azure processes and uses your data, please visit Azure OpenAI Service’s page on Data, privacy, and security.

  2. What are the Content Moderation System and Abuse Monitoring System?

    FeedbackFruits uses Azure’s alerting system to ensure that both the input fed into the model and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you. The alerting system comprises the Content Moderation System and the Abuse Monitoring System.

    The Content Moderation System is a proprietary collection of models (known as an “ensemble”) that are trained to detect potential abuse, misuse, or harmful content generation within the Azure OpenAI Service. The Azure OpenAI Service does not store any metadata regarding the use of the Content Moderation System.

    The Abuse Monitoring System monitors for any abuse or misuse of the service that could violate Azure’s applicable product terms. Only specific authorized Microsoft employees may review your data in the event that certain content has triggered a potential-abuse flag. In the event of misuse of the system, an authorized employee will reach out directly to FeedbackFruits to resolve the issue and prevent further abuse. Azure OpenAI Service stores your input data as well as the model generation for up to 30 days to monitor for any use that would directly violate Microsoft’s applicable product terms.
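
    One way a caller can observe the outcome of content moderation is through the finish_reason field returned with each chat completion choice: a value of "content_filter" indicates the generation was withheld or truncated by filtering. The handling below is a hypothetical sketch, not FeedbackFruits’ production logic.

    ```python
    def handle_choice(choice) -> str:
        """Hypothetical handling of one chat completion choice returned by Azure OpenAI Service."""
        if choice.finish_reason == "content_filter":
            # The content filtering system withheld or truncated the generation.
            return "No coaching tip could be generated for this input."
        if choice.finish_reason == "length":
            # The model hit its token limit; the tip may be cut short.
            return choice.message.content
        # "stop": a normal, complete generation.
        return choice.message.content
    ```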

    For more information, please visit Azure OpenAI Service’s data privacy page and the Code of Conduct.

  3. What is included in your prompt?

    A prompt is a set of specific instructions given to a large language model to generate a particular type of response or follow a specified task. Prompts help prime the AI model to know what and how it should respond.

    Examples of prompts include “Write me a short story about turtles and rabbits”, “Explain the moon landing to me as if I were 10 years old”, or “Help me translate my feedback into Dutch.”

    Because prompts contain cues or guidelines that help create a feature, they can also be considered intellectual property. Although we cannot divulge the exact prompt, here are some of the elements included in the prompt for Automated Feedback Coach 2.0 (a purely hypothetical sketch of how such elements might be assembled follows the list):

    • Respond in a helpful and coachable way.

    • Ensure that the feedback addresses the given criteria.

    • Include the feedback and context (such as the criteria) in a specific format.

    • Compliment and encourage students to give better feedback.

    • Do not respond to gibberish, random text, or things that appear unrelated to giving feedback to a peer.

    • Offer areas of improvement.
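
    For illustration only, elements like these are typically assembled into a system message alongside the student’s input. The sketch below is hypothetical and is not the actual prompt used by Automated Feedback Coach 2.0.

    ```python
    # Hypothetical assembly of prompt elements into model messages (not the real prompt).
    INSTRUCTIONS = (
        "You are a supportive feedback coach for a student reviewing a peer's work.\n"
        "- Respond in a helpful, encouraging way and compliment what the student did well.\n"
        "- Check that the feedback addresses the given criterion.\n"
        "- Suggest concrete areas of improvement.\n"
        "- If the input is gibberish, random text, or unrelated to peer feedback, do not respond to it."
    )

    def build_messages(criterion: str, review_text: str) -> list[dict]:
        """Package the custom instructions and the student's feedback, with its context, in a fixed format."""
        return [
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": f"Criterion: {criterion}\nReview: {review_text}"},
        ]
    ```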

Human agency and oversight

  1. Can I choose to not use Automated feedback coach 2.0?

    Both teachers and students can choose whether or not to use Automated Feedback Coach 2.0. As a student, even after your teacher has enabled the coach, you may disable it in your preferences.

    For more information, please consult this article.

Fairness

  1. Does the Automated feedback coach work for languages other than English?

    Yes. As of October 2023, the Automated Feedback Coach gives feedback in English, Dutch, and French.

    The language of the feedback depends on the student's settings in FeedbackFruits.

    Please note that the quality of AI-generated feedback may vary across languages. This is because the training data for advanced language models is largely drawn from public sources that are primarily in English, and the quantity and quality of available data in other languages may differ significantly.

Relevant Azure OpenAI Service Documents

For more information about how data is processed by Azure OpenAI Service, please consult the Azure OpenAI Service documentation on data, privacy, and security, as well as its Code of Conduct.
