Acai Coach Transparency Note

Understand how data is processed when using the Acai Feedback Coach, Reflection Coach, or Discussion Coach, which leverage generative AI.


General

  1. What are Feedback Coach, Reflection Coach and Discussion Coach?

    With the Acai Coach features, we help students write better reviews for their peers, improve the quality of their self-assessments, go deeper in their reflections, and make stronger discussion contributions. Each Coach provides real-time guidance that improves the quality of students' contributions: it not only coaches students on how to write better themselves, but also increases the quality of the feedback and responses they receive from their peers.
    All our Acai Coach features use advanced Large Language Models provided by the Microsoft Azure OpenAI Service to process students' input and provide guidance. We choose to work with Microsoft services to deliver these features because they provide enterprise-level security, compliance, and regional availability.

  2. Is my data sent to OpenAI to train their language models?

    No. Students' input is only processed by the language models hosted on Azure to generate guidance on the provided writing.

    The input is never used to train, retrain, or improve any of the Azure OpenAI Service’s Models.

    Moreover, we use these foundational models as a “plug-and-play” service: apart from providing custom instructions to the model, we do not retrain or fine-tune the models for any specific use case. In other words, we do not use any student data to train, retrain, or improve our models.

Transparency, privacy, and data governance

  1. How is student data processed and used by Acai Coach functionality?

    When using Acai Coach features, like Feedback Coach, Reflection Coach and Discussion Coach, your data is processed in four separate stages (see figure below):

    • First, your data (including the text you wrote, the prompt, and the criterion/instructions/discussion prompt) is sent to the Azure OpenAI Service to be prepared for processing by a model; this step is known as tokenization.

    • Once your data is tokenized and interpreted by the model, the model then creates a generation based on custom instructions (created by FeedbackFruits) and an input (your “tokens”).

    • After the model completes its generation, the instructions, input, and generation are all sent through the Azure OpenAI Service’s Content Moderation System and Abuse Monitoring System to ensure that both the input and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you.

      • As part of the Abuse Monitoring System, Azure OpenAI Service may store your input data as well as the model generation for up to 30 days.

    • Finally, the model’s generation is returned to FeedbackFruits in a standard JSON format from Microsoft Azure through their API. This contains only the completion data and some relevant metadata, including the reason the model stopped generating and whether an error occurred (a minimal sketch of this request/response flow is shown after the diagram below).

    (Figure: diagram of how your data is transferred from the user interface through our API into the Azure OpenAI Service.)
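
    To make the flow above more concrete, here is a minimal, illustrative sketch of a request to the Azure OpenAI Service and the JSON-based response it returns. This is not FeedbackFruits' actual implementation: the endpoint, deployment name, and instruction text are placeholder assumptions, and the snippet simply uses the standard openai Python SDK for Azure.

```python
# Illustrative sketch only -- not FeedbackFruits' actual code.
# Assumes the standard openai Python SDK and placeholder Azure credentials.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # regional endpoint (placeholder)
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# The custom instructions (written by the application) plus the student's text
# and the assignment criteria are sent to Azure, where they are tokenized and
# interpreted by the model.
response = client.chat.completions.create(
    model="coach-deployment",  # placeholder deployment name
    messages=[
        {"role": "system", "content": "You are a coach. Give helpful, encouraging guidance."},
        {"role": "user", "content": "Criteria: ...\n\nStudent's draft: ..."},
    ],
)

# The generation comes back as JSON; the SDK exposes the completion text plus
# metadata such as the reason the model stopped generating.
choice = response.choices[0]
print(choice.message.content)  # the guidance shown to the student
print(choice.finish_reason)    # e.g. "stop", "length", or "content_filter"
```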


    Your personal data is processed in compliance with applicable data protection law and the Data Processing Agreement entered into between FeedbackFruits and your institution.

    In addition, all relevant actions performed by or within the Azure OpenAI Service will be done in the region in which you are using FeedbackFruits to comply with data residency requirements and improve latency.

    Students’ input is never used to train, retrain, or improve any of the Azure OpenAI Service’s Models. We also do not use any students’ input to fine-tune the model instance we use.

    To read more about how Azure processes and uses your data, please visit Azure OpenAI Service’s page on Data, privacy, and security.

  2. What are the Content Moderation System and Abuse Monitoring System?

    FeedbackFruits uses Microsoft Azure’s alerting system to ensure that both the input fed into the model and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you. The alerting system comprises the Content Moderation System and the Abuse Monitoring System.

    The Content Moderation System is a proprietary collection of models (known as an “ensemble”) that are trained to detect potential abuse, misuse, or harmful content generation within the Azure OpenAI Service. The Azure OpenAI Service does not store any metadata regarding the use of the Content Moderation System.

    The Abuse Monitoring System monitors for any abuse or misuse of the service that could violate Azure’s applicable product terms. Only specific authorized Microsoft employees may review your data in the event that certain content has triggered potential abuse of the system. In the event of misuse of the system, an authorized employee will reach out directly to FeedbackFruits to resolve the issue and prevent further abuse. Azure OpenAI Service will store your input data as well as the model generation for up to 30 days to monitor if there is use of the Azure OpenAI Service that would directly violate Microsoft’s applicable product terms.
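
    As an illustration only (not FeedbackFruits' actual error handling), the sketch below shows how a caller of the Azure OpenAI Service can observe these systems in practice: a prompt flagged by the Content Moderation System is rejected before generation, and a filtered generation is reported through the finish reason. The function name and messages are hypothetical.

```python
# Hypothetical sketch of reacting to Azure OpenAI content filtering.
from openai import BadRequestError


def coach_reply(client, deployment, messages):
    try:
        response = client.chat.completions.create(model=deployment, messages=messages)
    except BadRequestError as err:
        # Prompts flagged by the content filter are rejected with a
        # content_filter error before any generation happens.
        if "content_filter" in str(err):
            return "Your input could not be processed. Please rephrase and try again."
        raise

    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The generated text itself was filtered, so no guidance is shown.
        return "No guidance could be generated for this input."
    return choice.message.content
```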

    For more information, please visit the Azure OpenAI Service’s data privacy page and the Code of Conduct.

  3. What is included in your prompt?

    A prompt is a set of specific instructions given to a large language model to generate a particular type of response or follow a specified task. Prompts help prime the AI model to know what and how it should respond.

    Examples of prompts include “Write me a short story about turtles and rabbits”, “Explain the moon landing to me as if I were 10 years old”, or “Help me translate my feedback into Dutch.”

    Because prompts contain the cues and guidelines that help create a feature, they can also be considered intellectual property. Although we cannot divulge the exact prompt, here are some of the elements included in the prompts we use for Acai Coach features (a hypothetical sketch of how such elements might be combined follows this list):

    • Respond in a helpful and coachable way.

    • Ensure that the feedback addresses the given criteria.

    • Include the feedback and context (such as the criteria) in a specific format.

    • Compliment and encourage students to give better feedback.

    • Do not respond to gibberish, random text, or things that appear unrelated to giving feedback to a peer.

    • Offer areas of improvement.
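
    To make the list above concrete, here is a hypothetical sketch of how such instruction elements and context could be assembled into a prompt. The wording is invented for illustration and is not the actual FeedbackFruits prompt.

```python
# Hypothetical prompt assembly -- the instruction text is invented, not the real prompt.
SYSTEM_INSTRUCTIONS = """\
You are a supportive feedback coach.
- Respond in a helpful and coachable way.
- Check that the feedback addresses the given criteria.
- Compliment the student and encourage them to give better feedback.
- Offer concrete areas of improvement.
- Do not respond to gibberish, random text, or anything unrelated to peer feedback.
"""


def build_messages(criteria: str, student_feedback: str) -> list[dict]:
    """Combine the custom instructions with the feedback and its context in a fixed format."""
    user_content = f"Criteria:\n{criteria}\n\nStudent feedback:\n{student_feedback}"
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": user_content},
    ]
```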

Human agency and oversight

  1. Can I choose to not use Acai Coach features?

    Whether you are a teacher or a student, you can choose whether or not to use the Acai Coach features. As a student, even after a teacher has enabled the Feedback Coach, Reflection Coach, or Discussion Coach for you, you can disable it in your preferences.

    For more information, please consult this article.

Fairness

  1. Do Acai Coach features work for languages other than English?

    Yes. The Feedback Coach, Reflection Coach and Discussion Coach currently provide guidance in English, Dutch, French, Spanish and Icelandic.

    The language of the feedback depends on the student's settings in FeedbackFruits.

    Please note that the quality of AI-generated feedback may vary across languages. This is because the training data for advanced language models is largely drawn from public sources that are primarily in English, and the quantity and quality of available data in other languages may differ significantly.

Relevant Azure OpenAI Service Documents

For more information about how data is processed by Azure OpenAI Service, please consult the following resources:
