General
- What is Acai Grading Assistant?
Acai Grading Assistant helps instructors streamline their grading workflow by providing AI-driven feedback suggestions on student contributions. Integrated within FeedbackFruits Assignment Review, the Assistant analyzes student contributions, such as submissions, and suggests feedback that aligns with the instructor’s rubric criteria. In this way, the feature helps educators provide high-quality, more consistent feedback at scale.
The educator is always in control: suggested ratings and feedback take effect only when educators adopt them themselves. This keeps a ‘human in the loop’: the AI never provides ratings directly to students on their work.
Grading Assistant uses advanced Large Language Models (LLMs) provided by Azure OpenAI Service to analyze students’ inputs and suggest feedback based on the grading rubric. We chose Azure’s OpenAI Service to deliver this feature because of its enterprise-level security, compliance, and in-region availability.
- Is your data sent to OpenAI to train their language models?
No. Student input is only processed by the language models hosted on Azure to generate the suggestions.
The input is never used to train, retrain, or improve any of the Azure OpenAI Service’s models, nor is it ever used to train FeedbackFruits AI models. We use these models as a “plug-and-play” service: we do not retrain or fine-tune them for a specific use case beyond providing custom instructions to the model.
User Impact
Acai Grading Assistant enhances educators’ workflows by improving feedback consistency in larger activities. By offering AI-generated suggestions, it helps educators identify key areas of improvement in student work, ensuring that students receive timely, high-quality feedback. Instead of replacing educators, Acai serves as a powerful guide, enabling them to focus more on what we believe educators do best: helping unlock the potential of their students.
Functionality
Acai Grading Assistant works in Assignment Review with two key components, assuming the educator has access to the feature: a student document submission and at least one Acai-assisted rubric criterion. When a student uploads a submission as part of their assignment, the text from the submission is extracted and prepared for Acai to analyze. For each Acai-assisted rubric criterion, Acai reads the submission together with the criterion and offers suggested feedback in three forms:
- A suggested rating based on the rubric. This is always one of the options present on the rubric and is marked with a visible “Suggestion” chip.
- An explanation of how Acai arrived at this rating option. Clicking the info icon within the “Suggestion” chip reveals the explanation, formatted as 3-4 bullet points that summarize Acai’s rationale.
- A suggested review, written from the perspective of an educator. It is available in the Acai Feedback Coach component when writing a review for the student, and is meant to serve as a guide that can be adopted into the educator’s review text field.
It is worth noting that Acai does not produce a rating or review; rather, it produces a suggestion. The educator can adopt, augment, or disregard Acai’s suggestions.
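As a rough illustration, the three forms of suggested feedback described above could be modeled as the following data shape. The type and field names here are hypothetical, chosen for this sketch only; they are not the actual FeedbackFruits data model or API.

```python
from dataclasses import dataclass

@dataclass
class AcaiSuggestion:
    """Illustrative shape of one suggestion; all names are hypothetical."""
    suggested_rating: str    # always one of the rubric's own options
    rationale: list[str]     # 3-4 bullet points shown via the info icon
    suggested_review: str    # draft review text for the Feedback Coach

# Example instance an educator might see for one rubric criterion:
suggestion = AcaiSuggestion(
    suggested_rating="Good",
    rationale=["Clear thesis", "Strong evidence", "Minor citation issues"],
    suggested_review="Your argument is well structured; tighten the citations.",
)
```

The key property is that `suggested_rating` is constrained to the rubric's existing options; Acai never invents a new rating level.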
Risks and Limitations
We are aware that not all AI products are created equal and that AI development is still in its early phases, both for FeedbackFruits and the world as a whole. Not all suggestions from Acai will be perfect. We have identified some key limitations, listed below:
- Acai is sensitive to the level of detail in the rubric criterion options. The more detail provided, the higher the probability that Acai will produce a relevant, accurate response.
- Acai currently works best with English rubrics. We are working on making Acai perform better in other languages and will notify users as improvements become available.
- Acai is also sensitive to the structure of the document, performing better on documents with easily extractable text. PDFs or other submission types with hard-to-read or hard-to-parse text may produce inaccurate results.
Upon its initial release, Acai’s performance closely aligns with typical educator evaluations, differing by less than one rubric criterion level on average. We will continue to update users with more accurate data as we continue evaluating its performance.
Correction Options
Acai is not perfect and there will be times when it does not provide accurate suggestions. It’s important to recognize that educators are always in control and can choose to disregard Acai’s suggestions. Acai will never produce a rating or review for a student without an educator’s explicit approval.
For circumstances in which Acai’s suggestions warrant additional feedback, FeedbackFruits is available to help. Three ways educators can provide this feedback include:
- Support Chat within FeedbackFruits
- Reaching out to Pilot/Partner managers
- Emailing acai@feedbackfruits.com
Transparency, Privacy, and Data Governance
- How are data processed and used by Grading Assistant?
Your data is processed in compliance with the Data Processing Agreement concluded between your institution and FeedbackFruits. Specifically, when using Grading Assistant to suggest feedback, your data is processed in four separate stages (see figure below):
- First, your data (rubric text and student contributions) are sent to the Azure OpenAI Service to be prepared for the model; this process is known as tokenization.
- Once your data are tokenized and arrive at the model, the model then creates a generation based on custom instructions (created by FeedbackFruits) and an input (your “tokens”).
- After the model completes its generation, the instructions, input, and the generation are all sent through the Azure OpenAI Service’s Content Moderation System and Abuse Monitoring System to ensure both the input and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you.
- In the Abuse Monitoring System, Azure OpenAI Service may store your input data as well as the model generation for up to 30 days.
- Finally, the model’s generation is returned to FeedbackFruits in a standard JSON format from Azure through their API. This contains only the completion data and some relevant metadata, including the reason for stopping and whether there was an error.
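As a rough sketch of that final stage, the snippet below parses a payload in the public Azure OpenAI chat-completions response shape. The payload itself is invented for illustration; the exact fields FeedbackFruits receives are an assumption here, not documented behavior.

```python
import json

# Illustrative payload only, modeled on the public Azure OpenAI
# chat-completions response shape; not FeedbackFruits' actual data.
raw = '''{
  "choices": [
    {"message": {"content": "Suggested rating: Good"},
     "finish_reason": "stop"}
  ]
}'''

response = json.loads(raw)
choice = response["choices"][0]
generation = choice["message"]["content"]  # the model's completion text
stop_reason = choice["finish_reason"]      # metadata: why generation ended
```

Note that only the completion and such metadata travel back to FeedbackFruits; the tokenized input stays within the Azure OpenAI Service pipeline described above.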
In addition, all relevant actions performed by or within the Azure OpenAI Service take place in the region in which you are using FeedbackFruits, to comply with data residency requirements and improve latency. Your input is never used to train, retrain, or improve any of the Azure OpenAI Service’s models. Currently, we also do not use any input to fine-tune the instance of any model we use. To read more about how Azure processes and uses your data, please visit Azure OpenAI Service’s page on Data, privacy, and security.
- What are the Content Moderation System and Abuse Monitoring System?
FeedbackFruits utilizes Azure’s alerting system to ensure that both the input fed into the model and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you. The alerting system consists of the Content Moderation System and the Abuse Monitoring System.
The Content Moderation System is a proprietary collection of models (known as an “ensemble”) that are trained to detect potential abuse, misuse, or harmful content generation within the Azure OpenAI Service. The Azure OpenAI Service does not store any metadata regarding the use of the Content Moderation System. For more information, please visit Azure OpenAI’s content filtering system page.
The Abuse Monitoring System monitors for any abuse or misuse of the service that could violate Azure’s applicable product terms. Only specific authorized Microsoft employees may review your data in the event that certain content has triggered potential abuse of the system. In the event of misuse of the system, an authorized employee will reach out directly to FeedbackFruits to resolve the issue and prevent further abuse. Azure OpenAI Service will store your input data as well as the model generation for up to 30 days to monitor if there is use of the Azure OpenAI Service that would directly violate Microsoft’s applicable product terms.
For more information, please visit Azure OpenAI Service’s data privacy page and the Code of Conduct.
- What is included in your prompt?
A prompt is a set of specific instructions given to a large language model to generate a particular type of response or follow a specified task. Prompts help prime the AI model to know what and how it should respond.
Examples of prompts include “Write me a short story about turtles and rabbits”, “Explain the moon landing to me as if I were 10 years old”, or “Help me translate my feedback into Dutch.”
For Grading Assistant, prompts are designed to analyze student contributions and provide AI-generated rating suggestions and feedback aligned with the instructor’s rubric. Because prompts contain the cues and guidelines that shape the feature, they may constitute intellectual property owned by FeedbackFruits. While we cannot share the exact prompts, here are some key elements included:
- Analyze student responses based on rubric criteria
- Suggest a rating that aligns with the rubric scale
- Provide constructive feedback tailored to the submission
- Ensure feedback is clear, pedagogically sound, and actionable
- Format suggestions in a structured, easy-to-review manner
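As a purely hypothetical illustration of how elements like these might combine into a single instruction for the model, consider the sketch below. The wording, function name, and parameters are invented for this example; the real FeedbackFruits prompts are proprietary and will differ.

```python
# Hypothetical prompt assembly for illustration only; the actual
# prompts are proprietary FeedbackFruits IP and are not reproduced here.
def build_prompt(criterion: str, scale: list[str], submission: str) -> str:
    return (
        "Analyze the student submission against the rubric criterion below.\n"
        f"Criterion: {criterion}\n"
        f"Suggest exactly one rating from this scale: {', '.join(scale)}.\n"
        "Then give clear, constructive, actionable feedback, formatted as "
        "3-4 easy-to-review bullet points.\n"
        f"Submission:\n{submission}"
    )

prompt = build_prompt(
    criterion="Clarity of argument",
    scale=["Needs work", "Good", "Excellent"],
    submission="The essay argues that...",
)
```

Constraining the model to rate from the given scale is what guarantees that a suggested rating is always one of the options already present on the rubric.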
Human agency and oversight
- Can I choose to not use Grading Assistant?
Yes. Grading Assistant is an optional feature: an institution must opt in to use the functionality, and individual instructors can still choose not to use it. When using the feature as an instructor, you are always in control. In other words, suggestions will not perform any actions on your behalf, such as rating or reviewing a student, without an explicit interaction (clicking the corresponding button). You can adopt or disregard any Grading Assistant suggestion as you please, and when adopting a suggestion, you can modify or edit it.
- Can I reach out to FeedbackFruits to get more information, seek clarity, or raise concerns about Acai or the Grading Assistant?
Yes. We at FeedbackFruits are committed to producing high-quality, ethical, and pedagogy-first AI. As part of that commitment, we are available to discuss any concerns or questions regarding Acai functionality in more depth. Reach out to your pilot/partner manager or acai@feedbackfruits.com for more information.
Fairness
- Does Grading Assistant work for languages other than English?
Not yet. While LLMs can read multiple languages, we have, at this point, configured Grading Assistant to respond only in English. We are working on supporting other languages within the feature and will update this page when we do.
Relevant Azure OpenAI Service Documents
For more information about how data is processed by Azure OpenAI Service, please consult the following resources: