General
What is Acai Rubric Assistant?
Acai Rubric Assistant helps instructors streamline their rubric-creation workflow by providing AI-driven feedback suggestions when creating or editing a rubric. The Assistant analyzes the rubric criteria and offers suggestions to improve their quality. In this way, the feature helps educators create higher-quality rubrics that can then be used with the Acai Grading Assistant.
The educator is always in control: suggested feedback has to be explicitly adopted. This keeps a ‘human in the loop’: AI never directly creates or publishes rubrics without the educator’s consent.
Rubric Assistant uses advanced Large Language Models (LLMs) provided by the Azure OpenAI Service to analyze the rubric and suggest relevant feedback. We chose Azure’s OpenAI Service to deliver this feature because of its enterprise-level security, compliance, and in-region availability.
Is your data sent to OpenAI to train their language models?
No. The rubric data is only processed by the language models hosted on Azure to generate the suggestions.
The input is never used to train, retrain, or improve any of the Azure OpenAI Service’s models. It is also never used to train FeedbackFruits AI models. We use these models as a “plug-and-play” service: we do not retrain or fine-tune them for a specific use case beyond providing custom instructions to the model.
User Impact
Acai Rubric Assistant enhances educators’ workflows by helping streamline the rubric creation and editing process. By offering AI-generated suggestions, it helps educators identify key areas of improvement in their rubrics. Instead of replacing educators, Acai serves as a powerful guide, enabling them to focus on what educators do best.
Functionality
Acai Rubric Assistant works in any area where rubrics can be edited. Near the bottom of the screen, there is a new visual Acai component that contains a “Get suggestions” button. When clicked, Acai analyzes the rubric criteria and offers suggestions. At first, the suggestions are read-only, and the user must accept or reject them on the criterion’s row. If a user accepts the suggestions, the criterion text is updated and becomes editable again; if relevant, the criterion is now eligible to be used with the Acai Grading Assistant. If a user rejects the suggestions, the criterion text returns to its pre-suggestion state and becomes editable again. Suggestions can be regenerated as many times as needed until they meet the user’s needs.
It is worth noting that Acai only produces suggestions. The user may adopt, augment, or disregard them.
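The accept/reject flow described above can be sketched as a simple state model. This is a hypothetical illustration of the workflow, not FeedbackFruits’ actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Criterion:
    """A rubric criterion row in the editor (illustrative model only)."""
    text: str
    pending_suggestion: Optional[str] = None
    editable: bool = True

    def receive_suggestion(self, suggestion: str) -> None:
        # While a suggestion is pending, the row is read-only.
        self.pending_suggestion = suggestion
        self.editable = False

    def accept(self) -> None:
        # Adopting the suggestion replaces the text and re-enables editing.
        self.text = self.pending_suggestion
        self.pending_suggestion = None
        self.editable = True

    def reject(self) -> None:
        # Rejecting restores the pre-suggestion state and re-enables editing.
        self.pending_suggestion = None
        self.editable = True

criterion = Criterion(text="Shows understanding")
criterion.receive_suggestion("Demonstrates thorough understanding of key concepts")
criterion.accept()
print(criterion.text)  # the adopted suggestion, now editable again
```

The key property of the flow is that the AI never writes into the rubric directly: the text only changes when `accept()` is explicitly invoked by the user.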
Risks and Limitations
We are aware that not all AI products are created equal and that AI development is still in its early phases, both at FeedbackFruits and in the field as a whole. Not all suggestions from Acai will be perfect. We have identified some key limitations, listed below:
Acai is sensitive to the level of detail in the rubric criterion title and options: the more detail provided, the higher the probability that Acai will return a relevant, accurate response.
Acai currently works best with English rubrics. We are working to improve Acai’s performance in other languages and will notify users when support improves.
We will share performance data with users as we continue to evaluate the feature during its rollout.
Correction Options
Acai is not perfect and there will be times when it does not provide accurate suggestions. It’s important to recognize that educators are always in control and can choose to disregard Acai’s suggestions. Acai will never save or publish a rubric without an educator’s explicit approval.
For circumstances in which Acai’s suggestions warrant additional feedback, FeedbackFruits is available to help. Educators can provide this feedback in three ways:
Support Chat within FeedbackFruits
Reaching out to Pilot/Partner managers
Emailing acai@feedbackfruits.com
Transparency, Privacy, and Data Governance
How is data processed and used by Rubric Assistant?
Your data is processed in compliance with the Data Processing Agreement concluded between your institution and FeedbackFruits. Specifically, when using Rubric Assistant to suggest feedback, your data is processed in four separate stages (see figure below):
1. First, your data (the rubric text) is sent to the Azure OpenAI Service to be prepared for the model; this process is known as tokenization.
2. Once your data is tokenized and arrives at the model, the model creates a generation based on custom instructions (created by FeedbackFruits) and an input (your “tokens”).
3. After the model completes its generation, the instructions, input, and generation are all sent through the Azure OpenAI Service’s Content Moderation System and Abuse Monitoring System to ensure both the input and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you.
   a. In the Abuse Monitoring System, the Azure OpenAI Service may store your input data as well as the model generation for up to 30 days.
4. Finally, the model’s generation is returned to FeedbackFruits from Azure through their API in a standard JSON format. This contains only the completion data and some relevant metadata, including the reason for stopping and whether there was an error.
In addition, all relevant actions performed by or within the Azure OpenAI Service take place in the region in which you use FeedbackFruits, to comply with data-residency requirements and improve latency. Your input is never used to train, retrain, or improve any of the Azure OpenAI Service’s models. Currently, we also do not use any input to fine-tune the instance of any model we use. To read more about how Azure processes and uses your data, please visit Azure OpenAI Service’s page on Data, privacy, and security.
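As an illustration of the final stage, the sketch below parses a payload in the general shape of an Azure OpenAI chat-completions response. The field names (`choices`, `message`, `finish_reason`, `usage`) follow Azure’s public API, but the values and the rubric suggestion shown are invented for demonstration:

```python
import json

# Invented example data in the general shape of an Azure OpenAI
# chat-completions response (not an actual FeedbackFruits payload).
sample_response = json.loads("""
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Suggested criterion: Clearly distinguishes levels of analysis."
      },
      "finish_reason": "stop"
    }
  ],
  "model": "gpt-4o",
  "usage": {"prompt_tokens": 250, "completion_tokens": 18, "total_tokens": 268}
}
""")

choice = sample_response["choices"][0]
suggestion = choice["message"]["content"]  # the completion data
finish_reason = choice["finish_reason"]    # why generation stopped: "stop", "length", "content_filter", ...

print(suggestion)
print(finish_reason)
```

Note that the response carries only the generation and metadata such as the stop reason; the `content_filter` finish reason is how a moderation rejection from stage 3 would surface to the caller.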
What are the Content Moderation System and Abuse Monitoring System?
FeedbackFruits utilizes Azure’s alerting system to ensure that both the input fed into the model and the model’s response are appropriate, do not violate any terms of service, and are not harmful to you. The alerting system consists of the Content Moderation System and the Abuse Monitoring System.
The Content Moderation System is a proprietary collection of models (known as an “ensemble”) that are trained to detect potential abuse, misuse, or harmful content generation within the Azure OpenAI Service. The Azure OpenAI Service does not store any metadata regarding the use of the Content Moderation System. For more information, please visit Azure OpenAI’s content filtering system page.
The Abuse Monitoring System monitors for any abuse or misuse of the service that could violate Azure’s applicable product terms. Only specific authorized Microsoft employees may review your data in the event that certain content has triggered potential abuse of the system. In the event of misuse of the system, an authorized employee will reach out directly to FeedbackFruits to resolve the issue and prevent further abuse. Azure OpenAI Service will store your input data as well as the model generation for up to 30 days to monitor if there is use of the Azure OpenAI Service that would directly violate Microsoft’s applicable product terms.
For more information, please visit Azure OpenAI Service’s data privacy page and the Code of Conduct.
What is included in your prompt?
A prompt is a set of specific instructions given to a large language model to generate a particular type of response or follow a specified task. Prompts help prime the AI model to know what and how it should respond.
Examples of prompts include “Write me a short story about turtles and rabbits”, “Explain the moon landing to me as if I were 10 years old”, or “Help me translate my feedback into Dutch.”
For Rubric Assistant, prompts are designed to analyze rubrics and provide AI-generated suggestions and feedback aligned with the instructor’s rubric. Because prompts contain the cues and guidelines that shape the feature, they may constitute intellectual property owned by FeedbackFruits. While we cannot share the exact prompts, here are some key elements included:
Analyze the existing rubric
Provide constructive feedback tailored to the rubric criterion
Ensure new suggestions are clear and differentiate strongly between each rubric level
Ensure the rubric is human-readable and high quality
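A hypothetical sketch of how such elements might be combined into a prompt for a chat-style LLM. The system-message wording here is invented purely for illustration; the actual FeedbackFruits prompt is proprietary:

```python
# Invented custom instructions standing in for FeedbackFruits' proprietary prompt.
custom_instructions = (
    "You are a rubric-review assistant. Analyze the rubric criterion below and "
    "suggest clearer wording that differentiates strongly between levels."
)

# The instructor's rubric text (example data).
rubric_criterion = "Argument quality: Good / Okay / Poor"

# Chat-style message list: system message carries the vendor-authored guidance,
# user message carries the rubric content to analyze.
messages = [
    {"role": "system", "content": custom_instructions},
    {"role": "user", "content": rubric_criterion},
]

for message in messages:
    print(f'{message["role"]}: {message["content"][:60]}')
```

Separating the fixed instructions (system role) from the instructor’s input (user role) is the standard way to keep the feature’s behavior consistent across every rubric it analyzes.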
Human agency and oversight
Can I choose to not use Rubric Assistant?
Yes. Rubric Assistant is an optional feature: an institution needs to opt in to enable the functionality, and individual instructors can still choose not to use it. When using the feature as an instructor, you are always in control. In other words, the Assistant will not perform any actions on your behalf, such as rating or reviewing a student, without an explicit interaction (clicking the button). You can adopt or disregard any Rubric Assistant suggestions as you please, and when adopting a suggestion, you can modify or edit it.
Can I reach out to FeedbackFruits to get more information, seek clarity, or raise concerns about Acai or the Rubric Assistant?
Yes. We at FeedbackFruits are committed to producing high-quality, ethical, and pedagogy-first AI. As part of our commitments, we are available to discuss in more depth any potential concerns or questions regarding Acai functionality. Reach out to your pilot/partner manager or acai@feedbackfruits.com for more information.
Fairness
Does Rubric Assistant work for languages other than English?
Not yet. While LLMs are able to read multiple languages, we have, at this point, configured Rubric Assistant to respond only in English. We are working on ways to include other languages within the feature and will update this page when we do.
Relevant Azure OpenAI Service Documents
For more information about how data is processed by Azure OpenAI Service, please consult the following resources: