Introduction
What is the FeedbackFruits AI Practice Activity?
AI Practice is a dedicated learning activity within FeedbackFruits designed to strengthen students’ AI literacy. It provides students with the opportunity to interact with a large language model (LLM) directly inside the platform. Unlike informal or untracked use of AI tools, this activity makes the conversation transparent: the full exchange is visible to the instructor once submitted.
The activity is structured not only to encourage experimentation with AI but also to promote critical reflection. Students can review their own interactions, provide feedback to peers, and consider the broader implications of working with AI. Instructors may also guide the process by providing feedback, helping students refine their prompting strategies and deepen their engagement. The central focus is the dialogue itself, rather than a polished outcome.
Our conversational AI is powered by Azure OpenAI Service. This allows us to deliver high-quality model outputs while relying on enterprise-grade security, compliance measures, and in-region hosting and availability.
Is Data Within AI Practice Used to Train LLMs?
No. Conversations within AI Practice are processed by Azure OpenAI Service to generate suggestions but are never used to train, retrain, or fine-tune any model—whether owned by Microsoft or FeedbackFruits. We use these models in a plug-and-play way, with only custom instructions applied for educational purposes.
Functionality
The AI Practice activity enables students to conduct a text-based conversation with an LLM directly inside FeedbackFruits. Before starting, instructors can set clear expectations and learning goals that are shown to students when they enter the activity. From there, students can either initiate a free-form conversation or upload their own work as context for the dialogue.
The system requires at least two student messages before submission is allowed. Once the conversation feels complete, students can submit it to the activity. Until the review phase begins (if the configured activity includes one) or the deadline passes, students may revisit the conversation, continue it, or resubmit updated versions.
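To make the two-message rule concrete, here is a minimal sketch, assuming a conversation stored as a list of (author, text) pairs. The `can_submit` helper is hypothetical and invented for this example; it is not FeedbackFruits' actual code.

```python
Message = tuple[str, str]  # (author, text); author is "student" or "assistant"

def can_submit(messages: list[Message]) -> bool:
    """Submission unlocks only once the student has sent at least two messages."""
    return sum(1 for author, _ in messages if author == "student") >= 2

convo: list[Message] = [("student", "Hi"), ("assistant", "Hello! What shall we work on?")]
assert not can_submit(convo)   # one student message: not yet submittable
convo.append(("student", "Can you quiz me on photosynthesis?"))
assert can_submit(convo)       # two student messages: submission allowed
```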
For instructors, the activity can be extended with peer review, instructor feedback, reflection steps, or grading modules. In all cases, the primary focus of review is the conversation itself.
User Impact
We recognize the growing importance of AI literacy and the varying levels of experience students bring to it, within academia and beyond. Because conversational AI is increasingly present in both academic and professional contexts, our goal is to provide a transparent, supported environment in which students can learn.
The LLM in use does not provide built-in scaffolding for learning, as the purpose is to reflect the reality of how such tools are typically encountered. Instead, the activity supports learning by structuring feedback opportunities and making the full conversation visible to both students and instructors.
For instructors who are newer to teaching with AI, the activity includes pre-configured instructions, example prompts, and an assessment rubric, all of which can be customized as needed. We also remain attentive to emerging research to continually refine and improve the activity.
Limitations and Risks
As with any AI system, there are important limitations to be aware of:
- Bias in responses, stemming from patterns in training data
- Hallucinations, where the model produces inaccurate but plausible statements
- Reduced reliability in longer conversations
- Visibility of the complete conversation upon submission, without access to earlier edited versions (depending on the settings the teacher chooses for the assignment)
- Risks of sharing personally identifiable information in uploaded attachments (although this data is never used for retraining)
At this stage, only text-based interactions are supported. The models cannot process images or videos, even if included in an attachment.
Finally, the accuracy and quality of responses vary depending on the model used and the nature of the task. We do not provide performance benchmarks, and we encourage instructors to consult available research for task-specific results.
Correction Options
AI Practice is designed to mirror real-world interactions with AI. Students may edit or delete earlier parts of their conversation if they are not satisfied with the responses. This action resets the conversation from that point forward, giving students control over the direction of the dialogue.
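To make the reset behavior concrete, here is a minimal sketch, assuming the conversation is stored as an ordered list of messages. The `reset_from` helper is a hypothetical illustration, not FeedbackFruits' implementation.

```python
def reset_from(messages: list[dict], index: int, new_text: str) -> list[dict]:
    """Replace the message at `index` with an edited version and drop
    everything after it, so the dialogue continues from that point."""
    edited = {**messages[index], "content": new_text}
    return messages[:index] + [edited]

conversation = [
    {"role": "user", "content": "Summarize my essay."},
    {"role": "assistant", "content": "Here is a summary..."},
    {"role": "user", "content": "Now translate it."},
    {"role": "assistant", "content": "Vertaling..."},
]

# Editing the first message discards the three turns that followed it.
conversation = reset_from(conversation, 0, "Critique my essay instead.")
assert len(conversation) == 1
```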
As with other FeedbackFruits activities, students may also delete their submission until the review phase begins or the deadline has passed.
Transparency, Privacy, and Data Governance
Data processing in AI Practice follows the Data Processing Agreement established between your institution and FeedbackFruits. Conversations are processed in four stages (an illustrative sketch follows the list):
- Conversation data, including any attachments, is sent to Azure OpenAI Service for tokenization, the process by which your data is prepared as input for an LLM.
- The model generates a response based on FeedbackFruits’ custom instructions and the tokenized input.
- Input and output are reviewed by Azure’s Content Moderation and Abuse Monitoring systems to ensure safety and compliance. In rare cases of flagged misuse, Microsoft may retain input and output data for up to 30 days.
- Responses are returned to FeedbackFruits in JSON format through Azure's API and stored securely by FeedbackFruits in line with our Data and Privacy Policy to enable submission, resubmission, and review.
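For illustration, stages 1, 2, and 4 resemble a standard request to Azure OpenAI's chat completions API. The sketch below uses the official openai Python SDK; the endpoint, API version, deployment name, and system instructions are placeholders for the example, and FeedbackFruits' actual integration may differ.

```python
from openai import AzureOpenAI

# Placeholder credentials and resource names: substitute your own.
client = AzureOpenAI(
    api_key="<your-key>",
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com",  # in-region resource
)

response = client.chat.completions.create(
    # "model" names an Azure deployment, not a raw model ID.
    model="<deployment-name>",
    messages=[
        # Stand-in for FeedbackFruits' custom educational instructions.
        {"role": "system", "content": "You are a practice partner helping a student explore a topic."},
        # The student's conversation turn (tokenized service-side before inference).
        {"role": "user", "content": "Help me translate my feedback into Dutch."},
    ],
)

# The service returns JSON; the SDK parses it into typed objects.
print(response.choices[0].message.content)
```

In this setup, pointing the client at a resource hosted in your own region is what supports the data residency guarantee described below.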
To meet data residency requirements, all processing occurs within the region where FeedbackFruits is used. Input is never used to train Azure OpenAI Service models or FeedbackFruits systems. To read more about how Azure processes and uses your data, please visit Azure OpenAI Service’s page on Data, Privacy, and Security.
If you have specific questions about how your institution's data is processed within the scope of FeedbackFruits as a whole, please contact your partner success manager at FeedbackFruits.
Content Moderation and Abuse Monitoring
Azure OpenAI Service includes two layers of oversight. The Content Moderation system uses models to detect potentially harmful or inappropriate content, while the Abuse Monitoring system checks for misuse of the service under Microsoft’s terms.
The Azure OpenAI Service does not store any metadata regarding the use of the Content Moderation system. However, in the rare event of suspected abuse caught by the Abuse Monitoring system, authorized Microsoft staff may review the data and follow up with FeedbackFruits.
For more information, see Azure’s documentation on content filtering and abuse monitoring.
Prompts in AI Practice
A prompt is a set of instructions that guide the model’s response. Prompts can range from simple requests (“Explain the moon landing as if I were 10 years old”) to more specific tasks (“Help me translate my feedback into Dutch”).
In AI Practice, sample prompts are provided to help students get started. These can be used as-is, adapted to the student’s needs, or disabled entirely by instructors.
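As a sketch of how a sample prompt might seed a dialogue, a selected sample simply becomes the student's first turn, optionally edited first. The structure below is an assumption for illustration, not FeedbackFruits' data model.

```python
SAMPLE_PROMPTS = [
    "Explain the moon landing as if I were 10 years old",
    "Help me translate my feedback into Dutch",
]

def first_turn(sample: str, student_edit: str | None = None) -> dict:
    """Use the sample as-is, or the student's adapted version if provided."""
    return {"role": "user", "content": student_edit or sample}

# Used as-is:
print(first_turn(SAMPLE_PROMPTS[0]))
# Adapted by the student:
print(first_turn(SAMPLE_PROMPTS[1], "Help me translate my feedback into French"))
```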
Human Agency and Oversight
Can I choose not to use this activity?
Yes. AI Practice is an optional activity within the FeedbackFruits Learning Design System. Instructors are free to use it or not, depending on their course needs.
Can I contact FeedbackFruits for more information?
Yes. FeedbackFruits is committed to building ethical, transparent, and education-first AI. Educators can reach out to their pilot/partner manager for further guidance or to raise concerns.
Fairness
At present, AI Practice is optimized for English. While the underlying models can process multiple languages, support is currently limited. We are exploring ways to broaden language availability in future updates.
Relevant Azure OpenAI Resources
For more detail on Azure OpenAI Service, we recommend Microsoft’s official documentation, including the pages on Data, Privacy, and Security and on content filtering and abuse monitoring referenced above.