Automated Feedback | Data privacy and ethics policy

To ensure we’re doing the right thing while pushing ed-tech innovation, we go beyond standard due diligence in privacy and ethics. We’re trying to set a new standard that encourages the entire ed-tech market to step up its game. Let us show you by answering some common questions.

What happens to my privacy during AI processing?

By default, the AI does not have access to student data. It is granted access only when required to give automated feedback, and that access is removed immediately when Automated Feedback is disabled. No personal information that directly identifies students is used by the system. We limit the AI to accessing only the student work on which feedback should be given, plus generic metadata about the assignment.
Student work is discarded from the AI's memory as soon as processing is done. We may use certain student work to train the AI to get better at giving feedback. This happens with the utmost discretion, adhering to our Service Level Agreement with your institution. In these cases, our Data Officer first anonymizes the work (e.g., removing any student and teacher names, the university name, etc.) before any further processing occurs. After this, the file is viewed by at most one annotator (an employee of FeedbackFruits), who marks relevant sections for the AI to learn to recognize.
Of course, we always comply with requests for full data removal, as we are also legally required to do in Europe.


What happens if the AI makes a wrong decision?

Although we do our best to provide a stable and mature service, any AI will make mistakes. We take this reality seriously and appreciate that in education the room for mistakes is small. However, we strongly feel that the benefit to be gained from the use of AI in education is too valuable not to pursue. Therefore, we take every precaution to minimize the impact of any mistakes we make.
First, we only use AI technologies in the formative process (i.e., the learning process itself); at this time, we do not use AI in the summative process (i.e., anything related to grading). This prevents AI mistakes from having severe consequences for the student.
Second, whenever the AI draws a conclusion (e.g., a suggestion or compliment), we make sure students are aware that it was produced by an AI, and they are always able to mark that conclusion as incorrect.


Who should I contact when I notice bad automated feedback?

You’re always welcome to reach out to us via the support button in the bottom left corner. Our support team will put you in touch with the right person, who understands the problem you’ve encountered and will make sure it’s fixed in the future.

To ensure we also spot problems without users reaching out to us, we have set up active monitoring. This shows us how often each check fails, based on the rate at which students object to its conclusions.

Note that this is an evolving feature: checks will be available at various stages of technical maturity. Some are mature and rarely make mistakes; others are very experimental and not very reliable. We always try to communicate as clearly as possible how reliable each check is, so you know what to expect and can decide whether you want to use it in your class.


This concludes the Automated Feedback data privacy and ethics policy article.
If you have any questions or experience a technical issue, please contact our friendly support team by clicking on the blue chat button. (Note: support is available 24 hours on weekdays and unavailable on weekends.)



