Security and Data Protection
In the new world of Artificial Intelligence (AI), both parents and schools are asking about security and data protection. This page aims to answer your questions, but if you have additional queries, please reach out to connect@marking.ai.
Is Marking.ai compliant with data protection regulations?
If you use ChatGPT or another consumer-facing AI tool, your data is not protected. In most cases, the AI company will use your data to train and improve its models.
Marking.ai is POPIA-compliant and has a registered Information Officer to ensure that all student data is well protected. This also means:
1. We do not share any student data (or any other user data) with third parties
2. We don't use any individual personal student data to train our models
3. The data for each school is kept separate
4. Administrators, educators, parents and students can access their assessment data at any time
5. We can export your school's data at any time upon request
Do you provide access to students under the age of 13?
The minimum age for using ChatGPT is 13 years old. While this restriction is important for protection, it also limits a child's ability to develop strong AI literacy early on.
Marking.ai has made the decision to make its platform available to users under the age of 13, as long as the school has a waiver in place that parents can sign. When we onboard your school, we will assist you to ensure that your students' data is fully protected and that consent is in place for every user.
How do you control the context of the AI and avoid hallucinations?
When using an 'off-the-shelf', consumer-facing AI tool, both the educator and the student could receive any manner of responses from the AI. The model is not specifically trained to stay on topic, so even if you're engaging with it about a particular subject, there's no guarantee that it will stay on the right track.
Marking.ai ensures that every output, whether feedback or otherwise, is tailored specifically to the information it has been given, so the user receives a specialised experience. Remember, the educator always has the final say: they can edit the mark and the feedback if they don't agree with the output the AI has given.
1. The feedback provided to students is designed not to simply give away the answer, but rather to guide them towards an outcome.
2. The feedback to the student and the AI's justification for the mark are highly specialised, aiming to remediate the student effectively and set them up for success.
Does Marking.ai detect AI use by students?
Unfortunately, AI detection tools are not that advanced and are struggling to keep up with the pace at which AI is moving. Although there are some 'proxy indicators' that can suggest a student might have used AI in their answer, it is still difficult to tell. Marking.ai aims to provide an optimal marking and feedback experience for the educator and the student, and as such has not developed and does not use an AI detection tool.
If you're concerned about your students using AI to produce their answers, we are here to guide you on best practices that can reduce this. We can also assist your school in exploring a range of AI or plagiarism detection tools that can be used in conjunction with Marking.ai.
Is Marking.ai going to take teachers' jobs?
At Marking.ai, we believe in the incredible value that AI can bring to the classroom. You've probably already witnessed the ability of AI to 'multiply your impact' as an educator.
Our commitment is to optimise our platform to help you, as the educator, work more efficiently and deliver effective outcomes that will set your students up for success.
However, we're also aware of the pitfalls of AI and the moments when a human hand needs to step in. That's why we always leave you in control to edit any output before sharing it with your students.
Together, by pairing you as the educator with a platform like Marking.ai, you can make the educational impact that you once only dreamed would be possible.