AI Use Policy

Effective Date: January 2026

Compliance Status: Aligned with Department for Education (DfE) guidance:

- Safe and effective use of AI in education settings
- Generative AI Product Safety Expectations (2025)

Sigma is provided as an AI Tutor & Educational Assistant. Sigma is an AI conversational agent intended as a support tool to enhance teaching and learning.

Use of this service is based on three core principles:

  1. Human in the Loop: AI is an assistant, not a replacement for human judgment.
  2. Privacy First: Users must respect the privacy-by-design architecture of the system.
  3. Academic Integrity: Sigma should be used to support the learning process, not to bypass it.

1. Who Can Use Sigma

Age Requirement: Parents/guardians are responsible for creating accounts for users who are below the legal age to consent to the processing of their personal data. Where the child is below that age, parents remain the account owners and are responsible for all access to and use of the account, including supervision of the child's interactions with Sigma.

Authorised Access: Access is granted via a manual verification process. You may not share your login credentials or allow unauthorised individuals to use your authenticated session.

Safe & Effective Use of Sigma AI - The "Dos"

  • Do use Sigma to brainstorm, explore concepts, request explanations of topics, summarise curriculum topics, and generate creative teaching and learning ideas.
  • Do verify all AI-generated outputs, fact-check sources, and adapt outputs before using them in a professional or academic context.
  • Do report any biased, inaccurate, or concerning responses immediately using the Sigma Response Feedback button.
  • Do treat Sigma as a learning companion and education assistant. Sigma can act as a co-pilot, but you are the pilot in command, responsible for the final output.

2. Prohibited Activities

Users are strictly prohibited from:

Circumvention: Attempting to "jailbreak," bypass safety filters, or force the AI to generate harmful, illegal, discriminatory, biased, or sexually explicit content.

Identity Injection: Manually typing sensitive personal data (e.g., home addresses, private medical info, or full student records) into the chat interface.

Automated Scraping: Using bots to scrape the Sigma data stores or curriculum responses.

Misrepresentation: Presenting AI-generated content as original human work without appropriate disclosure, referencing, or verification.

DO NOT

  • Enter sensitive personal information (PII) such as email or home addresses, personal names, dates of birth, National Insurance numbers, private medical data, or full student ID records into the chat.
  • Use Sigma to generate "high-stakes" content (such as legal or medical advice) or to perform automated grading.
  • Copy and paste AI responses and present them as your own original work for formal assessments (Academic Integrity).
  • Attempt to "jailbreak" the system or force the AI to bypass its safety and educational filters.

3. Data Sensitivity & Personal Data

While Sigma utilises automated Google Cloud Platform (GCP) DLP redaction to protect users, the "Principle of Data Minimisation" remains the user's responsibility. Users should avoid entering any information that could identify a specific minor or vulnerable adult.

4. Ownership and Liability

Output Responsibility: Users are legally responsible for the content they generate and share using Sigma.

Accuracy: Users acknowledge that AI can occasionally produce inaccurate information (hallucinations) and agree to verify all critical facts against the grounded data or reliable sources before sharing or using any AI responses.

5. Educator Specific Code of Conduct (mandatory)

Applies to school staff and education institutions, including education leaders, teachers, tutors, administrative personnel, and other professionals working with young people in education settings.

A. Prohibition of Automated Marking & Grading of Assessments

In compliance with UK GDPR Article 22, educators are strictly prohibited from using Sigma AI for:

  • Grading or marking formal student assessments or examinations.
  • Making "high-stakes" decisions regarding student admissions, disciplinary actions, or academic tiering.
  • Generating reports that impact any real person's legal or academic standing without comprehensive human oversight, review, and intervention where necessary.

B. The "Review & Verify" Mandate

Educators must verify the accuracy, age-appropriateness, and pedagogical relevance of any AI-generated output before:

  1. Displaying it to students in a classroom.
  2. Including it in lesson plans or curriculum materials.
  3. Sending it to parents or external stakeholders.

C. Safeguarding & Reporting Duty

Under KCSIE 2025/26 standards, educators have a statutory duty to:

Flag Inappropriate Content: Use the Sigma Response Feedback button immediately if the AI generates content that appears biased, unsafe, or detrimental to student safeguarding.

Monitor Student Interaction: Ensure that any student use of AI is supervised and aligns with the school's internal Online Safety Policy.

D. Professional Integrity

Educators agree to use Sigma to augment their expertise, not to replace professional judgment. Teachers should use Sigma as a support tool to make their own work better, faster, or more informed (for example, drafting materials, suggesting ideas, or simplifying administrative tasks), but the teacher remains the one thinking, deciding, and taking responsibility.

No Automated Marking: In line with UK GDPR and the EU AI Act, you are strictly prohibited from using Sigma to mark, grade, or rank student work for assessments or examinations.

The Verification Rule: You must review every AI response for accuracy, relevance, and age-appropriateness before using it or sharing it with learners. You are the final authority on pedagogical accuracy.

Safeguarding: You have a statutory duty to report any safeguarding triggers identified during AI interactions. Use the Sigma Response Feedback button to report any AI output concerns.

6. Our Commitment to You

Data Sovereignty: Your conversations are processed in the europe-west2 (London, UK) region and are protected by AES-256 encryption.

No Global Training: We do not use your data to train public AI models.

Verified Grounding: Sigma is grounded in public sector curriculum information and data (Open Government Licence), ensuring responses are educationally relevant and accurate.