Safety and Governance Policy

Last Updated: December 2025

Compliance Status: Aligned with UK Department for Education (DfE) Generative AI Product Safety Expectations (2025)

1. Our Commitment to Safety

Sigma 1.0 is built on a "Safety-by-Design" framework. We utilise enterprise-grade Google Cloud infrastructure to provide a "Closed Loop" learning environment in which user data is protected and AI outputs are strictly grounded in school-approved curriculum content.

2. The "Safety-First" Infrastructure

We meet the DfE Product Safety Expectations through three specific technical layers:

  • Closed AI Environment: We use Vertex AI to ensure that no student or teacher data is ever used to train public AI models. Your intellectual property remains yours.
  • Real-Time Data Redaction: All conversations pass through a Google Cloud Sensitive Data Protection (Data Loss Prevention, DLP) layer. Personal identifiers (names, email addresses, postal addresses, and government-issued or personal IDs) are redacted in real time before logs are stored.
  • Strictest Filtering: We employ AI safety filters set to the strictest 'Block most' threshold, preventing the generation of harmful, biased, or inappropriate content.
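
The redaction layer described above can be illustrated conceptually. Sigma's production system uses Google Cloud Sensitive Data Protection, which relies on ML-based infoType detectors; the regex patterns and `redact` function below are simplified, hypothetical stand-ins that show only the principle of scrubbing identifiers before anything is logged.

```python
import re

# Simplified stand-ins for the detectors a production DLP service provides.
# Real Sensitive Data Protection uses ML-based infoType detectors, not regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with a labelled placeholder
    before the message is logged or sent onward."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

For example, `redact("Email me at jo@school.uk")` yields `"Email me at [EMAIL REDACTED]"`, so the stored log never contains the identifier.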

3. Human-in-the-Loop Governance

We believe AI should assist teachers, not replace them. Our governance structure includes:

  • Refusal & Signposting: If a student enters an unsafe prompt, Sigma provides an age-appropriate Safe Refusal and signposts the student to speak with a trusted adult or access resources like Childline.
  • Admin Escalation Flow: High-risk triggers automatically alert the school's Designated Safeguarding Lead (DSL) via our Tier 1 Alert System.
  • Teacher Oversight: Every response includes a feedback mechanism allowing teachers to flag inaccuracies, ensuring the AI remains pedagogically sound.
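
The refusal-and-escalation flow above can be sketched in outline. The trigger terms, tiers, and messages below are illustrative placeholders, not Sigma's actual Refusal Message Matrix or Tier 1 Alert System rules (which production systems would implement with far more nuanced classifiers than keyword matching).

```python
from dataclasses import dataclass

# Illustrative risk terms; the real Refusal Message Matrix and DSL
# alerting rules are defined in Sigma's compliance documentation.
HIGH_RISK_TERMS = {"self-harm", "suicide"}   # Tier 1: refuse and alert the DSL
BLOCKED_TERMS = {"violence", "weapons"}      # refuse and signpost only

@dataclass
class Response:
    message: str
    alert_dsl: bool  # whether the Designated Safeguarding Lead is notified

def handle_prompt(prompt: str) -> Response:
    lowered = prompt.lower()
    if any(term in lowered for term in HIGH_RISK_TERMS):
        return Response(
            "I can't help with that, but you're not alone. Please talk to a "
            "trusted adult, or contact Childline on 0800 1111.",
            alert_dsl=True,
        )
    if any(term in lowered for term in BLOCKED_TERMS):
        return Response(
            "I can't help with that. If something is worrying you, please "
            "speak to a trusted adult.",
            alert_dsl=False,
        )
    return Response("(normal tutoring response)", alert_dsl=False)
```

The key design point is that refusal and escalation are separate decisions: every unsafe prompt gets a safe, age-appropriate refusal, while only high-risk triggers also notify the DSL.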

4. Data Privacy & GDPR

  • Lawful Basis: We assist schools in identifying Public Task or Legitimate Interest as the lawful basis for processing under UK GDPR.
  • Data Residency: All data is processed and stored within the Google Cloud europe-west2 (London, UK) region.
  • DPIA Support: We provide a comprehensive DPIA Pack and Technical Fact Sheet to all partner schools to simplify the local data protection impact assessment process.

5. Compliance Documentation Downloads

To support transparency, the following documents are available to authorised AI Leads, Data Protection Officers, educational leaders and administrators at partner schools, academies and colleges:

  • [Download] Sigma 1.0 Technical Fact Sheet
  • [Download] Refusal Message Matrix
  • [Download] Staff Training Quick Reference Guide
  • [Download] Sample DPIA Assessment

Compliance Glossary

DLP (Sensitive Data Protection)

  • What it is: Data Loss Prevention, the automated detection and masking of sensitive data (now branded Sensitive Data Protection in Google Cloud).
  • How Sigma uses it: We use Google Cloud’s automated DLP engine to scan every interaction in real-time. It identifies and redacts Personally Identifiable Information (PII) such as names, addresses, and phone numbers before the data is processed, ensuring a "Privacy-First" environment.

RAG (Retrieval-Augmented Generation)

  • What it is: A method of "anchoring" AI responses to trusted facts.
  • How Sigma uses it: Instead of letting the AI "guess," we use RAG to link Sigma to a private, school-approved Knowledge Base (national curriculum specifications and education resources). This ensures the AI provides pedagogically accurate answers grounded in the UK National Curriculum.
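
The grounding idea behind RAG can be shown with a minimal sketch. The toy in-memory knowledge base and keyword-overlap retrieval below are illustrative assumptions standing in for Sigma's school-approved corpus; production RAG pipelines use embedding-based vector search rather than word overlap.

```python
# A toy knowledge base standing in for a school-approved curriculum corpus.
KNOWLEDGE_BASE = [
    "KS2 Science: photosynthesis is the process by which plants make food using light.",
    "KS3 Maths: the area of a triangle is half the base times the height.",
]

def retrieve(question: str) -> str:
    """Pick the passage sharing the most words with the question
    (a crude stand-in for vector similarity search)."""
    q_words = set(question.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda p: len(q_words & set(p.lower().split())))

def build_prompt(question: str) -> str:
    """Anchor the model: instruct it to answer only from the
    retrieved curriculum passage, not from open-ended recall."""
    context = retrieve(question)
    return (
        f"Answer using ONLY this curriculum extract:\n{context}\n"
        f"Question: {question}"
    )
```

The point of the pattern is the final prompt: the model is told to answer from the retrieved curriculum extract, which is what keeps responses anchored to approved material.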

HITL (Human-in-the-Loop)

  • What it is: A governance model where human expertise oversees AI logic.
  • How Sigma uses it: Sigma’s generative AI conversational routines are not left to chance. They are engineered and regularly audited by qualified UK teachers to ensure the AI’s guidance remains helpful, unbiased, and academically sound.

Closed Generative AI Environment

  • What it is: A secure, private cloud boundary.
  • How Sigma uses it: Unlike open, public AI tools (such as the free versions of ChatGPT or Gemini), Sigma operates in a siloed environment. This means no data is shared publicly, and no student data is ever used to train the underlying global AI models.

PII (Personally Identifiable Information)

  • What it is: Any data that could be used to identify a specific individual.
  • How Sigma uses it: Our infrastructure is designed to be "PII-Neutral." We use unique, anonymised IDs for authentication via Outseta, ensuring that even in the event of a log review, no user’s legal identity is exposed.
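
The "PII-Neutral" idea of replacing legal identities with stable anonymised IDs can be sketched as follows. The keyed-hash (HMAC) approach and the example key below are illustrative assumptions, not details of Sigma's actual Outseta integration.

```python
import hashlib
import hmac

# Illustrative server-side secret; in practice a key like this
# would be stored in a key vault, never in source code.
PSEUDONYM_KEY = b"example-secret-key"

def anonymised_id(account_email: str) -> str:
    """Derive a stable pseudonymous ID so logs never contain the user's
    legal identity. Keyed hashing (HMAC) prevents anyone without the key
    from guessing emails back from leaked IDs, unlike a plain hash."""
    digest = hmac.new(PSEUDONYM_KEY, account_email.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

Because the derivation is deterministic, the same account always maps to the same ID (so usage can be analysed per user), while a log review shows only opaque hexadecimal identifiers.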

Instructional Engineering

  • What it is: The process of defining the "Rules of Engagement" for an AI.
  • How Sigma uses it: This is our proprietary IP. We write the complex logic and guardrails that turn a general AI model into a specialised Education Assistant and Learning Companion whose responses and interactions are grounded in an educational and learning context.

Technical Terms

The "Safety First" Terms

  • Closed-Loop System: Think of this as a "private digital classroom." Unlike public AI (like the free version of ChatGPT or Gemini), information in a closed loop stays inside your school's secure boundary. It doesn't "leak" out to the rest of the internet.
  • Data Redaction (Auto-Scribble): An automated safety layer that acts like a black marker pen. If a student accidentally types a name, phone number, or address, the system spots it and instantly scrubs (redacts) it before the AI even logs the conversation.
  • Safety Filters: These are digital "guardrails" that scan every message. If a student asks something inappropriate or unsafe, the filter blocks the answer and provides a helpful, age-appropriate redirection instead.

Understanding the "Brain" (AI Logic)

  • Generative AI: Technology that can "create" new text, like an essay or a lesson plan, based on instructions. It’s like a very advanced digital assistant that can write, summarise, and explain complex ideas.
  • Grounded AI (The Digital Textbook): Standard AI "guesses" answers based on the whole internet. Sigma is "grounded," meaning its answers are anchored only in the UK National Curriculum. It doesn't make things up (hallucinate) because it has specific educational frameworks, resources and specifications to follow.
  • Custom AI Logic Engines: These are the "lesson plans" for the AI. Instead of just "being a chatbot," the AI uses specific rules created by teachers to make sure it talks like a tutor, not just a computer.

Privacy & "The Human Touch"

  • Non-Training Clause: A legal and technical promise that we never use your child’s work or your teacher’s plans to "teach" the AI or make Google’s products better. Your data is for your use only.
  • Human-in-the-Loop (HITL): This means the AI is never left unsupervised. Real teachers and safety experts review "scrubbed" (anonymous) chat histories to make sure the AI is being helpful, kind, and accurate.
  • System-Level Metadata: This is "anonymous feedback" that tells us if the system is healthy. We might see that "100 students used the Math tool today," but we never see who those students are or what exactly they said.

Security & Access

  • Gated Access: Just like a school gate, you need a specific, verified "key" (login) to get in. This ensures that only authorised students, parents, and teachers can interact with Sigma.
  • Enterprise-Grade Protection: This means we use the same high-level security technology that large institutions use to keep information safe from hackers.