Safety and Governance Policy

Last Updated: December 2025

Compliance Status: Aligned with UK Department for Education (DfE) Generative AI Product Safety Expectations (2025)

This policy should be read alongside our AI Transparency Statement and AI Use Policy, which together support the safe and effective use of AI in education for administrative, pedagogic and learning-support purposes.

1. Our Commitment to Safety

Sigma 1.0 is built on a "Safety by Design" framework. We utilise enterprise-grade Google Cloud infrastructure to provide a "Closed Loop" learning environment in which user data is protected and AI outputs are strictly grounded in school-approved curriculum content.

2. The "Safety First" Infrastructure

We meet the Department for Education's Product Safety Expectations through three specific technical layers:

  • Closed AI Environment: We use Vertex AI to ensure that no student or teacher data is ever used to train public AI models. Your intellectual property remains yours.
  • Real-Time Data Redaction: All conversations pass through a Google Cloud Sensitive Data Protection (Data Loss Prevention, DLP) layer. Personally Identifiable Information (PII) such as names, email addresses, postal addresses, government-issued IDs, and financial or banking details is redacted in real time before processing and never appears in Sigma's conversation logs.
  • Strictest Filtering: We employ AI Safety Filters set to the 'Block Most' threshold, preventing the generation of harmful, biased, or inappropriate content. In addition to these filters, we apply education-specific guardrails and negative constraints in our AI system rules.
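As a rough illustration of how a tiered 'Block Most' threshold behaves: the real filtering is handled by Vertex AI's built-in safety settings, and the severity labels, ordering, and function below are illustrative assumptions, not our production code.

```python
# Simplified sketch of a tiered safety threshold. Production filtering
# is performed by Vertex AI's managed safety settings; the labels and
# ordering here are illustrative assumptions only.

SEVERITY_ORDER = ["negligible", "low", "medium", "high"]

def should_block(harm_probability: str, threshold: str = "low") -> bool:
    """Block any output whose rated harm probability meets or exceeds
    the configured threshold. Under a "Block Most" policy the threshold
    sits at "low", so only negligible-risk content passes through."""
    return SEVERITY_ORDER.index(harm_probability) >= SEVERITY_ORDER.index(threshold)
```

The stricter the policy, the lower the threshold sits in the severity ordering; a more permissive "Block Few" policy would move the cut-off towards "high".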

3. Human Oversight (Human in the Loop)

We believe AI should assist teachers, not replace them. Our governance structure includes:

  • Refusal & Signposting: If a student enters an unsafe prompt, Sigma provides an age-appropriate Safe Refusal and signposts the student to speak with a trusted adult or access resources like Childline.
  • Admin Escalation Flow: High-risk triggers are escalated to the school's Designated Safeguarding Lead (DSL) or, where appointed, the institution's AI Lead or Supervisor.
  • Teacher Oversight: Every response includes a feedback mechanism allowing teachers to flag inaccuracies, ensuring the AI remains pedagogically sound.

4. Data Privacy & GDPR

  • Lawful Basis: We assist schools in identifying Public Task or Legitimate Interest as the lawful basis for processing under UK GDPR.
  • Data Residency: All data is processed and stored within the europe-west2 (London, UK) region.
  • DPIA: We provide a comprehensive DPIA Technical Fact Sheet to all partner schools and education institutions to simplify the local Data Protection Impact Assessment (DPIA) process. It is available on email request to authenticated, logged-in users on the School Programme plan.

5. Compliance Documentation

To support transparency, the following articles are available to authorised school, academy and college AI Leads, Data Protection Officers, Educational Leaders and Administrators. Please visit our knowledge base at help.otem.co.uk.

AI Literacy and Governance

In addition to the compliance documentation, schools and education institutions can access our AI Literacy & Governance Programme, a six-module course for self-paced study. It is aligned with the Department for Education's guidance on the safe and effective use of AI in education settings.

Compliance Glossary

DLP (Sensitive Data Protection)

  • What it is: Data Loss Prevention.
  • How Sigma uses it: We use Google Cloud’s automated DLP engine to scan every interaction in real-time. It identifies and redacts Personally Identifiable Information (PII) such as names, addresses, and phone numbers before the data is processed, ensuring a "Privacy-First" environment.
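As a toy illustration of the redaction step: the production layer uses Google Cloud Sensitive Data Protection's infoType detectors rather than hand-written patterns, so the two regexes and placeholder tokens below are illustrative assumptions only.

```python
import re

# Toy illustration of pre-logging PII redaction. The production system
# uses Google Cloud DLP infoType detectors, not hand-written regexes;
# these two patterns and the [REDACTED:...] tokens are illustrative.
PII_PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE_NUMBER": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with placeholder tokens before the
    message is processed or written to conversation logs."""
    for info_type, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{info_type}]", text)
    return text
```

Because redaction happens before logging, downstream components only ever see the placeholder tokens, never the original identifiers.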

RAG (Retrieval-Augmented Generation)

  • What it is: A method of "anchoring" AI responses to trusted facts.
  • How Sigma uses it: Instead of letting the AI "guess," we use RAG to link Sigma to a private, school-approved Knowledge Base (national curriculum specifications and education resources). This ensures the AI provides pedagogically accurate answers grounded in the UK National Curriculum.
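A minimal sketch of the retrieval step in RAG. Real systems typically rank a curated knowledge base with vector embeddings; the keyword-overlap scorer and sample documents below are illustrative assumptions, not Sigma's actual knowledge base.

```python
# Toy retrieval step for Retrieval-Augmented Generation (RAG).
# The sample documents and keyword-overlap scoring are illustrative;
# production RAG normally uses embedding similarity over a curated
# curriculum knowledge base.

KNOWLEDGE_BASE = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The water cycle describes evaporation, condensation and precipitation.",
]

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question.
    The generator is then instructed to answer ONLY from this text,
    which is what keeps responses grounded rather than guessed."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))
```

The retrieved passage is injected into the model's prompt as the sole source of truth, which is why grounded answers stay within the approved curriculum.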

HITL (Human-in-the-Loop)

  • What it is: A governance model where human expertise oversees AI logic.
  • How Sigma uses it: Sigma’s conversational routines using generative AI are not left to chance. They are engineered and regularly audited by qualified UK teachers to ensure the AI’s guidance remains helpful, unbiased, and academically sound.

Closed Generative AI Environment

  • What it is: A secure, private cloud boundary.
  • How Sigma uses it: Unlike "Open AI" tools (like the free version of ChatGPT or Gemini), Sigma operates in a siloed environment. This means no data is shared with the public, and no student data is ever used to train the underlying global AI models.

PII (Personally Identifiable Information)

  • What it is: Any data that could be used to identify a specific individual.
  • How Sigma uses it: Our infrastructure is designed to be "PII-Neutral." We use unique, anonymised IDs for authentication via Outseta, ensuring that even in the event of a log review, no user’s legal identity is exposed.
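One common way to build such anonymised identifiers is a keyed hash, which keeps logs correlatable per user without storing a legal identity. The scheme below is a sketch under that assumption; the secret value, truncation length, and function names are illustrative, not our production mechanism.

```python
import hashlib
import hmac

# Illustrative anonymised-ID scheme: a keyed (HMAC) hash lets log
# entries be correlated per user without ever recording a legal
# identity. The secret value and 16-character truncation are
# assumptions made for this sketch.
SERVER_SECRET = b"rotate-me-in-production"

def anonymised_id(account_identifier: str) -> str:
    """Derive a stable, non-reversible ID from an account identifier."""
    digest = hmac.new(SERVER_SECRET, account_identifier.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

Because the derivation is one-way, a log review shows only opaque hexadecimal IDs; recovering the original identity from an ID is computationally infeasible without the secret.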

Instructional Engineering

  • What it is: The process of defining the "Rules of Engagement" for an AI.
  • How Sigma uses it: This is our proprietary IP. We write the complex logic and guardrails that turn a general AI model into a specialised Education Assistant and Learning Companion whose responses and interactions are grounded in an educational and learning context.

Technical Terms

Artificial intelligence (AI) - Computer programs designed to complete tasks that normally require human abilities, such as problem solving, learning and understanding language.

Generative AI - A type of AI that can create new content such as images, text, video, audio and code based on its prior learning.

Machine learning - A way of teaching computer programs to improve at tasks by learning from data and prior examples.

Deep learning - A more sophisticated type of machine learning that is layered in complexity and can self-learn the rules.

Large language model (LLM) - A type of Artificial Intelligence (AI) trained to understand, generate, and reason with human language.

Foundation AI Model - A general purpose AI model that serves as a "base" or "foundation" for many different applications.

Prompt - The instruction or input provided to an AI system.

The "Safety First" Terms

  • Closed-Loop System: Think of this as a "private digital classroom." Unlike public AI (like the free version of ChatGPT or Gemini), information in a closed loop stays inside your school's secure boundary. It doesn't "leak" out to the rest of the internet.
  • Data Redaction (Auto-Scribble): An automated safety layer that acts like a black marker pen. If a student accidentally types a name, phone number, or address, the system detects it and instantly scrubs it out (redacts it) before the AI even logs it in the conversation.
  • Safety Filters: These are digital "guardrails" that scan every message. If a student asks something inappropriate or unsafe, the filter blocks the answer and provides a helpful, age-appropriate redirection instead.

Understanding the "Brain" (AI Logic)

  • Generative AI: Technology that can "create" new text, like an essay or a lesson plan, based on instructions. It’s like a very advanced digital assistant that can write, summarise, and explain complex ideas.
  • Grounded AI (The Digital Textbook): Standard AI "guesses" answers based on the whole internet. Sigma is "grounded", meaning its answers are anchored only in the UK National Curriculum. It doesn't make things up (hallucinate) because it has specific educational frameworks, resources and specifications to follow.
  • Custom AI Logic Engines: These are the "lesson plans" for the AI. Instead of just "being a chatbot," the AI uses specific rules created by teachers to make sure it talks like a tutor, not just a computer.

Privacy & "The Human Touch"

  • Non-Training Clause: A legal and technical promise that we never use your child’s work or your teacher’s plans to "teach" the AI or make Google’s products better. Your data is for your use only.
  • Human-in-the-Loop (HITL): This means the AI is never left unsupervised. Real teachers and safety experts review "scrubbed" (anonymous) chat histories to make sure the AI is being helpful, kind, and accurate.
  • System-Level Metadata: This is "anonymous feedback" that tells us if the system is healthy. We might see that "100 students used the Math tool today," but we never see who those students are or what exactly they said.

Security & Access

  • Gated Access: Just like a school gate, you need a specific, verified "key" (login) to get in. This ensures that only authorised students, parents, and teachers can interact with Sigma.
  • Enterprise-Grade Protection: This means we use the same high-level security technology that large institutions use to keep information safe from hackers.