Sigma School AI

AI Literacy & Governance Programme

Master the essentials of artificial intelligence in education through our self-paced, six-module AI Literacy course. Designed specifically for schools, education leaders and staff, this course provides clear, practical guidance on safe and responsible AI use, aligned with Department for Education (DfE) and Ofsted guidelines.

Your Path to Certification
Navigate through six expert-led modules covering everything from AI use in teaching and assessment to GDPR compliance, risk mitigation, and the ethical integration of AI in education settings. Learn at your own pace and complete the final assessment to earn your AI Literacy Certificate.

Empower your school with the confidence to use AI in Education safely and responsibly.

Understanding AI in Education

Objective:

Build a baseline understanding, dispel myths, and explain AI use in education.

  • Understand what AI is and isn’t
  • Identify myths and misconceptions
  • Recognise AI limitations in education

AI is a tool that supports human work. It does not think or make decisions. Generative AI can produce text, images, or ideas based on patterns, not understanding.

Examples of AI in education:

  • Drafting lesson ideas
  • Summarising content
  • Generating multiple choice questions

Risks:

  • Produces inaccurate or biased outputs
  • Over-reliance may reduce critical thinking
  • Not all AI tools are safe or compliant

AI cannot replace teacher judgement or make decisions about pupils.

Department for Education Guidelines on AI Use

"Schools and colleges can set their own rules on AI use, as long as they follow legal requirements around data protection, child safety, and intellectual property."

Department for Education

AI tools can be used in a variety of ways. In education settings, there are AI use cases in three main areas:

- Teaching and Learning

- Personalised Learning

- Administrative Processes

Teacher Empowerment: AI assists with drafting resources, marking, feedback, and admin, freeing up teachers for core teaching.

Human Oversight: Teachers must review and verify AI generated content. AI is a tool, not a replacement for professional judgment.

Student Skills: Pupils need to learn critical evaluation, cross-referencing, and safeguarding against AI risks.

"Teachers can use AI to help with things like planning lessons, creating resources, marking work, giving feedback, and handling administrative tasks. But they need to use their professional judgement and check that anything AI generates is accurate and appropriate. The final responsibility always rests with them and their school or college."

Education Hub - England

Pedagogy: Prioritise teaching methods over technology and ensure that any AI tool serves learning goals.

Ethics & Bias: Address potential inaccuracies and bias in AI outputs.

Data Protection: Comply with GDPR laws, understand what data AI tools collect. Also follow statutory guidance around safeguarding and child protection.

Knowledge Check

Safe & Appropriate use of AI

Objective:

Clarify where AI can and cannot be used safely in a school or other education institution.

  • Identify safe and prohibited uses
  • Apply AI practices approved by school policy (permitted use)

For Teachers: Lesson planning, resource creation, drafting communications, creating templates or examples, personalised support.

For School operations: Administrative planning tasks, budget analysis, CPD planning.

For Students: Learning to use AI critically for research, summarising, and idea generation, not just getting answers.

Prohibited uses:

  • Entering identifiable pupil or staff information and data
  • Using AI for safeguarding, behaviour, assessment, SEN decisions

"If used safely, effectively and with the right infrastructure in place, AI can support every child and young person, regardless of their background, to achieve at school and college and develop the knowledge and skills they need for life."

Department for Education

Uses requiring caution or approval:

  • AI with anonymised pupil or staff data
  • New or unfamiliar AI tools
  • Direct pupil use (depending on age)

The following reflection questions serve as a critical thinking framework when considering use of AI tools.

These questions guide education leaders and practitioners to think about HOW they can ensure safe, responsible and appropriate use of AI in education settings.

HOW: 

  • Does this example align with my setting’s teaching and learning policy?
  • Could I adapt the output for use in my setting?
  • Would this be useful to me or my pupils/students in my setting?
  • Would I need to contextualise this example for my setting?
  • Would I ensure that I’m protecting data and intellectual property?
  • Would I check the output for accuracy and bias?

The above list of questions is a good starting point, designed to move AI use from experimentation to governance by ensuring that every interaction is intentional, supervised, and compliant.

The critical thinking framework helps to build the foundation for safe and responsible AI use in education.

Each question addresses a safety principle, helping to establish stronger governance.

1. Establishing Accountability (The "Human-in-the-Loop")

"How does this example align with my setting’s policy?"

"How would I check the output for accuracy and bias?"

The Safety Principle: The above questions establish human oversight (Human in the Loop). Responsible use means never treating AI tools as an "authority". By checking for hallucinations (incorrect outputs), inaccuracies and bias, teachers maintain professional accountability for the content delivered to students.

2. Data Privacy & Intellectual Property (IP) Protection

"How would I ensure that I’m protecting data and intellectual property?"

The Safety Principle: This is the most critical "guardrail." It ensures staff always verify that they aren't entering Personally Identifiable Information (PII) into AI Foundation Models or LLMs. It also ensures that students' work remains their own IP and is not used to train commercial AI models without explicit, informed consent.

3. Contextual Safeguarding

"How would I need to contextualise this for my setting?"

"How could I adapt the output?"

The Safety Principle: Generative AI systems are often trained on large datasets that may not reflect the curriculum, local safeguarding risks, or the specific needs (including SEND) of your students. Responsible use means adapting AI content so it is age-appropriate, accurate, and relevant to learners' needs, rather than using generic AI outputs.

4. Pedagogical Value vs Novelty

"Would this be useful to me or my pupils/students in my setting?"

The Safety Principle: Responsible use requires evaluating whether the AI tool actually improves learning outcomes or reduces workload, ensuring that technology remains a servant to pedagogy, not a replacement for teacher-student relationships.

Knowledge Check

Data Protection, Privacy & Safeguarding

Objective:

Ensure staff understand data risks and safeguarding boundaries.

  • Understand GDPR implications
  • Apply safeguarding boundaries

GDPR compliance is critical in the use of AI.

  • Never enter personally identifiable information (PII) into AI tools.
  • Anonymised data may be acceptable with leadership approval.
  • AI must not be used for safeguarding decisions or any other decisions that affect individuals.
  • Follow existing safeguarding policies (KCSIE).

Risks of unsafe AI use:

  • Exposure of sensitive data impacting individuals
  • Inaccurate advice impacting safeguarding
  • Regulatory breaches

There are important differences between open and closed generative AI tools, especially when it comes to data protection.

Open generative AI tools are available to the public and can be changed or used by anyone. Information entered into these tools may be stored, shared, or used to train Large Language Models (LLMs) or other AI Models. This means personal or sensitive information could be seen or reused.

For this reason, you should never enter any identifiable pupil, staff, or other personal data into open AI tools.
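As a purely illustrative sketch of this guardrail, the snippet below screens a draft for obvious identifiers before anyone pastes it into an open AI tool. The patterns and the `PUP-` pupil-ID format are assumptions for the example, not real formats: simple checks like these catch some identifiers, never all, and are no substitute for school policy or professional judgement.

```python
import re

# Hypothetical, minimal PII screen. The patterns below are assumptions for
# illustration only -- they catch some common identifiers, never all of them.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK phone number": re.compile(r"\b(?:0|\+44)\d[\d ]{8,11}\b"),
    "pupil ID (assumed format)": re.compile(r"\bPUP-\d{4,}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the PII categories detected in `text`."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

# Example: this draft would be flagged before being sent to an open AI tool.
draft = "Summarise progress for pupil PUP-0042, contact j.smith@school.example"
print(screen_for_pii(draft))  # → ['email address', 'pupil ID (assumed format)']
```

A screen like this can only support, not replace, the rule above: if a draft is flagged, remove or anonymise the identifiers before using any AI tool, and when in doubt ask your data protection officer.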

Closed generative AI tools are usually more secure. The data you enter cannot be accessed by the public or external users. Because of this, they are generally a safer option for working with personal or sensitive data.

It is not always clear whether an AI tool is open or closed. If you are unsure, speak to your school’s data protection officer or IT lead for guidance on which AI tools are approved for use.

If your school uses a closed AI tool and you enter personal or sensitive data into it, the processing and use of this data must be included in the school's Privacy Policy / Privacy Notice.

Schools and other education organisations should be open and transparent about how they use generative AI tools. Staff, students, parents, carers and governors should understand how their personal data is processed.

Some AI tools process and store more information than just the text you enter into them. They may collect and store additional information and data such as:

  • IP address
  • System or device information
  • Browser information

The data collected by organisations that provide AI tools may be shared with or sold to third parties. Schools must include how any data is collected, processed and stored by AI tools in their Privacy Policy / Privacy Notice.

Knowledge Check

Responsible use of AI for Teaching & Learning

Objective:

Promote pedagogically sound AI use.

  • Recognise bias and errors

AI supports planning and resource creation, but teachers retain all curriculum decisions.

Generative AI tools can be used as a starting point to develop resources, including:

  • lesson plans or activities
  • questions and quizzes
  • revision activities
  • images to help with character descriptions or stories
  • communications for parents and carers

All staff should check with their organisation's data protection officer or ICT/AI lead for further guidance on what constitutes acceptable use.

  • Always review outputs for bias or errors.
  • Use AI to enhance, not replace, teacher expertise.
  • Encourage pupils to think critically about AI answers and outputs.
  • Acknowledge or reference the use of generative AI in your work.
  • Fact-check results to make sure the information is accurate.

Responsible AI in education emphasises safety, ethics, data privacy and human oversight (Human in the Loop), positioning AI as a supportive tool for teachers, not a replacement. AI can be used as a tool to personalise learning and reduce administrative work.

Department for Education

AI cannot replace professional judgement, and the use of AI in education requires human oversight. This is referred to as a "Human in the Loop" safety-first approach. AI should be used as a tool to enhance pedagogy and learning outcomes.

The Department for Education guidance on the safe and effective use of generative AI provides the following framework, known as FACTS.

FACTS framework for AI prompting
When using AI, keep the following framework in mind:

  • Focus Prompts: be clear and specific.
  • Analyse Outputs: check for factual accuracy.
  • Check for Bias: identify potential biases.
  • Tailor Suitability: ensure content fits the audience.
  • Strengthen Prompts: refine for better results.

This framework supports education practitioners to carefully plan the use of AI in teaching and learning.

Knowledge Check

AI, Assessment & Academic Integrity

Objective:

Protect assessment standards and fairness.

  • Maintain assessment integrity
  • Reduce AI misuse

AI in assessment emphasises human oversight and pedagogy while maintaining data privacy and protection under GDPR. AI should support, not replace, teachers for tasks like lesson planning, creating resources and giving feedback, while upholding legal duties, safeguarding, and Intellectual Property (IP) rights.

Joint Council for Qualifications (JCQ) Guidance: AI Use in Assessments

For qualifications, it is important that students genuinely develop the knowledge, understanding and skills of the subjects they are studying, and do not rely on AI to demonstrate them.

"Students must be able to demonstrate the final submission is the product of their own independent work and independent thinking."

JCQ Guidance - Your role in protecting the integrity of qualifications

Teachers and educators should:

  • Design AI-resilient tasks and activities for students to help promote independent and critical thinking.
  • Require students to declare AI use where appropriate.
  • Retain responsibility for marking and assessment decisions.

AI cannot make decisions about grades, behaviour, or SEN support.

"Generative AI tools can support the work of teachers/educators and benefit students. However, the technology can pose risks to individuals’ rights and freedoms, including their right to privacy."

ICO guidance - Evidence on Generative AI in Education

Key guidance for teachers and education staff

Human-in-the-Loop: Teachers must maintain human oversight, always review AI outputs, and make the final decisions. Never use AI as the sole marker or grader for student assessments.

Professional Accuracy: The teacher is ultimately responsible for the factual accuracy of any AI-assisted materials used in the classroom.

Independent Effort: All submitted work must be the student’s own. Undisclosed or unreferenced AI content is classified as malpractice.

Proper Citation: If AI is used for research, students must reference and state the tool used, including the date and any specific prompts provided.

Proactive Monitoring: Verify authenticity by checking work in progress and ensuring the writing matches the student’s known style and current levels.

"Schools and colleges should ensure that students are aware of the risks of malpractice" and that "awarding organisations need to continue taking reasonable steps... to prevent malpractice involving the use of generative AI."

Department for Education Guidance

Definition of AI Misuse and Malpractice

AI misuse occurs when a student submits work that is not their own without appropriate acknowledgement. Specific examples include:

Copying or Paraphrasing: Directly reproducing or slightly altering whole responses or sections of AI generated content.

Undisclosed Use: Failing to acknowledge that an AI tool was used as a source of information and research.

Incomplete Referencing: Providing misleading or poor acknowledgements of AI tools.

Sanctions: Misuse is considered malpractice and can lead to severe penalties, including a loss of marks, disqualification from the qualification, or being barred from future examinations.

Schools and education institutions must provide clear guidance in their Assessment Policy or AI Use Policy. All education staff should consider their data protection obligations from the outset, and in particular their obligations towards safeguarding children.

Knowledge Check

Responsibilities & Accountability

Objective:

Ensure education leaders and teachers understand their roles when using AI.

  • Understand professional responsibility
  • Identify and manage risks of AI use in education
  • Develop, manage and promote safe AI use

There are many risks and challenges associated with the use of AI tools, both open and closed, in education.

Key risks and challenges:

  • Staff/student exposure to inappropriate or harmful content
  • Staff/student exposure to inaccurate, misleading or biased content
  • Data Protection breaches
  • Intellectual Property infringements
  • Academic integrity challenges

There may also be other risks and challenges of using AI; these should be considered in line with the school's policies and its safeguarding obligations.

Key Principles for Education Institutions

Human Oversight: AI should augment, not replace, teacher expertise. Teachers must review, edit, and own AI-generated content.

Prioritise Pedagogy: Develop an AI strategy linked to school improvement plans and implemented through policy action, focusing on learning outcomes over tools.

Safety & Safeguarding Measures

Content & Data: Block harmful AI tools, use tools with strong content filters, and ensure data protection compliance (GDPR).

Supervision: Supervise students using AI, especially free tools, and set clear expectations and boundaries.

Policy Updates: Update child protection and data protection policies or privacy notices to reflect AI risks.

Staff Training: Invest in and deliver Continuing Professional Development. Train staff and students to use AI critically and safely. Develop and promote an AI Lead role within the school or institution.

What Education Leaders and Teachers Should DO

  • Filter and Protect: Ensure Generative AI tools effectively and reliably prevent access to harmful or inappropriate content.
  • Adaptive Safety: Maintain filtering throughout all interactions, adjusting settings based on risk levels, user age, and specific needs (e.g., users with SEND).
  • Monitor and Alert: Use systems that log all activity, alert supervisors to harmful content, and provide real-time notifications to users when content is blocked.
  • GDPR Compliance: Ensure all data processing complies with UK GDPR by providing age-appropriate privacy notices and maintaining lawful bases for data collection and storage.
  • Prioritise Safety by Design: Select AI products that prioritise child safety, transparency, and technical robustness as their core objectives.
  • Maintain Governance: Ensure total compliance with current safeguarding regulations (KCSIE), data protection laws, and institutional AI governance frameworks.

What Education Leaders and Teachers Must NOT Do

  • Avoid Unauthorised IP Use: Do not allow AI tools to collect, store, or use intellectual property (such as students' own work, essays or artwork) for commercial training or fine-tuning without explicit consent.
  • No Data Sharing Without Consent: Never share the work of children under 18 with AI tools that retain data for commercial purposes without documented parental or guardian consent.
  • Respect Employer IP: Teachers must not share their own professional materials with AI tools in a way that violates their employer's intellectual property and data policies.
  • Avoid Non-Transparent Processing: Do not use AI tools that lack transparency regarding how and where data is processed, stored, or shared.

Knowledge Check

AI Literacy Assessment

You have now completed the core curriculum of the AI Literacy for Schools & Teachers course. By progressing through these modules, you have moved beyond simple AI curiosity into a state of professional AI Literacy and Governance.

What you have achieved:

  • Operational Literacy: You now understand the mechanics of Generative AI, moving from basic prompting to sophisticated pedagogical application.
  • Governance & Ethics: You have explored the critical frameworks of data privacy and GDPR compliance, intellectual property, and the ethical implications of algorithmic bias.
  • Risk Mitigation: You are now equipped to identify and manage the risks of hallucinations, bias, misinformation, and student over-reliance.

This course has been specifically structured to align with the Department for Education (DfE) guidelines on the Safe and Responsible use of AI in Education.

  • Safety First: You have demonstrated an understanding of the Human in the Loop requirement, ensuring that AI remains a supportive tool with human oversight rather than a replacement for professional judgment.
  • Data Protection: You are prepared to safeguard your students' data and privacy in line with the latest Department for Education product safety expectations, and you understand GDPR compliance obligations when using AI in education settings.
  • Future Ready Leadership: Your completion of this course marks you as a leader in your school, capable of modelling the critical thinking and digital citizenship required in an AI-enabled future, with safe, effective and compliant use of AI in education settings.

Final Assessment

To receive your AI Literacy Certification, you must complete the 30-question assessment below and evidence your competence in safe, ethical, and effective AI use in education.

Requirements:

  • Pass Mark: 80%
    • You must score 24 out of 30 marks; each correct answer is awarded one mark.
  • Result: Upon completion, you will immediately receive your score. If successful, a link to claim and download your AI Literacy certificate will be revealed.