Staff intranet

Glossary of terms

Glossary of key Artificial Intelligence (AI) terms, providing clear and concise definitions of essential concepts, techniques, and models in the field.

Terms

API (Application Programming Interface)

A structured way for different software systems to talk to each other. In AI, APIs let your local systems connect to AI tools to send prompts and get back responses.
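To make this concrete, the sketch below shows roughly what it looks like when a script sends a prompt to an AI tool over an API and reads back the response. The web address, key and response format are made-up placeholders for illustration only, not a real or approved council connection.

```python
# Illustrative sketch: sending a prompt to an AI tool over an API.
# The URL, key and response format are placeholders, not a real service.
import requests

API_URL = "https://api.example-ai-provider.com/v1/chat"  # placeholder endpoint
API_KEY = "YOUR-API-KEY"  # issued by the provider; always keep keys secret

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarise this paragraph for a resident newsletter."},
    timeout=30,
)
response.raise_for_status()

# The AI tool sends its answer back in the response body.
print(response.json())
```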

Approved AI Tool Register

A list kept by the council showing which AI tools are safe and approved to use, what they can be used for, and what kinds of data they can handle.

Artificial Intelligence (AI)

A broad term for computer systems that can do tasks usually needing human intelligence – like recognising speech, understanding language, finding patterns, or making predictions.

Bias (in AI)

When an AI system produces unfair or skewed results due to imbalances or stereotypes in the data it was trained on. For example, if a model was trained mostly on data from certain groups, it may misrepresent others. We use human oversight to check for this.

Black Box

Describes when an AI system’s inner workings are hard (or impossible) to see or fully explain – even to experts. This can make it tricky to understand exactly how decisions or outputs are produced.

Data Protection Impact Assessment (DPIA)

A risk assessment required under UK GDPR when a new system (like AI) might affect people’s privacy. It checks how personal data is collected, stored, used, and protected.

Equality Impact Assessment (EqIA)

An assessment to check if a project, policy or AI tool could affect people unfairly based on protected characteristics (like age, disability, race). It helps ensure compliance with the Equality Act 2010.

Explainability

How understandable an AI system is to humans – can we clearly see why it gave a certain result? Explainable AI helps build trust and spot errors or biases.

Generative AI (Gen AI)

A type of AI that creates new content (like text, images, code) based on patterns learned from existing data. Examples include tools that can draft emails, write reports, or produce artwork.

Human-in-the-Loop (HITL)

A safeguard where a person supervises or reviews AI-generated outputs or decisions, ensuring that final responsibility stays with humans.

Large Language Model (LLM)

A type of AI model trained on huge amounts of text to generate human-like writing or answers. Examples include ChatGPT and Microsoft Copilot’s underlying engines.

Machine Learning (ML)

A branch of AI where a system “learns” from data to get better at a task over time – without being explicitly programmed for every step. For example, spam filters or fraud detection often use machine learning.
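As a rough illustration only, the sketch below trains a toy “spam filter” from a handful of made-up example messages using the scikit-learn library. A real system would use far more data, but the idea is the same: the model learns the pattern from labelled examples rather than from hand-written rules.

```python
# Toy machine-learning sketch: a spam filter that learns from labelled examples.
# The example messages and labels are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now",
    "Urgent: claim your reward",
    "Agenda for Tuesday's planning meeting",
    "Minutes from the housing team call",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn the text into numbers the model can learn from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

# Train a simple classifier on the labelled examples.
model = MultinomialNB()
model.fit(features, labels)

# The trained model now predicts a label for a message it has never seen.
new_message = vectorizer.transform(["Claim your free prize today"])
print(model.predict(new_message))  # expected: ['spam']
```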

Model

The underlying mathematical system that an AI tool uses to process inputs and generate outputs. It’s like the “brain” behind the tool. Large, powerful models can handle more complex tasks.

Natural Language Processing (NLP)

A subfield of AI focused on understanding, interpreting and generating human language in text or speech. This is how chatbots, translation tools or smart assistants can “read” and “write” like a person.

Prompt

The input or question you give to an AI tool to get a result. For example, “Summarise this paragraph” or “Draft an email inviting parents to a school event.” Good prompts help get clearer, more relevant outputs.
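As a simple illustration, the sketch below builds a clearer, more specific prompt from an instruction, some context and a style note before it would be sent to an AI tool (for example over an API, as described above). The wording and details are made up for illustration and are not a council template.

```python
# Illustrative sketch: assembling a clear, specific prompt.
# The event details and wording below are invented for this example.
instruction = "Draft a short email inviting parents to a school open evening."
context = "The event is on a Thursday evening, 6-8pm, in the school hall."
style = "Keep it friendly, plain English, and under 150 words."

prompt = f"{instruction}\n\nDetails: {context}\n\nStyle: {style}"
print(prompt)  # this text is what you would send to the AI tool
```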