AI Assurance Simplified – Our Glossary of Terms

Artificial intelligence (AI) is rapidly transforming industries, driving innovation and reshaping how organizations operate and make decisions. However, with its growing influence comes complex challenges in governance, risk management, ethics and compliance.

AI assurance addresses these challenges by ensuring AI systems are trustworthy, transparent and aligned with legal and ethical standards. Yet, navigating the ever-evolving landscape of terminology, frameworks and regulations, such as the EU AI Act, can be difficult.

To develop or deploy AI responsibly, you must first understand the terminology that defines roles, governance models, assurance frameworks and compliance requirements.

To support you, our experts have compiled this glossary of key terms to help you strengthen your understanding of AI assurance and responsible AI management.

  • Regulations governing AI

  • Standards governing AI

  • High-risk AI system

  • AI roles and regulatory definitions

  • Level of autonomy of the AI system

Regulations governing AI

EU AI Act
The EU regulation that establishes harmonized rules for developing, placing on the market and using AI systems within the EU. It introduces a risk-based approach to ensure AI is safe, transparent and respects fundamental rights.

NIST AI Risk Management Framework (RMF)
A voluntary framework by the US National Institute of Standards and Technology (NIST) for managing risks associated with AI systems. It promotes trustworthy, fair and transparent AI.

General Data Protection Regulation (GDPR)
The EU regulation governing personal data collection, processing and storage. It emphasizes user consent, data minimization and the individual’s right to access, correct or delete their data.

Health Insurance Portability and Accountability Act (HIPAA)
The US law that sets standards for protecting sensitive patient health information and ensuring privacy and security in healthcare.

Basel III
International banking regulations developed by the Basel Committee on Banking Supervision (BCBS) for capital adequacy, stress testing and market liquidity to strengthen financial system stability.

CPS 230
The Australian Prudential Regulation Authority (APRA) standard that defines requirements for operational risk management and resilience in financial institutions.

Standards governing AI

ISO/IEC 42001
The international standard for establishing, implementing, maintaining and continually improving an AI management system (AIMS). It supports AI governance, risk management and transparency.

We hold AIMS certification accreditations from ANAB (US) and SAC (Singapore).

ISO/IEC 27001
The international standard for information security management systems (ISMS). It defines the requirements for protecting sensitive data through risk management and security controls.

We hold multiple accreditations for ISMS certification.

High-risk AI system

An AI system is classified as high-risk under the EU AI Act if it meets one of the following conditions:

  1. It serves as a safety component of a product, or is itself a product, that falls under the EU harmonization legislation on product safety and is subject to third-party conformity assessment (Annex I), such as machinery, medical devices, aviation systems or motor vehicles.
  2. It is used in specific applications that have a significant impact on health, safety or fundamental rights (Annex III), such as remote biometric identification, the operation of critical infrastructure, education, employment and workforce management, access to essential services (including credit scoring), law enforcement, migration and border control, or the administration of justice and democratic processes.

AI roles and regulatory definitions

ISO/IEC 22989:2022 definitions

AI producer
An organization or entity that designs, develops, tests and deploys products or services that use one or more AI systems.

AI provider
An organization or entity that provides products or services that use one or more AI systems. AI providers include both platform providers and product/service providers.

AI user
An organization or entity that uses AI products or services.

EU AI Act definitions

AI provider
An organization or entity that develops an AI system or general-purpose AI model, or places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

AI deployer
An organization or entity that uses an AI system under its authority, except when used in a personal, non-professional activity.

AI importer
An organization or entity located or established in the EU that places on the market an AI system bearing the name or trademark of a person established in a third country.

AI distributor
An organization or entity in the supply chain, other than the provider or importer, that makes an AI system available on the EU market without modifying it. 

Level of autonomy of the AI system

Ethics guidelines for trustworthy AI (European Commission)

Human-on-the-loop (HOTL)
Human intervention is possible during the system’s design cycle and while monitoring its operation.

Human-in-the-loop (HITL)
Human intervention is possible in every decision cycle of the system.

Autonomous
The system can perform its entire mission without external intervention.

Self-learning
The system can modify its intended domain of use or goals without external intervention, control or oversight.
