Responsible AI
A Guide to Responsible AI Standards and Frameworks
All businesses worldwide are accelerating their building or adoption of Artificial Intelligence (AI) products. One important consideration is to ensure the responsible building and usage of these products. Without proper standards, these products can become biased, lack transparency, and even generate harmful output. Having standards and guidelines is crucial to building a trustworthy AI ecosystem. These guidelines should cover practices to help build and use fair, explainable, and ethical AI products.
Although there is no universal standard for building or using responsible AI products, several frameworks can assess various aspects of AI products, such as fairness and transparency. This post aims to review some of the key frameworks that can be used to assess the trustworthiness of AI products. So, let’s dive in….
Responsible AI Principles
Fairness
Fair AI products treat all individuals or groups equally and do not discriminate based on factors such as gender, age, religion, and more. Here are some examples of fair AI products:
An AI-enabled loan approver product does not reject people from certain nationalities.
An AI-enabled hiring product does not filter candidates from underrepresented groups.
Transparency
Transparency is the capability of AI products to provide an understanding of how they make decisions. It is not just about output; it dives into the inner working system. Transparent AI Products include Explainability (clarity around the reasoning behind an AI’s decision), Traceability (clarity around the data used to train the AI model and how it influences the outcome), and Auditability (ability to assess the fairness, accuracy, and robustness ).
Robustness
Robust AI products are resilient to errors and maintain their performance and accuracy under a variety of conditions. Here’s a breakdown of what robustness means for AI.
Privacy
AI products that respect privacy ensure user data is collected, stored, and used ethically. They also allow users to have control over their data.
Human-centred
Human-centred AI products prioritize human needs, values, and well-being instead of profit.
Accountability
Accountability for AI products includes mechanisms in place to have a clear line of responsibility throughout the entire lifecycle of AI products. So, in case of any issue, we can identify responsible parties and the reasonings.
Responsible AI frameworks
The following table lists the key standards and frameworks around responsible AI. Let’s explore each of these frameworks.

AI Risk Management Framework (AI RMF)
This initial draft of the NIST AI Risk Management Framework (AI RMF) was released in December 2021. The main goal of AI RMF is for voluntary use to address risks in the development and use of AI products, services, and systems. This work was done in collaboration with the private and public sectors. The second draft of AI RMF was released on August 18, 2022.
The framework focuses on four key domains to effectively manage risks associated with AI products. These focus domains are:
Govern
This area is the foundation for responsible AI development and use within an organisation which involves understanding the legal requirements, integrating responsible AI principles into policies and processes and defining risk tolerance levels.
Map
This domain focuses on identifying potential risks associated with an AI product which involves identifying potential threats, and vulnerabilities and analyzing the impact.
Measure
This domain focuses on defining techniques to assess and monitor AI risks using quantitative and qualitative methods and developing metrics to track the effectiveness of controls.
Manage
This domain focuses on strategies to mitigate risk associated with AI products which involves selecting, implementing, and operating appropriate controls throughout the AI product lifecycle.
On April 29, 2024, NIST also released a draft publication to help manage the risk of Generative AI. Organizations can use the AI RMF Generative AI Profile to identify risks posed by generative AI and propose actions to manage those risks that best align with their priorities.
BS ISO/IEC 42001
BS ISO/IEC 42001 is the first international standard for implementing and continuously improving an Artificial Intelligence Management System (AIMS) within organizations. This framework provides the ability to all organizations and sectors that aim to develop or use AI products. The framework focuses on four key domains
This framework focuses on five key aspects of responsible AI including:
Governance
This domain includes Establishing clear roles and responsibilities for AI projects within an organization.
Risk Management
This domain includes Defining a process for identifying, assessing, and managing potential risks (e.g. bias, fairness, security, privacy, and safety) associated with AI products.
Data Management
This domain includes Guiding responsible data collection, storage, usage, and governance for AI products.
Transparency and Explainability
This domain encourages developing AI products that generate explainable and understandable output.
Monitoring and Improvement
This domain includes defining processes for monitoring different aspects of AI products and continuously improving them.
European Union (EU) Ethics Guidelines for Trustworthy AI
Ethics guidelines for trustworthy AI aims to promote responsible AI in the EU. This framework has three main focus domains that should be met throughout the AI product's entire life cycle.
Compliant
AI products should be lawful and comply with all applicable regulations.
Ethical
AI products should be ethical and adhere to ethical principles.
Robust
AI products should be robust both from a technical and social perspective since, even with good intentions, these products can cause unintentional harm.
OCDE AI Principles
The Organisation for Economic Co-operation and Development (OECD) AI principles offer a set of high-level and practical principles for responsible AI development, deployment, and governance.
OECD AI includes five key principles for responsible AI as follows:
Human-centred values
This principle emphasizes that AI products should be developed and used in a way that respects human values including Avoiding discrimination, Transparency and Privacy.
Inclusiveness, sustainable development and well-being
This principle emphasizes the importance of designing AI products that can benefit society and contribute to sustainable development.
Transparency and explainability
This principle focuses on designing AI products that are transparent and explainable.
Robustness, security and safety
This principle emphasizes the importance of building AI products that are robust, secure, and safe.
Accountability
This principle emphasizes the importance of establishing clear lines of accountability for the development, deployment, and use of AI systems.
Montreal Declaration for Responsible AI Development
The Montreal Declaration for Responsible AI Development originated in 2017 and is a collaborative institute between HEC Montréal, Polytechnique Montréal, and the University of Montreal. This institute formulated a declaration in response to growing concerns around the ethical and societal implications of AI technologies. This declaration focuses on the following key functions:
Well-being and Human Rights
This function emphasises the importance of AI products to respect human rights.
Fairness
This function includes AP product’s ability to treat all individuals or groups equally.
Democracy and Social Justice
This function ensures the output of AI products aligns with democratic principles and social justice.
Privacy and Security
This function ensures the protection of user privacy and prevents data breaches.
Sustainability:
This function emphasises the importance of considering the environmental impact of AI products during development, deployment, and use.
Accountability:
This function emphasises the importance of establishing clear accountability mechanisms for AI products.
Transparency and Explainability
This function emphasises on importance of AI products’ decisions to be explained in a way that is understandable to stakeholders.
Broad Societal Dialogue
This function promotes an open dialogue about the product’s development and implications with relevant stakeholders.
The Algorithmic Justice League (AJL) Framework
The AJL framework aims to raise awareness about the impacts of AI, This framework has been developed by the Algorithmic Justice League and outlines four focus domains when assessing the fairness of AI systems which include Data, Models, Use and Outcomes.
Partnership on AI (PAI) Framework for Responsible AI
PAI framework was Developed by several companies, governments, and NGOs and focuses on responsible AI implementation and use. It offers guidance on topics such as fairness, transparency, accountability, privacy, security, safety, and worker well-being.
It is worth noting that AJL Framework doesn’t have a single, pre-defined document. This is an ongoing effort through research, advocacy, and collaboration.
OWASP Top 10 for Large Language Model Applications
The OWASP Top 10 for Large Language Model Applications project educate different AI stakeholders about the potential security risks when deploying and managing Large Language Models (LLMs). The project lists the top 10 most critical vulnerabilities often seen in LLM applications. This also includes their potential impact and prevalence.
OWASP Top 10 for LLMs is specifically focused on security risks, while the other frameworks listed in this post have a broader scope of responsible AI principles.
Conclusion
This post has explored the landscape of responsible AI standards and frameworks. Although there isn’t a universal set of guidelines, various frameworks have emerged to address different dimensions of responsible AI including fairness, transparency, privacy, etc. By incorporating these standards into AI development and usage practices, we can foster responsible AI and balance between innovation and ethical considerations. In future posts, we will further explore some of these frameworks. Stay tuned for more insights!

This story is published on Generative AI. Connect with us on LinkedIn and follow Zeniteq to stay in the loop with the latest AI stories.
Subscribe to our newsletter to stay updated with the latest news and updates on generative AI. Let’s shape the future of AI together!
