130+ Authoritative Resources from WHO, UNESCO, OECD, NIST, ILO, FDA, UNEP, RAND, IMF, ECB, and Leading Research Institutions
Comprehensive database of research papers, policy frameworks, technical standards, and industry reports aligned with our 9-pillar framework
Search, filter, and explore our complete library
The first-ever global standard on AI ethics, adopted by all 193 UNESCO member states, establishing comprehensive principles and policy action areas for responsible AI development.
Comprehensive analysis of 200 AI use cases across OECD governments, examining opportunities, risks, and practical guidance for responsible AI adoption in public sector functions.
Voluntary framework for managing risks to individuals, organisations, and society associated with AI, developed through consensus-driven collaboration with public and private sectors.
Comprehensive overview of AI ethics issues and policy initiatives across the European Union and globally, examining regulatory approaches and ethical frameworks.
Critical evaluation of over 80 AI ethics guidelines from around the world, examining their effectiveness, convergence, and implementation challenges.
Develops comprehensive conceptual framework for regulating AI based on systematic review of literature on AI governance and regulatory theory.
Comprehensive primer on the EU AI Act's risk-based regulatory approach, serving as guide for US legislators and policymakers considering AI regulation.
Examines relationship between GDPR and AI, analysing how data protection regulation impacts AI development, deployment, and governance in the EU.
Comparative analysis of AI regulatory approaches across major jurisdictions including US, EU, UK, China, and international organisations.
Comprehensive systematic review synthesising research on algorithmic bias with integrated framework and agenda for future research directions.
Proposes comprehensive approaches for understanding and mitigating AI bias in healthcare to advance health equity and reduce disparities.
Highlights the importance of systematically identifying bias and engaging in mitigation activities throughout the AI model lifecycle, with practical strategies.
Comprehensive study on trustworthy AI elements and integration of explainable AI methodologies across diverse applications and domains.
Provides insights into what organisations consider important in transparency and explainability of AI systems, bridging ethics and engineering.
Defines accountability in AI context and analyses its architecture through compliance, reporting, oversight, and liability goals with practical frameworks.
Explores the transformative opportunity explainable AI (XAI) offers financial institutions to enhance trust, compliance, and decision-making through transparent AI systems.
Reviews challenges and approaches to data governance for AI systems, proposing comprehensive framework for trustworthy data management.
Reviews privacy-preserving techniques for AI in healthcare including federated learning, differential privacy, and secure multi-party computation.
Overview of trends in privacy-preserving technologies for AI developers and stakeholders, examining solutions for data protection challenges.
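As a concrete illustration of one technique named in the two entries above, the sketch below shows a minimal Laplace-mechanism form of differential privacy. It is an orientation example only, not drawn from any listed resource; the function name `private_mean` and all figures are hypothetical.

```python
import numpy as np

def private_mean(values, lower, upper, epsilon):
    """Differentially private estimate of a mean via the Laplace mechanism.

    Clipping bounds each record's influence on the mean (the sensitivity);
    Laplace noise scaled to sensitivity / epsilon then masks any single
    record's contribution.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # max effect of one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical usage: estimate an average age under a privacy budget of 1.0.
ages = np.array([34, 29, 41, 52, 38, 27, 45])
print(private_mean(ages, lower=18, upper=90, epsilon=1.0))
```

Smaller epsilon values add more noise and give stronger privacy guarantees; choosing that budget is the core privacy-utility trade-off the surveyed techniques negotiate.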
Identifies three dimensions of AI implementation in the public sector: technology-deterministic, data-induced, and organisational transformation approaches.
Sets forth clear policy goals for federal government AI execution across innovation, infrastructure, and international cooperation pillars.
UNEP explores AI's environmental footprint including energy consumption and carbon emissions, proposing solutions for sustainable AI deployment.
First international standard for AI management systems, providing requirements and guidance for responsible development and use of AI.
Practical guidance for small and medium businesses on implementing proportionate AI governance without enterprise-scale resources.
Develops actionable properties for designing AI systems under meaningful human control with practical guidance for developers and organisations.
Practical toolkit providing implementation guidance, assessment tools, and best practices for responsible AI innovation across sectors.
Comprehensive annual report tracking global AI trends, progress, and challenges across technical performance, policy, economics, and societal impact.
Collection of practical approaches and real-world examples demonstrating how organisations implement responsible AI principles in practice.
Comprehensive framework from IEEE establishing principles and recommendations for human-centric design of autonomous and intelligent systems.
Critical analysis of AI power concentration, examining corporate dominance, regulatory challenges, and impacts on democracy and social justice.
Foundational guide explaining algorithmic accountability concepts, mechanisms, and implementation approaches for diverse stakeholders.
Characterises the carbon footprint of AI computing, including manufacturing and operational emissions, proposing sustainable AI practices and metrics.
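Since the operational side of the footprint in the entry above comes down to simple arithmetic, a back-of-envelope sketch may help: energy drawn by the hardware, scaled by data-centre overhead (PUE) and grid carbon intensity, yields CO2e. All figures below are hypothetical placeholders, not values from the report.

```python
# Back-of-envelope operational emissions for a hypothetical training run.
gpu_power_kw = 0.4      # average draw per GPU (kW), assumed
num_gpus = 8            # assumed cluster size
training_hours = 72     # assumed run length
pue = 1.2               # data-centre power usage effectiveness, assumed
grid_kg_per_kwh = 0.4   # grid carbon intensity (kg CO2e/kWh), assumed

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * grid_kg_per_kwh
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.0f} kg CO2e")
```

Manufacturing (embodied) emissions, which the entry above also covers, are not captured by this operational formula.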
Proposes AI risk management maturity model building on NIST AI RMF, enabling organisations to assess and advance their AI governance capabilities.
Proposes comprehensive framework for AI safety assurance combining formal verification, runtime monitoring, and safety-critical engineering practices.
Comparative analysis of AI regulatory approaches in healthcare across major jurisdictions, examining standards, approval processes, and oversight mechanisms.
Reviews AI applications in safety and reliability engineering, examining both opportunities and challenges in using AI for safety-critical systems.
Examines unique opportunities and challenges SMEs face in AI adoption, providing strategic guidance for successful integration.
An independent research institute focused on the social implications of AI and related policy research, notable for being the first AI institute founded and led by women.
An independent UK-based institute ensuring data and AI technologies work for people and society by promoting equitable AI benefits and social wellbeing.
A New York-based nonprofit conducting empirical research on the social implications of data-centric technologies, including AI systems.
A Berlin-based nonprofit evaluating algorithmic decision-making processes and maintaining a global inventory of AI ethics guidelines.
An institute focused on steering transformative technologies like AI toward benefiting humanity, known for the Asilomar AI Principles and AI Safety Index.
A nonprofit research organisation focused on reducing societal-scale risks from AI through technical safety research and advocacy.
A leading global nonprofit providing cutting-edge tools for responsible AI oversight and compliance through the RAISE framework.
An independent nonprofit coalition bringing together diverse communities to address AI's future through guidelines, policymaker education, and international convenings.
A flagship WEF initiative with 350+ members promoting responsible AI development through working groups on safety, applications, and governance.
An independent nonprofit connecting experts across sectors to promote research and policy for safe and ethical AI systems.
The leading digital civil liberties organisation championing user privacy, free expression, and innovation through algorithmic accountability advocacy.
A youth-led global coalition of 600+ students across 40+ states and 30+ countries advocating for AI governance and human rights protection.
A national nonprofit transforming the AI practitioner pipeline by creating a more inclusive discipline through education and mentorship for underrepresented groups.
A UK government institute dedicated to advancing AI safety research, testing, and standards development to ensure safe AI deployment.
A pioneering AI research organisation with a unique nonprofit governance structure ensuring AI benefits all of humanity.
A community-driven initiative promoting fairness, accountability, and transparency in machine learning systems through research and advocacy.
A nonprofit institute providing practical tools and frameworks for organisations to implement ethical AI practices and ensure compliance.
A comprehensive 290-page framework providing recommendations for ethical AI development based on five core principles and supported by the P7000 technical standards series.
A global AI ethics certification program assessing autonomous intelligent systems across transparency, accountability, algorithmic bias, and privacy.
A systematic risk management framework released in 2023 providing guidance through four core functions: Govern, Map, Measure, and Manage.
Guidelines published in 2019 establishing seven key requirements for trustworthy AI including human agency, technical robustness, and accountability.
The first intergovernmental AI principles adopted in 2019, establishing five principles for responsible AI stewardship including inclusive growth and human-centred values.
The Responsible AI Safety and Effectiveness framework providing practical implementation pathways with over 1,100 controls mapped across 17 global standards.
A declaration developed in 2018 through a unique citizen co-construction process involving 500+ participants, presenting 10 principles for responsible AI.
Created in 2017, these 23 principles address research issues, ethics and values, and longer-term concerns, signed by over 2,000 experts including leading AI researchers.
Developed by the World Economic Forum's AI Governance Alliance, this framework addresses safe systems, responsible applications, and resilient governance through multi-stakeholder collaboration.
The first global standard on AI ethics adopted by UNESCO's 193 member states in 2021, providing comprehensive guidance on human rights-centred AI development.
The world's first AI management system standard published in 2023, providing requirements for establishing, implementing, maintaining, and continually improving AI management systems.
A comprehensive database cataloguing AI system failures and harms to inform safer AI development through lessons learned from real-world incidents.
Singapore's AI governance testing framework and toolkit launched in 2022, providing organisations with tools to validate AI systems against ethical principles.
Canada's mandatory assessment tool launched in 2019 for evaluating risks associated with automated decision systems used by federal government.
Access our complete toolkit, assessments, and member-only annotated resources to turn research into action.