A focused suite of lightweight, browser-based tools designed to support reflective, governance-aligned sense-making about AI capability — grounded in the CloudPedagogy AI Capability Framework (2026 Edition).
These tools are intentionally non-surveillance, non-benchmarking, and non-prescriptive.
They exist to help teams surface patterns, tensions, and questions about AI capability — not to automate decisions, score performance, or enforce compliance.
This repository is an index and launchpad for the CloudPedagogy AI Capability Tools.
It provides:
- a clear overview of each tool
- a recommended logical flow for using the tools together
- direct links to live tools and their source repositories
If you are looking for the framework that underpins all tools, see:
https://github.com/cloudpedagogy/cloudpedagogy-ai-capability-framework
The tools are designed to work together as a capability journey:
- Establish a baseline — shared understanding of current capability
- Interpret patterns — gaps, imbalances, and risk-relevant signals
- Stress-test resilience — how capability holds up under pressure or change
- Make capability visible — where it appears (or doesn’t) in programmes or structures
- Track signals over time — notice trends without surveillance or KPIs
Each tool can be used standalone, but the sequence above reflects the most coherent and defensible progression.
AI Capability Self-Assessment
Purpose: Establish a reflective baseline
A static, browser-based self-assessment for exploring organisational AI capability across six domains.
Designed to support shared reflection rather than evaluation or audit.
- Live tool: http://ai-capability-self-assessment-cloudpedagogy.s3-website.eu-west-2.amazonaws.com/
- Repository: https://github.com/cloudpedagogy/cloudpedagogy-ai-capability-assessment
Key characteristics
- Framework-led and non-prescriptive
- Explainable, rule-based interpretation (sketched below)
- No accounts, analytics, or data transmission
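To make "explainable, rule-based interpretation" concrete, here is a minimal TypeScript sketch of the pattern: every prompt carries the rule that produced it, so nothing is a black box. The types, threshold, and wording are illustrative assumptions, not the tool's actual source.

```typescript
// Hypothetical sketch of explainable, rule-based interpretation.
// Types, the threshold, and wording are illustrative assumptions.

type Profile = Record<string, number>; // domain name -> self-assessed score (e.g. 1–5)

interface Prompt {
  domain: string;
  observation: string; // a reflective prompt, not a verdict or grade
  rule: string;        // the rule that fired, kept visible for explainability
}

function interpret(profile: Profile): Prompt[] {
  return Object.entries(profile)
    .filter(([, score]) => score <= 2)
    .map(([domain, score]) => ({
      domain,
      observation: `Responses suggest ${domain} may be under-developed; worth a team conversation.`,
      rule: `score ${score} <= 2`,
    }));
}
```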
AI Capability Gaps & Risk
Purpose: Interpret gaps, imbalances, and fragilities
A browser-based diagnostic tool for interpreting capability profiles to surface gap signals, imbalance patterns, and risk-relevant tensions.
- Live tool: http://cloudpedagogy-ai-capability-gaps-risk.s3-website-us-east-1.amazonaws.com/
- Repository: https://github.com/cloudpedagogy/cloudpedagogy-ai-capability-gaps-risk
What it supports
- Governance and QA discussions
- Leadership and steering group sense-making
- Identifying stabilising steps before scaling AI use
What it does not do
- No compliance checks
- No benchmarking or maturity scoring
- No automated decisions or recommendations
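As an illustration of what gap and imbalance signals might look like in code, a minimal sketch follows. The thresholds and labels are hypothetical assumptions, not the tool's actual logic.

```typescript
// Hypothetical sketch of surfacing gap and imbalance signals from a
// six-domain profile. Thresholds and labels are illustrative only;
// outputs are framed as discussion prompts, never verdicts or scores.

type Profile = Record<string, number>; // domain name -> score (e.g. 1–5)

interface Signal {
  kind: "gap" | "imbalance";
  detail: string;
}

function surfaceSignals(profile: Profile): Signal[] {
  const entries = Object.entries(profile);
  if (entries.length === 0) return [];

  const scores = entries.map(([, score]) => score);
  const max = Math.max(...scores);
  const min = Math.min(...scores);
  const signals: Signal[] = [];

  // Gap signal: a domain trails the strongest domain noticeably.
  for (const [domain, score] of entries) {
    if (max - score >= 2) {
      signals.push({
        kind: "gap",
        detail: `${domain} trails the strongest domain by ${max - score} points: worth asking why.`,
      });
    }
  }

  // Imbalance signal: a wide spread across the whole profile.
  if (max - min >= 3) {
    signals.push({
      kind: "imbalance",
      detail: "Capability is uneven across domains; scaling AI use may amplify the weakest areas.",
    });
  }
  return signals;
}
```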
AI Capability Scenario Stress Test
Purpose: Explore resilience under disruption or scrutiny
An exploratory, browser-based tool for examining how an existing AI capability profile may hold up under plausible future scenarios and stress conditions.
- Live tool: http://cloudpedagogy-ai-capability-scenario-stress-test.s3-website.eu-west-2.amazonaws.com/
- Repository: https://github.com/cloudpedagogy/cloudpedagogy-ai-capability-scenario-stress-test
Typical scenarios include
- Rapid AI uptake in high-stakes contexts
- Regulatory tightening or audit scrutiny
- Reputational or public incidents
- Vendor disruption or loss of key expertise
This tool supports foresight and judgement, not prediction or policy automation.
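One way to picture the mechanics: each scenario places pressure on particular domains, and low-scoring domains under that pressure surface as fragilities to discuss. The scenario shape and fragility rule below are hypothetical, not drawn from the tool's source.

```typescript
// Hypothetical sketch of a scenario stress test: each scenario puts
// pressure on particular domains, and low-scoring domains under that
// pressure surface as fragilities to discuss, not predictions.

type Profile = Record<string, number>; // domain name -> score (e.g. 1–5)

interface Scenario {
  name: string;
  stressedDomains: string[]; // domains the scenario puts under pressure
}

// Illustrative scenario, loosely based on the examples listed above.
const regulatoryTightening: Scenario = {
  name: "Regulatory tightening or audit scrutiny",
  stressedDomains: ["Decision-Making & Governance", "Ethics, Equity & Impact"],
};

function fragilities(profile: Profile, scenario: Scenario): string[] {
  return scenario.stressedDomains
    .filter((domain) => (profile[domain] ?? 0) <= 2)
    .map((domain) => `Under "${scenario.name}", ${domain} looks fragile at its current level.`);
}
```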
AI Capability Programme Mapping
Purpose: Make capability visible across structures
A browser-based tool for mapping modules, activities, or assessments against the six AI capability domains to support curriculum design, review, and QA conversations.
- Live tool: http://cloudpedagogy-ai-capability-programme-mapping.s3-website.eu-west-2.amazonaws.com/
- Repository: https://github.com/cloudpedagogy/cloudpedagogy-ai-capability-programme-mapping
Key features
- Domain tagging using the framework as lenses
- Export to Markdown (QA- and committee-ready)
- Import/export via JSON for portability (data shape sketched below)
- Fully client-side, suitable for static hosting
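The portability features above imply a simple data shape. Below is a hypothetical sketch of a mapping record, its Markdown export, and a JSON round trip; the field names are assumptions, not the tool's actual schema.

```typescript
// Hypothetical sketch of a portable mapping record, a Markdown export,
// and a JSON round trip. Field names are illustrative assumptions,
// not the tool's actual schema.

interface MappedItem {
  title: string;                              // module, activity, or assessment
  kind: "module" | "activity" | "assessment";
  domains: string[];                          // framework domains used as tagging lenses
  notes?: string;
}

// Markdown export suitable for pasting into QA or committee papers.
function toMarkdown(items: MappedItem[]): string {
  const header = "| Item | Type | Domains |\n| --- | --- | --- |";
  const rows = items.map((i) => `| ${i.title} | ${i.kind} | ${i.domains.join(", ")} |`);
  return [header, ...rows].join("\n");
}

// JSON round trip for portability between browsers or colleagues.
const sample: MappedItem[] = [
  { title: "Research Methods", kind: "module", domains: ["Ethics, Equity & Impact"] },
];
const exported = JSON.stringify(sample, null, 2);
const imported: MappedItem[] = JSON.parse(exported);
```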
AI Capability Dashboard
Purpose: Notice aggregate patterns and trends over time
A lightweight, browser-based aggregate dashboard for examining how AI capability is developing across an organisation over time.
- Live tool: http://cloudpedagogy-ai-capability-dashboard.s3-website.eu-west-2.amazonaws.com/
- Repository: https://github.com/cloudpedagogy/cloudpedagogy-ai-capability-dashboard
Design intent
- No surveillance or monitoring
- No KPIs, league tables, or performance metrics
- Supports governance-aligned discussion and shared interpretation
The dashboard focuses on signals, trends, and tensions, not measurement or enforcement.
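As a sketch of "signals, trends, and tensions" without KPIs: aggregate snapshots taken over time can be reduced to qualitative directions per domain. The snapshot shape and wording below are illustrative assumptions, not the dashboard's actual code.

```typescript
// Hypothetical sketch of trend signals from periodic aggregate
// snapshots. No individual-level data and no KPI: the output is a
// qualitative direction per domain, intended to prompt discussion.

interface Snapshot {
  takenAt: string;                 // e.g. "2026-03"
  profile: Record<string, number>; // aggregate score per domain
}

function trendSignals(snapshots: Snapshot[]): string[] {
  if (snapshots.length < 2) return ["Not enough snapshots yet to talk about a trend."];
  const first = snapshots[0];
  const last = snapshots[snapshots.length - 1];
  return Object.keys(last.profile).map((domain) => {
    const delta = last.profile[domain] - (first.profile[domain] ?? 0);
    if (delta > 0) return `${domain}: strengthening since ${first.takenAt}.`;
    if (delta < 0) return `${domain}: softening since ${first.takenAt}; worth asking why.`;
    return `${domain}: broadly steady.`;
  });
}
```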
Across the suite, all tools are:
- Capability-led — grounded in a six-domain framework
- Interpretive — outputs are prompts, not verdicts
- Governance-compatible — supporting decision hygiene, not automation
- Privacy-respecting — no accounts, tracking, or data transmission
- Lightweight and portable — suitable for static hosting (e.g. AWS S3)
The six domains function as lenses, not checklists:
- Awareness & Orientation
- Human–AI Co-Agency
- Applied Practice & Innovation
- Ethics, Equity & Impact
- Decision-Making & Governance
- Reflection, Learning & Renewal
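Because the same six lenses recur across every tool, they are natural candidates for a shared encoding in client-side code, so that a profile exported from one tool stays readable by the others. A hypothetical TypeScript version:

```typescript
// Hypothetical shared encoding of the six framework domains, so a
// profile exported from one tool stays readable by the others.

const FRAMEWORK_DOMAINS = [
  "Awareness & Orientation",
  "Human–AI Co-Agency",
  "Applied Practice & Innovation",
  "Ethics, Equity & Impact",
  "Decision-Making & Governance",
  "Reflection, Learning & Renewal",
] as const;

type FrameworkDomain = (typeof FRAMEWORK_DOMAINS)[number];
type CapabilityProfile = Partial<Record<FrameworkDomain, number>>;
```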
These tools are well suited to:
- reflective team and leadership conversations
- curriculum and programme design / review
- governance, QA, and steering group discussions
- sense-making before scaling or formalising AI use
They are not:
- audits or compliance instruments
- benchmarking, ranking, or maturity models
- monitoring or surveillance systems
- automated decision-making or risk engines
- substitutes for institutional governance or professional judgement
All tools run entirely in the browser.
Depending on the tool:
- inputs exist only in the current session, or
- are stored locally in the user’s browser (e.g. localStorage)
No user data is transmitted, uploaded, or tracked.
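For tools that persist between sessions, the pattern is ordinary localStorage use with no network calls. A minimal sketch, assuming a hypothetical storage key:

```typescript
// Hypothetical sketch of local-only persistence: data stays in the
// user's browser via localStorage and no network request is made.
// The storage key is an illustrative assumption.

const STORAGE_KEY = "cloudpedagogy-capability-profile";

function saveProfile(profile: Record<string, number>): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(profile));
}

function loadProfile(): Record<string, number> | null {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Record<string, number>) : null;
}
```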
These tools are exploratory and framework-aligned.
They are provided for learning, reflection, and professional discussion.
They are not production governance systems or compliance software.
Responsibility for interpretation and any subsequent decisions remains with the user or adopting organisation.
This repository contains open-source software released under the MIT License.
CloudPedagogy frameworks, capability models, taxonomies, and training materials are separate intellectual works and are licensed independently (typically under Creative Commons Attribution–NonCommercial–ShareAlike 4.0).
This software is designed to support capability-aligned workflows but does not embed or enforce any specific CloudPedagogy framework.
CloudPedagogy develops open, governance-credible resources for building confident, responsible AI capability across education, research, and public service.