CloudPedagogy AI Capability Tools

A focused suite of lightweight, browser-based tools designed to support reflective, governance-aligned sense-making about AI capability — grounded in the CloudPedagogy AI Capability Framework (2026 Edition).

These tools are intentionally non-surveillance, non-benchmarking, and non-prescriptive.
They exist to help teams surface patterns, tensions, and questions about AI capability — not to automate decisions, score performance, or enforce compliance.


What this repository is

This repository is an index and launchpad for the CloudPedagogy AI Capability Tools.

It provides:

  • a clear overview of each tool
  • a recommended logical flow for using the tools together
  • direct links to live tools and their source repositories

If you are looking for the framework that underpins all tools, see:
https://github.com/cloudpedagogy/cloudpedagogy-ai-capability-framework


Recommended tool flow (capability-led)

The tools are designed to work together as a capability journey:

  1. Establish a baseline — shared understanding of current capability
  2. Interpret patterns — gaps, imbalances, and risk-relevant signals
  3. Stress-test resilience — how capability holds up under pressure or change
  4. Make capability visible — where it appears (or doesn’t) in programmes or structures
  5. Track signals over time — notice trends without surveillance or KPIs

Each tool can be used standalone, but the sequence above reflects the most coherent and defensible progression.


The tools

1) AI Capability Self-Assessment Tool

Purpose: Establish a reflective baseline

A static, browser-based self-assessment for exploring organisational AI capability across six domains.
Designed to support shared reflection rather than evaluation or audit.

Key characteristics

  • Framework-led and non-prescriptive
  • Explainable, rule-based interpretation (see the sketch below)
  • No accounts, analytics, or data transmission
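
To make "explainable, rule-based interpretation" concrete, here is a minimal TypeScript sketch of the idea: a small set of fixed, readable rules turns a domain profile into discussion prompts. The 1-5 scale, the thresholds, and the wording are illustrative assumptions, not the tool's actual rules.

    // Illustrative only: the domain names come from the framework, but the
    // 1-5 scale, thresholds, and wording below are hypothetical assumptions,
    // not the tool's actual rules.
    type DomainScore = { domain: string; score: number };

    function interpret(profile: DomainScore[]): string[] {
      const prompts: string[] = [];
      for (const { domain, score } of profile) {
        if (score <= 2) {
          prompts.push(`${domain}: early stage. What would a shared baseline look like here?`);
        } else if (score >= 4) {
          prompts.push(`${domain}: strong. Is this strength visible beyond the core team?`);
        }
      }
      return prompts; // prompts for discussion, not verdicts or scores
    }

    // Example: interpret([{ domain: "Ethics, Equity & Impact", score: 2 }]);

Because every rule is a plain, inspectable conditional, a reader can trace exactly why a given prompt appeared, which is what keeps the interpretation explainable rather than opaque.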

2) AI Capability Gaps & Risk Diagnostic

Purpose: Interpret gaps, imbalances, and fragilities

A browser-based diagnostic tool for interpreting capability profiles to surface gap signals, imbalance patterns, and risk-relevant tensions.

What it supports

  • Governance and QA discussions
  • Leadership and steering group sense-making
  • Identifying stabilising steps before scaling AI use

What it does not do

  • No compliance checks
  • No benchmarking or maturity scoring
  • No automated decisions or recommendations

3) AI Capability Scenario Stress-Test

Purpose: Explore resilience under disruption or scrutiny

An exploratory, browser-based tool for examining how an existing AI capability profile may hold up under plausible future scenarios or conditions of stress.

Typical scenarios include

  • Rapid AI uptake in high-stakes contexts
  • Regulatory tightening or audit scrutiny
  • Reputational or public incidents
  • Vendor disruption or loss of key expertise

This tool supports foresight and judgement, not prediction or policy automation.


4) AI Capability Programme Mapping Tool

Purpose: Make capability visible across structures

A browser-based tool for mapping modules, activities, or assessments against the six AI capability domains to support curriculum design, review, and QA conversations.

Key features

  • Domain tagging, using the six framework domains as lenses
  • Export to Markdown (QA- and committee-ready)
  • Import/export via JSON for portability (illustrated below)
  • Fully client-side, suitable for static hosting
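
To illustrate the portability point above, the sketch below shows one plausible shape for the exported JSON. The tool's real schema is not documented in this README, so every field name here is an assumption.

    // Hypothetical export shape; the tool's real JSON schema may differ.
    interface ProgrammeMap {
      programme: string;
      items: Array<{
        name: string;                  // module, activity, or assessment
        type: "module" | "activity" | "assessment";
        domains: string[];             // tags drawn from the six framework domains
        notes?: string;
      }>;
    }

    const example: ProgrammeMap = {
      programme: "Example programme",
      items: [
        {
          name: "Week 3 seminar",
          type: "activity",
          domains: ["Ethics, Equity & Impact", "Human–AI Co-Agency"],
        },
      ],
    };

    // Round-tripping through JSON keeps a map portable between sessions or colleagues.
    const exported = JSON.stringify(example, null, 2);
    const imported: ProgrammeMap = JSON.parse(exported);
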

5) AI Capability Signals Dashboard

Purpose: Notice aggregate patterns and trends over time

A lightweight, browser-based aggregate dashboard for examining how AI capability is developing across an organisation over time.

Design intent

  • No surveillance or monitoring
  • No KPIs, league tables, or performance metrics
  • Supports governance-aligned discussion and shared interpretation

The dashboard focuses on signals, trends, and tensions, not measurement or enforcement.
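
As a sketch of what aggregation without surveillance can look like, the TypeScript below computes a per-domain median across anonymous capability profiles, so no individual response is ever displayed. This is a hypothetical illustration, not the dashboard's actual method.

    // Hypothetical aggregation: a per-domain median across anonymous profiles.
    // The dashboard's actual method is not specified in this README.
    type Profile = Record<string, number>; // domain name -> score

    function medianByDomain(profiles: Profile[]): Profile {
      const out: Profile = {};
      const domains = new Set(profiles.flatMap((p) => Object.keys(p)));
      for (const domain of domains) {
        const values = profiles
          .map((p) => p[domain])
          .filter((v): v is number => v !== undefined)
          .sort((a, b) => a - b);
        const mid = Math.floor(values.length / 2);
        out[domain] =
          values.length % 2 === 1 ? values[mid] : (values[mid - 1] + values[mid]) / 2;
      }
      return out; // an aggregate signal, never an individual's profile
    }
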


Shared design principles

Across the suite, all tools are:

  • Capability-led — grounded in a six-domain framework
  • Interpretive — outputs are prompts, not verdicts
  • Governance-compatible — supporting decision hygiene, not automation
  • Privacy-respecting — no accounts, tracking, or data transmission
  • Lightweight and portable — suitable for static hosting (e.g. AWS S3)

The six domains function as lenses, not checklists:

  1. Awareness & Orientation
  2. Human–AI Co-Agency
  3. Applied Practice & Innovation
  4. Ethics, Equity & Impact
  5. Decision-Making & Governance
  6. Reflection, Learning & Renewal

What these tools are (and are not)

These tools are for

  • reflective team and leadership conversations
  • curriculum and programme design / review
  • governance, QA, and steering group discussions
  • sense-making before scaling or formalising AI use

These tools are not

  • audits or compliance instruments
  • benchmarking, ranking, or maturity models
  • monitoring or surveillance systems
  • automated decision-making or risk engines
  • substitutes for institutional governance or professional judgement

Data handling and privacy

All tools run entirely in the browser.

Depending on the tool:

  • inputs exist only in the current session, or
  • inputs are stored locally in the user’s browser (e.g. localStorage; see the sketch below)

No user data is transmitted, uploaded, or tracked.
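
For the localStorage case, here is a minimal sketch of the pattern: reads and writes go through the browser's localStorage API only, with no network requests. The storage key and function names are hypothetical.

    // Minimal sketch of local-only persistence. The storage key and function
    // names are hypothetical; each tool chooses its own. No network calls.
    const STORAGE_KEY = "ai-capability-profile"; // hypothetical key

    function saveProfile(profile: Record<string, number>): void {
      localStorage.setItem(STORAGE_KEY, JSON.stringify(profile));
    }

    function loadProfile(): Record<string, number> | null {
      const raw = localStorage.getItem(STORAGE_KEY);
      return raw ? (JSON.parse(raw) as Record<string, number>) : null;
    }
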


Status and intended use

These tools are exploratory and framework-aligned.

They are provided for learning, reflection, and professional discussion.
They are not production governance systems or compliance software.

Responsibility for interpretation and any subsequent decisions remains with the user or adopting organisation.


Licensing and scope

This repository contains open-source software released under the MIT License.

CloudPedagogy frameworks, capability models, taxonomies, and training materials are separate intellectual works and are licensed independently (typically under Creative Commons Attribution–NonCommercial–ShareAlike 4.0).

This software is designed to support capability-aligned workflows but does not embed or enforce any specific CloudPedagogy framework.


About CloudPedagogy

CloudPedagogy develops open, governance-credible resources for building confident, responsible AI capability across education, research, and public service.
