Our Methodology

How we measure what others assume

The Discernment Framework

Most organisations assume their people can exercise meaningful oversight of AI systems. We don't assume — we measure. Our methodology examines four interconnected dimensions of human discernment capacity.

Recognition

Can they see what's happening?

The ability to recognise when and how AI systems are influencing the decision context — including subtle framing effects, authority cues, and information filtering.

Evaluation

Can they assess what they see?

The capacity to evaluate AI outputs critically — distinguishing between well-reasoned recommendations and confidently stated errors, between genuine insights and systematic biases.

Agency

Can they act on their assessment?

The practical ability to override, modify, or reject AI recommendations when warranted — including psychological readiness to disagree with an "expert" system.

Accountability

Can they take responsibility?

Clear chains of responsibility for decisions made with AI involvement — knowing who decided what and why, and how to trace each decision back to an accountable human.

How we assess

The Discernment Snapshot uses three complementary methods.

Structured Interview

Guided conversation exploring how decisions actually happen in your context. Where does AI input come in? How is it weighted? What triggers override decisions? What's documented and why?

Mini-Simulation

Brief scenario exercise that surfaces real responses to AI influence under controlled conditions. Not a test — a diagnostic. How do you actually react when the system is confident and you're uncertain?

Method Demonstration

Live walkthrough of how we measure discernment — showing the concrete, reproducible chain from observation to assessment to recommendation. Proof that this isn't subjective opinion.

What makes this different

Not AI training

Training gives knowledge. Knowledge doesn't guarantee capacity. We measure whether people can apply what they know under real conditions.

Instead: capacity assessment

We test actual response patterns, not theoretical understanding. Can they recognise, evaluate, and act — not just recite principles?

Not compliance audit

Compliance checks whether you have policies. It doesn't test whether people can follow them when the AI sounds authoritative.

Instead: operational truth

We examine what happens in practice, not what the documentation says should happen. The gap between policy and behaviour is where risk lives.

Not generic assessment

Abstract frameworks don't capture context-specific vulnerabilities. "AI oversight" looks different in every decision environment.

Instead: context-specific assessment

We assess one bounded decision context at a time. The output reflects your specific situation, not generic best practices.

Theoretical foundation

The Discernment methodology draws on established research in cognitive science, human factors, and automation psychology — adapted for the specific challenges of superintelligent systems.

Automation bias research

Decades of evidence on how humans over-rely on automated systems, particularly when those systems are presented as expert or authoritative.

Decision support systems

Understanding of how advisory systems shape human judgment — and what conditions enable humans to maintain independent evaluation.

Cognitive load theory

Recognition that meaningful oversight requires cognitive resources. Overloaded humans default to system recommendations.

Organisational psychology

How responsibility diffuses in complex systems, and what structures preserve clear accountability chains.

See it in action

The Discernment Snapshot applies this methodology to one concrete decision context.

Learn about the Snapshot