About Discernment

Measuring what regulations assume

What we see

AI systems are no longer tools you pick up when convenient. They are collaboration partners in their own right, often the most knowledgeable voice in the room on any given topic.

AI experts analyse at beyond-PhD level. Personal assistants know your thinking patterns better than you know yourself. Agentic teams don't just advise — they execute. Embedded systems determine what information you see before you start thinking.

The question isn't whether AI influences human decision-making. It already does, across all four layers. The question is whether people develop the capacity to remain clear in that reality.

That capacity is discernment: the ability to see what's happening in the interaction — even when the system you're working with is more knowledgeable than you, knows you better than you know yourself, and can persuade in ways you don't notice.

See

Recognise influence patterns as they happen

Weigh

Know your own reactive patterns under pressure

Decide

Take responsibility for choices shaped by AI

Check

Apply concrete practices that put steering back in your hands

Our approach

Discernment combines three disciplines that rarely come together.

Deep understanding of human influence dynamics. Over twenty years of experience working with people and groups in complex situations — across therapy, facilitation, and organisational advisory. The core insight: people don't change through information. They change through awareness of patterns in the moment.

Building capability with AI systems. Not just talking about AI, but working with it: developing and testing prototype chains, and understanding how agentic workflows function from the inside.

Systems thinking at civilisation scale. Work spanning social movements, consciousness development projects, and frameworks for human evolution in relation to technology. This work isn't just B2B services; it's preparation for a world in which we routinely collaborate with systems that surpass us.

Mission

Measure and strengthen the human capacity to remain clear, responsible, and sovereign when working with AI systems that can surpass human cognition.

Why this matters beyond business

International cooperation for sustainable development requires trust, a shared picture of reality, and sound decisions across borders. AI influence affects all three.

Shared decisions

When international partners each consult their own AI experts, systematic biases can produce diverging pictures of reality, even when both sides work from the same data.

Trust erosion

When it isn't transparent how AI systems shaped a position, it becomes harder to trust each other's stances as authentically human.

Cognitive sovereignty

Countries and organisations without the capacity to recognise AI influence become dependent on those who supply the systems.

Human oversight is also an international cooperation question: how do we keep human responsibility and defensible choices intact in a world of cognitive asymmetry?

Start with one decision context

The Discernment Snapshot measures where you stand and shows you what to do about it.

See the product