Methodology
How we measure cyber behaviour.
Three perspectives, two indices, four pillars, five secure behaviours. Versioned, calibrated, and mapped to the regulatory frameworks your auditor cares about. This page covers the structure. The instrument itself (the question bank, scoring rules, weights, and framework clause mappings) is shared with customers under NDA.
Three perspectives
Three sources of evidence. One honest picture.
Instinct Lab triangulates data from leadership perception, staff experience, and observed behaviour. The composite gives you a complete, evidence-backed view of your security culture instead of the single-angle snapshot most awareness programmes produce.
01
Executive baseline
A structured survey capturing leadership's perception of security behaviours across the organisation. How they think the culture operates: the governance view.
Perception data
02
Staff baseline
A scenario-based survey measuring real decision-making, reporting instincts, and psychological safety. What people would actually do, not what they know they should.
Behavioural data
03
Observed behaviour
Real-world evidence from simulations, exercises, and operational data. How people behave when the pressure is real and nobody's watching. Currently in development, and designed as an architecturally first-class source from day one.
Evidence data
The two indices
One narrative. One evidence layer. Both, together.
Security Instinct Index
The narrative view. A composite across Engagement, Culture, Awareness, and Instinct, scored 0 to 100. It shows where your security culture actually sits, not where you think it does. Same staff data as the SBI, rolled up across the four pillars.
Security Behaviour Index
The evidence layer. Five measurable secure behaviours, each independently scored and each mapped to specific regulatory clauses. Scenario questions test what someone would actually do; self-assessment items are deliberately discounted, because the research is clear that people are unreliable raters of their own competence.
We publish two numbers by design, not a single composite. The divergence between them is the diagnostic: the SII tells you the story the data wants to tell. The SBI tells you what stands up to an auditor.
SII · Four pillars
The Security Instinct Index, broken down.
Engagement
Do people care enough to pay attention? Curiosity, participation, and emotional investment in security. Not just showing up.
Culture
How security shows up between the training sessions. Shared expectations, psychological safety, and whether people actually speak up.
Awareness
Can people recognise risk in real time? Pattern recognition, clarity, and confidence when something feels off.
Instinct
The speed and quality of behaviour under pressure. Risk recognition, decision-making, and reporting without hesitation.
Each pillar carries a calibrated weight in the SII composite, chosen to reflect what actually predicts secure behaviour under pressure. The weights are shared with customers as part of the instrument documentation.
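Mechanically, a pillar composite of this kind is a weighted average. The sketch below illustrates the shape of the calculation only; the equal weights are placeholders, not the calibrated Instinct Lab values (which are shared under NDA):

```python
# Illustrative sketch of an SII-style pillar composite.
# These weights are placeholders, NOT the calibrated production values.
PILLAR_WEIGHTS = {
    "engagement": 0.25,
    "culture": 0.25,
    "awareness": 0.25,
    "instinct": 0.25,
}

def sii_composite(pillar_scores: dict) -> float:
    """Roll four pillar scores (each 0-100) into one 0-100 index."""
    total = sum(PILLAR_WEIGHTS[p] * pillar_scores[p] for p in PILLAR_WEIGHTS)
    return round(total / sum(PILLAR_WEIGHTS.values()), 1)
```

With equal placeholder weights this reduces to a plain mean; the calibrated weights skew the composite toward the pillars that best predict behaviour under pressure.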
SBI · Five secure behaviours
What we actually measure people doing.
The SBI tracks five secure behaviours, each scored independently and each mapped to specific clauses across DORA, NIS2, NIST CSF, ISO 27001, CAF, and Cyber Essentials Plus. The same measurement supports both internal reporting and external evidence.
- 01
Risk Recognition
Can people see what's wrong before it goes wrong? Pattern recognition under uncertainty: spotting the unusual email, the off-script request, the colleague behaving out of character.
- 02
Secure Decision Making
When the situation is ambiguous and the clock is ticking, do people's defaults bend toward secure choices? Decisions under speed, authority, and pressure.
- 03
Reporting
Do people surface anomalies fast, even when they're not sure, even when it might be nothing? Reporting without hesitation is the difference between a near-miss and an incident.
- 04
Authentication & Credentials
Password hygiene, MFA, account separation, and credential handling. The unglamorous discipline that closes the most common attack paths.
- 05
Psychological Safety
Can people speak up without fear of being blamed? Without psychological safety, the first four behaviours quietly collapse. With it, the rest become possible.
SBI scoring distinguishes between scenario-based questions (what someone would actually do) and self-assessment items (how confident they feel). The two carry different weights in the composite, with self-assessment discounted to reflect the well-documented gap between perceived and demonstrated competence. Exact multipliers are shared with customers as part of the instrument documentation.
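The discounting mechanism can be sketched as a weighted mean over the two item types. The 0.3 multiplier below is a hypothetical placeholder chosen for illustration, not the real discount:

```python
# Illustrative sketch of scenario vs self-assessment weighting.
# The self-assessment discount here is a placeholder, NOT the real multiplier.
SCENARIO_WEIGHT = 1.0
SELF_ASSESSMENT_WEIGHT = 0.3  # hypothetical discount

def behaviour_score(scenario_items, self_assessment_items):
    """Weighted mean of item scores, with self-assessment discounted."""
    weighted = [(s, SCENARIO_WEIGHT) for s in scenario_items] + \
               [(s, SELF_ASSESSMENT_WEIGHT) for s in self_assessment_items]
    total_weight = sum(w for _, w in weighted)
    return sum(s * w for s, w in weighted) / total_weight
```

The effect: a respondent who scores well on confidence questions but poorly on scenarios lands closer to the scenario result, reflecting the perceived-versus-demonstrated competence gap.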
Banding
Four bands. Plain English.
Reactive · High behavioural risk
Behaviour is led by incidents, not instincts. Most programmes start here. The action plan focuses on the highest-consequence behaviours first.
Developing · Moderate behavioural risk
Good intent, inconsistent execution. The right things happen when someone is watching. Engagement and culture are usually the limiting factors.
Embedded · Low behavioural risk
Secure behaviour is the default for most people most of the time. Outliers are visible and addressable. The programme is delivering measurable value.
Instinctive · Minimal behavioural risk
Secure behaviour is automatic. People act safely under pressure, surface anomalies fast, and bring colleagues with them. Rare and worth defending.
Every score lands in one of these four bands, each carrying both a maturity label (Reactive through Instinctive) and a behavioural-risk label (high through minimal). Same number, two reads, depending on whether you're talking to L&D or to risk. Exact thresholds are shared with customers as part of the instrument documentation.
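Banding itself is a simple threshold lookup. The cut-offs below are hypothetical placeholders for illustration only; the real thresholds are part of the NDA instrument documentation:

```python
# Illustrative banding sketch. Thresholds are placeholders, NOT the real cut-offs.
BANDS = [
    (75, "Instinctive", "Minimal behavioural risk"),
    (50, "Embedded", "Low behavioural risk"),
    (25, "Developing", "Moderate behavioural risk"),
    (0, "Reactive", "High behavioural risk"),
]

def band(score: float):
    """Map a 0-100 score to (maturity label, behavioural-risk label)."""
    for threshold, maturity, risk in BANDS:
        if score >= threshold:
            return maturity, risk
    raise ValueError("score must be between 0 and 100")
```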
Alignment Gap
The most diagnostic number we publish.
The Alignment Gap is the difference between executive perception and staff perception of the same security culture. Executives often rate culture 15 to 25 points higher than staff experience it. That gap is invisible until you measure both sides, and it's where risk builds.
A large gap means trouble is hiding in plain sight. A small gap means the programme is honest with itself, even when the underlying score is low.
Aligned
Perception matches across the org. The programme is honest with itself.
Moderate
Some drift. Worth investigating which pillar is contributing.
Significant
Leadership and staff are telling different stories. Common before a culture-change programme. Usually fixable.
Material
Disconnect between leadership narrative and shop-floor reality. Often a precursor to a serious incident.
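The gap calculation is a signed difference between the two perception scores, classified into the four labels above. The magnitude thresholds in this sketch are hypothetical placeholders, not the instrument's real boundaries:

```python
# Alignment Gap sketch: executive perception minus staff experience.
# The gap-band thresholds below are placeholders, NOT the real boundaries.
def alignment_gap(exec_score: float, staff_score: float):
    """Return the signed gap and a plain-English alignment label."""
    gap = exec_score - staff_score
    magnitude = abs(gap)
    if magnitude < 5:
        label = "Aligned"
    elif magnitude < 15:
        label = "Moderate"
    elif magnitude < 25:
        label = "Significant"
    else:
        label = "Material"
    return gap, label
```

Keeping the sign matters: a positive gap is the common pattern (leadership over-rating the culture), while a negative gap, staff rating the culture above leadership, is a different diagnostic entirely.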
Versioning
Every score is tagged with the methodology version that produced it.
Cyber threats change. Our instrument evolves with them. When pillar weights, behaviours, or question banks shift, we publish a new methodology version and every score from that point forward is tagged with it.
Historical scores stay interpretable in their original frame. Your baseline from 2024 is comparable to your 2026 re-measure, because the version metadata tells you whether you're comparing apples to apples or whether part of the delta is instrument drift rather than behaviour change.
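One way to picture this guarantee is a score record that carries its methodology version as metadata, so any longitudinal comparison can first check whether the instrument changed between measurements. Field names here are illustrative, not the actual data model:

```python
# Sketch: every score tagged with the methodology version that produced it.
# Field names are illustrative assumptions, not the actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class VersionedScore:
    value: float
    methodology_version: str  # e.g. "2.1"
    measured_at: str          # ISO date

def directly_comparable(a: VersionedScore, b: VersionedScore) -> bool:
    """True only when both scores came from the same instrument version,
    i.e. any delta reflects behaviour change rather than instrument drift."""
    return a.methodology_version == b.methodology_version
```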
Want to see what the output looks like?
We'll share a redacted sample report so you can see how a real baseline lands: SII and SBI scores, banded behaviours, the Alignment Gap, framework-mapped evidence, and the AI-generated recommendations layer.