25+
Years in cyber security and assurance

Control-based AI governance and assurance for organisations that cannot afford to get it wrong.

I help boards, CISOs and AI leaders turn AI principles into operational controls, evidence and audit-ready assurance across the AI lifecycle.

Flagship engagement
AI Controls and Assurance Readiness Review

A structured, evidence-driven review of your AI use cases, data and controls, producing a clear view of what is in place, what is missing, and what you can defend with confidence.

Aligned to
NIST AI RMF • ISO/IEC 23894 • ISO/IEC 42001 • CSA AICM • Zero Trust

Control coverage

Baseline and gap analysis across governance, data and security controls, mapped to your AI lifecycle stages.

Evidence and auditability

What you can prove today, what needs strengthening, and what artefacts regulators and auditors expect.

12-month roadmap

A prioritised, practical plan with ownership and sequencing, aligned to business delivery, not bureaucracy.

Outcome: board-ready view of AI control posture
Outcome: assurance narrative you can evidence
Outcome: prioritised remediation roadmap
Independent assurance for boards, CISOs, AI leads and regulators
Controls first. Evidence always. Practical delivery without theatre.
Focus: AI governance, security, assurance and auditability
Typical clients: Government, critical infrastructure, financial services, health, AI vendors
Services

What I help you do

I turn AI ambitions into operational controls, measurable evidence and assurance that stands up to scrutiny.

AI controls assessment and evidence mapping

A structured review of your AI use cases, data and governance controls, focused on what you can evidence today.

  • AI system and use-case scoping with risk tiering
  • Control baseline and gap analysis across the AI lifecycle
  • Evidence inventory and auditability review
  • Clear prioritisation of gaps by risk and impact
Outcome: control posture you can defend, not just describe

Secure AI design and Zero Trust patterns

Architecture and design support for AI-enabled systems, with identity, segmentation and data controls built in.

  • Threat modelling for AI services and LLM integrations
  • Access control and segregation for AI workloads and data
  • Secure RAG and knowledge store patterns
  • Hardening of data pipelines and inference environments
Outcome: AI that is secure by design and monitorable in operation

Assurance readiness and independent review

Independent assurance focused on control effectiveness, evidence strength, and stakeholder confidence.

  • Pre-deployment and change-impact assurance
  • Safety, security and governance reviews for AI use cases
  • Misuse and adversarial scenario testing for LLMs
  • Assurance narratives for boards, customers and regulators
Outcome: evidence based assurance you can share with confidence
Why work with me

What makes this approach different

Many AI specialists lack security depth. Many security teams are new to AI. I bridge both, with controls and evidence as the common language.

Controls, not slogans

I focus on operational controls, evidence and auditability, not generic principle statements. You get artefacts that support decisions and stand up to scrutiny.

Security depth applied to modern AI

I combine formal Trustworthy AI training with decades of security architecture and assurance work. This allows me to connect model behaviour, data, infrastructure and governance in one view.

Practical and regulator friendly

My work is grounded in what risk teams, auditors and regulators actually expect to see, so the evidence you present maps directly to their expectations.

Credentials at a glance

I am a practitioner first. Certifications support the experience rather than replace it.

Training and capability building

If you need to build internal capability in AI governance, safety and assurance, I also lead the Institute of Trustworthy AI (TITAI).

TITAI is focused on practical, control-led Trustworthy AI for boards, risk and compliance leaders, security and data professionals, and career switchers entering AI governance roles.

Training is delivered separately from advisory and assurance engagements to preserve independence, while applying the same lifecycle thinking and control logic.

Visit TITAI training site
Example engagements

Where I add immediate value

These examples show the type of work I do. Each engagement is tailored to the organisation, its risk appetite and its stage of AI adoption.

AI risk framework for a regulated enterprise

A large organisation needed a coherent way to understand and govern AI risk across multiple business units.

  • Defined an AI risk taxonomy and governance model
  • Mapped AI controls into existing security and risk frameworks
  • Created AI risk cards for priority use cases
  • Designed human oversight and escalation paths
Result: AI decisions tied directly to risk and accountability

Secure LLM adoption for sensitive data

A team wanted to use large language models with internal documents without creating a data breach risk.

  • Assessed data, threat and regulatory constraints
  • Designed safe patterns for retrieval-augmented generation
  • Recommended encryption and access controls around knowledge stores
  • Defined operational monitoring and incident paths
Result: useful AI features within an acceptable risk envelope

Zero Trust architecture for AI driven services

An organisation wanted to align its AI platform with Zero Trust principles without blocking delivery.

  • Mapped current environment and trust boundaries
  • Defined a target architecture for AI services
  • Integrated identity, device, network and data controls
  • Created a staged roadmap that matched delivery cadence
Result: a clear path to secure AI without a big-bang rewrite

Trustworthy AI training for security and data teams

Security teams and data scientists needed a common language for AI risk and safety so they could work together.

  • Delivered workshops on AI threats, controls and failure modes
  • Introduced a shared set of patterns and anti-patterns
  • Embedded AI aware checks into existing pipelines and processes
  • Provided reference materials and templates for ongoing use
Result: teams aligned on AI risk with clear next steps
Sectors and engagement

Who I work with and how

I typically support organisations that treat AI as safety- and mission-critical. That often means regulated or high-impact environments.

Sectors that benefit most

  • Critical national infrastructure and utilities
  • Government and public sector bodies
  • Health and life sciences
  • Financial services, insurance and fintech
  • Technology vendors and AI start ups
Role: independent adviser, assessor or consortium partner

Ways of working

  1. Discovery and strategy: short, focused work to understand your AI landscape, risk drivers and constraints.
  2. Architecture and design: co-designing secure and safe AI patterns and reference architectures with your teams.
  3. Assurance and review: independent assessment of AI systems, suppliers and changes at key decision points.
  4. Ongoing advisory: retained support for boards, CISOs, AI leads and risk teams as your AI portfolio grows.
Self-assessment

AI and unstructured data safety self-assessment

In a few minutes you can get an independent view of how well your organisation is protecting unstructured data and AI-enabled workflows. The assessment runs in the browser and produces a report you can share with security, risk and leadership teams.

What you will get

The Unstructured Data Security and AI Safety Assessment (UDSA) provides a structured view of your current posture and highlights where to focus next.

  • Coverage across data discovery, access control, protection and governance
  • Specific focus on how AI and unstructured data interact in your environment
  • Domain scores and clear explanations of key risks
  • Prioritised recommendations you can act on quickly

Run the assessment

The assessment normally takes around 10 to 15 minutes to complete. It is designed for security, data and risk leaders who need a practical starting point.

Start the assessment

I created the UDSA engine to help organisations understand how AI and unstructured data introduce new safety, security and compliance risks before they become incidents.

Start a confidential conversation

If you are planning a significant AI initiative or need independent assurance on existing systems, I am happy to explore how I can help. There is no obligation.

Typical engagement: Discovery call, followed by a short options paper outlining possible paths.
Location: United Kingdom, working with clients across the UK and internationally.
Email: arinze@okosieme.com
Website: https://okosieme.org
Name
Email
Organisation
Role
What would you like to discuss?
When you submit this form, your message will be emailed securely to me via Formspree.