
AI & Cyber-Enabled Fraud: Risk, Regulation & Resilience

An interactive course focused on identifying, preventing and governing the escalating risk of fraud driven by AI and cyber-enabled technologies, aligned with evolving 2026 UK and EU regulatory frameworks and supervisory expectations.


A one-day course presented over two half-day sessions in a virtual classroom

In-house pricing available – often more cost-effective for teams of 10+

Introduction to AI Ubiquity and Fraud Risks

  • AI's role in enabling fraud: ubiquity, untrained staff misuse, and "welcome mat" risks from LinkedIn/social media oversharing
  • Overview of 2026 threat landscape: cyber-enabled fraud as 67% of UK fraud (per NCA), rising ransomware, and AI vulnerabilities (NCSC warnings)
  • Dual perspectives: innovation opportunities vs. regulatory pitfalls under UK Government AI principles (no new bill yet)

Attack Vectors and Fraud Purposes

  • Beyond email: social media, WhatsApp, LinkedIn as entry points for spear phishing, blagging, and deepfakes
  • Fraud objectives: data exfiltration, financial gain, command and control - illustrated with 2026 cases (e.g., AI-enabled investment scams)
  • Interactive: Identifying red flags in professional contexts, per FCA Mills Review and ICO AI guidance

The Cyber Kill Chain: Stages and Defences

  • Breaking down reconnaissance, weaponisation, delivery, exploitation, installation, command/control, and actions on objectives
  • AI/cyber intersections: how generative AI accelerates each stage (e.g., automated phishing, deepfake reconnaissance)
  • Mitigation strategies: NCSC preparation guidance, ICO DPIAs for AI risks, and FRC AI use in fraud detection/audit

UK and EU Legislative Landscape

  • UK Cyber Security and Resilience Bill (committee February 2026): enhanced resilience for critical sectors
  • EU impacts: NIS2 transpositions, Cyber Resilience Act reporting (September 2026), revised Cybersecurity Act (January 2026 proposals)
  • Cross-border implications: data transfers, supply chain risks, and alignment with UK GDPR/DUAA

Guidance from Key Bodies and Regulators

  • Law enforcement: Report Fraud service (January 2026) for streamlined cyber/fraud reporting; Action Fraud updates on economic crime
  • Regulators: SRA on AI/cyber risks for solicitors; ICAEW PCRT on AI fraud in tax/accounting; FCA on AI-enabled fraud resilience; FRC on AI in audit/fraud detection
  • Bodies: Law Society Cybersecurity Toolkit (2026 updates); NCSC/ICO on AI vulnerabilities and data protection

UK Government AI Regulation and Fraud Intersection

  • Principles-based approach (February 2026 status): no dedicated bill, focus on ethical guidance and risk management
  • AI-specific fraud risks: agentic AI, hallucinations in compliance tools; governance per ICO updates
  • Forward-looking: balancing innovation with prevention, including staff training protocols

Practical Risk Management and Case Studies

  • Building resilience: risk assessments, AI due diligence, incident response aligned with cyber kill chain
  • Real-world scenarios: ransomware recovery, phishing defence, economic crime investigations
  • Tools: checklists for compliance (SRA/FCA/ICO), ethical AI frameworks (ICAEW/Law Society)

Interactive Q&A and Action Planning

  • Participant discussions: applying concepts to sector-specific risks (law, finance, accounting)
  • Key takeaways: 2026 compliance roadmap, resources (NCSC guides, Report Fraud portal)
  • Trainer’s insights: emerging threats and proactive strategies

Redcliffe’s course trainer has over a decade of experience advising clients on data protection and founded Digital Law in 2014 to offer legal and compliance guidance to organisations operating in the digital space. The trainer works with clients across the UK, Europe, the Middle East, North Africa, Asia, and the United States on data protection, GDPR and cyber security compliance, along with e-commerce, website compliance, software licensing, AI, blockchain, privacy, and Freedom of Information Act matters. The trainer has advised clients in the creative, digital, and retail sectors, as well as working with clients in the banking, insurance, and financial services sectors who are engaged in the supply of goods and services using digital technology.

He is co-author of the Cyber Security Toolkit for The Law Society of England and Wales, a practical compliance guide for law firms, and is also co-author of a GDPR practical compliance manual for law firms. He regularly speaks at conferences and presents webinars and podcasts for various organisations. A regular international speaker, he has presented at LegalTechTalk, Nordic Privacy Arena, European Legal Security Forum, Lawyer2050 Conference, Legal Geek, and British Legal Technology Forum. He also produces the Digital Law Podcast.

The trainer is a member of the Expert Advisory Board for the Security, Privacy, Identity, Trust and Engagement Network Plus (SPRITE+) and is a past Chair of the GDPR Working Group of The Law Society of England and Wales. He is also a past Chair of the Law Society’s Technology and Law Committee.

This interactive course will enable participants to:
  • Understand the ubiquity of AI and cyber-enabled technologies as enablers of fraud, including misuse by untrained staff and risks from oversharing on platforms like LinkedIn, WhatsApp, and social media
  • Identify evolving attack vectors beyond email - such as social engineering via professional networks, AI-driven deepfakes, and multi-channel phishing - while exploring fraud purposes (data theft, financial gain, command and control)
  • Apply the cyber kill chain model to dissect fraud stages, from reconnaissance to exfiltration, and implement preventive controls tailored to evolving threats like ransomware, spear phishing, blagging, and economic crime
  • Navigate key UK and EU legislation, including the Cyber Security and Resilience Bill (committee stage February 2026), NIS2 Directive transpositions, Cyber Resilience Act (reporting from September 2026), and revised Cybersecurity Act proposals (January 2026)
  • Incorporate guidance from law enforcement (Action Fraud/Report Fraud service, NCSC on AI vulnerabilities), regulators (FCA Mills Review on AI fraud, SRA on cyber/AI risks, FRC on AI in audit/fraud detection), and bodies (Law Society Cybersecurity Toolkit, ICAEW PCRT on AI in tax/fraud risks, ICO on AI data protection)
  • Assess UK Government AI initiatives, principles-based regulation, and their intersection with fraud prevention, including ethical AI use and governance frameworks
  • Develop practical risk management strategies, including staff training, due diligence on AI tools, incident response plans, and compliance with enhanced reporting requirements under Report Fraud (launched January 2026)

Participants will gain checklists, case studies, and the trainer’s expert insights to build resilient, compliant approaches amid 2026's regulatory and threat landscape.

This course is tailored for professionals in regulated sectors exposed to AI and cyber-enabled fraud risks, assuming basic familiarity with digital tools but focusing on 2026-specific compliance and threats. It is ideal for:

  • Solicitors, partners, and compliance officers (COLP/COFA) in law firms handling client data or transactions vulnerable to AI fraud (e.g., conveyancing, probate)
  • Accountants, auditors, and finance professionals (ACA/ICAS/ACCA members) in practice or in-house roles using AI for tax, audit, or reporting, per ICAEW/PCRT guidance
  • Compliance and risk managers in financial services (banks, fintech) navigating FCA Mills Review and cyber resilience expectations
  • In-house counsel and decision-makers in organisations adopting AI/cyber tools, where untrained staff or social media risks could enable fraud
  • Those overseeing fraud prevention, including MLROs/MLCOs responsible for SARs and enhanced due diligence amid ransomware/phishing surges
  • Professionals in high-risk sectors (law, accounting, finance) deploying AI tools without updated governance, risking SRA/ICAEW/FCA enforcement for inadequate supervision or fraud controls
  • Firms exposed to cyber-enabled fraud vectors (e.g., LinkedIn phishing, deepfakes), especially those without cyber kill chain-based defences, amid the NCA's finding that cyber-enabled fraud accounts for 67% of UK fraud
  • Those handling client data/transactions vulnerable to ransomware or economic crime, per ICO/SRA guidance on AI risks and DUAA alignment
  • Compliance leads updating policies for EU-impacting laws (Cyber Resilience Act September 2026), where non-compliance could trigger cross-border scrutiny
  • Any practitioner or firm lagging in AI literacy and training, as the FRC and ICAEW emphasise human oversight to mitigate hallucinations and bias in audit and fraud detection; failure here risks professional liability amid the Treasury Select Committee's critique of regulators' "wait-and-see" approach

This targeted, interactive course for UK professionals in regulated sectors addresses the escalating convergence of AI ubiquity, cyber tools, and fraud risks as of February 2026. Led by an expert trainer, it equips attendees to mitigate threats such as AI-powered phishing and ransomware while complying with evolving UK/EU frameworks, including the Cyber Security and Resilience Bill and the Cyber Resilience Act. Drawing on the trainer’s leadership and advisory expertise, the session examines how untrained staff misuse, social media oversharing, and multi-vector attacks amplify fraud - framed through the cyber kill chain - and provides actionable strategies aligned with guidance from the NCSC, ICO, SRA, ICAEW, FCA, FRC, and Action Fraud/Report Fraud. Participants explore real-world typologies (deepfakes, blagging, and economic crime) and leave with tools for risk profiling, governance, and ethical AI deployment in a high-threat environment.

The interactive format prioritises practical tools - risk assessments, staff training protocols, and compliance checklists - drawn from the trainer’s cross-sector advisory experience rather than theoretical overviews. This ensures participants not only understand emerging threats but can also implement governed responses aligned with SRA, ICAEW, FCA, and FRC expectations, making the course essential for those balancing innovation with fraud prevention in a post-Report Fraud era (launched January 2026).

Price: £1,590.00

Discounts available:

  • 2 places at 20% less
  • 3 places at 30% less
  • 4+ places at 40% less