SteelCon

2026

A Talk About Giving Talks (PG)

Rory McCune

We’re getting a bit meta, with a talk about giving talks, because great research & ideas deserve great presentation.

Have you ever thought about giving a talk at a conference or meet-up, but haven’t been sure where to start? Are there things you wish you’d known before getting up on stage for the first time? This talk will answer those questions for you, giving you the information you need to get started speaking at meet-ups and conferences.

Based on 16 years of conference presentation delivery, I’ll be sharing what worked for me and also what didn’t, and providing some practical examples of how you can help ensure your first or next talk is a success.

Adversary

Neil Lines

This talk will explore the modern challenges observed in red teaming assessments, with a particular focus on bypassing the protections commonly associated with modern environments. Much has changed over the years, with a significant shift towards cloud services and a continued decline in traditional Active Directory managed environments, requiring different approaches and ways of thinking.

The days of hunting for accounts belonging to the Domain Admins group are numbered, as modern environments often rely on a combination of cloud providers, conditional access policies, and integrated security controls, creating new obstacles.

A positive outcome of this has been a strong drive among security vendors to keep pace with client requirements, which on paper appears to make the role of simulated adversaries more challenging. However, these advancements also introduce new assumptions and potential gaps. This has, in turn, created more opportunities for those who are divergent, encouraging creative approaches to identifying weaknesses and navigating modern defensive controls.

Why Your Pen Test Didn’t Matter

Tim Holman

Every year, organisations commission penetration tests, receive detailed reports, tick the compliance box, and proudly declare victory. And yet the same vulnerabilities reappear, incidents still happen, and nothing feels meaningfully safer.

This talk is about what happens after the pen test.

Drawing on real-world experience delivering penetration tests and compliance-driven security assessments, this session explores how solid, technically accurate findings are quietly neutralised once they leave the tester’s hands. We’ll look at how risk ratings are misunderstood, how remediation guidance gets diluted, and how organisational incentives reward closure over fixing the right things.

We’ll examine common failure points including report inflation, evidence-driven security theatre, ownership gaps between security, engineering and compliance teams, and the uncomfortable truth that many organisations do exactly what the report asks and still remain vulnerable. Along the way, we’ll show examples of findings that were “accepted”, “closed”, or “mitigated” without any meaningful reduction in attack surface.

This isn’t a talk about how to hack better, or how to write prettier reports. It’s a candid look at why penetration testing often fails to change security outcomes, how testers sometimes contribute to the problem without realising it, and what actually makes findings actionable to the people expected to fix them.

If you’ve ever wondered why yesterday’s critical finding is today’s accepted risk, this talk is for you.

‘Cybersecurity, cyberpunk, and dreams of electric sheep’: stories and narratives in cybersecurity

Shannon Clerkin

Between 2012 and 2015, there was a strange micro-trend among cybersecurity vendors in which they created short Hollywood-style cyberpunk films to advertise their SIEM tools and network scanners. I became utterly obsessed with these short videos and spent an entire summer analysing them through the lens of Russian folklore structuralism and comparing them to classic cyberpunk films and literature. What do Neo, Winterfell, and the Gernsback Continuum have to do with EDRs? Let’s find out!

This talk explores the influence of cyberpunk and other forms of narrative media on the cybersecurity industry. From here, we consider how wider discussions around framing, narrative, and storytelling in cybersecurity and hacker culture can deepen our understanding of cybersecurity and its present and future role in an increasingly digitised society. 

Ey I Up, Pal: Dead Fast, Reyt Spurious, and a Wee Bit Brutal

Samantha Swift & David McKenzie

The screaming AI hype machine wants you to believe that the security rules have all changed. Spoiler alert: most haven’t. Sensitive data is still sensitive, humans remain predictably and wonderfully squishy, and just saying “no” is very much still a fast train to Riskville.

In this session, we’ll slice through the AI security noise with zero patience for buzzword soup: myth-busting overhyped narratives, sniffing out snake oil from a mile off, and tackling the messy reality of “Look at me, Mum, I’m a DEvLeLoPisTeR” humans wielding AI like they just invented fire. In an igloo.

Expect a high-energy session packed with real-world war stories, vendor reality checks, and practical advice. Walk away healthily sceptical and merrily equipped with clearer risk lenses and regurgitable examples to drive “yes, but” conversations.

There’s even a smattering of audience participation, and the promise of a live demo or two.
Experience level: come as tha are – practitioners to leads, n00bs to veterans, no AI PhD needed.

Game Cheats vs Anti-Cheats: A Journey Through Windows Internals

Niharika

Game cheats and malware have more in common than you’d think. Techniques like code injection, memory manipulation, API hooking, and debugging sit at the core of both worlds. Cheaters and malware authors are pulling from the same playbook, applying similar methods to manipulate software behaviour for very different ends.

This talk explores that overlap through hands-on experiments with PwnAdventure3, an open-source game. Four cheat techniques are demonstrated, each paired with a corresponding defence built using Windows internals, along with an honest look at where those defences fall short and how they might be bypassed.

Hack to the Future: A History of Hacker Culture

Saskia Coplans

In the 80s we built the myths and texts. In the 90s we built the tribes and conferences. In the 2000s we split into industry, activism, and criminal subcultures. This is the story of how hacker culture grew up, sold out, fought back, and still shaped the security world you work in today.

Ever been confused when the olds are chuntering on about McAfee’s bath salts, not McAfee antivirus? Thought Emmanuel Goldstein was just a film character, or that Captain Crunch was only a breakfast cereal? Then this talk is for you.

We’ll walk the line from phone phreaking, BBS boards, 2600 meetups, and text-file culture through DEF CON, HOPE, CCC, media myth-making, LulzSec, WikiLeaks, and the professional security pipeline we work in now. Along the way we’ll decode the names, memes, folklore, and internal arguments that shaped the scene, not the sanitised corporate version, and not the tabloid panic version.

This industry loves nostalgia, and yes, a bit of hero worship. So we’ll look at where modern security culture really came from, the good, the bad, the ego, the ethics, the legal crackdowns, the community building, and what got dropped when money and institutions moved in.

If you’ve ever wondered why security culture looks the way it does, why old arguments keep resurfacing, and why half the room still speaks in references nobody explains, this is your decoder ring.

Nutcase Running The Asylum: Overengineer it like you'd steal it

Andy Gill

I’ve spent most of my career finding archaic and creative ways to circumvent systems. So naturally, when I built my own training platform and wrote a course (have I mentioned I wrote a course?), I became the most paranoid person imaginable about protecting it.

Because nothing says security like an overengineered platform with ridiculous layers of obfuscation, anti-piracy, and watermarking. We’re talking a system that had DRM experts saying “this is a little bit insane; were you on crack when you thought this up?”

I’m going to show some of my hand: what it’s like to architect and design your own platform from the ground up, the challenges, the mistakes, and the lessons learned the hard way. Fair warning: it’s a typical Andy talk, so expect some colourful language and questionable life choices along the way.

Responsible disclosure and legal risk

Richard Hanstock

Society needs vulnerability researchers, but the law treats them like criminals. The Computer Misuse Act 1990 was written to protect telephone exchanges: 36 years later, it still governs the work of the people keeping our critical infrastructure secure. This talk maps the legal risks facing security researchers in 2026 — not just the CMA, but researchers’ wider legal attack surface: interception, GDPR, confidentiality, economic torts, contract — drawing on real examples of the power imbalance between researchers and the vendors they’re trying to help. The talk outlines what other nations are doing to close the gap, and closes with a practical mitigation toolkit, making the case that ‘just give up’ shouldn’t be the best advice the industry can offer.

The Boxes Changed Shape: 5 Years of Security Lessons Or "The Choose-Your-Own-Adventure That Broke Reality"

Dave McKenzie

In September 2021 at BSides Newcastle, I accidentally created ‘Choose Your Own Keynote’, a live branching talk that evolved over a couple of outings, adding more and more topics: entry-level security questions, war stories, and ethics, with equal parts truth and absurdity. Plus Nerf, plus magic.

Five years later, some of those questions feel adorably naïve; for others, the answer changed, the world changed around it, or the world decided it doesn’t matter anymore.

This revival revisits some of the original questions, like “certs or experience?”, “Blue team or red?”, “Best tools?”, “How many incident plans?”, and “AI will take your job?”, and asks: does it matter anymore, and if so, is the answer different?

We’ll look at how the industry moved from siloed roles and tool wars to AI risk management, supply chain resilience, evolving legalities and compliances, and burnout engineering.

More importantly, I examine what changed in me, you & us: family life, career scars, neurodiversity challenges, and the quiet realisation that judgement matters more than tactics, people more than rules.

What to expect:

Audience participation to help navigate the updated paths through SOC evolution, IR automation, community building in the ex-X and LinkedIn era, and “AI doesn’t want your job — it wants your data governance framework / identity / more tokens please”

We can laugh at what aged poorly (looking at you, ‘NERF incident response’), “marvel” at what became mission-critical, and rebuild our view of what reality means in 2026 and beyond.

The takeaway:

The new security choices aren’t about picking teams or tools. They’re about building antifragile careers amid constant change: technical depth, business context, and enough self-awareness to work sustainably and survive as a human.

No holograms, just battle-tested perspective from someone who’s broken and lived all the answers.

Choose your own path. The industry already has.

Threat-Informed Defence: Designing Security for Adversaries, Not Auditors

Kirk Maddison

It’s 2026. Projects still “go live” with accepted risks because the last-minute “PEN test” found issues too late to fix. Or projects are delayed while teams scramble to satisfy compliance requirements. Sound familiar? It doesn’t have to be this way.

Most organisations ask the important question: what are we building? But many stop there and neglect to consider: what could go wrong? What are we going to do about it? And, did we do a good enough job?

This talk explores how threat modelling can help you move beyond checkbox security and guesswork. It will show how grounding security thinking in real adversary behaviour and business context helps teams collaborate earlier, influence design decisions, choose security controls intentionally, and scope testing that reflects how systems are actually attacked.

The objective of this talk is not to dismiss compliance frameworks, but to put them in their proper place. When security decisions are grounded in real threats and robust processes, compliance becomes the natural byproduct of effort rather than the destination itself.

This talk will share practical ways you can introduce threat-informed thinking into your work, so that you can articulate how you’re defending against your real adversaries, rather than your auditors.

What Trading Systems Taught Me About Breaking (and Defending) Production Infrastructure

Mert Satilmaz

Before I was a security engineer, I built market-making engines and low-latency trading systems in C++. Order management, execution routing, exchange connectivity, all running in environments where a microsecond of latency or a mishandled edge case could cost real money in real time.

That background broke my brain in ways that turned out to be useful for security. This talk covers the specific lessons from building trading infrastructure that changed how I approach security: how thinking about latency-sensitive systems reveals timing-based attack surfaces that scanners will never find; why exchange protocol parsing (FIX, binary protocols, custom wire formats) teaches you more about parsing vulnerabilities than any web app pen test; how trading systems’ state machines under adversarial market conditions map directly onto authentication and session management flaws; and why financial-grade audit trails and event sourcing solve the log tampering problem that most SIEM deployments ignore.

Not a finance talk. A talk about what happens when someone trained to think in microseconds and failure modes starts looking at your infrastructure.

Where do we go from here? Towards a philosophy of cyber security

Michelle D'israeli

Cyber security is on a precipice. Ever-changing technology and global geopolitics are pushing us ever closer to the edge, to be flung down to join those already pushed off by major breaches and surreal world events. In this talk, we give an overview of the challenges faced by cyber security professionals, their root causes, and how building a philosophy of cyber security could help us move forward and learn, against all odds, to fly instead of fall.

Expect discussion on everything from vulnerability management to AI to history, psychology and more. This talk offers a better understanding of our shared calling and a few new tools to help address our day-to-day challenges. Above all else, you will come away knowing you are not alone in the challenges you face, and that together we can build a better cyber world!

This talk is only the start of the journey to define a philosophy of cyber security, and audience participation will be highly encouraged. The Chatham House Rule will apply – share the information you receive, but do not reveal the identity or background of who said it.

Why does my LTE USB modem have 512MB of memory? Exploring the world of unbranded 4G USB modems

Aaron Thacker

Did you know that most unbranded £10 4G modems are actually headless Android phones in disguise?

This hardware talk deep-dives into how they work, what the headless “Android” device looks like, and what goes on behind the scenes, alongside rooting and reflashing the devices for fun and profit.

The talk will explore current research and techniques to create custom firmware to turn those cheap (£10) gadgets into hardware for social engineering and red team jobs.

If you’ve ever needed to demonstrate a foreign device attack then join me as I tear apart these devices and subsequent firmware to make the cheapest* dead drop devices ever, capable of exfiltrating C2 traffic, injecting keystrokes and anything else which could be done with Linux + USB OTG + Wi-Fi + a 4G modem!

By the end of this talk, you’ll understand how these USB modems really work and why this overlooked hardware is far more powerful and interesting than it looks.

* Maybe – don’t quote me on this

Comparing Industrial Safety to Pentesting

Michael J Ranaldo

We often think that the only thing which can go wrong in a factory is a machine taking an arm off. But we’ve automated that now. In this talk we look at the safety of maintenance and testing in industrial settings and compare it to how we approach penetration testing, with an eye on the rise of low-context testing, including automated, assisted, and off-shore work.

Stop, get some help. Design your implants properly

Brandon McGrath

Building a loader is not as simple as cramming in TTPs and hoping a callback returns. In 2026, nearly every stage of an implant’s lifecycle generates telemetry that an EDR can observe.

In this talk, I discuss the tradeoffs involved in building implants across core red team phases. Using a decision-based flow, we walk through the considerations that shape a loader at each stage, and how these may change depending on context, environment, and telemetry visibility.

At each step, we examine what decisions matter and what telemetry those choices may expose to an EDR.

Rather than cataloguing evasion tricks, each decision point is mapped to the Windows and EDR telemetry it produces, helping operators better understand the tradeoffs between capability and observable artifacts.

So you want to build a pentesting company

Kurtis Baron

So you want to build a pen testing company.

I built Fidus Information Security from nothing to a 50-person consultancy and sold it. The acquisition changed everything we’d worked to build.

So I started again.

This talk is not about hacking. It’s about the stuff nobody tells you when you decide to go it alone. The hiring decisions that keep you up at night. The clients who don’t pay. The staff you lose to bigger firms with deeper pockets. The moment you realise the books don’t balance themselves.

I’ll cover the whole journey: the first engagement booked on goodwill and a spreadsheet, building the team, the exit, and what the sale actually looked like from the inside. It’s not a post-mortem. It’s a case for why I’m doing it all over again. The pen testing industry is still broken. Commoditised, full of scan-and-report shops charging enterprise rates, badly in need of firms that actually give a damn.

If you’ve ever thought about going independent, starting a consultancy, or just want to know what running one looks like behind the numbers, come along.

What you’ll take away:
– What hiring looks like when you’re up against firms with 10x your budget
– How to keep good people in an industry where everyone’s being poached
– The financial reality, it’s not all a bed of roses
– What an acquisition actually does to the thing you built
– Why the industry needs more independent firms, not fewer

Break Things and Move Fast: Visualising Memory Corruption by Beating Video Games

Keith Learmonth

Memory corruption is still one of the major causes of computer vulnerabilities; whether they’re buffer overflows, use-after-frees, double frees or otherwise, poor memory handling has kept the CVE program in stock for decades. However, compared to other major vulnerability types such as command injections or missing authentication, how these handling flaws become fully fledged code execution vulnerabilities can be hard to visualise for those who don’t live and breathe binary.

As with any other software, video games are also notorious for memory corruption issues. Fans have been finding new ways to break their favourite games for years, looking for tricks, cheats and exploits in order to beat the game as fast as possible. This talk uses glitches from a Nintendo 64 classic as a visualisation to explain how these vulnerabilities become exploits, and how the principles that allow you to turn a spider into a door have more sinister applications in real-world technology.