Adam Tornhill is a programmer who combines degrees in engineering and psychology. He’s the founder of CodeScene, where he designs tools for code analysis. Adam is also the author of multiple technical books, including the best-selling Your Code as a Crime Scene and Software Design X-Rays. Adam’s other interests include modern history, music, retro computing, and martial arts.
Have you seen early productivity gains from AI, only to watch them disappear under growing complexity and production incidents? You're not alone. There's a common reason: many production systems already struggle with technical debt. When AI agents enter the development loop, that debt becomes a multiplier. Poor-quality code not only increases defects and costs; it also dramatically raises AI risk by driving high breakage rates, turning promising AI agents into legacy-code generators rather than genuine help.
Fortunately, there's hope on the horizon. In this talk, Adam Tornhill shows how organizations can achieve both speed and quality with AI. Backed by large-scale empirical studies on AI coding and developer productivity, we separate what works from what doesn't in real-world systems. Building on these findings, we then look at a practical framework for driving and sustaining AI-friendly code at scale. The AI revolution is here. Is your code ready?
Code quality is an abstract concept that fails to get traction at the business level. Consequently, software companies keep trading code quality for new features. The resulting technical debt is estimated to waste up to 42% of developers' time, causing stress and uncertainty, as well as making our job less enjoyable than it should be. Without clear and quantifiable benefits, it's hard to build a business case for code quality.
In this keynote, Adam takes on the challenge by tuning the code analysis microscope towards a business outcome. We do that by combining novel code quality metrics with analyses of how the engineering organization works with the code. We then take those metrics a step further by connecting them to values like time-to-market, customer satisfaction, and roadmap risks. This makes it possible to a) prioritize the parts of your system that benefit the most from improvements, b) communicate quality trade-offs in terms of actual costs, and c) identify high-risk parts of the application so that we can focus our efforts on the areas that need them the most. All recommendations are supported by data and brand-new real-world research. This is a perspective on software development that will change how you view code. Promise.
In this workshop, you learn novel analysis techniques that support both technical and organizational decisions around your codebase. The techniques use data from the most underused information source in our industry: the version-control system. Combined with metaphors from forensic psychology, you learn to analyze version-control data to:
* Identify the code that’s most expensive to maintain in systems with millions of lines of code.
* Detect architectural decay and learn to control it.
* Analyze different architectures such as layers and microservices.
* Measure how the organizational structure influences code quality and knowledge distribution.
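The first technique rests on a simple idea: files that change most often tend to be the most expensive to maintain. As a minimal sketch (not CodeScene's actual analysis, which combines change frequency with code complexity and more), you can count change frequencies straight from version-control data. The function below assumes you feed it the output of `git log --name-only --pretty=format:`, which lists one touched file per line:

```python
from collections import Counter

def change_frequencies(git_log_output: str) -> Counter:
    """Count how many commits touched each file -- a rough hotspot proxy.

    Expects the output of: git log --name-only --pretty=format:
    (one file path per line, with blank lines between commits).
    """
    files = [line.strip() for line in git_log_output.splitlines() if line.strip()]
    return Counter(files)

# Example with a tiny two-commit log; the most frequently
# changed files are the hotspot candidates:
sample_log = "src/core.py\nsrc/util.py\n\nsrc/core.py\n"
for path, n in change_frequencies(sample_log).most_common():
    print(f"{n:3d}  {path}")
```

In a real repository you would pipe the git command's output into this function and then cross-reference the top entries with a complexity measure, which is exactly the kind of analysis the workshop automates.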
During the workshop, you get access to CodeScene, a behavioral code analysis tool that automates the analyses and supports the practical exercises. Participants are encouraged to take this opportunity to analyze their own codebase and get specific takeaways about their system.
Prerequisites
The workshop is language-neutral. The target audience is developers, architects, and technical leaders. While we won’t write any code during the class, you need to be comfortable reading code.
Style
Hands-on, in front of your laptop. The masterclass is based on the instructor's books Your Code as a Crime Scene (2024) and Software Design X-Rays (2018).
AI agents don’t struggle with syntax. They struggle with missing intent, non-expressive code, and surprising dependencies. Historically, we were supposed to write code for human readers, code that fits our cognitive limits and supports collaboration. In reality, much of our industry has fallen short.
That comes back to bite us.
When AI agents enter the development loop, they amplify those same problems. Where a human developer will ask questions and seek clarification, an AI often proceeds without it, making its best guess from patterns in code that was never designed to be unambiguous.
Code that is hard for humans to understand becomes unreliable for AI.
In this talk, Adam Tornhill shows how to turn that around. You’ll learn the key principles behind AI-friendly code and apply practical AI-assisted refactoring patterns that make those principles concrete. The focus is not on generating more code, but on improving the code you already have so AI becomes reliable instead of risky. All recommendations are grounded in AI research and cognitive psychology.