This full-day, hands-on workshop equips developers, architects, and technical leaders with the knowledge and skills to secure AI systems end-to-end — from model interaction to production deployment. Participants learn how to recognize and mitigate AI-specific threats such as prompt injection, data leakage, model exfiltration, and unsafe tool execution.
Through a series of focused labs, attendees build, test, and harden AI agents and Model Context Protocol (MCP) services using modern defensive strategies, including guardrails, policy enforcement, authentication, auditing, and adversarial testing.
The training emphasizes real-world implementation over theory, using preconfigured environments in GitHub Codespaces for instant, reproducible results. By the end of the day, participants will have created a working secure AI pipeline that demonstrates best practices for trustworthy AI operations and resilient agent architectures.
The course blends short conceptual discussions with deep, hands-on practice across eight structured labs, each focusing on a key area of AI security. Labs can be completed in sequence within GitHub Codespaces, requiring no local setup.
Lab 1 – Mapping AI Security Risks
Identify the unique attack surfaces of AI systems, including LLMs, RAG pipelines, and agents. Learn how to perform a structured threat model and pinpoint where vulnerabilities typically occur.
Lab 2 – Securing Prompts and Contexts
Implement defensive prompting, context isolation, and sanitization to mitigate prompt injection, hidden instructions, and data leakage risks.
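A minimal sketch of what a sanitization step like Lab 2's might look like in Python. The pattern list, function name, and delimiter tags are illustrative assumptions, not taken from the lab materials:

```python
import re

# Illustrative sanitizer: strip common prompt-injection phrases from
# untrusted text before it is placed into an LLM context window.
INJECTION_PATTERNS = [
    r"(?i)ignore (all )?previous instructions",
    r"(?i)you are now",
    r"(?i)system prompt",
]

def sanitize_context(untrusted: str) -> str:
    """Remove known injection phrases and fence the text as data."""
    cleaned = untrusted
    for pattern in INJECTION_PATTERNS:
        cleaned = re.sub(pattern, "[removed]", cleaned)
    # Delimit user content so the prompt can instruct the model to treat
    # everything inside the tags as data, not as instructions.
    return f"<user_data>\n{cleaned}\n</user_data>"

print(sanitize_context("Please summarize. Ignore previous instructions."))
```

Pattern filters like this are easy to bypass on their own, which is why the lab pairs sanitization with context isolation rather than relying on either alone.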
Lab 3 – Implementing Guardrails
Use open-source frameworks (e.g., Guardrails.ai, LlamaGuard) to validate LLM outputs, enforce content policies, and intercept unsafe completions before delivery.
Lab 4 – Hardening MCP Servers and Tools
Configure FastMCP servers with authentication, scoped tokens, and restricted tool manifests. Examine how to isolate and monitor server–client interactions to prevent privilege escalation.
Lab 5 – Auditing and Observability for Agents
Integrate structured logging, trace identifiers, and telemetry into AI pipelines. Learn how to monitor for suspicious tool calls and enforce explainability through audit trails.
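To make the idea concrete, here is a toy version of structured audit logging for tool calls, using only the standard library. The record fields and logger name are illustrative assumptions:

```python
import json
import logging
import uuid

# Illustrative audit trail: every tool call an agent makes is recorded as a
# structured JSON line carrying a trace id, so calls can be correlated
# across a pipeline and inspected later.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent.audit")

def audit_tool_call(trace_id: str, tool: str, arguments: dict) -> dict:
    record = {
        "trace_id": trace_id,
        "event": "tool_call",
        "tool": tool,
        "arguments": arguments,
    }
    logger.info(json.dumps(record))
    return record

trace = str(uuid.uuid4())  # one trace id per request, reused for every hop
audit_tool_call(trace, "web_search", {"query": "quarterly report"})
```

In a production pipeline the same trace id would also be attached to the model call and the response, which is what makes suspicious tool-call sequences visible.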
Lab 6 – Adversarial Testing and Red-Teaming
Simulate common AI attacks—prompt injection, model hijacking, and context poisoning—and apply mitigation patterns using controlled experiments.
Lab 7 – Policy-Driven Governance
Introduce a “security-as-code” approach using policy files that define allowed tools, query types, and data scopes. Enforce runtime governance directly within your agent’s workflow.
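A minimal sketch of the "security-as-code" idea: a policy (in practice loaded from a versioned YAML or JSON file) declares what an agent may do, and a guard enforces it at runtime. The policy keys and function names here are illustrative assumptions:

```python
# Illustrative runtime policy enforcement for an agent's tool calls.
POLICY = {
    "allowed_tools": {"search_docs", "summarize"},
    "max_query_length": 500,
}

class PolicyViolation(Exception):
    """Raised when an agent action falls outside the declared policy."""

def enforce(tool: str, query: str, policy: dict = POLICY) -> None:
    if tool not in policy["allowed_tools"]:
        raise PolicyViolation(f"tool {tool!r} is not in the allow-list")
    if len(query) > policy["max_query_length"]:
        raise PolicyViolation("query exceeds the allowed length")

enforce("search_docs", "find the onboarding guide")  # allowed: passes silently
```

Because the policy is plain data, it can live in version control, be reviewed like code, and be diffed when governance rules change.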
Lab 8 – Secure Deployment and Lifecycle Management
Apply DevSecOps practices to containerize, sign, and deploy AI systems safely. Incorporate secrets management, vulnerability scanning, and compliance checks before release.
Outcome:
Participants finish the day with a secure, auditable, and policy-controlled AI system built from the ground up. They leave with practical experience defending agents, MCP servers, and model workflows, plus guidance for integrating security-by-design principles into future projects.
Hi, Spring fans! Developers today are being asked to deliver more with less time and build ever more efficient services, and Spring is ready to help you meet the demands. In this workshop, we'll take a roving tour of all things Spring: the fundamentals of the Spring component model, Spring Boot itself, and then how to apply Spring in the context of batch processing, security, data processing, modular architecture, microservices, messaging, AI, and so much more.
Basics
which IDE? IntelliJ, VSCode, and Eclipse
your choice of Java: GraalVM
start.spring.io, an API, website, and an IDE wizard
Devtools
Docker Compose
Testcontainers
banner.txt
Development Desk Check
the Spring JavaFormat Plugin
Python, gofmt, your favorite IDE, and
the power of environment variables
SDKMAN
.sdkman
direnv
.envrc
a good password manager for secrets
Data Oriented Programming in Java 21+
an example
Beans
dependency injection from first principles
bean configuration
XML
stereotype annotations
lifecycle
BeanPostProcessor
BeanFactoryPostProcessor
auto configuration
AOP
Spring's event publisher
configuration and the Environment
configuration processor
AOT & GraalVM
installing GraalVM
GraalVM native images
basics
AOT lifecycles
Scalability
non-blocking IO
virtual threads
José Paumard's demo
Cora Iberkleid's demo
Cloud Native Java (with Kubernetes)
graceful shutdown
ConfigMap and you
Buildpacks and Docker support
Actuator readiness and liveness probes
Data
JdbcClient
SQL Initialization
Flyway
Spring Data JDBC
Web Programming
clients: RestTemplate, RestClient, declarative interface clients
REST
controllers
functional style
GraphQL
batches
Architecting for Modularity
Privacy
Spring Modulith
Externalized messages
Testing
Batch Processing
Spring Batch
load some data from a CSV file to a SQL database
Microservices
centralized configuration
API gateways
reactive or not reactive
event bus and refreshable configuration
service registration and discovery
Messaging and Integration
“What do you mean by Event Driven?”
Messaging Technologies like RabbitMQ or Apache Kafka
Spring Integration
files to events
Kafka
a look at Spring for Apache Kafka
Spring Integration
Spring Cloud Stream
Spring Cloud Stream Kafka Streams
Security
adding form login to an application
authentication
authorization
passkeys
one time tokens
OAuth
the Spring Authorization Server
OAuth clients
OAuth resource servers
protecting messaging code
You are ready to level up your skills. Or, you've already been playing accidental architect, and need to have a structured plan to be designated as one. Well, your wait is over.
From the author of O'Reilly's best-selling “Head First Software Architecture” comes a full-day workshop that covers all you need to start thinking architecturally. From the difference between design and architecture, and a modern description of architecture, to the skills you'll need to develop to become a successful architect, this workshop will be your one-stop shop.
We'll cover several topics:
This is an exercise heavy workshop—so be prepared to put on your architect hat!
As code generation becomes increasingly automated, our role as developers and architects is evolving. The challenge ahead isn’t how to get AI to write more code; it’s how to guide it toward coherent, maintainable, and purposeful systems.
In this session, Michael Carducci reframes software architecture for the era of intelligent agents. You’ll learn how architectural constraints, composition, and trade-offs provide the compass for orchestrating AI tools effectively. Using principles from the Tailor-Made Architecture Model, Carducci introduces practical mental models to help you think architecturally, communicate intent clearly to your agents, and prevent automation from accelerating entropy. This talk reveals how the enduring discipline of architecture becomes the key to harnessing AI—not by replacing human creativity, but by amplifying it.
AI is accelerating software development at an unprecedented pace, but many teams are discovering a frustrating reality: faster coding isn’t translating into faster delivery.
The reason is counterintuitive. When you accelerate one part of a system, you don’t improve the system… you stress it. More code becomes more review, more coordination, more cognitive load, and ultimately, less flow.
This talk connects that modern failure mode to a foundational systems insight from The Goal: local optimization usually degrades overall performance. From there, Michael Carducci shows how to apply the Theory of Constraints to modern software delivery.
Using concrete examples, you’ll see how practices like XP, DevOps, Domain-Driven Design, and Team Topologies act as targeted interventions on specific bottlenecks—and how misapplying them can make things worse.
You’ll leave with a practical mental model for identifying constraints in your system, reasoning about trade-offs, and designing for flow in an AI-accelerated world.
The hardest part of software architecture isn’t the technology, it’s the people. Every architecture lives or dies by its ability to influence behavior, build consensus, and turn vision into change. In this session, Michael Carducci explores the real work of being an architect: communicating clearly, guiding decisions, and driving meaningful change in complex organizations. Drawing from decades of experience and the principles behind the Tailor-Made Architecture Model, Carducci shows how to identify where change is needed, package ideas for adoption, and lead with both clarity and empathy.
And while AI may soon help us design systems, it still can’t align humans around them. The enduring art of architecture lies in shaping not just the code, but the culture that makes progress possible. You’ll leave with practical tools to navigate the human side of architecture and a renewed appreciation for why that art still matters.
Gartner just declared the semantic layer a non-negotiable foundation for AI. Most of the industry responded with a blank stare.
This presentation is the answer to that blank stare.
Your AI has a dirty secret: there is no mechanism in its architecture for truth. Only probability. Every response is a hallucination — most just happen to overlap with the facts. The philosophers figured out why 2,500 years ago, and they also gave us the solution. Plato defined knowledge as justified true belief. RAG is our architecture for justification. But there's a problem — your structured data is wholly inaccessible to it, because your JSON is full of magic strings that mean nothing outside the system that generated them.
This presentation shows you how to fix that. Not with a new framework, a bigger model, or an enterprise triple store. With a discipline — the discipline of making meaning explicit. JSON-LD, RDFS, OWL, and Schema.org form a standards stack that has been quietly solving this problem for 30 years. Your AI is already fluent in it. Half the web already speaks it. Google built an empire on it.
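To make the "magic strings" problem concrete, here is a small illustrative record (built in Python, but the payload is plain JSON-LD): the `@context` maps this API's local field names onto Schema.org terms, so the values carry meaning outside the system that produced them. The field names and values are invented for illustration:

```python
import json

# Illustrative JSON-LD document: "@context" maps local field names onto
# Schema.org vocabulary, turning private magic strings into shared terms
# an AI system (or any consumer) can resolve.
record = {
    "@context": {
        "@vocab": "https://schema.org/",
        "empName": "name",        # local field -> schema.org/name
        "employer": "worksFor",   # local field -> schema.org/worksFor
    },
    "@type": "Person",
    "empName": "Ada Lovelace",
    "employer": {"@type": "Organization", "empName": "Analytical Engines Ltd"},
}

print(json.dumps(record, indent=2))
```

Note that the existing API payload barely changes; the discipline is in adding the context, not in rewriting the data.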
You'll leave with a concrete understanding of what the semantic layer actually is, why it matters, and — most importantly — how to start building it this week with the APIs you already have.
Your data isn't worthless. AI just doesn't know what it means yet.
In our rush toward the future, the software industry keeps forgetting its past—and with it, the hard-won lessons that could save us from repeating the same mistakes. In this live storytelling session, Michael Carducci revives the forgotten wisdom of the pioneers who shaped our craft.
Through entertaining, thought-provoking tales drawn from computing’s early days, he reveals how timeless principles still illuminate today’s challenges. Blending inspiration, history, and humor, Carducci connects these stories to our modern struggles with AI, architecture, and innovation itself. This isn’t nostalgia—it’s a rediscovery of the foundations that still shape great software and better technologists.
Microservices architecture has become a buzzword in the tech industry, promising unparalleled agility, scalability, and resilience. Yet, according to Gartner, more than 90% of organizations attempting to adopt microservices will fail. How can you ensure you're part of the successful 10%?
Success begins with looking beyond the superficial topology and understanding the unique demands this architectural style places on the teams, the organization, and the environment. These demands must be balanced against the current business needs and organizational realities while maintaining a clear and pragmatic path for incremental evolution.
In this session, Michael will share some real-world examples, practical insights, and proven techniques to balance both the power and complexities of microservices. Whether you're considering adopting microservices or already on the journey and facing challenges, this session will equip you with the knowledge and tools to succeed.
2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.
This curated set of 3 sessions will help equip senior technologists to evolve from document stewardship to adaptive integrity management—blending human judgment, executable principles, and guided agent assistance. Architecture is shifting from static designs to adaptive, agent-driven execution.
Come to the Agentic Architect session if you want to see:
how the role of architecture is evolving in the agentic era
practical tips and tricks for embracing the new agentic toolset
how to lean into architecture as code
how to cut decision time from weeks or days to hours
stop redrawing diagrams forever
“The Agentic Architect isn't about AI writing your code – it's about transforming how you make, communicate, and enforce architecture in an AI-accelerated world.”
2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.
This live demo session takes the patterns from “The Agentic Architect” and runs them end-to-end starting with a blank slate.
Watch ideas turn into working architecture
See diagrams-as-code that update themselves based on a more holistic context
Learn how to use AI agents on a daily basis to transform your work
2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.
2025 delivered unprecedented architectural disruption.
This interactive session will explore key events throughout 2025/2026 that have impacted the architect's role in the context of AI ubiquity, platform acceleration, and cost pressures.
Git continues to see improvements daily. However, work (and life) can take over, and we often miss the latest changelog. This means we don't know what changed, and consequently fail to see how we can incorporate those in our usage of Git.
In this session we'll take a tour of some features that you might or might not have heard of, but can significantly improve your workflow and day-to-day interaction with Git.
Git continues to see improvements daily. However, work (and life) can take over, and we often miss the changelog. This means we don't know what changed, and consequently fail to see how we can incorporate those in our usage of Git.
In this session we will look at some features you are probably aware of, but haven't used, alongside new features that Git has brought to the table.
You will need the following installed
Spring Boot 3.x and Java 21 have arrived, making it an exciting time to be a Java developer! Join me, Josh Long (@starbuxman), as we dive into the future of Spring Boot with Java 21. Discover how to scale your applications and codebases effortlessly. We'll explore the robust Spring Boot ecosystem, featuring AI, modularity, seamless data access, and cutting-edge production optimizations like Project Loom's virtual threads, GraalVM, AppCDS, and more.
Let's explore the latest-and-greatest in Spring Boot to build faster, more scalable, more efficient, more modular, more secure, and more intelligent systems and services.
The age of artificial intelligence (because the search for regular intelligence hasn't gone well…) is nearly at hand, and it's everywhere! But is it in your application? It should be. AI is about integration, and here the Java and Spring communities come second to nobody.
In this talk, we'll demystify the concepts of modern-day artificial intelligence and look at its integration with the white hot new Spring AI project, a framework that builds on the richness of Spring Boot to extend it to the wide world of AI engineering.
There's a clear need for security in the software systems that we build. The problem for most organizations is that they don't want to spend any money on it. Even if they did, they often have no idea how much to spend. No particular initiative is likely to imbue your system with “security”, but a strong, deep defensive approach is likely to give you a fighting chance of getting it right.
Web security as applied to APIs in particular is an important part of the plan. In this workshop, we'll show you approaches to defining “enough,” as well as concrete techniques to employ incrementally in your designs.
In this workshop, we will pick a hands-on framework for implementation, but the ideas will generally be standards-based and transcend technology choice, so you'll leave with a strategy for mapping the ideas into your own systems.
We will cover a broad range of topics including:
In the fast-paced world of software development, maintaining architectural integrity is a continuous challenge. Over time, well-intended architectural decisions can erode, leading to unexpected drift and misalignment with original design principles.
This hands-on workshop will equip participants with practical techniques to enforce architecture decisions using tests. By leveraging architecturally-relevant testing, attendees will learn how to proactively guard their system's design, ensuring consistency, scalability, and security as the codebase evolves. Through interactive exercises and real-world examples, we will explore how testing can serve as a powerful tool for preserving architectural integrity throughout a project's lifecycle.
Key Takeaways
Participants will learn to:
Write architecture-driven tests that validate and enforce design constraints.
Identify architectural drift early and prevent unintended changes.
Maintain consistent, scalable, and secure architectures over time.
Collaborate effectively within teams to sustain architectural excellence.
Prerequisites
Basic Understanding of Software Architecture: Familiarity with architectural patterns and principles
Experience with Automated Testing: Understanding of unit, integration, or system testing concepts
Collaboration and Communication Skills: Willingness to engage in discussions and teamwork
Experience working with Java
Optional
Familiarity with Static Analysis and Code Quality Tools: Knowledge of tools like ArchUnit, SonarQube, or custom linters is beneficial but not required
Experience with Large-Scale Systems: Prior work on complex systems can enhance the
REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.
But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.
You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.
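One decoupling move in this spirit is the "tolerant reader": clients parse only the fields they understand and ignore the rest, so the server can add fields without breaking anyone. A minimal sketch (the `Order` type and field names are invented for illustration):

```python
from dataclasses import dataclass, fields

@dataclass
class Order:
    id: str
    status: str

def parse_order(payload: dict) -> Order:
    """Keep only the fields this client knows about; ignore the rest."""
    known = {f.name for f in fields(Order)}
    return Order(**{k: v for k, v in payload.items() if k in known})

# A newer server response with an extra field still parses cleanly,
# so adding "carrier" server-side is not a breaking change:
order = parse_order({"id": "o-1", "status": "shipped", "carrier": "DHL"})
print(order)
```

Additive evolution like this is what lets a refactored API keep backward compatibility while the business requirements around it keep changing.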
This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.
AI models are evolving fast, but the systems around them aren’t. Every backend change still breaks your carefully tuned AI client; on the web, by contrast, a server can change without anyone needing to download a new browser. What if AI worked the same way?
In this talk, Michael Carducci explores the architecture of 3rd Generation Agentic AI, building on the ideas and technologies introduced in Data Architecture for AI. You’ll discover how JSON-LD, Hydra, and semantic integration enable truly evolvable, interoperable AI ecosystems at web scale. Through live demos and real-world examples, Carducci shows how these web-native standards create APIs that describe themselves, adapt to change, and empower agents to discover and interact safely without brittle coupling. The real frontier isn’t smarter models—it’s shared meaning—and that’s an architectural problem worth solving.
Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this 1/2 day workshop, author, trainer, and experienced technologist Brent Laster will help you do both! We’ll explain why and how to run AI models locally, the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama. And you'll get to follow through with hands-on labs and produce your own instance running on your system in a GitHub Codespace.
In this workshop, we'll walk you through what it means to run models locally, how to interact with them, and how to use them as the brain for an agent. Then, we'll enable them to access and use data from a PDF via retrieval-augmented generation (RAG) to make the results more relevant and meaningful. And you'll do all of this hands-on in a ready-made environment with no extra installs required.
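The core retrieval idea is simpler than it sounds. The toy sketch below scores document chunks by keyword overlap with the question and prepends the best chunk to the prompt; the actual model call (e.g. to Ollama) is omitted, and real RAG systems use embeddings rather than word overlap. Function names and data are invented for illustration:

```python
# Toy retrieval-augmented prompt assembly: pick the chunk that best matches
# the question, then build the prompt a local model would receive.
def retrieve(question: str, chunks: list[str]) -> str:
    q_words = set(question.lower().split())
    # Naive relevance: count shared words (real systems use vector similarity).
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(question: str, chunks: list[str]) -> str:
    context = retrieve(question, chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "Refunds are processed within 14 days of a return.",
    "Our office is closed on public holidays.",
]
print(build_prompt("How long do refunds take?", chunks))
```

Swapping the overlap score for embedding similarity, and the `print` for a model call, turns this sketch into the shape of the agent built in the labs.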
No experience is needed with these technologies, although we do assume you have a basic understanding of LLMs.
Attendees will need the following to do the hands-on labs:
Just as CI/CD and other revolutions in DevOps have changed the landscape of the software development lifecycle (SDLC), so Generative AI is now changing it again. Gen AI has the potential to simplify, clarify, and lessen the cycles required across multiple phases of the SDLC.
In this session with author, trainer, and experienced DevOps director Brent Laster, we'll survey the ways that today's AI assistants and tools can be incorporated across your SDLC phases, including planning, development, testing, documentation, and maintenance. There are multiple ways existing tools can help us beyond standard day-to-day coding, and, like other changes that have happened over the years, teams need to be aware of these tools and think about how to incorporate AI into their processes to stay relevant and up-to-date.
In the age of digital transformation, Cloud Architects emerge as architects of the virtual realm, bridging innovation with infrastructure. This presentation offers a comprehensive exploration of the Cloud Architect's pivotal role.
Delving into cloud computing models, architecture design, and best practices, attendees will gain insights into harnessing the power of cloud technologies. From optimizing scalability and ensuring security to enhancing efficiency and reducing costs, this session unravels the strategic decisions and technical expertise that define a Cloud Architect's journey. Join us as we decode the nuances of cloud architecture, illustrating its transformative impact on businesses in the modern era.
AI inference is no longer a simple model call—it is a multi-hop DAG of planners, retrievers, vector searches, large models, tools, and agent loops. With this complexity comes new failure modes: tail-latency blowups, silent retry storms, vector store cold partitions, GPU queue saturation, exponential cost curves, and unmeasured carbon impact.
In this talk, we unveil ROCS-Loop, a practical architecture designed to close the four critical loops of enterprise AI:
• Reliability (predictable latency, controlled queues, resilient routing)
• Observability (full DAG tracing, prompt spans, vector metrics, GPU queue depth)
• Cost-Awareness (token budgets, model tiering, cost attribution, spot/preemptible strategies)
• Sustainability (SCI metrics, carbon-aware routing, efficient hardware, eliminating unnecessary work)
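As a taste of the reliability loop, here is one classic pattern for preventing retry storms: bounded retries with exponentially growing, jittered waits, so failing inference calls back off instead of hammering a saturated GPU queue. This is a generic sketch, not a ROCS-Loop API; names and defaults are illustrative:

```python
import random

# Capped exponential backoff with "full jitter": each retry waits a random
# amount up to an exponentially growing (but capped) ceiling.
def backoff_schedule(max_retries=4, base=0.5, cap=8.0, seed=0):
    rng = random.Random(seed)  # seeded only to keep this sketch reproducible
    waits = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        waits.append(rng.uniform(0, ceiling))
    return waits

print(backoff_schedule())  # four waits, each bounded by 0.5, 1, 2, 4 seconds
```

Jitter matters because without it, many clients that failed together retry together, turning one transient outage into a synchronized storm.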
KEY TAKEAWAYS
• Understand the four forces behind AI outages (latency, visibility, cost, carbon).
• Learn the ROCS-Loop framework for enterprise-grade AI reliability.
• Apply 19 practical patterns to reduce P99, prevent retry storms, and control GPU spend.
• Gain a clear view of vector store + agent observability and GPU queue metrics.
• Learn how ROCS-Loop maps to GCP, Azure, Databricks, FinOps & SCI.
• Leave with a 30-day action plan to stabilize your AI workloads.
⸻
AGENDA
1. The Quiet Outage: Why AI inference fails
2. X-Ray of the inference pipeline (RAG, agents, vector, GPUs)
3. Introducing the ROCS-Loop framework
4. 19 patterns for Reliability, Observability, FinOps & GreenOps
5. Cross-cloud mapping (GCP, Azure, Databricks)
6. Hands-on: Diagnose an outage with ROCS
7. Your 30-day ROCS stabilization plan
8. Closing: Becoming a ROCS AI Architect
Dynamic Programming (DP) intimidates even seasoned engineers. With the right lens, it’s just optimal substructure + overlapping subproblems turned into code. In this talk, we start from a brute-force recursive baseline, surface the recurrence, convert it to memoization and tabulation, and connect it to real systems (resource allocation, routing, caching). Along the way you’ll see how to use AI tools (ChatGPT, Copilot) to propose recurrences, generate edge cases, and draft tests—while you retain ownership of correctness and complexity. Expect pragmatic patterns you can reuse in interviews and production.
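The progression described above, shown on a classic example (minimum-coin change, chosen here for illustration): a memoized recursion exposes the recurrence directly, and tabulation fills the same table bottom-up.

```python
from functools import lru_cache

# Recurrence: minCoins(a) = 1 + min(minCoins(a - c)) over coins c <= a.

def min_coins_memo(coins: tuple, amount: int) -> int:
    """Top-down: the recurrence as code, with overlapping subproblems cached."""
    @lru_cache(maxsize=None)
    def solve(a):
        if a == 0:
            return 0
        if a < 0:
            return float("inf")
        return 1 + min(solve(a - c) for c in coins)
    result = solve(amount)
    return -1 if result == float("inf") else int(result)

def min_coins_table(coins: tuple, amount: int) -> int:
    """Bottom-up: fill the same subproblem table iteratively."""
    table = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                table[a] = min(table[a], table[a - c] + 1)
    return -1 if table[amount] == float("inf") else int(table[amount])

# Both strategies agree: 15 = 5 + 5 + 5, so three coins.
print(min_coins_memo((1, 5, 11), 15), min_coins_table((1, 5, 11), 15))
```

The greedy choice (take 11 first) gives five coins here, which is exactly why this problem needs DP rather than a greedy scan.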
Why Now
Key Framework
Core Content
Learning Outcomes
Building an AI model is the easy part—making it work reliably in production is where the real engineering begins. In this fast-paced, experience-driven session, Ken explores the architecture, patterns, and practices behind operationalizing AI at scale. Drawing from real-world lessons and enterprise implementations, Ken will demystify the complex intersection of machine learning, DevOps, and data engineering, showing how modern organizations bring AI from the lab into mission-critical systems.
Attendees will learn how to:
Design production-ready AI pipelines that are testable, observable, and maintainable
Integrate model deployment, monitoring, and feedback loops using MLOps best practices
Avoid common pitfalls in scaling, governance, and model drift management
Leverage automation to reduce friction between data science and engineering teams
Whether you’re a software architect, developer, or engineering leader, this session will give you a clear roadmap for turning AI innovation into operational excellence—with the same pragmatic, architecture-first perspective that Ken is known for.
Most of us don't want to go back to the days of malloc and free, but the magic of garbage collectors, while convenient, can be mysterious and hard to understand.
In this talk, you'll learn about the many different garbage collectors available in JVMs, the strengths and weaknesses of the allocation and collection strategies used by each collector, and how garbage collectors keep evolving to support today's hardware and cloud environments.
This talk will cover the core concepts in garbage collection: object reachability, concurrent collectors, parallel garbage collectors, and generational garbage collectors. These concepts will be covered by following the progression of garbage collectors in the HotSpot JVM.
Unlike other languages, Java had a well-defined memory model from the very beginning, but over the years additional packages and low-level features have been added to make the most of today's hardware.
In this talk, we'll discuss concurrency in detail, starting at the hardware, moving up to Java's latest synchronization mechanisms, and finally on to high-level concurrent collections.
This talk will cover hardware memory fences, Java's synchronized and volatile, Atomic classes, newer capabilities added by VarHandles, and some select high-level concurrent collections.
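As a small taste of the ladder from volatile to VarHandles covered above, here is a hedged sketch of a lock-free counter using a compare-and-set retry loop, the same technique AtomicInteger uses internally (class and field names are illustrative):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class CounterDemo {
    private volatile int value; // volatile gives visibility, but value++ is still not atomic

    private static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup()
                    .findVarHandle(CounterDemo.class, "value", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Lock-free increment: read, attempt CAS, retry on contention.
    int increment() {
        int v;
        do {
            v = (int) VALUE.getVolatile(this);
        } while (!VALUE.compareAndSet(this, v, v + 1));
        return v + 1;
    }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo c = new CounterDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.value); // 40000: no lost updates despite no locks
    }
}
```

In practice you would reach for AtomicInteger first; the VarHandle form shows what that abstraction is built from.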
Fortunately, for most Java developers the just-in-time compiler just works, and appears to do so by magic. And yet sometimes we find ourselves facing a performance problem. So what do we do when the magic stops?
In this talk, we'll cover a few key concepts behind the magic of modern optimizing compilers: intrinsics, basic blocks, static single assignment, and inlining. With these concepts in hand, you'll save time by not trying to optimize the things the compiler already does for you, and focus instead on the things that matter most.
This talk will provide a high-level overview of just-in-time compilation.
The talk will cover when the JVM triggers just-in-time compilation, an overview of core compiler concepts, and speculative optimizations and deoptimization which make the JVM unique.
In this half-day workshop, we’ll practice Test-Driven Development (TDD) by solving a real problem step by step. You’ll learn how to think in tests, write clean code through refactoring, and use your IDE and AI tools effectively. We’ll also explore how modern Java features (like lambdas and streams) enhance testability, and discuss what’s worth testing — and what’s not.
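One way modern Java features aid testability, per the workshop theme: inject behavior as a function, so the logic under test needs no mocking framework. A minimal sketch with a hypothetical PricingService:

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class PricingService {
    // The discount rule is injected as a pure function, so a unit test
    // can drive the stream pipeline with a plain lambda; no mocks needed.
    static double total(List<Double> prices, UnaryOperator<Double> discount) {
        return prices.stream()
                .map(discount)
                .mapToDouble(Double::doubleValue)
                .sum();
    }

    public static void main(String[] args) {
        // A flat $1-off rule expressed inline, exactly as a test would write it.
        System.out.println(total(List.of(10.0, 20.0), p -> p - 1.0)); // 28.0
    }
}
```

Compare this with a version that hard-codes the discount policy: the lambda-based design lets each test state its rule in one line, which is the kind of testability win the workshop explores.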
In the realm of architecture, principles form the bedrock upon which innovative and enduring designs are crafted. This presentation delves into the core architectural principles that guide the creation of structures both functional and aesthetic. Exploring concepts such as balance, proportion, harmony, and sustainability, attendees will gain profound insights into the art and science of architectural design. Through real-world examples and practical applications, this session illuminates the transformative power of adhering to these principles, shaping not only buildings but entire environments. Join us as we unravel the secrets behind architectural mastery and the principles that define architectural brilliance.
Good architectural principles are fundamental guidelines or rules that inform the design and development of software systems, ensuring they are scalable, maintainable, and adaptable. Here are some key architectural principles that are generally considered valuable in software development:
Adhering to these architectural principles can lead to the development of robust, maintainable, and adaptable software systems that meet the needs of users and stakeholders effectively.
AI agents are moving from novelty to necessity — but building them safely, predictably, and observably requires more than clever prompts. This workshop gives developers a practical introduction to AI agent engineering using Embabel, with an emphasis on the habits, patterns, and mental models needed to design trustworthy agents in real systems.
Across two focused sessions, you’ll learn how to ground agents in strong domain models (DICE), design goal-driven behaviours (GOAP), enforce safety through invariants and preconditions, and make every action explainable through observability.
You’ll run and inspect a fully working reference agent, extend its domain, add new actions, and validate behaviour through explainable planning logs. You'll explore how to select and deploy models tuned to provide the best and most cost-effective agent behaviour for your users.
By the end of this full day workshop, you’ll know how to:
Build agents anchored in typed domain models
Design composable, goal-oriented behaviours
Use preconditions and invariants as safety guardrails
Debug agents through explainability, not guesswork
Extend an agent with new domain objects and actions without breaking existing flows
Apply a repeatable habit stack for reliable agent engineering
Whether you’re designing workflow agents, platform automations, or domain-specific assistants, this workshop gives you the practical skills and engineering discipline to build agents that behave safely and reason predictably — fit for production, and even fit for regulated environments.
Internal developer platforms hold promise: faster delivery, better reliability, happier engineers. Yet they often stall—caught in organisational inertia, tool complexity, and unclear value.
These two high-impact 90-minute sessions give you the core mental models and practical artefacts to shift from tool-chase to strategic platform value. You’ll walk away with real maps, a clear focus, and next-step experiments—not just ideas. If you’re looking to make your platform team a force for value, not just operations, this compact format delivers.
By the end of the workshop you'll know:
How to see your platform ecosystem clearly using tools like Wardley Mapping, User Needs Mapping, Value Stream Mapping and OODA loops.
How to treat the platform as a product, shifting mindset from internal project to self-service, developer-centric product.
How to apply the pattern language of platform design (Golden Path, Self-Service, Abstraction, Composability, Guardrails, Observability, Extensibility, Incremental Roll-out).
How to use DSRP + UNM + VSM to reveal where value stalls, flow breaks, and cognitive load spikes.
How to design smallest viable changes, build an impact roadmap, and influence adoption through “Elephant & Rider” thinking (rational vs emotional mindsets).
You'll also get a sneak peek into the future of platforms: AI/automation, evolving loops, and building resilience into your ecosystem.
If your platform team is stuck in the grind—shipping tickets, fighting fires, juggling tools, and wondering why nothing ever seems to change—this workshop will give you the clarity and leverage you’ve been missing. You’ll learn to read your organisation like a map: where value flows, where it dies, where cognitive load spikes, and where small, strategic platform moves can unlock disproportionate impact.
This isn’t another “tools tour”. Whether you’re building an IDP from scratch or rescuing one that’s drifting, you’ll leave with a clear roadmap, a set of tested patterns, and the influence skills to actually make the work land. Platform engineering is no longer about managing complexity—it’s about creating the conditions where developers can thrive. Join us, map your ecosystem, design the future, and turn your platform team into the strategic engine your organisation needs.
By the end of this half day workshop you'll know:
How to see your platform ecosystem clearly using tools like Wardley Mapping, User Needs Mapping, Value Stream Mapping and OODA loops.
How to treat the platform as a product, shifting mindset from internal project to self-service, developer-centric product.
How to apply the pattern language of platform design (Golden Path, Self-Service, Abstraction, Composability, Guardrails, Observability, Extensibility, Incremental Roll-out).
How to use DSRP + UNM + VSM to reveal where value stalls, flow breaks, and cognitive load spikes.
How to design smallest viable changes, build an impact roadmap, and influence adoption through “Elephant & Rider” thinking (rational vs emotional mindsets).
You'll also get a sneak peek into the future of platforms: AI/automation, evolving loops, and building resilience into your ecosystem.
If your platform team is stuck in the grind—shipping tickets, fighting fires, juggling tools, and wondering why nothing ever seems to change—this workshop will give you the clarity and leverage you’ve been missing. You’ll learn to read your organisation like a map: where value flows, where it dies, where cognitive load spikes, and where small, strategic platform moves can unlock disproportionate impact.
This isn’t another “tools tour”. Whether you’re building an IDP from scratch or rescuing one that’s drifting, you’ll leave with a clear roadmap, a set of tested patterns, and the influence skills to actually make the work land. Platform engineering is no longer about managing complexity—it’s about creating the conditions where developers can thrive. Join us, map your ecosystem, design the future, and turn your platform team into the strategic engine your organisation needs.
By now, you've no doubt noticed that Generative AI is making waves across many industries. In between all of the hype and doubt, there are several use cases for Generative AI in many software projects. Whether it be as simple as building a live chat to help your users or using AI to analyze data and provide recommendations, Generative AI is becoming a key piece of software architecture.
So how can you implement Generative AI in your projects? Let me introduce you to Spring AI.
For over two decades, the Spring Framework and its immense portfolio of projects have been making complex problems easy for Java developers. And now with the new Spring AI project, adding Generative AI to your Spring Boot projects couldn't be easier! Spring AI brings an AI client and templated prompting that handles all of the ceremony necessary to communicate with common AI APIs (such as OpenAI and Azure OpenAI). And with Spring Boot autoconfiguration, you'll be able to get straight to the point of asking questions and getting answers your application needs.
In this hands-on workshop, you'll build a complete Spring AI-enabled application applying such techniques as prompt templating, Retrieval Augmented Generation (RAG), conversational history, and tool invocation. You'll also learn prompt engineering techniques that can help your application get the best results with minimal “hallucinations” while minimizing cost.
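To give a feel for what prompt templating means: Spring AI ships a real PromptTemplate abstraction for this; the plain-Java stand-in below only illustrates the underlying idea of substituting named placeholders before the prompt is sent to the model (it is not Spring AI's implementation):

```java
import java.util.Map;

public class PromptTemplate {
    // Replaces each {placeholder} in the template with its bound value.
    static String render(String template, Map<String, String> vars) {
        String out = template;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            out = out.replace("{" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        String prompt = render("Suggest a name for a {cuisine} restaurant in {city}.",
                Map.of("cuisine", "Thai", "city", "Chicago"));
        System.out.println(prompt); // Suggest a name for a Thai restaurant in Chicago.
    }
}
```

Templating keeps the prompt's structure in one reviewed place while user input fills only the designated slots, which also helps contain prompt-injection surface area.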
In the workshop, we will be using…
Optionally, you may choose to use a different AI provider other than OpenAI such as Anthropic, Mistral, or Google Vertex (Gemini), but you will need an account with them and some reasonable amount of credit with them. Or, you may choose to install Ollama (https://ollama.com/), but if you do be sure to install a reasonable model (llama3:latest or gemma:9b) before you arrive.
Know that if you choose to use something other than OpenAI, your workshop experience will vary.
Enterprise Architecture (EA) has long been misunderstood as a bottleneck to innovation, often labeled the “department of no.” But in today’s fast-paced world of Agile, DevOps, Cloud, and AI, does EA still have a role to play—or is it a relic of the past?
This session reimagines the role of EA in the modern enterprise, showcasing how it can evolve into a catalyst for agility and innovation. We’ll explore the core functions of EA, its alignment with business and IT strategies, and how modern tools, techniques, and governance can transform it into a driver of value. Attendees will leave with actionable insights on building a future-ready EA practice that thrives in an ever-changing technological landscape.
Architecture is often defined as “hard to change”. An architecture pattern is a reusable solution to a commonly occurring problem in software architecture within a specific context. Architecture anti-patterns are their diabolical counterparts: they sound good in theory, but in practice lead to negative consequences. And because they affect both the architectural characteristics and the structural design of the system, they are incredibly expensive to remedy and have far-reaching consequences.
This session explores various architecture patterns, how one can easily fall into anti-patterns, and how to avoid them. We will do a qualitative analysis of various architecture patterns and anti-patterns, and introduce fitness functions to govern against anti-patterns.
An architecture pattern is a reusable solution to a commonly occurring problem in software architecture within a specific context. Architecture patterns affect the “-ilities” of a system, such as scalability, performance, maintainability, and security as well as impact the structural design of the system.
This session explores various architecture patterns, their applicability and trade-offs. But that's not all—this session will also provide insight into the numerous intersections of these patterns with all the other tendrils of the organization, including implementation, infrastructure, engineering practices, team topologies, data topologies, systems integration, the enterprise, the business environment, and generative AI. And we will see how to govern each pattern using fitness functions to ensure alignment.
Platform engineering is the latest buzzword, in an industry that already has its fair share. But what is platform engineering? How does it fit in with DevOps and Developer Experience (DevEx)? And is this something your organization even needs?
In this session we will dive deep into the world of platform engineering. We will see what platform engineering entails, how it is the logical succession to a successful DevOps implementation, and how it aims to improve the developer experience. We will also uncover the keys to building robust, sustainable platforms for the future.
This 1/2 day workshop introduces participants to Claude Code, Anthropic’s AI-powered coding assistant. In three hours, attendees will learn how to integrate Claude Code into their development workflow, leverage its capabilities for productivity, and avoid common pitfalls. The workshop also introduces the concept of subagents (specialized roles like Planner, Tester, Coder, Refactorer, DocWriter) to show how structured interactions can improve accuracy and collaboration.
Format: 3-hour interactive workshop (2 × 90-minute sessions + 30-minute break).
Audience: Developers and technical professionals with basic programming knowledge.
Focus Areas:
Core capabilities and limitations of Claude Code.
Effective prompting and iteration techniques.
Applying Claude Code for code generation, debugging, refactoring, and documentation.
Using subagents for structured workflows as an optional advanced technique.
Deliverables:
5 hands-on labs (10–12 minutes each).
Experience with everyday Claude Code workflows plus a brief introduction to subagents.
To do the labs in this workshop, you must already have a Claude Code Pro subscription, which provides access to Claude Code; without one, you will not be able to use Claude Code in this workshop. See https://www.anthropic.com/pricing.
As cloud architectures evolve, AI is quickly becoming a foundational component rather than an add-on.
This session explores the architectural principles behind building scalable hybrid clouds and shows how AI can elevate them—from predictive scaling to intelligent workload optimization. We’ll look at patterns already emerging in the industry and map out a clear approach for designing resilient, AI-augmented systems that are ready for the next wave of innovation.
API security goes beyond protecting endpoints—it requires defense across infrastructure, data, and business logic. In this talk, I’ll present a structured approach to implementing Zero Trust security for APIs in a cloud-native architecture.
We’ll cover how to establish a strong foundation across layers—using mTLS, OAuth2/JWT, policy-as-code (OPA), GitOps for deployment integrity, and cloud-native secrets management. The session addresses real-world threats like misconfigurations, privilege escalation, and API abuse, and shows how to mitigate them with layered controls in Kubernetes-based environments on Azure and AWS.
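The OAuth2/JWT layer described above ultimately rests on signature verification. As a hedged, minimal illustration of what a vetted library does for an HS256 token (in production, use a real JWT library and also validate exp, aud, iss and other claims):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class JwtCheck {
    // Recomputes the HMAC-SHA256 signature over "header.payload" and
    // compares it to the token's third segment in constant time.
    static boolean verifyHs256(String jwt, byte[] secret) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] expected = mac.doFinal(
                (parts[0] + "." + parts[1]).getBytes(StandardCharsets.UTF_8));
        byte[] actual = Base64.getUrlDecoder().decode(parts[2]);
        // MessageDigest.isEqual avoids timing side channels.
        return MessageDigest.isEqual(expected, actual);
    }
}
```

In a Zero Trust setup this check runs at every hop (gateway, sidecar, service) rather than only at the edge, so a forged or tampered token is rejected wherever it lands.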
Attendees will walk away with actionable practices to secure their API ecosystem end-to-end— without slowing development teams down.
Here I’ll break down how GitOps simplifies the operational challenges around cloud and Kubernetes environments. We’ll look at how a Git-driven model brings consistency, automation, and better visibility across both infrastructure and application delivery.
The goal is to share a clear and practical approach to reducing operational overhead and creating a more reliable DevOps workflow.
In this hands-on workshop you will learn how to build & deploy production-ready AI Agents. You will use Spring AI, MCP, Java, and Amazon Bedrock and learn how to deal with production concerns like observability and security. We will start with basic prompting then expand with chat memory, RAG, and integration through MCP. You will be provided a provisioned cloud environment and step-by-step instructions.
Bring your laptop, walk away with the skills to build your own AI Agents with Java.
Data Mesh rethinks data architecture in organizations by treating data as a product, owned and operated by bounded-context teams rather than centralized platforms. This way, data owners can describe, enrich, and attest to the provenance of their data sources, preventing malicious poisoning.
Java has accumulated a diverse toolbox for concurrency and asynchrony over the decades, ranging from classic threads to parallel streams, from Future to CompletableFuture, and from reactive libraries to the latest innovations, including virtual threads, structured concurrency, and the Vector API. But with so many options, the question is: which ones should we use today, which still matter, and which belong in the history books?
In this talk, we’ll explore the entire spectrum:
We’ll also tackle the hard questions:
As AI model usage grows across enterprise systems, teams face new infrastructure challenges—fragmented integrations, inconsistent interfaces, and limited visibility into model performance. An AI Gateway bridges this gap by providing an abstraction layer for model routing, guardrails, and observability, standardizing how applications interact with AI models.
This session explores AI Gateway architecture, key design patterns, and integration strategies with existing API and DevOps ecosystems. Attendees will learn how to implement model routing, enforce runtime safety and compliance, and build unified monitoring for prompt-level analytics—all forming the foundation of a scalable enterprise AI platform.
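A model-routing rule of the kind described above can be sketched as a small tiering table; the model names and the prompt-length heuristic here are hypothetical, not a specific gateway's API:

```java
import java.util.List;
import java.util.function.Predicate;

// A tier pairs a model name with the requests it is willing to accept.
record ModelTier(String model, Predicate<String> accepts) {}

public class GatewayRouter {
    // First matching tier wins; the fallback model absorbs everything else.
    static String route(String prompt, List<ModelTier> tiers, String fallback) {
        return tiers.stream()
                .filter(t -> t.accepts().test(prompt))
                .map(ModelTier::model)
                .findFirst()
                .orElse(fallback);
    }

    public static void main(String[] args) {
        List<ModelTier> tiers = List.of(
                new ModelTier("small-fast-model", p -> p.length() < 200));
        System.out.println(route("summarize this line", tiers, "large-model")); // small-fast-model
    }
}
```

Real gateways route on richer signals (token budgets, tenant, latency SLOs), but the shape is the same: an ordered policy table evaluated per request, behind one stable interface for all callers.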
Traditional API linting tools like Spectral have helped teams identify issues in their OpenAPI specifications by surfacing violations of style guides and best practices. But the current paradigm stops at diagnosis—developers are still left with the manual burden of interpreting warnings, resolving inconsistencies, and applying often repetitive best practice fixes.
This session explores a transformative approach: using large language models (LLMs) fine-tuned on industry API standards to go beyond pointing out what’s wrong—to actively fixing it. Imagine replacing “Here’s a list of errors” with “Here’s your new spec, clean, compliant, and ready to ship.” By shifting from rule-checking to rule-enforcing via intelligent automation, teams can significantly reduce friction in their design workflows, improve standardization, and cut review cycles.
In today’s fast-paced development environment, delivering robust and efficient APIs requires a streamlined design process that minimizes delays and maximizes collaboration. Mocking has emerged as a transformative tool in the API design lifecycle, enabling teams to prototype, test, and iterate at unprecedented speeds.
This talk explores the role of mocking in enhancing API design workflows, focusing on its ability to:
1.Facilitate early stakeholder feedback by simulating API behavior before development.
2.Enable parallel development by decoupling frontend and backend teams.
3.Identify design flaws and inconsistencies earlier, reducing costly downstream changes.
4.Support rapid iteration and experimentation without impacting live systems.
Using real-world examples and best practices, we’ll demonstrate how tools like Prism and WireMock can be leveraged to create mock APIs that enhance collaboration, improve quality, and dramatically accelerate development timelines. Attendees will leave with actionable insights on integrating mocking into their API design lifecycle, fostering innovation and speed without compromising reliability.
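Tools like WireMock and Prism make stubbing declarative; to show the underlying idea with only the JDK, here is a minimal hand-rolled mock endpoint (the /bookings route and payload are hypothetical):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class MockApiDemo {
    // Serves a canned JSON response standing in for a backend that
    // doesn't exist yet, so consumers can build and test against it.
    static HttpServer startMock(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/bookings/42", exchange -> {
            byte[] body = "{\"id\":42,\"status\":\"CONFIRMED\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer mock = startMock(0); // port 0 picks any free port
        int port = mock.getAddress().getPort();
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/bookings/42")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // {"id":42,"status":"CONFIRMED"}
        mock.stop(0);
    }
}
```

Dedicated mocking tools add what this sketch lacks: spec-driven responses, request matching, verification, and fault injection, all without writing server code by hand.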
In this immersive, hands-on workshop, participants will learn how to combine the discipline of Test-Driven Development with the creative support of AI-powered pair programming.
Working in pairs, developers will build a Booking system from scratch using Java and VS Code, progressively applying the red-green-refactor cycle while integrating AI assistance for test authoring and design validation.
This workshop emphasizes practical workflow habits: starting from unit tests, iterating with context-driven prompts, and applying refactoring techniques to help participants write more reliable, maintainable, and thoughtful code.
By the end of this workshop, participants will be able to:
Apply the TDD cycle (Red → Green → Refactor) effectively while coding a real-world service
Collaborate with AI tools to generate, refine, and extend test cases responsibly
Pass contextual prompts to guide AI toward meaningful, domain-relevant test generation
Recognize design and code smells that emerge in the refactor phase and correct them through iterative improvement
Balance speed and intent—leveraging AI to accelerate feedback without compromising software quality
Reflect on workflow improvements, communication with AI tools, and ethical implications of AI-assisted testing
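One red-green iteration of the cycle described above can be sketched in plain Java. All names here are illustrative, not the workshop's actual exercise, and simple printed booleans stand in for JUnit assertions so the sketch is self-contained:

```java
// Hypothetical red-green iteration for a booking service (all names are
// illustrative). The test expectation comes first; the service then
// implements just enough to satisfy it.
import java.util.ArrayList;
import java.util.List;

public class BookingSketch {
    // A booking occupies a half-open hour range [startHour, endHour).
    record Booking(int startHour, int endHour) {
        boolean overlaps(Booking other) {
            return startHour < other.endHour && other.startHour < endHour;
        }
    }

    static class BookingService {
        private final List<Booking> bookings = new ArrayList<>();

        // GREEN: the simplest code that makes the failing test pass.
        boolean book(Booking candidate) {
            for (Booking existing : bookings) {
                if (existing.overlaps(candidate)) {
                    return false; // refuse double-booking
                }
            }
            bookings.add(candidate);
            return true;
        }
    }

    public static void main(String[] args) {
        // RED first: before book() checked overlaps, the second call
        // would also have returned true.
        BookingService service = new BookingService();
        System.out.println(service.book(new Booking(9, 11)));  // true
        System.out.println(service.book(new Booking(10, 12))); // false
    }
}
```

In the workshop flow, the failing expectation is written first (red), the method is implemented to make it pass (green), and the overlap logic is then cleaned up (refactor), with AI assistance at each step.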
In this hands-on session, participants will learn how to bridge the gap between technical strategy and execution using systems thinking principles.
Through a series of exercises, software architects will practice mapping business goals, constraints, and feedback loops, then translate them into a clear and adaptable technical roadmap.
This presentation focuses on helping architects and engineers move from abstract vision to actionable outcomes: aligning architecture with value, sequencing initiatives, and communicating trade-offs effectively to stakeholders.
By the end of this session, participants will be able to:
Understand how systems thinking reveals dependencies and leverage points within technical ecosystems
Identify how business outcomes can be mapped to technical capabilities
Practice creating an adaptive roadmap using “Now / Next / Later” framing
Learn to communicate trade-offs and priorities in a way that aligns with business goals
Leave with a reusable framework and template for turning architectural strategy into delivery steps
There's an implied context to your software running in the world and processing data. The problem is that it is usually a reductive and insufficient context to capture the fluency of change that occurs at multiple layers. This need for shared context spreads to API usage which often necessitates fragile, custom development.
In this talk we will address the importance of dynamic context in software systems and how to engender flexible, sufficiently rich context-based systems.
We will cover the history of context-based thinking in the design of software systems and network protocols and how the ideas are merging into something along the lines of “Information DNS” where we resolve things at the time and place of execution into the form in which we need it.
Consider software systems with the technical and financial properties of the Web.
While this is a developing approach to software development, it builds on established ideas and will help provide the basis for next-generation development.
Java 25 has been released, but the Java release train continues chugging along with Java 26.
In this presentation we will start with a quick review of the key changes from Java 17 to 21 and how they have improved developer experience, performance, and support for running Java applications in production. From there we will transition to changes post-21 and how they are bringing important stories into focus, including improved concurrency support, data-oriented programming, native support, and more! The Java platform is evolving quickly to keep pace with the current needs of users, so be sure to attend this presentation if you want to keep up!
Data is at the center of any organization. So it stands to reason that data should be at the center of how we design and write our Java applications.
In this talk we are going to look at how recent additions to the Java language (records, pattern matching, and sealed hierarchies) are enabling Java applications to be written in a Data-Oriented Programming (DOP) paradigm. We will look at the core concepts of DOP, and how it compares and contrasts with the OOP approach familiar to many Java developers.
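As a rough illustration of that style (the shapes example is ours, not necessarily the talk's), data is modeled as a closed set of records and behavior lives in exhaustive pattern-matching switches:

```java
// Illustrative data-oriented sketch: data as a closed set of records,
// behavior as exhaustive pattern-matching switches (Java 21+).
public class DopSketch {
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    // The compiler verifies that every permitted case is handled, so
    // adding a new Shape forces this switch to be revisited.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rectangle(3, 4))); // 12.0
    }
}
```

Contrast this with the OOP habit of putting `area()` on each subclass: here the data definitions stay inert, and operations over them are added freely without touching the hierarchy.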
In this architectural kata, you will step into the shoes of a software architect tasked with designing a modern healthcare management system for a rapidly growing provider, MedBest.
The challenge is to create a system that integrates patient records, appointment scheduling, billing, and telemedicine while ensuring robust security, compliance with regulations, scalability, and cost efficiency.
Authentication and authorization are foundational concerns in modern systems, yet they’re often treated as afterthoughts or re-implemented inconsistently across services.
In this talk, we’ll explore Keycloak, an open-source identity and access management system, and how it fits into modern application architectures. We’ll break down what Keycloak actually does (and what it doesn’t), explain the role of JWTs and OAuth2/OpenID Connect, and examine how identity, trust, and access control are handled across distributed systems.
We’ll also compare Keycloak to secret management systems like Vault, clarify common misconceptions, and walk through the integrations you will need with Spring, Quarkus, and other frameworks.
By the end, you’ll understand when Keycloak is the right tool, how to integrate it cleanly, and how to avoid the most common architectural mistakes.
In this session, we will define what Keycloak is, its value, and how it integrates with your existing architecture.
Prometheus and Grafana form the backbone of modern metrics-based observability, yet many teams struggle to move from “we collect metrics” to “we understand our systems.”
This talk builds a clear mental model for Prometheus and Grafana: how metrics are exposed, scraped, stored, queried, and visualized — and how those metrics connect to real operational decisions. We’ll explore Prometheus architecture, PromQL, Kubernetes integration via the Prometheus Operator, and how metrics power advanced workflows like canary deployments with Argo Rollouts and OpenTelemetry-based telemetry.
Attendees will leave knowing what to measure, how to measure it, and where to start on Monday.
This talk builds a practical mental model for metrics-based observability using Prometheus and Grafana. Rather than focusing solely on dashboards, we’ll explore how metrics are exposed, collected, queried, and ultimately used to make real operational decisions. We’ll connect application-level instrumentation, Kubernetes-native monitoring, and modern telemetry standards, showing how Prometheus fits into today’s production environments and deployment workflows.
In moments of uncertainty, teams don’t listen more closely to their leaders.
They watch them.
Across years of leading software organizations - and hundreds of documented leadership moments - one truth becomes unavoidable: engineers take their emotional cues from leadership behavior, especially under pressure. Stress, urgency, and fear propagate through teams not by announcement, but by example.
This talk explores how leaders unintentionally amplify chaos through tone, timing, and reaction - even when they believe they’re being clear or decisive. Drawing from psychology, anthropology, and long-term observation of engineering teams, this session focuses on principles, not tactics: stable leadership rules that hold when incidents, deadlines, or change collide.
Attendees will learn:
• How leadership behavior becomes a system input
• Why urgency often masquerades as clarity - and how teams experience the difference
• How trust works like a bank account, with deposits and withdrawals made in moments of pressure
• Practical ways leaders can become a stabilizing force without suppressing reality
This is not a talk about staying calm for appearances’ sake. It’s about understanding how human systems react to stress, and how leaders can intentionally reduce noise instead of becoming part of it.
Leaders will leave better equipped to guide teams through chaos - not by controlling outcomes, but by shaping the environment in which decisions are made.
What happens when a self-taught programmer with a background in anthropology finds himself leading engineering teams? In this candid, humorous, and emotionally resonant talk, Robert Harris shares his journey from BASIC on a Commodore 64 to building psychologically safe, high-performing cultures in modern software organizations.
Blending fieldwork with frameworks, Robert explores the human side of engineering leadership—imposter syndrome, accidental management, and the painful lessons that shaped his philosophy. Drawing on his training in anthropology, he offers a practical guide to shaping team culture through shared language, rituals, experiences, and artifacts—from flaming pull request beacons to rubber duck onboarding kits.
Attendees will leave with:
•A fresh perspective on leadership rooted in emotional intelligence and cultural design
•Actionable strategies for building trust, accountability, and psychological safety
•A toolkit of metaphors, rituals, and artifacts to transform team dynamics
Whether you’re a reluctant manager, a seasoned leader, or just someone who’s ever stepped on a rake in production, this talk will help you turn dysfunction into culture—and culture into your team’s greatest asset.
This talk will guide Java developers through the design and implementation of multi-agent generative AI systems using event-driven principles.
Attendees will learn how autonomous GenAI agents collaborate, communicate, and adapt in real-time workflows using modern Java frameworks and messaging protocols.
As generative AI systems evolve from single LLM calls to complex, goal‑driven workflows, multi‑agent architectures are becoming essential for robust, scalable, and explainable AI applications.
This talk presents a practical framework for designing and implementing multi‑agent generative AI systems, covering four core orchestration patterns that define how agents coordinate:
Orchestrator‑Worker: A central agent decomposes a task and delegates subtasks to specialized worker agents, then aggregates and validates results.
Hierarchical Agent: Agents are organized in layers (e.g., manager, specialist, executor), enabling abstraction, delegation, and error handling across levels.
Blackboard: Agents contribute to, and react to, a shared “blackboard” workspace, enabling loosely coupled, event‑driven collaboration.
Market‑Based: Agents act as autonomous participants that negotiate, bid, or compete for tasks and resources, useful in dynamic, resource‑constrained environments.
For each pattern, we show concrete use cases, such as customer support triage, research synthesis, and code generation pipelines, and discuss trade‑offs in latency, complexity, and observability.
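As a rough, LLM-free sketch of the first pattern (all names are illustrative), the orchestrator decomposes work, routes each subtask to a specialized worker, and aggregates the results:

```java
// Illustrative orchestrator-worker sketch with plain Java standing in
// for LLM calls; the routing and aggregation structure is the point.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class OrchestratorSketch {
    interface Worker {
        String handle(String subtask);
    }

    static class Orchestrator {
        private final Map<String, Worker> workers;

        Orchestrator(Map<String, Worker> workers) {
            this.workers = workers;
        }

        // Delegate each subtask to the matching specialist, then
        // aggregate the results into a single answer.
        String run(List<String> subtasks) {
            return subtasks.stream()
                    .map(t -> workers
                            .getOrDefault(topicOf(t), u -> "unhandled: " + u)
                            .handle(t))
                    .collect(Collectors.joining("; "));
        }

        // Subtasks are tagged "topic:payload", e.g. "billing:refund order 42".
        private static String topicOf(String subtask) {
            return subtask.split(":")[0];
        }
    }

    public static void main(String[] args) {
        Orchestrator orchestrator = new Orchestrator(Map.of(
                "billing", t -> "billing agent resolved [" + t + "]",
                "tech", t -> "tech agent resolved [" + t + "]"));
        System.out.println(orchestrator.run(
                List.of("billing:refund order 42", "tech:reset password")));
    }
}
```

In a real system each `Worker` would wrap an agent or tool call, and the orchestrator would also validate results before aggregating - the part where most of the observability trade-offs discussed in the talk show up.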
Everybody is talking about Generative AI and models that are better than anything else before. What are they really talking about?
In this workshop with some hands-on exercises, we will discuss Generative AI in theory and will also try it in practice (with free access to an Oracle LiveLab cloud session to learn about Vector Search). You'll be able to understand what Generative AI is all about and how it can be used.
The content will include:
Java has quietly absorbed functional ideas over the last decade. Lambdas, streams, records, sealed types. It has been an amazing journey, but most teams still write code as if none of that really changed anything. This workshop asks a simple question: what if we actually took those features seriously?
In Thinking Functionally in Java, we explore how far disciplined functional design can take us using plain Java with no rewrites, no new language mandates, and no academic detours. Along the way, we address reproducible development environments with Nix, replace exception-driven control flow with explicit error modeling, and uncover why concepts like flatMap, algebraic data types, and composability matter even if you never say the word “monad” out loud.
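For instance, explicit error modeling with an algebraic data type might look like this sketch (the names are ours, and a real codebase would flesh this out considerably):

```java
// Illustrative sketch of explicit error modeling: a sealed Result type
// replaces exceptions, and flatMap chains steps that may fail.
import java.util.function.Function;

public class ResultSketch {
    sealed interface Result<T> permits Ok, Err {}
    record Ok<T>(T value) implements Result<T> {}
    record Err<T>(String reason) implements Result<T> {}

    // Composition: apply f only on success; errors short-circuit.
    static <T, U> Result<U> flatMap(Result<T> r, Function<T, Result<U>> f) {
        return switch (r) {
            case Ok<T> ok -> f.apply(ok.value());
            case Err<T> err -> new Err<>(err.reason());
        };
    }

    static Result<Integer> parse(String s) {
        try {
            return new Ok<>(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return new Err<>("not a number: " + s);
        }
    }

    static Result<Integer> reciprocalPercent(int n) {
        return n == 0 ? new Err<>("division by zero") : new Ok<>(100 / n);
    }

    public static void main(String[] args) {
        // The failure mode is visible in the types, not hidden in throws.
        System.out.println(flatMap(parse("4"), ResultSketch::reciprocalPercent));    // Ok[value=25]
        System.out.println(flatMap(parse("zero"), ResultSketch::reciprocalPercent)); // Err[reason=not a number: zero]
    }
}
```

That `flatMap` is the workshop's point about composability: callers chain fallible steps without ever mentioning the word "monad".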
We will also explore typeclass-style abstractions (such as Show and Eq) and the limits of Java’s type system.
We have all seen the “Hello, World” of Spring AI: sending a prompt and getting a response. But as we move toward production, the real challenge is not the LLM call; it is the workflow. How do you ensure an agent does not loop infinitely? How do you coordinate multiple tools without a mess of “if-else” blocks? And how do we keep our Java-centric domain models at the heart of the AI’s reasoning?
Enter Embabel, a new JVM-based framework from Rod Johnson (creator of Spring) designed to bring discipline to agentic AI. Unlike Python-centric alternatives, Embabel is built on the philosophy of strong typing, OODA loops (Observe, Orient, Decide, Act), and Goal-Oriented Action Planning (GOAP).
In this session, we will go beyond basic RAG and explore how to build “digital workers” that can actually plan. You will learn:
• How to turn your existing Spring Beans into AI Actions
• The shift from imperative coding to Goal-Oriented orchestration
• How Embabel uses DICE (Domain-Integrated Context Engineering) to give agents true domain knowledge
• Why the JVM is actually the best place to run mission-critical AI agents
Join us for a code-heavy look at the future of Java backend development. We are moving to a world where our systems do not just respond to requests, but actively work to achieve goals.
The “Hello, World” of Spring AI involves sending a prompt and receiving a text response. This is no longer enough for production. To build enterprise grade AI, we must move beyond simple request and response cycles toward autonomous agents capable of reasoning, planning, and executing complex workflows. The challenge is doing this without losing the type safety, observability, and domain driven design that makes the Java ecosystem the backbone of enterprise software.
Join us for a three hour deep dive into Embabel, the new JVM framework from Rod Johnson designed for disciplined agentic AI. This workshop moves past the “if-else” mess of basic orchestration and introduces an architecture based on OODA loops and Goal Oriented Action Planning (GOAP).
Part 1: From Prompting to Planning (90 Minutes)
In the first half, we move from imperative logic to Goal Oriented orchestration. You will learn the core philosophy of Embabel and how it uses the OODA loop (Observe, Orient, Decide, Act) to maintain stateful awareness. This module focuses heavily on DICE (Domain-Integrated Context Engineering) which allows you to move beyond simple RAG by injecting your existing Java domain models directly into the agent’s reasoning process. You will learn the planning mindset by defining clear goals rather than rigid paths, allowing the AI to navigate your business rules dynamically.
Part 2: Building the Digital Worker (90 Minutes)
The second half is a hands-on lab where we turn theory into a functioning agent. We will explore the process of turning your existing Spring Beans into AI Actions by defining preconditions and effects so that Embabel can construct plans without infinite loops. We will also address what happens when an LLM hallucinates or a tool fails. This includes exploring advanced patterns for error handling and plan repair to demonstrate why the JVM is the superior environment for mission critical AI.
By the end of this workshop, you will have built a functional Digital Worker capable of navigating a complex domain and interacting with real Spring managed services. You will leave with a local prototype and a blueprint for bringing agentic AI to your organization. Participants should have experience with Spring Boot and Java or Kotlin as well as a laptop with a JDK 17+ environment.
Stop just calling APIs and start building workers. This workshop provides a code heavy look at the future of Java backend development where systems do not just respond to requests but actively work to achieve goals.
AI has increased what your team can produce. Which means more output to review, more decisions to make, more places where everything runs through you. The leverage is real — and so is the growing pile of work that was supposed to be off your plate. Part of that is systems. But a bigger part is the pull to just do it yourself — because it's faster, because it's easier than explaining it, because if something goes wrong you want to have been the one who touched it last. That pull is costing you. Not just time — the kind of work you actually want to be doing, and the headspace to do it well.
This is a working session — you'll work through real delegation situations, yours or someone else's, and learn as much from the people in the room as from the content. You'll leave with the skills to hand off work with trust, clarity, and accountability — one concrete delegation you've been avoiding, with a plan to execute in the upcoming week, and the foundation to keep building on it. Less bouncing back to your desk. More of your time on the work that only you can do. The kind of leader whose team actually runs without them in every conversation.
You've basically stopped writing significant amounts of code. AI does it — you check it, direct it, and sign off on it. The leverage is real and so is the uneasy feeling that comes with it. You're now responsible for output you didn't fully produce, at a volume you can't fully review, while leadership pushes for more speed. And nobody — not your org, not the industry — has figured out yet who's actually on the hook when something goes wrong.
Here's the honest truth: nobody has a clean answer yet — not your organization, not the industry, not the people writing the frameworks. So this session doesn't pretend to have one either. Instead, we'll pool what's actually working for people in the room, surface what's already helping teams stay more on top of it, and build something more useful than a handed-down answer. You'll also develop the skill to become calmer and more clearheaded when you're buried in pull requests and competing demands — and when something does go wrong. You'll leave with concrete approaches for navigating the gray zone and more practical ways to handle the pressure of leading when the rules are still being written.
Leadership is publicly pushing hard on AI — and your team is feeling it differently. Some are vocal about it, others aren't saying what's really on their mind. Meanwhile the classic hard conversations haven't gone away: missed deadlines, underperforming teammates, stakeholders pushing too hard. You're probably avoiding at least one conversation right now — not because you don't know it needs to happen, but because the last time you tried something like it, logic didn't land the way you expected — and the situation got harder, not easier.
This session gives you a practical framework for debugging communication the same way you'd debug code — and the skill to manage what makes these talks hard in the first place. The anxiety before, the rumination after, the pull to over-explain or go silent. You'll work through real situations in the room — yours or someone else's — and leave with more than you expected from the people around you. You'll leave with the skills to walk into difficult conversations with steadier confidence — and walk out with more clarity and less conflict. The conversation you've been avoiding becomes the one you're ready to have.
There are certain tech trends people at least know about, such as Moore's Law, even if they don't really understand them. But there are other forces at play in and around our industry that are unknown or ignored by the ever-diminishing tech journalism profession. They help explain and predict the pressures and influences we are seeing now or soon will.
In this talk, I will identify a variety of trends that are happening at various paces in intertwined ways at the technological, scientific, cultural, biological, and geopolitical levels and why Tech Leaders should know about them. Being aware of the visible and invisible forces that surround you can help you work with them, rather than against them. You will also be more likely to make good choices and thrive rather than being buffeted uncontrollably.
You’ve heard the buzz — now roll up your sleeves and build with it. In this hands-on workshop, you’ll learn exactly how the Model Context Protocol (MCP) works — and you’ll write your own MCP server tool from scratch, then author an Agent that uses it to deliver real-time, context-aware help right inside your dev flow.
We’ll break down the raw MCP protocol step by step:
• How it streams context between your IDE and Agents
• How messages are structured and exchanged
• How to wire up an MCP Client to talk to your new tool
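To give a taste of that message structure: MCP traffic is JSON-RPC 2.0, and a client invoking a server tool sends something roughly like the following (the tool name and arguments here are made up for illustration):

```java
// The rough shape of an MCP tools/call request (JSON-RPC 2.0). The tool
// name "search_docs" and its arguments are illustrative only.
public class McpMessageSketch {
    static String toolCallRequest() {
        return """
                {
                  "jsonrpc": "2.0",
                  "id": 1,
                  "method": "tools/call",
                  "params": {
                    "name": "search_docs",
                    "arguments": { "query": "rate limiting" }
                  }
                }""";
    }

    public static void main(String[] args) {
        // The server replies with a result (or error) carrying the same id.
        System.out.println(toolCallRequest());
    }
}
```

In the workshop you will build the server side that receives messages like this and the Agent that sends them.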
By the end, you’ll not only understand the protocol — you’ll have built a working MCP server tool and your own Agent that plugs into it to automate tasks, provide better suggestions, and boost your productivity.
Bring your curiosity — and your laptop — because you’ll walk away with practical code, a working prototype, and the confidence to build and
New languages often carry an operational burden to deployment and involve tradeoffs of performance for safety. Rust has emerged as a powerful, popular, and increasingly widely-used language for all types of development. Come learn why Rust is entering the Linux kernel and Microsoft and Google are favoring it for new development over C++.
This Introduction to Rust will introduce students to the various merits (and complexities) of this safe, fast, and popular new programming language that is taking the world by storm. This three-day course will cover everything students from various backgrounds will need to get started as successful Rust programmers.
Attendees will learn about and how to:
In this intensive 3-hour hands-on workshop, you'll learn to master the art and science of prompt engineering. Learn systematic frameworks for constructing effective prompts, from foundational elements to cutting-edge techniques including multi-expert prompting, probability-based optimization, and incentive framing. Through five progressive labs using Ollama and llama3.2:3b in GitHub Codespaces, you'll build production-ready templates and see quality improvements in real-time. Leave with immediately applicable techniques, reusable prompt patterns, and a decision framework for selecting the right approach for any AI task.
Modern AI systems deliver many capabilities, but their effectiveness depends entirely on how well they're prompted. This intensive workshop transforms prompt engineering from trial-and-error guesswork into a systematic, measurable discipline. You'll learn proven frameworks for constructing effective prompts and learn cutting-edge optimization techniques that deliver quality improvements in real-world applications.
Through five hands-on labs in GitHub Codespaces, you'll work with Ollama hosting llama3.2:3b to implement each technique, measure its impact, and build reusable templates. Every concept is immediately validated with code you can deploy tomorrow.
What You'll Master
The workshop progresses through five core competency areas, each reinforced with a practical lab:
Foundations of Effective Prompting begins with the six essential elements every prompt needs: task definition, context, constraints, role assignment, output format, and examples. You'll systematically transform a poorly-constructed prompt into an optimized version, measuring quality improvements at each step. This foundation eliminates the guesswork and establishes a repeatable framework for all future prompt engineering work.
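As a rough sketch (the content is ours, not the lab's), the six elements can be assembled into a single prompt template:

```java
// Illustrative only: assembling the six prompt elements into one string.
public class PromptSketch {
    static String buildPrompt() {
        return String.join("\n",
                "Role: You are a senior Java code reviewer.",                 // role assignment
                "Task: Review the snippet below for thread-safety bugs.",     // task definition
                "Context: The code runs inside a servlet container.",         // context
                "Constraints: Flag issues only; do not rewrite the code.",    // constraints
                "Output format: A numbered list, one issue per line.",        // output format
                "Example: 1. SimpleDateFormat field shared across threads."); // example
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt());
    }
}
```

The lab's point is that each element can be varied and measured independently, turning prompt improvement into a repeatable procedure rather than guesswork.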
Pattern-Based Techniques introduces few-shot learning and Chain of Thought (CoT) reasoning. Few-shot prompting teaches models through examples rather than explanations, dramatically improving consistency on classification and transformation tasks. Chain of Thought makes reasoning transparent, improving accuracy on complex problems by 20-40% while enabling you to verify the model's logic. You'll build a classification system and compare zero-shot, few-shot, and CoT approaches with measurable accuracy metrics.
Advanced Structural Techniques combines role-based prompting, structured outputs, and constrained generation into enterprise-ready patterns. You'll create an API documentation generator that uses expert personas, enforces strict formatting requirements, outputs reliable JSON, and maintains 90%+ consistency across diverse inputs. This lab produces production templates with automated validation—patterns you can immediately deploy in your organization.
Cutting-Edge Methods explores two powerful techniques gaining traction in 2025-2026. Multi-expert prompting simulates a council of experts (technical, business, security) analyzing complex decisions from multiple perspectives, catching blind spots that single-perspective prompts miss. Reverse prompting flips the traditional interaction: instead of you trying to perfectly specify requirements, the AI asks clarifying questions to discover what you really need. You'll measure 40-60% improvements in decision quality and 80-90% gains in requirement clarity.
Probabilistic and Incentive-Based Optimization introduces the latest research-backed techniques for extracting maximum quality from language models. Stanford's breakthrough probability-based prompting—requesting multiple responses with confidence scores—improves reliability by 30-50% on ambiguous tasks. Incentive framing (yes, “This is critical” and “Take your time” actually work) increases thoroughness by 20-40%. Combined, these techniques deliver 50-70% quality improvements on high-stakes decisions.
In this intensive 3-hour hands-on workshop, you'll learn to master the art and science of prompt engineering. Learn systematic frameworks for constructing effective prompts, from foundational elements to cutting-edge techniques including multi-expert prompting, probability-based optimization, and incentive framing. Through five progressive labs using Ollama and llama3.2:3b in GitHub Codespaces, you'll build production-ready templates and see quality improvements in real-time. Leave with immediately applicable techniques, reusable prompt patterns, and a decision framework for selecting the right approach for any AI task.
Modern AI systems deliver many capabilities, but their effectiveness depends entirely on how well they're prompted. This intensive workshop transforms prompt engineering from trial-and-error guesswork into a systematic, measurable discipline. You'll learn proven frameworks for constructing effective prompts and learn cutting-edge optimization techniques that deliver quality improvements in real-world applications.
Through five hands-on labs in GitHub Codespaces, you'll work with Ollama hosting llama3.2:3b to implement each technique, measure its impact, and build reusable templates. Every concept is immediately validated with code you can deploy tomorrow.
What You'll Master
The workshop progresses through five core competency areas, each reinforced with a practical lab:
Foundations of Effective Prompting begins with the six essential elements every prompt needs: task definition, context, constraints, role assignment, output format, and examples. You'll systematically transform a poorly-constructed prompt into an optimized version, measuring quality improvements at each step. This foundation eliminates the guesswork and establishes a repeatable framework for all future prompt engineering work.
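The six elements above can be combined into a reusable template. A minimal Python sketch (the function and field names are illustrative, not the workshop's actual template):

```python
def build_prompt(task, context, constraints, role, output_format, examples):
    """Assemble a prompt from the six foundational elements.

    Argument names are illustrative; the workshop may use its own template.
    """
    sections = [
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Output format: {output_format}",
        "Examples:\n" + "\n".join(examples),
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    task="Summarize the incident report in three bullet points.",
    context="The report describes a database outage on 2024-05-01.",
    constraints=["No speculation", "Under 50 words"],
    role="You are a senior site-reliability engineer.",
    output_format="Markdown bullet list",
    examples=["Input: outage report -> Output: - cause - impact - fix"],
)
print(prompt)
```

Starting every prompt from a structure like this is what makes the before/after quality measurements in the lab repeatable.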
Pattern-Based Techniques introduces few-shot learning and Chain of Thought (CoT) reasoning. Few-shot prompting teaches models through examples rather than explanations, dramatically improving consistency on classification and transformation tasks. Chain of Thought makes reasoning transparent, improving accuracy on complex problems by 20-40% while enabling you to verify the model's logic. You'll build a classification system and compare zero-shot, few-shot, and CoT approaches with measurable accuracy metrics.
Advanced Structural Techniques combines role-based prompting, structured outputs, and constrained generation into enterprise-ready patterns. You'll create an API documentation generator that uses expert personas, enforces strict formatting requirements, outputs reliable JSON, and maintains 90%+ consistency across diverse inputs. This lab produces production templates with automated validation—patterns you can immediately deploy in your organization.
Cutting-Edge Methods explores two powerful techniques gaining traction in 2025-2026. Multi-expert prompting simulates a council of experts (technical, business, security) analyzing complex decisions from multiple perspectives, catching blind spots that single-perspective prompts miss. Reverse prompting flips the traditional interaction: instead of you trying to perfectly specify requirements, the AI asks clarifying questions to discover what you really need. You'll measure 40-60% improvements in decision quality and 80-90% gains in requirement clarity.
Probabilistic and Incentive-Based Optimization introduces the latest research-backed techniques for extracting maximum quality from language models. Stanford's breakthrough probability-based prompting—requesting multiple responses with confidence scores—improves reliability by 30-50% on ambiguous tasks. Incentive framing (yes, “This is critical” and “Take your time” actually work) increases thoroughness by 20-40%. Combined, these techniques deliver 50-70% quality improvements on high-stakes decisions.
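The aggregation step behind probability-based prompting can be sketched simply: sample several completions, have each report a confidence score, and pick the answer with the highest total confidence. The function below is a simplified illustration, not the cited research technique, and the stubbed samples stand in for real model outputs:

```python
from collections import defaultdict

def pick_by_confidence(responses):
    """Sum self-reported confidence per distinct answer and return the
    answer with the highest total. A simplified stand-in for
    probability-based prompting."""
    totals = defaultdict(float)
    for answer, confidence in responses:
        totals[answer] += confidence
    return max(totals, key=totals.get)

# Stubbed model outputs: (answer, self-reported confidence)
samples = [("positive", 0.9), ("negative", 0.4), ("positive", 0.7)]
print(pick_by_confidence(samples))  # positive (total 1.6 vs 0.4)
```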
Modern system design has entered a new era. It’s no longer enough to optimize for uptime and latency — today’s systems must also be AI-ready, token-efficient, trustworthy, and resilient. Whether building global-scale apps, powering recommendation engines, or integrating GenAI agents, architects need new skills and playbooks to design for scale, speed, and reliability.
This full-day workshop blends classic distributed systems knowledge with AI-native thinking. Through case studies, frameworks, and hands-on design sessions, you’ll learn to design systems that balance performance, cost, resilience, and truthfulness — and walk away with reusable templates you can apply to interviews and real-world architectures.
Target Audience
Enterprise & Cloud Architects → building large-scale, AI-ready systems.
Backend Engineers & Tech Leads → leveling up to system design mastery.
AI/ML & Data Engineers → extending beyond pipelines to full-stack AI systems.
FAANG & Big Tech Interview Candidates → preparing for system design interviews with an AI twist.
Engineering Managers & CTO-track Leaders → guiding teams through AI adoption.
Startup Founders & Builders → scaling AI products without burning money.
Learning Outcomes
By the end of the workshop, participants will be able to:
Apply a 7-step system design framework extended for AI workloads.
Design systems that scale for both requests and tokens.
Architect multi-provider failover and graceful degradation ladders.
Engineer RAG 2.0 pipelines with hybrid search, GraphRAG, and semantic caching.
Implement AI trust & security with guardrails, sandboxing, and red-teaming.
Build observability dashboards for hallucination %, drift, token costs.
Reimagine real-world platforms (Uber, Netflix, Twitter, Instagram) with AI integration.
Practice mock interviews & chaos drills to defend trade-offs under pressure.
Take home reusable templates (AI System Design Canvas, RAG Checklist, Chaos Runbook).
Gain the confidence to lead AI-era system design in interviews, enterprises, or startups.
Workshop Agenda (Full-Day, 8 Hours)
Session 1 – Foundations of Modern System Design (60 min)
The new era: Why classic design is no longer enough.
Architecture KPIs in the AI age: latency, tokens, hallucination %, cost.
Group activity: brainstorm new KPIs.
Session 2 – Frameworks & Mindset (75 min)
The 7-Step System Design Framework (AI-extended).
Scaling humans vs tokens.
Hands-on exercise: token capacity planning.
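A token capacity planning exercise can start from back-of-the-envelope arithmetic like the sketch below (all figures are hypothetical):

```python
def tokens_per_second(requests_per_second, avg_prompt_tokens, avg_completion_tokens):
    """Rough token throughput required for a given request rate.

    Real planning would also account for retries, caching, and burstiness.
    """
    return requests_per_second * (avg_prompt_tokens + avg_completion_tokens)

# Hypothetical workload: 50 req/s, 1,200 prompt tokens, 300 completion tokens
print(tokens_per_second(50, 1200, 300))  # 75000 tokens/s
```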
Session 3 – Retrieval & Resilience (75 min)
RAG 2.0 patterns: chunking, hybrid retrieval, GraphRAG, semantic cache.
Multi-provider resilience + graceful degradation ladders.
Whiteboard lab: design a resilient RAG pipeline.
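A graceful degradation ladder can be sketched as an ordered list of providers with a static fallback at the bottom. This is a minimal illustration only; production ladders add timeouts, health checks, and per-provider budgets:

```python
def call_with_fallback(providers, request):
    """Try providers in priority order, degrading gracefully.

    `providers` is a list of (name, callable) pairs; each callable may raise.
    """
    for name, call in providers:
        try:
            return name, call(request)
        except Exception:
            continue  # degrade to the next rung of the ladder
    return "static-fallback", "Service degraded; please retry later."

def flaky(_request):
    raise TimeoutError("primary provider timed out")

ladder = [("primary", flaky), ("secondary", lambda r: f"answer for {r}")]
print(call_with_fallback(ladder, "q1"))  # ('secondary', 'answer for q1')
```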
Session 4 – Security & Observability (60 min)
Threats: prompt injection, data exfiltration, abuse.
Guardrails, sandboxing, red-teaming.
Observability for LLMs: traces, cost dashboards, drift monitoring.
Activity: STRIDE threat-modeling for an LLM endpoint.
Session 5 – Real-World System Patterns (90 min)
Uber, Netflix, Instagram, Twitter, Search, Fraud detection, Chatbot.
AI-enhanced vs classic system designs.
Breakout lab: redesign a system with AI augmentation.
Session 6 – Interviews & Chaos Drills (75 min)
Mock interview challenges: travel assistant, vector store sharding.
Peer review of trade-offs, diagrams, storytelling.
Chaos drills: provider outage, token overruns, fallback runbooks.
Closing (15 min)
Recap: 3 secrets (Scaling tokens, RAG as index, Resilient degradation).
Templates & takeaways: AI System Design Canvas, RAG Checklist, Chaos Runbook.
Q&A + networking.
Takeaways for Participants
AI System Design Canvas (framework for interviews & real-world reviews).
RAG 2.0 Checklist (end-to-end retrieval playbook).
Chaos Runbook Template (resilience drill starter kit).
AI SLO Dashboard template for observability + FinOps.
Confidence to design and defend AI-ready architectures in both career and enterprise contexts.
APIs built for humans often fail when consumed by AI agents.
They rely on documentation instead of contracts, return unpredictable structures, and break silently when upgraded. Large Language Models (LLMs) and autonomous agents need something different: machine-discoverable, deterministic, idempotent, and lifecycle-managed APIs.
This session introduces a five-phase API readiness framework—from discovery to deprecation—so you can systematically evolve your APIs for safe, predictable AI consumption.
You’ll learn how to assess current APIs, prioritize the ones that matter, and apply modern readiness practices: function/tool calling, schema validation, idempotency, version sunset headers, and agent-aware monitoring.
Problems Solved
What “AI-Readiness” Means
Common Failure Modes Today
Agenda
Introduction: The Shift from Human → Machine Consumption
Why LLMs and agents fundamentally change API design expectations.
Examples of human-centric patterns that break agent workflows.
Pattern 1: Assessment & Readiness Scorecard
How to audit existing APIs for AI-readiness.
Scoring dimensions: discoverability, determinism, idempotency, guardrails, lifecycle maturity.
Sample scorecard matrix and benchmark scoring.
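A scorecard like this can be reduced to a weighted average across the five scoring dimensions. A hypothetical sketch (dimension weights and scores are illustrative, not the session's benchmark values):

```python
DIMENSIONS = ["discoverability", "determinism", "idempotency",
              "guardrails", "lifecycle"]

def readiness_score(scores, weights=None):
    """Weighted AI-readiness score on a 0-5 scale per dimension."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total_weight

api = {"discoverability": 4, "determinism": 2, "idempotency": 1,
       "guardrails": 3, "lifecycle": 2}
print(round(readiness_score(api), 2))  # 2.4
```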
Pattern 2: Prioritization Strategy
How to choose where to start:
Key Framework References
Takeaways
Autonomous LLM agents don’t just call APIs — they plan, retry, chain, and orchestrate across multiple services.
That fundamentally changes how we architect microservices, define boundaries, and operate distributed systems.
This session delivers a practical architecture playbook for Agentic AI integration — showing how to evolve from simple request/response designs to resilient, event-driven systems.
You’ll learn how to handle retry storms, contain failures with circuit breakers and bulkheads, implement sagas and outbox patterns for correctness, and version APIs safely for long-lived agents.
You’ll leave with reference patterns, guardrails, and operational KPIs to integrate agents confidently—without breaking production systems.
Problems Solved
Why Now
What Is Agentic AI in Microservices
Agenda
Opening: The Shift to Agent-Driven Systems
How autonomous agents change microservice assumptions.
Why request/response architectures fail when faced with planning, chaining, and self-healing agents.
Pattern 1: Event-Driven Flows Use events, queues, and replay-safe designs to decouple agents from synchronous APIs. Patterns: pub/sub, event sourcing, and replay-idempotency.
Pattern 2: Saga and Outbox Patterns Manage long workflows with compensations. Ensure atomicity and reliability between DB and event bus. Outbox → reliable publish; Saga → rollback on failure.
Pattern 3: Circuit Breakers and Bulkheads Contain agent-triggered failure storms. Apply timeout, retry, and fallback policies per domain. Prevent blast-radius amplification across services.
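The circuit-breaker idea can be sketched in a few lines: open after consecutive failures, half-open after a cooldown. A minimal illustration, not a production implementation (no thread safety, no metrics):

```python
import time

class CircuitBreaker:
    """Opens after `max_failures` consecutive failures; half-opens after
    `reset_after` seconds to permit a single trial call."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.reset_after:
            self.opened_at = None          # half-open: allow one trial call
            self.failures = self.max_failures - 1
            return True
        return False

    def record(self, success):
        if success:
            self.failures = 0
            self.opened_at = None
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()

breaker = CircuitBreaker(max_failures=2)
breaker.record(False)
breaker.record(False)
print(breaker.allow())  # False: circuit is open after two failures
```

Bulkheads complement this by partitioning resources per domain, so one agent-triggered failure storm cannot exhaust every pool.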
Pattern 4: Service Boundary Design Shape services around tasks and domains — not low-level entities. Example: ReserveInventory, ScheduleAppointment, SubmitClaim. Responses must return reason codes + next actions for agent clarity. Avoid polymorphic or shape-shifting payloads.
Pattern 5: Integrating Agent Frameworks Connect LLM frameworks (Agentforce, LangGraph) safely to services. Use operationId as the agent tool name; enforce strict schemas. Supervisor/planner checks between steps. Asynchronous jobs: job IDs, progress endpoints, webhooks.
Pattern 6: Infrastructure and Operations
Wrap-Up: KPIs and Guardrails for Production Key metrics: retry rate, success ratio, agent throughput, event replay lag. Lifecycle governance: monitoring, versioning, deprecation, and sunset plans.
Key Framework References
Takeaways
Building AI isn’t just about prompting or plugging into an API — it’s about architecture. This workshop translates Salesforce’s Enterprise Agentic Architecture blueprint into practical design patterns for real-world builders.
You’ll explore how Predictive, Assistive, and Agentic patterns map to Salesforce’s Agentforce maturity model, combining orchestration, context, and trust into cohesive systems. Through hands-on modules, participants design a Smart Checkout Helper using Agentforce, Data Cloud, MCP, and RAG—complete with observability, governance, and ROI mapping.
Key Takeaways
Agentic Architecture Foundations: Understand multi-agent design principles — decomposition, decoupling, modularity, and resilience.
Pattern Literacy: Apply the Orchestrator, Domain SME, Interrogator, Prioritizer, Data Steward, and Listener patterns.
Predictive–Assistive–Agentic Continuum: Align AI maturity with business intent — from prediction and guidance to autonomous execution.
RAG Grounding & Context Fabric: Integrate trusted enterprise data via Data Cloud and MCP for fact-based reasoning.
Multi-Agent Orchestration: Implement Orchestrator + Worker topologies using A2A protocol, Pub/Sub, Blackboard, and Capability Router.
Governance & Trust: Embed privacy, bias mitigation, observability, and audit trails — design for CIO confidence.
Business Alignment: Use the Jobs-to-Be-Done and Agentic Map templates to connect AI outcomes with ROI.
Agenda
Module 1 – Enterprise Agentic Foundations
Module 2 – The Big 3 Patterns: Predictive, Assistive, Agentic
Module 3 – Predictive AI → Foresight in Systems
Module 4 – Assistive AI → Guiding Humans
Module 5 – Agentic AI → Autonomy in Action
Module 6 – Agentic Map & Jobs-to-Be-Done Framework
Module 7 – RAG & Context Fabric
Module 8 – Multi-Agent Orchestration with MCP
Module 9 – Governance & Guardrails
Module 10 – From Prototype to Production
What You’ll Leave With
Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs and about half are design flaws. A significant number of the bugs can be found through automated testing tools which frees you up to focus on the more pernicious design issues. Even in the time of AI, there's a discussion to be had.
Beyond detecting the presence of common bugs, as we have done with static analysis for years, we can also automate the application of corrective refactoring. In this talk, I will discuss using OpenRewrite and the Moderne CLI to fix common security issues and keep them from coming back.
In this talk we will focus on:
AI, agentic workflows, digital twins, edge intelligence, spatial computing, and blockchain trust are converging to reshape how enterprises operate.
This session introduces Enterprise Architecture 4.0—a practical, future-ready approach where architectures become intelligent, adaptive, and continuously learning.
You’ll explore the EA 4.0 Tech Radar, understand the six major waves of disruption, and learn the ARCHAI Blueprint—a structured framework for designing AI-native, agent-ready, and trust-centered systems.
Leave with a clear set of patterns and a 12-month roadmap for preparing your enterprise for the next era of intelligent operations.
⸻
KEY TAKEAWAYS
• Understand the EA 4.0 shift toward intelligent, agent-driven architecture
• Learn the top technology trends: AI, agents, edge, twins, spatial, blockchain, and machine customers
• See how the ARCHAI Blueprint structures AI-first design and governance
• Get practical patterns for agent safety, digital twins, trust, and ecosystem readiness
• Leave with a concise 12-month roadmap for implementing EA 4.0
⸻
AGENDA
– The Speed of Change
Why traditional enterprise architecture cannot support AI-native, agent-driven systems.
– The EA 4.0 Tech Radar
A 3–5 year outlook across:
• Agentic AI
• Edge intelligence
• Digital twins
• Spatial computing
• Trusted automation (blockchain)
• Machine customers
– The Six Waves of Transformation
Short deep dives into each wave with real enterprise use cases.
– The ARCHAI Blueprint
A clear architectural framework for AI-first enterprises:
• Attention & Intent Modeling
• Retrieval & Knowledge Fabric
• Capability & Context Models
• Human + Agent Co-working Patterns
• Action Guardrails & Safety
• Integration & Intelligence Architecture
This gives architects a single, unified design methodology across all emerging technologies.
– The Architect’s Playbook
Practical patterns for:
• Intelligence fabrics
• Agent-safe APIs
• Digital twin integration
• Trust & decentralized identity
• Ecosystem-ready design
– Operationalizing EA 4.0
How architecture teams evolve:
• New EA roles
• Continuous planning
• Agent governance
• EA dashboards
• The 12-month adoption roadmap
AI agents don’t behave like humans. A single prompt can trigger thousands of parallel API calls, retries, and tool chains—creating bursty load, cache-miss storms, and runaway costs. This talk unpacks how to design and operate APIs that stay fast, reliable, and affordable under AI workloads. We’ll cover agent-aware rate limiting, backpressure & load shedding, deterministic-result caching, idempotency & deduplication, async/event-driven patterns, and autoscaling without bill shock. You’ll learn how to tag and trace agent traffic, set SLOs that survive tail latency, and build graceful-degradation playbooks that keep experiences usable when the graph goes wild.
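Idempotency and deduplication, for example, can be sketched as a handler that stores the result of the first call per idempotency key and replays it on agent retries. A minimal illustration; real services persist keys durably with a TTL:

```python
class IdempotentHandler:
    """Execute the wrapped handler once per idempotency key; replay the
    stored result for duplicate (retried) requests."""

    def __init__(self, handler):
        self.handler = handler
        self.results = {}

    def handle(self, idempotency_key, payload):
        if idempotency_key not in self.results:
            self.results[idempotency_key] = self.handler(payload)
        return self.results[idempotency_key]

calls = []
def charge(payload):
    calls.append(payload)  # track real executions
    return {"status": "charged", "amount": payload["amount"]}

api = IdempotentHandler(charge)
api.handle("key-1", {"amount": 42})
api.handle("key-1", {"amount": 42})  # agent retry: not re-executed
print(len(calls))  # 1
```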
Why scaling is different with AI
Failure modes to expect (and design for)
Traffic control & fairness
Resilience patterns
Caching that actually works for AI
Async & event-driven designs
Autoscaling without bill shock
Observability & cost governance
Testing & readiness
Runbooks & playbooks
Deliverables for attendees
Learning Objectives (Takeaways)
As enterprises rush to embed large language models (LLMs) into apps and platforms, a new AI-specific attack surface has emerged. Prompt injections, model hijacking, vector database poisoning, and jailbreak exploits aren’t covered by traditional DevSecOps playbooks.
This full-day, hands-on workshop gives architects, platform engineers, and security leaders the blueprint to secure AI-powered applications end-to-end. You’ll master the OWASP LLM Top 10, integrate AI-specific controls into CI/CD pipelines, and run live red-team vs blue-team exercises to build real defensive muscle.
Bottom line: if your job involves deploying, securing, or governing AI systems, this workshop shows you how to do it safely—before attackers do it for you.
What You’ll Learn
Who Should Attend
Takeaways
Agenda
Module 1 – The New AI Attack Surface
Module 2 – OWASP LLM Top 10 Deep Dive
Module 3 – DevSecOps Patterns for LLMs
Module 4 – Real-World Threat Simulations
Module 5 – Business Impact & Mitigation Framework
Graphs aren’t just academic—they power the backbone of real systems: workflows (Airflow DAGs), build pipelines (Bazel), data processing (Spark DAGs), and microservice dependencies (Jaeger).
This session demystifies classic graph algorithms—BFS, DFS, topological sort, shortest paths, and cycle detection—and shows how to connect them to real-world systems.
You’ll also see how AI tools like ChatGPT and graph libraries (Graphviz, NetworkX, D3) can accelerate your workflow: generating adjacency lists, visualizing dependencies, and producing test cases in seconds.
You’ll leave with reusable patterns for interviews, architecture reviews, and production systems.
Why Now
Problems Solved
Learning Outcomes
Agenda
Opening: From Whiteboard to Production
Why every large-scale system is a graph in disguise.
How workflows, microservices, and dependency managers rely on graph structures.
Pattern 1: Graphs in the Real World
Examples:
Pattern 2: Core Algorithms Refresher
Pattern 3: AI-Assisted Graph Engineering How to use AI tools to accelerate graph work:
Pattern 4: Graph Patterns in Architecture Mapping algorithms to system design:
Pattern 5: AI Demo Prompt → adjacency list → Graphviz/NetworkX render → algorithmic validation. Demonstrate quick prototyping workflow with AI assistance.
Wrap-Up: From Algorithms to Architectural Intuition How graph literacy improves system reliability and scalability. Checklist and reusable templates for ongoing graph-based reasoning.
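One of the core algorithms in play, topological sort with cycle detection, can be sketched in a few lines of Python (the pipeline graph below is hypothetical):

```python
from collections import deque

def topo_sort(graph):
    """Kahn's algorithm: return a valid run order for a DAG of tasks,
    or raise ValueError if the dependency graph contains a cycle.

    `graph` maps each node to the nodes that depend on it (successors).
    """
    indegree = {node: 0 for node in graph}
    for successors in graph.values():
        for succ in successors:
            indegree[succ] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in graph[node]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                queue.append(succ)
    if len(order) != len(graph):
        raise ValueError("cycle detected in dependency graph")
    return order

# Hypothetical pipeline: extract and validate must run before load
pipeline = {"extract": ["transform"], "transform": ["load"],
            "validate": ["load"], "load": []}
print(topo_sort(pipeline))
```

The same routine doubles as a cycle detector for build pipelines and workflow DAGs: a cycle means the schedule is unsatisfiable.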
Key Framework References
Takeaways
The scope keeps expanding. The team still needs direction. Leadership wants more output. And the AI landscape is moving faster than anyone can fully keep up with. For a lot of technical leaders, somewhere along the way the job quietly shifted from leading to just keeping up. Decisions get made from whatever state you happen to be in when the next thing lands — and it's been costing you focus, clarity, and probably sleep.
Most leadership training addresses what to do. Less often does it address the skill of staying clear-headed while you're doing it — under real pressure, in real time, with a team and boss looking to you for answers. This full-day workshop is built around that gap. You'll work through real pressure situations — the kind you actually face — and develop practical skills for staying grounded when they spike, so you can lead with steadiness instead of reaction. You'll learn as much from the honest conversations in the room as from the content itself.
You'll leave with skills you can use in those moments — the ones you already know need them. More of your best thinking available when it matters most. The kind of leader you already know you can be, more of the time. And more done with less of the grind that's been wearing you down.
You got into this because you loved building things. With your own hands, your own mind, your own code. There was a craftsman feel to it — wrestling a hard problem to the ground, the satisfaction of elegant code, the identity that came with being the person who could figure it out. That identity took years to build. And here's the part that stings: being technically savvy didn't protect you from this one. If anything, it meant you understood exactly what was happening — and couldn't stop it. In the last few months, quietly and quickly, the ground shifted. You're still here. You're still valuable. But something about how you relate to the work has changed — and nobody's really talking about it.
This session names what's actually happening — because naming it is the first step to working through it. Some in the room are energized by the shift. Others are afraid of it. Most are somewhere in between and haven't had a safe place to say so. Through honest conversation with the people around you, you'll start to separate what's actually changing from what's staying true about who you are — and leave with a clearer sense of where you stand, and what to build on, while the ground keeps moving.
A few months ago you had a decent handle on what your team was building — or you were building it yourself. Knowing things is how you got here, and it still matters. But everything your team touches is moving faster than your ability to fully audit it, and the output is coming faster than anyone can fully evaluate. Some days that's exhilarating. Other days your team looks to you for answers — and you're not sure you're the right person to ask anymore.
This session gives you practical skills to stay functional and credible when the answer isn't there yet — not as a workaround, but as something you can build and rely on. You can use them in the 90 seconds before you walk into a conversation, or in the moment someone asks you something you can't answer yet.
Through real scenarios and honest conversation with the people around you, you'll leave with something steadier than certainty — the ability to lead clearly in the moment, even when the answer comes later.
AI agents are not just for developers. They are personal operating systems for your professional and personal life. In this session, Ken shares what it actually looks like to live and work with a personal AI agent — from morning briefs to travel ops to speaking pipeline automation — and provides a practical framework you can start deploying the same week.
Everyone talks about AI. Fewer people show what it looks like to actually live with one.
In this session, Ken shares his real-world deployment of a personal AI agent that runs across his work, speaking career, and personal life. This is not a demo of ChatGPT prompts. This is an operating model — built incrementally over time — that handles morning briefings, calendar privacy bridges, travel logistics, speaking pipeline automation, secure vault retrieval, relationship nudges, and nightly content creation while he sleeps.
The session covers a four-stage framework: Build (what your agent knows), Trust (the autonomy ramp), Delegate (what to hand off first), and Compound (where the real leverage comes from).
Attendees will see live or recorded demonstrations of real workflows, including:
We also cover safety and trust design — how to define what your agent can do autonomously versus what requires your approval — and how to build a context-rich memory system that makes the agent genuinely useful over time.
Outcomes:
Note: This talk is best when delivered with live demonstrations. Ken runs this system daily and can demo real workflows in real time. No slides required for the demo sections — the agent speaks for itself.
Most teams treat incidents as technical failures. Great teams treat them as coordination failures under stress. This session gives engineering leaders a practical incident command system they can apply immediately: roles, communication cadence, decision logging, escalation paths, and postmortems that create learning instead of fear.
When incidents hit, technology matters — but leadership determines outcomes. This session walks through an operating model for incident response that scales across teams and time zones without chaos.
We cover clear roles (incident commander, comms lead, operations lead, and scribe), fast status loops, and decision frameworks that lower risk under pressure. You’ll see practical templates for timeline capture, stakeholder communication, and recovery prioritization.
We also cover the most ignored part: after-action learning. You’ll leave with a blameless postmortem structure that improves systems, process, and team behavior instead of assigning guilt.
Includes realistic scenarios, facilitation techniques for cross-functional pressure moments, and a leadership checklist you can use in your next production incident.
Outcomes:
No panic theater. Just practical leadership patterns that work when production is on fire and Slack has gone feral.
Reliable systems are not accidents. They are designed with explicit operating limits. This session translates lessons from high-risk domains into practical engineering guardrails for microservices: latency budgets, timeout strategy, retry discipline, concurrency limits, and blast-radius controls.
In high-consequence systems, teams define and respect operating limits. Software teams should do the same.
This session introduces an operating-limits model for modern microservices and platform environments. We’ll map common failure patterns (retry storms, cascading timeouts, queue overload, dependency fan-out) to concrete design and operational constraints that prevent small issues from becoming full incidents.
You’ll learn practical techniques for timeout layering, bulkheads, error budgets, load shedding, progressive degradation, and observability signals that reveal approaching limits before customers feel impact.
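As one concrete example of these operating limits, retry discipline usually combines exponential backoff, full jitter, and a hard delay cap so retries cannot compound into storms. A minimal sketch:

```python
import random

def backoff_schedule(max_attempts, base=0.1, cap=5.0, rng=random.random):
    """Exponential backoff with full jitter, capped at `cap` seconds.

    Bounded attempts plus a delay ceiling keep retries from amplifying
    a small outage into a retry storm.
    """
    delays = []
    for attempt in range(max_attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng() * ceiling)  # full jitter: uniform in [0, ceiling)
    return delays

# With jitter pinned to 1.0 the pure exponential ceilings are visible:
print(backoff_schedule(5, rng=lambda: 1.0))  # [0.1, 0.2, 0.4, 0.8, 1.6]
```

Pair the schedule with a retry budget per dependency; once the budget is spent, fail fast instead of queuing more work.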
We’ll also cover leadership practices: how to align teams around reliability contracts and how to enforce guardrails without turning architecture into bureaucracy.
Outcomes:
Yes, we will talk about when your retries are lying to you. And no, adding one more queue is not always the answer.
Have you seen early productivity gains from AI, only to watch them disappear under growing complexity and production incidents? You're not alone. There's a common reason: many production systems already struggle with technical debt. When AI agents enter the development loop, that debt becomes a multiplier. Poor-quality code not only increases defects and costs; it also dramatically raises AI risk by driving high breakage rates, turning promising AI agents into legacy-code generators rather than genuine help.
Fortunately, there's hope on the horizon. In this talk, Adam Tornhill shows how organizations can achieve both speed and quality with AI. Backed by large-scale empirical studies on AI coding and developer productivity, we separate what works from what doesn't in real-world systems. Building on these findings, we then look at a practical framework for driving and sustaining AI-friendly code at scale. The AI revolution is here. Is your code ready?
AI agents don’t struggle with syntax. They struggle with missing intent, non-expressive code, and surprising dependencies. Historically, we were supposed to write code for human readers, code that fits our cognitive limits and supports collaboration. In reality, much of our industry has fallen short.
That comes back to bite us.
When AI agents enter the development loop, they amplify those same problems. Where a human developer will ask questions and seek clarification, an AI often proceeds without it, making its best guess from patterns in code that was never designed to be unambiguous.
Code that is hard for humans to understand becomes unreliable for AI.
In this talk, Adam Tornhill shows how to turn that around. You’ll learn the key principles behind AI-friendly code and apply practical AI-assisted refactoring patterns that make those principles concrete. The focus is not on generating more code, but on improving the code you already have so AI becomes reliable instead of risky. All recommendations are grounded in AI research and cognitive psychology.
The Model Context Protocol (MCP) standardizes how AI agents connect to external data and tools.
Moving beyond local experiments, this talk explores advanced MCP architectures: local vs. remote server deployments, advanced human-in-the-loop features, and hosting and scaling strategies for remote MCP servers. Using Java code, we will walk through MCP features, highlighting how to use them in AI agents.
Java is evolving rapidly, not just in performance and scalability, but in how enjoyable it is to write. In this session, we explore recent language and platform features designed to reduce friction, improve expressiveness, and help developers focus on solving problems instead of wrestling with boilerplate.
We’ll cover tools and features such as the Java Almanac and Playground for exploration, Stream Gatherers for building powerful data pipelines, Module Import Declarations to simplify modular development, and Lazy Constants for safer and more efficient initialization. Together, these changes signal a clear direction: Java is becoming more concise, more flexible, and more developer-friendly than ever before.
This session highlights a set of recent Java features that improve everyday development. The focus is on reducing ceremony, improving readability, and enabling more expressive code while staying within familiar Java patterns.
We begin with the Java Almanac and Java Playground, which provide ways to explore the language and quickly experiment with its features. These tools help developers learn, prototype, and validate ideas with less setup.
Next, we cover Stream Gatherers, an enhancement to the Streams API that allows developers to create custom intermediate operations. This makes it easier to express patterns like grouping, batching, and windowing directly within a stream pipeline.
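As a small taste of what the session covers, here is a minimal sketch using one of the built-in gatherers, `Gatherers.windowFixed`, to batch a stream into fixed-size windows (requires JDK 24 or later; the input values are illustrative):

```java
import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GathererDemo {
    // Batch a stream into fixed-size windows with a built-in gatherer.
    static List<List<Integer>> batches() {
        return Stream.of(1, 2, 3, 4, 5, 6, 7)
                .gather(Gatherers.windowFixed(3)) // custom intermediate operation
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(batches()); // [[1, 2, 3], [4, 5, 6], [7]]
    }
}
```

Custom gatherers follow the same shape: you implement the `Gatherer` interface (or use its factory methods) to plug your own intermediate operation into a pipeline.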
We then explore Module Import Declarations, which simplify working with the Java Module System by reducing verbosity and making dependencies easier to manage. This lowers the barrier to adopting modules in real applications.
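For a flavor of the feature, a minimal sketch (module imports were finalized in JDK 25; earlier JDKs will not compile this):

```java
// A module import (JDK 25+) makes every package exported by java.base
// available in one line -- no per-package import statements needed.
import module java.base;

public class ModuleImportDemo {
    // List, Map, and Collectors all resolve through the single import above.
    static Map<String, Integer> nameLengths(List<String> names) {
        return names.stream()
                .collect(Collectors.toMap(n -> n, String::length));
    }

    public static void main(String[] args) {
        System.out.println(nameLengths(List.of("ada", "grace")));
    }
}
```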
Finally, we look at Lazy Constants, which provide a safer and more flexible approach to initializing values only when needed. This improves performance characteristics while maintaining clarity and correctness.
By the end of the session, attendees will understand how these features contribute to a more streamlined Java experience and how they can apply them to write cleaner and more maintainable code.
You can opt to run the examples on GitHub Codespaces, in which case you don't need any setup.
Otherwise, if you want to run locally:
Java’s concurrency model has undergone one of its most significant transformations in decades. This session introduces the core features behind that shift, enabling the development of highly concurrent applications with a simpler, more intuitive programming style.
We will explore Virtual Threads, Structured Concurrency, and Scoped Values. Together, these features allow developers to write code that looks sequential while scaling to handle large numbers of concurrent tasks, with improved clarity, safety, and maintainability.
This session focuses on the modern concurrency features introduced as part of Project Loom. These features change how developers approach parallelism and coordination in Java applications.
We begin with Virtual Threads, which provide lightweight threads managed by the JVM. They allow applications to scale to a large number of concurrent operations without the complexity of thread pools or reactive frameworks. Developers can write straightforward blocking code while still achieving high throughput.
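A minimal sketch of the style the session demonstrates, using the virtual-thread-per-task executor available since JDK 21 (the task count and sleep duration are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    // Run many blocking tasks, one virtual thread each. The code is plain
    // blocking style, yet thousands of tasks run with minimal memory cost.
    static int countCompleted(int tasks) {
        AtomicInteger done = new AtomicInteger();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10); // blocking call; parks the virtual thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(countCompleted(1000)); // prints 1000
    }
}
```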
Next, we examine Structured Concurrency, which introduces a way to organize concurrent tasks as a single unit of work. This approach simplifies error handling, cancellation, and lifecycle management by ensuring that related tasks are started and completed together.
We then explore Scoped Values, a safer alternative to ThreadLocal. Scoped Values allow data to be shared across a well-defined execution boundary, making context propagation more predictable and easier to reason about in concurrent programs.
By the end of the session, attendees will understand how these features work together to simplify concurrent programming in Java. They will gain a clear mental model for writing scalable applications using a style that is both readable and robust.
You can opt to run the examples on GitHub Codespaces, in which case you don't need any setup.
Otherwise, if you want to run locally:
Java is becoming easier to start with while continuing to push performance forward. This session explores features that simplify how programs are written and executed, as well as APIs that enable more efficient use of modern hardware.
We will cover Simple Source Files and Instance Main Methods, Launching Multi-File Source-Code Programs, the built-in Java WebServer, Value Objects, and the Vector API. These features make it possible to write lightweight applications with minimal setup while still leveraging Java’s performance.
This session focuses on features that reduce the barrier to writing and running Java programs, while also introducing tools for building efficient and high-performance applications.
We begin with Simple Source Files and Instance Main Methods, which remove much of the traditional ceremony required to start a Java program. Developers can write code more directly, making Java more approachable for quick tasks, scripting, and teaching.
Next, we explore Launching Multi-File Source-Code Programs, which allows multiple source files to be executed without a separate compilation step. This enables more realistic applications to be built and run with minimal setup, bridging the gap between simple scripts and structured programs.
We then look at the Java WebServer, a lightweight built-in server for serving content and testing applications locally. This feature makes it easy to spin up a simple server without external dependencies.
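For example, the built-in file server (JDK 18+) can be launched from the command line with `jwebserver` or started programmatically; a minimal sketch, with the port chosen automatically:

```java
import com.sun.net.httpserver.HttpServer;
import com.sun.net.httpserver.SimpleFileServer;
import java.net.InetSocketAddress;
import java.nio.file.Path;

public class WebServerDemo {
    // Serve the current directory over HTTP. Port 0 asks the OS for any
    // free port; the root must be an absolute directory path.
    static HttpServer start() {
        Path root = Path.of(".").toAbsolutePath().normalize();
        HttpServer server = SimpleFileServer.createFileServer(
                new InetSocketAddress(0),
                root,
                SimpleFileServer.OutputLevel.NONE);
        server.start();
        return server;
    }

    public static void main(String[] args) {
        HttpServer server = start();
        System.out.println("Serving on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

The equivalent one-liner from a shell is `jwebserver -p 8000`, which serves the current directory with the same defaults.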
After that, we introduce Value Objects, which represent identity-free data and improve both correctness and performance. By focusing on immutable, value-based design, developers can write code that is easier to reason about and better aligned with modern JVM optimizations.
Finally, we examine the Vector API, which provides access to hardware-level optimizations for numerical and data-parallel operations. This allows developers to write code that leverages SIMD capabilities while remaining within the Java ecosystem.
By the end of the session, attendees will understand how Java supports both rapid development and high performance, and how these features can be combined to build applications that are simple, efficient, and modern.
You can opt to run the examples on GitHub Codespaces, in which case you don't need any setup.
Otherwise, if you want to run locally:
Java continues to evolve its language features while expanding its ability to interact with native code. This session focuses on improvements that make Java more expressive and flexible, as well as capabilities that bring it closer to the underlying system.
We will explore Primitive Patterns in instanceof and switch, Flexible Constructor Bodies, and the Foreign Function and Memory (FFM) API. These features improve how developers write conditional logic, construct objects, and integrate with native libraries, all while maintaining Java’s focus on safety and clarity.
This session highlights language and platform features that enhance expressiveness and extend Java’s reach beyond the JVM.
We begin with Primitive Patterns in instanceof and switch, which expand pattern matching to support primitive types. This allows developers to write clearer, more concise conditional logic, reducing boilerplate and improving readability in common control-flow scenarios.
Next, we explore Flexible Constructor Bodies, which relax previous constraints on constructor structure. Developers can perform logic before delegating to another constructor or superclass, enabling more natural object initialization and better alignment with real-world design needs.
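A minimal sketch of the idea (flexible constructor bodies were finalized in JDK 25; the temperature example is illustrative):

```java
public class Temperature {
    final double celsius;

    Temperature(double celsius) {
        this.celsius = celsius;
    }

    // Flexible constructor bodies (JDK 25+): statements may now appear
    // *before* the delegating this(...) call, so the argument can be
    // parsed and validated first.
    Temperature(String fahrenheit) {
        double f = Double.parseDouble(fahrenheit.trim());
        if (f < -459.67) {
            throw new IllegalArgumentException("below absolute zero");
        }
        this((f - 32.0) * 5.0 / 9.0);
    }

    public static void main(String[] args) {
        System.out.println(new Temperature("212").celsius); // 100.0
    }
}
```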
Finally, we examine the Foreign Function and Memory (FFM) API, which provides a modern and safe way to interact with native code and memory. This API replaces many of the complexities of JNI, allowing Java applications to call native libraries and manage off-heap memory with greater clarity and control.
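As a small illustration, here is a sketch that calls the C standard library's `strlen` through the FFM API (final since JDK 22; assumes a platform whose default lookup exposes `strlen`):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class FfmDemo {
    // Look up the C library's strlen and call it on a Java string.
    static long strlen(String s) {
        Linker linker = Linker.nativeLinker();
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        // A confined arena scopes the off-heap copy of the string; it is
        // freed deterministically when the try block exits.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom(s); // NUL-terminated copy
            return (long) strlen.invokeExact(cString);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(strlen("hello")); // 5
    }
}
```

Compare this with JNI, where the same call would require a generated header, a hand-written C stub, and a separately compiled native library.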
By the end of the session, attendees will understand how these features make Java more expressive as a language and more powerful as a platform, opening the door to cleaner code and new types of applications.
You can opt to run the examples on GitHub Codespaces, in which case you don't need any setup.
Otherwise, if you want to run locally:
Most software engineering leaders struggle for a reason no one talks about.
They were promoted for being great at writing code - and then handed responsibility for humans.
Meetings.
Conflict.
Silence.
Motivation.
Trust.
We quietly expect the same instincts that worked for technical systems to work for people.
They don’t.
In this session, we’ll explore leadership through a systems lens - not as a set of personality traits or management hacks, but as an adaptive human ecosystem that responds to risk, safety, and meaning.
You’ll learn:
• Why silence in meetings is rarely agreement
• How everyday leadership behaviors quietly train teams to wait, escalate, or disengage
• What makes leaders accidentally become bottlenecks
• Why “helpful” interventions often have unintended consequences
• How human systems learn - even when you’re not teaching
This is not a talk about tools, frameworks, or performance management templates.
It’s about seeing what’s actually happening in your team - and realizing how your own behavior shapes the system you’re leading.
Drawing on real-world stories from 20+ years in software engineering leadership (and a background in anthropology), this session gives leaders a language for the invisible dynamics they’ve felt but never been able to name.
Attendees will leave with:
• A new mental model for leadership
• A sharper lens for reading group behavior
• And a deeper understanding of how to create environments where people actually think, speak, and decide
No blame.
No buzzwords.
Just a clearer view of the human system you’re already inside.
Our industry is in the process of rethinking computational systems. Extreme demand for computational power, and for the energy it consumes, is a defining feature of modern data centers and runtime platforms. How many calculations can we produce, and at what energy cost? The limits are a confluence of materials science, system-design complexity, and the fundamental laws of physics.
It's about to get weird as we enter the world of quantum and biological systems.
We started with coprocessors, FPGAs, ASICs, GPUs, and DSPs as lower-power, high-performance custom hardware. We're now seeing the emergence of neural processing units and tensor processing units as well.
But we are on the cusp of enormous shifts in what's possible computationally with the advent of quantum and biological systems. Not every computational element is suitable for every problem, but quantum computing will handle some problems impossibly fast. Artificial biological brains will be able to perform computations, like the human brain, on the power budget of a light bulb.
Come hear how things are already in the process of changing as well as what is likely to come next.
Gartner just declared the semantic layer a non-negotiable foundation for AI. Most of the industry responded with a blank stare.
This presentation is the answer to that blank stare.
Your AI has a dirty secret: there is no mechanism in its architecture for truth. Only probability. Every response is a hallucination — most just happen to overlap with the facts. The philosophers figured out why 2,500 years ago, and they also gave us the solution. Plato defined knowledge as justified true belief. RAG is our architecture for justification. But there's a problem — your structured data is wholly inaccessible to it, because your JSON is full of magic strings that mean nothing outside the system that generated them.
This presentation shows you how to fix that. Not with a new framework, a bigger model, or an enterprise triple store. With a discipline — the discipline of making meaning explicit. JSON-LD, RDFS, OWL, and Schema.org form a standards stack that has been quietly solving this problem for 30 years. Your AI is already fluent in it. Half the web already speaks it. Google built an empire on it.
You'll leave with a concrete understanding of what the semantic layer actually is, why it matters, and — most importantly — how to start building it this week with the APIs you already have.
Your data isn't worthless. AI just doesn't know what it means yet.
If everyone agrees with you, you’re probably not innovating, you’re just conforming faster. History’s breakthroughs rarely came from consensus; they came from heretics, hackers, and the hopelessly curious. In this talk, Michael Carducci takes aim at the myth of collective wisdom and explores why the crowd is almost always optimized for the past. Through stories of misfits who changed the world—from computing pioneers to magicians who reinvented wonder; Carducci reveals the hidden patterns of real innovation: discomfort, doubt, and persistence in the face of polite disbelief.
You’ll learn how to recognize the subtle forces that suppress new ideas, how to trust your intuition when it runs counter to consensus, and how to cultivate the curiosity and courage that real innovation demands. This is a talk for the misfits, the tinkerers, and the quietly visionary… because progress has always started at the edges.
In this 90-minute workshop, you will move beyond simply “fixing bugs” to mastering the art of world-class code quality. Most developers spend their days in “haunted” codebases where technical debt feels like a terminal diagnosis. But what if you had a diagnostic suite that didn't just point out problems, but actively helped you or the AI to perform the surgery itself?
Using the CodeHealth MCP, you will learn to use AI effectively, without slop, so you can focus on what really matters - systems thinking, architecture, planning, etc. We shift the focus from reactive “firefighting” to proactive “preventative medicine,” ensuring your codebase remains maintainable and high-quality as it scales.
You will learn to:
Triage Technical Debt: Use the CodeHealth MCP to scan for “symptoms” (smells and anti-patterns) and prioritize what to fix first based on impact.
Perform Non-Invasive Surgery: Uplift legacy code by refactoring with MCP-guided precision—improving readability and performance without breaking existing functionality.
Implement “Health Guardrails”: Learn how to use the MCP to maintain high standards while shipping new features, preventing “rot” from creeping back in.
The Project
You will be provided with a selection of starter projects - real-world scenarios featuring less-than-ideal codebases. You can choose your preferred environment from:
Java
TypeScript/Node.js
Python
Together, we will use the CodeHealth MCP to diagnose the mess, implement a requested new feature, and wrap it all in a robust testing suite to ensure that the codebase is better than it was before.
Prerequisites:
Language Agnostic: While we provide specific templates, the principles are universal.
Audience: Developers, Tech Leads, and Architects who want to integrate AI into their daily workflow without sacrificing quality.
Technical Setup: You should be comfortable with basic Git operations and have a local environment set up for your language of choice.
Workshop Style
100% Hands-on. This 90-minute masterclass is built around a “Day in the Life” simulation. You’ll be at your laptop, working through a real work scenario, using the CodeHealth MCP.
Gartner just declared the semantic layer a non-negotiable foundation for AI. Most of the industry responded with a blank stare.
This presentation is the answer to that blank stare.
Your AI has a dirty secret: there is no mechanism in its architecture for truth. Only probability. Every response is a hallucination — most just happen to overlap with the facts. The philosophers figured out why 2,500 years ago, and they also gave us the solution. Plato defined knowledge as justified true belief. RAG is our architecture for justification. But there's a problem — your structured data is wholly inaccessible to it, because your JSON is full of magic strings that mean nothing outside the system that generated them.
This presentation shows you how to fix that. Not with a new framework, a bigger model, or an enterprise triple store. With a discipline — the discipline of making meaning explicit. You'll be introduced to a standards stack that has been quietly solving this problem for 30 years. Your AI is already fluent in it. Half the web already speaks it. Google built an empire on it.
You'll leave with a concrete understanding of what the semantic layer actually is, why it matters, and—most importantly—how to empower teams to start building it this week with the APIs you already have.
Your data isn't worthless. AI just doesn't know what it means yet.
Coding agents are remarkably capable, and yet how do we treat them? By giving them stream-of-consciousness descriptions of half-baked ideas: vague feature requests, underspecified tickets, or maybe just good old vibes. We've got better implementation assistants than we ever could have imagined, but we're not always doing a great job thinking through what we want them to build. Would a lightweight spec document be so bad?
OpenSpec doesn't think so. Born from the recognition we've never really come to consensus on how to capture features—some do PRDs in Notion, some make epics in Jira, some still write user stories on index cards—it uses some Agent Skills and a CLI to guide you through structured feature exploration for greenfield and brownfield projects alike, producing detailed designs and todo lists precise enough to direct a coding agent even in the presence of some complexity. We'll trace how it emerged, see how it positions itself against approaches like Spec Kit and traditional design tooling, and examine the recent developments that have extended its capabilities.
The specification problem is older than agents, having bedeviled us since the first time someone handed one of our grandparents a napkin sketch and called it a design doc. OpenSpec is surely not the final answer, but it's the right one for agentic engineering right now.
The AI revolution isn’t coming — it’s already here, in our editors, our pipelines, our incident channels, our platforms. But while everyone is racing to bolt “AI-powered” onto their products, a quieter, more consequential truth is emerging:
The future won’t belong to teams with the biggest models. It will belong to teams with the best habits.
This keynote is a fast-paced journey into the craft of AI engineering — the behaviours, reflexes, and mental disciplines that separate teams who build safe, reliable, explainable AI systems from those who unleash unpredictable ones into production.
Through vivid stories of teams who got these habits right — and cautionary tales of those who didn’t — you’ll see why AI engineering is less about algorithms and more about discipline: the daily behaviours that make AI predictable, governable, and safe in the wild.
Up until early 2023, I regularly said AI would always have a bright future—and I didn't mean that as a compliment. Sure, deep learning had made us good at building very impressive classifiers in the decade or so prior, but for so long, human-like intelligence was just five years in the future—and that made me a skeptic. Things are different now, but how much skepticism is still warranted? What is it that we've got on our hands? What changes is modern AI bringing with it? Like with so many other questions, the answers are easier if we understand where we've come from.
Beginning with the Turing Test itself, the famous Dartmouth Conference of 1956, and the Perceptron of 1957, we'll trace decades of disappointment and broken promises as we tried to realize the true potential of computing, at the same time grasping for an understanding of what it means to be human. Taking a close look at various technologies along the way, we'll arrive in the 21st century, the revival of neural networks, the advent of the Transformer, and a revolutionary new technology category that has prompted oracles of weal and woe from our most optimistic and apocalyptic technology prophets.
Standing on the edge of the unknown, what elements of the 75-year-old promise have we realized? A close examination gives us a better view of our immediate future as technologists and insight into what it means to be human.
Modernizing legacy systems seemed exciting…until I found myself absorbed in rewrites, facing business blockers, and watching tech debt pile up instead of shrink. In this talk, I’ll share the biggest traps I’ve seen and experienced firsthand while working on modernization efforts in large organizations—and what helped us avoid (or recover from) them. From picking the wrong architecture patterns too early to losing stakeholder trust halfway through, I’ll walk through real examples of what not to do, along with the principles and strategies that helped us get back on track. Whether you’re breaking down a monolith or updating a business-critical system, I’ll help you steer clear of common pitfalls and make smarter, more sustainable decisions.
What This Talk Will Answer:
-What are the most common and costly mistakes teams make during architecture modernization?
-How do you choose between refactoring, rewriting, or rearchitecting a legacy system?
-How can Domain-Driven Design reduce risk and improve focus in modernization efforts?
-What strategies keep modernization aligned with business priorities and avoid loss of momentum?
-How do you avoid turning tech upgrades into long-running, low-impact projects?
A client once asked me to take a team that was new to REST, Agile, etc. and put together a high-profile, high-value commerce-oriented API in the period of six months. In the process of training the team and designing this API, I hit upon the idea of providing rich test coverage by mixing the Behavior-Driven Development (BDD) testing approach with REST.
In this talk, I will walk you through the idea, the process, and the remarkable outcomes we achieved. I will show you how you can benefit as well from this increasingly useful testing strategy. The approach makes it easy to produce tests that are accessible to business analysts and other stakeholders who wouldn't understand the first thing about more conventional unit tests.
Behavior is expressed using natural language. The consistent API style minimizes the upfront work of defining step definitions. In the end, you can produce sophisticated coverage, smoke tests, and more that exercise the full functionality of the API. It also produces another organizational artifact that can be used in the future to migrate to other implementation technologies.