This full-day, hands-on workshop equips developers, architects, and technical leaders with the knowledge and skills to secure AI systems end-to-end — from model interaction to production deployment. Participants learn how to recognize and mitigate AI-specific threats such as prompt injection, data leakage, model exfiltration, and unsafe tool execution.
Through a series of focused labs, attendees build, test, and harden AI agents and Model Context Protocol (MCP) services using modern defensive strategies, including guardrails, policy enforcement, authentication, auditing, and adversarial testing.
The training emphasizes real-world implementation over theory, using preconfigured environments in GitHub Codespaces for instant, reproducible results. By the end of the day, participants will have created a working secure AI pipeline that demonstrates best practices for trustworthy AI operations and resilient agent architectures.
The course blends short conceptual discussions with deep, hands-on practice across eight structured labs, each focusing on a key area of AI security. Labs can be completed in sequence within GitHub Codespaces, requiring no local setup.
1. Lab 1 – Mapping AI Security Risks
Identify the unique attack surfaces of AI systems, including LLMs, RAG pipelines, and agents. Learn how to perform a structured threat model and pinpoint where vulnerabilities typically occur.
2. Lab 2 – Securing Prompts and Contexts
Implement defensive prompting, context isolation, and sanitization to mitigate prompt injection, hidden instructions, and data leakage risks.
3. Lab 3 – Implementing Guardrails
Use open-source frameworks (e.g., Guardrails.ai, LlamaGuard) to validate LLM outputs, enforce content policies, and intercept unsafe completions before delivery.
4. Lab 4 – Hardening MCP Servers and Tools
Configure FastMCP servers with authentication, scoped tokens, and restricted tool manifests. Examine how to isolate and monitor server–client interactions to prevent privilege escalation.
5. Lab 5 – Auditing and Observability for Agents
Integrate structured logging, trace identifiers, and telemetry into AI pipelines. Learn how to monitor for suspicious tool calls and enforce explainability through audit trails.
6. Lab 6 – Adversarial Testing and Red-Teaming
Simulate common AI attacks—prompt injection, model hijacking, and context poisoning—and apply mitigation patterns using controlled experiments.
7. Lab 7 – Policy-Driven Governance
Introduce a “security-as-code” approach using policy files that define allowed tools, query types, and data scopes. Enforce runtime governance directly within your agent’s workflow.
8. Lab 8 – Secure Deployment and Lifecycle Management
Apply DevSecOps practices to containerize, sign, and deploy AI systems safely. Incorporate secrets management, vulnerability scanning, and compliance checks before release.
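The context-sanitization idea in Lab 2 can be sketched in a few lines. This is an illustrative, pattern-based filter (the lab's actual tooling is not specified here): it strips instruction-like spans from untrusted retrieved text before that text is placed into a prompt, and keeps untrusted content clearly delimited from trusted instructions. A regex denylist alone is easy to bypass; real defenses layer several techniques.

```python
import re

# Naive denylist of instruction-like phrasings often seen in prompt-injection
# payloads. Illustrative only -- a production guard would combine classifiers,
# context isolation, and output checks rather than rely on regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now .{0,40}", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def sanitize_context(text: str) -> str:
    """Remove instruction-like spans from untrusted retrieved text."""
    for pattern in INJECTION_PATTERNS:
        text = pattern.sub("[removed]", text)
    return text

def build_prompt(user_question: str, retrieved: str) -> str:
    """Delimit untrusted content so it cannot masquerade as instructions."""
    return (
        "Answer using only the reference material below.\n"
        "<reference>\n" + sanitize_context(retrieved) + "\n</reference>\n"
        f"Question: {user_question}"
    )
```

The delimiter tags and pattern list are invented for illustration; the point is the shape of the defense, not the specific regexes.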
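Lab 7's "security-as-code" approach reduces to checking every proposed tool call against a declarative policy before execution. A minimal sketch follows; the policy shape and field names are invented for illustration, and in practice the policy would be loaded from a versioned YAML/JSON file rather than hard-coded.

```python
# Hypothetical policy document; in practice this would be parsed from a
# policy file checked into the repository alongside the agent's code.
POLICY = {
    "allowed_tools": {"search_docs", "summarize"},
    "max_args_len": 500,
}

class PolicyViolation(Exception):
    """Raised when a proposed tool call is out of policy."""

def enforce(tool_name: str, args: str, policy: dict = POLICY) -> None:
    """Reject tool calls that are not explicitly allowed by the policy."""
    if tool_name not in policy["allowed_tools"]:
        raise PolicyViolation(f"tool {tool_name!r} is not in the allow-list")
    if len(args) > policy["max_args_len"]:
        raise PolicyViolation("arguments exceed the allowed length")

def call_tool(tool_name: str, args: str) -> str:
    """Gate every tool invocation through the policy check."""
    enforce(tool_name, args)
    return f"ran {tool_name}"  # placeholder for the real tool dispatch
```

The design point is that the gate sits in the agent's workflow itself, so governance is enforced at runtime rather than by convention.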
Outcome:
Participants finish the day with a secure, auditable, and policy-controlled AI system built from the ground up. They leave with practical experience defending agents, MCP servers, and model workflows, plus guidance for integrating security-by-design principles into future projects.
Hi, Spring fans! Developers today are being asked to deliver more with less time and to build ever more efficient services, and Spring is ready to help you meet those demands. In this workshop, we'll take a roving tour of all things Spring: the fundamentals of the Spring component model, Spring Boot itself, and then how to apply Spring in the context of batch processing, security, data processing, modular architecture, microservices, messaging, AI, and so much more.
Basics
which IDE? IntelliJ, VSCode, and Eclipse
your choice of Java: GraalVM
start.spring.io, an API, website, and an IDE wizard
Devtools
Docker Compose
Testcontainers
banner.txt
Development Desk Check
the Spring JavaFormat Plugin
Python, gofmt, your favorite IDE, and
the power of environment variables
SDKMAN
.sdkman
direnv
.envrc
a good password manager for secrets
Data Oriented Programming in Java 21+
an example
Beans
dependency injection from first principles
bean configuration
XML
stereotype annotations
lifecycle
BeanPostProcessor
BeanFactoryPostProcessor
auto configuration
AOP
Spring's event publisher
configuration and the Environment
configuration processor
AOT & GraalVM
installing GraalVM
GraalVM native images
basics
AOT lifecycles
Scalability
non-blocking IO
virtual threads
José Paumard's demo
Cora Iberkleid's demo
Cloud Native Java (with Kubernetes)
graceful shutdown
ConfigMap and you
Buildpacks and Docker support
Actuator readiness and liveness probes
Data
JdbcClient
SQL Initialization
Flyway
Spring Data JDBC
Web Programming
clients: RestTemplate, RestClient, declarative interface clients
REST
controllers
functional style
GraphQL
batches
Architecting for Modularity
Privacy
Spring Modulith
Externalized messages
Testing
Batch Processing
Spring Batch
load some data from a CSV file to a SQL database
Microservices
centralized configuration
API gateways
reactive or not reactive
event bus and refreshable configuration
service registration and discovery
Messaging and Integration
“What do you mean by Event Driven?”
Messaging Technologies like RabbitMQ or Apache Kafka
Spring Integration
files to events
Kafka
a look at Spring for Apache Kafka
Spring Integration
Spring Cloud Stream
Spring Cloud Stream Kafka Streams
Security
adding form login to an application
authentication
authorization
passkeys
one time tokens
OAuth
the Spring Authorization Server
OAuth clients
OAuth resource servers
protecting messaging code
You are ready to level up your skills. Or perhaps you've already been playing the accidental architect and need a structured plan to be designated as one. Well, your wait is over.
From the author of O'Reilly's best-selling “Head First Software Architecture” comes a full-day workshop that covers all you need to start thinking architecturally. Everything from the difference between design and architecture, and a modern description of what architecture is, to the skills you'll need to develop to become a successful architect: this workshop will be your one-stop shop.
We'll cover several topics:
This is an exercise-heavy workshop, so be prepared to put on your architect hat!
As code generation becomes increasingly automated, our role as developers and architects is evolving. The challenge ahead isn’t how to get AI to write more code, it’s how to guide it toward coherent, maintainable, and purposeful systems.
In this session, Michael Carducci reframes software architecture for the era of intelligent agents. You’ll learn how architectural constraints, composition, and trade-offs provide the compass for orchestrating AI tools effectively. Using principles from the Tailor-Made Architecture Model, Carducci introduces practical mental models to help you think architecturally, communicate intent clearly to your agents, and prevent automation from accelerating entropy. This talk reveals how the enduring discipline of architecture becomes the key to harnessing AI—not by replacing human creativity, but by amplifying it.
When Eliyahu Goldratt wrote The Goal, he showed how local optimizations (like adding robots to a factory line) can actually decrease overall performance. Today, AI threatens to repeat that mistake in software. We’re accelerating coding without improving flow. In this talk, Michael Carducci explores what it means to architect for the goal: continuous delivery of value through systems designed for flow.
Drawing insights from Architecture for Flow, Domain-Driven Design, Team Topologies, and his own Tailor-Made Architecture Model, Carducci shows how to align business strategy, architecture, and teams around shared constraints and feedback loops. You’ll discover how to turn automation into advantage, orchestrate AI within the system of work, and build socio-technical architectures that evolve—not just accelerate.
The hardest part of software architecture isn’t the technology, it’s the people. Every architecture lives or dies by its ability to influence behavior, build consensus, and turn vision into change. In this session, Michael Carducci explores the real work of being an architect: communicating clearly, guiding decisions, and driving meaningful change in complex organizations. Drawing from decades of experience and the principles behind the Tailor-Made Architecture Model, Carducci shows how to identify where change is needed, package ideas for adoption, and lead with both clarity and empathy.
And while AI may soon help us design systems, it still can’t align humans around them. The enduring art of architecture lies in shaping not just the code, but the culture that makes progress possible. You’ll leave with practical tools to navigate the human side of architecture and a renewed appreciation for why that art still matters.
Everyone’s talking about AI models, but almost no one is talking about the data architecture that makes them intelligent. Today’s AI systems are brittle because they lack context, semantics, and shared understanding. In this session, Michael Carducci explores how linked data, RDF, ontologies, and knowledge graphs solve the very problems that leave the industry floundering: hallucination, inconsistency, and lack of interoperability.
Drawing from real-world examples, Carducci connects decades of overlooked research in semantic web technologies to the challenges of modern AI and agentic systems. You’ll see how meaning itself can be modeled, linked, and reasoned over; and why the future of AI depends not on bigger models, but on smarter data.
In our rush toward the future, the software industry keeps forgetting its past—and with it, the hard-won lessons that could save us from repeating the same mistakes. In this live storytelling session, Michael Carducci revives the forgotten wisdom of the pioneers who shaped our craft.
Through entertaining, thought-provoking tales drawn from computing’s early days, he reveals how timeless principles still illuminate today’s challenges in architecture, AI, and innovation. Blending inspiration, history, and humor, Carducci connects these tales to our modern struggles with AI, architecture, and innovation itself. This isn’t nostalgia—it’s a rediscovery of the foundations that still shape great software and better technologists.
Microservices architecture has become a buzzword in the tech industry, promising unparalleled agility, scalability, and resilience. Yet, according to Gartner, more than 90% of organizations attempting to adopt microservices will fail. How can you ensure you're part of the successful 10%?
Success begins with looking beyond the superficial topology and understanding the unique demands this architectural style places on the teams, the organization, and the environment. These demands must be balanced against the current business needs and organizational realities while maintaining a clear and pragmatic path for incremental evolution.
In this session, Michael will share some real-world examples, practical insights, and proven techniques to balance both the power and complexities of microservices. Whether you're considering adopting microservices or already on the journey and facing challenges, this session will equip you with the knowledge and tools to succeed.
2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.
This curated set of 3 sessions will help equip senior technologists to evolve from document stewardship to adaptive integrity management, blending human judgment, executable principles, and guided agent assistance. Architecture is shifting from static designs to adaptive, agent-driven execution.
Come to the Agentic Architect session if you want to see:
how the role of architecture is evolving in the agentic era
practical tips and tricks for how to embrace the new agentic toolset
how to lean into architecture as code
how to cut decision time from days or weeks to hours
how to stop endlessly redrawing diagrams
“The Agentic Architect isn't about AI writing your code – it's about transforming how you make, communicate, and enforce architecture in an AI-accelerated world.”
2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.
This live demo session takes the patterns from “The Agentic Architect” and runs them end-to-end starting with a blank slate.
Watch ideas turn into working architecture
See diagrams-as-code that update themselves based on a more holistic context
Learn how to use AI agents on a daily basis to transform your work
2025 shattered the old cadence of software architecture. AI agents now co‑author code and refactors, compliance expectations tightened, and cost/latency signals moved inside everyday design loops. Static diagrams, quarterly review boards, and slide-driven governance can’t keep up.
2025 delivered unprecedented architectural disruption.
This interactive session will explore key events throughout 2025/2026 that have impacted the architect's role in the context of AI ubiquity, platform acceleration, and cost pressures.
This session will focus on the essential technical skills that software architects need on a daily basis, from ideation to product delivery. For many architects, maintaining those technical skills can be a challenge.
Come to this session if you want to learn some tricks and tips for how to raise your technical game as an architect.
Java has quietly grown into a more expressive, flexible, and modern language — but many developers haven’t kept up with the latest features. This two-part workshop explores the most useful additions to Java from recent releases, with hands-on examples and real-world scenarios.
Whether you’re still catching up from Java 8 or already using Java 21+, this series will give you a practical edge in writing cleaner, more modern Java code.
Topics include sealed classes, records, and switch expressions.
Git continues to see improvements daily. However, work (and life) can take over, and we often miss the latest changelog. This means we don't know what changed, and consequently fail to see how we can incorporate those in our usage of Git.
In this session we'll take a tour of some features that you might or might not have heard of, but can significantly improve your workflow and day-to-day interaction with Git.
Git continues to see improvements daily. However, work (and life) can take over, and we often miss the changelog. This means we don't know what changed, and consequently fail to see how we can incorporate those in our usage of Git.
In this session we will look at some features you are probably aware of, but haven't used, alongside new features that Git has brought to the table.
You will need the following installed
Spring Boot 3.x and Java 21 have arrived, making it an exciting time to be a Java developer! Join me, Josh Long (@starbuxman), as we dive into the future of Spring Boot with Java 21. Discover how to scale your applications and codebases effortlessly. We'll explore the robust Spring Boot ecosystem, featuring AI, modularity, seamless data access, and cutting-edge production optimizations like Project Loom's virtual threads, GraalVM, AppCDS, and more.
Let's explore the latest-and-greatest in Spring Boot to build faster, more scalable, more efficient, more modular, more secure, and more intelligent systems and services.
The age of artificial intelligence (because the search for regular intelligence hasn't gone well…) is nearly at hand, and it's everywhere! But is it in your application? It should be. AI is about integration, and here the Java and Spring communities come second to nobody.
In this talk, we'll demystify the concepts of modern-day artificial intelligence and look at its integration with the white-hot new Spring AI project, a framework that builds on the richness of Spring Boot to extend it to the wide world of AI engineering.
There's a clear need for security in the software systems that we build. The problem for most organizations is that they don't want to spend any money on it. Even if they did, they often have no idea how much to spend. No particular initiative is likely to imbue your system with “security”, but a strong, deep defensive approach is likely to give you a fighting chance of getting it right.
Web security as applied to APIs in particular is an important part of the plan. In this workshop, we'll show you approaches to defining “enough” as well as concrete techniques to employ incrementally in your designs.
In this workshop, we will pick a hands-on framework for implementation, but the ideas are generally standards-based and transcend technology choice, so you should have a strategy for mapping them into your own systems.
We will cover a broad range of topics including:
In the fast-paced world of software development, maintaining architectural integrity is a continuous challenge. Over time, well-intended architectural decisions can erode, leading to unexpected drift and misalignment with original design principles.
This hands-on workshop will equip participants with practical techniques to enforce architecture decisions using tests. By leveraging architecturally-relevant testing, attendees will learn how to proactively guard their system's design, ensuring consistency, scalability, and security as the codebase evolves. Through interactive exercises and real-world examples, we will explore how testing can serve as a powerful tool for preserving architectural integrity throughout a project's lifecycle.
Key Takeaways
Participants will learn to:
Write architecture-driven tests that validate and enforce design constraints.
Identify architectural drift early and prevent unintended changes.
Maintain consistent, scalable, and secure architectures over time.
Collaborate effectively within teams to sustain architectural excellence.
Prerequisites
Basic Understanding of Software Architecture: Familiarity with architectural patterns and principles
Experience with Automated Testing: Understanding of unit, integration, or system testing concepts
Collaboration and Communication Skills: Willingness to engage in discussions and teamwork
Experience working with Java
Optional
Familiarity with Static Analysis and Code Quality Tools: Knowledge of tools like ArchUnit, SonarQube, or custom linters is beneficial but not required
Experience with Large-Scale Systems: Prior work on complex systems can enhance the learning experience
REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.
But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.
You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.
This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.
AI models are evolving fast, but the systems around them aren't. Every backend change still breaks your carefully tuned AI client; on the web, by contrast, a server can change without requiring you to download a new browser. What if AI worked the same way?
In this talk, Michael Carducci explores the architecture of 3rd Generation Agentic AI, building on the ideas and technologies introduced in Data Architecture for AI. You’ll discover how JSON-LD, Hydra, and semantic integration enable truly evolvable, interoperable AI ecosystems at web scale. Through live demos and real-world examples, Carducci shows how these web-native standards create APIs that describe themselves, adapt to change, and empower agents to discover and interact safely without brittle coupling. The real frontier isn’t smarter models—it’s shared meaning—and that’s an architectural problem worth solving.
Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this half-day workshop, author, trainer, and experienced technologist Brent Laster will help you do both! We'll explain why and how to run AI models locally, cover the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama. And you'll get to follow through with hands-on labs and produce your own instance running on your system in a GitHub Codespace.
In this workshop, we'll walk you through what it means to run models locally, how to interact with them, and how to use them as the brain for an agent. Then, we'll enable them to access and use data from a PDF via retrieval-augmented generation (RAG) to make the results more relevant and meaningful. And you'll do all of this hands-on in a ready-made environment with no extra installs required.
No experience is needed with these technologies, although we do assume a basic understanding of LLMs.
Attendees will need the following to do the hands-on labs:
Just as CI/CD and other revolutions in DevOps have changed the landscape of the software development lifecycle (SDLC), so Generative AI is now changing it again. Gen AI has the potential to simplify, clarify, and lessen the cycles required across multiple phases of the SDLC.
In this session with author, trainer, and experienced DevOps director Brent Laster, we'll survey the ways that today's AI assistants and tools can be incorporated across your SDLC phases, including planning, development, testing, documentation, and maintenance. There are multiple ways the existing tools can help us beyond standard day-to-day coding, and, as with other changes over the years, teams need to be aware of them and think about how to incorporate AI into their processes to stay relevant and up to date.
In the age of digital transformation, Cloud Architects emerge as architects of the virtual realm, bridging innovation with infrastructure. This presentation offers a comprehensive exploration of the Cloud Architect's pivotal role.
Delving into cloud computing models, architecture design, and best practices, attendees will gain insights into harnessing the power of cloud technologies. From optimizing scalability and ensuring security to enhancing efficiency and reducing costs, this session unravels the strategic decisions and technical expertise that define a Cloud Architect's journey. Join us as we decode the nuances of cloud architecture, illustrating its transformative impact on businesses in the modern era.
AI inference is no longer a simple model call—it is a multi-hop DAG of planners, retrievers, vector searches, large models, tools, and agent loops. With this complexity comes new failure modes: tail-latency blowups, silent retry storms, vector store cold partitions, GPU queue saturation, exponential cost curves, and unmeasured carbon impact.
In this talk, we unveil ROCS-Loop, a practical architecture designed to close the four critical loops of enterprise AI:
• Reliability (predictable latency, controlled queues, resilient routing)
• Observability (full DAG tracing, prompt spans, vector metrics, GPU queue depth)
• Cost-Awareness (token budgets, model tiering, cost attribution, spot/preemptible strategies)
• Sustainability (SCI metrics, carbon-aware routing, efficient hardware, eliminating unnecessary work)
KEY TAKEAWAYS
• Understand the four forces behind AI outages (latency, visibility, cost, carbon).
• Learn the ROCS-Loop framework for enterprise-grade AI reliability.
• Apply 19 practical patterns to reduce P99, prevent retry storms, and control GPU spend.
• Gain a clear view of vector store + agent observability and GPU queue metrics.
• Learn how ROCS-Loop maps to GCP, Azure, Databricks, FinOps & SCI.
• Leave with a 30-day action plan to stabilize your AI workloads.
⸻
AGENDA
1. The Quiet Outage: Why AI inference fails
2. X-Ray of the inference pipeline (RAG, agents, vector, GPUs)
3. Introducing the ROCS-Loop framework
4. 19 patterns for Reliability, Observability, FinOps & GreenOps
5. Cross-cloud mapping (GCP, Azure, Databricks)
6. Hands-on: Diagnose an outage with ROCS
7. Your 30-day ROCS stabilization plan
8. Closing: Becoming a ROCS AI Architect
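One concrete example of a reliability pattern in this space (a generic illustration, not taken from the ROCS-Loop material itself) is capped exponential backoff with full jitter, which keeps synchronized clients from turning transient failures into a retry storm:

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base=0.1, cap=5.0):
    # Retry a flaky call with capped exponential backoff and full jitter.
    # Jitter spreads retries over [0, min(cap, base * 2**attempt)] so that
    # many synchronized clients don't all hit a recovering service at the
    # same instant (a "retry storm").
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the failure
            delay = random.uniform(0, min(cap, base * (2 ** attempt)))
            time.sleep(delay)
```

The cap bounds worst-case added latency, and the hard attempt limit is what actually stops silent retry amplification.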
Dynamic Programming (DP) intimidates even seasoned engineers. With the right lens, it’s just optimal substructure + overlapping subproblems turned into code. In this talk, we start from a brute-force recursive baseline, surface the recurrence, convert it to memoization and tabulation, and connect it to real systems (resource allocation, routing, caching). Along the way you’ll see how to use AI tools (ChatGPT, Copilot) to propose recurrences, generate edge cases, and draft tests—while you retain ownership of correctness and complexity. Expect pragmatic patterns you can reuse in interviews and production.
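To make the progression concrete, here is a minimal sketch (a standard coin-change example, chosen for illustration) showing the same recurrence written three ways: brute force, memoized, and tabulated:

```python
from functools import lru_cache

def min_coins_brute(coins, amount):
    # Brute force: try every coin at every step (exponential blowup).
    if amount == 0:
        return 0
    return min((1 + min_coins_brute(coins, amount - c)
                for c in coins if c <= amount),
               default=float("inf"))

def min_coins_memo(coins, amount):
    # Memoization: same recurrence, but overlapping subproblems are cached.
    @lru_cache(maxsize=None)
    def solve(a):
        if a == 0:
            return 0
        return min((1 + solve(a - c) for c in coins if c <= a),
                   default=float("inf"))
    return solve(amount)

def min_coins_tab(coins, amount):
    # Tabulation: fill the subproblem table bottom-up, no recursion.
    dp = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount]
```

The memoized and tabulated versions do O(amount × len(coins)) work, while the brute-force version revisits the same subproblems exponentially often.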
AI enablement isn’t buying Copilot and calling it done; it’s a system upgrade for the entire SDLC. Code completion helps, but the real bottlenecks live in reviews, testing, releases, documentation, governance, and knowledge flow. Achieving meaningful impact requires an operating model: guardrails, workflows, metrics, and change management; not a single tool.
This session shares SPS Commerce’s field notes: stories, failures, and working theories from enabling AI across teams. You’ll get a sampler of adaptable patterns and anti-patterns spanning productivity, systems integration, guardrails, golden repositories, capturing tribal knowledge, API design, platform engineering, and internal developer portals. Come for practical menus you can pilot next week, and stay to compare strategies with peers.
Building an AI model is the easy part—making it work reliably in production is where the real engineering begins. In this fast-paced, experience-driven session, Ken explores the architecture, patterns, and practices behind operationalizing AI at scale. Drawing from real-world lessons and enterprise implementations, Ken will demystify the complex intersection of machine learning, DevOps, and data engineering, showing how modern organizations bring AI from the lab into mission-critical systems.
Attendees will learn how to:
Design production-ready AI pipelines that are testable, observable, and maintainable
Integrate model deployment, monitoring, and feedback loops using MLOps best practices
Avoid common pitfalls in scaling, governance, and model drift management
Leverage automation to reduce friction between data science and engineering teams
Whether you’re a software architect, developer, or engineering leader, this session will give you a clear roadmap for turning AI innovation into operational excellence—with the same pragmatic, architecture-first perspective that Ken is known for.
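As one concrete illustration of drift monitoring (a generic sketch, not drawn from Ken's material), the Population Stability Index compares a model's baseline feature distribution against live traffic:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and live data.

    Conventionally, PSI < 0.1 is read as "no meaningful drift" and
    PSI > 0.25 as "significant drift worth investigating". Bin edges
    are taken from the baseline distribution.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bin v falls into
            counts[idx] += 1
        n = len(values)
        # Floor each fraction to avoid log(0) on empty bins.
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a pipeline, a check like this runs on a schedule per feature and per prediction score, and a sustained PSI above threshold triggers retraining or an alert rather than silent degradation.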
Most of us don't want to go back to the days of malloc and free, but the magic of garbage collectors, while convenient, can be mysterious and hard to understand.
In this talk, you'll learn about the many different garbage collectors available in JVMs: the strengths and weaknesses of the allocation and collection strategies each collector uses, and how garbage collectors keep evolving to support today's hardware and cloud environments.
This talk will cover the core concepts in garbage collection: object reachability, concurrent collectors, parallel garbage collectors, and generational garbage collectors. These concepts will be covered by following the progression of garbage collectors in the HotSpot JVM.
Unlike other languages, Java had a well-defined memory model from the very beginning, but over the years additional packages and low-level features have been added to make the most of today's hardware.
In this talk, we'll discuss concurrency in detail starting at the hardware up to Java's latest synchronization mechanisms and finally onto high-level concurrent collections.
This talk will cover hardware memory fences, Java's synchronized and volatile, Atomic classes, newer capabilities added by VarHandles, and some select high-level concurrent collections.
Fortunately for most Java developers, the just-in-time compiler just works and appears to do so by magic. And yet sometimes we find ourselves facing a performance problem, so what do we do when the magic stops?
In this talk, we’ll learn a few key concepts behind the magic of modern optimizing compilers: intrinsics, basic blocks, static single assignment, and inlining. Understanding these concepts will save you time: you'll stop trying to optimize the things the compiler can already do for you and focus on the things that matter most.
This talk will provide a high-level overview of just-in-time compilation.
The talk will cover when the JVM triggers just-in-time compilation, an overview of core compiler concepts, and the speculative optimizations and deoptimization that make the JVM unique.
Architectural decisions are often influenced by blindspots, biases, and unchecked assumptions, which can lead to significant long-term challenges in system design. In this session, we’ll explore how these cognitive traps affect decision-making, leading to architectural blunders that could have been avoided with a more critical, holistic approach.
You’ll learn how common biases—such as confirmation bias and anchoring—can cloud judgment, and how to counteract them through problem-space thinking and reflective feedback loops. We’ll dive into real-world examples of architectural failures caused by biases or narrow thinking, and discuss strategies for expanding your perspective and applying critical thinking to system design.
Whether you’re an architect, developer, or technical lead, this session will provide you with tools to recognize and mitigate the impact of biases and blindspots, helping you make more informed, thoughtful architectural decisions that stand the test of time.
In this half-day workshop, we’ll practice Test-Driven Development (TDD) by solving a real problem step by step. You’ll learn how to think in tests, write clean code through refactoring, and use your IDE and AI tools effectively. We’ll also explore how modern Java features (like lambdas and streams) enhance testability, and discuss what’s worth testing — and what’s not.
In the realm of architecture, principles form the bedrock upon which innovative and enduring designs are crafted. This presentation delves into the core architectural principles that guide the creation of structures both functional and aesthetic. Exploring concepts such as balance, proportion, harmony, and sustainability, attendees will gain profound insights into the art and science of architectural design. Through real-world examples and practical applications, this session illuminates the transformative power of adhering to these principles, shaping not only buildings but entire environments. Join us as we unravel the secrets behind architectural mastery and the principles that define architectural brilliance.
Good architectural principles are fundamental guidelines or rules that inform the design and development of software systems, ensuring they are scalable, maintainable, and adaptable. Here are some key architectural principles that are generally considered valuable in software development:
Adhering to these architectural principles can lead to the development of robust, maintainable, and adaptable software systems that meet the needs of users and stakeholders effectively.
AI agents are moving from novelty to necessity — but building them safely, predictably, and observably requires more than clever prompts. This workshop gives developers a practical introduction to AI agent engineering using Embabel, with an emphasis on the habits, patterns, and mental models needed to design trustworthy agents in real systems.
Across two focused sessions, you’ll learn how to ground agents in strong domain models (DICE), design goal-driven behaviours (GOAP), enforce safety through invariants and preconditions, and make every action explainable through observability.
You’ll run and inspect a fully working reference agent, extend its domain, add new actions, and validate behaviour through explainable planning logs. You'll explore how to select and deploy models tuned to provide the best and most cost-effective agent behaviour for your users.
By the end of this full day workshop, you’ll know how to:
Build agents anchored in typed domain models
Design composable, goal-oriented behaviours
Use preconditions and invariants as safety guardrails
Debug agents through explainability, not guesswork
Extend an agent with new domain objects and actions without breaking existing flows
Apply a repeatable habit stack for reliable agent engineering
Whether you’re designing workflow agents, platform automations, or domain-specific assistants, this workshop gives you the practical skills and engineering discipline to build agents that behave safely and reason predictably — fit for production, and even fit for regulated environments.
The AI revolution isn’t coming — it’s already here, in our editors, our pipelines, our incident channels, our platforms. But while everyone is racing to bolt “AI-powered” onto their products, a quieter, more consequential truth is emerging:
The future won’t belong to teams with the biggest models. It will belong to teams with the best habits.
This keynote is a fast-paced journey into the craft of AI engineering — the behaviours, reflexes, and mental disciplines that separate teams who build safe, reliable, explainable AI systems from those who unleash unpredictable ones into production.
Through vivid stories of teams who got these habits right — and cautionary tales of those who didn’t — you’ll see why AI engineering is less about algorithms and more about discipline: the daily behaviours that make AI predictable, governable, and safe in the wild.
Internal developer platforms hold promise: faster delivery, better reliability, happier engineers. Yet they often stall—caught in organisational inertia, tool complexity, and unclear value.
These two high-impact 90-minute sessions give you the core mental models and practical artefacts to shift from tool-chase to strategic platform value. You’ll walk away with real maps, a clear focus, and next-step experiments—not just ideas. If you’re looking to make your platform team a force for value, not just operations, this compact format delivers.
By the end of the workshop you'll know:
How to see your platform ecosystem clearly using tools like Wardley Mapping, User Needs Mapping, Value Stream Mapping and OODA loops.
How to treat the platform as a product, shifting mindset from internal project to self-service, developer-centric product.
How to apply the pattern language of platform design (Golden Path, Self-Service, Abstraction, Composability, Guardrails, Observability, Extensibility, Incremental Roll-out).
How to use DSRP + UNM + VSM to reveal where value stalls, flow breaks, and cognitive load spikes.
How to design smallest viable changes, build an impact roadmap, and influence adoption through “Elephant & Rider” thinking (rational vs emotional mindsets).
You'll also get a sneak peek into the future of platforms: AI/automation, evolving loops, and building resilience into your ecosystem.
If your platform team is stuck in the grind—shipping tickets, fighting fires, juggling tools, and wondering why nothing ever seems to change—this workshop will give you the clarity and leverage you’ve been missing. You’ll learn to read your organisation like a map: where value flows, where it dies, where cognitive load spikes, and where small, strategic platform moves can unlock disproportionate impact.
This isn’t another “tools tour”. Whether you’re building an IDP from scratch or rescuing one that’s drifting, you’ll leave with a clear roadmap, a set of tested patterns, and the influence skills to actually make the work land. Platform engineering is no longer about managing complexity—it’s about creating the conditions where developers can thrive. Join us, map your ecosystem, design the future, and turn your platform team into the strategic engine your organisation needs.
By now, you've no doubt noticed that Generative AI is making waves across many industries. In between all of the hype and doubt, there are several use cases for Generative AI in many software projects. Whether it be as simple as building a live chat to help your users or using AI to analyze data and provide recommendations, Generative AI is becoming a key piece of software architecture.
So how can you implement Generative AI in your projects? Let me introduce you to Spring AI.
For over two decades, the Spring Framework and its immense portfolio of projects has been making complex problems easy for Java developers. And now with the new Spring AI project, adding Generative AI to your Spring Boot projects couldn't be easier! Spring AI brings an AI client and templated prompting that handles all of the ceremony necessary to communicate with common AI APIs (such as OpenAI and Azure OpenAI). And with Spring Boot autoconfiguration, you'll be able to get straight to the point of asking questions and getting answers your application needs.
In this hands-on workshop, you'll build a complete Spring AI-enabled application applying such techniques as prompt templating, Retrieval Augmented Generation (RAG), conversational history, and tools invocation. You'll also learn prompt engineering techniques that can help your application get the best results with minimal “hallucinations” while minimizing cost.
In the workshop, we will be using…
Optionally, you may choose an AI provider other than OpenAI, such as Anthropic, Mistral, or Google Vertex (Gemini), but you will need an account and a reasonable amount of credit with that provider. Or you may choose to install Ollama (https://ollama.com/); if you do, be sure to install a reasonable model (llama3:latest or gemma:9b) before you arrive.
Know that if you choose to use something other than OpenAI, your workshop experience will vary.
Enterprise Architecture (EA) has long been misunderstood as a bottleneck to innovation, often labeled the “department of no.” But in today’s fast-paced world of Agile, DevOps, Cloud, and AI, does EA still have a role to play—or is it a relic of the past?
This session reimagines the role of EA in the modern enterprise, showcasing how it can evolve into a catalyst for agility and innovation. We’ll explore the core functions of EA, its alignment with business and IT strategies, and how modern tools, techniques, and governance can transform it into a driver of value. Attendees will leave with actionable insights on building a future-ready EA practice that thrives in an ever-changing technological landscape.
Architecture is often defined as the parts of a system that are “hard to change”. Within software architecture, an architecture pattern is a reusable solution to a commonly occurring problem within a specific context. Architecture anti-patterns are their diabolical counterparts: they sound good in theory, but in practice lead to negative consequences. And because they affect both the architectural characteristics and the structural design of the system, they are incredibly expensive and have far-reaching consequences.
This session explores various architecture patterns, how one can easily fall into anti-patterns, and how to avoid them. We will do qualitative analysis of various architecture patterns and anti-patterns, and introduce fitness functions to govern against anti-patterns.
An architecture pattern is a reusable solution to a commonly occurring problem in software architecture within a specific context. Architecture patterns affect the “-ilities” of a system, such as scalability, performance, maintainability, and security as well as impact the structural design of the system.
This session explores various architecture patterns, their applicability and trade-offs. But that's not all—this session will also provide insight into the numerous intersections of these patterns with all the other tendrils of the organization, including implementation, infrastructure, engineering practices, team topologies, data topologies, systems integration, the enterprise, the business environment, and generative AI. And we will see how to govern each pattern using fitness functions to ensure alignment.
Platform engineering is the latest buzzword in an industry that already has its fair share. But what is platform engineering? How does it fit in with DevOps and Developer Experience (DevEx)? And is this something your organization even needs?
In this session we will dive deep into the world of platform engineering. We will see what platform engineering entails, how it is the logical successor to a successful DevOps implementation, and how it aims to improve the developer experience. We will also uncover the keys to building robust, sustainable platforms for the future.
This 1/2 day workshop introduces participants to Claude Code, Anthropic’s AI-powered coding assistant. In three hours, attendees will learn how to integrate Claude Code into their development workflow, leverage its capabilities for productivity, and avoid common pitfalls. The workshop also introduces the concept of subagents (specialized roles like Planner, Tester, Coder, Refactorer, DocWriter) to show how structured interactions can improve accuracy and collaboration.
Format: 3-hour interactive workshop (2 × 90-minute sessions + 30-minute break).
Audience: Developers and technical professionals with basic programming knowledge.
Focus Areas:
Core capabilities and limitations of Claude Code.
Effective prompting and iteration techniques.
Applying Claude Code for code generation, debugging, refactoring, and documentation.
Using subagents for structured workflows as an optional advanced technique.
Deliverables:
5 hands-on labs (10–12 minutes each).
Experience with everyday Claude Code workflows plus a brief introduction to subagents.
To do the labs in this workshop, you must already have a Claude Code Pro subscription so that you have access to Claude Code; without one, you will not be able to use Claude Code during the workshop. See https://www.anthropic.com/pricing.
As cloud architectures evolve, AI is quickly becoming a foundational component rather than an add-on.
This session explores the architectural principles behind building scalable hybrid clouds and shows how AI can elevate them—from predictive scaling to intelligent workload optimization. We’ll look at patterns already emerging in the industry and map out a clear approach for designing resilient, AI-augmented systems that are ready for the next wave of innovation.
API security goes beyond protecting endpoints—it requires defense across infrastructure, data, and business logic. In this talk, I’ll present a structured approach to implementing Zero Trust security for APIs in a cloud-native architecture.
We’ll cover how to establish a strong foundation across layers—using mTLS, OAuth2/JWT, policy-as-code (OPA), GitOps for deployment integrity, and cloud-native secrets management. The session addresses real-world threats like misconfigurations, privilege escalation, and API abuse, and shows how to mitigate them with layered controls in Kubernetes-based environments on Azure and AWS.
Attendees will walk away with actionable practices to secure their API ecosystem end-to-end— without slowing development teams down.
Here I’ll break down how GitOps simplifies the operational challenges around cloud and Kubernetes environments. We’ll look at how a Git-driven model brings consistency, automation, and better visibility across both infrastructure and application delivery.
The goal is to share a clear and practical approach to reducing operational overhead and creating a more reliable DevOps workflow.
In this hands-on workshop you will learn how to build & deploy production-ready AI Agents. You will use Spring AI, MCP, Java, and Amazon Bedrock and learn how to deal with production concerns like observability and security. We will start with basic prompting then expand with chat memory, RAG, and integration through MCP. You will be provided a provisioned cloud environment and step-by-step instructions.
Bring your laptop, walk away with the skills to build your own AI Agents with Java.
The Model Context Protocol (MCP) standardizes how AI agents connect to external data and tools.
Moving beyond local experiments, this talk explores advanced MCP architectures: local vs. remote server deployments, advanced human-in-the-loop features, and hosting and scaling strategies for remote MCP servers. With Java code we will walk through MCP features, highlighting how to use them in AI agents.
Data Mesh rethinks data architecture in organizations by treating data as a product, owned and operated by bounded-context teams rather than centralized platforms. This way, data owners can describe, enrich, and attest to their data sources to prevent malicious poisoning.
Java has accumulated a diverse toolbox for concurrency and asynchrony over the decades, ranging from classic threads to parallel streams, from Future to CompletableFuture, and from reactive libraries to the latest innovations, including virtual threads, structured concurrency, and the Vector API. But with so many options, the question is: which ones should we use today, which still matter, and which belong in the history books?
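The trade-off between these styles is easiest to see side by side. Below is a minimal, hypothetical sketch (assuming Java 21+) contrasting CompletableFuture composition with plain blocking code run on virtual threads; all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrencyStyles {

    // Asynchronous composition: stages chained on a CompletableFuture.
    static CompletableFuture<String> greetAsync(String name) {
        return CompletableFuture.supplyAsync(() -> "Hello, " + name)
                                .thenApply(String::toUpperCase);
    }

    // The same work written as plain blocking code, but run on cheap
    // virtual threads (Java 21+), one per task.
    static List<String> greetOnVirtualThreads(List<String> names) throws Exception {
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = names.stream()
                    .map(n -> pool.submit(() -> ("Hello, " + n).toUpperCase()))
                    .toList();
            List<String> out = new ArrayList<>();
            for (Future<String> f : futures) out.add(f.get());
            return out;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(greetAsync("duke").join());
        System.out.println(greetOnVirtualThreads(List.of("a", "b")));
    }
}
```

The CompletableFuture version composes callbacks; the virtual-thread version keeps straightforward blocking code and lets the runtime make it cheap.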
In this talk, we’ll explore the entire spectrum:
We’ll also tackle the hard questions:
As AI model usage grows across enterprise systems, teams face new infrastructure challenges—fragmented integrations, inconsistent interfaces, and limited visibility into model performance. An AI Gateway bridges this gap by providing an abstraction layer for model routing, guardrails, and observability, standardizing how applications interact with AI models.
This session explores AI Gateway architecture, key design patterns, and integration strategies with existing API and DevOps ecosystems. Attendees will learn how to implement model routing, enforce runtime safety and compliance, and build unified monitoring for prompt-level analytics—all forming the foundation of a scalable enterprise AI platform.
Traditional API linting tools like Spectral have helped teams identify issues in their OpenAPI specifications by surfacing violations of style guides and best practices. But the current paradigm stops at diagnosis—developers are still left with the manual burden of interpreting warnings, resolving inconsistencies, and applying often repetitive best practice fixes.
This session explores a transformative approach: using large language models (LLMs) fine-tuned on industry API standards to go beyond pointing out what’s wrong—to actively fixing it. Imagine replacing “Here’s a list of errors” with “Here’s your new spec, clean, compliant, and ready to ship.” By shifting from rule-checking to rule-enforcing via intelligent automation, teams can significantly reduce friction in their design workflows, improve standardization, and cut review cycles.
In today’s fast-paced development environment, delivering robust and efficient APIs requires a streamlined design process that minimizes delays and maximizes collaboration. Mocking has emerged as a transformative tool in the API design lifecycle, enabling teams to prototype, test, and iterate at unprecedented speeds.
This talk explores the role of mocking in enhancing API design workflows, focusing on its ability to:
1. Facilitate early stakeholder feedback by simulating API behavior before development.
2. Enable parallel development by decoupling frontend and backend teams.
3. Identify design flaws and inconsistencies earlier, reducing costly downstream changes.
4. Support rapid iteration and experimentation without impacting live systems.
Using real-world examples and best practices, we’ll demonstrate how tools like Prism and WireMock can be leveraged to create mock APIs that enhance collaboration, improve quality, and dramatically accelerate development timelines. Attendees will leave with actionable insights on integrating mocking into their API design lifecycle, fostering innovation and speed without compromising reliability.
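To illustrate the underlying idea without pulling in Prism or WireMock, here is a minimal, JDK-only sketch of a mock endpoint using the built-in com.sun.net.httpserver package; the /users/1 route and its payload are invented for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MiniMock {
    // Stand up a throwaway server faking the contract of a not-yet-built
    // /users endpoint, call it, and return what a client would see.
    static String fetchFromMock() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] body = "{\"id\":1,\"name\":\"Ada\"}".getBytes();
        server.createContext("/users/1", exchange -> {
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(
                            URI.create("http://localhost:" + port + "/users/1")).build(),
                    HttpResponse.BodyHandlers.ofString());
            return resp.body();  // frontend teams can code against this today
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchFromMock());
    }
}
```

Dedicated tools add contract validation, request matching, and spec-driven generation on top of this basic pattern.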
In this immersive, hands-on workshop, participants will learn how to combine the discipline of Test-Driven Development with the creative support of AI-powered pair programming.
Working in pairs, developers will build a Booking system from scratch using Java and VS Code, progressively applying the red-green-refactor cycle while integrating AI assistance for test authoring and design validation.
This workshop emphasizes practical workflow habits: starting from unit tests, iterating with context-driven prompts, and applying refactoring techniques to help participants write more reliable, maintainable, and thoughtful code.
By the end of this workshop, participants will be able to:
Apply the TDD cycle (Red → Green → Refactor) effectively while coding a real-world service
Collaborate with AI tools to generate, refine, and extend test cases responsibly
Pass contextual prompts to guide AI toward meaningful, domain-relevant test generation
Recognize design and code smells that emerge in the refactor phase and correct them through iterative improvement
Balance speed and intent—leveraging AI to accelerate feedback without compromising software quality
Reflect on workflow improvements, communication with AI tools, and ethical implications of AI-assisted testing
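As a minimal, dependency-free sketch of the red-green-refactor rhythm applied to the Booking domain (the workshop itself uses a richer test setup; this is illustrative only):

```java
public class TddSketch {
    // Step 1 (Red): the check in main fails until the code below exists.
    // Step 2 (Green): write the simplest Booking logic that passes.
    // Step 3 (Refactor): clean up with the passing check as a safety net.
    record Booking(String room, int nights, double nightlyRate) {
        double total() { return nights * nightlyRate; }  // green: minimal code
    }

    public static void main(String[] args) {
        Booking b = new Booking("suite", 3, 120.0);
        // The "test" that drove the design:
        if (b.total() != 360.0) throw new AssertionError("expected 360.0");
        System.out.println("green: total=" + b.total());
    }
}
```

In the workshop, an AI assistant would typically propose the failing test first and the participant would steer it with domain context.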
In this hands-on session, participants will learn how to bridge the gap between technical strategy and execution using systems thinking principles.
Through a series of exercises, software architects will practice mapping business goals, constraints, and feedback loops, and then translating them into a clear and adaptable technical roadmap.
This presentation focuses on helping architects/engineers to move from abstract vision to actionable outcomes, aligning architecture with value, sequencing initiatives, and communicating trade-offs effectively to stakeholders.
By the end of this session, participants will be able to:
Understand how systems thinking reveals dependencies and leverage points within technical ecosystems
Identify how business outcomes can be mapped to technical capabilities
Practice creating an adaptive roadmap using "Now / Next / Later" framing
Learn to communicate trade-offs and priorities in a way that aligns with business goals
Leave with a reusable framework and template for turning architectural strategy into delivery steps
There's an implied context to your software running in the world and processing data. The problem is that it is usually a reductive and insufficient context to capture the fluency of change that occurs at multiple layers. This need for shared context spreads to API usage which often necessitates fragile, custom development.
In this talk we will address the importance of dynamic context in software systems and how to engender flexible, sufficiently rich context-based systems.
We will cover the history of context-based thinking in the design of software systems and network protocols and how the ideas are merging into something along the lines of “Information DNS” where we resolve things at the time and place of execution into the form in which we need it.
Consider software systems with the technical and financial properties of the Web.
While this is a developing approach to software development, it builds on established ideas and will help provide the basis for next-generation development.
Java 25 has been released, but the Java release train continues chugging along with Java 26.
In this presentation we will start with a quick review of the key changes from Java 17 to 21 and how they have improved developer experience, performance, and support for running Java applications in production. From there we will transition to the changes post-21 and how they are bringing important stories into focus, including improved concurrency support, data-oriented programming, native support, and more! The Java platform is evolving quickly to keep pace with the current needs of users, so be sure to attend this presentation if you want to keep up!
Data is at the center of any organization. So it stands to reason that data should be at the center of how we design and write our Java applications.
In this talk we are going to look at how recent changes to the Java language (Records, Pattern Matching, Sealed Hierarchies) are enabling Java applications to be written in a Data-Oriented Programming (DOP) paradigm. We will look at the core concepts of DOP and how it compares and contrasts with the OOP approach familiar to many Java developers.
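As a taste of the DOP style, here is a small, hypothetical sketch using a sealed hierarchy, records, and exhaustive pattern matching (assumes Java 21+):

```java
public class Shapes {
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    // Data and behavior are separated: the shapes are plain data, and
    // operations use exhaustive pattern matching. The sealed hierarchy
    // means the switch needs no default branch.
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4)));  // 12.0
    }
}
```

Adding a new Shape variant is a compile error in every switch until it is handled, which is the safety net DOP relies on.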
In this architectural kata, you will step into the shoes of a software architect tasked with designing a modern healthcare management system for a rapidly growing provider, MedBest.
The challenge is to create a system that integrates patient records, appointment scheduling, billing, and telemedicine while ensuring robust security, compliance with regulations, scalability, and cost efficiency.
Authentication and authorization are foundational concerns in modern systems, yet they’re often treated as afterthoughts or re-implemented inconsistently across services.
In this talk, we’ll explore Keycloak, an open-source identity and access management system, and how it fits into modern application architectures. We’ll break down what Keycloak actually does (and what it doesn’t), explain the role of JWTs and OAuth2/OpenID Connect, and examine how identity, trust, and access control are handled across distributed systems.
We’ll also compare Keycloak to secret management systems like Vault, clarify common misconceptions, and walk through the integrations you will need with Spring, Quarkus, and other frameworks.
By the end, you’ll understand when Keycloak is the right tool, how to integrate it cleanly, and how to avoid the most common architectural mistakes.
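To ground the JWT discussion, here is a minimal, JDK-only sketch that decodes a token's payload segment. The token here is hand-built and unsigned, purely for illustration; in practice the claims are only trustworthy after the signature is verified against the issuer's (e.g., Keycloak's) public key:

```java
import java.util.Base64;

public class JwtPeek {
    // A JWT is three Base64url segments: header.payload.signature.
    // The payload is readable by anyone; only signature verification
    // makes its claims trustworthy.
    static String payloadOf(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        // Hand-built example token (claims only, fake signature).
        String header = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"alg\":\"none\"}".getBytes());
        String claims = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"sub\":\"alice\",\"realm\":\"demo\"}".getBytes());
        String jwt = header + "." + claims + ".sig";
        System.out.println(payloadOf(jwt));
    }
}
```

This is also why putting secrets in JWT claims is a common architectural mistake: the payload is encoded, not encrypted.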
In this session, we will define what Keycloak is, its value, and how it integrates with your existing architecture. Here is the layout of the talk:
Prometheus and Grafana form the backbone of modern metrics-based observability, yet many teams struggle to move from “we collect metrics” to “we understand our systems.”
This talk builds a clear mental model for Prometheus and Grafana: how metrics are exposed, scraped, stored, queried, and visualized — and how those metrics connect to real operational decisions. We’ll explore Prometheus architecture, PromQL, Kubernetes integration via the Prometheus Operator, and how metrics power advanced workflows like canary deployments with Argo Rollouts and OpenTelemetry-based telemetry.
Attendees will leave knowing what to measure, how to measure it, and where to start on Monday.
This talk builds a practical mental model for metrics-based observability using Prometheus and Grafana. Rather than focusing solely on dashboards, we’ll explore how metrics are exposed, collected, queried, and ultimately used to make real operational decisions. We’ll connect application-level instrumentation, Kubernetes-native monitoring, and modern telemetry standards, showing how Prometheus fits into today’s production environments and deployment workflows.
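As a rough illustration of the "exposed, scraped, queried" pipeline, here is a minimal, hypothetical scrape configuration (the job and target names are invented), with an example PromQL query in a comment:

```yaml
# prometheus.yml (fragment) - a hypothetical scrape job
scrape_configs:
  - job_name: "checkout-service"      # invented service name
    metrics_path: /metrics
    scrape_interval: 15s
    static_configs:
      - targets: ["checkout-service:8080"]

# Once scraped, a PromQL query such as
#   rate(http_requests_total{job="checkout-service"}[5m])
# turns a raw counter into a per-second request rate for dashboards and alerts.
```

In Kubernetes, the Prometheus Operator typically generates this configuration from ServiceMonitor resources rather than hand-edited files.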
Did you get into computers to avoid people, and then suddenly found yourself in charge of them?
In the world of software engineering, we’re trained to debug systems - yet rarely taught to debug ourselves or our teams. This talk is a field guide for technical leaders ready to evolve from code-centric problem solvers into emotionally intelligent catalysts for growth. Drawing from his unusual background in psychology and anthropology paired with over 20 years of pragmatic experience in the field, Robert offers a fresh lens on engineering leadership: one that sees developers not just as resources, but as richly complex humans navigating fear, ego, burnout, and brilliance.
Through real-world coaching, surreal metaphors, and lived experience leading engineers through transformation, Robert unpacks the invisible forces that shape developer behavior. You’ll learn how to spot emotional “exceptions” in your team, refactor toxic dynamics, and architect cultures of trust and psychological safety. Whether you're a tech lead, manager, or founder, this session offers practical tools and mindset shifts to help you lead with empathy, clarity, and courage - without sacrificing technical rigor.
Come ready to rethink leadership as a creative, iterative process - one that’s less about control and more about connection. Because the most powerful debugging tool in your stack… might just be you.
What happens when a self-taught programmer with a background in anthropology finds himself leading engineering teams? In this candid, humorous, and emotionally resonant talk, Robert Harris shares his journey from BASIC on a Commodore 64 to building psychologically safe, high-performing cultures in modern software organizations.
Blending fieldwork with frameworks, Robert explores the human side of engineering leadership—imposter syndrome, accidental management, and the painful lessons that shaped his philosophy. Drawing on his training in anthropology, he offers a practical guide to shaping team culture through shared language, rituals, experiences, and artifacts—from flaming pull request beacons to rubber duck onboarding kits.
Attendees will leave with:
• A fresh perspective on leadership rooted in emotional intelligence and cultural design
• Actionable strategies for building trust, accountability, and psychological safety
• A toolkit of metaphors, rituals, and artifacts to transform team dynamics
Whether you’re a reluctant manager, a seasoned leader, or just someone who’s ever stepped on a rake in production, this talk will help you turn dysfunction into culture—and culture into your team’s greatest asset.
Java's Generics syntax provides us with a means to increase the reusability of our code by allowing us to build software, particularly library software, that can work on many different types, even with limited knowledge about those types. The most familiar examples are the classes in Java's core collections API which can store and retrieve data of arbitrary types, without degenerating those types to java.lang.Object.
However, while the generics mechanism is very simple to use in simple cases such as using the collections API, it's much more powerful than that. Frankly, it can also be a little puzzling.
This session investigates the issues of type erasure, assignment compatibility in generic types, co- and contra-variance, through to bridge methods.
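A small, self-contained sketch of the variance rules under discussion (the familiar PECS guideline: producer extends, consumer super); assumes any recent Java version:

```java
import java.util.ArrayList;
import java.util.List;

public class Variance {
    // Co-variance: "? extends" lets us READ from a list of any Number
    // subtype, but the compiler forbids writing into it.
    static double sum(List<? extends Number> nums) {
        double total = 0;
        for (Number n : nums) total += n.doubleValue();
        // nums.add(1); // would not compile: the element type is unknown
        return total;
    }

    // Contra-variance: "? super" lets us WRITE Integers into a list
    // that may be typed as Integer, Number, or Object.
    static void fill(List<? super Integer> sink) {
        sink.add(1);
        sink.add(2);
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2.5f, 3L)));  // 6.5
        List<Number> dest = new ArrayList<>();
        fill(dest);
        System.out.println(dest);                       // [1, 2]
    }
}
```

The session goes well beyond this starting point, into erasure, bridge methods, and why these rules follow from Liskov substitution.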
Course outline
Type erasure
Two approaches for generics and Java's design choice
How to break generics (and how not to!)
Maintaining concrete type at runtime
Assignment compatibility of generic types
What's the problem? Understanding Liskov substitution in generic types
Co-variance
Two syntax options for co-variance
Contra-variance
Syntax for contra-variance
Worked examples with co- and contra-variance
Building arrays from generic types
Effective use of functional interfaces
Bridge methods
Review of overloading requirements
Faking overloading in generic types
Setup requirements
This course includes extensive live coding demonstrations and attendees will have access to the code that's created via a git repo. The majority of the examples will work in any version of Java from version 11 onwards, but some might use newer library features. You can use any Java development environment / IDE that you like and no other tooling is required.
For many beginning and intermediate software engineers, design is something of a secret anxiety. Often we know we can create something that works, and we can likely include a design pattern or two, if only to give our proposal some credibility. But sometimes we're left with a nagging feeling that there might be a better design, or a more appropriate pattern, and we may not be confident that we can justify our choices.
This session investigates the fundamental driving factors behind good design choices so we can balance competing concerns and confidently justify why we did what we did. The approach presented can be applied not only to design, but also to what's often separated out under the term “software architecture”.
Along the journey, we'll use the approach presented to derive several of the well known “Gang of Four” design patterns, and in so doing conclude that they are the product of sound design applied to a context and not an end in themselves.
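As a preview of the derivation approach, here is a hypothetical sketch showing Strategy falling out of separating the varying concern (how a discount is computed), with a lambda standing in for the traditional strategy class:

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

public class Pricing {
    // The varying concern is identified, separated out, and passed in:
    // that is Strategy, derived from the pressures of the problem
    // rather than memorized from a catalog.
    static double total(List<Double> items, ToDoubleFunction<Double> discount) {
        return items.stream().mapToDouble(discount).sum();
    }

    public static void main(String[] args) {
        List<Double> cart = List.of(100.0, 50.0);
        System.out.println(total(cart, p -> p));        // no discount: 150.0
        System.out.println(total(cart, p -> p * 0.9));  // 10% off: 135.0
    }
}
```

The same separation pressure, applied to different contexts, yields Decorator and Command, which is the path the session walks.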
Course outline
Background: three levels of “design”
Data structure and algorithm
Design
Software Architecture
Why many programmers struggle with design
What makes a design “better” or “worse” than any other?
The pressures of the real world versus a learning environment
A time-honored engineering solution
Identifying the problem
Dissecting the elements
Creating a working whole from the parts
Deriving three core design patterns from principles
Decorator
Strategy
Sidenote, why traditional inheritance is bad
Command or “higher order function”
Setup requirements
This course is largely language agnostic, but does include some live coding demonstrations. Attendees will have access to the code that's created via a git repo. The majority of the examples will work in any version of Java from version 11 onwards. You can use any Java development environment / IDE that you like and no other tooling is required.
This talk will guide Java developers through the design and implementation of multi-agent generative AI systems using event-driven principles.
Attendees will learn how autonomous GenAI agents collaborate, communicate, and adapt in real-time workflows using modern Java frameworks and messaging protocols.
As generative AI systems evolve from single LLM calls to complex, goal‑driven workflows, multi‑agent architectures are becoming essential for robust, scalable, and explainable AI applications.
This talk presents a practical framework for designing and implementing multi‑agent generative AI systems, covering four core orchestration patterns that define how agents coordinate:
Orchestrator‑Worker: A central agent decomposes a task and delegates subtasks to specialized worker agents, then aggregates and validates results.
Hierarchical Agent: Agents are organized in layers (e.g., manager, specialist, executor), enabling abstraction, delegation, and error handling across levels.
Blackboard: Agents contribute to and react from a shared “blackboard” workspace, enabling loosely coupled, event‑driven collaboration.
Market‑Based: Agents act as autonomous participants that negotiate, bid, or compete for tasks and resources, useful in dynamic, resource‑constrained environments.
For each pattern, we show concrete use cases, such as customer support triage, research synthesis, code generation pipelines, and discuss trade‑offs in latency, complexity, and observability.
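As a rough, framework-free sketch of the Orchestrator-Worker pattern (worker calls are stubbed; in a real system they would invoke models or tools):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Orchestrator {
    // Stand-in for a call to a specialized worker agent.
    static String worker(String subtask) {
        return "result(" + subtask + ")";
    }

    // Orchestrator-Worker: decompose the task, delegate subtasks in
    // parallel, then aggregate and validate the results.
    static String run(List<String> subtasks) throws Exception {
        try (ExecutorService pool = Executors.newFixedThreadPool(4)) {
            List<Future<String>> futures = subtasks.stream()
                    .map(t -> pool.submit(() -> worker(t)))
                    .toList();
            StringBuilder merged = new StringBuilder();
            for (Future<String> f : futures) merged.append(f.get()).append(' ');
            return merged.toString().trim();  // aggregation step
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(List.of("classify", "summarize")));
    }
}
```

The other patterns vary what sits in the middle: layers of managers (Hierarchical), a shared workspace (Blackboard), or a bidding mechanism (Market-Based).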
Everybody is talking about Generative AI and models that are better than anything else before. What are they really talking about?
In this workshop with hands-on exercises, we will discuss Generative AI in theory and will also try it in practice (with free access to an Oracle LiveLab cloud session to learn about Vector Search). You'll be able to understand what Generative AI is all about and how it can be used.
The content will include:
Java has quietly absorbed functional ideas over the last decade. Lambdas, streams, records, sealed types. It has been an amazing journey, but most teams still write code as if none of that really changed anything. This workshop asks a simple question: what if we actually took those features seriously?
In Thinking Functionally in Java, we explore how far disciplined functional design can take us using plain Java with no rewrites, no new language mandates, and no academic detours. Along the way, we address reproducible development environments with Nix, replace exception-driven control flow with explicit error modeling, and uncover why concepts like flatMap, algebraic data types, and composability matter even if you never say the word “monad” out loud.
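As a taste of explicit error modeling in plain Java, here is a hypothetical sketch of a sealed Result type with a short-circuiting flatMap (assumes Java 21+):

```java
import java.util.function.Function;

public class Results {
    // Explicit error modeling: failure is a value, not a thrown exception.
    sealed interface Result<T> permits Ok, Err {}
    record Ok<T>(T value) implements Result<T> {}
    record Err<T>(String reason) implements Result<T> {}

    // flatMap chains steps and short-circuits on the first failure:
    // this is the composability the workshop builds on.
    static <T, U> Result<U> flatMap(Result<T> r, Function<T, Result<U>> f) {
        return switch (r) {
            case Ok<T> ok -> f.apply(ok.value());
            case Err<T> e -> new Err<>(e.reason());
        };
    }

    static Result<Integer> parse(String s) {
        try { return new Ok<>(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return new Err<>("not a number: " + s); }
    }

    static Result<Integer> reciprocalPercent(Integer n) {
        return n == 0 ? new Err<>("division by zero") : new Ok<>(100 / n);
    }

    public static void main(String[] args) {
        System.out.println(flatMap(parse("4"), Results::reciprocalPercent));
        System.out.println(flatMap(parse("x"), Results::reciprocalPercent));
    }
}
```

No monad vocabulary required: the sealed hierarchy makes every failure path visible to the compiler.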
Along the way, we'll also touch on type-class-style abstractions (such as Show and Eq) and the limits of Java's type system.
You’ve heard the buzz — now roll up your sleeves and build with it. In this hands-on workshop, you’ll learn exactly how the Model Context Protocol (MCP) works — and you’ll write your own MCP server tool from scratch, then author an Agent that uses it to deliver real-time, context-aware help right inside your dev flow.
We’ll break down the raw MCP protocol step by step:
• How it streams context between your IDE and Agents
• How messages are structured and exchanged
• How to wire up an MCP Client to talk to your new tool
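For orientation: MCP messages follow JSON-RPC 2.0. A tools/call request might look roughly like the sketch below (the tool name and arguments are invented for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "lookup_docs",
    "arguments": { "query": "pagination" }
  }
}
```

The server replies with a result (or error) carrying the same id, which is how the client correlates responses on the stream.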
By the end, you’ll not only understand the protocol — you’ll have built a working MCP server tool and your own Agent that plugs into it to automate tasks, provide better suggestions, and boost your productivity.
Bring your curiosity — and your laptop — because you’ll walk away with practical code, a working prototype, and the confidence to build and extend your own MCP tools.
In an era where digital transformation and AI adoption are accelerating across every industry, the need for consistent, scalable, and robust APIs has never been more critical. AI-powered tools—whether generating code, creating documentation, or integrating services—rely heavily on clean, well-structured API specifications to function effectively. As teams grow and the number of APIs multiplies, maintaining design consistency becomes a foundational requirement not just for human developers, but also for enabling reliable, intelligent automation. This session explores how linting and reusable models can help teams meet that challenge at scale.
We will explore API linting using the open-source Spectral project to enable teams to identify and rectify inconsistencies during design. In tandem, we will navigate the need for reusable models—recognizing that the best specification is the one you don’t have to write or lint at all! These two approaches not only facilitate the smooth integration of services but also foster collaboration across teams by providing a shared, consistent foundation.
Engineering teams have always struggled with fragmented documentation, tribal knowledge, and inconsistent standards. What we now call context engineering is closely related to long-standing knowledge management challenges, but the impact is amplified as AI agents enter everyday development workflows. Gaps, conflicts, and poorly organized context slow developers and lead automated systems to produce unreliable results. Without deliberate strategies, organizations risk scaling confusion instead of capability.
In this session, we will explore practical ways to engineer context and knowledge management that works for both developers and AI agents. Using real examples from building centralized, docs-as-code platforms, we will cover how to organize internal documentation, define ownership between team and platform context, and validate content for gaps and conflicts. We will also examine shared context engineering strategies such as summarization, retrieval, and externalization, including how MCP-based approaches improve clarity, scalability, and execution for humans and machines alike.
We have all seen the “Hello, World” of Spring AI: sending a prompt and getting a response. But as we move toward production, the real challenge is not the LLM call; it is the workflow. How do you ensure an agent does not loop infinitely? How do you coordinate multiple tools without a mess of “if-else” blocks? And how do we keep our Java-centric domain models at the heart of the AI’s reasoning?
Enter Embabel, a new JVM-based framework from Rod Johnson (creator of Spring) designed to bring discipline to agentic AI. Unlike Python-centric alternatives, Embabel is built on the philosophy of strong typing, OODA loops (Observe, Orient, Decide, Act), and Goal-Oriented Action Planning (GOAP).
In this session, we will go beyond basic RAG and explore how to build “digital workers” that can actually plan. You will learn:
How to turn your existing Spring Beans into AI Actions.
The shift from imperative coding to Goal-Oriented orchestration.
How Embabel uses DICE (Domain-Integrated Context Engineering) to give agents true domain knowledge.
Why the JVM is actually the best place to run mission-critical AI agents.
Join us for a code-heavy look at the future of Java backend development. We are moving to a world where our systems do not just respond to requests, but actively work to achieve goals.
The “Hello, World” of Spring AI involves sending a prompt and receiving a text response. This is no longer enough for production. To build enterprise grade AI, we must move beyond simple request and response cycles toward autonomous agents capable of reasoning, planning, and executing complex workflows. The challenge is doing this without losing the type safety, observability, and domain driven design that makes the Java ecosystem the backbone of enterprise software.
Join us for a three hour deep dive into Embabel, the new JVM framework from Rod Johnson designed for disciplined agentic AI. This workshop moves past the “if-else” mess of basic orchestration and introduces an architecture based on OODA loops and Goal Oriented Action Planning (GOAP).
Part 1: From Prompting to Planning (90 Minutes)
In the first half, we move from imperative logic to Goal Oriented orchestration. You will learn the core philosophy of Embabel and how it uses the OODA loop (Observe, Orient, Decide, Act) to maintain stateful awareness. This module focuses heavily on DICE (Domain-Integrated Context Engineering) which allows you to move beyond simple RAG by injecting your existing Java domain models directly into the agent’s reasoning process. You will learn the planning mindset by defining clear goals rather than rigid paths, allowing the AI to navigate your business rules dynamically.
Part 2: Building the Digital Worker (90 Minutes)
The second half is a hands-on lab where we turn theory into a functioning agent. We will explore the process of turning your existing Spring Beans into AI Actions by defining preconditions and effects so that Embabel can construct plans without infinite loops. We will also address what happens when an LLM hallucinates or a tool fails. This includes exploring advanced patterns for error handling and plan repair to demonstrate why the JVM is the superior environment for mission critical AI.
By the end of this workshop, you will have built a functional Digital Worker capable of navigating a complex domain and interacting with real Spring managed services. You will leave with a local prototype and a blueprint for bringing agentic AI to your organization. Participants should have experience with Spring Boot and Java or Kotlin as well as a laptop with a JDK 17+ environment.
Stop just calling APIs and start building workers. This workshop provides a code heavy look at the future of Java backend development where systems do not just respond to requests but actively work to achieve goals.
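As a rough, framework-free illustration of Goal Oriented Action Planning (this is not Embabel's actual API; names and facts are invented), actions declare preconditions and effects as named facts, and a simple search finds a sequence that reaches the goal:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class GoapSketch {
    // An action advertises what it needs (pre) and what it makes true (effects).
    record Action(String name, Set<String> pre, Set<String> effects) {}

    // Breadth-first search over world states for a sequence reaching the goal.
    static List<String> plan(Set<String> state, Set<String> goal, List<Action> actions) {
        record Node(Set<String> state, List<String> path) {}
        Deque<Node> queue = new ArrayDeque<>(List.of(new Node(state, List.of())));
        Set<Set<String>> seen = new HashSet<>();
        while (!queue.isEmpty()) {
            Node n = queue.poll();
            if (n.state().containsAll(goal)) return n.path();
            for (Action a : actions) {
                if (!n.state().containsAll(a.pre())) continue;  // precondition check
                Set<String> next = new HashSet<>(n.state());
                next.addAll(a.effects());
                if (seen.add(next)) {
                    List<String> path = new ArrayList<>(n.path());
                    path.add(a.name());
                    queue.add(new Node(next, path));
                }
            }
        }
        return List.of();  // no plan found
    }

    public static void main(String[] args) {
        List<Action> actions = List.of(
            new Action("fetchOrder", Set.of(), Set.of("order")),
            new Action("draftRefund", Set.of("order"), Set.of("refundDraft")),
            new Action("notifyUser", Set.of("refundDraft"), Set.of("userNotified")));
        System.out.println(plan(Set.of(), Set.of("userNotified"), actions));
    }
}
```

Declaring goals instead of hard-coding sequences is what lets a planner re-route around failures, which a production framework layers with replanning and guardrails.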
The “Hello, World” of Spring AI involves sending a prompt and receiving a text response. This is no longer enough for production. To build enterprise grade AI, we must move beyond simple request and response cycles toward autonomous agents capable of reasoning, planning, and executing complex workflows. The challenge is doing this without losing the type safety, observability, and domain driven design that makes the Java ecosystem the backbone of enterprise software.
Join us for a three hour deep dive into Embabel, the new JVM framework from Rod Johnson designed for disciplined agentic AI. This workshop moves past the “if-else” mess of basic orchestration and introduces an architecture based on OODA loops and Goal Oriented Action Planning (GOAP).
Part 1: From Prompting to Planning (90 Minutes)
In the first half, we move from imperative logic to Goal Oriented orchestration. You will learn the core philosophy of Embabel and how it uses the OODA loop (Observe, Orient, Decide, Act) to maintain stateful awareness. This module focuses heavily on DICE (Domain-Integrated Context Engineering), which allows you to move beyond simple RAG by injecting your existing Java domain models directly into the agent’s reasoning process. You will learn the planning mindset by defining clear goals rather than rigid paths, allowing the AI to navigate your business rules dynamically.
Part 2: Building the Digital Worker (90 Minutes)
The second half is a hands-on lab where we turn theory into a functioning agent. We will explore how to turn your existing Spring Beans into AI Actions by defining preconditions and effects so that Embabel can construct plans without infinite loops. We will also address what happens when an LLM hallucinates or a tool fails, exploring advanced patterns for error handling and plan repair that demonstrate why the JVM is a superior environment for mission-critical AI.
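The precondition/effect planning idea can be sketched outside any framework. The following Python toy is illustrative only (the action names and state facts are invented, and Embabel's real API is JVM-based and far richer); it shows how a planner searches for an action sequence that reaches a goal without looping:

```python
# Toy Goal Oriented Action Planning (GOAP): actions declare preconditions
# and effects; the planner breadth-first searches for a sequence of actions
# whose combined effects satisfy the goal. All names here are illustrative.
from collections import deque

class Action:
    def __init__(self, name, preconditions, effects):
        self.name = name
        self.preconditions = preconditions  # facts required before running
        self.effects = effects              # facts made true by running

def plan(start, goal, actions):
    """Return action names that transform the start facts into the goal facts."""
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, path = queue.popleft()
        if goal <= state:
            return path
        for a in actions:
            if a.preconditions <= state:
                nxt = frozenset(state | a.effects)
                if nxt not in seen:  # skip visited states: no infinite loops
                    seen.add(nxt)
                    queue.append((nxt, path + [a.name]))
    return None  # no sequence of actions reaches the goal

actions = [
    Action("fetch_customer", {"order_id"}, {"customer"}),
    Action("check_credit", {"customer"}, {"credit_ok"}),
    Action("reserve_stock", {"order_id", "credit_ok"}, {"stock_reserved"}),
    Action("confirm_order", {"stock_reserved"}, {"order_confirmed"}),
]
assert plan({"order_id"}, {"order_confirmed"}, actions) == [
    "fetch_customer", "check_credit", "reserve_stock", "confirm_order"]
```

Because previously visited states are skipped, an action whose effects add nothing new can never be replayed forever, which is the loop-freedom property the lab relies on.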
By the end of this workshop, you will have built a functional Digital Worker capable of navigating a complex domain and interacting with real Spring-managed services. You will leave with a local prototype and a blueprint for bringing agentic AI to your organization. Participants should have experience with Spring Boot and Java or Kotlin, as well as a laptop with a JDK 17+ environment.
Stop just calling APIs and start building workers. This workshop provides a code-heavy look at the future of Java backend development, where systems do not just respond to requests but actively work to achieve goals.
As AI agents become a first-class part of software systems, learning to build MCP servers is becoming as fundamental as learning to build RESTful APIs. Many engineers already interact with MCP through IDEs and agent tooling, and building these servers introduces a new set of design considerations beyond traditional request/response APIs. While MCP servers share familiar API concepts, the move to agent-driven, probabilistic clients changes how engineers think about contracts, tool design, output shaping, state management, error handling, and spec adoption. Building MCP servers is emerging as an important capability for all developers and teams.
In this session, we’ll start with what MCP is and how the protocol defines resources, tools, prompts, and elicitations. We’ll walk through the practical decisions involved in building your first MCP server, including transport choices, authentication, tool and resource granularity, server instructions, and context engineering. You’ll see how MCP aligns with and diverges from traditional API design, explore framework options, preview generated MCP servers, and learn how to design for scaling, versioning, gateways, and deterministic error handling in real-world systems.
In this session, we'll cover several useful prompt engineering techniques as well as some emerging patterns that are categorized within the “Agentic AI” space and see how to go beyond simple Q&A to turn your LLM of choice into a powerful ally in achieving your goals.
At its core, Generative AI is about submitting a prompt to an LLM-backed API and getting some response back. But within that interaction there is a lot of nuance, particularly with regard to the prompt itself.
It's important to know how to write effective prompts, choosing the right wording and being clear about your expectations, to get the best responses from an LLM. This is often called “prompt engineering” and includes several patterns and techniques that have emerged in the Gen AI space.
If your to-do list keeps growing because “it’s just faster to do it myself,” this workshop is for you. Delegation isn’t just about getting things off your plate—it’s about building others up, focusing on the work only you can do, and staying sane in the process. You’ll learn the 7 Steps of Delegation—a clear, practical framework for empowering your team without micromanaging (that’s time-consuming, and no one likes it!). From clarifying the why behind a task to setting outcomes instead of instructions, you’ll discover how to hand off work with trust, clarity, and accountability. You’ll practice how to invite questions, listen with empathy, and create ownership while still keeping your team aligned on milestones and results.
You’ll also build two key skills that make delegation sustainable: how to Let It Go—quieting the perfectionism and control that keep you clinging to work—and how to Own Your Part when mistakes happen. Because they will. Instead of taking it all back or blaming your team, you’ll learn how to lead through those moments with clarity and trust. Walk away with a smarter approach to delegation, stronger team leverage, and more space to think strategically, lead proactively—and get home on time way more often.
For Tech Leaders, the fear of making mistakes can show up everywhere — from over-polishing a roadmap deck to rewriting your team’s code “just to be sure.” But leadership isn’t about getting it perfect — it’s about moving forward, learning fast, and helping others do the same. In this workshop, you’ll learn how to adopt a growth mindset: a practical framework that reframes mistakes as part of progress, not failure. We’ll explore how perfectionism sneaks into your leadership style — and what it’s quietly costing you.
You’ll also build two essential FLOW skills that help you lead through imperfection: how to Focus with Breath so you stay calm when fear kicks in, and how to Let It Go—releasing unrealistic expectations and the pressure to get everything right the first time. Through relatable leadership scenarios and peer-based practice, you’ll develop a more resilient mindset, sharper decision-making, and the confidence to lead with clarity — even when the outcome isn’t guaranteed (or perfect). You’ll walk away ready to apply these skills immediately in high-pressure moments, performance reviews, team decisions, and your day-to-day leadership rhythm.
Missed deadlines. Underperforming teammates. Stakeholders pushing too hard. If you're a technical leader, you're probably avoiding at least one conversation that matters. In this workshop, you’ll learn a simple, proven 4-step framework (inspired by Nonviolent Communication) to handle tough conversations with clarity, confidence, and zero drama. Think of it like debugging communication: identify what’s really going on, name it without blame, and make a request that moves things forward.
You’ll also learn two essential FLOW skills to handle what makes these talks hard in the first place—pre-conversation anxiety and post-conversation rumination—and how to own your part without sliding into blame. You’ll leave this workshop with tools to approach difficult conversations with steady confidence and get your point across without triggering confusion or conflict. It will feel less like a confrontation and more like a real conversation—one that actually brings clarity to the situation.
Ever feel like you're spinning your wheels on tasks that barely move the needle? It's time to flip the script with the 80/20 rule—where 20% of your efforts drive 80% of your results. This talk provides background on how the 80/20 rule transforms productivity at work and home, helping you identify and focus on the tasks that truly matter while minimizing distractions, low-impact work, and yes—no more chasing squirrels! You'll learn practical strategies to prioritize with pinpoint clarity, eliminate time-wasters, and let go of the stuff that doesn’t actually deserve your time or energy. Letting go is a skill—one you’ll start applying here to reclaim focus and get results.
With real-world examples, hands-on techniques, and a touch of humor, you'll walk away ready to do less but achieve so much more. Because why work harder when you can work smarter?
There are certain tech trends people at least know about, such as Moore's Law, even if they don't really understand them. But there are other forces at play in and around our industry that are unknown or ignored by the ever-diminishing tech journalism profession. They help explain and predict the pressures and influences we are seeing now or soon will.
In this talk, I will identify a variety of trends that are happening at various paces in intertwined ways at the technological, scientific, cultural, biological, and geopolitical levels and why Tech Leaders should know about them. Being aware of the visible and invisible forces that surround you can help you work with them, rather than against them. You will also be more likely to make good choices and thrive rather than being buffeted uncontrollably.
Since the Scientific and Industrial Revolutions, there has been more to know every day. No individual can know it all and we have seen the entrenchment of the specialist for the past hundred or so years. When all of this tacit knowledge was locked in our heads, the specialist was rewarded for knowing details.
In our industry we have seen professionals gravitate to specific languages, specific tiers in the architecture (e.g. front-end vs backend), and specific libraries or frameworks. Sometimes they will even go so far as to list specific versions of specific technologies on their resume.
All of this specialization can be beneficial when you need resources that are deep within narrow confines. But the ubiquitous glut of available information no longer requires us to know topics at this level of detail, and market realities are such that nobody has the budget to employ only specialists any more. Developers have needed to become designers, testers, data experts, security-aware, AI-cognizant, and capable of communicating with various stakeholders.
When your industry epitomizes unfettered change, you need to rely on generalists, not specialists; synthesizers, not knowledge keepers. How can you attract, hire, and benefit from technologists who identify as problem solving value adders rather than programmers of a specific language? How can you encourage their growth and measure success? Even more, how do you lead them yourself?
In this talk we will discuss the rise of the generalist knowledge worker who creates value even in the face of information overflow and AI.
You’ve heard the buzz — now roll up your sleeves and build with it. In this hands-on workshop, you’ll learn exactly how the Model Context Protocol (MCP) works — and you’ll write your own MCP server tool from scratch, then author an Agent that uses it to deliver real-time, context-aware help right inside your dev flow.
We’ll break down the raw MCP protocol step by step:
• How it streams context between your IDE and Agents
• How messages are structured and exchanged
• How to wire up an MCP Client to talk to your new tool
By the end, you’ll not only understand the protocol — you’ll have built a working MCP server tool and your own Agent that plugs into it to automate tasks, provide better suggestions, and boost your productivity.
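Under the hood, MCP messages are JSON-RPC 2.0 envelopes. This Python sketch builds a `tools/call` request and a matching response by hand to show the wire shape; the tool name and payloads are invented, and a real client or server would use an MCP SDK rather than raw dicts:

```python
# Hand-rolled MCP wire messages (JSON-RPC 2.0). A tools/call request names
# a tool and passes arguments; the response carries a content list. The
# "lookup_docs" tool and its payloads are hypothetical examples.
import json

def tools_call_request(req_id, tool_name, arguments):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

def tools_call_response(req_id, text):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "result": {"content": [{"type": "text", "text": text}]},
    })

req = tools_call_request(1, "lookup_docs", {"query": "retry policy"})
resp = tools_call_response(1, "Retries default to 3 attempts.")
assert json.loads(req)["method"] == "tools/call"
assert json.loads(resp)["result"]["content"][0]["type"] == "text"
```

Matching the `id` field is how a client pairs a response with its request, which matters once multiple tool calls are in flight.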
Bring your curiosity — and your laptop — because you’ll walk away with practical code, a working prototype, and the confidence to build and extend your own MCP tools and Agents.
New languages often carry an operational burden to deployment and involve tradeoffs of performance for safety. Rust has emerged as a powerful, popular, and increasingly widely-used language for all types of development. Come learn why Rust is entering the Linux kernel and Microsoft and Google are favoring it for new development over C++.
This Introduction to Rust course presents the various merits (and complexities) of this safe, fast, and popular language that is taking the world by storm. The three-day course covers everything students from various backgrounds will need to get started as successful Rust programmers.
Attendees will learn how to:
In this intensive 3-hour hands-on workshop, you'll master the art and science of prompt engineering: systematic frameworks for constructing effective prompts, from foundational elements to cutting-edge techniques including multi-expert prompting, probability-based optimization, and incentive framing. Through five progressive labs using Ollama and llama3.2:3b in GitHub Codespaces, you'll build production-ready templates and see quality improvements in real time. Leave with immediately applicable techniques, reusable prompt patterns, and a decision framework for selecting the right approach for any AI task.
Modern AI systems deliver many capabilities, but their effectiveness depends entirely on how well they're prompted. This workshop transforms prompt engineering from trial-and-error guesswork into a systematic, measurable discipline. You'll learn proven frameworks for constructing effective prompts and cutting-edge optimization techniques that deliver quality improvements in real-world applications.
Through five hands-on labs in GitHub Codespaces, you'll work with Ollama hosting llama3.2:3b to implement each technique, measure its impact, and build reusable templates. Every concept is immediately validated with code you can deploy tomorrow.
What You'll Master
The workshop progresses through five core competency areas, each reinforced with a practical lab:
Foundations of Effective Prompting begins with the six essential elements every prompt needs: task definition, context, constraints, role assignment, output format, and examples. You'll systematically transform a poorly-constructed prompt into an optimized version, measuring quality improvements at each step. This foundation eliminates the guesswork and establishes a repeatable framework for all future prompt engineering work.
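As a rough illustration of the six elements, a prompt can be assembled mechanically from labeled sections. The section labels and sample content below are a convention of this sketch, not a requirement of any model:

```python
# Assemble the six prompt elements named above: role, task, context,
# constraints, output format, and examples. The "## Section" labels are
# just this sketch's convention for keeping elements visually distinct.
def build_prompt(task, context, constraints, role, output_format, examples):
    sections = [
        ("Role", role),
        ("Task", task),
        ("Context", context),
        ("Constraints", "\n".join(f"- {c}" for c in constraints)),
        ("Output format", output_format),
        ("Examples", "\n".join(examples)),
    ]
    return "\n\n".join(f"## {label}\n{body}" for label, body in sections)

prompt = build_prompt(
    task="Summarize the incident report in three bullet points.",
    context="Report covers a 40-minute API outage on 2025-03-01.",
    constraints=["No speculation", "Plain language"],
    role="You are a site reliability engineer.",
    output_format="Markdown bullet list.",
    examples=["Input: <report> -> Output: - cause - impact - fix"],
)
assert prompt.count("## ") == 6  # all six elements present
```

Treating the six elements as template slots is what makes the transformation repeatable rather than ad hoc.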
Pattern-Based Techniques introduces few-shot learning and Chain of Thought (CoT) reasoning. Few-shot prompting teaches models through examples rather than explanations, dramatically improving consistency on classification and transformation tasks. Chain of Thought makes reasoning transparent, improving accuracy on complex problems by 20-40% while enabling you to verify the model's logic. You'll build a classification system and compare zero-shot, few-shot, and CoT approaches with measurable accuracy metrics.
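Few-shot prompting is mostly string assembly: labeled examples precede the new input so the model infers the task from the pattern. A minimal sketch (the sentiment pairs are invented training examples, not real model output):

```python
# Few-shot prompt assembly: labeled examples come first, then the new input
# with an empty label for the model to complete.
def few_shot_prompt(examples, new_input):
    shots = "\n\n".join(f"Text: {t}\nLabel: {l}" for t, l in examples)
    return f"{shots}\n\nText: {new_input}\nLabel:"

examples = [
    ("The deploy went flawlessly.", "positive"),
    ("Rollback failed twice.", "negative"),
]
prompt = few_shot_prompt(examples, "Latency doubled after the patch.")
assert prompt.endswith("Label:")  # model completes the final label
```

The same scaffold extends to Chain of Thought by including a worked reasoning trace in each example rather than a bare label.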
Advanced Structural Techniques combines role-based prompting, structured outputs, and constrained generation into enterprise-ready patterns. You'll create an API documentation generator that uses expert personas, enforces strict formatting requirements, outputs reliable JSON, and maintains 90%+ consistency across diverse inputs. This lab produces production templates with automated validation—patterns you can immediately deploy in your organization.
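Constrained JSON output is only useful if it is validated before use. Here is a hand-rolled sketch of that gate; a production system would likely use jsonschema or Pydantic instead, and the required keys are invented for this example:

```python
# Validate an LLM's structured output: parse as JSON and check required
# keys before accepting it; reject anything malformed rather than passing
# it downstream. REQUIRED_KEYS is an illustrative stand-in for a schema.
import json

REQUIRED_KEYS = {"endpoint", "method", "description"}

def validate_doc_entry(raw):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # not JSON at all
    if not isinstance(data, dict) or not REQUIRED_KEYS <= data.keys():
        return None  # missing required fields
    return data

good = '{"endpoint": "/users", "method": "GET", "description": "List users"}'
bad = '{"endpoint": "/users"}'
assert validate_doc_entry(good)["method"] == "GET"
assert validate_doc_entry(bad) is None
```

Automated validation like this is what lets a template claim a measurable consistency rate across diverse inputs.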
Cutting-Edge Methods explores two powerful techniques gaining traction in 2025-2026. Multi-expert prompting simulates a council of experts (technical, business, security) analyzing complex decisions from multiple perspectives, catching blind spots that single-perspective prompts miss. Reverse prompting flips the traditional interaction: instead of you trying to perfectly specify requirements, the AI asks clarifying questions to discover what you really need. You'll measure 40-60% improvements in decision quality and 80-90% gains in requirement clarity.
Probabilistic and Incentive-Based Optimization introduces the latest research-backed techniques for extracting maximum quality from language models. Stanford's breakthrough probability-based prompting—requesting multiple responses with confidence scores—improves reliability by 30-50% on ambiguous tasks. Incentive framing (yes, “This is critical” and “Take your time” actually work) increases thoroughness by 20-40%. Combined, these techniques deliver 50-70% quality improvements on high-stakes decisions.
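One way to read “probability-based prompting” in code: sample several candidate answers with self-reported confidence scores, then keep the answer whose confidences sum highest. The candidates below are hard-coded stand-ins for real model samples:

```python
# Confidence-weighted answer selection: aggregate repeated samples by
# answer and pick the one with the highest total confidence. Sample data
# is invented; in practice each pair would come from a model response.
from collections import defaultdict

def select_by_confidence(candidates):
    """candidates: list of (answer, confidence in [0, 1]) pairs."""
    totals = defaultdict(float)
    for answer, confidence in candidates:
        totals[answer] += confidence
    return max(totals, key=totals.get)

samples = [("42", 0.9), ("41", 0.6), ("42", 0.7), ("40", 0.3)]
assert select_by_confidence(samples) == "42"  # 1.6 beats 0.6 and 0.3
```

Summing rather than averaging rewards answers the model reaches repeatedly, a self-consistency effect that is where most of the reliability gain comes from.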
Modern system design has entered a new era. It’s no longer enough to optimize for uptime and latency — today’s systems must also be AI-ready, token-efficient, trustworthy, and resilient. Whether building global-scale apps, powering recommendation engines, or integrating GenAI agents, architects need new skills and playbooks to design for scale, speed, and reliability.
This full-day workshop blends classic distributed systems knowledge with AI-native thinking. Through case studies, frameworks, and hands-on design sessions, you’ll learn to design systems that balance performance, cost, resilience, and truthfulness — and walk away with reusable templates you can apply to interviews and real-world architectures.
Target Audience
Enterprise & Cloud Architects → building large-scale, AI-ready systems.
Backend Engineers & Tech Leads → leveling up to system design mastery.
AI/ML & Data Engineers → extending beyond pipelines to full-stack AI systems.
FAANG & Big Tech Interview Candidates → preparing for system design interviews with an AI twist.
Engineering Managers & CTO-track Leaders → guiding teams through AI adoption.
Startup Founders & Builders → scaling AI products without burning money.
Learning Outcomes
By the end of the workshop, participants will be able to:
Apply a 7-step system design framework extended for AI workloads.
Design systems that scale for both requests and tokens.
Architect multi-provider failover and graceful degradation ladders.
Engineer RAG 2.0 pipelines with hybrid search, GraphRAG, and semantic caching.
Implement AI trust & security with guardrails, sandboxing, and red-teaming.
Build observability dashboards for hallucination %, drift, token costs.
Reimagine real-world platforms (Uber, Netflix, Twitter, Instagram) with AI integration.
Practice mock interviews & chaos drills to defend trade-offs under pressure.
Take home reusable templates (AI System Design Canvas, RAG Checklist, Chaos Runbook).
Gain the confidence to lead AI-era system design in interviews, enterprises, or startups.
Workshop Agenda (Full-Day, 8 Hours)
Session 1 – Foundations of Modern System Design (60 min)
The new era: Why classic design is no longer enough.
Architecture KPIs in the AI age: latency, tokens, hallucination %, cost.
Group activity: brainstorm new KPIs.
Session 2 – Frameworks & Mindset (75 min)
The 7-Step System Design Framework (AI-extended).
Scaling humans vs tokens.
Token capacity planning exercise.
Session 3 – Retrieval & Resilience (75 min)
RAG 2.0 patterns: chunking, hybrid retrieval, GraphRAG, semantic cache.
Multi-provider resilience + graceful degradation ladders.
Whiteboard lab: design a resilient RAG pipeline.
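A semantic cache, one of the Session 3 patterns, checks query similarity before calling the model. This sketch uses a toy bag-of-words embedding in place of a real embedding model, and the 0.8 threshold is an arbitrary choice for illustration:

```python
# Semantic cache sketch: before invoking the LLM, compare the query
# against cached queries by cosine similarity and reuse the stored answer
# above a threshold. embed() is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())  # bag-of-words "embedding"

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
        * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, query):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None  # miss: caller invokes the LLM, then calls put()

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is the refund policy", "Refunds within 30 days.")
assert cache.get("what is the refund policy") == "Refunds within 30 days."
```

Unlike an exact-match cache, near-duplicate phrasings can hit the same entry, which is what makes semantic caching pay for itself in token costs.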
Session 4 – Security & Observability (60 min)
Threats: prompt injection, data exfiltration, abuse.
Guardrails, sandboxing, red-teaming.
Observability for LLMs: traces, cost dashboards, drift monitoring.
Activity: STRIDE threat-modeling for an LLM endpoint.
Session 5 – Real-World System Patterns (90 min)
Uber, Netflix, Instagram, Twitter, Search, Fraud detection, Chatbot.
AI-enhanced vs classic system designs.
Breakout lab: redesign a system with AI augmentation.
Session 6 – Interviews & Chaos Drills (75 min)
Mock interview challenges: travel assistant, vector store sharding.
Peer review of trade-offs, diagrams, storytelling.
Chaos drills: provider outage, token overruns, fallback runbooks.
Closing (15 min)
Recap: 3 secrets (Scaling tokens, RAG as index, Resilient degradation).
Templates & takeaways: AI System Design Canvas, RAG Checklist, Chaos Runbook.
Q&A + networking.
Takeaways for Participants
AI System Design Canvas (framework for interviews & real-world reviews).
RAG 2.0 Checklist (end-to-end retrieval playbook).
Chaos Runbook Template (resilience drill starter kit).
AI SLO Dashboard template for observability + FinOps.
Confidence to design and defend AI-ready architectures in both career and enterprise contexts.
APIs built for humans often fail when consumed by AI agents.
They rely on documentation instead of contracts, return unpredictable structures, and break silently when upgraded. Large Language Models (LLMs) and autonomous agents need something different: machine-discoverable, deterministic, idempotent, and lifecycle-managed APIs.
This session introduces a five-phase API readiness framework—from discovery to deprecation—so you can systematically evolve your APIs for safe, predictable AI consumption.
You’ll learn how to assess current APIs, prioritize the ones that matter, and apply modern readiness practices: function/tool calling, schema validation, idempotency, version sunset headers, and agent-aware monitoring.
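Of these practices, idempotency is the most directly expressible in code: a mutating endpoint accepts an idempotency key and replays the stored result on retry instead of re-executing. A minimal sketch (the endpoint and field names are invented, and the dict stands in for a durable store):

```python
# Idempotency sketch: agents retry aggressively, so a create operation
# keyed by an Idempotency-Key must return the original result on replay
# rather than creating a duplicate. _results stands in for a durable store.
_results = {}

def create_order(idempotency_key, payload):
    if idempotency_key in _results:
        return _results[idempotency_key]  # replay: do not re-execute
    order = {"id": len(_results) + 1,
             "items": payload["items"],
             "status": "created"}
    _results[idempotency_key] = order
    return order

first = create_order("key-123", {"items": ["widget"]})
second = create_order("key-123", {"items": ["widget"]})  # agent retry
assert first is second  # same stored result, no duplicate order
```

A real implementation would also persist the key with a TTL and reject reuse of the same key with a different payload.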
Problems Solved
What “AI-Readiness” Means
Common Failure Modes Today
Agenda
Introduction: The Shift from Human → Machine Consumption
Why LLMs and agents fundamentally change API design expectations.
Examples of human-centric patterns that break agent workflows.
Pattern 1: Assessment & Readiness Scorecard
How to audit existing APIs for AI-readiness.
Scoring dimensions: discoverability, determinism, idempotency, guardrails, lifecycle maturity.
Sample scorecard matrix and benchmark scoring.
Pattern 2: Prioritization Strategy
How to choose where to start:
Key Framework References
Takeaways
Autonomous LLM agents don’t just call APIs — they plan, retry, chain, and orchestrate across multiple services.
That fundamentally changes how we architect microservices, define boundaries, and operate distributed systems.
This session delivers a practical architecture playbook for Agentic AI integration — showing how to evolve from simple request/response designs to resilient, event-driven systems.
You’ll learn how to handle retry storms, contain failures with circuit breakers and bulkheads, implement sagas and outbox patterns for correctness, and version APIs safely for long-lived agents.
You’ll leave with reference patterns, guardrails, and operational KPIs to integrate agents confidently—without breaking production systems.
Problems Solved
Why Now
What Is Agentic AI in Microservices
Agenda
Opening: The Shift to Agent-Driven Systems
How autonomous agents change microservice assumptions.
Why request/response architectures fail when faced with planning, chaining, and self-healing agents.
Pattern 1: Event-Driven Flows
Use events, queues, and replay-safe designs to decouple agents from synchronous APIs. Patterns: pub/sub, event sourcing, and replay-idempotency.
Pattern 2: Saga and Outbox Patterns
Manage long workflows with compensations. Ensure atomicity and reliability between the database and the event bus. Outbox → reliable publish; Saga → rollback on failure.
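The transactional outbox can be sketched with SQLite: the business row and its event are written in one transaction, and a separate publisher drains unpublished rows. The schema and event names are illustrative:

```python
# Transactional outbox sketch: the order row and its OrderPlaced event
# commit in a single SQLite transaction; a publisher later drains and
# marks unpublished outbox rows, giving at-least-once event delivery.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, event TEXT,
                         published INTEGER DEFAULT 0);
""")

def place_order(item):
    with db:  # atomic: the order row and its event commit together, or neither
        cur = db.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        event = json.dumps({"type": "OrderPlaced", "order_id": cur.lastrowid})
        db.execute("INSERT INTO outbox (event) VALUES (?)", (event,))

def publish_pending(send):
    rows = db.execute(
        "SELECT id, event FROM outbox WHERE published = 0").fetchall()
    for row_id, event in rows:
        send(json.loads(event))  # hand off to the message broker
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()

place_order("widget")
sent = []
publish_pending(sent.append)
assert sent == [{"type": "OrderPlaced", "order_id": 1}]
```

Because the event is committed with the data, a crash between the write and the publish loses nothing; the publisher simply picks the row up on its next pass.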
Pattern 3: Circuit Breakers and Bulkheads
Contain agent-triggered failure storms. Apply timeout, retry, and fallback policies per domain. Prevent blast-radius amplification across services.
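A minimal circuit breaker shows the containment idea: after N consecutive failures the breaker opens and fast-fails callers. A production breaker would also half-open after a cooldown, which this sketch omits:

```python
# Circuit breaker sketch: after max_failures consecutive failures the
# breaker opens and rejects calls immediately, containing a retry storm
# triggered by an agent hammering an unhealthy downstream service.
class CircuitOpen(Exception):
    """Raised when the breaker rejects a call without trying downstream."""

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn, *args):
        if self.failures >= self.max_failures:
            raise CircuitOpen("downstream marked unhealthy; failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0       # any success closes the window
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise TimeoutError("downstream timeout")
```

The fast-fail path is the bulkhead payoff: once the breaker is open, agent retries burn no downstream capacity and other domains stay unaffected.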
Pattern 4: Service Boundary Design
Shape services around tasks and domains — not low-level entities. Example: ReserveInventory, ScheduleAppointment, SubmitClaim. Responses must return reason codes + next actions for agent clarity. Avoid polymorphic or shape-shifting payloads.
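The “reason codes + next actions” guidance can be made concrete: every response tells a planning agent what happened and what to try next, in machine-readable fields. The names here are invented for illustration:

```python
# Agent-friendly response shape for a ReserveInventory-style operation:
# a stable reason_code plus explicit next_actions lets a planning agent
# branch deterministically instead of parsing prose error messages.
def reserve_inventory(sku, quantity, stock):
    available = stock.get(sku, 0)
    if available >= quantity:
        return {"status": "reserved",
                "reason_code": "OK",
                "next_actions": ["confirm_order"]}
    return {"status": "rejected",
            "reason_code": "INSUFFICIENT_STOCK",
            "next_actions": ["reduce_quantity", "backorder"],
            "available": available}

resp = reserve_inventory("sku-1", 5, {"sku-1": 2})
assert resp["reason_code"] == "INSUFFICIENT_STOCK"
```

Keeping the field set fixed (no polymorphic payloads) is what makes the response a reliable tool contract rather than something the agent must guess at.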
Pattern 5: Integrating Agent Frameworks
Connect LLM frameworks (Agentforce, LangGraph) safely to services. Use operationId as the agent tool name; enforce strict schemas. Add supervisor/planner checks between steps. For asynchronous jobs: job IDs, progress endpoints, webhooks.
Pattern 6: Infrastructure and Operations
Wrap-Up: KPIs and Guardrails for Production
Key metrics: retry rate, success ratio, agent throughput, event replay lag. Lifecycle governance: monitoring, versioning, deprecation, and sunset plans.
Key Framework References
Takeaways
Building AI isn’t just about prompting or plugging into an API — it’s about architecture. This workshop translates Salesforce’s Enterprise Agentic Architecture blueprint into practical design patterns for real-world builders.
You’ll explore how Predictive, Assistive, and Agentic patterns map to Salesforce’s Agentforce maturity model, combining orchestration, context, and trust into cohesive systems. Through hands-on modules, participants design a Smart Checkout Helper using Agentforce, Data Cloud, MCP, and RAG—complete with observability, governance, and ROI mapping.
Key Takeaways
Agentic Architecture Foundations: Understand multi-agent design principles — decomposition, decoupling, modularity, and resilience.
Pattern Literacy: Apply the Orchestrator, Domain SME, Interrogator, Prioritizer, Data Steward, and Listener patterns.
Predictive–Assistive–Agentic Continuum: Align AI maturity with business intent — from prediction and guidance to autonomous execution.
RAG Grounding & Context Fabric: Integrate trusted enterprise data via Data Cloud and MCP for fact-based reasoning.
Multi-Agent Orchestration: Implement Orchestrator + Worker topologies using A2A protocol, Pub/Sub, Blackboard, and Capability Router.
Governance & Trust: Embed privacy, bias mitigation, observability, and audit trails — design for CIO confidence.
Business Alignment: Use the Jobs-to-Be-Done and Agentic Map templates to connect AI outcomes with ROI.
Agenda
Module 1 – Enterprise Agentic Foundations
Module 2 – The Big 3 Patterns: Predictive, Assistive, Agentic
Module 3 – Predictive AI → Foresight in Systems
Module 4 – Assistive AI → Guiding Humans
Module 5 – Agentic AI → Autonomy in Action
Module 6 – Agentic Map & Jobs-to-Be-Done Framework
Module 7 – RAG & Context Fabric
Module 8 – Multi-Agent Orchestration with MCP
Module 9 – Governance & Guardrails
Module 10 – From Prototype to Production
What You’ll Leave With
Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs, and about half are design flaws. A significant number of the bugs can be found through automated testing tools, which frees you up to focus on the more pernicious design issues.
In addition to detecting the presence of common bugs, as we have done with static analysis for years, we can also imagine automating the application of corrective refactoring. In this talk, I will discuss using OpenRewrite and the Moderne CLI to fix common security issues and keep them from coming back.
In this talk we will focus on:
AI, agentic workflows, digital twins, edge intelligence, spatial computing, and blockchain trust are converging to reshape how enterprises operate.
This session introduces Enterprise Architecture 4.0—a practical, future-ready approach where architectures become intelligent, adaptive, and continuously learning.
You’ll explore the EA 4.0 Tech Radar, understand the six major waves of disruption, and learn the ARCHAI Blueprint—a structured framework for designing AI-native, agent-ready, and trust-centered systems.
Leave with a clear set of patterns and a 12-month roadmap for preparing your enterprise for the next era of intelligent operations.
⸻
KEY TAKEAWAYS
• Understand the EA 4.0 shift toward intelligent, agent-driven architecture
• Learn the top technology trends: AI, agents, edge, twins, spatial, blockchain, and machine customers
• See how the ARCHAI Blueprint structures AI-first design and governance
• Get practical patterns for agent safety, digital twins, trust, and ecosystem readiness
• Leave with a concise 12-month roadmap for implementing EA 4.0
⸻
AGENDA
– The Speed of Change
Why traditional enterprise architecture cannot support AI-native, agent-driven systems.
– The EA 4.0 Tech Radar
A 3–5 year outlook across:
• Agentic AI
• Edge intelligence
• Digital twins
• Spatial computing
• Trusted automation (blockchain)
• Machine customers
– The Six Waves of Transformation
Short deep dives into each wave with real enterprise use cases.
– The ARCHAI Blueprint
A clear architectural framework for AI-first enterprises:
• Attention & Intent Modeling
• Retrieval & Knowledge Fabric
• Capability & Context Models
• Human + Agent Co-working Patterns
• Action Guardrails & Safety
• Integration & Intelligence Architecture
This gives architects a single, unified design methodology across all emerging technologies.
– The Architect’s Playbook
Practical patterns for:
• Intelligence fabrics
• Agent-safe APIs
• Digital twin integration
• Trust & decentralized identity
• Ecosystem-ready design
– Operationalizing EA 4.0
How architecture teams evolve:
• New EA roles
• Continuous planning
• Agent governance
• EA dashboards
• The 12-month adoption roadmap
AI agents don’t behave like humans. A single prompt can trigger thousands of parallel API calls, retries, and tool chains—creating bursty load, cache-miss storms, and runaway costs. This talk unpacks how to design and operate APIs that stay fast, reliable, and affordable under AI workloads. We’ll cover agent-aware rate limiting, backpressure & load shedding, deterministic-result caching, idempotency & deduplication, async/event-driven patterns, and autoscaling without bill shock. You’ll learn how to tag and trace agent traffic, set SLOs that survive tail latency, and build graceful-degradation playbooks that keep experiences usable when the graph goes wild.
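The idempotency and deduplication pattern mentioned above can be sketched in a few lines. This is an illustrative in-memory version (the handler and all names are hypothetical); a production system would back the cache with a shared store and expire keys with TTLs:

```python
import hashlib
import json

class IdempotentHandler:
    """Caches responses by idempotency key so agent retries don't redo work."""
    def __init__(self, handler):
        self._handler = handler
        self._seen = {}   # idempotency key -> cached response
        self.calls = 0    # how many times the real handler actually ran

    def handle(self, payload, idempotency_key=None):
        # Derive a key from the payload when the client didn't send one.
        key = idempotency_key or hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if key in self._seen:
            return self._seen[key]   # duplicate retry: replay cached result
        self.calls += 1
        result = self._handler(payload)
        self._seen[key] = result
        return result

def process_order(payload):
    # Stand-in for an expensive, side-effecting backend call.
    return {"status": "charged", "amount": payload["amount"]}

api = IdempotentHandler(process_order)
first = api.handle({"amount": 42}, idempotency_key="agent-7/req-1")
retry = api.handle({"amount": 42}, idempotency_key="agent-7/req-1")
```

With this shape, a retry storm from an agent collapses into a single backend execution, which is the property that keeps bursty tool chains from multiplying cost.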
Why scaling is different with AI
Failure modes to expect (and design for)
Traffic control & fairness
Resilience patterns
Caching that actually works for AI
Async & event-driven designs
Autoscaling without bill shock
Observability & cost governance
Testing & readiness
Runbooks & playbooks
Deliverables for attendees
Learning Objectives (Takeaways)
As enterprises rush to embed large language models (LLMs) into apps and platforms, a new AI-specific attack surface has emerged. Prompt injections, model hijacking, vector database poisoning, and jailbreak exploits aren’t covered by traditional DevSecOps playbooks.
This full-day, hands-on workshop gives architects, platform engineers, and security leaders the blueprint to secure AI-powered applications end-to-end. You’ll master the OWASP LLM Top 10, integrate AI-specific controls into CI/CD pipelines, and run live red-team vs blue-team exercises to build real defensive muscle.
Bottom line: if your job involves deploying, securing, or governing AI systems, this workshop shows you how to do it safely—before attackers do it for you.
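As a taste of the kind of control built in the workshop, here is a deliberately naive input screen for injection-style phrasing. The pattern list is illustrative only; real defenses layer classifiers, output validation, and least-privilege tool access rather than relying on keyword matching:

```python
import re

# Illustrative phrases only -- attackers paraphrase freely, so a static
# list like this is a first filter, never a complete defense.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* rules",
]

def flag_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A screen like this belongs in front of the model as one layer of defense in depth, with its misses measured in red-team exercises like the ones in Module 4.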
What You’ll Learn
Who Should Attend
Takeaways
Agenda
Module 1 – The New AI Attack Surface
Module 2 – OWASP LLM Top 10 Deep Dive
Module 3 – DevSecOps Patterns for LLMs
Module 4 – Real-World Threat Simulations
Module 5 – Business Impact & Mitigation Framework
Graphs aren’t just academic—they power the backbone of real systems: workflows (Airflow DAGs), build pipelines (Bazel), data processing (Spark DAGs), and microservice dependencies (Jaeger).
This session demystifies classic graph algorithms—BFS, DFS, topological sort, shortest paths, and cycle detection—and shows how to connect them to real-world systems.
You’ll also see how AI tools like ChatGPT and graph libraries (Graphviz, NetworkX, D3) can accelerate your workflow: generating adjacency lists, visualizing dependencies, and producing test cases in seconds.
You’ll leave with reusable patterns for interviews, architecture reviews, and production systems.
Why Now
Problems Solved
Learning Outcomes
Agenda
Opening: From Whiteboard to Production
Why every large-scale system is a graph in disguise.
How workflows, microservices, and dependency managers rely on graph structures.
Pattern 1: Graphs in the Real World
Examples:
Pattern 2: Core Algorithms Refresher
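As a concrete anchor for the refresher, here is a minimal sketch of Kahn's algorithm for topological sort with cycle detection, over a plain adjacency-list dict of the kind behind Airflow or Bazel DAGs (the pipeline names are illustrative):

```python
from collections import deque

def topo_sort(graph):
    """Return a topological order of `graph`; raise ValueError on a cycle."""
    indegree = {node: 0 for node in graph}
    for deps in graph.values():
        for d in deps:
            indegree[d] = indegree.get(d, 0) + 1
    queue = deque(n for n, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for d in graph.get(node, ()):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(indegree):
        raise ValueError("cycle detected")   # leftover nodes are on a cycle
    return order

# A toy build pipeline: build must precede test, test must precede deploy.
pipeline = {"build": ["test"], "test": ["deploy"], "deploy": []}
```

The same few lines double as a scheduler core and a dependency-cycle linter, which is why this algorithm shows up in so many of the systems above.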
Pattern 3: AI-Assisted Graph Engineering
How to use AI tools to accelerate graph work:
Pattern 4: Graph Patterns in Architecture
Mapping algorithms to system design:
Pattern 5: AI Demo
Prompt → adjacency list → Graphviz/NetworkX render → algorithmic validation. Demonstrate quick prototyping workflow with AI assistance.
Wrap-Up: From Algorithms to Architectural Intuition
How graph literacy improves system reliability and scalability. Checklist and reusable templates for ongoing graph-based reasoning.
Key Framework References
Takeaways
Leadership isn’t just about making the right calls — it’s about staying steady while everything around you moves fast. From managing tension in meetings, juggling shifting priorities, and fielding last-minute requests, to navigating unclear direction or supporting a stressed-out team, software leaders are constantly pulled in multiple directions. That’s where FLOW comes in: a practical, four-part skillset designed to help you handle pressure, stay grounded in uncertainty, and show up with clarity when it matters most.
You’ll learn how to Focus with Breath, Let It Go, Own Your Part, and Weave FLOW into Your Day through real-world leadership scenarios and peer-to-peer practice designed to reflect the complexity of actual moments you face. These aren't abstract ideas — they’re trainable skills you can use right away to lead with more clarity, presence, and effectiveness. If you’ve been running on adrenaline, reacting on autopilot, or just trying to hold it all together, this workshop will help you become a more nimble, flexible leader who gets the job done — with a whole lot less effort and a lot more ease.
Software projects can be difficult to manage. Managing teams of developers can be even more difficult. We've created countless processes, methodologies, and practices but the underlying problems remain the same.
This session is full of practical tips and tricks to deal with the real-life situations any tech leader regularly encounters. Put these techniques into practice and create an enviable culture and an outstanding development team. At the same time, you'll avoid common management mistakes and pitfalls.
Ever find yourself slipping into endless distractions and losing hours on what started as a quick task? You're not alone, but it is possible to get much better at avoiding the cycle of going down the rabbit hole.
This talk explores why our brains are hardwired for distraction and gives you concrete techniques to stay focused, including baby steps in setting digital boundaries (yes, that means putting your phone down for at least five minutes!). You’ll also learn how to refocus your mind by practicing discernment, intentional breathing, and letting go—a skill that helps you release the mental clutter that pulls you off track.
Through practical exercises and real-world examples, you’ll leave ready to regain control, master your focus, and take back valuable hours of your day.
Picture this: another chaotic project push, overloaded schedules, endless revisions, and frayed tempers all in the name of meeting the impossible delivery deadline. The grind feels inevitable, but it doesn’t have to be. Let us help you to transform high-pressure environments into spaces where you and your team can thrive. Learn to manage delivery stress, navigate challenging interactions, and lead with clarity.
In this interactive session, we’ll apply the CALM framework—four practical, skill-based shifts to help you reset under pressure and lead from grounded, purposeful action:
By combining real-world examples, team activities, and a focus on letting go, you’ll shift from stressful sprints to a cohesive, high-performing environment—and walk away ready to lead friction-free.
A client once asked me to take a team that was new to REST, Agile, etc. and put together a high-profile, high-value commerce-oriented API in the period of six months. In the process of training the team and designing this API, I hit upon the idea of providing rich test coverage by mixing the Behavior-Driven Development (BDD) testing approach with REST.
In this talk, I will walk you through the idea, the process, and the remarkable outcomes we achieved. I will show you how you can benefit as well from this increasingly useful testing strategy. The approach makes it easy to produce tests that are accessible to business analysts and other stakeholders who wouldn't understand the first thing about more conventional unit tests.
Behavior is expressed using natural language. The consistent API style minimizes the upfront work in defining step definitions. In the end, you can produce sophisticated coverage, smoke tests, and more that exercise the full functionality of the API. It also produces another organizational artifact that can be used in the future to migrate to other implementation technologies.
One of the nice operational features of the REST architectural style as an approach to API design is that it allows for separate evolution of the client and server. Depending on the design choices a team makes, however, you may be putting a higher burden on your clients than you intend when you introduce breaking changes.
By taking advantage of the capabilities of OpenRewrite, we can start to manage the process of independent evolution while minimizing the impact. Code migration and refactoring can be used to transition existing clients away from older or deprecated APIs and toward new versions with less effort than trying to do it by hand.
In this talk we will focus on:
Managing API lifecycle changes by automating the migration from deprecated to supported APIs.
Discussing API evolution strategies and when they require assisted refactoring and when they don’t.
Integrating OpenRewrite into API-first development to ensure client code is always up-to-date with ease.
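As a sketch of what assisted migration can look like, here is a declarative OpenRewrite recipe in YAML that renames a deprecated client method and moves callers to a new package. The `com.example` coordinates and method names are hypothetical; `ChangeMethodName` and `ChangePackage` are real OpenRewrite recipes:

```yaml
---
type: specs.openrewrite.org/v1beta/recipe
name: com.example.MigrateOrdersApiV2
displayName: Migrate clients from the deprecated v1 orders API
recipeList:
  # Rename the deprecated method on every call site.
  - org.openrewrite.java.ChangeMethodName:
      methodPattern: "com.example.client.OrdersApi getOrderV1(String)"
      newMethodName: getOrder
  # Move imports and references from the v1 package to v2.
  - org.openrewrite.java.ChangePackage:
      oldPackageName: com.example.client.v1
      newPackageName: com.example.client.v2
```

A recipe like this can be published alongside the new API version, so client teams run the migration instead of hand-editing every call site.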
Alistair Cockburn has described software development as a game in which we choose among three moves: invent, decide, and communicate. Most of our time at No Fluff is spent learning how to be better at inventing. Beyond that, we understand the importance of good communication, and take steps to improve in that capacity. Rarely, however, do we acknowledge the role of decision making in the life of software teams, what can cause it to go wrong, and how to improve it.
In this talk, we will explore decision-making pathologies and their remedies in individual, team, and organizational dimensions. We'll consider how our own cognitive limitations can lead us to make bad decisions as individuals, and what we might do to compensate for those personal weaknesses. We'll learn how a team can fall into decision-making dysfunction, and what techniques a leader might employ to restore healthy functioning to an afflicted group. We'll also look at how organizational structure and culture can discourage quality decision making, and how leaders can swim against the tide.
Software teams spend a great deal of time making decisions that place enormous amounts of capital on the line. Team members and leaders owe it to themselves to learn how to make them well.
If only it were so easy! Leadership is a thing into which many find themselves thrown, and to which many others aspire—and it is a thing which every human system needs to thrive. Leading teams in technology organizations is not radically different from any other kind of organization, but does tend to present a common set of patterns and challenges. In this session, I’ll examine them, and provide a template for your own growth as a leader.
We’ll cover the following:
The relationship between leadership, management, and vision
Common decision-making pathologies and ways to avoid them
Strategies for communication with a diverse team
The basics of people management
How to conduct meetings
How to set and measure goals
How to tell whether this is a vocation to pursue
No, you will not master leadership in this short session, but we will cover some helpful material that will move you forward.
LLMs are incredibly powerful, but they have two problems: they only know what they read on the Internet, and they can’t actually do anything—they can only chat with you. If you want to build agentic applications that have access to the immediate, non-public context of your business, and you want your agents to be able to take actions in the world, you’ll probably need some help from the Model Context Protocol, or MCP.
And that “business context” increasingly exists in the form of real-time streaming data, often in Kafka topics. Once you’re asking your microservices to interpret natural-language prompts, then deputizing them to take actions on your behalf—this is what an agent is!—you can’t afford for them to be acting on out-of-date context. They need to remain deeply connected to the events that matter to your business.
In this presentation, we’ll get a solid overview of MCP itself, then see how you can use it to build practical multi-agent architectures powered by real-time, streaming data. We’ll see what’s possible when we stop thinking about AI as an external chatbot and start treating it as part of our streaming architecture. Agents are here, and they are powered by streams.
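To make the shape of the idea concrete, here is a schematic toy dispatcher: a server registers tools, and a caller invokes them by name with JSON arguments. This is not the real MCP SDK, and all names (the decorator, the tool, the Kafka-backed view it stands in for) are illustrative:

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function as an invokable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def latest_order_status(customer_id: str) -> dict:
    # In a real system this would read a Kafka-backed materialized view,
    # so the agent acts on fresh, streaming context rather than stale data.
    return {"customer_id": customer_id, "status": "shipped"}

def handle_call(request_json: str) -> str:
    """Dispatch a request of the form {"tool": ..., "args": {...}}."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps({"result": result})
```

The real protocol adds discovery, typed schemas, and transport details, but the core contract—named tools invoked with structured arguments—looks much like this.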
If everyone agrees with you, you’re probably not innovating, you’re just conforming faster. History’s breakthroughs rarely came from consensus; they came from heretics, hackers, and the hopelessly curious. In this talk, Michael Carducci takes aim at the myth of collective wisdom and explores why the crowd is almost always optimized for the past. Through stories of misfits who changed the world, from computing pioneers to magicians who reinvented wonder, Carducci reveals the hidden patterns of real innovation: discomfort, doubt, and persistence in the face of polite disbelief.
You’ll learn how to recognize the subtle forces that suppress new ideas, how to trust your intuition when it runs counter to consensus, and how to cultivate the curiosity and courage that real innovation demands. This is a talk for the misfits, the tinkerers, and the quietly visionary… because progress has always started at the edges.
Join us for a transformative hands-on workshop on Personal Knowledge Management (PKM), designed specifically to empower developers, architects, and knowledge workers alike to master information in this information age. Based on Tiago Forte's Building a Second Brain methodology and implemented using the Logseq PKM application, this course aims to equip attendees with the strategies, tools, and insights to streamline their knowledge management, increase productivity, and stimulate creativity. Attendees will learn to construct a personal knowledge graph, effectively annotate and reference digital assets, manage tasks, journal for success, leverage templates, and much more. The ultimate goal is to create a personalized system that enables you to instantly find or recall everything you know and learn.
Throughout this full-day, hands-on workshop, you will be guided to apply the concepts and practices learned to build your own personal knowledge graph. By the end of the session, you will have a comprehensive system to manage your knowledge effectively, enabling you to spend less time searching for notes or lost information and more time utilizing what you know and learn.
This workshop isn't just about learning new concepts or tools; it's about transforming your relationship with information and your productivity. The skills and practices you will learn are universally applicable, irrespective of the tools you use. We will show you how these methods work in Logseq, but the principles can be adapted to other platforms as well.
Join us for this transformative journey, and experience a significant shift in how you manage and utilize your knowledge, leading to increased productivity, creativity, and overall well-being in your personal and professional life.
Bring your curiosity, your questions, and your goals. We look forward to seeing you at the workshop!