UberConf

Get hands-on learning to understand and utilize Generative AI from the ground up. Work with key AI techniques and implement simple neural nets, vector databases, large language models, retrieval augmented generation, and more, all in a single-day session!

Generative AI is everywhere these days. But it has so many parts, and there is so much to understand, that it can be overwhelming and confusing for anyone not already immersed in it. In this full-day workshop, open-source author, trainer, and technologist Brent Laster will explain the concepts and workings of Generative AI from the ground up. You’ll learn about core concepts like neural networks all the way through to working with Large Language Models (LLMs), Retrieval Augmented Generation (RAG), and AI agents. Along the way, we’ll explain related concepts like embeddings, vector databases, and the current ecosystem around LLMs, including sites like Hugging Face and frameworks like LangChain. And, for the key concepts, you’ll be doing hands-on labs using Python in a preconfigured environment to internalize the learning.

Workshop Requirements

Attendees will need the following to do the hands-on labs:

  • A laptop with power
  • A GitHub account on the public GitHub.com (free tier is fine)
  • Browser

Architectural decisions are often influenced by blindspots, biases, and unchecked assumptions, which can lead to significant long-term challenges in system design. In this session, we’ll explore how these cognitive traps affect decision-making, leading to architectural blunders that could have been avoided with a more critical, holistic approach.

You’ll learn how common biases—such as confirmation bias and anchoring—can cloud judgment, and how to counteract them through problem-space thinking and reflective feedback loops. We’ll dive into real-world examples of architectural failures caused by biases or narrow thinking, and discuss strategies for expanding your perspective and applying critical thinking to system design.

Whether you’re an architect, developer, or technical lead, this session will provide you with tools to recognize and mitigate the impact of biases and blindspots, helping you make more informed, thoughtful architectural decisions that stand the test of time.

Modernizing legacy systems is often seen as a daunting task, with many teams falling into the trap of rigid rewrites or expensive overhauls that disrupt the business. The Tailor-Made Architecture Model (TMAM) offers a new approach—one that is centered on incremental evolution through design-by-constraint. By using TMAM, architects can guide legacy systems through a flexible, structured modernization process that minimizes risk and aligns with both technical and organizational needs.

In this session, we’ll explore how TMAM facilitates smooth modernization by identifying and addressing architectural constraints without resorting to drastic rewrites. We’ll dive into real-world examples of how legacy systems were evolved incrementally and discuss how TMAM provides a framework for future-proofing your systems. Through its focus on trade-offs, communication, and holistic fit, TMAM ensures that your modernization efforts not only solve today’s problems but also prepare your system for the challenges of tomorrow.

This session is ideal for architects, developers, and technical leads who are tasked with modernizing legacy systems and are looking for a structured, flexible approach that avoids the pitfalls of rigid rewrites. Learn how to evolve your legacy system while keeping it adaptable, scalable, and resilient.

The age of hypermedia-driven APIs is finally upon us, and it’s unlocking a radical new future for AI agents. By combining the power of the Hydra linked-data vocabulary with semantic payloads, APIs can become fully self-describing and consumable by intelligent agents, paving the way for a new class of autonomous systems. In this session, we’ll explore how mature REST APIs (level 3) open up groundbreaking possibilities for agentic systems, where AI agents can perform complex tasks without human intervention.

You’ll learn how language models can understand and interact with hypermedia-driven APIs, and how linked data can power autonomous decision-making. We’ll also examine real-world use cases where AI agents use these advanced APIs to transform industries—from e-commerce to enterprise software. If you’re ready to explore the future of AI-driven systems and how hypermedia APIs are the key to unlocking it, this session will give you the knowledge and tools to get started.
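To make the hypermedia idea concrete, here is a toy sketch of a client that discovers its next action from typed links in the response rather than from hard-coded URLs. All resource paths, link names, and the in-memory "server" are invented for illustration; a real Hydra API would describe operations with linked-data vocabulary rather than a bare `_links` map.

```python
# Hypothetical hypermedia sketch: each response advertises the actions the
# client may take next, so an agent navigates by link relation, not by URL.
RESOURCES = {
    "/orders/7": {
        "state": "placed",
        "_links": {"cancel": "/orders/7/cancel", "pay": "/orders/7/pay"},
    },
    "/orders/7/pay": {"state": "paid", "_links": {}},
    "/orders/7/cancel": {"state": "cancelled", "_links": {}},
}

def get(url):
    """Stand-in for an HTTP GET against the API."""
    return RESOURCES[url]

def agent_act(url, goal):
    """Follow the advertised link matching the goal, if the server offers it."""
    resource = get(url)
    link = resource["_links"].get(goal)
    return get(link) if link else resource

assert agent_act("/orders/7", "pay")["state"] == "paid"
```

Because the available transitions come from the response itself, the server can add, remove, or relocate actions without breaking the agent.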

REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.

But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.

You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.
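As a tiny illustration of the extensibility principle, consider a v2 response that only ever adds optional fields: v1 tolerant-reader clients keep working unchanged. The payload shape and field names here are hypothetical, not from any particular API.

```python
# Hypothetical sketch of additive, backward-compatible API evolution:
# v2 adds fields but never removes or renames v1 fields.

def order_v1(order):
    """Original response shape."""
    return {"id": order["id"], "total": order["total"]}

def order_v2(order):
    """Evolved response: strictly a superset of v1."""
    payload = order_v1(order)
    payload["currency"] = order.get("currency", "USD")   # new, optional
    payload["status"] = order.get("status", "pending")   # new, optional
    return payload

def v1_client_parse(payload):
    """A tolerant reader: uses only the fields it knows, ignores the rest."""
    return (payload["id"], payload["total"])

order = {"id": 42, "total": 99.5, "currency": "EUR", "status": "shipped"}
assert v1_client_parse(order_v2(order)) == (42, 99.5)  # old client unaffected
```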

This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.

Generative AI applications generally excel at specific zero-shot and one-shot tasks. However, we live in a complicated world, and we are beginning to see that today’s generative AI systems are simply not well equipped to handle the increased complexity found especially in business workflows and transactions. Traditional architectures often fall short in handling the dynamic nature and real-time requirements of these systems. We also need a way to coordinate multiple components to generate coherent and contextually relevant outputs. Event-driven architectures and multi-agent systems offer a promising solution by enabling real-time processing, decentralized decision-making, and enhanced adaptability.

This presentation proposes an in-depth exploration of how event-driven architectures and multi-agent systems can be leveraged to design and implement complex workflows in generative AI. By combining the real-time responsiveness of event-driven systems with the collaborative intelligence of multi-agent architectures, we can create highly adaptive, efficient, and scalable AI systems. This presentation will delve into the theoretical foundations, practical applications, and benefits of integrating these approaches in the context of generative AI. We will also take a look at an example of how to implement a simple multi-agent application using a library such as AutoGen, CrewAI, or LangGraph.

Large Language Models like ChatGPT are fantastic for many NLP tasks but face challenges when it comes to real-time, up-to-date knowledge retrieval. Retrieval Augmented Generation (RAG) can effectively tackle this by pulling in external data for better, more context-aware responses.

This talk dives deep into using event-driven streaming through LangStream—an open-source library—to seamlessly integrate real-time data into generative AI applications like ChatGPT. Walk away with actionable insights on how to boost your GenAI applications using event streaming and RAG.
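The core RAG retrieval step can be sketched in a few lines. This toy version uses bag-of-words vectors in place of real embeddings and an in-memory list in place of a vector store; the documents and query are invented for illustration.

```python
# Toy illustration of RAG: "embed" documents, retrieve the one most similar
# to the query, and prepend it as context to the prompt sent to the LLM.
import math
from collections import Counter

def embed(text):
    # Bag-of-words counts stand in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "langstream integrates event streams with generative ai pipelines",
    "postgres is a popular open source relational database",
]

def retrieve(query):
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

def build_prompt(query):
    return f"Context: {retrieve(query)}\n\nQuestion: {query}"

print(build_prompt("how does langstream handle event streams"))
```

In a production pipeline the embedding call, the vector store lookup, and the document updates arriving over an event stream each replace one of these stubs.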

In the world of distributed computing we live in today, asynchronous programming is taking center stage from an architectural point of view in order to provide better scalability. However, we are often cautioned against asynchronous programming due to the higher degree of complexity involved in writing asynchronous code and in managing exceptions. That caution is well founded; however, recent evolutions in Java are resetting the playing field, merging the simplicity of synchronous calls with the power of asynchronous execution.

In this workshop, we will take a deep dive into asynchronous programming in Java, learn about the three major options, and see when to choose each.

With ChatGPT taking center stage since the beginning of 2023, developers who have not had a chance to work with any form of Artificial Intelligence or Machine Learning system may find themselves intrigued by the “maze” of new terminologies; some may be eager to learn more, while perhaps a smaller group may not actually want to venture into territory that’s unknown to them.

This workshop is catered to Java developers. We start with a quick introduction to GenAI, ChatGPT, and all of the new terminology around generative AI. Then we’ll dive right into the hands-on part: how we can construct a ChatGPT-based app quickly, using state-of-the-art tools such as PgVector, which provides a vector extension to the popular open-source Postgres database.

The hands-on lab will cover:

  • Vector Search with PgVector
  • LLM providers and APIs using Langchain4j
  • Integrating with ChatGPT models
  • Generating embeddings
  • Prompt engineering
  • Building generative AI applications

Event-driven architecture (EDA) is a design principle in which the flow of a system’s operations is driven by the occurrence of events instead of direct communication between services or components. There are many reasons why EDA is a standard architecture for many moderate-to-large companies. It offers a history of events with the ability to rewind, and the ability to perform real-time data processing in a scalable and fault-tolerant way. It provides real-time extract-transform-load (ETL) capabilities for near-instantaneous processing. EDA can be used with microservice architectures as the communication channel, or with any other architecture.

In this workshop, we will discuss the prevalent principles regarding EDA, and you will gain hands-on experience performing and running standard techniques.

  • Key Concepts of Event-Driven Architecture
  • Event Sourcing
  • Event Streaming
  • Multi-tenant Event-Driven Systems
  • Producers, Consumers
  • Microservice Boundaries
  • Stream vs. Table
  • Event Notification
  • Event Carried State Transfer
  • Domain Events
  • Tying EDA to Domain Driven Design
  • Materialized Views
  • Outbox Pattern
  • CQRS (Command Query Responsibility Segregation)
  • Saga Pattern (Choreography and Orchestrator)
  • Avoiding Coupling
  • Monitoring Systems
  • Cloud-Based EDA
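Two of the staples listed above, event sourcing and materialized views, can be sketched minimally. An in-memory list stands in for a durable event log such as Kafka, and the event names and account domain are invented for illustration.

```python
# Minimal event-sourcing sketch: state is never stored directly; it is a
# materialized view rebuilt by replaying an append-only event log.
from collections import defaultdict

log = []  # append-only event log (stands in for a durable stream)

def publish(event_type, payload):
    log.append({"type": event_type, **payload})

def materialize_balances():
    """Replay the log from the beginning to rebuild current state."""
    balances = defaultdict(int)
    for e in log:
        if e["type"] == "deposited":
            balances[e["account"]] += e["amount"]
        elif e["type"] == "withdrawn":
            balances[e["account"]] -= e["amount"]
    return dict(balances)

publish("deposited", {"account": "a1", "amount": 100})
publish("withdrawn", {"account": "a1", "amount": 30})
assert materialize_balances() == {"a1": 70}  # view derived purely from events
```

Because the log is the source of truth, "rewinding" is just replaying fewer events, and a new view (e.g., per-day totals) is just a different replay function.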

Microservices architecture has become a buzzword in the tech industry, promising unparalleled agility, scalability, and resilience. Yet, according to Gartner, more than 90% of organizations attempting to adopt microservices will fail. How can you ensure you're part of the successful 10%?

Success begins with looking beyond the superficial topology and understanding the unique demands this architectural style places on the teams, the organization, and the environment. These demands must be balanced against the current business needs and organizational realities while maintaining a clear and pragmatic path for incremental evolution.

In this session, Michael will share some real-world examples, practical insights, and proven techniques to balance both the power and complexities of microservices. Whether you're considering adopting microservices or already on the journey and facing challenges, this session will equip you with the knowledge and tools to succeed.

You are ready to level up your skills. Or you've already been playing accidental architect and need a structured plan to be designated as one. Well, your wait is over.

From the author of O'Reilly's best-selling “Head First Software Architecture” comes a full-day workshop that covers all you need to start thinking architecturally. From the difference between design and architecture and a modern description of architecture, to the skills you'll need to develop to become a successful architect, this workshop will be your one-stop shop.

We'll cover several topics:

  • What is architecture, what is design, and the spectrum between them
  • The modern description of architecture
  • Understanding architectural characteristics
  • A brief coverage of some architectural styles
  • Other skills you need to have to become a successful architect

This is an exercise-heavy workshop—so be prepared to put on your architect hat!

Microservices have fundamentally changed the way we develop and deploy applications. Everything from team topologies, to DevOps to observability—everything changed, and for the better.

However, it's not all rainbows and unicorns. Operationalizing microservices is hard. Microservices encourage WET (write everything twice) to ensure that services are as decoupled from each other as possible. But how does that work when we have to deal with cross-cutting concerns that we need for every service?

Enter the service mesh. Service meshes like Istio allow us to “slot in” cross-cutting architectural concerns within a Kubernetes cluster, letting our services focus on solving actual business concerns.

In this fast-paced session, we will blitz through what Istio is, how it works, and what facilities it offers to DRY out your microservices. Come see how Istio can make your cluster programmable and application-aware.

As leaders, we're often called upon to solve problems. Not just technical problems; business and organizational problems too. The problem is, many of those cannot be solved.

Here are a few examples of organizational problems that cannot be solved: plan vs. adapt, go quick vs. do it right, empower vs. align, hear people out vs. make a decision.

In fact, they aren't problems at all; they're tensions. They will never go away and can only be managed. Any effort to find “answers” to them only serves to surface more frustrations than fixes.


Understanding the difference between problems to solve and tensions to manage is crucial to effective leadership.


This session is designed for leaders seeking to improve their business performance and organizational health. It's not about implementing new frameworks or tools, it's about learning to manage tension better.

Most leaders gain their position based on their expertise. That makes sense. However, it is that same expertise that often traps them into less effective leadership patterns. The expertise that got you into leadership is not the expertise you need for leadership.

Yet letting go of your expertise has its own challenges. It not only disconnects you from the real work; it often leaves your teams aimless. Letting go is only half the battle.


Effective leadership is not about stepping forward or back;
it's about stepping up.


This session guides leaders to better balance their expertise with openness by understanding how their focus is impacting their effectiveness. It further provides space for leaders to explore alternatives to re-blend their focus in ways to improve their impact as a leader.

Key Topics:

  1. Learning how expertise is necessary and yet traps us in bad habits.
  2. Understanding the three lenses of leadership: do, lead and coach.
  3. Discovering that effective leadership is about blending, not shifting.
  4. Blending your focus as a leader for improved impact.

On the one hand, Machine Learning (ML) and AI systems are just more software and can be treated as such in our development efforts. On the other hand, they behave very differently, and our capacity to test, verify, validate, and scale them requires a different set of perspectives and skills.

This presentation will walk you through some of these unexpected differences and how to plan for them. No specific background in ML/AI is required, but you are encouraged to be generally aware of these fields. The AI Crash Course would be a good start.

We will cover:

  • Matching Capabilities to Needs
  • Performance Tuning
  • Vector Databases
  • Testing Strategies
  • MLOps/AIOps Techniques
  • Evolving these Systems Over Time

We have seen how Retrieval Augmented Generation (RAG) systems can help prop up Large Language Models (LLMs) to avoid some of their worst tendencies. But that is just the beginning. The cutting-edge, state-of-the-art systems are multimodal and agentic, involving additional models, tools, and reusable agents to break problems down into separate pieces, transform and aggregate the results, and validate the results before returning them to the user.

Come get introduced to some of the latest and greatest techniques for maximizing the value of your LLM-based systems while minimizing the risk.

We will cover:

  • The LangChain and LlamaIndex Frameworks
  • Naive and Intermediate RAG Systems
  • Multimodal Models (Mixing audio, text, images, and videos)
  • Chatbots
  • Summarization Services
  • Agent Protocols
  • Agent Design Patterns

Application Programming Interfaces (APIs) are, by definition, directed at software developers. They should, therefore, strive to be useful and easy to use for developers. However, by engaging design elements from the Web, they can be useful in much larger ways than simply serializing states in JSON.

There is no right or perfect API design. There are, however, elements and choices that induce certain properties. This workshop will walk you through various approaches to help you find the developer experience and long-term strategies that work for you, your customers and your organization.

We will cover:

  • The Web architecture as the basis of our APIs
  • The REST architectural style and its motivations
  • The Richardson Maturity Model as a way of discussing design choices and induced properties
  • The implications of content negotiation and representation choices such as JSON or JSON-LD
  • The emergence of metadata approaches to describing and using APIs, such as OpenAPI and Hydra
  • Security considerations
  • Client technologies
  • API management approaches

It's not just you. Everyone is basically thinking the same thing: When did this happen?

We've gone from slow but steady material advances in machine learning to a seeming explosion and ubiquity of AI-based features, products, and solutions. Even more, we're all expected to know how to adopt, use, and think about all of these magical new capabilities.

Equal parts amazing and terrifying, what you need to know about these so-called “AI” solutions is much easier to understand and far less magical than it may seem. This is your chance to catch up with the future and figure out what it means for you.

In this two-part presentation, we will cover why this time it is different, except where it isn't. I won't assume much background and won't discuss much math.

  • A brief history of AI
  • Machine Learning
  • Deep Learning
  • Deep Reinforcement Learning
  • The Rise of Generative AI
  • Large Language Models and RAG
  • Multimodal Systems
  • Bias, Costs, and Environmental Impacts
  • AI Reality Check

At the end of these sessions, you will be conversant with the major topics and understand better what to expect and where to spend your time in learning more.

Modern application observability involves tracking key metrics and tracing the flow of an application, even across service boundaries. Spring Boot 3 introduced some powerful metrics and tracing capabilities based on Micrometer to open a window into your application's inner workings.

Among the things you might want to keep an eye on in your Generative AI applications are how many interactions and how much time is spent with vector stores and AI provider APIs and, of course, how many tokens are being spent by your application. And being able to trace the flow of prompts, data, and responses through your application can help identify problems and bottlenecks.

Great news! Spring AI comes equipped to record metrics and tracing information through Micrometer. In this session, you'll learn how to put Spring AI observability to work for you. You'll learn about the metrics it exposes, as well as the keys you can use to create dashboards and traces that open a window into your Generative AI applications.

Spring Security has long been a powerful guard to place around your Spring applications, providing authentication, authorization, and many more concerns around keeping your application secure.

As time has progressed, Spring Security has evolved to provide even more capabilities, and it has applied some self-improvement to make working with Spring Security even easier. That is to say, the way you configure and apply Spring Security today has changed dramatically from its early XML-oriented approach and is even different now from some of the more recent Java-based configuration strategies.

In this example-driven session, we'll explore the latest and greatest that Spring Security has to offer, with an emphasis on how to apply security aspects to your applications with the latest configuration styles supported by Spring Security. You'll see how security is enabled in modern Spring applications using the Lambda DSL configuration approach, the preferred way to configure Spring Security and the ONLY way to configure Spring Security 7.

In this session, we'll cover several useful prompt engineering techniques as well as some emerging patterns that are categorized within the “Agentic AI” space and see how to go beyond simple Q&A to turn your LLM of choice into a powerful ally in achieving your goals.

At its core, Generative AI is about submitting a prompt to an LLM-backed API and getting some response back. But within that interaction there is a lot of nuance, particularly with regard to the prompt itself.

It's important to know how to write effective prompts, choosing the right wording and being clear about your expectations, to get the best responses from an LLM. This is often called “prompt engineering” and includes several patterns and techniques that have emerged in the Gen AI space.
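Two of those patterns, few-shot examples and explicit output-format instructions, can be sketched as plain prompt assembly. The classification task, reviews, and labels below are invented for illustration; the point is the prompt structure, not any particular model.

```python
# Prompt-engineering sketch: worked examples (few-shot) plus an explicit
# output-format instruction steer the model toward consistent answers.

FEW_SHOT = [
    ("The food was amazing", "positive"),
    ("Service was painfully slow", "negative"),
]

def build_prompt(review):
    lines = [
        "Classify the sentiment of a review as 'positive' or 'negative'.",
        "Answer with a single word.",  # explicit output-format instruction
        "",
    ]
    for text, label in FEW_SHOT:  # worked examples steer the model
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

print(build_prompt("Great value for the price"))
```

The trailing "Sentiment:" is itself a small technique: it invites the model to complete the pattern rather than open a conversation.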

Since ChatGPT rocketed the potential of generative AI into the collective consciousness, there has been a race to add AI to everything. Every product owner has been salivating at the possibility of new AI-powered features. Every marketing department is chomping at the bit to add a “powered by AI” sticker to the website. For the average layperson playing with ChatGPT's conversational interface, it seems easy; however, integrating these tools securely, reliably, and in a cost-effective manner requires much more than simply adding a chat interface. Moreover, getting consistent results from a chat interface is more art than science. Ultimately, the chat interface is a nice gimmick to show off capabilities, but serious integration of these tools into most applications requires a more thoughtful approach.

This is not another “AI is Magic” cheerleading session, nor an overly critical analysis of the field. Instead, this session looks at a number of valid use cases for the tools and introduces architecture patterns for implementing them. Throughout, we will explore the trade-offs of the patterns as well as the application of AI in each scenario. We'll explore use cases from simple, direct integrations to the more complex involving RAG and agentic systems.

Although this is an emerging field, the content is not theoretical. These are patterns that are being used in production, both in Michael's practice as a hands-on software architect and beyond.

Architects must maintain their breadth, and this session will build on that to prepare you for the inevitable AI-powered project in your future.

Join us for a hands-on workshop, GitOps: From Commit to Deploy, where you’ll explore the entire lifecycle of modern application deployment using GitOps principles.

We’ll begin by committing an application to GitHub and watching as your code is automatically built through Continuous Integration (CI) and undergoes rigorous unit and integration tests. Once your application passes these tests, we’ll build container images that encapsulate your work, making it portable, secure, and deployment-ready. Next, we’ll push these images to a container registry, preparing for deployment.

Next, you will learn how to sync your application in a staging Kubernetes cluster using ArgoCD (CD), a powerful tool that automates and streamlines the deployment process. Finally, we’ll demonstrate a canary deployment in a production environment with ArgoCD, allowing for safe, gradual rollouts that minimize risk.
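The canary logic such a rollout automates can be sketched as a loop over traffic weights with a rollback guard. The step weights and error threshold below are illustrative only, not ArgoCD or Argo Rollouts defaults, and the error-rate probe is a stand-in for real metrics analysis.

```python
# Canary-deployment sketch: shift traffic to the new version in steps and
# roll back if the observed error rate exceeds a threshold at any step.

STEPS = [10, 25, 50, 100]   # percent of traffic routed to the canary
MAX_ERROR_RATE = 0.05       # illustrative rollback threshold

def canary_rollout(observe_error_rate):
    for weight in STEPS:
        if observe_error_rate(weight) > MAX_ERROR_RATE:
            return "rolled back at %d%%" % weight
        # a real controller would pause here and re-check metrics between steps
    return "promoted"

assert canary_rollout(lambda w: 0.01) == "promoted"
assert canary_rollout(lambda w: 0.20 if w >= 50 else 0.0) == "rolled back at 50%"
```

In the workshop, the controller playing this role is ArgoCD-managed tooling watching real metrics rather than a lambda.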

By the end of this workshop, you’ll have practical experience with the tools and techniques that perform GitOps deployments, so you can take this information and set up your deployments at work.

  • Creating Your Application
  • Running Locally
  • Proper Commits
  • Security Scans
  • Safe Image Creation
  • Image Publishing
  • ArgoCD and Syncing
  • Canary Deployments
Workshop Requirements

  1. Git
  2. GitHub account
  3. Editor of your choice

Join us for a hands-on workshop, GitOps: From Commit to Deploy, where you’ll explore the entire lifecycle of modern application deployment using GitOps principles.

We’ll begin by committing an application to GitHub and watching as your code is automatically built through Continuous Integration (CI) and undergoes rigorous unit and integration tests. Once your application passes these tests, we’ll build container images that encapsulate your work, making it portable, secure, and deployment-ready. Next, we’ll push these images to a container registry preparing for deployment

Next, you will learn how to sync your application in a staging Kubernetes cluster using ArgoCD (CD), a powerful tool that automates and streamlines the deployment process. Finally, we’ll demonstrate a canary deployment in a production environment with ArgoCD, allowing for safe, gradual rollouts that minimize risk.

By the end of this workshop, you’ll have practical experience with the tools and techniques that perform GitOps deployments, so you can take this information and set up your deployments at work.

  • Creating Your Application
  • Running Locally
  • Proper Commits
  • Security Scans
  • Safe Image Creation
  • Image Publishing
  • ArgoCD and Syncing
  • Canary Deployments
Workshop Requirements

  1. Git
  2. GitHub account
  3. Editor of your choice

Microservices have emerged as both a popular and powerful architecture, yet the promised benefits overwhelmingly fail to materialize. Industry analyst Gartner estimates that “More than 90% of organizations who try to adopt microservices will fail…” If you hope to be part of that successful 10%, read on…

Why is the failure rate so high?

Succeeding with microservices requires optimizing for organization-level goals with all teams “rowing in the same direction” which is easier said than done. While the microservices architecture may have been well-defined at some point in history, today the word “microservices” is used as an umbrella to refer to a vast and diverse set of coarse-to-fine grained distributed systems. In short, thanks to the phenomenon of semantic diffusion, the term “microservices” means many different things to many different people.

The promised benefits of microservices (e.g. extremely high agility, scalability, elasticity, deployability, testability, evolvability, etc.) are achieved through a highly contextual hyper-optimization of both the logical architecture of the system as well as optimization of technical practices, tooling, and the organization itself. Operating based on someone else's reference architecture (optimized for their problem space, not yours) or attempting to apply the architectural topology without the necessary team and process maturity is often just laying the foundation for yet another microservice mega disaster.

There simply is no one-size-fits-all approach to component granularity, optimal inter-service communication, data granularity, data replication and aggregation, minimizing or managing distributed transactions, asynchronous event orchestration or choreography, technology and platform selection/management, along with so many more decisions that are highly contextual. In short, adopting this architecture requires correctly making hundreds of decisions on a path that is fraught with traps and landmines. This is one of the hardest architectures to execute well. In markets with ever-increasing competitiveness, slow and costly trial-and-error is untenable.

Mastering Microservices

Mastering microservices requires that architects and developers understand the numerous tradeoffs (and their consequences) to arrive at an architecture that is both within reach of the development teams and organization and capable of delivering the promised (and often necessary) -ilities. This requires a deep understanding of the terrain the architect will be exploring, extensive hands-on practice, and the latest tools, patterns, and practices; all of which this workshop is designed to deliver.

If you or your organization hopes to venture down this path, or if you already have but the organization and system are struggling to deliver the necessary -ilities, or if you just want a better understanding of this complex yet powerful architecture category, this workshop is for you.

If you are getting tired of the appearance of new types of databases… too bad. We are increasingly relying on a variety of data storage and retrieval systems for specific purposes. Data does not have a single shape, and indexing strategies that work for one are not necessarily good fits for others. So after hierarchical, relational, object, graph, column-oriented, document, temporal, append-only, and everything else, get ready for Vector Databases to assist in the systematization of machine learning systems.

This will be an overview of the benefits of vector databases as well as an introduction to the major players.

We will focus on open source versus commercial players, hosted versus local deployments, and the attempts to add vector search capabilities to existing storage systems.

We will cover:

  • A brief overview of vectors
  • Why vectors are so important to machine learning and data-driven systems
  • Overview of the offerings
  • Adding vector search to other systems
  • Sample use cases shown with one of the key open source engines
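
The topics above all rest on one primitive: similarity search over embedding vectors. As a rough, hypothetical sketch (plain Java, no actual vector database), brute-force nearest-neighbor search by cosine similarity looks like the following; real engines replace the linear scan with approximate indexes such as HNSW or IVF:

```java
import java.util.List;

// Minimal sketch of the core vector-database operation: find the stored
// vector most similar to a query vector, by cosine similarity.
public class VectorSearch {
    // Cosine similarity: dot(a, b) / (|a| * |b|)
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the index of the stored vector most similar to the query
    // (a linear scan; real engines index this).
    static int nearest(List<double[]> stored, double[] query) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < stored.size(); i++) {
            double s = cosine(stored.get(i), query);
            if (s > bestScore) { bestScore = s; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
        List<double[]> docs = List.of(
            new double[]{0.9, 0.1, 0.0},    // e.g. "dog"
            new double[]{0.0, 0.2, 0.95},   // e.g. "car"
            new double[]{0.85, 0.2, 0.05}); // e.g. "puppy"
        double[] query = {0.05, 0.1, 0.9};  // closest to "car"
        System.out.println("nearest index: " + nearest(docs, query));
    }
}
```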

Statistically speaking, you are most probably an innovator. Innovators actively seek out new ideas, technologies, and mental models by reading books, interacting with a broader social circle, and attending conferences. While you may leave this conference with the seed of an idea that has the potential to transform your teams, products, and organization, the battle has only begun. As a potential change agent, you are ideally positioned to conceive of powerful new ideas, yet you may be powerless to drive the change that leads to adoption. Your success requires the innovation to diffuse outward and become adopted. This is the art of innovation.

Fortunately, there has been over a century of study on how innovations go from novel idea to mainstream adoption. The art of innovation is difficult but tractable, and this session illuminates the path. You will get to the heart of why some innovations succeed while others fail, as well as how to tip the scales in your favor. You'll leave armed with the tools to become a powerful change agent in your career and life and, ultimately, a more powerful and influential person.

IDEs have provided ways to refactor code for a long time now. In spite of their effectiveness, that journey is arduous and time-consuming. Reluctance to refactor increases the cost of development. However, refactoring for the sake of doing so can lead to greater productivity loss as well.

In this presentation we will use a data-driven approach. We will take examples of code, measure code quality, use automated code transformation tools to refactor the code, and then measure the quality once again to see how much we have improved. This helps us not only refactor faster but also see the benefits realized, motivating us to move faster with greater efficiency.

Software vulnerability is a huge concern. What's lurking in the code is a question that keeps passionate programmers up at night. Is there a memory leak? What about a race condition? What about security issues? Are we violating purity of functions when we're not supposed to? We have to maintain code that others have written, and it's not always easy or quick to detect those defects ticking away in the code.

In this presentation we will use AI-based tools to detect issues in code, using multiple examples, apply automated fixes, and reason about our approach and the changes.

We rely heavily on polymorphism when programming with the object-oriented paradigm. That has served us really well, especially to create extensible code. However, like any tool and technique, there are times when it may not be the right choice. Java now provides an alternative that is useful in those select situations—data-oriented programming.

In this presentation we will start with an example where the otherwise highly useful object hierarchy and polymorphism appear to be a misfit and discuss how data-oriented programming solves the problem more elegantly. You will get a good understanding of when to use each of these and how to intermix them in your applications.

Dividing a large problem into subproblems that are scheduled to run on different threads is an often-used solution. We've used executors and the fork/join pool for such problems in the past. These solutions, in spite of being very powerful, have significant limitations.

In this presentation we will start with those solutions, discuss the issues, and learn how structured concurrency, introduced in Java 21, can help solve such problems more effectively and elegantly.
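
For context, the executor-based starting point (and the pain points structured concurrency is designed to close) can be sketched as follows; the subtasks are illustrative stand-ins:

```java
import java.util.concurrent.*;

// A sketch of the classic executor-based subdivision: two subtasks
// submitted to a pool, results joined by the caller. Note the pain
// points: if one subtask fails, the other keeps running, and
// cancellation and error handling are entirely manual.
public class SplitWork {
    static int fetchPartA() { return 40; } // stand-ins for real subtasks
    static int fetchPartB() { return 2; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> a = pool.submit(SplitWork::fetchPartA);
            Future<Integer> b = pool.submit(SplitWork::fetchPartB);
            // Joining is manual; a failure in 'a' does not cancel 'b'.
            System.out.println("combined: " + (a.get() + b.get()));
        } finally {
            pool.shutdown();
        }
    }
}
```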

Threads are lightweight, but they do not scale well. That's one of the reasons we have been focused on elastic capabilities in the cloud. Unfortunately that has an impact both on the environment and on your company's wallet.

In this presentation we will learn how virtual threads reduce those impacts and help us create scalable applications with minimal changes to code.
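
As a minimal sketch of the idea (assuming Java 21 or later), thousands of blocking tasks can each get their own virtual thread; the same code with a platform-thread pool would need far more memory per thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: run n blocking tasks, one virtual thread each, and count
// how many complete. Requires Java 21+.
public class VirtualThreadsDemo {
    static int runAll(int n) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                exec.submit(() -> {
                    // Blocking here parks the virtual thread cheaply
                    // instead of pinning an OS thread.
                    try { Thread.sleep(10); } catch (InterruptedException e) { }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println("completed: " + runAll(10_000));
    }
}
```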

One of the coolest aspects of Java is that newer features do not live in isolation; they interplay quite nicely with each other. A place where this is clearly evident is the synergy between records, sealed classes, and pattern matching.

In this presentation we will focus on pattern matching and how its power is enhanced by records and sealed classes. The details presented will help you make the best use of all three features in your own applications.
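
A minimal sketch of that synergy, assuming Java 21 or later: a sealed hierarchy of records deconstructed with record patterns in an exhaustive switch, with no default branch needed because the compiler knows every permitted subtype:

```java
// Sketch (Java 21+): sealed interface + records + record patterns.
public class Shapes {
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    static double area(Shape s) {
        // Exhaustive over the sealed hierarchy — no default branch.
        return switch (s) {
            case Circle(double r)         -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // prints 12.0
    }
}
```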

Functional programming is known for its benefits, from ease of reasoning about code to safe parallelization. However, exception handling is an area where things begin to fall apart.

In this presentation we will look at some prudent ways to tackle exceptions in functional style code.
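
One prudent direction (a sketch, not necessarily the presenter's exact technique) is to turn a partial, throwing function into a total one that returns a value representing failure, so stream pipelines stay composable instead of blowing up midway:

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

// Sketch: wrap a throwing call in a function that returns Optional,
// making failure a value instead of a control-flow jump.
public class SafeParse {
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty(); // failure becomes a value
        }
    }

    public static void main(String[] args) {
        List<Integer> ok = List.of("1", "oops", "3").stream()
            .map(SafeParse::parse)
            .flatMap(Optional::stream) // drop the failures (Java 9+)
            .collect(Collectors.toList());
        System.out.println(ok); // prints [1, 3]
    }
}
```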

Design patterns have been around for a long time and we use them extensively in Java. Yet, as languages evolve, so does the use of patterns.

In this presentation we will learn some anti-patterns and patterns and make use of modern language features to create more elegant and maintainable code.

Influence is the essence of leadership. But what shapes our ability to influence?

This session explores multiple sources of leadership power including cultural, positional, prestige, and personal. It further helps leaders tune their own power for more impactful results. Understanding your default power as a leader and learning to harness a balanced power based on respect is the key to effective leadership at every level.


Titles grant authority to influence, but the best leaders influence others regardless of their title.


This session walks through the tension between authority and respect, influence and manipulation, empowerment and alignment, reflection and action, discussions and decisions, and more. Through discussion, we'll uncover practical strategies for better balancing your leadership power.

Key Topics:

  1. Explore the components of a leader's default power.
  2. Understand how your title of authority may hinder your development as a leader.
  3. Learn to hone and balance your power to improve your leadership.

Picture this: another chaotic project push, overloaded schedules, endless revisions, and frayed tempers all in the name of meeting the impossible delivery deadline. The grind feels inevitable, but it doesn’t have to be. Let us help you to transform high-pressure environments into spaces where you and your team can thrive. Learn to manage delivery stress, navigate challenging interactions, and lead with clarity.

In this interactive session, we’ll apply the CALM framework—four practical, skill-based shifts to help you reset under pressure and lead from grounded, purposeful action:

  • Center with Breath — pause by intentional breathing before reacting, even when the heat is on
  • Acknowledge and Own — move from blaming to clarity and accountability
  • Let It Go — learn to let go by paying attention to thoughts
  • Make It a Habit — weave the above practices into tiny daily routines

By combining real-world examples, team activities, and a focus on letting go, you’ll shift from stressful sprints to a cohesive, high-performing environment—and walk away ready to lead friction-free.

Ever feel like you're spinning your wheels on tasks that barely move the needle? It's time to flip the script with the 80/20 rule—where 20% of your efforts drive 80% of your results. This talk provides background on how the 80/20 rule transforms productivity at work and home, helping you identify and focus on the tasks that truly matter while minimizing distractions, low-impact work, and yes—no more chasing squirrels! You'll learn practical strategies to prioritize with pinpoint clarity, eliminate time-wasters, and let go of the stuff that doesn’t actually deserve your time or energy. Letting go is a skill—one you’ll start applying here to reclaim focus and get results.

With real-world examples, hands-on techniques, and a touch of humor, you'll walk away ready to do less but achieve so much more. Because why work harder when you can work smarter?

Ever find yourself slipping into endless distractions and losing hours on what started as a quick task? You're not alone, but it is possible to get much better at avoiding the cycle of going down the rabbit hole.

This talk explores why our brains are hardwired for distraction and gives you concrete techniques to stay focused, including baby steps in setting digital boundaries (yes, that means putting your phone down for at least five minutes!). You’ll also learn how to refocus your mind by practicing discernment, intentional breathing, and letting go—a skill that helps you release the mental clutter that pulls you off track.

Through practical exercises and real-world examples, you’ll leave ready to regain control, master your focus, and take back valuable hours of your day.

Tired of feeling trapped by too many demands and fearful of hearing—or saying—“no”? This interactive workshop dives deep into the transformative power of “no”—both in confidently asserting boundaries and receiving rejection with resilience. You'll learn how to respectfully say “no” to protect your time, priorities, and integrity while maintaining strong relationships.

On the flip side, you’ll gain tools to handle “no” without spiraling—by actually learning to let go. Letting go is a skill, not a mindset, and in this workshop you’ll practice using it in the moments when pressure builds—when your calendar’s packed, the ask feels unreasonable, or the “no” hits harder than expected. Through real-world scenarios, small group exercises, and practicing letting go techniques, you’ll leave equipped with tools to say and receive “no” effectively.

GitHub Copilot is a popular AI assistant that helps software developers more easily create content, get answers to coding-related questions, and handle many of the boilerplate tasks of software development. But it can also do much more in the areas where it can be used. Join a Copilot expert (and author of the upcoming “Learning GitHub Copilot” book from O'Reilly) for a quick overview of additional tips and tricks to help you make the most of this AI assistant.

Most users know GitHub Copilot can help with the basics of code generation, creating test cases, documentation, and so on. But because the tool is AI-based, there's a lot more it can help you with in these areas simply by asking it and giving it the right prompts. In this session, we'll look at ways to go beyond the basics of these tasks to create results that are more deeply and widely usable. We'll also look at ways to compensate when Copilot does not have the most recent information or needs to pick up more relevant context.

Java’s evolution is remarkable, and the leap from JDK 17 to the current version brings a wealth of powerful features to elevate your projects. Join us for an exciting session to explore select JEPs (Java Enhancement Proposals) introduced up to today, diving into their use cases and practical benefits for your work or open-source initiatives.

What You’ll Learn:
How to enable and utilize advanced Java features introduced in JDK 23.
Real-world demonstrations of cutting-edge updates, including:

  • Stream Gatherers: Handle complex data streams with ease.
  • Statements Before super(): Test invariants without constructing objects.
  • Stable Values
  • Unnamed Variables and Parameters: Enhance code readability and maintainability.
  • Launch Multi-File Source-Code Programs: Rapidly prototype with multiple source files.
  • Implicitly Declared Classes & Enhanced Main Methods: Streamline application development.
  • Updates on switch Expressions: We will discuss where we are with pattern matching as well as dealing with primitives
  • I may also sneak in something about my never-ending love for SimpleWebServer.

Why Attend?
Learn how to advocate for and implement the latest Java tools and practices in your organization. Gain the knowledge you need to sell the value of next-generation Java and stay at the forefront of software development.

Containers are everywhere. Of course, a large part of the appeal of containers is the ease with which you can get started. However, productionizing containers is a wholly different beast. From orchestration to scheduling, containers offer significantly different challenges than VMs.

This is particularly true of security: securing and hardening VMs is very different from securing containers.

In this two-part session, we will see what securing containers involves.

We'll be covering a wide range of topics, including:

  • Understanding cgroups and namespaces
  • What it takes to create your own container technology, as a basis for understanding how containers really work
  • Securing the build and runtime
  • Secrets management
  • Shifting left with security in mind

Containers are everywhere. Of course, a large part of the appeal of containers is the ease with which you can get started. However, productionizing containers is a wholly different beast. From orchestration to scheduling, containers offer significantly different challenges than VMs.

This is particularly true of security: securing and hardening VMs is very different from securing containers.

In this two-part session, we will see what securing containers involves.

We'll be covering a wide range of topics, including:

  • Understanding cgroups and namespaces
  • What it takes to create your own container technology, as a basis for understanding how containers really work
  • Securing the build and runtime
  • Secrets management
  • Shifting left with security in mind

In this session we will discuss what modular monoliths are, what they bring to the table, and how they offer a great middle ground between monoliths and distributed architectures like microservices.

Monoliths get a bad rep. Experienced software developers have seen one too many monoliths devolve into a big ball of mud, leaving everyone frustrated, with an itch to do a “rewrite”. But monoliths have their pros! They are usually simpler, easier to understand, and faster to build and debug.

On the other side of the spectrum you have microservices, which offer scale, both technically and organizationally, as well as the badge of honor of being “the new cool kid on the block”. But productionizing microservices is HARD.

Why can't we have our cake and eat it too? Turns out, we can. In this session we will explore the modular monolith—all the upsides of a monolith with none of the downsides of distributed architectures. We'll see what it means to build a modular monolith, and how that differs from a traditional layered architecture. We will discuss how we can build architectural governance to ensure our modules remain decoupled. Finally we'll see how our modules can communicate with one another without violating modularity.

By the end of this session you'll walk away with a greater appreciation for the monolith, and see how you can leverage this within your system architecture.

In this session, we will discuss architectural concerns regarding security. How do microservices communicate with one another securely? What are some of the checklist items that you need?

  • Valet Key
  • mTLS and Sidecars
  • Public Key Infrastructure (PKI)
  • SASL
  • HashiCorp Vault
  • SBOMs
  • JSON Web Tokens

We take a look at another facet of architectural design: how we develop and maintain transactions in an architecture. Here we will discuss some common patterns for transactions:

  • Two-Phase Commit (2PC)
  • The Problem with 2PC
  • Using Event-Driven Architecture to manage transactions
  • Transactional Outbox
  • Compensating Transaction
  • Optimistic vs. Pessimistic Locking
  • TCC (Try-Confirm/Cancel)
  • Saga Orchestrator
  • Saga Choreography
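
To make one of these patterns concrete, here is a hypothetical sketch of optimistic locking, not tied to any particular framework: read a version, compute, and commit only if the version is unchanged, retrying on conflict instead of holding a lock pessimistically:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of optimistic locking: conflicts are detected at commit time
// via an unchanged version/reference, and the writer retries.
public class Account {
    record State(long version, long balance) {}

    final AtomicReference<State> state = new AtomicReference<>(new State(0, 100));

    // Optimistic update: loop until our read-compute-commit wins.
    boolean deposit(long amount) {
        while (true) {
            State cur = state.get();
            State next = new State(cur.version() + 1, cur.balance() + amount);
            if (state.compareAndSet(cur, next)) return true; // commit succeeded
            // else: another writer got in first — re-read and retry
        }
    }

    public static void main(String[] args) {
        Account acct = new Account();
        acct.deposit(50);
        System.out.println("balance: " + acct.state.get().balance());
    }
}
```

In a database the same idea is usually expressed as `UPDATE … WHERE version = ?`, with a retry when zero rows are affected.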

This session will focus on data governance and making data available within your enterprise. Who owns the data, how do we obtain the data, and what does governance look like?

  • CQRS
  • Materialized Views
  • Warehousing vs Data Mesh
  • OLAP vs OLTP
  • Pinot, Kafka, and Spark
  • Business Intelligence
  • Making Data Available for ML/AI

Join us for an in-depth exploration of cutting-edge messaging styles in your large domain.

Here, we will discuss the messaging styles you can use in your business.

  • Event Sourcing
  • Event-Driven Architecture
  • Claim Check
  • Event Notification
  • Event-Carried State Transfer
  • Domain Events

In this example-driven session, we'll review several tips and tricks to make the most out of your Spring development experience. You'll see how to apply the best features of Spring and Spring Boot, including the latest and greatest features of Spring Framework 6.x and Spring Boot 3.x with an eye to what's coming in Spring 7 and Boot 4.

Spring has been the de facto standard framework for Java development for nearly two decades. Over the years, Spring has continued to evolve and adapt to meet the ever-changing requirements of software development. And for nearly half that time, Spring Boot has carried Spring forward, capturing some of the best Spring patterns as auto-configuration.

As with any framework or language that has this much history and power, there are just as many ways to get it right as there are to get it wrong. How do you know that you are applying Spring in the best way in your application?

Workshop Requirements

You'll need…

  • A Java IDE of your choice. I'll be using IntelliJ, but you are welcome to use Spring Tools, Netbeans, or whatever you prefer for Java development. (Spring Tools–either the Eclipse or VSCode version–or IntelliJ is recommended, though).
  • Java 17 or higher. I recommend using SDKMan (https://sdkman.io/) for managing multiple versions of Java on your machine. I'll be using Zulu 17.0.6 for most tasks, but may switch to a Liberica NIK version for purposes of native builds.
  • Docker (optional, but may be necessary for some tasks)

By now, you've no doubt noticed that Generative AI is making waves across many industries. In between all of the hype and doubt, there are several use cases for Generative AI in many software projects. Whether it be as simple as building a live chat to help your users or using AI to analyze data and provide recommendations, Generative AI is becoming a key piece of software architecture.

So how can you implement Generative AI in your projects? Let me introduce you to Spring AI.

For over two decades, the Spring Framework and its immense portfolio of projects have been making complex problems easy for Java developers. And now with the new Spring AI project, adding Generative AI to your Spring Boot projects couldn't be easier! Spring AI brings an AI client and templated prompting that handles all of the ceremony necessary to communicate with common AI APIs (such as OpenAI and Azure OpenAI). And with Spring Boot auto-configuration, you'll be able to get straight to the point of asking questions and getting the answers your application needs.

In this session, we'll consider a handful of use cases for Generative AI and see how to implement them with Spring AI. We'll start simple, then build up to some more advanced uses of Spring AI that employ your application's own data when generating answers.

Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs and about half are design flaws. A significant number of the bugs can be found through automated testing tools which frees you up to focus on the more pernicious design issues. 

 In addition to detecting the presence of common bugs, however, we can also imagine automating the application of corrective refactoring. In this talk, I will discuss using OpenRewrite to fix common security issues and keep them from coming back.

 

In this talk we will focus on:

  • Using OpenRewrite to automatically identify and fix known security vulnerabilities.
  • Integrating security scans with OpenRewrite for continuous improvement.
  • Freeing up your time to address larger concerns by taking care of the pedestrian but time-consuming security bugs.

One of the nice operational features of the REST architectural style as an approach to API design is that it allows for separate evolution of the client and server. Depending on the design choices a team makes, however, you may be putting a higher burden on your clients than you intend when you introduce breaking changes.

 By taking advantage of the capabilities of OpenRewrite, we can start to manage the process of independent evolution while minimizing the impact. Code migration and refactoring can be used to transition existing clients away from older or deprecated APIs and toward new versions with less effort than trying to do it by hand.

 

In this talk we will focus on:

  • Managing API lifecycle changes by automating the migration from deprecated to supported APIs.
  • Discussing API evolution strategies and when they require assisted refactoring and when they don’t.
  • Integrating OpenRewrite into API-first development to ensure client code is always up-to-date with ease.

One of the features that distinguished Java from the majority of mainstream languages at the time of its release was its platform-independent threading model.
The Java programming language provides core, low-level features to control how threads interact: synchronized, wait/notify/notifyAll, and volatile. The specification also provides a “memory model” that describes how the programmer can share data reliably between threads. Using these low-level features presents no small challenge and is error-prone.

Contrary to popular expectation, code written this way is often not faster than code created using the high level java.util.concurrent libraries. Despite this, there are two good reasons for understanding these and the underlying memory model. One is that it's quite common to have code written in this way that must be maintained, and such maintenance is impractical without an understanding of these features. Second, when writing code using the higher level libraries, the memory model, or more specifically, the “happens-before” relationship still guides how and when we should use these libraries.

This workshop presents these features in a way designed to allow you to perform maintenance, and write new code without being dangerous.
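
As a small illustration of the “happens-before” relationship described above (a sketch, not workshop material): the ordinary write to `data` happens-before the volatile write to `ready`, which happens-before any read that observes `ready == true`, so the reader can never see the flag set while `data` is still stale:

```java
// Sketch of safe publication via a volatile flag and happens-before.
public class Handoff {
    static int data = 0;
    static volatile boolean ready = false;
    static int observed = -1;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { Thread.onSpinWait(); } // spin until published
            observed = data; // guaranteed to see 42, never a stale 0
        });
        reader.start();
        data = 42;    // ordinary write...
        ready = true; // ...published by the volatile write
        reader.join();
        System.out.println("observed = " + observed);
    }
}
```

Without `volatile`, nothing orders the two writes for the reader, and the loop might even never terminate.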

Java's Generics syntax provides us with a means to increase the reusability of our code by allowing us to build software, particularly library software, that can work on many different types, even with limited knowledge about those types. The most familiar examples are the classes in Java's core collections API which can store and retrieve data of arbitrary types, without degenerating those types to java.lang.Object.
However, while the generics mechanism is very simple to use in simple cases such as using the collections API, it's much more powerful than that. Frankly, it can also be a little puzzling.

This session investigates the issues of type erasure, assignment compatibility in generic types, co- and contra-variance, through to bridge methods.

Course outline

  • Type erasure
  • Two approaches for generics and Java's design choice
  • How to break generics (and how not to!)
  • Maintaining concrete type at runtime
  • Assignment compatibility of generic types
  • What's the problem? Understanding Liskov substitution in generic types
  • Co-variance
  • Two syntax options for co-variance
  • Contra-variance
  • Syntax for contra-variance
  • Worked examples with co- and contra-variance
  • Building arrays from generic types
  • Effective use of functional interfaces
  • Bridge methods
  • Review of overloading requirements
  • Faking overloading in generic types

Setup requirements

This course includes extensive live coding demonstrations and attendees will have access to the code that's created via a git repo. The majority of the examples will work in any version of Java from version 11 onwards, but some might use newer library features. You can use any Java development environment / IDE that you like and no other tooling is required.
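
As a taste of the co- and contra-variance material in the outline, here is a minimal sketch using bounded wildcards; the method and names are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of variance with wildcards: producers use '? extends'
// (you may only read T out), consumers use '? super' (you may only
// write T in) — the familiar PECS guideline.
public class Variance {
    // 'src' produces Numbers (covariant); 'dst' consumes them (contravariant).
    static void copy(List<? extends Number> src, List<? super Number> dst) {
        for (Number n : src) {
            dst.add(n);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3); // a Number producer
        List<Object> out = new ArrayList<>();        // can consume any Number
        copy(ints, out);
        System.out.println(out); // prints [1, 2, 3]
    }
}
```

Note that neither `List<Integer>` nor `List<Object>` is a `List<Number>`; the wildcards are what make both calls legal.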

For many beginning and intermediate software engineers, design is something of a secret anxiety. Often we know we can create something that works, and we can likely include a design pattern or two, if only to give our proposal some credibility. But sometimes we're left with a nagging feeling that there might be a better design, or a more appropriate pattern, and we might not be confident that we can justify our choices.

This session investigates the fundamental driving factors behind good design choices so we can balance competing concerns and confidently justify why we did what we did. The approach presented can be applied not only to design, but also to what's often separated out under the term “software architecture”.
Along the journey, we'll use the approach presented to derive several of the well known “Gang of Four” design patterns, and in so doing conclude that they are the product of sound design applied to a context and not an end in themselves.

Course outline
Background: three levels of “design”
Data structure and algorithm
Design
Software Architecture
Why many programmers struggle with design
What makes a design “better” or “worse” than any other?
The pressures of the real world versus a learning environment
A time-honored engineering solution
Identifying the problem
Dissecting the elements
Creating a working whole from the parts
Deriving three core design patterns from principles
Decorator
Strategy
Sidenote: why traditional inheritance is bad
Command or “higher order function”
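The Strategy and "higher order function" items above can be previewed in a few lines of Java (names are ours, not from the course): the strategy is simply a function value passed to the code that uses it.

```java
import java.util.function.UnaryOperator;

// Strategy expressed as a functional interface: the "strategy" is a value
// of type UnaryOperator<String>, passed in rather than hard-wired.
public class StrategyDemo {
    static String format(String input, UnaryOperator<String> strategy) {
        return strategy.apply(input);
    }

    public static void main(String[] args) {
        System.out.println(format("hello", String::toUpperCase));  // HELLO
        System.out.println(format("hello", s -> "<<" + s + ">>")); // <<hello>>
    }
}
```

Seen this way, the classic class-per-strategy diagram is just one encoding of a more fundamental idea: parameterizing behavior.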

Setup requirements
This course is largely language agnostic, but does include some live coding demonstrations. Attendees will have access to the code that's created via a git repo. The majority of the examples will work in any version of Java from version 11 onwards. You can use any Java development environment / IDE that you like and no other tooling is required.

When the world wide web launched in 1993, it presented a revolutionary new way to share information globally. The revolution didn't stop there: the web soon became a platform for building, hosting, and distributing entire applications. Today most applications are built as web applications, yet the core capabilities of HTML remain mired in the Web 1.0 days. Ajax was the first of many “hacks” used to build web applications that delivered the rich, responsive user experience of traditional fat-client applications. Early JavaScript libraries and frameworks overcame browser incompatibilities and provided the first abstractions to hide the hacks, and today's frameworks are so powerful that conventional wisdom holds they are the de facto best practice for building modern web applications. But at what cost?

We've come full circle. Today's SPAs have more in common with the fat-client applications of the '90s (albeit with simplified deployment) than they do with the web. The modern UX of today's framework-driven SPAs is what users demand, so we follow the ever-changing trends; but at what cost? Beyond the bloat, complexity, and ephemerality of the modern web-dev toolchain, modern web-dev practices have inadvertently abandoned the core ideas of the web that made the platform technologically, architecturally, and philosophically revolutionary.

Leading thinkers in the web development space have long proclaimed that “not everything should be a SPA,” but the alternative of a Web 1.0 vanilla-HTML application has very limited utility in 2024. Are these our only options, or does a “third way” exist?

This session introduces that “third way,” based on the revolutionary ideas that empowered the web: a meaningful, practical, and proven alternative to SPA frameworks that provides a simpler and more lightweight approach to building applications on the web and beyond, without sacrificing UX.

Web applications built following this “third way” boast more evolvability, longevity, and simplicity. SPAs will continue to have their place, but good software engineering is about using the right tool for the job. After attending this session, you will have more than just a hammer in your toolbox.

This workshop will explore the principles of the Ports and Adapters pattern (also called Hexagonal Architecture) and demonstrate how to refactor legacy code or design new systems using this approach. You’ll learn how to organize your domain logic and move UI and infrastructure code into the appropriate places within the architecture. The session will also cover practical refactoring techniques using IntelliJ and how to apply Domain-Driven Design (DDD) principles to ensure your system is scalable, maintainable, and well-structured.

What You’ll Learn:

  1. What is Hexagonal Architecture?
    Understand the fundamental principles of Hexagonal Architecture, which isolates the core business logic (the domain) from external systems like databases, message queues, and user interfaces. The architecture is designed so that external components can be modified without affecting the domain.

  2. What are Ports and Adapters?
    Learn the key concepts of Ports and Adapters, the core elements of Hexagonal Architecture. Ports define the interface through which the domain interacts with the outside world, while Adapters implement these interfaces and communicate with external systems.

  3. Moving Domain Code to Its Appropriate Location:
    Refactor your domain code to ensure it is correctly placed in the core domain layer. You will learn how to separate domain logic from external dependencies, ensuring that business rules are isolated and unaffected by user interface or infrastructure changes.

  4. Moving UI Code to Its Appropriate Location:
    Discover how to refactor UI code by decoupling it from the domain logic and placing it in the appropriate layers. You’ll learn how to use the Ports and Adapters pattern to allow the user interface to communicate with the domain without violating architectural boundaries.

  5. Using Refactoring Tools in IntelliJ:
    Learn how to use IntelliJ’s powerful refactoring tools to streamline code movement. Techniques such as Extract Method, Move Method, Extract Delegate, and Extract Interface will be applied to refactor your codebase.

  6. Applying DDD Software Principles:
    We’ll cover essential Domain-Driven Design principles, such as Value Objects, Entities, Aggregates, and Domain Events.

  7. Refactoring Techniques:
    Learn various refactoring strategies to improve code structure: Extract Method, Move Method, Extract Delegate, Extract Interface, and Sprout Method and Class.

  8. Verifying Code with ArchUnit:
    Enforce consistency and package rules using ArchUnit, a library for verifying the architecture of your codebase. You will learn how to write tests confirming your project adheres to the desired architectural guidelines, including the separation of layers and boundaries.
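To make the core idea of items 1 and 2 concrete, here is a minimal Ports-and-Adapters sketch. All names (`OrderRepository`, `OrderService`, and so on) are illustrative, not from the workshop materials: the domain owns and depends only on the port interface, while the adapter implements it at the edge of the system.

```java
import java.util.ArrayList;
import java.util.List;

public class HexagonalSketch {
    // Port: an interface defined by, and owned by, the domain.
    interface OrderRepository {
        void save(String orderId);
        List<String> findAll();
    }

    // Domain service: pure business logic, no infrastructure imports.
    static class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
        void placeOrder(String orderId) {
            if (orderId == null || orderId.isBlank())
                throw new IllegalArgumentException("order id required");
            repository.save(orderId);
        }
    }

    // Adapter: an in-memory stand-in for, say, a database-backed adapter.
    static class InMemoryOrderRepository implements OrderRepository {
        private final List<String> orders = new ArrayList<>();
        public void save(String orderId) { orders.add(orderId); }
        public List<String> findAll() { return List.copyOf(orders); }
    }

    public static void main(String[] args) {
        OrderRepository repo = new InMemoryOrderRepository();
        new OrderService(repo).placeOrder("A-1");
        System.out.println(repo.findAll()); // [A-1]
    }
}
```

Because the domain sees only the port, swapping the in-memory adapter for a real database adapter requires no change to `OrderService` — which is exactly the property the workshop's refactorings aim to establish.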

Who Should Attend:

This workshop is perfect for developers who want to improve their understanding of Ports and Adapters Architecture, apply effective refactoring techniques, and leverage DDD principles for designing scalable and maintainable systems.

Workshop Requirements

If you wish to do the interactive labs:

  1. Java 21 or higher
  2. IntelliJ IDEA (required)
  3. Maven

Does your life feel like nonstop motion with never a moment to chill, as if you're always reacting to shifting priorities? You’re not alone, and it’s time to bring your A-game beyond the code. In this well-groomed talk, you'll learn how to use familiar concepts—roadmaps, backlogs, and more—from your professional life in analogous ways to bring order to your everyday life.

Discover how to transform chaos into clarity by prioritizing tasks like a pro, tackling personal goals with laser focus, and making “fire drills” far less frequent. You’ll also learn how to let go—yes, it’s a skill, not just a mindset—and drop some of the mental clutter that keeps you spinning. Through relatable examples, humor, no-nonsense strategies, and real-world letting go practices, you’ll walk away with tools to get your life dialed in, reduce stress, achieve what truly matters—and still have time for a beer with friends. No debugging required!

This interactive, hands-on workshop is designed for software developers and architects eager to explore cutting-edge AI technologies. We’ll delve deep into Retrieval-Augmented Generation (RAG) and GraphRAG, equipping participants with the knowledge and skills to build autonomous agents capable of intelligent reasoning, dynamic data retrieval, and real-time decision-making.

Through practical exercises, real-world use cases, and collaborative discussions, you’ll learn how to create applications that leverage external knowledge sources and relational data structures. By the end of the day, you’ll have a solid understanding of RAG and GraphRAG and the ability to integrate these methodologies into production-ready autonomous agents.

In this interactive workshop, participants will delve into the foundational concepts of RAG and GraphRAG, exploring how these technologies can be utilized to develop autonomous agents capable of intelligent reasoning and dynamic data retrieval. The workshop will cover essential topics such as data ingestion, embedding techniques, and the integration of graph databases with generative AI models.
Attendees will engage in practical exercises that involve setting up RAG pipelines, utilizing vector databases for efficient information retrieval, and implementing GraphRAG workflows to enhance the capabilities of their applications. By the end of the workshop, participants will have a comprehensive understanding of how to harness these advanced methodologies to build robust autonomous agents tailored to their specific use cases.
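At the heart of every RAG pipeline is the retrieval step: rank documents by the similarity of their embedding vectors to a query vector. The sketch below is a toy version in Java; real pipelines use learned embeddings and a vector database, and the document names and vectors here are made up for illustration.

```java
import java.util.Comparator;
import java.util.Map;

// Toy RAG retrieval: pick the corpus document whose embedding has the
// highest cosine similarity to the query embedding.
public class RetrievalSketch {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static String bestMatch(Map<String, double[]> corpus, double[] query) {
        return corpus.entrySet().stream()
                .max(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> cosine(e.getValue(), query)))
                .orElseThrow()
                .getKey();
    }

    public static void main(String[] args) {
        Map<String, double[]> corpus = Map.of(
                "doc-about-cats", new double[]{0.9, 0.1},
                "doc-about-cars", new double[]{0.1, 0.9});
        System.out.println(bestMatch(corpus, new double[]{0.8, 0.2})); // doc-about-cats
    }
}
```

The retrieved document is then injected into the LLM prompt as context — the "augmented generation" half of RAG, which the workshop's pipeline labs build out in full.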

As digital ecosystems evolve at breakneck speed, enterprises must reimagine their architectural blueprints—not as static diagrams, but as adaptive, intelligence-driven systems. This talk offers a compelling preview of Enterprise Architecture 4.0, weaving together cutting-edge technologies, strategic foresight, and practical frameworks to prepare architects and technology leaders for the next wave of transformation.

We begin by exploring how Generative AI, Assistive Agents, Predictive Analytics, and Copilot GPTs can be embedded into modern architectures—turning traditional systems into living, learning ecosystems. Attendees will gain foundational knowledge of RAG (Retrieval-Augmented Generation), vector databases, and graph databases—critical for context-aware reasoning, personalization, and intelligent data processing.

Then, using the Gartner Emerging Tech Impact Radar as our compass, we explore the innovations reshaping enterprise software over the next 3–5 years:

Emerging Technologies Shaping the Future
Generative AI & GPT Agents: Beyond code generation—autonomous reasoning, AI coaching, and domain-specific copilots.

Vector & Graph Databases: Powering search, personalization, and relationship-aware AI.

Augmented & Virtual Reality (AR/VR): Enhancing training, field ops, and immersive collaboration.

Edge Computing: Enabling low-latency intelligence and decentralized decision-making at the enterprise edge.

AI/ML at Scale: Mitigating bias, enforcing AI ethics, and integrating ML into business operations.

Blockchain: Ensuring data integrity, supply chain transparency, and smart contract automation.

Autonomous Agents & Advanced Automation: Automating complex workflows through intelligent, multi-modal agents.

We highlight how Enterprise Architecture becomes the navigation system for these innovations—ensuring alignment with business strategy, reducing technical debt, and unlocking agility and resilience.

Key Takeaways & Outcomes
By the end of this session, attendees will:

Understand the strategic shift toward Enterprise Architecture 4.0 and why it’s essential—not optional.

Gain a preview of patterns, principles, and tools to be explored in the 2-day workshop, including AI-first frameworks and modular architectures.

Learn how to integrate emerging technologies like GPT agents, vector search, blockchain, and AR/VR into a unified roadmap.

Discover how to elevate the EA function—from compliance-oriented governance to future-shaping innovation leadership.

Whether you’re preparing for next-gen AI systems, scaling automation, or future-proofing your enterprise strategy, this talk provides the preview and perspective you need to lead with confidence into the AI-driven future.

Graph technology has emerged as the fastest-growing sector in database systems over the past decade—and now, it's at the heart of AI transformation. This talk explores the strategic imperative of mastering graph technologies for professionals designing intelligent systems, optimizing codebases, and architecting future-ready enterprises.

Mastering graph databases, knowledge graphs, and advanced algorithms is no longer a niche skill—it's foundational to enabling AI use cases, powering semantic search, driving recommendation engines, and orchestrating Retrieval-Augmented Generation (RAG) with high precision.
In this comprehensive session, we'll explore high-level graph algorithms that form the backbone of modern, complex systems and discuss how these algorithms are integral to the architecture of efficient graph databases. We will delve into the advanced functionalities and strategic implementations of knowledge graphs, illustrating their essential role in integrating disparate data sources, empowering AI applications including generative AI, and enhancing business intelligence.

Join us to navigate the complexities and opportunities this dynamic field presents, ensuring you remain at the cutting edge of technology and continue to drive significant advancements in your projects and enterprises.

What You’ll Learn:
Advanced Graph Algorithms
Concise review of key graph theory concepts tailored for AI and data engineers.

Application of algorithms like Greedy, Dijkstra's, Bellman-Ford, and PageRank for real-world graph optimization, pathfinding, and influence modeling.

Graph Database Architecture
Comparison of graph vs. relational models for large-scale, interconnected data.

Best practices in data modeling, indexing, and query performance tuning in platforms like Neo4j, TigerGraph, and Amazon Neptune.

Mastery of Knowledge Graphs
How to build and scale enterprise-grade knowledge graphs for semantic search, personalization, and intelligent recommendations.

Role of ontologies, entities, and relationships in structuring organizational knowledge.

Graph-RAG and AI-Enhanced Use Cases
Deep dive into Graph-RAG (Graph-enhanced Retrieval-Augmented Generation): combining structured knowledge graphs with unstructured retrieval to power trustworthy, explainable generative AI.

Use cases:

Domain-specific copilots with traceable knowledge lineage.

AI assistants that reason over connected knowledge.

Compliance-aware search and recommendations.

Customer 360 + Agent 360 views for enterprise workflows.

Case Studies and Future Technologies
Real-world case studies of graph adoption in healthcare, finance, e-commerce, and public sector AI.

Preview of emerging trends:

Graph Neural Networks (GNNs)

Hybrid vector–graph databases

Multimodal reasoning over structured + unstructured data
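As a concrete taste of the pathfinding algorithms listed above, here is a compact Dijkstra's-algorithm sketch over an adjacency map; the graph, node names, and class names are illustrative, not from the session materials.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Dijkstra's single-source shortest paths over a weighted digraph
// represented as node -> (neighbor -> edge weight).
public class DijkstraSketch {
    static final class Entry {
        final String node; final int dist;
        Entry(String node, int dist) { this.node = node; this.dist = dist; }
    }

    static Map<String, Integer> shortestDistances(
            Map<String, Map<String, Integer>> graph, String source) {
        Map<String, Integer> dist = new HashMap<>();
        PriorityQueue<Entry> pq = new PriorityQueue<>(Comparator.comparingInt(e -> e.dist));
        dist.put(source, 0);
        pq.add(new Entry(source, 0));
        while (!pq.isEmpty()) {
            Entry cur = pq.poll();
            if (cur.dist > dist.getOrDefault(cur.node, Integer.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<String, Integer> edge :
                    graph.getOrDefault(cur.node, Map.of()).entrySet()) {
                int alt = cur.dist + edge.getValue();
                if (alt < dist.getOrDefault(edge.getKey(), Integer.MAX_VALUE)) {
                    dist.put(edge.getKey(), alt);       // relax the edge
                    pq.add(new Entry(edge.getKey(), alt));
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> g = Map.of(
                "A", Map.of("B", 1, "C", 4),
                "B", Map.of("C", 2),
                "C", Map.of());
        System.out.println(shortestDistances(g, "A").get("C")); // 3, via A -> B -> C
    }
}
```

Graph databases run exactly this kind of traversal natively over stored relationships, which is why they outperform join-heavy relational queries for pathfinding workloads.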

Outcomes & Takeaways:
By the end of this session, you will:

Understand why graph mastery is foundational for AI and system innovation.

Learn to architect performant, scalable graph systems for enterprise use.

See how Graph-RAG bridges structured knowledge and LLMs to deliver smarter AI assistants.

Be equipped to apply graph technologies to drive innovation, efficiency, and AI trustworthiness in your own organization.

With advanced AI tools, software architects can enhance their project design, compliance adherence, and overall workflow efficiency. Join Rohit Bhardwaj, an expert in generative AI, for a session that delves into integrating ChatGPT, a cutting-edge generative AI model, into the realm of software architecture. The session provides attendees with hands-on experience in prompt engineering for architectural tasks and in optimizing requirement analysis using ChatGPT, and it is designed specifically for software architects interested in leveraging generative AI to improve their work.

Outline:
Introduction

A brief overview of the session.
Importance of generative AI in software architecture.
Introduction to ChatGPT and its relevance for software architects.

Prompt Engineering for Architectural Tasks

Crafting Effective Prompts for ChatGPT
Strategies for creating precise and effective prompts.
Examples of architectural prompts and their impact.
Hands-On Exercise: Creating Architectural Prompts
Interactive session: Participants will craft and test their prompts.
Feedback and discussion on prompt effectiveness.

Optimizing Requirement Analysis

Leveraging ChatGPT for Requirement Analysis and Design
Integration of AI in empathizing with client needs and journey mapping.
Cost Estimations, Compliance, Security, and Performance
Selecting appropriate technologies and patterns with AI assistance
Hands-On Exercise: Requirement Analysis and Design
Case Study
Using Empathy Map and Customer Journey Map tools in conjunction with AI.
Case Study: Cost Estimations, Compliance, Security, and Performance

Custom GPTs, Embeddings, Agents

Key Takeaways:
Enhanced understanding of how generative AI can be used in software architecture.
Practical skills in prompt engineering tailored for architectural tasks.
Strategies for effectively integrating ChatGPT into requirement analysis processes.

“By 2030, 80 percent of heritage financial services firms will go out of business, become commoditized, or exist only formally but not compete effectively,” predicts Gartner.

This session explores the integration of AI, specifically ChatGPT, into cloud adoption frameworks to modernize legacy systems. Learn how to leverage AWS Cloud Adoption Framework (CAF) 3.0, Microsoft Cloud Adoption Framework for Azure, and Google Cloud Adoption Framework to build cloud-native architectures that maximize scalability, flexibility, and security. Designed for architects, technical leads, and senior IT professionals, this talk provides actionable insights and strategies for successful digital transformation.

Cloud adoption frameworks are essential for accelerating digital business transformation by leveraging the power of cloud technologies. This talk will guide you through the AWS Cloud Adoption Framework (CAF) 3.0, Microsoft Cloud Adoption Framework for Azure, and Google Cloud Adoption Framework, focusing on building cloud-native architectures that ensure scalability, flexibility, and security.

The session will delve into the strategic role of AI, particularly ChatGPT, in modernizing legacy systems. By understanding and implementing these frameworks, you will learn to navigate the complexities of transitioning from legacy systems to modern cloud-based architectures. This talk will provide practical steps and real-world case studies to help you effectively plan and execute your cloud adoption strategy.

Legacy systems can be both assets and obstacles, providing reliable functionality while often becoming burdensome to maintain and evolve. In this talk, we will confront the challenges of working with legacy architectures and explore strategic approaches to modernization. By examining the benefits and risks of incremental migration versus full system rewrites, attendees will learn which path is most suitable for their unique situations.
Through practical examples and case studies, we will explore how successful organizations have revitalized their aging architectures, preserving the value of legacy investments while embracing innovation and adaptability. From small-scale legacy components to large-scale monolithic systems, we'll cover diverse modernization scenarios, allowing participants to glean insights applicable to their projects.
Whether your organization is facing budget constraints, a need for rapid modernization, or concerns about maintaining critical functionality, this talk offers a comprehensive guide to navigating the legacy landscape and crafting a roadmap to rejuvenate aging architectures.

Agenda:

Introduction:

  • Overview of the session
  • Importance of cloud adoption frameworks in digital transformation
  • Introduction to AI and ChatGPT in modernizing legacy systems

Understanding Cloud Adoption Frameworks:

  • Overview of AWS Cloud Adoption Framework (CAF) 3.0
  • Introduction to Microsoft Cloud Adoption Framework for Azure
  • Introduction to Google Cloud Adoption Framework
  • Key components and benefits of each framework

Strategic Role of AI in Legacy Modernization:

  • How AI, particularly ChatGPT, is revolutionizing the modernization of legacy systems
  • Benefits of integrating AI in cloud adoption frameworks

Steps for Moving Legacy Systems to the Cloud:

  • Assessing legacy systems and identifying modernization opportunities
  • Using CAF frameworks to plan and execute migration strategies
  • Incremental migration vs. full system rewrites
  • Ensuring compliance, security, and performance during the transition

ChatGPT's Role in Legacy Analysis:

  • Utilizing ChatGPT for analyzing legacy code
  • Aiding in documentation and understanding complex, outdated codebases
  • Practical examples of ChatGPT in legacy modernization

Building Cloud-Native Architectures:

  • Designing scalable, flexible, and secure cloud-native solutions
  • Leveraging cloud-native services and best practices
  • Implementing continuous integration and continuous delivery (CI/CD) pipelines

Case Studies and Real-World Applications:

  • Examples of successful legacy system modernizations using AI and cloud frameworks
  • Lessons learned and best practices from leading organizations

Practical Tips and Best Practices:

  • Actionable advice for managing and optimizing cloud migration
  • Strategies for ensuring successful digital transformation

Conclusion and Q&A:

  • Recapitulation of key takeaways
  • Addressing final questions and facilitating discussions with the audience
  • Highlighting the future of AI and cloud adoption in modernizing legacy systems

Participants will leave this session equipped with a robust understanding of how to leverage AI, particularly ChatGPT, in the context of legacy system modernization. You will gain strategic insights, practical tools, and actionable knowledge to lead your teams and projects towards successful, AI-enhanced modernization efforts, ensuring your organization remains competitive and agile in a rapidly evolving digital landscape.

Join us for an immersive journey into the heart of modern cybersecurity challenges. In this groundbreaking talk, we delve into the intricacies of securing your digital assets with a focus on three critical domains: applications, APIs, and Large Language Models (LLMs).

As developers and architects, you understand the paramount importance of safeguarding your systems against evolving threats. Our session offers an exclusive opportunity to explore the industry-standard OWASP Top 10 vulnerabilities tailored specifically to your domain.

Uncover the vulnerabilities lurking within your applications, APIs, and LLMs, and gain invaluable insights into mitigating risks and fortifying your defenses. Through live demonstrations and real-world examples, you'll witness firsthand the impact of security breaches and learn proactive strategies to combat them.

Whether you're a seasoned architect seeking to fortify your organization's security posture or a developer striving to build resilient systems, this talk equips you with the knowledge and tools essential for navigating the complex landscape of cybersecurity.

Agenda

  • OWASP Top 10 Overview

    • Introduction to OWASP
    • Significance of OWASP Top 10
    • Overview of OWASP Top 10 for Applications, APIs, and LLMs
  • OWASP Top 10 for Application Security

    • Presentation: Common Vulnerabilities and Mitigation Strategies
    • Demonstration: Live Examples of Application Security Vulnerabilities
  • OWASP Top 10 for API Security

    • Presentation: Key Challenges in API Security and Best Practices
    • Demonstration: Illustration of API Security Vulnerabilities and Attacks
  • OWASP Top 10 for LLM Applications (Large Language Models)

    • Presentation: Unique Security Concerns in LLM Applications
    • Demonstration: Showcase of LLM Security Vulnerabilities and Risks
  • Q&A and Discussion

    • Open Floor for Questions and Discussion
  • Conclusion

    • Summary of Key Takeaways
    • Call to Action: Implementing Security Best Practices
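To ground the application-security portion of the agenda, here is a minimal sketch of mitigating one classic OWASP vulnerability, path traversal: resolve the user-supplied name against a base directory, normalize it, and reject anything that escapes the base. The directory and class names are illustrative, not from the session.

```java
import java.nio.file.Path;

// Path-traversal mitigation: never hand a raw user-supplied filename to the
// filesystem; confine resolved paths to a known base directory.
public class PathCheck {
    static Path safeResolve(Path baseDir, String userSupplied) {
        Path resolved = baseDir.resolve(userSupplied).normalize();
        if (!resolved.startsWith(baseDir))
            throw new SecurityException("path escapes base directory: " + userSupplied);
        return resolved;
    }

    public static void main(String[] args) {
        Path base = Path.of("/srv/uploads");
        System.out.println(safeResolve(base, "report.pdf"));   // stays inside the base
        try {
            safeResolve(base, "../../etc/passwd");             // normalizes to /etc/passwd
        } catch (SecurityException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

The same confine-then-verify shape applies across the Top 10: validate at the trust boundary, not deep inside the application.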

In this session we'll take a tour of some features that you might or might not have heard of, but can significantly improve your workflow and day-to-day interaction with Git.

Git continues to see improvements daily. However, work (and life) can take over, and we often miss the changelog. This means we don't know what changed, and consequently fail to see how we can incorporate those changes into our usage of Git.

In this session we will look at some features you are probably aware of, but haven't used, alongside new features that Git has brought to the table. Examples include:

  • Rebase and interactive rebase
  • restore/switch and when to use them
  • worktrees
  • shallow-clones
  • Git's filesystem monitor

By the end of this session, you will walk away with a slew of new tools in your arsenal, and a new perspective on how this can help you and your colleagues get the most out of Git.

It's not just architecture—it's evolutionary architecture. But to evolve your architecture, you need to measure it. And how does that work exactly? How does one measure something as abstract as architecture?

In this session we'll discuss various strategies for measuring your architecture. We'll see how you know if your software architecture is working for you, and how to know which metrics to keep an eye on. We'll also see the benefits of measuring your architecture.

We'll cover a range of topics in this session, including:

  • Different kinds of metrics to measure your architecture
  • The benefits of measurements
  • Improving visibility into architecture metrics

In this session we will discuss the need to document architecture, and see what mechanisms are available to us to document architecture—both present and future.

We've all learned that documenting your code is a good idea. But what about your architecture? What should we be thinking about when we document architecture? What tools and techniques can we reach for as we pursue this endeavor? Can we even make this a sustainable activity, or are we forever doomed to architectural documentation getting outdated before the ink is even dry?

In this session we will discuss a range of techniques that will not only help document your architecture, but even provide a mechanism to think about architecture upfront, and make it more predictable. You'll walk away armed with everything you need to know about documenting your current, and future architectures.

Awareness is the knowledge or perception of a situation or fact, and given the myriad factors involved, it is an elusive attribute. It is likely the most significant skill no one asks for, perhaps because it's challenging to “measure” or verify: it is hard to be aware of one's own awareness, or to produce evidence of it. This session will cover different levels of architectural awareness: how to surface awareness, and how you might respond to different technical situations once you are aware.

Within this session we look holistically at engineering, architecture, and the software development process, discussing:

  • Awareness of when process needs to change (the original purpose of Agile)
  • Awareness of architectural complexity
  • Awareness of a shift in architectural needs
  • Awareness of application portfolio and application categorization
  • Awareness of metrics surfacing system challenges
  • Awareness of system scale (and what scale means for your application)
  • Awareness of when architectural rules are changing
  • Awareness of motivation for feature requests
  • Awareness of solving the right problem

The focus of the session will be on being mindful (defined as focusing on one's awareness), and on sharing strategies for heightening awareness as an architect and engineer.

In the realm of architecture, principles form the bedrock upon which innovative and enduring designs are crafted. This presentation delves into the core architectural principles that guide the creation of structures both functional and aesthetic. Exploring concepts such as balance, proportion, harmony, and sustainability, attendees will gain profound insights into the art and science of architectural design. Through real-world examples and practical applications, this session illuminates the transformative power of adhering to these principles, shaping not only buildings but entire environments. Join us as we unravel the secrets behind architectural mastery and the principles that define architectural brilliance.

Good architectural principles are fundamental guidelines or rules that inform the design and development of software systems, ensuring they are scalable, maintainable, and adaptable. Here are some key architectural principles that are generally considered valuable in software development:

  • Modularity
  • Simplicity
  • Scalability
  • Flexibility
  • Reusability
  • Maintainability
  • Performance
  • Security
  • Testability
  • Consistency
  • Interoperability
  • Evolutionary Design

Adhering to these architectural principles can lead to the development of robust, maintainable, and adaptable software systems that meet the needs of users and stakeholders effectively.

Embarking on the journey to become an architect requires more than technical expertise; it demands a diverse skill set that combines creativity, leadership, communication, and adaptability. You may be awesome as a developer or engineer, but the skills needed to be an architect are often different and require more than technical awareness to succeed.

This presentation delves into the crucial skills aspiring architects need to cultivate. From mastering design principles and embracing cutting-edge technologies to honing collaboration and project management abilities, attendees will gain valuable insights into the multifaceted world of architectural skills. Join us as we explore practical strategies, real-world examples, and actionable tips that pave the way for aspiring architects to thrive in a dynamic and competitive industry.

As a PhD student, Arty took on a directed studies to learn how to bring her 3D animated character, fervie, into an interactive context to run and jump around in an Augmented Reality (AR) environment with a game controller. Join her as she shares her journey in learning, challenges faced, lessons learned, and a demo of her final project, Learning with Fervie.

If you've ever been interested in building applications for AR/VR space, or working at the intersection of art + coding, Arty will be sharing her journey in learning this space over the last few years. With lots of fun examples, and a proposal for a standard platform for building 3D apps for software engineering, you'll learn about the capabilities of what's possible in this space with new inspiring ideas for building fresh and innovative applications.

What if developers had tools that recorded and helped them explore their historical experiences with the code, and they could identify hotspots of team friction, worthy of discussion, based on empirical data? This talk will explore the possibility and impact of such tools through a design fiction and working prototype of an Augmented Reality (AR) Code Planetarium powered by FlowInsight developer tools.

In an Agile software development process, a software team will typically meet on a regular basis in a “retrospective meeting” to reflect on the challenges faced by the team and opportunities for improvement. On the surface, this challenge might seem straightforward, but modern software projects are complex endeavors, and developers are human – identifying what’s most important in a complex sociotechnical system is a task humans struggle to do well.

You've heard of Terraform, maybe even written some scripts using it. You've heard that Terraform is capable of dynamic behavior using blocks, for loops and counters. And you've glanced at the Terraform functions list, but wondered how one would ever go about using those?

We've got you covered.

In this session, we'll build a set of Terraform scripts that can be fed a YAML file, and using Terraform's dynamic capabilities, we'll build infrastructure as specced out in the YAML file.

We'll be covering a host of different topics in this session:

  • Terraform's dynamic capabilities, including dynamic blocks, for and for_each loops
  • Terraform's functions and data structures

In this session, we'll rank the features added to Java between versions 1.8 and 24 (or whatever version is current at the time). Those include the basic functional features, like streams, lambdas, and method references, through code improvements like switch expressions, records, and pattern matching. We'll include simple topics like LVTI and collection factory methods, as well as more recent additions like sealed interfaces and virtual threads. Vote for your favorite (and/or least favorite) feature!

Examples will demonstrate data-oriented programming concepts, combining sealed interfaces, records, pattern matching for switch, and more. Other examples will access RESTful web services, integrate with AI tools, and refactor existing Java 8 code to take advantage of new features.
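
The combination of sealed interfaces, records, and pattern matching for switch can be sketched in a few lines. The `Shape` hierarchy below is an illustrative stand-in, not taken from the session's examples:

```java
public class ShapeDemo {
    // A sealed interface restricts which types may implement it.
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    static double area(Shape s) {
        // Pattern matching for switch (Java 21+): the compiler knows all
        // permitted subtypes, so the switch is exhaustive without a default.
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rectangle(3, 4)));  // prints 12.0
    }
}
```

Adding a new permitted subtype would turn the non-exhaustive switch into a compile error, which is the safety net data-oriented programming relies on.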

AI tools are used to create a short libretto for a classic opera given a frankly ridiculous situation. Two models generate the text in tandem. Another model acts as a reviewer to evaluate the result. Illustrations are generated using an image generator. Snippets of music are created from parts of the text using a music system. A podcast is then generated discussing the quality and historical ramifications of the opera, using yet another model. It's AI models all the way down, with appropriately tragic results.

The idea is to evaluate the limits of what multimodal AI tools can and cannot do, both individually and together.

This comprehensive presentation explores the evolution of Java from version 8 through 25, demonstrating how the language has transformed from an object-oriented platform into a modern, multi-paradigm programming language. Starting with Java 8's functional programming revolution—including lambdas, streams, and Optional—the presentation traces Java's journey through significant milestones like records, pattern matching, virtual threads, and data-oriented programming. Through practical code examples from a real repository, attendees will see how these features work together to create more expressive, maintainable, and performant applications.

The presentation begins with Java 8's game-changing features, using the ProcessDictionaryV2 example to showcase functional programming patterns, higher-order functions, and advanced Stream API usage including collectors like groupingBy and teeing. It then progresses through Java 9-11's quality-of-life improvements (var, HTTP Client, String enhancements), Java 12-17's language evolution (text blocks, records, pattern matching, sealed classes), and Java 18-21's modern capabilities (virtual threads for massive scalability, sequenced collections). Special attention is given to Data-Oriented Programming, demonstrating how records, sealed classes, and pattern matching combine to create a new programming paradigm. The presentation also covers cutting-edge features like unnamed variables (_) and looks ahead to Java 25 LTS with scoped values and performance improvements. Throughout, best practices are emphasized, including embracing immutability, leveraging pattern matching for cleaner code, using virtual threads for I/O-bound operations, and adopting modern APIs over legacy alternatives. All examples are drawn from the accompanying repository, providing attendees with working code they can explore and adapt for their own projects.
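
As a sketch of the collector pipeline mentioned above, the following example feeds one stream into two collectors at once with `Collectors.teeing` (Java 12+); the numbers and map keys are illustrative, not from the session's repository:

```java
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TeeingDemo {
    public static void main(String[] args) {
        // Collectors.teeing sends each element to two downstream
        // collectors, then merges their two results into one value.
        Map<String, Long> stats = Stream.of(3, 5, 7, 9)
            .collect(Collectors.teeing(
                Collectors.counting(),
                Collectors.summingInt(Integer::intValue),
                (count, sum) -> Map.of("count", count, "sum", (long) sum)));
        System.out.println(stats.get("count") + " " + stats.get("sum"));  // prints 4 24
    }
}
```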

Workshop Requirements

Java 24 until Java 25 is released, then Java 25. The instructor plans to use IntelliJ IDEA during class, though that is not required. If you do install IDEA, the free community edition is fine.


Introducing Spring Modulith

Although microservices are still a useful architectural choice, the balance of additional complexity against the advantages of a microservice architecture does not necessarily work out to the benefit of all applications. While most applications will benefit from improved modularity, the challenges that come with distributed computing may be too much for some applications to take on. A well-structured and modular monolithic application might be a better fit.

In this session, we'll explore Spring Modulith, a relatively new Spring library that enables developers to build well-structured Spring Boot applications, guiding them in discovering domain-driven modules, and verifying that the modular arrangement is correct. We'll also see how Spring Modulith assists with modular integration testing and documentation.

In this example-driven session, we're going to look at how to implement GraphQL in Spring. You'll learn how Spring for GraphQL builds upon GraphQL Java, recognize the use-cases that are best suited for GraphQL, and how to build a GraphQL API in Spring.

Typical REST APIs deal in resources. This is fine for many use cases, but it tends to be more rigid and less efficient in others.

For example, in a shopping API, it's important to weigh how much or how little information should be provided in a request for an order resource. Should the order resource contain only order specifics, with no details about the order's line items or the products in those line items? If all relevant details are included in the response, the resource breaks its boundaries and is overkill for clients that do not need the extra data. On the other hand, proper factoring of the resource requires that clients make multiple requests to the API to fetch all of the information they need.

GraphQL offers a more flexible alternative to REST, setting aside the resource-oriented model and focusing more on what a client needs. Much as how SQL allows for data from multiple tables to be selected and joined in response to a query, GraphQL offers API clients the possibility of tailoring the response to provide all of the information needed and nothing that they do not need.

Workshop Requirements
  • Java (17 or higher)
  • An IDE of your choosing (I'll be using either Spring Tools or IntelliJ, but VSCode or even NetBeans will be fine if you prefer)

In the fast-moving world of API development, the right tools can make all the difference—especially in the age of AI. As automation accelerates and complexity grows, it’s never been more important to equip your teams with tools that support clarity, consistency, and velocity. This session explores a curated set of essential tools that every API practitioner should know—spanning the entire API lifecycle. From design and development to governance, documentation, and collaboration, we’ll highlight solutions that boost productivity and support scalable, AI-ready API practices. Whether you're refining your stack or starting from scratch, this toolkit will help you deliver better APIs, faster.

Join to discover how this essential toolkit can empower your API journey, enhancing productivity and ensuring optimal performance throughout the API lifecycle.

In an era where digital transformation and AI adoption are accelerating across every industry, the need for consistent, scalable, and robust APIs has never been more critical. AI-powered tools—whether generating code, creating documentation, or integrating services—rely heavily on clean, well-structured API specifications to function effectively. As teams grow and the number of APIs multiplies, maintaining design consistency becomes a foundational requirement not just for human developers, but also for enabling reliable, intelligent automation. This session explores how linting and reusable models can help teams meet that challenge at scale.

We will explore API linting using the open-source Spectral project to enable teams to identify and rectify inconsistencies during design. In tandem, we will navigate the need for reusable models—recognizing that the best specification is the one you don’t have to write or lint at all! These two approaches not only facilitate the smooth integration of services but also foster collaboration across teams by providing a shared, consistent foundation.

Modern HTTP APIs power today’s connected world, acting as the core interface not only for developers, but also for the ever-growing ecosystem of machines, services, and now AI agents. As every organization is increasingly expected to produce and consume APIs at scale, the ability to design, build, deploy, and operate consistent, high-quality APIs has become a key competitive differentiator. With AI accelerating the need for composable, well-structured, and discoverable interfaces, API maturity is no longer optional—it’s essential. However, building and scaling effective API Design First practices across an enterprise is still fraught with manual processes, inconsistent standards, and slow governance models. To succeed, organizations must reimagine API Governance as a strategic enabler—one that prioritizes collaboration, stewardship, and automation.

In this session, we’ll explore the core stages of the API design lifecycle and share how to implement practical, modern governance models that increase productivity without sacrificing control. Drawing on real-world examples from SPS Commerce, as well as off-the-shelf tooling and custom solutions, we’ll show how to align your teams, accelerate delivery, and produce APIs that are robust, reusable, and ready for both human and AI consumers.

Alistair Cockburn has described software development as a game in which we choose among three moves: invent, decide, and communicate. Most of our time at No Fluff is spent learning how to be better at inventing. Beyond that, we understand the importance of good communication, and take steps to improve in that capacity. Rarely, however, do we acknowledge the role of decision making in the life of software teams, what can cause it to go wrong, and how to improve it.

In this talk, we will explore decision-making pathologies and their remedies in individual, team, and organizational dimensions. We'll consider how our own cognitive limitations can lead us to make bad decisions as individuals, and what we might do to compensate for those personal weaknesses. We'll learn how a team can fall into decision-making dysfunction, and what techniques a leader might employ to restore healthy functioning to an afflicted group. We'll also look at how organizational structure and culture can discourage quality decision making, and what leaders can do to swim against the tide.

Software teams spend a great deal of time making decisions that place enormous amounts of capital on the line. Team members and leaders owe it to themselves to learn how to make them well.

Internal Developer Portals are revolutionizing how teams streamline workflows, enhance developer experience, and boost productivity. As AI begins to weave deeper into engineering workflows, having structured, reliable SDLC data becomes essential to unlock its full potential. But for medium-sized organizations with limited resources, the path to success is unclear and fraught with challenges. Simply mimicking or using industry frameworks like Spotify Backstage might seem like a shortcut, but without a clear vision and strategy, it can lead to frustration and failure.

This talk dives into how to build a successful Internal Developer Portal tailored to your unique needs. We'll explore critical steps: making a compelling business case, navigating the buy-versus-build dilemma, designing scalable architectures, driving adoption, and measuring impact in meaningful ways. Packed with actionable insights from SPS Commerce’s journey—the force behind the world’s largest retail network—you'll gain a roadmap to reduce cognitive load, empower your teams, and align your engineering practices with business goals. Whether you're starting fresh or refining an existing portal, this session will equip you to achieve big wins, even on a lean budget.

Apache Flink has become the standard piece of stream processing infrastructure for applications with difficult-to-satisfy demands for scalability, high performance, and fault tolerance, all while managing large amounts of application state.

The key to demystifying Apache Flink is to understand how the combination of stream processing plus application state has influenced its design and APIs. A framework that cares only about batch processing, or one that performs only stateless stream processing, would be much simpler.

We'll explore how Flink's managed state is organized, and how this relates to the programming model exposed by its APIs. We'll look at checkpointing: how it works, the correctness guarantees that Flink offers, how state snapshots are organized on disk, and what happens during recovery and rescaling.

We'll also look at watermarking, which is a major source of complexity and confusion for new Flink developers. Watermarking epitomizes the requirement Flink has to manage application state in a way that doesn't explode as those applications run continuously on unbounded streams.

You'll leave with a good mental model of Apache Flink, ready to use it in your own stateful stream processing applications.

Virtual Threads, the central feature of Project Loom, were released as a final feature in JDK 21! As great as virtual threads are, it's just the start of the story for Project Loom. With the vastly reduced cost of creating threads, comes new opportunities.

In this presentation we will look at the next two major features to be delivered by Project Loom, Structured Concurrency and Scoped Values, and how they will significantly improve the developer experience of writing, running, and debugging code that executes concurrently.
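
As context for why cheap threads open up these new opportunities, here is a minimal sketch of the virtual-thread API that shipped as final in JDK 21, which Structured Concurrency and Scoped Values build on; the sleeping task is just a placeholder for blocking I/O:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        // Virtual threads are cheap to create, so one thread per task
        // is a reasonable default for blocking workloads.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> f = executor.submit(() -> {
                Thread.sleep(100);  // blocking call parks the virtual thread
                return "done on " + Thread.currentThread().isVirtual();
            });
            System.out.println(f.get());  // prints: done on true
        }  // close() waits for submitted tasks to finish
    }
}
```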

Amber, Valhalla, Loom, Panama, oh my! These are the names of the OpenJDK projects that will be shaping the near-term future of Java. As these projects begin to deliver, what does this mean for the future of Java?

In this presentation, we will look at the goals of these OpenJDK projects, what has been delivered and is soon to be delivered with each project, and how Java developers can prepare for these future changes. If you are curious about the future of Java or want a deeper understanding of these projects, this is a presentation you want to check out!

It has been said that everything rises and falls on leadership and that there are no bad teams, only bad leaders. As you serve your team as their manager and leader, one of the most critical decisions for you to make is what kind of leader you will be.

In this session, we will focus on one of the keys to good leadership: how you manage yourself. From cultivating healthy personal routines to developing a culture of feedback, during this session we will explore some best practices in self-leadership that will help you bring your very best to the team you serve.

Our industry is in the process of changing our understanding of computational systems. The combination of extreme computational demand and energy demand is a key concern for modern data centers and runtime platforms. How many calculations can we produce, and at what energy cost? The limitations are a confluence of material science, system design complexity, and the fundamental laws of physics.

It's about to get weird as we enter the world of quantum and biological systems.

We started with coprocessors, FPGAs, ASICs, GPUs, and DSPs as lower-power, high-performance custom hardware. We're now seeing the emergence of neural processing units and tensor processing units as well.

But we are on the cusp of enormous shifts in what's possible computationally with the advent of quantum and biological systems. Not every computational element is suitable for every problem, but quantum computing will make some problems impossibly fast to handle. Artificial biological brains will be able to perform computations, like the human brain, with the power budget of a light bulb.

Come hear how things are already in the process of changing as well as what is likely to come next.

Businesses are investing in Artificial Intelligence (AI) at an unprecedented scale. The transformative potential of AI is too great to ignore. But so are its costs.

This talk explores the multiplying effect AI is making and why that might be a problem. What role do we play as leaders toward this outcome? What forces can we multiply to change it?

I may be “old”, but my career has lived at the forefront of every technology wave: software, data, agility, wireless, mobile, cloud, and now AI. Each brought new opportunities and sped growth. And now?

The AI wave is different from the rest. Bigger, sure. A tsunami. But who benefits? Past waves enabled content and capability building. They provided new rungs on our career ladder. This one replaces content and capability building. It's smashing the rungs altogether.

We formed the AI Leadership Lap in 2024 to better understand the impacts of Generative AI on business and leaders. Exploring productivity to performance, design to decisions, and goals to governance, we'll discuss insights and impacts to leaders.

By now, you've no doubt noticed that Generative AI is making waves across many industries. In between all of the hype and doubt, there are several use cases for Generative AI in many software projects. Whether it be as simple as building a live chat to help your users or using AI to analyze data and provide recommendations, Generative AI is becoming a key piece of software architecture.

So how can you implement Generative AI in your projects? Let me introduce you to Spring AI.

For over two decades, the Spring Framework and its immense portfolio of projects have been making complex problems easy for Java developers. And now with the new Spring AI project, adding Generative AI to your Spring Boot projects couldn't be easier! Spring AI brings an AI client and templated prompting that handles all of the ceremony necessary to communicate with common AI APIs (such as OpenAI and Azure OpenAI). And with Spring Boot autoconfiguration, you'll be able to get straight to the point of asking questions and getting the answers your application needs.

In this hands-on workshop, you'll build a complete Spring AI-enabled application, applying such techniques as prompt templating, Retrieval Augmented Generation (RAG), conversational history, and tool invocation. You'll also learn prompt engineering techniques that can help your application get the best results with minimal “hallucinations” while minimizing cost.

Workshop Requirements

In the workshop, we will be using…

  • Java 17+
  • An IDE of your choosing (I will be using IntelliJ IDEA)
  • OpenAI's API (you will need an account and a small amount of credit; $5 is more than enough)

Optionally, you may choose to use an AI provider other than OpenAI, such as Anthropic, Mistral, or Google Vertex (Gemini), but you will need an account and some reasonable amount of credit with them. Or, you may choose to install Ollama (https://ollama.com/), but if you do, be sure to install a reasonable model (llama3:latest or gemma:9b) before you arrive.

Know that if you choose to use something other than OpenAI, your workshop experience will vary.

If you ask the typical technologist how to build a secure system, they will include encryption in the solution space. While this is a crucial security feature, in and of itself, it is an insufficient part of the plan. Additionally, there are a hundred ways it could go wrong. How do you know if you're doing it right? How do you know if you're getting the protections you expect?

Encryption isn't a single thing. It is a collection of tools combined together to solve problems of secrecy, authentication, integrity, and more. Sometimes those tools are deprecated because they no longer provide the protections that they once did. Technology changes. Attacks change. Who in your organization is tracking and validating your encryption strategy? How are quantum computing advancements going to change the game? No background will be assumed and not much math will be shown.
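
To illustrate the “collection of tools” point, here is a minimal sketch using the JDK's built-in authenticated encryption (AES-GCM), one construction that bundles secrecy and integrity together; the message and key handling are illustrative only, not a production design:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class AesGcmDemo {
    public static void main(String[] args) throws Exception {
        // AES-GCM provides secrecy AND integrity in one mode. A fresh
        // random nonce per message is essential; reuse breaks both.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] ciphertext = cipher.doFinal("attack at dawn".getBytes(StandardCharsets.UTF_8));

        // Decryption verifies the authentication tag before returning data;
        // a tampered ciphertext would throw AEADBadTagException instead.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```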


Software projects can be difficult to manage. Managing teams of developers can be even more difficult. We've created countless processes, methodologies, and practices, but the underlying problems remain the same.

This session is full of practical tips and tricks to deal with the real-life situations any tech leader regularly encounters. Put these techniques into practice and create an enviable culture and an outstanding development team. At the same time, you'll avoid common management mistakes and pitfalls.

In the age of digital transformation, Cloud Architects emerge as architects of the virtual realm, bridging innovation with infrastructure. This presentation offers a comprehensive exploration of the Cloud Architect's pivotal role.

Delving into cloud computing models, architecture design, and best practices, attendees will gain insights into harnessing the power of cloud technologies. From optimizing scalability and ensuring security to enhancing efficiency and reducing costs, this session unravels the strategic decisions and technical expertise that define a Cloud Architect's journey. Join us as we decode the nuances of cloud architecture, illustrating its transformative impact on businesses in the modern era.

The organization has grown, and one line of business has become two, and then ten. Each line of business is driving technology choices based on its own needs. Who manages alignment of technology across the entire enterprise, and how? Enter Enterprise Architecture! We need to stand up a new part of the organization.

This session will define the role of architects and architectures. We will walk through a framework of starting an Enterprise Architecture practice. Discussions will include:

  • Differences of EA teams from one organization to another
  • Different architectural roles
  • Challenges that face EA
  • How to start or refine an EA practice

As an architect, you're often working at a high level on projects, thinking of architectural concerns such as distributed applications, CI/CD pipelines, inter-team APIs, and setting standards. Code quality affects everything a software architect works on, in ways both small and large.

We typically look at 6 Code Quality Areas:

Readability
Flexibility
Reusability
Scalability
Extendibility
Maintainability

In this talk, we'll look at techniques and tools for managing code quality. Our goal as architects is to maximize the manageability of code; we'll consider different coding paradigms and their effect on the six areas, and look at how to create habits and processes that ensure long-term code viability. We'll take a couple of sidebars on performant vs. manageable code and OO vs. data-oriented coding, and we'll look at tools for static analysis vs. dynamic analysis.
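To illustrate what "static analysis" means in practice, here is a toy check built on nothing but Python's standard-library `ast` module: it flags functions whose bodies exceed a line budget. The rule and names are ours for illustration; real teams would reach for mature linters rather than rolling their own.

```python
import ast

def long_functions(source: str, max_lines: int = 10):
    """Return (name, length) for each function longer than max_lines."""
    tree = ast.parse(source)            # static: we inspect code, never run it
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append((node.name, length))
    return offenders

# A small sample: one tiny function and one that blows the budget.
code = "def tiny():\n    return 1\n\ndef big():\n" + "    x = 1\n" * 12
print(long_functions(code, max_lines=5))
```

The same walk-the-syntax-tree pattern underlies readability and maintainability checks in full-featured analyzers; dynamic analysis, by contrast, would have to execute the code to observe its behavior.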

Spring Boot 3.x and Java 21 have arrived, making it an exciting time to be a Java developer! Join me, Josh Long (@starbuxman), as we dive into the future of Spring Boot with Java 21. Discover how to scale your applications and codebases effortlessly. We'll explore the robust Spring Boot ecosystem, featuring AI, modularity, seamless data access, and cutting-edge production optimizations like Project Loom's virtual threads, GraalVM, AppCDS, and more.

Let's explore the latest-and-greatest in Spring Boot to build faster, more scalable, more efficient, more modular, more secure, and more intelligent systems and services.

The age of artificial intelligence (because the search for regular intelligence hasn't gone well…) is nearly at hand, and it's everywhere! But is it in your application? It should be. AI is about integration, and here the Java and Spring communities come second to nobody.

In this talk, we'll demystify the concepts of modern-day artificial intelligence and look at its integration with the white-hot new Spring AI project, a framework that builds on the richness of Spring Boot to extend it to the wide world of AI engineering.

Platform engineering is the latest buzzword in an industry that already has its fair share. But what is platform engineering? How does it fit in with DevOps and Developer Experience (DevEx)? And is this something your organization even needs?

In this session we will dive deep into the world of platform engineering. We will see what platform engineering entails, how it is the logical successor to a successful DevOps implementation, and how it aims to improve the developer experience. We will also uncover the keys to building robust, sustainable platforms for the future.

Sharing code and internal libraries across your distributed microservice ecosystem feels like a recipe for disaster! After all, you have always been told, and likely witnessed, how this type of coupling can add a lot of friction to a world that is built for high velocity. But I'm also willing to bet you have experienced the opposite side effects: dealing with dozens of services that have had the same chunks of code copied and pasted over and over again, and now you need to make a standardized, simple header change to all services across your platform. Talk about tedious, friction-filled, error-prone work that you probably will not do! Using a variety of code-sharing processes and techniques like inner sourcing, module design, automated updates, and service templates, code reuse in your organization can become an asset rather than a liability.

In this talk, we will explore the architectural myth in microservices that you should NEVER share any code and explore the dos and don'ts of the types of reuse that you want to achieve through appropriate coupling. We will examine effective reuse patterns, including what a Service Template architecture looks like, while also spending time on the lifecycle of shared code and practically rolling it out to your services. We will finish it off with some considerations and struggles you are likely to run into introducing code reuse patterns into the enterprise.

Mastering the AI-First System Design Methodology is a must-attend talk for developers and architects seeking to elevate their system design capabilities in the era of intelligent systems. In this dynamic 90-minute session, attendees will embark on a comprehensive journey through the foundational principles of modern system design—now reimagined for AI integration—with a practical focus on the C4 model and its application to AI-enabled architectures.

This session is designed to equip professionals with the frameworks and tools necessary to build scalable, efficient, AI-aware systems that deliver lasting impact in a rapidly evolving digital ecosystem.

We'll begin by exploring the critical importance of understanding business requirements and stakeholder intent—an essential step in designing systems that align human values with machine intelligence. From there, we’ll guide attendees through a structured, AI-augmented design methodology: from stakeholder engagement and context modeling to system decomposition and refinement using LLMs and generative AI assistants.

Each stage will be brought to life with real-world examples, hands-on exercises, and interactive discussions—demonstrating how AI can accelerate ideation, automate documentation, optimize decisions, and identify design flaws early in the process.

Special focus will be given to incorporating empathy maps, value chain analysis, and customer journey mapping, enhanced with AI-driven pattern recognition and predictive insights. These tools enable deeper understanding of user behavior and business dynamics, resulting in more responsive and adaptive system architectures.

Whether you're a seasoned architect embracing AI-driven transformation or a developer ready to future-proof your design thinking, this talk will deliver actionable insights into building robust, intelligent, and human-centric systems. Join us to reimagine system design through the lens of AI—and become a key innovator in your organization’s AI-first journey.

The Importance of System Design

  • The role of system design in software development
  • Examples of project successes and failures

Overview of System Design Methodology

  • Introduction to System Design Methodology
  • The C4 Model: Context, Containers, Components, and Code

Deep Dive into the Methodology Stages

* Engage with Business Stakeholders
    * Techniques for engagement and prioritization
    * Case Study: A startup's journey to understand market needs

* Identify Vital Business Capabilities
    * Mapping business capabilities
    * Case Study: Streamlining operations for a logistics company

* Understand the Internal and External Personas
    * Using empathy maps and customer journey mapping
    * Case Study: Designing a healthcare app with patient and provider personas

* Develop a New Value Proposition
    * Crafting value propositions
    * Case Study: Innovating retail experience with a new e-commerce platform

* Define Solution Architecture
    * Detailing architecture and capability modules
    * Case Study: Architectural overhaul for a financial services firm

* Define Component Process Flows
    * Visualizing interactions and process flows
    * Case Study: Enhancing the order fulfillment process for an online retailer

* Review, Refine, and Finalize
    * Consolidating insights and preparing for implementation
    * Case Study: Finalizing and launching a new feature for a social media platform

MCP, or Model Context Protocol, is a standardized framework that allows AI agents to seamlessly connect with external data sources, APIs, and tools. Its main purpose is to make AI agents more intelligent and context-aware by giving them real-time access to live information and actionable capabilities beyond their built-in knowledge.

Join AI technologist, author, and trainer Brent Laster as we learn what MCP is, how it works, and how it can be used to create AI agents that can work with any process that implements MCP. You'll work with MCP concepts, coding, servers, etc. through hands-on labs that teach you how to use it with AI agents.

With MCP, developers can easily integrate AI agents with a wide variety of systems, from internal business databases to third-party services, without having to build custom integrations for each use case. MCP servers act as gateways, exposing specific actions and knowledge to the AI agent, which can then dynamically discover and use these capabilities as needed. This approach streamlines the process of adding new functionalities to AI agents and reduces ongoing maintenance.

MCP is particularly useful for scenarios where AI agents need up-to-date information or need to perform actions in external systems, such as customer support bots fetching live ticket data, enterprise assistants accessing knowledge bases, or automation agents processing transactions. By leveraging MCP, organizations can create more adaptable, powerful, and enterprise-ready AI solutions that respond to real-world business needs in real time.
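The "gateway" pattern described above — a server exposing named capabilities that an agent discovers and invokes at runtime — can be sketched in a few lines of plain Python. To be clear, this is not the real MCP wire protocol (which is JSON-RPC based); every class and tool name here is a made-up illustration of the discover-then-call shape only.

```python
class ToyMCPServer:
    """Toy stand-in for an MCP server: a registry of named tools."""

    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        # Decorator that registers a function as a callable capability.
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        # Discovery: the agent asks what capabilities exist.
        return {n: t["description"] for n, t in self._tools.items()}

    def call_tool(self, name, **kwargs):
        # Invocation: the agent calls a capability it discovered by name.
        return self._tools[name]["fn"](**kwargs)

server = ToyMCPServer()

@server.tool("get_ticket", "Fetch a support ticket by id")
def get_ticket(ticket_id: int):
    return {"id": ticket_id, "status": "open"}  # stand-in for a live lookup

# An agent can now discover and use the capability dynamically:
print(server.list_tools())
print(server.call_tool("get_ticket", ticket_id=7))
```

Because the agent learns the tool list at runtime, adding a new capability means registering one more function on the server side, with no change to the agent — the maintenance benefit the paragraph above describes.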

Workshop Requirements

Attendees will need the following to do the hands-on labs:

  • A laptop with power
  • A GitHub account on the public GitHub.com (free tier is fine)
  • Browser

Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this presentation, author, trainer, and experienced technologist Brent Laster will help you do both! We’ll explain why and how to run AI models locally, the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama.

Join us to learn about all 3 topics in 90 minutes!

No experience with these technologies is needed, although we do assume a basic understanding of LLMs.

This will be a fast-paced, engaging mixture of presentations interspersed with code explanations and demos building up to the finished product – something you’ll be able to replicate yourself after the session!
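The retrieve-then-generate shape the session builds toward can be sketched in a handful of lines. This is a deliberately naive illustration, not the workshop's code: retrieval here is simple word overlap rather than embeddings, and the model call is stubbed out where the real labs would send the prompt to a local model through Ollama. The document strings and function names are ours.

```python
# A tiny corpus standing in for the documents a real RAG pipeline
# would embed and store in a vector database.
DOCS = [
    "UberConf is a technical conference held in Denver.",
    "RAG augments a model's answer with retrieved documents.",
    "Ollama runs large language models locally.",
]

def retrieve(query: str, docs, k: int = 1):
    # Naive retrieval: rank documents by how many query words they share.
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    # Assemble the augmented prompt: retrieved context plus the question.
    context = "\n".join(retrieve(query, DOCS))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    # In the workshop this prompt would go to a local model via Ollama;
    # here we return the prompt itself to show what the model receives.
    return prompt

print(answer("What does Ollama do?"))
```

Swapping the word-overlap ranking for embedding similarity and the stub for a real model call turns this skeleton into the agent assembled during the session.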

Just as CI/CD and other revolutions in DevOps have changed the landscape of the software development lifecycle (SDLC), so Generative AI is now changing it again. Gen AI has the potential to simplify, clarify, and lessen the cycles required across multiple phases of the SDLC.

In this session with author, trainer, and experienced DevOps director Brent Laster, we'll survey the ways that today's AI assistants and tools can be incorporated across your SDLC phases, including planning, development, testing, documentation, and maintenance. There are multiple ways the existing tools can help us beyond just the standard day-to-day coding and, like other changes that have happened over the years, teams need to be aware of them and think about how to incorporate AI into their processes to stay relevant and up to date.

Mob Programming is a style of programming in which the entire team sits together and works on a single task at a time. Teams that have worked this way have found that many of the problems that plague normal development simply melt away, possibly because communication and learning increase. Teams also find that the quality of their code increases. They find their capacity to create increases. However, the best part of all this is that teams end up being happier and more cohesive.

In this session we introduce the core concepts of mob programming and then get hands-on, mobbing on a coding kata.

Apache Iceberg is quickly becoming the foundation of the modern Data Lakehouse, offering ACID guarantees, schema evolution, time travel, and multi-engine compatibility over cheap object storage. We’ll work with Iceberg hands-on and show how to build durable, versioned, trustworthy datasets directly from streaming pipelines.

You’ll see Flink writing to Iceberg, Kafka events flowing into governed tables, and how snapshots let you query “what the data looked like yesterday.” We’ll compact, rewind, evolve schemas, roll back mistakes, and even handle CDC-style updates — all in real time and all powered by open source.

Whether you’re building for Data Mesh, Lakehouse, or stream-batch unification, this talk will show you how to use Iceberg to defend your data and enable self-serve, analytical infrastructure at scale.

  • Set up a full Lakehouse pipeline with Kafka, Flink, Iceberg, and MinIO (S3 local clone)
  • Use Time Travel to query historical snapshots of your data
  • Run Compaction to optimize small files into efficient Parquet chunks
  • Perform Schema Evolution safely with zero downtime
  • Ensure Conflict-Free Streaming Writes with exactly-once guarantees
  • Expire and Roll Back Snapshots to recover from mistakes or manage retention
  • Apply CDC-style Merge/Upserts into Iceberg tables from change logs
  • If time remains, explore engines other than Flink, like Dremio or Trino
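The time-travel and rollback items above all fall out of one idea: every commit to an Iceberg table produces a new immutable snapshot, so reading yesterday's data means reading an older snapshot, and rollback means re-pointing the table at one. Here is a conceptual in-memory sketch of that mechanism — not the PyIceberg API, and real Iceberg tracks snapshots in metadata files over object storage rather than in a Python list.

```python
class SnapshotTable:
    """Toy model of Iceberg-style snapshot semantics."""

    def __init__(self):
        self.snapshots = [[]]          # snapshot 0: the empty table
        self.current = 0

    def append(self, rows):
        # A commit never mutates old data; it creates a new snapshot.
        new = list(self.snapshots[self.current]) + rows
        self.snapshots.append(new)
        self.current = len(self.snapshots) - 1

    def read(self, snapshot_id=None):
        # Time travel: pass an older snapshot id to see historical state.
        sid = self.current if snapshot_id is None else snapshot_id
        return self.snapshots[sid]

    def rollback(self, snapshot_id):
        # Recovering from a bad write is just moving the current pointer.
        self.current = snapshot_id

t = SnapshotTable()
t.append([{"order": 1}])
t.append([{"order": 2}])             # suppose this commit was a mistake
print(t.read())                      # current state includes both rows
print(t.read(snapshot_id=1))         # "what the data looked like yesterday"
t.rollback(1)
print(t.read())                      # back to just the first order
```

Compaction and snapshot expiration then become housekeeping over the same structure: rewriting a snapshot's small files into larger ones, and dropping snapshots older than the retention window.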

Discover how Claude Code, Anthropic’s new agentic coding assistant, is redefining developer productivity directly from the terminal. In this session, we’ll explore how Claude Code leverages advanced natural language understanding to help you refactor, document, and debug code using conversational prompts. See live demonstrations of how Claude Code can streamline complex workflows—handling multi-step tasks, automating documentation, accelerating debugging, and even running tests or linting—all in a single pass.

We’ll dive into the tool’s unique capabilities, including its reasoning and language comprehension, multimodal integration, and built-in Git support for seamless version control. You’ll walk away with a clear understanding of how to incorporate Claude Code into your daily development process, improving code quality, maintainability, and collaboration, while saving valuable time. Whether you’re new to AI-assisted coding or looking to expand your toolkit, this session will equip you with practical techniques to harness Claude Code’s full potential and transform your coding workflow.

Hi, Spring fans! Developers today are being asked to deliver more with less time and build ever more efficient services, and Spring is ready to help you meet the demands. In this workshop, we'll take a roving tour of all things Spring: the fundamentals of the Spring component model, Spring Boot, and then how to apply Spring in the context of batch processing, security, data processing, modular architecture, microservices, messaging, AI, and so much more.

Basics
which IDE? IntelliJ, VSCode, and Eclipse
your choice of Java: GraalVM
start.spring.io, an API, website, and an IDE wizard
Devtools
Docker Compose
Testcontainers
banner.txt
Development Desk Check
the Spring JavaFormat Plugin
Python, gofmt, your favorite IDE, and
the power of environment variables
SDKMAN
.sdkman
direnv
.envrc
a good password manager for secrets
Data Oriented Programming in Java 21+
an example
Beans
dependency injection from first principles
bean configuration
XML
stereotype annotations
lifecycle
BeanPostProcessor
BeanFactoryPostProcessor
auto configuration
AOP
Spring's event publisher
configuration and the Environment
configuration processor
AOT & GraalVM
installing GraalVM
GraalVM native images
basics
AOT lifecycles
Scalability
non-blocking IO
virtual threads
José Paumard's demo
Cora Iberkleid's demo
Cloud Native Java (with Kubernetes)
graceful shutdown
ConfigMap and you
Buildpacks and Docker support
Actuator readiness and liveness probes
Data
JdbcClient
SQL Initialization
Flyway
Spring Data JDBC
Web Programming
clients: RestTemplate, RestClient, declarative interface clients
REST
controllers
functional style
GraphQL
batches
Architecting for Modularity
Privacy
Spring Modulith
Externalized messages
Testing
Batch Processing
Spring Batch
load some data from a CSV file to a SQL database
Microservices
centralized configuration
API gateways
reactive or not reactive
event bus and refreshable configuration
service registration and discovery
Messaging and Integration
“What do you mean by Event Driven?”
Messaging Technologies like RabbitMQ or Apache Kafka
Spring Integration
files to events
Kafka
a look at Spring for Apache Kafka
Spring Integration
Spring Cloud Stream
Spring Cloud Stream Kafka Streams
Security
adding form login to an application
authentication
authorization
passkeys
one time tokens
OAuth
the Spring Authorization Server
OAuth clients
OAuth resource servers
protecting messaging code

A large part of embracing DevOps involves embracing automation. Over the last decade we have seen the emergence of “as Code” — Build-as-Code, Configuration-as-Code, and Infrastructure-as-Code. The benefits of utilizing such tools are huge! We can codify the state of the world around our applications, giving us the ability to treat everything our code needs the way we treat the code itself. Version control, release management, tagging, even rollbacks are now possible.

Terraform, an open-source tool from HashiCorp, allows us to build, control, and modify our infrastructure. Terraform exposes a domain-specific language (DSL) that we can use to express what our infrastructure should look like. Terraform can work with all the major cloud providers, including Amazon AWS, Google GCP, and Microsoft Azure.
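To give a feel for that DSL, here is a hypothetical minimal HCL configuration — an AWS provider plus a single S3 bucket. The bucket name and tags are placeholders, not part of the workshop materials.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Declarative: we describe the desired state, and `terraform apply`
# figures out what to create, change, or destroy to reach it.
resource "aws_s3_bucket" "demo" {
  bucket = "my-demo-bucket-name"

  tags = {
    ManagedBy = "terraform"
  }
}
```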

We will be using AWS as our playground for this workshop.

Agenda

  • The place for, and benefits of “Everything as Code” alongside GitOps
  • Terraform's architecture
  • Terraform 101
  • Introduction to HCL
  • What are providers?
  • Initializing terraform and providers
  • Dive right in! Creating your first resource in AWS using Terraform
  • Understanding references, dependencies
  • apply-ing terraform
  • Variables and the HCL type-system
  • Using data and output in your terraform scripts
  • Understanding how Terraform manages state
  • Using S3 as a backend
  • DRY with Terraform modules
  • Collaboration using Terraform
  • Terraform ecosystem, testing, and GitOps
  • Closing arguments, final Q/A, discussion

Instructions

Please visit https://github.com/looselytyped/terraform-workshop/ for detailed instructions. They might seem a tad arduous but it's not as bad as it looks :)

Workshop Requirements

Please visit https://github.com/looselytyped/terraform-workshop/ for detailed instructions. They might seem a tad arduous but it's not as bad as it looks :)

MLOps is a mix of machine learning and operations. It is the new frontier for those interested in or knowledgeable about both of these disciplines. MLOps supports the operationalization of machine learning models developed by data scientists and delivers the models for processing via streaming or batch operations. Operationalizing machine learning models means shepherding them from notebook to deployment through pipelines.

In this workshop, we will describe the processes:

  • Model Development
  • Model Packaging
  • Model Deployment
  • Model Cataloging
  • Model Monitoring
  • Model Maintenance

Some of the technologies we will discover include:

  • Airflow, Kubeflow, MLFlow
  • Prometheus & Grafana
  • TensorFlow, XGBoost
  • Serving
  • Hyperparameter Tuning

Our exercises will include running and understanding MLFlow.

Workshop Requirements

None

In an industry obsessed with the next new thing, it’s easy to forget the foundation that brought us here. But progress isn’t built on novelty alone—it’s built on the wisdom of those who came before us. “Standing on the Shoulders of Giants” is an invitation to step back from the fast-paced world of tech innovation and reflect on the people, ideas, and moments that shaped us.

In this talk, Michael Carducci—technologist, speaker, and magician—will share personal stories about the giants whose insights have influenced him, some well-known and others you may not have encountered yet. These stories carry powerful lessons, both magical and technical, and serve as reminders that the smallest contributions can create lasting impacts.

You’ll be invited to reconnect with the timeless wisdom that we often overlook in our rush to keep up with the latest trends. Through a blend of engineering insights, historical perspectives, and magical moments, Michael will encourage you to reflect on your own giants—those who’ve inspired you—and how you, too, can become a force for improvement in the industry.

This talk is an opportunity to pause, look back, and appreciate the foundation we’ve been gifted, all while exploring how it can guide us toward a richer, more grounded future. Because the path forward isn’t just about where we’re going—it’s about the giants who’ve made it possible for us to stand tall.

Unlock your full leadership potential with actionable insights and strategies to take you to the next level of your leadership journey. This immersive, hands-on workshop promises no slides, just deep, meaningful discussions and real-world applications to elevate your awareness and capacity to lead. In this session, we walk the path of technical leadership, from solving problems to guiding programs to developing people, and the key challenges and changes required at each stage. Blending research with pragmatic, applicable practice, you'll leave with new clarity and actionable steps to accelerate your growth as a leader.

Led by Pete Behrens, an engineer turned leadership trainer and coach, this workshop draws on 20 years of his experience developing technical leaders and transforming organizations for sustained performance and health.

Key Topics Include:

  1. Understanding how technology and change are shaping new leadership approaches.
  2. Differentiating your authority as a leader from the respect required for effective leadership.
  3. Recognizing how your own technological mastery impedes your growth as a leader.
  4. The art of letting go - and why it is only half the battle to becoming a better leader.
  5. Developing a more strategic orientation, and aligning and engaging others towards it.
  6. Navigating the fine line between influence and manipulation and how to know if you’ve crossed it.
  7. Honing your power as a leader to foster safety, shared ownership, and engagement.

Note: This workshop is limited to 30 participants, ensuring an intimate, highly interactive experience for leaders at all levels.

Big Sky Technology
5023 W. 120th Avenue
Suite #289
Broomfield, CO 80020
help@nofluffjuststuff.com
Phone: (720) 902-7711
© 2025 No Fluff, Just Stuff TM All rights reserved.