Attending full-day workshops is optional and requires a workshop ticket ($575). Half-day workshops are open to all conference attendees.
Get hands-on learning to understand and utilize Generative AI from the ground up. Work with key AI techniques and implement simple neural nets, vector databases, large language models, retrieval-augmented generation, and more, all in a single one-day session!
Generative AI is everywhere these days. But there are so many parts of it and so much to understand that it can be overwhelming and confusing for anyone not already immersed in it. In this full-day workshop, open-source author, trainer, and technologist Brent Laster will explain the concepts and workings of Generative AI from the ground up. You’ll learn about core concepts like neural networks all the way through to working with Large Language Models (LLMs), Retrieval Augmented Generation (RAG), and AI agents. Along the way we’ll explain related concepts like embeddings, vector databases, and the current ecosystem around LLMs, including sites like Hugging Face and frameworks like LangChain. And, for the key concepts, you’ll be doing hands-on labs using Python and a preconfigured environment to internalize the learning.
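To make the retrieval side of RAG concrete: documents and queries are turned into embedding vectors, and retrieval is a nearest-neighbor search, often by cosine similarity. A minimal, framework-free sketch; the tiny three-dimensional "embeddings" are hand-made stand-ins for what a real embedding model would produce:

```java
import java.util.Comparator;
import java.util.Map;

class ToyRetriever {
    // Cosine similarity: dot(a, b) / (|a| * |b|)
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the stored document whose embedding is closest to the query's.
    static String retrieve(Map<String, double[]> index, double[] query) {
        return index.entrySet().stream()
                .max(Comparator.comparingDouble(e -> cosine(e.getValue(), query)))
                .get()
                .getKey();
    }

    public static void main(String[] args) {
        Map<String, double[]> index = Map.of(
                "Cats are small felines",         new double[]{0.9, 0.1, 0.0},
                "Java is a programming language", new double[]{0.0, 0.2, 0.9});
        double[] query = {0.8, 0.2, 0.1};   // an embedded "tell me about cats"
        System.out.println(retrieve(index, query));
    }
}
```

In a real RAG pipeline the retrieved text is then stuffed into the LLM prompt as context; a vector database replaces the in-memory map and does this search at scale.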
You are ready to level up your skills. Or you've already been playing accidental architect and need a structured plan to be designated as one. Well, your wait is over.
From the author of O'Reilly's best-selling “Head First Software Architecture” comes a full-day workshop that covers all you need to start thinking architecturally. From the difference between design and architecture and a modern description of architecture, to the skills you'll need to develop to become a successful architect, this workshop will be your one-stop shop.
We'll cover several topics:
This is an exercise-heavy workshop, so be prepared to put on your architect hat!
In today's world of distributed computing, asynchronous programming is taking center stage from an architectural point of view in order to provide better scalability. However, we are often cautioned against asynchronous programming due to the higher degree of complexity involved in writing asynchronous code and in managing exceptions. That caution is well founded; however, recent evolutions in Java are resetting the playing field, merging the simplicity of synchronous calls with the power of asynchronous execution.
In this workshop, we will take a deep dive into asynchronous programming in Java and learn about the three major options and which one we should really choose.
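The workshop doesn't name its options here, but two frequent contenders illustrate the tradeoff: explicit asynchronous composition with CompletableFuture versus plain blocking code on virtual threads (Java 21+), which reads synchronously but scales like asynchronous code. A hedged sketch; fetchPrice is a stand-in for any remote call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;

class AsyncStyles {
    static int fetchPrice(String symbol) {       // stand-in for a remote call
        return symbol.length() * 10;
    }

    // Style 1: explicit asynchronous composition with CompletableFuture.
    static int viaCompletableFuture() {
        return CompletableFuture.supplyAsync(() -> fetchPrice("ACME"))
                .thenCombine(CompletableFuture.supplyAsync(() -> fetchPrice("DUKE")),
                             Integer::sum)
                .join();
    }

    // Style 2: plain blocking style, run on cheap virtual threads (Java 21+).
    static int viaVirtualThreads() throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var a = executor.submit(() -> fetchPrice("ACME"));
            var b = executor.submit(() -> fetchPrice("DUKE"));
            return a.get() + b.get();            // reads like synchronous code
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(viaCompletableFuture() + " " + viaVirtualThreads());
    }
}
```

Both compute the same result; the difference is in how control flow and failures are expressed, which is exactly the complexity tradeoff the session examines.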
Event-driven architecture (EDA) is a design principle in which the flow of a system’s operations is driven by the occurrence of events rather than by direct communication between services or components. There are many reasons why EDA is a standard architecture for many moderate-to-large companies. It offers a history of events with the ability to rewind, and it supports real-time data processing in a scalable and fault-tolerant way. It provides real-time extract-transform-load (ETL) capabilities for near-instantaneous processing. EDA can serve as the communication channel in a microservice architecture, or in any other architecture.
In this workshop, we will discuss the prevalent principles regarding EDA, and you will gain hands-on experience performing and running standard techniques.
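The core mechanic behind EDA, decoupling producers from consumers through published events, can be sketched in a few lines. A real system would use a durable event log such as Kafka rather than this in-memory bus, but the shape of the interaction is the same:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // Consumers register interest in an event type; producers never see them.
    void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    // Producers publish events without knowing who (if anyone) is listening.
    void publish(String eventType, String payload) {
        subscribers.getOrDefault(eventType, List.of())
                   .forEach(handler -> handler.accept(payload));
    }
}
```

Because the producer only knows the event type, new consumers (audit, ETL, notifications) can be added without touching the producing service, which is where EDA's evolvability comes from.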
By now, you've no doubt noticed that Generative AI is making waves across many industries. In between all of the hype and doubt, there are several use cases for Generative AI in many software projects. Whether it be as simple as building a live chat to help your users or using AI to analyze data and provide recommendations, Generative AI is becoming a key piece of software architecture.
So how can you implement Generative AI in your projects? Let me introduce you to Spring AI.
For over two decades, the Spring Framework and its immense portfolio of projects have been making complex problems easy for Java developers. And now, with the new Spring AI project, adding Generative AI to your Spring Boot projects couldn't be easier! Spring AI brings an AI client and templated prompting that handles all of the ceremony necessary to communicate with common AI APIs (such as OpenAI and Azure OpenAI). And with Spring Boot autoconfiguration, you'll be able to get straight to the point of asking questions and getting the answers your application needs.
In this hands-on workshop, you'll build a complete Spring AI-enabled application applying such techniques as prompt templating, Retrieval Augmented Generation (RAG), conversational history, and tool invocation. You'll also learn prompt engineering techniques that can help your application get the best results with minimal “hallucinations” while minimizing cost.
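Prompt templating, one of the techniques above, is conceptually just variable substitution into a prompt before it is sent to the model. A minimal, framework-free sketch of the idea (this is not Spring AI's actual API; Spring AI provides a PromptTemplate abstraction that plays this role):

```java
import java.util.Map;

class PromptTemplateSketch {
    // Replace each {name} placeholder with its value from the model map.
    static String render(String template, Map<String, String> model) {
        String result = template;
        for (var entry : model.entrySet()) {
            result = result.replace("{" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "Answer in the style of {style}: {question}";
        System.out.println(render(template,
                Map.of("style", "a pirate", "question", "What is RAG?")));
    }
}
```

Keeping the template separate from the values is what lets you iterate on prompt wording (the prompt-engineering part) without touching application logic.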
Microservices have emerged as both a popular and powerful architecture, yet the promised benefits overwhelmingly fail to materialize. Industry analyst Gartner estimates that “More than 90% of organizations who try to adopt microservices will fail…” If you hope to be part of that successful 10%, read on…
Succeeding with microservices requires optimizing for organization-level goals with all teams “rowing in the same direction” which is easier said than done. While the microservices architecture may have been well-defined at some point in history, today the word “microservices” is used as an umbrella to refer to a vast and diverse set of coarse-to-fine grained distributed systems. In short, thanks to the phenomenon of semantic diffusion, the term “microservices” means many different things to many different people.
The promised benefits of microservices (e.g. extremely high agility, scalability, elasticity, deployability, testability, evolvability, etc.) are achieved through a highly contextual hyper-optimization of both the logical architecture of the system as well as optimization of technical practices, tooling, and the organization itself. Operating based on someone else's reference architecture (optimized for their problem space, not yours) or attempting to apply the architectural topology without the necessary team and process maturity is often just laying the foundation for yet another microservice mega disaster.
There simply is no one-size-fits-all approach to component granularity, optimal inter-service communication, data granularity, data replication and aggregation, minimizing or managing distributed transactions, asynchronous event orchestration or choreography, technology and platform selection/management, along with so many more decisions that are highly contextual. In short, adopting this architecture requires correctly making hundreds of decisions on a path that is fraught with traps and landmines. This is one of the hardest architectures to execute well. In markets with ever-increasing competitiveness, slow and costly trial-and-error is untenable.
Mastering microservices requires that architects and developers understand the numerous tradeoffs (and their consequences) to arrive at an architecture that is both in-reach of the development teams and organization as well as capable of delivering the promised (and often necessary) -illities. This requires deep understanding of the terrain the architect will be exploring, extensive hands-on practice, and the latest tools, patterns, and practices; all of which this workshop is designed to deliver.
If you or your organization hopes to venture down this path, or if you already have but the organization and system is struggling to deliver necessary -illities, or if you just want to have a better understanding of the complex yet powerful architecture category; this workshop is for you.
This interactive, hands-on workshop is designed for software developers and architects eager to explore cutting-edge AI technologies. We’ll delve deep into Retrieval-Augmented Generation (RAG) and GraphRAG, equipping participants with the knowledge and skills to build autonomous agents capable of intelligent reasoning, dynamic data retrieval, and real-time decision-making.
Through practical exercises, real-world use cases, and collaborative discussions, you’ll learn how to create applications that leverage external knowledge sources and relational data structures. By the end of the day, you’ll have a solid understanding of RAG and GraphRAG and the ability to integrate these methodologies into production-ready autonomous agents.
In this interactive workshop, participants will delve into the foundational concepts of RAG and GraphRAG, exploring how these technologies can be utilized to develop autonomous agents capable of intelligent reasoning and dynamic data retrieval. The workshop will cover essential topics such as data ingestion, embedding techniques, and the integration of graph databases with generative AI models.
Attendees will engage in practical exercises that involve setting up RAG pipelines, utilizing vector databases for efficient information retrieval, and implementing GraphRAG workflows to enhance the capabilities of their applications. By the end of the workshop, participants will have a comprehensive understanding of how to harness these advanced methodologies to build robust autonomous agents tailored to their specific use cases.
Application Programming Interfaces (APIs) are by definition directed at software developers. They should, therefore, strive to be useful and easy to use for developers. However, by engaging design elements from the Web, they can be useful in much larger ways than simply serializing state as JSON.
There is no right or perfect API design. There are, however, elements and choices that induce certain properties. This workshop will walk you through various approaches to help you find the developer experience and long-term strategies that work for you, your customers and your organization.
We will cover:
The Web Architecture as the basis of our APIs
The REST Architectural Style and its motivations
The Richardson Maturity Model as a way of discussing design choices and induced properties
The implications of content negotiation and representation choices such as JSON or JSON-LD
The emergence of metadata approaches to describing and using APIs, such as OpenAPI and Hydra
Security considerations
Client technologies
API Management approaches
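Content negotiation, one of the topics above, is the mechanism by which a client's Accept header drives which representation the API returns. A toy sketch of server-side selection; a real implementation would honor quality values and wildcards as specified in RFC 9110:

```java
import java.util.List;

class ContentNegotiation {
    // Pick the first representation the client accepts that we can produce.
    static String negotiate(String acceptHeader, List<String> supported) {
        for (String offered : acceptHeader.split(",")) {
            String mediaType = offered.trim().split(";")[0];   // drop q-values
            if (supported.contains(mediaType)) {
                return mediaType;
            }
        }
        return supported.get(0);    // fall back to our default representation
    }

    public static void main(String[] args) {
        List<String> supported = List.of("application/json", "application/ld+json");
        System.out.println(negotiate("application/ld+json;q=0.9, text/html", supported));
    }
}
```

The design point is that one resource can serve plain JSON to one client and JSON-LD to another from the same URI, which is one of those "larger ways" the Web's design elements pay off.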
Unlock your full leadership potential with actionable insights and strategies to take you to the next level of your leadership journey. This immersive, hands-on workshop promises no slides, just deep, meaningful discussions and real-world applications to elevate your awareness and capacity to lead. In this session, we walk the path of technical leadership, from solving problems to guiding programs to developing people, and the key challenges and changes required at each stage. Blending research with pragmatic, applicable practice, you’ll leave with new clarity and actionable steps to accelerate your growth as a leader.
Led by Pete Behrens, an engineer turned leadership trainer and coach, this workshop draws on 20 years of his experience developing technical leaders and transforming organizations for sustained performance and health.
Key Topics Include:
Note: This workshop is limited to 30 participants, ensuring an intimate, highly interactive experience for leaders at all levels.
For many beginning and intermediate software engineers, design is something of a secret anxiety. Often we know we can create something that works, and we can likely include a design pattern or two, if only to give our proposal some credibility. But sometimes we're left with a nagging feeling that there might be a better design, or a more appropriate pattern, and we might not be really confident that we can justify our choices.
This session investigates the fundamental driving factors behind good design choices so we can balance competing concerns and confidently justify why we did what we did. The approach presented can be applied not only to design, but also to what's often separated out under the term “software architecture”.
Along the journey, we'll use the approach presented to derive several of the well known “Gang of Four” design patterns, and in so doing conclude that they are the product of sound design applied to a context and not an end in themselves.
Course outline
Background: three levels of “design”
Data structure and algorithm
Design
Software Architecture
Why many programmers struggle with design
What makes a design “better” or “worse” than any other?
The pressures of the real world versus a learning environment
A time-honored engineering solution
Identifying the problem
Dissecting the elements
Creating a working whole from the parts
Deriving three core design patterns from principles
Decorator
Strategy
Sidenote: why traditional inheritance is bad
Command or “higher order function”
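As a flavor of the derivation exercises above: Strategy and Command both fall out of the same principle, isolating the part of a computation that varies behind a function-shaped interface. A small sketch, not taken from the course materials; the pricing domain and names are illustrative:

```java
import java.util.List;
import java.util.function.ToIntFunction;

class PricingStrategies {
    // The "strategy" is just a function from order size to a discount percent.
    static int totalCents(List<Integer> itemCents, ToIntFunction<Integer> discountPercent) {
        int subtotal = itemCents.stream().mapToInt(Integer::intValue).sum();
        int percent  = discountPercent.applyAsInt(itemCents.size());
        return subtotal - (subtotal * percent / 100);
    }

    public static void main(String[] args) {
        List<Integer> cart = List.of(1000, 2000, 3000);
        // Swap behavior by swapping the function, not by subclassing.
        int regular = totalCents(cart, size -> 0);
        int bulk    = totalCents(cart, size -> size >= 3 ? 10 : 0);
        System.out.println(regular + " " + bulk);
    }
}
```

Seen this way, Strategy is "a pluggable function chosen by the caller" and Command is "a function captured as an object to be run later", which is why the course can treat them as consequences of sound design rather than ends in themselves.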
Setup requirements
This course is largely language agnostic, but does include some live coding demonstrations. Attendees will have access to the code that's created via a git repo. The majority of the examples will work in any version of Java from version 11 onwards. You can use any Java development environment / IDE that you like and no other tooling is required.
We have seen how Retrieval Augmented Generation (RAG) systems can help prop up Large Language Models (LLMs) to avoid some of their worst tendencies. But that is just the beginning. The cutting-edge, state-of-the-art systems are multimodal and agentic, involving additional models, tools, and reusable agents to break problems down into separate pieces, transform and aggregate the results, and validate the results before returning them to the user.
Come get introduced to some of the latest and greatest techniques for maximizing the value of your LLM-based systems while minimizing the risk.
We will cover:
It's not just you. Everyone is basically thinking the same thing: When did this happen?
We've gone from slow but steady material advances in machine learning to a seeming explosion and ubiquity of AI-based features, products, and solutions. Even more, we're all expected to know how to adopt, use, and think about all of these magical new capabilities.
Equal parts amazing and terrifying, what you need to know about these so-called “AI” solutions is much easier to understand and far less magical than it may seem. This is your chance to catch up with the future and figure out what it means for you.
In this two-part presentation, we will cover why this time it is different, except where it isn't. I won't assume much background and won't discuss much math.
A brief history of AI
Machine Learning
Deep Learning
Deep Reinforcement Learning
The Rise of Generative AI
Large Language Models and RAG
Multimodal Systems
Bias, Costs, and Environmental Impacts
AI Reality Check
At the end of these sessions, you will be conversant with the major topics and understand better what to expect and where to spend your time in learning more.
Since ChatGPT rocketed the potential of generative AI into the collective consciousness, there has been a race to add AI to everything. Every product owner has been salivating at the possibility of new AI-powered features. Every marketing department is champing at the bit to add a “powered by AI” sticker to the website. For the average layperson playing with ChatGPT's conversational interface, it seems easy; however, integrating these tools securely, reliably, and in a cost-effective manner requires much more than simply adding a chat interface. Moreover, getting consistent results from a chat interface is more of an art than a science. Ultimately, the chat interface is a nice gimmick to show off capabilities, but serious integration of these tools into most applications requires a more thoughtful approach.
This is not another “AI is Magic” cheerleading session, nor an overly critical analysis of the field. Instead, this session looks at a number of valid use cases for the tools and introduces architecture patterns for implementing those use cases. Throughout, we will explore the tradeoffs of the patterns as well as the application of AI in each scenario. We'll explore use cases from simple, direct integrations to more complex ones involving RAG and agentic systems.
Although this is an emerging field, the content is not theoretical. These are patterns that are being used in production, both in Michael's practice as a hands-on software architect and beyond.
Architects must maintain their breadth, and this session will build on that to prepare you for the inevitable AI-powered project in your future.
Hi, Spring fans! Developers today are being asked to deliver more with less time and build ever more efficient services, and Spring is ready to help you meet the demands. In this workshop, we'll take a roving tour of all things Spring, looking at the fundamentals of the Spring component model, looking at Spring Boot, and then seeing how to apply Spring in the context of batch processing, security, data processing, modular architecture, microservices, messaging, AI, and so much more.
Basics
which IDE? IntelliJ, VSCode, and Eclipse
your choice of Java: GraalVM
start.spring.io, an API, website, and an IDE wizard
Devtools
Docker Compose
Testcontainers
banner.txt
Development Desk Check
the Spring JavaFormat Plugin
Python, gofmt, and your favorite IDE
the power of environment variables
SDKMAN
.sdkman
direnv
.envrc
a good password manager for secrets
Data Oriented Programming in Java 21+
an example
Beans
dependency injection from first principles
bean configuration
XML
stereotype annotations
lifecycle
BeanPostProcessor
BeanFactoryPostProcessor
auto configuration
AOP
Spring's event publisher
configuration and the Environment
configuration processor
AOT & GraalVM
installing GraalVM
GraalVM native images
basics
AOT lifecycles
Scalability
non-blocking IO
virtual threads
José Paumard's demo
Cora Iberkleid's demo
Cloud Native Java (with Kubernetes)
graceful shutdown
ConfigMap and you
Buildpacks and Docker support
Actuator readiness and liveness probes
Data
JdbcClient
SQL Initialization
Flyway
Spring Data JDBC
Web Programming
clients: RestTemplate, RestClient, declarative interface clients
REST
controllers
functional style
GraphQL
batches
Architecting for Modularity
Privacy
Spring Modulith
Externalized messages
Testing
Batch Processing
Spring Batch
load some data from a CSV file to a SQL database
Microservices
centralized configuration
API gateways
reactive or not reactive
event bus and refreshable configuration
service registration and discovery
Messaging and Integration
“What do you mean by Event Driven?”
Messaging Technologies like RabbitMQ or Apache Kafka
Spring Integration
files to events
Kafka
a look at Spring for Apache Kafka
Spring Integration
Spring Cloud Stream
Spring Cloud Stream Kafka Streams
Security
adding form login to an application
authentication
authorization
passkeys
one time tokens
OAuth
the Spring Authorization Server
OAuth clients
OAuth resource servers
protecting messaging code
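The outline item "dependency injection from first principles" boils down to one move: a component declares what it needs through its constructor instead of constructing it itself, and something outside wires the graph together. Stripped of all Spring machinery, that looks like this (class names are illustrative):

```java
class GreetingRepository {
    String findGreeting() { return "Hello"; }
}

class GreetingService {
    private final GreetingRepository repository;

    // The dependency is injected; the service never calls `new GreetingRepository()`.
    GreetingService(GreetingRepository repository) {
        this.repository = repository;
    }

    String greet(String name) {
        return repository.findGreeting() + ", " + name + "!";
    }
}

class Wiring {
    // In Spring, this assembly is done by the container from bean metadata
    // (stereotype annotations, @Bean methods, or XML, per the outline above).
    static GreetingService assemble() {
        return new GreetingService(new GreetingRepository());
    }

    public static void main(String[] args) {
        System.out.println(assemble().greet("Duke"));
    }
}
```

Everything else in the Beans portion of the outline (bean configuration, lifecycle, BeanPostProcessor, auto-configuration) is machinery for doing this assembly declaratively and at scale.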
On the one hand, Machine Learning (ML) and AI Systems are just more software and can be treated as such from our development efforts. On the other hand, they behave very differently and our capacity to test, verify, validate, and scale them requires a different set of perspectives and skills.
This presentation will walk you through some of these unexpected differences and how to plan for them. No specific background in ML/AI is required, but you are encouraged to be generally aware of these fields. The AI Crash Course would be a good start.
We will cover:
Matching Capabilities to Needs
Performance Tuning
Vector Databases
Testing Strategies
MLOPs/AIOps Techniques
Evolving these Systems Over Time
With ChatGPT taking center stage since the beginning of 2023, developers who have not had a chance to work with any form of Artificial Intelligence or Machine Learning system may find themselves intrigued by the “maze” of new terminologies; some may be eager to learn more, while perhaps a smaller group may not actually want to venture into territory that’s unknown to them.
This workshop caters to Java developers. We start with a quick introduction to GenAI, ChatGPT, and all of those new terminologies around generative AI. Then we’ll dive right into the hands-on part: how to construct a ChatGPT-based app quickly, using state-of-the-art tools such as pgvector, which provides a vector extension to the popular open-source Postgres database.
Hands-on lab will cover:
If you ask the typical technologist how to build a secure system, they will include encryption in the solution space. While this is a crucial security feature, in and of itself, it is an insufficient part of the plan. Additionally, there are a hundred ways it could go wrong. How do you know if you're doing it right? How do you know if you're getting the protections you expect?
Encryption isn't a single thing. It is a collection of tools combined to solve problems of secrecy, authentication, integrity, and more. Sometimes those tools are deprecated because they no longer provide the protections that they once did. Technology changes. Attacks change. Who in your organization is tracking and validating your encryption strategy? How are quantum computing advancements going to change the game? No background will be assumed and not much math will be shown.
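One of the distinctions above, secrecy versus integrity and authentication, is easy to see in code. An HMAC, for example, protects integrity and authenticity but hides nothing. A hedged sketch using only the JDK (Java 17+ for HexFormat):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

class IntegrityTag {
    // Compute an HMAC-SHA256 tag: anyone holding the shared key can verify
    // the message was not tampered with, but the message itself stays visible.
    static String tag(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return HexFormat.of().formatHex(
                mac.doFinal(message.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String t1 = tag(key, "pay alice $10");
        String t2 = tag(key, "pay alice $1000");   // any change alters the tag
        System.out.println(!t1.equals(t2));
    }
}
```

If you wanted secrecy as well, you would layer in an authenticated cipher such as AES-GCM; knowing which tool answers which question is exactly the kind of strategy the session argues someone must own.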
Join us for a hands-on workshop, GitOps: From Commit to Deploy, where you’ll explore the entire lifecycle of modern application deployment using GitOps principles.
We’ll begin by committing an application to GitHub and watching as your code is automatically built through Continuous Integration (CI) and undergoes rigorous unit and integration tests. Once your application passes these tests, we’ll build container images that encapsulate your work, making it portable, secure, and deployment-ready. Next, we’ll push these images to a container registry preparing for deployment
Next, you will learn how to sync your application in a staging Kubernetes cluster using ArgoCD (CD), a powerful tool that automates and streamlines the deployment process. Finally, we’ll demonstrate a canary deployment in a production environment with ArgoCD, allowing for safe, gradual rollouts that minimize risk.
By the end of this workshop, you’ll have practical experience with the tools and techniques that perform GitOps deployments, so you can take this information and set up your deployments at work.
This workshop will explore the principles of the Ports and Adapters pattern (also called the Hexagonal Architecture) and demonstrate how to refactor legacy code or design new systems using this approach. You’ll learn how to organize your domain logic and move UI and infrastructure code into appropriate places within the architecture. The session will also cover practical refactoring techniques using IntelliJ and how to apply Domain Driven Design (DDD) principles to ensure your system is scalable, maintainable, and well-structured.
What is Hexagonal Architecture?
Understand the fundamental principles of Hexagonal Architecture, which helps isolate the core business logic (the domain) from external systems like databases, message queues, or user interfaces. This architecture is designed so that external components can easily be modified without affecting the domain.
What are Ports and Adapters?
Learn the key concepts of Ports and Adapters, the core elements of Hexagonal Architecture. Ports define the interface through which the domain interacts with the outside world, while Adapters implement these interfaces and communicate with external systems.
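In code, a port is nothing more than an interface owned by the domain, and an adapter is an implementation of it that lives at the edge. A minimal sketch; the names and the billing example are illustrative, not from the workshop materials:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Port: an interface the domain owns; it knows nothing about databases.
interface OrderRepository {
    void save(String id, int totalCents);
    Optional<Integer> totalFor(String id);
}

// Domain logic depends only on the port, never on infrastructure classes.
class BillingService {
    private final OrderRepository orders;
    BillingService(OrderRepository orders) { this.orders = orders; }

    int totalWithTax(String orderId) {
        int total = orders.totalFor(orderId).orElseThrow();
        return total + total / 10;          // 10% tax, purely for illustration
    }
}

// Adapter: an infrastructure-side implementation; swap it for JDBC, JPA, etc.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Integer> store = new HashMap<>();
    public void save(String id, int totalCents) { store.put(id, totalCents); }
    public Optional<Integer> totalFor(String id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

Because BillingService sees only the port, the in-memory adapter used in tests and a production database adapter are interchangeable, which is the refactoring payoff the rest of the workshop builds toward.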
Moving Domain Code to Its Appropriate Location:
Refactor your domain code to ensure it is correctly placed in the core domain layer. You will learn how to separate domain logic from external dependencies, ensuring that business rules are isolated and unaffected by user interface or infrastructure changes.
Moving UI Code to Its Appropriate Location:
Discover how to refactor UI code by decoupling it from the domain logic and placing it in the appropriate layers. You’ll learn how to use the Ports and Adapters pattern to allow the user interface to communicate with the domain without violating architectural boundaries.
Using Refactoring Tools in IntelliJ:
Learn how to use IntelliJ’s powerful refactoring tools to streamline code movement. Techniques such as Extract Method, Move Method, Extract Delegate, and Extract Interface will be applied to refactor your codebase.
Applying DDD Software Principles:
We’ll cover essential Domain-Driven Design principles, such as Value Objects, Entities, Aggregates, and Domain Events.
Refactoring Techniques:
Learn various refactoring strategies to improve code structure: Extract Method, Move Method, Extract Delegate, Extract Interface, and Sprout Method and Class.
Verifying Code with ArchUnit:
Ensure consistency and package rules using ArchUnit, a tool for verifying the architecture of your codebase. You will learn how to write tests confirming your project adheres to the desired architectural guidelines, including separating layers and boundaries.
This workshop is perfect for developers who want to improve their understanding of Ports and Adapters Architecture, apply effective refactoring techniques, and leverage DDD principles for designing scalable and maintainable systems.
Java's generics syntax provides us with a means to increase the reusability of our code by allowing us to build software, particularly library software, that can work with many different types, even with limited knowledge of those types. The most familiar examples are the classes in Java's core collections API, which can store and retrieve data of arbitrary types without degenerating those types to java.lang.Object.
However, while the generics mechanism is very simple to use in simple cases such as using the collections API, it's much more powerful than that. Frankly, it can also be a little puzzling.
This session investigates the issues of type erasure, assignment compatibility in generic types, co- and contra-variance, through to bridge methods.
Course outline
Type erasure
Two approaches for generics and Java's design choice
How to break generics (and how not to!)
Maintaining concrete type at runtime
Assignment compatibility of generic types
What's the problem? Understanding Liskov substitution in generic types
Co-variance
Two syntax options for co-variance
Contra-variance
Syntax for contra-variance
Worked examples with co- and contra-variance
Building arrays from generic types
Effective use of functional interfaces
Bridge methods
Review of overloading requirements
Faking overloading in generic types
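The co-variance and contra-variance items above can be previewed with the classic producer-extends / consumer-super idiom. A minimal sketch, compatible with Java 11+:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class VariancePreview {
    // Co-variance: a List<Integer> is usable where a producer of Number is needed.
    static double sum(List<? extends Number> producer) {
        double total = 0;
        for (Number n : producer) total += n.doubleValue();
        return total;
    }

    // Contra-variance: a Consumer<Object> is usable where a consumer of Integer is needed.
    static void feed(List<Integer> source, Consumer<? super Integer> consumer) {
        source.forEach(consumer);
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3)));   // a List<Integer> is accepted

        List<Object> sink = new ArrayList<>();
        feed(List.of(4, 5), sink::add);              // a Consumer<Object> is accepted
        System.out.println(sink);
    }
}
```

Note that `sum` could not safely *add* to its parameter and `feed` could not safely *read* typed values from its consumer; those restrictions are exactly what the Liskov-substitution discussion in the outline explains.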
Setup requirements
This course includes extensive live coding demonstrations and attendees will have access to the code that's created via a git repo. The majority of the examples will work in any version of Java from version 11 onwards, but some might use newer library features. You can use any Java development environment / IDE that you like and no other tooling is required.
MLOps is a mix of Machine Learning and Operations. It is the new frontier for those interested in, or knowledgeable about, both of these disciplines. MLOps supports the operationalization of machine learning models developed by data scientists and delivers the models for processing via streaming or batch operations. Operationalizing machine learning models means nurturing your data from notebook to deployment through pipelines.
In this workshop, we will describe the processes:
Some of the technologies we will discover include:
Our exercises will include running and understanding MLflow.
Platform engineering is the latest buzzword in an industry that already has its fair share. But what is platform engineering? How does it fit in with DevOps and Developer Experience (DevEx)? And is this something your organization even needs?
In this session we will dive deep into the world of platform engineering. We will see what platform engineering entails, how it is the logical successor to a successful DevOps implementation, and how it aims to improve the developer experience. We will also uncover the keys to building robust, sustainable platforms for the future.
A large part of embracing DevOps involves embracing automation. Over the last decade we have seen the emergence of “as Code” — Build-as-Code, Configuration-as-Code and Infrastructure-as-Code. The benefits of utilizing such tools are huge! We can codify the state of the world around our applications, giving us the ability to treat everything that our code needs like we treat the code itself. Version control, release management, tagging, even rollbacks are now possible.
Terraform, an open-source tool from HashiCorp, allows us to build, control, and modify our infrastructure. Terraform exposes a domain-specific language (DSL) that we can use to express what our infrastructure should look like. Terraform can work with all the major cloud providers, including Amazon AWS, Google GCP, and Microsoft Azure.
We will be using AWS as our playground for this workshop.
Agenda
apply-ing terraform
data and output in your terraform scripts
Instructions
Please visit https://github.com/looselytyped/terraform-workshop/ for detailed instructions. They might seem a tad arduous but it's not as bad as it looks :)
One of the features that distinguished Java from the majority of mainstream languages at the time of its release is that it includes a platform-independent threading model.
The Java programming language provides core, low-level, features to control how threads interact: synchronized, wait/notify/notifyAll, and volatile. The specification also provides a “memory model” that describes how the programmer can share data reliably between threads. Using these low-level features presents no small challenge, and is error prone.
Contrary to popular expectation, code written this way is often not faster than code created using the high level java.util.concurrent libraries. Despite this, there are two good reasons for understanding these and the underlying memory model. One is that it's quite common to have code written in this way that must be maintained, and such maintenance is impractical without an understanding of these features. Second, when writing code using the higher level libraries, the memory model, or more specifically, the “happens-before” relationship still guides how and when we should use these libraries.
This workshop presents these features in a way designed to allow you to perform maintenance, and write new code without being dangerous.
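A small example of the kind of code the workshop dissects. Without the synchronized methods (and the happens-before edges they create), the two threads below could interleave their read-modify-write steps and lose updates:

```java
class SafeCounter {
    private int count = 0;

    // synchronized provides mutual exclusion AND a happens-before edge
    // between one thread's write and the next thread's read of count.
    synchronized void increment() { count++; }
    synchronized int get()        { return count; }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter.get());
    }
}
```

With the synchronized keyword removed, the final count may be anything up to 20,000; that non-determinism, and the memory-model reasoning that explains it, is the heart of the material.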
When the world wide web launched in 1993, it presented a revolutionary new way to globally share information. The revolution didn't stop there. The web soon became a platform for building, hosting, and distributing entire applications. Today most applications are built as web applications, yet the core capabilities of HTML remain mired in the Web 1.0 days. Ajax was the first of many “hacks” used to build web applications that delivered the rich, responsive user experience that rivaled traditional fat-client applications. Early JavaScript libraries and frameworks overcame browser incompatibilities and provided the first abstractions to hide the hacks, and today's frameworks are so powerful that conventional wisdom states they are the de facto best practice for building modern web applications. But at what cost?
We've gone full circle. Today's SPAs have more in common with the fat-client applications of the 90s (albeit with simplified deployment) than they do with the web. The modern UX of today's framework-driven SPAs is what users demand, thus we follow the ever-changing trends; but at what cost? Beyond the bloat, complexity, and ephemerality of the modern web-dev toolchain, modern web-dev practices have inadvertently abandoned the core ideas of the web that made the platform technologically, architecturally, and philosophically revolutionary.
Leading thinkers in the web development space have long proclaimed that “not everything should be a SPA,” yet the alternative of a Web 1.0 vanilla-HTML application has very limited utility in 2024. Are these our only options, or does a “third way” exist?
This session introduces that “third way” based on the revolutionary ideas that empowered the web. A meaningful, practical, and proven alternative to SPA frameworks providing a simpler and more lightweight approach to building applications on the Web and beyond without sacrificing the UX.
Web applications built following this “third way” boast more evolvability, longevity, and simplicity. SPAs will continue to have their place, but good software engineering is about using the right tool for the job. After attending this session, you will have more than just a hammer in your toolbox.
MCP, or Model Context Protocol, is a standardized framework that allows AI agents to seamlessly connect with external data sources, APIs, and tools. Its main purpose is to make AI agents more intelligent and context-aware by giving them real-time access to live information and actionable capabilities beyond their built-in knowledge.
Join AI technologist, author, and trainer Brent Laster as we learn what MCP is, how it works, and how it can be used to create AI agents that can work with any process that implements MCP. You'll work with MCP concepts, coding, servers, etc. through hands-on labs that teach you how to use it with AI agents.
With MCP, developers can easily integrate AI agents with a wide variety of systems, from internal business databases to third-party services, without having to build custom integrations for each use case. MCP servers act as gateways, exposing specific actions and knowledge to the AI agent, which can then dynamically discover and use these capabilities as needed. This approach streamlines the process of adding new functionalities to AI agents and reduces ongoing maintenance.
MCP is particularly useful for scenarios where AI agents need up-to-date information or need to perform actions in external systems, such as customer support bots fetching live ticket data, enterprise assistants accessing knowledge bases, or automation agents processing transactions. By leveraging MCP, organizations can create more adaptable, powerful, and enterprise-ready AI solutions that respond to real-world business needs in real time.
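To illustrate the gateway role described above, here is a toy, dependency-free sketch of a server that advertises named tools for discovery and dispatches calls to them. This is not the official MCP SDK; the class, method names, and the single-string argument shape are all invented for illustration of the discover-then-call pattern.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Conceptual sketch of an MCP-style server: register capabilities,
// let an agent discover them, then dispatch invocations by name.
public class ToyMcpServer {
    // tool name -> handler (hypothetical single-string-argument shape)
    private final Map<String, Function<String, String>> tools = new LinkedHashMap<>();

    public void registerTool(String name, Function<String, String> handler) {
        tools.put(name, handler);
    }

    // Analogous to MCP's tool listing: agents discover capabilities dynamically
    public Set<String> listTools() {
        return tools.keySet();
    }

    // Analogous to a tool call: dispatch a request to the named tool
    public String callTool(String name, String arg) {
        Function<String, String> handler = tools.get(name);
        if (handler == null) throw new IllegalArgumentException("unknown tool: " + name);
        return handler.apply(arg);
    }

    public static void main(String[] args) {
        ToyMcpServer server = new ToyMcpServer();
        // Stand-in for a live ticket-system lookup behind the gateway
        server.registerTool("ticket-status", id -> "Ticket " + id + " is OPEN");
        System.out.println(server.listTools());
        System.out.println(server.callTool("ticket-status", "1234"));
    }
}
```

The point of the sketch is the indirection: the agent never needs compile-time knowledge of the ticket system, only of the discovery and call protocol.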
View Workshop Requirements »
This comprehensive presentation explores the evolution of Java from version 8 through 25, demonstrating how the language has transformed from an object-oriented platform into a modern, multi-paradigm programming language. Starting with Java 8's functional programming revolution—including lambdas, streams, and Optional—the presentation traces Java's journey through significant milestones like records, pattern matching, virtual threads, and data-oriented programming. Through practical code examples from a real repository, attendees will see how these features work together to create more expressive, maintainable, and performant applications.
The presentation begins with Java 8's game-changing features, using the ProcessDictionaryV2 example to showcase functional programming patterns, higher-order functions, and advanced Stream API usage including collectors like groupingBy and teeing. It then progresses through Java 9-11's quality-of-life improvements (var, HTTP Client, String enhancements), Java 12-17's language evolution (text blocks, records, pattern matching, sealed classes), and Java 18-21's modern capabilities (virtual threads for massive scalability, sequenced collections). Special attention is given to Data-Oriented Programming, demonstrating how records, sealed classes, and pattern matching combine to create a new programming paradigm. The presentation also covers cutting-edge features like unnamed variables (_) and looks ahead to Java 25 LTS with scoped values and performance improvements. Throughout, best practices are emphasized, including embracing immutability, leveraging pattern matching for cleaner code, using virtual threads for I/O-bound operations, and adopting modern APIs over legacy alternatives. All examples are drawn from the accompanying repository, providing attendees with working code they can explore and adapt for their own projects.
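As a small taste of how these features compose, here is a self-contained sketch (assuming Java 21+; not taken from the session's accompanying repository) combining records, a sealed interface, exhaustive switch pattern matching, and a Java 8 stream collector:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ModernJavaDemo {
    // Sealed interface (Java 17) + records (Java 16): a closed, immutable hierarchy
    sealed interface Shape permits Circle, Square {}
    record Circle(double r) implements Shape {}
    record Square(double side) implements Shape {}

    // Pattern matching for switch (Java 21): the compiler verifies exhaustiveness
    // over the sealed hierarchy, so no default branch is needed
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.r() * c.r();
            case Square q -> q.side() * q.side();
        };
    }

    public static void main(String[] args) {
        var shapes = List.of(new Circle(1), new Square(2), new Square(3));
        // Stream API + groupingBy (Java 8): bucket shapes by their concrete type
        Map<String, Long> counts = shapes.stream()
            .collect(Collectors.groupingBy(s -> s.getClass().getSimpleName(),
                                           Collectors.counting()));
        System.out.println(counts);
        System.out.println(area(new Square(2))); // prints 4.0
    }
}
```

This is the data-oriented programming combination in miniature: data modeled as transparent records, variation captured in a sealed hierarchy, and behavior expressed as exhaustive pattern matches over it.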
View Workshop Requirements »
If you are getting tired of the appearance of new types of databases… too bad. We are increasingly relying on a variety of data storage and retrieval systems for specific purposes. Data does not have a single shape, and indexing strategies that work for one are not necessarily good fits for others. So after hierarchical, relational, object, graph, column-oriented, document, temporal, append-only, and everything else, get ready for vector databases to assist in the systematization of machine learning systems.
This will be an overview of the benefits of vector databases as well as an introduction to the major players.
We will focus on open source versus commercial players, hosted versus local deployments, and the attempts to add vector search capabilities to existing storage systems.
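To ground what these systems do under the hood, here is a brute-force sketch of the core operation: storing embedding vectors and answering a cosine-similarity nearest-neighbor query. The class is invented for illustration; production vector databases replace the linear scan shown here with approximate indexes such as HNSW or IVF.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TinyVectorStore {
    private final Map<String, double[]> vectors = new LinkedHashMap<>();

    public void put(String id, double[] v) { vectors.put(id, v); }

    // Cosine similarity: dot product normalized by vector magnitudes
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Brute-force k-nearest-neighbor query: score every vector, keep the top k
    public List<String> query(double[] q, int k) {
        return vectors.entrySet().stream()
            .sorted((x, y) -> Double.compare(cosine(q, y.getValue()),
                                             cosine(q, x.getValue())))
            .limit(k)
            .map(Map.Entry::getKey)
            .toList();
    }

    public static void main(String[] args) {
        TinyVectorStore store = new TinyVectorStore();
        store.put("cat", new double[]{1.0, 0.1});
        store.put("dog", new double[]{0.9, 0.2});
        store.put("car", new double[]{0.1, 1.0});
        System.out.println(store.query(new double[]{1.0, 0.0}, 2)); // [cat, dog]
    }
}
```

The O(n) scan is exactly what dedicated vector databases exist to avoid at scale; the trade-offs between exact and approximate search are part of what distinguishes the players covered here.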
We will cover: