Daniel is a programmer, consultant, instructor, speaker, and recent author. With over 20 years of experience, he works with private, educational, and government institutions. He is also currently a speaker on the No Fluff Just Stuff tour. Daniel loves JVM languages like Java, Groovy, and Scala, but also dabbles with non-JVM languages like Haskell, Ruby, Python, LISP, C, and C++. He is an avid Pomodoro Technique practitioner and makes every attempt to learn a new programming language every year. For downtime, he enjoys reading, swimming, Legos, football, and barbecuing.
In today’s data-driven world, the ability to process and analyze data in real time is no longer a luxury—it’s a necessity. Apache Flink, a powerful stream processing framework, has emerged as a game-changer for handling high-throughput, low-latency data applications.
In this session, you’ll gain a clear understanding of what Apache Flink is, how it works, and why it’s become a cornerstone for modern data infrastructure. We’ll explore key features such as its robust stream and batch processing capabilities, event-time handling, stateful computations, and fault tolerance. You’ll also discover how Flink integrates seamlessly with popular systems like Kafka, Kubernetes, and major cloud platforms.
Whether you’re working with real-time analytics, event-driven applications, or machine learning pipelines, Apache Flink provides the scalability and flexibility needed to turn massive streams of data into actionable insights. Join us to see why Flink is critical to modern data ecosystems and learn how to start leveraging its power in your projects.
This session will focus on data governance and making data available within your enterprise. Who owns the data, how do we obtain the data, and what does governance look like?
Join us for an in-depth exploration of cutting-edge messaging styles in your large domain.
Here, we will discuss the messaging styles you can use in your business.
In this session, we will discuss architectural concerns regarding security. How do microservices communicate with one another securely? What are some of the checklist items that you need?
We take a look at another facet of architectural design: how we develop and maintain transactions in an architecture. Here, we will discuss some common patterns for transactions.
Domain Driven Design has been one of the major cornerstones of large system design for many years. It has been in the zeitgeist as of late, especially when it comes to the terms bounded context and microservices. This full-day class introduces you to Domain Driven Design and why it is important. We will cover the design and patterns, and discuss subdomains, context mapping, tools, and management.
Our workshop will not only introduce you to the terms; we will also cover planning and work through design challenges so we can discuss the trade-offs that come with each design. We will also cover some of the more modern patterns that came out of DDD, like CQRS, Data Mesh, REST, and more.
Event-driven architecture (EDA) is a design principle in which the flow of a system’s operations is driven by the occurrence of events instead of direct communication between services or components. There are many reasons why EDA is a standard architecture for many moderate to large companies. It offers a history of events with the ability to rewind, along with the ability to perform real-time data processing in a scalable and fault-tolerant way. It provides real-time extract-transform-load (ETL) capabilities for near-instantaneous processing. EDA can be used as the communication channel in a microservice architecture or in any other architecture.
In this workshop, we will discuss the prevalent principles regarding EDA, and you will gain hands-on experience performing and running standard techniques.
Domain Driven Design has been guiding large development projects since 2003, when the seminal book by Eric Evans came out. Domain Driven Design is split into two parts: Strategic and Tactical. One of the issues is that the Strategic part becomes so involved and intense that we lose focus on implementing these ideas. This workshop reframes the material as topic pairs. For example, when we create a bounded context, is that a microservice or part of the subdomain? When we create a domain event, what does that eventually become? How do other tactical patterns fit into what we decide in the strategic phase?
In this workshop, we will break it down into pairs of topics.
In this workshop, we will perform the following activities:
Domain Driven Design has been guiding large development projects since 2003, when the seminal book by Eric Evans came out. Domain Driven Design is split into two parts: Strategic and Tactical. One of the issues is that the Strategic part becomes so involved and intense that we lose focus on implementing these ideas. This presentation reframes the material as topic pairs. For example, when we create a bounded context, is that a microservice or part of the subdomain? When we create a domain event, what does that eventually become? How do other tactical patterns fit into what we decide in the strategic phase?
In this presentation, we will break it down into pairs of topics.
In this presentation, we will introduce neural networks and take it slowly. We first describe the process of learning machine learning. We will then go into the tools typically involved with machine learning and neural networks. The core of this presentation is taking small steps to achieve a big goal: an understanding of a neural network. This presentation assumes that the audience knows nothing about the internals of machine learning.
Join us for a hands-on workshop, GitOps: From Commit to Deploy, where you’ll explore the entire lifecycle of modern application deployment using GitOps principles.
We’ll begin by committing an application to GitHub and watching as your code is automatically built through Continuous Integration (CI) and undergoes rigorous unit and integration tests. Once your application passes these tests, we’ll build container images that encapsulate your work, making it portable, secure, and deployment-ready. Next, we’ll push these images to a container registry in preparation for deployment.
From there, you will learn how to sync your application to a staging Kubernetes cluster using ArgoCD, a powerful continuous delivery (CD) tool that automates and streamlines the deployment process. Finally, we’ll demonstrate a canary deployment in a production environment with ArgoCD, allowing for safe, gradual rollouts that minimize risk.
By the end of this workshop, you’ll have practical experience with the tools and techniques behind GitOps deployments, so you can take this knowledge back and set up your own deployments at work.
Apache Iceberg is quickly becoming the foundation of the modern Data Lakehouse, offering ACID guarantees, schema evolution, time travel, and multi-engine compatibility over cheap object storage. We’ll work with Iceberg hands-on and show how to build durable, versioned, trustworthy datasets directly from streaming pipelines.
You’ll see Flink writing to Iceberg, Kafka events flowing into governed tables, and how snapshots let you query “what the data looked like yesterday.” We’ll compact, rewind, evolve schemas, roll back mistakes, and even handle CDC-style updates — all in real time and all powered by open source.
Whether you’re building for Data Mesh, Lakehouse, or stream-batch unification, this talk will show you how to use Iceberg to defend your data and enable self-serve, analytical infrastructure at scale.
For those still grappling with generics, this session will attempt to clear the air. What are wildcards? What is extends? What is super? What is covariance? What is contravariance? What is invariance? What is erasure? Why and when do I need this?
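To set the stage before the session, here is a minimal sketch of the PECS rule (producer-extends, consumer-super). The method and variable names are illustrative only and are not taken from the talk.

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardsDemo {

    // Covariance: a producer we can only read from (producer extends).
    static double sum(List<? extends Number> numbers) {
        double total = 0;
        for (Number n : numbers) {
            total += n.doubleValue();
        }
        return total;
    }

    // Contravariance: a consumer we can only write to (consumer super).
    static void fill(List<? super Integer> sink) {
        for (int i = 0; i < 3; i++) {
            sink.add(i);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(sum(ints));   // List<Integer> accepted where List<? extends Number> is expected

        List<Number> numbers = new ArrayList<>();
        fill(numbers);                   // List<Number> accepted where List<? super Integer> is expected
        System.out.println(numbers);
    }
}
```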
Generics, or parameterized types, are one of the more painful items in any statically typed language on the JVM. This presentation sets out to overcome some of these hurdles and clear up some of these confusing terms. We will cover the following:
HashiCorp Vault stores encrypted secrets securely. You can store anything you want in Vault, including API keys, passwords, and certificates. Vault can also manage dynamic secrets, where it negotiates with a cloud service on your behalf without direct interaction with your API keys. HashiCorp Vault is a well-thought-out “bank” of information that handles storage, encryption, leasing, and sealing.
This workshop will explore the principles of the Ports and Adapters pattern (also called the Hexagonal Architecture) and demonstrate how to refactor legacy code or design new systems using this approach. You’ll learn how to organize your domain logic and move UI and infrastructure code into appropriate places within the architecture. The session will also cover practical refactoring techniques using IntelliJ and how to apply Domain Driven Design (DDD) principles to ensure your system is scalable, maintainable, and well-structured.
What is Hexagonal Architecture?
Understand the fundamental principles of Hexagonal Architecture, which helps isolate the core business logic (the domain) from external systems like databases, message queues, or user interfaces. This architecture is designed to easily modify the external components without affecting the domain.
What are Ports and Adapters?
Learn the key concepts of Ports and Adapters, the core elements of Hexagonal Architecture. Ports define the interface through which the domain interacts with the outside world, while Adapters implement these interfaces and communicate with external systems.
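As a rough sketch of the idea (the OrderRepository and JdbcOrderRepository names are hypothetical and not part of the workshop materials), a port is an interface owned by the domain, and an adapter implements it at the edge; in a real project each type would live in its own package on either side of the boundary.

```java
// Port: an interface owned by the domain, describing what the domain needs.
interface OrderRepository {
    void save(Order order);
}

// Domain object, free of any infrastructure concerns.
record Order(String id, long amountInCents) {}

// Adapter: lives at the edge, implements the port, and talks to the outside world.
class JdbcOrderRepository implements OrderRepository {
    private final javax.sql.DataSource dataSource;

    JdbcOrderRepository(javax.sql.DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void save(Order order) {
        // JDBC details stay here, behind the port, so the domain never sees them.
        try (var connection = dataSource.getConnection();
             var statement = connection.prepareStatement(
                     "insert into orders (id, amount_in_cents) values (?, ?)")) {
            statement.setString(1, order.id());
            statement.setLong(2, order.amountInCents());
            statement.executeUpdate();
        } catch (java.sql.SQLException e) {
            throw new IllegalStateException("Could not save order", e);
        }
    }
}
```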
Moving Domain Code to Its Appropriate Location:
Refactor your domain code to ensure it is correctly placed in the core domain layer. You will learn how to separate domain logic from external dependencies, ensuring that business rules are isolated and unaffected by user interface or infrastructure changes.
Moving UI Code to Its Appropriate Location:
Discover how to refactor UI code by decoupling it from the domain logic and placing it in the appropriate layers. You’ll learn how to use the Ports and Adapters pattern to allow the user interface to communicate with the domain without violating architectural boundaries.
Using Refactoring Tools in IntelliJ:
Learn how to use IntelliJ’s powerful refactoring tools to streamline code movement. Techniques such as Extract Method, Move Method, Extract Delegate, and Extract Interface will be applied to refactor your codebase.
Applying DDD Software Principles:
We’ll cover essential Domain-Driven Design principles, such as Value Objects, Entities, Aggregates, and Domain Events.
Refactoring Techniques:
Learn various refactoring strategies to improve code structure: Extract Method, Move Method, Extract Delegate, Extract Interface, and Sprout Method and Class.
Verifying Code with Arch Unit:
Ensure consistency and package rules using ArchUnit, a tool for verifying the architecture of your codebase. You will learn how to write tests confirming your project adheres to the desired architectural guidelines, including the separation of layers and boundaries.
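For orientation, here is a hedged example of the kind of rule we will write; the base package and layer names are placeholders for your own project layout.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class HexagonalRulesTest {

    @Test
    void domainDoesNotDependOnAdapters() {
        // Import the classes under test; "com.example.app" is a stand-in package.
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        // The domain must not reach out to adapter or infrastructure code.
        ArchRule rule = noClasses()
                .that().resideInAPackage("..domain..")
                .should().dependOnClassesThat()
                .resideInAnyPackage("..adapter..", "..infrastructure..");

        rule.check(classes);
    }
}
```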
This workshop is perfect for developers who want to improve their understanding of Ports and Adapters Architecture, apply effective refactoring techniques, and leverage DDD principles for designing scalable and maintainable systems.
The road has been difficult for anyone who has attempted any database testing over the last 20 years. One solution we tried was installing a test database. Oftentimes, that has been hard to maintain, particularly when that test database is shared. Another solution is to use DBUnit on a local system or, possibly, a shared system to set up the database as needed, test against it, and tear it down. That, too, was very hard to maintain. For many, it was just deploying your application and running end-to-end tests. If it works, it works, and we move on. With the advent of containerization, we now have something substantially better: Testcontainers! Testcontainers gives us the power to bring up the exact version of the database we need to test against.
With Testcontainers, we can programmatically set up the required database, possibly insert some data, and run our tests against it. Behind the scenes, Testcontainers will pull and cache the database image so subsequent runs are even faster. There is a multitude of databases to choose from, and it is supported in multiple languages like Java, Go, Ruby, and more.
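As a hedged illustration of the flow, the sketch below spins up a throwaway PostgreSQL container and runs a query against it; the image tag, table, and data are examples only, not material from the session.

```java
import org.testcontainers.containers.PostgreSQLContainer;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PostgresContainerDemo {
    public static void main(String[] args) throws Exception {
        // Pin whichever version production runs; "postgres:16-alpine" is just an example.
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16-alpine")) {
            postgres.start();
            try (Connection conn = DriverManager.getConnection(
                         postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
                 Statement stmt = conn.createStatement()) {
                stmt.execute("create table greetings (message varchar(50))");
                stmt.execute("insert into greetings values ('hello from a container')");
                try (ResultSet rs = stmt.executeQuery("select message from greetings")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("message"));
                    }
                }
            }
        } // container is stopped and removed here
    }
}
```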
Remember in The Matrix, when Neo said “I know Kung Fu” and Morpheus said “Show me”? Well, we will be doing that, except with IntelliJ. In this dojo, we will use all the wonderful keymappings available in IntelliJ, and we will make you a lean, mean coding machine!
In this dojo, you will master the art of:
“Show no weakness, Show no mercy”
This presentation is the Dagobah of efficient editing and flow. Come only what you take with you.
Most efficient you will be, when keyboard tricks learned. You'll see. Hmmmm. You must unlearn what you have learned. A Jedi's power comes from knowledge of the tools used. Luminous beings are we… not crude typists. Mouse is your weakness. Learn to use more of the keyboard, you will.
Learn:
Join us for a session on MLOps, where we delve into the transformative practices and tools that bridge the gap between machine learning development and production deployment. Discover how MLOps enhances collaboration, reproducibility, and scalability in machine learning projects, ensuring seamless transitions from data engineering to model monitoring. Learn about the latest technologies, including Docker, Kubernetes, and MLflow, and explore real-world case studies highlighting best practices and common challenges. Whether you’re a data scientist, engineer, or manager, this session will equip you with the knowledge to streamline your ML workflows and drive impactful business outcomes.
This presentation will assume that the attendees have little to no knowledge of creating and operationalizing ML Models.
In this presentation, we will walk through a rigorous list of what is required to be successful in the MLOps space.
Then, we will discuss the technologies we can use to piece it all together:
Some of the technologies we will discover include:
Since 1994, the original Gang of Four Design Patterns book, “Design Patterns: Elements of Reusable Object-Oriented Software,” has helped developers recognize common patterns in development. The book was originally written in C++, but there have been books that translate the original design patterns into their preferred language. One feature of the Gang of Four design patterns that has particularly stuck with me is that they are, for the most part, testable. With the exception of Singleton, all the patterns are unit-testable. Design Patterns are also our common developer language. When a developer says, “Let's use the Decorator Pattern,” we know what is meant.
What's new, though, is functional programming, so we will also discuss how these patterns change in our new modern functional programming world. For example, functional currying in place of the builder pattern, using an enum for a singleton, and reconstructing the state pattern using sealed interfaces. We will cover so much more, and I think you will be excited about this topic and putting it into practice on your codebase.
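As a taste of where the session goes, here is a small, hedged sketch of two of those ideas: an enum-based singleton and a state pattern built from a sealed interface with records and switch pattern matching (Java 21). The Configuration and OrderState types are invented for illustration, not examples from the talk.

```java
// Singleton as an enum: serialization-safe and trivially thread-safe.
enum Configuration {
    INSTANCE;

    String get(String key) {
        // Illustrative only; a real implementation would read a file or a config service.
        return System.getenv().getOrDefault(key, "default");
    }
}

// State pattern reimagined with a sealed interface and records.
sealed interface OrderState permits Draft, Submitted, Shipped {}
record Draft() implements OrderState {}
record Submitted(String confirmationNumber) implements OrderState {}
record Shipped(String trackingNumber) implements OrderState {}

class StatePatternDemo {
    static String describe(OrderState state) {
        // The switch is exhaustive because OrderState is sealed; no default branch needed.
        return switch (state) {
            case Draft d -> "still being edited";
            case Submitted s -> "submitted with confirmation " + s.confirmationNumber();
            case Shipped sh -> "on its way, tracking " + sh.trackingNumber();
        };
    }

    public static void main(String[] args) {
        System.out.println(Configuration.INSTANCE.get("REGION"));
        System.out.println(describe(new Submitted("C-123")));
    }
}
```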
Java advances quickly. It is incredible how much incremental change accumulates over time. JDK 17 was released back in 2021, and as of 2025 we are several releases beyond it. In this session, I will take some select JEPs (JDK Enhancement Proposals) and demonstrate what they are and their use cases. Then you can stay ahead of the curve and have all the information you need to sell and demand next-generation Java for your work or open-source initiative.
super()
switch expressions

In this day-long workshop, we will walk through a catalog of all the common architectural design patterns. For each design pattern, we will run docker-compose files that demonstrate the strengths and weaknesses of those design patterns. So you have a first-hand, full-on, and highly engaged full-day workshop to give you the knowledge you need to make critical architectural choices.
We will cover:
In this presentation, we will discuss Kafka Connect. Kafka Connect is an open-source framework that ships as part of Apache Kafka, with a large ecosystem of connectors from Confluent and the community. Kafka Connect provides a way to move data from a data store, as a source, and stream or batch that information into Kafka. Kafka Connect also gives us a way to take information from Kafka and send it to another data store, a sink. Sources and sinks can connect Kafka to and from various databases and message queues.
What this presentation will entail:
At the end of this presentation, we will have a live demonstration of a data pipeline in action, moving data between data stores.
Kafka is a “must know.” It is the data backplane of the modern microservice architecture. It's now being used as the first persistence layer of microservices and for most data aggregation jobs. As such, Kafka has become an essential product in the microservice and big data world.
This workshop is about getting started with Kafka. We will discuss what it is and what its components are, go through the CLI tools, and show how to program a Producer and a Consumer.
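As a preview of the programming portion, here is a minimal, hedged producer sketch; the broker address and the orders topic are placeholders for whatever the workshop environment provides.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");              // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send one record and log where it landed.
            producer.send(new ProducerRecord<>("orders", "order-1", "{\"amount\": 42}"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("wrote to %s-%d@%d%n",
                                    metadata.topic(), metadata.partition(), metadata.offset());
                        }
                    });
        } // close() flushes any buffered records
    }
}
```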
Kafka is more than just a messaging queue with storage. It goes beyond that, and with open-source technology from Confluent, it has become a full-fledged data ETL and data streaming ecosystem.
When we utter the word Kafka, it is no longer just one component but can be an entire data pipeline ecosystem that transforms and enriches data from source to sink. It offers different ways to handle that data as well. In this presentation, we define:
We then discuss ksqlDB, a SQL layer built upon Kafka Streams that provides a simple query language to perform streaming operations.
There are multiple elements to Kubernetes, where each component seems like a character in a book: pods, services, deployments, secrets, jobs, config maps, and more. In this presentation, we focus on the security aspect of Kubernetes and the components involved, particularly RBAC and ServiceAccounts: what they are and what they do. We discuss etcd and Secrets. We will also discuss other options for security in Kubernetes.
Service Accounts
Secrets
Kubernetes API
Authentication
Authorization
RBAC
Roles and Cluster Roles
This workshop builds an entire event-driven data pipeline with Machine Learning and Kafka. Starting from Kafka, where we use producers or Kafka Connect to generate information, we will then use Kafka Streams to apply a machine learning model and make business decisions.
This intensive lab will start by integrating sources into our backplane, then training our models, and finally operationalizing the model using Kafka Streams. We will then create result topics that we can read in as a report and use to display visualizations of our data. The result will also be scalable and fault-tolerant.
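A rough sketch of the shape of that topology follows; the topic names and the score function are stand-ins for the real model we operationalize in the lab, not the lab's actual code.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class ScoringTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-scoring");        // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> transactions = builder.stream("transactions");  // hypothetical topic
        transactions
                .mapValues(ScoringTopology::score)   // apply the (stand-in) model to each event
                .to("scored-transactions");          // downstream consumers read the decisions

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }

    // Stand-in for a real model loaded from MLflow or a serialized artifact.
    static String score(String transactionJson) {
        return transactionJson.contains("\"amount\": 9999") ? "FRAUD" : "OK";
    }
}
```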
MLOps is a mix of Machine Learning and Operations. It is the new frontier for those interested in, or knowledgeable about, both disciplines. MLOps supports the operationalization of machine learning models developed by data scientists and delivers the models for processing via streaming or batch operations. Operationalizing machine learning models means nurturing your data from notebook to deployment through pipelines.
In this workshop, we will describe the processes:
Some of the technologies we will discover include:
Our exercises will include running and understanding MLflow.
A monumental milestone was the book Design Patterns: Elements of Reusable Object-Oriented Software, also known as the “Gang of Four” book, released in 1994. Both the book and Java itself are products of their times, using 1990s languages and tools. It's now the 2020s, and we have seen some things. Java now uses lambdas and streams extensively. We now desire immutability. Java's garbage collection has improved by leaps and bounds. So, while Java has undoubtedly changed, how about our design patterns? That's why we will discuss one of the patterns from the “Gang of Four” book: the Interpreter Pattern.
How do new features like sealed classes, pattern matching, records, and enhanced switch statements in today's Java change the Interpreter pattern? Why is the interpreter so sought after among functional programmers? We will review the classic pattern and its purpose, then replace it with a modern alternative using the latest Java 21 techniques. We discuss its importance and use, and how to wire in concepts like programs and monads, thus pushing our knowledge to Java's edge!
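As a small, hedged preview of that modern alternative, here is a tiny expression language built with sealed interfaces, records, and switch pattern matching; the Expr hierarchy is illustrative rather than the exact example used in the talk.

```java
// A tiny expression language: the classic Interpreter pattern rebuilt with
// sealed interfaces, records, and exhaustive switch pattern matching (Java 21).
sealed interface Expr permits Num, Add, Mul {}
record Num(int value) implements Expr {}
record Add(Expr left, Expr right) implements Expr {}
record Mul(Expr left, Expr right) implements Expr {}

class Interpreter {
    static int eval(Expr expr) {
        // Record deconstruction patterns; the compiler checks exhaustiveness for us.
        return switch (expr) {
            case Num(int value) -> value;
            case Add(Expr left, Expr right) -> eval(left) + eval(right);
            case Mul(Expr left, Expr right) -> eval(left) * eval(right);
        };
    }

    public static void main(String[] args) {
        // (2 + 3) * 4
        Expr program = new Mul(new Add(new Num(2), new Num(3)), new Num(4));
        System.out.println(eval(program)); // 20
    }
}
```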
We have been using JUnit and doing TDD for years, but you can take testing further. In this session, we will discuss some tools you absolutely need for testing your code outside of the regular stack you currently use.
Java’s evolution is remarkable, and the leap from JDK 17 to the current version brings a wealth of powerful features to elevate your projects. Join us for an exciting session to explore select JEPs (JDK Enhancement Proposals) introduced up to today, diving into their use cases and practical benefits for your work or open-source initiatives.
What You’ll Learn:
How to enable and utilize advanced Java features introduced in JDK 23.
Real-world demonstrations of cutting-edge updates, including:
super(): Test invariants without constructing objects.
switch expressions: We will discuss where we are with pattern matching as well as dealing with primitives.
Why Attend?
Learn how to advocate for and implement the latest Java tools and practices in your organization. Gain the knowledge you need to sell the value of next-generation Java and stay at the forefront of software development.
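To make the features listed above a little more concrete, here is a hedged sketch; note that flexible constructor bodies and primitive patterns in switch are preview features on recent JDKs, so --enable-preview may be required, and the class names here are invented for illustration.

```java
// Flexible constructor bodies (preview): validate arguments before calling super(),
// failing fast without ever constructing the object.
class Account {
    final long balance;
    Account(long balance) { this.balance = balance; }
}

class CheckedAccount extends Account {
    CheckedAccount(long balance) {
        if (balance < 0) {
            throw new IllegalArgumentException("balance must be non-negative");
        }
        super(balance);
    }
}

// Pattern matching in switch; the guard shows the general shape, and recent previews
// extend patterns to primitive types as well.
class SwitchDemo {
    static String describe(Object input) {
        return switch (input) {
            case Integer i when i > 0 -> "a positive integer: " + i;
            case String s -> "a string of length " + s.length();
            case null -> "nothing at all";
            default -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(42));
        System.out.println(describe("hello"));
    }
}
```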
JDK 11 saw the advent of a new HTTP Client, an important new API for calling remote RESTful endpoints. This presentation will focus on the HTTP Client and how to maximize its use.
We will cover
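As a quick orientation before the session, here is a minimal sketch of the client in both synchronous and asynchronous form; the URL is a placeholder for any JSON endpoint you have available.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HttpClientDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        // Placeholder endpoint; substitute your own service.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://httpbin.org/get"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // Synchronous call: blocks until the response arrives.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());

        // The same request issued asynchronously, composing on CompletableFuture.
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body)
                .thenAccept(System.out::println)
                .join();
    }
}
```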
Hopefully, we have started moving on from Java 8. One of the great benefits of doing so, and there are many, is the module system. It is a controversial topic indeed, but I hope in this presentation to make some solid arguments that it is an essential part of our development.
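As a small, hedged example of what the module system asks of us, a hypothetical module declaration might look like this; the module and package names are invented for illustration.

```java
// module-info.java
module com.example.inventory {
    requires java.net.http;                 // depend on another module explicitly
    exports com.example.inventory.api;      // only this package is visible to consumers
    // everything else stays strongly encapsulated, even from reflection
}
```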
One of the other essential tasks regarding the JVM is monitoring. How much stack and heap is your JVM using? How saturated is your CPU? How many threads are in use, and what kinds of threads are they? What does your garbage collector look like? How can you tap into the JVM and monitor other aspects of it? All these questions are essential to ask, since the administrators of your Java application will often need these values to deploy and monitor your application.
We will look at the following utilities in detail: VisualVM, Java Flight Recorder, and Java Management Extensions. We will also look at some of the important values when instrumenting your JVM and how to gauge usage so that you can provide your containers with the correct resource information when deploying onto a platform such as Kubernetes.
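Alongside those tools, the platform MXBeans expose the same numbers programmatically; the minimal sketch below (not part of the session materials) prints heap, thread, and processor figures for the current JVM.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Heap usage, the same figure VisualVM charts over time.
        System.out.printf("heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));
        // Live and peak thread counts.
        System.out.printf("live threads: %d (peak %d)%n",
                threads.getThreadCount(), threads.getPeakThreadCount());
        // Useful when sizing container CPU requests.
        System.out.printf("available processors: %d%n",
                Runtime.getRuntime().availableProcessors());
    }
}
```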
Threading has always been tough. Even with new frameworks that can make it easy, sometimes we don't have them at our disposal. This full-day session focuses on threading and the various synchronizers in Java. We will have material you can use as a reference and challenges that will help you remember some pitfalls to avoid.
volatile
Phaser
CountDownLatch
Threading has always been tough. Even with new frameworks that can make it easy, sometimes we don't have those frameworks at our disposal. This full-day session focuses on threading and the various synchronizers in Java. We will have material that you can use as a reference and challenges that will help you remember some pitfalls to avoid.
We will cover the following items:
volatile
CyclicBarrier
Phaser
CountDownLatch
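As a small taste of the synchronizers listed above, here is a hedged CountDownLatch sketch; the worker count and messages are arbitrary and only illustrate the count-down-then-await handshake.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished");
                done.countDown();          // each worker signals completion
            }).start();
        }

        done.await();                      // block until every worker has counted down
        System.out.println("all workers done");
    }
}
```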
There is a new way of threading, which means it is time to prepare. Project Loom has introduced Java Virtual Threads, which are now available in Java 21. Virtual Threads are lightweight threads meant to perform quick operations without the need to procure long-running OS threads, which can prove expensive. In this presentation, we will learn how to use these threads, what they mean in relation to the rest of the Java API, and what they mean for third-party libraries.
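As a hedged sketch of the basic API (assuming Java 21), both the per-task executor and the direct thread factory are shown below; the sleeping task body is a stand-in for real blocking I/O.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // One virtual thread per task; blocking inside each task is cheap.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(100);   // simulated blocking I/O
                        return i;
                    }));
        } // close() waits for the submitted tasks to finish

        // Or start a single virtual thread directly.
        Thread t = Thread.ofVirtual().start(() -> System.out.println("hello from a virtual thread"));
        t.join();
    }
}
```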
Future and Reactive

Istio provides functionality to your pods that you don't necessarily want to include in your application. This is based on the sidecar pattern, with Envoy providing the sidecar functionality.
If you build your Scala application through Test-Driven Development, you’ll quickly see the advantages of testing before you write production code. This hands-on book shows you how to create tests with ScalaTest and Specs2—two of the best testing frameworks available—and how to run your tests in the Simple Build Tool (SBT), which is designed specifically for Scala projects.
By building a sample digital jukebox application, you’ll discover how to isolate your tests from large subsystems and networks with mocking code, and how to use the ScalaCheck library for automated specification-based testing. If you’re familiar with Scala, Ruby, or Python, this book is for you.