After 10 years as a Java developer, Doug transitioned to working on Azul's Java Virtual Machine.
Today, Doug continues building performance tools for developers, working on Datadog's Java Application Performance Monitoring.
While Doug still enjoys developing software, his true passion is sharing his
interest in low-level details and JVM performance with others.
The JVM can perform some marvelous feats of optimization, but for most developers its inner workings remain a mystery.
In this talk, we'll walk through how the JVM optimizes a seemingly simple piece of Java code, starting with how the JVM decides what to compile and then going step by step through the different optimizations it performs. Along the way, you'll learn how the JVM makes your code run fast, as well as some things to avoid to keep it running fast.
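As a minimal sketch of the kind of code in question (the class name, method, and loop count here are illustrative, not taken from the talk), a method that is called many times becomes "hot", and HotSpot's JIT compiler turns its bytecode into optimized machine code. Running with the real HotSpot flag -XX:+PrintCompilation lets you watch this happen:

// Illustrative example: a method that becomes hot after many invocations.
// Run with: java -XX:+PrintCompilation HotLoop
public class HotLoop {
    // A small, frequently called method: a natural candidate for JIT compilation and inlining.
    static long square(int x) {
        return (long) x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        // Repeated calls drive up HotSpot's invocation and back-edge counters,
        // so square() and this loop get compiled (tiered: C1 first, then C2).
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}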
Most of us don't want to go back to the days of malloc and free, but the magic of garbage collectors, while convenient, can be mysterious and hard to understand.
In this talk, you'll learn about the many different garbage collectors available in JVMs, the strengths and weaknesses of the allocation and collection strategies each collector uses, and how garbage collectors keep evolving to support today's hardware and cloud environments.
This talk covers the core concepts in garbage collection: object reachability, concurrent collectors, parallel collectors, and generational collectors, following the progression of garbage collectors in the HotSpot JVM.
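A minimal sketch of how you might observe these collectors yourself (the class name and loop counts are illustrative; the flags are real HotSpot options): allocate lots of short-lived objects and compare the GC log output under different collectors, e.g. java -Xlog:gc -XX:+UseSerialGC GcDemo, -XX:+UseParallelGC, -XX:+UseG1GC, or -XX:+UseZGC.

// Illustrative example: generate garbage so the chosen collector has work to do.
public class GcDemo {
    public static void main(String[] args) {
        long checksum = 0;
        // Most of these arrays die almost immediately: the "most objects die young"
        // pattern that generational collectors are designed to exploit.
        for (int i = 0; i < 5_000_000; i++) {
            byte[] garbage = new byte[128];
            checksum += garbage.length;
        }
        System.out.println(checksum);
    }
}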
While different programming languages are good at different things and perform differently, it's tempting to assume that an optimization that works in one language will work just as well in another. Unfortunately, that's not true.
In this talk, we'll learn how different language runtimes work, from interpreters to just-in-time compilers, across JavaScript, Python, and Java. We'll explore the strengths and weaknesses of each approach and how to make the most of them.
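One rough way to feel the difference between an interpreter and a JIT compiler, sketched here in Java (class name and timing approach are illustrative, and a single timed run is only a crude measurement): run the same program with the real HotSpot flag -Xint, which forces interpreter-only execution, and then again with the default JIT enabled.

// Illustrative example: compare `java -Xint Fib` (interpreter only) with `java Fib` (JIT enabled).
public class Fib {
    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        System.out.println(fib(35));
        System.out.printf("took %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}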
Modern JVMs and CPUs are capable of some amazing feats of optimization. For day-to-day work these optimizations just work, but they also mean that the optimal approach can be surprisingly unintuitive.
In this presentation, we'll examine some surprising performance anomalies. By learning the mechanisms behind these performance paradoxes, you'll gain insight into how modern compilers and hardware work.
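One classic example of such an anomaly, sketched below (class name, data sizes, and threshold are illustrative, and the size of the effect varies by JVM and CPU): summing the same numbers can be much faster when the array is sorted, because the CPU's branch predictor can guess the comparison correctly. A single timed run like this is only a rough indication; a benchmark harness such as JMH would give more reliable numbers.

import java.util.Arrays;
import java.util.Random;

// Illustrative example: the same work over sorted vs. unsorted data.
public class BranchDemo {
    static long sumAbove(int[] data) {
        long sum = 0;
        for (int value : data) {
            if (value >= 128) {   // easy to predict on sorted data, hard on random data
                sum += value;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(1_000_000, 0, 256).toArray();

        long t0 = System.nanoTime();
        long unsorted = sumAbove(data);
        long t1 = System.nanoTime();

        Arrays.sort(data);

        long t2 = System.nanoTime();
        long sorted = sumAbove(data);
        long t3 = System.nanoTime();

        System.out.printf("unsorted: %d ms, sorted: %d ms (sums %d / %d)%n",
                (t1 - t0) / 1_000_000, (t3 - t2) / 1_000_000, unsorted, sorted);
    }
}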