Douglas Hawkins

Lead Developer, Java Performance Monitoring at Datadog

Douglas Hawkins has been passionately developing software for the past 20 years.
Throughout Doug's career, he has focused on creating performance-intensive Java applications,
ranging from bioinformatics to financial exchanges.

After 10 years as a Java developer, Doug transitioned to working on Azul's Java Virtual Machine.
Today, Doug continues his interest in building performance tools for developers as the
Lead Developer of Datadog's Java Application Performance Monitoring.

While Doug still enjoys developing software, his true passion is sharing his
interest in low-level details and JVM performance with others.

Presentations

Architecting with Garbage Collection in Mind - Video Preview

HotSpot provides a variety of garbage collectors, each with its own strengths and weaknesses. To get the most out of our applications, we need to pick the right garbage collector and design our code to take advantage of its strengths while avoiding its weaknesses.

In this presentation, you'll learn about criteria for picking a garbage collector, how to measure GC performance, and how to write code that works with rather than against the GC.
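
As a rough illustration of the kind of trade-off involved, the sketch below contrasts an allocation-heavy loop with a GC-friendlier one and notes, in comments, how a collector might be selected and observed. The flag choices and the class itself are illustrative assumptions, not guidance from the talk.

    // A minimal sketch of "working with the GC" rather than against it.
    //
    // Picking and observing a collector, e.g.:
    //   java -XX:+UseG1GC -Xlog:gc* MyApp      (balanced throughput and pauses)
    //   java -XX:+UseZGC  -Xlog:gc* MyApp      (very low pause times)
    public class GcFriendly {

        // Garbage-heavy: each += copies the whole string so far and discards the
        // old one, so the collector has to clean up many temporary objects.
        static String joinSlow(String[] parts) {
            String result = "";
            for (String p : parts) {
                result += p;
            }
            return result;
        }

        // GC-friendlier: one pre-sized buffer, almost no intermediate garbage.
        static String joinFast(String[] parts) {
            int length = 0;
            for (String p : parts) {
                length += p.length();
            }
            StringBuilder sb = new StringBuilder(length);
            for (String p : parts) {
                sb.append(p);
            }
            return sb.toString();
        }
    }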

Concurrency Concepts in Java - Video Preview

Unlike earlier languages, Java had a well-defined threading and memory model from the beginning. And over the years, Java gained new packages to help solve concurrency problems.

Despite this, Java concurrency is sometimes subtle and fraught with peril.

In this talk, you'll learn about these subtleties. You'll also learn how to handle concurrency by exploring the concepts behind java.util.concurrent and other concurrency libraries.
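
One classic subtlety of this kind (my example, not necessarily one from the talk): incrementing a shared field is not atomic, so updates are silently lost, while java.util.concurrent makes the same pattern safe.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.LongAdder;

    public class CounterRace {
        static long plainCount = 0;                    // racy: ++ is read-modify-write
        static final LongAdder safeCount = new LongAdder();

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int t = 0; t < 4; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < 1_000_000; i++) {
                        plainCount++;                  // updates can be lost
                        safeCount.increment();         // atomic, no lost updates
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            // plainCount is very likely less than 4,000,000; safeCount.sum() is exactly 4,000,000
            System.out.println(plainCount + " vs " + safeCount.sum());
        }
    }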

How (Not) To Measure and Profile Java Performance - Video Preview

Today, we all benefit from the sophistication of modern compilers and hardware, but that extra complexity can also make it difficult to reason about performance.

In this talk, we'll examine some surprising performance cases and learn how to
use profiling and benchmarking tools to better understand our modern execution environments.
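
To make the "how (not) to measure" point concrete, here is a sketch of a micro-benchmark written with JMH (the org.openjdk.jmh harness); the benchmarked method is an arbitrary illustration, not one from the talk.

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;

    @State(Scope.Thread)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)
    @Measurement(iterations = 5)
    @Fork(1)
    public class ParseBenchmark {

        String input = "12345";

        // Pitfall: if the result is never used, the JIT may eliminate the work
        // entirely and the benchmark measures nothing.
        @Benchmark
        public void deadCode() {
            Integer.parseInt(input);
        }

        // Returning the value (or sinking it into a Blackhole) keeps the work alive.
        @Benchmark
        public int returned() {
            return Integer.parseInt(input);
        }

        @Benchmark
        public void consumed(Blackhole bh) {
            bh.consume(Integer.parseInt(input));
        }
    }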

While we know that different programming languages are good at different things and perform differently, it's tempting to conclude that an optimization that works in one language will work just as well in another. Unfortunately, that's not true.

In this talk, we'll learn about the different ways that language runtimes work, from interpreters to just-in-time compilers, across JavaScript, Python, and Java. We'll explore the strengths and weaknesses of each approach and how to make the most of them.
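
A quick, deliberately unscientific way to see one of those mechanisms in Java (my illustration, not material from the talk): time the same method repeatedly and watch it get faster as HotSpot moves it from the interpreter to compiled code. Use a harness like JMH for real measurements.

    public class Warmup {
        static long work() {
            long sum = 0;
            for (int i = 0; i < 100_000; i++) {
                sum += Integer.toString(i).hashCode();
            }
            return sum;
        }

        public static void main(String[] args) {
            for (int round = 0; round < 10; round++) {
                long start = System.nanoTime();
                work();
                long micros = (System.nanoTime() - start) / 1_000;
                // Expect the first rounds to be noticeably slower than the last ones.
                System.out.println("round " + round + ": " + micros + " us");
            }
        }
    }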

Our modern JVMs and CPUs are capable of some amazing feats of optimization. In general, for day-to-day work, these optimizations just work, but they also mean that the optimal approach can be surprisingly unintuitive.

In this presentation, we'll examine some surprising performance anomalies. Through learning the mechanisms behind these performance paradoxes, you'll gain insight into how modern compilers and hardware work.
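
One widely cited example of such an anomaly (illustrative; the talk may use different cases): the same loop over the same values often runs much faster once the data is sorted, because the CPU's branch predictor stops mispredicting.

    import java.util.Arrays;
    import java.util.Random;

    public class BranchDemo {
        public static void main(String[] args) {
            int[] data = new Random(42).ints(10_000_000, 0, 256).toArray();

            System.out.println("unsorted: " + time(data) + " ms");
            Arrays.sort(data);                       // same values, now a predictable branch
            System.out.println("sorted:   " + time(data) + " ms");
        }

        static long time(int[] data) {
            long start = System.nanoTime();
            long sum = 0;
            for (int v : data) {
                if (v >= 128) {                      // taken roughly half the time at random
                    sum += v;
                }
            }
            System.out.println("(sum=" + sum + ")"); // use sum so the loop isn't optimized away
            return (System.nanoTime() - start) / 1_000_000;
        }
    }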

Everyone worries about performance but few of us have the time to truly understand it. Fortunately, our modern JVMs and CPUs are capable of some amazing performance tricks, but those same tricks only make reasoning about performance that much harder.

In this talk, we'll take a look at some surprising and often unintuitive performance problems and solutions, not simply with the goal of memorizing fixes but to better understand the complexity that lies inside both JVMs and CPUs.
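
Another frequently cited case (again illustrative, not necessarily the talk's example): a single boxed accumulator in a hot loop can allocate millions of temporary objects and slow the loop dramatically compared with the primitive version.

    public class BoxingCost {
        static long sumBoxed(int n) {
            Long sum = 0L;                 // each += unboxes, adds, and boxes a new Long
            for (int i = 0; i < n; i++) {
                sum += i;
            }
            return sum;
        }

        static long sumPrimitive(int n) {
            long sum = 0L;                 // stays in a register, no allocation
            for (int i = 0; i < n; i++) {
                sum += i;
            }
            return sum;
        }

        public static void main(String[] args) {
            int n = 50_000_000;
            long t0 = System.nanoTime();
            long a = sumBoxed(n);
            long t1 = System.nanoTime();
            long b = sumPrimitive(n);
            long t2 = System.nanoTime();
            System.out.println(a + " in " + (t1 - t0) / 1_000_000 + " ms, "
                             + b + " in " + (t2 - t1) / 1_000_000 + " ms");
        }
    }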

The JVM can perform some marvelous feats of optimization but for most developers, its inner workings remain a mystery.

In this talk, we'll walk through how the JVM optimizes a seemingly simple piece of Java code, starting with how the JVM decides what to compile and then going step by step through the different optimizations it performs. In doing so, you'll learn how the JVM makes your code run fast, and also some things to avoid to keep it running fast.
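
As a concrete stand-in for that "seemingly simple piece of Java code" (my own example, not the one used in the talk), the sketch below shows the kind of method HotSpot can transform heavily once it becomes hot: inlining distanceSquared, then using escape analysis to remove the Point allocations altogether. A flag like -XX:+PrintCompilation lets you watch those compilation decisions being made.

    public class EscapeDemo {
        static final class Point {
            final double x, y;
            Point(double x, double y) { this.x = x; this.y = y; }
            double distanceSquared(Point other) {
                double dx = x - other.x, dy = y - other.y;
                return dx * dx + dy * dy;
            }
        }

        static double hotLoop(int n) {
            double total = 0;
            for (int i = 0; i < n; i++) {
                // These Points never escape the loop body, so the JIT may
                // scalar-replace them instead of allocating on the heap.
                Point a = new Point(i, i + 1);
                Point b = new Point(i + 2, i + 3);
                total += a.distanceSquared(b);
            }
            return total;
        }

        public static void main(String[] args) {
            // Call it enough times that HotSpot decides it is worth compiling.
            double sink = 0;
            for (int i = 0; i < 20; i++) {
                sink += hotLoop(1_000_000);
            }
            System.out.println(sink);
        }
    }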

Understanding Garbage Collection - Video Preview

Most of us don't want to go back to the days of malloc and free, but the garbage collector isn't always our friend.

In this presentation, you'll learn about the different garbage collection strategies used in JVMs, how to monitor garbage collection and analyze memory dumps, and why you might want to use one collection strategy instead of another.
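
As one hedged example of monitoring (an approach I'm assuming here, not the talk's prescribed tooling): the standard java.lang.management API exposes per-collector counts and pause totals from inside the JVM, while -Xlog:gc* logs and heap dumps taken with jcmd or jmap cover similar ground from the outside.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(),              // e.g. "G1 Young Generation"
                        gc.getCollectionCount(),   // collections since JVM start
                        gc.getCollectionTime());   // cumulative collection time in ms
            }
        }
    }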

Thankfully, Java garbage collectors have come a long way in the last 25 years. While the latest GCs, G1 and ZGC, usually just work, their inner workings can be harder to understand than those of the GCs that came before.

In this presentation, you'll learn the basic garbage collection strategies used by the JVM, starting with the older collectors and following their evolution to the modern GCs we enjoy today.

What's in a Type? A Mathematical View of the Java Type System - Video Preview

Over the years, Java developers have learned through trial and error the best ways to use Java's type system. But certain parts of the type system, like wildcard generics and covariant return types, are underused, in part because they are not well understood.

Fortunately, by going back to the mathematical roots of type systems, we can understand not only wildcard types and covariance, but also why we should prefer composition over inheritance and even how compilers perform some of their optimization magic.
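
A compact sketch of the two features the abstract calls out (my illustration, not an excerpt from the talk): wildcard generics following the producer-extends, consumer-super rule, and a covariant return type in an override.

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    public class TypeDemo {
        // Producer extends: we only read Numbers out of 'source'.
        // Consumer super: we only write Integers into 'sink'.
        static void copyInts(List<? extends Number> source, Collection<? super Integer> sink) {
            for (Number n : source) {
                sink.add(n.intValue());
            }
        }

        static class Animal {
            Animal reproduce() { return new Animal(); }
        }

        static class Cat extends Animal {
            @Override
            Cat reproduce() { return new Cat(); }   // covariant return type: Cat, not Animal
        }

        public static void main(String[] args) {
            List<Double> doubles = List.of(1.5, 2.5);
            List<Number> numbers = new ArrayList<>();
            copyInts(doubles, numbers);             // both arguments compile thanks to wildcards
            System.out.println(numbers);

            Cat kitten = new Cat().reproduce();     // no cast needed
            System.out.println(kitten.getClass().getSimpleName());
        }
    }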