How does designing Microservices differ from designing more traditional applications?
What is a better way to learn than to take a problem, analyze the requirements, explore the design options, apply the concepts of bounded context, and arrive at the architecture and design of Microservices to realize the requirements?
Come to this workshop for practical, hands-on experience navigating from requirements all the way to a workable solution, and along the way learn about different architectural goals and approaches that can help with the design.
git client to fetch the examples and labs from a git repository.
In this day-long workshop, we will walk through a catalog of the common architectural design patterns. For each design pattern, we will run docker-compose files that demonstrate the strengths and weaknesses of that pattern. The result is a first-hand, highly engaged full-day workshop that gives you the knowledge you need to make critical architectural choices.
We will cover:
A GitHub Account
Application Programming Interfaces (APIs) are, by definition, directed at software developers. They should, therefore, strive to be useful and easy to use for developers. However, by engaging design elements from the Web, APIs can be useful in much larger ways than simply serializing state in JSON.
There is no right or perfect API design. There are, however, elements and choices that induce certain properties. This workshop will walk you through various approaches to help you find the developer experience and long-term strategies that work for you, your customers and your organization.
We will cover:
The Web Architecture as the basis of our APIs
The REST Architectural Style and its motivations
The Richardson Maturity Model as a way of discussing design choices and induced properties
The implications of content negotiation and representation choices such as JSON or JSON-LD
The emergence of metadata approaches to describing and using APIs such as OpenAPI and HydraCG
Security considerations
Client technologies
API Management approaches
According to a CareerBuilder study, only 40% of new tech leaders receive formal training when they become a boss for the first time. The rest are forced to get scrappy to quickly equip themselves with new skills, techniques, and mindsets to effectively transition into their new roles.
This workshop was designed to fill this gap, providing tactical techniques and resources for both new and seasoned technical leaders!
In this full-day workshop, you will learn how to successfully navigate the complexities of Engineering Management. Topics include:
Purposeful leadership, emotional intelligence, self-awareness, time management, prioritizing the right things, social styles, giving feedback, career conversations, 1:1s, psychological safety, recruiting, interviewing, onboarding, measuring team metrics, motivation, engagement, team building, and building sustainable relationships.
Interactive group exercises will help you practice these new ideas and techniques in a safe and engaging environment.
You will leave with new perspectives on Engineering Management, as well as handouts and plenty of resources to grow and further develop skills in your daily work.
Please download (and print if possible) the handout before joining this workshop! (located with session slides).
In this full-day workshop, open-source author, trainer, and DevOps director Brent Laster will provide an extensive introduction to GitHub Actions. You’ll learn about the core parts and pieces that make up an action, as well as the types of functionalities and features they provide.
You’ll also see how to combine them in simple workflows to accomplish basic tasks as well as more advanced workflows to automate typical CI/CD and other tasks. And you’ll learn about
how to create and use your own actions, create and manage artifacts, and how to debug and secure your GitHub Action workflows.
This course will leverage hands-on, guided labs using GitHub and GitHub Actions so that participants can gain “real-world” experience with GitHub Actions.
The requirements for this workshop are minimal, but you will need a browser and an account on GitHub.com. A free account is fine, but please make sure to have set that up and have logged in. A basic working knowledge of Git and GitHub is helpful, but not absolutely necessary as we'll guide you through the hands-on work, step-by-step.
Enterprise applications are complex. A transaction starting in the browser will go through proxies, API gateways, security appliances, application performance monitoring tools, logs, microservices, and more microservices. Historically there has been no standard way to get observability and traceability across all the enterprise components. Each product and framework has its own proprietary way of identifying a transaction, making it difficult, if not impossible, to stitch together a complete picture of a transaction. This is changing with the introduction of the W3C Trace Context standard and the open source OpenTelemetry initiative.
In this session, you will learn how using Trace Context, OpenTelemetry and other open source and commercial products can improve your observability to help you better triage production issues, improve performance, be proactive and make your users happier.
Many of us have significant experience in Java. Yet, from time to time, we get tripped up by some code that did not quite behave the way we expected.
In this presentation we will take a look at some of those cases and get a deeper understanding of the language we use every day.
Java has been evolving at a rapid pace. Some of the most noticed changes are the features in the language. However, there are other interesting and significant changes in Java, not related to the language but in the JDK and the ecosystem.
In this presentation we will take a look at some of those exciting changes and how we can benefit from them.
The switch feature of Java has gone through an amazing transformation. In this presentation we will start with switch as a statement, transform it from there into an expression, and then into a full-blown pattern matching syntax.
We will get quite deep into this feature to explore various use cases.
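As a taste of that transformation, here is a minimal sketch (assuming Java 21 or newer) of switch used as an expression with type patterns and a guard; the `describe` method and its labels are illustrative, not from the session materials.

```java
class SwitchDemo {
    // switch as an expression with pattern matching over the type of obj
    static String describe(Object obj) {
        return switch (obj) {
            case Integer i when i > 0 -> "positive int";        // guarded pattern
            case Integer i            -> "non-positive int";
            case String s             -> "string of length " + s.length();
            default                   -> "something else";
        };
    }
}
```

Because switch is an expression here, the compiler verifies exhaustiveness and each arm yields a value directly, with no fall-through.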
Structured concurrency, a preview feature in Java, is going to change how we do concurrent programming.
In this presentation we will take a look at the benefits of this new feature and how to make use of it.
Threads have been part of Java since the beginning. But the new virtual threads, introduced as a preview in Java 19, are different in how they're implemented and how we can benefit from them.
In this presentation we will learn about virtual threads, the problems they solve, and how to make use of them.
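For a first look, here is a minimal sketch (assuming Java 21 or newer, where virtual threads are final) of starting a virtual thread; the helper class and method names are illustrative, not from the session materials.

```java
import java.util.concurrent.atomic.AtomicBoolean;

class VirtualThreadDemo {
    // Starts a task on a virtual thread and reports whether the task
    // actually observed itself running on one.
    static boolean runOnVirtualThread() {
        AtomicBoolean ranVirtual = new AtomicBoolean(false);
        // Thread.ofVirtual() returns a builder for virtual threads;
        // start() creates the thread and schedules the task immediately.
        Thread t = Thread.ofVirtual()
                         .start(() -> ranVirtual.set(Thread.currentThread().isVirtual()));
        try {
            t.join();  // wait for the virtual thread to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return ranVirtual.get();
    }
}
```

The API is deliberately familiar: a virtual thread is still a `java.lang.Thread`, but it is scheduled by the JVM onto a small pool of carrier threads, which is what makes millions of them cheap.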
We have been using JUnit and doing TDD for years, but you can take testing further. In this session, we will discuss some tools you absolutely need for testing your code outside of the regular stack you currently use.
Hey. Remember that time when we used to create jar or war files and just ssh into a box to deploy? Well, it was simpler, but maybe it wasn't that great of an idea. Time has certainly moved on, and our releases have become very advanced, with technical CI/CD pipelines, docker or debian packages, multi-purpose testing, producing signatures, performing security scans, performing releases, and then, when you're done, telling the whole world about it. Whew! This presentation introduces JReleaser, a release platform for Java that does a multitude of these chores for you.
In this session we will discuss:
Kafka is a “must know.” It is the data backplane of the modern microservice architecture. It's now being used as the first persistence layer of microservices and for most data aggregation jobs. As such, Kafka has become an essential product in the microservice and big data world.
This workshop is about getting started with Kafka. We will discuss what it is and what its components are, cover the CLI tools, and show how to program a Producer and a Consumer.
In today’s volatile technology and business climate, big architecture up front is not sustainable. Attempts to define the architectural vision for a system early in the development lifecycle do not work. To accept change, teams are moving to agile methods, but agile methods provide little architectural guidance. In this session, we provide practical guidance for software architecture on agile projects.
We will explore several principles that help us create more flexible and adaptable software systems. We’ll expose the true essence of what’s meant when we say “architectural agility.” And we’ll explore the real goal of software architecture and how we can accommodate architectural change to help increase architectural agility.
The architecture paradigms we’re using are changing. The platforms we deploy our software to are changing. We are confronted with several new architecture paradigms to choose from, such as microservices and miniservices. Should we automatically discard some of the proven architectures we’ve used in the past, including more traditional web services? Likewise, new platforms, such as cloud, complicate the decision. Yet, at the heart of this transformation is modularity. From monoliths to microservices and everything in between, modularity is the foundation.
In this session, we’ll explore how modularity is impacting the platforms we are leveraging and the architecture paradigms we’ll use and offer a clear roadmap with proven guidance on navigating the architecture decisions we must make.
Software development is an amazing profession, requiring the delicate combination of analytical and creative skills. Understanding architectural patterns, agile best practices, and exploring the depths of platforms, tools, and languages requires deep analytical skills. Yet crafting a system also requires vision and understanding when to deviate from traditional best practices.
In this session, we will explore lessons learned over many years of building large software systems. We will challenge traditional assumptions and explore new ways of thinking.
Remember in The Matrix when Neo said “I know Kung Fu” and Morpheus replied “Show me”? Well, we will be doing just that, except with IntelliJ. In this dojo, we will be using all the wonderful keymappings that are available in IntelliJ, and we will make you a lean, mean coding machine!
In this dojo, you will master the art of:
“Show no weakness, Show no mercy”
The organization has grown, and one line of business has become 2 and then 10. Each line of business is driving technology choices based on its own needs. Who manages alignment of technology across the entire Enterprise, and how? Enter Enterprise Architecture! We need to stand up a new part of the organization.
This session will define the role of architects and architectures. We will walk through a framework of starting an Enterprise Architecture practice. Discussions will include:
Awareness is the knowledge or perception of a situation or fact which, depending on a myriad of factors, is an elusive attribute. It is likely the most significant skill never explicitly asked for, perhaps because it's challenging to measure or verify; it is challenging even to be aware of awareness, or to show evidence of it. This session will cover different levels of architectural awareness: how to surface awareness, and how you might respond to different technical situations once you are aware.
Within this session we look holistically at engineering, architecture, and the software development process, discussing:
* Awareness of when process needs to change (original purpose of Agile)
* Awareness of architectural complexity
* Awareness of a shift in architectural needs
* Awareness of application portfolio and application categorization
* Awareness of metrics surfacing system challenges
* Awareness of system scale (and what scale means for your application)
* Awareness when architectural rules are changing
* Awareness of motivation for feature requests
* Awareness of solving the right problem
The focus of the session will be mindfulness (defined as focusing on one's awareness), concentrating on sharing strategies for heightening awareness as an architect and engineer.
REST is, undoubtedly, one of the most maligned and misunderstood terms in our industry today. So many different things have been called REST that the word has virtually lost all meaning. Many systems and applications that self-describe as “RESTful” usually are not, at least according to REST as defined in Dr. Roy T. Fielding’s 2000 dissertation, “Architectural Styles and the Design of Network-based Software Architectures”.
The wild success of the architecture derived by Dr. Fielding led many to want to emulate it (even when it was inappropriate to do so). As a shorthand, organizations began referring to “RESTful” systems, which exposed “RESTful” APIs. Over time “REST” became a buzzword referring to a vague generalization of HTTP/JSON APIs that typically bear little to no resemblance to the central ideas of REST (and thus elicit few of the benefits). Hypermedia is the central pillar and defining characteristic of the REST architectural style, yet it remains almost universally absent.
Hypermedia was a revolutionary idea that, while more relevant than ever, is almost forgotten in today's tech space. Consequently, few reap the benefits of this idea, and even fewer know what they might be giving up.
Although not every system needs to be (or should be) RESTful, it's helpful to understand the key, and often overlooked, ideas to be able to decide if they make sense for your current or next project. This session introduces the key foundational ideas and shows what these ideas look like in practice. Although hypermedia and REST don't make sense for every project or system, you'll leave this session with a better understanding of these groundbreaking ideas, practical insights on how to adopt them today, and ultimately be armed to approach the trade-offs of this approach mindfully and deliberately.
The web is arguably the single most impactful revolution in human history (to date). By agreeing on a simple set of standards, we have collectively unlocked all the world's information. Documents can be discovered, retrieved, published, and shared so easily we don't even think about it.
Data, on the other hand, is a different story. Our data remains stuck in the 1980s. Locked in silos, each with a different format, interface, and conventions that must be interpreted by a human, parsed, mapped, and converted. Data is at the heart of many problems we solve today, and we produce data exponentially faster than we can consume it.
Today I can request any document from any server on the web. I need to know nothing about the underlying technology the server uses, nothing about how the information is stored or retrieved, and consume it instantly. We've been evolving those same capabilities with data over the past 20 years and the standards, tools, and technologies are reaching critical mass. The linked data revolution is now one that you can no longer ignore. Join us to see what you've been missing.
Completely Rewritten for 2023
Part one of this series introduces the ideas, motivations, and applications of linked data along with historical context. This more technical session dives deeper into the tech stack and available tooling.
We'll dive into key linked data patterns, explore semantic modeling, graph queries, and talk about applying these ideas in the field, where the rubber meets the road!
Knowledge graphs have been quietly powering the future, unlocking new capabilities that were unimaginable to most just a few years ago. The few, however, have been imagining this future for decades, and we've finally arrived at what industry analysts are calling “the year of the knowledge graph.”
This session provides a historical look at the roots of knowledge graphs, how the ideas have evolved over the decades, and how breakthroughs in various fields have brought us to the brink of a new era in technology. Join us to see how far we've come and what is possible next!
Integration, once a luxury, is now a necessity. Doing this well, however, continues to be elusive. Early attempts to build better distributed systems, such as DCOM, CORBA, and SOAP, were widely regarded as failures. Today the focus is on REST, RPC, and GraphQL style APIs.
Which is best? The go-to answer for architects is, of course, “it depends.”
In this session, we look at the various API approaches and how they attempt to deal with the challenges of decoupling client from server: evolvability, extensibility, adaptability, and composability.
The biggest challenge is that needs change over time, and APIs must necessarily evolve. Versioning is challenging, and breaking changes are inevitable. You'll leave this session with a high-level understanding of these approaches, their respective trade-offs, and ultimately how to align your API approach with your architectural and organizational goals.
We live in a world of microservices. Yet, what is a microservice? What defines the boundaries of a microservice? How do we define the relationships between microservices? Thankfully domain-driven design gives us the concepts and practices to better design and decompose our services.
In this session we will consider many of the concepts of DDD — how bounded contexts use ubiquitous language to model the domain, and how context maps can be used to establish the interconnections between services, as well as aggregates and domain events, all of which will serve us well as we go about creating our microservices.
We will also discuss the “tactical” patterns of DDD — We will see how we can “embed” the ubiquitous language in code, and the architectural influences of DDD.
This workshop will have you thinking in DDD, using its concepts and ideas. Using polls and mini-exercises, we attempt to better cement the ideas of DDD so we can start applying them at work.
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “Detached HEAD state”? Why tag commits when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, as well as know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
In this workshop, take a practical approach to creating fitness functions that assess the architectural soundness of your systems.
We will learn about different fitness functions that can help architects and their teams create and comply with the architectural expectations expressed by the architect in collaboration with the team. Then we’ll see how we can implement those constraints, as guardrails, using ArchUnit, and get continuous feedback on the teams' efforts.
git client
Java 8 or newer
Your favorite IDE, IntelliJ IDEA community edition recommended.
Architecture is not a static representation of a system. There are several complexities and risks involved in creating architectures. One way to mitigate the risk is to evolve the architecture. But there are risks in evolving, just as there are risks in not evolving. In this interactive workshop we will explore a set of practices that we can use to mitigate those risks. Then we will dive into discussing some common and popular architectural patterns.
Finally, we will take some example applications and discuss how to evolve architecture to meet the needs of those applications.
Computer with git client installed to access the version control system, which will have lab-related material.
We are knowledge workers, and ultimately we must own our growth and learning. Personal Knowledge Management is the process one uses to gather, classify, store, search, retrieve, and share knowledge in daily activities, and the way in which these processes support work activities.
Despite taking notes, bookmarking web content, and highlighting passages in books, we often struggle to recall or rediscover the many insights we pick up daily in our work and life. This session introduces a tool and some process recommendations so you never again lose discoveries and knowledge resources.
Michael shares the tools and workflow he (and many on the NFJS tour) uses to write, organize, and share your thoughts, keep your todo list, and build your own digital garden. These approaches naturally connect what you know the same way your brain does, and make it easier to keep everything you learn actionable and always at your fingertips.
You'll learn the basics, tips and tricks, and recommendations of these tools and practices; and leave armed to deploy these right away as you continue learning at the conference!
Just as sharpening the saw is the best way to cut down a tree… sharpening your development environment allows for a more focused, more productive experience.
This session is a collection of scripts, aliases, shells, editors and tools which will super charge your development experience.
This session will cover:
Bring your machine and let's have you productive within 2 hours!
Spock is a Groovy-based testing framework that leverages all the “best practices” of the last several years, taking advantage of much of the industry's accumulated development experience. Combine JUnit, BDD, RSpec, Groovy, and Vulcans… and you get Spock! Feedback from previous attendees experienced with Spock indicated they learned more than they imagined they would, as this deep dive session explores many less documented cases of Spock and is intended for all experience levels.
This workshop assumes some understanding of testing and JUnit and builds on it. We will introduce and dig deep into Spock as a test specification and mocking tool. This is a hands-on, 50% labs workshop. Concepts are presented, followed by labs to help reinforce understanding.
Are you a Java developer looking to work on a Golang project? Are you looking to get involved in cloud-native projects such as Kubernetes? This session is for you! This session assumes you are a Java developer and details the nuances of Go, with comparisons against Java-isms.
This session will take a deep dive into Go as a language and provide the details necessary to understand and write idiomatic Go applications. In addition to differences in how to use the language and packaging structures, we will look at Go options for what would be standard idiomatic Java. This will include:
In the process, we will look at several Go projects in the Open Source space as style examples.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team (and management) on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools; you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team, as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board, as well as when and how to approach management within the department and also higher up in the organization.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
Continuous refactoring is critical to succeeding in projects and is an important part of sustainable agile development.
In this workshop, we will start by discussing how to approach refactoring, the essential steps we need to take, and look into how to incrementally improve the internal design of code to make it extensible, maintainable, and cost-effective to change. In addition to discussing the concepts, we will take several code examples from real projects, discuss the code smells and explore the refactoring techniques. Along the way, we will also dive into refactoring short code samples and measure the quality of code before and after refactoring.
Computer with git client to access git repository.
Java 8 or newer
Your favorite IDE
We are all proud of being the best coders, but have we analyzed our code for optimization? What tools can help guide us toward creating algorithms with better time and space complexity?
How you approach a problem is critical. How do you solve the problem? How do you optimize the problem?
This talk will not only prepare you for interviews but will also give you confidence in how to crack coding challenges.
In this talk, we will be exploring the analytical and coding skills behind the top algorithms and common problems from the real world, including:
Single Source Shortest Paths
Traveling Salesman problems
Pattern Search Algorithms
Greedy Algorithms
Knapsack problem
Priority queue
Problem-solving skills are valuable and help develop optimal algorithms. We will look at problem-solving flow charts to make the right decisions for a given problem.
Topics we will cover are:
Arrays and Strings - Palindrome permutations
Linked Lists - Runner technique, Recursions
Stacks and queues
Trees and Graphs - Binary Tree, Binary Search Tree, Tries
Sorting and Searching
Single-Source Shortest Paths: Bellman-Ford, BFS, DFS, Dijkstra, Dynamic Programming
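As a taste of the shortest-path material above, here is a minimal sketch of Dijkstra's algorithm driven by a priority queue. The `shortestPaths` helper and the adjacency-list encoding (`int[]{neighbor, weight}`) are illustrative choices, not code from the talk:

```java
import java.util.*;

class Dijkstra {
    // Shortest distances from `source` to every node, using a priority
    // queue with lazy deletion of stale entries.
    // Graph encoding (illustrative): adjacency list of int[]{neighbor, weight}.
    static int[] shortestPaths(List<List<int[]>> graph, int source) {
        int n = graph.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Queue entries are {node, distance}, ordered by distance.
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(e -> e[1]));
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int u = cur[0], d = cur[1];
            if (d > dist[u]) continue;      // stale entry: a shorter path was already found
            for (int[] edge : graph.get(u)) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }
}
```

The lazy-deletion trick (skipping stale queue entries instead of decreasing keys) keeps the code simple while preserving the O((V+E) log V) bound.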
This talk is ideal for the following roles:
Architects
Technical Leads
Programmers
Integration Architects
Solution Architects
Event-driven architectures are not new, but they are newly ascendant. For the first time since the client-server revolution of 40 years ago, a new architectural paradigm is changing the way we build systems. Apache Kafka and microservices are at the center of this movement.
In this workshop, we’ll discuss the issues that arise turning a monolith into a set of reactive services, including issues like data contracts, integrating with the systems you can't change, handling request-response interfaces, and more. We'll also discuss common infrastructure choices like Apache Flink and Apache Pinot. Hands-on exercises will focus on understanding your organization's data and forming a plan to refactor that monolith that seems like it will never go away.
I'm looking forward to having you in the Event-Driven Architecture Workshop! To get the best use of our time together, the in-person exercises will focus on understanding the systems you're currently using at work and planning their refactoring to microservices. You should bring a laptop, a pen, and energy for the day!
When things get a little bit cheaper, we buy a little bit more of them. When things get cheaper by several orders of magnitude, you don't just see changes in the margins, but fundamental transformations in entire ecosystems. Apache Pinot is a driver of this kind of transformation in the world of real-time analytics.
Pinot is a real-time, distributed, user-facing analytics database. The rich set of indexing strategies makes it a perfect fit for running highly concurrent queries on multi-dimensional data, often with millisecond latency. It has out-of-the-box integration with Apache Kafka, S3, Presto, HDFS, and more. And it's so much faster on typical analytics workloads that it is not just a marginally better data warehouse, but the cornerstone of the next revolution in analytics: systems that expose data not just to internal decision makers, but to customers using the system itself. Pinot helps expand the definition of a “decision-maker” not just down the org chart, but out of the organization to everyone who uses the system.
In this talk, you'll learn how Pinot is put together and why it performs the way it does. You'll leave knowing its architecture, how to query it, and why it's a critical infrastructure component in the modern data stack. This is a technology you're likely to need soon, so come to this talk for a jumpstart.
Have you ever stopped to think about how to build a database? The thing is, there isn't just one way, as we can see by the massive number of data infrastructure options we have to choose from. It's a nonstop series of tradeoffs, each motivated by the constraints the database wants to satisfy. An in-memory transactional database would be one thing. A general-purpose, single-server relational database would be another. A low-latency, horizontally scalable analytics database would be…the journey we're going to take.
In this talk, we'll start by picking a data model, make decisions about serialization and storage, choose indexing strategies, pick a query language, and figure out how to scale, eventually ending up with something that looks remarkably like Apache Pinot, a real-time analytics database. Pinot was built on a journey like this, always optimized for ultra-low-latency, user-facing analytics at scale. In the real world, Pinot is used by applications like LinkedIn and UberEats to expose the state of the system not just to internal decision-makers, but to the users of the system itself, including all of us who consume analytical queries. By focusing on the internals of Pinot and the tradeoffs made along the way to build a database of its kind, we'll see how it enables a new class of applications that turns every user of a system into a decision-maker.
GraalVM is open source, much like OpenJDK, and is freely available to developers. In this presentation, we will dive into the power of this alternative JVM and discuss the use cases, the benefits, and the limitations that arise from its use.
We will take a hands-on, example-driven approach to exploring GraalVM.
Game of Life is an intriguing game. At first glance it looks simple, but as you look closer, it turns out to be quite complex. How can we implement this game under different constraints, and what are those constraints? Is it possible to use functional programming, honoring immutability? You see, it is intriguing.
We will discuss the constraints, think about how we might solve them, and along the way discover the role functional programming can play. By the end of this session, we will have a fully working program, built through live coding, to illustrate the ideas that emerge from our discussions.
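As one possible approach (not necessarily the one live-coded in the session), a single generation can be expressed functionally in Java: the board is an immutable set of live cells, and each step produces a new set rather than mutating state. The `Life` class and `Cell` record are illustrative names:

```java
import java.util.*;
import java.util.stream.*;

class Life {
    record Cell(int x, int y) {}

    // One generation of Conway's Game of Life. A dead cell with exactly 3
    // live neighbors is born; a live cell with 2 or 3 live neighbors survives.
    static Set<Cell> step(Set<Cell> live) {
        // Count live neighbors for every cell adjacent to a live cell.
        Map<Cell, Long> counts = live.stream()
            .flatMap(Life::neighbors)
            .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
        return counts.entrySet().stream()
            .filter(e -> e.getValue() == 3
                      || (e.getValue() == 2 && live.contains(e.getKey())))
            .map(Map.Entry::getKey)
            .collect(Collectors.toUnmodifiableSet());
    }

    // The eight neighbors of a cell, as a stream.
    static Stream<Cell> neighbors(Cell c) {
        return IntStream.rangeClosed(-1, 1).boxed()
            .flatMap(dx -> IntStream.rangeClosed(-1, 1)
                .filter(dy -> dx != 0 || dy != 0)
                .mapToObj(dy -> new Cell(c.x() + dx, c.y() + dy)));
    }
}
```

Because the board is just a set of coordinates, the grid is effectively unbounded and no mutable 2D array is needed.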
According to Akamai, web API calls now make up more than 80% of internet traffic and 90% of a web application’s attack surface. With such a critical and vulnerable piece of your architecture, do you know your APIs are secure? Do you know how, and if, attackers are attempting to exploit them?
This hands-on workshop teaches you how to identify and fix vulnerabilities in Java web APIs. Using an existing API, you will learn ways to scan and test for common vulnerabilities such as excessive data exposure, broken authentication & authorization, lack of resource & rate limiting, and more. You will learn best practices around logging, intrusion detection, rate limiting, authentication, and authorization. You will also learn how to improve security in your APIs using existing tools, libraries, frameworks, and techniques to prevent vulnerabilities.
Remote Desktop (RDP) Software
How do we move information in real time and connect machine learning models to make decisions on our business data? This presentation walks through the machine learning and Kafka tools that help achieve that goal.
In this presentation, we start with Kafka as our data backplane and look at how information flows into our pub/sub system. As events enter Kafka, how do we sample that data and train our model, and then how do we unleash that model on our real-time data? Picture extracting samples of credit card applications for training, then attaching the model for online processing: the moment we receive an application, we can approve or decline it based on a machine learning model trained on historical data. We will discuss other options as well, like Spark, H2O, and more.
Today, JavaScript is ubiquitous. However, for the longest time it was deemed quirky and eccentric, and developers had to resort to convoluted programming practices and patterns to avoid the potholes presented by the language.
All that changed in 2015, when the central committee that governs the development of the language announced a slew of changes aimed at propelling JavaScript into a new era. Features like let and const deprecate the mischievous var, while fat-arrow functions make JavaScript not only more succinct but also more functional. Modeling domains and object hierarchies is also easier using the newly introduced classes. Finally, features like promises and async/await make it easier to work with asynchronous operations.
However, there is a ton of legacy code out there that still uses older language constructs, and this code needs to be, and should be, refactored to use modern JavaScript constructs. This makes the code not only sustainable and evolvable but also clearer, more explicit, and less prone to bugs.
In this exercise-driven workshop, you will use a test-driven approach to learn how to safely refactor your legacy JavaScript code using modern constructs.
By the end of this workshop, you will have built a solid foundation of modern JavaScript constructs. You will be ready to take on your next project or refactor an existing one with confidence.
Please visit the repository for this workshop found here and follow the Setup instructions in the README.md file.
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
The Mockito framework is the most popular library for creating mocks, stubs, and spies for your tests. This talk reviews why and how you might want to do that, including unit vs integration tests, creating your own mocks and stubs, setting expectations, and verifying the results.
The Mockito documentation is notoriously misleading if you don't already know the principles behind the library. This talk gives an example that hopefully clears up any confusion and makes the docs useful. Many examples will be provided covering a wide range of capabilities. In addition to the basics, issues like mocking static methods, mocking final methods and classes, using spies for existing classes, and more will be examined.
The fundamental testing libraries in Java have undergone complete redesigns in the past few years. JUnit 5, with its new Jupiter programming model, redesigns the most well-known tool in all of testing. This talk will demonstrate the new features, how they are intended to be used, and discuss experimental ideas in the pipeline.
JUnit has been remarkably stable over the years and is one of the most widely adopted frameworks in the Java world. The latest version, JUnit 5, takes JUnit to the next level. Full of new features like conditional test execution, parametric testing, labeling and filtering tests, and more, it brings all the modern thinking on testing into the JUnit world. It also takes advantage of the functional features added to Java since version 8 to create a powerful, new library for testing your code.
This workshop discusses the features added to Java since Java 8. After a review of the functional additions (streams, lambda expressions, and method references), topics will include Local Variable Type Inference (LVTI), collection factory methods, the Java shell, the new HTTP client, the enhanced switch statement, text blocks, records, pattern matching, and sealed classes.
Features will be demonstrated for Java versions from 8 through 17, using preview versions where available.
java.time
List.of
Set.of
Map.of
Map.ofEntries
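The collection factory methods listed above can be sketched in a few lines; the class and variable names here are illustrative:

```java
import java.util.*;

class FactoryMethods {
    public static void main(String[] args) {
        // Java 9+ factory methods create compact, immutable collections.
        List<String> langs = List.of("Java", "Kotlin", "Scala");
        Set<Integer> primes = Set.of(2, 3, 5, 7);
        Map<String, Integer> releases = Map.of("Java 8", 2014, "Java 11", 2018);

        // Map.ofEntries scales past Map.of's ten-pair limit.
        Map<String, Integer> more = Map.ofEntries(
            Map.entry("Java 17", 2021),
            Map.entry("Java 21", 2023));

        // All of these are unmodifiable: mutator methods throw.
        try {
            langs.add("Groovy");
        } catch (UnsupportedOperationException expected) {
            System.out.println("List.of produces an immutable list");
        }
    }
}
```

Unlike Collections.unmodifiableList wrappers, these collections have no mutable backing list to alias, and they reject null elements outright.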
Software projects can be difficult to manage. Managing teams of developers can be even more difficult. We've created countless processes, methodologies, and practices, but the underlying problems remain the same.
This session is full of practical tips and tricks for dealing with the real-life situations any tech leader regularly encounters. Put these techniques into practice to create an enviable culture and an outstanding development team, while avoiding common management mistakes and pitfalls.
In this example-driven presentation, we'll focus on how to build reactive APIs in Spring. We'll start with Spring WebFlux, a reactive reimagining of the popular Spring MVC framework for HTTP-based APIs. Then we'll have a look at RSocket, an intriguing new communication protocol that is reactive by design.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
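Spring's reactive types (Flux and Mono from Project Reactor) are not shown here, but as a rough stdlib-only analogy, `CompletableFuture` illustrates the non-blocking composition style the session describes; the `fetchGreeting` helper is purely illustrative, not Spring API:

```java
import java.util.concurrent.CompletableFuture;

class NonBlockingSketch {
    // Instead of blocking a thread while waiting for a result, we compose
    // transformations that run when the value arrives. WebFlux's Mono/Flux
    // offer a far richer version of this non-blocking pipeline style.
    static CompletableFuture<String> fetchGreeting(String name) {
        return CompletableFuture
            .supplyAsync(() -> name)               // simulate an async data source
            .thenApply(n -> "Hello, " + n + "!");  // transform without blocking
    }
}
```

The calling thread is free to do other work until the pipeline completes, which is the property that lets reactive servers scale with minimal threads.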
In this example-driven session, we're going to look at how to implement GraphQL in Spring. You'll learn how Spring for GraphQL builds upon GraphQL Java, recognize the use-cases that are best suited for GraphQL, and how to build a GraphQL API in Spring.
Typical REST APIs deal in resources. This is fine for many use cases, but it tends to be more rigid and less efficient in others.
For example, in a shopping API, it's important to weigh how much or how little information should be provided in a request for an order resource. Should the order resource contain only order specifics, with no details about the order's line items or the products in those line items? If all relevant details are included in the response, the resource breaks its boundaries and is overkill for clients that do not need the extra data. On the other hand, properly factoring the resource requires the client to make multiple requests to the API to fetch the related information it needs.
GraphQL offers a more flexible alternative to REST, setting aside the resource-oriented model and focusing more on what a client needs. Much as how SQL allows for data from multiple tables to be selected and joined in response to a query, GraphQL offers API clients the possibility of tailoring the response to provide all of the information needed and nothing that they do not need.
Real-world applications nowadays are designed using both art and science. What is the process of coming up with a solution that works, scales, and is resilient?
Why is it challenging to design a system for disruptive technologies?
System design is unstructured, and there are many ways to solve problems.
Gaining experience in new applications and technologies
Best practices change with time. The best way ten years ago can quickly become an anti-pattern.
In this talk, we will walk through a step-by-step guide to approaching system design using real-world applications.
Come prepared to design a system for the following applications interactively.
We will gain more knowledge with collective experience and best practices.
UBER System Design
NETFLIX System Design
INSTAGRAM System Design
YELP System Design
TWITTER System Design
Search Engines
Auto Suggestions / Recommendations System Design
Fraud Detection System Design
This talk is ideal for the following roles:
Architects
Technical Leads
Programmers
Integration Architects
Solution Architects
These days, you can’t swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans's prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking if a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We’ll also look at what we need to do to maintain a healthy micro(services)biome.
Microservices demand more than just a new architecture; they require a cultural shift. In this workshop, we'll cover:
With globally distributed applications (and teams!), the job of software architect isn’t getting any easier; applications are growing increasingly complex and architects are spread thin. You can’t be involved with every decision; you must empower your teams while ensuring they are making good choices. How do you do that? How can frameworks like Spring not only make your life easier but help your teams deliver robust applications to production? Spring Cloud has a veritable plethora of subprojects, from circuit breakers to functions, simplifying the task of building cloud native applications while making it easy for developers to adhere to best practices. At the same time, it can be overwhelming to get your head wrapped around all the features Spring offers. This talk will show how Spring allows architects to focus on the critical design decisions they need to make while ensuring developers are empowered to implement critical business use cases. Today’s cloud native applications have similar pitfalls; luckily, Spring is here to help you resolve them!
As we migrate towards distributed applications, it is more than just our architectures that are changing, so too are the structures of our teams. The Inverse Conway Maneuver tells us small, autonomous teams are needed to produce small, autonomous services. Architects are spread thin and can’t be involved with every decision. Today, we must empower our teams but we need to ensure our teams are making good choices. How do we do that? How do you put together a cohesive architecture around distributed teams?
This talk will discuss creating “paved roads”, well worn paths that we know works and we can support. We will also explore the importance of fitness functions to help our teams adopt appropriate designs.
Becoming a software architect is a longed-for career upgrade for many software developers. While the job title suggests a work day focused on technical decision-making, the reality is quite different. In this workshop, software architect Nathaniel Schutta constructs a real world job description in which communication trumps coding.
Discover the skill sets needed to juggle multiple priorities, meetings, and time demands
Learn why your best team leadership tool is not a hammer, but a shared cup of coffee
Hear the best ways to give and take criticism
Understand the necessity of writing effective email and formal architecture documents
Get tips for delivering confident career-building presentations to any audience
Review essential techniques for stakeholder management and relationship building
Explore the critical needs for architecture reviews and an effective process for conducting them
Through lecture and small group exercises, Nathaniel will help you understand what it means to be a successful architect. Working through various problems, attendees will have opportunities to think through architectural decisions and patterns, discuss the importance of non-functional requirements, and see why architects cannot afford to practice resume-driven design.
Development teams often focus on getting code to production, losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems, from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you have someone playing that role and we can all benefit from looking at the principles and practices that we can bring to bear on our projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
It's not just architecture—it's evolutionary architecture. But to evolve your architecture, you need to measure it. And how does that work exactly? How does one measure something as abstract as architecture?
In this session we'll discuss various strategies for measuring your architecture. We'll see how you know if your software architecture is working for you, and how to know which metrics to keep an eye on. We'll also see the benefits of measuring your architecture.
We'll cover a range of topics in this session, including:
Different kinds of metrics to measure your architecture
The benefits of measurements
Improving visibility into architecture metrics
Over the past few years, the basic idioms and recommended programming styles for Java development have changed. Functional features are now favored, using streams, lambda expressions, and method references. The new six-month release schedule provides the language with new features, like modules and local variable type inference, much more frequently. Even the new license changes in the language seem to complicate installation, usage, and especially deployment.
The purpose of this training course is to help you adapt to the new ways of coding in Java. The latest functional approaches are covered, including using parallel streams for concurrency and when to expect them to be useful. All the significant features added to the language will be reviewed and evaluated, with the goal of understanding what problems they were designed to handle and when they can be used effectively in your code.
The workshop will use Java 21. You can get that from any major vendor, including Oracle. If you don't have a preferred vendor, then https://adoptium.net/ offers pre-built OpenJDK binaries for free.
We'll use IntelliJ IDEA for coding, but nothing in the materials requires any particular IDE. Only the Community edition is necessary, though the instructor will be using the Ultimate edition.
We will also use Gradle as our build tool, but most of the major IDEs can create Gradle-based Java projects without additional installs. You are welcome to use Maven if you prefer, but the instructor may not be able to help if you run into issues.
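As a taste of the functional style the course covers, here is a small sketch of ours (not from the course materials) contrasting a sequential stream pipeline with a parallel one:

```java
import java.util.List;

public class StreamsDemo {
    // Uppercase every five-letter word using a single stream pipeline
    static List<String> fiveLetterUpper(List<String> words) {
        return words.stream()
                .filter(w -> w.length() == 5)
                .map(String::toUpperCase)
                .toList();
    }

    // The same shape as a parallel pipeline; correct because the operations
    // are stateless, but only faster when per-element work is substantial
    static long countStartingWith(List<String> words, String prefix) {
        return words.parallelStream()
                .filter(w -> w.startsWith(prefix))
                .count();
    }

    public static void main(String[] args) {
        List<String> names = List.of("alpha", "beta", "gamma", "delta");
        System.out.println(fiveLetterUpper(names));       // [ALPHA, GAMMA, DELTA]
        System.out.println(countStartingWith(names, "d")); // 1
    }
}
```

Note that switching `stream()` to `parallelStream()` is the easy part; knowing when the fork/join overhead pays off is exactly the kind of judgment the course addresses.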
The Spring Framework transformed enterprise Java development nearly two decades ago, making it easier to achieve common things such as transactions, security, loose coupling, and much more. Over the years, Spring has continued to rise to every challenge Java developers face, most recently addressing subjects such as reactive programming, cloud computing, and container deployment in Kubernetes. Meanwhile, Spring Boot makes easy work of Spring by employing (among other things) auto-configuration, runtime insight and management, and a practical convention for specifying application properties.
The releases of Spring Framework 6 and Spring Boot 3 bring exciting and useful new capabilities. With features like native compilation, improved observability and tracing, support for HTTP problem details, and declarative HTTP clients, as well as baselining on Java 17 and Jakarta EE 9, Spring is ready for a new generation of application development.
In this workshop, you'll start with a very simple Spring Boot application and learn to grow it into a fully functional application including a web front-end and data persistence. And you'll get hands-on experience with some of the most exciting new features in Spring 6 and Spring Boot 3.
You'll need…
In this example-driven presentation, we'll focus on how to build reactive APIs in Spring. We'll start with Spring WebFlux, a reactive reimagining of the popular Spring MVC framework for HTTP-based APIs. Then we'll have a look at RSocket, an intriguing new communication protocol that is reactive by design.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
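Spring's reactive types (Mono and Flux) require Spring on the classpath, but the underlying idea, composing callbacks instead of blocking a thread, can be sketched with the JDK's own CompletableFuture. The names below are ours, not Spring APIs:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // Simulated remote call; in a blocking model this would tie up a thread
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    // Non-blocking composition: the transform runs when the value arrives,
    // so no thread sits idle waiting on the result
    static CompletableFuture<String> greeting(int id) {
        return fetchUser(id).thenApply(name -> "Hello, " + name);
    }

    public static void main(String[] args) {
        // join() is used here only to print the result at the end
        System.out.println(greeting(42).join()); // Hello, user-42
    }
}
```

Reactor's Mono and Flux add backpressure and rich stream operators on top of this basic composition model.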
In this example-driven presentation, we'll focus on working with reactive data persistence. We'll start by seeing how to create reactive repositories for relational databases with Spring Data R2DBC. Then we'll explore non-relational reactive persistence for MongoDB and Cassandra.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
Shifting business priorities, working from home, layoffs, and the end of the 5-star catered office lunch spread. The world around us can feel like dodging one !$%$ storm after the other as we navigate the chaos of modern software engineering.
In this 60-minute talk, we will plot a safe route for weathering the storm by focusing on what we have control over as a team (and hopefully coming out stronger!).
Attendees will leave with 3 exercises they can facilitate with their teams to improve team operational health, build stronger compassion for one another, and align on a shared mission to propel through the toughest climates.
1-1s are ubiquitous in the software engineering industry, and great leaders cherish these discussions.
What makes 1-1s so valuable?
1-1s are just like any other meeting - they are used to exchange data - be it tactical project updates, personal updates, or perspectives on how a project is going. There is also implicit data exchanged, signals that are interwoven with each word that is spoken. These are our emotions.
Emotions can be felt and observed but can be difficult to interpret and process. Our brains are in overdrive when communicating with other humans as we take in these implicit signals along with the literal words being spoken. During 1-1s, this can intensify as our focus is entirely on the other person: What signals are they sending? How will I respond? What isn’t being said here?
Emotional intelligence is the ability to identify, understand, and manage our emotions to communicate more effectively with others. In this interactive discussion, we will explore how to leverage the four emotional intelligence skills to facilitate more productive and enjoyable 1-1s.
People tend to either love or hate regular expressions. I love them. They are powerful and can be written clearly. This session will start with the syntax for writing basic regular expressions (no experience needed) and how to call them from Java. As the regexes get longer, we will focus on readability. Next come more advanced features so you can become a regular expression power user.
You’ve probably heard the expression that when you have a hammer, everything seems like a nail. While I love regular expressions, they are not always the best choice, and we will also discuss when they are not the right tool for the job.
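For a flavor of the readability techniques the session covers, here is a small example of ours using Java's COMMENTS mode, which lets you annotate a pattern inline:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    // (?x) enables COMMENTS mode: whitespace is ignored and '#' starts
    // a comment, so long patterns can be documented piece by piece
    static final Pattern DATE = Pattern.compile(
            "(?x)\n"
            + "(\\d{4})    # year\n"
            + "-(\\d{2})   # month\n"
            + "-(\\d{2})   # day\n");

    // Returns the year of the first ISO-style date found, or null
    static String extractYear(String text) {
        Matcher m = DATE.matcher(text);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(extractYear("released on 2024-03-15")); // 2024
    }
}
```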
There are a variety of static analysis tools for Java to help improve your code, including Checkstyle, SpotBugs, PMD, and Sonar.
In this session, I will demonstrate each of them and show how to configure them. Then I will show some useful rules and why they are important. Finally, I will show how to write your own custom rules.
Sharing code and internal libraries across your distributed microservice ecosystem feels like a recipe for disaster! After all, you have always been told and likely witnessed how this type of coupling can add a lot of friction to a world that is built for high velocity. But I'm also willing to bet you have experienced the opposite side effects of dealing with dozens of services that have had the same chunks of code copied and pasted over and over again, and now you need to make a standardized, simple header change to all services across your platform; talk about tedious, friction-filled, error-prone work that you probably will not do! Using a variety of code-sharing processes and techniques like inner sourcing, module design, automated updates, and service templates, code reuse in your organization can become an asset rather than a liability.
In this talk, we will explore the architectural myth in microservices that you should NEVER share any code and explore the dos and don'ts of the types of reuse that you want to achieve through appropriate coupling. We will examine effective reuse patterns, including what a Service Template architecture looks like, while also spending time on the lifecycle of shared code and practically rolling it out to your services. We will finish it off with some considerations and struggles you are likely to run into introducing code reuse patterns into the enterprise.
Modern HTTP APIs power today’s connected world, acting as the core interface not only for developers, but also for the ever-growing ecosystem of machines, services, and now AI agents. As every organization is increasingly expected to produce and consume APIs at scale, the ability to design, build, deploy, and operate consistent, high-quality APIs has become a key competitive differentiator. With AI accelerating the need for composable, well-structured, and discoverable interfaces, API maturity is no longer optional—it’s essential. However, building and scaling effective API Design First practices across an enterprise is still fraught with manual processes, inconsistent standards, and slow governance models. To succeed, organizations must reimagine API Governance as a strategic enabler—one that prioritizes collaboration, stewardship, and automation.
In this session, we’ll explore the core stages of the API design lifecycle and share how to implement practical, modern governance models that increase productivity without sacrificing control. Drawing on real-world examples from SPS Commerce, as well as off-the-shelf tooling and custom solutions, we’ll show how to align your teams, accelerate delivery, and produce APIs that are robust, reusable, and ready for both human and AI consumers.
A lot of development teams have built out fully automated CI/CD pipelines to deliver code to production fast! Then you quickly discover that the new bottleneck in delivering features is their existence in long-lived feature branches, and no true CI is actually happening. This problem compounds as you start spinning up microservices, building features across your multi-repo architecture, and coordinating some ultra-fancy release schedule so it all deploys together. Feature flags provide you the mechanism to reclaim control of the release of your features and get back to short-lived branches with true CI. However, what you're not told about feature flags in those simple “if/else” getting-started demos is that there is an upfront cost to your development time, additional complexities, and some pitfalls to be careful of as you begin expanding feature flag usage to the organization. If you know how to navigate these complexities, you will start to unleash true velocity across your teams.
In this talk, we'll get started with some of the feature flagging basics before quickly moving into some practical feature flagging examples that demonstrate its usage beyond the basic scenarios as we talk about UI, API, operations, migrations, and experimentation. We will explore some of the hard questions around “architecting feature flags” for your organization.
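As a hint of what lies beyond the basic "if/else" demo, here is a minimal sketch (the class and flag names are ours, not any vendor's API) of per-user flag evaluation, the building block for gradual rollouts:

```java
import java.util.Map;
import java.util.Set;

public class FeatureFlags {
    // A real system would back this with a flag service or config store;
    // this in-memory map is a stand-in for illustration
    private final Map<String, Set<String>> enabledFor;

    FeatureFlags(Map<String, Set<String>> enabledFor) {
        this.enabledFor = enabledFor;
    }

    // Flags are evaluated per user, which is what enables canary releases
    // and percentage rollouts rather than all-or-nothing switches
    boolean isEnabled(String flag, String userId) {
        return enabledFor.getOrDefault(flag, Set.of()).contains(userId);
    }

    String checkoutPage(String userId) {
        // The branch itself is the easy part; the talk's point is the
        // lifecycle around it: naming, testing both paths, and cleanup
        return isEnabled("new-checkout", userId) ? "v2" : "v1";
    }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags(
                Map.of("new-checkout", Set.of("alice")));
        System.out.println(flags.checkoutPage("alice")); // v2
        System.out.println(flags.checkoutPage("bob"));   // v1
    }
}
```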
GitHub needs no introduction as the world's premier source code repository. However, over the past several years GitHub has transformed well beyond a great tool for managing source code. It now provides a compelling one-stop shop of capabilities as part of its platform that enables you to cut loose your disparate jungle of other tooling. Being aware of and learning how to effectively use this Swiss Army Knife of GitHub capabilities can substantially reduce your overall development costs while also reducing your team's cognitive overhead.
Join us for an exciting session where we dive deep into the GitHub toolchain, designed to supercharge developer productivity and unite your teams around a powerful engineering platform. Discover how to optimize pull request lifecycles with protected branch configurations, organizational rulesets, and merge queues. We'll also delve into security vulnerability detection using Dependabot and GitHub Advanced Security Code Scanning workflows that developers will love. Don't miss this opportunity to transform your development process and take your GitHub skills to the next level!
Over the last decade, DevOps has emerged as an influential business philosophy and practice, helping businesses drive high quality software to market faster. DevOps focuses on the elimination of bottlenecks that occur when development and operational resources are too divorced from one another. But what about friction in the development and test process? What about the delayed feedback cycles that come from slow builds and test flakiness? How can we reduce friction in areas that are outside of the focus of DevOps? That's where Developer Productivity Engineering (DPE) comes in. The presentation will include examples of DPE practices in action from Java projects using the Maven or Gradle build tool.
Attendees will walk away from this presentation with a better understanding of:
Please follow the workshop requirements here:
https://nfjs.nyc3.cdn.digitaloceanspaces.com/static/pdf/DPE_Workshop_Prerequisites.pdf
What if we could achieve completely ‘contactless’ software security scanning? As the lines between physical and digital security become blurrier and blurrier, software quality standards and testing methodologies must continue to keep pace. Software fuzzing has long been a trusted method for finding vulnerabilities that are difficult to discover using traditional methods.
The application of AI and ML to this field has already begun to bear very promising results. By leveraging deep learning techniques to improve our input corpus and better understand our program's states, we can shine a light on areas of the code logic that would be hidden from approaches like vulnerability scanning and static code analysis, and even traditional software fuzzing.
When you think of voice assistants like Amazon Alexa, you probably are thinking of a device on a shelf or a desk somewhere, tethered to a power outlet. But what if you could take Alexa with you wherever you go?
Several devices allow for mobile Alexa, including Echo Auto, Echo Frames, Echo Buds, Fossil Gen 6 watches, and the TalkSocket. You can even take Alexa with you on your phone using the Alexa app or on your Apple Watch using the Voice in a Can app. Being able to take Alexa with you opens a whole new world of possibilities, enabling voice application developers to create voice experiences that help users no matter where they are.
In this session, we'll see how to create Alexa applications (called “skills”) that can take advantage of Alexa while on-the-go. You'll learn how to gain permission to access the user's location and provide information relevant to that location. We'll also have a look at a real-world Alexa Skill that uses location awareness to enhance a visit to Disneyland or Disney World theme parks.
The promise of reactive programming models is that you can free yourself from the constraints of handling one request for each thread and realize increased throughput as a result. The only problem is that it requires a completely different set of APIs that many developers find counter-intuitive. What if you could achieve the same performance using thread-per-request APIs, and let the Java virtual machine handle the hard work of blocking when appropriate and executing platform threads when the time is right? Enter virtual threads, a key feature of Project Loom, previewed in JDK 19 and finalized in JDK 21.
In this session, we'll look at how different frameworks, such as Helidon and Quarkus, are using this powerful new feature to increase throughput without requiring reactive programming models.
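For a minimal taste of the model, here is a sketch of ours (requiring JDK 21) showing thread-per-task code written in a plain blocking style, with the JVM parking virtual threads instead of tying up OS threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Each submitted task gets its own cheap virtual thread; blocking in a
    // task parks the virtual thread, freeing its carrier platform thread
    static int runBlockingTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(10); // stand-in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        // A thousand "blocking" tasks complete in roughly one sleep interval
        System.out.println(runBlockingTasks(1000)); // 1000
    }
}
```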
One of the most common reasons for software defects is poor exception handling. The more complex the application, the more difficult it can be to track down the root cause of a bug. An exception at the service or database layer may manifest itself as unpredictable behavior at the user interface level. Simple coding errors or unexpected inputs may result in unnecessary and confusing error messages. The net result is an application that doesn't meet the user's expectations. These types of issues can be avoided by handling exceptions properly.
In this session, we'll look at examples of what happens when exceptions aren't handled, and how you can avoid unexpected defects by following a few key principles and using some discipline. We'll also examine the importance of establishing logging standards, and look at how to properly configure error pages and use the error handling facilities in Spring and Java EE applications.
Outline:
Bad things happen; expect them
Exception defined
Exception handling aphorisms:
If you can recover from it, catch it and tell the user (if necessary)
Use a logging framework
Do not log an exception more than once
Throw meaningful exceptions
Don’t eat exceptions
Centralize exception handling
Handling exceptions in browsers
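Several of the outline's principles (throw meaningful exceptions, don't eat them, wrap and rethrow once) can be sketched in a few lines; the order service and exception names here are hypothetical:

```java
public class OrderService {
    // A meaningful, domain-level exception: callers see "order failed",
    // not a raw low-level error, and the cause is preserved for the logs
    static class OrderPersistenceException extends RuntimeException {
        OrderPersistenceException(String orderId, Throwable cause) {
            super("Could not save order [" + orderId + "]", cause);
        }
    }

    static void save(String orderId) {
        try {
            store(orderId); // hypothetical low-level persistence call
        } catch (IllegalStateException e) {
            // Wrap and rethrow once; a centralized handler logs it.
            // Catching and merely printing here would "eat" the exception.
            throw new OrderPersistenceException(orderId, e);
        }
    }

    private static void store(String orderId) {
        if (orderId.isEmpty()) {
            throw new IllegalStateException("empty id");
        }
    }

    // Helper showing what a caller (or centralized handler) observes
    static String messageWhenSavingEmptyId() {
        try {
            save("");
            return null;
        } catch (OrderPersistenceException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(messageWhenSavingEmptyId()); // Could not save order []
    }
}
```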
If you talk to the most well-known programmers, whether they’re people within your organization or internationally recognized experts, you’ll find something in common: they’re productive. Usually, it isn’t just dumb luck. More often than not, they’ve focused on becoming more productive. There are dozens of methodologies that claim to increase productivity, but there’s a clear winner amongst highly productive software developers: Getting Things Done (GTD). GTD, originally described in productivity guru David Allen’s bestseller of the same title, describes a set of behaviors that, when followed regularly, reduce stress and help you become more productive at the same time.
This session looks at how programmers, architects, and technical managers can apply GTD principles to improve the productivity of individuals and the group as a whole. In addition to discussing the core principles of GTD, this session also examines tools that can be used to implement the methodology as well as similarities to agile software development practices.
Refactoring is an essential skill for working with legacy code. Knowing code smells and how to correct them through refactoring is essential for maintaining an existing code base. But what are we refactoring to? We don't want to refactor just for the sake of it. This is where design patterns really shine. When we refactor to a proper design pattern, we are putting together a real solution to the problem rather than just moving code around to make it a little more readable.
This session will show you how to use your full developer toolbox so you can go from code smell to refactoring recipe to design pattern to solution.
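As a tiny illustration of moving from smell to pattern (the pricing example is ours), here is a type-code switch refactored to a table of strategies:

```java
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

public class PricingRefactor {
    // Smell: a type-code conditional that grows with every new tier
    static double discountedBefore(String tier, double price) {
        switch (tier) {
            case "gold": return price * 0.8;
            case "silver": return price * 0.9;
            default: return price;
        }
    }

    // Refactored: each tier's rule is a strategy; adding a tier means
    // adding an entry, not editing a conditional
    static final Map<String, DoubleUnaryOperator> STRATEGIES = Map.of(
            "gold", p -> p * 0.8,
            "silver", p -> p * 0.9);

    static double discountedAfter(String tier, double price) {
        return STRATEGIES.getOrDefault(tier, DoubleUnaryOperator.identity())
                .applyAsDouble(price);
    }

    public static void main(String[] args) {
        System.out.println(discountedAfter("gold", 100.0)); // 80.0
    }
}
```

The behavior is unchanged, which is the point: the refactoring targets a known pattern rather than shuffling code for its own sake.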
Knowledge graphs are a rapidly emerging concept for machine-processable models of complex and dynamic domains. They represent the intersection of Web architecture and information. If your organization wants to resolve its most pernicious data integration problems or facilitate machine learning initiatives, knowledge graphs are likely to be part of your future.
We will discuss the emergence of Knowledge Graphs as a solution to a missing capability in most organizations' IT strategies. We will discuss how some of the biggest organizations in the world are heading in this direction, its impact on API design, and more. We will focus on specific tools, platforms, and standards that are making Knowledge Graphs a crucial part of your overall solutions.
Decentralization and content-based addressing represent a significant advancement in the development of stable, scalable, censorship-resistant systems. They require a remarkable amount of architectural thinking to work effectively. The InterPlanetary File System (IPFS) is an umbrella project covering a cornucopia of extremely well designed layers that will prop up and extend the Web in many new directions. Come hear about a future that looks a little bit like combining the Web with Git, BitTorrent, self-certifying file systems, distributed hash tables, and more.
We will discuss the architectural layers of this approach and what each brings to the table.
The LLVM Project has been around for over a decade but is increasingly important as a compiler infrastructure offering reuse and portability, shared optimizations, and a faster time to market. It achieves this by having a pluggable, layered architecture compared to other compiler infrastructures. Many newer programming languages have chosen it as the basis of their toolchains, including Swift, Julia, Rust, and more.
In this talk, we will talk about the tools, components and layers of LLVM and how it is helping usher in new visions of portability and reuse.
Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it, or how to start to get into this huge and rapidly-changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics, and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples, and help you build a plan for diving deeper into this exciting new field.
We will cover:
Please install Anaconda for Python 3 before the workshop if possible. https://www.anaconda.com/download
Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it, or how to start to get into this huge and rapidly-changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics, and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples, and help you build a plan for diving deeper into this exciting new field.
We will cover:
Please install Anaconda for Python 3 before the workshop if possible. https://www.anaconda.com/download
You’ve taken your first steps into Node.js. You’ve learned how to initialize your projects, you’ve played with some dependencies, and you’re ready to get into some serious Node work.
In this session, we’ll dive further into Node as a framework. We’ll learn how to master Node’s inherently asynchronous nature, take advantage of Node’s events and streams capabilities, and learn about sophisticated Node deployments at scale. Participants will leave with a richer understanding of what Node has to offer and higher confidence in dealing with some of Node’s more difficult concepts.
You’ve taken your first steps into Node.js. You’ve learned how to initialize your projects, you’ve played with some dependencies, and you’re ready to get into some serious Node work.
In this session, we’ll dive further into Node as a framework. We’ll learn how to master Node’s inherently asynchronous nature, take advantage of Node’s events and streams capabilities, and learn about sophisticated Node deployments at scale. Participants will leave with a richer understanding of what Node has to offer and higher confidence in dealing with some of Node’s more difficult concepts.
While we know that different programming languages are good at different things and perform differently, it would be tempting to conclude that optimizations that work in one language work just as well in another. Unfortunately, that's not true.
In this talk, we'll learn about the different ways that language runtimes work, from interpreters to just-in-time compilers, from JavaScript to Python to Java. We'll explore the strengths and weaknesses of each approach and how to make the most of them.
Thankfully, Java garbage collectors have come a long way in the last 25 years. While the latest GCs, G1 and ZGC, usually just work, their inner workings can be harder to understand than those of the GCs that came before.
In this presentation, you'll learn basic garbage collection strategies used by the JVM starting with older collectors and following through the evolution to the modern GCs that we enjoy today.
The JVM can perform some marvelous feats of optimization but for most developers, its inner workings remain a mystery.
In this talk, we'll walk through how the JVM optimizes a seemingly simple piece of Java code. Starting with how the JVM decides what to compile and then going step-by-step through the different optimizations that are performed. In doing so, you'll learn how the JVM makes your code run fast, but also some things to avoid to keep it running fast.
Our modern JVMs and CPUs are capable of some amazing feats of optimization. In general, for day-to-day work, these optimizations just work, but they also mean that the optimal approach can be surprisingly unintuitive.
In this presentation, we'll examine some surprising performance anomalies. Through learning the mechanisms behind these performance paradoxes, you'll gain insight into how modern compilers and hardware work.
Performance is the number one feature Progressive Web Apps need in order to compete with native apps. To remove jank from the experience, the Chrome dev tools provide excellent insight into the root cause.
Let's explore how to find issues in your app and keep your PWAs feeling Native.
Making large, important technical decisions is a critical aspect of a software engineer's role. With the wide impact these decisions can have, it is essential to make the correct decision. Even more vital is ensuring the decision is made and communicated in a way that the team members impacted by it trust and buy-in to the decision. Otherwise, even the best decisions will never realize their full potential when executed.
This case study examines how Comcast has employed the Analytic Hierarchy Process (AHP), a decision-making framework developed in the 1970s, and adapted it for making technical and non-technical decisions both large and small. We will cover the key aspects that have made it successful for engineering teams, what we learned from our early mistakes, signs that the decision-making process you use is working effectively, and how you can easily leverage the AHP for your decisions.
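To make the framework concrete, here is a toy sketch of the AHP's core step, deriving priority weights from a pairwise comparison matrix via the common row geometric mean approximation; the criteria and judgment values are invented:

```java
public class AhpDemo {
    // Each cell [i][j] says how strongly criterion i beats criterion j on
    // the 1-9 AHP scale; the matrix is reciprocal: a[j][i] = 1 / a[i][j]
    static double[] weights(double[][] pairwise) {
        int n = pairwise.length;
        double[] w = new double[n];
        double total = 0;
        for (int i = 0; i < n; i++) {
            double product = 1;
            for (int j = 0; j < n; j++) {
                product *= pairwise[i][j];
            }
            w[i] = Math.pow(product, 1.0 / n); // geometric mean of the row
            total += w[i];
        }
        for (int i = 0; i < n; i++) {
            w[i] /= total; // normalize so the weights sum to 1
        }
        return w;
    }

    public static void main(String[] args) {
        // cost vs. reliability vs. ease-of-use; cost is judged most important
        double[][] judgments = {
                {1, 3, 5},
                {1.0 / 3, 1, 2},
                {1.0 / 5, 1.0 / 2, 1}};
        for (double w : weights(judgments)) {
            System.out.printf("%.3f%n", w); // cost gets the largest weight
        }
    }
}
```

A full AHP also checks the consistency of the judgments; this sketch covers only the weighting step.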
Web Components are a set of web platform APIs that allow you to create new custom, reusable, encapsulated HTML tags which can be shared across frameworks.
This talk provides a deep introduction into how to author Web Components with LitElement, the most popular library for building fast, lightweight web components.
We’ll cover how LitElement reduces the code you have to write to build your own Custom Elements complete with scoped styles that Shadow DOM provides, and reactive properties that you expect from standard browser components. You’ll learn how lit-html, the rendering library for LitElement, makes writing your components easy while at the same time only re-rendering the specific content needed, not entire DOM subtrees (or virtual DOM) that other frameworks rely on.
You'll walk away with a solid understanding of how you could leverage Lit to build shareable components, design systems, and/or a full Progressive Web App.
Web Components allow developers to create reusable components without a framework.
During this talk we’ll learn about the Custom Elements, Template, and Shadow DOM specifications with code examples, plus tools like Angular that help you utilize these new APIs. We’ll also cover an example custom element that Comcast is using across all of its sites for millions of users, show off component libraries, and demonstrate how easy they are to integrate into existing sites.
Let’s connect with PWAs! Progressive Web Apps seek to bridge the gap between native Android & iOS apps and the Web. Learn about the APIs available in today’s mobile devices that enable you to leverage web technologies to deliver app-like experiences without having to write a line of Kotlin or Swift.
We’ll cover many of the APIs that make Progressive Web Apps possible while building a dating web app for cats. Topics include:
You’ll leave this full-day workshop armed with the hands-on experience to deliver a PWA that starts fast and stays fast.
Workshop materials are available at:
http://tinyurl.com/pwa-workshop
Please be sure to fill out the pre-workshop survey linked from there prior to the workshop
Enterprise Architecture helps describe the current state of an organization and build a roadmap to the future. Come prepared to solve many Enterprise Architecture challenges.
As part of the journey, we will explore TOGAF to build our architecture. First, we will create a Baseline Architecture. Next, we will explore the path for the Target Architecture. Finally, after identifying gaps between the two, we will apply a step-by-step process to prepare a roadmap.
“Organizations no longer want their enterprise architecture (EA) practice to be focused on standards, structure and control,” says Marcus Blosch, research vice president at Gartner.
“They want an EA practice that is focused on driving business outcomes, working in a flexible and creative way to help define the future and how to get there.”
We will explore the following domains:
– Data
– Technology
– Application
– Business
This talk will help you build a long-term IT Strategy which matches your Business Strategy.
From Amazon and Apple to Zappos and Zillow and almost every e-commerce and content-serving website in between, Lucene is the heart and soul powering their search and discovery capabilities. While Lucene has been around for a while, it has kept up with modern-day scale and the latest search innovations. Developers often come to know Lucene through the engines and products that embed it, such as Elasticsearch, Solr, Lucidworks Fusion, and Atlas Search.
This talk will peel away the layers often surrounding Lucene; let’s take a look at what it can do and how it can be leveraged effectively in your projects.
Code quality is an abstract concept that fails to get traction at the business level. Consequently, software companies keep trading code quality for new features. The resulting technical debt is estimated to waste up to 42% of developers' time, causing stress and uncertainty, as well as making our job less enjoyable than it should be. Without clear and quantifiable benefits, it's hard to build a business case for code quality.
In this keynote, Adam takes on the challenge by tuning the code analysis microscope towards a business outcome. We do that by combining novel code quality metrics with analyses of how the engineering organization works with the code. We then take those metrics a step further by connecting them to values like time-to-market, customer satisfaction, and road-map risks. This makes it possible to a) prioritize the parts of your system that benefit the most from improvements, b) communicate quality trade-offs in terms of actual costs, and c) identify high-risk parts of the application so that we can focus our efforts on the areas that need them the most. All recommendations are supported by data and brand new real-world research. This is a perspective on software development that will change how you view code. Promise.
If only it were so easy! Leadership is a thing into which many find themselves thrown, and to which many others aspire—and it is a thing which every human system needs to thrive. Leading teams in technology organizations is not radically different from any other kind of organization, but does tend to present a common set of patterns and challenges. In this session, I’ll examine them, and provide a template for your own growth as a leader.
We’ll cover the following:
The relationship between leadership, management, and vision
Common decision-making pathologies and ways to avoid them
Strategies for communication with a diverse team
The basics of people management
How to conduct meetings
How to set and measure goals
How to tell whether this is a vocation to pursue
No, you will not master leadership in this short session, but we will cover some helpful material that will move you forward.
Mob Programming is a style of programming in which the entire team sits together and
works on a single task at a time. Teams that have worked this way have found that
many of the problems that plague normal development simply melt away, possibly because communication and learning increase. Teams also find that the quality of their code increases, as does their capacity to create. Best of all, teams end up happier and more cohesive.
In this session we introduce the core concepts of mob programming and then get hands-on, mobbing on a coding kata.
GitHub Actions is the popular automation platform that integrates with your GitHub repositories to easily provide Continuous Integration/Delivery/Deployment and more. If you're someone working with GitHub, there is a significant benefit to using this automation platform integrated in the GitHub ecosystem.
But, what if you have already invested in another CI/CD automation platform such as Jenkins? Is it worth the effort to move? What are the advantages and disadvantages? How do you best go about it?
Join DevOps director and author of “Learning GitHub Actions” Brent Laster to understand the different dimensions and options you have when comparing Jenkins and GitHub Actions, and how best to proceed if you choose to migrate from Jenkins to GitHub Actions.
In this 90-minute session, you'll learn:
Prerequisites: Familiarity with Jenkins
GitHub Actions is the popular automation platform that integrates with your GitHub repositories to easily provide Continuous Integration/Delivery/Deployment and more. But, as with any integration that has access to your source code and can execute automation related to it, there is a very real risk of incurring security issues.
Join DevOps director and author of “Learning GitHub Actions” Brent Laster to understand the different risk dimensions you have when using GitHub Actions and how to best shield your repositories, workflows, and actions against them.
In this 90-minute session, you'll learn:
– Security by configuration - implementing appropriate controls and settings to govern what can run and when
– Security by design - leveraging tokens and secrets to secure data; guarding against common threats such as untrusted input; securing dependencies
– Security by monitoring - reviewing changes especially when coming through pull requests; scanning; monitoring execution
Prerequisites: Good working knowledge of GitHub Actions
Although Java originally promised write once, run anywhere, it failed to fully deliver on that promise. As developers, we can develop, test, and build our applications into WAR or executable JAR files and then toss them over the wall to a Java application server and Java runtime that we have no control over, giving us zero confidence that the application will behave the same as when we tested it.
Containers fulfill the write-once, run anywhere promise that Java wasn't able to, by packaging the runtime and even the operating system along with our application, giving greater control and confidence that the application will function the same anywhere it is run. Additionally, containers afford several other benefits, including easy scaling, efficiency in terms of resource utilization, and security by isolating containers from their host system and from other containers.
While deploying Spring applications in containers has always been possible, Spring Boot makes it easier to containerize our applications and run them in container architectures such as Kubernetes. Spring Boot's support for containerization includes two options: Creating containers based on buildpacks or using layers as a means of modularizing and reducing the size of our application deployments. Moreover, new components in the Spring ecosystem can make your Spring applications Kubernetes-savvy so that they can take advantage of what a containerized architecture has to offer.
In this example-driven session, we're going to look at how to create and deploy Spring applications as container images and deploy them into a Kubernetes cluster. Along the way, we'll also get to know a few of the most useful tools that a Spring developer can employ in their development workflow when building containerized Spring applications. We'll also see how to apply patterns of Spring Cloud–such as configuration, service discovery, and gateways–using native Kubernetes facilities instead of Spring Cloud components. And we'll look at how components of the Spring ecosystem can work with your Spring applications to enable them to thrive in a Kubernetes cluster.
Test your setup: Make sure that the Docker Desktop is running and then type “kind create cluster”. It should take a minute or so, but then you should be able to type “kubectl config current-context” and see “kind-kind” listed.
Managing Kubernetes manually is hard. Successfully updating your deployments requires following multiple steps while paying attention to myriad details—even a simple mistake can drastically affect your cluster and the applications running in it. And finding the source of problems and rolling back the state of a system can be nearly impossible when changes are made by human operators.
The GitOps pattern eliminates these issues. With GitOps you manage the state of your deployments in text files that can be stored, tracked, reviewed, etc.—in other words, treated like any other file in Git. And an “Ops” function automatically ensures that your Kubernetes system is set to your desired state. GitOps removes all the pain points and exposure that come from human interactions and automates the entire process.
Join DevOps director, open-source author, and trainer, Brent Laster to explore GitOps and learn about using Argo CD to implement GitOps for your Kubernetes deployments.
In this 90-minute session, you'll learn:
Prerequisites: Basic working knowledge of Git and Kubernetes
Building on the success of DevOps practices, which already employ the Theory of Constraints to tackle bottlenecks, Developer Productivity Engineering (DPE) emerges as the next natural progression. DPE takes a step further by optimizing workflows, automating tasks, and providing real-time feedback to developers, keeping software delivery nimble and efficient.
In this engaging talk, we'll explore how DPE enhances the DevOps framework, streamlining the development process throughout the software delivery lifecycle. See why mastering DPE is essential for all engineers, including Platform and Site Reliability Engineers, as it aligns with core principles like reducing toil, promoting automation, and implementing observability, all while keeping bottlenecks at bay.
With a touch of wit and insight, learn practical tips to harness DPE's power and elevate your software development game. Join us as we reframe the DevOps landscape, revealing the importance of DPE as the next step in our software development evolution, ensuring continued success and growth by effectively managing constraints and bottlenecks.
Dive into a world of wit and wisdom with “Developer Productivity – DIY (with ChatGPT) or How I Learned to Stop Worrying and Love the AI,” a conference talk that subtly weaves humor with practical tips for boosting your development prowess. Our spirited speaker will engage in a lively conversation with ChatGPT, the clever AI language model, to uncover innovative strategies for optimizing your development process, from shortening build feedback loops to conquering flaky tests and caching build results like a master.
Be captivated as our speaker embraces the challenge of implementing ChatGPT's suggestions in a live coding session that promises to entertain and enlighten without explicitly flaunting its humor. This engaging exploration of AI-assisted development will leave you with a fresh perspective on productivity and a subtle smile on your face.
By the end of this charming adventure, you'll have a treasure trove of valuable tips and a newfound appreciation for the delightful synergy between AI and your development process. Get ready to revolutionize your workflow and forge a lasting bond with your new AI confidant, ChatGPT.
This talk will be tailored to Java developers as we delve into the practical applications of AI tools to ease your software development tasks. We'll explore the capabilities of GitHub Copilot used as a plugin for IntelliJ IDEA and VSCode. We'll also play with GPT4 and examine ways it can help.
It's often said that AI tools will not replace existing developers, but that a developer with those tools will have an advantage over developers without them. Join us as we try to demystify the world of AI for Java developers, equipping you with practical skills to incorporate these tools into your development workflow. Note that this is a rapidly changing field, and the talk will evolve to work with the latest features available.
Prerequisites:
– OpenAI: To use OpenAI services, you need to register for a developer key at https://platform.openai.com.
– Ollama: The installer is located at https://ollama.com, and is available for macOS, Linux, and Windows. You will also need the gemma2 and moondream models; the command to get each is ollama run gemma2 (and the same for moondream). You can also use pull instead of run.
It’s impossible to follow a leader who isn’t moving forward. They wouldn’t be a leader; they would be a stander or an observer. So, if you aren’t personally growing and improving, why do you think anybody wants to follow YOU? Personal growth is one of the most critical characteristics of a leader others desire to follow. So how can you develop a growth mindset? What areas should you focus on growing in? What habits can you develop to help you continuously grow daily?
In this session, we will focus on proven, effective, and actionable tactics, resources, and tools to elevate your leadership now.
Whether you are a programmer, a lead, an architect, a technical manager, or just a nice, simple human being, your day starts and ends with making decisions. It involves making many small decisions and may involve making some big ones too.
In this keynote we will talk about the art of decision making, the consequences of the choices we make, and tie that into the everyday architecture and design of enterprise systems.
In this ½ day course, author, trainer, and DevOps Director Brent Laster will take you beyond the basics of Kubernetes to understand the advanced topics you need to know to ensure your success with K8s.
Through plain and simple explanations and hands-on labs, you’ll learn what key concepts such as RBAC, admission controllers, affinity, and taints and tolerations mean, and how to use them. You’ll learn tips to debug your Kubernetes deployments and how to leverage probes to ensure your pods are ready and healthy – and what happens when they aren’t.
Along the way, we’ll give you hands-on experience and time to play with these concepts in a simple minikube environment running on your own virtual machine that you can keep as a reference environment after the course.
This course has been reworked to offer an option of doing it via a GitHub Codespace environment. This will be the simplest option for doing the labs assuming a solid internet/wifi connection. More details will be provided in the session, but to use this option you only need a browser and GitHub userid.
Otherwise, only if you don't want to use the GitHub Codespace options, you will need to have a Kubernetes environment setup and certain other pieces and tools installed. The setup document (adv-k8s-setup.pdf) in https://github.com/skilldocs/adv-k8s has guidance to help with this.
To be able to do the labs, you can choose to use EITHER a pre-configured VM or setup your own environment.
If you choose the pre-configured VM, it will have everything you will need. To use it, you will need to get and run the application VirtualBox (virtualbox.org) AND download the pre-configured VM image from EITHER of the two sites below:
https://www.dropbox.com/s/8gudva4t5ir07a8/adv-k8s-2.1.ova?dl=0
or
https://bclconf.s3.us-west-2.amazonaws.com/adv-k8s-2.1.ova
This is a large image - 3.5G. The setup document at https://github.com/skilldocs/adv-k8s/blob/main/adv-k8s-setup.pdf has more guidance on getting things set up in VirtualBox if you need it.
If you do not want to (or can't) run VirtualBox with the image, then you can set up your own Kubernetes environment by following the steps in https://github.com/skilldocs/adv-k8s/blob/main/adv-k8s-setup.pdf for the manual environment setup. NOTE: You should not use Kubernetes 1.25 with this setup.
NOTE: Because environments will vary, not all labs are guaranteed to work with a manual (non-VM) setup.
Also you will want to have access to the labs during the class at https://github.com/skilldocs/adv-k8s/blob/main/adv-k8s-labs.pdf
We spend far and away more time reading code than we do writing it, but we spend far more time and money learning how to write it better. We are all familiar with the feeling of looking at a piece of code and having no idea what it does. Trying to debug it is a real chore! What if you could confidently approach unfamiliar code knowing that you can find what you need to in short order?
This session will show you some tools and techniques to be able to read and comprehend unfamiliar code more quickly. We will also go over some debugging techniques to help find bugs more quickly in unfamiliar code.
Athletes and artists practice a lot before they go on stage and perform. It's impossible to practice and learn new skills while we are performing our day job. We have workouts for our bodies.
This talk will introduce some workouts for our minds to sharpen the skills we use daily and help us to learn new skills.
Over the last few years, JavaScript has introduced a slew of new features—fat-arrow functions, maps and sets. But wait! What's a WeakMap? And there's a WeakSet? And what exactly does Proxy do, and why do we need it given JavaScript's dynamic nature? And then there's Proxy's cousin, Reflect. Are these features more the result of feature envy than of any pragmatic value?
In this fast-paced live coding session, Raju Gandhi will demonstrate how you can use these features in combination to create a reactivity system very similar to the ones used by modern view libraries like Vue.js. Of course, if you are interested in Vue.js, or already use it, you have even more reason to see how Vue performs some of its magic. Win win!
In this session, we will explore the APIs of several new additions to JavaScript, as well as see how they can be used to build something interesting, including:
If you've heard of these constructs and want to see how to build something really cool, you need to attend this session. (And as a bonus, you'll see how one of the world's most popular frontend libraries does some of its magic.)
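To give a taste of how these pieces fit together, here is a minimal, illustrative sketch of a reactivity system in the spirit of what libraries like Vue.js build with Proxy, Reflect, and WeakMap. The names used here (reactive, effect, track, trigger) are chosen for this example and are not any library's actual internals.

```javascript
// WeakMap keyed by target object -> Map keyed by property -> Set of effects.
// A WeakMap lets the bookkeeping be garbage-collected with the target object.
const targetMap = new WeakMap();

let activeEffect = null;

// Run fn once so its property reads are tracked as dependencies.
function effect(fn) {
  activeEffect = fn;
  fn();
  activeEffect = null;
}

// Record that the currently running effect depends on target[key].
function track(target, key) {
  if (!activeEffect) return;
  let depsMap = targetMap.get(target);
  if (!depsMap) targetMap.set(target, (depsMap = new Map()));
  let deps = depsMap.get(key);
  if (!deps) depsMap.set(key, (deps = new Set()));
  deps.add(activeEffect);
}

// Re-run every effect that read target[key].
function trigger(target, key) {
  const deps = targetMap.get(target)?.get(key);
  if (deps) deps.forEach((fn) => fn());
}

// Wrap an object so reads track dependencies and writes trigger effects.
function reactive(obj) {
  return new Proxy(obj, {
    get(target, key, receiver) {
      track(target, key);
      return Reflect.get(target, key, receiver); // preserves getter semantics
    },
    set(target, key, value, receiver) {
      const ok = Reflect.set(target, key, value, receiver);
      trigger(target, key);
      return ok;
    },
  });
}

// Usage: the effect re-runs whenever state.count changes.
const state = reactive({ count: 0 });
let doubled;
effect(() => { doubled = state.count * 2; });
state.count = 5;
console.log(doubled); // prints 10
```

Because the dependency bookkeeping lives in a WeakMap keyed by the wrapped object, it can be garbage-collected along with that object: one reason these "weak" collections exist at all.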
Containers are everywhere. Of course, a large part of the appeal of containers is the ease with which you can get started. However, productionizing containers is a wholly different beast. From orchestration to scheduling, containers offer significantly different challenges than VMs.
This is especially true of security: securing and hardening containers is very different from securing and hardening VMs.
In this two-part session, we will see what securing containers involves.
We'll be covering a wide range of topics, including
Understanding Cgroups and namespaces
What it takes to create your own container technology as a basis of understanding how containers really work
Securing the build and runtime
Secrets management
Shifting left with security in mind
A Docker image is the artifact of the container world. Leaner images allow for quicker build times, lower resource consumption (disk pressure and network usage), fewer attack vectors, and better performance when pulling or pushing images for storage or upon deployment. Lean images also produce smaller containers, which in turn require fewer resources at runtime, allowing for higher server density. Multi-stage Dockerfiles can help reduce the complexity of CI/CD pipelines by reducing the number of moving parts in building, testing, and producing a production-grade image. The key to building leaner (smaller) images with little build-time overhead is to understand how Docker uses the Union File System (UFS), how Docker builds (and when it busts) the cache, and how to use the Dockerfile specification to its fullest potential.
In this exercise-driven, hands-on workshop, we will dive deep, peeking under the hood to get a glimpse of the Union File System, and then proceed to look at the effects of many of the important Dockerfile instructions. We will see how best to use them, and highlight any caveats that we should be aware of.
By the end of this class you will have gained a keen understanding of how best to write your Dockerfiles, and how to effectively design and build lean images and containers.