In this hands-on workshop, you'll learn to create voice-first applications for both Amazon's Alexa and Google's Assistant platforms. We'll also cover creating visual UIs to accompany voice-first applications (for devices such as the Echo Show).
No prior experience with voice applications is required, and you do not even need to own a home assistant device to get started. Basic experience with Node.js is required. You'll also need developer accounts for both AWS and Google.
The way we communicate with our applications is an ever-evolving experience. Punch cards gave way to keyboards. Typing on keyboards was then supplemented by pointing and clicking with a mouse. And touch screens on our phones, tablets, and computers are now a common means of communicating with applications.
These all lack one thing, however: They aren’t natural.
As humans, we often communicate with each other through speech. If you were to walk up to another human and start tapping them, you’d likely be tapped (or punched) in response. But when we talk to our applications, we communicate on the machine’s terms, with keyboards, mice, and touch screens. Even though we may use these same devices to communicate with other humans, it’s really the machine we are communicating with—and those machines relay what we type, click, and tap to another human using a similar device.
Voice user interfaces (voice UIs) enable us to communicate with our applications in a human way. They give our applications the means to communicate with us on our terms, using voice. With a voice UI, we can converse with our applications in much the same way we might talk with our friends.
Voice UIs are truly the next logical step in the evolution of human-computer interaction. And this evolutionary step is long overdue. For as long as most of us can remember, science fiction has promised us the ability to talk to our computers. The robot from Lost in Space, the Enterprise computer on Star Trek, Iron Man’s Jarvis, and HAL 9000 (okay, maybe a bad example) are just a few well-recognized examples of science fiction promising a future where humans and computers would talk to each other.
Our computers are far more powerful today than the writers of science fiction could have imagined. And the tablet that Captain Picard used in his ready room on Star Trek: The Next Generation is now a reality in the iPad and other tablet devices. But only recently have voice assistants such as Alexa and Google Assistant given us the talking computer promised to us by science fiction.
For workshop prerequisites, please download and read the instructions at http://www.habuma.com/pwx2020/workshop-setup.pdf.
A laptop is not required for the exercises, but you will need one (or another electronic device) to view the online slides as well as the online exercises.
Have you been asked to take a component from one project and “just” put it into another project built with an entirely different JavaScript framework? Do you work at a diverse company with teams using Angular, React and Vue and are wondering how you can stop duplicating effort? Do you have framework fatigue and want to skip the cycle of learning yet another framework? If so, then Web Components are what you have been searching for!
Web Components are a set of web platform APIs that allow you to create new custom, reusable, encapsulated HTML tags that can be shared across frameworks. With libraries like Lit, built by Google to extend Web Components, it is now possible to easily create fully featured components that work with Angular, React, and Vue.
This workshop is a deep dive into Web Components and the latest updates from Lit, which provides lit-html & LitElement. You'll learn how to build your own components with both vanilla JavaScript and LitElement. We'll create a library of shared components and integrate them into an existing React application.
Specific topics covered during the workshop are:
We're looking forward to joining you for a full day of Web Components!
For full instructions on setting up your laptop with the appropriate project dependencies prior to the workshop, please visit https://tinyurl.com/pwx-wc-workshop
If you have any questions/issues, please reach out to us.
Are you losing revenue to performance? 53% of mobile site visits are abandoned if a page takes longer than 3 seconds to load. Pinterest reduced load times by 40% and saw a 15% increase in sign ups. Starbucks implemented a 2x faster time to interactive resulting in a 65% increase in rewards registrations. AliExpress reduced load by 36% and saw a 10.5% increase in orders.
Performance is important. Tooling can be hard. Do flame charts intimidate you? Come learn how to audit and fix common performance issues using Chrome DevTools, Lighthouse, PageSpeed Insights, and webpagetest.org.
During this hands-on, full-day workshop, you will learn how to:
We will profile real applications to both learn the tools of measurement as well as see real performance problems in action. By the end of this workshop, you will be familiar with the following performance concepts. Many will be covered in-depth with exercises, and others will be covered in an overview with resources to learn more.
Prerequisites: To attend this workshop, you must already have a working understanding of JavaScript, HTML, CSS, git, and the command line, including installing npm packages and running npm scripts (or yarn). You must also have a basic understanding of Chrome DevTools, including inspecting an element and using the console. You do not need advanced mastery of DevTools as we will be learning about the Network and Performance tabs plus other tools during this session. A basic understanding of webpack would be very helpful, but the concepts can still be learned during this segment of the workshop.
Preparation: Please come with a laptop ready for development. You must have Chrome and Node (v 8+) installed.
Prerequisites: To attend this workshop, you must already have a working understanding of JavaScript, HTML, CSS, and the command line. You must also have a basic understanding of Chrome DevTools, including inspecting an element and using the console. You do not need advanced mastery of DevTools as we will be learning about the Network and Performance tabs plus other tools during this session. A basic understanding of webpack would be helpful but is not required.
Preparation: Please come with a laptop ready for development. You must have Chrome and Node (v 8+) installed.
Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it, or how to start getting into this huge and rapidly changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics, and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples, and help you build a plan for diving deeper into this exciting new field.
We will cover:
Please install Anaconda for Python 3 before the workshop if possible. https://www.anaconda.com/download
At the end of this workshop, you will be comfortable with designing, deploying, managing, monitoring and updating a coordinated set of applications running on Kubernetes.
Distributed application architectures are hard. Building containers and designing microservices to work and coordinate together across a network is complex. Given limited resources, failing networks, defective software, and fluctuating traffic, you need an orchestrator to handle these variables. Kubernetes is designed to handle these complexities, so you do not have to. It's essentially a distributed operating system across your data center. You give Kubernetes containers and it will ensure they remain available.
Kubernetes continues to gain momentum and is quickly becoming the preferred way to deploy applications.
In this workshop, we'll grasp the essence of Kubernetes as an application container manager, learning the concepts of deployments, pods, services, ingresses, volumes, secrets, and monitoring. We'll look at how simple containers are quickly started using a declarative syntax. We'll build on this with a coordinated cluster of containers that make up an application. Next, we will learn how Helm is used for managing more complex collections of containers. See how your application containers can find and communicate with each other directly or use a message broker for exchanging data. We will play chaos monkey, mess with some vital services, and observe how Kubernetes self-heals back to the expected state. Finally, we will observe performance metrics and see how nodes and containers are scaled.
Come to this workshop to learn how to deploy and manage your containerized application. Along the way, you will see how Kubernetes effectively schedules your application across its resources.
Optionally, for more daring and independent attendees, you can also replicate many of the exercises on your local laptop with Minikube or Minishift. There are other Kubernetes flavors as well. However, if you run into trouble during the workshop, please understand we cannot deviate too far to meet your local needs. If you do want to try some of the material locally, this stack is recommended:
Some of the topics we will explore:
These concepts are presented and reinforced with hands-on exercises:
You will leave with a solid understanding of how Kubernetes actually works and a set of hands-on exercises you can share with your peers. Bring a simple laptop with a standard browser for a full hands-on experience.
With the quick-moving six-month Java release train, you, like many Java developers and organizations, may have remained on Java 8 waiting for the next Long-Term-Support (LTS) release. Well, Java 17 is here, so it is time to begin adopting and upgrading. Java 9 was a HUGE release with many impactful features such as the module system, jlink, jshell, and a handful of new Project Coin language features. While Java 10-17 were smaller in comparison by feature count, their influence will be felt. The var keyword, records, pattern matching, the Vector API, and container-awareness features, along with lambda, thread, and garbage collection enhancements, will improve development and operations.
This hands-on workshop will provide the knowledge and experience you need to be prepared to migrate your applications from Java 8 to Java 17 successfully.
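To give a flavor of the features named above, here is a small illustrative sketch (my own example, not from the workshop materials) that compiles and runs on Java 17:

```java
import java.util.List;

public class Java17Tour {

    // Records (Java 16): concise, immutable data carriers
    record Point(int x, int y) { }

    static String describe(Object obj) {
        // Pattern matching for instanceof (Java 16): test and bind in one step
        if (obj instanceof Point p) {
            return "Point at " + p.x() + "," + p.y();
        }
        return "something else";
    }

    public static void main(String[] args) {
        // Local-variable type inference with var (Java 10)
        var points = List.of(new Point(1, 2), new Point(3, 4));
        points.forEach(p -> System.out.println(describe(p)));
    }
}
```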
Java 11 (any distribution is fine; if you don't have a preference or don't know which to install, go with https://adoptopenjdk.net/) or Docker
IDE
git
Kotlin is one of the few multi-platform languages. You can compile Kotlin to Java bytecode, to Android, to WebAssembly, to native binaries for different operating systems, and to JavaScript. The language draws inspiration from many different languages. It is highly fluent, and code in Kotlin is concise, elegant, and easy to maintain. This workshop will get you up to speed on using Kotlin for your day-to-day programming.
The workshop will start with the language basics and quickly dive into some of the more advanced features. We will also look at ways to use compile-time metaprogramming and create fluent DSLs.
Please install the following:
Git
Kotlin 1.3.40
Java 6 or newer
Your favorite IDE or editor (IntelliJ IDEA Community Edition, for example)
George Santayana is famous for saying “Those who cannot remember the past are condemned to repeat it.” When SOA (Service-Oriented Architecture) was all the craze, everyone got excited about services but forgot about the data. This ended in disaster. History repeats itself, and here we are with microservices, where everyone is excited about services but, once again, forgets all about the data. In this session I will discuss some of the challenges associated with breaking apart monolithic databases, and then show techniques for effectively creating data domains and splitting apart a database. I consider the data side of microservices the hardest aspect of this architecture style. In the end, it's all about the data.
Agenda
In 250 BC Rome began its expansion into Carthage, and later into the divided kingdoms of Alexander, starting the rise of a great empire until its decline beginning around 350 AD. Much can be learned from the rise and fall of the Roman Empire as it relates to a similar rise and fall: microservices. Wait. Did I say “fall of microservices”? Over the past 5+ years microservices has been at the forefront of most books, articles, and company initiatives. While some companies have been experiencing success with microservices, most have been experiencing pain, cost overruns, and failed initiatives trying to design and implement this incredibly complex architecture style. In this session I discuss and demonstrate why microservices is so vitally important to businesses, and also why companies are starting to question whether microservices is the right solution. Sir Isaac Newton once said “What goes up must come down”; Blood, Sweat & Tears sang about this in their hit “Spinning Wheel”. Microservices is no exception. Come to this provocative session to learn about the real challenges and issues associated with microservices, how we might be able to overcome some of the technical (and business) challenges, and whether microservices is really the answer to our problems.
Have you ever wondered how to share data between microservices? Have you ever wondered how to share a single database schema between hundreds (or even thousands) of microservices (cloud or on-prem)? Have you ever wondered how to version relational database changes when sharing data in a microservices environment? If any of these questions intrigue you, then you should come to this session. In this session I will describe and demonstrate various caching strategies and patterns that you can use in microservices to significantly increase performance, manage common data in a highly distributed architecture, and even manage data synchronization from cloud-based microservices. I'll describe the differences between a distributed and a replicated cache. Through live coding and demos using Hazelcast and Apache Ignite, I'll demonstrate how to share data and how to do space-based microservices, leveraging caching to its fullest extent.
Agenda:
Software architecture is hard. It is full of tradeoff analysis, decision making, technical expertise, and leadership, making it more of an art than a science. The common answer to any architecture-related question is “it depends”. To that end, I firmly believe there are no “best practices” in software architecture because every situation is different, which is why I titled this talk “Essential Practices”: those practices companies and architects are using to achieve success in architecture. In this session I explore in detail the top 6 essential software architectural practices (both technical architecture and process-related practices) that will make you an effective and successful software architect.
This session is broken up into 2 parts: those essential architecture practices that relate to the technical aspects of an architecture (hard skills), and those that relate to the process-related aspects of software architecture (soft skills). Both parts are needed to make architecture a success.
Whether starting a new greenfield application or analyzing the vitality of an existing application, one of the decisions an architect must make is which architecture style to use (or to refactor to). Microservices? Service-Based? Microkernel? Pipeline? Layered? Space-Based? Event-Driven? SOA? Having the right architecture style in place is essential to the success of any application, big or small. Come to this fast-paced session to learn how to analyze your requirements and domain to make the right choice about which architecture style is right for your situation.
Agenda
Very few applications stand alone anymore. Rather, they are combined together to form holistic systems that perform complex business functions. One of the big challenges when integrating applications is choosing the right integration styles and usage patterns. In this session we will explore various techniques and patterns for application integration, and look at what purpose and role open source integration hubs such as Camel and Mule play in the overall integration architecture space (and how to properly use them!). Through actual integration scenarios and coding examples using Apache Camel, you will learn which integration styles and patterns to use for your system and how open source integration hubs play a part in your overall integration strategy.
Agenda:
Java is a language in evolution. There are a handful of language changes in Java 9 and 10, plus several JDK changes in 9, 10, 11, and 12. Some of these changes are significant in that they allow us to do things more effectively than before. The difference can be anywhere from reduced code to avoiding errors that come from verbosity. In this presentation we will explore the language changes first. Then we will visit the additions to the JDK. Along the way we will also look at a few things that have been removed from Java.
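As a small taste of what those language changes look like, here is an assumed example (not from the talk) showing two Java 9 refinements, private interface methods and the diamond operator with anonymous inner classes:

```java
import java.util.Comparator;

public class Java9Changes {

    interface Greeter {
        String greet(String name);                 // single abstract method

        default String politeGreet(String name) { // default method reuses a private helper
            return prefix() + greet(name);
        }

        private String prefix() {                  // private interface method (new in Java 9)
            return ">> ";
        }
    }

    public static void main(String[] args) {
        Greeter greeter = name -> "Hello, " + name;
        System.out.println(greeter.politeGreet("Java"));

        // Diamond operator with anonymous inner classes (also new in Java 9)
        Comparator<String> byLength = new Comparator<>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        };
        System.out.println(byLength.compare("nine", "ten"));
    }
}
```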
We will program with Java quite differently in the future than we do today. The reason is that Java is embracing asynchronous programming like never before. This will have a huge impact on how we create services and web applications. In this presentation we will look at what asynchronous programming is, what continuations are, how they get implemented under the hood, and how we can benefit from them.
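For orientation, the sketch below (my example, not the speaker's) shows the CompletableFuture style of asynchronous composition we use today; continuations aim to let us express the same logic in a plain, sequential style:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncQuote {

    static double fetchPrice(String symbol) {
        return 42.0;   // stand-in for a slow remote call
    }

    public static void main(String[] args) {
        CompletableFuture<Double> total =
            CompletableFuture.supplyAsync(() -> fetchPrice("ACME"))  // run on another thread
                             .thenApply(price -> price * 100);       // then transform the result

        total.thenAccept(t -> System.out.println("Total: " + t))
             .join();   // block only so the JVM doesn't exit before the callback runs
    }
}
```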
Java modules are the future. However, our enterprise applications have legacy code, lots of it. How in the world do we migrate from the old to the new? What are some of the challenges? In this presentation we will start with an introduction to modules and learn how to create them. Then we will dive into the differences between unnamed modules, automatic modules, and explicit modules. After that we will discuss some key limitations of modules, things that may surprise your developers if they're not aware of them. Finally we will discuss how to migrate current applications to use modules.
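For reference, an explicit module is described by a module-info.java file at the root of the source tree; the module and package names below are hypothetical:

```java
// module-info.java
module com.example.orders {
    requires java.sql;                // an explicit dependency on a platform module
    exports com.example.orders.api;   // only this package is readable by consuming modules
}
```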
Some developers simply hate type inference. And then there are others who love it. Neither group is entirely right. In Java we have been making extensive use of type inference for several years without realizing it. The introduction of “var” in Java 10 has stirred up some surprising debate. In this presentation we will step back and review type inference in Java. Then we will dive deep into type inference in Java 10 and 11. We will wrap up the presentation with good recommendations on when to use type inference and when to avoid it.
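A couple of assumed examples of the kind we'll discuss: places where var removes noise, and places where it hides information the reader needs:

```java
import java.util.ArrayList;
import java.util.List;

public class VarExamples {
    public static void main(String[] args) {
        // The type is obvious from the right-hand side, so var just removes noise (Java 10)
        var names = new ArrayList<String>();
        names.add("Ada");
        names.add("Grace");

        // Inference is not new: the diamond operator has inferred type arguments since Java 7
        List<String> copy = new ArrayList<>(names);

        // Here var hides useful information; an explicit type may read better
        var result = lookup();   // what does lookup() return? the reader has to go find out
        System.out.println(copy + " " + result);
    }

    static Object lookup() {
        return 42;   // stand-in for a call whose return type is not obvious at the call site
    }
}
```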
The functional style of programming was introduced in Java 8. If you are like the speaker, who spent decades on the imperative style, then the transition to functional style can be intimidating. In this presentation, we will learn the fundamentals of programming in the functional style and the set of tools we can reach for to solve problems as a series of state transformations. We will learn the how, but also the benefits along the way.
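As a hint of what “a series of state transformations” means in practice, here is a small assumed example contrasting the two styles:

```java
import java.util.List;

public class FunctionalStyle {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6);

        // Imperative style: mutate an accumulator inside a loop
        int total = 0;
        for (int n : numbers) {
            if (n % 2 == 0) total += n * n;
        }

        // Functional style: each step transforms the data; nothing is mutated
        int total2 = numbers.stream()
                            .filter(n -> n % 2 == 0)
                            .mapToInt(n -> n * n)
                            .sum();

        System.out.println(total + " == " + total2);
    }
}
```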
An evolutionary architecture supports incremental, guided change along multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This talk, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how architectural dimensions interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation that architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This keynote illustrates how to achieve evolutionary architectures and how to retrofit existing systems to support better evolution.
Patterns/antipatterns, techniques, engineering practices, and other details showing how to restructure existing architectures and migrate from one architecture style to another.
A common challenge facing many architects today involves restructuring their current architecture or migrating from one architectural style to another. For example, many companies start with monolithic applications for simplicity, but find they must migrate to another architecture to achieve different architectural characteristics. This session shows patterns/antipatterns, techniques, engineering practices, and other details showing how to make major changes to architectures. This session introduces a new measure, the architectural quantum, as a way of measuring and analyzing coupling and portability within architectures.
This session describes mechanisms to automate architectural governance at application, integration, and enterprise levels
A nagging problem for architects is the ability to enforce the governance policies they create. Yet, outside of architecture review boards or code reviews, how can architects be sure that developers follow their rules? This session describes mechanisms to automate architectural governance at application, integration, and enterprise levels. By focusing on fitness functions, architects define objective tests, metrics, and other criteria to ensure governance policies stick.
Building Evolutionary Architectures requires identifying and creating architectural fitness functions. This hands-on workshop defines fitness functions and provides group exercises to help identify and discover them.
According to the Building Evolutionary Architectures book, an architectural fitness function provides an objective integrity assessment of some architectural characteristic(s). This hands-on workshop provides examples of fitness functions and group exercises to identify, define, and implement a variety of fitness functions: atomic, holistic, continuous, triggered, temporal, and others.
Stories and lessons from architecture, design, process, and other sources, each illustrating important principles and pitfalls for modern architects.
Those who cannot remember the past are condemned to repeat it. –George Santayana
The past is never dead. It's not even past. –William Faulkner
Most developers pursue the Latest and Greatest with intense fervor, yet the history of engineering, including software projects, contains rich lessons that we risk repeating ad nauseam. This session recounts a variety of stories of projects that failed architecturally…and why. Ranging from the Vasa in 1628 to Knight Capital in 2012, each story tells of a mistaken interpretation of some fundamental architectural principle and the consequences–some good, some less so. I also look at the common threads in these stories, which resonate with problems many companies have but don't realize.
Struggling to get your website to load in less than 5 seconds on a mobile phone? Page switching a little sluggish? You're not alone! Most web developers can build a responsive site, but fail to meet performance requirements for mobile. Using the latest PRPL pattern and Progressive Web APIs, you can provide a compelling alternative to native apps, as long as performance remains your top feature.
This talk will cover the architecture of Xfinity xFi, an enterprise PWA for Comcast built with Web Components. We'll then dive into the Chrome performance tools used to cut xFi's loading time by more than half. You'll walk away knowing what it takes to create a successful PWA and how to find slowdowns in your app's startup.
If you haven’t explored Web Components yet, you’re missing out on a powerful tool that can greatly enhance reusability of common web elements throughout your websites and web applications. As Comcast has been updating our web properties to unify under a single UX, using Web Components with Lit has helped make that process much more efficient.
This session will introduce you to what exactly Web Components are and how to use them. We’ll also cover building Web Components with Lit, the most popular Web Component library. You’ll get to hear how Comcast is using the web platform to build its next generation single page apps & websites using the latest browser APIs.
You’ll also learn about how easy it is to onboard a team to using Lit, tips for sharing components with other websites & across teams, and best practices Comcast has established for efficient development of Web Components.
Development teams often focus on getting code to production, losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems, from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you have someone playing that role and we can all benefit from looking at the principles and practices that we can bring to bear on our projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
By now I bet your company has hundreds, maybe thousands of services; heck, you might even consider some of them micro in stature! And while many organizations have plowed headlong down this particular architectural path, your spidey sense might be tingling…how do we keep this ecosystem healthy?
In this talk, I will go beyond the buzzwords into the nitty gritty of actually succeeding with a service based architecture. We will cover the principles and practices that will make sure your systems are stable and resilient while allowing you to get a decent night's sleep!
Rich Hickey once said programmers know the benefits of everything and the trade-offs of nothing…an approach that can lead a project down a path of frustrated developers and unhappy customers. As architects though, we must consider the trade-offs of every new library, language, pattern, or approach, and quickly make decisions, often with incomplete information. How should we think about the inevitable technology choices we have to make on a project? How do we balance competing agendas? How do we keep our team happy and excited without chasing every new thing that someone finds on the inner webs?
As architects it is our responsibility to effectively guide our teams on the technology journey. In this talk I will outline the importance of trade-offs, how we can analyze new technologies, and how we can effectively capture the inevitable architectural decisions we will make. I will also explore the value of fitness functions as a way of ensuring the decisions we make are actually reflected in the code base.
If you've spent any amount of time in the software field, you've undoubtedly found yourself in a (potentially heated) discussion about the merits of one technology, language, or framework versus another. And while you may have enjoyed the technical debate, as software professionals we owe it to our customers (as well as our future selves) to make good decisions when it comes to picking one technology over another.
In this talk, I will explore what criteria we should consider when comparing technologies, how we can avoid burning platforms as well as what to do when we’ve reached a dead end. We will also apply these techniques to a current technology or two.
A Technology Radar is a tool that forces you to organize and think about near-term technology decisions, both for you and your company. This talk discusses using the radar for personal breadth development, architectural guidance, and governance.
ThoughtWorks' Technical Advisory Board creates a “technology radar” twice a year, a working document that helps the company make decisions about interesting technologies and where we spend our time. ThoughtWorks then started conducting radar-building exercises for our clients, which provides a great medium for technologists company-wide to express their opinions about the technologies they use every day. For companies, creating a radar helps you document your technology decisions in a standard format, evaluate technology decisions in an actionable way, and create cross-silo discussions about suitable technology choices. This session describes the radar visualization and how to conduct a radar-building session for yourself. After a brief introduction, the bulk of the workshop consists of attendees building a radar for the group, following the same procedure you'll use when you do this exercise at your company. At the end, we'll have created a unique radar for this event and practiced doing it yourself.
We build development teams based on an individual's ability to write code, but development of a software project of any significance is beyond the effort of a single person with a very particular set of skills. It requires a team whose members bring a broad array of skills. It requires social skills. It requires tools and alignment. It requires shared contextual models.
This session will distill a couple of decades of lessons learned in software consulting and engineering, along with some Ugandan fun, to uncover the true way to developing more with less.
Would Chuck Norris ask you to come hear him speak at a conference? No, he wouldn't. He would TELL you that you're coming, and then roundhouse kick you in the face if you gave him any more lip.
“What would Chuck Norris do?” is a philosophy this session will cover in depth. Other topics include: badass vs a-hole, human duck typing, the art of [not] caring, instrumentality, and what your facial hair says about you. You won't learn any new code in this session, but you might unleash a Pandora's box of awesomeness that will change the way you interact with your coworkers forever.
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus coding on logic that drives application requirements with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but continued on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And, along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with. It achieves this in three ways:
Altogether, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
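As a rough illustration of how little code that leaves (a generic sketch, not the session's example), a complete web application with a single Spring Boot starter dependency can be as small as:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication   // turns on component scanning and auto-configuration
@RestController
public class HelloApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello, Spring Boot!";
    }

    public static void main(String[] args) {
        // With spring-boot-starter-web on the classpath, this boots an embedded server
        SpringApplication.run(HelloApplication.class, args);
    }
}
```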
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
TBD
In this session, we'll explore the Spring Boot Actuator, a runtime component of Spring Boot that lets you peer inside a running application and, in some cases, even tweak configuration on the fly. We'll look at many of the Actuator's endpoints, learn how to customize and even create new endpoints, and see how to expose Actuator metrics to several popular instrumentation and monitoring systems.
Spring Boot makes developing applications with Spring easy work by offering auto-configuration for many common application scenarios. And with Spring Boot's starter dependencies, even an application's build file can be easily managed. But Spring Boot's powers don't end when the application is deployed. That's where the real fun begins.
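To hint at what creating a new endpoint involves, here is a hedged sketch of a custom Actuator endpoint; the endpoint id and class are made up, and the endpoint must still be exposed through the management.endpoints.web.exposure.include property:

```java
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "uptime")   // served at /actuator/uptime once exposed in configuration
public class UptimeEndpoint {

    private final long started = System.currentTimeMillis();

    @ReadOperation   // HTTP GET maps to read operations
    public long uptimeMillis() {
        return System.currentTimeMillis() - started;
    }
}
```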
In this example-driven presentation, we'll look at Spring Data REST, an extension to Spring Data that exposes your data repositories as a RESTful API, complete with hypermedia links. We'll start with essential Spring Data REST, but then go beyond the basics to see how to customize the resulting API to be more than just CRUD operations over HTTP.
Spring Data is a brilliant extension to the Spring Framework that makes simple work of exposing a database (any kind of database) via repositories. But as is often the case, your application's data doesn't stay within the application. It is consumed by external applications or from a JavaScript client in the web browser. That means we'll need to build a RESTful API around those repositories.
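As a flavor of how small that API surface can be, here is an assumed sketch (Order is a hypothetical JPA entity with a status property); with Spring Data REST on the classpath, this interface alone yields a hypermedia CRUD API at /orders:

```java
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(path = "orders")
public interface OrderRepository extends CrudRepository<Order, Long> {

    // Derived query, exposed by Spring Data REST under /orders/search/findByStatus
    Iterable<Order> findByStatus(String status);
}
```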
In this session, we'll explore Spring Security and OAuth2, including building an OAuth2 authorization server, fronting an API with a resource server, and verifying an OAuth2 access token's claims to ensure that the client is allowed to access the resource they are asking for.
Securing REST APIs presents some unique challenges compared to securing a typical web application. The client of any REST endpoint may not even be a user in the traditional sense, but is more likely to be another application or a browser-based JavaScript client. How can you ensure that the clients of your REST API are allowed to access the resources they are asking for?
OAuth2 offers a means by which a client application can request authorization to access a resource and be given an access token that must be presented when making HTTP requests. This involves creating an authorization server that issues tokens and defining a resource server that acts as a wall around an API, verifying the presented access token's claims before allowing the request to proceed.
Spring Security has historically supported OAuth2 as part of a separate project called Spring Security for OAuth. But gradually, Spring's OAuth2 support is moving into the main Spring Security project.
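As a rough sketch of the resource-server half (assuming Spring Security 6's lambda DSL and the spring-boot-starter-oauth2-resource-server dependency; the path and scope below are made up):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ResourceServerConfig {

    @Bean
    SecurityFilterChain api(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(authz -> authz
                .requestMatchers("/api/**").hasAuthority("SCOPE_read")   // check a scope claim
                .anyRequest().authenticated())
            // Validate incoming JWT access tokens (issuer configured in application properties)
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}
```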
JavaScript has come a long way. Libraries and frameworks like React, Angular, and Vue use the more recent versions of JavaScript. Getting up to speed will help us get better at programming both front-end and back-end JavaScript applications.
The workshop will dive into the features from ES 6 and beyond.
Please have these installed on your system:
Git
Node.js
Your favorite editor or IDE
JavaScript has come a long way. Libraries and frameworks like React, Angular, and Vue use the more recent versions of JavaScript. Getting up to speed will help us get better at programming both front-end and back-end JavaScript applications.
The workshop will dive into the features from ES 6 and beyond.
Please have these installed on your system:
Git
Node.js
Your favorite editor or IDE
In this session we will build a full application using Vue.js. We will start by discussing how you can start working with Vue, all the way to seeing what it takes to build an app with Vue, including state management and routing.
Note: We'll be covering Vue version 3
Vue.js, the new kid on the JavaScript framework block, is taking the world by storm. Vue has surpassed React in GitHub stars, a hint of just how popular this framework is becoming. Vue attempts to provide just enough support, with libraries like Vuex and the Vue Router, and tooling like the Vue CLI, to get developers productive without being overly opinionated or overly flexible.
If you are curious about Vue, this is the session for you. Come in for 180 minutes of a thrill ride as we explore this fascinating new framework and mindset.
Note: We'll be covering Vue version 3
In this session we will build a full application using Vue.js. We will start by discussing how you can start working with Vue, all the way to seeing what it takes to build an app with Vue, including state management and routing.
Note: We'll be covering Vue version 3
Vue.js, the new kid on the JavaScript framework block, is taking the world by storm. Vue has surpassed React in GitHub stars, a hint of just how popular this framework is becoming. Vue attempts to provide just enough support, with libraries like Vuex and the Vue Router, and tooling like the Vue CLI, to get developers productive without being overly opinionated or overly flexible.
If you are curious about Vue, this is the session for you. Come in for 180 minutes of a thrill ride as we explore this fascinating new framework and mindset.
Note: We'll be covering Vue version 3
In this session we will take a gander around the tools and techniques that have evolved around testing Vue applications. Vue testing requires that we understand a set of newer technologies to help test our Vue components, events, routes (using Vue-Router) and state (using Vuex).
We all realize we must test our code, right? Testing our Vue applications isn't only about ensuring they work correctly; tests also give us the confidence that we truly understand our applications.
Kafka has become a key data infrastructure technology, and we all have at least a vague sense that it is a messaging system, but what else is it? How can an overgrown message bus be getting this much buzz? Well, because Kafka is merely the center of a rich streaming data platform that invites detailed exploration.
In this talk, we’ll look at the entire open-source streaming platform provided by the Apache Kafka and Confluent Open Source projects. Starting with a lonely key-value pair, we’ll build up topics, partitioning, replication, and low-level Producer and Consumer APIs. We’ll group consumers into elastically scalable, fault-tolerant application clusters, then layer on more sophisticated stream processing APIs like Kafka Streams and KSQL. We’ll help teams collaborate around data formats with schema management. We’ll integrate with legacy systems without writing custom code. By the time we’re done, the open-source project we thought was Big Data’s answer to message queues will have become an enterprise-grade streaming platform, all in 90 minutes.
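That lonely key-value pair looks something like this with the plain Kafka Java client (a minimal assumed example; the topic name and broker address are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HelloKafka {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key determines which partition of the "orders" topic this record lands on
            producer.send(new ProducerRecord<>("orders", "order-42", "created"));
        }
    }
}
```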
Since the dawn of software development, we've struggled with a huge disconnect between the management world and the engineering world. We try to explain our problems in terms of “technical debt”, but somehow the message seems to get lost in translation, and we drive our projects into the ground, over and over again.
What if we could detect the earliest indicators of a project going off the rails, and had data to convince management to take action? What if we could bridge this communication gap once and for all?
In this session, we'll focus on a key paradigm shift for how we can measure the human factors in software development, and translate the “friction” we experience into explicit risk models for project decision-making.
How does your team decide what's the most important problem to solve?
When we ask a question like “What's the biggest problem?”, it doesn't mean the biggest problems will come to mind. Instead, we're biased to think about what's bothered us most recently: annoyances or pet peeves. It's really easy to spend tons of time working on improvements that make little difference.
But what if we had data that pointed us to the biggest problems across the team?
In this session, we'll dig into the data from a 1-month case study tracking Idea Flow Metrics, and discuss the patterns of friction during development, and how to identify the biggest opportunities for improvement with data.
What makes software development complex isn't the code, it's the humans. The most effective way to improve our capabilities as an organization is to better understand ourselves.
In this session, we'll break down the dynamics of culture into explicit architecture models based on a synthesis of research spanning cognitive science, biology, and philosophy. We'll discuss the nature of identity, communication, relationships, leadership, and human motivation by thinking about humans like code!
If you want to better understand the crazy humans around you, you won't want to miss this talk!
In the world of legacy code, we often end up inheriting a tangled ball of mess with a lack of automation, and no clear surfaces for testing. Yet still, under these circumstances, we're expected to safely make changes without regressions. Where do we start? How do we tackle this challenge? How do we get a handle on re-architecture?
We'll start this discussion with a first-hand use case and example – tackling the re-architecture of an 800k line JBoss application with near-zero unit tests. Ugh. The only option on the table was Selenium. UGH.
Let's talk about alternative strategies. How have you tackled similar situations? How could we build a data-driven regression framework without going through the UI?
This session will be 70% discussion, focused on the challenges you've faced in test-harnessing legacy code. Be ready to share the story of a challenge you're facing, or help out your fellow attendees with advice.
We'll discuss the strategies used to conquer the challenge in this case study, and how you could apply the same pattern to your own projects.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team (and management) on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools; you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board as well as when and how to approach management within the department and also higher up in the organization.
Many developers aspire to become architects. Some of us currently serve as architects, while the rest of us may hope to become one some day. We all have worked with architects, some good, and some that could be better. What are the traits of a good architect? What are the skills and qualities we should pick up to become a very good one?
Come to this presentation to learn about things that can make that journey to be a successful architect a pleasant one.
Learning about design patterns is not really hard. Using design patterns is also not that hard. But using the right design pattern for the right problem is not that easy. If, instead of looking for a pattern to use, we look for the design forces behind a problem, it may lead to better solutions. Furthermore, with most mainstream languages supporting lambda expressions and functional style, the patterns emerge in so many more elegant ways as well.
In this workshop we will start with a quick introduction of a few patterns. Then we will work with multiple examples: take a problem, delve into the design, and, as we solve it, see what patterns emerge in the design. The objective of this workshop is to get hands-on experience to prudently identify and use patterns that help create extensible code.
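To illustrate the idea of letting the design force drive the solution, here is a small assumed example where the force is “vary the selection criterion” and the Strategy pattern falls out naturally as a lambda:

```java
import java.util.List;
import java.util.function.Predicate;

public class TotalValues {

    // The varying part (the strategy) is passed in as a function
    static int totalOf(List<Integer> values, Predicate<Integer> selector) {
        return values.stream()
                     .filter(selector)
                     .mapToInt(Integer::intValue)
                     .sum();
    }

    public static void main(String[] args) {
        var values = List.of(1, 2, 3, 4, 5, 6);

        System.out.println(totalOf(values, v -> true));         // total of all values
        System.out.println(totalOf(values, v -> v % 2 == 0));   // total of even values only
    }
}
```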
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
Learning about design patterns is not really hard. Using design patterns is also not that hard. But using the right design pattern for the right problem is not that easy. If, instead of looking for a pattern to use, we look for the design forces behind a problem, it may lead to better solutions. Furthermore, with most mainstream languages supporting lambda expressions and functional style, the patterns emerge in so many more elegant ways as well.
In this workshop we will start with a quick introduction of a few patterns. Then we will work with multiple examples: take a problem, delve into the design, and, as we solve it, see what patterns emerge in the design. The objective of this workshop is to get hands-on experience to prudently identify and use patterns that help create extensible code.
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly a manifestation of our designs. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example-oriented approach to look at some core design principles that can help us create better designs and more maintainable code.
The goal: Clean Code That Works, and getting there is half the fun. Working with a legacy mess can be frustrating, boring, dangerous, and time-consuming. When FIBs occur (FIBs = Fixes that Introduce Bugs), you often enter an endless test-and-fix cycle that can quickly escalate into a nightmare. I've been there, you've been there. How do we return to pleasant dreams?
In this code-centric workshop we'll look at ways to introduce sanity and calmness into the process of maintaining and improving buggy, poorly written, poorly designed code. Few slides, mostly code. Learn how to turn any project around and have fun doing it.
It is helpful to bring a laptop with either
PyCharm
Eclipse or IntelliJ IDEA
CLion
Visual Studio + ReSharper
or
WebStorm
However, I will pair you up, so if you don't have any of these it will still be OK.
Learn how to use Heroku's 12 (15) Factor App methodologies to make your applications more portable, scalable, reliable and deployable.
Do you want to improve your application's portability, scalability, reliability, and deployability? Now you can, with Heroku's 12 Factor App methodologies. Learn from their experience hosting and supporting thousands of apps in the cloud. During this hands-on workshop, you will learn how to incorporate factors like configuration, disposability, dev/prod parity, and much more into an existing application, whether it is an on-premises or cloud-native app. But wait, there's more! Act now, and get an additional 3 factors absolutely free! API first, telemetry, and even authentication and authorization will be included at no additional cost.
Learn how to use Heroku's 12 (15) Factor App methodologies to make your applications more portable, scalable, reliable and deployable.
Do you want to improve your application's portability, scalability, reliability, and deployability? Now you can, with Heroku's 12 Factor App methodologies. Learn from their experience hosting and supporting thousands of apps in the cloud. During this hands-on workshop, you will learn how to incorporate factors like configuration, disposability, dev/prod parity, and much more into an existing application, whether it is an on-premises or cloud-native app. But wait, there's more! Act now, and get an additional 3 factors absolutely free! API first, telemetry, and even authentication and authorization will be included at no additional cost.
It seems like every day there is a new headline about a security breach in a major company's web application. These breaches cause companies to lose credibility, cost them large sums of money, and those accountable undoubtedly lose their jobs. Security requires you to be proactive. Keep your employer out of the headlines by learning some key security best practices.
This hands-on workshop is designed to teach you how to identify and fix vulnerabilities in Java web applications. Using an existing web application, you will learn ways to scan and test for common vulnerabilities such as hijacking, injection, cross-site scripting, cross-site request forgery, and more. You will learn best practices around logging, error handling, intrusion detection, authentication, and authorization. You will also learn how to improve security in your applications using existing libraries, frameworks, and techniques to patch and prevent vulnerabilities.
It seems like every day there is a new headline about a security breach in a major company's web application. These breaches cause companies to lose credibility, cost them large sums of money, and those accountable undoubtedly lose their jobs. Security requires you to be proactive. Keep your employer out of the headlines by learning some key security best practices.
This hands-on workshop is designed to teach you how to identify and fix vulnerabilities in Java web applications. Using an existing web application, you will learn ways to scan and test for common vulnerabilities such as hijacking, injection, cross-site scripting, cross-site request forgery, and more. You will learn best practices around logging, error handling, intrusion detection, authentication, and authorization. You will also learn how to improve security in your applications using existing libraries, frameworks, and techniques to patch and prevent vulnerabilities.
An integral part to any DevOps effort involves automation. No longer do we wish to manage tens, hundreds or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, be those for local development, all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations should look like using plain-text, version-controlled configuration files.
Not only does this help with avoiding “snowflakes”, but it promotes server configuration to participate in the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible. Topics covered:
We developers really like code.
Code, being plain text, can be version-controlled and can follow a traditional SDLC.
For the longest time, however, we were forced to live with having most of our CI/CD and server configurations live outside of our codebases, often at the mercy of infrastructure/operations teams.
With the evolution of DevOps come constructs like IaC (Infrastructure as Code), and with Jenkins 2.0, we can now manage our Jenkins job configurations as code!
In this session we will explore the concept of “Pipelines as Code”, including the DSL that Jenkins offers and how we can use it to configure Jenkins jobs via simple, version-controlled Jenkinsfiles. We will see how we can create Jenkins jobs by auto-discovering repositories, as well as when we branch our code to create releases.
Using the Microservices architectural style to incrementally adopt an Event-Driven Architecture (EDA) lowers up-front costs while decreasing time-to-market. EDA extracts value from existing occurrences, limiting invasive refactoring and avoiding disruption to existing application development efforts. Implementing event-driven microservices yields intelligent, scalable, extensible, reactive endpoints.
This session will cover the fundamentals, patterns, techniques, and pitfalls of event-driven microservices with several demos leveraging Spring Boot, Camel, ActiveMQ, and Docker.
This two-session workshop covers AMQP messaging concepts and technologies, including hands-on exercises with RabbitMQ, Spring, and Docker.
Topics
Fundamentals: AMQP
Technologies and Architectures: RabbitMQ & Spring
Demos and Hands-on Exercises
Download Prior to Workshop
This two-session workshop covers AMQP messaging concepts and technologies, including hands-on exercises with RabbitMQ, Spring, and Docker.
Topics
Fundamentals: AMQP
Technologies and Architectures: RabbitMQ & Spring
Demos and Hands-on Exercises
Download Prior to Workshop
No matter the techniques used to make enterprise solutions Highly Available (HA), failure is inevitable at some point. Resiliency refers to how quickly a system reacts to and recovers from such failures. This presentation discusses various architectural resiliency techniques and patterns that help increase Mean Time to Failure (MTTF), also known as Fault Tolerance, and decrease Mean Time to Recovery (MTTR).
Failure of Highly Available (HA) enterprise solutions is inevitable. However, in today's highly interconnected global economy, uptime is crucial. The impact of downtime is amplified when considering Service Level Agreement (SLA) penalties and lost revenue. Even more damaging is the harm to an organization's reputation as frustrated customers express their grievances on social media. Resiliency, often overlooked in favor of availability, is essential.
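For context, these two measures combine into the standard steady-state availability figure (a general reliability-engineering relationship, not something specific to this talk): Availability = MTTF / (MTTF + MTTR). Resiliency work can therefore raise availability either by increasing MTTF (fault tolerance) or by decreasing MTTR (faster recovery).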
Software architecture involves inherent trade-offs. Some of these trade-offs are clear, such as performance versus security or availability versus consistency, while others are more subtle, like resiliency versus affordability. This presentation will discuss various architectural trade-offs and strategies for managing them.
The role of a technical lead or software architect is to design software that fulfills the stakeholders' vision. However, as the design progresses, conflicting requirements often arise, affecting the candidate architecture. Resolving these conflicts typically involves making architectural trade-offs (e.g. service granularity vs maintainability). Additionally, with time-to-market pressures and the need to do more with less, adopting comprehensive frameworks like TOGAF or lengthy processes like ATAM may not be feasible. Therefore, it is crucial to deeply understand these architectural trade-offs and employ lightweight resolution techniques.
In this session you will learn to strategically introduce technology innovations by applying specific change patterns to groups of individuals. Using these patterns and related techniques will not only benefit your organization but will ultimately benefit your career as a technologist by making you a better influencer, writer, and speaker.
The rapid pace of technological innovation has enabled many organizations to dramatically increase productivity while at the same time decrease their overall headcount. However, the vacillating global economy combined with “change fatigue” within organizations has resulted in a risk averse culture. In such an environment how can one possibly introduce and inculcate the latest technology or process within an organization? The answer is to have a solid understanding of Diffusion Theory and to leverage Patterns of Change.
Prezi Location: http://prezi.com/b85wwmw7hccn
You have some modular code with a REST API. You are on your way to microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, and it needs to scale and load balance between its clones. Your service needs environment information and metadata from well outside its own context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable or a run script, it is simply an image that has a common run container command.
During the presentation, we will explore some examples on Katacoda.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared “Integration tests are a scam.” I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
After a brief overview of the concepts a live demonstration will show you how to:
This is the droid you are looking for. Within this droid are hundreds of rules designed to review your code for defects, hotspots and security weaknesses. Consider the resulting analysis as humble feedback from a personal advisor. The rules come from your community of peers, all designed to save your butt.
We will explore techniques on how to add these checks to your IDE, your build scripts and your build pipelines.
Too much chatter in your pull requests? See how the analysis tools teach best practices, without ego or criticism, to a spectrum of developers. As a leader see how to develop an effective code quality intern program around this technique. We will also see some techniques to use Kubernetes to obtain reports and dashboards right on your local machine and from your continuous integration pipeline.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
From operating systems on bare metal, to virtual machines on hypervisors, to container orchestration platforms: how we run our code and bundle our applications continues to evolve. Serverless computing continues that evolutionary path for our architectures.
Kubernetes provides an ideal, vendor-agnostic platform for serverless computing. Serverless promises developers that they can worry less about the cluster and focus more on their logic. Building on your Kubernetes knowledge, we will survey the serverless frameworks that run on Kubernetes. Specifically, we will unpack how two open source serverless frameworks, Kubeless and OpenFaaS, leverage Kubernetes to deliver on their promises. We will also explore how Knative is helping serverless providers evolve to the next level of sophistication.
Explore another learning medium to add to your toolbox: Katacoda.
This is a 90-minute mini-workshop where you learn to be an author on Katacoda. Bring your favorite laptop with just a browser and a text editor.
Have a Github account and bring your laptop. Let's learn together.
We are continuously learning and keeping up with the changing landscapes and ecosystems in software engineering. Some technologies are difficult to learn or may take too much time for us to set up just to get to the key points of each technology. One of the reasons why you might be here at NFJS is to do exactly that – to learn. Great!
There are many mediums we use to learn and we often combine them for different perspectives. Books, how-to articles, GitHub readmes, blog entries, recorded talks on YouTube, and online courses. All these help us sort through the new concepts. I'm sure you have your favorites.
Katacoda is becoming a compelling platform for learning and teaching concepts. You can also author your own topics for public communities or private teams. Katacoda offers a platform that hosts live server command lines in your browser with a split screen for course material broken into easy to follow steps.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
Kubernetes is a powerful platform for running containers and distributing computation workloads across resources. A significant question is: how do you get all your code onto this platform, continuously?
In 2019 our community is bursting with new solutions to assist our delivery pipelines. While Jenkins is a dominant player, there is a growing array of new ideas and choices. From coding at your laptop to building containers to deployments, we will explore the various tools and techniques to reduce the delivery frictions.
Kubernetes is also a fitting platform for hosting your continuous tools, pipeline engines, registries, testing, code analysis, security scans, and delivery workflows.
From this session, you will understand the latest tools and techniques for pipelining on Kubernetes. Let's up the game on your Maturity Model.
Prerequisite: If you are unfamiliar with Kubernetes or Istio meshing be sure to attend: Understanding Kubernetes: Fundamentals or Understanding Kubernetes: Meshing Around with Istio.
Kubernetes is a complex container management system. Your application running in containers is also a complex system as it embraces the distributed architecture of highly modular and cohesive services. As these containers run, things may not always behave as smoothly as you hope. Embracing antifragility means designing a system to be resilient despite the realities of resource limitations, network failures, hardware failures, and failed software logic. All of this demands a robust monitoring system that opens views into the behaviors and health of your applications running in a cluster.
Three important aspects to observe are log streams, tracing, and metrics.
In this session, we look at some example microservices running in containers on Kubernetes. We add Istio to the cluster for meshing. We observe how logs are gathered, see how transactions are traced and measured between services, inspect metrics, and finally add alerts when metrics indicate a problem.
Kubernetes out of the box is a strong platform for running and coordinating large collections of services, containers, and applications. As is, Kubernetes is powerful for many solutions.
Remember Underdog? He was a mild-mannered dog, but when stress and conflict were introduced to the plot, he took a magic pill and became a superhero. Istio is a superhero for Kubernetes.
Istio is an open, platform-independent service mesh that manages communications between services in a transparent way. With a distributed architecture many things can fail, slow down and become less secure. Istio provides solutions to those stresses in our plot toward this architecture style:
• Traffic management
• Observability
• Policy enforcement
• Service identity and security
We will explore these benefits with Istio through some working examples on Kubernetes. The key point is this meshing solution allows your Java code to be less coupled to the inherent weaknesses of a distributed platform.
Help! My app bundle is 5MB! My users are angry that my app is so slow! It’s easy to forget that performance matters when we are under pressure to deliver features quickly. What data should we use to inform our decisions?
From code splitting, lazy loading, and tree shaking to bundle analysis, progressive rendering, and modern transpiling, come learn how you can deliver a better experience to your users with high-performing front-end apps. This talk is library-agnostic (React, Angular, Vue, etc.).
See the code repository link for a gist that includes slides, resources, and more: https://github.com/siakaramalegos/web-performance-long
The fundamental testing libraries in Java have undergone complete redesigns in the past few years. JUnit 5, known as JUnit Jupiter, redesigns the most well-known tool in all of testing. This talk will demonstrate the new features, how they are intended to be used, and discuss experimental ideas in the pipeline.
JUnit has been remarkably stable over the years and is one of the most widely adopted frameworks in the Java world. The latest version, JUnit 5, takes JUnit to the next level. Full of new features like conditional test execution, parametric testing, labeling and filtering tests, and more, it brings all the modern thinking on testing into the JUnit world. It also takes advantage of the functional features added to Java since version 8 to create a powerful, new library for testing your code.
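For instance, a minimal JUnit 5 (Jupiter) test class showing a plain test, a parameterized test, and conditional execution might look like the following; the class name and values are illustrative, not the speaker's examples.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.condition.EnabledOnOs;
    import org.junit.jupiter.api.condition.OS;
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    class CalculatorTest {

      @Test
      @DisplayName("addition works for small integers")
      void addsTwoNumbers() {
        assertEquals(4, 2 + 2);
      }

      // Parametric testing: the test runs once per supplied value
      @ParameterizedTest
      @ValueSource(ints = {1, 3, 5, 15})
      void oddNumbersAreOdd(int candidate) {
        assertEquals(1, candidate % 2);
      }

      // Conditional test execution: only runs on Linux or macOS
      @Test
      @EnabledOnOs({OS.LINUX, OS.MAC})
      void runsOnlyOnUnixLike() {
        assertThrows(ArithmeticException.class, () -> {
          int ignored = 1 / 0;
        });
      }
    }

The parameterized test requires the junit-jupiter-params artifact in addition to junit-jupiter-api.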
Learn the basic syntax and semantics for the Kotlin programming language. Kotlin is an alternative JVM language that provides null safety, static typing, and powerful IDE support. This talk will emphasize the relationships between Kotlin and Java, highlighting the differences in types, functional programming, collections, and more.
Demonstrations will include:
and much more.
This talk will examine features of Kotlin at a greater depth than most tutorials. Coroutines – the most popular feature of the language – will be covered, as well as higher order functions, reduction operations like reduce and fold, and lambdas with receivers. Those topics progress toward building DSLs and builders in Kotlin. Terms like “apply”, “let”, “use”, “also”, and “with” will be covered along with their typical use cases.
Details of the type system, including the Any, Unit, and Nothing classes, will be included. Examples will be provided on how to define extension functions, infix operators, and inlining functions for efficiency.
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
Developer First leadership does not depend on accumulating power within a team or company. Instead, it focuses on the needs of the developers, their technical and career growth, and building positive team cultures.
This session will cover 7 different tactics to practice Developer First leadership. These include empowering others, the importance of diversity & inclusion on our dev teams, establishing a positive developer onboarding experience, and how to become an authentic and respected leader.
Participants in this talk will learn how to successfully navigate the complexities of Engineering Leadership, regardless of their title!
Everyone wants to be successful in life. Many have found the SMART (specific, measurable, achievable, relevant & time-boxed) goal-setting framework to be a powerful tool to help clarify and validate their goals. Unfortunately, having well-defined goals is not enough to achieve them. This is where WINS (write, incentivize, network & share) comes in.
In this session, you will learn how to become more successful by putting goals into action with SMART WINS.
Most nontrivial software systems suffer from significant levels of technical and architectural debt. This leads to exponentially increasing cost of change, which is not sustainable for a longer period of time. The single best thing you can do to counter this problem is to give some love to your architecture by carefully managing and controlling the dependencies among the different elements and components of a software system. For that purpose we will introduce a DSL (domain specific language) that can be used to describe and enforce architectural blueprints. Moreover we will make an excursion into the topic of legacy software modernization.
In this workshop part participants will use Sonargraph to assess and analyze a software system of their choice (Java, C/C++, C# or Python) and design an architectural model using the domain specific language introduced in the session. The tool and a free 60 day license will be provided during the workshop.
This workshop will use Sonargraph-Architect to create architectural models for a project of your choice. While I will bring the software and license keys on a flash drive, you can install it up front by registering on www.hello2morrow.com, downloading the tool, and requesting an evaluation license. If possible, please bring a project to analyze that can be built on your laptop. Supported languages are Java, C#, C/C++ and Python. If you cannot bring a project, you will be provided with an open source project to work on.
Most nontrivial software systems suffer from significant levels of technical and architectural debt. This leads to exponentially increasing cost of change, which is not sustainable for a longer period of time. The single best thing you can do to counter this problem is to give some love to your architecture by carefully managing and controlling the dependencies among the different elements and components of a software system. For that purpose we will introduce a DSL (domain specific language) that can be used to describe and enforce architectural blueprints. Moreover we will make an excursion into the topic of legacy software modernization.
In this workshop part participants will use Sonargraph to assess and analyze a software system of their choice (Java, C/C++, C# or Python) and design an architectural model using the domain specific language introduced in the session. The tool and a free 60 day license will be provided during the workshop.
This workshop will use Sonargraph-Architect to create architectural models for a project of your choice. While I will bring the software and license keys on a flash drive, you can install it up front by registering on www.hello2morrow.com, downloading the tool, and requesting an evaluation license. If possible, please bring a project to analyze that can be built on your laptop. Supported languages are Java, C#, C/C++ and Python. If you cannot bring a project, you will be provided with an open source project to work on.
Software metrics can be used effectively to judge the maintainability and architectural quality of a code base. Even more importantly they can be used as “canaries in a coal mine” to warn early about dangerous accumulations of architectural and technical debt.
This session will introduce some key metrics that every architect should know and also look into the current research regarding software architecture metrics. Since we have 90 minutes, there will be some time for hands-on software assessments. If you'd like to follow along, bring your laptop and install Sonargraph-Explorer from our website www.hello2morrow.com. (It's free and covers most of the metrics we will introduce.) Bring a Java, C#, C/C++ or Python project and run the metrics on your own code. Or just download an open source project and learn how to use metrics to assess software and detect issues.
Programming is a series of frustrations. Everything we do, we could do better or faster if we only had our tools set up just so. If our error messages were a little better, our code a little cleaner, our tests a lot wider. When we spend time on this, it's known as “yak shaving,” and it can get messy.
How do you balance the work you’re supposed to be doing with the work that makes your work, work? Dive into the yak stack with me. We'll see five different species of yak, and discuss how and when to tackle each one. At the bottom of the yak stack, we might find the Golden Yak, with secret wisdom engraved on its skin.
This session will give you reasons to spend time smoothing your development experience, and clues for where to spend that time in ways that help your whole team.
When we have a problem that can be solved with software, we first design an architecture that will guide what the system will look like. This architecture needs to be robust and well thought out to ensure that it handles all the requirements at hand and is flexible enough for the future.
This talk is about some considerations to take while designing a system:
The problem to be solved
The users of the system
Systems integrations
The talk also highlights some common pitfalls that teams fall into during this process:
Database management
Buzzword-oriented architecture
Outcome of the talk:
By the end of this session, the listener will be able to:
Interpret the most important considerations while designing a system
Evaluate the business and customer requirements to determine their architecture
Analyse past organisation strengths and shortcomings to make better decisions
Machine Learning is a huge, deep field. Come get a head start on how you can learn about how machines learn.
This talk will be an overview of the Machine Learning field. We’ll cover the various tools and techniques that are available to you to solve complex, data-driven problems. We’ll walk through the algorithms and apply them to some real but accessible problems so you can see them at work.
Documents contain a lot of information. We'll introduce you to a variety of techniques to extract it.
Machine Learning techniques are useful for analyzing numeric data, but they can also be useful for classifying text, extracting content, and more. We will discuss a variety of open source tools for extracting content, identifying elements and structure, and analyzing text, and show how they can be used in distributed, microservice-friendly ways.
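As one concrete, hypothetical example of this kind of tooling, Apache Tika (one commonly used open source content-extraction library; the session may cover different tools) can detect a document's type and extract its text in a few lines of Java. The file path below is made up.

    import java.io.File;

    import org.apache.tika.Tika;

    public class ExtractText {
      public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        File document = new File("reports/annual-report.pdf"); // hypothetical input file

        // Identify the content type, then pull out the plain text
        System.out.println("Detected type: " + tika.detect(document));
        String text = tika.parseToString(document);
        System.out.println(text.substring(0, Math.min(500, text.length())));
      }
    }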
Deep Learning is an evolution of the capabilities of more conventional machine learning to take advantage of the extra data available from Big Data systems. With more data, many of the manual aspects of feature selection and other machine learning steps can be derived automatically. We will highlight many of the main deep learning frameworks and give you a hands on introduction to what is possible and how you can start to use them.
We will cover:
What comes after machine learning and deep learning? How about dynamic systems that need new ways of finding paths through complex scenarios such as video games, challenging board games and more.
In addition to covering the main ideas of deep reinforcement learning, we will cover some of the main tools and frameworks
What happens if Web applications become super fast?
What if the ability to write code once but run it on lots of different platforms was true again?
What if Desktops are no longer interesting because you can do everything in a browser?
What if JavaScript wasn't your only language choice?
These are all starting to happen now that this W3C Standard is supported widely across all major browser vendors, Node and more. It's never been a better time to dig into the future that is playing out now faster than most people realize.
WebAssembly is emerging as an exciting vision for web applications that run at native speeds by using a size and load-time efficient, compiled binary format. Anything from computationally intensive business applications to fully rendered 3D video games will benefit from the mix of speed with other Web-oriented technologies. We'll let you know what is coming and how you'll benefit from it.
We will cover:
This is a hands-on workshop on a truly mind-blowing next step in the evolution of the Web. Don't get left behind.
please install a recent version of git
https://git-scm.com/
What happens if Web applications become super fast?
What if the ability to write code once but run it on lots of different platforms was true again?
What if Desktops are no longer interesting because you can do everything in a browser?
What if JavaScript wasn't your only language choice?
These are all starting to happen now that this W3C Standard is supported widely across all major browser vendors, Node and more. It's never been a better time to dig into the future that is playing out now faster than most people realize.
WebAssembly is emerging as an exciting vision for web applications that run at native speeds by using a size and load-time efficient, compiled binary format. Anything from computationally intensive business applications to fully rendered 3D video games will benefit from the mix of speed with other Web-oriented technologies. We'll let you know what is coming and how you'll benefit from it.
We will cover:
This is a hands-on workshop on a truly mind-blowing next step in the evolution of the Web. Don't get left behind.
please install a recent version of git
https://git-scm.com/
An overview of various popular streaming technologies on the JVM: Kafka Streams, Apache Storm, Spark Streaming, and Apache Beam. We'll discuss a “bill of rights” for what to expect from any streaming library or framework: security, failover, and exactly-once processing.
Streaming is now an essential part of our lives. We have cheaper drives, faster networks, and more memory. We can haul tons of data, but we need to process, manipulate, and enrich that data. To do so we need some sort of streaming solution. Let's look at the most common ones and expose the differences and similarities between frameworks so you, the attendee, can make a better decision.
Kafka is more than just a messaging queue with storage. It goes beyond that: with technology from Confluent open source, it has become a full-fledged data ETL and data-streaming ecosystem.
When we utter the word Kafka, we no longer mean just one component but an entire data-pipeline ecosystem that transforms and enriches data from source to sink. It also offers different ways to handle that data. In this presentation, we define:
We then discuss KSQLDB, a SQL layer built upon Kafka Streams that provides a simple query language for performing streaming operations.
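To make that layering concrete, here is a minimal Kafka Streams word count in Java, roughly the kind of processing KSQLDB lets you express as SQL. The topic names and configuration are illustrative placeholders, not the session's actual examples.

    import java.util.Arrays;
    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Produced;

    public class WordCountSketch {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("lines-in");

        // Split each line into words, group by word, and keep a running count
        KTable<String, Long> counts = lines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\s+")))
            .groupBy((key, word) -> word)
            .count();

        counts.toStream().to("counts-out", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
      }
    }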
Spark has a machine learning component called Spark MLLib. We give an intro to machine learning and some models, then apply some of those common machine learning models.
You may already know what Spark is; if not, we will either introduce it again or remind you. We will go over a quick introduction to its purpose. Then we will go all Machine Learning on it. We will discuss the purpose of data science and the rigors it involves, and then feed data into Spark MLLib. We will discuss the various models and then run various data through Spark in order to gain some insight into the data you have been aggregating.
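To give a sense of the API, here is a small, hypothetical Java example that trains a logistic regression model with Spark MLlib; the data path and parameters are placeholders rather than anything from the session.

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.classification.LogisticRegressionModel;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class MLlibSketch {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("mllib-sketch")
            .master("local[*]")
            .getOrCreate();

        // LIBSVM format: a label followed by index:value feature pairs (path is hypothetical)
        Dataset<Row> training = spark.read().format("libsvm").load("data/sample_libsvm_data.txt");

        LogisticRegression lr = new LogisticRegression()
            .setMaxIter(10)
            .setRegParam(0.01);

        LogisticRegressionModel model = lr.fit(training);
        System.out.println("Coefficients: " + model.coefficients());

        spark.stop();
      }
    }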
For those still grappling with Generics: this will be an attempt to clear the air about generics. What are wildcards? What is extends? What is super? What is covariance? What is contravariance? What is invariance? What is erasure? Why and when do I need this?
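To ground those terms, here is a small, self-contained Java sketch of the producer-extends, consumer-super (PECS) rule; the class and method names are made up for illustration.

    import java.util.ArrayList;
    import java.util.List;

    public class Wildcards {

      // Covariance: "? extends Number" accepts List<Integer>, List<Double>, ...
      // We can safely READ Numbers out, but cannot add elements to the list.
      static double sum(List<? extends Number> numbers) {
        double total = 0;
        for (Number n : numbers) {
          total += n.doubleValue();
        }
        return total;
      }

      // Contravariance: "? super Integer" accepts List<Integer>, List<Number>, List<Object>.
      // We can safely WRITE Integers in, but reads only give us Object.
      static void fillWithOnes(List<? super Integer> target, int count) {
        for (int i = 0; i < count; i++) {
          target.add(1);
        }
      }

      public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(sum(ints));        // covariant read
        List<Number> numbers = new ArrayList<>();
        fillWithOnes(numbers, 3);             // contravariant write
        System.out.println(numbers);
      }
    }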
Generics, or parameterized types, are one of the more painful items in any statically typed language on the JVM. This presentation sets out to overcome some of these hurdles and explain some of these confusing terms. We will cover the following:
Our jobs usually deal with something other than new code. It is usually old spaghetti and difficult-to-read code. How do we test such code? How do we get through it? How can we surgically isolate some of this harmful code and make it testable?
This session looks at lousy code, and we talk about strategies we can use to diagnose, test, and finally refactor it into something that promotes some sanity in your development process. We can do much with our code to make it better and testable while avoiding extensive mocking. The content of this course is all in Java and JUnit.
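As a flavor of this kind of refactoring, here is a small, hypothetical Java example of introducing a seam (a Clock) so legacy time-dependent logic becomes testable with plain JUnit and no mocking framework. It is only an illustration, not the course material.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.time.Clock;
    import java.time.Instant;
    import java.time.ZoneOffset;

    import org.junit.jupiter.api.Test;

    // Before: the discount logic called Instant.now() directly, making it untestable.
    // Injecting a Clock gives tests control over time without extensive mocking.
    class DiscountService {
      private final Clock clock;

      DiscountService(Clock clock) {
        this.clock = clock;
      }

      int discountPercent() {
        // Hypothetical rule: 10% off during the year 2020, otherwise nothing
        return Instant.now(clock).atZone(ZoneOffset.UTC).getYear() == 2020 ? 10 : 0;
      }
    }

    class DiscountServiceTest {
      @Test
      void discountAppliesIn2020() {
        Clock fixed = Clock.fixed(Instant.parse("2020-06-01T00:00:00Z"), ZoneOffset.UTC);
        assertEquals(10, new DiscountService(fixed).discountPercent());
      }
    }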
Some teams seem to have some mysterious chemistry from the beginning. Other teams wallow, bicker, and slog their way to uncertain results. What makes one team soar, and another stumble? It's not just chance.
In this session, we'll explore the essential ingredients that result in that mysterious “chemistry.” For example, we’ll examine the prerequisites for cohesion, and factors that pull teams apart. We'll look at myths and realities of software teams.
You'll gain tools to assess your agile team, and insights on how to adapt the environment for growing great teams.
Learning Outcomes:
Identify the essential elements for great teams.
Strategies to adapt the environment to improve the chance of team success.
Identify common pitfalls for agile teams.
Through table activities and facilitated conversation, we'll explore experiences in teams and talk about what works and what doesn't. I'll present research about teams and relate it to concrete steps that managers, team leads, and team members can apply to their own situations.
Some teams seem to have some mysterious chemistry from the beginning. Other teams wallow, bicker, and slog their way to uncertain results. What makes one team soar, and another stumble? It's not just chance.
In this session, we'll explore the essential ingredients that result in that mysterious “chemistry.” For example, we’ll examine the prerequisites for cohesion, and factors that pull teams apart. We'll look at myths and realities of software teams.
You'll gain tools to assess your agile team, and insights on how to adapt the environment for growing great teams.
Learning Outcomes:
Identify the essential elements for great teams.
Strategies to adapt the environment to improve the chance of team success.
Identify common pitfalls for agile teams.
Through table activities and facilitated conversation, we'll explore experiences in teams and talk about what works and what doesn't. I'll present research about teams and relate it to concrete steps that managers, team leads, and team members can apply to their own situations.
Spock is a Groovy-based testing framework that leverages the “best practices” of the last several years, taking advantage of much of the industry's development experience. So combine JUnit, BDD, RSpec, Groovy and Vulcans… and you get Spock!
There are 3 tools I use on every Java project I control… this is one of them and with good reason.
This session assumes some understanding of testing and JUnit and builds on it. We will introduce and dig deep into Spock as a test specification and mocking tool. Topics include:
Unit testing
Data driven tests
Mocking and Stubbing
Partial Mocks
Spock Extensions
The way we communicate with our applications is an ever-evolving experience. Punch cards gave way to keyboards. Typing on keyboards was then supplemented by pointing and clicking with a mouse. And touch screens on our phones, tablets, and computers are now a common means of communicating with applications.
These all lack one thing, however: They aren’t natural.
As humans, we often communicate with each other through speech. If you were to walk up to another human and start tapping them, you’d likely be tapped (or punched) in response. But when we talk to our applications, we communicate on the machine’s terms, with keyboards, mice, and touch screens. Even though we may use these same devices to communicate with other humans, it’s really the machine we are communicating with—and those machines relay what we type, click, and tap to another human using a similar device.
Voice user-interfaces (Voice UIs) enable us to communicate with our application in a human way. They give our applications the means to communicate to us on our terms, using voice. With a voice UI, we can converse with our applications in much the same way we might talk with our friends.
Voice UIs are truly the next logical step in the evolution of human-computer interaction. And this evolutionary step is long overdue. For as long as most of us can remember, science fiction has promised us the ability to talk to our computers. The robot from Lost in Space, the Enterprise computer on Star Trek, Iron Man’s Jarvis, and HAL 9000 (okay, maybe a bad example) are just a few well-recognized examples of science fiction promising a future where humans and computers would talk to each other.
Our computers are far more powerful today than the writers of science fiction would have imagined. And the tablet that Captain Picard used in his ready room on Star Trek: The Next Generation is now available with the iPad and other tablet devices. But only recently have voice assistants such as Alexa and Google Assistant given us the talking computer promised to us by science-fiction.
In this example-driven session, we'll explore the Alexa Skills Kit (ASK) and see how to develop skills for Amazon's Alexa. You'll learn how to use the ASK CLI to jumpstart skill development and how to create conversational applications in NodeJS.
This session covers basic application and distributed architectural styles, analyzed along several dimensions (type of partitioning, families of architectural characteristics, and so on).
A key building block for burgeoning software architects is understanding and applying software architecture styles and patterns. This session covers basic application and distributed architectural styles, analyzed along several dimensions (type of partitioning, families of architectural characteristics, and so on). It also provides attendees with understanding and criteria to judge the applicability of a problem domain to an architectural style.
This session describes how architects can identify architectural characteristics from a variety of sources, how to distinguish architectural characteristics from domain requirements, and how to build protection mechanisms around key characteristics. It also describes a variety of tradeoff analysis techniques architects can use to best balance all the competing concerns on software projects.
Architects must translate domain requirements, external constraints, speculative popularity, and a host of other factors to determine the key characteristics of a software system: performance, scale, elasticity, and so on. Yet architects must also analyze the tradeoffs each characteristic entails, arriving at a design that maximizes as many beneficial properties as possible. This session describes how architects can identify architectural characteristics from a variety of sources, how to distinguish architectural characteristics from domain requirements, and how to build protection mechanisms around key characteristics. It also describes a variety of tradeoff analysis techniques architects can use to best balance all the competing concerns on software projects.
Every organization has at least a phalanx or two in the “Cloud,” and it is, understandably, changing the way we architect our systems. But your application portfolio is full of “heritage” systems that hail from the time before everything was as a service. Not all of those applications will make it to the valley beyond, so how do you grapple with your legacy portfolio? This talk will explore the strategies, tools, and techniques you can apply as you evolve towards a cloud native future.
In this talk, you will learn:
More than half of all agile teams are not collocated. They are distributed or dispersed in some way, all over the world. The problem is these teams have trouble living the agile principles, never mind adopting any specific agile practices. Instead of adopting a given agile approach or framework, see how the hours of overlap govern any agile approach you might consider. You can create a team environment that works. It might not look like a “traditional” agile team. You'll have the opportunity to create an action plan for when you return to your office.
Note: This workshop focuses on hours of overlap because that is the biggest problem for distributed teams.
Learning objectives:
Please bring a pen or pencil so you can draw on paper. I will supply the paper, but I won't bring pens or pencils!
Many agile teams (and programs) attempt to plan for an entire quarter at a time. Something changes—a better product opportunity, or a product development problem—and the quarter’s plan is not just at risk. That plan is now impossible. Instead of quarterly planning, consider continual planning. Continual planning allows a project or a program to use small deliverables to plan for the near future and replan often to deliver the most value.
Too many programs (collections of projects with one business deliverable) try to use team measurement to extrapolate to the program’s status. That doesn’t work. Teams have personal status, and you can’t add them together to understand the program state. Or, your management wants to know when you will be done, and every team uses relative estimation and you can’t understand how to “add” them all together. (You can’t.)
Instead of trying to “scale” measurements, measure what you want to see and what you don’t want to see. You can use a handful of program measurements that help everyone understand where the program is and where it’s headed. In this talk, Johanna will share program measurements—qualitative and quantitative—that show everyone the program state, and maybe when the program could be done.
“In order to make delicious food… you need to develop a palate capable of discerning good and bad. Without good taste, you can't make good food.” - Jiro Ono (World's Best Sushi Chef)
Many of us are stuck with messy code. We know it’s not great but it works and what can we do? Where and how do you start?
We are going to use some cutting-edge training to teach the pattern-recognition part of your brain to instantly recognize common, recurring anti-patterns (smells) in your code.
Then we will learn very specific techniques to start improving on these specific smells.
Once you are trained to see these anti-patterns, you'll recognize them everywhere. Now that you are equipped to handle them, your code will start to transform into something beautiful and easy to work with.
Many new features have been added between the last Long Term Support release in Java 8 and the current one in Java 11. This talk will summarize many of those capabilities, from the Jigsaw implementation of JPMS to unmodifiable collections to local variable type inference and more. In addition to the basic code changes, the new six-month release schedule and associated licensing issues will be reviewed.
If, as anticipated, Java 12 is released in March and Java 13 in September, new features from those versions will also be included, even though they will break the joke in the title of this talk.
Gradle introduced Kotlin DSL in 2016 and formally released it in 2019. Recently the Kotlin DSL has become the default language for all new Gradle builds. It's probably time to learn about it: what are its advantages and disadvantages, how you can move from the Groovy DSL to the Kotlin DSL, and whether it's worth your time and effort to learn and use.
The Kotlin DSL brings strong typing, null safety, and, most importantly, powerful IDE support. The goal is to improve the user experience with Gradle build files through code assist and improved readability. This presentation will demonstrate the new build style, both for Java projects and for Kotlin projects.
The talk will include recommendations on how to move from the Groovy DSL to Kotlin, how to navigate and use the Kotlin DSL samples in the documentation, how to define tasks, use plugins, and more.
On the inside, Kafka is schemaless, but there is nothing schemaless about the worlds we live in. Our languages impose type systems, and the objects in our business domains have fixed sets of properties and semantics that must be obeyed. Pretending that we can operate without competent schema management does us no good at all.
In this talk, we'll explore how the different parts of the open-source Kafka ecosystem help us manage schema, from KSQL's data format opinions to the full power of the Confluent Schema Registry. We will examine the Schema Registry's operations in some detail, see how it handles schema migrations, and look at examples of client code that makes proper use of it. You'll leave this talk seeing that schema is not just an inconvenience that must be remedied, but a key means of collaboration around an enterprise-wide streaming platform.
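As a sketch of what schema-aware client code can look like, here is a hypothetical Java producer that serializes Avro records through the Confluent Schema Registry; the topic name, schema, and URLs are placeholders, not the talk's actual examples.

    import java.util.Properties;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AvroProducerSketch {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // The Avro serializer registers and validates schemas against the Schema Registry
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // A made-up record schema for illustration
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"amount\",\"type\":\"double\"}]}");

        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "o-123");
        order.put("amount", 42.50);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
          producer.send(new ProducerRecord<>("orders", "o-123", order));
        }
      }
    }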
Angular 7 is a big jump for the entire platform, but what does it mean for you? (We'll also cover what's new in Angular 8!)
In this session we’ll explore the things you couldn’t do before by diving into changes in the core framework, Angular Material, and the CLI. We’ll discuss other improvements to the framework and why they might matter to you. We’ll upgrade an Angular 6 application and add some new features to it. And if you’ve stepped away from Angular for a while, you might be surprised at how easy it is to pick it back up.
Microservices have helped us break apart back end services, but large front ends often remain problematic monoliths.
In this session you’ll learn how to apply the same concepts to large front-end applications, slicing them into end-to-end verticals. These verticals can then be owned by different teams and even written in different frameworks. Can Angular, React, and Vue all live together in harmony? How about AngularJS and Angular2+? With micro frontends, the answer is yes!
Unless you’ve been living under a rock, you know that Git is the most popular source control management in development shops today. And for good reason; its power overshadows tools you may have used in the past, such as Subversion or Team Foundations. While most developers and companies know this, making the switch can be painful. It’s all too common to lose code or introduce bugs because of difficulties merging or resolving conflicts. But fear not - it is possible to get comfortable with Git.
After a brief overview of concepts and capabilities, we’ll walk through exercises to simulate realistic scenarios. We’ll resolve conflicts, squash commits, stomp on other people’s code, fix mistakes, tag our commits, and more. All exercises will be performed on the command line, so you’ll truly understand what’s happening without the aid of GUI-based tools.
Mastering Git in a day requires a specific terminal, a vim primer, a specific folder structure, and a GitHub account. All of the details about how to download and configure these can be found at http://bit.ly/MasterGit-TheTools.
There's a story to tell, about musicians, artists, philosophers, scientists, and then programmers.
There's a truth inside it that leads to a new view of work, that sees beauty in the painful complexity that is software development.
Starting from The Journal of the History of Ideas, Jessica traces the concept of an “invisible college” through music and art and science to programming. She finds the dark truth behind the 10x developer, a real definition of “Senior Developer” and a new name for our work and our teams.
VueJS is the new contender for 'best front end framework' and is running a very close second to React in popularity amongst knowledgeable developers. It is gaining mindshare and has incredible momentum, all for very good reasons!
Join us for this introductory, full day workshop in which we fully explore everything that makes VueJS the last framework you will ever learn … because you won't ever want to use anything else again!
Upon completion, you will be armed with the skills and knowledge to create sophisticated VueJS components that:
We will also explore deployment options and productionization of your application.
I look forward to sharing this amazing new contender in the front-end SPA framework space!
A basic understanding of JavaScript would be helpful. Newer ES2015/16 constructs will be reviewed for those who might be unfamiliar with them.
If you’re using Angular 2+ and building forms the way you’ve always built them, you’re missing out on an amazingly powerful feature of the framework. Reactive forms (aka model-driven forms) allow you to build forms in the Typescript file, making complex validation and error-handling a breeze.
In this talk, we’ll walk through the steps to build a standard-but-tricky form, using the Reactive Forms approach, from scratch. Examples shown are in Angular 7. If you're not using Angular today but you need to build form-heavy applications, come see why people choose Angular's robust form features and quick scaffolding.
Git. It can be intimidating if you're accustomed to other kinds of source control management. Even if you're already using it and comfortable with the basics, situations can arise where you wish you understood it better. Developers often just want to write code and tell everyone else to take a hike, but the reality is that most of us work on teams where the feature-based code we write must be integrated, tested, and ultimately released.
This session will cover the most critical git concepts, basic and advanced, in a completely visualized way. At the same time, you’ll pick up git terminal commands to help you understand (or even eliminate) a git GUI you already use. Go beyond the basics to learn how to get yourself out of a git pickle, practical release management strategies, and more.
If you are interested in a different approach to writing your next microservice, or are knee-deep in the DevOps world with Kubernetes and Docker (both written in Go), you need to know Go.
Come join me for a rather quick introduction to the language and its merits and shortcomings.
Microservices, DevOps, command-line utilities — Go has been the catalyst in a quiet revolution happening right under our noses. Go, from Google, aims to be a simple language for writing scalable and reliable software. Go brings a unique tilt to many aspects of language design, including enforcing a strict project structure, powerful tooling to support things like code-style enforcement, as well as “goroutines” to allow for concurrency.
Go is a fascinating language. While it is simple, it makes some rather interesting decisions on several language features that we take for granted in other languages.
In this session we will take a deeper dive into the language — seeing what makes it the language of choice for companies like Google, as well as the go-to language for large OSS projects like Kubernetes and Docker.
Your goal is simple: take everything that is happening in your company—every click, every database change, every application log—and make it all available as a real-time stream of well-structured data. No big deal! You're just taking your decades-old, batch-oriented data integration and data processing and migrating to real-time streams and real-time processing. In your shop, you call that Tuesday. But among the several challenges you'll have to tackle is getting data in and out of that stream processing system, and there's a whole bunch of code there you don't want to write. This is where Kafka Connect comes in.
Connect is a standard part of Apache Kafka that provides a scalable, fault-tolerant way to stream data between Kafka and other systems. It provides an API for developing standard connectors for common data sources and sinks, giving you the ability to ingest database changes, write streams to tables, store archives in HDFS, and more. We’ll explore the standard connector implementations offered in the Confluent Open Source download, and look at a few operational questions as well. Come to this session to get connected to Kafka!
Tired of trying to manage and maintain servers? Never have a large enough operations team? Don't have a budget for running lots of servers? Don't want to pay for servers sitting idle? Afraid you might become so popular that you won't be able to scale fast enough? Don't worry, it is possible to alleviate these issues by moving to a serverless architecture that utilizes microservices hosted in the cloud. This type of architecture can support all different types of clients, including web, mobile and IoT.
During this hands-on workshop, you will build a serverless application utilizing AWS services such as Lambda, API Gateway, S3 and a datastore.
During this session you will build a simple web application utilizing AWS services and Angular.
Tired of trying to manage and maintain servers? Never have a large enough operations team? Don't have a budget for running lots of servers? Don't want to pay for servers sitting idle? Afraid you might become so popular that you won't be able to scale fast enough? Don't worry, it is possible to alleviate these issues by moving to a serverless architecture that utilizes microservices hosted in the cloud. This type of architecture can support all different types of clients, including web, mobile and IoT.
During this hands-on workshop, you will build a serverless application utilizing AWS services such as Lambda, API Gateway, S3 and a datastore.
During this session you will build a simple web application utilizing AWS services and Angular.
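For a feel of the serverless programming model, here is a minimal, hypothetical AWS Lambda handler written in Java. The workshop exercises may use a different runtime; this only illustrates the shape of a function that API Gateway could invoke, and the handler logic is made up.

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.Map;

    public class HelloHandler implements RequestHandler<Map<String, String>, String> {

      @Override
      public String handleRequest(Map<String, String> input, Context context) {
        // Hypothetical request payload: {"name": "world"}
        String name = input.getOrDefault("name", "world");
        context.getLogger().log("Handling request for " + name);
        return "Hello, " + name + "!";
      }
    }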
Microservices bring about a series of architectural shifts. One of the most powerful is true separation of concerns. This change brings with it incredible security opportunities. Join Aaron as he demonstrates how to identify and execute on these opportunities. In this session you will explore service and data classification techniques, authentication and access control, and service interface design that respects classification boundaries. If you are interested in, building, or currently using Microservices, this session is a must see!
More to follow…
You have an AngularJS application and are contemplating the daunting job of modernizing it by moving it to the latest flavor of Angular. Never fear! The job is not as hard as you might think, provided you prepare and plan for this project properly. Join us for this fear-reducing session in which I will share with you patterns and strategies to make your migration efforts painless and successful!
Join us as we explore the best practices related to migrating an existing Angular 1 application to Angular 6. We will do this by progressively refactoring an existing Angular 1 application into an Angular 7 version of the same application.
You built the app. You are ready to launch! But how do you proceed from there? You need to ensure that, once deployed, your app remains 'up', healthy, available and secure. For that, you are going to need some serious tools in your belt! Join us as we explore the tools and services you can use to complete your deployment stack and give you all of the monitoring and control that you need for a successful launch!
You built the app. You are ready to launch! But how do you proceed from there? You need to ensure that, once deployed, your app remains 'up', healthy, available and secure. For that, you are going to need some serious tools in your belt! Join us as we explore the tools and services you can use to complete your deployment stack and give you all of the monitoring and control that you need for a successful launch!
Learn about the newest version of the community developed and supported UI Router. Explore its new features and how best to apply this powerful tool in your Angular and React applications!
Join us to explore the powerful features offered by the community driven UI Router for both Angular and React applications. Learn how best to leverage this flexible and very capable tool to make your Angular applications more stable and maintainable.
These days, you can't swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans' prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking if a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We'll also look at what we need to do to maintain a healthy micro(services)biome.
There are many good reasons to use a microservices architecture. But there are no free lunches. The positives of microservices come with added complexity. Teams should happily take on that complexity…provided the application in question benefits from the upside of microservices. This talk will cut through the hype to help you make the right choice for your unique situation.
Once upon a time, it was just me and my app – the days when all I had to know was “get data, put on screen.” Fast forward ten years later, and what the hell happened? The level of complexity that we deal with in modern software development is insane.
Are we really better off than we were 10 years ago, or have we just been putting out our fires with gasoline?
In this session, we'll turn the projector off and focus on a deep-dive discussion, contrasting the world of 10 years ago versus today. Rather than generalizations and hand-waving about the golden promises of automation and magic frameworks, we're going to question everything and anchor our discussions in concrete experience.
Looking back across your career in software development, how has the developer experience changed?
First, we'll dig into the biggest causes of friction in software development, and how our solutions have created new problems. Then we'll focus on distilling strategies for overcoming these challenges, and how we can take our teams, and our industry in a better direction.
There is no doubt that Angular is the titan of modern, Javascript frameworks. That made it easier for you to convince the powers-that-be to let you select Angular for your project. You've done a small but successful POC and now your 'big' project has been green lighted to kick off next month. Your team is jazzed but as you start to plan out the real work, you begin to realize that there are many aspects inherent to large Angular projects that have no 'out of the box' answers.
Stack Overflow can only contribute 'it depends' answers that leave you more confused than before you read them. And now the panic starts to seep in, killing your buzz. Never fear! In this session, we will explore a variety of ways to architect your project and structure your code. We will look at the pros and cons of each option and discuss the trigger points for choosing each. This session is applicable to all versions of Angular and AngularJS.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers, we need to know when to operate in a certain context, when to combine contexts, and when to tease apart how they interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
With TypeScript, the JavaScript + Node ecosystem becomes a serious contender for backend development. This talk describes why: maturity, strong language features, and Enterprise-quality open source tools. Once you know how cool and fun it is, I'll reveal some less-pleasant surprises. Get the information I wish I had when moving from Java/Scala to TypeScript. If you're new to Node or to TypeScript, or if you're experienced but still frustrated, this session will widen your development world and strengthen your superpowers.
The TypeScript compiler is a function from JavaScript + some types => JavaScript + type errors. You get to choose how many type errors you get! In this session, we'll start out lenient and gradually tighten the type checking. See the transition, its beauty and its pain.
See (at least) five things I love about TypeScript, and (at least) five things that really tripped me up. You will love these things too! and you will not be surprised about the hard bits, because you'll know they're coming.
TypeScript is a serious Enterprise-ready language. This talk will get you ready for it.
In some organizations, architects are dismissed as people that draw box and arrow diagrams - the dreaded whiteboard architect. While we don't want to foster that stereotype, it is important for an architect to be able to construct basic architectural diagrams. An architect must also be able to separate the wheat from the chaff eliminating those models that don't help tell the story while fully leveraging those that do.
In this workshop, we'll discuss the various diagrams at our disposal. We'll walk through a case study and, as we go, we'll construct a set of diagrams that will help us effectively communicate our design. We'll talk about stakeholders and who might benefit from each type of diagram. Additionally, we'll discuss how to constructively review an architectural model.
Neither a laptop nor special software is required for this workshop though your modeling tool of choice (Spark, Visio, OmniGraffle, etc.) is welcome for the exercises. Of course paper and pencil are very effective too and frankly recommended! Feel free to work in pairs or teams. That's it! Well, and a willingness to participate!
In some organizations, architects are dismissed as people that draw box and arrow diagrams - the dreaded whiteboard architect. While we don't want to foster that stereotype, it is important for an architect to be able to construct basic architectural diagrams. An architect must also be able to separate the wheat from the chaff eliminating those models that don't help tell the story while fully leveraging those that do.
In this workshop, we'll discuss the various diagrams at our disposal. We'll walk through a case study and, as we go, we'll construct a set of diagrams that will help us effectively communicate our design. We'll talk about stakeholders and who might benefit from each type of diagram. Additionally, we'll discuss how to constructively review an architectural model.
Neither a laptop nor special software is required for this workshop though your modeling tool of choice (Spark, Visio, OmniGraffle, etc.) is welcome for the exercises. Of course paper and pencil are very effective too and frankly recommended! Feel free to work in pairs or teams. That's it! Well, and a willingness to participate!
Docker! Docker! Docker! Whether it's running a piece of software on your local machine or hermetically deploying your software in production, Docker has a place in your workflow. In this 2-part workshop we will get our hands dirty with Docker. We will create, tear down, and modify containers, create our own images, see how to set up networking and volumes for containers, see the role of Dockerfiles, and, if we have time, attempt to “compose” an application using “docker-compose”.
In this introductory workshop we will flit between practice and theory. We will spend a lot of time working with the Docker CLI, and cement our new found knowledge with hands-on exercises and theory.
I must highlight that this is ONLY a 3 hour workshop, but please ensure that you follow the “Set up” instructions and test to see if all is well before attending this workshop
In this workshop we will cover the following -
https://github.com/looselytyped/nfjs-docker-workshop
You will find all the requirements and installation instructions in the repository README.
Please read and follow the README carefully. Also note that there are two READMEs; the first one links to the second.
See you soon!!!
Docker! Docker! Docker! Whether it's running a piece of software on your local machine or hermetically deploying your software in production, Docker has a place in your workflow. In this 2-part workshop we will get our hands dirty with Docker. We will create, tear down, and modify containers, create our own images, see how to set up networking and volumes for containers, see the role of Dockerfiles, and, if we have time, attempt to “compose” an application using “docker-compose”.
In this introductory workshop we will flit between practice and theory. We will spend a lot of time working with the Docker CLI, and cement our new found knowledge with hands-on exercises and theory.
I must highlight that this is ONLY a 3-hour workshop, so please ensure that you follow the “Set up” instructions and test that all is well before attending.
In this workshop we will cover the following:
https://github.com/looselytyped/nfjs-docker-workshop
You will find all the requirements and installation instructions in the repository README.
Please read and follow the README carefully. Also note that there are two READMEs; the first one links to the second.
Reactive architecture patterns allow you to build self-monitoring, self-scaling, self-growing, and self-healing systems that can react to both internal and external conditions without human intervention. These kinds of systems are known as autonomic systems (our human body is one example). In this session I will show you some of the most common and most powerful reactive patterns you can use to automatically scale systems, grow systems, and self-repair systems, all using the basic language API and simple messaging. Through code samples in Java and actual run-time demonstrations, I'll show you how the patterns work and also show you sample implementations. Get ready for the future of software architecture, one that you can start implementing on Monday.
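To give a flavor of what "self-repair with simple messaging" can mean, here is a minimal, hypothetical sketch in plain Java of a heartbeat-style supervisor. The worker names and the "restart" behavior are illustrative only and are not the session's actual code; a real system would send heartbeats over a queue or topic rather than shared memory.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// A toy autonomic supervisor: workers report heartbeats, and the supervisor
// "restarts" any worker whose heartbeat goes stale.
public class HeartbeatSupervisor {

    private final Map<String, Instant> lastHeartbeat = new ConcurrentHashMap<>();
    private final Duration timeout = Duration.ofSeconds(5);

    // Workers call this periodically (in a real system, via a message).
    public void heartbeat(String workerId) {
        lastHeartbeat.put(workerId, Instant.now());
    }

    // The self-repair loop: detect stale workers and act without a human.
    public void startMonitoring(ScheduledExecutorService scheduler) {
        scheduler.scheduleAtFixedRate(() -> {
            Instant now = Instant.now();
            lastHeartbeat.forEach((workerId, seen) -> {
                if (Duration.between(seen, now).compareTo(timeout) > 0) {
                    System.out.println("Worker " + workerId + " is unresponsive; restarting");
                    restart(workerId);
                }
            });
        }, 1, 1, TimeUnit.SECONDS);
    }

    private void restart(String workerId) {
        // In a real system this would redeploy a container, re-create an
        // actor, or scale out a replacement instance.
        lastHeartbeat.put(workerId, Instant.now());
    }

    public static void main(String[] args) {
        HeartbeatSupervisor supervisor = new HeartbeatSupervisor();
        supervisor.heartbeat("payment-service");
        supervisor.startMonitoring(Executors.newSingleThreadScheduledExecutor());
    }
}
```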
Agenda
One of the hardest activities for a DevOps team (or should we say for production) is transitioning from one version of an application to another, with the cascading consequences of service dependencies. There are a number of strategies for managing this concern. In this talk, we will outline a few of them, along with the required conditions of the underlying infrastructure to achieve them.
This session will demonstrate, on a DC/OS platform, how to create a continuous delivery solution that pushes builds into production leveraging blue/green deployments. Following this, we will switch on the fly from blue to green and vice versa. We will stretch this concept to its extreme and demonstrate A/B testing in a production environment.
Imagine toString, equals, and hashCode in a single class. Can you change implementations on the spot? Probably not; there may be too many dependencies on your implementation. Time to break out an adapter pattern, a utility class, or better yet, a type class! A type class is a kind of template in very static functional programming languages. Imagine a template that can also read and write information as a side effect. Type classes are powerful.
For these various type classes, we will be looking at a project called Typelevel Cats. Typelevel is a group of projects that adhere to a code of conduct and to modular, statically typed, functional, open source programming. Cats is the flagship project for Typelevel.
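If you have never seen a type class, here is the core idea sketched in plain Java rather than Scala. The Show name mirrors a common Cats type class, but the snippet is purely illustrative (it is not how Cats is implemented): the behavior lives outside the types it applies to, and instances are supplied per type.

```java
import java.util.List;

// A type class is behavior defined *outside* the types it applies to.
// The "class" is the interface; the "instances" are implementations supplied
// per type and passed in explicitly (Scala passes them implicitly).
interface Show<A> {
    String show(A value);
}

public class ShowDemo {
    // Instances for existing types we cannot (or do not want to) modify.
    static final Show<Integer> showInt = value -> "Int(" + value + ")";
    static final Show<List<String>> showNames = names -> String.join(", ", names);

    // A function that works for any A, provided someone supplies a Show<A>.
    static <A> String describe(A value, Show<A> show) {
        return "-> " + show.show(value);
    }

    public static void main(String[] args) {
        System.out.println(describe(42, showInt));
        System.out.println(describe(List.of("cats", "typelevel"), showNames));
    }
}
```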
Our presentation will be following this story:
Even if you are not a Scala programmer, you may want to come in and see how type classes work, because I am making a bet: this will be something that other JVM languages use in the future. Kotlin doesn't have it now, Groovy doesn't have it now, and TypeScript doesn't either, although some projects are working towards this idea.
There's nothing new or exciting about relational databases. We abstract them away with ORMs, grudgingly write a query here or there, but generally try to forget about them entirely. Then the performance and scalability problems begin. “Sharding, the secret ingredient in the web-scale sauce” often won't help us.
The database is at the heart of nearly every system we build. Reading data and writing data account for the majority of performance bottlenecks. When it comes to SQL and relational databases, the syntax is easy, but the concepts often aren't. The most important knowledge is not obvious, but it is necessary to make the right design, query, and optimization decisions.
Indexing, a glimpse under the hood of the storage engine and the query optimizer, and some best practices are all you need to know to bring your DB skills head and shoulders above your peers and to be ready to build bigger, better, faster apps.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?”, which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin framing our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
As the cloud becomes more popular, many cloud-inexperienced architects wonder whether migration to the cloud is the correct way to scale. When they decide to migrate, they have to figure out where to start and which components to use. This talk is not about a particular cloud vendor but about the questions and considerations to weigh while deciding on a cloud architecture for your business.
After deciding to migrate to the cloud, the architecture design will determine the success rate of the infrastructure. This architecture needs to be robust and well thought out to ensure that it handles all the requirements at hand, and flexible enough for the future.
This talk is about considerations to take while designing a system, including:
The intended clients
Investment decisions
The business strategy
The development team
Choice of tools
Good development practices
Here we also discuss common pitfalls of the architecture design process, including choosing tools.
The maturing of industry projects and tools around cloud development and administration has led to the formation of the Cloud Native Computing Foundation. This new foundation is similar to the Apache Foundation in that it provides governance over projects from incubation to maturity. These projects define the current and future standards of the cloud, which all DevOps teams should be aware of. This session is a guided, jet-speed tour of each project and how it fits into the ecosystem.
This session will briefly cover each of the CNCF projects with an outline of:
The projects covered include:
This course will cover the foundations of threat intelligence. It will consist of a combination of lecture and lab where we will work through the concepts of detecting indicators of attack and compromise, and building automation to process and eliminate them. This is a fully immersive, hands-on workshop that will include a number of techniques, tools, and code.
It will cover the following topics:
Attendees will leave with a fully functional threat intelligence proof of concept system. This PoC can be used to design further capabilities or to evaluate larger commercial systems. Be prepared for an exciting day of code, modeling, and automation.
You will need the following tools installed and updated prior to the workshop:
Run docker pull
This course will cover the foundations of threat intelligence. It will consist of a combination of lecture and lab where we will work through the concepts of detecting indicators of attack and compromise, and building automation to process and eliminate them. This is a fully immersive, hands-on workshop that will include a number of techniques, tools, and code.
It will cover the following topics:
Attendees will leave with a fully functional threat intelligence proof of concept system. This PoC can be used to design further capabilities or to evaluate larger commercial systems. Be prepared for an exciting day of code, modeling, and automation.
You will need the following tools installed and updated prior to the workshop:
Run docker pull
This workshop will bring you up to speed on the essentials to create React applications.
The workshop will start with a quick introduction to the main problems that React solves, and then walk you through the steps to create components, to manage state, and to do automated testing.
- Laptop
- git
- your favorite editor
- node.js latest version installed on your system
This workshop will bring you up to speed on the essentials to create React applications.
The workshop will start with a quick introduction to the main problems that React solves, and then walk you through the steps to create components, to manage state, and to do automated testing.
- Laptop
- git
- your favorite editor
- node.js latest version installed on your system
For those programmers aspiring to be polyglot, there's a virtual machine that's all polyglot. In this presentation you can learn about GraalVM, what it's for, the benefits it offers, and where you may use it.
How can we make our tools work with our team? Like a good team member, great tools keep us informed, implement our decisions, and help us understand errors.
Drawing from aviation, medicine, and software, here are strategies for choosing and building tools that enhance us and do not frustrate us.
Great automation doesn't replace humans; it enhances us. The tools we choose or build for our team need to play like team members: keep us informed, do the consistent boring work, and pass the hard decisions to the humans along with the information we need to make them.
It may seem paradoxical that something small leads to something big. Yet this is the case. Big changes can feel like an existential threat and cause major disruption. Tiny changes, working obliquely and evolving towards a more desirable pattern, may lack drama, but they get you where you need to go.
So how does this work? The same way agile does: iteratively, incrementally, learning as you go. I'll share some small ideas that will add up to a big change in how you go about changing your team or organization.
Every organization, whether it is 50 or 50,000 people, faces three broad sets of concerns. First, how it fits in the market, how it serves customers, how it makes money, and what sort of place it wants to be. Second, leaders in the organization have to figure out what initiatives to invest in and how to sequence and order the work that flows into teams; they have to support teams so they can do good work. Third, teams need to figure out the details of their work and how best to collaborate.
The SEEM model provides a way to address these concerns that maximizes the possibility of healthy self-organization, and adaptability.
Rust has quickly become an incredibly popular language with exceptional tooling, documentation, and a renowned community that welcomes and helps those who are new. It is intended as a systems programming language, like C/C++, but has modern functional capabilities and intentionally designed safety features.
We will not assume knowledge of Rust and will introduce the major features of this modern systems programming language. This will include:
Come learn why Rust is one of the most popular and important programming languages of the 21st Century.
A down-in-the-trenches look at building, running, and day-to-day development with a Continuous Delivery pipeline. This talk is based on my experiences building multiple CD pipelines and optimizing developer workflows to push changes to production all day. I'll walk you through how we transformed a two-day deployment process into a 20-minute CD pipeline and then went on to perform more than 20,000 deployments.
During this presentation we'll walk through the evolution of a team teetering on collapse. Production deployments are a long-running ceremony that hasn't really changed in years. Deployments are risky, and everyone involved with the project acts accordingly: deployments can take days, and the company website has scheduled maintenance windows.
Over several months, the team will transform into a model of agile process mastery. Deployments will take minutes instead of days. The team's structure and concerns over deploying to production will also change shape.
During this talk we'll dig into the anatomy of a continuous delivery pipeline, what it is, how it works, and the challenges you'll face making the transition. Where do you start and what are the big four considerations of continuous delivery? Do you need company buy-in or can you start small and grow out to the rest of the organization?
We'll walk through the entire process, talk about team organization, breaking up the monolith, your first steps towards CD, identifying your primary objective, the building blocks of a Microservices architecture, the psychology of continuous delivery, how to write effective code in a CD ecosystem, and we'll build a continuous delivery pipeline and Microservice during the presentation.
A real-world look at using Consumer Driven Contracts in practice. How to eliminate a test environment and how to build your services with CDC as a key component.
One of the biggest challenges in building out a Microservices architecture is integration testing. If you use a Continuous Delivery pipeline, none of your environments, stage or production, are even in a steady state. How do you perform adequate testing when your environment can change during your test? How do you manage a complex web of interdependent Microservices? How do you safely evolve your API in this environment?
Consumer Driven Contracts are a key component for a successful Microservices strategy. We'll look at different CDC frameworks and how to use them. We'll discuss developer workflows and how to ensure your API changes don't break client implementations. Finally, we'll build a couple of Microservices and walk through the lifecycle of Consumer Driven Contract tests.
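Before we get to real CDC frameworks, here is a framework-free sketch in Java of the consumer side of the idea: the consumer pins down exactly the shape of the response it relies on, and the provider team can verify the same expectation before releasing. The class, endpoint, and field names here are hypothetical, not the frameworks or services used in the session.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// A consumer-driven contract in miniature: the consumer records only the
// fields it depends on, so the provider is free to evolve everything else.
class OrderServiceContractTest {

    // A canned provider response representing the agreed contract.
    private static final String CONTRACT_RESPONSE =
            "{\"orderId\":\"42\",\"status\":\"SHIPPED\"}";

    @Test
    void consumerOnlyDependsOnOrderIdAndStatus() {
        assertTrue(CONTRACT_RESPONSE.contains("\"orderId\""));
        assertTrue(CONTRACT_RESPONSE.contains("\"status\""));
        assertEquals("SHIPPED", extractStatus(CONTRACT_RESPONSE));
    }

    // Deliberately naive parsing; the point is the contract, not the client.
    private String extractStatus(String json) {
        int start = json.indexOf("\"status\":\"") + "\"status\":\"".length();
        return json.substring(start, json.indexOf('"', start));
    }
}
```

A CDC framework automates the other half of this loop by replaying the consumer's expectations against the running provider, which is what lets you drop the shared integration-test environment.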
Docker has revolutionized how we build and deploy applications. While Docker has revolutionized production, it's also had a huge impact on developer productivity. Anyone that's used Docker for an extensive period of time will tell you it's a blessing and a curse. Yes, it's portable, but networking and other characteristics of Docker can make the most chill developer long for plain old Java. During this session we'll look at Docker's good points and how to tackle the difficult areas. The end goal: enable anyone on your team to go from zero to productive in under 20 minutes.
This session will show you how to structure a Java CRUD application that leverages Docker to enable rapid developer onboarding, schema migrations, and use of common cloud services (like Pub/Sub), all from your laptop. This setup will enable you to build a streamlined, Continuous Delivery-ready, Cloud Native application; the same configuration that enables local development will supercharge your CI/CD pipeline.
If you work in a polyglot environment, you know switching to a new service can be a difficult process. There are new tools to install, environments to set up, databases to use, and so on. Docker can streamline this process and enable you to switch between services quickly and easily.
By the end of this session, you'll have a pattern for creating team friendly Microservices that works well in a Continuous Delivery Pipeline and can be deployed to any container environment. Docker will enable you to build, test and deploy your code faster and safer than ever before.
How do you build a Cloud Native application? So many cloud deployments are lift-and-shift architectures; what would it look like if you started from scratch and only used cloud native technologies? During this session we will compare and contrast two applications, one built using a traditional Java application architecture, the other using a cloud native approach. How does building an app for the cloud change your architecture, application design, development, and testing processes? We'll look at all this and more.
During this session we’ll dive into the details of Cloud Native applications, their origins and history. Then we’ll look at what’s involved when you move from an on-prem data center to the cloud. Should you change your approach to application design now that you are in the cloud? If so, what does a cloud-based design look like?
By the end of the session, you’ll have a better understanding of the benefits of a cloud native application design, how to best leverage cloud capabilities, and how to create performant Microservices.
In tech teams it's a constant firefight. We react. Then we react to the reaction… the cycle continues. In all this noise, in all this chaos, how do we move forward? How do we remain proactive?
A great leader must be an enabler for the team. At times this means insulating the team from the noise. At other times it means improving the environment for the team. At all times, however, it requires setting clear priorities and conditions for success.
This session is focused on the art of moving forward in even the noisiest environments.
One of the biggest impediments to overall developer productivity and the overall success of the software organization is inefficient processes. Without the right tooling to get to the root of the problem, debugging build and test failures is incredibly frustrating and leads to delays in shipping software.
In this workshop, you’ll work through examples using Maven, Gradle, and Gradle Enterprise on our real data and that of some popular open source projects. You'll learn how to measure build speed and reliability, which metrics are important, how to apply these analyses to your own builds, and how to use build caching to make those builds dramatically faster, enabling your team to achieve better developer productivity.
One of the biggest impediments to overall developer productivity and the overall success of the software organization is inefficient processes. Without the right tooling to get to the root of the problem, debugging build and test failures is incredibly frustrating and leads to delays in shipping software.
In this workshop, you’ll work through examples using Maven, Gradle, and Gradle Enterprise on our real data and that of some popular open source projects. You'll learn how to measure build speed and reliability, which metrics are important, how to apply these analyses to your own builds, and how to use build caching to make those builds dramatically faster, enabling your team to achieve better developer productivity.
Harold Macmillan was Prime Minister of the United Kingdom from 1957 to 1963, the last British PM born during Queen Victoria’s rule, and one whose wit and even-keeled nature defined his administration. When asked by a reporter what might force his government off the course he had firmly laid out for it, he allegedly replied “Events, dear boy, events.”
The same might be said about what is driving software architectures today. Event-driven systems have enabled organizations to build substantial microservices ecosystems with all of the decoupling and evolvability that we were promised by the distributed computing technologies of 20 years ago. But these systems raise some interesting questions: if events now rule, what has become of entities? If we store events in logs, do we still need databases? Can we merely produce immutable events to trivially scalable logs and loose our microservices to consume them with no regard for what is actually out there in the world?
To make sense of this, we turn to the past. Spanning 2,500 years before Macmillan deployed his wit on that poor reporter, we will look at what Heraclitus, Aristotle, Karl Popper, and W.V.O. Quine thought and wrote about these same questions. Are there things in the world that maintain their identity over time, or is the world just a sequence of experiences? Life may be a stream of events, but sometimes I still want to look things up by key. Four great thinkers will help us be better at following the paradigm that will be shaping our systems for the next generation. And as usual, a good philosophy lesson will make us better at practical tasks. We’ll apply a rich view of events and entities to a proposed microservices architecture that can last the next decade.
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems and the traffic that all points back to the same place for a website are all centralized technologies. This creates both a liability as well as a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technology to provide decentralized content storage and distribution, edge computing, and more. We will touch upon the Interplanetary Filesystem, WebTorrent, Blockchain spin-offs, and more.
I hope you'll join me on this exciting survey of Serverless Computing. When you think of Serverless you probably think of Lambdas or Cloud Functions, but there's so much more to the Serverless ecosystem. During this session we'll look at Serverless Computing in all its various forms and discuss why you might want to use a Serverless architecture and how it compares to other cloud services.
Serverless is an exciting component of Cloud computing and it's growing rapidly. During this session we'll look at all things Serverless and discuss how to incorporate it into your system architecture. We'll build a Lambda function during the presentation and talk about the pros and cons of Serverless and when you should use Serverless systems.
There are a few Serverless frameworks available today to make building a function easier than ever. We'll look at a couple of these frameworks, build a local, Serverless function and deploy it to AWS (if the network cooperates). Finally, we'll talk about performance considerations, how to structure your Serverless functions, and how to perform safe l
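For reference, a minimal AWS Lambda handler in Java tends to look something like the sketch below, using the RequestHandler interface from the aws-lambda-java-core library. The greeting logic is just a placeholder, not the function we'll build live.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A minimal Lambda: AWS instantiates the class and invokes handleRequest
// for each event. Input and output are serialized to and from JSON.
public class HelloHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        context.getLogger().log("Invoked with: " + name);
        return "Hello, " + name;
    }
}
```

The interesting architectural questions start after this point: cold starts, how much logic belongs in one function, and how the function is wired to events, which is where the frameworks mentioned above come in.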
The #1 fallacy of distributed computing is “The Network is Reliable.” Yet we still build web apps that 100% rely on a server and a network connection. What if we could build web apps that work, regardless of connection state? The promise and capabilities of Progressive Web Apps (PWAs) make this possible, and these capabilities are available today.
This session doesn't focus on a specific technology or backend technology; while those demonstrations are impressive, they are only useful for a handful of use-cases. Instead we look at architecture patterns and techniques that can work with any framework, any backend, and virtually any app.
In 2005 the way we built web applications changed when Google released Google Maps and the AJAX map canvas. This approach fundamentally changed how users expect to interact with web applications. Suddenly any app that sent postback after postback felt cumbersome and positively ancient.
Progressive web apps and offline capabilities are that next big shift. Soon any app that doesn't work offline is going to be as jarring and frustrating a user experience as going back to a pre-AJAX world. Don't let this be your app!
In its essence, DDD (Domain-Driven Design) is a philosophy and set of techniques for constructing a shared mental model of a system, and translating it into software. Ideally, we want conversations with other humans, and conversations with our code, to have the highest possible bandwidth. Every major problem in software development boils down to communication.
What if we enabled a dialog around the challenges, by first deconstructing the Anatomy of Communication itself using our same DDD techniques?
First, we'll start with a familiar system metaphor to give us the structure: a Network.
Imagine everyone on your team, and the software itself, are all connected by wires in a Network. Along each of these wires is a Flow of Ideas. Ideas can flow between humans, or between humans and the software. In the context of an Idea Flow Network, software is a medium for ideas.
From here, we can start to describe the dynamics of an Idea Flow Network in terms of Friction and Flow. We can invent new words, extend the models, and have dialogs about the nature of communication pain – all with our newly invented Ubiquitous Language!
In this session, we'll combine our DDD skills with cognitive science research, to bring clarity and insight to this wordless space.
Containers enable rapid development and rapid software delivery, and with that increase in speed comes a need to shift how people think about and tackle security. Running those containers is part of this consideration: the platform and container orchestration have to figure out and handle all of the moving parts.
In this talk, Laine and Josh will give their recommendations for Kubernetes as a platform to run containers. They'll go through talking about security from the perspective of the pieces that make up the container - the ingredients, and how it runs in addition to where it runs. They'll discuss application and platform boundaries while explaining a simple model to use in order to think about and discuss this complex topic.
Moving a company to continuous delivery isn't easy. Which definition of CD should you even use to explain why it's important? What if it seems like a completely overwhelming transition that couldn't possibly be successful?
We'll talk about our definition of continuous delivery along with the three main types of development and cultural pain in the way of getting a company there. Learn how to identify the types of pain, how to solve them, and how to hold on to enough hope to keep trying.
All companies are IT companies. Except…not. All companies SHOULD be IT companies, if they're trying to keep up with the weight of their customers' ever-increasing demands for speed and agility. Unfortunately…most companies don't know how to get there - or even what “there” looks like, or how they'd describe it.
Josh and Laine will talk about how to use a diagram (in this case, a map!) to build and discuss a strategy to navigate the high seas of being a business today in order to deliberately find the treasure. The treasure (continuous delivery) gives IT, and companies, the ability to embrace and empower existing resources, and eventually will give enough resources to thrive even at the lightning-fast pace of being a business today.
A long time ago, in a land far far away, there were monoliths. These fabled artifacts brought consistency and stability to the land - but there was a cost in speed, agility, time, and development pain.
Whether Java EE, .NET, or something else, the big ol' integrated plexi-purpose binaries of yore (and also now…) have grown into problems that hurt developers, architects, and the execution of business goals.
In this talk, Josh and Laine will talk specifics about the pain points of monoliths, and the various strategies they've seen to alleviate that pain.
Graal is a VM, and an awesome VM at that, able to run a variety of languages, and fast: the execution times can be impressive. This VM can run almost anything: JavaScript, Python 3, Ruby, R, JVM-based languages like Java, Scala, and Kotlin, and LLVM-based languages such as C and C++.
We are living in truly exciting times, with so much interesting technology, including in the VM space. Graal is a virtual machine and shared memory system for multiple languages. GraalVM can run either standalone or embedded in OpenJDK or Node.js. Graal can even embed inside databases such as MySQL or Oracle. In this presentation, we look at this exciting VM: how to start it, how to run polyglot applications, and how to integrate it all within the same VM.
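As a small taste of what "polyglot within the same VM" looks like, here is a minimal sketch using GraalVM's polyglot API to evaluate JavaScript from Java. The expression is obviously trivial and only illustrative; the snippets we run in the session may differ.

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

// Running JavaScript from Java on GraalVM: one Context hosts the guest
// language and hands values back to the JVM.
public class PolyglotHello {
    public static void main(String[] args) {
        try (Context context = Context.create()) {
            Value result = context.eval("js", "21 * 2");
            System.out.println("JS says: " + result.asInt()); // JS says: 42
        }
    }
}
```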
Let's take a look at some of the cool new stuff that we can use. This presentation will assume basic Java knowledge and no Scala knowledge is required.
Our presentation will do a quick little intro, and then we will proceed right into some of the new features.
Again, worth reiterating, no previous Scala knowledge required. Bring questions and your curiosity!
The cloud promises highly scalable infrastructure, economies of scale, lower costs and a more secure platform. When moving to the cloud, how do you take advantage of these new capabilities? How do you optimize your organization to make the best use of the resiliency and elasticity offered by the cloud?
Closely associated with cloud computing is Continuous Delivery, the automated process to get changes to your customers quickly, safely, and in a sustainable way. Continuous Delivery was born in the cloud and is a great way to get ideas to your customers. There’s one catch: if you want to adopt a Continuous Delivery strategy, you need to build applications differently, your team structure needs to change, and how you test and validate systems needs to adapt to these changes.
This presentation will look at how to transform your organization to take advantage of all the cloud has to offer. We’ll look at strategies for initiating your transition to the cloud, how to adopt a continuous delivery strategy, and how to manage cross-functional teams (sometimes called two-pizza teams) and projects when every team can deploy to production multiple times a day.
Managing teams in chaos will provide you the information needed to implement the two-pizza rule for your organization, enable your teams to work independently while still focusing on a common goal, and show you how to beat your competition to market.
As Cloud computing becomes more popular and many businesses are keen to adopt it, one of their major concerns is security. In spite of the hype accompanying it and the success stories from the large organisations that have adopted it, there are also numerous examples of breaches experienced in the cloud. Many businesses would like to know how to create a secure cloud infrastructure to ensure that all their applications and data are well protected.
This talk is based on my experience in different projects that I have been involved in, some pitfalls that my team has fallen into, and considerations to take while preparing new cloud infrastructure.
This talk is not about a particular cloud vendor's solutions but about the questions and considerations to take to ensure that your cloud infrastructure is secure.
The considerations include:
Ensuring data is securely protected from anyone who would want to access it.
Encrypting the data so that if it got into the wrong hands it would be unreadable
Authentication to ensure that only the authorised people can access the data
Enabling due diligence so that data is not accessible by those who eavesdrop and would like to modify it.
Protecting the infrastructure from Denial of Service (DoS) attacks from both internal and external sources
This talk also highlights some common pitfalls:
Using components for a purpose other than what they were created for.
Waiting till after the application is built before preparing the infrastructure.
Creating the infrastructure then thinking about the security at the last minute.
At the end of this session, attendees:
Are able to evaluate the level of security that they need
Can reorganise their priorities while designing their cloud infrastructure
Are equipped to create a highly secure infrastructure
In this guided demo, we are going to look at 3 different techniques that are remarkably powerful in combination to cut through legacy code without having to go through the bother of reading or understanding it.
The techniques are:
Combination Testing: to get 100% test coverage quickly
Code Coverage as guidance: to help us make decisions about inputs and deletion
Provable Refactorings: to help us change code without having to worry about it.
In combination, these 3 techniques can quickly make impossible tasks trivial.
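To give a flavor of the first technique, here is a minimal, hypothetical combination test in Java: sweep every combination of a few interesting inputs and snapshot the outputs. The "legacy" method and the inputs are stand-ins, not the Gilded Rose code itself.

```java
import java.util.List;

// Combination (approval-style) testing: run the code under test for every
// combination of inputs and capture the results as one big snapshot.
// Any behavior change shows up as a diff against the approved snapshot.
public class CombinationTest {

    // Hypothetical legacy method we want to pin down without reading it.
    static int legacyPricing(String itemType, int quantity) {
        return itemType.length() * quantity; // stand-in for gnarly legacy logic
    }

    public static void main(String[] args) {
        List<String> itemTypes = List.of("Aged Brie", "Sulfuras", "Backstage pass");
        List<Integer> quantities = List.of(0, 1, 5, 50);

        StringBuilder snapshot = new StringBuilder();
        for (String itemType : itemTypes) {
            for (int quantity : quantities) {
                snapshot.append(itemType).append(" x ").append(quantity)
                        .append(" -> ").append(legacyPricing(itemType, quantity))
                        .append('\n');
            }
        }
        // In practice you would assert this against an approved snapshot file,
        // and code coverage would tell you which inputs are still missing.
        System.out.print(snapshot);
    }
}
```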
We will be doing this on the Gilded Rose Kata: https://github.com/emilybache/GildedRose-Refactoring-Kata
It is extra beneficial if you try it out yourself first so you can see how your implementation would differ.
This is a high level talk about many of the misconceptions surrounding refactoring, including
What refactoring looks like
Why refactoring is often neglected
The pace of change
Making better choices
The ROI on improvements
This talk is broken into 5 sections:
Evolutionary design
The most common misconception is that refactoring is a mini-rewrite of a section of code. Instead, we are going to look into what microevolution looks and feels like, as well as the double-edged sword of why it is both extremely successful yet often unappreciated.
Code Smells
Using cutting-edge pattern-recognition training, we will show managers and programmers alike how to spot bad code at a glance
Naming
Explore Arlo Belshee's 7 steps for improving the naming in your code.
10 X
The ROI (return on investment) is one of the most misunderstood parts, because the math is very non-intuitive. Here we will explore why 8,402 is 10 times better than 8,333?!?
Let’s get back to basics.
One of the microskills often used in TDD is Consume First Architecture, which simply means using the fields and methods before they exist. Sounds easy? Well yes and no. Even simple lines of code can have HUGE implications on your architecture. The real skill in consume first is to be able to see, question and respond to those implications on sight.
In this lab, we are going to geek out over a single line of code. We will take it and turn it into 40-50 variations and explore how each variation impacts the resulting design.
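To illustrate what "using the fields and methods before they exist" looks like, here is a tiny, hypothetical Java example; the names Inventory, reserve, and Reservation are inventions for this sketch, and in the lab the variations come from changing just the consuming line.

```java
// Consume First: write the line that *uses* the API you wish existed,
// then let the IDE generate the member it complains about.
public class ConsumeFirstSketch {

    public static void main(String[] args) {
        // This consuming line is written first. At that moment, Inventory,
        // reserve(..), and Reservation do not exist yet; each name is a design
        // decision the compiler immediately asks us to honour.
        Reservation reservation = Inventory.reserve("SKU-123", 2);
        System.out.println(reservation);
    }
}

// The stubs below are what an IDE "create from usage" step would produce.
record Reservation(String sku, int quantity) { }

class Inventory {
    static Reservation reserve(String sku, int quantity) {
        return new Reservation(sku, quantity);
    }
}
```

Even this one line forces decisions: is reserve static or instance, does it return a value or mutate state, does it take a SKU or a domain object? Those are exactly the implications the lab explores.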
Over the course of my life I have amassed a great quantity of 1-3 minute talks. Tonight we are going to randomly pick from that list and see where the adventure takes us!
Talks:
Test Driven Math
10 X
A swimming pool isn’t just a bigger bathtub
Bdd vs TDD
Arlo’s Git Notation
The Curse of knowledge
Do NOT use the greater than sign in programming
DocDoD
Sparrows
Leveling up
On being the best
Quantum Computing
Theory Based thread testing
Better Lunches
Decision trees
Sustainable Pace
Standing alone
Better Interviews
Duplication and Cohesion
Generic Type Information at Runtime in Java
Make the easy change
“Software is eating the world” means all innovations in the company must be channeled through software. Developers are the fuel of this innovation. But can your talented software development team perform at its full potential?
The paradox of a successful software team is that as the codebase and team sizes grow, it becomes harder for developers to maintain the quick feedback cycles that are necessary to work creatively and productively. Compared to other industries, the software development process is in the dark ages, with little data to observe and optimize the process itself. Symptoms of this problem include not only time wasted waiting for builds, CI, and IDEs to do their job; it also saps our creative flow, limiting early feedback cycles and creating incorrect signals like flaky tests.
Join Hans Dockter, founder and CEO of Gradle, for a discussion of how you can use data from across the development process to understand what breaks your codebase and how to speed up cycle times to enable developers to remain creative and innovative even as your code base grows.
Vue is a new, powerful framework for building real-world applications. Enterprise-ready, with a rich and diverse ecosystem, Vue is currently ranked as the #2 front-end framework and is rapidly gaining on its older brother, ReactJS. Join us for this second in a comprehensive series of sessions, which will take you from blind novitiate to visionary VueJS expert in no time!
In this second presentation in our VueJS series, we dive deeper into VueJS and explore:
Meteor is an open-source, all-JavaScript platform for building reactive, top-quality web apps in a fraction of the time. Join us to see how to use VueJS along with this amazing full-stack Javascript framework to build realtime, reactive applications for both the web and for mobile platforms.
From concept to reality, in this session we will live code a fully featured application with VueJS, Vuex, and Meteor. Along the way, we will explore and explain the cool technologies underlying Meteor that empower such fantastic productivity. You will experience, firsthand, how Meteor makes developing web applications with VueJS both fun and exciting!
Vue is a new, powerful framework for building real-world applications. Enterprise-ready, with a rich and diverse ecosystem, Vue is currently ranked as the #2 front-end framework and is rapidly gaining on its older brother, ReactJS. Join us for this first in a comprehensive series of sessions, which will take you from blind novitiate to visionary VueJS expert in no time!
This first session in our VueJS series begins our journey to Vuetopia with an exploration of the basics of the framework including:
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
We've all got secrets, but nobody seems to know where to put them. This long-standing issue has plagued system design for ages and still has many broken implementations. While many consider this an application concern, the foundations rest in the design of the system. Join Aaron for an in-depth workshop that will cover the following secret management solutions:
Additionally, this workshop will demonstrate tools for discovering sensitive information checked in to your project.
This is a two-session workshop and is best received by attending both sessions.
You will need the following tools installed and updated prior to the workshop:
Optional
We've all got secrets, but nobody seems to know where to put them. This long-standing issue has plagued system design for ages and still has many broken implementations. While many consider this an application concern, the foundations rest in the design of the system. Join Aaron for an in-depth workshop that will cover the following secret management solutions:
Additionally, this workshop will demonstrate tools for discovering sensitive information checked in to your project.
This is a two-session workshop and is best received by attending both sessions.
You will need the following tools installed and updated prior to the workshop:
Optional
Security should always be built with an understanding of who might be attacking and how capable they are. Typical threat modeling exercises are done with a static group of threat actors applied in “best guess” scenarios. While this is helpful in the beginning, the real data eventually tells the accurate story. The truth is that your threat landscape is constantly shifting, and your threat model should dynamically adapt to it. This adaptation allows teams to continuously examine controls and ensure they are adequate to counter the current threat actors. It helps create a quantitative, risk-driven approach to security and should be a part of every security team's toolset.
Join Aaron as he demonstrates how to look at web traffic to analyze the threat landscape and turn request logs into data that identifies threat actors by intent and categorizes them in a way that can be fed directly into quantitative risk analysis. Aaron will show how important this data is in driving risk analysis and creating an effective and appropriate security program.
Any system of significant scale or latency sensitivity employs the use of caching. It could be as simple as memoization, or as complicated as a fully distributed system. These ideas serve us well, but how do we take it to the next level?
Join Aaron as he demonstrates customizing a caching system. He will discuss the pros and cons of embedding application and domain specificity into your caching model. Aaron will show a start to finish implementation of a custom Redis module that reduces latency, network round trips, and adds pub/sub notifications.
Learn how to take your cache to the next level and encode elements of your system directly into the handling of your most accessed data.
This session will span multiple languages, but will focus on C for the Redis module implementation. Knowledge of C is not required to attend this session, as the details will be explained alongside the code with examples in higher level languages.
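Since the module itself is in C, here is a rough client-side Java sketch (assuming the Jedis client) of the read-through-plus-notification pattern that a custom server-side module can collapse into a single command. The keys, channel, and payloads are illustrative only.

```java
import redis.clients.jedis.Jedis;

// Read-through cache with change notification, done client-side. Each step
// is a separate network round trip; a custom Redis module can fold them
// into one server-side operation.
public class ProductCache {

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String findProduct(String productId) {
        String key = "product:" + productId;

        String cached = jedis.get(key);              // round trip 1: try the cache
        if (cached != null) {
            return cached;
        }

        String fresh = loadFromDatabase(productId);  // the slow path
        jedis.set(key, fresh);                       // round trip 2: populate cache
        jedis.publish("product-updates", productId); // round trip 3: notify listeners
        return fresh;
    }

    private String loadFromDatabase(String productId) {
        return "{\"id\":\"" + productId + "\",\"name\":\"placeholder\"}";
    }
}
```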
DevOps is changing the way that organizations design, build, deploy, and operate online systems. Engineering teams are making hundreds or even thousands of changes per day, and traditional approaches to security are struggling to keep up. Security must be reinvented in a DevOps world to take advantage of the opportunities provided by continuous integration and delivery pipelines.
In this talk, we start with a case study of an organization trying to leverage the power of Continuous Integration (CI) and Continuous Delivery (CD) to improve its security posture. After identifying the key security checkpoints in the pre-commit, commit, acceptance, and deployment lifecycle phases, we will explore how unit testing and static analysis fit into SecDevOps. Live demonstrations will show how to enforce security unit tests and static analysis in a Jenkins CI build pipeline. Attendees will walk away with a better understanding of how security fits into DevOps to help secure their organization’s applications.
It happens to us all; there are simply days where it seems impossible to get anything done. This session focuses on techniques and tips to get into the zone, stay in the zone and to protect your productivity, even in disruptive environments.
Rather than focusing on any one productivity methodology (e.g., GTD), this talk analyzes the internal and external factors that affect our productivity and offers broader strategies to get back on track.
Many agile teams—distributed or collocated—practice cargo cult agile instead of realizing the freedom and ease an agile approach can actually supply. That’s because they don’t realize there are seven principles they can apply. Applying them isn’t easy but is possible. In this workshop, we'll walk through the seven principles and you'll create your action items for what you can do when you return to the office.
The seven principles:
This workshop will not address the first principle in the book: Establish Acceptable Hours of Overlap. That's a separate workshop all its own.
Want to bring in [new cool thing X] or [necessary technology change Y] to your company, because you know there's a need for it? GOOD IDEA! Except…now what? If your company is more than about 3 people, how do you explain, enable, and encourage the adoption of this change, especially if it will require some work on everyone’s part?
In How to Technology Good, Josh and Laine will explain how bringing in technology is subject to one of the biggest problems in IT - how to scale it. They'll also talk about tips and tricks for how to be as successful as you can, and the main things to keep track of and watch out for. They'll go through each phase of bringing in new tech, all the way from how to pick your success criteria through what to think about when it comes to maintenance.
If companies truly want to go FAST, occasionally that requires changing something about the culture of the company. Processes get stale or overly complex, people don’t know why things are the way they are, and everyone wonders at the wisdom of asking too many questions.
Culture change is hard, and in this talk we’ll explain the most important piece of surviving and even finding JOY in it – having a strong, supportive community.
We work in IT – and while we WORK with computers, we do not always FUNCTION like computers where inputs consistently make the same outputs. Our jobs are mostly theory and design and strategy, with some good old fashioned implementation thrown in – and as skilled knowledge workers, we function best when we respect that our mental and emotional resources matter.
In this talk, we’ll explain some of the best practices we’ve stumbled across for personal (brain and heart) resource maintenance.
We all have an innate sense of what's possible. Not only is this how magicians fool you, but it might also be what's holding you back.
In this session Michael Carducci shares how he applied lessons learned in his career as a professional magician to his “day-job” as a software engineer.
Magicians have a simple process for creating new material: think of the most impossible thing you can imagine, then engineer a way to make it possible. Michael has been engineering solutions to “impossible” problems for nearly 20 years and this has given him a unique perspective on dealing with challenges in all aspects of his life.
This talk combines illusion, anecdotes and real-world examples to help identify and overcome your mental obstacles.
How to architect and deploy a microservice architecture on Amazon Web Services using services such as API Gateway and CloudFormation. We'll touch on a broad swath of services in the AWS suite to learn about what they do and how they fit into a microservice architecture.
First we'll look at the tools needed to build and deploy microservices on AWS with a Continuous Delivery pipeline. Then, we'll talk through some of the challenges of a distributed system and the tools that AWS provides to address them.
This talk assumes you know about microservice architecture at a high level, but assumes no prior knowledge of AWS.
A bug corrupts your critical data; how do you undo it without data loss? Your biggest customer needs to know the exact state of the system at a very specific point in time; how do you find that out? Systems using event sourcing have good answers to these questions. Event sourcing is nothing new. In fact, it's a proven pattern for building reliable systems at scale. For example, it's how most RDBMSes are implemented. Yet many developers are unfamiliar with this approach.
In this talk, we'll demonstrate how event sourcing works with examples. We'll discuss when to use event sourcing, how it relates to CQRS, and how it's a great fit for distributed systems such as microservices.
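A minimal sketch of the mechanics in Java, using a toy bank-account domain (not the example from the talk): state is never stored directly, only derived by replaying the append-only event log.

```java
import java.util.ArrayList;
import java.util.List;

// Event sourcing in miniature: the event log is the source of truth;
// current state is a fold (replay) over the events.
public class AccountEventSourcing {

    sealed interface Event permits Deposited, Withdrawn { }
    record Deposited(long cents) implements Event { }
    record Withdrawn(long cents) implements Event { }

    private final List<Event> log = new ArrayList<>();   // the event store

    public void deposit(long cents)  { log.add(new Deposited(cents)); }
    public void withdraw(long cents) { log.add(new Withdrawn(cents)); }

    // Rebuild state by replaying every event. Replaying only a prefix of the
    // log answers "what was the exact state at time T?", and a corrupting bug
    // can be undone by fixing the replay logic instead of patching the data.
    public long balance() {
        long balance = 0;
        for (Event event : log) {
            if (event instanceof Deposited d) balance += d.cents();
            else if (event instanceof Withdrawn w) balance -= w.cents();
        }
        return balance;
    }

    public static void main(String[] args) {
        AccountEventSourcing account = new AccountEventSourcing();
        account.deposit(10_00);
        account.withdraw(3_50);
        System.out.println(account.balance()); // 650
    }
}
```

CQRS enters the picture when you maintain separate read models derived from this same log, which is part of what makes the pattern such a natural fit for microservices.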
Everyone understands the Web as a platform, but not as many can see the thread that connects the Web to APIs to linked interoperable data models. In this design workshop, we will investigate the properties of the Web and how they can be applied to data, documents, services, and concepts as the basis of a truly spectacular vision for information interchange.
We will start with the basics and build up to flexible, evolvable data models and everything in-between.
Topics will include
Everyone understands the Web as a platform, but not as many can see the thread that connects the Web to APIs to linked interoperable data models. In this design workshop, we will investigate the properties of the Web and how they can be applied to data, documents, services, and concepts as the basis of a truly spectacular vision for information interchange.
We will start with the basics and build up to flexible, evolvable data models and everything in-between.
Topics will include
Synchronous API calls are inherently more resource-intensive than queuing up an async message, and the failure scenarios can be complex. Yet, most developers use synchronous REST or RPC for inter-app communication without questioning it. What would our applications look like if we used asynchronous messages, or events, to send messages from one app to another by default?
In this talk, we'll explore some common use cases to see whether synchronous or async would be a better fit and what the tradeoffs are. Finally, we'll take a high-level look at how companies are embracing async using event stream processing and workflows.
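To make the trade-off concrete, here is a small in-process Java sketch that uses a BlockingQueue as a stand-in for a broker (a real system would use Kafka, RabbitMQ, SQS, and so on). The event and service names are hypothetical; the point is only that the producer returns immediately while the consumer works at its own pace.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Async by default: the caller enqueues an event and moves on, instead of
// blocking on a synchronous REST/RPC call and absorbing the callee's failures.
public class OrderEvents {

    record OrderPlaced(String orderId) { }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<OrderPlaced> queue = new LinkedBlockingQueue<>(); // stand-in for a broker topic

        // Consumer: processes events at its own pace and can be restarted or replayed.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    OrderPlaced event = queue.take();
                    System.out.println("shipping service handling " + event.orderId());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        // Producer: publishing is cheap and does not wait for the consumer.
        queue.put(new OrderPlaced("order-1"));
        queue.put(new OrderPlaced("order-2"));
        Thread.sleep(200); // give the daemon consumer a moment before the JVM exits
    }
}
```

The cost, of course, is that the producer no longer knows when (or whether) the work finished, which is exactly the trade-off the talk examines.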
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or simply get more out of your team, you must first understand that having a “good idea” is simply the beginning. An idea must be communicated; a case must be made. Communicating that case well is as important, if not more so, than the strength of the idea itself.
You will learn 6 principles to make an optimal case and dramatically increase the odds that the other person will say “Yes” to your requests and suggestions, along with several strategies to build consensus within your teams. As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
This open source machine learning framework from Google has taken off. Come learn what you can do with it in your own organization.
TensorFlow is a powerful, data-flow-oriented machine learning framework developed by Google's Brain Team. It was designed to be easy to use and widely applicable to both numeric, neural-network-oriented problems and other domains. We'll cover the overview as well as apply it to several fun, realistic problems.
Jez Humble and David Farley unleashed a revolution with their seminal book “Continuous Delivery”, leading to the evolution of a movement aptly named DevOps.
DevOps is about bringing development and operations closer together, letting developers manage their releases and operationalize their applications in production, leaving operations to do what they do best.
In this session we will see what DevOps really entails: what it means to embrace DevOps, what changes we should expect, what kind of tooling we should be thinking about, and, finally, the benefits of embracing DevOps.
Functional programming (FP) is fast becoming the tool that programmers reach for in this era of multi-core processors. Although the definition of “functional” varies quite a bit between implementations, there are a few facets that remain core and true to the paradigm: facets such as first-class functions, higher-order functions, closures, etc. In this session we will explore the meaning of these using JavaScript as our medium.
Why JavaScript? The answer in short is: omnipresence. The long answer is that hiding at the core of JavaScript is a language that is not only beautiful and elegant, but one that supports many of the core ideas in FP. If you are interested in what the fuss is all about, or are confused about some of the concepts that make FP a reality, then this is the session you should attend.
Micronaut is a modern, JVM-based, full-stack framework for building modular, easily testable microservice applications.
In this session we'll dive deep into Micronaut: its strengths, capabilities, and best practices when building and testing services, functions, and reactive apps.
How to architect and deploy a microservice architecture on Amazon Web Services using services such as API Gateway and CloudFormation. We'll touch on a broad swath of services in the AWS suite to learn about what they do and how they fit into a microservice architecture.
First we'll look at the tools needed to build and deploy microservices on AWS with a Continuous Delivery pipeline. Then, we'll talk through some of the challenges of a distributed system and the tools that AWS provides to address them.
This talk assumes you know about microservice architecture at a high level, but assumes no prior knowledge of AWS.