It seems like every day there is a new headline about a security breach in a major company’s web application. These breaches cause companies to lose credibility, cost them large sums of money, and those accountable undoubtedly lose their jobs. Security requires you to be proactive. Keep your employer out of the headlines by learning some key security best practices.
This hands-on workshop is designed to teach you how to identify and fix vulnerabilities in Java web applications. Using an existing web application, you will learn ways to scan and test for common vulnerabilities such as hijacking, injection, cross-site scripting, cross-site request forgery, and more. You will learn best practices around logging, error handling, intrusion detection, authentication, and authorization. You will also learn how to improve security in your applications using existing libraries, frameworks, and techniques to patch and prevent vulnerabilities.
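As a taste of the kind of fix the workshop walks through, here is a minimal sketch of closing a SQL injection hole with a parameterized JDBC query; the table, columns, and lookup method are hypothetical, not taken from the workshop application.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // Vulnerable: concatenating untrusted input lets an attacker rewrite the query,
    // e.g. email = "x' OR '1'='1" returns every row.
    ResultSet findUserUnsafe(Connection conn, String email) throws SQLException {
        String sql = "SELECT id, name FROM users WHERE email = '" + email + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safer: a parameterized query treats the input strictly as data.
    ResultSet findUser(Connection conn, String email) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM users WHERE email = ?");
        ps.setString(1, email);
        return ps.executeQuery();
    }
}
```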
Software Architect regularly places in the top ten of annual surveys of best jobs, yet no clear path exists from developer to architect. Why aren’t there more books and training materials to fill this demand? First, software architecture is a massive, multidisciplinary subject, covering many roles and responsibilities, which makes it difficult to teach because so much context is required for the interesting subjects. Second, it’s a fast-moving discipline, where entire suites of best practices become obsolete overnight. This workshop provides the fundamentals to transition from developer to architect, or to help “accidental” architects.
Part 1 of this workshop focuses on the many elements required to make the journey from developer to architect, covering process topics like the impact of Continuous Delivery on architecture, technical subjects like application, integration, and enterprise architecture, and soft skills. While we can’t make you an architect overnight, we can start you on the journey with a map and a good compass.
Part 1, From Developer to Architect, covers:
Part 2 of this workshop takes a deeper dive into application, integration, and enterprise architecture topics, including evaluating architectures via Agile ATAM, the impacts of continuous delivery on architecture, comparing architectures, SOA, SOAP, and REST, integration hubs, and enterprise architecture approaches and strategies.
Part 2, Deeper Dive, covers:
To fully leverage knowledge, you need to apply it. The last part of this workshop uses the public-domain Architectural Katas exercise to apply what you learned in the first two parts.
Big Data applications today demand faster data processing and analysis. Apache Cassandra is one of the best solutions for storing and retrieving data, and we will also explore Apache Spark, a data analytics cluster computing framework advertised as up to 100x faster than Hadoop MapReduce, with real-world examples.
We will start with an introduction to Apache Cassandra. We will explore the challenges encountered when attempting to scale with relational databases, and how NoSQL databases like Cassandra address those problems. We will also review the Cassandra architecture, its benefits, and how to use the Cassandra read and write paths.
Later, you will learn how to effectively and efficiently solve analytical problems using Apache Spark, Apache Cassandra, and DataStax. You will learn about the Spark API, the Spark-Cassandra Connector, Spark SQL, Spark Streaming, and fundamental performance optimization techniques.
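For a flavor of the Spark-Cassandra Connector covered later, here is a minimal Java sketch; it assumes Spark and the DataStax spark-cassandra-connector are on the classpath and a Cassandra node is running locally, and the keyspace and table names are made up.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

public class CassandraSparkCount {
    public static void main(String[] args) {
        // Point Spark at a local Cassandra node; adjust host, keyspace, and table for your cluster.
        SparkConf conf = new SparkConf()
                .setAppName("cassandra-spark-sketch")
                .setMaster("local[*]")
                .set("spark.cassandra.connection.host", "127.0.0.1");

        JavaSparkContext sc = new JavaSparkContext(conf);

        // Expose a Cassandra table as a Spark RDD and run a distributed count over it.
        long rows = javaFunctions(sc)
                .cassandraTable("music", "tracks")   // hypothetical keyspace and table
                .count();

        System.out.println("Rows in music.tracks: " + rows);
        sc.stop();
    }
}
```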
For this session, you will need a Mac or Windows laptop
1) Download Cassandra:
http://cassandra.apache.org/
2) To set up your environment (on Mac & Windows) for the Docker exercises
(1) Download VirtualBox from https://www.virtualbox.org/
(2) Download Docker https://docs.docker.com/kitematic/
(on Linux it should work if you just have the docker package installed)
3) Download Spark
http://spark.apache.org/downloads.html
4) Download the DataStax sandbox environment: either a VirtualBox or VMware image
https://academy.datastax.com/downloads/welcome
From:
https://academy.datastax.com/resources/getting-started-apache-spark
Download: DS320 Virtual Machine Download (Includes Exercises) https://s3.amazonaws.com/datastaxtraining/VM/DS320-vm-dsa.zip
5) Download exercises from the workshop site.
This two-session workshop covers messaging concepts, standards (JMS, EIP), and technologies, including hands-on exercises with ActiveMQ, Spring, and Camel (a short JMS sketch appears after the topic list below).
Topics
Fundamentals: JMS, EIP
Technologies and Architectures: ActiveMQ
Enterprise Integrations: Spring Framework & Apache Camel
Demos and Hands-on Exercises
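To show the shape of the JMS API the exercises build on, here is a minimal point-to-point sketch; it assumes a local ActiveMQ broker on the default port, and the queue name is made up.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class HelloJms {
    public static void main(String[] args) throws Exception {
        // Assumes an ActiveMQ broker listening on tcp://localhost:61616.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("workshop.greetings");  // hypothetical queue name

        // Producer: send a text message to the queue.
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("Hello, messaging!"));

        // Consumer: receive it back (blocks up to 2 seconds).
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(2000);
        System.out.println("Received: " + received.getText());

        connection.close();
    }
}
```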
Download Prior to Workshop
React.js is a view library from Facebook for building performant user interfaces in JavaScript. In this session, we'll explore React.js and understand why it's a great step forward for building complex UIs that run fast. We'll code up an example web application using React.js and step through the basics of using the library while discussing concepts like the virtual DOM and components.
This is a HANDS-ON WORKSHOP, INTRODUCTORY LEVEL. Please come with a laptop, and if possible, with the software needed pre-installed (instructions below). We'll cover these topics - both in code and in discussion format - and do as many as we can in the time allotted:
React.js workshop. Self-contained, one-day workshop covering the basics. Please follow all of these instructions before attending the workshop.
1) Node.js 4.6.0 LTS
2) Git 2.10.0
3) Bash console; for example, Cmder on Windows or iTerm on OS X
4) Favorite editor; for example, WebStorm (paid) or Atom (free)
5) Clone Git repo: https://github.com/prpatel/connect.tech-react-workshop
6) cd connect.tech-react-workshop && npm install
Windows users will need the following software in order to run npm install properly. This is primarily due to jsdom, which is used for testing.
1) Python 2.7
2) Visual Studio 2015 (or 2012) with C++
3) npm config set msvs_version 2015 --global (where 2015 is either 2015 or 2012, depending on which you have installed)
4) cd connect.tech-react-workshop && npm install
With many AAA video games in its portfolio, Unity has become a powerhouse within the game development and simulation industry. But Unity is more than a game engine — it's a complete ecosystem of tools, workflows, and integrations. Very small development teams and even individual hobby developers can create great games with Unity, for any platform, 3D or 2D.
In this workshop, I'll give an overview of tools and techniques for building games for desktop and mobile using Unity. We will break down a real game for iOS, from asset creation, to building scenes, to scripting in C# (on Windows and Mac), to animation, to testing, and finally publishing a game. You will leave this workshop with an understanding of the power of Unity, and the knowledge necessary to start building games. Game development is a thrill, and all developers can benefit from knowledge of cutting-edge tools.
For this session, you will need a Mac or Windows laptop, and to install the free Unity editor, available from http://unity3d.com; please register an account at Unity as well. I will have copies available for install, but we'll save time if you come ready to code. No experience with Unity or C# is required.
It seems like all we talk about these days is making our architectures more modular. But why? In this session I will discuss the drivers and reasons why it is essential to move towards a level of modularity in our architectures. I will discuss and show real-world use cases of distributed modular architectures (specifically microservices and service-based architecture), and then discuss in detail the core differences between microservices and service-based architecture and when you should consider each. I'll end the talk by discussing the most effective way of migrating to modular distributed architectures.
Agenda:
Even though teams are gaining more experience in designing and developing microservices, there is still a lot to learn about this highly distributed and somewhat complicated architecture style. Unfortunately, lots of microservices anti-patterns and pitfalls emerge during this learning curve. Learning about these anti-patterns and pitfalls early on can help you avoid costly mistakes during your development process. While anti-patterns are things that seem like a good idea at the time and turn out bad (see martinfowler.com/bliki/AntiPattern.html), pitfalls are practices that are never a good idea - ever. In this session I will cover some of the more common anti-patterns you will likely encounter while creating microservices, and most importantly describe some of the techniques for avoiding them.
Agenda
Even though teams are gaining more experience in designing and developing microservices, there is still a lot to learn about this highly distributed and somewhat complicated architecture style. Unfortunately, lots of microservices anti-patterns and pitfalls emerge during this learning curve. Learning about these anti-patterns and pitfalls early on can help you avoid costly mistakes during your development process. While anti-patterns are things that seem like a good idea at the time and turn out bad (see martinfowler.com/bliki/AntiPattern.html), pitfalls are practices that are never a good idea - ever. In this session I will cover some of the more common pitfalls you will likely encounter while creating microservices, and most importantly describe some of the techniques for avoiding them.
Agenda
Microservices are all the rage. But this isn’t a session on microservices. It’s a session on modularity. At the end of the day, microservices are just one way to increase the modularity of our software systems. But there are others.
In this session we’ll refactor a monolith using patterns of modular architecture. In the end, you’ll see how the underlying set of principles used to modularize the monolith is virtually identical to the benefits of a microservice architecture, albeit manifested in a different way. Once modularized, you’ll also be amazed at how much architectural agility we gain in our ability to shift between different approaches to modularity, including microservices.
Traditional approaches to software architecture are broken. Attempts to define the architectural vision for a system early in the development lifecycle do not work. In today’s volatile technology and business climate, big architecture up front is not sustainable. In this session, we will explore several principles that help us create more flexible and adaptable software systems. But first, we’ll expose the true essence of what’s meant when we say “architectural agility.”
What’s the goal of architecture? To serve as a blueprint of the system that everyone understands? To possess the flexibility to evolve as new requirements emerge? To satisfy the architectural qualities, including performance, security, availability, reliability, and scalability? Yes. Yes. Yes. At the heart of these three questions are the three pillars of architecture - social, process, and structure. But how do we create software architectures that achieve all of these goals? And how do we ensure no disconnect occurs between developers responsible for implementation and architects responsible for the vision? In this session, we’ll explore several principles to increase architectural agility and provide some actionable advice that will help you get started immediately.
Understand Java from a functional programming point of view. This part covers the basics of lambdas and streams, emphasizing functional programming by transforming collections using the stream approach.
Also includes method references and static and default methods in interfaces.
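As a taste of the stream-based collection transformations this part covers, here is a small self-contained sketch; the sample data is invented.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamBasics {
    public static void main(String[] args) {
        List<String> languages = Arrays.asList("Java", "Groovy", "Scala", "Clojure", "Kotlin");

        // Transform the collection with a stream pipeline instead of an explicit loop:
        // filter with a lambda, map with a method reference, collect into a new list.
        List<String> shortNames = languages.stream()
                .filter(name -> name.length() <= 5)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        shortNames.forEach(System.out::println);  // prints JAVA, SCALA
    }
}
```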
Functional features in Java, including parallel streams, the java.util.function package, the Optional data type, and reduction operations.
The talk also covers the new date and time package based on Joda-Time, as well as collectors and implementing the Collector interface.
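A short sketch of the kind of features this part covers: Optional, a grouping collector, and the java.time API. The sample data is made up.

```java
import java.time.LocalDate;
import java.time.Month;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

public class FunctionalFeatures {
    public static void main(String[] args) {
        List<String> attendees = Arrays.asList("Alice", "Nancy", "Bob", "Nick");

        // Optional makes "no result" explicit instead of returning null.
        Optional<String> firstN = attendees.stream()
                .filter(s -> s.startsWith("N"))
                .findFirst();
        System.out.println(firstN.orElse("nobody"));          // Nancy

        // A collector that groups elements, here by first letter.
        Map<Character, List<String>> byInitial = attendees.stream()
                .collect(Collectors.groupingBy(s -> s.charAt(0)));
        System.out.println(byInitial);                        // {A=[Alice], B=[Bob], N=[Nancy, Nick]}

        // The java.time package (inspired by Joda-Time) is immutable and fluent.
        LocalDate conference = LocalDate.of(2016, Month.JULY, 12);
        System.out.println(conference.plusDays(5).getDayOfWeek());
    }
}
```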
Java SE 8 introduces many new features that can simplify your code. Streams, lambdas, and the new Optional type all change the way we write Java. In this presentation, we'll work through a series of examples that show how to rewrite existing code from Java 7 or earlier using the new Java 8 approach.
Examples will include replacing anonymous inner classes with lambdas, switching from iterating over collections to transforming streams, using immutables wherever possible, lazy evaluation, and more.
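Here is a minimal before-and-after sketch of the first kind of rewrite mentioned above, replacing an anonymous inner class with a lambda and a method reference; the word list is invented.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class Java7ToJava8 {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("stream", "lambda", "optional");

        // Java 7 and earlier: an anonymous inner class just to pass a comparison.
        Collections.sort(words, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8: the same intent as a comparator built from a method reference.
        words.sort(Comparator.comparingInt(String::length));

        System.out.println(words);  // [stream, lambda, optional]
    }
}
```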
This workshop prepares web and application developers to build applications with Docker, Kubernetes, and OpenShift. We’ll start with a short introduction to Platform-as-a-Service, Docker, and Kubernetes, which are some of the foundational pieces of OpenShift.
Using hands-on exercises, I will walk you through several use cases for using Docker containers to deploy simple, complicated, stateful, stateless, and microservice applications. You will bring your laptop and I will bring a working platform in a VM and we will spend most of the time getting stuff done.
Please follow the instructions here:
https://github.com/thesteve0/workshops/blob/master/UberConf/install_vm.asciidoc
While rummaging through some books the other day, I came across my copy of The Pragmatic Programmer. Flipping to the copyright page, I realized that it had been 16 years since its publication. Many of our careers have been deeply affected by reading and considering the many nuggets of wisdom contained in this book, and it is near the top of multiple recommended reading lists.
In this presentation, we’ll revisit this book, and we’ll also consider what we’ve learned since its publication - what would we change? And what remains timeless?
Many of us would love to embrace microservices in our day-to-day work. But most of us don’t have the opportunity to start over with a pure greenfield effort. We have to understand how to refactor our existing monolithic applications toward microservices. Practical steps include building new features as microservices, leveraging anti-corruption layers, and strangling the monolith.
In this presentation we’ll go light on the theory and walk through the actual process of turning a strawman monolith into a family of well-factored microservices.
In this session, we will take a look at Angular - the powerful MVVM SPA framework from Google. We will discuss some of the terminology that Angular offers, and see how we can use that to develop highly interactive, dynamic web applications. See “Detail” for a list of topics I cover and the GitHub repo URL.
In this session we will take a look at Angular and using it to develop rich web applications. Angular embraces HTML and CSS, allowing you to extend HTML towards your application, and uses plain JavaScript, which makes your code easy to reuse and test.
Note: This is an intro level talk. It is targeted towards developers who are curious about Angular and want to learn about the fundamental features and concepts in Angular.
Topics Covered -
ng-app
ng-init and the evaluation {{ }} directive
$rootScope and scoping rules
ng-model
ng-repeat
ng-form, form validation and submission in AngularJS
ng-messages to display form validation messages to the user
GitHub URL - https://github.com/looselytyped/angudone-workshop/tree/solutions
In this session, we will take a look at Angular - the powerful MVVM SPA framework from Google. We will discuss some of the terminology that Angular offers, and see how we can use that to develop highly interactive, dynamic web applications. See “Detail” for a list of topics I cover and the GitHub repo URL.
In this session we continue our discussion from Part I. As we continue to evolve our application, we will seek to use and understand a few more of AngularJS's core constructs.
ng-view and $routeProvider
$http
If time permits we will look at a few good practices when developing AngularJS applications, ways to modularize your code, and some tools that aid in the development of AngularJS applications.
In an increasingly crowded field of languages, Clojure stands alone. It is a dynamic, functional, high performance dialect of Lisp that runs on both the JVM and CLR. The creator cast aside assumptions from both the Lisp and Java communities to create a remarkable language implementation.
This workshop introduces Clojure to Java developers who might not have seen a Lisp and don’t yet understand why that’s such an advantage. I introduce the language syntax (what little there is of it), cover interoperability with Java, macros, multi-methods, and more. I also cover the functional aspects of Clojure, showing its powerful immutable data structures, working with threads and concurrency, and sequences. Beyond just showing syntax, I also show how to build real applications in Clojure, and give you a chance to do the same. Attending this workshop shows enough to pique your interest and show why many of the people who were interested in Java in 1996 are interested in Clojure now.
The first part of the Continuous Delivery workshop covers the differences between continuous integration, continuous deployment, and continuous delivery. It also introduces the deployment pipeline, along with usage, patterns, and anti-patterns. This part concludes with some applied engineering principles.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with the authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
The first part of the workshop describes the technical differences between related topics such as continuous integration, continuous deployment, and continuous delivery. At the heart of the workshop is a pattern called the deployment pipeline, which involves the creation of a living system that models your organization's value stream for delivering software. I discuss the various stages, how triggering works, patterns and anti-patterns, and how to pragmatically determine what “production ready” means. This session also covers some agile principles espoused by the Continuous Delivery book, including new perspectives on things like developer workstations and configuration management.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant. Second, I specifically cover version control usage and offer alternatives to feature branching, like feature toggles and branch by abstraction. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with the authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant, including the audience and engineering practices around different types of tests. I also cover some best practices around testing, including testing ratios, code coverage, and other topics. Second, I specifically cover version control usage and offer alternatives to feature branching, like feature toggles and branch by abstraction. Generally, I talk about building synergistic engineering practices that complement rather than conflict with one another. In particular, I discuss why feature branching harms three other engineering practices and describe alternatives. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
Over the past year I’ve had the pleasure of wearing the hat of “product manager” for the Spring Cloud Services team at Pivotal, operating using a distributed variant of the Pivotal Labs process. Along the way I’ve learned many valuable lessons that I hope you’ll be able to apply to your product development efforts.
In this presentation we’ll examine the relationship of product management to engineering and to your customer, and how you can be an effective broker between the two groups.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team (and management) on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools, you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team, as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board, as well as when and how to approach management within the department and higher up in the organization.
Modern software development exhibits a curious trend: Yesterday’s Best Practice Becomes Tomorrow’s Antipattern. Why? Haven’t we learned enough about software development to avoid this trap over and over? This keynote investigates why this trend continues, and offers some advice for enduring practices.
Modern software development exhibits a curious trend: Yesterday’s Best Practice Becomes Tomorrow’s Antipattern. Why? EJB and SOA were once Best Practices, now shunned as anti-patterns. This keynote investigates why this trend continues, including increased tech stack complexity, primordial abstraction ooze, code reuse abuse, strangling dependency management, and the fundamental dynamic equilibrium of the software development ecosystem. I also investigate how to keep Yesterday’s Best Practice from becoming Tomorrow’s Antipattern, including domain-centric architectures, immutable infrastructure, evolutionary architecture, incremental architectural change, and how to favor evolvability over predictability. Also includes figurative and literal dumpster fires.
Reactive is the latest buzzword to consume our industry. This presentation distills and defines reactive systems, describes the difference between reactive architecture and reactive programming, describes common patterns, and demos popular reactive JVM technologies like RxJava and Akka.
This introduction to reactive goes deep into a discussion of patterns - Source, Sink, Back Pressure, and Reactive Pull/Push - including a light introduction to actors using Akka, ReactiveX using RxJava, and Reactive Streams in RxJava and Akka. We will also showcase the differences between ReactiveX and Akka.
Starting with JDK 5, we have had Futures, and they mostly went ignored. Now with concurrency and reactive technology in demand, it is essential that we understand what futures are, and how to handle them and make use of their power in asynchronous systems.
This presentation is a basic, ground-up introduction to Futures. We start with Futures and how they came packaged with JDK 5. We take a look at Executors, how to create a thread pool, and which pool you should choose. We show how to model Futures in the JDK and the difference between blocking for an answer and handling the answer asynchronously. We also take a look at what a Promise is and when to use one. We then invest time taking a look at Guava's callback solution. Finally, we look at the handling of futures in both Scala and Clojure.
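A small sketch of the ground covered here, from a JDK 5-style blocking Future to a JDK 8 CompletableFuture used as a promise; the computation is just a stand-in for real work.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FuturesDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // JDK 5 style: submit work and block on get() for the answer.
        Future<Integer> blocking = pool.submit(() -> expensiveComputation(21));
        System.out.println("Blocking result: " + blocking.get());

        // JDK 8 style: CompletableFuture acts like a promise; attach callbacks
        // instead of blocking, and compose further work on the result.
        CompletableFuture
                .supplyAsync(() -> expensiveComputation(21), pool)
                .thenApply(n -> n * 2)
                .thenAccept(n -> System.out.println("Async result: " + n))
                .join();  // joined here only so the demo doesn't exit early

        pool.shutdown();
    }

    static int expensiveComputation(int input) {
        return input * 2;  // stand-in for real work
    }
}
```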
ReactiveX is a set of Reactive Extensions developed by Netflix and available for various programming languages, like Java, Scala, and Clojure. ReactiveX overhauls the observable design pattern to achieve reactive goals. This presentation will focus solely on the Java version of ReactiveX, RxJava.
RxJava combines the Observer pattern with functional programming to compose complex asynchronous reactive systems. This presentation will also give an overview of RxJava concepts like Source, Sink, Back Pressure, and Reactive Pull and Push.
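Here is a minimal sketch in the RxJava 1.x API, wiring a source through operators to a subscriber (the sink); the emitted values are arbitrary.

```java
import rx.Observable;

public class RxBasics {
    public static void main(String[] args) {
        // A source emits items; operators transform them; the subscriber consumes them.
        Observable.range(1, 10)
                .filter(n -> n % 2 == 0)
                .map(n -> "tick " + n)
                .subscribe(
                        System.out::println,                              // onNext
                        error -> System.err.println("failed: " + error),  // onError
                        () -> System.out.println("done"));                // onCompleted
    }
}
```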
A set of various tools for writing reactive, concurrent, fault-tolerant applications and services using immutable data, asynchronous message passing with local and remote actors, software transactional memory, and supervised systems. This entire presentation is done in Java.
Akka is a set of various tools to write reactive, concurrent, fault-tolerant applications and services using immutable data, asynchronous message passing using local and remote actors, software transactional memory, and supervised systems.
Akka is also part of the Typesafe stack, which includes the Play web framework, Spray, Slick, and the Scala programming language. This Akka presentation will cover Java-style usage of Akka with actors, asynchronous message passing, supervision, and streams.
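A minimal sketch of an actor in Akka's classic Java API (circa Akka 2.4): no shared state, just asynchronous messages delivered to the actor's mailbox. The actor and message here are invented.

```java
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class Greeter extends UntypedActor {

    // Called once per message, one message at a time; state inside the actor is safe.
    @Override
    public void onReceive(Object message) {
        if (message instanceof String) {
            System.out.println("Hello, " + message);
        } else {
            unhandled(message);
        }
    }

    public static void main(String[] args) throws Exception {
        ActorSystem system = ActorSystem.create("workshop");
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");

        greeter.tell("UberConf", ActorRef.noSender());  // fire-and-forget, asynchronous
        Thread.sleep(500);                              // give the actor a moment to process
        system.terminate();
    }
}
```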
How do we define identity in a distributed software system? How do we manage it securely? How do we make identity assertions and verify those claims?
Technologies don't magically become solutions. They are used within domain, design and deployment contexts. This talk will focus on the singular notion of Identity and how it cross-cuts the distributed systems we are building.
We will focus on a variety of technologies and standards that help us make, identify, claim and verify identities.
Authenticated Identities are the first step toward establishing Privilege. Most systems fail to have a sufficiently deep notion of how to apply and minimize privilege to keep data and systems from being abused.
Technologies don't magically become solutions. They are used within domain, design and deployment contexts. This talk will focus on the singular notion of Privilege and how it cross-cuts the distributed systems we are building.
This talk will focus on the Valet Key problem and how to avoid it. We will visit various standards and technologies that help us strengthen our security profiles by reducing our dependence on open-ended and unfettered access to our systems and data.
Data integration costs are well beyond what they should be for such a crucial business function. The good news is that they needn't be. By relying on integration-friendly standards and technologies that were designed to support sharing information, we can reduce these costs while increasing our business capabilities.
Technologies don't magically become solutions. They are used within domain, design and deployment contexts. This talk will focus on the singular notion of Integration and how it cross-cuts the distributed systems we are building.
We will look at how the REST Architectural style leads us to integration-friendly standards such as RDF, Linked Data, SPARQL and JSON-LD. These technologies are useful both within our firewalls and with third party partners.
Our biological world changes gracefully. Our information world changes much less so. How can we embrace the inevitable technological, procedural and schematic flux that we know is going to visit upon us at some point?
Technologies don't magically become solutions. They are used within domain, design and deployment contexts. This talk will focus on the singular notion of Evolution and how it cross-cuts the distributed systems we are building.
We will focus on strategies from the Web standards space to define information systems that embrace change and handle it with relative ease.
This will include strategies for dealing with changing technologies, changing schemas and more.
Information conveys value as it travels around our systems, resting for a time in our data stores. The value we get out of it is sometimes matched by the value others would get from it as well. We need mechanisms to protect sensitive information from prying eyes and control with whom we share it.
Technologies don't magically become solutions. They are used within domain, design and deployment contexts. This talk will focus on the singular notion of Secrecy and how it cross-cuts the distributed systems we are building.
This talk will focus on strategies from the world of encryption to keep secrets secret as we produce, store and transfer information in distributed systems. A successful strategy for doing so will rely on notions of Identity and a strong Privilege model, but we will mostly focus on specific building blocks upon which we maintain Privacy and Confidentiality.
We will also address the forces that undermine our ability to trust encryption such as bugs, design flaws and those who wish to actively undermine our need to maintain Secrecy.
In this session, we'll explore Spring Cloud, the extension to Spring which addresses many of the common challenges of developing cloud native applications. We'll focus primarily on Spring Cloud's support for centralized configuration, service discovery, and failover/monitoring.
You wouldn't write your entire application in a single main() method or servlet. Nor would you develop an entire production-ready application in a single class. It's even unlikely that you'd cram everything into a single package.
Modularity helps us gain order in our code, breaking it into easily digestible, refactorable, pluggable, and testable chunks. Classes and methods are a form of modularity that we're all familiar with. But once the code is built, modularity goes away and we're left deploying a single WAR file.
Aside from being buzzword-compliant, microservices are a means of defining entire systems from composable but distinct deployment units, gaining all of the benefits of finer-grained modularity.
Microservices present new challenges to developers, however. How do you configure your microservices? How are microservices discovered? And how can you avoid a cascading failure when one microservice becomes sluggish, unresponsive, or otherwise unhealthy?
In this session, we look at how to develop clients that consume microservices in the cloud. We'll look at how to solve the challenges of cross-origin requests (without employing CORS), security, and loose coupling with regard to service addresses. This session will build upon what was learned in “Cloud Native Spring”, adding the notion of a service gateway to the stack.
Once you've developed the microservices that back your cloud native application, you'll likely need to put a user interface up front. Suddenly a new array of challenges presents itself.
In this session, we'll look at Spring Cloud Data Flow, a cloud native programming and operating model for composable data microservices on a structured platform.
Microservices are commonly thought of as small REST-based services that are assembled to form a larger, more complete application. In reality, however, REST is only the communication mechanism, which is merely an implementation detail and not intrinsic to the notion of microservices.
Meanwhile, data processing and integration between various components of an application and external services is a key factor of many applications. In cloud native applications, this kind of data flow and processing is still relevant. Spring Cloud Data Flow offers a solution for data processing and integration where each step in the flow is, in fact, a microservice…but not necessarily a REST service.
In this session, we'll open the hood on Spring Boot and see how it works. Using this knowledge, we'll look at ways to optimize Spring Boot, override autoconfiguration, and create custom extensions to Spring Boot's Actuator.
Spring Boot does many wonderful things that get you well on your way to developing amazing Spring applications. But how does it tick? How can you customize it? And how can you override its default autoconfiguration when you want something a little different?
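As one example of extending the Actuator, here is a sketch of a custom HealthIndicator registered as a bean; the warehouse check and names are hypothetical.

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InventoryHealthConfig {

    // Registering a HealthIndicator bean is enough for Boot to fold it into
    // the Actuator /health endpoint alongside the auto-configured checks.
    @Bean
    public HealthIndicator inventoryHealthIndicator() {
        return () -> pingWarehouse()
                ? Health.up().withDetail("warehouse", "reachable").build()
                : Health.down().withDetail("warehouse", "unreachable").build();
    }

    private boolean pingWarehouse() {
        return true;  // stand-in for a real connectivity test
    }
}
```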
Security is an important aspect of any application. For many years, Spring Security has been the go-to framework for securing Spring-based applications. But historically Spring Security has been cumbersome to work with, involving an enormous amount of XML configuration to shape an application's security scheme.
In recent versions of Spring Security, however, XML-based configuration has taken a backseat to a powerful Java-based configuration option. Spring Security's Java-based configuration offers a fluent API for defining the security constraints of an application, which is easy to read and eliminates the need for clunky XML configuration. On top of Spring Security's own configuration improvements, Spring Boot autoconfiguration makes it incredibly easy to get started securing your application, minimizing even the amount of Java configuration required.
In this session, we'll take a look at what's involved in securing a Spring application with Spring Security. In doing so, we'll take full advantage of Spring Boot to autoconfigure as much security as we can get away with and then rely solely on Spring Security's Java-based configuration to shape the security aspect of an application. We'll also briefly look at how to use Spring Security when securing microservices.
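A minimal sketch of the Java-based configuration style discussed here, written against the Spring Security 4-era API; the URL patterns and in-memory users are made up.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // The fluent API replaces the old XML: open the home page, lock down /admin,
        // and require login for everything else.
        http
            .authorizeRequests()
                .antMatchers("/", "/public/**").permitAll()
                .antMatchers("/admin/**").hasRole("ADMIN")
                .anyRequest().authenticated()
                .and()
            .formLogin();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        // In-memory users keep the sketch self-contained; a real app would use
        // JDBC, LDAP, or a custom UserDetailsService.
        auth.inMemoryAuthentication()
            .withUser("user").password("password").roles("USER")
            .and()
            .withUser("admin").password("password").roles("USER", "ADMIN");
    }
}
```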
You've heard the old adage “It's not what you know, it's who you know.” The focus of this session is divided between ways to better connect with everyone you meet and ways to grow your network, help and influence people, and ultimately build long-term relationships and your reputation.
Networking isn't about selling, nor is it about “taking.” Done properly, it benefits everyone. Among the benefits are strengthening relationships; getting new perspectives and ideas; building a reputation of being knowledgeable, reliable, and supportive; having access to opportunities; and more!
Slides available online: https://prezi.com/ck1fdbhgqwiq/?token=8f8240f753ad9ae2c50ce696657020f40a877a40fa224790652eb412ac5eb8d3
Take control of your knowledge portfolio and be in demand! Your command of the top JVM languages - Java 8, Groovy, Scala, JRuby, and Clojure - will set you apart from the rest. This presentation will introduce each of these languages, highlight common ground, and show some stark differences.
This presentation will cover:
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
Hypothesis and data driven development ties together current thinking about requirements, Continuous Delivery, DevOps, modern architecture, and engineering techniques to help rethink building software.
Agile development claims to abhor “Big Design Up Front”…yet what is that giant backlog building session but BDUF in other clothing? Back in the olden days of software development, we were forced to speculate on what users want, then build it. We were basically running a buffet. But what if we could switch to à la carte? With modern engineering practices like Continuous Delivery, we can shift our perspective and start building by hypothesis rather than speculation. This talk shows the full spectrum of software development, from ideation through execution and deployment, through the lens of modern software engineering practices. I discuss building a platform using feature toggles, canary releases, A/B testing, and other modern DevOps tools to allow you to run experiments to see what your users really want. By building a platform for experimentation, product development shifts from up-front guessing to market driven. This talk unifies the practices of modern architecture, DevOps, and Continuous Delivery to provide a new approach to feature development. This talk also demonstrates how to undertake major architectural restructuring with zero regression failures by relying on data and the scientific method.
Concourse (http://concourse.ci/) is a CI system composed of simple tools and ideas. Concourse can express entire pipelines, integrating with arbitrary resources, or it can be used to execute one-off tasks, either locally or in another CI system. Concourse attempts to reduce the risk of adoption by encouraging practices that keep your project loosely coupled to the details of your continuous integration infrastructure.
Concourse optimizes around the following principles:
During this session we'll learn the simple key concepts from which Concourse pipelines are constructed. We'll understand how to deploy a local Concourse cluster using Vagrant as well as a scalable Concourse cluster to your cloud of choice using Cloud Foundry BOSH. Finally, we'll look at basic and advanced examples of pipelines for Java projects.
As we build distributed systems composed of microservices, we introduce new potential performance problems and failure points. As the number of nodes in our system increases, these problems rapidly amplify. In order to keep our composite systems responsive, we can apply the techniques of reactive programming. In order to keep our composite systems healthy, we can apply fault tolerance patterns like circuit breakers and bulkheads.
In this presentation we’ll examine how to leverage two popular libraries from Netflix, Hystrix and RxJava, to create reactive and fault tolerant systems.
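To ground the Hystrix half of this, here is a minimal sketch wrapping a downstream call in a command with a fallback; the service call and names are invented.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class RecommendationCommand extends HystrixCommand<String> {

    private final String userId;

    public RecommendationCommand(String userId) {
        super(HystrixCommandGroupKey.Factory.asKey("RecommendationService"));
        this.userId = userId;
    }

    @Override
    protected String run() {
        // Hystrix wraps this call in a timeout, a thread-pool bulkhead, and a circuit breaker.
        return callRecommendationService(userId);  // hypothetical remote call
    }

    @Override
    protected String getFallback() {
        // Served when the call fails, times out, or the circuit is open.
        return "default recommendations";
    }

    private String callRecommendationService(String userId) {
        return "recommendations for " + userId;    // stand-in for an HTTP call
    }

    public static void main(String[] args) {
        // execute() blocks; observe()/toObservable() return an RxJava Observable
        // for composing the call reactively.
        System.out.println(new RecommendationCommand("42").execute());
    }
}
```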
Much is said about the decentralized governance of, and local autonomy given to, the “two pizza teams” that build microservices. But how do you organize teams to effectively collaborate to build the eventual composite system?
In this presentation we’ll examine how to apply the Tracer Bullet Development methodology described in Ship It! to effectively construct distributed systems composed of microservices.
This session compares Service-oriented, Service-based, and Micro-service architectures, describing the problem each is designed to solve, differences and similarities, variants and hybrids, and engineering practices.
Microservice architectures are quite popular, often described as “SOA done correctly.” But what are the real differences between SOA, microservice, and service-based architectures? What about the middle ground between the shared-everything of SOA and the shared-nothing of microservices? This talk explores the similarities and differences between various service-oriented architectural styles. I describe the characteristics of SOA, microservices, and hybrid service-based architectures, along with the considerations and constraints for each. I also discuss specific engineering practices, orchestration styles, reuse strategies, and migrating from monolithic applications to service-based or microservice architectures. No one architecture can solve every problem, and many projects take on more complexity than necessary by choosing the wrong paradigm.
You went ahead and built a whole new set of shiny microservices. While doing this, you realized you can no longer rely on your application server to handle all the authentication. Oh, and of course one of your teams used Node.js. How are you going to secure all these endpoints so that the end user doesn't have to authenticate against each one?
This talk will be a demonstration of using a centralized authentication service to secure many different microservice architectures. The demos will use Project Keycloak but would apply just as well to Stormpath, Ping Identity, or similar services.
You don't need Node.js or MongoDB to build “full-stack” solutions, but they sure help! This stack is popular for its scalability, its promise of developer productivity, and the capability to develop all components with a single programming language. Not all use cases are a great fit for JavaScript on the server. But love it or hate it, there are valuable lessons and use cases here for all developers.
We'll examine a complete multiuser end-to-end app using HTML5, CSS, and JavaScript. We'll connect it to a simple Node.js instance using WebSocket. We'll wire up a simple document-oriented persistence layer with MongoDB. And we'll do it all using mostly-vanilla JavaScript to illustrate concepts that don't depend on particular frameworks.
You'll leave this session convinced that full-stack JavaScript has “teeth”, and that it's not all just hype. And whether you intend to use JavaScript, Java, Ruby, or a mix of various frameworks on the server, the architecture of a dynamic HTML5 app will be made transparent and straightforward.
You don't need massive frameworks to build mobile apps responsive to touch events, that contain fluid animations, or that are easily deployed to app stores. All you really need is a solid grasp of the JavaScript, CSS3, and HTML5 features and APIs that enable a compelling experience.
In this session, I will show some examples of mobile apps built with HTML5 that offer instantaneous handling of touch events such as pan gestures. I'll demonstrate best practices using CSS3 transitions to implement card and panel design patterns typical of mobile user interfaces. And I'll show just how easy it is to extend the device features available to HTML5 using Cordova, packaging a mobile app for app store deployment.
HTML5 hasn't fundamentally changed the way we build web applications — JavaScript frameworks did that. Not so with Web Components! Web Components are the most important update to HTML and the Document Object Model in recent years. They have a major impact on client-side architecture, on framework selection, and on distribution and reuse of code.
In this session, I'll explain to you the four Web Components standards, their current state, and why you should care. I'll give several examples of complex applications built in Web Components, and running in all modern browsers — including a rich mobile game.
Ever wonder when Java will be out of classpath hell? Java 9 serves application developers and library developers by enabling a scalable platform, greater platform integrity, and improved performance. In this talk, we will explore Project Jigsaw, HTTP 2.0, the Lightweight JSON API, and many other features.
This talk is designed to catapult your productivity, enhance your emotional intelligence, and refine your problem-solving skills. This talk is not just a series of presentations; it's a transformative experience tailored for the ambitious software developer and architect seeking to leave a mark in the fast-paced world of technology.
Dive into the essence of developer and architect productivity, where we unravel the secrets to optimizing your workflow and leveraging your skills for maximum impact. Discover the “24 Hours Instant Happiness” principle, a proven strategy to inject a dose of joy into your daily routine, fostering a positive work environment and personal life.
“Maximizing Your Impact” takes you deeper into the realm of influence, equipping you with the tools to excel in your projects and inspire those around you. Through “Effective Communication” and the intriguing “Mirror Technique,” learn how to build rapport, foster collaboration, and lead with empathy, amplifying your charisma in all professional interactions.
As we delve into the core of success, “Emotional Intelligence is 85% of Success” highlights the paramount importance of self-awareness, self-regulation, motivation, empathy, and social skills in achieving your goals. The “6 Phase Meditation Approach” and “Day Launcher” sessions are designed to refine your focus, creativity, and emotional stability, setting a solid foundation for a productive day ahead.
The inclusion of “Empathy Maps” and “IDEO Case Studies” offers a practical lens through which to view user-centric design and innovation. At the same time, the “SCAMPER Technique” provides a creative framework for problem-solving, ensuring you're equipped to tackle challenges with agility and inventiveness.
Elevate your productivity to new heights with “5 Choices for Super Productivity,” a comprehensive guide to prioritizing effectively, embracing extraordinary outcomes, and mastering your technology. Learn the art of “Managing Energy, Not Time,” a paradigm shift that promises to enhance your efficiency and job satisfaction.
As the talk culminates, “The Paradox of Choice” and the latest “Technology Trends to Focus On” prepare you to navigate the complexities of the modern tech landscape with confidence and curiosity.
This masterclass is more than just a talk; it's an invitation to transform how you work, lead, and innovate. Join us to unlock your full potential and reshape your future in software development and architecture. Whether you're looking to boost your productivity, enhance your emotional intelligence, or simply find more joy in your work, this talk is your gateway to a more fulfilling career and life.
Developers and Architects are designers, problem solvers, and innovative, creative artists. Software design is an art that requires both left and right brains to be active so you can understand what customers need. Next, we will explore habits and tools to plan, learn, research, organize, teach, develop, mentor, and architect.
Agenda
Enhancing Productivity and Personal Growth
– Developer and Architect Productivity
Strategies for improving daily workflow and efficiency in software development and architecture.
– 24 Hours Instant Happiness
Quick wins for boosting morale and happiness within the team and personal life.
– Maximizing Your Impact
Techniques to increase your influence and contributions in projects and teams.
– Effective Communication
Importance of clear communication and the Mirror Technique to improve understanding and rapport.
– Increasing Charisma
Tips for becoming more charismatic and influential in professional settings.
Building Emotional Intelligence and Mindfulness
– Emotional Intelligence is 85% of Success
Discussing the critical role of emotional intelligence in achieving professional success.
– 6-Phase Meditation Approach
Introducing a meditation technique to enhance focus, creativity, and emotional stability.
– Day Launcher
A strategy to start your day with intention and focus, setting the tone for productivity and success.
– Empathy Map
Utilizing empathy maps to better understand user needs and enhance team collaboration.
– IDEO Case Studies
Examining case studies from IDEO to illustrate successful applications of empathy in design.
– Understanding a Problem with SCAMPER Technique
Exploring the SCAMPER technique to creatively solve problems and innovate solutions.
Strategies for Super Productivity
– 5 Choices for Super Productivity
Detailed strategies for enhancing productivity by prioritizing important tasks, aiming for extraordinary outcomes, scheduling priorities (“big rocks”), mastering technology use, and maintaining energy levels.
– Managing Energy, Not Time
Shifting focus from time management to energy management to maximize productivity and well-being.
– Increasing Frequency to Do What You Want
Techniques to align daily actions with personal and professional goals more effectively.
– The Paradox of Choice
Understanding how reducing options can lead to increased satisfaction and productivity.
– Technology Trends to Focus On
Highlighting current technology trends that developers and architects should be aware of to stay ahead in their field.
Unlike earlier languages, Java had a well-defined threading and memory model from the beginning. And over the years, Java gained new packages to help solve concurrency problems.
Despite this, Java concurrency is sometimes subtle and fraught with peril.
In this talk, you'll learn these subtleties. And finally, you'll learn how to handle concurrency by exploring the concepts behind java.util.concurrent and other concurrency libraries.
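A tiny example of the kind of subtlety involved: an unsynchronized counter loses updates under contention, while java.util.concurrent's AtomicInteger does not. The numbers are arbitrary.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterRace {
    static int unsafeCount = 0;                          // racy: ++ is read-modify-write
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 10_000; i++) {
            pool.execute(() -> {
                unsafeCount++;                   // lost updates under contention
                safeCount.incrementAndGet();     // atomic, always correct
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // The unsafe total is often (though not always) less than 10,000.
        System.out.println("unsafe = " + unsafeCount + ", safe = " + safeCount.get());
    }
}
```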
Today, we all benefit from the sophistication of modern compilers and hardware, but that extra complexity can also make it difficult to reason about performance.
In this talk, we'll examine some surprising performance cases and learn how to
use profiling and benchmarking tools to better understand our modern execution environments.
Early releases of Java performed poorly, but those issues largely disappeared long ago with the introduction of HotSpot. However, much of the performance advice for Java persists through hearsay from those early days.
In this talk, we'll forget the hearsay and take an objective look using benchmarking and profiling tools to find out which optimizations matter today and just as importantly those that don't.
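As an illustration of measuring rather than guessing, here is a minimal JMH benchmark sketch comparing two string-concatenation idioms; it assumes the JMH annotations and harness are on the classpath and is not drawn from the session's own examples.

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Run through the JMH harness (e.g. via the jmh Maven archetype) so warmup,
// forking, and dead-code elimination are handled for you.
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ConcatBenchmark {

    private final String first = "Hello, ";
    private final String second = "world";

    @Benchmark
    public String plusOperator() {
        return first + second;                 // returning the result defeats dead-code elimination
    }

    @Benchmark
    public String stringBuilder() {
        return new StringBuilder().append(first).append(second).toString();
    }
}
```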
HotSpot provides a variety of garbage collectors with a variety of strengths and weaknesses. To get the most out of our applications, we need to pick the right garbage collector and design to take advantage of its strengths and avoid its
weaknesses.
In this presentation, you'll learn about criteria for picking a garbage collector, how to measure GC performance, and how to write code that works with rather than against the GC.
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. First in this session, I cover operations, DevOps, and programmatic control of infrastructure. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with the authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. First in this session, I cover operations, DevOps, and programmatic control of infrastructure using tools like Puppet and Chef. I also discuss the explosion of tool alternatives in this space and cover some current-day best practices. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process. This includes database migrations, strategies for enhancing collaboration between application development and data teams, and database refactoring techniques.
Regular Expressions are an undervalued, underutilized tool in the developer toolbox. Few programming technologies have stood a comparable test of time for their capacity to improve developer productivity, to shortcut complex tasks, to reduce dependency on various libraries, and to encourage code reuse. They also help to teach patterns and improve pattern recognition, not only for code, but for programmers themselves. Competency with regexes will make you a better programmer, regardless of your choice of language or platforms. And it will impress your peers, too!
This workshop will teach you the fundamentals of writing, debugging, and testing PCREs (Perl-compatible Regular Expressions) in multiple programming languages. With hands-on examples we will cover regex syntax, metacharacters, assertions, grouping, quantifiers, greediness, capturing, balanced matches, and replacing. We'll compose regexes from scratch to parse some common string formats such as URLs, email addresses, and even JSON. Given enough time, we'll even learn look-around assertions and examine some creative uses of regexes in the field of natural language processing.
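To give a flavour of the exercises (the pattern and class below are illustrative, not the workshop's own), here is a Java sketch that uses PCRE-style named groups to pull apart a deliberately simplified URL:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class UrlRegexDemo {
        // Simplified URL pattern with named groups (Java has supported the
        // (?<name>...) syntax since Java 7); not a full RFC 3986 parser.
        private static final Pattern URL = Pattern.compile(
                "^(?<scheme>https?)://(?<host>[^/:\\s]+)(?::(?<port>\\d+))?(?<path>/\\S*)?$");

        public static void main(String[] args) {
            Matcher m = URL.matcher("https://example.com:8080/docs/index.html");
            if (m.matches()) {
                System.out.println("scheme = " + m.group("scheme"));
                System.out.println("host   = " + m.group("host"));
                System.out.println("port   = " + m.group("port"));
                System.out.println("path   = " + m.group("path"));
            }
        }
    }

The named-group syntax shown here is understood by several other engines as well, though support varies by language and version.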
You don't have to wait until all browsers support the next-generation JavaScript language: with transpilation you can start using it today, future-proof your code, and make it more elegant NOW! See the details for the topics covered.
In this rapid fire, live-coding session, we'll look at the features in ES2015:
arrows
classes
enhanced object literals
template strings
destructuring
default + rest + spread
let + const
iterators + for..of
generators
unicode
modules
module loaders
map + set + weakmap + weakset
proxies
symbols
subclassable built-ins
promises
math + number + string + array + object APIs
binary and octal literals
reflect api
tail calls
This session covers the basics of setting up a Web & JavaScript project for Continuous Integration. The goal is to apply the same engineering practices as for projects coded in Java. Topics covered:
With Java 9, modularity will be built into the Java platform…finally! In this session, we explore the default Jigsaw module system and compare it to OSGi, the alternative module system on the Java platform.
We will demonstrate the impact that Jigsaw will have on our existing applications and identify what we must do to get ready for Jigsaw. You will also see firsthand how to use the Jigsaw module system and the benefits that support for modularity on the Java platform will have on your applications.
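For readers who haven't seen it yet, a Jigsaw module is declared in a module-info.java file; the module and package names below are hypothetical:

    // module-info.java for a hypothetical application module
    module com.example.orders {
        // Explicit dependencies replace the flat classpath.
        requires java.sql;

        // Only this package is visible to other modules; internals stay hidden.
        exports com.example.orders.api;
    }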
“There's a new JS framework every week! There's a new JavaScript feature every week! There's a new HTML5 feature every week! We are losing our minds OMG@#$HELPUS!”
Settle down everybody. Shiny new frameworks distract you from the stability offered by the web platform: ES6 is the first major update to JavaScript since 2009, and HTML5 was 18 years in the making! More importantly, few of these innovations significantly change the architecture of web applications; we owe browser innovation and frameworks for that. But since the browser has evolved into a full-blown application runtime, we now need solid front-end architecture, and front-end architects. It's not just about JavaScript, it's about the entire browser platform. And you can't pick frameworks to simplify that platform until you understand its underpinnings.
In this workshop, we will dissect the components of a modern web client into three buckets:
I'll lay the foundations to simplify the complex world of front-end tools, frameworks, and architecture. I'll share patterns to help you manage the complexity of front-end development and back-end integration for modern web clients. And I'll convince you to never again complain about how fast the world of front-end technologies is moving.
In this fun 2 part workshop, we'll do a series of exercises to convert “old school” JavaScript code to ES2015, or ECMAScript 2015 code. ES2015 just got finalized, but you don't have to wait to use it. Using transpilation as part of your build process, you can quickly start using it TODAY. See the details below to get a flavour for what we'll be hacking in this fast-paced workshop.
We'll do exercises to convert legacy JavaScript code to ES2015 to help you understand how to migrate existing code and write new code in the new version of the language:
arrows
classes
enhanced object literals
template strings
destructuring
default + rest + spread
let + const
iterators + for..of
generators
unicode
modules
module loaders
map + set + weakmap + weakset
proxies
symbols
subclassable built-ins
promises
math + number + string + array + object APIs
binary and octal literals
reflect api
tail calls
Too often, developers drill into the sea of data related to a software system manually, armed with only rudimentary techniques and tool support. This approach does not scale for understanding larger pieces and it should not continue.
Software is not text. Software is data. Once you see it like that, you will want tools to deal with it.
Developers are data scientists. Or at least, they should be.
Typically, 50% of development time is spent understanding the system in order to decide what to do next. In other words, software engineering is primarily a decision-making business. Add to that the fact that systems often contain millions of lines of code and even more data, and you get an environment in which decisions have to be made quickly about lots of ever-moving data.
Yet, too often, developers drill into the sea of data manually with only rudimentary tool support. Yes, rudimentary. The syntax highlighting and basic code navigation are nice, but they only count when looking into fine details. This approach does not scale for understanding larger pieces and it should not continue.
This might sound as if it is not for everyone, but consider this: when a developer sets out to figure out something in a database with a million rows, she will write a query first; yet, when the same developer sets out to figure out something in a system with a million lines of code, she will start reading. Why are these similar problems approached so differently: once with tools and once through manual inspection? And if reading is such a great tool, why do we even consider queries at all? The root problem does not come from the basic skills. They exist already. The main problem is the perception of what software engineering is, and of what engineering tools should be made of.
In this talk, we show live examples of how software engineering decisions can be made quickly and accurately by building custom analysis tools that enable browsing, visualizing or measuring code and data. Once this door is open you will notice how software development changes. Dramatically.
Our technical world is governed by facts. In this world Excel files and technical diagrams are everywhere, and too often this way of looking at the world makes us forget that the goal of our job is to produce value, not to fulfill specifications.
Feedback is the central source of agile value. The most effective way to obtain feedback from stakeholders is a demo. Good demos engage. They materialize your ideas and put energies in motion. They spark the imagination and uncover hidden assumptions. They make feedback flow.
But, if a demo is the means to value, shouldn’t preparing the demo be a significant concern? Should it not be part of the definition of done?
That is not even all. A good demo tells a story about the system. This means that you have to make the system tell that story. Not a user story full of facts. A story that makes users want to use the system. That tiny concern can change the way you build your system. Many things go well when demos come out right.
Demoing is a skill, and like any skill, it can be trained. Regardless of the subject, there always is an exciting demo lurking underneath. It just takes you to find it. And to do it.
In this session we will get to exercise that skill.
Software has no shape. Just because we happen to type text when coding, it does not mean that text is the most natural way to represent software.
We are visual beings. As such we can benefit greatly from visual representations. We should embrace that possibility, especially given that software systems are likely the most complicated creations humankind has ever produced. Unfortunately, the current software engineering culture does not promote the use of such visualizations. And no, UML does not really count when we talk about software visualizations. As a joke goes, a picture tells a thousand words, and UML took it literally. There is a whole world of other possibilities out there and as architects we need to be aware of them.
In this talk, we provide a condensed, example-driven overview of various software visualizations starting from the very basics of what visualization is.
Visualization 101:
How to visualize
What to visualize
Interactive software visualizations
Visualization as data transformation
We as programmers spend a lot of time writing and manipulating text. Furthermore, as developers, not only do we appreciate the simplicity and power of plain text, we appreciate tools that manage and/or manipulate plain text, such as Git or Asciidoc. It is therefore prudent that as developers we find and master a text editor that lets us do what we need to do, while growing to accommodate all of our text editing needs. In this session we will take a look at Sublime Text - a powerful, fast, and flexible text editor that has adoring fans the world over.
We will see what makes Sublime highly suited to the kinds of work that we wish to do, as well as some plugins and customizations that can make your life easier as a developer using Sublime Text.
Want to get your kids interested in programming? Maybe you're a kid at heart too? Come to this session and learn how to use ScriptCraft for Minecraft modding!
In this session, I'll discuss Minecraft modding with ScriptCraft. We'll start with an overview of Minecraft and ScriptCraft. We'll see how we install the necessary software to get our modding environment ready. Then we'll interact with Minecraft by writing code commands directly from within Minecraft! Finally, we'll build some cool mods and invoke them from Minecraft. If you're interested in getting your kids into Minecraft, come to this session to get a primer on everything you need to get them interested and hacking their own code.
Building third-party integrations requires a deep understanding of the business problems. You would not want to reinvent the wheel to solve common issues. In this talk we will explore different case studies with patterns that help solve real-world issues. We will start by exploring SOA patterns like Service Host, Active Service, Transactional Service, Workflodize, and Edge Component. After that we will look at patterns related to performance, scalability and availability like Decoupled Invocation, Parallel Pipelines, Gridable Service, Service Instance, Virtual Endpoint, and Service Watchdog.
Next we will dive into security and manageability patterns like Secured Message, Secured Infrastructure, Service Firewall, Identity Provider, and Service Monitor. Message exchange patterns include Request/Reply, Request/Reaction, Inversion of Communications, and Saga. Among service consumer patterns we will explore Reservation, Composite Front End, and Client/Server/Service. Among service integration patterns we will explore Service Bus, Orchestration, and Aggregated Reporting. Lastly we will examine service anti-patterns like Knot, Nanoservice, and Transactional Integration.
In this talk, we will explore different cloud computing architecture design pattern blueprints and how you can take advantage of them. We will explore cloud patterns for private and hybrid cloud deployments, cloud services, common cloud management platforms, security, cloud governance, resiliency, performance, and consumability.
A few of the patterns we will explore are as follows:
Sharing, Scaling and Elasticity Patterns
Reliability, Resiliency and Recovery Patterns
Data Management and Storage Device Patterns
Cloud Service and Storage Security Patterns
Network Security, Identity & Access Management and Trust Assurance Patterns
This half-day workshop will cover the fundamentals of Graph Databases with hands-on exercises using Neo4J.
Environment Setup
Please also download and install Neo4j 3.0.3+ - https://neo4j.com/download/other-releases/
Community Edition tar or zip (depending on O/S)
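If you want to poke at your install from Java ahead of time, here is a rough sketch (our own, not part of the workshop setup) that assumes the 1.x Java Bolt driver; the credentials are whatever you configured after installation:

    import org.neo4j.driver.v1.AuthTokens;
    import org.neo4j.driver.v1.Driver;
    import org.neo4j.driver.v1.GraphDatabase;
    import org.neo4j.driver.v1.Session;
    import org.neo4j.driver.v1.StatementResult;

    public class GraphDemo {
        public static void main(String[] args) {
            // Connect to a local Neo4j 3.x server over Bolt.
            try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                    AuthTokens.basic("neo4j", "password"));
                 Session session = driver.session()) {

                // Create two nodes and a relationship, then read them back with Cypher.
                session.run("CREATE (a:Person {name:'Ada'})-[:KNOWS]->(b:Person {name:'Grace'})");
                StatementResult result =
                        session.run("MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name");
                while (result.hasNext()) {
                    System.out.println(result.next().asMap());
                }
            }
        }
    }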
Learn how to use the Spock testing framework in both Java and Groovy applications. This half-day workshop will demonstrate testing with both Spock and JUnit together for Java projects, Groovy projects, and projects that combine both technologies.
Topics will include writing specifications, using Spock mocks, and adding Spock enhancements.
Technology changes; it's a fact of life. And while many developers are attracted to the challenge of change, many organizations do a particularly poor job of adapting. We've all worked on projects with, ahem, less than new technologies even though newer approaches would better serve the business. But how do we convince those holding the purse strings to pony up the cash when things are “working” today? At a personal level, how do we keep up with the change in our industry?
This talk will explore ways to stay sharp as a software professional. We'll talk about how a technology radar can help you stay marketable (and enjoy your career) and how we can use the same technique to help our companies keep abreast of important changes in the technology landscape. Of course, it isn't enough to just be aware; we have to drive change - but how? This talk will consider ways we can influence others and lead change in our organizations.
New architectural paradigms are emerging that challenge traditional assumptions about the way that scalable and adaptable software is built. At the heart of these paradigms is a modular approach that breaks apart the monolithic application. But breaking apart the monolith has implications beyond software architecture and never before has architecture, infrastructure, and methodology been linked in a way that demands a new approach to software development.
In this workshop, we will explore modularity’s role in a large-scale, technology-agnostic software architecture. We’ll compare and contrast different implementation technologies, including Dropwizard and OSGi, for building modular architectures. And we’ll discover the impact that modern architecture has on infrastructure and methodology. Throughout the discussion, we will examine how modern web and mobile apps fit into this overall architectural story. This session is a workshop and hands-on labs are available, so bring your laptop if you’d like to perform the exercises.
One of the leading application security vulnerabilities, cross-site scripting (XSS), has been consistently found in many corporate applications, regardless of traditional defense techniques such as input validation and output encoding. Knowing the number of such vulnerabilities in the organization’s applications is only half the issue. To understand the real risk, it is important to know how many of these vulnerable applications actually get attacked on a day-to-day basis, and which specific instances of vulnerabilities are being exploited. Such information answers questions like: is a certain framework being exploited most of the time because it has not been patched? Or is it an issue in the custom code that has not gone through the security code review process? Content Security Policy (CSP) is a new HTML5 technology that allows organizations not only to protect their applications from cross-site scripting and ensure that site content, such as audio, video, images, and fonts, is loaded only from approved locations, but also to get reports on every violation of the policy, such as cross-site scripting attempts.
This talk will discuss how to best implement Content Security Policy on the organization’s web sites and how to obtain data on the policy violations and attacks. We will first cover the basics of Content Security Policy, how the policy is configured, the possible security issues CSP may have, how it can be applied to an existing application or an application written from scratch. Then we will discuss the reporting mechanism, types of data returned in the violation reports, methods of aggregation, and browser support. At the end, existing report aggregation and analysis tools will be described, together with examples of existing CSP policies implemented by major social media companies.
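As a rough sketch of what enabling CSP with reporting can look like in a Java web application (the filter class, policy, and report endpoint below are hypothetical, and real policies are usually tuned much further):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Adds a restrictive CSP header with violation reporting to every response.
    public class CspFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            // Allow scripts, styles, and images only from our own origin,
            // and have browsers POST violation reports to /csp-report.
            response.setHeader("Content-Security-Policy",
                    "default-src 'self'; img-src 'self'; report-uri /csp-report");
            chain.doFilter(req, res);
        }

        @Override public void init(FilterConfig cfg) { }
        @Override public void destroy() { }
    }

The /csp-report endpoint is where the aggregation and analysis discussed in the talk would begin.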
When talking about finding security defects we first think of security testing and static analysis of the code. Although penetration testing and secure code review can uncover many types of security issues in an application, there are gaps that simply cannot be found with these traditional analysis techniques. The interactions between the different systems are beyond the code review level, and the complex interconnections are often not reachable from the penetration tester’s point of view. Discovering weaknesses in the design of a system is the specific goal of threat modeling. Organizations benefit from this software design analysis because they can perform it without code to discover potential vulnerabilities early in the development cycle.
This talk will describe one of the popular threat modeling methodologies and follow its process of identifying the assets, security controls, and threat agents for a given system, and then creating a prioritized list of attacks. Security analysts together with system architects can then propose appropriate mitigations to be implemented by the team.
Whether your goals are higher concurrency, lower latency, or high availability, there are proven techniques and strategies you can implement. Each requires careful consideration and comes with its own challenges.
In this session we'll examine several architectures for running MySQL at scale and will be building each of the architectures live and hands-on.
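One of the simplest of those architectures is a primary with read replicas. A hedged JDBC sketch (hostnames, schema, and credentials here are made up) shows the essential idea of sending writes to the primary and reads to a replica:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReadWriteSplitDemo {
        public static void main(String[] args) throws Exception {
            try (Connection primary = DriverManager.getConnection(
                         "jdbc:mysql://primary.db.example:3306/app", "app", "secret");
                 Connection replica = DriverManager.getConnection(
                         "jdbc:mysql://replica1.db.example:3306/app", "app", "secret")) {

                // Writes always go to the primary.
                try (Statement write = primary.createStatement()) {
                    write.executeUpdate("INSERT INTO events(name) VALUES ('signup')");
                }
                // Reads can be spread across one or more replicas.
                try (Statement read = replica.createStatement();
                     ResultSet rs = read.executeQuery("SELECT COUNT(*) FROM events")) {
                    rs.next();
                    System.out.println("events so far: " + rs.getLong(1));
                }
            }
        }
    }

In practice this routing is usually handled by a driver, proxy, or connection pool rather than hand-written, and replication lag has to be accounted for; those trade-offs are part of what the session examines.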
As an industry we are collecting more and more data. At some point we have to be able to make sense of the data. Unfortunately, many of the tools we have historically used cannot scale up to the terabytes and petabytes we have captured. Hadoop is one of those relatively new technologies that is taking the industry by storm since it has proven to scale by taking advantage of the MapReduce pattern and distributed computing.
During this hands-on tutorial you will provision a Hadoop cluster, write MapReduce jobs and learn how to store and access data via Hadoop Distributed File System (HDFS). You will also learn how cloud providers such as Amazon Web Services’ Elastic MapReduce (EMR) and Microsoft’s Azure HDInsight provide Hadoop as a service.
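To set expectations for the MapReduce portion, a job's mapper is just a small Java class; the classic word-count mapper (shown here as an illustrative sketch, not the tutorial's exact code) looks like this:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits (word, 1) for every token in a line of input; a reducer then sums the counts.
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }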
Architecture doesn't exist in a vacuum, a painful lesson developers who built logically sound but operationally cumbersome architectures learned. Continuous Delivery is a process for automating the production readiness of your application every time a change occurs–to code, infrastructure, or configuration. Some architectures and practices yield code that works better in this environment. This session takes a deep dive into the intersection of the architect role and the engineering practices in Continuous Delivery.
Yesterday's best practice is tomorrow's anti-pattern. Architecture doesn't exist in a vacuum, a painful lesson developers who built logically sound but operationally cumbersome architectures learned. Continuous Delivery is a process for automating the production readiness of your application every time a change occurs–to code, infrastructure, or configuration. Some architectures and practices yield code that works better in this environment. This session takes a deep dive into the intersection of the architect role and the engineering practices in Continuous Delivery. I first set context for the information you must master before delving into the nuances of modern architectural concerns. I discuss the role of metrics to understand code, how Domain Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, the tension between coupling and cohesion, microservices architectures, and other engineering techniques.
This workshop allows you to apply techniques either against a sample codebase…or bring one of your own. Suggested exercises include using metrics to learn more about the structure of your classes, determining how to partition a monolithic architecture into a service-based one, and Architectural Katas with DevOps added.
It happens to us all; there are simply days where it seems impossible to get anything done. This session focuses on techniques and tips to get into the zone, stay in the zone and to protect your productivity, even in disruptive environments.
Rather than focusing on any one productivity methodology (e.g., GTD), this talk analyzes the internal and external factors that affect our productivity and offers broader strategies to get back on track.
Unlock your latent photographic memory. In this session you'll learn failsafe techniques and systems that allow you to never forget names, appointments, or numbers. In the process you'll be more effective and imaginative at work; improve reading speed and comprehension, and shorten study times.
An improved memory will change your life, literally. In the session we will describe in detail several memory techniques that, with a little practice, will have you remembering virtually anything you want.
Almost every example of an agile project involves a single team and while many successful projects are delivered that way, most enterprise software requires the interaction of several teams. But how do we scale agile beyond a single team? What practices translate and which ones don't? In this talk we'll discuss some of the issues you'll encounter as you move agile beyond a single group and how you can keep multiple stakeholders happy. While it isn't as simple as having a “scrum of scrums” it isn't as hard as replacing every line of COBOL.
Docker and containers are getting a lot of attention these days but what do they mean for devs? How do they fit into DevOps and continuous delivery movements? Where do these tools fit into cloud computing? During this hands-on session we will learn how to install and configure Docker, build images and run containers in a local development environment. But we will also explore using them in a continuous deployment environment by deploying them to on premise as well as cloud services such as AWS.
In this session, we'll dig deep into the performance aspects of JavaScript and the web browser. Single-page web applications are becoming popular very quickly, and understanding both the low-level and high-level aspects of the browser platform and the JavaScript runtimes embedded in them is important.
We'll cover topics such as browser pipelining, memory management, and testing and measuring performance.
At the end of the day, an architect's primary job is to communicate. Not only do we need to make sure our teams understand the design of the system well enough to implement it, we must be able to explain our decisions to an audience that isn't impressed with how many TLAs you can rattle off in one sentence. Successful architects need to seamlessly transition from in depth technical conversations to budget meetings to discussions with end users adjusting the message to fit the audience.
While oral communication is key, good architects also spend a good deal of time putting pixel to screen via email, IM and various architectural documents we're expected to create. We need to write clearly and concisely while also knowing when the best course of action is to pick up the phone or walk to someone's desk.
In this talk, we'll explore the various methods that we as architects use to communicate with our stakeholders. We'll talk about knowing our audience and being able to present, as well as how to run a good meeting. We'll discuss various patterns (and antipatterns) of presenting along with some concrete advice on how to do it better. At the end of the day, our job is to effectively tell a story - this talk will look at ways to do that.
Good architects are, almost by definition, good story tellers. And while good communication skills are vital to success as an architect, so too is an ability to constructively critique an architecture. In this talk, we'll explore why reviews are important and what it takes to perform them well. Additionally, we'll talk about the importance of planning and preparation in conducting a successful review.
Functional programming (FP) is fast becoming the tool that programmers reach for in this era of multi-core processors. Although the definition of “functional” varies quite a bit between implementations, there are a few facets that remain core and true to the paradigm: facets such as first-class functions, higher-order functions, and closures, among others. In this session we will explore the meaning of these using JavaScript as our medium.
Why JavaScript? The answer in short is: omnipresence. The long answer is that hiding at the core of JavaScript is a language that is not only beautiful and elegant, but one that supports many of the core ideas in FP. If you are interested in what the fuss is all about, or are confused about some of the concepts that make FP a reality, then this is the session you should attend.
In today's world, our applications need to be responsive, fast, and scalable. Our applications need to respond to user interactions such as mouse movements, clicks and inputs, as well as asynchronous inputs like XHR calls, server-sent events, setInterval, even WebSocket events! Unfortunately, as things stand today, there is no consistent way to deal with the myriad of different “changes” that could happen in an application.
But what if there is? This is what Reactive Extensions (specifically RxJS in this session) allow us to do. They offer an abstraction that allows us to treat everything from DOM events (infinite streams) to our domain objects (maps, sets, and arrays) as streams. This consistent interface now permits us to create and manipulate any source identically. Furthermore, it allows us to react to different sources as if they were one!
Reactive Extensions are fast becoming the de facto approach to managing asynchronicity in JS land. From Netflix's UI to Angular 2's $http to ES7, reactive programming is everywhere!
This session is RxJs 101, covering
If we have time we will look at a simple demo, and reactive programming's role in Angular 2.
Just as a database creates an execution plan to run your SQL queries, HotSpot analyzes your Java code to determine how best to run it. And just as with a database, where understanding indexes is important to achieving performance, there are a few core concepts that are important to understanding Java performance.
In this talk, we'll explore the way HotSpot examines a piece of code in detail, learning not just about the optimizations that HotSpot performs but the hazards that get in its way. In the end, you'll learn how to work with HotSpot so you can write faster code with less effort.
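You can watch this happen on any small program. The example below is ours, not the talk's; run it with the standard -XX:+PrintCompilation flag and you'll see sumTo get compiled once it becomes hot:

    // Run with: java -XX:+PrintCompilation HotLoop
    public class HotLoop {
        // A small, frequently called method is a prime candidate for JIT compilation
        // and for inlining into its caller.
        static long sumTo(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            long sink = 0;
            for (int i = 0; i < 100_000; i++) {
                sink += sumTo(1_000);
            }
            System.out.println(sink); // use the result so the work is not optimized away
        }
    }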
In some organizations, architects are dismissed as people that draw box and arrow diagrams - the dreaded whiteboard architect. While we don't want to foster that stereotype, it is important for an architect to be able to construct basic architectural diagrams. An architect must also be able to separate the wheat from the chaff eliminating those models that don't help tell the story while fully leveraging those that do.
In this workshop, we'll discuss the various diagrams at our disposal. We'll walk through a case study and as we go, we'll construct a set of diagrams that will help us effectively communicate our design. We'll talk about stakeholders and who might benefit from each type of diagram. Additionally we'll discuss how to constructively review an architectural model.
Neither a laptop nor special software is required for this workshop though your modeling tool of choice (Spark, Visio, OmniGraffle, etc.) is welcome for the exercises. Of course paper and pencil are very effective too and frankly recommended! Feel free to work in pairs or teams. That's it! Well, and a willingness to participate!
This is my story of lessons learned on why improvement efforts fail… I had a great team. We were disciplined about best practices and spent tons of time on improvements. Then I watched my team slam into a brick wall. We brought down production three times in a row, then couldn’t ship again for a year.
Despite our best efforts with CI, unit testing, design reviews, and code reviews, we lost our ability to understand the system. We thought our problems were caused by technical debt building up in the code base, but we were wrong. We failed to improve, because we didn’t solve the right problems. Eventually, we turned our project around, but with a lot of tough lessons along the way.
In this talk, we'll go through a deep-dive case study that starts with project failure, then revisit all the mistakes we made over a 3 year journey to turn the project around. We'll discuss bad assumptions, strategies that failed, ideas that changed, techniques and tools that changed, and how we eventually learned our way to victory.
After reviewing each mistake, we'll have a group discussion about the underlying reasons, so you can avoid these mistakes on your own project.
This is my story of lessons learned on how to stop the crushing effects of business pressure… I was team lead with full control of our green-field project. After a year, we had continuous delivery, a beautiful clean code base, and worked directly with our customers to design the features. Then our company split in two, we were moved under different management, and I watched my project get crushed.
As a consultant, I saw the same pattern of relentless business pressure everywhere, driving one project after another into the ground. I made it my mission to help the development teams solve this problem. This is my story of lessons learned on how to transform an organization from the bottom up. I'll show you how to lead the way.
The crushing business pressure is caused by a broken feedback loop that's baked into the organization's design. In this presentation, I'll show you how to fix the broken feedback loop. Learn how to:
If the system is broken, we need to fix the system. You can change the system by making the decision to lead.
Build tools have evolved slowly over the years and in general have failed to keep up with the ever increasing need to solve complex automation problems. As your project's automation goals become more ambitious you will likely run into the limitations of existing build systems. Gradle is positioning itself to become the de facto build system of the modern continuous delivery age.
This presentation will provide an introduction to Gradle, its features, how it compares to other build systems available, and what is coming in the future.
With software build and continuous delivery pipelines becoming more complex, there exists a need to verify the logic powering these processes like any other piece of code. We need tools and methodologies for testing our build logic in much the same way we test our production code. Assertions range from ensuring the build produces the expected output, to verifying that custom plugins and extensions modify the build in expected ways, to cross-version testing with different versions of the build system.
In this presentation we will discuss and demonstrate methods for testing Gradle builds, including standard unit testing as well as functional testing using Gradle TestKit.
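To show the shape of such a functional test (the task name and test class are invented for illustration), Gradle TestKit lets a plain JUnit test drive a throwaway build:

    import java.io.File;
    import java.nio.file.Files;
    import org.gradle.testkit.runner.BuildResult;
    import org.gradle.testkit.runner.GradleRunner;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Writes a temporary build script, runs a task with TestKit, and asserts on the output.
    public class HelloTaskFunctionalTest {
        @Test
        public void helloTaskPrintsGreeting() throws Exception {
            File projectDir = Files.createTempDirectory("testkit").toFile();
            Files.write(new File(projectDir, "build.gradle").toPath(),
                    "task hello { doLast { println 'Hello from Gradle' } }".getBytes());

            BuildResult result = GradleRunner.create()
                    .withProjectDir(projectDir)
                    .withArguments("hello")
                    .build();

            assertTrue(result.getOutput().contains("Hello from Gradle"));
        }
    }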
Didn't you hear the news? TDD is dead. Yet many developers rely on it for quality code. Come join the zombie apocalypse and learn and understand TDD: what sets it apart from unit testing after the fact, what to do when you need to update code, effective mocking, automatically generating test data (and lots of it), leaving code alone and respecting your work, and more.
This presentation will cover:
How do we turn our company into an AWESOME company? Our projects get crushed over and over again by bad decisions caused by relentless business pressure, a lack of visibility, and huge communication problems.
Fixing our software problems isn't an engineering problem, it's an organizational problem. It means the people of the business world and the engineering world need to learn how to communicate, learn together, and work together to optimize the whole.
So how do we get from Point A to Point AWESOME? We learn.
The Idea Flow Learning Framework is a data-driven improvement framework specifically designed to get managers and developers all pulling in the same direction by combining the power of Idea Flow with the power of science. By visualizing our pain, we create a data-driven feedback loop for systematically improving productivity on our software projects.
In this talk we'll cover:
Learn how to measure the pain across the organization, run experiments to learn what works, and distill your knowledge into patterns and principles. If you want to start learning and improving faster than ever before, you won't want to miss this talk.
At about the same time Conway was coining his now-famous law, Marshall McLuhan warned us that “we shape our tools, and thereafter our tools shape us”.
IDEs are supposed to be the tools software engineers use. Yet, most IDEs are primarily focused on the active part of writing code, while developers actually spend most of their time understanding systems. Assessing software systems is perceived as rather secondary and mostly supported in the small. The IDE is not as integrated as it could or should be.
We have to rethink the developer experience because software is immaterial and the tools we use are the only way through which we experience software. The tools we use matter.
In this talk, we take a systematic look at what a developer experience could look like and what an environment for developers should be made of. We exemplify the message with live demos of the Glamorous Toolkit (http://gtoolkit.org), a project aiming to reinvent the IDE.
Developers are developers. But, developers are also users. And like any users, they need appropriate tools that match and augment their abilities, too.
IDEs are supposed to do just that. They are supposed to bring together into one coherent interface all tools related to development. This is what “I” stands for.
But, let’s look at it: all IDEs I know put the text editor in the forefront. And there are some powerful editors out there. Those editors are particularly good at creating code, but are terrible at understanding code. Why terrible? Well, imagine having to optimize the traffic in a city and only being given a magnifying glass as a tool. The editor is exactly that: a magnifier that shows every line of code in great detail. Clearly, the design of the IDE does not favor the big picture.
And that’s not all. Let’s look at search: how do you search for concrete annotations in your IDE, such as quickly finding all Java methods annotated with @Interesting(“with specifics”) that also match a name pattern? Given that annotations are more and more pervasive, should it not follow that the IDE makes it possible to quickly find them without involving regex file search? Or let’s consider inspecting objects: when you inspect a map object, why do you see a tree? Should it not be more appropriate to see a table instead? Should the IDE not make it possible to offer presentations that are better suited based on the context at hand? And why are pictures not pervasive in our world?
These are just some examples. The “I” is not that integrated. We can do better. We have to do better. We have to put the developer experience in the forefront.
Designing an interface starts from understanding the needs. In this talk, we take a systematic look at what a developer experience could look like and what an environment for developers could be made of. We exemplify the message with live demos of the Glamorous Toolkit (http://gtoolkit.org), a project aiming to reinvent the IDE.
With over 3 million apps now deployed in the Apple and Google Play app stores, the importance of mobile application security assessments is at an all time high. With business critical mobile apps handling payment card, healthcare, and financial information on end user devices, organizations are vulnerable to an entirely new class of mobile software vulnerabilities. As the bad guys shift their focus towards attacking mobile applications, defenders are struggling to keep up.
We will discuss some common issues often found in mobile application vulnerability assessments, such as local data storage, inter-process communication (IPC), and broken cryptography. Then we will show you mitigation strategies to apply to your organization’s mobile apps.
Join me for this 3/4 day Gradle introduction workshop. We'll cover all of the basic things you need to know about using Gradle. Not only will you understand what Gradle is and why and how it works the way it does, but you'll get extensive hands-on practice so you'll be ready to use Gradle successfully in your own projects immediately after the workshop.
Participants should bring either a Windows or Mac laptop to work through the workshop exercises.
We're looking forward to having you in the Gradle Fundamentals workshop. To maximize the learning and value of our time together, we ask that you prepare your laptop that you're bringing to this very hands on workshop.
1) Choose a Windows or Mac laptop that you'll be bringing to the workshop (we have some downloads and installs that are better to do before the event). Ensure you have admin or sudo privileges on the machine.
2) Please have a recent Java JDK installed.
Topics we will cover include:
For this workshop, you will need a laptop (Windows, OS X, or Linux) that you have admin/sudo privileges on. Prior to the workshop, please install a recent version of Git (http://git-scm.com) and the 2.14 version of Gradle (https://gradle.org/gradle-download/). You will also need a recent Java SDK. Please ensure that both Git and Gradle work by running git --version and gradle --version at prompts in terminals you intend to use in the workshop.
Take a look at your codebase. Go ahead, this abstract will wait. Notice anything? Perhaps a few more lines of JavaScript than years past? JavaScript is no longer an outlier, a language for the interns, something we can just mash together. Today, JavaScript is a first-class citizen. As such, we need to treat it with all the care and feeding we extend to our server-side languages. This talk will introduce you to a set of tools that will help you write bulletproof JavaScript.
Step one, make sure we aren't making any basic mistakes like using == when we really mean ===. To remedy these types of bugs, we'll leverage JSHint to statically analyze our code. In addition to walking through the setup, we'll discuss how to ratchet up the rules as you improve your codebase. Just like Java or C#, we also need to test our JavaScript code. We'll introduce Jasmine, a BDD-style testing tool, as well as other tools that help in the testing process. Last but not least, we'll take a tour of Plato, a JavaScript source code visualizer. Taken together, these tools can go a long way toward improving your JavaScript code.
If you have ever studied a martial art, chances are you are familiar with katas: the practice of individual training exercises, repeatedly. It may seem pointless to practice the same move again and again, but the only way to improve is repetition. We can apply the same concept to learning programming languages.
Working individually or in small groups, attendees will work through a set of problems using JavaScript to solve them. Whether you are new to JavaScript or an old hand, this session will give you an opportunity to hone your craft! Bring a laptop or be prepared to make a friend.
Time is very precious and is often threatened by phone calls, emails, co-workers, bosses, and most of all, yourself. The Pomodoro Technique reins in unfocused time and gives your work the urgency and the attention it needs, and it's done with a kitchen timer.
In this presentation we discuss how to set up, estimate time, log time, deal with interruptions, and integrate with Agile as a team. We discuss timer software and even some of the great health benefits of the Pomodoro Technique.
As teams increasingly move to the cloud, they are met with many challenges when managing a distributed footprint.
This talk will discuss the considerations needed for scaling infrastructure in the cloud, what tooling options are available for managing cloud infrastructure, and ultimately steps that can be taken for ensuring scalability of cloud infrastructure.
The cloud is a rapidly changing landscape, and the options available for running code in the cloud continue to grow with it. Groovy is a versatile language for the JVM, and opens the door to building robust and comprehensive solutions for any cloud deployment. As such, the different cloud runtimes that are available should be examined to ensure that best practices are being followed when developing Groovy projects for the cloud.
This talk will discuss the various options that are available, how Groovy fits into those infrastructures, and the best ways to ensure success when running Groovy in the cloud.
Learn Groovy from a Java developer's perspective. Use Groovy features like native collections, operator overloading, and the Groovy JDK. Additional topics will include closures, builders, AST transformations, and basic metaprogramming.
Tests using both JUnit and Spock will be provided, as well as a Gradle build file. All code will be made available through a git repository.
You need:
Optional:
Note: If you can run a bash shell, consider http://gvmtool.net
Learn Groovy from a Java developer's perspective. Use Groovy features like native collections, operator overloading, and the Groovy JDK. Additional topics will include closures, builders, AST transformations, and basic metaprogramming.
Tests using both JUnit and Spock will be provided, as well as a Gradle build file. All code will be made available through a git repository.
You need:
Optional:
Note: If you can run a bash shell, consider http://gvmtool.net
Data integrity, security, recovery, privacy and regulatory compliance are essential attributes for enterprise implementation. Enterprise customers ask for transparency in how the vendors will provide security programs. For any cloud implementation, many questions need to be asked of policy makers, architects, coders, and testers.
In this presentation, we will explore data security and storage, privacy, and data compliance issues. We will explore security management in the cloud. The presentation is useful for everyone from executives to developers who are going to implement enterprise applications in both private and public clouds.
Data integrity, security, recovery, privacy and regulatory compliance are essential attributes for enterprise implementation. Enterprise customers ask for transparency in how the vendors will provide security programs. For any cloud implementation, many questions need to be asked of policy makers, architects, coders, and testers.
In this presentation, we will explore data security and storage, privacy, and data compliance issues. We will explore security management in the cloud. The presentation is useful for everyone from executives to developers who are going to implement enterprise applications in both private and public clouds.
API Gateway is a way to connect real-world cloud-ready applications. New applications need to design the data model and create public APIs to be consumed by mobile apps, third-party apps, and different devices. We will explore best practices you must adopt to be cloud ready. First, we will examine how contract-first API development helps enable more extensible and reliable APIs. Next, we will look at
We will ask tough questions during this design session.
We will take a deep dive into the following areas:
Being a professional software engineer, it's easy to fall into the belief that one's role in a company is to write code.
Another perspective might be that one's role is to solve problems for the business and that writing code is merely one of several tools available to help solve those problems.
There are numerous problem-solving “anti-patterns” that are rampant in the industry today. “Forewarned is forearmed” as they say. In addition to highlighting these “anti-patterns” with real-life examples and the (sometimes) disastrous consequences, Michael asks some of the difficult questions about our true motivations for our decisions and how our decisions can either positively or negatively affect our team and our organization.
Grails is no longer the web framework you remember. Based on Spring Boot and complete with profiles and JSON views, Grails is now an excellent way to build micro-services, access NoSQL databases, provide a powerful REST API, and more.
Grails can do everything Spring Boot can do and more. It combines sophisticated Groovy DSLs with asynchronous capabilities, a newly redesigned data services layer, and more.
Angular is a new JavaScript framework from Google. If you are looking into developing rich web applications, Angular is your friend. Angular embraces HTML and CSS, allowing you to extend HTML towards your application, and uses plain JavaScript, which makes your code easy to reuse and test. In this workshop we will start from the ground up, and build our way through a simple application that will let us explore the various constructs and familiarize ourselves with some of the new terminology in Angular.
In this workshop we will get down and dirty with Angular. We will start with the very basics of how to bootstrap our AngularJS application, and work slowly towards making REST-ful AJAX requests to a backend. Topics include:
- ng-app
- ng-init and the evaluation {{ }}
- directives
- $rootScope
- ng-model
- ng-repeat
- ng-form, form validation and submission in AngularJS
- $http
- $routeProvider and $routeParams
If time permits we will discuss a few good practices when working with AngularJS applications.
This is a hands on tutorial so bring your laptops!
brew install git
Make sure Git is available on your path
Open up a new terminal window
$ git --version
> # Anything greater than 2 will do
> git version 2.1.3
Make sure leiningen is available
$ lein --version
> Leiningen 2.5.1 on Java 1.8.0_40 Java HotSpot(TM) 64-Bit Server VM
If you have an account on Github, feel free to fork the repository and then clone it.
Clone the repository on Github
At a terminal, cd into a directory and run git clone git@github.com:looselytyped/angudone-workshop.git. Note that if you did fork the repository under your account, your URL will be something like git@github.com:<YOUR-ACCOUNT-NAME>/angudone-workshop.git
Wake up the application
# at the terminal
# cd to where you cloned the repository above.
$ git checkout master-1.4
$ lein ring server
# This will download a whole bunch of files from Maven Central
# and will end with
# Started server on port 3000
This should open http://localhost:3000/ in your browser (if it does not, just go to that URL). You should see an HTML page announcing Angudone
Ensure that the REST endpoints are active.
In Firefox: Right-Click -> Inspect Element (this will open the Inspector), then go to the Console tab.
In Chrome: Right-Click -> Inspect Element (this will open the Chrome Inspector), then go to the Console tab (right most).
Both Consoles give you the ability to run JavaScript code. Run the following:
http.get("todos")
.success(function(data, status, headers, config) {
console.log("There are " + data.length + " todos");
}).error(function(data, status, headers, config) {
console.error("Oh Noes! Something went wrong" + data);
});
You should see a valid response indicated by There are 0 todos
Scala for Java Developers is a full live-code and fast-paced presentation and workshop (laptops optional), and this is all about the Scala language.
Scala is a wonderful functional/hybrid language. It will become one of the 5 languages that you will need to know to be a highly successful JVM developer in the very near future (others being Groovy, Clojure, Java 8, and JRuby). Scala, as opposed to some of the other languages, has quite a learning curve. This presentation was built for questions. We will start with some basics, how this presentation will flow and end will be up to you, the audience. Bring your intellect, curiosity, and your questions, and get ready for some Scala. Laptops optional so you can try stuff out on your machine and create questions of your own!
Some things will be required if you want to participate in the workshop
Unfortunately, Internet connectivity is sometimes a dicey affair and at times it can rain on our parade. To avoid having to wait for the install at the conference, you can prepare Scala before the conference! If you don't have the opportunity to do this, we will have either memory sticks or private networks at the conference. But it is preferred to do the installation before the event.
For MacOSX:
For Windows
For Linux
You may also want to take the opportunity to load some Scala Plugins onto your favorite IDE and Editor. Below is a list of resources that you can use to enhance your environment so that you can enjoy Scala syntax highlighting and other helpful tools like refactoring, debugging and analysis.
Eclipse - Eclipse has an IDE plugin for Scala, aptly called Scala-IDE. You can either download the complete Scala IDE, which includes a full Eclipse download, or just download the plugin. All the information about the plugin can be found at http://scala-ide.org, including an easy-to-follow video located at http://scala-ide.org/docs/current-user-doc/gettingstarted/index.html. Scala-IDE is also available at the Eclipse Marketplace, although I would recommend getting the latest instructions from scala-ide.org
IntelliJ - IntelliJ has a Scala plugin that can be found by going to Settings -> Plugins, clicking on the 'Browse Repositories' button, and searching for the 'Scala' plugin on the left. Right-click on 'Scala' and choose 'Install'. IntelliJ will prompt you to restart the IDE; do so, and enjoy.
NetBeans - Currently, Github user 'dcaoyuan' hosts a NetBeans Scala plugin at the address: https://github.com/dcaoyuan/nbscala. I have not tried this out since the number of NetBeans users has shrunk in recent years. If you are an avid NetBeans user and wish to try it, you can let me know the results during the session. There is additional information at http://wiki.netbeans.org/Scala
Emacs - Github user 'aemoncannon' has created 'ENSIME' (Enhanced Scala Interaction Mode for Emacs), which has a great following, at https://github.com/aemoncannon/ensime, with some documentation at http://aemoncannon.github.io/ensime.
VIM - For VIM users, https://github.com/derekwyatt/vim-scala is a VIM plugin that offers Scala color highlighting.
VSCode - Download the “Scala Language Plugin” from the plugins within VSCode.
That is it. Hope to see you soon.
Scala for Java Developers is a full live code and fast-paced presentation and workshop (laptops optional), and this is all about the Scala language. This is Part 2, continuing where we left off from Part 1.
Scala is a wonderful functional/hybrid language. It will become one of the 5 languages that you will need to know to be a highly successful JVM developer in the very near future (others being Groovy, Clojure, Java 8, and JRuby). Scala, as opposed to some of the other languages, has quite a learning curve. This presentation was built for questions. We will start with some basics, how this presentation will flow and end will be up to you, the audience. Bring your intellect, curiosity, and your questions, and get ready for some Scala.
Some things will be required if you want to participate in the workshop
Unfortunately, Internet connectivity is sometimes a dicey affair and at times it can rain on our parade. To avoid having to wait for the install at the conference, you can prepare Scala before the conference! If you don't have the opportunity to do this, we will have either memory sticks or private networks at the conference. But it is preferred to do the installation before the event.
For MacOSX:
For Windows
For Linux
You may also want to take the opportunity to load some Scala Plugins onto your favorite IDE and Editor. Below is a list of resources that you can use to enhance your environment so that you can enjoy Scala syntax highlighting and other helpful tools like refactoring, debugging and analysis.
Eclipse - Eclipse has an IDE plugin for Scala, aptly called Scala-IDE. You can either download the complete Scala IDE, which includes a full Eclipse download, or just download the plugin. All the information about the plugin can be found at http://scala-ide.org, including an easy-to-follow video located at http://scala-ide.org/docs/current-user-doc/gettingstarted/index.html. Scala-IDE is also available at the Eclipse Marketplace, although I would recommend getting the latest instructions from scala-ide.org
IntelliJ - IntelliJ has a Scala plugin that can be found by going to Settings -> Plugins, clicking on the 'Browse Repositories' button, and searching for the 'Scala' plugin on the left. Right-click on 'Scala' and choose 'Install'. IntelliJ will prompt you to restart the IDE; do so, and enjoy.
NetBeans - Currently, Github user 'dcaoyuan' hosts a NetBeans Scala plugin at the address: https://github.com/dcaoyuan/nbscala. I have not tried this out since the number of NetBeans users has shrunk in recent years. If you are an avid NetBeans user and wish to try it, you can let me know the results during the session. There is additional information at http://wiki.netbeans.org/Scala
Emacs - Github user 'aemoncannon' has created 'ENSIME' (Enhanced Scala Interaction Mode for Emacs), which has a great following, at https://github.com/aemoncannon/ensime, with some documentation at http://aemoncannon.github.io/ensime.
VIM - For VIM users, https://github.com/derekwyatt/vim-scala is a VIM plugin that offers Scala color highlighting.
VSCode - Download the “Scala Language Plugin” from the plugins within VSCode.
That is it. Hope to see you soon.
Apache Cassandra is a leading open-source distributed database capable of amazing feats of scale, but its data model requires a bit of planning for it to perform well. Of course, the nature of ad-hoc data exploration and analysis requires that we be able to ask questions we hadn’t planned on asking—and get an answer fast. Enter Apache Spark.
Spark is a distributed computation framework optimized to work in-memory, and heavily influenced by concepts from functional programming languages. In this workshop, we’ll explore Spark and see how it works together with the Cassandra database to deliver a powerful open-source big data analytic solution.
Apache Cassandra is a leading open-source distributed database capable of amazing feats of scale, but its data model requires a bit of planning for it to perform well. Of course, the nature of ad-hoc data exploration and analysis requires that we be able to ask questions we hadn’t planned on asking—and get an answer fast. Enter Apache Spark.
Spark is a distributed computation framework optimized to work in-memory, and heavily influenced by concepts from functional programming languages. In this workshop, we’ll explore Spark and see how it works together with the Cassandra database to deliver a powerful open-source big data analytic solution.
Normally simple tasks like running a program or storing and retrieving data become much more complicated when we start to do them on collections of computers rather than single machines. Distributed systems have become a key architectural concern, and affect everything a program would normally do, giving us enormous power, but at the cost of increased complexity as well.
Using a series of examples all set in a coffee shop, we’ll explore topics like distributed storage, computation, timing, messaging, and consensus. You'll leave with a good grasp of each of these problems, and a solid understanding of the ecosystem of open-source tools in the space.
So you’re a JVM developer, you understand Cassandra’s architecture, and you’re on your way to knowing its data model well enough to build descriptive data models that perform well. What you need now is to know the Java Driver.
What seems like an inconsequential library that proxies your application’s queries to your Cassandra cluster is actually a sophisticated piece of code that solves a lot of problems for you that early Cassandra developers had to code by hand. Come to this session to see features you might be missing and examples of how to use the Java driver in real applications.
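To make that concrete, here is a minimal sketch using the DataStax Java Driver (the 3.x-era com.datastax.driver.core API); the contact point and query are placeholders, and a real application would build the Cluster once and reuse the Session.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class DriverSketch {
    public static void main(String[] args) {
        // Connect to one node; the driver discovers the rest of the cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            ResultSet rs = session.execute("SELECT release_version FROM system.local");
            Row row = rs.one();
            System.out.println("Cassandra version: " + row.getString("release_version"));
        }
    }
}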
Reactive architecture patterns allow you to build self-monitoring and self-healing systems that can react to both internal and external conditions without human intervention. How would you like to design systems that can automatically grow as the business grows, automatically handle varying load (cyber Monday?), and automatically handle (and repair) internal and external errors, all without human interaction? I'll show you how to do this with your current technology stack (no special languages, tools, frameworks, or products). In this two-part session I will leverage both slides and live coding using Java and RabbitMQ to describe and demonstrate how to build reactive systems. Get ready for the future of software architecture - that you can start implementing on Monday.
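As a taste of the live-coding portion, here is a minimal sketch of publishing a heartbeat message with the RabbitMQ Java client; the queue name and payload are hypothetical, and a monitoring consumer on the other side would react when heartbeats stop arriving.

import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");  // broker location is an assumption
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        // A durable queue that a monitoring component could consume from.
        channel.queueDeclare("service.heartbeat", true, false, false, null);
        channel.basicPublish("", "service.heartbeat", null,
                "order-service:alive".getBytes(StandardCharsets.UTF_8));
        channel.close();
        connection.close();
    }
}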
Part 1 Agenda:
Reactive architecture patterns allow you to build self-monitoring and self-healing systems that can react to both internal and external conditions without human intervention. How would you like to design systems that can automatically grow as the business grows, automatically handle varying load (cyber Monday?), and automatically handle (and repair) internal and external errors, all without human interaction? I'll show you how to do this with your current technology stack (no special languages, tools, frameworks, or products). In this two-part session I will leverage both slides and live coding using Java and RabbitMQ to describe and demonstrate how to build reactive systems. Get ready for the future of software architecture - that you can start implementing on Monday.
Part 2 Agenda
Too many companies embark on enterprise architecture efforts only to have them fail. One of the biggest reasons for these failed attempts at enterprise architecture is that no one really knows what it is. Ask 10 people what enterprise architecture is, and you are guaranteed to get 10 different answers. Enterprise architecture is more than drawing lots of enterprise-level future-state architecture diagrams – it is about being able to bridge the gap between business needs and IT capabilities. In this session you will learn about the context and goals of enterprise architecture, what skills are necessary to become an enterprise architect, and how to model the enterprise. We'll also take a look at transformation techniques for both data and systems across the enterprise.
Agenda:
The ancient Chinese warrior Sun Tzu taught his men to “know your enemy” before going into battle. For developers, the equivalent is knowing and understanding software development anti-patterns – things that we repeatedly do that produce negative results. Anti-patterns are used by developers, architects and managers every day, and are one of the main factors preventing progress and success. In this humorous and fast-paced session we will take a deep-dive look at some of the more common and significant software development anti-patterns. Through coding and design examples you will see how these anti-patterns emerge, how to recognize when an anti-pattern is being used, and most importantly, learn how to avoid them through effective software development techniques and practices. Although most of the coding examples are in Java, this is largely a technology-agnostic session.
Agenda:
This session takes a deep dive on the design and implementation of the proof of work algorithm at the heart of most cryptocurrencies, along with details of how difficult it is to spoof, how transactions work, what happens when all the coins have been mined, and other practical considerations.
Bitcoin has generated a lot of both negative and positive buzz, but it’s important to differentiate between the different aspects of cryptocurrency: the algorithm, the commodity, and the implications. This session takes a deep dive on the design and implementation of the proof of work algorithm at the heart of most cryptocurrencies, along with details of how difficult it is to spoof, how transactions work, what happens when all the coins have been mined, and other practical considerations. I also discuss the use of the proof of work algorithm to implement a wide variety of other types of distributed trust: legal documents, voting systems, and a modification of the entire commerce structure that has existed since medieval times. Don’t be confused by the hype around specific implementations: this algorithm will have wide reaching, tectonic repercussions.
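To ground the discussion, here is a deliberately simplified proof-of-work sketch in Java: keep hashing the block contents with an incrementing nonce until the hash meets the difficulty target. The block data and target prefix are placeholders, not any particular coin's format.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ProofOfWorkSketch {
    public static void main(String[] args) throws Exception {
        String blockData = "previousHash|transactions|timestamp";  // hypothetical block contents
        String targetPrefix = "0000";  // difficulty: more leading zeros means more work
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        long nonce = -1;
        String hash;
        do {
            nonce++;
            byte[] digest = sha256.digest((blockData + nonce).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            hash = hex.toString();
        } while (!hash.startsWith(targetPrefix));
        // Finding the nonce is expensive; verifying it takes a single hash, which is what makes spoofing hard.
        System.out.println("nonce=" + nonce + " hash=" + hash);
    }
}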
Modern IDEs are great: they let us get our work done, focus on solving problems, provide code prompts, and more. On the flip side, they hide a lot of details and often do not provide everything we need to get our work done. Learning to effectively use the command line can help us navigate around, write scripts to automate routine tasks, isolate and understand issues, and more.
In this presentation we will learn some tricks and tips we can use on the command line, in some common editors, and for general navigation, both for Unix-like machines and Windows.
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly the manifestation of our design. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example oriented approach to look at some core design principles that can help us create better design and more maintainable code.
Test Driven Design, we hear, is a great way to create a lightweight design that is easier to maintain and evolve. Unfortunately, just writing test cases mechanically does not lead to good design. In fact, it may not really lead us anywhere we want to go!
In this presentation we will discuss some of the challenges with using test driven development, look at practical and pragmatic solutions that will help us make a good use of this wonderful design tool.
Big up front design is discouraged in agile development. However, we know that architecture plays a significant part in software systems. Evolving architecture during the development of an application seems to be a risky business.
In this presentation we will discuss the reasons to evolve the architecture, some of the core principles that can help us develop in such a manner, and the ways to minimize the risk and succeed in creating a practical and useful architecture.
Before spending substantial effort in refactoring or altering a design, it would be prudent to evaluate the current quality of the design. This can help us decide if we should proceed with the refactoring effort or a particular alteration of the design. Furthermore, after evolving a design, using some design metrics can help us evaluate whether we have improved on the design front.
In this workshop we will learn about some critical qualities of design and how to measure those. We will learn about these by working through some example code, refactoring it, and evaluating the design again at each stage of refactoring.
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
Before spending substantial effort in refactoring or altering a design, it would be prudent to evaluate the current quality of the design. This can help us decide if we should proceed with the refactoring effort or a particular alteration of the design. Furthermore, after evolving a design, using some design metrics can help us evaluate whether we have improved on the design front.
In this workshop we will learn about some critical qualities of design and how to measure those. We will learn about these by working through some example code, refactoring it, and evaluating the design again at each stage of refactoring.
-Java 8 JDK or later version
-Your favorite IDE (Preferably IntelliJ IDEA Community Edition)
-git
In Java, we've programmed in the imperative style for a few decades now. With Java 8, we can also code in the functional style. This style has a number of benefits: code is concise, more expressive, easier to understand, and easier to change. But the transition from imperative to functional style is a hard journey. It's not so much an issue of getting comfortable with the syntax; it's the challenge of thinking functionally. What better way to learn that transition than taking imperative code and refactoring it to a more functional style.
In this presentation we will start with multiple code examples that are written in imperative style and learn how to approach and refactor to functional style. You'll learn about some APIs, some hidden functions, but more so what to look for during your own journey to functional style.
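As a small taste of the kind of refactoring covered, here is one imperative-to-functional transformation; the data and names are illustrative only.

import java.util.Arrays;
import java.util.List;

public class RefactorToFunctional {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Sara", "Neal", "Brett", "Raju", "Ken");

        // Imperative style: explicit iteration plus a mutable variable to track state.
        int count = 0;
        for (String name : names) {
            if (name.length() > 4) {
                count++;
            }
        }
        System.out.println(count);

        // Functional style: declare what we want, not how to loop.
        System.out.println(names.stream()
                                .filter(name -> name.length() > 4)
                                .count());
    }
}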
A number of developers and organizations are beginning to make use of Java 8. With anything that's new, we often learn it the hard way.
By stepping back and taking a look at programming style as idioms, we can quickly gravitate towards better coding style and also avoid some common traps that we often get drawn towards.
Sure, Java 8 has lambdas and streams. However, the JDK has gone through a significant makeover to make good use of lambdas and streams. Furthermore, some of the new functional interfaces have far more than just abstract methods.
In this presentation we will look beyond lambdas and streams and take a look at some of the fun-filled, useful members of the JDK that will help us make better use of lambdas and streams.
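For instance, the built-in functional interfaces carry default methods such as Function.andThen, Predicate.negate, and Comparator.thenComparing; a minimal sketch of the sort of members the talk explores:

import java.util.Comparator;
import java.util.function.Function;
import java.util.function.Predicate;

public class BeyondTheBasics {
    public static void main(String[] args) {
        Function<Integer, Integer> doubler = n -> n * 2;
        Function<Integer, Integer> addTen = n -> n + 10;
        System.out.println(doubler.andThen(addTen).apply(5));  // 20: composition via a default method

        Predicate<String> isEmpty = String::isEmpty;
        System.out.println(isEmpty.negate().test("hello"));    // true

        Comparator<String> byLength = Comparator.comparing(String::length);
        System.out.println(byLength.thenComparing(Comparator.<String>naturalOrder())
                                   .compare("java", "jvm"));   // positive: "java" is longer
    }
}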
“If streams can be parallel, why not make them parallel all the time?” is a common question from developers getting introduced to Java 8 streams. In this talk we'll take on three separate topics. 1. When to consider parallelization and when not to. 2. How to parallelize, how to decide on number of threads, and how to control the threads pool. 3. Learn about some common mistakes people make when using parallel streams.
The goal of this talk is for us to learn when and how to make good use of parallel streams.
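As a flavor of the discussion, here is a sketch of making a stream parallel and of the common, if somewhat unofficial, trick of running the terminal operation inside a dedicated ForkJoinPool to constrain the number of threads; the pool size of 4 is arbitrary.

import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class ParallelCarefully {
    public static void main(String[] args) throws Exception {
        // Parallelization pays off only when the per-element work outweighs the coordination
        // cost; a trivial sum like this is often faster sequentially.
        long sum = IntStream.rangeClosed(1, 1_000_000).parallel().asLongStream().sum();
        System.out.println(sum);

        // By default, parallel streams share the common ForkJoinPool; submitting the pipeline
        // from our own pool limits how many threads do the work.
        ForkJoinPool pool = new ForkJoinPool(4);
        Callable<Long> task = () -> IntStream.rangeClosed(1, 1_000_000).parallel().asLongStream().sum();
        System.out.println(pool.submit(task).get());
        pool.shutdown();
    }
}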
Efficiency is achieved not just by running things faster, but by avoiding things that shouldn't be done in the first place. Lazy evaluations are a core feature of many functional programming languages. Your code can benefit from lazy evaluations with lambda expressions and, more so, with the power of Streams.
In this presentation, we'll start with a discussion of lazy evaluations, with short examples from Haskell and Scala. Then we'll dive into Java to see how we can achieve similar benefits using lambdas and the Stream API.
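As a concrete example, a Supplier defers an expensive computation until it is actually needed, and stream pipelines do only as much work as the terminal operation demands; the flag and values below are illustrative.

import java.util.function.Supplier;
import java.util.stream.Stream;

public class LazyEvaluation {
    static int expensive() {
        System.out.println("computing...");
        return 42;
    }

    static void debug(String message, Supplier<Integer> value) {
        boolean debugEnabled = false;  // hypothetical flag
        if (debugEnabled) {
            System.out.println(message + value.get());  // evaluated only if actually needed
        }
    }

    public static void main(String[] args) {
        debug("answer = ", LazyEvaluation::expensive);  // expensive() never runs

        // Intermediate operations are lazy; findFirst stops as soon as one element passes the filter.
        System.out.println(Stream.of(1, 2, 3, 4, 5)
                                 .map(n -> n * n)
                                 .filter(n -> n > 5)
                                 .findFirst()
                                 .orElse(-1));  // 9
    }
}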
We all have seen our share of bad code and some really good code as well. What are some of the common anti-patterns that seem to recur over and over in code that sucks? By learning about these code smells and avoiding them, we can greatly help make our code better.
Come to this talk to learn about some common code smells and to share your experiences as well.
What's in Java 9 and, more important, how does that impact us?
In this presentation we will take a look at the major features that are likely to make it into Java 9 and discuss the benefits of each of them using practical, hands-on examples.
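By way of illustration, here is a minimal sketch of a few additions that were slated for Java 9 at the time (collection factory methods and new Stream and Optional operations); the values are illustrative.

import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

public class Java9Preview {
    public static void main(String[] args) {
        // Compact, immutable collections via factory methods.
        List<Integer> nums = List.of(1, 2, 3, 4, 5);
        System.out.println(nums);

        // Stream gained takeWhile and dropWhile.
        Stream.of(1, 2, 3, 10, 4).takeWhile(n -> n < 5).forEach(System.out::println);  // 1 2 3

        // Optional gained ifPresentOrElse to handle both branches in one call.
        Optional.of("modules").ifPresentOrElse(
                value -> System.out.println("found " + value),
                () -> System.out.println("nothing here"));
    }
}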
Programming is an act of continuous discovery. Auto-completion in IDEs is great, but it's more speculation than experimentation. A Read-Evaluate-Print-Loop, or REPL, gives instant feedback and the ability to quickly try out your ideas. Fast feedback is all the rage today in development.
Come to this all live coding, no slides session to learn how to leverage the Java 9 REPL to accelerate your Java Development.
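To give a sense of the feedback loop, here is the kind of exchange you might have with the Java 9 REPL (jshell); the snippets typed are ordinary Java, and the responses shown are roughly what jshell echoes back.

$ jshell
jshell> int x = 10
x ==> 10
jshell> List<Integer> nums = List.of(1, 2, 3)
nums ==> [1, 2, 3]
jshell> nums.stream().map(n -> n * n).forEach(System.out::println)
1
4
9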
Frege is an implementation of Haskell on the JVM. It brings along the strengths and power of one of the most powerful statically typed and functional programming languages.
In this presentation we will learn about Frege and its implementation on the JVM, using practical examples.
I.flow() AI is an emotional intelligence AI that learns to respond in real-time to the pain of humans, for example, developers that are having a hard time. The I.flow() AI Platform is still in the early stages of mapping theory to concrete implementation, so in this talk we'll break down architecture strategy, pain metrics, pair programming buddy, supply chain flows, and the underpinnings of Flow theory.
Flow is an old concept, adopted into the software world by mapping Flow from Lean manufacturing. When we map a metaphor between two different domains, our brain locks onto the isomorphisms between contexts, and “Flow” becomes stickies flowing on a whiteboard, or features flowing out to customers. It becomes difficult to see Flow any other way.
What if our object-oriented blinders led to an object-oriented notion of Flow, and there's a totally different way to look at the system? Flow, at its core, is a paradigm shift, a metaphorical lens, that helps us see, understand, and predict the behavior of any Flow System. Better predictive models enable AI automation like we've never had before. It's about time we started applying AI to our own problems.
Learning to Trust in a distributed system is a complex and harrowing process. By combining the notions of Identity and Secrecy we can build protocols that help us achieve it.
Technologies don't magically become solutions. They are used within domain, design and deployment contexts. This talk will focus on the singular notion of Trust and how it cross-cuts the distributed systems we are building.
This talk will focus on a variety of standards and technologies that help us connect the worlds of Identity, Secrecy and Integration. We will look at technologies that benefit from open standards to allow us to make and verify claims that strengthen our ability to Trust. This will also include a look at the Distributed Trust models such as Blockchain-based transactions and platforms that build upon them.
While developers and testers use Selenium and other suites to test web application functionality, security often falls to the wayside because it's either too time consuming or they just don't know HOW to test for these issues. In this talk we'll discuss some basic OWASP TOP 10/CWE 25 vulnerabilities and how to discover them.
We'll use Selenium in conjunction with tools, such as ZAP and Burp, to identify vulnerabilities in our applications.
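For instance, a Selenium test (sketched here with the Java bindings) can double as a crude security probe; the URL, field name, and payload below are hypothetical, and a hit simply means the page deserves a closer look with ZAP or Burp.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ReflectedXssProbe {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            String payload = "<script>alert('xss')</script>";  // canary payload
            driver.get("http://localhost:8080/search");        // hypothetical app under test
            driver.findElement(By.name("q")).sendKeys(payload);
            driver.findElement(By.name("q")).submit();

            // If the raw payload comes back unescaped, the page is likely vulnerable to reflected XSS.
            if (driver.getPageSource().contains(payload)) {
                System.out.println("Possible reflected XSS on /search");
            }
        } finally {
            driver.quit();
        }
    }
}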
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By being able to understand these different perspectives, it's possible to begin to frame our arguments around the needs and the wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” stops, discussing what value is and how we align the values of the business with the needs and values of the engineer.
The technology space is a lot like the ocean - miss one wave and another will come along shortly; most shiny new things begin with a sizable amount of hype as everyone rushes to play with the new toy. This cycle is often met with a level of disappointment as we quickly discover our new bauble isn't all that and a bag of chips so we rush off to the next best thing ever.
A few short years ago, HTML5 was the new hotness but at the time browser support was spotty at best. Despite the spotlight moving on to something else, browser support has improved markedly and we even have new toys to play with! In this talk, I will walk you through what is possible in today's browser as well as what other new features you might not be aware of. HTML5 may no longer qualify as bleeding edge, but it is still deserving of our attention.
It's a great time to start using Angular 2. At this time it is in the release candidate stage and many production apps are being deployed. In this session, we will cover the basics of creating Angular 2 apps and we will show how to integrate with existing Java web applications. At a high-level we'll cover the basics of TypeScript, modules, components, templates, services, routing, and dependency injection.
A challenging aspect of getting started is understanding Angular tooling and integrating your Angular client inside a Java web app. We'll review a tool stack that includes: npm, gulp, angular-cli, and gradle.
Reactive Programming is receiving quite a bit of attention, and for good reasons. It's a nice logical next step from functional programming. It takes the concept of function composition and lazy evaluation to the next level. It streamlines handling of many critical issues that are architectural in nature: resilience, scale, responsiveness, and messaging.
In this workshop, we will start with a quick introduction to reactive programming. We will then dive into code examples and learn how to create reactive applications. We'll learn to implement observables, to deal with errors in a graceful manner, explore both synchronous and asynchronous solutions, compare hot vs. cold observables, and deal with backpressure.
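The workshop does not prescribe a specific library, but as a rough sketch of the ideas, here is an observable pipeline with graceful error handling, assuming RxJava 1.x is on the classpath.

import rx.Observable;

public class ReactiveHello {
    public static void main(String[] args) {
        Observable<Integer> numbers = Observable.just(1, 2, 3, 4, 5);

        numbers.map(n -> n * n)
               .filter(n -> n % 2 == 1)
               .subscribe(
                   n -> System.out.println("next: " + n),
                   err -> System.err.println("handled gracefully: " + err),
                   () -> System.out.println("done"));
    }
}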
Attendees will pair up to work on the labs. Please be prepared with the following on your systems:
svn (or git-svn) to access the version control for code examples and labs
Java 1.8
Your favorite IDE/TextEditor
The above are the minimum requirements. The reactive library and any other things needed will be downloaded during the workshop. Should you have any questions, please drop an email to Venkat at venkats@agiledeveloper.com.
In this talk we will explore the topology of a Cassandra cluster. We will explore how data is spread around a cluster in Cassandra. Then we will look at snitches and launch Cassandra clusters using Docker. Finally, we will look at nodetool commands to work with the cluster.
We will explore replication strategies within Cassandra. We will then look at CQL (Cassandra Query Language), which is used for interacting with Cassandra. We will also explore how Cassandra is integrated with the DataStax framework.
In this talk we will explore the topology of a Cassandra cluster. We will explore how data is spread around a cluster in Cassandra. Then we will look at snitches and launch Cassandra clusters using Docker. Finally, we will look at nodetool commands to work with the cluster.
We will explore replication strategies within Cassandra. We will then look at CQL (Cassandra Query Language), which is used for interacting with Cassandra.
Key takeaways for this talk will be for developers and architects to understand why Apache Cassandra is one of the best solutions for storing and retrieving data. We will also explore how Cassandra is integrated with the DataStax framework.
In this talk we will take a deep dive into how to design data models for highly available cloud systems. We will start by exploring the conceptual data model, application flow, logical data model, and physical data model. We will review Chebotko diagrams and how they help with modeling a NoSQL database. As part of this exercise, we will explore Cassandra data modeling goals: spread data evenly around the cluster and minimize the number of partitions read.
Key takeaways for this talk will be for developers and architects to understand how to design a NoSQL database using the strategies discussed.
In this talk we will take a deep dive into how to design data models for highly available cloud systems. We will start by exploring the conceptual data model, application flow, logical data model, and physical data model. We will review Chebotko diagrams and how they help with modeling a NoSQL database. As part of this exercise, we will explore Cassandra data modeling goals: spread data evenly around the cluster and minimize the number of partitions read.
Key takeaways for this talk will be for developers and architects to understand how to design a highly available NoSQL database using the strategies discussed.
This talk will explore why Spark is a more prominent solution than Hadoop. We will look at MapReduce and how Spark makes the creation of Big Data algorithms simpler and faster. Next, we will explore the Spark context and how Resilient Distributed Datasets (RDDs) help establish a Directed Acyclic Graph (DAG); transformations using map and filter; and actions using collect, count, and reduce. Later we will explore the Spark Cassandra Connector. We will look at the Spark API and Spark SQL.
Key takeaways from this talk will be for developers and architects to understand how Apache Spark and Apache Cassandra help implement enterprise-level analytical solutions. It can be up to 100x faster than Hadoop!
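To make the transformation/action distinction concrete, here is a minimal sketch using Spark's Java API, run locally with illustrative data.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddBasics {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("rdd-basics").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6));

        // Transformations (filter, map) are lazy; they only extend the DAG.
        JavaRDD<Integer> evenSquares = numbers.filter(n -> n % 2 == 0)
                                              .map(n -> n * n);

        // Actions (count, reduce, collect) trigger the actual computation.
        System.out.println("count  = " + evenSquares.count());
        System.out.println("sum    = " + evenSquares.reduce((a, b) -> a + b));
        System.out.println("values = " + evenSquares.collect());

        sc.stop();
    }
}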
The key takeaways from this talk will be for developers and architects to understand:
Embracing microservices also means embracing distributed systems. Distributed systems carry with them multiple challenges. One set of challenges includes the problem of gaining visibility into the behavior of the composite system, understanding that behavior, and being able to isolate the cause(s) of problematic behavior. These challenges can be addressed by applying the techniques known collectively as Distributed Tracing.
In this presentation, we’ll examine the theory of distributed tracing put forth in Google’s Dapper paper, and we’ll look at how this theory is put into practice in the design of Zipkin, an OSS distributed tracing platform.
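To illustrate the core Dapper/Zipkin idea without committing to a particular library, here is a deliberately simplified sketch: every request shares one trace id, each unit of work gets its own span id, and recording the parent span id lets the call tree be reconstructed. Real systems propagate these values as headers (Zipkin uses B3 headers such as X-B3-TraceId) rather than method parameters.

import java.util.UUID;

public class TraceContextSketch {
    public static void main(String[] args) {
        // The edge service starts a trace: one trace id for the whole request.
        String traceId = UUID.randomUUID().toString();
        String rootSpanId = UUID.randomUUID().toString();
        System.out.println("gateway  trace=" + traceId + " span=" + rootSpanId);

        // The ids travel with the call to the downstream service.
        callDownstream(traceId, rootSpanId);
    }

    static void callDownstream(String traceId, String parentSpanId) {
        // The downstream service reuses the trace id but creates its own span.
        String spanId = UUID.randomUUID().toString();
        long start = System.currentTimeMillis();
        // ... the actual work would happen here ...
        long durationMillis = System.currentTimeMillis() - start;
        System.out.println("payments trace=" + traceId + " span=" + spanId
                + " parent=" + parentSpanId + " tookMs=" + durationMillis);
    }
}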
Visibility is one of the primary characteristics of applications that aren’t just coded well, but run well in production. We need visibility to understand:
In this talk we’ll look at the three disciplines of monitoring, metrics, and logging, and see how, properly used, they can dramatically increase our system’s inherent visibility.
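As one concrete example of the metrics discipline (the talk itself is tool-agnostic), here is a sketch using the Dropwizard Metrics library; the metric names and the simulated work are hypothetical.

import java.util.concurrent.TimeUnit;

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class VisibilityBasics {
    public static void main(String[] args) throws Exception {
        MetricRegistry registry = new MetricRegistry();
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();

        Counter orders = registry.counter("orders.received");
        Timer checkout = registry.timer("checkout.latency");

        try (Timer.Context ignored = checkout.time()) {  // measure one "request"
            orders.inc();
            Thread.sleep(25);                            // stand-in for real work
        }

        reporter.report();                               // dump the metrics once
    }
}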
Topics will include:
Create your own model Continuous Delivery pipeline in a VM using Jenkins, Gradle, Git, Gerrit, Artifactory, Sonar, Jacoco, and Docker. Learn about each of these technologies in brief and see how to integrate them into Jenkins through plugins or scripting. See how to generate and access reports for running test cases, pass/fail for code metrics, coverage info, etc. Learn how to deploy a webapp with a database backend in multiple Docker containers for functional or UAT tests. Learn Jenkins techniques to pass information, environments, and artifacts between jobs in the pipeline.
Important setup required before the workshop:
You will need a laptop for this workshop with the applications as discussed below.
In this workshop, we use a preconfigured VM which requires Virtualbox to be running on your system. Please see https://github.com/brentlaster/conf/blob/master/rwx2016/JDP-Setup.pdf and follow the directions there. (Note: You do not need to do the part about changing the timezone on the VM since RWX 2016 will be in the EST timezone.)
As noted in the PDF, the VM can be downloaded from: https://s3-us-west-2.amazonaws.com/bclconf/CDPipeline/RWX_2016.ova
Please be aware that this VM is ~6G in size and will require significant time to download. Free space of 20G (prior to the download, to allow for the download itself, running the VM, etc.) is recommended on your system for best performance.
In this hands-on workshop, we create a Continuous Delivery pipeline with Jenkins and 7 other technologies. We assemble the Review stage with automated verification builds and code review via Git and Gerrit. Then we move on to the Commit stage with compiles and unit tests, integration testing via Gradle, code analysis with Sonar and Jacoco, packaging, and publishing of our artifact into Artifactory. Next we handle the Acceptance stage, retrieving our artifact and deploying it automatically to a functional test environment in Docker containers. In the final stage, we show how to deploy to a production web engine.
Throughout this workshop, we briefly survey each of these technologies and provide working examples of integrating each of them within Jenkins. Everything is contained within a Linux VM that each participant will have. After the labs, each participant will have their own working Continuous Delivery pipeline.
Users will need a modern laptop with VirtualBox installed and the ability to run images as well as about 20Gig of free disk space.
Important setup required before the workshop:
You will need a laptop for this workshop - the more powerful the better - configured as outlined below.
In this workshop, we use a preconfigured VM which requires Virtualbox to be running on your system. Please see https://github.com/brentlaster/conf/blob/master/rwx2016/JDP-Setup.pdf and follow the directions there. (Note: You do not need to do the part about changing the timezone on the VM since RWX 2016 will be in the EST timezone.)
As noted in the PDF, the VM can be downloaded from: https://s3-us-west-2.amazonaws.com/bclconf/CDPipeline/RWX_2016.ova
Please be aware that this VM is ~6G in size and will require significant time to download. Free space of 20G (prior to the download, to allow for the download itself, running the VM, etc.) is recommended on your system for best performance.
Create your own model Continuous Delivery pipeline in a VM using Jenkins, Gradle, Git, Gerrit, Artifactory, Sonar, Jacoco, and Docker. Learn about each of these technologies in brief and see how to integrate them into Jenkins through plugins or scripting. See how to generate and access reports for running test cases, pass/fail for code metrics, coverage info, etc. Learn how to deploy a webapp with a database backend in multiple Docker containers for functional or UAT tests. Learn Jenkins techniques to pass information, environments, and artifacts between jobs in the pipeline.
Important setup required before the workshop:
You will need a laptop for this workshop with the applications as discussed below.
In this workshop, we use a preconfigured VM which requires Virtualbox to be running on your system. Please see https://github.com/brentlaster/conf/blob/master/rwx2016/JDP-Setup.pdf and follow the directions there. (Note: You do not need to do the part about changing the timezone on the VM since RWX 2016 will be in the EST timezone.)
As noted in the PDF, the VM can be downloaded from: https://s3-us-west-2.amazonaws.com/bclconf/CDPipeline/RWX_2016.ova
Please be aware that this VM is ~6G in size and will require significant time to download. Free space of 20G (prior to the download, to allow for the download itself, running the VM, etc.) is recommended on your system for best performance.
In this hands-on workshop, we create a Continuous Delivery pipeline with Jenkins and 7 other technologies. We assemble the Review stage with automated verification builds and code review via Git and Gerrit. Then we move on to the Commit stage with compiles and unit tests, integration testing via Gradle, code analysis with Sonar and Jacoco, packaging, and publishing of our artifact into Artifactory. Next we handle the Acceptance stage, retrieving our artifact and deploying it automatically to a functional test environment in Docker containers. In the final stage, we show how to deploy to a production web engine.
Throughout this workshop, we briefly survey each of these technologies and provide working examples of integrating each of them within Jenkins. Everything is contained within a Linux VM that each participant will have. After the labs, each participant will have their own working Continuous Delivery pipeline.
Users will need a modern laptop with VirtualBox installed and the ability to run images as well as about 20Gig of free disk space.
Important setup required before the workshop:
You will need a laptop for this workshop - the more powerful the better - configured as outlined below.
In this workshop, we use a preconfigured VM which requires Virtualbox to be running on your system. Please see https://github.com/brentlaster/conf/blob/master/rwx2016/JDP-Setup.pdf and follow the directions there. (Note: You do not need to do the part about changing the timezone on the VM since RWX 2016 will be in the EST timezone.)
As noted in the PDF, the VM can be downloaded from: https://s3-us-west-2.amazonaws.com/bclconf/CDPipeline/RWX_2016.ova
Please be aware that this VM is ~6G in size and will require significant time to download. Free space of 20G (prior to the download, to allow for the download itself, running the VM, etc.) is recommended on your system for best performance.
Create your own model Continuous Delivery pipeline in a VM using Jenkins, Gradle, Git, Gerrit, Artifactory, Sonar, Jacoco, and Docker. Learn about each of these technologies in brief and see how to integrate them into Jenkins through plugins or scripting. See how to generate and access reports for running test cases, pass/fail for code metrics, coverage info, etc. Learn how to deploy a webapp with a database backend in multiple Docker containers for functional or UAT tests. Learn Jenkins techniques to pass information, environments, and artifacts between jobs in the pipeline.
Important setup required before the workshop:
You will need a laptop for this workshop with the applications as discussed below.
In this workshop, we use a preconfigured VM which requires Virtualbox to be running on your system. Please see https://github.com/brentlaster/conf/blob/master/rwx2016/JDP-Setup.pdf and follow the directions there. (Note: You do not need to do the part about changing the timezone on the VM since RWX 2016 will be in the EST timezone.)
As noted in the PDF, the VM can be downloaded from: https://s3-us-west-2.amazonaws.com/bclconf/CDPipeline/RWX_2016.ova
Please be aware that this VM is ~6G in size and will require significant time to download. Free space of 20G (prior to the download, to allow for the download itself, running the VM, etc.) is recommended on your system for best performance.
In this hands-on workshop, we create a Continuous Delivery pipeline with Jenkins and 7 other technologies. We assemble the Review stage with automated verification builds and code review via Git and Gerrit. Then we move on to the Commit stage with compiles and unit tests, integration testing via Gradle, code analysis with Sonar and Jacoco, packaging, and publishing of our artifact into Artifactory. Next we handle the Acceptance stage, retrieving our artifact and deploying it automatically to a functional test environment in Docker containers. In the final stage, we show how to deploy to a production web engine.
Throughout this workshop, we briefly survey each of these technologies and provide working examples of integrating each of them within Jenkins. Everything is contained within a Linux VM that each participant will have. After the labs, each participant will have their own working Continuous Delivery pipeline.
Users will need a modern laptop with VirtualBox installed and the ability to run images as well as about 20Gig of free disk space.
Important setup required before the workshop:
You will need a laptop for this workshop - the more powerful the better - configured as outlined below.
In this workshop, we use a preconfigured VM which requires Virtualbox to be running on your system. Please see https://github.com/brentlaster/conf/blob/master/rwx2016/JDP-Setup.pdf and follow the directions there. (Note: You do not need to do the part about changing the timezone on the VM since RWX 2016 will be in the EST timezone.)
As noted in the PDF, the VM can be downloaded from: https://s3-us-west-2.amazonaws.com/bclconf/CDPipeline/RWX_2016.ova
Please be aware that this VM is ~6G in size and will require significant time to download. Free space of 20G (prior to the download, to allow for the download itself, running the VM, etc.) is recommended on your system for best performance.
Learn to use the new features of Java 8, 9, and beyond, including lambda expressions, method references, and the streaming API.
Exercises will include using lambdas and streams, refactoring existing code, working with map/filter/reduce, and sampling the new java.time package.
Knowledge of earlier versions of Java is assumed. The exercises will be independent of IDE, but IntelliJ IDEA is recommended.
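As a preview of the kind of code you will write, here is a small sketch touching lambdas, method references, streams, and java.time; the names and the date are illustrative.

import java.time.LocalDate;
import java.time.Month;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Sampler {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("lambda", "stream", "optional", "time");

        // A lambda plus a method reference in a map/filter/collect pipeline.
        List<String> shortWords = words.stream()
                .filter(word -> word.length() <= 6)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(shortWords);  // [LAMBDA, STREAM, TIME]

        // java.time replaces much of the old Date/Calendar API.
        LocalDate workshop = LocalDate.of(2016, Month.DECEMBER, 6);  // hypothetical date
        System.out.println(workshop.plusDays(30).getDayOfWeek());
    }
}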
Declarative programming describes what a program should achieve in terms of a problem domain, rather than describing how to achieve it using a sequence of primitive operations. It has long been considered a powerful tool for minimizing complexity of software systems. In this talk we will discuss how to apply the declarative programming paradigm to modern AJAX-based web applications using intercooler.js.
We will begin with a brief overview of intercooler.js, a library that enables a declarative approach to AJAX requests. We will discuss:
Once we have covered these basics, we will examine a few typical web application UX needs and how they can be addressed using the declarative style:
After this talk, the attendee should have a good grasp of how to build a modern web application using declarative programming techniques with intercooler.js.
As Tech Leaders, we are presented with problems and work to find a way to solve them, usually through technology. In my opinion this is what makes this industry so much fun. Let's face it - we all love challenges. Sometimes, however, the problems we have to solve are hard - really hard. So how do you go about solving really hard problems? That's what this session is about - Heuristics, the art of problem solving. In this session you will learn how to approach problems and also learn techniques for solving them effectively. So put on your thinking cap and get ready to solve some easy, fun, and hard problems.
Agenda:
Bitcoin has roundly entered the public consciousness, but it is limited in its use beyond the specific constraints of the cryptocurrency. Ethereum is a new platform that has enabled developers to innovate in creating their own cryptocurrencies, platforms, smart contracts and more.
This talk will introduce the larger concepts of blockchains and decentralized applications as well as details on how to build running applications on the Ethereum platform.
These ideas and tools will help innovators disrupt organizations, markets, entire industries, and even aspects of society. It sounds like science fiction, but these things are already happening. Come learn how.
We will cover:
Architecture does more than describe the system as it is. It also establishes incentives, cost structures, organizational patterns and a marketplace for ideas upon which various players will innovate. One of the reasons the Web has been so successful is because it does this in a way that encourages a wide participation from varied players due to the nature of the architecture upon which it is built: The Internet.
This talk will walk through the design of the Internet Architecture and how it yields the flexibility to innovate to a wide collection of players, including VC-backed internet startups, college students working out of their rooms, and companies targeting specific types of customers. The choices that have been (and will be) made have enormous implications for how the Internet and Web can be used and evolve, and who controls them.
Come think deeply about one of the most important software architectural designs ever created and why we must protect it.
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
If you're not terrified, you're not paying attention.
Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.
Topics include:
It begins with a vision - an awe-inspiring idea that both excites and motivates you. Then the compromises begin… Budget, schedule, scope, work/life balance; you're forced to cut a corner here and there, and the vision of perfection slips further and further away until the end result is a fragile, fetid shell of your original idea. How can we deal with inevitable compromises while maintaining our integrity as engineers (and pride in our work)?
As in many of his talks, Michael brings a unique perspective to this phenomenon. After nearly two decades of experience both as a software engineer and as a professional magician, he leverages all of his skills to explore this topic in an entertaining and insightful manner. It turns out creating beautiful, perfect code is not very different from creating the perfect card trick.