Brian Sletten

Forward Leaning Software Engineer @ Bosatsu Consulting

Brian Sletten is a liberal arts-educated software engineer with a focus on forward-leaning technologies. His experience has spanned many industries including retail, banking, online games, defense, finance, hospitality and health care. He has a B.S. in Computer Science from the College of William and Mary and lives in Auburn, CA. He focuses on web architecture, resource-oriented computing, social networking, the Semantic Web, AI/ML, data science, 3D graphics, visualization, scalable systems, security consulting and other technologies of the late 20th and early 21st centuries. He is also a rabid reader and devoted foodie, and has excellent taste in music. If pressed, he might tell you about his International Pop Recording career.

Presentations

The Hypertext Transfer Protocol (HTTP) has been in use for decades. Almost as soon as it was initially released, those involved in its design began extending it to meet the needs of evolving interaction styles.
HTTP 1.1 was a huge leap forward, but there were still performance issues that were not resolved until HTTP/2.

Now, we are on the cusp of the biggest changes to date with the introduction of HTTP/3 and QUIC. Developers need to understand what is happening so they can build modern, high performance Web-based systems that benefit from the new capabilities.

You will learn about:

- How HTTP has evolved over time
- What the major innovations and limitations have been along the way
- How HTTP/2 changed Web application and API design
- How HTTP/3 and QUIC will change Web application and API design
- How the co-evolution of TLS 1.3 combines with HTTP/3 and QUIC to
  modernize the secure Web
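
As a small, concrete taste of the protocol evolution covered here, the sketch below asks a server to negotiate HTTP/2 and reports what it actually spoke. It assumes the third-party httpx package (installed with its `http2` extra) and a placeholder URL; HTTP/3 over QUIC typically requires a separate client library such as aioquic.

```python
# Minimal sketch: negotiate HTTP/2 with httpx and inspect the result.
# (httpx and the example URL are assumptions for illustration.)
import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://www.example.com/")
    print(response.http_version)  # "HTTP/2" if negotiated, else "HTTP/1.1"
```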

We have seen how Retrieval Augmented Generation (RAG) systems can help prop up Large Language Models (LLMs) to avoid some of their worst tendencies. But that is just the beginning. The cutting-edge, state-of-the-art systems are multimodal and agentic, involving additional models, tools, and reusable agents to break problems down into separate pieces, transform and aggregate the intermediate outputs, and validate the results before returning them to the user.

Come get introduced to some of the latest and greatest techniques for maximizing the value of your LLM-based systems while minimizing the risk.

We will cover:

  • The LangChain and LlamaIndex Frameworks
  • Naive and Intermediate RAG Systems
  • Multimodal Models (Mixing audio, text, images, and videos)
  • Chatbots
  • Summarization Services
  • Agent Protocols
  • Agent Design Patterns
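
To make the "agentic" idea concrete before the session, here is a conceptual sketch of the basic control loop: the model either requests a tool or returns an answer, and tool results are fed back in until it can respond. `call_llm` and the toy tool registry are hypothetical stand-ins, not any particular framework's API; the stub fakes a model only so the loop runs end to end.

```python
# Conceptual agent loop. A real system would call an LLM API in call_llm.
TOOLS = {"search": lambda q: f"(toy results for {q!r})"}  # toy tool registry

def call_llm(messages):  # hypothetical stand-in for a chat-completion call
    if any(m["role"] == "tool" for m in messages):
        return {"content": f"Answer based on {messages[-1]['content']}"}
    return {"tool": "search", "args": messages[0]["content"]}

def run_agent(question, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" in reply:                        # model wants a tool call
            result = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]                # final (validated) answer

print(run_agent("What does QUIC run over?"))
```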

It's not just you. Everyone is basically thinking the same thing: When did this happen?

We've gone from slow but steady material advances in machine learning to a seeming explosion and ubiquity of AI-based features, products, and solutions. Even more, we're all expected to know how to adopt, use, and think about all of these magical new capabilities.

Equal parts amazing and terrifying, what you need to know about these so-called “AI” solutions is much easier to understand and far less magical than it may seem. This is your chance to catch up with the future and figure out what it means for you.

In this two-part presentation, we will cover why this time it is different, except where it isn't. I won't assume much background and won't discuss much math.

A brief history of AI
Machine Learning
Deep Learning
Deep Reinforcement Learning
The Rise of Generative AI
Large Language Models and RAG
Multimodal Systems
Bias, Costs, and Environmental Impacts
AI Reality Check

At the end of these sessions, you will be conversant with the major topics and understand better what to expect and where to spend your time in learning more.

Application Programming Interfaces (APIs) are, by definition, directed at software developers. They should, therefore, strive to be useful and easy to use for developers. However, when they engage design elements from the Web, they can be useful in much larger ways than simply serializing states in JSON.

There is no right or perfect API design. There are, however, elements and choices that induce certain properties. This workshop will walk you through various approaches to help you find the developer experience and long-term strategies that work for you, your customers and your organization.

We will cover:

The Web Architecture as the basis of our APIs
The REST Architectural Style and its motivations
The Richardson Maturity Model as a way of discussing design choices and induced properties
The implications of content negotiation and representation choices such as JSON or JSON-LD
The emergence of metadata approaches to describing and using APIs such as OpenAPI and Hydra
Security considerations
Client technologies
API Management approaches
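
To preview one of those design levers, here is a sketch of content negotiation in action: the same resource offered in two representations, selected via the Accept header. The httpx package and the endpoint URL are illustrative assumptions, not a real API.

```python
# Sketch: one resource, two representations, chosen by the Accept header.
import httpx

for media_type in ("application/json", "application/ld+json"):
    response = httpx.get(
        "https://api.example.com/people/42",      # hypothetical resource
        headers={"Accept": media_type},
    )
    print(media_type, "->", response.headers.get("Content-Type"))
```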

The concept of an API is straightforward enough, but the process of turning the individual endpoints into a collection of value-adding organizational resources is not something that gets a lot of attention. In this talk, we will discuss the various individual, team, and organizational choices that impact the development, planning, testing, standardization, operationalization, and evolution of consistent and compatible APIs.

We will cover API:

Technology choice
Design style
Security
Testing
Monitoring
Scaling
Design Patterns

The Internet works as it was designed. Occasionally new uses, new technologies, and new scenarios confound those designs and force us to evolve. Fortunately, the architecture allows this quite easily, but where and how to effect change is not always obvious.

For those who wish “Full Stack Engineering” to be a more accurate view of their background than simply developing front AND back end systems, this talk will be a comprehensive and illuminating discussion about how the designs of the 1960s have evolved as an architecture and updated collection of protocols and standards.

One of the nice operational features of the REST architectural style as an approach to API design is that it allows for separate evolution of the client and server. Depending on the design choices a team makes, however, you may be putting a higher burden on your clients than you intend when you introduce breaking changes.

By taking advantage of the capabilities of OpenRewrite, we can start to manage the process of independent evolution while minimizing the impact. Code migration and refactoring can be used to transition existing clients away from older or deprecated APIs and toward new versions with less effort than trying to do it by hand.

In this talk we will focus on:

Managing API lifecycle changes by automating the migration from deprecated to supported APIs.
Discussing API evolution strategies and when they require assisted refactoring and when they don’t.
Integrating OpenRewrite into API-first development to ensure client code stays up to date.

Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs and about half are design flaws. A significant number of the bugs can be found through automated testing tools, which frees you up to focus on the more pernicious design issues.

In addition to detecting the presence of common bugs, however, we can also imagine automating the application of corrective refactoring. In this talk, I will discuss using OpenRewrite to fix common security issues and keep them from coming back.

In this talk we will focus on:

Using OpenRewrite to automatically identify and fix known security vulnerabilities.
Integrating security scans with OpenRewrite for continuous improvement.
Freeing up your time for larger concerns by automatically handling the pedestrian but time-consuming security bugs.

On the one hand, Machine Learning (ML) and AI systems are just more software and can be treated as such in our development efforts. On the other hand, they behave very differently, and our capacity to test, verify, validate, and scale them requires a different set of perspectives and skills.

This presentation will walk you through some of these unexpected differences and how to plan for them. No specific background in ML/AI is required, but you are encouraged to be generally aware of these fields. The AI Crash Course would be a good start.

We will cover:

Matching Capabilities to Needs
Performance Tuning
Vector Databases
Testing Strategies
MLOps/AIOps Techniques
Evolving these Systems Over Time

If you ask the typical technologist how to build a secure system, they will include encryption in the solution space. While this is a crucial security feature, in and of itself, it is an insufficient part of the plan. Additionally, there are a hundred ways it could go wrong. How do you know if you're doing it right? How do you know if you're getting the protections you expect?

Encryption isn't a single thing. It is a collection of tools combined to solve problems of secrecy, authentication, integrity, and more. Sometimes those tools are deprecated because they no longer provide the protections that they once did. Technology changes. Attacks change. Who in your organization is tracking and validating your encryption strategy? How are quantum computing advancements going to change the game? No background will be assumed and not much math will be shown.
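
As one tiny example of "a collection of tools": authenticated encryption gives you secrecy and integrity together. A minimal sketch, assuming the third-party cryptography package:

```python
# Fernet (from the "cryptography" package) bundles encryption with an
# integrity check: tampered ciphertext fails to decrypt rather than
# silently producing garbage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # secrecy lives or dies with this key
f = Fernet(key)
token = f.encrypt(b"attack at dawn")
print(f.decrypt(token))              # raises InvalidToken if tampered with
```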

Everyone understands the Web as a platform, but not as many can see the thread that connects the Web to APIs to linked, interoperable data models. In this design workshop, we will investigate the properties of the Web and how they can be applied to data, documents, services and concepts as the basis of a truly spectacular vision for information interchange.

We will start with the basics and build up to flexible, evolvable data models and everything in-between.

Topics will include

  • REST and Hypermedia
  • Linked Data
  • JSON-LD
  • SPARQL
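
For a taste of the last topic, here is a hedged sketch of a SPARQL query over a small RDF graph using the rdflib package (an illustrative choice, not the workshop's required tooling; the data is made up):

```python
# Parse a few Turtle triples and query them with SPARQL via rdflib.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/alice> foaf:name "Alice" ;
                               foaf:knows <http://example.org/bob> .
""", format="turtle")

for row in g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE { ?person foaf:name ?name }
"""):
    print(row.name)   # -> Alice
```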

Machine Learning is all the rage, but many developers have no idea what it is, what they can expect from it or how to start to get into this huge and rapidly-changing field. The ideas draw from the fields of Artificial Intelligence, Numerical Analysis, Statistics and more. These days, you'll generally have to be a CUDA-wielding Python developer to boot. This workshop will gently introduce you to the ideas and tools, show you several working examples and help you build a plan for diving deeper into this exciting new field.

We will cover:

  • The differences between data science, AI and machine learning
  • The Five Tribes of Machine Learning (as defined by Pedro Domingos)
  • Walkthroughs of some of the main algorithms
  • Examples in Java, R and Python
  • Tools such as TensorFlow and PyTorch
  • The impact of GPUs on machine learning
  • Stories about how companies are being successful with machine learning
  • A discussion about the likely impacts of machine learning on the job market and society
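
To set expectations for "working examples": something as small as the sketch below already exercises the core ideas of training and held-out evaluation. It assumes scikit-learn as an illustrative dependency, not the workshop's exact material.

```python
# A "hello world" of supervised learning: fit a classifier on the iris
# dataset and measure accuracy on data the model has never seen.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```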

In the last 30 years, our industry has been upended by advancements that unlock previously unimaginable capabilities. It still seems like there is far too much failure and not enough success in IT systems though. To be successful in the 21st Century, you will need to understand where we are and where we are going. It is a complex amalgamation of developments in hardware, computer languages, architectures and how we manage information. Very few people understand all of the pieces and how they connect.

In this talk we will cover how technology changes are enabling longer term capture of business value, modernization of legacy systems, resilience in the face of increased mobile user bases, IT sovereignty and distributed, layered, heterogeneous architectures.

The premise of Nicholas Carr's “Does IT Matter?” was a question: if everyone uses the same tools, processes, products, etc., is there any competitive advantage to be had from the average IT organization?

NetKernel represents a fundamentally different approach to building systems. It takes what we like about Unix, REST and SOA and mixes it together. It completely changes everything while allowing you to reuse existing code, services and libraries. Not only can it make building the kinds of systems you are building today easier, it does it more efficiently, with less code and a far more scalable runway to allow you to take advantage of the multi-core, multi-CPU hardware that is coming our way.

This workshop will be a deeper dive into Resource-Oriented Computing with NetKernel. We will explore:

  • the resource model as it applies to general computing
  • the intersection of REST and the resource model
  • scaling your software without really trying
  • interacting with relational databases
  • orchestration around different service types
  • logically-layering applications for flexibility
  • advanced caching strategies
  • leveraging dynamic languages with the resource model
  • instant cloud support

It is rare that a technology comes along that is both revolutionary and lets you reuse what you already know. All it takes is a bit of different thinking and a little courage to try something new.

Our industry is in the process of changing its understanding of computational systems. Extreme computational demand, and the energy required to meet it, now defines modern data centers and runtime platforms. How many calculations can we produce, and at what energy cost? The limitations are a confluence of materials science, system design complexity, and the fundamental laws of physics.

It's about to get weird as we enter the world of quantum and biological systems.

We started with coprocessors, FPGAs, ASICs, GPUs, and DSPs as lower-power, high-performance custom hardware. We're now seeing the emergence of neural processing units and tensor processing units as well.

But we are on the cusp of enormous shifts in what's possible computationally with the advent of quantum and biological systems. Not every computational element is suitable for every problem, but quantum computing will make some problems impossibly fast to handle. Artificial biological brains will be able to perform computations, like the human brain, with the power budget of a light bulb.

Come hear how things are already in the process of changing as well as what is likely to come next.

At the intersection of Big Data, Data Science and Data Visualization lives a programming language that ranks higher on the TIOBE index than Scheme, Fortran, Scala, Prolog, Erlang, Haskell, Lisp and Clojure. The R language and environment is an open source platform that has quickly become THE language for analyzing data and visualizing the results. This workshop will introduce you to the language, the environment and how it is being used with Big Data and Linked Data.

In the first part of the workshop, we will learn:

  • History of R
  • Language basics, data types and main structures
  • Some statistics fundamentals

At the intersection of Big Data, Data Science and Data Visualization lives a programming language that ranks higher on the TIOBE index than Scheme, Fortran, Scala, Prolog, Erlang, Haskell, Lisp and Clojure. The R language and environment is an open source platform that has quickly become THE language for analyzing data and visualizing the results. This workshop will introduce you to the language, the environment and how it is being used with Big Data and Linked Data.

In the second part of the workshop, we will learn about:

  • R Graphics
  • R and Big Data
  • R and Linked Data

Large Language Models (LLMs) such as ChatGPT and Llama have impressed us with what they can do. They have also horrified us with what they actually do when they are employed with no protection: hallucinations, stale knowledge bases, no conceptual basis for reasoning, and a capacity for toxic and inappropriate content generation. Rather than avoid them altogether or risk legal liability or brand damage, we can put some guardrails around them to benefit from their best traits without fearing their worst.

Retrieval Augmented Generation (RAG) systems augment the generation process to make it behave more to our liking. Come hear what you can do to benefit from AI systems without fearing them.

We will cover examples using LangChain and LlamaIndex, two open source frameworks for working with LLMs and creating RAG infrastructure.

We will cover:

Introduction to LLMs
Risks and Limitations
Basic RAG Systems
Embeddings
Vector Databases
Prompt Engineering
Testing and Validating LLMs and RAG Systems
Advanced Techniques
AI as Judge
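
As a preview of the "Basic RAG Systems" material, here is a deliberately naive, self-contained sketch of the round trip. `embed`, `similarity`, and `complete` are toy stand-ins; a real system swaps in an embedding model, a vector database, and an actual LLM call for each.

```python
# Naive RAG in miniature: score chunks against the question, stuff the
# best one into the prompt, and "generate".
def embed(text):                      # toy bag-of-words "embedding"
    words = text.lower().split()
    return {w: words.count(w) for w in words}

def similarity(a, b):                 # toy overlap score
    return sum(count * b.get(word, 0) for word, count in a.items())

def complete(prompt):                 # stand-in for an LLM call
    return f"[model answer grounded in]\n{prompt}"

chunks = ["QUIC runs over UDP.", "HTTP/1.1 added persistent connections."]
question = "What does QUIC run over?"
best = max(chunks, key=lambda c: similarity(embed(question), embed(c)))
print(complete(f"Context: {best}\nQuestion: {question}"))
```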

Spring has always been defined by its lightweight core. While there has been an overwhelming explosion in the external projects and protocols it integrates seamlessly with, it has also evolved internally to meet the needs of modern development requirements.

One of the biggest changes in the last several years has been the emergence of Reactive Spring, an attempt to embrace the idea of Reactive Systems in the Spring ecosystem. This is a vision of responsive, resilient, elastic systems. Unfortunately, code alone cannot solve the problems so this is a case where software and architecture meet.

You will learn about:

- The Reactive System vision
- How Spring absorbed these ideas without complicating or
  eliminating the more conventional styles
- How to build, test and consume Reactive Spring applications
- How to architect entire Reactive chains of interacting systems

Many people are drawn to the ideas of REST but aren't sure how to take the next steps. This workshop will help get you to a comfortable place by introducing the concepts and walking through a series of exercises designing REST APIs from a variety of domains.

We will break up into teams and tackle the various aspects of a solid, stable, evolvable REST API design. This will not be a tutorial in particular REST implementations (Jersey, Restlet, etc.). The ideas will transcend specific technologies although we will talk about some particular choices.

Many people are drawn to the ideas of REST but aren't sure how to take the next steps. This workshop will help get you to a comfortable place by introducing the concepts and walking through a series of exercises designing REST APIs from a variety of domains.

This workshop will span two session periods but is one effort. Please plan on coming to both.

We will break up into teams and tackle the various aspects of a solid, stable, evolvable REST API design. This will not be a tutorial in particular REST implementations (Jersey, Restlet, etc.). The ideas will transcend specific technologies although we will talk about some particular choices.

Many people are drawn to the ideas of REST but aren't sure how to take the next steps. This workshop will help get you to a comfortable place by walking through a series of exercises. Bring your computers and bring your brains; we will be designing, building and testing REST APIs from a variety of domains.

This workshop will span two session periods but is one effort. Please plan on coming to both. Please bring a computer with a late model Java VM on it and a text editor. curl will also be useful. I will provide the remaining bits.

We will break up into teams and tackle the various aspects of a solid, stable, evolvable REST API design. This will not be a tutorial in particular REST implementations (Jersey, Restlet, etc.) but if you have one you are familiar with, you are free to use that for the code portion of the solutions. I will provide a NetKernel-based framework as it is a self-contained, REST-savvy environment that is easy to get going with. The ideas will largely transcend specific implementations though.

Each of these technologies is transformative on its own. Together, they are a compelling mix of the speed of C++, the safety and portability of Java and a modern, expressive and readable syntax. Come learn why Rust and WebAssembly are two great technologies that are better together. This combination is going to impact your career whether you develop on the front end or the backend on the desktop or in the cloud.

We will not assume knowledge of Rust and will introduce the major features of this modern systems programming language. This will include:

  • The Rust memory model
  • Traits and Generics
  • Functional programming
  • The Standard Library

We will also cover the WebAssembly platform including:

  • The stack architecture
  • The WAST Text format
  • Interacting with Rust
  • Memories and Tables
  • Dynamic-linking
  • The WebAssembly System Interface (WASI)

Everyone knows security is important. Very few organizations have a robust and comprehensive sense of whose responsibility it is, however. The consequence is that they have duct-taped systems and a Policy of Hope that there will be no issues. (Spoiler: there will be.)

We will review the various roles that most organizations need to fill but probably are not currently filling. We will also focus on how the roles overlap and what should and can be expected from each of them.

Come gain insight into how an organization can start with what you have and move in the direction of a strengthened security posture with tangible and practical guidance. You will find both direction and means of measurement to make sure you neither over- nor undershoot what is required.

There is plenty of discussion about how machine learning will be applied to cybersecurity initiatives, but there is precious little conversation about the actual vulnerabilities of these systems themselves. Fortunately, there are a handful of research groups doing the work to assess the threats we face in systematizing data-driven systems. In this session, I will introduce you to the main concerns and how you can start to think about protecting against them.

We will mostly focus on the research findings of the Berryville Institute of Machine Learning. They have conducted a survey of the literature and have identified a taxonomy of the most common kinds of attacks including:

  • Adversarial examples
  • Data poisoning
  • Manipulation of online systems
  • Transfer learning attacks
  • Breaching data confidentiality
  • Undermining data trust

This will be a security-focused discussion. Only a basic understanding of machine learning will be required.
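
To make the first attack category concrete: an adversarial example is a tiny, targeted perturbation. Against a toy linear scorer, the whole idea fits in a few lines; the weights and inputs below are illustrative numbers, not from any real model.

```python
# FGSM-flavored toy: for a linear score w @ x, the gradient is just w, so
# stepping the input against sign(w) flips the classification with a
# small, bounded perturbation.
import numpy as np

w = np.array([1.0, -2.0, 0.5])      # toy model weights
x = np.array([0.2, 0.1, 0.4])       # input classified by sign(w @ x)
eps = 0.3
x_adv = x - eps * np.sign(w)        # small step against the gradient
print(np.sign(w @ x), "->", np.sign(w @ x_adv))   # 1.0 -> -1.0
```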

The Web is changing faster than you can imagine and it is going to continue to do so. Rather than starting over from scratch each time, it builds on what has succeeded already. Webs of Documents are giving way to machine-processable Webs of Information. We no longer care about data containers; we only care about data and how it connects to what we already know.

Roughly 25% of the Web is semantically marked up now, and the search engines are indexing this information, enriching their knowledge graphs and rewarding you for providing it.

In the past we had to try to convince developers to adopt new data models, storage engines, encoding schemes, etc. Now we no longer have to worry about that. Rich, reusable interface elements like Web Components can be built using Semantic Web technologies in ways that intermediate developers don’t have to understand but end users can still benefit from. Embedded JSON-LD now allows disparate organizations to communicate complex data sets of arbitrary information through documents without collaboration.
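
For instance, a page can carry a machine-readable description of itself in a small embedded JSON-LD island like the one this sketch prints. The values are hypothetical; the @context/@type shape is standard schema.org usage.

```python
# Emit an embedded JSON-LD block that search engines and other agents can
# consume without any bespoke parsing or prior coordination.
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Alice Example",                      # hypothetical values
    "knows": {"@type": "Person", "name": "Bob Example"},
}
print(f'<script type="application/ld+json">{json.dumps(person, indent=2)}</script>')
```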

Perhaps the concepts of the Semantic Web initiative are new to you. Or perhaps you have been hearing for years how great technologies like RDF, SPARQL, SKOS and OWL are and have yet to see anything real come out of it.

Whether you are jazzed or jaded, this workshop will blow your mind and provide you with the understanding of a technological shift that is already upon us.

In this workshop, we will:

Explain the Web and Web architecture at a deeper level
Apply Web and Semantic Web technologies in the Enterprise and make them work together
Integrate structured and unstructured information
Create good, long-lived logical names (URIs) for information and services
Use the Resource Description Framework (RDF) to integrate documents, services and databases
Use popular RDF vocabularies such as Dublin Core, FOAF
Query RDF and non-RDF datastores with the SPARQL query language
Encode data in documents using RDFa and JSON-LD
Create self-describing, semantic Web Components
Model and use inferencing with the Web Ontology Language (OWL)

Programmers need to perform tasks outside of their development environments. Unfortunately, many seem unaware of the superpowers that command-line tools can afford them with a bit of effort to learn. This workshop/dojo will be a general survey of the classics as well as newer replacements that will make your life easier once you adopt them into your tool belt.

We will cover tools to help with pattern matching, finding things, processing text, working with remote systems, and much more.

Objects are fundamental to most modern programming languages, but many developers have not had the opportunity to explore OO design from the ground up. In this session we will introduce the fundamental concepts you need to make better choices in your designs to help your software absorb business and technical change.

This will be a fundamental but challenging discussion suitable for people of various backgrounds.

Selecting a programming language introduces opportunities and limitations. When you have access to objects, there are many choices you can make, but some are less obvious than others. It's helpful to be reminded of what the various techniques provide, how to overcome language limitations, and how encapsulation, modularity, shared types, shared behavior, and specialization allow you to build flexible and extensible software.

We will focus on Java but will compare it to JavaScript, TypeScript, and Python.
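
As a flavor of that comparison, here is how the Python side might express shared type, shared behavior, and specialization in one small sketch (an illustrative example, not the session's exact material):

```python
# An abstract base class provides a shared type (Shape), shared behavior
# (describe), and a specialization point (area) in a handful of lines.
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

    def describe(self) -> str:        # shared behavior, inherited as-is
        return f"{type(self).__name__} with area {self.area():.2f}"

class Circle(Shape):                  # specialization via subclassing
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.14159 * self.radius ** 2

print(Circle(2).describe())           # -> "Circle with area 12.57"
```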

The concept of doing machine learning in JavaScript in the browser seems ludicrous at first blush. The reality is, however, it makes all the sense in the world. The question is how to do so performantly.

We will introduce you to a variety of use cases that show why this makes sense and how Google has managed to make it a reality through a combination of WebGL, WebAssembly, CUDA, and more.

We will cover:

  • Motivations for this crazy idea in the first place
  • How to achieve portable performance
  • Building applications that reuse existing models
  • Caching strategies for large models
  • Training models in the browser
  • Combining browser capabilities such as images and video with machine learning algorithms

Part of the mystery of large language models (LLMs) like ChatGPT is that they seem to have arrived on the verge of sentience out of nowhere. The demonstrations are truly mind-boggling, but they mask the reality of what is happening and how it is being accomplished. Unfortunately, the purveyors of these technologies are not incentivized to focus on the limits they face and how far away we actually are (Narrator: VERY!) from artificial intelligence.

In order to better understand where we are, we will explore where we have been. This process will help you decide what is a good use of what is possible even if it isn't in any way “intelligent”.

In this workshop, we will introduce and look at the application of a variety of technologies and tools that have led up to where we find ourselves now.

This will include:

  • Statistics
  • Machine Learning
  • Deep Learning
  • NLP
  • Generative AI
  • Large Language Models

The code samples will be presented in Python, but no proficiency will be expected and everything significant will be explained.
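
To calibrate expectations: the statistical ancestor of today's LLMs fits in a dozen lines. This toy bigram model "predicts" the next word from raw counts; the corpus is made up for illustration, and everything later in the workshop is, at heart, a vastly better-engineered version of "predict the next token from context."

```python
# A bigram language model: predict the next word from counts alone.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    return bigrams[word].most_common(1)[0][0]   # most frequent successor

print(predict("the"))   # -> "cat" (seen twice, vs. "mat" once)
```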

There is no question JavaScript has become one of the most popular and widely-used programming languages. Unfortunately, popularity doesn't necessarily translate to being easy to maintain or always appropriate. Large code bases become difficult to reason about due to JavaScript's dynamic nature and flexible development style.

As a result of their own internal struggles with large JavaScript projects, Microsoft tasked Anders Hejlsberg of Delphi and C# fame to design a solution to the problem. The result is an incredibly useful, fun and effective approach to improving JavaScript development without impacting how you deploy your projects.

TypeScript is a superset of JavaScript that brings static typing, modern JavaScript features that may not yet be supported in your environment and improved tooling and documentation. Surprisingly, the results are then transpiled down to whatever flavor of JavaScript you need for your runtime environment.

In this workshop, we will introduce to you:

  • What TypeScript brings to the table and why you should care
  • The new language features available
  • Approaches to gradually adopt TypeScript in your project
  • How runtime errors become compile time errors with types and static checking
  • How your build processes do and don't change
  • How to set up a more fun and productive (and hopefully less frustrating) development environment

Somewhere between the positions of “AI is going to change everything” and “AI is currently an overhyped means of propping up Silicon Valley unicorn valuations” lives a useful reality: AI research is producing tools that can be exploited safely, meaningfully, and responsibly. They can save you money, speed up delivery, and create new opportunities that might not otherwise exist. The trick is understanding what they can do well and what is a big, red flag.

In this talk I will lay out a framework for considering a range of technologies that fall under the umbrella of AI and highlight the costs, benefits, and risks to help you make better choices about what to pursue and what to avoid.

If you are getting tired of the appearance of new types of databases… too bad. We are increasingly relying on a variety of data storage and retrieval systems for specific purposes. Data does not have a single shape, and indexing strategies that work for one are not necessarily good fits for others. So after hierarchical, relational, object, graph, column-oriented, document, temporal, append-only, and everything else, get ready for Vector Databases to assist in the systematization of machine learning systems.

This will be an overview of the benefits of vector databases as well as an introduction to the major players.

We will focus on open source versus commercial players, hosted versus local deployments, and the attempts to add vector search capabilities to existing storage systems.

We will cover:

  • A brief overview of vectors
  • Why vectors are so important to machine learning and data-driven systems
  • Overview of the offerings
  • Adding vector search to other systems
  • Sample use cases shown with one of the key open source engines
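
The core operation these engines industrialize is small enough to sketch with NumPy alone; real vector databases add approximate indexes (HNSW and friends) so it scales past the brute force shown here. The "embeddings" below are random stand-ins for illustration.

```python
# Brute-force nearest-neighbor search over normalized embeddings: cosine
# similarity reduces to a dot product, and the best matches are the argmax.
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 384))             # toy stand-ins for embeddings
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

query = docs[42] + 0.05 * rng.normal(size=384)  # "something like" document 42
query /= np.linalg.norm(query)

scores = docs @ query                           # cosine similarity per document
print(np.argsort(scores)[::-1][:3])             # top 3 matches; 42 ranks first
```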

If you're not terrified, you're not paying attention.

Publishing information on the Web does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.

Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources. We will also discuss the assault on encryption, web security features and emerging technologies that will hopefully help strengthen our ability to protect what we hold dear.

Topics include:

  • Security Engineering
  • Software Security
  • Encryption
  • Authentication and Authorization Mechanisms
  • Emerging Web Security Technologies

What happens if Web applications become super fast?
What if the ability to write code once but run it on lots of different platforms was true again?
What if Desktops are no longer interesting because you can do everything in a browser?
What if JavaScript wasn't your only language choice?

These are all starting to happen now that this W3C standard is supported widely across all major browser vendors, Node and more. There's never been a better time to dig into the future that is playing out faster than most people realize.

WebAssembly is emerging as an exciting vision for web applications that run at native speeds by using a size and load-time efficient, compiled binary format. Anything from computationally intensive business applications to fully rendered 3D video games will benefit from the mix of speed with other Web-oriented technologies. We'll let you know what is coming and how you'll benefit from it.

We will cover:

  • The History of WebAssembly
  • The JavaScript API
  • The Stack Machine
  • Shared Memory
  • Dynamic Libraries
  • The Text Format
  • Building Web Applications using C, C++, Go, Rust, Kotlin and more
  • Converting existing code and libraries into WebAssembly
  • Running WebAssembly in Node

This is a hands-on workshop covering a truly mind-blowing next step in the evolution of the Web. Don't get left behind.

We all basically understand what it means to build Internet-aware applications. But somewhere at the intersection of architecture, application development, protocols and networking, things get a little fuzzy. One of the most widely-used tools in the networking world doesn’t see as much use as it could in the software development world.

We will cover:

  • The Internet’s Architecture and how it delegates functionality and evolves
  • Networking Basics
  • A packet-oriented walk through the Web stack
  • How to capture, visualize and analyze application traffic
  • How Wireshark can be used to identify performance and bandwidth issues
  • How packet analysis can help identify security threats
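
As a tiny preview of what programmatic packet capture looks like, here is a sketch assuming the third-party scapy package (Wireshark's own tshark can be scripted similarly); capturing requires root/administrator privileges.

```python
# Grab five packets headed to or from HTTPS and print a one-line protocol
# summary of each, the same view Wireshark gives you interactively.
from scapy.all import sniff

def show(packet):
    print(packet.summary())

sniff(filter="tcp port 443", prn=show, count=5)
```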

Come see how you can improve your development skills by understanding more about networking.