Brent Laster

Global author, trainer and founder of Tech Skills Transformations LLC

Hi, I'm Brent Laster - a global trainer and book author, experienced corporate technology developer and leader, and founder and president of Tech Skills Transformations LLC. I've been working with and presenting at NFJS events for many years now, and it's always exciting and interesting.

Through my decades in programming and management, I've always tried to make time to learn and develop both technical and leadership skills and to share them with others. Regardless of the topic or technology, my belief is that there is no substitute for the excitement and sense of potential that come from providing others with the knowledge they need to accomplish their goals.

In my spare time, I hang out with my wife Anne-Marie, our four children, and two small dogs in Cary, North Carolina, where I design and conduct trainings and write books. You can find me on LinkedIn (linkedin.com/in/brentlaster), Twitter (@brentclaster), or through my company's website at www.getskillsnow.com.

Presentations

AI Security for Developers and Practitioners - full day

Designing, Defending, and Deploying Secure AI Systems

This full-day, hands-on workshop equips developers, architects, and technical leaders with the knowledge and skills to secure AI systems end-to-end — from model interaction to production deployment. Participants learn how to recognize and mitigate AI-specific threats such as prompt injection, data leakage, model exfiltration, and unsafe tool execution.

Through a series of focused labs, attendees build, test, and harden AI agents and Model Context Protocol (MCP) services using modern defensive strategies, including guardrails, policy enforcement, authentication, auditing, and adversarial testing.

The training emphasizes real-world implementation over theory, using preconfigured environments in GitHub Codespaces for instant, reproducible results. By the end of the day, participants will have created a working secure AI pipeline that demonstrates best practices for trustworthy AI operations and resilient agent architectures.

The course blends short conceptual discussions with deep, hands-on practice across eight structured labs, each focusing on a key area of AI security. Labs can be completed in sequence within GitHub Codespaces, requiring no local setup.
1. Lab 1 – Mapping AI Security Risks
Identify the unique attack surfaces of AI systems, including LLMs, RAG pipelines, and agents. Learn how to perform a structured threat model and pinpoint where vulnerabilities typically occur.
2. Lab 2 – Securing Prompts and Contexts
Implement defensive prompting, context isolation, and sanitization to mitigate prompt injection, hidden instructions, and data leakage risks.
3. Lab 3 – Implementing Guardrails
Use open-source frameworks (e.g., Guardrails.ai, LlamaGuard) to validate LLM outputs, enforce content policies, and intercept unsafe completions before delivery.
4. Lab 4 – Hardening MCP Servers and Tools
Configure FastMCP servers with authentication, scoped tokens, and restricted tool manifests. Examine how to isolate and monitor server–client interactions to prevent privilege escalation.
5. Lab 5 – Auditing and Observability for Agents
Integrate structured logging, trace identifiers, and telemetry into AI pipelines. Learn how to monitor for suspicious tool calls and enforce explainability through audit trails.
6. Lab 6 – Adversarial Testing and Red-Teaming
Simulate common AI attacks—prompt injection, model hijacking, and context poisoning—and apply mitigation patterns using controlled experiments.
7. Lab 7 – Policy-Driven Governance
Introduce a “security-as-code” approach using policy files that define allowed tools, query types, and data scopes. Enforce runtime governance directly within your agent’s workflow.
8. Lab 8 – Secure Deployment and Lifecycle Management
Apply DevSecOps practices to containerize, sign, and deploy AI systems safely. Incorporate secrets management, vulnerability scanning, and compliance checks before release.
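
The “security-as-code” idea from Lab 7 can be sketched in a few lines of Python. The policy schema, tool names, and blocked patterns below are purely illustrative assumptions, not the course materials — the point is only the shape of a runtime check an agent consults before every tool call:

```python
# Illustrative "security-as-code" sketch: a declarative policy describes which
# tools an agent may call and with what limits, and the agent checks it at
# runtime. All names and rules here are hypothetical examples.

POLICY = {
    "allowed_tools": {
        "search_docs": {"max_query_len": 200},
        "get_weather": {"max_query_len": 50},
    },
    "blocked_substrings": ["DROP TABLE", "rm -rf"],
}

def check_tool_call(policy: dict, tool: str, query: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed tool invocation."""
    rules = policy["allowed_tools"].get(tool)
    if rules is None:
        return False, f"tool '{tool}' is not in the allow-list"
    if len(query) > rules["max_query_len"]:
        return False, "query exceeds the allowed length for this tool"
    for bad in policy["blocked_substrings"]:
        if bad.lower() in query.lower():
            return False, f"query contains blocked pattern '{bad}'"
    return True, "ok"
```

In practice the policy would live in a versioned file (YAML or JSON) alongside the code, so security rules are reviewed and deployed like any other change.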

Outcome:
Participants finish the day with a secure, auditable, and policy-controlled AI system built from the ground up. They leave with hands-on experience defending agents, MCP servers, and model workflows—plus practical guidance for integrating security-by-design principles into future projects.

AI 3-in-1: Agents, RAG and Local Models

Building out an AI agent that uses RAG and runs locally

Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this half-day workshop, author, trainer, and experienced technologist Brent Laster will help you do both! We’ll explain why and how to run AI models locally, cover the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama. You'll follow along with hands-on labs and produce your own instance running in a GitHub Codespace.

In this workshop, we'll walk you through what it means to run models locally, how to interact with them, and how to use them as the brain for an agent. Then, we'll enable them to access and use data from a PDF via retrieval-augmented generation (RAG) to make the results more relevant and meaningful. And you'll do all of this hands-on in a ready-made environment with no extra installs required.

No prior experience with these technologies is needed, although we do assume a basic understanding of LLMs.
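
To give a flavor of the local-model piece, here is a minimal sketch of talking to a locally running Ollama server over its REST API. It assumes Ollama is serving on its default port 11434 and that a model has already been pulled (e.g. `ollama pull llama3.2`); the model name is just an example:

```python
# Minimal sketch of querying a local Ollama server via its HTTP API.
# Assumes `ollama serve` is running on the default port and the model
# named below has been pulled beforehand.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama for one JSON reply instead of chunked output
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With a local server running, something like
#   ask_local_model("In one sentence, what is RAG?")
# returns the model's text reply.
```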

AI for App Development

Building and deploying AI Apps that leverage agents, MCP and RAG

This hands-on workshop teaches developers how to build modern AI-driven applications using local LLMs, intelligent agents, and standardized tool protocols. Participants learn to move beyond simple prompting to architect production-ready AI systems that integrate reasoning, retrieval, and external data sources. The course emphasizes real-world implementation through step-by-step labs in GitHub Codespaces, requiring no local installs.

By the end of the workshop, attendees will have created functional AI agents that use Ollama for local model execution, FastMCP for standardized tool communication, and RAG (Retrieval-Augmented Generation) for context-grounded responses—all deployable to Hugging Face Spaces.

The course begins with an explanation of running models locally with tools like Ollama and LangChain. Participants then advance progressively through seven structured labs covering the following topics:

1. Running Models Locally: Using Ollama to pull, serve, and query models such as llama3.2 via CLI, API, and Python integration.
2. Creating Agents: Building a reasoning agent that uses the Thought → Action → Observation loop to call weather APIs.
3. Exploring MCP (Model Context Protocol): Implementing standardized tool discovery and invocation through a FastMCP server-client setup.
4. Vector Databases: Indexing and searching local data with ChromaDB to enable semantic search and similarity matching.
5. RAG with Agents: Combining retrieval and reasoning so agents can access relevant context from external data sources.
6. Streamlit Web Application: Wrapping the RAG agent in a modern web UI using Streamlit, complete with a memory dashboard, real-time status, and visual feedback.
7. Deploying to Hugging Face Spaces: Publishing the Streamlit app to a live, shareable web environment, learning how to authenticate, push, and auto-deploy projects for public access.
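
The Thought → Action → Observation loop from the agents lab can be made concrete in a small sketch. A real agent would call an LLM at each step; here a scripted stand-in "model" supplies the steps so the control flow itself is visible (the tool name, responses, and parsing format are illustrative assumptions):

```python
# Toy sketch of a Thought -> Action -> Observation agent loop.
# fake_model stands in for an LLM call; TOOLS is a hypothetical tool registry.
import re

def fake_model(history: str) -> str:
    # Stand-in for an LLM: answers after it has seen one tool observation.
    if "Observation:" in history:
        return "Final Answer: It is 72F in Raleigh."
    return 'Thought: I need the weather.\nAction: get_weather("Raleigh")'

TOOLS = {"get_weather": lambda city: f"72F and sunny in {city}"}

def run_agent(question: str, model=fake_model, max_steps: int = 5) -> str:
    history = f"Question: {question}"
    for _ in range(max_steps):
        reply = model(history)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        match = re.search(r'Action: (\w+)\("([^"]*)"\)', reply)
        if not match:
            break
        tool, arg = match.groups()
        observation = TOOLS[tool](arg)  # execute the chosen tool
        history += f"\n{reply}\nObservation: {observation}"
    return "No answer found."
```

Swapping `fake_model` for a real call to a local Ollama model turns this skeleton into a working agent.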

Each lab reinforces the previous one, culminating in a fully functional, classification-driven RAG agent that demonstrates multi-domain reasoning, semantic search, and modular design.
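
The semantic search underpinning the vector-database and RAG labs boils down to ranking documents by vector similarity. The sketch below illustrates the idea with toy bag-of-words "embeddings" and cosine similarity rather than ChromaDB's real embedding models, which is an intentional simplification:

```python
# Sketch of the similarity search a vector database performs under the hood:
# embed documents, embed the query, rank by cosine similarity.
# Word-count vectors stand in for real dense embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in embedding: word counts instead of dense vectors
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def search(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Ollama runs large language models locally",
    "ChromaDB stores embeddings for semantic search",
    "Streamlit builds web UIs in pure Python",
]
```

A RAG agent simply feeds the top-ranked documents into the model's context before answering.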

Incorporating AI into your SDLC

Leveraging AI tooling across the phases of your software development lifecycle

Just as CI/CD and other revolutions in DevOps have changed the landscape of the software development lifecycle (SDLC), so Generative AI is now changing it again. Gen AI has the potential to simplify, clarify, and lessen the cycles required across multiple phases of the SDLC.

In this session with author, trainer, and experienced DevOps director Brent Laster, we'll survey the ways that today's AI assistants and tools can be incorporated across your SDLC phases, including planning, development, testing, documentation, and maintenance. Existing tools can help us in many ways beyond standard day-to-day coding, and, like other changes over the years, teams need to be aware of these capabilities and think about how to incorporate AI into their processes to stay relevant and up-to-date.

Books

Jenkins 2 - Up and Running

by Brent Laster

All about Jenkins 2, Pipelines-As-Code, CI/CD, etc.

Professional Git

by Brent Laster

Leverage the power of Git to smooth out the development cycle

Professional Git takes a professional approach to learning this massively popular software development tool, and provides an up-to-date guide for new users. More than just a development manual, this book helps you get into the Git mindset—extensive discussion of corollaries to traditional systems, as well as considerations unique to Git, helps you draw upon existing skills while watching out for—and planning around—the differences. Connected labs and exercises are interspersed at key points to reinforce important concepts and deepen your understanding, and a focus on the practical goes beyond technical tutorials to help you integrate the Git model into your real-world workflow.

Git greatly simplifies the software development cycle, enabling users to create, use, and switch between versions as easily as you switch between files. This book shows you how to harness that power and flexibility to streamline your development cycle.

  • Understand the basic Git model and overall workflow
  • Learn the Git versions of common source management concepts and commands
  • Track changes, work with branches, and take advantage of Git's full functionality
  • Avoid trip-ups and missteps common to new users
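
The basic model the book teaches—track changes, branch, and switch between lines of development—fits in a handful of commands. This is a minimal illustrative walkthrough (repo name and file contents are made up), not an excerpt from the book:

```shell
# Tiny tour of the core Git workflow: init, commit, branch, switch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"

echo "v1" > notes.txt
git add notes.txt
git commit -q -m "first commit"     # snapshot the tracked changes

git switch -q -c feature            # create and move to a new branch
echo "v2" > notes.txt
git commit -q -am "change on feature"

git switch -q -                     # hop back; notes.txt reads "v1" again
cat notes.txt
```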

Git works with the most popular software development tools and is used by almost all of the major technology companies. More than 40 percent of software developers use it as their primary source control tool, and that number continues to grow; the ability to work effectively with Git is rapidly approaching must-have status, and Professional Git is the comprehensive guide you need to get up to speed quickly.