Through my decades in programming and management, I've always tried to make time to learn and develop both technical and leadership skills and to share them with others. Regardless of the topic or technology, I believe there is no substitute for the excitement and sense of potential that come from giving others the knowledge they need to accomplish their goals.
In my spare time, I hang out with my wife Anne-Marie, our 4 children, and 2 small dogs in Cary, North Carolina, where I design and conduct training courses and write books. You can find me on LinkedIn (linkedin.com/in/brentlaster), Twitter (@brentclaster), or through my company's website at www.getskillsnow.com.
This full-day, hands-on workshop equips developers, architects, and technical leaders with the knowledge and skills to secure AI systems end-to-end — from model interaction to production deployment. Participants learn how to recognize and mitigate AI-specific threats such as prompt injection, data leakage, model exfiltration, and unsafe tool execution.
Through a series of focused labs, attendees build, test, and harden AI agents and Model Context Protocol (MCP) services using modern defensive strategies, including guardrails, policy enforcement, authentication, auditing, and adversarial testing.
The training emphasizes real-world implementation over theory, using preconfigured environments in GitHub Codespaces for instant, reproducible results. By the end of the day, participants will have created a working secure AI pipeline that demonstrates best practices for trustworthy AI operations and resilient agent architectures.
The course blends short conceptual discussions with deep, hands-on practice across eight structured labs, each focusing on a key area of AI security. Labs can be completed in sequence within GitHub Codespaces, requiring no local setup.
1. Lab 1 – Mapping AI Security Risks
Identify the unique attack surfaces of AI systems, including LLMs, RAG pipelines, and agents. Learn how to perform structured threat modeling and pinpoint where vulnerabilities typically occur.
2. Lab 2 – Securing Prompts and Contexts
Implement defensive prompting, context isolation, and sanitization to mitigate prompt injection, hidden instructions, and data leakage risks.
3. Lab 3 – Implementing Guardrails
Use open-source frameworks (e.g., Guardrails.ai, LlamaGuard) to validate LLM outputs, enforce content policies, and intercept unsafe completions before delivery (a simplified sketch of these checks follows this lab list).
4. Lab 4 – Hardening MCP Servers and Tools
Configure FastMCP servers with authentication, scoped tokens, and restricted tool manifests. Examine how to isolate and monitor server–client interactions to prevent privilege escalation.
5. Lab 5 – Auditing and Observability for Agents
Integrate structured logging, trace identifiers, and telemetry into AI pipelines. Learn how to monitor for suspicious tool calls and enforce explainability through audit trails.
6. Lab 6 – Adversarial Testing and Red-Teaming
Simulate common AI attacks—prompt injection, model hijacking, and context poisoning—and apply mitigation patterns using controlled experiments.
7. Lab 7 – Policy-Driven Governance
Introduce a “security-as-code” approach using policy files that define allowed tools, query types, and data scopes. Enforce runtime governance directly within your agent’s workflow.
8. Lab 8 – Secure Deployment and Lifecycle Management
Apply DevSecOps practices to containerize, sign, and deploy AI systems safely. Incorporate secrets management, vulnerability scanning, and compliance checks before release.
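As a taste of what Labs 2 and 3 involve, the sketch below shows the general shape of an input-sanitization and output-guardrail check in plain Python. It is illustrative only: the pattern lists and function names are invented for this example, and the labs themselves use dedicated frameworks such as Guardrails.ai and LlamaGuard rather than hand-rolled regexes.

```python
# Illustrative sketch only: the pattern lists and function names here are invented for
# this example. The labs use dedicated frameworks (e.g., Guardrails.ai, LlamaGuard)
# rather than hand-rolled regexes.
import re

# Phrases commonly associated with prompt-injection attempts (not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
]

# Simple examples of content the model's output should never contain.
SECRET_PATTERNS = [
    r"sk-[A-Za-z0-9]{20,}",      # API-key-like strings
    r"\b\d{3}-\d{2}-\d{4}\b",    # US SSN-like strings
]

def sanitize_user_input(text: str) -> str:
    """Block obvious injection attempts before they reach the model (Lab 2)."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError("Potential prompt injection detected; request blocked.")
    return text.strip()

def check_model_output(text: str) -> str:
    """Intercept unsafe completions, such as leaked secrets, before delivery (Lab 3)."""
    for pattern in SECRET_PATTERNS:
        if re.search(pattern, text):
            return "[response withheld: possible sensitive data detected]"
    return text

if __name__ == "__main__":
    question = sanitize_user_input("Summarize the attached report.")
    print(check_model_output("The report covers third-quarter results."))
```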
Outcome:
Participants finish the day with a secure, auditable, and policy-controlled AI system built from the ground up. They leave with practical experience defending agents, MCP servers, and model workflows—plus guidance for integrating security-by-design principles into future projects.
Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this half-day workshop, author, trainer, and experienced technologist Brent Laster will help you do both! We'll explain why and how to run AI models locally, cover the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama. You'll follow along with hands-on labs and produce your own instance running in your own GitHub Codespace.
In this workshop, we'll walk you through what it means to run models locally, how to interact with them, and how to use them as the brain for an agent. Then, we'll enable them to access and use data from a PDF via retrieval-augmented generation (RAG) to make the results more relevant and meaningful. And you'll do all of this hands-on in a ready-made environment with no extra installs required.
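To give a rough sense of what the finished lab code looks like, here is a minimal sketch of a local-model RAG loop in Python. It assumes Ollama is running on its default port with a llama3.2 model pulled, and it substitutes a toy keyword-overlap lookup for the PDF ingestion and retrieval used in the workshop; the document chunks and function names are invented for illustration.

```python
# Minimal sketch of a local-model RAG loop. Assumes Ollama is running on its default
# port with a llama3.2 model already pulled; the keyword-overlap "retrieval" below is a
# toy stand-in for the PDF ingestion and search done in the workshop labs.
import requests

DOCUMENT_CHUNKS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]

def retrieve(question: str) -> str:
    """Toy retrieval: return the chunk that shares the most words with the question."""
    words = set(question.lower().split())
    return max(DOCUMENT_CHUNKS, key=lambda chunk: len(words & set(chunk.lower().split())))

def ask(question: str) -> str:
    """Augment the prompt with retrieved context, then query the local Ollama model."""
    context = retrieve(question)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("How long do I have to return an item?"))
```

In the labs, the toy retrieve() step is replaced with retrieval over actual PDF content.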
No experience with these technologies is needed, although we do assume a basic understanding of LLMs.
This half-day workshop introduces participants to Claude Code, Anthropic’s AI-powered coding assistant. In three hours, attendees will learn how to integrate Claude Code into their development workflow, leverage its capabilities for productivity, and avoid common pitfalls. The workshop also introduces the concept of subagents (specialized roles like Planner, Tester, Coder, Refactorer, DocWriter) to show how structured interactions can improve accuracy and collaboration.
Format: 3-hour interactive workshop (2 × 90-minute sessions + 30-minute break).
Audience: Developers and technical professionals with basic programming knowledge.
Focus Areas:
Core capabilities and limitations of Claude Code.
Effective prompting and iteration techniques.
Applying Claude Code for code generation, debugging, refactoring, and documentation.
Using subagents for structured workflows as an optional advanced technique.
Deliverables:
5 hands-on labs (10–12 minutes each).
Experience with everyday Claude Code workflows plus a brief introduction to subagents.
Just as CI/CD and other revolutions in DevOps have changed the landscape of the software development lifecycle (SDLC), so Generative AI is now changing it again. Gen AI has the potential to simplify, clarify, and lessen the cycles required across multiple phases of the SDLC.
In this session with author, trainer, and experienced DevOps director Brent Laster, we'll survey the ways that today's AI assistants and tools can be incorporated across your SDLC phases, including planning, development, testing, documentation, and maintenance. Existing tools can help in many ways beyond standard day-to-day coding, and, as with other changes over the years, teams need to be aware of these capabilities and think about how to incorporate AI into their processes to stay relevant and up to date.
In this intensive 3-hour hands-on workshop, you'll master the art and science of prompt engineering. You'll learn systematic frameworks for constructing effective prompts, from foundational elements to cutting-edge techniques including multi-expert prompting, probability-based optimization, and incentive framing. Through five progressive labs using Ollama and llama3.2:3b in GitHub Codespaces, you'll build production-ready templates and see quality improvements in real-time. Leave with immediately applicable techniques, reusable prompt patterns, and a decision framework for selecting the right approach for any AI task.
Modern AI systems deliver many capabilities, but their effectiveness depends entirely on how well they're prompted. This intensive workshop transforms prompt engineering from trial-and-error guesswork into a systematic, measurable discipline. You'll learn proven frameworks for constructing effective prompts along with cutting-edge optimization techniques that deliver quality improvements in real-world applications.
Through five hands-on labs in GitHub Codespaces, you'll work with Ollama hosting llama3.2:3b to implement each technique, measure its impact, and build reusable templates. Every concept is immediately validated with code you can deploy tomorrow.
What You'll Master
The workshop progresses through five core competency areas, each reinforced with a practical lab:
Foundations of Effective Prompting begins with the six essential elements every prompt needs: task definition, context, constraints, role assignment, output format, and examples. You'll systematically transform a poorly-constructed prompt into an optimized version, measuring quality improvements at each step. This foundation eliminates the guesswork and establishes a repeatable framework for all future prompt engineering work.
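As a concrete illustration of that structure, the sketch below assembles the six elements into a single prompt in Python; the example content filled in for each element is invented here and is not the lab's reference solution.

```python
# Illustrative only: the six element names come from the workshop outline, but the
# example content for each element is invented, not the lab's reference prompt.
def build_prompt(task, context, constraints, role, output_format, examples):
    """Assemble the six essential prompt elements into one prompt string."""
    return "\n\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
        f"Examples:\n{examples}",
    ])

prompt = build_prompt(
    task="Summarize the incident report below for an executive audience.",
    context="The report describes a 40-minute outage of the checkout service.",
    constraints="No more than 100 words; avoid internal jargon.",
    role="You are a senior site reliability engineer.",
    output_format="Three bullet points: impact, cause, next steps.",
    examples="- Impact: ...\n- Cause: ...\n- Next steps: ...",
)
print(prompt)
```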
Pattern-Based Techniques introduces few-shot learning and Chain of Thought (CoT) reasoning. Few-shot prompting teaches models through examples rather than explanations, dramatically improving consistency on classification and transformation tasks. Chain of Thought makes reasoning transparent, improving accuracy on complex problems by 20-40% while enabling you to verify the model's logic. You'll build a classification system and compare zero-shot, few-shot, and CoT approaches with measurable accuracy metrics.
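The snippet below sketches what that comparison looks like in practice, with a zero-shot, a few-shot, and a chain-of-thought version of the same classification prompt; the tickets and labels are made up for illustration and are not the lab dataset.

```python
# Illustrative only: the ticket text and labels are invented; the lab uses its own
# dataset and measures accuracy across the three prompting styles.
TICKET = "The app crashes every time I tap the export button."

zero_shot = f"Classify this support ticket as Bug, Feature Request, or Question:\n{TICKET}"

few_shot = f"""Classify each support ticket as Bug, Feature Request, or Question.

Ticket: "Please add dark mode." -> Feature Request
Ticket: "How do I reset my password?" -> Question
Ticket: "Login fails with error 500." -> Bug

Ticket: "{TICKET}" ->"""

chain_of_thought = f"""Classify this support ticket as Bug, Feature Request, or Question.
Think step by step: first describe what the user is reporting, then decide which category
fits best, then state the final label on its own line.

Ticket: "{TICKET}"
"""
```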
Advanced Structural Techniques combines role-based prompting, structured outputs, and constrained generation into enterprise-ready patterns. You'll create an API documentation generator that uses expert personas, enforces strict formatting requirements, outputs reliable JSON, and maintains 90%+ consistency across diverse inputs. This lab produces production templates with automated validation—patterns you can immediately deploy in your organization.
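A rough sketch of that pattern appears below: an expert persona, a strict JSON requirement, and an automated validation step. The persona, field names, and sample reply are hypothetical and do not match the lab's actual template or schema.

```python
# Illustrative only: the persona, field names, and sample reply are hypothetical and
# do not match the lab's actual template or schema.
import json

PROMPT = """You are a senior technical writer documenting a REST API.
Describe the endpoint below and respond with JSON only, using exactly these keys:
"name", "method", "path", "summary", "parameters" (a list of strings).

Endpoint: GET /v1/orders/{id} returns a single order by its identifier."""

REQUIRED_KEYS = {"name", "method", "path", "summary", "parameters"}

def validate(raw: str) -> dict:
    """Fail fast if a model reply is not the JSON shape the prompt demanded."""
    doc = json.loads(raw)                    # raises ValueError on invalid JSON
    missing = REQUIRED_KEYS - doc.keys()
    if missing:
        raise ValueError(f"Response missing keys: {missing}")
    return doc

# Validating a hypothetical model reply:
reply = ('{"name": "Get order", "method": "GET", "path": "/v1/orders/{id}", '
         '"summary": "Fetch one order by id.", "parameters": ["id"]}')
print(validate(reply)["summary"])
```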
Cutting-Edge Methods explores two powerful techniques gaining traction in 2025-2026. Multi-expert prompting simulates a council of experts (technical, business, security) analyzing complex decisions from multiple perspectives, catching blind spots that single-perspective prompts miss. Reverse prompting flips the traditional interaction: instead of you trying to perfectly specify requirements, the AI asks clarifying questions to discover what you really need. You'll measure 40-60% improvements in decision quality and 80-90% gains in requirement clarity.
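The two prompt skeletons below illustrate these techniques; the decision, expert roster, and wording are examples invented for this description rather than the workshop's own scenarios.

```python
# Illustrative only: the decision, expert roster, and wording are invented for this
# description and are not the workshop's own scenarios.
DECISION = "Should we expose our internal analytics API to third-party partners?"

multi_expert = f"""Analyze the decision below from three expert perspectives, then give a
combined recommendation.

Decision: {DECISION}

1. Technical architect: feasibility, scalability, operational risk.
2. Business strategist: revenue potential, partnerships, competitive impact.
3. Security engineer: attack surface, data exposure, compliance.

End with a short "Combined recommendation" section."""

reverse_prompt = f"""I want help with this decision: {DECISION}
Before answering, ask me the five most important clarifying questions you would need
answered to give a well-grounded recommendation. Do not answer yet."""
```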
Probabilistic and Incentive-Based Optimization introduces the latest research-backed techniques for extracting maximum quality from language models. Stanford's breakthrough probability-based prompting—requesting multiple responses with confidence scores—improves reliability by 30-50% on ambiguous tasks. Incentive framing (yes, “This is critical” and “Take your time” actually work) increases thoroughness by 20-40%. Combined, these techniques deliver 50-70% quality improvements on high-stakes decisions.
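As a rough illustration, the snippet below shows what the two patterns look like as prompts; the question and wording are invented examples, and the improvement percentages quoted above are the workshop's claims, not something these snippets measure.

```python
# Illustrative only: the question and wording are invented examples; the improvement
# percentages quoted in the description are the workshop's claims, not measured here.
QUESTION = "Which database should a small team choose for an event-logging workload?"

probability_prompt = f"""{QUESTION}
Give your three most likely recommendations. For each, state a confidence score between
0 and 1 and one sentence describing the main assumption behind it."""

incentive_prompt = f"""This is critical: an incorrect answer will be costly to fix later.
Take your time and reason carefully before answering.

{QUESTION}"""
```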
Professional Git takes a professional approach to learning this massively popular software development tool, and provides an up-to-date guide for new users. More than just a development manual, this book helps you get into the Git mindset—extensive discussion of corollaries to traditional systems as well as considerations unique to Git help you draw upon existing skills while looking out—and planning for—the differences. Connected labs and exercises are interspersed at key points to reinforce important concepts and deepen your understanding, and a focus on the practical goes beyond technical tutorials to help you integrate the Git model into your real-world workflow.
Git greatly simplifies the software development cycle, enabling users to create, use, and switch between versions as easily as they switch between files. This book shows you how to harness that power and flexibility to streamline your development cycle.
Git works with the most popular software development tools and is used by almost all of the major technology companies. More than 40 percent of software developers use it as their primary source control tool, and that number continues to grow; the ability to work effectively with Git is rapidly approaching must-have status, and Professional Git is the comprehensive guide you need to get up to speed quickly.