Mary is the VP of Global for the Western Hemisphere at the AI Collective, overseeing the health and growth of the community across North America and Latin America. She started her career in software engineering and has a deep interest in distributed systems, which touch nearly every corner of computing. She is also passionate about tech advocacy and community work, and has been leading the Java users group in Chicago since 2015. She is recognized as a Java Champion and an Oracle ACE Associate.
This talk will guide Java developers through the design and implementation of multi-agent generative AI systems using event-driven principles.
Attendees will learn how autonomous GenAI agents collaborate, communicate, and adapt in real-time workflows using modern Java frameworks and messaging protocols.
With ChatGPT taking center stage since the beginning of 2023, developers who have not yet worked with any form of Artificial Intelligence or Machine Learning system may find themselves intrigued by the “maze” of new terminology: some are eager to learn more, while a smaller group may prefer not to venture into territory that is unknown to them.
This workshop is tailored for Java developers. We start with a quick introduction to GenAI, ChatGPT, and the new terminology around generative AI, then dive right into the hands-on part: how to quickly build a ChatGPT-based app using state-of-the-art tools such as PgVector, which adds a vector extension to the popular open-source Postgres database.
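To make the vector-store part concrete, here is a minimal sketch (not the workshop’s actual lab code) of storing and querying embeddings from Java over plain JDBC with the pgvector extension; the table name, the toy 3-dimensional embeddings, and the connection details are illustrative assumptions.

```java
// Minimal sketch: pgvector from Java via JDBC. Table, credentials, and
// embedding values are placeholders, not the workshop's lab setup.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PgVectorSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/demo", "demo", "demo")) {

            // Enable the extension and create a table with a vector column.
            try (var stmt = conn.createStatement()) {
                stmt.execute("CREATE EXTENSION IF NOT EXISTS vector");
                stmt.execute("CREATE TABLE IF NOT EXISTS docs ("
                        + "id bigserial PRIMARY KEY, content text, embedding vector(3))");
            }

            // Insert a document together with its (toy) embedding.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO docs (content, embedding) VALUES (?, ?::vector)")) {
                ps.setString(1, "Java loves Postgres");
                ps.setString(2, "[0.12, 0.98, 0.33]");
                ps.executeUpdate();
            }

            // Retrieve the documents closest to a query embedding (cosine distance).
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT content FROM docs ORDER BY embedding <=> ?::vector LIMIT 5")) {
                ps.setString(1, "[0.10, 0.95, 0.30]");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("content"));
                    }
                }
            }
        }
    }
}
```

In a real ChatGPT-based app, the embeddings would come from an embedding model rather than being hard-coded, and the retrieved rows would be folded into the prompt sent to the LLM.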
The hands-on lab will cover:
Everybody is talking about Generative AI and about models that are better than anything that came before. What are they really talking about?
In this workshop, which includes hands-on exercises, we will discuss Generative AI in theory and also try it in practice (with free access to an Oracle LiveLab cloud session to learn about Vector Search). You'll come away understanding what Generative AI is all about and how it can be used.
The content will include:
Large Language Models like ChatGPT are fantastic for many NLP tasks but face challenges when it comes to real-time, up-to-date knowledge retrieval. Retrieval Augmented Generation (RAG) can effectively tackle this by pulling in external data for better, more context-aware responses.
This talk dives deep into using event-driven streaming through LangStream—an open-source library—to seamlessly integrate real-time data into generative AI applications like ChatGPT. Walk away with actionable insights on how to boost your GenAI applications using event streaming and RAG.
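As a rough illustration of the RAG step described here (independent of LangStream’s own APIs), the following sketch shows the prompt-augmentation idea in plain Java; the Retriever interface and the sample snippets are hypothetical stand-ins for a vector-store lookup feeding an LLM call.

```java
// Framework-agnostic RAG sketch: retrieve relevant snippets for a question,
// then fold them into the prompt sent to the LLM. Not LangStream API code.
import java.util.List;

public class RagSketch {

    interface Retriever {
        List<String> topK(String query, int k); // e.g. backed by a vector store
    }

    static String buildAugmentedPrompt(String question, Retriever retriever) {
        List<String> context = retriever.topK(question, 3);
        return """
               Answer the question using only the context below.

               Context:
               %s

               Question: %s
               """.formatted(String.join("\n---\n", context), question);
    }

    public static void main(String[] args) {
        // Hypothetical retriever returning canned snippets for the example.
        Retriever retriever = (query, k) ->
                List.of("Order #42 shipped yesterday.", "Standard delivery takes 3-5 days.");
        String prompt = buildAugmentedPrompt("Where is order #42?", retriever);
        System.out.println(prompt); // this prompt would then be sent to the LLM
    }
}
```

With event-driven streaming, the retrieval and prompt-assembly steps become stages in a pipeline that reacts to new data as it arrives, rather than a one-off call.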
Generative AI applications generally excel at specific zero-shot and one-shot tasks. However, we live in a complicated world, and we are beginning to see that today’s generative AI systems are simply not well equipped to handle the increased complexity found especially in business workflows and transactions. Traditional architectures often fall short in handling the dynamic nature and real-time requirements of these systems. We also need a way to coordinate multiple components to generate coherent and contextually relevant outputs. Event-driven architectures and multi-agent systems offer a promising solution by enabling real-time processing, decentralized decision-making, and enhanced adaptability.
This presentation offers an in-depth exploration of how event-driven architectures and multi-agent systems can be leveraged to design and implement complex workflows in generative AI. By combining the real-time responsiveness of event-driven systems with the collaborative intelligence of multi-agent architectures, we can create highly adaptive, efficient, and scalable AI systems. We will cover the theoretical foundations, practical applications, and benefits of integrating these approaches in the context of generative AI, and we will also walk through an example of how to implement a simple multi-agent application using a library such as AutoGen, CrewAI, or LangGraph.
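To ground the idea before the library demo, here is a minimal, library-free Java sketch of event-driven agent coordination; it is not AutoGen, CrewAI, or LangGraph code, and the in-memory queue merely stands in for a real event broker such as Kafka or Pulsar.

```java
// Illustrative sketch: two "agents" coordinate by publishing and consuming
// events on a shared bus instead of calling each other directly.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventDrivenAgentsSketch {

    record Event(String type, String payload) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Event> bus = new LinkedBlockingQueue<>();

        // "Researcher" agent: reacts to a user request and emits findings.
        Thread researcher = new Thread(() -> {
            try {
                Event request = bus.take();
                if (request.type().equals("USER_REQUEST")) {
                    // In a real system this would be an LLM call with a research prompt.
                    bus.put(new Event("FINDINGS", "Key facts about: " + request.payload()));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        researcher.start();

        bus.put(new Event("USER_REQUEST", "event-driven GenAI"));
        researcher.join();

        // "Writer" agent (here just the main thread) consumes findings and drafts a reply.
        Event findings = bus.take();
        System.out.println("Draft answer based on: " + findings.payload());
    }
}
```

The decoupling is the point: agents only agree on event types, so new agents can subscribe to the same stream without changing existing ones.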
As generative AI systems evolve from single LLM calls to complex, goal‑driven workflows, multi‑agent architectures are becoming essential for robust, scalable, and explainable AI applications.
This talk presents a practical framework for designing and implementing multi‑agent generative AI systems, covering four core orchestration patterns that define how agents coordinate:
Orchestrator‑Worker: A central agent decomposes a task and delegates subtasks to specialized worker agents, then aggregates and validates results.
Hierarchical Agent: Agents are organized in layers (e.g., manager, specialist, executor), enabling abstraction, delegation, and error handling across levels.
Blackboard: Agents contribute to and react to updates on a shared “blackboard” workspace, enabling loosely coupled, event‑driven collaboration.
Market‑Based: Agents act as autonomous participants that negotiate, bid, or compete for tasks and resources, useful in dynamic, resource‑constrained environments.
For each pattern, we show concrete use cases (such as customer support triage, research synthesis, and code generation pipelines) and discuss trade‑offs in latency, complexity, and observability; a minimal sketch of the first pattern follows below.
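As a flavor of the Orchestrator‑Worker pattern, this plain-Java sketch has a central orchestrator decompose a (hypothetical customer-support) task, fan the subtasks out to worker agents, and aggregate their results; the subtask names and worker logic are illustrative stand-ins for real LLM calls.

```java
// Orchestrator-Worker sketch: decompose, delegate in parallel, aggregate.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrchestratorWorkerSketch {

    // A worker agent: in a real system this would prompt an LLM specialized for the subtask.
    static String runWorker(String subtask) {
        return "[result of: " + subtask + "]";
    }

    public static void main(String[] args) {
        ExecutorService workers = Executors.newFixedThreadPool(3);
        try {
            // 1. Orchestrator decomposes the goal into subtasks.
            List<String> subtasks = List.of(
                    "classify the support ticket",
                    "look up the customer's order history",
                    "draft a reply in the customer's language");

            // 2. Delegate each subtask to a worker agent in parallel.
            List<CompletableFuture<String>> futures = subtasks.stream()
                    .map(t -> CompletableFuture.supplyAsync(() -> runWorker(t), workers))
                    .toList();

            // 3. Aggregate the partial results into a single answer for validation.
            String combined = futures.stream()
                    .map(CompletableFuture::join)
                    .reduce("", (a, b) -> a + "\n" + b);
            System.out.println("Final aggregated answer:" + combined);
        } finally {
            workers.shutdown();
        }
    }
}
```

The same skeleton extends naturally to the other patterns: a hierarchy adds manager layers above the orchestrator, a blackboard replaces direct delegation with a shared workspace, and a market-based design lets workers bid for the subtasks instead of being assigned them.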