Marcelo Araujo

API Architect

Marcelo Araujo is a seasoned API architect, strategist, and evangelist with deep expertise across the API landscape. He has played a pivotal role in developing and implementing API strategies for two Fortune Global 500 companies and one Fortune 500 company.

Presentations

In today’s fast-paced development environment, delivering robust and efficient APIs requires a streamlined design process that minimizes delays and maximizes collaboration. Mocking has emerged as a transformative tool in the API design lifecycle, enabling teams to prototype, test, and iterate at unprecedented speeds.

This talk explores the role of mocking in enhancing API design workflows, focusing on its ability to:

1. Facilitate early stakeholder feedback by simulating API behavior before development.
2. Enable parallel development by decoupling frontend and backend teams.
3. Identify design flaws and inconsistencies earlier, reducing costly downstream changes.
4. Support rapid iteration and experimentation without impacting live systems.

Using real-world examples and best practices, we’ll demonstrate how tools like Prism and WireMock can be leveraged to create mock APIs that enhance collaboration, improve quality, and dramatically accelerate development timelines. Attendees will leave with actionable insights on integrating mocking into their API design lifecycle, fostering innovation and speed without compromising reliability.
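As a rough illustration of what a mock buys you (a hand-rolled sketch, not Prism's or WireMock's actual output), the snippet below fakes a single hypothetical /users/123 endpoint with Node's built-in http module. Generators like Prism produce this kind of server automatically from an OpenAPI document, so a frontend team can build against the contract before the real backend exists:

```typescript
// Minimal hand-rolled mock of a hypothetical /users/123 endpoint using
// only Node's built-in http module. The endpoint and payload shape are
// illustrative, standing in for whatever contract the team has agreed on.
import { createServer } from "node:http";

// Canned response shaped like the (hypothetical) agreed API contract.
const mockUser = { id: "123", name: "Ada Lovelace", role: "admin" };

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/users/123") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(mockUser));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "Not found in mock" }));
  }
});

// Port 4010 is chosen to match Prism's default mock port.
server.listen(4010, () => {
  console.log("Mock API listening on http://localhost:4010");
});
```

The point of a spec-driven tool over a sketch like this is that the canned responses stay in sync with the OpenAPI document instead of drifting as the design evolves.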

As AI model usage grows across enterprise systems, teams face new infrastructure challenges—fragmented integrations, inconsistent interfaces, and limited visibility into model performance. An AI Gateway bridges this gap by providing an abstraction layer for model routing, guardrails, and observability, standardizing how applications interact with AI models.

This session explores AI Gateway architecture, key design patterns, and integration strategies with existing API and DevOps ecosystems. Attendees will learn how to implement model routing, enforce runtime safety and compliance, and build unified monitoring for prompt-level analytics—all forming the foundation of a scalable enterprise AI platform.
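To make the pattern concrete, here is a minimal TypeScript sketch of the three gateway responsibilities named above: model routing, a runtime guardrail, and prompt-level observability behind one interface. Every name in it (ModelRequest, routeModel, and so on) is illustrative rather than any particular gateway product's API:

```typescript
// Sketch of an AI Gateway's core loop: route, check guardrails, call the
// model, record metrics. All types and endpoints are hypothetical.
type ModelRequest = { tenant: string; task: "chat" | "summarize"; prompt: string };
type ModelTarget = { provider: string; model: string; endpoint: string };

// Routing: map a request to a concrete backend model by task/tenant policy.
function routeModel(req: ModelRequest): ModelTarget {
  if (req.task === "summarize") {
    return { provider: "internal", model: "sum-small", endpoint: "https://models.internal/sum" };
  }
  return { provider: "vendor-a", model: "chat-large", endpoint: "https://api.vendor-a.example/v1" };
}

// Guardrail: reject prompts that violate a (toy) runtime policy before
// they ever reach a model, e.g. an SSN-shaped string in the prompt.
function checkGuardrails(req: ModelRequest): void {
  if (/\b\d{3}-\d{2}-\d{4}\b/.test(req.prompt)) {
    throw new Error("Guardrail: prompt appears to contain an SSN");
  }
}

// Observability: one place to emit prompt-level analytics for every call.
function recordMetrics(req: ModelRequest, target: ModelTarget, latencyMs: number): void {
  console.log(JSON.stringify({ tenant: req.tenant, model: target.model, latencyMs }));
}

async function handle(req: ModelRequest): Promise<string> {
  checkGuardrails(req);
  const target = routeModel(req);
  const start = Date.now();
  // The actual provider call is elided; a real gateway would translate
  // the request into each provider's wire format here.
  const response = `[${target.model}] ...`;
  recordMetrics(req, target, Date.now() - start);
  return response;
}

handle({ tenant: "acme", task: "chat", prompt: "Hello" }).then(console.log);
```

Because every application goes through one handle() path, routing policy, safety checks, and analytics can evolve in the gateway without touching the applications themselves.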

Traditional API linting tools like Spectral have helped teams identify issues in their OpenAPI specifications by surfacing violations of style guides and best practices. But the current paradigm stops at diagnosis: developers are still left with the manual burden of interpreting warnings, resolving inconsistencies, and applying often-repetitive best-practice fixes.

This session explores a transformative approach: using large language models (LLMs) fine-tuned on industry API standards to go beyond pointing out what's wrong and actively fix it. Imagine replacing "Here's a list of errors" with "Here's your new spec: clean, compliant, and ready to ship." By shifting from rule-checking to rule-enforcing via intelligent automation, teams can significantly reduce friction in their design workflows, improve standardization, and cut review cycles.
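A minimal sketch of that lint-then-fix loop follows, using Spectral's programmatic API from @stoplight/spectral-core with the stock OpenAPI ruleset; fixSpecWithLLM is a hypothetical stand-in for whatever fine-tuned model a team actually wires in:

```typescript
// Sketch: Spectral diagnoses, an LLM proposes a corrected spec.
// fixSpecWithLLM is hypothetical; the Spectral calls follow the
// documented @stoplight/spectral-core usage.
import { Spectral, Document, RulesetDefinition } from "@stoplight/spectral-core";
import * as Parsers from "@stoplight/spectral-parsers";
import { oas } from "@stoplight/spectral-rulesets";

// Hypothetical: call a model fine-tuned on your API standards with the
// spec plus the linter's findings, and get a revised spec back.
declare function fixSpecWithLLM(spec: string, findings: string[]): Promise<string>;

async function lintAndFix(specYaml: string): Promise<string> {
  const spectral = new Spectral();
  spectral.setRuleset(oas as RulesetDefinition); // stock OpenAPI ruleset

  const doc = new Document(specYaml, Parsers.Yaml, "openapi.yaml");
  const results = await spectral.run(doc);

  if (results.length === 0) return specYaml; // already clean

  // Turn diagnoses into instructions for the model instead of a
  // to-do list for a human reviewer.
  const findings = results.map(
    (r) => `${r.code} at ${r.path.join(".")}: ${r.message}`
  );
  return fixSpecWithLLM(specYaml, findings);
}
```

In practice the returned spec would be re-linted (and diffed for human review) before it replaces the original, keeping the rule engine as the source of truth while the model does the repetitive editing.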