Hi everyone, I’m Chris Duncan, a Customer Engineer at Google Cloud, and in this video, I’ll walk you through a hands-on demonstration of how to build a skeleton Apigee proxy that exposes an LLM endpoint hosted on Vertex AI.
If you’re new to Apigee or looking to integrate AI APIs into your architecture, this demo will help you get started. I’ll show you how Apigee enables secure API management, including fine-grained access control, token management, caching, and cost governance for AI-driven applications.
By the end of this session, you’ll have a working Apigee proxy that connects to an LLM deployed in Vertex AI, and you’ll understand how to scale, optimize, and secure your AI API traffic.
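To give you a feel for what that proxy looks like under the hood, here's a minimal sketch of a TargetEndpoint that forwards requests to a Vertex AI model endpoint. This is an illustrative fragment, not the exact config from the demo: `YOUR_PROJECT`, the `us-central1` region, and the model name are placeholders you'd swap for your own values, and the `GoogleAccessToken` block assumes Apigee X, which can mint a Google access token for the backend call automatically.

```xml
<!-- Sketch of a TargetEndpoint proxying to a Vertex AI generateContent endpoint.
     YOUR_PROJECT, the region, and the model name are placeholders. -->
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <!-- Apigee X can attach a Google access token to the southbound request,
         so clients never need Google Cloud credentials of their own -->
    <Authentication>
      <GoogleAccessToken>
        <Scopes>
          <Scope>https://www.googleapis.com/auth/cloud-platform</Scope>
        </Scopes>
      </GoogleAccessToken>
    </Authentication>
    <URL>https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT/locations/us-central1/publishers/google/models/gemini-1.5-flash:generateContent</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```

With a target like this in place, clients call your Apigee proxy's base path instead of Vertex AI directly, and you can layer on the policies mentioned above (API key checks, quotas, response caching) without touching the backend.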
Watch the demo now and start building with Apigee and Vertex AI! If you have any questions or thoughts, feel free to drop them in the comments.