Secure your LLM apps with Google Cloud Model Armor

It’s crucial to secure the inputs and outputs to and from your Large Language Model (LLM). Failure to do so can result in prompt injections, jailbreaking, sensitive information exposure, and more (as detailed in the OWASP Top 10 for Large Language Model Applications).

I previously talked about LLM Guard and Vertex AI and showed how to use LLM Guard to secure LLMs. Google Cloud has its own service to secure LLMs: Model Armor. In this post, we’ll explore Model Armor and see how it can help to safeguard your LLM applications.

Read More →

Gen AI Evaluation Service - Multimodal Metrics

This is the sixth and final post in my Vertex AI Gen AI Evaluation Service blog post series. In the previous posts, we covered computation-based, model-based, tool-use, and agent metrics. These metrics measure different aspects of an LLM response in different ways, but they all have one thing in common: they only apply to text-based outputs.

Nowadays, LLMs also produce multimodal outputs (images, videos). How do you evaluate multimodal outputs? That’s the topic of this blog post.

Read More →

Gen AI Evaluation Service - Agent Metrics

In my previous Gen AI Evaluation Service - Tool-Use Metrics post, we talked about LLMs calling external tools and how you can use tool-use metrics to evaluate how good those tool calls are. In today’s fifth post of my Vertex AI Gen AI Evaluation Service blog post series, we will talk about a related topic: agents and agent metrics.

What are agents?

There are many definitions of agents, but an agent is essentially a piece of software that acts autonomously to achieve specific goals. Agents use LLMs to perform tasks, utilize external tools, coordinate with other agents, and ultimately produce a response to the user.
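
Stripped down to its essence, that is an LLM running in a loop that can call tools until it can answer. Here is a purely conceptual sketch; the llm() helper, the decision object, and the tools dict are hypothetical stand-ins rather than any specific framework:

```python
# Conceptual agent loop. llm() and the tools dict are hypothetical stand-ins
# for a real model call and real tool implementations.
def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = llm(history, tools)      # the LLM decides: call a tool or answer
        if decision.tool_name:              # it requested a tool call
            result = tools[decision.tool_name](**decision.args)
            history.append(f"{decision.tool_name} returned: {result}")
        else:
            return decision.text            # final answer for the user
    return "Stopped after too many steps."
```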

Read More →

Gen AI Evaluation Service - Tool-Use Metrics

I’m continuing my Vertex AI Gen AI Evaluation Service blog post series. In today’s fourth post of the series, I will talk about tool-use metrics.

What is tool use?

Tool use, also known as function calling, provides the LLM with definitions of external tools (for example, a get_current_weather function). When processing a prompt, the model determines if a tool is needed and, if so, outputs structured data specifying the tool to call and its parameters (for example, get_current_weather(location='London')).
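
To make that concrete, here is a minimal sketch of declaring a tool and inspecting the structured call the model returns, using the Google Gen AI SDK (google-genai). The weather tool mirrors the example above; the model name is just an assumption:

```python
from google import genai
from google.genai import types

client = genai.Client()  # or genai.Client(vertexai=True, project=..., location=...)

# Declare the tool: name, description, and a JSON-schema style parameter spec.
get_weather_decl = types.FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather for a city.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={"location": types.Schema(type=types.Type.STRING)},
        required=["location"],
    ),
)

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather like in London?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[get_weather_decl])],
    ),
)

# Instead of text, the model returns structured data naming the tool and its
# arguments, e.g. get_current_weather(location='London'). Your code runs the call.
for call in response.function_calls or []:
    print(call.name, call.args)
```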

Read More →

Gen AI Evaluation Service - Model-Based Metrics

In the Gen AI Evaluation Service - An Overview post, I introduced Vertex AI’s Gen AI evaluation service and talked about the various classes of metrics it supports. In the Gen AI Evaluation Service - Computation-Based Metrics post, we delved into computation-based metrics, what they provide, and discussed their limitations. In today’s third post of the series, we’ll dive into model-based metrics.

The idea of model-based metrics is to use a judge model to evaluate the output of a candidate model. Using an LLM as a judge allows richer and more flexible evaluations than computation-based/statistical metrics can provide.
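
As a rough illustration of the idea (not the Gen AI evaluation service API itself), here is a minimal LLM-as-a-judge sketch using the Google Gen AI SDK; the judge prompt, scoring scale, and model name are my own assumptions:

```python
from google import genai

client = genai.Client()

JUDGE_PROMPT = """You are an impartial evaluator.
Rate the RESPONSE to the PROMPT for fluency on a scale of 1 (poor) to 5 (excellent).
Reply with the number only.

PROMPT: {prompt}
RESPONSE: {response}"""

def judge_fluency(prompt: str, response: str, judge_model: str = "gemini-2.0-flash") -> int:
    """Ask a judge model to score a candidate model's output."""
    result = client.models.generate_content(
        model=judge_model,
        contents=JUDGE_PROMPT.format(prompt=prompt, response=response),
    )
    return int(result.text.strip())

print(judge_fluency(
    "Explain RAG in one sentence.",
    "RAG retrieves relevant documents and feeds them to the model as context.",
))
```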

Read More →

Gen AI Evaluation Service - Computation-Based Metrics

In my Gen AI Evaluation Service - An Overview post, I introduced Vertex AI’s Gen AI evaluation service and talked about the various classes of metrics it supports. In today’s post, I want to dive into computation-based metrics, what they provide, and discuss their limitations.

Computation-based metrics are metrics that can be calculated using a mathematical formula. They’re deterministic – the same input produces the same score, unlike model-based metrics where you might get slightly different scores for the same input.
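
For example, exact match and token-level F1 are nothing more than small deterministic formulas. A quick sketch (my own illustration, not the service's implementation):

```python
def exact_match(response: str, reference: str) -> float:
    """1.0 if the strings are identical after trimming, else 0.0 -- fully deterministic."""
    return float(response.strip() == reference.strip())

def token_f1(response: str, reference: str) -> float:
    """Harmonic mean of token precision and recall against the reference."""
    resp, ref = response.lower().split(), reference.lower().split()
    common = sum(min(resp.count(t), ref.count(t)) for t in set(resp))
    if common == 0:
        return 0.0
    precision, recall = common / len(resp), common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))  # 0.0 -- the same input always yields the same score
print(token_f1("Paris is the capital", "The capital is Paris"))  # 1.0
```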

Read More →

Gen AI Evaluation Service - An Overview

Generating content with Large Language Models (LLMs) is easy. Determining whether the generated content is good is hard. That’s why evaluating LLM outputs with metrics is crucial. Previously, I talked about DeepEval and Promptfoo as some of the tools you can use for LLM evaluation. I also talked about RAG triad metrics specifically for Retrieval Augmented Generation (RAG) evaluation for LLMs.

In the next few posts, I want to talk about a Google Cloud specific evaluation service: the Gen AI evaluation service in Vertex AI. The Gen AI evaluation service in Vertex AI lets you evaluate any generative model or application against a set of criteria or your own custom criteria.
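
As a preview of what that looks like in code, here is a minimal sketch using the Vertex AI Python SDK's evaluation module; treat the project ID and the exact class and metric names as assumptions, since the later posts go into the details:

```python
import pandas as pd
import vertexai
from vertexai.evaluation import EvalTask, MetricPromptTemplateExamples

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

# A tiny evaluation dataset: prompts, the model's responses, and references.
eval_dataset = pd.DataFrame({
    "prompt": ["Summarize RAG in one sentence."],
    "response": ["RAG retrieves relevant documents and passes them to the LLM as context."],
    "reference": ["RAG feeds retrieved documents to the LLM as additional context."],
})

eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[
        "exact_match",                                   # computation-based
        MetricPromptTemplateExamples.Pointwise.FLUENCY,  # model-based (LLM as judge)
    ],
)
result = eval_task.evaluate()
print(result.summary_metrics)
```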

Read More →

Evaluating RAG pipelines with the RAG triad

Retrieval-Augmented Generation (RAG) has emerged as a dominant framework for feeding Large Language Models (LLMs) context beyond the scope of their training data, enabling them to produce more grounded answers and fewer hallucinations based on that context.
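
In code, a bare-bones RAG pipeline boils down to "embed, retrieve, then generate". A hedged sketch, where embed(), vector_store, and llm() are hypothetical stand-ins for your embedding model, vector database, and LLM call:

```python
# Minimal RAG sketch. embed(), vector_store.search(), and llm() are hypothetical
# stand-ins for your embedding model, vector database, and LLM call.
def answer_with_rag(question: str, top_k: int = 4) -> str:
    query_vector = embed(question)                       # 1. embed the user question
    chunks = vector_store.search(query_vector, k=top_k)  # 2. retrieve the most similar chunks
    context = "\n\n".join(chunk.text for chunk in chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)                                   # 3. generate a grounded answer
```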

However, designing an effective RAG pipeline can be challenging. You need to answer questions such as:

  1. How should you parse and chunk text documents for vector embedding? What chunk size and overlap size should you use?
  2. What vector embedding model should you use?
  3. What retrieval method should you use to fetch the relevant context? How many documents should you retrieve by default? Does the retriever actually manage to retrieve the applicable documents?
  4. Does the generator actually generate content that is in line with the retrieved context? What parameters (model, prompt template, temperature) work better?

The only way to objectively answer these questions is to measure how well the RAG pipeline works, but what exactly do you measure, and how do you measure it? This is the topic I’ll cover here.

DeepEval adds native support for Gemini as an LLM Judge

In my previous post on DeepEval and Vertex AI, I introduced DeepEval, an open-source evaluation framework for LLMs. I also demonstrated how to use Gemini (on Vertex AI) as an LLM Judge in DeepEval, replacing the default OpenAI judge to evaluate outputs from other LLMs. At that time, the Gemini integration with DeepEval wasn’t ideal and I had to implement my own integration.

Thanks to the excellent work by Roy Arsan in PR #1493, DeepEval now includes native Gemini integration. Since it’s built on the new unified Google GenAI SDK, DeepEval supports Gemini models running both on Vertex AI and Google AI. Nice!
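
A rough sketch of what using the native integration might look like; the GeminiModel constructor arguments and the model name here are assumptions, so check the DeepEval docs for the exact signature:

```python
from deepeval.models import GeminiModel
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

# Gemini as the judge model; for Vertex AI, a project/location is used instead
# of an API key (argument names assumed, see the DeepEval docs).
judge = GeminiModel(model_name="gemini-2.0-flash", project="my-project", location="us-central1")

metric = AnswerRelevancyMetric(model=judge)
test_case = LLMTestCase(
    input="What is Model Armor?",
    actual_output="Model Armor is a Google Cloud service for screening LLM prompts and responses.",
)
metric.measure(test_case)
print(metric.score, metric.reason)
```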

Read More →

Much simplified function calling in Gemini 2.X models

Last year, in my Deep dive into function calling in Gemini post, I talked about how to do function calling in Gemini. More specifically, I showed how to call two functions (location_to_lat_long and lat_long_to_weather) to get the weather information for a location from Gemini. It wasn’t difficult, but it involved a lot of steps for two simple function calls.

I’m pleased to see that the latest Gemini 2.X models and the unified Google Gen AI SDK (which I talked about in my Gemini on Vertex AI and Google AI now unified with the new Google Gen AI SDK post) have made function calling much simpler.
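
To illustrate the simplification, here is a minimal sketch with the new SDK's automatic function calling, passing plain Python functions as tools; the function bodies are stubbed and the model name is an assumption:

```python
from google import genai
from google.genai import types

def location_to_lat_long(location: str) -> dict:
    """Return the latitude/longitude of a location (stubbed for this sketch)."""
    return {"latitude": 51.51, "longitude": -0.13}

def lat_long_to_weather(latitude: float, longitude: float) -> dict:
    """Return the current weather for coordinates (stubbed for this sketch)."""
    return {"conditions": "cloudy", "temperature_c": 14}

client = genai.Client()

# In the Python SDK, passing the functions themselves enables automatic function
# calling: the SDK executes them and feeds the results back to the model for you.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What's the weather like in London?",
    config=types.GenerateContentConfig(tools=[location_to_lat_long, lat_long_to_weather]),
)
print(response.text)
```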

Read More →