RAG API powered by LlamaIndex on Vertex AI

Introduction

Recently, I talked about why grounding LLMs is important and how to ground LLMs with public data using Google Search (Vertex AI’s Grounding with Google Search: how to use it and why) and with private data using Vertex AI Search (Grounding LLMs with your own data using Vertex AI Search).

In today’s post, I want to talk about another, more flexible and customizable way of grounding your LLMs with private data: the RAG API powered by LlamaIndex on Vertex AI.
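The core idea behind any RAG setup is the same: retrieve the most relevant private documents first, then ground the prompt with them. Below is a minimal conceptual sketch of that retrieve-then-ground flow; it is not the Vertex AI RAG API (which handles corpora, embeddings, and retrieval for you), and the corpus and scoring are made up for illustration.

```python
import re


def words(text: str) -> set[str]:
    """Lowercase a text and split it into a set of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents ranked by keyword overlap with the query."""
    return sorted(corpus, key=lambda doc: len(words(query) & words(doc)), reverse=True)[:k]


def grounded_prompt(query: str, corpus: list[str]) -> str:
    """Stuff the retrieved private context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


# A toy private corpus standing in for your own documents.
corpus = [
    "Our refund policy allows returns within 30 days.",
    "The cafeteria serves lunch from 11am to 2pm.",
    "Support tickets are answered within 24 hours.",
]

print(grounded_prompt("What is the refund policy?", corpus))
```

A managed RAG API replaces the naive keyword scoring here with embedding-based retrieval over an indexed corpus, but the grounded prompt it ultimately builds follows the same shape.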

Grounding LLMs with your own data using Vertex AI Search

Introduction

In my previous Vertex AI’s Grounding with Google Search: how to use it and why post, I explained why you need grounding with large language models (LLMs) and how Vertex AI’s grounding with Google Search can help to ground LLMs with public up-to-date data.

That’s great, but sometimes you need to ground LLMs with your own private data. How can you do that? There are many ways, but Vertex AI Search is the easiest, and that’s what I want to talk about today with a simple use case.

Give your LLM a quick lie detector test

Introduction

It’s no secret that LLMs sometimes lie, and they do so very confidently. This might be OK for some applications, but it can be a real problem if your application requires high levels of accuracy.

I remember when the first LLMs emerged back in early 2023. I tried some of the early models, and it felt like they were hallucinating half of the time. More recently, it has started to feel like LLMs are getting better at giving factual answers. But it’s just a feeling, and you can’t base application decisions (or any decision?) on feelings, can you?

The Consistency vs. Novelty Dilemma

It’s been a while since I wrote a non-work related topic. Last time, I wrote about the unique kindness I experienced in Japan (see The Butterfly effect of kindness). This time, I want to write about a dilemma that I’ve been thinking about for a while.

When I reflect on my life so far, whenever I made progress (learning a new skill, making new lasting connections, changing to a new job, losing weight), it was always due to consistency in my life. I was not traveling, I was not thinking about where to go, what to do, where to eat, or how to get from point A to point B. I was in my familiar environment with a consistent (and maybe boring) routine where the basics of my life were in place. As a result, I had time, got bored, and started exploring. This consistency-fueled boredom allowed me to explore an aspect of life I wasn’t happy about and to put the time and energy into improving it.

Vertex AI's Grounding with Google Search: how to use it and why

Introduction

Once in a while, you come across a feature that is so easy to use and so useful that you don’t know how you lived without it before. For me, Vertex AI’s Grounding with Google Search is one of those features.

In this blog post, I explain why you need grounding with large language models (LLMs) and how Vertex AI’s Grounding with Google Search can help with minimal effort on your part.

AsyncAPI gets a new version 3.0 and new operations

Almost one year ago, in my Understanding AsyncAPI’s publish & subscribe semantics with an example post, I talked about AsyncAPI 2.6 and how confusing its publish and subscribe operations can be.

Since then, AsyncAPI 3.0 has been released with breaking changes and totally new send and receive operations.

In this blog post, I want to revisit the example from last year and show how to rewrite it for AsyncAPI 3.0 with the new send and receive operations.
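To give a flavor of the new shape, here is a minimal sketch of an AsyncAPI 3.0 document: channels now carry an `address` and their messages, while operations move to the root with an explicit `action` of `send` or `receive`. The channel and message names below are made up for illustration, not taken from the post's example.

```yaml
asyncapi: 3.0.0
info:
  title: Order Service
  version: 1.0.0
channels:
  orders:
    address: orders.created
    messages:
      orderCreated:
        payload:
          type: object
          properties:
            orderId:
              type: string
operations:
  publishOrderCreated:
    action: send          # this application sends to the channel
    channel:
      $ref: '#/channels/orders'
```

Compared to 2.x, the `action` states unambiguously what *this* application does with the channel, which is exactly the confusion the old publish/subscribe semantics created.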

A tour of Gemini 1.5 Pro samples

Introduction

Back in February, Google announced Gemini 1.5 Pro with its impressive 1 million token context window.

A larger context window means that Gemini 1.5 Pro can process vast amounts of information in one go: 1 hour of video, 11 hours of audio, 30,000 lines of code, or over 700,000 words. The good news is that there’s also good language support.

In this blog post, I will point out some samples utilizing Gemini 1.5 Pro in Google Cloud’s Vertex AI in different use cases and languages (Python, Node.js, Java, C#, Go).

Making API calls exactly once when using Workflows

One challenge with any distributed system, including Workflows, is ensuring that requests sent from one service to another are processed exactly once when needed; for example, when placing a customer order in a shipping queue, withdrawing funds from a bank account, or processing a payment.

In this blog post, we’ll provide an example of a website invoking Workflows, and Workflows in turn invoking a Cloud Function. We’ll show how to make sure both the Workflows and the Cloud Function logic run only once. We’ll also talk about how to invoke Workflows exactly once when using HTTP callbacks, Pub/Sub messages, or Cloud Tasks.
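A common building block for exactly-once processing is an idempotency key: the caller attaches a unique key to each logical request, and the handler records the result so that retries replay it instead of repeating the side effect. The sketch below illustrates the idea with an in-memory dictionary standing in for a durable store (in a real Workflows plus Cloud Functions setup, you would use something durable such as Firestore); the function and key names are made up for illustration.

```python
# Idempotency-key sketch: each logical request carries a unique key,
# and the result is recorded so retries never repeat the side effect.

processed: dict[str, str] = {}  # idempotency key -> stored result


def process_payment(idempotency_key: str, amount: int) -> str:
    """Process a payment at most once per idempotency key.

    A retry with the same key returns the stored result instead of
    charging the customer again.
    """
    if idempotency_key in processed:
        return processed[idempotency_key]  # duplicate request: replay result
    result = f"charged {amount}"           # the side effect happens only once
    processed[idempotency_key] = result
    return result


first = process_payment("order-123", 50)
retry = process_payment("order-123", 50)  # same key, e.g. after a network retry
print(first, "|", retry)
```

The caller generates the key once (for example, from the order ID), so no matter how many times the request is retried across service boundaries, the charge happens a single time.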

C# and Vertex AI Gemini streaming API bug and workaround

A user recently reported an intermittent error when using the Gemini 1.5 model through Vertex AI’s streaming API from C#. In this blog post, I want to outline what the error is, what causes it, and how to avoid it, in the hope of saving someone out there some frustration.

Error

The user reported using version 2.27.0 of the Google.Cloud.AIPlatform.V1 library to call Gemini 1.5 via Vertex AI’s streaming API and running into an intermittent System.IO.IOException.
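The post’s actual workaround isn’t reproduced here. As a generic pattern, intermittent I/O errors on streaming calls are often handled with a bounded retry and backoff, sketched below in Python with a made-up flaky call standing in for the streaming request; the real fix for this specific bug may well be different (for example, a library-level change).

```python
import time


def with_retries(fn, attempts: int = 3, base_delay: float = 0.0):
    """Call fn, retrying on IOError with exponential backoff.

    A generic pattern for intermittent streaming failures: retry a bounded
    number of times, re-raising the error if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except IOError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)


# A made-up flaky call: fails twice, then succeeds, mimicking an
# intermittent streaming error.
calls = {"n": 0}


def flaky_stream() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("intermittent stream failure")
    return "full response"


result = with_retries(flaky_stream)  # succeeds on the third attempt
```

Retrying is only safe when the wrapped call is idempotent, which reading a model response is; for calls with side effects you would combine this with deduplication.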

A Tour of Gemini Code Assist - Slides and Demos

This week, I’m speaking at three meetups on Gemini Code Assist. My talk starts with a short introduction to GenAI and Gemini, followed by a series of hands-on demos that showcase different features of Gemini Code Assist.

In the demos, I set up Gemini Code Assist via the Cloud Code IDE plugin in Visual Studio Code. Then, I show how to design and create an application; explain, run, generate, test, and transform code; and finish by understanding logs with the help of Gemini.
