
How to Get Started with Google’s Gemini Large Language Model

There are two ways to access Google's Gemini LLM: Vertex AI and Google AI Studio. We show you how to get started with both approaches.
Feb 29th, 2024 5:00am
Photo by 12photostory on Unsplash.

In my previous article, I introduced the key capabilities of Gemini, the multimodal generative AI model from Google. In this post, I will walk you through the steps to access the model.

There are two ways to access Gemini: Vertex AI and Google AI Studio. The former is meant for developers familiar with Google Cloud, while the latter is for developers building web and mobile apps with almost no dependency on Google Cloud.

Let’s take a look at these two approaches.

Accessing Gemini through Vertex AI

Assuming you have an active project with billing enabled, here are the steps to access the API from your local workstation.

Create a Python virtual environment and install the required modules.
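A minimal setup might look like this; the environment name `gemini-env` is arbitrary:

```shell
# Create and activate an isolated Python environment
python3 -m venv gemini-env
source gemini-env/bin/activate
```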


Since we need to authenticate with Google Cloud, let’s run the below command to cache the credentials. These will be used by the Google Cloud SDK when talking to the API endpoints. This method creates application default credentials (ADC) on your development workstation at $HOME/.config/gcloud/application_default_credentials.json.
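The standard way to create ADC with the gcloud CLI is:

```shell
# Opens a browser window for the OAuth flow and caches ADC locally
gcloud auth application-default login
```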


You will see a browser window pop up asking for your Google credentials to complete the authentication process. Once this is done, proceed to the next step of installing the Python modules.
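Assuming the Vertex AI Python SDK, the install step might look like this (`jupyterlab` is included for the next step):

```shell
# google-cloud-aiplatform bundles the vertexai SDK
pip install google-cloud-aiplatform jupyterlab
```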


Launch Jupyter Lab and access it from your favorite browser.
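From the activated environment:

```shell
# Starts a local server and prints a URL to open in the browser
jupyter lab
```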


Start by importing the modules and then initializing the model.
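A sketch of the initialization; the project ID and region below are placeholders you would replace with your own:

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel

# Placeholder project ID and region -- substitute your own values
vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-pro")
```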


The module vertexai.preview provides access to the foundation models available in Vertex AI. Check the documentation for the latest version and updated API.


Like other LLMs, Gemini has two APIs: text generation and chat completion.

Let’s try the text generation API.
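A minimal sketch, repeating the initialization for completeness (project ID, region and the prompt are illustrative placeholders):

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")  # placeholders
model = GenerativeModel("gemini-pro")

# Single-turn text generation
response = model.generate_content("Why is the sky blue?")
print(response.text)
```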



Next, let’s explore the chat completion API. The key difference between text generation and chat completion is the ability to maintain the history of conversations in a list. Passing the history list automatically provides context for the model. It can even be saved to the local disk and loaded to pick up the same thread.
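A sketch of a multi-turn chat, again with placeholder project settings; the prompts are illustrative:

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")  # placeholders
model = GenerativeModel("gemini-pro")

chat = model.start_chat()  # conversation history is tracked on the chat object
print(chat.send_message("Suggest a name for a flower shop.").text)
print(chat.send_message("Give me three more options.").text)  # context carried over

# Inspect the entire conversation so far
for message in chat.history:
    print(message.role, ":", message.parts[0].text)
```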



You can access the history list to see the entire conversation.

Accessing Gemini Through Google AI Studio

Google AI Studio is a playground to explore the generative AI models offered by Google. Anyone with a Google account can sign in to experiment with the models. However, for production usage, you still need to have an active project in Google Cloud.

Create an API key and initialize an environment variable.
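After generating the key in the AI Studio UI, one way to expose it to your code is an environment variable (the value below is a placeholder):

```shell
# Placeholder -- paste the key generated in Google AI Studio
export GOOGLE_API_KEY="your-api-key"
```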



You need a different Python module to access the models through the AI Studio.
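Assuming the Google AI Python SDK, the install might look like this:

```shell
pip install google-generativeai
```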


Import the module and see if you can list the available models.
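A sketch of listing the models, assuming the API key is in the `GOOGLE_API_KEY` environment variable set earlier:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# List the models that support text generation
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)
```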





As you can see, we have access to the Gemini Pro multimodal model. Let’s initialize the model.
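The initialization is a one-liner once the SDK is configured:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")
```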


Let’s now repeat the steps to perform text generation and chat completion.
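A sketch of both operations through the AI Studio SDK; the prompts are illustrative:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

# Text generation
print(model.generate_content("Why is the sky blue?").text)

# Chat completion with history maintained on the chat object
chat = model.start_chat()
print(chat.send_message("Suggest a name for a flower shop.").text)
print(chat.send_message("Give me three more options.").text)
```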



Counting Tokens to Estimate the Cost

According to Google, text input is charged for every 1,000 characters of input (prompt) and every 1,000 characters of output (response). Characters are counted by UTF-8 code points, and whitespace is excluded from the count. The API provides methods that return token counts, which help estimate the cost. The code below uses the count_tokens method and the usage_metadata property to translate the prompt and the LLM response into billable units.
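As a rough local approximation of that billing rule, you can count billable characters yourself; the authoritative figures come from the API (`model.count_tokens(...)` before a call, `response.usage_metadata` after one). The helper below is a sketch, and the per-1,000-character rates passed to it are placeholders, not real prices:

```python
def billable_characters(text: str) -> int:
    """Count UTF-8 code points, excluding whitespace, per the billing rule."""
    return sum(1 for ch in text if not ch.isspace())

def estimate_cost(prompt: str, response: str,
                  price_per_1k_input: float, price_per_1k_output: float) -> float:
    """Estimate cost from character counts; rates are caller-supplied placeholders."""
    return (billable_characters(prompt) / 1000) * price_per_1k_input + \
           (billable_characters(response) / 1000) * price_per_1k_output

# With the SDK, the authoritative counts come from:
#   model.count_tokens(prompt)   # before the call
#   response.usage_metadata      # after the call
```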


In the next part of this tutorial series, we will explore the basics of prompt engineering with Gemini. Stay tuned.
