How to Get Started with Google’s Gemini Large Language Model
In my previous article, I introduced the key capabilities of Gemini, the multimodal generative AI model from Google. In this post, I will walk you through the steps to access the model.
There are two ways to access Gemini: Vertex AI and Google AI Studio. The former is meant for developers familiar with Google Cloud, while the latter is for developers building web and mobile apps with almost no dependency on Google Cloud.
Let’s take a look at these two approaches.
Accessing Gemini Through Vertex AI
Assuming you have an active Google Cloud project with billing enabled, here are the steps to access the API from your local workstation.
Create and activate a Python virtual environment.
$ python -m venv venv
$ source venv/bin/activate
Since we need to authenticate with Google Cloud, run the commands below to cache the credentials. The Google Cloud SDK will use them when talking to the API endpoints. This method creates application default credentials (ADC) on your development workstation at $HOME/.config/gcloud/application_default_credentials.json.
$ gcloud init
$ gcloud auth application-default login
A browser window will pop up asking for your Google credentials to complete the authentication process. Once this is done, proceed to the next step of installing the Python modules.
$ pip install -U google-cloud-aiplatform
$ pip install -U jupyter
Launch the Jupyter Notebook server and access it from your favorite browser.
$ jupyter notebook --ip='0.0.0.0' --no-browser --NotebookApp.token='' --NotebookApp.password=''
Start by importing the modules, then initialize the model.
from google.cloud import aiplatform
import vertexai
from vertexai.preview.generative_models import GenerativeModel, Part
The module vertexai.preview provides access to the foundation models available in Vertex AI. Check the documentation for the latest version and updated API.
vertexai.init()
model = GenerativeModel("gemini-pro")
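If the project and region cached by gcloud init are not picked up automatically, you can pass them to init explicitly. A minimal sketch, with placeholder values for the project ID and region:

# Hypothetical project ID and region; substitute your own values
vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-pro")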
Like other LLMs, Gemini has two APIs: text generation and chat completion.
Let’s try the text generation API.
response = model.generate_content("I have a Python in the backyard. What should I do?")
print(response.text)

Next, let’s explore the chat completion API. The key difference between text generation and chat completion is the ability to maintain the history of conversations in a list. Passing the history list automatically provides context for the model. It can even be saved to the local disk and loaded to pick up the same thread.
chat = model.start_chat(history=[])
response = chat.send_message("In one sentence, explain how a computer works to a young child.")
response.text
response = chat.send_message("Okay, how about a more detailed explanation to a high schooler?")
response.text
You can access the history list to see the entire conversation.
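For example, here is a minimal sketch that walks the history, saves it to a local JSON file, and reloads it to resume the thread. It assumes text-only turns, and chat_history.json is just a filename chosen for illustration:

import json
from vertexai.preview.generative_models import Content, Part

# Print each turn of the conversation; every history entry is a
# Content object with a role and a list of parts
for message in chat.history:
    print(message.role, ":", message.parts[0].text)

# Save the text of each turn to a local JSON file
saved = [{"role": m.role, "text": m.parts[0].text} for m in chat.history]
with open("chat_history.json", "w") as f:
    json.dump(saved, f)

# Rebuild the history and start a new chat that picks up the same thread
with open("chat_history.json") as f:
    restored = [Content(role=m["role"], parts=[Part.from_text(m["text"])])
                for m in json.load(f)]
chat = model.start_chat(history=restored)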
Accessing Gemini Through Google AI Studio
Google AI Studio is a playground to explore the generative AI models offered by Google. Anyone with a Google account can sign in to experiment with the models. However, for production usage, you still need to have an active project in Google Cloud.
Create an API key and set it as an environment variable.

$ export GOOGLE_API_KEY=YOUR_API_KEY
You need a different Python module to access the models through AI Studio.
$ pip install -U google-generativeai
Import the module and see if you can list the available models.
import os
import google.generativeai as genai
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)
models = genai.list_models()
for m in models:
    print(m.name)
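Not every model in the list serves the same API calls. As a refinement, here is a small sketch that filters for models supporting content generation; it assumes the supported_generation_methods attribute exposed by the library:

# Print only the models that support the generateContent method
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)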

The list includes the Gemini Pro model. Let's initialize it.
model = genai.GenerativeModel('gemini-pro')
Let’s now repeat the steps to perform text generation and chat completion.
response = model.generate_content("I have a Python in the backyard. What should I do?")
print(response.text)

chat = model.start_chat(history=[])
response = chat.send_message("In one sentence, explain how a computer works to a young child.")
print(response.text)
response = chat.send_message("Okay, how about a more detailed explanation to a high schooler?")
print(response.text)
Counting Tokens to Estimate the Cost
According to Google, text input is charged for every 1,000 characters of input (prompt) and every 1,000 characters of output (response). Characters are counted by UTF-8 code points, and white space is excluded from the count. The API has methods that give you the token count, which helps estimate the cost. The code below uses the count_tokens method and the usage_metadata property to translate the prompt and the LLM response into billable tokens; the prompt string is a placeholder, and the counts in the comments come from a sample run.
import vertexai
from google.cloud import aiplatform
from vertexai.preview.generative_models import GenerativeModel, Part

vertexai.init()
model = GenerativeModel("gemini-pro")

# Placeholder prompt; the counts in the comments below are sample output
prompt = "How do I get started with the Gemini Pro model?"

print(len(prompt))
# 43

print(model.count_tokens(prompt))
# total_tokens: 11
# total_billable_characters: 34

response = model.generate_content(prompt)
print(response.text)

print(response._raw_response.usage_metadata)
# prompt_token_count: 11
# candidates_token_count: 129
# total_token_count: 140
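The google-generativeai client exposes a similar count_tokens method, so you can make the same estimate through AI Studio. A minimal sketch, assuming genai was configured with your API key as shown earlier (the prompt string is again a placeholder):

import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called
model = genai.GenerativeModel("gemini-pro")

# Returns the token count for the given prompt
print(model.count_tokens("How do I get started with the Gemini Pro model?"))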
In the next part of this tutorial series, we will explore the basics of prompt engineering with Gemini. Stay tuned.
