In this tutorial, you will experiment with several chunking strategies using LangChain and the latest IBM® Granite™ model now available on watsonx.ai™. The overall goal is to perform retrieval augmented generation (RAG).
Chunking refers to the process of breaking large pieces of text into smaller text segments, or chunks. To appreciate the importance of chunking, it helps to understand RAG. RAG is a technique in natural language processing (NLP) that combines information retrieval and large language models (LLMs), retrieving relevant information from supplemental datasets to improve the quality of the LLM’s output. To manage large documents, we can use chunking to split the text into smaller, meaningful snippets. These text chunks can then be embedded and stored in a vector database by using an embedding model. Finally, the RAG system can use semantic search to retrieve only the most relevant chunks. Smaller chunks often outperform larger ones because they are more manageable for models with smaller context windows.
Some key components of chunking include:
Chunk size: the number of characters or tokens in each text segment.
Chunk overlap: the amount of text shared between consecutive chunks so that context is not lost at chunk boundaries.
There are several different chunking strategies to choose from. It is important to select the most effective chunking technique for the specific use case of your LLM application. Some commonly used chunking processes include:
Fixed-size chunking: splitting text into segments of a predetermined size, optionally with overlap.
Recursive chunking: splitting text hierarchically by working through an ordered list of separators until chunks fit the target size.
Semantic chunking: grouping sentences by the similarity of their embeddings so that each chunk stays topically coherent.
Document-based chunking: splitting structured documents, such as Markdown, by their inherent structure, for example by header.
While you can choose from several tools, this tutorial walks you through how to set up an IBM account to use a Jupyter Notebook.
Log in to watsonx.ai using your IBM Cloud® account.
Create a watsonx.ai project.
You can get your project ID from within your project. Click the Manage tab. Then, copy the project ID from the Details section of the General page. You need this ID for this tutorial.
Create a Jupyter Notebook.
This step will open a Notebook environment where you can copy the code from this tutorial. Alternatively, you can download this notebook to your local system and upload it to your watsonx.ai project as an asset. To view more Granite tutorials, check out the IBM Granite Community. This Jupyter Notebook along with the datasets used can be found on GitHub.
Create a watsonx.ai Runtime service instance (select your appropriate region and choose the Lite plan, which is a free instance).
Generate an API Key.
Associate the watsonx.ai Runtime service instance to the project that you created in watsonx.ai.
To set our credentials, we need the WATSONX_APIKEY and WATSONX_PROJECT_ID you generated in step 1. We will also set the URL serving as the API endpoint.
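A minimal sketch of this step might look like the following; the environment variable names and the us-south endpoint URL are examples, so substitute your own region and values.

```python
import os
from getpass import getpass

# Collect the API key and project ID, either from environment variables or interactively.
WATSONX_APIKEY = os.getenv("WATSONX_APIKEY") or getpass("Enter your watsonx.ai API key: ")
WATSONX_PROJECT_ID = os.getenv("WATSONX_PROJECT_ID") or input("Enter your watsonx.ai project ID: ")

# The URL is the regional API endpoint; us-south is used here as an example.
WATSONX_URL = "https://us-south.ml.cloud.ibm.com"
```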
We will use Granite 3.1 as our LLM for this tutorial. To initialize the LLM, we need to set the model parameters. To learn more about these model parameters, such as the minimum and maximum token limits, refer to the documentation.
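One way to initialize the model is with the WatsonxLLM class from the langchain_ibm package, as sketched below; the model ID and parameter values shown are assumptions that you can adjust.

```python
from langchain_ibm import WatsonxLLM

# Example model parameters; tune max_new_tokens and the decoding method for your use case.
parameters = {
    "decoding_method": "greedy",
    "min_new_tokens": 1,
    "max_new_tokens": 500,
}

llm = WatsonxLLM(
    model_id="ibm/granite-3-1-8b-instruct",  # assumed model ID for Granite 3.1 8B Instruct
    url=WATSONX_URL,
    apikey=WATSONX_APIKEY,
    project_id=WATSONX_PROJECT_ID,
    params=parameters,
)
```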
The context we are using for our RAG pipeline is the official IBM announcement for the release of Granite 3.1. We can load the blog to a document directly from the webpage by using LangChain's WebBaseLoader.
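A sketch of this step follows; the URL shown is a placeholder, so replace it with the address of the Granite 3.1 announcement blog post.

```python
from langchain_community.document_loaders import WebBaseLoader

# Placeholder URL: replace with the Granite 3.1 announcement blog post on ibm.com.
url = "https://www.ibm.com/..."
loader = WebBaseLoader(url)
docs = loader.load()
```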
Let’s walk through sample code for each of the chunking strategies covered earlier in this tutorial, all of which are available through LangChain.
To implement fixed-size chunking, we can use LangChain’s CharacterTextSplitter and set a chunk_size as well as a chunk_overlap. The chunk_size is measured in characters. Feel free to experiment with different values. We will also set the separator to the newline character so that we can differentiate between paragraphs. For tokenization, we can use the granite-3.1-8b-instruct tokenizer, which breaks text down into tokens that the LLM can process.
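A sketch of fixed-size chunking might look like the following; the chunk_size and chunk_overlap values are examples, and the Hugging Face tokenizer ID is assumed to be ibm-granite/granite-3.1-8b-instruct.

```python
from langchain_text_splitters import CharacterTextSplitter
from transformers import AutoTokenizer

# Load the Granite tokenizer (assumed Hugging Face model ID).
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.1-8b-instruct")

# Fixed-size chunking: chunk_size and chunk_overlap are measured in characters.
text_splitter = CharacterTextSplitter(
    separator="\n",   # split on newlines to keep paragraphs together
    chunk_size=500,   # example value; experiment with different sizes
    chunk_overlap=50,
)
fixed_chunks = text_splitter.split_documents(docs)

# Print one chunk to inspect its structure.
print(fixed_chunks[0])
```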
We can print one of the chunks to better understand its structure.
Output: (truncated)
We can also use the tokenizer to verify our process and check the number of tokens in each chunk. This step is optional and for demonstration purposes.
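A sketch of this optional check, reusing the tokenizer loaded earlier:

```python
# Count the tokens in each chunk with the Granite tokenizer.
for i, chunk in enumerate(fixed_chunks):
    num_tokens = len(tokenizer.encode(chunk.page_content))
    print(f"Chunk {i}: {num_tokens} tokens")
```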
Output:
Great! It looks like our chunk sizes were appropriately implemented.
For recursive chunking, we can use LangChain’s RecursiveCharacterTextSplitter. As in the fixed-size chunking example, we can experiment with different chunk and overlap sizes.
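A sketch of recursive chunking, using the same example chunk and overlap sizes:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Recursive chunking: the splitter works through its default separators
# ("\n\n", "\n", " ", "") until each chunk fits within chunk_size.
recursive_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
)
recursive_chunks = recursive_splitter.split_documents(docs)
print(recursive_chunks[0])
```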
Output:
The splitter successfully chunked the text by using the default separators: ["\n\n", "\n", " ", ""].
Semantic chunking requires an embedding or encoder model. We can use the granite-embedding-30m-english model as our embedding model. We can also print one of the chunks to better understand its structure.
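A sketch of semantic chunking follows; it assumes the langchain_experimental package for SemanticChunker and the WatsonxEmbeddings class from langchain_ibm, with the embedding model ID shown as an assumption.

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_ibm import WatsonxEmbeddings

# Embedding model used to measure semantic similarity between sentences.
embeddings = WatsonxEmbeddings(
    model_id="ibm/granite-embedding-30m-english",  # assumed watsonx.ai model ID
    url=WATSONX_URL,
    apikey=WATSONX_APIKEY,
    project_id=WATSONX_PROJECT_ID,
)

# Semantic chunking groups sentences whose embeddings are similar.
semantic_splitter = SemanticChunker(embeddings)
semantic_chunks = semantic_splitter.split_documents(docs)
print(semantic_chunks[0])
```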
Output: (truncated)
Documents of various file types are compatible with LangChain’s document-based text splitters. For this tutorial’s purposes, we will use a Markdown file. For examples of recursive JSON splitting, code splitting and HTML splitting, refer to the LangChain documentation.
An example of a Markdown file we can load is the README file for Granite 3.1 on IBM’s GitHub.
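A sketch of loading the README follows; the repository path in the raw URL is an assumption, so replace it with the actual location of the Granite 3.1 README.

```python
import requests

# Placeholder URL: point this at the raw Granite 3.1 README on GitHub.
readme_url = "https://raw.githubusercontent.com/ibm-granite/.../README.md"
markdown_text = requests.get(readme_url).text
print(markdown_text[:500])
```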
Output:
Now, we can use LangChain’s MarkdownHeaderTextSplitter to split the file by header type, which we set in the headers_to_split_on list. We will also print one of the chunks as an example.
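A sketch of header-based splitting; the header levels listed in headers_to_split_on are examples that you can adjust to match the file.

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

# Split on level-1 and level-2 Markdown headers.
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
markdown_chunks = markdown_splitter.split_text(markdown_text)
print(markdown_chunks[0])
```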
Output:
As you can see in the output, the chunking successfully split the text by header type.
Now that we have experimented with various chunking strategies, let’s continue with our RAG implementation. For this tutorial, we will choose the chunks produced by the semantic split and convert them to vector embeddings. An open source vector store we can use is Chroma DB. We can easily access Chroma functionality through the langchain_chroma package.
Let’s initialize our Chroma vector database, provide it with our embeddings model and add our documents produced by semantic chunking.
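A sketch of this step follows; the collection name is illustrative.

```python
from langchain_chroma import Chroma

# Create a Chroma collection backed by the Granite embedding model,
# then add the documents produced by semantic chunking.
vector_store = Chroma(
    collection_name="granite_rag",  # illustrative name
    embedding_function=embeddings,
)
ids = vector_store.add_documents(documents=semantic_chunks)
```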
Output:
Next, we can move on to creating a prompt template for our LLM. This prompt template allows us to ask multiple questions without altering the initial prompt structure. We can also provide our vector store as the retriever. This step finalizes the RAG structure.
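A sketch of the prompt template and RAG chain follows; the prompt wording and the LCEL-style chain composition are illustrative rather than the only way to wire this up.

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# A simple RAG prompt; the wording is illustrative.
prompt = PromptTemplate.from_template(
    "Answer the question using only the context provided.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n\n"
    "Answer:"
)

# Use the Chroma vector store as the retriever.
retriever = vector_store.as_retriever()

def format_docs(docs):
    # Join the retrieved chunks into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

# Assemble the RAG chain: retrieve context, fill the prompt, call the LLM.
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```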
Using our completed RAG workflow, let’s invoke a user query. First, we can strategically prompt the model without any additional context from the vector store we built to test whether the model is using its built-in knowledge or truly using the RAG context. The Granite 3.1 announcement blog references Docling, IBM’s tool for parsing various document types and converting them into Markdown or JSON. Let’s ask the LLM about Docling.
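Using the llm object initialized earlier, the direct query might look like this:

```python
# Query the model directly, with no retrieved context.
query = "What is Docling?"
print(llm.invoke(query))
```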
Output:
Clearly, the model was not trained on information about Docling, and without outside tools or context, it cannot answer the question. Now, let’s try providing the same query to the RAG chain we built.
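And the same query, now routed through the RAG chain so the model sees the retrieved context:

```python
# Invoke the RAG chain with the same query.
print(rag_chain.invoke(query))
```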
Output:
Great! The Granite model used the RAG context to provide accurate information about Docling while preserving semantic coherence. We demonstrated that the same result was not possible without RAG.
In this tutorial, you created a RAG pipeline and experimented with several chunking strategies to improve the system’s retrieval accuracy. Using the Granite 3.1 model, we successfully produced appropriate model responses to a user query related to the documents provided as context. The text we used for this RAG implementation was loaded from a blog on ibm.com announcing the release of Granite 3.1. The model provided us with information only accessible through the provided context since it was not part of the model's initial knowledge base.
For those in search of further reading, check out the results of a project comparing LLM performance using HTML structured chunking in comparison to watsonx chunking.