From the course: Level up LLM applications development with LangChain and OpenAI

Retrieval with query analysis

- [Instructor] In some cases, we need to handle large documents and multiple data sources. In the upcoming project, we set up a retrieval application specifically optimized to select which retriever to use. In some cases, it can also be beneficial to set up multiple vector stores. This is where LangChain comes into play, by making this type of setup easy and by abstracting much of the complexity involved in creating a multi-retriever application. So the next module and next example cover how to have multiple retrievers in your application, with a query analysis technique that allows for selection of which retriever to use. I will show you a simple example, so let's go back up. First, what we want to do is create an index with two data sources. Here, that's going to be with mock data. The first data source will be "Harrison worked at Kensho," and the second will be "Ankush worked at Facebook." So that's going to be the usual setup: first, we create the index from the data sources, then create the embeddings and store them in the vector store. So let's begin with the first part. We're going to set up the project, then create the index with the two sources of data. That's going to be up next.
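The routing idea described here, two data sources, each behind its own retriever, with a query-analysis step choosing between them, can be sketched without any libraries. This is an illustration only: the helper names (`score`, `select_retriever`) are invented for this example, and a toy keyword-overlap count stands in for the embedding similarity that OpenAI embeddings and real vector stores provide in the actual LangChain project.

```python
# A minimal, library-free sketch of query analysis with retriever selection.
# In the real project, LangChain vector stores and OpenAI embeddings do this
# work; here each "vector store" is just a list of strings, and a toy
# keyword-overlap score stands in for embedding similarity.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

# Two mock data sources, one per retriever, matching the lesson's examples.
harrison_store = ["Harrison worked at Kensho"]
ankush_store = ["Ankush worked at Facebook"]

def select_retriever(query: str):
    """Query analysis: pick the store whose contents best match the query."""
    stores = {"harrison": harrison_store, "ankush": ankush_store}
    best = max(stores, key=lambda name: max(score(query, d) for d in stores[name]))
    return best, stores[best]

name, docs = select_retriever("Where did Harrison work?")
print(name, docs)
```

In the LangChain version built in the upcoming module, the keyword heuristic is replaced by a structured query-analysis step, and each list becomes a vector store backed by embeddings, but the control flow is the same: analyze the query, select a retriever, then retrieve.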