Tutorial: First Step in a RAG pipeline, Data Ingestion (Part 1) - Extracting and Indexing Data from Documents using LangChain.
This tutorial will guide you through the process of data loading and indexing, which is the first part of a RAG pipeline. But first, what is a RAG pipeline? And what parts does it have? RAG, or Retrieval Augmented Generation, is a technique that enhances the capabilities of large language models (LLMs) by providing them with relevant information from external sources. The steps in a RAG pipeline are essentially the following two:
- Document ingestion: In this step we extract relevant information from documents in various formats (PDF, video, websites, Markdown…) and pre-process it so it can be easily stored in a vector database. Once stored, we can later retrieve this information via similarity search and use it as context for our LLM.
- Retrieval of relevant information and response generation: When a user query is received, the RAG pipeline searches through the dataset created in the previous step to find the most pertinent information related to the query. Vector databases are key here, since they can perform this search quickly over vector embeddings. The retrieved information is then fed to the LLM as context. The LLM, now equipped with additional context, can generate a more comprehensive and relevant response to the user's query.
As we discussed, this tutorial focuses on data ingestion, the first step in the RAG pipeline. This step can be further divided into four sub-steps, which are represented in Fig. 1.
In more detail, the steps and the technologies used in each of them are as follows:
- Data loading and data splitting: We’ll explore how to extract data from a PDF document and a website using LangChain. This powerful Python framework offers tools for data extraction from various sources (explore their website for more details). LangChain will then be used to split the data into semantic chunks suitable for the embedding model. This step, highlighted in red in the figure, is covered in the first part of the tutorial.
- Data embedding: We'll utilize a model to transform the preprocessed text from step 1 into vectors. Sentences with similar meanings will be mapped to vectors close together in the vector space. We'll leverage a model from Cloudflare Workers AI to generate these vectors.
- Data storing: Finally, we'll store the generated vectors in a vector database (vector DB) for future retrieval. This tutorial utilises Qdrant, which offers a generous free tier that meets our needs. However, since we'll be storing only the vectors in Qdrant, we'll also store the raw text in a Cloudflare D1 database for later retrieval.
Great! Let’s dive into the tutorial and explore the data ingestion process step-by-step.
Prerequisites
Before starting this tutorial, ensure familiarity with Cloudflare Workers or Cloudflare Pages, including `wrangler` usage and installation. For a brief overview of deploying a Cloudflare Page with AstroJS, refer to the first part of [[Chatbot in AstroJS with CloudFlare Workers AI|my tutorial for creating a Chatbot in AstroJS]].
Additionally, you’ll need the following prerequisites:
As we’ll be writing Python scripts for data loading and splitting, you’ll need to install several Python packages.
Since we’re using the LangChain framework, start by installing the core package:
In addition to the main package, we’ll need to install some dependencies for our script to function properly:
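A plausible set of dependencies, assuming we use LangChain's PDF and web loaders: `langchain-community` provides the loaders in recent LangChain versions, `pypdf` backs the PDF loader, and `beautifulsoup4` backs the web loader. The exact list depends on which loaders you end up using:

```shell
pip install langchain-community pypdf beautifulsoup4
```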
You can save these in a `requirements.txt` file and run:
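```shell
pip install -r requirements.txt
```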
Or you can install them one by one.
Python Script for Data Loading and Data Splitting
This section outlines the creation of a script to extract and index data from a PDF document or a website, marking the first step in the data ingestion phase of the RAG pipeline. I have named the script `read-document.py`.
Structure of the Script
The script’s structure is straightforward, incorporating an argument parser for cleaner and more user-friendly usage. The main function will include the following:
This structure allows you to provide the source (PDF path or URL) and optionally specify the chunk size and the overlap for the data splitting. The functions `load_pdf_document`, `load_web_document`, `r_splitter`, and `save_document` are defined at the beginning of the script, and their functionalities are explained in the following sections. To execute the script, use the following command in your terminal:
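A hypothetical invocation, assuming the script takes the source as a positional argument plus optional `--chunk-size` and `--chunk-overlap` flags:

```shell
python read-document.py document.pdf --chunk-size 1000 --chunk-overlap 150
```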
Functions to Load the Data
The `load_pdf_document` and `load_web_document` functions will load the raw data from the PDF document and the specified URL, respectively. They utilise the loader functions from LangChain:
Splitting the Text into Chunks
After obtaining the raw data from the loaders, we need to split it into meaningful chunks that can be stored in a vector database. These chunks will then be retrieved and used as contexts for our LLM.
To accomplish this, we'll employ another utility from LangChain called `RecursiveCharacterTextSplitter`:
The `RecursiveCharacterTextSplitter` uses paragraph breaks (`\n\n`), line breaks (`\n`), and spaces as separators, in that order of priority, aiming to keep paragraphs, sentences, and words together whenever possible. This approach preserves the strongest semantic relationships within the text chunks.
Saving the Data in a JSON file
Finally, once the text has been split into semantic chunks, it needs to be stored for later processing and storage in a vector database. I’ve chosen to save the text in a JSON file using the following function:
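A minimal sketch of such a function, assuming each chunk is a LangChain Document with a `page_content` attribute; the output filename is illustrative:

```python
import json

def save_document(chunks, output_path="chunks.json"):
    """Save only the text of each chunk to a JSON file, discarding metadata."""
    texts = [chunk.page_content for chunk in chunks]
    with open(output_path, "w", encoding="utf-8") as f:
        json.dump({"chunks": texts}, f, ensure_ascii=False, indent=2)
```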
As noted, we’re only storing the text and discarding the metadata for this tutorial. This is because we’re focusing on using plain text for information retrieval with the LLM. If desired, you can choose to save the metadata for future use.
This concludes the first part of this tutorial. Let's move on to the next section to set up the Qdrant and Cloudflare D1 databases.