What is Retrieval Augmented Generation?

Are you curious about how modern AI systems manage to pull accurate information out of what seems like thin air? The answer lies in a breakthrough technology known as Retrieval-Augmented Generation, or RAG for short.

This technique dramatically improves the output of large language models by enabling them to reference an external knowledge base before generating a response. In simple terms, RAG acts as a smart assistant, drawing on a library of information to provide the most relevant and accurate answers.

This article will delve into what RAG is, how it operates, and why it is transforming the capabilities of AI systems across various sectors. By integrating RAG, these systems become more trustworthy, resourceful, and responsive to current events and specific queries.

Understanding Retrieval-Augmented Generation

Retrieval-Augmented Generation (RAG) is a novel approach that enhances large language models (LLMs), making their responses more accurate and up-to-date. But how does it work? Imagine RAG as a clever librarian who combs through a global digital library to find the exact information needed to answer your questions on the spot.

RAG Process Explained

The RAG process starts by collecting data from various sources. When a question comes in, it retrieves the information that best matches the user's query. That relevant data is then added to the model's prompt as extra context, grounding the response it generates. In essence, the LLM consults its extensive virtual reference library before answering you.
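
To make this concrete, here is a minimal sketch of the retrieve-then-augment pattern. It is illustrative only: retrieve() and generateAnswer() are hypothetical placeholders standing in for whatever retriever and LLM client you use.

// Minimal sketch of the RAG flow. retrieve() and generateAnswer() are
// hypothetical placeholders for your retriever and LLM client.
async function answerWithRAG(question) {
  // 1. Find the stored documents most relevant to the question.
  const docs = await retrieve(question);

  // 2. Mix the retrieved text into the prompt as grounding context.
  const prompt = `Context:\n${docs.join("\n")}\n\nQuestion: ${question}`;

  // 3. Have the LLM answer using that context.
  return generateAnswer(prompt);
}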

[Diagram showing the RAG process. Image courtesy of Stanford.]

RAG code sample

The code sample below, taken from my course on Retrieval-Augmented Generation (see the next section!), shows how to build a RAG project as a Cloudflare Worker, covering vectorization of data, vector databases, retrieval, and integration with an LLM:

import { CloudflareWorkersAIEmbeddings } from "langchain/embeddings/cloudflare_workersai";
import { CloudflareVectorizeStore } from "langchain/vectorstores/cloudflare_vectorize";
import { RetrievalQAChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { Document } from "langchain/document";

export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    // The question to answer, passed via the ?query= URL parameter.
    const query = url.searchParams.get("query") || "Hello World";

    // Generate embeddings with Cloudflare Workers AI (the AI binding)
    // and store the vectors in a Cloudflare Vectorize index.
    const embeddings = new CloudflareWorkersAIEmbeddings({
      binding: env.AI,
      modelName: "@cf/baai/bge-small-en-v1.5",
    });
    const store = new CloudflareVectorizeStore(embeddings, {
      index: env.VECTORIZE_INDEX,
    });

    if (url.pathname === "/add") {
      const body = await request.json();
      const id = body.id;
      const text = body.text;

      await store.addDocuments([text], { ids: [id] });

      return new Response("Not found", { status: 404 });
    }

    // Expose the vector store as a retriever for the QA chain.
    const storeRetriever = store.asRetriever();

    // Use OpenAI as the LLM that generates the final answer.
    const model = new OpenAI({
      openAIApiKey: env.OPENAI_API_KEY,
    });

    // The chain retrieves the most relevant documents from the store
    // and passes them to the LLM along with the user's question.
    const chain = RetrievalQAChain.fromLLM(model, storeRetriever);

    const res = await chain.call({ query });

    return new Response(JSON.stringify(res), {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    });
  },
};
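
To try the Worker once it is deployed (with the AI and VECTORIZE_INDEX bindings and the OPENAI_API_KEY secret configured through Wrangler), you could call it like this. The URL below is a placeholder for your own deployment:

// Hypothetical usage; replace the URL with your deployed Worker.
const base = "https://my-rag-worker.example.workers.dev";

// Add a document to the vector store.
await fetch(`${base}/add`, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    id: "doc-1",
    text: "Cloudflare Workers run JavaScript at the edge.",
  }),
});

// Ask a question that is answered from the stored documents.
const question = "Where do Workers run?";
const res = await fetch(`${base}/?query=${encodeURIComponent(question)}`);
console.log(await res.json());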

How to build a RAG application

Want to build your own RAG application? Our newest free course on our YouTube channel shows how to build a RAG project using LangChain and Cloudflare Workers AI.