Improving Retrieval with Auto-Merging and Hierarchical Document Retrieval
Last Updated: September 24, 2024
This notebook shows how to use two experimental components for Haystack: AutoMergingRetriever and HierarchicalDocumentSplitter.
- Read the full article here
- Please join the discussion for these experimental components
- Documentation for AutoMergingRetriever
- Documentation for HierarchicalDocumentSplitter
- This material is part of the blog post on the Auto-Merging Retriever - add link here
Setting up
!pip install git+https://github.com/deepset-ai/haystack-experimental.git@main
!pip install haystack-ai
Let’s get a dataset to index and explore
- We will use a dataset containing 2,225 news articles from the paper “Practical Solutions to the Problem of Diagonal Dominance in Kernel Document Clustering”, Proc. ICML 2006, by D. Greene and P. Cunningham.
- The original dataset is available at http://mlg.ucd.ie/datasets/bbc.html, but we will instead use a CSV-processed version available here: https://raw.githubusercontent.com/amankharwal/Website-data/master/bbc-news-data.csv
!wget https://raw.githubusercontent.com/amankharwal/Website-data/master/bbc-news-data.csv
--2024-09-06 09:41:04-- https://raw.githubusercontent.com/amankharwal/Website-data/master/bbc-news-data.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5080260 (4.8M) [text/plain]
Saving to: ‘bbc-news-data.csv’
bbc-news-data.csv 100%[===================>] 4.84M --.-KB/s in 0.09s
2024-09-06 09:41:05 (56.4 MB/s) - ‘bbc-news-data.csv’ saved [5080260/5080260]
Let’s convert the raw data into Haystack Documents
import csv
from typing import List
from haystack import Document
def read_documents() -> List[Document]:
    with open("bbc-news-data.csv", "r") as file:
        reader = csv.reader(file, delimiter="\t")
        next(reader, None)  # skip the headers
        documents = []
        for row in reader:
            category = row[0].strip()
            title = row[2].strip()
            text = row[3].strip()
            documents.append(Document(content=text, meta={"category": category, "title": title}))
    return documents
docs = read_documents()
docs[0:5]
[Document(id=8b0eec9b4039d3c21eed119c9cbf1022a172f6b96661a391c76ee9a00b388334, content: 'Quarterly profits at US media giant TimeWarner jumped 76% to $1.13bn (£600m) for the three months to...', meta: {'category': 'business', 'title': 'Ad sales boost Time Warner profit'}),
Document(id=0b20edb280b3c492d81751d97aa67f008759b242f2596d56c6816bacb5ea0c08, content: 'The dollar has hit its highest level against the euro in almost three months after the Federal Reser...', meta: {'category': 'business', 'title': 'Dollar gains on Greenspan speech'}),
Document(id=9465b0a3c9e81843db56beb8cb3183b14810e8fc7b3195bd37718296f3a13e31, content: 'The owners of embattled Russian oil giant Yukos are to ask the buyer of its former production unit t...', meta: {'category': 'business', 'title': 'Yukos unit buyer faces loan claim'}),
Document(id=151d64ed92b61b1b9e58c52a90e7ab4be964c0e47aaf1a233dfb93110986d9cd, content: 'British Airways has blamed high fuel prices for a 40% drop in profits. Reporting its results for th...', meta: {'category': 'business', 'title': "High fuel prices hit BA's profits"}),
Document(id=4355d611f770b814f9e7d33959ad9d16b69048650ed0eaf24f1bce3e8ab5bf4c, content: 'Shares in UK drinks and food firm Allied Domecq have risen on speculation that it could be the targe...', meta: {'category': 'business', 'title': 'Pernod takeover talk lifts Domecq'})]
We can see that we have successfully created Documents.
Document Splitting and Indexing
Now we split each document into smaller chunks, creating a hierarchical document structure that connects each smaller child document to its corresponding parent document.
We also create two document stores, one for the leaf documents and the other for the parent documents.
from typing import Tuple
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.document_stores.types import DuplicatePolicy
from haystack_experimental.components.splitters import HierarchicalDocumentSplitter
def indexing(documents: List[Document]) -> Tuple[InMemoryDocumentStore, InMemoryDocumentStore]:
    splitter = HierarchicalDocumentSplitter(block_sizes={10, 3}, split_overlap=0, split_by="word")
    docs = splitter.run(documents)

    # Store the leaf documents in one document store
    leaf_documents = [doc for doc in docs["documents"] if doc.meta["__level"] == 1]
    leaf_doc_store = InMemoryDocumentStore()
    leaf_doc_store.write_documents(leaf_documents, policy=DuplicatePolicy.OVERWRITE)

    # Store the parent documents in another document store
    parent_documents = [doc for doc in docs["documents"] if doc.meta["__level"] == 0]
    parent_doc_store = InMemoryDocumentStore()
    parent_doc_store.write_documents(parent_documents, policy=DuplicatePolicy.OVERWRITE)

    return leaf_doc_store, parent_doc_store
leaf_doc_store, parent_doc_store = indexing(docs)
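To build intuition for what the splitter produces, here is a minimal, pure-Python sketch of two-level word splitting with block sizes 10 and 3 and no overlap. This is a conceptual illustration only, not the HierarchicalDocumentSplitter implementation: the real component emits Document objects and tracks parent-child links in metadata.

```python
def split_words(text: str, block_size: int) -> list[str]:
    # Split text into consecutive blocks of at most block_size words
    words = text.split()
    return [" ".join(words[i:i + block_size]) for i in range(0, len(words), block_size)]

text = "one two three four five six seven eight nine ten eleven twelve"

# Level 0: parent blocks of up to 10 words
parents = split_words(text, 10)
# Level 1: leaf blocks of up to 3 words, one list per parent
tree = {parent: split_words(parent, 3) for parent in parents}
```

Each leaf is retrievable on its own, while the tree lets a retriever walk back up to the parent that contains it.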
Retrieving Documents with Auto-Merging
We are now ready to query the document store using the AutoMergingRetriever. Let’s build a pipeline that uses the InMemoryBM25Retriever to handle the user queries, and connect it to the AutoMergingRetriever, which, based on the retrieved documents and the hierarchical structure, decides whether the leaf documents or their parent document should be returned.
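The merging decision can be sketched in plain Python. This is a conceptual model only, with hypothetical names (`auto_merge`, `parent_children`): the assumed semantics are that when the fraction of a parent's leaf children that were retrieved meets the threshold, the parent replaces those leaves in the output.

```python
def auto_merge(matched_leaves: list[str], parent_children: dict[str, list[str]],
               threshold: float = 0.6) -> list[str]:
    """Replace groups of matched leaves with their parent when enough
    of the parent's children were retrieved. Conceptual sketch only."""
    merged, output = set(), []
    for parent, children in parent_children.items():
        hits = [leaf for leaf in matched_leaves if leaf in children]
        if children and len(hits) / len(children) >= threshold:
            output.append(parent)   # enough children matched: return the parent
            merged.update(hits)
    # Leaves whose parent did not reach the threshold stay as-is
    output.extend(leaf for leaf in matched_leaves if leaf not in merged)
    return output

parent_children = {"P1": ["L1", "L2", "L3"], "P2": ["L4", "L5", "L6"]}
result = auto_merge(["L1", "L2", "L6"], parent_children, threshold=0.6)
# "P1" replaces L1 and L2 (2/3 >= 0.6); L6 stays a leaf (1/3 < 0.6)
```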
from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack_experimental.components.retrievers import AutoMergingRetriever
def querying_pipeline(leaf_doc_store: InMemoryDocumentStore, parent_doc_store: InMemoryDocumentStore, threshold: float = 0.6):
    pipeline = Pipeline()
    bm25_retriever = InMemoryBM25Retriever(document_store=leaf_doc_store)
    auto_merge_retriever = AutoMergingRetriever(parent_doc_store, threshold=threshold)
    pipeline.add_component(instance=bm25_retriever, name="BM25Retriever")
    pipeline.add_component(instance=auto_merge_retriever, name="AutoMergingRetriever")
    pipeline.connect("BM25Retriever.documents", "AutoMergingRetriever.matched_leaf_documents")
    return pipeline
Let’s create this pipeline, setting the threshold for the AutoMergingRetriever to 0.6.
pipeline = querying_pipeline(leaf_doc_store, parent_doc_store, threshold=0.6)
Let’s now query the document store for articles related to cybersecurity. We also make use of the pipeline parameter include_outputs_from to additionally get the outputs from the BM25Retriever component.
result = pipeline.run(data={'query': 'phishing attacks spoof websites spam e-mails spyware'}, include_outputs_from={'BM25Retriever'})
len(result['AutoMergingRetriever']['documents'])
10
len(result['BM25Retriever']['documents'])
10
retrieved_doc_titles_bm25 = sorted([d.meta['title'] for d in result['BM25Retriever']['documents']])
retrieved_doc_titles_bm25
['Bad e-mail habits sustains spam',
'Cyber criminals step up the pace',
'Cyber criminals step up the pace',
'More women turn to net security',
'Rich pickings for hi-tech thieves',
'Screensaver tackles spam websites',
'Security scares spark browser fix',
'Solutions to net security fears',
'Solutions to net security fears',
'Spam e-mails tempt net shoppers']
retrieved_doc_titles_automerging = sorted([d.meta['title'] for d in result['AutoMergingRetriever']['documents']])
retrieved_doc_titles_automerging
['Bad e-mail habits sustains spam',
'Cyber criminals step up the pace',
'Cyber criminals step up the pace',
'More women turn to net security',
'Rich pickings for hi-tech thieves',
'Screensaver tackles spam websites',
'Security scares spark browser fix',
'Solutions to net security fears',
'Solutions to net security fears',
'Spam e-mails tempt net shoppers']
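With the threshold at 0.6, both retrievers happen to return the same ten titles for this query. A quick way to spot differences once they do diverge (e.g. after lowering the threshold) is to diff the two title multisets. A sketch with hypothetical, shortened title lists for illustration:

```python
from collections import Counter

# Hypothetical shortened result lists, for illustration only
bm25 = ['Cyber criminals step up the pace', 'Cyber criminals step up the pace',
        'Solutions to net security fears', 'Spam e-mails tempt net shoppers']
merged = ['Cyber criminals step up the pace', 'Solutions to net security fears',
          'Solutions to net security fears', 'Spam e-mails tempt net shoppers']

# Titles over-represented in one result set relative to the other
only_bm25 = Counter(bm25) - Counter(merged)
only_merged = Counter(merged) - Counter(bm25)
```

A surplus on the auto-merging side typically means several sibling leaves were replaced by a single parent document.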