cdQA: Closed Domain Question Answering


An End-To-End Closed Domain Question Answering System. Built on top of the HuggingFace transformers library.

⛔ [NOT MAINTAINED] This repository is no longer maintained, but is being kept around for educational purposes. If you want a maintained alternative to cdQA check out:

cdQA in detail

If you are interested in understanding how the system works and its implementation, we wrote an article on Medium with a high-level explanation.

We also made a presentation during the #9 NLP Breakfast organised by Feedly. You can check it out here.

Installation

With pip

pip install cdqa

From source

git clone https://github.com/cdqa-suite/cdQA.git
cd cdQA
pip install -e .

Hardware Requirements

Experiments have been done with:

  • CPU 👉 AWS EC2 t2.medium Deep Learning AMI (Ubuntu) Version 22.0
  • GPU 👉 AWS EC2 p3.2xlarge Deep Learning AMI (Ubuntu) Version 22.0 + a single Tesla V100 16GB.

Getting started

Preparing your data


To use cdQA you need to create a pandas DataFrame with the following columns:

| title             | paragraphs                                            |
| ----------------- | ----------------------------------------------------- |
| The Article Title | [Paragraph 1 of Article, ..., Paragraph N of Article] |
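Such a DataFrame can also be built directly in code. A minimal example (the `title` and `paragraphs` column names match the expected format above; the row content is placeholder text):

```python
import pandas as pd

# Each row is one document: a title plus a list of its paragraphs.
df = pd.DataFrame({
    'title': ['The Article Title'],
    'paragraphs': [[
        'Paragraph 1 of Article',
        'Paragraph 2 of Article',
    ]],
})

print(df.loc[0, 'title'])
print(len(df.loc[0, 'paragraphs']))
```

Note that `paragraphs` holds a Python list per row, which is why the CSV-loading example below passes `converters={'paragraphs': literal_eval}`.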

With converters

The objective of cdqa converters is to make it easy to create this DataFrame from your raw documents database. For instance, the pdf_converter can create a cdqa DataFrame from a directory containing .pdf files:

from cdqa.utils.converters import pdf_converter

df = pdf_converter(directory_path='path_to_pdf_folder')

You will need to install Java OpenJDK to use this converter. We currently have converters for:

  • pdf
  • markdown

We plan to improve and add more converters in the future. Stay tuned!
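In the meantime, a custom converter is straightforward to write yourself. The following hypothetical sketch (not part of cdqa) builds a DataFrame in the expected format from a directory of plain-text files, treating blank lines as paragraph breaks:

```python
import os
import pandas as pd

def txt_converter(directory_path):
    """Hypothetical converter: one row per .txt file, paragraphs split on blank lines."""
    rows = []
    for name in sorted(os.listdir(directory_path)):
        if not name.endswith('.txt'):
            continue
        with open(os.path.join(directory_path, name), encoding='utf-8') as f:
            text = f.read()
        # Split on blank lines and drop empty chunks.
        paragraphs = [p.strip() for p in text.split('\n\n') if p.strip()]
        rows.append({'title': os.path.splitext(name)[0], 'paragraphs': paragraphs})
    return pd.DataFrame(rows, columns=['title', 'paragraphs'])
```

The resulting DataFrame can be fed to the pipeline exactly like the output of the built-in converters.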

Downloading pre-trained models and data

You can download the models and data manually from the GitHub releases or use our download functions:

from cdqa.utils.download import download_squad, download_model, download_bnpp_data

directory = 'path-to-directory'

# Downloading data
download_bnpp_data(dir=directory)
# Downloading pre-trained BERT fine-tuned on SQuAD 1.1
download_model('bert-squad_1.1', dir=directory)

# Downloading pre-trained DistilBERT fine-tuned on SQuAD 1.1
download_model('distilbert-squad_1.1', dir=directory)

Training models

Fit the pipeline on your corpus using the pre-trained reader:

import pandas as pd
from ast import literal_eval
from cdqa.pipeline import QAPipeline

df = pd.read_csv('your-custom-corpus-here.csv', converters={'paragraphs': literal_eval})

cdqa_pipeline = QAPipeline(reader='bert_qa.joblib') # use 'distilbert_qa.joblib' for DistilBERT instead of BERT
cdqa_pipeline.fit_retriever(df=df)
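Fitting the retriever indexes your paragraphs so they can be ranked by sparse lexical similarity to an incoming query before the reader extracts an answer. A toy, dependency-free sketch of that retrieval idea (cdQA's actual retriever uses a scikit-learn TF-IDF vectorizer; this is only an illustration of the concept):

```python
import math
from collections import Counter

def tfidf_rank(query, paragraphs):
    """Rank paragraphs against a query with a toy TF-IDF cosine score."""
    docs = [p.lower().split() for p in paragraphs]
    n = len(docs)
    # Document frequency of each term across paragraphs.
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(query.lower().split())
    scores = [cosine(q, vec(d)) for d in docs]
    return sorted(range(n), key=lambda i: -scores[i])

paragraphs = [
    'the cat sat on the mat',
    'stock markets fell sharply today',
    'a cat and a dog played outside',
]
ranking = tfidf_rank('where did the cat sit', paragraphs)
```

The top-ranked paragraphs are then handed to the BERT reader, which does the actual span extraction.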

If you want to fine-tune the reader on your custom SQuAD-like annotated dataset:

cdqa_pipeline.fit_reader('path-to-custom-squad-like-dataset.json')

Save the reader model after fine-tuning:

cdqa_pipeline.dump_reader('path-to-save/bert_reader.joblib')
Making predictions

To get the best prediction given an input query:

cdqa_pipeline.predict(query='your question')

To get the N best predictions:

cdqa_pipeline.predict(query='your question', n_predictions=N)

You can also change the weight of the retriever score relative to the reader score when computing the final ranking score (the default is 0.35, which was found to be the best weight on the development set of SQuAD 1.1-open):

cdqa_pipeline.predict(query='your question', retriever_score_weight=0.35)
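Assuming the final score is a convex combination of the two stage scores (our reading of the parameter; the exact formula is internal to cdQA), the ranking behaves like:

```python
def final_score(retriever_score, reader_score, retriever_score_weight=0.35):
    """Blend the two stage scores; a higher weight favours the retriever."""
    return (retriever_score_weight * retriever_score
            + (1 - retriever_score_weight) * reader_score)

# A paragraph the retriever likes vs. a span the reader likes:
a = final_score(retriever_score=0.9, reader_score=0.2)
b = final_score(retriever_score=0.3, reader_score=0.8)
```

With the default weight of 0.35, the reader's confidence dominates, so the second candidate wins here; raising `retriever_score_weight` shifts the balance toward lexical paragraph relevance.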

Evaluating models

In order to evaluate models on your custom dataset you will need to annotate it first. Annotation and evaluation can be done in 4 steps:

  1. Convert your pandas DataFrame into a json file with SQuAD format:

    from cdqa.utils.converters import df2squad
    json_data = df2squad(df=df, squad_version='v1.1', output_dir='.', filename='dataset-name')
  2. Use an annotator to add ground truth question-answer pairs:

    Please refer to our cdQA-annotator, a web-based annotator for closed-domain question answering datasets with SQuAD format.

  3. Evaluate the pipeline object:

    from cdqa.utils.evaluation import evaluate_pipeline
    evaluate_pipeline(cdqa_pipeline, 'path-to-annotated-dataset.json')
  4. Evaluate the reader:

    from cdqa.utils.evaluation import evaluate_reader
    evaluate_reader(cdqa_pipeline, 'path-to-annotated-dataset.json')
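Both evaluation helpers report the standard SQuAD v1.1 metrics: exact match (EM) and token-level F1. For reference, those metrics are computed roughly as follows (simplified from the official SQuAD evaluation script, which additionally handles multiple ground-truth answers per question):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = ''.join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r'\b(a|an|the)\b', ' ', text)
    return ' '.join(text.split())

def exact_match(prediction, ground_truth):
    return normalize(prediction) == normalize(ground_truth)

def f1_score(prediction, ground_truth):
    pred, gold = normalize(prediction).split(), normalize(ground_truth).split()
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```

EM rewards only answers that match a ground truth exactly after normalization, while F1 gives partial credit for overlapping tokens, which is why the two numbers usually diverge.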

Notebook Examples

We prepared some notebook examples under the examples directory.

You can also play directly with these notebook examples using Binder or Google Colaboratory:

| # | Notebook                     | Hardware   | Platform      |
| - | ---------------------------- | ---------- | ------------- |
| 1 | First steps with cdQA        | CPU or GPU | Binder, Colab |
| 2 | Using the PDF converter      | CPU or GPU | Binder, Colab |
| 3 | Training the reader on SQuAD | GPU        | Colab         |

Binder and Google Colaboratory provide temporary environments and may be slow to start, but we recommend them if you want to get started with cdQA easily.



Deployment

You can deploy a cdQA REST API by executing:

export dataset_path=path-to-dataset.csv
export reader_path=path-to-reader-model
flask run -h 0.0.0.0

You can now make requests to test your API (here using HTTPie):

http localhost:5000/api query=='your question here'
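The same request can be made from Python using only the standard library (assuming the API is running locally on port 5000 and accepts a `query` URL parameter, as in the HTTPie example above):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

query = 'your question here'
url = 'http://localhost:5000/api?' + urlencode({'query': query})

# Uncomment once the API is running:
# with urlopen(url) as resp:
#     print(json.load(resp))
```

`urlencode` takes care of escaping spaces and special characters in the question.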

If you wish to serve a user interface on top of your cdQA system, follow the instructions of cdQA-ui, a web interface developed for cdQA.


Contributing

Read our Contributing Guidelines.


References

| Type | Title | Author | Year |
| ---- | ----- | ------ | ---- |
| 📹 Video | Stanford CS224N: NLP with Deep Learning, Lecture 10 – Question Answering | Christopher Manning | 2019 |
| 📰 Paper | Reading Wikipedia to Answer Open-Domain Questions | Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes | 2017 |
| 📰 Paper | Neural Reading Comprehension and Beyond | Danqi Chen | 2018 |
| 📰 Paper | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | 2018 |
| 📰 Paper | Contextual Word Representations: A Contextual Introduction | Noah A. Smith | 2019 |
| 📰 Paper | End-to-End Open-Domain Question Answering with BERTserini | Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin | 2019 |
| 📰 Paper | Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering | Wei Yang, Yuqing Xie, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin | 2019 |
| 📰 Paper | Passage Re-ranking with BERT | Rodrigo Nogueira, Kyunghyun Cho | 2019 |
| 📰 Paper | MRQA: Machine Reading for Question Answering | Jonathan Berant, Percy Liang, Luke Zettlemoyer | 2019 |
| 📰 Paper | Unsupervised Question Answering by Cloze Translation | Patrick Lewis, Ludovic Denoyer, Sebastian Riedel | 2019 |
| 💻 Framework | Scikit-learn: Machine Learning in Python | Pedregosa et al. | 2011 |
| 💻 Framework | PyTorch | Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan | 2016 |
| 💻 Framework | Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch | Hugging Face | 2018 |


