This library aims to address annotation disagreements in manually labelled data.
I started it as a project to develop some understanding of Python packaging and workflow. (This is the primary reason for the messy release history and commit logs, for which I apologise.) But I hope this will be useful for a wider audience as well.
To install, first set up and activate a virtualenv.
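For example, using Python's built-in venv module (the environment name .venv is arbitrary):

$ python3 -m venv .venv
$ source .venv/bin/activate

Then install with either: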
$ python3 -m pip install --index-url https://pypi.org/simple/ disagree
or
$ pip3 install disagree
To update to the latest version do:
$ pip3 install --upgrade disagree
Whilst working in NLP, I have repeatedly dealt with datasets that have been manually labelled, and have therefore had to evaluate the quality of agreement between the annotators. In my (limited) experience of doing this, I have come across a number of metrics and visualisations that have been helpful, and this library aims to bring them together in one place for others to use.
Please suggest any additions/functionalities, and I will try my best to add them.
The library provides:
- Visualisations of annotator disagreements
- Annotation agreement statistics

Worked examples are provided in the Jupyter notebooks directory.
BiDisagreements

The BiDisagreements class is primarily there for you to visualise the disagreements in the form of a matrix, but it has some other small functionalities as well (a usage sketch follows the attribute list below).
Parameters:
df: Pandas DataFrame containing the annotator labels
Attributes:
agreements_summary()
agreements_matrix()
labels_to_index()
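As a rough illustration, here is how the class might be used. The import path, the DataFrame layout (one column per annotator, one row per example, with None where an annotator gave no label), and the idea that the DataFrame is the only required constructor argument are assumptions on my part; the worked notebooks are the definitive reference.

```python
import pandas as pd

from disagree import BiDisagreements  # import path assumed

# Hypothetical toy data: one column per annotator, one row per example,
# None where an annotator did not label that example (assumed convention).
annotations = {
    "a": [None, None, None, None, None, 1, 3, 0, 1, 0, 0, 2, 2, None, 2],
    "b": [0, None, 1, 0, 2, 2, 3, 2, None, None, None, None, None, None, None],
    "c": [None, None, 1, 0, 2, 3, 3, None, 1, 0, 0, 2, 2, None, 3],
}
df = pd.DataFrame(annotations)

bidis = BiDisagreements(df)
bidis.agreements_summary()          # summary of agreement/disagreement counts
matrix = bidis.agreements_matrix()  # label-by-label matrix of pairwise disagreements
```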
Metrics

This module gives you access to a number of metrics typically used for annotation disagreement statistics (a usage sketch follows the attribute list below).
Attributes:
joint_probability(ann1, ann2)
cohens_kappa(ann1, ann2)
fliess_kappa()
correlation(ann1, ann2, measure="pearson")
metric_matrix(func)
alpha(data_type="nominal")
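To illustrate how these might be called, here is a sketch that reuses the toy DataFrame from the previous example. The class name metrics.Metrics, its constructor argument, and the assumption that all of the listed attributes live on one object are inferred from the attribute list above rather than verified; again, the worked notebooks are the authoritative reference.

```python
from disagree import metrics  # module name assumed

mets = metrics.Metrics(df)  # df built as in the previous example

# Pairwise statistics between two annotators, referred to by column name
joint = mets.joint_probability("a", "b")
kappa = mets.cohens_kappa("a", "b")
rho = mets.correlation("a", "b", measure="pearson")

# Statistics computed over all annotators at once
fk = mets.fliess_kappa()

# Matrix of a pairwise metric over every pair of annotators (assumed usage)
kappa_matrix = mets.metric_matrix(mets.cohens_kappa)

# Krippendorff's alpha for nominal labels
alpha = mets.alpha(data_type="nominal")
```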
Version | Published
---|---
1.2.7 | 2 months ago
1.2.6 | 2 months ago
1.2.5 | 2 months ago
1.2.4 | 2 months ago