rectorch

rectorch is a PyTorch-based framework for state-of-the-art top-N recommendation.


rectorch is a PyTorch-based framework for top-N recommendation. It includes several state-of-the-art top-N recommendation approaches implemented in PyTorch.

Included methods

The latest PyPi release contains the following methods.

| Name | Description | Ref. |
|------|-------------|------|
| MultiDAE | Denoising Autoencoder for Collaborative Filtering with Multinomial prior | [1] |
| MultiVAE | Variational Autoencoder for Collaborative Filtering with Multinomial prior | [1] |
| CMultiVAE | Conditioned Variational Autoencoder | [2] |
| CFGAN | Collaborative Filtering with Generative Adversarial Networks | [3] |
| EASE | Embarrassingly Shallow Autoencoder for sparse data | [4] |
| ADMM_Slim | ADMM SLIM: Sparse Recommendations for Many Users | [5] |
| SVAE | Sequential Variational Autoencoders for Collaborative Filtering | [6] |
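Most of the autoencoder-based methods above (MultiDAE, MultiVAE) optimize a multinomial likelihood over the items [1]. As a minimal, framework-independent illustration in pure Python (the function name and signature are illustrative, not part of the rectorch API):

```python
import math

def multinomial_log_likelihood(logits, x):
    """Multinomial log-likelihood as used by MultiDAE/MultiVAE [1]:
    sum_i x_i * log softmax(logits)_i, for a single user's binary
    interaction vector x over the item catalog."""
    m = max(logits)                      # shift for numerical stability
    exps = [math.exp(l - m) for l in logits]
    log_z = m + math.log(sum(exps))      # log of the softmax normalizer
    return sum(xi * (li - log_z) for xi, li in zip(x, logits))

# Example: catalog of 4 items; the user interacted with items 0 and 2.
ll = multinomial_log_likelihood([2.0, 0.1, 1.5, -1.0], [1, 0, 1, 0])  # ≈ -1.68
```

In the actual models this term is maximized (or its negation minimized) over the decoder output, with the KL regularizer added in the variational case.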

Getting started

Installation

rectorch is available on PyPI and can be installed using pip:

pip3 install rectorch

Requirements

If you install rectorch by cloning this repository, make sure to install all the requirements:

pip3 install -r requirements.txt

Architecture

rectorch is composed of 7 main modules, summarized below.

| Name | Scope |
|------|-------|
| configuration | Contains useful classes to manage the configuration files. |
| data | Manages the reading, writing, and loading of the data sets. |
| evaluation | Contains utility functions to evaluate recommendation engines. |
| metrics | Contains the definition of the evaluation metrics. |
| models | Includes the training algorithms for the implemented recommender systems. |
| nets | Contains definitions of the neural network architectures used by the implemented approaches. |
| samplers | Contains definitions of sampler classes useful when training neural network-based models. |
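As an illustration of the kind of utility the metrics module provides, here is a framework-independent Recall@k in pure Python, using the min(k, |relevant|) normalization of the Recall@R variant in [1] (the function name and signature are illustrative, not the rectorch API):

```python
def recall_at_k(ranked_items, relevant_items, k):
    """Recall@k: fraction of relevant items retrieved in the top-k of a
    ranked recommendation list, normalized by min(k, #relevant) so a
    perfect ranking always scores 1.0 (as in Liang et al. [1])."""
    if not relevant_items:
        return 0.0
    top_k = set(ranked_items[:k])
    hits = len(top_k & set(relevant_items))
    return hits / min(k, len(relevant_items))

# Example: items 1 and 2 are relevant; only item 1 appears in the top-2.
recall_at_k([3, 1, 4, 2], relevant_items=[1, 2], k=2)  # -> 0.5
```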

Tutorials

(To be released soon)

We will soon release a series of python notebooks with examples on how to train and evaluate recommendation methods using rectorch.

Documentation

The full documentation of the rectorch APIs is available at https://makgyver.github.io/rectorch/.

Known issues

The documentation has rendering issues on 4K displays. As a workaround, zoom in ([Ctrl][+], [Cmd][+]) on the page. Thanks for your patience; it will be fixed soon.

Testing

The easiest way to test rectorch is using pytest.

git clone https://github.com/makgyver/rectorch.git
cd rectorch/tests
pytest

You can also check the test coverage using the coverage package. From the tests folder:

coverage run -m pytest  
coverage report -m

Dev branch

rectorch is developed using a test-driven approach. The master branch (i.e., the PyPI release) is the up-to-date version of the framework, in which each module has been fully tested. However, new untested or under-development features are available in the dev branch. The dev version of rectorch can be used by cloning that branch:

git clone -b dev https://github.com/makgyver/rectorch.git
cd rectorch
pip3 install -r requirements.txt

Work in progress

The following features/changes will be released soon:

  • Splitting of the models module into sub-modules based on the models' characteristics;
  • Introduction of a "global" setting/configuration for the framework;
  • Adding the optimizer's parameters to the configuration;
  • Including horizontal splitting and leave-one-out in DataProcessing.
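For reference, leave-one-out splitting (one of the planned DataProcessing additions) typically holds out the last interaction of each user for testing. A minimal sketch under that common definition (pure Python, illustrative only, not the rectorch API):

```python
def leave_one_out(interactions):
    """Per-user leave-one-out split: for each user, all but the last
    interaction go to the training set, the last one to the test set.
    Users with fewer than 2 interactions stay entirely in training."""
    train, test = {}, {}
    for user, items in interactions.items():
        if len(items) < 2:
            train[user] = list(items)   # nothing can be held out
            continue
        train[user] = list(items[:-1])
        test[user] = items[-1]
    return train, test
```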

Suggestions

This framework is constantly growing, and the implemented methods are chosen based on the needs of our research activity. We plan to include as many state-of-the-art methods as we can, but if you have any specific request, feel free to contact us by opening an issue.

Citing this repo

If you are using rectorch in your work, please consider citing this repository.

@misc{rectorch,
    author = {Mirko Polato},
    title = {{rectorch: pytorch-based framework for top-N recommendation}},
    year = {2020},
    month = {may},
    doi = {10.5281/zenodo.3841898},
    version = {0.0.9-beta0},
    publisher = {Zenodo},
    url = {https://doi.org/10.5281/zenodo.3841898}
}

References

[1] Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. Variational Autoencoders for Collaborative Filtering. In Proceedings of the 2018 World Wide Web Conference (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 689–698. DOI: https://doi.org/10.1145/3178876.3186150

[2] Tommaso Carraro, Mirko Polato and Fabio Aiolli. Conditioned Variational Autoencoder for top-N item recommendation, 2020. arXiv pre-print: https://arxiv.org/abs/2004.11141

[3] Dong-Kyu Chae, Jin-Soo Kang, Sang-Wook Kim, and Jung-Tae Lee. 2018. CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). Association for Computing Machinery, New York, NY, USA, 137–146. DOI: https://doi.org/10.1145/3269206.3271743

[4] Harald Steck. 2019. Embarrassingly Shallow Autoencoders for Sparse Data. In The World Wide Web Conference (WWW ’19). Association for Computing Machinery, New York, NY, USA, 3251–3257. DOI: https://doi.org/10.1145/3308558.3313710

[5] Harald Steck, Maria Dimakopoulou, Nickolai Riabov, and Tony Jebara. 2020. ADMM SLIM: Sparse Recommendations for Many Users. In Proceedings of the 13th International Conference on Web Search and Data Mining (WSDM ’20). Association for Computing Machinery, New York, NY, USA, 555–563. DOI: https://doi.org/10.1145/3336191.3371774

[6] Noveen Sachdeva, Giuseppe Manco, Ettore Ritacco, and Vikram Pudi. 2019. Sequential Variational Autoencoders for Collaborative Filtering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (WSDM ’19). Association for Computing Machinery, New York, NY, USA, 600–608. DOI: https://doi.org/10.1145/3289600.3291007
