Surprise
Overview
Surprise is a Python scikit for building and analyzing recommender systems that deal with explicit rating data.
Surprise was designed with the following purposes in mind:
- Give users perfect control over their experiments. To this end, a strong emphasis is laid on documentation, which we have tried to make as clear and precise as possible by pointing out every detail of the algorithms.
- Alleviate the pain of Dataset handling. Users can use both built-in datasets (Movielens, Jester) and their own custom datasets (see the loading sketch after this list).
- Provide various ready-to-use prediction algorithms such as baseline algorithms, neighborhood methods, and matrix factorization-based methods (SVD, PMF, SVD++, NMF), among others. Various similarity measures (cosine, MSD, Pearson…) are also built-in.
- Make it easy to implement new algorithm ideas (a toy custom-algorithm sketch closes this overview).
- Provide tools to evaluate, analyse, and compare the algorithms’ performance. Cross-validation procedures can be run very easily using powerful CV iterators (inspired by scikit-learn’s excellent tools), as well as exhaustive search over a set of parameters.
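As a quick illustration of the custom-dataset point above, here is a minimal sketch using the Reader class; the file name ratings.csv and its comma-separated user/item/rating layout are hypothetical placeholders, not something shipped with the library.

```python
from surprise import Dataset, Reader

# Hypothetical ratings file: each line is "user,item,rating" on a 1-5 scale.
reader = Reader(line_format='user item rating', sep=',', rating_scale=(1, 5))
data = Dataset.load_from_file('ratings.csv', reader=reader)
```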
The name SurPRISE (roughly 🙂 ) stands for Simple Python RecommendatIon System Engine.
Please note that Surprise does not support implicit ratings or content-based information.
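To illustrate the point about implementing new algorithm ideas, here is a toy predictor built on AlgoBase, following the pattern Surprise uses for its own algorithms; the class name GlobalMean is made up for this sketch, and the algorithm itself is deliberately trivial.

```python
from surprise import AlgoBase, Dataset
from surprise.model_selection import cross_validate

class GlobalMean(AlgoBase):
    """Toy algorithm (illustrative only): always predict the global mean rating."""

    def fit(self, trainset):
        AlgoBase.fit(self, trainset)
        self.the_mean = trainset.global_mean
        return self

    def estimate(self, u, i):
        return self.the_mean

# The custom algorithm plugs into the evaluation tooling like any built-in one.
data = Dataset.load_builtin('ml-100k')
cross_validate(GlobalMean(), data, measures=['RMSE'], cv=3, verbose=True)
```

Subclassing AlgoBase and overriding fit() and estimate() is all it takes for an algorithm to work with the cross-validation tools.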
Getting started, example
Here is a simple example showing how you can (down)load a dataset, split it for 5-fold cross-validation, and compute the MAE and RMSE of the SVD algorithm.
```python
from surprise import SVD
from surprise import Dataset
from surprise.model_selection import cross_validate

# Load the movielens-100k dataset (download it if needed).
data = Dataset.load_builtin('ml-100k')

# Use the famous SVD algorithm.
algo = SVD()

# Run 5-fold cross-validation and print results.
cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)
```
```
Evaluating RMSE, MAE of algorithm SVD on 5 split(s).

                  Fold 1  Fold 2  Fold 3  Fold 4  Fold 5  Mean    Std
RMSE (testset)    0.9367  0.9355  0.9378  0.9377  0.9300  0.9355  0.0029
MAE (testset)     0.7387  0.7371  0.7393  0.7397  0.7325  0.7375  0.0026
Fit time          0.62    0.63    0.63    0.65    0.63    0.63    0.01
Test time         0.11    0.11    0.14    0.14    0.14    0.13    0.02
```
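Beyond cross-validation, an algorithm can be fit on a full trainset and queried for individual predictions. A minimal sketch, where the ml-100k raw ids '196' and '302' are arbitrary examples:

```python
from surprise import SVD, Dataset

data = Dataset.load_builtin('ml-100k')
trainset = data.build_full_trainset()  # train on every available rating

algo = SVD()
algo.fit(trainset)

# Raw ids are strings in ml-100k; user '196' and item '302' are arbitrary.
pred = algo.predict(uid='196', iid='302')
print(pred.est)  # estimated rating
```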
Surprise can do much more (e.g., GridSearchCV)! You’ll find more usage examples in the documentation.
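For instance, here is a minimal GridSearchCV sketch; the grid values below are arbitrary choices for illustration, while n_epochs and lr_all are genuine SVD parameters:

```python
from surprise import SVD, Dataset
from surprise.model_selection import GridSearchCV

data = Dataset.load_builtin('ml-100k')

# Grid values are arbitrary; n_epochs and lr_all are real SVD parameters.
param_grid = {'n_epochs': [5, 10], 'lr_all': [0.002, 0.005]}
gs = GridSearchCV(SVD, param_grid, measures=['rmse', 'mae'], cv=3)
gs.fit(data)

print(gs.best_score['rmse'])   # best RMSE across the grid
print(gs.best_params['rmse'])  # parameter combination that achieved it
```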
Benchmarks
Here are the average RMSE, MAE, and total execution time of various algorithms (with their default parameters) on a 5-fold cross-validation procedure. The datasets are the Movielens 100k and 1M datasets. The folds are the same for all the algorithms. All experiments were run on a laptop with an 11th Gen Intel i5 CPU at 2.60GHz. The code for generating these tables can be found in the benchmark example (a rough reproduction sketch follows the tables).
Movielens 100k | RMSE | MAE | Time |
---|---|---|---|
SVD | 0.934 | 0.737 | 0:00:06 |
SVD++ (cache_ratings=False) | 0.919 | 0.721 | 0:01:39 |
SVD++ (cache_ratings=True) | 0.919 | 0.721 | 0:01:22 |
NMF | 0.963 | 0.758 | 0:00:06 |
Slope One | 0.946 | 0.743 | 0:00:09 |
k-NN | 0.98 | 0.774 | 0:00:08 |
Centered k-NN | 0.951 | 0.749 | 0:00:09 |
k-NN Baseline | 0.931 | 0.733 | 0:00:13 |
Co-Clustering | 0.963 | 0.753 | 0:00:06 |
Baseline | 0.944 | 0.748 | 0:00:02 |
Random | 1.518 | 1.219 | 0:00:01 |
Movielens 1M | RMSE | MAE | Time |
---|---|---|---|
SVD | 0.873 | 0.686 | 0:01:07 |
SVD++ (cache_ratings=False) | 0.862 | 0.672 | 0:41:06 |
SVD++ (cache_ratings=True) | 0.862 | 0.672 | 0:34:55 |
NMF | 0.916 | 0.723 | 0:01:39 |
Slope One | 0.907 | 0.715 | 0:02:31 |
k-NN | 0.923 | 0.727 | 0:05:27 |
Centered k-NN | 0.929 | 0.738 | 0:05:43 |
k-NN Baseline | 0.895 | 0.706 | 0:05:55 |
Co-Clustering | 0.915 | 0.717 | 0:00:31 |
Baseline | 0.909 | 0.719 | 0:00:19 |
Random | 1.504 | 1.206 | 0:00:19 |
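As a rough sketch of how rows like those above could be reproduced, one might share a single KFold iterator across algorithms; the seed and the selection of algorithms here are assumptions, and the authoritative script is the benchmark example:

```python
from surprise import Dataset, KNNBasic, NormalPredictor, SVD
from surprise.model_selection import KFold, cross_validate

data = Dataset.load_builtin('ml-100k')

# Fixing the KFold seed gives every algorithm the same folds, as in the tables.
kf = KFold(n_splits=5, random_state=0)

for algo in (SVD(), KNNBasic(), NormalPredictor()):
    out = cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=kf)
    print(algo.__class__.__name__,
          out['test_rmse'].mean(), out['test_mae'].mean())
```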
Installation
With pip (you’ll need numpy and a C compiler; Windows users might prefer conda):
```
$ pip install numpy
$ pip install scikit-surprise
```
With conda:

```
$ conda install -c conda-forge scikit-surprise
```
For the latest version, you can also clone the repo and build the source (you’ll first need Cython and numpy):
```
$ pip install numpy cython
$ git clone https://github.com/NicolasHug/surprise.git
$ cd surprise
$ python setup.py install
```
License and reference
This project is licensed under the BSD 3-Clause license, so it can be used for pretty much everything, including commercial applications.
I’d love to know how Surprise is useful to you. Please don’t hesitate to open an issue and describe how you use it!
Please make sure to cite the paper if you use Surprise for your research:
```
@article{Hug2020,
  doi = {10.21105/joss.02174},
  url = {https://doi.org/10.21105/joss.02174},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {52},
  pages = {2174},
  author = {Nicolas Hug},
  title = {Surprise: A Python library for recommender systems},
  journal = {Journal of Open Source Software}
}
```
Contributors
The following persons have contributed to Surprise:
ashtou, bobbyinfj, caoyi, Олег Демиденко, Charles-Emmanuel Dias, dmamylin, Lauriane Ducasse, Marc Feger, franckjay, Lukas Galke, Tim Gates, Pierre-François Gimenez, Zachary Glassman, Jeff Hale, Nicolas Hug, Janniks, jyesawtellrickson, Doruk Kilitcioglu, Ravi Raju Krishna, lapidshay, Hengji Liu, Ravi Makhija, Maher Malaeb, Manoj K, James McNeilis, Naturale0, nju-luke, Pierre-Louis Pécheux, Jay Qi, Lucas Rebscher, Skywhat, Hercules Smith, David Stevens, Vesna Tanko, TrWestdoor, Victor Wang, Mike Lee Williams, Jay Wong, Chenchen Xu, YaoZh1918.
Development Status
Starting from version 1.1.0 (September 2019), I will only maintain the package and provide bugfixes, with perhaps occasional performance improvements. I have less time to dedicate to it now, so I’m unable to consider new features.
For bugs, issues, or questions about Surprise, please avoid sending me emails (I will most likely not be able to answer). Please use the GitHub project page instead, so that others can also benefit from it.