sklearn.decomposition.LatentDirichletAllocation
class sklearn.decomposition.LatentDirichletAllocation(n_components=10, *, doc_topic_prior=None, topic_word_prior=None, learning_method='batch', learning_decay=0.7, learning_offset=10.0, max_iter=10, batch_size=128, evaluate_every=-1, total_samples=1000000.0, perp_tol=0.1, mean_change_tol=0.001, max_doc_update_iter=100, n_jobs=None, verbose=0, random_state=None)
Latent Dirichlet Allocation with online variational Bayes algorithm.
The implementation is based on [1] and [2].
Parameters : n_components int, default=10
Number of topics.
Changed in version 0.19: n_topics was renamed to n_components.
doc_topic_prior float, default=None
Prior of document topic distribution theta. If the value is None, it defaults to 1 / n_components. In [1], this is called alpha.
topic_word_prior float, default=None
Prior of topic word distribution beta . If the value is None, defaults to 1 / n_components . In [1], this is called eta .
learning_method {'batch', 'online'}, default='batch'
Method used to update components_. Only used in the fit method. In general, if the data size is large, the online update will be much faster than the batch update (see the sketch after this parameter list).
'batch': Batch variational Bayes method. Use all training data in each EM update. Old components_ will be overwritten in each iteration.
'online': Online variational Bayes method. In each EM update, use a mini-batch of training data to update the components_ variable incrementally. The learning rate is controlled by the learning_decay and learning_offset parameters.
Changed in version 0.20: The default learning method is now 'batch'.
learning_decay float, default=0.7
A parameter that controls the learning rate in the online learning method. The value should be set between (0.5, 1.0] to guarantee asymptotic convergence. When the value is 0.0 and batch_size is n_samples, the update method is the same as batch learning. In the literature, this is called kappa.
learning_offset float, default=10.0
A (positive) parameter that downweights early iterations in online learning. It should be greater than 1.0. In the literature, this is called tau_0.
max_iter int, default=10
The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method.
batch_size int, default=128
Number of documents to use in each EM iteration. Only used in online learning.
evaluate_every int, default=-1
How often to evaluate perplexity. Only used in the fit method. Set it to 0 or a negative number to not evaluate perplexity during training at all. Evaluating perplexity can help you check convergence during training, but it will also increase total training time. Evaluating perplexity in every iteration might increase training time up to two-fold.
total_samples int, default=1e6
Total number of documents. Only used in the partial_fit method.
perp_tol float, default=1e-1
Perplexity tolerance in batch learning. Only used when evaluate_every is greater than 0.
mean_change_tol float, default=1e-3
Stopping tolerance for updating document topic distribution in E-step.
max_doc_update_iter int, default=100
Max number of iterations for updating document topic distribution in the E-step.
n_jobs int, default=None
The number of jobs to use in the E-step. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verbose int, default=0
Verbosity level.
random_state int, RandomState instance or None, default=None
Pass an int for reproducible results across multiple function calls. See Glossary .
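The interaction of learning_method with learning_decay, learning_offset, and batch_size is easiest to see in code. Below is a minimal sketch contrasting the two update strategies; the toy corpus and all hyperparameter values are illustrative, not recommendations:

from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(n_samples=500, random_state=0)

# Batch VB: every EM update uses the full corpus.
batch_lda = LatentDirichletAllocation(
    n_components=5, learning_method="batch", max_iter=10, random_state=0
)
batch_lda.fit(X)

# Online VB: each EM update uses one mini-batch of batch_size documents.
# learning_decay (kappa) and learning_offset (tau_0) control how quickly
# the contribution of successive mini-batches decays over updates.
online_lda = LatentDirichletAllocation(
    n_components=5,
    learning_method="online",
    learning_decay=0.7,    # must lie in (0.5, 1.0] for asymptotic convergence
    learning_offset=10.0,  # > 1.0 downweights early iterations
    batch_size=128,
    max_iter=10,           # passes (epochs) over the training data
    random_state=0,
)
online_lda.fit(X)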
Attributes : components_ ndarray of shape (n_components, n_features)
Variational parameters for topic word distribution. Since the complete conditional for topic word distribution is a Dirichlet, components_[i, j] can be viewed as a pseudocount that represents the number of times word j was assigned to topic i. It can also be viewed as a distribution over the words for each topic after normalization: model.components_ / model.components_.sum(axis=1)[:, np.newaxis] (see the sketch after this attributes list).
exp_dirichlet_component_ ndarray of shape (n_components, n_features)
Exponential value of expectation of log topic word distribution. In the literature, this is exp(E[log(beta)]) .
n_batch_iter_ int
Number of iterations of the EM step.
n_features_in_ int
Number of features seen during fit .
feature_names_in_ ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
n_iter_ int
Number of passes over the dataset.
bound_ float
Final perplexity score on training set.
doc_topic_prior_ float
Prior of document topic distribution theta . If the value is None, it is 1 / n_components .
random_state_ RandomState instance
RandomState instance that is generated either from a seed, the random number generator or by np.random .
topic_word_prior_ float
Prior of topic word distribution beta . If the value is None, it is 1 / n_components .
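A minimal sketch of working with the fitted attributes described above, using an illustrative toy corpus:

import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

# components_[i, j] is a pseudocount for word j in topic i; normalizing
# each row gives a probability distribution over the vocabulary per topic.
topic_word = lda.components_ / lda.components_.sum(axis=1)[:, np.newaxis]
assert np.allclose(topic_word.sum(axis=1), 1.0)

# exp_dirichlet_component_ holds exp(E[log(beta)]) with the same shape.
print(lda.exp_dirichlet_component_.shape)  # (5, 20) for this toy corpus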
See Also

sklearn.discriminant_analysis.LinearDiscriminantAnalysis
A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes' rule.
References

[1] "Online Learning for Latent Dirichlet Allocation", Matthew D. Hoffman, David M. Blei, Francis Bach, 2010: https://github.com/blei-lab/onlineldavb

[2] "Stochastic Variational Inference", Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley, 2013
Examples

>>> from sklearn.decomposition import LatentDirichletAllocation
>>> from sklearn.datasets import make_multilabel_classification
>>> # This produces a feature matrix of token counts, similar to what
>>> # CountVectorizer would produce on text.
>>> X, _ = make_multilabel_classification(random_state=0)
>>> lda = LatentDirichletAllocation(n_components=5,
...     random_state=0)
>>> lda.fit(X)
LatentDirichletAllocation(...)
>>> # get topics for some given samples:
>>> lda.transform(X[-2:])
array([[0.00360392, 0.25499205, 0.0036211 , 0.64236448, 0.09541846],
       [0.15297572, 0.00362644, 0.44412786, 0.39568399, 0.003586  ]])
Methods

fit(X[, y])
Learn model for the data X with variational Bayes method.
fit_transform(X[, y])
Fit to data, then transform it.
get_feature_names_out([input_features])
Get output feature names for transformation.
get_metadata_routing()
Get metadata routing of this object.
get_params([deep])
Get parameters for this estimator.
partial_fit(X[, y])
Online VB with Mini-Batch update.
perplexity(X[, sub_sampling])
Calculate approximate perplexity for data X.
score(X[, y])
Calculate approximate log-likelihood as score.
set_output(*[, transform])
Set output container.
set_params(**params)
Set the parameters of this estimator.
transform(X)
Transform data X according to the fitted model.
fit(X, y=None)
Learn model for the data X with variational Bayes method.
When learning_method is 'online', use mini-batch update. Otherwise, use batch update.
Parameters : X {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y Ignored
Not used, present here for API consistency by convention.
Returns : self
Fitted estimator.
fit_transform(X, y=None, **fit_params)
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters : X array-like of shape (n_samples, n_features)
Input samples.
y array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_params dict
Additional fit parameters.
Returns : X_new ndarray of shape (n_samples, n_features_new)
Transformed array.
get_feature_names_out(input_features=None)
Get output feature names for transformation.
The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"].
Parameters : input_features array-like of str or None, default=None
Only used to validate feature names with the names seen in fit .
Returns : feature_names_out ndarray of str objects
Transformed feature names.
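A short sketch of the naming scheme described above, on an illustrative toy corpus:

from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Output names are the lowercased class name followed by the topic index.
print(lda.get_feature_names_out())
# ['latentdirichletallocation0' 'latentdirichletallocation1'
#  'latentdirichletallocation2']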
get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Returns : routing MetadataRequest
A MetadataRequest encapsulating routing information.
get_params(deep=True)
Get parameters for this estimator.
Parameters : deep bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns : params dict
Parameter names mapped to their values.
partial_fit(X, y=None)
Online VB with Mini-Batch update.
Parameters : X {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y Ignored
Not used, present here for API consistency by convention.
Returns : self
Partially fitted estimator.
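A minimal sketch of incremental fitting with partial_fit; the chunking scheme and total_samples value are illustrative:

import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(n_samples=400, random_state=0)

lda = LatentDirichletAllocation(
    n_components=5,
    total_samples=400,  # size of the full corpus, used to scale the online update
    random_state=0,
)
# e.g. chunks arriving from disk or a stream
for chunk in np.array_split(X, 4):
    lda.partial_fit(chunk)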
perplexity(X, sub_sampling=False)
Calculate approximate perplexity for data X.
Perplexity is defined as exp(-1. * log-likelihood per word)
Changed in version 0.19: doc_topic_distr argument has been deprecated and is ignored because the user no longer has access to the unnormalized distribution.
Parameters : X {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
sub_sampling bool
Do sub-sampling or not.
Returns : score float
Perplexity score.
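A minimal sketch of using held-out perplexity (together with score, documented below) to compare fitted models; the split and n_components values are illustrative:

from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split

X, _ = make_multilabel_classification(random_state=0)
X_train, X_test = train_test_split(X, random_state=0)

for k in (3, 5, 10):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
    # Lower held-out perplexity (higher score) generally indicates a better fit.
    print(k, lda.perplexity(X_test), lda.score(X_test))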
score(X, y=None)
Calculate approximate log-likelihood as score.
Parameters : X {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
y Ignored
Not used, present here for API consistency by convention.
Returns : score float
Use approximate bound as score.
set_output(*, transform=None)
Set output container.
See Introducing the set_output API for an example on how to use the API.
Parameters : transform {"default", "pandas"}, default=None
Configure output of transform and fit_transform.
- "default": Default output format of a transformer
- "pandas": DataFrame output
- None: Transform configuration is unchanged
Returns : self estimator instance
Estimator instance.
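A minimal sketch of DataFrame output, assuming pandas is installed; the toy corpus is illustrative:

from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.set_output(transform="pandas")

# fit_transform / transform now return a DataFrame whose columns come
# from get_feature_names_out().
df = lda.fit_transform(X)
print(df.columns.tolist())
# ['latentdirichletallocation0', 'latentdirichletallocation1',
#  'latentdirichletallocation2']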
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Parameters : **params dict
Estimator parameters.
Returns : self estimator instance
Estimator instance.
transform(X)
Transform data X according to the fitted model.
Changed in version 0.18: doc_topic_distr is now normalized.
Parameters : X {array-like, sparse matrix} of shape (n_samples, n_features)
Document word matrix.
Returns : doc_topic_distr ndarray of shape (n_samples, n_components)
Document topic distribution for X.
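A minimal sketch illustrating the normalization noted above, on an illustrative toy corpus:

import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import LatentDirichletAllocation

X, _ = make_multilabel_classification(random_state=0)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

doc_topic = lda.transform(X[:3])
print(doc_topic.shape)        # (3, 5)
# Since 0.18 the returned distribution is normalized: each row sums to 1.
print(doc_topic.sum(axis=1))  # approximately [1. 1. 1.]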