Topic Modelling for Humans

gensim – Topic Modelling in Python


Gensim is a Python library for topic modelling, document indexing
and similarity retrieval with large corpora. Target audience is the
natural language processing (NLP) and information retrieval (IR)
community.

Features

  • All algorithms are memory-independent w.r.t. the corpus size
    (can process input larger than RAM, streamed, out-of-core),
  • Intuitive interfaces
    • easy to plug in your own input corpus/datastream (trivial
      streaming API; see the sketch after this list)
    • easy to extend with other Vector Space algorithms (trivial
      transformation API)
  • Efficient multicore implementations of popular algorithms, such as
    online Latent Semantic Analysis (LSA/LSI/SVD), Latent
    Dirichlet Allocation (LDA), Random Projections (RP),
    Hierarchical Dirichlet Process (HDP) or word2vec deep learning.
  • Distributed computing: can run Latent Semantic Analysis and
    Latent Dirichlet Allocation on a cluster of computers.
  • Extensive documentation and Jupyter Notebook tutorials.
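
As a rough sketch of the streaming and transformation ideas above (the
file name corpus.txt, the whitespace tokenization and num_topics=10 are
placeholder choices for illustration, not anything gensim prescribes):

from gensim import corpora, models

# Build a dictionary in one streamed pass over a hypothetical file with
# one plain-text document per line.
dictionary = corpora.Dictionary(
    line.lower().split() for line in open("corpus.txt")
)

class MyCorpus:
    """Any object that yields bag-of-words vectors can act as a corpus."""
    def __iter__(self):
        for line in open("corpus.txt"):
            # One document at a time; the full file never sits in RAM.
            yield dictionary.doc2bow(line.lower().split())

corpus = MyCorpus()

# Plug the streamed corpus straight into a model, e.g. online LSI.
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=10)

Because MyCorpus yields one document at a time, the same code runs
unchanged whether corpus.txt holds a hundred lines or a hundred million.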

If this feature list left you scratching your head, you can first read
more about the Vector Space Model and unsupervised document analysis
on Wikipedia.


Support

Please raise potential bugs on GitHub. See the Contribution Guide prior to raising an issue.

If you have an open-ended or a research question, the Mailing List is a
great resource.


Installation

This software depends on NumPy and SciPy, two Python packages for
scientific computing. You must have them installed prior to installing
gensim.

It is also recommended you install a fast BLAS library before installing
NumPy. This is optional, but using an optimized BLAS such as ATLAS or
OpenBLAS is known to improve performance by as much as an order of
magnitude. On OS X, NumPy picks up the BLAS that comes with it
automatically, so you don’t need to do anything special.

The simple way to install gensim is:

pip install -U gensim

Or, if you have instead downloaded and unzipped the source tar.gz
package, you’d run:

python setup.py test
python setup.py install

For alternative modes of installation (without root privileges,
development installation, optional install features), see the
documentation.
This version has been tested under Python 2.7, 3.5 and 3.6. Gensim’s github repo is hooked
against Travis CI for automated testing on every commit push and pull
request. Support for Python 2.6, 3.3 and 3.4 was dropped in gensim 1.0.0. Install gensim 0.13.4 if you must use Python 2.6, 3.3 or 3.4. Support for Python 2.5 was dropped in gensim 0.10.0; install gensim 0.9.1 if you must use Python 2.5.

How come gensim is so fast and memory efficient? Isn’t it pure Python, and isn’t Python slow and greedy?

Many scientific algorithms can be expressed in terms of large matrix
operations (see the BLAS note above). Gensim taps into these low-level
BLAS libraries, by means of its dependency on NumPy. So while
gensim-the-top-level-code is pure Python, it actually executes highly
optimized Fortran/C under the hood, including multithreading (if your
BLAS is so configured).
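
One quick way to see which BLAS your NumPy build actually uses is
NumPy's own build report:

import numpy

# Prints NumPy's build configuration, including the BLAS/LAPACK
# libraries it was compiled against.
numpy.show_config()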

Memory-wise, gensim makes heavy use of Python’s built-in generators and
iterators for streamed data processing. Memory efficiency was one of
gensim’s design goals, and is a central feature of gensim, rather than
something bolted on as an afterthought.
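
As a small illustration of that streamed style (the three toy documents
below are made up just to keep the snippet self-contained):

from gensim import corpora, models

# Stand-in for documents streamed from disk or a database.
def stream_tokens():
    for text in ("human computer interaction",
                 "graph of trees",
                 "trees and graphs"):
        yield text.lower().split()

dictionary = corpora.Dictionary(stream_tokens())

# Generator expressions are consumed lazily, one document at a time,
# so memory use stays flat regardless of corpus size.
bow_stream = (dictionary.doc2bow(tokens) for tokens in stream_tokens())
tfidf = models.TfidfModel(bow_stream)  # trained in a single streamed pass

# Transform an individual document on demand.
print(tfidf[dictionary.doc2bow("trees and interaction".split())])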



Adopters

  • RaRe Technologies (rare-technologies.com) – Machine learning & NLP
    consulting and training. Creators and maintainers of Gensim.
  • Mindseye (mindseye.com) – Similarities in legal documents
  • Talentpair (talentpair.com) – Data science driving high-touch recruiting
  • Tailwind (tailwindapp.com) – Post interesting and relevant content to Pinterest
  • Issuu (issuu.com) – Gensim’s LDA module lies at the very core of the
    analysis we perform on each uploaded publication to figure out what
    it’s all about.
  • Sports Authority (sportsauthority.com) – Text mining of customer surveys
    and social media sources
  • Search Metrics (searchmetrics.com) – Gensim word2vec used for entity
    disambiguation in Search Engine Optimisation
  • Cisco Security (cisco.com) – Large-scale fraud detection
  • 12K Research (12k.co) – Document similarity analysis on media articles
  • National Institutes of Health (github/NIHOPA) – Processing grants and
    publications with word2vec
  • Codeq LLC (codeq.com) – Document classification with word2vec
  • Mass Cognition (masscognition.com) – Topic analysis service for consumer
    text data and general text data
  • Stillwater Supercomputing (stillwater-sc.com) – Document comprehension
    and association with word2vec
  • Channel 4 (channel4.com) – Recommendation engine
  • Amazon (amazon.com) – Document similarity
  • SiteGround Hosting (siteground.com) – An ensemble search engine which
    uses different embeddings models and similarities, including word2vec,
    WMD, and LDA.
  • Juju (www.juju.com) – Provide non-obvious related job suggestions.
  • NLPub (nlpub.org) – Distributional semantic models including word2vec.
  • Capital modeling for customer complaints exploration.

Citing gensim

When citing gensim in academic papers and theses, please use this
BibTeX entry:

@inproceedings{rehurek_lrec,
      title = {{Software Framework for Topic Modelling with Large Corpora}},
      author = {Radim {\v R}eh{\r u}{\v r}ek and Petr Sojka},
      booktitle = {{Proceedings of the LREC 2010 Workshop on New
           Challenges for NLP Frameworks}},
      pages = {45--50},
      year = 2010,
      month = May,
      day = 22,
      publisher = {ELRA},
      address = {Valletta, Malta},
}