A Word2Vec Model File Built from the French Wikipedia XML Dump Using Gensim

Published in 2016 by Christof Schöch
This dataset is available in a repository.


Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

A word2vec model file built from the French Wikipedia XML dump using gensim. The data published here includes three model files (all three must be kept in the same folder) as well as the Python script used to build the model (for documentation purposes). The Wikipedia dump was downloaded on October 7, 2016 from https://dumps.wikimedia.org/. Before building the model, plain text was extracted from the dump; the resulting dataset comprises about 500 million words, or roughly 3.6 GB of plain text. The principal parameters for building the model were as follows: no lemmatization was performed, tokenization was done with the "\W" regular expression (any non-word character splits tokens), and the model was built with 500 dimensions.
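
The record does not reproduce the build script itself, but a minimal sketch of the procedure described above, using gensim, might look like the following. The corpus and output file names are hypothetical, and the window size, minimum count, and worker count are assumptions not stated in the record; only the "\W" tokenization, the absence of lemmatization, and the 500 dimensions come from the abstract. The dimensionality parameter was called "size" in the gensim releases current in 2016 and is "vector_size" in gensim 4.x.

import re
from gensim.models import Word2Vec

class PlainTextCorpus:
    """Stream token lists from a plain-text file one line at a time,
    so the full 3.6 GB corpus never has to fit into memory."""
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        with open(self.path, encoding="utf-8") as infile:
            for line in infile:
                # Tokenize on any non-word character, as described in the abstract.
                tokens = [tok for tok in re.split(r"\W+", line) if tok]
                if tokens:
                    yield tokens

corpus = PlainTextCorpus("frwiki_plaintext.txt")   # hypothetical file name
model = Word2Vec(
    corpus,
    vector_size=500,   # 500 dimensions as stated in the record ("size" in 2016-era gensim)
    window=5,          # assumed context window, not stated in the record
    min_count=5,       # assumed frequency threshold, not stated in the record
    workers=4,         # assumed number of training threads
)
model.save("frwiki.gensim")   # gensim typically writes companion .npy files for large arrays

Loading the published model would then be a matter of calling Word2Vec.load("frwiki.gensim") and querying it, for example with model.wv.most_similar("maison"); because save() typically splits large arrays into companion files, all of them have to sit in the same folder, as the abstract points out.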