https://machinelearningmastery.com/develop-word-embeddings-python-gensim/
This tutorial is divided into 6 parts; they are:
Word Embeddings
The Gensim Library
Developing a Word2Vec Embedding
Visualizing a Word Embedding
Loading Google's Word2Vec Embedding
Loading Stanford's GloVe Embedding
Word Embeddings
A word embedding is an approach to providing a dense vector representation of words that captures something about their meaning.
Word embeddings are an improvement over simpler bag-of-words encoding schemes, such as word counts and frequencies, which produce large, sparse vectors (mostly 0 values) that describe documents but not the meaning of the words.
Word embeddings work by using an algorithm to train a set of fixed-length, dense, continuous-valued vectors on a large corpus of text. Each word is represented by a point in the embedding space, and these points are learned and adjusted based on the words that surround the target word.
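The contrast between sparse bag-of-words vectors and dense embeddings can be sketched in a few lines of plain Python. The vocabulary and the embedding values below are made up for illustration; real embedding values are learned from a corpus.

```python
# Illustrative only: a tiny vocabulary and hand-picked numbers.
vocab = ['this', 'is', 'a', 'small', 'example', 'sentence']

# Bag-of-words: one slot per vocabulary word, mostly zeros.
def bag_of_words(tokens):
    return [tokens.count(word) for word in vocab]

sparse = bag_of_words(['this', 'is', 'a', 'sentence'])
print(sparse)  # [1, 1, 1, 0, 0, 1] -- length grows with the vocabulary

# An embedding instead assigns each word a short dense vector;
# these values are invented here, but would be learned in practice.
embedding = {
    'this': [0.21, -0.40, 0.05],
    'sentence': [0.18, -0.33, 0.02],
}
print(embedding['sentence'])  # fixed length regardless of vocabulary size
```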
The Gensim Python Library
The Gensim 4.0 release differs considerably from the 3.x releases in its API; see the migration guide:
https://github.com/piskvorky/gensim/wiki/Migrating-from-Gensim-3.x-to-4
Word2Vec
https://nlp.stanford.edu/projects/glove/
Among natural-language practitioners, GloVe tends to be more popular than Word2Vec.
Developing a Word2Vec Embedding
python
from gensim.models import Word2Vec
# define training data
sentences = [['this', 'is', 'the', 'first', 'sentence', 'for', 'word2vec'],
['this', 'is', 'the', 'second', 'sentence'],
['yet', 'another', 'sentence'],
['one', 'more', 'sentence'],
['and', 'the', 'final', 'sentence']]
# train model
model = Word2Vec(sentences, min_count=1)
# summarize the loaded model
print(model)
# summarize vocabulary
words = list(model.wv.key_to_index)
print(words)
# access vector for one word
print(model.wv['sentence'])
# save model
model.save('model.bin')
# load model
new_model = Word2Vec.load('model.bin')
print(new_model)
Visualizing a Word Embedding
python
from sklearn.decomposition import PCA
from matplotlib import pyplot
from gensim.models import Word2Vec
# define training data
sentences = [['this', 'is', 'the', 'first', 'sentence', 'for', 'word2vec'],
['this', 'is', 'the', 'second', 'sentence'],
['yet', 'another', 'sentence'],
['one', 'more', 'sentence'],
['and', 'the', 'final', 'sentence']]
# train model
model = Word2Vec(sentences, min_count=1)
# fit a 2D PCA model to the vectors (index_to_key preserves vector order)
X = model.wv[model.wv.index_to_key]
pca = PCA(n_components=2)
result = pca.fit_transform(X)
# create a scatter plot of the projection
pyplot.scatter(result[:, 0], result[:, 1])
words = model.wv.index_to_key
for i, word in enumerate(words):
    pyplot.annotate(word, xy=(result[i, 0], result[i, 1]))
pyplot.show()