How the word2vec Algorithm Works (Implemented in Pure Python, No Open-Source Packages)

I have read many articles introducing the algorithm behind word2vec. I understood them, but the understanding still felt somewhat shallow.

Below is the word2vec algorithm implemented directly in Python. It is simple and clear, and once you have read it you will understand how it works.
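
Concretely, the code trains the skip-gram variant of word2vec with a full softmax output layer: for every (center word, context word) pair it nudges the weights so that the center word predicts its context word with higher probability. The per-pair cross-entropy loss that the training loop accumulates is

    loss = -z[c] + log(sum(exp(z)))

where z is the vector of output-layer scores and c is the index of the context word. The rows of the input weight matrix W1 are the word vectors that are kept at the end.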

import numpy as np

def tokenize(text):
    # Lower-case the text and split on whitespace.
    return text.lower().split()

def generate_word_pairs(sentences, window_size):
    # Build skip-gram training pairs: each center word is paired with
    # every context word inside the window on either side of it.
    word_pairs = []
    for sentence in sentences:
        for i, center_word in enumerate(sentence):
            for j in range(i - window_size, i + window_size + 1):
                if 0 <= j < len(sentence) and j != i:
                    context_word = sentence[j]
                    word_pairs.append((center_word, context_word))
    return word_pairs

def create_word_index(sentences):
    # Map every unique word in the corpus to an integer index.
    # (Iteration order of a set is arbitrary, so indices can differ between runs.)
    word_set = set(word for sentence in sentences for word in sentence)
    return {word: i for i, word in enumerate(word_set)}

def one_hot_encoding(word, word_index):
    # Vector of vocabulary size with a single 1 at the word's index.
    one_hot = np.zeros(len(word_index))
    one_hot[word_index[word]] = 1
    return one_hot

def train_word2vec(sentences, vector_size, window_size, learning_rate, epochs):
    word_index = create_word_index(sentences)
    # W1: input embeddings (vocab_size x vector_size); its rows are the word
    # vectors we ultimately keep. W2: output weights (vector_size x vocab_size).
    W1 = np.random.rand(len(word_index), vector_size)
    W2 = np.random.rand(vector_size, len(word_index))

    word_pairs = generate_word_pairs(sentences, window_size)

    for epoch in range(epochs):
        loss = 0
        for center_word, context_word in word_pairs:
            center_word_encoded = one_hot_encoding(center_word, word_index)
            context_word_encoded = one_hot_encoding(context_word, word_index)

            # Forward pass: multiplying a one-hot vector by W1 simply
            # selects the center word's embedding row.
            hidden_layer = np.dot(center_word_encoded, W1)
            output_layer = np.dot(hidden_layer, W2)

            # Softmax over the vocabulary; subtracting the max score first
            # avoids overflow in np.exp and leaves the result unchanged.
            shifted_output = output_layer - np.max(output_layer)
            exp_output = np.exp(shifted_output)
            softmax_output = exp_output / np.sum(exp_output)

            # Gradient of the cross-entropy loss w.r.t. the output scores.
            error = softmax_output - context_word_encoded

            # Backpropagate to both weight matrices; because the input is
            # one-hot, only the center word's row of W1 gets a nonzero update.
            dW2 = np.outer(hidden_layer, error)
            dW1 = np.outer(center_word_encoded, np.dot(W2, error))

            W1 -= learning_rate * dW1
            W2 -= learning_rate * dW2

            # Cross-entropy loss: -log softmax[context word].
            loss += -np.sum(shifted_output * context_word_encoded) + np.log(np.sum(exp_output))

        print(f"Epoch: {epoch + 1}, Loss: {loss}")

    return W1, word_index

# A tiny toy corpus; real word2vec training uses far more text.
sentences = [
    tokenize("This is a sample sentence"),
    tokenize("Another example sentence"),
    tokenize("One more example")
]

# Hyperparameters; vector_size = 100 is generous for a vocabulary this small.
vector_size = 100
window_size = 2
learning_rate = 0.01
epochs = 100

W1, word_index = train_word2vec(sentences, vector_size, window_size, learning_rate, epochs)

# Each row of W1 is the learned embedding of one word.
for word, index in word_index.items():
    print(f"{word}: {W1[index]}")