Learning AI from Scratch - Python - PyTorch (Part 13)

Preface

I recently came across a new idea: the distinction between scientific discovery and technological invention, with scientific discovery ranking above technological invention. I find this framing quite convincing. We often say that China's science and technology lag behind Europe and America, yet in everyday experience, whether in construction, hardware, software, or applied theory, we seem to be ahead. So why the claim that we are behind?

The idea that scientific discovery sits above technological invention explains this well. Things like online payment and the construction industry are technological inventions, not scientific discoveries, and scientific discovery is what leads technological invention. In discovery, Europe and America are far ahead of us; in invention their lead is smaller, though there are still areas where they lead by a wide margin, ChatGPT being one example.

My point in saying all this: software development is technological invention, so however skilled the top people in this industry are, it is still just that.

In other words, even a Tsinghua or Peking University graduate, once they join the ranks of technological invention, is just another practitioner.

Neural networks are not hard. This series of mine is proof: even knowing no Python and never having studied algorithms, you can learn this in a short time. My feeling is that anywhere from a week to a month is enough.

So those who know this material need not look down on those who don't, and those who don't need not assume the others are operating on some higher level.

What This Article Covers

This article shows how to build a chatbot on top of a neural network.

Preparation

Before running the code, we need NLTK and its data files.

First, install the nltk package:

pip install nltk

Then download the NLTK data. Create a .py file with the following code:

import nltk
nltk.download()

Then open cmd as administrator and run that .py file:

C:\Project\python_test\github\PythonTest\venv\Scripts\python.exe C:\Project\python_test\github\PythonTest\robot_nltk\dlnltk.py

The NLTK downloader window then pops up; change the download directory there if needed.

PS: Some sources say you can run nltk.download('punkt') directly to fetch just the package we need, but that never succeeded for me, so I downloaded everything instead.

# nltk.download('punkt')    # downloads the 'punkt' resource from NLTK (Natural Language Toolkit), mainly used for tokenization
# nltk.download('popular')  # downloads most of the commonly used NLTK resources, a superset of 'punkt'
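If the full download is too heavy, a targeted download into an explicit directory is also worth trying. Here is a minimal sketch (the C:\nltk_data path is just an example, and as noted above this route did not work for me):

import nltk

# fetch only the 'punkt' tokenizer data into a chosen directory
nltk.download('punkt', download_dir=r'C:\nltk_data')

# tell NLTK to search that directory, then verify the resource is found
nltk.data.path.append(r'C:\nltk_data')
print(nltk.data.find('tokenizers/punkt'))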

Writing the Code

The model

First, write a NeuralNet (model.py) as follows:

import torch.nn as nn
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.l1 = nn.Linear(input_size, hidden_size) 
        self.l2 = nn.Linear(hidden_size, hidden_size) 
        self.l3 = nn.Linear(hidden_size, num_classes)
        self.relu = nn.ReLU()
    
    def forward(self, x):
        out = self.l1(x)
        out = self.relu(out)
        out = self.l2(out)
        out = self.relu(out)
        out = self.l3(out)
        # no activation and no softmax at the end
        return out
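Before wiring this into training, a quick shape check can confirm that the network maps a bag-of-words vector to one raw score per intent class. A minimal sketch with made-up sizes (54, 8 and 7 are arbitrary here):

import torch
from model import NeuralNet

# hypothetical sizes: 54-word vocabulary, 8 hidden units, 7 intent tags
net = NeuralNet(54, 8, 7)
x = torch.rand(1, 54)   # one fake bag-of-words vector
print(net(x).shape)     # torch.Size([1, 7])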

Then write a utility module, nltk_utils.py:

import numpy as np
import nltk

from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()

def tokenize(sentence):
    return nltk.word_tokenize(sentence)


def stem(word):
    return stemmer.stem(word.lower())


def bag_of_words(tokenized_sentence, words):
    # stem each word of the input sentence
    sentence_words = [stem(word) for word in tokenized_sentence]
    # one slot per vocabulary word, 1.0 if that word occurs in the sentence
    bag = np.zeros(len(words), dtype=np.float32)
    for idx, w in enumerate(words):
        if w in sentence_words:
            bag[idx] = 1

    return bag

a="How long does shipping take?"
print(a)
a = tokenize(a)
print(a)

This file can be run directly to test the utility functions; the __main__ guard keeps the test from firing when train.py or chat.py import the module.
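As a hand-made illustration (not part of the project files), bag_of_words marks a 1 in every slot whose vocabulary word appears in the tokenized sentence:

from nltk_utils import bag_of_words

sentence = ["hello", "how", "are", "you"]
words = ["hi", "hello", "i", "you", "bye", "thank", "cool"]
print(bag_of_words(sentence, words))
# expected: [0. 1. 0. 1. 0. 0. 0.]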

Stemming and tokenization

Stemming reduces a word to its stem. The logic looks like this:

words =["0rganize","organizes", "organizing"]
stemmed_words =[stem(w) for w in words]
print(stemmed_words)

Running this, all three variants reduce to the same stem:
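['organ', 'organ', 'organ']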

Tokenization splits a sentence into individual tokens.

The following snippet tests tokenization:

a="How long does shipping take?"
print(a)
a = tokenize(a)
print(a)

word_tokenize splits the sentence on whitespace and punctuation, so the output looks like this:
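How long does shipping take?
['How', 'long', 'does', 'shipping', 'take', '?']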

Test data

Create a JSON file, intents.json (English version):

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": [
        "Hi",
        "Hey",
        "How are you",
        "Is anyone there?",
        "Hello",
        "Good day"
      ],
      "responses": [
        "Hey :-)",
        "Hello, thanks for visiting",
        "Hi there, what can I do for you?",
        "Hi there, how can I help?"
      ]
    },
    {
      "tag": "goodbye",
      "patterns": ["Bye", "See you later", "Goodbye"],
      "responses": [
        "See you later, thanks for visiting",
        "Have a nice day",
        "Bye! Come back again soon."
      ]
    },
    {
      "tag": "thanks",
      "patterns": ["Thanks", "Thank you", "That's helpful", "Thank's a lot!"],
      "responses": ["Happy to help!", "Any time!", "My pleasure"]
    },
    {
      "tag": "items",
      "patterns": [
        "Which items do you have?",
        "What kinds of items are there?",
        "What do you sell?"
      ],
      "responses": [
        "We sell coffee and tea",
        "We have coffee and tea"
      ]
    },
    {
      "tag": "payments",
      "patterns": [
        "Do you take credit cards?",
        "Do you accept Mastercard?",
        "Can I pay with Paypal?",
        "Are you cash only?"
      ],
      "responses": [
        "We accept VISA, Mastercard and Paypal",
        "We accept most major credit cards, and Paypal"
      ]
    },
    {
      "tag": "delivery",
      "patterns": [
        "How long does delivery take?",
        "How long does shipping take?",
        "When do I get my delivery?"
      ],
      "responses": [
        "Delivery takes 2-4 days",
        "Shipping takes 2-4 days"
      ]
    },
    {
      "tag": "funny",
      "patterns": [
        "Tell me a joke!",
        "Tell me something funny!",
        "Do you know a joke?"
      ],
      "responses": [
        "Why did the hipster burn his mouth? He drank the coffee before it was cool.",
        "What did the buffalo say when his son left for college? Bison."
      ]
    }
  ]
}

And here is the Chinese version of the data, intents_cn.json:

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": [
        "你好",
        "嗨",
        "您好",
        "有谁在吗?",
        "你好呀",
        "早上好",
        "下午好",
        "晚上好"
      ],
      "responses": [
        "你好!有什么我可以帮忙的吗?",
        "您好!感谢您的光临。",
        "嗨!有什么我可以为您效劳的吗?",
        "早上好!今天怎么样?"
      ]
    },
    {
      "tag": "goodbye",
      "patterns": [
        "再见",
        "拜拜",
        "下次见",
        "保重",
        "晚安"
      ],
      "responses": [
        "再见!希望很快能再次见到你。",
        "拜拜!祝你有个愉快的一天。",
        "保重!下次见。",
        "晚安,祝你做个好梦!"
      ]
    },
    {
      "tag": "thanks",
      "patterns": [
        "谢谢",
        "感谢",
        "多谢",
        "非常感谢"
      ],
      "responses": [
        "不客气!很高兴能帮到你。",
        "没问题!随时为您服务。",
        "别客气!希望能帮到您。",
        "很高兴能帮忙!"
      ]
    },
    {
      "tag": "help",
      "patterns": [
        "你能帮我做什么?",
        "你能做什么?",
        "你能帮助我吗?",
        "我需要帮助",
        "能帮我一下吗?"
      ],
      "responses": [
        "我可以帮您回答问题、提供信息,或者进行简单的任务。",
        "我能帮助您查询信息、安排任务等。",
        "您可以问我问题,或者让我做一些简单的事情。",
        "请告诉我您需要的帮助!"
      ]
    },
    {
      "tag": "weather",
      "patterns": [
        "今天天气怎么样?",
        "今天的天气如何?",
        "天气预报是什么?",
        "外面冷吗?",
        "天气好不好?"
      ],
      "responses": [
        "今天的天气很好,适合外出!",
        "今天天气有点冷,记得穿暖和点。",
        "今天天气晴朗,适合去散步。",
        "天气晴,温度适宜,非常适合外出。"
      ]
    },
    {
      "tag": "about",
      "patterns": [
        "你是什么?",
        "你是谁?",
        "你是做什么的?",
        "你能做些什么?"
      ],
      "responses": [
        "我是一个聊天机器人,可以回答您的问题和帮助您解决问题。",
        "我是一个智能助手,帮助您完成各种任务。",
        "我是一个虚拟助手,可以处理简单的任务和查询。",
        "我可以帮助您获取信息,或者做一些简单的任务。"
      ]
    }
  ]
}
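One caveat I should flag (my addition, not something the original pipeline handles): nltk.word_tokenize splits on spaces and punctuation, so a Chinese sentence, which has no spaces, mostly passes through as a single token, and the bag-of-words then works at whole-phrase granularity. If real Chinese word segmentation is wanted, a segmenter such as jieba could be swapped into tokenize. A minimal sketch, assuming pip install jieba:

import jieba

def tokenize_cn(sentence):
    # jieba.lcut segments a Chinese sentence into a list of words
    return jieba.lcut(sentence)

print(tokenize_cn("今天天气怎么样?"))   # e.g. ['今天', '天气', '怎么样', '?']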

Training

The training code is as follows:

import numpy as np
import random
import json

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

from nltk_utils import bag_of_words, tokenize, stem
from model import NeuralNet

with open('intents_cn.json', 'r', encoding='utf-8') as f:
    intents = json.load(f)

all_words = []
tags = []
xy = []
# loop through each sentence in our intents patterns
for intent in intents['intents']:
    tag = intent['tag']
    # add to tag list
    tags.append(tag)
    for pattern in intent['patterns']:
        # tokenize each word in the sentence
        w = tokenize(pattern)
        # add to our words list
        all_words.extend(w)
        # add to xy pair
        xy.append((w, tag))

# stem and lower each word
ignore_words = ['?', '.', '!']
all_words = [stem(w) for w in all_words if w not in ignore_words]
# remove duplicates and sort
all_words = sorted(set(all_words))
tags = sorted(set(tags))

print(len(xy), "patterns")
print(len(tags), "tags:", tags)
print(len(all_words), "unique stemmed words:", all_words)

# create training data
X_train = []
y_train = []
for (pattern_sentence, tag) in xy:
    # X: bag of words for each pattern_sentence
    bag = bag_of_words(pattern_sentence, all_words)
    X_train.append(bag)
    # y: PyTorch CrossEntropyLoss needs only class labels, not one-hot
    label = tags.index(tag)
    y_train.append(label)

X_train = np.array(X_train)
y_train = np.array(y_train)

# Hyper-parameters 
num_epochs = 1000
batch_size = 8
learning_rate = 0.001
input_size = len(X_train[0])
hidden_size = 8
output_size = len(tags)
print(input_size, output_size)

class ChatDataset(Dataset):

    def __init__(self):
        self.n_samples = len(X_train)
        self.x_data = X_train
        self.y_data = y_train

    # support indexing such that dataset[i] can be used to get i-th sample
    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]

    # we can call len(dataset) to return the size
    def __len__(self):
        return self.n_samples

dataset = ChatDataset()
train_loader = DataLoader(dataset=dataset,
                          batch_size=batch_size,
                          shuffle=True,
                          num_workers=0)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = NeuralNet(input_size, hidden_size, output_size).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    for (words, labels) in train_loader:
        words = words.to(device)
        labels = labels.to(dtype=torch.long).to(device)
        
        # Forward pass
        outputs = model(words)
        # if y would be one-hot, we must apply
        # labels = torch.max(labels, 1)[1]
        loss = criterion(outputs, labels)
        
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
    if (epoch+1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')


print(f'final loss: {loss.item():.4f}')

data = {
    "model_state": model.state_dict(),
    "input_size": input_size,
    "hidden_size": hidden_size,
    "output_size": output_size,
    "all_words": all_words,
    "tags": tags
}

FILE = "data.pth"
torch.save(data, FILE)

print(f'training complete. file saved to {FILE}')
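To double-check the checkpoint, it can be loaded back and inspected. A minimal sketch (note that on PyTorch 2.6+, torch.load may need weights_only=False, since this dict holds plain Python lists as well as tensors):

import torch

data = torch.load("data.pth")   # add weights_only=False on PyTorch >= 2.6
print(list(data.keys()))        # ['model_state', 'input_size', 'hidden_size', 'output_size', 'all_words', 'tags']
print(data["tags"])             # the intent labels recovered from the JSON file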

The chat client

The chat client code is as follows:

import random
import json

import torch

from model import NeuralNet
from nltk_utils import bag_of_words, tokenize

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

with open('intents_cn.json', 'r',encoding='utf-8') as json_data:
    intents = json.load(json_data)

FILE = "data.pth"
data = torch.load(FILE)

input_size = data["input_size"]
hidden_size = data["hidden_size"]
output_size = data["output_size"]
all_words = data['all_words']
tags = data['tags']
model_state = data["model_state"]

model = NeuralNet(input_size, hidden_size, output_size).to(device)
model.load_state_dict(model_state)
model.eval()

bot_name = "电脑"
print("Let's chat! (type 'quit' to exit)")
while True:
    # sentence = "do you use credit cards?"
    sentence = input("我:")
    if sentence == "quit":
        break

    sentence = tokenize(sentence)
    X = bag_of_words(sentence, all_words)
    X = X.reshape(1, X.shape[0])
    X = torch.from_numpy(X).to(device)

    output = model(X)
    _, predicted = torch.max(output, dim=1)

    tag = tags[predicted.item()]

    probs = torch.softmax(output, dim=1)
    prob = probs[0][predicted.item()]
    if prob.item() > 0.75:
        for intent in intents['intents']:
            if tag == intent["tag"]:
                print(f"{bot_name}: {random.choice(intent['responses'])}")
    else:
        print(f"{bot_name}: 我不知道")

Run chat.py and you can talk to the bot in the console. The softmax probability threshold of 0.75 is the "confident enough" cutoff: below it, the bot answers 我不知道 (I don't know) instead of guessing an intent.


Portal:
Learning AI from Scratch - Python - PyTorch - Complete Series


Note: This article is original. For reproduction in any form, please contact the author for authorization and cite the source!




https://www.cnblogs.com/kiba/p/18610399