1. To use a pretrained model from Hugging Face, first install transformers, torch, and SentencePiece
```bash
pip install transformers
pip install torch
pip install SentencePiece
```
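To quickly confirm that the installation worked, you can import the packages and print their versions (a minimal sanity check; the version numbers on your machine will differ):

```python
# Minimal sanity check: these imports fail if the packages are not installed
import transformers
import torch
import sentencepiece  # imported only to confirm the SentencePiece installation

print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
```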
Manual download: https://huggingface.co/google-bert/bert-base-uncased/tree/main

Place the downloaded files in a local directory:
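If you downloaded the model files manually, `from_pretrained` also accepts a local directory instead of a model name. A minimal sketch, assuming the files (config.json, vocab.txt, and the model weights) were saved under a hypothetical ./bert-base-uncased folder:

```python
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical local folder holding config.json, vocab.txt and the model weights
local_dir = "./bert-base-uncased"

tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertForSequenceClassification.from_pretrained(local_dir, num_labels=2)
```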

2. Run the code
```python
from transformers import BertTokenizer, BertForSequenceClassification, pipeline

# Load the pretrained model and tokenizer
model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)  # assuming a binary classification task

# Use a pipeline to simplify the workflow
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)

# Classify a piece of text
text = "I hate this movie!"
result = classifier(text)
print(result)
```
Input: I hate unnecessary waste
Output:

Input: I love dance!
Output:

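Note that bert-base-uncased only ships the pretrained encoder; the classification head created by num_labels=2 is randomly initialized, so its labels (LABEL_0 / LABEL_1) and scores are essentially arbitrary until the model is fine-tuned on labeled data. For a demo that returns meaningful POSITIVE/NEGATIVE labels, you can point the pipeline at an already fine-tuned checkpoint instead — a sketch assuming the distilbert-base-uncased-finetuned-sst-2-english checkpoint is available on the Hub:

```python
from transformers import pipeline

# A checkpoint already fine-tuned for binary sentiment analysis (assumed available on the Hub)
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("I hate unnecessary waste"))  # expected to lean NEGATIVE
print(classifier("I love dance!"))             # expected to lean POSITIVE
```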