BERT Sentiment Classification

Published: 2025-06-12

Reference: the Bilibili tutorial by BigC_666, "微调BERT模型做情感分类实战,代码逐行讲解" (fine-tuning BERT for sentiment classification, with a line-by-line code walkthrough).

A rough record of the problems that came up.

The first problem: huggingface.co could not be reached. Pointing the download endpoint at a mirror via an environment variable did not take effect either, so in the end everything was downloaded through a local proxy:

import os
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'    # route downloads through a local proxy
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
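
A likely reason the mirror setting did not take effect is that the datasets/transformers libraries read HF_ENDPOINT when they are first imported, so the variable has to be set before the import. A minimal sketch of the mirror route (assuming hf-mirror.com is reachable; this is not what was used in the end):

import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'  # must be set before importing datasets/transformers

from datasets import load_dataset
dataset = load_dataset("imdb")  # now resolved against the mirror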

What follows is the Jupyter notebook code; the evaluation part was modified.

import os

# proxy settings (the approach that worked); the mirror endpoint is also set here,
# before importing datasets, so that it can take effect
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

from datasets import load_dataset

dataset = load_dataset("imdb")
print(dataset)

import datasets
print(datasets.config.HF_DATASETS_CACHE)  # where the downloaded dataset is cached on disk

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print("done")

def tokenizer_func(examples):
    # pad every review to the model's maximum length (512) and truncate longer ones
    return tokenizer(examples['text'], padding='max_length', truncation=True)

tokens_dataset = dataset.map(tokenizer_func, batched=True)
print(tokens_dataset)
print(tokens_dataset['train'][0])

# work on 5000-sample subsets to keep the experiment small; the IMDB train split is
# ordered by label, so shuffle before selecting to avoid an all-negative subset
train_dataset = tokens_dataset['train'].shuffle(seed=42).select(range(5000))
test_dataset = tokens_dataset['test'].shuffle(seed=42).select(range(5000))
print(test_dataset['label'])
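
Padding everything to max_length means every example is 512 tokens long, which is a large part of why the batch size below had to be 1. An alternative (not used in the original notebook) is to truncate only and let a data collator pad each batch dynamically; a sketch, reusing the tokenizer defined above:

from transformers import DataCollatorWithPadding

def tokenize_dynamic(examples):
    # truncate only; padding is done per batch by the collator
    return tokenizer(examples['text'], truncation=True)

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# passing data_collator=data_collator to the Trainer pads each batch only to the
# longest sequence in that batch instead of always to 512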

from transformers import BertForSequenceClassification

# bert-base-uncased with a freshly initialized 2-class classification head
# (num_labels defaults to 2); the "weights newly initialized" warning is expected
classifier = BertForSequenceClassification.from_pretrained('bert-base-uncased')

from transformers import Trainer, TrainingArguments

train_arg = TrainingArguments(
    output_dir='./result',
#    eval_strategy='epoch',           # left disabled; evaluation is done manually below
    learning_rate=2e-5,
    per_device_train_batch_size=1,    # batch size of 1: slow, but minimal GPU memory
    per_device_eval_batch_size=1,
    num_train_epochs=1,
    weight_decay=0.02
)
print('done')
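
If the batch size of 1 is purely a memory constraint, gradient accumulation can give a larger effective batch at the same memory cost. A sketch with an illustrative, untuned value; train_arg_accum is just a placeholder name:

from transformers import TrainingArguments

train_arg_accum = TrainingArguments(
    output_dir='./result',
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # accumulate 8 steps -> effective batch size of 8
    per_device_eval_batch_size=1,
    num_train_epochs=1,
    weight_decay=0.02
)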

print('start')
trainer = Trainer(
    model=classifier,
    args=train_arg,
    train_dataset=train_dataset,
    eval_dataset=test_dataset
)
print('done')

# baseline: predictions from the model before any fine-tuning
predictions = trainer.predict(test_dataset)

import numpy as np
print('start')
# prediction logits output by the model (2-D array)
logits = predictions.predictions  # shape: (num_samples, num_classes)
labels = predictions.label_ids    # shape: (num_samples,)

# step 1: convert logits to predicted labels (index of the largest logit)
predicted_labels = np.argmax(logits, axis=1)

# step 2: compute accuracy
accuracy = np.mean(predicted_labels == labels)

print(f"Accuracy: {accuracy:.4f}")


trainer.train()

# metric = trainer.evaluate()   # without compute_metrics this only reports eval loss and runtime stats
# print(metric)
print(test_dataset['label'])

predictions = trainer.predict(test_dataset)
print(predictions)

from transformers import Trainer
from sklearn.metrics import accuracy_score
import numpy as np

# 1. define compute_metrics so that evaluation reports accuracy, not just loss
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    preds = np.argmax(predictions, axis=1)
    return {"accuracy": accuracy_score(labels, preds)}

# 2. rebuild the Trainer around the already fine-tuned model and the same arguments
new_trainer = Trainer(
    model=classifier,     # the model fine-tuned above (Trainer trains it in place)
    args=train_arg,       # the TrainingArguments used earlier
    compute_metrics=compute_metrics
)

# 3. run prediction; predict() prefixes metric names with "test_"
results = new_trainer.predict(test_dataset)

print("Accuracy:", results.metrics["test_accuracy"])

The code is rather messy and not very readable; this was an experiment, and the various parameters were chosen without much deliberation.

