Graph Machine Learning (10): Graph Neural Networks for Supervised Learning
- [1. Graph Convolutional Neural Networks](#1. Graph Convolutional Neural Networks)
- [2. Graph Classification with GCN](#2. Graph Classification with GCN)
- [3. Node Classification with GraphSAGE](#3. Node Classification with GraphSAGE)
1. Graph Convolutional Neural Networks
In the discussion of unsupervised graph learning, we covered the core principles of graph neural networks (GNNs) and graph convolutional networks (GCNs), focusing on the distinction between spectral and spatial graph convolutions. In particular, we saw how GCN layers can encode graph structure or nodes in an unsupervised setting by preserving graph properties such as node similarity.
In this section, we explore these methods in a supervised learning framework. The central goal now shifts to learning graph or node representations that accurately predict node or graph labels. Note that the encoding function stays exactly the same; what changes is the optimization objective.
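To make this concrete, the following minimal sketch (our own illustration with dummy tensors and made-up dimensions, not code from the original walkthrough) shows the same GCN encoder under both regimes, where only the loss term changes:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

# The same GCN encoder serves both regimes; only the training signal differs.
conv = GCNConv(in_channels=16, out_channels=8)
head = torch.nn.Linear(8, 2)  # supervised classification head

x = torch.randn(4, 16)                             # 4 dummy nodes, 16 features
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # dummy edges
y = torch.tensor([0, 1, 0, 1])                     # dummy node labels

z = conv(x, edge_index)  # node embeddings, shape [4, 8]

# Unsupervised-style objective: pull embeddings of connected nodes together.
src, dst = edge_index
loss_unsup = -F.logsigmoid((z[src] * z[dst]).sum(dim=-1)).mean()

# Supervised objective: cross-entropy between predictions and labels.
loss_sup = F.cross_entropy(head(z), y)
```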
2. Graph Classification with GCN
(1) We will use the PROTEINS dataset:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.loader import DataLoader
from torch_geometric.nn import global_sort_pool, GCNConv
from torch_geometric.datasets import TUDataset
from sklearn.model_selection import train_test_split
import numpy as np

# Load PROTEINS dataset
dataset = TUDataset(root='data/TUDataset', name='PROTEINS')
```
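Optionally, it is worth inspecting what was just loaded; the values in the comments below are what PROTEINS typically reports:

```python
# Optional sanity check: basic dataset statistics
print(f'Graphs: {len(dataset)}')                 # 1113
print(f'Node features: {dataset.num_features}')  # 3
print(f'Classes: {dataset.num_classes}')         # 2 (binary graph labels)
print(dataset[0])                                # first graph, e.g. Data(edge_index=[2, ...], x=[..., 3], y=[1])
```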
(2) Next, we implement the GCN algorithm for graph classification, building a DGCNN model:
```python
class DGCNN(nn.Module):
    def __init__(self, hidden_channels=32, num_layers=4, k=35):
        super(DGCNN, self).__init__()
        self.k = k
        self.convs = nn.ModuleList()
        for _ in range(num_layers):
            self.convs.append(GCNConv(-1, hidden_channels))
        # 1D conv layers applied over the sorted node sequence
        self.conv1 = nn.Conv1d(hidden_channels * num_layers, 16, kernel_size=1)
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=5, stride=1)
        # Calculate the correct input size for the linear layer
        # After sort pooling: [batch_size, k * hidden_channels * num_layers]
        # After conv1: [batch_size, 16, k]
        # After pool: [batch_size, 16, k//2]
        # After conv2: [batch_size, 32, (k//2)-4]
        linear_input_size = 32 * ((k // 2) - 4)
        self.fc1 = nn.Linear(linear_input_size, 128)
        self.dropout = nn.Dropout(0.5)
        self.fc2 = nn.Linear(128, 1)

    def forward(self, x, edge_index, batch):
        xs = []
        for conv in self.convs:
            x = torch.tanh(conv(x, edge_index))
            xs.append(x)
        x = torch.cat(xs, dim=1)
        x = global_sort_pool(x, batch, self.k)  # [batch_size, k * hidden_channels * num_layers]
        # Reshape for Conv1d: [batch_size, channels, sequence_length]
        batch_size = len(torch.unique(batch))
        x = x.view(batch_size, self.k, -1)  # [batch_size, k, hidden_channels * num_layers]
        x = x.permute(0, 2, 1)              # [batch_size, hidden_channels * num_layers, k]
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        x = F.relu(self.conv2(x))
        x = x.view(batch_size, -1)  # Flatten
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = torch.sigmoid(self.fc2(x))
        return x
```
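Before wiring up training, a quick forward pass over a small mini-batch can verify the shape arithmetic in `forward()`. This is a throwaway sanity check of our own (`_model` and `_batch` are hypothetical names):

```python
# Optional shape check: run one dummy mini-batch through an untrained model.
# The lazy GCNConv(-1, ...) layers infer their input size on this first call.
_model = DGCNN(hidden_channels=32, num_layers=4, k=35)
_batch = next(iter(DataLoader(dataset, batch_size=4)))
_out = _model(_batch.x, _batch.edge_index, _batch.batch)
print(_out.shape)  # torch.Size([4, 1]): one sigmoid probability per graph
```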
(3) Instantiate the model, using binary cross-entropy as the loss function (it measures the discrepancy between predicted and true labels) and the Adam optimizer with a learning rate of 0.0001:
```python
model = DGCNN(hidden_channels=32, num_layers=4, k=35)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
criterion = nn.BCELoss()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
```
(4) Create the training and test sets. In this section, 70% of the dataset is used for training and the remainder for testing:
```python
train_idx, test_idx = train_test_split(
    range(len(dataset)),
    test_size=0.3,
    stratify=[data.y.item() for data in dataset],
    random_state=42
)
train_dataset = dataset[train_idx]
test_dataset = dataset[test_idx]
train_loader = DataLoader(train_dataset, batch_size=50, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)
```
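If PyG's mini-batching is unfamiliar, peeking at one batch shows why the model's `forward()` needs the `batch` vector: the node features of all graphs in the batch are stacked, and `batch` maps each node back to its graph (an optional check of our own):

```python
# Optional: inspect how PyG batches multiple graphs together
sample = next(iter(train_loader))
print(sample.num_graphs)   # 50 graphs in this mini-batch
print(sample.x.shape)      # [total_nodes_in_batch, 3] stacked node features
print(sample.batch.shape)  # [total_nodes_in_batch] graph index per node
```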
(5) Train the model for 100 epochs:
```python
def train():
    model.train()
    total_loss = 0
    for data in train_loader:
        data = data.to(device)
        optimizer.zero_grad()
        out = model(data.x, data.edge_index, data.batch)
        loss = criterion(out, data.y.float().unsqueeze(1))
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * data.num_graphs
    return total_loss / len(train_dataset)

def test(loader):
    model.eval()
    correct = 0
    for data in loader:
        data = data.to(device)
        with torch.no_grad():
            pred = model(data.x, data.edge_index, data.batch)
        pred = (pred > 0.5).float()
        correct += int((pred == data.y.unsqueeze(1)).sum())
    return correct / len(loader.dataset)

# Training loop
for epoch in range(1, 101):
    loss = train()
    train_acc = test(train_loader)
    test_acc = test(test_loader)
    print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}, Train Acc: {train_acc:.4f}, Test Acc: {test_acc:.4f}')
```
The training output is shown below. The model reaches roughly 76% accuracy on the training set and about 75% on the test set:
```shell
Epoch: 100, Loss: 0.4896, Train Acc: 0.7599, Test Acc: 0.7485
```
3. Node Classification with GraphSAGE
(1) Next, we train GraphSAGE to classify the nodes of the Cora dataset. First, load the dataset:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import SAGEConv
from sklearn.model_selection import train_test_split
import numpy as np

# Load Cora dataset
dataset = Planetoid(root='data/Planetoid', name='Cora')
data = dataset[0]
```
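A quick look at the loaded object confirms Cora's usual statistics (an optional check of our own):

```python
# Optional sanity check: Cora has 2708 nodes, 1433 bag-of-words features,
# 10556 (directed) edges, and 7 paper categories.
print(data)                 # Data(x=[2708, 1433], edge_index=[2, 10556], y=[2708], ...)
print(dataset.num_classes)  # 7
```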
(2) Split the dataset. Here, only 10% of the nodes are used for training (a setting typical of semi-supervised node classification), with the remaining 90% used for testing:
```python
nodes = np.arange(data.num_nodes)
train_nodes, test_nodes = train_test_split(
    nodes, train_size=0.1, test_size=None, stratify=data.y.numpy()
)
data.train_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
data.train_mask[train_nodes] = True
data.test_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
data.test_mask[test_nodes] = True
```
(3) Create the model. In this section, we use a three-layer GraphSAGE encoder with 32 hidden channels; the final SAGEConv layer maps the hidden representation to the number of classes, followed by a log-softmax to produce per-class log-probabilities. We use the Adam optimizer with a learning rate of 0.003 and the negative log-likelihood loss (NLLLoss), which together with log-softmax is equivalent to categorical cross-entropy:
```python
class GraphSAGE(nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels, num_layers, dropout):
        super(GraphSAGE, self).__init__()
        self.convs = nn.ModuleList()
        self.convs.append(SAGEConv(in_channels, hidden_channels))
        for _ in range(num_layers - 2):
            self.convs.append(SAGEConv(hidden_channels, hidden_channels))
        self.convs.append(SAGEConv(hidden_channels, out_channels))
        self.dropout = dropout

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = conv(x, edge_index)
            x = F.relu(x)
            x = F.dropout(x, p=self.dropout, training=self.training)
        x = self.convs[-1](x, edge_index)
        return F.log_softmax(x, dim=-1)

model = GraphSAGE(
    in_channels=dataset.num_features,
    hidden_channels=32,
    out_channels=dataset.num_classes,
    num_layers=3,
    dropout=0.6
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
criterion = nn.NLLLoss()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
data = data.to(device)
```
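As a quick check of our own, a single untrained forward pass confirms that the model outputs one log-probability per class for every node:

```python
# Optional shape check before training
with torch.no_grad():
    out = model(data.x, data.edge_index)
print(out.shape)  # torch.Size([2708, 7]): [num_nodes, num_classes]
```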
(4) Train the model for 20 epochs:
```python
def train():
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = criterion(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
    return loss.item()

def test():
    model.eval()
    with torch.no_grad():
        out = model(data.x, data.edge_index)
        pred = out.argmax(dim=1)
        correct = (pred[data.test_mask] == data.y[data.test_mask]).sum()
    return int(correct) / int(data.test_mask.sum())

# Training loop
for epoch in range(1, 21):
    loss = train()
    test_acc = test()
    print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Test Acc: {test_acc:.4f}')
```
The training output shows that the model reaches about 80% accuracy on the test set.
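As an optional follow-up (our addition, not part of the original walkthrough), per-class precision and recall on the test nodes can be inspected with scikit-learn:

```python
# Optional: per-class metrics on the held-out nodes
from sklearn.metrics import classification_report

model.eval()
with torch.no_grad():
    pred = model(data.x, data.edge_index).argmax(dim=1)
print(classification_report(
    data.y[data.test_mask].cpu().numpy(),
    pred[data.test_mask].cpu().numpy()
))
```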
