Loading Graph Datasets

Reference: the official documentation:

https://pytorch-geometric.readthedocs.io/en/latest/modules/loader.html

The torch_geometric.loader module provides a variety of loaders for graph datasets.

This post focuses on the difference between two of them: DenseDataLoader and DataLoader.

Difference between DenseDataLoader and DataLoader in PyTorch Geometric

1. DenseDataLoader:

  • Purpose: Specifically designed for loading batches of dense graph data where all graph attributes have the same shape.

  • Stacking: Stacks all graph attributes in a new dimension, which means that all graph data needs to be dense (i.e., have the same shape).

  • Use Case: Ideal for situations where the adjacency matrices and feature matrices of graphs in the dataset are of consistent size and can be stacked without any padding or truncation.

  • Implementation: Uses a custom collate_fn that stacks all attributes of the graph data objects into a new dimension. This is suitable for dense graph data.

    Python:
    class DenseDataLoader(torch.utils.data.DataLoader):
        ...
        def __init__(self, dataset: Union[Dataset, List[Data]],
                     batch_size: int = 1, shuffle: bool = False, **kwargs):
            kwargs.pop('collate_fn', None)  # Drop any user-supplied collate function
            # `collate_fn` here refers to a module-level helper that stacks all
            # attributes of the Data objects along a new batch dimension.
            super().__init__(dataset, batch_size=batch_size, shuffle=shuffle,
                             collate_fn=collate_fn, **kwargs)

2. DataLoader:

  • Purpose: General-purpose data loader for PyTorch Geometric datasets. It can handle both homogeneous and heterogeneous graph data.

  • Flexibility: Can handle varying graph sizes and structures by merging data objects into mini-batches. Suitable for heterogeneous data where graph attributes may differ in shape and size.

  • Collate Function: Uses a custom collate function, Collater, which can handle different types of data elements (e.g., BaseData, Tensor, etc.). This function is versatile and can manage the complexity of heterogeneous graph data.

  • Use Case: Ideal for most graph data scenarios, especially when graphs vary in size and shape, and when working with both homogeneous and heterogeneous data.

    Python:
    class DataLoader(torch.utils.data.DataLoader):
        ...
        def __init__(
            self,
            dataset: Union[Dataset, Sequence[BaseData], DatasetAdapter],
            batch_size: int = 1,
            shuffle: bool = False,
            follow_batch: Optional[List[str]] = None,
            exclude_keys: Optional[List[str]] = None,
            **kwargs,
        ):
            kwargs.pop('collate_fn', None)  # Ensure no other collate function is used
            super().__init__(
                dataset,
                batch_size,
                shuffle,
                collate_fn=Collater(dataset, follow_batch, exclude_keys),
                **kwargs,
            )
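A simplified, dependency-free sketch of the disjoint-union batching that Collater performs. The function and field names (`sparse_collate`, `"batch"`) are illustrative, not PyG's internals, and plain lists stand in for tensors.

```python
def sparse_collate(data_list):
    """Merge variable-sized graphs into one big disjoint graph.

    Node features are concatenated, edge indices are shifted by the
    running node offset, and a `batch` vector records which graph
    each node came from.
    """
    x, edge_index, batch_vec = [], [], []
    offset = 0
    for graph_id, graph in enumerate(data_list):
        num_nodes = len(graph["x"])
        x.extend(graph["x"])
        edge_index.extend([(src + offset, dst + offset)
                           for src, dst in graph["edge_index"]])
        batch_vec.extend([graph_id] * num_nodes)
        offset += num_nodes
    return {"x": x, "edge_index": edge_index, "batch": batch_vec}

# A 3-node graph and a 2-node graph: sizes may differ freely.
g1 = {"x": [[1.0], [2.0], [3.0]], "edge_index": [(0, 1), (1, 2)]}
g2 = {"x": [[4.0], [5.0]], "edge_index": [(0, 1)]}

big = sparse_collate([g1, g2])
# big["batch"] == [0, 0, 0, 1, 1]; g2's edge (0, 1) becomes (3, 4).
```

Because graphs are concatenated rather than stacked, no padding is needed, which is what lets DataLoader accept graphs of arbitrary, differing sizes.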

Key Differences

  1. Data Shape Consistency:

    • DenseDataLoader: Requires all graph attributes to have the same shape.
    • DataLoader: Can handle variable graph sizes and shapes.
  2. Batching Mechanism:

    • DenseDataLoader: Stacks all attributes into a new dimension, suitable for dense data.
    • DataLoader: Uses the Collater class to handle complex data batching, suitable for heterogeneous and variable-sized graph data.
  3. Use Cases:

    • DenseDataLoader: Best for datasets with consistent graph sizes and shapes.
    • DataLoader: Best for general-purpose graph data loading, especially with varying graph structures.

Practical Example of Each Loader

DenseDataLoader Example:

Python:
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DenseDataLoader
import torch_geometric.transforms as T

# ENZYMES graphs differ in node count, so pad each graph to a fixed
# size with ToDense -- DenseDataLoader requires identically shaped data.
max_nodes = 126  # upper bound on ENZYMES graph sizes
dataset = TUDataset(root='/tmp/ENZYMES', name='ENZYMES',
                    transform=T.ToDense(max_nodes),
                    pre_filter=lambda data: data.num_nodes <= max_nodes)

# Create a DenseDataLoader
loader = DenseDataLoader(dataset, batch_size=32, shuffle=True)

for batch in loader:
    print(batch)
    break

DataLoader Example:

Python:
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

# Load a dataset with graphs of varying sizes
dataset = TUDataset(root='/tmp/ENZYMES', name='ENZYMES')

# Create a DataLoader
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in loader:
    print(batch)
    break
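Each batch yielded above is one big disjoint graph; individual graphs can be recovered from the batch assignment vector. A dependency-free sketch (`unbatch` and the variable names are illustrative; PyG provides its own utilities for this):

```python
def unbatch(x, batch_vec):
    """Split concatenated node features back into per-graph lists
    using the batch assignment vector."""
    graphs = {}
    for features, graph_id in zip(x, batch_vec):
        graphs.setdefault(graph_id, []).append(features)
    return [graphs[i] for i in sorted(graphs)]

# Five nodes belonging to two graphs (3 nodes, then 2 nodes).
x = [[1.0], [2.0], [3.0], [4.0], [5.0]]
batch_vec = [0, 0, 0, 1, 1]

per_graph = unbatch(x, batch_vec)
# → [[[1.0], [2.0], [3.0]], [[4.0], [5.0]]]
```

This batch vector is also what graph-level pooling operations consume to aggregate node features per graph.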

Summary

  • Use DenseDataLoader when working with datasets where all graphs have the same size and shape.
  • Use DataLoader for more flexible and general-purpose graph data loading, especially when dealing with variable graph structures.