pyg-team / pytorch_geometric

Graph Neural Network Library for PyTorch
https://pyg.org
MIT License

HeteroData.to_homogeneous seems to change edge order #8707

Open sorushsaghari opened 11 months ago

sorushsaghari commented 11 months ago

🐛 Describe the bug

I am using the LightGCN example provided here. Inside the example, the HeteroData object is converted to a homogeneous graph, but when I compare the HeteroData edge_index with the homogeneous edge_index, they are not the same. Can somebody please guide me? Isn't this wrong?

[screenshot comparing the HeteroData edge_index with the homogeneous edge_index]

#%%
import os.path as osp

import torch
from tqdm import tqdm

from torch_geometric.datasets import AmazonBook
from torch_geometric.nn import LightGCN
from torch_geometric.utils import degree

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'Amazon')
dataset = AmazonBook(path)
data = dataset[0]
num_users, num_books = data['user'].num_nodes, data['book'].num_nodes
myData = data
data = data.to_homogeneous().to(device)
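
For reference, a hedged sketch of one way to run the comparison, assuming myData still holds the original HeteroData and that the first edge type's edges come first in the homogeneous edge_index:

```python
# Hedged sketch, not part of the original report: compare the first edge type
# of the HeteroData with the leading columns of the homogeneous edge_index.
hetero_ei = myData[myData.edge_types[0]].edge_index  # e.g. ('user', 'rates', 'book')
homo_ei = data.edge_index.cpu()

print(hetero_ei[:, :5])
print(homo_ei[:, :5])
# Expectation: source (user) indices match, destination (book) indices are
# shifted by num_users, because books are stacked after users.
```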

Versions

Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.26.3
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU
Nvidia driver version: 531.79
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900HS with Radeon Graphics
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 0
BogoMIPS: 6587.45
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip vaes vpclmulqdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] pytorch-lightning==2.0.8
[pip3] pytorch-metric-learning==2.0.0
[pip3] torch==2.0.1
[pip3] torch-cluster==1.6.1+pt20cu118
[pip3] torch_geometric==2.4.0
[pip3] torch-scatter==2.1.1+pt20cu118
[pip3] torch-sparse==0.6.17+pt20cu118
[pip3] torch-spline-conv==1.2.2+pt20cu118
[pip3] torch-tb-profiler==0.4.1
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.1.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] Could not collect

rusty1s commented 10 months ago

Yes, the edge_index is not identical. to_homogeneous works by stacking all node types along the node dimension and adjusting the edge_index entries accordingly, so that they point to the correct node indices after stacking.
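
A minimal sketch of this behavior (the node/edge type names and sizes below are made up for illustration, not taken from the AmazonBook dataset):

```python
import torch
from torch_geometric.data import HeteroData

hetero = HeteroData()
hetero['user'].num_nodes = 3
hetero['book'].num_nodes = 4
hetero['user', 'rates', 'book'].edge_index = torch.tensor([[0, 1, 2],
                                                           [0, 2, 3]])

homo = hetero.to_homogeneous()

# Users are stacked first (3 nodes), so book indices are shifted by 3:
print(homo.edge_index)  # expected: [[0, 1, 2], [3, 5, 6]]
print(homo.node_type)   # expected: [0, 0, 0, 1, 1, 1, 1] (0 = 'user', 1 = 'book')
```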

sorushsaghari commented 10 months ago

So you mean it maps the nodes from the different types into a single index space? Is there any way I can access this mapping? Because I want to use these user and item IDs later.

rusty1s commented 10 months ago

The order of node and edge types can be found in data._node_type_names and data._edge_type_names.
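
Building on that, a hedged sketch of how one might map homogeneous node indices back to per-type IDs; the to_local_id helper below is hypothetical and simply combines node_type with per-type offsets:

```python
import torch

node_type_names = data._node_type_names        # e.g. ['user', 'book']
sizes = torch.bincount(data.node_type)         # number of nodes per type
offsets = torch.cumsum(sizes, dim=0) - sizes   # start index of each type in the stacked graph

def to_local_id(global_idx):
    """Map a homogeneous node index back to (type name, original index within that type)."""
    t = int(data.node_type[global_idx])
    return node_type_names[t], int(global_idx - offsets[t])

print(to_local_id(0))          # expected: ('user', 0)
print(to_local_id(num_users))  # expected: ('book', 0), if users are stacked first
```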