tgn.cpp

Temporal Graph Learning on Streams that Exceed RAM


tgn.cpp is a library for large-scale Temporal Graph Learning, built around two components:

1. Temporal Graph Unified Format (TGUF)

A binary, flatbuffer-style on-disk format for temporal graph streams, supporting:

  • Dynamic node and edge events, static node features, pre-computed negatives (for link prediction)
  • Zero-copy tensor reads via memory mapping for out-of-core training and inference
  • Optimized sequential access patterns common in CTDG-style (continuous-time dynamic graph) methods

2. High-Performance TGN Implementation

A C++20 port of TGN built on pure LibTorch:

  • Built on the TGUF storage engine
  • Minimal abstractions, with efficient sampling kernels and data loading

Tip

Use the Python bindings for easy conversion of your datasets into TGUF

Installation

The simplest route is the provided Dockerfile:

# Build for CPU (default)
docker build -t tgn-dev:cpu .

# Build for specific CUDA drivers (e.g. 12.6 for A100/H100)
docker build --build-arg CUDA_VERSION=12.6 -t tgn-dev:cu126 .

If you prefer a bare-metal install:

Linux
# C++ Toolchain: Clang w/ C++20 and the LLVM STL
sudo apt-get install -y clang libc++-dev libc++abi-dev

To run with CUDA, follow the NVIDIA documentation to install the CUDA Toolkit.

macOS
# CMake and OpenMP runtime
brew install cmake libomp

Important

Platform Support:

| OS    | CUDA_VERSION          | Default |
|-------|-----------------------|---------|
| Linux | cpu, 12.6, 12.8, 13.0 | cpu     |
| macOS | cpu                   | cpu     |

GPU_ARCH: Specifies compute capability (e.g. 80, 90, native) for CUDA backend on Linux.

The TGUF conversion scripts use uv:
curl -LsSf https://astral.sh/uv/install.sh | sh

Usage

Setup

# Clone the repo
git clone git@github.com:Jacob-Chmura/tgn.cpp.git && cd tgn.cpp

# See all available targets
make help

Running on CPU

# Download `tgbl-wiki` data, convert to `.tguf` and run examples/link_pred.cpp.
make run-link-tgbl-wiki

# Download `tgbn-trade` data, convert to `.tguf` and run examples/node_pred.cpp
make run-node-tgbn-trade

Running on GPU (Linux only)

# Example: CUDA 12.6 on an A100 (Arch 80)
CUDA_VERSION=12.6 GPU_ARCH=80 make run-link-tgbl-wiki ARGS="--device cuda:0"

Tip

Use nvidia-smi to check your CUDA_VERSION and GPU_ARCH