tgn.cpp is a library for large-scale Temporal Graph Learning, built around two components:

**TGUF**, a binary, flatbuffer-style on-disk format for graph streams, supporting:

- Dynamic node and edge events, static node features, and pre-computed negatives (for link prediction)
- Zero-copy tensor reads via memory mapping, for out-of-core training and inference
- Optimized sequential access patterns common to CTDG-style methods

**A C++20 port of TGN** over pure LibTorch:

- Built on the TGUF storage engine
- Minimal abstractions, with efficient sampling kernels and data loading
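CTDG-style methods such as TGN repeatedly fetch a node's most recent temporal neighbors before a query time; that is the access pattern the sampling kernels and sequential storage layout serve. A minimal conceptual sketch in plain Python (illustrative only, not tgn.cpp's actual kernels or API):

```python
from collections import defaultdict

def build_adjacency(events):
    """events: iterable of (src, dst, timestamp) edge events, in any order."""
    adj = defaultdict(list)
    # Sort once by time so each per-node history stays chronologically ordered.
    for src, dst, t in sorted(events, key=lambda e: e[2]):
        adj[src].append((dst, t))
        adj[dst].append((src, t))
    return adj

def recent_neighbors(adj, node, query_time, k):
    """Up to k most recent neighbors of `node` strictly before query_time."""
    history = [(nbr, t) for nbr, t in adj[node] if t < query_time]
    return history[-k:]

events = [(0, 1, 5.0), (0, 2, 1.0), (1, 2, 3.0), (0, 3, 7.0)]
adj = build_adjacency(events)
print(recent_neighbors(adj, 0, 6.0, 2))  # [(2, 1.0), (1, 5.0)]
```

A production kernel would binary-search time-sorted per-node arrays rather than scan linearly; this sketch favors clarity over speed.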
> [!TIP]
> Use the Python bindings for easy conversion of your datasets into TGUF.
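Neither the bindings API nor the TGUF byte layout is documented in this section, but the general shape of such a conversion, turning an edge-event list into a binary, memory-mappable record stream, can be sketched generically with NumPy (the field names and layout below are illustrative assumptions, not TGUF's actual format):

```python
import os
import tempfile

import numpy as np

# Illustrative only: a generic binary event-stream layout,
# NOT TGUF's actual format or the real bindings API.
event_dtype = np.dtype([("src", "<i8"), ("dst", "<i8"), ("t", "<f8")])

# "Convert" a toy edge list into a time-sorted binary record stream on disk.
events = np.array([(0, 2, 1.0), (1, 2, 3.0), (0, 1, 5.0)], dtype=event_dtype)
path = os.path.join(tempfile.mkdtemp(), "events.bin")
events.tofile(path)

# Memory-map it back: slices are zero-copy views into the page cache,
# so streams larger than RAM can be scanned sequentially.
mm = np.memmap(path, dtype=event_dtype, mode="r")
print(len(mm), int(mm[1]["dst"]), float(mm[2]["t"]))  # 3 2 5.0
```

The same idea underlies the out-of-core reads described above: only the pages a slice actually touches are faulted in.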
You should just use the Dockerfile:

```shell
# Build for CPU (default)
docker build -t tgn-dev:cpu .

# Build for specific CUDA drivers (e.g. 12.6 for A100/H100)
docker build --build-arg CUDA_VERSION=12.6 -t tgn-dev:cu126 .
```

If you prefer a bare-metal install:
On Linux:

```shell
# C++ toolchain: Clang with C++20 and the LLVM STL
sudo apt-get install -y clang libc++-dev libc++abi-dev
```

If you want to run with CUDA, refer to the NVIDIA docs for NVIDIA toolkit installation.
On macOS:

```shell
# CMake and OpenMP runtime
brew install cmake libomp
```

> [!IMPORTANT]
> Platform support:
>
> | OS | `CUDA_VERSION` | Default |
> |---|---|---|
> | Linux | `cpu`, `12.6`, `12.8`, `13.0` | `cpu` |
> | macOS | `cpu` | `cpu` |
>
> `GPU_ARCH` specifies the compute capability (e.g. `80`, `90`, `native`) for the CUDA backend on Linux.
TGUF conversion scripts use uv:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

```shell
# Clone the repo
git clone git@github.com:Jacob-Chmura/tgn.cpp.git && cd tgn.cpp
```
```shell
# See all available targets
make help
```

```shell
# Download `tgbl-wiki` data, convert it to `.tguf`, and run examples/link_pred.cpp
make run-link-tgbl-wiki

# Download `tgbn-trade` data, convert it to `.tguf`, and run examples/node_pred.cpp
make run-node-tgbn-trade
```

```shell
# Example: CUDA 12.6 on an A100 (arch 80)
CUDA_VERSION=12.6 GPU_ARCH=80 make run-link-tgbl-wiki ARGS="--device cuda:0"
```

> [!TIP]
> Use `nvidia-smi` to check your `CUDA_VERSION` and `GPU_ARCH`.