
MetaDiT: Enabling Fine-grained Constraints in High-degree-of Freedom Metasurface Design


The official code implementation of the paper MetaDiT: Enabling Fine-grained Constraints in High-degree-of Freedom Metasurface Design, accepted at AAAI 2026.


Highlights

  • 🌌 Design all parameters, not just the shape, by encoding features into different channels!
  • 📈 High-resolution spectrum constraints, so you never miss the important details!
  • 📊 New metrics to further evaluate the robustness of the model.
  • 🕹️ Coarse-to-fine condition injection in DiT and a contrastively pre-trained spectrum encoder empower MetaDiT's strong performance!

🔉 News

  • [2025.11.28] 🚀 We're excited to release the training script! While we're still refactoring the codebase, the current version is functional and sufficient for training the model. Note that the code quality isn't optimal at this stage; we're actively working on improvements and will release a fully refactored, production-ready version in a new branch soon. Stay tuned for updates!
  • [2025.11.08] 🎉 Our paper has been accepted by AAAI 2026, and we are about to refactor this codebase for better readability!
  • [2025.08.02] 🔥 We released the first version of our code! We will add more comments and optimize the code structure in the future!

💿 Setup

First, clone this repository:

git clone https://github.com/JessePrince/metadit.git

and cd into the directory:

cd metadit

Then, install uv

pip install uv
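
Alternatively, you can use uv's standalone installer instead of pip:

curl -LsSf https://astral.sh/uv/install.sh | sh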

and install the dependencies:

uv sync
source .venv/bin/activate
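
To confirm that the virtual environment is active, check that the interpreter resolves to .venv inside the repository:

which python
python -V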

Install PyTorch based on your CUDA version

# First, remove the PyTorch packages installed by uv sync
uv cache clean
uv pip uninstall torch torchvision torchaudio

# Check your CUDA version
nvcc -V

# If the above command fails, use
nvidia-smi

# Install PyTorch according to your CUDA version (cu118, cu126, cu128)
uv pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu126
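
After installation, a quick sanity check that PyTorch can see your GPU:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"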

Download the dataset from https://github.com/SensongAn/Meta-atoms-data-sharing. You can split it into train/val/test sets yourself or use our split version.
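
One possible way to fetch the data is to clone the data-sharing repository and copy the .mat files into sim_data/, the default location used by the scripts below; the file layout inside that repository is an assumption here, so adjust the copy path to the actual structure:

git clone https://github.com/SensongAn/Meta-atoms-data-sharing.git
mkdir -p sim_data
# Copy the .mat files into the expected location (layout is an assumption, adjust as needed)
cp Meta-atoms-data-sharing/*.mat sim_data/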

📊 Inference

First, download our model weights from Hugging Face (see the Hugging Face link at the top of this page).

Set up the evaluation script scripts/evaluate.sh with the following arguments:

| Argument | Description | Type | Default | Required |
|---|---|---|---|---|
| --num_gpus | Number of GPUs to use for generation; set to 1 if you have only 1 GPU | int | 4 | Yes |
| --temp_dir | Temporary directory for inference cache | str | cache/inference | Yes |
| --data_path | Path to input data file (.mat format) | str | sim_data/test_set.mat | Yes |
| --model_path | Path to pre-trained model checkpoint | str | ckpts/metadit-small.bin | Yes |
| --model_type | Type of model architecture | str | metadit_s | Yes |
| --condition_channel | Number of condition channels | int | 301 | Yes |
| --seed | Random seed for reproducibility | int | - | Yes |
| --resolution | Output resolution of generated data | int | 32 | Yes |
| --cfg_scale | Classifier-free guidance scale | float | 4.0 | Yes |
| --batch_size | Batch size for generation | int | 256 | Yes |
| --time_steps | Number of diffusion time steps | int | 500 | Yes |
| --save_path | Output path for results (JSON file) | str | results/seed{seed}.json | Yes |

This script will loop through the seeds

seeds=(0 7 42 3407)
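
A minimal sketch of the per-seed loop (evaluate.py is a hypothetical entry-point name; the actual command inside scripts/evaluate.sh also forwards the arguments from the table above):

for seed in "${seeds[@]}"; do
    # One generation pass per seed; "evaluate.py" is a hypothetical name,
    # the real script forwards all the arguments listed in the table above
    python evaluate.py --seed "${seed}" --save_path "results/seed${seed}.json"
done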

Then run

bash scripts/evaluate.sh

Each evaluation result will be saved to results/seed{seed}.json.

Then, set up scripts/metric.sh with the following arguments:

| Argument | Description | Type | Default | Required |
|---|---|---|---|---|
| --data_path | Path to directory containing generated results | str | results | Yes |
| --model_path | Path to surrogate model for evaluation | str | ckpts/surrogate.bin | Yes |
| --metric_save_path | Path to save evaluation metrics (JSON file) | str | metric/new.json | Yes |
| --k | k for AAE&K; multiple integers may be given | list[int] | 2 4 | Yes |

Then run

bash scripts/metric.sh

We also provide the raw data generated by the model on our machine with seed 0. You can download it from Hugging Face (seed0.json), place it inside the results folder (i.e., results/seed0.json), and compute the metrics with

python metric.py \
    --data_path results \
    --model_path ckpts/surrogate.bin \
    --metric_save_path metric/seed0_metric.json

🚀 Training

To train the models, see scripts/train_clip.sh, scripts/train_metadit.sh, and scripts/train_surrogate.sh. Run

bash scripts/train_xxx.sh

and training will begin. The default hyper-parameters in the scripts are the ones used in our paper.

👀 Citation

@article{li2025metadit,
  title={MetaDiT: Enabling Fine-grained Constraints in High-degree-of Freedom Metasurface Design},
  author={Li, Hao and Bogdanov, Andrey},
  journal={arXiv preprint arXiv:2508.05076},
  year={2025}
}
