a #7

Changes from all commits
42 commits
- 4336f1a Update README.md (jiaoyining, Mar 29, 2023)
- 636cec0 Update README.md (jiaoyining, Mar 29, 2023)
- 7abe38d Update README.md (jiaoyining, Mar 29, 2023)
- dfb6b3a Update README.md (jiaoyining, Mar 29, 2023)
- a25caaa Update README.md (jiaoyining, Mar 29, 2023)
- 739a619 Update README.md (jiaoyining, Mar 22, 2024)
- fae5eac Merge pull request #3 from uncbiag/page (jiaoyining, Apr 16, 2024)
- 548e921 Create NAISRicon.svg (jiaoyining, Apr 23, 2024)
- b91f206 Update README.md (jiaoyining, Apr 23, 2024)
- e9f4406 Update README.md (jiaoyining, Apr 23, 2024)
- 89af98e Update README.md (jiaoyining, Apr 23, 2024)
- 6862fb5 Update README.md (jiaoyining, Apr 23, 2024)
- c06fc7b Update README.md (jiaoyining, Apr 23, 2024)
- fb9761a Update README.md (jiaoyining, Apr 23, 2024)
- 5dcb26a Update README.md (jiaoyining, Apr 23, 2024)
- da80e3b Update README.md (jiaoyining, Apr 23, 2024)
- efb3292 Update README.md (jiaoyining, Apr 23, 2024)
- ae28d48 Update README.md (jiaoyining, Apr 23, 2024)
- a040cfd Merge pull request #4 from uncbiag/page (jiaoyining, Apr 24, 2024)
- b3e68d8 Create workspace.xml (jiaoyining, Apr 24, 2024)
- fba9d34 Merge pull request #5 from uncbiag/page (jiaoyining, Apr 24, 2024)
- e0b6c73 a (jiaoyining, Apr 24, 2024)
- 0714ac5 Merge branch 'main' of https://github.com/uncbiag/NAISR (jiaoyining, Apr 24, 2024)
- 10366ab Update workspace.xml (jiaoyining, Apr 24, 2024)
- 8858fff a (jiaoyining, Apr 24, 2024)
- cad4c52 a (jiaoyining, Apr 24, 2024)
- fd010d6 a (jiaoyining, Apr 24, 2024)
- 97e4dd5 a (jiaoyining, Apr 24, 2024)
- 90fc103 a (jiaoyining, Apr 24, 2024)
- 99f4c95 a (jiaoyining, Apr 24, 2024)
- 0e6a27b Update README.md (jiaoyining, Apr 25, 2024)
- 47a55d4 COLAB (jiaoyining, Apr 25, 2024)
- c111de8 Merge branch 'main' of https://github.com/uncbiag/NAISR (jiaoyining, Apr 25, 2024)
- d0645de Update README.md (jiaoyining, Apr 25, 2024)
- 4f0e4b2 Update README.md (jiaoyining, Apr 25, 2024)
- f13b3ed Update README.md (jiaoyining, Apr 25, 2024)
- 35436ce a (jiaoyining, Apr 25, 2024)
- e4bb50c Update NAISRicon.svg (jiaoyining, Apr 25, 2024)
- 962a92a Update README.md (jiaoyining, Apr 25, 2024)
- 80f8e3f Update README.md (jiaoyining, Apr 25, 2024)
- a775945 Update README.md (jiaoyining, Apr 25, 2024)
- ec92747 Update README.md (jiaoyining, Apr 29, 2024)
Binary file modified .DS_Store
Binary file not shown.
28 changes: 28 additions & 0 deletions .idea/workspace.xml


1 change: 1 addition & 0 deletions NAISRicon.svg
216 changes: 213 additions & 3 deletions README.md
@@ -1,5 +1,215 @@
# NAISR
NAISR: A 3D Neural Additive Model for Interpretable Shape Representation
# <img src="NAISRicon.svg" width="35" height="35"> NAISR: A 3D Neural Additive Model for Interpretable Shape Representation

PyTorch implementation for our `NAISR` paper <br>
[NAISR: A 3D Neural Additive Model for Interpretable Shape Representation](https://arxiv.org/abs/2303.09234), ICLR 2024 **Spotlight**. <br>
Yining Jiao, Carlton Zdanski, Julia Kimbell, Andrew Prince, Cameron Worden, Samuel Kirse, Christopher Rutter, Benjamin Shields, William Dunn, Jisan Mahmud, Marc Niethammer. <br>
UNC-Chapel Hill

Codes on the way...
#### [Paper](https://arxiv.org/abs/2303.09234) | [Project Page](https://uncbiag.github.io/NAISR/) | [Colab Demos](https://colab.research.google.com/drive/1OudGynEydIXpAgfA9lvi5d0L86kK-uwg?usp=sharing)


<div align=center><img src="https://github.com/uncbiag/NAISR/blob/main/figs/NAISRoverviewwide.svg?raw=true" width="1000" height="600"/></div>
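As a toy numerical sketch of what a neural additive shape model computes (our illustrative reading of the idea, not code from this repo): a template SDF is queried at points displaced by a sum of per-covariate displacement fields, so each covariate's contribution can be inspected on its own. `template_sdf` and the radial-scaling `covariate_displacement` below are hypothetical stand-ins for the paper's learned networks.

```python
import numpy as np

def template_sdf(p):
    """SDF of a unit circle, standing in for the learned template shape."""
    return np.linalg.norm(p, axis=-1) - 1.0

def covariate_displacement(p, cov, weight):
    """Hypothetical displacement field: radial scaling driven by one covariate.

    In the real model this would be a learned network per covariate."""
    return weight * cov * p

def naisr_sdf(p, covs, weights):
    # Additive construction: displacements from each covariate are summed
    # before querying the template, so effects remain individually readable.
    dp = sum(covariate_displacement(p, c, w) for c, w in zip(covs, weights))
    return template_sdf(p + dp)
```

With all covariates at zero the displacements vanish and the model reduces to the template, which is exactly the disentanglement property the paper exploits.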

Please cite as:
```
@inproceedings{
jiao2024naisr,
title={\texttt{NAISR}: A 3D Neural Additive Model for Interpretable Shape Representation},
author={Yining Jiao and Carlton Jude ZDANSKI and Julia S Kimbell and Andrew Prince and Cameron P Worden and Samuel Kirse and Christopher Rutter and Benjamin Shields and William Alexander Dunn and Jisan Mahmud and Marc Niethammer},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=wg8NPfeMF9}
}
```

>[!NOTE]
>For the three datasets used in the `NAISR` paper, we provide this [Colab demo](https://colab.research.google.com/drive/1OudGynEydIXpAgfA9lvi5d0L86kK-uwg) to investigate the shapes learned with `NAISR`. <br>
>We also provide instructions on this page for applying `NAISR` to your own shape analysis questions.


## Installation
The code is tested with ``python=3.9``, ``torch=2.1.0``, ``torchvision=0.15.2``.
```
git clone https://github.com/uncbiag/NAISR
cd NAISR
```
Now, create a new conda environment and install the required packages.
```
conda create -n naisr python=3.9
conda activate naisr
pip install -r requirements.txt
```


## Data and Model Weights
We train and test our method on three datasets: Starman, ADNI Hippocampus, and Pediatric Airways.

| Dataset | Description | Dataset Link | Model Link |
|-----------|----------------------------------------------|:------------------------------------:|:------------------------------------:|
|Starman | simulated 2D Starman shapes | [simulation code][StarmanData] | [weights][StarmanModel] |
|ADNI Hippocampus | 1632 hippocampi from ADNI | [official site][ADNIData] | [weights][ADNIModel] |
|Pediatric Airway | 357 pediatric airway shapes | NA | [weights][AirwayModel] |



[ADNIData]: https://ida.loni.usc.edu/login.jsp?project=ADNI
[StarmanData]:

[ADNIModel]: https://github.com/uncbiag/NAISR/releases/download/naisr_weights_v0/naisr_weights_adni.pth
[StarmanModel]: https://github.com/uncbiag/NAISR/releases/download/naisr_weights_v0/naisr_weights_starman.pth
[AirwayModel]: https://github.com/uncbiag/NAISR/releases/download/naisr_weights_v0/naisr_weights_pediatric_airway.pth


## Visualizations of Shape Space Extrapolation

### Getting Started
One just needs to use this [Colab demo](https://colab.research.google.com/drive/1OudGynEydIXpAgfA9lvi5d0L86kK-uwg) to explore the template shape spaces of the starmen, hippocampi, and airways used in the `NAISR` paper.

### More functions
Our code repo provides more functions to visualize the shape space, e.g., as a shape matrix for a specific case or for the template shape (as in Figure 3 of the main paper).

To get the matrix of the template shape extrapolation, please use
```
python evolution_shapematrix.py -e examples/hippocampus/naigsr_0920_base.json
```

To get the matrix of the shape extrapolation for a specific patient, please use
```
python evolution_shapematrix_specific.py -e examples/hippocampus/naigsr_0920_base.json
```



## Customize
One may also want to use `NAISR` for their own shape analysis problems. For this use case, we provide our best suggestions and instructions here, illustrated on the simulated Starman dataset.

### Data Preprocessing

#### Alignment with Rigid Transformation
The shapes to explore need to be registered with a rigid transformation (translation + rotation).
If paired point clouds are available, the rigid transformation can be estimated directly from the correspondences; otherwise, we recommend using ICP to register the point clouds.
In our case, we use airway landmarks to learn the rigid transformation for the airways, and ICP to register the point clouds of the hippocampi.
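For the paired-correspondence case (e.g., landmarks), the least-squares rigid transform has a closed form. Below is a minimal NumPy sketch of the standard Kabsch/orthogonal-Procrustes solution; it is our own illustration, not code from this repo.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of paired points (e.g., corresponding landmarks).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so the result is a pure rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Applying `src @ R.T + t` then brings the source shape into the target's pose; ICP iterates essentially this step after re-estimating correspondences.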

#### SDF Extraction
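This section is still a stub in the README. As a self-contained illustration of the kind of training data involved, the snippet below samples (point, signed distance) pairs for an analytic sphere; for real meshes one would compute distances to the extracted surface with a mesh-processing library, and the sphere here is purely a hypothetical stand-in.

```python
import numpy as np

def sample_sphere_sdf(n, radius=1.0, band=0.1, seed=0):
    """Sample (point, sdf) training pairs for an analytic sphere.

    Points are drawn near the surface (where SDF supervision matters most)
    plus uniformly in the volume; the SDF of a sphere is |p| - radius.
    """
    rng = np.random.default_rng(seed)
    surface = rng.normal(size=(n, 3))
    surface *= radius / np.linalg.norm(surface, axis=1, keepdims=True)
    near = surface + rng.normal(scale=band, size=(n, 3))   # near-surface jitter
    uniform = rng.uniform(-1.5 * radius, 1.5 * radius, size=(n, 3))
    pts = np.concatenate([near, uniform])
    sdf = np.linalg.norm(pts, axis=1) - radius
    return pts, sdf
```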

### Experiment Configuration
An example experiment configuration file for the Starman dataset:
```json
{
"Description" : [ "This experiment learns a shape representation for starman dataset." ],
"Device":0,
"DataSource": {"train": "/home/jyn/NAISR/examples/starman/2dshape_train_with_temp.csv",
"test": "/home/jyn/NAISR/examples/starman/2dshape_test_with_temp.csv"},
"Split": null,
"Network": "DeepNAIGSR",
"NumEpochs": 300,
"LoggingRoot": "/playpen-raid/jyn/NAISR/log",
"ExperimentName": "DeepNAIGSR_STARMAN3D_0222_256_base",

"EpochsTilCkpt": 10,
"StepsTilSummary": 1000,
"UseLBFGS": false,
"DoublePrecision": false,
"CheckpointPath": "",
"CodeLength": 256,

"AdditionalSnapshots" : [ 50, 100, 200, 300, 400, 500 ],
"LearningRateSchedule" : [
{
"Type": "Step",
"Initial": 0.00005,
"Interval": 1000,
"Factor": 0.5
},
{
"Type": "Step",
"Initial": 0.001,
"Interval": 1000,
"Factor": 0.5
}],
"SamplesPerScene" : 750,
"BatchSize": 64,
"DataLoaderThreads": 4,
"ClampingDistance": 1,

"Articulation": true,
"NumAtcParts": 1,
"TrainWithParts": false,
"Class": "starman",
"Attributes": ["cov_1", "cov_2"],
"TemplateAttributes": {"cov_1": 0, "cov_2": 0},
"Backbone": "siren",
"PosEnc": false,
"InFeatures": 2,
"HiddenFeatures": 256,
"HidenLayers": 6,
"OutFeatures": 1,
"Loss": {
"whether_sdf": true,
"whether_normal_constraint": true,
"whether_inter_constraint": true,
"whether_eikonal": true,
"whether_code_regularization": true}
```
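The experiment scripts take such a file via `-e`; loading it is plain JSON. The sketch below is our own illustration: the key names follow the example above, but the `REQUIRED_KEYS` validation is hypothetical, not logic from this repo.

```python
import json

# Keys the training scripts appear to rely on (illustrative, not exhaustive).
REQUIRED_KEYS = {"Network", "NumEpochs", "Attributes", "LearningRateSchedule"}

def load_experiment_spec(path):
    """Load an experiment .json and sanity-check a few expected keys."""
    with open(path) as f:
        spec = json.load(f)
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        raise KeyError(f"experiment spec is missing keys: {sorted(missing)}")
    return spec
```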
### Training
For training, one needs to run `train_atlas_3dnaigsr.py` with the network settings specified in the `json` files, e.g., for the Starman dataset,
```
python train_atlas_3dnaigsr.py -e examples/starman/naigsr_0222_base.json
```

### Shape Reconstruction
For testing/reconstruction without covariates, one needs to run `reconstruct_atlas.py` with the network settings specified in the `json` files, e.g., for the Starman dataset.
```
python reconstruct_atlas.py -e examples/starman/naigsr_0222_base.json
```

For testing/reconstruction with covariates, one needs to run `reconstruct_atlas_with_cov.py` with the network settings specified in the `json` files, e.g., for the Starman dataset.
```
python reconstruct_atlas_with_cov.py -e examples/starman/naigsr_0222_base.json
```


### Shape Transport
For shape transport without covariates, one needs to run `transport_general.py` with the network settings specified in the `json` files, e.g., for the Starman dataset.
```
python transport_general.py -e examples/starman/naigsr_0222_base.json
```

For shape transport with covariates, one needs to run `transport.py` with the network settings specified in the `json` files, e.g., for the Starman dataset.
```
python transport.py -e examples/starman/naigsr_0222_base.json
```


### Shape Evolution and Disentanglement

One just needs to adjust this [Colab demo](https://colab.research.google.com/drive/1OudGynEydIXpAgfA9lvi5d0L86kK-uwg) with their own `NAISR` weights to explore how the learned model deforms the template shape under the query covariates.

To get the matrix of the template shape extrapolation, please use
```
python evolution_shapematrix.py -e examples/starman/naigsr_0222_base.json
```

To get the matrix of the shape extrapolation for a specific patient, please use
```
python evolution_shapematrix_specific.py -e examples/starman/naigsr_0222_base.json
```



More instructions on the way...





Binary file removed docs/NAISR poster (2).png
Binary file not shown.
Binary file removed docs/NAISR_icon.png
Binary file not shown.
10 changes: 10 additions & 0 deletions docs/index.html
@@ -104,6 +104,16 @@ <h2 class="title is-2 publication-title"><tt>NAISR</tt>: A 3D Neural Additive Mo
</a>
</span>
<!-- Colab Link. -->
<span class="link-block">
<a href="https://colab.research.google.com/drive/1OudGynEydIXpAgfA9lvi5d0L86kK-uwg?usp=sharing"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-google"></i>
</span>
<span>Colab Demo</span>
</a>
</span>
<!-- video Link. -->
<span class="link-block">
<a href="https://www.youtube.com/watch?v=8wSIIPAvlLI&t=43s"
class="external-link button is-normal is-rounded is-dark">
3 changes: 2 additions & 1 deletion examples/starman/naigsr_0222_base.json
@@ -1,7 +1,8 @@
{
"Description" : [ "This experiment learns a shape representation for starman dataset." ],
"Device":0,
"DataSource": {"train": "/home/jyn/NAISR/examples/starman/2dshape_train_with_temp.csv", "test": "/home/jyn/NAISR/examples/starman/2dshape_test_with_temp.csv"},
"DataSource": {"train": "examples/starman/2dshape_train_with_temp.csv",
"test": "examples/starman/2dshape_test_with_temp.csv"},
"Split": null,
"Network" : "DeepNAIGSR",
"NumEpochs" : 300,
Expand Down
5 changes: 4 additions & 1 deletion naisr/__init__.py
@@ -10,4 +10,7 @@
#from naisr.model import NAISiren, NAIVF, BaselineVF, FCBaseline, NAIVF_withtempl, SirenlatentVF, NAIlatentVF_withtempl, LipNAIVF_withtempl, Baseline, BaselineVF, ICVF, NAIVF_with3dtempl, NAIVF_autotempl, NAIVF_fixedtempl, NAIVF_fixed, DeepSDF #NAISR, HyperSirenBaseline, SirenBaseline,
from naisr.diff_operators import *
from naisr.metrics import *
from naisr.model_naigsr import DeepNAIGSR
from naisr.model_naigsr import DeepNAIGSR
from naisr.starman_dataset import *
from naisr.adni_dataset import *
from naisr.airway_dataset import *
3 changes: 1 addition & 2 deletions naisr_meshing.py
@@ -1148,10 +1148,9 @@ def convert_3d_sdf_samples_to_ply(
print(numpy_3d_sdf_tensor.max())
print(numpy_3d_sdf_tensor.min())
try:
verts, faces, normals, values = measure.marching_cubes_lewiner(
verts, faces, normals, values = measure.marching_cubes(
numpy_3d_sdf_tensor, level=0., spacing=[voxel_size] * 3
)

except:
path_aligned_surface = os.path.join(savedir, 'surface.stl')
ply_filename_out = os.path.join(savedir, 'surface.ply')
Binary file added publicdata/.DS_Store
Binary file not shown.
2 changes: 2 additions & 0 deletions publicdata/__init__.py
@@ -0,0 +1,2 @@
from publicdata.utils_2d import *
from publicdata.deformation import *