DeepShapeMatchingKit

arXiv paper license GPL-3.0 python 3.10 PyTorch >=2.3.0


DeepShapeMatchingKit provides a collection of deep shape matching methods with a few contributions to improve efficiency, understanding, and evaluation, including a 33× faster batched functional map solver. More details are described in our paper.
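At its core, a functional map C is the least-squares solution of C A ≈ B, where A and B stack the spectral coefficients of corresponding descriptors on the two shapes; the batched solver amortizes this solve over a whole batch of shape pairs in one call. A minimal sketch of the idea (illustrative only, not the repository's implementation; tensor names and shapes are assumptions):

```python
import torch

def batched_fmap_solver(feat_x, feat_y):
    """Solve C @ A ≈ B for a batch of shape pairs in a single call.

    feat_x: (B, k, m) spectral coefficients of source descriptors
    feat_y: (B, k, m) spectral coefficients of target descriptors
    Returns C: (B, k, k) functional maps from source to target.
    """
    # torch.linalg.lstsq solves min ||A X - B||; we want C A = B,
    # which transposes to A^T C^T = B^T.
    A_t = feat_x.transpose(-2, -1)               # (B, m, k)
    B_t = feat_y.transpose(-2, -1)               # (B, m, k)
    C_t = torch.linalg.lstsq(A_t, B_t).solution  # (B, k, k)
    return C_t.transpose(-2, -1)
```

Because the solve is batched, the per-pair Python-loop overhead disappears and the linear algebra runs as one fused LAPACK/cuSOLVER call.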

This codebase is based on ULRSSM and includes implementations of:

| Method | Paper | Year | Original | Ours | Speedup (×) |
|---|---|---|---|---|---|
| ULRSSM | Cao et al. | 2023 | 429 ms | 82 ms | 5.23 |
| Hybrid ULRSSM | Bastian et al. | 2024 | 555 ms | 363 ms | 1.53 |
| DPFM | Attaiki et al. | 2021 | 115 ms | 95 ms | 1.21 |
| EchoMatch | Xie et al. | 2025 | 215 ms | 195 ms | 1.10 |
| AttentiveFmaps 🔜 | Li et al. | 2022 | 2050 ms | 448 ms | 4.57 |

Runtime is measured per training iteration. AttentiveFmaps is not yet included in this repository; its speedup was measured in the original implementation.
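Per-iteration timings like those above are typically collected by averaging over many iterations after a warm-up phase (and, on GPU, synchronizing before reading the clock). A generic sketch of such a harness (illustrative; not the benchmarking code used for the table):

```python
import time

def time_per_iteration(step_fn, warmup=10, iters=100):
    """Average wall-clock time of step_fn in milliseconds.

    On GPU, call torch.cuda.synchronize() inside step_fn (or around
    the timed region) so asynchronous kernel launches are not
    mistaken for completed work.
    """
    for _ in range(warmup):   # warm up caches, JIT, cudnn autotuning
        step_fn()
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    return (time.perf_counter() - start) / iters * 1e3
```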

Contributions are welcome! Whether it's adding new methods, improving existing implementations, or fixing bugs — feel free to open an issue or submit a pull request.

Installation

conda create -n deepshapematchingkit python=3.10
conda activate deepshapematchingkit
conda install -c nvidia cuda-toolkit # nvcc is needed to compile pytorch3d with CUDA support
pip install -r requirements.txt
Shell-Energy (for Elastic Basis)

If you want to install the shell-energy library and its Python bindings, you can do so as follows:

git clone https://gitlab.com/numod/shell-energy.git
cd shell-energy
mkdir build
cd build
cmake .. -DBUILD_PYTHON=ON -DPython_EXECUTABLE=$(which python)
cmake --build . --config Release
cp python/pyshell.cpython*.so ../../
cd ../../
PyTorch3D (for Diff3F Features)
pip install git+https://github.com/facebookresearch/pytorch3d.git@stable

PyTorch3D is required for computing DINO features with the Diff3F renderer. The installation can be tricky; if you run into issues, please check the official installation guide.


Optional: If you are using a cluster with mixed GPU types, you can specify all your GPU architectures to ensure compatibility, e.g.:

export TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0;7.5;8.0;8.6+PTX" # much longer compile time
pip install git+https://github.com/facebookresearch/pytorch3d.git@stable 

Datasets

To download and set up the datasets:

  1. Run the script download_datasets.sh from the repository root to automatically download and place the datasets:
    bash download_datasets.sh
  2. For the BeCoS dataset, please follow the official instructions at BeCoS repository to manually download and generate the dataset.

All datasets are placed under ../data/:

├── data
    ├── ...

We thank the original dataset providers for their contributions to the shape analysis community; all credit goes to the original authors.

Pre-trained Models

For reproducibility, all pre-trained models can be found in checkpoints and the corresponding config files in options.

In the following, we show how to run an experiment.

Preprocess

python preprocess.py --opt options/echo_match/train/echo_match_psmal_dino.yaml

Optional: parallel_preprocess.py splits preprocessing across multiple processes via its worker_id and num_workers arguments.
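The worker split follows the usual pattern: each of num_workers processes handles a disjoint strided slice of the work, selected by its worker_id. A minimal sketch of that partitioning (illustrative; not the script's exact code):

```python
def shard(items, worker_id, num_workers):
    """Return the disjoint subset of items assigned to this worker.

    A strided (round-robin) split keeps per-worker load balanced
    even when preprocessing cost varies along the list.
    """
    assert 0 <= worker_id < num_workers
    return items[worker_id::num_workers]
```

Launching one process per worker_id then covers every item exactly once.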

Train

python train.py --opt options/echo_match/train/echo_match_psmal_dino.yaml

The experiments will be saved in the experiments folder. You can monitor training in TensorBoard or via wandb.

tensorboard --logdir experiments/

Test

python test.py --opt options/echo_match/test/echo_match_psmal_dino.yaml

The results will be saved in the results folder.

Visualization

Headless

python visualize.py --opt options/echo_match/test/echo_match_psmal_dino.yaml

Interactive

python visualize.py -i --opt options/echo_match/test/echo_match_psmal_dino.yaml

The visualizations will be saved in the visualizations folder.

Legacy visualization script for complete shape matching
python visualize_complete.py --opt options/hybrid_ulrssm/test/smal.yaml

Acknowledgement

The framework implementation is adapted from Unsupervised Learning of Robust Spectral Shape Matching.

The implementation of DiffusionNet is based on the official implementation.

The implementation of visualization is based on polyscope.

We thank the original authors for their contributions to this code base.

Citation

If you find this codebase useful, please cite:

@misc{xie2026deepshapematchingkitacceleratedfunctionalmap,
      title={DeepShapeMatchingKit: Accelerated Functional Map Solver and Shape Matching Pipelines Revisited}, 
      author={Yizheng Xie and Lennart Bastian and Congyue Deng and Thomas W. Mitchel and Maolin Gao and Daniel Cremers},
      year={2026},
      eprint={2604.10377},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2604.10377}, 
}

Please also consider citing the original papers:

@article{cao2023unsupervised,
  title={Unsupervised Learning of Robust Spectral Shape Matching},
  author={Cao, Dongliang and Roetzer, Paul and Bernard, Florian},
  journal={ACM Transactions on Graphics (TOG)},
  volume={42},
  number={4},
  pages={1--15},
  year={2023},
  publisher={ACM New York, NY, USA}
}
@inproceedings{bastian2024hybrid,
  title={Hybrid Functional Maps for Crease-Aware Non-Isometric Shape Matching},
  author={Bastian, Lennart and Xie, Yizheng and Navab, Nassir and L{\"a}hner, Zorah},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={3313--3323},
  month={June},
  year={2024}
}
@inproceedings{attaiki2021dpfm,
  title={Dpfm: Deep partial functional maps},
  author={Attaiki, Souhaib and Pai, Gautam and Ovsjanikov, Maks},
  booktitle={2021 International Conference on 3D Vision (3DV)},
  pages={175--185},
  year={2021},
  organization={IEEE}
}
@inproceedings{xie2025echomatch,
  title={EchoMatch: Partial-to-Partial Shape Matching via Correspondence Reflection},
  author={Xie, Yizheng and Ehm, Viktoria and Roetzer, Paul and El Amrani, Nafie and Gao, Maolin and Bernard, Florian and Cremers, Daniel},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={11665--11675},
  year={2025}
}
