
CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception


This is the codebase for the paper CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception.

Project Website | Paper

Dataset

To download the CLAMP dataset, please go to this link. The release contains both the raw data and a filtered dataset with pre-computed vision predictions for each object. If you want to train models directly on our data, use CLAMP_dataset_filtered.npz.
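
As a quick sanity check after downloading, the filtered release can be inspected with NumPy. This is a minimal sketch, not part of the codebase: the filename comes from the release above, but the array names inside the archive depend on the version you download, so list them with `.files` before indexing.

```python
import numpy as np

# Minimal sketch: inspect the filtered CLAMP release.
# Only the filename is taken from this README; the array names are whatever
# the archive actually contains, so print them before relying on any key.
data = np.load("CLAMP_dataset_filtered.npz", allow_pickle=True)

print("Arrays in the archive:", data.files)
for name in data.files:
    arr = data[name]
    print(f"{name}: shape={getattr(arr, 'shape', None)}, dtype={getattr(arr, 'dtype', None)}")
```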

🛠️ Installation

  1. Clone the repository:
git clone https://github.com/empriselab/CLAMP.git
cd CLAMP
  2. Initialize the submodules:
git submodule update --init --recursive
  3. Create a conda environment for the repository:
conda env create --file environment.yaml
  4. Install the repository as an editable Python package:
pip install -e .
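
Optionally, a quick check that the new environment resolves PyTorch (and sees a GPU, if one is present) can save a debugging round before training. This generic snippet is not part of the repository:

```python
import torch

# Generic post-install check: confirm PyTorch imports and report GPU visibility.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```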

Model training

To train the haptic encoder, run:

python learning/haptic_learner.py

Next, to train the visuo-haptic model, first update the weights path to point to the haptic encoder you just trained. You can do this in the constructor of the InceptionTimeWithMLP class, located in models/inception_time_with_mlp.py. Then, run:

python learning/visuohaptic_learner.py
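
The constructor's actual attribute names and checkpoint format are defined by the repository, so the following is only a hedged sketch of the kind of edit described above; the path, function name, and the assumption that the checkpoint is a plain state_dict are illustrative.

```python
import torch

# Illustrative sketch: point this at the checkpoint produced by
# learning/haptic_learner.py. The real constructor in
# models/inception_time_with_mlp.py may store and load weights differently.
HAPTIC_ENCODER_CKPT = "path/to/haptic_encoder_checkpoint.pth"  # placeholder path

def load_haptic_encoder_weights(encoder: torch.nn.Module,
                                ckpt_path: str = HAPTIC_ENCODER_CKPT) -> torch.nn.Module:
    """Load trained haptic-encoder weights, assuming a plain state_dict checkpoint."""
    state_dict = torch.load(ckpt_path, map_location="cpu")
    encoder.load_state_dict(state_dict)
    return encoder
```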

To finetune the visuo-haptic model on robot data, first update the weights path to point to the visuo-haptic model you just trained. You can do this in the set_model() function in learning/visuohaptic_learner.py. Then, run:

python learning/robot_finetune.py

Be sure to specify the robot embodiment name for the data you want to train on. This codebase does not support cross-embodiment training: the gripper embodiments differ, so proprioception data is featurized differently for each robot embodiment.
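
To make the embodiment point concrete, here is a hedged illustration; the embodiment types, signal names, and dimensions below are invented for exposition and are not the repository's featurizers.

```python
import numpy as np

# Invented example: different grippers expose different proprioceptive signals,
# so each embodiment gets its own featurizer and the feature dimensions differ.
def featurize_parallel_jaw(joint_pos: np.ndarray, gripper_width: float) -> np.ndarray:
    """Hypothetical featurizer for a parallel-jaw gripper."""
    return np.concatenate([joint_pos, [gripper_width]])

def featurize_multi_finger(joint_pos: np.ndarray, finger_angles: np.ndarray) -> np.ndarray:
    """Hypothetical featurizer for a multi-fingered hand."""
    return np.concatenate([joint_pos, finger_angles])

parallel = featurize_parallel_jaw(np.zeros(7), 0.04)       # shape (8,)
multi = featurize_multi_finger(np.zeros(7), np.zeros(4))   # shape (11,)
print(parallel.shape, multi.shape)

# Because the feature dimensions differ, the fine-tuning script is run
# per embodiment rather than across embodiments.
```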

📚 Citation

@inproceedings{thakkar2025clamp,
  title     = {CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception},
  author    = {Thakkar, Pranav N. and Sinha, Shubhangi and Baijal, Karan and Bian, Yuhan (Anjelica) and Lackey, Leah and Dodson, Ben and Kong, Heisen and Kwon, Jueun and Li, Amber and Hu, Yifei and Rekoutis, Alexios and Silver, Tom and Bhattacharjee, Tapomayukh},
  booktitle = {Conference on Robot Learning (CoRL)},
  year      = {2025}
}
