This repository provides a modular and extensible data preprocessing pipeline for fine-tuning MotionGPT on custom motion datasets.
MotionGPT-CustomDataPipeline handles the end-to-end transformation of raw motion data into the input format required by MotionGPT.
```bash
git clone https://github.com/Abhay-1301/MotionGPT-CustomDataPipeline.git
cd MotionGPT-CustomDataPipeline
```

Make sure you have Anaconda or Miniconda installed, then create the environment:

```bash
conda env create -f environment.yml
```

This will create a new Conda environment (e.g., `motiongpt-pipeline`) with all required dependencies. Activate it with:

```bash
conda activate motiongpt-pipeline
```
⚠️ Note: If you encounter any package or environment-related issues, refer to the individual setup instructions located in the respective subdirectories (e.g., `mesh/`, `moshpp/`, `soma/`, `HumanML3D/`, `smpl/`).
This project integrates and builds upon these external repositories to create a unified preprocessing pipeline, so environment conflicts may occasionally arise due to version mismatches.
This repository provides a pipeline to convert custom motion capture data in `.trc` format into the input format required by MotionGPT.
To get started:
- Place your `.trc` file(s) in the `inputTrc/` directory.
- Open and run the `main.ipynb` notebook, following the instructions provided within.

The processed output will be generated accordingly.
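As a quick sanity check before running the notebook, you can inspect a `.trc` file's header in Python. The sketch below is illustrative only and is not part of the pipeline: `read_trc_header` is a hypothetical helper, and it assumes the standard TRC layout (a file-type row, two tab-delimited metadata rows, then a marker-name row in which each marker spans three X/Y/Z columns).

```python
# Hypothetical sketch: peek at a .trc file's header before preprocessing.
# Assumes the conventional TRC layout:
#   line 1: PathFileType / file info
#   line 2: metadata keys (DataRate, NumFrames, NumMarkers, Units, ...)
#   line 3: metadata values
#   line 4: Frame#, Time, then one marker name per three X/Y/Z columns
from pathlib import Path

def read_trc_header(path):
    """Return (metadata dict, marker-name list) from a .trc file."""
    lines = Path(path).read_text().splitlines()
    keys = lines[1].split("\t")   # e.g. DataRate, NumFrames, Units, ...
    vals = lines[2].split("\t")
    meta = dict(zip(keys, vals))
    # Marker names sit on line 4 after the Frame#/Time columns; each name
    # covers three (X, Y, Z) columns, so blank cells are interleaved.
    markers = [m for m in lines[3].split("\t")[2:] if m.strip()]
    return meta, markers
```

Printing `meta["NumMarkers"]` and the recovered marker names is an easy way to confirm a file in `inputTrc/` was exported correctly before running the full pipeline.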