cair/ICML26_Supplemental

Experiments and Benchmarks for the GraphTsetlinMachine paper

Quick Start

Devcontainer

Devcontainer configuration is provided for VSCode.

  • Remote SSH into cair-gpu17
  • Clone THIS repo somewhere.
  • Open repo as a folder (project) in VSCode.
  • Run the .devcontainer/build script to generate a devcontainer.json for the devcontainer, configured for your user.
./.devcontainer/build
  • When prompted to open in devcontainer, select "Reopen in Container".
    • If not prompted, open the command palette (Ctrl+Shift+P), and select "Dev Containers: Reopen in Container".
  • You should be dropped in /workspace folder inside the container, with all the files and environment installed.
  • Any changes to files in the devcontainer will be reflected on the host machine, and vice versa.

Running experiment

  • Copy the template.py file into your own folder.

  • Fill in the parameters and datasets in template.py and rename it to <your_file>.

  • The benchmark script (<your_file>) needs your binarized dataset, graph dataset, and parameters for the different models.

    • An example of Benchmark with MultiValueXOR is in test/test_bm_xor.py.
  • Parameters

    • gpu_polling_rate in src/graphtm_exp/benchmark.py
      • If your experiments are large (time-wise), set gpu_polling_rate to 0.5.
      • If your experiments are small (time-wise, e.g. a training epoch takes 5 seconds), set gpu_polling_rate to 0.01.
    • Epochs = 50, num_test_reps = 5 (these are the defaults, so there is no need to pass them in <your_file>).
    • Models available are GTM, CoTM, VanillaTM and XGBoost.
      • To opt out of running any model, pass the corresponding *_args as None in <your_file>.
  • To test the script, activate the environment using pixi shell and run python <your_file>.

  • To run the benchmark use pixi run bm <your_folder>/<your_file> <gpuid>.

    • To pick <gpuid>, run nvtop or nvidia-smi and choose a free GPU.

    • This will run the benchmark in a new tmux session, so that the experiment does not stop if the devcontainer is disconnected.

    • To view, run tmux attach.

    • Press Ctrl+B, then D to detach from tmux again.

  • Output is a CSV file (results) and a pickle file (splits).

    • Results cover the different validation splits and models, and are reported as 'all classes' and 'per class'.
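As a rough sketch, the parameter section of <your_file> might look like the following. All names here (gtm_args, cotm_args, vanilla_tm_args, xgb_args, and the dict keys) are illustrative assumptions, not the exact API of src/graphtm_exp/benchmark.py; the point is that each model gets its own args dict and that passing None skips a model.

```python
# Hypothetical sketch of the parameter section in <your_file>.
# All names and keys below are assumptions for illustration,
# not the actual graphtm_exp API.

gtm_args = {            # Graph Tsetlin Machine
    "number_of_clauses": 2000,
    "T": 200,
    "s": 1.0,
}
cotm_args = {           # Coalesced Tsetlin Machine
    "number_of_clauses": 2000,
    "T": 200,
    "s": 1.0,
}
vanilla_tm_args = None  # pass None to skip a model entirely
xgb_args = None         # XGBoost is also skipped in this example

# gpu_polling_rate: 0.5 for long-running experiments,
# 0.01 when a training epoch only takes a few seconds.
gpu_polling_rate = 0.5
```

Epochs and num_test_reps are omitted here since the benchmark already defaults them to 50 and 5.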

Comparing results

  • Open the jupyter notebook model_performance_analysis.ipynb
  • Set your results filename (as saved by the benchmark run) in file_path
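The core of that comparison can be sketched as below. The column names ("model", "split", "accuracy") are assumptions about the results CSV layout, not the benchmark's documented schema; a synthetic DataFrame stands in for reading the real file.

```python
# Sketch of the kind of comparison model_performance_analysis.ipynb performs.
import pandas as pd

# Stand-in for: df = pd.read_csv(file_path)
df = pd.DataFrame({
    "model":    ["GTM", "GTM", "CoTM", "CoTM", "XGBoost", "XGBoost"],
    "split":    [0, 1, 0, 1, 0, 1],
    "accuracy": [0.94, 0.92, 0.90, 0.91, 0.88, 0.89],
})

# Mean accuracy per model across validation splits, best first.
summary = df.groupby("model")["accuracy"].mean().sort_values(ascending=False)
print(summary)
```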

Details

Environment Setup

Uses Pixi to create and manage the environment. The environment is defined in the pixi.yaml file. To create the environment, run:

pixi install

Installing additional packages

If a package is missing from the environment, you can add it using pixi add. If the package is available via conda:

pixi add <package-name> 

If the package is only available with pip:

pixi add --pypi <package-name>

Activating the environment

If you are using VSCode, the environment should be activated automatically. To activate it manually:

pixi shell

To verify that you are using the correct environment, run which python and check that the path points to something like ...folder/.pixi/envs/.../python
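The same check can be done from inside Python. This helper is hypothetical (not part of the repo); it simply inspects an interpreter path for the .pixi/envs directory that Pixi environments live under:

```python
import sys

def is_pixi_python(path: str) -> bool:
    """Return True if an interpreter path looks like a Pixi-managed env.

    Pixi installs environments under <project>/.pixi/envs/, so the
    interpreter path should contain that directory.
    """
    return "/.pixi/envs/" in path

# Check the interpreter currently running this script:
print(is_pixi_python(sys.executable))
```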

TODO:

  • Add pixi environment
  • Add devcontainer
  • Benchmarks
    • GTM
    • TM
    • XGBoost
    • Memory measurements
    • Energy measurements
  • Test environment
  • Test devcontainer
  • Add detailed instructions

About

Supplementary Code for ICML
