
multi-label TotalSegmentator training (bile duct, vessels, and tumor) #546

@atwam1

Description


My colleagues and I are currently working on a multi-label abdominal CT segmentation project and are using TotalSegmentator as a key reference method.
We are encountering difficulty reproducing the reported performance for multi-label training, particularly when jointly training vascular structures and tumors. Despite following the published setup, we cannot match the reported segmentation quality for certain labels, and we wanted to reach out for clarification and guidance.
In particular, we had a few questions:

  1. Label-wise performance
     Would you be able to share the approximate Dice scores you obtain for:
     - Hepatic vein
     - Portal vein
     - Liver tumor (both on its own and when included in the same multi-label training setup)
     Even approximate ranges would be very helpful for benchmarking.
  2. Multi-label training strategy
     When training on multiple labels simultaneously:
     - Are all labels trained in a single nnU-Net run and treated equally?
     - Do you apply any label-specific weighting, sampling, or post-processing (e.g., small-object filtering, connected-component analysis)?
     - Are the vascular labels (hepatic vs. portal vein) ever trained separately or hierarchically?
  3. Reproducibility / training script
     Would it be possible to share (or point us to) a script or command sequence that reproduces your training on the TotalSegmentator data with your reported parameters (e.g., nnU-Net version, configuration, preprocessing, folds, loss, and label mapping)?
     We want to ensure that our setup truly matches yours before drawing conclusions.
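For transparency about how we would compare any numbers you share: we compute per-label Dice in the standard way on integer label maps. A minimal numpy sketch (function names and toy label IDs are ours, not from TotalSegmentator):

```python
import numpy as np

def dice_per_label(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Standard Dice coefficient for one integer label in a multi-label mask."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return float("nan")  # label absent from both masks; conventionally undefined
    return 2.0 * np.logical_and(p, g).sum() / denom

# toy 2D example: 1 = hepatic vein, 2 = portal vein (illustrative IDs only)
gt = np.array([[1, 1, 2],
               [0, 2, 2]])
pred = np.array([[1, 0, 2],
                 [0, 2, 2]])
# dice_per_label(pred, gt, 1) -> 2/3; dice_per_label(pred, gt, 2) -> 1.0
```

This is the per-label (one-vs-rest) formulation; knowing whether your reported scores also treat absent labels as NaN (vs. 0 or 1) would help us match your evaluation exactly.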

For context and comparison, our group develops and evaluates segmentation models using MIST (Medical Imaging Segmentation Toolkit), which supports multi-label and multi-task training pipelines:
https://github.com/mist-medical/MIST
We are currently comparing MIST-based models against nnU-Net/TotalSegmentator-style training under matched data and label conditions, and your guidance would be extremely valuable in ensuring that our comparison is fair and reproducible.

Thank you very much for your time and for making your work publicly available. We greatly appreciate any insight you can share, and we would be happy to provide additional details about our setup if helpful.
