Evaluation on the test set #33

@liang-hou

Description

Hi, thanks for the excellent work.
But I noticed that the test set is not available for evaluation, because the `split` argument is not specified in the metrics functions, so the train set is loaded for computing FID by default. Many GAN papers, including SSGAN, split the dataset into a train set and a test set, and I guess they evaluated their models on the test set.
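To illustrate the concern, here is a minimal hypothetical sketch (the function and argument names are assumptions, not the repository's actual API): if a metrics helper takes an optional `split` argument that defaults to `"train"`, FID is silently computed against training images unless the caller passes `split="test"` explicitly.

```python
# Hypothetical sketch of the reported issue: a metrics helper whose
# `split` argument defaults to "train" will evaluate on the train set
# whenever the caller forgets to specify the split.
def load_eval_split(dataset_splits, split="train"):
    """Return the images for the requested split (hypothetical helper)."""
    return dataset_splits[split]

splits = {
    "train": ["train_img_1", "train_img_2"],
    "test": ["test_img_1"],
}

# Default call silently evaluates on the train set:
print(load_eval_split(splits))                 # ['train_img_1', 'train_img_2']

# The fix is to pass split="test" explicitly in the metrics functions:
print(load_eval_split(splits, split="test"))   # ['test_img_1']
```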

Another minor concern is that the inception model is downloaded and stored separately for each experiment, which wastes time and storage. It would be better to drop the `log_dir` from the `inception_path`; the inception model would then be cached once and reused across all experiments.
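A small sketch of the suggested change (paths and names here are illustrative assumptions, not the repository's actual layout): when the per-experiment `log_dir` is joined into the inception model path, every run resolves to a distinct file and triggers its own download; dropping it makes all experiments resolve to one shared, cached file.

```python
import os

def inception_path(cache_dir, log_dir=None):
    # Hypothetical path helper. If the per-experiment log_dir is joined
    # into the path, each run downloads its own copy of the inception
    # model; omitting log_dir lets all experiments share one cached file.
    base = os.path.join(cache_dir, log_dir) if log_dir else cache_dir
    return os.path.join(base, "inception_model.pb")

# Per-experiment path: a fresh download for every run
print(inception_path("cache", "exp1"))
# Shared path: downloaded once, reused by all experiments
print(inception_path("cache"))
```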

Metadata

Labels: enhancement (New feature or request)
