Hi, thank you for your contributions! I found this repo really easy to follow.
However, when I called `mmc.metrics.evaluate` to compute the FID score, I found that the dataset path is hard-coded to `./datasets`, which is inconvenient for anyone who keeps their datasets outside the GAN project directory.
Also, would you mind sharing the parameter combinations (i.e. `batch_size` and `n_dis` during training, and `num_samples`, `num_real_samples`, and `num_fake_samples` during evaluation) with which the model reaches the baseline's performance (IS, FID, KID)?
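For what it's worth, a minimal sketch of the behavior I'm asking for might look like the helper below. Note this is purely illustrative: `resolve_dataset_dir` and the `MMC_DATASET_ROOT` environment variable are hypothetical names, not part of the current API; the idea is just to let callers override the hard-coded default.

```python
import os

def resolve_dataset_dir(dataset_dir=None, default="./datasets"):
    """Resolve the dataset root directory (hypothetical helper).

    Priority: explicit argument > MMC_DATASET_ROOT env var > current
    hard-coded './datasets' default.
    """
    if dataset_dir is not None:
        return dataset_dir
    return os.environ.get("MMC_DATASET_ROOT", default)
```

With something like this, `mmc.metrics.evaluate` could accept an optional `dataset_dir` argument and fall back to the existing default, so nothing breaks for current users.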