Hi, thanks for releasing this great project! I’m trying to reproduce the results from your paper on the LERF dataset.
For the teatime dataset, I obtained the following results:
Mean IoU per class: {'sheep': 0.5264876460419082, 'plate': 0.449207029082698, 'apple': 0.9236299767545247}
Mean Boundary IoU per class: {'sheep': 0.4352036218062042, 'plate': 0.4511066046043869, 'apple': 0.9207977168668688}
Overall Mean IoU: 0.6331082172930437
Overall Boundary Mean IoU: 0.60236931442582
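For reference, the overall numbers above are consistent with a simple unweighted mean over the three per-class scores. A minimal sketch of that aggregation (the per-class values are copied from my run above; the dict/variable names are just for illustration):

```python
import math

# Per-class IoU values from my evaluation run on the teatime scene.
miou_per_class = {
    'sheep': 0.5264876460419082,
    'plate': 0.449207029082698,
    'apple': 0.9236299767545247,
}

# Overall mIoU as the unweighted mean over classes.
overall_miou = sum(miou_per_class.values()) / len(miou_per_class)
print(overall_miou)

# Matches the reported overall value up to floating-point noise.
assert math.isclose(overall_miou, 0.6331082172930437, rel_tol=0, abs_tol=1e-12)
```

So the aggregation itself looks fine; the gap to the paper presumably comes from the per-class IoUs (e.g. 'sheep' and 'plate'), not from how they are averaged.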
However, these results are considerably lower than the mIoU of 81.2 reported in the paper.
Could you provide some insight into what might be causing this discrepancy?