jorghyq2016 / FedHSSL

The implementation of FedHSSL algorithm published in the paper "A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning".

Clarification on ModelNet Dataset Categories and Sample Counts #3

Open languangduan opened 1 year ago

languangduan commented 1 year ago

Hello,

I've come across a couple of interesting quirks while working with the ModelNet dataset:

Sample Shortfall Surprise: According to the paper, there should be 24630 training samples and 6204 test samples. However, even assuming each sample consists of four images as stated, selecting the ten classes with the highest sample counts in ModelNet does not yield these totals.

Accuracy Anomaly: When replicating the experiments locally, I pretrain on the same ten ModelNet classes with the parameters outlined in the paper. Surprisingly, the accuracy I achieve exceeds 79%, rather than falling in the 70% range reported in the paper.

I'm curious if you could provide some clarification on the specific categories of the ModelNet dataset you used in your experiments. Additionally, any insights you could offer regarding the observed discrepancies in sample counts and accuracy would be greatly appreciated.