In Table 2 (experimental results on the DomainNet dataset with feature & label shift), the results for Zero-Shot CLIP, PromptFL, and PromptFL+FedProx look implausible and appear to be incorrect.
In your GitHub repository, and in both this work and your ICML 2024 paper "Harmonizing Generalization and Personalization in Federated Prompt Learning", you state that 10 classes are used for this task. Yet the reported accuracy of Zero-Shot CLIP on Clipart is 8.72±1.73, which is below the ~10% chance level for a 10-class problem.
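For context, a quick sanity check (assuming the 10 classes are roughly balanced, which I cannot verify from the paper alone):

```python
# Assumption: 10 roughly balanced classes, so random guessing gives ~10% accuracy.
num_classes = 10
chance_acc = 100.0 / num_classes   # chance-level accuracy in percent
reported_acc = 8.72                # Zero-Shot CLIP on Clipart, from Table 2

# Zero-shot CLIP should comfortably beat random guessing on DomainNet classes,
# but the reported number is actually *below* chance.
print(f"chance = {chance_acc:.2f}%, reported = {reported_acc:.2f}%")
print("below chance:", reported_acc < chance_acc)
```

If the class split differs from what the repository describes, this comparison would of course change.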
You are really a code genius.
![288151716297465_ pic](https://github.com/HongxiaLee/FedOTP/assets/170433208/35899a35-44c9-427e-8cce-1f9ca3f9f4c4)