yoyo-yun / PLoRA

[AAAI'24] The official PyTorch implementation of our AAAI 2024 paper: Personalized LoRA for Human-Centered Text Understanding

Question about paper (Table 2) #2

Open pullwall opened 2 months ago

pullwall commented 2 months ago

In Table 2, B-PLoRA denotes models optimized only on DB, while B-MAA (FS) denotes models first trained on DA and then adapted to DB with a few-shot learning strategy. After training, I would expect B-PLoRA to already know the user preferences in DB, whereas B-MAA (FS) has only seen a few of them. Why, then, does B-MAA (FS) outperform B-PLoRA? Does this mean it is better to train on the large amount of data in DA than to know some information about the DB users?