VILA-Lab / ATLAS

A principled instruction benchmark on formulating effective queries and prompts for large language models (LLMs). Our paper: https://arxiv.org/abs/2312.16171
Apache License 2.0

Principle 1 says don't use "please", but the JSON examples are polite with the LLM #1

Closed mst272 closed 9 months ago

mst272 commented 9 months ago

Is Principle 1 a negative collection?

szq0214 commented 9 months ago

Hi @mst272, thank you for your interest in this work. It wouldn't be accurate to categorize P1 as negative, considering it does offer a marginal improvement. However, when compared to other principles, it certainly doesn't exhibit the same level of effectiveness, as shown in Figures 4 and 8.

szq0214 commented 9 months ago

> Is Principle 1 a negative collection?

Hi @mst272, initially the repository contained the instructions from before the principles were applied for P1, while the other instructions are from after the principles were applied. We are correcting this and will provide both versions. Sorry for the confusion.
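The before/after distinction for P1 can be sketched as a toy transformation. This is a minimal sketch for illustration only: the phrase list and function name are assumptions, not part of the ATLAS dataset or paper.

```python
import re

# Hypothetical illustration of Principle 1 ("no need to be polite with LLMs"):
# the "before" instruction contains courtesy filler; the "after" instruction
# drops it and gets straight to the point. The phrase list below is an
# assumption, not taken from the ATLAS repo.
POLITE_PHRASES = [
    r"\bplease\b",
    r"\bif you don't mind\b",
    r"\bthank you\b",
    r"\bkindly\b",
]

def apply_principle_1(instruction: str) -> str:
    """Strip courtesy filler phrases from an instruction."""
    out = instruction
    for pattern in POLITE_PHRASES:
        out = re.sub(pattern, "", out, flags=re.IGNORECASE)
    # Collapse whitespace left behind by the removed phrases.
    return re.sub(r"\s+", " ", out).strip()

before = "Please explain backpropagation."
after = apply_principle_1(before)
print(after)  # "explain backpropagation."
```

A paired JSON release, as described above, would then hold both the `before` and `after` form of each instruction so readers can compare them directly.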