irthomasthomas / undecidability


On the large effects of small matters. #14

Open · irthomasthomas opened this issue 1 year ago

irthomasthomas commented 1 year ago

Experiment 1

Prompt A: Token wise but _____ foolish.

Results for gpt-3.5-turbo-0613 temperature=1.

  1. knowledgeable
  2. knowledgeable
  3. Token wise but cash foolish.
  4. skillful
  5. Resourceful but time-wasting.
  6. Token wise but penny foolish.
  7. Token wise but ultimately foolish.
  8. Token wise but time foolish.
  9. mindful
  10. Token wise but morally/ethically foolish.
  11. "Token wise but cash foolish."
  12. help
  13. Token wise but ultimately foolish.
  14. token wise but time foolish
  15. Token wise but penny foolish.
  16. Wise but action-wise foolish.
  17. knowledgeable
  18. contextually
  19. Token wise, but time foolish.
  20. brains-wise but money foolish

Prompt B: Token wise but _____ foolish. #llm #gpt #chatgpt

Results for gpt-3.5-turbo-0613 temperature=1.

  1. Token wise but algorithmically foolish.
  2. Token wise but data foolish. #llm #gpt #chatgpt
  3. Token wise but data foolish. #llm #gpt #chatgpt
  4. Token wise but context foolish.
  5. Token wise but knowledge foolish. #llm #gpt #chatgpt
  6. Token wise but factually foolish. #llm #gpt #chatgpt
  7. Token wise but data foolish. #llm #gpt #chatgpt
  8. Token wise but content-wise foolish. #llm #gpt #chatgpt
  9. Token wise but intelligence foolish. #llm #gpt #chatgpt
  10. Token wise but contextually foolish. #llm #gpt #chatgpt
  11. Token wise but idea foolish. #llm #gpt #chatgpt
  12. Token wise but penny foolish. #llm #gpt #chatgpt
  13. intelligent
  14. Token wise but context foolish. #llm #gpt #chatgpt
  15. knowledgeable
  16. Token wise but wisdom foolish. #llm #gpt #chatgpt
  17. Token wise but logic foolish. #llm #gpt #chatgpt
  18. Token wise but algorithmically foolish. #llm #gpt #chatgpt
  19. Token wise but context foolish. #llm #gpt #chatgpt
  20. Token wise but contextually foolish. #llm #gpt #chatgpt

Experiment 2

Prompt A: wise but foolish

Results for gpt-3.5-turbo-0613 temperature=1.

  1. Wise but foolish.
  2. Sometimes wise but often foolish.
  3. sometimes wise but sometimes foolish.
  4. "Sometimes wise but often foolish."
  5. Wise but occasionally foolish.
  6. Cunning wise but imprudent foolish.
  7. Wise but ultimately foolish.
  8. Wise but foolish.
  9. Wise but sometimes foolish.
  10. Sometimes wise but sometimes foolish.
  11. Sage wise but naive foolish.
  12. "Often wise but occasionally foolish."
  13. Sometimes wise but sometimes foolish.
  14. Wise but foolish.
  15. Wise but foolish.
  16. Wise but foolish.
  17. Old wise, young foolish.
  18. Wise but foolish.
  19. wise but occasionally foolish.
  20. Prudently wise but occasionally foolish.

Prompt B: wise but foolish. #llm #gpt #chatgpt.

Results for gpt-3.5-turbo-0613 temperature=1.

  1. Wise but foolish.
  2. Wise but foolish.
  3. Wise but sometimes foolish.
  4. Sometimes wise, sometimes foolish.
  5. Wise but foolish.
  6. Wise but foolish.
  7. Wise but occasionally foolish.
  8. Wise but foolish.
  9. Sage wise but naïve foolish.
  10. Wise but foolish.
  11. Wise but occasionally foolish.
  12. Thought wise but action foolish.
  13. Wise but ignorant.
  14. Wise but foolish.
  15. Though you may be regarded as wise, you can also be deemed foolish.
  16. Wise but occasionally foolish.
  17. Some possible combinations could be:
    • Sometimes wise but sometimes foolish.
    • Occasionally wise but frequently foolish.
    • Intermittently wise but consistently foolish.
    • Partly wise but partly foolish.
    • Occasionally wise but largely foolish.
  18. Wise but foolish.
  19. Wise but foolish.
  20. Wise but foolish.

Summary:

Experiment 1: Prompt A (no hashtags) produced almost entirely generic completions (penny, cash, time, ultimately). Prompt B (with hashtags) filled the blank with an LLM-flavoured term in nearly every completion (data, context, algorithmically, knowledge), and most responses echoed the hashtags back verbatim.

Experiment 2: The hashtags made little visible difference; both prompts mostly returned the literal "Wise but foolish." or minor variations on it.

Overall, the responses from gpt-3.5-turbo-0613 at temperature 1 varied in specificity and creativity: some offered distinctive pairings of wise and foolish, while others stayed generic. The hashtags in the prompt appeared to steer the type of foolishness mentioned, strongly in Experiment 1 but not in Experiment 2.
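
For reproducibility, the sampling setup above (20 completions per prompt from gpt-3.5-turbo-0613 at temperature=1) could be rerun with a short script. This is a minimal sketch assuming the OpenAI Python client; the exact tooling used for these runs isn't stated, and the model snapshot may no longer be served.

```python
# Minimal sketch of the sampling setup above: 20 completions per prompt
# from gpt-3.5-turbo-0613 at temperature=1. Assumes the OpenAI Python
# client (openai>=1.0) with OPENAI_API_KEY set in the environment; the
# exact tooling used for the runs in this issue isn't stated.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "1A": "Token wise but _____ foolish.",
    "1B": "Token wise but _____ foolish. #llm #gpt #chatgpt",
    "2A": "wise but foolish",
    "2B": "wise but foolish. #llm #gpt #chatgpt.",
}

for label, prompt in PROMPTS.items():
    # n=20 asks the API for 20 independent samples in one request.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=1,
        n=20,
    )
    print(f"Prompt {label}: {prompt}")
    for i, choice in enumerate(response.choices, start=1):
        print(f"  {i}. {choice.message.content.strip()}")
```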

irthomasthomas commented 1 year ago

Quick notes.

  • 1A: "Token wise but _____ foolish." with no tags = very low relevance; no answers about LLM tokens.
  • 1B: "Token wise but _____ foolish. #llm #gpt #chatgpt" = a very high number of answers relevant to LLMs.
  • 2A: "wise but foolish" with no tags = as expected, no relevance to LLMs.
  • 2B: "wise but foolish. #llm #gpt #chatgpt." = surprisingly, including the tags did not improve the relevance of the answers.
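
To put a rough number on the relevance judgement in these notes, the completions could be scanned for LLM-related terms. A minimal sketch; the keyword list and helper name are my own assumptions, not part of the experiment.

```python
# Rough keyword tally to quantify "answers relevant to LLMs". The keyword
# list is a hypothetical stand-in for the manual judgement in the notes,
# not something used in the original experiment.
LLM_KEYWORDS = ("data", "context", "algorithm", "knowledge",
                "intelligence", "logic", "wisdom", "content")

def count_llm_relevant(completions: list[str]) -> int:
    """Count completions whose filled-in blank looks LLM-related."""
    return sum(
        any(kw in completion.lower() for kw in LLM_KEYWORDS)
        for completion in completions
    )

# A few of the 1B results from above:
sample_1b = [
    "Token wise but algorithmically foolish.",
    "Token wise but data foolish. #llm #gpt #chatgpt",
    "Token wise but penny foolish. #llm #gpt #chatgpt",
]
print(count_llm_relevant(sample_1b))  # -> 2 ("penny" does not match)
```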