ijyliu / anlp23-project
An empirical study of the costs and practicalities of prompt engineering techniques on standard and novel benchmarks
0 stars · 0 forks
Issues
All issues were opened by ijyliu and are closed.

| # | Title | State | Closed | Comments |
|---|-------|-------|--------|----------|
| #34 | GPT-2 and human writers often fail to maintain prompt complexity when producing output | closed | 10 months ago | 1 |
| #33 | Few-shot prompting | closed | 10 months ago | 1 |
| #32 | Complexity Based Prompting | closed | 10 months ago | 0 |
| #31 | Amount of irrelevant text generated | closed | 10 months ago | 0 |
| #30 | Can we use simplification, summarize, target reading level instructions in prompts | closed | 10 months ago | 0 |
| #29 | Mention of context window and token consumption concerns | closed | 10 months ago | 0 |
| #28 | Similar Paper Concerning Summarization | closed | 10 months ago | 0 |
| #27 | Note on Generalizability/External Validity | closed | 10 months ago | 0 |
| #26 | Add hypothesis testing for all metrics | closed | 10 months ago | 11 |
| #25 | Figure out complexity measures | closed | 11 months ago | 0 |
| #24 | The Cost of Prompt Engineering | closed | 11 months ago | 0 |
| #23 | Does GPT-4 "think" in other languages? | closed | 11 months ago | 2 |
| #22 | Chain of Verification Prompting | closed | 11 months ago | 1 |
| #21 | Data Augmentation | closed | 11 months ago | 1 |
| #20 | How to pick examples for few shot learning | closed | 11 months ago | 2 |
| #19 | Ensemble prompting | closed | 11 months ago | 0 |
| #18 | Prompt Tuning | closed | 11 months ago | 0 |
| #17 | Better models do even worse with commonly held misconceptions | closed | 11 months ago | 1 |
| #16 | Evaluation of LLM ability to check and fix inserted code on graphics for ex | closed | 11 months ago | 0 |
| #15 | Fiction/Stories and Action tasks for GPT | closed | 11 months ago | 1 |
| #14 | OpenAI Evals | closed | 11 months ago | 1 |
| #13 | LLM Generated Data | closed | 11 months ago | 0 |
| #12 | Getting LLMs to produce clarifying questions or operate on assumptions and explain them | closed | 11 months ago | 0 |
| #11 | Specifying Allowable Sources | closed | 11 months ago | 0 |
| #10 | Evaluating logical consistency, not accuracy | closed | 11 months ago | 0 |
| #9 | Comparing LLM Metrics With Each Other and With Response Accuracy | closed | 11 months ago | 0 |
| #8 | What happens if we aggregate the results of many queries to an LLM? | closed | 11 months ago | 1 |
| #7 | More Capable LLMs Managing Less Capable LLMs | closed | 11 months ago | 1 |
| #6 | What parts of prompt engineering really work? | closed | 11 months ago | 1 |
| #5 | Emails as LLM Fine Tuning or Prompting Data | closed | 11 months ago | 1 |
| #4 | LLMs in place of other DS Methods | closed | 11 months ago | 0 |
| #3 | Automatic Choice of Language Model Temperatures | closed | 11 months ago | 1 |
| #2 | Use Text of GitHub Issues to Automatically Create Suggested FAQs | closed | 11 months ago | 0 |
| #1 | Top Positive and Negative Features of Businesses Based on Reviews | closed | 11 months ago | 0 |