trigaten / The_Prompt_Report

https://trigaten.github.io/Prompt_Survey_Site/
MIT License

[TODO #1] due Saturday Oct 28th #2

trigaten closed this issue 1 year ago

trigaten commented 1 year ago
  1. ~~Send me~~ Put your GitHub username, email, and Hugging Face name in your short bio (see below).
  2. Put a short bio in #intros in Slack that describes your experience and what you hope to do on this project.
  3. Two options here (choose one):
     3a. Perform one of the two unsolved setup goals in https://github.com/trigaten/Prompt_Systematic_Review/issues/1
     3b. Read 5 papers in this file and label each as one or more of these categories. Upload your results here. Comment on this issue with the papers you plan to do before you start, so you don't overlap with others. No need to read each paper in full, just enough to figure out what it is about.
kkahadze commented 1 year ago

I will do the following:

solarwaffles commented 1 year ago

I will read and label the following papers:

hudssntao commented 1 year ago

I will do:

  1. "Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models", http://arxiv.org/abs/2106.13353v2
  2. "Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting", http://arxiv.org/abs/2310.08129v1
  3. "Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification", http://arxiv.org/abs/2303.07142v3
  4. "Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models", http://arxiv.org/abs/2310.15127v1
  5. "Addressing Compiler Errors: Stack Overflow or Large Language Models?", http://arxiv.org/abs/2307.10793v1
ashay-sriv-06 commented 1 year ago
  1. My Hugging Face username is ashay-sriv; email: ashays06@umd.edu.

  2. The papers I will be doing are as follows:

a) "Review of Large Vision Models and Visual Prompt Engineering", http://arxiv.org/abs/2307.00855v1

b) "User-friendly Image Editing with Minimal Text Input: Leveraging Captioning and Injection Techniques", http://arxiv.org/abs/2306.02717v1

c) "Structured Chain-of-Thought Prompting for Code Generation", http://arxiv.org/abs/2305.06599v3

d) "Optimizing Prompts for Text-to-Image Generation", http://arxiv.org/abs/2212.09611v1

e) "PEACE: Prompt Engineering Automation for CLIPSeg Enhancement in Aerial Robotics", http://arxiv.org/abs/2310.00085v1

f) "Optimizing Mobile-Edge AI-Generated Everything (AIGX) Services by Prompt Engineering: Fundamental, Framework, and Case Study", http://arxiv.org/abs/2309.01065v1

g) "QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition", http://arxiv.org/abs/2203.01543v2

sauravv15 commented 1 year ago

I'm doing:

1) "MetaPrompting: Learning to Learn Better Prompts", http://arxiv.org/abs/2209.11486v4

2) "A Case Study in Engineering a Conversational Programming Assistant's Persona", http://arxiv.org/abs/2301.10016v1

3) "Towards Zero-Shot and Few-Shot Table Question Answering using GPT-3", http://arxiv.org/abs/2210.17284v1

4) "Cheap-fake Detection with LLM using Prompt Engineering", http://arxiv.org/abs/2306.02776v1

5) "ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience", http://arxiv.org/abs/2307.01135v1

vahinpalle commented 1 year ago

huggingface: vahinpalle

The articles I will read and label are:

"Jailbreaker: Automated Jailbreak Across Multiple Large Language Model Chatbots", http://arxiv.org/abs/2307.08715v1

"S3: Social-network Simulation System with Large Language Model-Empowered Agents", http://arxiv.org/abs/2307.14984v2

"Evaluating ChatGPT text-mining of clinical records for obesity monitoring", http://arxiv.org/abs/2308.01666v1

"Beyond Prompting: Making Pre-trained Language Models Better Zero-shot Learners by Clustering Representations", http://arxiv.org/abs/2210.16637v2

"Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models", http://arxiv.org/abs/2310.07653v2

AayushGupta16 commented 1 year ago

huggingface: aayushgupta

The articles I'm working on are:

"Prompt Learning for Action Recognition", http://arxiv.org/abs/2305.12437v1

"Effective Structured Prompting by Meta-Learning and Representative Verbalizer", http://arxiv.org/abs/2306.00618v1

"Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting", http://arxiv.org/abs/2307.10573v2

"High-Fidelity Lake Extraction via Two-Stage Prompt Enhancement: Establishing a Novel Baseline and Benchmark", http://arxiv.org/abs/2308.08443v1

"Structured Prompt Tuning", http://arxiv.org/abs/2205.12309v1