amosproj / amos2024ss06-health-ai-framework

Ailixir is an application that utilises LLMs and custom user input to generate AI agent prototypes specialised in fields such as health, economics, and physics. The prototypes enable the user, an entrepreneur-developer, to compare the results produced by different LLMs.

Scrum spike: Research prompt engineering techniques #160

Open tubamos opened 6 months ago

tubamos commented 6 months ago

Domain

data pipeline optimisation

Description

The focus of this scrum spike is to help the team familiarize themselves with prompt engineering techniques that we will use to improve the interaction with the LLM(s). Popular techniques such as Chain-of-Thought (CoT) prompting and ReAct (which integrates reasoning with actionable prompts) should be explored, along with other techniques. The developers should engage in hands-on testing and rank their proposals in relation to our use case.

Example documentation: https://www.promptingguide.ai/techniques
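To make the comparison concrete, here is a minimal sketch of what the plain, CoT, and ReAct prompt styles could look like side by side. The `call_llm` function, the question text, and the prompt wording are all placeholders, not the framework's actual API.

```python
# Minimal sketch of plain vs. Chain-of-Thought vs. ReAct-style prompting.
# `call_llm` is a hypothetical stand-in for whichever LLM client we choose.

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its text response."""
    raise NotImplementedError("Wire this up to the chosen LLM backend.")

QUESTION = "A patient walks 3 km per day for 12 days. How far do they walk in total?"

# Plain prompt: the model answers directly.
plain_prompt = f"Question: {QUESTION}\nAnswer:"

# Chain-of-Thought prompt: ask the model to reason step by step first.
cot_prompt = (
    f"Question: {QUESTION}\n"
    "Let's think step by step, then state the final answer on a new line "
    "prefixed with 'Answer:'."
)

# ReAct-style prompt: interleave reasoning ("Thought") with tool use
# ("Action"/"Observation") so the model can ground its answer.
react_prompt = (
    "You can use the tool search(query) to look up facts.\n"
    "Use the format:\n"
    "Thought: <your reasoning>\n"
    "Action: search(<query>)\n"
    "Observation: <tool result>\n"
    "... (repeat as needed)\n"
    "Answer: <final answer>\n\n"
    f"Question: {QUESTION}"
)

if __name__ == "__main__":
    for name, prompt in [("plain", plain_prompt), ("cot", cot_prompt), ("react", react_prompt)]:
        print(f"--- {name} ---\n{prompt}\n")
```

Ranking the techniques for our use case would then amount to sending the same question through each prompt style and comparing answer quality and cost.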

User Story

Acceptance Criteria

preetvadaliya commented 5 months ago

@tubamos ,

Just an idea for the research tasks: could we create a notebook and submit it in the sprint deliverables, so that it is stored there and we don't need to remove or fix it in the next sprint?

tubamos commented 5 months ago

@preetvadaliya ,

Yes, let's do this. We actually discussed this option in the past but did not reach a conclusion, because we wanted to check what options exist in GitHub to make such tasks trackable for formal reasons.

Committing the documents to the project folder, as you suggest, has been one option. I am fine with that if the devs agree.

manikg08 commented 5 months ago

Different prompt engineering techniques have been added to the code and pushed to the main branch. The current problem with prompt engineering is that the scraped data comes in different formats: each scraper produces a different JSON structure, which makes chunking quite problematic. We need to create separate classes for chunking the different scraped data before loading it into the vector database (see the sketch below).
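One possible shape for those classes is a common base chunker plus one subclass per scraper. This is only a sketch under assumptions: the scraper names, field names ("title", "abstract", "transcript"), and chunking strategy are illustrative, not the actual schemas in the repo.

```python
# Sketch: per-scraper chunkers sharing a common interface.
from abc import ABC, abstractmethod


class BaseChunker(ABC):
    """Turn one scraped JSON record into text chunks for the vector database."""

    def __init__(self, chunk_size: int = 500):
        self.chunk_size = chunk_size

    @abstractmethod
    def extract_text(self, record: dict) -> str:
        """Pull the raw text out of a scraper-specific JSON record."""

    def chunk(self, record: dict) -> list[str]:
        # Naive fixed-size character chunking; a real implementation might
        # split on sentences or use overlap.
        text = self.extract_text(record)
        return [text[i:i + self.chunk_size] for i in range(0, len(text), self.chunk_size)]


class PubMedChunker(BaseChunker):
    # Assumed layout: {"title": ..., "abstract": ...}
    def extract_text(self, record: dict) -> str:
        return f"{record.get('title', '')}\n{record.get('abstract', '')}"


class PodcastChunker(BaseChunker):
    # Assumed layout: {"episode": ..., "transcript": ...}
    def extract_text(self, record: dict) -> str:
        return record.get("transcript", "")


if __name__ == "__main__":
    sample = {"title": "Sleep and health", "abstract": "Example abstract text " * 50}
    chunks = PubMedChunker(chunk_size=200).chunk(sample)
    print(len(chunks), "chunks;", chunks[0][:60], "...")
```

Each scraper would then only need to know which chunker to pass its records through before the embeddings are written to the vector database.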

tubamos commented 5 months ago

@manikg08 Let us discuss this during the sprint meeting today.