Open sdg4168 opened 11 months ago
I have the same issue. I created a system prompt and gave it to the model, but the model does not answer it properly. My system prompt is meant to generate color palettes for poster making, particularly for Indian Independence Day; each palette contains a background color plus heading 1 and heading 2 colors chosen for contrast.
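For context, Llama 2 chat models expect the system prompt wrapped in the `[INST] <<SYS>> ... <</SYS>>` template, and a malformed template is a common cause of off-target answers. A minimal sketch of building such a prompt string (the palette system prompt below is a hypothetical paraphrase of the one described, not the actual prompt):

```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in the Llama 2 chat template."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# Hypothetical system prompt for the palette use case described above.
system = (
    "You are a color-palette generator for posters. "
    "Return a background color, a heading 1 color, and a heading 2 color "
    "as hex codes, chosen for good contrast."
)
prompt = build_llama2_prompt(system, "Generate a palette for Indian Independence Day.")
print(prompt)
```

The string returned here is what should be tokenized and passed to the model; if the `<<SYS>>` block is missing or misplaced, the chat model tends to ignore the system instructions.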
Hi, I am new to this open source project. How does contributing here work? Are there people from Meta (members) who help with questions when contributors get blocked while trying to solve something?
So far I haven't seen members in the open and closed issues I browsed. Also, what are the expectations? What is the protocol in this open source project in terms of the help you can expect from members? Are contributors expected to do most of the work themselves, with a small hint here and there? Or is it lots of contributors working together and helping each other out, with no members involved?
@sdg4168 @Vatsal1106Virani Sorry if this is a stupid question; I am a beginner GenAI enthusiast. How are you using this model? What are you running it on? Google Colab?
Follow the instructions from the README.md file at the link below: https://github.com/facebookresearch/llama
Hi Vatsal, what is the prompt? What is the answer you get back?
This information will help solve the problem, thanks.
@Jaswir Hi Jaswir, I am using a T4 GPU in Google Colab. This is the free tier; you can expect better performance if you use the paid tier.
You can learn through the course below:
https://courses.nvidia.com/courses/course-v1:DLI+S-FX-12+V1/
Wow thanks!
@Vatsal1106Virani @sdg4168
Maybe you should ask your question here: https://github.com/run-llama/llama_index. That repo seems to be specifically focused on LlamaIndex.
Hi @Jaswir ,
You participated in the 48-hour Custom GPTs Hackathon. How was the overall experience?
@Vatsal1106Virani
Very good. I met someone I teamed up well with and learned a lot about ChatGPT plugins; it also helped me get a ChatGPT 4 subscription and bypass the waiting list. I feel my AI knowledge increasing with each participation, and I got a better understanding of fine-tuning GPTs from it. It's also an accelerator program for startups, so if you're interested in setting up a winning startup I think it's valuable too. I learned a lot about what to consider when creating a startup, like TAM, SAM, and competitor analysis.
I'll be joining this one too: https://lablab.ai/event/yi-llms-24-hours-hackathon?enroll=true and I invite you to join as well. But this one is even better if you can make it: https://lablab.ai/event/truera-challenge-build-llm-applications You'll get free Google Cloud credits and more!
Did you follow that course? https://courses.nvidia.com/courses/course-v1:DLI+S-FX-12+V1/ Is it worth it?
Yes, it's very helpful.
@sdg4168 We have some examples for LlamaIndex and RAG in the Llama recipes repo here; can you take a look and let us know there?
@Vatsal1106Virani @WuhanMonkey
I am confused about TheBloke models. What's the difference between https://huggingface.co/TheBloke/LLaMa-7B-GGML
and https://huggingface.co/meta-llama/Llama-2-7b-chat-hf? I am exploring the Llama open source project to contribute. If someone opens an issue about Llama-2-7b-chat-hf and I use TheBloke's converted version, will it produce the same errors as the original model? Can I use it to reproduce errors and work out solutions? That is, if a solution works for TheBloke's model, will it also work for meta-llama's original model?
Let me know the proper place to ask this if not here.
If your question is not related to this issue, then this is not the proper place to ask it.
We also want to keep this repo focused on issues with the original Llama 2 models. Your type of question is better asked in the HF forum.
[ Hi, I'm studying Llama 2. I'm trying to create a chatbot using Llama open source, and my goal is to receive accurate answers when asking about embedded data. A query engine is built by embedding external data in a RAG system created through LlamaIndex. I'm also attempting prompt engineering using few-shot prompting, CoT (Chain of Thought), and self-consistency. ]
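Of the prompting techniques mentioned above, self-consistency is the simplest to sketch: sample several answers at nonzero temperature and keep the majority vote. Below is a minimal illustration with a stub in place of a real model call (`sample_fn` and the stub answers are hypothetical stand-ins, not the actual setup):

```python
from collections import Counter

def self_consistency(sample_fn, question: str, n_samples: int = 5) -> str:
    """Self-consistency: sample several answers, return the majority vote.

    `sample_fn` is a stand-in for one model call (e.g. Llama 2 with
    temperature > 0) that returns a single answer string per invocation.
    """
    answers = [sample_fn(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Stub sampler simulating a model that is usually, but not always, right.
_samples = iter(["8 logs", "8 logs", "5 logs", "8 logs", "8 logs"])
answer = self_consistency(lambda q: next(_samples), "How many logs on 2023-10-19?")
print(answer)  # majority answer: "8 logs"
```

With a real model, `sample_fn` would issue one generation per call; the vote then smooths over occasional wrong samples.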
[ Here's the problem. If I insert an example of a date in the prompt and ask a question about a single date (e.g. 2023-10-19 14:48:29), the model answers well. However, if I insert several examples in the prompt and ask about a date range (e.g. 2023-10-19 14:40:00 ~ 2023-10-19 15:00:00), Llama gives an incorrect answer. ]
When Llama provides accurate information, the prompt is as follows: ↓↓↓↓
This is the answer given by Llama 2: ↓↓↓↓
It's a good example; that is actually correct information about the embedded data.
When Llama provides incorrect information, the prompt is as follows: ↓↓↓↓
This is the answer given by Llama 2: ↓↓↓↓
It's a bad example; that is actually incorrect information about the embedded data.
[ To conclude, I wonder if Llama 2 is simply not good at recognizing dates. I embedded data containing dates and showed examples in the prompt, but it doesn't seem to understand date ranges, only single dates. For example, when I ask for all the logs that occurred on October 19th, if 8 logs actually occurred, I would like it to answer with all 8. ]
What do I need?
I use this model: meta-llama/Llama-2-7b-chat-hf.
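Since chat models are generally unreliable at date-range arithmetic, one common workaround is to filter the logs deterministically in code and let the model only summarize the already-filtered subset. A minimal sketch, assuming logs are (timestamp, message) tuples in the `%Y-%m-%d %H:%M:%S` format shown in the question (the log entries below are invented examples):

```python
from datetime import datetime

def logs_in_range(logs, start: str, end: str):
    """Return every log whose timestamp falls inside [start, end].

    `logs` is assumed to be a list of (timestamp_string, message) tuples
    using the "%Y-%m-%d %H:%M:%S" format from the question above.
    """
    fmt = "%Y-%m-%d %H:%M:%S"
    lo, hi = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    return [(ts, msg) for ts, msg in logs
            if lo <= datetime.strptime(ts, fmt) <= hi]

logs = [
    ("2023-10-19 14:48:29", "login"),
    ("2023-10-19 14:55:01", "error"),
    ("2023-10-19 15:10:00", "logout"),  # outside the range queried below
]
hits = logs_in_range(logs, "2023-10-19 14:40:00", "2023-10-19 15:00:00")
print(len(hits))  # 2
```

The filtered `hits` list can then be inserted into the prompt, so the model never has to compare timestamps itself.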