Vandivier / rect

An AI-powered tool for transforming social content into educational material!
https://rect-alpha.vercel.app
MIT License

Optimize model support #2

Open Vandivier opened 1 year ago

Vandivier commented 1 year ago

During the hackathon I just picked a model without knowing much about the ideal model fit. Think this through more carefully and change the model if needed.

Vandivier commented 1 year ago

This one seems easy to set up for the hackathon: https://youtu.be/ByV5w1ES38A

Want to add retrieval augmentation if possible; time-box it to one day.
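
For reference, a minimal sketch of the retrieval-augmentation idea, assuming the OpenAI Python SDK for embeddings and chat; the docs, question, prompt, and model names are placeholders, not the project's actual setup.

```python
# Minimal retrieval-augmentation sketch: embed a few docs, find the one most
# similar to the question, and stuff it into the prompt. All content is placeholder.
from openai import OpenAI
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "rect turns social media threads into structured lessons.",
    "Summaries are committed to the repo so users can reuse them.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)
question = "How does rect store its summaries?"
q_vector = embed([question])[0]

# Cosine similarity to pick the best-matching doc as context.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = docs[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```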

Vandivier commented 1 year ago

And this https://youtu.be/nVC9D9fRyNU

Vandivier commented 1 year ago

More retrieval augmentation: https://huggingface.co/spaces/deepset/retrieval-augmentation-svb/blob/main/app.py

Vandivier commented 1 year ago

Another one (Koala):

https://youtu.be/AZUTsp9Et-o

Vandivier commented 1 year ago

Low memory requirements (6 GB, IIRC): https://youtu.be/fGpXj4bl5LI

Vandivier commented 1 year ago

GPT4All v2

https://youtu.be/scEMax2r4ts

Vandivier commented 1 year ago

More models: https://wandb.ai/wandb_fc/LLM%20Best%20Practices/reports/Should-You-Purchase-an-LLM-or-Train-Your-Own---VmlldzozNjU5NjYy

Vandivier commented 1 year ago

For now, leverage ChatGPT's out-of-the-box token limit and use recursive, by-unit summarization; then commit the summarized outputs and add docs that instruct the user on how to use those outputs (include a bit about reflection).
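
A rough sketch of the recursive, by-unit summarization idea, assuming the OpenAI Python SDK; the prompt wording and batch size are placeholders.

```python
# Recursive, by-unit summarization to stay under the model's token limit:
# summarize each unit, then summarize batches of summaries until one remains.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Summarize concisely:\n\n{text}"}],
    )
    return resp.choices[0].message.content

def recursive_summarize(units: list[str], batch_size: int = 5) -> str:
    """Summarize each unit, then recursively summarize batches of summaries."""
    summaries = [summarize(u) for u in units]
    while len(summaries) > 1:
        batches = [summaries[i:i + batch_size] for i in range(0, len(summaries), batch_size)]
        summaries = [summarize("\n\n".join(b)) for b in batches]
    return summaries[0]

# The final summary would then be committed alongside docs on how to use it.
# print(recursive_summarize(["unit one text ...", "unit two text ..."]))
```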

Vandivier commented 1 year ago

Related: https://medium.com/geekculture/list-of-open-sourced-fine-tuned-large-language-models-llm-8d95a2e0dc76

https://www.zdnet.com/article/this-new-technology-could-blow-away-gpt-4-and-everything-like-it/

Vandivier commented 1 year ago

Related: https://twitter.com/simonw/status/1647620943840428032

We can use WebGPU on the M2 if the regular GPU path doesn't work out.

Vandivier commented 1 year ago

https://www.reddit.com/r/LocalLLaMA/comments/12vzjti/new_fully_open_source_model_h2ogpt_20b_based_on/

Vandivier commented 1 year ago

Fine-tune Dolly v2 for $30: https://www.tiktok.com/@rajistics/video/7222430618347490602

Vandivier commented 1 year ago

Few-shot / in-context learning is considered newer than fine-tuning (but is it more performant?)

https://www.tiktok.com/@rajistics/video/7226905183601708331

"Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning": https://openreview.net/forum?id=rBCvMG-JsPd

But what about a model I can't access, like GPT-4? PEFT-tuned Vicuna vs. ICL with GPT-4?
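
For comparison, a minimal sketch of the in-context-learning side using a hosted model we can't fine-tune (GPT-4 via the OpenAI Python SDK); the few-shot examples and system prompt are invented for illustration.

```python
# In-context (few-shot) learning: steer a hosted model with example pairs in the
# prompt instead of updating weights. Examples below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot_examples = [
    ("Thread: 5 tips for learning SQL fast ...", "Lesson: Core SQL concepts ..."),
    ("Thread: Why you should learn Git branching ...", "Lesson: Git branching basics ..."),
]

messages = [{"role": "system", "content": "Turn social posts into short educational lessons."}]
for post, lesson in few_shot_examples:
    messages.append({"role": "user", "content": post})
    messages.append({"role": "assistant", "content": lesson})
messages.append({"role": "user", "content": "Thread: How LLM context windows work ..."})

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```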

Vandivier commented 1 year ago

MPT-7B-StoryWriter-65k+ is literally made to write books ("ALiBi, MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens, and we have demonstrated generations as long as 84k tokens on a single node of A100-80GB GPUs.")

that's 60,000+ words (at roughly 0.75 English words per token)

that's 120+ pages (at roughly 500 words per page)

that's a book

https://github.com/mosaicml/llm-foundry

Vandivier commented 1 year ago

ChatGLM-6B

Vandivier commented 1 year ago

https://github.com/oobabooga/text-generation-webui/blob/main/models/config.yaml

Supported oobabooga (text-generation-webui) models

Vandivier commented 1 year ago

https://www.paperspace.com/pricing

Vandivier commented 1 year ago

A GeForce RTX 2060 or better is needed to run https://github.com/openai/triton (the optimized GPU kernel compiler that MPT builds on).

Vandivier commented 1 year ago

https://github.com/cocktailpeanut/dalai

A little lighter weight than oobabooga (maybe?)

Vandivier commented 1 year ago

WebGPU acceleration: https://github.com/mlc-ai/web-llm

4.8276 tokens/sec on my NVIDIA GeForce GTX 960 (CUDA compute capability 5.2, 4 GB of dedicated GPU RAM)

Vandivier commented 1 year ago

TODO: cloud dev with LangChain; try Paperspace + an A100. Can I use IPUs (via Paperspace)?

Vandivier commented 1 year ago

An M2 Apple Air (16 GB, 13.2) got 15 tokens/second.

Vandivier commented 1 year ago

The peft library: https://pypi.org/project/peft/

Fine-tuning can be done using the A100.

Related: https://twitter.com/Sumanth_077/status/1625774615753629696
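
A minimal sketch of what LoRA-style PEFT looks like with the peft library; the base model and hyperparameters are placeholders, not a vetted recipe.

```python
# LoRA fine-tuning setup with peft: wrap a base model with low-rank adapters so
# only a small fraction of parameters is trainable.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "tiiuae/falcon-7b"  # placeholder; older transformers versions may need trust_remote_code=True
tokenizer = AutoTokenizer.from_pretrained(base)  # used later to tokenize training data
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank adapter dimension
    lora_alpha=32,                        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["query_key_value"],   # Falcon's fused attention projection
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# ...then train with the usual transformers Trainer on the Ladderly/rect dataset.
```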

Vandivier commented 1 year ago

More big context windows

https://www.tiktok.com/t/ZTRKyRvhh/

upstartjohnvandivier commented 1 year ago

Claude now has a 100k context window and the rates LGTM: anthropic.com/product

Vandivier commented 1 year ago

https://www.tiktok.com/t/ZTRKbBTwf/

Better than chain-of-thought prompting

Vandivier commented 1 year ago

A new model claimed to be 99% as good as GPT-3.5, with low memory requirements; fine-tunable and open (commercially...?): https://www.youtube.com/watch?v=3PVg86bnKDg

Vandivier commented 1 year ago

https://dev.to/dhanushreddy29/deploy-hugging-face-models-on-serverless-gpu-47am

Vandivier commented 1 year ago

Falcon 40B is the new open winner and is commercially licensed.

Need to pin down its exact performance vs. GPT-3.5, plus the context and cost tradeoffs.

But it's very cool that it's open and I can fine-tune it on Ladderly info for a fully closed Ladderly-Chat. (rect can easily support selecting either model via an env var, so we really don't need to pick A or B; we can also add StoryWriter or another large-context option.)

https://huggingface.co/blog/falcon#fine-tuning-with-peft
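
A minimal sketch of the env-var model switch idea, written in Python for consistency with the other sketches; the RECT_MODEL variable name and the registry entries are hypothetical.

```python
# Hypothetical env-var-driven model selection so rect doesn't have to hard-pick
# a single model. Variable name and registry contents are made up for illustration.
import os

MODEL_REGISTRY = {
    "gpt-3.5-turbo": {"provider": "openai", "max_context": 4_096},
    "falcon-40b": {"provider": "self-hosted", "max_context": 2_048},
    "mpt-7b-storywriter": {"provider": "self-hosted", "max_context": 65_536},
}

def resolve_model() -> str:
    """Pick the model from the RECT_MODEL env var, falling back to a default."""
    requested = os.environ.get("RECT_MODEL", "gpt-3.5-turbo")
    if requested not in MODEL_REGISTRY:
        raise ValueError(f"Unknown model '{requested}'; options: {list(MODEL_REGISTRY)}")
    return requested

print(resolve_model())
```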

Vandivier commented 1 year ago

Claude 2 and Bing are on the table. Can we access Bing OCR programmatically?

Legacy OCR (non-LLM) could be fine too

https://youtu.be/anljthOQHhg

Vandivier commented 1 year ago

Fine-tuning engine: https://github.com/scaleapi/llm-engine. Also, we want Llama 2 right now as the best open approach; GPT-4 is still better among closed-source options, though I think we'd need to charge more for that.

Vandivier commented 1 year ago

https://youtu.be/z2QE12p3kMM

Llama 2 custom model fine-tuning

Vandivier commented 1 year ago

Fine-tuned Llama 2 vs. GPT-4 performance:

https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications

Vandivier commented 1 year ago

NVIDIA trains models, Apple silicon runs models.

Another awesome place to train models: https://vast.ai/pricing

With a 4090, I could lease out my GPU for about $0.40 an hour at the moment (no idea how fast that rate decays). This is not crazy money, though: at full utilization that works out to roughly $290/month ($0.40 × 24 × 30), not counting the energy bill.

Vandivier commented 1 year ago

Wait, I guess Vast pricing is per minute, not per hour, based on this blog: https://vast.ai/article/running-the-70B-LLama2-GPTQ

I'm super confused... if that's the case, though, it's an obvious buy: https://twitter.com/JohnVandivier/status/1693110093858931142

Vandivier commented 1 year ago

Did we mention LLaVA? (multimodal) https://stackshare.io/llava

Vandivier commented 1 year ago

IDEFICS is the largest open multimodal model; it understands images.

https://youtu.be/Uif25fPbeuQ?si=hW2h4GWZDfJqVNGo