paolorechia / learn-langchain · MIT License · 275 stars · 41 forks
Issues
#35 example error · abdoelsayed2016 · opened 1 year ago · 1 comment
#34 Embeddings example with text_generation_web_ui · ernestp · closed 1 year ago · 5 comments
#33 Colab · shamitb · opened 1 year ago · 2 comments
#32 Executor tests · paolorechia · closed 1 year ago · 0 comments
#31 Code it tool · paolorechia · closed 1 year ago · 0 comments
#30 Vicuna is pretty cool, I also build a project use it. · csunny · opened 1 year ago · 2 comments
#29 Running code generation on your own API Docs · mikolodz · opened 1 year ago · 5 comments
#28 How to get Agent Execution running, no output from server · unoriginalscreenname · opened 1 year ago · 13 comments
#27 Upload dataset · paolorechia · closed 1 year ago · 0 comments
#26 Scripts to generate synthetic task data · paolorechia · closed 1 year ago · 0 comments
#25 Error from `load_quant` · saikatbhattacharya · opened 1 year ago · 2 comments
#24 I found a way how to use these models directly with Text Generation WebUI · GMartin-dev · closed 1 year ago · 8 comments
#23 pip install -r requirements.txt · simulanics · closed 1 year ago · 1 comment
#22 Linux install, No module named 'gptq_for_llama' · unoriginalscreenname · closed 1 year ago · 9 comments
#21 Collect logs and extract into dataset · paolorechia · closed 1 year ago · 0 comments
#20 Update load_config.py · bigjeager · closed 1 year ago · 0 comments
#19 Starcoder · paolorechia · closed 1 year ago · 0 comments
#18 AutoGPT example · paolorechia · closed 1 year ago · 0 comments
#17 langchain autogpt examples without opeanai embeddings and faiss vector store · unoriginalscreenname · closed 1 year ago · 8 comments
#16 Model Loader · unoriginalscreenname · closed 1 year ago · 8 comments
#15 UnboundLocalError: local variable 'stop_list' referenced before assignment · alexandme · closed 1 year ago · 2 comments
#14 Support oogabooga web server · paolorechia · closed 1 year ago · 0 comments
#13 Why create your own server? · unoriginalscreenname · closed 1 year ago · 9 comments
#12 Install older gptq version · paolorechia · closed 1 year ago · 4 comments
#11 Code editor tool · paolorechia · closed 1 year ago · 0 comments
#10 Specifying a local model · unoriginalscreenname · closed 1 year ago · 12 comments
#9 Unexpected MMA layout version found"' failed (gptq_for_llama) · alexandme · closed 1 year ago · 11 comments
#8 vicuna_request_llm.py error stop + ["Observation:"] · unoriginalscreenname · closed 1 year ago · 2 comments
#7 error running 13b and 4bit · unoriginalscreenname · closed 1 year ago · 3 comments
#6 Load different models command on windows · unoriginalscreenname · closed 1 year ago · 2 comments
#5 Load 4 bit · paolorechia · closed 1 year ago · 0 comments
#4 Not pulling a model a model from a repo · d3ztr0yur3000 · closed 1 year ago · 11 comments
#3 Embedding · paolorechia · closed 1 year ago · 0 comments
#2 React lora · paolorechia · closed 1 year ago · 0 comments
#1 [help] Vicuna with local document · qkyyds666 · closed 1 year ago · 6 comments