replicate / replicate-python
Python client for Replicate
https://replicate.com
Apache License 2.0 · 744 stars · 212 forks
Issues (sorted newest first)

#316 · Skip streaming integration tests if `REPLICATE_API_TOKEN` isn't set · mattt · closed 3 months ago · 0 comments
#315 · Update recommendations for passing file inputs to models · mattt · closed 3 months ago · 0 comments
#314 · Include specific base64 mention on file input · GothReigen · closed 3 months ago · 1 comment
#313 · Suggestion: Allow Passing API Token as a Parameter in Replicate Python Client · Mahad-lab · closed 3 months ago · 1 comment
#312 · Verifying webhooks · aaronn · closed 3 months ago · 2 comments
#311 · Populate training destinations from output · mattt · closed 3 months ago · 0 comments
#310 · Poll valid PredictionParams · Wheest · closed 3 months ago · 2 comments
#309 · asynchronously close of response · alex-does-stuff · closed 3 months ago · 2 comments
#308 · Training failed. Failed to create trained image after successful training run. · khenzo · closed 3 months ago · 2 comments
#307 · model container failed to boot and complete setup within 600 seconds · christopher5106 · closed 3 months ago · 10 comments
#306 · Getting frequent replicate.exceptions.ModelError on small, text only queries using python API, llama-3-70b-instruct model · gthaker · opened 4 months ago · 3 comments
#305 · `meta/meta-llama-3-70b` ignores `max_tokens` · johny-b · opened 4 months ago · 0 comments
#304 · Document how to set a custom token · zeke · closed 3 months ago · 1 comment
#303 · add a Pull Request template · zeke · closed 3 months ago · 1 comment
#302 · Improve support for `models.get` and `models.delete` endpoints · mattt · closed 3 months ago · 0 comments
#301 · llama-2-70b-chat Not Working · yk803 · closed 4 months ago · 1 comment
#300 · Curly brace ({}) in prompt · wernerulbts · closed 2 months ago · 6 comments
#299 · Ability to process in Batches · charliemday · closed 2 weeks ago · 5 comments
#298 · setup timeout without meaningful logs · afpro · closed 5 months ago · 4 comments
#297 · Can you upload local files to create training with sdxl? · Vadimkomis · closed 5 months ago · 3 comments
#296 · FOCUS-API: Send back image URLS before the generation is complete · fmattera · opened 5 months ago · 0 comments
#295 · Use Bearer authorization scheme · mattt · closed 5 months ago · 0 comments
#294 · Replicate Streamlit Langchain Streaming + Vector DB Assertion Error · OlexiyPukhov · closed 5 months ago · 1 comment
#293 · Getting destination as None. When training the model · kartikwar · closed 3 months ago · 3 comments
#292 · Update readme to llama 3 · bfirsh · closed 5 months ago · 0 comments
#291 · Generate API reference · deepfates · closed 3 months ago · 0 comments
#290 · Support `predictions.create` with `model`, `version`, or `deployment` parameters · mattt · closed 5 months ago · 0 comments
#289 · add proxy · xmduhan · closed 2 weeks ago · 4 comments
#288 · Fix initialization of stream decoder · mattt · closed 5 months ago · 0 comments
#287 · LLama3 streaming repeats the previous request's first token. · mikutsky · closed 5 months ago · 8 comments
#286 · async example, and link documentation · cbh123 · closed 3 months ago · 1 comment
#285 · Fix missing word in README.md · DuskSwan · closed 3 months ago · 1 comment
#284 · Use rye for package management · mattt · closed 3 months ago · 0 comments
#283 · Replace pip and pip-tools with uv · mattt · closed 5 months ago · 1 comment
#282 · Using API to Enable/Disable Deployments in Scripts · talhen123 · closed 5 months ago · 1 comment
#281 · How to deploy images to my own S3 server? · unsanny · opened 6 months ago · 0 comments
#280 · error when running another version on the server replicate.com via a python script · kinofq · closed 2 months ago · 1 comment
#279 · Include `stream=True` in prediction stream snippet · nateraw · closed 6 months ago · 0 comments
#277 · replicate.stream() gives blank spaces at the beginning of output stream · yosuaw · closed 5 months ago · 4 comments
#276 · Token usage tracking · pavansai26 · opened 6 months ago · 0 comments
#273 · newww · isabelyork15 · closed 6 months ago · 0 comments
#272 · Setting timeouts in for replicate.run() · darhsu · closed 2 weeks ago · 3 comments
#271 · Fix `Deployment` model definition · mattt · closed 6 months ago · 0 comments
#270 · replicate.deployments.get fails · alxiang · closed 6 months ago · 1 comment
#269 · Improve ergonomics of streaming predictions · mattt · closed 6 months ago · 0 comments
#268 · Apply custom headers passed to client constructor · mattt · closed 6 months ago · 0 comments
#267 · Fix linter warnings in `exceptions.py` · mattt · closed 6 months ago · 0 comments
#266 · Update project for ruff 0.3.3 · mattt · closed 6 months ago · 0 comments
#265 · documentation: error handling · RichardNeill · closed 2 months ago · 1 comment
#264 · `meta/llama-2-70b` maximum input size (1024) differs from the LLaMA-2 maximum context size (4096 tokens) · jdkanu · opened 6 months ago · 0 comments