bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
https://petals.dev
MIT License · 9.11k stars · 512 forks
Issues
#514 · Can Petals run a quantized (GPTQ) LLaMA model? · sa1utyeggs · closed · 1 year ago · 3 comments
#513 · Optimize LLaMA for inference · mryab · closed · 10 months ago · 2 comments
#512 · Add YaLM-100B · Aspect004 · closed · 1 year ago · 1 comment
#511 · [possible bug] Failed to connect to bootstrap peers when using Docker image on TrueNAS SCALE · TomLBZ · open · 1 year ago · 4 comments
#510 · Store (start_block, end_block) in each DHT record for reliability · borzunov · closed · 1 year ago · 3 comments
#509 · Petals server network ports remain open after Python process shutdown · redcap3000 · open · 1 year ago · 1 comment
#508 · No target modules when attempting LoRA (peft) training · kallewoof · closed · 1 year ago · 1 comment
#507 · Extend ServerInfo with (start_block, end_block) · borzunov · closed · 1 year ago · 1 comment
#506 · Add loading LoRA adapters from clients' requests · artek0chumak · open · 1 year ago · 0 comments
#505 · Remove smaller limit for legacy bfloat16 serialization · borzunov · open · 1 year ago · 0 comments
#504 · Delete duplicate tests · justheuristic · closed · 1 year ago · 1 comment
#503 · Issue with beam search decoding · Vincent-Stragier · closed · 11 months ago · 2 comments
#502 · Bump version to 2.2.0 · borzunov · closed · 1 year ago · 0 comments
#501 · Fix prompt tuning after #464 · borzunov · closed · 1 year ago · 0 comments
#500 · Optimize the Falcon block for inference · mryab · closed · 1 year ago · 1 comment
#499 · Add Falcon support · borzunov · closed · 1 year ago · 1 comment
#498 · Llama: Merge query/key/value projection layers · mryab · open · 1 year ago · 0 comments
#497 · Force use_cache=True in config only · borzunov · closed · 1 year ago · 0 comments
#496 · Force use_cache=True · borzunov · closed · 1 year ago · 0 comments
#495 · More powerful session API · Mathnerd314 · open · 1 year ago · 1 comment
#494 · Incentive system based on Lightning · earonesty · open · 1 year ago · 1 comment
#493 · Mitigate unnecessary swarm rebalancing · iateadonut · open · 1 year ago · 0 comments
#492 · Need help with text generation adapter fine-tuning · Miralumix · closed · 1 year ago · 0 comments
#491 · Create model index in DHT · borzunov · closed · 1 year ago · 0 comments
#490 · [don't merge] Test with hivemind@dht-fork-process branch · borzunov · closed · 1 year ago · 1 comment
#489 · Replace dots in repo names when building DHT prefixes · borzunov · closed · 1 year ago · 0 comments
#488 · Can't install on Windows · ParisNeo · closed · 1 year ago · 9 comments
#487 · Fix race condition in MemoryCache · borzunov · closed · 1 year ago · 0 comments
#486 · Wait for DHT storing state OFFLINE on shutdown · borzunov · closed · 1 year ago · 0 comments
#485 · Fix `.generate(input_ids=...)` · borzunov · closed · 1 year ago · 0 comments
#484 · Remove no-op process in PrioritizedTaskPool · borzunov · closed · 1 year ago · 0 comments
#483 · How can we make this work long term? · physiii · open · 1 year ago · 2 comments
#482 · Refactor README · borzunov · closed · 1 year ago · 0 comments
#481 · `model.generate(input_ids=...)` support · borzunov · closed · 1 year ago · 1 comment
#480 · Fix requiring transformers>=4.32.0 · borzunov · closed · 1 year ago · 0 comments
#479 · Require transformers>=4.32.0 · borzunov · closed · 1 year ago · 0 comments
#478 · Don't install cpufeature on non-x86_64 machines · borzunov · closed · 1 year ago · 0 comments
#477 · Support macOS natively · borzunov · closed · 1 year ago · 0 comments
#476 · Hide excess key message · borzunov · closed · 1 year ago · 0 comments
#475 · Update peft to version 0.5.0 · artek0chumak · closed · 1 year ago · 0 comments
#474 · Bump version to 2.1.0 · borzunov · closed · 1 year ago · 0 comments
#473 · Support loading weights from Safetensors on server · borzunov · closed · 1 year ago · 0 comments
#472 · Change transformers version assert · justheuristic · closed · 1 year ago · 0 comments
#471 · Support transformers 4.32.x · justheuristic · closed · 1 year ago · 0 comments
#470 · Temporarily require peft<0.5.0, transformers<4.32.0 · justheuristic · closed · 1 year ago · 0 comments
#469 · Is Python the only option to run? · arthurwolf · closed · 1 year ago · 2 comments
#468 · Random error when starting Docker container again · CIB · open · 1 year ago · 0 comments
#467 · Forward arbitrary kwargs to remote blocks · justheuristic · open · 1 year ago · 3 comments
#466 · How to avoid this server failure? It seems to happen randomly after 1 hour of running a script. · ryanshrott · closed · 1 year ago · 1 comment
#465 · Out of memory on a client with 8 GB RAM · ryanshrott · closed · 1 year ago · 4 comments