tunib-ai / parallelformers
Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
https://tunib-ai.github.io/parallelformers
Apache License 2.0 · 779 stars · 61 forks
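For readers skimming the issue titles below (parallelization policies, multiprocessing errors, freeze_support()), here is a minimal sketch of how the library's parallelize() entry point is typically used; the checkpoint name, GPU count, and generation settings are illustrative assumptions, not the project's prescribed configuration.

from transformers import AutoModelForCausalLM, AutoTokenizer
from parallelformers import parallelize

def main():
    # Example checkpoint; any supported Hugging Face model is handled the same way.
    name = "EleutherAI/gpt-neo-2.7B"
    model = AutoModelForCausalLM.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)

    # Shard the model across 2 GPUs; the model should still be on CPU at this point.
    parallelize(model, num_gpus=2, fp16=True, verbose="detail")

    inputs = tokenizer("Parallelformers is", return_tensors="pt")
    outputs = model.generate(**inputs, num_beams=5, max_length=15)
    print(tokenizer.batch_decode(outputs)[0])

if __name__ == "__main__":
    # Guarding the entry point matters because parallelformers spawns worker
    # processes (cf. the freeze_support() issues #52 and #22 below).
    main()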
Issues
#55  Use this library for CNN networks like Unet (cporrasn, opened 5 months ago, 0 comments)
#54  Title: RuntimeError: Timed out initializing process group in store based barrier (hugocool, opened 1 year ago, 0 comments)
#53  Still in development? (codeananda, opened 1 year ago, 0 comments)
#52  freeze_support() (vinnitu, opened 1 year ago, 0 comments)
#51  Support for Falcon-7B and Falcon-40B models (mahdyshabeeb, opened 1 year ago, 0 comments)
#50  Support for LLaMA (IzzetYoung, opened 1 year ago, 3 comments)
#49  Add GPT Neox Policy (abhilash1910, opened 1 year ago, 2 comments)
#48  Cross-node inference (BDHU, opened 1 year ago, 4 comments)
#47  Do not check if an object is pickable (mkardas, opened 1 year ago, 1 comment)
#46  Speed up results serialization (mkardas, opened 1 year ago, 0 comments)
#45  Add Vision Encoder Decoder model to parallelformers (gagan3012, opened 1 year ago, 0 comments)
#44  RuntimeError: CUDA error: peer access is not supported between these two devices (Dorcoh4, opened 1 year ago, 1 comment)
#43  Bug with T511b inference (ZeyiLiao, opened 2 years ago, 0 comments)
#42  OSError: [Errno 9] Bad file descriptor (aws-stdun, opened 2 years ago, 1 comment)
#41  A bug with `n_fused` (JiayiFeng, opened 2 years ago, 4 comments)
#40  torch no_grad (zelcookie, opened 2 years ago, 1 comment)
#39  INT8 support (volkerha, opened 2 years ago, 0 comments)
#38  Support Codegen 12B (Tiiiger, opened 2 years ago, 0 comments)
#37  Bus error in parallelformers 1.2.7 for OPT model (sindhuvahinis, opened 2 years ago, 1 comment)
#36  [Feature Request] Add Bloom to the Auto Policy (airsplay, opened 2 years ago, 2 comments)
#35  Can you please add Question Answering models like LayoutLMv2ForQuestionAnswering (sujit420, opened 2 years ago, 0 comments)
#34  Can you please add support for gpt_neox (tahercoolguy, opened 2 years ago, 2 comments)
#33  Support for GPT2-XL (snoop2head, closed 2 years ago, 3 comments)
#32  RuntimeError: Cannot re-initialize CUDA in forked subprocess (cabal-daniel, closed 2 years ago, 12 comments)
#31  add Meta opt model policy (dongs0104, closed 2 years ago, 3 comments)
#30  Support for OPT (mrzjy, closed 2 years ago, 1 comment)
#29  Error using google/UL2 model (dnhkng, closed 2 years ago, 6 comments)
#28  RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' (samarthsarin, opened 2 years ago, 2 comments)
#27  GPT2 parallelism does not work on the Tesla K80 (0x7o, closed 2 years ago, 1 comment)
#26  EncoderDecoder support (d-miketa, opened 2 years ago, 0 comments)
#24  Recommended way for cleaning up? (creatorrr, opened 2 years ago, 0 comments)
#23  Issue running parallelformers test script in a VM (Mehrad0711, opened 2 years ago, 1 comment)
#22  freeze_support() (psinha30, closed 2 years ago, 2 comments)
#21  replace `torch.multiprocessing` with `multiprocess` (Oaklight, closed 2 years ago, 0 comments)
#20  AttributeError: Can't get attribute 'MegatronPolicy' on <module '__main__' (built-in)> (Oaklight, closed 2 years ago, 6 comments)
#19  GPU hang issue (jason9693, opened 2 years ago, 4 comments)
#18  How to load multiple models (Don9wanKim, closed 2 years ago, 6 comments)
#17  Quality issue when integrating with KoGPT3 (BangDaeng, closed 2 years ago, 13 comments)
#16  AssertionError: Model should be on CPU before parallelization. It is more memory-efficient. (juliensalinas, closed 2 years ago, 29 comments)
#15  GPT models hang on large token generation. Lower performance? (mallorbc, opened 2 years ago, 1 comment)
#14  How can I parallelize the MegatronBertModel? (kajyuuen, closed 2 years ago, 1 comment)
#12  How do I use this for zero shot classification tasks (subhamkhemka, closed 2 years ago, 1 comment)
#11  Integration Note with Huggingface Transformers & Microsoft DeepSpeed (hyunwoongko, closed 2 years ago, 1 comment)
#10  Add guides about the number of GPUs to the documentation (hyunwoongko, closed 2 years ago, 0 comments)
#9   complete downstream task test for albert (fightnyy, closed 3 years ago, 0 comments)
#8   complete downstream task test for albert (fightnyy, closed 3 years ago, 0 comments)
#7   [#4] Backward compatibility patch (hyunwoongko, closed 3 years ago, 0 comments)
#6   [#5] Fix bug about AlbertModel (hyunwoongko, closed 3 years ago, 0 comments)
#5   Bug about `AlbertModel` (hyunwoongko, closed 3 years ago, 0 comments)
#4   Support for GPT-J (andreamad8, closed 2 years ago, 11 comments)