Open Moran232 opened 1 year ago
Why does the environment I installed hit bugs at compile time? Has anyone run into this?
Compilation passed for me, but I couldn't run the example they provide. This library doesn't work, don't waste time on it.
AssertionError: CUDA_HOME does not exist, unable to compile CUDA op(s)
Has anyone hit the same problem? I installed pytorch==2.1.0 + cuda11, yet it still says it can't find CUDA...
I was running the demo from Zhihu:
```python
import torch
from lightseq.training.ops.pytorch.transformer_encoder_layer import LSTransformerEncoderLayer


def train(model, inputs, masks):
    # Move data and model to the first GPU
    inputs = inputs.to(device="cuda:0")
    masks = masks.to(device="cuda:0")
    model.to(device="cuda:0")
    model.train()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(1000):
        opt.zero_grad()
        outputs = model(inputs, masks)
        loss = torch.square(outputs).mean()
        loss.backward()
        opt.step()
        if epoch % 200 == 0:
            print("epoch {:>3d}: loss = {:>5.3f}".format(epoch, loss.item()))


if __name__ == "__main__":
    # Define the LightSeq encoder layer config
    config = LSTransformerEncoderLayer.get_config(
        max_batch_tokens=4096,
        max_seq_len=256,
        hidden_size=1024,
        intermediate_size=4096,
        nhead=16,
        attn_prob_dropout_ratio=0.1,
        activation_dropout_ratio=0.1,
        hidden_dropout_ratio=0.1,
        pre_layer_norm=True,
        fp16=False,
        local_rank=0,
    )

    # Generate random inputs
    bsz, sl = 10, 80
    inputs = torch.randn(bsz, sl, config.hidden_size)
    masks = torch.zeros(bsz, sl)

    # Build the LightSeq encoder layer and train it
    model = LSTransformerEncoderLayer(config)
    train(model, inputs, masks)
```
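The `CUDA_HOME does not exist` assertion reported above is raised when PyTorch's extension builder cannot find a CUDA toolkit on disk; a pip-installed PyTorch wheel typically bundles the CUDA runtime but not `nvcc`, so compiling LightSeq's custom ops still requires a full toolkit and a `CUDA_HOME` pointing at it. Below is a minimal diagnostic sketch (not from the issue); the `/usr/local/cuda` path is an assumption and should be adjusted to your installation:

```python
# Minimal environment check for the "CUDA_HOME does not exist" assertion.
# Assumption: the CUDA toolkit (with nvcc) lives under /usr/local/cuda; adjust as needed.
import os
import shutil

import torch
from torch.utils.cpp_extension import CUDA_HOME

print("torch version      :", torch.__version__)
print("built with CUDA    :", torch.version.cuda)
print("GPU visible        :", torch.cuda.is_available())
print("CUDA_HOME (torch)  :", CUDA_HOME)                    # None => extensions cannot be compiled
print("CUDA_HOME (env)    :", os.environ.get("CUDA_HOME"))
print("nvcc on PATH       :", shutil.which("nvcc"))

# If CUDA_HOME is None, install the full CUDA toolkit and export its path
# before installing/running lightseq, e.g. in the shell:
#   export CUDA_HOME=/usr/local/cuda
#   export PATH=$CUDA_HOME/bin:$PATH
```

If `torch.utils.cpp_extension.CUDA_HOME` prints a valid path but the error persists, the toolkit version may not match the CUDA version the PyTorch wheel was built with.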
https://github.com/bytedance/lightseq/issues/430