princeton-vl / DROID-SLAM


how to process backend on multi-gpus? #42

Open liangyongshi opened 2 years ago

liangyongshi commented 2 years ago

how to process backend on multi-gpus?

liangyongshi commented 2 years ago

can the frontend and backend be processed respectively in two GPUS?
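Not an authoritative answer, but here is a minimal, generic PyTorch sketch of the pattern being asked about: one module standing in for the frontend on `cuda:0`, a second standing in for the backend on `cuda:1`, with shared state copied between devices. This is not the DROID-SLAM API; in the released code the frontend and backend share one network and one DepthVideo buffer on a single device, which is why splitting them across GPUs is non-trivial.

```python
# Generic two-GPU placement sketch (NOT DROID-SLAM code): frontend-like module on
# cuda:0, backend-like module on cuda:1, shared state copied between the devices.
import torch
import torch.nn as nn

frontend_net = nn.Linear(128, 128).to("cuda:0")   # stand-in for the frontend update net
backend_net = nn.Linear(128, 128).to("cuda:1")    # stand-in for a backend copy on GPU 1

# stand-in for the shared state (poses / depths / features) that both sides touch
shared_state = torch.randn(32, 128, device="cuda:0")

with torch.no_grad():
    # frontend step on GPU 0
    shared_state = frontend_net(shared_state)

    # copy the state the backend needs to GPU 1, run the backend there, copy results back
    state_gpu1 = shared_state.to("cuda:1", non_blocking=True)
    refined = backend_net(state_gpu1)
    shared_state = refined.to("cuda:0", non_blocking=True)
```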

liangyongshi commented 2 years ago

I run the frontend on 5 GPUs, and it reports an error:

ii, jj = torch.as_tensor(es, device=self.device).unbind(dim=-1)
ValueError: not enough values to unpack (expected 2, got 0)

How do I run the backend on multiple GPUs?
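For what it's worth, that line fails whenever the edge list `es` is empty: `unbind()` on a size-0 tensor returns zero values, so unpacking into `ii, jj` raises the ValueError. Below is a small self-contained reproduction with a hypothetical early-exit guard; where exactly such a guard belongs in the factor-graph code (and whether skipping is safe in the multi-GPU setup) is an assumption, not a confirmed fix.

```python
import torch

def unpack_edges(es, device="cpu"):
    # Reproduces the failing line from the traceback above: unbind() on a size-0
    # tensor yields no values, so "ii, jj = ..." raises ValueError when es is empty.
    es = list(es)
    if len(es) == 0:          # hypothetical guard: skip when no edges were proposed
        return None, None
    ii, jj = torch.as_tensor(es, device=device).unbind(dim=-1)
    return ii, jj

print(unpack_edges([(0, 1), (1, 2)]))   # (tensor([0, 1]), tensor([1, 2]))
print(unpack_edges([]))                 # (None, None) instead of the ValueError
```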

xhangHU commented 2 years ago

Hi, I also encountered this problem, did you solve it?

liangyongshi commented 2 years ago

no😅😅


billamiable commented 2 years ago

I'm having the same issue, can anyone help? Meanwhile, the current implementation only runs global BA once, just before system termination, so it is not real-time.
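To illustrate what I mean, here is a rough sketch of the termination path as I recall it from the released droid_slam/droid.py (exact statements may differ): the backend's global BA is only invoked from Droid.terminate(), after the image stream has ended, not concurrently with the frontend.

```python
# Sketch of Droid.terminate() from memory; exact code in the repo may differ.
# The point: DroidBackend is only called here, after tracking has finished.
def terminate(self, stream=None):
    """ run global BA, fill in non-keyframe poses, and return the trajectory """
    del self.frontend                  # free frontend state before global BA
    torch.cuda.empty_cache()
    self.backend(7)                    # global bundle adjustment passes
    torch.cuda.empty_cache()
    self.backend(12)
    camera_trajectory = self.traj_filler(stream)   # interpolate non-keyframe poses
    return camera_trajectory.inverse().data.cpu().numpy()
```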

buenos-dan commented 2 years ago

> I'm having the same issue, can anyone help? Meanwhile, the current implementation only runs global BA once, just before system termination, so it is not real-time.

I have the same question.

liangyongshi commented 2 years ago

[Translated from Chinese] This is an automated vacation reply from QQ Mail. Hello, your email has been received; I am unable to reply in person at the moment, please excuse me.

liangyongshi commented 1 year ago

[Translated from Chinese] This is an automated vacation reply from QQ Mail. Hello, your email has been received; I am unable to reply in person at the moment, please excuse me.