AmitMY opened 1 year ago
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
---|---|---|---
src/app/pages/translate/pose-viewers/human-pose-viewer/human-pose-viewer.component.ts | 0 | 8 | 0.0%
Total: | 0 | 8 | 0.0%
Files with Coverage Reduction | New Missed Lines | %
---|---|---
src/app/core/services/assets/assets.service.ts | 1 | 39.66%
src/app/pages/translate/pose-viewers/human-pose-viewer/human-pose-viewer.component.ts | 1 | 16.67%
Total: | 2 |
Totals |
---|---
Change from base Build 4618504967: | 0.6%
Covered Lines: | 1115
Relevant Lines: | 2003
Visit the preview URL for this PR (updated for commit 5d9fe2e):
https://translate-sign-mt--pr83-pix2pix-batching-q8o1qeuz.web.app
(expires Fri, 14 Apr 2023 20:17:00 GMT)
🔥 via Firebase Hosting GitHub Action 🌎
Sign: 739446cfe7a349700ebd347d2a39e3b90ba24425
Fixes
Relates to #58
Description
Since GPUs are highly parallelizable, could we perform inference on multiple frames at once instead of one-by-one?
Yes, we can! However, it does not improve performance much if the GPU is weak.
A benchmark on a MacBook Pro (M1 Max) shows no substantial improvement from batching: inference time scales roughly linearly with batch size.
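For illustration, a minimal sketch of the batching idea in TypeScript. The names here (`Frame`, `toBatches`, `runModel`, `inferAll`) are hypothetical and not taken from this PR; `runModel` is a placeholder for the actual pix2pix inference call.

```typescript
// Hypothetical sketch: group frames into batches so the model is invoked
// once per batch instead of once per frame.

type Frame = Float32Array;

// Split the frame sequence into batches of at most `batchSize` frames.
function toBatches(frames: Frame[], batchSize: number): Frame[][] {
  const batches: Frame[][] = [];
  for (let i = 0; i < frames.length; i += batchSize) {
    batches.push(frames.slice(i, i + batchSize));
  }
  return batches;
}

// Placeholder for a batched model call. Real code would run the pix2pix
// model on the whole batch in one GPU dispatch.
async function runModel(batch: Frame[]): Promise<Frame[]> {
  return batch;
}

// Run inference over all frames, batch by batch. On a strong GPU the
// per-invocation overhead is amortized across the batch; on a weaker GPU
// total time still scales roughly linearly with the number of frames,
// consistent with the benchmark above.
async function inferAll(frames: Frame[], batchSize: number): Promise<Frame[]> {
  const out: Frame[] = [];
  for (const batch of toBatches(frames, batchSize)) {
    out.push(...(await runModel(batch)));
  }
  return out;
}
```

The batching itself is just chunking; whether it pays off depends entirely on how much fixed per-dispatch overhead the GPU backend has.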