Open HAOCHENYE opened 1 year ago
Introduction
Interested in participating deeply in OpenMMLab projects? Want to learn more about OpenMMLab's awesome tools without spending lots of time reading docs? The First OpenMMLab Codecamp has begun! We provide more than a hundred tasks across seventeen research directions for you to pick from. Whether you are a novice in AI or a senior developer, there are suitable tasks for you to participate in. We will provide quick responses and full guidance to help you complete those tasks smoothly and grow into a core contributor to OpenMMLab. We have also partnered with Beijing Super Cloud Center to provide computing power.
How to participate?
Select the task you are interested in and submit your registration here. We will inform you within three days whether you have been enrolled, and then you can formulate a task plan with your tutor and start development! Once your PR has passed preliminary review, you can apply for the next task or just wait for the award! More details: OpenMMLab Activity page.
Task description
As MMCV supports more and more downstream repositories, more and more operators are added to MMCV. However, not all operators are adapted to all hardware. For example, `softnms` has only a CPU implementation and no GPU implementation, while focal loss has only a GPU implementation and no CPU implementation. To make it easier for everyone to run MMCV on different devices, we invite everyone to participate in this operator implementation activity, from which you will learn:
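The per-device situation described above comes from MMCV registering each operator's implementations per backend and dispatching at call time. The following is a hypothetical Python sketch of that register-and-dispatch pattern, not MMCV's actual API; the names `OP_TABLE`, `register_impl`, and `dispatch` are invented for illustration.

```python
# Hypothetical sketch of a per-device operator registry. An op/device pair
# that nobody has implemented yet raises an error, which is exactly the
# gap this activity asks contributors to fill.

OP_TABLE = {}  # maps (op_name, device) -> implementation


def register_impl(op_name, device):
    """Decorator that registers one implementation for one device."""
    def wrap(fn):
        OP_TABLE[(op_name, device)] = fn
        return fn
    return wrap


def dispatch(op_name, device, *args, **kwargs):
    """Look up the implementation for this device, or fail loudly."""
    fn = OP_TABLE.get((op_name, device))
    if fn is None:
        raise NotImplementedError(f'Please implement {op_name} for {device}')
    return fn(*args, **kwargs)


@register_impl('softnms', 'cpu')
def softnms_cpu(boxes, scores):
    ...  # the real CPU kernel would go here
```

With this shape, adding a CUDA `softnms` is just registering a second function under `('softnms', 'cuda')`; callers never change.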
If you are interested, you can claim a task here
The full PR steps are described in the guide.
Template PR
#2365 provides templates for softnms, sigmoid_focal_loss, and softmax_focal_loss. It only finishes the registering, dispatching, and unit-test-updating steps; you need to implement the function itself, which currently just outputs "Please implement xxx".
Operators
softnms
- CPU Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/pytorch/cpu/nms.cpp#L59
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_nms.py#L41
- Technical Tags: C++; CUDA; Python; Detection

nms_match
- CPU Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/pytorch/cpu/nms.cpp#L168
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_nms.py#L110
- Technical Tags: C++; CUDA; Python; Detection

sigmoid_focal_loss
- CUDA Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/common/cuda/sigmoid_focal_loss_cuda_kernel.cuh#L12
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_focal_loss.py#L63
- Technical Tags: C++; CUDA; Python

softmax_focal_loss
- CUDA Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/common/cuda/softmax_focal_loss_cuda_kernel.cuh#L12
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_focal_loss.py#L42
- Technical Tags: C++; CUDA; Python

assign_score_withk
- CUDA Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/common/cuda/assign_score_withk_cuda_kernel.cuh#L20
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_assign_score_withk.py#L10
- Technical Tags: C++; CUDA; Python

bbox_overlaps
- CUDA Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/common/cuda/bbox_overlaps_cuda_kernel.cuh#L33
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_bbox.py#L9
- Technical Tags: C++; CUDA; Python; Detection

correlation
- CUDA Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/common/cuda/correlation_cuda.cuh#L36
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_correlation.py#L19
- Technical Tags: C++; CUDA; Python; Detection
ms_deform_attn
- CUDA Reference Implementation: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/mmcv/ops/csrc/common/cuda/ms_deform_attn_cuda_kernel.cuh#L18
- Unit test to be updated: https://github.com/open-mmlab/mmcv/blob/652b1bf207616608179398abdcf17276b4a0ab27/tests/test_ops/test_ms_deformable_attn.py#L107
- Technical Tags: C++; CUDA; Python; Detection

Sign up here: [application form](https://openmmlab.com/activity/codecamp/apply) :rocket: :rocket: :rocket:

By the way, we strongly encourage you to publish your experience on social media like Medium or Twitter with the tag "OpenMMLab Codecamp" to share it with more developers! :laughing: :laughing: :laughing:

Discussion group: [discord link](https://discord.gg/KuWMWVbCcD). Welcome to join the discussion below or on Discord.

Come take the challenge and become a contributor to OpenMMLab! :partying_face: :partying_face:
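For contributors picking the `softnms` task, it can help to prototype the algorithm in Python first and use it as an oracle when updating the unit test. The sketch below implements Gaussian soft-NMS (decay overlapping boxes' scores instead of discarding them) in plain NumPy. It is an illustrative reference, not the MMCV kernel; boxes are assumed to be in `(x1, y1, x2, y2)` format and the function names are invented here.

```python
import numpy as np


def pairwise_iou(box, boxes):
    """IoU of one (4,) box against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)


def soft_nms(boxes, scores, sigma=0.5, score_thr=0.05):
    """Gaussian soft-NMS: repeatedly keep the highest-scoring box and
    decay the scores of its neighbours by exp(-iou**2 / sigma)."""
    boxes = boxes.astype(np.float64)
    scores = scores.astype(np.float64).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        top = idxs[np.argmax(scores[idxs])]
        keep.append(int(top))
        idxs = idxs[idxs != top]
        if len(idxs) == 0:
            break
        scores[idxs] *= np.exp(-pairwise_iou(boxes[top], boxes[idxs]) ** 2 / sigma)
        idxs = idxs[scores[idxs] > score_thr]  # drop boxes decayed below threshold
    return keep
```

A unit test can then compare the C++/CUDA op's output against this reference on small hand-made box sets.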
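Similarly, for the `sigmoid_focal_loss` task, the standard focal loss formula FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), applied per class as a one-vs-rest sigmoid, can serve as a NumPy oracle for the CPU implementation. This is a hedged sketch of the textbook formula, not MMCV's exact kernel (whose reduction and alpha conventions should be checked against the CUDA reference above); the function name and signature here are invented.

```python
import numpy as np


def sigmoid_focal_loss(logits, labels, gamma=2.0, alpha=0.25):
    """Per-sample focal loss over C independent binary classifiers.

    logits: (N, C) raw scores; labels: (N,) integer class indices.
    Returns an (N,) array of per-sample losses (summed over classes).
    """
    p = 1.0 / (1.0 + np.exp(-logits))          # per-class sigmoid probability
    t = np.zeros_like(logits)                   # one-hot targets
    t[np.arange(len(labels)), labels] = 1.0
    pos = -alpha * t * (1 - p) ** gamma * np.log(p)          # positive-class term
    neg = -(1 - alpha) * (1 - t) * p ** gamma * np.log(1 - p)  # negative-class term
    return (pos + neg).sum(axis=1)
```

Sanity check for the test: with gamma=0 and alpha=0.5 the focal terms vanish and the loss reduces to half the binary cross-entropy, which makes hand-computed expected values easy.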