We want to use computer vision algorithms for the following few tasks. They may require very different models/skills.
Determine if the battery is successfully grabbed by the crimper robot. This should be a relatively simple binary classification task.
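As a minimal sketch of this classifier, one option that avoids training a model entirely is frame differencing against a reference image of the empty gripper. The ROI coordinates and threshold below are hypothetical placeholders that would need calibration for the real crimper camera:

```python
import numpy as np

def battery_grabbed(frame: np.ndarray, empty_ref: np.ndarray,
                    roi=(100, 180, 120, 200), thresh=25.0) -> bool:
    """Classify grab success by comparing the gripper ROI of the current
    frame against an empty-gripper reference (both grayscale uint8).
    roi = (row0, row1, col0, col1) is a hypothetical crop around the jaws."""
    r0, r1, c0, c1 = roi
    diff = np.abs(frame[r0:r1, c0:c1].astype(np.float32)
                  - empty_ref[r0:r1, c0:c1].astype(np.float32))
    return float(diff.mean()) > thresh

# demo on synthetic frames
ref = np.full((240, 320), 50, dtype=np.uint8)
grabbed = ref.copy()
grabbed[120:160, 140:180] = 200   # bright battery inside the ROI
print(battery_grabbed(grabbed, ref))  # True
print(battery_grabbed(ref, ref))      # False
```

If lighting varies too much for a fixed threshold, the same ROI crop could instead feed a small trained CNN, but the differencing baseline is worth trying first.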
Determine the status of the trays. The model needs to know about the 8x8 grid and determine whether there is a battery component at every index. For example, it can return a 2D array taken[8][8], where taken[i][j] == 1 means there is a component at grid[i][j].
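Assuming the tray image can be cropped and rectified to exactly the 8x8 grid area (the crop and the intensity threshold below are assumptions), the taken[8][8] array can be produced by per-cell thresholding:

```python
import numpy as np

def tray_status(tray_img: np.ndarray, thresh=100.0) -> np.ndarray:
    """Return taken[8][8]; taken[i][j] == 1 means a component sits at
    grid[i][j]. tray_img: grayscale image cropped to the 8x8 tray area,
    with components brighter than the empty tray (an assumption)."""
    h, w = tray_img.shape
    ch, cw = h // 8, w // 8
    taken = np.zeros((8, 8), dtype=np.uint8)
    for i in range(8):
        for j in range(8):
            cell = tray_img[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
            taken[i, j] = 1 if cell.mean() > thresh else 0
    return taken

# synthetic tray: components at grid[0][0] and grid[3][5]
img = np.full((160, 160), 30, dtype=np.uint8)
img[0:20, 0:20] = 220
img[60:80, 100:120] = 220
status = tray_status(img)
print(status[0, 0], status[3, 5], status.sum())  # 1 1 2
```

A learned classifier per cell would be more robust to glare and shadows, but the grid decomposition stays the same either way.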
Help align the components when assembling them on the assembly post. This assumes the position error is bounded: the assembly robot can pick up every component, but the component may not be perfectly centered in the gripper. The well position error issue needs to be solved before this can happen.
We can use 2 cameras for this task. One camera is looking up and can see the bottom of the grabbed component. The second camera is the camera on top of the assembly robot, and it looks down at the assembly post to verify the component is at the center.
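For either camera, the core measurement is the same: the pixel offset between the component's centroid and the image center (or a calibrated target point). A sketch, assuming the component has already been segmented into a binary mask:

```python
import numpy as np

def center_offset_px(mask: np.ndarray):
    """Offset (dy, dx) in pixels between the component centroid and the
    image center. mask is a binary image of the component, from either
    the upward camera (bottom view of the grabbed part) or the wrist
    camera looking down at the assembly post."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no component visible
    cy, cx = ys.mean(), xs.mean()
    return cy - mask.shape[0] / 2, cx - mask.shape[1] / 2

mask = np.zeros((200, 200), dtype=np.uint8)
mask[90:110, 105:125] = 1   # blob slightly right of center
dy, dx = center_offset_px(mask)
print(round(dy, 1), round(dx, 1))
```

With a pixel-to-millimeter calibration for each camera, these offsets translate directly into the lateral correction the robot should apply before placing the component.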
Vision-based servoing. Currently, it is very tedious to obtain the precise location of each well, and the process is not robust against any errors: all the positions need to be extremely accurate to ensure the success of battery assembly. If we can apply vision-based servoing, the well positions only need to be relatively close to the exact values, and more error can be tolerated. However, I am not sure whether the camera on top of the Meca500 can achieve this task, and it requires good algorithms.
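The basic loop would be image-based visual servoing: detect the well center in the wrist-camera image, compute its pixel error from the target image point, and command a small proportional correction until the error converges. The gain and pixel-to-millimeter constant below are hypothetical and would come from camera calibration:

```python
import numpy as np

def servo_step(px_error, gain=0.5, px_to_mm=0.05):
    """One iteration of image-based visual servoing: convert the pixel
    error between the detected well center and the target image point
    into a small Cartesian correction for the robot. gain < 1 damps the
    loop for stability; px_to_mm is an assumed calibration constant."""
    return -gain * px_to_mm * np.asarray(px_error, dtype=float)

# simulated convergence from a coarse well position (pure simulation:
# the "camera" here just reflects the commanded move back as new error)
err = np.array([40.0, -25.0])      # initial pixel error
for _ in range(10):
    move_mm = servo_step(err)
    err = err + move_mm / 0.05     # simulated camera observation
print(np.linalg.norm(err) < 1.0)
```

Because each step only needs to shrink the error, the stored well positions can be coarse; the loop absorbs calibration drift and small mechanical errors, which is exactly the robustness argued for above.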
Tower camera for component pick-up at irregular locations. Currently, the users need to manually place every component on the tray, which is tedious. It would be extremely helpful if the users only needed to pour 64 components into the box under the tower camera, and the assembly robot would figure out by itself how to pick them out of the box and place them on the tray.
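A first step toward this is locating isolated components in the tower-camera view: threshold the image and find connected blobs, giving candidate pick points. This stdlib-only sketch assumes components appear brighter than the box floor; overlapping piles would need a real instance-segmentation model:

```python
import numpy as np
from collections import deque

def find_components(box_img: np.ndarray, thresh=128, min_px=20):
    """Locate loose components in the tower-camera view: threshold,
    then 4-connected blob labeling. Returns (row, col) centroids the
    robot could target. min_px filters out specks of noise."""
    fg = box_img > thresh
    seen = np.zeros_like(fg, dtype=bool)
    h, w = fg.shape
    centroids = []
    for r in range(h):
        for c in range(w):
            if fg[r, c] and not seen[r, c]:
                q, pix = deque([(r, c)]), []
                seen[r, c] = True
                while q:                      # BFS over one blob
                    y, x = q.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and fg[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(pix) >= min_px:
                    ys, xs = zip(*pix)
                    centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

img = np.zeros((100, 100), dtype=np.uint8)
img[10:20, 10:20] = 255   # two well-separated synthetic components
img[60:70, 70:80] = 255
print(find_components(img))  # two centroids: (14.5, 14.5) and (64.5, 74.5)
```

In practice OpenCV's connected-components or contour functions would replace the hand-rolled BFS, and a calibrated camera-to-robot transform would convert each centroid into a pick pose.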