As mentioned here: https://github.com/UNCG-DAISY/Instagrain/issues/118#issuecomment-1234559910, RPiOS will need an update to work with the Arducam.
@jacobstasiewicz, here are some docs for the autofocus camera that I found: https://www.arducam.com/raspberry-pi-camera/autofocus/
I can confirm the cropped image is roughly 32 x 32 mm (+/- 1 mm).
2 questions for you:
The new camera is attached and working on Bullseye, but only via the shell. I have been battling some demons trying to get it working right in Python; I will update you on progress. I think I downloaded a janky package somewhere along the line.
I will have to measure once I get into the lab, but I'm going to look through my notes quickly and see if I wrote it down.
How would I go about finding this out?
The easiest way to see the number of pixels is with the RPi desktop environment: navigate to the picture, right-click on it, go down to Properties, and then look at the Image tab. When I did it on the zoomed image above it is 1024 x 1024 pixels (but if you could do it on the RPi just to check, that would be good).
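If it helps, a quick Pillow sketch can print the same information from Python on the Pi (this assumes Pillow is installed; the filename below is just a placeholder):

```python
from PIL import Image

# Placeholder path: point this at the cropped image on the Pi
img = Image.open("crop.jpg")
print(img.size)  # (width, height) in pixels, e.g. (1024, 1024)
```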
Yes, the crop is 1024x1024.
Ok, cool. I have confidence that we will get it working with Python.
I suggest that you design/scale a tent that is the correct distance off the bed, and then print it. This could be printed with cheaper resin (draft, or normal grey/black/white; I have these).
Then we can build a new autofocus camera as a dev machine to get the Python code working.
Thoughts?
Yes that sounds good
I made 2 new issues (#131 and #132). We can close this for now since the testing is done.
From an initial test, I think this camera will need to be farther away from the bed than the V2 camera system. For example, here is an image taken at roughly 76 mm from the bed (lens to bed), vs. 72 mm for V2. The image size in pixels is 4656x3496. Changing the resolution to 2048x2048, here is the resulting picture:
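For reference, here is a minimal sketch of capturing a still at a fixed resolution from Python with picamera2 on Bullseye (the 2048x2048 size and the output filename are just the example values from the test above, and this assumes picamera2 is installed and talking to the Arducam module):

```python
from picamera2 import Picamera2

picam2 = Picamera2()

# Request a square still at the test resolution mentioned above
config = picam2.create_still_configuration(main={"size": (2048, 2048)})
picam2.configure(config)

picam2.start()
picam2.capture_file("af_test_2048.jpg")  # example output name
picam2.stop()
```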
This leads to three questions I have.
Ok.. nice work...
So, naively, the goal is to make a 1024x1024 image that is 32 mm x 32 mm, to be harmonized with the other camera and therefore the existing model. Note that the model uses the 1024x1024 images, but rescaled to 224x224; images that are 1024x1024 are just too big for an on-device model (see code block 5 here):
```python
# set color channels
c_ch = 3
c_mode = 'rgb'

# set image size (RGB, so imshape has 3 channels)
pix_dim = 224
imsize = (pix_dim, pix_dim)
imshape = (pix_dim, pix_dim, c_ch)
```
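To make that downscaling step explicit, here is a minimal Pillow sketch of resizing a 1024x1024 crop to the 224x224 model input (an illustration only, not the project's actual preprocessing; the filenames are placeholders):

```python
from PIL import Image

pix_dim = 224
imsize = (pix_dim, pix_dim)

# Downscale the 1024x1024 center crop to the model input size
crop = Image.open("crop_1024.jpg")   # placeholder filename
model_input = crop.resize(imsize)    # default resampling filter
model_input.save("crop_224.jpg")
```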
So, for the autofocus, there are two options:
I propose that we get there with software. So, if you use the lens tent that we have, then in the code we do something like this:
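A minimal sketch of that idea, assuming Pillow and hypothetical filenames; the pixel span of 32 mm (crop_px below) would need to be measured for the new camera, and the 1024x1024 target and the size check come from the points below:

```python
from PIL import Image

# Hypothetical value: the number of pixels that spans 32 mm on the new camera
# at the lens-tent distance; this has to be measured experimentally.
crop_px = 1500

full = Image.open("full_size.jpg")   # placeholder: the full-resolution capture
full.save("full_size_archive.jpg")   # also keep the full-size image

# The ~32 mm crop must be bigger than 1024x1024, otherwise we would be upsampling
if crop_px < 1024:
    raise ValueError(f"32 mm crop is only {crop_px}px; it needs to be >= 1024px")

# Centered crop covering ~32 mm x 32 mm, then rescale to the 1024x1024 size
# the existing pipeline expects
w, h = full.size
left, top = (w - crop_px) // 2, (h - crop_px) // 2
crop = full.crop((left, top, left + crop_px, top + crop_px))
crop = crop.resize((1024, 1024))
crop.save("crop_1024.jpg")
```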
The only check I want to make is that the cropped image is bigger than 1024x1024.
Also, we still want to save the full-size images; they might be useful later when working with bigger grain sizes.
Does this make sense?
With this new plan, I propose we close this issue (since we have a path forward). This also means that #131 can be closed, and the software changes mentioned above can be implemented in #132.
sounds good
Replacing our current camera + lens setup with an autofocus camera module (specifically the 16MP one from Arducam) would be easier on an operator because there would be no need to deal with focusing (see the wiki docs on focusing).
Additionally, the autofocus camera is less expensive ($30) than the current $75 setup (HQ cam ($50) + lens ($25)).
Previous experiments with the current setup suggest the 1024x1024 center crop used for ML is ~32 mm x ~32 mm; see https://github.com/UNCG-DAISY/Instagrain/issues/33#issuecomment-940111636. This should be retested with the current V2 camera and the results posted here.
Note that the current ML model works with a 1024 x 1024 center crop that is downscaled to 224 x 224 and then run through the network.
To incorporate the autofocus camera, our constraints are:
To accomplish this, there will need to be some experimentation with the camera (adjusting distance), with some way to measure the size of the imaging surface (graph paper/rulers/scales).
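For example, a small helper along these lines could turn a graph-paper measurement into the numbers we care about (a hypothetical function with example values; the 32 mm target and the 1024-pixel minimum are the constraints above):

```python
def mm_per_pixel(pixels_spanned: float, known_mm: float) -> float:
    """Image scale from a known length (e.g. graph paper) visible in the frame."""
    return known_mm / pixels_spanned

# Example values only: say 10 mm of graph paper spans 320 pixels in a test shot
scale = mm_per_pixel(320, 10.0)          # ~0.031 mm per pixel
crop_px = round(32.0 / scale)            # pixels needed to cover a 32 mm crop
print(scale, crop_px, crop_px >= 1024)   # the crop must be at least 1024 px wide
```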
This issue is set up to track this project, post pictures from the autofocus camera, and decide whether it should be incorporated into future builds. There are 3 action items:
If the tests look promising, there will be two future issues to make:
main_tk.py