Bra1nsen closed this issue 2 years ago
Dear developer, I would like to save raw images
Hi, I'm not quite sure what you're trying to do, the following line captures a raw image and stores the result in a numpy array:
raw = picam2.capture_array("raw")
Note that this data won't have been demosaiced.
If you want to save a raw image to a file that can be loaded in an image editor capture_dng.py or capture_dng_and_jpeg.py might be more handy for you.
Thanks Chris. And how can I save a .raw file?
capture_dng.py will save you a DNG file, which is a raw image format that can be loaded into image editing software ( https://en.wikipedia.org/wiki/Digital_Negative )
import picamera2
from time import sleep
from fractions import Fraction

with picamera2.Picamera2() as camera:
    config = camera.create_still_configuration(main={"size": (600, 400)}, raw={}, display=None)
    camera.configure(config)
    camera.set_controls({"ExposureTime": 120, "AnalogueGain": 1, "ColourGains": (1.8, 1.8)})
    camera.start()
    r = camera.switch_mode_capture_request_and_stop(config)
    r.save("main", "full.jpg")
    r.save_dng("full.dng")
    metadata = camera.capture_file('full.dng')
    print(camera.capture_metadata())
    camera.stop()
    camera.close()
How can I change the resolution of the .dng file? It's always 2028x1080; only the JPG is 600x400. And why does it not print metadata?
kind regards
Hi, DNGs are fundamentally different in that we're saving the exact data that came out of the image sensor. Generally speaking (and certainly for all Raspberry Pi supported sensors) image sensors expose only a limited number of "modes", and you can't get any other resolutions in a DNG file beyond those that correspond to one of these sensor modes.
You can find out exactly what sensor modes are available with
from picamera2 import Picamera2
from pprint import *
picam2 = Picamera2()
pprint(picam2.sensor_modes)
There is some further explanation in the "Raw Streams" section on page 19 of the manual.
Great, thank you David. I managed to set the SRGGB10 format, which is great because it reaches a 31us shutter speed! Is there any way to capture metadata the same way you showed me with the JPG?
with picamera2.Picamera2() as camera:
    config = camera.create_still_configuration(main={"size": (600, 400)}, raw={"format": "SRGGB10"}, display=None)
    camera.configure(config)
    camera.set_controls({"ExposureTime": 31, "AnalogueGain": 1, "ColourGains": (1.0, 1.0)})
    camera.start()
    print(camera.capture_metadata())
    r = camera.switch_mode_capture_request_and_stop(config)
    r.save("main", "2.jpg")
    r.save_dng("2.dng")
    buffers, metadata = camera.switch_mode_and_capture_buffers(config, ["main", "raw"])
    camera.helpers.save(camera.helpers.make_image(buffers[0], config["main"]), metadata, "2.jpg")
    camera.helpers.save_dng(buffers[1], metadata, config["raw"], "2.dng")
    print(metadata)
    camera.stop()
    camera.close()
Unfortunately it prints nothing. I wanted to get the sensor temperature etc. :]
crazy
Hi, there are a few things going on here, let me go through them:
- Use capture_request instead of switch_mode_capture_request_and_stop.
- Release the request with r.release(), because still configurations by default only use 1 request, so you'll never get another image until you release it. But in any case, I don't think you need this capture at all.
- You can use dcraw to have a quick look at raw files. Use dcraw -w 2.dng (sudo apt install dcraw if you don't have it).
To summarise, I'd probably try some code like this:
import picamera2

with picamera2.Picamera2() as camera:
    config = camera.create_still_configuration(main={"size": (600, 400)}, raw={"format": "SRGGB10"})
    camera.configure(config)
    camera.set_controls({"ExposureTime": 31, "AnalogueGain": 1, "ColourGains": (1.0, 1.0)})
    camera.start()
    r = camera.capture_request(config)
    r.save("main", "2.jpg")
    r.save_dng("2.dng")
    print(r.get_metadata())
    r.release()
Hey David, how are you? Thank you for your snippet, that helped me a lot!
In any case, it's great that I can use raw camera sensor data, because image compression algorithms like JPEG introduce nonlinearities in the pixel values, which affects the process of estimating solar irradiance.
I still don't know how to save a .raw file - and is it actually also possible to get .tga files from the raw modes?
Python's PIL module actually supports TGA files, so what you want is almost very easy. Except that I think there are some problems with the fact that you would need to save 16-bit pixels if you don't want to lose any of the raw data. Kind of like this:
from picamera2 import Picamera2
from PIL import Image
import numpy as np
picam2 = Picamera2()
config = picam2.create_still_configuration(raw={"format": "SBGGR10"})
picam2.configure(config)
picam2.start()
array = picam2.capture_array("raw")
array_16bit = array.view(np.uint16) * 64 # convert 10-bit values to 16
im = Image.fromarray(array_16bit, 'I;16L')
im.save("test.tga")
The catch is that the save operation fails - PIL's TGA plugin appears not to support 16-bit formats. I'm not sure if there's anything else out there that does, so I don't really know what else to try. But how is it you want to use these files? DNG seems fairly standard for raw files, and Python's rawpy module will read them quite happily.
Ahh damn, that would have been a great solution. I use a C program for transforming the fisheye images; maybe I have to modify the code. https://github.com/Bra1nsen/fish2skyplane
when I use:
libcamera-raw -t 1000 --ev -10 --awbgains 1.0,1.0 --gain 1.0 --analoggain 1.0 --denoise off --rawfull 1 --shutter 9000
I get a .raw file. How do I get that SRGGB format in Python?
I'm so thankful for your support david, I really don't know what I would do without you.
The ".raw" files saved by libcamera-raw are flat binary files with no formatting data whatsoever. There's a console message that tells you the exact dimensions and format, but then it's up to you.
You can make life slightly easier by adding --mode 4056:3040:12:U to the command, which will force the raw file to contain unpacked pixels, that is, each pixel (still with a 12-bit dynamic range) will occupy 2 bytes, as opposed to "packed" format where 2 pixels occupy 3 bytes.
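If you do end up with a packed file, it can still be decoded in Python. Here is a sketch assuming the MIPI CSI-2 RAW12 byte layout (two pixels per three bytes, the third byte carrying both pixels' low nibbles) - verify against your own captures before relying on it:

```python
import numpy as np

def unpack_raw12(packed: np.ndarray, width: int, height: int) -> np.ndarray:
    """Unpack RAW12 bytes (2 pixels per 3 bytes) into a uint16 array.

    Assumed layout per triple: [P0 bits 11:4][P1 bits 11:4][P1 low | P0 low].
    """
    triples = packed.reshape(-1, 3).astype(np.uint16)
    p0 = (triples[:, 0] << 4) | (triples[:, 2] & 0x0F)  # rebuild pixel 0
    p1 = (triples[:, 1] << 4) | (triples[:, 2] >> 4)    # rebuild pixel 1
    return np.stack([p0, p1], axis=-1).reshape(height, width)

# e.g. unpack_raw12(np.fromfile("packed.raw", dtype=np.uint8), width, height)
```

Unpacked output avoids all of this, which is why the --mode trick above is usually the easier route.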
Anyway, once you've got your unpacked format raw file (let's assume you have a single frame which should be about 24MB), you can load it into Python simply with
import numpy as np
array = np.fromfile("test.raw", dtype=np.uint16).reshape((3040, 4056))
Looking back at the earlier code snippet, this actually gives you the equivalent array to
...
array = picam2.capture_array("raw").view(np.uint16)
pi@allskyLA:~ $ libcamera-raw -t 1000 --ev -10 --awbgains 1.0,1.0 --gain 1.0 --analoggain 1.0 --denoise off --rawfull 1 --shutter 90 --mode 4056:3040:12:U -o packed.raw
[144:22:18.175741643] [3683] INFO Camera camera_manager.cpp:293 libcamera v0.0.0+3866-0c55e522
[144:22:18.221979126] [3684] INFO RPI raspberrypi.cpp:1374 Registered camera /base/soc/i2c0mux/i2c@1/imx477@1a to Unicam device /dev/media0 and ISP device /dev/media2
[144:22:18.223072623] [3683] INFO Camera camera.cpp:1035 configuring streams: (0) 640x480-YUV420 (1) 4056x3040-SBGGR12
[144:22:18.223614816] [3684] INFO RPI raspberrypi.cpp:761 Sensor: /base/soc/i2c0mux/i2c@1/imx477@1a - Selected sensor format: 4056x3040-SBGGR12_1X12 - Selected unicam format: 4056x3040-BG12
Raw stream: 4056x3040 stride 8128 format SBGGR12
import numpy as np
array = np.fromfile("packed.raw", dtype=np.uint16).reshape((3040, 4056))
print(array)
ValueError: cannot reshape array of size 49418240 into shape (3040,4056)
packed.raw ~ 100MB
Looks like you've captured multiple raw images into the same file - which is the default behaviour, it writes them all to the same file. You can use -o test%d.raw --segment 1
on the command line if you want each written to a separate file.
It automatically created test.raw, test0.raw, test1.raw, test2.raw and test3.raw O.O
How big is test0.raw? It should be a single raw frame, which in SBGGR12 format should be about 24MB. After loading with array = np.fromfile("test0.raw", np.uint16)
you should find that len(array)
is about 12 million. What numbers are you seeing?
Yep, 23.4 MB and ~12 million.
ValueError: cannot reshape array of size 12354560 into shape (4056,3040)
What am I looking at ;D?
I think it may just be that I made a mistake using 4056 instead of 4064. The reshape needs to be passed the width including the row padding.
What happens is this: every (unpacked) raw frame contains 4056x3040 pixels, each 16 bits, but the padding on the end of every row means the stride is not 4056 × 2, it's 4064 × 2 = 8128. So every saved raw frame should contain 3040 × 8128 = 24709120 bytes.
You should be able to load this using array = np.fromfile("test0.raw", dtype=np.uint16)
, and len(array)
should be 12354560, which I think is what you had. You can then reshape it to (3040, 4064)
(not 4056 which I used earlier - sorry about that). Does that fix it?
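The stride handling described above can be sketched like this (synthetic data stands in for np.fromfile on a real capture):

```python
import numpy as np

HEIGHT, WIDTH = 3040, 4056  # active image size
STRIDE_PX = 8128 // 2       # row stride: 8128 bytes = 4064 uint16 pixels

# Real capture: array = np.fromfile("test0.raw", dtype=np.uint16)
array = np.zeros(HEIGHT * STRIDE_PX, dtype=np.uint16)  # stand-in for the file

# Reshape with the padded width, then crop away the padding columns.
frame = array.reshape(HEIGHT, STRIDE_PX)[:, :WIDTH]
print(frame.shape)  # (3040, 4056)
```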
Good morning David, I hope you slept well. In fact, it did work with (3040, 4064).
Some questions have come up again:
I would like to know how to set exactly which image format is currently being used (with the app, and if possible with Python).
All formats are .raw files, aren't they? With libcamera in Python, you couldn't generate a .raw file at all, right?
pi@allskyLA:~ $ libcamera-raw -t 1000 --ev -10 --awbgains 1.0,1.0 --gain 1.0 --analoggain 1.0 --denoise off --rawfull 1 --shutter 90 --mode 4056:3040:12:U -o test%.raw --segment 1
Hi, not totally sure I understand all the questions, but let me answer what I can.
- libcamera-raw only outputs these flat binary raw (Bayer) files, nothing else. You can force the use of a particular raw sensor mode with the --mode parameter. You can choose either packed or unpacked output.
- libcamera-still can output the raw Bayer image to a DNG file as well.
- In Picamera2, the available raw modes are those listed by picam2.sensor_modes. The only further choice you have is whether the pixels are packed or unpacked.
- In Python I expect you can easily write the numpy raw image array to a ".raw" file - I think the tofile method (the opposite of fromfile which we used earlier) would do it.
Let me know if there's anything I haven't answered!
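A sketch of that tofile/fromfile round trip; note that a .raw file carries no header, so the reader must know the dtype and dimensions out of band:

```python
import numpy as np

raw = np.arange(12, dtype=np.uint16).reshape(3, 4)  # stand-in for a captured raw array
raw.tofile("frame.raw")                             # flat binary, no header at all

# The reader must already know the dtype and shape:
loaded = np.fromfile("frame.raw", dtype=np.uint16).reshape(3, 4)
print(np.array_equal(raw, loaded))  # True
```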
Unfortunately the DNG specifications are not public, and I haven't found a complete third-party programming API.
The only difference between XRGB8888 and RGB888 and uncompressed TGA is a header in TGA that defines the dimensions, bit depth and some other parameters.
I would like to know how to get a raw non-Bayer image like XRGB8888, and how I can check the format after the image was taken.
I wonder if there's a bit of confusion over the use of the word "raw". I always use "raw" to mean the raw Bayer image coming straight off the image sensor. Unfortunately, there's also a historic use of the word "raw" to mean "uncompressed" or "unencoded" (video people often talk about "raw video" when they mean "uncompressed video", for example). In such circumstances I avoid "raw" and try to use the most appropriate alternative.
I also use the word "processed" meaning an image that has been through the ISP (Image Signal Processor). The only thing that converts a raw image to a processed (i.e. non-raw) image is the ISP. Note that "raw", at least in the way that Picamera2 uses it, actually has nothing to do with "uncompressed".
So to be clear, do you want raw (i.e. Bayer), or are you after something else? If you want a processed RGB image but wish to avoid lossy compression, PNG is quite often a good choice. There are also options that might involve changing what the ISP does to give you different kinds of processed image. Can you explain a little more about what you're after, perhaps?
(The DNG specification is here, but it is quite a complicated file format based on TIFF which means you don't want to be writing your own code to handle it!)
You're absolutely right, thanks for making that clear. So currently I'm after a processed [R,G,B] image like RGB888.
The idea is to debayer it externally and to transform it afterwards with fish2skyplane (where DNG isn't fitting yet, it's just too difficult).
I would like to know how to get an XBGR8888 and/or XRGB8888 and/or RGB888 file.
Picamera2 will easily give you RGB888 images on its "main" stream, and you can save these as JPEG, PNG, BMP. From what I said earlier, I think even tga might "just work". Or you could save them as unformatted binary files using numpy.tofile. Does this cover what you need?
Ok thanks, yes it does; sorry if I'm a bit difficult, I guess I'm still missing the basics.
As we tried, .tga didn't work, right? The problem with image compression algorithms like JPEG, PNG, BMP is that they introduce nonlinearities in the pixel values, which then affects the process of estimating solar irradiance. So I was looking for a "raw" RGB888 file. Or am I wrong?
tga didn't work when trying to save a (true) raw Bayer file, but once you have a non-raw file I think it would:
from picamera2 import Picamera2
picam2 = Picamera2()
picam2.start()
picam2.capture_file("test.tga")
You're right that processed images have an inverse gamma transform applied to them, making them non-linear in terms of brightness. You can stop the ISP from doing that, but I wouldn't say that's a good idea if you want to use the images as "normal" images later.
I seem to recall discussing the "Lux" metadata value before. Now that is always a linear measure, though it's for the whole image and you'd have to calibrate what those units mean for you. But still, double the illumination and the lux value will double too.
Other options might include undoing the inverse gamma transform, this wouldn't be too hard. Or you could take the (true) raw data and de-Bayer it - the number of operations required could be really quite minimal, depending on what you want.
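Undoing the inverse gamma transform could be sketched roughly like this. The exponent is an illustrative assumption - the Pi's tuning file actually uses a piecewise gamma curve, so this is not a calibration:

```python
import numpy as np

def undo_gamma(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Approximately linearise an 8-bit processed image.

    Assumes the ISP applied roughly x^(1/gamma); the real tuning uses a
    piecewise curve, so treat the result as approximate linear light (0..1).
    """
    x = img.astype(np.float32) / 255.0
    return x ** gamma
```

For truly linear data, the raw Bayer route remains the safer option.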
Maybe I should give you a bit more context. The idea is to feed a neural network with the sky images labelled with solar irradiance.
The artificial network (I use Vertex AI from Google) will work best with raw values because they capture the radiometric characteristics of the scene (which I am interested in).
There are two approaches which I would like to test:
1. a non-raw Bayer matrix like RGB888, where the difficulty lies in the compatibility with my image transformation program;
2. a debayered image format which is as pure as possible (radiometric/solar).
Thanks for the info. First of all, let me see if I understand!
You're going to want 3-channel RGB images (top row of pictures) which you will crop and de-fisheye (middle row). Then you're going to want these in quite a low resolution (last row) to feed to a neural network. Finally, the neural network will output a "solar irradiance" value? (Just for my curiosity, is that a single number or something?)
If I've got that right then I'll follow up with some suggestions on these two approaches...
Yes, exactly. One way will be through generating an HDR image via exposure fusion; another will be through feeding the neural network a tensor of all the images taken.
The target is DNH - global radiation, which is measured in W/m². Moreover, I will add weather data to the model, like wind direction/speed, to further improve the prediction. It's about seconds to minutes.
Here you can see the solar power of 3 days; if there were no clouds, it would be a perfect hill. Abrupt drops in the performance of a photovoltaic system are problematic for the grid.
You're going to want 3-channel RGB images
It would also be very interesting to use a non-raw Bayer matrix (like RGB888) image. We pretend it's a monochrome picture. Of course, for us humans the image would probably make no sense, but I feel like the AI will see it differently.
OK, thanks!
So I think your first idea was to use the raw Bayer image. It sounds like you'd have to "de-Bayer" this yourself in order to make it work with your other software. Here are the steps you would need - you'd have to think which of these are important.
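As a sketch of how minimal that de-Bayering could be: 2x2 binning collapses each Bayer quad into one RGB pixel at half resolution. This assumes an RGGB pattern and deliberately omits black-level subtraction, white balance and colour correction - add those as needed:

```python
import numpy as np

def debayer_2x2(raw: np.ndarray) -> np.ndarray:
    """Half-resolution de-Bayer by 2x2 binning, assuming RGGB order:

        R G
        G B

    Black level, white balance and colour correction are left out on purpose.
    """
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2].astype(np.uint32) + raw[1::2, 0::2]) // 2  # average the two greens
    b = raw[1::2, 1::2]
    return np.dstack([r, g.astype(raw.dtype), b])  # shape (H/2, W/2, 3)
```

For radiometric work, binning has the nice property of never interpolating values that were not actually measured.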
I think your other plan involved letting the ISP do the de-Bayering, which would do all the above for you. You could set the format to 'RGB888' (or maybe 'BGR888') and choose whatever output resolution you want. If these images are to be consumed only by the neural network, you could turn off the inverse gamma transform completely. Try this:
from picamera2 import Picamera2
tuning = Picamera2.load_tuning_file("imx477.json")
contrast = Picamera2.find_tuning_algo(tuning, "rpi.contrast")
contrast["ce_enable"] = 0
contrast["gamma_curve"] = [0, 0, 65535, 65535]
picam2 = Picamera2(tuning=tuning)
picam2.start(show_preview=True)
from picamera2 import Picamera2

tuning = Picamera2.load_tuning_file("imx477.json")
contrast = Picamera2.find_tuning_algo(tuning, "rpi.contrast")
contrast["ce_enable"] = 0
contrast["gamma_curve"] = [0, 0, 65535, 65535]

ExposureTime_1 = 31
ExposureTime_2 = 120
ExposureTime_3 = 350
ExposureTime_4 = 650
ExposureTime_5 = 1000
ExposureTime_6 = 1500

def exposure_bracket():
    with Picamera2(tuning=tuning) as camera:
        config = camera.create_still_configuration(main={"size": (4056, 3040), "format": "RGB888"})
        camera.configure(config)
        camera.set_controls({"ExposureTime": ExposureTime_6, "AnalogueGain": 1, "ColourGains": (1.0, 1.0)})
        camera.start()
        camera.capture_file("#6.tga")
        camera.stop()
        camera.close()

exposure_bracket()
That's the current snippet. I'm wondering if it matters which image format I choose (RGB888, BGR888, XBGR8888, XRGB8888) when it gets transformed to .tga or .jpg.
Would it be possible to use sensor mode SRGGB10 while using main? Because then one could further decrease the shutter speed to 31us.
And for the debayering approach, which way would you save these Bayer matrices/images - with numpy.tofile?
The 24 bit formats will use slightly less memory. For jpeg I'd expect no difference in the saved file, for tga the 32 bit images may get saved with an alpha channel.
No problem using the 10 bit sensor mode.
Well, once you've debayered the image and got regular RGB you can take your pick... jpeg, png, tga, "tofile", whatever is easiest!
Yes, sure, but first I have to provide my friend with an RGB888 or XRGB8888 file for debayering. How can I set the 10-bit sensor mode in my snippet, David?
The raw Bayer image just isn't RGB888, you have only one sample at each pixel location. You could save it as DNG, or using tofile (in this case the receiver would have to know what to expect).
You could even pretend it's a greyscale image and you might be able to save that as a 16 bit PNG, but it would depend on whether the libraries you're all using support that.
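For what it's worth, Pillow can usually round-trip 16-bit greyscale PNGs, so that route might look like this sketch (synthetic data stands in for the raw capture):

```python
import numpy as np
from PIL import Image

# Stand-in for picam2.capture_array("raw").view(np.uint16)
raw16 = np.arange(12, dtype=np.uint16).reshape(3, 4) * 64  # 10-bit -> 16-bit range

Image.fromarray(raw16).save("raw_grey.png")  # uint16 maps to Pillow's "I;16" mode

back = np.array(Image.open("raw_grey.png"), dtype=np.uint16)
assert np.array_equal(back, raw16)  # the 16-bit values survive the round trip
```

Whether the rest of the toolchain (e.g. the C transformation program) can read 16-bit PNGs is a separate question.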
What would it look like to get a raw Bayer matrix as a DNG file in RGB888? I mean, I can't use RGB888 in raw format, and with main I can't get raw images, I thought...
with picamera2.Picamera2() as camera:
    config = camera.create_still_configuration(main={"size": (600, 400), "format": "RGB888"}, raw={"format": "SRGGB10"})
    camera.configure(config)
    camera.set_controls({"ExposureTime": ExposureTime_1, "AnalogueGain": 1, "ColourGains": (1.0, 1.0)})
    camera.start()
    r = camera.capture_request(config)
    r.save("main", "2.dng")
    r.save_dng("2.dng")
    metadata = r.get_metadata()
    print(metadata)
    print(metadata["Lux"])
    r.release()
    camera.close()
I worry a bit that we're not talking about pixel formats in quite the same way.
When I talk about RGB888 or BGR888 images, each pixel has 3 values (R, G and B). If you capture such an image and ask for its shape, it will report (height, width, 3). You can save these as normal images such as JPEG, PNG, BMP or TGA.
There are also the XBGR8888 formats. Here there's a 4th channel which is set to a fixed value (255, I think). You can normally save these as regular images too. Some formats (JPEG for example) will drop the 4th channel. Other formats may preserve it as an alpha channel.
Then there are the SBGGR10 type formats. These are raw Bayer formats. They all begin with S, the BGGR may change to indicate the Bayer pattern order and the 10 is the bit depth of the samples. Capturing one of these images and querying the shape will yield (height, stride) - only 2 dimensions this time and not 3. These are not RGB888 images and you can't save them as a normal image. You can save them as DNG files, or you can "roll your own format" using numpy.tofile.
Does that make things clearer? The code you have above looks to me like it will set up a 600x400 RGB888 main stream, and a raw stream which will probably choose the 1332x990 10-bit mode (the console log messages will confirm). Saving the "main" stream images as .dng doesn't really make sense - maybe you'll get a JPEG file with a funny name, I'm not sure. But the code looks to be saving the "raw" stream image as I would expect, in DNG format.
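One concrete illustration of the fourth channel: if you capture a 4-channel XBGR8888-style array, the padding channel can simply be sliced off to get a plain 3-channel image (synthetic array below, in place of a real capture):

```python
import numpy as np

# Stand-in for a 600x400 capture from an XBGR8888-configured main stream
xbgr = np.zeros((400, 600, 4), dtype=np.uint8)
xbgr[..., 3] = 255            # the fixed fourth ("X") channel

bgr = xbgr[..., :3]           # drop it to get a plain 3-channel image
print(bgr.shape)  # (400, 600, 3)
```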
Ahh okay, my question was obviously not thought through. A raw Bayer matrix has only 2 dimensions and can't be in RGB888 format.
One question remains, David: how can I set the 10-bit sensor mode for my .tga file snippet (4h ago)? By default it's SRGGB12.
The line you had previously:
config = camera.create_still_configuration(main={"size": (600, 400), "format": "RGB888"}, raw={"format": "SRGGB10"})
should give you the 10-bit sensor mode. As I said, the console messages will confirm exactly what it chose, but it looks right to me. The sensor only has one 10-bit mode IIRC, which is the 1332x990 one. All other modes will give you 12 bits. (Use, for example, libcamera-hello --list-cameras
to see exactly what modes are supported.)
If you capture one of the raw frames as an array (camera.capture_array("raw")
) then you should find all the even columns have values from 0-255, and the odd columns only 0-3 (being just the top two bits of the 10-bit values).
with picamera2.Picamera2(tuning=tuning) as camera:
    config = camera.create_still_configuration(main={"size": (4056, 3040), "format": "RGB888"}, raw={"format": "SRGGB10"})
    camera.configure(config)
    camera.set_controls({"ExposureTime": ExposureTime_6, "AnalogueGain": 1, "ColourGains": (1.0, 1.0)})
    camera.start()
    camera.capture_file("#6.tga")
    camera.stop()
    camera.close()
[190:58:21.634475252] [32177] INFO RPI raspberrypi.cpp:761 Sensor: /base/soc/i2c0mux/i2c@1/imx477@1a - Selected sensor format: 4056x3040-SBGGR12_1X12 - Selected unicam format: 4056x3040-BG12
Why does it show SBGGR12? Where is my misconception?
Ahh okay, I had to change the resolution in main to make it work :)!
config = camera.create_still_configuration(main={"size": (1332, 990),"format": "RGB888"}, raw={"format": "SRGGB10"})
ERROR