Open revast opened 10 years ago
Thanks for the info, I really have to look into it. It's nice to have other people who can contribute something, be it info or code. Hopefully it will become more frequent.
At the moment I'm trying to get the basic things done, like plugins and the OpenGL preview. It's crawling along a little bit at the moment, as I also have to get my regular job done. I'm also trying to get LibRaw back into the code this weekend; I removed parts of it after testing for the Model-View-Presenter rework.
FFmpeg was always really good. I already used it in another project (a train simulator with real videos) and the performance was much better than reading directories full of individual image files (frames). Maybe I or someone else can make a plugin/module out of it. At the moment the application still lacks a plugin manager, but first things first.
No problem, I am not really a coder, but I really want to push this. There is no FLOSS tool out there which handles CinemaDNG sequences properly without being a photo app with a photo workflow, so this effort is great, and I will try to convince others. Ever since Magic Lantern appeared, and the Kickstarter-funded Digital Bolex, I have hoped for such an effort to arise.
First steps first: tell me what you need, and I will try to help.
As I see you are investigating MXF-wrapped DNG, it would be very helpful if FFmpeg supported this first: http://ffmpeg.org/pipermail/ffmpeg-devel/2010-July/090069.html Maybe just file a feature request and see what happens.
I also think using OpenImageIO would be a good idea: it has raw support, covers all the other file formats of interest in depth, has strong support from the VFX industry, and is really nice to use as a library.
As a side point, you could look into http://opensourcevfx.org/ or https://natron.inria.fr/; maybe one idea or another could arise from there (OFX plugin support ;-)
You seem to be a little more experienced with this stuff and could do a little investigation of raw libs if you want. I'm a little bit rusty, as my daily job has to do with MFC, data structures and large amounts of ugly code (medical devices), not really graphics (besides a WPF GUI sometimes), but nevertheless I have enough experience with FFmpeg, OpenGL etc. to get many things done.
My latest test used LibRaw and loaded all the frames, after decoding of course, into RAM. Memory usage is 1:1, with no optimizations, no partial loading to VRAM etc. But it played really nicely and fast.
At the moment I'm doing a quick prototype of the OpenGL preview pane to get some shaders into play. I would really like to not hassle with raw loading and let the libs do the work. Afterwards de-Bayering, color grading, white balancing etc. would be done with shaders (fragment or compute shaders).
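To illustrate the kind of per-pixel work those shaders would do, here is a minimal white-balance sketch (shown on the CPU in C++ purely for clarity; in OC it would be a few lines of GLSL). The gain values in the comments are made up for illustration, not taken from any real camera.

```cpp
#include <algorithm>
#include <array>

// Per-channel white-balance gains applied to one linear RGB pixel.
// In OpenCine this logic would live in a fragment/compute shader;
// it is shown on the CPU here only to make the math explicit.
inline std::array<float, 3> applyWhiteBalance(const std::array<float, 3>& rgb,
                                              const std::array<float, 3>& gains)
{
    std::array<float, 3> out{};
    for (int c = 0; c < 3; ++c)
        out[c] = std::min(rgb[c] * gains[c], 1.0f); // clamp to white
    return out;
}
```

A GLSL fragment shader version would be essentially a one-liner: `color.rgb = min(color.rgb * gains, vec3(1.0));`.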
I am by no means an expert on this. I just happen to be interested in these technologies and see what could be connected to move things forward.
<<could do a little investigation about raw libs if you want. Sure! Apart from possible license issues, these come to mind first:
OpenImageIO ("3-clause BSD" license) also uses LibRaw, and has deep support for EXR, DPX and JPEG 2000 for creating DCPs for cinema projectors. And as it is used inside the programs and pipelines of the big VFX studios, it is constantly developed and quite future-proof; it is also said to have a very fine API. https://github.com/OpenImageIO/oiio/blob/master/src/doc/openimageio.pdf http://www.ianwootten.co.uk/2014/09/15/processing-camera-raws-with-openimageio-and-python/
The OpenFX plugin benefit would be being able to use many FLOSS plugins (apart from even commercial ones) in one snap: http://www.tuttleofx.org/license Color correction: Color Transform Language (CTL), OpenColorIO LUT, Color Transfer, ... Filtering: Blur, Sobel, Non-Local Means Denoising, ... Geometric distortion: Crop, Resize, Flip, Flop, Pinning, Lens Distort, ...
This could also be interesting: the Tuttle Host Library, a C++ library to load OpenFX plugins and manipulate a graph of OpenFX nodes, so you can use it as a high-level image processing library.
OpenColorIO (http://opencolorio.org/License.html) is also a great thing to have (think of ACES, color space conversions); Tuttle features it, one abstraction layer above.
And as OpenFX is already built into the node-based compositing solution Natron (MPL license), maybe it's a good idea to ask the Natron team; they have a tremendously fast dev pace right now, actually get paid to program FLOSS, and they also use Qt and C++.
As for Bayer demosaicing, and having the most (best) choice in doing it, RawTherapee has to be mentioned here: AMaZE, IGV, LMMSE, EAHD, HPHD, VNG4, DCB, AHD, fast and bilinear algorithms. http://50.87.144.65/~rt/w/index.php?title=Demosaicing http://renderingpipeline.com/2013/04/a-look-at-the-bayer-pattern/ The RL Deconvolution sharpening tool also sounds nice: http://50.87.144.65/~rt/w/index.php?title=Sharpening http://petersonlive.com/raw-therapee-review/
Then there is the RawSpeed (LGPL) library: https://github.com/klauspost/rawspeed "RawSpeed is not intended to be a complete RAW file display library, but only act as the first stage decoding, delivering the RAW data to your application."
And that's only the stuff I know about without thorough investigation. Though I don't have too much insight into the code, I can see great benefits in not reinventing the wheel too often.
Wow, many thanks, that's what I meant. Very good info. You know things that slipped my attention while I was looking for initial info to get started with OpenCine development.
I'm also familiar with RawTherapee and Darktable, as I'm a little bit into photography (especially HDR). While developing I'm trying to get closer to their workflow and look. In my opinion they are really straightforward, but I'm open to other opinions too. I have to make a list of the control elements which would be used for plugin GUIs.
I heard about OpenFX before, but successfully forgot about it. I have to look into it so as not to "reinvent the wheel" for plugins and such.
A really important thing for me is to have as much as possible processed by the GPU. This would generally deliver great performance, although SSE/SIMD-optimized code can also be really quick. Fast previews would benefit the workflow, and a better workflow is what most people want.
Another thing that comes to my mind at the moment is superresolution: http://en.wikipedia.org/wiki/Superresolution But it's more a note to myself; it may be useful later in development.
As I understand it, the problem with GPU code is that CUDA and OpenCL are really two different things, performance- and feature-wise. There are many small things which make it complicated to support a wide range of computers; the Blender Cycles renderer suffers from that issue, for example. Also, Nvidia provides OpenCL version 1.0, while AMD provides the up-to-date version 1.1, under a restrictive license.
Thus, compute shaders would come in handy (http://wili.cc/blog/opengl-cs.html): "But why did Khronos introduce compute shaders in OpenGL when they already had OpenCL and its OpenGL interoperability API? Well, OpenCL (and CUDA) are aimed for heavyweight GPGPU projects and offer more features. Also, OpenCL can run on many different types of hardware (apart from GPUs), which makes the API thick and complicated compared to light compute shaders. Finally, the explicit synchronization between OpenGL and OpenCL/CUDA is troublesome to do without crudely blocking (some of the required extensions are not even supported yet). With compute shaders, however, OpenGL is aware of all the dependencies and can schedule things smarter. This aspect of overhead might, in the end, be the most significant benefit for graphics algorithms which often execute for less than a millisecond."
Since last summer, compute shaders, part of OpenGL 4.3, seem to be fully supported by NVIDIA and AMD. Even Mesa, and thus the Linux FLOSS drivers, are on the way to supporting them, but they're not there yet. http://www.phoronix.com/scan.php?page=news_item&px=MTU5MzQ http://mesamatrix.net/
A nice thing is that OpenCL also runs on CPUs, so there's no extra fiddling if the computer has no proper graphics card/drivers.
This guy, Syoyo Fujita, did an experimental raw decoder/player for Magic Lantern raw files. He is very into OpenGL compute shaders and GLSL; a very good source: http://syoyo.wordpress.com/2013/04/18/4k-raw-realtime-playback-using-gpu/ http://syoyo.wordpress.com/2013/05/24/camera-raw-pipeline/ http://vimeo.com/67542994 A 10-year open source developer, quite open to talk to, but he won't share his raw-realtime-playback code as far as I can tell.
http://cuj2k.sourceforge.net/ is a FLOSS CUDA JPEG 2000 encoder I am aware of.
http://slowmovideo.granjow.net/ creates slow-motion videos via optical flow on the GPU ("GPU implementation of feature point tracking with and without simultaneous gain estimation"), available as libV3D. https://github.com/valgit/slowmoVideo/tree/master/src/V3D
Superresolution sounds nice; I found quite a few projects on GitHub. https://github.com/stefanv/supreme (Python) http://mentat.za.net/supreme/ (he wrote his PhD dissertation about it). This one seems nice, too: https://github.com/Palethorn/SuperResolution (C++)
This is one of the pits I don't want to fall into: missing portability. For example, Darktable (I like this software very much) is missing a Windows version (I tried to build it, but didn't have much time, so I gave up for the moment), which led me to rethink OpenCine's approach from the beginning. This is also why I'm trying to use OpenGL 3.3 (3.0 at the moment, as my laptop only has a 3rd-gen Intel HD 4000): to reach more people. I could aim at 4.2 with direct access features (my desktop uses a GTX 660), but not many people have up-to-date hardware. What do you think the roadmap should be for such things? I wouldn't go below OpenGL 3, as you sometimes have to make a clean cut.
To be honest, I hadn't tested OpenCine on Windows for some time. But my previous playback version ran pretty well and looked exactly the same as on Linux. I will update it again when OpenGL playback is working. At the moment it shows a still DNG image (LibRaw/dcraw); at first I had problems showing anything besides a black rectangle. I'm also still trying to structure the software architecture, separating the views and the handling of data.
Maybe you should be added to the project and extend the wiki with your info and ideas. Just write to Sebastian Pichelhofer (https://github.com/sp-apertus) if you are interested.
Some of the projects are not really maintained anymore; I'm not saying outdated, because algorithms which are implemented correctly stay in use for years or decades in famous software packages. I also started to look into the OpenFX API (not OpenFX, the Blender-like software); hopefully it is flexible enough to use the libs you suggested, as I'd like to have all the tools inside OpenCine and not some command line tools (at least not visible ones). It's my personal opinion on the topic, but I like a polished software GUI and workflow, not command line windows popping up and showing the cryptic output of some process.
I think I will record a video or two and upload them to YouTube when I have something to show.
No tests done yet, as I'm still trying to get FFmpeg linked. But I also started looking into other things to get some sort of roadmap.
Edit: Hooray!!! http://soledadpenades.com/2009/11/24/linking-with-ffmpegs-libav/
It took 2(!) days to find the answer. Nevertheless, I learned more about CMake and other useful things.
No luck with samples from here: http://www.magiclantern.fm/forum/index.php?topic=11899.0
FFmpeg returns "Invalid data found when processing input". The investigation is still ongoing.
Update: Also no luck with MlRawViewer. It reports: ImportError: No module named scandir
Update 2: Built and added scandir. I hadn't seen anything about it in the MlRawViewer README. A nice tool to use as a reference.
Update 3: The FFmpeg command line converts MLV fine, but the C++ API has some problems. Still investigating.
Right, good to see your progress.
Lately I have not had time to research this further, but now I have.
The first thing to look at (I guess) would be this newly open-sourced (BSD 3-clause license) library for writing CUDA/OpenCL code: http://arrayfire.com/why-arrayfire/
#########################################################
I have also been looking into (lossless) DNG compression.
From Wikipedia: "The process of DNG conversion involves extracting raw image data from the source file and assembling it according to the DNG specification into the required TIFF format. This optionally involves compressing it (lossless Huffman JPEG, called LJPEG, http://www.jpeg.org/jpeg/jpegls.html). Metadata as defined in the DNG specification is also put into that TIFF assembly. So a DNG converter must have knowledge of the camera model concerned, and be able to process the source raw image file including key metadata. Optionally a JPEG preview is obtained and added. Finally, all of this is written as a DNG file." Right, so one must either compress at DNG creation INSIDE the camera, or convert DNGs to compressed DNGs afterwards. Compressed DNGs can (should!?!) be used transparently in other DNG-aware applications.
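Since a DNG is just a TIFF container, the "assembling into the required TIFF format" step starts with the standard 8-byte TIFF header. Here is a minimal sketch of that header, assuming little-endian layout; a real writer (e.g. the Adobe DNG SDK) would follow it with IFDs full of TIFF-EP/DNG tags and the image data.

```cpp
#include <cstdint>
#include <vector>

// A DNG file begins with the 8-byte TIFF header: byte-order mark,
// the magic number 42, and the offset of the first IFD (image file
// directory). Everything after that is tags and image data.
inline std::vector<uint8_t> tiffHeaderLE(uint32_t firstIfdOffset)
{
    std::vector<uint8_t> h;
    h.push_back('I');                   // "II" marks little-endian byte order
    h.push_back('I');
    h.push_back(42);                    // TIFF magic number, low byte first
    h.push_back(0);
    for (int i = 0; i < 4; ++i)         // 32-bit offset to the first IFD
        h.push_back(static_cast<uint8_t>((firstIfdOffset >> (8 * i)) & 0xFF));
    return h;
}
```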
#########################################################
The DNG SDK is open source, but the license... http://www.adobe.com/support/downloads/dng/dng_sdk_eula_mac.html http://www.adobe.com/support/downloads/dng/dng_sdk_eula_win.html
The license appears to allow unrestricted use (and modification, etc.), as long as the Adobe copyright notice is included in the right places. The documentation license isn't OSD-compliant; it limits the number of copies and disallows derivative works. The Adobe license is similar to the BSD open source license (with none of the GPL requirements), but Adobe doesn't refer to the DNG SDK as "Open Source". It appears that Adobe isn't trying to restrict anything; they just want usage of DNG to be as friction-free as possible.
The Adobe DNG SDK provides support for reading and writing DNG files, as well as support for converting DNG data into a format easily displayed or processed by imaging applications. This SDK can serve as a starting point for adding DNG support to existing applications that use and manipulate images, or as an aid to adding DNG support within cameras. Conversion of other raw formats to DNG is not one of the things the SDK can do.
It apparently works on Linux: http://techvineyard.blogspot.co.at/2014/05/dng-14-parser.html
#########################################################
DNG converter kipi-plugin (sounds like a MATCH!): https://www.digikam.org/node/373 It uses the Adobe DNG SDK.
The DNG converter also compiles fine under Windows and Mac OS X, as it uses Qt4 for its GUI, and it supports the lossless compression (so the SDK seems to support compression).
The tool transfers RAW data to DNG, creates a JPEG preview and a thumbnail, sets a lot of TIFF-EP/DNG tags with the original raw file's metadata, backs up all Exif metadata and makernotes, and creates a set of XMP tags accordingly. The generated file is a valid DNG, which is checked with the Adobe DNGValidate tool...
The Adobe DNG Converter does this too, but it is not available on Linux (without Wine), is not FLOSS, etc. http://www.bmcuser.com/showthread.php?8227-Free-DNG-Compression-Tool!
#########################################################
mlv_dump (the reference tool of Magic Lantern; compression for archiving; i386 only)
http://ml.g3gg0.de/modules/mlv_dump.zip/mlv_dump.zip http://www.magiclantern.fm/forum/index.php?topic=7122.0
- can dump .mlv to CinemaDNG + .wav
- can compress and decompress frames using LZMA for archiving purposes.
- convert bit depth (any depth in range from 1 to 16 bits)
compression: you can get a data reduction of ~60% with 12 bit files. down-converting to 8 bits gives you about 90% data reduction. this feature is for archiving your footage. converting back to e.g. legacy raw doesn't need any parameters - it will decompress and convert transparently without any additional parameter.
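The bit-depth conversion step can be sketched as a simple rescale by shifting. This is only an assumption about how such a tool might do it; a real converter may round, dither, or use a lookup table instead of plain truncation.

```cpp
#include <cstdint>

// Rescale one raw sample from srcBits to dstBits by shifting.
// Down-conversion truncates the low bits; up-conversion pads with zeros.
inline uint16_t convertDepth(uint16_t sample, int srcBits, int dstBits)
{
    if (srcBits > dstBits)
        return static_cast<uint16_t>(sample >> (srcBits - dstBits));
    return static_cast<uint16_t>(sample << (dstBits - srcBits));
}
```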
#########################################################
http://m8raw2dng.de/ This guy managed to get uncompressed DNGs from a Leica M8 with the full 14-bit information; platform-independent code, maybe useful...
#########################################################
A deep look into cDNG (a very good read): http://blog.inventome.com/Blog/2014/7/insidecdng/Inside-cDNG-Files
Cinema DNG has some overhead because a lot of data is written repeatedly in every frame. For example, every single frame from a Blackmagic camera contains a linearization table. In theory, every frame could thus contain a different table; in practice this never seems to be necessary.
#########################################################
DNG Recover Edges (a freeware standalone tool from Thomas Knoll; I like the idea):
http://web.archive.org/web/20070228060106/http://www.luminous-landscape.com/recover_edges/ http://www.luminous-landscape.com/contents/DNG-Recover-Edges.shtml
It recovers all of the pixels that any supported digital camera records, whether they are hidden edges or intentionally cropped formats cut by the firmware.
RAW images usually contain an additional margin of up to 20 pixels at the image border. During in-camera JPEG conversion this margin is lost. Common RAW converters also crop those pixels, but a small program for Windows and Mac promises a remedy. "DNG Recover Edges" requires conversion to the vendor-independent Digital Negative format (DNG) before recovering the edge pixels. The user then simply drags the converted image onto the program file. A status window reports whether the recovery of the edge pixels succeeded.
#########################################################
DNGStrip (it would be nice to ask for the source code; preprocessing could be done with the FLOSS kipi-plugins DNG converter):
http://www.shutterangle.com/2014/lossless-compression-for-dng-raw-video-dngstrip/
It squeezes the maximum out of DNG Converter's compression. DNGStrip is a free command line tool which processes DNG sequences produced by DNG Converter and strips them of thumbnails and unnecessary tags added by Adobe. It will also remove any JPEG previews included in the compressed DNG files, in case you accidentally enabled them when converting with DNG Converter. DNGStrip doesn't change the actual image data in any way. DNGStrip is also pretty fast; its speed will likely be limited by the speed of the storage drives used. It also keeps the files standard, and therefore compatible with both Adobe Camera Raw and Resolve.
You can easily predict the gains of running DNGStrip over your DNG Converter-compressed footage. DNGStrip will generally reduce frames by the same number of bytes, provided they have the same aspect ratio. For example, Blackmagic Cinema Camera raw frames, Digital Bolex raw frames and Canon Magic Lantern 16:9 raw frames will all be around 114KB smaller (116,638 bytes, to be precise), and Canon Magic Lantern 2:1 frames will be around 102KB smaller after being processed by DNGStrip. So if your footage is homogeneous, the gains from running DNGStrip are proportional to the number of frames processed. The aforementioned Canon Magic Lantern 2:1 footage (a total of 233,516 frames) was reduced by DNGStrip from 399GB to 377GB, which is 53.1% of the original 710GB size. Quite a serious reduction, considering there is no loss of quality whatsoever compared to the original footage.
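Since the per-frame saving is constant for a given aspect ratio, the prediction described above is just bytes-per-frame times frame count. A trivial sketch of that arithmetic:

```cpp
#include <cstdint>

// Predicted total savings from DNGStrip: a constant per-frame saving
// (for one aspect ratio) multiplied by the number of frames.
inline uint64_t predictedSavingsBytes(uint64_t bytesPerFrame, uint64_t frames)
{
    return bytesPerFrame * frames;
}
```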
#########################################################
dcpTool (maybe useful; cross-platform; uses the Adobe DNG SDK; also works on Linux): http://dcptool.sourceforge.net/Introduction.html A command line GPL utility for processing DNG Camera Profiles (.dcp files). DCP files are an open standard for defining the color rendition of digital cameras, and are used by various Adobe products including Lightroom and Photoshop/Camera Raw.
Camera profiles define how a raw image is rendered by image processing software. Specifically, they contain a definition of exactly what the color of a particular pixel should be, relative to the raw data in the original image. Previous generations of Adobe's image processing software (Photoshop/Camera Raw and Lightroom) had camera profiles for a wide variety of cameras embedded within them. The latest generation of this software, however, has separated these profiles out into DNG Camera Profiles. Although called DNG Camera Profiles, they actually apply to any raw file that an Adobe product loads.
DNG Camera Profiles in effect provide a recipe for getting from the raw data in a file to real colors on a screen or printer.
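At the heart of that "recipe" is (among other data) a 3x3 matrix mapping camera raw RGB into a connection space such as XYZ. A minimal sketch of applying such a matrix; the values used in the test are purely illustrative, as real ColorMatrix/ForwardMatrix coefficients come from the DCP or DNG tags.

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<Vec3, 3>;

// Apply a 3x3 profile matrix to a camera-RGB pixel: plain row-times-vector
// multiplication, as used for camera RGB -> XYZ conversion.
inline Vec3 applyMatrix(const Mat3& m, const Vec3& v)
{
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}
```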
dcpTool can do five things:
1. Decompile binary format DCP files into a text-based XML format. This XML file can then be studied and/or edited with a text editor or a specialized XML tool.
2. Compile XML format DCP files into binary files that can be used with any DCP-aware software.
3. Extract DCP profiles that are embedded in DNG files into a separate DCP file.
4. Make a profile invariate. An invariate profile won't cause tint changes when you make adjustments to exposure settings.
5. Untwist a profile. An invariate profile won't change color when you adjust exposure settings, but it still has hue twists embedded within it. Untwisting a profile removes all hue twists completely.
#########################################################
Grain by Roald Christesen: http://vimeo.com/48886514 He has written his own software and wanted to release it in 2012, but that apparently did not happen. It may be good to ask him for help, as he has been through many of the things OpenCine will be facing. The email in the Vimeo link is dead; I have not yet tried to contact him via Vimeo. He also has an account at the German slashCAM forum: http://forum.slashcam.de/profile.php?mode=viewprofile&u=31133
He is even building his own camera: http://www.slashcam.de/news/single/GRAIN--digitale-16mm-Raw-Kamera--Made-in-Flensburg-10886.html
Hi. Sorry for the late reply, but it's pre-Christmas time and, as usual, my daily work got more stressful because of it, so progress has slowed down at the moment. But I'm still on it nevertheless, and if not coding, then thinking about structuring and architecture, writing some small prototypes to check my ideas, etc.
As always, thanks, it's a great collection of information. My latest progress included playing back raw Bayer data which I got from LibRaw (CPU usage about 2%-4% on a Dell Latitude E6530, 1920x1080). What isn't implemented yet, and what I will test as soon as I have some free time, is OpenCL demosaicing. I got OpenCL working under Linux and have to look into it more thoroughly.
OpenFX/TuttleOFX is also on the radar. But OC isn't there at the moment; besides the missing UI generation for plugins and the processing pipeline, I'm still not done structuring the basic architecture to separate views from processing, and a cleanup of the code is needed to get a fresh start on new features. I committed some really ugly parts of the source code.
Edit: I'm trying NOT to reinvent the wheel, but many things on that topic are not always portable or maintained.
Great to see so much effort/energy at work here :)
Keep it up!
<<Trying to NOT reinvent the wheel Yeah, no problem; I have the info, you decide what's usable in OpenCine.
<<missing UI generation for plugins or processing pipeline Again, maybe the Natron approach/code/people could be helpful here, as this is what they tackle in terms of OFX, and it's fresh, portable, recently written code. You could also look at the ButtleOFX people, which is another student/mentor-driven, actively pursued OFX host project. http://libregraphicsworld.org/blog/entry/buttleofx-goes-alpha
<<I got OpenCL working under Linux and have to look into it more thoroughly. Nice! I am trying to play with the above-mentioned ArrayFire, which is portable, claims to be hardware-neutral, and lets you switch between CUDA and OpenCL without changing your code. It also states it is as fast as or faster than even handwritten kernels. Intriguing enough for me to eventually package it up for openArtist.
Two other things have also come to my mind:
Interaction with the old "grand dames" of Linux turnkey systems: Autodesk Flame, Smoke et al., also SGO Mistika (with its cheaper sibling mambaFX), Baselight etc. have no DNG support (yet), so good support for outputting to DPX (with its timecode features) and OpenEXR would at least be required. OpenImageIO does this, as does GraphicsMagick: http://www.graphicsmagick.org/motion-picture.html It has to be mentioned here because they claim to support the complete DPX specification.
Then there is also Movit: http://movit.sesse.net/ High-performance, high-quality video filters for the GPU, which to my knowledge are available as a separate library, but are also integrated into MLT/Kdenlive-git/Shotcut. Movit does all the processing on the GPU with GLSL fragment shaders. The library works in a linear color space with floating point precision, can do basic color space conversions (e.g. sRGB to XYZ and back) and mix sources. Currently supported GPU-based video filters include scaling, blur, sharpening, overlaying, color grading, and more. So there is some momentum here to provide functionality which is not written for only one program. http://libregraphicsworld.org/blog/entry/introducing-movit-free-library-for-gpu-side-video-processing
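For reference, working "in a linear color space" as Movit does means undoing the sRGB transfer curve first. A sketch of the standard sRGB-to-linear conversion (IEC 61966-2-1) for values in [0, 1]:

```cpp
#include <cmath>

// Standard sRGB electro-optical transfer function: converts a non-linear
// sRGB value in [0, 1] to linear light, the space in which libraries like
// Movit do their filtering and mixing.
inline float srgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}
```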
MLT itself is definitely also worth a look, as there are so many things you can do with it. http://www.mltframework.org/bin/view/MLT/Features (HD-SDI in/out with Blackmagic hardware jumps to mind!)
<<Great to see so much effort/energy at work here :) :) Should we put it in the wiki?
With "reinventing the wheel" i meant that i prefer to use existing libs if available and also possible to compile without much hassle on Win/Linux. On the other hand fewer dependencies means better portability as some projects tend to bring a lot of things with them which aren't necessary for OC. I tried OpenImageIO, but had trouble to get it build. This happened also with some other libs as well. Another task which has to be done in the course of OC development.
All these infos !!!have!!! to be moved to wiki, as they are a really good reference to look into. Can you do it? What i haven't done at the moment is a proper roadmap. This would set some goals and make planning easier.
ArrayFire and Movit look nice. For me it doesn't matter which library is used in the end, but the knowledge which one has to acquire is important. At the moment I'm trying to understand how to process the Bayer pattern with a 2x2 matrix and generate an array with 3 float values (RGB) per pixel out of it. Multi-threaded processing is also important. Later it can be easily ported; I (personally) have no problem with refactoring a lot of things.
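The 2x2 approach described above can be sketched as a "superpixel" debayer: each RGGB quad becomes one RGB output pixel at half resolution, with no interpolation. This assumes an RGGB layout; a real implementation would read the CFA pattern from the clip's metadata.

```cpp
#include <cstdint>
#include <vector>

struct Rgb { float r, g, b; };

// Half-resolution "superpixel" debayer: every RGGB 2x2 quad becomes one RGB
// output pixel, the two greens are averaged, no interpolation is done.
// maxValue is the sensor's white level used for normalization to [0, 1].
inline std::vector<Rgb> debayerSuperpixel(const std::vector<uint16_t>& bayer,
                                          int width, int height,
                                          uint16_t maxValue)
{
    std::vector<Rgb> out;
    out.reserve((width / 2) * (height / 2));
    const float scale = 1.0f / static_cast<float>(maxValue);
    for (int y = 0; y + 1 < height; y += 2)
        for (int x = 0; x + 1 < width; x += 2)
        {
            const float r  = bayer[y * width + x] * scale;           // R
            const float g1 = bayer[y * width + x + 1] * scale;       // G
            const float g2 = bayer[(y + 1) * width + x] * scale;     // G
            const float b  = bayer[(y + 1) * width + x + 1] * scale; // B
            out.push_back({r, 0.5f * (g1 + g2), b});
        }
    return out;
}
```

Each output row is independent of the others, so the outer loop splits naturally across threads, which fits the multi-threading goal mentioned above.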
DPX, MXF etc. are also on the radar. That's why the IDataProvider interface is there, but it hasn't been completed yet. OC is limited to DNG folders at the moment (without subfolders), but the import will be extended by a dialog window which scans folders with sub-folders and allows the user to select items to be imported. Afterwards one can manage clips: rename them, edit metadata and so on. How this will be accomplished, e.g. by selecting the required data type ("DNG folder", "MXF" ...) or automatically, hasn't been elaborated yet.
But as always: First things first. No need to rush.
Regarding the debayering process, there are multiple well-established algorithms: bilinear, AHD, VNG, etc. (http://en.wikipedia.org/wiki/Demosaicing)
I added a link on the wiki frontpage to this subsite: https://wiki.apertus.org/index.php?title=Open_Cine
I'm only trying to understand the de-Bayering to know the whole process involved, not trying to develop some new algorithm. It's kind of a personal thing, to get deeper knowledge about the things I like and do.
Some material that might be interesting: http://www.stark-labs.com/craig/resources/Articles-&-Reviews/Debayering_API.pdf source code of dcraw: http://www.cybercom.net/~dcoffin/dcraw/ source code of darktable: https://github.com/darktable-org/darktable
Really interesting stuff. I have also found some useful docs and links and will add them later, as I'm at work now.
Edit: The topic is so interesting that I cannot leave it easily. :D It will move to the wiki ASAP.
https://www.informatik.hu-berlin.de/forschung/gebiete/viscom/teaching/media/cphoto10/cphoto10_03.pdf http://www.arl.army.mil/arlreports/2010/ARL-TR-5061.pdf
I was previously able to use the OpenCV de-Bayering API successfully for 8-bit and 16-bit images, using AHD and VNG as I recall. Let me know if that would be of use and I can dig up the code. I was working on performing the XYZ calibration transforms we were getting from Argyll and converting the final output to sRGB for image previews (for example on-camera previews).
-Gabe
On Mon, Dec 1, 2014 at 3:56 AM, Sebastian Pichelhofer < notifications@github.com> wrote:
Some material that might be interesting:
http://www.stark-labs.com/craig/resources/Articles-&-Reviews/Debayering_API.pdf source code of dcraw: http://www.cybercom.net/~dcoffin/dcraw/ source code of darktable: https://github.com/darktable-org/darktable
How was the performance? I'm aiming for realtime, or at least fast pre-processing of pre-loaded data in RAM. That's why I'm trying to get as much as possible done by the GPU.
<<I tried OpenImageIO, but had trouble to get it build. Strange, this should normally be pretty straightforward; the OIIO guys are also pretty fast at fixing such things, as this is production code used by the big VFX companies. For Ubuntu Linux, there is a PPA: https://launchpad.net/~irie/+archive/ubuntu/openimageio
<<Another task which has to be done in the course of OC development. I could help with that. Tell me what you need. I will start with the above-mentioned ArrayFire Debian packages.
<< !!!have!!! to be moved to wiki If Sebastian gives me the rights to do so, I will.
Quote: if sebastian gives me rights to do so, I will.
Mr. Pichelhofer your help is needed! ;)
I'll respond about the libraries when I've had a look at the project after work. My files are on another system at the moment (an external HDD with a Linux Mint partition).
Sure you can move it to the wiki; it's a public wiki, nobody needs my personal permission to add/change something :)
Should we move the whole content of the "local" GitHub wiki to the apertus wiki? I'm not really satisfied with Markdown and the missing features on GitHub.
yes please, then everything is in one place
Alright. I will create a new issue for the task.
It depends on the desired image size and quality. The code is written in C or C++, so it should be pretty fast. You can switch algorithms to trade off between quality and processing time.
-Gabe
On Mon, Dec 1, 2014 at 6:34 AM, BAndiT1983 notifications@github.com wrote:
How was the performance? I'm aiming for realtime or at least fast pre-processing of pre-loaded data in RAM.
It would be nice if you could provide the code. All contributions are important in one way or another. Even getting the theory of things like VNG/AHD helps a lot.
I have no actual plan about further libs at the moment; I have to make some more progress before doing research about the workflow. You could write down your experience with ArrayFire and other libs: possible build problems, pitfalls, or even tricks.
Or maybe think about possible workflows and how the user should accomplish things. For example, starting with a "Manage" layout where the user imports or manages different clips, then a "Process" layout where basic effects/grading get applied, and further to an "Export" layout, which explains itself. Intuitive is the keyword.
Edit: I haven't used video-related software, like Premiere, for a long time. That's why I lack some knowledge about today's progress and needs on that topic.
will try to wrap my head around the libraries..
<<Or maybe think about possible workflows What first comes to my mind is this now defunct blog, blendervse:
http://web.archive.org/web/20140606202603/http://blendervse.wordpress.com/ This guy really worked into the topic of processing raw files FLOSS-wise. The blog itself is a valuable resource, not only for dealing with the Blender video sequence editor: it also has numerous articles about colorspace conversion, retaining the best video quality, and FLOSS raw VIDEO workflows, e.g. http://web.archive.org/web/20131112191042/http://blendervse.wordpress.com/2013/03/21/darktable-for-video/
He also mentions and uses AviSynth, which brings me to its cross-platform successor, VapourSynth (http://www.vapoursynth.com/), which should also be considered for a review. It uses Python as the scripting language and C++ for the core, with numerous plugins already ported over from AviSynth. This one is growing into a really good scriptable video editor and a real AviSynth successor, as most of the really cool and important libraries are now ported; and anyone who knows AviSynth knows what this tool is capable of in terms of features and quality, combined with the fully scriptable workflow... WinXP support is now gone, but given that no one should use it any more security-wise...
<<video related software Well, Premiere is not really what I would choose as an example here. I think it would be better to look at the tools which are more direct competitors for OC, like Pomfort's tools, as they are made for digitalBolex (http://kurtlancaster.com/06/01/2014/bolex-lightpost-from-pomfort-a-quick-and-powerful-color-grading-tool/), or Sony's freeware raw tools (http://www.sonycreativesoftware.com/rawviewer), even RED's REDCINE-X PRO (https://www.red.com/downloads?category=Software&release=final) for finishing the r3d files, or the tool from Arri, Assimilate Scratch Play, or cliptoolz color (http://www.cliptoolz.com/color.html).
Colorgrading-wise, apart from Resolve, the SGO mambaFX tools (http://www.sgo.es/mambafx/, free eval version available) are quite sophisticated; they stem from the $100,000+ Mistika turnkey system.
Import-wise, I think Rapid Photo Downloader (http://www.damonlynch.net/rapid/) is quite intriguing. While not cross-platform, it's packed with many useful features, and, made by a photographer for photographers, it could be a good example of how to deal with multiple media imports at the same time.
Export-wise, the now crowdfunded MOX codec (https://www.indiegogo.com/projects/mox-file-format) should be kept in the bird's-eye view.
Hm, that's plenty of info and really interesting. Have to look into that after work, but it will take much more time to analyze the info and get the roadmap done. Yesterday i started porting the wiki over to apertus.
Just a note: I like some GUI elements of the tools you listed. Good to get some inspiration. But for OC we should think about some "unique characteristic" (if possible) for UX/UI in the future. I know, not an easy task, because there are so many complete video processing software packages out there.
I learned about MOX some weeks ago and mentioned it to Sebastian then. We definitely shouldn't lose sight of it, and should gather as much info as possible to be prepared to implement import/export.
By the way: Do you know some free OpenFX plug-ins? I need some to test the integration later, but i found only commercial ones. Simple ones would also do for tests. Maybe a dumb question, and i should just look into the OpenFX or TuttleOFX source folders.
<<"unique characteristic" me wants too!. The OC Mockups with the stack processing smell like nuke/natron for my taste. Nevertheless Stack/Node based interface would be super awesome.
<<Do you know some free OpenFX plug-ins? Sure, there is the extensive list of the Natron guys: http://devernay.free.fr/hacks/openfx/; within the https://github.com/devernay/openfx-misc collection there should be something simple to play around with.
TuttleOFX is a big block of plugins with a long compile time, though they provide ready-to-use binaries too (openSUSE 12.1, gcc 4.6.2; the boost version is the problem with these on Ubuntu. Natron also ships with it as an all-in-one package, with the other plugins mentioned above included).
I would stick to "Stack" first. Let's see it as a darktable for video processing for now. Afterwards it can be changed and extended. I should be doing things more pragmatically, which would accelerate the whole progress, but that often ends in ugly/unmaintainable code.
Maybe you, and other people too, will get some interesting ideas or wishes about the UI. A not-so-good example of a UI (for me at least): I forgot the name of the video software with nodes shown as circles; it seems interesting, but irritates me just from looking at the screenshots.
Edit: Googled it, and found that you mentioned it already -> Autodesk Smoke
I like node-based interfaces, as i think they are very flexible, but... i will try to stay away from them in OC as long as possible to keep things simple. :D Just a note: what is very important for such interfaces is a grouping feature, where you group nodes and so get a new node with user-defined in- and outputs, like Blender does it. Otherwise it gets very messy and confusing. But nevertheless your links are very interesting, as always. I lose myself in numerous ideas for OC, have to write them down asap. I'm doing some sequence diagrams at the moment to focus on the next tasks for clip import.
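That grouping idea is easy to pin down in code. A hypothetical sketch (class and socket names are mine, not OC or Blender code) of how a group collapses a sub-graph behind user-defined sockets:

```python
class Node:
    """A processing node with named input and output sockets."""
    def __init__(self, name, inputs=(), outputs=()):
        self.name = name
        self.inputs = list(inputs)
        self.outputs = list(outputs)

class GroupNode(Node):
    """Wraps a sub-graph and exposes only user-chosen sockets.

    From the outside the group behaves like a single node, which is what
    keeps large graphs readable; the inner nodes stay hidden until the
    user expands the group.
    """
    def __init__(self, name, children, exposed_inputs, exposed_outputs):
        super().__init__(name, exposed_inputs, exposed_outputs)
        self.children = list(children)

# A tiny "Grade" group built from two inner nodes:
exposure = Node("Exposure", inputs=["image", "stops"], outputs=["image"])
curves = Node("Curves", inputs=["image", "curve"], outputs=["image"])
grade = GroupNode("Grade", [exposure, curves],
                  exposed_inputs=["image", "stops", "curve"],
                  exposed_outputs=["image"])
```

The main graph only ever sees `grade` with its three inputs and one output; the "messy" wiring lives inside `grade.children`.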
But it's another thing which i would stash away for the future at the moment. In two weeks i have three weeks of holidays and will try to get progress rolling and not just creeping along like now.
Could you move your info from here to apertus Wiki, if you haven't started to do so?
And one thing which we should consider, is discussing such things here: https://www.apertus.org/forums/viewforum.php?f=5&sid=b4d3b1eddff043144fcdee136059765d
I have nothing against discussing here, but it would just pollute the issue too much, which should stay focused on ML and FFmpeg related stuff. A forum is better suited for our ideas on progress.
Edit: Seems like nice tool to use for UI discussions: https://moqups.com Edit2: Got another tool, as i don't like to spread my personal data everywhere on the internet, Pencil: http://pencil.evolus.vn/
Tried to use OpenImageIO as a LibRaw alternative for simple DNG loading, but all i get is the thumbnail.
It's a pity that RawSpeed isn't originally packaged as a library. There was also a problem getting it compiled on Windows.
Update: Now i have finally wrapped my head a little bit around loading real raw data from LibRaw. My prototype reads a DNG file, moves the values into the right positions according to the 2x2 Bayer pattern (R, G or B; unsigned short at the moment, thinking about float or at least half-float) and writes, thanks to OIIO, the data to a TIFF file to get some preview. No processing is done at the moment, at least not in the prototype. Next step is to get the playback working again in OC and then use the tests from the prototype to get more progress done.
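The "move values into position" step can be pictured like this; a numpy sketch assuming an RGGB layout (the actual prototype is C++ on LibRaw's unsigned short data, so this is only an illustration):

```python
import numpy as np

def split_bayer_rggb(mosaic):
    """Scatter a 2x2 RGGB Bayer mosaic into three full-size channel
    planes. No interpolation is done: positions the sensor never
    sampled simply stay zero, which is enough for a raw preview dump."""
    r = np.zeros_like(mosaic)
    g = np.zeros_like(mosaic)
    b = np.zeros_like(mosaic)
    r[0::2, 0::2] = mosaic[0::2, 0::2]  # top-left of each 2x2 block
    g[0::2, 1::2] = mosaic[0::2, 1::2]  # top-right
    g[1::2, 0::2] = mosaic[1::2, 0::2]  # bottom-left
    b[1::2, 1::2] = mosaic[1::2, 1::2]  # bottom-right
    return r, g, b

# Stacking the planes (e.g. np.dstack((r, g, b))) gives the kind of
# 3-channel buffer that can be handed to OIIO for a quick TIFF preview.
```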
nice!
Just a note that i'm still alive ;) and on my way to get playback working again.
Wrecked my LinuxMint recently, don't know how. Luckily the installation and configuration can be done much faster than setting up Windows with Visual Studio etc.
Has someone already any fresh ideas for UI or workflow? Or any suggestions?
I have lots of ideas but figured its better to keep them to myself until the foundation is solid and we can build on top of it :)
Could you make some screenshots and a feature checklist of what is working already and what is planned in the near future? That would help a lot of people to understand where we are and where we are going, I think.
I already thought about a roadmap, but had no time because of my daily work. Hm, i'll create a "Features" page in the apertus° wiki for it.
Edit: https://wiki.apertus.org/index.php?title=OpenCine.Features#Features.2FRoadmap
At the moment i'm still working on things which are mostly relevant for OC development, to get a more or less solid foundation, and will then proceed to add features. The next step after playback is to get two buffers per frame, original and processed data, which brings further things to estimate, like RAM usage and what an acceptable limit is. It's an important step before doing simple processing.
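The RAM estimate for the two-buffer idea is simple arithmetic. A sketch with assumed numbers (4K frame, 16-bit samples, both buffers stored as 3-channel data; all parameters here are my own placeholder assumptions, not decided OC values):

```python
def frame_ram_bytes(width, height, channels=3, bytes_per_sample=2, buffers=2):
    """Uncompressed bytes needed to keep one frame resident with
    `buffers` copies (e.g. original + processed)."""
    return width * height * channels * bytes_per_sample * buffers

# 4096x2160, 16-bit RGB, original + processed buffer:
per_frame = frame_ram_bytes(4096, 2160)   # 106168320 bytes = 101.25 MiB
# A 24-frame preview cache would already need ~2.4 GiB,
# which is why an acceptable limit has to be chosen early.
cache_mib = 24 * per_frame / 2**20
```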
Also still thinking about how to use OpenFX/TuttleOFX and also OpenCL for parallel processing. Too many ideas atm which i have to prioritize.
Merry Christmas!
I got the playback working again in the meantime. Not everything is really cleaned up at the moment (at least to my liking), e.g. UI elements still have default/not very descriptive names in the code, and there are chunks of commented-out, obsolete code, but that's what i will tidy up while implementing further features.
I will do another video to show the current progress soon, but this time the link to the YT video will go to the apertus° wiki. And the information from the GitHub wiki will also be moved there step by step.
Edit: And yes, i know that the code isn't beautiful in some parts at the moment. ;) Edit 2: I also haven't used cppcheck much lately; together with the ugly code (at prototype level) that may result in memory leaks or similar atm.
Great progress!
sorry, was not able to follow lately. Shall I reorganize this here into the wiki? Anything urgent?
Hi,
it would help a lot if you could move the info to the wiki. Nothing really urgent at the moment, as i'm trying to get some tests related to processing done. Hope to get some results this weekend, because there is not much time on work days.
One thing that i'm considering is switching to Windows for development. The reason is simple: the tools under Windows are better, e.g. CodeXL, NVIDIA Nsight, Visual Studio etc. Linux is great too, but the driver for Intel HD cards doesn't include some things which are present under Windows. It would also improve the quality of the code and fix some bugs which GCC doesn't recognize as such (there is also the other case, where VS doesn't complain but GCC does).
You could look into the issues; other entries are in the lab at lab.apertus.org. Phabricator is very good as a bug tracker, as i can assign tasks to commits. This helps to check the history and resolve bugs in the future.
<< it would help a lot if you would move the info to Wiki. ok
<<switch to Windows as you wish ;-)
quick googling shows: Nsight™ Eclipse Edition for Linux and Mac OS.
AMD CodeXL is available both as a Visual Studio® extension and a standalone user interface application for Windows® and Linux®.
<<driver for Intel HD sure they are behind the Windows driver; at least you could try these for bleeding edge: https://01.org/linuxgraphics/downloads
<<which GCC doesn't recognize as such Not to forget, I've read that LLVM is quite good at code analysis...
If you wish to use Visual Studio, then why not; you work with it on a daily basis in your job, I guess. I just assume that switching to Windows will have some penalties too. Valgrind and dtrace (https://github.com/dtrace4linux/linux) come to my mind... Which distro do you use currently?
<<You could look into the issues, also other entries are in the lab at lab.apertus.org. Phabricator is very <<good as bug tracker as i can assign task to commit. This helps to check the history and resolve bugs in <<the future.
will look into that.
My hardware differs between laptop and desktop PC. The laptop has an Intel HD 4000 (3rd gen), the desktop an NVIDIA 660. While the NVIDIA drivers are good, the Intel driver, or better said Mesa, is not up to the level of the Intel driver for Windows. gDebugger is no more, and CodeXL as its successor still does not run on a pure Intel HD graphics card, but it's no problem to run it on NVIDIA. I'm using an external HDD with a Linux partition for development, by the way, so i can switch between different machines easily, and usually Linux loads the right driver at startup.
I've already been using the bleeding edge repository for some time and updates keep coming; it's better than the driver from the default repo, which gave me graphics glitches in some Qt/OpenGL examples.
You are right, i'm working every day with VS, but that's not the main point for the switch, as i'm comfortable with QtCreator too. Every tool has advantages and disadvantages, depending on the task. It's more for checking the CMake scripts and library requirements under Windows; e.g. there is an issue with hardcoded paths.
I'm just going to alternate between Windows and Linux (don't want to think about MacOSX for now, but there are possibilities...) to verify portability and to fix bugs which differ between compilers.
There are tools like Visual Leak Detector or HeapInspector which are really helpful for tracing memory leaks, and VS is also full of different debugging tools.
LinuxMint 17.1 is my current distro for development.
Edit: Checked the newest CodeXL; it still does not work under Linux with Intel HD: "OpenGL module not found".
I guess it would also solve the "CinemaDNGDecoder" problem, as FFmpeg has supported .mlv for a few months now.
I have collected some stuff regarding DNG and Linux, see http://www.magiclantern.fm/forum/index.php?topic=9335.0 for an overview of all the tools available.
Especially MlRawViewer and raw2cdng could come in handy; maybe some problems you guys face are already solved there.