micro-manager / mmCoreAndDevices

Micro-Manager's device control layer, written in C++

API needed for Camera triggers (Chapter 1) #84

Closed nicost closed 2 years ago

nicost commented 2 years ago

Cameras need a trigger to start exposing (or to start a series of exposures). Triggers can be provided through software, using a clock internal to the camera (internal trigger), or by an external trigger source. Each of these triggers can take various forms: software triggers can start a single exposure or a sequence of exposures (driven by the internal clock), external triggers can trigger on the rising edge of the signal or the falling edge, and sometimes exposure continues as long as the trigger is active (sometimes called "bulb" mode). It would be wonderful to provide an API so that the same call can be used for cameras from different vendors.

In addition, there is some interplay with the SnapImage and StartSequence API calls that the MMDevice interface mandates. For instance, it does not make sense in the Sequence functions to send a software trigger for each image in the sequence. Internal triggering (or software-started internal triggering) is much more useful to the end user. On the other hand, in the SnapImage function, software triggering is almost always desired over internal triggering (since response times using software triggering are lower and much more predictable). We should offer some kind of guidelines to device adapter authors about which choices work best.

marktsuchida commented 2 years ago

Here is one possible model for the standardized settings, though I haven't yet tested it against actual camera APIs to make sure it will work with the widest possible range of devices.

Some cameras have multiple trigger input pins that can be mapped to the start/frame trigger. Let's assume that such mapping is done by device-specific properties.

Of the 4 possible combinations of Start and Frame trigger source, the 3 that are more common are (Start = None, Frame = Internal), which is the default, (Start = None, Frame = External), and (Start = External, Frame = Internal). For cameras that can use separate input pins for the start trigger and frame trigger, (Start = External, Frame = External) also makes sense; for cameras with a single input pin, this setting should behave like (Start = None, Frame = External).
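For concreteness, the combination rules above could be sketched like this (all names here are hypothetical illustrations, not an API proposal):

```cpp
#include <cassert>

// Hypothetical names, only to make the setting space concrete.
enum class StartSource { None, External };
enum class FrameSource { Internal, External };

struct TriggerConfig {
    StartSource start;
    FrameSource frame;
};

// For cameras with a single trigger input pin, (Start = External,
// Frame = External) cannot be wired to two separate pins; it should
// behave like (Start = None, Frame = External), as described above.
TriggerConfig normalizeForSinglePin(TriggerConfig c) {
    if (c.start == StartSource::External && c.frame == FrameSource::External)
        c.start = StartSource::None;
    return c;
}
```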

An alternative to the "start trigger" vs "frame trigger" distinction would be "trigger" vs "clock", which would be similar to the DAQmx terminology, but this is far from obvious, and "clock" has a different meaning for image sensors, i.e., vertical and horizontal clocks.

I would propose that "Snap" should work identically to a 1-frame sequence acquisition. Any other rule would seem impossible to memorize.

Many cameras support a software frame trigger. It is unfortunate that we don't support this, but it is hard to change this now in Micro-Manager, so I agree that the frame trigger source should be Internal or External only, at least for now. Camera adapters can internally use software triggers to improve Snap performance, as some already do; this is not visible through the MMCore API (and therefore doesn't enter into the model proposed here).

I propose calling the default start trigger source "None", rather than "Software", because our device interface does not have the capability to separate the (slow) acquisition setup from a (fast) software start trigger. But we could imagine adding, in the future, an MMCore function to send such a software trigger, so we should reserve the term "Software". Instead of "None", we could also call it "Automatic" (in the sense that the start trigger is automatically/implicitly sent after setting up the acquisition).

A start trigger type of Level would mean "keep acquiring frames while the trigger is at active level". I'm not sure how common this is; we can probably omit the Start trigger type entirely, and always consider the start trigger to be an edge trigger.

The terms "edge-triggered" and "level-triggered" are used in the context of flip-flops, processor interrupts, and (less relevant) OS kernel I/O readiness APIs. One could argue that the meaning here differs from the meaning in the flip-flop/latch context, but it is close to the usage in interrupt triggering. There may be a better pair of terms. ("Bulb" is quite correct for a level-triggered frame, but requires familiarity with traditional photography to understand, and we would need a different term for the start trigger type (if supporting); also, it is not clear what the antonym should be.)

For the trigger polarities, I think it makes more sense to let it be device-specific, because they only need to be configured once and will not be switched during an acquisition (unless somebody can think of a counterexample), and therefore rarely require programmatic control. The fewer things we formalize, the lower chance that we encounter a device that doesn't play well with our model.

Aside from the above model and terms, there is the question of whether to implement this as dedicated device member functions (like SetExposure()) or as properties with standardized names (like Exposure).

There are also additional parameters that could potentially be standardized, such as start trigger delay and frame trigger delay. Also the timeout for external trigger, which currently causes much grief.

Finally, it might be worth comparing with how these settings are standardized by industry standards such as GenICam SFNC (and maybe 1394 IIDC), although supporting all the possibilities is probably too much, and we have our own backward-compatibility considerations.

tomhanak commented 2 years ago

I like Mark's proposal. However, I cannot map all PVCAM triggers to it. The very generic design of triggers in GenICam is worth reading, at least. Here is what PVCAM offers and the naming it uses.

Trigger sources (from camera hardware point of view):

Trigger types (not all can be used with all trigger sources):

The statement "a Snap should work identically to a 1-frame sequence acquisition" is fully true for PVCAM. Unfortunately, Snap doesn't work this way in Micro-Manager. Nico wrote me once that MM expects the camera to always be ready for a Snap. That explains why there is no "PrepareForSnap" function in the camera API. The consequence is that whenever the user presses the Snap button, or an MDA is configured with a delay between frames, the device adapter calls the "setup acq." and "start acq." PVCAM functions in order to capture one frame. Although the camera could almost always be ready (if we snap frames with the same setup), the "start acq." PVCAM function e.g. pins memory buffers for DMA, which must be done again for every acquisition. So for PVCAM, doing 100 Snaps and collecting 100 frames in one sequence acquisition are two completely different things. A Software trigger with a sequence acquisition started once would eliminate the time needed to set up and start each snap acquisition, but that's not possible with the current MM API.
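The overhead Tom describes can be illustrated with a toy cost model (the names and numbers here are purely illustrative, not PVCAM measurements):

```cpp
#include <cassert>

// Toy model: each acquisition pays a fixed setup/start cost
// (e.g. pinning DMA buffers) plus per-frame exposure time.
// N snaps = N acquisitions of 1 frame each;
// one burst = 1 acquisition of N frames.
double snapTotalMs(int frames, double exposureMs, double setupMs) {
    return frames * (setupMs + exposureMs);
}

double burstTotalMs(int frames, double exposureMs, double setupMs) {
    return setupMs + frames * exposureMs;
}
```

With, say, 50 ms of setup per acquisition and a 10 ms exposure, 100 snaps cost far more than one 100-frame burst, which is exactly the gap a software frame trigger inside a long-running acquisition would close.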

jondaniels commented 2 years ago

Two trigger modes we use heavily that I don't yet see mentioned:

Having a camera API for setting the trigger mode would be very useful in the context of the MM1.4 ASIdiSPIM plugin and in the MM2 replacement being worked on now. For the MM1.4 plugin we created our own Java enum to implement the following modes in supported cameras:

marktsuchida commented 2 years ago

Link to MM1.4 ASIdiSPIM code: https://github.com/micro-manager/micro-manager/blob/svn-mirror/plugins/ASIdiSPIM/src/org/micromanager/asidispim/Data/CameraModes.java

henrypinkard commented 2 years ago

I'm going to try to synthesize all this info to produce a draft API, but before I do, a couple of things:

@jondaniels

  • "Light sheet" or "virtual slit" which really is a special case of edge trigger mode and involves camera shutter behavior. So maybe it should be excluded from the camera trigger API, but in general it would be useful to have an API to use for virtual slit operation to make things uniform across cameras.

Could you explain more about what this is? Is this somehow related to rolling shutter or TDI cameras?

@marktsuchida

I would propose that "Snap" should work identically to a 1-frame sequence acquisition. Any other rule would seem impossible to memorize.

Are you referring to specifically core.snapImage() and/or the snap() function in the camera API?

Aside from the above model and terms, there is the question of whether to implement this as dedicated device member functions (like SetExposure()) or as properties with standardized names (like Exposure).

I lean towards functions like SetExposure(), not because it is more naturally expressed that way than with properties, but because I think standardized properties are much less clear from a user perspective. Correct me if I'm wrong, but there's no clear way when using the application to see if a particular property is standardized or device-specific, right? So if you're just writing some code and have only one camera on hand, it's hard to know when setting properties whether those will generalize to other cameras. There's also the disadvantage that if you're just reading through the API docs there's no mention of property standardization. Am I missing anything here? Is there something in the Core that could be changed to clarify this?

jondaniels commented 2 years ago

@henrypinkard: regarding "virtual slit" mode: Yes it is related to rolling shutter and conceptually a bit like TDI cameras but definitely not the same. Many but not all high-end sCMOS cameras support such a "virtual slit" mode. Only a subset of rows are exposing at any instant, and every so often one row is read out and another row on the other side starts exposing so the number of exposing rows stays constant, but the exposing region moves across the sensor unidirectionally at a speed that is no faster (but optionally slower) than the row readout.

Different companies call it different things, but here are some links (in some cases you need to search within the document): Hamamatsu Lightsheet Readout Mode, pco Lightsheet Scanning Mode, Andor LightScan PLUS including FlexiScan, Photometrics Programmable Scan Mode.

Some characteristics that could be abstracted include directionality, shutter height in pixels, row interval in microseconds, and delay in microseconds. These would only apply when the camera is in the appropriate trigger mode which is a special case of per-frame edge trigger.

marktsuchida commented 2 years ago

I would propose that "Snap" should work identically to a 1-frame sequence acquisition. Any other rule would seem impossible to memorize.

Are you referring to specifically core.snapImage() and/or the snap() function in the camera API?

Yes. For example, if the trigger mode is set to external start-on-edge (or whatever it may be called), a sequence acquisition of length 1 should acquire one frame at the software-configured exposure, starting when the trigger arrives. I think snap should do the same.

Correct me if I'm wrong, but there's no clear way when using the application to see if a particular property is standardized or device specific, right?

Yes, at the moment, although such a way could be devised and implemented, if desired. For example, there is no reason why you can't have a property on the device side and a function on the application side (with the property also visible, allowing subscribing to change notifications -- although I don't know how important that is for the trigger settings). It would be important, in that case, to choose a property name that won't be used elsewhere (previously, or inadvertently in the future) -- perhaps with some special prefix -- and also to enforce rules about the type and allowed values at the time of property creation (or have a dedicated function to create the property).
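As a sketch of the "reserved prefix + rules enforced at property creation" idea (the prefix, class, and method names here are all hypothetical, not existing MMCore API):

```cpp
#include <cassert>
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical registry that enforces rules at property-creation
// time, rather than leaving standardization as a naming convention.
class StandardPropertyRegistry {
    std::set<std::string> names_;
public:
    // Require a reserved prefix so standardized names cannot collide
    // with existing (or future, inadvertent) device-specific names,
    // and require allowed values to be declared up front.
    void create(const std::string& name,
                const std::set<std::string>& allowedValues) {
        if (name.rfind("Std-", 0) != 0)
            throw std::invalid_argument(
                "standardized properties must use the reserved prefix");
        if (allowedValues.empty())
            throw std::invalid_argument(
                "allowed values must be declared at creation");
        names_.insert(name);
    }
    // This answers Henry's question: the application could query
    // whether a given property is standardized.
    bool isStandardized(const std::string& name) const {
        return names_.count(name) != 0;
    }
};
```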

henrypinkard commented 2 years ago

Correct me if I'm wrong, but there's no clear way when using the application to see if a particular property is standardized or device specific, right?

Yes, at the moment, although such a way could be devised and implemented, if desired. For example, there is no reason why you can't have a property on the device side and a function on the application side (with the property also visible, allowing subscribing to change notifications -- although I don't know how important that is for the trigger settings). It would be important, in that case, to choose a property name that won't be used elsewhere (previously, or inadvertently in the future) -- perhaps with some special prefix -- and also to enforce rules about the type and allowed values at the time of property creation (or have a dedicated function to create the property)

That makes sense. There would probably also need to be something on the device adapter side to compel device adapter authors to implement such properties. I think this would all be quite useful, but it is probably beyond the goal/scope of this issue.

marktsuchida commented 2 years ago

Yeah, I think an interface based on dedicated functions is reasonable here.

henrypinkard commented 2 years ago

Here is a potential abstraction. If this covers everything we're trying to do, I can write out a corresponding API.

To begin, I'll clarify terminology, which I've chosen mostly for consistency with GenICam, not based on the current Micro-Manager API (as will be described shortly).

Introduction

The important concepts are Frames, Bursts, and Acquisitions.

A Frame describes the acquisition of a single image, including a reset before exposure, exposure, readout, and optional delays before reset and after readout (often these will be 0):

frame

A Burst is a sequence of frames. Shown below they occur one after another, but, as described later, in some cases parts of frames can overlap:

burst

An Acquisition is a sequence of Bursts:

acquisition

In Micro-Manager's current terminology, Bursts are called "Sequence Acquisitions", which are distinct from the higher-level Acquisitions in Micro-Manager; the latter are functionally equivalent to the Acquisition described here. Because this is potentially confusing, I'll drop "Sequence Acquisition" and stick with Burst.

Acquisitions don't really figure into the design of this API--they are handled at a higher level of abstraction.

Types of triggers

A Trigger is a signal to take some action related to Frames/Bursts. There are 4 triggers defined in this proposed API. Every burst will require a source for each trigger.

BurstStart and BurstEnd triggers signal the start and end of a burst of frames.

burst_triggers

The FrameStart trigger signals the start of a frame. The ExposureEnd trigger signals the end of the frame's exposure (but not the end of the frame, since there is still readout).

frame_triggers

Sources of triggers

Triggers can come from software commands from computer to camera (Software), hardware TTL signals (External), or can be generated by the camera itself (Internal).

FrameExposureMode

Separate from the source of the trigger, there are 3 different modes for FrameStart and ExposureEnd triggers.

In Distinct mode, FrameStart and ExposureEnd triggers are separate events, and can in some cases come from different sources (e.g. external/internal).

In Sustained mode, which only makes sense with external FrameStart and ExposureEnd triggers, one long TTL pulse is used to signal the interval from frame start to exposure end. (This is also known as "strobe" or "bulb" or "level" triggering).

In Combined mode, the ExposureEnd trigger for frame N is the same as the FrameStart trigger for frame N+1. This covers @tomhanak's "Level pulsed" and @jondaniels' "synchronous"/"overlap". It seems to me this could make sense with either external or software FrameStart and ExposureEnd triggers, though they'd probably need to be of the same type (i.e. both external or both software).

frameexposuretypes

Examples of common scenarios

Below is a representative, though not exhaustive, list of example combinations. It would probably be useful to make a table of all possible combinations.

legend

Software triggered burst

Make an API call to start a burst, with the interval between frames determined by exposure and/or framerate properties and controlled by the camera's internal clock.

software_burst

Software triggered burst with TTL triggered frames

Make an API call to arm the camera to start a burst, with individual frames started according to received TTL pulses, and exposure end controlled by the camera

frame_ttl

TTL trigger a burst with internally triggered frames

An external TTL triggers the start of a burst, after which the camera controls frame starts and ends. The camera has been preprogrammed (through the API) to collect a certain number of frames (here 2).

burst_ttl

Bulb/Strobe triggers

Software trigger a burst, with exposure controlled by a sustained TTL pulse (i.e. FrameExposureMode == Sustained)

strobe

Synchronous

FrameStart for next image is identical to previous frame's ExposureEnd. (i.e. FrameExposureMode == Combined)

synchronous

Still left to figure out

marktsuchida commented 2 years ago

This is good progress.

If this covers everything we're trying to do, I can write out a corresponding API.

Actually, I feel that it would be a lot easier to answer this question once there is an API design written out. I don't think the functions and their constraints automatically follow from the terminology alone (or maybe I just don't see it).

As I mentioned the other day, it will be especially helpful to show how it corresponds to camera SDK APIs (and their enum constants), for those that have publicly-accessible documentation. And GenICam, which has a good model (even if not covering everything we want?) followed by many of the industrial/vision camera manufacturers.

In fact that could be more important than covering every possible triggering mode, because it's not a huge deal if we don't (initially) support some exotic mode only found in a few vendors. It is a big problem if the typical/common/standard modes take a lot of effort to implement in the camera adapters or have a confusing mapping to our API.

henrypinkard commented 2 years ago
////////////////////////////////////////////
// Call these methods first to describe the desired trigger mode 
////////////////////////////////////////////

// Control delays
int SetPreFrameDelay(double time_ms);
int SetPostFrameDelay(double time_ms);

// "external" (TTL pulse), "software" (API call),
// "internal" (Is this possible? Maybe if camera is always ready for a burst?)
int SetBurstStartTriggerType(const char * type);

// "external" (TTL pulse), "software" ("API call"), or "internal" (same question as above)
int SetBurstEndTriggerType(const char * type);

// "external" (TTL pulse),  "software" (API call), "internal" (camera handles based on exposure/delays)
int SetFrameStartTriggerType(const char * type);

// "external" (TTL pulse), "software" (API call, though unclear why you'd ever use it),
// "internal"  (based on camera.SetExposure(double ms))
int SetExposureEndTriggerType(const char * type);

//// Special conditions not covered by above settings
// "distinct" (start + end TTLs), "sustained" (bulb/strobe TTL), or "combined" (synchronous TTLs)
// Only applicable with "external" FrameStartTrigger/ExposureEndTrigger
int SetFrameExposureMode(const char * mode);

////////////////////////////////////////////
// Running a burst
////////////////////////////////////////////

// Arm the camera so that its ready to receive a BurstStartTrigger
// If the requested combination of trigger settings does not make sense or is not supported, throw an exception
// Needs to know the number of images if using "internal" BurstEndTrigger (i.e. camera stops its own burst)
// If numImages == -1, run until an explicit BurstEndTrigger from API call ("software") or TTL pulse ("external")
int PrepareForBurst(int numImages);

////////////////////////////////////////////
// Software Triggering
////////////////////////////////////////////
// These apply when using "software" triggers of the corresponding type
int SendBurstStartTrigger();
int SendFrameStartTrigger();
int SendExposureEndTrigger();
int SendBurstEndTrigger();

Using this API, camera.SnapImage() becomes:

camera.SetBurstStartTriggerType("software"); 
camera.SetFrameStartTriggerType("software");
camera.SetExposureEndTriggerType("internal"); // Determined previously by SetExposure(double exposure)
camera.SetBurstEndTriggerType("internal"); // Determined by numImages 

camera.PrepareForBurst(1); // 1 image

camera.SendBurstStartTrigger();
camera.SendFrameStartTrigger();
// ExposureEnd and BurstEnd "internally" triggered based on exposure time and numImages, respectively

As @tomhanak mentions, the current API doesn't allow you to split these different functions apart. Doing so explicitly will thus allow for high level code to be written that uses cameras more efficiently.

Live mode, which is currently:

camera.StartSequenceAcquisition(double ms);
camera.StopSequenceAcquisition();

becomes

// Assuming SetPreFrameDelay(double ms) and SetPostFrameDelay(double ms) 
// delays have already been set, which along with exposure control the frame rate
camera.SetBurstStartTriggerType("software"); 
camera.SetFrameStartTriggerType("internal");
camera.SetExposureEndTriggerType("internal"); // Determined by previous call to SetExposure(double exposure);
camera.SetBurstEndTriggerType("software"); // Go until API call to stop

camera.PrepareForBurst(-1); // Go indefinitely

camera.SendBurstStartTrigger();
//////////////////////////////
// Images are continuously acquired and pumped into buffer
//////////////////////////////
camera.SendBurstEndTrigger();

And GenICam, which has a good model (even if not covering everything we want?) followed by many of the industrial/vision camera manufacturers.

I think it covers everything + some more. This API is pretty close to being a subset of their model.

A subset of this proposed API also very closely corresponds to our current API, which should make transitioning not too difficult.

Comparison of current API and proposed new API

// Nothing does this in current API, may be implemented in properties
int SetPreFrameDelay(double time_ms);
int SetPostFrameDelay(double time_ms);
int SetBurstStartTriggerType(const char * type);
int SetBurstEndTriggerType(const char * type);
int SetFrameStartTriggerType(const char * type);
int SetExposureEndTriggerType(const char * type);
int SetFrameExposureMode(const char * mode);

// Current API
int PrepareSequenceAcquisition();
// New API
int PrepareForBurst(int numImages);

// Current API
int StartSequenceAcquisition(int numImages);
// New API
int SendBurstStartTrigger();

// Current API
int SnapImage();
// New API
int SendFrameStartTrigger();

// Nothing does this in current API
int SendExposureEndTrigger();

// Current API
int StopSequenceAcquisition();
// New API
int SendBurstEndTrigger();

nicost commented 2 years ago

// Current API
int SnapImage();
// New API
int SendFrameStartTrigger();

Do you propose to make the current SnapImage into a special case of a Burst? The current SnapImage function blocks for the duration of the exposure to enable a simple way to synchronize light sources with camera exposure, and it also has a dedicated method to retrieve the pixel data ("GetImageBuffer"). How will that work with the proposed API? More generally, how will synchronization with light sources and data retrieval work with this new API?

henrypinkard commented 2 years ago

Do you propose to make the current SnapImage into a special case of a Burst?

In this paradigm it could be a burst of 1 frame or it could be 1 frame in a longer burst. I'm not sure which one it makes sense to map to the default implementation of SnapImage. It sounds like this currently varies by device adapter: @marktsuchida mentioned something about one of them always waiting for a signal to snap (i.e. a BurstStartTrigger has already been sent), and then switching modes when an internally triggered burst is needed.

The current SnapImage function blocks for the duration of the exposure to enable a simple way to synchronize light sources with camera exposure, and it also has a dedicated method to retrieve the pixel data ("GetImageBuffer"). How will that work with the proposed API? More generally, how will synchronization with light sources and data retrieval work with this new API?

Upon further consideration, it probably makes sense to not have these be two separate functions:

int SendFrameStartTrigger();
int SendExposureEndTrigger();

I can't think of why you'd ever actually need to send an ExposureEndTrigger, so that could be dropped from the API, and SendFrameStartTrigger() could block until exposure end. This might actually be an improvement over SnapImage(), because that blocks until after readout/copy of data into an image buffer (I think?), so you could potentially turn off light sources even faster. This change would mean that an image isn't necessarily ready after SendFrameStartTrigger() returns, so we'd probably also want the core function for getting the image to block until an image is ready, in order not to introduce confusing behavior for people still using SnapImage.
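The timing split being proposed can be sketched with a toy model (all names here are hypothetical): the trigger call would return at exposure end, while the image only becomes retrievable after readout.

```cpp
#include <cassert>

struct FrameTiming { double exposureMs; double readoutMs; };

// SendFrameStartTrigger() would block only for the exposure,
// so light sources can be switched off as early as possible...
double triggerReturnsAtMs(const FrameTiming& t) {
    return t.exposureMs;
}

// ...while a blocking image-retrieval call would not return until
// readout has also completed.
double imageReadyAtMs(const FrameTiming& t) {
    return t.exposureMs + t.readoutMs;
}
```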

As for readout, I think this would be a good opportunity to rectify the often confusing fact that the snap buffer and circular buffer are separate entities. So I would propose a unified mechanism where frames acquired through any means are placed in a common buffer, with a dedicated thread in the core constantly polling for new images whenever a burst is active. I would be curious to hear others' thoughts about whether this would lead to problems and whether there are better alternatives. This should probably be considered in tandem with improvements to the circular buffer/multi-camera behavior https://github.com/micro-manager/mmCoreAndDevices/issues/168.
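A minimal sketch of the "one common buffer" idea: a thread-safe queue that both the Snap path and burst path could feed (overflow policy, multi-camera bookkeeping, and the polling thread itself are deliberately omitted; see #168). Names are hypothetical.

```cpp
#include <deque>
#include <mutex>
#include <optional>
#include <vector>

using Image = std::vector<unsigned char>;

// Thread-safe FIFO that every acquisition path pushes into,
// replacing the separate snap buffer and circular buffer.
class CommonImageBuffer {
    std::deque<Image> queue_;
    mutable std::mutex mutex_;
public:
    void push(Image img) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(std::move(img));
    }
    // Non-blocking pop, as a core polling thread would call it;
    // returns nullopt when no image is available yet.
    std::optional<Image> tryPop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return std::nullopt;
        Image img = std::move(queue_.front());
        queue_.pop_front();
        return img;
    }
};
```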

nicost commented 2 years ago

This might actually be an improvement over SnapImage(), because that blocks until after readout/copy of data into an image buffer (I think?), so you could potentially turn off light sources even faster.

The "contract" for the current SnapImage is to block for the duration of the exposure (i.e. not for the readout), so, if the SnapImage function is implemented correctly, it is as fast as it should be.

As for readout, I think this would be a good opportunity to rectify the often confusing fact that the snap buffer and circular buffer are separate entities.

Agreed, but this will be a very substantial change to how many things in Micro-Manager operate. It then makes sense to add the plans for data transfer and shutter synchronization to this API sketch. I guess we should keep the things that need to be implemented by Camera Device Adapter authors separate from additions/changes to the behavior of the Core and user layers, but we should have a clear idea how the whole will function.

marktsuchida commented 2 years ago

Sorry for the late reply.

I agree with the points that @nicost made. It would be great if the new API could be based on the old one, and interoperate where possible, rather than just ignoring the old API. Ignoring the existing API is easier for the designer, of course, but it puts burden on everybody consuming the API.

For example, what happens if you configure the new-style trigger and then call the existing snapImage() or startSequenceAcquisition()? Presumably you want that to work if the configured trigger type can be supported using the existing API. Worse, what if one intermixes the new and old prepare/start/trigger functions? Even if you just say that's an error, checking all these cases would require a lot of state management.

Perhaps it would be best to just handle the trigger configuration (and information querying, and sending software triggers) with the new API but make it work with the existing acquisition API first, and then tackle the "better acquisition API" as a separate project (it will be connected to the multi-camera support, too, and should have a good way to segregate it from the existing API).

(It is also totally unclear to me how the device-side interface (MMDevice) will work under this scheme.)

So, getting back to the actual trigger specification part:

henrypinkard commented 2 years ago

I agree with the points that @nicost made. It would be great if the new API could be based on the old one, and interoperate where possible, rather than just ignoring the old API. Ignoring the existing API is easier for the designer, of course, but it puts burden on everybody consuming the API.

For example, what happens if you configure the new-style trigger and then call the existing snapImage() or startSequenceAcquisition()? Presumably you want that to work if the configured trigger type can be supported using the existing API. Worse, what if one intermixes the new and old prepare/start/trigger functions? Even if you just say that's an error, checking all these cases would require a lot of state management.

This is a miscommunication. I am not suggesting to ignore the old API.

What do we gain by defining our own way to specify triggers instead of just copying GenICam as closely as possible (and possibly adding minor extensions or additional parameters)? (We lose a lot, so it will only be worth it if we gain a lot.)

This was meant to copy a subset of GenICam as closely as possible, though it may not yet be perfect. GenICam supports many, many more features than are copied over here. This simplification is essential.

Because the configuration is split across 7 functions, it would seem impossible for user code to tell which combinations of options are supported. For example, let's say I want to set the frame start trigger to software, but setFrameStartTriggerType throws an exception. How do I know if that is because the camera doesn't support software trigger at all, or if the camera doesn't support software frame start in combination with the current values of all the other settings? The result may even differ depending on whether I set the frame start trigger first or the burst start trigger first. I suspect (but have not confirmed) that GenICam solves this using their feature graph system, but it might become unusable without such a mechanism. Not to mention that GenICam allows you to query the allowed values for each feature and all that.

I'm not sure what you mean by the "feature graph" (the word "graph" does not appear anywhere in the document). As far as I can tell, GenICam does not solve this problem.

There are 4 triggers and only (up to) 3 types that can be set, so at least two triggers need to be connected to the same input. I think you intended those to be software (the only one that has separate send-trigger functions), but then one presumably needs to send pointless software triggers that the device adapter somehow ignores(?). GenICam allows you to switch trigger on or off.

I don't understand what you're talking about here. GenICam "off" is equivalent to "internal" in the spec above. This is for consistency with the language we've used in earlier posts on this thread

I think it makes sense to have a single sendSoftwareTrigger function that takes an argument specifying which trigger. That way, we don't need to add functions when a new type is added in the future. Again, see GenICam.

I think dedicated API functions for each is more consistent with how MMCore is currently set up. For example, we have startContinuousSequenceAcquisition, not AcquisitionMode="Continuous" like GenICam. So if we copied the GenICam way for the new stuff but aren't going to change the existing API, we'd have a hybrid API where sometimes GenICam is followed and sometimes it's not. Personally, I think programming with the API is a little clearer if it's all consistent, and I also prefer having dedicated method names rather than having to know valid enum values. But I'm fine with implementing whichever.
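For comparison, the two styles under discussion might look like this side by side (all names here are hypothetical; the return codes mimic MMCore's integer-error-code convention):

```cpp
// Style 1: one generic entry point with a selector argument,
// closer to GenICam's TriggerSelector + TriggerSoftware.
enum class TriggerKind { BurstStart, FrameStart, ExposureEnd, BurstEnd };

int lastSent = -1;  // stand-in for actually signaling the camera

int sendSoftwareTrigger(TriggerKind kind) {
    lastSent = static_cast<int>(kind);
    return 0;  // DEVICE_OK-style return code
}

// Style 2: dedicated functions, closer to the current MMCore style
// (e.g. startContinuousSequenceAcquisition). Here they simply wrap
// Style 1, showing that the two are interconvertible.
int sendBurstStartTrigger() { return sendSoftwareTrigger(TriggerKind::BurstStart); }
int sendFrameStartTrigger() { return sendSoftwareTrigger(TriggerKind::FrameStart); }
```

Style 1 keeps the surface area fixed when new trigger kinds are added; Style 2 makes valid calls discoverable from the API docs alone.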

henrypinkard commented 2 years ago

Other points:

I wrote up a document describing this proposal here

Closing this issue so that we can open a fresh one #226 and bring new people on to give feedback on this API

nicost commented 2 years ago

So you didn't actually read the GenICam standard.

That is a disrespectful comment, and unnecessary. Please stay constructive

This discussion has become long and hard to understand. Henry provided an API proposal in which he tried to accommodate all input, including everything Mark brought up. Mark, can you please provide a concrete proposal rather than continuing to point out the negatives? If we do not move forward, we will never get anywhere.

At this point, I will invite comments from camera companies to the document provided by Henry. Mark, if you see room for improvements, can you please provide those in that document?

marktsuchida commented 2 years ago

So you didn't actually read the GenICam standard.

That is a disrespectful comment, and unnecessary. Please stay constructive

@nicost You are right. @henrypinkard I apologize for that. I shouldn't have used such a tone.

henrypinkard commented 2 years ago

Thank you for apologizing @marktsuchida. I appreciate that you're very passionate about getting this right. And I don't know where we would be without your encyclopedic knowledge and many excellent suggestions!

To be clear, I do not mean to dismiss any of your previous concerns. I'm using the word "degenerate" in the linear algebra sense--no unique solution. I can see valid reasons to adopt your proposal and valid reasons to stick with mine. For that reason, I think it is important to solicit input from others. You will, of course, have more chances to provide feedback on this. It would be helpful if you could continue workshopping your ideas into more concrete proposals for changes to the API, so that we can decide on whether to adopt them without losing too much time to extensive back-and-forths, since there are many other areas that need attention.

marktsuchida commented 2 years ago

@henrypinkard Thank you.