
Cannot input long exposures using properties in filelist #162

Closed: rickconv closed this issue 1 year ago

rickconv commented 1 year ago

I have an issue using 4.2.6 on Windows 11.

If I select a file in the file list window and then click Properties, I cannot enter an exposure longer than 5000 s. If I enter 5001 s, for example, the OK button is grayed out, so the setting cannot be saved. Anything up to 5000 s is fine.

Exposures embedded in the image files are fine with any value, this is only a problem if I try to add or change an exposure manually from the file list window.

Not sure what is so magic about 5000, but hopefully it is a simple fix?

This is important to me now, as I combine multiple stacks from different years, each of which can be as much as 20 to 30 hours. I realize you may not be planning a version 4 update, but if you could at least ensure that v5 fixes this, that would be fine.

Thanks so much,
Rick Veregin

[attached screenshot: DSS issue]

perdrix52 commented 1 year ago

The exposure time in V5 can now be edited to allow values up to 23 hr 59 min 59.999 s.
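(For reference, that is 23×3600 + 59×60 + 59.999 = 86399.999 s, i.e. just under 24 hours per entry.)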

If that's not enough ... please say so

markmac99 commented 1 year ago

Hi, I'm curious as to why this information is important, and also why you're stacking stacks.

Stacking stacks is not the same as stacking the combined raw data, since the average of averaged subsets is not the same as the average of the whole. Secondly, I don't think DSS uses this value anywhere except to give you an indication of the approximate total exposure time.
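A quick numeric illustration of the first point (the values are made up, and this is not anything DSS does internally):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sessions with unequal frame counts (numbers are made up).
session_a = rng.normal(loc=100.0, scale=10.0, size=300)  # 300 frames
session_b = rng.normal(loc=100.0, scale=10.0, size=100)  # 100 frames

grand_mean = np.concatenate([session_a, session_b]).mean()
mean_of_means = (session_a.mean() + session_b.mean()) / 2.0

# The unweighted mean of means over-weights the smaller session,
# so the two results disagree unless the sessions are the same size.
print(f"grand mean:    {grand_mean:.4f}")
print(f"mean of means: {mean_of_means:.4f}")
```

With equal session sizes the two agree exactly; with unequal sizes the unweighted mean of means mis-weights the data.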

This might be a topic to raise in the DSS groups.io group.

perdrix52 commented 1 year ago

It has been suggested that you could be stacking stacked images. To quote from the message in question on the mailing list:

I am pretty sure this doesn't do what he thinks it is doing, since an average of averages is not the average of the raw data. I also question whether the "exposure time" of a stack is really the same as the sum of the individual exposures; colloquially we might refer to it as a total exposure of 6000 s, but in reality outliers have been dropped, data averaged, etc. If you're going to combine different data capture sessions, you need to stack the raw data, not the stacks.

If that is the case, then I see no need to change the code in 5.1.

rickconv commented 1 year ago

Yes, I am stacking stacks. The reason is that if you have 30+ hours of data each year for a target over multiple years, taken with 1-minute exposures across as many as 40 separate nights with different flats and darks, it would be pretty much impossible to go back now and stack it all as one--at least in the time I have left in life. I do smaller stacks as I go along and then stack them. Now I am thinking of combining multiple years.

I have to disagree that the stack of stacks process is absolutely incorrect. Before I started doing this a few years ago I made sure I understood under what conditions it would work.

First, for a simple average, like that for the light signals, an average of averages exactly equals the overall average if each data set being averaged has the same number of elements. And if the number of elements in each set is known, a correction can be applied to make the average of averages come out correctly even when the sets differ in size.
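(For reference, that correction is just the count-weighted mean of the two stack means, x̄ = (p·x̄_p + q·x̄_q)/(p + q), which reduces to the plain average of the two means when p = q.)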

The only other thing we need to worry about is the noise. I could not find a derivation for that, so I did my own S/N proof (for two data sets). The result is the same as in the signal-average case: the S/N ratio of a single combined stack is identical to that of an average of two stacks, as long as both stacks have an equal number of elements.
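In outline, assuming every frame carries the same signal S and an independent per-frame noise N, the argument goes:

```latex
% Single stack of all n = p + q frames:
\mathrm{SNR}_{\text{full}} = \frac{S}{N/\sqrt{p+q}} = \frac{S}{N}\sqrt{p+q}

% Unweighted average of the two stack means \bar{x}_p and \bar{x}_q:
\operatorname{Var}\!\left[\tfrac{1}{2}(\bar{x}_p + \bar{x}_q)\right]
    = \frac{N^2}{4}\left(\frac{1}{p} + \frac{1}{q}\right)
\quad\Longrightarrow\quad
\mathrm{SNR}_{\text{avg}} = \frac{2S}{N}\sqrt{\frac{pq}{p+q}}

% Ratio of the two routes; by AM--GM it is at most 1, with equality iff p = q:
\frac{\mathrm{SNR}_{\text{avg}}}{\mathrm{SNR}_{\text{full}}}
    = \frac{2\sqrt{pq}}{p+q} \le 1
```

So for p = q the two routes give identical S/N, which is the claim; unequal stacks fall below parity only by the factor 2·sqrt(pq)/(p + q).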

I mostly do a straight average; there is not much need for rejection when you have so many frames, and I manually delete bad 1-minute frames from each night. I try to keep each chunk of data about the same size for each stack. It turns out there is reasonable leeway in the stack lengths; modest differences introduce only very small errors in S/N.

If one does rejection, then yes, some pixels might not be included in two shorter stacks that would have been included in the longer stack, or vice versa. But a large outlier is still likely a large outlier, and with so many frames, small outliers taken out or left in are not going to be important.

So I have a practical reason, and a mathematical demonstration that I can do this with the proper care in how I do it. And it does work extremely well.

I do agree with you that any arbitrary mix of stack lengths will not necessarily lead to the same result as an overall average would have.

I do admit I do not know what DSS does with the exposure info, but just in case it is used, I wanted to make sure I put in correct values. Also, it is nice to have the correct values there for future reference.

If you choose not to implement any further changes for this, that is fine with me; going forward I can work with the chunks of 24 hours or less that you enable in v5. Thanks so much

Rick

perdrix52 commented 1 year ago

Will you please post that to the DeepSkyStacker mailing list on groups.io?

I'm closing this now with the comment that the speed of DeepSkyStacker 5.1.0 Beta3 means you have no excuse for not re-stacking all the original frames; that will also allow you to validate (or not) our view that it's incorrect to stack stacked images. Though if all you ever use is average stacking, then perhaps you have less of a problem.

Beta3 will ship in a day or so.

markmac99 commented 1 year ago

Hi Rick. I note that your studies looked at identically sized sets, but these are a special case: in general you won't have identically sized sets in your stacks, nor will the noise have the same profile, so the special case does not apply.

I can see how one might be persuaded otherwise with synthetic data; in tools like Excel it's easy to inadvertently create two datasets that are in fact scaled copies of one another, and which therefore fit the special case. It's hard to generate truly random data in consumer tools. It's also possible that with your huge datasets, or if the noise is very low in the first place, the discrepancies are small enough to be good enough or unnoticeable. As David says, I would strongly encourage you to raise this point on the DSS forum on groups.io, where there are many people far more knowledgeable than me who can probably explain further.

rickconv commented 1 year ago

First, I do use stacks of similar size. And small differences in stack size make little difference in the way noise is propagated, based on my mathematical model.

Actually, I do think my noise profile is similar night to night. The part of the image most sensitive to noise is right at the background level, which is dominated by Poisson noise from the sky brightness. I cool my camera, which has low thermal noise, so there is no real contribution there. My read noise is low and also the same for every exposure, as I use a standard gain. But again, my conditions are chosen so that sky brightness dominates over read noise. Generally, sky brightness is similar night to night on average--especially since most of my work is done with NB filters, which cut out most of the stray light. If the sky is noticeably bright on some night, something was wrong, and I don't use that data. Exposures are the same. So the noise profile will be similar under my controlled conditions.
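To make the sky-limited point concrete, here is a minimal sketch with made-up numbers (the sky flux, dark current, read noise, and exposure time below are illustrative assumptions, not my actual values):

```python
import math

# Hypothetical per-frame values in electrons (illustrative only).
sky_rate_e_per_s = 2.0    # sky background flux
dark_rate_e_per_s = 0.02  # cooled camera: nearly negligible
read_noise_e = 1.5        # per-exposure read noise
exposure_s = 60.0         # 1-minute subs

sky_e = sky_rate_e_per_s * exposure_s
dark_e = dark_rate_e_per_s * exposure_s

# Shot noise is Poisson, so variance equals counts; read noise adds in quadrature.
total_noise_e = math.sqrt(sky_e + dark_e + read_noise_e**2)

print(f"sky shot noise: {math.sqrt(sky_e):.2f} e-")
print(f"total noise:    {total_noise_e:.2f} e-")
# Sky-limited when sky_e >> read_noise_e**2: the per-frame noise is then
# set by the sky and stays similar night to night.
```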

I may not have been clear about the model. I have not used any synthetic data. What I did was derive the S/N equation for an average of averages in general, for two stacks of p and q elements. I did assume the noise, N, is the same in every raw file; as I explained above, I believe I am in that situation because I carefully control noise. One should do this anyway, otherwise it is easy to mess up a big data set with bad data. Then I put the two stack lengths, p and q, into the equation and calculated the effect in terms of an arbitrary N, since with the same N in each raw file N enters only as an overall multiplier. This is the Excel result I was talking about, just plotting S/N_avg vs. p and q. But my model is an actual mathematical model for S/N; no fake random data was harmed by my analysis.
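Here is a minimal sketch of that calculation (a reconstruction in code of the equation just described, not my actual spreadsheet):

```python
import math

def snr_ratio(p: int, q: int) -> float:
    """SNR of an average of two stack means (p and q frames, equal per-frame
    noise) relative to the SNR of a single stack of all p + q frames.
    Equals 1.0 exactly when p == q, and falls below 1.0 otherwise."""
    return 2.0 * math.sqrt(p * q) / (p + q)

for p, q in [(100, 100), (100, 120), (100, 150), (100, 200)]:
    print(f"p={p:3d}, q={q:3d}: SNR efficiency = {snr_ratio(p, q):.4f}")
# p=100, q=120 loses only ~0.4% in S/N, which is the 'reasonable leeway'
# in stack lengths mentioned earlier.
```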

I wish it were easy to verify the model with my data, but generating S/N accurately from OSC data is incredibly difficult. Any gradients, hot/warm/cold pixels, and faint nebulosity can really interfere with the general S/N of the background. Note that to do this one cannot use flats or darks, as those introduce noise; one needs to look at raw images and raw stacks. Not to mention the debayering problem of mixing pixel values, and the need to measure noise on each color channel separately (otherwise differences between the RGB pixel intensities come out as noise, when they are just differences in the intensities of the different colors). So theory will have to do.
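For illustration, a sketch of measuring per-channel background noise on an undebayered frame (the RGGB pattern and the star-free crop are assumptions):

```python
import numpy as np

def cfa_channel_noise(raw: np.ndarray) -> dict:
    """Std dev of each Bayer channel of an undebayered mosaic,
    assuming an RGGB pattern, so that intensity differences between
    neighboring color pixels are not mistaken for noise."""
    channels = {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
    return {name: float(np.std(ch)) for name, ch in channels.items()}

# Usage: crop a star-free, gradient-free background region first, e.g.
#   noise = cfa_channel_noise(raw_frame[1000:1200, 1000:1200])
```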

I'm not disagreeing that you are correct in general, just pointing out that controlling factors well enables me to do what I am doing with little if any impact on the S/N profile.

I have posted my previous comments to the DSS forum now.

Rick