michaeltown-phd / snowisoDataMunging

This is an initial processing and analysis of data from the SNOWISO project at EastGRIP.

clean all iso data once and for all in the .ods spreadsheets. #12

Open · michaeltown-phd opened this issue 2 years ago

michaeltown-phd commented 2 years ago

- pos1 20170701 d18O/dD doubles at 30 cm, 60 cm
- pos3 20170726 d18O/dD at 50 cm
- pos1 20180707 d18O/dD doubles at 30-36 cm, 65 cm
- pos1 20180721 d18O/dD doubles at 30-36 cm, 65 cm, 90 cm
- pos2/3 20180608 d18O/dD top number?
- pos3 20180512 d18O/dD top numbers 0-4 cm?
- pos3 20180721 dD doubles at 0 cm?
- pos5 20180622/20180707 d18O/dD doubles at 18 cm, 36 cm, 58 cm, 78 cm, 90 cm
- pos5 20180512 top 10 cm all the same?
- pos4 20170526 check accumulation
- pos4 20180806 0-5 cm all the same?

michaeltown-phd commented 2 years ago

Will not be as thorough with the 2016 data because there is no accumulation data there to make alignment of the cores easier and more appropriate.

michaeltown-phd commented 2 years ago

The doubles are being induced by the accumulation code, which rounds each sample to a new height. See if this can be mitigated.
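A minimal sketch of how that rounding can create doubles, using hypothetical column names and a made-up 3 cm grid (the actual accumulation code surely differs): two samples about 1 cm apart can snap to the same new height.

```python
import pandas as pd

# hypothetical sample depths (cm) and d18O values, not real data
df = pd.DataFrame({
    "depth": [29.4, 30.6, 59.5, 60.4],
    "d18O":  [-32.1, -31.8, -35.0, -34.7],
})

# snap each depth to the nearest level on an assumed 3 cm grid
grid = 3.0
df["newDepth"] = (df["depth"] / grid).round() * grid

print(df)
# 29.4 and 30.6 both snap to 30.0, and 59.5 and 60.4 both snap to 60.0,
# producing "doubles" at 30 cm and 60 cm
```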

michaeltown-phd commented 2 years ago

Regarding sorting out the doubles at a few levels across the data set: this is not so easy to solve, because the interpolation must be done from depth to newDepth. However, the data frame is stored by sample number, and linear interpolation between adjacent rows of the df would not end well. If our results turn out to depend on this level of sensitivity, then I will go back and do the linear interpolation (see the sketch below). Right now, the depth values are simply snapped to adjacent heights, inducing an error of about 1 cm. This produces some doubles at some heights, but I think it is well within the uncertainty presented by the data set as a whole.
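If that sensitivity ever matters, a depth-sorted linear interpolation could replace the snapping. A minimal sketch, assuming a pandas DataFrame with hypothetical columns depth, newDepth, and d18O (not the repo's actual code):

```python
import numpy as np
import pandas as pd

def interp_to_new_depths(df: pd.DataFrame) -> pd.DataFrame:
    """Linearly interpolate d18O from measured depths onto newDepth."""
    # sort by measured depth so adjacent rows are physical neighbors,
    # regardless of the sample-number order the df is stored in
    srt = df.sort_values("depth").drop_duplicates("depth")
    out = df.copy()
    # np.interp requires a monotonically increasing x-coordinate (srt["depth"])
    out["d18O_interp"] = np.interp(df["newDepth"], srt["depth"], srt["d18O"])
    return out
```

The same call would be repeated for dD. Sorting by depth before interpolating sidesteps the sample-number ordering problem described above.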