michaeltown-phd opened this issue 2 years ago
Will not be so thorough with the 2016 data because there is no accumulation data here to make alignment of the cores easier and more appropriate.
Doubles are being introduced by the accumulation code when depths are rounded to the new height grid. See if this can be mitigated.
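A minimal sketch of how the rounding step can induce doubles, using a hypothetical toy profile (column names `depth`, `newDepth`, and `d18O` are assumptions, not the actual code's names): two samples that sit on either side of a grid height both snap to the same `newDepth`, producing a duplicated level.

```python
import pandas as pd

# Hypothetical example profile: measured depths (cm) before regridding
df = pd.DataFrame({
    "depth": [29.6, 30.4, 59.6, 60.3],
    "d18O": [-35.2, -35.8, -33.1, -33.4],
})

# Rounding to a 1 cm grid can map two samples onto the same height
df["newDepth"] = df["depth"].round()

# Flag the induced doubles for inspection
doubles = df[df["newDepth"].duplicated(keep=False)]
print(sorted(doubles["newDepth"].unique()))  # both pairs collapse to 30 and 60
```

Listing the duplicated `newDepth` values per core/date this way would reproduce the kind of inventory given below.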
Regarding sorting out the doubles at a few levels across the data set: not so easy to solve, because the interpolation must be done from depth to newDepth. However, the data frame is stored by sample number, and linear interpolation between adjacent rows in the df would not end well. If our results depend on this level of sensitivity, then I will go back and do the linear interpolation. Right now, the depth values are simply snapped to adjacent heights, inducing an error of about 1 cm. This produces doubles at some heights, but I think this is well within the uncertainty of the data set as a whole.
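If the linear-interpolation route does become necessary, the row-order pitfall above is avoidable by sorting on depth before interpolating. A hedged sketch (toy data; the real column names and grid spacing are assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical profile stored by sample number, i.e. NOT sorted by depth
df = pd.DataFrame({
    "sampleNumber": [3, 1, 2, 4],
    "depth": [55.0, 10.0, 32.0, 80.0],
    "d18O": [-33.0, -36.0, -34.5, -31.0],
})

# Sort by depth first, so interpolation pairs physically adjacent samples
# rather than adjacent rows in the sample-number ordering
prof = df.sort_values("depth")

# Interpolate onto a regular grid instead of snapping to the nearest height;
# np.interp requires the x-coordinates (depth) to be increasing
newDepth = np.arange(10, 81, 10, dtype=float)
d18O_new = np.interp(newDepth, prof["depth"], prof["d18O"])
```

Because each new height gets its own interpolated value, the grid has no doubles by construction; the trade-off is that interpolated values are no longer raw measurements.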
- pos1 20170701 d18O/dD: doubles at 30 cm, 60 cm
- pos3 20170726 d18O/dD: at 50 cm
- pos1 20180707 d18O/dD: doubles at 30-36 cm, 65 cm
- pos1 20180721 d18O/dD: doubles at 30-36 cm, 65 cm, 90 cm
- pos2/3 20180608 d18O/dD: top number?
- pos3 20180512 d18O/dD: top numbers 0-4 cm?
- pos3 20180721 dD: doubles at 0 cm?
- pos5 20180622/20180707 d18O/dD: doubles at 18 cm, 36 cm, 58 cm, 78 cm, 90 cm
- pos5 20180512: top 10 cm all the same?
- pos4 20170526: check accumulation
- pos4 20180806: 0-5 cm all the same?