@mangiapasta I moved your subsection on qualitative behavior (of strain) to the Qualitative (formerly quick-and-dirty) section. Please check whether you're happy with it as written in that location. Thank you.
@mangiapasta sorry for all the notes - this should be the last one. I was thinking we should probably omit the paragraph and table you wrote about significant figures at the end of Sec. 7.1, though perhaps it can be revised to be consistent with our current approach and terminology. See what you think.
I didn't write that; not sure where it came from. Feel free to do what you like with that.
@dwsideriusNIST @ajschult did one of you make the table at the end of Sec. 7.1 and write the accompanying text? I was thinking we should omit it or revise it to be consistent with the rest of the paper. Thank you. (Sorry @mangiapasta about the mistaken authorship.)
I'm not sure where that came from...
Right, I wrote that based on the item in our planning document about "basics" (see comments in this issue from Sep 30 and Oct 1). I think when we were in D.C. someone mentioned people reporting values with absurd numbers of digits. In any case, I'm not emotionally attached to the paragraph; parts of it certainly seem redundant with the rest of Section 7.1.
Thanks @ajschult . I think the point about absurd numbers of digits is valid - and the problem can contribute to misunderstanding. Would you be willing to rewrite this bit so that it's a narrow point about significant figures aligned with uncertainty? I think the table can probably be omitted unless it will help to make the point. Thank you.
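For what it's worth, the point could probably be made with a few lines of code rather than a table. A minimal sketch (hypothetical helper, not from the manuscript) of rounding a reported value so its digits align with the uncertainty:

```python
import math

def round_to_uncertainty(value, uncertainty, sig_figs=2):
    """Round the uncertainty to sig_figs significant figures, then
    round the value to the same decimal place."""
    if uncertainty <= 0:
        raise ValueError("uncertainty must be positive")
    # Decimal place of the last significant digit of the uncertainty
    decimals = sig_figs - 1 - math.floor(math.log10(uncertainty))
    return round(value, decimals), round(uncertainty, decimals)

# A mean of -1234.56789 with standard uncertainty 0.2345 should be
# reported as -1234.57 +/- 0.23, not with eight digits after the point.
print(round_to_uncertainty(-1234.56789, 0.2345))  # (-1234.57, 0.23)
```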
So I actually broke up the subsection I wrote in the qualitative behavior section. In particular, I moved the first paragraph to the beginning of the entire section and recast the rest of my subsection as an example of how to determine the existence of linear regimes in data. I think the quick-and-dirty section is really about assessing the quality of data overall, and we provide examples of various ways to do this. Feel free to change/edit if you don't think that's appropriate.
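In case it helps readers of this thread, here is a rough sketch of the kind of check that example describes: scanning for a window in which a straight-line fit is adequate. The window size and R^2 threshold are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def longest_linear_regime(x, y, min_window=10, r2_min=0.99):
    """Return (start, stop) indices of the longest contiguous window
    in which a straight-line fit achieves R^2 >= r2_min."""
    best = (0, 0)
    for i in range(len(x) - min_window + 1):
        for j in range(i + min_window, len(x) + 1):
            slope, intercept = np.polyfit(x[i:j], y[i:j], 1)
            residuals = y[i:j] - (slope * x[i:j] + intercept)
            ss_res = np.sum(residuals**2)
            ss_tot = np.sum((y[i:j] - np.mean(y[i:j]))**2)
            r2 = 1 - ss_res / ss_tot if ss_tot > 0 else 0.0
            if r2 >= r2_min and (j - i) > best[1] - best[0]:
                best = (i, j)
    return best

# Toy stress-strain-like data: linear up to x = 1, then saturating
rng = np.random.default_rng(0)
x = np.linspace(0, 2, 100)
y = np.where(x < 1, x, 1 + 0.2 * (x - 1)) + 0.005 * rng.normal(size=100)
print(longest_linear_regime(x, y))
```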
I'll look at this after Sec. 7 is revised so I can do everything at once.
I pushed some updates to specific observables, fleshing out the block averages section and adding one on autocorrelation analysis. Definitely not done yet, but on the way.
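For anyone following along, the block-average idea in that section boils down to a few lines. This is a generic sketch on toy AR(1) data, not the manuscript's example:

```python
import numpy as np

def block_average_sem(series, n_blocks):
    """Standard error of the mean from block averages: split the
    series into n_blocks contiguous blocks, average each block, and
    take the standard error of the block means."""
    x = np.asarray(series)
    usable = (len(x) // n_blocks) * n_blocks  # drop the remainder
    block_means = x[:usable].reshape(n_blocks, -1).mean(axis=1)
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

# Correlated toy data: AR(1) with correlation time ~ 1/(1 - 0.95) = 20
rng = np.random.default_rng(0)
data = np.zeros(100_000)
for t in range(1, len(data)):
    data[t] = 0.95 * data[t - 1] + rng.normal()

# The estimate should plateau once blocks exceed the correlation time
for n_blocks in (1000, 200, 50, 10):
    print(n_blocks, block_average_sem(data, n_blocks))
```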
I think the intro to Sec. 7 is going to need a few "why we recommend 90% CI" sentences, because a lot of readers/reviewers are going to be surprised that 95% is not recommended straightaway. The thing is, I don't think there is anything special about 95%. It probably became a de facto standard because the ±2s confidence interval makes for easy math. Without some statement of "why," we open ourselves up to the accusation of cooking the data to look better.
Good point, @dwsideriusNIST: "readers/reviewers are going to be surprised that 95% is not recommended"
I'm probably guilty of suggesting 90% but not for a good reason. I'm happy enough if we change to 95%.
Should we? @mangiapasta ? @agrossfield ?
95% will be a safe choice if we want to avoid criticism, founded or unfounded.
Ultimately, we need to stress that the writer should report the chosen confidence level. No one should argue with that; as a reviewer I would frown on someone using a 50% confidence level (OK, it's absurd, but just tag along for a moment) to set error bars, but if they say that honestly and upfront, I'll accept it. I'll be more critical of claims based on the CI, but the statistics are at least honest.
I'm fine with whatever confidence intervals we end up suggesting.
@dwsideriusNIST and @mangiapasta - I changed to 95%ile CI recommendation and updated corresponding table for coverage factors. Thanks again for suggesting that.
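For concreteness, the recommendation amounts to something like the following (a sketch using the Student-t coverage factor; the manuscript's coverage-factor table serves the same purpose):

```python
import numpy as np
from scipy import stats

def mean_confidence_interval(samples, confidence=0.95):
    """Confidence interval for the mean of i.i.d. samples using the
    Student-t coverage factor (which approaches ~2 for large n at 95%)."""
    x = np.asarray(samples)
    n = len(x)
    sem = x.std(ddof=1) / np.sqrt(n)
    k = stats.t.ppf(0.5 + confidence / 2, df=n - 1)  # coverage factor
    return x.mean() - k * sem, x.mean() + k * sem

x = np.random.default_rng(0).normal(size=30)
print(mean_confidence_interval(x))         # 95% CI
print(mean_confidence_interval(x, 0.90))   # whatever level: report it!
```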
@mangiapasta @dmzuckerman I think we may need to make a change of notation to define the arithmetic mean and true value.
In the specific observables section, we need a functional representation of the arithmetic mean; currently we use an overbar, but I don't think this is going to work well. E.g., equation 14 on page 13. We may need to use the <...> notation for arithmetic means and something else for the true value.
This is a departure from the VIM, but the paper needs to express arithmetic means of mathematical expressions whereas the VIM doesn't address that level of computation.
One option: use E[x] for the true expectation value and <x> for the arithmetic mean. I'm open to other suggestions. One benefit is that the notation distinguishes the true value from the sample estimate at a glance.
@dwsideriusNIST and @mangiapasta some brief comments:
@dmzuckerman and @mangiapasta, in reply:
So... I'm going to make a code branch and switch our convention so that E[x] indicates a true expectation value and bracket(x) indicates a sample average. We can give it a look, then vote on it.
took care of N_lags
parametric bootstrap - we need to work on this next. Honestly, I think the regular bootstrap section needs a bit of example/math to make it less opaque (a sketch is included below).
LASTLY: I'm going to switch our "typical variable" to be x throughout - we originally used q to match the GUM, but now we're using both.
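Re the bootstrap item above: to start that ball rolling, here is roughly what the nonparametric (percentile-method) bootstrap looks like in code. Treat this as a sketch for the section, not final text:

```python
import numpy as np

def bootstrap_ci(samples, estimator=np.mean, n_boot=10_000,
                 confidence=0.95, seed=0):
    """Percentile-method bootstrap CI: resample the data with
    replacement, recompute the estimator on each synthetic data set,
    and take the empirical quantiles of those replicates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(samples)
    replicates = np.array([
        estimator(rng.choice(x, size=len(x), replace=True))
        for _ in range(n_boot)
    ])
    alpha = 1.0 - confidence
    return np.quantile(replicates, [alpha / 2, 1.0 - alpha / 2])

x = np.random.default_rng(1).exponential(size=50)
print(bootstrap_ci(x))              # CI for the mean
print(bootstrap_ci(x, np.median))   # works for other estimators too
```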
@dwsideriusNIST thanks. Regarding notation, I'm not certain what the best thing is. In statistical mechanics the < . > notation usually refers to a true expectation. The overbar in stat mech is usually defined in context, so using it for a sample average should be OK.
I do worry that the E[.] notation will be off-putting to some, even though it shouldn't be. Anyone else have thoughts before Dan S does a bunch of work on this? @drroe @agrossfield ?
I would definitely appreciate any thoughts regarding the mean/expectation notation. Please speak up if equation 11 is OK; it's only an aesthetic issue for me.
As a side note, the MC community uses the <.> notation for both pencil/paper expressions for the theoretical expectation value and a simulation-obtained ensemble average.
I decided to try out the E[x] and <x> notation in a separate branch.
In particular, compare eq. 11 on page 14 to that in the master branch.
@dwsideriusNIST thanks for trying that. In the end, I prefer <.> and overbar, not E[x]. I think the one that decided me was Eq (7), where you didn't yet put the E[ . ] around the whole thing ... but that gets particularly cumbersome.
Good catch... OK, there are only votes against the proposed notation, so I'll abandon that branch.
I am going to bring some macros from the avg_notation branch over to the main branch. (I created \expval and \mean macros that allowed me to switch notation quickly.) I might add one or two for the standard uncertainties as well, since it helps enforce consistency.
I like the macros idea — I have a ton of macros I use for latex, but I’ve refrained from introducing them here because I didn’t want to impose my way on everyone else.
@agrossfield thanks for the vote of confidence. The most recent commit has a few macros, now used in most parts of the paper (dark uncertainty and propagation of uncertainty currently excluded):
\expval{x}: the expectation value of x
\mean{x}: the arithmetic mean of x
\stdev{x_k}: the experimental standard deviation of x, given a set of measurements {x_k}
\stdevmean{\mean{x}}: the experimental standard deviation of the mean of x
(Currently, \stdev and \stdevmean yield the same notation, but I kept them separate in case we want to use something different for \stdevmean.)
As the paper gets closer to finality, I'll continue to convert any residual statistical quantities to use these macros, so far as is possible.
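For anyone who wants to mimic this setup, the definitions presumably look something like the following; the real ones live in the repository, so these are illustrative guesses:

```latex
% Illustrative macro definitions; see the repository for the real ones.
\newcommand{\expval}[1]{\left\langle #1 \right\rangle}  % true expectation value
\newcommand{\mean}[1]{\overline{#1}}                    % arithmetic (sample) mean
\newcommand{\stdev}[1]{s\!\left(#1\right)}              % experimental standard deviation
\newcommand{\stdevmean}[1]{s\!\left(#1\right)}          % currently identical to \stdev
```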
I'm part way through the specific observables section. I left many comments in the .tex file. @dmzuckerman et al., any feedback on the questions would be useful. I can incorporate changes while I edit if you're able to look over them in the next few days.
I looked at your comments, @mangiapasta. One issue is that this section has been gone over by a few folks, so there is some heterogeneity in the writing, both in style and content. However, ultimately we need a consistent (and correct) document. I trust your instincts on cleaning things up. Here's how I suggest you proceed:
@mangiapasta @dmzuckerman Current commit (ac02c666e36d98c0113d33da502ed54f257bcf14) has some replies to PNP's comments. I cut the clause that seemed to imply that time correlation functions required bootstrapping.
Clicked the wrong "update"... sorry.
Can someone with more bootstrap experience clarify the section "Bootstrapping and uncertainty propagation"? To me, this section is specific to the parametric bootstrap.
For example, if I calculate the mean and uncertainty of, say, \exp{-\beta U} using the regular, nonparametric bootstrap, sampling from collected measurements of U (internal energy), then the resulting distribution of \exp{-\beta U} will obey the physical constraints on that quantity (it can't be negative) by its mathematical nature.
If, however, I use a modelling approach in which I presume a distribution of \exp{-\beta U} but do not constrain it appropriately, then it's ripe to yield unphysical samples.
Or am I missing something else?
@dwsideriusNIST Yes, that's correct. That section could be folded into the previous section, alongside the bits about parametric bootstrapping.
@ajschult Thanks for the clarification. I'm working on a revision that moves that section to the "bootstrapping variants"
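The distinction is easy to see with a toy numerical experiment (illustrative numbers only, not from the manuscript):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
U = rng.normal(loc=0.5, scale=1.5, size=200)  # toy energy measurements
obs = np.exp(-beta * U)                       # positive by construction

# Nonparametric bootstrap: resampling the measured U values means every
# synthetic observation is exp(real number), hence physical (positive).
resampled = np.exp(-beta * rng.choice(U, size=1000, replace=True))
print("nonparametric negatives:", (resampled <= 0).sum())  # always 0

# Naive parametric model: fitting a Gaussian to exp(-beta*U) itself and
# sampling from it can emit negative, unphysical values.
model_draws = rng.normal(obs.mean(), obs.std(ddof=1), size=1000)
print("parametric negatives:", (model_draws < 0).sum())    # typically > 0
```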
@mangiapasta Paul, I revised the error propagation section. It is significantly expanded, to give some background on 1) where the error propagation expression comes from and 2) why we use a Taylor-series expansion. Please give it a read to see if I'm off base anywhere. I did assume linearly uncorrelated raw data throughout, to avoid introducing covariances, etc., though with a reference that readers can follow if interested.
Okay thanks. I'll try to give this a look soon. Currently we're in the middle of IMS season, with a large deadline coming up this Friday. I can't promise anything before then, but I should be back on this by early next week at the latest.
I've added a PDF to the repo.
@dwsideriusNIST and @mangiapasta please check out my recent commit which is mostly quite small stuff in response to NIST reviewers. One thing of substance I have realized since we started this project is that bootstrapping confidence intervals (CIs) are not completely reliable, so I added a cautionary note and a couple of references.
And this is not in the manuscript, but ... I would probably go a little further and say that for some data sets (e.g., some of mine!) it is intrinsically impossible to make a reliable 95% CI because the sample is simply too small. When you only have a small number, say O(10), of i.i.d. samples from an unknown distribution, there is always a decent chance that you have missed a value (part of the distribution) that would completely throw off your average. Although you can estimate the chances that you missed such a value, you can't know how much it would change your estimated mean ... and hence you can't make a CI. This is somewhat analogous to the chance of completely missing a free-energy basin in a simulation of a physical system, but it's more mathematically fundamental because it arises even for i.i.d. samples. (A toy demonstration of this effect is sketched below.)
Should we write up something about this issue??
Aside from that, I'm ready to submit. Are we still waiting on NIST?
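Here's a toy version of the argument above: even a textbook t-interval badly under-covers when n is small and the underlying distribution is heavy-tailed (lognormal here; all numbers are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
dist = stats.lognorm(s=2.0)        # heavy-tailed "unknown" distribution
true_mean = dist.mean()

n, trials, covered = 10, 2000, 0   # only O(10) i.i.d. samples per trial
for _ in range(trials):
    x = dist.rvs(size=n, random_state=rng)
    sem = x.std(ddof=1) / np.sqrt(n)
    k = stats.t.ppf(0.975, df=n - 1)
    covered += (x.mean() - k * sem <= true_mean <= x.mean() + k * sem)

# Nominal 95% coverage; the empirical rate comes out far lower because
# small samples routinely miss the tail that dominates the true mean.
print("empirical coverage:", covered / trials)
```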
This is where I usually reference Rumsfeld talking about "unknown unknowns".
Dan, I'll look at your revisions either this afternoon (East Coast time) or on Monday. I would guess we have 1.5 weeks max before NIST approval; it needs one more signature before going to the review committee.
I'm reading through the whole manuscript in detail now. I'll get back to you in a day or so with comments. Currently through 10 of 23 pages. Overall this is looking much better. I like the RMSD discussion now. It needs a small amount of tweaking in my opinion, but nothing major. I will weigh in on that when I compile my full list of comments.
@mangiapasta please just edit (rather than commenting) unless you think something is very controversial. This will save time. I'll be happy to look over any changes from your commit. Thanks!
Will do
@dmzuckerman I looked over your changes and only had a few small edits on top of them (see commits from this afternoon).
@dwsideriusNIST Dan, in Eqs. 18 and 19 of the Specific Observables section, there is weird notation {x_{l \neq j}} appearing after one of the parentheses and before the larger vertical bar. Is that a typo, or did you mean something by it? I couldn't figure out what that "l" (lowercase L) was referring to, and it didn't seem consistent with what you were trying to say re: Taylor's theorem.
Also, in Eqs. 16 through 19 you use {x_i} to denote a collection of raw measurements. The set notation throws me for a loop because sets are generally unordered objects. I think it's more natural to write \boldsymbol{x} = (x_1, x_2, \dots, x_M) so that, e.g., I can interpret the RHS of Eq. 16 as a dot product. Did you have a specific reason for using set notation?
@mangiapasta I looked over the equations to figure out what the issue is. The weird notation you pointed out was the "hold these fixed" part of the partial derivative; the "not equal to" sign was mangled, which must be a font issue. It's a notation convention that I (and many of us) use in macroscopic thermodynamics, i.e., use an index variable (in this case "l") that is otherwise unused to indicate which generic variables are held fixed.
I'm OK dropping the "hold these fixed" list; it should be obvious that the partial derivative w.r.t. x_j is done while holding the other variables fixed.
Regarding the set notation, my original version of that section actually used vector/tensor notation (Eq. 16 as "F = c + a \cdot x", Eq. 18 as "F = F(\bar{x}) + \nabla F \cdot (x - \bar{x}) + ..."; etc.). Eqs. 17 and 19 start to get really nasty, as you have to introduce a transpose vector and then represent the variance as a variance-covariance tensor. In the end, I switched to set notation to avoid confusing readers unfamiliar with the tensor approach.
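For readers of this thread, the expansion under discussion is, schematically (the manuscript's Eqs. 16-19 are the authoritative versions):

```latex
% First-order Taylor expansion of F about the sample means:
\[
  F(x_1,\dots,x_M) \approx F(\bar{x}_1,\dots,\bar{x}_M)
    + \sum_{j=1}^{M} \left. \frac{\partial F}{\partial x_j} \right|_{\bar{x}}
      \left( x_j - \bar{x}_j \right)
\]
% which, for uncorrelated inputs, gives the familiar propagation formula:
\[
  u^2(F) \approx \sum_{j=1}^{M}
    \left( \frac{\partial F}{\partial x_j} \right)^{2} u^2(x_j)
\]
```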
Ah, okay, I see what you were saying about "hold these fixed". I've seen similar things in some stat-mech textbooks, but never with the not-equal sign. Anyway, I have a modest preference for removing that if you're okay with it. The partial derivative implies the other variables should be fixed, as you point out.
It seems like you had a justification for using the set notation, so let's leave it unless someone in WERB (Andrew?) takes issue.
@mangiapasta and @ajschult - you guys have a great draft going for the specific observables section. I have some questions/comments that hopefully you can address in the next few days:
As I said, this is off to a great start. I'm nitpicking so it can be even better.