PresConsUIUC / PSAP

Home of the Preservation Self-Assessment Program application.
https://psap.library.illinois.edu/

test that maximum scoring gives 100 #268

Closed jhain closed 9 years ago

jhain commented 9 years ago

test for each format available that if choosing best options for all levels, score comes out to be 100

areisemann commented 9 years ago

hoo boy, i've been doing this, and it's not going very well. scores are not adding up. i'll post more specifics in a bit.


areisemann commented 9 years ago

I set all of the assessed values for all institutions, locations, resources, and sub-resources so that each would come out to 100, so there couldn't be any weird crossover where the score of one resource was factoring in another location's score (i.e., one that had less-than-perfect conditions). I also made a new location (Score-checking territory) in my testing repository (Repository for user testing).

a1-a4 are single-item resources, each from one of the four item-type areas on the assessment panel. b1 is a collection containing an item (b11) and a sub-collection (b12) that contains one item (b121).

Issue 1: [affects a1] After you've just assessed a new resource item, if you view your assessed score and then go back to edit/correct an answer in the assessment, that correction is not reflected in the score count (i.e., the number above the colored assessment bar). The score count still displays the old number, and the score bar doesn't show the change either. BUT if you click the ? help button next to the score count, it shows the updated/corrected score.
--Relatedly, on the Location screen, the score displayed is still the old one.

Issue 2: [affects a2-b11] As I stated above, all values have been set to excellent/best available from the given choices, so every score should equal 100. However, the final assessment scores for my newly created resources vary, and none is 100: a2 (78.1); a3 (77.3); a4 (97.3); b11 (96.4). From the details given by the ? button next to the assessment bar and score count, it looks like the format of a resource is weighted such that it would be impossible for any resource of that type to get a perfect score. Taking a3 (aniline print) as a random example, the ? button states that its assessment score is 100; format score, 50; location score, 100; temperature score, 67; relative humidity score, 25.

areisemann commented 9 years ago

Issue 3: [affects b11] In some places the score seems to be 81.4 [i.e., the score of its only resource, b121], but then somehow its assessment score is 156. According to the note provided, "This is the average score of the items within the collection." But again, there is only one resource (one item) in this collection. The only other difference between b121 and a1-a4 (each of which is a resource item) is that b121 is a collection item that was chosen by sampling.
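For reference, a minimal sketch of what "the average score of the items within the collection" ought to compute, assuming a hypothetical nested-list representation of collections (the names and recursion here are illustrative, not PSAP's actual data model):

```python
# Hypothetical sketch: a collection's score should be the mean of its
# items' scores, recursing into sub-collections. The representation
# (nested lists of scores) is invented for illustration.

def collection_score(items):
    """Return the mean score of all leaf items, recursing into
    sub-collections (represented as nested lists)."""
    leaf_scores = []

    def collect(node):
        if isinstance(node, list):      # sub-collection
            for child in node:
                collect(child)
        else:                           # leaf item's score
            leaf_scores.append(node)

    collect(items)
    return sum(leaf_scores) / len(leaf_scores)

# b12 contains only b121 (81.4), so its average should be 81.4, not 156.
print(collection_score([81.4]))
# b1 contains b11 (96.4) and sub-collection b12 with b121 (81.4).
print(collection_score([96.4, [81.4]]))  # mean of 96.4 and 81.4
```

However the items are stored, a single-item collection's average can never exceed that item's own score, so 156 points at double-counting or summation rather than averaging.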

areisemann commented 9 years ago

Issue 4: [affects b1/b12] When I'm providing details about the creators of a resource, the form allows me to enter several creators (e.g., Lorelai Layne and Tamira Trent), but when I try to save the information on the form, I get an error message and nothing that I typed in gets saved. The error message is something along the lines of "area = 2 when it can only be a 1 or 0".

adolski commented 9 years ago

I'm not able to reproduce issue 1 -- I see edits reflected immediately, for both resources and locations. Could you demonstrate this in person? I don't know when, if the next long meeting is too far away.

For issue 2, this is correct -- most resources cannot get a perfect score. The problem with Aniline Print is that its format score is only 50, which makes its maximum possible score 80. The 50 comes from the Format Scores tab of the questionDependencies.xlsx spreadsheet.
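One way a cap like this can arise: if the final score is a weighted average in which the format score carries a fixed share of the weight, a low format score bounds the total no matter how perfect every other component is. A minimal sketch, assuming a hypothetical 60/40 split (the actual weights live in the spreadsheet, not here):

```python
# Hypothetical sketch: assume the final score is a weighted average in
# which the format score carries 40% of the weight. The 60/40 split is
# an assumption chosen for illustration because it reproduces the cap
# of 80 mentioned above; it is not PSAP's actual formula.

def max_possible_score(format_score):
    # Best case: every non-format component scores 100.
    # Integer-friendly form of 0.6 * 100 + 0.4 * format_score.
    return (6 * 100 + 4 * format_score) / 10

print(max_possible_score(50))   # → 80.0 (a format score of 50 caps the total at 80)
print(max_possible_score(100))  # → 100.0
```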

Will respond to the other issues soon; haven't looked at them yet.

jhain commented 9 years ago

Alex is right on #2, now that I think about it. Many formats cannot get a perfect score because of inherent format issues, though I believe that is the only part of scoring that cannot achieve a perfect score (my bad, I was having a brain fart). Let me pull up the question dependencies and we should be able to come up with the highest possible score for each format.


areisemann commented 9 years ago

It is not, however, just a condition of the format. Using my aniline print example from above again: its assessment score is 100; format score, 50; location score, 100; temperature score, 67; and relative humidity score, 25. Thus, even though its format can't get higher than a 50, the temperature and relative humidity scores are off too, which they shouldn't be, since I assessed every location in the application as being a perfect preservation environment.

adolski commented 9 years ago

An institution/location may score 100 on its own, but still have a suboptimal temperature/RH for a given format. The temp/RH scores come from columns J-O of the Format Scores tab of questionDependencies.xlsx.
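A sketch of how such a per-format environment lookup might behave. The thresholds and band scores below are invented for illustration (the real values live in columns J-O of the Format Scores tab), though the example outputs mirror the 67/25 scores reported above:

```python
# Hypothetical sketch of a per-format environment lookup: even if a
# location scores 100 overall, its temperature/RH is rated against the
# format's own preferred ranges. All thresholds and band scores below
# are invented, not taken from questionDependencies.xlsx.

# (format, metric) -> list of (upper_bound, score), checked in order
ENV_SCORES = {
    ("aniline print", "temperature"): [(60, 100), (68, 67), (75, 33), (999, 0)],
    ("aniline print", "rh"):          [(35, 100), (45, 50), (55, 25), (999, 0)],
}

def environment_score(fmt, metric, measured):
    """Return the band score for a measured temperature (F) or RH (%)
    against the format's assumed preference table."""
    for upper_bound, score in ENV_SCORES[(fmt, metric)]:
        if measured <= upper_bound:
            return score
    return 0

# A room at 65 F / 50% RH might be fine for the location overall, yet
# still score 67 / 25 for this particular (hypothetical) format.
print(environment_score("aniline print", "temperature", 65))  # → 67
print(environment_score("aniline print", "rh", 50))           # → 25
```

Under a scheme like this, "perfect preservation environment" at the location level and "perfect for this format" are two different judgments, which would explain the 67 and 25 despite a location score of 100.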

adolski commented 9 years ago

I can confirm that issue 3 is a bug, and I just pushed a fix. Haven't looked into issue 4 yet, but it's probably a bug too -- will get to it soon.

areisemann commented 9 years ago

i've uploaded a spreadsheet to google docs that should help: "PSAP-Questions&Scores_2015.xlsx". almost all of it is from about a year ago, and it lays out, at least for some formats, how the scores were supposed to break down.
i've also added new content, which is the first sheet. this is where you'll find the "perfect" scores for the formats/processes as they are now.

areisemann commented 9 years ago

re: issues 1 and 4, i did a fresh pull before working on the scores yesterday. post-pull, i don't seem to have these issues anymore.

adolski commented 9 years ago

I just looked into issue 4 and can't reproduce it either. When you saw the error message ("area = 2 when it can only be a 1 or 0"), are you sure that "area" was the word?

areisemann commented 9 years ago

i believe so. i don't think it's an issue any longer though. after my last pull, the message hasn't been showing up.