patwater opened 9 years ago
Also, is there any documentation for calculate_heat_values.R? It's a bit opaque, and I'm curious to see the underlying math driving the calculation. I imagine it's some sort of inverse-distance logic, though I'm curious which metric you used (Euclidean, Manhattan, whatever) and how it's justified. The report provided is very impressive, though "local intensity" leaves a lot to the imagination.
From my understanding of the code, there is a function called calculate_heat_values.R in the Functions folder. It appears to use kernel density estimation with a grid resolution of 0.01.
Anticipating this question, I actually already have a comparison between the KDE function we use and the standard KDE in the MASS package: `.\CODE\not used\kde_comparison`
Ironically, I was initially annoyed that Allstate had created a new KDE function. I believe my exact thoughts were "Why not use the KDE in MASS? Show-offs." Upon examination, I saw that their code is only a slight modification of the function in MASS, but their version increases computational efficiency by skipping the parts we don't need. My annoyance was quickly converted into appreciation, and I kept the comparison for other people's future reference.
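For anyone following along, the core of what `MASS::kde2d` computes is a bivariate Gaussian product kernel evaluated over a regular grid, and the product kernel factors into two small matrices and one matrix multiply. Here is a minimal Python sketch of that computation (this mirrors the general shape of `kde2d`, including its internal `h / 4` scaling, but it is an illustration I wrote, not the Allstate code or the MASS source):

```python
import numpy as np

def kde2d(x, y, h, n=25):
    """Sketch of the bivariate KDE that MASS::kde2d computes: a Gaussian
    product kernel evaluated on an n x n grid over the data range."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    gx = np.linspace(x.min(), x.max(), n)   # grid points in x
    gy = np.linspace(y.min(), y.max(), n)   # grid points in y
    hx, hy = h[0] / 4, h[1] / 4             # kde2d scales the bandwidths down by 4
    ax = (gx[:, None] - x[None, :]) / hx    # standardized x distances, shape (n, len(x))
    ay = (gy[:, None] - y[None, :]) / hy    # standardized y distances
    phi = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)
    # The product kernel factors, so the whole grid is one matrix product.
    z = phi(ax) @ phi(ay).T / (len(x) * hx * hy)
    return gx, gy, z
```

As in `kde2d`, `z[i, j]` is the density estimate at `(gx[i], gy[j])`; a project-specific version can drop whatever it doesn't need (e.g. return only `z`).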
This heatmap function is a great example of a limitation of the "evaluation" nature of this project. It's very specific to this project, and even the density estimates are hard-coded. (You can blame me for this short-sightedness!) It would be better to pull these functions out into a much more generic package that calculates scores for arbitrary data (probably inspections). Then it would be nice to see that package applied to this evaluation. This is high on my personal wishlist.
On Thu, Jul 30, 2015 at 7:49 AM, Rajiv Shah notifications@github.com wrote:
Thanks for the detail, and yeah, I know from experience how tough it is to think about generalizing when you're in the thick of just getting it right for the immediate task at hand, i.e. Chicago! Do you know offhand how the KDE bandwidth was selected?
```r
h <- if (missing(h)) c(bandwidth.nrd(x), bandwidth.nrd(y))
h <- h / 4
```
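For context (my own annotation, not part of the thread's code): if I remember the MASS source correctly, `bandwidth.nrd` is the normal reference rule of thumb, 4 × 1.06 × min(sd, IQR/1.34) × n^(−1/5). A small Python sketch of that rule, assuming numpy's default quantile interpolation, which matches R's type 7:

```python
import numpy as np

def bandwidth_nrd(x):
    """Normal reference bandwidth, as in MASS::bandwidth.nrd:
    4 * 1.06 * min(sd, IQR / 1.34) * n^(-1/5)."""
    x = np.asarray(x, float)
    q75, q25 = np.percentile(x, [75, 25])            # type-7 quantiles, as in R
    scale = min(x.std(ddof=1), (q75 - q25) / 1.34)   # robust spread estimate
    return 4 * 1.06 * scale * len(x) ** (-1 / 5)
```

The `h / 4` on the next line of `kde2d` then walks the leading 4 back, leaving roughly 1.06 × scale × n^(−1/5), the classic Silverman-style normal reference bandwidth.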
Mostly just curious about that. And do you know how much computational efficiency is gained from that modification? Is it material when generating the data to run the model? We're talking about an upper bound of 337k observations for the crime dataset, so is run time really that much of an issue?
More broadly though it does seem that best practice as we look to apply these sorts of city analytics projects to more than just a single city would be to use a generic package (ideally off CRAN) so it's easier to redeploy.
Cheers,
PA
Out of all the scripts, the heatmap calculations are by far the most time-intensive. For example, last night it took 668 seconds to run the heat map script, while the next longest was the business download, which took only 142 seconds. I have not benchmarked how much time the alternative KDE calculation saves.
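I haven't benchmarked the Allstate version specifically, but the generic speed story for kde2d-style code is that the Gaussian product kernel factors, which turns a double loop over grid cells into a single matrix product. A hypothetical Python benchmark of that difference (both functions compute the same density; the names, sizes, and bandwidths here are made up for illustration):

```python
import time
import numpy as np

def naive_kde2d(x, y, gx, gy, hx, hy):
    """Double loop over grid cells: evaluates the kernel per (cell, point) pair."""
    z = np.zeros((len(gx), len(gy)))
    for i, u in enumerate(gx):
        for j, v in enumerate(gy):
            z[i, j] = np.mean(np.exp(-((u - x) / hx) ** 2 / 2
                                     - ((v - y) / hy) ** 2 / 2))
    return z / (2 * np.pi * hx * hy)

def separable_kde2d(x, y, gx, gy, hx, hy):
    """Same density via the product-kernel factorization: one matmul."""
    phi = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)
    return phi((gx[:, None] - x) / hx) @ phi((gy[:, None] - y) / hy).T / (len(x) * hx * hy)

rng = np.random.default_rng(0)
x, y = rng.normal(size=2000), rng.normal(size=2000)
gx = gy = np.linspace(-3, 3, 50)

for f in (naive_kde2d, separable_kde2d):
    t0 = time.perf_counter()
    f(x, y, gx, gy, 0.3, 0.3)
    print(f"{f.__name__}: {time.perf_counter() - t0:.3f}s")
```

On the 337k-observation crime dataset the gap between these two styles would be much larger, which is presumably the kind of saving the modified function is after.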
There is a discussion of the original kde2d function from MASS on page 131 (really it starts on page 126) of the accompanying book, Modern Applied Statistics with S by W.N. Venables and B.D. Ripley. I have the 4th edition, so the page numbers may vary depending on which one you're using. I'm no expert on kernel density bandwidth selection, but I trust whatever Venables and Ripley have to say about it.
BTW, I do think it would be good to test the effectiveness of different assumptions in the density estimation, and I'm not sure how much of this was done at Allstate.
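Agreed, and one cheap sensitivity check, sketched below in Python (entirely hypothetical, not from the repo), is to score the observations under two different bandwidths and compare the resulting rankings, since a model consuming "local intensity" mostly cares about relative order:

```python
import numpy as np

def kde_at_points(x, h):
    """1-D Gaussian KDE evaluated at each observation point."""
    d = (x[:, None] - x[None, :]) / h
    return np.exp(-d ** 2 / 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def rank(v):
    """0-based rank of each element (no ties expected for continuous data)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

rng = np.random.default_rng(42)
x = rng.normal(size=500)

dense_h, wide_h = 0.2, 0.8                   # two bandwidth assumptions to compare
a, b = kde_at_points(x, dense_h), kde_at_points(x, wide_h)
rho = np.corrcoef(rank(a), rank(b))[0, 1]    # Spearman correlation of the rankings
print("rank correlation between bandwidth choices:", round(rho, 3))
```

If the ranking is stable across reasonable bandwidths, the hard-coded choice matters less than it might appear; if not, it's worth testing in the evaluation.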
Part of the challenge with redeploying this cool model is that Chicago's food inspection data is, as you can imagine, somewhat different from, say, LA's, which is run by the county rather than the city. This list in the readme is a great conceptual starting point for digging into what data is needed, though more granularity is required for redeployment:
- Business Licenses
- Food Inspections
- Crime
- Garbage Cart Complaints
- Sanitation Complaints
- Weather
- Sanitarian Information
The report is some help, though really it'd be nice to have simple, clear metadata on which data fields are used, what values they take on, and some quick data provenance about how those values were measured.
I suppose you can get that by digging into and comparing, say, this restaurant inspection field from LA, https://data.lacounty.gov/Public-Health/2014-Restaurants-And-Markets-Violations/kbia-7mpx, with what you used at Chicago, though it'd be nice to know right up front: Food Inspection [First field used] [Second field used] etc. Mostly just food for thought as a few of us play with redeploying this model and as a field we develop best practices for redeploying these sorts of pioneering tools in other cities.
Cheers,
PA
PS Say hi to Tom for me. Gonna see if in the MacArthur event in NYC next week.