$ grep Glasgow data/covid-19-cases-uk.csv | tail
2020-04-03,Scotland,S08000031,Greater Glasgow and Clyde,779
2020-04-04,Scotland,S08000031,Greater Glasgow and Clyde,851
2020-04-05,Scotland,S08000031,Greater Glasgow and Clyde,931
2020-04-06,Scotland,S08000031,Greater Glasgow and Clyde,984
2020-04-07,Scotland,S08000031,Greater Glasgow and Clyde,1094
2020-04-08,Scotland,S08000031,Greater Glasgow and Clyde,1166
2020-04-09,Scotland,S08000031,Greater Glasgow and Clyde,1251
2020-04-10,Scotland,S08000031,Greater Glasgow and Clyde,1314
2020-04-11,Scotland,S08000031,Greater Glasgow and Clyde,1387
2020-04-12,Scotland,S08000031,Greater Glasgow and Clyde,"1,449"
Seems a bit odd to suddenly switch the way numbers are formatted. Again, nothing that can't be dealt with, along with the "1 to 4" and NaN type things, and it's perfectly well-formed CSV... but it does seem a bit odd when that region has never used the comma thousands numeric formatting before (and I don't think any of the other regions with counts over 1000 do either).
@timday thanks for reporting this. I agree it's inconsistent, and have fixed the data and the underlying code so that commas are removed from numbers for this case.