mem48 closed this issue 12 months ago
E.g. split between commute, school, utility, leisure and other trips
I am in the process of getting the Scottish Household Survey data for the AADF calculations. I found this: Table TD3 in the diary tables has the overall purpose splits (not by mode).
This is really helpful, many thanks Juan!
I have created a repository for the Transport and Travel in Scotland - Scottish Household Survey data: nptscot/TT_-Scottish_Household_Survey. This can be used for including other trip purposes and for #63.
@Robinlovelace where should I upload the zip files?
Releases in the https://github.com/nptscot/TT_-Scottish_Household_Survey repo sounds good to me for open data, have just created a placeholder release that you should be able to edit: https://github.com/nptscot/TT_-Scottish_Household_Survey/releases/tag/1
Would it be better to use the Scottish Household Survey results from 2019? That seems like a more useful guide than 2020, which was in the midst of the lockdowns.
Thanks @Robinlovelace.
I just uploaded the files that I used. They are the results from 2014 to 2019.
For the analysis in the repo, I used the purpose_old
column to calculate the splits. However, it does not seem to match the 14 purposes in the results tables. There are two additional purpose classifications purpose_new
and purposenewv2
which will have to be grouped to reproduce the results of the tables.
Also, I used a combination of the individual weight and the travel diary weight; this might not be the correct approach, see this
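For illustration, combining the two weights and taking weighted purpose shares might look like the sketch below (the column names and numbers are made up for the example, not the actual SHS variable names):

```python
import pandas as pd

# Hypothetical travel diary extract; column names are assumptions.
trips = pd.DataFrame({
    "purpose_old": ["commute", "shopping", "commute", "leisure", "shopping"],
    "ind_wt": [1.2, 0.8, 1.0, 1.1, 0.9],   # individual weight
    "trav_wt": [0.9, 1.1, 1.0, 1.0, 1.2],  # travel diary weight
})

# One possible approach: multiply the two weights per trip, then take
# the weighted share of trips in each purpose category.
trips["wt"] = trips["ind_wt"] * trips["trav_wt"]
splits = trips.groupby("purpose_old")["wt"].sum() / trips["wt"].sum()
print(splits.round(3))
```

Whether multiplying the weights is statistically correct depends on how the SHS weights were constructed, so this is only a sketch of the mechanics.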
Here are shop polygons in Scotland; I have data for points too
500m spatial grid of shop density (for point objects)
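A 500m density grid like this can be built by snapping each point to a grid-cell index and counting points per cell. A minimal sketch, assuming the shop points are in a projected CRS in metres (e.g. British National Grid) and using made-up coordinates:

```python
from collections import Counter
from math import floor

CELL = 500  # grid cell size in metres


def cell_index(x, y, size=CELL):
    """Map a projected coordinate (in metres) to its grid-cell index."""
    return (floor(x / size), floor(y / size))


# Hypothetical shop points (x, y) in a metric projected CRS.
shops = [(325010.0, 673499.0), (325499.0, 673001.0), (326200.0, 673800.0)]

# Count shops per 500m cell; the first two points share a cell.
density = Counter(cell_index(x, y) for x, y in shops)
print(density)
```

In practice the same binning can be done with geopandas spatial joins against a grid layer, but the cell arithmetic above is the core of it.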
Looking good.
Any ideas on how to reduce the computation time of `odjitter::jitter()`, @Robinlovelace? Is it best to reduce the scale of the numbers in the `disaggregation_key`?
How long is it taking currently?
How many desire lines and subpoints do you have?
What value do you currently have for the disaggregation threshold?
In my experience jittering time is negligible compared with routing.
First thought: reduce the number of input desire lines and subpoints.
May have further thoughts with more information.
Currently it hasn't been able to run fully; it's taking too long, so I've had to stop the process.

The disaggregation threshold is 1000. I set a high threshold to reduce the number of output OD pairs. There are 85,000 rows in the input OD object.

The subpoints file is osm_highways, for both origins and destinations. In atumie we used osm_highways for origins and a 500m grid for destinations.

The disaggregation key uses the zone population instead of the number of shops, so it's much higher. I thought this would just be for weighting, though. I've tried dividing the population by 50,000 to get more reasonable numbers, and the jittering was still too slow to function.
> The disaggregation threshold is 1000. I made a high threshold to reduce the number of output OD pairs.

That won't necessarily speed up the results: only OD pairs with more than 1000 trips would be affected, which in this case is not many.
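To see why, each OD pair is split into roughly `ceil(trips / threshold)` output rows (my reading of how the disaggregation threshold behaves; the flow counts below are invented for illustration), so a high threshold leaves most pairs as a single row:

```python
from math import ceil

threshold = 1000
flows = [5, 120, 40, 2500, 980, 1800]  # hypothetical trips per OD pair

# Rows each pair contributes after disaggregation: only flows above
# the threshold are split; everything else stays as one row.
out_rows = [ceil(f / threshold) for f in flows]
print(out_rows, sum(out_rows))
```

So with 85,000 input rows and few flows above 1000, raising the threshold barely changes the output size, and the cost is dominated by the number of input desire lines and subpoints.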
I suggest using different subpoints. For shopping trips could you use shops from OSM?
Another thought @joeytalbot: have you tried running the code for the Edinburgh region?
Done, and all that's left to do is check the results, so superseded by #341.
To start with I'll check out the overall distribution of cycle trip purposes using NTS data.