The average occupancy plots' values jump a lot:
The reason might be the following: recently I changed the scraping frequency from every 15 minutes to every 20 minutes. This barely changes the app or the data, but it makes fewer requests to AWS and lets us keep using the free tier.
However, we now have data every 15 minutes (:00, :15, :30, :45) up until about a week ago, and data every 20 minutes (:00, :20, :40) after that. When we compute the average, the "old" data sits on the 15-minute grid and the "new" data on the 20-minute grid, so we end up with points at :00, :15, :20, :30, :40, :45. Adjacent points are therefore averaged over different subsets of the data, which is probably why the values jump up and down so much. We should unify the grids, so that the :15, :30, :45 readings get mapped onto :20 and :40. We can do this with linear interpolation: for example, if the occupancy at :15 is 22 and at :30 it's 28, then at :20 it would be 22 + (5/15) * (28 - 22) = 24. Obtaining this number is a simple math problem :)
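As a minimal sketch of that mapping (the `interpolate_at` helper and the sample timestamps here are made up for illustration, not part of the app):

```python
from datetime import datetime

def interpolate_at(times, values, target):
    """Linearly interpolate the reading at `target` from the two
    recorded readings that bracket it."""
    for i in range(len(times) - 1):
        t0, t1 = times[i], times[i + 1]
        if t0 <= target <= t1:
            # Fraction of the way from t0 to t1 (timedelta division)
            frac = (target - t0) / (t1 - t0)
            return values[i] + frac * (values[i + 1] - values[i])
    raise ValueError("target is outside the recorded range")

# Old 15-minute readings: occupancy 22 at :15 and 28 at :30
times = [datetime(2024, 1, 1, 10, 15), datetime(2024, 1, 1, 10, 30)]
values = [22, 28]

# Reading mapped onto the new 20-minute grid point at :20
print(interpolate_at(times, values, datetime(2024, 1, 1, 10, 20)))  # 24.0
```

The same call with the :30 and :45 readings produces the :40 value, so one pass over the old rows is enough to re-grid them before averaging.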
If the ripple persists after unifying these values, we'll have to investigate further.