shankari closed this issue 3 years ago.
This behavior existed even before the "to label" changes, but would typically not get triggered, since we would load all trips by default. So the chances that we would not add any elements to the list for a given call to readFromServer were slim. But now that we display a filtered list of trips by default, if the user has been good and labeled all their trips, we are not going to add any entries to the list from readFromServer, and so we keep contacting the server for more data until we reach the first trip.
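The runaway loop described above can be modeled with a minimal sketch (illustrative only, not the actual e-mission code; all names here are hypothetical): because labeled trips are filtered out of the display list, a fully-labeled timeline keeps the displayed list empty, so the loop walks back batch by batch until it runs out of history.

```javascript
// Minimal model of the runaway pagination: one server round trip per
// iteration, looping until something survives the "to label" filter or we
// reach the oldest trip. All names are illustrative.
function loadUntilVisible(fetchBatch, isUnlabeled, startTs, oldestTs) {
  const displayed = [];
  let currentEnd = startTs;
  let calls = 0;
  while (displayed.length === 0 && currentEnd > oldestTs) {
    calls += 1;
    const batch = fetchBatch(currentEnd);         // simulated server call
    displayed.push(...batch.filter(isUnlabeled)); // labeled trips are dropped
    currentEnd = batch.length ? batch[batch.length - 1].end_ts : oldestTs;
  }
  return { displayed, calls };
}

// A user whose trips are all labeled triggers one call per batch, all the
// way back to the first trip.
const allLabeled = (ts) =>
  Array.from({ length: 5 }, (_, i) => ({ end_ts: ts - (i + 1), labeled: true }));
const result = loadUntilVisible(allLabeled, (t) => !t.labeled, 100, 0);
```

With 100 units of history and batches of 5, this makes 20 round trips and still displays nothing, which is the same shape as the long run of /datastreams/find_entries/timestamp calls in the logs below.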
Checking server calls from the user who reported this, I find one call to get_range_ts, followed by a LOOONG sequence of calls to read trips:
2021-08-13 22:45:36,246:DEBUG:139659208341248:END POST /pipeline/get_range_ts redacted_user_id 0.0639808177947998
2021-08-13 22:45:36,323:DEBUG:139658956621568:END POST /profile/update redacted_user_id 0.004297971725463867
2021-08-13 22:45:37,042:DEBUG:139659191555840:END POST /profile/update redacted_user_id 0.003402233123779297
2021-08-13 22:45:38,103:DEBUG:139658965014272:END POST /datastreams/find_entries/timestamp redacted_user_id 0.8656806945800781
2021-08-13 22:45:38,105:DEBUG:139658981799680:END POST /datastreams/find_entries/timestamp redacted_user_id 0.8651690483093262
2021-08-13 22:45:38,123:DEBUG:139658939836160:END POST /datastreams/find_entries/timestamp redacted_user_id 0.8822128772735596
2021-08-13 22:45:39,643:DEBUG:139659216733952:END POST /datastreams/find_entries/timestamp redacted_user_id 0.9273574352264404
2021-08-13 22:45:44,262:DEBUG:139658948228864:END POST /datastreams/find_entries/timestamp redacted_user_id 0.35425877571105957
2021-08-13 22:45:48,087:DEBUG:139658973406976:END POST /datastreams/find_entries/timestamp redacted_user_id 0.40505051612854004
2021-08-13 22:45:52,298:DEBUG:139659199948544:END POST /datastreams/find_entries/timestamp redacted_user_id 0.4466133117675781
2021-08-13 22:45:56,418:DEBUG:139659208341248:END POST /datastreams/find_entries/timestamp redacted_user_id 0.4910106658935547
2021-08-13 22:46:02,535:DEBUG:139658956621568:END POST /datastreams/find_entries/timestamp redacted_user_id 0.23488211631774902
2021-08-13 22:46:06,892:DEBUG:139659191555840:END POST /datastreams/find_entries/timestamp redacted_user_id 0.25998401641845703
2021-08-13 22:46:11,012:DEBUG:139658965014272:END POST /datastreams/find_entries/timestamp redacted_user_id 0.2637605667114258
2021-08-13 22:46:15,589:DEBUG:139658981799680:END POST /datastreams/find_entries/timestamp redacted_user_id 0.2039635181427002
2021-08-13 22:46:19,319:DEBUG:139658939836160:END POST /datastreams/find_entries/timestamp redacted_user_id 0.22194910049438477
2021-08-13 22:46:22,049:DEBUG:139659216733952:END POST /datastreams/find_entries/timestamp redacted_user_id 0.24005985260009766
2021-08-13 22:46:23,948:DEBUG:139658948228864:END POST /datastreams/find_entries/timestamp redacted_user_id 0.2691798210144043
2021-08-13 22:46:27,012:DEBUG:139658973406976:END POST /datastreams/find_entries/timestamp redacted_user_id 0.24216985702514648
2021-08-13 22:46:29,371:DEBUG:139659199948544:END POST /datastreams/find_entries/timestamp redacted_user_id 0.2115011215209961
2021-08-13 22:46:31,988:DEBUG:139659208341248:END POST /datastreams/find_entries/timestamp redacted_user_id 0.2220454216003418
2021-08-13 22:46:35,304:DEBUG:139658956621568:END POST /datastreams/find_entries/timestamp redacted_user_id 0.25846004486083984
2021-08-13 22:46:36,711:DEBUG:139659191555840:END POST /datastreams/find_entries/timestamp redacted_user_id 0.2137467861175537
2021-08-13 22:46:39,630:DEBUG:139658965014272:END POST /datastreams/find_entries/timestamp redacted_user_id 0.22156310081481934
2021-08-13 22:46:42,134:DEBUG:139658981799680:END POST /datastreams/find_entries/timestamp redacted_user_id 0.25339365005493164
2021-08-13 22:46:45,173:DEBUG:139658939836160:END POST /datastreams/find_entries/timestamp redacted_user_id 0.27750182151794434
2021-08-13 22:46:47,384:DEBUG:139659216733952:END POST /datastreams/find_entries/timestamp redacted_user_id 0.21747136116027832
As the user indicated, this is followed by a call to the diary:
2021-08-13 22:47:03,686:DEBUG:139658973406976:END POST /pipeline/get_complete_ts redacted_user_id 0.0071010589599609375
2021-08-13 22:47:03,776:DEBUG:139658948228864:END POST /timeline/getTrips/2021-08-13 redacted_user_id 0.13147234916687012
If I am right, the steps to reproduce should be:
Yup, reproduced using my test phone, which has no unlabeled trips. @GabrielKS, I am sending you the token privately if you want to use it to test. It doesn't take one minute for me, but that is because I have a lot fewer trips than the user who reported this.
Here are the calls from the test phone
2021-08-14 06:03:29,753:DEBUG:139658956621568:END POST /pipeline/get_range_ts redacted_user_id 0.0309598445892334
2021-08-14 06:03:29,863:DEBUG:139659216733952:END POST /profile/update redacted_user_id 0.0034046173095703125
2021-08-14 06:03:30,375:DEBUG:139659199948544:END POST /profile/update redacted_user_id 0.003096342086791992
2021-08-14 06:03:30,941:DEBUG:139658981799680:END POST /datastreams/find_entries/timestamp redacted_user_id 0.14992070198059082
2021-08-14 06:03:30,941:DEBUG:139658973406976:END POST /datastreams/find_entries/timestamp redacted_user_id 0.1509857177734375
2021-08-14 06:03:30,948:DEBUG:139659191555840:END POST /datastreams/find_entries/timestamp redacted_user_id 0.1630094051361084
2021-08-14 06:03:31,552:DEBUG:139658965014272:END POST /datastreams/find_entries/timestamp redacted_user_id 0.06690454483032227
2021-08-14 06:03:34,268:DEBUG:139658939836160:END POST /datastreams/find_entries/timestamp redacted_user_id 0.062041521072387695
2021-08-14 06:03:35,788:DEBUG:139658948228864:END POST /datastreams/find_entries/timestamp redacted_user_id 0.06934094429016113
2021-08-14 06:03:37,200:DEBUG:139659208341248:END POST /datastreams/find_entries/timestamp redacted_user_id 0.04717898368835449
2021-08-14 06:03:37,824:DEBUG:139658956621568:END POST /datastreams/find_entries/timestamp redacted_user_id 0.04965639114379883
2021-08-14 06:03:38,546:DEBUG:139659216733952:END POST /datastreams/find_entries/timestamp redacted_user_id 0.05420088768005371
2021-08-14 06:03:39,264:DEBUG:139659199948544:END POST /datastreams/find_entries/timestamp redacted_user_id 0.04860949516296387
2021-08-14 06:03:40,162:DEBUG:139658981799680:END POST /datastreams/find_entries/timestamp redacted_user_id 0.05234265327453613
2021-08-14 06:03:42,303:DEBUG:139658973406976:END POST /datastreams/find_entries/timestamp redacted_user_id 0.05430126190185547
@GabrielKS please outline your proposed fix here since I don't think it is trivial. LMK if you want me to design a fix instead.
If the problem is as you've described, then I agree the fix will take some thought. I don't think I'll have time to design and implement a fix by Monday, so if we're still looking to deploy quickly it might be best if you took this one.
Given the tight timeframe, I want a relatively simple fix, ideally one that does not modify the data retrieval process.
For example, one potential fix would be to query the server for the most recent "to label" trip, and start retrieving from that point. But that would require some significant API redesign, since the current retrieval code depends only on the object keys, and not the other fields of the object. Should we support retrieval by other keys? Which subset of keys? Or should we support all keys? Is the server then turning into a proxy for mongodb? Also, if we use that approach, the filters will also need to be rewritten. If a user chooses "All Trips", for example, we need to retrieve all trips from now back to the first "to label" trip, since they won't be in memory any more.
While this may be the long-term solution, it is a significant rewrite that cannot be completed in a day.
But looking at the underlying issue, the real problem is that we keep requesting updates all the way back to the oldest trip without giving the user a chance to weigh in, like they could when we loaded all trips. So even if the user knows that they don't have any more "to label" trips, or doesn't want to deal with them now, they can't do anything to stop the retrieval. In other words, we are "scrolling" whether the user wants it or not.
So an alternate fix is to just stop doing that. After we retrieve the first batch, even if it is empty, don't keep loading. Instead, display a message saying something like "no labels found until X, scroll to load more". This will require changes to the scrolling code that @GabrielKS wrote, but hopefully it is a simple matter of adding additional state around the code which tracks when the scroll position is at the top and calls the reload function.
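A sketch of this alternative (hypothetical names, not the actual infinite-scroll code): load exactly one batch per trigger, and when nothing survives the filter, surface a status message instead of looping.

```javascript
// One batch per user action; if the batch is fully labeled, return a status
// message instead of fetching again. All names are illustrative.
function loadOneBatch(fetchBatch, state) {
  const batch = fetchBatch(state.currentEnd);
  state.shown.push(...batch.filter(state.isUnlabeled));
  state.currentEnd = batch.length ? batch[batch.length - 1].end_ts : state.oldestTs;
  // No recursion or looping here: the next batch only loads on the next
  // explicit scroll/tap, so the user stays in control.
  return state.shown.length
    ? null
    : "no labels found until " + state.currentEnd + ", scroll to load more";
}
```

The key design choice is that the function returns after one fetch no matter what; the UI layer decides whether to show the message or the newly visible trips.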
Here's where we make the first call to readDataFromServer
index.html:145 top = 36.73271942138672 < threshold = 75 checking state
index.html:145 calling readDataFromServer with {"threshold":75,"scrollLock":true,"fromBottom":1564.2672805786133,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1628111493.888}
And then we make a server call
index.html:145 getRawEntries: about to get pushGetJSON for the timestamp
index.html:145 About to return message {"key_list":["analysis/confirmed_trip"],"start_time":1627506693.888,"end_time":1628111493.888,"key_time":"data.end_ts"}
We get another scroll message but ignore it since we are still processing the existing call
index.html:145 At Sun Aug 15 2021 14:29:39 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = true
We received 5 more messages
index.html:145 DEBUG:Received batch of size 5
...
index.html:145 DEBUG:After adding batch of size 5 cumulative size = 19
index.html:145 DEBUG:new end time = 1627521169.141
And then broadcast that we are done. We get another scroll event
index.html:145 At Sun Aug 15 2021 14:29:40 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false
index.html:145 Debouncing scroll lock, = false
index.html:145 top = 0 < threshold = 75 checking state
so we load again and get a batch of size 3
index.html:145 getRawEntries: about to get pushGetJSON for the timestamp
index.html:145 About to return message {"key_list":["analysis/confirmed_trip"],"start_time":1626916369.141,"end_time":1627521169.141,"key_time":"data.end_ts"}
index.html:145 At Sun Aug 15 2021 14:29:40 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = true
index.html:145 DEBUG:Received batch of size 3
index.html:145 DEBUG:Broadcasting infinite scroll complete
But then we get a scroll message again, and get a batch of size 0
index.html:145 At Sun Aug 15 2021 14:29:40 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false
index.html:145 Debouncing scroll lock, = false
index.html:145 top = 36.73271942138672 < threshold = 75 checking state
index.html:145 DEBUG:Received batch of size 0
index.html:145 DEBUG:Broadcasting infinite scroll complete
And then we get a scroll message again, and get a batch of size 2
index.html:145 At Sun Aug 15 2021 14:29:41 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false
index.html:145 Debouncing scroll lock, = false
index.html:145 top = 0 < threshold = 75 checking state
index.html:145 Not paused for input, reloading data
index.html:145 calling readDataFromServer with {"threshold":75,"scrollLock":true,"fromBottom":1601,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1626798172.476}
index.html:145 DEBUG:Received batch of size 2
Until we finally end.
index.html:145 At Sun Aug 15 2021 14:29:44 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false
index.html:145 Debouncing scroll lock, = false
index.html:145 reached end
index.html:145 At Sun Aug 15 2021 14:29:44 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false
index.html:145 Debouncing scroll lock, = false
index.html:145 reached end
I wonder if we just need to fix this with standard debouncing logic. Once we have called readDataFromServer, we set up a timer, and don't make any additional calls until the timer runs out.
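That guard can be sketched in plain JavaScript (setTimeout standing in for Angular's $timeout; names are illustrative):

```javascript
// After kicking off a read, ignore further scroll-triggered reads until the
// quiet period expires. Strictly speaking this is throttling rather than
// classic trailing-edge debouncing, but it matches the behavior described.
function makeGuardedReader(readDataFromServer, delayMs) {
  let debouncing = false;
  return function onScrollNearTop() {
    if (debouncing) return false;  // still inside the quiet period; skip
    debouncing = true;
    setTimeout(() => { debouncing = false; }, delayMs);
    readDataFromServer();
    return true;                   // this trigger actually caused a read
  };
}

let reads = 0;
const guarded = makeGuardedReader(() => { reads += 1; }, 100);
```

As the experiments that follow show, the delay value matters: a short quiet period still lets closely spaced scroll bursts through.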
Setting a debouncing timeout of 100 ms seems to still be problematic.
top = 74.22686004638672 < threshold = 75 checking state
Not paused for input, reloading data
calling readDataFromServer with {"threshold":75,"debouncing":false,"scrollLock":true,"fromBottom":1526.7731399536133,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1628111493.888}
getRawEntries: about to get pushGetJSON for the timestamp
About to return message {"key_list":["analysis/confirmed_trip"],"start_time":1627506693.888,"end_time":1628111493.888,"key_time":"data.end_ts"}
At Sun Aug 15 2021 16:07:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = true debouncing = false
DEBUG:Received batch of size 5
DEBUG:Broadcasting infinite scroll complete
At Sun Aug 15 2021 16:07:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = true
At Sun Aug 15 2021 16:07:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = true
At Sun Aug 15 2021 16:07:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = false
After debouncing, = false
top = 74.22686004638672 < threshold = 75 checking state
Not paused for input, reloading data
calling readDataFromServer with {"threshold":75,"debouncing":false,"scrollLock":true,"fromBottom":1526.7731399536133,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1627521169.141}
getRawEntries: about to get pushGetJSON for the timestamp
At Sun Aug 15 2021 16:07:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = true debouncing = false
DEBUG:Received batch of size 3
But a debouncing timeout of 5 seconds seems fine
At Sun Aug 15 2021 16:14:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = false
After debouncing, = false
At Sun Aug 15 2021 16:14:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = false
After debouncing, = false
At Sun Aug 15 2021 16:14:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = false
After debouncing, = false
top = 49.77288842171299 < threshold = 75 checking state
Not paused for input, reloading data
calling readDataFromServer with {"threshold":75,"debouncing":false,"scrollLock":true,"fromBottom":1551.227111578287,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1628111493.888}
getRawEntries: about to get pushGetJSON for the timestamp
About to return message {"key_list":["analysis/confirmed_trip"],"start_time":1627506693.888,"end_time":1628111493.888,"key_time":"data.end_ts"}
At Sun Aug 15 2021 16:14:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = true debouncing = false
At Sun Aug 15 2021 16:14:15 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = true debouncing = false
DEBUG:Received batch of size 5
DEBUG:Broadcasting infinite scroll complete
At Sun Aug 15 2021 16:14:16 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = true
At Sun Aug 15 2021 16:14:16 GMT-0700 (Pacific Daylight Time) User has scrolled with scrollLock = false debouncing = true
Now that we have a working workaround (debouncing.patch.gz), and since the real problem is multiple calls to $scope.handleScroll when the user is not actually scrolling, I am trying some other, more principled fixes.
$scope.readDataFromServer = function() {
$scope.infScrollControl.scrollLock = true;
+ $timeout(function() {
+ $scope.$apply(function() {
+ $ionicScrollDelegate.freezeScroll(false);
+ });
+ }, 100); // 100 ms
+ $ionicScrollDelegate.freezeScroll(true);
$scope.infScrollControl.fromBottom = $ionicScrollDelegate.getScrollView().__contentHeight-$ionicScrollDelegate.getScrollPosition().top;
console.log("calling readDataFromServer with "+
JSON.stringify($scope.infScrollControl));
did not work
Tried to scroll down by a little so we wouldn't trigger the scroll behavior again.
@@ -152,6 +152,16 @@ angular.module('emission.main.diary.infscrolllist',['ui-leaflet',
$scope.infScrollControl.callback = undefined;
}
$scope.infScrollControl.scrollLock = false;
+ console.log("top = "+$ionicScrollDelegate.getScrollPosition().top
+ + " threshold = "+ $scope.infScrollControl.threshold);
+ if ($ionicScrollDelegate.getScrollPosition().top < $scope.infScrollControl.threshold) {
+ $scope.$apply(function() {
+ $ionicScrollDelegate.scrollBy($scope.infScrollControl.threshold);
+ console.log("After scrolling, top = "
+ +$ionicScrollDelegate.getScrollPosition().top
+ + " threshold = "+ $scope.infScrollControl.threshold);
+ });
+ }
});
didn't work
top = 0 threshold = 75
After scrolling, top = 0 threshold = 75
calling readDataFromServer with {"threshold":75,"scrollLock":true,"fromBottom":1578.128020327388,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1628111493.888}
top = 0 threshold = 75
After scrolling, top = 0 threshold = 75
calling readDataFromServer with {"threshold":75,"scrollLock":true,"fromBottom":1578.128020327388,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1627521169.141}
top = 22.871979672612042 threshold = 75
After scrolling, top = 22.871979672612042 threshold = 75
calling readDataFromServer with {"threshold":75,"scrollLock":true,"fromBottom":1578.128020327388,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1627402972.476}
top = 22.871979672612042 threshold = 75
After scrolling, top = 22.871979672612042 threshold = 75
Added some additional logging to see how the top and the threshold change and found some interesting results.
After the initial load, we scroll up
handleScroll: top = 198.2401580810547 threshold = 75
handleScroll: top = 144.2901611328125 threshold = 75
handleScroll: top = 97.6612548828125 threshold = 75
handleScroll: top = 91.38833618164062 threshold = 75
handleScroll: top = 25.8846435546875 threshold = 75
we read data from the server and then we "adjust" by scrolling further
calling readDataFromServer with {"threshold":75,"currTop":25.8846435546875,"scrollLock":true,"fromBottom":1575.1153564453125,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1628111493.888}
handleScroll: top = 2.250732421875 threshold = 75
handleScroll, after adjusting: top = 2.250732421875 threshold = 75
infiniteScrollComplete: top = 2.250732421875 threshold = 75
After scrolling, top = 2.250732421875 threshold = 75
The adjustment
$ionicScrollDelegate.scrollBy(0, -$scope.infScrollControl.fromBottom+clientHeight);
generates additional scroll events which move things around:
handleScroll: top = 1.024017333984375 threshold = 75
calling readDataFromServer with {"threshold":75,"currTop":1.024017333984375,"scrollLock":true,"fromBottom":1599.9759826660156,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1627521169.141}
handleScroll: top = 956 threshold = 75
handleScroll: top = 25.8846435546875 threshold = 75
handleScroll, after adjusting: top = 100.8846435546875 threshold = 75
infiniteScrollComplete: top = 100.8846435546875 threshold = 75
handleScroll: top = 100.8846435546875 threshold = 75
handleScroll: top = 956 threshold = 75
handleScroll: top = 1.024017333984375 threshold = 75
And so on
calling readDataFromServer with {"threshold":75,"currTop":1.024017333984375,"scrollLock":true,"fromBottom":1599.9759826660156,"reachedEnd":false,"pipelineRange":{"start_ts":1624582869.304,"end_ts":1628554975.735},"currentEnd":1627402972.476}
handleScroll, after adjusting: top = 1.024017333984375 threshold = 75
infiniteScrollComplete: top = 1.024017333984375 threshold = 75
After scrolling, top = 1.024017333984375 threshold = 75
handleScroll: top = 956 threshold = 75
handleScroll: top = 76.02401733398438 threshold = 75
These additional scrolls seem to be the reason why we get multiple calls to the server. I have an intuition that they may also be responsible for the misordered trips, although I can't construct a concrete sequence of events that would make that happen.
I have a workaround in place, but am going to dig deeper into the scrolling for now to see if I can fix it at that point.
I found some examples of "reverse infinite scroll using angularJS": https://stackoverflow.com/questions/19929487/implementing-a-reverse-infinite-scroll-using-nginfinitescroll-directive-in-angul
There are two libraries that claim to work for this. Maybe we should try one of them...
As a first step, I just commented out the scrolling code in the callback.
$scope.handleScroll = function() {
+ console.log("handleScroll: top = "
+ +$ionicScrollDelegate.getScrollPosition().top
+ + " threshold = "+ $scope.infScrollControl.threshold
+ + " scrollLock = "+$scope.infScrollControl.scrollLock);
if ($scope.infScrollControl.scrollLock) return;
if ($scope.infScrollControl.reachedEnd) {
console.log("reached end");
@@ -163,9 +183,21 @@ angular.module('emission.main.diary.infscrolllist',['ui-leaflet',
if ($ionicScrollDelegate.getScrollPosition().top < $scope.infScrollControl.threshold) {
$scope.infScrollControl.callback = function() {
// This whole "infinite scroll upwards" implementation is quite hacky, but after hours of work on it, it's the only way I could approximate the desired behavior.
+ /*
$ionicScrollDelegate.scrollBottom();
+ console.log("handleScroll: after scrolling to the bottom top = "
+ +$ionicScrollDelegate.getScrollPosition().top
+ + " threshold = "+ $scope.infScrollControl.threshold);
const clientHeight = $ionicScrollDelegate.getScrollView().__clientHeight;
$ionicScrollDelegate.scrollBy(0, -$scope.infScrollControl.fromBottom+clientHeight);
+ console.log("handleScroll, adjusting by = "
+ +(-$scope.infScrollControl.fromBottom+clientHeight)
+ +"new top = "
+ +($ionicScrollDelegate.getScrollPosition().top -
+ $scope.infScrollControl.fromBottom+clientHeight)
+ + " threshold = "+ $scope.infScrollControl.threshold);
+
+ */
};
$scope.readDataFromServer();
}
And visually, everything seemed to work fine even for:
@GabrielKS do you remember why you had to add this code to scroll to the bottom and then scroll back to the top? If we are already at the top, it seems like, at least in the case where we do get new entries added above, it should be fine to just stay there.
While loading data for the user who originally reported the problem, though, we run into the issue that opening the popover to fill in the values generates a subsidiary scroll event, which:
I saw similar behavior with the pullToRefresh code in the diary, which is why I ended up disabling it: https://github.com/e-mission/e-mission-phone/commit/85d55861e71c8e8c243b65265b56f9f8a0e1081c#diff-e694b32469d49b6682b8a207844dab1a6950a61c487d58216b109e9a0c1ccc85
Given that the scrolling code is so finicky, I might just disable it and switch to a "load more" button instead. Planning to work on that tomorrow morning.
@GabrielKS do you remember why you had to add this code to scroll to the bottom and then scroll back to the top? If we are already at the top, it seems like, at least in the case where we do get new entries added above, it should be fine to just stay there.
My thinking was that if we scroll up to the top to load more trips, the newly loaded trips should appear to load "above" the current scroll position, so the same trips appear on the screen after the load completes and the user can scroll up to see the newly loaded ones. This behavior is shown in https://github.com/e-mission/e-mission-phone/pull/769, and it was what we had back when we infinite scrolled downwards. Back then, the behavior would happen automatically because loading new trips would not change the position of existing trips on the page, but now when new trips load they get inserted in the page before existing trips and push the existing trips down. Thus, we have to automatically scroll back down to the trips we were looking at before.
The $ionicScrollDelegate exposes a method to scrollTo a given distance from the top of the page, but there is no built-in way to scroll to a given distance from the bottom of the page. However, we can scrollBottom and then scrollBy a given amount, which accomplishes the same thing. I tried using just scrollTo by getting the page height before and after and doing some math, but getting the page height didn't seem to work very well; in the end, the least buggy way I could find was the scrollBottom technique.
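For reference, the scrollBottom-then-scrollBy dance is algebraically equivalent to a single scrollTo from the top; here is the arithmetic as small pure functions (illustrative only):

```javascript
// scrollBottom() leaves the viewport top at contentHeight - clientHeight;
// scrollBy(0, -fromBottom + clientHeight) then shifts the top by that delta.
function topAfterScrollBottom(contentHeight, clientHeight) {
  return contentHeight - clientHeight;
}
function topAfterAdjust(contentHeight, clientHeight, fromBottom) {
  return topAfterScrollBottom(contentHeight, clientHeight)
       + (-fromBottom + clientHeight);
}
// The clientHeight terms cancel, so the final top is contentHeight - fromBottom:
// the previously visible trips keep their distance from the (now taller) bottom.
```

For example, with a 2000 px content area, a 600 px viewport, and fromBottom = 1500, the final top is 500 px, which is exactly contentHeight - fromBottom.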
One option I did not try is to insert some placeholder element in the DOM at the current scroll position before loading new trips and then use anchorScroll
to scroll back to it. Haven't thought through how easy or hard that would be.
I can confirm that going back to the original "upwards scrolling" implementation causes multiple calls in quick succession. I wasn't able to see if the popup behavior was the same because there was no gap in the reading where I could try to access them. I could only interact with the UI after all the loading was complete.
At that point, though, there was no additional scrolling, so I was able to set the values.
I can also confirm that this behavior is not related to the scroll order, but rather to the selection of a filtered screen as the default view. We did not see this behavior when the default was "AllTrips" because there would typically be some additional entries loaded. I reverted to the commit before the scroll-order change:
$ git checkout dee142999681fb9378eedf3679e4bcec0681a715
I then reloaded the label screen for the user with no "to label" entries. I got the same behavior.
I was able to reproduce the duplicated trips issue while testing against dee142999681fb9378eedf3679e4bcec0681a715. Unfortunately, I was not connected to the console at the time, but I could clearly see that it happened while we were making this quick succession of server calls.
Was able to reproduce it again! Not sure what race condition is causing the second read. Note that the first call overlaps with the initial calls to get the user inputs, which may be a clue...
Given that this appears to be a race condition with multiple calls to readDataFromServer, serializing the calls by requiring explicit button clicks should fix the out-of-order problem as well. BTW, the duplicate entry problem is now 100% reproducible for me.
I'm now going to tackle "Introduce a delay or a confirm button before the trip disappears from "To Label" so that users have time to correct mistakes" since I think it is pretty urgent.
The obvious solution is to add a timeout before marking the trip for removal. But we currently remove the trips by calling $scope.recomputeDisplayTrips(), which is not trip specific. So if we introduced a timeout of 3 seconds, and the user labeled trip1 at time t1 and trip2 at time t1+3, then trip2 would disappear immediately.
We can fix this by either:
Note that even if we do introduce a flag in the entry, we will still have to either:
If we don't, when the user edits one trip multiple times, we will schedule multiple deferred calls for the same item, and may end up deleting too close to the second edit.
Pros and cons:
Actually, $timeout returns a promise, so we can store that instead of a timer. And then we don't have to deal with the case in which we load the trips but the user has not yet edited anything. Let's try that first.
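A plain-JavaScript analogue of that idea (AngularJS's $timeout does return a promise, and $timeout.cancel() revokes it; here setTimeout/clearTimeout stand in, and all names are hypothetical):

```javascript
// Per-trip deferred removal: each labeled trip gets its own countdown, and
// re-editing a trip cancels and restarts its countdown, so we never delete
// too close to the latest edit.
function makeDeferredRemover(removeFn, delayMs) {
  const pending = new Map();             // trip id -> timer handle
  return function onTripLabeled(tripId) {
    if (pending.has(tripId)) {
      clearTimeout(pending.get(tripId)); // re-edit resets the countdown
    }
    const handle = setTimeout(() => {
      pending.delete(tripId);
      removeFn(tripId);                  // e.g. recompute the display list
    }, delayMs);
    pending.set(tripId, handle);
    return pending.size;                 // number of trips awaiting removal
  };
}

const removed = [];
const onLabeled = makeDeferredRemover((id) => removed.push(id), 50);
```

Because the state lives in a per-trip map rather than one shared timer, labeling trip2 shortly after trip1 no longer truncates trip2's delay.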
Well, that was easy: https://github.com/e-mission/e-mission-phone/pull/787
[Screenshots: Android | iOS]
Current status of these fit and finish fixes is:
Not sure that the three dots issue is a showstopper. I also can't reproduce it in the emulator. @GabrielKS will fix the walkthrough. At that point, all fit and finish changes will be done.
I had some users test out the delayed deletion this morning, with a specific focus on whether the delay was confusing. One of them preferred to have the trips disappear after a button click - i.e. the user would select the three popup labels and then click on the green check mark to confirm correctness and cause the trip to disappear.
That is a significant departure from the planned "auto-disappearing" functionality, and it actually increases the number of clicks. So I don't think we should go down that route for now. But it is something to consider for a future change if we get a lot of confusion and complaints with the current approach.
I also wonder whether we should lower the delay to 30 secs. When I saw my kids using the screen, by 1 minute, they had forgotten about the labeled trip and were kind of surprised that it went away. @GabrielKS what do you think?
I wouldn't mind 30 seconds. I agree that 1 minute is kind of a long time.
Adjusted in https://github.com/e-mission/e-mission-phone/pull/790. I also added the day of the week after seeing my daughter try to figure out which day 2nd Aug was (for example).
@GabrielKS just writing these down so you can finish them off.