DigitalCommons / mykomap

A web application for mapping initiatives in the Solidarity Economy

Multi data set reloading inconsistency #90

Closed ColmMassey closed 3 years ago

ColmMassey commented 3 years ago

When loading the dual data sets on

https://dev.newbridge.solidarityeconomy.coop/map.html

there seems to be a timing bug where the map appears before all the data has loaded, and then it proceeds with the partial data set.
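
The expected behaviour is roughly this sketch (hypothetical function names, not the actual mykomap code): wait for every dataset before rendering.

    // Wait for *every* dataset before rendering, rather than
    // proceeding as soon as the first one arrives.
    const datasets = ["newbridge", "deep-adaptation"];
    Promise.all(datasets.map(id => loadDataset(id)))
      .then(results => renderMap(results.flat()))
      .catch(err => console.error("A dataset failed to load:", err));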

wu-lee commented 3 years ago

I am not sure if this is related, but sometimes I get a blank map visiting that link, and the browser console contains this stack trace:

Uncaught Error: Invalid LatLng object: (0, NaN)
    j https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    unproject https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    pointToLatLng https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    unproject https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    _getBoundsCenterZoom https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    fitBounds https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    fitBounds https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:8146
    onInitiativeComplete https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:7403
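
The (0, NaN) suggests fitBounds is being fed an incomplete coordinate. A plausible guard, sketched against Leaflet's public API (the initiative fields here are assumptions), would be to validate the bounds first:

    // Build bounds only from initiatives with finite coordinates,
    // and skip fitBounds entirely when the result is invalid.
    const bounds = L.latLngBounds(
      initiatives
        .filter(i => Number.isFinite(i.lat) && Number.isFinite(i.lng))
        .map(i => [i.lat, i.lng])
    );
    if (bounds.isValid()) {
      map.fitBounds(bounds);
    }
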
sunnydean commented 3 years ago

@ColmMassey is deep-adaptation a copy of newbridge?

wu-lee commented 3 years ago

The deep-adaptation data is generated using the same "lime-survey core" conversion as newbridge, yes. This accepts any schema which has a superset of the "lime survey core" schema's fields. The downloader is not a copy, however - it uses the URL downloader as used by mutual-aid.

sunnydean commented 3 years ago

Hey @wu-lee, a couple of questions before I can do anything: Are there any major differences then? Why are we using two datasets that overlap here? Are we doing some special same-as handling? If we are, where can I find the query/code? If we are not, should we allow duplicates?

Thx

sunnydean commented 3 years ago

Also, where is deep-adaptation hosted? I couldn't find it on dev.data.

ColmMassey commented 3 years ago

I suggest you test multi-dataset loading with two datasets you are familiar with. That will make it easier to narrow down where the problems lie.

ColmMassey commented 3 years ago

@ColmMassey is deep-adaptation a copy of newbridge?

The source of the DeepAdaptation data is a Google spreadsheet. The DeepAdaptation sheet has 'candidates' for inclusion. As those organisations fill in the survey and get accepted, they are periodically removed from the candidate set by hand.

ColmMassey commented 3 years ago

Are there any major differences then?

Yes.

Why are we using two datasets that overlap here?

Explained above I hope.

Are we doing some special same-as handling?

No need to, and I wasn't expecting that to be enabled.

If we are, where can I find the query/code?

NA

If we are not, should we allow duplicates?

YES

wu-lee commented 3 years ago

So, I see Colm has answered this before I could finish and post this. But:

If you need to see the code in action,

Are there any major differences then?

No, not really. ([later] Although Colm said yes... I guess it depends what you are asking about. I mean the data conversion is essentially the same, and the output data is very similar in many cases but will have arbitrary differences in content.)

Why are we using two datasets that overlap here?

The Deep Adaptation data set is an incubator for candidates for the Newbridge set, I think.

Are we doing some special same-as handling? If we are, where can I find the query/code?

No, no SAMEAS.

If we are not, should we allow duplicates?

The deep-adaptation dataset does not share an identifier namespace with any of NOM (in fact they all have their own namespaces). So duplicates of Newbridge (or any NOM initiative) are okay.

wu-lee commented 3 years ago

I should add this: the Deep Adaptation data is added to the newbridge-project map site, and I noticed that its addition to the dev branch hadn't been pushed yet (as I said, still nominally work in progress). Now pushed.

wu-lee commented 3 years ago

See also https://github.com/SolidarityEconomyAssociation/open-data-and-maps-outreach/issues/217

sunnydean commented 3 years ago

The query + endpoint we are currently using are not working, @wu-lee? And again, I can't seem to find it on the dev server. (We should probably arrange a short explainer call.)

This is why we're getting this error sometimes BTW:

Uncaught Error: Invalid LatLng object: (0, NaN)
    j https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    unproject https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    pointToLatLng https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    unproject https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    _getBoundsCenterZoom https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    fitBounds https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:6300
    fitBounds https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:8146
    onInitiativeComplete https://dev.newbridge.solidarityeconomy.coop/map-app/app.js:7403

I'll proceed as @ColmMassey suggested and fix it using data I'm familiar with.

sunnydean commented 3 years ago

This bug is connected to three other bugs:

  1. there seems to be a timing bug where the map appears before all the data has loaded, and then it proceeds with the partial data set (as originally described, this behaviour is coded in and allowed)
  2. Caching doesn't work for multiple datasets (and that causes other problems, potentially related to 1)
  3. Potential issues with deep-adaptation (I might be wrong and it might be working as expected, @wu-lee to advise?)

@ColmMassey should I create the two issues and leave this one?

ColmMassey commented 3 years ago

@ColmMassey should I create the two issues and leave this one?

So leave this ticket open to cover the timing issue.

https://github.com/SolidarityEconomyAssociation/newbridge-project/issues/2 can cover any problems relating to the creation of the deepadaptation graph.

https://github.com/SolidarityEconomyAssociation/sea-map/issues/79 covers making caching optional, which would decouple the issues, as both datasets are small and won't have big audiences.

https://github.com/SolidarityEconomyAssociation/sea-map/issues/25 already covers implementing multi-set caching when we are ready for it.

sunnydean commented 3 years ago

OK, so my course of action will be to decouple the issues first (#79 covers making caching optional) then follow-up with fixing this issue.

sunnydean commented 3 years ago

fixed, will push this to a map soon

note: if one of the datasets does not load for whatever reason, the screen will be left perpetually loading.

I can add a timeout so that, after a certain amount of time, the map is shown with the partially loaded dataset. Should we add that as an issue?
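
Roughly this shape (a sketch with assumed names and an assumed 30s limit, not the final code):

    // Race the full load against a timer; on timeout, render whatever
    // initiatives have arrived so far rather than spinning forever.
    const TIMEOUT_MS = 30000; // assumed limit
    const timer = new Promise(resolve =>
      setTimeout(() => resolve("timeout"), TIMEOUT_MS));
    Promise.race([Promise.all(datasetPromises), timer])
      .then(() => renderMap(loadedInitiatives())); // may be partial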

ColmMassey commented 3 years ago

fixed, will push this to a map soon

Where can I see this fix? Which of the symptoms discussed above have been fixed?

ColmMassey commented 3 years ago

I can add a timeout for that, which after a certain amount of time, will show the map with a partially loaded dataset? Should we add that as an issue?

Can you please write one with enough detail for Nick to do it?

sunnydean commented 3 years ago

there seems to be a timing bug where the map appears before all the data has loaded, and then it proceeds with the partial data set (as originally described, this behaviour is coded in and allowed)

This will be addressed and fixed

Caching doesn't work for multiple datasets (and that causes other problems, potentially related to 1)

This is handled only via a workaround, by disabling caching; i.e. currently one cannot use caching with multiple data sources.

Potential issues with deep-adaptation (I might be wrong and it might be working as expected, @wu-lee to advise?)

This has not been addressed and should be a separate issue, as I assume there is something wrong with the data (as described in https://github.com/SolidarityEconomyAssociation/sea-map/issues/90#issuecomment-744086953).

@wu-lee I'm attempting to deploy an example map on playground with the following config:

    {
      "namedDatasets": ["ica", "dotcoop"],
      "namedDatasetsVerbose": ["test map"],
      "filterableFields": [{ "field": "country", "label": "Countries" }],
      "showDatasetsPanel": true,
      // "searchedFields" : ["activities"],
      " ": "ICA Map",
      "defaultLatLng": ["50.84999", "4.39434"],
      "useCache": false
    }

It does not seem to get deployed to http://dev.playground.solidarityeconomy.coop/ for some reason? Is playground set up for this?

I can also deploy an example to dev.ica.solidarityeconomy.coop @ColmMassey to show the live functionality, if that would not conflict with anything you're currently showing there for ICA?

sunnydean commented 3 years ago

created this issue for timeouts: https://github.com/SolidarityEconomyAssociation/sea-map/issues/94

ColmMassey commented 3 years ago

Caching doesn't work for multiple datasets (and that causes other problems, potentially related to 1) This is done only with a workaround by disabling caching, i.e. currently one cannot use caching for multiple data sources.

So we must always disable caching in the configuration when we have multiple datasets, until we support multi-dataset caching. Understood.
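
For multi-dataset configs, that means something like this (dataset ids illustrative; useCache as in the config posted above):

    {
      "namedDatasets": ["newbridge", "deep-adaptation"],
      "useCache": false
    }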

ColmMassey commented 3 years ago

Potential issues with deep-adaptation (I might be wrong and it might be working as expected, @wu-lee to advise?)

The data is fine, we believe.

ColmMassey commented 3 years ago

it does not seem to get deployed to http://dev.playground.solidarityeconomy.coop/ for some reason? is playground set up for this?

We haven't used playground for ages, so it may not be set up.

Test on https://dev.newbridge.solidarityeconomy.coop/ and use the Oxford data instead of deep-adaptation, if you still have a problem with that dataset.

ColmMassey commented 3 years ago

Test on https://dev.newbridge.solidarityeconomy.coop/ and use the Oxford data instead of deep-adaptation, if you still have a problem with that dataset.

Then if that works, we switch to loading deep-adaptation on its own, and then newbridge + deep-adaptation.

ColmMassey commented 3 years ago

While you are here: how easy is it to distinguish between the origins of initiatives when rendering the directory panel and assigning icons and marker detail to an initiative for display on the map?
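
E.g. could each marker pick its icon from the initiative's source dataset, along these lines (a hypothetical Leaflet sketch, not our actual code)?

    // Choose a marker icon by source dataset (assumes each initiative
    // records which dataset it came from; defaultIcon defined elsewhere).
    const icons = {
      "newbridge": L.icon({ iconUrl: "map-app/images/newbridge.png" }),
      "deep-adaptation": L.icon({ iconUrl: "map-app/images/deep-adaptation.png" })
    };
    function markerFor(initiative) {
      return L.marker([initiative.lat, initiative.lng], {
        icon: icons[initiative.dataset] || defaultIcon
      });
    }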

wu-lee commented 3 years ago

I'll bear that in mind when I'm digging.

wu-lee commented 3 years ago

Ok, I've deployed a lightly tested version of a fix for this to dev.newbridge. It seems to work, although I've had to disable data caching to make it reliable (so we will need to address that issue too).

The sea-map version is still misleading, will try and find a solution for that.

ColmMassey commented 3 years ago

The sea-map version is still misleading, will try and find a solution for that.

Do you mean the displayed version #?

wu-lee commented 3 years ago

Yes, the displayed version - although in this case it happens to be kinda correct (confusingly!)

ColmMassey commented 3 years ago

I have just noticed activity on both versions of the DeepAdaptation spreadsheet. Can you confirm which is the one people should be updating?

wu-lee commented 3 years ago

I made a copy for testing, which will be under my personal Google account. But I think I deleted that, as I can't find it in my account's drive.

There do seem to be two others in my drive:

Going via the menu Tools -> Script Editor on both, the latter seems to be an old version. The former is the one which is synching to Github, and is the one people should be updating.

wu-lee commented 3 years ago

Ok, I've modified sea-map's package.json file so deployment with a more meaningful version in the sidebar is possible.

To explain briefly: deploy from the sea-map project with the ext/ directory symlinked to a website project (as is normal in development mode), using npm run build; npm run dev-deploy (rather than the production way, which is to use npm run build; npm run deploy in the website project).

Then, version.json and hence the sidebar will have:

However, there's now a suffix which consists of:

For example, in this case the version shows as: 0.1.65_7e5fbda-dev.
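
(For the curious: the suffix could be assembled at build time along these lines. This is a sketch assuming the package.json version, the short git commit hash, and a -dev marker; the actual build script may differ.)

    // Node sketch: compose <package version>_<short git hash>-dev
    // and write it to version.json for the sidebar to display.
    const { execSync } = require("child_process");
    const fs = require("fs");
    const pkg = require("./package.json");
    const hash = execSync("git rev-parse --short HEAD").toString().trim();
    const version = `${pkg.version}_${hash}-dev`; // e.g. 0.1.65_7e5fbda-dev
    fs.writeFileSync("version.json", JSON.stringify({ version }));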

I'll add this info to the docs for sea-map.

ColmMassey commented 3 years ago

Going via the menu Tools -> Script Editor on both, the latter seems to be an old version. The former is the one which is synching to Github, and is the one people should be updating.

I am confused. The latter, Deep Adaptation Map Data (restricted), seems newer than Deep Adaptation Map Data? Isn't Deep Adaptation Map Data (restricted) the one created recently by Hannah K to resolve the security issues we had with the old one? Can you check you have the ages the right way around?

wu-lee commented 3 years ago

Deep Adaptation Map Data (restricted) seems newer than Deep Adaptation Map Data?

Could that just be because someone modified it more recently?

But empirically too, I don't seem to have it the wrong way around: the former triggers updates, which you can see here, the latter does not.

I can only guess that maybe the spreadsheet was copied after I started connecting it up, but before I finished? The copy (the latter sheet) doesn't seem to have the latest scripts etc. attached, and isn't the one which is configured to be downloaded in the data-capture project.

Is there any way to set the restrictions in-place without copying? If not, then:

ColmMassey commented 3 years ago

Could that just be because someone modified it more recently?

I was looking at the file history, not how recently it was changed.

Let's go through this in a call.

wu-lee commented 3 years ago

Ok, I've attempted to re-connect the github action in the data-capture project to the new restricted spreadsheet. However, although I can set up the notification from Google to GitHub, the sheet is not readable to non-authenticated users so GitHub can't then access it to download the data. (On the earlier spreadsheet, there was no authentication needed to read the data.)

Authenticating with Google to get this isn't very simple, because it uses OAuth2. So I will need to think about how to work around this... perhaps I can adjust the notification to include the data in CSV format, avoiding the need for GitHub to connect to the Google sheet. I seem to remember there being drawbacks to this, however.
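
Something like this, perhaps (an Apps Script sketch; the repo, workflow and input names are placeholders, and the token would come from script properties):

    // Trigger the data-capture workflow via workflow_dispatch, carrying
    // the CSV in the inputs so GitHub never has to read the sheet itself.
    function notifyGithub(csvString) {
      var token = PropertiesService.getScriptProperties().getProperty("GITHUB_TOKEN");
      var url = "https://api.github.com/repos/OWNER/REPO/actions/workflows/WORKFLOW/dispatches";
      UrlFetchApp.fetch(url, {
        method: "post",
        contentType: "application/json",
        headers: { Authorization: "token " + token },
        payload: JSON.stringify({ ref: "master", inputs: { csv: csvString } }),
        muteHttpExceptions: true // inspect error responses instead of throwing
      });
    }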

wu-lee commented 3 years ago

I think I've got this idea working: the notification now includes the data, so there is no need for GitHub to download it (nor, therefore, to authenticate with Google in this case). See commits 8a577db6d2e4e31eb623d61f8dcaf43e7082c245 and 281e1e7c6e202b2d9b8611fdb00c18f90df677b1 on the data-capture project master branch (copied onto the deep-adaptation branch here: 7582d5d43a3ae978c6f6e62de82ad6e6071f21b7).

There were a bunch of edits whilst I was doing this, but I've preserved these.

I notice the older data has the fields we wanted to hide included in it. Should I take the effort to rewrite that personal data out of the commit history?

wu-lee commented 3 years ago

Ok, post-meeting, we agreed to

All of which I've done...

However, I've hit a new problem:

    Exception: Request failed for https://api.github.com returned code 422. Truncated server response: {"message":"inputs are too large.","documentation_url":"https://docs.github.com/rest/reference/actions#create-a-workflow-dispatch-event"} (use muteHttpExceptions option to examine full response) at WEBHOOK(Code:38:15)

This error is from the triggers log on the Google Spreadsheet. I think it means that the data with the descriptions is now too big and has hit some limit of GitHub. Which is exasperating as it means I'll need to rethink this again.

ColmMassey commented 3 years ago

How about limiting the description to a single sentence? But that would require a complex extra column... Hmmm.

wu-lee commented 3 years ago

Ok, so I've dug around and Google Apps Script has both gzip and base64-encode functions, so I compressed the CSV string and stored it base64-encoded in the notifier field, which reduced the size of the field considerably. There's still a limit, but we aren't hitting it yet. The receiving end can decode and uncompress this fairly simply with some shell scripting, resulting in a CSV file again.
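
The sending side boils down to something like this (a sketch; the function name is mine):

    // Compress the CSV and base64-encode it so it fits into the
    // workflow_dispatch input field as a compact ASCII string.
    function encodeCsvForDispatch(csvString) {
      var blob = Utilities.newBlob(csvString, "text/csv", "data.csv");
      var gzipped = Utilities.gzip(blob);
      return Utilities.base64Encode(gzipped.getBytes());
    }

The receiving end reverses this with roughly base64 -d | gzip -d before feeding the CSV onwards.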

I also fixed the sausage machine, which needs all the schema fields to exist even if empty: I created some placeholder columns, which are hidden and validated as blank on the spreadsheet.

But now the script runs again, and the newbridge map has deep-adaptation pins again.

(Although I notice some sort of page reset means the dataset highlighting gets lost when the data loads. I think that's a separate bug though; created #103.)

ColmMassey commented 3 years ago

So let's close this ticket. The solution of having the notification include the data will only work for small spreadsheets, but that may be fine for the foreseeable future.