meh9 opened this issue 8 years ago
Yeah, nice one. Hard, but important.
Btw, I think your basic premise that Google can't run JavaScript isn't quite right: http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
I suspect that even once the page is rendered, it's still not at all search-engine friendly though.
I am still working through this, have a demo up & hope to get a set of 3 PRs up for it (for TerriaMap, terriajs & terriajs-server)
droughtmap is being decommissioned in Oct; moving this to the next sprint
Update 15 March:
Update 12 Apr:
For SEO we should generate a sitemap of catalogue routes (i.e. map.terria.io/catalog/d89fe routes) and ensure that these pages can be crawled.
Options:
Next steps: Try improving client-side rendering, then look at pre-rendering.
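As a rough sketch of the sitemap idea above: the base URL and the way catalog IDs are obtained here are assumptions (a real implementation would walk the instance's actual catalog), but the output shape follows the standard sitemap protocol.

```javascript
// Sketch: build a sitemap.xml string for catalog routes.
// `baseUrl` and the hard-coded IDs are assumptions for illustration;
// a real version would enumerate IDs from the Terria catalog itself.
function buildSitemap(baseUrl, catalogIds) {
  const urls = catalogIds
    .map((id) => `  <url><loc>${baseUrl}/catalog/${id}</loc></url>`)
    .join("\n");
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    urls,
    "</urlset>",
  ].join("\n");
}

const xml = buildSitemap("https://map.terria.io", ["d89fe", "a1b2c"]);
console.log(xml);
```

The generated file would then be served at the instance root and referenced from robots.txt so crawlers can discover the routes.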
Update 7 Jun 23:
At the moment, if you search for a specific dataset using an internet search engine, no links to that dataset on any TerriaJS instance will be found, unless perhaps somebody has made a share URL and put it into a static document that was then crawled. This is a shame, because fixing it would probably lead to many more users finding useful data.
The problem is that search engine crawlers probably cannot run a TerriaJS instance's JavaScript, so they cannot meaningfully extract the data.
We should implement some feature that makes it possible for search engines to deep-link to the layers provided on TerriaJS instances.
This might take a number of forms:
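One illustrative form, purely a sketch and not existing terriajs-server behaviour: a server-side handler that returns a minimal static HTML page for a catalog deep link, so crawlers get a title and description without running any JavaScript. The `lookup` function, its return shape, and the demo data below are all hypothetical.

```javascript
// Sketch: render a crawlable HTML page for a /catalog/:id deep link.
// `lookup` is an assumed function returning { name, description } or null;
// nothing here is real terriajs-server API.
function renderCatalogPage(id, lookup) {
  const item = lookup(id);
  if (!item) return null; // caller would respond with 404
  return `<!DOCTYPE html>
<html>
<head>
  <title>${item.name}</title>
  <meta name="description" content="${item.description}">
</head>
<body>
  <h1>${item.name}</h1>
  <p>${item.description}</p>
</body>
</html>`;
}

// Hypothetical demo data, not a real catalog.
const demoLookup = (id) =>
  id === "d89fe"
    ? { name: "Example Dataset", description: "An example catalog entry." }
    : null;

const page = renderCatalogPage("d89fe", demoLookup);
console.log(page);
```

A real browser visiting the same URL could instead be redirected to (or served) the normal client-side application, with this static page reserved for crawlers.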