DataONEorg / dataone

DataONE information and general-purpose issue tracking
Apache License 2.0

Identify and address Google search and Dataset search issues #12

Open amoeba opened 3 years ago

amoeba commented 3 years ago

Being comprehensively indexed by search engines such as Google is a substantial benefit for DataONE and DataONE Members. Ideally, the whole variety of information (datasets, people, portals, metrics, etc.) housed within DataONE would be findable through traditional search engines and, for datasets, also Google Dataset Search.

As of 2021, our primary tool for knowing whether or not we are comprehensively indexed is the Google Search Console, which provides a whole suite of tools for diagnosing issues.

Some of the problems we've addressed in the past include:

Issues we have ahead of us include:

Dataset coverage

Summary

At the time of writing, we have 846,622 dataset and portal URLs listed in our sitemaps, and Google has discovered all of them correctly. 812k of these are marked as "Excluded". When we drill down into the index coverage of those URLs, we get this breakdown:

| Type | # Pages |
| --- | ---: |
| Discovered - not currently indexed | 770,302 |
| Duplicate, submitted URL not selected as canonical | 30,977 |
| Duplicate without user-selected canonical | 12,265 |
| Crawled - currently not indexed | 7,986 |
| Pages with redirect | 424 |
| Blocked due to other 4xx issue | 6 |

Discovered - not currently indexed

The majority of these are "Discovered - not currently indexed", which is defined as:

> Discovered - currently not indexed: The page was found by Google, but not crawled yet. Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl. This is why the last crawl date is empty on the report.

This makes sense to me, as our individual dataset landing pages are very slow relative to what Google expects. I'm hoping we can hear back from the Google team about whether this is truly what's going on or whether it's something else. We know from unofficial sources that Google's crawl infrastructure has two queues: one for fast sites, and one for sites it had hoped were fast but has since moved to a separate, lower-priority queue. I'd guess we're in the latter.

Duplicate, submitted URL not selected as canonical

What we see here is that Google is selecting a URL like https://dataone.org/datasets/R1-x138-079-0042-010 as our canonical. These URLs are reported in our embedded JSON-LD and they really are our canonical URLs. I think we should consider switching our sitemap implementation on the CNs to use https://dataone.org/datasets URLs instead.
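To make that concrete, here's a minimal sketch (in Python; the JSON-LD shape and sitemap entry are illustrative, not copied from our actual pages) of the same canonical URL appearing in both the embedded JSON-LD and a sitemap entry:

    import json
    from xml.sax.saxutils import escape

    # Illustrative only: the dataset identifier from the example above.
    canonical = "https://dataone.org/datasets/R1-x138-079-0042-010"

    # Minimal schema.org Dataset JSON-LD carrying the canonical URL,
    # roughly what our landing pages already embed.
    print(json.dumps({
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "@id": canonical,
        "url": canonical,
    }, indent=2))

    # The sitemap <url> entry the CNs would emit after the switch.
    print(f"<url><loc>{escape(canonical)}</loc></url>")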

Other things

Aside from the above categories, some of the odd stuff we've seen is:

[Note: Please feel free to edit this issue to be more complete]

Next steps

mbjones commented 3 years ago

Thanks for the summary, Bryce.

One thing we can do right off is to block crawling of our stage, dev, and sandbox hosts, which should eliminate some of the errors we see from those hosts. I suggest that we add:

    ## Disallow Robots for all content
    Header Set X-Robots-Tag "noindex, nofollow"

to the Apache config for all non-production search and API servers. This is the Google-recommended way to indicate robot preferences (instead of robots.txt), and it is really easy to implement. I tested it on cn-stage.test.dataone.org and it seems to deliver the expected headers.
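For anyone else who wants to check a host, a quick sketch (Python with requests; any HTTP client would do):

    import requests

    # Verify that a non-production host advertises the noindex policy.
    resp = requests.head("https://cn-stage.test.dataone.org/",
                         allow_redirects=True, timeout=30)
    print(resp.headers.get("X-Robots-Tag"))  # expect: noindex, nofollow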

amoeba commented 3 years ago

Thanks @mbjones, that sounds like a good step.

As an update on our coverage in Google Search: since writing this issue and talking with folks at Google, our coverage has gone up a bit, by maybe 10k records. So that we can track progress here, I'm counting 48,434 sitemap entries covered, 812k excluded.

mbjones commented 3 years ago

I also see a similar exclusion pattern for the Arctic Data Center (18.3K Valid, 11.6K Excluded). The following is the breakdown of the excluded URIs by reason on the ADC site:

[Screenshot: adc-excluded-2021-08-16, breakdown of excluded ADC URLs by reason]
amoeba commented 3 years ago

Just updating on status here: I've had a few back-and-forths with folks at Google and they're planning to take a closer look at things. The vast majority of our dataset landing pages (~660k) aren't indexed at all, despite being discovered by Google via our sitemaps. At this point, I'm guessing they'll come back and say it's because our individual pages are slow enough that their crawlers think they're overloading us (they aren't) and stop crawling.

We have at least one ticket on this topic but I can't seem to find it right now.

I'll update here when I hear back from Google.

amoeba commented 2 years ago

Ran into a new twist today. It looks like a Dataset record will fail Google's validation if it doesn't have a description between 50 and 5000 characters. @mbjones found this out while looking at some of the invalid datasets in our Search Console and checking Google's guidelines.

@mbjones said on Slack:

> I pulled our abstracts from SOLR for all non-obsoleted metadata in DataONE; we have 857 datasets with abstracts > 5000 chars, and 434,422 with abstracts < 50 chars. Of the latter, 409,695 are missing abstracts.

So about half of DataONE's content may be failing validation for this reason. Adding this note here just for the paper trail.
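For the paper trail as well, here's a rough sketch of how counts like these could be reproduced against the CN Solr endpoint (the field names and query are my assumptions about the index schema, not necessarily the query Matt ran):

    import requests

    SOLR = "https://cn.dataone.org/cn/v2/query/solr/"
    QUERY = "formatType:METADATA AND -obsoletedBy:*"  # current, non-obsoleted metadata

    def docs(rows=1000):
        """Page through matching records; for ~800k records you would want
        Solr's cursorMark rather than start offsets, but this is a sketch."""
        start = 0
        while True:
            params = {"q": QUERY, "fl": "id,abstract",
                      "rows": rows, "start": start, "wt": "json"}
            resp = requests.get(SOLR, params=params, timeout=60)
            resp.raise_for_status()
            page = resp.json()["response"]["docs"]
            if not page:
                return
            yield from page
            start += rows

    missing = too_short = too_long = 0
    for doc in docs():
        abstract = doc.get("abstract") or ""
        if isinstance(abstract, list):  # in case the field is multi-valued
            abstract = " ".join(abstract)
        n = len(abstract)
        if n == 0:
            missing += 1
        elif n < 50:
            too_short += 1
        elif n > 5000:
            too_long += 1

    print(f"missing: {missing}, <50 chars: {too_short}, >5000 chars: {too_long}")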

laurenwalker commented 2 years ago

Can we create a description from other parts of the metadata if there is no abstract? Such as piecing together the creator names, title, location, etc. to create a pseudo-abstract?

amoeba commented 2 years ago

Yeah @laurenwalker, that's a good idea. I think an easy thing to do would be to detect too-short-for-Google abstracts and append a phrase like "For complete metadata, visit https://arcticdata.io/catalog/view/doi:1234/AA/5678.", which is, by itself, 81 characters.
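A minimal sketch of what that could look like (the function name and thresholds are mine; the padding phrase is the one above):

    def google_safe_description(abstract, landing_url,
                                min_len=50, max_len=5000):
        """Pad too-short (or missing) abstracts past Google's 50-character
        floor, and clamp anything over the 5000-character ceiling."""
        text = (abstract or "").strip()
        if len(text) < min_len:
            text = f"{text} For complete metadata, visit {landing_url}.".strip()
        return text[:max_len]

    # A missing abstract becomes the 81-character padding phrase alone:
    print(google_safe_description(
        "", "https://arcticdata.io/catalog/view/doi:1234/AA/5678"))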

mbjones commented 2 years ago

That sounds like a great idea!

mbjones commented 2 years ago

And I guess truncate the ones that are too long.