This was partially fixed in akvo/akvo-rsr#2064 using a meta tag in the
HTML template, but that fix didn't cover other content like PDFs.
This commit adds an X-Robots-Tag header, which stops any further
indexing and also removes already indexed content on the next crawl.
Adding a robots.txt would only stop future crawling, and would affect
the indexing only indirectly.
https://developers.google.com/webmasters/control-crawl-index/docs/faq#h17
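The header-based approach could be sketched as middleware along these
lines (a hypothetical, minimal sketch, not the actual akvo-rsr code;
the factory name and the "noindex" value are assumptions):

```python
def add_x_robots_tag(get_response):
    """Hypothetical Django-style middleware factory: attach an
    X-Robots-Tag header to every response, so the directive applies
    to all content types (HTML, PDFs, etc.), unlike a meta tag."""
    def middleware(request):
        response = get_response(request)
        # Tell crawlers not to index this response.
        response["X-Robots-Tag"] = "noindex"
        return response
    return middleware
```

Because the header is set on the response rather than in the markup,
it covers non-HTML responses that a meta tag can never reach.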
Fixes akvo/akvo-rsr#2232