Open maehr opened 3 weeks ago
@maehr I like the idea of it in "utilities". However, a robots.txt should only live at the root of a domain or subdomain, and there can be only one per domain. Since the majority of CB projects are likely not hosted at a root, a robots.txt feature might just end up being inapplicable and potentially confusing. The `noindex: true` option will cover adding robots meta tags to individual pages.
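For context, a per-page `noindex` option is typically implemented with a small Liquid conditional in the site's head include. This is a hedged sketch of the general pattern, not CollectionBuilder's actual code:

```liquid
{% comment %} In the site's head include (sketch):
    emit a robots meta tag only when a page sets `noindex: true`
    in its front matter. {% endcomment %}
{% if page.noindex %}
<meta name="robots" content="noindex">
{% endif %}
```

This keeps search engines from indexing individual pages without needing a site-wide robots.txt at all.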
I think "how to add a robots.txt" might be good as a cb-docs "advanced" topic, with the code for the example template. Then people who actually are at a domain or subdomain root can add one and learn more about why.
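As a starting point for that docs topic, a robots.txt can be generated by Jekyll itself so the sitemap URL stays in sync with the site config. A minimal sketch (the permissive `Disallow:` rule and the `sitemap.xml` path are assumptions, not an agreed-on template):

```liquid
---
# Sketch: save as robots.txt in the project root so Jekyll
# serves it at the site root. Only valid when the site is
# deployed at a domain or subdomain root.
layout: none
permalink: /robots.txt
---
User-agent: *
Disallow:

Sitemap: {{ site.url }}/sitemap.xml
```

Leaving `Disallow:` empty allows all crawling; projects that want to exclude paths would list them instead.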
I will work on the docs and get back to you.
PS: I could add a check for `baseurl` to make sure the site isn't hosted in a subfolder.
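One way to sketch that check is inside the template itself: only emit crawl rules when `baseurl` is empty, so a site hosted in a subfolder doesn't ship a misleading file. The warning comment text is illustrative, not an agreed-on implementation:

```liquid
{% comment %} Sketch: guard the robots.txt body on site.baseurl.
    A non-empty baseurl means the site lives in a subfolder,
    where a robots.txt has no effect. {% endcomment %}
{% if site.baseurl == "" or site.baseurl == "/" %}
User-agent: *
Disallow:

Sitemap: {{ site.url }}/sitemap.xml
{% else %}
# robots.txt is only honored at a domain or subdomain root;
# this site is deployed under a subfolder, so no rules are emitted.
{% endif %}
```

A build-time check (failing or warning during `jekyll build`) would be another option, but the template guard keeps everything in one file.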
Describe what feature you'd like. Pseudo-code, mockups, or screenshots of similar solutions are encouraged!
Hi all, we implemented a robots.txt over here: https://github.com/Stadt-Geschichte-Basel/forschung.stadtgeschichtebasel.ch/pull/113
Do you want me to open a PR for CB-CSV as well?
What type of pull request would this be?
New Feature
Any links to similar examples or other references we should review?
No response