mojavelinux closed this issue 2 years ago
@mojavelinux Sorry for the hassle. I have put up a robots.txt for now.
I looked for the latest user manual source but couldn't find it. I would appreciate it if you could point me to it.
The Asciidoctor user manual is no more. We migrated all the content to the new documentation site built with Antora, hosted at https://docs.asciidoctor.org. The way Antora works, the content is pre-chunked, so there's no longer a need to split it up. (See https://github.com/asciidoctor/asciidoc-docs/tree/main/docs/modules/attributes/pages).
If you are looking for a big AsciiDoc file to chunk, I suggest looking elsewhere. One example I can give you is the Raspberry Pi documentation. See https://github.com/raspberrypi/documentation/blob/develop/documentation/asciidoc/computers/raspberry-pi.adoc
The robots.txt file only works if it's at the root of the domain. See https://developers.google.com/search/docs/advanced/robots/create-robots-txt
So the exclusion has to go in this file: http://www.seinan-gu.ac.jp/robots.txt
Oh, please excuse my ignorance! I will contact the system administrator on Monday. I will let you know when the setting is completed.
Don't worry, I didn't know that rule until recently either. Thanks again for your attentiveness to this request.
Hi, it is now set properly! I hope this will solve the problem.
Lastly, I appreciate your work and dedication to Asciidoctor. I really respect your work!
Thanks @wshito! I appreciate the kind words as well. They definitely keep me going!
The sample output is a copy of the old Asciidoctor user manual and is tainting results for Asciidoctor and AsciiDoc in the search engines. Please add either a robots.txt that disallows crawling or an equivalent meta tag.
robots.txt example:
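A minimal robots.txt along these lines would block all compliant crawlers from the copied manual. The `/sample-output/` path here is a placeholder, not the actual directory on the site:

```
# Block all crawlers from the copied manual
# (/sample-output/ is a hypothetical path; use the real directory)
User-agent: *
Disallow: /sample-output/
```

Note that, as mentioned above, this file only takes effect when served from the root of the domain.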
meta tag: https://developers.google.com/search/docs/advanced/crawling/block-indexing
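Per the Google documentation linked above, the equivalent meta tag goes in the `<head>` of each page to be excluded:

```
<!-- Tells compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```

Unlike robots.txt, this works at any path, but it must be added to every page individually.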