Currently, web crawler bots are following every possible wiki link (for example, I saw a log entry for a request to the calendar page for 1982).
Create a robots.txt that makes sense, i.e. allow crawling of actual wiki content but disallow special pages such as account creation or calendar entries far from the current date.
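A minimal sketch of such a robots.txt, assuming a MediaWiki-style URL layout where articles live under /wiki/ and special pages under /wiki/Special: (both prefixes are assumptions; adjust them to this wiki's actual URL scheme):

    User-agent: *
    # Special pages such as account creation and login
    # (the "/wiki/Special:" prefix is an assumption -- adjust
    # to your wiki's actual URL layout)
    Disallow: /wiki/Special:
    # The calendar: robots.txt rules are static path prefixes, so
    # "far from the current date" cannot be expressed; blocking the
    # whole calendar tree is the simple compromise
    Disallow: /wiki/calendar
    # Everything not matched above (regular wiki articles) is
    # crawlable by default, so no Allow rule is needed

One caveat: robots.txt has no notion of relative dates, so crawlers cannot be told "only recent calendar entries" directly. If recent calendar pages should stay crawlable, a workaround is a periodic job (e.g. cron) that regenerates the file with explicit Disallow lines for old years, such as Disallow: /wiki/calendar/1982 (again assuming per-year paths).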