At least one current deployment is seeing requests from crawlers such as Google's. This is presumably because they have found the site through links from other sites.
It would make sense to at least be able to block these through a robots.txt file. That isn't completely trivial, however, if the context root isn't the site root: crawlers request /robots.txt at the site root, which lies outside the application's context. It might be worth either making those independent or looking into how to get two application contexts (or static content) into the Jetty instance that Spring Boot includes.
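As a rough sketch of the second approach: with Spring Boot 2.x and the embedded Jetty 9 it bundles (handler APIs differ in later Jetty versions), a server customizer could wrap the application's handler in a HandlerList and answer /robots.txt at the site root before the request ever reaches the non-root application context. The class name and the Disallow policy below are placeholders, not anything the project currently defines.

```java
// Sketch only: assumes Spring Boot 2.x with the embedded Jetty 9 starter.
import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.eclipse.jetty.server.handler.HandlerList;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RobotsTxtConfig { // hypothetical name

    @Bean
    public WebServerFactoryCustomizer<JettyServletWebServerFactory> robotsCustomizer() {
        return factory -> factory.addServerCustomizers(server -> {
            // Handler answering /robots.txt at the site root, regardless of
            // where the application's own context is mounted.
            AbstractHandler robots = new AbstractHandler() {
                @Override
                public void handle(String target, Request baseRequest,
                        HttpServletRequest request, HttpServletResponse response)
                        throws IOException {
                    if ("/robots.txt".equals(target)) {
                        response.setContentType("text/plain");
                        // Placeholder policy: block all crawlers everywhere.
                        response.getWriter().print("User-agent: *\nDisallow: /\n");
                        baseRequest.setHandled(true);
                    }
                }
            };
            // Try the robots handler first; everything else falls through
            // to the application's existing handler unchanged.
            HandlerList handlers = new HandlerList();
            handlers.addHandler(robots);
            handlers.addHandler(server.getHandler());
            server.setHandler(handlers);
        });
    }
}
```

Since HandlerList stops at the first handler that marks the request handled, the application context is untouched for every path other than /robots.txt.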