-
Just a note: if you are publishing your crawler results, please respect sites that host a robots.txt restricting spidering.
For instance: gopher://gopher.black/0/robots.txt
Thanks
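
For a crawler written in Python, the standard library can honor robots.txt before fetching. A minimal sketch, assuming a hypothetical robots.txt body and crawler user-agent (note that `urllib.robotparser` is built for HTTP(S) sites; a Gopher robots.txt like the one above would need to be fetched and fed to the parser manually):

```python
from urllib import robotparser

# Hypothetical robots.txt content; a real crawler would fetch this
# from the target site (e.g. https://example.com/robots.txt).
robots_txt = """User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check each URL against the policy before crawling or publishing it.
print(rp.can_fetch("MyCrawler/1.0", "https://example.com/"))          # allowed
print(rp.can_fetch("MyCrawler/1.0", "https://example.com/private/p")) # disallowed
```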
-
- Site: [https://redesigned-spork-x56779pgr93p4xv-3000.app.github.dev](https://redesigned-spork-x56779pgr93p4xv-3000.app.github.dev)
**New Alerts**
- **Content Security Policy (CSP) Header Not S…
-
- Site: [https://demo.owasp-juice.shop](https://demo.owasp-juice.shop)
**New Alerts**
- **Content Security Policy (CSP) Header Not Set** [10038] total: 3:
- [https://demo.owasp-juice.shop](h…
-
This issue collects a list of robots that can be used as inspiration for other robots in air4children.
-
Throwing an error; not rendering.
-
Is there a way to configure robots.txt per domain? We sometimes run multiple sites in one Umbraco instance, and it would be great if we could configure the robots.txt separately for each. 😄
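
The usual approach is to serve robots.txt dynamically, keyed on the request's `Host` header. A generic sketch of that lookup (plain Python, not Umbraco-specific; the host names and policies here are placeholders):

```python
# Map each hostname to its own robots.txt body; unknown hosts get a default.
ROBOTS_BY_HOST = {
    "site-one.example": "User-agent: *\nDisallow: /",
    "site-two.example": "User-agent: *\nAllow: /",
}

DEFAULT_ROBOTS = "User-agent: *\nAllow: /"

def robots_for(host: str) -> str:
    """Return the robots.txt body for a given Host header value."""
    # Strip an optional port ("site-one.example:8080") before the lookup.
    return ROBOTS_BY_HOST.get(host.split(":")[0], DEFAULT_ROBOTS)
```

A handler registered at `/robots.txt` would then return `robots_for(request_host)` as `text/plain`. In Umbraco specifically, packages or custom middleware can do the same per-host dispatch.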
-
# Acceptance Criteria
- [ ] ~~Only certain users are allowed to access [the site](https://princeton-manifold-production.softserv.cloud)~~
- [ ] Search engines are blocked from indexing the site
#…
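
For the "block search engines" criterion, a disallow-all robots.txt is the standard starting point (well-behaved crawlers honor it, though it is advisory, not access control):

```
User-agent: *
Disallow: /
```

Since robots.txt only discourages crawling, pages that are linked elsewhere can still be indexed; sending an `X-Robots-Tag: noindex` response header (or a `<meta name="robots" content="noindex">` tag) is the belt-and-braces complement.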
-
- Site: [http://myolink.info.gf](http://myolink.info.gf)
**New Alerts**
- **Content Security Policy (CSP) Header Not Set** [10038] total: 2:
- [http://myolink.info.gf/](http://myolink.info.g…
-
- Site: [http://localhost:3000](http://localhost:3000)
**New Alerts**
- **Content Security Policy (CSP) Header Not Set** [10038] total: 4:
- [http://localhost:3000/](http://localhost:3000/) …
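
ZAP alert 10038 fires when responses carry no `Content-Security-Policy` header. A minimal sketch of sending one, using only Python's standard library (the port matches the scanned dev site above; the policy value is a restrictive baseline to loosen per directive as the app requires):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Restrictive baseline policy; real apps usually need additional
# directives (script-src, style-src, img-src, ...).
CSP_POLICY = "default-src 'self'"

class CSPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # This header is what clears the "CSP Header Not Set" alert.
        self.send_header("Content-Security-Policy", CSP_POLICY)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>ok</h1>")

if __name__ == "__main__":
    HTTPServer(("localhost", 3000), CSPHandler).serve_forever()
```

In practice you would set the header in the framework or reverse proxy (e.g. Express middleware, nginx `add_header`) rather than a hand-rolled server; the point is only that every HTML response must carry the header.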
-
## Long story short
robots.txt might need to be updated or created (or something else fixed; IDK, I don't do SEO):
![image](https://user-images.githubusercontent.com/63089004/125269966-90207d00-e34c-…