-
Ecosystem instances should offer the possibility to configure whether the instance is discoverable by Google and other search engines. Therefore, a `robots.txt` should be configurable during the setup process; a minimal example is sketched below.
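For illustration, the two setup choices could generate `robots.txt` contents like the following, using the standard Robots Exclusion Protocol directives (the file contents are a sketch, not taken from the issue):

```
# Variant generated when the instance should NOT be indexed:
User-agent: *
Disallow: /

# Variant generated when indexing is allowed (an empty Disallow permits everything):
User-agent: *
Disallow:
```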
-
Please follow https://github.com/volCommunity/vol-crawlers/blob/master/CONTRIBUTING.md#adding-new-countries--spiders
If you want an extra challenge, start work on #32 first and use the result to ge…
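For reference, a minimal spider in the style the CONTRIBUTING guide describes — assuming the project's Scrapy setup; the spider name, URL, and CSS selectors here are placeholders, not taken from the repo:

```python
import scrapy


class ExampleCountrySpider(scrapy.Spider):
    """Hypothetical spider for a new country; all names and URLs are placeholders."""

    name = "example-country"
    start_urls = ["https://volunteer-jobs.example/listings"]

    def parse(self, response):
        # Emit one item per listing; a real spider would also follow pagination.
        for job in response.css("div.listing"):
            yield {
                "title": job.css("h2::text").get(),
                "url": response.urljoin(job.css("a::attr(href)").get()),
            }
```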
-
Each ontology term should have an `rdfs:isDefinedBy` link to the ontology node, as sketched below.
This allows Semantic Web crawlers that stumble upon a CIM term to discover the whole CIM ontology.
https://github.com/Sveino/Ins…
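A minimal sketch of adding that triple with rdflib — the library choice and the IRIs are assumptions for illustration; the real CIM namespace is not shown in this excerpt:

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, RDFS, OWL

# Placeholder IRIs -- the real CIM namespace is not given in the issue excerpt.
CIM = Namespace("https://example.org/cim#")
ONTOLOGY = URIRef("https://example.org/cim")

g = Graph()
g.add((ONTOLOGY, RDF.type, OWL.Ontology))
# Point each term back at the ontology node so crawlers can find the full vocabulary.
g.add((CIM.ACLineSegment, RDFS.isDefinedBy, ONTOLOGY))

print(g.serialize(format="turtle"))
```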
-
When I make a request to the `/tallks` endpoint, the newest talk I currently see is from `2018-09-26`.
How often are the archives updated?
-
We should block certain web crawlers from putting extra load on the database. We need to make a list of URLs that should be excluded from crawling and add them to the `robots.txt` file, roughly as sketched below.
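The standard `Disallow` directives would carry that list; the paths below are hypothetical placeholders, since the issue does not name the database-heavy URLs:

```
User-agent: *
# Hypothetical examples of endpoints that trigger expensive database queries:
Disallow: /search
Disallow: /api/
Disallow: /export
```

Note that `robots.txt` only deters well-behaved crawlers; abusive ones would additionally need rate limiting or blocking at the server level.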
-
## Details
This is a follow-up to https://github.com/readthedocs/readthedocs.org/issues/11733, I guess, but it is driving me a bit nuts :)
I think this is Cloudflare doing something, but the way …
-
As we have the designer for some more time, we could add a button to ask for images without words. (Maybe only for registered users, to prevent too much noise from crawlers and spam bots.)
Clicking t…
-
Key endpoints such as `v1/data` and `v1/now` should require authentication to prevent misuse by users and web crawlers (see the sketch below).
`api.sensors.africa/v2` endpoints have authentication integrated; deploy and te…
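A minimal sketch of what requiring authentication on such an endpoint could look like, assuming a Django REST Framework stack — the framework choice, view name, and response body are assumptions, not taken from the issue:

```python
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView


class NowView(APIView):
    """Hypothetical handler for `v1/now`; unauthenticated requests are rejected."""

    permission_classes = [IsAuthenticated]

    def get(self, request):
        return Response({"status": "ok"})
```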
-
### Is your feature request related to a problem?
In many environments it is preferable to use an internal mirror of Maven Central. Support autodiscovery against an internal mirror.
### Solution yo…
-
When running the Minecraft client, it showed me this: "Unable to load model: 'stone_crawlers_mod_lastcopperking:jasper#inventory' referenced from: stone_crawlers_mod_lastcopperking:jasper#inventory: java…