-
To keep the code consistent and do everything in a single language (and avoid having to install PHP...), it would be good for the PDF parser to be written in JavaScript. If that turns out to be difficult or irrelevant, c…
-
While requesting recipes, I discovered that many recipes are enabled in production but are failing, sometimes multiple times in a row, sometimes forever (at least up to the 10 times that we keep i…
-
```
Since Wikipedia does not provide a listing of all pages in the Namespaces, the
script listar_articulos_en_namespaces.py is used.
This script walks the /wiki/Especial:Todas listing looking for the l…
```
-
I know you've done NN stuff before and you've done a thing on Markov chains to generate text so I thought this idea of a Recurrent Neural Network Text Predictor might kind of interest you.
I was a…
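Since the Markov-chain text generator is the natural baseline an RNN predictor would be compared against, here is a minimal word-level Markov sketch in Python. The toy corpus and function names are made up for illustration, not taken from your earlier project:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` words to the list of words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, seed, length=10, rng=None):
    """Walk the chain from `seed`, picking a random observed successor each step."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(seed):]))
        if not successors:  # dead end: no word was ever seen after this state
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

An RNN predictor would replace the lookup table with a learned distribution over the next word, but the generation loop stays essentially the same.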
-
The Santa Clara County bar chart under the Stats tab displays daily and total figures that differ from the [official Santa Clara County Public Health Department dashboard](https://www.sccgov.org/sites…
-
I've been thinking about this for a while and I think it would be very nice to have an automated way of populating the crates, instead of manually adding each of them.
## The AeroRust Website
We…
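One way to automate the population, sketched below: query the crates.io search API for a keyword and build the site's list from the response. The endpoint, parameters, and response fields here are assumptions based on the public crates.io API, not anything specified in this issue, so verify them against the API documentation:

```python
import json
import urllib.request

# Assumed crates.io search endpoint; check the crates.io API docs before relying on it.
API = "https://crates.io/api/v1/crates?q={query}&per_page={n}"

def fetch_crates(query, n=20):
    """Fetch search results for `query` from crates.io (response shape assumed)."""
    req = urllib.request.Request(
        API.format(query=query, n=n),
        headers={"User-Agent": "aerorust-crate-lister"},  # crates.io expects a User-Agent
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def to_entries(response):
    """Turn a search response into (name, description) pairs for the website."""
    return [
        (c["name"], (c.get("description") or "").strip())
        for c in response.get("crates", [])
    ]
```

A scheduled job (e.g. a CI cron workflow) could run this and regenerate the page, so no crate has to be added by hand.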
-
Hi! I love emulationstation, but it doesn't work well with installed Windows games (the ones in Program Files). When you point the path at the Program Files folder with .exe as the extension, you get eve…
-
Certain Wikipedia project pages should be included in every offline distribution of Wikipedia, such as:
- [Wikipedia:About](https://en.wikipedia.org/wiki/Wikipedia:About)
- A subset of the [FAQ](h…
-
- [ ] how to install it
- [ ] how to create a simple crawler
- [ ] how to store results in a file / JSON
- [ ] how to compose crawlers
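The middle two checklist items could be illustrated with something as small as this stdlib-only Python sketch (the library the tutorial actually targets isn't named above, so this is just a shape for the examples, not its API): a crawler that extracts links, a JSON writer, and a `compose` helper that merges crawlers:

```python
import json
from html.parser import HTMLParser

class LinkCrawler(HTMLParser):
    """Collects href values from <a> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(html):
    """Run the link crawler over an HTML string and return the links found."""
    parser = LinkCrawler()
    parser.feed(html)
    return parser.links

def store_json(results, path):
    """Persist crawl results to a JSON file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(results, f, indent=2)

def compose(*crawlers):
    """Build one crawler that runs several crawlers and merges their results."""
    def combined(html):
        out = []
        for c in crawlers:
            out.extend(c(html))
        return out
    return combined
```

For example, `crawl('<a href="/a">one</a><a href="/b">two</a>')` returns `["/a", "/b"]`, and `compose(crawl, crawl)` is itself a crawler.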
-
For example, this page, http://nvdmc.org/feed/, is parseable with Cheerio, so our crawler should just crunch it.
Then it will automatically propagate to Cheerio Scraper, which we can use for RSS parsi…
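Cheerio itself is JavaScript, so the following is only a rough stdlib-Python illustration of what the crunching step amounts to: pulling title/link/pubDate out of each `<item>` of a standard RSS 2.0 feed. The real Cheerio Scraper selectors would be the equivalent of these element names, which are assumed from the RSS 2.0 convention, not read from the nvdmc.org feed:

```python
import xml.etree.ElementTree as ET

def crunch_feed(xml_text):
    """Extract title/link/pubDate from each <item> in an RSS 2.0 feed string."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "pubDate": item.findtext("pubDate", default=""),
        })
    return items
```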