Closed: myshzzx closed this issue 5 years ago
i prefer solutions that generate rss/atom content that is then given to rssowlnix to parse. that's much easier to maintain.
the end goal would be something like https://fetchrss.com
as an embedded webserver: a website for creating the special url, plus a separate processor for that url that outputs rss/atom feed data, which is then passed to rssowlnix.
where did you get that style of url? did you think it up yourself, or is it supported in other programs?
i don't like the url format, as it has fixed positions for the parameters, which makes it harder to extend or change later. i would just use normal parameters with a special prefix, like `&auto-link=[item-link-selector]&auto-content=[item-content-selector]`
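to sketch what that parameter-style alternative could look like: the helper below separates `auto-*` parameters from the real page url. this is my own illustration, not existing rssowlnix code; the function name `split_auto_params` is hypothetical, only the `auto-` prefix convention comes from the comment above.

```python
# Sketch (hypothetical) of the parameter-style alternative: ordinary query
# parameters with an "auto-" prefix instead of fixed positions in the url.
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit


def split_auto_params(url):
    """Split a url into the real page url and a dict of auto-* selectors."""
    parts = urlsplit(url)
    page_params, auto_params = [], {}
    for key, value in parse_qsl(parts.query):
        if key.startswith("auto-"):
            auto_params[key[len("auto-"):]] = value  # e.g. "link", "content"
        else:
            page_params.append((key, value))  # belongs to the page itself
    page_url = urlunsplit(parts._replace(query=urlencode(page_params)))
    return page_url, auto_params
```

since selectors travel as ordinary query values, adding a new option later (say `auto-title`) costs nothing, which is the extensibility point made above.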
thanks for sharing, i'll probably change this when i merge it.
in my view, rss/atom is fading from everyday use. many valuable information sources don't provide rss/atom and instead distribute information via their own apps, so they have no need to adopt rss/atom. the value of rssowl is providing a way to manage information; rss/atom is just an information digest calling out "subscribe to me, subscribe to me".
when I subscribe to a source, I care about the title and the content. the easiest way to create a url describing what I want, without a GUI CSS-selector generator, is a fixed-position scheme, and there seems to be nothing to extend for a few years. if one day you have an easier way to create such a url, like `collect://auto?url=xxx&title=[selector]&author=[selector]&content=[selector]...` built with a few mouse clicks, you can simply adapt `autohttp` to `collect`. in fact, the `autohttp` scheme is aimed at programmers, since css selectors are hard for ordinary users.
since 2.8.0 one can use the shell:// protocol to call external scripts that provide the rss content.
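an external script used this way just has to print a complete rss document to stdout. a minimal sketch, assuming a python script and stdlib only (the `build_rss` helper and the sample items are my own invention; a real script would fill the item list by scraping a page):

```python
#!/usr/bin/env python3
# Hypothetical external script for the shell:// protocol: it prints a
# minimal RSS 2.0 document to stdout, which the aggregator then parses.
import html
from datetime import datetime, timezone


def build_rss(title, link, items):
    """Render an RSS 2.0 document from (title, link) item tuples."""
    now = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    parts = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        '<rss version="2.0"><channel>',
        f'<title>{html.escape(title)}</title>',
        f'<link>{html.escape(link)}</link>',
        f'<lastBuildDate>{now}</lastBuildDate>',
    ]
    for item_title, item_link in items:
        parts.append('<item>')
        parts.append(f'<title>{html.escape(item_title)}</title>')
        parts.append(f'<link>{html.escape(item_link)}</link>')
        parts.append(f'<guid>{html.escape(item_link)}</guid>')
        parts.append('</item>')
    parts.append('</channel></rss>')
    return "\n".join(parts)


if __name__ == "__main__":
    # In a real script these items would come from scraping a target page.
    print(build_rss("example feed", "https://example.com",
                    [("first post", "https://example.com/1")]))
```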
now I use chrome extension feedbro: https://chrome.google.com/webstore/detail/feedbro/mefgmmbdailogpfhfblcnnjfmnpnmdfa
as for generating atom content, I built a tool myself, which can parse any page like this:
the easiest way to create the special url that i have encountered is https://fetchrss.com/ (not free however). another approach i've seen is to use regexes instead of css selectors.
using a webserver to do the work was always possible; now you can have just a script and don't need to run some separate application all the time. external scripts are easier to change and adapt than having to rebuild rssowlnix every time. you can also use any scripting/programming language you want.
most rss aggregators are too simple for me. i don't know of any rss aggregator that can compete with rssowl, even with the problems it has.
ps: programs like https://nodered.org/ can be used to build webservers too, using a simple visual programming language or javascript
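for comparison, a feed-serving webserver needs very little code even in plain python. a stdlib-only sketch, with a hardcoded placeholder feed where a real scraper would go (handler and helper names are mine):

```python
# Minimal feed-serving webserver sketch using only the Python stdlib.
# The FEED body is a placeholder; a real server would scrape a page here.
from http.server import HTTPServer, BaseHTTPRequestHandler

FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel>
<title>scraped feed</title><link>https://example.com</link>
<item><title>hello</title><link>https://example.com/1</link></item>
</channel></rss>"""


class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same feed for every path; real code would route by url.
        body = FEED.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/rss+xml; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port=8000):
    """Blocks; subscribe to http://localhost:<port>/ in the aggregator."""
    HTTPServer(("localhost", port), FeedHandler).serve_forever()
```

run `serve()` and point the aggregator at `http://localhost:8000/`.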
Now one can subscribe to a page without rss/atom being offered.
for example, say one would like to subscribe to the page https://medium.com/topic/artificial-intelligence. add a feed like `autohttps://medium.com/topic/artificial-intelligence@h1 a,h3 a@div.section-inner`. this means: collect news from the page using the css selector `h1 a,h3 a`, with each `a` element's text as the news title and its link as the news link, and the news target page as the news content (by default); here `div.section-inner` means select the news content from the news page by the css selector `div.section-inner` (optional). the general form is `autohttp[s]://url.xxx/@[item-selector]@[item-target-page-content-selector(optional)]`