textbrowser / dooble

Dooble is a scientific browser. Minimal, cute, unusually stable, and available almost everywhere. Completed?
https://textbrowser.github.io/dooble/

Which systems does the Linux DEB package support? Which OSes are supported? #111

Closed Rezzy-dev closed 2 years ago

Rezzy-dev commented 2 years ago

Hi, I'm trying to run the dpkg installed browser on a Debian 9 system, and I received this:

./Dooble: /lib/x86_64-linux-gnu/libm.so.6: version GLIBC_2.27 not found (required by /opt/dooble/Lib/libQt6WebEngineCore.so.6)
./Dooble: /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28 not found (required by /opt/dooble/Lib/libQt6WebEngineCore.so.6)
./Dooble: /lib/x86_64-linux-gnu/libz.so.1: version ZLIB_1.2.9 not found (required by /opt/dooble/Lib/libQt6Gui.so.6)
./Dooble: /lib/x86_64-linux-gnu/libm.so.6: version GLIBC_2.27 not found (required by /opt/dooble/Lib/libQt6Gui.so.6)
./Dooble: /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.25 not found (required by /opt/dooble/Lib/libQt6Core.so.6)
./Dooble: /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28 not found (required by /opt/dooble/Lib/libQt6Core.so.6)
./Dooble: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version CXXABI_1.3.11 not found (required by /opt/dooble/Lib/libQt6Core.so.6)

The browser did not launch.

The documentation currently does not state which systems the browser supports. Is Debian 9 supported? From the looks of it, it isn't. I can't upgrade glibc without breaking the rest of the system.
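The mismatch above can be checked ahead of time by comparing the highest GLIBC_x.y symbol version the shipped libraries reference (obtainable with, e.g., `objdump -T libQt6Core.so.6 | grep -o 'GLIBC_[0-9.]*'`) against the glibc the target system provides. A minimal sketch of that comparison, using the numbers from the error output and Debian 9's stock glibc:

```python
# Compare the glibc a system provides against the highest symbol version a
# binary requires. The "required" number would come from inspecting the
# shipped libraries; the values below are from the error output above.

def parse_version(s):
    # "2.28" -> (2, 28), so tuples compare numerically, not lexically
    return tuple(int(part) for part in s.split("."))

def glibc_ok(have, required):
    return parse_version(have) >= parse_version(required)

required = "2.28"  # highest GLIBC_x.y version named in the error output
have = "2.24"      # glibc shipped with Debian 9 ("stretch")

if glibc_ok(have, required):
    print(f"glibc {have} satisfies the requirement")
else:
    print(f"glibc {have} is older than the required {required}; the binary cannot run")
```

Since versioned symbols cannot be satisfied by an older glibc, a package built against glibc 2.28 can only run on Debian 10 ("buster") or newer.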

textbrowser commented 2 years ago

There are two hurdles. Removing content and displaying content. Removing content is also difficult because it is a game between developers and now machines. Displaying content requires either changes to the engine or a proxy engine which resides between the engine and the browser.

Rezzy-dev commented 2 years ago

> You're not going to find such open source because it is expensive to produce. It requires hardware, voice analysis, recognition, cultural awareness, pattern recognition, translators. And probably Amazon, yes, they would be working on such things.
>
> Everything else remains with clicks and with mice.

You're still not getting me. You're thinking of catering to disabled users' needs as a separate thing. I'm thinking innovation and accessibility for all. A new kind of web browser -- one that caters to all human needs intuitively. One that revolutionises how we browse the web.

textbrowser commented 2 years ago

A revolution already occurred hundreds of years ago. The printing press. That is the manner in which pages should be rendered with user input.

Rezzy-dev commented 2 years ago

> There are two hurdles. Removing content and displaying content. Removing content is also difficult because it is a game between developers and now machines. Displaying content requires either changes to the engine or a proxy engine which resides between the engine and the browser.

Mhm. The latter is exactly what I have in mind. Either the new rendering features/options need to be implemented directly into the web engine, or the browser needs to have its own rendering engine/layer that processes what the web engine serves. In terms of performance benefits, preferably the former.

Currently you are not modifying anything in the web engine, correct? The browser is just software that utilises the web engine and sits on top of it, essentially like a shell.

In order to take things further, to have more control, the browser would need to take control of the engine, and become a full-fledged web rendering/browsing software, not just a shell that sits on top of a common web engine. This is achievable either by forking the engine and creating a custom branch for the browser, or by implementing a rendering engine for the browser that sits on top of the common web rendering engine.

textbrowser commented 2 years ago

You need a proxy display. A thing to interpret a page and display it on your preference. A custom printing press which removes everything but the article. A view of a view. You want to get into the mind of Ahab without the insanity. Alice beyond Wonderland.

See, that's the thing. There is no clear definition of what you're asking for. It's like removing all the noise in that famous Blade Runner. Remove the things that draw my attention so I can exist peacefully. The notifications, the highlights, the blurring.

textbrowser commented 2 years ago

Everything on the Web craves your attention. And you need to give it. It isn't a parasitic relationship. The Web lives for you. First it, the browser, should automatically identify and remove content you don't want. Supposedly, it should do this without alerting you and without requiring your advice. It should be aware of what you consider garbage. Then, it is your task to teach it how to behave in this page and maybe pages like it.

Rezzy-dev commented 2 years ago

Okay, I can see we're not on the same page... Sighs.

In my mind, both the objective and the definition are clear. And the process to get there is not rocket science. Step by step, a feature at a time.

And the rendering would be designed/built so that it is flexible enough to change as the web evolves. Modules that are no longer useful can be removed, and new ways of content rendering/sorting can be added.

The definition/target you seek is what the user is looking for -- the very content/information they came to acquire from that web site/page, from the web. Remove the noise, aid the user in their browsing/search, and serve what the user asked for -- that's the browser's job.

To sort the books in the scattered library, so that the user can focus on the 8 items that best match what they came to find and actually be able to browse and remember them. To save the human brain from unnecessary noise/work that only serves to wear us out, to slow us down -- that is the task. To let us focus on what we came to do on the web.

textbrowser commented 2 years ago

> There are two hurdles. Removing content and displaying content. Removing content is also difficult because it is a game between developers and now machines. Displaying content requires either changes to the engine or a proxy engine which resides between the engine and the browser.
>
> Mhm. The latter is exactly what I have in mind. Either the new rendering features/options need to be implemented directly into the web engine, or the browser needs to have its own rendering engine/layer that processes what the web engine serves. In terms of performance benefits, preferably the former.
>
> Currently you are not modifying anything in the web engine, correct? The browser is just software that utilises the web engine and sits on top of it, essentially like a shell.
>
> In order to take things further, to have more control, the browser would need to take control of the engine, and become a full-fledged web rendering/browsing software, not just a shell that sits on top of a common web engine. This is achievable either by forking the engine and creating a custom branch for the browser, or by implementing a rendering engine for the browser that sits on top of the common web rendering engine.

That's right. Exception being the removal of ads.

Rezzy-dev commented 2 years ago

> There are two hurdles. Removing content and displaying content. Removing content is also difficult because it is a game between developers and now machines. Displaying content requires either changes to the engine or a proxy engine which resides between the engine and the browser.
>
> Mhm. The latter is exactly what I have in mind. Either the new rendering features/options need to be implemented directly into the web engine, or the browser needs to have its own rendering engine/layer that processes what the web engine serves. In terms of performance benefits, preferably the former. Currently you are not modifying anything in the web engine, correct? The browser is just software that utilises the web engine and sits on top of it, essentially like a shell. In order to take things further, to have more control, the browser would need to take control of the engine, and become a full-fledged web rendering/browsing software, not just a shell that sits on top of a common web engine. This is achievable either by forking the engine and creating a custom branch for the browser, or by implementing a rendering engine for the browser that sits on top of the common web rendering engine.
>
> That's right. Exception being the removal of ads.

Yes, ads would need to be rendered, but in quarantine, so that the user is not impacted by them if they do not wish to see ads, and they can still browse the content they came to find without obstruction. I think the browser should work for the user here instead of in favour of the company who owns the website. Therefore instead of focusing on making the user's life more difficult and the company's job easier, it should be making the company's job harder to enforce ads, and the user's job easier to avoid them safely.

textbrowser commented 2 years ago

> Okay, I can see we're not on the same page... Sighs.
>
> In my mind, both the objective and the definition are clear. And the process to get there is not rocket science. Step by step, a feature at a time.
>
> And the rendering would be designed/built so that it is flexible enough to change as the web evolves. Modules that are no longer useful can be removed, and new ways of content rendering/sorting can be added.
>
> The definition/target you seek is what the user is looking for -- the very content/information they came to acquire from that web site/page, from the web. Remove the noise, aid the user in their browsing/search, and serve what the user asked for -- that's the browser's job.
>
> To sort the books in the scattered library, so that the user can focus on the 8 items that best match what they came to find and actually be able to browse and remember them. To save the human brain from unnecessary noise/work that only serves to wear us out, to slow us down -- that is the task. To let us focus on what we came to do on the web.

Magazines pollute, television pollutes, and so do newspapers. The problem perhaps is that the Web lacks regulation. And it does. It resembles a nightmare. But, you can tune it out. I see the particular problem. You're essentially asking to have a finite solution to effectively infinitely-many inputs.

textbrowser commented 2 years ago

Ads are bothersome. I do like lots of empty space. Several people have requested improved blocking; some suggesting a blend with professional blockers. Something I will not do because I have enough things to maintain. Better and smarter blockers are nice; as long as they are smarter than the servers. A mechanism to bypass by-passable subscriptions is also nice. Another thing I use Dooble for. Remove cookies.

Rezzy-dev commented 2 years ago

> Everything on the Web craves your attention. And you need to give it. It isn't a parasitic relationship. The Web lives for you. First it, the browser, should automatically identify and remove content you don't want. Supposedly, it should do this without alerting you and without requiring your advice. It should be aware of what you consider garbage. Then, it is your task to teach it how to behave in this page and maybe pages like it.

Mmm... not exactly what I have in mind from a technical viewpoint. Think about the data that web pages contain: content in HTML, structured by content type. All the web browser needs to know is how this content is generally structured in use, and how to provide a few common layout options for it that meet user needs.

You don't need to analyse web pages, necessarily, to be able to render and browse them in different ways. You don't need AI, necessarily. That's actually making things more complicated.

You have raw data you can work with here: served through HTML, JavaScript, and CSS.

And when you're looking for specific content on the web, it doesn't hurt when that content is displayed consistently across websites.
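The structural approach described above can be sketched without any page-specific analysis or AI: treat certain HTML elements as content and others as noise. A toy illustration using only Python's standard library (the tag sets are arbitrary assumptions for the example, not a real reader-mode heuristic):

```python
from html.parser import HTMLParser

# Keep text from content-bearing tags and drop anything nested inside
# noise elements. Purely illustrative; real pages need richer heuristics.

CONTENT_TAGS = {"p", "h1", "h2", "h3"}
NOISE_TAGS = {"script", "style", "nav", "aside", "footer"}

class ContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noise_depth = 0      # nesting depth inside noise elements
        self.in_content = False   # currently inside a content tag
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in NOISE_TAGS:
            self.noise_depth += 1
        elif tag in CONTENT_TAGS and self.noise_depth == 0:
            self.in_content = True

    def handle_endtag(self, tag):
        if tag in NOISE_TAGS and self.noise_depth > 0:
            self.noise_depth -= 1
        elif tag in CONTENT_TAGS:
            self.in_content = False

    def handle_data(self, data):
        if self.in_content and self.noise_depth == 0:
            text = data.strip()
            if text:
                self.chunks.append(text)

page = """
<html><body>
  <nav><p>Home | About</p></nav>
  <h1>Article title</h1>
  <p>The actual content the reader came for.</p>
  <aside><p>Sponsored noise</p></aside>
  <script>track(user);</script>
</body></html>
"""

extractor = ContentExtractor()
extractor.feed(page)
print(extractor.chunks)
```

The point of the sketch is that the filter only consults the markup's own structure, so the same few rules apply to any page without analysing what the page is about.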

Rezzy-dev commented 2 years ago

Take the "reader" example, for instance. You have a unified interface that looks for and displays specific content from any web page the same way. You could add automatic colour customisation to fit the web page design, for example (by sampling the colours used on the website), but this isn't crucial/necessary.

When we are looking for specific content on the web, when we are focused on content, it helps to have all that content displayed in the way we are used to having it displayed, so we can browse more efficiently.
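The optional colour-sampling idea mentioned above could be as simple as counting the literal colours a site's stylesheet uses and adopting the most frequent one as the reader view's accent. A toy sketch (real pages would need computed styles; this only scans hex literals in CSS text):

```python
import re
from collections import Counter

# Pick the most frequent hex colour in a stylesheet as the accent colour.
css = """
body { background: #ffffff; color: #222222; }
a { color: #0066cc; }
a:hover { color: #0066cc; text-decoration: underline; }
.button { background: #0066cc; }
"""

def dominant_colour(css_text):
    colours = re.findall(r"#[0-9a-fA-F]{6}", css_text)
    if not colours:
        return None  # no literal colours found; fall back to a default theme
    return Counter(c.lower() for c in colours).most_common(1)[0][0]

print(dominant_colour(css))
```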

Rezzy-dev commented 2 years ago

And having that content taken out of the webpage/website and presented in such a simple and clear way not only helps seeing users, but even blind users. It gets rid of the clutter for everyone, and lets them focus on the content.

In practice, it removes an accessibility barrier.

Rezzy-dev commented 2 years ago

> Okay, I can see we're not on the same page... Sighs. In my mind, both the objective and the definition are clear. And the process to get there is not rocket science. Step by step, a feature at a time. And the rendering would be designed/built so that it is flexible enough to change as the web evolves. Modules that are no longer useful can be removed, and new ways of content rendering/sorting can be added. The definition/target you seek is what the user is looking for -- the very content/information they came to acquire from that web site/page, from the web. Remove the noise, aid the user in their browsing/search, and serve what the user asked for -- that's the browser's job. To sort the books in the scattered library, so that the user can focus on the 8 items that best match what they came to find and actually be able to browse and remember them. To save the human brain from unnecessary noise/work that only serves to wear us out, to slow us down -- that is the task. To let us focus on what we came to do on the web.
>
> Magazines pollute, television pollutes, and so do newspapers. The problem perhaps is that the Web lacks regulation. And it does. It resembles a nightmare. But, you can tune it out. I see the particular problem. You're essentially asking to have a finite solution to effectively infinitely-many inputs.

The difference between magazines/print and television and the web is that computers and the web are interactive -- that means two-way control and interaction!

On computers, with the right software, you have control over the information/data you receive. You can edit, filter, rearrange, etc. That's the beauty of digital media and the web.

But in the last decade or so the exact opposite has been happening. The mainstream web browsers on the market, and the direction the Internet has been headed by companies has actually removed user control and made this interaction largely one-way, like print/broadcast media.

Where they use the interactive capabilities of the web, they use it to collect user data on their visitors/users, and to serve content they want the user to see/consume. All power to them, none to the user.

Rezzy-dev commented 2 years ago

Also, don't forget that these additional browsing options/features are just that: additional ways to browse the same content that you can see in full, laid out by the web engine as it has been published, if you choose to.

All you need to do is to switch how you want to browse the web right now.

The browser will load and render the data served by the web server according to the user's currently active mode/choice -- leaving out any data that is not relevant in the filtered modes.

The user still has the backup option of switching to full render mode in the event that they wish to see something crucial that may have been left out. (Although, ideally, this shouldn't be necessary after a while, once the browsing modes are optimised.)
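The mode-switching idea described above can be sketched as a set of filters over the same fetched content, with the full view always available as a fallback. The mode names and block kinds here are illustrative assumptions, not a proposed design:

```python
# Each browsing mode is just a filter over the same typed content blocks;
# "full" always remains available as a fallback.

MODES = {
    "full": lambda blocks: blocks,  # render everything, as published
    "reader": lambda blocks: [b for b in blocks
                              if b["kind"] in ("heading", "text")],
    "media": lambda blocks: [b for b in blocks if b["kind"] == "image"],
}

# A fetched page reduced to typed content blocks:
page = [
    {"kind": "heading", "data": "Title"},
    {"kind": "ad", "data": "BUY NOW"},
    {"kind": "text", "data": "Body paragraph."},
    {"kind": "image", "data": "photo.png"},
]

def render(blocks, mode="full"):
    # Unknown modes fall back to the full view rather than failing.
    return MODES.get(mode, MODES["full"])(blocks)

print([b["data"] for b in render(page, "reader")])
```

Because the filters operate on already-fetched content, switching modes never re-requests the page; it only changes which blocks reach the display layer.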

Rezzy-dev commented 2 years ago

That's essentially what I'm proposing: to start thinking of new ways to render, sort, and display the raw data that is websites, in ways that make browsing the web for the content/information we are looking for much more intuitive and easy. Saving the user time and mental effort (clutter that slows us down), and allowing them to focus better on what matters to them in the data/information nightmare that is current websites, that is the new web.

And not only the user, but also the computer -- because I don't know if you've noticed, but current websites, with their badly optimised, JavaScript-heavy interfaces, actually tend to really bog down the computer's system resources and drain the battery. That was not present at all in the web of the past. So there are performance and technical benefits to be gained here, too, from filtering and rendering only the relevant data/content.

textbrowser commented 2 years ago

For some, a nicer Web already exists.

[Screenshot: Screenshot_20220110_091623]

textbrowser commented 2 years ago

Don't be confused. It's a personalized and shareable database. It's also encrypted. It features a search over encrypted content.

textbrowser commented 2 years ago

A page's content resides in the database. The text. So you can also export information to PDFs.

textbrowser commented 2 years ago

https://user-images.githubusercontent.com/10701156/148802523-38c986af-e302-45df-a180-f7ead382652e.mp4

textbrowser commented 2 years ago

Of course a Web server is included. Its contents can be exported to the world either directly or through an SSH tunnel. The contents of each page are located on a PostgreSQL or SQLite database.

textbrowser commented 2 years ago

The future is already here.

Rezzy-dev commented 2 years ago

Not exactly what I had in mind... This is more like a printing press for web pages -- it removes ads and interactive content, and gives you just the text and images from each page, presented as it was served on the site. And it seems overly complicated.

There are some similarities between what I said and this, but for browsing/research purposes/function they stand worlds apart. There's no filtering, separating, and sorting of the information; there's no unified, simple visual GUI. This is a first, experimental step. And overly complicated for a first step. Why do you need a web server installed on your computer to do what a web browser could do on the fly?

Rezzy-dev commented 2 years ago

Or is it not even installed on your computer? This encrypted web server you speak of -- is it a public server? Is this a service?

textbrowser commented 2 years ago

> Not exactly what I had in mind... This is more like a printing press for web pages -- it removes ads and interactive content, and gives you just the text and images from each page, presented as it was served on the site. And it seems overly complicated.
>
> There are some similarities between what I said and this, but for browsing/research purposes/function they stand worlds apart. There's no filtering, separating, and sorting of the information; there's no unified, simple visual GUI. This is a first, experimental step. And overly complicated for a first step. Why do you need a web server installed on your computer to do what a web browser could do on the fly?

I wrote the server and it is included with Spot-On. It's not a service. The server listens on one address (the primary one) and two ports (like 80 and 443). It is not complicated. Yes, the content is static and curated. There's another process which parses numerous RSS feeds and acquires their contents. It downloads the content, parses interesting words, and stores this information locally (SQLite) or remotely (PostgreSQL). If you have other participants, it shares the data with them. It's a distributed shared service. I've accumulated almost 15 GiB of data.
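The pipeline described -- fetch feeds, parse out interesting words, store them in SQLite for search -- can be sketched roughly like this. The feed is inlined to keep the example self-contained, and the table names and word-splitting are illustrative assumptions, not Spot-On's actual schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Parse an RSS feed, pull out item titles, and index their words in SQLite
# for later search. A real feeder would download the feed over HTTP and
# store full page content, not just titles.

rss = """<rss version="2.0"><channel>
  <item><title>Dooble adds search over curated pages</title></item>
  <item><title>Static pages render fast</title></item>
</channel></rss>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
db.execute("CREATE TABLE words (word TEXT, item_id INTEGER)")

for item in ET.fromstring(rss).iter("item"):
    title = item.findtext("title")
    cur = db.execute("INSERT INTO items (title) VALUES (?)", (title,))
    for word in set(title.lower().split()):  # index each distinct word once
        db.execute("INSERT INTO words VALUES (?, ?)", (word, cur.lastrowid))

# Search: which items mention "pages"?
hits = db.execute(
    "SELECT title FROM items JOIN words ON items.id = words.item_id "
    "WHERE words.word = ?", ("pages",)
).fetchall()
print([t for (t,) in hits])
```

Swapping `sqlite3.connect(":memory:")` for a file path (or a PostgreSQL driver) gives the local-versus-remote storage split mentioned above.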

textbrowser commented 2 years ago

The other tabs of Spot-On are for other things. The RSS feeder is full of options. The entire process does not require a UI after it's been configured. I run it on FreeBSD where I can serve the search engine with other people; for instance through an SSH tunnel.

textbrowser commented 2 years ago

I know exactly what you are thinking of but the solution is not immediate because it truly does require a smart proxy between you and the site. Web pages can be very complex. It is one reason why there are so few engines. The market is owned by both the engines and the content which they provide. Non-dynamic pages are wonderful if the purpose is information. JS complicates the view process.

textbrowser commented 2 years ago

This medium is quiet and full of content. It is without the thrills of active pages. And it exists.

You're in search of an active display. Not only is the messiness filtered, but the important content is rearranged. A content blocker is just that. It removes the garbage and the resulting page resembles an awful motif with glaring emptiness.

Something beyond that. Remove the mess and present me with the original page without the berserk gaps.

Sure it's possible.