> The reason is Python copies strings on slicing, resulting in useless waste of memory and CPU cycles.
I agree with the conclusion that implementing a dedicated tunnel endpoint is the next logical step. However, I disagree that this is mainly due to string slicing.
Pretty much all of the slicing was removed from IPv8 packet handling a year ago. We even went so far as to replace `bytes` with `bytearray`. However, we didn't get the magic performance increase we were hoping for.
What we found (you can also see a hint of this in this breakdown by @ichorid and this breakdown by @egbertbouman) is that the pain lies in exchanging data between Python and <<your low-level language of choice here>>. For example, `ciphers.py`, which makes the calls to the OpenSSL backend. At some point I even implemented my own C endpoint which would transfer about 800 MB/s and, when fed to Python, was brought down to roughly 80 MB/s (excluding crypto). Therefore, never feeding this data into Python seems like a logical choice.
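For context on the slicing point, here is a tiny micro-benchmark sketch (not from the issue; the packet size is an assumption) comparing a copying `bytes` slice with a zero-copy `memoryview` slice. It illustrates why removing the copies alone did not produce an order-of-magnitude win: the per-packet Python call overhead is of the same order as the copy itself.

```python
import timeit

PACKET = bytes(1300)  # roughly tunnel-cell-sized payload (assumption)

def slice_bytes():
    return PACKET[28:]               # copies the payload

def slice_memoryview():
    return memoryview(PACKET)[28:]   # no copy, just a view

for fn in (slice_bytes, slice_memoryview):
    per_call = timeit.timeit(fn, number=1_000_000) / 1_000_000
    print(f"{fn.__name__}: {per_call * 1e9:.0f} ns/packet")
```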
> never feeding this data into Python seems like a logical choice.
Excellent point! The most logical thing to do would be combining the SOCKS proxy with a minimal UDP endpoint, so data never leaves the low-level domain.
I'm no longer part of the team, but since I was heavily involved in the anonymity stuff I'll respond anyway.
> The result is half-dead DHT performance and unreliable health info.
Every time I suspected DHT nodes of blocking Tribler, the real reason turned out to be something else (e.g., https://github.com/Tribler/tribler/pull/6045/commits/43c25990e7b8993ff27469942c99d83a8524435f, https://github.com/Tribler/tribler/commit/3f54b1dc7c6954c4e10ba5a14ec6a5ca8356736e, https://github.com/Tribler/tribler/pull/5357/commits/c5de5d74fb87df1eada5cf54a4688787cf972572). Sure, anonymized DHT lookups are slower than normal lookups and DHT nodes can temporarily block Tribler, but I don't think it's as bad as you make it out to be. Of course, it could be that this has changed recently.
> The most logical thing to do would be combining the SOCKS proxy with a minimal UDP endpoint, so data never leaves the low-level domain.
Agreed, the solution lies in not using Python at all when processing tunnel data. It would be interesting to see what this increase in speed does to the relay/exit nodes. Since there is no bandwidth throttling implemented, you may end up trading one issue for another :wink:
> Hidden seeding is broken (and useless)
The hidden seeding test appears to be broken in 2 ways:
Regarding the comment that hidden seeding is useless, that's just because the Tribler network is small. So, the chance of success is really low and therefore you'll rarely find a hidden seeder. That's pretty much what you would expect and what we experienced in the past with libswift.
Morning, all paths lead to Arvid version 1.2.13. Thanks for fixing the tunnel test after it had been broken/unused for so long! Lots of input to process for roadmap development.
Especially the DHT spam experiments with a bombardment of 1000 UDP messages. Very insightful, we should reproduce those again. A moderate client monoculture is emerging, I believe. Exit nodes are not load balancing, we're talking 100% versus 0.5% load. Strange. Do we have proof that exit nodes are not the main download speed constraint?
At this point I believe the DHT peer discovery is just one point in the chain. No measurements have been conducted to demonstrate what is going on. My current opinion is we might be close to a big performance boost (mere fixes) or not (non-Python) 😜
> Exit nodes are not load balancing, we're talking 100% versus 0.5% load. Strange.
This might be a symptom of a related component (e.g., caching of exit node network addresses can bias the network towards a subset of exit nodes). Unfortunately, this is hard to tell since we have very little insight into the end-to-end behaviour of our anonymous downloading stack.
> No measurements have been conducted to demonstrate what is going on.
This job checks if hidden seeding is "working" (binary check) and, as a next step, could be adapted to get more insight into the load balancing amongst exit nodes. Additionally, we could extend that job to do a DHT spam experiment and see what happens.
The issue dedicated to libtorrent 2.0 support: #5556
Related to https://github.com/Tribler/tribler/issues/143
Preface
Over the last few years, we successfully solved every major technical problem inherited from previous Tribler dev generations. We radically updated the Tribler codebase, established a robust code architecture and brought code support up to industry standards. Also, we've cut down on every non-essential feature, focusing development on two things only: metadata delivery and anonymous downloads.
During this journey, we identified a number of technical and scientific problems that block the Tribler project from reaching its goals. We tended not to tackle those for fear of bogging down our understaffed team, focusing on "low-hanging fruits" instead.
Gentlemen! :smoking: I inform you that all the low-hanging fruits are gone, only the hard ones remain. Here is the list.
Network-level problems
Solutions for these are obvious, but we never put enough effort into them because we never had enough qualified manpower.
:no_entry_sign: DHT calls blocked over tunnels :no_entry_sign:
BitTorrent uses Mainline DHT to find nodes that seed an infohash. Mainline DHT is susceptible to various types of attacks, including DDoS. To solve this problem, BitTorrent libraries use spam control methods, blocking peers that send too many requests. Different clients employ different criteria for detecting DDoS attempts. The problem is that Tribler's DHT requests are all sent through exit nodes, which may look like a single node to DHT peers, [triggering spam control](https://github.com/Tribler/tribler/issues/3065). The result is ~~half-dead~~ impaired torrent info fetching and unreliable health info. One solution to this problem could be caching or performing DHT requests on exit nodes on behalf of tunnel users. However, this could result in non-technical problems :copyright: :policeman:
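As a rough illustration of the caching idea (all names here are hypothetical, not the real IPv8 DHT API), an exit node could keep a small TTL cache in front of its DHT client so that repeated lookups for the same infohash from tunnel users do not each hit Mainline DHT and trip spam filters:

```python
import time

class DHTResponseCache:
    """Tiny TTL cache: infohash -> recently observed peer list."""

    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self._entries = {}  # infohash -> (expires_at, peers)

    def get(self, infohash):
        entry = self._entries.get(infohash)
        if entry and entry[0] > time.time():
            return entry[1]
        self._entries.pop(infohash, None)  # expired or missing
        return None

    def put(self, infohash, peers):
        self._entries[infohash] = (time.time() + self.ttl, peers)


async def lookup_peers(cache, dht, infohash):
    # `dht.find_peers` is an assumed coroutine, not the actual IPv8 call.
    peers = cache.get(infohash)
    if peers is None:
        peers = await dht.find_peers(infohash)
        cache.put(infohash, peers)
    return peers
```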
:snail: Slow tunnels performance :snail:
Our anonymous tunnels code is implemented in Python (though the crypto library is low-level). [Performance is very, very bad](https://github.com/Tribler/tribler/issues/2548) compared to VPNs: we do 5 MBytes/s _at best_ and 0.5 MBytes/s on average for an overseeded torrent on a fast modern PC, while a typical VPN (such as WireGuard) will use the whole bandwidth available to the host (about 20-80 MBytes/s). ~~The reason is Python [copies strings on slicing](https://github.com/Tribler/tribler/issues/4459), resulting in useless waste of memory and CPU cycles~~ ([solved](https://github.com/Tribler/tribler/issues/6481#issuecomment-949361098)). The real reason is slow data exchange between Python and lower-level libraries. The solution is to implement a shim IPv8-tunnels-to-SOCKS endpoint in a [lower-level language](https://github.com/Tribler/tribler/issues/4567). The problem is exacerbated by Libtorrent's [bad performance](https://github.com/Tribler/tribler/issues/2620) when [using uTP](https://github.com/arvidn/libtorrent/issues/3542).
:two: Support for BitTorrent 2.0 :two:
LibTorrent 2.0 is actively pushing the BitTorrent 2.0 standard, which moves to a more secure, 32-byte SHA-256 infohash. Supporting it will touch *every* part of the Tribler codebase, as the 20-byte hash size is hardcoded and expected everywhere.
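To make the scope concrete, here is a minimal, illustrative helper (not actual Tribler code) showing the kind of dual-hash handling that BitTorrent v2 support forces onto every code path currently assuming a 20-byte infohash:

```python
def infohash_version(infohash: bytes) -> int:
    """Distinguish v1 and v2 infohashes by length (illustration only)."""
    if len(infohash) == 20:
        return 1   # SHA-1, BitTorrent v1
    if len(infohash) == 32:
        return 2   # SHA-256, BitTorrent v2
    raise ValueError(f"unexpected infohash length: {len(infohash)}")
```

Every database column, wire-format field and GUI widget that assumes `len(infohash) == 20` needs a similar decision.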
Token economy
Currently, our token economy does not do anything useful, but instead just perplexes the users and annoys the developers. Here are the reasons:
:hole: The exitnode blackhole problem :hole:
When Tribler downloads something through an exit node, the user pays it the corresponding amount of bandwidth tokens. The problem is, 99.99% of seeds are non-Tribler, meaning that exit nodes pay no one. Essentially, exit nodes play the role of super-seeds for the network, but they _never spend their tokens_. The result is, exit nodes become "supermassive black holes" of the Tribler economy, constantly dragging regular users towards a negative balance. And negative numbers piss off people, incentivizing them to either stop using Tribler or just regularly delete their identities to [whitewash their balance](https://github.com/Tribler/tribler/issues/4015). There is another problem adding more complexity to the issue: when someone from outside the Tribler network exchanges traffic with a Tribler hidden seeder, [or just an anonymous downloader](https://github.com/Tribler/tribler/issues/5310), the traffic leaving the Tribler network will never be paid back, ultimately making the economy deflationary. And no, the simple solution of prioritizing Tribler peers would destroy performance: instead of a hundred fast peers, we would use a single slow one. One possible solution is to stop showing balances altogether and instead [show the user's relative ranking](https://github.com/Tribler/tribler/issues/3495).
:-1: Hidden seeding is ~~broken~~ useless :-1:
~~[Receiving UDP over SOCKS is broken in Libtorrent](https://github.com/arvidn/libtorrent/issues/6512)~~ When Tribler starts seeding a torrent in "hidden seeding mode", the torrent is only available through exit nodes. Unexpectedly, the seeding ratio of the hidden torrent will always be near-zero. The reason is that the BitTorrent protocol prefers the :racing_car:fastest:racing_car: seeds, but hidden seeding is always :turtle:slower:turtle: than direct seeding. The result is that hidden seeding does not help with recuperating the bandwidth tokens the user spent on anonymous downloads, further breaking the Tribler token economy. This is a vicious circle: Tribler users don't seed because no one pays them for seeding, because there is no incentive for Tribler users to download from other Tribler users.
:arrow_up: Prioritization of users is semi-functional :arrow_down:
Currently, there is just a single mechanism for prioritizing users on exit nodes: if a user's balance drops too low, that user gets a lower probability of being served (@devos50, correct me if I'm wrong). The mechanism is very primitive, barely working, and trivial to circumvent by whitewashing.
:champagne: The exitnode bottleneck and inflection to self-sufficiency :champagne:
A simple back-of-the-envelope calculation shows that for the Tribler network to become self-sufficient, there should be about **one million** Tribler users online at any given moment. The inflection point is 500 000 users: after that, most of the traffic will be served from inside the "Tribler bubble":

![inf](https://user-images.githubusercontent.com/2509103/140058418-0c381c38-2c9d-4259-ba67-75860cbadf2a.png)

However, even then **all** the traffic will have to go through the exit nodes. The reason is that in the current architecture of the Tribler anonymization network there is no such thing as a "hidden-seeding-only" exit node. I.e. if there are a million users and no exit nodes, hidden seeders will not be able to connect. The trivial solution is to add a special class of "pseudo-exit-nodes" that only allow connections to hidden seeders. (@egbertbouman, correct me on this if I am wrong.)
:moneybag: Credit mining fiasco :moneybag:
In an attempt to bootstrap Tribler into a self-sufficient ecosystem, we tried to implement a ["Credit mining" system](https://github.com/Tribler/tribler/issues/4363) that should have allowed users to "invest" some disk space and traffic into seeding Tribler torrents to get token rewards. Unfortunately, the only thing it provided to the users was [a constant stream of lost tokens and disappointment](https://github.com/Tribler/tribler/issues/3778). Eventually, we removed this feature. The reasons why it failed are multiple (e.g. the hidden seeding problem described above), but the ultimate one is: **BitTorrent is a non-zero-sum game**. If everyone is using the same algorithm and downloading the same popular torrent in hopes of profiting from it, every megabyte of the torrent they provide to others plays against them. In fact, the simplest analysis shows that the series of "wins" for a torrent starting from a single peer resembles a harmonic series, which grows incredibly slowly because of [diminishing returns](https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)) (see the toy calculation after this list). The solution to this problem is two-fold:
1. devise different token prices for different torrents
2. stop trying to replicate the money economy and instead come up with a social rating system
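A toy illustration of the harmonic growth mentioned above, under a deliberately simplified model (every downloader becomes an equal seeder, so the original seeder serves roughly 1/k of the k-th download) - an assumption for the sake of the example, not the real credit-mining accounting:

```python
# Cumulative expected "wins" of the first seeder grow like the harmonic series,
# i.e. like ln(k): painfully slow, with ever-diminishing returns.
cumulative = 0.0
for k in range(1, 10_001):
    cumulative += 1.0 / k
    if k in (10, 100, 1_000, 10_000):
        print(f"after {k:>6} downloaders: expected wins ~ {cumulative:.2f}")
# after     10 downloaders: expected wins ~ 2.93
# after    100 downloaders: expected wins ~ 5.19
# after   1000 downloaders: expected wins ~ 7.49
# after  10000 downloaders: expected wins ~ 9.79
```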
:money_mouth_face: Deanonymization by payouts :money_mouth_face:
In the current architecture, we do payouts immediately. In combination with an open ledger, this could allow deanonymizing people by their traffic patterns. Solution: implement [deferred (and possibly fuzzy) payouts](https://github.com/Tribler/tribler/issues/4255).
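A minimal sketch of what "deferred and fuzzy" could mean in practice (all names, quanta and delays below are invented for illustration, not the design from the linked issue): owed bandwidth is accumulated per peer and only settled after a random delay and in coarse rounded steps, so individual payouts no longer line up one-to-one with the traffic pattern that produced them.

```python
import random
import time

PAYOUT_QUANTUM = 25 * 1024 * 1024   # settle in 25 MiB steps (assumption)
MIN_DELAY, MAX_DELAY = 600, 3600    # settle 10-60 minutes later (assumption)

class DeferredPayouts:
    def __init__(self):
        self._owed = {}     # peer_id -> bytes owed but not yet paid out
        self._due_at = {}   # peer_id -> earliest settlement timestamp

    def record_traffic(self, peer_id, nbytes):
        self._owed[peer_id] = self._owed.get(peer_id, 0) + nbytes
        self._due_at.setdefault(peer_id, time.time() + random.uniform(MIN_DELAY, MAX_DELAY))

    def settle(self, now=None):
        """Return (peer_id, amount) payouts that are due; keep the fuzzy remainder."""
        now = now if now is not None else time.time()
        payouts = []
        for peer_id, due in list(self._due_at.items()):
            owed = self._owed.get(peer_id, 0)
            if now >= due and owed >= PAYOUT_QUANTUM:
                amount = (owed // PAYOUT_QUANTUM) * PAYOUT_QUANTUM
                self._owed[peer_id] = owed - amount
                self._due_at[peer_id] = now + random.uniform(MIN_DELAY, MAX_DELAY)
                payouts.append((peer_id, amount))
        return payouts
```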
:handshake: Unite the economies of Metadata, Anonymity, and Seeding :handshake:
If Tribler is ever going to reach its goal of creating **the** [attack-resilient economy for media](https://github.com/Tribler/tribler/issues/1), users must be able to provide value and trade different kinds of services in it. There are three primary ways a user can benefit the media economy:
1. provide anonymization services (being an exit node or an intermediary peer)
2. provide seeding services (storing data for rare torrents)
3. provide metadata enrichment services (create and maintain channels, categorize metadata, add tags, etc.)

**All three kinds of activities must reside in the same, single economic space**. This means either using a single token to reward all three kinds of activities, or creating three different systems of tokens that can be traded on a free market.
Content layer
Due to numerous social engineering mistakes, design errors and architectural miscalculations, the current Tribler content layer "Channels 2.0" failed to reach its goal of becoming a decentralized alternative to web-based BitTorrent trackers.
:dizzy: BitTorrent is unsuitable as the Channels backend :dizzy:
Big torrent collections cannot be created or maintained by a single user. Therefore, the bigger the collection, the more people are required to maintain and edit it simultaneously. If the data is stored as a torrent, each change results in a new infohash for the whole collection, eventually leading to **swarm fragmentation**. Thus, collaborating on, or just regularly copying data from, a large external source becomes nearly impossible. Clearly, [BitTorrent is unsuitable](https://github.com/Tribler/tribler/issues/4677) as a collaboration platform backend. Also, using torrents as a backend involves pretty complex logic for packing the data into append-only files and dealing with asynchronous events from an external entity (Libtorrent). The solution to this (and other) architectural problems would be abandoning BitTorrent as the Channels backend altogether and instead [fetching data on-demand from other peers](https://github.com/Tribler/tribler/discussions/5721). [Migrating to BitTorrent 2.0 will not help](https://github.com/Tribler/tribler/issues/4672), because the problem is not storing many small files _per se_, but the rate of change being proportional to the number of contributors of a channel (e.g. **quadratic** update traffic if every user is a contributor, since each of the N users' changes must then propagate to all N users).
:pencil2: No crowdsourcing instruments :pencil2:
The ability for users to [author](https://github.com/Tribler/tribler/issues/31) [metadata](https://github.com/Tribler/tribler/issues/2455) [cooperatively](https://github.com/Tribler/tribler/issues/4642) is a critical [requirement](https://github.com/Tribler/tribler/issues/134) for the Tribler Project to reach its [goals](https://github.com/Tribler/tribler/issues/1). The Channels 2.0 system was initially designed as permissioned, except for the top-level federated channels list (the "Discovered" tab). The plan was to start permissioned and then add [permissionless crowdsourcing elements](https://github.com/Tribler/tribler/issues/6217), such as the ability for users to create "[pull requests](https://github.com/Tribler/tribler/issues/6208)" into others' channels. Unfortunately, we became distracted by [gigantomania](https://github.com/Tribler/tribler/issues/21) and [non-essential features](https://github.com/Tribler/tribler/issues/5977). The [upcoming tags system](https://github.com/Tribler/tribler/issues/6214) is a step in the right direction, although its decision not to use the Channels 2.0 backend poses the question of how those two systems are going to be integrated in the future. Also, users must be able to [communicate with each other](https://github.com/Tribler/tribler/issues/6043) to discuss information organization. No crowdsourcing system can exist without discussion between participants.
:mag: Channels search is lame :mag_right:
Our Channels search algorithm is very simple and inefficient: ask five random neighbours for some results on a keyword. That's all. No dynamic deepening of the search, no walks, no deduplication, no popularity concerns, no indexing. Just :five: random hosts :shrug: This is a clear obstacle to the Tribler content layer becoming usable. One thing that probably saves us at the moment is the proliferation of [Free-For-All (FFA) entries](https://github.com/Tribler/tribler/issues/3615#issuecomment-651091413) due to local caching when users search for popular keywords. Some related issues: https://github.com/Tribler/tribler/issues/2250, https://github.com/Tribler/tribler/issues/2547
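For a flavour of what "dynamic deepening with deduplication" could look like, here is a rough sketch (the `remote_query` coroutine and response fields are assumptions, not the real remote search API): a bounded breadth-first walk that deduplicates results by infohash instead of stopping at five random peers.

```python
import random

async def search_walk(community, query, fanout=5, max_depth=2, max_peers=25):
    """Bounded breadth-first keyword search with result dedup by infohash."""
    peers = community.get_peers()
    frontier = random.sample(peers, k=min(fanout, len(peers)))
    visited, results = set(), {}                  # results: infohash -> entry
    for _ in range(max_depth):
        next_frontier = []
        for peer in frontier:
            if peer in visited or len(visited) >= max_peers:
                continue
            visited.add(peer)
            response = await community.remote_query(peer, query)  # assumed coroutine
            for entry in response.entries:                        # assumed response shape
                results.setdefault(entry.infohash, entry)         # dedup
            next_frontier.extend(response.suggested_peers)        # assumed field
        frontier = next_frontier
    return list(results.values())
```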
:zombie: Brain-dead data transfer: 99.99% of Channels data unused :zombie:
The Channels 2.0 design is based on transferring the log of signed channel changes and then replaying it on the user's machine to put those entries in the user's local DB. The processing is very slow and inefficient for bigger channels and unreliable for smaller channels. An analysis of possible solutions to the transfer problem [shows](https://github.com/Tribler/tribler/issues/3615#issuecomment-651091413) that there is no good design at all, **if we are downloading full channels**. The real problem is **data integration**: there are no databases that allow transferring and merging indexes in sub-linear time. Also, a human being is only interested in a **handful** of torrents **from a million-torrent** channel, so 99.99% of the transfer is wasted. Moving everything around is a brain-dead :zombie: waste of bandwidth, CPU cycles and users' time. Google does not ask the user to download the whole index before usage, really :wink: The solution is to [spread the data around the network and fetch it dynamically, using the network itself as an index](https://github.com/Tribler/tribler/discussions/5721).

![image](https://user-images.githubusercontent.com/2509103/140094424-71f51854-af55-4436-94d8-b35f5cb73ccf.png)

(Be warned that transferring complete SQLite DBs is not an option because of security issues.)
:popcorn: "Popular" suggestions are 95% garbage :popcorn:
The "Popular" tab is served its contents by the Popular Community. There are multiple problems with how that works. First of all, the info is not propagated transitively, for fear of spam. Also, the Popular Community uses a push-based gossip model, which still keeps the overlay susceptible to flood-spam attacks, but does not allow for the "initial boost" feature of pull-based gossip (the one that makes Channels discovery usable). The push-based model also prevents us from doing aggressive walks for research purposes, as we did with TrustChain and Channels. Also, there is no bias towards newer torrents in the "Popular" tab. The result is that the "Popular" tab only shows 2-3 torrents that are really popular at the moment: the rest are typically 2-15(!) year-old torrents that showed a great number of seeds at some moment and for some reason (probably due to DHT bugs) still sometimes show a high number of seeds. The problem is exacerbated by the fact that BEP33 seed checks are unreliable, and there is basically no way to tell the real number of seeds for a torrent without connecting to the corresponding swarm. In Tribler, these connections always go through exit nodes, which often results in DHT spam filter triggers and unnecessary exit node load. The problem of popular torrents is a complex one. Basically, we must design a distributed algorithm for collectively checking a set of entries (infohashes) and sorting those entries dynamically based on a dynamic property (the number of seeds). And don't forget the exit node bottleneck and the constant danger of spam! Some solutions to this problem could be:
* cache health data on exit nodes (dangerous due to non-technical problems)
* establish a separate class of "health-checker supernodes"
* stop propagating health data and instead go for some relative-popularity, time-based heuristic like VSIDS (see the toy sketch after this list)
* switch to pull-based gossip and add a dynamic boost based on the state of the local database
* split the health-checking work between neighbouring peers in a semi-structured way
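A toy sketch of the VSIDS-like option from the list above (the decay constant and thresholds are invented): every fresh "this torrent has seeders" observation bumps a score, and all scores decay geometrically per gossip round, so stale health data from years ago stops dominating the Popular tab.

```python
DECAY = 0.95  # applied once per gossip round (assumption)

class PopularityScores:
    def __init__(self):
        self.scores = {}  # infohash -> decaying activity score

    def bump(self, infohash, weight=1.0):
        """Called whenever a peer reports the torrent as alive."""
        self.scores[infohash] = self.scores.get(infohash, 0.0) + weight

    def decay_round(self):
        """Geometric decay; forget torrents nobody has vouched for in a while."""
        for infohash in list(self.scores):
            self.scores[infohash] *= DECAY
            if self.scores[infohash] < 1e-3:
                del self.scores[infohash]

    def top(self, n=10):
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
```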
:lips::straight_ruler: Family filter is Victorian-nun-meets-Nazi overzealous :lips::straight_ruler:
Originally, our [Family Filter](https://github.com/Tribler/tribler/issues/1052) was a quick hack made of a bag-of-words grep from some Dutch porn site. That's early-80s state of the art! We **definitely** need something smarter, e.g. [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)).
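Just to make the direction tangible, a heavily hedged sketch of what a smarter filter could look like with an off-the-shelf zero-shot classifier from HuggingFace `transformers` applied to torrent titles (the model choice, labels and threshold are untested assumptions, and inference cost on user machines is an open question):

```python
from transformers import pipeline

# Zero-shot classification with a general-purpose NLI model (assumption, not a tuned NSFW model).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
LABELS = ["adult content", "safe for work"]

def is_family_safe(title: str, threshold: float = 0.7) -> bool:
    result = classifier(title, candidate_labels=LABELS)
    scores = dict(zip(result["labels"], result["scores"]))
    return scores.get("adult content", 0.0) < threshold
```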
:cat2::cat2: "Just show 100 most popular torrents" will never work :cat2::cat2:
Content popularity for file-sharing networks and the Web is fundamentally different: file-sharing is much more **flat** at the top. Essentially, there is no "Google" or "Facebook" among torrent files. The reasons are many:
* movies are available in many languages
* movies are available in many resolutions
* movies are fast-lived, popular only for a short amount of time

Basically, file-sharing is about _streaming_ media: games, shows, movies, music - and the media really _streams_. Torrent popularity is short-lived. Thus, the strategy of "let's just show the 1000 most popular things" will never work with torrents: **people's taste in movies is much more diverse than their taste in websites.** Also, torrent collections are products of specific **communities**. Copying content will not copy the associated community: the collection will remain dead :skull:. At best, if some algorithm were copying contents continuously, Tribler would forever remain just a mirror of those web-based trackers, like the Internet Wayback Machine. Do people use that often? How many people even know of the Internet Archive? Moreover, copying content from another platform disincentivizes Tribler users from creating functional channels and communities around them. Users just "choke" on those big piles of content, unable to change them or use them in their own projects.

![image](https://user-images.githubusercontent.com/2509103/140556420-6e359d7e-0f5a-4f3f-aeed-8e05bece3473.png)

The way to solve this is to acknowledge that user communities [co-evolve](https://en.wikipedia.org/wiki/Enactivism) with the content they produce and their environment. Instead of focusing on copycatting :cat2: :cat2: contents from others, we should focus on developing efficient crowdsourcing tools for the Tribler community. One can spend infinite amounts of gas trying to start a fire - it will never become :fire:self-sufficient:fire: if the logs are :droplet:wet:droplet:.
User interface
Aside from the usual discussion about using the Web stack instead of QT, our GUI has a looooooong way to go in regard to style and usability...
:goberserk: UI looks :goberserk:
In general, the Tribler UI looks lame and outdated, like a thing made by a schoolkid in the early 2000s (which it essentially is :man_facepalming:). First of all, none of the other torrent clients use a dark scheme. The thing just does not associate well with what torrent clients do - sending files around. It would be very nice if we could provide both light and dark themes with Tribler; unfortunately, this will require refactoring the QT CSS mess - at the moment, the stylesheets are scattered all around `.ui` files and `.py` files. Some colours are even set up in the code manually. QT bugs do not help with this task either. In general, the solution should be: "move all the stylesheets into a separate file, leave `.ui` files unstyled". It will also require creating position-specific subclasses for many widgets in the GUI. Second, our UI is just... not eye-candy? Inconsistent? Here is an example of a sleek, modern PyQT GUI ([PyOneDark](https://github.com/Wanderson-Magalhaes/PyOneDark_Qt_Widgets_Modern_GUI)):

![GUI](https://user-images.githubusercontent.com/60605512/127739671-653eccb8-49da-4244-ae48-a8ae9b9b6fb2.png)

Third, lots of little usability details are missing: the keyboard focus is not where it is expected, we don't use keyboard shortcuts, we raise dialogs on every occasion, dialogs look boring without any icons, etc. In short, when someone opens Tribler for the first time, their first reaction is: "OMG, that's **ugly** :japanese_ogre:". Then they try to use it and it does not disappoint their expectations - the UX is as :hankey: as the UI. With such looks, it will be **extremely** hard to reach 1 million users in a world of 15-second attention spans, where each app only gets a single chance to prove itself. The solution is to either hire a professional GUI design team, or move to Web tech and reuse templates that are already there.
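The "single external stylesheet" idea is small enough to sketch directly (the file layout and theme names are assumptions): all colours live in one `.qss` file per theme, the `.ui` files stay unstyled, and switching themes is just loading a different file.

```python
from PyQt5.QtWidgets import QApplication

def apply_theme(app: QApplication, theme: str = "dark") -> None:
    """Load a whole-application stylesheet from a single theme file."""
    with open(f"themes/{theme}.qss", encoding="utf-8") as f:  # hypothetical path
        app.setStyleSheet(f.read())
```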
:neckbeard: Identifying torrent authors (identicons) :neckbeard:
At the moment, when the user searches for content, they get a flat list of channels, folders and torrents. The problem is, _the user can't see the author or the source channel of those_. This makes it impossible for users to identify good sources of information (e.g. channels to subscribe to). The solution is two-fold:
1. show full paths for entries in the search results list, either in the form of a pop-up hint, or inline
2. add [identicons](https://github.com/Tribler/tribler/issues/5154) for public keys/channels
:deciduous_tree: Add recursive channel search :deciduous_tree:
Currently, the "filter" input box in channels searches for contents only in the current folder/channel, without diving into that folder's/channel's child folders. It would be much more useful to make it work recursively on a channel's folders. To do this with acceptable performance, we'll have to add some accelerator structure to the Channels DB, such as [materialized paths, transitive closures or matrix encodings](https://vadimtropashko.files.wordpress.com/2011/07/ch5.pdf).
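As a rough illustration of the materialized-path option (the schema is invented for this example, not the actual Channels DB layout), each node stores the path of its ancestors, so a recursive "filter" becomes a single prefix query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, title TEXT, path TEXT);
    CREATE INDEX idx_nodes_path ON nodes(path);
""")
con.executemany(
    "INSERT INTO nodes (title, path) VALUES (?, ?)",
    [("my channel", "1/"), ("movies", "1/2/"), ("ubuntu.iso", "1/2/3/"), ("music", "1/4/")],
)

def search_recursive(folder_path: str, keyword: str):
    """Find matching entries anywhere below `folder_path` with a prefix LIKE query."""
    return con.execute(
        "SELECT title FROM nodes WHERE path LIKE ? AND title LIKE ?",
        (folder_path + "%", f"%{keyword}%"),
    ).fetchall()

print(search_recursive("1/2/", "ubuntu"))  # -> [('ubuntu.iso',)]
```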
:scroll: Downloads table is too wide :scroll:
See https://github.com/Tribler/tribler/issues/6452

![image](https://user-images.githubusercontent.com/2509103/140575673-7c06fcf3-3b2b-4d03-8db5-9dd1f3debb5e.png)

Also, the table is not very responsive, especially when Tribler starts. A better way to represent the downloads list would be a collapsible panel on the right half of the window. That would also enable a natural drag'n'drop way of moving torrents to/from Channels.

![Home_Layout](https://user-images.githubusercontent.com/2509103/140577376-966b3e7a-6769-4b15-abe0-a47bb31bf7d6.png)
:no_entry_sign: Can't share files with Tribler, torrent creation broken :no_entry_sign:
Tribler's primary goal is to enable users to easily share files. Our users asked for a feature to [share a folder of files](https://github.com/Tribler/tribler/issues/4729). At the moment, the torrent creation dialog is [completely broken](https://github.com/Tribler/tribler/issues/4674) (and it has been broken for a couple of years already). This means two things:
* torrent creation is too complex
* users need a simpler way to share files
:house: Bring back the Home screen :house:
The Home screen was removed because [it did not bring any value](https://github.com/Tribler/tribler/issues/5071) (it was just a placeholder). Nonetheless, users expect there to be a home screen, something like a personalized dashboard or a [news feed](https://github.com/Tribler/tribler/issues/5015). This time, it should provide value to the user. Some things that should be on the home screen:
* updates to subscribed channels
* changes to the popular torrents list
* list of active torrents
* list of the user's channels
* the user's [identicon](https://github.com/Tribler/tribler/issues/5154)
* pull requests on the user's channels
* updates on the user's pull requests
* new personal messages, etc.
* Tribler development news and new version notifications
:page_facing_up: Switch to paginated Channels interface :page_facing_up:
Implementing the Channels interface with `QTableView` was a big mistake. Yes, it provided the fancy endless-scrolling feature, good for showing off to journalists, but endless scrolling is useless if all entries in the table are the same height. Google uses endless scrolling just for pictures - its search remains paginated for a reason. Moving to a **paginated** `QListView` will result in the following upsides (a minimal model sketch follows the list):
* better navigation - pages help with that
* rich, robust and easy-to-change entry representation (e.g. with thumbnails)
* **much** simpler code (the whole "index to delegate" thing with `QTreeView` is pure horror)
* it enables us to create a **united model for downloads and torrent entries in Channels** - this will remove all the [synchronization issues](https://github.com/Tribler/tribler/issues/6303) between the Downloads list and Channels contents.
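A minimal sketch of what the paginated model could look like (invented, not the current Tribler GUI code): the view only ever holds one page of entries, switching pages swaps the backing list and resets the model, and the same model could be fed either download entries or channel entries.

```python
from PyQt5.QtCore import QAbstractListModel, QModelIndex, Qt

PAGE_SIZE = 50  # assumption

class PagedChannelModel(QAbstractListModel):
    """One-page-at-a-time list model; `fetch_page` is a callable: page number -> list of entries."""

    def __init__(self, fetch_page, parent=None):
        super().__init__(parent)
        self.fetch_page = fetch_page
        self.entries = []

    def load_page(self, page: int) -> None:
        self.beginResetModel()
        self.entries = self.fetch_page(page)
        self.endResetModel()

    def rowCount(self, parent=QModelIndex()) -> int:
        return 0 if parent.isValid() else len(self.entries)

    def data(self, index, role=Qt.DisplayRole):
        if role == Qt.DisplayRole and index.isValid():
            return self.entries[index.row()]
        return None
```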