synctext opened this issue 1 year ago
First_5G_deployment_of_Distributed_Artificial_Intelligence.pdf
Improved writing in methodology and evaluation; added hypothesis testing to prove the increase in connectivity; presented the new library; analysed Odido, Orange FR, and SFR; added parameters for Epic CY and CytaMobile-Vodafone CY.
Taxonomy of NAT boxes and blog posts are not introduction or problem-description section material (.5 page). Methodology is "Architecture and design of ...".
Example of intro (page 1 of thesis only + abstract): This work empowers citizens to take back control of The Internet and AI. AI is expected to improve industries like healthcare, manufacturing, and customer service, leading to higher-quality experiences for both workers and customers. The leading AI hardware company could be worth $50 trillion within a decade in the scenario where generative AI leads to an industrial revolution. Companies are making small steps toward "Level 5 intelligence". Companies are expected to profit, leading to further market concentration. Ordinary citizens are not expected to profit from this, increasing inequality. Politicians publicly ponder that whoever controls AI will control the world. Our work is meticulously designed to counter this.
Our work empowers citizens to take back control of their life. More specifically, we present the self-organising technology stack to take back The Internet and AI. Who owns The Internet? Who controls AI? The Internet is essentially private property, with few exceptions. Big Tech AI is built with copyrighted works [REF]. Google, Facebook, Amazon, Apple, Tencent, and others operate the central components of our daily digital lives. For instance, we require permission from Google and Apple to publish software for mobile devices. Their monopoly power means no other meaningful method exists to reach billions of smartphone users with newly created apps. News and media are dominated by American and China-based AI-driven monopolies.
We introduce a novel type of low-level network overlay and a proof-of-principle zero-server AI network. Our zero-server architecture offers various networking primitives. These serve as the basic building blocks for creating full-fledged AI alternatives to the services of "trusted" third parties or Big Tech companies.
We crafted a decentralised TikTok to demonstrate the viability of our work. Our proof-of-principle social media app does not require any servers, avoids using any cloud, bypasses the need for any legal entity, and abstains from any centrality in general. Relentless improvements in mobile hardware now enable both generative AI and on-device alternatives to the cloud. Our app delivers a fully decentralised media experience by building upon the swarming-based BitTorrent protocol.
Our main contribution is bypassing the carrier-grade NAT hardware inside 5G networks. The depletion of IPv4 addresses and a lack of cybersecurity force 5G network operators to violate Internet protocols. === END of INTRO ===
Problem Description Our central scientific problem is how to devise an AI-overlay network on 5G. We aim to create a new ownership model of generative AI. By creating fully decentralised machine learning on 5G, it is possible to make AI which is owned by both nobody and everybody. The challenge is to overcome the restricted communication in 5G networks. These mobile networks are exclusively designed to communicate with the cloud. Since 2017, Delft University has successfully expanded its research on decentralised 5G overlay networks. See the Android devices participating in a peer-to-peer network overlay. {INSERT PICTURE PAGE 2}
Architecture of an AI overlay network on 5G {1 page or less} We present the detailed technical work required for creating {INSERT PICTURE of 2-3 phones running decentral tiktok with successful puncture SIMs} We have a different design and a different approach than the TinyML community is pursuing. Our architecture is designed to decentralise generative AI, as demonstrated in our prior AI decentralisation efforts. The ability for autonomous AI agents to communicate using 5G is cardinal.
Extensive measurements of 5G networks. Present results; present Table 2 here.
Algorithm 2 Function to find the connection initiation timeout upper and lower bounds
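Only the caption of Algorithm 2 is quoted here, so as a reminder of what it should convey, a minimal sketch of one plausible reading: bisecting the idle time to bracket a carrier's NAT mapping timeout. The `mapping_alive` callback is an assumption (in practice it would idle a real NAT binding and probe for a reply); here it is stubbed with a simulated 64-second carrier timeout.

```python
def find_timeout_bounds(mapping_alive, lo=0.0, hi=600.0, precision=1.0):
    """Return (lower, upper) bounds on a NAT idle timeout in seconds.

    Invariant: a binding idled for `lower` seconds still works, while one
    idled for `upper` seconds has expired. Bisection narrows the gap until
    it is at most `precision`.
    """
    if not mapping_alive(lo):
        raise ValueError("mapping already expired at the lower bound")
    if mapping_alive(hi):
        raise ValueError("no timeout observed below the upper bound")
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if mapping_alive(mid):
            lo = mid   # binding survived: timeout lies above mid
        else:
            hi = mid   # binding expired: timeout is at or below mid
    return lo, hi

# Simulated carrier NAT with a 64 s idle timeout (demo assumption only).
lower, upper = find_timeout_bounds(lambda idle_s: idle_s < 64.0)
print(lower, upper)  # the bounds bracket 64 s within 1 s
```

Each probe costs real wall-clock idle time, so the logarithmic number of probes is what makes bisection attractive over a linear sweep.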
{same for algorithm 3}
Latest thesis draft: First_5G_deployment_of_Distributed_Artificial_Intelligence (2).pdf
Draft editing for possible APNIC blog post
Latest thesis draft First_5G_deployment_of_Distributed_Artificial_Intelligence.pdf
More polish is needed; email thesis 10Sep evening.
TABLE II: Timeouts of various carriers in seconds.
Polish 2x location columns, replace with country name or delete.
> tunnel .

Make complete or remove.
> Evaluation results, shown in Table III, indicate that provider-aware NAT puncturing led to successful connections in four additional combinations
Actual table is a few pages later, please fix.
> From the Birthday Paradox calculator [14], one can get a 50% success rate of a match after sending 77162 packets, and for a 99.9% success rate, 243587 packets are needed.
This is incomplete! Either add that you're assuming ports never close again on the NAT device, or give a timeout of a certain number of minutes. Second, you're also assuming a certain pool size of Internet IPv4 addresses from which we draw a semi-random number again for both sides.
> This will still be attempted based on the Birthday Paradox 99.9% likelihood of success, i.e. a connectivity attempt is comprised of 243587 packets.
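For reference, the quoted packet counts can be approximately reproduced with a small sketch, under exactly the assumptions the thesis should state explicitly: punctured ports never expire, and both sides draw from a fixed pairing space, here assumed to be N = 2^32 (which is what makes the numbers come out). The standard birthday approximation p ≈ 1 − exp(−k²/2N) gives k ≈ √(2N ln(1/(1−p))).

```python
import math

def packets_for_success(p, space=2**32):
    """Packets needed for a collision with probability p, assuming a
    fixed pairing space of `space` and mappings that never expire."""
    return math.ceil(math.sqrt(2 * space * math.log(1 / (1 - p))))

print(packets_for_success(0.5))    # close to the 77162 quoted above
print(packets_for_success(0.999))  # close to the 243587 quoted above
```

The small residual difference from the calculator's figures comes from it presumably using the exact collision probability rather than the exponential approximation.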
Page 9 uses this silent definition of "Attempt" without explanation or reference. A name with clarity would be "Puncture Unit"; it is hard to justify this semi-arbitrary Orestis numbering system.

ToDo:
Thesis defense target: 21 June 2024. Survey target: end of July 2023. Would like a fresh master thesis topic, not an incremental improvement of other thesis work. Starting roughly Q1 2023 or summer of 2023, flexible.
update: starting lit. survey 2nd May
update 2: literature survey finished: 3 Oct 2023.
RTOS expertise. AWS. Dream of contributing to The Linux Kernel. Byte-level stuff OK, even an assembly person in the age of JavaScript :-) Likes to use machine learning, but not invent new ML techniques or make it the central focus of the thesis (no unsupervised learning, no online learning). Thus more ML that is: adversarial, byzantine, decentralised, personalised, local-first AI, edge-devices only, low-power hardware accelerated. Prefers to utilise advanced-algorithms MSc course knowledge.
Possible brainstorm starting idea: start building the fastest machine learning based on hardware acceleration. First step is to get the hardware running fast, then stepwise modify algorithms and tweak towards machine learning for learn-to-rank, learn-through-consumption, or even learn-about-trust (reputation graph, work graph, MeritRank-inspired, etc.). Phones have been promised for testing.
Applied ML direction {less interested}. Related work on the astronomical hardware cost of AI. OpenAI has spent at least $63M on hardware:
https://rct.doj.ca.gov/Verification/Web/Download.aspx?saveas=560291.pdf&document_id=09027b8f803a8976 [source]
Pure P2P networking for 5G. The second direction is building the world's first overlay network exclusively for mobile devices. No PC, laptop or server allowed. Related: #2754, plus practical work to get 256 reliable neighbors: https://github.com/Tribler/tribler/issues/7074#issuecomment-1406236604
Literature survey: read everything about carrier-grade NAT and think of the 5G context. Prior 2019 work: [Universal communication using imperfect hardware](https://github.com/Tribler/tribler/issues/4827); it gave us IPv8-Kotlin. Your work fixes the final issues and makes it the workhorse for the future Internet (in a survey?). NAT puncturing, birthday paradox. See also the binary transfer protocol, EVA issues. Nothing ambitious :astonished:
literature survey example from prior students