mastercyb opened 6 years ago
Currently Proof of History + Tendermint are the things I am obsessed with. Looking for the following properties:
Interested in any research around performance testing of existing blockchain codebases.
Recent advances spur the creation of different approaches to consensus computing. Teams at EthDev, Paritytech, Blockstream, Block One, IOHK, Rcoop, and the foundations of Zilliqa, Skycoin, Qtum, Nebulas, Urbit and Solana (the list is in no way complete) are actively working in this direction. Choosing the right thing is a non-trivial task. Any research in the field, as well as adaptation of VM metering and billing into LLVM, is highly appreciated.
I am looking into provable things, thus quadratic voting is definitely the way to go. The problem with QV is that it only works well for a huge number of agents. I am looking for any solution that adapts the cost curve from linear to quadratic depending on activity and/or network size. I also suppose that private voting is crucial, thus zero-knowledge schemes with reasonable performance trade-offs are mandatory for the project.
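To make the QV mechanics concrete, here is a minimal sketch of the quadratic cost rule (cost of `v` votes is `v²`) and the resulting budget constraint. The function names and the integer credit budget are illustrative assumptions, not part of any existing codebase:

```rust
// Illustrative quadratic voting arithmetic; names are hypothetical.

/// Cost of casting `votes` on a single issue under quadratic voting: votes^2.
fn qv_cost(votes: u64) -> u64 {
    votes * votes
}

/// Largest number of votes an agent can afford with a given credit budget:
/// the integer square root, i.e. the largest v with v^2 <= credits.
fn affordable_votes(credits: u64) -> u64 {
    let mut v = 0u64;
    while (v + 1) * (v + 1) <= credits {
        v += 1;
    }
    v
}

fn main() {
    assert_eq!(qv_cost(3), 9);            // 3 votes cost 9 credits
    assert_eq!(affordable_votes(100), 10); // 100 credits buy 10 votes
    assert_eq!(affordable_votes(99), 9);   // one credit short of the 10th vote
}
```

The quadratic cost is what makes marginal votes increasingly expensive, which is also why the scheme degrades with few agents: a single wealthy voter still dominates small electorates.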
Anything from Nebulas-like ranking to genetic algorithms can be considered. Surprisingly, given the context of search, this is the topic I understand least.
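As a baseline for what "Nebulas-like ranking" builds on, here is a hedged sketch of plain power-iteration PageRank over a tiny link graph. Everything here (the function name, the adjacency-list representation, the damping factor 0.85) is an illustrative assumption:

```rust
// Illustrative PageRank by power iteration; not from any ranking codebase.
// `links[i]` lists the nodes that node i links to.
fn pagerank(links: &[Vec<usize>], damping: f64, iters: usize) -> Vec<f64> {
    let n = links.len();
    let mut rank = vec![1.0 / n as f64; n];
    for _ in 0..iters {
        // every node receives the teleport share (1 - d) / n
        let mut next = vec![(1.0 - damping) / n as f64; n];
        for (i, outs) in links.iter().enumerate() {
            if outs.is_empty() {
                // dangling node: spread its rank uniformly
                for r in next.iter_mut() {
                    *r += damping * rank[i] / n as f64;
                }
            } else {
                let share = damping * rank[i] / outs.len() as f64;
                for &j in outs {
                    next[j] += share;
                }
            }
        }
        rank = next;
    }
    rank
}

fn main() {
    // symmetric 3-cycle: 0 -> 1 -> 2 -> 0; all ranks converge to 1/3
    let links = vec![vec![1], vec![2], vec![0]];
    let r = pagerank(&links, 0.85, 50);
    assert!((r.iter().sum::<f64>() - 1.0).abs() < 1e-9);
    assert!((r[0] - 1.0 / 3.0).abs() < 1e-6);
}
```

Token-weighted schemes in the Nebulas style replace the uniform teleport and link shares with stake- or activity-weighted ones, but the fixed-point iteration is the same shape.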
I am considering merging the concepts of a node id and an account in order to make the network more resistant to DDoS attacks and to incentivize providers of read operations. A NEM-like algorithm is a place to start, but I am pretty sure it is not applicable to our use case.
Currently I consider Perun-like virtual channels the state of the art. Looking into general-purpose, search-related use cases for outsourcing computations to peers.
Cosmos IBC may (or may not) become the HTTPS of chains. My idea is that we can move domain relevance calculations to parallel chains. What else?
Location-based search is crucial for the utility of cyber•Search. The good thing is that recent advances from the FOAM team make it possible to prove relative location and to incentivize the spread of an anchor network to a target density. Calculating relevance relative to a location is as important as domain-based relevance. Thus my goal is to add a mechanism for mining triangulations during search requests, adding tremendous capabilities to decentralized search.
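One simple way to combine domain relevance with location, shown here as a hedged sketch: discount a content score by an exponential decay over great-circle distance. The haversine formula, the decay constant, and all names are illustrative assumptions, not a proposed protocol:

```rust
// Illustrative location-aware relevance; names and decay model are hypothetical.

/// Great-circle distance in km between two (lat, lon) points given in degrees.
fn haversine_km(a: (f64, f64), b: (f64, f64)) -> f64 {
    let r = 6371.0; // mean Earth radius, km
    let (lat1, lon1) = (a.0.to_radians(), a.1.to_radians());
    let (lat2, lon2) = (b.0.to_radians(), b.1.to_radians());
    let h = ((lat2 - lat1) / 2.0).sin().powi(2)
        + lat1.cos() * lat2.cos() * ((lon2 - lon1) / 2.0).sin().powi(2);
    2.0 * r * h.sqrt().asin()
}

/// Content relevance discounted by exponential distance decay:
/// score * exp(-distance / decay_km).
fn local_relevance(content_score: f64, user: (f64, f64), item: (f64, f64), decay_km: f64) -> f64 {
    content_score * (-haversine_km(user, item) / decay_km).exp()
}

fn main() {
    let user = (52.52, 13.40); // Berlin
    let near = (52.50, 13.45); // a few km away
    let far = (48.85, 2.35);   // Paris, ~880 km away
    // with equal content scores, the nearby item ranks higher
    assert!(local_relevance(1.0, user, near, 100.0) > local_relevance(1.0, user, far, 100.0));
}
```

In a triangulation-mining setting, the coordinates would come from proofs of relative location rather than trusted GPS input, but the ranking arithmetic stays the same.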
I am looking into ownership-based (aka Steem), not rent-based (aka Ethereum) economic models: for an entity doing 100 or even 1000 searches per day, a transaction-fee model implies tremendous cognitive costs. A rent-based model is not the way to go. Also, the resource economics of the network, such as broadband, storage and computation, must be built into the core, so any model for this is highly anticipated. Tiering and one-function tokens are also things we need to keep in mind.
I am considering Rust for the reference implementation, for performance and safety. What it lacks is a KRust implementation for formal semantics.
Allocate 20% of the community bag for contributions to our research papers.
Isn't this decided in a slightly different way? I.e. it's funded by the congress now and will be funded by governance later...?
I would move this info into the new research folder and close the issue.
A good example is this