Closed trentmc closed 3 years ago
Did we look into timelock contracts, so that we could have a lock period for creators' added liquidity?
Great point. Timelocks are a good solution, though they require smart contract changes.
Timelocks are at the heart of Balancer V2, which comes out in Q1 2021. I suggest putting in timelocks as part of the Ocean upgrade to Balancer V2.
I updated the description to include this, for Balancer V2.
Can we give the community access to these issues?
It was meant to be open already. Rectified.
Problem 1: An issue I see is verifying publishers, which could bring more trust or at least give greater transparency to good actors in the market.
Solutions: 1) Twitter and/or Keybase verification linking to profiles 2) Website link out to the publisher website
Problem 2: Determining verified partners / data providers and giving them more visibility, so users don't jump into pools and lose their funds. Currently, users must do a lot of research on their own and look for signals from the core team on Twitter, such as likes, comments, follows, etc.
Solutions: 1) Update the info page to show data providers verified by the core team 2) Add another section on the main page: "verified pools"
@keen012 thanks for the thoughts. Both ideas were listed in the IP rights violation epic but not here.
They are:
For thoroughness, I've now listed them here as well.
👋 Saw this while browsing around.
I'm interested, what exactly is a "rug pull"? Can you maybe link me to a description? Thanks!
👋 Saw this while browsing around.
I'm interested, what exactly is a "rug pull"? Can you maybe link me to a description? Thanks!
Explanation Links: https://medium.com/@unishield/how-uniswap-scams-work-ba847275a49f https://coingape.com/solving-the-rug-pull-liquidity-problem-on-uniswap-dex-after-the-sushi-debacle/ https://www.reddit.com/r/ethereum/comments/j3ba7g/rug_pulls_on_uniswap_how_they_work/
Thanks for the links. Interesting problem. If I had to describe it in my own words, "pulling a rug" is when a malicious user creates their own liquidity pool with an automated market maker (AMM) and then lures other liquidity providers or buyers/sellers into using the pool. To formulate the act of rug-pulling explicitly: it's the act of removing a significant portion of liquidity from an AMM pool, with the goal of profiting from the subsequently-skewed values of the pool. Examples of "rug-pullable" pools include:
IMO, there are differences between examples 1 and 2.
I think for both, a sufficient solution could be found by introducing a decentralized scoring mechanism representing the "trustworthiness" of the pool. For this to work, however, a resilient but publicly observable metric would have to be found.
An example of such a metric is the concept of "bitcoin days destroyed" in the Bitcoin network, so why not use something similar? Let's start with a criterion: a trustworthy pool is one that has many individual holders (e.g. relative to other pools) who ideally all hold an equally-sized portion of the pool.
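One standard way to quantify "equally-sized portions" is the Gini coefficient over holders' shares (0 = perfectly equal, approaching 1 = one holder owns everything). A minimal sketch of that idea in Python; this is illustrative, not necessarily the formula any site would actually use:

```python
def gini(shares):
    """Gini coefficient of pool shares.

    0.0 means all holders own equal portions; values near 1.0 mean
    ownership is concentrated in a single holder.
    """
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Mean-difference formulation: G = sum_i (2i - n - 1) * x_i / (n * total)
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * total)
```

For example, `gini([1, 1, 1, 1])` returns `0.0` (perfect equality), while `gini([0, 0, 0, 1])` returns `0.75` (one holder owns the whole pool).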
A few further ideas to create this score without diving too deep:
Ultimately, having this measure would allow pools to be ranked on the market website, making it very unlikely for a user to interact with a malicious pool (similar to how it's unlikely to be scammed by Airbnb/Uber/eBay providers).
Anyways, just a few ideas from my side :D Curious to see where this goes!
Always appreciate your thinking and insights Tim.
You have captured the essence of the problem.
Some other solutions include:
Cool, I did a small MVP: https://docs.google.com/spreadsheets/d/1AONZoxfiXXm16bdn1q_GrrhwfZTERuh_nLjP2AqpKAw/edit?usp=sharing
EDIT: Sorry link sharing is fixed now I think
Comments:
Feedback appreciated :)
Few more thoughts:
Hey,
so actually, I hacked as fast as I could to get something rolled out. Proud to announce: https://rugpullindex.com/ It's using ideas from the above-mentioned comments (Gini coefficient, etc.).
Of course, I'm hoping that people are gonna use it, as this will keep me motivated to continue working on it. Stats are here: https://plausible.io/rugpullindex.com
Really nice. Suggestions:
By TASLOB-45, I meant the 6-2 character shortname.
1. Use TASLOB-45 as the primary key
Hey, good point. The scoring currently doesn't factor in the number of unique holders. Checking for its score:
SELECT score FROM sets WHERE symbol="TASLOB-45"
> 0.841469360480562
But actually, TASLOB-45 has tons of seemingly legit users: https://etherscan.io/token/0x2655b8a7357f4bb4a8cb2170e196096ac8f0cdf9#balances So next week, I'll have to think about how to improve the index such that TASLOB-45 gets listed. Thanks for the feedback!
Happy to share notes with @realdatawhale
Hi everyone!
Hope you are well and great effort on the index!
If it helps, we'd be happy to provide you with access to the Directory, analyzing each dataset and its legitimacy. You could cross-reference your approach against actual rug pulls, etc. Let us know if there's anything else you need to make this work even better.
Small update from my side:
On the biz dev side:
Edit: Also, I'm now only showing data sets that have at least 35 liquidity providers, which already produces quite interesting results if you ask me (e.g. TASLOB-45 at rank 6). Though all decentralization scores are still quite bad. But maybe this will even out over time.
Small update today:
For each data set, a pie chart of the pool's LPs is now available by clicking the "Chart" link.
oh look, it's @TimDaub casually dropping an amazing project in some GitHub comments 👋 Love the idea of somehow rationalizing a "rug pull".
I can see how this could be used on the Pool widget in market, represented and mapped to some color-graded indicator. Some tooltip or help text could then link to your site for further explanations.
For that, an API endpoint would be handy, like:
https://rugpullindex.com/api/<DATATOKENADDRESS>
which could return:
{
"score": 0.73
}
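Mapping such a score to the color-graded indicator mentioned above could look something like this sketch; the thresholds are purely illustrative assumptions, not anything rugpullindex.com or the market actually defines:

```python
def score_color(score):
    """Map a score in [0, 1] to a traffic-light color for a UI badge.

    Thresholds (0.66 / 0.33) are hypothetical examples.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.66:
        return "green"
    if score >= 0.33:
        return "yellow"
    return "red"

print(score_color(0.73))  # green
```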
Also worth noting that, technically from the contracts' perspective, one datatoken could be in multiple pools, since there could be multiple pools for one data set. This is of course crazy confusing, which is why in all flows we default, enforced through the UI, to the first pool created with a respective datatoken, effectively creating the connection data set === datatoken === poolAddress, making it way easier to handle in terms of the user's cognitive load. While this is something to keep in mind for the future, it might imply that right now a possible API parameter should rather be the pool address instead of the datatoken address.
If we decide on using this score somehow, we can also move your app, or only the API, behind our infrastructure if needed. Cause unless you want to test your DDoS capabilities, you do not want to receive API requests from the live market right now.
oh look, it's @TimDaub casually dropping an amazing project in some GitHub comments 👋 Love the idea of somehow rationalizing a "rug pull".
Thanks 😊
I can see how this could be used on the Pool widget in market, represented and mapped to some color-graded indicator. Some tooltip or help text could then link to your site for further explanations.
Yeah, that was my idea too.
For that, an API endpoint would be handy, like https://rugpullindex.com/api/<DATATOKENADDRESS>
Makes total sense. That's definitely something I can deliver over the next few weeks.
Also worth noting that, technically from the perspective of the contract, one data token could be in multiple pools since there could be multiple pools for one data set.
Mhh 🤔 Not sure I'm following. I'm familiar with the fact that for a Balancer AMM, more than two tokens can be in one pool when each token's "pool weight" is less than 50%. Is that what you mean?
This is of course crazy confusing, which is why we default in all flows enforced through the UI to the first pool created with a respective datatoken, effectively creating the connection data set === datatoken === poolAddress, making it way easier to handle in terms of user's cognitive load. While this is something to keep in mind for the future, it might imply right now to rather use the pool address for a possible API parameter instead of datatoken address.
What'd help is a specification/document of how Ocean currently works. Can you link me to something like that?
If we decide on using this score somehow, we can also move your app or only the API behind our infrastructure if needed. Cause unless you want to test your DDoS capabilities, you do not want to receive API requests from the live market right now
Hahah, challenge accepted! I'll probably implement caching soon.
Thanks for the feedback @kremalicious!
Edit: Unfortunately, rugpullindex.com was down last night. I'm still fighting with the stability of my cronjob. But I think I'm close to fixing it and improving the site's reliability.
Hey everyone,
today I've added a 1-day delta row to the list. You can now see how a data set's rank changed in comparison to yesterday. I've sent around a few emails, asking for more user feedback to rank which feature is desired most. I'll try to find time to implement all of those features in the upcoming weeks.
Best, Tim
Hey 👋
just a heads up that I'm still thinking about how to improve the index. Today, I put down some thoughts in my super minimalistic rugpullindex blog (it's a .txt file lol). You can read them here: https://rugpullindex.com/changelog.txt
Best, Tim
Nonetheless, the pool publisher's stake should receive a higher weighting: the lower the publisher's share, the better, since that means a greater distribution of shares.
Agreed. Actually, I'm not totally aware of how data sets are sold initially right now. Does the publisher upload their data set and put in an amount of OCEAN that ultimately becomes the initial price of the data set? If so, maybe that's not an ideal strategy for pricing a data set in the beginning.
Martin Köppelmann once wrote about "Initial Uniswap Offerings": https://twitter.com/koeppelmann/status/1256201034046885890 and I believe Gnosis has done lots of work on auctions. I know that the DutchX can be used to price and sell off assets. In any case: making sure that a publisher's shares are spread more evenly should be implemented in Ocean Protocol. A data set that achieves fair pricing with many participants will be treated with privilege by rugpullindex.com's rating algorithm.
We are starting to work on an application with a group of React Native developers. It would be great if you could send us the link to the rugpullindex APIs again. We will implement your scores in our application (if you don't mind).
Oh cool! Sounds amazing. I'll send you an extra email for that.
I've added a Cache-Control header, and my reverse proxy is now allowed to cache, too. This means page speeds should now be significantly improved. According to my non-scientific measurements, they went down from 900ms to roughly 300ms. And since the actual server is only asked for data once a day (to put the latest crawl into the cache), the site should be pretty scalable now, too. At least as much as a 2€ Hetzner instance with nginx is scalable :)
Fingers crossed that the cache-invalidation for tonight works as expected.
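The reverse-proxy caching described above might look roughly like this minimal nginx sketch. The cache zone name, paths, and upstream port are illustrative assumptions, not the actual rugpullindex.com configuration:

```nginx
# Hypothetical sketch: cache upstream responses for a day,
# since the crawl only updates once daily.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=rpi:10m max_size=100m;

server {
    listen 80;
    server_name rugpullindex.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # upstream app server (assumed port)
        proxy_cache rpi;
        proxy_cache_valid 200 24h;          # serve cached pages for up to a day
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```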
Regarding my last update about Cache-Control: it ended up working well, and so rugpullindex.com should now be able to scale easily, as its reverse proxy is delivering a single cached HTML page to all its users.
But regarding my main update: for the past week, I've been thinking a lot about improving rugpullindex.com's current scoring method. I think that today I've made a key discovery about including liquidity in the ranking. I wrote about it in my minimal blog over here: https://rugpullindex.com/changelog.txt
Additionally, and this might be interesting to you @kremalicious and @realdatawhale, I'll soon start implementing several API endpoints. I'll keep you posted about the progress.
Update:
The new scoring method is online. It's a mix of liquidity and the Gini coefficient. I'll write a detailed summary of it soon.
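One plausible way to mix the two signals is to normalize a pool's liquidity and discount it by its Gini coefficient (lower Gini = more evenly distributed = better). This is a hypothetical sketch of such a mix, not the actual formula behind the site's score:

```python
def composite_score(liquidity, gini, max_liquidity):
    """Hypothetical composite of liquidity and holder (in)equality.

    Normalizes a pool's liquidity against the largest pool, then
    discounts by the Gini coefficient. Purely illustrative.
    """
    liquidity_norm = liquidity / max_liquidity if max_liquidity else 0.0
    return liquidity_norm * (1.0 - gini)
```

For example, a pool with half the top liquidity and a Gini of 0.5 would score `composite_score(500, 0.5, 1000) == 0.25`.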
Also: I've changed the page's copywriting a bit.
This is really great, Tim, keep it coming:)
There's also funding available from OceanDAO and more, we encourage you to go for it:) www.oceanprotocol.com/fund
Another update (maybe interesting for @kremalicious):
GET https://rugpullindex.com/api/v1/indices/OP-COMPOSITE-V1/assets
> returns all assets, sorted (like the website)
GET https://rugpullindex.com/api/v1/indices/OP-COMPOSITE-V1/assets/did:op:0c3d9e5Df48F2917EE3eB452791740A96cB382A6?date=ISO8601DateString
> {"rank":35,"symbol":"MARCUT-0","score":0.044855145528774946,"gini":0.9119733464150174,"lastCrawl":"2020-12-08T23:01:03.313Z","price":71.54826650977905,"address":"0x31369EA0a323903493f715d4e44081a64D3b77dA","did":"did:op:0c3d9e5Df48F2917EE3eB452791740A96cB382A6","liquidity":1170.9040416102937,"banned":0}
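A client consuming that endpoint would just parse the JSON body. A minimal sketch using the example response shown above (stored here as a string rather than fetched over the network):

```python
import json

# Example response body for a single asset, copied from the API output above.
body = (
    '{"rank":35,"symbol":"MARCUT-0","score":0.044855145528774946,'
    '"gini":0.9119733464150174,"lastCrawl":"2020-12-08T23:01:03.313Z",'
    '"price":71.54826650977905,'
    '"address":"0x31369EA0a323903493f715d4e44081a64D3b77dA",'
    '"did":"did:op:0c3d9e5Df48F2917EE3eB452791740A96cB382A6",'
    '"liquidity":1170.9040416102937,"banned":0}'
)

asset = json.loads(body)
print(asset["symbol"], asset["rank"])  # MARCUT-0 35
```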
To request API access, write to me at tim@daubenschuetz.de or comment here.
There's also funding available from OceanDAO and more, we encourage you to go for it:) www.oceanprotocol.com/fund
Thanks, I'll give it a look.
More updates:
Also an update from today that riffs on @kremalicious's idea of a rugpullindex.com score on the official Ocean Marketplace: Introducing rugpullindex.com badges for Markdown:
[![rugpullindex.com rank](https://img.shields.io/badge/dynamic/json?url=https://rugpullindex.com/api/v1/indices/OP-COMPOSITE-V1/ranks/did:op:7Bce67697eD2858d0683c631DdE7Af823b7eea38&label=rugpullindex.com&query=rank&color=blue&prefix=%23)](https://rugpullindex.com)
For more information, visit: https://rugpullindex.com/#faq
Multi-prong strategy, as below.
This epic can be changed from 'High' to 'Medium' priority once all high-priority issues are resolved.
Done
In Progress
Agreed-upon backlog [empty]
Possible tactics, to discuss more, only add to backlog if agreed