iquidus / explorer

An open source block explorer
BSD 3-Clause "New" or "Revised" License

UNSPENT api #37

Open dobbscoin opened 8 years ago

dobbscoin commented 8 years ago

UNSPENT API: Is there such a thing?

My Schildbach wallet developer says he needs to be able to query unspent outputs, or something along those lines, and seems to be suggesting that the explorer might have it. I know the wallet can listunspent from addresses it owns, but that's not something the Iquidus Explorer can currently do, right?

dobbscoin commented 8 years ago

Will Iquidus Explorer 2.0 be able to list UNSPENT outputs for all addresses? My Android dev complains about not being able to SWEEP paper wallets.

mmitech commented 8 years ago

Fork Insight for your coin; it has an advanced RESTful API, and UTXO queries are included.

iquidus commented 8 years ago

http://bitcoin.stackexchange.com/questions/4301/what-is-an-unspent-output/4302#4302

Unless your coin is doing something odd, this should be the same as the total supply. You can get this via the API easily with:

http://example.com/ext/getmoneysupply

Insight is OK, however user requests are very slow. This is why I have the server doing all of the heavy lifting with the indexing process, so user requests are simply database queries: much faster page loads for the user.

dobbscoin commented 8 years ago

I think the developer's issue has something to do with how the Schildbach Android wallet imports balances/sweeps coins from private keys/paper wallets. He's having exams this week. Thanks for the replies, I'll see if I can't get him to join the thread to explain himself in a little more detail.

iquidus commented 8 years ago

Ah, I see. I will look into the Android wallet.

Infernoman commented 8 years ago

Here's an example from insight-api, which is... pretty much unusable: http://transfer.infernopool.com/api/addr/TjGbGkh18vLqSKPCTqDjqYUbMCwzcdtEEG/utxo?noCache=1

A much better method for this is needed than what's currently available. I would be able to put a bounty toward this item.

Use getrawtransaction and decoderawtransaction, then add the elements into the database so it's not as inefficient. Below is an example:

    [{
        "address": "TjGbGkh18vLqSKPCTqDjqYUbMCwzcdtEEG",
        "txid": "5bc575a29c717c7f7412edc5b240632cd9f7bee384c17b3aeaaae88bddb1ea04",
        "vout": 1,
        "ts": 1468733104,
        "scriptPubKey": "76a9146d45d01257be9d2cdca9f4bf5d4e7ead687eb43e88ac",
        "amount": 32.80596375,
        "confirmations": 148,
        "confirmationsFromCache": false
    }]
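
For illustration only, a rough Node.js sketch of that flow. It assumes a coin daemon with JSON-RPC on 127.0.0.1:9332, the credentials shown, and a Mongo collection named utxos; all of these names are hypothetical, not existing Iquidus code. Note that getrawtransaction with the verbose flag already returns the decoded transaction, so a separate decoderawtransaction call can be skipped.

    // sketch.js – fetch a transaction, decode it, and store its outputs (illustrative only).
    // Assumes Node 18+ (global fetch) and the official `mongodb` driver.
    const { MongoClient } = require('mongodb');

    const RPC_URL = 'http://127.0.0.1:9332/';               // hypothetical daemon endpoint
    const RPC_AUTH = 'Basic ' + Buffer.from('rpcuser:rpcpass').toString('base64');

    // Minimal JSON-RPC helper.
    async function rpc(method, params) {
      const res = await fetch(RPC_URL, {
        method: 'POST',
        headers: { Authorization: RPC_AUTH, 'Content-Type': 'text/plain' },
        body: JSON.stringify({ jsonrpc: '1.0', id: 'utxo', method, params }),
      });
      const { result, error } = await res.json();
      if (error) throw new Error(error.message);
      return result;
    }

    // Store every output of one transaction in a (hypothetical) `utxos` collection.
    async function indexOutputs(db, txid) {
      const tx = await rpc('getrawtransaction', [txid, true]);   // verbose = decoded
      for (const out of tx.vout) {
        await db.collection('utxos').updateOne(
          { txid: tx.txid, vout: out.n },
          { $set: {
              address: (out.scriptPubKey.addresses || [])[0],
              scriptPubKey: out.scriptPubKey.hex,
              amount: out.value,
              ts: tx.time,
              spent: false,
            } },
          { upsert: true }                                        // idempotent on re-index
        );
      }
    }

    // usage sketch:
    // const client = await MongoClient.connect('mongodb://localhost:27017');
    // await indexOutputs(client.db('explorerdb'), '5bc575a2...');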

vroomDotClub commented 7 years ago

Anything being done with this?

olegmitrakhovich commented 5 years ago

would be great to have this call.

zilveer commented 4 years ago

Any progress on listing unspent outputs in the API?

uaktags commented 4 years ago

Unfortunately I don't believe anything has come of it. I've also not seen any PRs or forks that include it.

TheHolyRoger commented 4 years ago

Implementing this API requires a complete overhaul of Iquidus indexing.

For reference, Insight's utxo function: https://gist.github.com/jackzampolin/da3201b89d23dd5fa3becb0185da1fb2#utxo-details

Iquidus currently groups transaction vins/vouts together for address1 <-> address 2.

We'd need to change this to index ALL vins/vouts like Insight, then add functionality to mark old transactions' VOUTs as SPENT as new TXs come in that use them as VINs.
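
A rough sketch of what that "mark as spent" step could look like, assuming a hypothetical txouts collection with one document per output (illustrative only, not existing Iquidus code):

    // Sketch: when a new TX is indexed, flag the outputs it consumes as spent.
    // `db` is a connected mongodb Db instance; `tx` is a decoded transaction.
    async function markSpentInputs(db, tx) {
      for (const vin of tx.vin) {
        if (vin.coinbase) continue;                      // coinbase inputs spend nothing
        await db.collection('txouts').updateOne(
          { txid: vin.txid, vout: vin.vout },            // the old output being consumed
          { $set: { spent: true, spent_txid: tx.txid } }
        );
      }
    }

    // An address UTXO endpoint then reduces to a single query:
    // db.collection('txouts').find({ address: someAddress, spent: false })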

I'm not opposed to this, as it might fix another issue we had with parallelising TX indexing.

It would probably add some overhead though, to the front end at least as it's less 'clean'.

olegmitrakhovich commented 4 years ago

I did find this... https://github.com/MIPPL/iquidus-getutxos-explorer ... I tried it and it works OK. However, when I started to do load testing on my chain and pushed thousands of transactions, I noticed that this version of Iquidus was not keeping up. I later found out that it is not a good idea to use MongoDB because of the 16 MB BSON document limit... So I think this version of Iquidus will work fine if your chain doesn't have too much traffic.

TheHolyRoger commented 4 years ago

I was actually wondering if it might be an idea to switch to SQL for the performance of historical address balance calculation.

Big change though....

uaktags commented 4 years ago

But MySQL in NodeJS is terrible

olegmitrakhovich commented 4 years ago

Yeah lol, I tried making one with MySQL and Node.js using this tutorial: https://www.education-ecosystem.com/tvle83/2PY30-how-to-create-a-blockchain-explorer-with-javascript/5pbJX-how-to-create-a-blockchain-explorer-with-javascr-5/

It didn't work out that great, haha. I also found articles of people explaining why it's not a good idea to use SQL, like this one: https://www.quora.com/How-is-Bitcoin-Block-Explorer-Blockchain-info-architected

I really liked Iquidus, but I am using Trezor's Blockbook at this point; it uses RocksDB... So maybe if Iquidus changed to RocksDB people would be jumping all over it, since Iquidus is much easier to set up than Blockbook.

uaktags commented 4 years ago

I've never really looked into Blockbook, given that it's written in Go. But on the topic of RocksDB, I'm not sure of the reasoning for using it over Redis (maybe Rocks is better supported in Go than Redis?), unless it's just the fact that RocksDB uses flash storage as well as RAM.

I'd love to use MySQL if coding for it in NodeJS wasn't a test of one's own sanity. I do like the idea, though, of historical data going to something other than Mongo. I think we'll just need to keep in mind that schemas can change, which is the nicety of Mongo and other NoSQL databases.

TheHolyRoger commented 4 years ago

@uaktags are you suggesting a combination of redis + mongo?

The alternative is LevelDB like Insight, I suppose.

Maybe the PR you found is the way to go; that way we're not indexing all vins and vouts, which would massively impact the database size.

uaktags commented 4 years ago

Well, Redis probably wouldn't work well, since you would be loading the database solely in memory. That probably explains their use of RocksDB.

smaicloud commented 4 years ago

MongoDB is a NoSQL server, guys. You can't compare apples and bananas. And MongoDB works really great: you can put tons of data in it, and this NoSQL DB is highly performant. Check the collections and use the database profiler to tune the performance. You don't need to redesign the whole explorer.
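
For example, in the mongo shell (the 100 ms threshold is just an example value):

    // record any operation slower than 100 ms in db.system.profile
    db.setProfilingLevel(1, 100)

    // then inspect the slowest recorded operations
    db.system.profile.find().sort({ millis: -1 }).limit(5).pretty()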

uaktags commented 4 years ago

While I appreciate that comment, that's not exactly the point of the debate. The focal point of the topic isn't whether or not we understand that NoSQL/Mongo is performant for the task, but whether it would be the best tool for this aspect and for historical data.

The task at hand, UNSPENT data, would indeed require an overhaul of the explorer to track the data that's needed to query. Whether or not a change in data storage occurs, this 100% has to happen, because we don't track the data needed for this yet.

Then there's the topic of historical data and whether or not NoSQL is the best tool for that. Mongo has a clear limitation that gets hit on larger chains: this is the 16 MB BSON limitation mentioned. Mongo is clearly not the best tool for longevity as the explorer runs longer and longer. This is why a lot of companies use Mongo (or another NoSQL store) for current/recent data, and then move it into SQL for historical data.
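
For reference, a quick way to see how close a single address document is getting to that limit, in the legacy mongo shell, assuming an addresses collection and an a_id field (both names are illustrative):

    // BSON size of one address document vs the 16 MB (16777216 byte) cap
    var doc = db.addresses.findOne({ a_id: "TjGbGkh18vLqSKPCTqDjqYUbMCwzcdtEEG" })
    print(Object.bsonsize(doc) + " of 16777216 bytes")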

So we're not comparing apples and oranges, but saying that we need to consider implementing both apples and oranges.

Unless you have some PR that you're ready to push out to answer the calls?