spesmilo / electrumx

Alternative implementation of spesmilo/electrum-server
MIT License

docs: describe protocol version 2.0 #90

Open SomberNight opened 3 years ago

SomberNight commented 3 years ago

This PR describes an updated Electrum Protocol, version 2.0 (formerly named version 1.5).

Some of these ideas have already been discussed and implemented as part of https://github.com/spesmilo/electrumx/pull/80, however this PR includes even more changes, most notably pagination of long histories (taking into consideration ideas from https://github.com/kyuupichan/electrumx/issues/348 and https://github.com/kyuupichan/electrumx/issues/82).

While the changes described in https://github.com/spesmilo/electrumx/pull/80 are already non-trivial and would be useful in themselves, I have realised I would also like to fix the long-standing issue of serving addresses with long histories, and IMO it would be better to only bump the protocol version once (and to minimise the number of db upgrades/times operators have to resync).

This PR is only documentation, I would like feedback before implementing it.

@romanz, @chris-belcher, @shesek, @cculianu, @ecdsa, @kyuupichan


Compared to the existing Electrum Protocol (1.4.2), the changes are:


Re pagination of long histories:

chris-belcher commented 3 years ago

I read the definition of the new status hash and I think EPS should be able to implement it. The changes to blockchain.scripthash.get_history seem fine from the point of view of EPS.

cculianu commented 3 years ago

@SomberNight Thank you so much for taking the time to think about this deeply, research it, and do PoC implementations to test feasibility. After having discussed this with you on IRC, I am pretty optimistic about these changes and the new status_hash mechanism, which solves a long-standing problem. Note that there is a small performance hit the first time you compute the status hash for a huge history of, say, 1 million items (in Python, something like 300 msec versus 2 seconds on my machine).

But this cost is paid only once, since the 10k-tx checkpointing thing avoids having to compute too long a chain of hashes.

That cost can also be mitigated with additional design (such as pushing off the response to such a request to some low priority background task)... or it can be simply paid upfront since it will only be incurred once. And there aren't that many 500k or 1 million tx addresses (although there are more than one would expect).

Anyway, I'm a fan of these changes. I haven't yet tried a PoC implementation to see if there are any gotchas but reading the new spec it seems very sane and reasonable.

cculianu commented 3 years ago

One additional comment and/or question: I know for BTC you guys definitely need blockchain.outpoint.subscribe -- but it may not be needed for BCH immediately.

On Fulcrum, one thought I had was that in the BCH case (but definitely not in the BTC case), I may opt to not offer that by default (or maybe do offer it by default but make it potentially opt-out).

This makes it less painful for server admins to update to the new version, since in that case the (very slow to build) spent_txo index won't need to be built the first time they run the updated version.

Now, maybe I am overcomplicating things -- and maybe I should just make them eat the cost. But aside from them having to wait X hours for it to build that index, it may also be unpopular due to the additional space requirements.

So.. my thinking is that maybe in the BCH case I will "extend" this protocol to add additional keys to server.features, so that a server can advertise whether it lacks the index (if the key is missing, assume it has the index).

What are your thoughts on this? I know this is not your primary concern, and since this is mostly a BCH issue, I know you have plenty to do already -- but I was wondering if you had recommendations on what to call this key. I was thinking, as a BCH extension, of having an optional additional key in the server.features map: "optional_flags" or something, with values such as "no_spent_txo" for a server where that index is missing...

shesek commented 3 years ago

blockchain.outpoint.subscribe introduces some challenges for personal servers (eps/bwt):


The status of a scripthash has been re-defined. The new definition is recursive and makes it possible to avoid redoing all the hashing for most updates.

Something very similar could be achieved with the current protocol using SHA256 midstate. But making it recursive would make things easier on the implementation's side.
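Concretely, a recursive status along these lines can be folded incrementally. The serialization below (32-byte running status, 32-byte tx hash, 4-byte little-endian height) is only an illustrative assumption, not the exact encoding the PR specifies:

```python
import hashlib

def update_status(prev_status: bytes, tx_hash: bytes, height: int) -> bytes:
    # Fold one (tx_hash, height) history entry into the running status.
    # Hypothetical serialization; the real spec in this PR may differ.
    payload = prev_status + tx_hash + height.to_bytes(4, "little")
    return hashlib.sha256(payload).digest()

def status_of_history(history) -> bytes:
    # history: iterable of (tx_hash: bytes, height: int), oldest first
    status = bytes(32)  # starting status for an empty history
    for tx_hash, height in history:
        status = update_status(status, tx_hash, height)
    return status
```

With this shape, a server (or client) that caches the status up to the last confirmed entry only performs one hash per new history item, instead of rehashing the whole history as the 1.4 status definition requires.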

chris-belcher commented 3 years ago

My thoughts on your post @shesek

Edit: Another thought on the new status hash, I think EPS/BWT servers won't even take advantage of the possibility of caching hashes, but just recalculate them from the start each time. This should be fine because the client would just be attacking themselves if they DoS their own personal server. Plus it helps keep the server stateless.

shesek commented 3 years ago

EPS/BWT servers can already never be safely exposed to the public

Agreed, of course. But this still adds another vector of attack for users that have an insecure setup (I suspect there are quite a few of these, unfortunately) that should be taken into account.

Querying the UTXO set with gettxout is much better than getrawtransaction <txid>.

Oh, yes, nice! That is much better. The electrum server will still have to occasionally poll for this, but it doesn't require checking each block separately.

But what happens if the output gets funded then spent immediately after, before the electrum server had a chance to poll gettxout? This could happen if the funding and spending transactions show up in the same block, but also for mempool transactions if the spend happens quickly enough.

From my understanding of Lightning and how Electrum is likely to work, it also seems pretty unlikely that Electrum...

I would consider that this RPC command could be used in the future for other things too, either by Electrum itself or by third party software that leverages Electrum servers.

But I agree that if it's expected that the Electrum Lightning implementation wouldn't normally subscribe to spent outpoints, then it could be good enough for now.

ecdsa commented 3 years ago

But I agree that if it's expected that the Electrum Lightning implementation wouldn't normally subscribe to spent outpoints, then it could be good enough for now.

You cannot expect that. Electrum needs to know if an outpoint has been spent, so the server needs to distinguish between 3 different cases: utxo does not exist, utxo exists and is unspent, and utxo was spent.
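For illustration, here are the three cases, and why bare gettxout cannot separate two of them (the OutpointState and classify_from_gettxout names are hypothetical, not from the spec):

```python
from enum import Enum

class OutpointState(Enum):
    NONEXISTENT = 0   # outpoint was never created
    UNSPENT = 1       # outpoint is in the UTXO set
    SPENT = 2         # outpoint was created, then spent

def classify_from_gettxout(gettxout_result):
    # Bitcoin Core's gettxout returns data only for unspent outpoints.
    # A null result therefore conflates NONEXISTENT and SPENT, which is
    # exactly the problem: a server relying on gettxout alone cannot
    # report all three states without a spent-txo index or extra lookups.
    if gettxout_result is not None:
        return OutpointState.UNSPENT
    return None  # ambiguous: NONEXISTENT or SPENT
```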

chris-belcher commented 3 years ago

Here are two possible ways to solve the edge case of an outpoint being immediately spent after it is created:

The node could check getrawtransaction <txid> or getmempoolentry <txid> in the case that gettxout returns nothing. Those two RPC calls will still find the transaction even if the outpoint was immediately spent, and from there the server would be able to import its address into Core's wallet. The server can obtain the lightning channel address via the method blockchain.transaction.broadcast, because the client will broadcast the lightning funding transaction via the server (unless the other peer broadcasts the funding transaction, which I think happens with open_channel --push_amount).

This leaves another rare edge case: if the transaction is broadcast by the other peer instead of our client, and the transaction is immediately spent in the mempool before the server has a chance to see it, and the node is running blocksonly, and the user is pruning, and the user shuts down their server and node before the transaction is confirmed, and then starts them up again after enough time that the user's node prunes the relevant block, then the server won't be able to find the funding transaction.
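The fallback chain in this first option might look like the sketch below, where rpc is a hypothetical callable wrapping Bitcoin Core's JSON-RPC and error handling is simplified:

```python
def locate_outpoint_tx(rpc, txid, vout):
    # `rpc` is a hypothetical callable wrapping Bitcoin Core's JSON-RPC,
    # e.g. rpc("gettxout", txid, vout); it returns None or raises on
    # "not found", depending on the call.
    utxo = rpc("gettxout", txid, vout)
    if utxo is not None:
        return "unspent", utxo  # fast path: outpoint is in the UTXO set
    # gettxout returned nothing: the output may have been spent
    # immediately. getmempoolentry finds the tx while unconfirmed...
    try:
        return "mempool", rpc("getmempoolentry", txid)
    except Exception:
        pass
    # ...and getrawtransaction finds it once confirmed (requires txindex,
    # or the relevant block still being present on a pruned node).
    try:
        return "confirmed", rpc("getrawtransaction", txid, True)
    except Exception:
        return "unknown", None
```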

Another way is to add a method to this protocol which does nothing but notify the server about what address will be later requested in blockchain.outpoint.subscribe. Call it something like blockchain.outpoint.address_notify and the client sends it immediately before subscribing to the outpoint. EPS/BWT servers will import that address into Core which will be able to keep track and know if the outpoint was created and then immediately spent. I believe that would completely solve the edge-case.

shesek commented 3 years ago

the server needs to distinguish between 3 different cases: utxo does not exist, utxo exists and is unspent, and utxo was spent.

Distinguishing between spent txos and non-existent txos in a generic manner that works for any txo is inherently incompatible with pruning.

It seems to me that this could only work with pruning if we loosen the requirements by making some assumptions about electrum's specific usage patterns, and tailoring the electrum server-side solution to work specifically for this.

The server can obtain the lightning channel address via the method blockchain.transaction.broadcast

How could it tell that it's a lightning transaction? Wouldn't it have to import all p2wsh addresses to be sure?

and the node is running blocksonly

blocksonly isn't necessarily a condition; this could happen if the funding and spending transactions appear for the first time in a block, or even if they appear briefly in the mempool but get mined before polling manages to catch them.

the user is pruning and the user shuts down their server and node before the transaction is confirmed, and then starts them up again after enough time that the user's node prunes the relevant block

For the pruning / no txindex case, is this assuming that the electrum server is also checking individual blocks with getrawtransaction <txid> <blockhash>?

Another way is to add a method to this protocol which does nothing but notify the server about what address will be later requested

This would indeed make things easier. The server will simply have to import the addresses, and all the information for the relevant txos will be available in the Bitcoin Core wallet, without any specialized logic for tracking txos.

If we can guarantee that the address notification is always sent before the funding transaction confirms, then this becomes trivial. But even if not, because the address is known, the server could more easily issue a rescan to look for recent funding/spending transactions (say, in the last 144 blocks or so?), without having to check individual blocks manually.

SomberNight commented 3 years ago

Another way is to add a method to this protocol which does nothing but notify the server about what address will be later requested in blockchain.outpoint.subscribe. Call it something like blockchain.outpoint.address_notify and the client sends it immediately before subscribing to the outpoint. EPS/BWT servers will import that address into Core which will be able to keep track and know if the outpoint was created and then immediately spent. I believe that would completely solve the edge-case.

This would indeed make things easier. The server will simply have to import the addresses, and all the information for the relevant txos will be available in the Bitcoin Core wallet, without any specialized logic for tracking txos.

I quite like that the protocol no longer uses addresses (but script hashes). I refuse to reintroduce them! :D Anyway, it seems pointless to send an extra request. If you think it would be helpful, maybe we could add an optional arg spk_hint to blockchain.outpoint.subscribe, which should be set to the scriptPubKey corresponding to the outpoint. Electrum could then always set this; e-x could just ignore the field completely.
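As a sketch, a client could attach the hint when building the subscription request. Note that spk_hint is only a proposal in this thread, and the parameter shape below (appended positionally as a hex string) is an assumption, not a settled encoding:

```python
import json

def outpoint_subscribe(req_id, tx_hash_hex, out_idx, spk_hint_hex=None):
    # Build a blockchain.outpoint.subscribe JSON-RPC request.
    # spk_hint is a hypothetical optional third parameter holding the
    # scriptPubKey of the outpoint, hex-encoded; servers that do not
    # need it (e.g. electrumx) could simply ignore it.
    params = [tx_hash_hex, out_idx]
    if spk_hint_hex is not None:
        params.append(spk_hint_hex)
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": "blockchain.outpoint.subscribe",
                       "params": params})
```

A personal server (EPS/BWT) could use the hint to import the address into Core's wallet before the funding transaction appears, sidestepping the immediate-spend edge case.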

chris-belcher commented 3 years ago

How could it tell that it's a lightning transaction? Wouldn't it have to import all p2wsh addresses to be sure?

Yes, or rather than importing straight away it could save every p2wsh address in a 'txid' -> 'address' map or dict.

If you think it would be helpful, maybe we could add an optional arg spk_hint to blockchain.outpoint.subscribe, which should be set to the scriptPubKey corresponding to the outpoint.

Yes(!) This is a much better idea than a separate protocol method. That should totally solve the edge case.

sunnyking commented 3 years ago

Regarding a new get_history, from the wallet point of view what we really need is to get the most recent history. The Gemmer wallet already runs into the 'history too large' error on multiple occasions with normal use. But from Gemmer's point of view, it only needs to fetch the most recent 3 transactions (including unconfirmed) from the electrumx server, regardless of how many historic transactions there are for a given address. So if the API allowed a parameter for how many recent transactions the client wants, that would actually reduce a lot of server burden on long histories. In fact, we would have no problem with the server hard-limiting the parameter to a max of e.g. 1k for performance considerations. So basically what we are suggesting is something like

get_history(scripthash, limit=100, last_tx=None) # returns tx summary from most recent

(last_tx maybe for possible pagination purpose, but for our use case we really don't need it)

Then for typical wallet and explorer, it would not burden the server on long history addresses. For example, Gemmer would do

get_history(scripthash, limit=3)

which could be very efficient on the server for long-history addresses, compared to the current situation where they run into 'history too large'.

SomberNight commented 3 years ago

from wallet point of view what we really need is to get the most recent history

it only needs to fetch the most recent 3 transactions(including unconfirmed) from electrumx server, regardless of how many historic transactions there are for a given address

Maybe your altcoin needs that but it is useless for Bitcoin. At the very least, for Bitcoin, we would need to know about all UTXOs, and they can be arbitrarily old.

There are many considerations to keep in mind:

One way to achieve this is what the pre-1.5 protocol does: define the status hash in a way that the client will notice it is missing txs. This proposal here does the same. However, the client will then necessarily have to have downloaded all txs to calculate the status hash itself.

So I see no point in designing an API that allows fetching just the most recent txs. If you want to know whether there are txs you are missing, that's already how it works: you compare the status hashes. Then, to be able to recalculate the status hash yourself, trustlessly, you need to obtain all missing txs.

cculianu commented 3 years ago

So I see no point in designing an API that allows fetching just the most recent txs. If you want to know whether there are txs you are missing, that's already how it works: you compare the status hashes. Then, to be able to recalculate the status hash yourself, trustlessly, you need to obtain all missing txs.

One can imagine an optimization in the "happy" case where you are just able to download only the tx's you believe you are missing. You don't actually need the entire history to calculate the status hash in the case where everything lines up... only as a fallback, if you cannot reconcile, would you go ahead and try the full download; and then if that fails, decide which server you believe and try again.

Most of the time nobody is lying to anybody -- and being able to detect omission is already captured by the status hash. The "download last few tx's" thing would be probably enough 99% of the time... and may save some load...

Although I believe that with the changes in 1.5 it should already be possible to retrieve a partial history towards the "end".. right?

SomberNight commented 3 years ago

Although I believe that with the changes in 1.5 it should already be possible to retrieve a partial history towards the "end".. right?

Yes. You can call blockchain.scripthash.get_history with a recent from_height param.

So I see no point in designing an API that allows fetching just the most recent txs

One can imagine an optimization in the "happy" case where you are just able to download only the tx's you believe you are missing.

The issue is that you have no idea how many txs you are missing: the status either matches or differs. If you wanted to, for the happy path, with the current proposal, you could set from_height to the last block you believe you have covered. An API that allowed "get most recent 100 txs" seems less useful.
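The happy path plus fallback could look roughly like the sketch below. get_status and get_history stand in for hypothetical client calls, and the status is computed 1.4-style here purely to keep the sketch self-contained (the 2.0 status is defined differently, but the comparison logic is the same):

```python
import hashlib

def status_of(history):
    # Protocol-1.4-style status over (tx_hash_hex, height) pairs:
    # sha256 of the concatenated "tx_hash:height:" strings, hex-encoded.
    s = "".join(f"{txh}:{height}:" for txh, height in history)
    return hashlib.sha256(s.encode()).hexdigest() if history else None

def reconcile(get_status, get_history, local_history):
    # get_status() -> current server status for the scripthash
    # get_history(from_height=...) -> (tx_hash_hex, height) pairs
    # Both are hypothetical client callables.
    if get_status() == status_of(local_history):
        return local_history  # happy path: already in sync
    # Optimistic: fetch only the tail, from the last covered block on.
    last = local_history[-1][1] if local_history else 0
    merged = local_history + get_history(from_height=last + 1)
    if get_status() == status_of(merged):
        return merged  # happy path: the tail reconciled
    return get_history(from_height=0)  # fallback: refetch everything
```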

cculianu commented 3 years ago

If you wanted to, for the happy path, with the current proposal, you could set from_height to the last block you believe you have covered.

Yeah, just start with what you last saw, and if that fails to reconcile.. get a full history.

Anyway I was merely pointing out that it's currently possible to do it with the 1.5 spec and that it is a useful optimization for wallet implementors to consider in their interaction with the server...

sunnyking commented 3 years ago

The issue is that you have no idea how many txs you are missing: the status either matches or differs. If you wanted to, for the happy path, with the current proposal, you could set from_height to the last block you believe you have covered. An API that allowed "get most recent 100 txs" seems less useful.

In most of our usage scenarios, including wallets and explorers, servers are generally trusted, and applications don't try to remember the transaction history they have downloaded from the server. I see where the design is coming from; however, it seems to me that the focus on SPV has really made an impact on its usability in general.

The problem with the from_height parameter is that the server expects the client to be stateful and to store past transaction history, even the full transaction history, for the address. Otherwise it's quite tricky for the client to know which height it should supply to the get_history API. It appears to me that with this proposed API, large-history addresses would continue to plague non-SPV uses of ElectrumX.

SomberNight commented 3 years ago

In most our usage scenarios, including wallets and explorers, servers are generally trusted

applications don't try to remember transaction history it has downloaded from server

I see. Indeed if you wanted to create a block explorer (using a trusted Electrum server as backend) that can efficiently show the (e.g.) 100 most recent txs for an address that would need a different get_history API. (and as a continuation of the idea, I guess you might want the 100 txs before that, etc) Scripthash status is not even needed for this "trusted block explorer" use case.

Indeed the use case we had in mind here is different. get_history and the scripthash status hash have been redesigned together; specifically so that a well-behaving stateful client can use get_history in an efficient way and recalculate the status hash itself. I guess this could be called the "stateful SPV client" use case.

I am not sure if the currently proposed get_history could be changed in a way that made it useful for both. Having implemented get_history I have the impression it is already of significant complexity. I think it might even be reasonable to add a separate history RPC that handles the block explorer use case. (which then could be done later, in a future PR)

I suppose one thing the proposed get_history RPC is missing for the block explorer use case is (1) to allow requesting txs in reverse order (most recent first); and another is (2) that you would want to paginate based on number of txs and not blocks...

SomberNight commented 3 years ago

I've cherry-picked doc changes from https://github.com/spesmilo/electrumx/pull/109 re

:func:`blockchain.block.headers` now returns headers as a list, instead of a single concatenated hex string.

Also, I've made minor changes as per comments from @Kixunil above.

The force push is just a rebase on HEAD of master.

cculianu commented 3 years ago

Aside from reorg — Can rbf shenanigans also lead to potentially many notifications as well?

SomberNight commented 3 years ago

Aside from reorg — Can rbf shenanigans also lead to potentially many notifications as well?

Oh right, definitely. That too; and other mempool quirks, such as tx eviction. So there can be many reasons for notifications for blockchain.outpoint.subscribe.

cculianu commented 3 years ago

Yeah, those are subtle gotchas that a naive client implementation might not anticipate when doing a first-pass implementation. Might it be worthwhile to mention that in a sentence or two in the docs for the method, so that client implementors are prodded to think about that and other related possibilities?

SomberNight commented 3 years ago

Ok, I've added some examples in https://github.com/spesmilo/electrumx/pull/90/commits/e3f95c760ed5eb6640e0264e064ae63ded3bc802.

Kixunil commented 3 years ago

One thing that came up recently is having some special well-defined error code/message for the case when the server is indexing and thus not ready. I suggest:

{
    "error": {
        "code": -42000,
        "message": "The server is indexing the timechain",
        "data": {
            "progress": 42.47,
            "eta": 3600,
            "height": 620000,
            "max_height": 705147,
            "system_status": 0
        }
    }
}

Explanation:

Upon receiving error code -42000, the wallet SHOULD inform the user that the server is syncing the timechain. The contents of the message field SHOULD NOT be displayed in the UI but MAY be printed into logs. The whole data object and all its fields are OPTIONAL: the server MAY send them and the client MAY ignore them.

Currently recognized codes for system_status:

Edit: maybe this should be bitflags or an array so that multiple problems can be reported in a single message.
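A client might handle the proposed error along these lines (handle_response is a hypothetical name; all data fields are treated as optional, as specified above):

```python
def handle_response(resp):
    # `resp` is a parsed JSON-RPC response dict. -42000 is the proposed
    # "server is indexing" code; every field under "data" is optional.
    err = resp.get("error")
    if err and err.get("code") == -42000:
        data = err.get("data") or {}
        progress = data.get("progress")  # percentage, or -1 = indeterminate
        note = "server is still indexing"
        if isinstance(progress, (int, float)) and progress >= 0:
            note += f" ({progress:.1f}%)"
        return "retry_later", note
    return "ok", resp.get("result")
```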

romanz commented 2 years ago

Maybe we can add a new entry to the dictionary returned by server.features RPC: https://electrumx-spesmilo.readthedocs.io/en/latest/protocol-methods.html#server-features

Result

    A dictionary of keys and values. Each key represents a feature or service of the server, 
    and the value gives additional information.

    The following features MUST be reported by the server. Additional key-value pairs may be returned.

...

Example Result

{
    "genesis_hash": "000000000933ea01ad0ee984209779baaec3ced90fa3f408719526f8d77f4943",
    "hosts": {"14.3.140.101": {"tcp_port": 51001, "ssl_port": 51002}},
    "protocol_max": "1.0",
    "protocol_min": "1.0",
    "pruning": null,
    "server_version": "ElectrumX 1.0.17",
    "hash_function": "sha256"
}

For example, we can add an "index_state" entry (as suggested by @Kixunil above):

    "index_state": {
        "code": -42000,
        "message": "The server is indexing the timechain",
        "data": {
            "progress": 42.47,
            "eta": 3600,
            "height": 620000,
            "max_height": 705147,
            "system_status": 0
        }
    }

WDYT?

Kixunil commented 2 years ago

Good idea adding it there, but I think it shouldn't be a replacement for the error message (to improve efficiency). Also, code -42000 and message are error information, so they probably shouldn't be there. So probably just flatten it:

"index_state": {
    "progress": 42.47,
    "eta": 3600,
    "height": 620000,
    "max_height": 705147,
    "system_status": 0
}

Although, when synced, maybe it'd be better to replace the fields above with:

"index_state": {
    "synced": true,
    "max_height": 705147,
    "system_status": 0
}

SomberNight commented 2 years ago

FWIW ElectrumX does not listen while it is indexing the blockchain or while processing the mempool (which can take up to several minutes). I mean re initial sync - so e.g. this does not apply when a new block just comes in (but I suspect you would not want to send errors in that case either).

Still, we could of course spec something like this.

Maybe we can add a new entry to the dictionary returned by server.features RPC.

That RPC atm is ~stateless (within session). Would you want clients to poll it? Or what exactly is the idea with putting progress and eta there?

To me it makes more sense to put this info in the error message, as I think clients might naturally retry (~poll) whatever RPC they had tried to send anyway.

Upon receiving error code -42000 the wallet SHOULD inform the user that the server is syncing

To be clear, this error code would be a global constant, not specific to any RPC, right? (so the server could send this error with the same meaning in response to any RPC)

{
    "error": {
        "code": -42000,
        "message": "The server is indexing the timechain",
        "data": {
            "progress": 42.47,
            "eta": 3600,
            "height": 620000,
            "max_height": 705147,
            "system_status": 0
        }
    }
}

Which data fields are mandatory? height and max_height are trivial so they can certainly be, but maybe the others should be optional.

  • progress - server-defined value expressed as percentage. It MAY represent processing unrelated to height (e.g. database compression status), special value -1 MAY be sent by the server meaning "indeterminate" (usually displayed as indeterminate progressbar/wheel). The wallet SHOULD prefer displaying this value over height

I would prefer the [0, 1] interval, so fractions instead of percentages. This is also what bitcoind logs btw. -1 is fine.

  • system_status - system health code, can be used to warn the user about problems with the server. Personal servers are encouraged to send this, public servers SHOULD NOT send this.

Currently recognized codes for system_status:

  • 0 - normal, everything works as expected
  • 1 - undefined system problem, implementations SHOULD send a more specific error if possible
  • 2 - system slowness - the processing speed decreased unexpectedly, may or may not be a problem
  • 3 - high resource utilization - the system RAM, storage space or CPU load are near 100% (exact threshold is implementation-defined), the user should check their system
  • 4 - problems connecting to Bitcoin Core/Bitcoin network
  • 5 - internal diagnostics warning - some kind of diagnostics (watchdog, frequent restarts, data corruption...) detected a problem
  • 6 - configuration issue

Hmm, maybe this field is an argument towards server.features or similar - you might want to expose codes 2, 3 and 5 to the user even if the server is synced... e.g. (re code 3) if the disk is almost full but there is still ~some space, would you want to stop serving clients and send errors instead? Sounds weird. Also, code 6 and code 4 probably have a large overlap. Maybe it would be easier if for now we just defined 0 and 1, said that any non-zero value means an error state, and defined values over 1000 as implementation-specific.

Kixunil commented 2 years ago

Would you want clients to poll it? Or what exactly is the idea with putting progress and eta there?

Yes, they can poll it (features), assuming sane polling interval.

To be clear, this error code would be a global constant, not specific to any RPC, right? (so the server could send this error with the same meaning in response to any RPC)

Yes, exactly.

Which data fields are mandatory?

None

I would prefer the [0, 1] interval, so fractions instead of percentages.

Percentage is easier to read from communications dump but it's not something I feel strongly about.

you might want to expose codes 2, 3, 5 to the user even if the server is synced... e.g. (re code 3) if the disk is almost full but there is still ~some space, would you want to not serve clients but send errors instead? Sounds weird.

Good point!

Also, code 6 and code 4 probably have a large overlap.

There are setups with Core on a different machine, so better distinguish them IMO.

Maybe it would be easier if for now we just defined 0 and 1, said that any non-zero value means an error-state

I'd very much like to avoid having to write another spec later just because people want to be a bit more specific. I believe those codes are practical and can greatly improve troubleshooting if implemented. After all, it's just a few lines in docs.

defined values over 1000 as implementation-specific.

I'm not sure it's that useful - the wallet would not have a way to match it to some meaningful message. But I'm not against it either. Perhaps say something like:

Implementations may choose to use a code more specific than 1 that is not yet standardized, by using a number greater than 1000. However, they are encouraged to propose standardization of the new code.

Kixunil commented 2 years ago

This issue is somewhat stale but important. Is there anything we can do to move it forward?

cculianu commented 1 year ago

Any chance this protocol version can be renamed to 2.0? I realize the protocol hasn't been using semantic versioning but maybe it's time to go that route. Rationale: this really is a very breaking change. Also, on the BCH side our protocol numbers are constrained to 1.4.x (we have BCH-specific extensions to the protocol) so as to remain at least vaguely version-number compatible with the BTC version of this protocol -- and it would be nice if we had the Lebensraum to actually use 1.5.x, 1.6.x, 1.7.x as version numbers for our various extensions.

Kixunil commented 1 year ago

Any chance this protocol version can be renamed to 2.0? I realize the protocol hasn't been using semantic versioning but maybe it's time to go that route.

Oh, yes very much agree with semver.


extensions to the protocol

IMO non-upstream extensions have no reason to be versioned. Those should go into features field.

SomberNight commented 1 year ago

Ok, sure, we can call it 2.0. We can try to use semver for the protocol going forward.

cculianu commented 1 year ago

Ok, sure, we can call it 2.0. We can try to use semver for the protocol going forward.

Wow thank you so much man! I really appreciate it. This means I can rename my BCH-specific extensions that I'm about to push out as 1.5.0 :)

cculianu commented 1 year ago

IMO non-upstream extensions have no reason to be versioned. Those should go into features field.

Yeah except we do some version negotiation to change the "personality" of the server depending on what the client is.. (in order to behave as older clients expect without limiting things for newer clients).

Kixunil commented 1 year ago

I'm not sure if I understand your goal but maybe protocol extensions themselves should be versioned?

cculianu commented 1 year ago

I'm not sure if I understand your goal but maybe protocol extensions themselves should be versioned?

Correct. That's what version negotiation is for ....

Kixunil commented 1 year ago

Then I suggest you name the extensions my-extension-1.2. Is there a value in standardizing the format?

cculianu commented 1 year ago

Dude don't worry about it seriously. This is handled correctly and doesn't need to be discussed here. :)

RCasatta commented 1 year ago

For each mempool tx, form a bytearray: tx_hash+height+fee, where:

Why is the fee used here?

SomberNight commented 1 year ago

For each mempool tx, form a bytearray: tx_hash+height+fee, where:

Why is the fee used here?

Please ask such specific questions in-line (comment on a specific line, instead of in the main thread), so that discussions can be tracked better.

Kixunil commented 1 year ago

One thing I found kinda annoying when implementing an electrum client is that there's no way to know which request a message is a response to without accessing internal state, which makes deserialization annoying. It doesn't seem to be solvable, but just in case: is there any way to modify the protocol to allow it?

torkelrogstad commented 1 year ago

One thing I found kinda annoying when implementing an electrum client is that there's no way to know which request a message is a response to without accessing internal state, which makes deserialization annoying. It doesn't seem to be solvable, but just in case: is there any way to modify the protocol to allow it?

If the name of the type of the response was included somehow, you could know how to deserialize the message without looking up internal state.
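To make the problem concrete: since JSON-RPC responses carry only id, a client has to keep a pending map from id to method name in order to know how to interpret each result. A minimal sketch (RpcSession is a hypothetical name):

```python
import itertools
import json

class RpcSession:
    # Responses carry only `id`, so the client must remember which
    # method each id belongs to; this map is the "internal state"
    # discussed above.
    def __init__(self):
        self._next_id = itertools.count(1)
        self._pending = {}  # request id -> method name

    def build_request(self, method, params):
        req_id = next(self._next_id)
        self._pending[req_id] = method
        return json.dumps({"jsonrpc": "2.0", "id": req_id,
                           "method": method, "params": params})

    def match_response(self, raw):
        msg = json.loads(raw)
        # Without a method/type field in the response, this lookup is
        # unavoidable; raises KeyError on unknown ids.
        method = self._pending.pop(msg["id"])
        return method, msg.get("result")
```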


A suggestion/wish-list for the new protocol version: some way of detecting whether an address is unused. We all want to practice good address hygiene and avoid address reuse. A protocol method for checking whether an address is unused would help with this.

My use case: I collect XPUBs from users, allowing them to make withdrawals from my service to new addresses each time they withdraw. These XPUBs could also receive funds from other services, so I need to check that the next address in the derivation path is unused before sending to it.

I can do this by looking up the history. However, this is inefficient if the address has received lots of transactions. Checking whether the address is unused is really just a special case of looking up the history, where we short-circuit upon finding the first history element and return early.
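For reference, with the current protocol this check can already be approximated via blockchain.scripthash.get_history, just inefficiently for long histories. In the sketch below, electrum_scripthash follows the scripthash definition from the protocol docs, while get_history is a hypothetical client callable:

```python
import hashlib

def electrum_scripthash(script_pubkey: bytes) -> str:
    # Per the Electrum protocol docs: sha256 of the output script,
    # hex-encoded in reversed byte order.
    return hashlib.sha256(script_pubkey).digest()[::-1].hex()

def is_unused(get_history, script_pubkey: bytes) -> bool:
    # get_history is a hypothetical client callable for
    # blockchain.scripthash.get_history; with the current protocol this
    # can still fail with 'history too large' for heavily reused scripts,
    # which is what a dedicated short-circuiting method would avoid.
    return len(get_history(electrum_scripthash(script_pubkey))) == 0
```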

I originally brought this up in https://github.com/romanz/electrs/discussions/920