Closed Liongrass closed 8 months ago
Took a quick look at this. The proof was uploaded to the universe by the sender node. Is it possible the address was not created on the receiver node but the sender node by accident? Or another node? Because in the receiver log, I don't see the following log lines (example from when I tried the same locally):
2023-10-18 10:10:09.977 [INF] RPCS: [NewAddr]: making new addr: asset_id=f2bdc9a75c4269eefc06bfb3762bb47964614aa612800cabe16eee4ca63a3c13, amt=1
2023-10-18 10:10:10.017 [WRN] GRDN: Taproot addr tb1pc32y4jpleu5znfx94kueu9qkwa6k0dalvkxcr3jf4qt6qgsmvv9shvazzc was already added to wallet before, skipping
2023-10-18 10:10:10.017 [INF] GRDN: Imported Taproot Asset address taptb1qqqsqqspqqzzpu4aexn4csnfam7qd0anwc4mg7tyv992vy5qpj47zmhwfjnr50qnq5ssx424cy357lfll77gdmx3r2um33phj7d6yx76rv5w5admfknujvenqcssyt5sa9dyyyfdeveygh85ty03awpxdvravsqzfg3a3em5rz5e7ypgpqssyy82evgkmmdda032j3k7s9mnwctnnfj49d2y3vxc3a7c4ywydfl8pgqszrpkw4hxjan9wfek2unsvvaz7tm5v4ehgmn9wsh82mnfwejhyum99ekxjemgw3hxjmn89enxjmnpde3k2w33xqcrywgs46ngn into wallet, watching p2tr address tb1pc32y4jpleu5znfx94kueu9qkwa6k0dalvkxcr3jf4qt6qgsmvv9shvazzc on chain
So the receiver node doesn't know it has to watch the chain for a transfer and therefore also didn't detect it and query for the proof.
May have been fixed between rc2 and rc3?
Unlikely, commit b489d04 was more or less rc3.
I mean as part of #594
Allow me to attach the full receiver logs, which include the log line that the address was created:
2023-10-18 02:55:16.772 [INF] RPCS: [NewAddr]: making new addr: asset_id=f2bdc9a75c4269eefc06bfb3762bb47964614aa612800cabe16eee4ca63a3c13, amt=21
2023-10-18 02:55:16.811 [WRN] GRDN: Taproot addr tb1plljn8zlns686wh9qv73xzvz9xf8l3n2d7qt856t7sv23wamhre2spcms20 was already added to wallet before, skipping
2023-10-18 02:55:16.811 [INF] GRDN: Imported Taproot Asset address taptb1qqqsqqspqqzzpu4aexn4csnfam7qd0anwc4mg7tyv992vy5qpj47zmhwfjnr50qnq5ssx424cy357lfll77gdmx3r2um33phj7d6yx76rv5w5admfknujvenqcssy5czsf4akh3y6yhsn6mp3t5rpt7tfrat9znegl9q52z52ldfnv28pqss9qaejy86wuuaf203n0c6nnvnhxeqmh0uxa6qanmnfr2twjsaeel4pgq32rpkw4hxjan9wfek2unsvvaz7tm5v4ehgmn9wsh82mnfwejhyum99ekxjemgw3hxjmn89enxjmnpde3k2w33xqcrywgedu59a into wallet, watching p2tr address tb1plljn8zlns686wh9qv73xzvz9xf8l3n2d7qt856t7sv23wamhre2spcms20 on chain
Unable to reproduce, here's my node on testnet receiving a newly created asset:
"id": {
"group_key": "03704851465720a57693663568e95e315cddab9c413406565eb4f203c3766c6406",
"proof_type": "PROOF_TYPE_UNSPECIFIED"
},
"leaf_key": {
"op": {
"hash_str": "61a8a0062f9d1dd7cf14d817504e90e98922c56da600cef5c2077a37dea411dd",
"index": 0
},
"script_key_bytes": "028f15f1e60c4742f72385595efe979ab0b663bd8a67b0a3687ac565fb12d38cf7"
}
}
2023-10-18 13:55:03.760 [INF] PROF: fetching using: %!(EXTRA string=(universerpc.UniverseKey) id:{group_key:"\x03pHQFW \xa5v\x93f5h\xe9^1\\\x9xcA4\x06V^\xb4\xf2\x03\xc3vld\x06"} leaf_key:{op:{hash_str:"61a8a0062f9d1dd7cf14d
817504e90e98922c56da600cef5c2077a37dea411dd"} script_key_bytes:"\x02\x8f\x15\xf1\xe6\x0cGB\xf7#\x85Y^\xfe\x97\x9a\xb0\xb6c\xbd\x8ag\xb0\xa3hz\xc5e\xfb\x12ӌ\xf7"}
)
2023-10-18 13:55:03.828 [DBG] GRDN: Received proof for: script_key=0298a8d35fb7fc876df9e55812ee5b67d2a3b8485f856812390af06d3268e98bff, asset_id=cf636f5435640a2ea1da9e10c1c09b7c08d1afda8b58e5c18f08c09c8f732eeb
2023-10-18 13:55:03.837 [DBG] PROF: Deriving commitment by asset exclusion
2023-10-18 13:55:03.851 [INF] GRDN: Received new proof file, version=0, num_proofs=2
2023-10-18 13:55:03.852 [INF] GRDN: Watching new proof anchor TX 2051e9e9cb4cbf7acb1786194fd3857086d2a722c3be148e368c7ec78f0bd7da for 1 assets until it reaches 6 confirmations
2023-10-18 13:55:03.855 [DBG] GRDN: Anchor TX 2051e9e9cb4cbf7acb1786194fd3857086d2a722c3be148e368c7ec78f0bd7da was confirmed at height 2534006 (block_hash=000000000000005550fe6efd2350ea7199e3978bd5a130ec71187a7275a4167d), c
hecking if 1 proof(s) need to be updated
2023-10-18 13:55:03.855 [DBG] GRDN: Anchor TX 2051e9e9cb4cbf7acb1786194fd3857086d2a722c3be148e368c7ec78f0bd7da was already confirmed in block 2534006, ignoring confirmation for block 000000000000005550fe6efd2350ea7199e3978b
d5a130ec71187a7275a4167d
This should be fixed by https://github.com/lightninglabs/taproot-assets/issues/512 and https://github.com/lightninglabs/taproot-assets/issues/578
Hi @Roasbeef before https://github.com/lightninglabs/taproot-assets/issues/512 and https://github.com/lightninglabs/taproot-assets/issues/578 are merged is there a workaround for this please ?
@snow884 IIUC, this can happen due to a timing issue between the sender and receiver. By next week we should have a fix up, if not by the end of this week. IIUC, restarting the receiver should be a short term fix. We're looking into other temporary mitigation solutions.
@snow884 you can run your recipient daemon with --proofcourieraddr=hashmail://mailbox.terminal.lightning.today:443 to use the old proof courier method, which shouldn't have that issue. You'll need to generate new receive addresses though, as the courier URI is encoded in the Taproot Asset address.
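The workaround above can be sketched as a short shell session. The daemon flag and the hashmail URI are taken from this thread; the asset ID is a placeholder, and your tapd will likely need its usual lnd connection flags as well — treat this as a sketch, not a verified invocation:

```shell
# Restart the RECEIVER's tapd with the legacy hashmail proof courier
# (flag value taken from the comment above; add your usual lnd flags).
tapd --network=testnet \
     --proofcourieraddr=hashmail://mailbox.terminal.lightning.today:443

# The courier URI is baked into each address, so addresses generated
# earlier keep the old courier. Generate a fresh receive address:
tapcli --network=testnet addrs new \
    --asset_id <asset_id_hex> \
    --amt 21
```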
@guggero thank you very much.
Your fix worked. That means, though, that all invoices submitted to the daemon have to be generated with proofcourieraddr=hashmail://mailbox.terminal.lightning.today:443, correct?
Yes, that's correct. If you're using gRPC or REST to generate the address, you can also use the proof_courier_addr field, then you don't need to start the whole daemon that way. We should definitely also add that field to the CLI.
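As a sketch of the per-address variant: the proof_courier_addr field appears verbatim in the address JSON later in this thread, so it should be settable on the NewAddr call. The REST path, port, and macaroon header below are my assumptions about tapd's REST mapping, not confirmed in this thread:

```shell
# Hypothetical REST call: the path, port, and header are assumptions;
# the proof_courier_addr field name matches the JSON shown in this thread.
curl -s -X POST --cacert ~/.tapd/tls.cert \
  -H "Grpc-Metadata-macaroon: $(xxd -ps -u -c 1000 ~/.tapd/data/testnet/admin.macaroon)" \
  https://localhost:8089/v1/taproot-assets/addrs \
  -d '{
        "asset_id": "<asset_id>",
        "amt": "21",
        "proof_courier_addr": "hashmail://mailbox.terminal.lightning.today:443"
      }'
```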
I have an asset stuck in "transmitted" list (tapcli assets t). Restarting sender or receiver node does not help. Any way to nudge it?
First step I'd try is pulling from the latest master, as there are some important bug fixes in there that seemed to have worked for me
I built yesterday from source, nothing was changed
@Impa10r have you been able to try v0.3.1 rc1 yet?
Just tried. Nothing changed. The asset is still in 'transfers' list on node 1, not moving to node 2. I synced both nodes to the same universe testnet.universe.lightning.finance. Not sure how we are supposed to nudge it in such a situation. This is the tx on blockchain.
@Impa10r have you updated to the final 0.3.1 release, are things unstuck after that?
Hi, thanks, no, the same (((
$ tapcli getinfo
{
"version": "0.3.1-alpha commit=v0.3.1",
"lnd_version": "0.17.2-beta",
"network": "testnet3",
...
$ tapcli u f l
{
"servers": [
{
"host": "testnet.universe.lightning.finance:10029",
"id": 1
},
{
"host": "testnet.universe.lightning.finance",
"id": 2
}
]
}
$ tapcli2 u f l
{
"servers": [
{
"host": "testnet.universe.lightning.finance:10029",
"id": 1
},
{
"host": "localhost:10029",
"id": 2
},
{
"host": "testnet.universe.lightning.finance",
"id": 3
}
]
}
$ tapcli a t
{
"transfers": [
{
"transfer_timestamp": "1697809690",
"anchor_tx_hash": "d6b93f524cfc0f2bf048374b6fff084d6d520ae6b8838c828e5883bad12663b2",
"anchor_tx_height_hint": 2534369,
"anchor_tx_chain_fees": "257",
"inputs": [
{
"anchor_point": "ffbe053e69a48c23c04668356481f3193ce982275015ad861a2bbe97785931e3:1",
"asset_id": "98fd484cfcaef1efb604bfdae24d39a30b8a987bf82dd0b5c5dcb6c4aced55e0",
"script_key": "0213c74fbd254335608bb244be5073d3ab809e31830ae95447b807272bef51e307",
"amount": "1"
}
],
"outputs": [
{
"anchor": {
"outpoint": "b26326d1ba83588e828c83b8e60a526d4d08ff6f4b3748f02b0ffc4c523fb9d6:0",
"value": "1000",
"internal_key": "03f421023fbc1eabf084800c41ffc682d423b9c9dd0be6d67d7ac907f58f95486c",
"taproot_asset_root": "f45c24d88ce5816cf861dd7df7c8223ef3ac4eb9d2a1d660f84d9a382bb4e19f",
"merkle_root": "f45c24d88ce5816cf861dd7df7c8223ef3ac4eb9d2a1d660f84d9a382bb4e19f",
"tapscript_sibling": "",
"num_passive_assets": 0
},
"script_key": "027c79b9b26e463895eef5679d8558942c86c4ad2233adef01bc3e6d540b3653fe",
"script_key_is_local": false,
"amount": "0",
"new_proof_blob": "5441505...000000",
"split_commit_root_hash": "df1b4307d39372fea318599ba8509dbbbd7204f04face51fd7b590605f060e3e",
"output_type": "OUTPUT_TYPE_SPLIT_ROOT",
"asset_version": "ASSET_VERSION_V0"
},
{
"anchor": {
"outpoint": "b26326d1ba83588e828c83b8e60a526d4d08ff6f4b3748f02b0ffc4c523fb9d6:1",
"value": "1000",
"internal_key": "0301a86d6c274d27a2038b7a53fceec6da730e8adb8cee798d5947e69f3ca19c4f",
"taproot_asset_root": "2cbbc93f360abf4aac27ec5083458aa54ccc340bd4ed30195972e44727248947",
"merkle_root": "2cbbc93f360abf4aac27ec5083458aa54ccc340bd4ed30195972e44727248947",
"tapscript_sibling": "",
"num_passive_assets": 0
},
"script_key": "02c79b0931de935316dfc8131529d42086e1ad5a41f8d881c3c2dee19a008460af",
"script_key_is_local": false,
"amount": "1",
"new_proof_blob": "544150...400000000",
"split_commit_root_hash": "",
"output_type": "OUTPUT_TYPE_SIMPLE",
"asset_version": "ASSET_VERSION_V0"
}
]
}
]
}
$ tapcli2 a l
{
"assets": []
}
@Impa10r what version of tapd was the address created with that was used in this transfer? It could be that it didn't specify a proof courier, so it won't ever transmit automatically (for this old, existing transfer). Could you please confirm that by showing the output of tapcli addrs query?
v0.3.0-alpha
$ tapcli2 addrs query
{
"addrs": [
{
"encoded": "taptb1qqqsqqspqqzzpx8afpx0eth3a7mqf076ufxnngct32v8h7pd6z6uth9kcjkw640qqcss93umpycaay6nzm0usyc4982zpphp44dyr7xcs8pu9hhpngqggc90pqssxqdgd4kzwnf85gpck7jnlnhvdknnp69dhr8w0xx4j3lxnu72r8z0pgqszrpkw4hxjan9wfek2unsvvaz7tm5v4ehgmn9wsh82mnfwejhyum99ekxjemgw3hxjmn89enxjmnpde3k2w33xqcrywgslc2pq",
"asset_id": "98fd484cfcaef1efb604bfdae24d39a30b8a987bf82dd0b5c5dcb6c4aced55e0",
"asset_type": "COLLECTIBLE",
"amount": "1",
"group_key": "",
"script_key": "02c79b0931de935316dfc8131529d42086e1ad5a41f8d881c3c2dee19a008460af",
"internal_key": "0301a86d6c274d27a2038b7a53fceec6da730e8adb8cee798d5947e69f3ca19c4f",
"tapscript_sibling": "",
"taproot_output_key": "1ef95e7d37969350afeeea0fd2515d17cbab33563694040fe8c6b613ebba0444",
"proof_courier_addr": "universerpc://testnet.universe.lightning.finance:10029",
"asset_version": "ASSET_VERSION_V0"
}
]
}
$ tapcli2 addrs r
{
"events": [
{
"creation_time_unix_seconds": "1697809690",
"addr": {
"encoded": "taptb1qqqsqqspqqzzpx8afpx0eth3a7mqf076ufxnngct32v8h7pd6z6uth9kcjkw640qqcss93umpycaay6nzm0usyc4982zpphp44dyr7xcs8pu9hhpngqggc90pqssxqdgd4kzwnf85gpck7jnlnhvdknnp69dhr8w0xx4j3lxnu72r8z0pgqszrpkw4hxjan9wfek2unsvvaz7tm5v4ehgmn9wsh82mnfwejhyum99ekxjemgw3hxjmn89enxjmnpde3k2w33xqcrywgslc2pq",
"asset_id": "98fd484cfcaef1efb604bfdae24d39a30b8a987bf82dd0b5c5dcb6c4aced55e0",
"asset_type": "COLLECTIBLE",
"amount": "1",
"group_key": "",
"script_key": "02c79b0931de935316dfc8131529d42086e1ad5a41f8d881c3c2dee19a008460af",
"internal_key": "0301a86d6c274d27a2038b7a53fceec6da730e8adb8cee798d5947e69f3ca19c4f",
"tapscript_sibling": "",
"taproot_output_key": "1ef95e7d37969350afeeea0fd2515d17cbab33563694040fe8c6b613ebba0444",
"proof_courier_addr": "universerpc://testnet.universe.lightning.finance:10029",
"asset_version": "ASSET_VERSION_V0"
},
"status": "ADDR_EVENT_STATUS_TRANSACTION_CONFIRMED",
"outpoint": "b26326d1ba83588e828c83b8e60a526d4d08ff6f4b3748f02b0ffc4c523fb9d6:1",
"utxo_amt_sat": "1000",
"taproot_sibling": "",
"confirmation_height": 2534370,
"has_proof": false
}
]
}
Hmm, okay, that looks good so far.
Could you post the full log after a restart of both tapds please? You can upload them as text files directly through GitHub.
Thanks for the logs. Are you sure that's the complete log for tapd1? Because it looks like it received the proof and then just stops doing anything... Is there high CPU usage? And are you running with loglevel=trace? Perhaps this is just one of the logging infinite loops we fixed...
2023-12-05 17:58:52.103 [INF] PROF: Starting proof transfer backoff procedure for proof (transfer_type=receive, locator_hash=1eea8ff2d8faff7f4cef93f9bb23a0334b0195b123fc263d78e20f9699b245b1)
2023-12-05 17:58:52.309 [INF] PROF: Starting proof transfer backoff procedure for proof (transfer_type=receive, locator_hash=f7874ef86f8e05f4c5dd2e57bbf00d1778476cfb09e56c811432ec8addb290b4)
2023-12-05 17:58:52.510 [DBG] GRDN: Received proof for: script_key=0213c74fbd254335608bb244be5073d3ab809e31830ae95447b807272bef51e307, asset_id=98fd484cfcaef1efb604bfdae24d39a30b8a987bf82dd0b5c5dcb6c4aced55e0
2023-12-05 17:58:52.517 [DBG] PROF: Deriving commitment by asset exclusion
2023-12-05 17:58:52.525 [INF] GRDN: Received new proof file, version=0, num_proofs=2
Here I waited longer, and the logs continued:
Can you try with debuglevel=debug or debuglevel=info and see if that changes something?
I ran with debug before, now with trace. What is the deepest level? The asset still does not move )
Okay, can you try with debuglevel=info please? My suspicion is that a trace or debug log statement is causing your issue.
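For reference, the log level can also be set persistently in the config file instead of on the command line. A minimal sketch, assuming the default config location and that the option name matches the debuglevel flag discussed here:

```ini
; ~/.tapd/tapd.conf (assumed default location on Linux)
; Reduce logging verbosity to rule out the trace/debug logging loop:
debuglevel=info
```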
No change. Asset is still stuck in transfer.
Hmm, okay. Thanks for trying. I'll dig deeper into the log file then.
Finally got to taking a closer look. So the proof was successfully uploaded to the universe by the sender. And I figured out why the receiver isn't trying to pull it anymore. I'll create a fix momentarily.
Sorry, I lost interest in TA testing and the issue was closed unresolved for me. Out of curiosity, I just upgraded lnd to 0.17.99-beta and tapd to 0.3.2-alpha. Nothing changed, the assets are still stuck in the tapcli a t list ((
I also have several 1000 sats utxos in lnd wallet that cannot be spent: input 0 not found in list of non-locked UTXO
Rescanning the wallet did not help. I don't see these utxos among the anchors of the stuck assets, but they were created around the time I was testing the TA, so I think these are related.
I further discovered that the lnrpc/walletrpc/PublishTransaction endpoint locks UTXOs in such a way that they don't show up in lncli wallet listleases and cannot be unlocked with lncli wallet releaseoutput. I think this is what happened to my unspendable UTXOs.
The issue was closed because we merged a PR that should prevent this situation in the future. To resolve your issue some manual intervention might be necessary.
Can you please update tapd to the latest version on main and try again? If it doesn't work, the new log on the receiver side would help to figure out what the next (manual) steps would be.
I also have several 1000 sats utxos in lnd wallet that cannot be spent: input 0 not found in list of non-locked UTXO
Yes, those are UTXOs that carry assets. They can't be spent by lnd as it doesn't have all the signing information. You would need to destroy the assets they are carrying to unlock the sats, which also isn't really supported yet by tapd (you could do it with some PSBT trickery but not with an official RPC). How many UTXOs/sats are we talking about? Perhaps you can reduce the amount by just consolidating all asset-carrying UTXOs into a single one?
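One possible consolidation sketch, using only commands that appear elsewhere in this thread. It assumes that sending an asset's full balance to one of your own addresses re-anchors it in a single fresh UTXO; treat it as an idea to verify on testnet first, not a confirmed procedure:

```shell
# Create an address to yourself for the full balance of one asset
# (<asset_id> and <total_amt> are placeholders).
tapcli addrs new --asset_id <asset_id> --amt <total_amt>

# Send the asset to that address; the outputs end up anchored in
# fresh UTXOs, potentially reducing the number of 1000-sat anchors.
tapcli assets send --addr <taptb1...address_from_above>
```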
Ok, understood. These are 4 UTXOs on testnet, so no problem if it will not repeat in the future.
tapcli version 0.3.2-alpha commit=v0.3.2-464-gc8978aaa did nothing. Both sender and receiver logs look like those attached above, no new insights...
Did you try v0.3.3? That's the latest release and also includes some of the fixes mentioned.
v0.3.2-464-gc8978aaa is current main, so that implies even newer than v0.3.3 (it's just that the v0.3.3 tag was made from a branch, which is why the git name doesn't include that tag).
$ git clone -b v0.3.2-464-gc8978aaa --recurse-submodules https://github.com/lightninglabs/taproot-assets.git
Cloning into 'taproot-assets'...
fatal: Remote branch v0.3.2-464-gc8978aaa not found in upstream origin
Tried v0.3.3:
vlad@vlad-VirtualBox:~/taproot-assets$ tapcli --version
tapcli version 0.3.3-alpha commit=v0.3.3
vlad@vlad-VirtualBox:~/taproot-assets$ journalctl -fu tapd
may 01 21:06:43 vlad-VirtualBox systemd[1]: tapd.service: Consumed 5.782s CPU time.
may 01 21:06:43 vlad-VirtualBox systemd[1]: Started Taproot Assets Daemon.
may 01 21:06:43 vlad-VirtualBox tapd[34638]: 2024-05-01 21:06:43.581 [INF] CONF: Attempting to establish connection to lnd...
may 01 21:06:43 vlad-VirtualBox tapd[34638]: 2024-05-01 21:06:43.626 [INF] CONF: lnd connection initialized
may 01 21:06:43 vlad-VirtualBox tapd[34638]: 2024-05-01 21:06:43.626 [INF] CONF: Opening sqlite3 database at: /home/vlad/.tapd/data/testnet/tapd.db
may 01 21:06:43 vlad-VirtualBox tapd[34638]: 2024-05-01 21:06:43.634 [INF] TADB: Applying migrations from version=17
may 01 21:06:43 vlad-VirtualBox tapd[34638]: 2024-05-01 21:06:43.637 [INF] TADB: error: no migration found for version 17: read down for version 17 sqlc/migrations: file does not exist
may 01 21:06:43 vlad-VirtualBox tapd[34638]: error creating server: unable to generate server config: unable to open database: no migration found for version 17: read down for version 17 sqlc/migrations: file does not exist
may 01 21:06:43 vlad-VirtualBox systemd[1]: tapd.service: Main process exited, code=exited, status=1/FAILURE
may 01 21:06:43 vlad-VirtualBox systemd[1]: tapd.service: Failed with result 'exit-code'.
I cannot find instructions on how to migrate the database?
I was replying to @jharveyb, sorry for the confusion. You're already running the latest main, so you can't run v0.3.3 as that would be a downgrade (which is why the DB migration complains).
Background
I minted a normal (--emission_enabled) asset on the SENDER node. I synced the RECEIVER node to the universe, created an address, and sent funds from the SENDER node. Upon confirmation, both nodes sync to the universe, but proof files do not seem to get transmitted, and the RECEIVER does not show the assets in tapcli assets list or tapcli assets balance.
Your environment
Steps to reproduce
1. tapcli assets mint --type normal --name leocoin --supply 21000 --enable_emission
2. tapcli2 addrs new --asset_id f2bdc9a75c4269eefc06bfb3762bb47964614aa612800cabe16eee4ca63a3c13 --amt 21
3. tapcli assets send --addr taptb1qqqsqqspqqzzpu4aexn4csnfam7qd0anwc4mg7tyv992vy5qpj47zmhwfjnr50qnq5ssx424cy357lfll77gdmx3r2um33phj7d6yx76rv5w5admfknujvenqcssy5czsf4akh3y6yhsn6mp3t5rpt7tfrat9znegl9q52z52ldfnv28pqss9qaejy86wuuaf203n0c6nnvnhxeqmh0uxa6qanmnfr2twjsaeel4pgq32rpkw4hxjan9wfek2unsvvaz7tm5v4ehgmn9wsh82mnfwejhyum99ekxjemgw3hxjmn89enxjmnpde3k2w33xqcrywgedu59a
Expected behavior
Once the transaction is confirmed I expect the sender to transmit their proofs to the recipient, who then knows about the assets.
Actual behavior
No sync of proofs appears to happen.
Logs of both sender and receiver:
sender_logs.txt receiver_logs.txt