ordinals / ord

Indexer ignores inscriptions that are not on first input of TX, but otherwise valid (missing inscription sequence numbers) #2000

Closed · veryordinally closed this issue 1 year ago

veryordinally commented 1 year ago

While working on #783, @raphjaph and @veryordinally discovered a bug in the indexer that affects versions of ord up to and including 0.5.1: inscriptions that are not made on the first input are ignored during indexing. This was due to a simplifying assumption in src/inscription.rs, where only the first input was parsed for inscriptions. We fixed that with this commit as part of draft PR #1963.
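
For illustration only, here is a minimal sketch of the difference between the old and fixed behaviour. This is not the actual ord source; `Inscription` and `from_witness` are simplified stand-ins for the types in src/inscription.rs.

```rust
// Sketch, not the actual ord code.
use bitcoin::{Transaction, Witness};

struct Inscription {
    body: Vec<u8>,
}

impl Inscription {
    // Stand-in for the real parser, which walks the tapscript envelope
    // (OP_FALSE OP_IF ... OP_ENDIF) inside the witness.
    fn from_witness(_witness: &Witness) -> Option<Inscription> {
        unimplemented!()
    }
}

// Behaviour up to and including ord 0.5.1: only the first input is examined.
fn parse_first_input_only(tx: &Transaction) -> Vec<Inscription> {
    tx.input
        .first()
        .and_then(|txin| Inscription::from_witness(&txin.witness))
        .into_iter()
        .collect()
}

// Fixed behaviour: the witness of every input is examined.
fn parse_all_inputs(tx: &Transaction) -> Vec<Inscription> {
    tx.input
        .iter()
        .filter_map(|txin| Inscription::from_witness(&txin.witness))
        .collect()
}
```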

I discussed this with @raphjaph and we suggest disentangling this fix from the larger collections and provenance topic, and also addressing it sooner, since it will cause inscription numbers to change: indexers with this fix applied will index the inscriptions that the previous indexer skipped. I will compile and add a list of the inscriptions that are currently ignored to this issue.

nothing0012 commented 1 year ago

I think this belongs to the consensus layer of the ord protocol. Should we have an activation block height for this change? So that:

Before block height X: only the first input is parsed for inscriptions.
After block height X: all inputs are considered.

And we give the community enough time to upgrade before block height X.
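
A rough sketch of what such a height-gated rule could look like; the constant and function below are illustrative only, not part of ord, and no activation value has been agreed.

```rust
// Placeholder activation height for illustration only.
const ACTIVATION_HEIGHT: u32 = 800_000;

// How many inputs of a reveal tx would be scanned for inscriptions at a given
// block height under the proposed height-gated rule.
fn inputs_to_scan(block_height: u32, input_count: usize) -> usize {
    if block_height < ACTIVATION_HEIGHT {
        input_count.min(1) // old rule: only the first input
    } else {
        input_count // new rule: every input
    }
}
```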

veryordinally commented 1 year ago

That seems like a good general mechanism for these kinds of updates/fixes to me. It would also be an option for collections/provenance. We should probably open an issue to implement an activation block height mechanism if we want to adopt this. @raphjaph, what do you think? The downside is that it adds, in some sense, arbitrary constraints on what gets parsed as an inscription and what doesn't, which makes the protocol messier.

gmart7t2 commented 1 year ago

While working on #783 @raphjaph and @veryordinally discovered a bug in the indexer that affects versions of ord up to and including 0.5.1: inscriptions that are not made on the first input are ignored during indexing

That is how the docs say it should be:

https://docs.ordinals.com/inscriptions.html

(screenshot of the docs)

The docs and the code agree that only the first input/output is an inscription

that it adds, in some sense, arbitrary constraints on what gets parsed as an inscription

The whole thing is already arbitrary. Adding an extra constraint that allows multiple inscriptions per reveal tx after a certain block height is no different.

timechainz commented 1 year ago

The docs and the code agree that only the first input/output is an inscription

@gmart7t2 If I'm not mistaken, the docs you highlight refer to the first output, whose first sat receives the inscription, but the aforementioned bug relates to the INPUT of the reveal transaction. The code was only looking in the first input. That's not consistent with the docs, which don't specify that it must be the first input. I believe that is why @veryordinally is referring to it as a bug. (screenshot of the docs)

gmart7t2 commented 1 year ago

Is it meant to be possible to create multiple inscriptions per reveal tx? There are a few examples of people trying to make multiple inscriptions with a single reveal tx:

gmart7t2 commented 1 year ago

@gmart7t2 If I'm not mistaken, the docs you highlight refer to the first output on which the first sat contains the inscription

It says "the input of a reveal tx", suggesting that reveal transactions only have one input.

Are you suggesting that if there is an inscription in the 5th input of a reveal tx then the inscription should still be assigned to the first sat of the first output? It would seem more natural to apply ordinal theory and assign the inscription to whatever sat in the output corresponds to the start of the inscribed input. Then txs like 84c5ab8d08c769274213cb7554454a30a9d5c4a77ffee7fffdc8e11d6292568ci0 would make sense, creating 2 inscriptions with 1 reveal tx.
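
A rough sketch of that sat-flow assignment, with made-up values and fees ignored: an inscription found on input N would be mapped to the output sat at the cumulative offset where input N's sats begin.

```rust
// Toy illustration only: offset in the outputs where input `n`'s sats start.
fn offset_of_input(input_values: &[u64], n: usize) -> u64 {
    input_values[..n].iter().sum()
}

fn main() {
    // Hypothetical reveal tx with three inputs of 10_000, 546 and 8_000 sats.
    let inputs = [10_000u64, 546, 8_000];
    // An inscription on the third input (index 2) would, under this reading,
    // land 10_546 sats into the transaction's outputs.
    assert_eq!(offset_of_input(&inputs, 2), 10_546);
}
```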

timechainz commented 1 year ago

I'm not saying anything about the outputs. I'm simply saying that the fix this issue addresses is related to input selection: it picks up the first inscription it finds across the inputs rather than only looking in the first input.

I think your feedback regarding multiple inscriptions in a tx is more applicable to #876

DrJingLee commented 1 year ago

Will the reindexing affect some special numbers, such as inscriptions 500,000, 1,000,000, 99,999, 88,888, etc., and numbers close to boundaries such as 99, 101, 999, 1,000, 9,999, 10,000, etc.? If so, maybe it would be good and fair to leave things as they are and apply the fix from a certain block height later on, treating the bug as force majeure...

BennyTheDev commented 1 year ago

I like the negative ordinal number approach that I believe has been mentioned, as it would be a win-win: being in the negative range makes yours special, and since literally everyone is already relying on the positive numbers, those would stay the same.

timechainz commented 1 year ago

I’ve thought a lot about this and it came up in a twitter space last night. For the record I have biases in both directions as I hold some inscriptions with nice round numbers and also valid inscriptions that are not showing. These are my current thoughts.

There are two concerns that should be addressed independently:

A transaction witness that has a valid inscription envelope as presently defined by the protocol should be considered a valid inscription. Pretending that valid inscriptions aren’t there because a sat is double-inscribed or before a certain block height seems very fiat and conflicts with the project ethos that Casey established from early on. My preference would be that regardless of what decisions are made on numbering, valid inscriptions should be present in the index, recognized, available on the site by inscription id, etc.

Numbering is a much more difficult question. I agree that it’s more a consensus layer than protocol layer function. There have already been at least a few incidents that have affected numbering (double-inscribed sats, spent as fees, inscriptions that are not on the first input) and there will likely be others. Per Casey, “this ship has sailed in the sense that people are attached to the existing inscription numbers”. While I think that’s a difficult promise to keep in perpetuity I understand the sentiment and think it’s the correct decision. That said, this increases the likelihood of a fork due to decisions made relating to the handling of how and when bugs should be fixed. There’s already (largely tongue-in-cheek) “orthodox inscription numbers” (lol) chatter happening on Twitter so I think it’s important to address the issue sooner rather than later.

@casey mentions the concept of hidden inscriptions in #1455 relating to the double-inscribed sats that are skipped:

Surfacing the hidden ordinals one day as an easter egg, maybe as nega inscriptions with negative numbers that count down from 0

I like that concept but would avoid a negative numbering system because it could suffer from the same renumbering issue, in reverse. People will get attached to the nega numbers again. They can just exist in eternal purgatory (rather than current state of damnation). A dumping ground for all past and future inscriptions that are valid but didn’t qualify for numbering inclusion for some reason.

My take is that the numbering system should be maintained (block height activation) but that surfacing hidden inscriptions without numbering should be prioritized so that unnumbered inscriptions are addressable by id.

Psifour commented 1 year ago

The optimal solution (for this, and future incidents) is one that:

My proposal that addresses all of these is the creation of a flag system. This could be implemented as a set of bit flags on the high-order byte of the inscription number (number: u64 split into flags: u8 and number: u56), as a new field (flags: u8/u16/u32), or via some other arbitrary scheme (this is a technical decision, independent of the feasibility of the concept).
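
A minimal sketch of the bit-packing this describes; the field layout and flag name below are hypothetical, not part of ord.

```rust
// Hypothetical layout: top byte of a u64 holds flags, low 56 bits hold the
// sequence number an inscription "would have been".
const FLAG_NOT_FIRST_INPUT: u8 = 0b0000_0001;

fn pack(number: u64, flags: u8) -> u64 {
    assert!(number < (1u64 << 56), "number must fit in 56 bits");
    ((flags as u64) << 56) | number
}

fn unpack(packed: u64) -> (u64, u8) {
    (packed & ((1u64 << 56) - 1), (packed >> 56) as u8)
}

fn main() {
    let packed = pack(1_000_000, FLAG_NOT_FIRST_INPUT);
    assert_eq!(unpack(packed), (1_000_000, FLAG_NOT_FIRST_INPUT));
}
```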

This would enable us to assign the number that an inscription 'would have been' with flags set to distinguish it from the standard numbering system. This satisfies each of the conditions I presented above (as I will demonstrate below):

tonilainemusic commented 1 year ago

i think the inscriptions should be added, and the inscriptions re-numbered.

If you don't re-number, you will set up a situation where people will go through and search for the 'real' 500k - the real 1 million. And probably somewhere down the line a correction to the 'real' numbers.

I say just fix it now.

Magicking commented 1 year ago

The numbering cannot be changed, as it would break applications living on this parallel Bitcoin*.

I like the flags idea; it introduces the numbering in another dimension. I think of it as a kind of versioning over a soft fork of the ord protocol.

*One caveat: @veryordinally mentioned on Twitter that inscription ordering is brittle, which I understand to mean it is the result of prior computation and thus subject to change if that computation changes.

I would recommend that it be specified somewhere whether inscription numbering is meant to be part of the Ordinals protocol or not; since it's part of the reference implementation, I think this needs clarification.

All of that said, considering how young Ordinals is versus the years of maintenance ahead of it, a simple rule such as "consider all inputs for inscription numbering after block X", as proposed above, would be the easiest-to-implement and most understandable path forward. Both the specification and the implementation would need to be updated.

satoshi0770 commented 1 year ago

I believe we should address two points:

1/ Hidden inscriptions that were reinscribed on the same sat should be indexed and shown on the main explorer. This could use negative numbers: -0 for the first inscription made on a sat that already had an inscription, -1 for the second such inscription, etc.

2/ Parent/child inscription tests, txs that have multiple reveals, or txs where the reveal is not on the first input: I think the best way to fix this and keep compatibility is to follow a strict ruleset - the first input / first output should be the new inscription, the parent should be the 2nd input and 2nd output, and we keep one revealed inscription per tx. If I'm not mistaken, this would not require any change to numbering.

That leaves the question of what to do with inscriptions that were revealed on any input other than the first. I'm not sure of the best way to address that. We could completely ignore them for not following the consensus that the reveal is the first input and the new inscription is the first output, but I'm open to other ideas on how to treat transactions that did not follow the ord client consensus and possibly (up for debate) the social consensus (which gm quoted from the docs).

BennyTheDev commented 1 year ago

I fully support @satoshi0770's suggestion

yilakb commented 1 year ago

If the inscription numbers are altered for any reason, regardless of the circumstances, all inscriptions will be deemed shitcoins by consensus, resulting in the end of this experiment.

It is my hope that no one would dare to jeopardize the interests of the 1,062,727 inscription owners for the sake of a mere 1,206 inscriptions. It would be unwise to upset the majority for such a small minority.

I believe it would be prudent to establish a dedicated indexer or side indexer that tracks inscriptions beyond the first input. This approach would allow for a more comprehensive and inclusive counting of inscriptions, without posing a risk to the integrity of the entire project.

Psifour commented 1 year ago

It is my hope that no one would dare to jeopardize the interests of the 1,062,727 inscription owners for the sake of a mere 1,206 inscriptions. It would be unwise to upset the majority for such a small minority.

I would posit the exact opposite: those of us who don't want to change the numbers but also see the value in the 1,206 missing inscriptions are likely the majority. The creation of a completely external indexer that uses different standards is a much larger threat, and is exactly the outcome that all non-malicious interested parties are trying to avoid by discussing these changes first.

As for my current stance, I am opposed to injecting any additional ordinals (the mathematical concept) into the primary index below block height 784529. This would ensure the first million remain unshifted (minimizing the harm of moving targeted projects). Towards this end, I believe a flags system (proposed above) could be used to avoid the performance drawbacks for the 'orphaned' inscriptions, and we could then index all occurrences after said block height. This issue was public prior to 1M inscriptions, so I believe it is completely fair (maybe even excessive) to target the block following that milestone.

swapski commented 1 year ago

Why wouldn't we want to re-# ?

Sure short term it may hurt some feelings, but we all knew this was low key experimental right ? We as a community have made Ordinals what they are today, but lets not forget what it is built on. re-# is the most Bitcoin way

Having said that I'm not the one putting my blood, sweat & tears into this on the backend. 😅 So I have no reservations or exceptions on when they should fix / deliver this. ⚒️ I'mma just let them cook 👨‍💻 👩‍💻 lets take a moment to give them their flowers 💐

To be fair, I can see the other side, but not in such a way that re-# will be a detriment long term. Ordinals are here to stay imo & if you think the future weighs on this decision, GG. I heard some cool ideas floating around but ultimately, I think we know what we need to do 🔘

Just my 2 sats

-swapski

gmart7t2 commented 1 year ago

As I see it the best solution is to pick an arbitrary inscription in the future, far enough ahead that people have time to update their software, and insert the missing inscriptions at that point.

We don't want to reward people who inscribed on inputs other than the first with "special" negative inscription numbers or we will see a spate of such inscriptions being made just to get the special low numbers.

So pick a number, say 1,543,606, and when we reach that number add in the missing 200 or so inscriptions at that point.

That way the inscriptions aren't lost and we don't encourage people to make weird inscriptions in an attempt to get special treatment.

gmart7t2 commented 1 year ago

It is my hope that no one would dare to jeopardize the interests of the 1,062,727 inscription owners for the sake of a mere 1,206 inscriptions

Where did that number come from?

When I searched last week there were only 26 reveal txs that inscribe on an input after vin 0.

23 of those inscribed on input 1 and no other input

2 of those inscribed on inputs 0 and 1 only

1 of those inscribed on inputs 0 through 115

Even if we count all of those as inscriptions that's only 23+2+115 = 140 missing inscriptions.

Have there been so many inscriptions on inputs after vin 0 in the last week that the total is now over 1000?

Edit: I just updated my index. There are 4 new transactions that make inscriptions after vin 0: 3 in block 784752 each make 200 inscriptions, and 1 in block 784754 makes 199 inscriptions.

That brings the total number of missing inscriptions to 140+795 = 935, at least 795 of which happened since this issue was opened.

gmart7t2 commented 1 year ago

A list of the transactions with inscriptions that don't currently get indexed:

    reveal txid                                                      block  #missing inscriptions
    ---------------------------------------------------------------- ------ ---------------------
 0: b884d0e57aea0b1444db21c35b977133f4cdf303dd134a684a3846e06bb73ddc 780309   1
 1: 0baae63004061634cf74dd4a017e8695c582257c0e369dce12f38ed1e0afb0ff 780310   1
 2: 6989f4d10654cd2a1092c63832f2aac84c6b4ac4c11bfe568f58561c607d7cd0 780311   1
 3: 96b4bc8b705df43f9dd172f0ebfd3ea7ad4ce164527b56142fd4c47f63349136 780315   1
 4: 7cfbcd1290d7e7a5c31e8524b40ab606043ff0de1b58915ec7e3726a4991222f 780813   1
 5: e54aa253f3e835a7e11c204bce708241811b0bb4d3c264104cf8374cefbec40b 780816   1
 6: bd5bd001c581b8f84be45143dd902d130959b25c904e48d51a7796c048b32857 780848   1
 7: 092111e882a8025f3f05ab791982e8cc7fd7395afe849a5949fd56255b5c41cc 781735 115
 8: 1899feac65385077a7403d75e28415c34f8fe7e9a145704bc3cb7bd3b2371bef 782040   1
 9: f98b8fb28cbf025d1c337705a0f2cedae2d6059ccf9a41ea24d03771d684bec5 782119   1
10: 4744d387a45780e08b57f0ce326cc9852b5e1c5ca7ab2d89a64dc0d9ee3b4d8b 782624   1
11: f36a1dec628e48df6f978db3bbdbcffa81d7c5f43f9c35d7c75ac697f5e84af1 782678   1
12: 3a4f227f35aeaf1f1f4c717da173bc9e1b31f69bfb361a98239eda026572d0d1 782680   1
13: c142a74fa187a6297113d6dc20b6aa72fb0df373ec160dc439ba5761fb7c652c 782681   1
14: 7e4f5fb6997033f6f492e3603a030ac693e9fef278b1d14b88bc1c50ea0c7780 782699   1
15: c091935fac88d301e1f5cbf34eae733fa20ed6e221ab3d35ad6bbaf0354d8350 782700   1
16: f45d4f49b44708c4f0c9472495496f046cee3bd948ef40c3128a30d91fe2069e 782701   1
17: 982c01c3e0807311a567e5402d75dadaf8f32a9a3ce00a7c9ce7b1f4ebee61e1 782824   1
18: 84c5ab8d08c769274213cb7554454a30a9d5c4a77ffee7fffdc8e11d6292568c 782827   1
19: c052b002116518fd7568c13a9902f6892181a8f216f1bc6247cc28a4083d3709 782829   1
20: 692c59d18a007fde419a29ce5bdb282f9758f89890190804c587e665f5bdec95 782837   1
21: 65abdc382ad46f2f7763392a2da31254b2a7aa6b027c875d2959753560b2e1e4 782866   1
22: 8b4b3c5ef4486e0838906f717204b4b26a0d1695e2a03673b1c6755cd9c9edda 783014   1
23: b593fecafe92cd1e00291610dbdc34919e3ebf1f5c54d37f71fa544245d0ac2f 783809   1
24: 70d9e7a113ff3e72599860c57365d7a7ab22aca01532a858703f597c27439e9b 783948   1
25: c634a1bfb6b09696646cbc133570248634e1f1467ab2f1bd55381d6a31560200 784071   1
26: a8a358d78754827c2f8c56dd47df032c968b805f3524f515dd8370878fd8651e 784752 199
27: 75f607b453ff61f63ed614c6476127f89687397488bbe6c49b151fd867f8ef5c 784752 199
28: 207776d0713fe5aa238d9efe12c39e63949612aa95635b295d807c18afe4d769 784752 199
29: 98b1e09b635a9e15234d7be03f03f6d38af2e98e389fe717cb723b0dae638928 784754 198
                                                                            ---
                                                                            935

ghost commented 1 year ago

A thought experiment: suppose it is 2140 and all the bitcoins have been mined. Then in 2150 someone says the Bitcoin code has a bug and 1,000 bitcoins were never indexed, so those 1,000 bitcoins are valid and the last 1,000 bitcoins mined by miners are invalid, including every transaction made with them in the meantime. How should that situation be handled? I think the answer is beyond doubt. Do we even need to discuss it?

ghost commented 1 year ago

Following up on the thought experiment above: can we really say that every transaction involving those last 1,000 mined bitcoins between 2140 and 2150 is invalid?

DrJingLee commented 1 year ago

A list of the transactions with inscriptions that don't currently get indexed: (table quoted above; 30 reveal transactions, 935 missing inscriptions in total)

So 935 is the exact number? I fully support the proposal/solution from @Psifour. But let's get a sample of how the community thinks and what their preferences are before taking any action. A 3-day poll in Chinese has been initiated on Twitter; let's see the results.

https://twitter.com/0xjingle/status/1645596640408133635?s=20

stet commented 1 year ago

Ordinal Theory is not about the order of inscription, so inscription numbers are not part of the core protocol; rather, they were a UI decision to add numbers based on the count and time indexed. In the spirit of what has been officially expressed in the Ordinals docs, we should be very careful here. Such statements as:

Some (snarky) comments I made on Twitter: https://twitter.com/sull/status/1645473812123615264 https://twitter.com/sull/status/1645478789089034241 https://twitter.com/sull/status/1645485002472267777

But this is how I truly feel: https://twitter.com/sull/status/1645595307571290112

I think for those who perceive the sequence numbering as important: snapshot it, append to it moving forward, and host it somewhere other than http://ordinals.com to cater to those who want to follow the unchanged numbering. Maybe http://ordinals.com could support linking to the initial inscription that held a given number, as a gesture. The orphaned inscriptions could be their own subcollection featured on this external resource, same as any other inscriptions that were affected by earlier bugs. I think it's important that the main project site, software, GitHub, etc. show the mathematical truth and not be responsible for collectors' perception of value based on order of inscription. Ordinal Theory doesn't treat this numbering system as important, only rare sats and, secondarily, the associated name construction based on the alphabet-to-numeric mapping. You could create a section on http://ord.io for this, for historical purposes and active market-value considerations.

There are a variety of custom indexers that exist beyond the reference client in this repo. Any of them can choose to handle this issue in any way. It could be by consensus or opinionated team who developed it. The market will ultimately decide which inscription number set they will refer to when listing and buying etc.

I see very little reason why the core project and team here should be influenced by how and why inscription numbers were used and perceived. I can empathize with those affected by this and other bugs, but this is alpha and that should have been expected too.

EOD, Ordinal Theory and the software that got this all started should stay true to Ordinal Theory and the truth of the blockchain, not so much to external value systems that formed in the wild, even if that happened because the UI of a centralized website displayed these numbers as an easier, shorter reference.

All of us in some way are now tuned into numbers of all sorts. Ordinal Theory makes us look at and consider numbers in new ways, and this is fine; I do it too. We as an ecosystem and the broader market can continue to do this. The core software, specifically the core indexer, just does not need to be the source of number speculation beyond what has already been proposed: rare sats based on blockchain context.

In the not-too-distant future, we should see entirely new contexts that place value on sats not associated with "Rare Ordinals" here. World events, personal life events, numeric conversions and translations, etc. may soon get anchored to sats and create new sats of value based on new criteria. This too happens beyond the scope of the core Ordinals software project and Theory.

Therefore, we might as well embrace the inevitable now and keep these issues and topics one layer outside of the decision-making responsibilities of the core Ordinals team. They have enough to worry about.

Just my quadrillion sats.

@sull

whoabuddy commented 1 year ago

I voiced my 2 sats on twitter, but also wanted to share here: my vote is for an answer that does not change the current inscription numbers.

The inscribe.news API just finished indexing all 1M+ inscriptions and their content, and the Ordinal News Standard itself had its first article around inscription number 245,000, so both would be affected by that type of change.

For the news standard it means the numbers change for the news inscriptions, which is probably not that big of a deal, but the indexer allows lookups by hash or by inscription number, which could lead to some inherent confusion.

e.g. https://inscribe.news/view-news?id=1060622 https://inscribe.news/api/content/509044 https://inscribe.news/api/content/240500

In addition, the index itself was designed to grab the current information out there and cache it, so this would essentially invalidate the 600k+ inscriptions following that type of change and require querying them all over again to rebuild both the main inscription cache and the ordinal news cache.

I imagine there are other 3rd-party implementations and things built on top of Ordinals/Inscriptions that could be adversely affected, e.g. were any of those missing inscriptions sat names, and would this disturb their "first is first" ordering?

stet commented 1 year ago

I voiced my 2 sats on twitter, but also wanted to share here: my vote is for an answer that does not change the current inscription numbers.

@whoabuddy I would be frustrated if I was in your situation. However, we have sat serial number and Inscription reveal TXID as the immutable index reference IDs that all software should be using. The Inscription Sequence Number of course is nice to use but should not be primary or what any code relies on.

whoabuddy commented 1 year ago

The sequence number was such an easy thing to understand. Part of why we gravitated to it was to have an easy way to filter such a large set of data by the order in which things were inscribed, whereas using the reference ID or the sat number makes it hard to distinguish the order of inscriptions in a meaningful way (or I could be missing something; it's all so new that these are probably just growing pains).

For example, take the inscriptions listed above:

Sequence #   Sat #              Inscription ID
   240500    1883723570934120   d2d96c9e5dd9ec426f75ec18678af1fa1262f6123bb0c60ba9538d1bf9ed639fi0
   509044    1724957899733851   8606e38cc7e81c3128b03930533e4805c3e186cf7c5a5865a82adf7f0d76f74fi0
  1060622     897241551502891   69a453cd0d774dc1b2abbdfcf431c59dccaf58bb1ba2c2d5891d27a59fd33a1ei0

The original version of the KV indexer would store each inscription using the inscription ID, but when querying the list of keys from the Cloudflare namespace, this led to a random set of inscriptions being returned because keys are sorted in alphabetical order.

You can filter the set of keys with a prefix, but outside of that, in order to do anything useful with the data it was much more helpful to have something sequential to work with. The 1:1 relationship worked just fine (give me this ID, you get this data), but when trying to work with a larger set of items like all ONS-related inscriptions, it was useful to have the inscription sequence number to help with scalability.
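
As a toy illustration of the ordering problem (not the inscribe.news code): keys built from inscription IDs sort lexicographically, which is unrelated to the order of inscription, while zero-padded sequence numbers sort chronologically.

```rust
use std::collections::BTreeMap;

fn main() {
    // Keyed by (truncated) inscription ID: iteration follows hex order, not
    // the order in which the inscriptions were made.
    let mut by_id = BTreeMap::new();
    by_id.insert("d2d96c9e...i0", 240_500u64);
    by_id.insert("8606e38c...i0", 509_044);
    by_id.insert("69a453cd...i0", 1_060_622);
    for (id, seq) in &by_id {
        println!("{id} -> {seq}"); // 69a..., 860..., d2d... regardless of seq
    }

    // Keyed by zero-padded sequence number: iteration is chronological.
    let mut by_seq = BTreeMap::new();
    for seq in [240_500u64, 509_044, 1_060_622] {
        by_seq.insert(format!("{seq:010}"), seq);
    }
    for (key, seq) in &by_seq {
        println!("{key} -> {seq}");
    }
}
```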

In retrospect, the design for this index started when we only had 250k-300k inscriptions, and as that grew to 1M it was clear we needed a way to work with subsets of the data; the goal was to do so without requiring an instance of ord. The data in Cloudflare now was built using common APIs, starting with ordapi.xyz and ordinals.com, eventually migrating to Hiro's API for both the info and content.

It was not clear to me that these numbers had no real meaning, as ordinals.com displayed the sequence number for all new inscriptions, and the data was available from both APIs listed above - that combination led me to believe it was something useful and reliable. Every reference I saw was attached to this number, so it felt like it had some permanence.

I understand we're in alpha territory, but I don't feel like I'm the only person that would have this misconception. It was built into all the major viewers and we celebrated each milestone from 10k to 1M based on the sequence number. That makes it feel like there may be other implementations or ideas that would be affected by such a change. :shrug:

stet commented 1 year ago

I understand we're in alpha territory, but I don't feel like I'm the only person that would have this misconception. It was built into all the major viewers and we celebrated each milestone from 10k to 1M based on the sequence number. That makes it feel like there may be other implementations or ideas that would be affected by such a change. 🤷

It is all independently fixable regardless of how the ord indexer and explorer work (respecting the erroneous numbering or adjusting to the correct order). Some project concepts were based on the number (e.g. 666,666), so that's a tough situation. But your dilemma, while inconvenient, is fixable without too much tech debt.

Of course a sequential order number is useful; it is common for a database to have one. This issue is sort of new territory - it's what happens when there is a rush and FOMO before standards and bug fixes. The sequence number is derived from a centralized instance of software counting the digital artifacts, and it can be buggy, or it can lie if a bad actor modifies a single file. We cannot indefinitely trust a single source for the data we want to use.

The chain truth is out there.

Let's remember that we are already believing in sat serial numbers to make this fly. Then we believed inscription numbers were paramount. Now we want to believe erroneous inscription numbers are paramount.

If we push the pretending and shared hallucination thing too far, we lose respect and predictability.
At least when it comes to ignoring truthful data.

rayonx commented 1 year ago

Unpopular opinion + kosher ordinal theory: the real serial number comes from the sat number/name; however, that is a non-sequential ID for inscriptions (considering that we have all paid mining fees, etc.). I was going to suggest getting rid of inscription numbers altogether, but the one useful application of inscription numbers is determining which inscription came first, so we do need them for good UX to establish order, and that is a technical need.

Bumping inscription numbers would not dismiss the technical needs, but it would certainly dismiss your wants. At the current stage of the ord client, we are far from fulfilling our needs. I support a simple solution over a compromising one.

@Psifour's solution seems to cover the needs and the wants and adds a new set of features, but I am unsure about the added complexity. Generally, I like the idea, as it could be a win-win.

gmart7t2 commented 1 year ago

@veryordinally wrote:

I will compile and add a list of inscriptions that are currently ignored to this issue.

Did you add the list yet? I'm not seeing it. I see a tweet where you give the number of such inscriptions but it doesn't match the number I arrived at and I'm interested to know where I went wrong in my analysis.

Surya-Sivaprakash commented 1 year ago

Hey everyone, I am new to Ordinals. I'm wondering whether this issue is just an indexing error that could be solved within a couple of weeks, or whether it will lead to Ordinals inscriptions being hard forked in the end. Could anyone explain the seriousness of the issue?

yilakb commented 1 year ago

In order to maintain fairness and inclusivity for missing inscriptions on the indexer, while avoiding any disruption to previously assigned numbers, a potential approach is to allocate them to future inscription numbers. One possible method is to randomly select a block number, such as 788888, and assign the next 1206 inscription numbers to the missing inscriptions, without affecting the existing inscriptions already in place. This way, the integrity of the existing inscriptions remains intact while ensuring that the missing inscriptions are appropriately included in the indexer.

blieb commented 1 year ago

In order to maintain fairness and inclusivity for missing inscriptions on the indexer, while avoiding any disruption to previously assigned numbers, a potential approach is to allocate them to future inscription numbers. One possible method is to randomly select a block number, such as 788888, and assign the next 1206 inscription numbers to the missing inscriptions, without affecting the existing inscriptions already in place. This way, the integrity of the existing inscriptions remains intact while ensuring that the missing inscriptions are appropriately included in the indexer.

The "assigned numbers" are based on the order on the blockchain. so the current number is shown wrong because of a bug in the indexer. At the moment we have 1 popular indexer. But in the future I expect there will be more. So I think the best will be to just adding the missing inscriptions on the number where they belong. Else you expect every other indexer also need to add the same "bug" to their code.

lgalabru commented 1 year ago

It would be great to get some light from @casey - he wrote both the spec and the code, and has not chimed in yet. Quick background: I work at Hiro and I re-implemented ordinal theory using a different approach. Instead of maintaining a vector of satoshi ranges for every single existing output (very heavy in terms of I/O, resulting in terabytes of write ops, as you can all experience if you're running ord on mainnet), we do backward traversals. It's still in the process of being optimized, but this second implementation looks consistent with what we're seeing with ord. The main goal of this design is to let us work seamlessly with reorgs and avoid hours of downtime due to re-indexing.

My understanding of this issue is that the spec:

The inscription content is contained within the input of a reveal transaction, and the inscription is made on the first sat of its first output.

Is leaving some room for different interpretations. I think if you look at this from a satoshi-flow point of view, the room for debate is very small. The whole theory relies on the idea of satoshis moving from inputs to outputs - the first satoshi transferred to an output is the first satoshi of the first input. Once you understand that, inscribing any input other than the first should not trigger "inscription is made on the first sat of its first output". In my mind, the 1200 "missed" inscriptions should not be indexed because they are not covered by the philosophy of the protocol.

There are two parts to the sentence:

The inscription content is contained within the input of a reveal transaction (1), and the inscription is made on the first sat of its first output (2).

You can't have multiple inscriptions per transaction and inscribe all of them on the first sat of the first output - in that case you're just re-inscribing.

TLDR: fix the spec, not the code, and if we want to see multiple inscriptions per transaction to support collections, let's augment the spec in a backward-compatible fashion. It feels like we're trying to read between the lines and bend the initial spec. This is a dangerous approach, which will surely divide this nascent ecosystem.

yilakb commented 1 year ago

TLDR: fix the spec, not the code, and if we want to see multiple inscriptions per transaction to support collections, let's augment the spec - it feels like we're trying to bend it and read between the lines :).

I am also inclined to support the decision of not including the missing inscriptions in the indexer, as they may not align with the philosophy of the protocol. Upon reviewing examples, it appears that many of the missing inscriptions are associated with experimental transactions involving two inscriptions, and as such, could be considered as collateral damage.

Furthermore, including these missing inscriptions in the indexer could pose challenges for wallets that only recognize the first satoshi of the first input as the inscription, potentially resulting in unintended spending as fees. Adding these missing inscriptions would require all wallets to also avoid sending the satoshis that were inscribed, leading to potential complications.

Hence, considering the deviation from the protocol's philosophy and potential technical challenges, it may be reasonable to exclude the missing inscriptions from the indexer.

mikeest commented 1 year ago

I believe that @timechainz proposal is the best compromise for everyone. He suggests that the numbering system should be maintained (block height activation), but that surfacing hidden inscriptions without numbering should be prioritized so that unnumbered inscriptions are addressable by ID.

I think we should not retroactively change the inscription numbers as they are an important part of the concept of Ordinals. As @casey has said before, THE ORDINALS PROJECT COMMITS TO IMMUTABILITY OF INSCRIPTION NUMBERS. I would agree with that.

Even though inscription numbers are not stored on the blockchain, they have become an integral part for all users. I believe that the view of many individuals that the numbers are immutable and that all Ordinals are connected through their numbers has fascinated many users and is what makes Ordinals special.

As for the inscriptions that have been hidden due to various bugs, I agree with timechainz's suggestion that they should be made visible to users, but WITHOUT numbers. This is because we cannot exclude the possibility of new bugs causing new hidden inscriptions in the future, and we would be faced with the same dilemma as now.

I believe that most users would be satisfied with this proposal, as the inscriptions without numbers would be seen as misprints and have their own unique value. Furthermore, this system would be easily explainable to mainstream users, similar to stamps that are not usable for regular purposes due to production errors but still have their own collector's value.

Regardless of which system we ultimately decide to use, there is one important thing that has not been thoroughly discussed yet and many users may not be aware of:

So far, we have only been discussing making visible the inscriptions hidden by ONE bug (#2000, @veryordinally). However, as some may know, there was also a bug in the early stages, between inscription numbers 500-600, where multiple inscriptions could be made on one sat; those are currently hidden as well (#1455).

I believe it is crucial that if we decide to make hidden inscriptions visible, we make ALL current hidden inscriptions on the blockchain visible and not just the inscriptions from this particular bug. Otherwise, it would bring up discussions and discrepancies again as to why certain hidden inscriptions were not made visible.

Following the principle of "all or nothing."

blieb commented 1 year ago

If people don't want to renumber, I think the best option is to not make them visible, and to change the spec so other indexers will also not make them visible (otherwise they will use different numbers).

If we do want to make them all visible, the order should be as it is on the blockchain - no strange "workarounds" that assign different numbers just so existing inscription numbers don't change. That would make a bigger mess.

I also think most people who inscribed this way accepted that their inscription was not accepted when it did not appear on the ordinals website. So I would go for the first option.

BTCAlchemist commented 1 year ago

I agree with @gmart7t2 to pick an arbitrary future inscription # and insert the missing inscriptions into the index there. This would preserve the current inscription # immutability. The question is, who would need to agree to this to get it adopted? Maybe @casey, @gmart7t2, @raphjaph, or @terror know whose agreement would be needed or could suggest a suitable process for consensus.

As I see it the best solution is to pick an arbitrary inscription in the future, far enough ahead that people have time to update their software, and insert the missing inscriptions at that point.

veryordinally commented 1 year ago

Here's my current perspective after considering the various proposals and viewpoints brought forth:

The two main solutions being discussed focus on either retroactively changing inscription numbers or leaving them unchanged. However, I believe we can consider the possibility of embracing diversity in inscription numberings.

Some creative solutions have been proposed, such as using negative numbers or bitwise flags in the numbering system. These alternatives showcase the innovation and flexibility within our community and are worth exploring further.

Diverse indexing approaches would not cause a "fork" of the network, as inscription numbers are not intrinsic to what an inscription is. The Bitcoin blockchain will always remain the ultimate source of truth.

Embracing diverse perspectives on inscription numberings can actually improve decentralization. As more people run explorers embodying their preferred viewpoints on inscription numbers, it strengthens the ordinals ecosystem rather than weakening it IMO.

Our focus as a community should be on maintaining consensus around inscription IDs and inscription encodings, while recognizing that sequence numbers are less relevant. Building an ecosystem that embraces potentially divergent views on sequence numbers can lead to more resilience. Also, sequence numbers become less and less interesting now that we have surpassed 1 million inscriptions.

This approach aligns with the values of open source governance and decentralized systems, encouraging the community to explore alternative methods and interpretations while still reaching consensus on core aspects of the protocol.

veryordinally commented 1 year ago

@veryordinally wrote:

I will compile and add a list of inscriptions that are currently ignored to this issue.

Did you add the list yet? I'm not seeing it. I see a tweet where you give the number of such inscriptions but it doesn't match the number I arrived at and I'm interested to know where I went wrong in my analysis.

Sorry, forgot to attach the list. This was as of Monday, when I tweeted about 1206 "hidden" inscriptions. I obtained this list from running #1963 from @raphjaph and adding some logging.

hidden-inscriptions.log

lgalabru commented 1 year ago

@veryordinally Could you re-run your extract and ignore a transaction if it has multiple inscriptions? I inspected a small sample manually, and ALL of them already have an inscription on Input #0, which means they would not count as valid inscriptions.

lgalabru commented 1 year ago

Again, per the spec:

There is some ambiguity in the first part (red), but the yellow part is crystal clear, and the protocol DOES NOT support multiple inscriptions per transaction - since an inscription on input #0 (or #N) would be attached to the 1st satoshi of the first output, any subsequent inscribed input would do the same, meaning we would be looking at re-inscriptions, which are not permitted.

stet commented 1 year ago

Again, per the spec:

There is some ambiguity in the first part (red), but the yellow part is crystal clear, and the protocol DOES NOT support multiple inscriptions per transaction - since an inscription on input #0 would be attached to the 1st satoshi of the first output, any subsequent inscribed input would do the same, meaning we would be looking at re-inscriptions, which are not permitted.

Honestly, if we take this approach then we could also start making arguments about duplicate inscription content, content with JSON or other MIME types before they were "officially supported", or any experimentation at all with what is contained in the envelope - it would all be up for debate whether they should or should not have a superficial sequential number based on the total count of ord protocol transactions.

veryordinally commented 1 year ago

@veryordinally Could you re-run your extract, and ignore the transaction if you have multiple inscriptions? I inspected a small sample manually, and ALL of them already have an inscription in Input #0. Which means that they would not count as valid inscriptions.

Will check, but I would be surprised if that is the case. From my eyeballing of the affected inscriptions, these were mostly inscriptions with a parent inscription in input and output 0, i.e. the inscription on input 1 would be the first new inscription.

stet commented 1 year ago

Also worth pointing out that the sequential inscription number being used on ordinals.com, and then obviously elsewhere, only encouraged spam, duplicate content, and a rush to grab a spot, as opposed to respecting the protocol's use for digital artifacts and appreciating meaningful historical sats.

The sequential numbering system obviously has value, but making this perceived value paramount over everything the ordinals docs actually do discuss is a stretch in the FOMO direction that we should all consider not necessarily healthy for the overall project. Collectors and speculators are going to set their own rules and criteria for value, but this is OUTSIDE the scope of Ordinal Theory.

lgalabru commented 1 year ago

@veryordinally I ran a quick script; assuming the 1206 hidden inscriptions were correctly extracted, it looks like only 315 of them should be considered - all the others fall into the multiple-inscriptions-per-transaction bin.

txids.log