tunnckoCore opened 8 months ago
@RogerPodacter poke.
Just finally wrote it.
https://ordinals.com/content/177c1e83ca20790448559382232487c7c97767f69bf46ec62152c3da0e099882i0
https://ord.io/content/177c1e83ca20790448559382232487c7c97767f69bf46ec62152c3da0e099882i0 (redirects to vercel prod)
https://ordiscan.com/content/177c1e83ca20790448559382232487c7c97767f69bf46ec62152c3da0e099882i0
I don't think they have such widespread support for the inscription's metadata, but it's definitely useful to have access to inscription/ethscription metadata; that will be our extra.
The proposal is to have the same in Ethscriptions:
ethscriptions.com/ethscriptions/12/content
api.ethscriptions.com/ethscriptions/12/content
ordex.io/ethscriptions/12/content
wgw.lol/ethscriptions/12/content
api.wgw.lol/ethscriptions/12/content
calldata.space/ethscriptions/12/content
Rooting for /content, because it's actually the content, the data. Sucks especially if we enforce /metadata too 😅
The /metadata is what we usually have on GET REST endpoints anyway, but we need it added because it should be accessible through non-APIs too.
Another approach to all that is to support Accept headers. Like, if a user navigates to /ethscriptions/12, that's a normal request with an Accept: */* type; if a dev sends a request to the same URL but with an Accept: application/json header, the server responds with the JSON metadata... but that's a lot more clunky and implicit.
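For illustration, that negotiation would look roughly like this from the caller's side; the URL is just an example, and the behavior is the hypothetical one described above, not something any site implements today:

```js
// Hypothetical Accept-header negotiation, seen from the caller's side.
// Same URL, two different responses depending on the Accept header.
const url = 'https://ethscriptions.com/ethscriptions/12'; // example URL only

// A normal navigation / plain fetch (Accept: */*) gets the normal response, the content.
const content = await fetch(url);

// A dev explicitly opting into JSON gets the metadata instead.
const metadata = await fetch(url, {
  headers: { accept: 'application/json' },
}).then((res) => res.json());
```

Which is exactly the implicit part: nothing in the URL itself tells you which of the two responses you'll get.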
@RogerPodacter bump
From your links it doesn't seem like there is an ecosystem-wide convention in Ordinals, but rather that everyone uses the Ord indexer for recursion and it has a certain URL scheme.
If this is the case, why not do the same thing with the Ethscriptions Indexer: use it and design around its URL scheme?
@RogerPodacter
The current smallest possible "loader", which tries to load from a bunch of different endpoints, is 464 bytes. A) Why should users pay for that? B) It's a dozen requests, which actually hurts loading performance. C) It's always better to just rely on the native built-ins, like dynamic import. Even that importEthscription with the dynamic import in it won't be needed and I'll re-ethscribe; it's there, and I'm passing it down to other modules, just because there's no other way.
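For context, that fallback loader is roughly this shape: walk a list of gateways and take the first one that answers. A minimal sketch, assuming hypothetical gateway base URLs and the proposed /content path:

```js
// Rough sketch of the fallback "loader" every recursive piece currently needs.
// Gateway URLs and the /content path are illustrative, not an official list.
const GATEWAYS = [
  'https://ethscriptions.com/ethscriptions',
  'https://api.ethscriptions.com/ethscriptions',
  'https://ordex.io/ethscriptions',
];

async function loadEthscription(id) {
  for (const base of GATEWAYS) {
    try {
      const res = await fetch(`${base}/${id}/content`);
      if (res.ok) return res; // first gateway that answers wins
    } catch {
      // gateway down, blocked, or CORS-failed: fall through to the next one
    }
  }
  throw new Error(`could not load ethscription ${id} from any gateway`);
}
```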
@RogerPodacter ping pong :point_up_2:
I shared it here and there, but will put it here too.
Made a caching proxy of the official v2 API. For the moment it's just a proxy, but soon it will be the interface to my own indexer.
Point is, it implements how I see things should be: max caching on the things that don't change, and separate endpoints for the dynamic parts, like /transfers; /owner(s) (responding with current, previous, creator, etc.); /number(s) for the ethscription number, event log index, and transaction index; /content only serving the raw content; /attachment and /blob for ESIP-8 content; and a special /meta (also /metadata) that is just a mirror of the bare /ethscriptions/:id endpoint, for consistency.
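As a sketch of that endpoint split, here is roughly what the route table could look like, using Hono (one of the frameworks mentioned further down); all handler bodies are stubbed placeholders, and the response fields are only the ones named above:

```js
import { Hono } from 'hono';

// Sketch only: stubbed handlers, placeholder values, routes as described above.
const app = new Hono();

// Immutable parts: safe to cache aggressively.
app.get('/ethscriptions/:id/content', (c) => c.text('<raw content here>'));
app.get('/ethscriptions/:id/number', (c) =>
  // ethscription number, event log index, transaction index
  c.json({ number: 12, logIndex: 0, txIndex: 0 })
);

// Dynamic parts live on their own endpoints so they don't bust the content cache.
app.get('/ethscriptions/:id/owner', (c) =>
  c.json({ current: '0x...', previous: '0x...', creator: '0x...' })
);
app.get('/ethscriptions/:id/transfers', (c) => c.json([]));

// ESIP-8 content.
app.get('/ethscriptions/:id/attachment', (c) => c.text('<attachment / blob content>'));

// /meta (also /metadata) just mirrors the bare /ethscriptions/:id record.
app.get('/ethscriptions/:id/meta', (c) => c.json({ /* same shape as /ethscriptions/:id */ }));

export default app;
```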
Then the site will also expose wgw.lol/ethscriptions/:id/content (and /data), and wgw.lol/ethscriptions/:id/metadata (and /meta), pointing to the API, so that ethscriptions served from there can use fetch('/ethscriptions/:id/content') for loading other ethscriptions, i.e. "recursion". That's why /metadata also exists: they may need the metadata for a given ethscription.
All this makes it super fast, and the response size is a lot smaller because the content isn't included in the main /ethscriptions responses.
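Inside an ethscribed HTML page served from such a site, that recursion boils down to relative requests against the convention paths; a small sketch, where the id is a placeholder for an already-ethscribed dependency:

```js
// Same-origin "recursion" from inside an ethscribed page; no hardcoded domains
// and no fallback loader. The id is a placeholder for an ethscribed dependency.
const LIB_ID = '0x...';

// Raw content of another ethscription (text, image bytes, a JS library, ...).
const code = await fetch(`/ethscriptions/${LIB_ID}/content`).then((r) => r.text());

// Its metadata, when the piece needs it.
const meta = await fetch(`/ethscriptions/${LIB_ID}/metadata`).then((r) => r.json());

// Or, for an ethscription that is an ES module, lean on the native built-in:
// const lib = await import(`/ethscriptions/${LIB_ID}/content`);
```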
Currently it's on Cloudflare Workers. I started it on Netlify, which has a "distributed cache", but hit some problems there; may try again because it's pretty cool. Like, if I first hit /ethscriptions/:id/number, then hitting the /ethscriptions/:id endpoint will be a 304, because it's actually the same core logic/handler/loader under the hood. The same applies if I hit it from a totally different browser: it would directly be a 304, because I already hit it through the other browser, haha.
In fact, I'd prefer to follow the Ordinals path with /content/:id and /metadata/:id, because this allows an indexer to just start storing each ethscription's content on its server (and the metadata in a KV or vector DB) and expose them as plain static assets, cached forever with cache-control: public, max-age=31536000, immutable (1 year).
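Serving that is trivial once the content sits in storage; a minimal sketch, where store is any KV-like lookup (KV, R2, plain disk) and only the cache-control value comes from the above:

```js
// Serve stored content as an immutable static asset. `store` is a hypothetical
// KV-like lookup; only the cache-control header value comes from the proposal.
async function serveContent(id, store) {
  const file = await store.get(`content/${id}`); // e.g. { body, contentType }
  if (!file) return new Response('Not found', { status: 404 });

  return new Response(file.body, {
    headers: {
      'content-type': file.contentType,
      // the content never changes, so let every cache keep it for a year
      'cache-control': 'public, max-age=31536000, immutable',
    },
  });
}
```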
In fact, once you have the actual content on disk, you can train AI on it, or vectorize it and make a search engine. Sure, that's also possible if the content is in a DB, but yeah.
Currently the API runs on CF, but I'm planning to try again with Bun on Fly.io, so I can really just save the files to disk and expose them as static assets through the server, whichever it is: Nitro, Astro, Hono, H3, whatever.
PS: actually... not that much better; in that scenario they would need to be requested with file extensions, which sucks. Duh. So... yeah.
May 31 update:
Convention-type, no-code update, ensuring interoperability, reliability, and composability: enforce that sites/platforms/APIs expose at least an /ethscriptions/{id}/content endpoint. This shouldn't need to be an "IP" (improvement proposal); it's just a logical thing that helps all parties and the usability of the protocol. I don't think the Ordinals community had an IP for that; they all just came around to exposing at least such an endpoint, and that gave birth to the explosion of recursion.
Make all users/third-party apps/platforms that want to engage with, build on top of, or visualize ethscriptions (whether an explorer or a market or whatever) expose 2-3 endpoints on their main domain, not on subdomains. They can still have APIs with their own structure; these 3 can be just thin proxies to their main API, whether that lives on /api, on an api. subdomain, or elsewhere (see the proxy sketch below). Point is, an inscription's content can reliably depend on making dynamic imports or fetches to these endpoints, no matter from what type of page or from where it is loaded.
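A sketch of such a thin proxy, written as a generic fetch handler; the upstream base URL is a placeholder, and only the three public paths come from the proposal itself:

```js
// Thin proxy exposing the convention paths on the main domain, forwarding to
// the site's existing API wherever it lives. Upstream URL is a placeholder.
const UPSTREAM = 'https://api.example.com';

export default {
  async fetch(request) {
    const { pathname } = new URL(request.url);
    const match = pathname.match(/^\/ethscriptions\/([^/]+)\/(content|metadata|owner)$/);
    if (!match) return new Response('Not found', { status: 404 });

    const [, id, part] = match;
    // Map the public convention path onto whatever the internal API structure is.
    return fetch(`${UPSTREAM}/ethscriptions/${id}/${part}`);
  },
};
```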
This vastly helps with the so-called recursion. There's nothing complex about it. You don't need anything but a simple convention you can rely upon anywhere. You don't need a "recursion protocol" like Anthony suggested and wrote a paper around (I helped him with that too), but that's not the way; it's a cool thing for an entirely different use case. A basic "recursion" is nothing more than a convention between parties, using the basics and mechanics of the Web, the native and core loading.
The way is how Ordinals recursion is possible: every site/app/platform/API just exposes an endpoint for the content of the inscription, /content/{inscription_id} - ordinals com, ordiscan, hiro api, ord io, magic eden, and many more. Here we can do a similar thing:
/ethscriptions/{id}/content for the inscription data/content_uri
/ethscriptions/{id}/metadata for the JSON metadata
/ethscriptions/{id}/owner for the current owner
/transfers and current/previous owners on their own endpoints, so you can cache responses "forever"
I've been fighting with/for and around that for months. I have at least 2 collections that use recursion, and they are pending just because you cannot reliably do it. The current approach is to have an async function that uses try/catch blocks to try and load a given ethscription id from multiple endpoints, in case of failures or sites being taken down one day.
Some months back I made the 0xNeko Recursive and ethscribed a few; the reason they actually show everywhere is that this loader shit falls back to HTTP requests to the v1 API.
We cannot just use HTTP to the main ethscriptions API. We can, but that's not safe, decentralized, trustless, or permissionless. On top of that, the "loader" adds cost to the final work.
Currently I have a recursive generative collection where each item consists of a few ethscriptions; all combined it's 2 libraries plus tiny code on top, around 80-100kb. I ethscribed the parts months ago so that my final work can be just 1kb or less. But with the current "loader" approach, the loader itself is at least 1-1.5kb, which is around $2 at 8 gwei.
Problem is you cannot just ethscribe the loader, because then each ethscription has to reliably load the loader... which defeats the purpose of extracting code into an ethscription.
All that can be fixed by just having a standardized endpoint for at least the content, the /content. Once we have this ESIP-9 accepted, we (me, ordex, eths com, and eths api) can all expose this endpoint. Thus, things like my Moira Hypnosis can be a lot smaller; in my case it can literally be just one line with a script tag pointing to /ethscriptions/{id}/content, plus a few attributes that define the difference between items, and the script in that ethscription will use them as options for the generative art.
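Roughly, each item would be a one-line script tag whose data attributes carry the per-item options (attribute names below are made up), and the inscribed library reads them back out; a tiny sketch:

```js
// Each item is literally one line of HTML, something like (attribute names invented):
//   <script src="/ethscriptions/{library_id}/content" data-seed="42" data-palette="3"></script>
// Inside the inscribed library, the per-item options come straight off that tag:
const options = { ...document.currentScript.dataset }; // e.g. { seed: '42', palette: '3' }
console.log('item options:', options); // the generative code would use these as its config
```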
Ethscriptions com
You have the ethscription page already on /ethscriptions/{id}; just add /content that serves the raw ethscription content. The recursive 0xNekos and Moira already try to load from that endpoint.
Ethscriptions api
There, the latest version should always be non-versioned and have stable endpoints, or at least the above 2-3, or at least the /content. Like api.ethscriptions.com/ethscriptions/id/content
On Ordex
They have a vastly complex and weird API, and it's actually on different domains, but that won't matter if they expose the same endpoints. That's normal; it's a mix of a marketplace and an indexer.
For their public API, they can expose the above 3 endpoints.
For their frontend site, they can expose the same.
All that makes it possible to have stable resolving anywhere.
This means zero "loader" shit, and zero problems loading anywhere, whether it's an API or an ethscription user page.
There's just no other way than to standardize across the board.