Sparklewerk / hypermap

Tools for exploring NFT collections, permissively MIT licensed
https://sparklewerk.com

1D #22

Open · JohnTigue opened this issue 2 years ago

JohnTigue commented 2 years ago

Dimensionality reduction techniques can reduce data all the way down to one dimension. When that is done, that single dimension provides a distance measure. One very simple rendition of that data would be to drive from one end of the line to the other, looking at the NFTs along the way, like so many billboards seen while driving.
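A minimal sketch of that idea, assuming umap-learn is installed and that per-NFT feature vectors have already been computed (the `nft_features.npy` file and `token_ids` below are hypothetical placeholders):

```python
import numpy as np
import umap

# Assumed inputs: one feature vector per NFT (pixels, traits, or model embeddings).
features = np.load("nft_features.npy")     # shape: (n_nfts, n_features), hypothetical file
token_ids = np.arange(len(features))       # stand-in for the real NFT IDs

# Reduce to a single dimension: each NFT gets one scalar coordinate.
embedding_1d = umap.UMAP(n_components=1).fit_transform(features)   # shape: (n_nfts, 1)

# Sorting by that coordinate gives the "drive along the line" order,
# and gaps between coordinates act as the distance measure.
order = np.argsort(embedding_1d[:, 0])
ordered_ids = token_ids[order]
```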

JohnTigue commented 2 years ago

The simplest yet elegant solution here is to do the 1D UMAPing (or other dimensionality reduction technique) at build time. The result is simply an array of NFT IDs. For elegance, all NFTs should be in a spritesheet (if not too big; for CryptoPunks that's only ~850KB, other collections will have much larger sprite sheets). That gives a cache- and network-friendly view of the NFTs, served out of a spritesheet (or IndexedDB, etc.), and the viewer is simply an image carousel showing one image at a time, changing slides really quickly.
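Roughly, the build step could look like the following. This is a sketch only, assuming the `ordered_ids` array from the embedding sketch above, Pillow installed, and one image per NFT at a hypothetical `images/{id}.png` path; tile size and column count are arbitrary choices:

```python
import json
from PIL import Image

TILE = 24      # assumed per-NFT tile size (CryptoPunk-style); adjust per collection
COLS = 100     # assumed spritesheet width, in tiles

# The 1D result really is just an array of NFT IDs.
with open("order_1d.json", "w") as f:
    json.dump([int(i) for i in ordered_ids], f)

# Pack every NFT into one spritesheet, in 1D order.
rows = -(-len(ordered_ids) // COLS)          # ceiling division
sheet = Image.new("RGBA", (COLS * TILE, rows * TILE))
for idx, token_id in enumerate(ordered_ids):
    tile = Image.open(f"images/{token_id}.png").resize((TILE, TILE))
    x, y = (idx % COLS) * TILE, (idx // COLS) * TILE
    sheet.paste(tile, (x, y))
sheet.save("spritesheet.png", optimize=True)
```

The JSON array plus the single spritesheet PNG is all the client-side carousel needs to fetch.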

This is not a lot of work and will be very entertaining. Also, as a movie: great social media content.

JohnTigue commented 2 years ago

Right above the text "BASTARD GAN PUNKS V2" is a shitty 1D hypermap.

[Screenshot, 2022-01-26 1:24 PM: the 1D hypermap strip above the "BASTARD GAN PUNKS V2" text]

JohnTigue commented 2 years ago

Once this code is up and running, extend the concept to 2D: https://github.com/ManyHands/hypermap/issues/21

JohnTigue commented 2 years ago

Actually, it looks like generating animated GIF 1D hypermaps can be done using ImageMagick: https://gist.github.com/tskaggs/6394639.
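A rough sketch of that pipeline, assuming the per-NFT images and `ordered_ids` from the comments above, and ImageMagick's classic `convert` binary on the PATH (frame size and delay here are arbitrary choices):

```python
import glob
import os
import subprocess
from PIL import Image

os.makedirs("frames", exist_ok=True)

# One frame per NFT, in 1D order, scaled up with nearest-neighbor so pixel art stays crisp.
for frame, token_id in enumerate(ordered_ids):
    img = Image.open(f"images/{token_id}.png").resize((240, 240), Image.NEAREST)
    img.save(f"frames/{frame:05d}.png")

# Stitch the frames into a looping GIF with ImageMagick.
# -delay is in 1/100ths of a second per frame; -loop 0 loops forever.
frames = sorted(glob.glob("frames/*.png"))
subprocess.run(["convert", "-delay", "5", "-loop", "0", *frames, "hypermap_1d.gif"], check=True)
```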

So do 1D hypermapping in both JS and Python. The JS version is tink.js based: pull down the sprite sheet and feed NFTs to a JS 1D hypermap viewer. In contrast, the Python version makes animated GIF files. Python could also generate files which would do the rendering client side in pure CSS (see the sketch below).
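For the pure-CSS idea, one hedged sketch: Python writes a stylesheet that steps through a hypothetical one-row strip spritesheet (`spritesheet_strip.png`, tiles laid out left-to-right in 1D order), so the browser plays the carousel with no JS at all:

```python
# Assumed inputs: ordered_ids and the tile size used for the strip spritesheet.
N_FRAMES = len(ordered_ids)
TILE = 24                    # assumed tile size, matching the strip
SECONDS = N_FRAMES / 10      # roughly 10 NFTs per second; tune to taste

css = f"""
.hypermap-1d {{
  width: {TILE}px;
  height: {TILE}px;
  background: url("spritesheet_strip.png") 0 0 no-repeat;
  image-rendering: pixelated;
  animation: drive {SECONDS}s steps({N_FRAMES}) infinite;
}}
@keyframes drive {{
  to {{ background-position-x: -{N_FRAMES * TILE}px; }}
}}
"""

with open("hypermap_1d.css", "w") as f:
    f.write(css)
```

For very large collections a single-row strip can blow past browser image-size limits, so the fallback would be a grid spritesheet with a second keyframe stepping background-position-y.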

JohnTigue commented 2 years ago

Another reason to get on 1D hypermaps is to shut up the detractors. Knowledgeable people might say, "Hey, he's not doing anything original." And I'm not, besides applying existing tech to a new domain problem. But there was only one paper I ran across – in neuroscience-land – that used a 1D embedding. So, that plus a simple viewer will give that ilk a pacifier to plug their pieholes.