Closed: djairoh closed this issue 6 months ago
Did not realize my screenshot function took a capture of the whole screen; I'll have to edit that to capture only the window(s). My bad.
If you manage to have one camera for all the layers, can't you then move the camera itself? Then you wouldn't have to figure out any other linear transformation.
Ah, seems I wasn't entirely clear: the images above are with one shared camera across all layers. But as you can see, the background image is zoomed in way too far. I'm trying to solve this by transforming the image to exist in the range [-1, 1] (like the glyph layers do), but I can't figure out the right transformation to do so.
I can have the background image displayed in the window correctly by having it use a private camera (see below), but then moving/zooming would entail applying the operation to multiple cameras (not ideal).
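(For reference while I work this out: mapping pixel coordinates [0, W] x [0, H] onto [-1, 1] x [-1, 1] should amount to x' = 2x/W - 1 and y' = 2y/H - 1, possibly with y flipped depending on the camera convention; there's a code sketch of this under "The problem" below.)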
The goal
In trying to implement panning/zooming functionality for the program, I wanted to have all layers in the program share one camera. This is trivial for the data layers, which already effectively share one camera (the object returned by createNormalisedCamera()).
The plan
In order to allow the backgroundImage layer to also share this camera, I set out to apply two transformations to the image:
1. translate (move) it into place, and
2. normalise its coordinates to the [-1, 1] range the glyph layers use.
The problem
Moving the image was easy enough, but I'm running into some trouble normalizing its coordinates; I'm convinced this must be possible (without resizing the image to be 2 pixels across, of course), but I'm really not sure how.
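My best guess at what that transform should look like, as a minimal self-contained sketch (this uses java.awt.geom.AffineTransform and a made-up class name/image size purely for illustration; it is not the project's actual camera API):

```java
import java.awt.geom.AffineTransform;

public class BackgroundNormaliser {
    /**
     * Builds a transform T with T(0, 0) = (-1, -1) and T(width, height) = (1, 1),
     * i.e. it maps the image's pixel space onto the [-1, 1] range the glyph
     * layers already live in, so the one shared camera can pan/zoom everything.
     */
    public static AffineTransform normalise(double width, double height) {
        AffineTransform t = new AffineTransform();
        t.translate(-1.0, -1.0);            // applied second: shifts [0, 2] down to [-1, 1]
        t.scale(2.0 / width, 2.0 / height); // applied first: [0, w] x [0, h] -> [0, 2] x [0, 2]
        // If the camera's y-axis points up while the image's points down, flip y instead:
        // t.translate(-1.0, 1.0); t.scale(2.0 / width, -2.0 / height);
        return t;
    }

    public static void main(String[] args) {
        AffineTransform t = normalise(1920, 1080); // made-up image size
        double[] pts = {0, 0, 1920, 1080};
        t.transform(pts, 0, pts, 0, 2);
        // prints: (-1.0, -1.0) (1.0, 1.0)
        System.out.printf("(%.1f, %.1f) (%.1f, %.1f)%n", pts[0], pts[1], pts[2], pts[3]);
    }
}
```

The same scale-plus-translate pair could presumably be baked into whatever matrix the backgroundImage layer already applies before the shared camera, rather than touching the image data itself.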
The pursuits (meaning things I've tried so far; had to stick to the article + noun combo there)
Alternate solutions (sorry, I had nothing for a noun here, I'm kinda tired)
If it's possible to map the coordinates in the different layers to the size of the background image instead, this might be a solution (see the sketch after this paragraph). Right now the actual functionality of moving/zooming the camera is implemented; it's just that the image is zoomed way in (see images: top is the starting screen, bottom is after zooming out a bunch; note that the arrows/glyphs still exist, but they're too small to see).
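For that alternate route, the mapping would just be the inverse of the sketch above: stretch the layers' [-1, 1] coordinates up into the image's pixel space instead. Again a minimal illustrative sketch in Java with made-up names, not the project's real API:

```java
import java.awt.geom.AffineTransform;

public class LayerToImageSpace {
    /** Maps a point in [-1, 1] x [-1, 1] onto [0, width] x [0, height] (pixel space). */
    public static AffineTransform denormalise(double width, double height) {
        AffineTransform t = new AffineTransform();
        t.scale(width / 2.0, height / 2.0); // applied second: [0, 2] -> [0, width] x [0, height]
        t.translate(1.0, 1.0);              // applied first: [-1, 1] -> [0, 2]
        return t;
    }
}
```

Either direction is a single affine scale-plus-translate, so the trade-off is mostly about which space the shared camera (and therefore all the pan/zoom math) should live in.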