Canvas can perform arbitrary linear transformations on the entire drawing context:
https://developer.mozilla.org/en/Canvas_tutorial/Transformations#Transforms
See the "Transformation matrices" section of this post:
http://dev.opera.com/articles/view/blob-sallad-canvas-tag-and-javascrip/
It seems to me we could use this for Lorentz contraction when changing frames fairly easily. Using save() and restore() judiciously would help.
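As a minimal sketch of what that might look like (all names hypothetical; units where c = 1 and motion along x assumed):

```js
// Hedged sketch: Lorentz-contract everything drawn by drawScene along x,
// then restore the untouched transformation matrix.
function drawContracted(ctx, v, drawScene) {
  var gamma = 1 / Math.sqrt(1 - v * v);
  ctx.save();              // remember the current transformation matrix
  ctx.scale(1 / gamma, 1); // contraction along the direction of motion
  drawScene(ctx);
  ctx.restore();           // put the old matrix back
}
```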
That'd work well for the approach that (I think) RTR takes: you single out a special frame, do everything there, transform into whatever other frame, then display. The problem comes when you get other objects moving around; their past/future is now the present, and all sorts of headaches appear.
I've been taking a different approach -- partly as a learning exercise, partly because I feel the philosophy of it is closer to the universe. I've tried to keep things as frame-free as possible: there is just a list of positions and a list of momenta, and accelerating or translating consists of multiplying those vectors by a matrix, or adding to them. I feel this is the most graceful way to handle the inevitable errors when we run into our float size (things that are really far away or really fast won't move right). Additionally, objects are free to wander off the drawing area, and they can be quite sparse (1000s of units (pixels for now) away).
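A rough sketch of that bookkeeping, under my own assumptions (1+1 dimensions, units where c = 1, hypothetical names throughout):

```js
// Boost matrix acting on (t, x) vectors; a change of frame is just one
// matrix multiply per position and per momentum, with no special frame.
function boostMatrix(v) {
  var g = 1 / Math.sqrt(1 - v * v);
  return [[g, -g * v],
          [-g * v, g]];
}

function applyMatrix(m, vec) {
  return [m[0][0] * vec[0] + m[0][1] * vec[1],
          m[1][0] * vec[0] + m[1][1] * vec[1]];
}

// positions and momenta are arrays of (t, x) pairs; (E, p) boosts the
// same way (t, x) does, so one loop covers both lists.
function changeFrame(positions, momenta, v) {
  var B = boostMatrix(v);
  for (var i = 0; i < positions.length; i++) {
    positions[i] = applyMatrix(B, positions[i]);
    momenta[i] = applyMatrix(B, momenta[i]);
  }
}
```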
Unless what you're saying means something other than what I think it means, I can't see how it'll help with this problem.
The thing with the canvas transformation matrix is that you can draw all your objects, set a transformation, draw a new object, and then revert to the old transformation matrix. This means, for example, that instead of creating specialized code for each object shape which attempts to Lorentz contract it, we'd simply have to set the transformation matrix correctly before drawing each object.
At least I think it'll work that way. You might have to tinker with it.
Ah, so it was the latter option. :D
I've put a bit of thought into this now, and I think the best way to handle it is to do some checks based on angular size and/or distance of centre/width of object, then break it up into bins of 1 degree or so (and something similarly significant for radius), perform our transformations on those points, then interpolate linearly for the rest of it.
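A sketch of that binning, with everything hypothetical (the outline format, the observer sitting at the origin, and the transformPoint callback):

```js
// Sample the outline roughly once per degree of angular position as seen
// from the observer, run the expensive transformation only on those
// samples, and let straight lineTo segments interpolate linearly between.
function drawBinned(ctx, outline, transformPoint) {
  var binned = [];
  var lastAngle = null;
  for (var i = 0; i < outline.length; i++) {
    var p = outline[i];
    var angle = Math.atan2(p.y, p.x);
    if (lastAngle === null || Math.abs(angle - lastAngle) > Math.PI / 180) {
      binned.push(transformPoint(p));
      lastAngle = angle;
    }
  }
  ctx.beginPath();
  ctx.moveTo(binned[0].x, binned[0].y);
  for (var j = 1; j < binned.length; j++) {
    ctx.lineTo(binned[j].x, binned[j].y); // linear interpolation between bins
  }
  ctx.closePath();
  ctx.stroke();
}
```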
One thought on this: we could give some vertices and some Doppler values (or hues + luminosities or some such) to a function which applies a base grey texture to them, to make things that aren't just monotone. We may be able to take advantage of WebGL lighting methods where available for a significant speedup, and fall back on simple graphics where they aren't.
A brain-dead version of this is now working. I tried to do it with a bit less memory/CPU by combining the momenta, but wound up with a twisty mess of references. update() only costs as much as a single inertialObject when out of view, but changeFrame is proportional to the number of points.

Going to have to untangle inertialObject and restructure things if we want better performance.

Until I figure out an efficient way of handling aberration, it might be a good idea to add a Lorentz-contracted shape to inertialObject.
I think it may be possible to use some set of canvas transformations to achieve this while drawing objects.
The fundamental problem with using a straight-up transformation matrix is that we already transform coordinates based on reference frame, so applying a second transformation when drawing will give the wrong answer. However, it might be possible to adjust our drawing coordinates before we draw so that the transformation matrix will draw the object in the right place.
Using a transformation matrix would save us from writing incredibly complicated contraction code in each object's draw() method.
Another approach would be to draw objects in an off-screen invisible canvas, then move the contracted object into the correct location on the visible canvas. Sounds slow.
Humm, is aberration linear? Lessee: something moving at c along the path y = y0 will be seen at x^2 + y^2 = c^2 t^2, or x = sqrt(c^2 t^2 - y0^2) -- check, that has the right limits. So a point on this object at (x'1, y'1), i.e. (x(t) + x1, y0 + y1), will be contracted to (x(t) + x1/gamma, y0 + y1), and so appears when (x(t) + x1)^2 + (y0 + y1)^2 = c^2 t^2. Since x(t) + x1 is where we want to put the picture, x(t) + x1 = sqrt(c^2 t^2 - (y0 + y1)^2).

If we can find a matrix A(p, E) such that

A (sqrt(c^2 t^2 - y0^2), y0, t) = (sqrt(c^2 t^2 - (y0 + y1)^2), y0 + y1, t)

then there exists a linear transformation that can do aberration. But y1 (the height of the point on the object) is independent of y0, x, x1, t, vx, vy, and E, so no such function can exist. Thus no transformation matrix exists that provides aberration effects.

Aberration could be left out and noted as a limitation without too much ill effect. It could be done as a function of the canvas, but it would need to be done for every pixel; it's faster to just keep track of some vertices that define your shape (or parts of your shape that are small on the scale of the aberration) and either:

1) draw it using vector methods, or
2) distort a mesh and apply a texture (this is something there should be off-the-shelf code for).

Either approach could include an override for when objects are small on the scale of the aberration, so only close/big things would use the computationally intensive code.
Contraction is easy: just generate a boost/rotation matrix from the velocity (and possibly record an angle) and multiply the canvas by it.
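A hedged sketch of building that matrix for an arbitrary velocity direction (c = 1 assumed; the function name and output layout are my own, arranged for ctx.transform()):

```js
// Rotate the velocity onto the x axis, contract x by 1/gamma, rotate
// back: R(theta) * diag(1/gamma, 1) * R(-theta), written out by hand.
// Returns the [a, b, c, d, e, f] sextet that ctx.transform() expects.
function contractionTransform(vx, vy) {
  var gamma = 1 / Math.sqrt(1 - (vx * vx + vy * vy));
  var theta = Math.atan2(vy, vx);
  var c = Math.cos(theta), s = Math.sin(theta);
  var k = 1 / gamma;
  return [c * c * k + s * s,  // a: x-scale component
          c * s * (k - 1),    // b: shear (the matrix is symmetric)
          c * s * (k - 1),    // c: shear
          s * s * k + c * c,  // d: y-scale component
          0, 0];              // e, f: no translation
}

// usage: ctx.transform.apply(ctx, contractionTransform(0.6, 0));
```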
Another option is a first-order approximation (I think this is called Terrell rotation? Might have to dig a bit to find some equations, or just take the limit ourselves) where you assume the distance y0 is 0 for your calculations of A. Then it becomes a rotation dependent on v.
So you think we can do contraction with Canvas transformations, but that aberration has to be done with more complex methods? I'd stick to just contraction, then, since relativistic aberration isn't introduced in introductory physics courses that I know of. We can build it in when we add more advanced features.
Terrell rotation can be done with canvas transformations. I hesitate to make a visualization that looks wrong, which includes ignoring aberration (or even just using Terrell rotation). That adds further confusion when new concepts are introduced or correct visualisations are seen, causes huge amounts of confusion for students who are on the verge of working it out on their own, and significantly hinders discoverability (no one will think to ask themselves or tutors why straight lines become curved if they are straight in the demos). I'd rather have unexplained effects than encourage misconceptions.
In the current implementation of extendedObject, aberration (and contraction) is done correctly (in the limit that the points it is made of are right next to each other; the error in drawing something with 20 or so points is insignificant). However, there is a lot of redundant calculation, and it can only do a single closed path per object. Also, operations like rotating/boosting the object in its own frame are affine and not linear (not too hard to implement).
Humm, check out dopplerheadlight (it currently has an extendedObject floating about -- until just now including a bug I thought I'd already squashed).
I'll take a look. If Terrell rotation and basic Lorentz contraction can both be done with canvas transformations, that's great; canvas lets you apply multiple transformation matrices and multiplies them together to get the final result. We'd just want to find the most efficient way to use it.
Also, if you just fixed a bug in dopplerheadlight, can you push your changes to GitHub? I don't see any recent commits.
I... thought I did, one sec. Edit: had my mind fixed on extendedObject and pushed the wrong branch.
Okay. Let's see if we can get contraction and Terrell rotation working with canvas, then add in aberration however we can. Should be... interesting.
Hold up a sec: what are we actually trying to achieve? What kind of image do you want to aberrate, and why? Perhaps the existing extendedObject will be sufficient (recall that we can keep track of about 50,000 independent points, which allows one to draw quite a detailed object using paths). Extensions to stroke paths rather than filling seem trivial enough. This class of object already exists and has aberration working. Describe what you want, and then we can decide which path is easier.
Edit: Also, explain why you want to be able to draw to separate canvases and compose them. It's a useful thing to be able to do in general, but what are you trying to achieve by doing it? The only thing I can think of is detailed bitmap sprites.
I'm not necessarily interested in composing separate canvases; the problem is that canvas transformations apply to the entire canvas, so if I intend to draw a Lorentz-transformed object at coordinates (4,4), those coordinates will also be transformed according to the matrix I supply to canvas. So my objects will end up in unexpected places.
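One possible workaround, sketched under the assumption that each object draws itself about its own origin (function names are hypothetical): translate to the object's final position first, in untransformed coordinates, and only then apply the contraction matrix.

```js
// Place the object at (x, y) before contracting, so the supplied matrix
// distorts the object's shape without relocating it.
function drawObjectAt(ctx, x, y, m, drawObject) {
  ctx.save();
  ctx.translate(x, y);                                // screen coordinates
  ctx.transform(m[0], m[1], m[2], m[3], m[4], m[5]);  // contraction matrix
  drawObject(ctx);                                    // drawn around (0, 0)
  ctx.restore();
}
```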
Can extendedObject animate things like stars with full Lorentz contraction and aberration, along with arbitrary shapes? If so, let's just move to that. We'll need subclasses that handle common cases, like building a circular star that Doppler shifts.
Well, stars have this handy property in that spherical objects are invariant (if aberration is included; if not, they become ellipsoids). The Doppler shift won't be uniform (a gradient is a very close approximation), but the shape will be.
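A hedged sketch of that gradient approximation (the color arguments, standing in for whatever the Doppler code produces, are my own invention):

```js
// A star stays circular under aberration, so draw a circle and fake the
// non-uniform Doppler shift with a linear gradient from its near edge to
// its far edge along the direction of motion (taken as x here).
function drawStar(ctx, x, y, r, nearColor, farColor) {
  var grad = ctx.createLinearGradient(x - r, y, x + r, y);
  grad.addColorStop(0, nearColor);
  grad.addColorStop(1, farColor);
  ctx.beginPath();
  ctx.arc(x, y, r, 0, 2 * Math.PI);
  ctx.fillStyle = grad;
  ctx.fill();
}
```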
Also, the way I've structured everything, coordinates of objects aren't contracted from some preferred frame; they are what is shown. It's as close as I could get to a true geometric model of spacetime. There has to be some preferred frame (the one that's shown at any given time), but that changes as you accelerate. It's a bit hard to explain unless you've read some GR or pre-GR stuff, or I can sit down and demonstrate (this is actually one of the demos I want to make eventually) with a pencil and arm waving. Here goes an attempt with text; sorry if it comes off as condescending.

It's helpful to have a Euclidean analogue, so imagine 3D space. You're a little 2D man living on your plane, which you call the present. You're hurtling along the third dimension at the speed of light, but since everything else is too, you don't notice. If something accelerates relative to you, the worldline or path it takes, instead of being straight up, is angled slightly: as you go up, it goes right a bit and doesn't go up as far as you do, so what you see is the version of the thing from its future (i.e. it ages faster than you, or time dilates -- the opposite way to our universe, because this one is Euclidean rather than hyperbolic).

Now imagine the object was an extended 2D thing, say a piece of paper, tilted so that its normal vector points slightly right. You'll notice a few things. First, the close end and the far end are at different times because it's angled (simultaneity): what you see is an earlier version of the left end and a later version of the right (again, backwards from our universe because this one is Euclidean). Second, the paper gets longer (length contraction, backwards). This one's harder to see with just the sheet; an easier way is to get a ruler, hold it perpendicular to something, and note that the two points at which it crosses are the thickness of the ruler apart. Then tilt it: it now takes a diagonal, which is longer.

One (perfectly valid) way of describing our object is to take the flat piece of paper (assume it doesn't change, so don't worry about simultaneity) and stretch it out by some factor depending on the angle (velocity). This is how most introductory courses handle it. Another way (and the way later courses and GR courses do it) is to keep track of our full three-dimensional time-paper extruded thing, rotate it (a boost is just a hyperbolic rotation), and cut a little cross-section out at the right height.

This is (close to) the way I have treated objects and time-lines: they're not a solid piece of paper, but a bunch of lines that look like a solid 3/4-D lump when viewed from afar. The simulation has absolutely no concept of a Lorentz-transformed object; there are just world-lines, or 1D objects (in 3D, think of them as 0-dimensional points which have a set of time coordinates). You can look at these from any time (height in our Euclidean analogue) or reference frame (angle in our Euclidean analogue). You change frame/accelerate ('rotate') by applying a boost matrix to all objects, rotating about the origin. You can accelerate an individual object by rotating all of its 1D world lines about the object's centre of mass/time it accelerated. This is very limited right now because it changes the object's past as well as its future; fixing that is most easily done by implementing a full arbitrary world line (instead of a straight line, it can be any type of shape). But after this the entire universe will still be completely static in the 4D sense (i.e. deterministic and dynamic in 3D terms).
If by animate things you mean dynamic objects, then that was planned eventually, but I put it on the 'quite hard, do not think about it yet' pile (code for that brings with it any deterministic object, such as planets orbiting their stars, things accelerating by set formulae, and so on; they'll all share the same code). If by animate you mean some static object that can move about as a cohesive whole and accelerate/rotate/come into being and be destroyed, then yes, extendedObject can do most of that already.

Accelerating (rather than having the player accelerate) is not implemented, but all the pieces are there: it's just translate to origin, accelerate/rotate only those things, translate back (or these operations can be composed -- affine, not linear, unfortunately; see the sketch below). This will work fine without extra work as long as the player can't see out of their (present) light-cone. Heavy acceleration by the player may break things as well, but this can be kludged around quite easily: implement visibleAfter and visibleBefore flags, and keep the accelerated and unaccelerated things as separate objects. Anything fancier will require finishing arbitraryPaths.
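A minimal sketch of that composition, with hypothetical names and a 2x2 boost acting on (t, x) pairs:

```js
// Boost an object's worldline points about a pivot (its centre of mass /
// time of acceleration): translate the pivot to the origin, apply the
// boost, translate back. The composition is affine, not linear.
function boostAbout(points, pivot, boost) {
  for (var i = 0; i < points.length; i++) {
    var t = points[i][0] - pivot[0];
    var x = points[i][1] - pivot[1];
    points[i] = [boost[0][0] * t + boost[0][1] * x + pivot[0],
                 boost[1][0] * t + boost[1][1] * x + pivot[1]];
  }
}
```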
That makes sense. I was thinking in terms of things like inertialObjects, which are point-like but have a spatial extent anyway. But you're saying that if we compose an object out of numerous points with proper world lines, Lorentz contraction of that object drops out for free: when the points are viewed from "our" perspective on the screen, they're viewed from an "angle," and the contraction appears as a side effect. If we build things out of extended objects, we don't need to build complicated Lorentz transform code beyond what the renderer already does. Right?
Correct: an extendedObject is a collection of inertialObjects, and inertialObjects will always be in the correct place according to relativity, because the simulation has no concept of Euclidean or non-relativistic space. It will also be viewed in the correct place, because the display code for inertialObjects is easy to port to extendedObject and it does aberration (which is logically equivalent to light delay).
Also, there's no preferred perspective; they're just viewed from our perspective, because our perspective is defined by the way we view them. This is not entirely true, as there was an angle/perspective we were looking at them from when we first arranged them, but that can (and later will) be updated as the universe evolves.
To get an idea of the level of detail: we probably have the computing budget to fill the screen with objects up to this level of detail: http://www.google.com.au/search?q=svg+under+1kb&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&um=1&ie=UTF-8&tbm=isch&source=og&sa=N&hl=en&tab=wi&biw=1024&bih=512
An SVG converter/reader may be doable, although materials will have to be defined with colours other than 'black body'.
I think I was imprecise in my language; by "animate" I just meant "draw on the screen," I think. So extendedObjects are the way to go. I'll merge the ui branch into master soon, so you can pick up the changes made there, and then we can try to merge the extendedObject code and arbitrarypaths, to pick up performance and features. From there we can stabilize and add demos and UI features. Sound right?
Yeah, I might spend a little while on arbitrarypaths to see if I can complete it in short order. It won't involve a lot of code, but not having seen any such problem before, I have to keep every detail of the model in my head all at once to work on it and be sure of how it behaves (until I have a working example, at least), which requires a rather specific state of mind.
Righto. I'll keep working on UI improvements and features.
@capnrefsmmat extendedObject is pretty much done; just replace `scene.g.strokeStyle = "#0ff";` inside drawPast with a call inside the for loop to your doppler code. If you want to do it, you can find the velocities to match `this.pastPoints[i]` in `this.radialVPast[i]`, and `this.COM.X0` matches `this.COM.radialVPast`.

I'm not sure which method of interpolating the colors is best. Two things come to mind:

1) Create a gradient along the lines and stroke an outline only.
2) Do something along the lines of `for (...) { lineTo(COM.X0); lineTo(pastPoints[i]); lineTo(pastPoints[i+1]); closePath(); fill(); }` with a gradient setting colors for each point of the triangle created. This would limit shapes to anything described by a bunch of triangles which share one point. If you feel that's too limiting, we could create a list of triplets of vertices to be rendered; then any set of overlapping or non-overlapping triangles works fine. Same with lines.

Another thought is using createPattern (which probably won't stretch correctly) to add textures, which we could then color using partial alpha, or some other method.
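For the first option, a hedged sketch of what the per-segment gradient stroke might look like (pastPoints and the colors array are assumed to match the structures described above):

```js
// Stroke each outline segment with a linear gradient running between the
// Doppler colors of its two endpoints.
function strokeDopplerOutline(ctx, pastPoints, colors) {
  for (var i = 0; i < pastPoints.length - 1; i++) {
    var p0 = pastPoints[i], p1 = pastPoints[i + 1];
    var grad = ctx.createLinearGradient(p0.x, p0.y, p1.x, p1.y);
    grad.addColorStop(0, colors[i]);
    grad.addColorStop(1, colors[i + 1]);
    ctx.beginPath();
    ctx.moveTo(p0.x, p0.y);
    ctx.lineTo(p1.x, p1.y);
    ctx.strokeStyle = grad;
    ctx.stroke();                     // one stroke call per segment
  }
}
```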
Do you think, in a two-dimensional plane, it makes much sense to spend loads of effort coloring in the interiors of objects? I'm already worried about the performance implications of coloring and stroking each segment of the exterior line separately.
On the other hand, stars may not make sense without a fill. I'll give the simple version a try, at least.
Simple solid-color-stroked outlines are in 2ed779c.
@Schroedingers-Hat: How many points will be in an average extendedObject? I don't know if I should be worried about stroking every segment of the outline in different colors.
Uhm, as many as perimeter/resolution? Probably a few hundred will be enough for large/complicated things.
One thing to note is that stroking is very slow compared to fill, but if it's only a few thousand it should be fine.
The above commit implements that, I think. Give it a try. With our current blackbody Doppler caching, I don't think it's too much of a performance hit.
Something is... interesting. It shouldn't be red at high blueshift (unless you've added the Gaussian thing with some IR radiation). :/
Ah. I wasn't tracking the lightcone intersection of the COM of an extendedObject (only the light cone intersections of the points). On top of that, the radial velocity of different points on an object varies (will add comments to the commit and make changes).
Thought I'd fixed it all in 3d79b9eadbd37b3be0ca, but something is still a teensy bit off: the snake doesn't avoid the knives in all frames. Perhaps I forgot a gamma somewhere?
Found it. changeArrayFrame was operating on the COM directly.
Sweet. Looks like the demo works pretty damn well. Time to start building more demos. Thanks!
Didn't commit before going to watch TV -- didn't realise you'd be up. There's more -- one sec.
Any additions to this are covered by other issues.
Given the position and momentum of the centre of mass, find the Lorentz-contracted and light-delayed version of an arbitrary rigid object, whether a bitmap or a path. Angles are assumed to be constant over the whole object, so this will be inaccurate for large things.