An archive of all the p5.js mini-exercises from the course in Aesthetic Programming at Aarhus University, along with conceptualizations of the final exam project
First of all, the Rawgit version didn't seem to work the same way as what was shown in the video, so I'm focusing on the video, since that is presumably how the work was intended to run.
The work is built around the audio playing in the background. The code analyzes the audio as it plays and, based on that, draws three-dimensional shapes on the screen. It never clears away previous shapes, so as it runs it piles more and more shapes on top of each other, eventually filling most of the screen.
Overall I like the work. It's an interesting way to express sound visually, and because the screen is never cleared, it becomes a representation not just of what's currently playing but of everything that has been played since it started. And since the visuals are generated by the code's analysis of whatever is playing, you could swap in any other audio file and get a different representation. From looking at the code I can tell there are also some random elements involved in the color and placement of the shapes, which means that even with the same audio file you would get somewhat different results each time you run it.
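To make that concrete, here is a rough sketch of how I imagine a piece like this could be wired up in p5.js. This is my own reconstruction, not the author's actual code: the use of p5.sound's FFT, the file name, the box shape, and the specific mappings are all guesses.

```javascript
// Minimal sketch (requires the p5.sound library): audio analysis drives
// 3D boxes that accumulate because the screen is never cleared.
let sound, fft;

function preload() {
  // "track.mp3" is a placeholder for whatever audio file the piece uses
  sound = loadSound('track.mp3');
}

function setup() {
  createCanvas(600, 600, WEBGL);
  background(0); // clear once here, never again, so shapes pile up
  fft = new p5.FFT();
}

function mousePressed() {
  // browsers block autoplay, so start the audio on a click
  if (!sound.isPlaying()) {
    sound.loop();
  }
}

function draw() {
  // no background() call here, so every frame draws on top of the last
  fft.analyze();                      // must run before getEnergy()
  let energy = fft.getEnergy('bass'); // bass level, 0-255

  push();
  // random placement and color, matching the variation described above
  translate(random(-width / 2, width / 2), random(-height / 2, height / 2), 0);
  fill(random(255), random(255), random(255), 80);
  noStroke();
  box(map(energy, 0, 255, 5, 80));    // louder bass = bigger box
  pop();
}
```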
It might be worth noting that, without any actual movement in the work (just more layers of shapes being piled on top of each other), the fact that the shapes are three-dimensional isn't as significant as it would have been with movement, since we see the whole thing from a stationary, two-dimensional perspective. I'm not saying that using two-dimensional shapes or adding movement would necessarily have been an improvement, only that three-dimensional shapes (of any kind, really) only read as three-dimensional to us if we see them from more than one angle. That might be an overstatement, but you get the point.
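For what it's worth, making the depth visible wouldn't just be a matter of adding a rotation: because the piece relies on never clearing the screen, rotating the camera would smear the old pixels. One hypothetical way around that (again my own speculation, not a suggestion the author should have taken) would be to store every shape and redraw the whole pile each frame, reusing the preload() and setup() from the sketch above:

```javascript
// Hypothetical variation: keep the accumulation effect but let the view
// rotate, by remembering every shape and redrawing the pile each frame.
let shapes = [];

function draw() {
  background(0);               // now we DO clear each frame...
  rotateY(frameCount * 0.005); // ...so the whole pile can spin as one

  fft.analyze();
  shapes.push({                // one new shape per frame, as before
    x: random(-width / 2, width / 2),
    y: random(-height / 2, height / 2),
    size: map(fft.getEnergy('bass'), 0, 255, 5, 80),
    col: [random(255), random(255), random(255)],
  });

  for (const s of shapes) {    // redraw everything drawn so far
    push();
    translate(s.x, s.y, 0);
    fill(s.col[0], s.col[1], s.col[2], 80);
    noStroke();
    box(s.size);
    pop();
  }
}
```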