Closed ghost closed 6 years ago
Thanks for the initiative, Eugene! I think my students in Hannover have something to put here; I will talk with them and get back later. Cheers, Joachim
On 02/12/16 22:35, Eugene Cherny wrote:
Currently, the “projects” section on the web site looks lonely and leaves the impression that there isn't much people do with Csound, which is most certainly not true. So we need to fill this section with the projects we know about. I propose to add ~20–30 projects ourselves (I can definitely contribute to that) and after that to write instructions for visitors explaining how to add their own projects to the web site. This way we can fill the section in a matter of a week or two.
To help us with this I'd like to ask the community to list the projects they know about here, in the comments to this issue, and I'll be adding them as I have spare time. It'll be awesome if, besides project links, you would post some additional info: a name, a description and an image.
You can also add a project yourself by submitting a pull request. All you need to do is create a .md file in the _posts/showcase directory, following this example: https://raw.githubusercontent.com/csound/csound.github.io/master/_posts/showcase/01-03-2016-Cabbage.md So let's make this section nice and cool!
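For orientation, a showcase post is a Jekyll-style markdown file with YAML front matter. The field names below are illustrative guesses only, not the actual template; check the linked Cabbage example for the fields the site really uses:

```markdown
---
layout: post
title: "My Csound Project"
image: my_project.png
---

A short description of the project, with links to audio, video and source code.
```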
Sound installation “Flyndre” (2006 – 2026)
The sound installation Flyndre (Flounder) by Oeyvind Brandtsegg is based on a sculpture by Norwegian sculptor Nils Aas. It is exhibited outdoors in a park, some 120 km north of Trondheim, Norway. The sounds of the installation are affected by environmental conditions like temperature, light, tidal water, moon phases, seasons of the year and more. Csound is used as the synthesis engine and core part of the software for the installation, supported by Python for composition algorithms and interfacing. Contact speakers are mounted on the resonant metallic structure, turning the whole sculpture into a sound-producing object. The installation was opened in September 2006, and was first planned to run for 10 years. In 2016 it was recommissioned by the municipality of Inderøy for another 10 years. A live audio stream of the sound can be heard at http://flyndrestim.itea.ntnu.no:8002 (played through the sculpture, and recorded on site), and http://flyndrestim.itea.ntnu.no:8000 (direct line tap from the computer).
(image: Flyndra005.JPG)
Sound installation “VLBI Music” (2013 – 2020)
The sound installation “VLBI Music” was made as an art piece for the Norwegian Mapping Authority’s headquarters in Hønefoss. The composition is based on measurements of distant quasars, done by the mapping authority in collaboration with other scientific institutions worldwide. The purpose of the measurements is to create a stable reference for other positioning services. The sound installation uses the processes of the VLBI (Very Long Baseline Interferometry) system as well as the raw data as material for the live running composition. Csound is used both for sound synthesis and compositional algorithms, interfaced to Python for some of the data processing from the measurements. The live audio stream can be heard at http://193.90.68.207:8000/stream/1/ and there is also an Android app relaying the audio stream. More information at https://www.researchcatalogue.net/view/55360/55361 and a TEDx talk about the piece: https://www.youtube.com/watch?v=aBEGnLtVk64
Live processing and crossadaptive processing
Csound is used as a central part of research on improvised electroacoustic music performance centered at NTNU, Trondheim, Norway. Using Csound-based custom effect processing routines (including Hadron, Liveconvolver and others), new methods of making music are developed and investigated. A project on live processing of improvised performance (2011–2014) resulted in the creation of a dedicated ensemble (T-EMP) and is documented at https://www.researchcatalogue.net/view/48123/48124/10/10, with an album release at http://www.cdbaby.com/Artist/TEmpTrondheimElectroacousticMu A project on crossadaptive effect processing as musical intervention is running from 2016 to 2018, involving several key members of the Csound community (Oeyvind Brandtsegg, Sigurd Saue, Victor Lazzarini, Rory Walsh, Bernt Isak Wærstad, Andreas Bergsland), and other prominent partners. More information at http://crossadaptive.hf.ntnu.no/
(image: Crossadaptive_Dec_2016.png) Still image from video of a studio session on crossadaptive processing in December 2016. Trond Engum: processing, Tone Åse: vocals, Carl Haakon Waadeland: drums
[self.]
The art installation [self.] was created by Oeyvind Brandtsegg and Axel Tidemann in an attempt to explore artificial intelligence and its relation to us humans. It is a robotic head with speakers, microphones, camera, and video projection. [self.] learns from people talking to it, by analyzing incoming sound and images, comparing similarity and context of sound segments. Its utterances are solely made from learned segments. Audio processing in [self.] is done in Csound, supported by Python for the artificial intelligence. A video of the project: https://www.youtube.com/watch?v=HErOfnqREBQ A limited version of [self.] attended the International Csound Conference in St. Petersburg in 2015, and a video of its memories thereof can be seen here: https://www.youtube.com/watch?v=E7fYV4K-9_s
Hadron Particle Synthesizer
The Hadron Particle Synthesizer is a synth, sampler and effects processor based on the partikkel opcode. The interface has been designed with live performance in mind, providing rich access to the underlying 200+ synthesis parameters via simple user controls. The main parts of Hadron were developed between 2008 and 2011 by Oeyvind Brandtsegg, in collaboration with Arne Skeie (graphic design), Bernt Isak Wærstad (M4L and VST interface) and Sigurd Saue (VST interface). More information at http://www.partikkelaudio.com/
(image: Hadron_VST_interface.png) The VST interface for Hadron
Liveconvolver4
Traditionally, convolution has only been possible with a static (previously stored and analyzed) impulse response. With Liveconvolver4, Oeyvind Brandtsegg and Sigurd Saue have developed a method to convolve two live sources with each other. The impulse response is updated and replaced one partition at a time, in a synchronized manner, allowing for updates to the IR without introducing audio clicks or glitches. This method also allows for parametric processing of the IR while it is actively used for convolution. This work will become available in Csound 6.09.
(image: Liveconvolver4_Cabbage_gui.png) The Cabbage GUI for Liveconvolver4
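The idea behind per-partition IR replacement can be illustrated with a small sketch. This is not the Csound/Liveconvolver4 implementation (which works on spectral partitions in real time); it is a minimal pure-Python illustration of uniform partitioned convolution: each IR partition p of length L contributes conv(x, h_p) delayed by p*L samples, so swapping a single partition changes only that partition's delayed contribution, not the whole filter at once.

```python
def convolve(x, h):
    """Direct time-domain convolution of two lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def partitioned_convolve(x, partitions, L):
    """Overlap-add of per-partition convolutions, each delayed by p*L.

    `partitions` is the impulse response split into equal slices of
    length L; replacing one slice only alters its own delayed term.
    """
    y = [0.0] * (len(x) + L * len(partitions) - 1)
    for p, h_p in enumerate(partitions):
        for n, v in enumerate(convolve(x, h_p)):
            y[n + p * L] += v
    return y

# The partitioned result is identical to one long convolution:
x = [0.5, -1.0, 0.25, 2.0, -0.5, 1.0]
h = [1.0, 0.5, 0.25, 0.125]
L = 2
parts = [h[i:i + L] for i in range(0, len(h), L)]
assert all(abs(a - b) < 1e-12
           for a, b in zip(partitioned_convolve(x, parts, L), convolve(x, h)))
```

In the real live convolver the partitions are held in the frequency domain and replaced in sync with the block processing, but the decomposition above is what makes a click-free one-partition-at-a-time swap possible.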
Great stuff Oeyvind!
Thanks, and great to have Cabbage to do flexible visualization of the IR.
While it's not using Csound in a traditional sense, I would very much like "Soundpipe" to be mentioned, as Csound has shaped the library so much.
Soundpipe
Soundpipe is a highly portable and lightweight music DSP library written in C. While not directly using Csound, Soundpipe uses many algorithms adapted from the Csound source code. Soundpipe is suited to a wide array of platforms, and versions have been compiled on Linux, Raspberry Pi, iOS, Android, and the STM32F4 Discovery board. It is also the main audio synthesis engine for the iOS framework AudioKit, as well as the stack-based audio language Sporth.
@PaulBatchelor Could you provide a picture? Like a screenshot of some Soundpipe code or something. The showcase template is designed to show at least one picture per project.
Closing this now, since everyone submitting projects should do it as individual issues instead.