cnlohr / colorchord

Chromatic Sound to Light Conversion System

ColorChord EmbeddedCloud #34

Open pjmole opened 7 years ago

pjmole commented 7 years ago

[screenshots: p5_ws_samp_loop, colorchordembeddedcloud]

Charles, to me this embedded ColorChord is a wonderful engine for so many things. A prolific author could make an entire book using this device as the starting point for so many studies; my imagination is overwhelmed.

You probably went to much work to get all the code and web pages onto the processor. I built the bleeding-edge hack above using a clone of your DFT display with P5.js WEBGL. I know nothing of P5.js beyond some research on the net for popular JavaScript tools, but it seemed to have excellent documentation and examples.

I have learned that SOFTAP mode is problematic, so you seem to need to get into infra mode for any semblance of stability.

This opens up the embedded version to anyone's favorite JavaScript tools.

I already see that my lack of experience shows; I need to understand how to place the P5 canvas better.

The first screen grab shows a P5.js DFT loop.

The second screen grab shows the furnace sound in the next room at the low frequencies and a radio in the same room at the upper frequencies.

My plan is to collect DFT data from 4 embedded ColorChords. One would be considered the master; the rest could be regular models.

cnlohr commented 7 years ago

I appreciate the kind words.

I don't understand: "need to get into infra mode"?

You can put the MPFS at the 1MB boundary and get many MB of space for web pages, scripts, etc. Though I recommend doing websockets with a minimal API, like the one I used for https://cnlohr.github.io/voxeltastic/
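A minimal websocket API in that style might be sketched as follows. This is only an illustration: the `makeDftPoller` name and the `"CM"` command string are placeholders, not the actual esp82xx protocol.

```javascript
// Sketch of a minimal websocket API client: send one short command per
// animation tick and treat each reply as the latest DFT frame.
// "makeSocket" is injected so the transport can be swapped or faked;
// the command string "CM" is a placeholder, not a real esp82xx command.
function makeDftPoller(makeSocket, uri, onFrame) {
  var ws = makeSocket(uri);
  var pending = false;                 // keep only one request in flight
  ws.onmessage = function (ev) {
    pending = false;
    onFrame(ev.data);                  // hand the latest DFT payload to the caller
  };
  return function poll() {
    if (!pending && ws.readyState === 1 /* OPEN */) {
      pending = true;
      ws.send("CM");
    }
  };
}
```

Keeping a single request in flight avoids the queue buildup that makes full HTTP polling heavier than it needs to be.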

Would you want to export colorchord data via UDP to another computer?

pjmole commented 7 years ago

..."need to get into infra mode" It seems the AP mode is "problematic" it's not untill the unit is in station(infra) mode be for I get long stability with my esp8266 projects.

I came late to the esp8266 worold and only have some version of the esp8266-12. Generally remember to set the 1B MPFS thing. Once I did cram nearly 1MB on the esp8266 'cause of makefile *.js.... , mucho time to reflash the chip so I decide to host the larger .js files on my Pi, to reduce development cycle.

The voxelastic project I have seen but not actually tried to use. should build the detector though.

My project is a naive hack of your work. to produce realtime visuization of sounds in virtual space grab 4 DFTs from colorchords arranged in orttogonal position. run thro the the sample lists and place ballons at estimates where notes came from in such virtual space

right now I get 7-14 fps with one colorchord

I hope to finish hack soon, thinkin of adding more ws: one for each colorcord and sending out 4 ws: requests in the WEBGL loop ? colorchordcrowd2

cnlohr commented 7 years ago

So, there is a big thing I'm trying to make happen, and that's getting ESP_NONOS_SDK to work with ColorChord. Espressif has helped some, but more work needs to be done to shrink the IRAM footprint. The older SDKs do have known stability issues, so that is entirely possible. (Will comment more soon)

cnlohr commented 7 years ago

I now understand some more. That is very cool! I still think it would be very cool to have the ESPs able to stream some sort of summary information about sound going on at the moment to other devices on the network. KEEP GOING!

pjmole commented 7 years ago

I am using ESP_GCC_VERS = 4.8.2, and am very interested in ESP_NONOS_SDK with ColorChord. I tried to use websockets to get the extra 3 DFTs for my display, but my approach that way was problematic. Can one multiplex them?

Rethinking the possibilities: I need only to get the DFTs from the others, perhaps with simple web requests to a custom command?

As for placement of the receivers in the "test booth": 3 on the ceiling and one on the floor. A simple line intersection should keep all the bubbles in the virtual display.

The one-transistor receivers are sensitive in the low frequency range, so I should set Fuzzed as the default. But that is an area I'm interested in; in our old frame house one can hear many bumps in the night.

cnlohr commented 7 years ago

If you are mostly interested in the lower frequencies, you should be able to filter the signal more heavily (Tweaking the IIR values) and it should be MUCH cleaner at lower frequencies and less responsive at high ones. There is a balance. Those values are currently set for real time music :-D
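The IIR tweak described here is essentially a one-pole low-pass. A minimal sketch of the idea, with an illustrative alpha constant (not ColorChord's actual filter values):

```javascript
// One-pole IIR low-pass: smaller alpha = heavier filtering, so low
// frequencies dominate and the output reacts more slowly to change.
function makeLowpass(alpha) {
  var y = 0;
  return function (x) {
    y += alpha * (x - y);   // y moves a fraction "alpha" toward each input
    return y;
  };
}
// Example: alpha = 0.05 smooths far more heavily than alpha = 0.5,
// at the cost of responsiveness at the high end.
```

That trade-off is the "balance" mentioned above: heavier filtering cleans up the low bins but dulls real-time response to music.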

I don't understand what you are referring to with multiplexing websockets. But websockets are much more lightweight than full-on HTTP requests, though with the current system every websocket command is also exposed through a regular HTTP interface anyway, through (I THINK) its /d/issue.

Charles

pjmole commented 7 years ago

Thanks for your suggestions. Having a great time making/messing with your code.

Multiplexing websockets: I added three websockets for the X, Y, and Z sensors, and want to grab the DFT sensor data from the others for the display loop in the base sensor.

Your websocket code has a queue mechanism that I don't want to duplicate. The extra sockets I am trying to simplify have only one command type sent, and only one type of received message: the last DFT data from the stations.

I have the router assign static addresses 192.168.0.20X to the four stations.
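The simplified single-command, single-reply sockets could be sketched like this. The function names are hypothetical; the /d/ws/issue path and the 192.168.0.20X addresses come from this thread.

```javascript
// Sketch: one simple socket per remote station, each remembering only
// the most recent DFT reply. "makeSocket" is injected (e.g. the browser
// WebSocket constructor) so it can be faked for testing.
function makeStationCloud(makeSocket, hosts) {
  return hosts.map(function (host) {
    var station = {
      lastDft: null,   // latest DFT payload from this station, if any
      ws: makeSocket("ws://" + host + "/d/ws/issue")
    };
    station.ws.onmessage = function (ev) { station.lastDft = ev.data; };
    return station;
  });
}

// Usage sketch in a browser:
// var cloud = makeStationCloud(function (u) { return new WebSocket(u); },
//                              ["192.168.0.201", "192.168.0.202", "192.168.0.203"]);
```

The display loop can then read each station's `lastDft` without duplicating the full queue mechanism.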

So here is the proposed scenario for the smallest cloud of 4: station 0, or base, high upper left; station 1, or X, high upper right; station 2, or Y, on the ground below station 0; station 3, or Z, furthest away ahead, the most sensitive unit.

In the decimation loop: if we have a Z sample, and all 4 samples for a station are above some threshold, do 2D estimations for X and Y, finalize with a 3D estimation for Z, then place colored bubbles in the virtual view.

I have been musing about a better estimation method than the simplest one. As there are only 8 bits in the DFT data, how does that relate to dB? How does that affect our sound/distance headroom expectations?
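On the 8-bit question: if the exported amplitudes are linear, 8 bits cover roughly 48 dB of dynamic range. A rough sketch, assuming linear 0..255 amplitude bins (an assumption about the export format, not something confirmed in this thread):

```javascript
// Rough sketch: relating an 8-bit linear DFT amplitude to dB.
// Assumption: the embedded DFT exports linear amplitudes 0..255.
function ampToDb(amp) {
  if (amp <= 0) return -Infinity;      // silence: below the 8-bit floor
  return 20 * Math.log10(amp / 255);   // 0 dB at full scale
}

// The full 8-bit range spans about 48 dB:
var rangeDb = 20 * Math.log10(255);
```

So any level differences between stations beyond roughly 48 dB are simply unrepresentable, which bounds the usable sound/distance headroom.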

This is a learning exercise. I just found the OLPC Acoustic Tape Measure (http://wiki.laptop.org/go/Acoustic_Tape_Measure): some good thoughts, but dated stuff?? Perry

Good news: I am grabbing all 3 extra DFTs now (Jan 20); now I need to figure out the distance estimations.

pjmole commented 7 years ago

Hi Charles, an update on progress. Finally satisfied with the location of the WEBGL canvas.

[screenshot: screencolorchordwebgl]

Here is a shot of the sensors in action. There are 4 DFTs displayed, based on the loudest of each sample. The DFTs are shown with the lowest tones to the left and the higher tones below and to the right.

This is a fairly noisy environment: furnace noise, space heater noise, and a radio playing in the lower right background.

Now trying to find a method of normalizing samples to estimate position/distance.

But my mind wanders... a signal generator might be a good project to add to this esp8266 engine, just to make noises for the other sensors.

cnlohr commented 7 years ago

Why close the issue?

pjmole commented 7 years ago

Sorry about closing; I did not mean to do so. My skills are really old. I still rely on Linux vi for most of my work. Some days I consider looking for an IDE.

cnlohr commented 7 years ago

Nah. I lived the IDE life. It's glamorous, but in the grand scheme of things it will only drag you down. I am glad to be free.

pjmole commented 7 years ago

[screenshot: colorchordcloud_hoo_test1]

My first hoooo test... in a quiet environment. The red and purple are me saying whooooo.

First, the distance calc for the lower right:

```javascript
translate(0, 0, -215);
translate(-dcalc2(samp, sampX), -dcalc2(samp, sampY), dcalc2(samp, sampZ));

// log-ratio distance estimate between two station amplitudes
function dcalc2(s1, s2) {
  if (s1 < s2) {
    return mult * (20 * Math.log10(s2 - threshold) - 20 * Math.log10(s1 - threshold));
  } else {
    return mult * (20 * Math.log10(s1 - threshold) - 20 * Math.log10(s2 - threshold));
  }
}
```

and then the sphere:

```javascript
fill(color(CCColor(i % globalParams["rFIXBPERO"])));
samp = (samp + sampX + sampY + sampZ) / 4;
sphere(1 + (samp - threshold) / 7, 7, 7);
```

I was hoping the DFT might produce the same location for different octaves (which research had shown to be unlikely). If some kind of digital band equalization is needed, I need better microphones. I have been mulling buying 5 from the east or making a better batch of input boards at home.

[screenshot: 2017-02-08 15 01 16]

Next I need to get a video feed background!

cnlohr commented 7 years ago

You realllyyyyy need to get video. What do you mean by "background"?

pjmole commented 7 years ago

The final piece of this demonstration is overlaying these blinking dots on a video camera feed showing the 4 sensors in real time. I just found my Kobo (it was hiding under a tissue box for a week). On Android Chrome its "webgl" gets ~50 fps.

There should be less than a dozen lines of code needed to get the video running.

I've decided to buy some sensors from the orient AND design an ultrasound frontend.

There are two classic options, "frequency division with amplitude" and "local oscillator and mixer"; I favor the first option.

cnlohr commented 7 years ago

I was just telling my friends about what you're doing and showing them your pictures! I can't wait for some videos from you. Where do you live? If you're close enough I could check it out?

pjmole commented 7 years ago

We live in the Great White North. If you can receive "102.7 CHOP FM" on your radio, I will come and pick you up.


cnlohr commented 7 years ago

Noooope. Still hopin' for videos!

pjmole commented 7 years ago

Some progress: [screenshot: whooowebgl] Managed to get video into the display from an mjpg_server on the Pi3. Sadly, UBUNTU did an upgrade this weekend and destroyed my HIRES monitor setting.

The screen grab shows the four-DFT virtual display and the lower left sensor in the video. The display selection is 0 for DFT diagonals and 1 for distance estimations; the mult parameter allows expansion and contraction of the 4-DFT point results.

[screenshot: colorchordshot_2017-02-14 19 12 48] A later "shot in the dark" adjusting the layout.

pjmole commented 7 years ago

Charles you should try this out ...

You may need less than five lines of code to add a video underlay to JavaScript projects.

Change fill to use alpha in the "draw" loop:

```javascript
// 1) pull the hex color string, then 2) fill with 50% alpha
var colors = CCColor(i % globalParams["rFIXBPERO"]);
fill(parseInt(colors.substr(1, 2), 16),
     parseInt(colors.substr(3, 2), 16),
     parseInt(colors.substr(5, 2), 16), 127);
```

and add the image in the HTML; any MJPEG video URL source should work! (not sure about cross-$%@*-domain)

3) `<img src='http://192.168.1.180:8080/?action=stream'>`

Actually, I don't think I have used much, if any, 3D stuff yet; I have been looking at stemkoski's excellent Three.js tutorials.

P.S. Latest whoooooo!! shot: [screenshot: colorchordcroudscreenshot 2017-02-19 10-46-07]

Charles, I hope you don't mind me using this as a BLOG! Here is a heeeee! shot: [screenshot: heecloud 2017-02-25_08 35 50] Latest development... I thought full video used too much network traffic, so I changed to grabbing a snapshot after the "draw" loop.

in the "loop": if (samp > (threshold + 20)) noisey++;

after the loop: if (noisey > 3) { document.getElementById('back_image').src = "http://192.168.1.180:8080/?action=snapshot" + new Date().getTime(); noisey = 0; }; WEBGLDataTicker();

pjmole commented 7 years ago

Charles: I have recovered from a major hardware breakdown. It seems I had been overheating my Pi3 and SSD. The SSD went completely dead; I ended up replacing the powered hub, the SSD, and the SSD-USB adapter. Luckily the microSD had a backup of the OS setup.

Well, back running now, but most of my hacking is lost #$&%*@#

Discovered that the Pi overheats while building the toolchain, but since all my changes have been in the JavaScript I can refactor at will. My present interest is in JavaScript and websockets; I haven't changed any of the esp8266 code.

I started with a duplicate of the DFT-as-WEBGL code and added snapshot code to WEBGLDataTicker:

```javascript
if (new Date().getTime() > (last_pic + 500)) {
  document.getElementById('back_image').src =
    "http://192.168.1.180:8080/?action=snapshot" + new Date().getTime();
  last_pic = new Date().getTime();
}
```

Next, to get the transparency and overlay and the 3 extra websockets running! Update, morning Mar 3: wow, got all of the above working with only 2D. [screenshot: colorchordhandup]

A snapshot of changes to the html and js files; I rarely need to do netweb updates and have yet to build a working toolchain, so no netburn yet. This version has a video stream background and does not yet display the extra 3 channels. The Pi3 flickers, but the Kobo with Chrome works great at 30Hz.

colorCvideo.zip

I have recovered most of my code from the few bones I could glean from the remains of the old storage. This occurred only days after I ordered some KEYES-style microphone boards from my favorite oriental supplier.

Sat Mar 4... Pretty well got the old project working now, but the build process is problematic; I was actually blowing over my Pi to keep it running!!! I recalled this morning that I had used platformio at first. Boy! it is easy to use; got the build chain and...

```
Compiling .pioenvs/esp12e/src/ArduCAM_ESP8266_OV2640_Capture.ino.o
...
Archiving .pioenvs/esp12e/lib/libESP8266WebServer.a
Indexing .pioenvs/esp12e/lib/libESP8266WebServer.a
Linking .pioenvs/esp12e/firmware.elf
Building .pioenvs/esp12e/firmware.bin
Calculating size .pioenvs/esp12e/firmware.elf
text    data   bss    dec     hex    filename
247736  12464  31896  292096  47500  .pioenvs/esp12e/firmware.elf
[SUCCESS] Took 32.38 seconds
```

No problem. Looking at building colorchord with pio more to-day...

Sun Mar 12: still cannot get the toolchain built, and I accidentally fried one of my sensors! All of my changes are in the user interface. It would be helpful if precompiled binaries were available, so one could try colorchord without the toolchain build.

There are lots of possibilities at the presentation level with the use of multiple ws: sensors. When I limit the number of samps/draw, one can get good fps rates with 4 ws: connections for lengthy periods without many socket drops. I think my input sensors may flub the DFT sometimes? I am using an old TrendNet WS651 webcam for the background video now.

Has anyone done stuff like output to MIDI players from the notes information? How about a calliope music player bot? Maybe virtual displays of the instruments heard?

Wed Mar 22: Received my 5 microphone modules from the orient a couple of days ago. The mic parts are very good. Since I have five, I am already hacking the first one: the red LEDs annoy me, so they went first, also the comparator. I have often had dreams of doing some work with SMDs; there's lots of room on the board, so I could try to add my transistor option, but I will try to hold off fooling around with the other four, only removing the LEDs. [screenshot: 2017-03-22_12 09 57]

This screen shot shows my messy workshop and a snap of video from my cellphone's IPWebcam.

I have gotten the toolchain to compile once (40 minutes on the Pi3 with fan cooling), but there are library problems in the linkage steps I have yet to resolve...

pjmole commented 7 years ago

Now it is spring. Time to move my microphone cloud outside. Started thinking of an alternative arrangement with all of the receivers on one pole, three facing down and one up. This led the project to needing panels to isolate the 4 signals. So in my research I discovered the PZM, or boundary, microphones. It seems there is a whole class of microphones used for conferencing and studio applications, ranging in price from under $30 to well over $300.

It appears that the tiny microphones I now have are the same as some of these products use. There is some controversy over the exact structure, but one offers 3dB polar gain and the other 6dB gain.

A link: http://www.sengpielaudio.com/TwoDifferentBoundaryMicrophones.pdf

So has anyone played with the amateur construction of such microphones?

Hi Charles, I have ordered another set of microphones with 20dB gain. While awaiting them I have been looking at JavaScript particle examples. It seemed to me that I needed a different timeout method, so I used the requestAnimationFrame() shim by Paul Irish (http://paulirish.com/2011/requestanimationframe-for-smart-animating/) in menu.js,

and then added code to my WEBGL dropdown to produce sound-bubble particles above the DFT display... colorchordPanel.zip colorchordPanel2.zip

I would like people to try out this zip of web/page (index.html, menuinterface.js, and main.js): just unzip into that directory and do make netweb only!

[screenshot: radio_whooo_2017-05-03_20 09 58]

Look for stuff around "threshold" in main.js to adjust what is displayed by the WEBGL dropdown; index.html contains the reference to the background video feed.

May 5... This morning I am using IP Webcam on my KOBO for the video feed. The system will work without video and show just a blank background. The reason for the particles was to add short-term memory to the system. [screenshot: tappingcolorchord_2017-05-05_10 04 11]

This shot shows me tapping my foot while an FM radio is playing. It is using the Adafruit auto gain at 40dB. It seems to have a flatter response, as the low frequencies are more noticeable than with my transistor mics.

pjmole commented 7 years ago

ColorChord: Embedded: Stereo

I have not yet received my new microphones, so I decided to create a stereo version of my code. colorchordStero.zip

[screenshot: colorchordstero_2017-05-20_08 23 44]

I modified esp82xx/common.mf to change the version string to show the date of the esp8266 code and the time of the make invocation, so that I could tell if the web code may be different from the esp8266 code.

The original line:

```makefile
VERSSTR := "Version: $(VERSION) - Build $(shell date -R) with $(OPTS)"
```

became:

```makefile
VERSSTR := "Version: $(VERSION) - Build $(shell date -r ../image.elf) on $(shell date -R) with $(OPTS)"
```

The webpage grab shows the stereo code in action with an unamplified piezo microphone on each. There is a vertical center line of detected signals that are nearly equal, and estimates of the louder signals' offsets on the left and right.

The microphones are about one inch from a portable stereo radio's speakers.
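The left/right offset described here can be sketched as a log-ratio of the two channel amplitudes, in the same spirit as the dcalc2 function earlier in this thread. The function name is hypothetical, and mult/threshold are the thread's tuning parameters with illustrative values:

```javascript
// Sketch: signed horizontal offset from left/right amplitudes.
// Equal levels give 0 (the vertical center line); a louder right
// channel pushes the estimate right, a louder left channel left.
function stereoOffset(left, right, threshold, mult) {
  var l = Math.max(left - threshold, 1);   // clamp to avoid log of <= 0
  var r = Math.max(right - threshold, 1);
  return mult * (20 * Math.log10(r) - 20 * Math.log10(l));
}
```

Using the dB difference rather than the raw amplitude difference keeps the offset scale roughly independent of overall loudness.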

cnlohr commented 7 years ago

Are these on two separate ESPs? how does this work? Start recording videos of what you're doing. Explain, talk, show us what this witchcraft is that is being performed with the aid of cc.

pjmole commented 7 years ago

Yes, Charles, there are two separate colorchords on the same network.

I am getting close to 30fps with this! It tries to resolve volumes with distance. You only need to netburn one of them, which will be the left microphone.

The only changes are to the HTML and JavaScript code, so you place the contents of the zip file into the web/page subdirectory (index.html, menuinterface.js, and main.js).

STEP 1: At the top of main.js are references to the IPs of the other colorchords:

```javascript
var wsUriX = "ws://192.168.1.143/d/ws/issue";
var wsUriY = "ws://192.168.1.140/d/ws/issue";
var wsUriZ = "ws://192.168.1.143/d/ws/issue";
```

This only uses the first one, X, as the right microphone's address. [screenshot: colorcgordstereo_2017-05-20 20 10 05]

[screenshot: colorcordbreadboard_2017-05-20 20 18 05]

STEP 2: Edit web/page/index.html for the video setting; the video reference is at line 58:

```html
<CANVAS style="background-image: url('http://192.168.1.105/VIDEO.CGI');" id=WEBGLCanvas width="640" height="480" opacity="0.5" position="absolute" ...
```

I have mainly used an old netcam, but also Android IP Webcam as a source; you will just get a blank white background if it is not available.

STEP 3: make netburn to the device that will be the left microphone source.

STEP 4: Reload the webpage for the left microphone and open the WEBGL menu.

STEP 5: Look at main.js near Math.sqrt to see my code. I have also made a little adapter so I could hardwire the output of a cellphone directly into the colorchords.

It goes electric when I ..... video2.zip

P.S. I really should make a decent video, but the first one should be the infinity-mirror colorchord InternetTrinket with clock.

pjmole commented 7 years ago

[screenshot: ct-ng_build_2017-06-18_21 46 18] I have finally built another toolchain [screenshot: colorchordnetbuild_2017-06-19_12 26 38] and got the net functions working again.

colorChordBin.zip

```
-rw-rw-r-- 1 pjm pjm 203072 Jun 19 12:22 image.elf-0x40000.bin
-rw-rw-r-- 1 pjm pjm  35568 Jun 19 12:22 image.elf-0x00000.bin
-rwxrwxr-x 1 pjm pjm 325066 Jun 19 12:22 image.elf
-rw-rw-r-- 1 pjm pjm 290086 Jun 19 12:22 output.map
```

cnlohr commented 7 years ago

Just FYI... I got ColorChord down to a reasonable size, and it looks like it'll work with the new SDK and esp82xx and everything.

I'll have to do more testing tonight.

You're really going to have to make pull requests to put some of this insanity in its own branch on GitHub; otherwise it will be difficult for others to make use of it.

pjmole commented 7 years ago

You must have a busy agenda. I still have much to learn about the architecture of your project and the present-day collaborative development environment. I think the changes in menuinterface (re: requestAnimationFrame()) belong in the esp82xx subgit?

I have no real agenda, so I would like to help your efforts in any way I can.

cnlohr commented 7 years ago

Well, it's a funny mix. ColorChord8266 predates esp82xx. But in general the core of the UI and backbone of the system, i.e. what handles the websockets, HTTP, wifi config, etc., all lives in esp82xx.

And yes, things got very very busy when I started to take on libsurvive. I have started to withdraw from other responsibilities like co-chairing magstock, etc. And right now, I'm trying to get USB full-speed running on the ESP32.
