pjmole opened this issue 7 years ago
I appreciate the kind words.
I don't understand: "need to get into infra mode"?
You can put the MPFS at the 1MB boundary and get many MB of space for webpages, scripts, etc. Though I recommend doing websockets with a minimal API, like the one I used for https://cnlohr.github.io/voxeltastic/
Would you want to export colorchord data via UDP to another computer?
..."need to get into infra mode" It seems the AP mode is "problematic" it's not untill the unit is in station(infra) mode be for I get long stability with my esp8266 projects.
I came late to the esp8266 world and only have some version of the esp8266-12. Generally I remember to set the 1MB MPFS thing. Once I crammed nearly 1MB onto the esp8266 because of a makefile *.js glob... mucho time to reflash the chip, so I decided to host the larger .js files on my Pi to shorten the development cycle.
The voxeltastic project I have seen but not actually tried to use. I should build the detector, though.
My project is a naive hack of your work: to produce real-time visualization of sounds in virtual space, grab 4 DFTs from colorchords arranged in orthogonal positions, run through the sample lists, and place balloons at estimates of where the notes came from in that virtual space.
Right now I get 7-14 fps with one colorchord.
I hope to finish the hack soon; thinking of adding more ws: one for each colorchord, and sending out 4 ws: requests in the WEBGL loop?
So, there is a big thing I'm trying to get to happen and that's get ESP_NONOS_SDK to work with ColorChord. Espressif has helped some, but more work needs to be done to shrink the IRAM footprint on it. The older SDKs do have known stability issues, so that is entirely possible. (Will comment more soon)
I now understand some more. That is very cool! I still think it would be very cool to have the ESPs able to stream some sort of summary information about sound going on at the moment to other devices on the network. KEEP GOING!
I am using ESP_GCC_VERS = 4.8.2, and am very interested in ESP_NONOS_SDK with ColorChord. I tried to use websockets to get the extra 3 DFTs for my display, but my approach that way was problematic. Can one multiplex them?
Rethinking the possibilities, I only need to get the DFTs from the others; perhaps simple web requests to a custom command?
As for placement of the receivers in the "test booth": 3 on the ceiling and one on the floor. A simple line intersection should keep all bubbles in the virtual display.
The one-transistor receivers are sensitive in the low frequency range, so I should set Fuzzed as the default. But that is an area I'm interested in; in our old frame house one can hear many bumps in the night.
If you are mostly interested in the lower frequencies, you should be able to filter the signal more heavily (Tweaking the IIR values) and it should be MUCH cleaner at lower frequencies and less responsive at high ones. There is a balance. Those values are currently set for real time music :-D
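For intuition, here's a toy single-pole IIR in JavaScript; the real filter lives in the embedded C, and these alpha values are made up for illustration:

```js
// Toy single-pole IIR low-pass, just to show the trade-off.
// alpha near 1.0 = light filtering (responsive, good for real-time music);
// alpha near 0.0 = heavy filtering (cleaner lows, sluggish highs).
function makeIIR(alpha) {
  var state = 0;
  return function (sample) {
    state = alpha * sample + (1 - alpha) * state;
    return state;
  };
}

var musicFilter  = makeIIR(0.8);  // responsive, tuned for music
var rumbleFilter = makeIIR(0.05); // heavy smoothing for bumps in the night
```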
I don't understand what you are referring to with multiplexing websockets. But websockets are much more lightweight than full-on HTTP requests, though with the current system, every websocket command is also exposed through a regular HTTP interface anyway, at (I THINK) /d/issue.
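So on your end that could be as simple as something like the sketch below; the exact query format and the 'CB' command string here are guesses, so check the web UI source for the real ones:

```js
// Sketch of polling a station over plain HTTP instead of a websocket.
// '/d/issue' is the endpoint mentioned above (the "I THINK" applies here
// too), and 'CB' is a placeholder command string.
function pollStation(host, command) {
  return fetch("http://" + host + "/d/issue?" + command)
    .then(function (resp) { return resp.text(); });
}

pollStation("192.168.0.201", "CB").then(function (dft) {
  console.log("station X DFT:", dft);
});
```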
Charles
Thanks for your suggestions. Having a great time making/messing with your code.
Multiplexing websockets: I added three websockets for the X, Y, and Z sensors, and want to grab the DFT sensor data from the others for the display loop in the base sensor.
Your websocket code has a queue mechanism that I don't want to duplicate. The extra sockets I'm trying to simplify have only one command type sent, and only one type of received message: the last DFT data from the stations.
I have the router assign static address 192.168.0.20X to the four stations.
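A minimal sketch of what I mean; the socket path and the 'CB' command string are guesses I still need to confirm against the stock web UI source:

```js
// Bare-bones wrapper for the one-command-in, one-reply-out sockets,
// skipping the general queue in the stock websocket code.
// ASSUMPTIONS: path '/d/ws/issue' and command 'CB' are placeholders.
function makeDftSocket(host, onDft) {
  var ws = new WebSocket("ws://" + host + "/d/ws/issue");
  ws.onmessage = function (ev) { onDft(ev.data); }; // only reply: latest DFT
  return {
    request: function () {
      if (ws.readyState === WebSocket.OPEN) ws.send("CB");
    }
  };
}

var stationX = makeDftSocket("192.168.0.201", function (d) { sampX = d; });
var stationY = makeDftSocket("192.168.0.202", function (d) { sampY = d; });
var stationZ = makeDftSocket("192.168.0.203", function (d) { sampZ = d; });
```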
So here is the proposed scenario of the smallest cloud of 4 stations: station 0 (base), high upper left; station 1 (X), high upper right; station 2 (Y), on the ground below station 0; station 3 (Z), furthest away ahead, the most sensitive unit.
In the decimation loop: if we have the Z sample, and if all 4 samples for each station are above some threshold, do 2D estimations for X and Y, finalize with a 3D estimation for Z, then place colored bubbles in the virtual view.
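Something like this sketch, where estimate2D and drawColoredBubble are hypothetical helpers I still have to write, and samp/sampX/sampY/sampZ are the matching bin amplitudes from each station:

```js
// Sketch of the per-bin decimation step described above.
function placeBubbleForBin(i, samp, sampX, sampY, sampZ) {
  // Only act when every station hears this bin above the floor.
  if (samp > threshold && sampX > threshold &&
      sampY > threshold && sampZ > threshold) {
    var x = estimate2D(samp, sampX); // left/right from base vs X
    var y = estimate2D(samp, sampY); // up/down from base vs Y
    var z = estimate2D(samp, sampZ); // depth from base vs Z
    drawColoredBubble(i, x, y, z);   // colored sphere in the virtual view
  }
}
```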
I've been musing about a better estimation method than the simplest one. As there are only 8 bits in the DFT data, how does that relate to dB? How does that affect our sound/distance headroom expectations?
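Rough back-of-envelope on my own question, assuming those 8 bits are linear amplitude (if they are power instead, it would be 10*log10, so only about 24 dB):

```js
// What 8-bit DFT amplitudes mean in dB, taking the smallest
// nonzero bin value (1 LSB) as the reference.
function binToDb(v) {
  return v > 0 ? 20 * Math.log10(v) : -Infinity; // dB re: 1 LSB
}
console.log(binToDb(255));            // ~48 dB of total headroom
console.log(binToDb(2) - binToDb(1)); // +6 dB per amplitude doubling
```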
This is a learning exercise. I just found the OLPC Acoustic Tape Measure http://wiki.laptop.org/go/Acoustic_Tape_Measure; some good thoughts, but dated stuff?? Perry
Good news: I am grabbing all 3 extra DFTs now (Jan 20); now I need to figure out distance estimations.
Hi Charles, Update on progress. Finally satisfied with location of WEBGL canvas.
Here is a shot of the sensors in action. There are 4 DFTs displayed, based on the loudest of each sample. The DFTs are shown with the lowest tones to the left and higher tones below and to the right.
This is a fairly noisy environment: furnace noise, space heater noise, and a radio playing in the lower right background.
Now trying to find a method of normalizing samples to estimate position/distance.
But my mind wanders... a signal generator might be a good project to add to this esp8266 engine, just to make noises for the other sensors.
Why close the issue?
Sorry about closing. I did not mean to do so. My skills are really old. I still rely on linux vi for most of my work. Some days I consider looking for an IDE.
Nah. I lived the IDE life. It's glamorous, but in the grand scheme of things it will only drag you down. I am glad to be free.
My first hoooo test... in a quiet environment. The red and purple are me saying whooooo.
First, the distance calc for the lower right:

```js
translate(0, 0, -215);
translate(-dcalc2(samp, sampX), -dcalc2(samp, sampY), dcalc2(samp, sampZ));

function dcalc2(s1, s2) {
  if (s1 < s2) {
    return mult * (20 * Math.log10(s2 - threshold) - 20 * Math.log10(s1 - threshold));
  } else {
    return mult * (20 * Math.log10(s1 - threshold) - 20 * Math.log10(s2 - threshold));
  }
}
```

and then the sphere:

```js
fill(color(CCColor(i % globalParams["rFIXBPERO"])));
samp = (samp + sampX + sampY + sampZ) / 4;
sphere(1 + (samp - threshold) / 7, 7, 7);
```
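The idea behind dcalc2, assuming a rough free-field model: sound level falls about 6 dB per doubling of distance, so the difference of two stations' levels in dB tracks the log of their distance ratio (r2/r1 ~ 10^(deltaDb/20)); mult just scales that into scene units.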
I was hoping the DFT might produce the same location for different octaves (which research had shown to be unlikely). If some kind of digital band equalization is needed, I need better microphones. I have been mulling buying 5 from the east or making a better batch of input boards at home.
Next I need to get the video feed background!
You realllyyyyy need to get video. What do you mean by "background"?
The final piece of this demonstration is overlaying these blinking dots on a video camera feed showing the 4 sensors in real time. I just found my Kobo (it was hiding under a tissue box for a week). On Android Chrome this "webgl" gets ~50 fps.
There should be less than a dozen lines of code needed to get the video running.
I've decided to buy some sensors from the orient AND design an ultrasound frontend.
There are two classic options, "frequency division with amplitude" and "local oscillator and mixer"; I favor the first option.
I was just telling my friends about what you're doing and showing them your pictures! I can't wait for some videos from you. Where do you live? If you're close enough I could check it out!
We live in the Great White North. If you can receive "102.7 CHOP FM" on your radio, I will come and pick you up.
Noooope. Still hopin' for videos!
Some progress: managed to get video in the display from an mjpg_server on the Pi3. Sadly, Ubuntu did an upgrade this weekend and destroyed my hi-res monitor setting.
The screen grab shows the four-DFT virtual display and the lower left sensor in the video. Display selection is 0 for DFT diagonals and 1 for distance estimations; the mult parameter allows expansion and contraction of the 4-DFT point results.
later "shot in the dark" adjusting layout.
Charles, you should try this out...
You may need less than five lines of code to add a video underlay to JavaScript projects.
Change fill to use alpha in the "draw" loop:

```js
var colors = CCColor(i % globalParams["rFIXBPERO"]);
fill(parseInt(colors.substr(1, 2), 16),
     parseInt(colors.substr(3, 2), 16),
     parseInt(colors.substr(5, 2), 16),
     127);
```

and add the image in the HTML; any MJPEG video URL source should work! (not sure about cross-$%@*-domain)

```html
<img src='http://192.168.1.180:8080/?action=stream'>
```
Charles, to me this embedded colorchord is a wonderful engine for so many things. A prolific author could make an entire book using this device as the starting point for so many studies; my imagination is overwhelmed.
You probably went to a lot of work to get all the code and web pages on the processor. I have built this bleeding hack above using a clone of your DFT display, changed to use P5.js WEBGL. I know nothing of P5.js other than some research on the net for popular JavaScript tools; it seemed to have excellent documentation and examples.
I have learned the SOFTAP mode is problematic, so you seem to need to get into infra mode for any semblance of stability.
This opens up the embedded version to anyone's favorite JavaScript tools.
I already see my lack of experience; I need to understand how to get the P5 canvas better placed.
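From the P5.js docs, createCanvas returns an element that can be positioned, so placement might be as simple as this sketch (the size and offsets are made up):

```js
// Minimal p5.js WEBGL setup with explicit canvas placement.
function setup() {
  var cnv = createCanvas(640, 480, WEBGL);
  cnv.position(20, 60);      // absolute position on the page
  cnv.style("z-index", "1"); // handy if layering over other page elements
}

function draw() {
  clear(); // transparent background so an underlay can show through
  // ...draw the DFT spheres here...
}
```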
The first screen grab shows a P5.js DFT loop.
The second screen grab shows the furnace sound in the next room at the low frequencies and a radio in the same room at the upper frequencies.
My plan is to collect DFTs from 4 embedded colorchords. One would be considered the master; the rest could be regular models.