Closed by kubark42 8 months ago
Hi. I'm glad you like it :) Basically the bottleneck is the sampling frequency. The ESP32 cannot take more than one analog sample every ~50 us. Thus with 1000 samples you can achieve a horizontal frequency of at most 20 Hz, or 10 Hz if two signals are being measured. Digital sampling fares better. If you manage to overcome this difficulty somehow, other bottlenecks may arise.
If you want to dig into it in more detail, here are some thoughts on where to start.
oscilloscope.h: increase the oscSample samples [128]; array and find all the checks that prevent filling it beyond 128 samples.
httpServer.hpp: make sure the samples array does not exceed HTTP_WS_FRAME_MAX_SIZE, which is #defined to 1500 bytes; increase it if necessary. This could slow down network communication a bit, since a frame would probably exceed the MTU (which is usually 1500 bytes). If you increase HTTP_WS_FRAME_MAX_SIZE, it is very likely you will also have to increase the HTTP_CONNECTION_STACK_SIZE #define.
oscilloscope.html: see how startCommand is constructed before it is sent to the ESP32 server. You can define the sampling frequency there and how many samples fit on one screen (like startCommand += "50 us screen width = 100 us").
Hope this helps.
Thanks, that makes sense. I think there are some nice synergies with #21. For instance,
// oscilloscope samples
struct oscSample { // one sample
int16_t signal1; // signal value of 1st GPIO read by analogRead or digitalRead
int16_t signal2; // signal value of 2nd GPIO if requested
int16_t deltaTime; // sample time - offset from previous sample in ms or us
}; // = 6 bytes per sample
The deltaTime field wouldn't be needed on a per-sample basis, since timing would be determined by the ADC clock instead of by reactive timestamping. Furthermore, signal1 and signal2 are at most 12 bits each, so the two signals can be collapsed into one 3-byte value. The net result is a 2x bandwidth multiplier without any extra code to support spreading messages across multiple frames.
Continuous sampling would also support a 4x higher sampling rate than has been attainable with adc1_get_raw(), so that adds a little more value to showing more points.
For now I kept the limitation that one screen frame of samples should fit into one network block (MTU) of 1500 bytes, minus the 8 bytes needed for the WebSocket header. That leaves room for up to 746 16-bit samples. The first one is used as a "dummy" sample that marks the beginning of the screen, which leaves 745, but since the number of samples read with the i2s_read function must be even, we end up with 744. Occasionally some errors occur in the first 8 samples of the buffer at the first i2s_read call after initialization, so they also need to be cut off. This leaves 736 samples.
There is still room for improvement, as you suggested: pack only the useful 12 bits of each sample into the buffer.
Solved.
I love this project, it's absolutely awesome. One thing I find myself instantly missing is the ability to see the deep resolution I get with ~1000 points per image. What is the current bottleneck on points per update, and what would it take to go past that?