elidupree opened this issue 2 years ago
I think this is an I2S clock issue with the prescaler, which is also used for ADC DMA.
In components/hal/adc_hal.c:
//ESP32 ADC uses the DMA through I2S. The I2S needs to be configured.
#define I2S_BASE_CLK (2*APB_CLK_FREQ)
#define SAMPLE_BITS 16
#define ADC_LL_CLKM_DIV_NUM_DEFAULT 2
#define ADC_LL_CLKM_DIV_B_DEFAULT 0
#define ADC_LL_CLKM_DIV_A_DEFAULT 1

/**
 * For esp32s2 and later chips
 * - Set ADC digital controller clock division factor. The clock is divided from `APLL` or `APB` clock.
 *   Expression: controller_clk = APLL/APB * (div_num + div_a / div_b + 1).
 * - Enable clock and select clock source for ADC digital controller.
 * For esp32, use I2S clock
 */
static void adc_hal_digi_sample_freq_config(adc_hal_context_t *hal, uint32_t freq)
{
#if !CONFIG_IDF_TARGET_ESP32
    uint32_t interval = APB_CLK_FREQ / (ADC_LL_CLKM_DIV_NUM_DEFAULT + ADC_LL_CLKM_DIV_A_DEFAULT / ADC_LL_CLKM_DIV_B_DEFAULT + 1) / 2 / freq;
    //set sample interval
    adc_ll_digi_set_trigger_interval(interval);
    //Here we set the clock divider factor to make the digital clock to 5M Hz
    adc_ll_digi_controller_clk_div(ADC_LL_CLKM_DIV_NUM_DEFAULT, ADC_LL_CLKM_DIV_B_DEFAULT, ADC_LL_CLKM_DIV_A_DEFAULT);
    adc_ll_digi_clk_sel(0);    //use APB
#else
    i2s_ll_rx_clk_set_src(hal->dev, I2S_CLK_D2CLK);    /*!< Clock from PLL_D2_CLK(160M)*/
    uint32_t bck = I2S_BASE_CLK / (ADC_LL_CLKM_DIV_NUM_DEFAULT + ADC_LL_CLKM_DIV_B_DEFAULT / ADC_LL_CLKM_DIV_A_DEFAULT) / 2 / freq;
    i2s_ll_mclk_div_t clk = {
        .mclk_div = ADC_LL_CLKM_DIV_NUM_DEFAULT,
        .a = ADC_LL_CLKM_DIV_A_DEFAULT,
        .b = ADC_LL_CLKM_DIV_B_DEFAULT,
    };
    i2s_ll_rx_set_clk(hal->dev, &clk);
    i2s_ll_rx_set_bck_div_num(hal->dev, bck);
#endif
}
If the requested sampling frequency is 10e3, the variable bck = 160e6 / (2+0) / 2 / 10e3 = 4000, but I2S_RX_BCK_DIV_NUM[5:0] is only 6 bits wide, so the maximum is 63.
I tried changing

#define ADC_LL_CLKM_DIV_NUM_DEFAULT 256

and the real sampling frequency slowed down, the buffer no longer overflows constantly, and the channel sequence is correct.
But I don't know the exact clock prescaler the code was designed for; the documentation is not very clear...
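To see the overflow concretely, here is a minimal host-side sketch mirroring the defines quoted above (nothing ESP-specific; I2S_RX_BCK_DIV_NUM_MAX is just my name for the 6-bit limit):

#include <stdio.h>
#include <stdint.h>

#define I2S_BASE_CLK (2 * 80000000)       // 2 * APB_CLK_FREQ
#define ADC_LL_CLKM_DIV_NUM_DEFAULT 2
#define ADC_LL_CLKM_DIV_B_DEFAULT   0
#define ADC_LL_CLKM_DIV_A_DEFAULT   1
#define I2S_RX_BCK_DIV_NUM_MAX      63    // the register field is only 6 bits wide

int main(void)
{
    uint32_t freq = 10000;  // requested sampling frequency
    uint32_t bck = I2S_BASE_CLK
                   / (ADC_LL_CLKM_DIV_NUM_DEFAULT + ADC_LL_CLKM_DIV_B_DEFAULT / ADC_LL_CLKM_DIV_A_DEFAULT)
                   / 2 / freq;
    // Prints "bck = 4000 (max 63)": the value cannot fit in the register.
    printf("bck = %u (max %u)\n", (unsigned)bck, I2S_RX_BCK_DIV_NUM_MAX);
    return 0;
}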
Same problem with ADC over I2S on the ESP32 after the v4.4.1 release! The sample rate is broken. I had to roll back the IDF version. Please help with this error.
I can't make sense of the example at all. Is there some more in-depth documentation on what those struct members actually do?
E.g. the example initializes the ADC with the following two structs:
#define TIMES 256

adc_digi_init_config_t adc_dma_config = {
    .max_store_buf_size = 1024,
    .conv_num_each_intr = TIMES,
    .adc1_chan_mask = adc1_chan_mask,
    .adc2_chan_mask = adc2_chan_mask,
};

adc_digi_configuration_t dig_cfg = {
    .conv_limit_en = ADC_CONV_LIMIT_EN,
    .conv_limit_num = 250,
    .sample_freq_hz = 10 * 1000,
    .conv_mode = ADC_CONV_MODE,
    .format = ADC_OUTPUT_TYPE,
};
What is this configuration supposed to do? Read 250 samples at 10 kHz? Read 250 * 256 (TIMES) samples at 10 kHz? I've tried every IDF version from 4.4 upwards, but changing sample_freq_hz does pretty much nothing...
/edit ok, nvm, there is also a rant about missing docs here https://github.com/espressif/esp-idf/issues/6588
I can confirm the test done by @ssymo84. I changed ADC_LL_CLKM_DIV_NUM_DEFAULT to 256 and got stable DMA readings from 6 channels. Using a signal generator (1 kHz) and an FFT, it looks like a configured 300000 Hz actually comes out to 26100 Hz. My settings are the following:
max_store_buf_size = 4096;
conv_num_each_intr = 2048;
conv_limit_num = 250;
sample_freq_hz = 300000;
According to the API documentation the maximum sample_freq_hz is 83333 Hz. (Not that it would change anything; in my opinion that setting is still broken.)
I did spend some time looking into this issue recently and I found out how to get it working right.
According to the reference manual for the ESP32, this is how the sampling frequency is set up: we have the variables N, a, b and M that control the sampling frequency. The register description does not take into account the number of channels being sampled or the bits in each sample. All these values are set in two registers: N, a and b in I2S_CLKM_CONF_REG, and M (I2S_RX_BCK_DIV_NUM) in I2S_SAMPLE_RATE_CONF_REG.
The formula for the sampling frequency is the following:

f_sample = PLL_D2_CLK / ((N + b/a) * M * channels * bits / 8)

The number ranges are N: 2..255, M: 2..63, a: 1..63 and b: 0..63.
In my test, ch = 6 and bits = 16 (or 12, but in adc_hal.h SAMPLE_BITS is 16, which goes into I2S_SAMPLE_RATE_CONF_REG as I2S_RX_BITS_MOD; this is set in i2s_ll.h line 607). What is left is N, a, b and M. That means four dimensions for finding the right sampling frequency. After discussing this with my friend at work I ended up with the following program to calculate the sampling frequency:
#include <stdio.h>
#include <stdlib.h>

int main() {
    // The sampling freq you want to use
    float samplingFreq = 10240.0;
    // The maximum offset from samplingFreq
    float deltaf = 0.000489;
    float PLL_D2_CLK = 160000000.0;
    float channels = 6;
    float bits = 16;

    for (int M = 2; M < 64; M++) {
        for (int n = 2; n < 256; n++) {
            for (int a = 1; a < 64; a++) {
                for (int b = 0; b < 64; b++) {
                    float x = PLL_D2_CLK / (((float)n + (float)b / a) * M * channels * bits / 8);
                    if (samplingFreq > x - deltaf && samplingFreq < x + deltaf)
                        printf("%f N:%3d M:%3d a:%2d b:%2d\n", x, n, M, a, b);
                }
            }
        }
    }
    return 0;
}
Running the program (https://cplayground.com/) gives the following result:
10240.000000 N: 86 M: 15 a:36 b:29
Not all frequencies fit into this register setup, so you might get several candidate values for each sampling frequency, depending on the deltaf that limits the offset. Running 20480 Hz as the sampling frequency gives the following:
20480.001953 N:128 M: 5 a:24 b:53
20480.001953 N:129 M: 5 a:24 b:29
20480.001953 N:129 M: 5 a:48 b:58
20480.001953 N:130 M: 5 a:24 b: 5
20480.001953 N:130 M: 5 a:48 b:10
20480.001953 N: 64 M: 10 a:48 b:53
20480.001953 N: 65 M: 10 a:48 b: 5
20480.001953 N: 24 M: 25 a:24 b:49
20480.001953 N: 25 M: 25 a:24 b:25
20480.001953 N: 25 M: 25 a:48 b:50
20480.001953 N: 26 M: 25 a:24 b: 1
20480.001953 N: 26 M: 25 a:48 b: 2
20480.001953 N: 12 M: 50 a:48 b:49
20480.001953 N: 13 M: 50 a:48 b: 1
I tested this a bit and here are some results (1 kHz signal as input):
- 10240 Hz
- 20480 Hz: N: 65 M: 10 a:48 b: 5
- 23456.0 Hz: N: 94 M: 6 a:50 b:37
- 80000 Hz (5 kHz signal as input): N: 3 M: 42 a:63 b:61. NOTE: There were quite a few numbers to select from and not all worked.
- 409.60453125 kHz: N: 8 M: 4 a:58 b: 8
The changes I made were all in adc_hal.c. I defined N, M, a and b in my application and then referenced them with extern:
.
.
.
//ESP32 ADC uses the DMA through I2S. The I2S needs to be configured.
#define I2S_BASE_CLK (2*APB_CLK_FREQ)
#define SAMPLE_BITS 16
//#define ADC_LL_CLKM_DIV_NUM_DEFAULT 255 // [ 7: 0] = 8 bits, 2..255
//#define ADC_LL_CLKM_DIV_B_DEFAULT   0   // [13: 8] = 6 bits, 0..63
//#define ADC_LL_CLKM_DIV_A_DEFAULT   1   // [19:14] = 6 bits, 1..63
//#define I2S_RX_BCK_DIV              63  // [ 5: 0] = 6 bits, 2..63

extern uint32_t ADCI2S_CLKM_DIV_NUM;
extern uint32_t ADCI2S_RX_BCK_DIV;
extern uint32_t ADCI2S_CLKM_DIV_A;
extern uint32_t ADCI2S_CLKM_DIV_B;
.
.
.
static void adc_hal_digi_sample_freq_config(adc_hal_context_t *hal, uint32_t freq)
{
#if !CONFIG_IDF_TARGET_ESP32
    uint32_t interval = APB_CLK_FREQ / (ADC_LL_CLKM_DIV_NUM_DEFAULT + ADC_LL_CLKM_DIV_A_DEFAULT / ADC_LL_CLKM_DIV_B_DEFAULT + 1) / 2 / freq;
    //set sample interval
    adc_ll_digi_set_trigger_interval(interval);
    //Here we set the clock divider factor to make the digital clock to 5M Hz
    adc_ll_digi_controller_clk_div(ADC_LL_CLKM_DIV_NUM_DEFAULT, ADC_LL_CLKM_DIV_B_DEFAULT, ADC_LL_CLKM_DIV_A_DEFAULT);
    adc_ll_digi_clk_sel(0);    //use APB
#else
    i2s_ll_rx_clk_set_src(hal->dev, I2S_CLK_D2CLK);    /*!< Clock from PLL_D2_CLK(160M)*/
    //uint32_t bck = I2S_BASE_CLK / (ADC_LL_CLKM_DIV_NUM_DEFAULT + ADC_LL_CLKM_DIV_B_DEFAULT / ADC_LL_CLKM_DIV_A_DEFAULT) / 2 / freq;
    i2s_ll_mclk_div_t clk = {
        .mclk_div = ADCI2S_CLKM_DIV_NUM,
        .a = ADCI2S_CLKM_DIV_A,
        .b = ADCI2S_CLKM_DIV_B
    };
    i2s_ll_rx_set_clk(hal->dev, &clk);
    // i2s_ll_rx_set_bck_div_num(hal->dev, bck);
    i2s_ll_rx_set_bck_div_num(hal->dev, ADCI2S_RX_BCK_DIV);
#endif
}
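For completeness, here is roughly what the application side of this hack looks like (a minimal sketch; the variable names are the ones declared extern above, and the values are the 10240 Hz solution found by the search program):

#include <stdint.h>

// Definitions picked up by the modified adc_hal.c via extern.
// Values: the 10240 Hz solution from the search program above.
uint32_t ADCI2S_CLKM_DIV_NUM = 86;  // N
uint32_t ADCI2S_RX_BCK_DIV   = 15;  // M
uint32_t ADCI2S_CLKM_DIV_A   = 36;  // a
uint32_t ADCI2S_CLKM_DIV_B   = 29;  // b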
Overall, I was able to get spot-on sampling frequencies.
I was also able to go much lower in sampling frequency than the documented minimum of 829.9616 Hz. The sampling frequency given in the adc_digi_configuration_t structure before calling adc_digi_controller_configure() doesn't seem to serve any purpose other than the min/max check. The conv_limit_num variable inside adc_digi_configuration_t seems to cause duplicate values when it holds a high value and the sampling frequency is high.
- PLL_D2_CLK is a constant 160000000

PLL_D2_CLK = PLL_CLK / 2? Is the description in the I2S documentation wrong? I find that unlikely.
In the original code inside adc_hal.c (around line 101) there is the definition of I2S_BASE_CLK:

#define I2S_BASE_CLK (2*APB_CLK_FREQ)

Inside soc.h, lines 221 and 222, APB_CLK_FREQ is defined, with an interesting comment that it might be incorrect:

221 #define CPU_CLK_FREQ APB_CLK_FREQ //this may be incorrect, please refer to ESP32_DEFAULT_CPU_FREQ_MHZ
222 #define APB_CLK_FREQ ( 80*1000000 ) //unit: Hz

This means that I2S_BASE_CLK is set to 160 MHz, and that's how I ended up with this number. However, I have set the CPU frequency to 240 MHz in my SDK configuration, so PLL_D2_CLK should actually be 240 MHz, but that number does not give me the right results; in practice the clock feeding the I2S seems to stay at 160 MHz regardless of the CPU frequency.
Here is a more advanced version of the program to calculate the register values:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main() {
    // The sampling freq you want to use
    float sf = 5120.0;
    // Best register values found so far
    int rM = 0, rN = 0, ra = 1, rb = 0;
    float rx = 0.0;
    //---
    float PLL_D2_CLK = 160000000.0;
    float channels = 6;
    float bits = 16;
    float deltaf = 0.5;
    for (int M = 2; M < 63; M++) {
        for (int n = 2; n < 256; n++) {
            for (int a = 1; a < 64; a++) {
                for (int b = 0; b < 64; b++) {
                    float x = PLL_D2_CLK / (((float)n + (float)b / a) * M * channels * bits / 8);
                    if (x == sf) {
                        printf("Exact: %f N:%3d M:%3d a:%2d b:%2d\n", x, n, M, a, b);
                        rM = M; rN = n; ra = a; rb = b; rx = x;
                        goto end;
                    }
                    float offset = fabsf(x - sf);
                    if (offset <= deltaf) {
                        // Narrow deltaf so later hits must be strictly closer
                        deltaf = offset;
                        printf("Estimate: %f N:%3d M:%3d a:%2d b:%2d\n", x, n, M, a, b);
                        rM = M; rN = n; ra = a; rb = b; rx = x;
                    }
                }
            }
        }
    }
end:
    printf("Best match: %fHz N:%3d M:%3d a:%2d b:%2d\n", rx, rN, rM, ra, rb);
    //---
    return 0;
}
Thanks @haukurhafsteins for the work. Now I found a commit in v4.4: https://github.com/espressif/esp-idf/commit/cb62457f6dcb3077b38a74cfaffe19f5f23e4a00
They increased the minimum sampling rate to 20 kHz and used a function to calculate the best frequency divider coefficient.
So now the minimum sampling rate is 20 kHz? So if I want to sample a 10 Hz signal, I won't be able to collect a full cycle.
To collect one cycle (or period) of a 10 Hz signal you need 20000 Hz / 10 Hz = 2000 samples.
You can also look at it like this: at 20 kHz, those 2000 samples span 2000 / 20000 s = 100 ms, which is exactly one full period of a 10 Hz signal.
So now the minimum sampling rate is 20 kHz? So if I want to sample a 10 Hz signal, I won't be able to collect a full cycle.
This is only true for the ESP32, for all other devices it's 611Hz.
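For reference, these limits come from each target's soc_caps.h; as far as I can tell it is something like the following for the ESP32 (names and values quoted from memory, so verify against your IDF version):

// ESP32 soc_caps.h (from memory): the continuous driver validates
// sample_freq_hz against these bounds; other targets define a much
// lower SOC_ADC_SAMPLE_FREQ_THRES_LOW (611 Hz).
#define SOC_ADC_SAMPLE_FREQ_THRES_HIGH  (2 * 1000 * 1000)
#define SOC_ADC_SAMPLE_FREQ_THRES_LOW   (20 * 1000)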
Can anyone explain to me how SOC_ADC_DIGI_DATA_BYTES_PER_CONV and SOC_ADC_DIGI_RESULT_BYTES are related? Apparently the ESP32 can only do conversions 4 bytes at a time. It almost looks as if each channel is converted twice, since most of the log output of the "continuous_read" example shows each channel I configure duplicated. However, that's not always the case... sometimes the results only contain a channel once... what exactly is going on here?
/edit Here's an example of what the output for 3 configured channels looks like:
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 329
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 339
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 113
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 340
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 341
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 112
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 336
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 112
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 113
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 334
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
/edit2
I also highly doubt the documented behavior of the adc_continuous_read return values. According to the docs, a return of ESP_ERR_INVALID_STATE means:

Driver state is invalid. Usually it means the ADC sampling rate is faster than the task processing rate.

I've currently configured 3 ADC channels measured at 20 kHz. max_store_buf_size and conv_frame_size are set to hold exactly a single frame (12 bytes). My task reads those 12 bytes with adc_continuous_read and performs a 100 ms delay afterwards while keeping the ADC running. Yet a subsequent call to the function never returns ESP_ERR_INVALID_STATE. Or are the docs referring to the "internal" processing task?
Can anyone explain to me how SOC_ADC_DIGI_DATA_BYTES_PER_CONV and SOC_ADC_DIGI_RESULT_BYTES are related?
I'm trying to search the answer to this and the only result I found so far is this post asking this very question. If anyone understands it, please spend half a minute to enlighten us.
It's actually exactly what the name says it is.
SOC_ADC_DIGI_DATA_BYTES_PER_CONV defines how many bytes a single conversion takes up and SOC_ADC_DIGI_RESULT_BYTES how many bytes the result uses. Those two defines are vastly different across all chip types. I assume because of DMA/I2S constraints, but who knows...
In the end it doesn't really matter. The only thing to look out for is that the buffer you pass to the ADC is in multiples of SOC_ADC_DIGI_DATA_BYTES_PER_CONV.
Thanks.
The only thing to look out for is that the buffer you pass to the ADC is in multiples of SOC_ADC_DIGI_DATA_BYTES_PER_CONV.
Apparently even that is not enough though.
Okay, I think I finally got it. The whole reason behind SOC_ADC_DIGI_DATA_BYTES_PER_CONV's existence is a quote in the TRM (p. 121) about the DMA moving data in word-sized units... and that's 4 bytes. That's it.

SOC_ADC_DIGI_DATA_BYTES_PER_CONV defines how many bytes a single conversion takes up

Apparently not. A single conversion still uses the same 2 bytes. The adc_digi_output_data_t that we get either in the ISR or from adc_continuous_read is the raw data straight from the DMA, not the result of some additional conversion by ESP-IDF turning SOC_ADC_DIGI_DATA_BYTES_PER_CONV bytes of a conversion into SOC_ADC_DIGI_RESULT_BYTES of a result.
To sum up, SOC_ADC_DIGI_DATA_BYTES_PER_CONV has nothing whatsoever to do with the ADC subsystem itself; it is purely a DMA requirement.
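To illustrate, here is a minimal sketch of parsing a read buffer (written against the v5.x continuous-mode API with the ESP32's TYPE1 output format; adapt the includes and format to your IDF version):

#include <stdio.h>
#include <stdint.h>
#include "soc/soc_caps.h"            // SOC_ADC_DIGI_RESULT_BYTES
#include "esp_adc/adc_continuous.h"  // adc_digi_output_data_t

// Walk a buffer filled by adc_continuous_read() in steps of
// SOC_ADC_DIGI_RESULT_BYTES: every entry is one 2-byte conversion result,
// regardless of SOC_ADC_DIGI_DATA_BYTES_PER_CONV.
static void parse_results(const uint8_t *buf, uint32_t len)
{
    for (uint32_t i = 0; i + SOC_ADC_DIGI_RESULT_BYTES <= len; i += SOC_ADC_DIGI_RESULT_BYTES) {
        const adc_digi_output_data_t *p = (const adc_digi_output_data_t *)&buf[i];
        printf("Channel: %d, Value: %d\n", p->type1.channel, p->type1.data);
    }
}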
Yet a subsequent call to the function never returns ESP_ERR_INVALID_STATE. Or are the docs referring to the "internal" processing task?
Reading the IDF source code suggests that this is a documentation bug, probably a leftover from the days of the (now obsolete) adc_digi_read_bytes.
As of now, the only way adc_continuous_read can return ESP_ERR_INVALID_STATE (aside from a null handle) is if the finite state machine is not ADC_FSM_STARTED (read: adc_continuous_start has not been called, or adc_continuous_stop has already been called).
That could be the case if the ISR (s_adc_dma_intr) stopped the FSM on insufficient space in the ring buffer, but it does not; the only thing it does is call on_pool_ovf (in case it was registered by the user).
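In other words, in a read loop like the one described above (a minimal sketch; handle is an already-started adc_continuous_handle_t, and the 12-byte frame size matches the setup quoted earlier):

#include "esp_adc/adc_continuous.h"

static void read_one_frame(adc_continuous_handle_t handle)
{
    uint8_t buf[12];   // exactly one frame, as configured above
    uint32_t got = 0;
    esp_err_t err = adc_continuous_read(handle, buf, sizeof(buf), &got, 100 /* timeout, ms */);
    if (err == ESP_ERR_INVALID_STATE) {
        // Only reachable when adc_continuous_start() was never called or
        // adc_continuous_stop() was already called; not when the task
        // falls behind the sampling rate.
    }
}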
Environment
Problem Description
I'm trying to do real-time sampling of 4 input signals using the onboard ADCs; as best I can tell, the ADC DMA mode should be a good way to do this, despite the lack of documentation.
So I copied the dma_read example and tweaked it to fit my use case. But as soon as I removed the time-consuming print statements, it generated far more samples than it was supposed to.
(Could this be related to #6691? I don't know the code well enough to guess...)
Expected Behavior
Since I've reduced the config values to sample_freq_hz = 2000 and conv_num_each_intr = 8, I would expect to be getting 16000 bytes per second (2000 Hz * 4 channels * 2 bytes per sample).

Actual Behavior
I'm getting around 865000 bytes per second.
Steps to reproduce
Here's my code (a slight modification of the dma_read example code - I've tweaked the parameters, removed the excess prints, infrequently logged the average number of bytes read, and stripped out the non-ESP32-specific code) https://gist.github.com/elidupree/cfdbff1b909eb8654f8b5ca38642e726
I simply put that code in place of the dma_read main file, then build and flash over USB to an unmodified WeMos D1 Mini ESP32; no extra wiring is needed.
Debug Logs