I use an FT232H as a USB<->I2C interface with pyftdi 0.55.0.
My I2C slave is an FPGA with an embedded CPU. It uses clock stretching, so I connected AD0, AD1, AD2 and AD7 as documented.
I added a small resistor in series with SCL so that I can see which device is driving SCL. In the software I use i2c.configure(device, clockstretching=True).
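For reference, this is essentially the whole setup (a minimal sketch; the FTDI URL and the slave address below are placeholders, not my real values):

```python
from pyftdi.i2c import I2cController

i2c = I2cController()
# 'ftdi://ftdi:232h/1' and 0x22 are placeholders for my actual URL and slave address
i2c.configure('ftdi://ftdi:232h/1', frequency=400000, clockstretching=True)
port = i2c.get_port(0x22)
data = port.read(1)   # single-byte read from the FPGA slave
```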
When I do an I2C read, I see the expected signals on the scope (see image). Both the FT232H and the FPGA pull SCL low right after the ACK (lowest electrical level).
The first cursor (X1) marks the moment that the FT232H releases SCL; the second cursor (X2) marks the moment that the FPGA releases SCL. The FT232H correctly waits until the FPGA releases the bus before it continues clocking.
So far so good. The data pattern on the bus is 0x99 (also shown by the scope's I2C decode).
However, the data read by the software is 0x19.
It appears that the FT232H (and/or the driver) uses as the MSB the SDA level of the previous LSB (which is still on the bus until just before the FPGA releases SCL), instead of the actual value while SCL is high.
Just as a test, I extended the SDA setup time from 100 ns (which is what the 400 kHz I2C spec requires) to 2.5 µs, with no effect.
My theory is that the MSB is clocked in on the internal SCL rising edge, i.e. at (X1), instead of at the actual SCL rising edge on the bus, at (X2).
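To illustrate: 0x19 and 0x99 differ only in bit 7, so the result is exactly what you would get if the MSB were sampled from a stale low SDA level (an assumption on my part) while the remaining seven bits are sampled correctly:

```python
expected  = 0x99   # 0b10011001, the pattern actually on the bus
stale_sda = 0      # assumed stale SDA level at the (too early) internal sample point
observed  = (expected & 0x7F) | (stale_sda << 7)
assert observed == 0x19   # the value pyftdi returns
```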
When I reduce the bus clock rate to e.g. 10 kHz, the correct data is read. This, however, is not a usable workaround, as the clock stretch time can vary (depending on the internal CPU load).
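For that test the only change was the configured bus frequency, e.g.:

```python
# Same sketch as above, only with the bus clock lowered; this masks the problem,
# but it is not a real fix because the stretch time depends on the CPU load.
i2c.configure('ftdi://ftdi:232h/1', frequency=10000, clockstretching=True)
```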
Any suggestions? I see no simple workaround, except for doing 100% bit-banging (which is way too slow).
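For completeness, this is roughly what I mean by bit-banging (an untested sketch of just the read path, start/stop/ACK omitted; I'm assuming SCL on AD0 and SDA on AD1 with external pull-ups, and open-drain behaviour emulated by switching the pin direction). Every line transition is a separate USB round-trip, which is why it is far too slow:

```python
from pyftdi.gpio import GpioAsyncController

SCL = 1 << 0    # AD0 (assumption)
SDA = 1 << 1    # AD1 (assumption)

gpio = GpioAsyncController()
gpio.configure('ftdi://ftdi:232h/1', direction=0)   # both lines released (inputs)

def release(pins):
    gpio.set_direction(pins, 0)      # input: the pull-up takes the line high

def drive_low(pins):
    gpio.set_direction(pins, pins)   # output...
    gpio.write(0)                    # ...with the latch low, so the line is pulled low

def read_bit():
    drive_low(SCL)
    release(SDA)                     # let the slave drive SDA
    release(SCL)
    while not gpio.read() & SCL:     # honour clock stretching by the slave
        pass
    bit = 1 if gpio.read() & SDA else 0   # sample SDA while SCL is actually high
    drive_low(SCL)
    return bit

def read_byte():
    return sum(read_bit() << (7 - i) for i in range(8))  # MSB first
```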