OSVVM / UART

OSVVM UART Verification Components: a UART transmitter with error injection for parity, stop, and break errors, and a UART receiver verification component with error handling for parity, stop, and break errors.

Start bit is too short #8

Open Paebbels opened 5 months ago

Paebbels commented 5 months ago

When checking the output waveform of OSVVM's UART TX model, it was noticed that the start bit is too short (half the bit time).

The transmitted sequence is 0x45 = 0b01000101, so s10100010S (s = start bit, S = stop bit, LSB first), no parity is used, and the baud rate is 115,200 Bd.

[waveform screenshot]

JimLewis commented 5 months ago

Do you have a reproducer? I just reran the test case TbUart_SendGet1.vhd and the start bit looks good.

Paebbels commented 5 months ago

I'll drive home, run the OSVVM regression tests, and check it there too.

Paebbels commented 5 months ago

I discovered more problems with the UART TX model:

Here is a different example of the TX model not working correctly. A transaction is sent at 80 ns. The transaction data is then forwarded at 4 µs and generates the start bit. At 12.680 µs the start bit finishes. Thus an artificial delay of ≈4 µs is introduced.

Similar wrong behavior is expected when executing these commands:

  Send(TxUart, x"45");
  wait for 1 us;
  Send(TxUart, x"9D");

The send operation of 0x45 has already waited out the stop-bit time, so a new start bit could begin immediately. By waiting a short period, the transaction controller becomes unaligned to the internal TX clock, which waits for a new rising edge. So an additional full stop-bit time is added instead of the intended 1 µs gap.
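To make the expected vs. observed timing concrete, here is a rough timing sketch at 115,200 Bd (hypothetical constants for illustration only, not OSVVM code):

  -- Hypothetical timing sketch; these constants are not OSVVM code.
  constant BAUD_PERIOD : time := 1 sec / 115_200 ;   -- ~8.68 us per bit
  constant FRAME_TIME  : time := 10 * BAUD_PERIOD ;  -- start + 8 data + stop

  -- Expected: the second start bit begins FRAME_TIME + 1 us after the
  -- first start bit.
  -- Observed: the TX process realigns to its internal 1x clock, so the
  -- idle gap grows to a full BAUD_PERIOD (~8.68 us) instead of 1 us.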

[waveform screenshot]
Paebbels commented 5 months ago

When checking TbUart with test case SendGet1 I found this:

Further findings in the UART VCs:

Paebbels commented 5 months ago

This is a reproducer for existing tests in OSVVM. Use TbUart and SendGet1.

Modification to the harness:

  ------------------------------------------------------------
  UartTx_1 : UartTx
  ------------------------------------------------------------
  generic map (
    DEFAULT_BAUD        => UART_BAUD_PERIOD_115200,
    DEFAULT_PARITY_MODE => UARTTB_PARITY_NONE
  )
  port map (
    TransRec            => UartTxRec,
    SerialDataOut       => SerialData
  ) ;

  ------------------------------------------------------------
  UartRx_1 : UartRx 
  ------------------------------------------------------------
  generic map (
    DEFAULT_BAUD        => UART_BAUD_PERIOD_115200,
    DEFAULT_PARITY_MODE => UARTTB_PARITY_NONE
    )
  port map (
    TransRec            => UartRxRec,
    SerialDataIn        => SerialData 
  ) ;

1. configure a well-known baud rate
2. disable parity

Modifications for the TX process:

  UartTbTxProc : process
    variable UartTxID : AlertLogIDType ;
    variable TransactionCount, ErrorCount : integer ;
  begin
    GetAlertLogID(UartTxRec, UartTxID) ;
    SetLogEnable(UartTxID, INFO, TRUE) ;
    wait for 10 us ;
    -- WaitForClock(UartTxRec, 2) ;

    --  Sequence 1
    Send(UartTxRec, X"55") ;
    Send(UartTxRec, X"51", UARTTB_PARITY_ERROR) ;

1. disable WaitForClock and add a delay such as 10 us to make the transaction asynchronous
2. change the first byte to x"55" for a better pattern
[waveform screenshot]

For now, this is not producing a short start bit; I need to get access to the student's PC and check for modifications there.

JimLewis commented 5 months ago

Internally, UART hardware is typically synchronous to a 16x or 32x reference clock. That reference clock is independent of the one in the other UART with which it is communicating. In fact, their internal reference clocks can differ by a small percentage and the transfer will still be successful.

That said, the UART VC's use of a 1x clock was just a lazy implementation.
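For illustration, a free-running 16x reference of the kind described above might be sketched like this (invented names, just a sketch, not the OSVVM implementation):

  -- Illustrative sketch only; names are invented, not OSVVM code.
  constant BAUD_PERIOD   : time := 1 sec / 115_200 ;
  constant CLK16X_PERIOD : time := BAUD_PERIOD / 16 ;

  signal Clk16x : std_logic := '0' ;

  -- Free-running 16x reference: it never starts or stops with a
  -- transaction, which is what makes small frequency mismatches
  -- between the two UARTs tolerable.
  Clk16x <= not Clk16x after CLK16X_PERIOD / 2 ;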

JimLewis commented 5 months ago

Wrt 1.5 stop bits, my notes show that it was only used with 5-bit transfers, and popular UARTs such as the 16550 used the same control value to select both 1.5 and 2 stop bits - which one you got was a function of the number of data bits you sent.

Is there something that supports 9 data bits in a UART?

JimLewis commented 5 months ago

Noted that there are a few extra signals here and there. I am open to accepting a pull request to fix these.

JimLewis commented 5 months ago

Once the VC is updated to support a 16x clock, the test-case delays you want to use should be expressed as multiples of the 16x clock period - or at least that is the granularity you should get.

JimLewis commented 2 months ago

In UartTx, did you make any modifications to how UartTxClk is generated or to WaitForTransaction? The length of the start bit depends on WaitForTransaction starting aligned to UartTxClk. If you modified it to use the WaitForTransaction without Clk and did not update the rest of the "wait until UartTxClk = '1'" statements, then it will be a problem.

Paebbels commented 2 months ago

A UART communication can start at any time; it is wrong to bind the reception of a transaction to an artificial UART clock. The transaction needs to be received, and then the UART clock is started for 8+n cycles to transmit the bits. Afterwards the UART clock is stopped and the model waits for a new transaction.

Say you transmit one byte, wait for 10 ns, and send a second byte. As you missed the UART clock by 10 ns, the current implementation waits for one UART bit time (because WaitForTransaction synchronizes to a clock that should not exist at all). Now the second byte is sent out with a gap of almost one bit time.

Thus the UART, as in Universal Asynchronous RX/TX, becomes synchronous, and the receiver can't be stressed at its maximum because the stop bit is stretched by up to another bit time.
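The clock-free transmitter argued for above could be sketched roughly as follows (illustrative placeholders TransactionPending, TxData, and BAUD_PERIOD; this is not the OSVVM UartTx or its transaction interface):

  -- Illustrative sketch of a clock-free transmitter; placeholder
  -- signals, not the OSVVM UartTx or its transaction interface.
  UartTxSketch : process
  begin
    SerialData <= '1' ;                            -- idle line
    wait until TransactionPending = '1' ;          -- start at any time
    SerialData <= '0' ;  wait for BAUD_PERIOD ;    -- start bit, full bit time
    for i in 0 to 7 loop                           -- data bits, LSB first
      SerialData <= TxData(i) ;  wait for BAUD_PERIOD ;
    end loop ;
    SerialData <= '1' ;  wait for BAUD_PERIOD ;    -- stop bit, then idle again
  end process ;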

JimLewis commented 2 months ago

I understand that you disagree with my thoughts about how UARTs are built. However, back to my question: did you modify how UartTxClk is generated or how WaitForTransaction works? I am trying to get to the root cause of how you got a start bit shorter than one UartTxClk period, given that, the way UartTxClk and WaitForTransaction work as published on OSVVM's GitHub, this should not be possible. Hence my question.

I would like to address one issue at a time.

> A UART communication can start at any time; it is wrong to bind the reception of a transaction to an artificial UART clock. The transaction needs to be received, and then the UART clock is started for 8+n cycles to transmit the bits. Afterwards the UART clock is stopped and the model waits for a new transaction.

The internal timing reference for a UART does not start and stop. The part that is asynchronous is the fact that the clock is not part of the interface. In fact, the internal clocks of the sending UART and receiving UART can differ from each other (or from the specified frequency) by a certain percentage, due to use of an available internal clock vs an external clock, the crystals available, and the UART's clock divider logic.

UartRx continuously monitors its serial data input. Generally speaking, it has a 16x (or 32x) clock that is always running; when it finds a '0', it counts out 8 (or 16) clocks to find the middle of the start bit, and then samples once every 16 (or 32) clocks from then on. This clock cannot stop.
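The oversampling scheme described here could be sketched as follows (illustrative only; invented names, not the OSVVM UartRx implementation):

  -- Illustrative 16x oversampling receive sketch; invented names,
  -- not the OSVVM UartRx implementation.
  RxSketch : process
    variable Data : std_logic_vector(7 downto 0) ;
  begin
    wait until falling_edge(SerialDataIn) ;     -- candidate start bit
    for i in 1 to 8 loop                        -- count 8 ticks of Clk16x
      wait until rising_edge(Clk16x) ;          -- to reach mid start bit
    end loop ;
    if SerialDataIn = '0' then                  -- confirm it is a start bit
      for b in 0 to 7 loop
        for i in 1 to 16 loop                   -- one bit time = 16 ticks
          wait until rising_edge(Clk16x) ;
        end loop ;
        Data(b) := SerialDataIn ;               -- sample mid data bit
      end loop ;
    end if ;
  end process ;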

Maybe it is presumptuous, but I am assuming that UartTx runs off the same 16x or 32x continuously running timing reference clock that UartRx uses. While the current UartTx runs based on a 1x rather than a 16x clock, internally they are both clock based.

I don't disagree that the UartTx should get an upgrade so that it is running off of a 16X clock and can start on any edge of that 16X clock.

I agree it is best to model the devices' characteristics as they are built, but my thought is that it is unlikely that anyone creates separate clock generation logic for UartRx and UartTx.

The problem with starting at any old time is that it makes for unstable simulations over the long haul - I learned that the hard way with an old parallel port for a printer. Instability is a bad thing.

> Say you transmit one byte, wait for 10 ns, and send a second byte. As you missed the UART clock by 10 ns, the current implementation waits for one UART bit time (because WaitForTransaction synchronizes to a clock that should not exist at all). Now the second byte is sent out with a gap of almost one bit time.

Do you have evidence that the actual device works this way? Historically, clock gates were not often used due to the skew they introduce between the elements of the clock domain that gate the clock and the parts that have to run continuously.

It is easy enough to create a transmitter that does not use any clock at all - I just don't think this represents how the devices actually work.