avr-llvm / llvm

[MERGED UPSTREAM] AVR backend for the LLVM compiler library

Add labels to MC tests #134

Closed · dylanmckay closed this issue 7 years ago

dylanmckay commented 9 years ago

All of the MC tests were originally written before label support was properly implemented.

As such, there are only about two machine code tests which use labels.

We should add at least one case with labels to every branch instruction test. A recent bug relating to labels in the breq family of instructions would have been caught if such tests had existed.

For an example of how to do this, see commit f4f0f7c133ad590737a43ef0d58d1f6c510a0a98. Note that to find out what the expected machine code should be, it is useful to assemble with avr-gcc and disassemble with avr-objdump.
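For example, something along these lines (the file name and MCU choice here are placeholders):

```
# Assemble the test input with the known-good toolchain...
avr-gcc -mmcu=atmega328p -c branch-test.s -o branch-test.o
# ...then disassemble it to see the encodings the MC test should expect.
avr-objdump -d branch-test.o
```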

One thing to keep in mind is to place branches far away from the labels they reference. This ensures that we have large numbers to "fixup" in MC, so more bits are touched and every execution path is tested. Also try to place labels at earlier points in the program (i.e. so branches jump backwards): negative two's complement numbers touch lots of bits.
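A minimal sketch of such a test (the label name and CHECK line are illustrative; the expected encoding bytes would be taken from the avr-objdump output above):

```
; RUN: llvm-mc -triple avr -show-encoding < %s | FileCheck %s

back:                  ; target placed before the branch,
  nop                  ; so the relative offset is negative
  nop
  breq back            ; CHECK: breq back
```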

dylanmckay commented 9 years ago

One problem with this is that llvm-mc -show-encoding does not resolve fixups. It will instead print out statements like the following.
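For instance, a breq against a local label comes out with the offset bits left as "A" placeholders, plus a fixup note (a sketch from memory; the exact bit pattern and fixup kind may differ):

```
breq .Lend    ; encoding: [0bAAAAA001,0b111100AA]
              ;   fixup A - offset: 0, value: .Lend, kind: fixup_7_pcrel
```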

It would be good if there were some way to get LLVM to resolve the fixups it can (i.e. fixups against labels defined in the same file being processed, rather than external symbols), so that we could test the actual fixup application code, where breakage has happened before.

Until then, our fixup code could silently regress without being detected by the test suite, leading to silently broken code.

agnat commented 9 years ago

It will instead print out statements like this.

lol... I was looking at exactly the same thing yesterday. Like: "Why the heck didn't we catch that?"...

Until then, our fixup code could silently regress without being detected by the test suite, leading to silently broken code.

That's exactly the point. I bet there are tools to check the actual object code, e.g. using lit/llc/FileCheck with different options. We should take a look at what other targets are doing.
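One way other targets get at the applied fixups (a sketch; the exact lit incantation here is an assumption) is to assemble all the way to an object file, so fixups are actually resolved, and then disassemble and check the result:

```
; RUN: llvm-mc -triple avr -filetype=obj -o %t.o %s
; RUN: llvm-objdump -d %t.o | FileCheck %s

back:
  nop
  breq back            ; CHECK: breq
```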

We also should look into writing the llvm-avr-forge script. Forge as in "Hitting metal". I believe in the long run automated execution tests on the target are the only way. But that's a task for another day...

dylanmckay commented 9 years ago

We also should look into writing the llvm-avr-forge script. Forge as in "Hitting metal". I believe in the long run automated execution tests on the target are the only way. But that's a task for another day...

If I understand what you are saying correctly, I agree. I visualize a set of AVR chips soldered onto a circuit board; performing automated tests on them would be a really good way to check the validity of the backend.

There could be a host controller which connects to a PC via USB and can pass a program to each of the AVR chips. The GPIO outputs could be sent back to the computer, and our tests could validate that a test passed because it, say, wrote the bit pattern 11001011 to PORTB.
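A hypothetical on-target check for that scheme could be tiny (the I/O addresses below are for an ATmega328P; the part choice is an assumption):

```
ldi  r16, 0xFF        ; configure all PORTB pins as outputs
out  0x04, r16        ; DDRB is I/O address 0x04 on the ATmega328P
ldi  r16, 0b11001011  ; the pattern the host-side test expects
out  0x05, r16        ; PORTB is I/O address 0x05
loop:
  rjmp loop           ; hold the pattern so the host can sample it
```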

Of course, these are just shitty pipe dreams.

agnat commented 9 years ago

Of course, these are just shitty pipe dreams.

lol... yeah... the best kind. ;)

agnat commented 9 years ago

As a first step I was thinking more about the A-board everybody keeps asking about.

We could link a test_suite.o, built with a known-good compiler (i.e. gcc), against our potentially wonky test_subject.o. Starting out really slowly by just testing that the calling convention works, we could then progress to a full-featured test suite.

Our script on the host keeps looking for the bootloader (crash -> fail) and reports the results.
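The build half of that could look something like this (file names, MCU, and the clang invocation are assumptions; clang's AVR support was still in flux at the time):

```
# Known-good half, built with avr-gcc...
avr-gcc -mmcu=atmega328p -c test_suite.c -o test_suite.o
# ...the half under test, built with our backend...
clang --target=avr -mmcu=atmega328p -c test_subject.c -o test_subject.o
# ...then linked together with the known-good toolchain.
avr-gcc -mmcu=atmega328p test_suite.o test_subject.o -o test.elf
```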

dylanmckay commented 9 years ago

Sounds like a great idea. However, what is an "A-board"?

agnat commented 9 years ago

The Arduino.

dylanmckay commented 9 years ago

I am not sure why my brain didn't make that connection.

dylanmckay commented 7 years ago

A lot of the MC tests now use labels.