nforbus opened this issue 5 years ago
Modified the poc/Makefile to include a tests flag. It creates two outputs: one built with the plugin in use and one without it.
You can then run the test code that Cole added, select the filenames of the two output files, and it will output the difference between the two files' LLVM IR.
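Not part of the Makefile itself, but for context, here is a minimal sketch of what that compile-twice-and-diff step amounts to. The plugin filename, the plugin flag, and the poc.c source name are assumptions for illustration, not what our poc/Makefile actually uses.

```python
# Hedged sketch: build the POC twice (with and without the randstruct plugin),
# emit textual LLVM IR for each, and print a unified diff of the two IR files.
import difflib
import subprocess

def emit_ir(output, extra_flags=()):
    # -S -emit-llvm makes clang write textual LLVM IR instead of an object file.
    subprocess.run(
        ["clang", "-S", "-emit-llvm", "poc.c", "-o", output, *extra_flags],
        check=True,
    )

emit_ir("poc_plain.ll")
emit_ir("poc_randstruct.ll", ["-fplugin=./randstruct_plugin.so"])  # plugin path is assumed

with open("poc_plain.ll") as plain, open("poc_randstruct.ll") as shuffled:
    diff = difflib.unified_diff(
        plain.readlines(), shuffled.readlines(),
        fromfile="poc_plain.ll", tofile="poc_randstruct.ll",
    )
print("".join(diff))
```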
Now looking at LLDB testing! https://lldb.llvm.org/tutorial.html
As it turns out, the entire LLDB API is available via Python. This is the future: https://lldb.llvm.org/python_reference/index.html
Updated the makefile commands to show the variable names for better readability. Cannot use the same method to create more human-readable test results though.
Embedding within CMake still to happen, more research needed because idk jack about CMake.
Figured out how to use lldb to find the locations of struct members so we can compare their locations when compiled with and without the plugin.
Working on a Python script to automate this check and output 1 or 0 depending on whether the structs were shuffled.
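As a rough sketch of the 1-or-0 decision that script needs to make, assuming we have already pulled (member name, byte offset) pairs out of lldb for the struct as built with and without the plugin (the layouts below are made up):

```python
# Hedged sketch: decide whether two struct layouts have the same member order.
def was_shuffled(plain_layout, plugin_layout):
    """Return 1 if member ordering differs between the two builds, else 0."""
    plain_order = [name for name, _ in sorted(plain_layout, key=lambda m: m[1])]
    plugin_order = [name for name, _ in sorted(plugin_layout, key=lambda m: m[1])]
    return 1 if plain_order != plugin_order else 0

# Example with made-up layouts:
plain = [("a", 0), ("b", 4), ("c", 8)]
plugin = [("c", 0), ("a", 4), ("b", 8)]
print(was_shuffled(plain, plugin))  # -> 1
```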
We'll be attending a unit testing workshop I've scheduled from 6:30pm until 8pm on Jan 29th, 2019 at the Engineering Building. A Google Calendar invite should be in your email. We'll show up an hour early to prepare as a team.
The plan (as of Jan. 30) is to write a Python script (https://lldb.llvm.org/python-reference.html, https://lldb.llvm.org/varformats.html) that wraps lldb, and use lldb's features to analyze the order of structs in a test program or set of test programs. Useful lldb commands:
- target create "program"
- process launch --stop-at-entry
- thread continue
- b filename:linenum (sets a breakpoint)
- frame variable varname
- frame var -L
- l
- l -
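For reference, here is a rough mapping of those interactive commands onto lldb's Python API (the script has to run under lldb's Python, i.e. with lldb's module on PYTHONPATH). The binary name, source file, line number, and variable name are placeholders for whatever test program we end up using.

```python
# Hedged sketch: drive lldb from Python to stop at a breakpoint and dump the
# member offsets of a struct variable. Names below are placeholders.
import os
import lldb

debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)                                  # run synchronously, like the CLI

target = debugger.CreateTarget("program")                 # target create "program"
bp = target.BreakpointCreateByLocation("poc.c", 42)       # b filename:linenum
process = target.LaunchSimple(None, None, os.getcwd())    # process launch

# Once stopped at the breakpoint, inspect the struct variable in the current frame.
thread = process.GetSelectedThread()
frame = thread.GetSelectedFrame()
var = frame.FindVariable("varname")                       # frame variable varname

# "frame var -L" shows member locations interactively; the API equivalent is to
# walk the struct type's fields and read each field's offset.
struct_type = var.GetType()
for i in range(struct_type.GetNumberOfFields()):
    field = struct_type.GetFieldAtIndex(i)
    print(field.GetName(), field.GetOffsetInBytes())

process.Kill()
lldb.SBDebugger.Destroy(debugger)
```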
FYI, from what I recall from when I worked on it, I can run the test suite from within Windows. It should be easy enough to do on the *nix side; I just haven't dug into it.
https://llvm.org/docs/TestingGuide.html
This should lead us in the correct direction though!
make check-clang will run the test suite, according to: https://clang.llvm.org/get_started.html
These two docs should give us all we need I believe
Looks like this is the correct location to place any tests: ~llvm-project\clang\unittests\AST
Rebuilt my server, rebuilt clang, and ran it with check-llvm in the build directory as required. Of all the regression tests, there was only one "unexpected failure". I'll look at it more tomorrow (in class today) and see if I can pinpoint the cause of the error. I believe that we can consider our regression tests passed as long as our final version doesn't trigger any unexpected errors with check-llvm.
I apologize if this is a bit of a naive question, but in addition to not causing regressions, is there an expectation for us to add additional tests?
In the presentation the sponsor provided, he mentions creating a test framework, although Clang already has one.
The only problem with this is I cannot figure out how to get the tests to complete (they hang at 11% for me).
From what I recall about looking at the Clang build server, some builds do fail. I couldn't tell you why they fail, but I'm thinking that if we fast-forward our code to a commit where we know the build in fact passes, we should expect the test suite to pass.
Our current commit before we forked was : https://github.com/connorkuehl/llvm-project/commit/70d484d94e3ec1f6c563b3f2e85f88becb977a41
Their build server:
http://lab.llvm.org:8080/green/
I highly suspect they do not accept patches that break any of the tests. That would be bad. I'm "hacking" away at it following this document:
Other attempts at running the tests fail at different spots, which is odd... We have a pretty beefy machine running this, and there is enough HDD space and memory.
I think memory may be getting used up due to the number of threads I'm specifying.
You can run the tests by going into the build folder and running
make clang-test
Yes, it appears to be a threading issue; threading.py fails. I actually specified 32 threads, but when it runs (when the progress bar appears), it states 64 threads. So it may be the case that it ignores what you specify during the make command and tries to max it out.
I've deleted our repo and re-pulled it in.
I've re-built clang/llvm and am re-running the regression tests right now without specifying the number of threads to use.
I've upped the machine to 72 CPUs and 624 GB of RAM.
I previously attempted 40 CPUs and 624 GB of RAM before deleting the folder, suspecting that it was running out of RAM.
Perhaps, when you specify the number of CPUs, it also carries over to the tests? I really don't know.
Test Error
This is the file we're currently failing regression testing with. I believe we need to add our attribute here, but I'm not yet sure how. Also, nobody has added an attribute to this file since May of '17, which makes me wonder whether anybody is going this route any more.
Wrong link; the right link (updated 7 days ago) is https://github.com/llvm/llvm-project/blob/master/clang/test/Misc/pragma-attribute-supported-attributes-list.test
Updated the above file with our attribute. Tried to do a commit, but I apparently have credential issues on my GCP so it's not letting me push a commit on it. Tim is going to make the changes and do the commit instead.
After editing the file located at
/home/timpugh_pdx_edu/llvm-project/clang/test/Misc/pragma-attribute-supported-attributes-list.test
it states the error is
:120:16: error: CHECK-NEXT: is not on the line after the previous match
I think it's based on alphabetical ordering, so I moved the line into that order and am waiting to confirm.
It is based on alphabetical ordering.
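For anyone touching that file again, here is a small, throwaway check that the CHECK-NEXT attribute list stays alphabetized; the regex is my guess at the line format, not something lifted from the test file itself.

```python
# Hedged sketch: verify the attribute names on the CHECK-NEXT lines are sorted.
import re

path = "clang/test/Misc/pragma-attribute-supported-attributes-list.test"
attrs = []
with open(path) as f:
    for line in f:
        m = re.match(r"// CHECK-NEXT: (\w+)", line.strip())
        if m:
            attrs.append(m.group(1))

out_of_order = [(a, b) for a, b in zip(attrs, attrs[1:]) if a > b]
if out_of_order:
    print("Out of order pairs:", out_of_order)
else:
    print("All", len(attrs), "attribute lines are alphabetically ordered.")
```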
I managed to get a test to pass, but it required running a slower machine with 16 CPUs.
The regression test for clang hung once before it passed.
I'm currently rebuilding upstream clang/llvm to see if this too hangs, although to save time I'm using a 64 CPU machine.
Installed the test suite and attempted to build. I cannot seem to build using more than a single core though. It made it to 37% and failed, and then failed multiple times when I attempted to restart. Not sure of the next path.
@nforbus Did you experience the same failure at 37%?
Mine complained about an xray header file not being in the proper location.
I believe the directions we followed may be deprecated, but there appears to be a different set we can follow.
https://llvm.org/docs/ReleaseProcess.html
We may have to do a more sophisticated build process to run the test-suite.
https://llvm.org/docs/lnt/quickstart.html
This and the above links should get the test suite going.
Having a bunch of errors getting the test suite working.
I believe this is because the test-suite isn't in lock step with llvm-project's master branch (which appears to be their development branch, more or less).
The game plan from here on out with this:
- I'm building clang/llvm release 8
- I'm building the test-suite release 8
- Provided these two work together, I'll merge the randstruct code and re-test
- If at this point in the process it still works, we're prepared to hook our POC file into the test suite
- If this goes according to plan and we accomplish it, then we'll know how to properly do it, and we'll then add our POC to the test-suite's MASTER branch
- This will ensure it works, to the best of our knowledge, even if active development between llvm/clang and the test-suite is not in lock step.
I'm currently having it build, and if it fails, I'll have no more leads on how to get it working at the moment.
I have more leads now after mailing the devs and getting a response.
I included compiler-rt in the build and my build is progressing now. In the event things break in the future, I think I'll just have to assemble the entire toolchain, which includes: compiler-rt, libunwind, libc++abi, libc++, lld (although lld may not be required).
More can be found here: https://clang.llvm.org/docs/Toolchain.html#clang-frontend
and here:
https://stackoverflow.com/questions/47304919/building-and-using-a-pure-llvm-toolchain-for-c-on-linux
You will also need to install tcl:
sudo apt-get install tcl
I've now created instructions in our team drive called BUILDING THE LLVM TEST SUITE (Linux) that solves this headache.
The test suite runs and passes all 918 tests.
This was with the release 8 build (both llvm/clang and the test-suite). We'll need to run it with randstruct now. I'm going to assume this is going to pass, since we didn't mess with the clang internals very much except in the record layout builder, and even then it shouldn't affect things unless we use our flag.
I've got our file hooked into the test suite correctly from what I can tell.
The instructions in the Google Drive are updated to reflect a fix I needed to add (we need to set it to run the test suite in DEBUG mode), and I dropped our POC file into the correct spot. Running the test picks up our POC file and shows it failing, which I purposely set it to do just to save time by not collecting the correct output yet.
So, to finish off this task:
We need to pass the randstruct-seed=
These should hopefully be easier to solve.
Above is the output from running the test suite with our fork of llvm with randstruct implemented
I explored some other tests but haven't found much in the way of passing our seed.
Emailing a dev will likely be useful here.
Re-assigning to @Nixoncole @jeffreytakahashi @nforbus
What remains to be done:
- Pass the seed to the test.
- Create a test where the same exact struct is randomized and NOT randomized, then compare these structs. If they differ, good; the test is done and returns PASS. (A sketch of both checks follows this list.)
- Create a second test where it also randomizes the same struct with the same seed, and compare against the previous test's randomized struct. If they are the same, good; the test is done and returns PASS.
- Pursue a method to do end-to-end testing, as a means to treat the randstruct plugin as a black box.
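Here is a sketch of what those two checks could look like end to end. How the seed actually gets passed to our clang fork is still the open question, so RANDSTRUCT_FLAGS is a placeholder, and the source file and struct name are made up; the layout extraction reuses the lldb field walk from the earlier comment.

```python
# Hedged sketch of the two tests: (1) randomized vs. plain layouts must differ,
# (2) two builds with the same seed must produce identical layouts.
import subprocess
import lldb

RANDSTRUCT_FLAGS = ["<however-we-pass-randstruct-seed=1234>"]  # placeholder, not a real flag

def build(output, extra_flags=()):
    subprocess.run(["clang", "-g", "-O0", "poc.c", "-o", output, *extra_flags], check=True)

def extract_layout(binary, struct_name="poc_struct"):
    """Return the struct's members as (name, byte offset) pairs from debug info."""
    debugger = lldb.SBDebugger.Create()
    target = debugger.CreateTarget(binary)
    t = target.FindFirstType(struct_name)
    layout = [(t.GetFieldAtIndex(i).GetName(), t.GetFieldAtIndex(i).GetOffsetInBytes())
              for i in range(t.GetNumberOfFields())]
    lldb.SBDebugger.Destroy(debugger)
    return layout

build("poc_plain")
build("poc_rand_a", RANDSTRUCT_FLAGS)
build("poc_rand_b", RANDSTRUCT_FLAGS)

plain = extract_layout("poc_plain")
rand_a = extract_layout("poc_rand_a")
rand_b = extract_layout("poc_rand_b")

print("Test 1 (randomization happened):", "PASS" if rand_a != plain else "FAIL")
print("Test 2 (same seed is deterministic):", "PASS" if rand_a == rand_b else "FAIL")
```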