With some of our inferred build commands we collect test metadata automatically, but that didn't happen for this project. Collecting metadata allows us to list the specific failures, and in some cases makes parallel builds more efficient. Here's how to get it:
For an inferred Ruby test command, add the appropriate formatter gem for your test framework.
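As a sketch of what that looks like for an RSpec project (the gem name below assumes RSpec; a Minitest project would add a Minitest JUnit reporter gem instead):

```ruby
# Gemfile (sketch) - add a JUnit formatter so the inferred
# rspec command can emit machine-readable test results
group :test do
  gem 'rspec_junit_formatter'
end
```

After a `bundle install`, the formatter is available to the test run and per-test results can be reported.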
Python should work automatically, except for Django projects, where you'll need to use django-nose.
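For a Django project, wiring in django-nose is a two-line settings change; this is a sketch based on django-nose's standard setup:

```python
# settings.py (sketch) - run Django's tests through django-nose
INSTALLED_APPS = [
    # ... your existing apps ...
    'django_nose',
]

# Replace Django's default test runner with the nose-based one
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
```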
If there's another inferred test runner you'd like us to add metadata support for, let us know.
For a custom test command, configure your test runner to write a JUnit XML report to a directory under $CIRCLE_TEST_REPORTS; see the docs for more information.
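For example, a circle.yml override for a project tested with pytest might look like this (a sketch; the `junit` subdirectory name is arbitrary, and `--junitxml` is pytest's flag for writing a JUnit XML report):

```yaml
# circle.yml (sketch) - custom test command writing JUnit XML
test:
  override:
    - mkdir -p $CIRCLE_TEST_REPORTS/junit
    - pytest --junitxml=$CIRCLE_TEST_REPORTS/junit/results.xml
```

Any runner that can emit JUnit XML works the same way: point its report output at a directory inside $CIRCLE_TEST_REPORTS.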