What do you mean by "external package"?
What kind of test cases?
I mean pulling the console functionality out of package main and moving it into a dedicated package (package console? package frontends? we'll see), the same way you did for package formats.
For the next release I'd like to draft a console_test.go file which tests a bunch of console commands (at least the new screenshot() command I have in mind). In the future, I'd like each new console command to have its own associated test case.
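Just to sketch the direction I have in mind (everything below is hypothetical; the package layout and the Screenshot function still have to be designed):

```go
// Hypothetical layout: console functionality moved out of package main
// into its own package, so gotest can exercise it.
package console

import "errors"

// Screenshot would save the current display to the given file.
// This is only a stub to illustrate the exported API.
func Screenshot(path string) error {
	if path == "" {
		return errors.New("screenshot: empty path")
	}
	// ... grab the rendered surface and write it to path ...
	return nil
}
```

With an exported API like that, a console_test.go in the same package could call Screenshot directly from a test function.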
Package console: That seems reasonable.
Testing all console commands: I am skeptical about this. For example, how do we test the "fps()" command? Or the "exit()" command? While it is possible, I think it would be a lot of work. Even if a test case for "exit()" existed and running it yielded PASSED, I am afraid I wouldn't be satisfied with such a result, because it is just a single random datapoint. In order to ensure that "exit()" works in all possible scenarios, all scenarios have to be tried, i.e. it has to be a mathematical proof. The Go language provides no facilities for doing this.
PS: Testing the "screenshot()" command is OK, I guess.
Ok, probably we are not going to test all commands. However, the way I see it, in TDD tests are not necessarily mathematical proofs. See the way I'm testing the help() command in console_test.go: I'm checking that the command outputs text following an expected pattern. In other words, I'm expressing a general expectation about the behavior of that command. Sure, this will not test the help() function in all its details, but IMHO it's better than not testing it at all.

I mentioned TDD, but in fact we're not following TDD rules here. As you know, in TDD we should write the tests first. In theory this leads to an exhaustive test suite, because there is no code without an associated test (maybe this still doesn't apply to the exit() case). In fact, after writing a test we write just enough code to make it pass, then a bit more test code, and so on, so tests and code are built up in small increments.
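To show what I mean by testing against an expected pattern, roughly something like this (a simplified sketch; renderHelp is just a stand-in for whatever function actually produces the help text):

```go
package console

import (
	"regexp"
	"testing"
)

// renderHelp stands in for the function that produces the console
// help text; the real one lives in console.go.
func renderHelp() string {
	return "Available commands:\n" +
		"  help()        print this help\n" +
		"  screenshot()  save a screenshot\n"
}

// TestHelp checks that the output follows the expected general
// pattern ("name()" followed by a description on each line),
// without pinning down the exact text.
func TestHelp(t *testing.T) {
	pattern := regexp.MustCompile(`(?m)^\s+\w+\(\)\s+.+$`)
	if !pattern.MatchString(renderHelp()) {
		t.Error("help() output does not match the expected pattern")
	}
}
```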
I'm used to developing this way, but I'm aware of the peculiarities of this project and I'm open to discussing, case by case, the most suitable testing approach. That said, I think that, at the moment, we don't have enough tests.
I understand what you mean, and I mostly agree with it. It would be good for GoSpeccy to execute as many tests as possible. It is just that I am probably not going to write many of them using Go's test method.
It is easier to perform certain tests manually (i.e. it would be time-consuming to write a test function in Go). For example, this applies to the timing tests (http://github.com/remogatto/gospeccy/issues#issue/25).
It is good if issues are caught by leveraging the type system of the language. In other words, the method used to prevent software failures is: develop a good software architecture and use a good programming language. Types, and the transformations between types, express the programmer's intentions. For example, if a language offers compile-time type checking and has classes, the software developer should design the class structure so that it minimizes the chances of a software failure. Of course, in languages such as Python or Smalltalk, which do no compile-time checking, TDD is a must rather than an option.
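A made-up Go example of what I mean by types expressing intentions (the names have nothing to do with the actual GoSpeccy code):

```go
package main

// Distinct types for distinct meanings: the compiler rejects
// accidental mixing at compile time, before any test runs.
type PortAddress uint16
type MemoryAddress uint16

func readPort(addr PortAddress) byte     { return 0xFF }
func readMemory(addr MemoryAddress) byte { return 0x00 }

func main() {
	var p PortAddress = 0xFE
	readPort(p)
	// readMemory(p) // compile-time error: cannot use p (type PortAddress) as type MemoryAddress
}
```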
The "type systems" (or whatever we call it) have a long way to go in the future. For example, it would be nice if some major future language had the ability to check (at compile-time) that all indices used to access arrays are within bounds.
As another example, I don't like Go's idea of allowing nil pointers in the language (potentially, any pointer can be nil). I know from my experience that it is possible and quite natural to develop software in a language that allows pointers to be nil only if explicitly specified by the programmer.
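For instance, nothing stops this from compiling (the struct is made up; the point is only about nil):

```go
package main

type Spectrum struct{ borderColor byte }

func main() {
	var s *Spectrum   // nil by default, and the compiler does not object
	_ = s.borderColor // compiles fine, panics with a nil pointer dereference at run time
}
```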
The timing tests you mentioned are a perfectly valid way to test the emulator. They are a sort of system/black-box test, since they don't test a single part of the program but its high-level behavior. In our case I'd call them metatests, because they are tests which run inside the system being tested. I don't know if the name is appropriate. It would be nice if the launch of those metatests could be automated and the results returned to the main testing process (gotest) and checked. Here are some other examples of metatests, taken from FUSE:
http://fuse-emulator.svn.sourceforge.net/viewvc/fuse-emulator/trunk/fusetest/
That said, there is another class of tests, which are not metatests, which could be performed using the gotest framework (maybe with the aid of a test prettifier, since gotest's visual feedback is so ugly...). I'm referring, for example, to the tests in console_test.go.
I'm planning to develop both sides of this testing scenario.
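For the metatest side, I imagine something like this (only a sketch: the -run-metatest flag and the testdata file are invented, the real mechanism still has to be designed):

```go
package main

import (
	"os/exec"
	"strings"
	"testing"
)

// Launch the emulator with a metatest program and check what it
// reports back on its output.
func TestTimingMetatest(t *testing.T) {
	out, err := exec.Command("./gospeccy", "-run-metatest", "testdata/timing.tap").CombinedOutput()
	if err != nil {
		t.Fatalf("metatest failed to run: %v", err)
	}
	if !strings.Contains(string(out), "PASS") {
		t.Errorf("metatest reported a failure:\n%s", out)
	}
}
```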
"It would be nice if the launch of those metatests can be automated and the result returned back to the main testing process (gotest) and checked."
Yes, that would be nice. And slowly we will get there. It just requires some time (several months, on my side at least; assuming that on average I spend less than 1 hour per day on developing gospeccy).
An average of less than 1 hour per day is a great contribution for a niche project like this. Thank you for your work!
That is because I have a hidden agenda: my primary objective with GoSpeccy is to try to apply some dynamic code generation ideas. Right now, we are at something like 500 x86 instructions per 1 Z80 instruction, and that does not even include SDL audio and video rendering.
Well, but you are right, GoSpeccy is a "pet project".
I've always known there was a hidden plan ;)
A test suite for the CLI is now on the master branch.
I'd like to write a bunch of test cases for console.go before the next release. However, console.go is in package main, so it can't be tested with gotest. I'm considering moving the console functionality into an external package in order to test it.