mein1156 opened 11 years ago
With regard to AI, what would be useful for you to know for the test plan? Most of the calls that we want to make will be calls to the API that the client team will already have tested; we will just "automate" them. Would it be useful to know the specific calls we're making?
@leit7193 I imagine we will need AI involved in some of the integration and system tests, to see how well it plays with the server. We will also probably need unit tests (ideally) covering all the units of the AI code, just as we should for the client and server code. I would like, though I can't claim to know much here, to test all of the calls with unit tests on the server side, then do some integration tests, some including the AI, and then a system test as a dry run of some sort. How do you imagine testing the AI to see whether it is picking a good strategy? Those would be good tests to have.
The only test I can think of for whether the AI is picking a good strategy is whether it wins. Of course, the game is so heavily in favor of the Imperials that the AI might not be smart enough to win, just not dumb enough to lose.
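A minimal sketch of what a "does it win?" test could look like, assuming a hypothetical `play_game()` that runs one full game and returns the winner, plus invented `AIPlayer`/`RandomPlayer` classes (none of this exists yet):

```python
# Sketch only: play_game, AIPlayer, and RandomPlayer are invented names.
from freedom_galaxy.engine import play_game           # hypothetical
from freedom_galaxy.ai import AIPlayer, RandomPlayer  # hypothetical

def test_ai_beats_random_play_most_of_the_time():
    wins = 0
    for _ in range(100):
        winner = play_game(rebel=AIPlayer(), imperial=RandomPlayer())
        if winner == "rebel":
            wins += 1
    # Not "is it brilliant?", just "is it not dumb enough to lose?"
    assert wins >= 60
```

Even if the AI can't beat a human, it should at least beat random play most of the time.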
Another test of actual intelligence would be how closely it plays to a human. Does it make similar decisions? That's a little more subjective.
@leit7193 Heavily in favor of the Imperials? I'll remember that next time I play haha.
My idea of unit tests is even more basic than that; some would be like proactive debugging. I don't know if you would want to try something like passing it bad input, or giving small pieces certain input and seeing how they respond, but that is part of what I had in mind. Testing the AI as a whole is, I think, an integration test.
For each class method, we can pass it a value and see what it does in response. I don't have a good example off the top of my head, but this kind of testing isn't really concerned with the fact that the function is part of the AI; if I pass a function a piece of data, I want to know that it computes and returns the correct response. When you check your code after writing it by passing in a value of "10", for example, that could count as a unit test in the sense I'm thinking of. We just have to define it in the documentation. Does that make sense?
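To make that concrete, here is a minimal sketch of the kind of unit test I mean, using pytest and a made-up `move_stack()` function (the module and names are placeholders, not real code in our repo):

```python
import pytest

from freedom_galaxy.board import move_stack  # hypothetical function

def test_move_stack_returns_destination():
    # Pass a normal value and check the computed response.
    assert move_stack(start=10, distance=1) == 11

def test_move_stack_rejects_bad_input():
    # "Proactive debugging": bad input should raise, not silently succeed.
    with pytest.raises(ValueError):
        move_stack(start=10, distance=-5)
```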
I am not brave enough to tackle testing how well it mimics a human player, especially since I am a terrible game player myself. But if you had a scenario, maybe we could set it up and then test what the AI player's next move is? I think missions might be a good one. Looking at all of the rules and choices, if you get even a half-way decent AI working on this game, Dr. J has got to be super impressed.
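Something like this sketch is what I have in mind for a scenario test, assuming an invented `GameState` fixture and `AIPlayer` interface (all names made up for illustration):

```python
from freedom_galaxy.state import GameState  # hypothetical
from freedom_galaxy.ai import AIPlayer      # hypothetical

def test_ai_attempts_available_mission():
    # A fixed, known position: one idle character, one open mission.
    state = GameState.from_fixture("one_character_one_mission")
    ai = AIPlayer(side="rebel")

    move = ai.choose_move(state)

    # We don't demand the single "best" move, only a sane one:
    # the AI should attempt the mission rather than pass.
    assert move.action == "start_mission"
```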
Just a thought: I think everyone probably feels swamped and has better things to do than write more specs. I don't blame you, since getting the game (code) to work is the goal. I'll put it out there that I'd be happy to do a lot of the spec writing for anyone, such as use cases, if that helps anyone out. I'm a pretty bad writer, so as long as you are willing to review it, I'd be happy to do it.
I feel like Dr. J has been putting a lot of emphasis on testing and its importance. I have seen the direction @mein1156 has been going with testing, and it is a good start. For those who are interested in this topic, we should organize a meeting on Friday or Saturday to hammer out a plan.
I am free anytime after 1:30 on Friday and most of the day Saturday.
I'm also free this weekend and plan on being on campus most of the weekend anyways, so I'd like to do something like this as well.
@mein1156 Would you be free this weekend to get together?
@hall5714 @cawaltrip Are we planning to create a repository for testing (and/or documentation)? @mein1156 has already produced many .tex files that everyone would benefit from seeing. I have tried to keep pace reviewing these, but he is producing them faster than I can review them.
If we do get a repo going for this and someone would like to join me in the review process, then I have been checking for:
- Accuracy of information
- Clarity of statements
- The occasional grammatical error
Check with @hall5714 to make sure, but I believe there are docs and testing folders in the Freedom-Galaxy repo that could be used.
[Edit: I didn't mean for this post to sound mean. I'm sorry about that]
Hi everyone who's reading this: I talked with Dr. J for a while about testing and other things in general, and I am reporting the results of that meeting here. This is based on my understanding of what he meant, so sorry in advance if I got anything wrong.
Tomorrow I think he is going to announce that we need to create a Test Plan document as one of our goals for next week's sprint. Since the team is not very well organized at this point, at least from my perspective, I don't know if anyone else is working on the document, but I have started one.
Since we need a lot of warm bodies to get our goals done, I have continued to work with the Java team, since they have a group of people dedicated to test/documentation work (versus just me here?). I have offered to share some of the higher-level, non-Java/Python TCS material that we worked on previously; they have continued to work on it, and I have continued to share some of my work on parsing the requirements document with them. Personally, I think we need this cooperation, and it is much more beneficial for the Python team than for the Java team. Dr. J does not consider this a competition, at least in terms of grades, and has given me the okay to do what is necessary.
I am happy to take ownership of the Test Plan. There are a couple of things that will need to be decided about our testing plan; I emailed @thom5468 about some of the details. If we have a Main Menu, Dr. J would like a use case for it, and that applies to basically all functionality, such as stacking. We need documentation for basically everything we do, and I can write some of it as needed to fill in the gaps. If there is no documentation or specification for your code, how does anyone know when the code performs or doesn't perform? I can't test code when I don't know what the expected result should be.
The big question is: are we following our own documentation, such as UML diagrams and use cases, or are we just coding (Dr. J's question to me)? We need specifications and requirements for everything. Basically, if you are doing something, that something should trace back to a requirements or design document of some kind. If you are doing any programming without a design document, even just a rough one, I personally think that is the wrong approach.
This comes back to the issue of design. We have design documents; we need to use them and make more as required (kind of summarizing Dr. J here). Dr. J doesn't want perfect documents, but having them will make testing that much easier, and he expects us to more or less follow them. If someone is testing your code in an integration test or something, we need a public API on the code: how do I call your code? @hall5714 has put forward something, and if it works, we can use it to quickly check your work. It is not a substitute for use cases and specifications, though. Design comes first, ideally.
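To show what I mean by a public, documented entry point that someone else can test against, here is a self-contained sketch (the stacking rule, limit, and names are all invented for illustration, not our actual rules):

```python
class InvalidMoveError(Exception):
    """Raised when a move violates the rules."""

def stack_units(hex_contents, new_unit, stack_limit=4):
    """Public API: return a new stack with new_unit added.

    Raises InvalidMoveError if the stacking limit would be exceeded.
    A documented signature like this answers "How do I call your code?"
    """
    if len(hex_contents) >= stack_limit:
        raise InvalidMoveError("stacking limit exceeded")
    return hex_contents + [new_unit]

# An integration-style check someone else can now write against that API:
def test_stacking_limit_is_enforced():
    try:
        stack_units(["a", "b", "c", "d"], "e")
    except InvalidMoveError:
        pass
    else:
        raise AssertionError("expected InvalidMoveError")
```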
Also, we need a process for dealing with discrepancies and failures (in the testing sense). We really need some testing standards, and I have developed a way to do that too.
Any comments?