kubasub / chinese-checkers

COSC 3F00 Group Project

Testing - Create Test Plan from UI Design (4 hours) #11

Closed curtesmith closed 10 years ago

curtesmith commented 10 years ago

Description: Build the test plan for the current phase and get approvals.

Dependency: There is a dependency on knowing the component names (project task #6).

Original Work Effort Estimate: 4 hours

kubasub commented 10 years ago

Dependencies for this task have been completed.

UML Interface Layout Controls

Extra: Naming Conventions

SaajidM commented 10 years ago

Test plan completed.

Phase 1 - Test Plan

Requires a review from documentation and approvals from the rest of the team before it can be marked as completed.

BenStitt commented 10 years ago

The only inconsistency I noticed is that these docs refer to hotseatConfigurationActivity, but class-method-descriptions.docx refers to HotSeatSetupActivity. I am assuming this doc no longer applies, and approve it.

SaajidM commented 10 years ago

Thanks Ben. I've updated the Phase 1 - Test Plan to match the class names from the Class Method Description. Awaiting the rest of the group's approvals.

kubasub commented 10 years ago

I have read over it and approve it. Still awaiting approval from: @curtesmith (Team Lead), @goddamnpete (Deputy Lead), and possibly others.

chriskdon commented 10 years ago

Looks good, I approve.

curtesmith commented 10 years ago

The only adjustment that I would suggest at this time is to section 4, "Testing feedback procedure". We have not discussed it as a team, but I think it would be more efficient to use GitHub Issues to log a test failure as an Issue that can be assigned to or picked up by one of the developers. The content of the form in Appendix A is useful, so could we incorporate it into the GitHub Issue when it is created, to capture all the information needed to begin troubleshooting the bug?

Let me know your thoughts.

SaajidM commented 10 years ago

Curt, I was thinking that could work, but we would have to generate Issues for passes as well in order to keep the documentation up to date. It shouldn't really be a problem, however, so I am quite open to it. Just let me know what you think and I'll make the changes to the document before you and the rest approve it. Also, I believe any changes to the document would mean the previous approvals would have to be redone.

goddamnpete commented 10 years ago

I approve of this document, sorry it took me a little longer to look it over.

chriskdon commented 10 years ago

I don't think we would need to generate any documentation for unit-test-level passes/failures; these tests happen way too frequently to manage manually. They will be captured by Travis or just the general unit test framework. But we probably want something for the higher-level integration/system test passes and failures (we could create a new category, like we do for 'project tasks'), since these are done less often. I also think that using GitHub to log the issues would be much more efficient and helpful, because then we know exactly who's working on what.

SaajidM commented 10 years ago

I have no problem with unit-test-level failures not being documented in the way described for everything else in the document, as these would most likely be captured by the frameworks as you stated. However, I would still like one of those forms filled in when the higher-level tests described in the document are completed with a pass status.

chriskdon commented 10 years ago

I agree with that. I think we should, however, submit those forms into this section (I realize it's technically the issue tracker, but we can tag them with "Test Pass" or something, just like project tasks). Overall I think we're in agreement on how it should work. @curtesmith might have something more on this.

curtesmith commented 10 years ago

I think I agree too. I just want to make sure we are all saying the same thing. So if @SaajidM or anyone running a specific test case observes behavior in the application that does not match what is expected in the test case, then the tester can create an Issue in GitHub that references the test case (maybe by a test case ID) and then includes the contents of a completed Appendix A form. I am hoping that the Issues will be specific enough that we can track them at the individual test case level rather than creating one big Issue that has all of the tests that did not "pass". Are we all saying the same thing?
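For illustration only, an Issue body might look something like the block below. The field names are just my guess at what the Appendix A form captures, since I don't have it in front of me, so adjust them to match the actual form:

```
Test Case ID: ...
Result: FAIL
Expected behaviour: ...
Actual behaviour: ...
Steps to reproduce: ...
Environment (device, OS version, build): ...
```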

curtesmith commented 10 years ago

"Travis" may also have the ability to open Issues in GitHub automatically if the unit test suites have failures. Maybe something to research and consider if you think it will add value and make life easier.

chriskdon commented 10 years ago

Yup, Curt I agree with you.

SaajidM commented 10 years ago

Alright, I've updated the Phase 1 - Test Plan to reflect the new feedback procedure. I would like to get re-approvals from @ChrisKdon, @curtesmith, @goddamnpete, and @kubasub verifying the changes are adequate and in line with what we have discussed, and from @BenStitt that the documentation is up to standards. Thank you.

kubasub commented 10 years ago

I have looked over it and approve it.

chriskdon commented 10 years ago

I approve.

BenStitt commented 10 years ago

I have one issue. Under section 3.6, you wrote "Team Leader to be determine." I'm not sure what this is supposed to mean.

goddamnpete commented 10 years ago

Yeah, aside from the question Ben asked, I approve of this document wholeheartedly.

SaajidM commented 10 years ago

What counts as a pass for the acceptance testing was to be determined by the team leader. This is what Curtis had to talk to Vlad about.

curtesmith commented 10 years ago

I also approve the document content. I will pass along Vlad's remarks about the Acceptance Testing as soon as I get them.

SaajidM commented 10 years ago

Alright, I'm continuing on as though this phase is completed. I will leave it open, however, until we hear back from Vlad.

Actual Time For Completion: 5 hours.

curtesmith commented 10 years ago

We are responsible for the Acceptance Tests.

From Prof. W: "Let us give your Team all the freedom it needs to arrive at the best product. I still expect all the demos and rationales given.

This methodology applies when you want to launch the game on the wide market; the Team then does not have access to the advice of all possible customers. The team must make its own decision, for better or worse. These decisions may affect the marketability of the product. Regards — Vlad W."

BenStitt commented 10 years ago

I approve.

SaajidM commented 10 years ago

Okay. So what does the team think would be reasonable criteria for the acceptance testing?

kubasub commented 10 years ago

Maybe we can make a short list of criteria during our next meeting?

One possible idea I have is that the user should be no more than two gestures away from wherever they want to be (within reason).

SaajidM commented 10 years ago

@curtesmith I'm going to move this task over to you, as you were going to update the Test Plan with the acceptance testing criteria.

curtesmith commented 10 years ago

@SaajidM I've uploaded an Acceptance Test document to https://github.com/kubasub/chinese-checkers/tree/master/project-management/Phase%201%20-%20Testing%20Documentation. Will you have some time to review it?

SaajidM commented 10 years ago

@curtesmith I'm unable to open the documentation. It opens as corrupted on my Windows machine.

curtesmith commented 10 years ago

Strange. It looks like that file broke the build too (???). I'll recreate it and update it.

SaajidM commented 10 years ago

The content looks good. Not happy that we failed the acceptance test, but we do have a couple more iterations to get it up to snuff. Also maybe next time we can go with a different format. Just food for thought.

curtesmith commented 10 years ago

@SaajidM I tried uploading it again. Please see if you can open it now from GitHub.

@BenStitt if you cannot open it, I can send it to you as a PDF so that you can add it to the binder.

SaajidM commented 10 years ago

@curtesmith Yes I am able to open it now.

kubasub commented 10 years ago

@curtesmith @SaajidM, I intentionally disabled changing the orientation to keep it from going into landscape. Many apps are made like this, e.g. Google Calendar, Dots (a slick game I showed during a meeting), etc.

For that reason, I'm thinking that maybe that shouldn't be one of the acceptance tests.
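For reference, the lock itself is just the standard Android one-liner. Roughly like this (simplified, not the exact code in our activities, and the class name here is only illustrative):

```java
import android.app.Activity;
import android.content.pm.ActivityInfo;
import android.os.Bundle;

public class ExampleBoardActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Lock this screen to right-side-up portrait so it never rotates
        // into landscape or reverse (upside-down) portrait.
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
        // setContentView(...) and the rest of the setup would follow here.
    }
}
```

The same thing can also be done per activity in AndroidManifest.xml with android:screenOrientation="portrait".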

SaajidM commented 10 years ago

@kubasub The test case wasn't for landscape; it was for upside-down portrait. But I do understand what you mean. Maybe we leave it in, and for the next iteration we can find a way to reorient as long as the device is in portrait.

kubasub commented 10 years ago

@SaajidM, yes, I understand that it was upside-down portrait. Google Calendar goes landscape as well, but not upside-down portrait. Dots does not rotate at all. Having it upside down, I feel, is not very natural and should not be presented.

SaajidM commented 10 years ago

@kubasub I suppose if we are only doing it for phones, upside-down portrait doesn't really make sense. But personally, when I pick up my tablet I can't really tell which way is the right side up to hold it in portrait, and it's annoying when I don't get it right. But again, I believe we are only doing it for phones, so .... @curtesmith what do you think?

curtesmith commented 10 years ago

@SaajidM @kubasub good discussion. I'm glad that I noticed it and then captured it in a test so that we could say we talked about it and made a decision about whether or not we want to consider it. I think if Google sets a precedent for us then we can follow it, especially if we know there are going to be challenges with making the app able to reorient itself. Since all of the apps I use on my Kindle orient themselves to be upright, I made the assumption that most users would expect that.

So, all that to say that we do not believe we will invest in making this app reorient itself at this time. Therefore this test failure is not going to be acknowledged as a failure. We say it is working as designed.

curtesmith commented 10 years ago

@SaajidM I noticed that you are working on the failing tests today. Before our demo today, if there are any tests that continue to fail, I would suggest descoping them from this iteration, which means let's pull them out and try to get them working in Iteration 2. It would be good if we can run the tests today and have only the working ones running.

SaajidM commented 10 years ago

@curtesmith Hey, the issue is a timing issue while using the emulator. I have verified that they all work on my tablet; can someone else verify that they pass on a device? I will bring my tablet and computer to the meeting tonight so we can show the tests running, if that is required.
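For what it's worth, one way to make them less timing-sensitive might be to poll for the expected state instead of relying on fixed delays, along these lines (a rough sketch, not what is currently in our tests; the helper name and timeout are made up):

```java
import android.os.SystemClock;
import java.util.concurrent.Callable;
import static junit.framework.Assert.fail;

public final class TestWaits {
    private TestWaits() {}

    // Poll a condition until it becomes true or the timeout expires, instead of
    // sleeping for a fixed amount of time that behaves differently on the emulator.
    public static void waitFor(Callable<Boolean> condition, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (Boolean.TRUE.equals(condition.call())) {
                return;
            }
            SystemClock.sleep(100); // back off briefly before checking again
        }
        fail("Condition not met within " + timeoutMs + " ms");
    }
}
```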

curtesmith commented 10 years ago

@kubasub awesome! I ran the whole suite of tests on my Kindle Fire. 100% success. Great work you guys!!!

SaajidM commented 10 years ago

Completed

@SaajidM - Actual Work Effort - 5 hours