aliceafterall closed this 6 years ago
Looks like the build errors occur without your changes.
Yeah, it looks like Travis CI is down. All checks are failing on everything. :/
This looks really good, @chainsol. I'd like to see a few unit tests added to cover the new perception checks.
I'm not sure what's going on with the errors in the Travis CI build. It looks as though all the tests for Ainneve are passing, but we're getting failed tests in the `evennia test evennia` part of it. I updated my local copy of the main evennia repository and I'm seeing them locally, though others in the IRC channel say they aren't seeing any. I may try deleting my local copy and checking out again, but perhaps @Griatch also has some insight into what's going on here.
Thought so too. I'll add some basic unit tests covering what's already in the `return_appearance` checks.
Any feedback on how "difficult" the checks should be?
I think the target numbers you chose seem like a good starting point. The default target number for most skill checks in OA is 5, and having the "vague" checks a little easier and the "exact" checks somewhat harder makes a lot of sense.
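Concretely, something in this spirit is what I picture - the numbers and the d6 mechanic here are placeholders for illustration only, not the values in the PR or a claim about the OA rules:

```python
import random

def perception_check(perception, target=5):
    """Toy check for illustration: d6 + the looker's Perception vs a target number."""
    return random.randint(1, 6) + perception >= target

per = 2  # example Perception score

# OA's default target is 5; ease the "vague" check, stiffen the "exact" one
sees_race = perception_check(per, target=4)       # vague info: race
sees_archetype = perception_check(per, target=7)  # exact info: archetype
```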
I wonder if @Griatch would be willing to check out the Travis CI build logs... It appears the errors are happening during the `evennia test evennia` part. The evennia/evennia repository has a passing Travis CI badge, but in my local copy of the master branch I get the same errors that appear in the build log when I try to run the base framework tests. I haven't been able to figure out what is causing them, and the chatter in the IRC channel seems to indicate most people are not experiencing them. However, it appears that all recent Ainneve PRs (excluding #87) are failing with this same set of errors.
@feend78 I can confirm that the evennia unittest suite fails with the Ainneve game folder (it works with a vanilla game dir). This is most likely due to the test suite somehow making use of the normal commands and typeclasses - and those have been overridden in Ainneve's case. This was not supposed to happen since we are using `settings_defaults.py` only for the core tests. I don't know off the bat just why there is a "leakage" between game and core, but that seems to be the case.
I will open a new issue against Evennia concerning this. So as to not block your work here, I suggest you comment out https://github.com/evennia/ainneve/blob/master/.travis.yml#L11 so that Travis only runs Ainneve's own unit tests for now.
Just to be clear - I've written initial unit tests that pass, but this isn't ready to merge yet - I still need to write some more advanced tests.
I don't know that I can improve the tests, actually - if someone wants to take a look at them that's fine, but other than testing that the exact chances are correct, I'm not sure this needs more testing - and that's a test I don't know how to write. Otherwise, I'm happy with this for the time being! Please do tell me if anything else needs changing.
I'm going to merge this into Develop - if someone wants to improve the testing, that's absolutely amazing, and I'd be excited to see another PR for that!
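For anyone who does pick that up: one way to pin down the chance-based part might be to patch the roll itself so the outcome is deterministic. Very rough sketch only - `roll_check` and its import path are stand-ins for whatever the hook actually calls, and the assertions assume a race was set on the test character:

```python
from unittest import mock

from evennia.utils.test_resources import EvenniaTest


class TestReturnAppearancePerception(EvenniaTest):
    """Sketch: force the perception roll to a known outcome, then inspect the output."""

    @mock.patch("typeclasses.characters.roll_check")  # stand-in name and path
    def test_passed_check_reveals_race_and_archetype(self, mock_roll):
        mock_roll.return_value = True  # pretend every check succeeds
        desc = self.char2.return_appearance(self.char1)
        self.assertIn("Elf", desc)  # whatever race char2 was given in setUp

    @mock.patch("typeclasses.characters.roll_check")
    def test_failed_check_hides_race_and_archetype(self, mock_roll):
        mock_roll.return_value = False  # pretend every check fails
        desc = self.char2.return_appearance(self.char1)
        self.assertNotIn("Elf", desc)
```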
This change addresses #57 by adding a customized `return_appearance` to the `Character` typeclass.
This hook shows the character's name and, if the looker passes perception checks, their race and archetype, like so: Chainsol the Elf Arcanist.
Another perception check tells the looker how healthy the target is - right now there are four messages, keyed to 80%, 50%, anything over 0, and 0 HP.
Another check tells the looker how much stamina the target has left, also with three messages at 80%, 50%, and anything below 50%.
Finally, the `return_appearance` hook no longer shows the looker the contents of the target's inventory; instead it only shows what the target has equipped. To do this, the hook uses the target's `EquipHandler` and the limbs the target has.
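For reference, the shape of the hook is roughly this. It's a simplified sketch of the approach rather than the exact code in the diff: the `skill_check` helper, the target number, the `db.race`/`db.archetype`/`traits.HP` names, and the way the equip handler is iterated are all assumptions here, not necessarily how Ainneve actually stores these.

```python
import random

from evennia import DefaultCharacter

VAGUE_TARGET = 5  # placeholder target number


def skill_check(looker, target=VAGUE_TARGET):
    """Stand-in for the real perception check: d6 + the looker's Perception vs a target."""
    perception = looker.db.perception or 0  # assumption about where Perception lives
    return random.randint(1, 6) + perception >= target


class Character(DefaultCharacter):
    def return_appearance(self, looker):
        """Show the name, perception-gated details, and equipment - no inventory listing."""
        if not looker:
            return ""
        string = self.get_display_name(looker)
        # race and archetype, only if the looker passes a perception check
        if skill_check(looker):
            string = "{} the {} {}".format(string, self.db.race, self.db.archetype)
        # rough health message behind another check; four tiers of message
        if skill_check(looker):
            hp_pct = 100 * self.traits.HP.actual / self.traits.HP.max
            if hp_pct >= 80:
                string += "\nThey look healthy."
            elif hp_pct >= 50:
                string += "\nThey look roughed up."
            elif hp_pct > 0:
                string += "\nThey look badly wounded."
            else:
                string += "\nThey are down and out."
        # (stamina works the same way, with its own thresholds and messages)
        # equipment only - walk the equip handler's limbs instead of listing inventory
        for limb, item in self.equip:
            if item:
                string += "\n{}: {}".format(limb, item.get_display_name(looker))
        return string
```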
To make the output more readable, the default characters' limbs have been renamed from shortened forms to human-readable ones: `right arm` instead of `r_arm`.
The only things I remain unsure of are the difficulty of the perception checks, and whether we should store and remember the result so the checks are only re-rolled after some period of time, say, 5 minutes.
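If we did want to remember results, I imagine something simple like stashing the outcome with a timestamp on the looker would be enough. Sketch only - the attribute name and the `skill_check` call are made-up stand-ins:

```python
import time

CHECK_TTL = 300  # seconds, i.e. re-roll after 5 minutes


def cached_perception_check(looker, target, key):
    """Reuse a recent result for this looker/check instead of re-rolling on every look."""
    cache = looker.ndb.perception_cache or {}  # ndb: non-persistent, wiped on reload
    result, stamp = cache.get(key, (None, 0))
    if result is None or time.time() - stamp > CHECK_TTL:
        result = skill_check(looker, target)  # the real roll, whatever it ends up being
        cache[key] = (result, time.time())
        looker.ndb.perception_cache = cache
    return result
```

The key could be something like the target character's dbref plus which check it is, so different targets get independent rolls.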