Closed: bredmor closed this issue 2 years ago
Hey @bredmor, I am new to this and want to make some contributions, but I need to ask a few questions, and some of them may be basic. Would you like me to work on it?
Sure, I'd welcome the contribution and be happy to answer any questions you have.
Ok, so I used the example with a Google API key and I am getting a "403 Forbidden" response. Any idea why this is happening?
It looks like Google has actually changed the process to access the Perspective API.
Instead of adding it via the API Console like most of their APIs, you'll have to follow the instructions on their documentation here: https://github.com/conversationai/perspectiveapi/tree/master/1-get-started
Ok, so it's working now. But can you please elaborate on the issue? I mean, what is an automated test, and how should I go about solving this?
I think this would be a good read for you - https://andy-carter.com/blog/phpunit-what-why-how
Ultimately, the idea is to have a suite of tests that can be run with every change to code (or change to the Perspective API) that assure us that this library is still working as intended - or tell us what has broken.
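As a concrete sketch of what "a suite of tests run with every change" looks like in a PHP project, here is a minimal PHPUnit configuration. The directory and suite names are assumptions for illustration, not taken from this repository:

```xml
<!-- phpunit.xml (sketch): point PHPUnit at a tests/ directory so the whole
     suite runs with a single `vendor/bin/phpunit` command on every change -->
<phpunit bootstrap="vendor/autoload.php" colors="true">
    <testsuites>
        <testsuite name="wrapper">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
</phpunit>
```

With a config like this in place, the suite can also be wired into CI so it runs automatically on every push or pull request.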
Thanks.
Ok, I have read it and somewhat understand what tests are all about. Then, what is my job? Is it to write tests for the API? If so, isn't that something the person writing the code should be doing themselves? And you say you need a "few automated tests". I mean, how many tests do you need? Isn't one test enough?
I am sorry if my questions are too basic. I really want to contribute but I just do not know how I should be approaching this problem.
Determining how many tests are needed and what they're needed for is definitely something that needs quite a bit of thought put into it as well. With an API consumer like this it probably would have been ideal for me to create the test as I wrote the code, but that didn't happen.
Testing can be very broad, or very granular. At the basic level we could have just one test that makes sure the main `analyze()` function gets a proper response from the API. We could also have tests for each type of Analyzer Model to make sure each one we support gets a proper response from the API - that would cover cases where that first test works with one model, but not another.
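A per-model test of the kind described above might look roughly like the sketch below. The class and method names (`Analyzer`, `addModel`, `getScore`, the `Model` constant) are assumptions about this library's surface for illustration, not its confirmed API, and a real run would need a valid `PERSPECTIVE_API_KEY` in the environment:

```php
<?php
// Sketch only: Analyzer, addModel(), analyze(), getScore() and the Model
// constant are hypothetical names standing in for this wrapper's real API.
use PHPUnit\Framework\TestCase;

final class AnalyzerTest extends TestCase
{
    public function testAnalyzeReturnsScoreForToxicityModel(): void
    {
        $analyzer = new Analyzer(getenv('PERSPECTIVE_API_KEY'));
        $analyzer->addModel(Model::TOXICITY); // hypothetical model registration

        $response = $analyzer->analyze('This is a friendly comment.');

        // A proper response should include a probability score in [0, 1]
        $score = $response->getScore(Model::TOXICITY);
        $this->assertIsFloat($score);
        $this->assertGreaterThanOrEqual(0.0, $score);
        $this->assertLessThanOrEqual(1.0, $score);
    }
}
```

Repeating a test like this for each supported model is what catches the case where one model responds correctly but another has broken.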
If you were to contribute one or more of those tests, that'd be a huge contribution to me.
Tests have been added as of #3 - Closing
Need a few automated tests to ensure the API wrapper is still working with the API and getting the correct data back.