Closed — nazneen84 closed this issue 4 years ago
Hi, sqlmock does not have an option to execute the same expectation a number of times; you have to generate as many expectations as there are queries executed in the tested function. However, the only option that supports concurrency is unordered matching. With ordered matching, the first expectation that is not matched will return an error, so all you need to do is check the error on every query you make.
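The ordered-matching behavior described above can be illustrated with a toy model. This is a simplified sketch, not sqlmock's actual implementation, and the type and method names are invented: each query consumes the next queued expectation, and any mismatch or extra query surfaces as an error that the code under test must check.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// orderedMock is a toy model of ordered matching (hypothetical, not the
// real sqlmock driver): each query must match the next queued expectation.
type orderedMock struct {
	expectations []string // expected query fragments, in order
	next         int
}

// Query matches the incoming SQL against the next expectation in the queue.
func (m *orderedMock) Query(sql string) error {
	if m.next >= len(m.expectations) {
		return fmt.Errorf("call to Query %q was not expected", sql)
	}
	if !strings.Contains(sql, m.expectations[m.next]) {
		return fmt.Errorf("query %q does not match expectation %q",
			sql, m.expectations[m.next])
	}
	m.next++
	return nil
}

// ExpectationsWereMet reports unmet expectations, analogous to sqlmock's method.
func (m *orderedMock) ExpectationsWereMet() error {
	if m.next < len(m.expectations) {
		return errors.New("there are unfulfilled expectations")
	}
	return nil
}

func main() {
	m := &orderedMock{expectations: []string{"SELECT id", "UPDATE users"}}
	fmt.Println(m.Query("SELECT id FROM users"))  // <nil>
	fmt.Println(m.Query("UPDATE users SET name")) // <nil>
	fmt.Println(m.Query("DELETE FROM users"))     // "was not expected" error
	fmt.Println(m.ExpectationsWereMet())          // <nil>: both expectations hit
}
```

The key point: the error for the extra `DELETE` is only visible if the code under test propagates it, which is exactly the limitation discussed in this thread.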
Yes, I am doing that already. But if an unexpected query happens after all the expected queries have run in order, I am not able to assert on that last one.
In sqlmock, if an unexpected query happens, you get an error; that is expected behavior, and there is no workaround. Think about your code and whether you can test it properly. Maybe it is possible to isolate the code into smaller testable parts.
The only other way I can think of to test this is to scrape the logs, which is not ideal. I take it this is a won't-fix? I checked against the features you currently have on my end: the last unexpected query gives me a false positive. Even if I split it out into another function, it will still give me a false positive.
The point of this library is to test the expected behavior; that cannot include unexpected calls to the database. If you cannot work around it, fork the driver and adjust it for your needs if you think that is more reasonable.
I'm facing a similar issue. Would it be possible to add a function, similar to ExpectationsWereMet, to check that no unexpected DB calls have been made?
Maybe you can explain the problem in more detail?
Consider such a function:

```go
func multipleActions() error {
	err := doQuery1()
	if err != nil {
		return err
	}
	return doQuery2()
}
```
For the positive test case without errors, I would define expectations for both queries. Let's say that for a negative test case, where I want the doQuery1() call to fail, I forget to set the sqlmock expectation that returns an error, and I do not set an expectation for the second query. I get a false positive here: the sqlmock expectations are not defined properly, but the function still returns an error, and I end up not testing the if condition.
What I would be looking for is, as I already said in the previous comment, a function similar to ExpectationsWereMet (maybe with modified logic of the same function) to check that no extra SQL queries were performed outside the expectations. I do realize that this should not be possible if everything is written properly and works as expected, but in such a world ExpectationsWereMet would not be necessary either :)
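The function being requested could look roughly like this. This is a hypothetical sketch, not sqlmock API; `trackingMock` and `NoUnexpectedCalls` are invented names. The idea: record every query that matched no expectation, and expose a companion check to ExpectationsWereMet.

```go
package main

import "fmt"

// trackingMock sketches the requested feature: alongside ordered
// expectations, remember every query that matched nothing, so a test can
// assert both "all expectations met" and "no extras were seen".
type trackingMock struct {
	expected   []string // expected queries, in order
	next       int
	unexpected []string // queries that matched no expectation
}

// Query consumes the next expectation on a match, and records the call
// otherwise (while still returning the usual driver-style error).
func (m *trackingMock) Query(sql string) error {
	if m.next < len(m.expected) && sql == m.expected[m.next] {
		m.next++
		return nil
	}
	m.unexpected = append(m.unexpected, sql)
	return fmt.Errorf("call to Query %q was not expected", sql)
}

// NoUnexpectedCalls is the companion check proposed in this thread.
func (m *trackingMock) NoUnexpectedCalls() error {
	if len(m.unexpected) > 0 {
		return fmt.Errorf("%d unexpected queries, first: %q",
			len(m.unexpected), m.unexpected[0])
	}
	return nil
}

func main() {
	m := &trackingMock{expected: []string{"q1", "q2"}}
	m.Query("q1")
	m.Query("q2")
	m.Query("q3") // code under test swallows this error: false-positive risk
	fmt.Println(m.NoUnexpectedCalls() != nil) // true: the extra call is caught
}
```

Even when the code under test ignores the driver error, the recorded call makes the test fail afterwards, which is exactly what the inline error check alone cannot guarantee.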
Ok, I understand the issue you have, but the library cannot prevent mistakes from its users. In your tests, you can check whether the error contains a specific part of the message, to make the testing logic more precise. Also, if those queries were exactly the same, such code would not make sense; they are probably a bit different, so you can match the query text or arguments correctly to prevent such errors from happening.
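The suggestion to match the query text precisely relies on how sqlmock compares queries: by default the expected query string is treated as a regular expression and matched against the SQL actually sent. A minimal sketch of just that matching step (`matches` is an invented helper, not sqlmock API):

```go
package main

import (
	"fmt"
	"regexp"
)

// matches reproduces the regular-expression style of query matching:
// the expectation is compiled as a regexp and tested against the SQL sent.
func matches(expected, actual string) bool {
	re, err := regexp.Compile(expected)
	if err != nil {
		return false // an invalid pattern can never match
	}
	return re.MatchString(actual)
}

func main() {
	// Distinct patterns keep the two queries from satisfying each other's
	// expectation, which is the maintainer's point above.
	fmt.Println(matches(`SELECT name FROM users WHERE id = \?`,
		"SELECT name FROM users WHERE id = ?")) // true
	fmt.Println(matches(`UPDATE users SET name = \?`,
		"SELECT name FROM users WHERE id = ?")) // false
}
```

Note that regexp metacharacters such as `?` must be escaped in the expectation, which is a common source of confusing mismatches.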
ExpectationsWereMet will always be necessary, since your tested code may not return an error if it hits all expectations in order, but there is still one left which had to be hit last.
So far, this is not a good enough argument to make changes.
The queries were not intended to be the same in the example; it was a minimalistic example, just for the purpose of illustration. Since we are talking about cases when tests fail, we will obviously be talking about scenarios where there is an issue in the code. Personally, I see as big a use case here as in ExpectationsWereMet. I'm not looking for a way to determine where the function returned (or what kind of error it returned, really), but recently I have seen a few (possible) false positives in my unit tests which could have been avoided.
Another possible scenario where a new function could be useful is a regression. Say we had valid and properly written unit tests, but changed the code to this:
```go
func multipleActions() error {
	doQuery1() // still returns an error
	return doQuery2()
}
```
We stop checking the error from doQuery1, while it is still there. This is a clear regression for the case when an error is returned. But our unit tests would still pass, since we get an error from sqlmock through doQuery2.
sqlmock can only return errors from the driver, so you have to check the error on each database call. If a user does not check for an error, that is a problem in their code, not in the library. sqlmock's usage is limited, but the main goal is that it allows testing existing code without any modifications, as long as the code follows idiomatic approaches and does the error checking.
Sometimes the problem is in the user, not in the tools they use. If a tool provides a way to assert all the execution path options, then all you need to do is ensure that your test code follows one and only one feasible execution path, and that may not be as easy as it sounds.
Completely agree with you: all my examples are constructed to show a user error. Probably I see this package differently, as something that helps to catch issues like this. After all, that is the main purpose of a unit test.
> ExpectationsWereMet will always be necessary, since your tested code may not return an error if it hits all expectations in order, but there is still one left which had to be hit last.

I would say that this is a similar case: if there was one expectation that was not hit, it was a user error and the library helps to catch it. I can't come up with a scenario where ExpectationsWereMet would return an error but the code is correct.
I want to verify that a query which runs when an expectation is not met never actually executes. I looked, but there does not seem to be any option such as sqlMock.ExpectExec().Times(0), sqlMock.FailOnUnexpectedMatches(true), or sqlMock.FailOnExec(). Please let me know, or show me an example of how to work around this. We cannot test an important condition without it.
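Since none of those options exist, one way to approximate `Times(0)` semantics is a small wrapper of your own. This is entirely hypothetical; `forbidMock` and its methods are invented names, not sqlmock API: declare queries that must never run, record any hit, and assert afterwards.

```go
package main

import (
	"fmt"
	"strings"
)

// forbidMock sketches the missing FailOnExec / Times(0) idea: a deny-list of
// query fragments that must never be executed during the test.
type forbidMock struct {
	forbidden []string // fragments of queries that must never run
	hits      []string // forbidden queries that were executed anyway
}

// Exec rejects and records any query matching the deny-list.
func (m *forbidMock) Exec(sql string) error {
	for _, f := range m.forbidden {
		if strings.Contains(sql, f) {
			m.hits = append(m.hits, sql)
			return fmt.Errorf("forbidden query executed: %q", sql)
		}
	}
	return nil
}

// ForbiddenWereAvoided plays the role of the requested assertion, in the
// spirit of ExpectationsWereMet.
func (m *forbidMock) ForbiddenWereAvoided() error {
	if len(m.hits) > 0 {
		return fmt.Errorf("%d forbidden queries ran, first: %q",
			len(m.hits), m.hits[0])
	}
	return nil
}

func main() {
	m := &forbidMock{forbidden: []string{"DELETE"}}
	m.Exec("SELECT 1")
	fmt.Println(m.ForbiddenWereAvoided() == nil) // true: nothing forbidden ran
	m.Exec("DELETE FROM users")
	fmt.Println(m.ForbiddenWereAvoided() == nil) // false: the test would fail
}
```

The after-the-fact check is what makes this robust against code that swallows the inline error, for the same reason ExpectationsWereMet is checked at the end of a test.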