Closed joshrainwater closed 3 years ago
Is there a reason you can't $response->assertHeader() on those specific routes?
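For reference, a minimal sketch of what that assertion could look like in a Laravel feature test. The route URL and the allowed origin here are hypothetical stand-ins, not anything from the actual app being discussed:

```php
<?php

namespace Tests\Feature;

use Tests\TestCase;

class CorsHeaderTest extends TestCase
{
    /** @test */
    public function a_public_route_sends_the_cors_header()
    {
        // Hypothetical public endpoint; swap in one of your real routes.
        $response = $this->get('/api/public/widgets');

        $response->assertStatus(200);
        $response->assertHeader('Access-Control-Allow-Origin', '*');
    }
}
```

`assertHeader` fails the test if the header is missing or has a different value, so no spoofing is needed: the framework's test client captures the full response, headers included.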
Regardless of whether or not I can figure that specific situation out, what's the general rule of thumb for a situation that you can't figure out how to run a test? Just keep digging until you can find a way to test, skip the feature, or implement it anyway somehow?
I think "skip the feature" is not the ideal solution. And if you do real TDD, "implement it anyway" is also not ideal.
In my case, I would keep searching until I found a way to test this. And if I really couldn't find anything, or had no idea how to test it, I would ask here or some other testing experts to see if they could help.
In your case, @FatBoyXPC already answered your specific problem :-)
@jkniest is it possible to just implement that feature without testing? I believe it's possible, and I also believe my application will 'survive'!
@joshrainwater Test it yourself manually, and maybe implement some kind of error logging on that new feature if possible and monitor it... I don't know, just a thought.
@danielnegoita Of course you can implement it. I just said that if you're really going full TDD, it wouldn't be allowed. (But as always, it depends on the circumstances.)
I don't see a problem with implementing this without testing, but I would create a test for it. (Especially since this case should be really easy to test -- see the answer from @FatBoyXPC.)
Have a nice day, Jordan
Yeah, like @FatBoyXPC said, I would test this personally by making an OPTIONS request to your endpoint and checking for the CORS headers :+1:
OK, great, thanks for the direct answer @FatBoyXPC.
Sorry if this gets off topic, I guess it was two questions in one. I can find another spot to ask if necessary, but I'm still curious about the second question.
Say I have a really complicated equation to test; something really procedurally detailed. I can test around it, test it returns answers or types of results, but I can't reasonably test actual results. Would that be good enough? How would I handle a situation where I can't find a way to test the code I wrote?
Yeah, I kind of intentionally didn't answer the second part to that because it can get situational.
Can you elaborate on "actual results"?
Code that's already written that's hard to test: imo, that's where the strength of Laravel's HTTP tests comes in. Any time you can write a test that says "hey, go to this URL, throw this data at it, assert these things", that's pretty fantastic. Oftentimes I'll use this to assert stuff got added/updated/deleted to/from the database.
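A minimal sketch of that "throw data at a URL, assert things" style, including a database assertion. The route, table, and redirect target are hypothetical, assuming a typical resource controller:

```php
<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class StorePostTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function posting_to_the_endpoint_stores_a_record()
    {
        // "Go to this URL, throw this data at it, assert these things."
        $response = $this->post('/posts', [
            'title' => 'Hello',
            'body'  => 'World',
        ]);

        $response->assertRedirect('/posts');
        $this->assertDatabaseHas('posts', ['title' => 'Hello']);
    }
}
```

The nice part is that one test exercises routing, validation, the controller, and persistence in a single pass, without caring how the code under it is structured.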
However, sometimes there's code that fetches some other API and there's no easy way to say "hey, use this sandbox API instead!" That situation is a little more difficult. I honestly would take each "code is already written and hard to test" situation on a case-by-case basis, since each one is often going to be different, so a blanket rule probably won't be very accurate.
I recently had a discussion with somebody who told me they sometimes didn't write tests first because they "didn't know what behavior to expect yet." If you're ever in that situation, I simply ask "Then how do you build it?" This might sound snarky, but the fact of the matter is you don't just finger spam the keyboard and get working code (well, I don't, anyway). If you "don't know what test to write" before you write the working code - are you sure you've planned your feature out enough? That's how I would approach that situation.
@fatboyxpc I disagree with the second part of your answer. I sometimes find myself needing to implement a feature where I know what I want it to do, but I'm not entirely sure what it is going to do. Instead of taking a pen and paper and sketching it out, I sometimes find it easier to simply write some code, toy around, try to polish it, and understand what I really want it to do. Once I get there, I'll often git reset and then implement the code with a TDD approach, or sometimes I'll write the test after the code I've written. It depends.
I don't like the idea of telling people that there is only one correct way to do something. There are literally countless ways to achieve something. You just need to find the one that suits you. It does not make you any less of a developer if you don't TDD everything. Sometimes the "hands on" approach is just what suits someone better when trying to implement a feature.
@kfirba Glad you brought this up, because these hypothetical situations are really fun to me :)
I would like to clear up, though, that I didn't say there's only one correct way. I just made the point that if you don't know what you want to happen, how can you write code to make it happen? If you can describe a real situation where you've actually run into this, that would be fantastic :)
The situation you explained to me feels like you aren't sure what behavior you want. Now, this could suggest you're trying to learn an API, in which case I'll elaborate on that below. If you aren't learning an API, and instead you are "figuring out what I really want it to do" - I'm going to stand by my previous statement, you haven't really planned out your feature well enough yet.
Example: Let's add an upload profile picture feature to our website. Right off the bat I know I want a few things: a page that gives me an upload button, a route that processes that image (we can define what that means later), and then a profile page that I can view that picture. This is as good of a starting point as any; it's certainly not an exhaustive list of assertions, but it gets the ball rolling.
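That starting list translates almost directly into test names. A hedged sketch of those first few assertions as a Laravel feature test, with hypothetical routes and using Laravel's fake storage/file helpers:

```php
<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\RefreshDatabase;
use Illuminate\Http\UploadedFile;
use Illuminate\Support\Facades\Storage;
use Tests\TestCase;

class ProfilePictureTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function the_upload_page_renders()
    {
        // "A page that gives me an upload button."
        $this->get('/profile/picture')->assertStatus(200);
    }

    /** @test */
    public function uploading_a_picture_is_processed()
    {
        // "A route that processes that image."
        Storage::fake('avatars');

        $response = $this->post('/profile/picture', [
            'avatar' => UploadedFile::fake()->image('me.jpg'),
        ]);

        $response->assertRedirect('/profile');
    }
}
```

None of these pin down what "processes that image" means yet, and that's the point: they get the ball rolling, and more specific assertions can be layered on as the feature takes shape.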
About learning APIs: This is one situation where a lot of people like to play with an API before they go and write code for it. Uncle Bob's "Clean Code" talks about "Learning Tests", which are fantastic: Instead of playing with something like postman, write up an http request inside a test. Now you get your playground and you have your integration tests. Win-win!
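A hedged sketch of such a learning test using Guzzle, PHPUnit-style. The endpoint and response shape are hypothetical placeholders, since the point is the pattern (record what you learn about a third-party API as an executable test) rather than any specific API:

```php
<?php

use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

// A "learning test": instead of poking the third-party API in Postman,
// capture what you learn as an executable, repeatable test.
class ThirdPartyApiLearningTest extends TestCase
{
    /** @test */
    public function fetching_a_user_returns_json_with_an_id()
    {
        // Hypothetical base URI; point this at the real (or sandbox) API.
        $client = new Client(['base_uri' => 'https://api.example.com']);

        $response = $client->get('/users/1');

        $this->assertSame(200, $response->getStatusCode());

        $data = json_decode((string) $response->getBody(), true);
        $this->assertArrayHasKey('id', $data);
    }
}
```

These tests hit the network, so they're usually kept in a separate suite from the fast unit tests, but they double as an early-warning system when the API changes under you.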
@fatboyxpc I may not have been very clear in my previous post, but what I really meant is that sometimes I need to write production code in order to fully plan the feature I want. Unfortunately I don't have an example right now, but yeah, sometimes writing code as a means of planning a feature helps me understand better what I want that feature to do.
@kfirba Yeah, so I just don't agree with that, honestly (though a real-world situation would certainly be helpful). I think there is some confusion here between how you want something to work and what you want to begin with. You also have to remember that oftentimes, people who aren't programmers are giving feature specifications. People who aren't programmers simply don't have the luxury of writing production code to drive out how they want the feature to work, so we know it's 100% possible to write out a feature spec without writing production code.
@fatboyxpc it's most certainly possible, but it might not be the most efficient way for me. That's my point.
@kfirba in fairness, you probably mean "most comfortable", not "most efficient". I highly doubt you save any time by mindlessly poking around with code, then scrapping it and rewriting it :wink: Anywho - obviously anybody can do things how they want, and happy coding to all :+1:
On the 'situations that can't be tested' front, a project I'm working on deals with scientific equipment which works down at a kind of nanotechnology/quantum level - writing tests for what amounts to 'Schrödinger's cat' is proving... 'interesting'... It's not helped by it being a PHP backend, which talks to a Python app, which then talks to a TCL 'driver' over an unreliable TCP socket...
@ohnotnow that sounds painful! In this case, you could probably mock it and just pray you never get out of sync but that's certainly not optimal. Do you have any sort of sandbox environment so you can at least run a whole suite of integration tests?
@ohnotnow Yeah, that sounds insane. In my case I think I'm going forward by sort of... testing circles around the calculations: asserting the shape and type of the results rather than the exact values, that kind of stuff. I might even test the ranges of some results, just in case (say, between 100,000 and 150,000). I just gotta hope that I can get close enough that everything sort of gets covered.
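A minimal plain-PHP sketch of what "testing circles around the calculations" could look like. The calculation function here is a made-up stand-in, not real code from the project:

```php
<?php

// Stand-in for a complicated scientific calculation whose exact output
// is hard to pin down in a test.
function runSimulation(array $inputs): float
{
    return array_sum($inputs) * 1234.5; // placeholder maths
}

$result = runSimulation([10, 20, 70]);

// Test around the calculation rather than pinning an exact value:
assert(is_float($result));                       // right type
assert($result >= 100000 && $result <= 150000);  // plausible range
```

Range and type assertions like these won't catch a subtly wrong formula, but they do catch the most common regressions: sign flips, unit mix-ups, and results that are off by orders of magnitude.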
@joshrainwater I think the way I would do this is as follows:
1) Make an HTTP request to an endpoint with a given input
2) Assert a specific thing happens (return value in JSON, check the HTML, check the database, etc.)
Now, if we're following the dogmatic TDD approach, this test fails simply because we're making a request to a route that isn't defined, so let's skip to the part where we've written an empty controller and method. Use this to write the code you'd like to have. That's the best part about this TDD approach! Programming by wishful thinking, a concept from "Growing Object-Oriented Software, Guided by Tests", is really awesome, and fun! @adamwathan even mentions it in one of these videos!

This code "you wish you had" will be what you'd like your controller code to look like. Obviously most (if not all) of it hasn't been written yet, so it blows up. Take this opportunity to write your unit tests (which also fail, because the objects and methods don't exist). Your unit tests will test the inputs and outputs of the functions/methods directly. The ideal unit test is "given input A, I want output B."
Once your unit tests are passing, your controller should be able to move past the 'wishful code' and you should be getting close to testing the result of the controller (saving to a database, returning json, returning html, etc).
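The "given input A, I want output B" shape, as a minimal plain-PHP sketch. The slugify function is just an illustrative stand-in for whatever small, pure unit the wishful controller code called for:

```php
<?php

// A small pure function: the ideal unit-testing target.
function slugify(string $title): string
{
    return strtolower(trim(preg_replace('/[^A-Za-z0-9]+/', '-', $title), '-'));
}

// Given input A, I want output B:
assert(slugify('Hello, World!') === 'hello-world');
assert(slugify('  TDD is fun  ') === 'tdd-is-fun');
```

Because the function takes plain values in and returns a plain value out, the test needs no framework, no database, and no HTTP layer, which is exactly why driving the design down to units like this pays off.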
For stuff with a lot of results, I'd most likely just check a few of the results and assume it's good unless a bug is reported otherwise.
@FatBoyXPC @joshrainwater heh - I didn't mean to de-rail the conversation - I was just having a 'pity me' moment ;-) But yes, there is a sandbox 'virtual' version supplied by the manufacturer, but it behaves differently to the real equipment. Which is super ;-) I've got tests - some of which do something, some of which are really just guesses. Some of the code is written by scientists and I have zero idea what it's supposed to do, what the inputs should be or what the output should be - so they are pretty much ignored and 'assertHopeForTheBest' ;-) Some of the code has physical effects 'in the real world' (well, as real as quantum mechanics gets) and depending on the state of 'the real world' different things might happen. Or might not happen. Which is really interesting, if you're a scientist I imagine... ;-)
Anyway - I'll go back to my pit of despair now - carry on as you were :-)
@ohnotnow Yeah, that's the problem with not owning the code. Sometimes you just can't control the other teams involved and you have to do the best you can do, and it sounds like you have!
When you're working in TDD, what do you do when there are situations that you can't figure out how to test?
For example, I'm working on an app that allows public access to specific routes. So I need to add CORS Access-Control-Allow-Origin headers to specific routes, but I can't figure out a way to spoof that in PHPUnit.