Closed: mikicho closed this issue 4 years ago
I suggest running the app and all the commands inside a Docker image; that way we don't need to worry about any OS-specific config.
@jhenaoz Thanks, indeed this is preferable. Just not sure about the performance drawbacks (if any) on Mac; I want to do a small benchmark to check it out. My guess is there is a small, negligible performance drawback.
@mikicho Sounds like a super-strategic topic. It would be interesting to benchmark the differences. I also wonder whether we can map to RAM on Mac/Windows in an automated way, so a developer just pulls, types 'npm t', and the magic happens, OR whether a developer would have to manually point their Docker daemon to some specific local folder.
> I suggest running the app and all the commands inside a Docker image; that way we don't need to worry about any OS-specific config
@jhenaoz Did you mean the infra (e.g. DB, MQ) or also the code under test?
@jhenaoz Another thing: there are options that are available only on Linux, for example tmpfs.
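For Postgres specifically, the idea would look roughly like this (a sketch; the container name, password, and image tag are illustrative, and the `--tmpfs` mount only works on a Linux Docker host):

```shell
# Mount Postgres's data directory on tmpfs (RAM-backed, Linux only),
# so test runs never touch the disk. Values below are illustrative.
docker run --rm -d \
  --name test-postgres \
  --tmpfs /var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=test \
  -p 5432:5432 \
  postgres:13
```

On Mac/Windows this flag is ignored or unavailable at the host level, which is exactly the OS-specific gap discussed above.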
@jhenaoz @goldbergyoni @Thormod In component testing, do we run schema migrations at the beginning of every run? Just truncate all tables? Leave the tables intact?
@mikicho How about running schema migration in a global setup for the whole test execution, and truncating tables for data clean-up in each test case?
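A minimal sketch of that global-setup idea with Jest, assuming a knex-style migration API (the file paths and the knexfile are hypothetical, adjust to whatever migration tool the project uses):

```javascript
// jest.config.js - point Jest at a one-time setup hook for the whole run
module.exports = {
  globalSetup: './test/global-setup.js',
};

// test/global-setup.js - runs once before all test files
const knex = require('knex')(require('./knexfile')); // hypothetical knexfile
module.exports = async () => {
  await knex.migrate.latest(); // apply pending schema migrations once
  await knex.destroy();        // release the connection pool
};
```

Per-test truncation would then live in each suite's beforeEach/afterEach rather than in the global hook.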
@jhenaoz @mikicho Interesting stuff, my 2.5 cent:
Schema - Migration in global setup indeed sounds very sensible, like Juan wrote 🚀. Maybe in the dev env there's no need to seed on every run, only when explicitly asked? This would boost performance for local dev, given that in 99% of executions we don't need to migrate (no schema changes).
Clean-up - If we clean up after every test, then processA.test1.afterEach will delete processB.test12's data during execution, won't it? Maybe clean up only at the end, in globalTeardown, and every test should take care to act on its own records only?
For example:
test('When deleting a user, then his articles are deleted', () => {
  // Arrange
  const userToAdd = { name: `Joe ${uuid()}` };
  ...
});
@goldbergyoni @jhenaoz Just saying there is another option, which is to open a transaction and roll it back after each test. It's efficient and clean. Having said that, I agree with you: in dev there's no need to clean after each test (just once in a while to keep the tables small). I don't think we need to clean up at all in global teardown.
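To make the rollback-per-test idea concrete, here is a runnable sketch using a hypothetical in-memory `db` stub; a real suite would call BEGIN/ROLLBACK through its actual client (e.g. knex or pg) in beforeEach/afterEach:

```javascript
// Hypothetical stub standing in for a real DB client and its transactions.
const db = {
  rows: [],
  tx: null,
  begin() { this.tx = [...this.rows]; },  // BEGIN: remember the pre-test state
  insert(row) { this.rows.push(row); },   // writes happen inside the transaction
  rollback() { this.rows = this.tx; },    // ROLLBACK: discard the test's writes
};

db.begin();                               // beforeEach
db.insert({ name: 'Joe' });               // the test acts
console.log(db.rows.length);              // 1: the data is visible during the test
db.rollback();                            // afterEach
console.log(db.rows.length);              // 0: the table is clean again
```

Note the caveats raised below: this only works with a DB that supports transactions, a single DB, and a local process.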
@mikicho Yes, good point, transactions are also an option (when working with a supportive DB + single DB + local process) and it's good that we cover everything before making decisions.
Curious to hear @jhenaoz and @Thormod thoughts on this.
Several comments about this discussion:
While I believe using transactions is a valid option, it's not my preferred one because:
My preferred option is, as always, to work like production does: generate unique rows for every test and don't delete them. It isn't perfect, but maybe better.
@goldbergyoni @jhenaoz Shouldn't we close this in favor of #6? (Or maybe the opposite, to preserve the conversation above.)
@mikicho Yes
@jhenaoz @Thormod @goldbergyoni
I want to focus on MySQL and Postgres, which are the most popular database engines these days. We can add more databases later (MongoDB is a good candidate IMO).
Also, because of WSL2, I think we can recommend using Linux practices on Windows too. WDYT?
The challenge:
Optional Solutions
Postgres:
Tasks