Open yarikoptic opened 3 years ago
Shall read/respond better to this next week, but wanted to capture that we have some Cypress-based automated tests in the works (there should be an open WIP PR for this somewhere), as well as a whole bunch of backlogged plans around adding more automated QA, cross-browser testing, etc. Off the top of my head I know I’ve got backlog issues and concepts written up around tuning the bundle size + reporting that in PRs, visual diffing/review with Percy, the aforementioned Cypress integration tests, etc. (I feel like there were a couple more things, but I can’t remember them this second.)
Never heard about Cypress before -- it might indeed be a more native-to-JS-land way to go. The PR I had in mind, I guess, was #1252!
> The PR I had in mind, I guess, was #1252!
Yup, that’s the one!
Inspired by seeing #1468 by @0xdevalias, I decided to share an idea which could be nice for a hackathon, and which @sparkletown could use to quickly establish a sample integration smoke test + benchmarking + demo case for any given target event deployment.
For our https://dandiarchive.org project we created a quick-and-dirty https://github.com/dandi/dandi-api-webshots/#readme (the code generating it: https://github.com/dandi/dandi-api-webshots/blob/master/tools/make_webshots.py , in particular the walkthrough scenario part), where we sweep through dandisets and provide a summary of timings, to see which aspects should be improved and which dandisets are the "clinical cases" triggering the need.
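Not the actual make_webshots.py, but a minimal sketch of that kind of sweep, assuming Playwright as the browser driver and a hypothetical shortlist of dandisets; it just records a page-load timing and a full-page webshot per dandiset:

```python
# Minimal sketch of a "webshots + timings" sweep (not the real make_webshots.py).
# Assumes Playwright: `pip install playwright && playwright install chromium`.
import json
import time
from pathlib import Path
from playwright.sync_api import sync_playwright

DANDISETS = ["000003", "000008", "000026"]  # hypothetical shortlist
BASE = "https://dandiarchive.org/dandiset"

Path("webshots").mkdir(exist_ok=True)
timings = {}
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for ds in DANDISETS:
        t0 = time.monotonic()
        page.goto(f"{BASE}/{ds}", wait_until="networkidle")
        timings[ds] = round(time.monotonic() - t0, 2)
        page.screenshot(path=f"webshots/{ds}.png", full_page=True)
    browser.close()

# summary of timings, to spot the "clinical cases"
print(json.dumps(timings, indent=2))
```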
For Sparkle it would then be just a sequence of steps through the event, with their timings, webshots, and ideally the output from the browser console to spot any possible errors, possibly with a summary on top pointing to the steps which either exceed some desired timing threshold or error out.
E.g. for `env/ohbm` (attn @margulies) it could be a walkthrough of the event. Seeing the screenshots would allow rapidly assessing "where we are" in terms of desired development, what is missing, how the UX would feel given the timings, etc.
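As a very rough sketch of such a walkthrough (again assuming Playwright; the deployment URL, step routes, and timing threshold are hypothetical, since I don't know the actual routes of the ohbm event), collecting per-step timings, webshots, and console output, plus a summary of slow or erroring steps:

```python
# Hypothetical Sparkle walkthrough: per-step timings, webshots, console messages,
# and a summary of steps exceeding a timing threshold. Assumes Playwright.
import json
import time
from pathlib import Path
from playwright.sync_api import sync_playwright

BASE = "https://ohbm.sparkle.space"      # hypothetical deployment URL
THRESHOLD = 5.0                          # seconds; arbitrary for illustration
STEPS = [                                # hypothetical routes through the event
    ("landing", "/"),
    ("entrance", "/v/ohbm"),
    ("auditorium", "/in/ohbm/auditorium"),
]

Path("webshots").mkdir(exist_ok=True)
report = []
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    # collect anything the app prints to the browser console
    console_msgs = []
    page.on("console", lambda msg: console_msgs.append(f"[{msg.type}] {msg.text}"))

    for name, route in STEPS:
        console_msgs.clear()
        t0 = time.monotonic()
        page.goto(BASE + route, wait_until="networkidle")
        elapsed = time.monotonic() - t0
        page.screenshot(path=f"webshots/{name}.png", full_page=True)
        report.append({
            "step": name,
            "seconds": round(elapsed, 2),
            "console": list(console_msgs),
        })
    browser.close()

# summary on top: steps that are too slow or logged errors
slow = [r["step"] for r in report if r["seconds"] > THRESHOLD]
errored = [r["step"] for r in report if any("[error]" in line for line in r["console"])]
print("Slow steps:", slow or "none")
print("Errored steps:", errored or "none")
Path("report.json").write_text(json.dumps(report, indent=2))
```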
Ideally, such a testing setup would then become part of CI, so it could run against the target branch deployment and provide a diff on timings, or error out if something is not fixed. But that would be quite a bit more involved ;) Re the concern "but it is on CI with unreliable timing!": that would be OK, since the timings would only be used for comparison between runs done on the same instance. We do that to see the effect of a PR on benchmarks in DataLad and it works wonderfully. Relevant workflow (just for ref): https://github.com/datalad/datalad/blob/master/.github/workflows/benchmarks.yml
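For the diff itself, something much simpler than the DataLad benchmarks workflow could be enough to start: the sketch below just compares the per-step reports from two runs on the same CI instance (file names and the regression factor are hypothetical) and errors out on a regression.

```python
# Sketch: diff per-step timings from two walkthrough runs (target branch vs PR),
# both produced on the same CI instance; exit non-zero on big regressions.
# File names and the regression factor are hypothetical.
import json
import sys

FACTOR = 1.5  # flag steps that got >50% slower

def load(path):
    return {r["step"]: r["seconds"] for r in json.load(open(path))}

base, pr = load("report-target.json"), load("report-pr.json")

regressions = []
for step, secs in pr.items():
    ref = base.get(step)
    if ref is None:
        continue  # new step, nothing to compare against
    ratio = secs / ref if ref else float("inf")
    print(f"{step:20s} {ref:6.2f}s -> {secs:6.2f}s  (x{ratio:.2f})")
    if ratio > FACTOR:
        regressions.append(step)

if regressions:
    sys.exit(f"Timing regressions in: {', '.join(regressions)}")
```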