paypal / nemo-docs

Documentation for the Nemo automation framework

Nemo 3 and Nemo Runner value prop #20

Open grawk opened 6 years ago

grawk commented 6 years ago

nemo-core@1 (formerly nemo)

nemo (formerly nemo-runner)

Release and Support

ethangodt commented 6 years ago

Just want to share some thoughts based on our experience setting up Nemo recently, and the accessories we built around it to make it especially useful for us.

  1. Could we consider adding some nemo init functionality to the CLI instead of requiring someone to find the generator-nemo tool? It may add some bloat to the project, but it would presumably reduce the friction of getting started. It took us several weeks to get set up with everything we have now. It would be great if that took 2 minutes.

  2. It's been incredibly important for us to stuff arbitrary data in the reports and in what we store in influxdb (for us it's the same thing). Here is a list of what we keep (a rough sketch of the record's shape follows this list):

    title
    profile
    state <-- like PASS/FAIL
    ——————
    sluggishStage
    jawsFailure
    (These two are just helper flags derived from the stack trace, which we use to easily identify our two biggest sources of flakiness: fake user creation and stage sluggishness.)
    ——————
    file
    startTime
    endTime
    metadata <-- err, maybe we should have put more in here, but this is random data like email, CAL ID, etc.
    duration <-- helps us identify slow tests
    errorMessage <-- makes for useful queries to see how often some flake issues are happening
    errorStack
  3. Include a default reporter (which supports aggregation), but allow use of third-party solutions. Perhaps the biggest issue we had getting value out of Nemo was simply seeing the results of a suite that, for example, ran 150 tests against 6 different browsers. Because we run each test in parallel, we originally got the choice of peeking at 900 report.json files (literally) or quickly glancing at the console output to get some sense of the damage (very hard). We ended up making our own reporter which aggregates the tests by file/test and by browser run. This seems like an extremely common use case and should be considered. If it were covered by a default reporter, that could be really helpful out of the box.
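
To make the field list in (2) concrete, here is a rough sketch of what one such record might look like as a plain JS object. The field names come from the list above; the values and the layout of metadata are made up for illustration, not our actual schema.

```js
// Illustrative only: one test-result record with the fields listed in (2).
// Values (and the metadata layout) are invented for the example.
const record = {
  title: 'checkout > guest user can pay with a new card',
  profile: 'chrome',            // browser profile the run used
  state: 'FAIL',                // PASS/FAIL
  sluggishStage: true,          // helper flag derived from the stack trace
  jawsFailure: false,           // helper flag for fake-user-creation flakiness
  file: 'test/functional/checkout.js',
  startTime: 1528151000000,     // epoch ms
  endTime: 1528151042000,
  duration: 42000,              // ms; helps identify slow tests
  metadata: { email: 'shopper+001@example.com', calId: 'abc123' },
  errorMessage: 'TimeoutError: waiting for element #card-number',
  errorStack: 'TimeoutError: waiting for element #card-number\n    at ...'
};
```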

Additionally, our reporter lets you drill down into the health of a particular test/browser combination within the HTML itself. The drill-down data comes from influxdb. It's really helpful for us.

(e.g. here are two tests I just ran in chrome and firefox: if both fail the test shows red, if one fails the test shows yellow.) [screenshot: report]
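
For what it's worth, the aggregation itself does not need to be much code. Below is a minimal sketch (not our actual reporter) that folds a directory of per-run report.json files into one summary keyed by test, assuming each file holds a single record shaped like the example above, and applies the red/yellow status rule just described.

```js
// Minimal sketch, not the real reporter: assumes one JSON record per run
// (shaped like the example record above) sitting in a single directory.
const fs = require('fs');
const path = require('path');

function aggregate(reportDir) {
  const byTest = {};
  const files = fs.readdirSync(reportDir).filter((f) => f.endsWith('.json'));
  for (const name of files) {
    const run = JSON.parse(fs.readFileSync(path.join(reportDir, name), 'utf8'));
    const key = `${run.file} :: ${run.title}`;
    (byTest[key] = byTest[key] || []).push(run);
  }
  return Object.keys(byTest).map((key) => {
    const runs = byTest[key];
    const failed = runs.filter((r) => r.state === 'FAIL').length;
    return {
      test: key,
      // every browser run failed -> red, some failed -> yellow, none -> green
      status: failed === runs.length ? 'red' : failed > 0 ? 'yellow' : 'green',
      runs: runs.map((r) => ({ profile: r.profile, state: r.state, duration: r.duration }))
    };
  });
}
```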

👍 for how easily storage in influx would enable everyone to set up a dashboard. Our grafana dashboard helps us monitor our regression runs at a high level. We can see the most-failing tests, the slowest tests, how browsers are performing, etc. We could save a custom grafana image that plugs into whatever we want to do officially with the influxdb image.
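
To illustrate the influx side, here is a sketch under assumptions, not an official schema: one point per test result, written with the node influx client, with the browser profile, state, and file as tags so grafana can group on them. The database and measurement names are placeholders.

```js
// Sketch only: one InfluxDB point per test result via the node "influx" client.
// Tags (profile, state, file) are cheap to group/filter on in grafana;
// the remaining values go in as fields.
const Influx = require('influx');

const influx = new Influx.InfluxDB({
  host: 'localhost',
  database: 'nemo_results',          // placeholder database name
  schema: [{
    measurement: 'test_result',      // placeholder measurement name
    tags: ['profile', 'state', 'file'],
    fields: {
      title: Influx.FieldType.STRING,
      duration: Influx.FieldType.INTEGER,
      errorMessage: Influx.FieldType.STRING
    }
  }]
});

function recordResult(result) {
  return influx.writePoints([{
    measurement: 'test_result',
    tags: { profile: result.profile, state: result.state, file: result.file },
    fields: {
      title: result.title,
      duration: result.duration,
      errorMessage: result.errorMessage || ''
    },
    timestamp: new Date(result.endTime)
  }]);
}
```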