Closed: tkadlec closed this issue 9 years ago
One idea for an aggregation strategy: the process used by grunt-phantomas, which produces JSON and uses D3 to generate charts over time. Here's a demo.
Another benefit of producing JSON by default is that it can be integrated into dedicated analytics/visualization tools like Elasticsearch.
Hey guys,
I plan to remove the grunt dependency from grunt-phantomas in the future anyway.
What do you think of combining the two tools into a new project?
I mean, the logic and functionality for drawing charts and creating a pretty interface already exist on my side. :)
@rupl Does phantomas do Speed Index? Do you know if it's on the roadmap or easy to calculate from all the other phantomas data points?
Phantomas offers lots of data points, but it doesn't boil them down into a single index number. In the context of perf budgets, I normally use it for tests like this:
phantomas --url http://gruntjs.com --assert-requests=29
You can provide as many assertions as you want, and limit the output to just those metrics as well. So once you've built up a command, you can save it in the repo and have developers run it, ensuring it passes before they're able to push or whatnot.
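For instance, here's a rough sketch of wiring that into a repo as a check script, assuming phantomas is installed locally and exits with a non-zero code when an assertion fails (the extra asserted metric and the values are just examples):

```js
// check-budget.js - sketch only: runs phantomas with a couple of assertions
// and fails the script if any of them are blown.
var execFile = require('child_process').execFile;

var args = [
  '--url', 'http://gruntjs.com',
  '--assert-requests=29',
  '--assert-domains=4' // any metric phantomas reports can be asserted like this
];

execFile('./node_modules/.bin/phantomas', args, function (err, stdout) {
  console.log(stdout);
  if (err) {
    // non-zero exit code from phantomas shows up here as an error
    console.error('Performance budget exceeded, aborting.');
    process.exit(1);
  }
});
```

Something like that could be hooked into a pre-push step or an npm script so the budget is checked automatically.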
We've opened a discussion about having grunt-phantomas log assertion failures and eventually highlight failed assertions on the charts it generates: https://github.com/stefanjudis/grunt-phantomas/issues/86
At the moment, Phantomas (and supporting tools) offer very granular but manual testing abilities. The knowledge that went into creating the WPT Speed Index (or a similar index like PageSpeed) isn't available, and producing a similar number reliably would probably require a fair amount of research and tweaking.
Here's another testing/reporting tool, also based on grunt-phantomas: https://github.com/gmetais/grunt-devperf/
These tools are all somewhat related, so I wanted to post them in case it leads to the kind of good collaboration @stefanjudis mentioned above.
@rupl & @stefanjudis — WPT actually produces JSON for each test result, so I'm guessing it wouldn't take a ton of effort to tap into what is already in place for grunt-phantomas
to make the slick visualizations (which I really dig!). Reducing the dependency on grunt was on the list of issues for me as well (#5), so I think it makes sense to see if we can pool together on this.
At the very least, I like the idea of displaying a bit more information in the console à la grunt-devperf, maybe behind a flag like --expanded or something.
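For reference, a minimal sketch of pulling a single number out of a WPT result via its JSON endpoint (the test ID is a placeholder, and the field path assumes the usual shape of the jsonResult.php response):

```js
// wpt-json.js - minimal sketch; TEST_ID is a placeholder and the field path
// is an assumption about the jsonResult response structure.
var https = require('https');

var testId = 'TEST_ID';
https.get('https://www.webpagetest.org/jsonResult.php?test=' + testId, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var result = JSON.parse(body);
    // e.g. median first-view Speed Index, if the test captured it
    console.log(result.data.median.firstView.SpeedIndex);
  });
});
```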
Man, sorry for pulling this issue in a different direction (shall we move the conversation somewhere else?). :blush:
But srsly... I'm really kind of "dreaming" of putting together a nice visualization tool. And people joining would be super awesome.
@tkadlec so you agree to put your stuff into the already existing functionality of grunt-phantomas? We can set up a new repo or even an organization for that, if preferred. :)
This in combination with phantomas sounds really really good to me. :bowtie:
@stefanjudis Well, I think we should at least explore it. :) Is the visualization stuff fairly abstracted from the core phantomas functionality? If so, I think a good first step would be looking at the WPT JSON and how much work it would take to get similar visualizations out of it.
> Is the visualization stuff fairly abstracted from the core phantomas functionality?

Yup. :)
I'm fine with creating an issue here or over at grunt-phantomas for the investigation and technical details, so we don't bloat this issue more and more. :)
+1 on the ability to save the data.
I don't really care what format it's in, as long as it's easy to parse. JSON would work well. I would love to track the stats over time. Right now even the stdout doesn't dump all of the budgets and their passing values.
This update is long overdue, but thanks to #24 the JSON can now be output to the location of your choice, allowing you to parse/present it as you wish.
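For anyone wanting to track the stats over time, here's a tiny sketch that appends one metric from the saved JSON to a CSV on each run (the file names and the SpeedIndex field are assumptions; adjust them to whatever structure the saved results actually have):

```js
// track-history.js - sketch only: file names and the SpeedIndex field are
// assumptions about the saved output, not the tool's documented format.
var fs = require('fs');

var result = JSON.parse(fs.readFileSync('results.json', 'utf8'));
var row = new Date().toISOString() + ',' + result.SpeedIndex + '\n';

fs.appendFileSync('history.csv', row);
console.log('Appended:', row.trim());
```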
wooo cool :tada:
Thanks to all who made it happen!
Should allow folks to save the test data locally so they can analyze it.