cvan opened this issue 9 years ago
and maybe we just go with something simple like UserVoice - or some other service that we could throw up at something like http://support.mozvr.com?
Also: real-world usage data, e.g., daily active users, session durations, number of sites visited, platform breakdown (Mac vs. PC, Oculus vs. Vive, etc.), where we're losing people in the acquisition funnel, etc. These are critical.
It's probably worth breaking this into separate issues, as these strike me as sufficiently distinct from engineering and prioritization standpoints. What do we think?
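A couple of the usage metrics above (DAU, session durations) could be derived from a plain event log. A hypothetical sketch - the event field names (`userId`, `sessionId`, `type`, `timestamp`) are assumptions, not an existing schema:

```javascript
// Derive daily active users from a raw event log.
// Each event is assumed to look like:
//   {userId, sessionId, type, timestamp: ISO-8601 string}
function dailyActiveUsers(events) {
  const byDay = {};
  for (const e of events) {
    const day = e.timestamp.slice(0, 10); // 'YYYY-MM-DD'
    (byDay[day] = byDay[day] || new Set()).add(e.userId);
  }
  return Object.fromEntries(
    Object.entries(byDay).map(([day, users]) => [day, users.size])
  );
}

// Pair sessionStart/sessionEnd events per session id to get durations (ms).
function sessionDurations(events) {
  const starts = {};
  const durations = [];
  for (const e of events) {
    if (e.type === 'sessionStart') starts[e.sessionId] = Date.parse(e.timestamp);
    if (e.type === 'sessionEnd' && starts[e.sessionId] != null) {
      durations.push(Date.parse(e.timestamp) - starts[e.sessionId]);
    }
  }
  return durations;
}
```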
> Also: real-world usage data, e.g., daily active users, session durations, number of sites visited, platform breakdown (Mac vs. PC, Oculus vs. Vive, etc.), where we're losing people in the acquisition funnel, etc. These are critical.
yes, agreed. I touched on these Telemetry metrics above:
use some service like Google Forms, Typeform, or Intercom for submitting issues and communicating with our users. check out Intercom if you're unfamiliar. if we can use this without privacy issues at Mozilla, this would be fantastic; perhaps worth talking to the Metrics + Privacy teams if we're interested. it would give us so much insight into who our users are.
> It's probably worth breaking this into separate issues, as these strike me as sufficiently distinct from engineering and prioritization standpoints. What do we think?
yeah, I'll be filing separate issues as I see fit.
> yes, agreed. I touched on these Telemetry metrics above:
Cool, I saw that but being unfamiliar with Intercom I wanted to confirm. :) Just took a skim and it looks pretty slick.
in order to go live, we need some way of collecting user feedback + app crashes + JS errors + perf data.
not sure how the browser.html project is handling user feedback, but I feel they have almost identical use cases. Today I asked Paul + Gordon on Slack how they're doing this - waiting to hear back.
collecting user feedback (bugs, comments, suggestions)
whichever method of collecting feedback we use, we should prefill the form with the Git SHA of Horizon + the build ref of Graphene + any other useful version info (and we should do the same for JS errors we log).
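A minimal sketch of the prefill idea, assuming the build system injects the version constants at build time (`HORIZON_SHA` and `GRAPHENE_BUILD` are hypothetical names, not an existing API):

```javascript
// Hypothetical build-time constants; in practice these would be injected
// by the build system rather than hard-coded.
const HORIZON_SHA = 'deadbeef';    // Git SHA of Horizon
const GRAPHENE_BUILD = '20160101'; // build ref of Graphene

// Stamp any feedback form submission or JS error report with version info.
function withVersionInfo(payload) {
  return Object.assign({
    horizonSha: HORIZON_SHA,
    grapheneBuild: GRAPHENE_BUILD,
    userAgent: typeof navigator !== 'undefined' ? navigator.userAgent : null
  }, payload);
}
```

The same helper could then stamp both feedback submissions and logged errors, e.g. `withVersionInfo({type: 'feedback', comment: '...'})`.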
recording Horizon (Graphene) crash data
recording JS errors in Horizon (and in content)
listening for `error` on `window` that collects errors and does a simple XHR. errors could come from VR content (e.g., calling `requestFullscreen` with `vrDisplay` + `navigator.getVRDevices`) and/or bad code in our content script JS that interacts with the page (we have a lot of code in the content script, so it's quite possible - though I really hope it won't ever happen). if we do end up logging content's perf timing + errors, we should probably do it for only VR content, not classic mono content (though the JS gets run for mono content too, even more so actually) - and should we submit logs when the VR content page closes [listening for `beforeunload`/`unload` on `window`]?
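A sketch of what that global handler could look like - batch errors from the `error` event on `window` and flush them over XHR on `beforeunload`. The endpoint URL is a placeholder, and the wiring is shown in comments so the queue/flush logic stays testable on its own:

```javascript
// Queue of serialized errors waiting to be sent.
const errorQueue = [];

// Record one ErrorEvent's standard fields (message, filename, lineno, colno).
function recordError(event) {
  errorQueue.push({
    message: event.message,
    source: event.filename,
    line: event.lineno,
    column: event.colno,
    timestamp: Date.now()
  });
}

// Drain the queue and hand the JSON body to a transport; returns whether
// anything was actually sent.
function flushErrors(send) {
  if (!errorQueue.length) return false;
  send(JSON.stringify(errorQueue.splice(0)));
  return true;
}

// In the browser, wired up roughly like so:
// window.addEventListener('error', recordError);
// window.addEventListener('beforeunload', () => flushErrors(body => {
//   const xhr = new XMLHttpRequest();
//   xhr.open('POST', 'https://example.invalid/errors'); // placeholder URL
//   xhr.send(body);
// }));
```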
perf data
see above. New Relic is good for this, or any other RUM-based perf tool - though we can also collect Navigation Timing and Resource Timing ourselves. synthetic testing is good too; perhaps just automated WebPageTest (running Firefox) would suffice - or we could have our own Sauce Labs jobs (they've got sweet videos) queued up from Travis CI (every time we open a PR or manually start a test).
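If we collect Navigation Timing ourselves, a minimal RUM-style summary might look like this - the metric names are our own, and `timing` is expected to be a `PerformanceTiming`-shaped object (i.e., `window.performance.timing`):

```javascript
// Summarize a few standard Navigation Timing intervals (all in ms).
// `timing` is a PerformanceTiming-like object, e.g. window.performance.timing.
function navTimingSummary(timing) {
  return {
    dns: timing.domainLookupEnd - timing.domainLookupStart,
    connect: timing.connectEnd - timing.connectStart,
    ttfb: timing.responseStart - timing.requestStart,
    domReady: timing.domContentLoadedEventEnd - timing.navigationStart,
    load: timing.loadEventEnd - timing.navigationStart
  };
}
```

The resulting object could be beaconed alongside the JS error payloads above with the same XHR transport.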