Some client context here:
"I want to view the user sessions through the lens of user paths" For example: There are 79 users who drop off in my funnel at a certain stage, to diagnose the cause I need to
Other feedback
Additional points mentioned elsewhere and from exploration (will keep adding as I think of more):
How to connect to funnels
Top of mind:
Been giving this some thought; pardon the verbosity, it's quite late and it has been a very long day. Before issuing an opinion on whether we should focus on this, I wanted to take us through the exercise of what problem we're solving, whether it's a problem worth solving, and whether this is the right solution for that problem. Would love to hear thoughts on this before we decide whether to work on it right now.
Taking a step back and thinking from first principles about what we're trying to solve: if I'm working on optimizing my conversion, and particularly on understanding why users didn't convert, knowing what users did after their last completed step seems like a natural place to start. That holds in most contexts, but not all: for instance, if users bounced from your product entirely, this won't give you much signal (quantitative correlation analysis or session recording may provide better insights there). Knowing what they did before can also provide some interesting insights.
Knowing what users did before or instead of the funnel can give me some hints into why they didn't convert, but it's only a partial picture and requires assumptions and judgement on my part. For example, if in my purchase funnel I see that a significant number of users went to the pricing page instead of purchasing, I could assume pricing is unclear on my purchase page, but I could just as easily assume the product was perceived as expensive and users were looking for a cheaper option.
Is paths the right approach to answer the question "what did users do instead of converting?" I'm thinking it's not the ideal approach from a user's perspective, but it may be the best feasible one. If I asked this question of another human, I would expect a concrete, digested answer (e.g. "instead of purchasing, users are moving their cart to 'save for later'", or "instead of purchasing, users are browsing for alternative products"). The reason we jump to paths as the solution might be that it's the best feasible way to translate raw data into conclusions (all the features we've been thinking about, from error monitoring to n-deep paths, are attempts to answer this question), but this might also be a great opportunity for a disruptive solution. The traditional paths feature requires trial and error and is susceptible to the biases of whoever interprets it. I think the same applies to the "what did they do before?" question.
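Purely to make the "what did they do after their last completed step?" question concrete (not how our funnels/paths queries actually compute this), here's a minimal sketch over a flat event log. The column names (`distinct_id`, `event`, `timestamp`) and the funnel steps are assumptions for the example:

```python
from collections import Counter

import pandas as pd

# Hypothetical funnel steps; swap in your own event names.
FUNNEL = ["product_viewed", "added_to_cart", "purchase_completed"]


def events_after_drop_off(events: pd.DataFrame, funnel: list[str]) -> Counter:
    """Count the first event each dropped-off user performed after the last
    funnel step they completed. Expects columns: distinct_id, event, timestamp."""
    counts: Counter = Counter()
    for _, user_events in events.sort_values("timestamp").groupby("distinct_id"):
        names = user_events["event"].tolist()
        deepest_step = -1   # index of the deepest funnel step completed, in order
        last_pos = -1       # position of that step in the user's event stream
        search_from = 0
        for i, step in enumerate(funnel):
            try:
                pos = names.index(step, search_from)
            except ValueError:
                break
            deepest_step, last_pos = i, pos
            search_from = pos + 1
        if deepest_step == -1 or deepest_step == len(funnel) - 1:
            continue  # never entered the funnel, or converted all the way through
        after = names[last_pos + 1:]
        counts[after[0] if after else "(no further events)"] += 1
    return counts
```

The output is exactly the kind of digested answer described above ("instead of purchasing, users did X"), which is what a paths-style view would need to surface.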
We have a huge opportunity in answering these questions (with or without paths), mainly due to autocapture. Presumably we have a larger sea of data that could help answer them better (vs. alternatives that rely only on custom events or page views), but it'll also be a huge challenge: we'll have to figure out a way to group meaningful sets of actions together so this question can be properly answered.
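On "grouping meaningful sets of actions together", one loose sketch of the idea (an assumption for illustration, not a proposal for the actual grouping logic) is to roll autocaptured clicks up by the first URL path segment, so piles of low-level `$autocapture` events collapse into a handful of interpretable groups. `$autocapture` and `$current_url` follow our event/property naming; the grouping rule itself is made up:

```python
from urllib.parse import urlparse


def coarse_action_group(event_name: str, properties: dict) -> str:
    """Roll a raw event up into a coarse, human-readable action group."""
    if event_name != "$autocapture":
        return event_name  # custom events tend to be meaningful already
    path = urlparse(properties.get("$current_url", "")).path
    first_segment = path.strip("/").split("/")[0] or "home"
    return f"interacted with /{first_segment}"
```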
Now let's say we become amazing at answering these questions. Does this help us answer "why didn't they convert?" It helps, but it doesn't get us all the way there. Adding a quantitative layer next (or maybe even before) could help narrow the scope of potential hypotheses (correlation analysis, for instance). With all this, I can probably use judgement in some cases to build hypotheses, but in others (if not most) I'll want to layer in some qualitative knowledge (e.g. through session recording) to better inform them.
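To make the quantitative layer concrete, here's a rough sketch of what a correlation analysis could look like: for each event, compare the odds of having performed it among converters vs. drop-offs. This is just an illustration of the concept over plain Python sets of event names per user, not what we'd necessarily build; events with a ratio well below 1 are over-represented among drop-offs and become candidate clues.

```python
from collections import Counter


def event_odds_ratios(
    converted: dict[str, set[str]],  # user id -> set of event names performed
    dropped: dict[str, set[str]],
) -> dict[str, float]:
    """Per event, the odds ratio of 'performed this event' for converted vs.
    dropped users, with +1 smoothing so nothing divides by zero."""
    conv_counts = Counter(e for evs in converted.values() for e in evs)
    drop_counts = Counter(e for evs in dropped.values() for e in evs)
    n_conv, n_drop = len(converted), len(dropped)

    ratios: dict[str, float] = {}
    for event in set(conv_counts) | set(drop_counts):
        a = conv_counts[event] + 1           # converted and did the event
        b = n_conv - conv_counts[event] + 1  # converted, didn't do it
        c = drop_counts[event] + 1           # dropped and did the event
        d = n_drop - drop_counts[event] + 1  # dropped, didn't do it
        ratios[event] = (a / b) / (c / d)
    return ratios
```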
I think it's a matter of answering,
Thanks for putting a lot of effort into this and sharing the well-thought-out context here, @paolodamico.
I feel there's a major issue that we need to accept with every solution we build in this space: it's almost impossible to know for certain why something happened (even if you speak to the person, they may not remember).
So I believe our role here should be to provide the most likely clues for people to piece together, so they can build a hypothesis for why something happened with the highest possible degree of confidence.
As such, I'd like to quickly evaluate our options through a rough heuristic of "ability to provide high-confidence clues":
- How do we increase people's ability to find clues?
- How do we increase confidence in the clues found?
How does each of our approaches weigh up against these heuristics? I've rated the general approaches we've discussed against them below.
| Approach             | Clarity | Sample Size | Inferred Priority |
| -------------------- | ------- | ----------- | ----------------- |
| Session Recording    | High    | Low         | P2                |
| Paths                | Mid     | High        | P1                |
| Correlation Analysis | High    | High        | P0                |
| Customer Surveys     | Mid     | Low         | P3                |
As discussed in sync today, opening this issue so we can spend the next ~24 hours exploring the problems the Paths product could solve towards our Diagnosing Causes goal, and decide whether it makes sense as the theme for the next sprint (1.28 2/2). @marcushyett-ph summarized it best when comparing against the incumbent option (session recording, #4884): "we seem to feel that session recording will not be a waste of our time but it might also not be the best thing to be working on".
CC @marcushyett-ph @fuziontech @macobo @EDsCODE @kpthatsme @jamesefhawkins @timgl