dev-launchers / dev-launchers-platform

Monorepo for all DevLaunchers internal products and libraries used by the official platform
https://devlaunchers.org
GNU General Public License v3.0

[UXR] Experiment to change our top nav label name - Step 1: Plan & logistics #1800

Open JulieMass opened 3 months ago

JulieMass commented 3 months ago

Context:

Based on our recent preference test results, we're proposing an experiment to change the name of the IdeaSpace main top-nav label. We will then monitor analytics to evaluate the effectiveness of the change.

Goal: To improve our target audience’s understanding and potentially increase engagement.


Acceptance criteria:

The following outcomes have been created:

katehirschman commented 2 months ago

Hi @JulieMass ! I spoke with my friend about sample size and statistical significance and have updated the plan with her recommendations: [https://app.enjoyhq.com/projects/DzYEVmmEp/plan]

Since we are looking at data on a monthly basis, I think it is best to compare 2023 vs. 2024 data to see the changes over time. That also helps normalize the comparison: we can look at how Dream was doing in June 2023 vs. June 2024, then compare how Collaborate does in, say, July 2024 against how Dream did in July 2023, then August 2024 vs. August 2023, and so on. Sample size will depend on how each period performs and how different the user counts are.

Let me know if you have any other thoughts on this - I always love hearing your input - and this is very much outside my research comfort zone, so I would appreciate hearing what you think.
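
For illustration only, here's a rough sketch of that year-over-year framing in pandas - all the labels, months, and user counts below are made up, not real analytics:

```python
# Hypothetical sketch of the year-over-year framing: compare how the old
# "Dream" label did in a 2023 month against how the new "Collaborate" label
# does in the same month of 2024. All counts below are invented placeholders.
import pandas as pd

data = pd.DataFrame({
    "month": ["June", "June", "July", "July", "August", "August"],
    "year":  [2023, 2024, 2023, 2024, 2023, 2024],
    "label": ["Dream", "Collaborate", "Dream", "Collaborate", "Dream", "Collaborate"],
    "users": [480, 530, 455, 510, 470, 525],
})

# Pivot so each row is a month with its 2023 (Dream) and 2024 (Collaborate)
# user counts side by side, then compute the year-over-year change.
yoy = data.pivot(index="month", columns="year", values="users")
yoy["pct_change"] = (yoy[2024] - yoy[2023]) / yoy[2023] * 100
print(yoy)
```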

JulieMass commented 2 months ago


Hi @katehirschman ,

This plan looks fantastic! I really like it. I do have a couple of clarifying questions:

- Unpaired t-test: The plan mentions using an unpaired t-test. Are we choosing this for our analysis because our study data is continuous (like "drop-offs" or "engagement time") while the "number of users" is discrete, and because it's a two-sample, between-subjects design? Is my understanding correct? (See the sketch after this list.)

- Study duration: I see you'd like to gather monthly data. To get clear results, how long do you envision the study running for? The initial schedule says "1 month later and ongoing."
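
For reference, here's a minimal sketch of what an unpaired (Welch's) t-test on that kind of data might look like - the engagement numbers are invented placeholders, not real measurements:

```python
# Minimal sketch: unpaired (Welch's) t-test comparing a continuous metric
# between two independent groups. All numbers below are invented placeholders.
from scipy import stats

# Hypothetical per-user engagement times (seconds) for the same month,
# one sample under the old "Dream" label, one under the new "Collaborate" label.
dream_engagement = [112, 98, 130, 87, 145, 101, 96, 120, 99, 110]
collaborate_engagement = [125, 140, 118, 133, 150, 127, 122, 138, 129, 135]

# equal_var=False gives Welch's t-test, which doesn't assume the two
# groups have equal variances (safer for real analytics data).
t_stat, p_value = stats.ttest_ind(
    dream_engagement, collaborate_engagement, equal_var=False
)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A p-value below the chosen threshold (commonly 0.05) would suggest the
# difference in mean engagement between the two labels is statistically significant.
```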

Regarding adding the new label, we're currently encountering some significant bugs, like the idea submission feature not working in global staging. Our developers are all swamped with assigned tasks. Additionally, Chad is making some adjustments to our sprint workflow. I'll get back to you ASAP on when we can have a developer available to help with the label change.

Meanwhile, I thought we could move forward on two fronts: research democratization with the designers, and recruiting a researcher for the team. Let me know your thoughts!

katehirschman commented 2 months ago

Hi @JulieMass The unpaired t-test is used since we are comparing two different sets of data (different time periods, users, etc.). I think you are correct in how you describe it, but this is also more quant than I am used to... it's fun to learn, though!!

I said continuous (ongoing) just in case one month does not provide enough data to be statistically significant - it doesn't need to be an exact month-to-month comparison, more like 100-ish users to 100-ish users. As I understand it, the time periods are really better compared year to year. I just want to a) make sure we have enough data to compare, and b) make sure we can see a significant difference between the two data sets.

I will do a last run-through of the plan to check for clarity - then should I move it to review? I completely understand about the bugs taking precedence, and I'm happy to move on to research democratization. Thanks!!
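
To sanity-check the "100-ish users to 100-ish users" intuition, here's a rough power-analysis sketch using statsmodels - the effect size, alpha, and power values are conventional assumptions on my part, not numbers from the plan:

```python
# Rough sketch: estimating the sample size needed per group for an unpaired
# t-test, using statsmodels' power analysis. The effect size, alpha, and
# power values below are conventional defaults, not numbers from the plan.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Cohen's d = 0.4 assumes a small-to-medium difference between the labels;
# alpha = 0.05 and power = 0.8 are common defaults.
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8)
print(f"Users needed per group: {n_per_group:.0f}")  # roughly 100 per group at d ~ 0.4
```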