jupyterlab / frontends-team-compass

A repository for team interaction, syncing, and handling meeting notes across the JupyterLab ecosystem.
https://jupyterlab-team-compass.readthedocs.io/en/latest/
BSD 3-Clause "New" or "Revised" License

Jupyterlab community survey #104

Closed · lresende closed this 3 years ago

lresende commented 4 years ago

Based on the discussion initiated in the JupyterLab vision for the next few years thread, a few JupyterLab community members put together a survey that asks questions based on items raised in that thread.

Note that this topic has already been discussed a few times during the weekly dev meetings and other parallel meetings.

We are at the point now that we would like to send this as a "JupyterLab community survey" and would like to seek community approval for doing so.

Please express your support with a proper comment.

I will leave this open for a week before trying to tally the votes.

choldgraf commented 4 years ago

I think it's a great idea 👍

The only thing that I would caution is to be clear about who the survey has gone to, who has responded, etc. There may be certain groups of people that are less likely to answer these questions than others, or a certain focus taken in the questions, and we may bias interpretation towards a particular perspective if that isn't recognized ahead of time. For example, the language here feels a bit "ML/industry-focused" as opposed to research and education (which I think are two smaller communities from a user perspective, but are key communities to consider from a "mission" perspective). As an aside, I would not lump "professor/instructor/teacher" together in the same group; they are very different roles, and IMO it's important to know whether a professor is using Jupyter for research vs. teaching, or whether a teacher is at a university vs. a community college, etc.

Just a note that the survey is called 'Jupyter Notebooks Survey'...I am not sure if / how much you want to conflate "Jupyter Lab" with "Jupyter Notebooks", but just wanted to point that out.

goanpeca commented 4 years ago

I think it's a great idea too 💯

Thanks for the comments @choldgraf. I think it is worth revisiting the survey based on your comments and making some small tweaks :) otherwise let's move forward!

saulshanabrook commented 4 years ago

This sounds great! Thank you to everyone who put in effort on this survey and I am sure we will get a lot of useful information out of the results.

ellisonbg commented 4 years ago

Based on the last state I saw the survey in, more work was needed. A couple of us had met a few times and were making good progress on it, but things slowed down due to JupyterCon. Given the importance of this, I believe this needs to be posted publicly as a PR (this repo is fine) for review and approval by the core team.

saulshanabrook commented 4 years ago

Given the importance of this, I believe this needs to be posted publicly as a PR (this repo is fine)

Could you clarify what you think should be posted as a PR? It looks like the content is in the linked SurveyMonkey poll.

isabela-pf commented 4 years ago

I’m glad to see discussion around this happening again! Overall I’m in favor of running this survey (but it’s worth noting I’m a biased party since I spent time working on it). I do think it is possible to gain meaningful info about community perceptions and concerns based on where the survey is now, though I think some of the above comments are helpful feedback worth considering.

The main questions I still have revolve around what the next steps will be if the community supports it here. I'm open to different ideas; I just would like to request that these next steps be a transparent, publicly listed set of steps so that:

  1. People who are interested in the survey know how they can easily track it or participate
  2. This work doesn’t get lost in a state of ambiguous decision-making
  3. It can serve as a reference for similar work in the future (including thoughts on how to navigate other non-code contributions around Jupyter)

lresende commented 4 years ago

Based on the last state I saw the survey in, more work was needed. A couple of us had met a few times and were making good progress on it, but things slowed down due to JupyterCon. Given the importance of this, I believe this needs to be posted publicly as a PR (this repo is fine) for review and approval by the core team.

Constructive feedback is always welcome, and something like @choldgraf's comment above gives us specific areas to work on.

@ellisonbg could you please clarify what else is needed from your point of view?

ghost commented 4 years ago

Thank you, @choldgraf, this is the third time we've heard the feedback about the ML theme being overrepresented, so I just deleted another one of the ML questions. There is now only 1 ML-specific question left in the survey, which asks about the type of analysis being performed, compared to 7 questions about usage patterns, 3 about collaboration, and 3 about data. Maybe people are mixing up ETL and big data with ML?

@choldgraf regarding education/research, what information would you like to know? Happy to add it in and delete things to make room for it. We cut out industry-related questions like finance vs. pharma vs. healthcare in order to focus on use cases, so I'm less inclined to focus on distinctions like community college vs. university, and more inclined to focus on the use cases you would like to learn more about. Would you add to question 7? Can you list the roles you would like to see?

ellisonbg commented 4 years ago

We did a bunch of work in a couple of different Google Docs that hasn't been incorporated into the SurveyMonkey version.

ellisonbg commented 4 years ago

Part of what we found is that SurveyMonkey is not optimized for collaborative work or feedback. Also, given the importance of this, the content of the survey needs to be approved by the core team, and a PR is the most appropriate way for that approval to happen. Once that has happened, it can be moved to the actual survey. I can try to pull together a PR from the various Google Docs we had.

lresende commented 4 years ago

@ellisonbg For the awareness of others who might not have participated in the Google Docs discussions, could you please share them on this issue?

psychemedia commented 4 years ago

Q5: "What are your go-to tools for performing data science, scientific computing, and machine learning on your laptop/ desktop (non-cloud) for data science? (pick up to 2)"

JupyterLab / Jupyter notebook are given as a single option: split these into two options (to capture those who actively use classic notebook & avoid JupyterLab) and raise the choices to up to 3.

Q6 "How do you run and/ or access Jupyter? (pick up to 4)"

I tend to run things locally in a Docker container, often built from a GitHub repo using repo2docker and pushed to Docker Hub using a repo2docker GitHub Action.

BinderHub / MyBinder also appears to be missing?

Also, what about "institutionally provided JupyterHub / authenticated multiuser Jupyter notebook server" and BinderHub? E.g., try to draw out whether folks are using credentialed persistent servers, credentialed temporary servers, or public temporary servers. Then have another question to try to tease out whether the server is provided by a corporate, teaching, government, research, or "retail" provider.

Question 7 "What tasks do you need to perform and what tools do you use to accomplish them?" takes forever to complete. There are also different pathways / levels of meaningfulness along the rows: if I answer NEVER in column A, the extent to which the other columns make sense may also depend on how I answer the other questions.

Q8 Datasources: are SQLite databases SQL or file system files?

Would it be useful to know what sort of resource requirements (CPU/GPU/memory/bandwidth) people typically have and who provides them (own desktop/laptop computer, own local server, retail cloud, institutionally provided)?

Q18 "What is your reason for sharing a notebook with someone else?" What about other sorts of use case, like training or teaching tutorials, tutor support, providing feedback on assessment, help desk?

ghost commented 4 years ago

@psychemedia thank you.

lresende commented 4 years ago

I am really glad we are making quick progress here. It's unfortunate we missed the opportunity to do this during JupyterCon. We should try to have a deadline for reviews by the end of the week, and then give a couple of days for any extra voting/approval required; otherwise we might get into the holiday season, and I am not sure how that could affect participation.

ellisonbg commented 4 years ago

Writing useful surveys takes time. For reference, I was putting in 3-4 hours a week for a couple of weeks on this before JupyterCon. We were making good progress, but there is still a good amount of work to do. I don't think this fast time frame is reasonable. I have another meeting at 9am today so won't be able to make the weekly meeting today. Here is the google doc:

https://docs.google.com/document/d/1M-Qod4nByssdZJMlQ1HGr4LkjuZS0MjriD4C93kZg9w/edit

I can pull this together into a PR later today.

ellisonbg commented 4 years ago

Yeah, I had a meeting canceled this morning so was able to submit the PR:

https://github.com/jupyterlab/team-compass/pull/106

I propose we start to provide feedback on this version through the PR.

goanpeca commented 4 years ago

It seems this survey has been taking longer than most of the parties involved expected, and I want to share a few thoughts:

Expanding on @lresende:

The JupyterCon milestone was missed, and with the holidays approaching, the number of users/devs/people we may get feedback from will diminish.

We are at the point now that we would like to send this as a "JupyterLab community survey" and would like to seek community approval for doing so.

I think at this point most of the community approves of running the survey, but there are four things that have not been defined and need to be defined:

1. How to collaborate and make suggestions on the latest content of the survey that was created on SurveyMonkey?

Part of what we found is that SurveyMonkey is not optimized for collaborative work or feedback.

Please reconsider using a publicly open Google Doc.

2. What is the deadline to make any suggestions on the content of the survey?

Making surveys is an art and a science. That does not mean this should take months. Some data is better than no data, and the perfect is the enemy of the good.

I propose a few days before the Community Call on 11/17/20, so that the vote can happen on that date and the content can then be moved to SurveyMonkey.

Please let's agree on a reasonable deadline for making comments.

3. What is the process to vote on the final content once the deadline has arrived, who is voting and how are votes resolved?

I propose we define a group with an odd number of members (5, 7, or 9) so that a vote on the final content of each question/answer can be made. Each member gets the same weight and can vote (Yes/No). A simple Yes majority means the question is approved and ready to be moved to SurveyMonkey. A No majority means either remove the question or make very minor tweaks if that fixes it.

There should probably be another round of voting for questions that got a No majority, to review any minor tweaks, to be resolved in the same call/meeting.

Please let's agree on a reasonable voting process to decide the questions that will go into the survey after the deadline.

4. What are the next steps after passing, in the context of this being accepted by JupyterLab?

Once approved and moved to SurveyMonkey then:

Please let's agree on the steps, so that we can expand on this and document it moving forward for other similar initiatives.

lresende commented 4 years ago

As many, if not all, of the participants in the JupyterLab meeting today seemed to agree, we have spent too much time on the current survey.

Here is a proposed timeline:

  2. What is the deadline to make any suggestions on the content of the survey?

How about a tentative and slightly aggressive schedule that tries to match the Lab 3.0 release announcement?

  3. What is the process to vote on the final content once the deadline has arrived, who is voting, and how are votes resolved?

Unfortunately, I don't think we have much of a say here. The "binding" votes are probably scoped to the official JupyterLab committers, and it should be a majority vote (more than half of the votes cast) unless JupyterLab has different bylaws related to voting procedure.

What is the alternative here if the approval vote fails? Well, we can always do a survey independent of the community, which is definitely not the goal, but a last resort.

  4. What are the next steps after passing in the context of this being accepted by JupyterLab?

I believe that the more amplification the better. So on top of @goanpeca's suggestions, I would say:

psychemedia commented 4 years ago

Just a note about amplification: I assume a banner on MyBinder might be possible, but that then introduces a possible bias. You can defend against that a little by having a form entry asking where you found out about the survey, with MyBinder as one of the options, etc.

choldgraf commented 4 years ago

Yeah I think the Binder team would be +1 on including a survey from JupyterLab as a banner - we have done it for Binder before. Would need to run it by folks in a team meeting though.

lresende commented 4 years ago

Looks like we are making good progress on reporting survey updates/suggestions on this issue, and @LayneSadler has been applying these very quickly (@choldgraf there is still one open issue waiting for your feedback).

As we are fairly far along with the contents, I would suggest we keep it this way unless we really need to make drastic changes to the survey, which I don't think is the case. Yes, I understand it's not perfect, but it's one less place where we need to keep things in sync.

If everything goes as planned, we will listen to feedback, quickly iterate on it over the weekend, and start a vote on Monday (Nov 2).

choldgraf commented 4 years ago

Thanks for those edits @goanpeca - regarding specific things to learn about research or education. I guess it may be hard to insert these kinds of questions into the questionnaire in a way that isn't obviously about research/education. E.g., I am interested to learn what people use for grading, whether they want/need an auto-grader, whether they want something integrated into an interface or something callable from an API, etc. But, perhaps that is better to run as a research/education-focused survey in the future. As I mentioned, I feel a bit weird pushing that issue since few people on the JupyterLab team are embedded within an academic context, so I think you all should go with the survey that makes the most sense for you.

Topics that I think are missing from a "research" standpoint are reproducibility and (scholarly/book) publishing: questions about whether users want to be able to write documents in JupyterLab vs. just analyses and scripts, whether they care about having reproducibility features built into their interfaces, or things like versioning of notebooks, etc.

I don't think that my first paragraph should be a blocker on this, though I think we should include some questions about writing documents/narratives/etc, as well as reproducibility, as those are both core parts of the Jupyter story.

ghost commented 4 years ago

@choldgraf there's an idea: modes for editing MyST Markdown and Jupyter Books. Let me get through the workweek and I'll run some changes by you.

ghost commented 4 years ago

@ellisonbg raised important points about the nature of collaboration. If we do not gather the information necessary to answer the question "do users need real-time collaboration?", then the survey has failed.

[screenshot of the proposed collaboration question]

Added the above question to the existing collaboration section, which hopefully reconciles a major fork in the surveys by figuring out more of the who/what/when/where/why of collaboration.

ghost commented 4 years ago

@choldgraf

Regarding research/ markdown docs:

  • Q7 use cases: Split content creation into “Documenting research (scientific papers, reports)” and “Creating content (blogs).” To make room for it, dropping “deploying code to production” and merging “building data intensive dashboards” into the viz item.
  • Q20 UI problems: Added “No modes for editing other Jupyter documents (MyST, Jupyter Book).”

Regarding roles:

  • Q4 roles: Split into "Teacher/ lecturer" and "Tutor/ Teacher's assistant." Leaving professor out of it. There are existing boxes for "researcher/ scientist."

Regarding versioning/ reproducibility:

  • Q19 collaboration problems: Tweaked “More robust version history of a notebook.” Tweaked “Don't know what dependencies (versions of language, packages, extensions) a notebook uses.” Added “Don't have the data that a notebook is supposed to use.”
  • Q15 scale problems: Already asks about batch execution (parameterization) and saving notebook outputs.

choldgraf commented 4 years ago

@LayneSadler these are great, thanks for these updates :-)

Ah, one other thing I think is missing from this survey: extensions and development. In my opinion, a huge reason that the notebook was so popular was that it was so hackable. Anybody could write an extension with (relatively) minimal background knowledge. I often wonder to what extent JupyterLab should put effort towards re-capturing this. One of the core value propositions of other UIs like VSCode is their extension mechanism, marketplace, and community. I think that "extensibility" is a core feature of JupyterLab just as much as any other part of the UI or functionality is.

So I'm curious how users feel about this - do they wish they could extend JupyterLab more easily? Do they find documentation / tutorials / APIs for extending JupyterLab to be straightforward or confusing? Do they find it easy to discover and extend their interface via extensions?
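
For readers less familiar with the extension mechanism being discussed, here is a minimal sketch of what a JupyterLab frontend extension entry point looks like. This is a rough illustration based on the public JupyterLab 3.x TypeScript API; the plugin id, command id, and labels are invented for this example.

```typescript
import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';
import { ICommandPalette } from '@jupyterlab/apputils';

// A toy extension that registers a single command and exposes it in the
// command palette. The id, command name, and labels are made up for
// illustration only.
const plugin: JupyterFrontEndPlugin<void> = {
  id: 'hello-survey:plugin',
  autoStart: true,
  requires: [ICommandPalette],
  activate: (app: JupyterFrontEnd, palette: ICommandPalette) => {
    const command = 'hello-survey:say-hello';
    app.commands.addCommand(command, {
      label: 'Say Hello (demo extension)',
      execute: () => {
        console.log('Hello from a user-written JupyterLab extension!');
      }
    });
    // Make the command discoverable in the command palette.
    palette.addItem({ command, category: 'Demo' });
  }
};

export default plugin;
```

The point is less this specific snippet and more that a complete extension entry point fits in a couple of dozen lines of TypeScript, which is the kind of hackability being described above.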

lresende commented 4 years ago

So I'm curious how users feel about this - do they wish they could extend JupyterLab more easily? Do they find documentation / tutorials / APIs for extending JupyterLab to be straightforward or confusing? Do they find it easy to discover and extend their interface via extensions?

I believe the main target here is users, and I don't expect to put the burden of fixing the deficiencies of a product on them by asking them to create extensions. To that extent, I would rephrase this as "what makes it hard to use Jupyter notebooks/JupyterLab", and then we as developers would look into which improvements we should focus on.

ghost commented 4 years ago

If Jupyter is going to compete and win against corporate giants, then it needs to wholeheartedly embrace the non-linear benefits of a platform strategy. I highly recommend Platform Revolution and Exponential Organizations.

[image: long-tail distribution diagram]

Long-tail distribution: the Jupyter core team cannot possibly develop all things for all users across science, data science, and software engineering, but these users will leave if their niches aren't served. The core team should focus on developing "the hits" to solve major problems and on enabling power users in different fields to develop the "niche content." For example, a genome browser should be developed by someone in genomics, not Jupyter.

Regarding the survey, I agree, the focus should be the core product and getting a better idea of what "the hits" are. However, considering the strategic importance of extensibility, I also agree that it should not be neglected. After all, wouldn't something like an "App Store" be provided by the core product?

My recommendation is that we (a) gauge awareness/ satisfaction with the Extension system in general, and (b) try to tag people for recontact for a followup to learn more about extension development. This area feels better suited for qualitative research (user interviews) to identify the problem areas. @isabela-pf what are your thoughts here?


Changes: [screenshots of the survey changes]

choldgraf commented 4 years ago

I think that there's a more complex community make-up for Jupyter(Lab) than just "developers" and "users". One of Jupyter's strong points has always been the way in which it builds components for the community to extend and modify, and I think it fits in with Jupyter's open source culture of building a "big tent" of community members with a variety of development expertise.

As I mentioned before, I think one of the key reasons for the success of the Notebook UI was because "users" were empowered to hack and extend it for many purposes. Those people are certainly not the majority of Jupyter users but I think they are an important group of power users to remember. The people that were building the ipython notebook extension ecosystem were not always developers in the traditional sense, they often began as users, and realized they had a need and enough Python knowledge to build tools that were useful for others. I think we want to encourage that kind of creation and sharing as a core pillar of our community.

re: @LayneSadler 's recommendation, I think you make a good point re: qualitative research. There will certainly be fewer folks with opinions and interest in developing extensions, but those are folks that might be worth a heavier-touch investigation.

psychemedia commented 4 years ago

I think one of the key reasons for the success of the Notebook UI was because "users" were empowered to hack and extend it for many purposes. Those people are certainly not the majority of Jupyter users but I think they are an important group of power users to remember.

The ability to treat the classic notebook view as a relatively easily extensible document editing platform is important to me on two counts:

  1. the ability to customise the environment using community developed extensions and share that environment with my "users" (students on a course). This has a multiplier effect: I have exposed third-party extensions, as an integral part of a provided environment, to 2k+ distance education students, many of them in employment, over the last few years.
  2. the ability for me to develop relatively simple extensions as an educator/have-a-go technologist, not as a developer, in order to support a teaching point or improve the presentation of notebooks as educational materials (see the sketch below).

In both senses, I essentially act as a customiser and "reseller" of notebook environments, and I'm not sure what set of answers to the current questions would allow this sort of segment to be detected.
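
To make item 2 above a little more concrete, here is a rough sketch of how a small presentation tweak for teaching notebooks might look as a JupyterLab extension. This is my own illustration using the JupyterLab 3.x TypeScript API, not the classic-notebook nbextension mechanism the comment actually describes; the plugin id and CSS class name are invented.

```typescript
import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';
import { INotebookTracker } from '@jupyterlab/notebook';

// A toy plugin that tags every opened notebook with a CSS class so a
// course-specific stylesheet (shipped with the extension, not shown here)
// can restyle notebooks as educational materials.
const courseStyling: JupyterFrontEndPlugin<void> = {
  id: 'course-styling:plugin',
  autoStart: true,
  requires: [INotebookTracker],
  activate: (app: JupyterFrontEnd, tracker: INotebookTracker) => {
    tracker.widgetAdded.connect((_sender, panel) => {
      panel.addClass('course-notebook');
    });
  }
};

export default courseStyling;
```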

As well as using notebooks to deliver teaching, we hope that students may go on to adopt notebooks outside of their courses in various ways:

  1. Continuing to use the VM provided environment as is (eg with extensions enabled) and making use of those extensions;
  2. Continuing to use the VM provided environment as is (eg with extensions enabled) but without making use of those extensions, and/or disabling those extensions;
  3. Using other environments and installing some of the extensions we made students aware of into those environments;
  4. Looking for and installing other extensions from an awareness of the extension ecosystem;
  5. using vanilla / unextended notebooks.

This also makes me wonder about the life history of Jupyter users:

isabela-pf commented 4 years ago

I don’t know if this reply is too delayed with the voting starting today, but here it is anyways. I do agree @LayneSadler, I don’t think a survey can answer all our questions, so it may never feel as perfect as we want. I do know that I don’t feel like I have a good grasp on what the community does and how they feel working with Jupyter tools, which makes my work (and others’, if the push for this survey can be used as proof) much more difficult and probably hurts the project long term.

I think getting any info is the first step, and being able to ask more/better/specific questions will come later, probably with qualitative research like user interviews (as Layne mentioned, as we talked about in meetings when writing this survey, and as is mentioned at the top of the series of Google Docs that were open for review). A lot of the questions on this issue, and the questions I've heard in other conversations, aren't the kinds of questions that I think deserve only a single multiple-choice answer.

So once again, I agree that I think the goals for this survey can helpfully be paraphrased (from Layne again) as (a) gauge awareness/ satisfaction and (b) try to tag people for recontact for a followup.

choldgraf commented 4 years ago

I will defer to you all on the right balance of "breadth of use-cases we ask about" vs. "depth on any one specific use-case"...I trust your expertise over my own when it comes to this. In general I'd be +1 on more questionnaires over time so that each one can be tailored for specific needs / users / etc.

lresende commented 4 years ago

Thank you all for collaborating on this. Let's use this exercise to gain experience in creating and driving such surveys so that we can do these types of activities more often.

I will create a snapshot of the survey (a PDF) later today and start the vote as a PR as planned.

ellisonbg commented 4 years ago

Great to see the collaboration and work on this, thanks everyone. I got pulled into helping resolve a JupyterCon Code of Conduct situation over the last week, so I haven't been able to participate, but glad to see the survey moving forward.

A few meta comments though...

lresende commented 4 years ago

  • The current governance model makes it clear that consensus can be broad ("input from the community") but that it has to include the "Core Developers". We haven't formalized exactly who the core developers of JupyterLab are, but I don't believe we can move forward without some combination of people like @blink1073 @afshin @jasongrout @ian-r-rose @jtpio @telamonian @saulshanabrook @vidartf @echarles et al. reviewing and approving the text of the survey. I am not saying that only these people can/should review this, only that consensus without the core isn't consensus.

Currently, the only official list the project has is the "JupyterLab committers" GitHub team, which I believe is the one I suggested to use.

  • Lastly, it is important to remember that consensus takes time.

While I agree with this, taking too much time means consensus is not really being reached. Having said that, in this particular case, most of the community that is actually trying to implement the survey is in consensus, and I think it's fair to them, who have put a lot of time into driving this, to try to get a resolution. Unfortunately, the only way we seem to have is to call a vote, which will give closure to the matter one way or another.

  • Regardless of what survey tool is used (SurveyMonkey, Google Forms), a Jupyter account should be used so the core team can access it. We have a project-wide Google Drive that can be used, or we can create a SurveyMonkey account using the project email.

When we look at the current URL https://www.surveymonkey.com/r/LCB7GBF and the contents of the survey, there is no identification of whose account is being used. At the moment, the account belongs to a member of the community, and he has stated multiple times here that he is going to make all data available to the community. Based on that, I don't think this is a stop-ship for the survey.

meeseeksmachine commented 3 years ago

This issue has been mentioned on Jupyter Community Forum. There might be relevant details there:

https://discourse.jupyter.org/t/pyqtgraph-maintainers-seeking-input-regarding-user-survey/10303/4

jmacagno commented 3 years ago

Hello All -

Have the results of this survey been shared with the community? Can anyone share access and insights gathered?

thank you

isabela-pf commented 3 years ago

Hi @jmacagno! The results are at jupyter/surveys and @aiqc has been posting insights on various issues. I think one such set of insights can be found at #121, but maybe Layne has some others he'd like to point to as well?