Cycle 5: Astropy UX User Interviews #504
Conversation
I would be interested in knowing more about the proposed contractor, Jenn Kotler. Maybe link or include their resume?
Please react to this comment to vote on this proposal (👍, 👎, or no reaction for +0).
Since I rarely downvote, an explanation: I just cannot see how interviewing 10 "astronomers" is going to help. I just taught a coding class for first-year graduate students, and, no surprise, astropy is seen like numpy and python itself: a tool that exists, of which one knows little, and for which one just googles for examples. Obviously, I'm happy to be proven wrong, but it seems just too unlikely that one will get anything useful and actionable. Sorry!
Just to offer a contrasting opinion to @mhvk: As part of an NSF award, the pypeit team was required to participate in the I-Corps for POSE training, which requires each team to conduct 60 "ecosystem discovery" interviews over a period of 4 weeks (@eteq was one of the people we interviewed!). Personally, I found these interviews really useful, but they were a ton of work. There can be value in these interviews, and I think it helped us set development priorities. But here are some things to consider:

1. I would argue you need more than 10 interviews. I think 60 was too many for us, but I think 10 would have been too few.
2. This is discussed broadly in the proposal, but who you interview will be very important, particularly if you only do 10. Instead of a broad target (astronomers with a "reasonable geographic and demographic spread"), I'd recommend identifying specific groups you think might have been under-represented in the user survey.
3. I found it easy to fall into the trap of listening for what I wanted or expected to hear during the interviews. I don't have any specific advice on how to avoid this, but I would find it useful to keep this in mind when developing the interview script.

Having said all that, pypeit and Astropy are rather different projects, so what's true for one won't necessarily be true for the other...
@kbwestfall - thanks, that's interesting to hear. I do think the projects are very different, astropy being really more like numpy, scipy, and matplotlib. So much so, in fact, that it is very hard to get people to think of it as something they could contribute to rather than just use. I do share your question about how the 10 would be selected...
Thanks for the info, @kbwestfall ! Is the pypeit report public and available somewhere? |
Hi @pllim. Sorry, no, there was no formal report written up, but I'd be happy to share a couple of slides that give a breakdown of the demographics of the people we interviewed and some of the take-home messages. I can share them in Slack, if that's helpful.
PS - @tepickering was part of the pypeit team that participated in the I-Corps training. So he likely has some useful insights into this, as well. |
@kbwestfall, yes, I would be interested to see them. This is because I understand that pypeit has a visual component, so if the takeaways are all about visualization, I suspect they wouldn't overlap much with "Astropy UX". I am also not very sure where we're going with this for Astropy. Chances are the users would want everything, and some things would conflict with each other, or they would ask for things we can never provide with the resources available. If you want to proceed, maybe be more concrete about what exact problems you are trying to solve with these interviews.
I agree with @kbwestfall and found the I-Corps interviews a lot more useful than I expected, but also a lot of work. If you get people talking, information will come out that they might not necessarily take the time to code into a survey response. Being able to interactively follow up on responses can really help dig into where pre- or mis-conceptions may lie and what we might be able to do about them. I also agree that 60 was an extreme number, but 10 is probably too few unless really done right. However, I think of this as a pilot project, with the script and protocol as a deliverable that could be used going forward.

UX does not necessarily mean GUI. It's the overall "user experience": how users interact with the system as a whole.
We did studies on users, maintainers, etc. before, but did we do anything with those results? I would rather we wrap up previous study results than generate even more new results that we might not act upon.
Just posted a couple of slides in Slack. |
Here are some past surveys, with various focuses, that I can think of. I think we all agreed that something should be done as follow-up, but no one took the lead, AFAIK.
This is part of Jenn Kotler's expertise. |
This is 1 business day late for the draft deadline. I had some technical and logistical issues on Friday that prevented me from submitting it then, but anyone should feel free to weigh this late submission appropriately in their own thinking when considering whether they support the proposal.