hotosm / osm-tasking-manager2

Designed and built for Humanitarian OpenStreetMap Team collaborative emergency/disaster mapping, the OSM Tasking Manager 2.0 divides an area into individual squares that can be rapidly mapped by thousands of volunteers.
http://tasks.hotosm.org

"Hold" on new users edits #889

Open logrady opened 7 years ago

logrady commented 7 years ago

Based on feedback from mapathon organizers and validators (and the Training Working Group) we would like to suggest that a new user have a hold placed on their account after X number of edits. This would give validators a chance to check over their work and provide feedback before they move forward.

If we compiled a list of validators they could be notified through the TM (see other suggestions about messaging systems in the TM) of new users "holds" and review their work in a timely fashion. If necessary a new user could continue on a probationary period where their edits are checked again by an experienced validator or they could move on to edit in a more independent fashion.

bgirardot commented 7 years ago

I thought we had agreed this was a bad idea. We would not block anyone from mapping while waiting for validation.

I certainly feel we can improve the validation tools in ways that will have a big impact, so more drastic measures are not needed :)

logrady commented 7 years ago

How about a modified version where a new user hits a certain number of edits and a validator is contacted to review but the new user can continue unobstructed? I believe this could form the basis of a mentoring system, which I think has been discussed before.

Nick-Tallguy commented 7 years ago

I'd rather see a scheme whereby:

This would allow newbies to continue contributing but they are more likely to receive feedback at an early stage.

RAytoun commented 7 years ago

I am with Nick regarding some indication of a newbie. The problem is that we only know when a tile has been completed by a newbie, not about those who map a few items but never complete a tile. So it would have to be some way of flagging according to that newbie's changesets, for us to be able to follow where and when they are mapping. Taking into account that most mappers never go on to be consistent or prolific mappers, and many only do a few edits, it would be great if we could see whether a newbie is continuing to map or is just a 'one off' mapper. I would encourage validators to give positive feedback and encouragement to mappers who have completed about four tiles, as they have shown the interest and you can look at their progress over the different tiles to give a better assessment. I do not exclude them from my validations.

But there are not enough validators to be able to effectively police all the newbies. By creating a list of accredited validators, we have a pool of people who can be invited to take the Lead Validator role on a task. They can then start validating that task, and when they notice that a mapper also working on that task is good at it, that mapper can be asked to assist in the validating of the task. This not only gives that 'good' person extra recognition for the good work they are doing, it also gives them the confidence to fix the mistakes of other mappers. The Lead Validator can keep an eye on the tiles validated by the nominee validator to ensure that they maintain standards, and can give that nominee help where needed and positive encouragement if they are good. It also means that the Lead Validator takes responsibility for seeing that the validation of a task is progressing and that comments are noticed.

bgirardot commented 7 years ago

Lets talk about this.

How could we best identify a new mapper?

Fewer than some number of changesets? If so, what number? Do we need a scale or a range (1-10 edits, 11-50, ...)? Is "less than 100 changesets" meaningful? If so, how?

logrady commented 7 years ago

Fewer than some number of changesets? If so, what number? Do we need a scale or a range (1-10 edits, 11-50, ...)? Is "less than 100 changesets" meaningful? If so, how?

I think the best way to calculate it is to create a mathematical equation or algorithm that would represent a new user. Something like "new user = time_since_registration + number_edits" but I wonder if this needs to be calculated with respect to what other contributors are doing.

For example, currently in my OSM bio I'm categorized as a "casual mapper". I know others who are classified as heavy mappers. I don't know what other classifications exist or how this is calculated. I'm assuming it's number of edits and across a period of time.

Perhaps a first step would be to find out how that works and go from there as I'm thinking the new user algorithm might need to be in relation to these other user categories.

logrady commented 7 years ago

Correction. That should be something like, "new user = time_since_registration/number_edits"
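The corrected ratio could be sketched as below. This is only an illustration of logrady's suggestion; the function name, the use of days as the time unit, and the zero-edit guard are all my assumptions, not anything the TM implements.

```python
from datetime import datetime, timezone
from typing import Optional

def newness_score(registered_at: datetime, number_edits: int,
                  now: Optional[datetime] = None) -> float:
    """Days since registration divided by edit count.

    A low score suggests an established, active mapper; a high
    score flags an account that is new (or barely edits at all).
    """
    now = now or datetime.now(timezone.utc)
    days = (now - registered_at).total_seconds() / 86400
    # Guard against accounts with zero edits so far.
    return days / max(number_edits, 1)
```

Note that a long-registered but inactive account also scores high, which echoes the point above that the ratio may need to be calibrated against what other contributors are doing.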

dalekunce commented 7 years ago

I fundamentally disagree that we should somehow stop mappers from mapping. I would rather focus on the actual quality of the edit versus grading the editor by focusing on the osm-analytics toolchain.

bgirardot commented 7 years ago

Yes, I do not think we will block any editing. There are a lot of other gains to be had in the validation QA process that are more productive for people giving feedback. But a newness rating is a valuable metric, as expressed by many validators.

aawiseman commented 7 years ago

I love @Nick-Tallguy's and @logrady's idea of pinging a validator (or validator group) after some amount of mapping!

bgirardot commented 7 years ago

I think pinging validators is a really good idea as well.

But I think that we need to figure out what a "new mapper" is and when is a good time to ping a validator exactly.

And that leads me to better metrics that validators can use to find mappers that they would like to focus on.

And I fear pinging validators could mean thousands of pings at busy times (I would certainly hope so!), unless the metric filtered those out, and then I am back to tools for validators to find people to work with based on varied criteria. It might be different for different validators, or, who knows, it could even change between disaster and mm types of mapping.

I do not have a good vision for how to do this UI-wise either. Certainly when viewing a project, color coding, or a list of the mappers with this information available:

- OSM sign-up date
- number of days mapped
- number of changesets
- number of objects created
- last edit date
- number of checked-out tasks
- number of completed tasks
- number of validated tasks (tasks that have been validated, not validation done by them; that will show in other places)

and a UI to select those, for example an expanded UI that lets you set things like this:

number of days mapped is <, =, > X and number of completed tasks <, =, > X

Then you get a list of mappers in that project with all their stats listed.
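A filter like the one described could be sketched as follows. This is purely illustrative: the stat field names, the dict-based mapper records, and the AND-combination of criteria are my assumptions about a feature that does not exist yet.

```python
import operator

# The three comparisons written in the comment above: <, =, >
OPS = {"<": operator.lt, "=": operator.eq, ">": operator.gt}

def filter_mappers(mappers, criteria):
    """Keep mappers whose stats satisfy every (stat, op, value) criterion.

    mappers:  list of dicts of per-mapper stats
    criteria: list of tuples like ("days_mapped", "<", 5)
    """
    return [m for m in mappers
            if all(OPS[op](m[stat], value) for stat, op, value in criteria)]
```

For example, "number of days mapped < 5 and number of completed tasks < 3" would become `filter_mappers(mappers, [("days_mapped", "<", 5), ("completed_tasks", "<", 3)])` and return only the mappers matching both conditions.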

Or, simpler: task squares they have checked out in the project are turned blue so you can select them easily to review the mapping. (Bonus if you could select all their task squares at once to review en masse in JOSM, but that is not part of this issue :)

logrady commented 7 years ago

I'm not sure if this would work but what if people were to self declare their learning level? I've used this method successfully when working as a trainer but that was in-person.

Or possibly a couple of quick questions after someone logs into the TM (for X number of times), such as, "Are you comfortable identifying and mapping buildings (huts, roadways, etc.)?"

Or both these methods.

majkaz commented 7 years ago

Just a note: I have noticed most tools use the easily visible number of changesets to evaluate the mapper. But this can be deceptive. Some of my changesets on HOT tasks are huge, sometimes one changeset per completely mapped task, uploading into the same changeset throughout the edits. Not sure if this is good practice, but it happens and works well for me. I don't mind having a relatively low number of changesets visible, but we should be aware of it: it is difficult to compare a changeset with 3 new objects against one like that. I have small changesets too: a few reverts, some deletes, or edits where I expect a revert might be needed if something goes wrong. But mostly, it is several thousand changes in one changeset.

Blake's previous comment about the number of objects created is the only measure that does not open the way to false positives. Perhaps leave it as it is (number of changesets) but allow marking a mapper "up" internally? Otherwise such a mapper triggers the same "warning" on every other project they switch to.
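The point above can be made concrete with a small aggregation sketch: counting objects changed alongside changesets shows how the two measures can rank mappers very differently. The changeset records and their "changes" field are hypothetical stand-ins for whatever metadata the TM could fetch.

```python
from collections import defaultdict

def totals_by_mapper(changesets):
    """Aggregate per user: (changeset count, total objects changed)."""
    totals = defaultdict(lambda: [0, 0])
    for cs in changesets:
        totals[cs["user"]][0] += 1
        totals[cs["user"]][1] += cs["changes"]
    return {user: tuple(t) for user, t in totals.items()}
```

A mapper who uploads one 3000-object changeset per task looks "newer" by changeset count than a mapper with many tiny changesets, even though the opposite is true by objects changed.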

logrady commented 7 years ago

@majkaz I've written something very similar to your comments about changesets being a deceptive measure (see #894).

aawiseman commented 7 years ago

Could we use existing things like the How Do You Contribute tool? http://hdyc.neis-one.org Here's me, for example: http://hdyc.neis-one.org/?Marion%20Barry. If we can somehow access it, maybe we can just use their categories.

logrady commented 7 years ago

@aawiseman I've mentioned Pascal Neis' work in #894.

Note to all: I'm trying to cross-reference as many entries as possible by their issue number, so we can keep track of the various suggestions across, as well as within, this "Issues" forum.

aawiseman commented 7 years ago

#933 has a similar idea: notifying validators when squares are completed.