I decided to alter the title. As a title, I judged that it wasn't appropriate, but I'm happy with the reference at the end.
As for the subject……
The inconsistency of reviewers remains problematic. That's the trouble with us humans: we are all different. And trying to get us to review within guidelines seems an uphill task.
I love a lot of the work Emily does, but, sigh, there are things I want a human to think about.
So how do we give feedback to reviewers?
The current system doesn't seem to achieve what we need it to do… and the introduction of Emily does change things.
The first one probably got rejected because of people who review fast and only look at the photo. To be fair to those people, it is kind of hard to read that wooden sign.
Maybe if there were (clearer, more frequent, fully up-to-date) educational messages given to reviewers who are rejecting eligible proposals…? If Niantic approves something on appeal and it's not something borderline or iffy, maybe they could look at who voted it down and the rejection reasons given, and let those folks know not only that they're voting incorrectly but also why.
Also, I feel it would help to have the criteria clarification collection on the main Wayfarer website, someplace easy to locate and reference, so that reviewers don't have to search the forums when they have questions. The education emails could even include a link to the criteria clarification collection so that folks could easily look up where they've gone astray.
It would be a good idea to have annual/semiannual reviews for the reviewers: a regular opportunity to refresh ourselves on the current eligibility requirements.
The ML makes reviewing more tedious for lots of us. I think it’s easy to get frustrated, confused, or overwhelmed when looking at some submissions that are on the fence. I think rather than eliminating the community vote entirely it’d be better to make the resources for reviewing easier to access and understand.
If I look back, how ridiculous everything was. We submitted something, it took 3 years to be reviewed (!), and then it was rejected because in those 3 years
a) inexperienced reviewers "reviewed"
b) experienced reviewers "laid out their idea of their game"
c) Niantic kerfuffled between the lines of code
I said it before and will say it again:
The "Database" is just used for one cash-cow game.
Everything else is just a little fluff here and there, and someone's vanity project. The "Database" will never make any profit if it is detached from the cash cow.
The "Database" is mainly there to make gameplay acceptable (!) outside of New York, Zaragoza, or San Francisco.
Niantic isn't gaining anything by rejecting 90% of the submissions, yet apparently it is totally okay that Emily is acting like it is.
The clarifications are there in the help section. But I completely agree that it is hard to know where to look for any particular answer. And it's really hard to know when there has been an update.
I like your idea! I think a lot of us would benefit from a more hands-on approach; I’ve surely learned a lot just by browsing topics here. When I started reviewing I would always leave additional notes on my reasoning for accepting or rejecting some of the tougher submissions, then promptly gave up when I realized no one could read those. I think if you could go through notes left from the reviewers when a submission of yours gets rejected it could help make it feel less random. Or maybe you’d just want to drive off a cliff.
Another thought I had was that maybe the guidelines could include specific examples on different categories of submissions, like gyms, parks, cafes, water features and such. It would restrict the reviewers’ freedom to interpret the guidelines and apply them as they see fit to each candidate, but it would also prevent the rise of dubious rules accepted as law by local communities (did you know that in Italy up until a year ago parks and town squares did not exist unless they had a signpost?)
I would nearly always type something in the box at the end. It made me stop and check my thinking, and sometimes I would go back and alter something.
So I knew it wasn't being used to go back to anyone, but I felt more comfortable about the decision.
It was useful to me, when I got one of those "educational emails" saying I was reviewing incorrectly, to have my comment there explaining why I had rejected it. I wish I had written more in that case. But I still do make comments whenever I can, and I wish we had more places in the current review flow to make them.
I found it useful too when I had my non-educational email.
There was just enough for me to know a bit about my thinking (I had mine through a data information request), but if I'd written more it would have been better.
And yes, I still use that option when I can.
We need built-in tools that support us in doing better.
Because I’m human I can learn from mistakes, and as I’m old I’ve done a lot of learning.
The one consistency about human reviewers and the reviewing process is that it’s very inconsistent. I’ve come to accept that and think there will always be a place for human reviewers.
ML seems to have its own problems. For example, I have reason to believe it's not good (yet) at detecting ineligible locations for nominated POIs (e.g. single-family PRP). Just yesterday I reported a POI in my town for a "play ground" that was obviously the play set in someone's back yard. I believe that had to be an ML acceptance.
There needs to be better and more concise instruction to reviewers on what to accept. ML still needs to grow and improve. The process can be improved but will always be somewhat inconsistent.