Criteria Re-Learning

I’ve been reviewing submissions for 7+ years now. The criteria for what is appropriate continue to change: some things that were once allowed are now disallowed, and vice versa.

The problem is that once you are approved to review submissions, you are expected to know about those changes and respond to them. Most reviewers are not part of this side of the community; they review what they see based on the test they originally passed. When criteria change, they have no idea it ever happened, so they continue reviewing as they were trained.

When criteria shift on the Niantic side, all reviewers need that change forcibly presented to them. There should be no expectation that they keep watching a community board after they have been approved. It should be a retraining test: a) this was allowed, and b) it no longer is, with four examples of good and bad submissions. It is very much a missing component.

The proximity idea for reviewing was a good start, but it no longer works. It allows regional bias in what is allowed, versus the excellent reviews produced by larger teams. Reviewing definitely needs to stay within country to start, but the location-based approach turned out poorly.

On Removals

There are so many garbage POIs on the map that you cannot possibly review them internally with any accuracy. A new workflow that lets the overall community judge them before they reach Niantic would be helpful: a simple yes/no vote. You could also train ML on that effort, provided it isn’t regionally biased, and you could add more detail to the rejections to make the ML training more useful.
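A minimal sketch of what that yes/no pre-filter could look like: community votes accumulate per POI, and a candidate is only escalated to Niantic once enough votes agree it should go. Every name and threshold here is hypothetical, invented for illustration; nothing below reflects an actual Niantic or Wayfarer API.

```python
from dataclasses import dataclass


@dataclass
class RemovalCandidate:
    """A POI flagged for removal, with community yes/no votes.

    All fields and thresholds are hypothetical examples, not a real
    Niantic/Wayfarer data model.
    """
    poi_id: str
    yes_votes: int = 0   # community says "remove it"
    no_votes: int = 0    # community says "keep it"


def should_escalate(c: RemovalCandidate,
                    min_votes: int = 20,
                    threshold: float = 0.8) -> bool:
    """Escalate to internal review only when there is both enough
    participation (min_votes) and strong agreement (threshold)."""
    total = c.yes_votes + c.no_votes
    if total < min_votes:
        return False  # not enough community input yet
    return c.yes_votes / total >= threshold
```

The two knobs matter: `min_votes` keeps a handful of local voters from deciding alone (the regional-bias concern above), and the escalation decisions themselves become labeled training data for an ML model.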

What is more important: quality or volume? Choose one.

Hey there, you wanted to present some examples, right? They seem to be missing from the post.