Improving the experience with wrong rejection reasons

Lechu1730-PGO Posts: 537 ✭✭✭✭

At this stage I think everyone is familiar with the problem of wrong rejection reasons, used by reviewers either because they misunderstand what the reasons really mean or out of plain bad-faith reviewing.

Considering what I actually choose when I reject, some rejection reasons just sit there unused. In over 3,000 reviews I never came across an abusive nomination, except for fake ones. I think I used "live animal" only once. I used "inappropriate location" two or three times, for graffiti that was drug-related. "Obstructs emergency services" and "sensitive location" are seldom used as well.

I think it shouldn't be difficult to tweak the cooldown algorithm to detect anomalies in rejection reason selection and send a warning to reviewers who use them incorrectly.

Comments

  • Hosette-ING Posts: 3,470 ✭✭✭✭✭

    Interesting. Your underlying assumption is that specific individuals overuse these rejection reasons and that this should be detectable. I wonder if that assumption is true, or if it just seems that way to the recipient. It may be that a submitter gets seven different rejections that include a weird reason, but it was really seven different reviewers who each did it once. I'm pretty sure there's no way to assess that from the player side.

    I'm currently on a four-day cooldown because of a short sequence of blatantly obvious 1* rejections. It would be interesting to weigh the tradeoff between curtailing a few people who abuse rejection reasons and the possibility of blocking a legitimate reviewer because they just happened to get a sequence of weird submissions.

    Other than being frustrating to submitters, I wonder how big a problem this really is. Rejection is a two-phase procedure: the reviewer first decides to reject the submission and then picks a reason. I think rejections would happen just as often under your proposal; only the reasons specified in the email would change. Niantic could probably handle that last part just by requiring a higher threshold of people choosing a specific reason before including it in the email. That's speculation on my part, because we have no clue what it takes for a reason to be included.

  • Lechu1730-PGO Posts: 537 ✭✭✭✭

    Of course, having no inside knowledge of the data, one may as well study the flight of birds like an augur to discern what Niantic thinks, but the existence of a cooldown triggered by a sequence of rejections points to fast rejections being a problem for Niantic.

    So how do you separate a good reviewer fast-rejecting a streak of crappy nominations from a bad reviewer (or a bot) rejecting everything on autopilot? I think the pattern of rejection reasons can be used for that.

    If you allow this process engineer to nerd out a little bit, a statistical process control chart would be the tool to use. You define the parameters of a good reviewing process based on a large sample of reviewer behavior and then detect anomalies against them. How many rejections in a row is normal? Which rejection reasons are unlikely to repeat in a single reviewing session? Of those that are likely to repeat, how many in a row are likely? Something like the sketch below.
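
    A minimal sketch of the control-chart idea in Python; every number and name here is invented for illustration, none of it comes from Niantic:

    ```python
    # Shewhart-style control limit over (made-up) session data.
    import statistics

    # Longest run of consecutive rejections per session, from a large
    # sample of normal reviewer behavior (illustrative values only).
    baseline_streaks = [2, 3, 1, 4, 2, 5, 3, 2, 6, 3, 4, 2, 3, 5, 4]

    mean = statistics.mean(baseline_streaks)
    stdev = statistics.stdev(baseline_streaks)

    # Classic upper control limit: mean + 3 standard deviations.
    upper_control_limit = mean + 3 * stdev

    def is_anomalous(streak_length: int) -> bool:
        """Flag a session whose rejection streak exceeds the limit."""
        return streak_length > upper_control_limit

    print(round(upper_control_limit, 1))  # ~7.4 with the sample above
    print(is_anomalous(15))               # True: worth a closer look
    ```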

    It seems that right now only the first parameter (rejections in a row) is used, and the bar is set too low, so you end up sending cooldowns to good reviewers. Adding other parameters, such as the rejection reasons used, would make it possible to target bad reviewers more accurately and avoid penalizing good ones.

    So if the current limit is 6 rejections in 2 minutes (which is why waiting 20 seconds per review works), you could raise that to 15 provided that there's no more than 1 (or even none) unusual rejection reason, no more than 5 in a row marked "other", no more than 2 marked K-12, etc. In code it might look like the sketch below.
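
    Here is how those layered limits might look; the reason labels and thresholds are just the ones from this comment, not anything Niantic has confirmed:

    ```python
    # Sketch of the layered limits described above (illustrative only).
    from collections import Counter

    UNUSUAL_REASONS = {"live animal", "abusive", "emergency services",
                       "sensitive location"}

    def session_needs_review(reasons: list[str]) -> bool:
        """Check one session's rejection reasons, in the order issued."""
        counts = Counter(reasons)

        # No more than 1 unusual rejection reason per session.
        if sum(counts[r] for r in UNUSUAL_REASONS) > 1:
            return True

        # No more than 5 consecutive rejections marked "other".
        run = 0
        for reason in reasons:
            run = run + 1 if reason == "other" else 0
            if run > 5:
                return True

        # No more than 2 K-12 rejections.
        if counts["k-12"] > 2:
            return True

        # With those checks in place, the volume cap can rise to 15.
        return len(reasons) > 15
    ```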

    I think something along those lines would really help the reviewing experience.

  • Hosette-ING Posts: 3,470 ✭✭✭✭✭

    @Lechu1730-PGO I'm a software engineer who once tripped over her shoelaces and wound up being a data analyst and then a fraud specialist. It's easy to start by saying something like, "A sequence of X rejections in a row is three standard deviations out, so we'll just use that threshold for a cooldown," but then roughly 1 in 400 legit reviewers could trigger it. That's a pretty impactful false-positive rate.
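
    (For what it's worth, if you assume normally distributed behavior, and that is an assumption, the 3-sigma tail pencils out to about 1 in 740 one-sided, or 1 in 370 counting both tails, so "roughly 1/400" is the right ballpark:)

    ```python
    # Back-of-the-envelope check on the 3-sigma false-positive rate,
    # assuming a normal distribution (an assumption, not established).
    from statistics import NormalDist

    tail = 1 - NormalDist().cdf(3)       # one-sided tail ~= 0.00135
    print(f"1 in {1 / tail:.0f}")        # one-sided: 1 in ~740
    print(f"1 in {1 / (2 * tail):.0f}")  # both tails: 1 in ~370
    ```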

    I would be inclined to approach the problem from a slightly different perspective. Computers are pretty good at identifying patterns, and I'd let them figure out which patterns are normal and which are abnormal. It's probably something more complex than X out of Y in a sequence, or an average time of Q over R reviews, but my guess is that we have something like that in place now. I think Niantic can do better; an off-the-shelf anomaly detector over per-session features (sketched below) would be one starting point.
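
    Here's what that could look like; the features and data are entirely invented, and nothing says Niantic actually works this way:

    ```python
    # Unsupervised anomaly detection over per-session review features.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row is one reviewing session:
    # [rejection_rate, avg_seconds_per_review, longest_streak, distinct_reasons]
    sessions = np.array([
        [0.35, 42.0,  3, 4],
        [0.40, 55.0,  4, 5],
        [0.30, 38.0,  2, 3],
        [0.95,  6.0, 20, 1],   # bot-like: fast, near-total rejection
    ])

    model = IsolationForest(contamination=0.25, random_state=0).fit(sessions)
    print(model.predict(sessions))  # -1 marks the most anomalous session
    ```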

    I'd also let computers identify the quality of reviewers based on a variety of attributes, and then apply the rules differently. You can't just say anyone with a high agreement rate is trustworthy, for example, because a group of reviewers working together for nefarious purposes probably also has a high agreement rate, and those are exactly the people who should be stopped. I can speculate about quite a few data signatures that could differentiate high-quality-legit from high-rated-not-legit, but quality is probably a complex calculation to do well.

    But back to the original premise. It's not entirely clear to me whether a small number of people are generating a lot of "false" rejection reasons, or a lot of people are each occasionally generating one. I can't imagine I will ever know the answer to that question.

  • GearGlider-ING Posts: 1,335 ✭✭✭✭✭

    It might be very hard to get Niantic to reprimand people for using wrong reject reasons, though, because Niantic would be way more concerned about an ineligible Wayspot being approved than about an eligible one being rejected, or an ineligible one being rejected with the wrong reason. A lot of reviewers would just think they need to reject less in general to avoid the check/cooldown, instead of actually taking the time to make sure they're rejecting properly.

    If an eligible Wayspot is rejected with a bad reject reason, sure, Niantic loses out on a potential Wayspot for their database, but (as Niantic says themselves) it can always be resubmitted.

    If an ineligible Wayspot is approved because it's on K-12 grounds, on PRP, or for other reasons that could get Niantic in hot water legally or PR-wise, well, that's a lot worse for them than a few missed good Wayspots.

    Not that these things are mutually exclusive; you can make sure good Wayspots aren't being rejected while bad ones are rejected with more accurate reasons. But if Niantic focuses on making sure good nominations aren't rejected and ends up with bad nominations being approved more often, it creates more work to correct the issue and remove the bad Wayspots. So I can see why they might not just randomly send checks to people stating that it's because of inappropriate reject reasons.

    Though doing things like adding more honeypots of good nominations (hotspot gaming cafe, fountain at a private club, etc.) that uninformed reviewers might reject, and giving them an unstated check/cooldown or lowering their rating, might be good routes to go with the current system.

    This issue really feeds back into the bigger issue that reviewers don't get much feedback at all about how good their own reviewing is. If there's feedback about inappropriate rejection reasons, there's also going to need to be a lot of feedback about approving bad candidates, so people don't start telling others to indiscriminately reject less to avoid checks/cooldowns.

  • ZambiaP-PGO Posts: 16 ✭✭

    There are two kinds of submitters to consider: the fraud who is trying to sneak a bad one through for whatever reason (I want something by my home, by my school, by my work, just for kicks) and the player who has good intentions but doesn't understand the rules.

    As a new Wayfarer and long-time player, I get confused about some standards/criteria. Deciphering a rejection email is tough. A casual player can't be expected to understand those emails when I have to come to the forums here to figure them out.

    Giving good feedback on exactly why the submission was rejected will help the casual players fix the submission and make it a good one.

    Unfortunately, giving good feedback will also help the frauds tweak their submissions to sneak them past reviewers. A perfect example is people taking a photo of a historic mailbox and attaching it to a recent mailbox. That's not an honest mistake. That's someone trying to game the system.
