Ratings making me sad - they have dropped because of the challenge

I guess not :) I had 65% for several years and that was a great rating :thinking: Do you think "great" at 65% was a mistake?

No, I think you should likely still be great!
Anything where you're agreeing more than disagreeing sounds pretty positive to me.

Mine and many of my friends'/local players' ratings have also been reduced to nothing; hopefully this will get fixed soon… If I'm not mistaken, a lower rating means your vote counts for less toward a nomination reaching a decision? Because nominations have started to slow down in the queue really badly.


Yiiiikes 76% is poor now?
When I did my degree, a score over 70% was "first class"… you only needed 40% to pass.
Seems like Wayfarer is harder than graduating from university now.


We don’t know how the lifetime percentage of agreements (which is inferred by a plug-in) correlates with the reviewer rating, if at all.


It makes no logical sense, though, for someone with this many agreements vs. reviews to have such a low score, surely? That's a huge amount of successful history to become invalidated so quickly.

I would expect ups and downs from someone with 10s or 100s of reviews, but with this many thousands the rating should be quite stable and slow to change.

Unless Niantic has taken only a small snapshot of reviews for the formation of ratings? I.e. the last month, or 3 months?

I work as an analyst for a company with millions of customers, by the way; my job is categorising, segmenting and scoring customers based on their behaviour, so I have some insight into what it takes to do this kind of thing.

When the most obvious things are being directly approved by ML, there are fewer things to get agreements on; it could be as simple as that :thinking:

That doesn't explain why this happened during/soon after the challenge specifically.

If it was ML related, I'd expect something more general: ratings gradually shifting after the ML started working.

I think this is why people are focussing on it possibly being edit related


The challenge would also contribute to it. If you are reviewing types of nominations you are not familiar with, it will be harder to get an agreement. To me it seems that a lot of things get regularly approved in different regions that don't actually meet the criteria. So if you strictly adhere to the guidelines while doing a challenge, I think your rating is bound to suffer :thinking:

I agree, but that would be the case for all challenges, whereas this specific challenge seems to have had the issue. I don't recall this type of thing in previous challenges, certainly not widespread and affecting established reviewers.


My impression is that the rating can change quickly (even though there’s the obvious lag for things to be resolved), although it could be that it’s just me that’s always on the threshold between good and great :sweat_smile: Whenever I drop down to good I’m usually back to great in a couple of days. It always helps when I try to focus less on the Wayfarer criteria and try to guess what other reviewers are going to vote :laughing:


I think it can change quickly for people who are close to that threshold, or for people who haven't done tonnes of reviews, but for many people reporting issues in this thread, this is the first time our rating has ever changed from great to something less.

I think a lot of Ingress players had that same experience when OPR was transformed to Wayfarer :thinking:

I also have the same issue; I'm on poor now after being on great, with over 25,000 reviews.


Just want to point out that the plug-in that shows expanded stats has 3 count type settings: Simple, Upgrade Count, and Medal Stat. Medal Stat is for keeping track of the PoGo medal, so not much help here. Now, the other 2 are a bit different.

Simple just displays your total Wayspot submission agreements. It adds your nominations accepted, rejected, and duplicated; it does not include edits. Here’s my current Simple rating:

Upgrade Count is a little odd. What it says it does is multiply your earned-upgrades total by 100, then add your current progress toward the next upgrade. This does capture your other agreements, which most likely are edits, as my other agreements count is over 2,700 after the challenge. Here's my Upgrade Count stats:
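Based on that description, the two count modes could be sketched roughly like this. This is purely an assumption about the arithmetic the plug-in performs; the function and field names here are made up for illustration, not taken from the actual plug-in:

```python
# Hypothetical sketch of the two agreement-count modes described above.
# Names and structure are assumptions, not the plug-in's real code.

def simple_count(accepted: int, rejected: int, duplicated: int) -> int:
    """'Simple' mode: total Wayspot submission agreements.

    Sums nominations accepted, rejected, and duplicated; edits are
    not included.
    """
    return accepted + rejected + duplicated


def upgrade_count(upgrades_earned: int, current_progress: int) -> int:
    """'Upgrade Count' mode: earned upgrades x 100 plus current progress.

    Since every 100 agreements earns one upgrade, this total also
    reflects 'other' agreements (most likely edits) that Simple
    mode misses.
    """
    return upgrades_earned * 100 + current_progress


# Example: 40 upgrades earned, 73/100 progress toward the next one
print(simple_count(3500, 400, 100))  # 4000
print(upgrade_count(40, 73))         # 4073
```

The gap between the two numbers would then be a rough proxy for how many of your agreements came from edits rather than nominations.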

Right now, my rating is Great, and it was Great throughout the challenge. I had a few days after the challenge where my rating dipped to Good, but typically within 24 hours it was back up to Great. Therefore, I don't think the total number of agreements one has received is the whole story; there may be other factors in the rating calculation that Niantic keeps to itself, like the number of votes needed for the community to come to a decision.


Hi @audreygav
I can see why you posted here, because of the early reference to cemeteries. It is an important topic, but this is not the best place, so your post has been hidden. Could you create a new topic about it in the General category, please?


Are we going to have a CV competition on who is less of a stranger to data analysis? :wink: (I’m joking)

I can also see the argument that the (good or bad) patterns from, e.g., the beginning of OPR X years ago are not really relevant to evaluating someone's quality as a reviewer anymore, as well as the need to be able to detect if someone who's been reviewing for years has suddenly started behaving very oddly.

Community observations over the years tell me that people can have identical lifetime percentages but different ratings, and that if there were percentage thresholds, this community would already have figured them out.

And of course, disclaimer, as has already been mentioned above: let's not forget that the plug-in that calculates the percentage makes a number of assumptions that it's on the user to verify (extra upgrades, etc.).


I’d imagine there’s a lot of analytical people here.

I think previous experience is relevant, unless we're expecting people to get worse at reviewing over time. Generally, more experience at a high standard leads to higher-quality work, and indeed, up until the last few weeks, it was many people's experience that their "great" rating never changed. We viewed our ratings as something we established early on and then maintained over the years.

Now, whether that was because historical agreements counted, or because we were all continuing to show high standards over shorter time frames, it translated into stable ratings for the majority.

Something changed quite recently to disrupt that.


Completely agree that there is an ongoing issue, as has been acknowledged. But, as I'm trying to say, it looks more like a bug than an intentional algorithm change.

Again, not to play devil's advocate, but you would want to know if a long-time reviewer's pattern had changed.