Any formal response to many people’s profile ratings suddenly tanking?

I don’t think anyone made light of it. It just makes me glad I obsessively check my rating :joy::joy:

3 Likes

I slowed down my reviewing to only during Wayfarer Challenges. The reports made on here have concerned me quite a bit. I know one voter ring in California has been dealt with, with possibly others being looked at. But reviewing has become a chore rather than a pleasure. Also, I am way behind on understanding what is a good nomination and what is a bad nomination.

2 Likes

A good thing about this forum is that it gives a space for fellow wayfinders to raise niggles, doubts and concerns about what they are experiencing. That is important, as wayfinders could otherwise be quite isolated and unaware that others are having the same sort of issue.
It enables people to see they are not the only one and that can mean a lot. Never underestimate the value of peer support.
It allows staff to see not only that there may be a problem but also its impact.

9 Likes

Doesn’t this happen regularly? Like, with every challenge? I wonder if new bugs are continuously introduced by new/changing code that wasn’t tested enough. Or if there were 100 bugs to start with, and we’re peeling them off one at a time.

2 Likes

I am intrigued to see what happens with the fix. IMO, it is just the nature of the review flow now that almost everything we see is a coin toss, and basing rating on disagreements and agreements just needs to be revisited completely. Or maybe rating needs to be thrown out. But I don’t know what kind of check to suggest to replace it.

5 Likes

I think they have various bugs in different branches of their code. They seem to reactivate old bugs while trying to deal with a current problem. For example: they make your stats appear correct by using a formula that disregards edits, but they tally edits as part of a subtotal against your total wayfarer activity. Now your score takes a hit when you review a lot of edits, even though you voted correctly.
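If I understand the mismatch being described, it would look something like this toy sketch: the displayed stats ignore edit reviews, but the rating formula still counts them in the denominator. All field names and numbers here are invented for illustration, not Niantic’s actual code.

```python
# Hypothetical sketch of the described mismatch.
# All names and numbers are invented for illustration.

def displayed_agreements(stats):
    # The profile page ignores edit reviews entirely.
    return stats["nomination_agreements"]

def rating_score(stats):
    # But the rating formula divides by *all* reviews, including
    # edit reviews that (per Niantic) can never earn an agreement.
    total_reviews = stats["nomination_reviews"] + stats["edit_reviews"]
    return stats["nomination_agreements"] / total_reviews

stats = {"nomination_reviews": 100, "nomination_agreements": 90, "edit_reviews": 50}

print(displayed_agreements(stats))  # looks fine: 90
print(rating_score(stats))          # 0.6 instead of 0.9: edits drag the score down
```

So the more edits you review, the worse the score gets, even with perfect voting.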

1 Like

Yes, reintroducing bugs is a solid possibility.

I think the reason Edits don’t count is: multiple edits can be presented at once. For example, if there’s a title and description edit, and you agree on one but get a disagreement on the other. Or 3 new photos are under review - and you agree with the crowd on 2 but not the other one.
(Back when edits were added to reviewing, we were told this is why they weren’t added to agreements - that communication is long gone, but it still makes sense.)

Whatever the reason, Niantic decided it was logical to not count edits at all. Niantic Wayfarer

So… maybe the bug that keeps getting reintroduced… is adding edit agreements/disagreements in a way that tanks ratings. Like, maybe adding the disagreement part, but not the agreement part, of a single review? Or maybe, if there’s one switch per review (agree/disagree), only the last thing on the page is the one that counts? Who knows!
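The "disagreement counted, agreement not" guess above could be sketched like this. Everything here is speculative and invented for illustration; it just shows why a review with several items (say, three photos) would only ever hurt a rating under such a bug:

```python
# Toy sketch of the speculated bug: one edit review can contain
# several items, each with its own agree/disagree outcome.

review = ["agree", "agree", "disagree"]  # e.g. three photos in one review

def tally_correct(outcomes):
    # Count both sides of each item.
    return {"agreements": outcomes.count("agree"),
            "disagreements": outcomes.count("disagree")}

def tally_buggy(outcomes):
    # Hypothetical regression: only disagreements get recorded,
    # so mixed reviews can only drag a rating down, never lift it.
    return {"agreements": 0,
            "disagreements": outcomes.count("disagree")}

print(tally_correct(review))  # {'agreements': 2, 'disagreements': 1}
print(tally_buggy(review))    # {'agreements': 0, 'disagreements': 1}
```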

Which is pretty much what you just said! :rofl:

Hi @MargariteDVille
I can’t believe that making edits countable is this complicated. The nominator can only submit one edit at a time, and they receive one email per decision. It shouldn’t matter how many get bundled into one review.

1 Like

Consider photo reviews. The Wayfarer is expected to determine if any images do not meet criteria. The Wayfarer could reply that none meet criteria, all meet it, or anything in between. Criteria include a mixture of subjective, objective, and derived qualities: Is this a good photo, is it the correct statue, does it fill enough of the frame, do you think it was taken from a car, etc.

How many votes is a photo review worth? It could depend upon the number of photos shown, or the quantity the reviewer accepts or rejects.

Niantic has said the photo review is worth zero votes because they don’t wish to give credit for edits.

They need to count every action excluding Skip/Time Out, but they also need to quantify each of the sub-types, and they must make certain to keep track of corrected vs. uncorrected values.

Any time the new guy chooses the wrong bin or makes a false assumption about what it represents, the math fails to jibe.

1 Like

I’m still not convinced :smile:
If you choose the green arrow in a photo review and the photo gets accepted = :heavy_check_mark:
If you choose that the photo is against the criteria and it gets accepted = :heavy_multiplication_x:
You skip? Then you’re out.
Exactly like nomination review :woman_shrugging:t2:
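The per-photo scoring suggested here can be written in a few lines. This is just a sketch of the proposal, with invented names, scoring each photo as its own agree/disagree against the community outcome (skips simply produce no entry):

```python
def photo_review_score(votes, outcomes):
    """Score one photo review the way the post suggests:
    each photo is its own agree/disagree with the crowd.
    `votes` and `outcomes` are parallel lists of 'accept'/'reject'
    (names invented for illustration)."""
    agreements = sum(1 for v, o in zip(votes, outcomes) if v == o)
    disagreements = len(votes) - agreements
    return agreements, disagreements

# You accepted all three photos; the community accepted only two.
print(photo_review_score(["accept", "accept", "accept"],
                         ["accept", "accept", "reject"]))  # (2, 1)
```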

Definitely not every challenge but there have been some, specifically during events.

Some have even ended up bringing long running issues to the surface to finally get fixed.

1 Like

I think challenges make us notice things as we’re logging in more frequently and reviewing more nominations in a shorter space of time than usual

So we are simply paying more attention than usual to our stats due to checking challenge progress and we do enough reviews to really shift things

4 Likes

We had definitely noted the ratings issue resurface before the latest challenge.

Yeah, I had been having my rating fluctuate constantly for quite a while, and I think you had some issues beforehand too, although after mine had started misbehaving…
It was when yours dropped that I realised that I wasn’t the problem, because I was completely sure that you hadn’t done anything and were being affected the same

Coding changes for the challenge likely started going live months before the challenge began. Installs have to go in a certain order, to work.

For example, change an existing program to add a call to a new program that starts out empty - then install the new program’s code in stages, to take it from nothing to full functionality.
Another example: a new data point may have to first be created (named) in a database as null, then be given values, then be used by programs.
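The staged-install idea above can be shown with a toy example: an empty hook is deployed first so the call site can go live safely, and only a later install fills it in. Names here are invented; real deployments would span separate releases.

```python
# Stage 1: the existing code calls a new hook that does nothing yet.
def challenge_hook(review):
    pass  # shipped empty on purpose; real logic comes in a later install

def process_review(review):
    # Existing behaviour is unchanged...
    result = {"id": review["id"], "decision": review["decision"]}
    challenge_hook(review)  # ...and the new call is harmless while empty
    return result

print(process_review({"id": 1, "decision": "accept"}))
```

The same pattern applies to the database example: the column exists as null first, gets backfilled next, and only then do programs start reading it.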

3 Likes

Don’t forget that the challenge is usually run in parallel with AI training. So, if we continue the analogy with empty, named null data points, the first step could be to deploy an AI that does nothing. Then, it could be taught to reject everything and report everyone to the team for review. After that, the AI should learn to calm down in certain situations to avoid being corrected by staff. I’ll go find my tinfoil hat in the meantime! :slight_smile:

2 Likes

I’ve been recording more of my voting, and I’m beginning to think that it isn’t bad wayspots being rejected, but good ones not being accepted or getting stuck in the system. A few decent submissions I voted on still aren’t in, even after nearly two weeks.

2 Likes

I’m a long long time reviewer.

Weirdly dropped to poor when the system glitch occurred.

Now back up to fair and only ever review the very occasional “sure thing”. Even then the system seems more likely to punish me than reward me which is, to say the least, demotivating.

Except… edits do count…

1 Like

I was saying that Niantic Wayfarer doesn’t count edits. Which it doesn’t. We were talking here about Wayfarer, not Pokémon GO.