Both the title and description originally misspelled “Balestier” as “Balastier”. The correct spelling is easy to verify, as it’s on the physical object itself and can be seen in the in-game photos. The name is also on government sites like this: Balestier Heritage Trail
This description edit was auto-accepted:
The title edit for the same typo entered community voting instead:
This isn’t a complaint or anything, I just find it a bit odd that the AI judges one but not the other.
Good test case! I would guess that one would be accepted, then the other rejected because (after the first is applied) it doesn’t change anything.
Oh, I meant two as in one title edit plus one description edit. They aren’t overlapping.
Oh yeah - that’s very common.
First: Niantic chooses around 20% of edits to be judged by humans (those decisions are then used as Machine Learning training data for the AI).
Then: the AI judges each edit separately. It returns a yes/no plus a confidence percentage. If that confidence doesn’t meet whatever minimum Niantic has set, the edit goes to human review (and is likewise used to train the AI).
For example, say Niantic auto-applies anything with at least 85% confidence level, and routes the rest to humans.
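As a rough sketch of that routing logic (the 85% threshold, names, and everything else here are made up for illustration, not Niantic’s actual system):

```python
# Purely illustrative sketch of confidence-threshold routing.
# The threshold and names are hypothetical assumptions.
from dataclasses import dataclass

AUTO_APPLY_THRESHOLD = 0.85  # hypothetical minimum confidence


@dataclass
class EditJudgement:
    accept: bool       # the model's yes/no decision
    confidence: float  # how sure the model is, 0.0 to 1.0


def route_edit(judgement: EditJudgement) -> str:
    """Decide where an edit goes based on the model's confidence."""
    if judgement.confidence >= AUTO_APPLY_THRESHOLD:
        # Confident enough: the decision is applied automatically.
        return "auto-applied" if judgement.accept else "auto-rejected"
    # Not confident enough: send to community voting / human review.
    return "human review"


print(route_edit(EditJudgement(accept=True, confidence=0.92)))  # auto-applied
print(route_edit(EditJudgement(accept=True, confidence=0.60)))  # human review
```

So two edits to the same Wayspot can easily end up on different paths if the model happens to be confident about one and not the other.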
Typo and grammar fixes in titles/descriptions are almost always ML accepted, but during the challenge, not so much. Here’s a description I submitted and ML approved back in May, but I spelled the name of the park wrong. I went to fix it during the challenge, and it’s now in voting. It was the only edit I made to this Wayspot, too.
And here’s a title edit I did to change “sign” to “Sign”, but it’s also in voting (I also submitted a description edit, since “Public art” isn’t very accurate, and a location edit).
This challenge is testing ML. Of the last 10 edits I submitted, the photo edit, 2 title edits, and 1 description edit were ML accepted, so it may be that ML acceptance had to be dialed back a bit due to the abuse that Niantic has confirmed was happening when the challenge started.
That’s the way Machine Learning works. It doesn’t learn anything from the cases it processes on its own with its existing model. It learns from resolved cases being fed back to it.
During a training period, the industry standard is to pull off a random 20% to do manually, plus manually review cases where the AI isn’t confident of its answer.
Then feed the manual results to AI as input, so it can learn. (Later dial it back to, say, a random 5% of cases done manually, for ongoing learning. It’s always learning, from crowd-sourced and in-house manual decisions.)
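Roughly, the sampling described above might look something like this (the rates and names are the generic industry-standard figures mentioned here, not confirmed Niantic numbers):

```python
# Illustrative sketch of routing edits to human review, either at random
# (to generate labelled training data) or because the model isn't confident.
# All values and names here are assumptions for illustration only.
import random

MANUAL_SAMPLE_RATE = 0.20  # random share sent to humans during training (later maybe ~0.05)
CONFIDENCE_FLOOR = 0.85    # below this, the model's own answer isn't trusted


def needs_human_review(model_confidence: float) -> bool:
    """Return True if this edit should be resolved by humans."""
    if random.random() < MANUAL_SAMPLE_RATE:
        return True  # random sample for ongoing learning
    return model_confidence < CONFIDENCE_FLOOR  # low-confidence cases


# The human decisions collected this way would then be fed back
# as labelled examples for the next round of model training.
```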
Ah, I had assumed the AI filters everything before passing some to the community.