Is Wayfarer AI the worst ever designed?

The question is certainly worth asking. At a time when AI agents can analyze complex strategies and produce incredible videos, Wayfarer’s automated process is incapable of recognizing a hiking marker on a tree trunk or a picnic table. Nor can it block a submission with “ggggg” as the title and “hhhhh glllll” as the description. It is powerless to recognize that a photo (stolen from Google) of Geneva’s large floral clock, an international tourist attraction, cannot possibly be located in a small village in the Savoie region. It does not detect that the 50th playground suggested within a 100-meter radius in the same neighborhood is an abuse, or that mismatched tree signs, photographed in different arboretums and placed along a treeless ditch on a country road, are one either.

I believe we have here the perfect tool to generate frustration among honest trainers who genuinely try to improve the experience for other trainers, and to indulge the twisted schemes of cheaters of every kind. In short, after several thousand reviews and over a hundred (real) PokéStops validated in the territories where I play, my anger is growing and I am questioning whether to continue. I had placed a lot of hope in the changes announced for 2025. Unfortunately, I find that opening submission and review to a wider community has produced a lot of mediocrity and increasingly incomprehensible rulings that must be appealed before being (very often, it’s true) accepted.

I’m frustrated and tired. Like probably thousands of other users… Thank you if you’ve read this (and sorry for the automatic translation into English).

Welcome to the forums :slight_smile:

Specifically regarding the ML (machine-learning) system that Niantic use to filter wayspots, it does a very good job of removing a lot of the garbage submissions that would otherwise flood the system.

A while ago, when the ML system was offline, these garbage submissions were appearing in the review flow. All they were good for was instant agreements, but when 90% of what you review is garbage, it is an unpleasant experience.

The volume of garbage submissions was very high, so I cannot fault the ML system for letting some ineligible submissions through. Tightening it further to catch those would greatly increase the number of eligible-but-rejected submissions; loosening it to reduce the eligible-but-rejected count would put more ineligible submissions back into the review flow.

There is no setting at which all ineligible submissions are rejected and all eligible ones are let through. You inevitably get correct rejections, incorrect rejections (false negatives), correct acceptances, and incorrect acceptances (false positives). The goal is to find an acceptable balance between the false negatives and the false positives.
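To make that trade-off concrete, here is a minimal Python sketch with made-up scores; nothing in it reflects Niantic’s actual model or thresholds. Moving the acceptance threshold in either direction trades one kind of error for the other:

```python
# Made-up scores for illustration only. "score" is a hypothetical ML
# confidence that a submission is eligible.

def confusion_counts(scored, threshold):
    """Count outcomes when accepting every submission scoring >= threshold.

    scored: list of (score, is_eligible) pairs.
    """
    fp = sum(1 for s, ok in scored if s >= threshold and not ok)  # ineligible let through
    fn = sum(1 for s, ok in scored if s < threshold and ok)       # eligible rejected
    return fp, fn

# Synthetic data: eligible submissions tend to score high and garbage low,
# but the distributions overlap, so no threshold is perfect.
scored = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
          (0.5, False), (0.4, True), (0.3, False), (0.2, False)]

for t in (0.25, 0.45, 0.65, 0.85):
    fp, fn = confusion_counts(scored, t)
    print(f"threshold={t:.2f}  false positives={fp}  false negatives={fn}")
# Raising the threshold cuts false positives but adds false negatives,
# and vice versa - exactly the balancing act described above.
```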

I think Niantic have struck a good balance here.

Thank you for your response.
I agree with you that a perfect system is impossible. However, the examples I’m giving seem quite easy to avoid… Should a title that means nothing in any language on the planet, like “ggggg”, really get past even a basic automated process? Why are five yellow hiking markers with the letters PR (for short hikes in France) accepted while the eighth is rejected? The same goes for a picnic table: thousands have been validated correctly… hasn’t the system learned anything?
How can it accept a firearms shooting range, defined as such in its title? How can two strictly identical photos in a single submission be accepted?
Why do repeated identical photos in the same area, or photos that appear first on Google when searching for comparisons, pass the test without a problem?
It’s not the system’s fallibility that I’m questioning. It’s the sort of lottery that seems to result from the automated process’s verdicts. Or perhaps the AI has had too much to drink some days? If that’s the case, then we can indeed understand.

I think the ML does quite a lot to filter out terrible nominations, as we can clearly see when it breaks and stops filtering anything! However, it definitely also has issues with some kinds of nomination, and in particular it seems to have a bias towards scenes with buildings in them over scenes that are mostly greenery.

The ML also seems to have started rejecting some of my trail markers along the same trail as previously accepted markers, accepted either by the community or on appeal. So feeding appeal decisions back into the ML might be an area that could be improved.
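If such a feedback loop existed, it might look something like this sketch, where an appeal outcome overwrites the stale label before the next retrain. Every name here is hypothetical; we have no visibility into how Niantic actually trains the system:

```python
# Hypothetical sketch of an appeal feedback loop: the appeal outcome replaces
# the stale label in the training data, so similar trail markers stop being
# auto-rejected on the next retrain.

def apply_appeal_outcomes(training_rows, appeal_results):
    """Overwrite labels for submissions whose appeal changed the decision.

    training_rows: dict of submission_id -> (features, label)
    appeal_results: dict of submission_id -> final label after appeal
    """
    for sub_id, final_label in appeal_results.items():
        if sub_id in training_rows:
            features, _stale_label = training_rows[sub_id]
            training_rows[sub_id] = (features, final_label)
    return training_rows

rows = {"marker-17": ({"kind": "trail marker"}, "reject")}
rows = apply_appeal_outcomes(rows, {"marker-17": "accept"})
print(rows["marker-17"][1])  # "accept" - the next retrain sees the correction
```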

One thing I hope it can do better in future is detect fake images created by AI, as well as stolen pictures - or maybe it is already doing this in the background to a point? This would be extremely helpful, since we see a lot of reports of AI fakes being presented for community review, and they seem to be getting harder to spot.

Placeholder titles have to be allowed, because the ML system reviews as soon as a submission is prepared, not only once it has been released into the queue.

I’m fairly sure about this: the ML system has a 24-hour period in which it assesses submissions before they go into the queue, but if you use Upload Later a day after creating one, this 24-hour period will already have been used up and your submission might go straight into the queue.

In order to reject placeholder titles, the phasing of the ML review would have to be changed.
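A toy model of that phasing, under the (unconfirmed) reading above that the 24-hour ML window starts at creation rather than at upload:

```python
# Toy model only - our reading of the described behaviour, not confirmed:
# the 24-hour ML window starts when a submission is *created*, so one held
# with Upload Later for over a day may go straight to the queue.

from datetime import datetime, timedelta

ML_WINDOW = timedelta(hours=24)

def route_submission(created_at, uploaded_at):
    """Return where a submission goes at upload time under this model."""
    if uploaded_at - created_at >= ML_WINDOW:
        return "queue"        # ML window already spent while held locally
    return "ml_assessment"    # ML still has time to assess it first

t0 = datetime(2025, 6, 1, 9, 0)
print(route_submission(t0, t0 + timedelta(hours=2)))   # ml_assessment
print(route_submission(t0, t0 + timedelta(hours=30)))  # queue
```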

As to why one similar submission is treated differently to others, this is inevitable in some situations, because the differences between those submissions will be such that an automated system perceives them as significant.

Firearms shooting ranges may not be an automatic rejection. There was a recent discussion about these, but I think it was established that it is possible for one to be family-oriented (I could be wrong!)

It’s always possible to look at specific submissions and think “that rejection was definitely wrong” or “that should definitely not have been let through”, but it is better to look at it in the round.

(By the way, I refuse to call anything like this AI, unless I put quotes around that phrase.)

Definitely. I think this is a hard problem, because AI fake images are not always obvious. There was a recent post where the image was real, but the central part had been substituted with a fake. I expect that Niantic are working on this and have already made the ML system ‘aware’ of this problem, and are going to continue improving this, but we will never be told how good it is.
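For the spliced-image case specifically, one classical forensic heuristic is error level analysis (ELA): re-save the JPEG and diff it against the original, since a pasted-in region often has a different compression history and stands out in the difference map. A rough sketch using Pillow; there is no suggestion that Niantic’s system works this way:

```python
# Error level analysis (ELA): re-save the JPEG and diff against the original.
# Regions pasted in from another source often recompress differently and
# show up as bright patches in the difference image.

from PIL import Image, ImageChops

def error_level_map(path, quality=90):
    """Return a per-pixel difference image between a photo and a re-save."""
    original = Image.open(path).convert("RGB")
    resaved_path = path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)
    return ImageChops.difference(original, resaved)

# Usage (hypothetical file name): bright areas hint at regions with a
# different compression history, e.g. a substituted centre.
# error_level_map("suspect_wayspot.jpg").show()
```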

As stated, it does get rid of some of the really bad nominations; when it goes down, the number of “My House” or “My School” submissions fed through the system is quite shocking.

Whenever the ML system is discussed, I always come back to the same point: in my opinion, it should be used to aid wayfarers when submitting and reviewing…

“It looks like you are submitting a play park. Please note that you should nominate the play area as a whole and not as individual pieces of equipment”.

“This submission looks to be for a bench. The rules are…”

(By the way, I refuse to call anything like this AI, unless I put quotes around that phrase.)

Hmm, if today, in 2026, it’s simply a matter of Machine Learning based on community decisions, then that obviously explains the disappointing performance, especially if the same tool is used across different countries. A player base with French, Italian, or German culture and reflexes, for example, has little in common with whatever drove the tool’s design. We can still dream that Niantic will make a technological leap :wink:

I’m generally agnostic on this idea, but I wonder if it would cause submitters to “standardise” their submissions, which would remove the local variability and individuality in submissions. The ML system would be directing submitters and I am a little uncomfortable with that.

We can’t know how the ML system has been taught. I doubt it is solely community decisions :slight_smile:

This wouldn’t eliminate cheaters, but it would certainly help those with good intentions.
It wouldn’t prevent a resubmission once the proposal is published, though - one that deserves even less leniency if the advice hadn’t been followed.

I agree, but that was more of a reference to your first response about my evaluations feeding the machine learning :wink: I’m convinced that with the combination of photo + text + categorization + geolocation, it’s much more complex than a “simple” algorithm based on usage statistics. That’s why I prefer to talk about AI. At least, that’s what it should be, it seems to me.
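Purely as speculation about what such a combination could look like in code, here is a toy fusion of the four signals; every name, score, and weight below is invented:

```python
# Speculative sketch of "photo + text + categorization + geolocation":
# independent feature extractors feeding one decision model. Every name
# here is hypothetical; we have no insight into Niantic's actual design.

from dataclasses import dataclass

@dataclass
class Submission:
    photo_score: float     # e.g. an image model's "looks like a real POI"
    text_score: float      # e.g. a language model's title/description quality
    category_prior: float  # how often this category tends to be eligible
    location_score: float  # plausibility of the claimed location

def combined_score(s: Submission) -> float:
    """Toy fusion: a weighted sum standing in for a learned model."""
    weights = (0.4, 0.2, 0.2, 0.2)
    signals = (s.photo_score, s.text_score, s.category_prior, s.location_score)
    return sum(w * x for w, x in zip(weights, signals))

sub = Submission(photo_score=0.9, text_score=0.1,  # a "ggggg" title scores low
                 category_prior=0.8, location_score=0.9)
print(round(combined_score(sub), 2))  # 0.72 - the weak text signal drags it down
```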

Hi and welcome! Feel free to write in French; there is a translation tool built into this forum.

To answer one of your assumptions: I submit in French, German, Italian, and English. My success rate in the first three languages is no different from my submissions in English. I wouldn’t assume there is a bias against specific languages. We know neither the training data nor how the system processes the different characteristics of a submission.

I think they should add the ability for the ML (which is not strictly AI) to recognize the semantic watermarks being added to the output of tools like Google Gemini, because I see so much AI-generated stuff.
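To my knowledge, Google’s SynthID image watermark detection is not exposed as a simple public library, so any sketch has to treat the detector itself as a hypothetical black box. The point is only that, if a detector were available, the integration into screening could be very small:

```python
# `detect_synthid_watermark` is a hypothetical stand-in: Google does not
# (to my knowledge) expose SynthID image detection as a public library.
# The sketch only shows how small the integration point could be.

def detect_synthid_watermark(image_bytes: bytes) -> bool:
    """Hypothetical provider-supplied watermark detector."""
    raise NotImplementedError("no public API; illustration only")

def screen_photo(image_bytes: bytes) -> str:
    """Route a submission photo based on a watermark check."""
    try:
        if detect_synthid_watermark(image_bytes):
            return "auto-reject: AI-generation watermark detected"
    except NotImplementedError:
        pass  # detector unavailable; fall through to normal review
    return "continue to ML / community review"

print(screen_photo(b""))  # -> continue to ML / community review
```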

This has absolutely happened: in our local community we use standard phrasings and angles/compositions of photos just to get past the AI. Those choices are suboptimal (for example, photos that the AI likes are often much worse at actually showing what an object is), but the humans who review can see the supporting image/street view and make more nuanced decisions without freaking out because they saw a tree.

This is an interesting take. I have found that the better the photo is, the more likely the ML model is able to understand what is being presented. Any suggestions I make to “get past” an automated process rejection also result in a better presentation on the Wayspot.

I still think it’s better than the month+ long queues we used to have, so I’m not too upset about it, but we definitely teach to the test, you know? Particularly with photos, we end up recommending a lot of very close shots that don’t necessarily show the whole waypoint just to ensure that the visible background is minimized.

That’s a curious outcome. As @cyndiepooh says, photos which make more sense to the ML system tend to be better photos.

I once took a sub-optimal photo, but that was to get the submission accepted by reviewers, as I expected that a bench-with-a-view photo focussed primarily on the view would be rejected by human reviewers.

I take very close-up photos of trail markers that basically just show the trail marker in order to get the submissions accepted by reviewers, as I expect that a photo focussed primarily on the scenery will get rejected by human reviewers.

I don’t pay any attention to which of these the ML system will prefer and haven’t had a single ML rejection.

can you give an example of this? very close shots often make it impossible for the ML model to detect what is being presented, and i often recommend backing up.

i also recommend against being too far away.

the guideline i use was created years ago:

this was posted on the Wayfarer Discussion Discord in 2021 - well before the ML model was introduced. (although i probably do “eyeball” a little less than 25% for the border, but that is the idea.)
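For what it’s worth, that eyeballed border can be stated precisely if you imagine a bounding box around the subject. A small sketch, with hypothetical numbers:

```python
# A formalisation of the "subject with roughly a 25% border" rule of thumb,
# using a hypothetical bounding box for the subject. Illustration only.

def border_fraction(frame_w, frame_h, box):
    """Smallest margin around the subject box, as a fraction of frame size.

    box: (left, top, right, bottom) in pixels.
    """
    left, top, right, bottom = box
    margins = (left / frame_w, top / frame_h,
               (frame_w - right) / frame_w, (frame_h - bottom) / frame_h)
    return min(margins)

# A 1000x1000 photo whose subject spans 250..750 on both axes leaves a
# 25% border on every side - roughly the framing suggested above.
print(border_fraction(1000, 1000, (250, 250, 750, 750)))  # 0.25
```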

If you have examples of a specific POI which was rejected by the ML system with one photo and then not-rejected by the ML system with a different photo, that would be useful.

Specifically when the rejection was a broad shot and the not-rejection a close-up shot, as you describe.
