Is Wayfarer AI the worst ever designed?


Here are two examples I’ve submitted: in one, the background was cut out, and in the other, instead of showing a whole wall of murals, I zoomed in on just one. In both cases I think it’s notable that the submissions were very tall or very long. I also don’t think the final pictures are bad; they just aren’t necessarily how I would have composed them if I didn’t have the AI filter to work around.

I think this would be most noticeable if I were submitting, say, a tall, skinny statue with a bunch of foliage behind it. I would either crop the image or not show the full statue.

I have two examples of very similar (in my opinion) photos and nominations. They are maybe 1 km apart, but on the same route.

The first picture was allowed into voting by the ML, but the second was rejected by it. Both submissions have since been accepted; I had to appeal both of them :woman_facepalming::rofl:

Ooh, found another. This one was allowed into voting by the ML.

There is definitely a suspicion that a lot of vegetation in a photo can leave the ML system unable to tell what it is looking at. With trail markers, for example, the solution is to reduce the greenery, not eliminate it. Something with the proportions @cyndiepooh gave seems ideal (and relates to the rule of thirds).

Did you try either of those submissions with wider-scale photos?

If not, do you have evidence that the wider-scale photos would have been rejected by the ML system?

(I’m not trying to deny what you are saying.)


It didn’t have a problem with this.

I’ve had a dirty named trail marker rejected by the community, so I can understand why this second one was rejected by the ML.

If the material allows, I often clean trail markers before taking photos.


I don’t have any one-to-one examples where something got through after being rejected; for the most part, I just appeal cases like that instead of resubmitting.

It’s definitely a case of potentially unproven community knowledge, but I think it’s telling that the AI filter prompts people to change their behavior, even if it turns out the filter isn’t actually working that way.



Oh, and I found one with a lot of greenery that it didn’t reject.

Reviewers had rejected the previous nomination for this when I focused on one of the signs, but the ML did not.

It’s a nice theory to blame me for not cleaning it, but these examples likely disprove that, and the signs are all perfectly readable, so any rejection for “dirt” would be incorrect.

These were all also allowed into voting by the ML, and they are pretty degraded, with algae and lichen growth.

All accepted, I might add.

I don’t think anyone would say it doesn’t let zoomed-out photos through; it does, more often than not. It’s about reducing the odds of a rejection.

That’s definitely not what I did. Before the ML, I found a clear consistency in results between “dirty” and “clean” trail markers, so I adjusted my submissions accordingly.

Rejection for dirt is incorrect, but since it happens, reducing the chances of it by wiping dirt from trail markers seems like a rational response.


Oh, that reminds me of another trail marker I was honestly surprised to find the ML did not have a problem with.


My Wayfarer bag has a “cleaning kit” of a microfibre cloth and a couple of brushes, plus, in summer, a pair of secateurs :joy:

I’ve been known to use a scraper to remove a sticker obscuring a sign :joy:

I often see ones where I think, “Why didn’t you spend 20 seconds pushing those branches out of the way?”, but I don’t reject them. Then again, all this shows I am human.


Hello, thank you very much, this will be much simpler for me :wink:

Are the ones with nonsense titles otherwise potentially eligible? If the wayspots themselves are fine, it might just be someone submitting placeholder text and forgetting to go back and edit it before it entered voting.

(I’ve definitely accidentally let some placeholder text get into the queue before.)

No. Minor spelling mistakes shouldn’t negate an entry, but text that is entirely wrong should not be accepted.

For example, a title of “pf40”, “temp”, “path”, or “dhjkfh” is not acceptable.

When I have forgotten to correct placeholder text and accidentally let something into voting, I mentally kick myself but would be horrified if it got accepted.


Hello, this morning I came across a submission that passed the ML. The title (and the rest) are very clear: Collège + the name of the collège. Clearly a school for children aged 11 to 14, which is a very clear rejection criterion. Isn’t it pathetic to let that through?

It’s easy to pick on an individual submission that you have been able to identify as definitely ineligible. That doesn’t make the ML system bad, but Niantic can use the case to see whether the system can be improved.

In general, if you take a system that does a decent job and tweak it for a situation it didn’t handle well, there is a significant risk of degrading its handling of other situations, so it can take a surprising amount of work to make adjustments. In this example, you wouldn’t want submissions of adult colleges to be affected.

Out of interest, what was/were the indicator(s) of this being a college for children?

In France, a collège is a school for students roughly aged 11-12 to 14-15. The only adults in a collège are the teachers.
Collège or lycée = under 18.

Ah. I suspect the indicator for you was simply that it was a collège, and therefore necessarily an educational institution for students aged 11-12 to 14-15. The implication is that none of the text or photos explicitly indicated it was a school for children.

I can see the problem for the ML system, which operates globally. In most countries, “college” refers to educational establishments for people aged 18+, and I imagine it’s quite hard to apply different rules to what is a small area (relative to the size of the world).