Can we please get a "fake warning" button?
LukeAllStars-ING
Posts: 4,625 Ambassador
I believe it would help clean up the process, give fakes less of a chance, and protect the ratings of the old, experienced Wayfarers, who really struggle to keep their ratings up because such nominations get through. And yes, such nominations are getting accepted.
It maybe isn't a must-have for everyone, but it would definitely help prevent the system from being fed more fakes.
Also, you could maybe add a little text like "Check this web link, it's fake" or "Look at the top right corner, it's photoshopped". That would really help, in my eyes.
Post edited by NianticCasey-ING on
Comments
Hey Luke,
I would reject this nomination 100%, because 1. there is no picture with orientation and no Street View (because yes, Germany just doesn't have it), and 2. you're right, you can see it's fake. But how do we report these submitters?
Could you teach me why this is fake? What should I look at?
If you look very closely at the picture, you can see tiny squares, which means the photo was taken of another (computer) screen.
And that's abuse.
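As an aside, the "tiny squares" described above are a moiré pattern, and it is detectable automatically: a screen's pixel grid shows up as strong periodic peaks away from the centre of the image's 2D Fourier spectrum. A minimal sketch, assuming the photo is already loaded as a grayscale numpy array (the function name and threshold logic are invented for illustration, not any Niantic system):

```python
# Hypothetical moiré/screen-capture detector: a re-photographed screen adds a
# fine periodic grid, which appears as strong off-centre peaks in the 2D FFT.
import numpy as np

def screen_capture_score(gray: np.ndarray) -> float:
    """Ratio of the strongest off-centre spectral peak to the DC component.
    Higher values suggest a periodic (screen-grid) pattern in the photo."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    dc = spectrum[cy, cx]
    # Ignore the low-frequency region around the centre (normal image content).
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 > (min(h, w) // 8) ** 2
    return float(spectrum[mask].max() / dc)

# Synthetic example: a smooth gradient vs. the same image with a pixel grid.
y, x = np.mgrid[0:256, 0:256]
natural = (x + y) / 512.0
gridded = natural + 0.2 * np.sin(x * np.pi / 2) ** 2  # fake "screen" grid

assert screen_capture_score(gridded) > screen_capture_score(natural)
```

A real system would need tuned thresholds and robustness to rescaled uploads; this only shows that the signal the reviewers spot by eye is also visible to a trivial frequency-domain check.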
Oh yes, I clearly see it in the sky.
I was on my phone before and it was not that clear, but from a laptop it is of course clearly a fake picture.
Of course you have to reject that one. But a lot of reviewers are stupid (sorry, but that's the truth).
No, that wouldn't help with all the other people who have reviewed it previously, or with those who don't know what to look at.
Instead, you should simply mark it as abuse, and Niantic should manually review every nomination marked as abuse and act accordingly. If you mark it as abuse and it's fine, you get a message/warning: "hey, that nomination is OK"; if it is abuse, then obviously it's removed and the nominator gets a strike.
How do you mark a submission as abuse?
1* abuse, 1* third-party photo, and/or the abuse form.
Also, besides the moiré effects that appear when these cheaters take a photo of their PC screen, you can usually identify these by the fact that there's only one such photo; in this example, the additional photo is just tilted a little. The poor guy didn't even have the slightest idea how to make a submission ("Infoboard - the old Infoboard - It is a beautiful view and a beautiful landscape").
Also, these people tend to add a strong plea for more Pokéstops because there are none in the neighbourhood.
You can take a picture of a screen and then go outside and take the second picture without the object, but with the right surroundings. Faking POIs is super easy, believe me. I have seen everything in my reports...
Just to add to this, Luke: people are faking things and you won't notice a difference. Taking a photo of a PC screen is kindergarten stuff. But those fakers grow up and use different techniques.
Hey there,
This is an interesting suggestion, are you talking about possibly using some sort of photo AI or link scraping system that would be able to detect whether a potential nomination is a fake based on what was submitted? Or improve the ease with which you can report possible fakes as abuse during the reviewing process?
I believe we need both of them.
The photo AI should check every submitted nomination, so if a nomination is detected as being not only photoshopped but literally taken from a third-party source (just like Google Lens does), it should be instantly removed from the queue and the submitter should get a warning.
And we really need some improvements to make reporting abusive nominations easier, especially for dealing with several fake nominations in a row.
In this case, I mean a warning for other Wayfarers that can be activated while reviewing. It should be like the "additional comments" area (which never gets used), but visible to other Wayfarers. This could help remove fake POIs before they even go in-game. Many fakes are easy to expose with just two minutes of research. (I can post my discussion with examples as an additional comment.)
Another idea would be an automatic reverse image search against Google and the Niantic database to prevent copy-and-paste abuse.
The system could also check the metadata of submitted pictures for Photoshop traces. (You can easily outsmart this, for example by using a screenshot.)
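The metadata check suggested above is cheap to sketch. A hedged illustration, assuming the EXIF tags have already been extracted into a plain dict (the editor list, function name, and verdict labels are all invented for illustration); note how it also captures the post's own caveat, that a screenshot strips the metadata entirely:

```python
# Hypothetical EXIF-based triage: flag images whose "Software" tag names a
# known editor, and treat a completely empty tag set (e.g. a screenshot,
# which strips EXIF) as suspicious in its own right.
EDITOR_SIGNATURES = ("photoshop", "gimp", "lightroom", "snapseed")

def metadata_verdict(exif: dict) -> str:
    """Classify extracted EXIF tags: 'edited', 'no-metadata', or 'plausible'."""
    if not exif:
        return "no-metadata"  # stripped EXIF, e.g. a screenshot re-upload
    software = exif.get("Software", "").lower()
    if any(sig in software for sig in EDITOR_SIGNATURES):
        return "edited"
    return "plausible"

print(metadata_verdict({"Software": "Adobe Photoshop 21.0"}))  # edited
print(metadata_verdict({}))                                    # no-metadata
print(metadata_verdict({"Software": "Apple iOS 13.3"}))        # plausible
```

As the discussion later points out, "edited" cannot mean "abusive" on its own, since legitimate cropping and contrast tweaks also leave editor traces; a verdict like this could only queue a nomination for manual review.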
At the very least, some kind of warning for reviewers, coming from past Wayfarers or from the system. Of course, you would have to prove why you are marking something as abuse: write a statement, or copy and paste a link that exposes the submission as fake. You could also create an extra reward for flagging something correctly. (That's not strictly necessary, but it would make people really check things and not just rate with whatever they see on the review screen.)
That's the discussion I was talking about:
https://community.wayfarer.nianticlabs.com/discussion/9914/all-the-fakes-i-had-to-review-the-last-days#latest
I think in a lot of cases an AI would work even better. Pictures taken of a screen, photoshopped pictures, and pictures taken from the internet or the Wayfarer database would be easily detected.
In addition, there could be a warning when you review a nomination by a known abuser, or one in a region that has had lots of fakes in the past.
I think a warning for areas where abuse has happened is too general and might make you review worse than you would without it.
Experienced reviewers' votes should just count more. It's frustrating to see this stuff pass. My idea would be that once you have reached 10,000 agreements, only honeypots count towards your rating.
I disagree that experience should make a vote worth more. Experience in and of itself is not really a good indicator that the reviewer is actually good, or that they're aware of the current guidelines, especially the latter.
I only play PoGo, so I only started reviewing a year ago, but I was involved with the local OPR community before that as well, and unfortunately I, as a non-reviewer, knew the guidelines much better than many very experienced reviewers.
I had to argue with some of the most experienced people in my community about things that should be obvious: whether a submission whose picture has bad lighting (it was perfectly recognizable, mind you; the picture was just taken early in the morning) should be rejected (they said yes); whether a nomination that is perfectly visible in Google satellite imagery but has no Street View should be rejected; whether all for-profit businesses should be automatically rejected. And I could go on and on.
Unfortunately, in my experience, the most experienced reviewers are the least inclined to keep up with changes in the guidelines.
True, but that's where the honeypots come in. As long as there are enough honeypots, experienced reviewers still get "tested" regularly.
But if there's enough testing (via honeypots or otherwise), there's no need to differentiate between reviewers based on experience.
If there is enough testing, some votes will need to count more, because a lot of people would drop to Poor. At least in my region, fakes get accepted all the time. Even the most obvious fakes, like these for example: https://community.wayfarer.nianticlabs.com/discussion/10075/more-fake-stolpersteine
A significant portion of the reviewer base is unable to spot fakes, and their ratings need to drop.
And also pictures with watermarks! I believe those would be very easy to detect, and it would avoid so many worthless reviews (not to mention that sometimes people don't even reject them).
"If there is enough testing, some votes will need to count more because a lot of people would drop to Poor."
That's not true, nor is it logical. Even if a lot of people drop to Poor, there are plenty of other ways to solve the resulting situation without creating an unfair distinction between players based on an irrelevant metric (experience). As I stated earlier, experience doesn't strongly correlate with quality, in my opinion, so your proposal would solve nothing but would successfully disenfranchise newer voters.
Any kind of basic AI that pre-checks the nominated photos would be a huge improvement.
• Check whether both photos are identical.
• Check whether they match standard camera sizes (to catch cropped pictures), whether the metadata indicates the image has been altered, and whether the GPS location matches.
• Compare them to all previous nominations (to catch stolen nominations).
Then you can move on to more complex tasks, like comparing against online pictures or running AI checks to detect image manipulation. But these first steps alone could improve quality a lot without getting into complex AI training.
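Two of those first-pass checks are simple enough to sketch. A hedged illustration, assuming photos arrive as grayscale numpy arrays (the hash size, ratio list, and tolerance are invented for illustration): a difference hash (dHash) catches identical and near-identical resubmissions, and an aspect-ratio test flags images that match no common camera format.

```python
# Hypothetical first-pass nomination checks: near-duplicate detection via a
# 64-bit difference hash, plus a crude "standard camera size" test.
import numpy as np

def dhash(gray: np.ndarray, size: int = 8) -> int:
    """64-bit difference hash: compare adjacent pixels of a downscaled image."""
    ys = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, size + 1).astype(int)
    small = gray[np.ix_(ys, xs)]          # nearest-neighbour downscale
    bits = (small[:, 1:] > small[:, :-1]).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

COMMON_RATIOS = (4 / 3, 3 / 4, 16 / 9, 9 / 16, 1.0)

def plausible_camera_size(width: int, height: int, tol: float = 0.02) -> bool:
    """True if the aspect ratio is close to a common camera ratio."""
    ratio = width / height
    return any(abs(ratio - r) / r < tol for r in COMMON_RATIOS)

rng = np.random.default_rng(0)
photo = rng.random((300, 400))
resub = photo + rng.normal(0, 0.01, photo.shape)  # lightly altered copy

assert hamming(dhash(photo), dhash(resub)) <= 10  # near-duplicate detected
assert plausible_camera_size(4032, 3024)          # typical 4:3 phone photo
assert not plausible_camera_size(977, 513)        # odd crop
```

Comparing a new nomination's hash against the stored hashes of all previous nominations is a cheap lookup, which is why these checks work as a pre-filter before any heavier AI step.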
"Check whether they match standard camera sizes (cropped pictures), or if the metadata indicates the image has been altered"
Some manipulations aim to better reflect reality; some manipulations aim to deceive. A simple yes-no test can't distinguish between the two.
From the guidelines:
A high-quality photo of a Wayspot is:
• Well composed - the Wayspot or placemarker is centered without too much foreground or background, or objects passing by in front
• Clear/sharp with good exposure (show off your photography skills!)
There is no requirement that what comes "straight from the camera" be absolutely perfect. Many legitimate photos can be improved by a small bit of cropping, and some minor adjustments to contrast and saturation.
I agree with this. I have done some minor editing (adjusting contrast or brightness, cropping, etc.) on photos I submit through Ingress. I hope these would not get flagged by an AI, as the adjustments were meant to better compose a photo of what I thought would make a good Wayspot, not to push through something that does not exist or is not really at that location.
Overall, I think some exploration should be done to see if AI can detect abuse, but efforts on this front should proceed with caution to avoid false positives.
Yes, any kind of AI should be a pre-filter that flags nominations for manual review by Niantic. With enough time and data it might become a real AI that detects fakes, but in any case it shouldn't take direct action against players unless it is 100% sure of malicious intent.