Not historically significant or visually unique

HaramDingo-ING Posts: 531 ✭✭✭✭

Recently people have been reporting that several of their nominations are commonly being rejected for the nomination not being culturally significant or visually unique. Regular playgrounds and picnic shelters (gazebos) are getting the not-culturally-significant rejection reason, and trail markers, post offices and even churches are getting the not-visually-unique message.

Considering that the criteria have been overhauled so that a nomination must be either:

  • a great place for exploration,
  • a great place for exercise, or
  • a great place to be social with others...

visual uniqueness (or rather, visual distinction) and cultural/historical significance have become antiquated. So there are three things that we can do to combat this:

  1. Communicating to reviewers that 5-starring a nomination overall but putting 1 star in the aforementioned categories is a reject if enough reviewers vote this way, so recommend they vote a neutral 3 stars instead,
  2. Add or replace the visually unique/cultural significance questions with the three core criteria of exploration/exercise/social, or
  3. Wait for the Winter 2020/2021 Wayfarer update that will update the review interface to replace the dreaded star system.

I'm sure there's a lot of work being done to hopefully get the next update over the line before the end of Winter 2021, but right now there is an abnormally growing number of rejections in our local communities for not being visually unique/culturally significant, even though we have nominated the same kinds of things in the past and they were comfortably approved.


Comments

  • Hosette-ING Posts: 1,059 ✭✭✭✭✭

    @HaramDingo-ING Niantic has never confirmed that your first point is true. Some people believe it but it shouldn't be taken as fact.

    As these two things are poorly specified by Niantic and not necessarily primary drivers of eligibility, Niantic should not be rejecting things based solely on them. Their sole guidance for cultural/historical significance is to "use your best judgement", and for many people the historical/cultural significance of a random playground or trail marker is very low. How much of a cultural loss will it be if "Foobar Trail Mile 3.5 Marker" is taken down?

  • TWVer-ING Posts: 269 ✭✭✭✭

    The alternative is that a significant number of reviewers rated the nomination low enough to reject it overall, and not enough people selected 1* for the rejection reason to be included in the e-mail.

    With all the weird rejection reasons we see, it might be enough for 1 or 2 people to select a rejection reason for it to be included in the mail.

    Did we ever see "visual uniqueness" or "historical significance" combined with other rejection reasons? I think we only get those when no-one rated the nomination 1*.

  • PYLrulz-PGO Posts: 30 ✭✭✭

    I agree. There has to be a very low threshold that says if x number of reviewers score something 1*, no matter if everyone else scores it 5*, it's rejected. I don't think it takes just one person, but the threshold has to be some crazy low amount, because there are times when something should be an easy accept, yet because a few people are not knowledgeable about how Wayfarer fully works, they may unwittingly ding an otherwise legit nomination.

  • TWVer-ING Posts: 269 ✭✭✭✭

    That is not what I am saying. I don't think that just a few people can reject a nomination when a vast majority voted in favor of it. Maybe the threshold is different between reject and accept, but I don't think it is significantly different.

    There are just too many people who don't keep up to date with the criteria, don't care enough, don't understand them enough, or mold the criteria into something they like.

  • Hosette-ING Posts: 1,059 ✭✭✭✭✭

    It may also frequently be the case that the thing being submitted is something that should be accepted but the submitter has done a poor job of presenting it. Between this and the Facebook group I've seen countless cases where someone posted something and asked, "Why was this rejected?" The submitter would then respond by explaining the historical value, or the value as a local gem, but none of that was in the original submission.

    To go back to the original topic, I think it's more likely that Niantic is just bad at sending rejection email than it is that a moderately low rating for historical/cultural value can tank an otherwise highly-rated submission. I really wish people wouldn't state that last part as fact.

  • Euthanasio2-PGO Posts: 255 ✭✭✭


    It is a fact. Show me where the "not visually unique" or "not culturally significant" rejection criteria are listed. Most weird rejections of commonly acceptable nominations receive only that reason. How else could that rejection possibly happen?

  • SeaprincessHNB-PGO Posts: 155 ✭✭✭

    It is a fact. I have had things rejected and the ONLY reason listed was that it was not Culturally Relevant. That is not a rejection reason you can select if you 1* the nomination as a whole. So it must have been rejected because it was rated poorly on that one criterion. If it was a mixture of low criteria, I would have gotten multiple reasons, as I typically do. But when you only get ONE reason listed, EVERY rejection was due to the same reason.

  • Hosette-ING Posts: 1,059 ✭✭✭✭✭

    It is possible that a particular mediocre submission got mediocre reviews overall and that's just the rejection reason that wound up in email. I think that concluding that the contents of the email accurately describe the root cause of a rejection is expecting too much of that message. We're all used to seeing screwy rejection reasons... why would we suddenly think this particular one is a high-accuracy signal?

  • SeaprincessHNB-PGO Posts: 155 ✭✭✭

    I'm just not sure you know how data works, TBH. While we do get "screwy" rejection reasons, those all tie out to a specific reason in the 1* list. Like in the old days we would get "generic business." That was a discrete reason selected from a list of available reasons.

    Yes, there was a problem when lazy reviewers would select the WRONG reason, but the reason was always one on the list.

    When the reason your nomination failed is NOT listed but IS one of the individual categories, there is logic to support the idea that a low rating in that category resulted in the rejection.

    People using the wrong rejection reason so that we get poor feedback is a completely different issue from the fact (and I do believe it is fact) that a low score on one of the voting components leads to rejection.

    Do you have a hard time believing that only one stop or gym can be in a Level 17 cell? Because Niantic has never acknowledged that, either. We discovered that on our own and have been working on that fact ever since. I don't know why you don't understand this when it is just as clear.

  • Hosette-ING Posts: 1,059 ✭✭✭✭✭

    @SeaprincessHNB-PGO I'm a software engineer with over three decades of experience, and a great deal of my background is in software testing. Not only do I understand how data work, but I have seen more screwy bugs and poorly-conceived logic than you can possibly imagine... and I'm sure you can imagine a lot. (I've created plenty of my own too, and fixed tons that other people created.) One of my professional superpowers is identifying unusual behavior and quickly intuiting what is going on under the hood to cause what I'm seeing. I'm an SRE for a large complex website these days, and it's a regular pattern for one of the more junior engineers on the team to come to me for help... they describe the problem, and I'll respond, "Have you checked X? If you do, I bet it will show you Y because Z is happening."

    In this case, my instincts tell me that a submission that finishes with averages of 4.5/4.8/1.5/4.5/4.9/4.7 is not going to be rejected. I'm willing to believe that if something ends at 2.7/2.8/1.5/2.3/2.8/3.0 then the 1.5 would be the straw that broke the camel's back. If there wasn't a strong signal in hard rejections then they might fall back to the lowest star rating(s) to populate email.

    The bottom line is that I don't think it's reasonable to conclude from text in email that a low rating for one field on something otherwise highly-rated can cause the whole submission to be rejected. We just don't have enough information.
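    To make the hypothesis concrete, here is a toy model in Python. Every threshold, field name, and the "lowest category populates the email" rule below is invented purely for illustration; none of it is confirmed Niantic logic.

    ```python
    # Toy model of the hypothesized behaviour -- NOT Niantic's actual logic.
    # Assumptions: six rating fields, a simple mean >= 3.0 acceptance cutoff,
    # and the email reason falling back to the lowest-rated category.
    FIELDS = ["overall", "title", "cultural", "uniqueness", "safety", "location"]

    def decide(averages):
        """Return (accepted, email_reason) for a dict of field -> mean stars."""
        mean = sum(averages.values()) / len(averages)
        accepted = mean >= 3.0
        # Guess: with no strong 1* rejection signal, the email names
        # whichever category averaged lowest.
        email_reason = min(averages, key=averages.get)
        return accepted, email_reason

    strong = dict(zip(FIELDS, [4.5, 4.8, 1.5, 4.5, 4.9, 4.7]))  # survives the 1.5
    weak = dict(zip(FIELDS, [2.7, 2.8, 1.5, 2.3, 2.8, 3.0]))    # the 1.5 tips it
    ```

    Under this toy model the first profile is accepted despite the 1.5, the second is rejected, and both would name the same low category in the email, which would explain how misleading the rejection mail can look.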

  • 52cucumbers-ING Posts: 207 ✭✭✭
    edited January 19

    We're pretty much just able to observe the output from this big black box that is the review engine or whatever you want to call it. We don't know how it works and we don't even know the full picture when it comes to the input because we'd need to know exactly how every single reviewer voted.

    This is an extremely poor starting point for trying to reverse-engineer how it all works. We can't, for instance, possibly know if it's a lot of 2* or a lot of 1* in one of those categories that triggers including it in the email. We can probably safely assume that it's not because it got a lot of 5*. Bottom line, we can guess, but when we do we should be well aware that that's what we're doing and not present our guesswork as fact.

    Do you have a hard time believing that only one stop or gym can be in a Level 17 cell? Because Niantic has never acknowledged that, either. We discovered that on our own and have been working on that fact ever since. I don't know why you don't understand this when it is just as clear.

    In that case we know both the input (ingress portals in every L17 cell) and the output (Pogo things in every L17 cell). It's not a comparable situation.

  • Hosette-ING Posts: 1,059 ✭✭✭✭✭

    @52cucumbers-ING Very well stated, thank you.

    I would love to be in a situation where we could perform controlled experiments with Wayfarer to generate specific inputs and then measure the outputs, but I doubt that will ever happen. @Raachermannl-ING has been doing some really excellent observational study work but I gather it involves having a lot of contact with locals so that they can pool data for analysis.

    Using data is good. Thoughtful hypotheses based on observations are good. What's bad is forming a hypothesis and then presenting it as a conclusion. That's misleading, and it can discourage people from doing their own thinking and analysis and maybe identifying something that others have overlooked. My quibble is with presenting "We suspect X because of Y" as "X is true."

  • Euthanasio2-PGO Posts: 255 ✭✭✭
    edited January 19

    Do you genuinely believe that people all just gave 2* to playgrounds or sports fields? 1* is way more common for rejection. There are way too many people reporting the same occurrence. When one person says something they might be wrong, but when tons of people from all around the world report the same thing... maybe, just maybe, we have a higher chance of being right? And like I said, there are no rejection criteria for those.


    They also say to rate the location 3* if you are not sure it exists, so it's safe to assume that Niantic considers less than 3* a no based on this.

  • Lechu1730-PGO Posts: 537 ✭✭✭✭

    There might be a way to test the hypothesis but it would be difficult to gather the data.

    Every now and then I review something and automatically get an extra agreement. I guess that happens when my review is the one that tips the nomination into agreement.

    So the thing to track is whether the review that generated the agreement was rated 4* or 5* overall but 1* or 2* on significant/unique, and then whether it appears in the games or not.

  • Hosette-ING Posts: 1,059 ✭✭✭✭✭

    @Lechu1730-PGO I don't think that the last review would have more weight than any of the others, so it's unlikely that you could get a good signal out of that. It's an interesting idea, though.

  • Lechu1730-PGO Posts: 537 ✭✭✭✭

    I don't think it has more weight, just that you have some useful data: you know that you are in agreement and that you rated the proposal highly overall but low on relevant/unique. Seeing whether the proposal is accepted or rejected should tell you which rating is more important.

  • Kroutpick-PGO Posts: 347 ✭✭✭

    Every now and then I review something and automatically get an extra agreement. I guess that happens when my review is the one that tips the nomination into agreement.

    @Lechu1730-PGO , how can you be sure that this new agreement is directly related to the review you just sent?

    We get agreements at any time when one of our submissions reaches an agreement. The +1 agreement you see after sending a review may be from any previous review that reached an agreement while you were reviewing. It's not necessarily related to the last submission you just reviewed.

  • 52cucumbers-ING Posts: 207 ✭✭✭

    I'm not arguing against the theory, I'm arguing against presenting it as a proven fact, because it's not. It's most likely a correct theory, but a theory nevertheless. If that annoys you, just imagine how Einstein must feel: his work is still referred to as theories 100 years later, and that's because we don't have proof. There have been many theories about how the games actually work over the years; some turned out to be true (the last-ball bug in early PoGo raids, for instance), some are still just theories (Pokemon spawn points). I could argue semantics about this all day, but I doubt it's interesting to the wider community so I'll just leave it now.

    The only reliable way I can see of gathering the data needed to conclude anything with reasonable confidence would be to craft some kind of browser plugin that scrapes the name of the nomination, sniffs out the ratings given by the reviewer, and sends it all off somewhere. The problem is that it would need wide adoption in the local community to be reasonably sure it catches most reviews for a nomination. If you could do that, then someone with access to the recorded results could make nominations and wait for them to go through the process. There are several issues with that, but I think it's technically possible. You'd probably want a reasonably rural/remote community without too many reviewers, because the main issue is knowing you've caught enough data to be able to reason about the result.

  • 0X00FF00-ING Posts: 456 ✭✭✭✭✭

    So, if a nomination similarly gets rejected for low "overall" scores but doesn't have these "reject" scores in any sub-category, we don't get any rejection reasons in the email? And this may vaguely explain the rare occasions when the only given reason is: .

    Yup, just a period.

    I still maintain that this specific reason should not be included as a rejection reason @NianticCasey-ING. A playground is a playground is a playground, and its "cultural or historical" relevance is... umm, irrelevant. Other than the "this photo is terrible" and "the location is a lie" 1* rejection reasons, when a reviewer says "5* overall, this should be a wayspot", the subcategories ought not to override the overall vote. Telling me that my playground isn't historical/cultural is just salt in the wound; the most helpful thing to tell me is that the reviewers gave it low points overall.

  • SeaprincessHNB-PGO Posts: 155 ✭✭✭

    What could fix this is if there was no Overall score that a person was allowed to select directly, but instead the Overall score was an average of the subcategories. So if I rate 3* on Title/Desc, CR, VU, PA and Location, then the overall score is 3*. And LET ME SEE THAT. If I vote 1* on three of those things and 3* on the rest, the overall score goes down to 1.7 or 2, AND LET ME SEE THAT. Then I understand the levers and how I need to think about each piece of the voting process, because they matter.
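    The proposed mechanic is easy to sketch. The five category names below follow the post; this is the suggestion being made, not how Wayfarer is known to work.

    ```python
    # Sketch of the proposal: no directly-selected Overall score; instead it
    # is derived as the mean of the five subcategory ratings and shown to
    # the reviewer. Category names follow the post above.
    def overall_score(ratings):
        return round(sum(ratings.values()) / len(ratings), 1)

    even = {"title": 3, "cultural": 3, "uniqueness": 3, "access": 3, "location": 3}
    mixed = {"title": 1, "cultural": 1, "uniqueness": 1, "access": 3, "location": 3}
    ```

    overall_score(even) comes out at 3.0 and overall_score(mixed) at 1.8, in line with the "1.7 or 2" estimate in the post.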

  • 52cucumbers-ING Posts: 207 ✭✭✭

    With the 3.1 criteria they could just replace the entire star-system with checkboxes because the criteria reads exactly like a checklist. It doesn't have to be more complicated than that.

  • Hosette-ING Posts: 1,059 ✭✭✭✭✭

    @SeaprincessHNB-PGO I have it on good authority that not all fields carry equal weight, and IMO that's the way it should be. For example, safe pedestrian access is absolutely mandatory for a wayspot. A high-quality title/description and visual uniqueness are more in the nice-to-have category. I would absolutely want a high-quality submission with a mediocre title to go through, but I wouldn't want one with an exquisite title and super sketchy pedestrian access.
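    That kind of unequal weighting can be illustrated with a weighted mean. The weight values here are invented purely for illustration; Niantic has never published the real ones.

    ```python
    # Illustrative weighted mean -- the weights are made up, not Niantic's.
    # The idea: pedestrian access dominates, title quality barely matters.
    WEIGHTS = {"title": 0.5, "uniqueness": 0.5, "cultural": 1.0,
               "access": 3.0, "location": 1.5}

    def weighted_score(ratings):
        total = sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)
        return round(total / sum(WEIGHTS.values()), 2)

    # Great submission with a mediocre title: still scores well.
    good_sub = {"title": 2, "uniqueness": 4, "cultural": 4, "access": 5, "location": 5}
    # Exquisite title but sketchy pedestrian access: tanks the score.
    sketchy = {"title": 5, "uniqueness": 4, "cultural": 4, "access": 1, "location": 5}
    ```

    With these made-up weights the mediocre-title submission still averages well above the sketchy-access one, matching the priority described above.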

  • HaramDingo-ING Posts: 531 ✭✭✭✭

    I just had tennis courts rejected for not being historically/culturally significant.

    What a stupid waste of a nomination. Description and title were standard, but yeah that is pathetic.

  • HaramDingo-ING Posts: 531 ✭✭✭✭

    Still going to hound Niantic about when the REDUNDANT categories of visually unique and culturally significant are going to be retired.

    This historical plaque was rejected for not appearing to be visually unique, but my god, you think that we should re-submit this with an improved photo? That's really low, absolutely low, and a slap in the face. The photo is perfectly fine. Now someone here might think that the description or something else is wrong with the nomination somewhere, but there is NOTHING wrong!

    Reviewers are dumb as hell. This is a historical plaque and yet it was rejected for the dumbest reason!

    Niantic, I really hope you get rid of these absolutely worthless categories soon enough. Because this is atrocious.

  • Shilfiell-ING Posts: 265 ✭✭✭✭

    Things can be visually unique and not culturally significant. My tripod cat is visually unique, but I can assure you that he needs a lot of improvement in the culturally significant arena. A tombstone for a Medal of Honor honoree may be a simple wooden marker identical to many others, but can still be considered culturally significant. Your photo does look fine, and yep you likely got some bad reviewers, but so have we all - it's taken me multiple tries to get some of my best candidates through. It's not necessarily the fault of the criteria themselves, just reviewers picking the easiest rejection option for something I failed to "sell" adequately.

    Reviewers are not "dumb as hell" as a group. It really pains me to see comments like that when I come to these pages after a long and generally unproductive reviewing session. Many times, the rejection emails make little sense, sure....but putting all the blame on the human reviewers or the established criteria isn't the answer.

  • HaramDingo-ING Posts: 531 ✭✭✭✭

    Unearthing this controversial topic because of a visually-unique rejection reason coming back for a park sign.

    I'm furious because this is a nomination for a park. And I absolutely hate getting rejections because I take a lot of time and effort into preparing these nominations and anyone who thinks or assumes that a candidate is "meh" frustrates me more than anything.

    This reserve/park was rejected for not being visually unique. On Street View, it is a standalone park sign on the corner, which is not already a wayspot. It is its own definitive marker on the north-western side of this park. Visually unique: does it stand out from its surroundings and is it easy to locate? Well, yes, it's a park sign, and despite everything else it is easily identifiable, and it is individual. Is that not visually unique?

    Well, nowadays I can theorise and probably start to understand why these reasons are appearing. A park is not a "meh" candidate. My assumption is that a nomination that is voted as a "duplicate" before any of the stars are rated actually gives the system 0* in each category. But why should this be the case? On this corner, there IS no wayspot of Neville Reserve. So why is it not visually unique?

    On the bottom-right is an existing wayspot, another park sign of the same park, yes, called Neville Reserve. Right on the other corner of this park. Its picture is below.

    Sure, maybe they are meant to be duped, despite being in radically different places and just over 250 metres apart. Maybe it was because I called it by the exact name and didn't call it Neville Reserve Northwest Sign? But the visually-unique reason is wrong as hell. I used to have a post asking how far apart park signs have to be before they start being marked as duplicates (no idea where it is on these forums, and it doesn't appear in My Discussions), but I am absolutely frustrated that this was rejected for an absolute bollocks of a reason. I believe the entrances of this park are differentiable enough.

    I get Casey's post regarding this, but is this park really not big enough? Does it have to be so big that the other park sign cannot be detected in the 'Check for Duplicates' screen? I'm really disappointed about this.

  • 0X00FF00-ING Posts: 456 ✭✭✭✭✭

    @HaramDingo-ING I would hazard a guess that the biggest issue with your Reserve nomination above may be a cultural/language disconnect. Yes, the submitted park very likely "should" be a valid wayspot.

    I'm not particularly familiar with the specific area, but is it a country-wide convention that a "reserve" is a "park"? Nothing in your given title or description uses the specific word "park". My google/dictionary search just gives:

    a place set aside for special use.

    • a reservation for an indigenous people. "a reserve was allocated to the tribe on Bear Island"
    • a protected area for wildlife. "part of the marshes has been managed to create a splendid reserve full of birds"


  • HaramDingo-ING Posts: 531 ✭✭✭✭

    Between the usage of Park and Reserve, they are quite interchangeable in Australia. There is no real difference. Australians would understand the nuances of a Park vs a Reserve, like how "Hotels" in Australia are usually just pubs, but have quite a different meaning overseas.
