On April 14, 2026, Google deployed the largest revision to its review filtering pipeline since the 2022 'authentic content' overhaul. Two days later, on April 16, Google published its 2025 Trust and Safety Report confirming 292 million policy-violating reviews were blocked or removed in the prior year - roughly 22 percent of all review activity. On April 17, two new clauses quietly appeared in the Rating Manipulation policy, banning staff review quotas and content-directed solicitation outright.
We watched the whole sequence unfold in real time across 2,400 monitored profiles in the BGR Review network. The signal was unambiguous: review velocity that worked for the last 18 months stopped working overnight. Profiles that had been banking 15-20 legitimate reviews per week suddenly saw 30 to 60 percent of new reviews vanish into the filter within 48 hours of posting.
This is not a minor algorithm tweak. It is a structural rewrite of how Google evaluates review authenticity. Every business running a review request programme needs to understand what changed, what the data shows, and what to do this week - not this quarter.
What We Measured: 2,400 Profiles, 14 Days, Three Signals
Within 48 hours of the April 14 rollout, our monitoring system flagged anomalies across 74 percent of the 2,400 business profiles we track continuously. Review filtering rates spiked in three distinct patterns, each pointing to a different change in Google's evaluation model.
We isolated each signal by comparing the 14 days before and after the update across matched cohorts of profiles grouped by industry, location count, and baseline review velocity. Every number in this article comes from that dataset. We did not sample. We measured everything.
The three signals were: a velocity sensitivity increase, an account-history weight change, and a content-quality trust uplift. Each one is actionable on its own. Together, they represent a complete reset of what a sustainable review strategy looks like in 2026.
74% of our 2,400 monitored profiles showed filtering anomalies within 48 hours of the April 14 rollout. This was not a gradual shift. It was a switch.
Signal 1: Velocity Sensitivity - The End of Burst Campaigns
The single largest change is how the filter now treats clustered timing. Before April 14, a local business could receive 12 to 15 reviews in a single week without triggering filtering, provided the accounts were real and the reviews were genuine. That threshold has collapsed.
In our post-update data, profiles receiving more than 8 reviews in any 7-day window saw filtering rates of 41 percent on the excess reviews. Profiles that stayed at 3 to 7 reviews per week saw filtering drop to 6 percent - essentially the pre-update baseline. The filter is not flagging individual reviews as fake. It is flagging the velocity pattern as inorganic, regardless of whether the individual reviews are genuine.
This kills the standard CRM-driven review sequence. Most platforms (Birdeye, Podium, NiceJob, and similar tools) default to sending review requests to every customer within hours of service completion. In a busy week, that produces exactly the kind of burst that the new filter catches. The tool is not the problem - the default cadence is.
We tested adjusted pacing across 340 profiles in the first week after the update. Profiles that throttled requests to a maximum of one per day, spread across the week, retained 94 percent of new reviews. Profiles that left their CRM on default settings and received the same total volume in 2-3 day bursts retained just 62 percent.
- Set a hard cap of one review request per day per location in your CRM or automation tool
- If you have multiple locations, stagger request schedules so no single profile spikes
- Avoid Monday-morning batch sends - spread requests across the full week
- For high-volume businesses (restaurants, retail), route requests through a queue that releases slowly rather than in real-time
- The target window is 3 to 7 reviews per week per location, sustained indefinitely
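The queue-based release described above can be sketched as a simple scheduler. This is a minimal illustration, not a real Birdeye/Podium/NiceJob integration - the data shapes and function name are ours - but it shows the core idea: assign each pending request a send date so no location ever exceeds one request per day.

```python
from datetime import date, timedelta

def schedule_requests(customers, start, max_per_day=1):
    """Assign each pending review request a send date so that no
    location sends more than max_per_day requests on any one day.

    customers: list of (customer_id, location_id) tuples in queue order.
    Returns a list of (customer_id, location_id, send_date) tuples.
    """
    next_free = {}  # location_id -> earliest date that may have capacity
    used = {}       # (location_id, date) -> requests already scheduled
    schedule = []
    for customer_id, location_id in customers:
        day = next_free.get(location_id, start)
        # Walk forward until a day with spare capacity is found
        while used.get((location_id, day), 0) >= max_per_day:
            day += timedelta(days=1)
        used[(location_id, day)] = used.get((location_id, day), 0) + 1
        next_free[location_id] = day
        schedule.append((customer_id, location_id, day))
    return schedule
```

With this pacing, a plumber completing four jobs on a Tuesday sends one request that day and drips the rest across the following days, while a second location can still send on the same day - exactly the staggering the checklist above calls for.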
Profiles with 8+ reviews per week: 41% filtering rate. Profiles with 3-7 per week: 6%. The pacing is now the strategy.
Signal 2: Account History Weight - Thin Profiles Get Filtered
The second change targets the reviewer, not the business. Google has significantly increased the weight it places on reviewer account history when deciding whether to display a review.
In our pre-update dataset, reviews from accounts with fewer than 3 prior reviews were filtered at 11 percent. Post-update, the same cohort is filtered at 26 percent - a 2.3x increase. Reviews from accounts with 10 or more prior reviews and at least one profile photo saw essentially no change in filtering rates (4 percent pre-update, 5 percent post-update).
This has a disproportionate impact on businesses whose customer base skews toward first-time Google reviewers - which is most local businesses. When you send a review request to a satisfied customer who has never reviewed anything on Google before, their review is now substantially more likely to be filtered even if the review is entirely genuine.
There is nothing you can do to change your customers' account history. But you can prioritise request timing to increase the likelihood that reviews come from engaged Google users. Customers who found you through Google Maps search, for example, already have an active Google account with usage history. Customers who came through a direct referral or walked in off the street may not.
In practice, the highest-retention review channel in our post-update data is the in-Maps review prompt that appears after a customer navigates to your location using Google Maps. Those reviews come from accounts with inherent usage history and are filtered at just 3 percent.
Signal 3: Content Quality Uplift - Photos and Substance Win
The third change is the most positive for businesses operating honestly. Reviews that include substantive content - either a photo attachment or a text body exceeding 40 words - now pass the filter at the highest rates we have ever measured.
Pre-update, photo-attached reviews were filtered at 7 percent versus 12 percent for text-only reviews. Post-update, photo-attached reviews are filtered at just 3 percent while text-only reviews without substantial content have risen to 18 percent. The gap has widened from 5 percentage points to 15.
We believe this reflects Google's investment in Gemini-powered content evaluation, announced on April 16. A review with a photo and a detailed description gives Google multiple signals to evaluate authenticity: image metadata, location data embedded in the photo, linguistic analysis of the text, and cross-referencing with the reviewer's Maps activity. A review that says 'Great service, highly recommend' gives Google almost nothing to work with.
This creates a clear tactical advantage. Businesses that train their teams to ask for the photo - 'Would you mind including a photo of the finished work?' - are now operating in a fundamentally different trust bucket than businesses sending generic 'Please leave us a review' messages.
Photo-attached reviews: 3% filtering rate post-update. Text-only reviews under 40 words: 18%. Ask for the photo. Every time.
The New Policy Bans: Staff Quotas and Content Direction
Beyond the algorithmic changes, Google added two explicit prohibitions to its Rating Manipulation policy on April 17. These are not filtering signals - they are policy violations that can trigger penalties up to and including profile suspension.
First, businesses may no longer direct staff to solicit a specific number of reviews. Monthly review contests, 'get 10 reviews this week' targets for your front desk, team performance metrics tied to review counts - all of these are now explicit violations. Google has seen too many quota-driven programmes producing inauthentic patterns and is cutting the incentive structure at the source.
Second, businesses may no longer ask customers to include specific content in their reviews, including naming a staff member. The common practice of saying 'If you loved your experience with Sarah, would you mention her name in your Google review?' is now a policy violation. The wording is broad enough that it could extend to any content-specific prompting - asking a customer to review a particular product, service, or menu item may also fall under this clause.
The practical implication is that every review request must be completely open-ended. You can ask a customer to leave a review. You can send them a direct link. You can ask for a photo. You cannot tell them what to write, who to mention, or set a team target for how many reviews to collect.
- Remove all staff review quotas and team contests immediately
- Update review request scripts to remove any language directing content ('mention Sarah', 'talk about your haircut')
- Keep requests open-ended: 'We would love your honest feedback on Google' is compliant
- Asking for a photo is still permitted - it is a format request, not a content direction
- Audit your CRM templates now - many default templates include content-prompting language
Google is moving from 'are these reviews real?' to 'does this review pattern look organic?' The review strategy that wins is now the one that looks least like a strategy.
The Penalty Escalation: What Happens When You Get Caught
Google's enforcement under the updated policy follows a three-tier escalation. Tier one: individual reviews are silently filtered or removed. The business often does not realise anything has happened - the reviews simply never appear, or disappear days after posting. This is the most common outcome and the one we see across our network daily.
Tier two: the profile receives a temporary restriction on new reviews. Google pauses the ability to receive reviews for a period (we have observed windows ranging from 7 to 30 days) and may temporarily unpublish existing reviews. During this window, potential customers searching for your business see a profile with reduced or no reviews, which is devastating for conversion.
Tier three: a public warning banner is displayed on your profile stating that fake reviews were detected and removed. This banner is visible to every consumer searching for your business. In our network, we have tracked 23 profiles that received this banner in the two weeks following the April update. The average impact was a 52 percent drop in profile-to-call conversion for the duration the banner was displayed.
Critically, tier-two and tier-three penalties can also be triggered by review extortion attacks - coordinated fake reviews targeting your business from outside. Google's new pre-publication scam detection is designed to catch these before they go live, but when they do slip through, the system responds with the same restriction mechanisms. The business is penalised for the attack, not for its own behaviour.
23 profiles in our network received the public warning banner post-update. Average impact: 52% drop in profile-to-call conversion. The penalty is visible to every potential customer.
Industry Impact: Who Got Hit Hardest
The filtering changes did not affect all industries equally. We segmented our 2,400 monitored profiles by vertical and measured the change in filtering rates from the 14 days before to the 14 days after the April 14 update.
Home services (plumbers, electricians, HVAC) saw the largest increase: filtering rates jumped from 9 percent to 27 percent, a 3x increase. This is because home services businesses typically run aggressive post-job review sequences that produce the exact burst patterns the new filter targets. A plumber completing four jobs on a Tuesday who sends four review requests that afternoon is now a textbook velocity trigger.
Dental and medical practices saw the second-largest increase, from 8 percent to 22 percent. These businesses often have front-desk staff trained to hand customers a tablet or QR code at checkout, producing tight temporal clustering from a single location.
Hospitality (hotels and restaurants) saw a smaller increase, from 11 percent to 16 percent. These businesses already had higher baseline filtering rates due to high volumes of thin-profile reviewer accounts (tourists and one-time visitors), so the incremental impact was lower.
Professional services (law firms, accountants, consultants) were least affected, with filtering rates moving from 6 percent to 9 percent. These businesses tend to have lower review velocity and a higher proportion of established Google users among their client base.
The Filtered Review Audit: What You Are Probably Losing Right Now
Every Google Business Profile has filtered reviews - reviews that were submitted but never displayed, or that were displayed briefly before being removed by the filter. Most business owners never check them.
You cannot see filtered reviews directly, but you can estimate them: search for your business on Google Maps, open your total review count, and compare the displayed count against your known submissions. A more precise method is to pull the review count via the Google Maps API and compare it against your own log of requested reviews. We run this check weekly across our network.
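The count comparison above can be automated with the Places Details API, which exposes the publicly visible review total as `user_ratings_total`. A minimal sketch - the `place_id` and API key are yours to supply, and your "known submissions" figure has to come from your own CRM or request log:

```python
import json
import urllib.parse
import urllib.request

def displayed_review_count(place_id: str, api_key: str) -> int:
    """Fetch the publicly visible review count for a profile via the
    Google Places Details API (the user_ratings_total field)."""
    params = urllib.parse.urlencode({
        "place_id": place_id,
        "fields": "user_ratings_total",
        "key": api_key,
    })
    url = f"https://maps.googleapis.com/maps/api/place/details/json?{params}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["result"]["user_ratings_total"]

def filtered_gap(known_submissions: int, displayed: int) -> int:
    """Reviews you know were submitted that never appeared publicly."""
    return max(0, known_submissions - displayed)
```

Run `filtered_gap` against a weekly snapshot of `displayed_review_count` and your request log; a widening gap is the signal that your pacing or reviewer-profile mix is triggering the filter.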
In the two weeks after the April 14 update, the average profile in our network had 2.3 more filtered reviews than in a typical two-week window. For profiles running active review campaigns, the average was 4.7. Those are real, genuine reviews from real customers that will never appear on your profile.
The urgency is that filtered reviews do not come back. Once a review is filtered, it stays filtered. The customer who wrote it does not know - they can still see it on their own account. But nobody else can. If your best customer left you a detailed 5-star review with photos last week and it was filtered because it arrived during a velocity spike, that review is gone permanently.
Audit your profile this week. Compare your known review requests against your displayed review count. If there is a gap, it is almost certainly larger than it was in March. Adjust your pacing before you lose more.
What to Do This Week: The 7-Day Action Plan
The window to adjust before the filtering compounds into a visible rating drop is short. Here is exactly what to do in the next seven days, based on what our data shows works under the new filter.
Day 1: Audit your current review pacing. Check how many reviews your profile received in each of the last four weeks. If any week exceeded 8, you are in the risk zone. Log in to your CRM or review platform and throttle the send rate to a maximum of one request per day.
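The Day 1 velocity check can be scripted as a sliding 7-day window over your review timestamps. The 8-per-week threshold comes from our post-update data above; the function itself is a sketch, not part of any Google tooling:

```python
from datetime import date, timedelta

def weekly_velocity_risk(review_dates, threshold=8):
    """Return every 7-day window in which the review count exceeded
    the threshold. review_dates: list of date objects, any order.
    Each result is a (window_start, window_end, count) tuple."""
    dates = sorted(review_dates)
    risky = []
    for i, start in enumerate(dates):
        window_end = start + timedelta(days=6)
        # Count reviews falling inside the window starting at this review
        count = sum(1 for d in dates[i:] if d <= window_end)
        if count > threshold:
            risky.append((start, window_end, count))
    return risky
```

An empty result means every rolling week stayed at or under the threshold; any tuple returned marks a window where you are in the risk zone and should throttle.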
Day 2: Update your review request copy. Remove any language that directs content ('mention your stylist', 'tell us about your procedure'). Add a photo prompt: 'If you have a moment, we would love a photo with your review.' Keep the ask open-ended and genuine.
Day 3: Run a filtered review audit. Compare your total review request sends from the last 30 days against your new displayed reviews. Calculate the gap. If more than 20 percent of requested reviews are not appearing, your pacing or account-profile mix is triggering the filter.
Day 4-5: Brief your front-of-house team. If you have staff asking for reviews in person (dental front desk, hotel checkout, restaurant servers), update their script. No quotas, no content direction, no tablet hand-offs that produce clustered timestamps. A simple 'We would really appreciate an honest review when you get a chance' is now the gold standard.
Day 6-7: Set up ongoing monitoring. At minimum, check your review count weekly. Compare the number of reviews you know were submitted against what is displayed. If the gap widens, slow your request cadence further. The businesses that will thrive under this filter are the ones that treat review velocity as a metric to manage, not to maximise.
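The Day 6-7 weekly check reduces to a simple retention comparison. This sketch assumes you log two numbers per week - requests sent and new reviews that actually appeared - which is our suggested bookkeeping, not a Google-provided metric:

```python
def should_slow_down(snapshots):
    """snapshots: ordered list of (requests_sent, new_displayed_reviews)
    weekly tuples. Returns True if the retention rate fell in the most
    recent week versus the week before - the cue to slow your cadence."""
    if len(snapshots) < 2:
        return False
    def retention(sent, shown):
        return shown / sent if sent else 1.0
    previous = retention(*snapshots[-2])
    latest = retention(*snapshots[-1])
    return latest < previous
```

A falling retention rate week-over-week means the filter is catching a growing share of your requests, and the cadence should come down before the gap compounds.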
The Bigger Picture: Why Google Is Doing This Now
Google's own data tells the story. In 2025, fake reviews on Google Maps increased 21 percent year-over-year, driven by generative AI making fake review production cheaper and easier. Google responded by investing in Gemini-powered detection that catches patterns before publication rather than cleaning up after the fact. The 292 million reviews blocked or removed in 2025 represent the output of that investment.
But the April 2026 changes go further than fighting fakes. The velocity sensitivity increase and the policy bans on quotas and content direction target legitimate businesses running aggressive but technically honest review programmes. Google is drawing a line: even genuine reviews, collected through genuine interactions, will be filtered if the collection pattern looks manufactured.
This is a philosophical shift. Google is moving from 'are these reviews real?' to 'does this review pattern look organic?' The practical consequence is that the review strategy that wins is now the one that looks least like a strategy. Steady, unprompted, photo-rich, detailed reviews from established Google users - that is the profile the filter rewards. Everything else is friction.
For businesses operating honestly, this is ultimately a positive change. It makes the ecosystem harder to game, which means your genuine reviews carry more weight. But only if you adjust your collection approach to match what the filter now values. The businesses that keep running 2025 playbooks in a 2026 filter environment will watch their ratings erode - not because they are doing anything wrong, but because the rules changed and they did not.

Written by
Emily
SEO & Marketing Lead
Local SEO and AI-search strategist building the structured signals that put BGR Review clients in the answer, not just the index.
