Facebook sympathy scams are out of control, with fake animal rescues and “shelter” pages that use guilt to build reach for free, then funnel your donations through their PayPal links or off-platform sites. Every day, real shelters are impersonated, donors get misdirected, and reporting often fails even when legitimate organizations and users confirm the page is fake. The damage is bigger than most people think because many victims never realize they were scammed in the first place and, therefore, never report it. Based on public reporting and conservative modeling, Americans likely lose tens of billions per year to scams Facebook materially enables, with sympathy scams among the most effective and least obvious. This post shows you their exact playbook, walks through real-world examples, and explains why these scam pages persist, then gives you a practical checklist so you don’t get scammed yourself.

These two pages are not the same organization

What Do Facebook Sympathy Scams Look Like When They Hit Your Feed?
This is what a Facebook sympathy scam looks like in real life. Imagine you’re scrolling through Facebook Reels or your feed at night, half paying attention, and you see what appears to be an elderly man crying into the camera, or maybe wearing a funny hat and asking you to listen to his bad joke. This man looks tired. He looks defeated. He begs you to stay and watch his video, saying, “Please stay nine seconds so I don’t have to shut down my cat shelter.” You feel instantly sorry for him because he resembles your grandpa, or someone who, at that age, shouldn’t be in this situation. The entire vibe is urgent, and you feel sad and guilty for him as you hear his story while a depressing piano and song chorus plays in the background. You aren’t being sold a crypto course or a way to make a quick buck. You’re being asked, as a decent human, to stay for nine seconds.
Then you look at the hundreds or even thousands of comments on the video, and they are exactly what you would expect from decent people. “I donated!” “How can I help?” “Please save them!” “I wrote a long comment so this will get better coverage and more views,” and so on. Mixed in are comments from the scammer with a PayPal link, a PayPal email, or a “donate here” link that sends you off Facebook to a site you’ve never heard of, on a domain that is only a month or two old.



And once you see one of these, you start seeing them everywhere.
Why Do Facebook Sympathy Scams Look So Identical?
Most of these “rescue” shelters create posts every day, and sometimes every few hours, which is amazing given that they claim to be no-kill rescues yet seem hellbent on euthanizing their animals in every post.
Something else interesting I found is that many of these scammers use the same elderly man in their reels and videos. I don’t know if it is the same group or the same person, or if they are just uncreative, or perhaps they know that this particular elderly man is effective at making people pay attention, but videos of him are everywhere.

So any time I see this person, I immediately roll my eyes because I know it will be a scam.
Additionally, within each of these videos are comments from the scammer with a link for sending donations, or comments telling you to go to their bio page to send money. Aside from PayPal links in their comments or bio pages, some scammers use links that guide victims to an offsite webpage. These pages ask you to buy random tchotchkes, claiming that purchasing their Temu-bought items will help save their cat/dog/horse shelter or buy food for their starving animals.

Additionally, to dial up the guilt even further, they overlay fake, mean “bullying” comments on footage of the crying man, supposedly upset over such cruelty, so you feel an even greater need to help.


Some of the cheap items these scammers claim to have handmade and be selling (if they even really sell them) are so outrageous that it hardly seems plausible anyone would believe it.

Does anyone really think a man who appears to be in his 80s is making handmade “girlie” AirPod cases? Yet if you look at the comment section, you’ll see one comment after another telling him not to listen to that “mean person” while giving money to help save his “shelter,” with nobody noticing that it is a scam.


Another clever tactic I keep seeing from these scammers is delayed monetization. At first, their page won’t show any PayPal link or obvious payment method. Instead, the scammer asks people to follow the page, share posts, and leave likes and comments to “help the shelter get seen.” That builds reach and social proof while keeping the page looking “clean” to casual viewers and, potentially, to automated fraud checks.
Once the page has enough followers and momentum, the PayPal link suddenly appears, usually dropped into the comments or added to the bio. That’s the cash-out moment. They’re no longer trying to build trust; they’re trying to convert as many donations as possible, as fast as possible, before anyone catches on or the page gets reported.
If you’ve never seen one of these videos, it’s easy to shrug it off and assume Facebook will catch it, but that’s the scary thing. This isn’t some one-off by a small group of people. I’ve documented literally hundreds of these scams that all seem to use the same format and template. And what’s worse is that once Facebook’s algorithm thinks you’re the kind, generous sort of person who watches, comments, or cares, it will start feeding you more of the same. For me, when I saw that Facebook’s report feature didn’t take these videos and pages down, I tried warning people in the comment sections.
Unfortunately, that’s how I ended up seeing them constantly.
By the way, this isn’t just “rescues,” either. The same emotional mechanics show up in other sympathy scams too, such as veteran hardship reels, “grandma’s shop is closing,” and the fake bullying overlays (see images above) that make you want to defend the victim or “show that bully” by buying whatever he is supposedly making. The story changes, but the sympathy scam hook is always the same: you feel guilt and urgency, paired with a frictionless way to send money quickly to a person who needs it.

Here’s what really makes this different from all the other scams most of us already know about. A fake shelter doesn’t have to convince you that you’re getting something in return. There’s no “package” you’re waiting on that never arrives or arrives with a brick inside. There isn’t some guy selling a get-rich-quick course about knowledge he claims made him millions, which for some reason he still needs to sell on Facebook. No, the transaction is the act of compassion itself. And that’s exactly why these scams persist for so long: the victims never realize they were scammed, and many never report it.
So when I started seeing these scams of shelters being impersonated and soliciting donations, I did what Facebook recommends and tells you to do. I reported it. I warned the real organization. I even escalated it through the only Meta support channel I could find. Yet nothing worked.
What Happens When You Report a Fake Rescue Page to Facebook?
When you find a page that looks like a fake rescue or another type of scam, Facebook gives you the same basic advice everyone already knows: report it. So I did that. Over and over and over again. I reported reels. I reported pages. I reported profiles. I reported obvious donation funnels. In every single instance, I got the same response back almost immediately: “We didn’t remove the video.” Sometimes it’s so fast you can tell a human didn’t look at it, and other times I never got a response back about a profile and it was never removed. If I hit the button to have them review it again, I got the same response that it wasn’t removed.
The thing that is most frustrating is that it is verifiably fake. Either the page is the real organization or it isn’t.
When It’s a Confirmed Impersonation, It Still Doesn’t Get Cleanly Fixed
In one case, I reached out to a legitimate shelter and told them what I was seeing: a page using their identity and soliciting money. They confirmed it, and they weren’t surprised. In fact, they’d already dealt with it. They told me they got the impersonator removed (at least they thought so), and then it seemed to come back anyway.

The detail about the “reactivation” is important because even if Facebook removes one page, the scam doesn’t necessarily end. My guess is that a few things could be happening behind the scenes:
- It could be a brand new clone page that looks identical.
- The scammer could have blocked the real shelter or the people reporting it, so it looks “gone” from one account but not from others.
- Or Facebook’s enforcement could be inconsistent, meaning the page gets restricted in some way but not fully removed.
Regardless, the end result is the same: the scam continues, and normal users still see it.
I Escalated to a Human at Meta, and It Still Didn’t Change Anything
Most people can’t even find a way to talk to Meta support. The only reason I could is that I went through the ads support channel, since I have a small business account, which is basically the only place you can get a real person in chat. Normally, this chat is reserved for issues with ads you are paying for, but I thought I would bring this issue up with them.
I gave them the links. I explained the impersonation. I explained the PayPal donation funnel. I showed them the fake videos and how different fake shelters are even using the same dogs. The support rep told me they “took the necessary steps” and asked me to keep reporting and to have friends and family report too. I thought, “Great, they are actually going to do something.” [crickets]

Over a week later, the pages I flagged were still up.
At this point, what else could a person do? I didn’t just complain about it. I did the thing Facebook tells you to do and in fact, went the extra mile that most users wouldn’t do. I got a real shelter to confirm it. I escalated to a human at Meta. And the end result was the same. Nothing happened.
How Big Is Facebook’s Scam Problem According to Internal Reporting?
I know numbers can make people’s eyes glaze over, so I’m going to get down to brass tacks and keep this short. You’ll see shortly that you only need a few stats to understand why these sympathy scams feel endless and why “just report it” doesn’t work the way it should.
Reuters published an investigation based on internal Meta documents and presentations. I’m not guessing here and I’m not relying on Facebook PR statements. This is what investigators reported Meta’s own people were saying behind closed doors.
The Three Major Numbers From the Reuters Investigation
1) Meta internally projected about 10% (10.1%) of revenue tied to scam and other prohibited ads.
That projection works out to roughly $16 billion for 2024. Meta pushed back on how precise that estimate is and called it overly broad, but it didn’t give a clean replacement number either. Either way, we’re not talking about pocket change. To clarify, that 10.1% figure is about advertising revenue tied to scam and prohibited categories, based on internal estimates described in the Reuters reporting. It’s not a measure of what people lose to scams, and it doesn’t capture the money flowing through organic scam pages that push PayPal links without buying ads. It also isn’t only “the scams Meta caught and removed.” It’s an internal estimate of revenue associated with ads that violate policy or fall into prohibited categories, which Meta said was overly broad.
2) Meta’s internal materials described the scale as basically industrial.
Reuters reported internal figures saying users were shown roughly 15 billion “high-risk” scam ads per day, plus about 22 billion “organic” scam attempts per day. “Organic” means the free stuff: pages, profiles, reels, comments, DMs, groups. The part that doesn’t require scammers to buy ads at all. The organic scams users see (the 22 billion) work out to about 900 million scams shown to users per hour, or roughly 250,000 per second. Even if those numbers are directionally off, you still don’t get to a “small problem.”
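Those rates aren’t hard to double-check. Here’s a quick back-of-the-envelope script that converts the reported figures; the $160 billion revenue baseline is my rough approximation of Meta’s 2024 revenue, used only to show how the ~$16 billion figure falls out of the 10.1% estimate.

```python
# Back-of-the-envelope check on the figures quoted above.
# The ~$160B revenue baseline is an approximation for illustration only.

organic_scams_per_day = 22_000_000_000   # ~22 billion organic scam attempts/day

per_hour = organic_scams_per_day / 24        # ~917 million per hour
per_second = organic_scams_per_day / 86_400  # ~255 thousand per second

approx_2024_revenue = 160_000_000_000          # rough revenue baseline
scam_ad_revenue = 0.101 * approx_2024_revenue  # ~$16 billion at 10.1%

print(f"~{per_hour / 1e6:,.0f} million scams shown per hour")
print(f"~{per_second:,.0f} scams shown per second")
print(f"~${scam_ad_revenue / 1e9:.1f} billion in scam/prohibited ad revenue")
```

Even rounding aggressively, the daily figure alone puts you in the hundreds of thousands of scam impressions every second.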
3) Reporting appears to be broken at the point where people use it.
Reuters also mentioned in their article that an internal doc said Facebook and Instagram users were filing around 100,000 valid fraud reports per week, and Meta ignored or wrongly rejected 96% of them. So when you press the button to report something that looks like fraud, 96% of the time Facebook rejects or ignores it. That lines up with what I personally saw, except for me it was 100% of the time. Any time I caught someone scamming and reported it, I got an automated reply that looks like this (below):

A 96% reject rate is not an “oops, we missed a few.” That’s “almost nothing happens.” And you wonder why these malicious actors and fraudsters are so persistent on Facebook?
Why This Matters
If reporting doesn’t reliably work for regular users, and it doesn’t reliably work even when impersonation is confirmed, then the platform has essentially created a scam environment where the burden falls on the victims and the targets.
And sympathy scams are uniquely brutal because the victims are good people trying to help. The victims also aren’t just the people who donated; the real shelters suffer too. Every fake page siphons attention and potential donations away from legitimate rescues that actually need them. And once donors realize they got tricked, a lot of them won’t donate again, at least not online, which hurts everyone.
The people who were willing to give money weren’t trying to buy some sketchy product. They weren’t gambling. They were trying to donate to what they thought was a shelter trying to save animals. When the reporting system fails, the scam keeps going unabated.
The Detail That Should Bother All of Us
When researching this, I had the big question many reading this would have at this point. Why would only 4% of reported scams be confirmed by Facebook’s controls? How are the controls implemented to allow such a small number, and why? Well, Reuters described internal Meta documents saying Meta’s enforcement systems only take strong action against advertisers when their fraud models are extremely confident. And when the system suspects an advertiser might be a scammer but isn’t “sure enough,” Meta’s approach wasn’t always “remove the ad.”
Instead, Reuters reported that Meta used something called a “penalty bid” system. The penalty bid system means that if Meta thinks an advertiser is likely running scam ads but the advertiser doesn’t meet the internal confidence threshold for a ban, Meta can charge them more to run the ads anyway. Read that last sentence again if you thought you read it wrong. The normal expectation is that if you think something is a scam, you take it down, or at the least have someone manually review it. This “charge more” approach is more like “we’ll let you keep running the suspected scam, but we’re going to charge you more to continue it.” Meaning, if an ad looks, say, 95% likely to be a scam but the ban threshold is 96%, Meta just charges the scammer a little more to keep running it.
Meta’s explanation is that higher prices reduce distribution by making the ads less competitive in the auction.

That might be true in theory, but the outcome still means the platform can end up generating additional revenue from advertisers it believes are likely scammers, instead of simply removing them.
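To make the incentive problem concrete, here’s a minimal sketch of how a threshold-based penalty-bid policy could behave. This is my own illustration of the mechanism as Reuters described it, not Meta’s actual code; the thresholds and the 1.5× multiplier are invented for the example.

```python
# Hypothetical sketch of a "penalty bid" policy. All numbers are invented
# for illustration and do not reflect Meta's real thresholds.

BAN_THRESHOLD = 0.96      # fraud-model confidence needed to remove the ad
PENALTY_THRESHOLD = 0.50  # above this, the ad is "suspected" but not banned
PENALTY_MULTIPLIER = 1.5  # suspected scammers simply pay more per auction

def handle_ad(base_bid: float, scam_score: float) -> tuple[str, float]:
    """Return (decision, effective bid) for an ad given the model's score."""
    if scam_score >= BAN_THRESHOLD:
        return ("removed", 0.0)  # confident enough: the ad comes down
    if scam_score >= PENALTY_THRESHOLD:
        # Not confident enough to ban -- the ad keeps running,
        # and the advertiser is just charged more for it.
        return ("runs_with_penalty", base_bid * PENALTY_MULTIPLIER)
    return ("runs_normally", base_bid)

# An ad that is 95% likely to be a scam still runs -- at a higher price.
print(handle_ad(1.00, 0.95))  # ('runs_with_penalty', 1.5)
```

Notice that the 0.95-score ad never hits a removal path; under this policy, the platform’s only response to a near-certain scam is to collect more money from it.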
Another thing I kept asking myself about is the 10.1% ad revenue from scam ads they know about versus the 96% of valid fraud reports being wrongly rejected or ignored. If internal docs say 96% of valid fraud reports go nowhere, how much scam activity does Meta “know about” but still isn’t stopping, and is any of that reflected in the 10.1% scam/prohibited ad revenue estimate? We can’t know for sure because Meta hasn’t disclosed the methodology.
The Changes Facebook Should Make to Stop Sympathy Scams
1) Stop Donation Links From Unverified “Rescues” and “Shelters”
If a Facebook page claims to be a rescue, shelter, or charity and asks for money, it should not be allowed to post PayPal.me links, PayPal emails, Cash App, Venmo, crypto wallets, or off-platform donation links unless the organization is verified.
This one change would kill a huge portion of the fake rescue ecosystem overnight. Real shelters can verify. Scammers won’t.
2) Create a Real Impersonation Fast Lane (And Actually Enforce It)
Impersonation should be the easiest category to fix because it’s provable. Either the page is the shelter or it isn’t. Facebook needs a dedicated workflow for shelters and nonprofits to “claim” impersonation, and once a legitimate shelter flags a page:
- the impersonator page should be immediately removed, not “reviewed later.”
- and the same scammer should not be able to reappear with a clone the next day.
If video game companies can detect cheaters who keep reappearing with new accounts, Meta can detect repeat scammers who keep reappearing with new pages. IP blocking by itself is weak, but device signatures, behavioral patterns, and infrastructure reuse (same PayPal handles, same domains, same video library) are not.
3) Stop Scam Ads, Period (No “Penalty Bid” Workarounds)
If Facebook’s systems think an advertiser is likely running scam ads, the correct response is not “charge them more.” The correct response is to stop the ads and require verification and human review.
Charging suspected scammers more still keeps the scam running. It still exposes users. It still lets fraud operate as long as the economics work.
4) Detect the “Build an Audience First, Add PayPal Later” Trick and Shut It Down
The tactic I see constantly is for the suspected page to look “clean” while it grows, and once it has followers, the PayPal link suddenly appears and the page starts cashing out.
Facebook can detect this easily because it tracks page edits and links. If a page starts posting rescue content, grows fast, and then adds money links, that should trigger an immediate enforcement review and removal if it can’t verify as legitimate.
This is not subtle, and it’s not hard to detect. It’s just not being treated as the fraud pattern it is.
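To show how mechanical this pattern is, here’s a rough sketch of the kind of heuristic a platform could run over its own page-edit history. The event format, follower cutoff, and day threshold are all assumptions I made up for illustration.

```python
# Hypothetical delayed-monetization heuristic. The edit-log format and
# the thresholds below are invented for illustration.

PAYMENT_DOMAINS = ("paypal.me", "paypal.com", "cash.app", "venmo.com")

def flag_delayed_monetization(edits, min_followers=5_000, min_days=30):
    """edits: list of (days_since_creation, follower_count, link_or_None).
    Returns True if the page's first payment link appeared only after it
    had built an audience -- the 'look clean, then cash out' pattern."""
    for days, followers, link in edits:
        if link and any(domain in link for domain in PAYMENT_DOMAINS):
            # First money link found: was it added late, to a big audience?
            return days >= min_days and followers >= min_followers
    return False  # no payment link ever added

# A page that grew "clean" for 45 days, then dropped a PayPal link:
history = [(1, 50, None), (20, 2_000, None), (45, 12_000, "paypal.me/savethecats")]
print(flag_delayed_monetization(history))  # True
```

A real system would obviously use richer signals, but the point stands: the “grow first, monetize later” sequence is visible in data the platform already records.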
5) Shut Down Scam Networks by Linking Their Infrastructure (PayPal Handles + Domains)
Scammers reuse the same PayPal handles, the same payment emails, and the same off-platform domains across multiple “rescues” and sob-story pages. Facebook has the data to see that reuse across the entire platform.
When the same PayPal link or domain appears across multiple pages pretending to be different shelters, Facebook should treat it as a scam network and remove it at the infrastructure level, not just one page at a time.
If Facebook only plays whack-a-mole with individual pages, the scammers will keep winning.
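This kind of linkage is trivial to compute when you hold the data. Here’s a sketch of grouping pages by a shared payment handle; the page names and handles below are hypothetical examples, not real accounts.

```python
# Hypothetical network-linking sketch: group "different" rescue pages
# that reuse the same payment handle. All names and handles are made up.

from collections import defaultdict

def find_scam_networks(pages):
    """pages: list of (page_name, payment_handle) pairs.
    Returns handles shared by more than one page -- a strong signal
    that the 'separate' shelters are one operation."""
    by_handle = defaultdict(list)
    for name, handle in pages:
        by_handle[handle].append(name)
    return {h: names for h, names in by_handle.items() if len(names) > 1}

pages = [
    ("Happy Paws Rescue", "paypal.me/helpcats1"),
    ("Sunset Cat Haven", "paypal.me/helpcats1"),   # same handle, "different" page
    ("Legit County Shelter", "paypal.me/legitshelter"),
]
print(find_scam_networks(pages))
# {'paypal.me/helpcats1': ['Happy Paws Rescue', 'Sunset Cat Haven']}
```

The same grouping works for off-platform domains or reused video hashes, which is exactly why infrastructure-level takedowns beat removing one page at a time.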
6) Fix the Report Failure Rate
Lastly, I want to say that a 96% failure rate on valid fraud reports is not a bug, it’s a business decision. A serious platform would treat fraud reporting like emergency response with fast triage, real review, and real outcomes. If Meta wants credibility, it needs to prioritize people over profits and prove it with metrics.
Final Thoughts
At the end of the day, this isn’t complicated. Facebook can stop a lot of this if it decides to. It can verify real shelters, remove impersonators quickly, and shut down the repeat offenders who keep reappearing with the same PayPal handles and the same recycled content. If it can’t get a handle on something as provable as impersonation, then the reporting system is just there for looks.
Until Facebook fixes this, the burden stays on regular people to slow it down. If a page is asking for money, take 60 seconds to verify who they really are. Donate through a real shelter’s official website. Don’t send money to PayPal links in a bio just because a reel made you feel guilty.
These scams work because they prey on the best part of people. Compassion. To me, that’s the lowest kind of theft. It’s the same moral category as fake psychics who prey on grieving families. If we’re going to have any chance of cleaning this up, we have to stop treating it as “spam” and start treating it like what it is – organized fraud using Facebook as the distribution system.

