Here's something worth knowing:

The phishing email that gets opened — the one that tricks a real employee — doesn't look like spam anymore. It looks like an email from your accountant. It references your actual invoice number. It calls the recipient by their first name and mentions the project they're working on.

That's not a coincidence. That's AI.

What Changed (and When)

For years, phishing emails were easy to spot. Bad grammar. Weird domain names. Requests that made no sense.

Then, starting around 2023, something shifted. Attackers got access to the same large language models everyone else was using — and they started using them to write better emails. The result:

AI-generated phishing attacks surged 1,265% between 2023 and 2025. (Brightside AI, 2025)

82.6% of phishing emails are now created using AI. (Programs.com, 2025)

These aren't enterprise-only numbers. The same tools that help your marketing team write faster are helping criminals run attacks at scale — against dental practices, law firms, medical offices, and small businesses of every kind.

How It Actually Works

Here's a scenario that happened to a real dental group in 2025:

A front-desk coordinator received an email that looked like it came from their dental software vendor. It referenced their practice name, their account number, and mentioned an "urgent billing issue." The link went to a nearly identical login page.

She entered her credentials. The attackers had access within minutes.

What made it work:

  • The email was grammatically perfect

  • The domain was one letter off from the real vendor (hard to spot in a busy inbox)

  • The urgency made her skip her usual caution

The AI didn't "hack" anything. It just made the social engineering convincing enough to work.

Why Small Practices Are the Target

You might think: "We're a 6-person law firm. Why would anyone bother?"

Two reasons:

  1. Volume. AI lets attackers send thousands of personalized emails for almost no cost. Small businesses aren't individually targeted — they're swept up in automated campaigns that look personal.

  2. Defenses. Larger companies have security teams, email filtering, and training programs. A 10-person dental office typically has... a Gmail account and good intentions.

The FBI flagged this specifically in early 2025 — calling these "the most sophisticated ever AI-powered phishing attacks" — and noted that smaller professional services firms are in the crosshairs.

Three Things You Can Actually Do This Week

These are practical. No security team required.

  1. Train your front desk on one thing: domain scrutiny.

Before clicking any link in an email — especially one involving payment, login, or "urgent" action — look at the actual domain in the sender's address. Not the display name. The actual email address.

One character difference. That's all it takes. Make this a 10-minute team huddle this week.
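If someone on your team (or your IT person) wants to automate that check, the "one character off" test is easy to script. Here's a minimal sketch in Python — the trusted domains are made-up placeholders, and in practice you'd list the real vendors your office deals with:

```python
from email.utils import parseaddr

# Hypothetical examples -- replace with the vendors your practice actually uses.
TRUSTED_DOMAINS = {"dentrix.com", "quickbooks.com"}

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: how many single-character edits separate two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def check_sender(header_from: str) -> str:
    """Flag senders whose domain is a near-miss of a trusted domain."""
    _, addr = parseaddr(header_from)          # ignores the display name
    domain = addr.rsplit("@", 1)[-1].lower()  # look at the actual domain
    if domain in TRUSTED_DOMAINS:
        return "ok"
    for trusted in TRUSTED_DOMAINS:
        if edit_distance(domain, trusted) <= 2:
            return f"SUSPICIOUS: {domain} looks like {trusted}"
    return "unknown sender"

# A lookalike with the "i" swapped for a "1" -- exactly the trick in the story above.
print(check_sender('"Dentrix Billing" <support@dentr1x.com>'))
```

The key detail mirrors the advice above: the script deliberately ignores the display name and checks only the address's domain, because the display name is whatever the attacker typed.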

  2. Set up a simple verification rule for wire transfers and payments.

No payment, transfer, or credential should be processed based on an email alone — even if it looks like it came from your accountant, your landlord, or your software vendor.

Rule: if an email asks for money or login info, verify by phone using a number you already have on file. Not the number in the email.

This single rule stops most attacks cold.

  3. Check your email authentication settings.

If your practice uses Google Workspace or Microsoft 365, ask your IT person (or Google) whether DMARC, DKIM, and SPF are configured. These aren't buzzwords — they're technical settings that prevent criminals from impersonating your domain to send fake emails from you to your clients.

If you have no idea what those are, that's okay. It means they probably aren't set up. That's worth fixing.
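For the curious: a DMARC record is just a short text string published in your domain's DNS (you can see yours with `dig TXT _dmarc.yourdomain.com` or any online DMARC checker). The sketch below parses one and reports the policy — the example record string is hypothetical, not fetched from a real domain:

```python
def dmarc_policy(txt_record: str) -> str:
    """Pull the p= policy out of a DMARC TXT record string.

    Policies mean roughly: none = monitor only, quarantine = send fakes
    to spam, reject = refuse fakes outright.
    """
    tags = {}
    for part in txt_record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key.lower()] = value.strip()
    if tags.get("v", "").upper() != "DMARC1":
        return "no valid DMARC record"
    return tags.get("p", "missing policy")

# Hypothetical record, like what `dig TXT _dmarc.yourdomain.com` might return.
record = "v=DMARC1; p=quarantine; rua=mailto:reports@example.com"
print(dmarc_policy(record))  # quarantine
```

If the answer comes back "none" (or there's no record at all), your domain is easy to impersonate — which is exactly the fix the step above recommends making.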

The Bigger Picture

AI didn't create the phishing problem. It just made a well-understood attack dramatically cheaper and more effective.

The defenses haven't fundamentally changed either — it's still about slowing down, verifying, and creating a culture where "I got a weird email" is something employees feel comfortable saying out loud.

The difference now is the margin for error is smaller. An email that used to look suspicious no longer does.

That's worth knowing.

Quick Hits

Google's new AI-powered spam filter is catching more phishing than ever — but only for Gmail users. If you're still on a shared hosting email (GoDaddy, Bluehost, etc.), you're not getting that protection.

Voice phishing ("vishing") is rising. AI voice cloning means a caller can now sound like your CEO or your accountant. 3 in 4 AI voice scam victims lose money. If a caller asks you to do something financial, call them back on a known number.

The World Economic Forum's 2026 Global Cybersecurity Report found that 94% of security professionals see AI as the top driver of change in cyber threats this year. And 77% reported increases in fraud and phishing incidents in 2025.

About The Rampart Report

Written by Grant Heckman — cybersecurity engineer with 5+ years in the field, working with businesses that don't have a security team but still need to stay safe.

Every issue: one real threat, explained plainly, with things you can actually do.

No jargon. No vendor pitches. Just useful.

Reply to this email anytime — I read every one.

Keep reading