Let’s face it: the digital world is changing fast—and not always for the better…
Behind the scenes, two of the biggest players in tech, Google and Anthropic, are quietly sounding the alarm.
They’re warning the world that the next wave of cyber-threats will be powered by the same artificial intelligence that is also powering our smartphones, our cars—and yes, our investing dashboards.
If you’re not paying attention, you’re going to get blindsided.
But if you are paying attention, you might just find the opportunity of a lifetime…
The Rise of AI-Powered Cybercrime
In August 2025, Anthropic published a stark “Threat Intelligence” report that revealed something chilling: cybercriminals are no longer simply using AI tools as a hacky shortcut—they’re using them as the core engine of their operations.
One example: a hacking ring dubbed “vibe-hacking” used Anthropic’s Claude Code tool to automate entire campaigns, from network reconnaissance to crafting extortion notes, deploying ransomware, and negotiating ransom demands, all powered by AI.
Another: attackers with almost no coding skill used AI assistance to build ransomware variants and sell them on internet forums for as little as $400–$1,200.
To be clear: the barrier to entry is collapsing…
What used to require big teams and deep expertise can now be done by one person with AI.
Anthropic warned that “agentic AI has been weaponized,” turning what was once an advisory tool into an operational one.
Meanwhile, Google has chimed in, too…
In its “Cybersecurity Forecast 2026” report, Google Cloud’s security teams write that 2026 will be the year AI doesn’t just help cybercrime, but defines it.
Things like prompt injection, AI-enabled social engineering (voice-cloning executives over the phone!), and “shadow agents” (unauthorized AI bots inside your company) are highlighted as big upcoming threats.
“2026 will usher in a new era of AI and security,” the report plainly says.
So what does this mean in practice?
It means that cyberattackers will increasingly start with AI: they’ll scale faster, automate more, and rely less on human skill.
If you think phishing emails are bad now—just wait for AI-generated voice calls that sound like your boss telling you to wire money right now.
And if that doesn’t scare you, consider that attackers may pivot from apps and endpoints into virtualization infrastructure and cloud layers, areas traditionally seen as blind spots.
Real World Examples You Can’t Ignore
We already have proof of movement in this direction.
- The Anthropic “vibe-hacking” operation: The target list reportedly spanned healthcare, government, religious and emergency services across at least 17 organizations. The AI wasn’t just assisting—it was orchestrating.
- Google’s fraud & scams advisory shows how AI is being used today to fuel scams: fake customer-support websites, toll-road scams, malvertising, and heavier use of social engineering.
- The broader trend: Reports show that phishing campaigns delivering stealer malware jumped significantly in 2024, and AI-generated video and deepfakes now play a major role.
Put it together and the message is clear…
We’re already in the early phase of an AI crime cascade—and the worst is likely still ahead.
But Yes—There’s Hope… And Opportunity
Now, I promised a positive spin. Because this story isn’t just about danger—it’s about the flip side of that danger: defense, innovation, and investment…
As attackers embrace AI, defenders are doing the same.
That means companies building next-gen cybersecurity tools—AI-powered defenses, agentic security operations centers (SOCs), identity systems designed to manage and monitor AI agents—are going to be hot.
Google’s forecast highlights that security analysts will no longer drown in alerts—they’ll orchestrate AI agents that triage, correlate, summarize, and even recommend actions.
Humans, in essence, become strategic overseers rather than data janitors.
Think about this: every new kind of attack demands a new kind of defense.
Voice-cloning scams? That means voice-authentication checks, deepfake detectors.
AI agents turning into criminals? That means new identity frameworks, attestation services, anomaly detection.
Attackers going after virtualization control planes? Defense tools have to follow.
Crypto & on-chain attacks? That means blockchain forensics, crypto-wallet surveillance, DeFi-security tools.
Google’s forecast frames all of this as a real, near-term business and investment opportunity.
So while yes, the threat is severe, the opportunity for early-mover investors is substantial, too.
Security spending tends to be recession-resistant; when the attack surface doubles thanks to AI, the cost of defense rises, and so does spending on security.
For anyone looking to position themselves ahead of a wave, this could be the moment.
What You Should Be Thinking About Right Now
If I were talking directly to you (and I am), here’s what I’d say: Don’t wait until next year to wake up to this. Start thinking now.
- Assume your company, your investment portfolio, your personal profile will be targeted with AI-enabled attacks.
- Assume that attackers will use AI to automate and scale attacks in the next 12–18 months.
- Invest time and resources—or invest capital—into defense technologies that are built for this reality.
- Watch for companies written off as “cybersecurity niche” that suddenly become central because of AI-driven vulnerabilities.
- Keep an eye on regulation and government responses—there will be new frameworks around AI misuse, identity for AI agents, etc.
- For investors: evaluate cybersecurity firms not just for traditional threats (malware, firewalls, endpoints) but for next-gen threats (AI abuse, agentic SOCs, identity for machine actors, blockchain forensics).
The Bottom Line
Let’s be blunt here…
AI-powered cybercrime is scary. It’s more automated, more scalable, and more efficient than anything we’ve seen.
The fact that both Anthropic and Google are actively warning about it means that this isn’t hypothetical. It’s already happening.
And in 2026, according to Google, it could become business as usual for criminals.
But here’s the thing: history, investing, and technology all tell us that when one side of the ledger gets disrupted, the other side often gets the opportunity.
The defenders get smarter. The new tools get funded. The companies that help protect the rest of the world get a moment.
And if you’re one of the first in line, you might just ride that wave.
So yes, there’s risk. Big risk. But with risk comes reward…
If we position ourselves now—thinking about the architecture of our investment portfolio, our companies, our personal cyber posture—we might just win the next decade of cybersecurity investing.
Because when the bad guys start using AI as a weapon, the good guys will use it too—and those building the defenses will be the ones making the profits.
So, keep your eyes open. Keep your wits sharp. And let’s stay one step ahead of the hackers and the markets.