Why Crypto’s Next Security Problem Won’t Be On-Chain
Melanie Mohr

I had a conversation recently that genuinely shocked me.
It was with a founder whose company focuses on protecting system memory - the physical memory chips inside our computers, servers and phones that software uses while it’s running.
Memory is where things actually happen. When you open an app, run a wallet, or use an AI model, the system loads data into memory so it can work. That includes private keys, passwords, AI context and decision logic. You can think of it as a temporary workspace: it only exists while the system is active, but during that time, it holds some of the most sensitive information a system handles.
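To make that concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the keystore blob, the toy decryption), but the mechanic is real: the moment software decrypts a secret in order to use it, the plaintext lives in RAM.

```python
import os

# Pretend this blob came from an encrypted keystore on disk.
encrypted_blob = os.urandom(32)

def decrypt(blob: bytes) -> bytes:
    # Stand-in for real keystore decryption (a fixed XOR, purely illustrative).
    return bytes(b ^ 0x5A for b in blob)

private_key = decrypt(encrypted_blob)

# From here until the process exits (or the bytes are overwritten),
# `private_key` sits in this process's address space in plain form.
# Encryption at rest no longer protects it.
```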
And that’s exactly why attackers are starting to go after it.
If someone can access memory while a system is live, they don’t need to break encryption or crack a blockchain. They can watch or interfere with what’s happening in real time and take what they need. It’s less like breaking into a safe and more like stepping in while the vault door is already open.
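Here is a hedged sketch of what that vantage point looks like on Linux, assuming the attacker already has ptrace rights over the target process (the kernel enforces this). The PID and address are placeholders that would come from /proc/&lt;pid&gt;/maps; this is the shape of the technique, not an exploit recipe.

```python
def read_live_memory(pid: int, start: int, length: int) -> bytes:
    # Requires ptrace permission over the target (same user with a
    # permissive yama ptrace_scope, or CAP_SYS_PTRACE).
    with open(f"/proc/{pid}/mem", "rb") as mem:
        mem.seek(start)          # jump to a mapped address in the target
        return mem.read(length)  # the bytes exactly as the target sees them

# Nothing here breaks encryption: the data is read exactly as the running
# program sees it, while the vault door is open.
```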
Now add AI agents to the picture.
By AI agents, I mean software powered by AI that can act on its own - running tasks, making decisions and interacting with systems without a human constantly giving input.
Attackers are starting to use these AI agents to carry out break-ins. These aren't one-off hacks anymore: the software can keep trying, learning and poking around systems over time.
And this is already happening.
Earlier this year, attackers abused Claude Code, an AI coding tool from Anthropic, in intrusion attempts against roughly 30 organizations, including tech companies, banks, manufacturers and government-related targets. They didn't use rare or highly technical tricks. They used AI agents to quietly move through systems while those systems were running.
And this is where it really clicked for me.
Attacks are shifting away from files and obvious malware and into live systems: the moment when software is running, decisions are being made and secrets are sitting in memory. And AI agents make this faster, cheaper and easier to do at scale.
AI agents themselves are becoming targets. While they’re working, they keep sensitive information and instructions in memory. If that live state is exposed, other software (including malicious AI agents) can watch what’s happening and interfere in real time, influencing what the agent does next.
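A toy illustration of why that matters, with all names made up: the agent's "memory" below is just a Python dict, but in a real deployment it would be model context, tool credentials and queued instructions sitting in RAM.

```python
agent_state = {
    "goal": "rebalance the portfolio within risk limits",
    "pending_action": {"op": "transfer", "to": "TREASURY_ADDR", "amount": 10},
}

def agent_step(state: dict) -> dict:
    # The agent trusts whatever its own state says to do next.
    return state["pending_action"]

# Anything with write access to that live state can redirect the agent
# without touching its code, its files or the chain:
agent_state["pending_action"]["to"] = "ATTACKER_ADDR"

print(agent_step(agent_state))  # an action the operator never intended
```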
In crypto, this hits even harder.
Wallets, validators, bridges, trading bots - all of them depend on systems that temporarily load private keys and signing logic into memory. If that live state is compromised, attackers can trigger perfectly valid transactions using your keys. On-chain, everything looks legitimate… even if neither you nor your AI agent ever intended that action.
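A minimal sketch of why a leaked in-memory key is game over, using HMAC as a stand-in for a real signature scheme like ECDSA or Ed25519. The key, messages and amounts are invented; the point is not.

```python
import hashlib
import hmac

# The signing key lives in RAM for as long as the wallet process runs.
signing_key = bytearray(b"\x01" * 32)

def sign_tx(key: bytes, tx: bytes) -> bytes:
    return hmac.new(bytes(key), tx, hashlib.sha256).digest()

legit = sign_tx(signing_key, b"send 1 ETH to alice")

# A key read out of live memory produces signatures that verify exactly
# like yours; on-chain, nothing distinguishes forged from intended.
stolen_key = bytes(signing_key)  # stands in for a memory read
forged = sign_tx(stolen_key, b"send 1000 ETH to attacker")

# One common mitigation: keep keys in zeroable buffers and wipe them the
# moment they are no longer needed, shrinking the exposure window.
for i in range(len(signing_key)):
    signing_key[i] = 0
```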
What makes this moment especially dangerous is the scale.
AI agents are being deployed everywhere. They run longer, connect to more systems, move more value and operate with less human oversight. That means more secrets sitting in memory, for longer periods of time, while software acts on our behalf.
This isn’t just another security issue. It feels like a cybersecurity tsunami.
The damage starts out of sight, deep under the surface, while everything still looks normal. And once it’s visible, the window to react is already gone. You don’t deal with a tsunami after it hits - you survive it by detecting it early.
I’ve really only just started educating myself on this and I’m still connecting the dots. The conversation that sparked this made me realize how big this topic might be and how little I personally understood before. I’d genuinely love to learn more about this together - especially from people who are closer to the technical and security side of things.
Let's connect.