There Should Be Nothing to Harvest
A compromised Bitwarden CLI harvested SSH keys, cloud credentials, and npm tokens from 334 developer machines. The real problem isn't how the malware got in. It's that every secret was sitting there as a plain file, waiting to be read.
Yesterday, a compromised version of Bitwarden's CLI harvested SSH keys, AWS credentials, npm tokens, environment variables, shell history, and Git secrets from 334 developer machines.
Today it was Bitwarden. Last month it was Axios. Before that, Checkmarx. Tomorrow it'll be a VS Code extension, or Acrobat, or a Homebrew formula, or a Docker image. The vector changes weekly. The result is always the same.
The malware lands. It reads ~/.ssh/. It reads ~/.aws/credentials. It reads ~/.npmrc. It reads ~/.git-credentials. It reads shell history, environment variables, browser password stores. It packages everything up and sends it to a C2 server.
And it works. Every time.
The harvest
The Bitwarden payload — a 10 MB obfuscated file called bw1.js — didn't try to crack any encryption. It didn't need to. Here's what it collected, as documented by Socket and Aikido:
- SSH keys and host fingerprints
- AWS, GCP, and Azure cloud credentials
- npm authentication tokens
- Git credentials and remote URLs
- Environment variables
- Shell history
- Claude Code authentication and MCP configurations
Then it used the stolen npm tokens to republish other packages the victim maintained, spreading itself further. Victims became vectors.
None of this required breaking encryption. Every one of these secrets was a file on the filesystem, readable by any process running as the user.
This is not a Bitwarden story
Bitwarden's vault encryption wasn't breached. Their zero-knowledge architecture held. The malware never touched the vault.
It didn't have to.
The vault protects what's inside it. But SSH keys were never in the vault. AWS credentials were never in the vault. npm tokens, Git credentials, API keys in .env files — none of these live in password managers. They live in dotfiles, in plaintext, on every developer's machine.
The attacker understood this. The vault is a locked safe in a house where every drawer is open.
The real attack surface
Open a terminal right now. Look at what's on your machine.
- ~/.ssh/id_ed25519 — your private key. Plaintext file.
- ~/.aws/credentials — your cloud access. Plaintext file.
- ~/.npmrc — your publish token. Plaintext file.
- ~/.git-credentials — your repo access. Plaintext file.
- ~/.env in a dozen project directories — API keys, database passwords, signing secrets. All plaintext files.
Any process running as your user can read all of these. No privilege escalation needed. No exploit required. Just cat.
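The read really is that trivial. Here's a minimal sketch in TypeScript (Node) that checks which of the locations above are readable by the current user; the path list mirrors the files named in this section, and the check is a plain permission test, no exploit involved:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Common plaintext credential locations on a developer machine.
// Any process running as the user can attempt these reads directly.
const CANDIDATES = [
  ".ssh/id_ed25519",
  ".aws/credentials",
  ".npmrc",
  ".git-credentials",
];

// Returns the candidate files under `home` that the current user can read.
export function readableSecrets(home: string): string[] {
  return CANDIDATES.map((rel) => path.join(home, rel)).filter((p) => {
    try {
      fs.accessSync(p, fs.constants.R_OK); // just a permission check
      return true;
    } catch {
      return false;
    }
  });
}

console.log(readableSecrets(os.homedir()));
```

Run as your own user, this prints every one of those files that a malicious dependency could slurp with a single fs.readFileSync call.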
This is the default developer setup in 2026. We put our passwords in an encrypted vault and leave everything else in the open.
The wrong question
After every supply chain attack, the industry asks the same question: how do we prevent malware from getting in?
Better CI/CD security. Code signing. Dependency scanning. Sandboxed runtimes. These are all good. None of them are sufficient. The attack surface is too broad. There are too many vectors — package managers, browser extensions, IDE plugins, OAuth apps, compromised build tools. You cannot seal every entry point.
The right question is: when malware inevitably gets execution on a developer's machine, what does it find?
If the answer is "hundreds of plaintext credentials in predictable filesystem locations," no amount of supply chain hardening matters. You're playing defense on a field where the goal is wide open behind you.
There should be nothing to harvest
The fix isn't better malware detection. The fix isn't sandboxing npm install. The fix isn't a faster incident response time.
The fix is: secrets should not exist as files on disk.
SSH keys derived from hardware at the moment of authentication — not stored in ~/.ssh/. Cloud credentials issued per-session from a hardware-bound identity — not written to ~/.aws/. API tokens scoped, ephemeral, and hardware-gated — not sitting in .env files.
When a credential only exists inside a hardware security module and in ephemeral process memory during use, there is nothing for malware to read. No file to exfiltrate. No dotfile to scrape. The process runs, it finds nothing, it moves on.
This isn't theoretical. Hardware-bound credentials exist today. WebAuthn PRF can derive cryptographic keys from a physical authenticator tap — keys that never touch the filesystem. The technology is here. The industry just hasn't adopted it as the default.
What to do now
If you were affected by the Bitwarden CLI compromise:
- Rotate every credential on the machine — SSH keys, cloud tokens, npm tokens, API keys, everything in dotfiles and env vars
- Check if any npm packages you maintain were republished
- Audit GitHub activity and CI/CD workflows for unauthorized changes
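For the republish check, the npm registry's packument for a package (fetched from https://registry.npmjs.org/ plus the package name) includes a time map of version to publish timestamp. A small helper along these lines can flag anything published inside the compromise window; the Packument type here is trimmed to the one field used:

```typescript
// Minimal slice of an npm registry packument: the "time" map records an ISO
// timestamp per published version, plus "created" and "modified" entries.
type Packument = { time: Record<string, string> };

// Returns the versions published after `cutoff`.
export function versionsPublishedSince(doc: Packument, cutoff: Date): string[] {
  return Object.entries(doc.time)
    .filter(([version]) => version !== "created" && version !== "modified")
    .filter(([, iso]) => new Date(iso) > cutoff)
    .map(([version]) => version);
}
```

Fetch the packument for each package you maintain and pass the JSON in; any version stamped after the compromise began that you didn't publish yourself is a red flag.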
If you weren't affected, the action is the same. Look at your machine. Count the plaintext secrets. Ask yourself what happens when — not if — something malicious runs as your user.
The answer should be: nothing. There should be nothing to harvest.