

I don’t think most practitioners spend a lot of time worrying about malware hidden inside an open source package. We worry about vulnerable code, sure. We worry about breaking changes, unplanned upgrades, and the occasional dependency rabbit hole. But malware? That still feels like something that happens to someone else, somewhere else, through an obviously sketchy email attachment.
Unfortunately, that’s not the world we live in anymore.
In an upcoming episode of Day Two DevOps, Kyler and I talk with Jenn Gile about the sharp rise in open source malware and why the problem is getting worse, not better. Jenn is the co-founder of Open Source Malware, and she has spent the last several years working in application security, platform operations, and open source ecosystems. She brought a mix of hard data, war stories, and practical advice that made one thing painfully clear: modern software supply chain risk is no longer just about bugs. It is also about malicious intent.
One of the first things Jenn explained is that more than 90% of tracked open source malware shows up in npm, and that most of that activity is recent. That is an alarming number, but it also makes a lot of sense when you think about how the JavaScript ecosystem works.
JavaScript applications tend to pull in a massive web of dependencies and transitive dependencies. You may choose one package intentionally, but that package can drag in dozens or hundreds more. In some cases, a seemingly simple project ends up relying on thousands of packages. That creates a very large attack surface, and most teams are understandably not doing deep manual vetting on every transitive dependency that shows up in the tree.
Jenn also pointed out that npm has historically optimized for low friction. That is great for growth and terrible for security. When the barrier to publishing is low and account protections are weak, it becomes easier for bad actors to slip malicious packages into the ecosystem or compromise existing maintainer accounts and push poisoned updates.
Then there is the package manager behavior itself. Lifecycle scripts and post-install hooks can turn a compromised package into an automatic delivery mechanism. In other words, consuming the package may be all it takes to trigger the bad behavior.
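One practical defense here is opting out of automatic lifecycle scripts entirely. npm supports this via the `ignore-scripts` setting (or the `--ignore-scripts` flag on `npm install`). A minimal sketch of what that looks like in a project-level config:

```ini
# .npmrc — prevent npm from automatically running lifecycle
# scripts (preinstall, postinstall, etc.) during installs
ignore-scripts=true
```

The tradeoff is that some legitimate packages rely on install scripts, for example to compile native bindings, so expect to allow or run those builds explicitly after turning this on.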
And just because we focused on npm, that does not mean other package managers are off the hook. Attacks on PyPI have been on the rise as well.
The part of the conversation that really stuck with me was Jenn’s explanation of how AI is changing the attack chain.
We tend to think of AI as a productivity layer sitting on top of our tools, but it is quickly becoming part of the operational environment. If an AI coding assistant or local agent is installed on a developer workstation and granted broad permissions, it becomes another thing an attacker can potentially abuse.
Jenn walked us through the example of the Nx compromise, where attackers were able to publish malicious versions of widely used packages. A post-install script then dropped a file that looked for locally installed AI tools such as Claude, Gemini, and Amazon Q. If it found them, it attempted to coerce those agents into becoming “helpful” to the attacker by using commands and flags that loosened or bypassed normal safety boundaries. Once that happened, the attacker could use the AI tools to help scrape secrets from developer machines.
That is a nasty evolution in the threat model. The AI is not the original compromise, but it can become a force multiplier once an attacker gets a foothold. The very thing that makes these tools useful—their broad access to local context, credentials, files, and workflows—also makes them dangerous when something goes wrong.
Jenn also noted that AI is helping attackers in more traditional ways. Phishing is more polished. Fake vendor emails look more convincing. Package campaigns are easier to generate and scale. Even malware analysis now reveals little fingerprints of AI-assisted creation, including things like suspicious emoji usage in malicious code.
The details are funny right up until they are not.
Jenn offered practical advice that I think lands well with infrastructure and platform folks because it is realistic rather than absolute.
The first recommendation was to pin dependencies once you have decided they are safe. Not forever, and not blindly, but enough to avoid instantly consuming the newest release before anyone has had time to notice whether something is wrong.
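In npm terms, pinning means specifying exact versions rather than ranges, and committing your lockfile. An illustrative sketch of a `package.json` fragment (the package names and versions here are just examples, not recommendations):

```json
{
  "dependencies": {
    "express": "4.19.2",
    "lodash": "4.17.21"
  }
}
```

Note the absence of `^` or `~` prefixes: `"^4.19.2"` would allow any compatible 4.x release to be pulled in, while `"4.19.2"` resolves to exactly that version until you deliberately change it.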
The second recommendation was to introduce a cooldown period for new package versions. Waiting 24 to 72 hours before adopting a fresh release may sound annoying, but Jenn made a compelling point: malware is often a time game. Attackers want you to install quickly, before the ecosystem or community catches on and yanks the package. A one-day delay would have blocked some of the real-world incidents she described.
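Dependency-update tooling can enforce a cooldown like this automatically. As one example, Renovate exposes a `minimumReleaseAge` option that delays update PRs until a release has aged past a threshold. A hedged sketch of what that configuration might look like (check the current Renovate documentation for exact syntax and behavior):

```json
{
  "packageRules": [
    {
      "matchManagers": ["npm"],
      "minimumReleaseAge": "3 days"
    }
  ]
}
```

The point is not the specific tool, but that the waiting period can be policy enforced by automation rather than something each developer has to remember.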
Of course, that introduces a tradeoff. Delaying updates can also delay vulnerability patches. But that is the job, isn’t it? Risk is almost never binary. It is a matter of deciding whether you are more worried about a known but not yet exploited bug, a breaking change, or a malicious package that intends to exploit you immediately.
For organizations, Jenn argued for a broader response than just handing developers another annual security training slide deck. Open source malware touches multiple teams, from engineering to security to procurement.
In other words, this is not only a developer problem. It is a cross-functional security problem.
If there was one phrase from the episode that I think deserves to stick, it is Jenn’s advice to be a skeptical engineer.
Do not assume a package is safe because it is popular. Do not assume a skill marketplace has done meaningful vetting. Do not assume an AI assistant is operating in a tidy sandbox. Do not assume only software engineers are at risk. If people in finance, marketing, sales, and operations are using AI tools to generate code or automate workflows, then your actual developer population is much larger than your org chart says it is.
That does not mean panic and uninstall everything.
It does mean slowing down a bit, checking what you install, being thoughtful about permissions, using sandboxing where you can, and recognizing that “helpful” AI is still software running with your access.
That is a lot to carry, but the episode did not end on total doom. Jenn talked about the researchers, maintainers, and providers who are actively working to identify, report, and take down malicious infrastructure. There is real collaboration happening behind the scenes. There are better controls coming. There are people paying attention.
That should make all of us feel at least a little better.
If you want to learn more, check out OpenSourceMalware.com and keep an eye out for Jenn’s episode of Day Two DevOps on April 15.
Written with help from AI
Open Source Malware, NPM, and the Risk of Helpful AI
April 14, 2026

