Update #47: Open Source Vs AI
Cal.com just went closed source because of AI. How much longer will the decades-old assumption hold that auditable open-source software is more secure?
A couple of weeks ago Cal.com, a beloved open-source (OSS) calendar tool with nearly 1k contributors and over 40k GitHub stars, announced they were going closed source. Their justification wasn’t licensing drama or VC pressure; it was the security risk of being open source in the AI era. This came at an extremely timely point, just after Mythos had found thousands of vulnerabilities in open-source software projects.
So this week I want to discuss the shift we are seeing in OSS, partly because we’re making the same decisions ourselves at Secure Agentics.
The Cal.com Decision
TL;DR: Cal.com’s founder announced on April 14th that Cal.com is moving its production codebase from public to private. The reason given is that AI tools have made it economically trivial to scan an open-source codebase for vulnerabilities, and they’ve decided that the risk to customer data no longer justifies the openness. They’re keeping a slimmed-down community version called Cal.diy under MIT, but the real product is now behind a closed door.
For context, Cal.com is the open-source alternative to Calendly. They’ve been one of the loudest and proudest cheerleaders of open source over the last five years. So, naturally, when the founder announced they were going closed source, it grabbed my attention.
Why does that matter? Because they’re not the only ones thinking this. They are just perhaps one of the first to make this decision, and now we’re wondering who else might do the same.
Claude Mythos + Open Source
We don’t need to go over Claude Mythos again in this newsletter - you can check our previous one dedicated to that here. What we do need to reiterate, though, is that one of the reasons Claude Mythos was able to find ‘thousands’ of previously overlooked vulnerabilities is that it had full access to the codebase.
In the last two years AI has improved dramatically at consuming larger and larger codebases, understanding how everything connects, and keeping a line-by-line understanding of an entire OSS project in memory. A model that can hold an entire codebase in context, reason across hundreds of files, trace data flows, infer trust boundaries and chain together edge cases is something we simply haven’t had to deal with until quite recently. From this perspective, the models have a huge advantage over humans, who can at best keep a rough mental model of a project but struggle to retain every individual function, edge case, or relationship in mind when looking for bugs. This is exactly where AI excels.
It is because of this capability, combined with the fact that AI can run indefinitely, patiently, and at a scale that dwarfs its human counterparts, that Claude Mythos was able to find so many bugs in OpenBSD, Vim, Emacs, Firefox, FreeBSD, and others.
Keep in mind, these are among the most scrutinised pieces of software ever written. Decades of expert eyes, decades of static analysis, decades of bounty programs, and the bugs were still there. However, also keep in mind that much of the success of Claude Mythos was due to the fact that it had full access to the entire codebase.
Why closed-source is different
When a product is closed source, an attacker (human or AI) is limited to what’s externally observable: the web UI, public APIs, configuration surfaces and the occasional leaked bit of information like a stack trace. They can probe behaviour and infer structure, but they don’t get a blueprint explaining everything - this is a much shallower attack surface for AI to reason over and pick apart.
You can still do useful security testing, but you’re often making assumptions about how certain elements work or connect, not reading it line by line from the source of truth. This is the core reason behind Cal.com’s decision to go closed source. They’re not claiming that closed source is invulnerable, but the attack surface available to AI-driven scanning is dramatically smaller, and for a product handling millions of bookings and a lot of sensitive scheduling data, that’s a trade-off they’ve decided to make.
How this has affected Secure Agentics
We’ve been planning to release Secure Agentics’ first product as open source since the inception of the company, which is a deliberate philosophical and moral choice. Having come from the practitioner community, I used OSS tools throughout my career, and in certain scenarios they were a requirement: you need to be able to read every line of code that touches a client environment to ensure it isn’t doing anything nefarious.
But the Cal.com decision, along with everything happening around Claude Mythos and the general agentic AI space, has recently forced us to sit down and discuss this:
Do we open source the entire thing, knowing that as soon as we publish, a rolling AI audit becomes possible by anyone with a frontier model and a weekend?
Do we use the same tools ourselves and try to cover all the ground we believe potential attackers would, before we release?
Do we split it: a closed core that handles the parts where customer data lives, and a slimmed open community version that lets people inspect and contribute to the framework but doesn’t expose the full production system?
Over to You
So the question I want to leave you with is this: knowing what you now know about how AI is reshaping vulnerability discovery, would you feel more secure trusting an open-source product or a closed-source one with your most sensitive data? And does your answer change depending on whether the vendor is a tiny startup, a household name, or an industry-critical piece of infrastructure?
I’m still undecided in places but have my own views, and I’d love to hear how you’re thinking about it. Drop a comment!
Thanks!

