Creating the first autonomous defender.
Ctrl+G is a cybersecurity data lab in San Francisco. We build the training data, benchmarks, and environments that teach models to think like attackers so they can write secure code, defend your infrastructure, and protect your network.
Grégoire and I have been building together since 2017. I've been in offensive security since I was 13, reverse-engineering malware and competing in CTFs. Grégoire is an operator who sees patterns in data, people, and systems, and turns them into unfair advantages.
We cofounded Germinal in 2018. Grégoire is known as the LinkedIn guy in France, but he was never fond of LinkedIn. He was fond of deconstructing algorithms. LinkedIn's was just another system to decode. 32 million impressions in 2020, and we built a whole company on top of that acquisition engine. Not a traditional agency. A hyper-structured, productized go-to-market machine for startups, held together by automation and processes I built from scratch. All pre-ChatGPT, pre-LLMs. We sold it at $3M ARR because we were bored. We needed something more challenging.
Then we built OnlyDust, an open-source funding platform that distributed $18M in grants to 4,000 contributors. We raised $6M and built a platform that helped thousands of developers sharpen their skills in open source and helped maintainers launch and grow projects. We even built an autonomous agent that distributed money based on the value of your PRs. $1M monthly at its peak.
By early 2025, the system was breaking. Contributors were flooding projects with AI-generated pull requests, not to help, but to game the stats. For money, for badges, for reputation. We saw fake open-source projects created just to boost numbers. Maintainers couldn't tell if they were talking to humans or bots. They stopped accepting external contributions entirely. Some started rejecting our money.
We didn't want to put more burden on the people we were trying to help. The bottleneck had moved. It was no longer funding. It was security. One bad AI-generated commit could compromise critical infrastructure. Output was scaling 10x. Quality wasn't, and neither was security.
We moved to San Francisco in 2025. I'd been coding with AI for months in France, but in early 2025 nobody there believed it was a thing. We needed to be where the frontier was actually happening.
The security collapse is already here. Every major player is moving toward 100% AI-generated code. Developers are 10x-ing their output. Human review is disappearing. The attack surface is exploding.
Security is asymmetric. Attackers only need to find one flaw. Defenders must prevent all of them. Models are already good enough to attack. Not good enough to defend. The equilibrium between offense and defense is breaking.
Without intervention, we retreat to reviewing code by hand, or writing it ourselves. A massive productivity gain for humanity, lost. Like the Concorde.
So we launched Ctrl+G. We restarted from first principles: how do you teach models and agents to actually understand cybersecurity? We assembled a small team of offensive security experts and built the first data lab focused on cybersecurity for the AI age. Benchmarks and realistic environments that measure what models can actually secure, exploit, and defend. Training data that teaches them to write secure code. We partner with frontier labs to make it happen.
Our roadmap: help models write secure code by default. Then build autonomous defenders.
Offense is inevitable.
Defense must be superhuman.
AI will be used to attack systems at a scale and speed that human defenders cannot match. We can help prevent some of this, and where we can't, we make defense so intelligent and so fast that attackers lose their edge.
The future is vibe coding,
but only if security is invisible.
A million new developers are building software with AI right now. They will never learn security. That's not a problem if the models handle it natively. Security shouldn't be a skill developers need. It should be something the model guarantees.
You can't build defense
if you've never played offense.
Every person at Ctrl+G has broken systems before building them. Reverse-engineered malware, exploited protocols, competed in CTFs, found real vulnerabilities in production software. You cannot teach a model what a vulnerability looks like if you've never found one yourself.
We're hiring.
We're looking for people who've broken systems before building them. On-site in San Francisco.