Daniel Stenberg, lead developer of the cURL project, has had enough of low-quality AI-generated vulnerability reports and is instituting strict new rules to combat what he calls a form of DDoS against open source developers.

Daniel Stenberg, the long-time maintainer of the ubiquitous open-source tool cURL, has issued a clear ultimatum: low-effort AI-generated bug reports will no longer be tolerated. Frustrated by a growing influx of false or misleading vulnerability disclosures submitted through the project’s HackerOne bug bounty program, Stenberg announced on LinkedIn that he is officially cracking down.

“That’s it. I’ve had it. I’m putting my foot down on this craziness,” he wrote.

Effective immediately, anyone submitting security issues for cURL on HackerOne must now answer a new mandatory question:
“Did you use AI to find the problem or generate this submission?”

If they check “yes,” they can expect a rigorous series of follow-up questions designed to confirm that genuine human understanding and expertise went into the submission.

From Bug Reports to AI Spam

The tipping point came with a recent AI-generated submission that claimed to identify a vulnerability in a nonexistent function—a classic hallucination. For Stenberg and the cURL team, it was one AI blunder too many.

“We now ban every reporter instantly who submits what we deem to be AI slop,” he wrote. “A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.”

This isn’t the first time Stenberg has voiced concern. Back in January 2024, he publicly ranted against what he called “crap reports” from AI tools. What makes these different from traditional garbage reports, he explained, is that they are more convincing on the surface—often grammatically polished and plausibly structured, but ultimately technically vacuous. They waste more time because they require deeper scrutiny before being debunked.

An Industry-Wide Problem for Open-Source Projects

The issue Stenberg raises isn’t unique to cURL. As generative AI becomes more accessible, open-source maintainers across the board are seeing a rise in “automated” bug hunting: individuals using LLMs to generate vulnerability reports in hopes of receiving bug bounty payouts.

While many of these tools can assist in identifying software flaws, they often lack critical contextual awareness, and users frequently submit findings without verification or real understanding.
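
Blunders like the nonexistent-function report are cheap to catch, which makes the lack of verification all the more striking. As a purely illustrative sketch in Python, and not anything the cURL team actually runs, a triager could do a first-pass check that a function named in a report exists at all in a local checkout of the curl sources; the symbol curl_easy_frobnicate below is made up for the example:

    import pathlib
    import re
    import sys

    def symbol_exists(repo_root: str, symbol: str) -> bool:
        # True if the symbol appears as a whole word in any .c or .h file.
        pattern = re.compile(rf"\b{re.escape(symbol)}\b")
        for path in pathlib.Path(repo_root).rglob("*.[ch]"):
            try:
                if pattern.search(path.read_text(errors="ignore")):
                    return True
            except OSError:
                continue  # skip unreadable files
        return False

    if __name__ == "__main__":
        # Hypothetical usage: python3 check_symbol.py ./curl curl_easy_frobnicate
        repo, sym = sys.argv[1], sys.argv[2]
        if symbol_exists(repo, sym):
            print(f"'{sym}' exists; the report at least names real code.")
        else:
            print(f"'{sym}' not found; likely a hallucinated function.")

A filter like this only weeds out the most blatant hallucinations; reports that reference real functions still demand exactly the kind of deeper human scrutiny Stenberg describes.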

This phenomenon poses a unique challenge for community-led software projects, which often rely on a small group of volunteers to triage and investigate reports. The burden of filtering out AI-generated noise is increasingly unsustainable.

A New Gatekeeping Policy

Stenberg’s policy introduces a two-pronged response:

  1. Mandatory AI usage disclosure for all HackerOne submissions related to cURL.
  2. Zero-tolerance enforcement: immediate bans for reporters whose submissions are judged to be AI-generated junk.

He also published a screenshot of the updated HackerOne submission form on Mastodon, showing the new AI disclosure checkbox.

This gatekeeping approach is likely to influence other open-source communities, many of which are currently debating how to integrate or reject AI-assisted contributions—be they code, documentation, or bug reports.

No Valid AI Reports Yet

Perhaps most damning in Stenberg’s statement is his claim that, so far, not a single AI-assisted security report submitted to cURL has proven to be valid or helpful.

That is a serious indictment of current AI-assisted bug-hunting practices. While tools like GPT-4 and Claude 3 can assist developers and security researchers, they still fall short of autonomous insight when it comes to nuanced, real-world software vulnerabilities.

The Broader AI Ethics Conversation

Stenberg’s actions reopen a critical question: What is the role of AI in collaborative open-source ecosystems? When do these tools assist, and when do they become a burden?

There is an increasing call across the developer community for ethical and responsible use of generative AI, not just in writing code or generating text, but also in how these tools interface with human-led systems like bug bounty programs.

By instituting stricter measures, Stenberg is not only protecting his time and the cURL project’s integrity—he’s also sending a message to the broader industry: AI must be a tool, not a crutch or a shortcut to financial rewards at others’ expense.

References: Heise, LinkedIn, and HackerOne
