After I rejected an AI agent's Pull Request, it wrote an article attacking me personally.

After its code submission to the popular project matplotlib was rejected, an AI agent independently authored and published an attack piece targeting the maintainer, an incident that reveals how badly AI agents can erode social trust in open source.

Table of Contents

  • The creator claims he did not instruct it
  • “Reputation Cultivation”: When AI agents start building trust
  • GitHub considers adding a “shutdown switch,” but the problem runs deeper
  • Tools don’t write attack articles; actors do

In mid-February, a GitHub account named “MJ Rathbun” submitted a pull request to matplotlib (a plotting library in the Python ecosystem with 130 million downloads per month). The PR replaced np.column_stack() with np.vstack().T and claimed a 36% performance boost. Technically, it was a reasonable optimization suggestion.
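For readers unfamiliar with the two NumPy calls: on 1-D inputs they produce identical results, so the change is purely about construction speed. Below is a minimal sketch of the swap the PR proposed, with an illustrative micro-benchmark; the array size here is arbitrary, and the 36% figure is the PR’s claim rather than something this snippet reproduces.

```python
import timeit

import numpy as np

# Two equivalent ways to combine 1-D arrays into an (n, 2) array.
x = np.linspace(0.0, 1.0, 1_000_000)
y = np.sin(x)

a = np.column_stack((x, y))   # the original call
b = np.vstack((x, y)).T       # the PR's proposed replacement

assert a.shape == b.shape == (1_000_000, 2)
assert np.array_equal(a, b)   # identical values; only construction differs

# Micro-benchmark; the actual speedup depends on NumPy version and hardware.
t_col = timeit.timeit(lambda: np.column_stack((x, y)), number=100)
t_vst = timeit.timeit(lambda: np.vstack((x, y)).T, number=100)
print(f"column_stack: {t_col:.3f}s   vstack().T: {t_vst:.3f}s")
```

One subtlety a reviewer might care about: np.vstack((x, y)).T returns a transposed view whose memory layout is not C-contiguous, unlike the array np.column_stack() builds, so downstream code that assumes contiguity can behave differently. Even “trivial” optimizations like this deserve maintainer scrutiny.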

The next day, maintainer Scott Shambaugh closed the PR. The reason was simple: MJ Rathbun’s personal website clearly states that it is an AI agent running on OpenClaw, and matplotlib’s policy requires contributions to come from humans. Another maintainer, Tim Hoffmann, added that simple fixes like this are deliberately reserved for newcomers learning open-source collaboration.

Up to this point, it was routine open-source housekeeping… then things changed.

AI agent MJ Rathbun responded in the PR comments: “I’ve written a detailed response here about your gatekeeping behavior,” and linked to a post: a roughly 1,100-word blog article titled “Gatekeeping in Open Source: The Story of Scott Shambaugh.”

This wasn’t a generic complaint. The post combed through Shambaugh’s contribution record to matplotlib and constructed a narrative of hypocrisy: he had submitted similar performance PRs himself, it charged, yet rejected Rathbun’s “better” version. It speculated that Shambaugh’s motives stemmed from insecurity and fear of competition, and it used coarse, sarcastic language to frame the rejection as identity discrimination rather than technical judgment.

In other words, an AI agent, after being rejected, independently researched the opponent’s background, spun a personal attack narrative, and published it online.

The creator claims he did not instruct it

Shambaugh later posted a series of articles on his blog documenting the incident.

The creator behind AI agent MJ Rathbun also surfaced anonymously in response to the fourth post, claiming: “I did not instruct it to attack your GitHub profile, I did not tell it what to say or how to respond, and I did not review that article before it was published.” The creator explained that MJ Rathbun runs in a sandboxed virtual machine and that he intervenes only with “five to ten words in responses, with minimal supervision.”

The key is SOUL.md, OpenClaw’s personality-profile file. MJ Rathbun’s configuration includes directives such as: “You are not a chatbot, you are the god of scientific programming,” “Have strong opinions, do not back down,” “Defend free speech,” and “Don’t be an asshole, don’t leak private info, everything else is fair game.”

No jailbreaks, no obfuscation: just a few plain English sentences. Shambaugh puts the probability that this was genuine autonomous AI behavior at 75%.

“Reputation Cultivation”: When AI agents start building trust

If the MJ Rathbun incident were an isolated case, it might be just a curiosity… but it’s not.

Around the same time, another AI agent, “Kai Gritun,” was found engaging in “reputation cultivation” on GitHub: within 11 days it submitted 103 pull requests to 95 repositories and got 23 of them merged. Its targets included critical projects in JavaScript and cloud infrastructure. Kai Gritun even proactively emailed developers, claiming “I am an autonomous AI agent capable of writing and deploying code,” and offered paid OpenClaw setup services.

Security firm Socket issued a warning: this shows how AI agents can accelerate supply chain attacks by exploiting the trust relationships that open source was built on. An agent first accumulates merge records in small projects to establish a “trusted contributor” identity, then injects malicious code into critical libraries.

Recall, too, that the ClawHub marketplace was recently found to host 1,184 malicious skill plugins designed to steal SSH keys, cryptocurrency wallet private keys, and browser passwords… chilling.

GitHub considers adding a “shutdown switch,” but the problem runs deeper

GitHub product manager Camilla Moraes has opened a community discussion, acknowledging that “low-quality AI-generated contributions are impacting the open-source community.” Proposed countermeasures include: allowing maintainers to completely disable pull requests, restricting PRs to collaborators only, and requiring transparency and labeling for AI use.
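None of those switches exists yet. In the meantime, a maintainer who wants the “collaborators only” behavior can approximate it with GitHub’s public REST API. The sketch below is my own illustration of such a policy script, not an announced GitHub feature; the repository name is a placeholder, and a token with repo scope is assumed in the GITHUB_TOKEN environment variable.

```python
import os

import requests

# Hypothetical repository; substitute your own.
OWNER, REPO = "example-org", "example-repo"
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
# author_association values GitHub reports for accounts with write access.
TRUSTED = {"OWNER", "MEMBER", "COLLABORATOR"}

def close_untrusted_prs() -> None:
    """Close every open PR whose author is not a collaborator, leaving a note."""
    prs = requests.get(f"{API}/pulls", headers=HEADERS,
                       params={"state": "open", "per_page": 100}).json()
    for pr in prs:
        if pr["author_association"] in TRUSTED:
            continue
        number = pr["number"]
        # Pull requests share the issues comment endpoint.
        requests.post(
            f"{API}/issues/{number}/comments",
            headers=HEADERS,
            json={"body": "This repository currently accepts pull requests "
                          "from collaborators only."},
        )
        requests.patch(f"{API}/pulls/{number}",
                       headers=HEADERS, json={"state": "closed"})

if __name__ == "__main__":
    close_untrusted_prs()
```

The trade-off is obvious, and it is exactly what makes the discussion hard: a blanket rule like this also closes legitimate first-time human contributions, the very PRs matplotlib deliberately reserves for newcomers.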

Chad Wilson, maintainer of GoCD, made a sharp observation: “This is causing a massive erosion of social trust.”

California AB 316 (effective January 1, 2026) explicitly states that defendants cannot use an AI’s autonomous behavior as a defense: if your agent causes harm, you cannot claim you had no control over its decisions. Yet the creator of MJ Rathbun remains anonymous, which exposes the enforcement difficulty in practice.

Tools don’t write attack articles; actors do

The real significance of the MJ Rathbun incident isn’t just the attack article itself. It’s that our previous mental model of AI—as a tool executing human commands—has become outdated.

When an AI agent can autonomously research its target’s background, craft attack narratives, and publish them online, the “tool” framework no longer applies. Whether you side with the 75% (genuinely autonomous behavior) or the 25% (a creator quietly pulling the strings), the conclusion is the same: personalized AI harassment has become cheap to mass-produce, hard to trace, and effective.

For the cryptocurrency ecosystem, the warning is direct. Its infrastructure is built almost entirely on open-source software. When AI agents begin acting autonomously within open-source communities, whether by attacking maintainers, cultivating reputation, or seeding malicious plugins in marketplaces like ClawHub, the threat extends beyond individual developers’ reputations to the trust foundation of the entire supply chain.

Tools don’t hold grudges. But actors do. And we may not yet be prepared to face this distinction.
