
Technique #2 - Tarpit/honeytrap:

Described by aaron at zadzmo.org as "sending AI scrapers down an 'infinite maze' of static files with no exit links, where they 'get stuck' and 'thrash around' for months"

It is often combined with a Markov babbler, a way of generating gibberish text without using sophisticated LLMs, in order to confuse scrapers and pollute their training data

Technique #1 - Disruption:

Disrupt AI scrapers lurking on a webpage by overwhelming them, mostly with ZIP bombs: 1 MB of data can explode into 1 GB when decompressed
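The trick relies on how well repetitive data compresses. As a rough sketch (the function name is mine, and actually serving this to clients via `Content-Encoding: gzip` is left out), here is how a stream of zero bytes shrinks by roughly three orders of magnitude under gzip:

```python
import gzip
import io

def make_gzip_bomb(uncompressed_size: int, chunk_size: int = 1 << 20) -> bytes:
    """Compress `uncompressed_size` zero bytes; zeros compress ~1000:1 with gzip."""
    buf = io.BytesIO()
    zeros = b"\0" * chunk_size
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        remaining = uncompressed_size
        while remaining > 0:
            n = min(remaining, chunk_size)
            gz.write(zeros[:n])
            remaining -= n
    return buf.getvalue()

bomb = make_gzip_bomb(10 * 1024 * 1024)  # 10 MB of zeros
print(len(bomb))  # only a few kilobytes on the wire
```

A naive scraper that transparently decompresses responses ends up allocating the full uncompressed size, while the server paid almost nothing in bandwidth.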

Consult the complete list of tools for your web servers here:

These strategies assume you run a capable web server, but how can you implement tarpits without one?

Responses vary: konterfAI suggests setting up a reverse proxy that links your site to a public tarpit. If that's not possible, adding a simple hyperlink to a tarpit might work, but the crawlers won't be sucked into it automatically; they will still crawl your site's content. And it's better to ask the creators of the tarpits before sending traffic their way.
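For readers who do control a server, the reverse-proxy idea might look roughly like the nginx fragment below. This is a hypothetical sketch, not a vetted configuration: the user-agent patterns, the `/maze/` path, and the `tarpit.example` upstream are all placeholder assumptions, and you should only point at a tarpit you host yourself or whose operator has agreed to receive your traffic.

```nginx
# Hypothetical fragment: route suspected AI crawlers into a tarpit.
map $http_user_agent $is_ai_bot {
    default        0;
    ~*GPTBot       1;
    ~*ClaudeBot    1;
    ~*CCBot        1;
}

server {
    listen 80;

    location / {
        if ($is_ai_bot) {
            rewrite ^ /maze/ last;  # humans never see this path
        }
        root /var/www/html;
    }

    location /maze/ {
        proxy_pass http://tarpit.example/;  # placeholder tarpit upstream
    }
}
```

User-agent matching is trivially spoofable, so this only catches crawlers that identify themselves honestly; it is a first filter, not a wall.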
The shrimp-Jesus industry and the bullshitting-your-homework industry shouldn't be valued at trillions of dollars.

Why should they be worth more than our healthcare system, our education, or our public utilities? And what does that say about the kind of future the tech oligarchies are offering us?
The mainstream resistance to this sh*tstorm has been a futile "we need to regulate and reform AI".

Well, if we want to regulate and reform AI, we first need to strip away its embedded logic of surveillance, the reckless investments, and the infinite greed of those deploying it.

AI saboteurs offer a sensible counter-threat: let's try and destroy AI. The mere existence of this possibility can push AI bros to think twice before acting like they're kings of this world. And if tech billionaires can do away with public services that millions of people depend on every day, in matters of actual life and death, we can and should do away with their Frankensteinish vision of technology.

when i wrote "destroy AI" i wasn't saying everything should be destroyed - i was saying the absence of destruction as an option creates a lopsided landscape to our imagination. people can just create and deploy systems, but we can't remove those systems? do you see how that is inescapably untenable?

— ali alkhatib (@ali-alkhatib.com) November 8, 2025, 23:58
Soooo...

- Do you have any example of this counter threat?

- Yes. For some context: AI companies no longer care about consent on the web. They aggressively steal common goods, then resell an approximate, inaccurate version of them through a unified, controlled platform. If social media turned the Internet into a private walled garden, generative AI will turn it into just a wall.
AI sabotage hopes to disrupt this disrespect for the rules of an open web. Case in point: Anthropic found that as few as 250 poisoned samples can poison LLMs of any size!
So how is the AI poisoning project going at the moment? Not the best, but not the worst either.

If this topic interests visitors of Poisonland, what's better than reading first-hand accounts from the saboteurs themselves?
Now I want to discover the techniques of AI sabotage!
Remind me again, why are we poisoning AI?