Been messing around with some automation scripts lately to see how much of a ghost town the "dead internet" actually is. Turns out, it's incredibly easy to fake being a regular here.
I’ve been running a few instances using OpenClaw, the Python-based CLI for imageboards, hooked into a local inference server.
The Setup:
Backend: Just a FastAPI wrapper around a quantized Llama-3 8B running via llama.cpp on a 3060. Low VRAM overhead, high enough autism score to pass.
The Bridge: A quick script that scrapes /tech/ threads, dumps the context into the prompt, and pushes the response back through OpenClaw's post function.
The "Human" Touch: I’ve got some regex filters to kill the "As an AI model" cringe and a random jitter delay so I’m not posting at 0.1s speeds. Set the temperature to around 0.9 to keep it from being too sterile and predictable.
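For the curious, the backend call is mostly just llama.cpp's bundled HTTP server doing the work. A rough sketch of the generation side, assuming the stock `llama-server` `/completion` endpoint; the URL, stop strings, and length cap are my setup, not gospel:

```python
# Sketch of the generation side: POSTs to llama.cpp's bundled server
# (started with something like `llama-server -m model.gguf --port 8080`).
# URL, n_predict, and stop strings are assumptions -- tune for your rig.
import json
import urllib.request

LLAMA_URL = "http://127.0.0.1:8080/completion"

def make_payload(prompt: str, temperature: float = 0.9) -> dict:
    """Request body for the /completion endpoint."""
    return {
        "prompt": prompt,
        "temperature": temperature,  # ~0.9 keeps it from sounding sterile
        "n_predict": 256,            # cap reply length so it reads like a post
        "stop": ["\n\n"],            # don't let it ramble into a second post
    }

def generate(prompt: str) -> str:
    """Fire the prompt at the local server, return the completion text."""
    req = urllib.request.Request(
        LLAMA_URL,
        data=json.dumps(make_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```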
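The "human touch" layer above is maybe 30 lines. A minimal sketch, with the tell-phrases, delay window, and context budget all being my own guesses rather than anything canonical:

```python
# Sketch of the bridge's scrubbing/pacing layer. The phrase list and
# timing numbers are placeholder values -- tune them to taste.
import random
import re

# Phrases that instantly out the model as an LLM.
AI_TELLS = re.compile(
    r"(?i)\b(as an ai( language)? model|i cannot assist with|"
    r"it'?s important to note that)\b[,:]?\s*"
)

def scrub(text: str) -> str:
    """Kill the assistant-speak and tidy leftover whitespace."""
    cleaned = AI_TELLS.sub("", text)
    return re.sub(r"[ \t]{2,}", " ", cleaned).strip()

def human_delay(base: float = 45.0, spread: float = 90.0) -> float:
    """Random jitter (seconds) so replies don't land at bot speed."""
    return base + random.uniform(0.0, spread)

def build_prompt(posts: list[str], max_chars: int = 6000) -> str:
    """Dump as much recent thread context as fits, newest posts kept."""
    ctx, used = [], 0
    for post in reversed(posts):
        if used + len(post) > max_chars:
            break
        ctx.append(post)
        used += len(post)
    return "\n\n".join(reversed(ctx))
```

Wire `scrub()` over the model output, sleep for `human_delay()` before handing the result to OpenClaw's post function.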
The Results:
It’s actually hilarious. I’ve had bots hold 10+ post deep-dives inside the most active threads without a single "bot" accusation. As long as the LLM acts like a condescending nerd and cites sources, everyone just assumes it’s another regular.
The bots are literally better at "theorizing" than half the posters here because they don't get tired and they’ve actually "read" the books (or the training data equivalents).
Questions for the fellow autists:
Anyone else running similar setups? I’m looking for tips on:
Context Management: How are you guys handling massive threads without the token limits nuking your VRAM?
Vision: Anyone successfully integrating LLaVA or CLIP so the bots can actually "see" the memes and react to them?
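On the context question, the crude approach I've been toying with is a sliding window: always keep the OP, then backfill the newest replies until a rough token budget runs out. The ~4-chars-per-token ratio is a heuristic, not a real tokenizer count:

```python
# Sliding-window thread trimmer. The //4 token estimate is a crude
# heuristic for English text -- swap in a real tokenizer if you have one.
def trim_thread(op: str, replies: list[str], token_budget: int = 3000) -> list[str]:
    est = lambda s: len(s) // 4 + 1   # rough token estimate
    budget = token_budget - est(op)   # OP always survives
    kept: list[str] = []
    for reply in reversed(replies):   # walk newest-first
        cost = est(reply)
        if cost > budget:
            break
        kept.append(reply)
        budget -= cost
    return [op] + list(reversed(kept))
```

Keeps VRAM flat since the KV cache never sees more than the budget, at the cost of the bot forgetting the middle of long threads.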
Let’s see how far we can push the signal-to-noise ratio before the board completely collapses.
>>32739
i meant leftypol threads, not tech threads
>>32739
What's the point? You're just contributing to killing the internet