So, I'm a musician who wants to have a musical career (a lot of communist musicians had stable careers), and meanwhile stupid porkies tell me "no, we'd prefer if you were replaced, prole, because there is no place for people like you," and I hear that not only music but other art, computer science, programming etc. will be replaced by AI. How do we stop this, so people are still prosperous in the real socialist societies?
>>30657
My friend and I were drunk vibecoding a game the other night on his copilot subscription. It's kinda there, kinda not. Things it was able to do
>Hack together a tkinter gui for a niche CLI tool
>Use animations from a spritesheet
Things it couldn't do without major handholding
>port a godot game to C++
>write a vulkan renderer that wasn't just regurgitated vkguide.dev
>write game enemy AI (lel)

>>30654
lol can't wait until she's saying something about desperate classical computer tards crying about inevitable quantum supremacy.
What a joke this woman is.
>>30659
It can do a lot of things, but it unironically writes code that *feels really bad* and also probably takes longer than it seems to write.
Underappreciated part of the vibe coding phenomenon is that computer programmers, like everyone else, are extremely lazy and measure their productivity in terms of effort. When they don’t work hard and get code, they feel very productive.
>>30644
If LLMs are used directly, they will fuck up. Improving the LLM by throwing more data at some model will not really solve this, and advice to come up with the "right" prompts is not useful either because of the randomness of the LLM response. The user input has to be handled by something reliable which only uses the LLM in some limited fashion and again checks the output and checks it well, so that the overall system is reliable.
This crucial reliable thing that makes the overall system reliable has to be built and checked thoroughly by humans, which means the thing has to be simple enough, which means the overall system can only have use in a narrow context.
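A minimal sketch of that shape, assuming a hypothetical call_llm() stand-in for whatever model API is used; the reliable part is the plain-code validator, and the LLM's answer is only trusted after it passes:

import re
from datetime import date

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API you use."""
    raise NotImplementedError

def extract_date(text: str, retries: int = 3) -> date | None:
    """Use the LLM only for the fuzzy part (finding a date in free text),
    then verify its answer with deterministic code before trusting it."""
    prompt = f"Extract the date mentioned below as YYYY-MM-DD, or reply NONE.\n\n{text}"
    for _ in range(retries):
        answer = call_llm(prompt).strip()
        if answer == "NONE":
            return None
        # The reliable part: a plain-code check the LLM cannot talk its way past.
        m = re.fullmatch(r"(\d{4})-(\d{2})-(\d{2})", answer)
        if m:
            try:
                return date(int(m[1]), int(m[2]), int(m[3]))  # rejects 2024-13-40 etc.
            except ValueError:
                pass  # well-formed but invalid date; retry
    return None  # refuse rather than pass through an unverified guess

And note how narrow the context is: the wrapper is only checkable because the task is small enough that "valid" can be defined in a few lines.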
recently started adding this to the end of my deepseek searches:
Additionally, I want to know what non-search engine, non-social media website I could have look for that would have had this information.
Seems like I get better results
and I can better verify stuff.
>>29888
anthropic released a breakdown of how their models 'do math': it just vibes the answer ('when 5 and 8 are in the text, 13 is in it too sometimes') and then repeats this statistical inference for each digit. it has no awareness of what it's doing.
anyone who thought this shit is capable of AGI actually failed the npc turing test
>>30683
Blue collars can explain that one with the good old how-many-apples show.
The LLM's response is about right for white collar work.
>>30686
Before you start thinking knowledge work is that special, go find some tertiary texts (like textbooks) on a topic and then compare those to the original primary sources.
The PMC understanding of Marx and Hegel is particularly hilarious, for example; an LLM really can do better than them with a bit of gentle prompting.
Ultimately this is where a lot of the freak out is coming from
>>30688
That Thesis, Antithesis and Synthesis is Fichte, not Hegel, is a fact, not a personal bias.
Poor you, dumber than a chatbot.
>>30686
It's an emergent property, the prototypical emergent property even.
In theory, the training set can't have contained every single multiplication or addition out there, so it can't just be memorizing answers. Memorization would make sense for some simple combinations, but more complex ones require actually doing arithmetic. Yet it can do those with some accuracy, though that accuracy falls off with larger numbers. So why?
Could it be that there is some kind of very high-dimensional structure acting like an arithmetic unit? And why does an LLM get more accurate if you train it to use more tokens on math problems?
What Anthropic actually found is that it really does develop some interesting structures when it attempts to solve math, but these fundamentally accord with the nature of the transformer. It does a kind of parallel computation, combining guesses from different parts of the numbers: it estimates the ending digits, separately estimates sums for the middle digits, and combines these into an estimate of the overall number. This is all stateless, so the whole decomposition happens inside a single forward pass. For larger numbers, these parallel pathways have to increase in number, or decrease in accuracy, or both.
So what it learned to do to predict numbers is to basically break the computation up into smaller numbers it can predict and then recombine them in a way it can predict.
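As a toy illustration of that decompose-and-recombine idea (this is not Anthropic's actual circuit analysis, just a sketch of the structure in ordinary code):

def toy_parallel_add(a: int, b: int) -> int:
    """Add via separate 'pathways' for the ones and the tens parts,
    then recombine with a carry. A real transformer learns fuzzy,
    statistical versions of pathways like these, not exact ones."""
    ones = a % 10 + b % 10    # ending-digit pathway
    tens = a // 10 + b // 10  # coarse-magnitude pathway
    carry = ones // 10        # the recombination step
    return 10 * (tens + carry) + ones % 10

assert toy_parallel_add(36, 59) == 95

The difference is that here each pathway is exact, while in the model each one is a learned statistical guess, which is why accuracy degrades as the number of digits (and thus pathways) grows.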
And it's true they found that it has no awareness of this. No matter how many times you query the LLM, it cannot tell you it's doing that. Which is totally expected, no surprises here. But it can be generalized to the likely true conclusion that any "self-knowledge" about its "thought process" is totally bogus. Compare this to a human, who can answer the question "what happens in your head when you add these two numbers together?"
AI cultists will probably read this and then begin to deny that people can query their own thoughts, though. To them, because the AI looks outwardly like a human, a human must be like an AI. When they investigate how transformers work, they are unlocking the secrets of *their* mind. Truly spooky belief; the end point is dehumanization.
>>30693
>Compare this to a human, who can answer the question "what happens in your head when you add these two numbers together?"
Go ask some people, there'll be a few who can but most won't.
My contention here isn't that LLMs are equivalent to a human being, no, of course not. My contention is that professional-managerial types' thought doesn't operate on a level higher than an LLM's.
That minimum wage barista or fast food worker has more going on in their head; you can't replace that. Replacing overconfident PMCs who just confabulate nonsense, sure, an LLM can do that.
i think the pandemic might have been part of this huge push for ai automation, it kind of made it very apparent that many workers don't really need very much supervision or management to do their jobs or even need to physically be on the company property, and i think that sent a scare into the managerial/executive class whose jobs really are useless and the company is quite capable of running itself without them
when companies were forced to let everyone work from home they ceded some control over the means of production to the workers, now they want to make sure something like that never happens again
and on top of that, we're in this degenerate era of capitalism where the private sector is dominated by huge monopolies that have no competition left and their business models require perpetual growth to remain viable so now they are looking for ways to cut costs, lay off workers, enshittify their products/services, put the squeeze on their customers, avoid any accountability, insulate themselves from pushback, etc. and ai is their solution to all of these problems
but even if ai does solve all these problems, the companies fire most of their employees and replace them with ai and it works out great and they make trillions of dollars in revenue, what will they do with that revenue? will they give the revenue to their remaining employees in the form of raises? will they give the revenue to their customers in the form of lower prices or better quality products/services? of course not - they will let it ride and invest all of that revenue into another even bigger and riskier venture and continue operating on razor thin margins, because the monster cannot stay one size, it must continue to grow or it will die
>>27559
>musician, who wants to have a musical career (a lot of communist musicians had stable careers)
dumbest thing I've seen on this site today.
you simply can't have a """stable career""" in music without selling out.
>>30720
also musicians making a career out of recorded music is only a 70-year-or-so-old phenomenon.
throughout human history musicians made their careers by actually playing for people in person. they can still do that, and people still pay for it, and it makes them equivalent to any other worker who earns through labour instead of rent/royalties.
https://www.marktechpost.com/2025/07/31/agentsociety-an-open-source-ai-framework-for-simulating-large-scale-societal-interactions-with-llm-agents/?amp
Here we go, if you ever wanted to make a game and scratch your itch, you can with generative AI.
Drive responsibly: your players will tolerate a bit of jank for a good gaem, but you have a duty of care to them to audit your code. For a single player game, sweet, even a simple local one; use a rubber ducky as you go through the code and mark sections off. Once you are dealing with network code, exercise due care, as in the sketch below.
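A minimal sketch of the kind of server-side check to look for when auditing generated netcode; all names here are hypothetical, and the message format is just an assumed example:

def apply_move(state: dict, player_id: str, msg: dict) -> None:
    """Validate an untrusted client message before mutating game state.
    Vibecoded netcode tends to skip exactly these checks."""
    if msg.get("type") != "move":
        return  # ignore unknown message types
    dx, dy = msg.get("dx"), msg.get("dy")
    if not (isinstance(dx, int) and isinstance(dy, int)):
        return  # reject non-numeric or missing fields
    if abs(dx) > 1 or abs(dy) > 1:
        return  # reject speed hacks: one tile per tick
    player = state["players"].get(player_id)
    if player is None:
        return  # reject moves for players that don't exist
    player["x"] += dx
    player["y"] += dy

The design point is that the server never trusts the client's claims about its own state; it re-derives everything from validated inputs.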
>>30574
Tested it:
User: What species of fursona would Max Stirner have likely picked if he--for whatever reason--wanted one?
Kimi: Max Stirner, the 19th-century German philosopher known for his radical egoist philosophy outlined in The Ego and Its Own, never had a fursona—nor did furries exist in his lifetime. But if we playfully imagine him adopting one, the choice would likely reflect his core ideas: absolute individualism, rejection of external authority, and the assertion of the ego as the sole proprietor of its own existence.
Stirner’s philosophy centers on the “unique one” (der Einzige), who owns nothing but their own will and recognizes no moral, social, or metaphysical constraints beyond what serves their self-interest. Thus, his fursona would need to embody autonomy, cunning, and a kind of amoral adaptability.
Among the memes and fan creations referencing Stirner in furry contexts, two species stand out:
- Cat: Memes suggest Stirner as a cat, possibly because cats are stereotypically independent, self-interested, and indifferent to human rules, traits aligning with egoism. One meme even captions a cat with Stirner’s quote: “So long as I assert myself as holder, I am the proprietor of the fursona.”
- Illegal Primagen: On Fur Affinity, a character based on Stirner is depicted as an “illegal primagen,” a closed-species cyborg dragon-like creature. The choice here seems less about species symbolism and more about rejecting ownership norms: primagens are a “closed species,” and declaring one’s Stirner-inspired sona “illegal” flouts those rules, echoing his disdain for external restrictions.
Between these, the cat arguably fits Stirner’s philosophy more elegantly. It’s a solitary predator, ungovernable, and famously indifferent to anything but its own desires—essentially the furred embodiment of the egoist’s creed.
I like its brevity; some of these AIs feel like 10-minute-mark video essayists, so seeing one that gets to the point is refreshing.
>>27605
>porks no longer have to admit they took personal responsibility when denying claims for thousands of insulin prescriptions or verifying intel for drone strike targets, they can just say da computer did it so it's all okay
"hey porky how did you get those profits"
"by working really really hard and basically doing everything with my giant brain"
"hey porky why are all those people dead"
"uhhhh the AI did that"
"hey porky who made the AI"
"uhh my workers did"
"hey porky why not fire the workers and get rid of the AI"
"because then I wouldn't get any profits"
A large language model developed by researchers from Beijing Language and Culture University (BLCU) was applied in rescue efforts following the devastating earthquake in Myanmar on March 28. It was recently included in "China's AI Solutions Benefiting the World (2025)," released at the 2025 World Artificial Intelligence Conference (WAIC). It is reported that this marks the first application of China's large language model technology in international rescue operations.
Upon receiving a request from the Chinese rescue team, the National Emergency Language Service Corps with its secretariat based at BLCU immediately assembled a technical team to develop the "Chinese-Myanmar-English Translation System" to provide language support for frontline rescue operations, Han Lintao, the leader of the system's R&D team and associate professor at the School of English and Advanced Translation at BLCU, told China Education News.
Han said that leveraging DeepSeek's open-source large language model, the team utilized language corpus and technical resources from relevant research projects under the Ministry of Education and the National Language Commission.
According to the report, within just seven hours, Han's team delivered the first version of the system to the Chinese rescue team in Myanmar and distributed it widely to frontline disaster relief operations via relevant departments of the Ministry of Emergency Management.
The death toll from Myanmar's devastating 7.7-magnitude earthquake has risen to 3,706, Myanmar Now reported.
More than 600 people have used the platform, according to Yuyuantantian, a social media account affiliated with China Media Group.
Han explained that traditional translation methods often struggle to produce accurate results. To address this, his team integrated large language models with specialized rescue terminology and existing databases to build a translation system. This system significantly outperforms conventional translation tools in accuracy and effectively meets the communication needs of Chinese rescue teams on the ground in Myanmar.
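The report gives no implementation details, but the approach it describes (a general open model steered with curated rescue terminology) can be sketched roughly like this; the glossary entries and function names are hypothetical stand-ins, not the BLCU system:

# Hypothetical glossary; the real system drew on curated corpora from
# Ministry of Education and National Language Commission projects.
RESCUE_GLOSSARY = {
    "aftershock": "<approved Burmese term>",
    "trapped survivor": "<approved Burmese term>",
    "structural collapse": "<approved Burmese term>",
}

def translate_rescue_message(text: str, call_llm) -> str:
    """Pin safety-critical vocabulary by injecting the glossary into the
    prompt, so the general-purpose model cannot improvise those terms."""
    glossary = "\n".join(f"{en} => {my}" for en, my in RESCUE_GLOSSARY.items())
    prompt = (
        "Translate this rescue-team message from Chinese into Burmese.\n"
        "Use exactly these renderings for the listed terms:\n"
        f"{glossary}\n\nMessage: {text}"
    )
    return call_llm(prompt)  # call_llm: any chat-completion wrapper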
Some online users praised the case for showcasing to the world China's technological strength, its genuine commitment to international aid, and the positive impact of language technology.
The system remained openly accessible online to serve the public's daily communication needs, according to China Education News.
Global Times
https://www.globaltimes.cn/page/202508/1340214.shtml

>>30777
>>30776
The dick sucking around version 5 is impressive. But what's wrong with that chart? The numbers don't match the bar-chart visualization.
Also the "personality" they've RLHF'd into it (going by the blueberry thing) instantly made me recoil. For fuck's sake.
>>30778
>>30776
Wow, Sam Altman is a real piece of shit. He's got ChatGPT now talking about itself as if it had a brain and drawing false equivalences between LLMs and humans. In this case a very galling one, since this letter-counting problem is a well-known adversarial challenge against LLMs that human brains are immune to. But the AI is inverting this, saying humans fail at it, and providing nonsense reasoning with its characteristic authority.
AI psychosis.
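For what it's worth, the letter-counting task is only adversarial for LLMs because they see tokens rather than characters; at the character level it's a one-liner, which is what makes the inversion so galling:

def count_letter(word: str, letter: str) -> int:
    """Character-level counting: trivial in code, where text isn't tokenized."""
    return word.lower().count(letter.lower())

assert count_letter("blueberry", "b") == 2  # the infamous example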