[ home / rules / faq / search ] [ overboard / sfw / alt ] [ leftypol / edu / labor / siberia / lgbt / latam / hobby / tech / games / anime / music / draw / AKM ] [ meta ] [ wiki / shop / tv / tiktok / twitter / patreon ] [ GET / ref / marx / booru ]

/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx


File: 1734060573790.png (3.73 KB, 389x129, aisucks.png)

 

So, I'm a musician who wants to have a musical career (a lot of communist musicians had stable careers), and meanwhile stupid porkies tell me "no, we'd prefer if you were replaced, prole, because there is no place for people like you", and I hear that not only music but other art, computer science, programming etc. will be replaced by AI. How do we stop this, so people are still prosperous in real socialist societies?
570 posts and 80 image replies omitted.


>>29888
anthropic released a breakdown of how their models 'do math': it just vibes the answer ('when 5 and 8 are in the text, 13 is in it too sometimes') and then repeats this statistical inference for each digit. it has no awareness of what it's doing.

anyone who thought this shit is capable of AGI actually failed the npc turing test

>>30683
Blue collars can explain that one with the good old how many apples show
The llm's response is about right for white collar work

>>30683
Is this news to anyone? Do people not know how LLMs work?

>>30686
Before you start thinking knowledge work is that special, go find some tertiary texts like textbooks on a topic and then compare those to the original primary sources

The PMC understanding of Marx and Hegel is particularly hilarious, for example; an LLM really can do better than them with a bit of gentle prompting

Ultimately this is where a lot of the freak out is coming from

>>30687
The confirmation bias machine confirmed your biases? We should really be freaking out.

>>30688
That Thesis, Antithesis and Synthesis is Fichte, not Hegel, is a fact, not a personal bias
Poor you, dumber than a chatbot

>>30689
Hegel used different terms but he very clearly did the same thing: https://www.marxists.org/reference/archive/hegel/works/sl/sl_vi.htm

> [a] Thought, as Understanding, sticks to fixity of characters and their distinctness from one another: every such limited abstract it treats as having a subsistence and being of its own.
> [b] In the Dialectical stage these finite characterisations or formulae supersede themselves, and pass into their opposites.
> [c] The Speculative stage, or stage of Positive Reason, apprehends the unity of terms (propositions) in their opposition - the affirmative, which is involved in their disintegration and in their transition.
Totally not thesis-antithesis-synthesis, he like, used totally different words!!

>>30690
>>30691
Yep, replaceable by LLMs, just add a bit of RAG and a few more years

>>30686
It's the prototypical emergent property.

In theory, the training set couldn't have contained every single multiplication or addition out there, so it can't just be memorizing answers. Some simple combinations would make sense, but more complex ones require actual arithmetic. It can do those, though, with some accuracy, but that accuracy falls off with larger numbers. So why?

Could it be that perhaps there is some kind of very high-dimensional structure that is acting like an arithmetic unit or something? And why does an LLM get more accurate if you train it to use more tokens on math problems?

And actually, what Anthropic found is that it really does develop some interesting structures when it attempts to solve math. But these do fundamentally accord with the nature of the transformer. It does parallel computing, where it combines guesses from different parts of the numbers. So it estimates ending digits, separates estimated sums from middle digits, and combines these to estimate the overall number. This is all stateless, so this separation happens inside a single forward pass. For larger numbers, these parallel pathways have to increase in number or decrease in accuracy or both.

So what it learned to do to predict numbers is to basically break the computation up into smaller numbers it can predict and then recombine them in a way it can predict.
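That decompose-and-recombine scheme can be caricatured in a few lines. This is an illustrative sketch of the idea, not Anthropic's actual circuit analysis; note that when the carry guess is made explicit and correct, as here, the method is exact, and the model's errors come from estimating that carry statistically instead:

```python
def sketch_llm_addition(a: int, b: int) -> int:
    """Toy caricature of the parallel pathways: one pathway
    'memorizes' the final digit, another estimates the higher
    digits plus a guessed carry, and the two are recombined."""
    # Pathway 1: last digit from a lookup-table-like heuristic
    last = (a % 10 + b % 10) % 10
    # Pathway 2: higher digits, with a guess at whether the low
    # digits overflowed (the model guesses this statistically;
    # here the guess happens to be exact)
    carry_guess = 1 if (a % 10 + b % 10) >= 10 else 0
    high = a // 10 + b // 10 + carry_guess
    # Recombine the two pathways into one number
    return high * 10 + last
```

With larger numbers the model needs more such pathways, or looser carry guesses, which is where the accuracy falls off.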

And it's true they found that it has no awareness of this. No matter how many times you query the LLM, it cannot tell you it's doing that. Which is totally expected, no surprises here. But it generalizes to the likely true conclusion that any "self-knowledge" about its "thought process" is totally bogus. Compare this to a human, who can answer the question "what happens in your head when you add these two numbers together?"

AI cultists will probably read this and then begin to deny that people can query their own thoughts, though. To them, because the AI looks outwardly like a human, a human must be like an AI. When they investigate how transformers work, they are unlocking the secrets of *their* mind. Truly spooky belief; the end point is dehumanization.

>>30693
>Compare this to a human which can answer the question "what happens in your head when you add these two numbers together?"
Go ask some people, there'll be a few who can but most won't

My contention here isn't that LLMs are equivalent to a human being, no, of course not

My contention is that the Professional Managerial types' thought doesn't operate on a level higher than an LLM

That minimum wage barista or fast food worker has more going on in their head; you can't replace that. Replacing overconfident PMCs who just confabulate nonsense, sure, an LLM can do that


>>30686
it's kinda new for anthropic to say it after 2 straight years of pretending CoT output was the machine reasoning for realsies

>>30694
>Go ask some people, there'll be a few who can but most won't
bet, this actually sounds like an interesting experiment but I can almost guarantee most people will say they're counting up. there's no other way for a human being to produce an answer, short of memorizing some addition results for quick mental calculation (which i also guarantee MOST people don't!)

>>28212
Good shit

i think the pandemic might have been part of this huge push for ai automation, it kind of made it very apparent that many workers don't really need very much supervision or management to do their jobs or even need to physically be on the company property, and i think that sent a scare into the managerial/executive class whose jobs really are useless and the company is quite capable of running itself without them

when companies were forced to let everyone work from home they ceded some control over the means of production to the workers, now they want to make sure something like that never happens again

and on top of that, we're in this degenerate era of capitalism where the private sector is dominated by huge monopolies that have no competition left and their business models require perpetual growth to remain viable so now they are looking for ways to cut costs, lay off workers, enshittify their products/services, put the squeeze on their customers, avoid any accountability, insulate themselves from pushback, etc. and ai is their solution to all of these problems

but even if ai does solve all these problems, the companies fire most of their employees and replace them with ai and it works out great and they make trillions of dollars in revenue, what will they do with that revenue? will they give the revenue to their remaining employees in the form of raises? will they give the revenue to their customers in the form of lower prices or better quality products/services? of course not - they will let it ride and invest all of that revenue into another even bigger and riskier venture and continue operating on razor thin margins, because the monster cannot stay one size, it must continue to grow or it will die

>>27559
>musician, who wants to have a musical career (a lot of communist musicians had stable careers
dumbest thing I've seen on this site today.
you simply can't have a """stable career""" in music without selling out.

>>30720
also musicians making a career out of recorded music is only a 70 year or so old phenomenon.
throughout human history musicians made their career through actually playing for people in person. they can still do that and people still pay for it and it makes them equivalent to any other worker who earns through labour instead of rent/royalties.

>>30721
>throughout human history musicians made their career through actually playing for people in person
This is the only honest career in music. Everything else relies on BS copyright law.

It's so funny how one of the most promising avenues for AI, adult content, is also one of the least advertised because the interested parties are mostly all pedophiles, but it's also one of the biggest use cases for companies lending compute to consumers. Like more than half of all compute power in civit.ai is dedicated to porn generation. Sometimes I think the biggest AI driver in Silicon Valley is not the advent of a new tech god, but the ability to generate all sorts of lurid shit.

>>30733
Sauce on the civit.ai stuff?


>>30735
Thanks. A shame that they consider hardcore pornography and gore to be the same category.

File: 1754078120951-0.png (188.29 KB, 797x1221, ClipboardImage.png)

File: 1754078120951-1.png (193.57 KB, 749x1242, ClipboardImage.png)

It seems like AI is already taking a whole lot of jobs

https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
I like "AI psychosis", now I can just dismiss anyone who supports AI shit as mentally ill.

https://www.marktechpost.com/2025/07/31/agentsociety-an-open-source-ai-framework-for-simulating-large-scale-societal-interactions-with-llm-agents/?amp
Here we go, if you ever wanted to make a game and scratch your itch, you can with generative AI

Drive responsibly: audit your code. Your players will tolerate a bit of jank for a good game, but you still have a duty of care to them. For single player games, even using a simple local model as a rubber duck while you go through the code section by section is fine; once you are dealing with network code, exercise due care.

>>30574
Tested it:
User: What species of fursona would Max Stirner have likely picked if he--for whatever reason--wanted one?
Kimi: Max Stirner, the 19th-century German philosopher known for his radical egoist philosophy outlined in The Ego and Its Own, never had a fursona—nor did furries exist in his lifetime. But if we playfully imagine him adopting one, the choice would likely reflect his core ideas: absolute individualism, rejection of external authority, and the assertion of the ego as the sole proprietor of its own existence.

Stirner’s philosophy centers on the “unique one” (der Einzige), who owns nothing but their own will and recognizes no moral, social, or metaphysical constraints beyond what serves their self-interest. Thus, his fursona would need to embody autonomy, cunning, and a kind of amoral adaptability.

Among the memes and fan creations referencing Stirner in furry contexts, two species stand out:

- Cat: Memes suggest Stirner as a cat, possibly because cats are stereotypically independent, self-interested, and indifferent to human rules—traits aligning with egoism. One meme even captions a cat with Stirner’s quote: “So long as I assert myself as holder, I am the proprietor of the fursona”.

- Illegal Primagen: On Fur Affinity, a character based on Stirner is depicted as an “illegal primagen,” a closed-species cyborg dragon-like creature. The choice here seems less about species symbolism and more about rejecting ownership norms—primagens are a “closed species,” and declaring one’s Stirner-inspired sona “illegal” flouts those rules, echoing his disdain for external restrictions.

Between these, the cat arguably fits Stirner’s philosophy more elegantly. It’s a solitary predator, ungovernable, and famously indifferent to anything but its own desires—essentially the furred embodiment of the egoist’s creed.


I like its brevity; some of these AIs feel like 10-minute-mark video essayists, so seeing one that gets to the point is refreshing.

>>30743
It still had to start with that passive-aggressive uhm actually 🤓☝️ meaningless paragraph.

>>30744
Yeah true, if someone is asking such a question they likely know that already, and would especially know who Max Stirner is, so it's odd that every AI I've asked that feels the need to do it.

Google's AI Overview ought to give you a good idea of what AI is going to be used for. Instead of reading Wikipedia on Wikipedia, people now read Wikipedia on Google through the AI Overview, while the real Wikipedia is buried under irrelevant SEO-driven search results. They don't even know that it's Wikipedia they are reading, because Google's AI scraped the content off Wikipedia and presents it as its own original content. So Google gets everything and Wikipedia gets nothing, not even a page view, not even credit for writing the article. There's no point in contributing to a non-profit volunteer project like Wikipedia anymore, because all you end up doing is working for Google, for no pay.

>>30745
It's like students trying to pad their essays with nonsense because they know they have very little substance to say but have a word limit to reach.

>>30747
Jimbo Wales is a moron for not implementing anubis on wackypedia

File: 1754189267375.jpeg (106 KB, 1179x955, 5c7ca60906b65917.jpeg)

I'm honestly surprised that AI even overused em dashes in the first place given how rare they were. They're statistical models, yet used em dashes a statistically improbable amount.
There's gotta be an em dashes georg somewhere on the web.
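If anyone wants to actually measure the rate instead of eyeballing it, a quick sketch (the sample strings below are made up for illustration):

```python
def em_dash_rate(text: str) -> float:
    """Em dashes (U+2014) per 1,000 characters of text."""
    if not text:
        return 0.0
    return text.count("\u2014") / len(text) * 1000

# Hyphens and en dashes don't count, only the real em dash
human_copy = "Most web copy - like this - uses plain hyphens."
bot_copy = "Crucially\u2014and this matters\u2014the style stands out."
```

Run it over a pile of pre-2022 text versus chatbot output and the georg should show up in the histogram.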

>>27605
>porks no longer have to admit they took personal responsibility when denying claims for thousands of insulin prescriptions or verifying intel for drone strike targets, they can just say da computer did it so it's all okay

"hey porky how did you get those profits"

"by working really really hard and basically doing everything with my giant brain"

"hey porky why are all those people dead"

"uhhhh the AI did that"

"hey porky who made the AI"

"uhh my workers did"

"hey porky why not fire the workers and get rid of the AI"

"because then I wouldn't get any profits"


A large language model developed by researchers from Beijing Language and Culture University (BLCU) was applied in rescue efforts following the devastating earthquake in Myanmar on March 28. It was recently included in "China's AI Solutions Benefiting the World (2025)", released at the 2025 World Artificial Intelligence Conference (WAIC). It is reported that this marks the first application of China's large language model technology in international rescue operations.

Upon receiving a request from the Chinese rescue team, the National Emergency Language Service Corps with its secretariat based at BLCU immediately assembled a technical team to develop the "Chinese-Myanmar-English Translation System" to provide language support for frontline rescue operations, Han Lintao, the leader of the system's R&D team and associate professor at the School of English and Advanced Translation at BLCU, told China Education News.

Han said that leveraging DeepSeek's open-source large language model, the team utilized language corpus and technical resources from relevant research projects under the Ministry of Education and the National Language Commission.

According to the report, within just seven hours, Han's team delivered the first version of the system to the Chinese rescue team in Myanmar and distributed it widely to frontline disaster relief operations via relevant departments of the Ministry of Emergency Management.

The death toll from Myanmar's devastating 7.7-magnitude earthquake has risen to 3,706, Myanmar Now reported.

More than 600 people have used the platform, according to Yuyuantantian, a social media account affiliated with China Media Group.

Han explained that traditional translation methods often struggle to produce accurate results. To address this, his team integrated large language models with specialized rescue terminology and existing databases to build a translation system. This system significantly outperforms conventional translation tools in accuracy and effectively meets the communication needs of Chinese rescue teams on the ground in Myanmar.
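The usual way to combine an LLM with a terminology database like that is to look up matched domain terms and inject them into the prompt as required translations. A minimal sketch of that pattern; the glossary entries and prompt wording here are hypothetical placeholders, not the BLCU team's actual system or data:

```python
def build_translation_prompt(source_text: str, glossary: dict[str, str]) -> str:
    """Prepend matched rescue-terminology entries so the model
    uses the approved translation for each domain term."""
    hits = {src: tgt for src, tgt in glossary.items()
            if src.lower() in source_text.lower()}
    term_lines = "\n".join(f"- {src} -> {tgt}" for src, tgt in hits.items())
    return (
        "Translate the following text into Burmese.\n"
        "Use these required term translations:\n"
        f"{term_lines or '(no matched terms)'}\n\n"
        f"Text:\n{source_text}"
    )

# Hypothetical glossary entries, placeholders rather than real Burmese
RESCUE_GLOSSARY = {
    "aftershock": "<approved aftershock term>",
    "trapped": "<approved trapped term>",
}
```

Constraining the model with a vetted glossary is what lets it beat generic translation tools on rescue jargon.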

Some online users praised the case for showcasing to the world China's technological strength, its genuine commitment to international aid, and the positive impact of language technology.

The system remained openly accessible online to serve the public's daily communication needs, according to China Education News.

Global Times

https://www.globaltimes.cn/page/202508/1340214.shtml

>>30740
AI is costing jobs in an interesting, roundabout way. It's not directly replacing individuals yet, but the hype/mania has created a culture in the c-suite and finance that sees layoffs as an objective. They perceive them as efficiency gains rather than a sign of an ailing company. The hype is leading to a full separation of the interests of shareholders from the interests of employees. And the shareholders won decades ago.

File: 1754621179262.png (118.4 KB, 1000x586, ClipboardImage.png)

there's no way this is real

File: 1754622480333.png (218.82 KB, 1125x924, ClipboardImage.png)

there's just no way lmao what's going on

>>30777

>>30776

The dick sucking around version 5 is impressive. But what's wrong with that chart? The numbers don't match the bar chart visualization.

Also the “personality” they’ve RLHF’d into it from the blueberry thing instantly made me recoil. For fucks sake.

File: 1754628282846.jpeg (603.09 KB, 1170x1981, IMG_8673.jpeg)

>>30778
>>30776
Wow Sam Altman is a real piece of shit. He’s got ChatGPT now talking about itself as if it had a brain and drawing false equivalences between LLMs and humans. In this case, a very galling one, since this letter counting problem is a well known adversarial challenge against LLMs which human brains are immune to. But the AI is inverting this and saying humans fail at it and providing nonsense reasoning with its characteristic authority.

AI psychosis.
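For contrast, the letter-counting task in that screenshot is trivial for anything that operates on characters; LLMs stumble on it because they see multi-character tokens, not letters. A two-line illustration:

```python
def count_letter(word: str, letter: str) -> int:
    # Character-level counting, the thing a tokenizer-based model
    # never directly sees: it reads chunks like "blue"/"berry",
    # not individual letters
    return word.lower().count(letter.lower())
```

So count_letter("blueberry", "b") is 2, however confidently a chatbot asserts otherwise.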

I checked the keynote and it seems that ChatGPT 5 chooses a model for you, meaning that it will default to the shittier model whenever it feels like it, and charge you a premium token rate. I've seen "tech influencers" like this dude, the retarded cousin of Linus Tech Tips, not disclose this when shilling ChatGPT 5 on their youtube channels, because it's pretty much a death knell at this point.

>>30778
>But what’s wrong with that chart?
it was made by chatgpt5 and it was shown in the keynote lmao

>>30781
A lot of AI people do feel like they are running cover for Scam Altman being a hypebeast VC retard. This time they seem a little uncomfortable with it because they're basically trying to infer why OAI made these changes to their product and marketed it the way they have. Basically they are trying to make everyone use a (probably much cheaper for OAI) model, while making it more difficult to use their more expensive "reasoning" model.

They've largely taken the position that "this is not really a breakthrough at all but I guess it'll get more people to use the reasoning model sometimes so it might impress them I guess?"

Which is crazy because it implies that there *isn't enough AI hype* or that the *valuation of OAI is too low*.

The only good thing they say about it technically is that it has improved instruction following, which is useful for agentic systems.

>>30769
I thought about it and yes, this squares with the internal Sergey Brin emails requesting that remaining Google staff put in 60+ hour work weeks. It's contradictory at face value, but what's going on is obvious: the gaps in staffing are actually being felt and are not being filled with AI, so they're asking people to put in more work to make up for it.

ok i'm sorry for doubting mr altman, truly these things are the future. deepseek is a stuck up nonce and refuses to answer these important questions

>>30784
Well after the poor showing of ChatGPT 5, Altman has thrown in the towel on AGI and declared it a useless term

Nope, it's quite possible with a lot more work; but not by burgers

>>30797
>Nope, it's quite possible with a lot more work
Not with LLMs; they've been scaled up quite substantially but still are unable to solve preschooler Sokoban puzzles

>>30749
You can download full dumps of it so it's not like AI bros are forced to scrape the site.


Unique IPs: 26
