So, I'm a musician who wants to have a musical career (a lot of communist musicians had stable careers), and meanwhile stupid porkies tell me "no, we'd prefer if you were replaced, prole, because there if no place for people like you", and I hear that not only music but other art, computer science, programming etc. will be replaced by AI. How do we stop this, so people are still prosperous in the real socialist societies?
>>28781
The people who say AI will replace programmers are the guys in product departments who get really impressed by UX demos but don’t really understand what the backend engineering team is working on.
This has been a consistent issue at every company I’ve worked at. The backend will have horrible scaling issues that need to be addressed, but product is constantly roadmapping new “features” that they want prototyped as quickly as possible, so the app just gets shittier as time goes on and the customer is paying millions of dollars for what is effectively a prototype.
What LLMs enable is a future where these mfs can do this at an ever-increasing rate without the help of an engineering department, so if you’re a backend engineer you can look forward to a future of employment where you are brought in to refactor increasingly incomprehensible autogenerated codebases.
AI is good for boilerplate and implementing stuff when you already roughly know what you want and where you want it. These guys think it’s somehow going to start making overarching architectural decisions, like that’s so easy.
>>28781
>people who really really give a shit should be unionizing and raising living standards for all proles
Yes, high-wage professionals should embrace collective bargaining.
>coping with the idea that the table will flip on the next boom/bust cycle and demand will spike again
Most likely it will. People saying AI is gonna replace IT workers in its current or reasonably extrapolated forms are just fueled by resentment. If a job consists purely of copying and pasting from stack overflow (like a significant chunk of the junior positions that have been cut) then yes, it's fucked, but IT requires a lot more critical thinking than that.
>>28783
>AI is good for boilerplate and implementing stuff that you already roughly know what and where you want it
>These guys think it’s somehow going to start making overarching architectural decisions like that’s so easy
This anon actually codes with LLMs.
>>28784
>If a job consists purely of copying and pasting from stack overflow
This is pretty much what LLMs do. People used to copy and paste code as-is from stack overflow; now they do it from a chatgpt.com response.
Also there are shit-tons of overly complex shitpiles like AWS/kubernetes/random niche libraries/etc. that are so poorly documented there are no decent docs to feed to an LLM for it to produce relevant copy-pastable output. The best it can do is generate a sample configuration or code snippet which still needs to be modified to fit into a codebase, which is basically exactly what you get from stack overflow.
This may qualify as """"""AGI"""""" for PHBs, but I don't have to agree with that delusion.
>>27559
>"no, we'd prefer if you were replaced, prole, because there if no place for people like you"
"Blacks Rule" quality writing here
You're not a prole: you're selling the product of your labor, not your labor itself. You're an aspiring petit bourgeois.
>>28769
Lmao what a fucking moron
>>28776
They changed the definition of AGI some time ago to claim to have reached it. They chopped their own leg off.
>>28776
>>28790
Right. The definition wasn't changed though, it was just never exactly what you would assume.
The definition they chose was that AI was more economically productive than a human. That's it. Not complicated. But understand that to them, this is the sum total of the human being. Humans are simply economic agents, and the human mind is simply an optimization algorithm. It's only natural they see optimization algorithms as humans.
>>28866
No, I don't think so. I think it's very clear, with much of the reaction to OpenAI's new product rollout, that AI evangelists really, truly hate artists and genuinely don't understand human beings. The vulgar dehumanization is one of the most insipid bastardizations of Marx, who himself was very passionate about the importance of artistic immersion and expression in the cultivation of the human being, and whose belief in these things undergirded his critique and analysis of political economy.
I think it's poignant and sad because Miyazaki, a truly generational artistic mind and a peerless genius, could explain that art and the human soul and expression are deeply and profoundly tied together. Almost all people who express themselves through creative will would understand this intuitively. It seems that only certain people don't understand it: mostly those who don't create anything, who talk about "democratizing art" when art has in fact been democratized for centuries. They simply do not understand the process.
>>28865
You are going to be ruled, instead, by a misanthropic group of people (who possess the same nihilistic and misanthropic outlook embraced by many of the 4chan rejects who ended up posting on forums and/or populating SV venture capital firms) who genuinely see "people" like you as useful only insofar as they are stupid subjects to be dominated.
You do not have a future. Why should you? What is a single thing about you that is different in any way from an optimization algorithm that predicts the next word in a sentence? Name one thing, just one, that sets you apart.
>>28869
we are already ruled by the bourgeoisie, you dumb liberal fuck
>>28868
>pretending to be a marxist and bringing up morals and humanism
again, fucking retard
>>28865
>Just accept it - you need to adapt to changing technologies or be crushed by them.
I think part of the anxiety is that VCs keep blurring the horizon by overselling what these things will be able to do, and by fundamentally misunderstanding what most jobs consist of. So, for programming, they're measuring how good these things are by one-shotting flight simulators or whatever, but nobody seems to grasp that these projects are easy to conceptualize and have no strict requirements for the end result, whereas most jobs have strict requirements with lots of caveats. Conceptualizing, I dunno, a payroll system requires knowledge of a lot of quirks that are specific to each business, and you need to specify all of them to your LLM, at which point it's irrelevant whether you're writing the code yourself or asking the LLM to do it. Most jobs are like this, but these details are abstracted away by the time it reaches porky's eyes. Thus they get confused about what their own employees actually do, and then the LLM can't fit correctly into the process pipeline.
This is part of the reason why they're shoving LLMs into every interface they can get their hands on: they're trying to discover use cases where it can effectively replace people. And I mean, having a magic wikipedia that talks back is very cool, and I think it's awesome to ask LLMs to do a bunch of ETL shit instead of messing with the actual disgusting python and pandas code, but the valuation is all wrong. LLMs are not replacing a whole lot, except artists that were already jobless to begin with. I hope OpenAI and Anthropic go bankrupt so we can wait for the actual innovators, the Chinese, to deliver LLMs that are increasingly efficient to run, so that we eventually can run the big big models on our local computers and ultra-commoditize this technology. Until that happens all of this is shit.
https://threadreaderapp.com/thread/1904933435803877882.html
>New forensic findings have just been released in the death of Suchir Balaji — a whistleblower against OpenAI.
>Police ruled it a suicide.
>But the evidence just uncovered tells a very different story: drugging, a possible second bullet, and a botched autopsy. 🧵1/
>2/ On November 26, 2024, San Francisco PD informed Suchir Balaji’s family he had died by suicide.
>According to the family's attorney, an autopsy was completed just "40 minutes" after arriving at the scene — no interviews, no toxicology report, no ballistic analysis.
>Why? 🤔
>3/ This is the last known footage of Suchir Balaji before his death.
>Multiple other CCTV cameras in his apartment complex — including one covering a secondary entrance — were mysteriously disconnected around the time he died. 🤨
>4/ Body cam footage shows SFPD officers touching and examining the crime scene without gloves — a blatant breach of protocol.
>They failed to collect fingerprints, left blood-stained evidence unsecured, and one even quipped that the scene looked like a "homicide".
>5/ When a private autopsy was conducted, it revealed critical evidence the first had completely missed — or ignored.
>First, CT scans showed a second metallic fragment lodged in his skull — in other words, a possible second bullet, which is extremely uncommon in a suicide.
>6/ The toxicology report raised even more red flags.
>Suchir had a blood alcohol level of 0.178% — well above the legal limit.
>He also had extremely high levels (in excess of 50,000 ng/mL) of GHB in his system, a "date rape" drug commonly used to incapacitate victims.
>7/ Based on those reports, Suchir would have been heavily impaired — possibly unconscious — at the time of death.
>Also, the gun found at the scene had no blood, no tissue, no back spatter. His hands didn’t either. For a point-blank shot to the head, that’s virtually impossible.
>8/ The crime scene also told a different story from the official narrative.
>Rooms were disturbed. Furniture was moved. Blood spatter patterns suggested Suchir had been standing, crawling, and possibly struggling before the fatal shot.
>Does this look like a suicide to you? [VIDEO]
>9/ There was no suicide note. No history of prior attempts. No recent crisis.
>In fact, Suchir was thriving. He was on the verge of launching his own venture — and had also received numerous job offers with multi-million-dollar salaries.
>10/ We’re in an AI arms race — and at the center is Sam Altman, CEO of OpenAI.
>Suchir Balaji was a whistleblower with info that could disrupt it. Then he turned up dead.
>We need answers. We need accountability. We need justice.
>>28874
>all this luddite reaction against 'AI' (LLMs, text-to-image models, etc.) is so annoying and it's 99% based on petite-bourgeois complaints about intellectual property which I couldn't care less about, copyright shouldn't exist
If copyright didn't exist, we wouldn't have LLM hype. Why are LLMs "hallucinating"? Because they are so fuzzy. And why are they so fuzzy, why can't they properly cite? Because if they didn't obfuscate their sources and just pulled long excerpts straight out of copyrighted material whenever it looks like that's what the user wants, the "AI" companies would get into trouble.
I want LLMs to dial down their randomness (especially around sensitive words like "murder", so they don't constantly crap out false accusations about real people) and in wanting this I am basically asking them to infringe harder.
>>28886
They are fuzzy because they are statistical models, not because the companies are afraid of copyright lawsuits. The "hallucinations" will never go away, because that's how LLMs fundamentally work. There is no difference between getting things right and making shit up; one is as accidental as the other.
This is a fundamental issue with LLMs that cannot be solved.
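To make the point concrete, here's a toy sketch in plain Python (made-up probabilities for a made-up prompt, nothing vendor-specific): a next-token sampler picks continuations by weight alone, so a fluent wrong answer and the correct one come out of the exact same mechanism, and nothing in the sampling step ever consults truth.

```python
import random

# Hypothetical next-token distribution for the prompt
# "The capital of Australia is" -- the numbers are invented for illustration.
next_token_probs = {
    "Sydney": 0.55,    # fluent but wrong
    "Canberra": 0.40,  # correct
    "Melbourne": 0.05, # fluent but wrong
}

def sample_token(probs, rng):
    # Weighted sampling by probability alone: this step cannot
    # distinguish a true continuation from a false one.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {t: 0 for t in next_token_probs}
for _ in range(1000):
    counts[sample_token(next_token_probs, rng)] += 1

print(counts)  # the fluent wrong answer dominates, via the same machinery as the right one
```

If the training data (or the model's compression of it) skews toward the wrong continuation, the "hallucination" is just the highest-probability output working as designed.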
>>28894
>They are fuzzy because they are statistical models, not because the companies are afraid of copyright lawsuits.
Your argument is that the tool just has these properties. But the tool has been chosen again and again in full knowledge of those properties.
>The "hallucinations" will never go away, because that's how LLMs fundamentally work.
The point was not about solving everything, but about reducing errors around sensitive topics. In LLMs you can directly set the "temperature" (randomness) of the output.
>>28900
That's neither true of AI as a general concept nor of LLMs specifically. DeepSeek shows this.
>>28901
The knob for fiddling with output temperature does not require any new training. It's even feasible to change the temperature mid-conversation.
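For what it's worth, the knob itself is simple. A minimal sketch of temperature scaling (generic code, not any particular vendor's API): the logits are divided by the temperature before the softmax, so a low temperature sharpens the distribution toward the top token and a high one flattens it, with no retraining involved.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before softmax.
    # temperature -> 0 approaches greedy (argmax) decoding;
    # temperature > 1 flattens the distribution (more randomness).
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # raw scores for three candidate tokens

hot = softmax_with_temperature(logits, 1.0)   # ordinary sampling
cold = softmax_with_temperature(logits, 0.1)  # nearly deterministic

print(hot[0], cold[0])  # the top token's probability rises as temperature drops
```

Since it's just a parameter applied at sampling time, changing it per request (or mid-conversation) is trivial; what it can't do is fix a model whose top-ranked continuation is already wrong.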
Unique IPs: 29