[ home / rules / faq ] [ overboard / sfw / alt ] [ leftypol / edu / labor / siberia / latam / hobby / tech / games / anime / music / draw / AKM ] [ meta ] [ wiki / tv / tiktok / twitter / patreon ] [ GET / ref / marx / booru ]

/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx


File: 1734060573790.png (3.73 KB, 389x129, aisucks.png)

 

So, I'm a musician who wants to have a musical career (a lot of communist musicians had stable careers) and meanwhile stupid porkies tell me that "no, we'd prefer if you were replaced, prole, because there if no place for people like you" and I hear that not only music, but other art, computer science, programming etc. will be replaced by AI. How do we stop this, so people are still prosperous in the real socialist societies?
323 posts and 34 image replies omitted.

File: 1742265507266.png (265.39 KB, 1028x880, ClipboardImage.png)

Apparently when the new claude code agent thing was released, hackers started hunting for people who proudly announced their one-shot, no-code proof of concepts and exploited the shit out of them lol

>>28768
>LLMs are really shitty at all of that, they only excel when you give them a prompt with a specific input and a specific output.
sounds like you never tried prompts like
>list any and all creative, unusual, imaginative and leftfield solutions to problem X

All these AI people suddenly saying “AGI has been reached” has to be one of the more bearish signals I’ve seen on AGI recently. Oh AGI is here and nothing super fundamental has changed? It’s just being used as another labor saving technology? Well ok then.

Anyway this thread is just PB coping with the threat of proletarianization.

>>28771
Post some LLM generated slop you thought was profound so we can laugh at your retardation.

>>28771
>>list any and all creative, unusual, imaginative and leftfield solutions to problem X
that's a shitty prompt, retard.

>>28776
>Anyway this thread is just PB coping with the threat of proletarianization.
I think the sword of damocles is still hanging regardless of whether AGI happens or not, there's clearly an appetite for firing IT workers to the tune of 300 billion dollars, and it's going to happen one way or the other, even if it's something as puerile as training a zillion indians. people who really really give a shit should be unionizing and raising living standards for all proles instead of coping with the idea that the table will flip on the next boom/bust cycle and demand will spike again. it won't.

>>28781
The people who say AI will replace programmers are the guys in product departments who get really impressed by UX demos but don’t really understand what the backend engineering team is working on.

This has been a consistent issue at every company I’ve worked at. The backend will have horrible scaling issues that need to be addressed, but product is constantly roadmapping new “features” that they want prototyped as quickly as possible, so the app just gets shittier as time goes on and the customer is paying millions of dollars for what is effectively a prototype.

What LLMs enable is a future where these mfs can do this at an ever-increasing rate without the help of an engineering department, so if you’re a backend engineer you can look forward to a future of employment where you are brought in to refactor increasingly incomprehensible autogenerated code bases.

AI is good for boilerplate and implementing stuff that you already roughly know what and where you want it. These guys think it’s somehow going to start making overarching architectural decisions like that’s so easy

>>28781
>people who really really give a shit should be unionizing and raising living standards for all proles
Yes, high wage professionals should embrace collective bargaining.
>coping with the idea that the table will flip on the next boom/bust cycle and demand will spike again
Most likely it will. People saying AI is gonna replace IT workers in its current or reasonably extrapolated forms are just fueled by resentment. If a job consists purely of copying and pasting from stack overflow (like a significant chunk of junior positions that have been cut) then yes it's fucked, but IT requires a lot more critical thinking than that.

>>28783
>AI is good for boilerplate and implementing stuff that you already roughly know what and where you want it
>These guys think it’s somehow going to start making overarching architectural decisions like that’s so easy
This anon actually codes with LLMs.

>>28784
>If a job consists purely of copying and pasting from stack overflow
That's pretty much what LLMs do. People used to copy and paste code as-is from Stack Overflow; now they do it from a chatgpt.com response.
Also there are shit-tons of poorly documented, overly complex shitpiles like AWS/kubernetes/random niche libraries/etc. that have no docs to be fed to an LLM for it to produce relevant copy-pastable output. The best it can do is generate a sample configuration or code which still needs to be modified to fit into a codebase, basically exactly what you get from Stack Overflow.
This may qualify as """"""AGI"""""" for PHBs, but I don't have to agree with that delusion.

>>28784
> a job consists purely of copying and pasting from stack overflow
I am not convinced these actually exist. It's something people with unwarranted-self-importance syndrome say to devalue the work of others.

>>27559
>"no, we'd prefer if you were replaced, prole, because there if no place for people like you"
"Blacks Rule" quality writing here
You're not a prole; you're selling the product of your labor, not your labor. You're an aspiring petit bourg

>>28769
Lmao what a fucking moron
>>28776
They changed the definition of AGI some time ago to claim to have reached it. They chopped their own leg off.

Shitty floss infrastructure can't handle AI glory: https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/

>>28799
>My girlfriend is gonna be mighty upset if she thinks I'm into that kinda thing.
AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAH

Holy shit, I forgot how pathetic normies were.

>>28788
>I am not convinced these actually exists.
When I was a webdev we were doing localization that required copy pasting strings by hand into a two decade old visual basic project with a trivial locale check. I wanted to kill myself every day, and the only thing that kept me going into the office was stealing snacks from the breakroom to hand out to the homeless. I have never been more grateful to be fired than when I got laid off because they were offshoring to Latin America.


>>28807
That sounds like a data entry job, you were not copy-pasting code, but data.


>>28810
>>28844
Are you making these videos?

I wonder why AI evangelists don't realize how demoralizing the Ghibli filters are. Makes you wonder if you will ever have Miyazakis again. Probably not.

>>28776
>>28790
Right. The definition wasn't changed though, it was just never exactly what you would assume.

The definition they chose was that AI was more economically productive than a human. That's it. Not complicated. But understand that to them, this is the sum total of the human being. Humans are simply economic agents, and the human mind is simply an optimization algorithm. It's only natural they see optimization algorithms as humans.


Tomorrow is the IPO launch of CoreWeave, a company that provides datacenters for AI shit. They're in technical default and Microsoft slashed contracts with them just two weeks ago. Nothing ever happens sisters… will we make it?

AI is being led by a group of people who simply do not know what a human being is, probably the worst group to lead culture right now. I suspect we are going to enter an era of total poverty of the soul. Utter catastrophic dehumanization on a civilizational level. It might lead to violence, but likely it will lead to something even worse. I've already witnessed philosophies of total dehumanization overtake individuals in SV. The hatred of life itself.

>all the moral shitflinging over chatgpt's newest pic model
lol everything must be some holy culture war now

>>28862
>le humanity
>le soul
>le hate of life itself
why do you talk like a fucking retard

Just accept it - you need to adapt to changing technologies or be crushed by them. I am a medical student who recently started using AI and it is immensely useful. I use it to bounce off ideas for diagnoses and it has an almost perfect accuracy rate. Unsurprising, considering LLMs were scoring almost perfectly on US licensing exams several months ago and have only improved since then. I also use it for charting and it is a game changer. At least in medicine, I think this shit is here to stay and will improve healthcare outcomes for many.

>>28864
If someone brings those up I assume they have stocks in AI and are trying to strawman the opposition, because surely they know how that comes across.

>>28866
you are way too optimistic about the average online person, they love their platitudes and empty abstractions

>>28866
No, I don't think so. I think it's very clear, with much of the reaction to OpenAI's new product roll-out, that AI evangelists really, truly hate artists and genuinely don't understand human beings. The vulgar dehumanization is one of the most insipid bastardizations of Marx, who himself was very passionate about the importance of artistic immersion and expression in the cultivation of the human being, and whose belief in these things undergirded his critique and analysis of political economy.

I think it's poignant and sad because Miyazaki, a truly generational artistic mind and a peerless genius, could explain that art and the human soul and expression are deeply and profoundly tied together. Almost all people who express themselves through creative will would understand this intuitively. It seems that only certain people don't understand it– mostly those who don't create anything. Who talk about "democratizing art", when art has in fact been democratized for centuries. They simply do not understand the process.

>>28865
You are going to be ruled, instead, by a misanthropic group of people (who possess the same nihilistic and misanthropic outlook embraced by many of the 4chan rejects who ended up posting on forums and/or populating SV venture capital firms) who genuinely see "people" like you as useful only insofar as they are stupid subjects to be dominated.

You do not have a future. Why should you? What is a single thing about you that is different in any way from an optimization algorithm that predicts the next word in a sentence? Name one thing, just one, that sets you apart.

You are only useful, and see these AI agents as tools to help you, because you have a job that you are paid for and in turn can spend that money on consumption or rent.

But that's only because there is still a slight gap, where you are required to be "in the loop".

Once an AI agent can do that job, be paid for it, and can spend money, then you will no longer be in the loop. And how you are treated then will have you re-evaluate those "empty abstractions". Or you will kill yourself, I suppose.

>>28869
we are already ruled by the bourgeois you dumb liberal fuck

>>28868
>pretending to be a marxist and bringing up morals and humanism
again, fucking retard

File: 1743181412524.jpg (101.49 KB, 500x748, 1743079054491.jpg)

I'll be honest I don't care about 'AI' being used in songs at all, it's a technology and a tool like every other tool, it might have some cool uses too. as it stands everything that uses AI sounds like dogshit though, maybe that changes

All this luddite reaction against 'AI' (LLMs, text-to-image models, etc.) is so annoying, and it's 99% based on petite-bourgeois complaints about intellectual property, which I couldn't care less about; copyright shouldn't exist, nobody should own *anything*. Which sucks, because there's actual good criticism of these things and of how companies like OpenAI operate, but those valid criticisms are drowned by a sea of shitty and ignorant 'criticism' made by luddites who don't even know what 'AI' is, like when idiots were crying about 'AI' being used in "Now and Then" by the Beatles lol

also the morons who want copyright to be even stronger than it already is to "protect creators from AI" or that "copyright is leftist" are going to make me fucking scream

>>28874
>>28875
Instead of weaponizing climate anxiety to attack AI merely to defend property law and the petit-bourgeois, it would be nice if people cut to specific issues like Meta's and Google's ability to purchase water in violation of treaties. These are much more significant issues, and, like everything anti-AI discourse avoids talking about, this is largely an issue of class relations, of capitalism, not of the near horizons of artisans in developed nations, who have been propelled by such developments.

>>28789
>You're not a prole you're selling the product of your labor, not your labor.
Lol yeah it is that simple. Self-employed skilled workers are literally the original petty-bourgeoisie but we gotta pretend the proletariat is synonymous with whoever is poor or struggling to make ends meet for some reason. I mean most small business-owners are also poor.


>>28865
>Just accept it - you need to adapt to changing technologies or be crushed by them.
I think part of the anxiety is that VCs keep blurring the horizon by overselling what these things will be able to do, and by fundamentally misunderstanding what most jobs consist of. So, for programming, they're measuring how good these things are by one-shotting flying simulators or whatever, but nobody seems to grasp that these projects are easy to conceptualize and have no strict requirements for the end result, whereas most jobs have strict requirements with lots of caveats, and conceptualizing, i dunno, a payroll system requires knowledge of a lot of quirks that are specific to each business, and you need to specify all of them to your LLM, at which point it's irrelevant whether you're writing code yourself or asking the LLM to do it. Most jobs are like this, but these details are abstracted away by the time it reaches porky's eyes. Thus they're confused about what their own employees actually do, and then the LLM isn't able to fit correctly into the process pipeline. This is part of the reason why they're shoving LLMs into every interface they can get their hands on; they're trying to discover use cases where it can effectively replace people. And I mean, having it be a magic wikipedia that talks back is very cool, and I think it's awesome to ask LLMs to do a bunch of ETL shit instead of messing with the actual disgusting python and pandas code, but the valuation is all wrong; LLMs are not replacing a whole lot, except artists that were already jobless to begin with. I hope OpenAI and Anthropic go bankrupt so we can wait for the actual innovators, the Chinese, to deliver LLMs that are increasingly efficient to run, so that we eventually can run the big big models on our local computers and ultra-commoditize this technology. Until that happens all of this is shit.

https://threadreaderapp.com/thread/1904933435803877882.html

>New forensic findings have just been released in the death of Suchir Balaji — a whistleblower against OpenAI.


>Police ruled it a suicide.


>But the evidence just uncovered tells a very different story: drugging, a possible second bullet, and a botched autopsy. 🧵1/ Image

>2/ On November 26, 2024, San Francisco PD informed Suchir Balaji’s family he had died by suicide.

>According to the family's attorney, an autopsy was completed just "40 minutes" after arriving at the scene — no interviews, no toxicology report, no ballistic analysis.


>Why? 🤔 Image

>3/ This is the last known footage of Suchir Balaji before his death.

>Multiple other CCTV cameras in his apartment complex — including one covering a secondary entrance — were mysteriously disconnected around the time he died. 🤨 Image

>4/ Body cam footage shows SFPD officers touching and examining the crime scene without gloves — a blatant breach of protocol.

>They failed to collect fingerprints, left blood-stained evidence unsecured, and one even quipped that the scene looked like a "homicide". Image

>5/ When a private autopsy was conducted, it revealed critical evidence the first had completely missed — or ignored.

>First, CT scans showed a second metallic fragment lodged in his skull — in other words, a possible second bullet, which is extremely uncommon in a suicide. Image

>6/ The toxicology report raised even more red flags.

>Suchir had a blood alcohol level of 0.178% — well above the legal limit.


>He also had extremely high levels (in excess of 50,000 ng/mL) of GHB in his system, a "date rape" drug commonly used to incapacitate victims. Image

Image
>7/ Based on those reports, Suchir would have been heavily impaired — possibly unconscious — at the time of death.

>Also, the gun found at the scene had no blood, no tissue, no back spatter. His hands didn’t either. For a point-blank shot to the head, that’s virtually impossible. Image

>8/ The crime scene also told a different story from the official narrative.

>Rooms were disturbed. Furniture was moved. Blood spatter patterns suggested Suchir had been standing, crawling, and possibly struggling before the fatal shot.


>Does this look like a suicide to you? [VIDEO]

>9/ There was no suicide note. No history of prior attempts. No recent crisis.

>In fact, Suchir was thriving. He was on the verge of launching his own venture — and had also received numerous job offers offering multi-million dollar salaries. Image

>10/ We’re in an AI arms race — and at the center is Sam Altman, CEO of OpenAI.

>Suchir Balaji was a whistleblower with info that could disrupt it. Then he turned up dead.


>We need answers. We need accountability. We need justice.

>>28883
I lost track of my own point, but this is all to say: up until this year, when we are finally starting to see the limits of what these things can do, it was hard to see where LLMs and AI in general would fit in the labor market, so I can't blame anyone who isn't "adapting" to "new paradigms", because these new predicted paradigms are more or less selling points for the investor class, and AI is less the advent of a new technology and more of a political project to re-organize society into a post-capitalist fiefdom. Whether that concept even makes sense (that post-capitalist technofascist society is still very much capitalism) is another thing entirely

>>28874
>all this luddite reaction against 'AI' (LLMs, text-to-image models, etc.) is so annoying and it's 99% based on petite-bourgeois complaints on intellectual property which I couldn't care less about, copyright shouldn't exist
If copyright didn't exist, we wouldn't have LLM hype. Why are LLMs "hallucinating"? Because they are so fuzzy. Why are they so fuzzy, why can't they properly cite? Because if they didn't obfuscate their sources and just pulled out long excerpts from copyrighted material whenever it looks like that's what the user wants, the "AI" companies would get into trouble.

I want LLMs to dial down their randomness (especially around sensitive words like "murder", so they don't constantly crap out false accusations about real people) and in wanting this I am basically asking them to infringe harder.

>>28886
They are fuzzy because they are statistical models, not because the companies are afraid of copyright lawsuits. The "hallucinations" will never go away, because that's how LLMs fundamentally work. There is no difference between getting things right and making shit up, one is as accidental as the other.

This is a fundamental issue with LLMs that cannot be solved.
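The "no difference between getting things right and making shit up" point can be sketched with a toy model. This is an illustration only, not a real LLM; the corpus and code here are made up, but the mechanism (sample the next token from learned co-occurrence statistics, with no notion of truth anywhere) is the same in spirit:

```python
import random

# Toy illustration, not a real LLM: a bigram model over a made-up corpus.
# It only learns which word tends to follow which; nothing in the model
# represents "true" or "false", so a fluent-but-wrong continuation comes
# out of exactly the same sampling step as a correct one.
corpus = ("paris is the capital of france . "
          "berlin is the capital of germany . "
          "paris is a city in france .").split()

bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, max_words=8):
    out, word = [start], start
    for _ in range(max_words):
        word = random.choice(bigrams.get(word, ["."]))
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

# Every run is fluent; whether it happens to come out factually right
# ("berlin is the capital of germany") or wrong ("berlin is a city in
# france") is pure chance -- same model, same sampling step.
print(generate("berlin"))
```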

>>28894
>They are fuzzy because they are statistical models, not because the companies are afraid of copyright lawsuits.
Your argument is that the tool just has these properties. But the tool has been chosen again and again with full knowledge of those properties.
>The "hallucinations" will never go away, because that's how LLMs fundamentally work.
The point was not about solving everything, but about reducing errors around sensitive topics. In LLMs you can directly set the "temperature" (randomness) of the output.
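For what it's worth, the knob this anon means is just a divisor applied to the logits before the softmax. A minimal sketch (the token names and logit values below are invented for illustration):

```python
import math

# Temperature scaling: logits are divided by T before the softmax, so a
# low T concentrates probability on the highest-logit token, and T -> 0
# approaches greedy (near-deterministic) decoding.
def softmax_with_temperature(logits, T):
    scaled = [x / T for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up next-token logits for illustration.
logits = {"Paris": 5.0, "Lyon": 3.0, "Bordeaux": 1.0}
for T in (1.0, 0.2):
    probs = softmax_with_temperature(list(logits.values()), T)
    print(T, dict(zip(logits, (round(p, 4) for p in probs))))
# At T=1.0 the alternatives keep non-trivial probability; at T=0.2
# nearly all of the mass sits on the highest-logit token.
```

This is why lowering temperature makes output less random: it doesn't add knowledge, it just sharpens the existing distribution.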

>>28895
But those are not errors. You are using the bullshit generator and then acting surprised when it gives you bullshit.

>>28896
>yuo are le sUrpriSeD
No, yuo are le sUrpriSeD here that, despite me having some ideas about how to improve how LLM-based bots behave, I am not a fan. I have ideas about how to reform capitalism; that doesn't make me a fan of capitalism. I know how LLMs work and I don't have a high opinion of them as a general concept, not just of the current versions.

I'm not anti-AI like the average idiot online but clearly the energy demands to sustain its growth can't be met without nuclear energy.

>>28898
You have made it pretty clear that you have no idea what you are talking about.

>>28900
That's neither true of AI as a general concept nor of LLMs specifically. DeepSeek shows this.

>>28901
The knob for fiddling with output temperature does not require new training. Of course it is feasible to make temperature change during the conversation.

https://youtu.be/FjgdohQk0IY
Look at the figures, people, and stop panicking in the short term


Unique IPs: 29
