So, I'm a musician who wants to have a musical career (a lot of communist musicians had stable careers), and meanwhile stupid porkies tell me "no, we'd prefer if you were replaced, prole, because there is no place for people like you", and I hear that not only music but other art, computer science, programming, etc. will be replaced by AI. How do we stop this, so people can still prosper in the real socialist societies?
>>29654 >>29657
Had to reprogram this completely using another LLM to get it to work.
Think the main problem is knowing your needs, and often at a fairly decent level of detail.
"The parent window of popups are being raised above the popup in the following program. (How can we raise the popups on each iteration?)"
It's possible for the LLM to elucidate your needs, but it doesn't always give a convincing answer.
There are also simple related propositions, like: don't ask for things you don't need or want.
After knowing your needs, you have to enjoy the process and discern what's useful.
A critical component of using an LLM is this discernment, and avoiding frustration.
It sometimes takes many queries before you find the desired solution, or the solution sinks in.
Still think there might be room for budgeting; it might even spur some innovations.
Overall am fairly pleased with LLMs at this point, and am hopeful they'll get better, working more efficiently even beyond scaling.
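To make the popup question above concrete: a minimal sketch, assuming the program in question was a Tkinter GUI (the thread never shows the actual code, so the window names and structure here are hypothetical). The idea is to mark each popup as transient to its parent and lift it on the iteration that creates it, so the parent can't end up stacked above it.

import tkinter as tk

root = tk.Tk()
root.title("parent")

def open_popup(i):
    # Create a popup and keep it above its parent window.
    popup = tk.Toplevel(root)
    popup.title(f"popup {i}")
    popup.transient(root)   # most window managers keep a transient window above its master
    popup.lift()            # raise it in the stacking order on this iteration
    popup.focus_force()     # optionally hand it keyboard focus too

# Open a popup once a second for three iterations.
for i in range(3):
    root.after(1000 * (i + 1), open_popup, i)

root.mainloop()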
Not sure if this belongs in /labour/, but I feel like we haven't talked enough about how AI is wrecking the job market. AIs are used to write and analyse CVs. People in my office use it all the time for a variety of purposes, from the production department using it for visuals to Customer Service using it to reply to customers. Yes, you can save some time in some ways, but you also lose a lot of quality. I've seen the texts generated by ChatGPT that they send to clients: the syntax is so bad, and when you ask it to write in a certain style, it almost becomes a parody of said style… But somehow, they're pleased with the results.
I've even heard my boss say at lunch that AI is good for SMEs because you can save on costs by asking ChatGPT for legal advice… I don't care for this fucking job, but I almost feel bad when I hear shit like this. People unironically feel like AI will be able to do everything in a few years, when it has only ever been able to do some things faster, and many of them more poorly.
>>27560
Same for artists tbqh
If you look into animation, for instance, it has been outsourced since the late 90s, with shows like Batman Beyond, Boondocks, etc. In-between frames have been done in Korea/China/Vietnam/etc. for decades for Japanese studios too. They'll never outsource key frames to AI; they're far too important.
>>27567
Porn commissions lost some demand due to it; you can decide how important that is yourself
>>27605
Yes, AI as of now is mostly used to absolve people of responsibility. AI rejected your medical claim, not me! Unsupervised learning is really hard to scrutinize, but they've basically legalized discrimination against protected groups again by using statistics that correlate with race, socioeconomic status, etc.
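A toy sketch of that proxy mechanism (my own illustration, nothing from the thread; the feature names are made up): drop the protected attribute entirely, train only on a correlated feature, and the model reproduces the biased historical decisions anyway.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                   # protected attribute, never shown to the model
zip_code = group * 2.0 + rng.normal(0, 0.5, n)  # hypothetical proxy feature correlated with group

# Historical approvals were biased against group 1.
approved = (((group == 0) & (rng.random(n) > 0.1)) |
            ((group == 1) & (rng.random(n) > 0.7))).astype(int)

# Train only on the proxy -- the protected attribute has been "removed".
model = LogisticRegression().fit(zip_code.reshape(-1, 1), approved)

# The disparity survives, laundered through the proxy.
preds = model.predict(zip_code.reshape(-1, 1))
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {preds[group == g].mean():.2f}")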
>>30139
i don't get it. if vibe coding is already so good, why are big ideas guys and thought leaders and industry disruptors and crypto baron captains of VC industry not simply telling the LLM to make them 30 billion dollars by tomorrow? want a cool business idea? ask the LLM, i'm sure it has an answer. ask it to make a tech solution to solve the problem that that new business would, ask it to generate an effective marketing campaign, ask it to list the interested parties that the thought leader can "lunch with" to promote it, etc.
why isn't sam altman just asking chatgpt o3-tinylord-AGI-pre-release to go full AGI and make openai investors trillions?
>>30177
>There's a trend on TikTok right now where people ask the AI to draw "what it's like to chat with me," and now I am actually truly spooked by the way people are using this thing.
I dunno, in some ways I feel like it's essentially the same thing as astrology, which people use in lieu of actually examining themselves
In other AI news, AI "researchers" online have been coping super hard with Apple's "The Illusion of Thinking" paper, which kinda seems to reinforce the other paper that found the "chain of thought" reasoning models print out was completely made-up gibberish and had no bearing on model output. That Apple decided to publish it just as they released iOS version god-knows-what-number is pretty revealing about where Apple is placing its bets for the next year.
>>30177
>And the thing is, they LOVE it. If you try to take this away from them they will kill themselves.
normie narcissism isn't recognized or shamed enough. it's a surefire path to success if you make any product "about them".
it's what made "social media" the cancer it currently is. the internet was uncool nerdy shit until normies were enabled to bring their high-school social politics, about who is cool and an influencer vs who is a pleb and too ugly to be on camera, fully online.
>>30181 >>30182
It's even more retarded: it's not only calling it gambling, but a gambling addiction.
>>30189
>social networks are like slot machines
no, they are not. not everything is predatory and victimizing and completely out of your hands.
people like this stuff, they willingly opt in and will call you a nerd or something when you tell them it's not a good idea, then when it predictably goes wrong, they whine about privacy and evil corps or whatever else.
this situation is not some corp-engineered thing, people actively helped the corps in bringing it about.
>>30194
>no, they are not. not everything is predatory and victimizing and completely out of your hands.
quote me on wherever the fuck you feel i implied this. i just said that refreshing xitter feeds, prompting genAI fishing for adequate results, etc., is a bit like a skinner box, that's it. further i said that habit formation relies almost completely on the UX being as frictionless as possible, because, and i hoped this implication was better understood, the enjoyment you get out of them is limited. the fuck is wrong with your debate addict brain.
>this situation is not some corp-engineered thing
<UX
<not corp-engineered
lmfao jobless buffoon.
https://arxiv.org/abs/2506.08872
A while ago, either here or in /edu/, we discussed AI in the early chatbot era: that these models would improve, that students would cheat, and that it would be time to go back to oral defenses of work.
Anybody have the "we warned you, you didn't listen, now it's too late" maymay handy?
>>30451
No, oral defenses are a retarded concept and would require students to dumb down their work into concepts that can be conveyed verbally. Same issue with oral debate: Survival of the catchy. Shackled by linearity.
If colleges have teachers who are inept enough not to be able to tell at a glance what is and isn't AI-generated, then colleges are a failed institution. Students may have to prioritize curveball topics and other tactics to prove their authenticity, and allistic students may fall behind their autistic colleagues, but that's survivable.
AI is going to take the jobs of people in AI companies:
>In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS).
>This LLM will be fully open: This openness is designed to support broad adoption and foster innovation across science, society, and industry.
>A defining feature of the model is its multilingual fluency in over 1,000 languages.
https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html
Looking forward to Musk and Altman blowing their brains out LMAO