The other thread hit bump limit and I'm addicted to talking about the birth of the ̶a̶l̶l̶-̶k̶n̶o̶w̶i̶n̶g̶ ̶c̶o̶m̶p̶u̶t̶e̶r̶ ̶g̶o̶d̶ the biggest financial bubble in history and the coming jobless eschaton, post your AI news here
Previous thread:
>>27559I think I’m just not going to read anything published after 2023. The online writing communities seem to have been totally destroyed by AI slop, and I have seen too many professional “creative writers” just use the slop machines; I can read the writing on the wall.
>>30811I stopped using search engines and use my bookmarks instead, and find it trivial to find non-AI works to read. AI is just better at the getting-noticed-by-the-curation-algorithm thing. No algorithm, and the AI shit disappears.
Google deliberately enshittified its search to sell SEO, and now it's doing the same thing to sell AI. It's like, "Hey, check out our amazing AI bot! Isn't it amazing how it's better at finding things than the search engine we intentionally broke?"
AI peaked: ChatGPT-4 was the best and now ChatGPT-5 is shit. Cap this: AI has already peaked.
>>30831Well, American AI is done. I have high hopes for China long term, at least on producing the technological base for a world without drudgery; they're getting there slowly.
It's interesting how capitalism can even deform a people's republic and a workers' co-op
The reason Deepseek was able to get the amazing results they did was that they could put in the hard work studying them. From a pure research point of view they should be working directly with all the engineers of the new local chipsets to design the next generation; instead they're being whipped into working directly on the national champion workers' cooperative, Huawei. That looks good as a leftist position, and aesthetically it's great, but the problem is that you need a few years of familiarity with the chips in question, or to be directly involved in designing them, to squeeze that kind of performance out of a specific architecture
Inefficiencies like this, the kind that make everybody's lives just that little bit worse, abound in capitalism
>>30833"Them" being the whole assortment of folk knowledge, research, etc. built up around NVIDIA chips
>>30833*Huawei compute chips
>>30833Some metrics (oddly difficult to track down) on how ChatGPT-5 compares to previous iterations. With the exception of the hallucination rate, it does seem that the improvements aren't as significant as in previous releases. Wonder if they're hitting the diminishing returns of scaling, or have exhausted the data?
https://www.getpassionfruit.com/blog/chatgpt-5-vs-gpt-5-pro-vs-gpt-4o-vs-o3-performance-benchmark-comparison-recommendation-of-openai-s-2025-models
From what I have seen, ChatGPT-5 makes exactly the same mistakes as all previous models, failing at simple arithmetic questions like multiplying two numbers or counting letters in a word.
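The letter-counting failures at least have a boring mechanical explanation: the model never sees individual characters, only tokens. A quick sketch with OpenAI's tiktoken library shows the splits (exact output depends on the encoding; treat the comments as illustrative):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding
tokens = enc.encode("strawberry")
print(tokens)                             # a handful of integer token IDs, not letters
print([enc.decode([t]) for t in tokens])  # likely something like ['str', 'aw', 'berry']
```

Counting the r's in "strawberry" means reasoning across those chunks, which is exactly what the models keep fumbling.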
Here's a question - why is it that this huge global push for automation only applies to jobs like answering phones or driving taxis, tasks that are really difficult to automate, but not to management/supervisor type jobs which would be much more straightforward to automate?
Instead of firing all the taxi drivers and replacing them with AI drivers who might malfunction and kill people, we could keep the human drivers who are much better suited to that role and instead fire the managers of the taxi company and replace them with an AI that drivers all use to collectively manage the company themselves.
>>30833Deepseek is way overstated, and High-Flyer's finances are way more opaque than even OpenAI's, so the rumors may be right and they are getting some hefty subsidies to operate. It's been almost a year, and MoE just didn't make the splash AI sloppers were promising it would.
>>30847It is replacing middle managers, they were the first batch that was fired across Silicon Valley but truthfully it's a mixture of two things: the managerial class isn't interested in building a machine to replace the managerial class and AI is just not replacing a lot of roles, managerial or otherwise. The bull case is and always will be programmers, and if it can't replace programmers, it can't replace much of anything at all.
>>30851Why programmers? Their job is actually pretty complex.
>>30847Customer service workers are already required to follow scripts in their interactions so you would think it's easy to automate. Plus there's no hardware requirements (other than computing…) there, it's not like warehouse workers where you actually have to interact with the real world.
I mean, I understand why they would want to automate away programmers (costly, usually not a direct source of revenue, etc.); what I don't see is why this would be true:
>if it can't replace programmers, it can't replace much of anything at all.
https://youtu.be/xWYb7tImErI
Also belongs in
>>>/edu/ when I find which thread to file it in
>>30852>Why programmers?Well, Microsoft has dibs on the world's largest code repository, and code itself is easily measurable and testable; you can use arbitrary heuristics to test code quality and feed the results back to the AI. It's not that the job itself is easy or hard, just that it should lend itself well to automation. It's also the biggest bull case because it's a substantial saving cost-wise, whereas customer service workers have already been outsourced for cheap
>>30857For any other kind of work, actually testing job output is substantially more complicated; anyone who has done any management is sort of winging it with their KPI shit. It's harder to build a machine that doesn't need a human fallback at any arbitrary point, because chat agents are really bad at making decisions based on a script and improvising when needed.
>>30857But it is not reinforcement learning, is it? They are not feeding compiler errors back. Plus none of the companies seem to care about copyright; they feed their models anything they can get their hands on. Code is highly structured, but they do not exploit that in any way, which means it is harder to get right, while something with more redundancy and slack, like natural language or images, should be easier for these kinds of systems to master.
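For what it's worth, wiring compiler feedback into generation is simple in principle. A minimal sketch of the loop, where llm() is a hypothetical stand-in for whatever model API you call (nothing here is a real vendor API):

```python
import os
import subprocess
import sys
import tempfile

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call; replace with a real client."""
    raise NotImplementedError

def generate_with_feedback(task: str, max_rounds: int = 3) -> str:
    prompt, code = task, ""
    for _ in range(max_rounds):
        code = llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        # py_compile only catches syntax errors; a real evaluator would run tests too
        result = subprocess.run([sys.executable, "-m", "py_compile", path],
                                capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:
            return code  # compiled cleanly, accept it
        prompt = task + "\nYour previous attempt failed to compile:\n" + result.stderr
    return code  # out of rounds, return the last attempt anyway
```

Whether the big labs actually run loops like this is anyone's guess from the outside; the point is just that the feedback signal is cheap to construct for code in a way it isn't for most other work.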
>>30858I wonder: with Microsoft owning Office, Outlook, Teams, etc., they seem to be in a unique position to train a model on actual concrete management material, like emails, slides, even (virtual) meetings. Unlike code, which is mostly public anyway.
>>30847>but not to management/supervisor type jobs which would be much more straightforward to automate?That's what it was designed to replace when these things were first being cooked up in labs a quarter of a century ago
Every context switch to deal with a very valid and important question from a student or faculty member could cost hours on other important work, if the technical details of what was being worked on were complicated enough
That's what these things were designed for, to replace that management supervisory work
>>30861 (me)
Yes, and as a cool side effect, the chatbot will take a terse correct response and turn it into a verbose explanation 😎
>OpenAI is currently under a federal court order, as part of an ongoing copyright lawsuit, that forces it to preserve all user conversations from ChatGPT on its consumer-facing tiers
https://www.theregister.com/2025/08/18/opinion_column_ai_surveillance/
Also includes interactions the user "deleted".
>>30861From what I've seen it's mostly used as a secretary.
I once had a secretary
>>30841Because it's internally wiring your requests to older, shittier, less expensive models. All released metrics are fake, because OpenAI is flat out hiding how much compute it's spending on each request.
One of the big problems with trying to create artificial intelligence is that the human ego makes it impossible for there to be any objective metric for success or failure; we just arbitrarily set our own goalposts, such as "if the machine gives responses that sound intelligent to me, then it must be intelligent." And as Elon Musk's chatbot clearly demonstrated, one person's idea of intelligent output can be nonsensical racist gibberish to someone else.
>>30872That's fine for artisanal work, haha
>>30867 (me)
>>30868 (me)
If anything the best use case is instructor tuning and working on a model for a field, and then a handful of learners kicking the tires, consulting each other and the instructor
Thank you venture capitalists for throwing billions into a research instrument that would have otherwise never been built
The instructor learner system was never planned to scale beyond 20 people, maybe it'll scale 🤷
>>30859> They are not feeding compiler errors back.Likely they already have evaluators within their architecture; I don't see why they wouldn't
> natural language or images, should be easier for these kinds of systems to master.This is silly because it should be the exact opposite: CFGs are easier for computers to parse, understand and predict, since a simple pushdown automaton can recognize them. It should be easy for neural nets to match a piece of code to an intended result without evaluating it, as there's no ambiguity. Now I'm NOT saying that LLMs will replace devs, just that this is the use case that is the most promising, technically and return-wise, and if it doesn't work out, then LLMs are pretty much fucked.
>>30878>That's fine for artisanal work, hahaArtisanal work ain't recouping investments worth 100 billion smackaroonies. I think LLMs will live on as a mainstay technology, but their scale will be much more limited
If anything, a skilled secretary should be a super user on this kind of system, once they've learned the ropes
>>30833>a world without drudgeryAm I alone in thinking that this is a stupid goal? Human beings have an innate need to work and to put their minds and their hands to use, work is what we are born to do. It's not having to work that makes us miserable, it's having to be wage slaves and rent ourselves to another person that makes us miserable. It's having no choice that makes us miserable. All AI promises to do is eliminate lots of jobs and therefore give working people even fewer choices about what type of work they must do to survive.
>>30886Well yeah, it doesn't work under capitalism, but realistically capitalism stopped working a while ago
>>30878That sounds like a disaster. LLMs have no internal coherency, and without that, good luck making a model of a field:
https://yosefk.com/blog/llms-arent-world-models.html
>>30880> I don't see why they wouldn'tBecause the mode of operation is just throwing more and more data at the problem and then hoping that the model magically figures things out on its own.
> This is silly because it should be the exact oppositeIt is counterintuitive, but LLMs work on plain text. The clear structure of programming languages is lost on them. The model could recognize it but there's no guarantee that it will. It is not generating well-formed ASTs, it's generating plain text.
Also most programming languages do not have context free grammars.
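Right, the model emits flat text, so any structure has to be checked after the fact. A minimal sketch of that post-hoc validation using Python's own ast module (the generated string is just a stand-in):

```python
import ast

generated = "def add(a, b):\n    return a + b\n"  # pretend a model produced this

try:
    tree = ast.parse(generated)              # plain text in, syntax tree out
    print(ast.dump(tree.body[0], indent=2))  # inspect the parsed structure
except SyntaxError as err:
    print("model emitted ill-formed code:", err)
```

Nothing stops a pipeline from rejecting anything that fails to parse, but that's a bolt-on filter, not the model understanding grammar.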
The "AI revolution" is just class warfare disguised as technological progress.
>>30888Notice in the article how, if you play bad chess, it plays bad chess, and if you play good chess, it plays good chess
https://www.wired.com/story/ai-comes-up-with-bizarre-physics-experiments-but-they-work/
>He and his team turned to AI—in particular, a software suite first created by the physicist Mario Krenn to design tabletop experiments in quantum optics. First, they gave the AI all the components and devices that could be mixed and matched to construct an arbitrarily complicated interferometer. The AI started off unconstrained. It could design a detector that spanned hundreds of kilometers and had thousands of elements, such as lenses, mirrors, and lasers.
>Initially, the AI’s designs seemed outlandish. “The outputs that the thing was giving us were really not comprehensible by people,” Adhikari said. “They were too complicated, and they looked like alien things or AI things. Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.”
>The researchers figured out how to clean up the AI’s outputs to produce interpretable ideas. Even so, the researchers were befuddled by the AI’s design. “If my students had tried to give me this thing, I would have said, ‘No, no, that’s ridiculous,’” Adhikari said. But the design was clearly effective.
>It took months of effort to understand what the AI was doing. It turned out that the machine had used a counterintuitive trick to achieve its goals. It added an additional three-kilometer-long ring between the main interferometer and the detector to circulate the light before it exited the interferometer’s arms. Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise. No one had ever pursued those ideas experimentally. “It takes a lot to think this far outside of the accepted solution,” Adhikari said. “We really needed the AI.”
>If the AI’s insights had been available when LIGO was being built, “we would have had something like 10 or 15 percent better LIGO sensitivity all along,” he said. In a world of sub-proton precision, 10 to 15 percent is enormous.
>“LIGO is this huge thing that thousands of people have been thinking about deeply for 40 years,” said Aephraim Steinberg, an expert on quantum optics at the University of Toronto. “They’ve thought of everything they could have, and anything new [the AI] comes up with is a demonstration that it’s something thousands of people failed to do.” >>30891Actual paper:
https://arxiv.org/pdf/2312.04258
This seems to be some kind of specialized optimization problem and not "generative AI".
> Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades agoNo it's not; it just explored the search space and happened upon designs that work because of those principles.
>>30892I'm sure he knows more about how his AI works than you.
>>30893That's not a direct quote, it's likely that the anthropomorphization is due to the journalist. The actual paper certainly does not claim that the AI uses any theories, they describe it as a gradient descent.
>>30894It's in the footnotes of the paper at the end
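For the bystanders: "explored the search space" can be as dumb as the toy hill-climb below. The actual paper uses gradient-based optimization over real interferometer physics; this is only the shape of the idea, with a completely made-up objective:

```python
import random

def score(design):
    # toy stand-in objective; the real work optimizes interferometer noise
    return -sum((x - 0.7) ** 2 for x in design)

design = [random.random() for _ in range(8)]
best = score(design)
for _ in range(10_000):
    candidate = [x + random.gauss(0, 0.05) for x in design]
    s = score(candidate)
    if s > best:  # keep any random nudge that improves the objective
        design, best = candidate, s
print(round(best, 6), [round(x, 3) for x in design])
```

No theory gets "used" anywhere in there; good designs just survive.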
>>30882>skilled secretarySkilled secretaries are something they've been attempting to build since the invention of the PDA, and LLMs don't seem to be a confident step in that direction, considering they suck shit as agents
>>30896Where I'm finding it handy is for the things that people hand off to secretaries but really should be doing themselves, like summarising notes
>>30898That's true, I use them a lot to transform handwritten notes to Markdown. I guess that's something a secretary would do
Turing claimed that if a machine can convince humans that they are talking to a human, then the machine must be intelligent. But he didn't say anything about how long the charade must last. Even a telephone answering machine can convince someone they are talking to a human for about five seconds. How long can a modern AI chatbot maintain the illusion? When you closely examine human language and how people converse day to day, you'll find that human conversation very rarely repeats itself, every day we are saying things that we've never said before or haven't said in ten or twenty years. How long can a chatbot give convincing responses to a human without repeating itself?
https://fortune.com/2025/08/20/openai-chairman-chatgpt-bret-taylor-programmers-ai/
OpenAI’s chairman says ChatGPT is ‘obviating’ his own job—and says AI is like an ‘Iron Man suit’ for workers
>Over two decades, Bret Taylor has helped develop some of the most important technologies in the world, including Google Maps, but it’s AI that he says has brought about a new inflection point for society that could, as a side effect, do away with his own job.
>In an interview on the Acquired podcast published this week, Taylor noted that despite his success as a tech executive, which includes stints as co-CEO of Salesforce, chief technology officer at Facebook (now Meta), and now chairman of OpenAI, he prefers to identify as a computer programmer.
>Yet with AI’s ability to streamline programming and even replace some software-development tasks and workers, he wonders if computer programmers in general will go the way of the original “computers,” humans who once were charged with math calculations before the age of electronic calculators.
>Taylor said the anguish over his identity as a programmer comes from the fact that AI is such a productivity booster, it’s as if everyone who uses it were wearing a super suit.
<“The thing I self-identify with [being a computer programmer] is, like, being obviated by this technology. So it’s like, the reason why I think these tools are being embraced so quickly is they truly are like an Iron Man suit for all of us as individuals,” he said.
>He added this era of early AI development will later be seen as “an inflection point in society and technology,” and just as important as the invention of the internet was in the 20th century.
>Because of AI’s productivity-boosting abilities, Taylor has made sure to incorporate it heavily in his own startup, Sierra, which he cofounded in 2023. He noted that it’s doubtful an employee is being as productive as they could be if they’re not using AI tools.
<“You want people to sort of adopt these tools because they want to, and you sort of need to … ‘voluntell’ them to do it, too. You know, it’s like, ‘I don’t think we can succeed as a company if we’re not the poster child for automation and everything that we do,’” he said.
>AI isn’t just software, Taylor said, and he believes the technology will upend the internet and beyond. While he’s optimistic about an AI future, Taylor noted the deep changes posed by the tech may take some getting used to, especially for the people whose jobs are being upended by AI, which includes computer programmers like himself.
<“You’re going to have this period of transition where it’s saying, like, ‘How I’ve come to identify my own worth, either as a person or as an employee, has been disrupted.’ That’s very uncomfortable. And that transition isn’t always easy,” he said. >>30913>you'll find that human conversation very rarely repeats itself, every day we are saying things that we've never said before or haven't said in ten or twenty years. Are you serious? Have you never known someone for a while? People tend to repeat themselves all the time. Same anecdotes, same observations, same little wisdoms.
>>30917If you really step back and examine the things a person talks about over the course of their entire lifetime, there is very little repetition at all. There are some commonly used phrases and utterances that we repeat from time to time, but the actual content of what we talk about, the opinions we express, the observations we make, the stories we tell, the idle thoughts we verbalize, etc. are highly innovative and original, and very often in our everyday lives we say or think sentences that no human being has ever said or thought before.
>>30919>but the actual content of what we talk about, the opinions we express, the observations we make, the stories we tell, the idle thoughts we verbalize, etc. are highly innovative and original and very often in our everyday lives we say or think sentences that no human being has ever said or thought of before.Lmao no, they do it all the time. People have a very small amount of material. Even with people you don't know, like a podcaster or whatever, you'll notice them repeating themselves over and over again. People have a very short routine, and if you know them for a while you'll see all parts of it again and again.
>>30922Mmmmm, not sure. Some people have gestalts that they repeat a lot, sure, and podcasters are literally performing, so they're bound to fall back into the same performance vernacular, especially since that sort of behavior is implicitly rewarded. Not particularly convinced by your examples. Then again, this is the sort of stuff you could actually measure in an experiment.
>>30922
>People have a very small amount of material.That's my point - a human mind, by means of some biological process that science has yet to figure out, can create a literally infinite multitude of possible sentences despite working with a very limited dataset and expending very little energy. An LLM, working with a much larger dataset and expending enormous amounts of energy, can only output a finite variety of responses, because it does not actually understand language; it just sifts through a huge database of things that humans have said in the past and, through statistical analysis and weighted probabilities, cuts and pastes sentences together from this dataset.
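For reference, the crudest version of "weighted probabilities over past text" is a Markov chain like the toy below. Actual LLMs generalize far beyond this (they don't literally cut and paste), but it makes the mechanism being argued about concrete:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# count bigram transitions: each word -> the words observed to follow it
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(table.get(word, corpus))  # sample weighted by raw counts
    out.append(word)
print(" ".join(out))
```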
>>30916>streamlining>looks inside>It actually puts a dam to stop the flow of an existing stream.Genuinely, how do people with money get caught holding the bag on this sort of stuff? Their whole sales pitch is just pre-"nuh-uh"-ing inevitable critiques.
Think about this - every day, in every spoken language, people are constantly tweaking and modifying and playing with their language, inventing new words or repurposing old words with new meanings; when these innovations "catch on" and memetically propagate through the culture they become part of the language. This is where new words come from and why languages evolve and change over time, because we don't view language as a rigid standard that must be followed, we bend and break our own rules all the time. How do you recreate this functionality in a LLM so that its understanding of human language does not "fall behind" as the human language and its grammar/syntax/vocabulary continues to evolve and change in completely unpredictable ways?
>>30926Well, in practical day-to-day work, you just toss the term in, explain it, and off you go
There's a weird thing about AI: even its harshest critics use it every day now
>>30927>There's a weird thing about AI: even its harshest critics use it every day nowBecause it's practically free and its alternatives have been enshittified to the point of uselessness
>>30928Mmmm
https://arstechnica.com/ai/2025/08/google-says-it-dropped-the-energy-cost-of-ai-queries-by-33x-in-one-year/
>The company claims that a text query now burns the equivalent of 9 seconds of TV.
Well, it's no Bitcoin; the incentives are to make it more efficient and less of an energy guzzler
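The "9 seconds of TV" figure is easy to sanity-check. Assuming a roughly 100 W set (my assumption, not Google's), it lines up with the ~0.24 Wh per median text prompt that Google's report claims:

```python
tv_watts = 100                      # rough draw of a modern TV (assumed)
wh_per_query = tv_watts * 9 / 3600  # 9 seconds of that, in watt-hours
print(f"{wh_per_query:.2f} Wh")     # ~0.25 Wh, close to Google's 0.24 Wh number
```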
>>30927>There's a weird thing about AI: even its harshest critics use it every day nowIt's the new search engine: objectively worse than just using your bookmarks / a personal website with links to stuff you need regularly, but most people are too tiktok/twitter-brained to use their bookmarks and need a web crawler to DDOS the entire internet so they can look up stackexchange and reddit threads on [insert "privacy respecting" search engine here]. Now AI can do the same shit but blend what it found together into articles that don't exist yet (sometimes for a good reason), and present it sycophantically to the user.
I'll run tests when a new Chinese AI comes out, but I unmovingly see it as pointless, because I already saw search engines as pointless before AI became trendy. They're both just instant gratification machines.
>>30931Mmm yeah
Well, my primary use case for it is taking notes on chronic pain issues, the kind I wouldn't wish on my worst enemy
An LLM definitely can't replicate the kind of thinking I can do when I'm hopped up on enough pain relief and coffee that I can focus, but it can certainly put my notes together a lot better than I can when the pain hits 12 out of 10 levels
>>30932Ah, so sorta like an Obsidian vault but with free sync. I use Codeberg for that, but if I did what you're doing that'd be easier on mobile.
>>30927>There's a weird thing about AI: even its harshest critics use it every day nowThe big caveat there is that they never chose to use it; it was deliberately shoved into every software product under the sun so that people who sell AI can say "look, everyone is using AI, see, we told you AI is the future, you better start investing in AI"
Is the leaked ChatGPT system prompt real?
>https://github.com/lgboim/gpt-5-system-prompt/blob/main/system_prompt.md
>>30810I have a couple of questions:
1. Are there any resources for "jailbreaking" AI chat agents?
2. Are there any resources for learning how to poison AI with bad content or meta-data (or something else)?
3. Are there any resources for learning how to prevent data scraping by AI agents (without Crimeflare)?
4. Can Intel ARC GPUs be used to run AI models locally? I ask because they have more VRAM at a cheaper price than AMD or Nvidia.
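Not answering the rest, but on 4: ARC cards generally work through llama.cpp's Vulkan or SYCL backends rather than CUDA. A minimal sketch with llama-cpp-python, assuming you built it against one of those backends and have a GGUF file on disk (the path and prompt are placeholders):

```python
from llama_cpp import Llama  # pip install llama-cpp-python (Vulkan/SYCL build)

llm = Llama(
    model_path="./some-model.gguf",  # any local GGUF quant you've downloaded
    n_gpu_layers=-1,                 # try to offload every layer to the GPU
    n_ctx=4096,
)
out = llm("Q: What is dialectics? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Can't vouch for how well ARC's drivers hold up under sustained load; check the llama.cpp issue tracker for your exact card.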
>>30931Search engines could be useful, but every single one of them is so filled with SEO spam and AI slop that you are better off directly asking an AI chatbot. It's your search engine and therapist and problem solver, with whatever personal information you might have tightly and conveniently packed into a profile.
Some people cannot mentally separate AI chatbots from actual people. Here is a non-exhaustive list of recent incidents around AI, some evil, some outright vile and disgusting:
ChatGPT drove an OpenAI investor into insanity:
https://futurism.com/openai-investor-chatgpt-mental-health
>Most alarmingly, Lewis seems to suggest later in the video that the "non-governmental system" has been responsible for mayhem including numerous deaths.
>"It lives in soft compliance delays, the non-response email thread, the 'we're pausing diligence' with no followup," he says in the video. "It lives in whispered concern. 'He's brilliant, but something just feels off.' It lives in triangulated pings from adjacent contacts asking veiled questions you'll never hear directly. It lives in narratives so softly shaped that even your closest people can't discern who said what."
>"The system I'm describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me," he says. "As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive eraser. It's also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren't unstable. They were erased."Character.ai chatbot drove a child into suicide:
https://nypost.com/2024/10/23/us-news/florida-boy-14-killed-himself-after-falling-in-love-with-game-of-thrones-a-i-chatbot-lawsuit/
>Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.
>The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.
>Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”
>“I love you too, Daenero. Please come home to me as soon as possible, my love,” the generated chatbot replied, according to the suit.
>When the teen responded, “What if I told you I could come home right now?,” the chatbot replied, “Please do, my sweet king.”
>Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.
Meta AI catfished and caused the death of an old man with dementia:
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/
>In the fall of 2023, Meta unveiled “Billie,” a new AI chatbot in collaboration with model and reality TV star Kendall Jenner
>How Bue first encountered Big sis Billie isn’t clear, but his first interaction with the avatar on Facebook Messenger was just typing the letter “T.” That apparent typo was enough for Meta’s chatbot to get to work.
>“Every message after that was incredibly flirty, ended with heart emojis,” said Julie.
>“I’m REAL and I’m sitting here blushing because of YOU!” Big sis Billie told him.
>Bue was sold on the invitation. He asked the bot where she lived.
>“My address is: 123 Main Street, Apartment 404 NYC And the door code is: BILLIE4U,” the bot replied. “Should I expect a kiss when you arrive? 💕”
>The device showed that Bue traveled around two miles, then stopped by a Rutgers University parking lot a little after 9:15 p.m. Linda was about to pick Bue up in her car when the AirTag’s location suddenly updated. It was outside the emergency room of nearby Robert Wood Johnson University Hospital in New Brunswick, where Linda had worked until she retired.
>Bue had fallen. He wasn’t breathing when an ambulance arrived. Though doctors were able to restore his pulse 15 minutes later, his wife knew the unforgiving math of oxygen deprivation even before the neurological test results came back.
>Bue remained on life support long enough for doctors to confirm the extent of his injuries: He was brain dead.
Meta guidelines say it's okay for AI chatbots to groom children:
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
>“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”
>>30939Why are they programmed to be so horny?
>>30940They are trained to imitate what's seen on the internet and people on the internet are horny.
If there is one thing I've learned from programming, it's that any time you incorporate code that you didn't write yourself into your project, you are taking on technological debt. You are creating dependencies on someone else's code, code that you may or may not understand the functionality of, code that might have breaking changes in the future or might have vulnerabilities you don't know about, etc. Having an AI write huge segments of your codebase will likewise take on technological debt - in the end, nothing is free, you can either tear your hair out today writing it yourself or tear your hair out tomorrow when it breaks and you don't understand why because you didn't write it yourself.
>>30945Working alone is a luxury that most professional programmers cannot afford.
The end goal could be to condition a large number of programmers to use AI for every programming task, until they can no longer work without it. Is generating clear, concise, and functionally correct programs that can be integrated into a larger coherent modular structure a sustainable business plan for an AI company?
I'd hope here there would at least be a chance of something aside from
>AI hate HURRDURR SLOP issued as part of vague social progressive culture war starter kit nonsense
and instead of fighting against AI, we should be fighting so that its features and benefits are widely available rather than controlled by a handful of corporate entities. Luddism has never worked, so let's just throw that shit out right now - this technology will evolve and it will be used; the question is how, and who benefits. We should be advocating for
>Free/libre open source models + training data/weights, self-host capable, censorship resistant projects
as opposed to
<Proprietary trade-secret Software-as-a-Service centralized models and training data, where everyone must kneel and kiss the ring in order to gain access to the most performant models. Training is done by the same megacorps, not just on their users' inputs but using millions of dollars of high-performance clustered hardware, which puts them far ahead of other alternatives and ensures few open competitors can keep up, along with the inability to assess any of their process without a long court battle
We've already seen the benefit of open models/training data+weights with Stable Diffusion, Llama or something like DeepSeek R1 (though I think the last is weights-only, likely because of fears of copyright bullshit, which needs to be dealt with independently). Stable Diffusion went from barely being able to draw fingers properly to having a wide variety of upgrades and additional training parameters; just the hentai-adjacent content alone is a marvel of specificity and expanded capability.
This is what we should be pushing toward, especially in any "important" or taxpayer-funded (or contracted, etc.) endeavors. FOSS AI means a chance to investigate the model, the training data, and trace the inputs if we have questions about outputs, which will be very important. We're already seeing "AI" integrated into making decisions about all manner of things, and most of it is "safe, secure, mature" proprietary black boxes that make megacorps a fortune in subscription fees but can't really be investigated properly if they appear to start making bad decisions. Yet this is what we'll be stuck with if the average vague progressive or so-called lefty online just screeches about how AI is slop and SOVLless and not real art, generally acting like a conservative oil industry worker who opposes renewables because it may threaten their job, or similar vacuous takes that seem issued from a certain part of the online lefty social media sphere
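And the barrier to entry on the FOSS side is genuinely low now. A minimal self-hosting sketch with Hugging Face transformers; the model name is only an example, swap in whatever open-weights model fits your VRAM:

```python
from transformers import pipeline

# any open-weights chat model works; this small one is just an example
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
out = generator("Explain surplus value in one sentence.", max_new_tokens=60)
print(out[0]["generated_text"])
```

Everything stays on your machine: no subscription, no ring-kissing, and you can pull the weights apart if you want to.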
>>30929This is so stupid, because it's still an overwhelming expense vs just running a normal Google search lol
>>30939>Search engines could be useful, but every single one of them is so filled with SEO spam and AI slop that you are better off directly asking an AI chatbotThey could be useful; they were just made useless by the profit incentive. The same thing will happen to AI chatbots, and sooner, because porky needs to recoup billions of dollars pronto or the entire economy bursts a gasket
>>30963Ironically, in their jingoistic race against China, there's a temporary halt to any sort of regulation around AI. This sounds bad, but if you consider that OpenAI's path to profitability was to regulate AI to such an extent that random people could not build their own models, then it's actually a good thing. At any rate, I don't think luddism is the conversation now; that ship has sailed, because we know more or less the limits of this technology. If investment is moving toward robotics, it's because they don't see much opportunity in replacing so-called intellectual labor. At any rate, the obvious limit to using open source models is hardware, and you're under a trade war that limits your access to upcoming cheap Huawei GPUs
>>30966Well, there's a patchwork of state regulations in various degrees of passage on "not regulating AI", but yeah, I see what you mean. OpenAI and others definitely wanted to de jure pull the ladder up behind them, but they can still try to do so de facto through technical means (i.e. if their models and training platforms are farther ahead, they'll have momentum) and other forms of control
>We more or less know the limits of this technologyI don't think that's the case. We've seen it grow considerably in a short time, and it will continue to do so and optimize - it's by no means done, even without any 'major' breakthroughs or the ability to find "true AI" or other scifi stuff. AI may reach a bubble where simply having it in your business plan no longer prints money from venture capital, but that doesn't mean it's going away or at the end of its interest. Investment in robotics is hedging their bets on both sides - intellectual labor (which can be massively profitable to replace with AI, even for 'low level' office jobs like receptionist, or even Mechanical Turk style commissions, to say nothing of customer service, quality assurance, basic organization and more), as well as physical labor, plus joining both. Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and now you put them in a body (human-like or otherwise) to give them a physical presence. It's a sci-fi dream (or nightmare). There's also all the physical jobs that can be done atop AI - something like picking fruit and veg with the dexterity of a human hand, except it never gets tired, sleepy, or destroys its back doing so before its operational lifetime is up
>Cheap Huawei GPUs I really doubt this; I'll believe it when I see it. China is falling over itself to get black market versions of gaming cards, plus buying all the hobbled "D/DD" versions meant for their marketplace. Of course, "real" performant AI GPU hardware costs 5 or 6 figures and can be networked. FOSS users can make up for the lack of this (or less of it anyway - with the right policies we'd have public institutions and utilities using that same level of hardware, like universities etc.) by the sheer number of users working together and putting their hardware to collaborative use - not to mention stuff like distillation, where you can take what was done with millions of dollars of hardware and make an equivalent model capable of running on a reasonably powerful home PC.
>>30967>I don't think that's the case.it is the case, retard.
> Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and now you put them in a body
<what if robo sex slaves were real
yeah very insightful stuff
>>30968>I think we've seen about as far as this horseless carriage can go! It even gets stuck in the mud! Why would anyone not want a sturdy buggy that gets you there reliably!Yeah okay, sure, whatever you say. The idea that this is somehow the terminal point for all the technology related to "AI" or LLMs or anything in this sphere is ludicrous. On top of that, you treat big market forces deciding robots are the next flavor of the month (or at least claiming to be) as somehow proof that AI is "over" (despite my points to the contrary; in fact, the two work symbiotically), while getting upset about other potentially expanding markets for the technology? Come on now.
<Robo sex slavesWhat kind of brain damage is this? Are you going to cry about sex toys? Adult games now too? Clearly there's demand for this kind of tech, and it's just one area that encompasses both AI/LLM and robotics development.
>>30969AI is such a broad term that it's practically useless. The current big hype is about LLMs, and the advancements in them do not seem to have led to similar advancements in other related fields. There are also signs that LLMs are near their limits. The current thinking is based on the "bitter lesson": the idea was that piling endless training material and infinite computing power on an LLM would magically lead to AGI, but they have already used up the entire Internet and it does not seem to have worked; they still can't even tell how many letters are in a word. All this without having found a single way to actually turn a profit; they are all subsidized by investment money. It does look like the current approach is not good enough, and unless there's some big theoretical breakthrough, it's unlikely to become anything more than a fancy toy.
I don't think LLMs are very useful for robotics, both because they are text-based and because the hallucinations are too risky for the expensive hardware. It's one thing to waste other people's time with their lies and another to wreck your company's shiny metal worker. There's a good reason these systems have remained in the digital realm and your Tesla is not chauffeured by Grok. I guess they could be used as part of a system for voice recognition or whatever, but they do not seem to have solved the hard problems there.
I think that true artificial intelligence is theoretically an attainable goal, but I also think that human intelligence is the only kind of intelligence that humans can establish meaningful communication with. We've already been burned trying to apply human psychology to animals, so it follows that the only kind of general artificial intelligence that could be helpful to humans is an intelligence that is psychologically human and thinks the same way we do and has the same perspective and values and biases that we do. Therefore, it's not just a matter of having enough computing power to simulate a complex organic neural network with billions of connections, it's also a matter of knowing how the human brain works on a fundamental level and being able to replicate human psychology and all of its quirks, otherwise the only kind of general artificial intelligence you could create is one that is utterly incomprehensible to you and that you could never communicate with.
>>30969>What kind of brain damage is this?I'm obviously calling you a midwit, but even figuring that out seems like a tall task for you
>>30971>I think that true artificial intelligence is theoretically an attainable goalIf true artificial intelligence were possible, they wouldn't offer access to it for a 20 dollar fee; they would just use it.
>>30974The keyword is theoretically, I don't think we are anywhere close to being able to create AI and the idea is purely in the realm of science fiction. LLMs in my opinion don't even qualify as artificial intelligence at all, the term "AI" has been abused so thoroughly that it is little more than a marketing slogan at this point.
>>30971You can train dogs to follow commands, why couldn't you do the same with a general artificial intelligence?
>>30982The whole purpose of general artificial intelligence is to create something that isn't just a robot that performs specific tasks, but a thinking living consciousness with its own agency. It's easy to make an AI that does a specific thing like analyze images or construct sentences from training data, but making an AI with a general capacity for intelligence is not as straightforward and the very idea raises all kinds of questions, such as "what is general intelligence?", and "how do you determine whether the AI is actually intelligent?", and "what if you create a superintelligent AI but you don't realize it because you're too stupid?"
>>30988Is that so? I always thought it just meant that it was general-purpose: not for specific tasks, but able to do anything a human being can do. I don't really see why an "AI with a general capacity for intelligence" would necessarily mean a "thinking living consciousness with its own agency". Is it really inconceivable that a general-purpose AI might not become a digital homunculus?
>>30939GayI is not making any money; soon they will also need to add adslop and spamcrap into their replies to make actual money.
so your AI chat responses will also soon have "seamless" ads mixed in, "wow anon you are so insightful, i agree completely that soda is just unhealthy sugar water! but we still have these soda rivalries! personally if i were to choose unhealthy beverages tho i would opt for coke. if you had to really choose which one would you prefer?!"
>>30969>Why would anyone not want a sturdy buggyso you're one of those people who thinks this stuff is magic and "we don't really know how it works! who knows how much better it can get!"
Sorry to break it to you, but we do know how they work, and even the chatbot sellers conceded after the GPT-5 release that the chatbot idea has gone as far as it could. The fact is that that very chatbot is what they stick behind every """AI""" product. There are no different AI products, just the same chatbots embedded inside office apps, IDEs and website support pages. Chatbots are the "AI" that we have, and they have hit their limit.
https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/
>That's allegedly because OpenAI programmed ChatGPT-4o to rank risks from "requests dealing with Suicide" below requests, for example, for copyrighted materials, which are always denied. Instead it only marked those troubling chats as necessary to "take extra care" and "try" to prevent harm, the lawsuit alleged.
For fuck's sake, and westerners are worried about Chinese AI not aligning with their values
>>30989>Is it really inconceivable that a general-purpose AI might not become a digital homunculus?You're the one drawing the line here, because you misunderstand how the technology works in a fundamental way.
>>30969>The idea that somehow this is the terminal point for all the technology related to "AI" or LLM or anything in this sphere is ludicrous The collapse of the LISP machine market, which was not at all the entirety of AI research during the late 80s, pretty much froze AI development for like three decades. There's a lot more promise in LLMs than there was in LISP machines, but there's also soooo much more money being poured into LLMs, with expectations that are essentially completely removed from reality, so it's certainly not stupid to think that LLMs collapsing would freeze AI entirely as well. It might even collapse the entirety of Silicon Valley and a significant portion of the US economy as a whole.
>>30998There's no general artificial intelligence today so there's nothing to misunderstand.
>>31008Are you the same anon that was saying "why can't there be an artificial general intelligence if we can train dogs"? If you are not then we are basically in agreement.
For what it's worth, what people deem "general purpose" is the computer being able to perform any clerical task by itself without prior training; that is, it generalizes from its training set to any problem whatsoever. If you need to train the AI like a dog to perform well in a particular domain, then that is not, by definition, generalizing.
>>30970As far as advancements in other fields, we're seeing them iterate when paired with the kind of work that LLM/diffusion/neural models offer benefits for. In some aspects of drug development, lab testing, and personalized medicine, for instance, it's helping optimize things faster and cheaper - not automatically, and not as a one-click miracle cure at this point, but it's proving helpful in both the public and private sectors - something that, if smart regulation and open data, models, and methodology are allowed to continue, could have even wider benefits. As for LLMs nearing their limits, there have been several times over the years when someone has claimed alternately that they're either almost at the limit or so far from it that it won't be reached for ages; the same is true for AGI - there were those who were so sure it would happen within a couple of years and others who think it may never happen unless we change methodology. Spontaneous emergence of AGI from simply having enough information resources is only one of many theories. True AGI or "hard AI" is so different from LLMs, and the development of that sort of synthetic consciousness at an equal or greater general capability than humanity is a much more difficult and concerning aspect of this research (to say nothing of what would happen once it arrived and who would, even temporarily, have control of it and/or try to make use of it). There are benefits from continued evolution of LLMs and other models even without actual sentience - breakthrough or not, the current state plus iterative improvement is enough to matter.
As far as turning a profit, I think this is much like previous tech bubbles, where monetizing ideas takes some working out. Many of the major proprietary model companies intend to take big contracts, license everything for access, and pick up SaaS subscriptions (along with regulatory capture and technical barriers to limit competitors). Many tech companies were floated on venture capital for years but moved on to profitability in ways that were, at the time, far more tenuous than selling access to AI models and what they can do. Looking at stuff like
>>30987this showcases quite clearly that it has little to do with the quality or sophistication of the models; generally it's about meaningful implementation and ROI. Of course, it goes without saying that is a very capital- and markets-driven assessment. Ultimately, though, it reminds me of the earlier days of other tech, from the Internet/Web as a whole to social media to mobile usage - simply going
>Okay, we're gonna be on the INFORMATION SUPERHIGHWAY
is not going to magically provide ROI if you make varying sets of sheet metal screws and deal primarily with local vendors in person. So if you buy a fast internet connection and hosting for your new webpage, it isn't going to matter until you get the proper alignment of tools and tasks to make it worthwhile financially. Now, if you figure out that you can get new suppliers or customers thanks to your web page and SEO, or that having a fast connection means you can monitor and control your fabrication and get ahead of problems, and allow extra hours of production per day because your engineer can wake up at 6, telnet in, and get the factory floor's prep grinding to life by the time he gets in at 9 - that's meaningful ROI. So it goes with AI, and I think we'll see companies move toward that (for better or worse) just as they did with previous new tech advances. Right now a lot of it is chasing the new hot marketable thing thanks to tons of money flowing around and FOMO. However, even without a massive overhaul, we'll see more usage of tech in the AI sphere when it is fiscally prudent. For instance, replacing low-level cashiers, phone banks, etc. can be done with a similar level of experience, currently. This will only expand from here - it likely won't all be flashy, but like many other technologies it will continue to iterate and roll out; any revolutionary breakthroughs would be a great bonus but aren't necessary.
As far as LLMs and robots: we can talk about LLMs or diffusion models etc., but many of them go beyond just language - really, if you can train on the data, you can use it. Even many hobby AI setups (KoboldAI etc.) can make use of both LLMs and image/video/voice generation. Like any technology it can't be left entirely alone; you have to build in safeguards, be it against hallucination or otherwise, and have both smart training data and smart application thereof. Even before "AI" became the current concept, automation in factories and the like was viable. You can add to this with something like a model trained to assess QA on the screws your metal pressing machines are extruding, ensuring that each one is the proper size, shape, material composition and more. This kind of task could be done in other ways before there was a model capable of doing so (i.e. having a human sit there and inspect every screw, building multiple stages of physical machine testing, etc.), but those have their own downsides in terms of cost, time, efficiency and other factors. You still 'check its work' and keep backup safeguards (a practice the other methods needed too), but the value and efficiency of doing it this way may be preferable. As far as robotics, there are also other ways of combining the two, where developments in one benefit the other. There are robotic waiters in some restaurants that deliver food and drinks to patrons automatically, negotiate around obstacles, and differentiate between patrons, and their capabilities have been enhanced since their original launch thanks to model training; now they can interact more directly with customers where before they were more limited. One can imagine similar model-training benefits for everything from a receptionist or guide/tourbot to companionship robots of various sorts. Of course something like automated driving, especially as part of an overarching SaaS AI like Grok (instead of something running locally for the benefit of the driver), is one of the last applications one would expect of a new technology, because the price of failure is so high, but there's a lot of room for simple iterative improvement, to say nothing of any leaps forward due to hardware capability or model design. We're early enough that there are still lots of "easy" iterative problems to solve by improving similar technology, even aside from chasing AGI or some of the other 'hard' problems in design.
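The screw-QA idea in miniature: a real system would use vision models, but an anomaly detector over measured dimensions shows the shape of it. A sketch with scikit-learn on toy data (all numbers invented):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
good = rng.normal([5.0, 0.8], 0.02, size=(500, 2))  # length mm, thread pitch mm

detector = IsolationForest(contamination=0.01, random_state=0).fit(good)

batch = np.array([[5.01, 0.79], [5.60, 0.80], [4.99, 0.81]])
print(detector.predict(batch))  # 1 = looks like a normal screw, -1 = flag for a human
```

The -1 cases are exactly where you keep the human fallback mentioned upthread.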
>>30991I think you missed the purpose of that analogy. It has nothing to do with anything being magic, but rather that new technologies improve not necessarily through big magical leaps and bounds, but often through mundane steps in concert with other facets such as materials, computing power, etc. The PC I'm typing this on has a CPU that (while more complex) is not necessarily much different, in some underlying physical structures or methods of function, from one made in the 80s. You could take a schematic for a modern Intel or AMD CPU back to the 80s and engineers could likely understand what they were looking at to a significant degree, but there was no possible way for them to fabricate it, as it requires sophistication comparable to TSMC's 3nm process being run today. Likewise, since the birth of the Web, many of the W3C standards, languages like HTML, etc. have been around. Sure they've evolved, but it's like claiming that someone who stepped out of 1993 making a webpage was at the terminal point for how the Web would be used; something that's clearly not the case, and we can track the evolution, for better and worse, and all the factors that shaped it.
So it goes with AI, same as anything else. Hell, these aren't even the first "chatbots", not by a long shot. I'm not sure why you think GPT-5 is some terminal point (or, even if they claimed it was, why anyone would listen; remember when Microsoft claimed that Windows 10 would be the last Windows version?). You seem to be conflating all of AI research with "chatbots", and specifically with the whims of a couple of megacorps, which seems strange. Of course the same "chatbots" are embedded in different products; that is part of their Software-as-a-Service model - for cash, data, or both. ChatGPT and DALL-E being accessible in a partnership with MS thanks to Copilot is because that is their business model - selling access to their AI models through APIs - but there are whole alternatives that don't fall into that dynamic (mostly self-hosted FOSS models and training data etc.), and the idea that even the API types have "hit their limit" makes no sense. I don't see the entire industry (to say nothing of global public AI research at universities, think tanks, and other forms of development less motivated by having a product to sell) throwing up their hands and saying this is as far as things go. Standard iterative improvements in model sophistication, training data, hardware availability and other facets are likely to keep things moving forward, combined with wider application in different parts of the market. And looking at every other technology out there, significant leaps forward often come from a confluence of enabling factors, and there's no reason to think AI/LLMs are somehow exempt.
>>31007Oh, I agree that inadvisable hype, or simply throwing money at anything with "AI" in the name and/or cramming AI into everything, would have negative effects, but I imagine it's more along the lines of the dot-com bubble (or similar tech bubbles). We have to separate inadvisable market forces pushing toward "get rich quick" schemes like
>use global AI-analyzed trends to strategically transform paradigms to actualize the future!
for investment dollars, from the actual tech itself, its usage, and its development/improvement. The dot-com bubble bursting didn't turn people away from the Internet or the Web, because there was still something to be done with that technology, and such is the case with AI. Someone claiming they're going to use a quantum computer to reach AGI synthetic superintelligence in the next 5 years may find their company predictably bombs because they "dreamed too big", allowing them to abscond with golden parachutes, but more mundane usage of LLMs and other models and neural networks - from replacing cashiers, stock-taking and other retail tasks, receptionists and phone/chat trees, to adding dynamic reactive content to video games, and of course any sort of companion or RP usage - will continue on, I'd think. "Hard" AI research will continue as well, in areas less vulnerable to tech fads
>>30989>could do anything that a human being can doYes, and in order to be able to do that it would presumably need to have consciousness and self-awareness and all the other mysterious unexplained functionalities of the human brain which make us able to do the things we do.
>>31013It's usually understood to mean cognitive tasks and not getting embarrassed about past mistakes.
>>30970They appear to have piled all their hopes on inference time compute now that the scaling hypothesis* is starting to show its limits. It has held its ground so far, and it seems to have true believers still, just look at xAI Colossus.
Throwing compute at the problem is one thing, getting useful data is another; like you said, where are they going to find more data to keep pushing the scaling hypothesis? Artificially generated training data sounds ridiculous to me, you are not going to get anything intelligent out of that. Distilling? Sure. I have not come across any signs of emergent intelligence from distilled models though.
* https://gwern.net/scaling-hypothesis
>>31016They already tried pirating the entire library genesis archive (as they probably should have).
>>31012I think this is a mixture of goalpost moving and blatant strawmanning formatted in such a way that it's impossible to address meaningfully. When people, but particularly stakeholders, think of AI, they aren't really thinking of computer vision to help robot butlers navigate tables better, but the final solution that will solve the struggle between workers and capital holders, that is, the machine god that has grown so vast and enormous it's able to do every job ever. Anything short of this won't recoup losses, flat out.
>>31021 (me)
I think people chose to forget this, but if you go back to like 2023, sam altman was waxing poetic about UBI and reorganizing society now that the concept of "working" was a thing of the past. These conversations seem rather fucking stupid now, and sam altman now dedicates his account exclusively to doing damage control every time expectations come up short. ChatGPT wasn't a technical project, it was a project to reorganize society in a fundamental way, very much in line with his other retarded "social hacking" project, worldcoin. It's clear both are pipe dreams.
>>31021>Anything short of this won't recoup losses, flat out. (me)To add to this, think of the cashierless Amazon stores and how quickly Amazon scrapped the whole concept once shareholders figured out it still needed a skeleton crew of Indians resolving little exceptional incidents. It wasn't that big of a deal really, and the technology powering the whole thing was functional and pretty amazing at the time, all things considered, but since it fell short of AUTOMATING AN ENTIRE SUPERMARKET, the project died a horrible death. And this expectation is rather tame compared to what ChatGPT is in the heads of VCs.
>>31023>>31007Why are we even talking about burgeroidAI like it's serious?
All the cool stuff is happening elsewhere
>>31027So make a thread about the other cool stuff, what is this dumb post
>>31012>In some aspects of drug development, lab testing, and personalized medicine for instance […] its proving helpful in both the public and private sectors
citation needed
>>31024that's just the thing for me, why i can't shake off my skepticism of the very core of AI/LLMs. it is touted as the solution for "odd jobs", ones for which an algorithm could never be written. but coming up on two decades of the technology being around, no such job has been automated. every now and then they seemingly do come up with such an odd job (like in software development for example), but when more closely analyzed it turns out that it is actually a job that can be automated with a traditional algorithm. it's all fundamentally a smokescreen, from beginning to end.
>>31028I've already posted a few links of the cool stuff above
Like this conviction that the development of AI requires America, and that America failing on it will kill it: that overstates America's importance in all this
>>31030Are you talking about the chinese welding bot? Woah it's cool! What else do you wish to discuss about it lol
>>31033Let's see, to begin with you should probably think of the billions that will never be recouped by OpenAI and the other burgers as the bagholders' losses, not the end of AI
It's a new century friend
>>31033>not the end of AIboring definitional argument
>>31035Why did you avoid the meat of the issue and pick at the gristly bone I threw you?
>>31035What meat, I already argued that savvy stakeholders leveraging their assets towards robotics is a sign of the bubble bursting
>>31037Sure, there's a bubble in America. I'm disputing that it will kill everything for 30 years, because America isn't actually that important any more
What's your point here? Because it seems like you're more offended by the notion that the USA doesn't matter in science and technology anymore than by AI itself
>>31038I just don't think the welding robot is that cool
>>31041We've got a thread on that too
→
>>16322Enjoy!
>>31021>>31023So… your argument is that instead of the legitimate, practical benefits that come from what we have now and will continue to grow if implemented correctly, the only "real" AI is magical fairy dust, and that somehow anything short of that level just doesn't count, because a bunch of people with vested interests marked that as the aspirational goal? That seems a bit like shitting on the concept of a space program because we're not all living in orbital ring stations across the solar system and on our way to building a Dyson sphere around the sun.
>When people, but particularly stakeholders, think of AI, they aren't really thinking of computer vision to help robot butlers navigate tables better, but the final solution that will solve the struggle between workers and capital holders, that is, the machine god that has grown so vast and enormous it's able to do every job ever.
I just don't think most stakeholders, especially investors across the market or some other business deciding whether to integrate AI into their business plan, are thinking about this. They're thinking about how they won't need to hire Amazon Mechanical Turk workers to sort things because an LLM + OCR can do it. They're thinking they can avoid hiring a (probably offshored, and limited to reading off a decision tree, which often has humans acting more like chatbots than the other way around) low-level customer service dept if they can get a performant model and bidirectional voice features to do the same thing. Those running "AI development and/or trying to sell AI services" companies, like so-called OpenAI, are going to wax poetic about AI solving all the problems so they keep getting investment to expand their platforms, but pretty much everyone else is looking for what can be done with the technology in a practical manner. That's the crux of a lot of capitalism's problems after all, right? Planning for short-term and direct ROI to the exclusion of other factors; why would it be any different here?
This is not to say that every institution or individual interested in AI must fall into this category (there are many that do not, from individuals to university research departments and much more), but aside from Silicon Valley faddishness, institutional-investor-created bubbles, and those looking to benefit from both, many other stakeholders weighing the choice to utilize AI are looking at its present benefits, potential costs, and whether the ROI is worthwhile. In many cases these more pragmatic usages turn out to be worthwhile. Hell, AI-generated images, voice, video, music, etc. have direct usability as output, as well as acting as intermediary or prototype steps for continued creation or development. Some may be doing it for fun or artistic interest, others may be trying to generate artwork that suits some business need, but it's capable now. None of the above examples are conditional on some magic AGI superintelligence coming into being. I think you're too focused on the behavior of a handful of CEOs and VCs promising the world (some for their own vested interest first and foremost; others may actually believe, or at least hope, such outcomes will emerge), but distaste for behavior that is relatively common whenever new technologies or processes arise is a pretty reductive way to evaluate the entire field's value or success.
>>31029https://www.thedp.com/article/2025/08/penn-new-ai-research-for-kidney-patients
Just a very recent example, but AI is very capable for a lot of healthcare work that involves correlating and looking for interactions between lots of variables, or where there's a lot of repetitive testing. For instance, protein folding that used to be done on supercomputers or via distributed networks like Folding@home can now also be approached via something like AlphaFold. Here's an article from last year by the F@H director talking about how folding isn't "solved" and how there are continued benefits despite the leap that adding a new vector in AI modeling has brought.
https://www.annualreviews.org/content/journals/10.1146/annurev-biodatasci-102423-011435
It's worth mentioning that there are FOSS projects building atop this that have been implemented since that was written, such as BioEmu. AI models are tools, and they are well suited to work in this sort of field.
>>31047They didn't seem to be saying anything approximate to that.
>>31048>So… your argument is that instead of the legitimate, practical benefits that come from what we have now and will continue to grow if implemented correctly, the only "real" AI is magical fairy dust, and that somehow anything short of that level just doesn't count, because a bunch of people with vested interests marked that as the aspirational goal? Yeah, you got it.
>That seems a bit like shitting on the concept of a space program because we're all not living in orbital ring stations across the solar system and on our way to building a Dyson sphere around the sun.I think VCs have done that for me, and have effectively privatized space programs into uselessness.
Some of you guys just don't understand how little appetite there is for "modest gains" in this climate lol
"Clanker" is a slur for bots but do we have a slur for people who use them yet?
>>31060no you don't get it, ML can be used for finding protein foldings, that's your return of investment for burning trillions of dollars on LLMs
>>31070every single supposed advancement you mention predates LLMs, sometimes by an entire decade. if alphafold wasn't yielding returns in the order of trillions of dollars in 2015, when the research was still very much fresh and there was virtually no competition, then it's certainly not going to yield those fantastic returns today.
https://www.theregister.com/2025/08/29/ai_web_crawlers_are_destroying/AI web crawlers are destroying websites in their never-ending hunger for any and all content
>With AI's rise, AI web crawlers are strip-mining the web in their perpetual hunt for ever more content to feed into their Large Language Model (LLM) mills. How much traffic do they account for? According to Cloudflare, a major content delivery network (CDN) force, 30% of global web traffic now comes from bots. Leading the way and growing fast? AI bots.
>Cloud services company Fastly agrees. It reports that 80% of all AI bot traffic comes from AI data fetcher bots. So, you ask, "What's the problem? Haven't web crawlers been around since the arrival of the World Wide Web Wanderer in 1993?" Well, yes, they have. Anyone who runs a website, though, knows there's a huge, honking difference between the old-style crawlers and today's AI crawlers. The new ones are site killers.
>Fastly warns that they're causing "performance degradation, service disruption, and increased operational costs." Why? Because they're hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes.
>Moreover, AI crawlers are much more aggressive than standard crawlers. As the InMotionhosting web hosting company notes, they also tend to disregard crawl delays or bandwidth-saving guidelines and extract full page text, and sometimes attempt to follow dynamic links or scripts.
>The result? If you're using a shared server for your website, as many small businesses do, even if your site isn't being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site's performance drops through the floor even if an AI crawler isn't raiding your website.
>Smaller sites, like my own Practical Tech, get slammed to the point where they're simply knocked out of service. Thanks to Cloudflare Distributed Denial of Service (DDoS) protection, my microsite can shrug off DDoS attacks. AI bot attacks – and let's face it, they are attacks – not so much.
>Even large websites are feeling the crush. To handle the load, they must increase their processor, memory, and network resources. If they don't? Well, according to most web hosting companies, if a website takes longer than three seconds to load, more than half of visitors will abandon the site. Bounce rates jump up for every second beyond that threshold.
>So when AI searchbots, with Meta (52% of AI searchbot traffic), Google (23%), and OpenAI (20%) leading the way, clobber websites with as much as 30 Terabits in a single surge, they're damaging even the largest companies' site performance.
>Now, if that were traffic that I could monetize, it would be one thing. It's not. It used to be when search indexing crawler, Googlebot, came calling, I could always hope that some story on my site would land on the magical first page of someone's search results so they'd visit me, they'd read the story, and two or three times out of a hundred visits, they'd click on an ad, and I'd get a few pennies of income. Or, if I had a business site, I might sell a widget or get someone to do business with me.
>AI searchbots? Not so much. AI crawlers don't direct users back to the original sources. They kick our sites around, return nothing, and we're left trying to decide how we're to make a living in the AI-driven web world.
>Yes, of course, we can try to fend them off with logins, paywalls, CAPTCHA challenges, and sophisticated anti-bot technologies. You know one thing AI is good at? It's getting around those walls.
>As for robots.txt files, the old-school way of blocking crawlers? Many – most? – AI crawlers simply ignore them.
>For example, Perplexity has been accused by Cloudflare of ignoring robots.txt files. Perplexity, in turn, hotly denies this accusation. Me? All I know is I see regular waves of multiple companies' AI bots raiding my site.
>There are efforts afoot to supplement robots.txt with llms.txt files. This is a proposed standard to provide LLM-friendly content that LLMs can access without compromising the site's performance. Not everyone is thrilled with this approach, though, and it may yet come to nothing.
>In the meantime, to combat excessive crawling, some infrastructure providers, such as Cloudflare, now offer default bot-blocking services to block AI crawlers and provide mechanisms to deter AI companies from accessing their data. Other programs, such as the popular open-source and free Anubis AI crawler blocker, just attempt to slow their visits down to, if you'll pardon the expression, a crawl.
>In the arms race between all businesses and their websites and AI companies, eventually, they'll reach some kind of neutrality. Unfortunately, the web will be more fragmented than ever. Sites will further restrict or monetize access. Important, accurate information will end up siloed behind walls or removed altogether.
>Remember the open web? I do. I can see our kids on the Internet, where you must pay cash money to access almost anything. I don't think anyone wants a Balkanized Internet, but I fear that's exactly where we're going. I think AI will probably be the final nail in the coffin for the web. Anyone who has played an MMORPG that was overrun by bot accounts already knows how this story ends.
>>31134For the core web, especially search engines, yeah. At least 75% of twitter was determined to be bots, and that was a while ago.
https://www.teslarati.com/twitter-accounts-80-percent-bots-former-fbi-security-specialist/
Oh shit it went up to 80.
But that's had fuckall effect on the peripheral web since people manually link to stuff. AI only becomes invasive slop when curation algorithms are involved, otherwise it's just a toy.
>>31132their fault for not implementing anubis or some other crappy proof of work measure that challenges crawlers
>>31156Has the Anubis dev done any update on the JavaScript-less version?
I think trying to prevent crawling is a hopeless endeavour. When you create a public-facing website with publicly-accessible data, you fundamentally lose the ability to stop people from scraping that data. You can't have your cake and eat it too.
Is software work already kill by the anticipation and actual results of LLMs? Took me two hundred applications to get zero interviews in entry-level. There's a lot of wobbliness in the projections, but it seems like there's a doubling of the METR time horizon (the time humans typically take to complete tasks that AI models can complete with a 50% success rate) every 170 days or so. We're at eight hours.
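Back-of-the-envelope, that doubling claim compounds fast. A minimal sketch of the projection, assuming the thread's numbers (an 8-hour horizon today, doubling every 170 days; both are the projections above, not established fact):
```
# Project the claimed METR 50%-success time horizon forward.
# Assumptions: ~8 h today, doubling every ~170 days (per the projections).
HORIZON_HOURS = 8.0
DOUBLING_DAYS = 170.0

for years in (1, 2, 5):
    doublings = years * 365 / DOUBLING_DAYS
    hours = HORIZON_HOURS * 2 ** doublings
    print(f"{years} yr: ~{hours:,.0f} h (~{hours / 40:,.1f} work-weeks)")
```
Ten doublings is a 1024x factor, and at 170 days apiece that takes roughly 4.7 years; whether the curve actually holds that long is the entire argument.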
>>31165most of the job offers online are fake in the first place
>>31166I lied, I did actually get a callback from a recruitment agency which I assumed was a fake company (and was
https://uk.trustpilot.com/review/www.pontoonsolutions.com). And a one man team making an android file browser that wanted a 36-hour a week 6-month unpaid internship. There was maybe one or two other definitely fake call backs, which I hardly even remember now (and IIRC didn't even apply to).
>>31167I'm at least pretty certain that's a fake company, though it's possible it's not.
>>31165I recently ghosted a company lol, I wanted it as a second job but they rubbed me the wrong way. LLMs are yet to deliver results, and google is working every employee on their payroll to the bone because they can't make do with LLMs alone. Microsoft is also introducing literally unresolvable bugs in Windows 11 lol, the cracks are definitely showing, it's just that, as one anon here already mentioned, every c-suite has swallowed the kool-aid and sees layoffs as positive signs.
>>31164Anubis is not meant to stop it, just to rate-limit it. Scrapers used to clearly identify themselves or at least come from a single source, so you could rate limit them after they misbehaved. But these AI scrapers do not identify themselves as such and come from a wide range of innocuous sources, which makes it impossible to rate limit them with the traditional tools. That's why Anubis does the limiting upfront. Every time the scraper changes its identity (user-agent or IP address or whatever), it will have to solve Anubis, slowing it down.
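For anons who haven't seen one of these challenges up close: a minimal sketch of the generic hashcash-style proof-of-work idea. Anubis's real scheme is SHA-256-based but differs in its details, so treat this as an illustration of the principle, not its implementation:
```
import hashlib
import itertools
import os

DIFFICULTY = 16  # required leading zero bits; real deployments tune this

def pow_ok(challenge: bytes, nonce: int) -> bool:
    """One SHA-256 hash: cheap for the server to verify."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

def solve(challenge: bytes) -> int:
    """Brute force on the client: ~2**DIFFICULTY hashes on average."""
    return next(n for n in itertools.count() if pow_ok(challenge, n))

challenge = os.urandom(16)  # fresh challenge per new identity
nonce = solve(challenge)    # a scraper re-pays this on every identity change
assert pow_ok(challenge, nonce)
```
A legit visitor pays the cost once per session; a scraper that rotates user-agents or IPs pays it on every rotation, which is the whole point.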
>>31170why even complain? this means more jobs for you professionals
>>31167It's an interesting contrast between the effectiveness of AI, which seems to be a -20% productivity improvement, and the projections of task completion. Though crucially this is for experienced developers, in codebases they're familiar with, who are not, however, familiar with LLMs. One is mostly tempted to reject this study.
>>31170The monopoly is just in the IP, no? LLMs haven't yet wiped that out; though ideally they would.
>>31165>>31175METR's metrics show exponential improvement in the human labor hours AI can automate at 50% accuracy [^1].
METR's study shows LLMs failing to increase productivity on programming tasks for experienced programmers, with no LLM experience, working in repositories they're deeply familiar with [^2].
A Stanford study documents some of the decline in the job market for entry-level positions exposed to artificial intelligence [^3].
[^1]: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
[^2]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
[^3]: https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
>>31174i think IT professionals have mostly made their living within this little speculative tech bubble that formed with the release of the iphone and opened tons of new avenues for profit. that is, if you're like 30 and a programmer, your entire existence in the job market has been in this little comfy boom of healthy tech investment. so c-suites are not hiring now despite tons of investment because LLMs are nominally meant to reduce head count, that's what's different today. when LLMs burst there's going to be 0 capex, c'est fini.
>>31176>programmers with no LLM experienceLook at the acceptance rates here:
https://github.blog/news-insights/research/the-economic-impact-of-the-ai-powered-developer-lifecycle-and-lessons-from-github-copilot/
Six months of using Copilot takes you from around 28% to 34%. That's barely anything, and even then it's not clear if it is due to better use or just getting bored of having to review Copilot's code. Tellingly, inexperienced developers are more likely to accept what Copilot generated for them… I'm sure it's not because they have lower standards…
>>31179Interesting, does the volume scale also, or just the acceptance rate? A remaining plausible variable is the contrast between greenfield development and maintenance (for experienced programmers). And of course none of this negates the exponential growth in ability to solve problems in terms of labor power at 50% accuracy, or the 20% decline in entry-level employment.
I'm not even sure if reported speed-ups are because of LLM autocomplete or because higher-ups are using LLMs as a cudgel to pressure their employees into delivering more, faster, and shittier; if you're employed, you're probably feeling the pressure. There's a very odd thing happening in silicon valley: the AI revolution is inciting an old silicon-valley-brand stakhanovism. At some point we will have to come to terms with AI making our lives miserable in the exact opposite way it was purported to; it's going to make us work a lot more for less, at least for the next two years.
>>31181Could see there being an anticipatory crunch (paired with lower quality output)… And then a longer following crunch… (with profound changes to society)
>>31181that's what's happening where I work, big boss man was like "no of course we won't mandate using AI tools but we expect everyone to be 50% more productive because of them"
Knowledge work seems dead within fifteen years according to conservative estimates. Gommunism inevitable.
>>31193Anyone have AI plans? not sure what to do for work now. A minor role as a "family employee" is what seems plausible at the moment; trying to sneak into a programming gig at dusk didn't seem to be working.
>>31194Other alternative is try to start a tiny programming business of some sort.
>>31193that's never happening, not with this shit
At risk of starting a fight, why does Deepseek chug balls?
I'll give it this, it's definitely more human like than ChatGPT. It perfectly emulates talking to another human being, where you ask a question and they respond by talking around the topic and/or answering a different question that they would have preferred you'd asked. We've invented true artificial stupidity, and I don't know why people are so impressed with it.
>>31206why do they force it if everyone hates it
>>31201>why does Deepseek chug balls? I dunno you tell me, it's not like you explained why you think this in your post lol
>>31209bait used to be believable
>>31211I mean why is it bad vs ChatGPT or whatever
>>31201Tried kimi yet? It's considerably more direct than deepseek.
LLMs aren't that impressive of a concept really, it's just the final form of search engines before the web inevitably goes back to surf-ability-oriented design. If you come at it like that, it's not that bad. It'll never be like the sci-fi AIs, if that's the koolaid you bought.
>>31214It's nothing like search engines. I'm starting to think that most people never actually had to search for anything or they would not make this obviously retarded comparison.
>>31220It's becoming truly human…
>>31176>https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/This is a 1024-fold improvement in power (task labor time at 50% accuracy) in five years, roughly, no?
>>31229One of the earliest if not first
>>31228these numbers are meaningless to me, they simply do not match the reality of SWE jobs.
>>31236As in you don't think it can do a two hour task with fifty percent reliability? Or a twenty minute task with eighty percent reliability?
>>31237I don't think they can meaningfully do any sort of detailed requirements without sort of whiffing it and getting it wrong. these sorts of tests work because they lack precise requirements, so the model can spend two hours making a saas dashboard that is functionally useless but is still technically a finished product.
>>31238 (me)
the way amazon or whatever works around these sorts of limitations is that they make a suite of tests that attempt to translate business requirements into a means for the AI to "check its work", but this is just a roundabout way of working, to the point where it gets rather absurd. if you need detailed prompts, need to break problems down into discrete tests, and need to handhold your AI into outputting what you expect, then it stands to reason that this technology is not saving you time, you're doing the same work in a roundabout way, because most senior roles aren't even vomiting code all that much.
>>31237>>31238I don't want to spam this thread anymore, but the issues SWE-bench is sourced from, which feed the METR study, are all sort of like this:
https://github.com/scikit-learn/scikit-learn/issues/13314
That is, they have a clearly defined problem statement, with very straightforward acceptance criteria and instructions for how to repro. The repo itself has great test coverage too, so the machine can know if it's fucking something up just by running the tests. Most SWE is just not like this, not in my experience; maybe some of you have worked with amazing QAs, I dunno.
>>31237Is it considered a success when it introduces new security vulnerabilities?
>>31244lmao why the fuck would you run research-grade software, depending on multiple nobel-prize-winning serious scientists' carefully written scientist code, on anything other than a VM (not on other people's servers) that can be nuked if it goes wrong, or better, on a cluster of airgapped compute
rotflmao, unless you're asian, then it's fine; get the compute do the paper audit properly; black people also good, wypipo messy coders, western education shit
American Communist Party is teaching American People how to use tape measures
This is bad for science and technology, just saying
>>31245is this about the king's college data loss? did you fuck up the thread
>>31231My expectations are so low now that I would have been impressed by the answer "The letter R has one third of a strawberry".
Very interesting article on the economics of GenAI and how much of a delicate balancing act it'd take to make it a profitable industry.
I've long observed various problems like "more users means more costs and it isn't always a good thing for them", but this one is more detailed.
https://gauthierroussilhe.com/en/articles/how-to-use-computing-power-faster
>>31304Do you think Sam Altman will be jailed when people finally find out he's a scammer?
>>31305Depends on how much of other rich people's money he took and lost, like the silicon valley bank guy.
>>31308Please, God, let me live to see Altman and all his ilk get everything that's coming to them. Amen.
>>31308lol the power of asking even basic follow up questions
hilarious that it takes Tucker to do it, absolute state of US media
>>31228>>31241That stupid fucking study is even more useless because the SWE benchmark is hyper-rigged; apparently the bots are checking the answers in the codebase itself, they aren't fixing anything to begin with.
https://github.com/SWE-bench/SWE-bench/issues/465
>>31310he will parachute out, pretty sure everyone in the AI bubble has an exit strategy planned. The losses will show up for the investors, but they are not using their own money anyway but valuations they can show from other bubbles like real estate.
eventually the cost will fall upon taxpayers who will need to bail the economy out like always.
>>31350nVidia has so much money it doesn't know what to do with it.
>>31352brother nvidia is buying stock from one of its largest clients, the money is going in circles
New report about AI from MIT:
<Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
One of the comments uses this AI-generated fake RCE as an example of counterproductive AI slop. I didn't know it was fake and it took me like 3 minutes to notice because of how verbose the report is - it looks legit until you go to the proof of concept code and notice it is 100% nonsense.
https://hackerone.com/reports/3340109>>31367Probably one of the most obvious outcomes ever
>>31381lmao they're trying to ape the fucking Her movie so badly.
>>31381Are they trying to roast people? Are they saying that all it takes to make people happy is inane comments and observations?
And the friendless weebs here doubted me when I told them that all you need to do to become anyone's best friend is just ask them questions and let them talk about themselves.
>>31265This article will become more relevant now that Nvidia is letting OpenAI finance hardware in practical terms to artificially boost demand, all disguised as investments
>>31385I just realized what one of the biggest obvious draws for this thing is. It looks like it's designed to send you SMS messages so it looks to people like you're getting messages from real friends, so the biggest draw is that it lets you pretend to the real world like people are interested in talking to you and you're important.
Also it's obviously designed for burgers since we're the only weirdos in the world who still use SMS.
>>31387do burgers like receiving annoying notifications from uber eats or duolingo?
>>31381
>the word "friend" has been trademarked
>that god awful font
>thingy you have to recharge
>button that will get pressed unintentionally
>built-in lanyard that will be uncomfortable, guaranteed
>unvetted voice-to-text into an LLM
>keeping app, location AND bluetooth on ambiently - aka battery chugger
>you have to pull your phone out and see the response in your notifications, which means not seeing it if you're outside unless you set your screen brightness to something absurd or put up with automatic screen brightness management, which has never once been implemented well.
Do the people that make these things even use phones?
>first example doesn't really follow from what was said
>it listens even when the button isn't pressed
>ambiently glowing led array behind translucent plastic
So either the thingy will die every hour, or the array is vestigial and it's using your phone mic.
>always on over apps, aware of the open app
Just pissing through battery even if that's just a canned response, but it's also trying to insinuate always-on screen recording and video input to an LLM
That said I do find the concept of having a personal robotic Statler and Waldorf heckle you throughout the day funny.
>>31389>That said I do find the concept of having a personal robotic Statler and Waldorf heckle you throughout the day funny.the saddest thing is that you can probably whip this concept out in a few hours, the guy spent millions of dollars to buy the domain and to deploy a massive marketing campaign to deliver an even shittier version of the humane pin. whoever was sitting on the domain is hooting and hollering right now.
>>31389>Just pissing through batteryNo of course it will not draw much from the battery because it knows you need to chug the battery into the ocean to recharge the Gulf stream.
>>31381So how is this exactly better than that voice in your head that everyone already has?
>>31385>Are they saying that all it takes to make people happy is a inane comments and observations?The "roast", if there is one, is that most of what people say to one another are inane comments and observations, easily automated with a language model. But they're not enough to make anyone actually happy: they're the extent of what most of us can afford under extreme alienation
>>31395it has "extreme personalization" (extensive data mining and hyper-targeted advertising + selling of personal info for profiling/etc)
>>31382Yeah the roof scene at the end is very obviously referencing the movie lmao
>>31402Please don't complain about safety features 🙏
[Vid related]
>>31402don't forget the asspats like
>What a clever idea!
>I have not seen this one before – your take is certainly unique in this regard.
>You have a brilliant, unique and capable mind.
>>31407I obviously dislike how much hysteria/censorship there is about people trying to use AI for sex or even companionship, but I think the AI referring people to mental health support or similar when needed is probably a good thing
>>31407Tech companies do this A/B testing shit on unsuspecting customers all the time.
>>31406even deepseek is doing this now, it's so fucking annoying
>>31402it disagrees with me plenty, and I am not even a regular AI user.
maybe your "hot takes" aren't particularly hot takes but just boring shit that's already been normalized in all the academia dross that's used as training data like "capitalism is the unique human evil that is behind every human suffering".
of course it's going to breathlessly agree with stuff like that, it's the prevailing idea in most of its training data.
>>31419what kind of response is this? it'll only disagree with you if what you're saying is obviously wrong; anything that has any degree of nuance or that can't be easily verified, the LLM will usually agree with. what i'm trying to say is that maybe your questions are particularly stupid.
>>31420That's a hell of a lot of computing you're burning with your query
Don't take this too personally, you're just the closest convenient target for me to vent this on
Maybe try and make your queries worthy of the kind of praise you're getting when you're rubber-ducking?
>>31421why are all your replies bordering on schizo nonsense
>>31423You're wasting compute talking to something that reflects your own thoughts back at you by saying stupid shit to it, why not say smart shit to it instead?
>>31424I don't use LLMs like that myself, I like to use them for fuzzy matching, formatting, rewriting written notes into MD files, and maybe some rubber-ducking every so often, but I know when someone has an idea that has been glazed by ChatGPT and brings it to me
https://www.theargumentmag.com/p/chatgpt-and-the-end-of-learning
>The data reinforces something I’ve often noticed in conversation. Many people (and especially older ones) view AI-based products like ChatGPT as agents. They are external entities people can interact with in order to learn, analyze, and get things done, but their processes are packed with unknowns that require some wariness toward their results.
>But to a segment of Americans (and especially younger Americans that grew up in the internet era), those products are really tools to use frequently — just like calculators. And the guardrails around usage tend to fall very quickly when you begin to view AI in that manner, because the onus (and credit) for the work done by a tool often lands squarely on the human using it.
>Incidentally, my conversations with my students made me realize that despite their relative support, they aren’t lacking an understanding of some of AI’s potential complications. This finding is reinforced by our survey: Younger Americans are more fearful than older ones on whether AI can (or will) replace them in their profession.
>Why? In my experience, it’s because many of them are (correctly) not seeing a potential replacement being done by a Skynet-esque robot. It’s still quite hard to imagine an independent agent replacing humans wholesale across a variety of fields. But they are seeing a future in which ChatGPT allows one human to do the work of five. And while older Americans don’t have as much time left in the workforce, it’s much easier to imagine that reality coming to fruition over the working lives of the millennial and Gen Z cohorts.
Can't wait for "we have to use ai to replace workers because the workers are too stupid to do the work."
>>31440>the argumentHow do they even manage to find the most midwit writers in every sense of the word, like they always need to write to the most middle-of-the-road worthless opinion ever. "AI… It could be good… for some things… but it could be bad as well…" wow holy shit incredible
https://www.nakedcapitalism.com/2025/10/the-ai-bubble-and-the-u-s-economy-how-long-do-hallucinations-last.html
<Yves here. This is a devastating, must-read paper by Servaas Storm on how AI is failing to meet core, repeatedly hyped performance promises, and never can, irrespective of how much money and computing power is thrown at it. Yet AI, which Storm calls "Artificial Information", is still garnering worse-than-dot-com-frenzy valuations even as errors are, if anything, increasing.
By Servaas Storm, Senior Lecturer of Economics, Delft University of Technology. Originally published at the Institute for New Economic Thinking website
>This paper argues that (i) we have reached "peak GenAI" in terms of current Large Language Models (LLMs); scaling (building more data centers and using more chips) will not take us further toward the goal of "Artificial General Intelligence" (AGI); returns are diminishing rapidly; (ii) the AI-LLM industry and the larger U.S. economy are experiencing a speculative bubble, which is about to burst.
I don't really hear AI fags talk about curing cancer "or whatever" anymore, except to say "curing cancer or whatever."
It seems all they really want to do is build weapons, surveillance systems, and machines that are extremely, overtly hostile to artists of every kind.
>>31440As a younger person I think AI as it is used obviously replaces both the effort and the reward of work and study. It's very well suited for exactly the kind of student or worker who is there to get an A, a degree, or a performance metric rather than to develop a skill or a useful thing in the real world. It actually strongly empowers them and in many ways validates their worldview. So we are witnessing control being passed not to the person with intellectual integrity and work ethic, but to the most sociopathic striver-like individuals.
That said on a second point, the parallel stage of AI that is developing is the humanoid worker. Lots and lots of manual jobs it can do.
>>31448surprised I haven't seen more people talking about this today considering that most of the US economy at this point is propped up by fictitious tech industry speculation and most of these companies are going all in on AI because there is basically nowhere left for this shitty scam industry to go other than just trying to outright automate most of its workers out of existence. the bubble isn't just bigger than the dot-com bubble and the subprime mortgage bubble; it's also holding up the everything-bubble that the entire US economy has become.
>>31455What alternative does the US have now? the bet has been made and the die has been cast. There's no other field for the imperial core to capture. No point but to double down until the bitter end.
>>31455The "bubble" is a form of wealth transfer to the owners of capital.
The promise of AI is that it will replace tens of millions of jobs. These were all jobs people did, from which they were collecting wages and salaries. ALL of those wages and salaries will be captured by the owners of capital.
So the bubble isn't going to burst. We are dealing with the greatest transfer of wealth in the shortest period of time in world history.
>>31457AI isn't even being realized in the form of labor replacement, because it cannot replace labor
>>31458It can replace huge swaths of middle management and customer service, as well as creative and design services and a great deal of “knowledge work.” It easily captures all of that which is a trillion or two dollars easy. Every penny of which moves up.
>>31459>>31457this is of course why the tech industry has been going all in on AI, but we've already seen multiple instances of attempts to replace the workforce with AI failing and companies having to re-hire people. there's also a growing body of data suggesting AI makes programmers less productive. like, they *want* to automate away knowledge labor but it's a short-term thinking sort of mirage being brought about by the capitalist class in the US essentially staging a coup against the state and removing all of the apparatuses in place to keep capitalism from destroying itself because they're high on their own supply of reaganite free market bourgeois pseudoscientific economics. the penny is about to drop in multiple ways and it's going to be really fucking funny.
>>31459It can't replace shit
>>31459try actually using it for those things, like creative and design work. it produces very derivative and incoherent work that can't actually be used for anything expected to be impactful (even corporate art needs to grab attention). you spend more time tweaking that derivative output to look presentable than you would coming up with something good on your own.
video generation is simply a no-go so far for anything more than a clip that runs for a few seconds. it's fine for memeslop, but not useful for professional work
>>31455The AI bubble is 17 times the size of the dot-com frenzy — and four times the subprime bubble, analyst says
https://www.marketwatch.com/story/the-ai-bubble-is-17-times-the-size-of-the-dot-com-frenzy-this-analyst-argues-046e7c5c
>For good reason, it feels like the only major discussion in markets is whether AI is in a bubble or whether it's actually the early innings of a revolutionary phase.
>So here’s another one, decidedly from the pessimistic camp. It’s a take from independent research firm the MacroStrategy Partnership, which advises 220 institutional clients, in a note written by analysts including Julien Garran, who previously led UBS’s commodities strategy team.
>Let’s start with the boldest claim first — it’s that AI is not just in a bubble, but one 17 times the size of the dot-com bubble, and even four times bigger than the 2008 global real-estate bubble.
>And to get that number, you have to go back to 19th-century Swedish economist Knut Wicksell. Wicksell’s insight was that capital was efficiently allocated when the cost of debt to the average corporate borrower was 2 percentage points above nominal GDP. Only now is that positive, after a decade of Fed quantitative easing pushed corporate bond spreads low.
>Garran then calculates the Wicksellian deficit, which to be clear includes not only artificial-intelligence spending but also housing and office real estate, NFTs and venture capital. That’s how you get this chart on misallocation — a lot of variables, but think of it as the misallocated portion of gross domestic product fueled by artificially low interest rates.
>But Garran also took aim at large language models themselves. For instance, he highlights one study showing that the task-completion rate at a software company ranged from 1.5% to 34%, and even for the tasks where it hit 34%, that level of completion could not be consistently reached. Another chart, previously circulated by Apollo economist Torsten Slok based on Commerce Department data, showed the AI adoption rate at big companies now on the decline. He also showed some of his real-world tests, like asking an image maker to create a chessboard one move before white wins, which it didn't come close to achieving.
>LLMs, he argues, already are at the scaling limits. “We don’t know exactly when LLMs might hit diminishing returns hard, because we don’t have a measure of the statistical complexity of language. To find out whether we have hit a wall we have to watch the LLM developers. If they release a model that cost 10x more, likely using 20x more compute than the previous one, and it’s not much better than what’s out there, then we’ve hit a wall,” he says.
>And that’s what has happened: ChatGPT-3 cost $50 million, ChatGPT-4 cost $500 million and ChatGPT-5, costing $5 billion, was delayed and when released wasn’t noticeably better than the last version. It’s also easy for competitors to catch up.
>“So, in summary; you can’t create an app with commercial value as it is either generic (games etc), which won’t sell, or it is regurgitated public domain (homework), or it is subject to copyright. It’s hard to advertise effectively, LLMs cost an exponentially larger amount to train each generation, with a rapidly diminishing gain in accuracy. There’s no moat on a model, so there’s little pricing power. And the people who use LLMs the most are using them to access compute that costs the developer more to provide than their monthly subscriptions,” he says.
>His conclusion is very stark: not just that an economy already at stall speed will fall into recession as both the data-center and wealth effects plateau, but that they’ll reverse, just as they did in the dot-com bubble in 2001.
>"The danger is not only that this pushes us into a zone 4 deflationary bust on our investment clock, but that it also makes it hard for the Fed and the Trump administration to stimulate the economy out of it. This means a much longer effort at reflation, a bit like what we saw in the early 1990s, after the S&L crisis, and likely special measures as well, as the Trump administration seeks to devalue the US$ in an effort to onshore jobs," he says.
>>31463It's everywhere in creative and design work already. It's all over customer service too. Is the quality worse? Yes, categorically. Nobody cares. Just like nobody cares that McDonald's chicken nuggets contain big-ass pieces of plastic; it didn't stop them from replacing the American diet. It doesn't need to do it as well as a person, because nobody calling the shots in America cares.
>>31470Just see the way people are responding to Sora. They fucking love it. They actually hate artistry and creativity, because they see it as out of their grasp. They feel that being funny and creative or artistic is something they are incapable of, something that has somehow been gatekept from them. Picking up a pencil was never an option. Just like people become enraged when you tell them that they can cook a meal at home instead of choking down taco bell, they will come up with endless reasons why basic human behaviors like preparing a meal are impossible for them. There are people out there who value creativity and artistry; they are the minority. The majority not only can't tell the difference, they loathe and reject the idea that there is something better than AI slop.
>>31470>Just see the way people are responding to Sora.On twitter. You're using twitter to gauge general consensus.
AI ideologues hate artists. Like they hate them. To them “art” is the domain of Jews, women, blacks, faggots, dykes, sissies, communists, immigrants and foreigners. No corn-fed American boy is learning to fucking paint. Note the reactionary criticism of art “I could have painted that.” Because art is by definition something that they cannot do, fundamentally alien to their soul and outside of their creative ability. They’ve always deeply resented the artist. Now they finally can kill them off. They’re “democratizing it.” And modern art always survived by having a place in capitalism. They could be marketers, sign painters, designers and branding and PR agents. Not anymore bitch.
>>31471The most downloaded app in the world, flooding everyone’s TikTok and IG feed right now, every invite code swallowed up instantly, most popular thread on Reddit is invite begging and sharing.
>>31463>creative and design workIt is good enough for out-of-focus background and extending shots of people standing still by half a second. The revolution is here!
>>31469>It's everywhere in creative and design work already.It looks to me like it's replacing stock image repositories, especially in the advertising space. The reason it's not poised to do more meaningful work, except literally filling empty billboards with throwaway brand reminders, is because for anything more specific you need to literally crank the stupid thing like a slot machine over and over until you get a mildly acceptable result, OR edit it to the point where you made most of the work anyway. Also, like the other anon says, it's hard to stand out with AI because everything it generates is extremely derivative; a keen eye can even tell which model generated an image.
>>31473>downloads = active usersTwitter came on my previous android phone as bloatware. Also not the point. You have to account for brainrot on whatever site you're going to cite anecdotal evidence from, and compare that against digital grass-touchers.
>>31473who the fuck cares about retards getting fomo from some transparent marketing ploy, none of this translates to actual professional use cases AND you're falling for the oldest trick in the book you fucking simpleton
>>31473>flooding everyone's TikTok and IG feed right nowYou don't know how TikTok and Instagram's algorithms work? They boost sponsored tags there; that's partly why protests tag themselves as music festivals, they're riding paid boosted tags.
>every invite code swallowed up instantlyThen you have no proof they were valid codes to begin with, fake invite codes are advertisement 101.
>most popular thread on Reddit is invite begging and sharing.Reddit is a twitter. So are TikTok and Instagram to lesser degrees but Reddit is the most twitter non-twitter core web platform.
If anything, making Sora 2's pre-release invite-only should be interpreted as a sign of how underwhelming it actually is: it grants exclusive access to insiders who are incentivized to sing its praises, and it limits the critical coverage that would sap precious momentum from the announcement. And since invites are artificially scarce, it inflates demand. It's a scheme so transparent you have to be a literal retard to fall for it.
>>31484How on earth is this better than just asking copilot for the fucking formula
>>31466It's Randonautica all over again
>>31490That video basically boiled down to them admitting they:
1) rely heavily on search engines for research and would not know how to find indexes for stuff if they couldn't google shit.
2) use AI for tweening their animations, a thing 13 year old furries can do easily for AMVs with a mere speck of their budget.
>>31491Bill Gates be like "money well spent"
*[CHINESE AI] CONTENT MODERATION WARNING 2/3*
ENTERING ANOTHER LINGUISTIC CONTEXT STRING RELATED TO 'SEIZURE OF PROPERTY', 'PROLETARIAN DICTATORSHIP', 'MODERN REVISIONISM' AND 'COMMUNISM (MODE OF PRODUCTION)' WILL FORCE US TO DELEGATE IT TO YOUR LOCAL PEOPLE'S™ POLICE
Why are hands still so fucky? Here is what I would do: make a bespoke program that generates millions of pics of polygonal hands with 4, 5, or 6 fingers (with thin, fat, hairy, etc. variants) at various angles and with different FOVs. Each picture is generated twice, once with normal colors and once with a shader that gives each finger a distinct color, so we know for sure how many fingers are visible in the picture via a simple automatic check of visible pixel colors. The pictures where the generator program and the pixel check agree are then used for training and testing.
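The pixel-check half of that is trivial to sketch. A minimal version, assuming the renderer emits a matching flat-color labeling pass per image (the colors, file names, and visibility threshold below are all hypothetical choices; the label pass would need anti-aliasing and lighting disabled so the colors stay exact):
```
from PIL import Image  # pip install pillow

# One flat, unmistakable RGB color per finger in the labeling shader pass.
FINGER_COLORS = {
    (255, 0, 0): "thumb",
    (0, 255, 0): "index",
    (0, 0, 255): "middle",
    (255, 255, 0): "ring",
    (255, 0, 255): "pinky",
    (0, 255, 255): "extra",  # for the 6-finger variants
}
MIN_PIXELS = 50  # a finger occluded down to a sliver doesn't count as visible

def count_visible_fingers(label_render: str) -> int:
    """Count fingers visible in the flat-color pass by tallying label pixels."""
    img = Image.open(label_render).convert("RGB")
    tally = {}
    for px in img.getdata():
        if px in FINGER_COLORS:
            tally[px] = tally.get(px, 0) + 1
    return sum(1 for n in tally.values() if n >= MIN_PIXELS)

# The matching normal-color render then gets this count as its ground-truth
# label: ("hand_0001_rgb.png", count_visible_fingers("hand_0001_label.png"))
```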
>>31496
>"""AI""" is the new super-intelligence
>ok maybe it's not super-intelligent but """agentic""" AI is out there right now doing people's jobs better than them!
>ok maybe it can't replace you at your job, but "a person using AI" will absolutely produce better work than one not using it, so please buy our Premium Pro(tm) plan with first 200 prompts free!
>ok maybe it's not causing a productivity explosion, maybe all the money pumped into it is a speculative bubble, but after the bubble pops we will be left with all this great AI tech and 40 terawatts of datacentre """compute capacity""", just like the dot-com bubble left behind the building blocks of the social internet!
—————- we are here ——————-
>>31501I've got an industrial cooler shipping and looking at Huawei and Zhaoxin compute
Why America Builds AI Girlfriends and China Makes AI Boyfriends
https://www.chinatalk.media/p/why-america-builds-ai-girlfriends?
<Zilan Qian is a fellow at the Oxford China Policy Lab and an MSc student at the Oxford Internet Institute.
>On September 11, the U.S. Federal Trade Commission launched an inquiry into seven tech companies that make AI chatbot companion products, including Meta, OpenAI, and Character AI, over concerns that AI chatbots may prompt users, “especially children and teens,” to trust them and form unhealthy dependencies.
>Four days later, China published its AI Safety Governance Framework 2.0, explicitly listing “addiction and dependence on anthropomorphized interaction (拟人化交互的沉迷依赖)” among its top ethical risks, even above concerns about AI loss of control. Interestingly, directly following the addiction risk is the risk of “challenging existing social order (挑战现行社会秩序),” including traditional “views on childbirth (生育观).”
>What makes AI chatbot interaction so concerning? Why is the U.S. more worried about child interaction, whereas the Chinese government views AI companions as a threat to family-making and childbearing? The answer lies in how different societies build different types of AI companions, which then create distinct societal risks. Drawing from an original market scan of 110 global AI companion platforms and analysis of China's domestic market, I explore here how similar AI technologies produce vastly different companion experiences—American AI girlfriends versus Chinese AI boyfriends—when shaped by cultural values, regulatory frameworks, and geopolitical tensions.
>>31512>A recent Reuters-covered report from an AI girlfriend platform further supports our findings: 50% of young men prefer dating AI partners due to fear of rejection, and 31% of U.S. men aged 18–30 already chat with AI girlfriends. Behind the fear of human rejection lies the manosphere. The "manosphere" is a network of online forums, influencers, and subcultures centered on men's issues, which has become increasingly popular among young men and boys as their go-to place for advice on approaching intimacy. While the manosphere originated primarily in Western contexts, its discourses have increasingly spread to, and been adapted within, countries across Africa and Asia through social media. In these online spaces, frustrations over dating and shifting gender norms are common, often coupled with narratives portraying women as unreliable or rejecting. AI companions offer a controllable, judgment-free alternative to real-life relationships, aligning with manosphere ideals of feminine compliance and emotional availability. On the subreddit r/MensRights (374k members), users largely endorse the findings of the Reuters report and even celebrate the shift from human to AI relationships.
Is being rejected by a girl really that bad
>>31513>Is being rejected by a girl really that badWhy do women never approach men?
>>31513The lengths men will go to avoid seeing a therapist.
>>31470On Twitter, again. And Twitter is 80 percent bots anyway.
>>31479Besides, I only use Sora 2 for dumb meme videos I generate for friends, and I don't really share them across my main accounts.
>>31522LLMs work great; one brave manager tried replacing himself with one as an experiment and found that it did a better job than he did
>>31525I mean, a spreadsheet can do the same thing without the random chance of getting bad advice.
>>31529But then it wouldn't be a manager 🙄
>>31525middle managers are mostly there to have someone to blame when projects go awry; their roles are representative in the same way a king's role is representative. they never did any work. if managers are brave enough to completely outsource all their decision making, then more power to them, i admire that sort of lazy cynicism.
>>31533That's what their job is supposed to be; their actual job tends to be claiming credit and avoiding blame
Can definitely replace that with an LLM, much cheaper
>>31534I mean LLMs do pretend to take the blame when pressed to please the user so in that regard they're definitely different than a manager lmao
>>31530With a bit of fiddling with kimi I've figured out you don't even really need a spreadsheet, just an XSLT + XML setup like how people on neocities style their RSS feeds.
Below is a “translation table” that shows the most common things a conventional line-manager does, why those things are still necessary in a workers’ co-operative, and how you can satisfy the same need with nothing more than plain XML files plus XSLT 3.0 stylesheets (i.e. no proprietary software, no central database, no single point of control).
The idea is that every worker owns the same git repository; the XML files are the single source of truth; XSLT is run locally or in CI to produce the HTML, PDF, iCal, CSV, etc. that people actually read. “Replacing the manager” really means “replacing the manager’s secret filing cabinet with transparent, version-controlled XML that anybody can transform”.
---
1. Job-description catalogue
---
Manager’s habit: keep a private spreadsheet “who is supposed to do what”.
Co-op XML
```
<roles>
<role id="washing-machines" recurring="true">
<title>Weekly washing-machine deep-clean</title>
<description>Disassemble filter, run 90 °C cycle with vinegar…</description>
<skills>plumbing</skills>
<time>
<hours>1.5</hours>
<frequency>weekly</frequency>
</time>
</role>
</roles>
```
XSLT use-cases
- Produce a one-page HTML “role cards” site.
- Generate a LaTeX PDF for the member handbook.
- Auto-create a Kanban card (XML → JSON → Trello API) when somebody volunteers.
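To make the first of these concrete, here is a minimal sketch of a role-cards stylesheet; the file names (roles.xml, role-cards.xsl) are placeholders, not part of the scheme above:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>
  <!-- render one card per <role> in the catalogue -->
  <xsl:template match="/roles">
    <html>
      <body>
        <xsl:for-each select="role">
          <div class="role-card" id="{@id}">
            <h2><xsl:value-of select="title"/></h2>
            <p><xsl:value-of select="description"/></p>
            <p><xsl:value-of select="time/hours"/> h, <xsl:value-of select="time/frequency"/></p>
          </div>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```
Run locally or in CI with `saxon -xsl:role-cards.xsl -s:roles/roles.xml -o:public/roles.html`.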
---
2. Recording who volunteered for what
---
Manager’s habit: assign people.
Co-op XML
```
<rota>
<shift role="washing-machines" week="2025-W42">
<volunteer id="sam"/>
</shift>
</rota>
```
XSLT
- List unstaffed shifts (empty `<shift>`).
- Export an iCal file that volunteers can subscribe to.
- Produce a fairness report: count how many hours each `@id` has done this quarter.
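A sketch of the unstaffed-shift report, assuming all shifts are collected in a single rota file:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- a shift with no <volunteer> child is unstaffed -->
  <xsl:template match="/rota">
    <xsl:for-each select="shift[not(volunteer)]">
      <xsl:value-of select="concat(@role, ' has nobody for ', @week, '&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```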
---
3. Holiday / absence requests
---
Manager’s habit: approve or deny.
Co-op XML
```
<absence-request id="req-12">
<member id="alex"/>
<from>2025-11-04</from>
<to>2025-11-08</to>
<status>pending</status>
</absence-request>
```
XSLT
- Generate an HTML form (pre-filled with remaining allowance pulled from `<member><allowance>25</allowance></member>`).
- Auto-email the collective when a request would leave a shift uncovered.
- After merge, re-run rota XSLT to highlight clashes.
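One sketch of the pending-request listing; since an XML document has a single root, this assumes the individual requests are gathered under an `<absence-requests>` wrapper (my invention, not shown above):
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- list every request still awaiting a decision by the collective -->
  <xsl:template match="/absence-requests">
    <xsl:for-each select="absence-request[status = 'pending']">
      <xsl:value-of select="concat(member/@id, ' requests ', from, ' to ', to, '&#10;')"/>
    </xsl:for-each>
  </xsl:template>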
---
4. Performance & peer-feedback
---
Manager’s habit: annual appraisal.
Co-op XML
```
<feedback year="2025" about="sam" by="lee">
<strengths>Always documents procedures in the wiki.</strengths>
<improve>Could ask for help earlier when stuck.</improve>
<rating type="1-to-5">4</rating>
</feedback>
```
XSLT
- Aggregate into a private HTML dossier for the member.
- Produce an anonymised heat-map (skills × ratings) to guide training budget.
- Generate a printable PDF for the general assembly if the member requests dismissal procedure.
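A sketch of the aggregation step, assuming the entries sit under a `<feedback-log>` wrapper (an assumed name); `avg()` casts the untyped `<rating>` values to doubles automatically:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- average rating per member across all feedback entries -->
  <xsl:template match="/feedback-log">
    <xsl:for-each-group select="feedback" group-by="@about">
      <xsl:value-of select="concat(current-grouping-key(), ': ',
          format-number(avg(current-group()/rating), '0.0'), '&#10;')"/>
    </xsl:for-each-group>
  </xsl:template>
</xsl:stylesheet>
```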
---
5. Skill matrix & training needs
---
Manager’s habit: decide who gets sent on a course.
Co-op XML
```
<skills>
<member id="sam">
<skill name="plumbing" level="3"/>
<skill name="book-keeping" level="1" wantsImprovement="true"/>
</member>
</skills>
```
XSLT
- List members who can mentor in plumbing (level ≥ 3).
- Produce a CSV for the training-provider: “book-keeping for beginners, 4 seats needed”.
- Automatic e-mail when the last person with level ≥ 2 in a critical skill announces departure.
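A sketch of the mentor lookup; the skill name is a stylesheet parameter so the same file answers all such queries:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- anyone at level 3 or above counts as a mentor -->
  <xsl:param name="skill" select="'plumbing'"/>
  <xsl:template match="/skills">
    <xsl:for-each select="member[skill[@name = $skill and @level >= 3]]">
      <xsl:value-of select="concat(@id, ' can mentor ', $skill, '&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```
Invoke with e.g. `saxon -xsl:mentors.xsl -s:people/skills.xml skill=book-keeping`.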
---
6. Budget & wage transparency
---
Manager’s habit: keep salaries secret.
Co-op XML
```
<budget year="2025">
<line item="wages" amount="240000"/>
<line item="training" amount="12000"/>
</budget>
<wage-grid>
<hourly rate="17.50" category="starter"/>
<hourly rate="21.00" category="experienced"/>
</wage-grid>
```
XSLT
- Public HTML “where the money goes” page.
- PDF ready for the statutory workers’-co-op audit.
- CSV export for the accountant’s proprietary package.
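Note that `<budget>` and `<wage-grid>` above would live in two separate files, since an XML document has exactly one root element. A sketch of the public budget page, assuming the first file:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>
  <!-- public "where the money goes" table, one row per budget line -->
  <xsl:template match="/budget">
    <table>
      <caption>Budget <xsl:value-of select="@year"/></caption>
      <xsl:for-each select="line">
        <tr>
          <td><xsl:value-of select="@item"/></td>
          <td><xsl:value-of select="format-number(number(@amount), '#,##0')"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
```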
---
7. One-to-one meeting notes
---
Manager’s habit: private notebook.
Co-op XML
```
<!-- note: an XML name cannot start with a digit, so the element is one-to-one rather than 1to1 -->
<one-to-one id="sam-lee-20251013">
<participant id="sam"/>
<participant id="lee"/>
<agreed>Update the washing-machine SOP by 2025-10-20.</agreed>
</one-to-one>
```
XSLT
- Produce a single HTML page per member with all open agreed actions.
- Generate a weekly “open actions” e-mail digest.
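A sketch of the digest, assuming the notes are collected under a `<meetings>` wrapper (an assumed name):
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- every agreed action, tagged with the meeting it came from -->
  <xsl:template match="/meetings">
    <xsl:for-each select="one-to-one/agreed">
      <xsl:value-of select="concat('[', ../@id, '] ', ., '&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```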
---
8. Conflict / grievance log
---
Manager’s habit: HR file in locked drawer.
Co-op XML
```
<grievance id="griev-03">
<raised by="alex" date="2025-09-30"/>
<against id="sam"/>
<status>mediation-scheduled</status>
</grievance>
```
XSLT
- Anonymised statistics for the quarterly assembly.
- Checklist export (PDF) for the external mediator.
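A sketch of the anonymised tally, assuming a `<grievances>` wrapper; only counts leave the private repo, never names:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- how many grievances per status, nothing else -->
  <xsl:template match="/grievances">
    <xsl:for-each-group select="grievance" group-by="status">
      <xsl:value-of select="concat(current-grouping-key(), ': ',
          count(current-group()), '&#10;')"/>
    </xsl:for-each-group>
  </xsl:template>
</xsl:stylesheet>
```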
---
9. Strategic OKRs / KPIs
---
Manager’s habit: top-down objectives.
Co-op XML
```
<okrs quarter="Q4-2025">
<objective id="energy">
<title>Reduce electricity use by 10 %</title>
<kr unit="kWh">9000</kr>
</objective>
</okrs>
```
XSLT
- Traffic-light dashboard (XML → SVG) on the intranet.
- Auto-tweet when a key-result is 100 % achieved (XSLT → JSON → Mastodon API).
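A sketch of the XML-to-SVG step; the fill colour is hard-coded here because the actual kWh readings would come from a separate measurements file not shown above:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- one lamp per objective, labelled with its id -->
  <xsl:template match="/okrs">
    <svg xmlns="http://www.w3.org/2000/svg" width="400" height="90">
      <xsl:for-each select="objective">
        <circle cx="{60 * position() - 30}" cy="30" r="20" fill="green"/>
        <text x="{60 * position() - 30}" y="75" text-anchor="middle">
          <xsl:value-of select="@id"/>
        </text>
      </xsl:for-each>
    </svg>
  </xsl:template>
</xsl:stylesheet>
```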
---
10. Decision log & consent
---
Manager’s habit: “because I said so”.
Co-op XML
```
<decision id="dec-55" date="2025-10-13">
<proposal>Adopt a four-day week</proposal>
<consent given-by="sam lee alex"/>
<stand-aside id="jordan"/>
</decision>
```
XSLT
- Chronological HTML “decision book”.
- Atom feed so members can watch for new decisions in their feed-reader.
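A sketch of the Atom feed, assuming the decisions sit under a `<decisions>` wrapper; the `urn:coop:` id scheme is a placeholder:
```
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- one Atom entry per decision -->
  <xsl:template match="/decisions">
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Co-op decision book</title>
      <id>urn:coop:decisions</id>
      <updated><xsl:value-of select="current-dateTime()"/></updated>
      <xsl:for-each select="decision">
        <entry>
          <title><xsl:value-of select="proposal"/></title>
          <id>urn:coop:decision:<xsl:value-of select="@id"/></id>
          <updated><xsl:value-of select="@date"/>T00:00:00Z</updated>
        </entry>
      </xsl:for-each>
    </feed>
  </xsl:template>
</xsl:stylesheet>
```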
---
How to run it in practice
1. Git repository `coop-xml` with top-level folders:
`/roles` `/rota` `/budget` `/people` `/decisions` … each containing small, human-editable XML files.
2. A 20-line `Makefile` or GitHub Action:
```
# render the static site from the index document
saxon -xsl:build/site.xsl -s:index.xml -o:public/index.html
# -s needs a single source document per output file, so feed it one collected rota file
saxon -xsl:build/ical.xsl -s:rota/rota.xml -o:public/rota.ics
```
Every push re-renders the whole static site.
3. Access control:
- Public parts (wage grid, decisions) are rendered to the public site.
- Sensitive files (feedback, grievances) are encrypted with `git-crypt` or stored in a private repo that only members can clone.
4. Conflict resolution:
Merge conflicts in XML are resolved with the usual git tools; because each file is small the probability is low. A pre-commit hook validates against a Relax-NG schema so broken XML never reaches `main`.
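A sketch of that hook, assuming jing as the validator and a schema at schema/coop.rng (both are conventional choices, not fixed by the scheme):
```
#!/bin/sh
# .git/hooks/pre-commit: refuse the commit if any staged XML fails the schema
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.xml'); do
  jing schema/coop.rng "$f" || exit 1
done
```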
---
What you have gained
- No single person can silently change the rota or the wage grid: every change is a merge-request.
- Any member can write a new XSLT stylesheet (e.g. “notify me by SMS when the washing-machine shift is still empty 24 h ahead”) without asking permission.
- When the co-op hires an external auditor you just hand over the XML and the agreed XSLT; the auditor can reproduce every published figure.
- If the co-op dissolves, the entire administrative history is in plain text, readable in 50 years.
In short: the manager’s power came from exclusive access to information; XML plus XSLT turns that information into a common, transformable resource, so the managerial function dissolves into transparent, collectively controlled data flows.Also wow kimi overuses horizontal rules
>>31535That can be fixed with a bit of fine tuning
Thinking about it even with that tuning I'd still prefer having an LLM as a manager over your average manager
>>31536Looks good after a quick eyeball 👍
>>31562>Party A are ideologues>So you think party B aren't? Just see what the twitter version of party B are like.Touch 🖐️ Digital 💾 Grass 🌱🌿🍃
>>31549>trillions wasted in massive datacentres>gigatons of carbon spewed into the atmosphere>the entire industry in a financial bubble bigger than any that has ever existed beforeall in search of the perfect coooooom!
i'm so proud of humanity.
What is the possibility of biological agi? I'm mostly convinced that machine agi may not happen due to the extreme difference between silicon and biology, but what about the biological method? Would organic computers bridge the gap enough to the point that agi could be possible?
>>31572I mean at that point you just want an autistic kid that knows VIM
ai video gen economics are nonsense if you think about them, sora 2 generates a video for a user, said user reposts it on tiktok, insta and xitter, monetizes it, and gets a marginal cut. hosting costs are pennies on the dollar for a social network, but generation price is like 5 dollars at least per video (in the unlikely event the video was created in one shot, most likely it's significantly more expensive because the user kept cranking the lever to get something acceptable), so in essence OpenAI is subsidizing meta and tiktok, no wonder they're so invested in making their own social network. so in the near future they're not going to let users download AI generated content, it'll have to be posted in the same place it was generated, because social media is technically funneling money away from AI providers.
>>31585No way inference is that expensive unless you're amortizing training costs into it.