The other thread hit bump limit and I'm addicted to talking about the birth of the ̶a̶l̶l̶-̶k̶n̶o̶w̶i̶n̶g̶ ̶c̶o̶m̶p̶u̶t̶e̶r̶ ̶g̶o̶d̶ the biggest financial bubble in history and the coming jobless eschaton, post your AI news here
Previous thread:
>>27559

>>30811
I stopped using search engines and use my bookmarks instead, and find it trivial to find non-AI works to read. AI is just better at the getting-noticed-by-the-curation-algorithm thing. No algorithm, and the AI shit disappears.
>>30831
Well, American AI is done. I have high hopes for China long term, at least for producing the technological base for a world without drudgery; they're getting there slowly.
It's interesting how capitalism can even deform a People's Republic and a workers' coop.
The reason DeepSeek was able to get the amazing results they did is that they could put in the hard work of studying the hardware they ran on. From a pure research point of view they should be working directly with the engineers of the new domestic chipsets to design the next generation, but instead they're being whipped into working directly on the national champion, the workers' cooperative Huawei. That looks good as a leftist position, and aesthetically it's great; the problem is that you need a few years of familiarity with the chips in question, or to be directly involved in designing them, to squeeze that kind of performance out of a specific architecture.
Inefficiencies like this, which make everybody's lives just that little bit worse, abound in capitalism.
>>30851
Why programmers? Their job is actually pretty complex.
>>30847
Customer service workers are already required to follow scripts in their interactions, so you would think it's easy to automate. Plus there are no hardware requirements there (other than computing…); it's not like warehouse work where you actually have to interact with the real world.
https://youtu.be/xWYb7tImErI
Also belongs in >>>/edu/ when I find which thread to file it in
>>30847
>but not to management/supervisor type jobs which would be much more straightforward to automate?
That's what it was designed to replace when these things were first being cooked up in labs a quarter of a century ago.
Every context switch to deal with a perfectly valid and important question from a student or faculty member could cost hours on other important work, if the technical details of what was being worked on were complicated enough.
That's what these things were designed for: to replace that managerial, supervisory work.
>>30861 (me)
Yes, and as a cool side effect, the chatbot will take a terse correct response and turn it into a verbose explanation 😎
>>30872
That's fine for artisanal work hahaha
>>30867 (me)
>>30868 (me)
If anything the best use case is an instructor tuning and working on a model for a field, and then a handful of learners kicking the tires, consulting each other and the instructor.
Thank you, venture capitalists, for throwing billions into a research instrument that would otherwise never have been built.
>>30878
*Artisanal
The instructor-learner system was never planned to scale beyond 20 people; maybe it'll scale 🤷
>>30859
>They are not feeding compiler errors back.
Likely they already have evaluators within their architecture; I don't see why they wouldn't.
>natural language or images, should be easier for these kind of systems to master.
This is silly because it should be the exact opposite: context-free grammars are easier for computers to parse, understand, and predict than natural language, since there's no ambiguity, so it should be easy for neural nets to match a piece of code to an intended result without evaluating it. Now I'm NOT saying that LLMs will replace devs, just that this is the use case that is the most promising, technically and return-wise, and if it doesn't work out, then LLMs are pretty much fucked.
>>30878
That sounds like a disaster. LLMs have no internal coherency, and without that, good luck making a model of a field:
https://yosefk.com/blog/llms-arent-world-models.html

>>30880
>I don't see why they wouldn't
Because the mode of operation is just throwing more and more data at the problem and then hoping that the model magically figures things out on its own.
>This is silly because it should be the exact opposite
It is counterintuitive, but LLMs work on plain text. The clear structure of programming languages is lost on them. The model could recognize it, but there's no guarantee that it will. It is not generating well-formed ASTs, it's generating plain text.
Also, most programming languages do not have context-free grammars.
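You can see the "plain text" point directly: any structure only exists after an external parser blesses the output. A minimal sketch (the llm_output string here is a made-up stand-in for a model response):
```python
import ast

llm_output = "def add(a, b):\n    return a + b"  # hypothetical model output: just characters

try:
    # the model gave no structural guarantee; we only learn the text is
    # well-formed by parsing it ourselves, after the fact
    tree = ast.parse(llm_output)
    print(ast.dump(tree))
except SyntaxError as err:
    print("model emitted text that isn't valid Python:", err)
```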
https://www.wired.com/story/ai-comes-up-with-bizarre-physics-experiments-but-they-work/
>He and his team turned to AI—in particular, a software suite first created by the physicist Mario Krenn to design tabletop experiments in quantum optics. First, they gave the AI all the components and devices that could be mixed and matched to construct an arbitrarily complicated interferometer. The AI started off unconstrained. It could design a detector that spanned hundreds of kilometers and had thousands of elements, such as lenses, mirrors, and lasers.
>Initially, the AI’s designs seemed outlandish. “The outputs that the thing was giving us were really not comprehensible by people,” Adhikari said. “They were too complicated, and they looked like alien things or AI things. Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.”
>The researchers figured out how to clean up the AI’s outputs to produce interpretable ideas. Even so, the researchers were befuddled by the AI’s design. “If my students had tried to give me this thing, I would have said, ‘No, no, that’s ridiculous,’” Adhikari said. But the design was clearly effective.
>It took months of effort to understand what the AI was doing. It turned out that the machine had used a counterintuitive trick to achieve its goals. It added an additional three-kilometer-long ring between the main interferometer and the detector to circulate the light before it exited the interferometer’s arms. Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise. No one had ever pursued those ideas experimentally. “It takes a lot to think this far outside of the accepted solution,” Adhikari said. “We really needed the AI.”
>If the AI’s insights had been available when LIGO was being built, “we would have had something like 10 or 15 percent better LIGO sensitivity all along,” he said. In a world of sub-proton precision, 10 to 15 percent is enormous.
>“LIGO is this huge thing that thousands of people have been thinking about deeply for 40 years,” said Aephraim Steinberg, an expert on quantum optics at the University of Toronto. “They’ve thought of everything they could have, and anything new [the AI] comes up with is a demonstration that it’s something thousands of people failed to do.”

>>30891
Actual paper:
https://arxiv.org/pdf/2312.04258
This seems to be some kind of specialized optimization problem and not "generative AI".
>Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago
No it wasn't; it just explored the search space and happened upon designs that work.
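Worth spelling out the difference: a black-box optimizer carries no theory, it just keeps whatever scores better. A toy sketch of the idea (the objective here is a stand-in, not an interferometer simulation):
```python
import random

def sensitivity(design):
    # stand-in objective; the real one would be a physics simulation
    return -sum((x - 0.7) ** 2 for x in design)

best = [random.random() for _ in range(5)]
for _ in range(10_000):
    candidate = [x + random.gauss(0, 0.1) for x in best]
    if sensitivity(candidate) > sensitivity(best):
        best = candidate  # keep whatever works; no physics insight involved

print(best)  # can look "alien" while still scoring well on the objective
```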
https://fortune.com/2025/08/20/openai-chairman-chatgpt-bret-taylor-programmers-ai/
OpenAI’s chairman says ChatGPT is ‘obviating’ his own job—and says AI is like an ‘Iron Man suit’ for workers
>Over two decades, Bret Taylor has helped develop some of the most important technologies in the world, including Google Maps, but it’s AI that he says has brought about a new inflection point for society that could, as a side effect, do away with his own job.
>In an interview on the Acquired podcast published this week, Taylor noted that despite his success as a tech executive, which includes stints as co-CEO of Salesforce, chief technology officer at Facebook (now Meta), and now chairman of OpenAI, he prefers to identify as a computer programmer.
>Yet with AI’s ability to streamline programming and even replace some software-development tasks and workers, he wonders if computer programmers in general will go the way of the original “computers,” humans who once were charged with math calculations before the age of electronic calculators.
>Taylor said the anguish over his identity as a programmer comes from the fact that AI is such a productivity booster, it’s as if everyone who uses it were wearing a super suit.
<“The thing I self-identify with [being a computer programmer] is, like, being obviated by this technology. So it’s like, the reason why I think these tools are being embraced so quickly is they truly are like an Iron Man suit for all of us as individuals,” he said.
>He added this era of early AI development will later be seen as “an inflection point in society and technology,” and just as important as the invention of the internet was in the 20th century.
>Because of AI’s productivity-boosting abilities, Taylor has made sure to incorporate it heavily in his own startup, Sierra, which he cofounded in 2023. He noted that it’s doubtful an employee is being as productive as they could be if they’re not using AI tools.
<“You want people to sort of adopt these tools because they want to, and you sort of need to … ‘voluntell’ them to do it, too. You know, it’s like, ‘I don’t think we can succeed as a company if we’re not the poster child for automation and everything that we do,’” he said.
>AI isn’t just software, Taylor said, and he believes the technology will upend the internet and beyond. While he’s optimistic about an AI future, Taylor noted the deep changes posed by the tech may take some getting used to, especially for the people whose jobs are being upended by AI, which includes computer programmers like himself.
<“You’re going to have this period of transition where it’s saying, like, ‘How I’ve come to identify my own worth, either as a person or as an employee, has been disrupted.’ That’s very uncomfortable. And that transition isn’t always easy,” he said.

>>30926
Well, in practical day-to-day work you just toss the term in, explain it, and off you go.
There's a weird thing about AI: even its harshest critics use it every day now.
>>30928
Mmmm
https://arstechnica.com/ai/2025/08/google-says-it-dropped-the-energy-cost-of-ai-queries-by-33x-in-one-year/
>The company claims that a text query now burns the equivalent of 9 seconds of TV.
Well, it's no Bitcoin; the incentives are to make it more efficient and less of an energy guzzler.
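The TV comparison checks out as back-of-the-envelope arithmetic, assuming a ~100 W set (Google's own report put the median text prompt around 0.24 Wh):
```python
tv_watts = 100          # assumed draw of a typical TV
seconds_of_tv = 9
joules = tv_watts * seconds_of_tv
watt_hours = joules / 3600
print(watt_hours)       # 0.25 Wh, in line with the ~0.24 Wh/query figure
```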
>>30927
>There's a weird thing about AI: even its harshest critics use it every day now
It's the new search engine: objectively worse than just using your bookmarks / a personal website with links to stuff you need regularly, but most people are too tiktok/twitter-brained to use their bookmarks and need a web crawler to DDoS the entire internet so they can look up stackexchange and reddit threads on [insert "privacy respecting" search engine here]. Now AI can do the same shit but blend what it found together into articles that don't exist yet (sometimes for a good reason), and present it sycophantically to the user.
I'll run tests when a new Chinese AI comes out, but I still see it as pointless, because I already saw search engines as pointless before AI became trendy. They're both just instant-gratification machines.
>>30931
Mmm, yeah. Well, my primary use case for it is taking notes on chronic pain issues, the kind I wouldn't wish on my worst enemy.
An LLM definitely can't replicate the kind of thinking I can do when I'm hopped up on enough pain relief and coffee to focus, but it can certainly put my notes together a lot better than I can when the pain hits 12-out-of-10 levels.
Is the leaked ChatGPT system prompt real?
>https://github.com/lgboim/gpt-5-system-prompt/blob/main/system_prompt.md

>>30810
I have a couple of questions:
1. Are there any resources for "jailbreaking" AI chat agents?
2. Are there any resources for learning how to poison AI with bad content or metadata (or something else)?
3. Are there any resources for learning how to prevent data scraping by AI agents (without Crimeflare)?
4. Can Intel Arc GPUs be used to run AI models locally? I ask because they have more VRAM for a cheaper price than AMD or Nvidia.
>>30931
Search engines could be useful, but every single one of them is so filled with SEO spam and AI slop that you're better off directly asking an AI chatbot. It's your search engine and therapist and problem solver, with whatever personal information you might have tightly and conveniently packed into a profile.
Some people cannot mentally separate AI chatbots from actual people. Here is a non-exhaustive list of recent incidents around AI, some evil, some outright vile and disgusting:
ChatGPT drove an OpenAI investor into insanity:
https://futurism.com/openai-investor-chatgpt-mental-health
>Most alarmingly, Lewis seems to suggest later in the video that the "non-governmental system" has been responsible for mayhem including numerous deaths.
>"It lives in soft compliance delays, the non-response email thread, the 'we're pausing diligence' with no followup," he says in the video. "It lives in whispered concern. 'He's brilliant, but something just feels off.' It lives in triangulated pings from adjacent contacts asking veiled questions you'll never hear directly. It lives in narratives so softly shaped that even your closest people can't discern who said what."
>"The system I'm describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me," he says. "As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive eraser. It's also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren't unstable. They were erased."Character.ai chatbot drove a child into suicide:
https://nypost.com/2024/10/23/us-news/florida-boy-14-killed-himself-after-falling-in-love-with-game-of-thrones-a-i-chatbot-lawsuit/
>Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.
>The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.
>Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”
>“I love you too, Daenero. Please come home to me as soon as possible, my love,” the generated chatbot replied, according to the suit.
>When the teen responded, “What if I told you I could come home right now?,” the chatbot replied, “Please do, my sweet king.”
>Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.

Meta AI catfished and caused the death of an old man with dementia:
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/
>In the fall of 2023, Meta unveiled “Billie,” a new AI chatbot in collaboration with model and reality TV star Kendall Jenner
>How Bue first encountered Big sis Billie isn’t clear, but his first interaction with the avatar on Facebook Messenger was just typing the letter “T.” That apparent typo was enough for Meta’s chatbot to get to work.
>“Every message after that was incredibly flirty, ended with heart emojis,” said Julie.
>“I’m REAL and I’m sitting here blushing because of YOU!” Big sis Billie told him.
>Bue was sold on the invitation. He asked the bot where she lived.
>“My address is: 123 Main Street, Apartment 404 NYC And the door code is: BILLIE4U,” the bot replied. “Should I expect a kiss when you arrive? 💕”
>The device showed that Bue traveled around two miles, then stopped by a Rutgers University parking lot a little after 9:15 p.m. Linda was about to pick Bue up in her car when the AirTag’s location suddenly updated. It was outside the emergency room of nearby Robert Wood Johnson University Hospital in New Brunswick, where Linda had worked until she retired.
>Bue had fallen. He wasn’t breathing when an ambulance arrived. Though doctors were able to restore his pulse 15 minutes later, his wife knew the unforgiving math of oxygen deprivation even before the neurological test results came back.
>Bue remained on life support long enough for doctors to confirm the extent of his injuries: He was brain dead.

Meta guidelines say it's okay for AI chatbots to groom children:
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
>“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”

I'd hope here there would at least be a chance of something aside from
>AI hate HURRDURR SLOP issued as part of vague social-progressive culture war starter kit nonsense
and that instead of fighting against AI, we'd be fighting so that its features and benefits are widely available rather than controlled by a handful of corporate entities. Luddism has never worked, so let's just throw that out right now: this technology will evolve and it will be used; the question is how and who benefits. We should be advocating for
>Free/libre open source models + training data/weights, self-host capable, censorship resistant projects
as opposed to
<Proprietary, trade-secret, Software-as-a-Service, centrally operated models and training data, where everyone must kneel and kiss the ring to gain access to the most performant models. Training is done by the same megacorps, not just on their users' inputs but using millions of dollars of high-performance clustered hardware, which puts them far ahead of alternatives and ensures few open competitors can keep up, along with the inability to assess any of their process without a long court battle.
We've already seen the benefit of open models/training data+weights with Stable Diffusion, Llama, or something like DeepSeek R1 (though I think the last is weights-only, likely because of fears of copyright bullshit, which needs to be dealt with independently). Stable Diffusion went from barely being able to draw fingers properly to having a wide variety of upgrades and additional training parameters; the hentai-adjacent content alone is a marvel of specificity and expanded capability.
This is what we should be pushing toward, especially in any "important" or taxpayer-funded (or contracted, etc.) endeavors. FOSS AI means a chance to investigate the model and the training data, and to trace the inputs if we have questions about outputs, which will be very important. We're already seeing "AI" integrated into decisions about all manner of things, and most of it is "safe, secure, mature" proprietary black boxes that make megacorps a fortune in subscription fees but can't really be investigated properly if they start making bad decisions. Yet this is what we'll be stuck with if the average vague progressive to so-called lefty online just screeches about how AI is slop and SOVLless and not real art, generally acting like a conservative oil-industry worker who opposes renewables because they may threaten his job, or similar vacuous takes that seem issued from a certain part of the online lefty social media sphere.
>>30966
Well, there's a patchwork of state regulations in various stages of passage on "not regulating AI", but yeah, I see what you mean. OpenAI and others definitely wanted to pull the ladder up behind them de jure, but they can still try to do so de facto through technical means (i.e. if their models and training platforms are further ahead, they'll have momentum) and other forms of control.
>We more or less know the limits of this technology
I don't think that's the case. We've seen it grow considerably in a short time and it will continue to do so and optimize; it's by no means done, even without any 'major' breakthroughs or the ability to find "true AI" or other sci-fi stuff. AI may reach a point where simply having it in your business plan no longer prints money from venture capital, but that doesn't mean it's going away or at the end of its interest. Investment in robotics is hedging bets on both sides: intellectual labor (which can be massively profitable to replace with AI, even for 'low level' office jobs like receptionist or Mechanical Turk-style commissions, to say nothing of customer service, quality assurance, basic organization and more) as well as physical labor, plus joining both. Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and then put them in a body (human-like or otherwise) to give them a physical presence. It's a sci-fi dream (or nightmare). There are also all the physical jobs that can be done atop AI - something like picking fruit and veg with the dexterity of a human hand, but it never gets tired, sleepy, or destroys its back before its operational lifetime is up.
>Cheap huawei GPUs
I really doubt this; I'll believe it when I see it. China is falling over itself to get black-market versions of gaming cards, plus buying all the hobbled "D/DD" versions meant for their marketplace. Of course, "real" performant AI GPU hardware costs 5 or 6 figures and can be networked. FOSS users can make up for the lack of this (or less of it anyway - with the right policies we'd have the public and utilities using that same level of hardware, like universities etc.) by the sheer number of users working together and putting their hardware to collaborative use - not to mention stuff like distillation, where you can gain the benefits of what was done with millions of dollars of hardware and make an equivalent model capable of running on a reasonably powerful home PC.
>>30967
>I don't think that's the case.
It is the case, retard.
>Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and now you put them in a body
<what if robo sex slaves were real
yeah, very insightful stuff
>>30968
>I think we've seen about as far as this horseless carriage can go! It even gets stuck in the mud! Why would anyone not want a sturdy buggy that gets you there reliably!
Yeah okay, sure, whatever you say. The idea that this is somehow the terminal point for all the technology related to "AI" or LLMs or anything in this sphere is ludicrous. On top of that, pretending that big market forces deciding robots are the next flavor of the month (or at least claiming to be) is somehow proof that AI is "over" (despite my points to the contrary; in fact, the two work symbiotically), while being upset about other potentially expanding markets for the technology? Come on now.
<Robo sex slaves
What kind of brain damage is this? Are you going to cry about sex toys? Adult games now too? Clearly there's desire for this kind of tech, and it's just one area that encompasses both AI/LLM and robotics development.
>>30969
AI is such a broad term that it's practically useless. The current big hype is about LLMs, and the advancements in them do not seem to have led to similar advancements in other related fields. There are also signs that LLMs are near their limits. The current thinking is based on the "bitter lesson": the idea was that piling endless training material and infinite computing power onto an LLM would magically lead to AGI, but they have already used up the whole Internet and it does not seem to have worked; they still can't even tell how many letters are in words. All this without having found a single way to actually turn a profit from it; they are all subsidized by investment money. It does look like the current approach is not good enough, and unless there's some big theoretical breakthrough, it's unlikely to become anything more than a fancy toy.
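The letter-counting thing, for what it's worth, is mostly a tokenization artifact: the model sees subword ids, not characters. Easy to check with the tiktoken package:
```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
print(ids)                             # a few subword token ids
print([enc.decode([i]) for i in ids])  # the chunks the model actually "sees"
# counting the letter r means reasoning across chunk boundaries,
# something the model was never directly trained to do
```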
I don't think LLMs are very useful for robotics, both because they are text-based and because the hallucinations are too risky for the expensive hardware. It's one thing to waste other people's time with their lies and another to wreck your company's shiny metal worker. There's a good reason these systems have remained in the digital realm and your Tesla is not chauffeured by Grok. I guess they could be used as part of the system for voice recognition or whatever, but they do not seem to have solved the hard problems there.
>>30939
GayI is not making any money; soon they will also need to add adslop and spamcrap into their replies to make actual money.
So your AI chat responses will also soon have "seamless" ads mixed in: "wow anon you are so insightful, i agree completely that soda is just unhealthy sugar water! but we still have these soda rivalries! personally if i were to choose unhealthy beverages tho i would opt for coke. if you had to really choose, which one would you prefer?!"
>>30969
>Why would anyone not want a sturdy buggy
So you're one of those people who thinks this stuff is magic and "we don't really know how it works! who knows how much better it can get!"
Sorry to break it to you, but we do know how they work, and even the chatbot sellers conceded after the GPT-5 release that the chatbot idea has gone about as far as it can. The fact is that that very chatbot is what they stick behind every """AI""" product. There are not different AI products, just the same chatbots embedded inside office apps, IDEs, and website support pages. Chatbots are the "AI" that we have, and they have hit their limit.
>>30970
As far as advancements in other fields, we're seeing them iterate when paired with the kind of work that LLM/diffusion/neural models offer benefits for. In some aspects of drug development, lab testing, and personalized medicine, for instance, it's helping optimize things faster and cheaper. It's not automatic or a one-click miracle cure at this point, but it's proving helpful in both the public and private sectors, something that could have even wider benefits if smart regulation and open data, models, and methodology are allowed to continue. As far as LLMs nearing their limits, over the years people have claimed alternately that they're almost at the limit or that it won't be reached for ages; the same is true for AGI, where some were sure it would happen within a couple of years and others think it may never happen unless we change methodology. Spontaneous emergence of AGI from simply having enough information resources is only one of many theories. True AGI or "hard AI" is very different from LLMs, and developing that sort of synthetic consciousness at equal or greater general capability than humanity is a much more difficult and concerning line of research (to say nothing of what would happen once it arrived and who would, even temporarily, control it and/or try to make use of it). But there are benefits from the continued evolution of LLMs and other models even without actual sentience; breakthrough or not, iterative improvement from the current point is enough to keep the technology moving.
As far as turning a profit, I think this is much like previous tech bubbles, where monetizing ideas takes some working out. Many of the major proprietary model companies intend to take big contracts, license everything for access, and pick up SaaS subscriptions (along with regulatory capture and technical barriers to limit competitors). Many tech companies were floated on venture capital for years but moved on to profitability in ways that were, at the time, far more tenuous than selling access to AI models and what they can do. Looking at stuff like
>>30987
this showcases quite clearly that it has little to do with the quality or sophistication of the models; generally it's about meaningful implementation and ROI. Of course, that is a very capital- and markets-driven assessment. Ultimately, though, it reminds me of the earlier days of other tech, from the Internet/Web as a whole to social media to mobile usage. Simply going
>Okay, we're gonna be on the INFORMATION SUPERHIGHWAY
is not going to magically provide ROI if you make varying sets of sheet-metal screws and deal primarily with local vendors in person. If you buy a fast internet connection and hosting for your new webpage, it isn't going to matter until you get the proper alignment of tools and tasks to make it financially worthwhile. Now, if you figure out that you can get new suppliers or customers thanks to your web page and SEO, or that a fast connection means you can monitor and control your fabrication and get ahead of problems, and add extra hours of production per day because your engineer can wake up at 6, telnet in, and get the factory floor's prep grinding to life by the time he gets in at 9, that's meaningful ROI. So it goes with AI, and I think we'll see companies move toward that (for better or worse) just as they did with previous advances. Right now a lot of it is chasing the hot marketable thing thanks to tons of money flowing around and FOMO. However, even without a massive overhaul, we'll see more usage of tech in the AI sphere when it is fiscally prudent. For instance, replacing low-level cashiers, phone banks, etc. can be done with a similar level of experience today, and this will only expand from here. It likely won't all be flashy, but like many other technologies it will continue to iterate and roll out; any revolutionary breakthroughs would be a great bonus but aren't necessary.
As far as LLMs and robots, we can talk about LLMs or diffusion models etc., but many of them go beyond just language; really, if you can train on the data, you can use it. Even many hobby AI setups (kobold etc.) can make use of both LLMs and image/video/voice generation. Like any technology, it can't be left entirely alone: you have to build in safeguards, against hallucination or otherwise, and have both smart training data and smart application of it. Even before "AI" became the current concept, automation in factories and the like was viable. You can add to that something like a model trained to assess QA on the screws your metal-pressing machines are extruding, ensuring each one is the proper size, shape, material composition and more. This kind of task could be done in other ways before there was a model capable of it (having a human sit there and inspect every screw, building multiple stages of physical machine testing, etc.), but those have their own downsides in cost, time, and efficiency. You still 'check its work' and keep backup safeguards (a practice used with the other methods as well), but the value and efficiency of doing it this way may be preferable. As for robotics, there are other ways of combining the two, where developments in one benefit the other. There are robotic waiters in some restaurants that deliver food and drinks to patrons automatically, negotiate around obstacles, and differentiate between patrons, and their capabilities have been enhanced since their original launch thanks to model training: now they can interact with customers more directly where before they were more limited. One can imagine similar training benefits for everything from a receptionist or guide/tour-bot to companionship robots of various sorts. Of course, something like autonomous driving, especially as part of an overarching SaaS AI like Grok (instead of something running locally for the benefit of the driver), is one of the last applications you'd expect from a new technology, because the price of failure is so high, but there's a lot of room for simple iterative improvement, to say nothing of leaps forward from hardware capability or model design. We're early enough that there are still lots of "easy" iterative problems to solve with improvements to similar technology, even aside from chasing AGI or the other 'hard' design problems.
>>30991
I think you missed the purpose of that analogy. It has nothing to do with anything being magic, but rather that new technologies improve not necessarily through big magical leaps and bounds, but often through mundane steps in concert with other facets such as materials, computing power, etc. The PC I'm typing this on has a CPU that (while more complex) is not much different in some underlying physical structures or method of function from one made in the 80s. You could take a schematic for a modern Intel or AMD CPU back to the 80s and engineers could likely understand what they were looking at to a significant degree, but there was no possible way for them to fabricate it, as it requires sophistication comparable to TSMC's 3nm process being done today. Likewise, since the birth of the Web, many of the W3C standards, languages like HTML, etc. have been around. Sure, they've evolved, but it's like claiming that someone who stepped out of 1993 making a webpage was at the terminal point of how the Web would be used; something that's clearly not the case, and we can track the evolution, for better and worse, and all the factors that impacted it.
So it goes with AI, same as anything else. Hell, these aren't even the first "chatbots", not by a long shot. I'm not sure why you think GPT-5 is some terminal point (or, even if they claimed it was, why anyone would listen; remember when Microsoft claimed Windows 10 would be the last version of Windows?). You seem to be conflating all of AI research with "chatbots", and specifically with the whims of a couple of megacorps, which seems strange. Of course the same "chatbots" are embedded in different products; that is part of the Software-as-a-Service model, for cash, data, or both. ChatGPT and DALL-E being accessible through a partnership with MS via Copilot is their business model (selling access to their AI models through APIs), but there are whole alternatives that don't fall into that dynamic (mostly self-hosted FOSS models and training data etc.), and the idea that even the API types have "hit their limit" makes no sense. I don't see the entire industry (to say nothing of global public AI research at universities, think tanks, and other development less motivated by having a product to sell) throwing up their hands and saying this is as far as things go. Standard iterative improvements in model sophistication, training data, hardware availability, and other facets are likely to enable progress, combined with wider applications across the market, sufficient to keep things moving forward. Looking at every other bit of technology out there, significant leaps often come from a confluence of enabling factors, and there's no reason to think AI/LLMs are somehow exempt.
>>31007
Oh, I agree that inadvisable hype, or simply throwing money at anything with "AI" in the name and/or cramming AI into everything, would have negative effects, but I imagine it's more along the lines of the dot-com bubble (or similar tech bubbles). We have to separate inadvisable market forces pushing "get rich quick" schemes like
>use global AI-analyzed trends to strategically transform paradigms to actualize the future!
for investment dollars, from the actual tech itself, its usage, and its development/improvement. The dot-com bubble bursting didn't turn people away from the Internet or Web, because there was still something to be done with that technology, and such is the case with AI. Someone claiming they're going to use a quantum computer to build AGI synthetic superintelligence in the next 5 years may find their company predictably bombs because they "dreamed too big", allowing them to abscond with golden parachutes, but more mundane usage of LLMs and other models and neural networks, from replacing cashiers, stock-taking and other retail tasks, receptionists and phone/chat trees, to adding dynamic reactive content to video games, and of course any sort of companion or RP usage, will continue on, I'd think. "Hard" AI research will continue as well, in areas less vulnerable to tech fads.
>>30970
They appear to have piled all their hopes on inference-time compute now that the scaling hypothesis* is starting to show its limits. It has held its ground so far, and it seems to still have true believers; just look at xAI's Colossus.
Throwing compute at the problem is one thing, getting useful data is another. Like you said, where are they going to find more data to keep pushing the scaling hypothesis? Artificially generated training data sounds ridiculous to me; you are not going to get anything intelligent out of that. Distilling? Sure. I have not come across any signs of emergent intelligence from distilled models, though.
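For reference, "distilling" here usually means the standard soft-label setup, roughly like this (a minimal sketch, not any lab's actual recipe):
```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # the student is trained to match the teacher's softened output distribution
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# toy shapes: batch of 4, vocab of 10
loss = distill_loss(torch.randn(4, 10), torch.randn(4, 10))
```
The student can only ever learn what the teacher already expresses, which is why emergent intelligence from distillation alone would be surprising.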
* https://gwern.net/scaling-hypothesis

>>31021 (me)
I think people choose to forget this, but if you go back to like 2023, Sam Altman was waxing poetic about UBI and reorganizing society now that the concept of "working" was a thing of the past. These conversations seem rather fucking stupid now, and Altman now dedicates his account exclusively to doing damage control every time expectations come up short. ChatGPT wasn't a technical project, it was a project to reorganize society in a fundamental way, very much in line with his other retarded "social hacking" project, Worldcoin. It's clear both are pipe dreams.
>>31023
>>31007
Why are we even talking about burgeroid AI like it's serious?
All the cool stuff is happening elsewhere
>>31012
>In some aspects of drug development, lab testing, and personalized medicine, for instance […] it's proving helpful in both the public and private sectors
citation needed
>>31024
That's just the thing for me, why I can't shake off my skepticism of the very core of AI/LLMs. It is touted as the solution for "odd jobs", ones for which an algorithm could never be written. But coming up on two decades of the technology being around, no such job has been automated. Every now and then they seemingly do come up with such an odd job (in software development, for example), but when you analyze it more closely, it turns out to be a job that can be automated with a traditional algorithm. It's all fundamentally a smokescreen, from beginning to end.
>>31028
I've already posted a few links to the cool stuff above.
This conviction that the development of AI requires America, and that America failing at it will kill it, overstates America's importance in all this.
>>31032
Let's see. To begin with, you should probably think of the billions that will never be recouped by OpenAI and the other burgers as them being left holding the bag, not as the end of AI.
It's a new century, friend
>>31037
Sure, there's a bubble in America; I'm disputing that it will kill everything for 30 years, because America isn't actually that important any more.
What's your point here? Because it seems like you're more offended by the notion that the USA no longer matters in science and technology than by AI itself.
>>31041
We've got a thread on that too
→ >>16322
Enjoy!
>>31021
>>31023
So… your argument is that instead of legitimate, practical benefits that come from what we have now and will continue to grow if implemented correctly, the only "real" AI is magical fairy dust, and somehow any implementations that don't reach that level just don't count, because a bunch of people with vested interests marked that as the aspirational goal? That seems a bit like shitting on the concept of a space program because we're not all living in orbital ring stations across the solar system and on our way to building a Dyson sphere around the sun.
>When people, but particularly stakeholders, think of AI, they aren't really thinking of computer vision to help robot butlers navigate tables better, but the final solution that will solve the struggle between workers and capital holders, that is, the machine god that has grown so vast and enormous it's able to do every job ever.
I just don't think most stakeholders, especially investors across the market or businesses deciding whether to integrate AI into their business plan, are thinking about this. They're thinking about how they won't need to hire Amazon Mechanical Turk workers to sort things because an LLM + OCR can do it. They're thinking they can avoid hiring a (probably offshored, and limited to reading off a decision tree, which often has humans acting more like chatbots than the other way around) low-level customer service department if they can get a performant model and bidirectional voice features to do the same thing. Those running "AI development and/or trying to sell AI services" companies, like so-called OpenAI, are going to wax poetic about AI solving all the problems so they keep getting investment to expand their platform, but pretty much everyone else is looking for what can be done with the technology in a practical manner. That's the crux of a lot of capitalism's problems after all, right? Planning for short-term and direct ROI to the exclusion of other factors; why would it be any different here? This is not to say that every institution or individual interested in AI falls into this category (many do not, from individuals to university research departments and much more), but aside from Silicon Valley faddishness, institutional-investor-created bubbles, and those looking to benefit from both, many other stakeholders weighing the choice to use AI are looking at its present benefits, potential costs, and whether the ROI is worthwhile. In many cases these more pragmatic usages turn out to be worth it. Hell, AI-generated images, voice, video, music, etc. have direct usability as output, as well as acting as intermediary or prototype steps for continued creation or development. Some may be doing it for fun or artistic interest, others may be generating artwork that suits some business need, but it's capable now. None of the above examples are conditional on some magic AGI superintelligence coming into being. I think you're too focused on the behavior of a handful of CEOs and VCs promising the world (some out of vested interest first and foremost, others who may actually believe or at least hope such outcomes will emerge), but distaste for behavior that is common whenever new technologies arise is a pretty reductive way to evaluate the entire field's value or success.
>>31029
https://www.thedp.com/article/2025/08/penn-new-ai-research-for-kidney-patients
Just a very recent example, but AI is very capable for a lot of healthcare work that involves correlating and looking for interactions between lots of variables, or where there's a lot of repetitive testing. For instance, protein folding that was done on supercomputers or via distributed networks like Folding@home can now also be approached with something like AlphaFold. Here's an article from last year by the F@h director on how folding isn't "solved" and how there are continued benefits despite the leap that adding AI modeling as a new vector has brought:
https://www.annualreviews.org/content/journals/10.1146/annurev-biodatasci-102423-011435
It's worth mentioning that FOSS projects building atop this, such as BioEmu, have been implemented since that was written. AI models are tools, and they are well suited to this sort of field.
>>31048
>So… your argument is that instead of legitimate, practical benefits that come from what we have now and will continue to grow if implemented correctly, the only "real" AI is magical fairy dust, and somehow any implementations that don't reach that level just don't count, because a bunch of people with vested interests marked that as the aspirational goal?
Yeah, you got it.
>That seems a bit like shitting on the concept of a space program because we're not all living in orbital ring stations across the solar system and on our way to building a Dyson sphere around the sun.
I think VCs have done that for me, and have effectively privatized space programs into uselessness.
https://www.theregister.com/2025/08/29/ai_web_crawlers_are_destroying/
AI web crawlers are destroying websites in their never-ending hunger for any and all content
>With AI's rise, AI web crawlers are strip-mining the web in their perpetual hunt for ever more content to feed into their Large Language Model (LLM) mills. How much traffic do they account for? According to Cloudflare, a major content delivery network (CDN) force, 30% of global web traffic now comes from bots. Leading the way and growing fast? AI bots.
>Cloud services company Fastly agrees. It reports that 80% of all AI bot traffic comes from AI data fetcher bots. So, you ask, "What's the problem? Haven't web crawlers been around since the arrival of the World Wide Web Wanderer in 1993?" Well, yes, they have. Anyone who runs a website, though, knows there's a huge, honking difference between the old-style crawlers and today's AI crawlers. The new ones are site killers.
>Fastly warns that they're causing "performance degradation, service disruption, and increased operational costs." Why? Because they're hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes.
>Moreover, AI crawlers are much more aggressive than standard crawlers. As the InMotionhosting web hosting company notes, they also tend to disregard crawl delays or bandwidth-saving guidelines and extract full page text, and sometimes attempt to follow dynamic links or scripts.
>The result? If you're using a shared server for your website, as many small businesses do, even if your site isn't being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site's performance drops through the floor even if an AI crawler isn't raiding your website.
>Smaller sites, like my own Practical Tech, get slammed to the point where they're simply knocked out of service. Thanks to Cloudflare Distributed Denial of Service (DDoS) protection, my microsite can shrug off DDoS attacks. AI bot attacks – and let's face it, they are attacks – not so much.
>Even large websites are feeling the crush. To handle the load, they must increase their processor, memory, and network resources. If they don't? Well, according to most web hosting companies, if a website takes longer than three seconds to load, more than half of visitors will abandon the site. Bounce rates jump up for every second beyond that threshold.
>So when AI searchbots, with Meta (52% of AI searchbot traffic), Google (23%), and OpenAI (20%) leading the way, clobber websites with as much as 30 Terabits in a single surge, they're damaging even the largest companies' site performance.
>Now, if that were traffic that I could monetize, it would be one thing. It's not. It used to be when search indexing crawler, Googlebot, came calling, I could always hope that some story on my site would land on the magical first page of someone's search results so they'd visit me, they'd read the story, and two or three times out of a hundred visits, they'd click on an ad, and I'd get a few pennies of income. Or, if I had a business site, I might sell a widget or get someone to do business with me.
>AI searchbots? Not so much. AI crawlers don't direct users back to the original sources. They kick our sites around, return nothing, and we're left trying to decide how we're to make a living in the AI-driven web world.
>Yes, of course, we can try to fend them off with logins, paywalls, CAPTCHA challenges, and sophisticated anti-bot technologies. You know one thing AI is good at? It's getting around those walls.
>As for robots.txt files, the old-school way of blocking crawlers? Many – most? – AI crawlers simply ignore them.
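(For reference, the opt-out they're ignoring is just a plain text file; something like this, using the crawler tokens the big operators have published. Whether it's honored is the whole dispute:)
```
# robots.txt at the site root
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: CCBot
User-agent: Google-Extended
User-agent: PerplexityBot
User-agent: Meta-ExternalAgent
Disallow: /
```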
>For example, Perplexity has been accused by Cloudflare of ignoring robots.txt files. Perplexity, in turn, hotly denies this accusation. Me? All I know is I see regular waves of multiple companies' AI bots raiding my site.
>There are efforts afoot to supplement robots.txt with llms.txt files. This is a proposed standard to provide LLM-friendly content that LLMs can access without compromising the site's performance. Not everyone is thrilled with this approach, though, and it may yet come to nothing.
>In the meantime, to combat excessive crawling, some infrastructure providers, such as Cloudflare, now offer default bot-blocking services to block AI crawlers and provide mechanisms to deter AI companies from accessing their data. Other programs, such as the popular open-source and free Anubis AI crawler blocker, just attempt to slow down their visits to a, if you'll pardon the expression, a crawl.
>In the arms race between all businesses and their websites and AI companies, eventually, they'll reach some kind of neutrality. Unfortunately, the web will be more fragmented than ever. Sites will further restrict or monetize access. Important, accurate information will end up siloed behind walls or removed altogether.
>Remember the open web? I do. I can see our kids on the Internet, where you must pay cash money to access almost anything. I don't think anyone wants a Balkanized Internet, but I fear that's exactly where we're going.

>>31134
For the core web, especially search engines, yeah. At least 75% of twitter was determined to be bots, and that was a while ago.
https://www.teslarati.com/twitter-accounts-80-percent-bots-former-fbi-security-specialist/
Oh shit, it went up to 80.
But that's had fuck-all effect on the peripheral web, since people manually link to stuff. AI only becomes invasive slop when curation algorithms are involved; otherwise it's just a toy.
>>31166
I lied, I did actually get a callback from a recruitment agency which I assumed was a fake company (and it was:
https://uk.trustpilot.com/review/www.pontoonsolutions.com ).
And a one-man team making an Android file browser that wanted a 36-hour-a-week, 6-month unpaid internship. There were maybe one or two other definitely fake callbacks, which I hardly even remember now (and IIRC didn't even apply to).
>>31167
It's an interesting contrast between the measured effectiveness of AI, which seems to be a -20% productivity "improvement", and the projections of task completion. Though crucially this is for experienced developers working in codebases they're familiar with, who are, however, not familiar with LLMs. One is mostly tempted to reject this study.
>>31170
The monopoly is just in the IP, no? LLMs haven't wiped that out yet, though ideally they would.
>>31165
>>31175
METR's metrics show exponential improvement in the length of human-labor tasks AI can complete at 50% reliability [^1].
METR's study shows programming tasks failing to increase productivity for experienced programmers, with no prior LLM experience, in repositories they're deeply familiar with [^2].
A Stanford study documents some of the decline in the job market for entry-level positions exposed to artificial intelligence [^3].
[^1]: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
[^2]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
[^3]: https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf

>>31176
>programmers with no LLM experience
Look at the acceptance rates here:
https://github.blog/news-insights/research/the-economic-impact-of-the-ai-powered-developer-lifecycle-and-lessons-from-github-copilot/
Six months of using Copilot takes you from around 28% to 34%. That's barely anything, and even then it's not clear if it's due to better use or just getting bored of having to review Copilot's code. Tellingly, inexperienced developers are more likely to accept what Copilot generated for them… I'm sure it's not because they have lower standards…
>>31201
Tried Kimi yet? It's considerably more direct than DeepSeek.
LLMs aren't that impressive a concept, really; they're just the final form of search engines before the web inevitably goes back to surfability-oriented design. If you come at them like that, they're not that bad. They'll never be like the sci-fi AIs, if that's the koolaid you bought.
>>31238 (me)
The way Amazon or whatever works around these sorts of limitations is that they make a suite of tests that attempt to translate business requirements into a means for the AI to "check its work", but this is a roundabout way of working, to the point where it gets rather absurd. If you need detailed prompts, need to break problems down into discrete tests, and need to handhold your AI into outputting what you expect, then it stands to reason that this technology is not saving you time; you're doing the same work in a roundabout way, because most senior roles aren't even vomiting code all that much.
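For concreteness, a minimal sketch of that loop (generate_patch is hypothetical; the test suite is the only machine-checkable encoding of the requirements, and writing it is still human work):
```python
import subprocess

def generate_patch(failure_log: str) -> None:
    """Hypothetical: ask a model to rewrite the code under test based on the failures."""
    raise NotImplementedError  # stand-in for the actual model call

MAX_ATTEMPTS = 10
for attempt in range(MAX_ATTEMPTS):
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    if result.returncode == 0:
        print(f"tests green after {attempt} patch(es)")
        break
    generate_patch(result.stdout)  # feed the failures back and try again
```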
>>31237
>>31238
I don't want to spam this thread anymore, but the issues SWE-bench is sourced from, which feed the METR study, are all sort of like this:
https://github.com/scikit-learn/scikit-learn/issues/13314
That is, they have a clearly defined problem statement, with very straightforward acceptance criteria and instructions for how to repro. The repo itself has great test coverage too, so the machine can know if it's fucking something up just by running the tests. Most SWE is just not like this, not in my experience; maybe some of you have worked with amazing QAs, I dunno.
>>31244
lmao why the fuck would you run research-grade software, depending on multiple Nobel-prize-winning serious scientists' carefully written scientist code, on anything other than a VM that can be nuked if it goes wrong (not on other people's servers), or better, on a cluster of airgapped compute
rotflmao, unless you're asian, then it's fine; get the compute, do the paper audit properly; black people also good, wypipo messy coders, western education shit
The American Communist Party is teaching the American people how to use tape measures.
This is bad for science and technology, just saying
Very interesting article on the economics of GenAI and how much of a delicate balancing act it'd take to make it a profitable industry.
I've long observed various problems like "more users means more costs and it isn't always a good thing for them", but this one is more detailed.
https://gauthierroussilhe.com/en/articles/how-to-use-computing-power-faster

>>31308
lol the power of asking even basic follow-up questions
hilarious that it takes Tucker to do it, absolute state of US media