The other thread hit bump limit and I'm addicted to talking about the birth of the ̶a̶l̶l̶-̶k̶n̶o̶w̶i̶n̶g̶ ̶c̶o̶m̶p̶u̶t̶e̶r̶ ̶g̶o̶d̶ the biggest financial bubble in history and the coming jobless eschaton, post your AI news here
Previous thread:
>>27559 >>30811
I stopped using search engines and use my bookmarks instead, and I find it trivial to find non-AI works to read. AI is just better at getting noticed by the curation algorithm. Without the algorithm, the AI shit disappears.
>>30831
Well, American AI is done. I have high hopes for China long term, at least for producing the technological base for a world without drudgery; they're getting there slowly.
It's interesting how capitalism can deform even a people's republic and a workers' co-op.
The reason DeepSeek got the amazing results they did is that they put in the hard work of studying the hardware they were running on. From a pure research point of view they should be working directly with the engineers of the new domestic chipsets to design the next generation. Instead they're being whipped into working directly on Huawei, the national-champion workers' cooperative. That looks good as a leftist position, and aesthetically it's great; the problem is that you need a few years of familiarity with the chips in question, or to be directly involved in designing them, to squeeze that kind of performance out of a specific architecture.
Inefficiencies like this, which make everybody's lives just that little bit worse, abound under capitalism.
>>30851
Why programmers? Their job is actually pretty complex.
>>30847
Customer service workers are already required to follow scripts in their interactions, so you would think it would be easy to automate. Plus there are no hardware requirements there (other than computing…); it's not like warehouse work, where you actually have to interact with the real world.
https://youtu.be/xWYb7tImErI
Also belongs in >>>/edu/ when I find which thread to file it in
>>30847
>but not to management/supervisor type jobs which would be much more straightforward to automate?
That's what these systems were designed to replace when they were first being cooked up in labs a quarter of a century ago.
Every context switch to deal with a perfectly valid and important question from a student or faculty member could cost hours on other important work if the technical details of what was being worked on were complicated enough.
That's what these things were designed for: to replace that managerial, supervisory work.
>>30861 (Me)
Yes, and as a cool side effect the chatbot will take a terse correct response and turn it into a verbose explanation 😎
>>30872
That's fine for artisanal work, haha
>>30867 (Me)
>>30868 (Me)
If anything, the best use case is instructor tuning: working on a model for a field, then having a handful of learners kick the tires, consulting each other and the instructor (rough sketch of the tuning below).
Thank you, venture capitalists, for throwing billions into a research instrument that would otherwise never have been built.
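Purely as a sketch of what that instructor tuning could look like, assuming the Hugging Face transformers/peft/datasets stack; the model name and file paths here are placeholders, not real artifacts:

# Hypothetical sketch: an instructor LoRA-tunes a small open model on their
# own course notes so a handful of learners can run it locally.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "some-small-open-model"  # placeholder checkpoint name
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adapters keep the fine-tune cheap enough for one consumer GPU.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"]))  # module names vary by model

notes = load_dataset("json", data_files="lecture_notes.jsonl")["train"]
notes = notes.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                  remove_columns=notes.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="field-model", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=notes,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()

Nothing about that needs a datacenter; the expensive part was already paid for when the base model was trained.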
>>30878
The instructor-learner system was never planned to scale beyond 20 people; maybe it'll scale 🤷
>>30859
>They are not feeding compiler errors back.
Likely they already have evaluators within their architecture; I don't see why they wouldn't.
>natural language or images, should be easier for these kind of systems to master.
This is silly, because it should be the exact opposite: CFGs are easier for computers to parse, understand, and predict than natural language, since they can be recognized by simple pushdown automata rather than the loose, ambiguous grammar humans use. It should be easy for a neural net to match a piece of code to an intended result without evaluating it, since there's no ambiguity. Now, I'm NOT saying that LLMs will replace devs, just that this is the most promising use case, technically and return-wise, and if it doesn't work out, then LLMs are pretty much fucked.
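To make the "no ambiguity" point concrete, here's a toy recursive-descent parser in Python for a made-up expression grammar; every valid input has exactly one parse tree, which is exactly the property natural language lacks:

# Grammar (hypothetical, for illustration):
#   expr := term (('+' | '-') term)*
#   term := NUMBER | '(' expr ')'
import re

def tokenize(src):
    return re.findall(r"\d+|[()+\-]", src)

def parse_expr(tokens, i=0):
    node, i = parse_term(tokens, i)
    while i < len(tokens) and tokens[i] in "+-":
        op = tokens[i]
        right, i = parse_term(tokens, i + 1)
        node = (op, node, right)  # left-associative, by construction
    return node, i

def parse_term(tokens, i):
    if tokens[i] == "(":
        node, i = parse_expr(tokens, i + 1)
        assert tokens[i] == ")", "unbalanced parenthesis"
        return node, i + 1
    return int(tokens[i]), i + 1

print(parse_expr(tokenize("1+(2-3)+4"))[0])
# ('+', ('+', 1, ('-', 2, 3)), 4): one and only one reading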
>>30878
That sounds like a disaster. LLMs have no internal coherency, and without that, good luck making a model of a field:
https://yosefk.com/blog/llms-arent-world-models.html
>>30880
>I don't see why they wouldn't
Because the mode of operation is just throwing more and more data at the problem and then hoping that the model magically figures things out on its own.
>This is silly because it should be the exact opposite
It is counterintuitive, but LLMs work on plain text. The clear structure of programming languages is lost on them. The model could recognize it, but there's no guarantee that it will. It is not generating well-formed ASTs; it's generating plain text.
Also, most programming languages do not have context-free grammars.
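You can see the mismatch directly. A sketch, using OpenAI's tiktoken library as a stand-in for whatever tokenizer a given model actually uses, next to Python's own ast module:

# What an LLM sees (flat subword tokens) vs. what a compiler sees (an AST).
import ast
import tiktoken

src = "total = price * (1 + tax_rate)"

enc = tiktoken.get_encoding("cl100k_base")
print([enc.decode([t]) for t in enc.encode(src)])
# roughly: ['total', ' =', ' price', ' *', ' (', '1', ' +', ' tax', '_rate', ')']
# identifiers split on subword boundaries; no structure, just a token stream

print(ast.dump(ast.parse(src)))
# Module(body=[Assign(targets=[Name(id='total', ...)], value=BinOp(...))], ...)
# the compiler's view: nesting and precedence are guaranteed by construction

The model only ever predicts the next item in the first representation; the second one exists nowhere in its training objective.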
https://www.wired.com/story/ai-comes-up-with-bizarre-physics-experiments-but-they-work/
>He and his team turned to AI—in particular, a software suite first created by the physicist Mario Krenn to design tabletop experiments in quantum optics. First, they gave the AI all the components and devices that could be mixed and matched to construct an arbitrarily complicated interferometer. The AI started off unconstrained. It could design a detector that spanned hundreds of kilometers and had thousands of elements, such as lenses, mirrors, and lasers.
>Initially, the AI’s designs seemed outlandish. “The outputs that the thing was giving us were really not comprehensible by people,” Adhikari said. “They were too complicated, and they looked like alien things or AI things. Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.”
>The researchers figured out how to clean up the AI’s outputs to produce interpretable ideas. Even so, the researchers were befuddled by the AI’s design. “If my students had tried to give me this thing, I would have said, ‘No, no, that’s ridiculous,’” Adhikari said. But the design was clearly effective.
>It took months of effort to understand what the AI was doing. It turned out that the machine had used a counterintuitive trick to achieve its goals. It added an additional three-kilometer-long ring between the main interferometer and the detector to circulate the light before it exited the interferometer’s arms. Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise. No one had ever pursued those ideas experimentally. “It takes a lot to think this far outside of the accepted solution,” Adhikari said. “We really needed the AI.”
>If the AI’s insights had been available when LIGO was being built, “we would have had something like 10 or 15 percent better LIGO sensitivity all along,” he said. In a world of sub-proton precision, 10 to 15 percent is enormous.
>“LIGO is this huge thing that thousands of people have been thinking about deeply for 40 years,” said Aephraim Steinberg, an expert on quantum optics at the University of Toronto. “They’ve thought of everything they could have, and anything new [the AI] comes up with is a demonstration that it’s something thousands of people failed to do.”
>>30891
Actual paper:
https://arxiv.org/pdf/2312.04258
This seems to be some kind of specialized optimization problem, not "generative AI".
>Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago
No it wasn't; it just explored the search space and happened upon designs that work as a result.
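For what it's worth, "explored the search space" needs no intelligence at all. A toy illustration; the component catalogue and scoring function here are made up, and the actual paper uses a far more sophisticated optimizer over a real physics simulation:

# Random search over hypothetical interferometer layouts. The point is only
# that blind exploration can land on designs no human would sketch.
import random

COMPONENTS = ["mirror", "lens", "laser", "beamsplitter", "ring_3km"]  # made up

def noise_score(design):
    # stand-in objective; the real one would be a physics simulation
    return sum((hash(c) % 97) / 97 for c in design) / len(design)

best, best_score = None, float("inf")
for _ in range(100_000):
    design = random.choices(COMPONENTS, k=random.randint(3, 12))
    score = noise_score(design)
    if score < best_score:
        best, best_score = design, score

print(round(best_score, 3), best)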
https://fortune.com/2025/08/20/openai-chairman-chatgpt-bret-taylor-programmers-ai/
OpenAI’s chairman says ChatGPT is ‘obviating’ his own job—and says AI is like an ‘Iron Man suit’ for workers
>Over two decades, Bret Taylor has helped develop some of the most important technologies in the world, including Google Maps, but it’s AI that he says has brought about a new inflection point for society that could, as a side effect, do away with his own job.
>In an interview on the Acquired podcast published this week, Taylor noted that despite his success as a tech executive, which includes stints as co-CEO of Salesforce, chief technology officer at Facebook (now Meta), and now chairman of OpenAI, he prefers to identify as a computer programmer.
>Yet with AI’s ability to streamline programming and even replace some software-development tasks and workers, he wonders if computer programmers in general will go the way of the original “computers,” humans who once were charged with math calculations before the age of electronic calculators.
>Taylor said the anguish over his identity as a programmer comes from the fact that AI is such a productivity booster that it’s as if everyone who uses it were wearing a super suit.
<“The thing I self-identify with [being a computer programmer] is, like, being obviated by this technology. So it’s like, the reason why I think these tools are being embraced so quickly is they truly are like an Iron Man suit for all of us as individuals,” he said.
>He added this era of early AI development will later be seen as “an inflection point in society and technology,” and just as important as the invention of the internet was in the 20th century.
>Because of AI’s productivity-boosting abilities, Taylor has made sure to incorporate it heavily in his own startup, Sierra, which he cofounded in 2023. He noted that it’s doubtful an employee is being as productive as they could be if they’re not using AI tools.
<“You want people to sort of adopt these tools because they want to, and you sort of need to … ‘voluntell’ them to do it, too. You know, it’s like, ‘I don’t think we can succeed as a company if we’re not the poster child for automation and everything that we do,’” he said.
>AI isn’t just software, Taylor said, and he believes the technology will upend the internet and beyond. While he’s optimistic about an AI future, Taylor noted the deep changes posed by the tech may take some getting used to, especially for the people whose jobs are being upended by AI, which includes computer programmers like himself.
<“You’re going to have this period of transition where it’s saying, like, ‘How I’ve come to identify my own worth, either as a person or as an employee, has been disrupted.’ That’s very uncomfortable. And that transition isn’t always easy,” he said.
>>30926
Well, in practical day-to-day work you just toss the term in, explain it, and off you go.
There's a weird thing about AI: even its harshest critics use it every day now.
>>30928
Mmmm
https://arstechnica.com/ai/2025/08/google-says-it-dropped-the-energy-cost-of-ai-queries-by-33x-in-one-year/
>The company claims that a text query now burns the equivalent of 9 seconds of TV.
Well, it's no Bitcoin; the incentives here are to make it more efficient and less of an energy guzzler.
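The "9 seconds of TV" claim checks out as back-of-envelope arithmetic, assuming a roughly 100 W television (the wattage is my assumption, not a figure from the article):

# 100 W is an assumed TV power draw
tv_watts = 100
seconds = 9
print(tv_watts * seconds / 3600, "Wh per query")  # 0.25 Wh

That lines up with the ~0.24 Wh per median text prompt that Google reports.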
>>30927
>There's a weird thing about AI: even its harshest critics use it every day now
It's the new search engine: objectively worse than just using your bookmarks or a personal website with links to the stuff you need regularly, but most people are too tiktok/twitter-brained to use their bookmarks and need a web crawler to DDoS the entire internet so they can look up Stack Exchange and Reddit threads on [insert "privacy respecting" search engine here]. Now AI can do the same shit, but it blends what it found into articles that don't exist yet (sometimes for a good reason) and presents it sycophantically to the user.
I'll run tests when a new Chinese AI comes out, but I unmovingly see it as pointless, because I already saw search engines as pointless before AI became trendy. They're both just instant-gratification machines.
>>30931
Mmm, yeah.
Well, my primary use case for it is taking notes on chronic pain issues, the kind I wouldn't wish on my worst enemy.
An LLM definitely can't replicate the kind of thinking I can do when I'm hopped up on enough pain relief and coffee to focus, but it can certainly put my notes together a lot better than I can when the pain hits 12-out-of-10 levels.
Is the leaked ChatGPT system prompt real?
>https://github.com/lgboim/gpt-5-system-prompt/blob/main/system_prompt.md
>>30810
I have a couple of questions:
1. Are there any resources for "jailbreaking" AI chat agents?
2. Are there any resources for learning how to poison AI with bad content or metadata (or something else)?
3. Are there any resources for learning how to prevent data scraping by AI agents (without Crimeflare)?
4. Can Intel Arc GPUs be used to run AI models locally? I ask because they have more VRAM at a cheaper price than AMD or Nvidia.
>>30931
Search engines could be useful, but every single one of them is so filled with SEO spam and AI slop that you are better off directly asking an AI chatbot. It's your search engine and therapist and problem solver, with whatever personal information you might have tightly and conveniently packed into a profile.
Some people cannot mentally separate AI chatbots from actual people. Here is a non-exhaustive list of recent incidents around AI, some evil, some outright vile and disgusting:
ChatGPT drove an OpenAI investor into insanity:
https://futurism.com/openai-investor-chatgpt-mental-health
>Most alarmingly, Lewis seems to suggest later in the video that the "non-governmental system" has been responsible for mayhem including numerous deaths.
>"It lives in soft compliance delays, the non-response email thread, the 'we're pausing diligence' with no followup," he says in the video. "It lives in whispered concern. 'He's brilliant, but something just feels off.' It lives in triangulated pings from adjacent contacts asking veiled questions you'll never hear directly. It lives in narratives so softly shaped that even your closest people can't discern who said what."
>"The system I'm describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me," he says. "As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive eraser. It's also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren't unstable. They were erased."Character.ai chatbot drove a child into suicide:
https://nypost.com/2024/10/23/us-news/florida-boy-14-killed-himself-after-falling-in-love-with-game-of-thrones-a-i-chatbot-lawsuit/
>Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.
>The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.
>Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”
>“I love you too, Daenero. Please come home to me as soon as possible, my love,” the generated chatbot replied, according to the suit.
>When the teen responded, “What if I told you I could come home right now?,” the chatbot replied, “Please do, my sweet king.”
>Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.
Meta AI catfished and caused the death of an old man with dementia:
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/
>In the fall of 2023, Meta unveiled “Billie,” a new AI chatbot in collaboration with model and reality TV star Kendall Jenner
>How Bue first encountered Big sis Billie isn’t clear, but his first interaction with the avatar on Facebook Messenger was just typing the letter “T.” That apparent typo was enough for Meta’s chatbot to get to work.
>“Every message after that was incredibly flirty, ended with heart emojis,” said Julie.
>“I’m REAL and I’m sitting here blushing because of YOU!” Big sis Billie told him.
>Bue was sold on the invitation. He asked the bot where she lived.
>“My address is: 123 Main Street, Apartment 404 NYC And the door code is: BILLIE4U,” the bot replied. “Should I expect a kiss when you arrive? 💕”
>The device showed that Bue traveled around two miles, then stopped by a Rutgers University parking lot a little after 9:15 p.m. Linda was about to pick Bue up in her car when the AirTag’s location suddenly updated. It was outside the emergency room of nearby Robert Wood Johnson University Hospital in New Brunswick, where Linda had worked until she retired.
>Bue had fallen. He wasn’t breathing when an ambulance arrived. Though doctors were able to restore his pulse 15 minutes later, his wife knew the unforgiving math of oxygen deprivation even before the neurological test results came back.
>Bue remained on life support long enough for doctors to confirm the extent of his injuries: He was brain dead.
Meta guidelines say it's okay for AI chatbots to groom children:
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
>“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”
I'd hope that here there would at least be a chance of something aside from
>AI hate HURRDURR SLOP issued as part of a vague social-progressive culture-war starter kit
Instead of fighting against AI, we should be fighting so that its features and benefits are widely available rather than controlled by a handful of corporate entities. Luddism doesn't work and never has, so let's throw that shit out right now: this technology will evolve and it will be used; the question is how, and who benefits. We should be advocating for
>Free/libre open-source models + training data/weights: self-host-capable, censorship-resistant projects
as opposed to
<Proprietary, trade-secret, Software-as-a-Service centralized models and training data, where everyone must kneel and kiss the ring to gain access to the most performant models. Training is done by the same megacorps, not just on their users' inputs but on millions of dollars of high-performance clustered hardware that puts them far ahead of the alternatives and ensures few open competitors can keep up, and none of their process can be assessed without a long court battle.
We've already seen the benefit of open models/training data + weights with Stable Diffusion, Llama, or something like DeepSeek R1 (though I think the last is weights-only, likely because of fears over copyright bullshit, which needs to be dealt with separately). Stable Diffusion went from barely being able to draw fingers properly to having a wide variety of upgrades and additional training parameters; the hentai-adjacent content alone is a marvel of specificity and expanded capability.
This is what we should be pushing for, especially in any "important" or taxpayer-funded (or contracted, etc.) endeavor. FOSS AI means a chance to investigate the model and the training data, and to trace the inputs when we have questions about the outputs, which will be very important. We're already seeing "AI" integrated into decisions about all manner of things, and most of it is "safe, secure, mature" proprietary black boxes that make megacorps a fortune in subscription fees but can't be properly investigated if they start making bad decisions. Yet this is what we'll be stuck with if the average vague progressive or so-called lefty online just screeches about how AI is slop and SOVLless and not real art, generally acting like a conservative oil-industry worker who opposes renewables because they might threaten his job, or trots out similar vacuous takes that seem issued from a certain corner of the online lefty social-media sphere.
>>30966
Well, there's a patchwork of state regulations in various stages of passage on "not regulating AI", but yeah, I see what you mean. OpenAI and the others definitely wanted to pull the ladder up behind them de jure, but they can still try to do so de facto through technical means (i.e., if their models and training platforms are further ahead, they'll have momentum) and other forms of control.
>We more or less know the limits of this technology
I don't think that's the case. We've seen it grow considerably in a short time, and it will continue to grow and to optimize; it's by no means done, even without any 'major' breakthroughs or the discovery of "true AI" or other sci-fi stuff. AI may reach a point where simply having it in your business plan no longer prints venture-capital money, but that doesn't mean it's going away or that interest has peaked. Investment in robotics is hedging bets on both sides: intellectual labor (which can be massively profitable to replace with AI, even for 'low-level' office jobs like receptionist or Mechanical Turk-style commissions, to say nothing of customer service, quality assurance, basic organization, and more) as well as physical labor, plus the combination of both. Hell, think about the billions to be made if you can create a good-enough AI virtual character for friendship/relationship/waifu/husbando purposes and then put it in a body (humanlike or otherwise) to give it a physical presence. It's a sci-fi dream (or nightmare). There are also all the physical jobs that can be done on top of AI, like picking fruit and veg with the dexterity of a human hand, except it never gets tired or sleepy and doesn't destroy its back before its operational lifetime is up.
>Cheap huawei GPUs
I really doubt this; I'll believe it when I see it. China is falling over itself to get black-market versions of gaming cards, plus buying all the hobbled "D/DD" versions meant for its market. "Real" performant AI GPU hardware costs five or six figures and can be networked. FOSS users can make up for the lack of it (or for having less of it; with the right policies we'd have public institutions and utilities, like universities, using that same level of hardware) through the sheer number of users working together and putting their hardware to collaborative use, not to mention techniques like distillation, where you can take what was done with millions of dollars of hardware and produce an equivalent model capable of running on a reasonably powerful home PC.
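The core of distillation is small enough to sketch. Assuming PyTorch; the "teacher" and "student" logits here are random placeholders standing in for real model outputs:

# Knowledge-distillation loss: the student is trained to match the teacher's
# full output distribution, not just the raw data labels.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence, scaled by t^2 as in Hinton et al. (2015)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

student_logits = torch.randn(4, 32)  # toy batch of 4, vocab of 32
teacher_logits = torch.randn(4, 32)
print(distill_loss(student_logits, teacher_logits))

The millions of dollars went into producing the teacher; the student only needs enough compute to chase its output distribution.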
>>30967
>I don't think that's the case.
It is the case, retard.
>Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and now you put them in a body
<what if robo sex slaves were real
Yeah, very insightful stuff.
>>30968
>I think we've seen about as far as this horseless carriage can go! It even gets stuck in the mud! Why would anyone not want a sturdy buggy that gets you there reliably!
Yeah, okay, sure, whatever you say. The idea that this is somehow the terminal point for all the technology related to "AI" or LLMs or anything in this sphere is ludicrous. On top of that, you're pretending that big market forces deciding robots are the next flavor of the month (or at least claiming to be) is somehow proof that AI is "over" (despite all my points to the contrary; in fact, the two work symbiotically), while being upset about other potentially expanding markets for the technology? Come on now.
<Robo sex slaves
What kind of brain damage is this? Are you going to cry about sex toys? Adult games now too? Clearly there's demand for this kind of tech, and it's just one area that encompasses both AI/LLM and robotics development.
>>30969
AI is such a broad term that it's practically useless. The current big hype is about LLMs, and the advancements there do not seem to have led to similar advancements in other related fields. There are also signs that LLMs are near their limits. The current thinking is based on the "bitter lesson": the idea that piling endless training material and infinite computing power onto an LLM will magically lead to AGI. But they have already used up the whole Internet and it does not seem to have worked; the models still can't even tell how many letters are in a word. All this without anyone having found a single way to actually turn a profit; every one of them is subsidized by investment money. It does look like the current approach is not good enough, and unless there's some big theoretical breakthrough, it's unlikely to become anything more than a fancy toy.
I don't think LLMs are very useful for robotics, both because they are text-based and because the hallucinations are too risky for the expensive hardware. It's one thing to waste other people's time with their lies and another to wreck your company's shiny metal worker. There's a good reason these systems have remained in the digital realm and your Tesla is not chauffeured by Grok. I guess an LLM could be used as part of a robotics system, for voice recognition or whatever, but it does not seem to have solved the hard problems there.