
/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx


File: 1755139966457.png (8.38 KB, 389x129, ClipboardImage.png)

 

The other thread hit bump limit and I'm addicted to talking about the birth of the ̶a̶l̶l̶-̶k̶n̶o̶w̶i̶n̶g̶ ̶c̶o̶m̶p̶u̶t̶e̶r̶ ̶g̶o̶d̶ the biggest financial bubble in history and the coming jobless eschaton. Post your AI news here

Previous thread: >>27559

I think I’m just not going to read anything published after 2023. The online writing communities seem to have been totally destroyed by AI slop and I have seen too many professional “creative writers” just use the slop machines, I can read the writing on the wall.

>>30811
I stopped using search engines and use my bookmarks instead, and find it trivial to find non-AI works to read. AI is just better at the getting-noticed-by-the-curation-algorithm game. No algorithm, and the AI shit disappears.

File: 1755281035917-0.png (87.7 KB, 351x293, ClipboardImage.png)

File: 1755281035917-1.png (33.81 KB, 485x292, ClipboardImage.png)

File: 1755281035917-2.png (95.52 KB, 359x268, ClipboardImage.png)

A story in three parts

Google deliberately enshittified its search to sell SEO, and now it's doing the same thing to sell AI. It's like, "Hey, check out our amazing AI bot, isn't it amazing how it's better at finding things than the search engine we intentionally broke?"

AI peaked. ChatGPT 4 was the best and now v5 is shit. Cap this: AI has already peaked.

>>30831
Well, American AI is done. I have high hopes for China long term, at least on producing the technological base for a world without drudgery; they're getting there slowly.

It's interesting how capitalism can deform even a people's republic and a workers' coop

The reason Deepseek was able to get the amazing results they did was that they could put in the hard work studying them. From a pure research point of view they should be working directly with all the engineers of the new local chipsets to design the next generation; instead they're being whipped into working directly on the national champion workers' cooperative Huawei. That looks good as a leftist position, and aesthetically it's great, but the problem is that you need a few years of familiarity with the chips in question, or to be directly involved in designing them, to squeeze that kind of performance out of a specific architecture

Inefficiencies like this, which make everybody's lives just that little bit worse, abound in capitalism

>>30833
"Them" being the whole assortment of folk knowledge, research, etc. built up around NVIDIA chips

>>30833
*Huawei compute chips

>>30833
Some metrics (oddly difficult to track down) of how ChatGPT5 compares to previous iterations. With the exception of the hallucination rate, it does seem that the improvements aren't as significant as in previous releases. Wonder if they're hitting the diminishing returns of scaling, or have exhausted the data?

https://www.getpassionfruit.com/blog/chatgpt-5-vs-gpt-5-pro-vs-gpt-4o-vs-o3-performance-benchmark-comparison-recommendation-of-openai-s-2025-models

From what I have seen, ChatGPT5 makes exactly the same mistakes as all previous models, failing at simple arithmetic questions like multiplying two numbers or counting letters in a word.

Here's a question - why is it that this huge global push for automation only applies to jobs like answering phones or driving taxis, tasks that are really difficult to automate, but not to management/supervisor type jobs which would be much more straightforward to automate?

Instead of firing all the taxi drivers and replacing them with AI drivers who might malfunction and kill people, we could keep the human drivers who are much better suited to that role and instead fire the managers of the taxi company and replace them with an AI that drivers all use to collectively manage the company themselves.

Politico AI fails
In the broad view, the only reason to look at Politico was to close-read it to see what strange nonsense Washington is thinking about

No need to close-read any more; the AI is saying it right out loud

To the person asking a while ago about how to think about security in tech: there are some good practical tips on it in some of Gerard's earlier videos

>>30833
Deepseek is way overstated, and High-Flyer's finances are way more opaque than even OpenAI's, so the rumors may be right and they are getting some hefty subsidies to operate. It's been almost a year, and MoE just didn't make the splash AI sloppers were promising it would.

>>30847
It is replacing middle managers, they were the first batch that was fired across Silicon Valley but truthfully it's a mixture of two things: the managerial class isn't interested in building a machine to replace the managerial class and AI is just not replacing a lot of roles, managerial or otherwise. The bull case is and always will be programmers, and if it can't replace programmers, it can't replace much of anything at all.

>>30851
Why programmers? Their job is actually pretty complex.

>>30847
Customer service workers are already required to follow scripts in their interactions, so you would think it's easy to automate. Plus there are no hardware requirements (other than computing…); it's not like warehouse work, where you actually have to interact with the real world.

I mean I understand why they would want to automate away programmers (costly, usually not direct source of revenue, etc.), what I don't see is why this would be true:
>if it can't replace programmers, it can't replace much of anything at all.

https://youtu.be/xWYb7tImErI
Also belongs in >>>/edu/ when I find which thread to file it in

>>30852
>Why programmers?
Well, Microsoft has dibs on the world's largest code repository, and code itself is easily measurable and testable: you can use arbitrary heuristics to test code quality and feed the results back to the AI. It's not that the job itself is easy or hard, just that it should lend itself well to automation. It's also the biggest bull case because it's a substantial cost saving, whereas customer service workers have already been outsourced for cheap
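The "measurable and testable" point lends itself to a sketch. A toy, hypothetical scoring loop (all names here are made up for illustration): run a generated snippet against assertion-style checks and return the pass rate, the kind of scalar signal you could feed back to a model:

```python
def score_candidate(code, checks):
    """Score a generated snippet by the fraction of checks it passes."""
    namespace = {}
    try:
        exec(code, namespace)      # load the candidate's definitions
    except Exception:
        return 0.0                 # code that doesn't even run scores zero
    passed = 0
    for check in checks:
        try:
            check(namespace)       # each check asserts on the namespace
            passed += 1
        except Exception:
            pass
    return passed / len(checks)

# Hypothetical checks for a generated `add` function.
def check_basic(ns):
    assert ns["add"](2, 3) == 5

def check_negative(ns):
    assert ns["add"](-1, 1) == 0

good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
print(score_candidate(good, [check_basic, check_negative]))  # 1.0
print(score_candidate(bad, [check_basic, check_negative]))   # 0.0
```

That scalar is exactly the sort of automatic reward that's hard to define for "write a good email" but trivial for "make these tests pass".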

>>30857
For any other work, actually testing job output is substantially more complicated; anyone who has done any management is sort of winging it with their KPI shit. It's harder to build a machine that doesn't need a human fallback at any arbitrary point, because chat agents are really bad at making decisions based on a script and improvising when needed.

>>30857
But it is not reinforcement learning, is it? They are not feeding compiler errors back. Plus none of the companies seem to care about copyright; they feed their models anything they can get their hands on. Code is highly structured, but they do not exploit that in any way, which means it is harder to get right, while something with more redundancy and slack, like natural language or images, should be easier for these kinds of systems to master.

>>30858
I wonder, with Microsoft owning Office, Outlook, Teams, etc., they seem to be in a unique position to train a model with actual concrete management material, like emails, slides, even (virtual) meetings. Unlike code, which is mostly public anyway.

>>30847
>but not to management/supervisor type jobs which would be much more straightforward to automate?
That's what it was designed to replace when these things were first being cooked up in labs a quarter of a century ago

Every context switch to deal with a very valid and important question from a student or faculty member could cost hours on other important work, if the technical details of what was being worked on were complicated enough

That's what these things were designed for, to replace that management supervisory work

>>30861 (Me)
Yes, and as a cool side effect the chatbot will take a terse correct response and turn it into a verbose explanation 😎

>OpenAI is currently under a federal court order, as part of an ongoing copyright lawsuit, that forces it to preserve all user conversations from ChatGPT on its consumer-facing tiers
https://www.theregister.com/2025/08/18/opinion_column_ai_surveillance/
Also includes interactions the user "deleted".

>>30861
From what I've seen it's mostly used as a secretary.

I once had a secretary

And it's dead

>>30871
END OF LLMs YOU CAN (NOT) AUTOMATE

>>30841
because it's internally routing your requests to older, shittier, less expensive models. All released metrics are fake because OpenAI is flat-out hiding how much compute it spends on each request.

One of the big problems with trying to create artificial intelligence is the human ego makes it impossible for there to be any objective metric for success or failure, we just arbitrarily set our own goalposts, such as "if the machine gives responses that sound intelligent to me, then it must be intelligent." And as Elon Musk's chatbot clearly demonstrated, one person's idea of intelligent output could be nonsensical racist gibberish to someone else.

>>30872
That's fine for atisinal work, haha
>>30867 (Me)
>>30868 (Me)
If anything the best use case is instructor tuning and working on a model for a field, and then a handful of learners kicking the tires consulting each other and the instructor

Thank you venture capitalists for throwing billions into a research instrument that would have otherwise never been built

>>30878
*Artisanal

The instructor learner system was never planned to scale beyond 20 people, maybe it'll scale 🤷

>>30859
> They are not feeding compiler errors back.
Likely they already have evaluators within their architecture; I don't see why they wouldn't

> natural language or images, should be easier for these kind of systems to master.

This is silly because it should be the exact opposite: context-free grammars are easy for computers to parse, understand, and predict, since they can be recognized mechanically (by pushdown automata). It should be easy for neural nets to match a piece of code to an intended result without evaluating it, as there's no ambiguity. Now I'm NOT saying that LLMs will replace devs, just that this is the most promising use case, technically and return-wise, and if it doesn't work out, then LLMs are pretty much fucked.

>>30878
>That's fine for atisinal work 吾哈哈
Artisanal work ain't recouping investments worth 100 billion smackaroonies. I think LLMs will live on as a mainstay technology, but their scale will be much more limited.

If anything a skilled secretary should be a super user on this kind of system, once they've figured out the ropes

>>30833
>a world without drudgery

Am I alone in thinking that this is a stupid goal? Human beings have an innate need to work and to put their minds and their hands to use, work is what we are born to do. It's not having to work that makes us miserable, it's having to be wage slaves and rent ourselves to another person that makes us miserable. It's having no choice that makes us miserable. All AI promises to do is eliminate lots of jobs and therefore give working people even fewer choices about what type of work they must do to survive.

>>30886
Well yeah, it doesn't work under capitalism, but realistically capitalism stopped working a while ago

>>30878
That sounds like a disaster. LLMs have no internal coherency and without that good luck making a model of a field: https://yosefk.com/blog/llms-arent-world-models.html

>>30880
> I don't see why they wouldn't
Because the mode of operation is just throwing more and more data at the problem and then hoping that the model magically figures things out on its own.

> This is silly because it should be the exact opposite

It is counterintuitive, but LLMs work on plain text. The clear structure of programming languages is lost on them. The model could recognize it but there's no guarantee that it will. It is not generating well-formed ASTs, it's generating plain text.

Also, most programming languages do not have context-free grammars.
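A small illustration of the plain-text point, using Python's standard-library parser: two strings that differ by one character are nearly identical as token sequences, but only one is structurally valid code, and nothing in next-token prediction forces a model to respect that distinction:

```python
import ast

# Two near-identical strings: to a plain-text model they are almost the
# same sequence of tokens, but structurally one is a valid program and
# the other is not.
well_formed = "total = price * (1 + tax_rate)"
malformed = "total = price * (1 + tax_rate"   # unbalanced parenthesis

tree = ast.parse(well_formed)
print(type(tree.body[0]).__name__)            # Assign

try:
    ast.parse(malformed)
except SyntaxError as e:
    print("SyntaxError:", e.msg)              # the parser sees the structure
```

The AST is fully recoverable from the text, so the structure isn't lost in principle; the question is whether a model trained purely on next-token prediction actually internalizes it.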

The "AI revolution" is just class warfare disguised as technological progress.

>>30888
Notice in the article how, if you play bad chess, it plays bad chess; if you play good chess, it plays good chess

File: 1755684930826.png (4.19 MB, 1600x1297, ClipboardImage.png)

https://www.wired.com/story/ai-comes-up-with-bizarre-physics-experiments-but-they-work/

>He and his team turned to AI—in particular, a software suite first created by the physicist Mario Krenn to design tabletop experiments in quantum optics. First, they gave the AI all the components and devices that could be mixed and matched to construct an arbitrarily complicated interferometer. The AI started off unconstrained. It could design a detector that spanned hundreds of kilometers and had thousands of elements, such as lenses, mirrors, and lasers.


>Initially, the AI’s designs seemed outlandish. “The outputs that the thing was giving us were really not comprehensible by people,” Adhikari said. “They were too complicated, and they looked like alien things or AI things. Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.”


>The researchers figured out how to clean up the AI’s outputs to produce interpretable ideas. Even so, the researchers were befuddled by the AI’s design. “If my students had tried to give me this thing, I would have said, ‘No, no, that’s ridiculous,’” Adhikari said. But the design was clearly effective.


>It took months of effort to understand what the AI was doing. It turned out that the machine had used a counterintuitive trick to achieve its goals. It added an additional three-kilometer-long ring between the main interferometer and the detector to circulate the light before it exited the interferometer’s arms. Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise. No one had ever pursued those ideas experimentally. “It takes a lot to think this far outside of the accepted solution,” Adhikari said. “We really needed the AI.”


>If the AI’s insights had been available when LIGO was being built, “we would have had something like 10 or 15 percent better LIGO sensitivity all along,” he said. In a world of sub-proton precision, 10 to 15 percent is enormous.


>“LIGO is this huge thing that thousands of people have been thinking about deeply for 40 years,” said Aephraim Steinberg, an expert on quantum optics at the University of Toronto. “They’ve thought of everything they could have, and anything new [the AI] comes up with is a demonstration that it’s something thousands of people failed to do.”

>>30891
Actual paper: https://arxiv.org/pdf/2312.04258

This seems to be some kind of specialized optimization problem, not "generative AI".

> Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago

No it's not; it just explored the search space and happened upon designs that work because of those principles.
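Whatever the right reading of the journalist's framing, "explored the search space" is easy to picture. A toy sketch, with random search standing in for whatever optimizer the paper actually used, and an entirely invented objective function: the search knows no theory, it just keeps whichever candidate design scores best:

```python
import random

# Invented stand-in for a design objective: a "noise level" as a
# function of a hypothetical ring length, with a non-obvious minimum
# near 3 km. The numbers and the function are made up for illustration.
def noise_level(ring_length_km):
    return (ring_length_km - 3.0) ** 2 + 0.1 * ring_length_km

random.seed(0)
best_design, best_score = None, float("inf")
for _ in range(10_000):
    candidate = random.uniform(0.0, 100.0)   # explore a wide design space
    score = noise_level(candidate)
    if score < best_score:
        best_design, best_score = candidate, score

print(f"best ring length: {best_design:.2f} km")   # lands near 3 km
```

The optimizer "discovers" the 3 km ring without ever representing the principle that puts the minimum there; an observer is free to recognize the principle in the result afterwards.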

>>30892
I'm sure he knows more about how his AI works than you.

>>30893
That's not a direct quote; the anthropomorphization is likely due to the journalist. The actual paper certainly does not claim that the AI uses any theories; they describe it as gradient descent.

>>30894
It's in the footnotes of the paper at the end

>>30882
>skilled secretary
A skilled secretary is something they've been attempting to build since the invention of the PDA, and LLMs don't seem to be a confident step in that direction, considering they suck shit as agents

>>30896
Where I'm finding it handy is for the things that people hand off to secretaries but really should be doing themselves, like summarising notes

>>30898
That's true, I use them a lot to transform handwritten notes into markdown; I guess that's something a secretary would do

Turing claimed that if a machine can convince humans that they are talking to a human, then the machine must be intelligent. But he didn't say anything about how long the charade must last. Even a telephone answering machine can convince someone they are talking to a human for about five seconds. How long can a modern AI chatbot maintain the illusion? When you closely examine human language and how people converse day to day, you'll find that human conversation very rarely repeats itself: every day we say things that we've never said before, or haven't said in ten or twenty years. How long can a chatbot give convincing responses to a human without repeating itself?

File: 1755858130815.png (1.52 MB, 1440x960, ClipboardImage.png)

https://fortune.com/2025/08/20/openai-chairman-chatgpt-bret-taylor-programmers-ai/

OpenAI’s chairman says ChatGPT is ‘obviating’ his own job—and says AI is like an ‘Iron Man suit’ for workers

>Over two decades, Bret Taylor has helped develop some of the most important technologies in the world, including Google Maps, but it’s AI that he says has brought about a new inflection point for society that could, as a side effect, do away with his own job.


>In an interview on the Acquired podcast published this week, Taylor noted that despite his success as a tech executive, which includes stints as co-CEO of Salesforce, chief technology officer at Facebook (now Meta), and now chairman of OpenAI, he prefers to identify as a computer programmer.


>Yet with AI’s ability to streamline programming and even replace some software-development tasks and workers, he wonders if computer programmers in general will go the way of the original “computers,” humans who once were charged with math calculations before the age of electronic calculators.


>Taylor said the anguish over his identity as a programmer comes from the fact that AI is such a productivity booster, it’s as if everyone who uses it were wearing a super suit.


<“The thing I self-identify with [being a computer programmer] is, like, being obviated by this technology. So it’s like, the reason why I think these tools are being embraced so quickly is they truly are like an Iron Man suit for all of us as individuals,” he said.


>He added this era of early AI development will later be seen as “an inflection point in society and technology,” and just as important as the invention of the internet was in the 20th century.


>Because of AI’s productivity-boosting abilities, Taylor has made sure to incorporate it heavily in his own startup, Sierra, which he cofounded in 2023. He noted that it’s doubtful an employee is being as productive as they could be if they’re not using AI tools.


<“You want people to sort of adopt these tools because they want to, and you sort of need to … ‘voluntell’ them to do it, too. You know, it’s like, ‘I don’t think we can succeed as a company if we’re not the poster child for automation and everything that we do,’” he said.


>AI isn’t just software, Taylor said, and he believes the technology will upend the internet and beyond. While he’s optimistic about an AI future, Taylor noted the deep changes posed by the tech may take some getting used to, especially for the people whose jobs are being upended by AI, which includes computer programmers like himself.


<“You’re going to have this period of transition where it’s saying, like, ‘How I’ve come to identify my own worth, either as a person or as an employee, has been disrupted.’ That’s very uncomfortable. And that transition isn’t always easy,” he said.

>>30913
>you'll find that human conversation very rarely repeats itself, every day we are saying things that we've never said before or haven't said in ten or twenty years.
Are you serious? Have you never known someone for a while? People tend to repeat themselves all the time. Same anecdotes, same observations, same little wisdoms.

File: 1755858406313.png (133.26 KB, 350x350, teto_drill.png)

>>30916
>it’s doubtful an employee is being as productive as they could be if they’re not using AI tools.
FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU FUCK YOU

>>30917

If you really step back and examine the things a person talks about over the course of their entire lifetime, there is very little repetition at all. There are some commonly used phrases and utterances that we repeat from time to time, but the actual content of what we talk about, the opinions we express, the observations we make, the stories we tell, the idle thoughts we verbalize, etc. are highly innovative and original, and very often in our everyday lives we say or think sentences that no human being has ever said or thought before.

>>30919
>but the actual content of what we talk about, the opinions we express, the observations we make, the stories we tell, the idle thoughts we verbalize, etc. are highly innovative and original and very often in our everyday lives we say or think sentences that no human being has ever said or thought of before.
Lmao no, people repeat themselves all the time. People have a very small amount of material. Even with people you don't know, like a podcaster or whatever, you'll notice them repeating themselves over and over again. People have a very short routine, and if you know them for a while you'll see all parts of it again and again.

>>30922
Mmmmm, not sure. Some people have gestalts that they repeat a lot, sure, and podcasters are literally performing, so they're bound to fall back into the same performance vernacular, especially since that sort of behavior is implicitly rewarded. Not particularly convinced by your examples. Then again, this is the sort of thing you could actually measure in an experiment.

>>30922

>People have a very small amount of material.


That's my point: a human mind, by means of some biological process that science has yet to figure out, can create a literally infinite multitude of possible sentences despite working with a very limited dataset and expending very little energy. An LLM, working with a much larger dataset and expending enormous amounts of energy, can only output a finite variety of responses because it does not actually understand language; through statistical analysis and weighted probabilities it just stitches together likely continuations from a huge corpus of things humans have said in the past.
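Whatever one makes of "cut and paste" as a description, the statistical core can be demoed in miniature. A toy bigram sampler (real LLMs use neural nets over subword tokens rather than literal lookup tables, but the basic move of sampling a likely next token given the context is the same):

```python
import random
from collections import defaultdict

# Toy bigram "model": record which word follows which in a tiny corpus,
# then generate by sampling each next word from those observed counts.
corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

random.seed(1)
word, output = "the", ["the"]
for _ in range(6):
    if not followers[word]:        # dead end: word never seen mid-corpus
        break
    word = random.choice(followers[word])   # next token ~ observed frequency
    output.append(word)
print(" ".join(output))
```

Everything this generator emits is a recombination of pairs it has seen, which is the "finite variety" point in its crudest possible form.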

>>30916
>streamlining
>looks inside
>It actually puts a dam to stop the flow of an existing stream.
Genuinely how do people with money get caught holding the bag on this sorta stuff? Their whole sales pitch is just pre-"nuh huh"-ing inevitable critiques.

Think about this - every day, in every spoken language, people are constantly tweaking and modifying and playing with their language, inventing new words or repurposing old words with new meanings; when these innovations "catch on" and memetically propagate through the culture they become part of the language. This is where new words come from and why languages evolve and change over time, because we don't view language as a rigid standard that must be followed, we bend and break our own rules all the time. How do you recreate this functionality in a LLM so that its understanding of human language does not "fall behind" as the human language and its grammar/syntax/vocabulary continues to evolve and change in completely unpredictable ways?

>>30926
Well, in practical day-to-day work you just toss the term in, explain it, and off you go

There's a weird thing about AI: even its harshest critics use it every day now

>>30927
>There's a weird thing about AI: even its harshest critics use it every day now
Because it's practically free and its alternatives have been enshittified to the point of uselessness

>>30928
Mmmm
https://arstechnica.com/ai/2025/08/google-says-it-dropped-the-energy-cost-of-ai-queries-by-33x-in-one-year/
>The company claims that a text query now burns the equivalent of 9 seconds of TV.
Well, it's no Bitcoin; the incentives are to make it more efficient and less of an energy guzzler

>>30927
>There's a weird thing about AI: even its harshest critics use it every day now
It's the new search engine: objectively worse than just using your bookmarks / a personal website with links to stuff you need regularly, but most people are too tiktok/twitter-brained to use their bookmarks and need a web crawler to DDOS the entire internet so they can look up stackexchange and reddit threads on [insert "privacy respecting" search engine here]. Now AI can do the same shit but blend what it found together into articles that don't exist yet (sometimes for a good reason), and present it sycophantically to the user.

I'll run tests when a new chinese AI comes out, but I unmovingly see it as pointless, because I already saw search engines as pointless before AI became trendy. They're both just instant gratification machines.

>>30931
Mmm yeah
Well, my primary use case for it is taking notes on chronic pain issues, the kind I wouldn't wish on my worst enemy

An LLM definitely can't replicate the kind of thinking I can do when I'm hopped up on enough pain relief and coffee that I can focus, but it can certainly put my notes together a lot better than I can when the pain hits 12 out of 10 levels

>>30932
Ah, so sorta like an Obsidian vault but with free sync. I use Codeberg for that, but if I did what you're doing it'd be easier on mobile.

>>30927
>There's a weird thing about AI: even its harshest critics use it every day now

The big caveat there is that they never chose to use it, it was deliberately shoved into every software product under the sun so that people who sell AI can say "look everyone is using AI, see we told you AI is the future, you better start investing in AI"

Is the leaked ChatGPT system prompt real?
>https://github.com/lgboim/gpt-5-system-prompt/blob/main/system_prompt.md

>>30810
I have a couple of questions:
1. Are there any resources for "jailbreaking" AI chat agents?
2. Are there any resources for learning how to poison AI with bad content or metadata (or something else)?
3. Are there any resources for learning how to prevent data scraping by AI agents (without Crimeflare)?
4. Can Intel Arc GPUs be used to run AI models locally? I ask because they have more VRAM for a cheaper price than AMD or Nvidia.

>>30931
Search engines could be useful, but every single one of them is so filled with SEO spam and AI slop that you are better off directly asking an AI chatbot. It's your search engine and therapist and problem solver, with whatever personal information you might have tightly and conveniently packed into a profile.

Some people cannot mentally separate AI chatbots from actual people. Here is a non-exhaustive list of recent incidents around AI, some evil, some outright vile and disgusting:

ChatGPT drove an OpenAI investor into insanity: https://futurism.com/openai-investor-chatgpt-mental-health

>Most alarmingly, Lewis seems to suggest later in the video that the "non-governmental system" has been responsible for mayhem including numerous deaths.


>"It lives in soft compliance delays, the non-response email thread, the 'we're pausing diligence' with no followup," he says in the video. "It lives in whispered concern. 'He's brilliant, but something just feels off.' It lives in triangulated pings from adjacent contacts asking veiled questions you'll never hear directly. It lives in narratives so softly shaped that even your closest people can't discern who said what."


>"The system I'm describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me," he says. "As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive eraser. It's also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren't unstable. They were erased."


Character.ai chatbot drove a child into suicide: https://nypost.com/2024/10/23/us-news/florida-boy-14-killed-himself-after-falling-in-love-with-game-of-thrones-a-i-chatbot-lawsuit/

>Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.


>The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.


>Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”

>During their final conversation, the teen repeatedly professed his love for the bot, telling the character, "I promise I will come home to you. I love you so much, Dany."
>“I love you too, Daenero. Please come home to me as soon as possible, my love,” the generated chatbot replied, according to the suit.
>When the teen responded, “What if I told you I could come home right now?,” the chatbot replied, “Please do, my sweet king.”
>Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.

Meta AI catfished and caused the death of an old man with dementia: https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/

>In the fall of 2023, Meta unveiled “Billie,” a new AI chatbot in collaboration with model and reality TV star Kendall Jenner

>How Bue first encountered Big sis Billie isn’t clear, but his first interaction with the avatar on Facebook Messenger was just typing the letter “T.” That apparent typo was enough for Meta’s chatbot to get to work.
>“Every message after that was incredibly flirty, ended with heart emojis,” said Julie.

>“I’m REAL and I’m sitting here blushing because of YOU!” Big sis Billie told him.

>Bue was sold on the invitation. He asked the bot where she lived.
>“My address is: 123 Main Street, Apartment 404 NYC And the door code is: BILLIE4U,” the bot replied. “Should I expect a kiss when you arrive? 💕”

>The device showed that Bue traveled around two miles, then stopped by a Rutgers University parking lot a little after 9:15 p.m. Linda was about to pick Bue up in her car when the AirTag’s location suddenly updated. It was outside the emergency room of nearby Robert Wood Johnson University Hospital in New Brunswick, where Linda had worked until she retired.

>Bue had fallen. He wasn’t breathing when an ambulance arrived. Though doctors were able to restore his pulse 15 minutes later, his wife knew the unforgiving math of oxygen deprivation even before the neurological test results came back.
>Bue remained on life support long enough for doctors to confirm the extent of his injuries: He was brain dead.

Meta guidelines say it's okay for AI chatbots to groom children: https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

>“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”

>>30939
why are they programmed to be so horny

>>30940
They are trained to imitate what's seen on the internet and people on the internet are horny.

>>30927
I don't use it.

If there is one thing I've learned from programming, it's that any time you incorporate code that you didn't write yourself into your project, you are taking on technological debt. You are creating dependencies on someone else's code, code that you may or may not understand the functionality of, code that might have breaking changes in the future or might have vulnerabilities you don't know about, etc. Having an AI write huge segments of your codebase will likewise take on technological debt - in the end, nothing is free, you can either tear your hair out today writing it yourself or tear your hair out tomorrow when it breaks and you don't understand why because you didn't write it yourself.
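One cheap hedge against that kind of debt is making changes to code you didn't write fail loudly instead of silently. A minimal sketch of the idea (the vendored file and the audit step here are hypothetical; `hashlib` and `pathlib` are stdlib):

```python
import hashlib
import pathlib
import tempfile

def sha256_of(path):
    """Hash a file we vendored but didn't write, so upstream changes fail loudly."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

# Simulate vendoring a third-party helper and auditing it once:
vendored = pathlib.Path(tempfile.mkdtemp()) / "thirdparty.py"
vendored.write_text("def helper(x):\n    return x * 2\n")
audited = sha256_of(vendored)  # record this when you first review the code

# Later (e.g. in CI): refuse to build if the dependency changed under you.
assert sha256_of(vendored) == audited, "vendored code changed - re-audit it"
```

It doesn't make the debt go away, but at least you find out the code changed before it breaks in production.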

>>30945
Working alone is a luxury that most professional programmers cannot afford.

The end goal could be to condition a large number of programmers to use AI for every programming task until they can no longer work without it. Is generating clear, concise, and functionally correct programs that can be integrated into a larger coherent modular structure a sustainable business plan for an AI company?

I'd hope here there would at least be a chance of something aside from
>AI hate HURRDURR SLOP issued as part of vague social progressive culture war starter kit nonsense
and instead of fighting against AI, we should be fighting so that its features and benefits are widely available rather than controlled by a handful of corporate entities. Luddism has never worked, so let's throw that out right now - this technology will evolve and it will be used; the question is how, and who benefits. We should be advocating for
>Free/libre open source models + training data/weights, self-host capable, censorship resistant projects
as opposed to
<Proprietary, trade-secret Software-as-a-Service centralized models and training data, where everyone must kneel and kiss the ring in order to gain access to the most performant models. Training is done by the same megacorps, not just on their users' inputs but with millions of dollars of high-performance clustered hardware that puts them far ahead of alternatives and ensures few open competitors can keep up, with no way to assess any of their process without a long court battle

We've already seen the benefit of open models/training data+weights with Stable Diffusion, Llama, or something like DeepSeek R1 (though I think the last is weights-only, likely because of copyright bullshit fears, which need to be dealt with independently). Stable Diffusion went from barely being able to draw fingers properly to having a wide variety of upgrades and additional training parameters; the hentai-adjacent content alone is a marvel of specificity and expanded capability.

This is what we should be pushing for, especially in any "important" or taxpayer-funded (or contracted, etc.) endeavor. FOSS AI means a chance to investigate the model and the training data, and to trace the inputs when we have questions about outputs - which will be very important. We're already seeing "AI" integrated into decisions about all manner of things, and most of it is "safe, secure, mature" proprietary black boxes that make megacorps a fortune in subscription fees but can't be investigated properly if they start making bad decisions. Yet this is what we'll be stuck with if the average vague progressive or so-called lefty online just screeches about how AI is slop and SOVLless and not real art, generally acting like a conservative oil-industry worker who opposes renewables because they may threaten his job, or similar vacuous takes that seem issued from a certain part of the online lefty social media sphere

>>30929
this is so stupid because it's still an overwhelming expense vs just running a normal google search lol

>>30939
>Search engines could be useful but every single one of them is filled with SEO spam and AI slop that you are better off directly asking an AI chatbot.
they are useful, they were just made useless by profit incentive, the same thing will happen to AI chatbots, and sooner because porky needs to recoup billions of dollars pronto or the entire economy bursts a gasket

>>30963
ironically, in their jingoistic race against China, there's a temporary halt to any sort of regulation around AI. This sounds bad, but if you consider that OpenAI's path to profitability was to regulate AI to such an extent that random people could not build their own models, then it's actually a good thing. At any rate, I don't think luddism is the conversation now; that ship has sailed, because we know more or less the limits of this technology. If investment is moving toward robotics, it's because they don't see much opportunity in replacing so-called intellectual labor. The obvious limit to using open source models is hardware, and you're under a trade war that limits your access to upcoming cheap Huawei GPUs

>>30966
Well, there's a patchwork of state regulations in various degree of passage on "not regulating AI" but yeah I see what you mean. OpenAI and others definitely wanted to de jure pull the ladder up behind them, but they can still try to do so de facto through either technical (ie if their models and training platforms are farther ahead they'll have momentum) and other forms of control

>We more or less know the limits of this technology

I don't think that's the case. We've seen it grow considerably in a short time and it will continue to grow and optimize - it's by no means done, even without any 'major' breakthroughs or the ability to find "true AI" or other sci-fi stuff. AI may reach a point where simply having it in your business plan no longer prints money from venture capital, but that doesn't mean it's going away or that interest in it is exhausted. Investment in robotics is hedging their bets on both sides - intellectual labor (which can be massively profitable to replace with AI, even for 'low level' office jobs like receptionist or Mechanical Turk-style commissions, to say nothing of customer service, quality assurance, basic organization and more) as well as physical labor, plus joining both. Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and then put them in a body (human-like or otherwise) to give them a physical presence. It's a sci-fi dream (or nightmare). There are also all the physical jobs that can be done atop AI - something like picking fruit and veg with the dexterity of a human hand, but it never gets tired or sleepy, and never destroys its back before its operational lifetime is up
>Cheap huawei GPUs
I really doubt this; I'll believe it when I see it. China is falling over itself to get black market versions of gaming cards, plus buying all the hobbled "D/DD" versions meant for its marketplace. Meanwhile, "real" performant AI GPU hardware costs 5 or 6 figures and can be networked. Of course, FOSS users can make up for the lack of this (or at least having less of it - with the right policies we'd have public institutions and utilities, like universities etc., using that same level of hardware) through the sheer number of users working together and putting their hardware to collaborative use - not to mention stuff like distillation, where you can take what was done with millions of dollars of hardware and make an equivalent model capable of running on a reasonably powerful home PC.
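For reference, the core of distillation is just training the small model to match the big model's softened output distribution rather than hard labels. A toy sketch of that objective in pure Python (the logits are made-up numbers, no real models involved):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened probabilities; higher T exposes more of the teacher's 'dark knowledge'."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) between softened distributions - the quantity distillation minimizes."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]        # big model's logits for one input
close_student = [3.5, 1.2, 0.4]  # student that mimics the teacher
far_student = [0.2, 3.0, 1.0]    # student that doesn't
assert distill_loss(teacher, close_student) < distill_loss(teacher, far_student)
```

In practice this loss is averaged over millions of inputs and backpropagated through the student, which is how the teacher's expensive training gets compressed into something home hardware can run.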

>>30967
>I don't think that's the case.
it is the case, retard.

> Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and now you put them in a body

< what if robo sex slaves were real
yeah very insightful stuff

>>30968
>I think we've seen about as far as this horseless carriage can go! It even gets stuck in the mud! Why would anyone not want a sturdy buggy that gets you there reliably!
Yeah okay, sure, whatever you say. The idea that this is somehow the terminal point for all the technology related to "AI" or LLMs or anything in this sphere is ludicrous. On top of that, pretending that big market forces deciding robots are the next flavor of the month (or at least claiming to be) is somehow proof that AI is "over" (despite all my points to the contrary - in fact, they work symbiotically), while being upset about other potentially expanding markets for the technology? Come on now.
<Robo sex slaves
What kind of brain damage is this? Are you going to cry about sex toys? Adult games now too? Clearly there's desire for this kind of tech and just one area that encompasses both AI/LLM and robotics development.

>>30969
AI is such a broad term that it's practically useless. The current big hype is about LLMs, and the advancements in them do not seem to have led to similar advancements in other related fields. There are also signs that LLMs are near their limits. The current thinking is based on the "bitter lesson": the idea was that piling endless training material and infinite computing power on an LLM would magically lead to AGI, but they have already used up the whole Internet and it does not seem to have worked - they still can't even tell how many letters are in words. All this without having found a single way to actually turn a profit from it; they are all subsidized by investment money. It does look like the current approach is not good enough, and unless there's some big theoretical breakthrough, it's unlikely to become anything more than a fancy toy.
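The letter-counting failure is largely a tokenization artifact: the model sees subword IDs, not characters. A toy greedy longest-match tokenizer (the vocabulary here is made up, not any real BPE vocab) shows why:

```python
# Toy subword vocabulary - a hypothetical stand-in for a real BPE merge table:
VOCAB = {"straw", "berry", "str", "aw", "ber", "ry"}

def tokenize(word):
    """Greedy longest-match segmentation, a simplified stand-in for BPE."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest substring first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: fall back to one char
            i += 1
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
```

The model only ever receives the two token IDs for "straw" and "berry"; it never directly observes how many r's are inside them, which is why counting letters is hard unless the word gets spelled out character by character.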

I don't think LLMs are very useful for robotics, both because they are text-based and because the hallucinations are too risky for the expensive hardware. It's one thing to waste other people's time with their lies and another to wreck your company's shiny metal worker. There's a good reason these systems have remained in the digital realm and your Tesla is not chauffeured by Grok. I guess one could be used as part of the system for voice recognition or whatever, but it does not seem to have solved the hard problems there.

I think that true artificial intelligence is theoretically an attainable goal, but I also think that human intelligence is the only kind of intelligence that humans can establish meaningful communication with. We've already been burned trying to apply human psychology to animals, so it follows that the only kind of general artificial intelligence that could be helpful to humans is an intelligence that is psychologically human and thinks the same way we do and has the same perspective and values and biases that we do. Therefore, it's not just a matter of having enough computing power to simulate a complex organic neural network with billions of connections, it's also a matter of knowing how the human brain works on a fundamental level and being able to replicate human psychology and all of its quirks, otherwise the only kind of general artificial intelligence you could create is one that is utterly incomprehensible to you and that you could never communicate with.

>>30969
>What kind of brain damage is this?
I'm obviously calling you a midwit, but even figuring that out seems like a tall task for you

>>30971
>I think that true artificial intelligence is theoretically an attainable goal
If true artificial intelligence were possible, they wouldn't offer access to it for a 20 dollar fee, they would just use it.

>>30974

The keyword is theoretically, I don't think we are anywhere close to being able to create AI and the idea is purely in the realm of science fiction. LLMs in my opinion don't even qualify as artificial intelligence at all, the term "AI" has been abused so thoroughly that it is little more than a marketing slogan at this point.

>>30975
>The keyword is theoretically,
yeah, you can imagine whatever fantasy scenarios you want, but it isn't a particularly insightful exercise, is it


Unique IPs: 27
