[ home / rules / faq / search ] [ overboard / sfw / alt ] [ leftypol / edu / labor / siberia / lgbt / latam / hobby / tech / games / anime / music / draw / AKM ] [ meta ] [ wiki / shop / tv / tiktok / twitter / patreon ] [ GET / ref / marx / booru ]

/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx



File: 1755139966457.png (8.38 KB, 389x129, ClipboardImage.png)

 

The other thread hit bump limit and I'm addicted to talking about the birth of the ̶a̶l̶l̶-̶k̶n̶o̶w̶i̶n̶g̶ ̶c̶o̶m̶p̶u̶t̶e̶r̶ ̶g̶o̶d̶ the biggest financial bubble in history and the coming jobless eschaton, post your AI news here

Previous thread: >>27559
78 posts and 8 image replies omitted.

>>30967
>I don't think that's the case.
it is the case, retard.

> Hell, think about the billions to be made if you can create a good enough AI virtual character for friendship/relationship/waifu/husbando etc… and now you put them in a body

< what if robo sex slaves were real
yeah very insightful stuff

>>30968
>I think we've seen about as far as this horseless carriage can go! It even gets stuck in the mud! Why would anyone not want a sturdy buggy that gets you there reliably!
Yeah okay, sure, whatever you say. The idea that this is somehow the terminal point for all technology related to "AI" or LLMs or anything in this sphere is ludicrous. On top of that, you treat the big market players deciding robots are the next flavor of the month (or at least claiming to be) as somehow proof that AI is "over" (despite my points to the contrary; in fact, the two work symbiotically), while also being upset about other potentially expanding markets for the technology? Come on now.
<Robo sex slaves
What kind of brain damage is this? Are you going to cry about sex toys? Adult games now too? Clearly there's demand for this kind of tech, and it's just one area that encompasses both AI/LLM and robotics development.

>>30969
AI is such a broad term that it's practically useless. The current hype is about LLMs, and the advances in them do not seem to have led to similar advances in other related fields. There are also signs that LLMs are near their limits. The current thinking is based on the "bitter lesson": the idea that piling endless training material and near-infinite computing power onto an LLM will magically lead to AGI. But they have already used up essentially the whole Internet and it does not seem to have worked; the models still can't even tell how many letters are in a word. All this without having found a single way to actually turn a profit; they are all subsidized by investment money. It does look like the current approach is not good enough, and unless there's some big theoretical breakthrough, it's unlikely to become anything more than a fancy toy.
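A toy illustration of the letter-counting failure mentioned above: code that sees characters counts them trivially, while an LLM only ever sees subword tokens (the token split below is invented for illustration, not any real tokenizer's output):

```python
# Counting letters is trivial when you can inspect characters directly:
word = "strawberry"
print(word.count("r"))  # -> 3

# An LLM never receives those characters at inference time; it sees
# subword tokens. Hypothetical BPE-style split for illustration:
tokens = ["straw", "berry"]
# To answer "how many r's?", the model must recall each token's
# spelling from training data rather than inspect the letters,
# which is one plausible reason such questions trip it up.
```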

I don't think LLMs are very useful for robotics, both because they are text-based and because hallucinations are too risky for expensive hardware. It's one thing to waste other people's time with their lies and another to wreck your company's shiny metal worker. There's a good reason these systems have remained in the digital realm and your Tesla is not chauffeured by Grok. I guess one could be used as part of a system for voice recognition or whatever, but they do not seem to have solved the hard problems there.

I think that true artificial intelligence is theoretically an attainable goal, but I also think that human intelligence is the only kind of intelligence humans can establish meaningful communication with. We've already been burned trying to apply human psychology to animals, so it follows that the only kind of general artificial intelligence that could be helpful to humans is one that is psychologically human: one that thinks the same way we do and has the same perspective, values, and biases. Therefore, it's not just a matter of having enough computing power to simulate a complex organic neural network with billions of connections; it's also a matter of knowing how the human brain works on a fundamental level and being able to replicate human psychology with all of its quirks. Otherwise, the only kind of general artificial intelligence you could create is one that is utterly incomprehensible to you and that you could never communicate with.

>>30969
>What kind of brain damage is this?
I'm obviously calling you a midwit, but even figuring that out seems like a tall task for you

>>30971
>I think that true artificial intelligence is theoretically an attainable goal
If true artificial intelligence were possible, they wouldn't offer access to it for a 20 dollar fee, they would just use it themselves.

>>30974

The keyword is theoretically. I don't think we are anywhere close to being able to create true AI; the idea is still purely in the realm of science fiction. LLMs, in my opinion, don't even qualify as artificial intelligence at all; the term "AI" has been abused so thoroughly that it is little more than a marketing slogan at this point.

>>30971
You can train dogs to follow commands, why couldn't you do the same with a general artificial intelligence?


File: 1756273892118.pdf (901.98 KB, 197x255, ai_report_2025.pdf)

This MIT report found that 95% of generative AI pilots at companies are failing. "95% of organizations are getting zero return [on generative AI]," it says, and "…few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior".

>>30982
The whole purpose of general artificial intelligence is to create something that isn't just a robot that performs specific tasks, but a thinking living consciousness with its own agency. It's easy to make an AI that does a specific thing like analyze images or construct sentences from training data, but making an AI with a general capacity for intelligence is not as straightforward and the very idea raises all kinds of questions, such as "what is general intelligence?", and "how do you determine whether the AI is actually intelligent?", and "what if you create a superintelligent AI but you don't realize it because you're too stupid?"

>>30988
Is that so? I always thought it just meant general purpose: not built for specific tasks, but able to do anything a human being can do. I don't really see why an "AI with a general capacity for intelligence" would necessarily mean a "thinking living consciousness with its own agency". Is it really inconceivable that a general-purpose AI might not become a digital homunculus?

>>30939
GayI is not making any money, soon they will also need to add adslop and spamcrap into their replies to make actual money.

so your AI chat responses will also soon have "seamless" ads mixed in, "wow anon you are so insightful, i agree completely that soda is just unhealthy sugar water! but we still have these soda rivalries! personally if i were to choose unhealthy beverages tho i would opt for coke. if you had to really choose which one would you prefer?!"

>>30969
>Why would anyone not want a sturdy buggy
so you're one of those people who thinks this stuff is magic and "we don't really know how it works! who knows how much better it can get!"
sorry to break it to you, but we do know how they work, and even the chatbot sellers conceded after the GPT-5 release that the chatbot idea has gone about as far as it can. the fact is that that very chatbot is what they stick behind every """AI""" product. there are no different AI products, just the same chatbot embedded inside office apps, IDEs, and website support pages. chatbots are the "AI" that we have, and they have hit their limit.

https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/
>That's allegedly because OpenAI programmed ChatGPT-4o to rank risks from "requests dealing with Suicide" below requests, for example, for copyrighted materials, which are always denied. Instead it only marked those troubling chats as necessary to "take extra care" and "try" to prevent harm, the lawsuit alleged.
For fuck's sake, and westerners are worried about chinese AI not aligning with their values

>>30989
>Is it really unconceivable that a general purpose AI might not become a digital homunculus?
you're the one drawing the line here because you misunderstand how the technology works in a fundamental way.

>>30969
>The idea that somehow this is the terminal point for all the technology related to "AI" or LLM or anything in this sphere is ludicrous
The collapse of the LISP machine market, which was by no means the entirety of AI research in the late 80s, still froze AI development for something like three decades. There's a lot more promise in LLMs than there ever was in LISP machines, but there's also vastly more money being poured into LLMs, with expectations essentially divorced from reality, so it's not stupid at all to think an LLM collapse could freeze AI research again. It could even collapse the entirety of Silicon Valley and a significant portion of the US economy as a whole.

>>30998
There's no general artificial intelligence today so there's nothing to misunderstand.

>>31008
Are you the same anon who was saying "why can't there be an artificial general intelligence if we can train dogs"? If not, then we are basically in agreement.

For what it's worth, what people deem "general purpose" is the computer being able to perform any clerical task by itself without prior training, that is, generalizing its training set to extend to any problem ever. If you need to train the AI like a dog to perform well in a particular domain, then by definition that's not generalizing.

Great News! The robots have learned dengism!

>>30970
As far as advancements in other fields, we're seeing them iterate wherever the kind of work LLM/diffusion/neural models do offers benefits. In some aspects of drug development, lab testing, and personalized medicine, for instance, it's helping optimize things faster and cheaper. It's not automatic, or any kind of one-click miracle cure at this point, but it's proving helpful in both the public and private sectors, and with smart regulation and open data, models, and methodology, it could have even wider benefits. As for LLMs nearing their limits: over the years people have alternately claimed they're almost at the limit or so far from it that it won't be reached for ages, and the same is true for AGI. Some were sure it would happen within a couple of years; others think it may never happen unless we change methodology. Spontaneous emergence of AGI from simply having enough information resources is only one of many theories. True AGI or "hard AI" is very different from LLMs, and developing that sort of synthetic consciousness at equal or greater general capability than humanity is a much more difficult and concerning branch of this research (to say nothing of what would happen once it arrived, and who would, even temporarily, control it and/or try to make use of it). But there are benefits from the continued evolution of LLMs and other models even without actual sentience, and even without a revolutionary breakthrough, iterative improvements from the current point are worth having.

As far as turning a profit, I think this is much like previous tech-bubble areas, where monetizing ideas takes some working out. Many of the major proprietary-model companies intend to take big contracts, license everything for access, and pick up SaaS subscriptions (along with regulatory capture and technical barriers to limit competitors). Many tech companies floated on venture capital for years but moved to profitability in ways that were, at the time, far more tenuous than selling access to AI models and what they can do. Looking at stuff like >>30987
this showcases quite clearly that it has little to do with the quality or sophistication of the models; generally it's about meaningful implementation and ROI, which of course is a very capital-and-markets-driven assessment. Ultimately it reminds me of the earlier days of other tech, from the Internet/Web as a whole to social media to mobile: simply going
>Okay, we're gonna be on the INFORMATION SUPERHIGHWAY
is not going to magically provide ROI if you make varying sets of sheet-metal screws and deal primarily with local vendors in person. If you buy a fast internet connection and hosting for your new webpage, it isn't going to matter until you get the proper alignment of tools and tasks to make it financially worthwhile. But if you figure out that you can get new suppliers or customers thanks to your web page and SEO, or that a fast connection means you can monitor and control your fabrication and get ahead of problems, adding extra hours of production per day because your engineer can wake up at 6, telnet in, and get the factory floor's prep grinding to life by the time he gets in at 9, that's meaningful ROI. So it goes with AI, and I think we'll see companies move toward that (for better or worse) just as they did with previous advances. Right now a lot of it is chasing the hot marketable thing thanks to the money flowing around and FOMO. Even without a massive overhaul, though, we'll see more use of AI-sphere tech where it's fiscally prudent; replacing low-level cashiers, phone banks, etc. can already be done at a similar level of experience. This will only expand from here. It likely won't all be flashy, but like many other technologies it will continue to iterate and roll out; any revolutionary breakthrough would be a great bonus but isn't necessary.

As far as LLMs and robots, we can talk about LLMs or diffusion models etc., but many of them go beyond just language; really, if you can train on the data, you can use it. Even much hobby AI software (kobold etc.) can use both LLMs and image/video/voice generation. Like any technology it can't be left entirely unattended: you have to build in safeguards, against hallucination or otherwise, and have both smart training data and smart application of it. Even before "AI" became the current concept, automation in factories and the like was viable. You can add to that something like a model trained to do QA on the screws your metal-pressing machines are extruding, ensuring each one is the proper size, shape, material composition, and more. This kind of task could be done other ways before there was a model capable of it (having a human sit there and inspect every screw, building multiple stages of physical machine testing, etc.), but those have their own downsides in cost, time, efficiency, and other factors. You still 'check its work' and keep backup safeguards (a practice used with the other methods as well), but the value and efficiency of doing it this way may be preferable. As far as robotics, there are other ways of combining the two, where developments in one benefit the other. There are robotic waiters in some restaurants that deliver food and drinks to patrons automatically, negotiate around obstacles, and differentiate between patrons; their capabilities have been enhanced since their original launch thanks to model training, and they can now interact with customers more directly than before. One can imagine similar model-training benefits for everything from a receptionist or guide/tourbot to companionship robots of varying sorts.
Of course, something like automated driving, especially as part of an overarching SaaS AI like Grok (instead of something running locally for the driver's benefit), is one of the last applications one would expect from a new technology, because the price of failure is so high. But there's a lot of room for simple iterative improvement, to say nothing of leaps forward from hardware capability or model design. We're early enough that there are still lots of "easy" iterative problems to solve, even aside from chasing AGI or the other 'hard' design problems.
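The screw-QA idea above can be sketched in miniature. This is a hedged stand-in: a real system would use a trained vision or anomaly-detection model, so the plain tolerance check here only illustrates the flag-then-verify workflow; all measurements and limits are made up:

```python
def flag_out_of_spec(lengths_mm, nominal=25.0, tolerance=0.5):
    """Return the indices of screws whose measured length deviates
    from the nominal spec by more than the allowed tolerance."""
    return [i for i, length in enumerate(lengths_mm)
            if abs(length - nominal) > tolerance]

# Made-up measurements from one batch; indices 2 and 4 are out of spec.
batch = [25.1, 24.9, 26.2, 25.0, 23.8]
print(flag_out_of_spec(batch))  # -> [2, 4]
```

Flagged parts would then go to the 'check its work' step the post describes, whether that's a human inspector or a second physical test.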

>>30991
I think you missed the purpose of that analogy. It has nothing to do with anything being magic; rather, new technologies improve not necessarily through big magical leaps and bounds but often through mundane steps in concert with other facets such as materials, computing power, etc. The PC I'm typing this on has a CPU that (while more complex) is not necessarily much different, in some underlying physical structures or method of function, from one made in the 80s. You could take a schematic for a modern Intel or AMD CPU back to the 80s and engineers could likely understand what they were looking at to a significant degree, but there was no possible way for them to fabricate it, as it requires sophistication comparable to TSMC's 3nm process being run today! Likewise, many of the W3C standards, languages like HTML, etc. have been around since the birth of the Web. Sure, they've evolved, but your argument is like claiming that someone making a webpage in 1993 was at the terminal point for how the Web would be used; that's clearly not the case, and we can track the evolution, for better and worse, and all the factors that shaped it.

So it goes with AI, same as anything else. Hell, these aren't even the first "chatbots", not by a long shot. I'm not sure why you think GPT-5 is some terminal point (or, even if its makers claimed it was, why anyone would listen; remember when Microsoft claimed Windows 10 would be the last version of Windows?). You seem to be conflating all of AI research with "chatbots", and specifically with the whims of a couple of megacorps, which seems strange. Of course the same "chatbots" are embedded in different products; that's part of the Software-as-a-Service model, for cash, data, or both. ChatGPT and DALL-E being accessible through a partnership with MS via Copilot is their business model: selling access to their AI models through APIs. But there are whole alternatives that don't fall into that dynamic (mostly self-hosted FOSS models and training data), and the idea that even the API types have "hit their limit" makes no sense. I don't see the entire industry (to say nothing of global public AI research at universities, think tanks, and other development not motivated by having a product to sell) throwing up its hands and saying this is as far as things go. Standard iterative improvements in model sophistication, training data, hardware availability, and other facets are likely to keep things moving forward, combined with wider applications in different parts of the market. And as with every other technology, significant leaps forward often come from a confluence of enabling factors; there's no reason to think AI/LLMs are somehow exempt.
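The API-access business model described above reduces to metered HTTP calls. A minimal sketch, assuming a generic chat-completions-style interface; the endpoint and model name are hypothetical placeholders, not any vendor's actual values:

```python
# Sketch of the SaaS pattern: the "product" is an HTTP API wrapping
# the same chatbot. Endpoint and model name below are hypothetical.
API_URL = "https://api.example.com/v1/chat"  # placeholder endpoint

def build_chat_request(prompt, model="example-chat-model"):
    """Assemble the JSON body a metered chat API typically expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize this support ticket.")
print(payload["model"])  # -> example-chat-model
```

Whether the model sits behind Copilot, an IDE plugin, or a support widget, this same request shape is what gets billed, which is why the "different products" are largely the same chatbot.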

>>31007
Oh, I agree that inadvisable hype, or simply throwing money at anything with "AI" in the name and/or cramming AI into everything, would have negative effects, but I imagine it's more along the lines of the dot-com bubble (or similar tech bubbles). We have to separate inadvisable market forces pushing "get rich quick" schemes
>use global AI analyzed trends to strategically transform paradigms to actualize the future!
for investment dollars from the actual tech itself, its usage, and its development/improvement. The dot-com bubble bursting didn't turn people away from the Internet or the Web, because there was still something to be done with the technology, and such is the case with AI. Someone claiming they'll use a quantum computer to reach AGI synthetic superintelligence in the next 5 years may find their company predictably bombs because they "dreamed too big", letting them abscond with golden parachutes. But more mundane usage of LLMs and other models and neural networks, from replacing cashiers, stock-taking, and other retail tasks, to receptionists and phone/chat trees, to adding dynamic reactive content to video games, and of course any sort of companion or RP usage, will continue on, I'd think. "Hard" AI research will continue as well, in areas less vulnerable to tech fads.

>>30989
>could do anything that a human being can do

Yes, and in order to be able to do that it would presumably need to have consciousness and self-awareness and all the other mysterious unexplained functionalities of the human brain which make us able to do the things we do.

>>31013
It's usually understood to mean cognitive tasks and not getting embarrassed about past mistakes.

>>30970
They appear to have piled all their hopes on inference-time compute now that the scaling hypothesis* is starting to show its limits. It has held its ground so far, and it still seems to have true believers; just look at xAI's Colossus.

Throwing compute at the problem is one thing; getting useful data is another. Like you said, where are they going to find more data to keep pushing the scaling hypothesis? Artificially generated training data sounds ridiculous to me; you are not going to get anything intelligent out of that. Distilling? Sure. But I have not come across any signs of emergent intelligence from distilled models.

* https://gwern.net/scaling-hypothesis
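For reference, the scaling hypothesis the post leans on is usually written as an empirical power law in parameter count N and training-token count D (the Chinchilla-style form; the constants A, B, E, alpha, beta are empirical fits, so treat this as illustrative):

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Both correction terms shrink only polynomially, so once D is capped by the text actually available, further loss reduction has to come from N and the compute to train it, with steadily diminishing returns.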

>>31016
They already tried pirating the entire Library Genesis archive (as they probably should have).

>>31012
I think this is a mixture of goalpost moving and blatant strawmanning formatted in such a way that it's impossible to address meaningfully. When people, but particularly stakeholders, think of AI, they aren't really thinking of computer vision to help robot butlers navigate tables better, but the final solution that will solve the struggle between workers and capital holders, that is, the machine god that has grown so vast and enormous it's able to do every job ever. Anything short of this won't recoup losses, flat out.

>>31021 (me)
I think people choose to forget this, but if you go back to like 2023, sam altman was waxing poetic about UBI and reorganizing society now that the concept of "working" was a thing of the past. These conversations seem rather fucking stupid now, and sam altman dedicates his account exclusively to damage control every time expectations come up short. ChatGPT wasn't a technical project; it was a project to reorganize society in a fundamental way, very much in line with his other retarded "social hacking" project, worldcoin. It's clear both are pipe dreams.

>>31021
>Anything short of this won't recoup losses, flat out. (me)
To add to this, think of the cashierless Amazon stores and how quickly Amazon scrapped the whole concept once shareholders figured out it still needed a skeleton crew of indians solving little exceptional incidents. It wasn't really that big of a deal, and the technology powering the thing was functional and pretty amazing at the time, all things considered. But since it came short of AUTOMATING AN ENTIRE SUPERMARKET, the project died a horrible death. And that expectation is rather tame compared to what ChatGPT is in the heads of VCs.

File: 1756324676124.png (484.92 KB, 362x561, ClipboardImage.png)

>>31011
techbros protecting their eyes from the welding arc by putting their hands in front of their face is such an encapsulation of how the entire industry works

>>31023
>>31007
Why are we even talking about burgeroidAI like it's serious?

All the cool stuff is happening elsewhere

>>31027
So make a thread about the other cool stuff, what is this dumb post

>>31012
>In some aspects of drug development, lab testing, and personalized medicine for instance […] its proving helpful in both the public and private sectors
citation needed
>>31024
that's just the thing for me, and why i can't shake off my skepticism of the very core of AI/LLMs. it is touted as the solution for "odd jobs", ones for which an algorithm could never be written. but coming up on two decades of the technology being around, no such job has been automated. every now and then they seemingly do come up with such an odd job (in software development, for example), but on closer analysis it turns out to be a job that can be automated with a traditional algorithm. it's all fundamentally a smokescreen, from beginning to end.

>>31028
I've already posted a few links of the cool stuff above

Like, this conviction that the development of AI requires America, and that America failing at it will kill it, overstates America's importance in all this

>>31030
Are you talking about the chinese welding bot? Woah it's cool! What else do you wish to discuss about it lol

>>31032
Let's see. To begin with, you should probably think of the billions that will never be recouped by OpenAI and the other burgers as them being left holding the bag, not as the end of AI

It's a new century friend

>>31033
>not the end of AI
boring definitional argument

>>31034
Why did you avoid the meat of the issue and pick at the gristly bone I threw you?

>>31035
What meat, I already argued that savvy stakeholders leveraging their assets towards robotics is a sign of the bubble bursting

>>31037
Sure there's a bubble in America, I'm disputing that it will kill everything for 30 years, because America isn't actually that important any more

What's your point here? Because it seems like you're more offended by the notion that the USA no longer matters in science and technology than by AI itself

>>31038
I just don't think the welding robot is that cool


>>31041
We've got a thread on that too
>>16322
Enjoy!

>>31038
>muh china will make AI because they communist woowwww and america stinky capitalist ewwww

>>31021
>>31023

So… your argument is that the legitimate, practical benefits that come from what we have now, and that will continue to grow if implemented correctly, don't matter; the only "real" AI is magical fairy dust, and any implementation that falls short of that level just doesn't count, because a bunch of people with vested interests marked it as the aspirational goal? That seems a bit like shitting on the concept of a space program because we're not all living in orbital ring stations across the solar system and on our way to building a Dyson sphere around the sun.
>When people, but particularly stakeholders, think of AI, they aren't really thinking of computer vision to help robot butlers navigate tables better, but the final solution that will solve the struggle between workers and capital holders, that is, the machine god that has grown so vast and enormous it's able to do every job ever.
I just don't think most stakeholders, whether investors across the market or some business deciding whether to integrate AI into its plans, are thinking about this. They're thinking about how they won't need to hire Amazon Mechanical Turk workers to sort things because an LLM + OCR can do it. They're thinking they can avoid hiring a (probably offshored, and limited to reading off a decision tree, which often has humans acting more like chatbots than the other way around) low-level customer service department if a performant model with bidirectional voice can do the same thing. Those running "AI development and/or AI service" companies, like so-called OpenAI, will wax poetic about AI solving all problems so they keep getting investment to expand their platforms, but pretty much everyone else is looking for what the technology can do in a practical manner. That's the crux of a lot of capitalism's problems, after all, right? Planning for short-term, direct ROI to the exclusion of other factors; why would it be any different here? This is not to say every institution or individual interested in AI falls into this category (many do not, from individuals to university research departments and more), but aside from Silicon Valley faddishness, institutional-investor bubbles, and those looking to benefit from both, many other stakeholders weighing AI are looking at its present benefits, potential costs, and whether the ROI is worthwhile. In many cases these more pragmatic usages turn out to be. Hell, AI-generated images, voice, video, music, etc. have direct usability as output, as well as serving as intermediary or prototype steps for continued creation or development.
Some do it just for fun or artistic interest; others are trying to generate artwork that suits a business need, but it's capable of that now. None of the above examples is conditional on some magic AGI superintelligence coming into being. I think you're too focused on the behavior of a handful of CEOs and VCs promising the world (some out of vested interest first and foremost, others perhaps genuinely believing or hoping such outcomes will emerge), but distaste for behavior that is common whenever new technologies arise is a pretty reductive way to evaluate an entire field's value or success.


>>31029
https://www.thedp.com/article/2025/08/penn-new-ai-research-for-kidney-patients

Just one very recent example, but AI is quite capable for a lot of healthcare work that involves correlating and looking for interactions among many variables, or where there's a lot of repetitive testing. For instance, protein folding that was done on supercomputers or via distributed networks like Folding@home can now also be approached with something like AlphaFold. Here's an article from last year by the F@H director on how folding isn't "solved" and how there are continued benefits despite the leap that adding an AI-modeling vector has brought: https://www.annualreviews.org/content/journals/10.1146/annurev-biodatasci-102423-011435 . It's worth mentioning that FOSS projects building atop it, such as BioEmu, have been implemented since that was written. AI models are tools, and they are well suited to this sort of field.

>>31047
They didn't seem to be saying anything close to that.

>>31048
>So….your argument is that instead of legitimate, practical benefits that come from what we have now and will continue to grow, if implemented correctly, the only "real" AI is magical fairy dust and that somehow any implementations that reach that level just don't count, because a bunch of people with vested interest marked that as the aspirational goal?
Yeah you got it.

>That seems a bit like shitting on the concept of a space program because we're all not living in orbital ring stations across the solar system and on our way to building a Dyson sphere around the sun.

I think VCs have done that for me, and have effectively privatized space programs into uselessness.

Some of you guys just don't understand how little appetite there is for "modest gains" in this climate lol

"Clanker" is a slur for bots but do we have a slur for people who use them yet?

>>31064
VVe mvst retvrn to when clanker like being a clopper but for robots / machinery instead of ponies.


Unique IPs: 14
