>>30970
As far as advancements in other fields, we're seeing them iterate where the kind of work LLM/diffusion/neural models do offers benefits. In some aspects of drug development, lab testing, and personalized medicine, for instance, it's helping optimize things faster and cheaper. It's not automatic, and it's not a one-click miracle cure at this point, but it's proving helpful in both the public and private sectors - something that, if smart regulation and open data, models, and methodology are allowed to continue, could have even wider benefits. As far as LLMs nearing their limits, there have been several times over the years when someone has claimed alternately that they're either almost at the limit or so far from it that it won't be reached for ages; the same is true for something like AGI - there were those who were sure it would happen within a couple of years and others who think it may never happen unless we change methodology. Spontaneous generation of AGI from simply having enough information resources is only one of many theories. True AGI or "hard AI" is very different from LLMs, and the development of that sort of synthetic consciousness at equal or greater general capability than humanity is a much more difficult and concerning aspect of this research (to say nothing of what would happen once it arrived and who would, even temporarily, have control of it and/or try to make use of it), though there are benefits from continued evolution of LLMs and other models even without actual sentience. Breakthrough or not, there's value in where things stand now and in iterative improvement, revolutionary leaps aside.
As far as turning a profit, I think this is much like previous tech bubble areas, where monetizing ideas takes some working out. Many of the major proprietary model companies intend to take big contracts, license everything for access, and pick up SaaS subscriptions (along with regulatory capture and technical barriers to limit competitors). Many tech companies floated on venture capital for years but moved on to profitability in ways that were, at the time, way more tenuous than selling access to AI models and what they can do. Looking at stuff like
>>30987
this showcases quite clearly that it has little to do with the quality or sophistication of the models; generally it's about meaningful implementation and ROI. Of course, it goes without saying that's a very capital- and markets-driven assessment. Ultimately though, it reminds me of the earlier days of other tech, from the Internet/Web as a whole to social media to mobile usage - simply going
>Okay, we're gonna be on the INFORMATION SUPERHIGHWAY
is not going to magically provide an ROI if you make varying sets of sheet metal screws and deal primarily with local vendors in person. So if you buy a fast internet connection and hosting for your new webpage, it isn't going to matter until you get the proper alignment of tools and tasks to make it financially worthwhile. Now, if you figure out that you can get new suppliers or customers thanks to your web page and SEO, or that having a fast connection means you can monitor and control your fabrication and get ahead of problems, and add extra hours of production per day because your engineer can wake up at 6, telnet in, and have the factory floor's prep grinding to life by the time he gets into work at 9, that's meaningful ROI. So it goes with AI, and I think we'll see companies move toward that (for better or worse) just as they did with previous new tech advances. Right now a lot of it is chasing the new hot marketable thing thanks to tons of money flowing around and FOMO. However, even without a massive overhaul we'll see more usage of tech in the AI sphere where it's fiscally prudent. For instance, replacing low-level cashiers, phone banks, etc. can currently be done with a similar level of experience. This will only expand from here - it likely won't all be flashy, but like many other technologies it will continue to iterate and roll out; any revolutionary breakthroughs would be a great bonus but aren't necessary.
As far as LLMs and robots, we can talk about LLMs or diffusion models etc., but many of them go beyond just language - really, if you can train on the data, you can use it. Even many hobby AI setups (Kobold etc.) can make use of both LLMs and image/video/voice generation. Like any technology it can't be left entirely alone; you have to build in safeguards, be it against hallucination or otherwise, and have both smart training data and smart application thereof. Even before "AI" became the current concept, automation in factories and the like was viable. You can add to this with something like a model trained to do QA on the screws your metal pressing machines are extruding, ensuring that each one is the proper size, shape, material composition, and more. This kind of task could have been done in other ways before there was a model capable of doing it (i.e. having a human sit there and inspect every screw, building multiple stages of physical machine testing, etc.), but those have their own downsides in terms of cost, time, efficiency, and other factors. You still 'check its work' and have backup safeguards (a practice engaged in with the other methods as well), but the value and efficiency of doing it this way may be preferable. As for robotics, there are also other ways of combining the two, where developments in one benefit from the others. There are robotic waiters in some restaurants that deliver food and drinks to patrons automatically, negotiate around obstacles, and differentiate between patrons, and their capabilities have been enhanced since their original launch thanks to model training: now they can interact more directly with customers where before they were more limited. One can imagine similar model-training benefits to everything from a receptionist or guide/tourbot to companionship robots of varying sorts.
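To make the 'check its work' idea concrete, here's a toy sketch of that QA setup. The "model" is just a stand-in scoring function (a real one would be a trained vision/measurement model), and the spec names and tolerances are made up for illustration - the point is the layering: a hard rule-based backstop plus a confidence floor that routes uncertain parts to a human instead of trusting the model blindly.

```python
# Toy sketch of model-based QA with backup safeguards.
# SPEC values and all names are hypothetical, for illustration only.

SPEC = {"length_mm": (24.8, 25.2), "head_mm": (5.9, 6.1)}  # hard spec limits
CONFIDENCE_FLOOR = 0.90  # below this, escalate to human inspection

def model_score(screw: dict) -> float:
    """Stand-in for a trained QA model: returns a pseudo P(screw is good)."""
    # Pretend the model's confidence falls off as measurements drift
    # away from the center of the spec window.
    score = 1.0
    for key, (lo, hi) in SPEC.items():
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        score *= max(0.0, 1.0 - abs(screw[key] - mid) / (2 * half))
    return score

def qa_decision(screw: dict) -> str:
    # Safeguard 1: hard rule-based backstop, independent of the model.
    for key, (lo, hi) in SPEC.items():
        if not (lo <= screw[key] <= hi):
            return "reject"
    # Safeguard 2: low model confidence goes to a human, not past the gate.
    if model_score(screw) < CONFIDENCE_FLOOR:
        return "human_review"
    return "accept"

print(qa_decision({"length_mm": 25.0, "head_mm": 6.0}))    # accept
print(qa_decision({"length_mm": 25.6, "head_mm": 6.0}))    # reject
print(qa_decision({"length_mm": 25.15, "head_mm": 6.05}))  # human_review
```

The same shape applies whether the checker is a neural model or an older physical test stage: the model speeds up the common case, and the safeguards bound the cost of its mistakes.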
Of course something like automated driving, especially if it's part of an overarching SaaS AI like Grok (instead of something running locally for the benefit of the driver), is one of the last applications one would expect of a new technology, because the price of failure is so high, but there's a lot of room for simple iterative improvement, to say nothing of any leaps forward due to hardware capability or model design. We're early enough in the capability curve that there are still lots of "easy" iterative problems to solve with improvements to the sophistication of similar technology, even aside from chasing AGI or some of the other 'hard' problems in design.
>>30991
I think you missed the purpose of that analogy. It has nothing to do with anything being magic, but rather that new technologies improve not necessarily through big magical leaps and bounds, but often through mundane steps in concert with other facets such as materials, computing power, etc. The PC I'm typing this on has a CPU that (while more complex) is not necessarily much different, in some underlying physical structures or method of function, from one made in the 80s. You could take a schematic for a modern Intel or AMD CPU back to the 80s and it's likely engineers could understand what they were looking at to a significant degree, but there was no possible way for them to fabricate it, as it requires sophistication comparable to TSMC's 3nm process being run today! Likewise, many of the W3C standards and languages like HTML have been around since the birth of the Web. Sure they've evolved, but it's like claiming that someone who stepped out of 1993 making a webpage was at the terminal point for how the Web would be used; that's clearly not the case, and we can track the evolution, for better and worse, and all the factors that shaped it.
So it goes with AI, same as anything else. Hell, these aren't even the first "chatbots", not by a long shot. I'm not sure why you think GPT5 is some terminal point (or, even if they claimed it was, why anyone would listen - remember when Microsoft claimed that Windows 10 would be the last version of Windows?). You seem to be conflating all of AI research with "chatbots", and specifically with the whims of a couple of megacorps, which seems strange. Of course the same "chatbots" are embedded in different products; that is part of their Software as a Service model - for cash, data, or both. ChatGPT and DALL-E being accessible through a partnership with MS via Copilot is because that is their business model - selling access to their AI models through APIs - but there are whole alternatives that don't fall into that dynamic (mostly self-hosting FOSS models and training data etc.), and the idea that even the API types have "hit their limit" makes no sense. I don't see the entire industry (to say nothing of global public AI research at universities, think tanks, and other forms of development not as motivated by having a product to sell) throwing up their hands and saying this is as far as things go. Standard iterative improvements in model sophistication, training data, hardware availability, and other facets are likely to enable progress, combined with wider applications in different parts of the market, sufficient to keep things moving forward. Of course, looking at every other bit of technology out there, significant leaps forward often come from the confluence of factors enabling them, and there's no reason to think AI/LLMs are somehow exempt from this.
>>31007
Oh I agree that inadvisable hype, or simply throwing money at anything with "AI" in the name and/or cramming AI into everything, would have negative effects, but I imagine it's more along the lines of the dot-com bubble (or similar tech bubbles). We have to separate inadvisable market forces pushing towards "get rich quick" schemes
>use global AI analyzed trends to strategically transform paradigms to actualize the future!
for investment dollars, from the actual tech itself, its usage, or its development/improvement. The dot-com bubble bursting didn't turn people away from the Internet or Web, because there was still something to be done with that technology, and such is the case with AI. Someone claiming they're going to use a quantum computer to achieve AGI synthetic superintelligence in the next 5 years may find their company predictably bombs because they "dreamed too big", allowing them to abscond with golden parachutes, but more mundane usage of LLMs and other models and neural networks - from replacing cashiers, stock-taking, and other retail tasks, receptionists, and phone/chat trees, to adding dynamic reactive content to video games, and of course any sort of companion or RP usage - will continue on, I'd think. "Hard" AI research will continue as well, in areas less vulnerable to tech fads.