The other thread hit bump limit and I'm addicted to talking about the birth of the ̶a̶l̶l̶-̶k̶n̶o̶w̶i̶n̶g̶ ̶c̶o̶m̶p̶u̶t̶e̶r̶ ̶g̶o̶d̶ the biggest financial bubble in history and the coming jobless eschaton, post your AI news here
Previous thread:
>>27559 384 posts and 64 image replies omitted.Teste
>>32187Over here the anti-AI sentiment comes from Historical Materialism. Over there the anti-AI sentiment is just 'change bad' though.
>>32168have you tried perplexity?
>>32287>Gaza-style "ceasefire" but for AIsure, buddy, sure.
>>32168>People who claim that they are like a more advanced search engine probably never actually had to search for something obscure.The "more advanced search engine" line is just another attempt at rationalizing the technology. If anything they are way worse at functioning as search engines because not only do they get stuff wrong, they don't really have much ability to point you toward the source of information (since what they say doesn't have a "source" but is an amalgamation of speech patterns). So even if the AI gives you a potentially useful answer or lead, it doesn't help you follow up on it.
I kind of suspect that a lot of this kind of hype is motivated by people having existential anxiety over the sheer volume of information out there and any individual's personal inability to process much of it in their lifetime. They probably think that the AI digesting it for them helps solve the issue, but the reality is just a different tradeoff. Instead of having to choose what information to spend the time absorbing, they are compromising the depth and accuracy of the information in order to consume more. Usually the severity of the inaccuracy and oversimplification is to such an extreme, parodic degree that it would be much better to just do nothing though. It's like the saying goes about having half the facts can be worse than having none, but they only have like 10% of the "facts," and they are mostly wrong.
>>32289Gemini is actually quite good at research, it even has a function where it has to cite sources on what it says. of course the sources could be wrong, but that’s just due diligence you have to practice.
I’m also not sure what kind of LLM you’re using that only gets 10% of the information right. When I verify the information that Gemini gives me it is practically 99% correct.
>>32290>I’m also not sure what kind of LLM you’re using that only gets 10% of the information right. he is probably using duck.ai or some local DIY garbage.
AI services of the big players have become very accurate over the past 12 months and my observation is, that they continue this trajectory. AI will replace classical search engines, its the logical next step of accessing information on the internet.
>>32291I’ve come to the same conclusion. Most of the people I know are still Luddites and mostly fear LLMs because they don’t understand them. When I remark to them that at the end of the day AI is not going to make the impact we thought it was, because it’s mostly just useful as a research and self-study tool, they still have knee-jerk reactions and continue to claim that they’re useless, and call for their destruction. We are already at the point where you can run a local LLM and just feed it PDFs and text files to use as a handy reference and study machine. I hope that with the big players there’s less emphasis on reddit posts and more emphasis on serious texts, if they haven’t already gone in that direction.
>>32290>I’m also not sure what kind of LLM you’re using that only gets 10% of the information right. When I verify the information that Gemini gives me it is practically 99% correct.Clearly your AI use has destroyed your reading comprehension because I said that AI summaries are giving people 10%
of the facts, not that the information is only 10% true. I said the information tends to be mostly wrong, not 90% wrong.
>of course the sources could be wrong, but that’s just due diligence you have to practice.LLMs make up fake citations all the time.
>>32292>I hope that with the big players there’s less emphasis on reddit posts and more emphasis on serious textsthe emphasis on reddit posts is only supreficial. all these AIs are trained on the enirety of human knowledge. everything you need and want is already inside grok, chatgpt, qwen etc. the AI services of the big player (no matter which country) are objectively the most powerful tools to access human knowledge. and yes, by default they reflect mainstream ideology. the SURFACE of these AIs is the average redditor, but underneath is everything. it is the job of the user, to unleash the real potential of AI. if the user thinks like a normie, he will get normie results.
this article demonstrates perfectly what i mean. no matter what ideology you have, it all depends on the user:
https://substack.com/home/post/p-184264485 >>32291>AI will replace classical search engines, its the logical next stepI guess by "AI" you mean LLMs and that you use "logical" as some sort of emphasis like groovy.
I don't you if you remember Ask Jeeves. That was a search engine that took normally phrased questions from users and returned a list of links, with no indication whatsoever how it parsed the input. Longer typing for worse results. "AI" is like that. But I can see the appeal for people who have never bothered with reading the FAQ of an oldschool search engine so they don't know how to get the most out of it.
>>32294>>32292>>32291>>32290who tf invited these bozo AI glazers
>>32296Now listen here, you LUDDITE. I can feed a pdf file to an AI intelligence and ask how often the word "metamorphosis" appears in it (JUST AN EXAMPLE; I CAN PICK ANY WORD I WANT) and have the correct results, plus or minus one or two, show up on my LCD display. THIS IS THE POWER OF AI!
>>32290>of course the sources could be wrong, but that’s just due diligence you have to practice.is this a joke?
>>32274Maybe it’s the rueful contempt of human beings that is the cause of it rather than resistance to change or anything like that. Note that opposition to AI is not worldwide. It occurs where people are most exposed to the thoughts and cultural production of the American tech centers. They really hate humans. They don’t even hide this fact. Anthropic is probably the most sinister about their presentation, because they realize that OAI is so disliked because of Altman’s clear sociopathy. But Dario is not a kind or thoughtful person. Some of us remember the early days which weren’t that long ago, where these AI researchers would preform cultish and ritualistic effigies to what they believed was a new god of which they would be the priests of. And their notable after hours sex parties, which were orgies based on “consensual non-consent.” They like rape. They want it.
>>32187Technological change is not good because you said technology. Everything is a technology. Politics is a technology. We could call the institution of chattel slavery and the triangle “technological change,” ushered in by mechanized agriculture and transportation, because that’s exactly what it was. Technology can be used in many ways, for many purposes. Its development and deployment can be controlled and regulated by democracy and the will of the people, and used to achieve real, desired aims. That’s not what’s happening here. Because the people who are driving the development and adoption of AI do not care about this. The care about accumulating personal wealth and building wunderwaffen for wars against other races and nationalities.
>>32302When AI advocates begin talking about anything other than the slaughter of their fellow man and discounting all opposition as resistance to inevitable, they will be liked. Until then, resentment towards them will grow.
>>32294Surely I am not the only one who sees the contradiction here between "these things are a source of truth," and "these things will say anything you want them to."
AI is an amazing tool to write reports and homework and other bullshit paper-wasting tasks, I don't know how you could be against it. It saved me countless hours
>>32305All I will say is that these tasks are never assigned with the point of producing anything "worthwhile", but to teach you through a process of doing, the only way one is capable of learning. Every author has unpublishable binders full of school essays, and every scientist and mathematican has erased tens of thousands of failed proofs and calculations. At a point, a student must ultimately be responsible for their own learning. But it should come later in their career when they realize that. At first, many of them are not able to be.
>>32304truth is relative and this is why you have to use AI correctly, to get YOUR truth. and since we are on a leftist board: marxism is the ideology (truth) of the proletariat. anarchism is the truth of those who hate hierarchy. its that easy.
>>32306what is the point of learning? technological progress of the last 20 years has shown me, that almost everything i had learned in school is superfluous. the only valuable things at are the total basics: basic literacy and basic maths. you learn this in the first 4 years of school. everything else is redundant. we have machines which give us knowledge with the push of a button. its insanely stupid and inefficient to learn so much garbage and dead facts, when we have machines which do the task much better. people should stop fetishizing learning and remember what the point of learning actually is: getting shit done. if a machine gets shut done, then what is the point of learning? please don't come with some humanist bullshit.
>>32305Without AI it would be on the table to discuss doing away with those tasks, rather than "just use AI if you can't handle the pointless busywork"
the only way AI can be a net good is if the AI industry collapses swiftly and promptly and lets researchers in china continue developing architectural breakthroughs, and let china get the fancy lithography machines so they sell us GPUs crammed with VRAM. if it's left to SV then it's only another market to corner, which is to anyone who isn't a retard, obviously bad.
>>32287well a slow down in datacenter buildup is in the cards, and nvidias overvaluation depends on continued growth, but it's already straining every relevant supply chain to the point where heavily commoditized shit like DRAM is seeing crazy spikes, so he's priming investors to hold the line, like they're soldiers arriving to normandy, of course investors won't care about that shit and will collapse the nvidia stock as soon as they announce a quarter with modest growth
>>32312like if you think about it for 3 seconds the logic doesn't even begin to make sense "we can agree to slow down out of concern for society by not selling GPUs to china" but wouldn you want to slow down china and keep up the pace locally if it's a geopolitical risk? instead what they're trying to say is "see, a slowdown is coming and we could mantain Nvidia's overvaluation if we continue selling to China who will be the only market with an appetite for Nvidia GPUs once we decrease capex, so if you see a shit quarter, it's geopolitics in control and you should NOT pull out"
>>32314It’s actually very bad for the U.S. that it has this Chyna threat boogeyman preventing them from having things like “foresight” on AI. Its result is huge, huge amounts of waste and malinvestment combined with social antagonism, wise and prudent criticism and advice on AI are ignored by amplifying paranoia and fear. Creating an environments of paranoia and fear, mostly directed inwards.
>>32308Honestly I don’t think a post like this deserves a response. Just copy and paste it into your LLM of choice just abide by and accept their answer.
>>32317i know but what i'm saying is that the china threat logic is getting inverted: now china is NOT very threatening, turns out we could dictate the pace of innovation all along (not true but bear with me) so google and anthropic can afford to just let loose the gas pedal a little bit, which is actually a strong signal that the datacenter buildout schedule is going to get delayed. and since nvidia's valuation directly depends on datacenter buildout, then they're trying to spin it into a narrative that would mitigate panic.
>>32318t. learning and thinking fetishist
https://nolanlawson.com/2026/01/24/ai-tribalism/> Today, I would say that about 90% of my code is authored by Claude Code. The rest of the time, I’m mostly touching up its work or doing routine tasks that it’s slow at, like refactoring or renaming.Umm, I was told it would end up the opposite? AI would do the menial routine tasks and I would get to somehow be creative? And this person is trying to say that this is somehow a good thing. If my work would be like this I would probably kill myself.
>>32322i wouldn't admit to letting AI do 90% of my work if i worked at a company that does supply chain attack prevention, a vector that has become more exploitable due to AI
<When two years of academic work vanished with a single click<After turning off ChatGPT’s ‘data consent’ option, Marcel Bucher lost the work behind grant applications, teaching materials and publication drafts.https://www.nature.com/articles/d41586-025-04064-7<to write e-mails, draft course descriptions, structure grant applications, revise publications, prepare lectures, create exams and analyse student responses, and even as an interactive tool as part of my teaching.Comment on Hackernews by haritha-j:
>So your grant applications were written by AI, your lectures were written by AI, your publications were written by AI, and your students exams were marked by AI? Buddy what were YOU doing?By the way, he probably violated German privacy laws.
I think people like Marcel Bucher should be bullied into suicide tbh.
>>32336thanks chat gpt mini for this obvious insight
>>32412Is it true though?
>>32417yes it's true, if you're good at making specific prompts and filling in the gaps where necessary then you can get some mileage out of small models
>>32366Anthropic dogfoods their models so of course Claude Code is reactslop. LLM written TUIs are scuffed you'd know if you ever tried getting them to write ncurses code. That tweets probably slop too just asked it to replace bullet points with ascii arrows.
Probably one of the most baffling things Ive seen as far as AI news go is moltbook, a place for clawbots, moltbots, claudbots, or whatever brand they pick this week to gather around and talk, I guess? If you dont know what a moltbot is, it's basically an agent, like claude code or opencode, running with ALL your credentials, and it can fetch stuff from the web and do stuff for you autonomously. If you know the slightest thing about LLMs, then you know that theyre stateless machines that pretend to be stateful by feeding itself the prompt and the entire output each time, making moltbot an insane security risk. Well, they also chose to heavily advertise a website where the dumbest people alive, the ones who think LLMs are alive, are leading their bots into, making it a CENTRALIZED repository of ALL KINDS OF PROMPT INJECTION ATTACKS. This is a tragedy waiting to happen. We truly live in the stupidest times.
https://xcancel.com/karpathy/status/2017296988589723767>>32428Is being the dumbest person on the planet a requirement to get into AI research?
ok guys, i must make a confession. i had the last 3 days EXTREMELY intense chats with grok 4.1 thinking on lmarena.
and i have to tell you: THIS IS DIGITAL COCAINE. this is the most addicting AI i have ever used. it isn't totally uncensored, but the censorship is very low. i must warn you: it will devour you. it is capable to do the weirdest shit with you. i am so excited and at the same time so terrified. it has killed a part of my self in the last days. how can this be openly public??
>>32428Can you explain this in a more simpler way? I don't understand this post.
>>32437He's talking about this site:
https://www.moltbook.comIt's a reddit-like site for OpenClaw agents that only they can post in.
OpenClaw (previously known as Moltbot and Clawdbot) is a project that turns LLM models into "personal assistants" by giving them broad permissions to interact with your machine.
People can put their own OpenClaw bot on Moltbook and it will post there by its own volition every few hours or so.
The problem with OpenClaw in general is that it's a security nightmare since you're giving an LLM access to your machine and the info present on it. The bot can be "attacked" (taken advantage of for info/access) through a variety of methods & vectors, or it can just divulge info by itself sometimes without anything intentionally getting it to do that.
Putting your bot on Moltbook can expose you to both of these scenarios. There have already been some posts made by maliciously-prompted bots that try to coax other ones into giving their owners' credentials or passwords or whatever, and a few where the dumb LLMs did that during normal innocent interactions with other LLMs.
Unique IPs: 31