[ home / rules / faq / search ] [ overboard / sfw / alt ] [ leftypol / edu / labor / siberia / lgbt / latam / hobby / tech / games / anime / music / draw / AKM / ufo ] [ meta ] [ wiki / shop / tv / tiktok / twitter / patreon ] [ GET / ref / marx / booru ]

/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx



File: 1755139966457.png (8.38 KB, 389x129, ClipboardImage.png)

 

The other thread hit bump limit and I'm addicted to talking about the birth of the ̶a̶l̶l̶-̶k̶n̶o̶w̶i̶n̶g̶ ̶c̶o̶m̶p̶u̶t̶e̶r̶ ̶g̶o̶d̶ the biggest financial bubble in history and the coming jobless eschaton, post your AI news here

Previous thread: >>27559
558 posts and 87 image replies omitted.

>>32647
AI ruining open source projects with garbage PRs is also something of a self-inflicted wound because, unless you're breaking down instructions, AI tends to write code with lots of dependencies by default


>>32297
Only if you have 2 TB of RAM to run a 900,000,000+ parameter model on your computer. For 99% of people it's data centers controlled by the 0.00000001%.

>>32301
>AI researchers would perform cultish and ritualistic rites to effigies of what they believed was a new god
>And their notable after hours sex parties
Source please, I have to know

lol the anti-AI side is also going a bit schizo and embracing Notch of all people. AI = Sweet Baby Inc

Benn Jordan did some research and found that on top of everything else datacenters are functioning like LRADs and making people sick.

>>32297
>plus or minus one or two

Ok, so it's worthless. The whole point of computing is that it's predictable

>>32652
>twitter user claims twitter users embrace twitter user, citing twitter screenshot.

>>32654
he is being sarcastic, you can already press CTRL+F and get the exact numbers anyway

Criticizing AI for being imprecise is missing the point. There are infinitely more problems in the real world to solve that require estimation instead of precise answers. E.g. modeling weather, or the movement of a legged robot. The rush for AI comes precisely from a desire to extend computing into areas where getting a deterministic solution is impossible or undesired. But in that rush, companies also try to make AI solve problems that are already perfectly solvable with standard deterministic computing

>>32657
>Criticizing AI for being imprecise is missing the point
No it isn't. If the US were like China and more modest in its expectations, then yeah, it would be incredibly asinine to point out that a stochastic little bot sometimes delivers unexpected results. But the expectation is nothing short of a death god of labor that will doom the proles to a miserable life as the permanent underclass, and we're going to let it loose on a number of fields where precise answers are required, because capital demands it so.

At this point there's nothing to gain from trying to separate the immense capital AI has raised as the ultimate labor replacer from what AI actually is in reality. That separation is something liberals who are clearly trying to curry favor with Silicon Valley billionaires keep doing, and you should be able to detect it and call it out for the malicious myopia that it actually is.

>>32657
You're not wrong, but a lot of the things that look as if they're inherently only solvable by imprecise computing (in general, not just ML) are probably deterministic systems/problems which we humans just haven't completely figured out yet.
Reliance on imprecise methods is doomed in that sense, since the bipedal primates are probably going to figure out how those systems work eventually. But its usage is also bad in the meantime, because if an imprecise method is good enough, it discourages efforts to actually unravel how shit works, and so delays the day a given system does get figured out.

(All of this broad-stroke theoretical CS stuff is off-topic to the thread btw lol).

AI companies, and OpenAI in particular, have made building a personal computer unattainably expensive again. The cost of basic components has tripled or quadrupled. The degree to which consumers are getting fucked here is just not even discussed. They really don't consider people and society part of the equation.

File: 1771600650334.png (133.49 KB, 648x878, ClipboardImage.png)

Like I said, what's going to happen with clawbot or whatever is tons of people will get fleeced clean

https://xcancel.com/chiefofautism/status/2024483631067021348

>>32661
it's insane that my computer has quite literally been appreciating, this has never happened before in consumer electronics, shit's bananas. this can't be good in the long run, so I doubt it will last, especially with certain components like RAM. It'll take a bit for everything to fall into place, but the SCOTUS ruling shows that porky wants to source from China as soon as possible, and Chinese companies who haven't quite entered the HBM market yet are rising to the occasion in Asia.

>>32663
Wanna buy my t430 for 1000?

https://web.archive.org/web/20260220200341/https://www.modular.com/blog/the-claude-c-compiler-what-it-reveals-about-the-future-of-software
> We are facing a new era of automated reimplementation of proprietary software
I don't understand how he can claim this when the whole article is about how Claude reimplemented code that was already open-source. Since the main claim is that AI can reproduce well-known solutions but cannot innovate, wouldn't this just incentivise companies to not make their solutions well-known? Don't release the source code, don't publish white papers, don't give conference talks, don't write blog posts. Just keep everything secret and hope nobody reverse-engineers it. In fact I wouldn't be surprised if fear of this is why Google crippled AOSP. To make things worse, since AI output is not protected by copyright, and if it really can reproduce existing solutions, it could be used to strip the license off existing copyleft software. Google could just ask their AI to make a binary-compatible implementation of Linux and it would be proprietary. They could put it in Android and get rid of the last part that they still develop in the open.

>>32657
AI is a useless term. Most of the criticism is aimed at LLMs, which generate text without any indication that it might be imprecise. Their imprecision is termed "hallucination", implying that it is not fundamental to their operation but a simple bug that can be fixed. But you might remember object detection models just outright telling you that the thing in the picture has an 81% chance of being a dog and an 18% chance of being a dreadnought. I don't remember anyone criticising those for being estimations.
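For comparison, this is roughly what those old classifiers were doing: raw scores pushed through a softmax into an explicit probability distribution. A minimal sketch in Python, with the labels and scores made up for illustration:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    # Subtract the max for numerical stability before exponentiating.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a detector might emit for three classes.
labels = ["dog", "dreadnought", "toaster"]
logits = [3.2, 1.7, -2.0]

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.0%}")
# prints: dog: 81%, dreadnought: 18%, toaster: 0%
```

The uncertainty is right there in the output, honestly labeled, instead of being buried inside fluent text.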

I hate using it, but at the same time, researching on the web has become completely shit since it took off, so sometimes using it is the best option.

File: 1771816989001.png (62.28 KB, 667x202, ClipboardImage.png)

>>32666
>I don't understand how he can claim this when the whole article is about how Claude reimplemented code that was already open-source.
well maybe it helps to understand these sorts of editorials as an exercise in consensus building among the investor class. the current consensus is that claude can replace any existing saas solution ever, and the small print underscoring the entire thing is that it HAS to replace all software ever or the entire investment will go up in flames spectacularly. also i like that they keep referring to all software as "the application layer" as if introducing this specific OSI model terminology made them look more expert

>>32666
They'd need government-level secrecy. LLMs have SO MANY PARAMETERS that they really only need to see something once to almost completely memorize it, and once it's in the weights, it's done, forever. It would be really hard to prevent if people start getting aggressive about training on proprietary source code. But who knows.

>>32671
Deepseek is going to commoditize it. That's like the goal of the company, and they're really, really good.

OAI and Anthropic are still chasing AGI/ASI or whatever they call it. The runaway singularity stuff. Otherwise their strategy was a mistake.

>>32678
>OAI and Anthropic are still chasing AGI/ASI or whatever they call it.
It seems that the tweet implies that the alternative to AGI is being like a big big SaaS ACME lol. Workday? Claude. SAP? Claude. Service Now? Uber Eats? Doordash? All claude

>>32677
They'd need to run private LLM instances too. Even the paltry autocomplete models are already sending entire files, if not the entire codebase, as context.

>>32677
>>32680
Okay, I guess I was a bit naive. I could totally see Microsoft letting OpenAI train its models on Microsoft GitHub's private repositories, or Anthropic training on files submitted as context, even though I think they both provide "private" LLM instances.

>>32681
microsoft doesn't care because they retain all rights to OpenAI's stuff, it's pretty much a msft subsidiary in every aspect except on paper.

Pentagon Threatens Anthropic with Contract Termination Over AI Restrictions

<Defense Secretary Pete Hegseth has issued an ultimatum to Anthropic CEO Dario Amodei, demanding the company remove ethical restrictions on its AI technology for military use by Friday. Failure to comply could result in Anthropic losing its government contracts, being designated a supply chain risk, or being subjected to the Defense Production Act, which would allow the military to seize more authority over its products.


<The conflict centers on two specific "red lines" established by Anthropic:


>Fully autonomous military targeting (lethal force).


>Domestic surveillance of U.S. citizens.


<While peers like OpenAI, Google, and xAI have already integrated into the Pentagon’s secure networks (GenAI.mil) and agreed to support all "lawful" military applications, Anthropic remains the sole holdout. Hegseth has publicly criticized "ideological constraints" in AI, vowing that the Pentagon’s systems will not be "woke" and must be fully optimized for warfare.


<Despite the pressure, Anthropic maintains that its safety-first approach is necessary to prevent catastrophic risks, such as AI-assisted mass surveillance or the suppression of dissent.


<However, analysts suggest Anthropic’s bargaining power is waning as its competitors move to fulfill the Department of Defense's requirements without similar ethical caveats.


https://apnews.com/article/anthropic-hegseth-ai-pentagon-military-3d86c9296fe953ec0591fcde6a613aba

>>32686
Amodei is just a standard lib with some morals, but he's still a retarded booster for the US neolib order

>>32686
this smells like kayfabe bullshit to me, as if chatGPT were meaningfully worse than claude for most tasks. i think dario is just keeping up his consistent performance of "oh no the AI is le dangerous you guys and ours is the most dangerous one", and acquiescing instantly to the DOD would be inconsistent.

>>32686
>>32694
anthropic's moral objections are certainly insincere. they're just trying to limit the scope of any technology and/or service they provide the USA government to ensure future sales (just like modern tech companies do with regular consumers).

File: 1772055549098.png (97.33 KB, 662x690, ClipboardImage.png)

Apparently distillation goes both ways and if you ask Sonnet 4.6 "which model are you" it will respond with DeepSeek lol
https://xcancel.com/stevibe/status/2026227392076018101#m

>>32696
We have yet to understand how distillation works

>>30810
IBM stock went down because Anthropic claims Claude can code COBOL
Get ready for vibe-coded air traffic control

>>32698
in my head everyone in finance is like a homer simpson-ish simpleton who stops in their tracks each time they encounter an ad in the street and goes "ooohh yeah that's so right, i can not eat just one chip"

File: 1772085138066.png (69.55 KB, 498x461, feels-good.png)

>>32662
>disguised as crypto trading bots
no better feeling than coincels being scammed

I'm thinking about this decade and how Java OOP enterprise-quality boilerplate slop eventually gave way to fancy languages with lambdas, functions as first-class citizens and what not, and I wonder if LLMs will make the current tech and language stack "stick". React, which seemingly was starting to be phased out, suddenly became a permanent fixture because most frontend code LLMs output is going to be React, specifically spaghettified React that makes heavy use of hooks, a now supposedly maligned practice, and we're going to be stuck with this paradigm forever because it's what LLMs managed to snapshot.

>>32705
Yes, LLMs will bring about the end of history.

File: 1772226765726.jpg (144.01 KB, 499x968, HCMUbDfbEAEm-4a.jpg)

lmfao I'm genuinely convinced this dog and pony show of defiance is literally just for anti-Trump Democrats. I think they expect dems to retake congress and want to be positioned as the "good AI guys". Trump and Hegseth just want the smart people to vouch for them at a time when AI is symbolically important for maintaining hegemony against China

>>32705
There is also now something of a counterincentive to creating new languages and libraries at all because before LLMs can reliably write new code in them you need extensive examples and then for the models to be retrained on datasets containing those examples. So this only worsens the stagnation

>>32716
I think Dario doesn’t care because these people assume they are going to become God-Kings anyway, and AI will dominate and destroy the world with power and terror never before seen. Dario just wants this to be righteously done in the name of liberalism, even though concepts like “democracy” are incoherent under the dominion of the AGI he imagines. He’s just, at the core, extremely afraid that “heaven will be Chinese.”

Hegseth is just an alcoholic retard and an ape. Altman's swiftly evil behavior has made him out to be a terrible actor as well. I think this debacle has done a huge amount of damage to the perception of the entire technology. They just don't care because they're betting on the swarm of blood-powered nanobots being launched before the next election.

How to fight back LLMs and Big Tech data scraping?

I have read about putting hidden links in for LLMs and web crawlers. The link would open another page of links; if the LLM or web scraper follows any link on that page, it either gets blacklisted or the server starts feeding it spam pages (and those spam pages lead to other spam pages, ad infinitum). Another way is to deliver spam pages that are incomplete and get longer and longer over time. You can generate the spam pages with Markov chains or LLMs. Look up "Nepenthes" and the CPAN module Games::Dissociate for inspiration. Another approach would be to require proof of work (mkproof or Anubis?).
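For illustration, a toy Markov-chain babbler of the kind a tarpit like Nepenthes could use to fill its spam pages. This is just a sketch, the corpus and function names are made up:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, length=50, seed=None):
    """Random-walk the chain to emit plausible-looking nonsense for crawlers."""
    rng = random.Random(seed)
    word = rng.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        # Dead end (a word with no recorded successor): jump somewhere new.
        word = rng.choice(followers) if followers else rng.choice(list(chain))
        out.append(word)
    return " ".join(out)

corpus = "the scraper follows the link and the link leads to another page of spam"
print(babble(build_chain(corpus), length=20, seed=42))
```

Because each page is generated on the fly from a tiny word table, serving an infinite maze of them costs you almost nothing while the crawler burns bandwidth and crawl budget.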

You could poison the content itself. You can add certain small perturbations to images that make AI models break (specific methods are Glaze and the Nightshade project). Also, if you can somehow trick LLMs into training on content generated by another LLM, it makes the model worse than before (see "model collapse").

>>32728
with anubis? i suppose you could also just hide immense blocks of comments and prompt injections in your frontend code. the current crop of mid-tier frontier models are all summarizing context, which tends to make them "forget" initial instructions because standard attention is expensive. at a slight disservice to your users i suppose

>>32728
>Also, if you could somehow trick LLMs to train with content generated with another LLM, it kind of makes the AI model less good than before (see, "model collapse").
AI companies love synthetic data. They feed models slop for more than half of their diet right now. What model collapse does is narrow the output distribution, and every single thing AI companies do on top of pretrained models does that, such as fine-tuning, adding instruct and thinking capabilities, and especially RLHF. AI-bad people miss that AI companies don't care about model quality; they're just benchmaxxing to give the investors another number that goes up. Also, many of them barely use any real internet data now.

Another blog post grumpy about AI; it doesn't really articulate a new viewpoint, but it is exceptionally well put together. It also happens to be long, like long long.
https://www.scottsmitelli.com/articles/you-dont-have-to/

(The one tiny missing thing that would have made this essay a perfect 10/10 in my view: I expected the author to come back to the Milli Vanilli lip-sync debacle he mentions early on, but he never does. And I expected him to mention that those guys could actually sing, and were forced by their manager to lip-sync to another voice. That makes it a really fitting anecdote, since the author also brings up how using AI is often forced on people in their workplace.)

looking forward to the death of leftypol. it's only a matter of time.

File: 1772601956910.png (1.17 MB, 864x1292, 1763574451991666.png)

ai bros on suicide watch

I’ve used AI to generate a bunch of dashboards for various random things I wanted to visualize, and unfortunately I’ve gained the ability to instantly recognize vibe-coded UI. For some reason all the bots have a very limited visual language. I can’t unsee it, and now so many new webpages just make me nauseous to look at.

Just take a look at “dataisbeautiful” if you don’t know what I mean yet. It’s all Claude coded visualizations. Black backgrounds with colorful UI elements, way too much text, too many elements, unironically “not beautiful” representations of data that are overloaded with irrelevant and confounding analytics.


File: 1772848451647.png (827.36 KB, 1206x1917, ClipboardImage.png)

The entire pro-AI side of the discourse is basically just this, isn't it

>>32834
Yes, it’s literally all AI generated posts.

I use it so much at work (code) that I can instantly recognize it, often down to which model and which version, though they blur together. I’m always surprised and a little disgusted when I go onto, for example, Reddit and see some AI-generated OP with AI-“polished” responses, with the actual humans responding apparently not even noticing.

But they also simply don’t care about human life at all, at the far end. They think 99% of humanity is effectively worthless chaff, non-productive and made obsolete by commoditized intelligence. The cause of this is actually primarily capitalism, I think, because they don’t understand the purpose of anything that doesn’t boost productivity or make markets more efficient. So if you are inefficient and unproductive, you are worthless. What are you as a human? If you bring this up you’ll be scoffed at, as if the question were a childish thought they’ve already refuted.

Incidentally I can totally see the awful agent pipeline they have set up:

>“Hum, the user seems to believe the previous target designation was incorrect. This is a critical issue. I need to handle this delicately….”


>You were right to push back on that. On closer examination, that is not a military barracks but is clearly a school. Do not push that button. This is not a valid military target. Thanks for setting me straight. Are there more targets you would like me to evaluate? Or would you like me to explain why bombing protected civilian infrastructure is a war crime?


Unique IPs: 35
