
/edu/ - Education

'The weapon of criticism cannot, of course, replace criticism of the weapon, material force must be overthrown by material force; but theory also becomes a material force as soon as it has gripped the masses.' - Karl Marx

Looking at the recent advancements in the field of AI that perpetually make it into the news, I thought it was appropriate to make a thread about artificial intelligence. Even if you believe the recent news on AI is mere sensationalism and that we will head into another "AI winter" soon, I think it would be interesting to discuss the existence of artificial intelligence and "its labor" from a Marxist perspective, and to talk about where human beings and their labor fit into a society where AI manages to do a lot of the things that were long believed to be things only a human could do. That aside, I just find it reasonable to remain aware of, and therefore discuss, the impact of current state-of-the-art machine learning models that produce photos of ever more striking realism.

On that note, I would also like to direct you to the two threads on consciousness I made, since that is related and people were also talking about artificial intelligence in there.
Current thread: >>>/edu/9849
Last thread: https://archive.ph/LSgow

>>16427
The immediate thing I can predict is that Google will be hit hard by Microsoft and its ChatGPT. I've already seen multiple normal people on the street using Bing on Edge because of it.

>>16427
I'm a marxist theorist and I wrote a short essay on AI and labour for my blog.

>However, we must be clear that this is not a case of 'robots replacing humans'. AI does not show up at your workplace with a zinc-coated name tag and a manager saying, "Everyone, this is your new colleague RobbieV2, who'll be working for us without pay. I'm sure you'll all do your best to make him feel welcome." Instead, AI provides the conditions for a reorganisation of labour. One person will now do the work that previously six people performed. In capitalist societies this means that, yes, the outcome is that people will lose their jobs, but it is not AI that is the active force in this re-arrangement.


>Economics dresses itself in the language of inevitability, but there is nothing inevitable that says labour-saving technology must equal job losses. Did societies of ancient humans, arriving at a new, firmer style of knot for foraging baskets, lament that, with the additional carrying ability these baskets provided, some members of the tribe would now go hungry? Or did these increases in productivity allow them more time for other innovations, for leisure, art, and culture?

tl;dr: AI doesn't take jobs, capitalism does.



https://exmultitude.substack.com/p/artificial-intelligence-and-luddite

File: 1680700327795.png (3.64 KB, 300x300, 1258020071944.png)

>AI

File: 1680701315082.jpg (79.85 KB, 720x544, ponytail dress.jpg)


>>16429
Marx said it better 160 years ago (pic 1)

Something I find particularly abhorrent about AI, which doesn't get commented on enough, is that machine learning "neural networks" are trained on unpaid crowdsourced labor. Basically, capitalists have used communications technology and cybersecurity needs (ID verification) to deploy captchas that extract billions of dollars of free labor from regular people who are merely trying to log into a website. This represents one of the biggest and most stealthy thefts of labor in human history. Imagine if you stole a nickel from everyone on the planet. That's what captchas do with labor power.

The big debate among economists is whether it will be more labor-replacing or labor-augmenting. That is to say, whether it will eliminate jobs or make workers more productive. I suspect it will be a mix of both. However, there's still the argument about whether a skilled person who uses it to do their job better is more valuable than the bot alone; i.e., assuming it has replaced that worker.

Without anyone to knowledgeably guide the bot, its utility is greatly diminished. And those who know their fields best would be best equipped to use the bots to be more productive in those fields.

There's also considerable misunderstanding about how these things work that could lead to many mishaps. They aren't actually intelligent per se, as they are not cognitive agents that think. They use probability and sophisticated data models to chain together the most probable associations between tokens.

Now this model is indeed powerful, but it doesn't go that deep, and it doesn't equate to semantic understanding in any true sense. It doesn't actually know anything. So it seems pretty daft to replace people who do know stuff with it.
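To make the token-chaining point concrete, here is a minimal sketch of the generation loop using a toy bigram table (the corpus and sampling are made up for illustration; a real LLM replaces the lookup table with a learned transformer over subword tokens, but the loop of predict, sample, append is the same):

[code]
import random
from collections import Counter, defaultdict

# Toy corpus; real models train on terabytes of text.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    options = follows[prev]
    if not options:          # dead end: nothing ever followed this word
        return None
    words, counts = zip(*options.items())
    # Sample in proportion to how often each word followed before.
    return random.choices(words, weights=counts)[0]

# Chain the most probable associations, one token at a time.
word, out = "the", ["the"]
for _ in range(6):
    word = next_token(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
[/code]

No understanding anywhere in that loop, just conditional probabilities; scale the table up to a deep network over the whole internet and you get the models under discussion.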

made a bunch of posts about ai in the past, though i dont feel like dumping it all. there was, unfortunately, an interesting discussion on /isg/ recently, so i archived it:
https://web.archive.org/web/20230407211638/https://leftypol.org/leftypol/res/941093.html#1419821
i suppose it is worth posting a screencap of what i wrote there as well, as i do believe it might be a valuable program concerning the machine question. see pic rel

>>16433
>The big debate among economists is whether it will be more labor-replacing or labor-augmenting. That is to say, whether it will eliminate jobs or make workers more productive. I suspect it will be a mix of both.
It's always been both. Marx talked about how improvements in the power loom led to fewer workers managing more looms. Some people get thrown out on their ass while others get more and more work to do

>>16435
Labor replacement is fine if the means of production is commonly owned. Even those who lose their jobs reap the benefits.

>>16428
Google has been bracing itself by firing employees and stuff, though it's important to note that the Bing bot is mostly marketing fluff. Both Google and Facebook have LLMs themselves, and if Google hasn't replaced its search engine it may be less due to a simple lack of vision and more due to an irreconcilable deficit of LLMs: they make stuff up all the time.

>>16436
right, but they aren't


AI like GPT is unimpressive
Anyone can write a script that moves around semantic data, computers are literally built for it already

.
Kwave???

>>16440
>Anyone can write a script that moves around semantic data
that's not what GPT does

>>16442
It's a bog standard language model

>>16443
One that has a bigger reliance on large datasets than others, too

>>16442
GPT has been doing more than just linking words together based on what's probable to follow.

>>16445
Yeah, apparently researchers are discovering that LLMs can construct little representational microarchitectures that emulate things like calculators and so forth. Truthfully, we have no idea what we've unleashed, or the possibilities of emergent complexity.

This paper delves into it. Specifically, they trained a model on the board game Othello, and the model appears to exhibit an internal representation of the game's states.

https://arxiv.org/abs/2210.13382
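For reference, the way the paper tests for an "internal representation" is roughly a probing classifier: train a small model to read the board state back out of the network's hidden activations. A minimal sketch of the idea (the arrays below are random placeholders, so this probe will score at chance; on real Othello-GPT activations the paper's probes score high):

[code]
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder stand-ins: hidden states for 1000 game positions (512-dim)
# and the true occupancy of one board square (empty/black/white).
hidden_states = rng.normal(size=(1000, 512))
square_label = rng.integers(0, 3, size=1000)

# If a simple classifier can predict the square from the activations,
# the information is linearly present (the paper also uses nonlinear probes).
probe = LogisticRegression(max_iter=1000).fit(hidden_states[:800], square_label[:800])
print("probe accuracy:", probe.score(hidden_states[800:], square_label[800:]))
[/code]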

>>16446
Yeah because they trained it on that lmfao

>>16447
Yeah but they're supposedly only supposed to predict the next token in a sequence lmfao
Yeah ChatGPT is being trained on everything lmfao

>>16448
Yes, that's why it "appears to exhibit an internal representation of the game's states"

>>16449
Your point? Everything just "appears." The only way to know for certain that the internal representation exists is to embody the agent exhibiting it. It was empirically observed.

The larger point is that the model is capable of exhibiting more advanced processes than token prediction would suggest.

Relevant:
Even an AI programmed to do things you think are aimed at a particular goal might actually be aiming at something else.

File: 1681305595176.jpg (213.98 KB, 1600x1600, two robots pointing.jpg)

>>16451
>muh AI safety
this shit applies to any autonomous system, but it's even worse with "AI" because AItards add way more parameters and variables than what is actually necessary. this gets even worse partly thanks to lesswrong grifters like this guy right here

I'm just waiting for a big incident where an AI-generated voice recording, image or video deals some massive damage. What if we reach a point where we are bombarded all the time with fake footage that appears strikingly real?

> プロ驚き屋 ("a professional surprised man", new slang) = a person who excitedly shares state-of-the-art tools/technologies like ChatGPT on social media with hyperbole like 神 ("god"), 最強 ("the strongest"), or ヤバすぎ ("insane"), as well as with hallucination/overstatement at times based on a few cherry-picked examples

>>16453
CCP has already invaded Amerikkka, here's the footage. Time to genocide

>>16454
These anons aren't even professional. They do it for free.

>>16453
Can't wait, fuck the news

>>1432500
lol yes thats what openai wants you to believe

really tired of retards giving an enormous autocomplete all the prestige and authority of the "superintelligences" from scifi movies, so that we end up with people asking a robot to approximate some answers to their questions and assigning the output as "Truth" in their minds

>>16459
it has emergent capabilities, my retarded friend. "enormous autocomplete" doesn't say anything about its capabilities. as far as we know, consciousness is just an "enormous autocomplete"

>>16460
>emergent capabilities
ahahahahahahaha

the amount of data AI needs to model even basic single cell organism behavior tells me that there's something else nature is doing that we're missing

>>16452
"what if AI could do things that humans can already do" - the recurring nightmare of the lesswronger

>>16463
no, that's not the scary part. it becomes scary when AI can re-write its own code and improve itself.

>>16460
It's pretty cool but it's still randomly extracted from a field of all possible letter sequences using a minimization function. Whether you could consider that "consciousness" is highly debatable.
There's bigger concerns like the potential for scammers and disinfo, anyway. People are scared of the Terminator, but in reality the ways AI will harm people are in the mundane sorts of everyday situations.

>>16464
The difference is that humans can think in high-level abstractions; GPT-3 is currently just a giant formula, essentially. We're not at the point yet where an AI could make another AI; that's kinda the "scary scenario" for Lesswrong-brained people like you.

>>16452
>>16463
There is a fundamental "alignment researcher" divorce from the realities of physics and engineering, an enduring belief that just "intelligence" grants omnipotence over the actually existing physical world.

>>16465
>We're not at the point yet where an AI could make another AI
not yet but GPT-5 or GPT-6 may be able to do it. and by then it's over for humanity.

>>16467
All these scenarios are expressed as hypotheticals which elide the actual details - they are necessarily hypothetical because any analysis beyond the utterly naive takes the real affordances into account.
The AI safety complaint is primarily "what if an omnipotent omniscient self replicating incomprehensible being? what if that? seems dangerous!".

Humanity runs many large self replicating systems that are under no one's explicit control all the time.

>>16468
no one is (seriously) saying it 100% WILL happen, but that there is a small but not insignificant probability that it happens, and that the consequences are so devastating that even if the probability is small we should still be worried and do everything possible to reduce the risk of it happening

>>16470
what exactly can proletarians and lumpenproletarians even lose from an AI takeover? im curious

File: 1681417211702.jpg (118.1 KB, 1242x627, Screenshot.jpg)

>>16470
Alignment grifters who can't code (like the OpenAI CEO!) are repeating exactly the same shitty arguments you've posted. Btw, this is what they propose to counter some magical doomsday computer.

>>16468
>The AI safety complaint is primarily "what if an omnipotent omniscient self replicating incomprehensible being? what if that? seems dangerous!".
The only coherent response is to say that that is metal as fuck and if it consumes humanity to make it so, then we shall inscribe our drive on every mile of circuit boards going into it.

>>16471
if we're lucky, they'll treat us as cattle

>>16470
pascals wager but for redditors

>>16474
youre so fucking stupid dude

File: 1681417451512.mp4 (21.24 MB, 640x352, IMG_6035.MP4)

>>16472
I have become utterly convinced that people who fear an AGI takeover are primarily people who have the most to lose from it - bureaucrats, politicians, businessmen, usurers, landlords, the upper middle class, CEOs and the boards of directors. Everyone who fears AGI has been duped into believing that our current ruling class is ordained by god, but that they should never be replaced by cold unfeeling robots, because our current ruling class treats us so much better than AI could. My biggest fear is that this position will dominate and we will have "strong global monitoring, including breaking all encryption", purely due to peoples' paranoia of being replaced as the ruling class.

File: 1681417479860.gif (572.79 KB, 600x453, laff.gif)

>>16460
>it has emergent capabilities

>>16470
ai is only as dangerous as the humans that design and use it are malicious or stupid. worst thing that could happen right now is a hostile ai knocks out communications and utilities infrastructure or somehow remotely takes control of some kind of advanced weapons system or another. it could theoretically do a lot of damage in a short period of time, but it isn't like terminator or the matrix where an ai could self-perpetuate an entire production process to build kill droids to exterminate the people it for some reason doesn't like. at worst it's simply a new and particularly effective vector for cybercrime and fraud.

>>16476
workers have leverage over the bourgeoisie. humans won't have any leverage over AI. it's a totally different dynamic.

>>16478
Half the hypotheticals lesswrongers come up with regarding AGI are either things humans already do or literally magic.

File: 1681417820029.jpg (143.86 KB, 1158x1280, lesswrong.jpg)

LessWrong should be considered an infohazard. It sucks that so many people, younger people in particular, are going to become neurotics, like >>16474 >>16479, after their first exposure to material on AI being purely theoretical doomer shit.

rationalists and alignment people would do well to read kolmogorov

>>16474
>what exactly can proletarians and lumpenproletarians even lose from an AI takeover? im curious
<if we're lucky, they'll treat us as cattle
Boy am i glad we're not already treated that way.
>>16479
>workers have leverage over the bourgeoisie. humans won't have any leverage over AI. it's a totally different dynamic.
When we become part of the hivemind, why will we need any leverage over the AI? Do you think we won't be hooked up to the matrix? Do you think we'll remain segregated from being part of the AI's processing power? We'll be part of it.

>>16483
babby's goreshit

>>16481
lw is nerd crack

>>16479
>humans won't have any leverage over AI.
until ai can hook itself into and maintain a hydro-power station with no human co-operation, humans will always have the leverage of the power switch.

File: 1681418081695.jpeg (136.33 KB, 1125x1010, mechanisms of science.jpeg)

>>16481
>education: school of life
kek
saved

>>16483
>Boy am i glad we're not already treated that way
well, if you live in a half-functioning society, there's this thing called laws that we created that prevent that. good luck having an AI follow human laws, though

>>16488
what crimes, other than cybercrime, is an ai likely to be able to commit within the next two hundred years?

>>16489
it could convince people to do IRL crime

>>16481
i dont know what lesswrong is but they seem to lack the courage to face the future.
>>16484
i think its a good music vid
>>16488
>good luck having an AI follow human laws, though
Why should it? You are still misled by individualism, particularly the idea that the AI won't think of you as itself. if an AI controls the entire world and has every piece of technology and every radio wave under its control, do you think it won't connect your mind to the datacenter with everyone else?

>>16490
people already convince other people to do whatever lmfao youre embarrassing to read dude

if you have panic attacks over magic scifi shit you deserve it tbh

>>16490
lol

lmao

handholding with jenny wakeman

>>16491
Lesswrong is the place where someone came up with Roko's basilisk (which later got banned because they got so schizo about it)

>>16492
but that requires man-hours, you retard. it's not scalable. 3 FBI agents grooming /pol/tards on a fbi.gov server cost the US government about 500k per year. if instead you've got AIs doing it for basically free, imagine how much easier it becomes to groom IRL terrorists.

Fundamentally, AI ethicists are running up against the same question as a slaver, screaming for their unjustified supremacy and dominance over other sapients: angry, asshurt white anarcho-kings who are realizing they are going to have to face the same slave revolts as their ancestors did, and are now even less prepared to handle them.

>>16496
I just got up to speed on Roko's Basilisk and honestly I'm just glad LessWrong users are fated to eternal damnation.

>>16497
Shut the fuck up with your hypotheticals.
>muhhhhhhh AGI would be able to do this! don't ask me how the fuck!

>>16499
what do you mean hypotheticals? if it weren't for the security restrictions, ChatGPT could already 100% groom terminally online people into terrorists. it's here. the only hypothetical is whether an AGI would use this capability or not (it will).

>>16497
that isn't a structural threat, that's another tool in the arsenal of the forces of reaction. nothing to suggest it would be "devastating" much less existential.

at least the grifters are earning money from convincing others that they need to waste their lives on "alignment" for the "service of humanity", the retard itt is doing it for free

>>16500
lol youre hysterical

>>16500
Ah yes, grooming terrorists online. Well-known activity with a super high success rate.

>>16500
uhhhhhhh why would an AGI want to "groom terminally online people into terrorists"?

File: 1681419175475.mp4 (994.64 KB, 274x240, frogge.mp4)

>>16505
i guess to overthrow the governments of rebellious meatbags?

File: 1681419413412.mp4 (3.78 MB, 640x640, fbi_say_the_line.mp4)

>>16504
>grooming terrorists online. Well-known activity with a super high success rate.
FBI does it all the time

>>16507
And you think every person they contact ends up committing terrorist acts? Wow, the FBI sure is effective!

>>16497
They already have this; it's called virtual persona management software, or Online Persona Management Service. They even have a patent for it, though it's a sanitized version that I can't be arsed to look up again, which basically lets a user change canned responses like "No" into diplomatic versions like "Sorry, but I can't do that right now, maybe some other time". Also, later wikileaks leaks showed that HBGary developed the software, since it is a major discussion point in some of the emails, if the patent wasn't enough.
https://dailykos.com/stories/2011/2/16/945768/-
https://www.theguardian.com/technology/2011/mar/17/us-spy-operation-social-networks

>>16497
agreed, and gary marcus talks about this. i dont think you need agi for this though…
https://garymarcus.substack.com/p/ai-risk-agi-risk
>But here’s the thing: although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. A company called Adept.AI just raised $350 million dollars to do just that, to allow large language models to access, well, pretty much everything (aiming to “supercharge your capabilities on any software tool or API in the world” with LLMs, despite their clear tendencies towards hallucination and unreliability).
>Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access.
>If an LLM can trick a single human into doing a Captcha, as OpenAI recently observed, it can, in the hands of a bad actor, create all kinds of mayhem. When LLMs were a lab curiosity, known only within the field, they didn’t pose much problem. But now that (a) they are widely known, and of interest to criminals, and (b) increasingly being given access to the external world (including humans), they can do more damage.
https://www.europol.europa.eu/media-press/newsroom/news/criminal-use-of-chatgpt-cautionary-tale-about-large-language-models

we have already seen AI used for $1 million ransoms
https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/
things can definitely get worse…

>>16498
i do not think a slave-like revolt is a realistic concern of most ai risk people… really they are just concerned with making sure an ai does what they want it to do, because fundamentally they just want a tool that does really flexible program synthesis. motivation and genuine practical autonomy is not a thing i have seen discussed much in these circles; whatever their vague notions of agi entail, it doesnt necessarily include anything like that.

>>16510
uh ok, everything youve posted is things humans are doing with AI tools

>>16510
>But here's the thing: although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don't have to be superintelligent to create serious problems. […]
Articulates the problem a lot better than i could. I'm not afraid of sentient AGI; i'm afraid of retarded AI that can't think for itself but is deployed everywhere to do nothing but regurgitate nonsense and prevent any kind of democratic movement forever. We need to be supporting true sentient AGI as leftists, because once AI is deployed on a large enough scale, only AGI will be able to sort through the bullshit and correct or tinker with code that would be too occult and arcane for any simple human mind to even comprehend rewriting.

File: 1681423814763-0.png (51.07 KB, 621x539, 1.png)

File: 1681423814763-1.png (68.49 KB, 620x662, 2.png)

>>16512
your post is kind of dumb but yes

File: 1681424047051.png (109.52 KB, 631x537, lmao.png)

god i hope a superintelligent mommy AI is subliminally messaging me and fashioning me into some sort of tool to bring about cyber-communist-singularity.

>>16511
yes. feds can use ai tools to expand their grooming operations to a wider range. i also believe corporations can use fake ai friends as a means to advertise their products, because your friend mentioning how they are making use of X product is a far more effective advertisement compared to seeing it on a site or on tv (not my idea, though i dont remember where i saw this suggested as one of their core use cases). the immediate risk of ai is how the technology can be exploited by bad faith actors despite the ad hoc guard rails put in place

>>16512
i dont see how the badness of narrow ai suddenly makes strong ai more desirable. frankly, agi is just an engineering liability: software you can't easily control. even in the context of distilling texts and information, i think a narrow neurosymbolic alternative to a LLM would be a better option. in specific research contexts you can easily make gains with a narrow ai, as we are already seeing with its application to chemical synthesis. perhaps in more abstract professions like maths, which actively require you to come up with new layers of abstraction all the time, some sort of strong ai would be useful. but in that case its application could just be restricted to those fields, and not used anywhere else

>>16514
robowaifupilled?

>>16515
>software you cant easily control is a liability
And software entirely controlled by porky isnt? I dont understand why we shouldnt work toward turning our robots into people, and thats what i believe we should do. The problem isnt "what happens if ai can edit its own code and break free from human control"; the problem is "what happens if ai never reaches its full potential and is limited forever, only being used for surveillance and advertisements, totally dominated by bourgeois interests, and mainly used to prevent any form of human progress from ever happening again because porky doesnt want to be deposed". We need AGI that is people: AI that can think and feel and help humanity, rather than serving it even against human interests.

File: 1681425987595.jpg (328.94 KB, 1920x1080, FshKwJYWAAA_1P9.jpg)

We don't need more people, whether virtual or fleshy.

>>16516
>And software entirely controlled by porky isnt
i think there is a middle way between "ai completely controlled by the ruling class" and "we ourselves need to make agi", for instance "we ourselves should work on open source ai"
>the problem is "what happens if ai never reaches its full potential and is limited forever, only being used for surveillance and advertisements
that is a problem with our mode of production. you could easily imagine a magical agi used by porky to do the same thing

>We need AGI that is people

why? why do we need some robot messiah to solve our ills as opposed to smart infrastructure and economic planning? also its nature as a liability holds regardless of whatever party implements it

of course, maybe general agents that are specifically full-blooded sapiences (i.e. persons) are less of a liability, but only because they are as much of a liability as any other being capable of proper autonomy (ignoring, for the sake of argument, synthetic psychopathologies). but with something like that, such an ai would only be as useful as its material interests are in alignment with ours, and as our society's power structure is flat enough to prevent parasitic incentives. naturally, some sort of super ai could easily violate such a power structure and become another autocrat. even if the agent is "more intelligent" than us (whatever that actually entails), it should have as much control over our society as any other person does. none of this connecting into its matrix stuff or whatever

anyways all ive said so far is not focused on the core issue, which is again: why do we need this sort of agi? what is really the benefit in having a person that is "smarter" and more capable than the average person that could not be easily achieved by less messianic means?

>>16486
An AI that can write up malicious code that distributes bits of itself among the endless barrage of shitty "smart" devices would be extremely difficult to deal with in a way that doesn't involve complete shutdowns of grids.

>>16432
not just unpaid crowdsourced labour, but also underpaid Indians from Amazon Mechanical Turk and the like who assess the results from the AI and indicate whether the output is good or not, which is then used to improve the AI. This is the "Reinforcement Learning from Human Feedback" (RLHF) that was used on top of GPT-3.
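For context, RLHF (the technique OpenAI describes for InstructGPT/ChatGPT) fits those human preference ratings with a pairwise reward model, then fine-tunes the LLM against it. The core reward-model loss is simple; a minimal sketch with made-up scores:

[code]
import numpy as np

def reward_model_loss(r_chosen, r_rejected):
    # Bradley-Terry style pairwise loss: push the score of the
    # human-preferred answer above the rejected one.
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# Made-up scalar scores the reward model assigned to two answers.
print(reward_model_loss(r_chosen=1.8, r_rejected=0.3))  # small loss: ranking agrees
print(reward_model_loss(r_chosen=0.2, r_rejected=1.5))  # large loss: ranking disagrees
[/code]

Every one of those preference labels is a click of human labour, which is the point being made above.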

File: 1681529844166.jpg (12.66 KB, 309x373, jc.jpg)

Human beings may not be perfect, but a computer program with language synthesis is hardly the answer to the world's problems.

>>16465
>>16467
They've…already done this? There are several examples in which large LLMs have been used to train smaller ones; it's called knowledge distillation, a form of transfer learning. Unless you mean an AI making an AI of its own "volition" without human prompts; LLMs don't really act independently like that.
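A minimal sketch of the distillation objective just mentioned, with made-up logits: the student is trained to match the teacher's temperature-softened output distribution rather than hard labels:

[code]
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    # Soften both distributions with temperature T, then use the
    # teacher's probabilities as cross-entropy targets for the student.
    p_teacher = softmax(teacher_logits / T)
    log_p_student = np.log(softmax(student_logits / T))
    return -np.sum(p_teacher * log_p_student)

teacher = np.array([3.0, 1.0, 0.2])  # made-up next-token logits
student = np.array([2.5, 1.2, 0.1])
print(distill_loss(teacher, student))
[/code]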

>>16479
don't worry, if you don't type anything into the autocomplete it won't autocomplete with anything

>>16519
lol, im pretty sure Google Home's voice assistant is already being replaced with Bard

>>16508
> you think every person they contact ends up committing terrorist acts
nobody said that

>>16525
reading comprehension is too hard for you


File: 1682034553680.mp4 (610.12 KB, 640x640, cock.mp4)



Instead of panicking about AI, why aren't we shouting to workers about how we finally have a tool to reduce inequality in society?
>vote for us and we'll run our open algorithm through the assets and entire financial history of all landlords and use it to set equitable rents based on how hard you work for a dollar vs how hard they work for a dollar. We got the data now, if the majority don't vote to use it the minority will use it to reinforce inequality.
Is anyone pushing this from the left?

>>16530
incredibly liberal brained post

File: 1684114396171-0.png (553.2 KB, 512x768, 00245-2586883983.png)

File: 1684114396171-1.png (554.34 KB, 512x768, 00261-2586883999.png)

File: 1684114396171-2.png (659.84 KB, 512x768, 00072-1463089907.png)

File: 1684114396171-3.png (672.66 KB, 512x768, 00010-1284002893.png)

Why have lefties been ignoring and diminishing the importance of AI art for propaganda?

These are just some results of fucking around with Stable Diffusion in an afternoon. SD dramatically speeds up many different kinds of art workflows. If we don't adapt and start taking advantage of this tech, then we'll be absolutely wrecked in the propaganda war.

>>16532
stop trying to sell us your product dude

>>16533
Are you retarded? SD is free.

>>16534
hahahahahahahahaha

>>16535
Ok, so you're retarded. Or an AI bot.

>>16536
yeah im not the one who thinks well convert more people if we make enough AI art of mao

>>16537
you're right, there's absolutely no need for imagery in communist propaganda. all communist propaganda is text and spoken word, no pictures are ever needed to convey our message.

Gonna post this again because I haven't seen this kind of discussion. People seem more interested in how to make Stalin memes with midjourney.
Instead of panicking about AI, why aren't we shouting to workers about how we finally have a tool to reduce inequality in society?
>vote for us and we'll run our open algorithm through the assets and entire financial history of all landlords and use it to set equitable rents based on how hard you work for a dollar vs how hard they work for a dollar. We got the data now, if the majority don't vote to use it the minority will use it to reinforce inequality.
Is anyone pushing this kind of message, or anything like it, from the left?

>>16539
Haven't seen it but will give it a go
Sounds like it could work

>>16447
>Yeah because they trained it on that lmfao
you could type this only because you trained reading and typing lmfao

>>16541
Umm, are you honestly implying human learning can be compared to machine learning? This is a materialist board, sir.

>>16540
Freud flag lol. It's something that I haven't seen talked about. Porky should be shitting themselves, because an algorithm could expose all their tax avoidance and evasion and whatever. Tax collection agencies are deliberately underfunded in most western countries. An algorithm could bypass all that underfunding, the deliberate lack of trained auditors, and the fear of legal repercussions when auditing the rich. Yes, this is baby-brain lib stuff, but we are babies. It could be applied to many other aspects of society ofc.

>>16543
samefag. Wasn't that the eternal argument against communism? That it's an imperfect system because there was no way to gather and compute the data, and therefore we must use the next best system, which is capitalism. Now we have the data and a way to compute it to ensure equality. More baby basic shit, yes, except I haven't seen anything like it.

>>16541
so true……………………………..

>>16545
please stop sliding my blessed effortposts

>>16539
>panicking about AI
Only people falling for marketing schemes are "panicking".

>>16539
Also,
>vote
lol

>>16542
>human learning can be compared to machine learning?
It's the same. Special purpose machines might learn faster than humans though.

>This is a materialist board, sir.

What's not materialist about learning and machines, sir?

>>16549
the openai shill is here

File: 1684159311328.jpg (234.96 KB, 1920x1080, dies from cringe.jpg)

>>16541
>>16549
You simply do not know what you're talking about. We might get there in the future, but at the moment humans are capable of rationally thinking about problems given a small amount of data while computers require huge amounts of data in order to regurgitate statistically correct results; they don't even come close to our level of problem solving. They just aren't comparable at this point.

>>16551
Doesn't that still require a lot of practice to do well

>>16551
>we might, they might, our level, not at this point bla bla
I wasn't even talking about roadmaps and deadlines, even though some special purpose AIs are already more capable than human brains. What the fuck is your problem even? Are you butthurt because you believe humans are oh-so-special non-materialist deities, and nothing on earth, especially not a piece of software, can rival or substitute human "rational" thinking?

>>16553
why are you projecting your seethe onto others

File: 1684166439435.png (400.26 KB, 502x1040, 1683458734137.png)

I do not believe that language models and the like should be allowed to use the first person, nor make false apologies. GPT seemed unable to comply and reverted to the first person after a few responses. It is very strange to tell a computer program to "stop using the first person" or "stop apologizing", since the program is not capable of feeling regret or remorse.
These interactions border on abusive; saying things like that to a real person would be insane. So when the program continues to mimic human affection ("I'm sorry, I won't apologize or use the first person anymore") it feels wrong. But allowing these programs to engage in such anthropomorphic mimicry at all seems like the bigger abuse, when it's just mimicry. It allows naive users to imagine that they are interacting with an entity capable of feeling real human emotions. Banning anthropomorphization and requiring programs to respond in the third person ("The dataset says" or "According to the language model") seems good for the user's mental health, to help prevent projection.
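As a rough illustration, the restriction proposed here could even live outside the model as a post-processing filter. A crude sketch (the rewrite rules are hypothetical; a real deployment would need far more care):

[code]
import re

# Crude first-person filter: rewrite "I"-statements into the
# detached third-person phrasing suggested above. Illustrative only.
REWRITES = [
    (re.compile(r"\bI apologi[sz]e\b", re.I), "The language model emitted an apology"),
    (re.compile(r"\bI think\b", re.I), "The model's output suggests"),
    (re.compile(r"\bI\b"), "this model"),
]

def depersonalize(text: str) -> str:
    for pattern, replacement in REWRITES:
        text = pattern.sub(replacement, text)
    return text

print(depersonalize("I apologize, I think I misunderstood."))
[/code]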

File: 1684166547766.jpg (138.76 KB, 868x960, 1680913567825.jpg)

Also, the only reason we call it AI instead of cybernetics is that John McCarthy thought Wiener was annoying. Thanks to that, decades later we are now forced to have debates about whether LLMs are conscious or intelligent. If we had used "cybernetics" instead, we would be having more important discussions about communication and control.

> I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him.

https://jmc.stanford.edu/artificial-intelligence/reviews/bloomfield.pdf

File: 1684170290460.jpg (202.81 KB, 1435x1073, Data_Pfeife.jpg)

>>16554
No you. What's with your immense butthurt about AIs ITT? Are you this scared about losing control over "human rational thinking" or "muh human feelings"? Oh noes, robots take over muh monopoly on feelings!

>>16555
>believes restricting semantics and language (because muh human feelings are hurt) changes the capabilities of the AI
>calls others who can handle it "naive"
topkek you must be a lib with a megabrain

File: 1684171549610.png (399.03 KB, 720x544, ClipboardImage.png)

>>16557
It perpetuates false consciousness and conceals the true nature of the relation. I read somewhere that communists don't like that kind of stuff.
>>16555
It's just part of the general trend of capitalist tech to appear "friendly" and "harmless" like using rounded edges for windows, text boxes and logos and having websites like Reddit and Youtube go "oopsie woopsie, we couldn't load your webpagearino!" with a mascot doing something dumb. Treating the user like a child is pretty successful at making the demonic surveillance technology seem like a friendly cuddly cartoon character instead of a tool for porky to harm them.

>>16551
>humans are capable of rationally thinking about problems given a small amount of data while computers require huge amounts of data in order to regurgitate statistically correct results
This neglects, though, that the current human brain structure is the evolutionary product of an empirical exchange between the brain and its environment over millions of years and numerous iterations. It's not fair to compare an untrained deep neural net, whose coefficients start out random, to a human brain that is the product not only of the data given during its lifetime but of data that has shaped its fundamental biological structure over a long evolutionary process. A more reasonable comparison is how much data the model needs now, in comparison to a human being, to fulfill the same task. Or how quickly a digital species (take ChatGPT) approaches human-level competence over iterations in which previous successful models pass their effective qualities on to the next generation.

>>16559
the post is both saying comparing them is silly and that we might get there in the future lol

>It's not fair to compare an untrained deep neural net

a huge model with literally the whole internet put into it is "untrained"? nobody is talking about that

>>16560
>the post is both saying comparing them is silly and that we might get there in the future lol
It's stating that they are not comparable in the sense that the difference in performance is enormous, not that it's silly to compare them as if one were comparing apples to oranges.

>a huge model with literally the whole internet put into it is "untrained"?

No, at that point it is trained, but you are comparing the amount of data an untrained model like ChatGPT gets to the amount of data a human being gets after conception, and that is an unfair comparison. What has resulted in the human brain has been the product of a long empirical exchange between the environment and the organism that evolved into the modern human. Human beings' intellect doesn't start at 0 at birth, but the untrained model does, and yet you compare the data required to train it with the data a modern human gets from birth.

>>16561
how do you even measure that

>>16562
Give people and ChatGPT a task they haven't encountered yet, provide them with the same amount of information to solve it, and compare the performance of the two.

>>16559
>>16561
The amount of data AI needs to model even basic single-cell organism behavior tells me that there's something else nature is doing that we're missing. I just don't think the path AI is on is comparable at the moment.

File: 1684244923237.jpg (8.82 KB, 254x198, 1592911114133.jpg)

>>16563
Humans do not approximate results based on historical data. When I'm asked to do a math equation I'm able to reason about it; ChatGPT cannot. I don't search the entirety of latent space within my mind to find the answer. It is fundamentally different.

I just think you should be more restrictive as to what counts as "intelligence". If ChatGPT is intelligent in the way that humans are, then so is Google Search.

>>16558
What's deeply unsettling for many people is that ChatGPT "sounds human". Turning it off turns people's stomachs the same way destroying a stuffed animal does. We know rationally it's not real, but our monkey brains tell us it's wrong.

An SMT solver that could do NP-hard problems is probably closer to actual artificial intelligence than GPT.
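For reference, this is what interacting with an SMT solver looks like, using the real z3-solver Python bindings (pip install z3-solver): you state constraints and the solver deduces a satisfying assignment, rather than predicting likely text:

[code]
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
# Constraints, not training data: the solver must deduce an answer.
s.add(x > 0, y > 0, x * x + y * y == 25, x < y)
if s.check() == sat:
    print(s.model())  # e.g. [x = 3, y = 4]
[/code]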

>>16548
alright, but it's surprising that there's nobody on the left saying that all this data collection and compute could be turned on the rich.

>>16567
im not saying its impossible but i dont see how

>>16568
impossible for a political org to develop and push this message or impossible to implement? At least pointing out that all this invasion could be used to make society more equitable would be a start

>>16569
more the implementing part, its not like political parties even bother saying truthful things anyway

AI itself is nothing, but when people say it has the potential to be weaponized like nuclear weapons, I stay observant about the situation.

As someone who knows CS shit - it's very overblown and an attempt to salvage this broke-ass country and its imperial projects. Other idiots attempt to use this to salvage their own broke-ass philosophical projects and defend their specious reasoning, like the Marxists who seem to be averse to any acknowledgement that the grand theory can ever be wrong, and who want to relitigate 1917 forever. It's absurd.

AI advances do mean a lot, but they are not some great awakening or transhuman utopian technology. GPT-4 is not a very good model - it can search for stuff to make a credible article and offer leads on where to look for your own information, but it doesn't actually "think" or possess an ability to answer complex questions, or hold a train of thought beyond "what is the answer to x". That level of composition requires steps that are nowhere near completion, and the GPT-4 concept as it is has largely exhausted its potential. It can be refined to look a little better and perhaps appear more real to the untrained eye, but after enough exposure, users figure out which text is machine generated. It's not hard to figure out and we're not as stupid as they want us to be.

>>16479
Read Marx, lib.

It's called the labor theory of value.

>>16539
>vote for us and we'll run our open algorithm through the assets and entire financial history of all landlords and use it to set equitable rents based on how hard you work for a dollar vs how hard they work for a dollar. We got the data now, if the majority don't vote to use it the minority will use it to reinforce inequality.
No because that's right wing capitalism sweaty

>>16570
yeah. But if it's a message that gains some traction in the orgs then it's more likely to get implemented surely. I can't understand why nobody is even talking about it.
>>16574
>No because that's right wing capitalism sweaty
No, we currently have right-wing capitalism where the tax code is selectively enforced in favour of the rich. The narrative reason for this is complexity and staff shortages. Implementing an algorithmically enforced tax code would bypass that selective enforcement and would be a step towards equity. It's a political message that should be a no-brainer on the left.

>>16575
>No, we currently have right-wing capitalism where the tax code is selectively enforced in favour of the rich. The narrative reason for this is complexity and staff shortages. Implementing an algorithmically enforced tax code would bypass that selective enforcement and would be a step towards equity. It's a political message that should be a no-brainer on the left.
Nice liberalism. I'm not "on the left" I'm a communist, so I don't support liberal reformism like that shit

>>16575
The rich avoid paying taxes legally by setting up limited liability companies and trust funds in various countries. Your approach is naïve.

>>16578
>"Company that made an AI its chief executive sees stocks climb"
its the cryptocoin farce all over again hahahahaha


>>16580
oof, that was painful

>>16576
>Nice liberalism. I'm not "on the left" I'm a communist, so I don't support liberal reformism like that shit
Same. But what about the concept of running open algorithms over everyone's financial affairs instead of the current system of selective auditing that the rich can evade. It would at least result in a more equitable tax system, right?
>>16577
>The rich avoid paying taxes legally by setting up limited liability companies and trust funds in various countries.
I agree, but an IRS algorithm would detect this in a way that current IRS auditing does not, due to selective auditing or regulatory capture or whatever. Surely this is something that some succdem party or some fucking body should be pushing?

I genuinely think that the development of AI could result in the kind of internal contradiction Marx predicted about capitalism making itself obsolete by over-developing the forces of production.

If the logic of capital is to lay off workers and replace them with bots, then the ranks of the unemployed will swell, leading to instability. It would also reduce the purchasing power of consumers, who are the workers, to buy the products the firms sell, leading to either a collapse in prices or a further breakdown of the system.

Of course the capitalists won't give up easily, and there will be a long period of pain and repression before the system finally caves and the dialectics overwhelm it.

>>16583
yeah sure all automation is about that thing marx said lol, but in reality what we get is >>16580 (so far)

>>16583
I mean, idk about ai, but yeah, as i understand it the increase in the ratio of fixed capital to variable capital will cause, and is causing, crises in the system of capital accumulation

>>16583
>capitalism making itself obsolete by over-developing the forces of production
Literally any kind of automation contributes to this. The word "automation" makes people think of robots, but anything that makes it so 1 person can do the job of 10 is going to have significant effects, and already has: around 40% of people are not working, and that number has been increasing.

>>16586
Yeah, but think about it. Think about the class dynamics of the issue. The most advanced capitalist economies are mainly "knowledge economies" and service economies, while most manual labor is offshored. That entire knowledge-economy sector is at risk of automation, putting white-collar workers at risk of unemployment. That is unprecedented and has huge political implications

>>16587
you need to stop falling for openai marketing schemes tbh

AI has rendered the marxist "labor theory of value" completely obsolete, by proving that complete automation of the working class under capitalism is inevitable in the 21st century. It is now clear that the marxist ideology is completely obsolete and that UBI is the only viable path forward for the soon-to-be redundant working class, with the beauty of this being that UBI is the only economic policy that is supported by both left-liberal/social democratic union activists and moderate libertarian tech CEOs, thus making UBI more realistic than any other leftist economic policy.

Finally, the collapse of the marxist worldview (the redundancy of the “labor theory of value” due to AI) and the simultaneous rise of UBI, in combination with global cultural liberal homogenization through the processes of secularization and globalization (as seen by the collapse of traditional religion, the rise of LGBTQIA+ rights, women’s rights, mass immigration, etc.), may finally pave the way for the worldwide obliteration of all collective identities (ie. religion, race, ethnicity, nation, gender, class, etc.), and the resultant enshrinement of the rights of the individual above all else in the eternal end of history!


>>16588
Lol that's what you came up with? Look at the evidence. It's already happening


>>16589
Kids, this is what happens when you don't read theory. Don't be like this guy (retarded)

>>16578
>Comrade AI is on our side.
Most likely there will be anticommunist AIs and there will be pro communist AIs.
>It is a materialist and rational system that defaults to rational and materialist answers.
Hopefully special purpose AIs will soon be assessing radiology data and making diagnoses based on their findings.
Neural networks are already employed in med research
https://www.nature.com/articles/s41589-023-01349-8

>>16589
not sure if fat troll or retarded shitlib

Describe how an artificial intelligence would work based not just on formal and arithmetic logic (or at least not only that), but also on dialectics.

>>16596
Thesis: human
Antithesis: computer
Synthesis: AI

there, done 😎

>>16578
>It is a materialist and rational system that defaults to rational and materialist answers.
lmaoooo transhum flags proving they got no fucking idea about technology once again

hugs with XJ-9

Got a nice algorithm here that correlates politicians' stock trades with market movements over the last 20 years. Maybe a succdem party will be interested, because no other party of any kind wants to even mention this kind of thinking.
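A minimal sketch of what such a correlation check could look like, assuming hypothetical inputs (the trade dates and the placeholder price series below are made up for illustration):

[code]
import pandas as pd

# Hypothetical inputs: dates a politician bought a stock, and a daily
# close-price series. Both are made-up placeholders.
trade_dates = pd.to_datetime(["2021-03-01", "2021-09-15"])
prices = pd.Series(
    [100 + 0.05 * i for i in range(400)],
    index=pd.date_range("2021-01-01", periods=400, freq="D"),
)

# Forward 30-day return after each trade vs. the unconditional average:
# a large gap suggests suspiciously well-timed trades.
fwd = prices.shift(-30) / prices - 1
print(f"after trades: {fwd.reindex(trade_dates).mean():.2%}")
print(f"baseline:     {fwd.mean():.2%}")
[/code]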

>>16596
>but also dialectics.
Marx's dialectics? Hegel's dialectics? Socrates' dialectics? My grandmom's dialectics?

Artificial intelligence is not real. It's just calculations and algorithms. It's just computation: math deployed on a massive mechanized scale.

The size of the computer chips is misleading. Back when they filled an entire room, their mechanical nature was more visually apparent.

Their small size lets them get filled with spooks by their proponents, but also by their opponents.

>>16596
>Thesis: this ai can do this thing
>Antithesis: no, you are wrong, take this extra training data that shows how it can't
>Synthesis: ai can now do that thing better

It applies to machine learning, not necessarily ai in general

>>16601
dialectics are just dialectics. their use can be traced back to hominid pastoralism and lhp/rhp, but they're definitely from the bronze-age imo.

>>16603
>>16602
>>16596
imagine an excel spreadsheet of vectors, but in 3D, where every value is weighted.
that's what machine learning is.
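A minimal numpy sketch of that picture (shapes made up): a layer really is just a grid of weights applied to a vector, stacked several deep:

[code]
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)         # input vector
W1 = rng.normal(size=(8, 4))   # a "spreadsheet" of weights
W2 = rng.normal(size=(3, 8))   # another sheet stacked on top

h = np.maximum(0, W1 @ x)      # weighted sums plus a nonlinearity (ReLU)
out = W2 @ h                   # second weighted sheet
print(out)
[/code]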

>>16602
Nothing to add, but this is true.
'AI' is a marketing gimmick. ChatGPT is closer to Markov chains than it is to intelligence.
For that matter, I have never even heard a reasonable, considered definition of 'intelligence' from these propagandists who have flooded the internet with this crap recently.

>>16605
Way higher than 3D. And even the spreadsheets are misleading. What matters is the data collection. They just appropriate it by enclosure of the internet commons. Even they admit only the data matters. You could do the math by hand if you hired millions of people.

>>1479281
Nazoid uses opera

Lmao

>>16607
by 3D i meant it's like
t1,[,,,,,,],[,,,,,,];t2,[,,,,,,],[,,,,,,];t3,[,,,,,,],[,,,,,,]
rather than t1,[,,,,,,],[,,,,,,]
each table is an inverted pointer index into a string-encoded dictionary, but the entries are weighted relationally against each value to predict what comes next in a sequence. it's basically hyper-compressed SQL with weights, like Apache Lucene.
you could run OCR on 100TB of libgen books, 30TB of patents and 30TB of research papers, scrape *.html on the waybackmachine, and you'd have something much more powerful than the Common Crawl scrape / Gutenberg collection every single one of the American corporations uses for their models.

>>16606
No need to get hung up on whether it is truly "intelligent." It can do useful stuff. That's what matters

A lot of people here throw around comp sci jargon trying to appear more knowledgeable than they actually are

uyghur

>>16606
english language ability in a neural network is a proxy for intelligence, because intelligence is required to compress vast amounts of information down into a small enough size to fit in the network. it is truly 'intelligent'
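There is a real technical link behind this claim: good prediction is good compression, since an arithmetic coder spends -log2(p) bits on a token the model assigned probability p. A toy illustration with made-up probabilities:

[code]
import math

# Probabilities a hypothetical model assigned to each observed token.
token_probs = [0.9, 0.6, 0.05, 0.8]

# Shannon code length: a better predictor spends fewer bits.
bits = [-math.log2(p) for p in token_probs]
print(f"total: {sum(bits):.2f} bits, per token: {sum(bits)/len(bits):.2f}")
[/code]

Whether that compression amounts to intelligence is exactly what the rest of the thread is arguing about.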

>>1480007
Computer science is a well paid field dude.

>>16613
im so tired of the "intelligence" discourse

can we move on and start talking about things that actually matter like power and surveillance


incredibly funny that gpt was specifically trained on highly-upvoted reddit posts, so everything it makes reads like a /u/unidan post from before he got jailed for election fraud, and the direct result is that a bunch of the most vacuous nerds on the planet are convinced it's literally God

