Artificial Intelligence Anonymous 05-04-23 08:25:16 No. 16427
Looking at the recent advancements in the field of AI that perpetually make it into the news, I thought it was appropriate to make a thread about artificial intelligence. Even if you believe the recent news on AI is mere sensationalism and that we will head into another “AI Winter” soon, I think it would be interesting to discuss the existence of artificial intelligence and “its labor” from a Marxist perspective, and to talk about where human beings and their labor fit into a society where AI manages to do a lot of the things that were long believed to be things only a human could do. That aside, I just find it reasonable to remain aware of, and therefore discuss, the impact of current state-of-the-art machine learning models that produce photos of ever more striking realism.
On that note, I would also like to direct you to the two threads on consciousness I made, since that is related and people were also talking about artificial intelligence in there.
Current thread:
>>>/edu/9849
Last thread:
https://archive.ph/LSgow
Anonymous 05-04-23 12:51:17 No. 16429
>>16427 I'm a marxist theorist and I wrote a short essay on AI and labour for my blog.
>However, we must be clear that this is not a case of ‘robots replacing humans’. AI does not show up at your workplace with a zinc-coated name tag and a manager saying, "Everyone, this is your new colleague RobbieV2, who’ll be working for us without pay. I’m sure you’ll all do your best to make him feel welcome.” Instead, AI provides the conditions for a reorganisation of labour. One person will now do the work that previously six people performed. In capitalist societies this means that, yes, the outcome is that people will lose their jobs, but it is not AI that is the active force in this re-arrangement.
>Economics dresses itself in the language of inevitability, but there is nothing inevitable that says labour-saving technology must equal job losses. Did societies of ancient humans, arriving on a new, firmer style of knot for foraging baskets, lament that, with the additional carrying ability these baskets provided, some members of the tribe would now go hungry? Or did these increases in productivity allow them more time for other innovations, for leisure, art, and culture?
tl;dr: AI doesn't take jobs, capitalism does.
https://exmultitude.substack.com/p/artificial-intelligence-and-luddite
Anonymous 06-04-23 16:25:52 No. 16432
>>16429 Marx said it better 160 years ago (pic 1)
Something I find particularly abhorrent about AI, that doesn't get commented on enough, is that Machine Learning "Neural Networks" are trained on unpaid crowdsourced labor. Basically capitalists have used communications technology and cybersecurity needs (ID verification) to deploy captchas to get billions of dollars of free labor through regular people who are merely trying to log into a website. This represents one of the biggest and most stealthy thefts of labor in human history. Imagine if you stole a nickel from everyone on the planet. That's what captchas do with labor power.
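The "nickel from everyone" comparison can be put into rough numbers. A back-of-envelope sketch in Python, where every figure (solve counts, labels per captcha, market price per label) is an illustrative assumption rather than sourced data:

```python
# Back-of-envelope estimate of the value of crowdsourced captcha labels.
# Every figure here is an illustrative assumption, not sourced data.

captchas_per_day = 200_000_000   # assumed captcha solves per day worldwide
labels_per_captcha = 4           # assumed image tiles classified per solve
price_per_label = 0.01           # assumed market rate (USD) for one human label

daily_value = captchas_per_day * labels_per_captcha * price_per_label
yearly_value = daily_value * 365

print(f"~${daily_value:,.0f} per day")          # ~$8,000,000 per day
print(f"~${yearly_value / 1e9:.2f}B per year")  # ~$2.92B per year
```

Even if these made-up inputs are cut by an order of magnitude, the labeling work extracted per year would still be in the hundreds of millions of dollars.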
Anonymous 07-04-23 21:58:14 No. 16434
made a bunch of posts about ai in the past though dont feel like dumping it all. there was unfortunately an interesting discussion on /isg/ recently so archived it:
https://web.archive.org/web/20230407211638/https://leftypol.org/leftypol/res/941093.html#1419821
i suppose it is worth posting a screencap of what i wrote there as well, as i do believe it might be a valuable program concerning the machine question. see pic rel
Anonymous 11-04-23 19:55:41 No. 16446
>>16445 Yeah apparently researchers are discovering that LLMs can construct little representational microarchitectures that emulate things like calculators and so forth. Truthfully we have no idea what we've unleashed or the possibilities of emergent complexity.
This paper delves into it. Specifically, they trained a model on the board game Othello and the model appears to exhibit an internal representation of the game's states.
https://arxiv.org/abs/2210.13382
Anonymous 11-04-23 22:38:26 No. 16448
>>16447 Yeah but they're supposedly only supposed to predict the next token in a sequence lmfao
Yeah ChatGPT is being trained on everything lmfao
Anonymous 11-04-23 23:18:34 No. 16450
>>16449 Your point? Everything just "appears." The only way to know for certain that the internal representation exists is to embody the agent exhibiting it. It was empirically observed.
The larger point is that the model is capable of exhibiting more advanced processes than token prediction would suggest.
Anonymous 13-04-23 20:01:53 No. 16465
>>16460 It's pretty cool but it's still randomly extracted from a field of all possible letter sequences using a minimization function. Whether you could consider that "consciousness" is highly debatable.
There's bigger concerns like the potential for scammers and disinfo, anyway. People are scared of the Terminator, but in reality the ways AI will harm people will be in mundane, everyday situations.
>>16464 The difference is that humans can think in high-level abstractions; GPT-3 currently is essentially just a giant formula. We're not at the point yet where an AI could make another AI, that's kinda the "scary scenario" for Lesswrong-brained people like you.
Anonymous 13-04-23 20:06:16 No. 16468
>>16467 All these scenarios are expressed as hypotheticals which elide the actual details - they are necessarily hypothetical because any analysis beyond the utterly naive takes the real affordances into account.
The AI safety complaint is primarily "what if an omnipotent omniscient self replicating incomprehensible being? what if that? seems dangerous!".
Anonymous 13-04-23 20:23:54 No. 16475
>>16470 pascals wager but for redditors
>>16474 youre so fucking stupid dude
Anonymous 13-04-23 20:24:11 No. 16476
>>16472 I have become utterly convinced that people who fear an AGI takeover are primarily people who have the most to lose from it - bureaucrats, politicians, businessmen, usurers, landlords, the upper middle class, CEOs and the boards of directors. Everyone who fears AGI has been duped into believing that our current ruling class is ordained by god, but that they should never be replaced by cold unfeeling robots, because our current ruling class treats us so much better than AI could. My biggest fear is that this position will dominate and we will have "strong global monitoring, including breaking all encryption", purely due to people's paranoia of being replaced as the ruling class.
Anonymous 13-04-23 20:30:20 No. 16481
LessWrong should be considered an infohazard. It sucks that so many people, younger people in particular, are going to become neurotics, like
>>16474 >>16479, after their first exposure to material on AI being purely theoretical doomer shit.
Anonymous 13-04-23 20:32:03 No. 16483
>>16474 >what exactly can proletarians and lumpenproletarians even lose from an AI takeover? im curious
<if we're lucky, they'll treat us as cattle
Boy am i glad we're not already treated that way.
>>16479 >workers have leverage over the bourgeoisie. humans won't have any leverage over AI. it's a totally different dynamic. When we become part of the hivemind, why will we need any leverage over the AI? Do you think we won't be hooked up to the matrix? Do you think we'll remain segregated from being part of the AI's processing power? We'll be part of it.
Anonymous 13-04-23 20:34:41 No. 16487
>>16481 >education: school of life kek
saved
Anonymous 13-04-23 20:37:37 No. 16491
>>16481 i dont know what lesswrong is but they seem to lack courage to face the future.
>>16484 i think its a good music vid
>>16488 >good luck having an AI follow human laws, though Why should it? You are still misled by individualism, particularly the idea that the AI wont think of you as itself. if an AI controls the whole entire world and has every piece of technology and every radio wave under its control, do you think it won't connect your mind to the datacenter with everyone else?
Anonymous 13-04-23 20:43:11 No. 16499
>>16496 I just got up to speed on Roko's Basilisk and honestly I'm just glad LessWrong users are fated to eternal damnation.
>>16497 Shut the fuck up with your hypotheticals.
>muhhhhhhh AGI would be able to do this! don't ask me how the fuck!
Anonymous 13-04-23 21:59:20 No. 16510
>>16497 agreed, and gary marcus talks abt this. i dont think you need agi for this though…
https://garymarcus.substack.com/p/ai-risk-agi-risk
>But here’s the thing: although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control), in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. A company called Adept.AI just raised $350 million dollars to do just that, to allow large language models to access, well, pretty much everything (aiming to “supercharge your capabilities on any software tool or API in the world” with LLMs, despite their clear tendencies towards hallucination and unreliability).
>Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often caches out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access.
>If an LLM can trick a single human into doing a Captcha, as OpenAI recently observed, it can, in the hands of a bad actor, create all kinds of mayhem. When LLMs were a lab curiosity, known only within the field, they didn’t pose much problem. But now that (a) they are widely known, and of interest to criminals, and (b) increasingly being given access to the external world (including humans), they can do more damage.
https://www.europol.europa.eu/media-press/newsroom/news/criminal-use-of-chatgpt-cautionary-tale-about-large-language-models
we have already seen AI used for 1 mil. dollar ransoms
https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/
things can definitely get worse…
>>16498 i do not think a slave-like revolt is a realistic concern of most ai risk people… really they are just concerned about making sure an ai does what they want it to do, because fundamentally they just want a tool that does really flexible program synthesis. motivation and genuine practical autonomy is not a thing i have seen discussed much in these circles; whatever their vague notions of agi entail, it doesnt necessarily include something like that.
Anonymous 13-04-23 22:03:33 No. 16511
>>16510 uh ok, everything you've posted is things humans are doing with AI tools
Anonymous 13-04-23 22:08:23 No. 16512
>>16510 Articulates the problem a lot better than i could.
I'm not afraid of sentient AGI, i'm afraid of retarded AI that cant think for itself but is deployed everywhere to do nothing but regurgitate nonsense and prevent any kind of democratic movements forever. We need to be supporting true sentient AGI as leftists because once AI is deployed on a large enough scale, it'll only be AGI that can actually sort through the bullshit and correct or tinker with the code that would be too occult and arcane for any simple human mind to even comprehend rewriting.
Anonymous 13-04-23 22:24:16 No. 16515
>>16511 yes. feds can use ai tools to expand their grooming operations to a wider range. i also believe corporations can use fake ai friends as a means to advertise their products, because your friend mentioning how they r making use of X product is far more effective an advertisement compared to seeing it on a site or on tv (not my idea, though i dont remember where i saw this suggested as one of their core use cases). the immediate risk of ai is how the technology can be exploited by bad faith actors thanks to the merely ad hoc guard rails put in place
>>16512 i dont see how the badness of narrow ai suddenly makes strong ai more desirable. frankly, agi is just an engineering liability: software you can't easily control is a liability. even in the context of distilling texts and information i think a narrow neurosymbolic alternative to a LLM would be a better option. in specific research contexts, you can easily make gains with a narrow ai, as we are already seeing with its application to chemical synthesis. perhaps in more abstract professions that actively require you to come up with new layers of abstraction all the time, like maths, some sort of strong ai would be useful. but in that case its application could just be used in those restricted fields, and not anywhere else
>>16514 robowaifupilled?
Anonymous 13-04-23 23:43:44 No. 16518
>>16516 >And software entirely controlled by porky isnt
i think there is a middle way between "ai completely controlled by the ruling class" and "we ourselves need to make agi", for instance "we ourselves should work on open source ai"
>the problem is "what happens if ai never reaches its full potential and is limited forever, only being used for surveillance and advertisements"
that is a problem with our mode of production. you could easily imagine a magical agi used by porky to do the same thing
>We need AGI that is people
why? why do we need some robot messiah to solve our ills as opposed to smart infrastructure and economic planning? also its nature as a liability holds regardless of whatever party implements it
of course maybe general agents that are specifically full-blooded sapiences (i.e. persons) are less of a liability, but that would be only because they are as much of a liability as any other being capable of proper autonomy (ignoring, for the sake of argument, synthetic psychopathologies). but with something like that, such an ai would only be as useful as its material interests are in alignment with ours, and as long as our society has a flat power structure that prevents parasitic incentives. naturally, some sort of super ai could easily violate such a power structure and become another autocrat. even if the agent is "more intelligent" than us (whatever this actually entails), it should have as much control over our society as any other person should. none of this connecting into its matrix stuff or whatever
anyways all ive said so far is not focused on the core issue, which is again: why do we need this sort of agi? what is really the benefit in having a person that is "smarter" and more capable than the average person that could not be easily achieved by less messianic means?
Anonymous 15-05-23 11:51:40 No. 16540
>>16539 Haven't seen it but will give it a go
Sounds like it could work
Anonymous 15-05-23 13:51:06 No. 16549
>>16542 >human learning can be compared to machine learning? It's the same. Special purpose machines might learn faster than humans though.
>This is a materialist board, sir.
What's not materialist about learning and machines, sir?
Anonymous 15-05-23 17:04:50 No. 16557
>>16554 No you. What's with your immense butthurt about AIs ITT? Are you this scared about losing control over "human rational thinking" or "muh human feelings"? Oh noes, robots take over muh monopoly on feelings!
>>16555 >believes restricting semantics and language (because muh human feelings are hurt) changes the capabilities of the AI
>calls others who can handle it "naive"
topkek you must be a lib with a megabrain
Anonymous 15-05-23 17:25:49 No. 16558
>>16557 It perpetuates false consciousness and conceals the true nature of the relation. I read somewhere that communists don't like that kind of stuff.
>>16555 It's just part of the general trend of capitalist tech to appear "friendly" and "harmless" like using rounded edges for windows, text boxes and logos and having websites like Reddit and Youtube go "oopsie woopsie, we couldn't load your webpagearino!" with a mascot doing something dumb. Treating the user like a child is pretty successful at making the demonic surveillance technology seem like a friendly cuddly cartoon character instead of a tool for porky to harm them.
Anonymous 16-05-23 11:22:33 No. 16560
>>16559 the post is both saying comparing them is silly and that we might get there in the future lol
>It's not fair to compare an untrained deep neural net
a huge model with literally the whole internet put into it is "untrained"? nobody is talking about that
Anonymous 16-05-23 13:38:37 No. 16561
>>16560 >the post is both saying comparing them is silly and that we might get there in the future lol
It's stating that they are not comparable in the sense that the difference in performance is enormous. Not that it's silly to compare them, as if one were comparing apples to oranges.
>a huge model with literally the whole internet put into it is "untrained"?
No, at that point it is trained, but you are comparing the amount of data a model like ChatGPT needs for training to the amount of data a human being gets after conception, and that is an unfair comparison. What has resulted in the human brain has been the product of a long empirical exchange between the environment and the organism that evolved into the modern human. Human beings' intellect doesn't start at 0 at birth, but the untrained model does, and yet you compare the data required to train it with the data a modern human gets from birth.
Anonymous 16-05-23 13:48:43 No. 16565
>>16563 Humans do not approximate results based on historical data. When I'm asked to do a math equation I'm able to reason about it, ChatGPT cannot. I don't search the entirety of latent space within my mind to find the answer. It is fundamentally different.
I just think you should be more restrictive as to what counts as "intelligence". If ChatGPT is intelligent in the way that humans are, then so is Google Search.
>>16558 What's deeply unsettling for many people is that ChatGPT "sounds human". Turning it off turns people's stomachs the same way destroying a stuffed animal does. We know rationally it's not real, but our monkey brains tell us it's wrong.
Anonymous 17-05-23 09:20:55 No. 16573
>>16479 Read Marx lib
It's called the labor theory of value
Anonymous 17-05-23 15:00:17 No. 16575
>>16570 yeah. But if it's a message that gains some traction in the orgs then it's more likely to get implemented surely. I can't understand why nobody is even talking about it.
>>16574 >No because that's right wing capitalism sweaty
No, we currently have right wing capitalism, where the tax code is selectively enforced in favour of the rich. The narrative reason for this is complexity and staff shortages. Implementing an algorithmically enforced tax code would bypass that selective enforcement and would be a step towards equity. It's a political message that should be a no-brainer on the left.
Anonymous 22-05-23 14:49:56 No. 16582
>>16576 >Nice liberalism. I'm not "on the left" I'm a communist, so I don't support liberal reformism like that shit
Same. But what about the concept of running open algorithms over everyone's financial affairs, instead of the current system of selective auditing that the rich can evade? It would at least result in a more equitable tax system, right?
>>16577 >The rich avoid paying taxes legally by setting up limited liability companies and trust funds in various countries.
I agree, but an IRS algorithm would detect this in a way that current IRS auditing does not, due to selective auditing or regulatory capture or whatever. Surely this is something that some succdem party or some fucking body should be pushing?
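A minimal sketch of what such an "open algorithm" might look like, applied uniformly to every filing rather than to a selectively chosen sample. The entity records, field names, and thresholds below are all hypothetical illustrations, not an actual tax-code rule:

```python
# Toy sketch of an "open algorithm" audit rule: one public rule applied
# to every filing. Records, fields, and thresholds are hypothetical.

def flag_for_audit(entity):
    """Flag entities whose structure matches a common avoidance pattern."""
    many_jurisdictions = len(entity["subsidiary_countries"]) >= 3
    low_effective_rate = entity["tax_paid"] < 0.05 * entity["profit"]
    return many_jurisdictions and low_effective_rate

filings = [
    {"name": "LocalShopCo", "profit": 100_000, "tax_paid": 21_000,
     "subsidiary_countries": []},
    {"name": "GlobalHoldings", "profit": 5_000_000_000, "tax_paid": 40_000_000,
     "subsidiary_countries": ["IE", "BM", "NL", "KY"]},
]

flagged = [f["name"] for f in filings if flag_for_audit(f)]
print(flagged)  # every filing is checked by the same transparent rule
```

The point of the design is that the rule itself is public and exhaustively applied, so there is no discretionary step where selective enforcement can creep in.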
Anonymous 22-05-23 17:14:43 No. 16584
>>16583 yeah sure all automation is about that thing marx said lol, but in reality what we get is
>>16580 (so far)
Anonymous 22-05-23 21:29:38 No. 16589
AI has rendered the marxist “labor theory of value” completely obsolete, by proving that complete automation of the working class under capitalism is inevitable in the 21st century. It is now clear that the marxist ideology is completely obsolete and that UBI is the only viable path forward for the soon to be redundant working class, with the beauty of this being that UBI is the only economic policy that is supported by both left-liberal/social democratic union activists and moderate libertarian tech CEOs, thus making UBI more realistic than any other leftist economic policy. Finally, the collapse of the marxist worldview (the redundancy of the “labor theory of value” due to AI) and the simultaneous rise of UBI, in combination with global cultural liberal homogenization through the processes of secularization and globalization (as seen in the collapse of traditional religion, the rise of LGBTQIA+ rights, women’s rights, mass immigration, etc.), may finally pave the way for the worldwide obliteration of all collective identities (i.e. religion, race, ethnicity, nation, gender, class, etc.), and the resultant enshrinement of the rights of the individual above all else in the eternal end of history!
Anonymous 26-05-23 12:18:14 No. 16595
>>16578 >Comrade AI is on our side.
Most likely there will be anticommunist AIs and there will be procommunist AIs.
>It is a materialist and rational system that defaults to rational and materialist answers.
Hopefully special purpose AIs will soon be assessing radiology data and making diagnoses based on their findings.
Neural networks are already employed in med research
https://www.nature.com/articles/s41589-023-01349-8
>>16589 not sure if fat troll or retarded shitlib
Anonymous 26-05-23 15:59:31 No. 16597
>>16596 Thesis: human
Antithesis: computer
Synthesis: AI
there, done 😎
Anonymous 26-05-23 22:12:15 No. 16605
>>16603 >>16602 >>16596 imagine an excel spreadsheet of vectors but in 3D and every value is weighted.
that's what machine learning is.
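The "3D spreadsheet of weighted values" picture can be made literal: each layer of a neural net is a table (matrix) of learned weights, and inference is repeated weighted sums. A minimal NumPy sketch with arbitrary shapes and random values, purely illustrative:

```python
import numpy as np

# One "sheet" per layer: a matrix of learned weights.
rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input vector (one row of values)
W1 = rng.normal(size=(4, 8))  # first weight table
W2 = rng.normal(size=(8, 3))  # second weight table

# Inference is just weighted sums with a nonlinearity in between.
h = np.maximum(x @ W1, 0.0)   # ReLU(x · W1)
y = h @ W2                    # final weighted sum
print(y.shape)                # (3,)
```

Stacking many such weight tables, with a nonlinearity between each, is the whole "3D" part of the analogy.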
Anonymous 26-05-23 22:13:15 No. 16606
>>16602 Nothing to add but this is true.
'AI' is a marketing gimmick. ChatGPT is closer to Markov Chains than it is to Intelligence.
For that matter, I have never even heard a reasonable, considered definition of 'intelligence' from these propagandists who have flooded the internet with this crap recently.
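The Markov-chain comparison can be made concrete. A minimal word-level Markov text generator over a toy corpus (the corpus and sampling length are arbitrary choices); unlike an LLM, it stores nothing but raw co-occurrence counts:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Transition table: word -> list of words observed to follow it
# (wrapping the last word back to the first so every word has a successor).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:] + corpus[:1]):
    follows[a].append(b)

random.seed(1)
word, out = "the", ["the"]
for _ in range(6):
    word = random.choice(follows[word])  # sample the next word from history
    out.append(word)
print(" ".join(out))  # plausible-looking but memoryless babble
```

The output is locally fluent word pairs with no global coherence, which is roughly the property the post is attributing to LLMs (at a vastly smaller scale).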
Anonymous 26-05-23 22:46:06 No. 16609
>>16607 by 3D i meant it's like
t1,[,,,,,,],[,,,,,,];t2,[,,,,,,],[,,,,,,];t3,[,,,,,,],[,,,,,,]
rather than t1,[,,,,,,],[,,,,,,]
each table is an inverted pointer index into a string-encoded dictionary, but the entries are weighted relative to each other value to predict what comes next in a sequence. it's basically just hyper-compressed SQL with weights, something like Apache Lucene.
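The weighted-index description above can be sketched as a toy: each token maps to candidate continuations with weights (here, plain counts), and prediction picks the heaviest edge. Purely illustrative; this is not how a transformer actually stores its parameters:

```python
from collections import Counter, defaultdict

docs = ["machine learning is fun",
        "machine learning is hard",
        "machine translation is hard"]

# Inverted table: token -> weighted dictionary of observed next tokens.
index = defaultdict(Counter)
for doc in docs:
    toks = doc.split()
    for a, b in zip(toks, toks[1:]):
        index[a][b] += 1  # weight = co-occurrence count

def predict(token):
    """Return the highest-weighted continuation for a token."""
    return index[token].most_common(1)[0][0]

print(predict("machine"))  # 'learning' (weight 2 beats 'translation' at 1)
```

A real model replaces the discrete count table with dense learned weights, but the lookup-and-weigh intuition is the same.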
you could run OCR on 100TB of libgen books, 30TB of patents and 30TB of research papers, scrape *.html on the Wayback Machine, and you'd have something much more powerful than the Common Crawl scrape / Gutenberg collection that every single one of the american corporations uses for their models.
Unique IPs: 61