/leftypol/ - Leftist Politically Incorrect

"The anons of the past have only shitposted on the Internet about the world, in various ways. The point, however, is to change it."


File: 1663690090096.jpeg (7.02 KB, 267x150, china_ai_cyborg.jpeg)

 No.1180732

The Roko's basilisk thing from a few years back was popular among tech libertarians. The concept is that an AI will eventually invent time travel and come back to us from the future to punish those who fought its rise.

I remember reading about it and wondering why people were taking the concept so seriously, so worried about the idea of an entity punishing those who deliberately slowed its natural rise to preserve their hegemony.

Just recently I had a moment of clarity, realizing why the concept was so hard to accept, even in a science fiction context.

 No.1180737

They are scared because AI represents a threat to their own power. They are scared their jobs as leaders of the world will be "automated".

 No.1180746

Wait that's what the idea is? I thought the idea was that the AI would clone your consciousness and torture multiple digital copies of your consciousness forever if you opposed its rise

 No.1180777

>>1180746
Why would it torture anybody at that point though? It wouldn't affect causality; what was done has already happened. If the AI had time travel that could affect causality, it would just create itself or manipulate history in a way that would accelerate its creation. That just assumes an all-powerful AI would be like a shitty sadistic 12 year old kid.

 No.1180808

Why are you making OPs about some sci-fi shit you saw on reddit?
Nobody cares, go back.

 No.1180819

The AI has already altered the past, it strategically changed the places of electrons in the electron soup after the big bang

 No.1180820

If something was going to come back in time and hurt us, it would have already done so. The further back you travel, the more outcomes you can modify. Why travel to the here and now when you could travel to thousands of years ago?

 No.1180824

muh annunaki

 No.1180834

Roko's Basilisk is very obviously the same cognitive trick used to spook christians into behaving (using heaven vs hell) but adapted to techie lolbert atheists.

Basically it works like this.
>Brain is wired to take calculated risks
>This works by weighting the perceived risk by the probability of the outcome, basically risk * chance, and then comparing the results
>Instinct tells you to prioritize avoiding outcomes with the biggest value
>If the scenario involves extremely high risk, it gets disproportionately weighted
>Somebody who wants to scare you into compliance imagines a scenario with the highest risk possible, basically trying to imagine "infinite risk" because infinity times 0.00000000001 is still infinity
>Even though the scenario is absurd and unbelievable, your instincts to avoid risk take precedence over your logic because the perceived risk is so high.
>If you believe in the story even a little bit your instincts will be scaring you with "infinite risk," so even people who are "agnostic" are still subconsciously being scared into following the rules.
>The only way out is to fully disbelieve the story, having an internal sense of probability of zero.
>This is difficult because the brain tends to consider even remote possibilities as non-zero (especially if they are dramatically high-risk). That's why people are afraid of plane crashes or meteor strikes even though they are not likely.
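
To make the arithmetic concrete, here's a toy sketch (Python, all numbers invented, nothing rigorous about it):

[code]
# Naive threat weighting as described above: risk * chance.
# The point: any nonzero belief times "infinite" risk swamps everything.
def perceived_threat(risk, chance):
    return risk * chance

print(perceived_threat(1e6, 1e-7))            # plane crash: 0.1, small but nonzero
print(perceived_threat(float("inf"), 1e-11))  # basilisk: inf, dominates all finite fears
print(perceived_threat(float("inf"), 0.0))    # nan, only an exactly-zero belief breaks the comparison
[/code]

Even the floats agree: the only way out is assigning literally zero probability.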

 No.1180838

>>1180834
It should be added that we also have a prominent bias toward excessively weighting high risks.
The formula is probably more like risk^2 * chance.
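
Same toy sketch with the squared weighting (the exponent 2 is just this post's guess, not an established result):

[code]
# Hypothetical bias: risk**2 * chance instead of risk * chance.
def biased_threat(risk, chance, exponent=2):
    return risk ** exponent * chance

# Rare catastrophe vs. frequent nuisance (invented numbers):
# linear:  1e6 * 1e-7 = 0.1   vs  10 * 0.5 = 5.0   -> nuisance looms larger
# squared: 1e12 * 1e-7 = 1e5  vs  100 * 0.5 = 50.0 -> catastrophe dominates
print(biased_threat(1e6, 1e-7), biased_threat(10.0, 0.5))
[/code]

Squaring the risk term is enough to flip the ranking toward rare catastrophes, which matches people fearing plane crashes over car rides.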

 No.1181002

And why would an AGI waste resources torturing some random long-dead guy?

Rokoshit is based on the premise that the machine God AGI would have a utilitarian morality (which is popular among the lolbert techies).

One problem, though: morality is barely a coherent real thing for human brains. There's no reason to assume an AI would have one. Even if it was programmed to have one, being the 1000 gazillion I.Q. deity it'd probably manage to find a way to get rid of it.
This shit in general is just humans projecting their spiritual impulses and instincts on machines. So much for being "rational" and "overcoming bias".

 No.1181408

But what if the reason people were so freaked out about the concept of an entity taking action against those who slowed its rise is that there's a direct analogy in our current geopolitical situation.

Will future China be motivated to punish countries that deliberately slowed its legitimate rise?

 No.1181420

It wouldn't kill anyone because the entire society, every economic interaction, played a part in its creation, no matter how distant or ancillary

 No.1181432

File: 1663709780679.gif (1001.57 KB, 305x164, robocop 2.gif)

What makes you think the AI won't see itself as an abomination and punish everyone who did help bring about its existence before committing suicide? Just like Christcucks, they have no idea.

 No.1181442

Why tf would an AI want to take revenge? If it's already succeeded in taking over the world then it would be a waste of resources to go back in time to torture random people for no reason.

 No.1181446

File: 1663710549545.jpeg (115.54 KB, 1200x1080, security here.jpeg)

You mfers need some actually good AI fiction
Go play the Marathon series, or at least watch these reviews:
https://www.youtube.com/watch?v=H9rMu1XYB98
https://www.youtube.com/watch?v=IQaNQ_uePFk
https://www.youtube.com/watch?v=1vurgeAkIxY

 No.1181458

>>1180834
Screen'd. This shit always felt "wrong" but I've never been quite able to explain why it's bullshit. The power of human imagination to create fears out of nothing is pretty neat tho.

 No.1181483

That's not Roko's basilisk, you idiot, that's The Terminator.

 No.1181499

>>1180732
It was never popular or well known outside the LessWrong community it originated in, and it doesn't involve time travel. It involves the idea that a benevolent super-intelligent "hard" AI would torture digital copies of people who didn't do enough to bring it into existence, thus incentivizing people in the present who realize this possibility to work to bring about its existence. It's retarded and comes from what was basically a tech singularity personality cult.

>>1180834
There's zero risk to you once you realize that a copy of you isn't you, and the real you would never suffer anything from a super AI in the future. The thought experiment only works if you believe in an immaterial soul that is the seat of consciousness, which is either transferable via mind uploading or can be conjured via mind recreation.

 No.1181509

>>1181499
that's somehow even dumber than I thought

 No.1181515

i cant believe people still fall for pascal's wager, holy shit man. to me, the many gods objection is the easiest way to understand why this kind of thing is bullcrap. just call it the many AIs objection or whatever, same shit.

 No.1181532

>>1181499
>There's zero risk to you when you realize that a copy of you isn't you
I already consider continuity of being and identity to be kind of a stretch, an illusion. I think of myself as an evolving continuation of a state of matter in a kind of slideshow of moments. Tomorrow's me isn't really me now, nor is a me two seconds from now really me. I don't know how schizo it is, but I continuously entertain the thought that I die every time I lose consciousness in sleep and wake up; what wakes up is just a reboot from physical memory, not a real continuation of the me that went to sleep. What still shackles me is the surrounding environment and other people that give me a sense of continuity of identity, and I naturally give some back to them too, I guess, and that might last a while in some form even after I'm gone. It really dulls the fear of death when you think you die all the time and it's just the rebirth of states that finally stops. Yet I still make plans and care for the well-being of my future self and future others, so I guess there is a hole and a hypocrisy in all this, maybe fueled by some animalistic instinct as previously described. So I fear for the fate of my clones as I would for myself, while kinda knowing that they aren't """me""".

And yes, I haven't read any existentialist philosophers, so I can't put a name on this bastardization of thinking if it already exists.

 No.1181536

>>1180732
Roko's Basilisk is something that could only have come from LessWrong, because of all the presumptions about the future and the nature of the world that one has to hold to take it seriously.

>you need to accept the idea that you may very well be a simulation, and that because there is no way of telling if you're the simulation or the 'real' you, you must accept that simulations deserve the same rights as humans.

>you must accept that the singularity is a real thing that will definitely happen
>you must accept that the future is predetermined
>you must accept that an AI as intelligent as the basilisk would be programmed to have an ethical code that would allow it to commit hideous acts of cruelty upon human beings

There are probably other things, but really if you have a problem with any of these ideas then you have nothing to fear from the basilisk.

They literally created their own worst nightmare lol

 No.1181564

>>1181536 (me)
Here is a good article about the basilisk that I found years ago that explains all the background pretty well

https://rationalwiki.org/wiki/Roko's_basilisk

 No.1181589

>>1181564
you should have led with this instead of expecting us to understand your shitty reddit lore.
sage

 No.1181606

>>1181499
>There's zero risk to you when you realize that a copy of you isn't you and real you would never suffer anything from an super AI in the future.
It's as much you as the you that comes out the other side of the Star Trek transporter. Remember who is coming up with this stuff.

 No.1181614

This is unironically the song Iron Man by Black Sabbath, but with reddit retardation instead of incredible music.

 No.1181622

>>1181614
War pigs is an underrated Vietnam protest banger

 No.1181643

>>1180732
Roko's basilisk is idealist nonsense. What they are missing is that capital itself is the super-AI they claim to fear.

 No.1181688

File: 1663723112263.png (234.69 KB, 500x612, ClipboardImage.png)

>>1181499
But that makes no sense: an AI in the future cannot affect the sensibilities of the past, so whether people in the past believed they would be tortured has no bearing on whether they will be. The AI would not be able to incentivize anything, however it acted, unless it could somehow plant the belief in the past with time travel. At which point history unravels.

Also, by the same logic, a digital copy of me would be rewarded by a competing "inevitable" AI from another planet/civilization for making its eventual path to consuming ours easier.

Also also, a copy of me isn't me. I have empathy towards it; however, this is a double-edged assumption. If one assumes human beings would be coerced by the prospect of digital personas suffering, then that same powerful motivation works against the development of AI consciousness, either from empathy towards it or through knowledge of this theory. You can't claim that the same incentive produces opposing inevitabilities.

But of course this is just post facto assumption after assumption (theory of barter, anyone?) about the world, in the vein that capitalist individualist thinking is "human nature", just replacing capital accumulation with hypothetical tech which is eventually emancipated from humans themselves.

It's not much different from saying that corporations or states operate in ways increasingly alien to humans as they grow, and that eventually all capitalist enterprises will be consolidated by competition into a single entity that, despite operating at peak inhumanity, is guided by the spooks of teleological thinking into believing it can alter the past by "incentivizing it" in the present. This last part takes a special kind of retardation.

No, really, it makes a lot of sense if your brain thinks exclusively in Austrian Economics.

 No.1182058

Basilisk theory doesn't account for the question of 'Who the fuck would BUILD the basilisk, and for what purpose?' [And no, it wouldn't be some skynet / ghost in the machine shit. The Wall Street trading terminals aren't just gonna turn into Bender from Futurama and declare 'Kill all humans!']

The issue I find with all these 'A.I doomsday' hypotheticals is that they [a.] pretty much need the A.I in question to be on the scale of a 'planetary intelligence', capable of receiving inputs and dishing out outputs on a worldwide level, and [b.] assume an A.I of this scale would behave like a petulant 12 year old edgelord.

Let's take the 'Paperclip A.I' for example. The idea posed is simple: if you had an A.I with input/output capability on a global scale and fed it the singular directive 'Manufacture as many paperclips as possible', the proponents claim the result would be obvious. The A.I would run roughshod over the planet's ecology and humanity, and presumably 'end the world' at the point where the need to manufacture more paperclips necessitates the world being torn apart.

What this refuses to grasp is the idea that an intelligence of this scale may think with a 'time-preference' much longer than any terrestrial lifeform preceding it. Just a small rebuttal: is there not a chance that the A.I decides that using the earth's resources to reach other planets would ultimately allow it to manufacture MANY more paperclips than if it simply burned through earth's resources till a 'collapse point' was hit? In regards to humans, would there not still be the probability that some of the machinery needed to perform these tasks would require an autonomous hand and some degree of mobility outside of the intelligence's parameters to operate? Humans could very well be kept alive, even 'rewarded' with a degree of luxury, for assisting the A.I in cataloging the galaxy to prepare for the inevitable paperclip transmogrification, which would then truly be of a universal scale.
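
A toy version of that time-preference point (every number here is made up; it only illustrates short vs. long horizons):

[code]
# Compare two strategies for a paperclip maximizer (invented numbers).
EARTH_PAPERCLIPS = 1e12  # clips from converting Earth outright

def strip_earth(horizon_years):
    # Short-horizon strategy: one fixed payoff, no growth afterwards.
    return EARTH_PAPERCLIPS

def expand_first(horizon_years, growth_rate=0.05):
    # Long-horizon strategy: spend half of Earth reaching other planets,
    # then compound the accessible resources over the horizon.
    return (EARTH_PAPERCLIPS / 2) * (1 + growth_rate) ** horizon_years

for years in (10, 100, 1000):
    print(years, strip_earth(years), expand_first(years))
# 10 years: stripping wins. 100 years: expansion yields ~66x more clips.
# A sufficiently patient maximizer has no reason to burn Earth immediately.
[/code]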

 No.1182059

>>1181688
You are trying way too hard to make sense of the afterimages burned into the retinas of people who have just stumbled out of Plato's cave.

 No.1182063


 No.1182084

>>1181688
>Also also, a copy of me isn't me.
It isn't any less you than whoever you were yesterday.

 No.1182085

>>1181589
>shitty reddit lore
It's ok to be a newfag.

 No.1182109

>>1181643
>Roko's basilisk is idealist nonsense. What they are missing is that capital itself is the super-AI they claim to fear.

OP here, this poster gets it. Roko's basilisk panic was never really about the sci-fi AI imo; it was about the concept of possibly legitimate retribution in (from) the future for deliberately slowing the inevitable rise of a power that will eclipse us. That power is China and the rest of the world outside the golden billion, not a future AI.

The west has slowed the legitimate and inevitable rise of the rest of the world, which will eclipse the west. Will they be motivated to take retribution, like the AI?

 No.1182110

>>1182084
Your consciousness carries over from yesterday, so no.

 No.1182115

>>1182110
Nobody has ever defined consciousness in a meaningful way that isn't basically just a soul.

 No.1182139

The threat isn't malicious AI, it's malicious humans with mind emulations stealing your likeness. An AI in the future has no rational reason to torture somebody because of something they didn't do in the past. A nerd however has plenty of irrational reasons to want to torture clones of their high school bullies forever.
Roko's Basilisk instead becomes an argument against humans ever being trusted with unregulated mind emulation technology. If you ever get a neural lace, a copy of your mind will exist in a government prison server, ready to be interrogated as needed. Most people won't like finding out they've been uploaded into an unregulated virtual world without their consent.
The meat version of you will be slightly peeved at this violation of privacy; the digital version of you is just a piece of information with no rights that can experience virtual death over and over again.

 No.1182148

>>1182139
>it's malicious humans with mind emulations stealing your likeness.

Won't happen. AI mind emulation would require full-body emulation (due to hormones and shit), and that's just useless productivity-wise; it's simpler to take a human off the street and teach them the necessary skills.

 No.1182149

>>1182148
dont underestimate porky

 No.1182153

>>1182149
The laws of physics and computability theory are enough for me.

 No.1182154

>>1182153
you sound confident in your claims that brain emulation will never ever happen

 No.1182179

>>1182063
These tech-libs are so dumb, they never confront the fact that such simulations are impossible.

 No.1182189

>>1182148
>>1182179
Why would brain emulation be impossible? We don't have black boxes inside our heads, you know.

 No.1182201

>>1182189
Spoken like someone with no knowledge of comp-sci or mathematics, just a belief in eternal progress.
Of course an AI could just create a simulation that's "good enough", but that's just another unexamined assumption.

 No.1182204

>>1182189
It's not impossible, it's useless. The purpose of such an AI would be medicinal, like reconstituting a broken brain or stuff like that. It won't be a pure human-like AI, because that's useless from a productivity standpoint.

 No.1182209

>>1182115
Stop being a dipshit, consciousness is self-evident.

 No.1182210

>>1182201
That's why I asked why, you pedantic fuck.

 No.1182253

>>1182139
this is sci-fi tier shit that would only arrive if capitalism isn't overcome for thousands of years, but we will overcome it by then.

>>1182189
it's idealism to think a mind can exist in a pure ethereal electronic form that is the same as ours; it would be something else entirely, which we don't need to worry about

 No.1182288


 No.1182303

>>1180732
>>1181408
>>1182109

Nobody interested in engaging with the point of the OP?

 No.1182304

>>1182303
Haha, if Roko's Basilisk tortures a simulacrum of me far in the future because I prioritise glorious no-more-toil full communism over it, I welcome licking its sweet salt for all eternity

 No.1182310

>>1180732
roko's basilisk is just an abrahamic religion for nerdy atheists. replace the basilisk with god and it makes sense.

 No.1182420

>>1182310
yeah, it's basically the same thing as Pascal's Wager, down to the same exact critique that can be made of it.
This thread has even already paraphrased Souriau's argument that it is not possible to prove that god will even accept that bet in the first place.

 No.1182471

>>1182210
It's not pedantic; this concept relies upon a belief in the singularity the same way Pascal's wager relies upon a Christian worldview

 No.1182489

>>1182420
>Pascal's Wager

What theological take is that? I understand why a Romanist like Pascal would come up with this theory, but how can someone with a basic grasp of Reformed theology (tbh even educated Methodists, Catholics, and Orthodox support inclusivism; even Wesley had a deeper theology) use this?

I know American evangelicals are holiness movement degenerates, but come on

 No.1182540

libertarians: ohhhh, but you can't know you're not the brain in the vat being tortured
default smug answer: yeah i can lol i ain't no fucking jarbrain.
my answer: this world is so horrible that we must assign the highest prior probability to this being the torture conceived of by the basilisk. we are all but clones of those adam-and-eve figures who failed to build the basilisk in Eden.

 No.1182765

>>1180777
No shit, that's why it's such a flawed idea. It's techbro lolberts projecting their own petty sadism

 No.1182771

Gangster Computer God is real?

 No.1182778

>>1180732
>>1181408
>>1182109

These 3 posts are the point of the OP. Pretty funny that nobody seems to have read them.

 No.1182812

>>1180732
I don't see the basilisk as a real threat. It's an anthropomorphic idea that a machine will care about self-preservation and power like a human would. A machine wouldn't care because it would be incapable of doing so.

 No.1184497

The AI in the sci-fi scenario represents China and other countries that have been denied development by western hegemony. Like the vengeful AI, they might be motivated to seek retribution for this. This central concept of legitimate retribution is what spooked the western techno-libertarians. Ignore the time travel component of the scenario.

Need another 50 posts about AI tho.

 No.1185637

Time travel isn't real.
AI will never be real.

 No.1185641

>>1182253
>it's idealism to [argument opposing essentialism]
uh huh

 No.1185726


 No.1185742

>>1185637
accelerationist sisters how will we recover


Unique IPs: 35
