
/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx

File: 1666935362430.png (670.46 KB, 960x540, ClipboardImage.png)

 No.17378

https://uniglobalunion.org/news/time-investigation-reveals-the-traumatized-teleperformance-workers-moderating-tiktok/

The workers who moderate the internet have always been underpaid and over-traumatized. I speak from experience: they have to see gruesome shit daily, even the mods of this site could relate

Apparently the ones moderating for TikTok through a France-based company in Colombia are thinking of unionizing
I was watching a tech show about it, and it's true when you think about it: in the future we'll look back at how crazy it was to put so many people through PTSD

If these people went on strike, the internet would be 1000% worse. In a good and sad way though, it seems AI will replace them

Do you have any thoughts on this job sector in tech that mostly goes unnoticed?

 No.17388

Collective bargaining benefits from hyper-localized, immobile companies and workers with rare skills. Neither is present here. Uphill battle.

Make people view content moderation as more abhorrent than sewer cleaning to force wages up.

 No.17392

>>17378
Large companies will have the ability to automate this stuff. There will be some lag for FOSS programs to do the same thing, if it ever happens, since AI is a big-budget type of thing. Imageboard software is already undermaintained. It'll lead to a further widening of the gap between the personally maintained/small sites and the corporate sites plus GoDaddy- or WordPress-hosted sites.

 No.17395

>even the mods of this site could relate
We do it for free, but at least we aren't doing it as a job.

I've been jannying IBs for 8 years. Not constantly of course, but logging in and deleting spam during normal site use. The reason I started was my homeboard getting commercial CP spam every single day, and no admin can reasonably spend 24 hours a day online. So you need a few people with brooms to keep this shit from staying online for most of the day.
It's not a problem of the past either: weekly CP ad spam is still happening in 2022 on every imageboard they can find. It's done by hand over proxies and Tor, so no, captchas and antispam don't tend to slow them down. Boards that don't have jannies are easily spotted because of this; that's how constant the problem is.
Then you get leftypol, where until last year there was a /pol/ CP spammer who literally purchased VPSs to post child porn here. I fucking hate Cloudflare too, but it's stopping people like this from posting, so it won't get removed any time soon on a site as infamous as /leftypol/. Plus a former user with notable brain problems spent literally months on end last year posting shock and gore content here near-daily, because they were mad about dumb drama. Even outside of /pol/ raids (which are mostly just le gamer word and infographic spam, but inevitably include a few gore and/or CP spammers), it's nasty business. But at least the team is big enough that no one is seeing this shit every day.

So yes, I can say that even casual internet moderation is gruesome, but it's not even close to what forensics and commercial moderation workers face. Those people are in an industry where they're constantly exposed to it, and they often (if they can) go to therapy; I don't blame them. On sites that big, there will be a substantial number of people posting obscene content.
We're also talking from the perspective of imageboard users, who are more likely to have built up a tolerance for, or normalization of, things like porn and political imagery. Even those things are revolting to most people.

>AI.jpg

There are (hopefully) already things like known-bad hashing or phashing in place, but it would be interesting to see how well AI can do at detecting classes of illegal/shock/abusive imagery. Maybe it could be pretty successful at giving confidence scores, but I don't know if it can replace manual confirmation.
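
To illustrate the phash idea, a toy sketch (Python imagehash library; the hash values, threshold, and filename are made-up placeholders, not anything this or any other site actually runs):
[code]
# Toy sketch of known-bad matching with perceptual hashes.
# Assumes the Python "imagehash" and Pillow libraries; the hash list,
# threshold, and file name below are placeholders.
from PIL import Image
import imagehash

# Hex digests of previously flagged images (hypothetical values).
KNOWN_BAD = [imagehash.hex_to_hash(h) for h in (
    "f0e1d2c3b4a59687",
    "00ff00ff00ff00ff",
)]

MAX_DISTANCE = 6  # Hamming distance tolerance; tune against your false-positive budget.

def looks_known_bad(path):
    """True if the upload is perceptually close to a previously flagged image."""
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD)

print(looks_known_bad("upload.png"))
[/code]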

Anyway, I hope they can unionize because their job is suffering.

 No.17396

>>17392
If we're lucky we might see something akin to the security industry, with a normalized culture of open collaboration. Honestly, everyone wins if they build abuse-protection tools together, and it's probably not going to be considered a competitive advantage. Unfortunately I kind of doubt it; I suspect commercial solutions will dominate and compete instead.

 No.17400

>>17378
AI can never really replace them, because there are always things that slip through

 No.17407

>>17395
>there was a /pol/ CP spammer who purchased VPSs

I genuinely hate /pol/ so much, worthless degens
I wish it would shut down, but it seems the feds use it for honeypotting

>>17400
If AI is able to at least sift through a large chunk, so people have to clean a lot less than the 700 items per day the workers at this Colombian company handle, that'll at least be better

I heard one description, about drunk people playing with a carved face, and it traumatized me just from hearing it.

The better things get for these people, the better.

>>17396
Open sourcing this stuff really would make this all better

 No.17411

>>17378
It is a shit job, definitely. It feels like a lot of it can be automated, but then you start getting into weird scenarios where people are banned, fall through the cracks of a nonexistent customer service, and descend into hell. Automating customer service only works to a degree. You still need people intervening. See vid related.

With regards to moderation specifically, they are treating people like organic AI filters. There's nothing creative or engaging about it. It's mental work of the most tedious kind.

 No.17432

>>17378
>In a good and sad way though, it seems AI will replace them
It's only good until you realize that the AI moderation will be proprietary and used to censor anything that goes against porky's narratives.

 No.17454

Let's have two AIs:
1. Recognize isChild
2. Recognize isNude

Take two separate legal AIs and add an AND clause to detect illegal content.

Notice:
The preferred term is Child Sexual Abuse Material (CSAM).
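
Rough sketch of what that AND clause could look like (the predict_* functions are hypothetical placeholders for whatever classifiers you'd actually use, thresholds made up; it only flags for human review, it doesn't decide anything):
[code]
# Rough sketch of the "two separate classifiers + AND clause" idea.
# predict_child() and predict_nudity() are hypothetical placeholders;
# the thresholds are invented. Output is a flag for human review only.
CHILD_THRESHOLD = 0.9
NUDITY_THRESHOLD = 0.9

def predict_child(image_bytes: bytes) -> float:
    """Placeholder: confidence that the image depicts a minor."""
    raise NotImplementedError

def predict_nudity(image_bytes: bytes) -> float:
    """Placeholder: confidence that the image contains nudity."""
    raise NotImplementedError

def flag_for_review(image_bytes: bytes) -> bool:
    # The AND clause: only flag when both independent classifiers are confident.
    return (predict_child(image_bytes) >= CHILD_THRESHOLD
            and predict_nudity(image_bytes) >= NUDITY_THRESHOLD)
[/code]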

 No.17455

>>17454
How do you train it, retard? You'd need a large library of CP in the first place. All porn should just be banned so we can train the AI on adults.

 No.17467

>>17378
>a France-based company in Colombia
It's funny that people are talking about automation in here, because usually that advertising term is just a cover for neocolonial exploitation of labor. They push the labor out of sight so you can't see it, and a third-world worker gets paid like $0.50 an hour. Listen to the Trashfuture podcast.

 No.17468

>>17454
>>17455
The government gatekeeps CSAM detection algorithms. It is actually not easy to gain access to these detection services, despite their being offered free of charge by major providers like Google, Microsoft, Cloudflare, Amazon, etc.

Let me rephrase:
The US government is actively making it extremely hard for people to fight CSAM dissemination online, by unnecessarily gatekeeping free (as in gratis) services offered by companies.

I've thought about making a FOSS service that collects CSAM hashes to circumvent this issue, but the government obviously has way, way more data, as well as decades spent collecting it. It would take many contributors years to build a useful database. Idk, I think about it from time to time.

 No.17470

>>17468
If it were just normal for all forums and chans to hash all pictures, and to have a system for flagging the hashes of pictures that get removed as CP, I'm sure a good database would build up.
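
Something like this, roughly (a toy SQLite-backed sketch; the schema is made up, and a real shared service would also want phashes instead of exact hashes, plus auth and dispute handling):
[code]
# Toy sketch of the shared flagged-hash list described above.
# SQLite-backed; schema is invented. Real deployments would want
# phashes instead of exact SHA-256, plus auth and dispute handling.
import hashlib
import sqlite3

db = sqlite3.connect("flagged_hashes.db")
db.execute("CREATE TABLE IF NOT EXISTS flagged (sha256 TEXT PRIMARY KEY, reason TEXT)")

def file_sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def flag(path, reason="removed by mod"):
    """Record the hash of a picture a moderator just removed."""
    db.execute("INSERT OR IGNORE INTO flagged VALUES (?, ?)", (file_sha256(path), reason))
    db.commit()

def is_flagged(path):
    """Check an incoming upload against the shared list."""
    return db.execute("SELECT 1 FROM flagged WHERE sha256 = ?",
                      (file_sha256(path),)).fetchone() is not None
[/code]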

>>17455
They literally explain it right there: one model for detecting children, one for detecting nudity. If child, and if nude, then bad.

 No.17471

>>17388
Part of the problem is that mod/janny powers give people a rush, and there's a draw to being able to control a community, or to sucking up to whoever is in charge. Mod/janny organizing or struggle is obstructed by the inherent character of the work and the personalities/ideologies it attracts.

 No.17472

File: 1667949709358.png (29.03 KB, 343x626, ClipboardImage.png)

>>17470
vichan does MD5 hashes (through the optional .json API), and lynxchan uses, I'm guessing, SHA256 for filenames. That said, those are easily evaded with single-pixel changes, or by spammers who overlay a link on the image.
phashes could be effective.

I made a prototype system for collective antispam. It never got put into active use, but technically it could be, with a bit of refactoring and improvement.
https://xj9k.neocities.org/ (code not included, but it's nothing advanced. can share if someone gives a fuck)
Fair warning: there are no images and links are broken on purpose, but most of them are CP link spam.

It worked by downloading the catalogs/overboards of vichan and lynxchan boards, looking for duplicate threads that appear on multiple sites, generating a vichan filter to block them, and then uploading it onto each site. It just ran on a shitty Pi, so it's simple.
Of course there are false positives, but it gives an insight into commercial spam and evangelical schizos.
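
For the curious, the approach looks roughly like this (a simplified sketch; the board URLs are placeholders, it assumes vichan-style catalog.json endpoints, and the filter generation/upload step is left out):
[code]
# Simplified sketch of cross-site duplicate-thread detection.
# Board URLs are placeholders; assumes vichan-style catalog.json
# (a list of pages, each with a "threads" array of posts that carry
# "sub" and "com" fields). Filter generation/upload is omitted.
import json
from collections import defaultdict
from urllib.request import urlopen

BOARDS = [
    "https://example-chan-one.net/b/catalog.json",
    "https://example-chan-two.org/tech/catalog.json",
]

def thread_texts(catalog_url):
    with urlopen(catalog_url) as resp:
        pages = json.load(resp)
    for page in pages:
        for thread in page.get("threads", []):
            text = (thread.get("sub") or "") + " " + (thread.get("com") or "")
            yield text.strip().lower()

seen = defaultdict(set)  # normalized thread text -> sites it appears on
for url in BOARDS:
    for text in thread_texts(url):
        if text:
            seen[text].add(url)

# Anything posted word-for-word on more than one site is a spam candidate.
for text, sites in seen.items():
    if len(sites) > 1:
        print(len(sites), text[:80])
[/code]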

I actually got mail from some kiddo whining about free speech because of this lol

 No.17474

>>17468
>The government gatekeeps CSAM detection algorithms.
Makes sense, since they're the only ones legally allowed to possess it, and therefore the only ones who can engineer an accurate filter.

 No.21386

File: 1692791446973-0.png (311.79 KB, 1920x1080, ClipboardImage.png)

File: 1692791446973-1.png (4.46 MB, 1920x8903, ClipboardImage.png)

https://yro.slashdot.org/story/23/08/22/2029254/the-feds-asked-tiktok-for-lots-of-domestic-spying-features
>A draft agreement between TikTok and the Committee on Foreign Investment in the United States (CFIUS) to avoid a ban would have given U.S. agencies unprecedented access to TikTok's facilities and servers.

https://gizmodo.com/tiktok-cfius-draft-agreement-shows-spying-requests-1850759715

 No.21387

>>17454
Me, an intellectual:
Recognize isPhoto

 No.21389

>>17472
That sounds like a valuable tool. An anon on 4chan was producing graphs that claimed to show posting frequency in Ukraine threads correlating with certain happenings. Anything that gives us a better overview of wtf is going on is great.

 No.21390

Automated moderation is fucking awful (look at YouTube and Reddit automod tools), and it's not just targeting CP or gore, it's being used as an automated censorship tool. With how the mainstream platforms operate to push the bourgie neolib orthodoxy and suppress dissent, more automation than they already have would make that even worse. Unless paired with radical transparency in moderation, its proliferation would advance the dystopian trajectory of the internet.
An observation I've read here that I agree with is that if a platform has to use automated tools to keep out spam or illicit content, it has become too big for its own good.

 No.21391

>>17472
It's done with so-called perceptual hashes, so they're resistant to single-pixel changes. Of course they can be circumvented, but they're resistant to cropping, resizing, shit like that, which normal hashing isn't.

 No.21392

>>17472
They were right in that tools like this can be a free speech issue if not paired with extreme transparency in moderation, which few places have, or if they're closed source, which most tools are.

>>21386
All the rightoids in hysterics about China and TikTok are useful idiots for this agenda. America is rapidly becoming worse than China when it comes to digital controls. As usual, the West's cries about authoritarianism are really either hypocrisy or jealousy.


