[ home / rules / faq / search ] [ overboard / sfw / alt ] [ leftypol / edu / labor / siberia / lgbt / latam / hobby / tech / games / anime / music / draw / AKM / ufo ] [ meta ] [ wiki / shop / tv / tiktok / twitter / patreon ] [ GET / ref / marx / booru ]

/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx



File: 1726459786963.png (365.18 KB, 709x538, nuimageboard.png)

 

The neverending quest to rewrite vichan -

Archived threads:
https://archive.is/xiA7y
231 posts and 57 image replies omitted.

>>31346
How many people do you expect to buy into your new document format? I would instead either start from an existing document pipeline like docbook/asciidoc, or make it possible to embed common document formats and write scrapers for metadata and such (maybe even hijack internal links to hook back into your platform).

>>31348
It's all off-the-shelf [^1][^2][^3], but that's rather irrelevant since no one uses the stuff.
Guess the only tricks here are the LSPs for writing HTML, if that's even helpful.
It could be hosted with any VPS, Nginx, and Hatsu [^4] or fed.brid.gy.

[^1]: https://en.wikipedia.org/wiki/Server_Side_Includes
[^2]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/template
[^3]: meta keywords also exist unofficially but all the articles on it are SEO spam.
[^4]: https://hatsu.cli.rs/

Made this stupid editor work more or less completely.
Coming in at <400 lines, it's certainly a featherweight.
I didn't add the CSS button, so it's strictly for HTML.
The key was to punch in zero-width spaces all over the place.
And of course clean them up when we're done with them.
Well that and a new prompting methodology… >>31361
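The cleanup half of that trick is simple enough to sketch: strip every zero-width space (U+200B) before persisting the HTML.

```python
ZWSP = "\u200b"  # the zero-width space used as a navigation placeholder

def strip_zwsp(html: str) -> str:
    # remove all the placeholder characters once editing is done
    return html.replace(ZWSP, "")
```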

>>31362
This is so close, but can't quite get it perfect.
In such times it's often best to take a break.

Have been thinking about my imageboard's actual board.
The idea is to just start with one with broad scope.
This way the small user base is as undivided as possible.
It requires fewer tabs for the users, so less burden on them.
It also provides a theme that the imageboard can build on.

This initial board is to be called /b/ - timely.
The board is to discuss all things future:
technology, programming, AI, sci-fi anime, (relevant) political happenings, science, etc.
The forbidden content is roughly as follows:

1. trivial
- repetitive
- irrelevant
- vacuous
2. unfalsifiable: insufficient provisioning of contextual "why".
3. misrepresenting: representing another in a way other than how they perform.

The main weaknesses I can think of are pseudoscience, futurism, and false flags.
Can't really think of a way to remove these things without self-moderation of some sort…
Which is out of the question.
Perhaps filtering these is some part of the discussion…

>>31362
>>31371
>This is so close, but can't quite get it perfect.
>In such times it's often best to take a break.
Editors are brutal, man.

Eventually to fix those uyghling issues you generally have to start thinking about a separate internal representation of the text

>>31372
In some ways this is barely even an editor.
More of an HTML to HTML transformation to make the existing editor work.
And to display some metadata about the HTML.
There was just a little too little planning for it to work out correctly, and a lot of edge cases I didn't know about.
Mostly that there are tags which can't be inserted into or which can only take certain elements.
And that you have to insert something into a tag to navigate inside of it as a contenteditable.
But also that you can't edit the tag name of a tag without deleting and recreating it.
And there were probably others too.

File: 1758836980550.txt (18.82 KB, temp8.txt)

>>31373
May have just perfected this little editor, but the implementation is terrible.
I've attached it for enjoyment by the general public.
These laughs are on the house.

I'm not reading this thread rn, but what about a TUI imageboard? Modern terminal emulators and TUI libraries are capable of even displaying images. That way you could ssh into the board? Idk it's not a very fleshed out idea and i have little experience.

>>31377
There's one or two of these, think of them sometimes.
There's also the tildeverse, which is sort of similar.

Am thinking about making arsvia2 a textboard with drop caps.
This is due to the pricing of image content filtering at $1/20 images.

Had a productive programming session today.
Implemented nearly the entire remainder of the backend for arsvia2.
Probably just a day or two left of work on it.
Realized NGiNX can be used for caching and flood detection.
There were some other pleasant surprises.
Still need to write the two or three websocket endpoints.
Plus the two permission checks in the application, and the TRIGGERs.
Oh and also the /board/index endpoint and query, which was absent from 1.0.
TRIGGERs include bumping, bumplocking, and applying reports/bans.

Was thinking some about the front-end too, and thought I might go for a modern implementation.
Might actually implement the recursive post preview, and post from index.

>>31393
Was a productive day, implemented trip_code diaries, liveboard websockets, index, and actually started running the backend.
As part of scrubbing the API, am considering folding four GET endpoints into one POST endpoint since they all return post lists.
Then might as well make them all POST since having one GET would be weird, but this would be a little strange too!
So far have done manual testing of the index, thread, catalog, and post endpoints. They all seem to be working.
Tomorrow I'll work on testing address, trip_code, reports, report, and ban endpoints.
Also I haven't yet set back up the admin panel. Not sure how

>>31403
Manually tested address, trip_code, and reports.
Am using the trip_code as an Authorization Bearer token in the API (there are no query parameters).
Made the Bearer optional for /api/report and /api/ban to apply these.
Am going to change these endpoints to /api/user_action and /api/post_action.
Probably need to add an endpoint for listening for user_actions and rename report.
The request will be structured as an ADT in Pydantic, so only necessary data is required.
This way the admin panel is just (hidden) JSON.stringify <form></form>s on plain pages.
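A sketch of what that Pydantic ADT could look like, with hypothetical Report and Ban variants discriminated on an `action` field (the field names here are assumptions, not the actual schema):

```python
from typing import Literal, Union

from pydantic import BaseModel, Field


class Report(BaseModel):
    action: Literal["report"]
    post_id: str
    reason: str


class Ban(BaseModel):
    action: Literal["ban"]
    post_id: str
    duration_days: int


class PostAction(BaseModel):
    # the discriminator makes Pydantic pick the variant from "action",
    # so each request only carries the fields its variant needs
    body: Union[Report, Ban] = Field(discriminator="action")
```

The admin panel form then only has to serialize its fields into this JSON shape.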

>>31413
Have run into some trouble with trying to set up doomscrollability in the API.
And also constraints on how the liveboard features affect the API.
And further had to switch around the address endpoint to be a related posts endpoint.
This is to make it clickable in the front-end without requiring an identifier for the poster.

- /api/{board}/reports?t={time}&o={offset}&l={limit}
- /api/{board}/catalog?t={time}&o={offset}&l={limit}
- /api/{board}/index?t={time}&o={offset}&l={limit}
- /api/{board}/res/{thread}?t={time}&o={offset}&l={limit}
- /api/{board}/rel/{post}?t={time}&o={offset}&l={limit}
- /api/{board}/trip/{trip_code}?t={time}&o={offset}&l={limit}

This means all the queries for the GETs have to recompute the bump at {time}.
This shouldn't be too difficult, but it's not like two days ago.

>>31427
This wasn't so bad. Just some relatively small SQL changes.
Renamed the /api/ban endpoint to /api/moderation, and still need to write the ADT for it.
The post endpoint handler is getting a little large.
It might be necessary complexity though.
It's nine queries (including inserts otherwise used elsewhere) and a hundred lines.
The bulk of this is fetching data for, and updating, the websockets for five pages.

>>31428
Once I started trying to keep deletions deleted, this turned out to be about as difficult as expected.
The tricky thing is that there can be a deletion less than {offset} after {time} and then you get duplicates.
So you've got to add all the extra deletions to the offset to get the proper offset.
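A toy model of the duplicate problem and the offset correction, with hypothetical names: `snapshot` is the post list ordered as it stood at {time}, and `deleted` holds soft-deletions since then.

```python
def next_page(snapshot, offset, limit, deleted):
    """snapshot: post IDs ordered as they stood at {time}.
    deleted: IDs soft-deleted since {time}."""
    # naive: OFFSET over the snapshot still counts the soft-deleted rows,
    # so a row the client was already served comes back as a duplicate
    naive = [r for r in snapshot[offset:] if r not in deleted][:limit]
    # corrected: skip `offset` *live* rows, i.e. offset plus every
    # deletion that falls inside the already-served span
    i = shown = 0
    while i < len(snapshot) and shown < offset:
        shown += snapshot[i] not in deleted
        i += 1
    corrected = [r for r in snapshot[i:] if r not in deleted][:limit]
    return naive, corrected
```

With snapshot 1..10, deleted={2}, and offset=3, the client has already seen [1, 3, 4]; the naive page re-serves 4, the corrected one starts at 5.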
Got that ADT written for the moderation endpoint, but haven't written the handler.
Think after I get this project deployed I'm going to start looking for jobs again (maybe IT, non-support if that exists).

>>31431
My application is a bit of a mess now.
NGiNX can, in addition to caching and rate limiting, do per-URL file size and MIME checks.
Am working on separating out a /api/file application/json POST internal endpoint.
The actual endpoint presented by NGiNX will likely be /api/{board}/file.
The trickiest bit seems to be the garbage collection of failed uploads.
Still haven't finished the /api/moderation endpoint or the deletion offset correction.
Think when all of this is set, then it will be time to move on to the front-end.
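A sketch of the NGiNX side of this, assuming a hypothetical upstream name and made-up size/rate numbers:

```nginx
# flood detection: one write per second per address, with a small burst
limit_req_zone $binary_remote_addr zone=flood:10m rate=1r/s;

server {
    listen 80;

    # per-URL upload cap; oversized bodies are rejected before they
    # ever reach the application
    location ~ ^/api/[a-z]+/file$ {
        client_max_body_size 4m;
        limit_req zone=flood burst=5 nodelay;
        proxy_pass http://app_backend;   # hypothetical upstream
    }

    location /api/ {
        limit_req zone=flood burst=10 nodelay;
        proxy_pass http://app_backend;
    }
}
```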

>>31433
Even the renaming to the hash, and thumbnail generation can occur in NGiNX with a little lua.
This would have required me to change the API slightly and is ugly in itself so I've ruled it out.
The only reason I was nearly able to justify it is that the endpoint is already going to be different.
Still need to figure out the whole garbage collection thing for files not connected to posts.
Just set up incremental copy and hashing for files which should drop memory usage some.

Am going to try to start saging this thread unless it slides off page one.

Did one round of fixes on the endpoints and queries just to break them again.
But the timetraveling with deletes problem mentioned in >>31431 was solved.
Pulled out all the mod options into separate junction tables by type.
Think now have a sketch of how moderation options should work in general.
Removed all reference to references in the backend which simplified things some.
Switched from Bearers to HTTP Basic for all the trip_code protected endpoints.
Switched from post IDs to Base64 encoded Snowflakes (that's twelve characters).
Also decided to have server side hidden posts, like with the triangle menu.
These can also apply to IPs to allow for something like shadow banning.
Am also considering a fairly massive rewrite of the API for cachability.
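The twelve characters check out: a 64-bit snowflake is 8 bytes, and Base64 turns 8 bytes into 12 characters (11 plus one '=' of padding). A minimal sketch, with a made-up bit layout for the worker and sequence fields:

```python
import base64


def make_snowflake(timestamp_ms: int, worker: int, seq: int) -> int:
    # hypothetical layout: 42 bits of time, 10 of worker, 12 of sequence
    return (timestamp_ms << 22) | (worker << 12) | seq


def encode_id(snowflake: int) -> str:
    # 8 bytes -> 12 Base64 characters, URL-safe for use in links
    raw = snowflake.to_bytes(8, "big")
    return base64.urlsafe_b64encode(raw).decode()


def decode_id(encoded: str) -> int:
    return int.from_bytes(base64.urlsafe_b64decode(encoded), "big")
```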

Was thinking quite a lot about moderation issues.
Excluding hCaptcha, there are apparently human captcha farms from 0.005 USD a solve.
You can apparently buy an IP for a month from a botnet for 0.05 USD.
Using Spamhaus XBL you can block something like 80% of botnets, so in a perfect world 0.25 USD an IP.

Overall have found the space a little disappointing.
Basically 0.255 USD a spam post unless JavaScript is mandatory.
And really even seems difficult to perform moderation actions at all with Tor and VPNs permitted.
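The arithmetic behind those figures, assuming each spam post burns one fresh IP plus one captcha solve:

```python
captcha_cost = 0.005     # USD per human-farmed captcha solve (per above)
ip_cost = 0.05           # USD per botnet IP per month
xbl_block_rate = 0.80    # fraction of botnet IPs Spamhaus XBL catches

# if 80% of purchased IPs bounce off the XBL, the spammer pays for
# five IPs to get one usable address
effective_ip_cost = ip_cost / (1 - xbl_block_rate)
cost_per_spam_post = effective_ip_cost + captcha_cost
```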

Could just use CloudFlare, hCaptcha, and block VPNs/Tor.
This would cut the Gordian knot, but it hurts me a little too.

>>31482
>Am also considering a fairly massive rewrite of the API for cachability.
>JavaScript is mandatory.
This is turning out to be a little more significant than first envisioned.
Could make every post into (cachable) JSON cached by a CDN.
Further could use a serverless frontend hosting a Preact SPA.
Because a certain provider has more bandwidth than they know what to do with, this is free.
Lastly could make use of GPU-accelerated image transformation hosts.
The server handles validation, parsing, and sending snowflake vectors over GET/websockets, or serving them from the CDN.
This hybrid-serverless approach seems to play the economy right.

>>31483
>This is turning out to be a little more significant than first envisioned.
Got the endpoints rewritten and am most of the way through with the models.
As usual it's the moderation-related functionality that is giving the most trouble.
Further need to rework the related posts endpoint to be able to handle range bans.

>>31487
>Got the endpoints rewritten and am most of the way through with the models.
>As usual it's the moderation-related functionality that is giving the most trouble.
>Further need to rework the related posts endpoint to be able to handle range bans.
Finished off the endpoints and the models, had to add back References.

An advantage of ">>thread/post" syntax is that you can shard on board without querying across shards.
This is because the parser no longer needs to query for the thread of the post to link.
However with snowflakes this would be 22 or 24 characters long, which is unreasonable.
Further removing this step from the parser also removes the validation of cross-thread links.

We could mangle the endpoint such that /{board}/res/{post} returns the relevant /{board}/res/{thread}.
The front-end would then be responsible for translating to the canonical /{board}/res/{thread}#{post}.
Since it's already an SPA this wouldn't be the biggest deal, but is ugly, and removes cross-thread link validation.
The validation isn't as big a deal as the ugliness to me.

Trips are another problem in need of a solution…

>>31495
>We could mangle the endpoint such that /{board}/res/{post} returns the relevant /{board}/res/{thread}.
Made the backend require an SPA or complex DOM operations in the front-end by making links unusable without JavaScript.
The reason was to make sharding by board trivial, avoiding the parser having to query for post threads to form URLs.
Corrected my error and decided returning a correct output for the user was more valuable than more efficient sharding.
At present whatever handles the front-end just has to be capable of fetching/manipulating JSON, and concatenating strings.
Rendering the HTML contained in the JSON in native apps isn't much more complicated given existing libraries.

Regarding the HTTP header waste caused by having each post be requested separately rather than via a bulk endpoint:
For an English language textboard waste could be as high as 10%, but for an imageboard more like 1%.
But this is waste that doesn't touch the server since all the posts are sent from the CDN to the client.
I'm worried about cache poisoning when hosted without a CDN or without Cloudflare.

>>31497
It feels like the API is starting to solidify.
The only successful API changes of the last several were:
- Replacing the offset with a cursor.
- Allowing for address ranges in the related posts.
Am working on the queries presently.
Completed the thread, and catalog queries.
Even optimized them a little, though don't really know how.
Was pretty weak today, and didn't get much out of bed.
So the rest of the queries are going to probably wait.

>>31503
>So the rest of the queries are going to probably wait.
Managed to get a few more similar queries done:
So now there are trip, related, and report queries.
This included setting up IP range queries for the related page.
These work without exposing the IP to the moderator.
Especially helpful if there are ever user created boards.
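One way those range queries could work without exposing addresses, purely as an assumption on my part: key each post by a keyed hash of its network prefix rather than by the IP itself, so posts from the same range group together while the raw address stays server-side.

```python
import hashlib
import hmac
import ipaddress

SECRET = b"server-side pepper"   # hypothetical; never shown to moderators


def address_key(ip: str, prefix: int = 24) -> str:
    # collapse the address to its /prefix network, then HMAC it: two
    # posts from the same range share a key, but the IP stays hidden
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return hmac.new(SECRET, str(net).encode(), hashlib.sha256).hexdigest()
```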
Also compressed down the JSON for the post metadata.
The vast majority of the data here is null.
So we just remove the empty elements.
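The compaction step is simple enough to sketch, assuming the metadata is a flat dict:

```python
def compact(metadata: dict) -> dict:
    # most fields are null most of the time, so shipping only the
    # populated keys shrinks the per-post JSON considerably
    return {k: v for k, v in metadata.items() if v is not None}
```

For example, compact({"name": None, "trip": "!abc", "sage": None}) leaves just {"trip": "!abc"}.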

>>31506
Am interested in transforming this into a federated platform with "little boxes".
There are mandatory trips where the trip password is an EdDSA private key and the trip a public key.
These are generated client side, and the private key never hits the wire let alone the server.
Usernames are equal to the public key, unless the user changes them.
The client signs, client side, the HTTP request that will be sent to the outbox of the post it responds to.
There are group actors of different kinds which make up the boards.
We accept all follows and drop all DMs to keep things simple and safe.
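A sketch of that key flow using Ed25519 via the third-party `cryptography` package (the names and the hex-encoded trip format are illustrative, not a spec):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# generated client side; the private key never leaves the client
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# the trip is just the raw public key, hex-encoded for display
trip = public_key.public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
).hex()

# the client signs the outgoing request body; servers verify with the trip
body = b'{"content": "hello fediverse"}'
signature = private_key.sign(body)
public_key.verify(signature, body)  # raises InvalidSignature on tampering
```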

>>31508
>Am interested in transforming this into a federated platform with "little boxes".
There are three problems with this:
1. Fediverse UI assumes one reply target per post.
2. There are caching and scaling difficulties with ActivityPub.
3. ActivityStreams are complicated.

Fortunately 1. is a small percentage of posts.
Probably less than 5% are of a form that can't be easily broken up or changed for an @mention.
In the absolute worst case you should still be able to link to posts using the URL.
Still not sure about what the UI for this should be. [^1]

Seems the standard-issue scaling solution is making everything max-age=300 JSON plus a CDN.
The Inbox is a POST endpoint, and so can't be cached, but there is sharedInbox which makes it a little easier.
Being focused on Group with anonymous users should prevent the feed- and search-related performance drains.

Sounds like a ton of work if not another complete rewrite…

Unrelated but ended up combining WikiMedia, Markdown, and old Reddit markup to make a hybrid that seems to work well:
Wikimedia: labeled external URLs, and italics
Markdown: quotes, bold, underline (sort of anyway), strike, code
old-Reddit/StackOverflow: spoilers

wikimedia_external_link = seq(
    string("[") >> url << string(" "),
    formats_without_urls.until(string("]")) << string("]"),
).combine(
    lambda url, label: LabeledURL.model_validate({"url": url, "content": label})
)

pre = (string("```") >> pretext << string("```")).map(
    lambda c: (make_code(c)).model_validate({ "content": c })
)

quote = (regex(r"(^|\n|\r)>") >> formats.until(regex(r"(\n|\r|$)"))).map(Quote)

spoiler = balanced(">!", "!<", Spoiler)
strike = balanced("~~", "~~", Strike)
italic = balanced("''", "''", Italic)
bold = balanced("**", "**", Bold)
underline = balanced("__", "__", Underline)

This is bound to be confusing, so I'm going to have a popup on first login show the format and rules.

[^1]: Might be misremembering this but seem to recall a web forum with parent-child color correspondence.
So each post is assigned two colors, one as a parent and one as a child, and if one post's child color matches another post's parent color, they are related.
This keeps the replies flat (there's only ever two color blocks) while still having a visual graph.
You could also have multiple views for chronological versus replyTo chronological sorts.
We keep post preview on hover also, which is a real quality of life improvement for non-local replies.
Then again maybe this is strictly worse than just using the SnowflakeID…

>>31540
Less gibberish, important bits:
- Keep private keys off the wire/server.
- Cache immutable the outbox.
- Use sharedInbox.
- Use a CDN for caching.

Also with https://github.com/joewlos/activitypubdantic instead of "little boxes".
And further drop any strange pagination requests to the outbox.

>>31541
Thought of a couple more ways to improve performance.
- Use workers for fanout of POST requests in server-to-server.
- Use NoSQL for storage of ActivityPub.

Am using Beanie in this 2.2, and the question is, "is this an added translation layer".
There is imperfect translation between Beanie and PyMongo Async queries.
This goes against a design principle that was working so excellently in 2.1.
And this is a little disappointing.

The plus-side is managed to make it economical to scale since it's 0.30 USD / 1m requests.

>>31541
>Cache immutable the outbox.
I wrote signature verification for the inbox and three implementations for the Person outbox.
Think I'd like it to be illegal to use an irregular cursor/page so that the cache rarely misses.
But would also like to avoid using the "skip" parameter with O(n) search of the documents.
No matter how it's implemented it seems to require a document mapping cursors to pages.

>>31548
Settled on just using a cursor and "trusting" that servers won't use irregular cursors.
Or else that it may be possible to remove services which query the origin excessively.

Further this is a singly linked implementation because Delete and Update are included.
Delete and Update require that the pages be traversed in full to render the next elements.

The first page always holds total_items modulo config.OUTBOX_PAGE_SIZE items.
This allows every subsequent page to be cached immutable so long as the linked cursors are used.

We also drop the "partOf" parameter to avoid making the full (mutable) collection.

It's all above board with the spec too.
Only downside is the mentioned "trust" required of servers.
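The page math can be sketched in a few lines; `outbox_pages` below assumes activities are stored oldest-first and pages are returned newest-first:

```python
def outbox_pages(activities, page_size):
    # the first (newest) page holds total % page_size items, so every
    # older page is a full, fixed chunk whose contents never shift as
    # new activities arrive: those pages can be cached immutably
    total = len(activities)
    rem = total % page_size
    cut = total - rem
    first_page = activities[cut:]
    full = [activities[i:i + page_size] for i in range(0, cut, page_size)]
    return [first_page] + full[::-1]
```

Appending a new activity only ever changes the first page; every linked cursor past it stays byte-identical.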

Wrote the Person following and followers endpoints.
Ended up not materializing the document to keep track of this.
So similar to the outbox this is just a query on the ActivityModel class.
The only real advantage to this is in bookkeeping.
It's slow because it's not really possible to cache these pages.
It should be under 50ms (maybe under 10ms on heavy hardware) for a page, which is probably still too slow.
There is also a precondition that at least one in any thousand follows is not undone.
This is to make it computationally feasible.

>>31567
It's a bit of a fail to write a federated server with mongodb in anything but typescript.
Guess there's going to be a 2.3 using fedify, mongodb, and typescript.
Think need to separate out the Activity logs from the materialized views.
This is to make the followers, following, and like sufficiently performant.
My impression is that the client to server protocol would make things like bump ordering difficult.
So there probably needs to be a third layer to the API for a cache efficient client.

>>31583
For future reference, the private key idea is to use forwardActivity() [^1].
And simply sign on the client side for all the relevant servers sent via a separate endpoint.
It's apparently trivial to wrap the existing fedify classes as mongodb documents with indices.
For the POST fanout use fedify/x/cfworkers [^2] including POSTing to the origin…

[^1]: https://fedify.dev/manual/inbox#forwarding-activities-to-another-server
[^2]: https://github.com/fedify-dev/fedify/pull/242

I thought of a way to simplify this project, in line with existing federated imageboards: just make the Group the owner of Notes and Articles posted anonymously to the board.

How loosely is your federation coupled and do you try to solve the problem of link rot in any way? Seeing this thread again made me want to take a crack at the personal wiki/bbs fusion idea, which i might specify a non-turing complete "link description language" for, that should enable users to do anything from replicating immutable resources within instance resource limits to controlling the display and pruning order of replies.
>>31291
>So if you make an Article instead of a Note you get unlimited length and HTML tags because you just link back to your instance in the feeds.
Is this the basis of your federation model, or do you replicate anything besides metadata?

>>32216
As far as the tool for thought first described in >>31282 goes, I'm not sure you could have the Group be the Actor writing the Notes and Articles, and simultaneously have something like user microblogs. Mainly because there would be no way to follow. These are really separate applications.

>>32223
Not if you combine everything into a comprehensive indexing system based on links to immutable data. You could totally have a series of imageboard-style posts rendered as a twitter-style thread, which you would then curate on your "user" page.

>>32232
Your first sentence is correct, but I don't understand how you could curate the "user" page with Persons relegated to a foreign concept. Doing it by IP is inaccurate, a security threat, and potentially reintroduces identity. Perhaps you could have a more sophisticated semantic tagging approach - hashtags and similar? Even automatically assigned by an LLM? What do you think?

>>32246
As i said above, secure tripcodes are entirely accurate for defining a user identity. The only problem with that would be human-readable unique names, for which there would be no way to ensure fair allocation without sacrificing anonymity. Maybe on a transient node you could have nicknames that are unassigned when anything attached to the identity has been pruned.

>>32251
A secure trip could be a Person. My understanding is you could use name for the display name with preferredUsername set to the secure trip which is "unique". Then just default the name to "Anonymous" or the user selected name.

I kept getting bogged down before in my dislike of, in effect, having two different account systems: one username-and-password pair for moderation (session based, in part for protected views), and an (optional) trip-and-password pair (form based) for posting.

An advantage of the tags over board approach is that it makes clear which Group is posting, the only one that exists locally, if you want to reach out into the rest of the fediverse for posting. This is orthogonal to your attempts to create a Person.

>>32256
>I kept getting bogged down before in my dislike of, in effect, having two different account systems: one username-and-password pair for moderation (session based, in part for protected views), and an (optional) trip-and-password pair (form based) for posting.
I guess the merger we've just been discussing is roughly that there could be session-based accounts for Actors (how you get a trip) and deletion password hashes for posts. You can delete or edit a post either by its password hash or by the session login.

>>32256
>you could use name for the display name with preferredUsername set to the secure trip which is "unique"
In the general case there are normalfags on the site, who wouldn't want to link to user pages by a tripcode.
>An advantage of the tags over board approach is that it makes clear which Group is posting, the only one that exists locally, if you want to reach out into the rest of the fediverse for posting.
In my planned approach boards are immutable, supersedable anchors that allow threads to link to them. Therefore a different instance could create a post linking to a board or a thread (anything allowing replies), and every node in sync will render it on the board in accordance with the implicit graph structure.

In this case unstructured, tagged posts should probably only be propagated on small nodes or use a local tag, which may be shown as a "timeline".

>>32258
>for normalcomards
Well, you're right this isn't ideal. Mentions would be uglier even if you had autocomplete from follows/thread actors and info onmouseover.

>immutable, supersedable anchors that allow threads to link to them

Excellent. But is it simpler and more interoperable than just having a custom of mutual following for Group actors?

>>32259
>custom of mutual following
I don't think follows should be part of the protocol. My scheme would use replication structure i.e. readers are either downstream nodes or exposed out-of-band by another node. Prose is highly redundant, so i think native lz4 compression would prevent even a several decade-old network run on top of the protocol from reaching the same multi-terabyte data sizes as usenet. If storage size was to blow up though, node admins should still be able to pivot to a more frugal replication policy, like most activitypub instances have by design.

>>32261
>I don't think follows should be part of the protocol. My scheme would use replication structure i.e. readers are either downstream nodes or exposed out-of-band by another node.
I don't fully understand this, and admittedly my ability to implement complex programs is limited, if this would in fact be one.

>native lz4 compression

This is an important point. I believe you can do this for postgres rows.

I do now have a program (technically called "arsvia-redux") which handles the creation of Person objects for login and signup using HTMX, sessions, and CSRF tokens. It needs testing, review, and revision. I'm not sure whether I really have the energy to dedicate to it.

>>32268
It seems like eventually the AI will be better even at Marxist analysis than experts. I'm very interested in making a censor. It's a bit of a stretch goal with this project, even if Llama Guard is central. I don't feel exactly qualified to make it. Censoring chauvinist content and policies seems obvious since it's so prevalent. The truth is that there are so many errors possible, and some of them are so subtle. It seems almost impossible to censor them all.

>>32279
>AI will be better at Marxist analysis than experts
It can be useful at flagging things for human review, but please don't drink the AGI koolaid. LLMs are fundamentally incapable of reliably performing tasks outside their "training" data.
>>32268
>I don't fully understand this, and admittedly my ability to implement complex programs is limited, if this would in fact be one.
To get around the limitation of immutable records, child records ("scions") will reference a field of an existing record ("stock") and on sync get added to a mutable substructure ("shadow") of the stock. This means requesting scions will only turn up records present on a particular node, but it also means networks have an interest in propagating as much metadata and inline data as possible to every node.

>>32280
>It can be useful at flagging things for human review, but please don't drink the AGI koolaid. LLMs are fundamentally incapable of reliably performing tasks outside their "training" data.
Llama Guard 4 still has an 11% false positive rate, but I think eventually this would be low enough to serve as a content filter without moderator intervention. This is actually critical to the design, because federation makes CIDR bans impractical.

Perhaps I'm getting habituated to the trolls here, but after the spam, and since the plan is to be text only, part of me is even more interested in censoring wrong thinking than illegal content. Yet I've no idea how to formulate such a censor.

The point wouldn't be to create a hugbox, where there was strict decorum and you couldn't call each other uyghur or whatever. There's still plenty of disagreement among the left, and analysis of current events to talk about. So what is the point? I guess the idea is just to allow for outreach to a larger community without being overwhelmed by wrong thinking.

This is related to the follow-follow model I proposed. If the Group follows any Actor that follows the Group, and you allow users to act as the Group with other Actors (perhaps there's an account to make a user feed, but they can still post as the Group, and these posts then show up in the Group feed), you can get outreach without having the main page filled with wrong thinking.

>>32282
>Yet I've no idea how to formulate such a censor.
My best guess is that the best way to formulate this censor is not by trying to think of an exhaustive set of rules ahead of time, but by labeling a dataset and having the machine derive the prompt, à la https://dspy.ai/#2-optimizers-tune-the-prompts-and-weights-of-your-ai-modules. This would be a lot of work.

I've made a repository for this federated rewrite, in case anyone is interested: https://codeberg.org/jugaad/arsvia-redux so far it has the basic machinery for the creation of Person objects, session management (>>32268), and the improved (>>31540) parser, with a lot of tests.

I'm thinking presently that the moderation would be a challenge. I really hate thinking about this stuff, let alone actually managing a site. With PhotoDNA, URL rules, and word filters it might be possible to remove the most offensive content. Llama Guard can catch some other junk, but no human-out-of-the-loop solution exists today, and having no accounts, plus federation (without federated CIDR lists), makes any sort of ban effectively impossible.

>>32438
Anyone have any ideas what to do about bans?


Unique IPs: 4
