
/tech/ - Technology

"Technology reveals the active relation of man to nature" - Karl Marx



You know what would actually be a useful AI? One that crawls through the text of the entire written works of an author, isolates each logical assertion that author makes, and cross-checks it against other logical assertions for contradictions, to see if that author ever contradicts themselves. Is such a thing possible using LLMs? Does such a thing already exist?
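
Roughly, I imagine the skeleton looking something like this; extract_assertions and contradicts are hypothetical stand-ins (each would really be an NLI-model or LLM call, and the cross-check is a pairwise O(n²) loop over the corpus):

from itertools import combinations

def extract_assertions(text: str) -> list[str]:
    # Hypothetical stand-in: isolate each logical assertion the
    # author makes. Crude version: treat every sentence as one.
    return [s.strip() for s in text.split(".") if s.strip()]

def contradicts(a: str, b: str) -> bool:
    # Toy stand-in: a real version would be an NLI model or LLM
    # judging whether both statements can be true at once.
    return a == "not " + b or b == "not " + a

def find_contradictions(collected_works: str) -> list[tuple[str, str]]:
    assertions = extract_assertions(collected_works)
    # Cross-check every assertion against every other one.
    return [(a, b) for a, b in combinations(assertions, 2)
            if contradicts(a, b)]

print(find_contradictions("it is raining. not it is raining."))
# [('it is raining', 'not it is raining')]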

As long as the statements never exceed a certain complexity, use implications, or require deciphering.

that's not useful for anyone

>>22148
As a personal writing aid I could find it useful, if you're trying to write a low-intelligence / pre-programmed robot character. The character would grow to have a weird worldview as it tries to reconcile previously stated / heard explicit statements.

>>22148
It's useful if you write a fairly long paper and want to make sure you never contradicted yourself.

Do these two sentences contradict each other:
<The puppet did not fit into the box because it was too big.
<The puppet did not fit into the box because it was too small.
I'm firmly of the opinion that they describe the same situation: in the first sentence "it" refers to the puppet, in the second to the box. (AI-trolling sentence pairs like the above are called Winograd sentences.) How do I know? Clearly I'm not making a deduction based on grammar here. There is a lot of grammatical ambiguity in sentences over what "it" or "he" or "this" actually refers to. And not just pronouns: words like "house" or "tree" are not unique identifiers that refer to exactly one object, "Jon" can refer to two different guys in a story, and "Robert" and "Anton" can refer to the same Mr Robert Anton Something.

People resolve the ambiguity by thinking about the situation described. Parsing a story about a woman pranking another woman can require reasoning about avoiding self-harm and about not being surprised by what you did yourself, that is, reasoning about intent and about what sort of information different people have. Current AI is very flaky at this.

>>22146
>isolates each logical assertion that author makes, and cross-checks it against other
authors, to build a database/network map of which authors agree, which disagree, and to what degree (strong agreement, weak agreement).
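
A sketch of how that map could be stored, assuming networkx is available; the authors and scores below are made up for illustration, and computing a real agreement score is the hard, LLM-assisted part:

import networkx as nx

# Hypothetical agreement scores in [-1, 1]: +1 strong agreement,
# -1 strong disagreement, derived from cross-checked assertions.
G = nx.Graph()
G.add_edge("Marx", "Engels", agreement=0.9)
G.add_edge("Marx", "Proudhon", agreement=-0.7)
G.add_edge("Engels", "Proudhon", agreement=-0.6)

for a, b, d in G.edges(data=True):
    verdict = "agree" if d["agreement"] > 0 else "disagree"
    print(f"{a} and {b} {verdict} ({d['agreement']:+.1f})")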

>>22146
It's trivially easy to design a formal language with arbitrary tokens signifying assertions and to check for contradictions with a SAT solver or something. Translating a text into that form is the thorny part, because most texts are written about the real world, and unambiguously transcribing the real world into a formal language is almost impossible, so you can always go "well, is this what the author really MEANT?"
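
Minimal sketch of the easy half, assuming the assertions have already been boiled down to propositional clauses (which, again, is the almost-impossible part). The brute-force loop is a stand-in for a real SAT solver, which you'd want at any serious scale:

from itertools import product

# Each assertion is a clause: a set of literals, e.g. {"rain", "!wet"}
# means "rain OR not wet"; a "!" prefix negates a proposition. The
# author is consistent iff the conjunction of all clauses is satisfiable.

def satisfiable(clauses):
    names = sorted({lit.lstrip("!") for c in clauses for lit in c})
    for bits in product([True, False], repeat=len(names)):
        model = dict(zip(names, bits))
        if all(any(model[l.lstrip("!")] != l.startswith("!") for l in c)
               for c in clauses):
            return True   # some assignment makes every clause true
    return False          # no assignment works: the assertions contradict

# "it rains", "rain implies wet", "it is not wet"
print(satisfiable([{"rain"}, {"!rain", "wet"}, {"!wet"}]))  # False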

>>22146
>Is such a thing possible using LLMs?
No, they don't understand the actual content of the statements, only the linguistic characteristics of the words (being generous).

>>22146
it's probably already possible if you just customize GPT's personality and guide it through each text

>>22146
Funny how Abrahamic religions rely on the validity of their religious texts so much. Really shows that they can't stand on their own and have no real practical advantage over other worldviews, it's just constant self-validation.

>>22151
This doesn't just require understanding pronouns and antecedents. It also requires understanding what the nouns actually designate, in terms of their qualities with respect to each other, and what "fit" actually describes in the relationship between the two things.
>Current AI is very flaky at this.
From my understanding it doesn't do it at all; it can only occasionally imitate it passably, out of luck and because people tend not to throw curveballs on purpose. The chatbot AI we have now isn't even supposed to be able to do this. The way people ask it questions expecting real answers fundamentally misunderstands the problem it's designed to solve. At the same time, the way people are trying to use the chatbots does show they're succeeding at what they're supposed to do, which is produce text that looks like something a human would produce. The problem here isn't that the AI sucks, but that people misunderstand what tool they're using and aren't trying to verify or fact-check anything.

>>22151
It's Wilson. Robert Anton Wilson.

AI doesn't understand the meaning of statements or words; it only does statistical analysis of recurring sequences of symbols.

Not to mention that statements made by some author are not necessarily unambiguous in their meaning, so even humans can't achieve what you're asking.

PrivateGPT has a feature like this (querying documents) and you can run it locally in a docker container. But it’s not that good with massive volumes of text.
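
For anyone who wants to try: a guess at what querying the ingested documents looks like, assuming PrivateGPT's OpenAI-style HTTP API on its default local port; the route, port, and payload fields vary between versions, so check the project docs:

import requests

# Assumption: PrivateGPT running locally (e.g. in its docker
# container) and exposing its HTTP API; adjust port/route to your setup.
resp = requests.post(
    "http://localhost:8001/v1/chat/completions",
    json={
        "messages": [{"role": "user",
                      "content": "Does the author ever contradict himself?"}],
        "use_context": True,  # answer from the ingested documents
    },
    timeout=120,
)
print(resp.json())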


