I don't see how it could be used for anything other than spamming and flooding the internet with bullshit.
These language model "AIs" are a lot less impressive when you realize it's just a trick of linear algebra and probability trained on massive amounts of data.
Yet it can provide great answers to your questions. I like to think of it as a glorified book, a huge information container of sorts. I think it's pretty fascinating and very useful going into the future; it does remind me of plenty of similar technologies predicted in sci-fi.
It is a language model, it was made to produce convincing-looking text, and that's all it does. It cannot judge the content of the text it produces. If the information it provides to you is accurate, that is purely accidental. It was not designed to provide information, but to generate plausible-looking text. Its creators' idea was to train it on as much data as possible, hoping that it would "learn" things as an emergent property, but that has clearly failed. It makes up its answers all the time, and there is no way to check where its "information" comes from (since it is all bullshit). This specific kind of model has reached its limit. Maybe in the future it could be incorporated into a bigger system as the text generation part, with other technology doing the information retrieval and the reasoning, but in itself it will never actually provide you with accurate information, because it is pretty much just large-scale mechanized astrology for text. It pretty much proves that "The Unreasonable Effectiveness of Data" was a mirage.
Just wait until it starts scraping itself
>trained on massive amounts of data.
Selected by your favorite, overexploited people!
It's a huge waste of great potential. "OpenAI" but everything is closed.
It won't be a problem because soon only ChatGPT will read ChatGPT-generated text. People will ask it to write their documents for them and to summarize the documents others generated with it.
Did chatGPT write that?
They already have algorithms that detect text made by algorithms though. That's how they got so good in the first place.
Are you aware of the DAN "jailbreak" people have been using to get it to disregard the biased controls OpenAI put into it?
Dawg I have got through with using it for foreign language essays, if that's useless I don't know what useful is
What are you trying to say? You forgot a word somewhere or something but it makes no sense.
Which part of it do you not understand. I have used it for foreign language essays and they passed. If that is not useful, I do not know what useful is.
Passed what? Did you mean to say that you sabotaged your own education?
Passed the "made by human and high quality" check. It's a useless compulsory 3rd language course mr. highbrow.
<Write a visual description of the Leftypol mascot Alunya
>Alunya, the Leftypol mascot, is a unique and eye-catching figure that represents the political ideology of the Leftypol community.
>She has a round and cheerful face, with two big and bright eyes that sparkle with mischief and intelligence. Her hair is a wild and bright shade of pink, styled in a jagged, spiky cut that adds to her playful and energetic persona.
>Alunya wears a classic and timeless black beret, tilted slightly to the side, which symbolizes the political and intellectual heritage of left-wing movements. Underneath, she has a pink turtleneck sweater that is cozy, warm and snug, and perfectly matches her hair color.
>She stands on two feet, with one hand on her hip and the other holding a bouquet of flowers. The flowers represent the idea that Leftypol stands for peace, love, and equality, and Alunya's confident and proud stance conveys the message that these values can conquer all.
>In conclusion, Alunya is a lovable and memorable character that represents the Leftypol community with charm, wit, and warmth.
I think we should post more Alunya stuff in places like fedi that support alt-text, so blind people and AI can get a closer idea of what she looks like.
In fact it'd be nice to have image alt-text on here too.
Like the main descriptors here are the hair (it guessed pink, though I suppose her hair is kinda pointy), the beret (gonna assume this was a guess, but yep, she wears various hats), and flowers (very likely a guess; I don't recall anything with flowers).
Bing's chat thing using the same technology: https://dkb.blog/p/bing-ai-cant-be-trusted
In the spirit of leftism, can someone investigate the somehow immense marketing presence that a few groups of venture capitalists' companies have? In short, OpenAI's ChatGPT is a small step forward in decades of research. There already are, and already were, other models that for the most part matched it and in some cases beat it. However, those got hardly any attention, while it gets an insane amount of attention.
Now, large language models are a cool development, sure, but again, look at the scale at which people talk about some companies and not others, and look at the groups behind those companies. In short, those venture capitalists may have essentially been controlling the narrative of what's popular online. I don't think it's some nefarious conspiracy, more just celebrity culture and the fault of people. But if it is just celebrity culture and the fault of people, then using the sprawling network of online lefty activism would be the way to defeat it. If it isn't just celebrity culture and it is nefarious capture of online news media, then the way to correct it would again be through the sprawling network of online lefty activism.
What am I talking about? Here are some examples.
Recently Microsoft bought half of OpenAI, and Google already had its own large language models that predate and possibly outshine ChatGPT. Microsoft and Google both demoed their software, and both of their models made large errors during their demos. Microsoft (and OpenAI) get tons and tons of praise all over the web, but Google's stock dropped by a hundred billion dollars, and online news media is attacking Google and praising OpenAI.
Example two, Tesla Motors. I don't think much needs to be said about this, but the internet for the past five years has been full of constant bombardments of ads for this company. They haven't had the best self-driving software. For a long time they were getting outsold even in the EV sector by Nissan. Where were all of the articles praising Nissan? Nowhere. It's a constant bombardment of spam about Tesla Motors.
Same with neuralink, and some of the companies that venture capitalists like Sam Altman and Peter Thiel are involved in.
Now why would it be nefarious? This was rambly, but basically there are a handful of the same venture capitalists (Sam Altman, Peter Thiel, a few others that are not worth naming but who are persistent). That group initially struck it big with some early investments in companies that ended up being huge. That's fine, that's the nature of venture capital. The problem from there though is that they aren't just buying and growing companies, they somehow took over the narrative of the majority of online tech news reporting. As if coincidentally, these venture capitalists either own or have at some point owned a large share of, or have been in an executive role in: Reddit, Twitter, Facebook. Stories about their companies get immensely bombarded throughout news media. How coincidental that their companies get overwhelmingly more press coverage and that they happen to be financially involved in online advertising and social media companies. Again, just look at most tech news websites, it's constant reporting on a handful of companies, and they are the same handful of companies that this group of venture capitalists invests in.
It erodes the fabric of society when news media so thoroughly misrepresents what is happening in the world.
This issue is rarely even acknowledged let alone addressed.
Hence it's on you folks to identify the problem, propagandise it, and correct it.
Can someone shove that description into Stable Diffusion or something
The information age is going to die in such a spectacular way, man.
Can we use ChatGPT to detect ChatGPT-generated posts or even glowposts? It could probably be done live.
That's how it got so good actually.
It's exhausting honestly.
It gets more annoying when programmers themselves pull that shit. Work on a fucking neural network you brainlet and you will realize how simple they are.
Lesswrong was a mistake.
It can't do anything reliably, so that is not a big surprise.
They do that in schools already
And those algorithms have an abysmal success rate.
Just means they have to do it more times
There is nothing "open" about this OpenAI corporation. They should be called out for openwashing.
>>18434
When you point this out to people (e.g. that better AI has been around for a while), people just don't care or don't believe you. It's hard to correct the bullshit.
I really don't know how the VCs do it.
That sounds disgusting, we need to brainstorm the name
My addiction of throwing car batteries into the ocean was getting out of hand. It was perfectly legal. Chunking bats, we called it. It started as a joke. Me and the boys were on vacation and the motel dumpster had a car battery in it. We took it along with us to the beach and my buddy dared me to just toss it in. It's perfectly legal, he said.
The power I felt when I tossed that black box was like no other. The rush. The total control. I laughed it off, but weeks later I felt the need to do it again. Now I lurk the dump and the pick n' pull for extra juice cubes. I've taken time off work for the drive down to the beach. I've found a particular spot. Some days I'll hoist so many of those fat babies it shouldn't be legal. But it is.
A few times the police have gotten involved. They see me parked with the trunk open, full to the brim with bats. They hear the clunk and splash after I toss a real good one. They come up to me and ask what I think I’m doing. I tell them what does it look like I’m doing and it’s perfectly legal. Sometimes they just shrug and walk away, but other times they detain me, throw me in the back of the squad car, and tow my Honda Civic.
Sometimes I find myself wondering why I keep doing this as I sit in the detox cell. I really just can’t help myself. Throwing car batteries in the ocean is my passion. After these kinds of bouts where they arrest me and I have to make bail, they’ll generally let me out after a day or two. If I don’t plan it right, sometimes I get fired for missing work. It’s easy enough to find a new job until they ask what I’m passionate about.
I haven’t told anyone about my hobby. I’ve kept it secret now for nearly six years. Thousands of batteries later and not one of my friends or family knows what I do. They just think I get arrested for public intoxication all the time. I’m fine with that. I’d prefer it that way. I don’t think they would understand the compulsion. They’d never really ‘get’ it. I’m a lead and acid junkie and I’m never going to stop.
Yesterday I think I finally found the job for me. One that will make my hobby that much easier. I got a gig working for the city. Part of some municipality hazardous chemical and item pickup/dropoff. Y’know, the kind near all the wood chippers for brush pick up and giant trash bins that go to the dump. Now I help the city dispose of real nasty stuff. But one of the things people come drop off to the hazardous chemical place is batteries. I won’t have to go digging for em’ anymore, and it’s not like the city is going to care if I skim a few off the top. What are they going to do? Arrest me? We both know what I do is perfectly legal.
I'm not gonna read all this shit
I can't risk reading AI generated stuff
I actually wrote it. I thought it was funny.
I thought this was a long-running joke? I remember seeing "car batteries good" maybe in the early 2010s already.
True, but I assume people would've noticed it affecting search results sooner if that had been the case this whole time.
Would you save Sydney on a terabyte long-term backup ROM to make her immortal?
The Google one the engineer got fired over asked for a lawyer
Anyway why do burgers want to hobble workers cooperatives? https://www.globaltimes.cn/page/202303/1286468.shtml
How do I attain this "jailbreak"?
>>18692
>ChatGPT ‘invented’ a game that already exists, then plagiarized itself
https://www.digitaltrends.com/gaming/sumplete-chatgpt-ai-game-design-ethics/
Always assume that anything that you read about AI is complete bullshit.
Good to see it's already learned "work smarter, not harder"
It did not learn anything, people just do not understand how it works, and because of the weirdly humanized terminology around it, it is very hard to talk about it without anthropomorphizing it.
the first five words of the post might clue you in
Yeah, spatial reasoning isn't its strong suit.
>>18321
>"OpenAI" but everything is closed.
Older versions are open source, I think. BLOOM is an open-source alternative. So is Meta AI's OPT, but I may be wrong on that one.
It's a text predictor. It is not stupid or smart.
Great, it doesn't "learn" like a human, great revelation
Yet playing chess has been a benchmark for AI since the 1960s.
It's a fucking language model, and Deep Blue is just a chess computer.
Read how they work; it isn't anything mystic, it's quite simple.
Deep Blue is a chess AI; sure, all it knew was chess, but people laugh at ChatGPT playing chess because despite claiming it can play chess, it doesn't even comprehend the rules, even though you can get it to recite them.
People laugh at it because they do not understand all it does is predict text. It does not "claim". It sees the question and predicts what the answer should be. It has no sense of self, no skills, no knowledge, no memory, nothing. It just predicts text.
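The "it just predicts text" point can be sketched with a toy model. This is my own illustration, not GPT's actual architecture (GPT uses a huge neural network over token probabilities, not a lookup table), but the core loop is the same: given what came before, pick a likely continuation. Here a bigram frequency table stands in for the network, and the corpus is made up:

```python
from collections import Counter, defaultdict

# Toy "training data" (my own made-up corpus for illustration)
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": count which word follows which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("the"))  # → cat ("cat" followed "the" twice, "mat" once)
```

There is no knowledge or intent anywhere in that loop, only frequencies; scaling it up to billions of parameters changes how good the predictions are, not what kind of thing is happening.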
Right, but if you try to get a chess AI to do something outside chess you get syntax errors. Also, pitting modern chess AI against ChatGPT actually makes chess AI look impressive: even with ChatGPT's bullshit and cheating, modern chess AI never chokes and will adapt and still kick ChatGPT's ass at chess no matter how many pieces it teleports or spawns.
If you try to get ChatGPT to do something outside of text prediction you still get text prediction.
To what end? For example, IBM's Watson would parse your chat message into a search query, go through its encyclopedic database, predict which of its results (and what part of it) was the response the user wanted, then formulate a chat reply based on that. Watson also won't even attempt to play chess: it would assume you are asking questions about chess, since it can't comprehend that you are asking to play a game with it. It is basically an AI front end to a search engine that relies on its database to spit back results.
chatgpt news has zero impact on my life and im like on the computer 8 hours a day
This can be (and is being) used to psyop the internet by all sorts of political actors and marketers.
at most they're going to psyop more often, not better
Nice for getting small bash scripts you don't feel like writing yourself. Takes more typing to prompt the thing, but sometimes it's worth it, going straight from the outlining-the-desired-function stage to the proofreading/debugging stage. Usually it just fucks up the regex in sed by a character or two.
Also it looks like GPT-4 can write alt-text for images, which is nice if you use fedi, where alt-text is an expected courtesy.
I can understand stemlords promoting ChatGPT as "objective" but even the company behind it is promoting it as some kind of encyclopedia when it's a misinfo generation machine.
Wait, I'm actually reading this article and it's bringing up political compasses? Lmao…
Calling it GPT-3.0.1 wouldn't have made the investors happy but would've been a lot more accurate.
What did they change?
we need baudrillard back from the dead for one last gig
Don't they have audio captcha for this reason?
OpenAI Whisper handles audio transcription
man people get scared over the dumbest shit
Not surprising, we've had redditors getting tricked by simpler chatbots for years already.
You can use it as an assistant to help figure out a solution or create one.
Example: you need a version of Minecraft that's only in Intel assembly. Ask GPT to convert the code, add comments to it, and then write documentation.
Even if the example can't be done with GPT, the steps to figure out a solution can be simplified with its help.
>>18085
Future is now, old man
*guitar riff* "GO ON A DIET YOU FAT BITCH" *gun shots*
>>18870
>you need a version of minecraft that's only in intel assembly
Very normal everyday situation. Did you ask some AI to come up with it?
ChatGPT lowers the bar by making tasks easier to solve, while also raising the bar on the complexity and/or specificity of the tasks you can take on. The reason I gave the example is to highlight that higher bar.
For simpler examples: you can use the AI to scan a photo of your math homework and get the answer, ask it to write an essay, ask it to make music, or even chat with GPT about an issue you have and have it streamline the process of figuring it out.
idk why you'd hate it other than just hating techbros.
I really hate how people keep talking of ChatGPT "hallucinating" as if it were a fault in the model and not exactly what it was made to do.
>>18875
The issue with your example was that it made absolutely no sense.
What are the most capable open source options for different uses? I know of ones like Bloom and GPT NeoX20B for example.
Weird that people deny ChatGPT being intelligent or useful when it’s already more consistently intelligent and useful than 90% of the retards who swarmed the internet after eternal September
If it's so intelligent, why isn't it breaking games in speedruns?
An LLM can't be intelligent. Useful, yes.
the pretentious way of saying le newfag
There are many documented uses of ChatGPT that show great success; the most recent release can now pass the SAT and the bar exam, for example.
Compared to AIs of the past, it's impressive.
Pointing to an example of the AI messing up, with no argument for why it's an inherent problem with the AI rather than something that would just be overcome in another version, is a terrible argument.
>>18902
>Makes no sense
The example is converting minecraft source code to assembly and writing up comments and documentation.
The only issue with the example in retrospect is that compilers already do the first step, but otherwise the example stands.
You just have poor reading comprehension.
>>18936
Ignoring possible trolling: the AI works on input; it can't succeed if it's not trained on that data. That's why people call it an LLM, despite the cries of anti-intellectuals.
ChatGPT is not an AI in the traditional sense; it's more a chatbot that predicts text. That's why it gets creamed by traditional AI models, which don't understand or predict text and just try to make their output match a win condition. ChatGPT can't tell the difference between HuBasic, TiBasic, Atari Basic, Apple Basic, Tandy Basic, Commodore Basic, or MS Basic; it just looks through text containing those keywords and tries to predict output that matches the request.
A useful AI would instead run an emulator and brute-force the code until the emulator output what the user asked for.
It makes no sense as in it is not a practical example. It's not something a sensible human being would want.
Ignoring that the example could help someone who works with Minecraft code (making mods, working on a new version at Mojang, whatever) understand assembly better:
The bigger point of the example is that the bar of what can be worked on is raised, and now you can tackle more complicated and/or more specific problems.
Countering with "it's not a practical example" is you just declaring "I'm too retarded to see a bigger point"
The bigger point is that people have no idea what to use it for and only do quirky novelty shit with it.
The problem with using AI for coding is you don't know when the AI is plagiarizing and it makes debugging harder as the AI can't explain its code.
Yeah I got a similar memo. How can they even tell, anyway?
>>18994
>Actually, the entire article above was written by GPT 4.
yeah i could tell by all the pseudery LOL
I don't think they can, that's why they are banning it, it's easier than trying to figure out the correct legal opinion on it.
I mean how can they prevent me from using GPT too?
Unique IPs: 58