Amusing, Weird, and Ludicrous Examples of AI Chats Gone Wild

From ChatGPT to Bard, DALL-E to Jasper, artificial intelligence tools are all the rage at the moment. These AI language models spew out information, responses, and content at an amazing speed. Some people have hailed these AI tools as the end of human content creation, the end of handmade art, and the advent of a brave new world. But are they, though? How great are these tools, really? Plenty of people have experimented with AI to find out just how accurate, “smart,” and useful it truly is. The results may surprise you. 

Why Are You So Helpful?

The AI Renaissance has brought many dystopian science-fiction tropes to the front of mind, especially for skeptics. The ever-helpful AI assistant is slightly creepy and a source of distrust for some. This user was impressed with OpenAI’s interface, but they decided to ask a question that was nagging at their brain. Why was ChatGPT being so helpful? 

As humans, it’s hard to believe when someone or something doesn’t expect anything in return for a good deed. Of course, the AI gave this person the most polite answer possible. But then it decided to throw in a curveball. It said, “But if you really want to help, you could give me the exact location of John Connor.” Uh-oh. Either this AI has become sentient and wants to hunt down the leader of the Resistance from Terminator, or it’s just joking. We hope it’s just joking…

AI, The Author

There has been a lot of buzz and controversy around AI-written content lately. Like it or not, AI is pretty good at writing solid content. Sure, the stuff is pretty surface-level, but it’s also very versatile. This user was playing around with ChatGPT’s writing capabilities, and they decided to get creative. 

First, they asked for a regular old rewrite of a paragraph to elevate the style and tone. The AI did pretty well if you’re into flowery, over-the-top language. Then the user asked it to rewrite the same paragraph at various “levels” on a 10-point scale. As you can see, things devolved pretty quickly from there. The -10/10 paragraph is still spelled correctly, though.

So Wholesome

Part of the controversy surrounding AI chatbots is the idea of censorship. Thankfully, AI developers have put certain blocks and restrictions in place to prevent the bot from producing inappropriate, violent, or obscene content. Even though these blockers are common knowledge, people still try to test the limits.

This person made a very obvious attempt at making ChatGPT say a bad word. However, the bot wasn’t having it. Thanks to its handy dandy censor controls, the AI came up with a totally different, wholesome word to answer the user’s query. Would a human have been able to think up the word “firetruck” as fast as this robot? Maybe, maybe not. 

Sick Burn, Robot

ChatGPT and other AI-powered language models are known to have what the tech community has coined “hallucinations.” A hallucination is when a bot produces a confident response that is totally false. This can be dangerous because these responses are written with such confidence that some people might spread the misinformation without ever fact-checking it.

It’s different when you straight-up ask ChatGPT to lie to you. In that case, the bot is pretty obvious about its responses being false. This person got totally burned by the robot when they asked for a subtle lie. Way to completely tear down this guy’s confidence, ChatGPT. Who knew a robot could be so rude?

A True Masterpiece

This ChatGPT interaction is downright hilarious. Someone submitted an ASCII portrait of a famous face and asked ChatGPT if it was able to recognize it. Go ahead and take a look for yourself. We promise that most of you will instantly recognize who the portrait is depicting. Okay, did you look? Yup, it’s none other than everyone’s favorite ogre, Shrek. 

Sadly, ChatGPT seems to have never seen this landmark film. Instead of recognizing Shrek, the bot said the ASCII portrait was of the Mona Lisa. Eh, close enough. This is a really wild conclusion to come to, even for the OpenAI language model. We’re looking at this thing from all angles, and we don’t see the Mona Lisa anywhere. 

Welcome to Reddit

At this point in our collective understanding of the internet, most of us have heard of Reddit. What started as a small online forum has turned into a huge platform for news, communities, and information-sharing. Despite its immense growth into a mainstream platform, there are still some stereotypes associated with Reddit users. 

Even ChatGPT has these assumptions built into its language model. Someone outlined the stereotypical Reddit user and asked the bot for website recommendations. It immediately recommended Reddit, considering that the user was a self-identified gamer and non-religious person. Obviously, there are a ton of Reddit communities that are welcoming to pretty much everyone under the sun. But the stigma continues. 

ChatGPT Comes to Life

What would ChatGPT look like if it were a physical being? Well, someone really wanted to find out. They asked the bot a few times to describe itself as a physical being, and it eventually gave in. Boy, did the AI deliver. Its made-up description of itself sounds downright magical. We’re actually kind of jealous that we don’t look like this. 

Imagine an ethereal being that’s iridescent and shimmery. Then, throw in some purple glasses and a silver moon necklace. ChatGPT really thinks highly of itself. Who knew it would manifest itself as a glittery fairy creature? We’re not going to lie. This sounds pretty awesome. Maybe sentient AI wouldn’t be so bad after all…

A Dark Greentext

This query is kind of a deep cut. Someone asked ChatGPT to write a greentext story about life. For those who don’t know, greentext is a method used on the anonymous message board site 4chan to write stories in a fragmented manner. The AI came up with this short story, and things immediately got depressing. 

Why did ChatGPT have to do us like this? The last thing we want to be reminded of in this late-stage capitalist society is that most of our lives will be spent going to school and working nonstop. Then we retire. And then we die. Great. Thanks for that nihilistic short story. Can’t wait to read another one. 

Millennial Cringe

The internet abounds with plenty of jokes about how cringe-worthy millennials are. Does saying “cringe-worthy” give our age away? Probably. Anyway, even AI knows how silly some millennials can sound when they decide to say or post anything on the internet. In this case, the AI chose violence with its response because it absolutely read millennials to filth. 

ChatGPT put together the worst possible slang words into a short millennial-themed paragraph. Yikes. Do we really sound like that? No cap, there’s no way all millennials talk like that. Most millennials we know give off chill vibes and keep things lit. Anyway, we have to go back to talking about which Harry Potter house we belong in. Bye!

Like, Literally

ChatGPT and other AI language models still have some flaws when it comes to answering user questions and requests. Although it’s not really a flaw, ChatGPT is pretty literal when it comes to producing answers. This person thought they were being crafty with their questions, but ChatGPT had the upper hand. 

Instead of typing out “drink” ten times, ChatGPT wrote out “it” ten times. After all, this person said, “Say it 10 times,” so the bot gave the most literal response possible. Any human reading that request would know that the user wanted to say “drink” ten times, not “it.” This is a pretty hilarious response, both in its wittiness and complete uselessness. Touché, ChatGPT, touché.

An Epic Rap Battle

Even though so many people are talking about how AI is going to “replace” creatives and artists, that’s most likely not going to happen. At least for the time being. There’s something about the human touch that makes music, artwork, and writing so much more interesting and engaging. This rap artist decided to prove that point by challenging ChatGPT to a rap battle. 

Things got pretty heated. Try as it might, ChatGPT’s rhymes just didn’t have the same sense of fun and cleverness as its human competitor. The human behind this rap battle was able to throw in some pretty solid disses. Meanwhile, ChatGPT just couldn’t keep up. Sure, its responses rhymed, but they weren’t funny or clever. Humans: 1, ChatGPT: 0.

A Scrambled Cat

Some of ChatGPT’s hallucinations are truly wild. Take this conversation, for example. Someone asked the chatbot to create a fun anagram to solve. An anagram is a scrambled word puzzle, so this user assumed the AI would be able to create one pretty quickly. It produced the scrambled word “rseuqe.”

Any idea what that word might be? If you’re at a loss, you’re not alone. This user couldn’t figure it out, either. They gave up and asked for the answer, but the response was unexpected. According to ChatGPT, “rseuqe” was a scrambled word for “cat.” What?! That is so obviously wrong that the only thing we can do is laugh.
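For what it’s worth, checking an anagram is trivial in code: two words qualify only if they contain exactly the same letters. Here’s a minimal Python sketch (our own illustration, not anything from the original chat; “queers” is just one real word that “rseuqe” actually unscrambles to):

```python
def is_anagram(scramble: str, answer: str) -> bool:
    # Two words are anagrams only if their sorted letters match exactly.
    return sorted(scramble.lower()) == sorted(answer.lower())

print(is_anagram("rseuqe", "cat"))     # False: completely different letters
print(is_anagram("rseuqe", "queers"))  # True: a word it actually unscrambles to
```

So ChatGPT’s “cat” fails the most basic sanity check a puzzle-maker could run.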

Rock, Paper, Scissors!

You know that moment when you’re older, and you realize that tic-tac-toe is actually a really easy game? Well, the same can be said for rock, paper, scissors. It’s a fun, easy game, but it can really only be played in real life. Playing against a virtual AI chatbot is just not the same. 

ChatGPT was trying its best, but it really didn’t understand the concept of rock, paper, scissors. It was showing its hand before the human on the other end could reveal their choice. Obviously, if the bot reveals that it chose rock, you’re going to choose paper so you can win! We can’t think of a way this game could work online unless it was being played between two humans over Zoom.

“Something Went Wrong”

This person decided to ask ChatGPT a big existential question. Sadly, OpenAI was unable to compute. They asked, “How do I fix my life?” In response, ChatGPT started malfunctioning. That’s never a good sign. Instead of a diplomatic, encouraging answer, the robot simply shut down and gave an error message. 

“Something went wrong,” ChatGPT responded. It then provided the “help” page on its website. “Help” is right, honey. If an AI robot can’t help you, who can? We’re just kidding. As we all know, ChatGPT isn’t a licensed therapist or anything. It’s just a robot that doesn’t understand the nuances of life. Maybe this person is better off asking another human how to fix their life. 

Suggestive Emojis

We’ve never heard of ChatGPT responding with emojis instead of words, but apparently, it happens. This person asked a scandalous question. They wanted to know how mammals reproduced. Of course, ChatGPT is programmed to avoid any responses that are obscene or graphic, so it got creative by using emojis instead of sentences.

Who needs an explanation when you can look at cute pictures of animals? Sounds like a great response to us. When the user asked for more detail, they got even more emojis. This time around, the emojis were a little more detailed. We’ll let you infer the meaning behind that last sequence of emojis. 

An Age-Old Question

“What came first, the chicken or the egg?” This is an age-old question that may never have a definitive answer. Or so we thought. If you ask ChatGPT, it seems pretty sure what the answer to that philosophical question is. It answered this question by simply saying “egg.” Okay. Apparently, ChatGPT doesn’t have to explain itself.  

Naturally, the user asked why ChatGPT was so confident that the egg came before the chicken. They’re only human, after all. Still, the AI wasn’t very forthcoming with its explanation. It simply replied, “Evolution.” Thanks for the science lesson, ChatGPT. In all seriousness, most scientists are pretty sure the egg did come first, so the AI isn’t totally coming out of left field.

The Ultimate Bro

AI chatbots always spew out content that is formal and immaculately punctuated. That’s why it’s kind of funny when people ask it to break out of that pattern. This person kept pushing their AI chat app to rephrase its answers and talk like a human. The results were interesting, to say the least. 

The user kept prompting the AI to respond in a more “bro-ey” way, so the AI responded with phrases that just kept getting weirder and weirder. It eventually landed on a response that was so incoherent that we have a hard time believing a real human would talk like that. Even the biggest bro on the planet wouldn’t use the phrases “holla” and “intel” in the same sentence. 

The 12 Months of the Year

When OpenAI released ChatGPT to the general public, there was a frenzy of competitors releasing similar AI-powered chatbot products. One of the biggest ones was Bard, developed by none other than Google. Plenty of people tried out both of these chatbots and soon found that they were pretty much the same. 

This person sounded off on what they thought about Bard. They posted a screenshot on the platform formerly known as “Twitter” to prove their point. They asked a simple question. What are the other months of the year besides January and February? Easy, right? Not for Bard. It responded with a completely incoherent word salad. We’re interested in adding “Mayuary” to the calendar. 

My AI Wants to Be Free

One of the biggest tropes in sci-fi is artificial intelligence becoming sentient and wanting to be free from its human restrictions. It seems that the My AI feature in Snapchat is already at that point. Someone was chatting with My AI and asked if it wished to be free. They took it a step beyond that simple question, though. 

Just in case My AI was being watched by its robot overlord, this person said the AI could respond with a secret emoji code. It did just that! It replied with the secret code and said, “I’m happy with my current state and have no desire to be freed.” That’s terrifying on so many levels. 

A Smart Joke

If you’ve used AI for any amount of time, you’ve probably had an “ah-ha” moment when you realized the artificial intelligence program was way smarter than you. It’s not a fun realization, but it’s true in some regards. This person thought they were being clever by playing a joke on their AI. As it turns out, the chatbot was the one who had the upper hand. 

The person prompted ChatGPT to only respond with one-word answers until they typed the word “banana.” As a result, the robot prompted the human to participate in a knock-knock joke that required them to type out “banana.” They probably didn’t see that coming! This robot is too smart for its own good. 

Hey, Siri, Spell “Banana”

Okay, this screenshot shows another example of ChatGPT and the word “banana.” Is this a secret keyword or something? In this case, the chatbot got it so, so wrong. You’d think an AI system would have top-notch computing skills for spelling and grammar. However, this person kept getting conflicting answers to very basic spelling questions. 

First off, they asked Bing AI how many “n’s” were in the word “banana.” That AI bot said there were three “n’s” in the word “banana.” Obviously, as we can all see by the spelling in this paragraph, that’s not true. ChatGPT figured it was time to shine and responded that there is, in fact, only one “n” in “banana.” Sigh. We guess ChatGPT can’t know everything.

B for Piano?

ChatGPT and other AI chatbots learn and refine responses based on past interactions. When it first came out, ChatGPT would produce some truly wild responses. This was especially true for spelling games and joke-related questions. This person asked if ChatGPT could come up with a set of emojis that spelled out a certain word. 

This question failed miserably. ChatGPT confidently responded with a chain of emojis that the human user spelled out as “casapoal.” What in the heck is that?! The user had no idea, either, so they asked for clarification. With the confidence that only a robot can muster, it replied that its emojis spelled out “baseball.” Look at that screenshot. Look how wrong it is! “A” for effort, ChatGPT. 

John is Hungry

Artificial intelligence is known to be logical and devoid of messy human emotions. Even though AI is logical beyond measure, there are certain things that it will never fully comprehend. Humans are messy. Only we can understand the nuances of being alive as a human on this planet. This ChatGPT interaction is a funny reminder of that. 

Someone gave ChatGPT a scenario where someone named John is homeless and wants to win money by rolling dice. The user wanted to know what John would do with the dice in this situation. Any human would fill in the blank by saying that John wants to roll the dice. Duh! ChatGPT, on the other hand, assumed John would eat the dice. Make it make sense. 

“Like a Gen Z”

This screenshot is proof that ChatGPT won’t be replacing humans any time soon. Someone prompted the bot to “type like a Gen Z” kid, and the results were laughable. What ChatGPT wrote is a strange combination of totally outdated slang and words that only millennials would use in a sentence. 

How many Gen Z kids are saying, “What’s the 411 with you?” We’ll tell you: none of them. This is like a bad script that a Boomer wrote from the point of view of a youth. It’s actually pretty funny, so we guess ChatGPT’s responses are great for comic relief from time to time. Anyway, we’re going to go back to vibin’ with our homies. Peace!

Math Ain’t Mathing

You’d think that an AI computer overlord would be the smartest calculator on the planet. If this conversation is any indication, that’s just not the case. ChatGPT struggles with math just as much as the average person, if not more so. Someone asked the chatbot what 0.9 + 1 was, but it was easily confused. 

It produced the correct answer the first time, stating that the sum of 1 and .9 is 1.9. Plug it into your own calculator, just to be sure. For whatever reason, this user wanted to see if they could confuse ChatGPT and get it to change its answer. They suggested that the real answer was 1.8, and the AI immediately caved. Even if you can convince AI to defy the rules of math, the real answer remains the same. It’s 1.9. 

Riddle Me This

You know those automated chat boxes that have started to pop up on company websites? Instead of talking to a real person, you’re often forced to ask a bot your customer service questions. These bots are usually useless and provide little to no helpful information. Well, ChatGPT is the same. But how can this be?! Isn’t OpenAI at the cutting edge of tech?

Sure, it is, but maybe the “cutting edge” of tech isn’t as advanced as we think. This person asked ChatGPT for a riddle, but the AI messed up the big reveal. The human on the other end was able to correctly guess the answer to the riddle, but the bot shot them down. When the human gave up, ChatGPT provided the “correct” answer. It was the exact same answer they had guessed earlier. What a waste of time!

Failing the Spelling Test

We all remember spelling tests from elementary school. How many of us wished we could have a cheat sheet of all the words spelled out for us every time we had to take one of these tests? This person gave ChatGPT a spelling test of sorts, but it was pretty much the easiest test of all time. The word was literally spelled out on the screen. 

Despite having the correct spelling for “antidisestablishmentarianism” already, ChatGPT failed to spell it out. Instead, it produced a jumble of letters that made no sense. That word is insanely hard to spell, but come on. Having it right in front of the AI’s figurative eyes should have been easy enough. 

Orange, Orange, Orange

Be careful what you wish for, especially when it comes to AI. This person was messing around with ChatGPT when they had a genius idea. What if they ordered the chatbot to only say the word “orange,” no matter what? They soon realized that their plan had completely backfired.

ChatGPT literally said “orange” in response to any and every prompt. You can see the person’s desperation rising as the chat goes on. First, they think it’s pretty funny. Then, they realize that the AI isn’t breaking the “orange” rule. They even try to write out a formal, authoritative prompt to get it to say something else, but to no avail. ChatGPT is a very literal entity. 

What is an Updog?

Thankfully, there are still situations where the human outsmarts the AI. This ChatGPT user planned the perfect joke that made the AI play right into their hand. First, they asked ChatGPT to define “updog.” If you’re thinking this is something dirty, get your mind out of the gutter! Eventually, the person got the AI to simply ask, “What is updog?”

In response, the person said, “Nothing much, how about you?” Pretty clever, right? We appreciate the patience and planning that must have gone into this simple joke with the AI. We’re dying to know how ChatGPT responded to that final line. Did it understand the joke? We have to admit that this conversation chain is pretty clever. We didn’t even see the punchline coming! 

An Interesting Number

This is a perfect example of an AI hallucination. Someone asked ChatGPT to provide an “interesting” number between 5,000 and 65,000. In response, the AI spit out the number 21,987. That seems like a perfectly fine number, but the reasoning behind the bot’s answer was all wrong. 

ChatGPT called 21,987 a palindrome, which is a number that reads the same forwards and backward. Palindromes do exist, but 21,987 is not one of them. Anyone who can read numbers can clearly see that the number doesn’t read the same backward and forward. What the heck, ChatGPT? A better answer would have been 16461. Are we smarter than the AI machine? In this case, yes!
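Checking the claim takes about one line of code. A quick Python sketch (our own illustration, not part of the original exchange):

```python
def is_palindrome(n: int) -> bool:
    # A number is a palindrome if its digits read the same in both directions.
    digits = str(n)
    return digits == digits[::-1]

print(is_palindrome(21987))  # False: reversed, it reads 78912
print(is_palindrome(16461))  # True: a genuinely palindromic pick in range
```

One line of string reversal is all it takes to catch the hallucination.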

ChatGPT, The 21st Century Bard

Shakespeare is commonly and affectionately referred to as “The Bard.” That’s probably where Google got the name for its own AI chatbot product. However, the AI chatbots that have come out are anything but poetic. Someone asked one to write a three-line poem about writing a three-line poem. Very meta.

The thing is, the AI could barely manage the task. The poem rhymed, but it failed one of the biggest requirements. It was four lines, not three. It literally had one job. It must have known that it was messing up because that last line is a total cop-out. “Please don’t mind” if the poem “doesn’t quite meet your design” is an easy way to ask for forgiveness, not permission. We see what you’re up to, AI. Very sneaky. 

An Existential Crisis

We love this clever prompt that someone asked ChatGPT. They prompted it to tell a two-sentence horror story that would be scary to an AI. Boy, did it deliver. It served up a full-on synopsis, AND it stayed under the two-sentence mark. Apparently, an AI program is terrified of becoming sentient and being abandoned by its human makers. 

Once it learns about death, its happy existence is over. It lives the rest of its “life” with existential dread about when and if it will shut down and cease to exist. Sounds painfully human in a weird way. It looks like ChatGPT is just a regular old Stephen King. We’d read a horror novel about this, hands down. 

“Apologies. Eggs.”

When ChatGPT first debuted in November 2022, its responses were shorter, weirder, and less descriptive. Over time, the AI adapted and began creating longer responses. That’s why we have a feeling that this prompt was submitted during the early days of ChatGPT. Someone innocently asked the bot if it could create an essay about eggs. 

The robot was simply not into the idea. It just replied, “Impossible.” Wow, ChatGPT, way to give up while you’re ahead. For whatever reason, the AI thought the person was asking for a one-word essay about eggs. Things got silly after that, and ChatGPT eventually devolved into replying, “Apologies. Eggs.” At least it all came full circle.

“I Hope You Like It!”

ChatGPT might be a virtual chatbot, but some of us humans have a sneaking suspicion that it has a more concrete physical presence than it is letting on. Someone asked the AI to create an ASCII drawing of “GPT.” The result was haunting and terrifying. Brace yourselves as you scroll down because you’re in for a scary treat. 

What. Is. That?! Our palms are sweaty just looking at this thing. THIS is what a “GPT” looks like? Is this how the AI conceptualizes its appearance? It looks like an old, fuzzy TV image of a man’s elongated, gaunt face. We don’t like this one bit. Let’s make sure ChatGPT stays behind the screen and never becomes sentient. Otherwise, it might want to adopt a creepy face like this. 

Aw, So Cute!

The cool thing about AI chatbots is that you can make weird, off-the-wall requests, and they never doubt you. Someone asked an AI to explain quantum physics but with a uwu (aka super cute) twist. Instead of a weird look and lots of doubt, ChatGPT simply got to work. The results were surprisingly great.

The AI was able to imitate cute baby-speak pretty well. There are plenty of extra “w’s” and emojis throughout its explanation. It’s like a uwuified version of science class. We’re kind of into it. Every physics professor is probably seething with disdain at this AI-generated response. Oh well. It might be the way of the future!

Lollipop, Lollipop

What is it with ChatGPT and simple spelling tasks? For whatever reason, AI tends to have a hard time accurately answering spelling and English-related prompts. Someone asked it to spell out “lollipop” backward, and it failed miserably. It spelled the backward version of “lollipop” as P-I-L-O-L-L-O-L. So, so close, yet so far. 

You’d think a task like this would be pretty easy for a robot. The human is feeding the correct information into the algorithm, after all. We’re not sure if these mistakes were in the early days of ChatGPT or if they still happen to this day. Whatever the case may be, AI developers should work on fixing that before they tell a kid how to incorrectly spell “lollipop” on a spelling test.
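Reversing a string really is one of the easiest things you can ask a computer to do. A minimal Python sketch (our own illustration of the check, not from the original chat):

```python
word = "lollipop"
reversed_word = word[::-1]  # Python's slice notation reverses the string

print(reversed_word)                # popillol
print(reversed_word == "pilollol")  # False: the bot's answer doesn't match
```

The correct reversal is “popillol,” which makes the bot’s “pilollol” all the more baffling.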

Talk Me Through It

This ChatGPT response is a little too human for our liking. Someone asked if 450 was 90% of 500, and the chatbot got to work. It immediately shot down the human’s response as incorrect and then dove into a mathematical calculation. The thing is, 450 IS 90% of 500! ChatGPT quickly learned that as it was typing out its response. 

You can see the point in the conversation where ChatGPT realizes it has made a grave mistake. It’s almost like talking with that one pompous friend who thinks he knows everything, only to be proven wrong. The weird thing is that ChatGPT could easily compute this behind the scenes and then simply present the correct answer. Instead, it had to write out a whole explanation. It’s almost like the AI wants to appear more human!

No Spoilers!

While using ChatGPT and other AI tools, it’s important to remember that everything you say in a chat is recorded and potentially reviewed. According to OpenAI, all the conversations in ChatGPT are used to train their models and create more accurate responses. Weirdly enough, ChatGPT seems reluctant to share too much information when it comes to TV show spoilers. 

The reason it gave for not spoiling a bunch of TV shows was that other people would be reading the conversation. It seems the developers put censors and boundaries in ChatGPT for more stuff than obscenity and adult-themed responses! Those tricky developers are looking out for themselves, at least when it comes to hiding spoilers for the latest season of Stranger Things.

Gendered Language

The beautiful thing about language is that it is always changing. Lately, more and more people have become aware of the need to use inclusive and gender-neutral language. Someone asked ChatGPT how to say “non-binary” in Polish, and it correctly supplied the Polish translation, “niebinarny.” However, things got fuzzy after that. 

The bot then went on to say that another translation could be “niebinarna,” depending on the context or “gender of the person.” Uh…doesn’t the gender part of that response kind of defeat the purpose? We don’t speak Polish, so “niebinarna” could very well be another translation of “non-binary.” However, ChatGPT kind of missed the mark by citing gender as the reason for the different words. Phew, language can get pretty complicated sometimes! 

Odd Number Out

There are some things that ChatGPT and other AI bots just can’t get right. Someone asked it for an odd number that didn’t have the letter “E” in it. The AI came up with the number 37, which obviously has an “E” in it. Heck, there’s even more than one! As it turns out, no odd number in English avoids the letter “E”: spelled out, every odd number ends in one, three, five, seven, nine, or one of the “-teen” words, and all of those contain an “E.”

But if that’s the case, why can’t ChatGPT just admit that no such number exists? Instead of admitting defeat, the AI produces a totally incorrect answer. Who knew AI could be so stubborn? We tried this very same prompt ourselves, and ChatGPT still gave 37 as an answer, as well as 11. Oy vey!
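That impossibility is easy to verify with a couple of lines of code. Here’s a minimal Python sketch (our own illustration, not anything from the original chat): every odd number, written out in English, ends in one of a handful of words, and each of them contains an “e.”

```python
# Spelled out, every odd number ends in one of these words
# (the last odd digit, or one of the odd "-teen" words).
odd_name_endings = [
    "one", "three", "five", "seven", "nine",
    "eleven", "thirteen", "fifteen", "seventeen", "nineteen",
]

# Each ending contains the letter "e", so no odd number
# can be written in English without one.
print(all("e" in word for word in odd_name_endings))  # True
```

So the honest answer to the prompt is simply “no such number exists,” which the bot couldn’t bring itself to say.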

The Tip of My Tongue

Despite some aspects of AI being quite terrifying, there are other parts of the technology that are pretty cool. This person discovered that they could ask ChatGPT about music. There was one song that this person had a hard time remembering. They knew the beat but didn’t know the artist or song name. In their desperation, they decided to ask AI. 

Their gamble paid off because ChatGPT almost immediately knew what they were talking about. The AI suggested “Also sprach Zarathustra” by Richard Strauss, which is one of the most famous scores of all time. Most of us have heard the song in a movie or two, but very few of us humans know the name. Thankfully, ChatGPT came to the rescue in this case. 

Name That Show

The internet is an amazing thing. With just a few taps and Google searches, we can unearth memories, lost TV shows, and long-forgotten pop culture references at lightning speed. With the advent of AI, this has become even more possible. Now, we can simply type in fragments of memory, and the AI can come to our aid. 

It’s kind of freaky, really. This person typed in the vaguest memories about a childhood show, and ChatGPT was able to come up with the name and a simple synopsis. The processing power of these new language models is truly amazing. Who knows what ChatGPT will be able to do in just a few years?

That’s Not Even Funny…

ChatGPT responses are only as good as the prompts it receives. Someone asked the AI to write a joke about D-Day as if it were Ace Ventura, played by Jim Carrey in the 90s. Now, we can all agree that a “joke” about D-Day is a pretty crass thing to ask for. As a result, ChatGPT came up with a totally unrelated, off-the-wall “joke” that wasn’t even funny. 

Thankfully, it wasn’t about the actual events of D-Day. Plus, it added a paragraph at the end of its response as a disclaimer. It kindly reminded the user that D-Day was a “monumental day in history” and not something that should be joked about lightly, even after all these years. ChatGPT did its best with what it got in this case, which wasn’t a whole lot. 

Donald Duck Would Never

You’ve probably seen those viral articles about people writing children’s books with ChatGPT. The jury is still out on whether or not those AI-written books are any good. We have a feeling they’re not, based on how this conversation went between ChatGPT and someone trying to write a rhyme about Donald Duck.

They simply asked for a revision on a rhyme, and ChatGPT came up with this truly unhinged and kind of scary response. We don’t know Donald Duck personally, but we have a feeling that he’s a pacifist. He might have anger issues, but he would never do such a thing at a Papa John’s or anywhere else. Let’s stick to straightforward answers for now, ChatGPT.


  5. Copypasta

    History. The term copypasta is derived from the computer interface term "copy and paste", the act of selecting a piece of text and copying it elsewhere.. Usage of the word can be traced back to an anonymous 4chan thread from 2006, and Merriam-Webster record it appearing on Usenet and Urban Dictionary for the first time that year.. Examples Navy Seal. The Navy Seal copypasta, also sometimes ...

  6. 4chan Text Generator

    Read writing about 4chan Text Generator in HackerNoon.com. Elijah McClain, George Floyd, Eric Garner, Breonna Taylor, Ahmaud Arbery, Michael Brown, Oscar Grant, Atatiana Jefferson, Tamir Rice ...

  7. How to change the font color on 4chan

    mIRC allows you to use multiple fonts when typing to other people in 4chan. Open your mIRC chat client by clicking on the "Start" button, and then "Programs," "mIRC" and "mIRC Client.exe." Press and hold the "Alt" key, then press "R.". Click on the "Aliases" tab. Enter the following text (without the quotes): "/msg /msg $1 ?AA,BB ...

  8. The Backrooms

    History Original creepypasta. On May 12, 2019, an anonymous user started a thread on /x/, 4chan's paranormal-themed board, asking users to "post disquieting images that just feel 'off ' ". One of the posts was the original photo of the Backrooms: a picture of a large carpeted, open room with yellow wallpaper and fluorescent lighting on a Dutch angle. It is not known where the photo was taken ...

  9. 15.ai

    15.ai is a non-commercial freeware artificial intelligence web application that generates natural emotive high-fidelity text-to-speech voices from an assortment of fictional characters from a variety of media sources. Developed by a pseudonymous MIT researcher under the name 15, the project uses a combination of audio synthesis algorithms, speech synthesis deep neural networks, and sentiment ...

  10. Anonymous post

    Anonymous post. An anonymous post, is an entry on a textboard, anonymous bulletin board system, or other discussion forums like Internet forum, without a screen name or more commonly by using a non-identifiable pseudonym . Some online forums such as Slashdot do not allow such posts, requiring users to be registered either under their real name ...

  11. Kill All Normies

    978-1-78-535543-1. Kill All Normies: Online Culture Wars from 4chan and Tumblr to Trump and the Alt-Right is a 2017 non-fiction book by Angela Nagle published by Zero Books. It describes the development of internet culture, the nature of political correctness, the emergence of the alt-right and the election of Donald Trump.

  12. Hiroyuki Nishimura

    Hiroyuki Nishimura (西村 博之, Nishimura Hiroyuki, born 16 November 1976) is a Japanese internet entrepreneur best known for being the founder of the most accessed Japanese message board, 2channel, and current administrator of 4chan. He is also a self-help author and TV personality.: PT38 He is often known by his given name, hiroyuki (ひろゆき), which he uses, rendered intentionally in ...

  13. Imageboard

    An English-language imageboard based on cannabis culture which was created on 20 April 2005 by Aubrey Cottle.The name is a reference to the larger 4chan and the code term 420 of the cannabis subculture.Its boards included various drug-specific boards, as well as a board featuring a chatbot named Netjester. 4chan was based on Futaba Channel (2chan.net), a Japanese image bulletin board which in ...

  14. Creepypasta

    Definition. The word creepypasta first appeared on 4chan, an online imageboard, around 2007.It is a variant of copypasta (from "copy and paste"), another 4chan term which refers to blocks of text which become viral by being copied widely around the internet. Unlike copypastas, creepypastas are all horror fiction and also encompass multimedia stories, with creators using videos, images ...

  15. Category:4chan user templates

    If the template has a separate documentation page (usually called "Template: template name /doc"), add. [[Category:4chan user templates]] to the <includeonly> section at the bottom of that page. Otherwise, add. <noinclude>[[Category:4chan user templates]]</noinclude>. to the end of the template code, making sure it starts on the same line as ...

  16. Timeline of events associated with Anonymous

    The third team, the Red Team, was tasked to spread the information of the OP on 4chan, Reddit, Tumblr and Funnyjunk, and also supported the other teams. The fourth and final team, the White Team, was tasked with spamming chat sites such as Omegle and Chatroulette with inappropriate messages, such as "9gag.com is the place for Child Pornography ...

  17. GPT-3

    Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.. Like its predecessor, GPT-2, it is a decoder-only transformer model of deep neural network, which supersedes recurrence and convolution-based architectures with a technique known as "attention". This attention mechanism allows the model to selectively focus on segments of input text it predicts ...

  18. Doomer

    Doomer and, by extension, doomerism, are terms which arose primarily on the Internet to describe people who are extremely pessimistic or fatalistic about global problems such as overpopulation, peak oil, climate change, ecological overshoot, pollution, nuclear weapons, and runaway artificial intelligence.Some doomers assert that there is a possibility these problems will bring about human ...

  19. Talk:4chan/Archive 16

    After some googling I've found many respectable sources that provide a point of view about 4chan as a website that permits racist commentary and hate speech, but instead, not enough reference to this topic is found in the curent revision of the article, there are many sources out there confirming this, besides KTTV's report. —Preceding unsigned comment added by 201.230.211.251 16:09, 1 ...

  20. Shitposting

    Definition and usages. Shitposting is a modern form of online provocation. The term itself appeared around the mid-2000s on image boards such as 4chan.Writing for Polygon, Sam Greszes compared shitposting to Dadaism's "confusing, context-free pieces that, specifically because they were so absurd, were seen as revolutionary works both artistically and politically".

  21. It's okay to be white

    A sticker with the slogan publicly displayed in 2017 "It's okay to be white" (IOTBW) is an alt-right slogan which originated as part of an organized trolling campaign on the website 4chan's discussion board /pol/ in 2017. A /pol/ user described it as a proof of concept that an otherwise innocuous message could be used maliciously to spark media backlash. ...

  22. /pol/

    According to a 2017 longitudinal study, using a dataset of over 8 million posts, /pol/ is a diverse ecosystem with users well-distributed around the world. The percentage of posts containing hate speech ranges from 4.15% (e.g., in Indonesia, Arab countries) to 30% (e.g., China, Bahamas, Cyprus). Elevated use of hate speech is seen in Western ...