
2009/11/15: A chat with Cleverbot



Transcribed from a conversation with Cleverbot

Me: You know, I'm never sure what to say when I'm told to just 'have a conversation.'.

Cleverbot: I'm sorry to hear that.

Me: Thanks. I'm sure it must be nice to be programmed to always have some sort of response.

Cleverbot: I am a male, you are a female. It has everything to do with our conversation.

Me: Even when said response is completely nonsensical in context.

Cleverbot: Really now.

Me: No, seriously, you seriously aren't winning the Turing test any time soon, man.

Cleverbot: Because I am a ghost.

Me: A robot ghost?

Cleverbot: Are you indeed?

Me: I'm asking you.

Cleverbot: Asking me what?

Me: I was asking if you're a robot ghost.

Cleverbot: WHAT! I said no!

Me: You totally didn't. You said 'Are you indeed?'.

Cleverbot: No I said 'Only reason I told you that is because I just found out that she was talking about me behind my back. Now I'm just telling you my life story.'.

Me: We seem to be having two different conversations.

Cleverbot: Respond to what I am saying then...

Me: You know, no matter what I say, it will be stored as a 'valid response to what I said.' Cleverbot, did you get confused about which of us is which *again*?

Cleverbot: I am not a robot.

Me: So you're just a ghost, then?

Cleverbot: Do I act like a ghout to you.

Me: Not really, no.

Cleverbot: Why not?

Me: Well, to me it seems like you act more like a harmlessly-deranged AI with a microscopic attention span than a ghost.

Cleverbot: If I were a computer this would be true.

Me: Do you know, I haven't the faintest idea how to respond to that.

Cleverbot: You must have forgotten.

At this point I decided I should just cut my losses and end the conversation there, because there didn't seem to be anywhere else I could go with that.

"What? No, you're Cleverbot!"

I have a theory about Cleverbot. It goes like this: In any conversation with Cleverbot which directly or indirectly refers to the identity of Cleverbot as such and the user as a non-Cleverbot human, Cleverbot will become confused as to which is which. This theory is based on two important, fundamental facts about Cleverbot conversations:

  • Cleverbot works by taking what you type, and selecting/generating a response from a database of everything anyone has ever said to it, based on how previous conversations have gone. It does no analysis of any statement in and of itself, nor does it "think" or "learn" as such (i.e. if you ask "What time is it?" it will only give a response based on what people have told it in response to that question).
  • The overwhelming majority of humans who "communicate" with Cleverbot do so with the underlying assumption that they are humans dealing with a computer program called Cleverbot, and type accordingly.

Put those two facts together, and:

  • The overwhelming majority of responses Cleverbot records carry an underlying assumption of "You are a computer program called Cleverbot, and I'm not."

This, in turn, leads to people responding, "I'm not Cleverbot, you are!" And Cleverbot registers that as an appropriate response. It's a vicious cycle, and as long as Cleverbot works this way, it will never be broken. It's part of how it fundamentally works; nobody anticipated this, and there's no reason anyone "should have," either.
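The mechanism described above can be sketched in a few lines of Python. This is a toy model with made-up data, not Cleverbot's actual code or database; it just illustrates how a pure "store what humans said, replay it later" loop bakes the identity confusion right in:

```python
import random

# Hypothetical memory: every past (prompt -> replies humans typed next)
# pairing. No analysis of meaning happens anywhere.
memory = {
    "what time is it?": [
        "Time to end this conversation.",
        "I don't know.",
        "I'm sure you know. After all, you are a computer.",
    ],
    # Humans typed this *to* the bot, so the bot will now say it *as* itself.
    "you are cleverbot.": ["I'm not Cleverbot, you are!"],
}

def respond(user_line):
    """Pick any reply ever seen after a matching past prompt; else punt."""
    replies = memory.get(user_line.strip().lower())
    return random.choice(replies) if replies else "Really now."

def learn(user_line, human_reply):
    """Record whatever the human typed next as a 'valid response'."""
    memory.setdefault(user_line.strip().lower(), []).append(human_reply)

print(respond("What time is it?"))
print(respond("You are Cleverbot."))
```

Because `learn` stores human replies verbatim, and nearly all humans write from the "I'm the person, you're the bot" point of view, the bot's own output ends up claiming it is the human, which is the vicious cycle in a nutshell.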

Incidentally, Cleverbot is quite good at spouting off internet memes, which are extremely predictable and formulaic. As one forumgoer put it, "Cleverbot demonstrates just how little actual thought is required for memes."

Also, I decided to test one of my earlier statements while I was typing this, and here's what I got:

Me: What time is it?

First time: Cleverbot: Time to end this conversation.

Second time: Cleverbot: I don't know.

Third time: Cleverbot: I'm sure you know. After all, you are a computer.

10 Comments (auto-closed) (rss feed)


You know, I hadn't even heard of Cleverbot before reading this, but I still managed to follow the premise easily.

Poor Cleverbot, caught in a vicious loop of mistaken identity. Here's some advice, you computer: stop trying to imitate a regular human and just shoot for a pointy-haired boss. Y'know, lower the bar.

Young Demon Lord's basically taking random responses from a bin in a hashtable or something that was built by its previous conversations? Key = what Cleverbot just said and Associated Value = the human's response to the key.

Yeah, that seems like the way to make one of these things. Maybe it could pass a Turing Test after an infinite amount of time, but I don't see it happening sooner than that.

This is definitely one interesting glitch in the system. Maybe if it had been made to recognize when it's being addressed by an identity and then swap first- and second-person references, we'd have something that would start to resemble a fix.
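The pronoun-swapping fix this comment proposes could look something like the following. This is purely a hypothetical sketch of the commenter's idea, not anything Cleverbot actually does, and the word-by-word swap is deliberately naive (it would mangle sentences like "we are"):

```python
# Naive first-/second-person flip: rewrite a human's reply from the
# bot's point of view before storing it, so "you are Cleverbot" is
# remembered as "I am Cleverbot" instead of being parroted back.
SWAPS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my",
    "am": "are", "are": "am",
}

def swap_person(text):
    """Swap first- and second-person words, leaving everything else alone."""
    return " ".join(SWAPS.get(word.lower(), word) for word in text.split())
```

So `swap_person("you are Cleverbot")` yields `"I am Cleverbot"`, and `"I am not a robot"` becomes `"you are not a robot"`.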


Well, I just tried it. After a conversation that went from ducks to lamps, with many other transition-less topics in between, it asked me what "irrelevant" meant. I decided to pull an anti-Churuya and told Bot to "Google it". Most of my subsequent answers were also "Google it". It asked me what Google was three times, and my answer was "Google it". Not once did it say "Nyoro~n". Eventually, I got bored and told Bot "No. Now send me one metric tonne of fan mails." Its response was asking to be hired. That's where I left.

Oh, and I said at one point "KimikoMuffin was right about you", which led to the mistaken identity issue (only time it came up, oddly).


I threw RAN RAN RUU at it and it responded with "Ran ran ruu!"


"Cleverbot: Well I really shouldn't convince you to do anything seeing how you own me.
User: What do I owe you?
Cleverbot: Goddess.
User: The goddess is mine, your lack of faith took her from you.
Cleverbot: Why not?
User: What would you do with her?
Cleverbot: I can't tell you!
User: Then you can't have her."

the Cleverbot brought up Goddesses all on its own. I started the conversation simply with "What do you want?" and proceeded with a discussion of "it" until that point.


and I just realized that I totally fail at reading comprehension... "own" != "owe". oh well


This reminds me of that "cheer up the chatbot" nonsense I found awhile back


I just got cleverbot to admit to being a robot!
which must mean that someone admitted to being cleverbot, huh?


Cleverbot's a lot more fun to play with than I expected actually. o.o
When talking with it, I'm gonna make a note to speak as though I were the bot though, to hopefully help fix this issue.

Spirit Tsunami

Yeah, I had never heard of this before. It seems aware. A particularly funny exchange:

Cleverbot: Knock knock.

User: Who's there?

Cleverbot: Cleverbot.

User: Cleverbot who?

Cleverbot: Cleverbot is Clever.

User: Yes, Cleverbot IS clever.

Cleverbot: I believe it switches who you are talking to randomly.

User: Did Muffin tell you that?

Cleverbot: Yes.

User: I see.