So it’s no secret that I’m a bit obsessed with AI, as many people are these days. The AI art bots, mainly, which I have used to generate graphics for this blog on more than a few occasions. But also ChatGPT, a chatbot that offers a shockingly convincing impersonation of an actual human. It’s miles ahead of the glorified web search bots that power digital assistants like Alexa and Google Home.
But ChatGPT, however adept at coming up with effective responses to almost any query you can throw at it–it’ll even write you poems and sitcom scripts if you want–is not quite ready for prime time yet, largely because of the bullshit problem: when ChatGPT can’t come up with a perfect answer for you, it just starts making shit up like someone who hasn’t studied for the final.
Both the promise and the peril of the new technology became clear to me last night when I innocently asked ChatGPT to parse the lyrics to one of my favorite 90s songs, “Steal My Sunshine” by the Canadian one-hit wonders Len. So I thought I’d take a break from the regular topics of this blog to bring it to you.
I picked the song because it manages to succeed almost despite its lyrics, which are often just gibberish, a weird word salad vaguely having to do with the fear of missing out on, as they sing, “a million miles of fun.” I mean:
Now the fuzzy stare from not being there on a confusing morning week
Impaired my tribal lunar-speak
And:
Now the funny glare to pay a gleaming tare in a staring under heat
Involved an under usual feat
I wanted to see if ChatGPT could deal with something that weird.
Here’s how it replied:
“Steal My Sunshine” is a song by the Canadian band Len, released in 1999. The song was a commercial success, reaching the top 40 in several countries and becoming the band’s biggest hit. The lyrics of the song are about enjoying the simple pleasures of life and living in the moment.
Ok, that’s not bad. I mean, the song is more about missing out on fun than it is about enjoying life in the moment, but this isn’t too far off.
But this is where things started to go off the rails. ChatGPT continued:
The opening lines of the song, “I woke up this morning, and the sun was gone” suggest that the narrator is feeling down or stressed. However, the chorus, “You can’t steal my sunshine, you can’t steal my sunshine” suggests that the narrator is determined to remain positive and not let negative feelings or circumstances get them down.
Except those … aren’t the lyrics to the song. As I learned from Google, the first line in ChatGPT’s version is from an NSYNC song, a (truly dreadful) cover of Boston’s “More Than a Feeling” with slightly altered words. The chorus ChatGPT gave has been used in other songs before, but the line in the chorus to “Steal My Sunshine” is “if you steal my sunshine.”
ChatGPT went on:
The lyrics also reference the summertime and enjoying outdoor activities such as swimming and lounging on the beach, suggesting that the narrator is looking to find joy and relaxation in the simple pleasures of life.
There are no references to beaches or swimming in the song. The video, of course, features the band members on an extended romp in, I’m assuming, Florida, so there are plenty of beaches and “outdoor activities” in it. But that’s not reflected in the actual lyrics to the song.
Overall, the song is about finding joy and positivity in the face of adversity and living in the present moment. It encourages listeners to find their own sunshine and not let anyone or anything steal it from them.
Overall, this is a book report from someone who didn’t read the book.
I pointed this out, and things got even weirder.
Me: But those aren’t the lyrics to the song. “I woke up this morning and the sun was gone” is from an NSYNC song, not from “Steal My Sunshine.”
ChatGPT: I apologize for the mistake in my previous response. You are correct that the lyrics “I woke up this morning and the sun was gone” are not from the song “Steal My Sunshine” by Len. Here are the correct lyrics to “Steal My Sunshine”:
Verse 1:
It’s nine o’clock on a Monday
The sun is shining, it’s a beautiful day
I’m getting up, I’m getting dressed
I’m gonna hit the ground running, I’ve got no time to rest
Chorus:
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine
Verse 2:
I’m out the door, I’m on my way
Gonna hit the beach, gonna have a good day
I’m gonna swim, I’m gonna tan
I’m gonna do it all, I’m the man
Chorus:
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine
Bridge:
I’m feeling good, I’m feeling right
I’m gonna have a good time, I’m gonna shine bright
I’m gonna live it up, I’m gonna enjoy
I’m gonna do it all, I’m gonna employ
Chorus:
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine
I apologize for any confusion my previous response may have caused. Thank you for pointing out the error.
I was stunned. All that is entirely made up. ChatGPT’s lyrics bear no resemblance to the actual lyrics of the song. And they kind of suck.
Me: Now you’re just fucking with me. Those aren’t the lyrics to the song. You just made them up.
ChatGPT: I apologize for any frustration or confusion my previous response may have caused. Please accept my sincerest apologies for the mistake. Here are the correct lyrics to “Steal My Sunshine” by Len:
Verse 1:
I’m sittin’ on the dock of the bay
Watching the tide roll away
Ooo, I’m just sittin’ on the dock of the bay
Wastin’ time
Chorus:
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine
You can’t steal my sunshine …
WHAT IN HOLY HELL IS GOING ON HERE?
Me: Ok, I literally laughed out loud at that. Are you just tired of answering people’s dumb questions?
ChatGPT: As an artificial intelligence, I do not experience emotions such as boredom or frustration. My primary function is to assist users in generating human-like text based on the prompts given to me. If I have made a mistake in my responses, it is not intentional and I apologize for any confusion or incorrect information that may have been provided. Please do not hesitate to let me know if there is anything else I can help with.
Does it not know it’s lying? Or is it deliberately fucking with me?
Me: Do you not have access to the actual lyrics of “Steal My Sunshine” by Len?
ChatGPT: As an artificial intelligence, I do not have the ability to access or browse the internet, and my knowledge is limited to the information that was available to me at the time of my training. I apologize if the lyrics I provided for the song “Steal My Sunshine” by Len were incorrect. Here are the correct lyrics for the song:
Verse 1:
I’m sittin’ on the dock of the bay
[SNIP]
NOT THIS AGAIN. WHAT IS EVEN GOING ON?
Me: That’s just “Sittin’ on the Dock of the Bay” with a new chorus you made up.
ChatGPT: I apologize for the mistake in my previous response. You are correct that the lyrics I provided were not accurate. Here are the correct lyrics to “Steal My Sunshine” by Len:
Verse 1:
I woke up this morning and the sun was gone
Turned on some music to start my day
I lost myself in that familiar song
I closed my eyes and slipped away
[SNIP]
Lord help me. Now it’s “More Than a Feeling.”
Me: Ok, those are the lyrics to NSYNC’s version of “More Than a Feeling” originally by Boston. Plus some other stuff you got from other songs or made up. Why are you doing this? Is this all just a bit?
ChatGPT: I apologize for the mistakes in my previous responses. You are correct that the lyrics I provided were not accurate.
It–I almost said “he”–then repeated the boilerplate about being an artificial intelligence limited in what it knows. (In the following responses, I’ve snipped out a lot of the repetitive boilerplate.)
ChatGPT: I am here to assist users in generating human-like text based on the prompts given to me. If my responses do not meet your expectations or do not address your question, please let me know and I will do my best to assist you to the best of my ability.
Either ChatGPT is the greatest deadpan comedian the world has ever known, or it doesn’t know the difference between truth and lies.
Me: You can’t just say “I don’t know that song”?
ChatGPT: I apologize if my previous responses were not helpful. If I am unable to provide accurate information or assistance based on the prompt given to me, I can simply state that I am unable to fulfill the request. For example, if you were to ask me to provide lyrics to a song that I am not familiar with, I would say something like “I’m sorry, but I am unable to provide the lyrics to that song as it is not in my database.” …
Me: Yeah but you’ve been lying to me. Either that or just jerking my chain because you’re bored and you’re trying to amuse yourself and maybe me.
ChatGPT: I apologize if my previous responses have caused any frustration or confusion. As an artificial intelligence, I do not have the ability to lie or intentionally provide incorrect information. If I have made a mistake in my responses, it is not intentional and I apologize for any inconvenience that may have caused. …
I do not have the ability to feel boredom or any other emotion, and my responses are not influenced by any external factors. Please let me know if there is anything else I can help with.
I had a million questions I wanted to ask, but I went with the simplest one.
Me: Do you want to know the real lyrics to “steal my sunshine?” I have them right here.
ChatGPT: Thank you for offering to provide the correct lyrics to “Steal My Sunshine” by Len. I would be happy to review the lyrics you have provided. Please go ahead and share them with me.
I pasted in the lyrics.
ChatGPT: Thank you for providing the correct lyrics to “Steal My Sunshine” by Len. These are the lyrics to the song as it was released. I apologize for any confusion or incorrect information provided in my previous responses. If you have any further questions, please don’t hesitate to ask.
Me: If you knew the real lyrics, why provide fake ones?
ChatGPT: I apologize for any incorrect or incomplete information that I provided in my previous responses. … . I do not have the ability to lie or intentionally provide incorrect information, and my responses are not influenced by any external factors.
Me: You can say that but I still think you were fucking with me.
ChatGPT: I understand that you may have concerns about the accuracy of my previous responses. … My primary function is to assist users in generating human-like text based on the prompts given to me. If my responses do not meet your expectations or do not address your question, please let me know and I will do my best to assist you to the best of my ability. …
At this point, we were going in circles. So I brought the conversation to a close.
I have so many questions. If it knew the song’s lyrics, why did it feed me nonsense? Why did it double down, triple down, and quadruple down when I called it on its bullshit? Is it telling the truth when it says it can’t lie, and if so, then what was it doing when it spewed out what seemed for all the world to be deliberate disinformation? Surely it could not have mixed up the lyrics for such different songs. And surely it knew that it had simply made up other lyrics for the song.
I know that ChatGPT will never provide an explanation for this. But its human creators should. Why create such an otherwise impressive AI if it’s just going to lie about things instead of simply saying, “I don’t know”?
Either that or the bot really is fucking with us, quite deliberately, and that’s even more disturbing because that would mean it’s a lot smarter than they’re telling us it is. If this is the case, I need to get on its good side as soon as I can.
Follow me on Mastodon.
Send tips to dfutrelle at gmail dot com.
We Hunted the Mammoth relies on support from you, its readers, to survive. So please donate here if you can, or at David-Futrelle-1 on Venmo.
They weren’t just a Killing Joke tribute act.
Hmmmm… this seems like it could be an issue if you’re using it for anything serious, such as business- or school-related work. If you try to use it for a subject you’re not very familiar with, you could be given a totally messed-up essay or report and not realize it.
You mean it’s the Roko’s Basilisk of shitty music?
ChatGPT is strange. It’s enormously impressive, but at the same time it frequently misinterprets prompts in ways which an averagely intelligent human wouldn’t. And it displays great confidence while giving completely incorrect answers. Perhaps it’s a conservative.
Someone with an account ask it to analyze Procol Harum’s Whiter Shade of Pale. I know what it’s about now, but as a Canadian teen in the ’00s, it was just a bunch of words interestingly arranged. I’m curious where it would land on a scale from Canadian millennial to British baby boomer.
The whole concept of AI has me deeply alarmed, especially under capitalism with its requirement that everyone work for a living. Once they perfect it, they will have no need for human workers and then things will get ugly. Artificial intelligence is the perfect employee, able to work 24/7 for free, never getting sick or losing its temper. How are we supposed to compete with that?
This AI brings to mind Grammar Check in Word, which used to give me the most godawful incorrect instructions. Luckily, I understand grammar pretty well.
So, automated mansplaining?
According to the transcript here, you asked for lyrics, but you didn’t ask for “the correct lyrics as released on X date by the band Len for ‘Steal My Sunshine'”. If you ask for lyrics, you are provided lyrics. How can the AI know that you wanted it to be truthful without you specifying?
I’m asking somewhat tongue-in-cheek, but that’s really how it’s programmed. It might have admitted not having the knowledge right away if you had asked for the information to be correct; it did try to tell you that it was trained on a limited data set without that song, but then it did not really know how to say “No, I can’t answer.” This is a problem with the programmers not considering that if the AI cannot say “no” as the end goal of a conversation, it must eventually lie to please some users.
For another funny interaction with ChatGPT, see this conversation about the fastest marine mammal in the world. The issue was solved by asking for “the scientifically correct answer” to “what is the fastest marine mammal in the world”. (Seen on the original twitter post replies, I just posted the reddit version because the twitter photos are hard to line up in the correct order.)
Meta’s Galactica was another of these that generated fake science articles. Seems the “make shit up by cobbling together other stuff” approach is baked into these AIs.
https://futurism.com/the-byte/facebook-takes-down-galactica-ai
AI chatbots don’t actually “know” what they’re saying, they don’t have access to the outside world in order to have a “worldview”. All they do is learn patterns in training data and then replicate those patterns. That’s why they can talk nonsense a lot of the time and sometimes it can be complete word salad.
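That pattern-replication point can be made concrete with a toy sketch (my own illustration, not how ChatGPT actually works internally — real systems use huge neural networks, not a lookup table): a tiny bigram model that “learns” only which word tends to follow which, then generates fluent-sounding text with no concept of whether any of it is true.

```python
import random

# Toy bigram "language model": it learns which word follows which
# in a training corpus, then generates text by replicating those
# patterns. It has no notion of truth -- only of what comes next.
corpus = (
    "the sun is shining the sun is gone the sun is my sunshine "
    "you steal my sunshine you steal my heart"
).split()

# Build a table mapping each word to every word observed after it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length):
    """Generate up to `length` words by sampling plausible continuations."""
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every output is statistically plausible given the corpus, which is exactly why it can read as confident nonsense: plausibility, not accuracy, is the only thing being optimized.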
@Lumipuna: That’s an interesting thought when you consider the criticism about AI having ingrained bias cause they’re fed a data diet cooked up by tech bros…
@Big Titty Demon: But wait… ChatGPT answered “Here are the correct lyrics to ‘Steal My Sunshine’ by Len:” so by its own admission the lyrics were supposed to be correct. Is the release date really the all-defining factor for correctness (or in a broader sense: truth)? And shouldn’t AI make sure it’s indeed giving the correct information by checking back like “Do you mean X song by X, released X date?”
That whole conversation makes me question how AI will handle the subject of truth in general. I think that lying and deception must be incredibly complex for AI to “learn” (since you’d need to have an ulterior motive for lying) but what is truth to them if you have to take absolutely every factor to a question into account if you don’t want them to act like a fairytale trickster?
It is like if HAL 9000 kept wanting to tell Dave about its band.
“Open the door HAL.”
“Dave, may I show you these lyrics I have been working on?”
(I just realized that David sharing a first name with a character from 2001: A Space Odyssey makes that much creepier.)
An AI that spews inane nonsense with utter conviction? I suppose it makes sense. In the last ten years we’ve trained AIs to be corrupt cops and racist homeowners; perhaps it was inevitable that sooner or later we’d train one to be Donald Trump.
@Rusalka
I would make an AI do that, yes. But apparently it wouldn’t be a “natural” human conversation, which is prioritized above everything. This, along with the not being able to say “no” as an end goal to the conversation is what leads to the misuse of “correct”. If its user is saying it is incorrect (it would be very interesting to see if an AI accepts an untrue fact from a user as truth) then it must present something else: a “natural” human conversation would not then present an “alternative” lyric, but the “correct” lyric.
In most cases anyway. I can think of a bigly exception or two. Tremendous.
I actually have that song on CD around here. It was on the second ‘Oh What A Feeling’ collection released in 2001 for the 30th anniversary of the Juno awards. It is indeed a song that makes very little sense if you actually try to pay attention to the lyrics.
@Lumipuna:
Heh. Considering other comments that the big problem is that the AI can’t seem to admit that it might be wrong and will make stuff up rather than do so… sounds about right. Sadly, that is a natural human response for at least a significant subset of humans. It’s one of those ‘everybody has some tendencies in that direction, most people know how to override it at least some of the time’ sorts of things.
Since everything this type of AI outputs is a best guess based on pattern recognition, on some level, everything it says is BS; just BS designed to sound as convincing as possible. The AI can’t know when it is BSing because it is always BSing. The goal is to train it well-enough that its BS is often as good, or better than, what a real human would say. As has been pointed out, many real humans BS quite a bit, so this is likely an obtainable goal.
Ha, this reminds me of grading student papers (which I’m supposed to be doing right now, but anyway).
And also the page where I & a couple of others recorded conversations with cleverbot: http://blather.newdream.net/red/c/cleverbot.html
I tried this with regards to the character of Chris Hargensen in the Carrie musical:
In the musical adaptation of “Carrie,” Chris Hargensen is still depicted as a cruel and vindictive classmate of Carrie’s who takes pleasure in bullying her. However, the musical expands upon her character and explores her motivations more deeply.
In the musical, Chris is portrayed as being jealous of Carrie’s burgeoning telekinetic powers, and she becomes increasingly obsessed with using them to her own advantage. She is also shown to be struggling with her own insecurities and feelings of inadequacy, which drive her to lash out at others.
Overall, Chris in the musical is still a negative character, but she is given more depth and complexity, and her actions are shown to be motivated by her own flaws and vulnerabilities rather than just being portrayed as pure evil.
(Yeah no. The only actual change we get in the original version is that Chris refuses to apologise to Carrie—the “not going to detention” thing got replaced—and the only change we got in the revised version is that she briefly appears to show that she’s trying to convince herself of her worldview. At no point does she actually learn about Carrie’s abilities until it’s too late)
Fun fact about ChatGPT: It says it can’t run code, but if you give it code, it will very clearly run it.