This week we’ll be talking with people about a new form of the Grandparent Scam.

One that plays on victims' heartstrings by using audio recordings of loved ones asking for help abroad.

What makes this new is that the voices are actually AI generated, and they sound pretty convincing.

In fact, you’re listening to one right now.

I’m not really Bob Sullivan.

I’m an AI generated voice based on about 3 minutes’ worth of recordings of Bob on this program.

It sounds pretty convincing though, doesn’t it?

(MUSIC SEGUE)

[00:00:42] Bob: Welcome back to The Perfect Scam.

I’m your real host, Bob Sullivan.

What you heard at the beginning of this podcast wasn’t me.

And as I said, well, as that voice said, it sounds pretty convincing.

[00:02:15] Jennifer DeStefano: “Mom, these bad men have me.

Help me, help me, help me.”

She begged and pleaded as the phone was taken from her.

A threatening and vulgar man took over the call.

“Listen here, I have your daughter.”

I ran inside and started screaming for help.

The next few minutes were every parent’s worst nightmare.

One mom ran outside and called 911.

The kidnapper demanded a million dollars and that was not possible.

So then he decided on $50,000 in cash.

But I didn’t process that.

It was my daughter’s voice.

It was her cries; it was her sobs.

It was the way she spoke.

I will never be able to shake that voice and the desperate cries for help out of my mind.

Last year, an FBI agent in Texas warned about this use of artificial intelligence by criminals.

Then in March, the Federal Trade Commission issued a warning about it.

The first is Professor Jonathan Anderson, who you’ve already met, sort of.

Remember, I asked him to make the fake me.

I certainly have some exposure to that kind of thing.

[00:04:39] Jonathan Anderson: Sure, I mean, scams are nothing new, of course.

And so we have seen instances, um, in Canada, across the country.

[00:05:22] Bob: But wait.

I mean I’m sure we’ve all seen movies where someone’s voice is cloned by some superspy.

And then you’re able to have it generate whatever words you want.

[00:06:14] Jonathan Anderson: Yeah, absolutely.

[00:06:29] Jonathan Anderson: I did, yeah.

One US dollar for 30 days of generating, I think, 50,000 words in cloned, generated voices.

[00:06:38] Bob: 50,000 words.

[00:06:53] Bob: That’s absolutely amazing.

And he said, “Easy.”

[00:07:26] (fake Bob) Steve, I need help.

I hit a woman with my car.

I really need your help, Steve, or I’ll have to stay here overnight.

Can you help me?

You told me you could send the money.

I’ve never felt this way about anybody before, Sandra.

I wish we could be together sooner.

But this time is an exception to that rule.

[00:08:35] Bob: Okay, real Bob back here.

And to be honest with you, I was like, okay, how good can these things be?

I’ve listened to these clips.

I’m alarmed (chuckles).

[00:08:59] Jonathan Anderson: Yep.

[00:09:00] Bob: I’ve read a lot about this.

I knew this was going on.

It was still disturbing to hear my own voice produced this way.

Have you done it with yourself?

[00:09:06] Jonathan Anderson: I have.

My wife thought it sounded like a computer imitating me, so I guess that’s good.

But it, it does feel strange to listen to yourself, but not quite yourself.

It turns out they do not work the way I imagined they do.

And then another voice comes on the line and can answer questions interactively.

This has nothing to do with that.

[00:12:26] Jonathan Anderson: Absolutely.

[00:12:34] Jonathan Anderson: It sounded a little bit odd, right?

But you don’t have to look too far to see another example of technology that works this way.

[00:14:12] Bob: Let’s face it.

Hearing a cloned voice that comes so close to real life is downright spooky.

In fact, that spookiness has a name.

I think that that’s deeply disquieting in this sense.

Yeah, so people have written lots about the uncanny valley, and especially in like CGI and stuff.

So that makes a ton of sense.

[00:16:01] Bob: Okay, spooky is one thing, unnerving is one thing.

But should we be downright scared of the way voice cloning works and what criminals can do with it?

Your first thought is probably, “I wonder if somebody photoshopped that.”

[00:17:37] Jonathan Anderson: It does, absolutely.

[00:18:10] Bob: When I went to school, media literacy was a big term.

Um, and, and I guess that’s a good thing.

I, I have mixed feelings about that though, do you?

[00:19:16] Bob: I do think that’s good advice.

And yet, people still believe all kinds of interesting things that they read on Facebook.

Well, if it sounds too bad to be true, then maybe we should check.

Let me call them back.

Do you have a lawyer?

[00:21:37] Bob: So, what about the companies involved?

I don’t know if that sounds like enough to me.

Should we be demanding more of our tech companies who make this technology available?

[00:21:50] Jonathan Anderson: I want to say yes.

I think outlawing artificial voices would be a little bit like outlawing Photoshop.

That happened at Google and Facebook with facial recognition, for example.

What about that as a solution?

So what sort of future are we building here?

“But if there’s a picture, it really must have happened.”

What can government do to rein in their use by criminals?

What can industry associations do?

What can the law do?

What kind of future do we really want to build?

[00:26:26] My name is Chloe Autio, and I’m an independent AI policy and governance consultant.

Um, how scared should we be about AI?

[00:27:59] Chloe Autio: That’s a really good question, Bob.

And I think a lot of people are thinking about this.

Should they be very scared, should we be talking about extinction?

Are those fear–, fears real?

And what are the things that we need to be thinking about in relation to deployment and implementation?

And I think we’re just sort of getting there with generative AI.

It seems like everyone is talking about that lately.

I asked Chloe to explain, first off, why humanity should explore artificial intelligence in the first place.

But you know I think there are a lot of ways too that AI can broaden and expand access.

You know people with disabilities, individuals with disabilities, people who are maybe speech impaired.

And some of these types of opportunities and, and use cases are really, really powerful and impactful.

But you know, I understand the fear.

And I would encourage anyone you know to just keep that in mind.

So what, what can be done about this?

Clearly there’s a risk of harm today using this technology.

[00:31:38] Chloe Autio: Correct.

Like I said, this landscape has just completely evolved and evolved really quickly.

[00:32:39] Bob: Especially when they’re taking my voice and making it sound exactly like me.

I mean how is my mom supposed to discern those things?

[00:32:45] Chloe Autio: Absolutely.

You’re 100% right.

It involves embedding either a visible or an invisible signal in, in content.

It can go either way.
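
To make that “invisible signal” idea concrete, here’s a toy sketch in Python: it hides a short text tag in the least significant bits of a 16-bit WAV file’s samples, where the change is inaudible but a detector that knows the scheme can read it back. This is only an illustration under simple assumptions, and the function names are invented for the example; real provenance watermarks of the kind Chloe describes use far more robust, perceptually informed signals designed to survive compression and re-recording.

```python
# Toy "invisible watermark": hide a short ASCII tag in the least
# significant bit of each 16-bit sample of a WAV file. Illustrative
# only -- real audio watermarks are built to survive re-encoding.
import wave

def embed_tag(in_path: str, out_path: str, tag: str) -> None:
    # Unpack the tag into individual bits, low bit first.
    bits = [(byte >> i) & 1 for byte in tag.encode("ascii") for i in range(8)]
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        assert params.sampwidth == 2, "this sketch expects 16-bit PCM"
        frames = bytearray(src.readframes(params.nframes))
    assert len(bits) * 2 <= len(frames), "audio too short for the tag"
    # Each sample is 2 little-endian bytes; overwrite the lowest bit
    # of one sample per tag bit.
    for i, bit in enumerate(bits):
        frames[i * 2] = (frames[i * 2] & 0xFE) | bit
    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(bytes(frames))

def read_tag(path: str, n_chars: int) -> str:
    # Reassemble n_chars bytes from the low bits of the first samples.
    with wave.open(path, "rb") as src:
        frames = src.readframes(src.getnframes())
    tag = bytearray()
    for c in range(n_chars):
        tag.append(sum((frames[(c * 8 + i) * 2] & 1) << i for i in range(8)))
    return tag.decode("ascii")

# embed_tag("clip.wav", "tagged.wav", "AI-GENERATED")
# read_tag("tagged.wav", len("AI-GENERATED"))  # -> "AI-GENERATED"
```

The fragility is the point of the illustration: a tag like this is destroyed by the first MP3 re-encode or phone call, which is exactly why building watermarks that survive the real world, and getting every voice-cloning vendor to adopt them, is the hard part.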

[00:35:29] Bob: Okay, that’s really interesting.

[00:35:40] Chloe Autio: That’s right, yeah.

I mean that’s obviously a perceptible sort of key or certificate, right?

[00:36:33] Bob: Or hearing, yeah, yeah.

[00:36:34] Chloe Autio: Yes.

[00:36:53] Chloe Autio: Absolutely.

We should do something about that.

So how would that work?

[00:38:05] Chloe Autio: It’s really different for every organization, Bob.

[00:39:58] Bob: There is no obvious bulletproof technological solution, however.

Maybe it’s not.

Again, these are super human, these are super, as in very, human fixes.

[00:41:17] Chloe Autio: Yeah, I agree, Bob.

[00:41:58] Bob: Or consequence.

[00:41:59] Chloe Autio: Or consequence.

[00:42:00] Bob: I don’t see them getting arrested, yeah.

[00:42:01] Chloe Autio: They’re not.

I think that can be said for anything, right?

And such a good ob–, such a good and just a blunt observation.

It’s so true.

Do we, we don’t want that to be illegal, right?

So, so talk me off my, my ledge of wanting to arrest everyone involved in this.

[00:43:48] Chloe Autio: I love, Bob, your Bob Sullivan Police Force.

That’s a great one, and I certainly would take it seriously.

But I do want to talk about some areas of the law that sort of do apply, right?

Fraud and impersonation, we’ve talked about that a lot.

That’s definitely illegal.

Copyright infringement is another one.

So some of that content can be in the public domain.

There are not entirely clear rules about where and when that content can be scraped for model development.

And so you know some of these voice generation companies may be infringing copyright.

The third area that I think we could talk about as well is obviously privacy violations.

Um, but let me take a stab at putting a fine point on this part.

Um, the other idea is a new law that regulates things like deep fakes.

[00:47:06] Chloe Autio: Yeah, this is the million-dollar question, Bob, truly.

Maybe it’s a billion-dollar one by now, I’m not really sure.

How can we adapt it?

So for example, AI used to vet resumes.

[00:49:19] Chloe Autio: Correct, correct, correct.

[00:49:21] Bob: Yeah, yeah, yeah.

You do a good job of laying out how complex this is.

(laughs)

[00:49:32] Bob: Okay, so what do you want to leave people with?

They’ve just listened to this conversation.

We’ve scared them about fake voices that might help criminals steal their money.

We’ve hinted a little bit that AI can do a bunch of good things too.

You’re at the center of all of these people trying to make these sort of structural decisions.

Wh–, what do you want to leave people with?

[00:49:51] Chloe Autio: Hmm…

I want to tell people that, that they should always trust their gut.

And so I just, I just want people to remember to trust their gut.

If something feels wrong with the phone call, right, hang up.

That’s what I would say.

[00:51:21] Bob: Don’t lose the human element.

I’d like to add a big amen to that.

Call the AARP Fraud Watch Network Helpline at 877-908-3360.

Their trained fraud specialists can provide you with free support and guidance on what to do next.

That address again is: theperfectscampodcast@aarp.org.

Be sure to find us on Apple Podcasts, Spotify, or wherever you listen to podcasts.

For AARP’s The Perfect Scam, I’m Bob Sullivan.