
Now You See It

Riverbank Ruminations; Observations from The Banks of The Technology River

Tom Evans ~ Ashton Engineer Emeritus

What do these four images have in common?

These are all images of people, animals, and places that do not exist. This article started me down a typical internet rabbit hole concerning things (people, animals, places, and more) that exist only in computer-generated space.

Computer-generated people are nothing new; who can forget Jar Jar Binks? However, we are now entering the era when the artificial entity earns money. The fourth picture is of a character named Rozy, who has over 100 sponsorships. There is a character named Lil Miquela, not quite as realistic looking, who ‘earned’ $10 million (at $8,000 per sponsored post) in 2020. And there is Shudu, a high-fashion model with contracts with companies like Samsung and Balmain (which has a trio of virtual models).

The creators of Rozy employed a group of 22-year-olds to respond to social media posts directed at Rozy, and they got positive feedback about how trendy Rozy’s responses were. If you want some insight into the marketing driver behind Rozy, this video is an interesting few minutes. (A stray bit of information from the video: they are targeting Generation MZ. That was a new one for me; the category includes people born between 1977 and 2020.)

AI Affects Cybersecurity

As usual, this got me thinking about the security implications of this trend. More and more of these entities are being generated by AI. In some cases, two AI networks are pitted against each other: one generates fake people and the other tries to detect them, and each improves by competing against the other. The results have been very realistic. This kind of research has spilled over into voice and video. There are numerous examples of deepfake videos in which famous personalities appear to say things they have never actually said. The challenge presented by these technologies is that they are getting good enough to fool most people. The cues we use to identify someone (inflection, mannerisms, visual features) can be duplicated so completely that it is next to impossible to distinguish real from fake.
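
For the curious, here is a minimal sketch of that adversarial setup, commonly known as a generative adversarial network (GAN). It is illustrative only: I am assuming PyTorch, and toy 2-D points stand in for real images or voice samples. It does not reflect how any particular virtual influencer was actually built.

```python
# Minimal GAN sketch: a "forger" network and a "detector" network
# trained against each other. Toy 2-D data stands in for images.
import torch
import torch.nn as nn

latent_dim = 8

# Generator (the forger): turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator (the detector): scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for a dataset of real faces/voices: points near (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(2000):
    # Train the detector to tell real (label 1) from fake (label 0).
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the forger to make the detector call its fakes "real".
    fake = G(torch.randn(64, latent_dim))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The tug-of-war is the point: as the detector gets better, the forger is forced to produce more convincing fakes, which is exactly why the output of these systems keeps getting harder to spot.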

These capabilities become a problem when the bad guys get involved. This article discusses one case where an AI-generated voice convinced a company to send money to the criminals.

“The attackers responsible for defrauding the British energy company called three times, Mr. Kirsch said. After the transfer of the $243,000 went through, the hackers called to say the parent company had transferred money to reimburse the U.K. firm. They then made a third call later that day, again impersonating the CEO, and asked for a second payment. Because the transfer reimbursing the funds hadn’t yet arrived and the third call was from an Austrian phone number, the executive became suspicious. He didn’t make the second payment.”

Back in 2019, this case was considered unusual, and what made it unusual was the use of AI. Just recently, another attack succeeded to the tune of $35 million. Since AI can require significant computing resources, it might seem out of reach for most criminals. However, a search for ‘free deepfake voice generator’ yielded 448,000 hits, and a search for ‘free deepfake video generator’ returned over 5 million. What does this tell us? The technology is moving down the requirements ladder.

What Policies Should You Have to Counter AI?

All of this points to the need for policies and procedures as part of your security stance. Will you allow a change of payment destination based on a phone call? How about a video call? Will an email from the CEO be enough? Clearly, the technology has advanced to the point where our usual methods of verification are no longer adequate. Just as we need MFA for logging in, businesses need checks and multiple independent methods of validating transactions. During security training sessions that I run, I use several examples of companies being compromised by emails supposedly from a trusted source. One Ashton customer has a prominent footer on its emails stating that payment changes will never be initiated by email request alone.
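
To make the "multiple methods" idea concrete, here is a hypothetical sketch in Python. Every name in it is invented for illustration; the point is simply that a payment change proceeds only after confirmation over at least two independent channels, so a single convincing phone call (or deepfaked video) is never enough on its own.

```python
# Hypothetical two-channel approval check for a payment-change request.
# All names are illustrative, not a real product or API.
from dataclasses import dataclass, field

@dataclass
class PaymentChangeRequest:
    requester: str
    new_account: str
    confirmations: set = field(default_factory=set)  # verified channels

    def confirm(self, channel: str) -> None:
        # Only record confirmations the business itself initiated,
        # e.g. a call-back to the phone number already on file.
        self.confirmations.add(channel)

    def approved(self) -> bool:
        # Require at least two distinct, independent channels.
        return len(self.confirmations) >= 2

req = PaymentChangeRequest(requester="CEO", new_account="new-vendor-account")
req.confirm("callback_to_number_on_file")
print(req.approved())  # False: one channel alone never suffices
req.confirm("signed_request_via_vendor_portal")
print(req.approved())  # True: two independent channels agree
```

The design choice worth noticing is that the channels must be independent and initiated by you: a fraudster who can fake a voice on an inbound call should still be stopped by a call-back to a number on file or a separate signed confirmation.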

We are quickly approaching the point where we simply cannot believe our ears or our eyes when it comes to validating important transactions. Businesses need to think about how the usual methods of verification can be imitated and compromised, and make sure that important transactions are not governed by a single method. As usual, this makes security less convenient. On the other hand, how convenient would it be to lose $100,000 to a fraudulent transaction? It behooves all businesses to look at their policies and procedures and make sure they are not easily circumvented. Otherwise, your money might suddenly be a case of ‘now you see it, now you don’t’.

 
