So far, I’ve been pretty good at evading scams. I don’t get my ducts cleaned and I have yet to buy gift cards for my boss. BUT when a deepfake of my boss asks me to buy gift cards for duct cleaning, I’m toast.
Lately I've found myself curious about deepfakes and scams, and about deepfakes and advertising. The former should freak us all out as it becomes much harder to tell a Nigerian prince from your actual husband; the latter similarly seeks to leverage our trust to trick us. In both cases, the content is created with the intention to deceive, leaving you to figure out whether the face and voice asking for your information are legit or pure charlatanism.
Much like the ‘moment’ when we were enamoured with the potential of ‘big data’ before we could anticipate its many misapplications (e.g. surveillance capitalism), this moment when we are charmed by synthetic media feels similarly blind (other than deepfake porn, it all seems so…cool?) and even more urgent.
For instance, the influencer economy is predicated on individuals building digital audiences that they can monetize as trusted intermediaries. We require sponsored content (“sponcon”) to be labelled. But is it still sponcon if a synthetic likeness of a person, rather than the person themselves, is promoting a product or service? Where would accountability lie in that instance?
There may be no ‘problem’ with deepfakes in the legal, non-fraudulent sphere. But problems will start when your grandmother gets a deepfake call from her ‘banker’, or when the biometric footprint of her voice is mimicked so well that ‘she’ calls her bank and fakes it out.
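To make that “fakes it out” scenario concrete: voice authentication systems typically reduce a caller’s audio to an embedding (a ‘voiceprint’) and compare it against the one on file; anything close enough passes. Here’s a minimal sketch of that comparison logic, using made-up embeddings and a hypothetical threshold of my own invention, not any bank’s actual system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voiceprint embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(enrolled: np.ndarray, caller: np.ndarray, threshold: float = 0.85) -> bool:
    """Accept the caller if their voiceprint is close enough to the enrolled one.
    The catch: a good enough clone clears the exact same bar."""
    return cosine_similarity(enrolled, caller) >= threshold

# Toy example: an enrolled voiceprint, a close clone of it, and a stranger.
rng = np.random.default_rng(0)
grandma = rng.normal(size=256)
clone = grandma + rng.normal(scale=0.1, size=256)  # a near-perfect mimic
stranger = rng.normal(size=256)                    # an unrelated voice

print(verify_caller(grandma, clone))     # True  -- the clone passes
print(verify_caller(grandma, stranger))  # False -- a random voice does not
```

The point of the sketch is that the system has no notion of a “real” person, only of distance between voiceprints, so a clone that lands inside the threshold is, to the bank, grandma.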
Another issue is that the government and phone companies seem to have lost all control of the cell phone network. Not to seem unpopular, but most of my incoming calls are robocall spam linked to fraud centres abroad (which almost makes me a little more open to that email from the prince). It makes you wonder what the role could or should be for private firms in protecting against deepfake scams, or in pledging not to use deepfakes as a tactic for cheaper-than-usual advertising.
Amidst all this, the big tech platforms are laying off their misinformation teams, which increases the likelihood that we will see or click on junk in our feeds.
So, what happens when the deepfakes get really good? How do you get people to stop trusting familiar voices on the phone, or even video calls on FaceTime or Zoom?
To date, the government’s approach has generally been to preach vigilance, while online firms accept self-attestation that you aren’t a robot via CAPTCHA verification. The CRTC offers guidance on protecting yourself against spam. Ontario’s Consumer Protection authority provides education on common scams, how to identify them, and what to do if you’ve been the victim of one. In theory, deepfakes could also be considered a form of false and misleading advertising under the federal Competition Act.
This short take from Torys offers a perspective on deepfakes that is super personal: what if someone deepfakes YOU? It surveys personal privacy, intentional infliction of emotional distress, defamation, breach of confidence and public disclosure of private facts, “false light” (I had never heard of this before!), the impact on businesses, privacy torts, copyright, intentional interference with economic relations, and whom to sue. This is all useful if you need to defend your own likeness, but what about when a deepfake of someone else inflicts harm on you?
The emerging sophistication of deepfakes makes it unlikely that recipients will be able to easily recognize a digital racket. The technology demands a stronger policy response: investment in better detection and in enforcing new standards to combat deepfakes, rather than leaving each of us to independently defend our own likeness.
If we don’t amp up a robust policy response fast, that will be a scam, too.