This is a newsletter about public policy and technology [regulation stuff] in Canada. 🇨🇦
I wrote [some of] this as part of a “Certified Ethical Emerging Technologist Professional Certificate” course that I am taking. It’s on the online learning platform “Coursera” and it’s called “Promote the Ethical Use of Data-Driven Technologies.”
When it comes to considering how to regulate technology, Canada continues to watch California, ready to copy + paste its progress in an act of eager yet cautious replication. The state is setting new North American standards through its new privacy law and is currently deciding how gig companies can (or can’t) classify their workers via Proposition 22.
Last year, the Golden State made a modest but meaningful intervention - it banned companies from making chatbots appear as if they are real humans. The law came into effect on July 1st, 2019, and it has the *best* acronym - the “BOT” bill, for “Bolstering Online Transparency.”
*“Chatbots” are computer programs that simulate a human conversation, typically via text or voice. You’ve likely encountered one when shopping online - a little box pops up in the corner asking if you would like to talk to someone for help, often in an excessively peppy tone (!!). Sometimes disguised as a real-life human, chatbots interpret your question and respond with predetermined answers. This helps the organizations that use them decrease call volume and - in theory - frees up humans for more complex tasks. Chatbots typically use AI to expand their knowledge and address harder questions.
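For a sense of what “predetermined answers” means in practice, here’s a minimal, purely illustrative sketch of a keyword-matching chatbot - the keywords and canned responses are invented for this example, and real products layer AI/NLP on top of this basic pattern:

```python
# A minimal, illustrative sketch of a rule-based customer service chatbot:
# it matches keywords in the user's question to predetermined answers.
# The keywords and responses below are hypothetical examples.

CANNED_RESPONSES = {
    "refund": "You can request a refund from your order history page.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "hours": "Our support team is available 24/7 via this chat.",
}

FALLBACK = "I'm not sure about that one - let me connect you with a person."

def reply(user_message: str) -> str:
    """Return a predetermined answer for the first keyword found, or a fallback."""
    text = user_message.lower()
    for keyword, answer in CANNED_RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

if __name__ == "__main__":
    print(reply("How long does shipping take?"))   # -> shipping answer
    print(reply("Can I ask about my warranty?"))   # -> fallback to a human
```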
There’s been an explosion of chatbot companies as AI has grown (it’s very - “where is my jetpack?”). The chatbot market size is projected to grow from $2.6B in 2019 to $9.4B by 2024. Chatbots (sometimes referred to as “virtual assistants” AKA Siri, Alexa, Cortana, etc) are pretty useful - they allow firms to offer round-the-clock support and add capacity.
Back to Cali: the BOT Bill has been criticized for ambiguity in broadly defining a “bot,” lumping together customer service chatbots with fake Twitter accounts or political bots. In California, if companies use a bot to communicate with their customers or with the public online, they must disclose this fact or face a penalty of $2,500 per violation. The legislation is silent on the role of online platforms in disclosing bots or creating mechanisms to report them. Instead, this responsibility falls to the creator of the bot.
The thing is - how do you “catch” this deceptive online activity, and is it “worth it” to invest in enforcement efforts?
In theory, individuals could self-report when a bot fails to disclose its artificial nature - but it can be difficult for the average person to discern whether they are engaging with a human being or a computer program - hence the need for proactive admission.
Better labelling of customer service chatbots would be one way to put more ethical applications of AI into practice, making the bots more transparent as a mechanism to build and maintain trust with the people they chat with. Corporate chatbots are a smart place to start: they are mostly innocuous, and could let policymakers test levers for bigger bot problems like mis- and dis-information.
As it stands, chatbots blather in a regulatory gray zone.
That hasn’t stopped firms like Toronto-based Ada (which makes personalized AI chatbots) from raising a $63.7M Series B to automate support teams.
Woebot offers personal psychotherapy support - it has “bot” right in the name, but is it clear to every user that they aren’t speaking with a human therapist?
A Canadian legal-software product called Destin.ai describes itself as an “immigration assistant” - the website employs a Facebook Messenger chatbot to determine the immigration eligibility of the inquirer. Could this be mistaken for an endorsement from an immigration official?
Like, I used to have an Invisible Boyfriend (RIP “Garfield Yamaha”) but turns out, that wasn’t a bot, it was an actual human writing. 💔
The Government of Canada is testing a chatbot that would help people register and get a My Service Canada Account (“MSCA”). While I doubt it will prove a wolf in chatbot’s clothing, we should expect the bot not to impersonate a human - right?
*Because the chatbot is being tested, it doesn’t fall under the Directive on Automated Decision-Making.
Dressing up chatbots as humans to perform basic customer service is deceptive - a kind of AI-driven catfishing.
Inauthentic online engagement could risk making more people vulnerable to scams. It also erodes trust.
We need to legislate these bots in order to achieve a healthier and more transparent internet.
But Canada has been slow to act on AI, seemingly more focussed on trying to understand its potential impact than on actually regulating it (am I right, ladies?). We shouldn’t need to wait for a headline-making example of a chatbot-led scam to spur us into solutions-mode.
Policymakers [mostly the Ministry of Government and Consumer Services] can proactively lead with the principle of beneficence, acting as entrepreneurial regulators that anticipate citizens’ digital needs.
While we don’t have specific consumer legislation that targets AI tools like chatbots, we DO have existing laws that prohibit the negative impacts that might arise from the use of algorithms to make decisions:
Provincial Human Rights Codes/the Canadian Charter of Rights + Freedoms prohibit discrimination based on various enumerated grounds in the public and private sectors;
Federal and provincial human rights codes protect against discrimination in the delivery of services, accommodation, contracts, and employment;
Canada’s Competition Act prohibits deceptive marketing practices and false and misleading representations.
💌 What could a provincial policy agenda for chatbots look like?
As in California, chatbots should not be able to appear as if they are real humans. They should clearly disclose that they are bots. That’s step one.
As in the European Union [under GDPR], chatbots should not be able to do things like approve a consumer for a loan. We should set clear boundaries for the kind of actions/assessments a bot can undertake via “ADM” (Automated Decision Making, not “Assistant Deputy Minister”). Quebec may be leading the way here (what else is new?) with Bill 64 - An Act to modernize legislative provisions as regards the protection of personal information. If the Bill passes in its current state, it will introduce an obligation on “any person carrying on an enterprise” to inform individuals when they “use information to render a decision based exclusively on automated processing,” at the time of or before the decision. 👀
People should always have the option of engaging with a human when presented with a chatbot - the equivalent of dialling “0” when listening to a phone tree (there’s a rough sketch of what this and the disclosure rule could look like in code, just after this list).
Companies could be required to disclose how many bots they “employ,” if we want to take the approach that bots are digital workers that replace/substitute for a human. This could help us understand the dynamics and magnitude of the potential displacement effects of the bot economy.
Other accountability mechanisms like a registration process for AI or insurance relationships should be evaluated.
+ There’s some discussion in the PIPEDA Discussion Paper about the need to increase transparency on the use of automated decision-making.
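To make the first and third asks concrete, here’s a hypothetical sketch of what disclosure-up-front and a “dial 0” handoff could look like inside a chatbot flow - it doesn’t reflect any actual law’s wording or any vendor’s product, and the trigger phrases are invented:

```python
# A hypothetical sketch of the "disclose you're a bot" and
# "always offer a human" asks inside a chatbot conversation loop.

from dataclasses import dataclass

# Invented example phrases that would trigger a handoff to a person.
HUMAN_REQUEST_PHRASES = ("talk to a human", "speak to a person", "agent")

@dataclass
class ChatSession:
    disclosed: bool = False   # has the bot identified itself yet?
    escalated: bool = False   # has the user been handed to a person?

def next_message(session: ChatSession, user_message: str) -> str:
    # Step one (the California-style rule): disclose before anything else.
    if not session.disclosed:
        session.disclosed = True
        return "Hi! I'm an automated chatbot, not a human. How can I help?"

    # The "dial 0" equivalent: any request for a person triggers a handoff.
    if any(phrase in user_message.lower() for phrase in HUMAN_REQUEST_PHRASES):
        session.escalated = True
        return "No problem - connecting you with a human agent now."

    return "Here's what I found for that..."  # a normal bot answer would go here
```

The point of the sketch is that neither ask is technically onerous - disclosure is one message and escalation is one check - which matters when weighing industry objections to regulation.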
What’s a little different w/ chatbots is that they’re not *always* used in a decision-making system, but there’s a dignity element to unknowingly interfacing with a robot. 🤖
Enforcement capacity is critical in establishing “trustworthy” approaches to AI. Legislation that better moderates our online lives is just a first step in building a more just internet experience. Without subsequent investment in dynamic enforcement mechanisms, lawmakers won’t be able to truly protect consumers from a chatbot’s deceit.
🌵 If we’re training our policy agenda on California’s state legislature, there is much to learn in service of citizens.
🤓 Vass Bednar is a smart generalist working at the intersection of technology and public policy.