💬 All Talk
On the internet, nobody knows you're a chatbot
This is a newsletter about public policy and technology [regulation stuff] in Canada. 🇨🇦
I wrote [some of] this as part of a "Certified Ethical Emerging Technologist Professional Certificate" course that I am taking. It's on the online learning platform Coursera and it's called "Promote the Ethical Use of Data-Driven Technologies."


When it comes to considering how to regulate technology, Canada continues to watch California, ready to copy + paste its progress in an act of eager yet cautious replication. The state is setting new North American standards through its new privacy law and is currently deciding how gig companies can (or can't) classify their workers via Proposition 22.


Last year, the Golden State made a modest but meaningful intervention - it banned companies from making chatbots appear as if they are real humans. The law came into effect on July 1st, 2019, and it has the *best* acronym - the "BOT" bill, for "Bolstering Online Transparency."
*"Chatbots" are computer programs that simulate a human conversation, typically via text or voice. You've likely encountered one when shopping online - a little box pops up in the corner asking if you would like to talk to someone for help, often in an excessively peppy tone (!!). Sometimes disguised as a real-life human, chatbots interpret your question and respond with predetermined answers. This helps the organizations that use them decrease call volume and - in theory - frees up humans for more complex tasks. Chatbots typically use AI to expand their knowledge and address harder questions.
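(For the curious: the simplest version of "predetermined answers" is just keyword matching against canned replies. Here's a minimal, purely illustrative sketch in Python - not any vendor's actual code, and real products layer NLP/ML on top of something like this:)

```python
# A toy "predetermined answers" bot: scan for keywords, return canned replies.
# All keywords and answers below are invented for illustration.
CANNED_RESPONSES = {
    "shipping": "Standard shipping takes 3-5 business days!",
    "return": "You can return any item within 30 days for a refund!",
    "hours": "We're available 24/7 - that's the whole point of a bot!",
}
FALLBACK = "Sorry, I didn't catch that. Try asking about shipping, returns, or hours."

def reply(message: str) -> str:
    """Match keywords in the user's message to a predetermined answer."""
    text = message.lower()
    for keyword, answer in CANNED_RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("How long does shipping take?"))
# -> Standard shipping takes 3-5 business days!
```

The point of the sketch is how little "intelligence" a bot actually needs to pass for a peppy customer service rep.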
There's been an explosion of chatbot companies as AI has grown (it's very - "where is my jetpack?"). The chatbot market size is projected to grow from $2.6B in 2019 to $9.4B by 2024. Chatbots (sometimes referred to as "virtual assistants," AKA Siri, Alexa, Cortana, etc.) are pretty useful - they allow firms to offer round-the-clock support and add capacity.
Back to Cali: the BOT Bill has been criticized for ambiguity in broadly defining a "bot," lumping together customer service chatbots with fake Twitter accounts or political bots. In California, if companies use a bot to communicate with their customers or with the public online, they must disclose this fact or face a penalty of $2,500 per violation. The legislation is silent on the role of online platforms in disclosing bots or creating mechanisms to report them. Instead, this responsibility falls to the creator of the bot.
The thing is - how do you "catch" this deceptive online activity, and is it "worth it" to invest in enforcement efforts?
In theory, individuals could report a bot that did not "self-disclose" its artificial nature - but it can be difficult for the average person to discern whether they are engaging with a human being or a computer program - hence the need for proactive admission.


Better labelling of customer service chatbots would be one way to demonstrate more ethical applications of AI, making bots more transparent as a mechanism to build and maintain trust with the people they chat with. Corporate chatbots are a smart place to start: they are mostly innocuous, and could let policymakers start to test levers for bigger bot problems like mis- and dis-information.
As it stands, chatbots blather in a regulatory gray zone.
That hasn't stopped firms like Toronto-based Ada (which makes personalized AI chatbots) from raising a $63.7M Series B to automate support teams.


Woebot offers personal psychotherapy support - it has "bot" in the company name, but is it clear to everyone that they aren't speaking with a human therapist?


Destin.ai, a Canadian legal software company, describes itself as an "immigration assistant" - its website employs a Facebook Messenger chatbot to determine the immigration eligibility of the inquirer. Could this be mistaken for an endorsement from an immigration official?


Like, I used to have an Invisible Boyfriend (RIP "Garfield Yamaha"), but it turns out that wasn't a bot - it was an actual human writing.
The Government of Canada is testing a chatbot that would help people register for a My Service Canada Account ("MSCA"). While I doubt it will prove a wolf in chatbot's clothing, we should expect the bot not to impersonate a human - right?
*Because the chatbot is being tested, it doesn't fall under the Directive on Automated Decision-Making.
Dressing up chatbots as humans to perform basic customer service is deceptive - a kind of AI-driven catfishing.
Inauthentic online engagement could risk making more people vulnerable to scams. It also erodes trust.
We need to legislate these bots in order to achieve a healthier and more transparent internet.
But Canada has been slow to rein in AI use; seemingly more focussed on trying to understand the potential impact of AI than on actually regulating it (am I right, ladies?). We shouldn't need to wait for a headline-making example of a chatbot-led scam to spur us into solutions-mode.
Policymakers [mostly the Ministry of Government and Consumer Services] can proactively lead with the principle of beneficence, acting as entrepreneurial regulators who anticipate citizens' digital needs.
While we donât have specific consumer legislation that targets AI tools like chatbots, we DO have existing laws that prohibit the negative impacts that might arise from the use of algorithms to make decisions:
Provincial Human Rights Codes/the Canadian Charter of Rights + Freedoms prohibit discrimination based on various enumerated grounds in the public and private sectors;
Federal and provincial human rights codes protect against discrimination in the delivery of services, accommodation, contracts, and employment;
Canadaâs Competition Act prohibits deceptive marketing practices and false and misleading representations.
What could a provincial policy agenda for chatbots look like?
As in California, chatbots should not be able to appear as if they are real humans. They should clearly disclose that they are bots. Thatâs step one.
As in the European Union [under GDPR], chatbots should not be able to do things like approve a consumer for a loan. We should set clear boundaries for the kind of actions/assessments a bot can undertake via "ADM" (Automated Decision-Making, not "Assistant Deputy Minister"). Quebec may be leading the way here (what else is new?) with Bill 64 - An Act to modernize legislative provisions as regards the protection of personal information. If the Bill passes in its current state, it will introduce an obligation on "any person carrying on an enterprise" to inform individuals when they "use information to render a decision based exclusively on automated processing" at the time of or before the decision.
People should always have the option of engaging with a human when presented with a chatbot - the equivalent of dialling "0" when listening to a phone tree (see the code sketch after this list).
Companies could be required to disclose how many bots they "employ," if we want to take the approach that bots are digital workers that replace/substitute for a human. This could help us understand the dynamics and magnitude of the potential displacement effects of the bot economy.
Other accountability mechanisms like a registration process for AI or insurance relationships should be evaluated.
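To make the first three planks concrete, here's a rough Python sketch of what they could look like inside a bot's conversation logic. This is hypothetical - no statute (California's, Quebec's, or otherwise) prescribes this exact code, and the restricted topics and messages are invented:

```python
# A hypothetical conversation handler baking in planks 1-3 above.
DISCLOSURE = "Heads up: I'm an automated chatbot, not a human."  # plank 1

# Plank 2: decisions the bot must never make via automated processing (ADM)
RESTRICTED_TOPICS = ("loan approval", "immigration eligibility")

def handle(message: str, first_turn: bool) -> str:
    text = message.lower()

    # Plank 3: an always-available exit to a person, like dialling "0"
    if "human" in text or text.strip() == "0":
        return "Connecting you to a human agent now."

    # Plank 2: refuse restricted automated decisions and route to a person
    if any(topic in text for topic in RESTRICTED_TOPICS):
        return "I can't decide that automatically - a human will review your request."

    answer = "Here's what our FAQ says..."  # stand-in for the bot's normal reply
    # Plank 1: disclose bot status before anything else
    return f"{DISCLOSURE} {answer}" if first_turn else answer
```

The design choice worth noting: disclosure and escalation sit at the top of the handler, before any "smarts" run - transparency as a default, not an afterthought.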
+ There's some discussion in the PIPEDA Discussion Paper about the need to increase transparency on the use of automated decision-making.
What's a little different w/ chatbots is that they're not *always* used in a decision-making system, but there's a dignity element in interfacing with a robot (without knowing).
Enforcement capacity is critical in establishing "trustworthy" approaches to AI. Legislation that better moderates our online lives is just a first step in building a more just internet experience. Without subsequent investment in dynamic enforcement mechanisms, lawmakers won't be able to truly protect consumers from a chatbot's deceit.
If we're training our policy agenda on California's state legislature, there is much to learn in service of citizens.

Vass Bednar is a smart generalist working at the intersection of technology and public policy.