regs to riches
💬 All Talk
www.regs2riches.com
On the internet, nobody knows you're a chatbot

Vass Bednar
Nov 1, 2020

This is a newsletter about public policy and technology [regulation stuff] in Canada. 🇨🇦


I wrote [some of] this as part of a “Certified Ethical Emerging Technologist Professional Certificate” course that I am taking. It’s on the online learning platform “Coursera” and it’s called “Promote the Ethical Use of Data-Driven Technologies.”

MIT Technology Review @techreview, Oct 25, 2020:
“GPT-3, the most powerful natural-language generator yet, also spits out hate speech, misogynistic and homophobic abuse, and racist rants. Researchers are trying to make chatbots like it safer for the public to actually use, but there's no easy fix.”
Linked: “How to make a chatbot that isn’t racist or sexist” (bit.ly)

When it comes to considering how to regulate technology, Canada continues to watch California, ready to copy + paste its progress in an act of eager yet cautious replication. The state is setting new North American standards through its new privacy law and is currently deciding how gig companies can (or can’t) classify their workers via Proposition 22.

Motherboard @motherboard, Oct 27, 2020:
“Being forced to reclassify drivers in California could spell doom for Uber's dreams of global domination.”
Linked: “Why November 3rd Just Became Critical for Uber” (bit.ly)

Last year, the Golden State made a modest but meaningful intervention - it banned companies from making chatbots appear as if they are real humans. The law came into effect on July 1st, 2019, and it has the *best* acronym - the “BOT” bill, for “Bolstering Online Transparency.”

*“Chatbots” are computer programs that simulate a human conversation, typically via text or voice. You’ve likely encountered one when shopping online - a little box pops up in the corner asking if you would like to talk to someone for help, often in an excessively peppy tone (!!). Sometimes disguised as a real-life human, chatbots interpret your question and respond with predetermined answers. This helps the organizations that use them decrease call volume and - in theory - frees up humans for more complex tasks. Chatbots typically use AI to expand their knowledge and address harder questions.
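The basic mechanics described above - interpret a question, reply with a predetermined answer - can be sketched as a toy keyword-matching bot. Everything here is illustrative (the keywords and canned replies are made up, and real products layer AI on top of this idea):

```python
# A toy rule-based chatbot: match keywords in the customer's question
# to canned replies. Hypothetical rules for illustration only.
RULES = {
    "refund": "You can request a refund within 30 days from your order page.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(question: str) -> str:
    """Return the first canned answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, answer in RULES.items():
        if keyword in q:
            return answer
    # No rule matched: fall back to a generic prompt (or, ideally, a human).
    return FALLBACK

print(reply("When will my SHIPPING arrive?"))
```

Note that nothing in this loop identifies itself as a bot - disclosure has to be added deliberately, which is exactly what the California law targets.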

There’s been an explosion of chatbot companies as AI has grown (it’s very - “where is my jetpack?”). The chatbot market size is projected to grow from $2.6B in 2019 to $9.4B by 2024. Chatbots (sometimes referred to as “virtual assistants” AKA Siri, Alexa, Cortana, etc) are pretty useful - they allow firms to offer round-the-clock support and add capacity.


Back to Cali: the BOT Bill has been criticized for ambiguity in broadly defining a “bot,” lumping together customer service chatbots with fake Twitter accounts or political bots. In California, if companies use a bot to communicate with their customers or with the public online, they must disclose this fact or face a penalty of $2,500 per violation. The legislation is silent on the role of online platforms in disclosing bots or creating mechanisms to report them. Instead, this responsibility falls to the creator of the bot. 

The thing is - how do you “catch” this deceptive online activity, and is it “worth it” to invest in enforcement efforts?

In theory, individuals could self-report if a bot did not disclose its own artificial status - but it can be difficult for the average person to discern whether they are engaging with a human being or a computer program - hence the need for proactive admission.

Social Innovation @SSIReview, Oct 28, 2020:
“AI tools for #fundraising include online chatbots, software to ID prospective donors, and algorithms that analyze donor data. @Afine & @kanter share how #nonprofits are using these tools and reflect on how they can create a new chapter in #fundraising.”
Linked: “Rehumanizing Fundraising With Artificial Intelligence” (ssir.org)

Better labelling of customer service chatbots would be a way to express more ethical applications of AI, making the bots more transparent as a mechanism to build and maintain trust with the people they chat with. Corporate chatbots are a smart place to start: they are mostly innocuous, and could let policymakers begin to test levers for bigger bot problems like mis- and dis-information.

As it stands, chatbots blather in a regulatory gray zone.

That hasn’t stopped firms like Toronto-based Ada (which makes personalized AI chatbots) from raising a $63.7M Series B to automate support teams.

Ada @AdaSupport, Oct 8, 2020:
“We might be biased, but we suspect that we have the best clients. Congrats to Ada clients Mailchimp, InVision, and Webflow on making the Forbes Cloud 100 list! #customersupport”
Linked: “The Cloud 100 2020” (share.ada.support)

Woebot offers personal psychotherapy support - it has “bot” in the company name, but is it clear to everyone that they aren’t speaking with a human therapist?

Woebot @HiWoebot, Oct 21, 2020:
“It’s not always about doing the most. Some days it’s about doing the least. Let’s change the conversation around always having to go hard - the quiet defiance of getting up every day is enough. #MentalHealth #Mindfulness #CBT”

Destin.ai, a Canadian legal software company, describes itself as an “immigration assistant” - its website employs a Facebook Messenger chatbot to determine the immigration eligibility of the inquirer. Could this be mistaken for an endorsement from an immigration official?

Destin AI @destin_ai, Apr 17, 2019:
“We love hearing from our happy clients! 😊💜 We created Destin AI to help applicants better navigate the immigration system so that people from all around the globe could have a chance to realize their Canadian dream. Learn more at destin.ai”

Like, I used to have an Invisible Boyfriend (RIP “Garfield Yamaha”) but turns out, that wasn’t a bot, it was an actual human writing. 💔

The Government of Canada is testing a chatbot that would help people register for and get a My Service Canada Account (“MSCA”). While I doubt it will prove a wolf in chatbot’s clothing, we should expect the bot not to impersonate a human - right?

*Because the chatbot is being tested, it doesn’t fall under the Directive on Automated Decision-Making.

Dressing up chatbots as humans to perform basic customer service is deceptive - a kind of AI-driven catfishing.

Inauthentic online engagement could risk making more people vulnerable to scams. It also erodes trust.

We need to legislate these bots in order to achieve a healthier and more transparent internet. 

But Canada has been slow to moderate AI use, seemingly more focussed on trying to understand the potential impact of AI than on actually regulating it (am I right, ladies?). We shouldn’t need to wait for a headline-making example of a chatbot-led scam to spur us into solutions-mode.

Policymakers [mostly the Ministry of Government and Consumer Services] can proactively lead with the principle of beneficence, acting as entrepreneurial regulators who anticipate citizens’ digital needs.

While we don’t have specific consumer legislation that targets AI tools like chatbots, we DO have existing laws that prohibit the negative impacts that might arise from the use of algorithms to make decisions:

  • Provincial Human Rights Codes/the Canadian Charter of Rights + Freedoms prohibit discrimination based on various enumerated grounds in the public and private sectors;

  • Federal and provincial human rights codes protect against discrimination in the delivery of services, accommodation, contracts, and employment;

  • Canada’s Competition Act prohibits deceptive marketing practices and false and misleading representations.

💌 What could a provincial policy agenda for chatbots look like? 

  1. As in California, chatbots should not be able to appear as if they are real humans. They should clearly disclose that they are bots. That’s step one.

  2. As in the European Union [under GDPR], chatbots should not be able to do things like approve a consumer for a loan. We should set clear boundaries for the kinds of actions/assessments a bot can undertake via “ADM” (Automated Decision Making, not “Assistant Deputy Minister”). Quebec may be leading the way here (what else is new?) with Bill 64 - An Act to modernize legislative provisions as regards the protection of personal information. If the Bill passes in its current state, it will introduce an obligation on “any person carrying on an enterprise” to inform individuals when they “use information to render a decision based exclusively on automated processing,” at the time of or before the decision. 👀

  3. People should always have the option of engaging with a human when presented with a chatbot - the equivalent of dialling “0” when listening to a phone tree. 

  4. Companies could be required to disclose how many bots they “employ,” if we want to take the approach that bots are digital workers that replace/substitute for a human. This could help us understand the dynamics and magnitude of the potential displacement effects of the bot economy. 

  5. Other accountability mechanisms like a registration process for AI or insurance relationships should be evaluated. 
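Points 1 and 3 above are concrete enough to sketch in code. Here's a hypothetical illustration (the messages and trigger word are my own inventions, not anything prescribed by the California law) of a bot that self-identifies up front and always honours an escape hatch to a human:

```python
# Hypothetical compliance sketch: disclosure first, human handoff always.
DISCLOSURE = "Hi! I'm an automated chatbot, not a person."
HANDOFF_HINT = "Type 'human' at any time to reach a live agent."

def open_session() -> str:
    # Step one: the bot discloses that it is a bot before anything else.
    return f"{DISCLOSURE} {HANDOFF_HINT}"

def handle(message: str) -> str:
    # The "dial 0" equivalent: a request for a human is always honoured.
    if "human" in message.lower():
        return "Connecting you with a human agent now."
    return "I'm a bot with limited answers - how can I help?"
```

The design point is that disclosure lives in the session opener, not buried in a terms-of-service page, and the handoff check runs on every message rather than only at the start.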


+ There’s some discussion in the PIPEDA Discussion Paper about the need to increase transparency on the use of automated decision-making.

What’s a little different w/ chatbots is that they’re not *always* used in a decision-making system, but there’s a dignity element in interfacing with a robot (without knowing). 🤖

Enforcement capacity is critical in establishing “trustworthy” approaches to AI. Legislation that better moderates our online lives is just a first step in building a more just internet experience. Without subsequent investment in dynamic enforcement mechanisms, lawmakers won’t be able to truly protect consumers from a chatbot’s deceit.   

🌵 If we’re training our policy agenda on California’s state legislature, there is much to learn in service of citizens.

🤓 Vass Bednar is a smart generalist working at the intersection of technology and public policy.

