Last year, The Intercept detailed how big tech manipulates academia to avoid regulation. Their analysis indicated that large tech companies hiring and funding academic experts is an intentional strategy to avoid legally enforceable restrictions on controversial technologies.
The phenomenon of corporate capture of academia by large technology companies has been discussed vigorously online after Google’s firing of Timnit Gebru - one of the most highly regarded AI ethics researchers in the world - after she was critical of the firm’s approach to ethical AI.
In an unusual turn of events, the firing played out publicly over Twitter, with Timnit and her team live tweeting throughout the days, and now weeks, of turmoil, dissent, and demands for authentic accountability.
This real-time rupturing of a research community that focuses on fairness, inclusion and ethics in AI raises big questions, like:
how can employees hold Big Tech accountable from within,
can research undertaken at a corporation be truly independent,
what kind of reporting structures are optimal to maintain and reinforce that independence,
what are the terms and conditions of research supported by corporate giants, and
what should they be?
Some of the answer lies in determining what constitutes a healthy research environment at a corporate institution: doing excellent research, which means presenting at conferences and publishing papers, and building good relationships with the academic community. Yet there will always be trade-offs between academic freedom and corporate salaries. A major difference now is that there are more corporate research jobs than there used to be, and fewer academic positions.
Timnit has noted that there needs to be a lot more independent research. The associated questions remain: who will fund it, and how can we ensure that even funded research maintains its academic integrity and independence?
Below is an excerpt from Zephyr Teachout's *Break 'Em Up: Recovering Our Freedom from Big Ag, Big Tech, and Big Money*:
Indeed, more than either of the other two giants, Google has burrowed deeply inside the intellectual leadership of the [US]. As a result, its power can seem natural instead of artificial. It funded Harvard’s Berkman Klein Center for Internet and Society in the mid-2000s, gave $2 million to fund the Stanford Center for Internet and Society in 2006, and has funded countless conferences and events. It has recruited and cultivated hundreds of law professors who support its views. And it does not react kindly when those views are questioned.
Largely in a spectator role as these conversations play out, Canada should carefully consider the implications for its scholarship, public university system and immigration policies. The dynamics and implications of so-called academic capture are especially under-explored. For instance, when academics are hired away from the academy, those hires influence perceptions of demand for certain credentials, which are heavily subsidized by the state and then captured by private interests. Indeed, these courtships can carry implications for immigration processes and the 'war for talent,' dictating in-demand credentials like computer science and engineering.
Some of the other dynamics at play:
First, large tech firms have the capital to hire away the best and the brightest - essentially neutralizing experts who may criticize them.
In the US, the average salary for an assistant professor is about $70,000/year, whereas the starting salary at a technology company can be in the low six figures. Of course, there is significant variance based on institution, discipline, and years of experience, but generally it is more financially rewarding for someone with a PhD to work at a technology company. Plus, there are fewer jobs available in academia and less public funding for research work.
Second, large tech firms may exert corporate influence on this research, privileging research pathways that contribute to product development over critical views.
This is not a new risk - rather, a familiar tension. We have seen how 'big pharma', 'big tobacco' and 'big cannabis' have similarly courted top scholars. The ethical AI space is similar, and there is a more acute awareness in recent years of the field's lack of inclusivity for racialized minorities, worsening their underrepresentation and sometimes automating discrimination as a result.
Third, large tech firms fund research programs that are anchored on academic campuses, such as the MIT-IBM Watson AI Lab collaboration or the MIT Quest for Intelligence program.
Corporations may fund academic research labs for a number of reasons: recruiting, access to that research, public relations benefits and/or tax write-offs. In many instances, these funds catalyze important research that might not otherwise be possible, and provide critical training and learning opportunities for students.
Fourth, technology companies offer virtually unlimited computing resources and other research project perks that universities may not.
Mathana (@StenderWorld): "Personally, I'd say school surveillance, student tracking & police-accessible doorbell cameras are super unethical applications of facial recognition. So...maybe we shouldn't let Amazon & Microsoft policy teams help draft anymore 'ethical framework'?" https://t.co/77EZboWHlA https://t.co/nynRFhbCrd
Canada is no less vulnerable to this conflation of private and public interests, albeit at a smaller scale. More work should be done to understand the motivations and trade-offs scholars weigh when accepting a recruitment offer from a large firm, applying to work off campus, or accepting industry funds to bolster their research. These factors may include, but are not limited to: access to computing resources, research capacity, supportive resources, and pace of work.
It is also worth remembering that the number of PhDs at a technology company can influence and inflate its valuation. At its height, the recently acquired Element AI had more than 500 employees, including 100 PhDs. It would be interesting to assess whether these researchers have returned to the post-secondary sector or been hired by other, similar private firms. The company was also notably active in the ethical AI policy space, working to connect the dots between AI ethics principles, human rights law, and industry standards to support rights-respecting AI.
Now that Element AI has been sold to ServiceNow - its policy team eliminated as part of the acquisition - it remains to be seen which Canadian actors will fill the prominent ethical AI policy space it has vacated, and whether those actors will or can enjoy full independence from corporate influence. While it would be clumsy to fuse policy advocacy with academically relevant research work, a superficial skim suggests that few of these institutions are totally independent of corporate interests.
Element had notably partnered with the Mozilla Foundation to build data trusts and advocate for the ethical data governance of AI - a practical application of research that pilots implementation but might muddle the distinction between public policy and research work. Borealis AI, created by the Royal Bank of Canada, recently published an op-ed championing Canada's opportunity to ensure AI remains a force for good. However, this is more of a public policy stance from a major bank than academic research.
The heavy-hitting Canadian Institute for Advanced Research (CIFAR), a Canadian-based global research organization stewarding Canada's $125M Pan-Canadian Artificial Intelligence Strategy, is largely supported by the governments of Canada, British Columbia, Alberta, Ontario, and Quebec, but Facebook and the RBC Foundation have also supported the strategy for undisclosed amounts.
Across the street, the University of Toronto's Schwartz Reisman Institute for Technology and Society is supported by a landmark $100M gift from Gerald Schwartz (Onex Corporation) and Heather Reisman (Indigo).
🤷‍♀️ How might we view the Institute differently if it were funded by a fintech like Plaid, or by Amazon?
The Vector Institute for Artificial Intelligence is partially supported by Google, Facebook, Accenture, and Nvidia. This partial support may garner these technology companies access to on-the-ground research that informs their product development.
In pretty much all of these instances, private dollars support research efforts that benefit the public good while also serving corporate interests. Advancements in artificial intelligence seem to necessitate a mix of government investment, university research, large companies, and startups. The reach of these investments is deserving of more study, as is the comparable under-investment in scholarship by large Canadian firms.
🇨🇦 What is the right ‘mix’ for Canada?
This question of who or what will invest in such research remains, and could become more pronounced post-pandemic as resources become scarcer.
As a net tech importer, Canada could lead in this space with radical transparency and a preference for no-strings-attached funding that is met with less skepticism and more trust. We have an opportunity to set a clear and high standard here, with statements or contracts governing independent research, whether it takes place on or off campus. We can also explore more checks and balances, like mandating disclosure of research funding sources in a way that is mutually beneficial to both actors.
The worst thing we could do is pretend that these tensions can't, won't, or don't manifest here. 😉
Vass Bednar is the Executive Director of McMaster University’s new Master of Public Policy in Digital Society Program.