regs to riches

🎓 academic capture

www.regs2riches.com

private funds + public interests

Vass Bednar
Jan 16, 2021

Last year, The Intercept detailed how big tech manipulates academia to avoid regulation. Their analysis indicated that large tech companies' hiring and funding of academic experts is an intentional strategy to avoid legally enforceable restrictions on controversial technologies.

Safiya Umoja Noble PhD (@safiyanoble): All the way correct and worth the time to read. The invention of "ethical AI": how big tech manipulates academia to avoid regulation interc.pt/2PH6FQC by Rodrigo Ochigame

Corporate capture of academia by large technology companies has been discussed vigorously online since Google fired Timnit Gebru, one of the most highly regarded AI ethics researchers in the world, after she criticized the firm's approach to ethical AI.

Ana Brandusescu (@anabmap): Elements of corporate capture include: Community Manipulation, Economic Diplomacy, Judicial Interference, Legislative & Policy Interference, Privatizing Public Security Services, Revolving Door, Shaping Narratives, and Capture of Academic Institutions.

In an unusual turn of events, the firing played out publicly on Twitter, with Timnit and her team live-tweeting throughout the days, and now weeks, of turmoil, dissent, and demands for authentic accountability.

Dr. Alex Hanna (@alexhanna): In @WIRED today, @mer__edith and I write on how the firing of @timnitGebru exposes the dual crisis of AI research: failures of diversity and the dominance of corporate funding and control. ("Timnit Gebru's Exit From Google Exposes a Crisis in AI," wired.com)

This real-time rupturing of a research community that focuses on fairness, inclusion and ethics in AI raises big questions, like:

  • how can employees hold Big Tech accountable from within,

  • can research undertaken at a corporation be truly independent,

  • what kind of reporting structures are optimal to maintain and reinforce that independence,

  • what are the terms and conditions of research supported by corporate giants, and

  • what should they be? 

Part of the answer lies in determining what constitutes a healthy research environment at a corporate institution: doing excellent research (which means presenting at conferences and publishing papers) and building good relationships with the academic community. Yet there will always be trade-offs between academic freedom and corporate salaries. A major difference now is that there are more corporate research jobs than there used to be, and fewer academic positions.

Olivia Solon (@oliviasolon): Tech companies have told me several times that their research divisions have academic independence -- a position that's increasingly difficult to defend. ("Google told its scientists to 'strike a positive tone' in AI research," reuters.com)

Timnit has noted that there needs to be a lot more independent research. The associated questions remain: who will fund it, and how can we ensure that even funded research maintains academic integrity and independence?

Ana Brandusescu (@anabmap): Corporate Capture: Definition and Characteristics (escr-net.org): "Corporate capture refers to the means by which an economic elite undermine the realization of human rights and the environment by exerting undue influence over domestic and international decision-makers and public institutions."

Below is an excerpt from Zephyr Teachout's Break 'Em Up: Recovering Our Freedom from Big Ag, Big Tech, and Big Money:

Indeed, more than either of the other two giants, Google has burrowed deeply inside the intellectual leadership of the [US]. As a result, its power can seem natural instead of artificial. It funded Harvard’s Berkman Klein Center for Internet and Society in the mid-2000s, gave $2 million to fund the Stanford Center for Internet and Society in 2006, and has funded countless conferences and events. It has recruited and cultivated hundreds of law professors who support its views. And it does not react kindly when those views are questioned.

Largely in a spectator role as these conversations play out, Canada should carefully consider the implications for its scholarship, public university system, and immigration policies. The dynamics and implications of so-called academic capture are especially under-explored. For instance, when an academic is hired away from the academy, such hires influence perceptions of demand for certain credentials, which are heavily subsidized by the state and then captured by private interests. Indeed, these courtships can carry implications for immigration processes and the 'war for talent,' dictating in-demand credentials like computer science and engineering.

Nitasha Tiku (@nitashatiku): My profile of @timnitGebru based on interviews & documents. In Oct, she was promoted & Jeff Dean asked her 2 look @ ethical risks of large language models. 6 weeks later she was fired. "How can you still ask why there aren't Black women in this industry?" washingtonpost.com/technology/202…

Some of the other dynamics at play:

First, large tech firms have the capital to hire away the best and the brightest - essentially neutralizing experts who may criticize them.

In the US, the average salary for an assistant professor is about $70,000/year, whereas the starting salary at a technology company can be in the low six figures. Of course, there is significant variance based on institution, discipline, and years of experience, but generally it's more financially rewarding for someone with a PhD to work at a technology company. Plus, there are fewer jobs available in academia and less public funding for research work.

Cathy O'Neil (@mathbabedotorg): For me, this is the critical problem of this whole debacle: "Many of the top experts in AI ethics work at large tech companies because that is where the money is." ("We read the paper that forced Timnit Gebru out of Google. Here's what it says," technologyreview.com)

Second, large tech firms may exert corporate influence on this research, privileging research pathways that contribute to product development over critical views.

This is not a new risk but a familiar tension. We have seen how 'big pharma,' 'big tobacco,' and 'big cannabis' have similarly courted top scholars. The ethical AI space is similar, with a more acute awareness in recent years of its lack of inclusivity for racialized minorities, worsening their underrepresentation in society and sometimes automating discrimination as a result.

Ben Jones (@DataRemixed): Like many, I've followed Google's firing of @timnitGebru with growing concern about the state of AI research, & the public's ability to learn of ethical issues related to the emerging technologies. This article, & this quote in particular, sums it up well. wired.com/story/timnit-g…

Third, large tech firms fund research programs that are anchored on academic campuses, such as the MIT-IBM Watson AI Lab collaboration or the MIT Quest for Intelligence program.

Corporations may fund academic research labs for a number of reasons: recruiting, access to that research, public-relations benefits, and/or tax write-offs. In many instances, these funds catalyze important research that might not otherwise be possible, and provide critical training and learning opportunities for students.

Ana Brandusescu (@anabmap): Also: "It is a normal practice for corporations to fund [ethics] research, and these sources of funding are expected to be disclosed, but the reporters found that some of these funding sources were not always detailed in the groups' research publications."

Fourth, technology companies offer virtually unlimited computing resources and other research project perks that universities may not. 

Mathana (@StenderWorld): @anabmap Earlier in the year I did a little dive into the WEF's "ethical framework" for facial recognition that was co-authored by people from Amazon's and Microsoft's policy teams. It's bad.

Quoting Mathana (@StenderWorld): Personally, I'd say school surveillance, student tracking & police-accessible doorbell cameras are super unethical applications of facial recognition. So...maybe we shouldn't let Amazon & Microsoft policy teams help draft anymore "ethical framework"? https://t.co/77EZboWHlA https://t.co/nynRFhbCrd

Canada is no less vulnerable to this conflation of private and public interests, albeit at a smaller scale. More work should be done to understand the motivations and trade-offs scholars weigh when accepting a recruitment offer from a large firm, applying to work off campus, or accepting industry funds to bolster their research. These factors may include, but are not limited to: access to computing resources, research capacity, supportive resources, and pace of work.

It is also worth remembering that the number of PhDs at a technology company can influence and inflate its valuation. At its height, recently acquired Element AI had more than 500 employees, including 100 PhDs. It would be interesting to assess whether these researchers have returned to the post-secondary sector or have been hired by other, similar private firms. The company was also notably active in the ethical AI policy space, working to connect the dots between AI ethics principles, human rights law, and industry standards to support rights-respecting AI.

Now that Element AI has been sold to ServiceNow, it remains to be seen which Canadian actors will fill the prominent ethical AI policy space Element AI vacated when its policy team was eliminated as part of the acquisition, and whether those actors will, or can, enjoy full independence from corporate influence. While it would be clumsy to fuse policy advocacy with academically relevant research work, even a superficial skim suggests that few of these institutions are totally independent of corporate interests.

Abhishek Gupta (@atg_abhishek): On the Canadian front, I am particularly happy with the work that has been done on the algorithmic impact assessments that give a concrete methodology for assessing the downstream implications of the systems and incorporate that into decision making #TrustworthyAI @mozilla

Element had notably partnered with the Mozilla Foundation to build data trusts and advocate for the ethical data governance of AI, a practical application of research that pilots implementation but might muddle distinctions between public policy and research work. Borealis AI, created by the Royal Bank of Canada, recently published an op-ed championing Canada's opportunity to ensure AI remains a force for good. However, this is more a public policy stance from a major bank than academic research.

The heavy-hitting Canadian Institute for Advanced Research (CIFAR), a Canadian-based global research organization stewarding Canada's $125M Pan-Canadian Artificial Intelligence Strategy, is largely supported by the governments of Canada, British Columbia, Alberta, Ontario, and Quebec, but Facebook and the RBC Foundation have also supported the strategy for undisclosed amounts.

Across the street, the University of Toronto's Schwartz Reisman Institute for Technology and Society is supported by a landmark $100M gift from Gerald Schwartz (Onex Corporation) and Heather Reisman (Indigo).

🤷‍♀️ How might we view the Institute differently if it were funded by a fintech like Plaid, or by Amazon?

The Vector Institute for Artificial Intelligence is partially supported by Google, Facebook, Accenture, and Nvidia. This partial support may grant these technology companies access to on-the-ground research that informs their product development.

In pretty much all of these instances, private dollars support research efforts that benefit the public good while also serving corporate interests. Advancements in artificial intelligence seem to necessitate a mix of government investment, university research, large companies, and startups. The reach of these investments is deserving of more study, as is the comparable under-investment in scholarship by large Canadian firms.

🇨🇦 What is the right ‘mix’ for Canada? 

This question of who or what will invest in such research remains, and could become more pronounced post-pandemic as resources become scarcer.

As a net tech importer, Canada could lead in this space with radical transparency and a preference for no-strings-attached funding that is met with less skepticism and more trust. We have an opportunity to set a clear and high standard here, with statements or contracts governing independent research, be it on or off campus. We can also explore more checks and balances, like mandating disclosure of research funding sources in a way that benefits both actors.

The worst thing we could do is pretend that these tensions can't, won't, or don't manifest here. 😉

Vass Bednar is the Executive Director of McMaster University’s new Master of Public Policy in Digital Society Program.
