The night before my flight to Denver for #AAM2023, I went over my slides for the annual presentation on TrendsWatch. The talk was formatted around the chapters of the latest report (access your free download here), but rather than simply recapping what I hoped everyone had or would read, I shared some recent news and additional thoughts that had not made it into print.
Coming to the section on technology (“A Digital (R)evolution”), I paused to contemplate the “recent news” I’d chosen for the lead: ChatGPT and the blizzard of related stories since it was released last November. Within two months, the AI-powered chatbot had reached 100 million monthly active users, which apparently makes it the fastest-growing consumer application in history.
Since that debut, I’ve bookmarked more than three dozen articles about ChatGPT and its kin, filled with speculation about whether this application of artificial intelligence will remake our workplace, our educational systems, and our newsfeeds—and/or spell the end of civilization as we know it. For anyone familiar with the Gartner Hype Cycle for technology (see below), this was a familiar script, notable only for the speed with which we went from “technology trigger” to the “peak of inflated expectations.” (Assuming we’ve reached that peak, which remains to be seen.)
Contemplating the meteoric rise of ChatGPT, I felt conflicted. On one hand, it’s gratifying to note that I was blogging about the potential museum applications of AI-powered chatbots way back in 2017. On the other, I felt tired…tired and more than a little jaded by yet another digital technology being hailed for its potential to transform the world for good.
I have been down this road before, many times, in the fifteen years I’ve been trendswatching for the Alliance. Technology is, after all, one of the five main categories of the scanning framework, known acronymically as “STEEP,” that I learned at the University of Houston. In my work, I’ve consistently pointed out that technology is only interesting in the context of the S in STEEP—social or cultural trends. What social need does a technology fill? What makes people want or need to use a new digital tool?
But in retrospect, I feel that I’ve often been naïve (or to be generous to myself, overly optimistic) in writing about the impact of emerging technology. For example: In 2014 I wrote about the digitally powered rise of the sharing economy, with the internet fueling ecologically minded collaborative consumption. In the interconnected future I envisioned, access would trump ownership, and savings from sharing material possessions would be redirected to creating meaningful experiences. I did flag the risks of rideshare companies exploiting workers and homeshare companies skirting regulations—but I did not anticipate the stunning magnitude of these negative impacts. Gig work associated with digital platform companies has turned out to be insecure, poorly compensated, and harmful to workers’ health. Airbnb is significantly contributing to the US housing crisis by limiting affordable rental units available to local residents and driving up housing prices. So yes, an app on my phone can help me find the nearest Little Free Library (yay!), but the overall economic and social impacts of digitally powered sharing are problematic, at best.
How does my growing techno-skepticism affect my thinking about ChatGPT and other AI systems fueled by large language models (LLMs)? I recognize they are probably the next technology trend I should explore for museums, but my internal filters for harm are more finely tuned than they have been in the past. While I doubt the doomsayers who warn that AI poses a “risk of extinction,” I am highly aware of the harm the technology is already inflicting on the world, for example:
- Training LLMs involves scraping vast amounts of material from the web without permission or compensation.
- LLMs also scrape personal information, and while existing regulations in the EU and California afford some protections, it’s unlikely most individuals who might assert their privacy rights even know their data resides in the training sets.
- Even the best LLM-trained AI text generators are prone to what is euphemistically called “hallucinating,” meaning they flat out make things up. (And they are quite willing to double down by lying when asked if they made things up. See the recent example of the lawyer who used ChatGPT to prepare a court filing. It did not go well.)
- Even in their current, imperfect form, LLM-trained AI generators are effectively parasitizing the work of artists and authors, enabling users to generate text, images, and music “in the style of,” creating reputational confusion as well as, in some cases, depriving creators of income.
- AI-generated content inherits the biases embedded in the training material it sources across the web, bias that is often further obscured by the lack of transparency about how the generators operate (and their failure to cite or link to sources).
As I pointed out in my talk for #AAM2023, any technology can be used for good or evil. Digitally powered personalization can take the form of affordable, attractive 3-D printed artificial limbs, or creepy advertising that follows us across the web. Realistic chatbots, trained on a person’s writing, video, and recordings, can help us remember loved ones we have lost, or allow us to engage in sexploitation of AI avatars.
Often the benign or beneficial varieties of any given technology are created bespoke by companies or communities (e.g., blockchains supporting distribution of food aid) or by public cooperation (e.g., the decentralized, open-source microblogging service Mastodon). But museums often take advantage of the “free” services and platforms provided by huge technology companies whose profit incentives are not necessarily aligned with museums’ values (whether that is policing hate speech or protecting data privacy). As GWU professor David Karpf has noted, “Over time, the trajectory of every new technology bends toward money.” Some basic questions to ask, when using a “free” technology service or platform: Who is making money from the tech, and how? What’s the business model? What values guide their work?
I’m increasingly aware of the environmental cost of digital technologies as well. The graphic to the right (from digwatch) summarizes data from several sources on the environmental impact of digital activities, noting that:
- digital tech accounts for 4 percent of global greenhouse gas (GHG) emissions;
- seven thousand data centers consume about 2 percent of global electricity;
- CO2 emissions related to online video streaming equal the total GHG emissions of Spain; and
- transactions related to Bitcoin (just one of several major cryptocurrencies) exceed the energy consumption of Chile or New Zealand.
As museums increase their reach via the web—digitized collections, online programs, virtual tours—they are just beginning to assess the associated impact on energy use and carbon emissions. While the net contribution of the museum sector to digital carbon emissions may be small compared to the whole, the same might be said of the collective impact related to building design, HVAC, and touring exhibitions—all areas receiving more time and attention from our field.
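For museums just starting that assessment, here is a minimal back-of-the-envelope sketch in Python, using the widely cited per-gigabyte energy-intensity method. Both constants are illustrative assumptions (real values vary considerably by network, data center, and regional grid), not authoritative figures.

```python
# Rough estimate of the carbon footprint of a museum's web traffic,
# using the per-gigabyte energy-intensity method.
# Both constants are illustrative assumptions, not authoritative figures.

KWH_PER_GB = 0.06      # assumed network + data-center energy per GB transferred
KG_CO2_PER_KWH = 0.4   # assumed grid carbon intensity (varies widely by region)

def annual_web_emissions_kg(page_views: int, mb_per_view: float) -> float:
    """Estimate annual CO2 (in kg) from serving a site's page views."""
    gb_transferred = page_views * mb_per_view / 1024
    return gb_transferred * KWH_PER_GB * KG_CO2_PER_KWH

# Example: 2 million page views a year at 2 MB per page
print(f"{annual_web_emissions_kg(2_000_000, 2.0):,.0f} kg CO2 per year")
```

Even a crude estimate like this gives staff a starting point: data-heavy offerings such as video streaming and high-resolution imagery will dominate the transfer totals, and therefore the footprint.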
Let me be clear that I’m not advocating for digital abstinence—digital practices can offer many benefits to museums and to their audiences. However, I do think it is important for museums to be aware that these benefits are not without cost. I ended each section of my Denver talk with a “what if?” provocation, and for technology it was:
“What if a cost/benefit analysis of new tech is a standard museum practice?”
And by this, I mean not just internal costs and benefits, but the externalized costs to society of the business model and ecological footprint of a given technology.
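To make the provocation concrete, here is a hypothetical sketch, again in Python, of a vetting rubric that treats externalized costs as line items alongside internal costs and benefits. The criteria, names, and scores are invented for illustration; a real rubric would reflect each museum’s own values and context.

```python
# Hypothetical rubric for vetting a new technology, scoring externalized
# costs to society alongside internal costs and benefits.
# All criteria and scores below are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TechAssessment:
    name: str
    internal_benefits: dict = field(default_factory=dict)   # criterion -> 0-5 score
    internal_costs: dict = field(default_factory=dict)
    externalized_costs: dict = field(default_factory=dict)  # harms borne by others

    def net_score(self) -> int:
        return (sum(self.internal_benefits.values())
                - sum(self.internal_costs.values())
                - sum(self.externalized_costs.values()))

chatbot = TechAssessment(
    name="AI chatbot for visitor Q&A",
    internal_benefits={"visitor reach": 4, "staff time saved": 3},
    internal_costs={"licensing fees": 2, "staff training": 2},
    externalized_costs={"training-data provenance": 3,
                        "energy/carbon footprint": 2,
                        "misinformation risk": 3},
)
print(chatbot.name, "net score:", chatbot.net_score())  # prints: ... net score: -1
```

The point is not the arithmetic but the structure: putting externalized costs in the same table as internal ones forces them into the decision, rather than leaving them as an afterthought.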
As we are seeing in the current national and international debate about the need to regulate AI, the guard rails around tech need to be created by society and by governments. For that to happen, the people who will live in the future created by these technologies need to understand the issues. That being so, here is one more proposition:
“What if museums use their superpower of trust to become a major influencer of public knowledge/attitudes towards new technologies?”
Beyond the direct impact of our own operations, museums have the potential to lead by example, raising awareness of what people can do via their own choices as consumers and voters, as humanity contemplates how to regulate emerging tech to minimize harm.