Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. —Stephen Hawking
This article originally appeared in Museum magazine’s January/February 2024 issue, a benefit of AAM membership.
What is generative AI, and what are the practical applications and implications for museums?
In the past decade, artificial intelligence (AI) has leapt from the realm of science fiction and tech culture into our everyday lives. Most recently, generative AI is disrupting the work of creators, upending education, and performing key tasks in white-collar work. Some feel AI poses an existential threat to humanity—others forecast that it will quickly retreat into specific, narrow applications.
What is AI, is the hype justified, and how can museums make informed judgments about this or any other emerging technology? What are the practical applications and implications for museums in the short and long term?
The Challenge
Artificial intelligence encompasses a range of technologies that seem to mimic the human ability to reason, make decisions, generate predictions, and perform complex tasks. Birthed in the 1950s, when Dartmouth professor John McCarthy coined the term and Alan Turing proposed his namesake test for machine intelligence, AI began toddling in 1966, when the Stanford Research Institute introduced “Shakey,” the first mobile robot capable of interpreting instructions.
When TrendsWatch first covered AI in 2017, the major focus was “big data”: mining and analyzing huge datasets to generate business insights (or beat human grandmasters at chess) and using AI conversational skills to interact with users and respond to questions. Now AI is experiencing an adolescent growth spurt, fueled by the ability of generative AI (GenAI) to create original content as text, images, and sound. If chatbots powered by conversational AI might be mistaken for customer service reps, programs like ChatGPT and DALL-E, powered by GenAI, are trying out for the roles of author, artist, and composer.
ChatGPT is just one of a slew of GenAI programs taking the world by storm. A 2023 McKinsey survey found that 79 percent of respondents had at least some exposure to GenAI and nearly a quarter were using it regularly in their work. It is being used by students to complete writing assignments, by researchers to draft academic papers, by lawyers to produce legal briefs, and by high school seniors to generate college application essays. In the process, developers of GenAI seem to be obeying Meta founder Mark Zuckerberg’s exhortation to “move fast and break things.” The growing list of things generative AI has broken, or at least cracked, ranges from copyright and intellectual property to both primary and higher education.
Some of the disruptions created by GenAI arise from flaws deeply embedded in the current algorithms. GenAI has a well-documented tendency to “hallucinate” (i.e., make stuff up). In 2023, a Federal District Court judge threatened to sanction lawyers who used ChatGPT to generate a filing that turned out to be filled with fake judicial opinions and legal citations, and staff of the research platform Preprints.org removed a ChatGPT-written submission that included fictitious references. GenAI also has an unfortunate tendency to amplify bias inherited from its training data. As Bloomberg recently reported, to judge by the outputs of the Stable Diffusion image generator, “CEOs are white men; women are rarely doctors, lawyers or judges; men with dark skin commit crimes; and women with dark skin flip burgers.”
Any new technology can have a disruptive effect on labor, with the heaviest impact historically falling on blue-collar jobs. AI, and particularly generative AI, may be the first technology to destabilize white-collar work as well. We are already seeing grave threats to whole professions, especially in the creative class. Text and image generators are producing cover art, writing news stories, and churning out blog posts and marketing copy. One of the major demands of the 2023 Hollywood strikes was restricting studios’ use of AI, both to create digital likenesses of actors and to generate scripts. While this new technology will create some high-skill, adequately compensated jobs (AI research scientist, AI creative director), they will be outnumbered by more precarious and stressful gig work, such as labeling training data and flagging content that contains sexual abuse, hate speech, and violence.
AI’s potential to do harm goes beyond its effect on labor. Its power, reach, and plausibility supercharge the dissemination of false information and fake content, including videos, news articles, social media posts, and even books. The chairman of the Senate Intelligence Committee has warned that misinformation generated and promulgated using AI could disrupt the 2024 presidential election. Even when used for good, such as improving education, the widespread adoption of GenAI could widen the digital divide. While most of the current wave of AI applications have launched on a freemium model, eventually, to be profitable, the companies that create and deploy this technology will have to charge. The more these applications become embedded and necessary tools, the more we risk exacerbating existing inequalities due to disparate levels of access and ability to pay.
This technology is evolving so fast that the sectors impacted by its application, as well as regulators, are struggling to keep up. How should existing laws and policies, written before the age of AI, be applied with regard to intellectual property, liability for harm or damage, privacy, bias, and discrimination in employment? What new regulation is needed, and how should we allow, ban, control, or regulate use? What role should GenAI play in P–12 and higher education, hiring, research, or law enforcement?
What’s Next?
Will AI continue to accelerate at the current hypersonic rate? As with many technologies financed by large companies, the long-term business model is not yet clear. While OpenAI, the creator of ChatGPT, projected $200 million in revenue by the end of 2023, it was spending $700,000 a day to run the system, leaving it in the red. It remains to be seen whether it can convert enough free users into paid customers to make the product sustainable.
When regulators do catch up, the constraints placed on how data is mined and used may severely crimp the growth of a technology fueled by exploiting free content (a practice that has already spawned numerous lawsuits). Ironically, by displacing human creators, GenAI may have sown the seeds of its own destruction. GenAI trains on datasets collected from the internet, but when it learns from data produced by other AI, its performance degenerates, resulting in what researchers have dubbed “model collapse.” That breaking point may not be far off: Europol, the European Union’s law enforcement agency, predicts that as much as 90 percent of online content might be “synthetically generated” by 2026.
What This Means for Museums
Museums have been experimenting with various flavors of AI for at least two decades, from practical applications (the Indianapolis Museum of Art at Newfields using AI to crop digitized images) to playful ones (the wonderful Send Me SFMOMA, which responded to text prompts with images from the collection). Museums have deployed AI-powered robots as docents, predictive analytics to forecast visitation and set ticket prices, cognitive search to enhance collections metadata, and sentiment analysis to mine visitors’ social media posts. At the Museum of Tomorrow in Rio de Janeiro, Brazil, the AI-powered chatbot Iris+ queries visitors about their experience and suggests steps they can take to address their biggest concerns about the future.
It is no surprise that museums have leapt to experiment with GenAI as well. In 2023, Duke University students and faculty used ChatGPT to organize the exhibition “Act as If You Are a Curator” at the Nasher Museum of Art, prompting the platform to select works from the collection and write labels. Behind the scenes, some museums are experimenting with text generators to create communications for members and donors, draft reports, and even write grant applications. But for all the exciting new possibilities presented by GenAI, it poses profound challenges to museums as well.
A general erosion of trust, accelerated by AI-generated misinformation (accidental or deliberate), might undermine the long-standing and robust confidence the public places in museums; alternatively, it may be an opportunity for museums to lean into their reputation as sources of reliable information. What role might museums play in creating systems to verify “content provenance”—the source and legitimacy of images, videos, text, and articles? How can museums ensure that data and metadata associated with their own records follow these digital assets into the AI universe?
GenAI is trained on vast datasets compiled by scraping text and images from the web—without permission from or compensation to the creators of that material. Many artists, educators, and researchers are furious that GenAI harvests their work without consent, acknowledgment, or compensation. Do museums want to encourage the development and use of technology built on what many feel is, ethically if not yet legally, theft of intellectual property? How might museums help protect creative, educational, and academic communities of practice from harm and advocate for their interests?
AI and GenAI provide opportunities to increase income and reduce costs. Museums need to decide whether, when, and how to adopt this technology—decisions that are both values-based and practical. Some may seem like clear wins: using chatbots to provide personalized experiences for visitors, image recognition to help the public engage with collections, and business applications to forecast attendance. Others may require more thought. How should AI be deployed to help staff with their work? If the use of AI could reduce the number of staff, do museums want to be complicit in this loss? AI depends on vast computational resources and massive energy use. How might museums factor AI’s environmental impact into their strategies to reduce their carbon footprint?
Working through these issues will require museum board members and staff to acquire basic AI literacy to understand its benefits and drawbacks and identify where AI adoption might require expenditures (in technology, training, new staff) or result in savings (through increased productivity or staff reductions). Hardest of all, it will require making some projections about the timeline of development and adoption of AI, a critical issue on which even the experts disagree.
How Generative AI Might Harm the Creative Economy
By ChatGPT
This essay was written by OpenAI’s ChatGPT 3.5 in response to the prompt “Write 500 words on the damage generative artificial intelligence may do to the creative economy.” The program returned the following text in less than 10 seconds.
Generative artificial intelligence (AI) has emerged as a powerful tool, capable of producing creative content such as art, music, and literature. While it offers numerous benefits, there is growing concern about the potential damage it may inflict on the creative economy, particularly on artists and writers. This essay explores the multifaceted ways in which generative AI can harm the creative economy.
Devaluation of creative work. Generative AI can flood the market with easily produced content, devaluing the work of human artists and writers. As AI-generated art becomes more prevalent, traditional art may lose its uniqueness and value. Consumers may be less willing to pay for original creations when cheaper AI alternatives are available, thereby reducing the income of artists and writers who rely on their work for a livelihood.
Copyright and plagiarism issues. AI-generated content can blur the lines of copyright and intellectual property rights. Identifying the origin of a piece of content becomes challenging when AI creates works that closely mimic human creations. This can lead to a surge in plagiarism, where AI-generated work is passed off as original, causing financial and reputational harm to genuine creators.
Homogenization of creativity. Generative AI algorithms are often trained on a massive corpus of existing creative works. While this can lead to impressive imitation, it can also result in a homogenization of creativity. The AI may perpetuate existing styles and trends rather than encouraging innovation and the exploration of new artistic frontiers. This stifling of creativity can be detrimental to the vibrant and diverse nature of the creative economy.
Erosion of skill development. If AI becomes the primary tool for generating creative content, it could discourage aspiring artists and writers from developing their skills. The sense of accomplishment and personal growth that comes from honing one’s craft may diminish, as people may opt for AI shortcuts. This could ultimately lead to a decline in the quality of human-created art and literature.
Job displacement. The adoption of generative AI in the creative industry could lead to job displacement. Artists and writers may find themselves competing with AI systems that can produce content more quickly and cost-effectively. This could result in job loss and economic instability for many individuals in the creative sector.
Loss of emotional connection. Art and literature often have a profound emotional impact on consumers because they are imbued with the personal experiences and perspectives of the creator. AI-generated content may lack this emotional depth, making it harder for audiences to connect with the work on a meaningful level. This could result in a decline in the overall appreciation of art and literature.
Ethical concerns. Generative AI can inadvertently create content that is offensive, harmful, or biased, which can have a negative impact on society and culture. Artists and writers often contribute to important conversations and social change through their work. If AI-generated content lacks ethical considerations, it may contribute to harmful narratives and reinforce existing biases.
In conclusion, while generative AI holds immense promise in the realm of creativity, it also poses significant threats to the creative economy, including artists and writers. To mitigate these risks, there is a need for thoughtful regulation, ethical guidelines, and a recognition of the value of human creativity. Generative AI should be seen as a tool to enhance human creativity rather than a replacement for it. By carefully navigating the integration of AI into the creative process, we can harness its potential while preserving the vitality and diversity of the creative economy.
Museums Might …
- Help the public make informed choices, as users and voters, by providing education on AI and attendant issues.
- Explore how AI-generated content could be used to enhance collections, exhibits, education, and visitor interactions.
- Educate staff and the board about the choices the museum faces regarding AI use.
- Create ethical frameworks for decision-making.
- Take an active role in developing policies, guidelines, and regulations around issues such as sourcing of training data, tracking of content provenance, and protecting the rights of creators.
- Identify where AI can be used to help staff do their work while assessing the impact on museum labor overall.
Museum Examples
In October 2022, the Museum of Science, Boston, opened the permanent exhibition “Exploring AI: Making the Invisible Visible” to illuminate how the approaches and data used to train computer systems have often resulted in AI that mirrors human biases, raising questions about unchecked use of these technologies across all aspects of our lives. Complementing the exhibition, a series of in-person and online programs dove into topics such as computer vision, large language models, generative AI, and deepfakes to encourage community deliberation about the future of AI technologies. The museum also created a resource library—including animated explainers, infographics, videos, and games—that challenges learners to consider how we can ensure that our society creates and uses AI technologies in ways that are ethical, inclusive, and can benefit all people.
In 2022, the Carnegie Science Center in Pittsburgh hosted researchers from Carnegie Mellon University’s Human-Computer Interaction Institute to pilot a Novel Research-based Intelligent Lifelong Learning Apparatus (NoRILLA) with support from the National Science Foundation. This AI-enhanced interactive science exhibit adds a camera, touchscreen, display, and an AI assistant to a traditional earthquake table or other physical apparatus, such as ramps. NoRILLA—brought to life as a virtual gorilla—helps participants make scientific discoveries through interactive feedback and guidance. Evaluations revealed that children learned significantly more from the AI-enhanced intelligent science exhibit than from the traditional exhibit and that dwell time increased by a factor of four. NoRILLA has since been deployed at the Children’s Museum of Atlanta, the Please Touch Museum in Philadelphia, the Children’s Discovery Museum of San Jose, and the CaixaForum Valencia in Spain. Read more about NoRILLA in the Journal of the Learning Sciences at bit.ly/NoRILLAJLS.
Since 2015, the Illinois Holocaust Museum has worked with the USC Shoah Foundation to capture Holocaust survivor stories and bring them to life via high-definition, AI-powered holograms paired with voice-recognition technology. The resulting Survivor Stories Experience enables the interviewees to tell their deeply moving stories and respond to questions from the audience, inviting visitors to have a personalized, one-on-one “conversation.” The museum’s own research and national studies show that Holocaust survivors’ stories humanize difficult history, helping visitors develop empathy, learn the dangers of indifference, and recognize their responsibility to stand up to hatred and antisemitism. Recent research by the University of Illinois finds that visiting the museum and seeing the holograms empowered and motivated attendees to address injustice in their lives and communities.
Resources
Generative Artificial Intelligence and Data Privacy: A Primer, Congressional Research Service, 2023
This publication provides an overview of generative AI, an explanation of the underlying data models and data sources, and a discussion of what happens to data shared with AI models. It also outlines policy considerations regarding privacy, related laws, and data scraping. crsreports.congress.gov/product/pdf/R/R47569
Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House Office of Science and Technology Policy, 2022
This document identifies five principles to guide the design, use, and deployment of automated systems to “protect the American public in the age of artificial intelligence,” including the rights to algorithmic discrimination protection and data privacy. The framework is accompanied by a technical handbook for anyone seeking to incorporate these protections into policies and practice. whitehouse.gov/ostp/ai-bill-of-rights