When visitors leave a museum, in what ways are they different than when they walked in? What intellectual connections did visitors make with the ideas and artifacts with which they interacted? To what extent were visitors touched emotionally by what they saw, heard, or touched and how intense were their emotional reactions? These are some of the big questions that museums are interested in. However, answering them poses tremendous conceptual and methodological challenges. This recorded webinar focused on traditional and state-of-the-art concepts and methods of psychological science that museums can use to measure different aspects of the audience experience, especially those that have proven to be most challenging to measure.
Presenter: Dr. Pablo P.L. Tinio, Chair of the Department of Educational Foundations at Montclair State University
Date: Wednesday, Oct. 20 at 1:30 PM EST
Transcript
Dr. Pablo P.L. Tinio:
… basis of behavior is one of the things that I really focus on in my work. So I’m going to talk a little bit about that today. And then go into methods and designs. Again, beyond the surveys and the questionnaires that we’re all so familiar with and we’re all really good at using. And then examples of research, a little bit of theory. And then end by talking about collaborations. And I think one of the things that I mentioned in the article is the importance of drawing from other fields. And in this particular case, it’s the field of psychology.
See, my field is Psychology of Aesthetics, Creativity, and the Arts. And you’ve probably never heard of it as a field. You’ve heard of psychology, right? But it’s a subfield. It is the second oldest subfield in all of psychology, which people find kind of weird, right? How is that? The oldest field is psychophysics. And Psychology of Aesthetics, Creativity, and the Arts started around the 1870s, initially with a study that we all kind of credit as starting the field, and that’s by Gustav Theodor Fechner in 1876. And what he did was… So there was a debate. I think it was an exhibition in Dresden at that time. And there were two versions of the Holbein Madonna, and they wanted to figure out which version was the original from the actual artist and which was the copy.
And what he did was he used the method of choice. So basically, he asked people, like a paired comparison, “Which of the paintings do you like better?”, right? He collected some data. And based on the choices that people made, he said the one with the highest number of decisions from people, the one they liked the most, that was an indicator of which one was the authentic Holbein Madonna. And the field kept going like that for 100-plus years, up to now. And basically, the main focus and emphases of the field have been on things like preference and liking. For decades and decades and decades, we’ve loved trying to figure out what sort of artworks people like, what sort of genres and styles, what sort of compositional designs and what specific visual elements, whether that’s complexity or symmetry or contrast and so forth, even the sizes of artworks in museums.
So that has been the focus, and in a way we’ve learned a great deal. I’ll talk a little bit about some of the things that we’ve learned through that research, but it’s been very limiting, all right? It’s been limiting in the sense that although we know that people prefer symmetry, and we can even tell you what sort of symmetry people prefer, although we can tell you that people like works in which they can sort of identify an object, like a face or some kind of scene, we know that there are certain groups and categories of people that actually like or prefer the opposite, the things that are more ambiguous and so forth. And one of the reasons for that is that when academics do research and people do laboratory-based research, they tend to kind of stick to the most controlled contexts, situations, and experimental conditions. And we lose a lot of things with that.
So there’s been that bias, that sort of, I call it the laboratory bias, since really the founding of this field of Psychology of Aesthetics, Creativity, and the Arts. But within the last 10 to 15 years, there’s been a movement away from that, right? It really peaked, I guess, about five years ago. And that’s continuing. So people are leaving the lab, researchers are, scientists are, psychologists are, and trying to figure out what it really is about art, right? What is it about that experience?
So when you walk into a museum and just pick out any person standing in front of a painting or a sculpture or whatever artifact, the strange thing about it is if we just watch them, there’s nothing that we can really see. Very few things, like the nonverbals. Maybe they like it, maybe they don’t. Do they approach or move back? Do they spend a minute looking at that particular object? Or maybe just two seconds, right? So what we try to do now is try to figure out, when we’re in that context and that environment and we’re looking at that person, everything is latent, right? Everything is inside, all the thoughts, the cognitive processes, the decisions that they’re making. The push is to try to figure out what those things really are and how they interact with the museum environment and so forth, and the programming as well, and curation of course.
So a few things about psychological science. These are guiding principles that I follow. My colleagues, my students, whenever we start research, these are things that we keep in mind. The first thing is we are more common than we are different, all right? So there’s been a whole lot of research over the years trying to figure out differences in people’s reactions. And what we found is actually we are more common. So it’s that sort of shift towards trying to figure out commonalities. And I’ll give some examples of this.
The second is we’re quite predictable, right? We really are. And as much as… So I have two kids. I always tell them, “You guys are special,” right? And they are. But as a psychologist, honestly, they’re quite predictable. I can tell, sort of, even just from their nonverbals, when I’m watching them, it’s like, “What did you do?” Right? We can predict given circumstances and context and stimuli what they will do. And it’s the same thing with anyone.
Our behaviors are often irrational. And this is where we kind of, as evaluators and researchers, if we think behaviors are rational, then we get into some issues, right? We only see some of the things that we want to see. When we start thinking about behavior being irrational, then we’re open to what people give as far as behavior, their thoughts, their perspectives, right?
And then the last one, and I kind of talked about this already, most of what we do is latent. We cannot see it. And this is the reason why we have our jobs. Evaluating programs, trying to figure out whether an exhibition is successful or what’s successful. And trying to figure out, based on what we’re doing as far as creating a program or an exhibition or a show, we need to start thinking about what are those indicator behaviors that we want to see to figure out whether what we did was actually effective, whether it’s meeting people’s needs and expectations, and whether they basically had a positive experience within that situation.
So visitor experience in museums. Just quickly through what we found over the years, it’s mostly linear, this experience of going into a gallery, piece to piece, artifact to artifact. But even then, there’s potential for rare, powerful outcomes. And I’m going to talk about some of those outcomes today, all right? Shared meaning for example. Perspective taking.
Empathy is a big one. I gave a talk at the Empathy Summit. This was not the previous one, but the one before that, 2020 Empathy Summit. And that just shows you, right? It’s like, what was it? Three days. Some of you probably attended that. Three days of talking about empathy and emotion and so forth. So we know that these are important outcomes, right? And within the practice of actually doing museums, it’s a big one.
Intentionality. So people will try to figure out inside why things were done in certain ways. And not just as far as like, “Why are these organized…” Like for example, paintings. “Why are they hung this way? Why is the text and why are the labels this way?” But there’s also intentionality with regard to the actual pieces themselves. So for example, the question of “Why did this artist paint this canvas this way?” right? So these are big things, big questions that people are asking.
Metacognition. Imagination, right? Of course. Self-awareness and self-reflection. So these are some of those big, big outcomes of the experience of visitors in museums. It touches on the full range of emotions. And a lot of these things, the experiences and the interactions, they’re brief yet cumulative interactions. They’re very brief actually. And I won’t say it now because I have a slide later that kind of tells you how brief interactions with objects are within museums.
So now, the methods of psychological science before I get into the sample studies. Most of you are familiar with these, all right? And I just write… This is not a complete list. But demographics, pretty much every museum collects them throughout the year, right? Surveys and questionnaires, that’s the bread and butter of most of our work. Interviews and focus groups. Focus groups are less common than interviews. And so, I basically reorganize these into the most common to the least common.
Then we start getting into observations. As much as we think that observations are used in museums, they’re really not. If you look at a lot of the articles written about visitor behavior, presentations, observations are not as common as we think. And I’m going to again give an example of a study that used observations, direct observations.
Think-aloud protocol. And then we now start getting into some of the least common ones, some of the things I really want to talk about today. Psychophysiological measures. Measures of locomotion, basically objective measures of how people move around in a space or spaces, galleries, and so forth. And then indirect measures of latent behavior. This is when you’re not asking somebody exactly how they’re thinking, or how they’re feeling, or what they learned and so forth, right? These are basically giving them tasks to do that give an indication of what they’re thinking and what they’re feeling. And again, I’ll talk a little bit about that.
And then the emotion heat maps, this is actually what probably prompted this webinar. That’s the base… That’s really what I focused on in the Museum article. The one that was shared by AAM. I’ll talk a little bit about the methods, the sort of technicalities behind creating those emotion heat maps. And then finally the mixed-methods approaches.
So that’s a lot. This is not even a complete list of possible research methods that you all can use in your own work, all right? But we’re going to map this. We’re going to complicate this even further by talking about research designs. So most of what we do is the one-shot method, because it’s the easiest and it’s the cheapest, right? It’s the quickest. So we set a goal, right? Any given day, we’re going to survey 100 people, 100 visitors. What we’re going to try to do is, the 100 visitors, distribute them. Different ages, different genders, and so forth, within different exhibits, different shows, different galleries. So the one-shot is the most common. And then we go into the least common: the longitudinal, the pre-post, repeated, right? That’s where we really get into some of the experimental things. Control groups and randomized conditions.
Most of the data that we have in the work that we do are descriptive data. Counts essentially. And we can easily put them into graphs. Only recently are we starting to see inferential statistics going into some of these studies, some of this research that’s being done in museums. This is when we’re really trying to use probability to figure out whether what we’re seeing is by chance, whether it’s randomly happening, or whether something’s there that probably can predict visitor behavior later on, all right? A new group of visitors.
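To make that descriptive-versus-inferential distinction concrete, here is a minimal sketch in Python with entirely made-up visit counts for four hypothetical galleries. The descriptive part is just the counts; the inferential part computes a chi-square goodness-of-fit statistic by hand, asking whether visitors spread out across the galleries any differently than chance would predict (7.815 is the standard critical value for df = 3 at the .05 level).

```python
# Hypothetical tally of visitors observed entering each of four galleries.
counts = {"Gallery A": 50, "Gallery B": 30, "Gallery C": 15, "Gallery D": 5}

total = sum(counts.values())
expected = total / len(counts)  # under pure chance, visitors split evenly

# Descriptive statistics: the counts themselves, ready for a bar graph.
for gallery, n in counts.items():
    print(f"{gallery}: {n} visitors ({100 * n / total:.0f}%)")

# Inferential statistics: a chi-square goodness-of-fit statistic,
# asking whether this spread could plausibly be random.
chi_square = sum((n - expected) ** 2 / expected for n in counts.values())

CRITICAL_VALUE = 7.815  # chi-square cutoff for df = 3, alpha = .05
print(f"chi-square = {chi_square:.1f}")
if chi_square > CRITICAL_VALUE:
    print("The spread across galleries is unlikely to be chance.")
```

With these invented numbers the statistic comes out well above the cutoff, which is the kind of evidence that lets you say something about the next group of visitors, not just the 100 you happened to survey.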
New and glittery methods are hot and exciting, okay? We receive emails probably from new apps, new equipment that we can use, right? Eye tracking, which I’ll talk about a little bit. And those are really fun and they’re really fine, right? You can use them. But it’s all about matching the method to the question, right? That is the most important thing that we have to think about, that we have to keep in mind when we’re doing some sort of evaluation project, is asking the question, asking the right question and trying to figure out what is the method or a set of methods if you’re using mixed methods that will actually answer that question best, all right? I’m going to come back to this idea again. But first I’ll just give you a quick sort of introduction to the study that we did in the lab and then go into the museum, right?
A question that we wanted to ask, and this shows that sort of matching between method and research question. This was the first study that’s looked at this, at least to our knowledge, in the museums where we got some of the artworks from, some of the images that we used in the study. So we wanted to measure the impact of surface cleaning and restoration on the perception and aesthetic evaluation of paintings, right? Very simple question. A lot of resources are basically devoted to making sure that the artworks are clean, artifacts are clean, and then if they need some sort of restoration procedure, that that’s done, all right? It protects them. It keeps them around long term. It preserves them.
So it was an exploratory, grounded theory approach, but we used eye tracking on this. So we did this in the lab. Recently published. So we took two versions of the same painting, the pre-restored and the post-restored. We basically had people look at the images while we measured their eye movements, all right? So again, the ones on the left are the pre-restored versions and the ones on the right are the post-restored.
So we looked at statistically what the differences were in terms of a few variables. So the duration of the first fixation. So as soon as you see an image, how long was that first eye movement? Usually it’s around the main areas of the images. So the faces, the center of the canvas. And typically, hands, right? People love to look at faces and hands. We also looked at total viewing time, total number of fixations, and then how the eyes moved around the different canvases. So what we found was all of these variables, measures, were significantly greater for the restored versus the unrestored images. So the restoration did something.
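A sketch of what one of those comparisons looks like in code, using invented first-fixation durations (in milliseconds) for five hypothetical viewers who saw both versions. Because each person serves as their own control, a paired t statistic works, computed here from the standard formula; the 2.776 cutoff is the two-tailed .05 critical value for df = 4. None of these numbers come from the actual study.

```python
import statistics
from math import sqrt

# Hypothetical first-fixation durations (ms) for the same five viewers
# looking at the unrestored and then the restored version of a painting.
pre_restoration = [180, 200, 210, 190, 220]
post_restoration = [230, 240, 260, 235, 255]

# Paired design: subtract within each viewer.
diffs = [post - pre for pre, post in zip(pre_restoration, post_restoration)]

mean_diff = statistics.mean(diffs)            # average gain after restoration
sd_diff = statistics.stdev(diffs)             # sample standard deviation
t = mean_diff / (sd_diff / sqrt(len(diffs)))  # paired t statistic

print(f"mean gain = {mean_diff} ms, t = {t:.2f}")
# For df = 4, a two-tailed .05 test needs |t| > 2.776.
if t > 2.776:
    print("Restored versions held the first fixation significantly longer.")
```

The same skeleton applies to total viewing time and fixation counts; only the measure changes, not the design.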
So I’m using this as an example of like a perfect match. It’s really difficult because sometimes the changes pre- to post-restoration are so minute that you can only… It’s really like an attention thing, right? It’s a visual sort of change in people’s experiences. But that question has never been tested empirically. So we did that recently. So that’s a lab study just to illustrate what that sort of meeting of that method to the question, that matching I should say.
The next one also used eye movement tracking, and this was Eliasson’s installation. So it was two specific installations at the Belvedere Museum in Vienna, where a lot of my colleagues actually work. And it was the exhibition Baroque, Baroque. What we wanted to see here is the interaction in these installation works between the objects that were installed and the ambient… the regular room or the gallery. The aspects of the gallery like the walls, any furniture, even the floors and the ceiling, and so forth. So we wanted to look at how people interacted with those things, those objects. We wanted to look at the shifts between installation, gallery, installation, gallery. How they moved around.
So when the visitors came in, we put eye tracking glasses on them. Mobile ones. And basically, it recorded everything. A video of what they looked at, where they walked, to the millisecond of how long they looked at whatever they were looking at, right? So really, really fine grained data collection. So we found a few things. I’m not going to go over some of the findings, but I’m going to go over one really interesting finding. The wearing of the eye tracking glasses did not have a significant impact on visitor experience, right? Because people always wonder, like, “Doesn’t that change the behavior?” Not really. Actually, we did a whole bunch of measures and that didn’t change much as far as that experience of the galleries, the experience of the installation and so forth.
So those are two eye movement studies, again, matching the question with the method. The one thing, if you’ve ever considered using eye movement tracking, it’s a really powerful technique. Ten years ago, it was super hot and glittery, like, “Wow! Eye tracking.” We can have fancy images that we can present to the audience. We can present it. You can write articles about it. Funders, especially, love that stuff. But psychologically, eye movement tracking is a really, really good method because it tells a lot about what’s happening in the mind of the visitors. So it’s all about attention, right?
So basically, if an area of… There’s a part of the museum, part of a gallery that is not getting a lot of traffic, right? Maybe that’s on purpose, right? That could be. But oftentimes, that’s a question that people would ask me like, “Hey, why don’t you take a walk? There’s a place that nobody goes to. Why is that?” right? And a lot of it is because it’s competing against everything else. One thing to keep in mind with eye movements is our eyes move about two to three times a second. Think about that. Each time it moves from one point to the next, we’re blind, right? And then our brains kind of piece everything together. So it looks like everything is smooth and there’s no disruption in our vision.
Eye movement tracking looks at attention. It’s an indication of attention, but it’s also an indication of something more: interest, right? But even bigger than that is engagement. Because when you walk into a museum or a gallery, let’s say there are four or five things there. For a big museum, that’s not a lot, right? It’s sometimes in the 10s and 20s. And for a visitor walking in, there’s the cognitive load of processing things and the amount of cognitive resources, and those are limited resources. And that’s related to time, right?
The allocation of cognitive resources is really tricky. It’s a tricky matter. It’s not as if visitors say, “Oh, I’m going to just pay attention to this. And I’m going to pay less attention to that.” That’s not it. It’s all about that visual attention, right? So eye movement tracking allows you to see where that visual attention is going. And also, when creating exhibitions or whatever sort of program, the ergonomics of that is huge. What are the things that stick out? What are the things that disrupt attention and disrupt the flow of things? So eye movement tracking allows us to do that. Again, matching the method with the question.
David Carr, some of you might know him. He is sadly no longer with us. And he’s one of my greatest heroes in the museum and this cultural institution world. And this is from one of his books. It says, “Nothing is there by accident, not even its users. At its best, a museum offers a constructed situation, a place we seek out purposely,” right?
In designing shows, in designing exhibits, in designing programs, we all like to think that everything we put in has a purpose. And they do when we’re actually in the design process. And then we execute it. Put it in place. And then we open it to the public, open it to our audiences and so forth. But we never fully know which aspects of those things that we design are working well and which are not, all right? I love this quote because when I walk into a museum and I’m asked to help redesign an evaluation system or even run a scientific study, I always keep this in mind. Everything is there for a reason, and there’s always a department that did something. Somebody was in charge of a specific aspect of it. And we want to know what’s working and what’s not.
So eye tracking is a good method for doing that, and so are some of the other things that I’m actually going to talk about next, because context does matter. So this idea… And this is actually really huge, comparing the lab and the museum. So when we do research in the lab, typically what I do is I do the same study in the lab as I do in the museum and vice versa, just to sort of triangulate the findings. But what we’ve learned over the last almost two years now is this idea of virtual visits, right? It’s the next big thing. Hopefully it doesn’t become so big because we’re stuck at home again for a long period of time. Hopefully that’s not the case.
We’ve written a few articles, my colleagues and I, over the last five years about comparing the lab and the museum. We didn’t realize that it was actually going to be super valuable when you think about practice, actual practice, because of things that have moved online. So we’ve done quite a bit of work trying to figure out what it is that is specific to that museum space, right? It’s a big question. The museum space, every inch of it, it’s valuable, it’s expensive, it’s competing. Things are competing for that space, right? Departments are competing for that space. Shows are competing for that space.
In one of the studies that we did at the Queens Museum, we used the think-aloud protocol, and we wanted to just figure out how people naturally process art. So if we don’t ask you a question, what will you say just from your own sort of flow of thoughts about that art, artwork, or that exhibit? We wanted to know how that experience is different between the lab and the museum. So we did that study. What we did was we had people in the museum look at this particular piece by Gropper, William Gropper, The Senate from 1935. We also then took a very high resolution version, like a copy of this painting, had it printed professionally, not just on our printer at work or in the lab, to the same size, the same orientation and so forth as the genuine artwork in the museum.
We gave them a very simple instruction. We had them stand in front of the piece and we told them… And I think this was the exact one. I might be confusing it with another, but these are typically very similar. “Please look at this artwork and approach/view it as you normally would. At the same time, please describe your experience, your thoughts, and emotions while you’re looking at the artwork by talking into an audio recording device. Try and be as detailed as possible regarding your experience of the work. You can take as much time as you want. After you’re finished, we’ll ask you to fill out a questionnaire.” All right. So again, really kind of open. We didn’t really ask them a question. The key thing here is you can take as much time as you want.
I’m always trying to sell the think-aloud protocol as a method. And I don’t think we use it enough. One of the reasons why I’m trying to sell it is because when we do a survey or we give a survey or a questionnaire, we’re constraining people’s responses immediately from just that first question, right? It’s like, “We know what we want you to say. Although you have probably other things you’d like to say because we’re asking these questions, we’re probably not going to hear you.” So the think-aloud is so open and it’s really good for visitors who are really uncomfortable being in a museum. And we know there’s a lot of folks who walk in and when you ask them, “What are the things you’re thinking about? What about this visit? How comfortable are you being in a museum?” And it’s like there’s this idea that they don’t know how to actually do museums, right?
When we do the think-aloud, it takes that away a little bit. Not completely. Because it gives them control. It lets them self pace to basically report what it is that they’re seeing, what it is that they’re thinking, what it is that they’re feeling. So we’re taking away some of the constraints that we typically put on them, which is quite a bit, all right?
So we used the think-aloud to do this. We basically had them hold the recorder. They talk. We tried to move away. And then we transcribed the recordings. And then we did a formal thematic coding of those transcriptions. So one researcher did it, went through it. Another one did it independently. And then we met and discussed some of the findings.
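When two coders work through the same transcripts independently like that, agreement is usually quantified with something like Cohen's kappa, which corrects raw agreement for how often the coders would match by chance. Here is a small sketch implemented from the standard formula, with hypothetical theme labels; the study itself did not necessarily report kappa, so treat this as an illustration of the general technique.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder_a)
    # Observed agreement: the proportion of excerpts labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: how often the coders would match if each just
    # applied their own label frequencies at random.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned to four think-aloud excerpts.
coder_1 = ["emotion", "emotion", "memory", "memory"]
coder_2 = ["emotion", "memory", "memory", "memory"]

print(round(cohens_kappa(coder_1, coder_1), 2))  # perfect agreement: 1.0
print(round(cohens_kappa(coder_1, coder_2), 2))  # partial agreement: 0.5
```

Disagreements flagged by a low kappa are exactly the cases worth meeting over, which is the "then we met and discussed" step.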
And they also completed… Oh, “in a 32nd study,” that’s wrong. It’s “in a second study.” We did not do 32 studies for this one, all right? I know we’d like to keep our participants, but that’s a typo. In the second study, they completed two other scales. A memory retention scale. We gave them information basically related to the artwork. And then we tested them later on as to what they remembered. But that’s independent of the think-aloud procedure.
Some results here. Again, comparing museum/lab. Aesthetic judgments of the artwork were more intense in the museum. That doesn’t seem surprising. It’s like, yeah, of course it’s a real artwork, versus a reproduction. But not a lot of people have shown that, all right? And it does speak to this idea now, like, “What are we doing online with these virtual experiences that you’re putting people through?” And also even more sort of interesting for myself as a psychologist, is the memory for art-related information was better for participants in the museum versus the ones that saw it in the lab.
Now, we can all come up with all the factors that kind of try to explain these results, right? For example, different population of participants. Motivation, right? Expectations. If you go into a museum, you plan to go there and so forth. But again, the think-aloud was the perfect method for that research question. It really was. I mean, you could not have gotten this anywhere else. And then some of the measures that we used were also really a good match.
So we’re following this up. I actually have colleagues in Vienna who are moving artworks from the museum to the lab, like real works. I don’t know the insurance situation, how they did that, but they’re doing it. To really figure out what it is about the museum, right? We know there are cues, physical cues, the curatorial elements, visitor motivation, their expectations, the time that they devoted to that particular visit. So we’re trying to figure all this out now. And then I think within the next five years, maybe less, we’re going to have a better idea of how that works.
So the power of think-alouds when done right, I think it’s empowering as far as the visitors. Again, we’re removing some of the constraints. We’re authority figures, right? Researchers, evaluators, the evaluators in the museum. And when we allow them, we stand back and we have them talk to themselves, right? That’s pretty powerful.
And then quickly through this, the direct observation. Again, it seems like we use it, but we don’t use it enough. We did two studies. My colleagues did the first study, 2001 at The Met. We replicated the study and extended it in 2017 at the Art Institute of Chicago. And we tried to figure out a super simple question: how long do people look at art, right? And if you’ve read the article, I think I gave that out, the answer to that. But pause for 10 seconds. How long do people look at specific works of art that are considered masterpieces? What do you think?
All right. So I’ve heard anywhere from 30 seconds to five minutes. If you said five minutes or longer, you’re really off and you’re giving a lot of credit to the visitor’s ability to stay focused. It’s actually, again, matching the method to the question. I’m going to tell you the answer in a second, but it’s 456 participants. It was a lot of participants. I think The Met study had 300 some. The Art Institute study, 456. 9 artworks, approximately 50 observations per artwork. It’s a huge study. And these were the works. They’re known. Raphael, right? Cézanne, Hopper, Picasso, Van Gogh, Matisse. So the answer is, at The Met, 27.2 seconds. This is the mean. Art Institute, 28.3 seconds. Almost the same. And now you’re thinking, “But that’s the mean. What about the median? Maybe that was pulled by a lot of people just kind of passing by and just glancing at it.” 15 seconds at The Met. 17 seconds at the Art Institute of Chicago.
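That pull on the mean is easy to see in code. A quick sketch with made-up viewing times in seconds, skewed the way dwell-time data usually are: lots of brief glances plus a few visitors who linger for minutes. These numbers are invented for illustration, not taken from either study.

```python
import statistics

# Hypothetical viewing times (seconds) for one artwork: mostly quick
# glances, with a few visitors who stay for minutes.
viewing_times = [5, 8, 10, 12, 15, 15, 17, 20, 25, 30, 60, 120, 180]

mean_time = statistics.mean(viewing_times)
median_time = statistics.median(viewing_times)

print(f"mean   = {mean_time:.1f} s")  # pulled upward by the lingerers
print(f"median = {median_time} s")    # closer to the typical visitor
```

With data like this the mean lands well above the median, which is exactly the mean-versus-median gap the two museum studies showed.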
How about this one that we did in the lab? We also measured it in milliseconds. 19 seconds pre-restoration, 22 seconds post-restoration. This study was replicated in other museums in other countries, and they came up with roughly 30 seconds. It’s like 20 to 30 seconds. So that’s something we kind of need to… It’s an important piece of information, I think, when designing exhibits. So again, matching the method to the question, and theory is important as well.
So one of the things that… And I’m going to give the last example here as far as studies that we’ve done. And this is the one that I wrote about in the Museum article. So theory is important because it helps us to refine questions, connect questions to methods. And that’s sort of my theme, connecting questions to methods. And of course in the end, help interpret findings.
The model and the theory that’s been guiding a lot of the work that my colleagues and I have done starts with this statement. It’s, “Where the creative process ends, the aesthetic experience begins,” right? And it’s based on the mirror model of art. Basically, I made the argument that the art making process begins with an initiation phase where the ideas come about, the motivation for the artist to create a work. And I’m only using the example of a painting, but this is the process if we look at creative production, all right? The process of making things, it works in layers. We know this in architecture. We know this in design. So we have the initiation phase. And then the artist slowly builds the layers. They expand. They adapt. And then they finalize the piece.
The aesthetic experience is the opposite, the mirror opposite of that. So basically, the first thing we see as viewers and perceivers and as audience members is the last layer that the artist put down, the last thing that they interacted with. That finalized piece, right? The finishing aspects of that creative process. And then the more time we spend, memory kicks in. You start recognizing things in the piece, right? You start to think about your own memories and what it reminds you of maybe. If it’s a portrait, who does it remind you of? And then we start asking the questions again later on in that process about, “What does it mean for me? What do I think about this? Do I like it? Do I not like it? What emotions do I feel?” and so forth.
So based on this model, there’s this idea of a curatorial paradox, right? Again, we’re not as unique as we think we are. We are special, but we’re not as special as we think we are. Because of that, I mean, our brains work very similarly. Anyone that I pick from the Zoom tiles, your brain is about the same as mine, all right? Occasionally, there’s an outlier, an Einstein, some would argue… But for most of us, that’s not the case. We’re very similar. So because of that, when we encounter something, we’re probably going to react similarly to that something, to that encounter. And it’s the same thing with emotions, right? If an artwork is supposed to produce some sort of emotion, sometimes you can sort of guess, based on historical writings about that artwork, what it should produce, what it should elicit. Most of us are going to react the same way emotionally.
So there’s that paradox. It’s like, we’re so different, but we’re not, right? But we are actually probably supposed to react similarly. So we tried to test that. Is there a correspondence between an artwork and the emotions people felt after experiencing that artwork or a set of artworks or an exhibition, an exhibit? And to what extent are these emotions shared by visitors?
So we used a technique from Scherer (2005) called the emotion wheel, the Geneva Emotion Wheel. And basically, if you look at it, it’s so comprehensive as far as the groups of emotions, the families of emotions, that people will likely experience in response to anything, all right? It goes from positive to negative, pretty much everything in between. And you can use this tool. So the round things, they can check or X or shade in. The farther out they shade, toward the periphery of the field, the stronger that emotion is, right? So that’s a measure of intensity of emotion. And then they can also choose to report no emotion felt, which never happens, and other emotion felt, which rarely happens, right? Because you have so many choices.
A few things that this does, why it’s so effective as a method, the Geneva Emotion Wheel, is that we’re not asking people “How do you feel about something?” When we ask people that, and most of you know this, it’s like, “Oh, it makes me feel happy. It makes me feel sad. I think it’s exciting. I think it’s fun,” right? So the range of responses you’ll get is not a big range. Fairly limited.
If you need the paper, I can send it to you. I’ll have my email at the end. It talks about the theory behind the measure. But we’ve been using it for quite a few years now at the Whitney and in a few other museums. It started at the Whitney, actually, in the old building. My colleague, Katherine Potts, was so open to that study. But the data that we got looked like this. So these are the dominant emotions that people felt. And then they reported the intensities of those emotions. We were like, “Okay, what does this mean?” We tried to put it on a graph. It didn’t really look good, right? It was hard to communicate.
So what we tried to do, and there’s a paper on this that’s forthcoming about the method in exactly how to do it, we tried to do the same thing we did with some of our eye movement tracking studies that you saw earlier with the blobs, the green and the red, right? So we turned essentially the Geneva Emotion Wheel into a field where we could do this, right? So basically, the redder, the yellower, or actually the hotter it is, the more people found those emotions to be as the ones that they experience in relation to the artwork or the exhibition. We did it for both. And we’re also about to collect more data on this now.
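The wheel-to-heat-map idea can be sketched in a few lines. This is not the forthcoming method described in the talk, just a minimal illustration, with invented emotion labels and ratings, of how marks on the wheel (an emotion family plus a 1–5 intensity) could be accumulated into a 2-D field where “hotter” cells mean more visitors reported that emotion at that intensity:

```python
# Minimal sketch (illustrative only): accumulate Geneva Emotion Wheel
# marks into a 2-D grid. Each mark is an emotion family (which sets the
# angle around the wheel) and an intensity 1-5 (which sets the radius:
# stronger = farther from the center). A plotting library could then
# render the grid as a heat map.
import math

# Hypothetical list of 20 emotion families, ordered around the wheel.
EMOTIONS = ["interest", "amusement", "pride", "joy", "pleasure",
            "contentment", "love", "admiration", "relief", "compassion",
            "sadness", "guilt", "regret", "shame", "disappointment",
            "fear", "disgust", "contempt", "hate", "anger"]

def accumulate(responses, grid_size=50):
    """responses: list of (emotion_name, intensity 1..5).
    Returns a grid_size x grid_size grid of counts."""
    grid = [[0.0] * grid_size for _ in range(grid_size)]
    center = grid_size / 2
    for emotion, intensity in responses:
        angle = 2 * math.pi * EMOTIONS.index(emotion) / len(EMOTIONS)
        r = (intensity / 5) * (grid_size / 2 - 1)  # stronger = farther out
        x = int(center + r * math.cos(angle))
        y = int(center + r * math.sin(angle))
        grid[y][x] += 1  # one visitor's mark lands in one cell
    return grid
```

In a real pipeline one would smooth the counts (e.g. with a Gaussian kernel) before rendering, which is what produces the blob-like hot spots described in the talk.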
Also, we had half the participants go from right to left, and the other half from left to right, so there’s no bias in direction. So here, interest and amusement and laughter are the main ones. And then you start to occasionally see some of the little things on the left. So these are two different exhibitions. I should have put which exhibition, but I don’t remember now. But anyway, it’s just a general kind of [inaudible].
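Counterbalancing the wheel’s orientation, as described above, is straightforward to script. A hedged sketch (participant IDs and the orientation labels are invented for illustration):

```python
# Sketch: randomly split participants into two equal groups so that any
# left-to-right scanning bias averages out across the sample.
import random

def assign_orientation(participant_ids, seed=0):
    """Return {participant_id: orientation}, half in each group."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("left_to_right" if i < half else "right_to_left")
            for i, pid in enumerate(ids)}
```

With an even number of participants this yields exactly half in each condition; with an odd number, the extra participant falls into the second group.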
So they also completed measures of self-reflection, estimate of how long they spent in the gallery, demographic stuff. And then we basically correlated. And you’re all very good at correlating things with demographic information. We did the same thing with individual artworks. It’s very sensitive. So if a piece is basically supposed to elicit negative emotions, people feel that. They report it, right? They’ll report pity and compassion. They’ll report feeling sadness and despair, right? So despair, for example, is something that people would not say to you if you ask them, “How did this make you feel?”
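The kind of correlation mentioned above, between, say, reported emotion intensity and self-estimated time in the gallery, can be computed with a plain Pearson coefficient. A sketch with invented numbers (a real analysis would use proper statistical software and report significance):

```python
# Sketch: Pearson correlation between wheel intensity ratings and
# self-estimated minutes in the gallery. All data here is made up.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

interest_intensity = [4, 5, 3, 2, 5]       # 1-5 wheel ratings (invented)
minutes_in_gallery = [30, 45, 20, 10, 50]  # self-estimated time (invented)
r = pearson(interest_intensity, minutes_in_gallery)
```

The same function could be run per artwork, which is how a piece meant to elicit negative emotions shows up as elevated sadness or despair ratings rather than as a free-text answer.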
“I felt despair.” No, they don’t. But if you give them those options, if they can just look at it and scan it, then pick it… The first one on the left, longing/nostalgia: when was the last time people reported a sense of longing if you just asked them how they feel? So it’s about tools like this, about matching the method to the research question.
I’m going to spend the last two minutes… I could have spent 45 minutes just talking about the psychology of the visitor, the participant. I recently wrote a paper specifically on motivation and expectations. If we don’t take into consideration what visitors bring in right from the beginning, and this is where a pre/post design is so critical, and we don’t use that enough, then it’s hard to know, once they leave or are about to leave our spaces, how effective we were in delivering what we thought we were delivering or giving.
The role of incentives is also key. So when we do studies, often we give posters, or cards and things with artworks on them. But incentives are more than that, right? I have it in italics to emphasize that. So a think-aloud, for example, is an incentive to our audience, to our individual visitor. Oftentimes, they don’t like answering the questions that we’re asking. But you know what they’ll do? They’ll talk about things they want to talk about. And oftentimes we get more out of that than from asking them questions that we’ve created. Cognitive load is massive too. Have you ever exited a concert, or the opera, or a Broadway show, and everyone’s walking out and somebody’s like, “Hey, where are we going?”
Like, “I don’t know. I just want to walk out. Let’s just get to the car. Let’s get to the subway,” right? That’s exactly also what they’re experiencing. And it’s related to the timing of evaluations. When do we do most evaluations? At the end, right? When they’re walking out, right? It’s not the best time to do it actually.
Always keep in mind verbal ability. English not being their native language, that’s huge, especially when we’re talking about kids, right? I mean, gosh, it’s so hard to do studies with kids. Comfort and trust, again, related to that. Are we listening to them or are we just asking them questions? It’s okay to ask questions, but we have to balance that within our practice.
And then the use of indirect measures. And this is probably where we’re going in the future, at least in my work and some of my colleagues’. To explain this in a very straightforward way: we give them an experience, or they have an experience. And then instead of asking them directly about it, we have them do a task or an activity. How they do in that task or activity indicates what that experience was like. But we never ask them directly about whatever it is that they did.
Balancing subjective and objective measures. Interviews, surveys, and questionnaires are subjective; indirect measures and eye tracking are objective. I’m going to skip that, but basically it’s all about the outcomes, measuring the changes, and the big stuff. Personal transformations, huge impact. It’s important for practice. It’s important for us researchers. And it’s certainly important for funders, right? So again, using all these things.
Special thanks to my colleagues here, Dean, Elif, Amanda, Erika, Ann, Michelle, Rachel, Brooke for giving me this chance. And this has been a good webinar. And now, I’m really looking forward to your questions. But before that, this is my email. Screenshot it. Take a picture of it. Whatever. Don’t hesitate to reach out if you want to just chat. I’m always willing to chat and hear about what you’re doing. And we can open it up to questions.
Erika: Excellent. So if you have any questions, you can either pop them in the chat or go ahead and unmute yourself. That’s totally fine as well.
Pablo: Oh, there it is. Thank you, Amanda. Wow, you’re good.
“Has the wheel ever been used with youth?” Not within the context of museums. And it’s one thing that we’ve actually been meaning to do, is expand that population of… It’s going to be really interesting. The one thing that we’ve talked about, the sort of issue, potentially, with that is some of the emotions are really advanced, right? Just the actual words themselves like longing, nostalgia. So probably, high school, middle school, you’re okay. But as soon as you start dipping into the earlier grade levels, the younger kids, that gets a little bit trickier. So one of the things that we’ve considered is actually reducing that wheel, but at the expense of, of course, now you’re losing the sensitivity that was the reason for the wheel initially, right? So, great question. Thank you.
“We’ve used eye tracking in the past and struggled a bit with lengthy analysis time, some people with glasses, calibration issues, among other things. How are you able to combat these issues in your research?” Siniwa, if you want, send me an email. I can share with you that article. It talks about the actual version of the eye tracker that we use, the equipment, as well as the calibration procedure and all the things we used to try to clean up the data. But yes, it’s always a struggle. It’s always going to be a struggle cleaning up the noise in the data, absolutely. I hope you didn’t do that in-house. It’s really hard to do it in-house. Typically, you’d want to work with the manufacturer of the eye tracker and have some tech support from them. That usually resolves the issues. But as soon as you start mucking with it and changing some of the settings, the defaults, you typically run into issues.
“Can you talk a bit more about how museums get started with these types of projects? Are they primarily grant funded? Are they internally or externally driven?” Really, really good question. The easiest way is to start internally, all right? And you can actually build, at least at museums I’ve worked with in the past, internships specifically for these kinds of projects. They’re great if you work with academic institutions, whether it’s an Ed department or an Art department. They love those sorts of partnerships, which I think are key moving forward. It’s good to have these partnerships.
So internally, leaning on some of the academic institutions around your museum, or your own institution. But you can also apply for grant funding. At some of the museums I’ve worked with, the funds came from donations. It’s like, “Okay, create a good argument as to why this is important for the institution and then see where you can get the money,” whether that’s through the development office or actually reaching out and applying for some external grant. Internal is good to start with, unless you’re using some equipment.
“Can you say more about how you conclude from some of these results that a personal transformation has taken place?” Yes, absolutely. I think the one method that really gets at that is the think-aloud, right? One of the keys is, as they’re recording, you’re also listening, right? So mixed methods. They’re talking into whatever recording device you give them. As they’re starting to talk, you want to start noting key points that they make. So for example, one of the things we hear a lot, and this is always surprising, is, “It made me cry,” right? “This piece made me cry. The room I walked into, in that moment I teared up a little bit. It reminded me of something.” Make a note, right? Follow that up. And then it becomes an unstructured interview with prompts you actually made as you were listening.
So then you follow that up and so forth. It becomes an interview that’s quite personal sometimes, but then again, that’s the only way. It’s hard to get a sense of personal transformation with instruments like surveys. I don’t even know how you would do that. The second way of looking at it is longitudinal studies, right? Following up, whether that’s an email or a phone call. Typically, follow-up emails also come with surveys, which is not always good, and one of the reasons people are hesitant to actually send them back, right? But if that follow-up email, a week later, a month later, says, “Hey, do you have time for a brief conversation? We’re really, really interested. The last time we spoke, you mentioned this,” okay? And then, “I really want to talk more about that experience.” Then you have a follow-up that is more meaningful and more personal.
Let’s see. “In the study where you used the emotion wheel, were visitors instructed to read or not read the wall label?” Both. We’ve done both. And what we found is that there was a positive association between reading the wall label and the intensity of the emotions visitors reported. There’s also, of course, more time, because as they read the label, we basically put a framework around how they were going to look at that piece, and they spent more time. It’s really about slowing them down. The key is to slow down that experience. 27.2 seconds, right? Roughly 20 to 35, 40 seconds. That’s all we get for individual pieces. Most get a glance. Occasionally, a piece will get 10 seconds. And when visitors are really actively engaging, that’s when we see about 30 seconds, 45 seconds, one minute. Rarely does it ever go longer than that. If we can slow them down, it takes us to a whole different dimension as far as that interaction with the artwork.
If it’s a big museum, that’s even more of a challenge, of course, because visitors are pressed for time and want to see a whole bunch of stuff. If they’re not regular visitors to that specific museum, that’s another challenge, right? If they’re repeat visitors, then typically they won’t do the whole thing. They’ll go specifically to one gallery or one show or one installation.
Let’s see what else here. Thanks for sharing it again, Amanda. “Is there a simple and easy way to do the heat mapping, specific software? It looks like a great way of representing data, especially about emotions. I played with differently sized bubbles with words in the middle…” That’s good too. “… with words in the middle or different sizes of emojis, but heat maps feel more serious for reports.” I agree. I absolutely agree. Right now, that’s what we’re trying to develop. That’s what’s forthcoming. We’re trying to figure it out. We’re going to report the full method, of course. And that comes with a bunch of scripts that hopefully people can just run, but they do need software. MATLAB is one, and we’re thinking about other software that’s open access. But the future of this is to create something, maybe a website, that people can use where you put stuff in and out comes the heat map. Hopefully, we can get to that point in the next few years.
Any other questions? So you have my email. Feel free to reach out again. I’m always happy to hear about what people are doing, the questions you’re asking, and of course to help out in any way as far as thinking through some of your questions and potential projects. It’s just fun to do this.
Erika: Excellent. Well, thank you so much. And thank you so much, Pablo, for presenting today and staying to answer questions. Like you said, if you have any questions, send him an email. Amanda, thank you so much for linking both the article and his recent research. Those are both in the chat as well. So thanks for joining today and we hope to see you at the next webinar. Excellent. Bye.
Pablo: Thank you all.