The Malta Independent 20 May 2024, Monday

Technology and the evaluation of the museum visitor experience

Tuesday, 16 August 2022, 10:37 Last update: about 3 years ago

Dylan Seychell

Part of the MAtics Project team at the Pilot Demonstration Site in Heritage Malta’s Inquisitors’ Palace: Matthew Sacco, Noel Buttigieg and Dylan Seychell

Starting in the 17th century, there was an occupation that involved carrying out mathematical calculations by hand. The job title was that of a Computer, a person tasked with computing mathematical operations. These were individuals with sound scientific knowledge, capable of thinking logically while performing mathematical tasks. The mechanical computer in the 19th century, followed by the electronic computer in the 1940s, radically changed this profession.


What happened to "human computers"? The electronic computer took over not only the tasks they used to do but also their job title. While this "takeover" sounds radical and drastic, it was in fact an opportunity for these individuals of logic. Instead of performing routine scientific calculations that were now done by a machine, the role of the human computer arguably evolved into that of a programmer, software developer, computer scientist, software engineer or one of countless other professions less than a century old.

All these professions, empowered by and evolved from computers, are celebrated and respected in society. For each of them, the computer is a platform and tool for further growth. After all, the computing machine brings significant strengths to the table. Computers do not get tired; they can perform a task in a standard and consistent way, no matter how often it is repeated. Humans get tired or bored after repeating a task, which affects our accuracy and precision. Computers do not compromise on accuracy because of boredom. The other main strength of a computer is speed. Tasks are not only repeated consistently and accurately, but at an outstanding speed that no human can match.

Computers did not replace these professions. They augmented their skills and talents and provided an opportunity to automate meaningless tasks. The latest advancements in Artificial Intelligence have extended this notion to more professions, including that of experience evaluators and ethnographers.

Analysing the museum visitor experience

User experience design and visitor-centric design are design concepts that keep the user or visitor at the core of the experience. User experience design gained importance and relevance in the software industry, where competition is intense, cutthroat and fast-changing. Applications that ignored users' needs faced few downloads or instant uninstallation.

In scenarios that are not as fast-paced as software, principles such as user experience design are not often given front-row priority. The design of experiences in museums is one of them. Let us compare our experience when visiting a museum with our experience when downloading a new app on our mobile device. One of the main distinguishing factors is the availability of alternatives. When we download a new mobile application, we have the luxury of quickly going through its main features and judging the experience within seconds. If we enjoy the app, we keep it. If not, we uninstall it and return to the store to download another until we are satisfied with our choice. This reality shines brightly in the face of whoever designs the experience of mobile applications and therefore promotes the adoption of the best user experience design methods.

On the other hand, when we walk into a museum or another physical experience, we subconsciously know that we will stay there for a certain amount of time. It is also understood that if we do not like the experience, we cannot instantly teleport ourselves to another museum until we find the one we want. This restrictive situation is also subconsciously clear to the designers of these experiences. As a result, over the past decades, decisions related to the visitor experience in a museum were mainly based on the assumptions and opinions of the curatorial team.

While data is plentiful and readily available in mobile app development, the scenario is diametrically opposite in the museum world. In such a data-poor context, it is understandable that an autocratic approach takes priority in designing visitor experiences in museums.

Before the tech industry was as data-driven as today, the situation was very similar. Netscape was one of the main actors in the browser wars of the 1990s that gave birth to the accessible web as we know it today. Jim Barksdale, CEO of Netscape, once said: "If we have data, let's look at data. If all we have are opinions, let's go with mine."

In the absence of consistent data in museums' visitor experience design process, it is understandable that autocratic decisions are taken.

The human museum experience evaluator

When designing experiences in sites such as museums, the golden question is: "How are visitors interacting with the artefacts on display?" The standard data collection method for evaluating the museum visitor experience is ethnographic observation by human evaluators. Trained professional ethnographers observe visitors in museums and record milestones, behaviour, paths and even timed stops. Surveys are also carried out with visitors.

Irrespective of the scenario or industry, human annotation and analysis provide deep and valuable qualitative insights. However, this comes at a significant quantitative cost. The proper understanding of this accepted compromise sheds light on the solutions ahead.

In the Museum Analytics (MAtics) project, SeyTravel Ltd collaborated with Dr Noel Buttigieg, a trained professional ethnographer with years of experience evaluating visitor experience in museums and cultural heritage sites. This collaboration provides a unique opportunity to design an AI approach that supports human evaluators.

Through research, observation of the evaluation process and workshops, one can start to deduce the challenges of manual ethnographic studies.

Number and availability of evaluators

Such studies cannot be carried out all the time because they rely on humans staying in a limited space and observing visitors. This challenge is compounded by the limited availability of trained professionals who can carry out such studies and observations, a factor that also affects the scalability of these exercises.

Costs

This scarcity places the economic strain of demand and supply on the museum commissioning the study. In my experience, I have not encountered professional ethnographers who take advantage of scarcity to raise their rates. But no matter how low the rate is, engaging human evaluators carries a standard hourly cost.

Limited temporal coverage

The limited availability of professionals, the time needed to process results and draw conclusions, and the costs involved discourage museum leadership from engaging in such essential studies. These studies are therefore conducted in short bursts, where visitors are observed in limited timeframes: different hours during a day, an entire week or random days. If the engagement is inconsistent, the effects of seasonality, weather and other factors are entirely missed, and the statistical significance of the conclusions is challenged.

Consistency and bias

Humans get bored and tired, especially when repeating a task in a limited area. The first casualty of this limitation is consistency in the data collected at the individual level. When we get tired or bored, we tend to lose sharpness in our observations and perhaps even confuse cases in our minds when taking notes. As individuals, we also have our personal biases. While training reduces the risk of bias, it cannot be entirely eradicated and can still cloud conclusions. Bias can vary. Is that young man looking at his phone in front of a masterpiece disinterested in the museum, or is he looking up some detail about the painting? Is that older woman staring at the digital audio guide feeling lost because she is out of touch with technology, or is she fondly appreciating the content communicated by the device? Any answer to these questions probably carries a hint of personal bias.

Above, individual limitations such as tiredness, boredom and bias were outlined. We are inconsistent as individuals, but that is just the tip of the iceberg. Matters become significantly more complicated when we consider that a single study is carried out by different human evaluators, each with varying levels of tiredness, boredom and bias. Thinking about data quality in terms of consistency can be severely demotivating.

How can Artificial Intelligence help?

The challenges faced by human visitor experience evaluators are neither new nor unique. Just as tiredness and boredom affected human computers in the 19th century, they also affect professional human evaluators. The computing machine helped reshape the original profession and improved other scientific professions. With the recent advancements in AI, technology is offering this profession a unique opportunity for renewal and revival.

The MAtics project is carefully tailoring a strategy for adopting AI in observations. The aim is to automate mundane tasks that bore humans while augmenting the strong human qualitative capabilities with AI-driven insights.

Computer Vision is a branch of AI that deals with understanding visual data such as images and videos. These techniques can analyse video footage of anonymised individuals in a museum's hall and convert it into a detailed visual and quantitative trail of each visitor's experience. By automating the observation process, the project reduces the need for human evaluators to be present on-site, thereby reducing costs while improving data consistency. In doing so, it frees human evaluators to dedicate their time to the insights collected over time and to add the meaningful interpretation that the machine cannot yet capture.
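To illustrate what a "quantitative trail" might look like once a computer vision system has reduced footage to anonymised sightings, here is a minimal Python sketch. The zone names, data format and function are assumptions for illustration, not the MAtics pipeline: it takes (timestamp, visitor ID, zone) detections and derives each visitor's path and dwell time per zone.

```python
from collections import defaultdict

def build_trails(detections):
    """Convert anonymised (timestamp, visitor_id, zone) detections into
    per-visitor trails: the ordered list of zones visited and the total
    dwell time (in seconds) attributed to each zone."""
    per_visitor = defaultdict(list)
    for ts, visitor_id, zone in sorted(detections):
        per_visitor[visitor_id].append((ts, zone))

    trails = {}
    for visitor_id, obs in per_visitor.items():
        path, dwell = [], defaultdict(float)
        for (ts, zone), (next_ts, _) in zip(obs, obs[1:] + [obs[-1]]):
            if not path or path[-1] != zone:
                path.append(zone)           # record zone transitions only
            dwell[zone] += next_ts - ts     # time until the next sighting
        trails[visitor_id] = {"path": path, "dwell": dict(dwell)}
    return trails

# Hypothetical anonymised detections: visitor 7 lingers at the main exhibit.
sample = [
    (0, 7, "entrance"), (30, 7, "main_exhibit"),
    (210, 7, "main_exhibit"), (240, 7, "exit"),
]
print(build_trails(sample))
```

Aggregating such trails over weeks of footage is what would give evaluators the temporal coverage and consistency that short manual observation bursts cannot provide.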

Visitor privacy is at the core of the MAtics project. A privacy-by-design approach was taken when designing and building this project. In practice, this means no identifiable data is stored anywhere in the system and no identifiable personal data is uploaded to the cloud. Using state-of-the-art methods in AI that the SeyTravel team is equipped to deliver, the privacy of individuals is safeguarded while not jeopardising the quality of insights provided to evaluators.
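In practice, a privacy-by-design pipeline of this kind might reduce each detection to a bounding-box centroid tagged with a random token before anything is stored. The sketch below is illustrative only; the class name, token scheme and record format are assumptions, not the MAtics implementation:

```python
import secrets

class AnonymisedTracker:
    """Privacy-by-design sketch: each detected person is reduced to a
    bounding-box centroid tagged with a random session token. The raw
    frame is never stored, and tokens cannot be linked to identities."""

    def __init__(self):
        self._tokens = {}  # detector track index -> random token

    def process(self, frame, boxes):
        """`boxes` are (track_index, x, y, w, h) tuples from a detector.
        Returns anonymised records; the raw frame is never retained."""
        records = []
        for track_index, x, y, w, h in boxes:
            token = self._tokens.setdefault(track_index, secrets.token_hex(4))
            records.append({"id": token, "centroid": (x + w / 2, y + h / 2)})
        del frame  # drop the local reference; only tokens and centroids remain
        return records

tracker = AnonymisedTracker()
print(tracker.process(frame=b"...raw pixels...", boxes=[(0, 100, 50, 40, 120)]))
```

The design choice is that the random token is stable within a visit, so trails can still be reconstructed, but it carries no identifiable information and cannot be traced back to a person.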

The opportunities for augmenting human evaluators and decision-makers with AI are numerous. Through the MAtics project, we demonstrate how computer vision can make a difference, but this is only the start of how AI can contribute to the empowerment of these institutions.

This article is based on the research project, MAtics (Museum Analytics), which is financed by the Malta Council for Science & Technology, for and on behalf of the Foundation for Science and Technology, through the Fusion: R&I Research Excellence Programme. For more information refer to https://www.seytravel.com/matics.

 

Dr Dylan Seychell is the founder of SeyTravel Ltd and an academic in the Department of Artificial Intelligence at the University of Malta

 


 
