Arts and culture experiences really matter to people’s lives. But not every experience is equally beneficial, nor necessarily even positive. Arts organisations need to know what is working and why in order to improve their practices and audience experiences. This insight is needed to maximise the benefits (e.g. social capital) and minimise the downsides (e.g. social exclusion) of audience engagement with the arts.

Evaluation is vital for guiding cultural practitioners. High quality evaluation and audience research that is skilfully conducted and effectively shared enables practitioners to discover which aspects of an experience are working, in what ways, with which types of audiences and why. Without high quality evaluation, there is a temptation to rely on unreliable ‘gut instincts’ to judge whether effective experiences are being delivered.

Arts and culture organisations operate under highly challenging constraints on budget, personnel and methodological expertise. These constraints are real for many organisations (Jensen, 2014b), undermining their ability to use robust evaluation methods. Indeed, educators, marketers, visitor experience staff and managers working in arts and culture organisations are all very busy people. Such organisations have many highly skilled and knowledgeable staff, but very few are trained in social scientific research methods. This can make it very difficult for them to produce their own high quality evaluations or to be savvy consumers of impact evaluations delivered by other organisations and consultants operating in the sector.

Some knowledge of key topics such as evaluation and survey design would certainly benefit arts and culture practitioners, who will encounter evaluation evidence throughout their careers. It is important to be a savvy consumer of evaluation research, able to identify and avoid common limitations and to spot weak methods and problematic conclusions in audience research studies. With such a foundation in place, recent improvements in open source technology bring good quality evaluation within easy reach of many more arts and culture organisations.

For many, if not most, such organisations, technology-enhanced evaluation could be a real solution for embedding robust evidence within the fabric of audience engagement. Automated evaluation tools can provide real-time answers to questions such as: What proportion of visitors are satisfied with their experiences? What factors are affecting the quantity and type of impact on visitors? Automated methods of evaluation can reduce the need for ongoing costs such as data entry and expensive external consultants in order to gain high quality evaluation evidence. New technologies enable the design of evaluation systems that can be partially or fully automated after an initial customisation and set-up. A one-time infusion of expertise can thus prepare evaluation tools that practitioners then use without the time or specialist skills in social scientific analysis that would normally be required to conduct many types of evaluation effectively.
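To make this concrete, the sketch below shows the kind of simple, repeatable calculation an automated evaluation tool might run behind the scenes: reading exported survey responses and reporting the proportion of satisfied visitors. It is a minimal illustration only, assuming a hypothetical CSV export with a ‘satisfaction’ column rated 1–5; the column name, threshold and file name are illustrative assumptions, not taken from Qualia or any specific platform.

```python
# Minimal sketch of an automated satisfaction summary.
# Assumes a hypothetical CSV export with a 'satisfaction' column (1-5 scale);
# names and thresholds are illustrative, not from any specific evaluation tool.
import csv
from collections import Counter


def summarise_satisfaction(path: str, satisfied_threshold: int = 4) -> dict:
    """Return response count, proportion rating at/above the threshold, and the rating distribution."""
    ratings = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            value = (row.get("satisfaction") or "").strip()
            if value.isdigit():  # skip blank or malformed responses
                ratings.append(int(value))
    if not ratings:
        return {"responses": 0, "proportion_satisfied": None, "distribution": {}}
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return {
        "responses": len(ratings),
        "proportion_satisfied": round(satisfied / len(ratings), 2),
        "distribution": dict(Counter(ratings)),
    }


# Example use (hypothetical file name):
# print(summarise_satisfaction("visitor_survey.csv"))
```

Once set up, a routine like this can be scheduled to run whenever new responses arrive, which is what allows the ongoing reporting to happen without specialist analysis time.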

This type of automation was the focus of previous projects such as Qualia (qualia.org.uk), which I developed as the research partner, working with the technology partner i-DAT and the practice partner Cheltenham Festivals. The current project builds on that prior research to show how automated evaluation tools can be embedded in arts and culture organisations to build evidence-based practice, enabling progress and development in a field with so much realised and unrealised potential to improve lives.

Image above: Young Producer-led panel debate about representation for young creatives of colour in the Cultural Sector, as part of a YP Takeover event (c) The National Gallery.