Articulating why #istandforthearts, why #artsmatters, and indeed why we work in the arts and what that work is worth is critical to our industry and a core focus for TNN. It is also a topic that generates a lot of discussion, much dismay, and occasionally some solutions.
Recent conferences and media coverage have put forward the Culture Counts model as a way we might more easily capture what the performing arts can do.
Today, Julian Meyrick published ‘Why a scorecard of quality in the arts is a very bad idea’, a very timely article on measurement models. While TNN keeps working with you and its peers to better tell the incredible value of what we do, we thought we’d share an excerpt from the article.
Culture Counts, developed in Western Australia by the Department of Culture and the Arts, is a computer dashboard data program, designed to be used across art forms. It is currently being trialled for wider rollout by Arts Council England. Its aim, according to a Manchester-based pilot, is “a sector-led metrics framework to capture the quality and reach of arts and cultural productions”. What is proposed is substantial, serious, and no doubt well-intentioned. Unusually for a government-led measurement scheme, arts practitioners as well as policy experts have helped develop it. Yet we at Laboratory Adelaide – an ARC Linkage research project into culture’s value – view the venture with dismay. We argue that the approach is wrong-headed and open to political abuse.

In essence, Culture Counts is a quantitative scorecard for artistic quality, with a set of standardised categories translating a set of verbal descriptions into numbers. For example, if a surveyed audience can be prompted to say of a cultural experience that “it felt new and different” or “it was full of surprises”, it would rate highly on a 5-point scale for “originality”. That number would then sit on the dashboard beside other numbers for “risk” and “relevance”.

[Image: Numbers and culture can be dangerous bedfellows. Andy Maguire/flickr, CC BY]

The categories are nuanced enough to provide usable feedback for practitioners and bureaucrats with the time and desire to think hard about what the numbers mean. And we understand the pressure cultural organisations face to justify their activities in quantified ways. But will funders analyse the numbers with care? Will artists resist the temptation to trumpet “a 92 in the ACE metric” any more than vice chancellors have refrained from boasting of their rankings in university league tables? We think not. A quantitative approach to quality betrays naivety about how people look at dashboard data, privileging a master figure or, at best, two or three figures.
Context is lost to the viewer, and the more authoritative a number is presumed to be, the more completely it is lost.