Methodological questions about the Giving Evidence list of research priorities

Friday 19 July 2019

by Tobias Jung

This month, Charity Futures and Giving Evidence launched their findings on whether, where and what research is needed about UK charities and giving. The work is divided into two categories. The first, research demand, looks at what questions donors and charities would like to see researched and offers research priorities; the second, research supply, explores what research already exists. The vision is that the work could inform academic research and activity on philanthropy and charity. Laudable intentions. Unfortunately, these appear to be accompanied by some lamentable issues. As the work ‘urge[s] academics to consider the list of priority questions presented’, here are my initial impressions on the prioritisation survey.

Sadly, the prioritisation survey methodology does not engender confidence in its overall planning and, crucially, its findings. The reason for this is simple; the consequences are serious. Firstly, participation in the prioritisation survey ran through a single, prominently shared link: you click on it, and it takes you to the survey. Anyone who has the link can participate. The problem with this is that the survey was only meant for charities, donors or funders. Leaving aside the unacknowledged paradox that UK universities are registered charities and would thus be eligible for inclusion, academics were to be excluded. The rationale for this was that academics had other mechanisms to influence the research agenda. Unfortunately, the link to participate in the survey was shared and advertised publicly and liberally, including on webpages and social media. Theoretically, therefore, academics, and indeed anybody in the world, could participate. Anybody with time on their hands could even do so more than once. This leads me to my second concern.

Alongside ensuring that only intended participants can complete the survey, one also wants to make sure that they do so only once: under no circumstances does one want the same person(s) completing the survey over and over again, as this would skew the results. To this end, online survey instruments offer tools such as individual links, IP restrictions, or password restrictions. These mean, respectively, that a link can only be used once, that each IP address (which identifies a computer) can only submit one response, or that only those with the correct password can access and fill in the survey. Various horses for various courses. None of these appears to have been used, or to have worked, in the prioritisation survey. As a screen recording shows, anybody using the advertised link could fill in the survey repeatedly. If this happened once, it might have happened twice; it might have happened thrice. Thus, before we even reach any other level of critical reflection that the report urges academics to undertake, there is plenty of doubt about who completed the survey, about what was submitted, and about the results.
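The individual-link mechanism described above can be sketched in a few lines of code. This is a minimal illustration, not a description of any particular survey platform: the class name, the example URL, and the method names are all hypothetical, invented here to show the principle that each invitee gets a unique, unguessable token which is consumed on first submission.

```python
import secrets

class SurveyInvites:
    """Hypothetical sketch of one-time survey links: each invited
    participant receives a unique token, and a token is accepted
    only once, so the same link cannot be used to respond twice."""

    def __init__(self):
        self._tokens = {}  # token -> True if already used

    def invite(self):
        """Issue a unique, unguessable link for one participant."""
        token = secrets.token_urlsafe(16)
        self._tokens[token] = False
        return f"https://example.org/survey?token={token}"

    def submit(self, token):
        """Accept a response only if the token exists and is unused."""
        if self._tokens.get(token) is False:
            self._tokens[token] = True
            return True
        return False  # unknown or already-used token
```

A publicly advertised survey, by contrast, is the equivalent of a single token that never expires: anyone holding it can submit as many responses as they like, which is precisely the weakness identified above.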

Bearing these points in mind, we are therefore unlikely to be using the findings from the prioritisation survey at our Centre. Instead, we will continue to develop work in a dialogical and co-produced way on issues and questions that are directly identified with interested parties.

There is, however, one silver lining arising from the concerns highlighted. According to the Giving Evidence report, there is a feeling of disconnect with academic work. Whatever one’s views on academic research relating to charity or philanthropy, at least it might offer useful assistance on any future research methodology.


This piece was first published on Tuesday 16th July 2019 on the Alliance Magazine blog under the title ‘Giving Evidence on research priorities: one link to rule them all?’.
