June 29th, 2012
How do you measure interdisciplinary graduate research? The very strength of such graduate programs—their diversity—makes them hard for researchers to quantify. Traditional surveys of earned doctorates, for instance, have tended to be discipline-based, so the information harvested from them has been skewed or limited. We now know that asking institutions for such information doesn’t really help. It is better to ask graduate students directly whether their work is interdisciplinary. Not surprisingly, many who are pursuing their degrees in discipline-based programs claim their work is interdisciplinary in nature.
Surveys in the USA show that 28.4% of all doctorate recipients between 2001 and 2008 reported that their dissertations were interdisciplinary. That percentage seems to be holding steady, and is probably similar in Canada, but we aren’t doing a very good job of either defining or counting, so who knows? Surveys aren’t all that reliable, but they are all we have, at least until we can figure out a better way of gathering information. We do know, with some accuracy, that women are more likely to pursue interdisciplinary research. That’s interesting in itself. Does it have to do with nature, with education and socialization, or all of the above? Women are supposed to be much better at multi-tasking than men. Does that have something to do with it?
In a somewhat related way, like it or not, many Canadian universities are starting to have serious conversations about performance indicators. In the UK, such measures of productivity have been de rigueur since Thatcher’s time, and, as many would argue, much to the detriment of the entire post-secondary universe. These measures of achievement would apply to faculty members, not students, but how these conversations develop here will surely have an impact on graduate students. If a university values interdisciplinary research will it incorporate ways of evaluating such research in its performance discourse?
And there’s another piece to this. Let’s face it, it’s far easier to apply bibliometrics to interdisciplinary science research than to arts research. It’s easier to apply bibliometrics to science, period. Interdisciplinarity just complicates the whole picture. We know from bad experience that interdisciplinary programs have been overlooked or marginalized in the performance indicator game in the UK. We had best not repeat those lessons here.
The point is we need to be vigilant about who narrates the story of performance indicators. If that’s the way we are being compelled to think about success—because that’s really what this is all about—then we had better make sure that we control the vocabulary and the definitions applied to the research we do. Graduate students who see interdisciplinary research being short-changed or overlooked will be discouraged from going there. And that would not be good for any of us.