The recent flurry of attention to SL and its numbers (here, here, here, and, most recently, here) leads me to think that folks might be interested in having a chance to chew through some methodological stuff, along the lines of the “Methodologies and Metrics” panel on which Nic, Dmitri, and I served at the State of Play/Terra Nova Symposium early this month. Below the fold, some tweaked ideas from some emails I circulated among the panelists in preparation for the panel. While I’m not discussing virtual worlds and the methodologies we’d use to understand them specifically, I hope this will be helpful background for such a discussion.
It is hard to get away from a common conception, both within and outside academia, that numbers are the one true path to understanding. This is part of a set of cultural expectations that are reproduced precisely because they are so rarely challenged. Most commonly, one hears that claims with numbers are “grounded” or otherwise true in a way that other kinds of claims (such as the ones based on the kind of research that Tim talked about here) are not. Claims based primarily on those other kinds of research, particularly on interviews and participant observation, often get branded as “anecdotes”, with the suggestion that they hold no real value as reliable claims. Here I would like to push against this association and help clarify what qualitative social science research methods (ethnographic ones in particular) bring to the table. In short, they are not “anecdotes”, and they can form the basis of reliable claims even without numbers, although, as Dmitri and I never tire of saying, having both is better than having just one.
No social scientist, of course, would want to “generalize from anecdotes,” but the problem is that often we do not really understand what that means. Or perhaps it is more accurate to say that across the academy many scholars (not to mention the public at large and policy makers) do not know enough about methodology (this is true of both qualitative and quantitative methods, and more broadly of exploratory versus experimental research), and so the charge is in essence a political move meant to marginalize the other side’s research, one that succeeds because of that lack of broad grounding. From my conversations with everyone involved with TN I have never felt that we (as a group of authors) were particularly prone to these errors, but there is no question that the move finds its way into the discussions on TN, as in the recent threads.
The goal of all social science is “generalization” in a sense, but the legacy of positivist thinking about society (that it is governed by discoverable and universal laws) has left us in the habit of thinking that the only generalization that counts is universal. It is always interesting to me how some work (especially that done by the more publicly-legitimized fields, such as economics) can proclaim itself to be about the universal despite the fact that only a moment’s thinking reveals the application of the ideas to be narrow (to industrialized, capitalist contexts, etc). The strange thing is that this doesn’t end up being a problem for those already-legitimate fields; instead, it is largely ignored — this is what being well situated on the landscape of policy and academic relations of power gets you (to be Foucauldian for a moment).
But of course generalization, in the more limited sense of seeking a bridgehead of understanding across times and spaces, has long been the hallmark of history (the first social science, in a way). What is odd is how difficult it seems to be for those who criticize methods such as participant observation and interviewing to see the projects those methods support in the same light as history and its efforts. There is nothing inherently problematic about such claims; they are just as able to inform policy as universal ones, and they have the benefit of incorporating more nuance.
So then what is an anecdote? It is a description of an event isolated from its broader context, so no wonder all of us would like to shy away from the suggestion that we are drawing our conclusions in isolation from that broader context. But ethnography (meaning principally participant observation, along with interviewing, surveying, and other methods), to speak of the relevant methodology most familiar to me, quite distinctly does not treat events in isolation. Brief descriptions are often presented in the course of ethnographic writing in order to illustrate a point concretely, but the point made is only as sound as the degree to which we trust the author’s command of the broad array of ongoing processes in the context at hand. How is this credibility established? Through a complex of many techniques: careful writing, thick description, peer review (always including experts in that period or place), sound reasoning itself, a track record of previous research, and so on. This form of generating reliable claims is not somehow “less” viable than others, and its strengths and weaknesses are of similar scope (though they differ in their particulars).
So one of the tropes one finds in the recent spate of posts about SL and its numbers is the suggestion that only when numbers we trust are present do we feel that the claims authors make are “grounded”. This is not true. As anyone with much experience of statistics knows, the numbers say nothing without the ability to interpret them that other kinds of research provide. In fact, given the above, if any research has a claim to being “grounded”, it is the first-hand research of participant observation.
Even when this kind of contribution from qualitative research methods is acknowledged, however, there is still a tendency to see the claims of work based on them as always and severely limited to a “niche”, at least until numbers come along. But a social history or an ethnography of a place and time is not this narrow. Such works are able to make general claims at the level of locale, region, or even nation, and they often do (when done well). The idea behind ethnography is that the ethnographic research method, at root, inculcates in the researcher a degree of cultural competence such that he or she can act capably (and sensibly) as a member of that culture. Supported by observation, archival research, surveys, or interviews (usually some combination), as well as (possibly) prior work, this learned disposition informs an account of the shared dispositions of the actors on the ground, and it is laid out in the published work (as best one can in writing) as representative of a worldview from a particular time and place. Thus, my claims about gambling in Greece were made beyond the level of the city where I did my research, and I argued for the existence of a cultural disposition that characterizes Greek attitudes toward contingency at something like the national level (without holding too much to hard boundaries).
Of course, these claims are further bolstered by the broadening of one’s research methods, whether through surveys, demographic data, archival research, media studies, or any other means that support the big picture. Relatedly, there is nothing about quantitative methods that dictates that they must “stay big.” They can be productively focused and narrow as well.
This is not to say that there is no limit to the level of generalization qualitative research can support, or that quantitative methods cannot exceed it. So, for example, while an ethnography could make reliable claims about Greek culture, I don’t think it could do the same for American culture. The reason connects to what culture is (a set of shared expectations, based on shared experience and continually re-made through shared practices) and to the fact that it is far too fragmented and varied across the US for an ethnography to make such claims. But while this is true, the important point is that the claims qualitative methods can support are not as particularist as they are sometimes made out to be.
I become, I confess, a bit sad whenever I encounter this kind of marginalization in action (for me, it most often happens on interdisciplinary fellowship review panels and the like), because at root it bespeaks a lack of trust across the academy. There is little doubt that there have been excesses across the gamut of methodologies and theories that the social sciences use (reductions to representation, or materiality, or power all come to mind), and perhaps this accounts for the parochialism and suspicion. But let’s hope that we don’t fall prey to what are, in my view, less often battles over the nature of sound inquiry than gambits meant to direct or redirect institutional resources.