(no subject)

Date: 2009-09-04 08:38 pm (UTC)
From: (Anonymous)
Not really. I don't think [info]gaudior would play games like that.

I admit I phrased my first comment non-neutrally, because I was frustrated that (as I saw it) [info]gaudior's ideological bias was causing her to miss something that seemed obvious to me.


Ok, that makes sense to me. I was trying to reconcile the fact that your original comment didn't seem neutral to me with the fact that I didn't think you believed gaudior was deliberately playing mind-games with an admittedly non-scientific survey, so I asked for clarification and/or correction. Thanks for providing it!

I think the comparison to SurveyFail is unfair

Hm. Maybe my intent didn't come across clearly. I don't think that gaudior's questions themselves were comparable to the SurveyFail ones; in fact, that was part of the point I was trying to make.

I was trying to draw a comparison between the types of bias problems I saw in the SurveyFail survey and those in gaudior's survey, and to point out that some of the same general problems (for instance, "leaving out an option") appeared in both, even though the topic, extent, and visibility of the bias were very different in each survey.

Using the surveyfail example may have been a bad call on my part because it's so inflammatory, but I chose it as an example, despite its potential for drama, for a few reasons:
- I have recently been reading about it and using it as a tool to think about survey bias and bias in general in my own life and head.
- I wanted a survey where most people would agree that the example in question showed clear bias problems and little nuance, such that there wouldn't be a question of whether my example of clear bias actually contained clear bias.
- I wanted a survey that showed obvious and clear bias problems and little nuance in order to contrast it with gaudior's survey, which has subtler bias problems and a lot more nuance in its questions.
- I wanted a survey that people reading this thread would probably be somewhat familiar with.

My second-choice example of a biased survey, a survey on furries that I took in July, contained some bias. However, the survey would have been unfamiliar to readers, its bias was less obvious, I wasn't sure I would remember it well enough to describe it accurately, and it had been vetted beforehand by an IRB as part of an ongoing research project. So I chose not to use it as my example.

I wondered why both surveys shared the same types of bias problems despite their other differences. I realized that the more obviously biased a survey is, the easier that bias is to see; the less obviously biased it is, the harder those same biases become to spot. The very strength of nuanced questions, their nuance, can also be a liability in terms of bias visibility.

(Perhaps this is because less nuanced questions not only tend to have larger biases in the first place, but also act as their own pointers to the bias they contain, while more nuanced questions may have fewer biases and be less able to point to the bias they do contain? I view nuance as a sort of narrowing-down of the space for bias to exist within a question, though the possibility of bias outside the question, or the perniciousness of the bias itself, is another matter entirely.)

You can probably learn a lot about how to avoid bias from surveys that have obvious bias problems and little nuance: if you've seen enough obvious instances of the same type of bias on a large scale, you can start seeing less obvious instances of the same bias on a smaller scale, since you already have some idea of what you are trying to avoid.