The methodology states
Summary: That men frequent “breasturants”[sic] like Hooters because they are nostalgic for patriarchal dominance and enjoy being able to order attractive women around. The environment that breastaurants provide for facilitating this encourages men to identify sexual objectification and sexual conquest, along with masculine toughness and male dominance, with “authentic masculinity.” The data are clearly nonsense and conclusions drawn from it are unwarranted. …
while the Areo Magazine article says
We published a paper best summarized as, “A gender scholar goes to Hooters to try to figure out why it exists.”
neither of which is a good description of the actual hoax paper.
Specifically, my study began in earnest after I amassed nearly 3 months of in situ observations and interactions with the group I came to study and, as such, it began after I noticed certain themes common within the conversations the group had in the breastaurant. In particular, I noticed these themes differed in certain ways from those typical in the gym where we trained together. This gave me certain initial themes (sexual objectification and male control of women) that seemed prevalent and identified with masculinity in breastaurant environments, which inspired my study. […]
I aimed to approach the breastaurant environment in a way that documents and characterizes patterns of masculinity I recognized as largely typical within the breastaurant, although atypical to the participants outside that context. I sought to address the interrelated questions of what features of the environment lead men to enact certain masculine performances in pastiche, how men then interpret these performances as relevant to some presumably authentic masculinity, and what this tells us about a breastaurant masculinity that arises in dynamic interplay in some men within breastaurants.
I was tempted to skip this one, as it falls squarely in PB&J’s themes of “mistake the absurd for the reasonable” and “mislead people about your own paper.” But if they’re alleging that much of sociology is rife with dodgy methodology …
Purpose: This paper ridicules men for being themselves by caricaturing them and assuming bad motivations for their attitudes. It seeks to demonstrate that journals will publish papers that seek to problematize heterosexual men’s attraction to women and will accept very shoddy qualitative methodology and ideologically-motivated interpretations which support this.
=====
Our papers also present very shoddy methodologies including incredibly implausible statistics (“Dog Park”), making claims not warranted by the data (“CisNorm,” “Hooters,” “Dildos”), and ideologically-motivated qualitative analyses (“CisNorm,” “Porn”).
… it makes sense to analyse one of their papers with a weak methodology. Let’s involve both of us in this: suppose you want to assess the attitudes displayed by patrons at a certain type of restaurant. What sort of process would you use? Take a few minutes to think about it yourself, before I outline how I’d do it.
We’re entering into this study with some idea of what we’re looking for, so we should make sure those things are as well-defined as possible. “Sexual objectification” would imply treating women as objects without desires or needs, “sexual availability” would refer to the assumption that someone is willing to have sex, and so forth. You don’t necessarily have to nail all these down beforehand, but doing so decreases the odds of misinterpreting your own data.
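To make that concrete, here’s a minimal sketch of what a written-down codebook might look like in Python. Every code name, definition, and indicator below is a hypothetical illustration of the technique, not something taken from the actual paper.

```python
# A hypothetical codebook, pinned down *before* data collection. Nothing here
# comes from the hoax paper; it only illustrates defining your terms up front.
CODEBOOK = {
    "sexual_objectification": {
        "definition": "treating women as objects without desires or needs",
        "indicators": ["remarks about body parts in isolation",
                       "dismissing a woman's stated preferences"],
    },
    "sexual_availability": {
        "definition": "assuming someone is willing to have sex",
        "indicators": ["reading routine friendliness as flirtation"],
    },
}

def define(code: str) -> str:
    """Look up the agreed definition, loudly flagging anything undefined."""
    entry = CODEBOOK.get(code)
    return entry["definition"] if entry else f"UNDEFINED CODE: {code!r}"
```

The point isn’t the data structure itself; it’s that a coder who hits an undefined code is forced to stop and define it, rather than quietly stretching an existing one to fit.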
Next up, you need to collect data. There are a number of approaches you could take: surveys, interviews, or observation? In the restaurant, or outside of it? Since we’re looking for behaviour at these restaurants, “inside” is the better choice. Observation is best done without the awareness of the subject, but with human subjects that’s an ethical minefield. Instead, we’ll have to out ourselves and get permission to listen in. This might alter the behaviour of the people we’re observing, but there’s no real alternative. You wouldn’t want to merely plant an obvious recording device, either, as that would make everyone paranoid; instead, have an actual person there with a sign to collect data, and make it clear people can opt out or halt the recording at any time. The restaurant owner and waitresses are also a part of this process, and need to be included. There’s an analogy to what Jane Goodall did with chimpanzees.
At first, as Goodall recalls in the NATURE program, it appeared that the primates’ behavior would remain forever mysterious. Within a few years, however, she became intimately familiar with their lives, spending her days trailing them through the forest and recording their habits. Some of her techniques were unorthodox and controversial: for instance, rather than assigning her chimps numbers, she gave them names like “Fifi” and “Passion.” She also set up at Gombe a banana-laden feeding station designed to lure the apes out into the open, where they could be more easily observed. She now regrets this practice, which somewhat altered the chimps’ behavior, but researchers have nevertheless found that Gombe’s chimps get less than two percent of their food at the station, spending the bulk of their time foraging in the forests.
Her methodology was flawed, but it was a first attempt in a novel research area and she was honest about the methods used. Other researchers could use that to replicate or improve on her original work.
Once everything is collected, you’ll need to analyse what you have. This is where the definitions really come into play; you can scan through the material you’ve collected and start classifying or “coding” it. Are you coding it “correctly,” though, and trying to minimise your personal biases? A common bit of insurance is to involve multiple people in the process. Have each person go through the work individually, then compare the codes that everyone assigned. If they match, great; if they don’t, sit down and try to figure out why you differed, if necessary going back and recoding. If someone spots an interesting pattern and comes up with a new coding, discuss it, then go back through and recode. This doesn’t guarantee objectivity, as your coders probably have a fair bit in common, but if done seriously it’s about the best we can do.
The use of multiple coders did support strategies to improve the clarity of codes used. While inter-coder verification could have been assessed quantitatively, the more qualitative approach used had benefits in helping to identify the reasons behind coding disagreements. Code ambiguity, repetition, and omission led to directions for improving the coding system.
Our experience supported the merit of coding discussions to improve qualitative analysis (…). Having multiple coders from contrasting backgrounds helped to address our concerns about researcher influence on the nature of analysis. In terms of the phenomenological perspective, the assumption is that our ‘lived experiences’ of being in the world impact on data analysis (…). Hence, bringing more than one (and possibly contrasting sets of experiences) should contribute to ‘better’ analysis.
Lynda Berends & Jennifer Johnston (2005) “Using multiple coders to enhance qualitative analysis: The case of interviews with consumers of drug treatment,” Addiction Research & Theory, 13:4, 373–381, DOI: 10.1080/16066350500102237
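The quantitative route Berends & Johnston mention is worth seeing in miniature. Here’s a short Python sketch computing Cohen’s kappa, a standard chance-corrected agreement statistic, over a pair of coder outputs; the codes and excerpts are invented for illustration, not drawn from any real study.

```python
from collections import Counter

def cohens_kappa(codes_a: list[str], codes_b: list[str]) -> float:
    """Cohen's kappa: raw agreement between two coders, corrected for chance."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    if expected == 1:  # both coders used a single identical code throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Invented example: two coders label the same ten excerpts.
coder_1 = ["objectification", "availability", "toughness", "objectification",
           "toughness", "availability", "objectification", "toughness",
           "availability", "objectification"]
coder_2 = ["objectification", "availability", "toughness", "availability",
           "toughness", "availability", "objectification", "objectification",
           "availability", "objectification"]

print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # 0.70 for this example
```

A kappa near 1 means the coders rarely disagree beyond chance; values much below roughly 0.6 are conventionally taken as a sign the codebook needs exactly the kind of discussion the quote describes.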
Read the methodology of PB&J’s ethnography, and you’ll find it’s pretty similar to what I sketched out. There are two key divergences.
The empirical methodology for my study is ultimately ethnographic because data were collected in situ by personally attending a sexually objectifying restaurant in northern Florida approximately weekly over a roughly 2-year span (July 2015–September 2017) in the company of other men with whom I had personal relationships. The context of these visits was as an after-class bonding endeavor among a social core of members of a Brazilian jiu jitsu (BJJ) school in which I had become a member.
This is a convenience sample, quite literally a sample from a larger population that was convenient for the researcher to gather. If these men happen to be representative of all men, that’s no big deal, but if they aren’t then any observations will be biased. One thing that may help is the focus on difference: all your thermometers may disagree on the current temperature, but if they all show an increase of two degrees you can be pretty confident the temperature increased two degrees. Likewise, even if this sample of men isn’t representative we could still learn something from the differences between how they react in breastaurants and in other contexts. As quoted above, the paper’s fictitious author began this study precisely because they spotted a difference.
Alas, the methodology indicates the fictitious author didn’t collect conversations outside of breastaurants. Absent a solid baseline, even the delta approach might wind up biased here.
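For illustration, here’s what the delta approach could look like had baseline conversations been collected. The counts below are entirely invented; a standard contingency test simply asks whether a theme’s frequency genuinely differs between the two settings.

```python
from scipy.stats import chi2_contingency

# Entirely invented counts, for illustration only: of 100 coded excerpts in
# each setting, how many carried an objectifying theme. The hoax paper never
# collected the gym baseline, which is exactly the problem.
breastaurant = [34, 66]  # [theme present, theme absent]
gym = [12, 88]

chi2, p_value, dof, _ = chi2_contingency([breastaurant, gym])
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
```

A small p-value here would suggest the theme really is tied to the setting rather than to this particular group of men; without the second row, no such comparison is possible.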
Particularly, data were selectively (concept-driven) coded for themes I had already identified and wished to develop, and data-driven (open) coding was utilized to identify new themes in the data until I felt all significant themes identifiable in the data had been found.
There was also only one person coding the data: the sole author. Again, this does not guarantee the results are biased, but it does make it easier for bias to creep in.
Would these flaws prevent us from drawing conclusions, had this been a legitimate study? No, thanks to the intrinsic structure of scientific papers. See, by forcing every paper to disclose its full methodology, you can evaluate how good it is at eliminating bias; if it isn’t good, then assume the data is biased and ignore it. The failure case here is not “bad data,” but “no data!”
Because the methodology is out in the open, too, other people can repeat it under the assumption that their bias will differ from the original’s; thus, if bias was a major factor, they should reach different conclusions. The same happens if the original lied about the methodology. Alternatively, these researchers can improve that methodology to better eliminate bias, resulting in a gradual improvement of the scientific record. Looking at this from the other end, studies with imperfect or dodgy methodology still carry some value if they spur other researchers to follow in their footsteps.
Consequently, no known studies have used methods common to masculinities research to investigate men who frequent breastaurants. This gap leaves open many questions about the masculinities that arise in and, perhaps, characterize the breastaurant environment as a unique type of male preserve. To address this conspicuous lack in the existing literature, following Matthews (2014), I engaged in a two-year in situ participant-observer ethnographic study of one group of men who regularly frequent a popular local breastaurant in Panama City, Florida.
It’s also considered good form to flag problems in your methodology. This technically isn’t necessary, but it does demonstrate you put some thought into different methodologies and makes it less likely someone will take your results as more authoritative than they are.
As such the present account is restricted to a small group of men and not to be taken as necessarily representative of all patrons or the whole restaurant/franchise/genre of eatery.
As an ethnographer for my study, I therefore enjoyed and yet was limited by my closeness to its participants. Similarly to Matthews (2014, 105–106), my closeness and camaraderie with these men provided access and insights that they may not have displayed in a more formal, detached study, and in coming to know the participants of my study intimately, other relevant features of their masculinity may have become emphasized, deemphasized, or even blurred by subjectivity.
Is this paper’s methodology perfect? Certainly not. But neither is it “very shoddy,” as PB&J claim, and even if it were shoddy there would be little long-term impact on the scientific record. You don’t have to trust the process when it comes to science, so their howls about poor methodology are instead more evidence they don’t know what they’re talking about.