Issues & Questions: Cultural Knowledge (& Self-Deception) & Mating Success

First, thank you to all who participated in our recent “Pop Culture & Mating Success” survey. Frankly, I am stunned at our success in collecting over 1000 responses in just over one week.  As an anthropologist, I don’t often rely on surveys to collect data from large samples.  I am accustomed to going person to person over longer periods of time to collect cultural, biological, & psychological data.  This was like manna from heaven!  I attribute that to the kindness of those of you who posted the study to your Facebook pages.  So thanks also to those of you who helped us in this & other ways.

We will post results of the study here when & as we can. This will happen piecemeal, so as not to jeopardize future efforts at publication.  In the meantime, there were so many friendly questions & comments about the study that I wanted to share & address some of them in this public forum for those of you wondering about the same things but who didn’t ask.

Here is one that offers a methodological suggestion for future research:

So some background. I suffer from mild glaucoma, which might be useful if I lived in a medical marijuana state, but Texas isn’t one. What this means is that I’ve gone through a lot of different eye tests, particularly about peripheral vision. One of those tests involved looking into a device with a white glowing screen and tapping a mouse button every time I saw a flashing object pattern. It showed up at seemingly not-so-random intervals, but it varied in faintness and location. One thing the test does is occasionally not display the item at all, providing an opportunity for the test to calibrate the extent to which the subject pushes the mouse when they think they ought to see something but don’t. (I forget what this was called in the test report, but the optometrist specifically pointed it out to me because she said that she’d never seen anyone who was as reliable as me in this respect.)…this might be a good proxy for various types of over-confidence or wishful thinking. It seems like organizing a study in which you offer people free peripheral vision exams should be possible.

That’s a good idea, but I wonder whether the tendency to indicate seeing something when one has been trained to expect to see something is just an anticipation effect, or whether people really claim to have seen the flashing pattern.

Another said:

Interesting. Took it and have a couple of comments. I have a feeling there is a difference between people who do not notice popular culture (e.g. physicist friends I’ve had) and people who purposefully avoid pop culture (e.g. hipsters, dropouts). How are these distinctions represented? Hipsters are exposed to more social experiences (i.e. mating opportunities) than a physics grad student. These should give vastly different results (bimodal distribution).

In all honesty, the study wasn’t about pop culture. I’m really not sure how we could possibly define “pop,” since it is relative. Perhaps I should have my friend Ann Powers help devise a pop culture survey when we move on to our next phase of research. She is currently writing a journalist’s take on the cultural neuroscience of music and sex. But, as someone else pointed out, how is “mofongo” pop?  Not sure what food would be considered pop.  At any rate, the study was really called “Cultural Knowledge & Mating Success.”  The “pop culture” part was just to get your attention.   But to answer the question, I would imagine hipsters & physics grad students might be bimodally distributed, but then there are others in between.  I don’t think we just captured hipsters & physics students, but we don’t know yet.  Actually, we didn’t ask occupation, so we’ll never know.

Another:

I think that the decision to give fake information that is closely related to real information causes a lot of problems for the survey-taker, and, along with what I believe may be some minor unintentional errors in the survey, will give you a lot of noise in your results.  Many of the fake entries appear to be simply typos or carelessness on the part of the survey writer, and I personally have a hard time considering compensating for someone else’s careless mistakes to be deception.  Based upon my experience, I imagine that many other survey-takers will fall into the same pitfall and provide similarly misleading results. In a bit more detail, here’s what happened to me and why I have these concerns:

On the first page, where it asks for demographic information, it said that I could leave items blank. I did not really want to give my birthday, and so I just gave the year, leaving the month and day empty in accordance with the instructions. I clicked continue, and saw that, despite the instructions at the top of the page, I actually DID have to include my birthdate. That may have been intentional, but it set me up to expect a survey that may contain small careless errors.

So, I get to questions about the sports figure “Jeremy Ling” or the movie “Dark Nite” or the book “Here and Back Again” or the term “Mini” being listed as vehicle slang when it’s an actual marquee brand name, or “Harry Potter” being the name of a book. These are just the ones I remember off the top of my head, but I was aware of several others. These just reinforce my initial impression that the survey was put together somewhat carelessly. With all of these, I don’t know if I should take the questions at face value or answer them in the way I assume they’re supposed to be taken: “Jeremy Lin,” “The Dark Knight,” “There and Back Again (The Hobbit)”, or to say that I am familiar with the “Mini” name even if I don’t believe it was ever actually slang, or to claim that I am familiar with the book “Harry Potter” since I read the whole Harry Potter series, even though I am aware of no single book titled “Harry Potter.”

I’m still not even sure if including “Mini” as vehicle slang is one of the intentional fakes or a bit of carelessness or a reference to some obscure history of the brand. Similarly, I’m not sure if the reference to the book “Harry Potter” was an intentional fake or a goof on the survey. Also, “Slumdog Millionaire” was listed as a book– I don’t know if this is an intentional fake or not, since it’s really known as and was originally a movie, but a novelization of the story was released.

But that’s a bit beside the point. My point is, by the time I came across “Jeremy Ling,” I was pretty convinced that the survey was put together carelessly by a grad student who couldn’t be bothered to double-check if the TV show he or she remembered was called “My Totally Sweet 16” or “My Super Sweet 16.” So, as I imagine many others have done and will do, I completed the survey while mentally correcting these perceived errors when I found them, and thus claimed to be “Very Familiar” with lots of fakery.

I think I have an idea of why you chose to pick fake names that closely matched real ones, but I believe it can cause a lot of confusion and misunderstanding among survey-takers, and will throw a lot of noise into your results as survey-takers “help out” the writer who probably just got a little careless.

On to a possibly IRB-related issue:

I’m sure this was not intentional, but when I got to the end of the survey it made me angry.  I was requested to take the survey and wanted to be helpful.  I then made mental notes of the typos and mistakes going through the survey and attempted to compensate for what I perceived to be the survey writer’s carelessness.  In the end, my attempts to be helpful were labeled as lying.  I’m not looking for an apology or anything, but I figure that if I got angry, someone else might get angry as well.  You’re a lot more experienced with IRBs than I am, but I could see this as potentially causing you a headache in the future.  Or it may be no big deal, but you might be interested to know about it anyway.

There is one other issue that I noticed as a participant that I believe may have affected my behavior on the survey, and that is that I was requested to take it by a friend who I care about.  Because of this, I wanted to be helpful and supportive, and I believe that this made me more likely to accommodate for what I perceived to be simple mistakes and typos.  Had I been prompted by a stranger, or without such a personal connection, I know that I would not have answered the same way, and would have been much more particular about the accuracy of the questions.

These comments are very insightful & hit upon an issue we had in designing the survey. I admit that saying you could skip any question & then forcing you to provide your birthdate was an accident, & it was unfortunate that it primed people to think we were poor survey designers.  As for the subtle foils & “misspellings”–this was our effort to use Del Paulhus’s overclaiming technique, but with lists designed relative to our study. In pilot versions, obvious foils rarely fooled anyone.  However, in making subtle changes to real items, we ran the risk of people claiming familiarity because they didn’t notice the change had been made (i.e., we all cognitively fill in the blanks when things are minimally incongruent).  Ultimately, we opted for a mixture of obvious fakes & subtle fakes.  We think some people will only falsely claim knowledge of subtle fakes & others will falsely claim knowledge of subtle & obvious fakes.
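For readers wondering how foil-based scoring typically works, here is a minimal sketch in the spirit of the overclaiming technique, assuming a 1–5 familiarity rating. The item names, rating threshold, & function shown are hypothetical illustrations, not our actual survey materials or analysis code.

```python
# Sketch of foil-based overclaiming scoring (signal-detection style).
# Hits on real items index genuine knowledge; false alarms on foils
# index the tendency to claim knowledge one cannot actually have.
from statistics import NormalDist

def overclaiming_indices(responses, real_items, foil_items, threshold=3):
    """responses maps item name -> familiarity rating (1-5); a rating at
    or above `threshold` counts as claiming familiarity with that item."""
    hits = sum(responses.get(item, 0) >= threshold for item in real_items)
    false_alarms = sum(responses.get(item, 0) >= threshold for item in foil_items)

    # Keep proportions away from 0 and 1 so the z-transform stays finite.
    clip = lambda p: min(max(p, 0.01), 0.99)
    hit_rate = clip(hits / len(real_items))
    fa_rate = clip(false_alarms / len(foil_items))

    z = NormalDist().inv_cdf
    accuracy = z(hit_rate) - z(fa_rate)        # higher = more genuine knowledge
    bias = -0.5 * (z(hit_rate) + z(fa_rate))   # more negative = more overclaiming
    return {"hit_rate": hit_rate, "false_alarm_rate": fa_rate,
            "accuracy": accuracy, "bias": bias}

# Hypothetical respondent who endorses one real item & one subtle foil.
ratings = {"Jeremy Lin": 5, "The Dark Knight": 4, "Jeremy Ling": 4, "Dark Nite": 1}
print(overclaiming_indices(ratings,
                           real_items=["Jeremy Lin", "The Dark Knight"],
                           foil_items=["Jeremy Ling", "Dark Nite"]))
```

Splitting the foils into subtle & obvious subsets & computing the false-alarm rate separately for each would capture the distinction described above.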

As for the concealment of the true purpose, we do apologize, but we did provide the option of retracting your data once the true purpose was revealed.  Some people took us up on this & opted out at the end, & we stand by that.  On the other hand, many more of you wrote that the “twist” at the end was what you liked most about the study & what motivated you to send it to other people.

I’m not sure how taking the study because it was suggested by a friend could negatively influence things. I think this is exactly what we are looking for–what is your tendency to overclaim knowledge to manage the impressions of others?

Here is another comment:

Okay, I took your survey but I am puzzled about how you will know from the data that anyone has exaggerated their knowledge of culture. Also, given the wide range of sub-cultures that this survey left out entirely, it is quite possible for someone to appear as other than they are. My step-father is a musician and producer and former DJ who has likely never heard of a single artist or band in the survey, because he is a political folk singer. My brother was A&R director for a record label, a radio DJ for seven years, and also likely never heard of most of the artists and groups because he is into a genre called ’60s Garage Psychedelic. The women my brother has dated have been very into the same kind of music he is an expert in. Finally, I know you were aiming to get at status with the question about jobs and income and number of friends and such. This also can be tricksy…I am poor by the numbers and have a crappy job, but I live very well, collecting art and antiques and traveling, eating great organic food (I have worked as a chef…).

Also, a point my wife raised when I discussed this with her: many subcultures have disdain for other, mostly less sub, parts of our culture, and so members may not admit knowledge of, say, Minaj when they are part of the punk/anarchist scene. It reminded me of the amazing period during my childhood when disco died a sudden death; the word that disco “sucked” spread around like wildfire in just a day or two on our school grounds. Within literally two weeks, if memory serves, nearly everyone claimed to have “never liked” disco and to have rarely even listened to it.

This person is right–there was much left out in terms of culture. The purpose of the survey was not so much to compare how much one knows against mating success as to compare how much one exaggerates. So if one indicates strong familiarity with something that really doesn’t exist, it suggests a tendency to exaggerate that can be assumed across the board. There is no penalty for not knowing stuff. The point is well-taken, however, in that my tendency to exaggerate will be most easily tapped when the topic is something I like & think I should know something about. It’s tough to cover all (or any) of the sub-cultural bases & still push a survey out to the general public. I threw out list after list that probably would have tapped his brother nicely (I used to work in music distro in a former life & kept coming up with obscure garage bands by accident when trying to make up names!).

He makes a good point regarding status too, in that the intent was for participants to self-rate & rate their mates given the usual status markers PLUS/OR the intangibles of cultural/symbolic capital (art collecting, world travelling, being a chef–all enviable to some of us!). Self-rated status is a better indicator of life outcomes than objective measures like job, income, etc. (Singh-Manoux et al. 2003). I’m afraid I didn’t make that clear in the instructions, & that is my fault.

As for detecting the bias of being unwilling to admit having heard of something (e.g., “disco sucks” phenomenon)…anonymity is one of the best ways we have to encourage honesty but, as pointed out, certainly no guarantee. There is another scale designed by Del Paulhus that measures the tendency to deny negative things about oneself. I didn’t include it so I could keep the survey short & because some of the items tend to freak people out (but I referenced it in my last post).

Here is another comment:

now you really have to think about the last question. How many people have you had sex with regardless of their sex? Being straight, I’ve had sex with 0 people regardless of their sex. I’ve had sex with lots because they are women…

This was just to try to catch people who bullshitted about the number of males & females they’d had intercourse with (earlier questions).  Hopefully, the answers match.
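For anyone curious what that looks like in practice, such a cross-check can be as simple as the sketch below; the field names & tolerance are hypothetical, not the survey’s actual variables.

```python
# Sketch of a simple consistency check between the separate male/female
# partner counts and the catch-all "regardless of sex" total.
def flag_inconsistent(row, tolerance=0):
    """Flag a respondent whose total partner count differs from the sum
    of the separately reported male & female partner counts."""
    implied_total = row["partners_male"] + row["partners_female"]
    return abs(row["partners_any_sex"] - implied_total) > tolerance

respondent = {"partners_male": 4, "partners_female": 0, "partners_any_sex": 0}
print(flag_inconsistent(respondent))  # True -> the answers don't match
```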

Another:

1. How are you measuring successful reproductive behavior? I’m suspicious of simple number of intercourse partners, given that arguably the most successful maters are those who are pair bonded, and have been for a long time, and thus with potentially very few mates over their lives (say, if they got married in college, and are now raising kids together). Felt like, minimally, you needed some control questions, like, how long have you been together with your current partner, estimate of number of “serious” partners over past few years, how long each relationship lasted, etc., and alternate measures of mating behavior, like number of kids, sexual activity/week, …
2. A potential conflation: the same people who exaggerate their cultural knowledge might also inflate their reported sexual partners

As well we should all be suspicious of overinterpreting the data from this survey or any survey that simply asks for number of sex partners.  Yet I asked this. Why? It’s similar to why we ask for race/ethnicity even though it just reifies a cultural construct & perpetuates division. We do it because it is a rough proxy that captures some of the categorical distinctions we make in life, though it obscures others. This was not an anthropological study. And the results won’t be the definitive anything. I think building a body of data toward addressing a question is important & utilizing as many methods & approaches as possible is critical.  This is one, first, tentative, preliminary step toward addressing the question, “Does self-deceptive enhancement have anything to do with mating success?”  How should we be measuring mating success? In this case, we used status & sex partner rate because that is the way we did it in the last study, which only measured college students.  College students don’t have a lot of variability in the number of sex partners they’ve had, media depictions of so-called college life to the contrary.  So we wanted a general population sample to compare to that. And, yes, we expect people who overclaim one thing probably overclaim the other. As a previous commenter put it, that constitutes more potential noise in the data. Finally, on this note, one of the essential design issues was to keep the survey very short, so it could be completed in less than 15 minutes &, thus, motivate more people to take it. Therefore, a lot of data we could have collected with a longer survey had to be sacrificed.

Finally, as one person pointed out:

For older participants, the cultural references aren’t relevant. Seems like the survey is meant for people age 20 to 40 (maybe). Gender choice shouldn’t be binary. The question about the number of people with whom you’ve had consensual sex doesn’t take into consideration the time span of the sexual activity.  For example, someone could report a large number of partners, but the bulk of those encounters doesn’t reflect the person’s typical sexual behavior. It might be good to add a question like, “in a typical year, how many different sex partners do you have?” You might also ask about sex addiction/treatment for sex addiction. A sexually-addicted person could skew your results. The survey doesn’t take into consideration people who choose celibacy or have had sexual trauma.

Excellent, excellent points. Yeah, we could have had focus groups of different age sets develop lists for us & run more pilots. At a certain point, the cost-benefit wasn’t there, but that certainly does potentially limit the validity of the data for older people.  Making gender choice binary was necessary for testing evolutionary questions. I don’t know that evolutionary modeling is sophisticated enough to handle intermediate sexuality or if there’s any relevant theory to make predictions about it in this case. If it is, I am not familiar with that literature. At any rate, I would have simply had to throw out anyone who answered in the middle, as I am fairly certain it wouldn’t have been a large enough category to be statistically useful.

We calculate intercourse-partner rate as number of partners over lifetime divided by years since first consensual intercourse.  I think that addresses the lifespan concern, though perhaps not. Not sure I want to know “in a typical year,” as that wouldn’t address lifetime potential for success (e.g., number of partners with whom one could have potentially reproduced).
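As a concrete illustration, the rate works out as in the sketch below; the argument names & the one-year floor are my own assumptions for illustration, not the actual analysis code.

```python
# Sketch of the intercourse-partner rate: lifetime partners divided by
# years since first consensual intercourse.
def partner_rate(lifetime_partners, current_age, age_at_first_intercourse):
    years_active = max(current_age - age_at_first_intercourse, 1)  # floor of 1 year
    return lifetime_partners / years_active

print(partner_rate(lifetime_partners=12, current_age=35, age_at_first_intercourse=17))
# 12 partners over 18 years -> roughly 0.67 partners per year
```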

I didn’t even think about sex addiction. I’ll have to think that through. Would that skew results or is that legitimate to include? As for celibates & those with sexual trauma, the question that asks how many people participants could have slept with if not for personal, religious, or cultural beliefs that preclude taking advantage of such opportunities was meant to address this. Admittedly, it is very tough to make such calculations.

Again, thank you all for your time & feedback. The Limitations section of our paper has written itself! I will post more as results (& random thoughts) come in…

 


About Christopher Lynn

Christopher Dana Lynn is an associate professor of anthropology at the University of Alabama, where he founded the Evolutionary Studies program.  Chris teaches undergraduate and graduate courses in biological anthropology, human sexuality, evolution, biocultural medical anthropology, and neuroanthropology.  He received his Ph.D. in Biological Anthropology in 2009 from the University at Albany, SUNY, where his doctoral focus was on the influence of speaking in tongues on stress response among Pentecostals.  Chris runs a human behavioral ecology research group where the objectives include studying fun gimmicky things like trance, religious behavior, tattooing, and sex as a way of introducing students to the rigors of evolutionary science.  In all his “free” time, he breaks up fights among his triplet sons, enjoys marriage to the other Loretta Lynn, strokes his mustache, and has learned to be passionate about Alabama football (Roll Tide!).  Follow Chris on Twitter: @Chris_Ly