Demographics of Mechanical Turk

Discussion in 'MTurk Help' started by clickhappier, Jul 28, 2014.

  1. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    Jul 1, 2014
    Likes Received:
    July 2013 - academic paper: "What Matters to Users? Factors that Affect Users' Willingness to share Information with Online Advertisers" (pdf)
    by Pedro Giovanni Leon, Blase Ur, Manya Sleeper, Rebecca Balebako, Richard Shay, Lujo Bauer, and Lorrie Faith Cranor, at Carnegie Mellon University; Yang Wang at Syracuse University; and Mihai Christodorescu at Qualcomm Research Silicon Valley

    "We recruited our participants using Amazon’s Mechanical Turk crowdsourcing service. Recruitment materials indicated that the study would be about how individuals experience the Internet. They provided no indication that either OBA [online behavioral advertising] or privacy would be major components of the study. We required that participants live in the United States and be age 18 or over."

    "We analyzed responses from 2,912 participants between the ages of 18 and 74 (mean = 31 ...)."

    "Table 1: Demographics of our 2,912 participants."

    Gender:
    Female: 1,375, 47%
    Male: 1,537, 53%

    IT Background:
    Yes: 695, 24%
    No: 2,217, 76%

    Internet Usage (hours/day):
    <1: 72, 3%
    1–5: 1,144, 39%
    5–9: 975, 34%
    9–13: 519, 18%
    13–17: 135, 5%
    >17: 67, 2%

    Occupation:
    Administrative support: 183, 6%
    Art, writing, or journalism: 178, 6%
    Business, management, or finance: 205, 7%
    Computer engineering: 299, 10%
    Education (e.g., teacher): 184, 6%
    Engineering: 48, 2%
    Homemaker: 176, 6%
    Legal: 43, 2%
    Medical: 102, 4%
    Retired: 44, 2%
    Scientist: 80, 3%
    Service (e.g., retail clerks): 177, 6%
    Skilled labor: 77, 3%
    Student: 624, 21%
    Unemployed: 253, 9%
    Other: 212, 7%
    Decline to answer: 27, 1%

    Education:
    Some high school: 46, 2%
    High school degree: 243, 8%
    Some college: 987, 34%
    Associate’s degree: 266, 9%
    Bachelor’s degree: 1,038, 36%
    Graduate degree: 331, 11%

    "Around half of our participants were unwilling to disclose any personal information in exchange for targeted ads. The remaining participants were willing to disclose their gender, low-granularity location, operating system, and web pages they had visited at a higher rate than other types of personal information. ... The data-retention period and scope of use significantly impacted participants’ willingness to disclose the types of information for which participants had varied responses. ... under 3% of participants would disclose their phone number. On the other extreme, participants were most willing to disclose arguably innocuous information, such as their country (53%) and gender (46%). Between these two extremes were types of information for which users’ willingness to disclose was affected by the scope of use of the information, and for how long it would be retained. ... Very few participants were willing to disclose sensitive information. For instance, only a handful of participants were willing to disclose their SSN (<1%), credit card number (<1%), address (2%), phone number (3%), exact current location (4%), and credit score (5%). ... In contrast, nearly half of our participants were willing to disclose less sensitive information. Many participants were willing to disclose their web browser version (43%), operating system (45%), and gender (46%). Participants were similarly willing to disclose coarse-grained information about their location, such as the state (43%) and country (53%) from which they were visiting the health website. ... More than half of our participants would not be willing to permit data collection on any of the nine categories of sites we presented. Participants were most willing to allow data collection on arts and entertainment websites (40% of participants), travel websites (34%), and news websites (32%). 
Only around 8% of participants would be willing to have their actions on dating or online banking sites used for targeting ads, and only 15% of participants felt the same for photo-sharing websites. ... 25% of participants were willing to have data from health sites used for OBA purposes. ... 62% of participants would not pay to stop data collection, 69% would not pay to remove ads, and 80% would not pay to see generic ads in place of targeted ads. Participants cited several reasons for not being willing to pay. They commonly felt they could obtain the information they wanted on other websites without paying, or use free software to block ads. They also felt that websites should be free, and that privacy is a right they should not have to pay for."
  2. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    July 15, 2013 - academic paper: "Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys" (pdf)
    by Adam Berinsky, Michele F. Margolis, and Michael Sances, at Massachusetts Institute of Technology

    "Screeners work by instructing subjects to demonstrate that they are paying attention by following a precise set of instructions when choosing a survey response option. ... By recording who responds with the specified answers, we can identify those respondents who are paying attention at a specific point during the survey. As we will discuss below, a great number of people—between a third and a half of our respondents from national samples—fail to properly answer these questions. ... To date, most researchers using Screeners simply exclude inattentive respondents, often measured from a single Screener, from their analysis. ... On one hand, if we do not employ Screeners, we run the risk that our surveys will attenuate substantively meaningful correlations on related items and yield false negatives in experiments. On the other hand, culling the sample based on a single Screener question—as is often done in psychology and political science—will cause us to drop a large and non-random portion of the sample, leading to selection bias in our survey and experimental research. Using multi-item scales to measure attentiveness, showing the politically relevant predictors of Screener passage in specific applications, and presenting results stratified by levels of attentiveness can improve both data quality and transparency."

    "Between June 2011 and April 2012, we conducted two Internet studies using samples collected by Survey Sampling International (SSI), an Internet survey company. ... SSI recruits participants through various online communities, social networks, and website ads. SSI makes efforts to recruit hard-to-reach groups, such as ethnic minorities and seniors. ... Both studies enable us to assess the general measurement properties of Screener questions ... Study 1 consisted of a two-wave panel in June-July 2011, with about two weeks between waves. There were 1,227 and 728 respondents in Wave 1 and Wave 2, respectively ... In each wave, we asked four Screener questions spaced evenly throughout the survey. ... The four Screeners were presented in a random order for each subject. ... Study 2 consisted of a single wave survey in April, 2012, which included 1,255 respondents ... The purpose of the study was to test whether receiving a Screener question changes the response pattern or completion rate for subjects. As such, half the respondents received a Screener question before the substantive questions on the survey, while the other half received the Screener question at the very end of the survey. ... In Section 4 of the online Supporting Information, we show that Screener passage is associated with greater time spent on additional questions. We also show that those who pass Screeners think more deeply about a cognitive processing task. ... passage rates on Screeners vary greatly, ranging from as low as 59% on the website Screener and as high as 76% on the feeling Screener. ... only 47% of the sample answers all Screeners correctly, while 12% of the sample fails all the Screener questions. The rest of the sample falls somewhere in between. These passage rates are comparable to those found by researchers who use students in a lab setting; Oppenheimer et al. (2009) found that 54 percent of their subjects passed their Screener question. 
Similarly, Clifford and Jerit (2013) used two Screener questions on a nationally representative sample and found that 38 percent passed their first item and 62 percent passed their second question."

    "In additional studies, we have found that passage rates are typically higher when recruiting subjects from Amazon’s Mechanical Turk platform (Berinsky, Huber, and Lenz 2012). For example, whereas 69% of the SSI sample passed the color Screener, 91% of Mechanical Turkers passed in a May 2011 study. Likewise, 70% of a September 2012 Mechanical Turk survey passed the news Screener that only 59% of the SSI sample passed. We attribute these higher passage rates to the MTurk population being accustomed to performing non-survey tasks where payment is conditional upon attention to detail."

    "Given that we can measure attentiveness with a scale created from Screener questions, what should we do with inattentive “shirker” respondents? The easiest option is to choose a minimum level of attentiveness and drop respondents who fall below the threshold. Indeed, this is a common practice. As of July 2013, we identified 40 articles in peer-reviewed journals published since 2006 that use Screeners as a tool to identify inattentive respondents. In 32 of these articles, the researchers discard respondents who failed the Screener. In 28 of the articles, the authors purge the sample of respondents on the basis of a single Screener question. However, we do not think this common practice is a good strategy. By throwing out those who fail Screeners, researchers implicitly assume that subjects may be cleanly partitioned into “worker” respondents, who always pay attention, and “shirker” respondents, who never pay attention. Thus, practitioners are assuming a deterministic model of survey attention, an assumption that comes with a stark tradeoff between bias and efficiency. Theoretically, using a single Screener to trim the sample should reduce noise. However, even setting aside the fact that our analysis above suggests that Screeners measure attentiveness with error, if attentive and inattentive respondents are different types of people, removing all inattentive respondents may skew the sample. If attention on a survey is a function of the characteristics of respondents — be it via measured factors or unmeasured factors — then discarding respondents who fail the Screener could remove a distinct portion of the population from the sample. For example, if attentive respondents are also wealthier and more educated, this will bias the results of any study that excludes Screener failers."

    "Our results show that, at least on characteristics we can measure, Screener passers look quite different from Screener failers. In Table 5 we show the results of regressions across five surveys, each of which employed at least one Screener. ... Despite coming from multiple studies, the models show some clear trends. First, older respondents are more likely to pass Screener questions ... the positive relationship between age and Screener passage decays for respondents over 60. Women are always significantly more likely to pass Screeners than men—between 6 and 12 percentage points depending on the study. Finally, racial minorities are less likely to pass Screeners ... African Americans, for example, are significantly less likely to pass the Screeners compared to white respondents in three of the five studies. ... We found highly significant differences in political information between respondents who passed the Screeners a[n]d those who failed. ... throwing out respondents with low levels of education — which we know to be correlated with political interest and engagement — is problematic if researchers want to make claims about the public's attitudes. The pool of people who pass Screeners is different in important ways from the pool of people who fail these questions measuring attentiveness. Researchers must be cognizant of these differences and careful to not fully expunge respondents who fail Screeners from the presentation of results."

    "First, a single Screener item is insufficient for measuring attention. Screener passage at one point in time does not imply Screener passage at another point in time. Instead, a Screener question, like most survey questions, measures its underlying construct with error. As such, it is preferable to create a scale of attentiveness rather than relying on a single measure. ... Second, researchers should present results stratified by attention. Because Screener passage is in part a function of measurable demographic characteristics, researchers should not simply discard respondents who fail Screeners out of hand. By throwing away those who fail a Screener, a researcher may create a sample that over-represents certain races, ages, and levels of education. ... Third, researchers should analyze the predictors of screener passage in their sample ... to gauge whether removing inattentive respondents may skew the sample and induce bias."
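    [The paper's first two recommendations — scale attentiveness across multiple Screeners and stratify results rather than discarding failers — can be sketched as code. This is my own illustration, not the authors' analysis code; all names are hypothetical:]

```python
# Sketch of the recommendation: build an attentiveness scale from several
# Screener items and report the outcome within each attentiveness stratum,
# instead of dropping everyone who fails a single Screener.

def attentiveness_score(screener_results):
    """Fraction of Screener questions answered correctly (0.0 to 1.0)."""
    return sum(screener_results) / len(screener_results)

def stratified_means(respondents, outcome_key, screener_key):
    """Mean of an outcome variable within each attentiveness stratum."""
    strata = {}
    for r in respondents:
        score = attentiveness_score(r[screener_key])
        level = "high" if score == 1.0 else "mixed" if score > 0.0 else "low"
        strata.setdefault(level, []).append(r[outcome_key])
    return {level: sum(vals) / len(vals) for level, vals in strata.items()}

# Toy data: four Screeners per respondent, 1 = passed, 0 = failed
respondents = [
    {"screeners": [1, 1, 1, 1], "outcome": 4.0},  # passed all four
    {"screeners": [1, 0, 1, 1], "outcome": 3.0},  # failed one
    {"screeners": [0, 0, 0, 0], "outcome": 1.0},  # failed all
]
print(stratified_means(respondents, "outcome", "screeners"))
```

    [Presenting all three strata, rather than only the "high" group, is exactly the transparency the authors argue for.]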
    Last edited: Jan 18, 2015
  3. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    July 20, 2013 - Amy Quarton, at Maryville University, wrote a blog post: "Mechanical Turk 101: Worker Demographics"

    "One of the first questions I had as a MTurk researcher was about the demographic make-up of the workforce. ... I surveyed 1,300 Workers. They were asked about their age, current employment status, number of hours worked per week, and tenure at current employer."

    [The above-mentioned employment data isn't presented in the linked post; the other data is presented as pie charts, which I've converted to text here:]

    Age Range:
    18-27: 50%
    28-37: 28%
    38-47: 12%
    48-57: 7%
    58-67: 3%
    68-77: 0%
    78-87: 0%

    "Additional demographic information was collected from 425 of these Workers. They were asked about their gender, industry, and education level."

    Gender:
    Male: 63%
    Female: 37%

    Industry:
    Agriculture: 0%
    Construction: 1%
    Educational services: 11%
    Federal government: 3%
    Financial activities: 7%
    Health care & social assistance: 10%
    Information: 12%
    Leisure & hospitality: 3%
    Manufacturing: 6%
    Other services: 17%
    [Percentages whose industries were not listed in the chart legend: 12%, 7%, 4%, 3%, 3%, 1%]

    Education Level:
    High school: 8%
    Some college: 24%
    Associate's degree or vocational school: 11%
    Bachelor's degree: 38%
    Some post-graduate: 3%
    Master's degree: 12%
    [Not listed in legend, presumably Doctoral: 4%]
  4. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    February 2014 - academic paper: "The Language Demographics of Amazon Mechanical Turk" (pdf)
    by Ellie Pavlick at University of Pennsylvania; and Matt Post, Ann Irvine, Dmitry Kachaev, and Chris Callison-Burch at Johns Hopkins University

    "We validate workers’ self-reported language skill claims by measuring their ability to correctly translate words, and by geolocating workers to see if they reside in countries where the languages are likely to be spoken. Rather than posting a one-off survey, we posted paid tasks consisting of 1,000 assignments to translate a total of 10,000 words in each of 100 languages. Our study ran for several months ..."

    "When Amazon introduced MTurk, it first offered payment only in Amazon credits, and later offered direct payment in US dollars. More recently, it has expanded to include one foreign currency, the Indian rupee. Despite its payments being limited to two currencies or Amazon credits, MTurk claims over half a million workers from 190 countries (Amazon, 2013). This suggests that its worker population should represent a diverse set of languages."

    "A number of other studies have informally investigated Turkers’ language abilities. Munro and Tily (2011) compiled survey responses of 2,000 Turkers, revealing that four of the six most represented languages come from India (the top six being Hindi, Malayalam, Tamil, Spanish, French, and Telugu)."

    "We automatically collected each worker’s current location by geolocating their IP address. A total of 5,281 unique workers completed our HITs. Of these, 3,625 provided answers to our survey questions, and we were able to geolocate 5,043. Figure 1 plots the location of workers across 106 countries. Table 1 gives the most common self-reported native languages."

    "Figure 1: The number of workers per country. This map was generated based on geolocating the IP address of 4,983 workers in our study. Omitted are 60 workers who were located in more than one country during the study, and 238 workers who could not be geolocated. The size of the circles represents the number of workers from each country. The two largest are India (1,998 workers) and the United States (866). To calibrate the sizes: the Philippines has 142 workers, Egypt has 25, Russia has 10, and Sri Lanka has 4."

    "Table 1: Self-reported native language of 3,216 bilingual Turkers. Not shown are 49 languages with 20 or fewer speakers. We omit 1,801 Turkers who did not report their native language, 243 who reported 2 native languages, and 83 with 3 or more native languages. "
    	English   689    Tamil     253    Malayalam 219
    	Hindi     149    Spanish   131    Telugu     87
    	Chinese    86    Romanian   85    Portuguese 82
    	Arabic     74    Kannada    72    German     66
    	French     63    Polish     61    Urdu       56
    	Tagalog    54    Marathi    48    Russian    44
    	Italian    43    Bengali    41    Gujarati   39
    	Hebrew     38    Dutch      37    Turkish    35
    	Vietnamese 34    Macedonian 31    Cebuano    29
    	Swedish    26    Bulgarian  25    Swahili    23
    	Hungarian  23    Catalan    22    Thai       22
    	Lithuanian 21    Punjabi    21    Others   <=20
    "Based on our study, we can confidently recommend 13 languages as good candidates for research now: Dutch, French, German, Gujarati, Italian, Kannada, Malayalam, Portuguese, Romanian, Serbian, Spanish, Tagalog, and Telugu. These languages have large Turker populations who complete tasks quickly and accurately. Table 6 summarizes the strengths and weaknesses of all 100 languages covered in our study. Several other languages are viable candidates provided adequate quality control mechanisms are used to select good workers."

    "Table 6: The green box shows the best languages to target on MTurk. These languages have many workers who generate high quality results quickly. We defined 'many' workers as 50 or more active in-region workers, 'high' quality as 70% or higher accuracy on the gold standard controls, and 'fast' if all of the 10,000 words were completed within two weeks."
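    [Table 6's three thresholds can be restated as a simple predicate. A minimal sketch of the quoted criteria — my own restatement, not the authors' code:]

```python
def good_candidate(active_workers, accuracy, days_to_finish):
    """Paper's Table 6 thresholds: 'many' = >=50 active in-region workers,
    'high' quality = >=70% accuracy on gold-standard controls,
    'fast' = all 10,000 words completed within two weeks."""
    return active_workers >= 50 and accuracy >= 0.70 and days_to_finish <= 14

print(good_candidate(120, 0.85, 9))   # meets all three criteria (green box)
print(good_candidate(30, 0.85, 9))    # too few active workers
```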
    Last edited: Jan 18, 2015
  5. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    March 2014 (written in 2013) - academic paper: "Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers" (pdf)
    by Jesse Chandler at Princeton University (now at University of Michigan); Pam Mueller at Princeton University; and Gabriele Paolacci at Erasmus University, Netherlands

    "Examinations of worker IP addresses typically reveal a small minority of workers (around 2.5 %; Berinsky et al., 2012) who submit HITs from the same IP address, which may often result from workers being separate members of a single household. A secondary analysis of a recent study that tracked demographic responses and IP addresses across time points (from Shapiro et al., 2013) similarly found that 2.8 % of respondents (N =14) shared an IP address with at least one other worker. However, eight of these workers reported demographic characteristics that were consistent with being distinct individuals in a single household: Sexual orientation matched partner sex, and demographic characteristics remained consistent across different HITs 1 week apart. The remaining six observations may have been produced by two other individuals with multiple accounts. This suggests that the number of responses produced by workers with duplicate accounts is much lower than simple IP examination suggests."

    "To investigate the prevalence of duplicate respondents, we pooled data from the authors and several collaborators, resulting in a sample of 16,408 HITs (i.e., individual observations) distributed across 132 batches (i.e., academic studies). ... These HITs had been completed by a total of 7,498 unique workers. The average worker was observed to have completed 2.24 HITs ... but a small minority of workers were responsible for submitting most of the HITs. The most prolific 1% of the workers from this sample were responsible for completing 11% of the submitted HITs, and the most prolific 10% were responsible for completing 41% of the submitted HITs"
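    [The concentration figures above (top 1% of workers → 11% of HITs; top 10% → 41%) are a cumulative-share calculation over per-worker HIT counts. A sketch with toy data — my own illustration, not the authors' code:]

```python
def top_share(hit_counts, top_fraction):
    """Share of all HITs completed by the most prolific
    top_fraction of workers (e.g. 0.10 for the top 10%)."""
    counts = sorted(hit_counts, reverse=True)
    k = max(1, round(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Toy data: 10 workers, 100 HITs total, one very prolific worker
counts = [50, 10, 10, 5, 5, 5, 5, 5, 3, 2]
print(top_share(counts, 0.10))  # share done by the single most prolific worker
print(top_share(counts, 0.50))  # share done by the top half
```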

    "Our sample mirrored previously recruited samples on income, age, and education. The population was disproportionately white (80% ...) and Asian (8% ...), relative to the U.S. population as a whole (75% and 3.6%, respectively). Although a number of participants identified as Black (8% ... vs. 12.3% of the population as a whole) and/or Hispanic (5.4% ...), both groups were underrepresented, as compared with the U.S. population as a whole ... Although there is nothing peculiar about the demographics of more productive workers, they tended to be somewhat older and more educated and more likely to be White than the sample as a whole."

    "Our survey revealed that although most workers completed the HIT from home (86% ...) and alone (73% ...), they were often engaged in other activities simultaneously: 18% ... of them reported watching TV, 14% ... of them reported listening to music, and 6% ... of them were also instant messaging with at least one other person ... If anything, these estimates may be conservative, since workers are likely to be motivated to underreport behaviors that call the quality of the data they provide into question"

    "The majority of our participants (55% ...) reported having a list of favorite requesters that they monitored for available HITs, and 58% ... of those who followed favorite requesters (about a third of the entire sample) reported that this list included academic researchers. The most productive workers were especially likely to follow specific requesters. ... In our survey, 26% ... of participants reported knowing someone else who used MTurk personally, and 28% ... reported reading forums and blogs about MTurk."

    "Researchers who have tried to collect follow-up data from workers on MTurk by directly contacting participants and asking them to complete a follow-up study have typically obtained response rates greater than 60% within the first few months of collecting data ... We recontacted workers who responded to our survey 1 year later by sending three e-mails inviting them to complete an unrelated survey that paid $1.50 for 30 min. One hundred forty-two participants completed the survey, for a response rate of 44%."

    "A closer look at who responded to our follow-up survey revealed that the response rate was significantly higher (59%) among workers who were known to have completed at least one HIT prior to completing our initial survey, as compared with workers who could not be identified as such (29 %) ... Moreover, among workers who completed at least one HIT, the number of HITs they had completed previously was positively associated with the likelihood that they would complete the follow-up ... reaching 75% among the top 10% most productive workers ... 'Super Turkers' "

    "we conducted an exhaustive search of all MTurk papers published prior to December 31, 2011. According to Google Scholar, over 3,400 papers, dissertations, and conference proceedings were published that contained the words 'Mechanical Turk' or 'MTurk'."

    "At the end of the survey, we asked workers to indicate how they had found it. Thirty-one [out of ~100] workers in the no-keyword condition and 20 [out of ~100] workers in the keyword condition [HIT tagged with the keywords “psychology,” “survey,” “academic,” “research,” and “university”] reported seeing the HIT on a forum post, with the earliest mentions of forum posts within the first hour. The previous workers and autogrant-only conditions had proportionately larger populations who reported finding the HIT in forums (Ns = 50 [out of ~100] and 82 [out of ~100], respectively) ... the large uptick in responses observed in hour 6 is largely attributable to workers who found the HITs on Reddit. ... workers who found the HIT through forums were younger ([mean] age = 28.9 ...) and predominantly male (66%), as compared with workers recruited from MTurk at large ([mean] age = 34.5 ...; 48% male). Most of these workers came from Reddit."

    "it turns out that many workers have completed dozens, and likely hundreds, of experiments and surveys."

    Gabriele Paolacci also presented some of this data at an October 2012 conference in a slideshow, "Inside the Turk: Methodological Concerns and Solutions in Mechanical Turk Experimentation", aka "Non-Naivety among Experimental Participants on Amazon Mechanical Turk" (pdf).
    Last edited: Jan 18, 2015
  6. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    May 8, 2014 - academic paper: "On the Ethics of Crowd-sourced Research" (working paper) (pdf)
    by Vanessa Williamson at Harvard University

    "Amazon’s Mechanical Turk, or MTurk, has become a common method of subject recruitment for social science survey experiments. [Berinsky et al, 2012] ... social scientists should consider the ethics of their participation in these largely unregulated markets. ... research has shown that response rates are slower when payments are smaller. [Buhrmester et al, 2011] But unless one believes that market forces cannot be exploitative of workers, the “going rate” is not necessarily fair compensation."

    "The question of fairness would be less urgent, perhaps, if Mechanical Turk participants were just hobbyists. Indeed, if any given person were participating in only a single survey, the difference between a ten-cent and a thirty-cent inducement would be nearly meaningless, at least to most residents of the United States. ... But Mechanical Turk is different. Most tasks are completed by high-use participants, who spend more than fifteen hours a week working on MTurk. Turkers are not, and should not be treated as, one-time participants in a single study. They are workers upon whose labor an increasing percentage of social science research is based. Their extraordinarily low wages, and their lack of collective bargaining power, would be problematic under any circumstance. (MTurk wages have been estimated at under $2 an hour. For an excellent review of the various ethical problems with Mechanical Turk, see Karen Fort, Gilles Adda, K. Bretonnel Cohen. 2011. “Amazon Mechanical Turk: Gold Mine or Coal Mine?”) But the exploitation is particularly serious given that a sizeable portion of MTurk workers, even those based in the United States, are poor."

    "In the course of my ongoing research on American tax opinion, I conducted 49 long-form, open-ended interviews with Mechanical Turk workers. (The 49 interviewees were drawn from 406 volunteers in a total survey sample of 1404 respondents.) ... Some respondents are indeed economically comfortable people who treat MTurk as an amusement or source of disposable income. I spoke to a federal patent attorney and a retired lieutenant colonel, among other people of high socio-economic status. But a very substantial number of the people I spoke to were not hobbyists. In fact, many of them were barely making ends meet."

    "Particularly among older MTurk participants, answering surveys appears to be an important, but inadequate, source of income. Among the fifteen (15) people I interviewed over fifty (50) years old, six (6) [40%] were surviving on disability benefits. One woman, a 59 year old woman in rural Washington state, worked in a sawmill until her right hand was crushed in an accident. She now supplements her small monthly check with MTurk earnings. Another interviewee, 53 years old and living in Indiana, used to work in a grocery store and as a substitute teacher, before a bad fall broke several bones and left her unable to work."

    "I also spoke to several young mothers for whom Mechanical Turk was an important source of income. Alexa, from Mississippi, is married with two children; her husband was earning about $9 an hour working full-time, and she is “working two part-time jobs that makes one full-time job.” ... Though they are trying to get by without government benefits, the family is living on the edge of poverty ... She, too, uses MTurk to support her family."

    "One study suggests that 19% of U.S.-based MTurk workers are earning less than $20,000 a year (Ross, Joel et al., 2010), a finding which closely matches my own survey results (18.8%). In my sample, most of those low-earners are not current college students, who might have the safety of a parent’s income. Even removing those partway through a college degree, 12% of my MTurk respondents had a household income below $20,000 a year."

    "From an ethical standpoint, moreover, if even a minority of workers rely on MTurk as their primary source of income, social scientists (including myself) are participating in a market that leaves people we study in precarity and poverty. What can the individual researcher do? One option is to set a “minimum wage” for one’s own research. The federal minimum wage is currently $7.25; among states with a higher threshold, the average is about $8.00. In addition, several states and cities have passed legislation to increase the minimum wage to $10. For a task that takes five minutes, for instance, one should pay each worker 61 cents [$0.61, or ~$0.12/minute] to surpass the federal minimum wage, 67 cents [$0.67, or ~$0.13/min] to pass the $8-an-hour threshold, and 84 cents [$0.84, or ~$0.17/min] to surpass the $10-an-hour mark. (Picking a higher rate can help offset the time a Turker loses between HITs.)"
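    [The per-task figures follow from simple arithmetic — hourly wage × task minutes ÷ 60, rounded up to the next whole cent so the payment never undershoots the target. A sketch of the calculation (mine, not the paper's):]

```python
import math

def min_task_pay(hourly_wage, task_minutes):
    """Smallest whole-cent payment meeting hourly_wage for a task of task_minutes."""
    cents = math.ceil(hourly_wage * 100 * task_minutes / 60)
    return cents / 100

# Reproduces the paper's figures for a five-minute task:
for wage in (7.25, 8.00, 10.00):
    print(f"${wage}/hr -> ${min_task_pay(wage, 5):.2f} per 5-minute task")
```

    [This yields $0.61, $0.67, and $0.84 for the three thresholds, matching the paper's numbers.]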

    "An easy fix ... [for researchers concerned about a pay rate higher than the current MTurk average distorting the pool of respondents, and for] anyone who wishes to retroactively increase their MTurkers’ wages – is to increase respondents’ earnings via automatic bonuses after the research is complete. (This is the method that I used to raise the rate paid to my survey respondents to 17 cents a minute, the nominal equivalent of a ten-dollar hourly wage. Code to write a shell script to apply bulk bonuses is available at Interviewees received an additional $15 payment.)"

    "Of course, paying higher rates costs money. But the cost is less than one might imagine. ... As a [PhD] graduate student who has made this calculation and chosen to pay more, I certainly recognize that it is not entirely painless to young and underfunded researchers. And voluntarily increasing the rate of payment for MTurk HITs will not resolve the fundamental inequities of flexible/precarious employment. But it is the right thing to do. ... Grantmakers should require crowd-sourced projects pay respondents at a living wage rate and provide funding at appropriate levels given that commitment. Academic internal review boards concerned with the use of human subjects should create guidelines for the employment of crowd-source workers."
    Last edited: Jan 18, 2015
  7. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    June 5, 2014 - My (clickhappier) Reddit comment: "re: Is it worth posting to mturk for non-US jobs?"

    " There is a natural attrition rate as previous users get bored and move on, get blocked and are forced to stop, or were doing it because they needed the money and stopped because they got other better income sources. Those users are no longer being replaced now for non-US countries.

    There was a study done back in 2010 about the countries and other demographics of mTurk workers, but unfortunately it hasn't been redone since they stopped accepting international registrations around the end of 2012. I found the next-best thing (better than nothing), though: analysis of Alexa's traffic estimate data.

    As of April 2014:
    • USA: 51.5% of users, 65.7% of page views;
    • India: 33.0% of users, 29.1% of page views;
    • Pakistan: 2.0% of users, 1.1% of page views;
    • Australia: 0.9% of users, 0.2% of page views;
    • UK: 1.9% of users, 0.8% of page views;
    • Canada: 0.7% of users, 0.2% of page views;
    • Brazil: 0.8% of users, 0.1% of page views;
    • Spain: 0.6% of users, 0.3% of page views;
    • France: 0.5% of users, 0.4% of page views.
    So you can expect a job restricted to Australians to take ~100-500 times longer to get done than one open to everyone. "
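The "~100-500 times longer" estimate above can be reproduced from the April 2014 shares, assuming completion time scales inversely with the eligible fraction of users or page views (a rough model, not an exact prediction):

```python
# April 2014 Alexa estimates quoted above (percent of mturk.com traffic)
AU_USERS = 0.9       # Australia's share of users, in %
AU_PAGEVIEWS = 0.2   # Australia's share of page views, in %

# Restricting a HIT to Australians shrinks the eligible pool to under 1%
# of the site; if throughput is proportional to that share, completion
# takes roughly the reciprocal factor longer:
slow_by_users = 100 / AU_USERS       # ~111x longer
slow_by_views = 100 / AU_PAGEVIEWS   # ~500x longer
print(f"~{slow_by_users:.0f}x to ~{slow_by_views:.0f}x longer")
```

Page views are arguably the better proxy for actual work done, which is why the pessimistic end of the range is ~500x.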

    Addendum to my Reddit comment - data for comparison from archived copies of the AppAppeal Alexa analysis page (there is some fluctuation in which low-volume countries they include, presumably due to sample size issues):

    As of July 2013:
    • USA: 49.3% of users, 61.3% of page views;
    • India: 38.9% of users, 32.4% of page views;
    • UK: 1.2% of users, 0.4% of page views;
    • Canada: 0.6% of users, 0.4% of page views;
    • Germany: 0.5% of users, 0.9% of page views;
    • China: 0.8% of users, 0.3% of page views.

    As of June 2012:
    • USA: 43.7% of users, 33.9% of page views;
    • India: 34.0% of users, 56.8% of page views;
    • Australia: 0.8% of users, 0.1% of page views;
    • UK: 1.7% of users, 0.4% of page views;
    • Canada: 2.3% of users, 1.4% of page views;
    • Spain: 0.5% of users, 0.1% of page views;
    • France: 1.5% of users, 0.5% of page views;
    • Germany: 0.9% of users, 1.3% of page views;
    • China: 0.9% of users, 0.2% of page views;
    • Slovenia: 0.8% of users, 0.2% of page views;
    • Philippines: 0.9% of users, 0.6% of page views;
    • Romania: 0.6% of users, 0.4% of page views;
    • Thailand: 0.8% of users, 0.7% of page views;
    • Italy: 1.4% of users, 1.0% of page views.

    As of April 2011:
    • USA: 40.8% of users, 30.3% of page views;
    • India: 37.8% of users, 58.2% of page views;
    • Australia: 0.9% of users, 0.3% of page views;
    • UK: 3.4% of users, 1.0% of page views;
    • Canada: 2.1% of users, 0.7% of page views;
    • Germany: 1.9% of users, 3.8% of page views;
    • China: 0.6% of users, <0.1% of page views;
    • Tanzania: 2.5% of users, 1.3% of page views;
    • Portugal: 1.1% of users, 1.2% of page views;
    • Indonesia: 0.8% of users, 0.4% of page views.

    Dec 18, 2014 Update: AppAppeal seems to be no longer providing that Alexa data, so here's what is available from Alexa itself.

    As of Dec 2014:
    • USA: 64.5% of visitors;
    • India: 23.3% of visitors;
    • Mexico: 1.0% of visitors;
    • UK: 0.9% of visitors;
    • Argentina: 0.8% of visitors;
    • Canada: 0.8% of visitors;
    • Romania: 0.7% of visitors;
    • Spain: 0.6% of visitors.
  8. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    July 2014 - academic paper: "Privacy Attitudes of Mechanical Turk Workers and the U.S. Public" (pdf)
    by Ruogu Kang, Stephanie Brown, Laura Dabbish, and Sara Kiesler, at Carnegie Mellon University

    "We report here two comparisons -- a comparison of a U.S. based MTurk worker sample with a representative telephone sample of the U.S. public that uses the Internet, and a comparison of the same U.S. MTurk sample with a sample of Indian MTurk workers. We studied their comparability with respect to two topics: (1) how they manage their personal information online, and (2) their attitudes and preferences regarding privacy and anonymity online. ... We began this work with the hypothesis that MTurk workers may have more concerns about privacy than the average member of the Internet-using public. First, these workers are a self-selected group that has chosen an anonymous worksite. Second, recent studies comparing MTurk with other samples suggest that MTurk workers are better educated, more liberal, and younger than the general population ... Ross, et al. (2010) found an increasing proportion of young people, males, and people with lower income in active Turkers. They also found Indian workers on MTurk to be younger than U.S. workers, and have lower incomes and higher education levels. These differences might predispose MTurk workers to be more knowledgeable about threats to online privacy. ... Indians seem to have a lower degree of privacy concern than Americans ... Therefore, another purpose of this work was to compare the privacy attitudes and behavior of a U.S. MTurk worker with an Indian MTurk worker sample."

    "We compared responses in two survey studies of privacy and anonymity, one a representative telephone sample of U.S. Internet users and the other (a few months later) an online survey of MTurk workers. ... The first survey was administered by the Pew Research Center’s Internet Project (referred to here as “Pew”) in July 11-14, 2013. ... Pew surveyed a representative sample of U.S. adults consisting of 1,002 U.S. adults ages 18 and over, with 500 surveys using landline telephones and 500 surveys using cell phones. Respondents were not paid, except any cell phone charges were reimbursed. ... Of the total participants, 775 said they used the Internet and our analysis is based on responses from these Internet users."

    "The authors conducted the second survey on Amazon Mechanical Turk. We recruited 418 people from MTurk from February 16-20, 2014. We used the same sampling criteria as in previous studies to increase quality, by restricting the participants to those with an approval rate of at least 95% and at least 100 approved HITs. Each participant was paid $1 for completing the survey. ... Separate HITs were released to recruit participants from the U.S., India and other countries. ... Twenty-two responses (5%) were excluded because they failed the attention check questions or entered invalid responses. The dataset we analyze here includes 310 valid responses: 182 from the U.S. and 128 from India."

    "We first compare the demographic characteristics of the U.S. public sample, the U.S. MTurk sample, and the Indian MTurk sample (Table 1). Our MTurk samples seem similar to MTurk samples in other studies, for instance the 2,912 participants in (Leon et al, 2013). Consistent with previous studies, our MTurk samples are younger and the Indian sample is better educated than the U.S. public sample (81% have a college education or higher ...). Both MTurk samples had more male than female respondents, whereas the U.S. public representative sample had equal male and female respondents. The MTurk samples are also much more likely to use social media."

    "Table 1. Demographic characteristics of three datasets: U.S. telephone representative sample (referred to as U.S. public in paper), U.S. Turk sample and Indian Turk sample. Total N = 1085."
    	                     U.S.   U.S.  Indian
    	                    Public  Turk   Turk
    	N                     775    182    128
    	18-24:                12%    24%    23%
    	25-34:                14%    41%    56%
    	35-44:                13%    23%    12%
    	45-54:                17%     9%     5%
    	55-64:                24%     3%     2%
    	65+:                  19%     1%     2%
    	Mean age:            49.8   32.7   30.5
    	Female:               50%    42%    35%
    	Male:                 50%    57%    65%
    	High school or less:  26%    12%     5%
    	Some college:         31%    45%    14%
    	College and more:     42%    43%    81%
    	use social media:     68%    90%    98%
    "We found that U.S. MTurk workers were significantly more likely to seek anonymity than the U.S. public generally (31% vs. 17% ...) This difference remained significant when we added age (Model 2) and (education, gender, and social media use) into the prediction (Model 3). Thus, we found that younger people, people with higher education levels, and people who use social media were more likely to have ever sought anonymity or hid their identity but even controlling for these factors, MTurk workers were also more likely to have done so ..."

    "Pseudonyms are considered an important method of protecting one's privacy. ... Thirty-three percent (33%) of the U.S. public sample said they had posted without revealing who they are. In the MTurk survey ... We asked respondents if they ever posted using a username that people did not associate with them, and if they posted using no name at all. Eighty-one percent (81%) of the U.S. MTurk respondents said “yes” to at least one of these last two choices. Although these questions are not the same across the two samples, the results ... suggest that U.S. MTurk workers may attempt to use unidentifiable communications or hide their identity more than the U.S. public."

    "Significantly more participants in the U.S. MTurk sample reported having tried to hide content from at least one group than in the U.S. public sample (73% vs. 53% ...). This difference remained even when adding demographic variables into the regressions. ... U.S. MTurk workers had tried to hide content from their family members, a romantic partner, certain friends, or coworkers than U.S. public had (54.4% vs. 19.3% ...); the same is true for their employers, supervisors or companies they work for (26.9% vs. 9.8% ...); and for law enforcement, government, or companies or people that may want payment for the files that they downloaded (18.1% vs. 10.5% ...). However, respondents in the U.S. public sample were significantly more likely to report hiding from hackers, criminals, or advertisers than the U.S. MTurk workers (43.6% vs. 28% ...). The two samples did not show any significant difference in hiding content from people from the past and people who might criticize, harass or target them."

    "U.S. MTurk workers in our study expressed more concern about their information than the U.S. public. Sixty-three percent (63%) of the U.S. MTurk workers said they worried about how much information is available about them on the Internet, while only 50% of the U.S. public sample said this ... Adding demographic variables and social media use in the models, the effect of the sample difference dropped only slightly and remained significant. This finding suggests that U.S. MTurk workers are more worried about their online information than the U.S. public, regardless of their age, gender, education, and social media use. Additionally, there is a separate effect of education level and social media predicting these concerns. Those with higher education and those who use social media are more likely to worry about their personal information online."

    "Our analysis showed that U.S. MTurk workers did not differ significantly from the U.S. public in their opinions about whether current privacy laws provide enough protection of their privacy ... Only eighteen percent of the U.S. MTurk workers thought current laws provide reasonable protection of people’s privacy, and 23% of the Pew sample said so. None of the demographic variables and the social media use made a difference either."

    "Prior work suggests most people, regardless of nationality or experience, understand that anonymity has tradeoffs. ... We wanted to know whether respondents thought anonymity is possible on today’s Internet and whether they should have the ability to be anonymous online. ... We found that 37% of the U.S. public respondents and 31% of the U.S. Turk sample thought that it was possible to be completely anonymous online and there was no significant difference between the two samples. Male and lower education respondents agreed more strongly anonymity is possible. ... Our results showed that anonymity is embraced among more U.S. MTurk workers ... The percentage of the U.S. MTurk sample who said people should have the ability to be anonymous online was significantly higher than in the U.S. public sample (86% vs. 63% ...). The difference between the two samples remains significant when we add more demographic information into the model ... Separately, demographic factors predicted people’s anonymity preferences: younger people and men preferred more anonymity than their counterparts."

    "Although they have the same amount of personal information online, more MTurk workers have tried to be anonymous, they have tried to hide their contributions from more different audiences, are more worried about their online information, and believe they should be able to communicate anonymously online. Their opinion about whether or not it is possible to be completely anonymous online, however, is not significantly different. Another important point is ... the two samples show similar trends in how their behaviors and attitudes change based on age. Younger people seem to have more personal information online, but also have stronger tendency towards hiding their online identity and content."

    "We analyzed the same set of questions in our survey answered by U.S. MTurk workers and Indian MTurk workers. ... On average, Indian MTurk workers reported that more of their personal information was online than U.S. MTurk workers did ... None of the demographic variables had an effect on their perception of online information, but using social media predicted more personal information online. ... We also found U.S. MTurk workers were more likely to seek to hide their identity than Indian MTurk workers (31% vs. 16% ...) ... we did not find any significant demographic variables explaining the difference, so we can conclude that, for the variables we have studied, the two groups differ in their anonymity-seeking behavior."

    "Although more U.S. MTurk workers reported seeking anonymity, they did not name more people or groups they were hiding from than Indians MTurk workers did (73% vs. 76% in each sample named at least one individual or group that they have hidden content from). ... the two samples did not show any difference but younger respondents hid from more groups across both samples ... significantly more Indian MTurk workers reported hiding from employers or supervisors than U.S. MTurk workers (42% vs. 27% ...), and slightly (but not significantly) more Indian MTurk workers hid from people from the past, those who might criticize them, and hackers, criminals, or advertisers (35% vs. 27% ...). Their experiences with the other three groups did not show significant difference."

    "Although more of their information was online and more of them used social media, Indian MTurk workers were significantly less worried than U.S. MTurk workers about their personal information on the Internet ... Sixty-two percent (62%) of the U.S. MTurk workers said they worried about how much information was available about them on the Internet, but only 35% of the Indian participants said this ... Adding demographic variables and social media use in the model did not reduce the significant effect of the sample difference ... The finding suggests that U.S. MTurk workers have more concerns about their personal information online than Indian MTurk workers, regardless of their age, gender, education and whether they use social media or not."

    "We also found consistent significant differences between Indian and U.S. MTurk workers’ policy preferences and their opinions about anonymity. U.S. MTurk workers showed more dissatisfaction about how the government protects their privacy than Indian MTurk workers ...: only 18% of the U.S. MTurk workers said current laws provide reasonable protection of people’s privacy, whereas 52% of the Indian participants thought their laws provide enough protection of their privacy ... Less U.S. than Indian MTurk workers believed that people could achieve complete anonymity on today’s Internet (31% vs. 64% ...). More U.S. than Indian MTurk workers said people should have the ability to use the Internet completely anonymously (86% vs. 77% ...). Consistent with this finding, a question added to the MTurk survey (that was not posed in the U.S. public survey) asked respondents whether the government should be able to monitor everyone’s email and other online activities “if officials say this might prevent future terrorist attacks.” Fifty-seven percent (57%) of the Indian MTurk workers agreed with this statement but only 9% of the U.S. MTurk workers agreed ... A different national U.S. survey (Stuart et al, 2012) asking the identical question showed somewhat higher agreement among the U.S. public (45%) as compared to the U.S. MTurk workers (9%)."

    "Most of the demographics of our Indian Turk sample are similar to the U.S. Turk sample, except Indian MTurk workers reported higher levels of education. Almost everyone from the Indian Turk sample used social media. Indian MTurk workers reported having put more personal information online than the U.S. MTurk workers did. Although we might expect more use of social media and more information online to predict more privacy concerns ... this was not the case among Indian MTurk workers. They were less worried about their information and did not take more actions to protect their identity. Also, Indian MTurk workers showed less positive attitudes about anonymity than did U.S. MTurk workers. The only notable difference in the other direction is that Indian MTurk workers more often hid information from employers. Indian MTurk workers’ policy opinions were very different from those of U.S. MTurk workers. More than half [of Indian MTurk workers] thought their laws provide enough protection to their privacy, and more than half agreed to government monitoring. This difference might be due to cultural differences or a result of different national events or news. Additionally, there is a potential bias in that the surveys were conducted after the Snowden revelations (June 6, 2013). The news coverage of these revelations in the U.S. may have reduced American’s trust in online privacy and government Internet policy and practices."
  9. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    July 15, 2014 draft - academic paper: " 'Who are These People?': Evaluating the Demographic Characteristics and Political Preferences of MTurk Survey Respondents" (pdf)
    by Connor Huff and Dustin Tingley, at Harvard University

    "a survey conducted on MTurk ... during the fall of 2012. The MTurk survey had 2706 respondents ..."

    "One of the most common questions we hear at workshops and conferences is about the occupational categories of MTurk respondents. Many scholars are rightfully concerned that MTurk respondents might all be unemployed or overwhelmingly draw from a small number of industries. However, in this paper we show that the percentage of MTurk respondents employed in specific industries is strikingly similar to CCES."

    "Table 1: The occupation of respondents by survey" [Mturk portion]
    Management: 11.94%
    Independent Contractor: 8.72%
    Business Owner: 2.73%
    Owner-Operator: 1.92%
    Office and Administrative Support: 17.24%
    Healthcare support: 4.56%
    Protective service: 1.22%
    Food preparation and service: 6.03%
    Personal care: 2.44%
    Installation, Maintenance and Repair: 2.93%
    Grounds Cleaning and Maintenance: 0.81%
    Other Service: 16.01%
    Trade worker or laborer: 9.25%
    Professional: 14.18%

    "Political scientists might also be concerned [whether] MTurk respondents are overwhelmingly drawn from either Urban or Rural areas. ... In both the MTurk and CCES data we have self-reported zip codes. We then link this data up with the United States Department of Agriculture (USDA) Rural-Urban continuum classification scheme ... codes range from metro areas coded 1-3 in decreasing population size, to non-metro areas coded from 4-9. ... the number of respondents living in different geographic categories on the Rural-Urban continuum is almost identical in MTurk and CCES. Both MTurk and CCES draw approximately 90% of their respondents from Urban areas with the remaining 10% spread across Rural areas."

    "Table 2: The percentage of respondents in Urban/Rural areas by survey" [Mturk portion]
    1 [urban, >1mil metro area]: 57.13%
    2 [urban, 250k-1mil metro area]: 22.91%
    3 [urban, <250k metro area]: 7.72%
    4 [rural, >20k, adjacent to metro area]: 4.76%
    5 [rural, >20k, not adj to metro area]: 1.01%
    6 [rural, 2.5k-20k, adjacent to metro area]: 2.82%
    7 [rural, 2.5k-20k, not adj to metro area]: 2.53%
    8 [rural, <2.5k, adjacent to metro area]: 0.62%
    9 [rural, <2.5k, not adj to metro area]: 0.49%
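The paper's "approximately 90% urban" figure can be checked directly from Table 2, since USDA codes 1-3 are metro (urban) and 4-9 are non-metro (rural):

```python
# MTurk respondents by USDA Rural-Urban continuum code (Table 2 above)
mturk_by_code = {1: 57.13, 2: 22.91, 3: 7.72,
                 4: 4.76, 5: 1.01, 6: 2.82,
                 7: 2.53, 8: 0.62, 9: 0.49}

urban = sum(pct for code, pct in mturk_by_code.items() if code <= 3)
rural = sum(pct for code, pct in mturk_by_code.items() if code >= 4)
print(f"urban: {urban:.2f}%  rural: {rural:.2f}%")  # ~88% / ~12%
```

The exact split is 87.76% urban to 12.23% rural (the columns sum to 99.99% due to rounding), consistent with the paper's "approximately 90%" characterization.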

    "over time, we have collected a large pool of MTurkers that have taken our surveys and told us their gender, ideology and partisan affiliation, and zip code. In this sample of 15,584 mTurkers for which we had all of these variables, 54% were male, on a one [liberal] to seven [conservative] point ideology scale the average was 3.35, 34% self identified as Democrat, 22% as Republican, and 26% as independent (the remaining identified with “other” parties), and the average age was 32."
  10. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    Feb 10, 2014 - Dahn Tamir of Techlist wrote a forum post on TN: "MTurk Work and Education Survey (Results)"

    " We surveyed a couple thousand US Turkers over the weekend, and here's what they told us...

    27% 18-24
    42% 25-34
    17% 35-44
    9% 45-54
    4% 55-64
    1% 65+

    45% female
    55% male

    Highest level of education completed:
    1% some high school
    10% high school graduate or GED
    32% some college
    10% 2-year college degree
    36% 4-year college degree
    10% graduate or professional degree

    Employment status:
    39% employed full time
    14% employed part time
    16% self employed
    15% full-time student
    16% unemployed "

    addendum by 'RippedWarrior', in reply to the same thread:

    " Here are some statistics I offer in comparison and/or relative to techlist's findings. I went through the thread What's Everyone Turking For [(Nov 2013-Feb 2014)], and here is what we have to say:

    Student Loans: 12% (7)
    Credit Card and other Consumer Debt: 18% (10)
    Utilities and Living Expenses: 27% (15)
    Luxury Items and Christmas Presents: 24% (13)
    Medical Bills: 5% (3)
    Auto/Appliance/Home Repair: 9% (5)
    Boredom: 2% (1)
    Trips to Las Vegas: 2% (1)

    My percentages are rounded, and total respondents is 55.


    Debts: 45%
    Living Expenses: 27%
    Future Purchases: 25%
    Other: 2% "
  11. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    Jun 11, 2014 - academic paper: "Lessons Learned from an Experiment in Crowdsourcing Complex Citizen Engineering Tasks with Amazon Mechanical Turk" (pdf or pdf)
    by Matthew Staffelbach, Peter Sempolinski, David Hachen, Ahsan Kareem, Tracy Kijewski-Correa, Douglas Thain, Daniel Wei, and Greg Madey, at University of Notre Dame

    "Crowdsourcing is increasingly being seen as one potentially powerful way of increasing the supply of labor for problem-solving tasks, but there are a number of concerns over the quality of the data or analysis conducted. This is a significant concern when dealing with civil infrastructure for obvious reasons: flawed data could lead to loss of lives. Our goal was to determine if workers on Mechanical Turk were capable of developing basic engineering analysis skills using only the training afforded by comprehensive tutorials and guided questionnaires."

    "In this paper we test the effectiveness of turkers at completing engineering tasks, building an army of so-called 'citizen engineers'. Virtual Wind Tunnel ... data analysis is used as a representative complex citizen-engineering task, because it would be unfamiliar to turkers and therefore would require some training ... In this experiment every turker is required to read and comprehend a 4 to 5 page tutorial in order to begin to grasp the concepts necessary to effectively participate in any of these tasks."

    " We compared the skill of the anonymous turkers of Amazon Mechanical Turk, our unskilled crowd in assessing the quality of Virtual Wind Tunnel Data with the skill of two domain experts with formal training in the fields of fluid mechanics and fluid-structure interaction, who would serve as the source for ground truth.
    We released the 13 HITs (Human Intelligence Tasks) to two groups of turkers: turkers with 1) the masters qualification (a qualification awarded by Amazon) and 2) the default custom qualifications which requires the turkers to have completed at least 1000 HITs with a 95% approval rating.
    We also had two groups of graduate students with some coursework/training in fluid-structure interaction complete our HITs, students from the University of Notre Dame (USA) and Beijing Jiaotong University (PRC). These would be viewed as a skilled crowd for comparison sake.
    This first HIT contained the tutorial, a short survey, and three simulation results that each had three graphical outputs to be assessed as indicators of simulation quality ... Each subsequent HIT contained three simulations; each simulation again contained three graphical outputs to be evaluated. This totaled to 117 questions for all the HITs, not including the survey questions.
    66 master workers completed our first HIT, 59 of them were qualified to move on the next HIT, and 36 finished all the HITs. 51 Non-master turkers completed the 1st HIT, 27 were qualified to move on to the other HITs, and 9 finished all the HITs. "
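Requirements like the non-Masters criteria above (at least 1000 approved HITs with a 95% approval rating) are expressed on MTurk as QualificationRequirement entries. A hedged sketch in the shape boto3's `create_hit` expects, assuming Amazon's documented system qualification type IDs for approval rate (`000000000000000000L0`) and approved-HIT count (`00000000000000000040`); the Masters qualification is a separate Amazon-granted qualification with its own ID:

```python
# Sketch (assumption: the two system qualification type IDs below match
# Amazon's documented values -- verify against the MTurk API reference).
worker_requirements = [
    {
        "QualificationTypeId": "000000000000000000L0",  # HIT approval rate (%)
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {
        "QualificationTypeId": "00000000000000000040",  # number of approved HITs
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [1000],
    },
]

# This list would be passed to the MTurk client when posting the HIT, e.g.:
# client.create_hit(..., QualificationRequirements=worker_requirements)
print(len(worker_requirements))
```

Workers failing either comparison never see the HIT, which is how requesters like the authors pre-filter their participant pool.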

    "Our results (Fig. 2) showed that quality of the unskilled crowdworkers’ work was slightly higher than that of the skilled crowd (graduate students)."

    "Fig. 2. Only turkers who completed all thirteen HITs with ten or less missing answers were included in these calculations. The most common answers of each group was calculated, this set of answers was denominated the majority consensus."

    "During this process of data collection we also studied the turkers’ demographics by requiring that they complete a five-question survey. We discovered that 60% of turkers who chose to complete our first HIT had earned a college degree or higher, and 71% (out of 39) of the turkers, who completed all 13 HITs had a college degree or higher. This may explain why they performed so well."

    "In our survey we also asked, “Which of the networks listed below have you ever used in order to enhance your use of Mechanical Turk (through discussions, ratings of requesters, finding new HITs, etc.)?” Only 13.5% of 59 turkers said that they did not use any networks; 60% said that they used MTurkForum on a regular basis. The other most popular sites were Turkopticon ( and Turkernation ( All mentioned sites also include ... CloudMeBaby (, Reddit(, Facebook (, LinkedIn (, and mTurk grind ( There was no evidence found that turkers were using these sites to “cheat” on our HITs. Instances were found where turkers would ask other turkers for clarification on how to approach some of the questions, but other turkers only responded by saying that the requester may not approve."
  12. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    Aug 5, 2014 - academic paper: "Are You Transphobic?: How Biological Views Influence Attitudes" (abstract only, in pdf on p25)
    by JoEllen Blass and George Chavez, at Bloomsburg University

    " I recruited participants through Amazon's Mechanical Turk (MTurk) ... To start the survey, participants were given two instructional manipulation checks and asked if they spoke English fluently. After excluding participants who did not pass the manipulation checks (N = 10), 98 participants (65% female, 32% male, 1% trans) were included in the analysis.
    The sexual orientation of the participants was 87% heterosexual, 3% homosexual, 7% bisexual, 1% asexual and 1% heteroflexible.
    The racial/ethnic background of participants was largely Caucasian (77%).
    The average age of participants was 38.84 years (SD = 14.01) with a range from 19 to 75. "
  13. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    Feb 2015 - statistics from PickFu: "Demographic Information for PickFu Poll Responders"

    PickFu is an MTurk requester that posts numerous quick surveys on behalf of businesses large and small, each consisting of a single primary question and a few demographic questions.

    They have an occasionally-updated page on their website summarizing the results of the demographic questions. The results appear to be cumulative, rounded to the nearest 1%. I used the Internet Archive to collect earlier versions of the data (extracted from the Google Charts API calls, which the archiving process had broken), and estimated approximately when those updates occurred: ~Dec 2008, ~Jul 2009, ~Nov 2010, ~May 2012, ~Oct 2013, ~Jun 2014, and ~Feb 2015.

    On Jan 7, 2015, PickFu began issuing qualifications to every MTurk worker who had ever provided any of their demographics. The number of users who've received these quals indicates that the five basic questions for which results are reported had a total of about 36,000 respondents as of Feb 9, 2015. All PickFu surveys (as far as I know) have required "Location is US", limiting the response pool to US-based workers.

            ~Dec  ~Jul  ~Nov  ~May  ~Oct  ~Jun  ~Feb
            2008  2009  2010  2012  2013  2014  2015
    Female   58%   56%   56%   54%   48%   47%   46%
    Male     42%   44%   44%   46%   52%   53%   54%
            ~Dec  ~Jul  ~Nov  ~May  ~Oct  ~Jun  ~Feb
            2008  2009  2010  2012  2013  2014  2015
    3-17      1%    1%    1%    1%    1%    1%    1%
    18-34    58%   65%   69%   71%   74%   75%   75%
    35-49    31%   25%   22%   20%   18%   18%   18%
    50+      10%    9%    8%    8%    7%    6%    6%
            ~Dec  ~Jul  ~Nov  ~May  ~Oct  ~Jun  ~Feb
            2008  2009  2010  2012  2013  2014  2015
    White    80%   76%   72%   76%   76%   77%   77%
    Black     2%    3%    4%    5%    5%    5%    5%
    Asian    12%   14%   16%   12%   11%   11%   10%
    Hispanic  4%    3%    4%    4%    4%    5%    5%
    Other     2%    3%    3%    3%    3%    3%    3%
                ~Dec  ~Jul  ~Nov  ~May  ~Oct  ~Jun  ~Feb
                2008  2009  2010  2012  2013  2014  2015
    High School  20%   18%   18%   21%   21%   21%   21%
    College      60%   61%   63%   63%   66%   66%   66%
    Grad School  20%   21%   19%   16%   13%   13%   13%
                ~Dec  ~Jul  ~Nov  ~May  ~Oct  ~Jun  ~Feb
                2008  2009  2010  2012  2013  2014  2015
    $0-$30k      40%   46%   48%   50%   50%   51%   50%
    $30k-$60k    36%   33%   33%   32%   32%   32%   32%
    $60k-$100k   17%   15%   14%   13%   13%   13%   13%
    $100k+        7%    6%    5%    5%    4%    5%    5%
  14. clickhappier ★★Ⰼ₳ՖŦξᚱ⌚ Contributor

    Sep-Oct 2014 - statistics from Stephen Rapier's "Online Search Use" survey demographic questions

    This researcher, a Pepperdine University professor, kindly made the informal stats from his SurveyMonkey-hosted survey's demographic questions visible upon completion of the survey. It was posted on MTurk in mid-September 2014 with a one-month expiration date (it likely reached its desired number of participants well before then). I returned a month later, in mid-October, to save a copy of the final demographics results; the data hasn't changed between then and the time of this writing in Feb 2015. Since the smallest reported nonzero percentage is 0.2%, the survey must have included at least 500 participants; and since a sample of exactly 500 would make every percentage a multiple of 0.2%, the presence of values such as 1.7% suggests at least a few hundred more participants (e.g. 700+). The qualification requirements were "Total approved HITs is not less than 50, HIT approval rate (%) is not less than 90, Location is US".
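The sample-size inference above can be made concrete: a reported percentage (rounded to one decimal place) is only achievable if some whole number of respondents produces it. A small sketch of the check, using the 1.7% value from the data below (706 is an illustrative sample size, not a claim about the actual count):

```python
def n_can_report(n: int, pct: float) -> bool:
    """Can some integer count out of n respondents round
    (to one decimal place) to the reported percentage?"""
    return any(round(100 * k / n, 1) == pct for k in range(n + 1))

# With exactly 500 respondents, every percentage is a multiple of 0.2%
# (8/500 = 1.6%, 9/500 = 1.8%), so a reported 1.7% is impossible:
print(n_can_report(500, 1.7))  # False
# A somewhat larger sample fits fine (e.g. 12/706 ~= 1.70%):
print(n_can_report(706, 1.7))  # True
```

Running this over all the reported percentages at once would tighten the bound further, but even the single 1.7% value rules out a sample of exactly 500.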

    Gender:
    	Female   44.6%
    	Male     55.4%
    Year of Birth:
    	Before 1946      0.4%   [~69+]
    	1946 to 1964     5.5%   [~50-68]
    	1965 to 1976    12.0%   [~38-49]
    	1977 to 1994    77.0%   [~20-37]
    	1994 to Present  5.1%   [~18-20]
    US State/Territory:
    	Alabama          2.1%
    	Alaska           0.4%
    	American Samoa   0.0%
    	Arizona	         2.3%
    	Arkansas         0.8%
    	California      17.5%
    	Colorado         1.7%
    	Connecticut      1.7%
    	Delaware         0.4%
    	D. of Columbia   0.0%
    	Florida          8.3%
    	Georgia          3.1%
    	Guam             0.0%
    	Hawaii           0.2%
    	Idaho            0.6%
    	Illinois         4.2%
    	Indiana          1.5%
    	Iowa             1.5%
    	Kansas           0.2%
    	Kentucky         2.7%
    	Louisiana        0.6%
    	Maine            0.2%
    	Maryland         0.8%
    	Massachusetts    2.5%
    	Michigan         3.3%
    	Minnesota        1.5%
    	Mississippi      0.6%
    	Missouri         1.5%
    	Montana          0.2%
    	Nebraska         0.8%
    	Nevada           0.2%
    	New Hampshire    0.0%
    	New Jersey       1.7%
    	New Mexico       0.2%
    	New York         6.3%
    	North Carolina   2.9%
    	North Dakota     0.0%
    	N. Marianas Isl. 0.0%
    	Ohio             2.7%
    	Oklahoma         0.2%
    	Oregon           2.1%
    	Pennsylvania     5.2%
    	Puerto Rico      0.0%
    	Rhode Island     0.4%
    	South Carolina   0.0%
    	South Dakota     0.2%
    	Tennessee        1.0%
    	Texas            6.0%
    	Utah             1.5%
    	Vermont          0.2%
    	Virginia         4.6%
    	Virgin Islands   0.0%
    	Washington       2.3%
    	West Virginia    0.2%
    	Wisconsin        0.6%
    	Wyoming          0.2%
    Religious Affiliation:
    	Agnostic              21.4%
    	Anglican               0.0%
    	Apostolic              0.2%
    	Assembly of God        0.6%
    	Atheist               20.2%
    	Baptist                6.9%
    	Buddhist               1.5%
    	Catholic              13.7%
    	Charismatic            0.4%
    	Christian Reformed     2.5%
    	Church of Christ       2.1%
    	Episcopalian/Anglican  0.8%
    	Evangelical            1.2%
    	Hindu                  0.2%
    	Interdenominational    0.4%
    	Jewish                 2.9%
    	LDS [Mormon]           1.0%
    	Lutheran               2.5%
    	Messianic              0.2%
    	Methodist              3.5%
    	Muslim                 1.0%
    	Nazarene               0.0%
    	Non-denominational     4.2%
    	Orthodox               0.0%
    	Pentecostal            1.2%
    	Presbyterian           0.8%
    	Seventh-Day Adventist  0.0%
    	Southern Baptist       0.8%
    	Other                  6.7%
    	Not sure yet           2.9%
    Religious Attendance:
    	Weekly          16.4%
    	Monthly          5.4%
    	Occasionally     9.6%
    	On Special Days  6.7%
    	Seldom          18.7%
    	Never           43.2%
  15. ChristopherASA MTG Elite Contributor

    Apr 2, 2015
    Ok, Click. Now I know why you are so busy.

    Luck, Click!
