Amazon's Mechanical Turk (www.mturk.com) is a service where you can post Human Intelligence Tasks (HITs) that people complete for a small payment. It's catching on among academics who want a really cheap sample (though perhaps without as much verifiable respondent information as a dedicated sample provider would offer).
You can sign up for Mechanical Turk using an e-mail address or an existing Amazon account. As a Requester, you can create tasks for workers using a variety of templates that Amazon provides; we'll use the Survey Link template.

After creating a new task, you can edit its details. On the first screen, you provide your own Internal Project Name for your reference, then the Title, Description, and Keywords that help workers find and understand your task.

Next, you set up more details about the tasks available, including the reward per task, the number of assignments (so that multiple people can use the same survey link), and some settings for time to complete and time to payment.

We recommend looking under the Advanced link on this page, where you can customize the worker requirements, meaning who can view and accept the task. Amazon defaults to Master Workers, who in this context are people who have consistently completed similar tasks and received payment without issue. You might not need such strict requirements, so consider setting your own worker requirements, such as in the image below.

Finally, you can customize the design of your task. In our limited experience, most researchers stick with the default template and just make some wording changes to reflect their survey. Make sure you test what you put in for the Survey link before publishing the task.
When you use Mechanical Turk with a survey link, Amazon gives each worker a place to post a survey code as proof of completing the survey, so the owner of the HIT knows the worker actually finished it. The survey code is simply Amazon's way of letting you verify that your workers really took your survey; it does not need to follow any particular pattern, and it is not provided by Amazon. You generate it yourself.
We'll talk about two approaches to generating these codes so you can validate your Mturk workers.
Option 1: Random Number Generation
Option 1 uses a combination of generating a random number and saving it to the survey data, so you can issue a code and later compare your survey data against the survey code your Mechanical Turk workers enter. The RandNum() function takes a seed, a minimum, and a maximum as parameters, like this: RandNum(1,1111,9999). The seed is just a starting point for the random number generator, so any value can be used in place of the 1. See https://www.sawtoothsoftware.com/help/lighthouse-studio/manual/index.html?drawingrandomnumbers.html for more details.
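As a rough analogue (a Python sketch, not Sawtooth's actual implementation), a seeded draw within a range could look like this; rand_num is our own hypothetical helper:

```python
import random

def rand_num(seed, low, high):
    # Rough analogue of a RandNum(Seed, Min, Max)-style call:
    # the seed fixes the generator's starting point, then we draw
    # an integer in the inclusive range [low, high].
    rng = random.Random(seed)
    return rng.randint(low, high)

code = rand_num(1, 1111, 9999)
print(code)
```

Note one difference from the survey context: with a fixed seed this Python helper returns the same number every call, whereas in a survey each respondent gets their own draw.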
To save a randomly-generated value, the easiest way with existing functionality is to create a pass-in field that you don't otherwise plan to use, then use the SetValue() function to store the output of your RandNum() function. Pass-in fields are set up under Questionnaire Access & Passwords: https://www.sawtoothsoftware.com/help/lighthouse-studio/manual/index.html?hid_web_passinfields.html.
If we create a pass-in field called Mturk, it can technically show up in the link to take your survey, but we can leave it out. Elsewhere in our survey (say, the footer of the first question), we can nest the RandNum() function inside SetValue() within scripting tags, like this: [%SetValue(Mturk,RandNum(1,1111,5555))%], to generate a random number between 1111 and 5555 and save it to the Mturk variable in the study.
After the study is complete you can look at your data to see the values you saved and approve your Mturk workers.
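The approval step itself is a simple lookup. Here is a minimal Python sketch with made-up codes, assuming you've pulled the saved Mturk values from your survey export and the codes each worker submitted from your MTurk results file:

```python
# Hypothetical exports: codes saved in the survey data, keyed by
# respondent, and codes each worker pasted into the MTurk task.
survey_codes = {"resp_1": "2381", "resp_2": "4907", "resp_3": "1550"}
submitted_codes = {"workerA": "4907", "workerB": "9999"}

valid = set(survey_codes.values())

# Approve workers whose submitted code matches a saved survey value.
for worker, code in submitted_codes.items():
    print(worker, "approve" if code in valid else "reject")
```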
Option 2: The Cryptography Approach
The cryptography approach saves you from having to search through your data to confirm that someone actually took your survey, and it relies on the RespNum() and Mid() functions. Mid() lets us pull characters out of a text string (see https://www.sawtoothsoftware.com/help/lighthouse-studio/manual/index.html?stringfunctions.html for more detail).
The first thing to do is add an open-end question at the beginning of your survey asking for the worker's MTurk ID. We'll call it Mturk, like above.
Next, you would determine your cryptographic sequence: how you will construct your survey code. Here's an example you could use, placed on the Terminate page at the end of your survey (reconstructed in scripting tags from the worked example below): [%RespNum()%]_99[%Mid(Mturk,2,1)%]&$[%Mid(Mturk,3,1)%]Aa[%RespNum()+1550%]
This concatenates the following: the respondent number, _99, the character in the second position of the Mturk variable, &, $, the character in the third position of the Mturk variable, Aa, and the respondent number plus 1550. It's perhaps a little overkill, but it would be pretty difficult for someone to reverse engineer.
If I were respondent 14 in your survey and had an MTurk ID of ABCDEFG, the code above would display 14_99B&$CAa1564 on the terminate page. The worker would then copy and paste this into the Amazon task window, and you would see something like this on your MTurk Results page. We can compare the worker ID with our code to validate it: if someone really was respondent 14 with an ID of ABCDEFG, a valid code would include B and C in their correct spots and 1550+14=1564 at the end. We could also use SetValue(), as in Option 1, to save the cryptographic code to the respondent record for extra verification.
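Because the code is deterministic, you (or a script) can regenerate it from the respondent number and worker ID rather than searching the data. A minimal Python sketch of the scheme described above:

```python
def survey_code(resp_num, mturk_id):
    # Mirrors the example sequence: RespNum, "_99", the 2nd character
    # of the MTurk ID, "&$", the 3rd character, "Aa", RespNum + 1550.
    return (f"{resp_num}_99{mturk_id[1]}&$"
            f"{mturk_id[2]}Aa{resp_num + 1550}")

print(survey_code(14, "ABCDEFG"))  # 14_99B&$CAa1564
```

To validate a submission, recompute the code from the worker's ID and claimed respondent number and check that it matches what they pasted into the task.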