Matt Hilburn is a freelance marketing research statistician who has worked in marketing research for 15 years. He holds a Master of Statistics in Econometrics and a Bachelor of Science in Economics and Business.
Matt has a passion for translating technical findings into everyday language for informed decision making. He enjoys the full market research process and has used advanced analytics to conduct large scale projects and provide data-centric reports to executives of Fortune 100 companies down to the smallest non-profits.
Matt’s passion for research design lends itself well to both qualitative and quantitative research. Matt has also been an instructor at the University of Utah Business School and a developer of many data analytics courses and certificates.
*Note: The following material is a true and direct transcription from voiced conversation. Spelling and grammar errors should be expected.
Justin Luster: Good morning, good afternoon, good evening, wherever you are in the world. We have people all over the world joining us today, and I'm excited to introduce Matt Hilburn. He is a freelance marketing research statistician with 15 years of experience in marketing research. He has a Master of Statistics in Econometrics and a Bachelor of Science in Economics and Business.
He has a passion for translating technical findings into everyday language for informed decision making. He has instructed at the University of Utah business school. I went to the University of Utah, so that's cool. And he's developed many data analytics courses and certificates. So welcome, Matt.
How's it going?
Matt Hilburn: It's going well, Justin. Thanks for having me. I appreciate it.
Justin Luster: Yeah, we're excited. Take it away, Matt.
Matt Hilburn: All right. Wonderful. Thanks, Justin. Really good to be with all of you and super excited to be with Sawtooth Software and appreciate all the webinars that Sawtooth Software puts on.
And as we discussed what to talk about for today's webinar, I thought, we do so much with conjoint and max diff and maybe for those of you out there who are either getting into research for the first time, or for those of you who have been writing surveys for decades and would like a refresher and to just see this again.
I thought this would be a good topic to cover. So super excited to have you all here and to get any questions that you've got at the end as well. A really quick outline: there's so much to talk about with regard to survey design, survey methodology, and questionnaire writing. This isn't going to cover everything, because we only have a limited amount of time, but we'll certainly cover some key things.
In particular, we'll do a quick overview. We'll talk through defining the problem, meaning the problem that we're trying to solve with a survey, and developing an approach to solving that problem. We'll look at the hierarchical structure of questions that we should be implementing in surveys, basic survey flow, and general survey guidelines.
We'll talk through how to write questions well. Then we'll look at making sure that we're aligning our surveys with objectives, and then we'll talk through some quality assurance issues as well. Most of this presentation is going to follow the chronology of writing a survey. We'll walk you through how to write a survey from start to finish.
It's not gonna be perfectly in order in that way, but most of it will be. We're gonna talk about who I am, and skip over that. You've all seen the webinar overview; this is just for reference. So we'll move on through that and jump right in. First off, most of you are probably familiar with the seven-step marketing research process.
Depending on the author, sometimes it's not seven steps, sometimes it's a little different. But for this presentation, we're going to focus on the first three steps of the market research process: defining the problem, developing an approach to the problem, and designing the research instrument. We'll just touch on the first two a little, because we really do need to talk about those before we can talk about physically writing a survey.
But then the majority of this discussion will focus on designing and preparing the research instrument. Let's start with defining the problem. What does this mean? You either work at a company and you're responsible for writing a survey, or perhaps you have a client who's asking you to write a survey. Whatever the case may be, before we ever even think about writing any questions or jumping into a survey,
we have to really fully identify what the problem is. And this is a really commonly skipped step. A lot of times when I watch webinars or trainings or courses, there's a lot of fluff, and you think, oh, maybe this isn't applicable to me.
This is just for academic purposes. No, this is really important. We need to make sure that we're doing this. And so we conduct a problem audit. This might be really simple. Perhaps the problem is really simple. We know the problem and it's not that difficult to define. And so maybe this isn't very in depth and that's okay.
In other cases, if the problem is a little less defined, maybe the client or our boss or our customer, whoever it is we're conducting this research for is not fully aware of what the problem is and how we should approach it. We might need to conduct a couple of in depth interviews with key stakeholders or decision makers.
These could be really formal, or they could be really informal, perhaps some focus groups where we pull a few people into a room together and talk through the issue. Maybe we need to talk to some industry experts if there's a particular topic we're worried about that's a little bit outside of our expertise.
We also want to understand the history behind the problem and any culture surrounding it. And then we never want to ignore the fact that sometimes there's already research that exists on these sorts of problems. And so if there's any secondary data that exists, we should be investigating that as well.
Once we've understood all of that, we have to define the problem and we do this through building objectives. And again, I know most of you are probably corporate researchers where you either do this full time as a job, or you're the only person around in your team who maybe has any experience with this.
And so you're the one who's been asked to write a survey. Maybe you've never written a survey before, and so this is your chance to try and start doing that. We need to come up with some objectives, and it's often the case that, as the researcher, it's our job to help the person we're working for, whether that's our boss or the stakeholder, define this.
So we probably are the ones who are going to write the business objective. And before we get into writing these objectives, we ask ourselves why we're doing all of this. And there's an approach called the five whys, maybe it's not five, maybe it's four, maybe it's seven, where we hear a problem.
To use a non-research example, my car won't start. Okay, why won't it start? The battery's dead. Why is the battery dead? The alternator is not working. Why isn't the alternator working? It's because the belt was worn out and I didn't replace it. Okay, why didn't you replace it? I just didn't maintain my car as well as I should have.
The issue that we're facing, which is that the vehicle won't start, that's not really the problem. The problem is that we don't have a good cadence for conducting car maintenance. So this is the kind of exercise that we need to go through as we're talking through research objectives. To use a research example, I'm often asked by clients, they'll say, hey, we wanna conduct a brand perception study.
When they say that to me, it sounds to me like they've already come up with the solution to the problem rather than giving me the problem. And sometimes I like to work through with them what is the actual problem you're trying to solve? And they'll say we want to conduct this survey because we want to better understand our customers.
Why do you want to do that? We want to see how we can better attract Gen Z. Why? Because sales with Gen Z are low. Okay. Why are sales with Gen Z low? We don't know. Okay. So the real issue here is that we want to understand why we're not attracting Gen Z. And maybe that calls for a brand perception study.
Maybe it's something else. Maybe it's a conjoint study or a max diff. The point is, by understanding that objective really well, it may actually completely change the research design and the methodology for how we'll actually approach this problem. And so we want to be sure that we're pushing whoever is asking us to write a survey to really get down to what the core issue is that we're looking at.
Now, once we've figured that out, we want to figure out: all right, how do we collect the data that's going to answer these problems? An online survey isn't always the right way to do that, though it's certainly one of the best ways to collect primary data. We may also need secondary data.
And so we need to explore through that problem what the approach to this is going to be. And the best way I've found to do that is to write what's called a research plan. This is a really simple document. You document the background of the project, what the issue is, why we're conducting this research.
You also document what the objectives are and the approach you've developed to solving this problem, along with any relevant information. You really want to have everything documented. This is not a difficult thing to write; it's probably two pages, and by having it you'll find that you can reference it over and over again as you conduct this research project.
It's also not uncommon that, if you conduct a research project and there are a lot of stakeholders or a lot of people interested in the study, they may at some point say, hey, how come we didn't do this? Or, why didn't we introduce this into the survey? By having a documented research plan that has been approved by all the stakeholders at the table,
you can always reference back to that research plan and say, that wasn't in the research plan. The research plan is the document that guides the entire research project. So we need to be really clear that that's in place. Now, I'll point out that almost all statistical analysis is based on the assumption that data is coming from a random sample, but surveys are not random samples.
They're a form of what's called convenience sampling. Part of this research plan needs to have what we call a sample plan, or a sample frame, that talks about who's going to be taking the survey. Where are we going to get the sample from? What's going to be the sample size? We need to make sure that sample plan is solid. This presentation is on survey writing,
so I won't go deep into sample plans. But if you're going to collect data, you need to make sure you've got a really good, robust sample plan so that your data can support sound conclusions: large enough sample sizes, representative, et cetera. Okay, so we've figured out what the problem is, and we've written a research plan.
We've defined probably a business objective by now, or a few business objectives. From there, how do we start writing a survey? Step one for a lot of people in writing a survey is writing survey questions. That's almost the last step. The first step is everything we've already discussed: building these objectives.
I've got business objectives here and I've got research objectives. The only reason I've got those separated is that a business objective is usually an issue that the client or the business is facing. The business objective might be: how do we attract Gen Z audiences? The research objective might be something more like identifying what Gen Z audiences want, right?
So we convert a business objective into a research objective, something that we can research. This distinction may or may not end up being practically relevant to you. You can do it that way; over the years I've gotten to where I just create objectives and then survey questions, and I cut out this middle part.
But it's not a bad step to be that thorough. So we create these business objectives, we can convert them into research objectives, and then we convert those into survey questions, keeping in mind that for any given survey, you're probably looking at three to five objectives per survey. If you get too many objectives,
and we're going to need, say, three to five questions per objective, then our survey is going to get too long. And so we want to keep our surveys focused. Now, this is a little less applicable with choice-based surveys, because usually with choice-based surveys we have a really clearly defined objective: product optimization, or brand equity, or whatever it may be.
And it might just be one objective, but something to keep in mind. What does this look like? So we've got our objective up here and I've made just a generic example here. Objective one is we want to identify brand awareness. So now that we have that objective is when we finally start even thinking about writing survey questions.
And our exercise here is to say, what questions do we need in order to address that business objective? Okay. To identify brand awareness, let's ask an unaided awareness question, which would be an open response. We say, when you think of widget companies, which ones come to mind? Then we might ask a second question that says, specifically, so this is an aided question.
How aware are you of Acme Widget Company? And then this might have a scale, from "I've never heard of them" all the way down to "I shop there regularly." And then we might say, okay, what if we get a little bit more detail? We'd ask: how aware are you of the following products Acme Widget Company sells? And then we offer a few options there in a matrix table.
So then we step back and we say, all right, so our objective was to identify brand awareness. We've asked three questions. Have we fully addressed this objective? And the answer is no; there are lots of questions we could ask about brand awareness. We could ask, where did you first hear about Acme Widget Company?
What sorts of ads do you recall seeing from the company? Where did you see those ads? There are so many things we could ask, but this is the structure we need to create so that we don't get any kind of scope creep. When we start by just writing survey questions first, it gets a little scattered, a little confusing.
You just end up with a lot of questions where you feel like, I don't know, do we really need to ask that question? Or maybe you ask questions that aren't actually related to brand awareness at all. And so when you finish the survey and someone looks it over and says, hey, how come we didn't ask about the reputation of the company, then you can ask: does that apply to any of the research objectives that we developed or not?
And if the answer is yes, it does, then maybe you missed a question. And if the answer is no, then we say, okay, that doesn't fit within this survey. We would either need to add a new business objective, or, in many cases, recognize that the survey is getting too long and we don't want to add too many objectives, which brings us to something we'll get to shortly as well: survey length.
Okay, survey flow. What should this generally look like? We'll start really high level and then get more detailed. We start with an introduction. We need to tell respondents why we're asking them to take this survey. We need to give them an estimate of how long the survey will take.
We need to tell them what the incentive will be, if there's going to be an incentive. If you're using a sample provider or a panel company, you wouldn't mention anything about the incentive. But if you're sending this to customers of your own business,
you might want to tell them what the incentive is going to be. By the way, I don't think I mention this anywhere else in the presentation: you should offer an incentive. Do not ask someone to take a survey without incentivizing them, for this reason: if you do, you will get what's called non-response bias, where you'll only hear from people who love you, or you'll only hear from people who hate you, or you'll only hear from people who love taking surveys for one reason or another.
But you won't hear from the 95 percent of the general population of people who receive that survey. I really, really encourage you: if you ever send out a survey, include an incentive. If you don't include an incentive, I would not make any decisions based on that data, because it will have non-response bias.
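To see what that bias does to the numbers, here is a toy simulation in Python; the satisfaction scores and response rates are invented for illustration, not figures from the talk. It models unhappy customers being far less likely to answer an unincentivized survey:

```python
import random

random.seed(0)

# Toy population: true satisfaction scores from 1 to 5, evenly spread.
population = [random.randint(1, 5) for _ in range(100_000)]

# Assumed response propensities (invented): without an incentive,
# mostly people with strong positive feelings bother to respond.
RESPONSE_RATE = {1: 0.02, 2: 0.03, 3: 0.05, 4: 0.10, 5: 0.20}

respondents = [s for s in population if random.random() < RESPONSE_RATE[s]]

print(f"true mean satisfaction:     {sum(population) / len(population):.2f}")    # ~3.00
print(f"observed mean satisfaction: {sum(respondents) / len(respondents):.2f}")  # ~4.1, inflated
```

The observed data looks perfectly healthy on its own; nothing in the responses reveals the missing 95 percent, which is exactly why non-response bias is dangerous.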
Help them understand that the information they provide is confidential, if it is confidential; sometimes the survey you send out won't be confidential. Also, understand the difference between confidential and anonymous. Sometimes we send out an anonymous survey, but sometimes it's not anonymous.
Sometimes it's just confidential, which means that we know who they are. We have their name or their email address attached to their responses, but we're going to keep it confidential and we won't share that with anyone. And then we thank them for their time. Of course. Okay. So then the next section after the introduction is we need to have a really robust screener section.
This is another thing that I see people do either really poorly or miss altogether. And this is a really important part of your survey. So this is the place where we make sure we're getting the right people to take our survey. And again, if you're sending this survey through a panel company, then you've already done some work to predefine who the audience is.
The survey is only going to that predefined audience, or perhaps it's all of your customer base, and in that case, hopefully everybody who's supposed to get the survey is getting it. However, even if you've done all of those checks, it's still a really good idea to include some questions to make sure you're talking to the right person.
Let's say you only want to talk to your customers in Utah, and you've got customers nationwide. You want to include a question that asks what state you live in. Now, we don't want to be distrusting of people, but there are also bots online that can get ahold of a survey and take it. And so we want to create questions where the answer we're looking for isn't obvious.
So we wouldn't want to say, do you live in Utah, yes or no? Because then a respondent might think, okay, they want me to say yes; I'm going to click yes so I can take the survey and maybe get the incentive they're offering. So instead, we would just ask, what state do you live in? And if they say anything other than Utah, they get terminated from the survey.
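As a minimal sketch of that screener rule, here is what the logic might look like; the Python function and the qualifying-state set are hypothetical stand-ins for whatever skip logic your survey platform provides:

```python
# Hypothetical screener check: ask the open question ("What state do
# you live in?") rather than a yes/no that telegraphs the answer.
QUALIFYING_STATES = {"Utah"}

def passes_screener(state_response: str) -> bool:
    """Return True to continue the survey, False to terminate."""
    return state_response.strip().title() in QUALIFYING_STATES

print(passes_screener("utah"))    # True  -> continue to the survey body
print(passes_screener("Nevada"))  # False -> terminate politely
```

Because the respondent never sees which answer qualifies, a bot or an incentive-seeker answering at random is no more likely to get through than chance allows.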
We want to have, say, somewhere between one and five questions; that's usually enough to make sure we're talking to the right person. Are you a decision maker with regard to the topic we're talking about? What state do you live in? Let's say we're conducting a study only of a particular demographic,
say Gen Z: we want to ask a question like, what's your age, or what's your age range, those sorts of things. Some surveys have a tendency to have too many screener questions. We're probably not going to incentivize somebody who gets kicked out of the survey for failing the screener.
Don't make them answer twenty questions only to find out that they don't qualify and then get kicked out. Keep it as short as possible. Okay, so then we'll get into the survey body, which we'll talk about in a second. And then we close the survey. At the close, you ask for their information for their incentive, like an email, if you're sending this to customers.
But if it's through a panel, you wouldn't need to do that at all. When we write the objectives and the survey questions under each of the objectives, we'll have them in an order. But when we build the survey, we probably shouldn't simply delete the objectives and keep whatever order of survey questions is left. It does often make sense that the questions are going to be clustered together with like themes, but we need to address the flow of the survey to make sure that the questions fit and that they make sense.
And that it makes for a good experience for the respondent. So we think in terms of this funnel approach, where you start really broad. You might introduce certain topics; you might ask about awareness of a company, awareness of a product, awareness of the topic you're about to discuss. Then you get into the more general questions.
What's your overall satisfaction? How likely are you to recommend our company? Really broad questions. Then we get into more specific things, like experiences that they've had, really detailed product feedback. Broad to specific, and also keeping sensitive questions at the end of the survey,
if there are any sensitive questions. Demographics don't tend to be very sensitive, but those we would tend to put at the end of the survey, as would questions that have a socially desirable answer. Keep in mind that some of these questions might actually need to go at the front of the survey, in the screener section.
So again, if we're targeting Gen Z, then we're going to need to ask about age at the beginning of the survey. Not at the end. Okay. Some other general survey guidelines, survey length. Don't worry about question count really when you're developing the survey. I find it to be best to include every question you think would make sense for this survey. And don't restrict yourself by saying, oh, I think we're getting too many questions in here.
Just write all the questions out. We can always delete questions. But if we miss a question, because we just feel like the survey is getting too long, number one, we don't want that to just be our decision. We might want others to have a say in that. But number two, I just find it better to cut at the end.
But as far as survey length, we want it to be as short as possible. And I do find that this is an issue in the industry where people write really long surveys. People like me and maybe like you, we like taking surveys because we're research nerds. And it's just the cool thing that research nerds do.
But a typical person does not love taking surveys. That's the sad thing to admit. We need to make it as engaging as possible and as interesting as possible. But people generally don't like taking long surveys. We should be targeting 7 minutes for a survey, maybe 10 minutes max. This is a little bit easier to get away with.
If we're talking about a choice-based survey, because choice-based surveys are a little more interesting to take than a traditional survey, we can get away with it being a little longer, but we still need to be cautious of that. I had one client years ago who, in their RFP, dictated that the survey needed to be 20 minutes long, which is an interesting thing to dictate, because the only thing that we should be dictating is perhaps what the objectives are or what we want to learn.
The length of the survey is really irrelevant to any of that, other than making sure we keep it as short as possible. Now, if you're providing a larger incentive, you can also usually get away with a longer survey. With conjoint and MaxDiff studies, also be careful to not have too many additional objectives.
If you have a conjoint exercise and you're showing 10 or 20 screens, we want to be cautious not to ask a whole lot of additional questions outside of that conjoint exercise. You can ask some, but again, it can just make the survey too long. And even within conjoints, mathematically, the longer a conjoint exercise or a MaxDiff exercise is, the better from a mathematical standpoint, but keep in mind that the experience might go down as people are taking that survey.
So even if it works mathematically, don't ignore the actual survey experience that people are going to go through. Okay? So once the survey is programmed, you're going to want to time it. If it's too long, think about cutting some questions. If it's too short, you're also not going to be getting enough value out of it.
If you time a survey and it takes four or five minutes, you don't want to waste the time, energy, and money of sending it out for so little. Now, you don't want to throw in any unnecessary questions, and so if it's short, then it's short. But just keep in mind that if there's extra room, you might consider putting some extra questions in there.
Okay, open-ended questions. This is a big pet peeve of mine. Surveys are not really for open-ended questions; surveys are for closed-ended questions. If you find that you have a lot of open-ended questions you need to ask, you might need to start with qualitative research: hold some in-depth interviews with customers or stakeholders, whoever the target audience is.
You might need to hold some focus groups. The purpose of qualitative research is that it's exploratory research. We're exploring the topic and we're creating response options from their feedback that will go into the online survey. So just keep that in mind. People don't like typing when they're taking a survey.
So you want to keep those to a minimum. I do encourage that you put at least one open ended question in the survey because it's a great data quality check. If there's gibberish in an open ended response, that can be a red flag for data quality. If it's a really great answer and well articulated, then you know that it's good quality.
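If you want an automated first pass over those open ends, a few crude heuristics can flag likely gibberish for human review. This is only a sketch with invented thresholds; it complements reading the responses, it doesn't replace them:

```python
import re

def looks_like_gibberish(text: str) -> bool:
    """Flag an open-ended response for human review.

    Heuristics only: very short answers, keyboard mashing (almost no
    vowels), or one character repeated over and over. A flag is a
    prompt to look closer, not proof of a bad respondent.
    """
    letters = re.sub(r"[^A-Za-z]", "", text)
    if len(letters) < 3:
        return True
    vowel_ratio = sum(c in "aeiouAEIOU" for c in letters) / len(letters)
    if vowel_ratio < 0.15:                # e.g. "sdfghjkl"
        return True
    if len(set(letters.lower())) <= 2:    # e.g. "aaaaaa"
        return True
    return False

print(looks_like_gibberish("sdfghjkl"))                     # True
print(looks_like_gibberish("I shop there for the prices"))  # False
```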
So definitely include at least one, but I try to think in terms of three or so as being the max. I have gone over that, but try to keep it short if possible. Also, are you going to do anything with all of those responses? Hopefully you will, but if you send a survey to a thousand people and you've asked ten open-ended questions, that's a lot of information to read.
And I don't encourage using AI as your sole way of evaluating open ended questions. It can be a great approach to identify themes. But ideally, you're reading those questions and you're using quotes from it and gaining real insight from it. So just keep that in mind. Would you really use all of that information?
And if so, then maybe that's a good approach. Okay, number of survey questions. I get this question a lot as well: how many questions should be in a survey? It's irrelevant. It's not really how many survey questions you have; it's really more about the length of the survey. How long does it take to complete? To get a seven-minute survey,
what is that, maybe 20 questions or so, depending on the type of question? If it's a matrix question with a lot of options, then that's going to take a long time. If it's an open-ended question, that's going to take a long time. If it's a really fast, easy question with only two options, that'll be shorter. So the number of questions is not so relevant; a rough way to estimate completion time is sketched below.
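Here is a back-of-the-envelope version of that arithmetic; the per-question timings are assumptions for illustration, so calibrate them against your own pretest timings rather than treating them as standards:

```python
# Assumed average completion times per question type, in seconds.
SECONDS_PER_QUESTION = {
    "single_select": 15,
    "multi_select": 20,
    "matrix": 45,
    "open_ended": 60,
}

def estimated_minutes(question_counts: dict[str, int]) -> float:
    """Estimate total completion time from a survey's mix of questions."""
    total_seconds = sum(SECONDS_PER_QUESTION[qtype] * count
                        for qtype, count in question_counts.items())
    return total_seconds / 60

# A roughly 20-question draft with a couple of heavier items:
draft = {"single_select": 14, "multi_select": 3, "matrix": 2, "open_ended": 1}
print(f"~{estimated_minutes(draft):.0f} minutes")  # ~7 minutes
```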
We want to randomize all nominal data. If you don't know what nominal is, we're going to talk about that in a second, but it's basically a descriptor. So if you have a question that says, what's your favorite hobby, the results are gonna be: I like skiing, or mountain biking, or going to the theater.
That's nominal data; it doesn't have a number in it in any way. Those sorts of questions should usually be randomized, and that helps us prevent some primacy and recency bias, which we'll talk about in a second as well. The only reason you wouldn't randomize nominal data is if there's a good reason for the order. If you ask what state you live in and you randomize the 50 states, that's going to be a nightmare for people to find their state. So those should be in order, so that people can find what they're looking for.
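When you do randomize, the per-respondent shuffle might look like this minimal sketch, with an opt-out anchored at the bottom so it stays easy to find; the option labels are invented:

```python
import random

def randomized_options(options, anchored=("None of the above",)):
    """Shuffle nominal options per respondent to spread out primacy and
    recency effects, keeping anchored options (like 'None of the above')
    fixed at the bottom of the list."""
    body = [o for o in options if o not in anchored]
    random.shuffle(body)
    return body + [o for o in options if o in anchored]

hobbies = ["Skiing", "Mountain biking", "Going to the theater",
           "Reading", "None of the above"]
print(randomized_options(hobbies))  # a different order for each respondent
```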
So those should be in order so that people can find what they're looking for. You also want to randomize all questions that might make sense. For example, if you're asking people about a certain tagline or message and you have three questions for each tagline, how well do you like this tagline?
What's your favorite thing about it? Whatever it may be, you might want to randomize those taglines so that people are receiving them in different orders, to prevent any bias. You also want to force response on all questions. The only exception might be an open-ended question that's maybe not critical;
you could give them the ability to skip that one. But unless you really like dealing with data imputation issues and cleaning up missing data, I always force response. We're sending this survey to them; I expect them to answer the questions. If they don't want to answer the questions,
that's okay; they would just not be able to move forward with the survey. Okay, a couple of quick issues that are sometimes forgotten. Make sure that your lists have the right answers. I just took a survey yesterday that did not have all of the options. In particular, it did not have a "none" option.
It asked me a question, gave me a few options, and said, which of these apply to you? None of them applied to me, but there was no "none" option. So make sure that your lists are mutually exclusive and collectively exhaustive. Make sure they've got all the right answers. At a minimum, include a "none" option, and maybe throw in another option for somebody to click on, so that they can move forward in the survey.
And you should also build an analysis plan at this stage. Without getting too detailed into what an analysis plan is, this is going to tell you, by the time you get all the data back, what you're going to do with the data: I'm going to create a model with this data, and I'm going to create some crosstabs with this data.
You need to define all of that before you finish the survey, so that you can make sure your survey is written in a way that lets you actually conduct all of that analysis when you get to that point. Okay, a couple of really quick things about writing questions well. Double-barreled questions: this is a fun one. The classic example: do you feel like Coke is delicious and refreshing?
That's two questions. Make sure you're only asking one question, so this needs to be separated into two questions. Leading questions: I find this really common in nonprofit research for some reason, and I think it's because any time we're trying to get information about anything of importance, the people writing the survey may have a little bit of passion regarding that topic, and the questions can somehow reflect that passion.
So a leading question would be: most students think virtual learning isn't working, do you agree? You wouldn't want to say it that way. You would want this to be a very neutral question asking how they feel about whether virtual learning is working or not. You see this a lot at retailers: we strive for five-star ratings,
please rate our customer service. You don't want to tell them what the answer is before they start; that's leading. Or: do you feel our advanced technology offers better results than competitor options? Don't tell them it's advanced; you just need to ask about the technology. Okay, acquiescence bias.
This is more of an issue with focus groups and interviews than surveys, but it's the desire to be agreeable. You might think that not a lot of people have that, but this is something that is an issue. People desire to just be agreeable and say yes. So we want to be sure that we're writing questions in a way where the answer we're looking for isn't obvious.
And again, sometimes it's actually difficult to get an honest answer when there's a socially desirable answer, so being really careful how we write questions is important. Satisficing is often an issue. This is where respondents will just select any satisfactory answer, probably because they're quickly going through the survey.
This is usually because the survey is too long, and at this point they're just trying to get through it. So this is something we can minimize by making a really clean, easy, and short survey. Primacy and recency bias: if you have a list of options and you say, hey, which of the below options is your favorite,
and you've got a list of 10 items, there is a tendency for some people to either select the first option or, as they read down the list, select the last option. One way to prevent that bias is, again, randomizing those options, so that every respondent sees
a different order of that list, and that can help. And then finally, don't educate or sell anything in a survey. That is not the time for either of those; it'll take up extra space in the survey and it will bias your results. Try to make your questions really concise. Don't use any unnecessary language. I still see this a lot: "on a scale from not at all likely to very likely." You don't need to say that. You can just say, how likely are you to recommend our product, and they can see the scale. It's a little thing, but it can save a little bit of cognitive load for the respondent by not having to read that.
And the next question I always get when I show this is: how many scale points should we use? Should we use 3, 5, 7, 10? And the answer is, it depends on what analytics you want to conduct. If you want to use this as a dependent variable for a regression analysis, which some people may not love because this is not ratio-level data,
then you would probably want at least seven scale points. If you're really just trying to get a quick, minimal pulse on something, maybe you could do three: they're either not likely, they're neutral, or they're likely. So it's all dependent on what you plan to do with the data, which is why you need to have your analysis planned before you finish your survey. Okay, try to mix it up with different kinds of questions. Some of your questions will need to be written in a certain way, but hopefully you can find some ways to mix it up. Visually switch up the orientation and the presentation of the questions. If it's just a yes or no, maybe make it a horizontal yes or no rather than a vertical one.
These things may seem really simple, but again, we're really focused on data quality, and making the survey an easy and positive experience for a respondent can really go a long way. Make sure your multiple selects and single selects are programmed properly. I have taken surveys that say select all that apply, but only let you pick one.
So make sure you're quality-checking those sorts of things. And again, at the end of the day, always include an "I don't know" or a "not applicable" option if you need to, so that people always have a response they can select. Also, know that your surveys do not necessarily need to be grammatically correct.
You don't need to write a survey and send it to the copy editor. Surveys can actually be confusing if they're written in a strictly grammatically correct way. They should be written so that everyone can take the survey and no one's going to be confused by it. Being grammatically correct sometimes takes a little bit more space,
whereas not being grammatically correct can sometimes be shorter. So we just need to write it so that it's easy to read and easy to understand. It's okay if there's a dangling participle in there; just make sure it's easy. Don't use industry lingo unless your survey is going out to industry experts.
Make sure that the questions are easy to interpret. Okay, we're getting a little short on time, so I'm going to breeze through these a little bit, but I'm going to pause just here for a second. This is a really important slide. It will hopefully be easier to study if you view the recording later. But these are our four data types.
We've got nominal, ordinal, interval, and ratio. Nominal data just has a label, such as gender or marital status; there are no numbers in there. Ordinal has some sort of an order: less likely and more likely, that's an order. Interval data is going to have some sort of an absolute distance
between the options, such as two children versus three children. And then ratio data is going to have a true zero point, such as price. Depending on how we're asking a question and what data type we have, it's going to dictate what kind of analysis we can do. We can't run a regression on nominal data,
but we can on ratio data. And these are hierarchical, by the way: each of these has the components of the previous one, so ratio data is also interval and ordinal, and technically nominal. Nominal data just offers descriptions, ordinal adds order, interval adds defined distances, and ratio adds a true origin.
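To make the analysis point concrete, here is a small sketch of a regression where ratio data (age, spend) enters directly, while nominal data (region) has to be dummy-coded into 0/1 columns first. The data frame is invented for illustration:

```python
import numpy as np
import pandas as pd

# Invented toy data: spend and age are ratio data, region is nominal.
df = pd.DataFrame({
    "spend":  [120.0, 80.0, 95.0, 150.0, 60.0, 110.0],
    "age":    [34, 22, 45, 51, 19, 40],
    "region": ["west", "east", "west", "south", "east", "south"],
})

# Nominal labels carry no numeric meaning, so they become 0/1 dummy
# columns; the ratio-level age column goes in as-is.
X = pd.get_dummies(df[["age", "region"]], columns=["region"],
                   drop_first=True).astype(float)
X.insert(0, "intercept", 1.0)

# Ordinary least squares via NumPy's least-squares solver.
coef, *_ = np.linalg.lstsq(X.to_numpy(), df["spend"].to_numpy(), rcond=None)
print(dict(zip(X.columns, coef.round(2))))
```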
At the end of the day, when you've finished your survey, you need to re-evaluate whether the survey questions align with an objective, and whether you've answered all of the objectives. It's very easy, especially when multiple people are involved in providing feedback on a survey,
It's really easy to say, hey, let's ask this. Let's ask this. And let's ask this. That's a really easy way to get the survey out of control. They need to tie back to an objective. And if we determine that those questions are relevant and that we missed an objective, great. Let's add it to the research plan.
But we need to just be sure that there's always going to be some consistency there. And by the way, once the survey is launched, there's no turning back. Before you launch the survey, make sure you've got everything you need in there. Okay, some quick quality checking. Make sure someone internally other than you quality checks the survey.
Pre-test it, which means you'll have other staff members take the survey, but you might even have some target audience members take it. These would be people you coordinate with, and you would ask them to take the survey. I like to actually do this live over Zoom: they'll take the survey, I'll watch them take it, and they'll raise any issues or confusion they have as they go.
And then, when you launch it, do a soft launch first, to maybe 10 percent of the sample; then review the data, make sure everything is looking good, and fix any errors before you do a full launch. Make sure all your question numbering matches what you want it to be. Make sure any recoded values you need are set up.
If you are going to, say, take a Likert scale and turn that into a coded scale, you can do that after the fact, or you can do that inside the survey. Make sure you've got your quotas in place. Ensure that any skip logic, display logic, et cetera, is working properly. And I know this feels really basic and fundamental, but please run a spell check.
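For the Likert recode just mentioned, a post-fielding script might look like this; the scale labels and column names are hypothetical:

```python
import pandas as pd

# Hypothetical 5-point scale: map response labels to numeric codes.
LIKERT_CODES = {
    "Very unlikely": 1,
    "Unlikely": 2,
    "Neutral": 3,
    "Likely": 4,
    "Very likely": 5,
}

responses = pd.DataFrame(
    {"recommend": ["Likely", "Very likely", "Neutral", "Unlikely"]}
)
responses["recommend_code"] = responses["recommend"].map(LIKERT_CODES)
print(responses)
```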
The survey may not need to be perfectly grammatically correct, but it does need to have proper spelling. With that, that's all I've got for you. Again, probably not fully comprehensive, but hopefully this gives you the basics of what you need to know to make sure you're writing effective surveys.
And I would always be happy to help out anybody who has any questions about this as well.
Justin Luster: Awesome. You have some great skills in presenting very clearly. Thanks so much, Matt. Really appreciate it. This is all going to be recorded, so we'll be able to watch it again. And Matt's offered to share his slides. So we'll share those as well.
Thanks again. Everybody have a great day and we'll see you on our next webinar. See you guys. Bye.
Vanessa: Thanks for joining us for this episode of Research to Revenue. If you found the material helpful or insightful in any way, we'd appreciate if you'd leave us a rating. It only takes about 10 seconds and really helps us grow the podcast. Also, don't forget to subscribe if you'd like to hear more episodes like this in the future.
Thanks again for listening. See you next time.
Support: support@sawtoothsoftware.com
Consulting: analytics@sawtoothsoftware.com
Sales: sales@sawtoothsoftware.com