How to Talk About Research So Stakeholders Will Listen

Podcast

Inspiring data-driven action—it's what every applied researcher strives for. But how do you achieve it? From choosing the right data visualizations to framing your narrative effectively, every decision matters in ensuring your research resonates and prompts stakeholders to act.

As a former consultant and current research director at Red House Communications, Anna Dragotta will share a number of tried-and-true strategies for communicating research in a compelling and impactful manner.



About Our Guest(s)

Anna Dragotta, PhD, SHRM-SCP is a highly versatile consulting and analytics professional with 15 years of experience in executing start-to-end mixed methods research. She combines expertise in behavioral science, data and people analytics, and qualitative research and has helped numerous organizations set and achieve strategic goals through the use of data. She has worked with organizations both big and small across a range of industries—some of her more well-known clients include Credit Karma, CrowdStrike, Clif Bar, Environmental Sciences Associates, and more.

Anna is a summa cum laude graduate of the University of Groningen, the Netherlands with an M.S. in Applied Social Psychology. She also earned an M.S. in Social Psychology and a dual Ph.D. in Social and Biological-Health Psychology with a Minor in Quantitative Research Methodology from the University of Pittsburgh.

Anna Dragotta


Transcript

Vanessa: Welcome to Research to Revenue, a podcast for marketing research professionals who want to hear from experts, learn about methodologies, stay up to date on industry news and explore new ideas and thinking. In each episode, we will pack in as much value as possible while helping you connect the dots between research and revenue.

Now, let's jump in. Quick disclaimer before we start: this episode was originally recorded as a webinar and edited for podcast format. You can find the original webinar recording on our website at sawtooth.com or on our YouTube channel. Now back to the show.

Justin Luster: Good morning, good afternoon, good evening, wherever you are in the world. We're grateful to have you join this Sawtooth webinar and hope you enjoy it. We're here with our guest speaker, Anna Dragotta, and the title of the presentation is How to Talk About Research So Stakeholders Will Listen.

[00:01:00] I think it's a very important topic and we're excited to share it with you today.

With that, I'm going to introduce our guest speaker, Anna Dragotta. She is the director of research and analytics at Red House Communications.

She is a consulting and analytics professional with 15 years of experience in executing start-to-end mixed methods research. She's worked with clients including Credit Karma, CrowdStrike, Clif Bar, Environmental Science Associates, and more. And she has a lot of degrees: a master's in Applied Social Psychology, another in Social Psychology, and a dual PhD in social and biological health psychology with a minor in quantitative research. Wow. So with all of that, I'm excited to introduce Anna.

Anna Dragotta: Thank you so much for the warm welcome.

Like Justin said, I come to you from Red House Communications [00:02:00] in Pittsburgh, Pennsylvania. And today what I would like to do is share some insights with you about how to talk about your research.

When sharing it with your stakeholders. So really, what we're talking about is mostly the context of applied research, not necessarily academic research. So let's go on to our agenda. What I wanted to do is focus on three of the major steps in the research and data analytics process where I see a lot of opportunity to hone your skills and do a really good job presenting.

So we're going to focus on how to visualize data, how to talk about our analyses after these have been completed, and then how do we talk about the research, like once it's all completed, so the reporting phase. So let's dive straight in with insight number one. So this one I named let your insights speak for themselves by visualizing your data with intention.

And let me show you what I mean by that. Now, when you're choosing visualizations, I don't believe a visualization is ever really wrong, unless of course it shows incorrect data. In that case, yes, we can make the case that this was not a good choice. There are also times when visualizations are more right for your audience, and I believe that is when they help us answer our key question.

They answer this question clearly, and they're designed in an intuitive way. And what I mean by intuitive is that somebody can look at your visualization and understand it without having to overthink it. Like, it should really be that obvious. And there are a few things that you can do to make that happen.

And we're going to start with something simple. I think most of us here probably love a good bar chart. It's, I think, the most often used visualization out there. And here's some example data I collected. I'm going to actually show you a lot of example data, so each example will come with a graph or some notes on the analysis.

Just know that this data is either fictitious or completely anonymized to of course respect current and previous client privacy. So this visual is from data collected from people who have traveled to a specific location. And here you can see the reasons that they shared with us for visiting. So 15 percent came for business, 10 percent for educational purposes, 25 percent to visit friends and family, and 50 percent came for a vacation.

Here's some more information about these visitors. So 14 percent of them came in 2024, 55 in 2023, 22 in 2022, and so forth. And if I were to ask you, do you see a problem with these charts? The answer would probably be no, and technically that is correct, because there's absolutely nothing wrong with them. But there is a small tweak that you can make to make it easier to process information when looking at these charts. And the first one is that when we have sequential information, so information that has some kind of natural element of order to it, like time, we often find it a lot easier to see this information in the same way that we think about it.

And because we are in an English-speaking country, our sense of order is usually processed from left to right. That feels like the right order to us, because that's how we were taught to read and therefore think. So my advice moving forward would be: show any information that has a built-in sense of order to it using a vertical bar chart.

It's just easier to process. But there is also information that has no real set order to it; we can organize it however we like. It's not time, it's not something like that. And for this type of information, I would suggest using a [00:06:00] horizontal bar chart. And what you can do is sort top to bottom, like highest category at the top or at the bottom, whatever you prefer.

Because in these types of bar charts, stakeholders usually look for information about magnitude, like what is the biggest group, what are the two biggest groups. And that's easier to see when data is sorted top to bottom. And here, if we show it this way, as opposed to the way I had it before, it becomes immediately obvious that the largest group of visitors came to us for vacation purposes.

For a final touch, we can also use data highlights. Say we want to highlight vacation visitors from 2023, because maybe we have a research question about them, they're a prime focus of our research. What you can do is you can add a touch of color to highlight only what we're interested in, which is another tactic that you can always use when visualizing your data.

Use color to create clarity and to draw attention [00:07:00] to what needs to be seen. Now, because we're on the topic of bar charts, a question I have often been asked is how to visualize multiple bar charts. Usually we don't just have one or two; it depends on how many demographics we've collected, how many survey questions we have.

We can go nuts here, like maybe we have a hundred questions. And let's say in my visitor data set that I just showed you some examples from, I asked people to rate how much they enjoyed, say, four different activities, on an agree-to-disagree scale. And many people will create a visual like this one.

I have seen hundreds of these, have made many myself. Is there anything wrong with it? Not really. However, it does get a little confusing when you're trying to show four, five, ten, maybe twenty of these. What is the takeaway?

When we look at this page, we could use [00:08:00] color here, like we just discussed, to highlight what information is more relevant, what we should be focusing on. But even with this type of change, it's hard to immediately see which one is the most preferred activity, especially because some of these percentages are really close to each other. The red bars are so far from each other that it becomes a little hard to mentally put them next to each other.

There's just too much to look at. So in these cases, what I suggest is using stacked horizontal bar charts, which, when we think about it, are really a hybrid between horizontal and vertical bar charts, because they allow us to do two things at once. They show ordered data, so our agree response scale, in an intuitive left-to-right format, which we just agreed is a more intuitive way to show this type of ordered, sequential data. And they allow us to sort the questions top to bottom based on strongly agree responses, so that it's immediately obvious which activity is the most versus the [00:09:00] least preferred. So this is a nice win, and if you have a lot of bar charts, this will save you a ton of space too.
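As a rough sketch of the logic behind such a chart, here is some Python that sorts Likert questions by their strongly-agree share and computes the cumulative offsets that a stacked horizontal bar chart needs. The activities and percentages are invented for illustration, not data from the webinar:

```python
# Sketch: prepare Likert data for a stacked horizontal bar chart.
# Activity names and percentages below are fictitious stand-ins.

LEVELS = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

# Percent of respondents choosing each level, per activity (each row sums to 100).
responses = {
    "Hiking":  [5, 10, 15, 30, 40],
    "Golf":    [20, 25, 20, 20, 15],
    "Skiing":  [30, 25, 20, 15, 10],
    "Skating": [10, 15, 20, 30, 25],
}

# Sort questions top to bottom by the "Strongly agree" share so the
# most preferred activity is immediately obvious.
ordered = sorted(responses.items(), key=lambda kv: kv[1][-1], reverse=True)

def stack_offsets(values):
    """Cumulative left offset of each segment -- what a plotting
    library's stacked barh() call needs for the `left` argument."""
    offsets, total = [], 0
    for v in values:
        offsets.append(total)
        total += v
    return offsets

for activity, values in ordered:
    print(activity, list(zip(LEVELS, values, stack_offsets(values))))
```

Fed into any charting library (matplotlib, Plotly, or similar), the sorted rows and offsets reproduce the "hybrid" layout described above: ordered scale left to right, questions sorted top to bottom.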

Now a few more things while we're on the topic of visualization. Usually, our eyes as humans are drawn to whatever is appealing, whatever stands out. So because of that, we want to try and stay away from too many tables. Tables definitely have their place, especially in an appendix, but in the main body of the text.

We don't want to come with a hundred tables; it's really hard to see what needs to be seen. So let's say we have a table here about, hey, where did some of our visitors come from? There's nothing wrong with a table, but we could also do something like this: we can draw a map if we have access to software like Tableau or Power BI; there are others out there.

And what you're doing here, if location is really important for your report, is you engage your stakeholders' minds by engaging their eyes. You give them something interesting to [00:10:00] look at, and their eyes are drawn to it. Other times, our data has some obvious characteristics to it. So here you see a number of bars.

And what immediately stands out is that one of these bars is a lot larger than the others. Now, you can make another bar chart like I did here, or you can choose to use this property of the data because maybe you already have 10 bar charts and you want to mix it up a little bit. So let's say here we had a select all that apply kind of question that asks people what they mainly look for in a weekend getaway location.

So instead of a bar chart, we can use a variation of, for example, a bubble chart, where we work with this property of the data, that one part of it is so much larger than the others. And if we do that here, the visual makes it very clear that more than half our visitors or participants had a certain opinion, [00:11:00] chose a certain answer.

And if we color that differently too, that really stands out. And my final advice for how to present visualizations is that we can mix up some of our information. So say you have to make a one-pager with a few important characteristics, a few important demographics, or other information. So we want a one-pager for our stakeholders.

We could have done this with a series of bar charts or tables. But if you mix it up, it attracts interest. It makes people often pause for a second. And because you have done a nice job here, going from left to right, top to bottom, you have given each visualization its own color, with bold, color-coded key insights.

Those few seconds that their attention will be drawn to this make it more likely that they will digest some of your insights too. So it really is a nice win. Now let's move on to the second [00:12:00] part, which is about data analysis. And this really is about sharing the results of our analyses.

This is not a statistics webinar, so we're not going to talk about what the best techniques might be. But a question I am often asked is how do we analyze our data in the way that feels right to us and get our message across clearly, but without getting lost in the weeds. And this is an important point because I am sure that many of you here have been trained very thoroughly in data analytics, and that is great.

It gives you a big toolbox to work with, lots of analyses that you can use to answer your questions. However, at the end of the day, our stakeholders, especially in an applied research context, want to walk away with an actionable insight. So it's up to us to translate what we did to plain language, or at least something very close to that.

And I have a few examples of how we could do that. And because [00:13:00] this is a Sawtooth webinar, I wanted to start with something I am sure many of you have used, which is sharing out the results of a MaxDiff analysis. So let's say in this case we did an anchored MaxDiff, asking people about their favorite activities while traveling.

And we collected our data, we did our analysis, or better yet, we let Sawtooth guide us there and give us the utilities and everything. Now the question is, what do we do with our results? Because utilities are not necessarily the most intuitive statistic for stakeholders to understand.

So how do we show them in a way that makes sense? And here's one way in which, I've found in my experience, people like seeing these types of results. It's an easy-on-the-eyes visual that doesn't actually show the utilities. It actually uses order and color to, in this case, show us that people like hiking the [00:14:00] most and skiing the least, at the bottom.

And the reason I chose not to show the utilities is that, like I said, many people aren't familiar with them. They're not the most intuitive statistic out there. And it can be a distraction when somebody starts wondering: you have a score of 220 here, how is that different from 190 or 210?

So instead I put the utilities in an appendix or a more detailed methodology section, and I made this my main image because it gets the point across very clearly. Know that you can, of course, also use a bar graph for this. Sawtooth has its own built-in bar graph. But I have found that we usually consume bar charts with a label attached to them.

So a lot of people have come to expect a label with a bar chart, and you might get some questions if you do choose to use the bar chart with the utilities. Instead, I suggest something like this. The nice thing here is that because this was an anchored MaxDiff, [00:15:00] we can draw our boundary line to see which activities people wouldn't necessarily do.

So in this case, we see that on average, our participants wouldn't necessarily go climbing or skiing. And what we also see here is that they may not be all that fond of skating compared to the other activities at the top that have a dark green color. However, most of them would still do it, because it's above the boundary line.

But I don't want to stop there, because there are other exciting things that we can do when we are talking about MaxDiff analyses. So since this is an anchored MaxDiff, say we also asked people, like you see in the example here, what activities they actually did while they were at the destination.

And now we know what they like, because we have their utility scores, and we also know what they did, and that really is a goldmine of information. Because let's take a look at what our data set might look like. So here we see [00:16:00] participant number one, one of our participants. This person has a utility score for playing golf that is well over 100.

So I think we can safely assume that participant number one likes golf and would probably play it if given the opportunity to do so. And the good news for us is that this person actually played golf at our destination. So this is great, it's a win. However, say we had a second participant, and participant number two here also seems to really like golf, because their utility score is also really high.

However, this participant did not play golf while visiting our destination. So we have what we call a missed opportunity to engage them. And what that allows us to do is go back to our original graph and make something like this. We can give our stakeholders a clear understanding of where we missed some opportunities to engage our visitors.

So here I'm showing that in our fictitious example, 39 percent of [00:17:00] visitors like golf, probably would play it elsewhere, but did not play it at our destination. And I calculated that the same way that we just went through with those two fictitious participants. And this really is the type of statistic that is interesting to our stakeholders, because it gives us a clear sense of where action is needed.

It's also really clear that if this had been 5 percent instead of 39, we might not care about that 5 percent that much. That's pretty good, because it means that in 95 percent of the cases we did a pretty good job. But 39 is almost half our sample here, so that's not really good.

So it can guide our action. It's the kind of number that speaks for itself. So percentages are good. Ideally, we want to deliver a metric like a proportion or a percentage, because it's not just that they're so easy to understand; it's also that most benchmarks and KPIs, key [00:18:00] performance indicators, are built based on percentages.
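The missed-opportunity calculation described above can be sketched in a few lines of Python. The utility scores, the anchor threshold of 0, and the played/didn't-play flags are all made-up illustrations, not the webinar's actual data or Sawtooth's output format:

```python
# Sketch: the "missed opportunity" metric from an anchored MaxDiff.
# Fictitious data; in an anchored MaxDiff, a utility above the anchor
# means the person would plausibly do the activity.

ANCHOR = 0  # illustrative anchor boundary: above it = "would do this"

# (golf utility, played golf at our destination?)
participants = [
    (220, True),   # likes golf and played it -> engaged
    (190, False),  # likes golf, did NOT play it -> missed opportunity
    (150, False),  # another missed opportunity
    (-80, False),  # doesn't like golf -> not an opportunity at all
    (130, True),
]

likes_golf = [p for p in participants if p[0] > ANCHOR]
missed = [p for p in likes_golf if not p[1]]

missed_pct = round(100 * len(missed) / len(likes_golf))
print(f"{missed_pct}% of visitors who like golf did not play it here")
```

With these toy numbers the metric comes out to 50 percent; the talk's 39 percent figure was computed the same way over its fictitious sample.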

But sometimes we don't really have a percentage to share. So what do we do then? Here I want to give you a different kind of example, and we're going to step away from visitation data, just in case you needed a break from that.

Let's go into some of my consulting practice in the past. So our team was once hired to measure people's experiences in their organization, but there was a bit of a complication, because this was a really large organization and people could interact with it in three different ways. So there was a global level, where people attended events like conferences and other large events multiple times a year.

They were able to attend as a participant, they could actually be a speaker, they could help organize these events. So there was a lot of interaction. There was also what they called a local [00:19:00] level, where participants attended mixers, talks, and other events closer to home.

So if they lived in the United States, it would be in their city, or at the very least in their state. This would be a drivable event. And they also had a small group level, where they held regular meetings with the same people, like every couple of weeks. So these people got to know each other pretty well.

And the organization, what they wanted to know is: how do these members feel about our organization? And that's what we measured. So among a list of outcomes, we measured their felt belonging: I feel like I'm a part of this, I feel like I'm supposed to be here, I feel like I belong. We also measured the extent to which they felt involved in decision making.

And also how fairly they believe this organization treated them. And we called this composite of outcomes positive member experience. And finally, as if this wasn't complicated enough, the organization wanted to know if women experienced the organization [00:20:00] differently than men. Unfortunately, we did not have enough gender-expansive members or employees to include in our analysis.

So my focus for this example will be on women. And we wrote a survey for this client that looked something like this. We asked our survey questions to measure things like belonging, all these concepts that we just talked about. And they answered each question three times to reflect the three organizational levels.

So this is the type of survey that we wrote: the same question three times, and multiple questions to reflect the multiple outcomes. And this type of data is complicated, because this is what our design looks like. We have multiple dependent variables, like belonging. We have at least one variable that differs between participants, in this case gender.

And we also have a variable that repeats itself within each participant: the three levels of the organization. So that is a lot, and on top of [00:21:00] that we need to figure out: do we see meaningful differences based on organizational level, based on gender, and so forth? Now, again, this is not a statistics webinar, so we're not going to go into the details of what methods I used, but I do want you to recognize two things.

The first one is that it is our job as researchers to share the results accurately. Okay, so if we find differences between organizational levels or between men and women, we don't just need to share the result; it needs to be clear how meaningful these differences are. So some kind of percentage, or a metric that's easy to understand, would be ideal here. But secondly, this type of data usually calls for an approach that analyzes continuous data and doesn't spit out some kind of percentage.

Now, we could choose to dichotomize our data. So we could take a participant's response and categorize it into high versus low for each question, and we could analyze these categorical variables. But if we do that, it can lead to loss of information, and it could be a misspecified model. [00:22:00] So sometimes it really is better to stick with a model from the general linear model family.

And that's exactly what we did here. And what we ended up finding is an interaction between member experience and organizational level, which was mostly driven by member belonging. But now the question is, how do we show this? These are some averages for member belonging that you see between men and women at the global, local, and group levels, and so forth.

But what is the true difference between a 3.0 and a 2.5, or a 4.5 and a 4.4? Are these small, large, are these meaningful? We could solve this by, for example, including effect sizes: we could include the r squared, we could include something like the partial omega squared or partial eta squared.

Maybe we also show the unstandardized B, or the beta. I'm not saying to not report these statistics. As a matter [00:23:00] of fact, I think you should, and I typically do. I just put them in an appendix or a detailed methodology section. But if this is the main visual that our client sees, our job is to communicate our message fast and with clarity.

And I'm going to show you one possible way in which you can do this. We are going to dichotomize people's responses. However, we're going to do so after the fact. So we're still going to run our regular analysis that we determined was the best for our data, because this was our job. But we're going to show our stakeholders proportions, instead of the averages I just showed you on the previous slide.

And what we're going to do specifically is that every score that falls into the agree or strongly agree range of the scale that we measured on, we will interpret as that person feeling like they belong, because right now we're looking at the mean belonging score. So basically, what we want to do is this.

Code every person for whom we feel confident that they belong as: yes, [00:24:00] this person belongs. And maybe for your data, that means a score that falls in just the strongly agree range. It doesn't matter what your cutoff is. What matters is that you're consistent, that you do the same across responses and across participants.

And what you can do then is use these new variables to create a graph like this one here. This graph actually follows our averages very nicely. You will find that it typically does, especially if you use model-estimated marginal means. Of course, your model has to be well specified if you're going to take this approach. And what this shows us very clearly is that women experience lower belonging than men at every level of the organization, but that this gap begins to close as we move closer to the group level.

In fact, we don't really see a meaningful gap once we're at the group level. And how do we know it's not meaningful? It's a variety of factors. It's not statistically significant, so we still only call out the differences that our analyses [00:25:00] told us are statistically significant. The effect size, when we look at that, is pretty much nil.

And here our eyes can easily tell us that it's not really that big of a deal just by looking at the percentage. Whereas if you look at the global level, for example, only about half of women there feel like they belong. And even though men's score isn't that high either, it's 65%. At the very least, we can say that over half of men feel like they belong at that level.

One final note here: if you choose to do this with your data, I would say make sure to still report your full method and the actual averages, both the raw data and the estimated marginal means, somewhere in the report, because of course we always want to be transparent. So this is not in place of a thorough analysis.

This is just a way to communicate our analysis more clearly. And I think of this as a great win in all the ways that we have used it, because you still use the analyses that you, as the expert, thought were best to answer [00:26:00] your questions, but you also chose to present them in a way that is highly digestible and therefore actionable.
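Here is a minimal Python sketch of that after-the-fact dichotomization. The scores, group labels, and the cutoff of 4 on a 1-to-5 scale (agree or strongly agree) are all invented for illustration; the point is simply coding each response as belongs/doesn't-belong and reporting proportions per group:

```python
from collections import defaultdict

# Sketch: dichotomize Likert responses *after* the main analysis so the
# report can show proportions instead of raw means. Fictitious data;
# the cutoff is one possible choice -- what matters is consistency.

CUTOFF = 4  # agree (4) or strongly agree (5) counts as "belongs"

# (gender, organizational level, belonging score on a 1-5 scale)
scores = [
    ("woman", "global", 3.2), ("woman", "global", 4.1),
    ("man",   "global", 4.5), ("man",   "global", 4.0),
    ("woman", "group",  4.6), ("woman", "group",  4.2),
    ("man",   "group",  4.4), ("man",   "group",  3.9),
]

counts = defaultdict(lambda: [0, 0])  # (gender, level) -> [belongs, total]
for gender, level, score in scores:
    counts[(gender, level)][0] += score >= CUTOFF  # True adds 1
    counts[(gender, level)][1] += 1

for (gender, level), (belongs, total) in sorted(counts.items()):
    print(f"{gender} / {level}: {100 * belongs // total}% feel they belong")
```

As the talk advises, these proportions would go in the main visual, while the full model, effect sizes, and estimated marginal means stay in an appendix.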

Let's do another perception example, because sometimes we do actually have clear categorical data, but even then, the results of our analyses may not always be very intuitive. So let's say here in our example, we assessed the relationship between organizational turnover and organizational culture as perceived at the individual level, so by surveying a number of employees.

For the outcome variable, organizational turnover, what we did is we asked people if they were actively applying for jobs. So the answer is a clear-cut yes or no. Our independent variable here was the rating of organizational culture, which could be low, medium, or high.

And I hope you can see here that we can easily run a binary logistic [00:27:00] regression, because our outcome variable is just yes or no. And we can control for relevant variables in this analysis, maybe tenure, organizational size, other things. We can make it a multi-level one if we're measuring perceptions in different organizations or different departments.

So there are a lot of things that you can do with this type of data. And maybe if we do that, we find that employees who rate their organizational culture poorly are 25 times more likely to be applying for new jobs than not, compared to employees who rate their culture highly. So this is a statistic, and I hope you can see that from the language, that is derived from an odds ratio, which is what a logistic regression gives us.

It's the exponentiated B from the analysis. It's the main statistic, and it's great, it's easy to understand. However, I just want to take a quick look behind the scenes to remind ourselves what [00:28:00] an odds ratio actually does. Because it takes the odds that people will apply for a job versus not apply for a job under both culture ratings, in comparison to each other.

So here we see that 50 people who gave their organization a low rating said they were applying for jobs and only 10 said they were not applying. And then we see the opposite for those who gave their organization a high rating. And that means that the odds that people are applying versus not applying under a low culture are 50 over 10 and 10 over 50 under a high culture rating.

The 25 times just tells us that the odds in one condition are 25 times higher than in the other. However, we don't necessarily tend to think in odds. And often, when you supply an analysis like this, stakeholders will want to see a graph, like a bar chart, to see the proportion of employees in each culture rating group.

So here we see the low, the medium, and the high culture ratings [00:29:00] and the percentages in these graphs, like what percentage of employees are applying for a new job under each of these culture ratings. Now, do you see the problem with having the odds ratio next to the bar graph? The problem is that the 25 times is not very easily calculated out of this graph. So it's not a huge problem, but worst case scenario, your stakeholders might see that and say, wait a minute, 25 times? That doesn't seem right. And they might think that you made a mistake. They might start scrutinizing your numbers and therefore completely miss the larger point that you're trying to make: that this organization has a big problem.

So here's something that you can do instead. I would say in these cases, if you have to present binary logistic regression results next to proportions.

So here's something that you can do instead. I would say in these cases, if you have to present binary logistic progression results next to proportions. Give them a relative risk ratio instead of an odds ratio. And this is usually how we tend to think, most of us, unless we're [00:30:00] gamblers, or something like that.

We don't necessarily think in odds. We think in terms of proportions, and that's what's happening here. So a relative risk takes the 83 percent we saw under the low culture rating and compares it to the 17 percent we saw under the high culture rating, and tells us how much larger the 83 percent is compared to the 17 percent.

So in this case, that's five. And here it is. Okay, so now we put the five times more likely next to the bar graph, and the information flows a lot more intuitively, because there is no disconnect between the two. One thing I would say if you choose to go this route: always take the model-predicted proportions, not the raw proportions, because otherwise some slight discrepancies might arise there.

But that also means your model always needs to be well specified. The right variables need to be in there. We [00:31:00] can't have a huge variation between the raw data and the model-predicted data.
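To make the odds-ratio versus relative-risk distinction concrete, here is a quick Python sketch using the fictitious 2x2 counts from the example: 50 of 60 low-culture raters applying for jobs, versus 10 of 60 high-culture raters:

```python
# Sketch: odds ratio vs. relative risk from the talk's fictitious 2x2 table.

applying_low, not_applying_low = 50, 10
applying_high, not_applying_high = 10, 50

# Odds ratio: what a binary logistic regression's exponentiated B reports.
odds_low = applying_low / not_applying_low     # 5.0
odds_high = applying_high / not_applying_high  # 0.2
odds_ratio = odds_low / odds_high              # ~25

# Relative risk: a ratio of plain proportions, which matches what a
# bar chart of percentages shows the stakeholder.
p_low = applying_low / (applying_low + not_applying_low)      # ~0.83
p_high = applying_high / (applying_high + not_applying_high)  # ~0.17
relative_risk = p_low / p_high                                # ~5

print(f"odds ratio: {odds_ratio:.0f}x, relative risk: {relative_risk:.0f}x")
```

Same table, two honest statistics: "25 times" (odds) versus "5 times" (proportions). Only the second can be eyeballed from the 83 percent and 17 percent bars, which is why it sits more comfortably next to the chart.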

So these were some tips, some things to think about when it comes to how we share the results of our analyses, especially when they're a little bit more advanced. And now we have arrived at point number three, which is: okay, now we have our data, we have analyzed it, we have visualized whatever needs to be visualized, but what do we do with the data?

And luckily for us, most of us actually learned through stories, so we're going to use our data and we're going to tell a good story. And when we think about a story, they usually have a beginning, middle, and end. And we can translate this structure, the beginning, middle, and end, to our data reporting in multiple ways.

And I actually want to show you my favorite approach; there are multiple out there and we don't have all the time in the world, so this is my number one. And this is the what, so what, and now what approach. And I really [00:32:00] like it because it gives us an opportunity to connect multiple pieces of data. So for example, I'm a mixed methods researcher, and what that means is that I use a combination of quantitative and qualitative data, data that I collect on my own.

I rely on existing data. So that's a lot of puzzle pieces to put together. Okay. And what this framework does is give us a very nice way to connect these different puzzle pieces and still tell a very coherent story. So let's dive in. The what and the so what parts of our framework are where we're going to use all the data that we collected.

Okay, so this is the part where we talk about what we found. We start once we have analyzed each source independently: if you have all these different sources, all these different puzzle pieces, we look at them one at a time, and each gives us an insight. We need to look at them one at a time to make sure that we're not influencing ourselves, as in: I found something [00:33:00] here, let me try to confirm it in a different source.

Once you've done that, it will actually become very clear that some of your data describes what the current situation is like: what are we dealing with, what are we working with, what's your landscape. And that is your what part. And there will be other data that will tell you why what we're seeing matters, what the consequences are, really why we need to care about this and why it is important.

And this is the so what part of your story. And then, provided we don't like this trajectory that the what and the so what parts showed us: what can we do to make a change? But there is something important. The research is often partly under our control, for example, if we have been given the green light to write our own survey or interview our stakeholders. There will usually be a piece that is under our control as a researcher.

We really want to think about each of these three questions beforehand, [00:34:00] because we can't answer any of them effectively, especially the last one, the now what part, if we don't have anything specific to share. What we really don't want is to deliver something that feels big. So in other words, what we want is to collect specific enough information that it will guide us toward a solution.

And let's dive into another example to make this a bit more concrete. So let's stick with this employee perception example within an organization. What we want to do is focus on specific perceptions and behaviors that we know are connected to important outcomes like employee retention. If you're doing this type of work, one of the key outcomes is employee retention, along with employee performance and other things like that.

So we're not just going to measure whether people like the organization, because that would tell us nothing. Say we found that half the people don't like it here. How do we fix that? The first question that your stakeholders will [00:35:00] ask, and rightfully so, is: why don't they like it here?

What is it about us? And here is your opportunity to anticipate that. So design with this why in mind, because this is often under your control. And now let's go back to the beginning, our what part of the story. So the beginning of your story will be broad; in general, beginnings often are. Maybe we found that 65 percent of our surveyed employees are not satisfied with the organization and rated it poorly.

That's a pretty large percentage, not what we want to hear. Okay. Now that we know this, tell us why. And maybe a key driver analysis on our survey data shows that employees are especially dissatisfied with our performance reviews. Maybe only 20 percent of employees believe that their performance is evaluated fairly, and that's not good.

Here's your chance, while we're still in the what part, to bring in multiple data sources. Say you also have focus group or interview [00:36:00] data, or you have open-ended survey responses, and these actually show similar results. Here's your opportunity to bring in some anonymous quotes. And maybe you saw that multiple people commented about a lack of performance feedback, and you select this quote here:

"We don't have official reviews, and it's completely up to my manager if and when I get feedback." Okay, now it's time to move on to the so what part, which again is all about the consequences, about why we care. It's your job now to show us the consequences of this lack of performance feedback, and we have multiple options here.

Maybe you can show us a consequence from your own survey data, if you measured a consequence-type variable. Maybe you found that employees who rate the organization poorly are more likely to be on the lookout for new jobs. We don't really like this. Maybe you also have more data from your focus groups.

And maybe multiple people here said that the lack of feedback is hurting their development, and maybe they're thinking [00:37:00] about leaving as a direct result of that. This is also a really good point to mention that you can bring in secondary research; many of us rely on both primary and secondary research.

So if you found sources citing that performance feedback is key to employee satisfaction, and that employee satisfaction is linked to retention, here would be the place to mention that. But then comes the question: how do you put it all together? And what I wanted to show you is a script, really a way of thinking about your data, that I have always found very helpful for connecting all these different puzzle pieces.

And what I suggest you do is start with your observation, which is the what part; start broadly and then go into the why, tell us why it matters, which will be the so what part; and highlight your conclusion, so end with the bottom line. And then, before showing this to your stakeholders, your clients, flip it.

So take your conclusion, put it at the top. [00:38:00] Your conclusion often makes a really good headline. And this is a format that you can easily present to your stakeholders, because it's short, it's clear, and it also communicates a sense of urgency: now we understand what the problem is and what's going to happen if we don't do anything about it.

And in this fictitious data set, we don't like the direction that we're heading. We don't want to lose our employees, and especially not the good performers. Okay, so that brings us to the now what part. Now we really understand what the problem is. And if you did your job well, this should really be the easy part, because it should be crystal clear how to solve the problem.

Because really, what we're going to do is solve the problems that you showed, through your analysis, were problems. And because we understand why we have a problem, the solution should flow pretty intuitively. For example, sticking with what we just saw: if your largest area of opportunity was performance appraisals, and you started your report with that, you dedicated a [00:39:00] whole section to that finding, it should now be reflected in the now what part.

And you could, and I highly recommend that you do, actually add some hierarchy to your recommendations to separate the must-dos from the nice-to-dos. For example, your key driver analysis, if you did one, will be very important here; it will help you separate out the key drivers of, for example, employee discontent or employee turnover.

So build in a hierarchy. And if you're a researcher, a data analyst, a data scientist, maybe your work stops here. But if you're a subject matter expert, here would also be the place to give your recommendations on how to address the issues that you found. In that case, it would not be enough to say we need to fix the performance appraisal problem.

You're going to be expected to separate your recommendation into strategies, which is the goal, like we're going to improve employee retention, and then the tactics that will help you achieve [00:40:00] those strategies. So here you have it: a clear framework to tie it all together and tell an impactful story. I have had lots of good experience with this.

I know plenty of people who have started using this, and I have heard great feedback, so I hope it resonates with you. I hope there was something in this presentation that you found valuable, that maybe you could implement in your own research and/or data analytics practice. Thank you for listening. Really, thank you for having me here.

Justin Luster: Thank you, Anna. You did such a great job, and it's such an important topic. It's always hard, in my opinion, to present over Zoom instead of seeing the live people. I would much rather present to a live group that I can see and interact with, but you did a great job.

Anna Dragotta: Thank you.

Justin Luster: And it's just so important. Your research does not matter if you cannot communicate it properly to the stakeholders, and I totally believe that. Can you tell us a little bit more, [00:41:00] just briefly, about Red House? Just a brief overview, so everybody knows what your company does.

Anna Dragotta: Of course. So we are located in Pittsburgh, Pennsylvania. We're really a full communications agency. So we do marketing, we design creative. Where I come in is I help clients test their creative, so conjoint and MaxDiff analyses are really big in my practice. We also do full-service research; I write surveys to help people understand whatever topic they have questions about, whether that's tied to media or not. We do buy media. We do full-fledged campaigns. My department analyzes, estimates the impact of some of our campaigns. So that's a lot of what we do.

Justin Luster: Anna, we really appreciate your time, and we'll wrap it up. Thank you so much, everybody, and we will see you next time.

Anna Dragotta: Okay. Thank you again.

Thank you so much for your [00:42:00] time.