Measuring Change: Why Survey Design Matters in Evidence-Based Training

Kognito is a health simulation company. Why do we implement surveys?

Surveys are a means for our clients to see data on what users are learning from our simulations. They also give clients confidence in what they’re implementing, since they have chosen an evidence-based training. If we could, we would sit down with every single user and conduct a semi-structured interview. In reality, technology gives us a mechanism to gather data that delivers the same value in a scalable, accessible, and time-efficient way.

Surveys are an assessment tool. They’re critically important to our work because the data they give us is the only way to gauge the impact of our simulations. The work that goes into producing our simulations would be for naught if our clients had no mechanism to measure change among their users. Survey data also allows us to substantiate the claim that a simulation is an evidence-based training tool. And the more data we have, the more statistical power we have in our analyses.

We care about the impact our products make because our clients expect to see change after implementing an evidence-based training. They should be able to demonstrate that they are making progress toward changing attitudes and behaviors in their organization. And this impact should be sustained over time, rather than diminishing shortly after the training ends.

With our At-Risk simulations for higher education, for example, we want to know whether users are more likely to recognize signs of distress in others or in themselves after taking a simulation. When we conduct data analysis, survey responses help us evaluate progress toward our clients’ overall goals, which in turn helps them justify the investment.

Have surveys always been a component of Kognito?

Yes. At Kognito, our clients can access survey data practically instantly because our surveys are built directly into our online simulations. Our survey tool integrates them seamlessly into the user experience: when a user logs in to complete a simulation, they see a survey first and then enter the simulation.

We developed this integration intentionally because in-person training presents limitations for research. With in-person workshops, for example, people are more accustomed to paper surveys administered only after a training is complete. This means clients get limited data on whether a training was effective, and they must wait until the data is manually entered and analyzed. All of this slows down evidence-based research.

How have Kognito surveys evolved?

It’s imperative that our surveys stay current in a couple of ways. First, we always want to make sure that we remain an evidence-based training, meaning that the measures we use are effective at evaluating the impact of our simulations.

Second, we want to keep a finger on the pulse of what matters to our clients. We add measures to our surveys based on the written feedback we receive. In the K-12 space, for example, school safety is a topic of growing importance, so we’ve added survey questions on school climate, student-teacher rapport, and stigma to align with our clients’ goals and objectives.

Regardless of new topic areas and the breadth of simulations we offer in PK-12, higher education, and healthcare, one thing does not change: we always return to attitudinal constructs. The psychology literature has shown that knowledge does not predict behavior; attitudes predict behavior. This has become my mantra. Measuring attitudes is critical to our work because the end goal of every simulation we make is to impact behavior.

Kognito conducts surveys for all users before, after, and two months after they complete a simulation. Why do we follow this structure?

Because of our online integration, we can administer surveys at various points in the user experience, as opposed to paper data collection, which may occur only once in other methods of training. We need a starting point for what we are measuring: where a user stands before taking our simulation. This pre-survey gives us a sense of current attitudes and behaviors. For example, how many students in psychological distress are you currently identifying? This gives us a baseline against which to compare later measures.

Then, users complete a post-survey after they have finished a training simulation. This tells us the immediate impact of the learning experience. For example, did the simulation change your preparedness or self-confidence in addressing the topic you just learned about? Data points like these are important for building evidence-based research.

The follow-up survey, which is emailed to users two months after they complete a simulation, shows us whether the original pre-post changes are sustained over time. This data is valuable for demonstrating the impact of the simulation beyond the moment of completion, and it gives clients confidence that the training was worth the investment.
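To make the pre-post comparison concrete, here is a minimal sketch of how change on a single survey measure might be quantified. The scores, the paired t-test, and the effect-size calculation are illustrative assumptions, not Kognito’s actual analysis pipeline.

```python
# A minimal sketch (not Kognito's actual pipeline) of quantifying
# pre-to-post change on one survey measure. Scores are hypothetical
# and assume matched pre- and post-survey responses per user.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses ("I feel prepared to ...")
pre  = np.array([2, 3, 2, 4, 3, 2, 3, 2, 3, 4])
post = np.array([4, 4, 3, 5, 4, 3, 4, 3, 4, 5])

# Paired t-test: the same users are measured before and after.
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d for paired samples: mean change / SD of change scores.
change = post - pre
cohens_d = change.mean() / change.std(ddof=1)

print(f"mean change = {change.mean():.2f}, t = {t_stat:.2f}, "
      f"p = {p_value:.4f}, d = {cohens_d:.2f}")
```

With real sample sizes, the same pattern extends to the two-month follow-up scores to test whether the gains hold.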

What do we measure in our surveys?

Our surveys are structured in part around Kirkpatrick’s four levels of evaluation, a model for examining the impact of training.

Level 1 is about satisfaction: would you recommend the simulation to others, and how would you rate it? Level 2 is about the impact on knowledge, skills, and attitudes. Level 3 is about behavior, which we measure by comparing baseline surveys to follow-up surveys; it looks at how well people are putting their learning to use. Finally, Level 4 looks at how the simulation has impacted the organization. In Kognito’s case, this can translate into contributions to school retention rates, the number of patients screened for mental health and substance use disorders, or a reduction in overall stigma.
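To sketch how individual survey items line up with these four levels, here is a small illustrative mapping; the item wordings are paraphrased from the examples above and are not Kognito’s actual instruments.

```python
# Illustrative mapping of survey items to Kirkpatrick's four levels.
# Item wordings are paraphrased examples, not Kognito's instruments.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", ["Would you recommend the simulation to others?",
                     "How would you rate the simulation?"]),
    2: ("Learning", ["How prepared do you feel to discuss this topic?",
                     "How confident are you in your ability to help?"]),
    3: ("Behavior", ["How many students in distress have you approached?"]),
    4: ("Results",  ["Organization-level outcomes, e.g. retention rates",
                     "Number of patients screened for mental health"]),
}

for level, (name, items) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}):")
    for item in items:
        print(f"  - {item}")
```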

When it comes to Level 2, we developed an 11-item assessment tool, the Gatekeeper Behavior Scale, that measures the attitudinal constructs of preparedness, likelihood (or behavioral intent), and self-confidence/self-efficacy. This validated tool measures the impact of our online gatekeeper training programs on behavior. Again, the focus is on attitudinal changes because attitude predicts behavior. Clients who license our simulations want to see changes in behavior, and the surveys are designed so that the data shows them how behavior changes, and by how much.
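Scoring a multi-construct scale like this typically means averaging item responses within each construct. The sketch below shows that pattern; the assignment of the 11 items to the three constructs is hypothetical, chosen only to illustrate the scoring, and the responses are made up.

```python
# A minimal sketch of scoring an 11-item, three-construct scale.
# The item-to-construct assignment below is hypothetical, used only
# to illustrate the scoring pattern of a scale like the GBS.
from statistics import mean

# Hypothetical responses to items q1..q11 on a 1-5 Likert scale.
responses = {f"q{i}": score for i, score in enumerate(
    [4, 3, 5, 4, 4, 3, 4, 5, 3, 4, 4], start=1)}

SUBSCALES = {
    "preparedness":    ["q1", "q2", "q3", "q4"],
    "likelihood":      ["q5", "q6", "q7"],   # behavioral intent
    "self_confidence": ["q8", "q9", "q10", "q11"],
}

# Each construct's score is the mean of its items' responses.
scores = {name: round(mean(responses[i] for i in items), 2)
          for name, items in SUBSCALES.items()}
print(scores)
```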

In our surveys we also measure self-efficacy, which reflects how confident you are, overall, in handling situations in life. We also measure what’s called means efficacy, a relatively new construct that captures how confident you are in the tool you’re learning from. We measure means efficacy because the more confident you are in the tool, in this case the simulation, the more likely it is to change your behavior. If you have high means efficacy, you think the tool is useful, well-constructed, and easy to use, and that its scenarios are realistic. These attitudes about the tool make behavior change more likely.

What types of questions do we ask in our surveys?

Besides the quantitative side, our surveys also include a qualitative component. It’s very important to give users the opportunity to comment, in their own words, on the impact of the simulation they just completed: what they liked, what they would improve, and so on. We also ask: knowing what you know now from this simulation, can you think of an example of how you could apply what you’ve learned to a situation from your past? And in the follow-up survey, we ask: since completing the simulation, can you give us an example of how you’ve applied what you’ve learned?

“I met with a student who expressed that the concussion she had initially been diagnosed with had been re-diagnosed as extreme burnout…I think the training made me much more aware of how many people really perceive seeking help as a weakness – when in my own experience I have always viewed it as a powerful gift to myself and those around me. I am now much more aware of the importance of helping students see this as both an opportunity and a strength.” – University faculty member

Finally, depending on the topic of the simulation, we incorporate questions tailored to specific populations. In Veterans on Campus for Faculty and Staff, the survey addresses not only gatekeeping skills but also military cultural competency. In this case it’s important that we capture the simulation’s impact on knowledge of, and attitudes toward, veterans as they navigate student life.

What happens to the data that Kognito collects?

It’s very important to understand that all the data is stored on a secure server, and that surveys contain no identifying information linking a response to the person who completed it. Once anonymity and secure access are ensured, the data can be aggregated, which allows us to run a variety of statistical analyses on those aggregate numbers. Some of this de-identified data is then made available to our clients. They can see the volume of users at their institution who have completed a simulation, and for that group we provide the impact on attitudes and behaviors in an insight report.
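As a simple illustration of this aggregation step, here is a minimal sketch; the record structure, field names, and numbers are hypothetical, not Kognito’s actual data model.

```python
# A minimal sketch of aggregating de-identified survey records into
# the kind of client-facing summary described above. Field names and
# values are hypothetical; records carry no user identifiers.
from collections import defaultdict

records = [
    {"institution": "U-001", "pre": 2.8, "post": 4.1},
    {"institution": "U-001", "pre": 3.2, "post": 4.0},
    {"institution": "U-002", "pre": 2.5, "post": 3.6},
]

# Group responses by institution, then report volume and mean change.
by_institution = defaultdict(list)
for r in records:
    by_institution[r["institution"]].append(r)

for inst, rows in by_institution.items():
    n = len(rows)
    mean_change = sum(r["post"] - r["pre"] for r in rows) / n
    print(f"{inst}: n={n}, mean pre-to-post change={mean_change:+.2f}")
```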

The short answer responses on our surveys are also a great way to hear from our users. We often pass feedback along to our production and technology team. This is really helpful for us – it not only tells us what we can improve, but also lets us know what our users would like to see in terms of expansion into other topics.


Stay tuned for an upcoming blog post with Dr. Albright about Kognito’s evidence-based research.


Dr. Albright is a Co-Founder and the Director of Research at Kognito. His research involves integrating empirically based findings from neuroscience, such as emotional regulation, mentalizing, and empathy, with components of social cognitive learning models, including motivational interviewing and adult learning theory. He is a clinical psychologist who received his Ph.D. from The City University of New York in experimental cognition, with concentrations in neuropsychology and applied psychophysiology. Dr. Albright is the former Chair of the Department of Psychology at Baruch College of the City University of New York and has received distinguished teaching awards at both Baruch and New York University.
