Evaluspheric Perceptions

Reflections of an everyday evaluator/learner/educator exploring the evalusphere



Why Are Evaluators So Tentative about the Advocacy Aspect of Our Profession? (Guest Post by Rakesh Mohan)

I’ve recently had the pleasure of meeting an evaluator whose work I’ve followed, and I invited him to write for Evaluspheric Perceptions. Rakesh Mohan has been a regular on EvalTalk, and I’ve admired him for putting his work out there and asking for feedback. In early 2014, his office released a report, Confinement of Juvenile Offenders, and I found myself curious enough to read it. Quite honestly, it wasn’t the topic that interested me; rather, it was the idea of reading a governmental evaluation report produced by someone who is a great fan of, and frequent presenter at, the American Evaluation Association. Needless to say, the report impressed me to no end. Rakesh put into place nearly every principle of good evaluation reporting and data visualization that I have been learning and studying myself. I’m not the only one impressed by the work coming out of his office. In 2011, they received AEA’s Alva and Gunnar Myrdal Government Evaluation Award. Recent posts on two of my favorite blogs highlight the work of his office as well: one at Better Evaluation, Week 15: Fitting reporting methods to evaluation findings – and audiences, and the other at AEA365, Sankey diagrams: A cool tool for explaining the complex flow of resources in large organizations.

I’m also happy to learn that Rakesh is on the 2014 ballot for the AEA presidency.

Today, Rakesh presents another topic that often confounds me, but he demystifies it with ease. So, please join me in learning about Rakesh Mohan and evaluator advocacy!

Why Are Evaluators So Tentative about the Advocacy Aspect of Our Profession?

My mother used to say that where there are two or more people, there will always be politics over resources. Because evaluations involve making judgments about prioritization, distribution, and use of resources, evaluations will always be inherently political.

Greetings! I am Rakesh Mohan, director of the Office of Performance Evaluations (OPE), an independent agency of the Idaho Legislature. This year our office is celebrating 20 years of promoting confidence and accountability in state government.

OPE logo

At OPE, there is nothing tentative about advocating for our work—i.e., promoting the use of our evaluations, defending our evaluation approaches and methodologies, and educating people about evaluation. For us, evaluator advocacy is all in a day’s work.

I believe it is the fear of politics that makes many evaluators tentative about advocacy. Some evaluators say that it is not their job to mess with politics lest they be perceived as taking sides, while others do not even acknowledge that evaluation and politics are intertwined. The answer for evaluators is not to ignore the political context of evaluation, but to understand and manage it without taking sides.

The following advocacy activities of my office are grounded in professional evaluation and auditing standards and are guided by our personal ethics:

  1. Conduct my “daily sojourn.” Each year during the legislative session, I visit the capitol every day even if I do not have a scheduled meeting. These visits help me to inform others about the work of my office and be informed about the political context in which we conduct evaluations.
  2. Keep legislative leadership informed. This is the first step in building relationships with policymakers and gaining the confidence and support of the leadership.
  3. Keep stakeholders informed. This is imperative if we want to have the buy-in from key stakeholders.
  4. Assist policymakers with evaluation requests and legislation. This helps with getting good evaluation assignments and subsequently facilitates the implementation of our evaluation recommendations.
  5. Educate policymakers and others about evaluation. We should let policymakers and those who influence policymaking know who we are, what we do, and why we do it.
  6. Work with the news media effectively. If the betterment of society is one of the purposes of evaluation, we need to reach out to the people. The press can serve as a bridge between evaluators and the public. Here are three examples of how the media can help evaluators and evaluation offices:

[Image: Rakesh Mohan’s three examples of how the media can help evaluators and evaluation offices]

Details about these strategies and other thoughts on evaluator advocacy are discussed in my recent article, Evaluator Advocacy: It Is All in a Day’s Work (April 25, 2014).

This article was published along with two related articles in the Forum section of the American Journal of Evaluation:

How to Become an Effective Advocate without Selling Your Soul (George Grob, April 22, 2014)

Broadening the Discussion about Evaluator Advocacy (Michael Hendricks, April 17, 2014)

All three articles are available from OnlineFirst of the American Journal of Evaluation.



When a Direct Question is NOT the Right Question

Who hasn’t answered the question, “What did you learn?” after attending a professional development session? As a PD facilitator and evaluator, I’ve certainly used feedback forms with this very question. After all, measuring participant learning is fundamental to PD evaluation.

In this post, I’ll share examples of actual data from PD evaluation in which we asked the direct question, “What did you learn?” I’ll then explain why this is a difficult question for PD participants to answer, resulting in unhelpful data. Next, I’ll offer a potential solution in the form of a different set of questions for PD evaluators to use in exploring the construct of participant learning. Finally, I’ll show where participant learning fits into the bigger picture of PD evaluation.

What happens when we ask “What did you learn?” 

Here are examples of actual participant responses to that question:

  • After a session on collaborative problem solving with students with behavioral difficulties: How to more effectively problem solve with students
  • After a session on co-teaching: Ways to divide up classroom responsibilities
  • After a session on teaching struggling learners: Some new strategies to work with struggling students

In my experience, about one-third to one-half of participant responses to that ubiquitous question are nothing more than restatements of the course title and thus similarly uninformative to an evaluator.

On the futility of asking “What did you learn?”

It’s challenging to get people to clearly articulate what they have learned on a feedback form distributed after a professional development session. Whether the question is asked immediately after the learning has taken place, or after some time has passed and the participant (in theory) has had time to process and apply the learning, the outcome (in terms of the data collected) is the same. People don’t seem to be able (I’m working under the assumption that they are indeed willing) to answer “What did you learn?” with the depth and richness of written language that would help professional learning planners make effective decisions about future programming. They’re not to blame, of course. It’s just as difficult for me to answer that question when I’m a participant.

Parents and teachers know this: When you ask a child a question and he or she answers with “I don’t know,” that response can have a whole range of meanings from “I can’t quite articulate the answer you’re looking for” to “I’m not certain I know the answer” to “I need more time to process the question” to “I don’t understand the question” to “I really don’t know the answer” to “I don’t want to tell you!” It’s no different for adults. Someone who answers “What did you learn?” by essentially restating the title of the PD session is in effect saying, “I don’t know.” As a PD evaluator, it is my job to figure out exactly what that means.

How else can we know what participants learned?

Of course, we’re talking about surveys – self-reported perceptions of learning. There are certainly other ways for evaluators to gain an understanding of what participants learned.

We can interview them, crafting probes that might help them more clearly articulate what they learned. Interviews provide dedicated time and the opportunity for participants to give the question their full attention. In contrast, surveys are often completed when participants feel rushed at the end of a PD session, or at a later time, when they are fitting survey completion in with a myriad of other job duties.

We can observe participants at work, looking for evidence that they are applying what they learned in practice, thus getting at not only what they may have learned, but also what Kirkpatrick called “behavior” and Guskey calls “participant use of new knowledge and skills” (see below for more on these evaluators and their prescribed levels of PD evaluation).

Both interviews and observations, however, are considerably more time consuming and thus less feasible for an individual evaluator.

As an alternative, I wondered what might happen if rather than asking, “What did you learn?” we asked, “How did you feel?” Learning has long been highly associated with emotions. (For more on learning and emotions, check out this article, this one, and this one, and look at the work of John M. Dirkx and Antonio Damasio, among many others.) Would PD participants be better able to articulate how they felt during PD, and would their learning then become evident in their writing?

What happens when we ask a different question?

Well, a different set of questions, really. A colleague and I created a new feedback form to pilot with PD participants in which we seek to understand their learning through a series of five questions. We discussed at length what it is we want to know about participants’ learning to inform our programmatic decisions. We concluded that it is not necessarily the content (i.e., if participants attend a course on instructional planning, then we expect they will learn something about instructional planning), but rather whether participants experienced a change in thinking, whether they feel they learned a great deal, and whether or not the content was new for them.

We begin with these three questions, using a standard 5-point Likert response scale (Strongly disagree, Disagree, Neither agree nor disagree, Agree, Strongly agree):

  1. This professional learning opportunity changed the way I think about this topic.
  2. I feel as if I have learned a great deal from participating in this professional learning opportunity.
  3. Most or all of the content was a review or refresher for me (this question is reverse-coded, of course).
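For analysis, we map these agreement responses to numbers and flip the third item so that, across all three, a higher score consistently points toward more new learning. Here is a minimal sketch in Python of how that scoring might look; the column names, the 1–5 coding, and the sample responses are illustrative assumptions, not our actual instrument or data.

    import pandas as pd

    # Agreement scale mapped to numbers (1 = Strongly disagree ... 5 = Strongly agree).
    SCALE = {
        "Strongly disagree": 1,
        "Disagree": 2,
        "Neither agree nor disagree": 3,
        "Agree": 4,
        "Strongly agree": 5,
    }

    # Hypothetical responses to the three items (column names are illustrative).
    responses = pd.DataFrame({
        "changed_thinking": ["Agree", "Strongly agree", "Neither agree nor disagree"],
        "learned_great_deal": ["Agree", "Agree", "Disagree"],
        "content_was_review": ["Disagree", "Strongly disagree", "Agree"],
    })

    scored = responses.replace(SCALE).astype(int)

    # Reverse-code item 3 so a high score always means "more new learning."
    scored["content_was_review"] = 6 - scored["content_was_review"]

    print(scored.mean())  # item means, all oriented the same way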

We then ask participants about their emotions during the session with a set of “check all that apply” responses:

During this session I felt:

  • Energized
  • Renewed
  • Bored
  • Inspired
  • Overwhelmed
  • Angry
  • In agreement with the presenter
  • In disagreement with the presenter
  • Other

Finally, we ask participants to “Please explain why you checked the boxes you did,” and include an open essay box for narrative responses.
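Because the emotions item is check-all-that-apply, each emotion is easiest to treat as its own yes/no column when the responses come back, with the open-ended explanation kept alongside for qualitative review. A small, hypothetical sketch of how such data might be tallied (the column names and responses are made up, not our actual export):

    import pandas as pd

    # Hypothetical export: one row per participant, one True/False column per emotion,
    # plus the open-ended explanation.
    feedback = pd.DataFrame({
        "Energized": [True, False, True],
        "Renewed": [False, False, True],
        "Bored": [False, True, False],
        "Inspired": [True, False, True],
        "Overwhelmed": [False, True, False],
        "explanation": [
            "Left with two strategies I can try tomorrow.",
            "Mostly a repeat of last year's session.",
            "The co-planning examples changed how I think about grouping.",
        ],
    })

    emotion_cols = [c for c in feedback.columns if c != "explanation"]

    # How often was each emotion checked?
    print(feedback[emotion_cols].sum().sort_values(ascending=False))

    # Pair each participant's checked emotions with their explanation.
    for _, row in feedback.iterrows():
        checked = [c for c in emotion_cols if row[c]]
        print(checked, "->", row["explanation"])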

I’ve only seen data from one course thus far, but it is quite promising in that participants were very forthcoming in their descriptions of how they felt. Through their descriptions we were able to discern the degree of learning and from many responses, how participants plan to apply that learning. We received far fewer uninformative responses than in previous attempts to measure learning with the one direct question. As we continue to use this new set of questions, I hope to share response examples in a future post.

Image credit: Collette Cassinelli via Flickr

Where does participant learning fit into the PD evaluation picture?

Donald Kirkpatrick famously proposed four levels of evaluation for training programs in the 1950s – essentially measuring participants’ (1) reactions, (2) learning, (3) behavior, and (4) results. Thomas Guskey later built upon Kirkpatrick’s model, adding a fifth level – organization support and change (which Guskey actually identifies as level 3; for more on this topic, see this aea365 post I wrote with a colleague during a week devoted to Guskey’s levels of PD evaluation sponsored by a professional development community of practice).

For hardcore evaluation enthusiasts, I suggest Michael Scriven’s The Evaluation of Training: A Checklist Approach.

What other questions could we ask to understand PD participants’ learning?

I welcome your suggestions, so please add them to the comments!

 



Outputs are for programs. Outcomes are for people.

A recent experience reviewing a professional organization’s conference proposals for professional development sessions reminded me of the challenge program designers/facilitators encounter in identifying and articulating program outcomes. Time after time, I read “outcome” statements such as: participants will view video cases…, participants will hear how we approached…, participants will have hands-on opportunities to…, participants will experience/explore…and so on. What are these statements describing? Program activities. Program outputs. What are they not describing? Outcomes.

OUTPUTS  ≠  OUTCOMES

OUTPUTS are the products of program activities, or the result of program processes. They are the deliverables. Some even use the term interchangeably with “activities.” Outputs can be identified by answering questions such as:

  • What will the program produce?
  • What will the program accomplish?
  • What activities will be completed?
  • How many participants will the program serve?
  • How many sessions will be held?
  • What will program participants receive?

OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question:

How will program participants change as a result of their participation in the program? 

In other words, What will program participants know, understand, acquire, or be able to do?

Change can occur for participants in the form of new or different levels of:

  • awareness
  • understanding
  • learning
  • knowledge
  • skills
  • abilities
  • behaviors
  • attributes

When I teach graduate students to compose outcome statements, I ask them to visualize program participants and think about what those participants will walk out of the door with (after the program ends) that they did not enter the room with.

Image credit: Melina via Flickr

An important distinction is that we can directly control outputs, but not outcomes. We can control how many sessions we hold, how many people we accept into the program, how many brochures we produce, etc. However, we can only hope to influence outcomes.

Let me offer a simple example: I recently took a cooking course. I was in a large kitchen with a dozen stations equipped with recipes, ingredients, tools, and an oven/range. A chef gave a lecture, and demonstrated the steps listed on our instruction sheets while we watched. We then went about cooking and enjoying our meals. As I left the facility, I realized that for the first time, I actually understand the difference among types of flour. Yes, I can explain the reasons you may choose cake flour, all-purpose flour, or bread flour. I know how much wheat flour I can safely substitute for white, and why. I can describe the function of gluten. I can recognize when active dry yeast is alive and well (or not). I am able to bake a homemade pizza in such a way that the crust is browned and crispy on the outside, yet moist on the inside. I could do none of these things prior to entering that cooking classroom. If the program included an assessment of outcomes (beyond the obvious “performance assessment” of making pizza), I’m certain I could provide evidence that the cooking course is effective.

However, if the sponsoring organization proposed this course including a list of “outcomes,” what might they be? See if this sounds familiar: Participants will hear a chef’s lecture on flour types and the function of gluten, and will see a demonstration of the pizza-making process. Participants will have the opportunity to make enough homemade dough to bake two 12″ pizzas with toppings of their choice and will use a modified pizza stone baking process.

We know now that those are indeed potential outputs, or program activities. If program personnel decide to evaluate this course based on the articulation of these “outcomes” (which, of course, are not outcomes), what might that look like?

Image credit: djbones via Flickr

Did participants hear a lecture? Check. Did they see a demonstration? Check. Did they make dough and bake pizzas? Check. Success??? Perhaps. But what if those responsible for the program want to improve it? Expand it? Offer more courses and hope that folks will register for them? Will knowing the answer to these questions help them? Probably not.

To be sure, outputs can be valuable to measure, but they don’t go far enough in helping us answer the types of evaluative questions (e.g. To what degree did participants change? How valuable or important is that degree of change? How well did the program do with regard to facilitating change in participants?) we need to make programmatic decisions for continuous improvement.

Outcomes are much more difficult to identify and articulate, but well worth the effort, and best done during the program design and evaluation planning process.

For more on outcomes vs. outputs, check out these links:

More about outcomes – why they are important…and elusive! (Sparks for Change blog)

Outputs vs. Outcomes (University of Wisconsin Extension)

It’s Not Just Semantics: Managing Outcomes vs. Outputs (Harvard Business Review)



Ask a Brilliant Question, Get an Elegant Answer?

With a properly-framed question, finding an elegant answer becomes almost straightforward.

-Stephen Wunker, Asking the Right Question

Here’s what every successful leader, coach, marketer, or teacher knows: You cannot overstate the importance of asking the right questions.

It’s no different for evaluators.

Evaluation questions form the foundation of solid evaluations, while also serving to frame and focus the work. Starting with the right questions can impact the effectiveness and appropriateness of key programmatic decisions that rest on evaluation results. In fact, clearly defined evaluation questions are part and parcel of a successful evaluation – that is, one that is actionable and used.

That said, evaluations are carried out all the time without the benefit of well-articulated, focused questions. How often have you been asked to “evaluate the program,” or “study the program” to “find out whether it’s working” or “to see if it’s effective?”

Taken at face value these are perfectly legitimate questions, but they have no practical utility until “working” or “effective” is clearly defined – until the questions are sufficiently focused to inform and direct the evaluation design. What does “working” look like? How will you recognize “effectiveness”? When I pushed to get a definition of “working” before beginning an evaluation, the client told me they wanted to know “whether the program has made a difference” for the target population. “What kind of difference?” I asked. It took a great deal of conversation before we were able to settle on what exactly that meant.

This post explores evaluation questions from three angles:

  • Understanding the nature of evaluation questions
  • Three critical functions of evaluation questions
  • Considerations for crafting quality evaluation questions

Understanding the nature of evaluation questions

Evaluation questions are the broad, overarching, or “big picture” questions an evaluation is expected to answer. If answered and actionable, they help us (or help us support those who) make key programmatic decisions. They are distinct from the individual questions asked on measurement instruments such as surveys or interview protocols, although they can often overlap.

Evaluations often include descriptive questions such as:

  • To what extent was the program implemented as designed?
  • How many of the target population did we reach?
  • Did the program meet its stated goals?
  • What outcomes were achieved?

However, it is also important to ask “explicitly evaluative questions” (for a detailed discussion of these, read Actionable Evaluation Basics by E. Jane Davidson):

  • How well was our program implemented?
  • How adequate was our reach? Did we reach the right target population?
  • How important were the outcomes? How valuable are they?
  • How substantial was the change we observed?

Words such as well, adequate, important, valuable, and substantial signal that a question is evaluative. Other words such as meaningful, significant, or appropriate might be found in evaluative questions as well.

Three critical functions of evaluation questions:

Just as a building’s foundation functions to bear the load of the building, anchor it against potentially damaging natural forces (e.g., earthquakes), and shield it from (also potentially damaging) moisture, evaluation questions can be thought of as having three similar functions:

1.) They bear the load of the evaluation. The evaluation approach, the social science theories that inform that approach, and choices about evaluation design and the selection of measures all rest squarely on the evaluation questions. These questions set the purpose for the entire evaluation.

2.) They anchor the evaluation against potentially damaging “forces.” What could potentially damage an evaluation? Looking for the wrong indicators (i.e. those most readily observable), selecting the wrong measures (i.e. the most readily available, cheapest, easiest to administer), collecting the wrong data, engaging the wrong stakeholders (i.e. those easiest to access), sampling the wrong respondents…You get the picture. Leveraging evaluation questions as the anchor lends critical purpose to all choices you make as you craft the evaluation.

3.) They shield the evaluation from that which can seep in slowly and destroy it: distrust, disdain, fear, misplaced expectations. These insidious, dysfunctional attitudes toward evaluation can fester and erupt at any time in the evaluation life cycle. Clearly articulated questions give the evaluator the ability to defend against them and the potential to address them productively.

Considerations for crafting quality evaluation questions

There’s no dearth of good advice available. Here’s some I’ve assembled over the years:

Considerations for developing evaluation questions:

  • What are the information needs?
  • Whose information needs are going to be considered?
  • What do you need to know about the program for the purpose of the evaluation?
  • What do you need to know in order to make (or support others to make) necessary decisions about the program?
  • Will evaluation questions be determined by the evaluator, program personnel, other stakeholders, etc.? Will they be developed collaboratively?

Community Toolbox offers this:

You choose your evaluation questions by analyzing the community problem or issue you’re addressing, and deciding how you want to affect it.  Why do you want to ask this particular question in relation to your evaluation?  What is it about the issue that is the most pressing to change?  What indicators will tell you whether that change is taking place? 

The venerable CDC describes their strategy to help evaluators and offers a checklist to assess your evaluation questions: To help get to “good questions” we aggregated and analyzed evaluation literature and solicited practice wisdom from dozens of evaluators. From these efforts we created a checklist for use in assessing potential evaluation questions.  

Better Evaluation offers this advice: “Having an agreed set of Key Evaluation Questions (KEQs) makes it easier to decide what data to collect, how to analyze it, and how to report it,” and links to additional resources for crafting key evaluation questions here.

[Images: “Why all this angst?”, “Am I helping humanity?”, “Where is the war on greed?”, and “What does humility require?”]

Sure, “detailed questions are not as exciting as brilliant answers,” claims Wunker. But you’ll never get brilliant answers without them, says Sheila.

Enjoy these: 15 Great Quotes on the Importance of Asking the Right Question

Image credits: gak, play4smee, existentialism, and patterned via Flickr.



Every Day is Thanksgiving Day!

Americans are celebrating Thanksgiving today, and while my personal practice is to give thanks every day, today certainly feels like the right day to share thanks to all who subscribe, follow, read, and comment on this blog.

In addition to my wonderful family and friends, good health, and other gifts, I have been blessed with the opportunity to enjoy my work. It hasn’t always been this way, but for many years now, I have truly enjoyed my work. I generally get up in the morning, look forward to going in, and take great pleasure and pride in the work that I do. In fact, most days I consider it fun.

I’m thankful to have the opportunity to work in a place (i.e. my fields of education and evaluation) where I am both veteran and novice, expert and apprentice, teacher as well as learner. ALWAYS a learner.

My fields offer no dearth of opportunities to learn and grow from cutting edge thinkers and doers who graciously offer their work through websites, tweets, slide shows, videos, webinars, blogs, books, articles, courses, and conferences, and for all of them, I am thankful. I take advantage of the opportunity to learn through reading, writing, watching, listening, and asking questions of colleagues in my fields every single day with the hope that I too, can offer something of value.

Image credit: nyoin via Flickr

To all of you, I offer my sincerest thanks for your time and attention as you continue to read and comment on this blog. Happy Thanksgiving everyone!



Seven Simple Strategies to Engage Any Audience

I love PowerPoint! I especially love well-designed slides, and I have fun putting into practice what I’ve learned about slide design.

Once you have visually appealing slides that encourage your audience to focus attention on you and support your content…AND you have appropriate  content that is important, interesting, or imperative to your audience, how will you deliver it in an engaging way?

Last summer, I had this article published on Presentation Magazine’s site. Now, I’ve turned the key points into a slide deck posted on SlideShare.

Enjoy!



Put me in, coach…I’m ready to survey! (Cross post with Actionable Data)

Sheila here, writing with the wonderful Kim Firth Leonard of the Actionable Data blog.

This post highlights some favorite recommendations from our collective experiences in crafting survey questions. It is also a continuation of our earlier co-authored posts (here and here).

1. You can’t fly without a pilot! On one of my first surveys, one of my [Sheila] graduate school advisors gave me simple advice that turned out to be some of the best coaching I’ve ever received. Of each of my questions she would ask, “Now, what do you REALLY want to know here?” and would challenge me with, “So, what if someone answers ‘yes’ to this? So what? Does that really tell you anything?” From this I learned the importance of piloting surveys (and other protocols, for that matter), at the very least with one person, before sending them out to all respondents.

Likewise, I [Kim] often give the advice (and try to remember it myself) to think carefully about what you are trying to learn AND DO with the resulting information when composing a question. It’s sort of like reverse engineering questions. If I want to learn and report on how satisfied students are with a particular aspect of a program, then I’m going to mirror that language in the question and response options. Seems simple, right? But it’s easy to write questions that aren’t this direct without realizing it if you haven’t thought it through. See the difference in the (perhaps over-simplified) questions below? The key is in being very clear about what it is that you want to learn.

How sufficient was the instruction you received in the X program?

  1. Excellent
  2. Sufficient
  3. Poor

How satisfied are you with the instruction you received in the X program?

  1. Very satisfied
  2. Satisfied
  3. Neutral
  4. Unsatisfied
  5. Very unsatisfied

And because when we write survey questions we easily become “too close” to the material, using others as a sounding board is vital. No matter how well I think I’ve crafted a survey I will always want others’ feedback, and ideally that includes feedback from members of whatever group the survey is intended for!
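To make the reverse-engineering idea in tip #1 concrete: if the sentence you plan to write in your report is “X% of participants were satisfied or very satisfied with the instruction,” the satisfaction question above mirrors that language, and the analysis falls out directly. A minimal, hypothetical sketch (the response data are made up):

    # Hypothetical responses to "How satisfied are you with the instruction
    # you received in the X program?"
    responses = [
        "Very satisfied", "Satisfied", "Neutral", "Satisfied",
        "Unsatisfied", "Very satisfied", "Satisfied",
    ]

    satisfied = {"Very satisfied", "Satisfied"}
    share = sum(r in satisfied for r in responses) / len(responses)

    # This is the exact sentence we planned to report.
    print(f"{share:.0%} of participants were satisfied or very satisfied with the instruction.")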

2. Ask a silly question, get a silly answer! Selecting the right type of question to get the kind of answers you’re looking for is a related challenge. Sometimes the simplest-sounding questions are the most difficult for respondents to answer. I [Sheila] mostly evaluate professional development. Open-ended questions such as “What did you learn?” and “How will you measure the impact of your learning?” sound terrific, but you might be surprised at how difficult they are for some people to answer, and how challenging they are to analyze. Often, people aren’t able to clearly articulate their learning, or how they will measure its impact, in a neatly packaged paragraph. This is the case whether we ask the question immediately after the learning opportunity, or after participants have been given time to process and apply the learning in their own contexts.

Ask people only questions
they are likely to know the answers to,
ask about things relevant to them,
and be clear in what you’re asking.

-Babbie

Image credit: Travelin’ Librarian via Flickr

Sometimes it’s best to be more direct – think carefully about the construct you are trying to measure and break it down into more easily observable and reportable indicators, in order to get the richness of data you’re after. (For a fabulous resource on evaluation terminology, check out Kylie Hutchinson’s free Evaluation Glossary for mobile phones and tablets.) In other words (returning to tip #1 here), the evaluator needs to think, “What do I REALLY need to know from a potential respondent?” So, “What have you learned?” might become several questions, such as “How have you used [example of the content] in your practice?”, “How has your thinking changed with regard to [example of the content]?”, and “What changes have you observed in your [students, patients, clients…]?” Ask these questions either after you have determined that people have used the content or changed their thinking, or include a response option for those who have not used the content or who feel their thinking has not changed.

3. Everyone should have a chance to play! Ensure an appropriate response option for everyone! Survey researchers often concern themselves with including appropriate response options for demographics – race/ethnicity, gender identification, etc. – but in our experience, the other questions often sorely need the same attention. We’ve both been given too many surveys (especially online) where there aren’t sufficient or appropriate response options for a required question, leaving us with two choices, neither of which is appealing: 1) forgo completing the survey and deny the organization or individual our input, or 2) answer in a way that does not honestly reflect our opinion, attribute, status, experience, etc.

Image credit: stina johnsson via Flickr

Additional rules of thumb for response options:

  • Avoid too many. Or, use just the number you need and no more. No need to turn what might as well be a 5-point scale into a 7- or 9-point scale if you’re likely to end up collapsing the responses anyway. Use enough options to give folks a chance to express nuanced experiences, but no more than necessary (we find that 5 points are usually enough).
  • If you’re trying to get folks to rate something as ultimately ‘passing muster’ or not, you might consider an even number of response options — ideally two above the ‘muster’ point (positive options) and two below (negative options). Something like: excellent, meets standard, developing but below standard, & poor.
  • In contrast, if what you’re after is a rating of satisfaction or the like, then a 5-point Likert or Likert-like scale may be your best bet: Strongly agree, Agree, Neutral, Disagree, Strongly disagree. Disclaimer: We’re not necessarily advocating for the 5-point scale vs. a 4-point (or any other number, for that matter). We’re well aware of the never-ending debates and are of the opinion that one is not necessarily superior to the other; each has its place and use in evaluation. (For a great and humorous take on this topic, check out Patricia Rogers’ and Jane Davidson’s post Boxers or briefs? Why having a favorite response scale makes no sense).
  • Aim for visually appealing lists – don’t make it difficult on the eye by creating matrices that are too big, or response options running across the page rather than down.

If you’re using scales throughout a survey (especially the same scale), always run them in the same direction (positive to negative or vice versa). Seems like a no-brainer, right? But it’s worth attending to because it’s easy to forget.
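One low-tech way to keep scale direction (and length) consistent is to define each response scale once and reuse it everywhere you build the survey, whether that’s a survey platform, a spreadsheet template, or a script. A hypothetical sketch; the scale labels simply echo the examples above:

    # Define each scale once, always ordered positive -> negative,
    # and reuse it for every question that needs it.
    AGREEMENT = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]
    QUALITY = ["Excellent", "Meets standard", "Developing but below standard", "Poor"]

    questions = [
        ("I feel as if I have learned a great deal from this session.", AGREEMENT),
        ("How would you rate the pacing of the session?", QUALITY),
    ]

    # Render a plain-text draft of the survey; every scale runs in the same direction.
    for text, scale in questions:
        print(text)
        for option in scale:
            print("  ( )", option)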

4. Avoid these pitfalls!

  • Leading questions — is it extremely obvious how you want respondents to answer? If so, social desirability bias can come into play. Have you given the respondents a ‘lead’ in your framing of the question – which might look like this: “Are you happy with the instructor?” as opposed to “How satisfied or dissatisfied are you with the instructor?”
  • Double-barreled questions — these are questions that cram too much into one question and often indicate that either two questions are needed where one is written, or you’re not clear enough about exactly what you’re trying to learn yet. Example: “How timely and helpful was the feedback?” Do you want to know whether feedback was received quickly? Or that it was helpful? Or both (separately)?
  • Over-use of open-ended questions — if what you need to ask isn’t easily condensed into a simple set of response options, maybe it shouldn’t be part of a survey. You are likely better off getting these types of responses from interviews, focus groups, or other qualitative mechanisms. One or two open-ended questions won’t foul things up, but you may not capture information that’s as exhaustive or nuanced as you would if you were speaking to someone over the phone or in person.

Image credit: wallyg via Flickr

5. Not for your eyes only! Keep the survey respondent in mind as you write questions, and consult Universal Design principles and tips, which are, thankfully, widely available now. The American Evaluation Association held a webinar with Jennifer Sulewski back in 2010 that covered Universal Design tips for evaluators. The handout (available from the AEA’s Public eLibrary) includes great nuggets of wisdom, many of which boil down to making sure things are as simple as possible and written with the survey respondent in mind, such as, “Make sure surveys are easy to understand and responses are intuitive, even if people can’t or won’t read the instructions closely.”

What are YOUR favorite survey construction recommendations? Please add them in the comments!

(NOTE: We relied on our go-to texts on survey design and management for this post, as with others: Babbie’s Survey Research Methods, Dillman’s Internet, Mail, and Mixed-Mode Surveys, and Fink’s How to Conduct Surveys: A Step-by-Step Guide 3rd Ed. See our last post for more on these and other survey design resources).
