Evaluspheric Perceptions

Reflections of an everyday evaluator/learner/educator exploring the evalusphere



Data Visualization: Sail Forth – Steer for the Deep Waters Only (Part II)

Either you decide to stay in the shallow end of the pool, or you go out into the ocean.

-Christopher Reeve

In an ongoing quest to improve my data visualization skills, I recently ventured out from the security of the shallow end and tried the next level of sophistication with some basic charts. In Part I of this series, I described the process I used to create my first back-to-back bar chart. Once again, I learned most of the skills I applied here from Stephanie Evergreen and Ann K Emery, both wonderful dataviz artists and great teachers.

Here is my first stab at small multiples. I evaluated a summer Leadership Academy for school leaders where participants were asked to name their top “take aways” from the Academy.

small multiples

In this way, the reader can see not only the top 9 topics cited, but can also determine which topics appeared to have more of an impact on participants who work at different levels. It’s clear that Topics A and H are important to the secondary folks, while the elementary folks seemed more interested in Topics C, B, G, and H. There were far fewer K-12 people at the Academy, but for that group, Topics A and C appear to stand out. Knowing this can help inform those who need to plan future academies and other professional learning opportunities.

Here is how to do this:

  1. Create a simple bar graph for one dataset (Topic A).
  2. Make these basic changes:
    • DELETE gridlines, x-axis tick marks, and the chart border. (NOTE: I could have deleted the x-axis line as well, but I left it in this chart to visually “anchor” each small bar chart.)
    • ADD data labels.
    • FORMAT fonts (larger, bold).
    • ADJUST colors (I used colors associated with the organization).
  3. When you have everything looking the way you want it to on the first graph, create the remaining graphs by copying the first and editing the data.
    • Click “select data” and edit to get the correct data on the chart.
  4. ALIGN all the small charts.
    • DELETE the y-axis on all but the leftmost charts. (Again, these could be deleted as well, but I liked the way they “anchor” the group of charts.)
  5. INSERT text boxes to identify the topics (this eliminates the need for a legend). (I created one – Topic A – then copied, pasted, and aligned it, and replaced the text for the other topics.)
  6. INSERT text boxes at the bottom for the shared x-axis (the categories secondary, elementary, and K-12 need only appear at the bottom of each column of small graphs).
  7. ADD a title and subtitle.

For more on small multiples and different ways of creating them, see Stephanie’s post and Ann K Emery’s blogs on small multiples.
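If you’d rather script your charts, the same small-multiples layout can be sketched in a few lines of Python with matplotlib. This is not the Excel workflow described above, just an illustrative alternative, and the topic names and counts below are hypothetical stand-ins for the Academy tallies.

```python
# A minimal small-multiples sketch: one small bar chart per topic, shared layout.
# Topic names and counts are hypothetical, for illustration only.
import matplotlib.pyplot as plt

groups = ["Secondary", "Elementary", "K-12"]
topics = {  # hypothetical counts of participants naming each topic, by role
    "Topic A": [12, 5, 4],
    "Topic B": [6, 9, 1],
    "Topic C": [4, 10, 3],
}

fig, axes = plt.subplots(1, len(topics), sharey=True, figsize=(9, 3))
for ax, (topic, counts) in zip(axes, topics.items()):
    ax.bar(groups, counts, color="#2b6777")          # one small chart per topic
    ax.set_title(topic, fontsize=10, fontweight="bold")
    ax.set_yticks([])                                # no gridlines or value axis
    for spine in ("top", "right", "left"):           # keep only the bottom "anchor" line
        ax.spines[spine].set_visible(False)
    for x, value in enumerate(counts):               # data labels instead of an axis
        ax.text(x, value, str(value), ha="center", va="bottom", fontsize=9)

fig.suptitle("Top take-aways from the Leadership Academy, by role")
plt.tight_layout()
plt.show()
```

The same pattern scales to all nine topics simply by adding entries to the dictionary.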

Let’s be honest. This graph took some time and some fiddling to get right. But the investment will almost certainly pay off in the future. Each time I clicked, I learned something new, and my next chart will be even better in less time! Onward!

 



Data Visualization: Sail Forth – Steer for the Deep Waters Only (Part I)

Sail Forth- Steer for the deep waters only. Reckless O soul, exploring. I with thee and thou with me. For we are bound where mariner has not yet dared go. And we will risk the ship, ourselves, and all.

-Walt Whitman

I consider myself a novice, for now, staying safe in the shallow waters of data visualization. I’ve learned to create clean and modern-looking bar charts, column charts, and {gulp!} the occasional pie chart. I follow basic safety rules, dispensing with the unnecessary – gridlines, tick marks, superfluous axis labels, and legends. I avoid default colors, and proudly leave 3D charts to the real amateurs. To venture beyond that, into the deep current of dashboards and interactivity, I would need a life jacket and a tow rope. Recently, though, I waded a bit deeper and experimented with two variations – back-to-back bar charts and small multiples. In this post, I share how I created my first back-to-back bar chart. Part II will tackle the small multiples.

I created a back-to-back bar chart to display some common public school data. Much of this process I learned from Stephanie Evergreen’s blogs on dataviz. I chose a back-to-back bar chart because I have two datasets that people need to see in one chart, yet they don’t necessarily need to compare one dataset (Math) to the other (ELA). What they do want to compare is one year to the next for each grade level.

Back to back chart

Here is how to do it:

  1. Create one horizontal bar chart.
  2. Make these basic changes:
    • DELETE gridlines, x-axis tick marks, the y-axis, and the chart border. (NOTE: I could have deleted the x-axis as well, but I left it in this chart to draw the reader not just to the absolute scores, but to the scores as compared to 100%.)
    • ADD data labels and title.
    • FORMAT fonts (larger, bold).
    • ADJUST colors (I used colors associated with the organization).
    • REDUCE gap width (I chose 40% for this graph).
  3. When you have everything looking the way you want it to on the first graph, create the second by copying the first and editing the data.
    • Click “select data” and edit to get the correct data on the chart.
    • FLIP the graph on the left (my Math graph) by right-clicking the x-axis and checking “values in reverse order.”
  4. INSERT text boxes in the top bars of each graph to identify the years (this eliminates the need for the legend).
  5. INSERT text boxes in the center (between the two graphs) for the shared y-axis. (I created one – Grade 3 – then copied, pasted, and aligned it, and then replaced the text for the other grade levels.)
  6. OUTLINE the bars in white (a cool trick I learned from Ann K Emery’s blogs about dataviz) and increase the line weight a bit (I put it up to 1.25).
  7. GROUP the graphs by selecting both, along with all the text boxes, and right-clicking to “group” them. Grouping can be tricky in Excel, so when you go to copy and paste, be sure to grab the grouped object rather than just one graph.
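For readers who prefer scripting, here is a rough matplotlib sketch of the same back-to-back idea. The Math and ELA percentages below are hypothetical; only the layout choices (reversed left chart, white bar outlines, shared grade labels in the center, a scale read against 100%) mirror the steps above.

```python
# A back-to-back bar chart sketch with hypothetical proficiency percentages.
import matplotlib.pyplot as plt
import numpy as np

grades = ["Grade 3", "Grade 4", "Grade 5"]
math_2013, math_2014 = [62, 58, 66], [68, 63, 70]      # hypothetical percentages
ela_2013,  ela_2014  = [70, 64, 72], [74, 69, 75]

y = np.arange(len(grades))
h = 0.38                                               # bar height (reduced gap width)

fig, (ax_math, ax_ela) = plt.subplots(1, 2, sharey=True, figsize=(9, 3))
for ax, d13, d14, title in [(ax_math, math_2013, math_2014, "Math"),
                            (ax_ela,  ela_2013,  ela_2014,  "ELA")]:
    ax.barh(y + h / 2, d13, height=h, color="#9dbcd4", edgecolor="white", label="2013")
    ax.barh(y - h / 2, d14, height=h, color="#2b6777", edgecolor="white", label="2014")
    ax.set_title(title)
    ax.set_xlim(0, 100)                                # read scores against 100%
    ax.tick_params(left=False)                         # no y tick marks

ax_math.invert_xaxis()                                 # "values in reverse order" for the left chart
ax_math.tick_params(labelleft=False)                   # grade labels go in the shared center instead
ax_ela.set_yticks(y)
ax_ela.set_yticklabels(grades)
ax_ela.tick_params(labelleft=True)

ax_ela.legend(loc="lower right", frameon=False)
plt.tight_layout()
plt.show()
```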

 

SUCCESS! Go on, try it yourself! You know you want to…

Look for Part II on small multiples…coming soon.



Why Are Evaluators So Tentative about the Advocacy Aspect of Our Profession? (Guest Post by Rakesh Mohan)

I’ve recently had the pleasure of meeting an evaluator whose work I’ve followed, and I invited him to write for Evaluspheric Perceptions. Rakesh Mohan has been a regular on EvalTalk, and I’ve admired him for putting his work out there and asking for feedback. In early 2014, his office released a report, Confinement of Juvenile Offenders, and I found myself curious enough to read it. Quite honestly, it wasn’t the topic that interested me, but rather, it was the idea of reading a governmental evaluation report produced by someone who is a great fan of and frequent presenter at the American Evaluation Association. Needless to say, the report impressed me to no end. Rakesh put into place nearly every principle of good evaluation reporting and data visualization that I have been learning and studying myself. I’m not the only one impressed by the work coming out of his office. In 2011, they received AEA’s Alva and Gunnar Myrdal Government Evaluation Award. Recent posts on two of my favorite blogs highlight the work of his office as well. One can be found at Better Evaluation (Week 15: Fitting reporting methods to evaluation findings – and audiences) and the other at AEA365 (Sankey diagrams: A cool tool for explaining the complex flow of resources in large organizations) (Mohan, 2014).

I’m also happy to learn that Rakesh is on the 2014 ballot for the AEA presidency.

Today, Rakesh presents another topic that often confounds me, but he demystifies it with ease. So, please join me in learning about Rakesh Mohan and evaluator advocacy!

Why Are Evaluators So Tentative about the Advocacy Aspect of Our Profession?

My mother used to say that where there are two or more people, there will always be politics over resources. Because evaluations involve making judgments about prioritization, distribution, and use of resources, evaluations will always be inherently political.

Greetings! I am Rakesh Mohan, director of the Office of Performance Evaluations (OPE), an independent agency of the Idaho Legislature. This year our office is celebrating 20 years of promoting confidence and accountability in state government.

OPE logo

At OPE, there is nothing tentative about advocating for our work—i.e., promoting the use of our evaluations, defending our evaluation approaches and methodologies, and educating people about evaluation. For us, evaluator advocacy is all in a day’s work.

I believe it is the fear of politics that makes many evaluators tentative about advocacy. Some evaluators say that it is not their job to mess with politics lest they be perceived as taking sides, while others do not even acknowledge that evaluation and politics are intertwined. The answer for evaluators is not to ignore the political context of evaluation, but to understand and manage it without taking sides.

The following advocacy activities of my office are grounded in professional evaluation and auditing standards and are guided by our personal ethics:

  1. Conduct my “daily sojourn.” Each year during the legislative session, I visit the capitol every day even if I do not have a scheduled meeting. These visits help me to inform others about the work of my office and be informed about the political context in which we conduct evaluations.
  2. Keep legislative leadership informed. This is the first step in building relationships with policymakers and gaining the confidence and support of the leadership.
  3. Keep stakeholders informed. This is imperative if we want to have the buy-in from key stakeholders.
  4. Assist policymakers with evaluation requests and legislation. This helps with getting good evaluation assignments and subsequently facilitates the implementation of our evaluation recommendations.
  5. Educate policymakers and others about evaluation. We should let policymakers and those who influence policymaking know who we are, what we do, and why we do it.
  6. Work with the news media effectively. If the betterment of society is one of the purposes of evaluation, we need to reach out to the people. The press can serve as a bridge between evaluators and the public. Here are three examples of how the media can help evaluators and evaluation offices:

Mohan 2

 

Details about these strategies and other thoughts on evaluator advocacy are discussed in my recent article, Evaluator Advocacy: It Is All in a Day’s Work (April 25, 2014).

This article was published along with two related articles in the Forum section of the American Journal of Evaluation:

How to Become an Effective Advocate without Selling Your Soul (George Grob, April 22, 2014)

Broadening the Discussion about Evaluator Advocacy (Michael Hendricks, April 17, 2014)

All three articles are available from OnlineFirst of the American Journal of Evaluation.



When a Direct Question is NOT the Right Question

Who hasn’t answered the question, “What did you learn?” after attending a professional development session? As a PD facilitator and evaluator, I’ve certainly used feedback forms with this very question. After all, measuring participant learning is fundamental to PD evaluation.

In this post, I’ll share examples of actual data from PD evaluation in which we asked the direct question, “What did you learn?” I’ll then explain why this is a difficult question for PD participants to answer, resulting in unhelpful data. Next, I’ll offer a potential solution in the form of a different set of questions for PD evaluators to use in exploring the construct of participant learning. Finally, I’ll show where participant learning fits into the bigger picture of PD evaluation.

What happens when we ask “What did you learn?” 

Here are examples of actual participant responses to that question:

  • After a session on collaborative problem solving with students with behavioral difficulties: How to more effectively problem solve with students
  • After a session on co-teaching: Ways to divide up classroom responsibilities
  • After a session on teaching struggling learners: Some new strategies to work with struggling students

In my experience, about one-third to one-half of participant responses to that ubiquitous question are nothing more than restatements of the course title and thus similarly uninformative to an evaluator.

On the futility of asking “What did you learn?”

It’s challenging to get people to clearly articulate what they have learned on a feedback form distributed after a professional development session. Whether the question is asked immediately after the learning has taken place, or after some time has passed and the participant (in theory) has had time to process and apply the learning, the outcome (in terms of the data collected) is the same. People don’t seem to be able (I’m working under the assumption that they are indeed willing) to answer “What did you learn?” with the depth and richness of written language that would help professional learning planners make effective decisions about future programming. They’re not to blame, of course. It’s just as difficult for me to answer that question when I’m a participant.

Parents and teachers know this: When you ask a child a question and he or she answers with “I don’t know,” that response can have a whole range of meanings from “I can’t quite articulate the answer you’re looking for” to “I’m not certain I know the answer” to “I need more time to process the question” to “I don’t understand the question” to “I really don’t know the answer” to “I don’t want to tell you!” It’s no different for adults. Someone who answers “What did you learn?” by essentially restating the title of the PD session is in effect saying, “I don’t know.” As a PD evaluator, it is my job to figure out exactly what that means.

How else can we know what participants learned?

Of course, we’re talking about surveys – self-reported perceptions of learning. There are certainly other ways for evaluators to gain an understanding of what participants learned.

We can interview them, crafting probes that might help them more clearly articulate what they learned. Interviews provide dedicated time and the opportunity for participants to give the question their full attention. In contrast, surveys are often completed when participants feel rushed at the end of a PD session, or at a later time, when they are fitting survey completion in among a myriad of other job duties.

We can observe participants at work, looking for evidence that they are applying what they learned in practice, thus getting at not only what they may have learned, but also what Kirkpatrick called “behavior” and Guskey calls “participant use of new knowledge and skills” (see below for more on these evaluators and their prescribed levels of PD evaluation).

Both interviews and observations, however, are considerably more time consuming and thus less feasible for an individual evaluator.

As an alternative, I wondered what might happen if rather than asking, “What did you learn?” we asked, “How did you feel?” Learning has long been highly associated with emotions. (For more on learning and emotions, check out this article, this one, and this one, and look at the work of John M. Dirkx and Antonio Damasio, among many others.) Would PD participants be better able to articulate how they felt during PD, and would their learning then become evident in their writing?

What happens when we ask a different question?

Well, a different set of questions, really. A colleague and I created a new feedback form to pilot with PD participants in which we seek to understand their learning through a series of five questions. We discussed at length what it is we want to know about participants’ learning to inform our programmatic decisions. We concluded that it is not necessarily the content (i.e., if participants attend a course on instructional planning, then we expect they will learn something about instructional planning), but rather whether participants experience a change in thinking, whether they feel they have learned a great deal, and whether or not the content is new for them.

We begin with these three questions, using standard 5-point Likert response options (strongly disagree, disagree, neither agree nor disagree, agree, strongly agree):

  1. This professional learning opportunity changed the way I think about this topic.
  2. I feel as if I have learned a great deal from participating in this professional learning opportunity.
  3. Most or all of the content was a review or refresher for me (this question is reverse-coded, of course).

We then ask participants about their emotions during the session with a set of “check all that apply” responses:

During this session I felt:

  • Energized
  • Renewed
  • Bored
  • Inspired
  • Overwhelmed
  • Angry
  • In agreement with the presenter
  • In disagreement with the presenter
  • Other

Finally, we ask participants to “Please explain why you checked the boxes you did,” and include an open essay box for narrative responses.
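For anyone curious how responses to an instrument like this might be scored, here is a small pandas sketch. The column names and responses are hypothetical, made up for illustration; only the reverse-coding of the third item and the tallying of the “check all that apply” emotions reflect the design described above.

```python
# A small, hypothetical example of scoring this kind of feedback form.
# Column names and responses are made up for illustration.
import pandas as pd

likert = {"Strongly disagree": 1, "Disagree": 2, "Neither agree nor disagree": 3,
          "Agree": 4, "Strongly agree": 5}

df = pd.DataFrame({
    "q1_changed_thinking":   ["Agree", "Strongly agree", "Neither agree nor disagree"],
    "q2_learned_great_deal": ["Agree", "Agree", "Disagree"],
    "q3_mostly_review":      ["Disagree", "Strongly disagree", "Agree"],
    "felt": ["Energized;Inspired", "Renewed;In agreement with the presenter", "Bored"],
})

# Convert Likert labels to 1-5 scores, then reverse-code item 3
# so that higher always means "more new learning."
scores = df[["q1_changed_thinking", "q2_learned_great_deal", "q3_mostly_review"]].apply(
    lambda col: col.map(likert))
scores["q3_mostly_review"] = 6 - scores["q3_mostly_review"]

print(scores.mean().round(2))                     # average score per item
print(df["felt"].str.get_dummies(sep=";").sum())  # how many times each emotion was checked
```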

I’ve only seen data from one course thus far, but it is quite promising: participants were very forthcoming in their descriptions of how they felt. Through their descriptions, we were able to discern the degree of learning and, from many responses, how participants plan to apply that learning. We received far fewer uninformative responses than in previous attempts to measure learning with the one direct question. As we continue to use this new set of questions, I hope to share response examples in a future post.


Image Credit: Collette Cassinelli via Flickr

Where does participant learning fit into the PD evaluation picture?

Donald Kirkpatrick famously proposed four levels of evaluation for training programs in the 1950s – essentially measuring participants’ 1) reactions, 2) learning, 3) behavior, and 4) results. Thomas Guskey later built upon Kirkpatrick’s model, adding a fifth level – organizational support and learning (Guskey actually identifies this as level 3; for more on this topic, see this aea365 post I wrote with a colleague during a week devoted to Guskey’s levels of PD evaluation, sponsored by a professional development community of practice).

For hardcore evaluation enthusiasts, I suggest Michael Scriven’s The Evaluation of Training: A Checklist Approach.

What other questions could we ask to understand PD participants’ learning?

I welcome your suggestions, so please add them to the comments!

 



Outputs are for programs. Outcomes are for people.

A recent experience reviewing a professional organization’s conference proposals for professional development sessions reminded me of the challenge program designers/facilitators encounter in identifying and articulating program outcomes. Time after time, I read “outcome” statements such as: participants will view video cases…, participants will hear how we approached…, participants will have hands-on opportunities to…, participants will experience/explore…and so on. What are these statements describing? Program activities. Program outputs. What are they not describing? Outcomes.

OUTPUTS  ≠  OUTCOMES

OUTPUTS are the products of program activities, or the result of program processes. They are the deliverables. Some even use the term interchangeably with “activities.” Outputs can be identified by answering questions such as:

  • What will the program produce?
  • What will the program accomplish?
  • What activities will be completed?
  • How many participants will the program serve?
  • How many sessions will be held?
  • What will program participants receive?

OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question:

How will program participants change as a result of their participation in the program? 

In other words, what will program participants know, understand, acquire, or be able to do?

Change can occur for participants in the form of new or different levels of:

  • awareness
  • understanding
  • learning
  • knowledge
  • skills
  • abilities
  • behaviors
  • attributes

When I teach graduate students to compose outcome statements, I ask them to visualize program participants and think about what those participants will walk out of the door with (after the program ends) that they did not enter the room with.


Image credit: Melina via Flickr

An important distinction is that we can directly control outputs, but not outcomes. We can control how many sessions we hold, how many people we accept into the program, how many brochures we produce, etc. However, we can only hope to influence outcomes.

Let me offer a simple example: I recently took a cooking course. I was in a large kitchen with a dozen stations equipped with recipes, ingredients, tools, and an oven/range. A chef gave a lecture, and demonstrated the steps listed on our instruction sheets while we watched. We then went about cooking and enjoying our meals. As I left the facility, I realized that for the first time, I actually understand the difference among types of flour. Yes, I can explain the reasons you may choose cake flour, all-purpose flour, or bread flour. I know how much wheat flour I can safely substitute for white, and why. I can describe the function of gluten. I can recognize when active dry yeast is alive and well (or not). I am able to bake a homemade pizza in such a way that the crust is browned and crispy on the outside, yet moist on the inside. I could do none of these things prior to entering that cooking classroom. If the program included an assessment of outcomes (beyond the obvious “performance assessment” of making pizza), I’m certain I could provide evidence that the cooking course is effective.

However, if the sponsoring organization proposed this course including a list of “outcomes,” what might they be? See if this sounds familiar: Participants will hear a chef’s lecture on flour types and the function of gluten, and will see a demonstration of the pizza-making process. Participants will have the opportunity to make enough homemade dough to bake two 12″ pizzas with toppings of their choice and will use a modified pizza stone baking process.

We know now that those are indeed potential outputs, or program activities. If program personnel decide to evaluate this course based on the articulation of these “outcomes” (which, of course, are not outcomes), what might that look like?


Image credit: djbones via Flickr

Did participants hear a lecture? Check. Did they see a demonstration? Check. Did they make dough and bake pizzas? Check. Success??? Perhaps. But what if those responsible for the program want to improve it? Expand it? Offer more courses and hope that folks will register for them? Will knowing the answer to these questions help them? Probably not.

To be sure, outputs can be valuable to measure, but they don’t go far enough in helping us answer the types of evaluative questions (e.g. To what degree did participants change? How valuable or important is that degree of change? How well did the program do with regard to facilitating change in participants?) we need to make programmatic decisions for continuous improvement.

Outcomes are much more difficult to identify and articulate, but well worth the effort, and best done during the program design and evaluation planning process.

For more on outcomes vs. outputs, check out these links:

More about outcomes – why they are important…and elusive! (Sparks for Change blog)

Outputs vs. Outcomes (University of Wisconsin Extension)

It’s Not Just Semantics: Managing Outcomes vs. Outputs (Harvard Business Review)



Ask a Brilliant Question, Get an Elegant Answer?

With a properly-framed question, finding an elegant answer becomes almost straightforward.

-Stephen Wunker, Asking the Right Question

Here’s what every successful leader, coach, marketer, or teacher knows: You cannot overstate the importance of asking the right questions.

It’s no different for evaluators.

Evaluation questions form the foundation of solid evaluations, while also serving to frame and focus the work. Starting with the right questions can impact the effectiveness and appropriateness of key programmatic decisions that rest on evaluation results. In fact, clearly defined evaluation questions are part and parcel of a successful evaluation – that is, one that is actionable and is used.

That said, evaluations are carried out all the time without the benefit of well-articulated, focused questions. How often have you been asked to “evaluate the program,” or “study the program” to “find out whether it’s working” or “to see if it’s effective?”

Taken at face value these are perfectly legitimate questions, but they have no practical utility until “working” or “effective” are clearly defined – until the questions are sufficiently focused to inform and direct the evaluation design. What does “working” look like? How will you recognize “effectiveness”? When I pushed to get a definition of “working” before beginning an evaluation, the client told me they wanted to know “whether the program has made a difference” for the target population. “What kind of difference?” I asked. It took a great deal of conversation before we were able to settle on what exactly that meant.

This post explores evaluation questions from three angles:

  • Understanding the nature of evaluation questions
  • Three critical functions of evaluation questions
  • Considerations for crafting quality evaluation questions

Understanding the nature of evaluation questions

Evaluation questions are the broad, overarching, or “big picture” questions an evaluation is expected to answer. If answered and actionable, they help us (or help us support those who) make key programmatic decisions. They are distinct from the individual questions asked on measurement instruments such as surveys or interview protocols, although they can often overlap.

Evaluations often include descriptive questions such as:

  • To what extent was the program implemented as designed?
  • How many of the target population did we reach?
  • Did the program meet its stated goals?
  • What outcomes were achieved?

However, it is also important to ask “explicitly evaluative questions” (for a detailed discussion of these, read Actionable Evaluation Basics by E. Jane Davidson):

  • How well was our program implemented?
  • How adequate was our reach? Did we reach the right target population?
  • How important were the outcomes? How valuable are they?
  • How substantial was the change we observed?

Words like these – well, adequate, important, valuable, substantial – signal that a question is evaluative. Other words such as meaningful, significant, or appropriate might be found in evaluative questions as well.

Three critical functions of evaluation questions:

Just as a building’s foundation functions to bear the load of the building, anchor it against potentially damaging natural forces (e.g., earthquakes), and shield it from (also potentially damaging) moisture, evaluation questions can be thought of as having three similar functions:

1.) They bear the load of the evaluation. The evaluation approach and the social science theories that inform it, along with choices about evaluation design and selection of measures, rest squarely on the evaluation questions. These questions set the purpose for the entire evaluation.

2.) They anchor the evaluation against potentially damaging “forces.” What could potentially damage an evaluation? Looking for the wrong indicators (i.e., those most readily observable), selecting the wrong measures (i.e., the most readily available, cheapest, easiest to administer), collecting the wrong data, engaging the wrong stakeholders (i.e., those easiest to access), sampling the wrong respondents…You get the picture. Leveraging evaluation questions as the anchor lends critical purpose to all the choices you make as you craft the evaluation.

3.) They shield the evaluation from that which can seep in slowly and destroy it: distrust, disdain, fear, misplaced expectations. These insidious, dysfunctional attitudes toward evaluation can fester and erupt at any time in the evaluation life cycle. Clearly articulated questions give the evaluator the ability to defend against them and the potential to address them productively.

Considerations for crafting quality evaluation questions

There’s no dearth of good advice available. Here’s some I’ve assembled over the years:

Considerations for developing evaluation questions:

  • What are the information needs?
  • Whose information needs are going to be considered?
  • What do you need to know about the program for the purpose of the evaluation?
  • What do you need to know in order to make (or support others to make) necessary decisions about the program?
  • Will evaluation questions be determined by the evaluator, program personnel, other stakeholders, etc.? Will they be developed collaboratively?

Community Toolbox offers this:

You choose your evaluation questions by analyzing the community problem or issue you’re addressing, and deciding how you want to affect it.  Why do you want to ask this particular question in relation to your evaluation?  What is it about the issue that is the most pressing to change?  What indicators will tell you whether that change is taking place? 

The venerable CDC describes their strategy to help evaluators and offers a checklist to assess your evaluation questions: To help get to “good questions” we aggregated and analyzed evaluation literature and solicited practice wisdom from dozens of evaluators. From these efforts we created a checklist for use in assessing potential evaluation questions.  

Better Evaluation offers this advice: “Having an agreed set of Key Evaluation Questions (KEQs) makes it easier to decide what data to collect, how to analyze it, and how to report it,” and links to additional resources for crafting key evaluation questions here.


Sure, “detailed questions are not as exciting as brilliant answers,” claims Wunker. But you’ll never get brilliant answers without them, says Sheila.

Enjoy these: 15 Great Quotes on the Importance of Asking the Right Question

Image credits: gak, play4smee, existentialism, and patterned via Flickr.



Every Day is Thanksgiving Day!

Americans are celebrating Thanksgiving today, and while my personal practice is to give thanks every day, today certainly feels like the right day to share thanks to all who subscribe, follow, read, and comment on this blog.

In addition to my wonderful family and friends, good health, and other gifts, I have been blessed with the opportunity to enjoy my work. It hasn’t always been this way, but for many years now, I have truly enjoyed my work. I generally get up in the morning, look forward to going in, and take great pleasure and pride in the work that I do. In fact, most days I consider it fun.

I’m thankful to have the opportunity to work in a place (i.e. my fields of education and evaluation) where I am both veteran and novice, expert and apprentice, teacher as well as learner. ALWAYS a learner.

My fields offer no dearth of opportunities to learn and grow from cutting edge thinkers and doers who graciously offer their work through websites, tweets, slide shows, videos, webinars, blogs, books, articles, courses, and conferences, and for all of them, I am thankful. I take advantage of the opportunity to learn through reading, writing, watching, listening, and asking questions of colleagues in my fields every single day with the hope that I too, can offer something of value.


Image credit: nyoin via Flickr

To all of you, I offer my sincerest thanks for your time and attention as you continue to read and comment on this blog. Happy Thanksgiving everyone!
