Tuesday, March 1

Assessing Learning Initiatives

I'm very happy to be working with the Big Question Thought Leaders. We've added some European perspective to the group with the addition of Kasper Spiro.

For March the LCBQ is:

How do you assess whether your informal learning, social learning, continuous learning, and performance support initiatives have the desired impact or achieve the desired results?

As training organizations increasingly focus on improving performance through new kinds of learning/performance initiatives, how do we go about making sure those initiatives have impact and achieve results? The focus here is on what people are doing today and on what makes sense to do. We'd love to hear examples.

By the way, if you have an idea for what all of this should be called, we would be curious to hear about that as well.

How to Respond:

Option 1 - Simply put your thoughts in a comment below. This may be hard given the complexity of the topic.

Option 2 - Tweet your thoughts using the hashtag: #LCBQ

We will do our best to collect tweets around the topic.

Option 3 -

Step 1 - Post in your blog (please link to this post). We recommend including #LCBQ in your title to help us.

Step 2 - Put a comment in this blog with an HTML-ready link that I can simply copy and paste (an HTML anchor tag). Since I will only copy and paste, I also recommend you include your NAME immediately before your link. So, it should look like:

Tony Karrer - e-Learning 2.0

or you could also include your blog name with something like:

Tony Karrer - e-Learning 2.0 : eLearningTechnology

Posts so far (and read comments as well):

23 comments:

Alex Taylor - TJ Taylor said...

My take on this question is that we are confusing the tool with the content.
After seeing the words impact or results many people are naturally going to start thinking about Kirkpatrick’s levels.
However, social learning, EPSS or informal learning are all based around how we deliver learning – the how – rather than what we’re learning – the what, the result.
The impact or the results referred to in the question should be about the content. The delivery mechanism is involved only when it comes to preferences or logistics, or in some cases the type of learning required.
It's like asking how we assess whether our holiday had the desired impact or achieved the desired results. Sure, whether we travelled to our destination by car, plane or train will influence the quality of our holiday (queues or otherwise), but it doesn't determine the impact on our psyche of our sunbathing or hiking.

Tony Karrer said...

Alex - I agree that the answer will depend on the particular situation, but there are likely some strategies that can be used to assess results. Otherwise, we would be saying that each case is completely unique and there are no reusable strategies for assessing impact across different initiatives.

Or maybe I'm missing your point?

Alex Taylor - TJ Taylor said...

Hi Tony,
That's not quite what I was trying to say - let me try to rephrase.
I think it would be clearer if I used the word methodology. I see social learning, EPSS etc. as tools, not as methodologies.
Yes, we can measure methodologies and the degree of application of content, but the tool is just the tool. There is no impact or degree of effectiveness of the tool, just the methodology.
We can use a whole range of methodologies in the classroom, but the classroom is just the space, it's not a methodology. We can't measure how effective the classroom was, only how effective the methodology we used was. There can be great instructors (in a classroom) and terrible instructors, in the same way there can be great methods and terrible methods, but only bad craftsmen blame their tools!
I hope that's clarified where I'm coming from, or am I getting myself tied in knots here? :)

Tony Karrer said...

Alex - okay, I get the issue now; thanks for clarifying. I'm not asking people to say how they will assess the tools as you have defined them. Rather, I'm wondering how you would go about assessing results when one of these tools is used for an initiative - which, as you say, implies you are doing more than generically applying informal learning, social learning, or one of the other tools. So, yes, there will be additional levels of detail - you call them methodologies; I'm not quite sure I buy that term, but fine.

I guess I'm wondering whether we are just discussing syntax and terminology, or whether it really is unclear what we were trying to ask with the question.

Eloise said...

It got long so:

Eloise Pasteur - Assessing Informal Learning

jay said...

My response to the Big Question.

Why single out informal, social, and continuous learning? We need better approaches for assessing all learning initiatives. People who think the old 1-2-3 is adequate for assessing formal learning are kidding themselves.

The purpose of learning is behavior change. Up front, we need to secure our sponsor's agreement on what behavior we're trying to change, why it matters, and what evidence will credibly demonstrate that the new behavior is taking place. Knowing something is not enough; we're after people doing things. The behavior change is best expressed in business terms.

You need to wait a while before conducting the assessment. Smile-sheets and test scores prove nothing because they are administered before the forgetting curve sets in. The reason only 10%-15% of what is learned shows up on the job is that most of what you learn disappears rapidly unless it's reinforced by reflection and practice. That's why it's a good idea to wait three to six months to see what sticks.

When the time is ripe, there are several approaches to assessment. First is to use the yardstick the sponsor agreed to upfront. Did the needle move or not? This is often insufficient, because learning initiatives are never isolated acts. Sure, we had sales training on the new product, but we also had a publicity campaign, the product was better than the competition’s, and everyone was enthusiastic. How can we isolate the impact of the learning? Sometimes we can’t, because learning was indeed one component of a multi-pronged solution.

However, you can find out a lot by interviewing a sample of people. Ask them what they had to know to succeed and how they learned it.

Some would suggest that this is not scientific, that you would have to interview everybody, and nobody’s got time for that. It’s a bogus argument. I used to work in public opinion polling. You can generalize results for the whole group by interviewing a small sample of people. A formula can determine what’s statistically significant.
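
For illustration, here is a minimal sketch in Python of the kind of sample-size formula being alluded to, assuming a 95% confidence level and a ±5% margin of error (illustrative choices, not figures from the comment):

import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    # Standard sample-size formula for estimating a proportion, with a
    # finite-population correction. z = 1.96 corresponds to 95% confidence;
    # p = 0.5 is the most conservative assumption about response variability.
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)          # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite correction

# For a group of 2,000 learners, a small sample suffices:
print(sample_size(2000))  # -> 323 interviews, not all 2,000

The point stands: interviewing roughly three hundred properly sampled people generalizes to two thousand within a known margin of error.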

Furthermore, asking open-ended questions yields a lot more meaningful information than check-boxes and rating scales. It yields stories and anecdotes that are more persuasive than percentages.

It would be sweet if you could punch a button on an LMS and get an instant evaluation. That’s a pipe dream. An LMS measures activity, not outcomes.

Besides, as we noted earlier, results are in the eye of the sponsor. This is why no training department can ever claim to have reached Level 4; they don’t own the yardstick by which Level 4 is measured.

Unknown said...

I put my response in a blog: http://kasperspiro.com/2011/03/03/impact-of-informal-learning-output-learning-lcbq/

Unknown said...

@Alex - I quite agree with you; I think it is exactly what I tried to address in my response (see blog).

Anonymous said...

Jay - I totally agree with your comment/post. Focusing on behaviour change should always be foremost in mind.

I think it is hard for people who are trying to convince their sponsors that NOT running a formal training course can still change behaviour - and, if so, to explain how they will go about measuring that. I think the discussion up front about what the goal is and how you'll know you got there might be harder for "internal" consultants, who don't always get to have that conversation the way that those of us who are "external" do.

What do you think?

Jeff Goldman said...

Eloise,

I added this comment to your blog too -

Thanks for the great response and some practical ideas for assessing informal learning in an org.

In a nutshell, your "Check what questions are asked and what guidance is given" really sums up what I think we should first look for when gauging what people are getting from informal learning.

Thanks,

Jeff

Yahswe Sukuyugi said...

Thanks for the links too.

Anonymous said...

For me, the whole point of learning & development initiatives is to support performance - so regardless of the mode or approach taken to learn something, the assessment of its impact ultimately rests on the performance stats (whatever they may be).

It's clear that *how* you learn something is becoming less important over time. The proof of the pudding is in the eating, so they say.

However, the situation becomes less comfortable when the performance stats are not being met. The question that must be asked is: Why not?

Isolating the respective success and failure of informal learning, social learning, continuous learning and performance support initiatives would be a stretch. In the absence of strong empirical evidence reported in the literature (which, I suggest, is unfortunately the default position), assessment probably relies on surveying employee use of various modes of learning and correlating that to their individual performance stats, as sketched below. Comparing the stats of populations within the organisation who do and do not utilise particular learning tools and platforms is another idea.
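
As a rough sketch of that usage-to-performance correlation in Python (all names and numbers are hypothetical, since no dataset is given here):

from statistics import correlation  # requires Python 3.10+

tool_hours = [0, 2, 5, 1, 8, 3, 6, 0, 4, 7]             # hypothetical hours of tool use
perf_score = [61, 70, 78, 66, 85, 72, 80, 59, 74, 83]   # hypothetical performance stat

r = correlation(tool_hours, perf_score)  # Pearson's r
print(f"Pearson r = {r:.2f}")  # values near +1 suggest a strong positive association

# As the comment itself cautions, correlation does not isolate impact:
# comparing user and non-user populations is a sanity check, not proof.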

At the end of the day, though, I believe the informal approach to L&D is going to be much greyer than we would like.

Clive Shepherd said...

Clive Shepherd doesn't get it

Anonymous said...

Interesting take, Clive. I agree with pretty much everything you say in your post.

However I think my take on the question was slightly different. I had in mind an example like Facebook. It's one of those old chestnuts: Does Facebook support learning or is it just a frivolous waste of time and productivity?

I read the Big Question as asking, how would you find out?

Anonymous said...

I have tried to unpack formal and informal knowledge, and learning, in this article (http://k-m-etaphors.wikispaces.com/Knowledge+Process+Cycle+), which might be of use - at least the diagrams.

And I have made an attempt to show how complex, emergent knowledge and experience might be assessed, based on participation in a digital storytelling workshop, in this wiki post (http://k-m-etaphors.wikispaces.com/Assessing+Complexity) which might be more to the point.

Guy W. Wallace said...

http://eppic.biz/2011/03/09/my-answer-to-the-big-question-for-march-2011-at-learning-circuits-blog/

Philip said...

Alex Taylor says that there is no impact or degree of effectiveness of the tool, just the methodology.

May I take the liberty of offering a more irrational view? There are lots of reasons why, as learners, we say we learn differently or more/less effectively depending upon the tools used. Like it or not, as learners we do not always think or act rationally, so there is a good probability that content delivered via the classroom is assessed as being more effective (using business "results" as a measure) than the same content in an e-learning or social media format. Why? Some learners may get some kind of mental blockage around a specific delivery mechanism which impacts their learning processes, and hence their performance/results, even if it is based on irrational reasoning - perhaps a personal prejudice formed by experience (e.g. "I've always hated computers, and therefore classroom is always best").

I guess it will take time and education to get to the point where learners are not irrationally (or rationally) biased towards or against different delivery mechanisms, but in the meantime we will surely have difficulty controlling for this sort of bias when assessing learning initiatives delivered in an innovative and "unusual" way.

Felecia said...

I am responding to this post as a part of my graduate studies in instructional design. In response to the big question, I don't think it's possible to accurately assess informal learning or social learning. In my experience, these types of learning have always been developed and offered in conjunction with other 'traditional' methods of education delivery. Regardless, I think the Kirkpatrick evaluation would still be applicable. Whether you decide to use Kirkpatrick's four levels in reverse as a planning method or in the original order, learning in the corporate world will still be evaluated by the bottom line.

Dianne said...

Hello,

I am a student at Roosevelt University completing my MA in Training & Development. Assessment (evaluation) has been one of my greatest concerns as I have become more "educated" in T&D. Isn't how well the training transfers our number one objective, no matter what learning methodology we are using?

I am currently working in the healthcare field, and 75% of employee training is done via Standard Operating Procedures. Depending on the employee's role, they could have anywhere from 10-40 SOPs that they are required to "read and understand." Not my favorite method of knowledge transfer.

One of my current assignments is to develop new training for the role of complaint handlers based on the data I collect from a gap/needs analysis. My Director is also requiring me to start developing training metrics, so that we can compare current performance against future performance (after the new training has been developed).

We currently use metrics that show many different areas of performance (which presently are not meeting the desired outcomes). In addition to the gap/needs analysis (done via interviews with managers and employees), I will be using these performance metrics to determine what key areas need to be focused on in redeveloping the complaint-handling training. While I believe performance indicators are a great evaluation method, I would also like to consider interviewing managers at perhaps three months after completion of training, or pre- and post-testing methods.
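
As an illustration of that pre- and post-testing idea, here is a minimal Python sketch (all scores hypothetical); a paired t-test is one common way to check whether the average per-person gain is larger than chance would explain:

from statistics import mean, stdev
from math import sqrt

pre  = [62, 58, 71, 66, 60, 69, 64, 57]   # hypothetical pre-training scores
post = [70, 66, 75, 72, 68, 74, 73, 65]   # same handlers after training

diffs = [b - a for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic
print(f"mean gain = {mean(diffs):.1f}, t = {t:.2f}, df = {len(diffs) - 1}")

# Compare t against a t-table at df = n - 1; with samples this small,
# the three-month manager interviews are a useful qualitative complement.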

I would love to hear how others are handling a similar situation.

Thanks,

Dianne

Taruna Goel said...

Taruna Goel - My response to the Big Question here - Assessing Informal Learning

Nightwyrm said...

Apologies for the delay all.

Glenn Hansen - Measuring the impact of learning initiatives

GED online said...

Totally agree with Alex - he has a point throughout his whole discussion.

Mike Petersell said...

My response to April's Big Question can be read at the Many Ways to Learn blog: Addressing Stakeholders Who “Want It Now” #LCBQ http://bit.ly/iEEDp9