Friday, February 9

Is it possible to have a universal argument, "Simulations work better than traditional formal learning programs"?

I am often asked to prove that simulations work better than traditional formal learning programs. As a tiny bit of background, I wrote this a few years ago in my column in Online Learning magazine:

"People often ask me what the return on investment (ROI) of e-learning is. I tell them it's 43 percent. How did I come up with that figure? Truth be told, I made it up. That's because knowing the ROI of e-learning is sort of like knowing that the average depth of the ocean is 2.5 miles. Interesting, but not very helpful to a ship's captain."

Given that, I have done some studies of both my own simulations (Virtual Leader) and others' (Ngrain). I have interviewed countless practitioners, users, and sponsors. I have been involved in surveys and studies. I have argued that simulations come in genres (such as branching stories), and that analysis should be done at the genre level.

And I still have no idea how to approach that question. I don't even know who is qualified to measure effectiveness, or even to define what effectiveness is. Even within a nominally neutral body, the people running the study tend to be advocates.

So what do people think? I am not asking, are simulations more effective? I am asking, what is the simplest argument that you would find compelling?

11 comments:

Matthew Nehrling said...

Universal? No. But then I don't believe that anything in 'learning theory' can be summed up in a blanket universal law. That said, while each training situation is unique and there could be cases where traditional training outperforms simulations, I am a firm believer that the closer training is to job performance (i.e., the closer it simulates the work environment), the greater the comprehension.

We recently had a chance to compare synchronous and asynchronous training on the same system through an L3 survey. One unintended consequence was that the asynchronous training was written in a job-simulation format, while the synchronous training was a traditional classroom course that focused simply on tasks.

We are finding that in the L2 studies there is very little difference in immediate comprehension; in the L3 studies, however, there is far greater knowledge retention and application in the asynchronous, simulation-style condition than in the synchronous, traditional condition.

Of course, other factors could play into these results, but for any system or environment, running a pilot program to compare the two options should give you a clearer picture of what is best for the given condition.

Harold Jarche said...

Why do you suppose that the military spends billions of dollars on making training as realistic - especially using simulation - as possible instead of keeping soldiers in classrooms?

Karl Kapp said...

This is, perhaps, the oldest instructional technology argument there is: is the new technology...in this case immersive learning simulations, or ILS...better than existing technologies, or even stand-up instruction?

The answer for radio, television, video, and computer-based instruction is that when well-designed instruction is created and delivered, the medium doesn't matter. It is effective regardless of the technology.

The medium is NOT the message. So a well-designed simulation may be better than poorly designed classroom instruction, but given an equally well-designed classroom experience and a simulation...it will always be a tie.

The problem is that most classroom instruction is NOT well designed. It is an instructor lecturing at learners...not a good design...and a lot of e-learning is poorly designed too...page-turning text on screen with no instructional strategies.

Simulations tend to be better designed because not everyone can create them, and because they cost so much money people spend the time to design them properly. Although our "so easy an SME can do it" tools will change that shortly.

If a simulation is designed properly, it can have a higher ROI because of reusability but not because it is inherently better than another method of instruction.

The really good thing about humans (or really bad thing from a research perspective) is that given even the crappiest of learning materials, if we feel the need to learn something, we'll learn it.

Mankind has certainly made leaps and bounds of progress, from fire to railroads to airplanes, with classroom instruction and traditional mentoring...there must be something to standing up in front of a group and conveying knowledge.

So, I guess, to make a long answer even longer...my response is that simulations and traditional formal learning programs can be equally effective. Sell on the design behind the program, not the delivery mechanism.

Anonymous said...

Dr. Kapp has raised a valid point-- "simulation" on its own is not better or worse than any method. Karl makes the point that good instructor-led learning is probably more effective than bad simulation, which is tough to argue with. So to build on that thought, the question becomes "simulation works better…for what?"

I don't think simulation is universally better than all other learning methods. I think simulation is inherently an application-based methodology. There may well be more effective and efficient ways to transfer knowledge. But once that knowledge is transferred, how is it applied and contextualized? In my experience, this is where (good) simulation has the most impact, and is perhaps more effective than any other method I have seen. Application and contextualization are the most important steps in turning learning into behavior, and behavior change is where any learning effort will have the most impact.

So I might change the thesis--it's not whether "simulations work better than traditional formal learning programs," but rather whether "simulations have greater impact." Can impact be measured? Absolutely, although it is more challenging, since it can't be measured through quizzes or by evaluating retention. I'm aware of at least one behavioral study that compared the impact of simulation learning against a control group who received traditional learning methods. The results, while not unexpected, are eye-opening.

Anonymous said...

Rich Mesch says, "I’m aware of at least one behavioral study that compared the impact of simulation learning against a control group who received traditional learning methods." What is that research, please?

Matthew Nehrling said...

Here is a study similar to what Rich mentioned:
http://www.experiencebuilders.com/eb/why/white_papers/SimulationEffectiveness.pdf

Unknown said...

Interesting post, Clark. For me, the primary question rests on flawed logic: the idea that simulations could be better than traditional education programs implies One True Knowledge and One True Learning Method to Rule Them All.

My answer to the question would be yes, sometimes simulations are best. Sometimes a group conversation with peers is best. Sometimes lecture is best. (Really, it can be.) It all depends on what you want people to learn and what constraints you have on the learning environment.

The challenge then becomes: how do we devise the decision matrix for which method works best for which learning outcome? That is a real bear of a problem, but one that I think most well-practiced education designers solve implicitly.

Also, to address the white paper mentioned in Matthew's post: it confounds simulation (computer-based) with simulation (outdoor activity) and consequently draws blurry conclusions. The paper reads as a marketing tool rather than research. There has been research in the Journal of Applied Psychology, mostly comparing military simulation and lecture; little difference was found, IIRC.
JOE

Peter Isackson said...

Harold's comment is as pertinent as you can get. But the answer it implies isn't necessarily qualitative; it may be quantitative. For a particular learning goal, there may be a penury of classrooms and teachers relative to the number of learners, in which case simulation would at the minimum be cheaper and more viable, though not necessarily better. At the maximum, it would be the only way of meeting the demand.

For pilot training, simulation doesn't compete with classroom training but with hands-on experience operating an aircraft. Simulation is dreadfully expensive, but cheaper and less risky than a real plane (even if it can't be amortized by transporting people), so it really is a no-brainer.

I go along with those, like Karl, who eschew direct comparisons that limit the consideration of choice to two methods and/or settings. We learn in all sorts of ways and with all sorts of means. Limiting variety limits the scope of learning. One of the things we always need to learn when learning is how other people think about learning, both those who have something to "teach" us and those who are learning in order to apply it professionally or operationally. Call it the social added value of learning when groups of people and communication are involved in some essential way, which may -- and probably should -- be informal.

Declaring a winner in a contest between two or more "contenders" for top honors limits variety. It's usually a commercial ploy to draw attention to a proprietary product, method, etc. I'm still hoping that we can move beyond that kind of reasoning and begin looking at learning as the mobilizing of multiple resources for complex operational outcomes.

Phil Charron said...

Long Comment:
I think the subtext of many of these comments is that selecting the appropriate medium is always the first step. That is absolutely the case. All too often, producers of learning materials try to fit their square pegs into their clients' round holes. In other cases, clients fully recognize that offering a host of solutions to the audience will address multiple learning styles.

The sad truth is, at least in the commercial world, our clients have limited budgets and timelines. This causes them to look at all of our offerings and say, "I get it, but which one will have the most impact?" The right answer is usually, "It depends on so many things - the audience, your goals, timeline, budget. It would take a consulting gig to determine that."

There probably would never be enough data to support every combination of delivery method, audience, content type, etc. But a few well-directed head-to-head studies can show that certain learning methodologies under certain conditions have specific tendencies.

Armed with this data, the smart Performance Consultants will treat every engagement as a consulting gig, selecting the right-shaped pegs for the right-shaped holes. If my client has a need for simple comprehension and I pitch them a sim, I'm doing them a disservice. If they want their learners to have a safe environment in which to practice behaviors and experience the consequences of those behaviors, and I don't mention activities that are sim-based*, I'm doing them the same disservice.


Short comment:
In the end, many clients will still ask for data to justify doing something 'different'.


*not necessarily electronic sim... could be advanced role playing, live assessment centers, paper-based sim, etc.

Anonymous said...

My first real distance-learning project was mainframe CBT for what was then Amtrak's new reservation system. We couldn't use the live system to train on, so we mimicked it a lot -- not a full simulation (too complex), but realistic exercises.

We coupled that with "training trains" -- practice trains based on real trains, using real schedules, accepting real fares. After training, reservation agents and ticket clerks could practice virtually any command in the reservation system on the training trains, getting the real system's feedback. The only limit was that you couldn't print actual tickets for a training train reservation.

A long point, but I see the training trains as at least a first cousin to a good simulation: they provided a rich, realistic way to apply basic skills and knowledge, and to acquire fluency, while safeguarding against serious real-world consequences.

(Tangent: I'm dumbfounded time and again that an organization will develop a complex computer system essential to the performance of hundreds of jobs but not allow a practice mode. They get a practice mode anyway; the cost of this "discovery learning" just isn't broken out.)

Karl Kapp's concern about "so easy an SME can do it" rings true for me -- it's the Gresham's Law of training.

Or, it's "feeding the elephant," as a colleague once described her company's leading-edge network of live-via-satellite learning centers of excellence: inappropriately applying some tool or technology because you have it lying around the corporation, because you like it, or because somebody spent a lot of money on it and you need to amortize that.

Clive Shepherd said...

I'm sure they can be more effective, when they're used to solve the right learning problem, for the right audience and when they are feasible given the practical constraints. This is likely to be a minority of cases, but still significant.

By the way, surely simulations have been around long enough to be regarded as 'traditional' approaches.