Economic Benefits of Corporate e-Learning
Hall and LeCavalier (2000b) summarized some firms' economic savings as a result of converting their traditional training delivery methods to e-learning. IBM saved US $200 million in 1999, providing five times the learning at one-third the cost of its previous methods. Using a blend of Web-based (80 percent) and classroom (20 percent) instruction, Ernst & Young reduced training costs by 35 percent while improving consistency and scalability. Rockwell Collins reduced training expenditures by 40 percent with only a 25 percent conversion rate to Web-based training. Many other success stories exist. However, it is also true that some firms that have spent large amounts of money on new e-learning efforts have not received the desired economic advantages.
In addition to generally positive economic benefits, other advantages such as convenience, standardized delivery, self-paced learning, and variety of available content, have made e-learning a high priority for many corporations. Much of the discussion about implementing e-learning has focused on the technology, but as Driscoll (2001b) and others have reminded us, e-learning is not just about the technology, but also many human factors.
There is no doubt that corporations are increasing their emphasis on e-learning. Forrester, an independent research firm that helps companies assess the effect of technology change on their operations, interviewed training managers at 40 Global 2500 companies and found that all but one of them already had online initiatives in place (Dalton 2000). A survey of 500 training directors (Online Learning News, 2001a) clearly shows the new priorities:
* Sixty percent had an e-learning initiative
* Eighty-six percent had a priority of converting current instructor-led sessions to e-learning
* Eighty percent will set up or expand knowledge-management programs
* Seventy-eight percent were developing or enhancing electronic performance support
ASTD (2002), in its State of the Industry Report, noted that the year 2000 marked a new era of growth for e-learning. The events of September 11, 2001, have only accelerated this growth as organizations cut back on business travel, improve their security, and increase their e-learning efforts.
There is always a focus on the fiscal bottom line in corporate training; the comparatively low costs of e-learning are attractive. Even so, more corporations are looking at such options as blended learning, using more than one method of delivery (e.g., e-learning plus traditional classroom delivery of content, to increase training effectiveness), even if it raises costs. However, Clark (in Online Learning News 2001b) points out that many training managers are not sure how to find the optimal blend for their corporate training programs. He feels they are making decisions based on programs they are familiar with rather than on concrete information about which programs actually produce effective results.
Barron (2001) observes that learning technology providers have been increasingly able to "demonstrate cost-savings and broader benefits, develop integrated offerings, and propose innovative ways of applying e-learning." However, how do training managers decide which educational products and which learning technology providers actually produce effective results? How do they balance product quality with training costs? As the new corporate adage goes: "Wise training managers realize the bitterness of poor quality remains long after the sweetness of low price has been forgotten." To justify making decisions about training programs independently of training cost considerations, managers need concrete measures of program effectiveness. While there is no doubt that we see an increasing number of case studies showing success with e-learning, it is still difficult to find solid research measures of learner achievement in the specialized setting of a corporate training program.
Measuring Results
When we measure the results of e-learning, do we have to evaluate e-learning differently from traditional training methods? ASTD (2000a) points out that current training evaluation techniques and processes can be expanded to include e-learning as a method of delivery. Indeed, they conclude that the techniques to evaluate e-learning are the same as evaluating other training solutions.
How do we measure the results of e-learning, whatever the delivery method? Using Kirkpatrick's classic model, any training - traditional or e-learning - can be evaluated at four progressive levels (Kirkpatrick 1979). Level I: Reaction is a measure of learners' reactions to the course. Level II: Learning is a measure of what they learned. Level III: Transfer is a measure of changes in their behavior when they return to the job after the training program. Level IV: Results is a measure of the business outcomes that occur because they are doing their jobs differently. Phillips (1996) recommends the addition of a fifth level to Kirkpatrick's model where appropriate. The new Level V is a measure of the Return on Investment (ROI), the cost-benefit ratio of training. In this level, the Level IV data are converted to monetary values and then compared with the cost of the training program.
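Phillips' Level V calculation can be illustrated with a small, hypothetical sketch. The figures and function names below are illustrative assumptions, not data from any of the studies cited here; the sketch simply shows the arithmetic of converting monetized Level IV results and program costs into an ROI percentage:

```python
def roi_percent(program_benefits: float, program_costs: float) -> float:
    """Phillips Level V ROI: net benefits as a percentage of program costs.

    program_benefits: Level IV business results converted to monetary value.
    program_costs: fully loaded cost of the training program.
    """
    net_benefits = program_benefits - program_costs
    return net_benefits / program_costs * 100

# Hypothetical example: a program costing $80,000 whose measured
# business results are valued at $200,000.
print(roi_percent(200_000, 80_000))  # → 150.0 (every dollar spent returned $1.50 net)
```

The key step, and in practice the hardest one, is the conversion of Level IV outcomes (fewer errors, faster task completion, reduced travel) into the monetary value that feeds the numerator.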
In spite of all the enthusiasm in corporate training programs for e-learning, an American Society for Training and Development (ASTD) study (2000b) found that 67 percent of the training directors interviewed do not measure the effectiveness of their net-based programs at all. This study found that while 95 percent of surveyed organizations gauged trainees' reactions to courses (e.g., how well they liked the courses) [Level I measure], only three percent of respondents made a real effort to measure the business results of training programs [Level IV measure].
While it is still early to draw solid conclusions about measuring the effectiveness of actual learning that takes place as a result of e-learning - especially within corporate training programs - we can analyze the somewhat controversial results that have come out of mainly academic distance learning programs, using Kirkpatrick's Four Levels of Evaluation.
Level I - Reaction
Evaluation at this level measures how the participants in a training program feel about their experience. Are they satisfied with what they learned? Do they regard the material as relevant to their work? Do they believe the material will be useful to them on the job? This level, therefore, does not measure learning; it simply measures how well the learners liked the training session.
How do Learners Feel?
It is not hard to find learner enthusiasm for e-learning. The majority of 1,002 students who responded to an ecollege.com survey said they chose the online format because of the flexibility and convenience of the program. Comments included: "I love that I have the flexibility to continue to hold a full time job." "To study any time that best suits my busy schedule." "I travel extensively." "I was able to work with my instructor, receive tremendous technical support at all hours of the night and gain the same quality content and evaluation as my peers taking the same class on campus." The survey reports that 75 percent of those students online were employed and 68 percent of the learners worked more than 30 hours per week (ecollege.com 1999). This fact makes the study particularly relevant for corporate trainers, whose learners must fit e-learning into an already demanding work schedule.
Corporations are beginning to gather more data on how their trainees feel about the use of e-learning technologies. For example, the following results were obtained from an ASTD-Masie Center study involving the experiences of more than 700 e-learners (ASTD 2001):
* Eighty-seven percent preferred to take digital courses during work hours.
* Fifty-two percent preferred e-learning in a workplace office area.
* Eighty-four percent would take a similar e-course if offered again.
* Thirty-eight percent said they generally preferred e-learning to classroom training.
How do e-Learning Instructors Feel?
This question is really an alternative application of Level I evaluation, examining the trainer rather than the trainee. For example, in a recent survey conducted by ecollege.com (1999), 85 percent of the faculty said their students learned as effectively online as on campus. Some said their students did even better online than in traditional classroom settings. In a TeleEducation study of 130 faculty respondents, 62 percent said their students learned equally effectively in the online environment; 23 percent of faculty stated that their students learned better online; and 90 percent indicated that they were satisfied with online teaching. One faculty comment was: "Online students participate more, perform slightly better than, and are at least as satisfied as their on campus counterparts. From that I conclude that online education appears to be very effective!" (TeleEducation, 2000).
These are qualitative results - both from the learners and instructors - but what about quantitative results?
Level II - Learning
According to Kirkpatrick, learning is defined as the principles, facts, and techniques that are understood and absorbed by trainees. When trainers measure learning, they try to find out how much the skills, knowledge, or attitudes of their trainees have changed. Measuring learning requires a more rigorous process than a reaction survey. Ideally, both a pretest and posttest are given to trainees to determine how much they learned as a direct result of the training program. While many organizations do not measure at this level, other corporate training centers, such as Sun Corporation's Network Academy, keep careful track of what employees have learned through the use of both pretests and posttests (Bylinsky, 2000).
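The pretest/posttest comparison described above amounts to a simple difference measure. The sketch below is a minimal illustration of that idea, using invented scores rather than data from any study cited here: each trainee's learning is estimated as the change from pretest to posttest, and the program's effect is summarized as the mean gain.

```python
def average_gain(pre_scores: list[float], post_scores: list[float]) -> float:
    """Mean raw improvement from pretest to posttest, in score points.

    Assumes scores are paired by trainee and on the same scale
    (e.g., percent correct on equivalent test forms).
    """
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical percent-correct scores for five trainees.
pretest = [55, 60, 48, 70, 62]
posttest = [78, 82, 70, 85, 80]
print(average_gain(pretest, posttest))  # → 20.0 points of average improvement
```

A raw gain score attributes all improvement to the training program, so in practice it is most defensible when the tests are equivalent forms and the interval between them is short.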
Copyright International Review of Research in Open and Distance Learning Apr 2002