The quality of an e-learning product is an important concern for users. Most e-learning providers find it difficult (and expensive) to furnish the customer with products whose quality characteristics are identical from unit to unit or module to module. Many producers take the view that quality control costs money and reduces profits; however, these costs are really the costs of doing the work wrong the first time. A few examples of these costs are: 1. operating costs, 2. prevention costs, 3. appraisal costs, 4. internal failure costs, and 5. external failure costs. There are two clearly defined approaches to quality control - unstructured and structured. With the unstructured approach, specifications are usually the result of the design process for the product. Taking a structured approach, a guideline that has been widely used to improve product quality is the capability maturity model. The e-learning product quality control process consists of content review, graphic review and technical review.
Quality: its relevance to e-learning
According to Joseph Juran and Philip Crosby, "quality" implies "fitness for use" and "conformance to specifications". That is, the quality of a product is such that it does exactly what the users want it to do. In the case of software products, fitness for use is usually interpreted in terms of satisfaction of the requirements laid down in the software requirements specification (SRS) document. This can be considered equally true for an e-learning product.
Alternatively, quality can be defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed e-learning products. The quality of an e-learning product is an important concern for users. They not only want the product they purchase to be highly reliable, but also often wish to check a quantitative estimate of the reliability of a product before deciding to buy it. Most e-learning providers find it difficult (and expensive) to furnish the customer with products whose quality characteristics are identical from unit to unit or module to module. There is a certain amount of variability in every product. For example, the colour intensity of the screens in two modules in a course - or even in the same module - may not be the same. If this intensity variation is small, it may have no impact on the customer. However, if the variation is large, the customer may want to change the product. The customer - whether purchasing for his or her own needs or for those of an organization - is the key decision maker in selecting an e-learning product from competing products and services.
Dimensions of quality of an e-learning product
Understanding and improving quality is a key factor leading to business success. There is a substantial return on the investment to be had from improved quality and from successfully employing quality as an integral part of overall business strategy.
Most customers of e-learning products have a conceptual understanding of quality, related to one or more desirable characteristics of the product or service. These characteristics are known as the dimensions of e-learning product quality and are:
* Portability. An e-learning product is said to be portable if it can easily be made to work in different operating system environments, on different machines, and with other software products.
* Usability. An e-learning product is usable if different sets of users (including experts and novice users) can easily invoke the functions of the product.
* Reusability. An e-learning product is reusable if the different modules of the product can be easily reused to develop a new product.
* Correctness. An e-learning product is correct if the different specifications mentioned in the SRS or the functional specification document (FSD) are correctly implemented.
* Maintainability. An e-learning product is maintainable if errors can be easily corrected as and when they show up, new functions can be easily added to the product, and the functionalities of the product can be easily modified.
* Reliability. An e-learning product is reliable if it can be expected to perform its intended function with precision.
* Integrity. The integrity of an e-learning product is defined in terms of the extent of control over an unauthorized person's access to the software or data.
* Flexibility. The flexibility of an e-learning product is measured in terms of the effort required for modifying a program.
* Interoperability. The interoperability of an e-learning product is measured in terms of the effort required to couple two learning units or modules with one another.
* Execution efficiency. The efficiency of execution of an e-learning product refers to the runtime performance of a program.
* Instrumentation. The instrumentation of an e-learning product is defined as the degree to which the program can monitor its own operation and identify possible errors.
Cost of quality
Many producers take the view that quality control costs money and reduces profits. However, the real costs are those of doing the work wrong the first time. An organized process of quality control results in increased profitability in the long run. A few examples of the costs incurred by not doing work right first time are:
* the cost of rework and the effort spent in reviews and testing;
* wasted effort resulting from improper communication or design;
* damage to, or deterioration of, the product because of improper practices; and
* the unplanned use of resources during production.
Operating costs. Operating costs are the costs of operating or managing quality-related activities. They can be classified as costs of control and costs of failure control. The former are the costs associated with the definition, creation and control of quality, as well as the evaluation of, and feedback on, conformance with quality, reliability and learnability. The latter are the expenses associated with the failure of the product, both within the development centre and in the hands of the customer.
Prevention costs. Prevention costs are associated with design, implementation, maintenance and planning before actual operation, in order to prevent defects from occurring. Activities associated with prevention costs are: the proposal review; contract, functional specification document (FSD) and design specification document (DSD) reviews; project planning; and planning for reviews and testing.
Appraisal costs. Appraisal costs are spent to detect defects and to ensure conformance to quality standards. Appraisal costs are focused on the discovery, rather than prevention, of defects. Activities associated with appraisal costs are: prototype testing; process reviews; final review; and quality audit.
Internal failure costs. An internal failure cost occurs when the results of the work fail to reach the designated quality standard, and this failure is detected before delivery to the customer. Activities associated with internal failure costs are: design/framework changes; staff time wasted because of client review comments; excessive use of late-hours working; downtime of the internet or intranet; and trouble-shooting and investigation to find defects.
External failure costs. External failure costs occur when the product fails to reach designated quality standards and this is not detected until after transfer to the customer. Activities associated with external failure costs are: debugging customer-reported defects; product liability and litigation costs; failure to deliver the product on time because of improper planning; loss of customer goodwill and sales; and the cost of settling customer claims.
Mathematically it can be shown that:
Prevention cost + Appraisal cost < Internal failure cost + External failure cost
Different stages of e-learning product development
Although we use familiar technologies for coding e-learning products, these products differ substantially from other software products. The performance of an e-learning product is largely dependent on the correctness of the work done by the content, graphics and technical teams. The e-learning industry uses a typical software development lifecycle (SDLC) model for the development of its products (Figure 1).
The compliance and integration activities associated with developing e-learning products are often more complex than those used in pure software industries. For the development of e-learning products, content, graphics and technical groups work together.
Content team members
Content team members:
* conduct studies to understand training needs, requirements and the audience profile;
* analyse raw content and generate new content;
* apply instructional methods to structure content;
* define instructional objectives, the amount of content and the ways of assessing the learner's knowledge;
* write a story-board based on the learner's profile, subject and cultural context;
* define identification standards and visualization of animation and graphics; and
* co-ordinate with the technical and graphics teams during the production phase.
Graphics team members
Graphics teams are composed of visualisers, illustrators and animators.
Visualisers deal with conceptualization, and page and template design. Illustrators define the illustration style and create digital illustrations based on the needs of the client. Animators are responsible for storyboarding, design layout and creating 2D/3D animations.
Technical team members
The technical team consists of application engineers and software engineers. Team members ensure the technical excellence of the product. They are responsible for:
* conducting system studies to finalise the functional specification;
* designing the entire system architecture, the database and the unit modules, and coding the application;
* reviewing codes, design elements and unit test cases; and
* installing the product for the client.
Hence, the development of an e-learning product calls for a high degree of interdependency among all functions; a small problem in one function can cause the total failure of the product. Proper quality control is therefore essential. A way of correlating the activities of the different teams in preparing an e-learning product is represented in Figure 2.
The significance of quality control to e-learning
Quality control activities are associated with reviews and testing to produce a bug-free product.
Since variability can only be described in statistical terms, statistical methods play a central role in quality improvement. In the application of statistical methods to quality engineering, it is fairly typical to classify data on quality characteristics as either attributes or variables. Variables are usually continuous measurements. Attributes, on the other hand, are usually discrete data, and generally take the form of counts, as in the number of bugs in an e-learning product.
Quality characteristics are often evaluated relative to specifications. In the case of service industries (such as e-learning product supply), specifications are typically expressed in terms of the maximum amount of time taken to process an order or to provide a particular service within specified norms. A value of the measurement that corresponds to the desired value for a quality characteristic is called the nominal or target value for that characteristic. For example, for an e-learning product there are specified defect densities - that is, the number of defects present per unit size of the product. These target values are usually bounded by a range of values that, most typically, we believe will be sufficiently close to the target so as not to affect the function or performance of the product. The highest and lowest allowable values for a quality characteristic are called the upper specification limit (USL) and lower specification limit (LSL) respectively. Some quality characteristics have specification limits on only one side of the target. For example, the defect density in an e-learning product has a target value and an upper specification limit, but no explicit lower specification limit; the lower limit is usually taken as zero.
As Taguchi et al. note in Quality Engineering in Production Systems, the target for a characteristic such as defect density or rework can be taken as "the smaller the better" (S-type). The target, upper control limit and lower control limit values are generally fixed by the quality assurance group (QAG) after studying the performance of many similar processes. Management approves them during the process capability baseline report review meeting.
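As an illustration of the S-type criterion, the sketch below computes one standard formulation of Taguchi's smaller-the-better signal-to-noise ratio, S/N = -10 log10 of the mean squared observation, for hypothetical defect-density samples; the process names and figures are invented for illustration.

```python
import math

def sn_smaller_the_better(values):
    # Taguchi S/N ratio for a smaller-the-better (S-type) characteristic:
    # S/N = -10 * log10(mean of squared observations).
    # A larger S/N indicates a lower, more consistent defect density.
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Hypothetical defect densities (defects per unit size) from two processes
process_a = [0.020, 0.030, 0.025]
process_b = [0.010, 0.050, 0.020]
print(f"process A: S/N = {sn_smaller_the_better(process_a):.2f} dB")
print(f"process B: S/N = {sn_smaller_the_better(process_b):.2f} dB")
# The process with the higher S/N ratio is preferred under the S-type target.
```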
For an e-learning product, the activities related to content, technology and graphics can be taken as different operations of a single process - the process of making an e-learning product - with correctness of, say, 98 per cent, 97 per cent and 99 per cent respectively. Hence the DPU (defects per unit) of the operations is 0.02, 0.03 and 0.01 respectively, and the yields of the content, technology and graphics operations will be 0.98, 0.97 and 0.99 respectively, where yield = e^(-DPU) according to the six sigma method. As these operations are different steps of a single process, the yield of the total process (the rolled throughput yield) will be 0.99 x 0.98 x 0.97 = 0.9411. Thus, although the individual operations have high yields, the overall yield of the process is not satisfactory. To achieve a good final output, the yield of each operation should be high (near 100 per cent), and this indicates the importance of quality control for the overall e-learning product development process.
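A minimal sketch of this calculation, using the DPU figures given above:

```python
import math

# Defects per unit (DPU) for the three operations, as given in the text
dpu = {"content": 0.02, "technology": 0.03, "graphics": 0.01}

# First-pass yield of each operation under the six sigma model: e^(-DPU)
yields = {op: math.exp(-d) for op, d in dpu.items()}

# Rolled throughput yield (RTY): the product of the operation yields
rty = math.prod(yields.values())

for op, y in yields.items():
    print(f"{op}: yield = {y:.4f}")
print(f"rolled throughput yield = {rty:.4f}")
```

Multiplying the rounded yields, as in the text, gives 0.9411; using the exact exponential yields gives e^(-0.06) = 0.9418 - the same conclusion either way.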
For testing purposes, the most common terms used are "defective" and "non-defective". But with advances in quality management systems, terms like "conforming" and "non-conforming" have become more popular. Such characteristics are typically called attributes. For the quality control of e-learning products, we generally use fraction non-conforming control charts, as the sizes of the product units inspected are not the same.
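A sketch of how such a chart can be computed, using the standard fraction non-conforming (p-chart) formulae with variable sample sizes; the module data below are invented for illustration:

```python
import math

def p_chart_limits(nonconforming, sample_sizes):
    # Fraction non-conforming (p) chart with variable sample sizes.
    # Centre line: p-bar = total non-conforming / total inspected.
    # Per-sample 3-sigma limits: p-bar +/- 3*sqrt(p-bar*(1-p-bar)/n).
    p_bar = sum(nonconforming) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma))
    return p_bar, limits

# Hypothetical data: non-conforming screens found per module inspected
defects = [4, 2, 5, 3]
sizes = [120, 80, 150, 100]  # number of screens inspected in each module
p_bar, limits = p_chart_limits(defects, sizes)
for d, n, (lcl, ucl) in zip(defects, sizes, limits):
    p = d / n
    status = "in control" if lcl <= p <= ucl else "out of control"
    print(f"p = {p:.4f}, LCL = {lcl:.4f}, UCL = {ucl:.4f} -> {status}")
```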
There are two clearly defined approaches to quality control - unstructured and structured.
With the unstructured approach, specifications are usually the result of the design process for the product. Traditionally, design engineers arrived at a product design configuration through design principles and the specifications given by the customer in the functional specification document. Prototype construction, review and testing follow. The reviews and testing are generally done without the use of statistically based experimental design procedures, and without much interaction with, or knowledge of, the production processes. We refer to this as the over-the-wall approach to quality control.
For the past decade, the e-learning industry has put substantial effort into improving the quality of its products. This has been a difficult job, since the size and complexity of the products are increasing rapidly while customers and users are becoming more demanding. To improve product quality, e-learning producers have had to focus on improving their development processes. Taking a structured approach, a guideline that has been widely used to improve product quality is the capability maturity model (CMM). This is often regarded as the industry standard for process improvement, but there is limited scope for quality control in the model. Hence, there has recently been greater emphasis on the testing maturity model (TMM), which has been developed out of the CMM. The TMM process is represented diagrammatically in Figure 3.
At level 1 (the initial level) testing is a chaotic, undefined process and is considered as a part of debugging. The objective of testing at this level is to show that the e-learning product runs without major failures. At this stage, the product does not often fulfil needs, is not stable, or is too slow to work with. Within the testing process, there is a lack of resources, tools and well-educated testers. There are no process areas at this level.
At level 2 (the definition level) testing is a defined process and is clearly separated from debugging. In the context of structuring the test process, test plans are established containing a test strategy. For deriving and selecting test cases from requirement specifications, formal test design techniques are applied. However, testing still starts relatively late in the development lifecycle - for example, during the design phase or even during the coding phase. The main objective of testing is to verify that the product satisfies the specified requirements.
At level 3 (the integration level), testing is fully integrated into the product development lifecycle. It is recognized at all levels of the SDLC of the e-learning product development model. Test planning is done at an early project stage by means of a master test plan. The test strategy is determined using risk management techniques and is based on documented requirements. A test organization exists, as well as a test training programme, and testing is perceived as a profession. Reviews are carried out, although not consistently and not according to a documented procedure. In addition to verifying that the e-learning product satisfies requirements, testing at this level is very much focused on invalid (negative) testing.
At level 4 (management and measurement) testing is a thoroughly defined, well-founded and measurable process. Reviews and inspection take place throughout the product development lifecycle and are considered to be part of quality control. Products are evaluated using quality criteria for characteristics such as reliability, usability and maintainability. Test cases are gathered, stored and managed in a central database for reuse and regression testing. A test measurement programme provides information and visibility regarding the test process and product quality. Testing is perceived as evaluation; it consists of all lifecycle activities concerned with checking e-learning work products.
At level 5 (optimization), on the basis of the results achieved by fulfilling all the improvement goals of the previous levels, testing is a completely defined process and it is possible to control its costs and effectiveness. The methods and techniques are optimised and there is a continuous focus on test process improvement. Defect prevention and quality control are introduced as process areas. The test process is characterised by sampling-based quality measurements. A procedure exists for selecting and evaluating test tools. Tools support the test process as much as possible during test design, test execution, regression testing and test case management. Testing is a process with the objective of preventing defects.
Stages of traditional software quality control and e-learning quality control
Traditional software quality control
The traditional software quality control process is represented in Figure 4.
Here review means review of design and codes. The review process can be classified as:
* self-review - when the coder reviews the code written by himself or herself;
* peer review - when two coders sit together and cross-check the codes;
* off-line review - when the material to be reviewed is sent to the reviewers and, after review, they log the bugs in the defect tracking system; and
* walk-through - a formal kind of team review where the reviewers note their findings for discussion in a walk-through meeting at which the coder is also present.
Code inspection aims to identify some common types of errors caused by oversight and improper programming. During code inspection, the code is examined for the presence of certain kinds of error, in contrast to the hand simulation of the code execution (dry running of the code).
The aim of the testing process is to identify all defects existing in a software product. Testing of a product can be classified as:
* Unit testing. This is the testing of a unit (or single module) of a system in isolation.
* Integration testing. This is the testing of a product after combining all the modules together. Integration testing can involve: the big-bang approach (where all the modules are put together and tested at once); top-down integration (which starts with the main routine and one or two sub-routines and, after the top-level skeleton is tested, combines the immediate sub-routines of the skeleton and tests them again); bottom-up integration (where each sub-system is tested in order to test the interfaces among the various modules of the sub-system); and mixed integration testing (which is a combination of the top-down and bottom-up approaches).
* System testing. This is carried out to validate a fully developed system, to ensure that it meets the customer's specification. It involves: alpha testing (carried out by the test team within the developing organization); beta testing (carried out by a selected group of friendly customers); and acceptance testing (done by the customer to accept or reject the product).
Testing strategy is classified as:
* white-box testing - when testing is carried out with knowledge of the system design, architecture or code; or
* black-box testing - when testing is carried out without the knowledge of design, system architecture or code.
During white-box testing, testing is carried out for branch coverage, condition coverage and path coverage. During black-box testing, test cases are prepared to check equivalence partitions and boundary values, as in the sketch below.
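A minimal sketch of boundary-value test case selection for a numeric input range; the assessment-score example is hypothetical:

```python
def boundary_value_cases(lo, hi):
    # Classic boundary-value analysis for an integer input range [lo, hi]:
    # test just below, at, and just above each boundary, plus a nominal
    # value from the middle of the valid equivalence partition.
    return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

# Hypothetical example: an assessment score field that accepts 0-100
print(boundary_value_cases(0, 100))  # [-1, 0, 1, 50, 99, 100, 101]
```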
Performance testing, meanwhile, is carried out to check whether the system meets the non-functional requirements mentioned in the SRS or functional specification documents. Performance testing is essentially black-box testing and includes:
* stress testing - to judge the performance of the program when it is under stress conditions for a short time;
* volume testing - to check whether the data structures (arrays, queues or stacks) have been designed successfully to handle extraordinary conditions or data overflow;
* configuration testing - to analyse system behaviour in various hardware and software configuration platforms;
* compatibility testing - to test the interfaces of one system with the other;
* recovery testing - to test the response of the program in the presence of faults, or on the loss of power, devices, services or data; and
* usability testing - to check the user interfaces to see whether the program meets all the user requirements.
Regression testing does not belong to unit testing, integration testing or system testing; it is a separate dimension to these three kinds of testing. Regression testing is the practice of running an old test suite after each change to the system, or after each bug fix, to ensure that no new bug has been introduced as a result of the changes made.
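A minimal sketch of the idea: keep the old cases as recorded input/expected-output pairs and re-run them after every change. The function under test and the recorded cases are hypothetical.

```python
def score_grade(score):
    # Function under test (hypothetical): grade an assessment score
    return "pass" if score >= 40 else "fail"

# The old test suite: recorded (input, expected output) pairs
regression_suite = [(0, "fail"), (39, "fail"), (40, "pass"), (100, "pass")]

def run_regression(func, suite):
    # Re-run every recorded case; report any that no longer pass
    failures = [(x, exp, func(x)) for x, exp in suite if func(x) != exp]
    for x, expected, got in failures:
        print(f"REGRESSION: input {x}: expected {expected!r}, got {got!r}")
    return not failures

# Run after each change or bug fix; a failure means a new bug crept in
assert run_regression(score_grade, regression_suite)
```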
For conventional software testing we can use automated testing tools like WinRunner, LoadRunner, TestDirector and QTP.
Quality control for e-learning products
The e-learning product quality control process consists of content review, graphic review and technical review. The process can be represented as Figure 5.
Content team members check the correctness of the content appearing on the product screens, by comparing it with the final version of the accepted story-board. They also check for punctuation, spelling and the alignment of text. If the product contains audio, video or animations, they check for the correctness of those by comparing them with the accepted version of the story-board.
The graphic team reviews the story-board from a graphics perspective and tests for the size of the pop-up boxes, click areas, background colours, and any problems associated with icons, images or animations, audio and video.
The technical team carries out code review, code coverage and functionality testing, mainly for the navigation buttons.
When all three functions are satisfied with the product from their own perspectives, the product goes to the quality department for testing. The testing may be functional testing, content testing or both. For content testing, the tester has to match the text of the final accepted story-board with the on-screen texts and audio (if present). He or she also has to check the spelling, font, font size, spacing, punctuation and alignment of the texts on each screen. For functional testing, testing is generally of the white-box variety for all the functionalities on a screen. The tester has to test the navigation buttons present on the screen, and check whether correct and incorrect entries in the knowledge checks or assessments match the sign-off story-board.
For graphic and multimedia testing, a tester has to adopt a separate testing method, as the bugs here are not of the conventional types. To check the graphic portion, he or she has to check: the click areas of the boxes; the colour change of a click area after clicking; the size of the pop-up boxes, font and font size; the activation and deactivation of buttons; the "play-pause-replay" actions; and the proper matching of the audio with the screen text or the animation.
For multimedia testing, audio and video functions such as volume control, zoom and pause are checked to ensure that they conform to the final version of the story-board. After testing, the tester sends the bug report so that corrective actions can be taken, and subsequently carries out regression testing on the product once the production team has closed the bugs.
After completion of the regression testing, system testing is carried out. This includes stress, volume and configuration testing. In the configuration testing, different combinations of operating system, browser, random access memory, processor and versions of the executable programme are used to test the robustness of the product. Besides these, learning management system (LMS) testing, using the ADL test suite, is carried out.
The tester sometimes uses tools constructed in-house. These are helpful for content testing: the final version of the story-board and of the product are loaded into the tool, which then compares the texts, giving its output with the story-board as the reference, and completes the content testing in a very short time. But the process has limitations: the position of the on-screen text must be the same in all screens, and the text alignment and the size of the text-containing box must be the same throughout the story-board. A sketch of how such a comparison might work is given below.
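A minimal sketch of such an in-house comparison, using Python's standard difflib module; the screen text and the defect shown are invented for illustration:

```python
import difflib

def compare_screen_text(storyboard_text, onscreen_text):
    # Diff the sign-off story-board text (the reference) against the
    # text captured from a product screen; any diff lines are defects.
    diff = difflib.unified_diff(
        storyboard_text.splitlines(), onscreen_text.splitlines(),
        fromfile="story-board", tofile="on-screen", lineterm="",
    )
    return "\n".join(diff)

# Hypothetical screen containing a spelling defect the diff should surface
print(compare_screen_text(
    "Click Next to continue.\nAssessment: 10 questions.",
    "Click Next to continue.\nAssesment: 10 questions.",
))
```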
Besides these, a tester has to check for the look and feel of the product. He or she may suggest:
* additions or changes in the text of a particular screen;
* change of colour, font or font size;
* the addition or removal of navigation buttons;
* the alignment of text;
* improvements to the quality of audio and video; or
* better use of animations and images.
Hence, the work of an e-learning tester is significantly different from that of a conventional software tester.
Problems associated with quality control of an e-learning product
The development process of an e-learning product is unconventional, so its quality control is unconventional too. The process is uncomfortable and unstructured, and the guidelines available are often insufficient to carry out quality control in a systematic fashion. It is difficult to classify the bugs in an e-learning product properly - that is, to decide whether a bug is a content bug, a graphics bug or a technical bug. It is almost impossible to subdivide known bugs into major, minor or trivial categories, as is done in other software industries.
Different customers have different requirements. One customer may want users to be able to visit the course without attempting the assessments, while another may require on-site assessments. Moreover, when a tester submits a bug sheet for content, graphic and technical defects, it is often difficult for the respective teams to decide which bugs to close and which items are not bugs at all, even though they are mentioned in the bug sheet.
Again, there is no tool for load testing. Similarly, a tester has to change machines frequently in order to test a product in different environments and with different combinations of operating system, browser, RAM, processor and versions of the executable programme. This is very time-consuming and laborious, and requires extreme patience and attention. But the time schedule for e-learning products is often very tight, and the production team keeps pressure on the tester to have the product tested sooner. As a result, there is every possibility of deterioration in the quality of testing.
Points for thought
* With the increasing demand for e-learning products, maintaining high-quality testing is of the utmost importance.
* A major problem associated with the quality control of e-learning products is the heavy dependency on personal skills for testing and review work. A company can only supply good-quality e-learning products if its testing and review resources are good.
* One possible solution is to make maximum use of robust automated tools for content and functional testing of the product.
* At the same time, standard guidelines should be available, particularly for testing activities.
* In institutions where software testers are trained, there should be dedicated resources for testing e-learning materials.
* These actions will help to improve the quality, and quality control processes, of e-learning products.
References
Mall, R. (2005), Fundamentals of Software Engineering, 2nd ed., Prentice-Hall, India.
Montgomery, D.C. (2004), Introduction to Statistical Quality Control, 4th ed., John Wiley & Sons, New York, NY.
Pyzdek, T. (2003), The Six Sigma Handbook, McGraw-Hill, New York, NY.
Pressman, R.S. (2004), Software Engineering: A Practitioner's Approach, 5th ed., McGraw-Hill, New York, NY.
Taguchi, G., Elsayed, E.A. and Hsiang, T.C. (1989), Quality Engineering in Production Systems, McGraw-Hill, New York, NY.
van Veenendaal, E. and Swinkels, R. (2002), "Guidelines for testing maturity", Professional Tester, Vol. 3 No. 1, March.
Shirshendu Roy
Manager - Business Excellence, Tata Interactive Systems, Salt Lake, India
Gopal Ghatak
Software Tester, Tata Interactive Systems, Salt Lake, India
Shirshendu Roy is software quality assurance manager and Gopal Ghatak is software tester, both at Tata Interactive Systems.
