Abstract
When, and how, performance-contingent incentives improve performance is an important question for organisations. Empirical results have been mixed - performance-contingent incentives sometimes increase performance, sometimes decrease performance, and sometimes have no effect. Theorists have called for further research to identify the effect of various moderating variables, including knowledge and task complexity. This study responds by considering the role of instruction in providing the necessary knowledge to reduce task complexity. The results suggest that a performance-contingent penalty can be a particularly effective means of directing effort for a simple task. For a complex task, performance can be improved through instruction. The type of instruction is important: rule-based instruction effectively directs effort, but principle-based instruction is necessary to facilitate problem investigation and problem-solving.
Keywords: Performance-contingent incentives; Standard cost variance analysis; Employee empowerment; Problem-solving; Accounting education.
JEL Classification: C91; D83; M52.
1. Introduction and Motivation
An important issue for accountants is how incentives affect performance. The study of incentives has a long history of apparently conflicting results. Performance-contingent incentives have been found to improve performance on some tasks and decrease performance on others (Ashton 1990; Awasthi & Pratt 1990; Drake, Wong & Salter 2007; Mallin & Pullins 2009; Alpkan et al. 2010; Dugar 2010). Calls for further research have emphasised the importance of considering moderating variables such as task complexity (Libby & Lipe 1992; Bonner & Sprinkle 2002; Bonner et al. 2000).
A feature of this study is the computerised Standard Cost Variance Analysis (SCVA) task which has the capacity to accurately monitor effort direction and duration. SCVA is an important tool used by management accountants and others throughout both manufacturing and service organisations (Davila & Foster 2005). Importantly, SCVA has been recognised as a tool for identifying and solving operational problems (Emsley 2000, 2001; Mitchell 2005). For example, Emsley (2001) provides a case study that emphasises (among other things) the importance of raising employee awareness of the problem-solving role of variance analysis, and calls for further research (such as laboratory studies) to identify the causal relationships between control system design and effective SCVA.
To use SCVA as a tool for continuous improvement, individuals must first determine which variances to investigate (i.e. their effort must be appropriately directed). Misdirected effort consumes resources and distracts attention from other important tasks. Whereas most previous research has focussed on rewards, this study demonstrates that an explicit penalty provides a powerful way to reduce misdirected effort. This form of performance-contingent incentive is effective when the task is simple. However, when the task is complex (such as identifying a recurring variance), performance can be blocked by a lack of knowledge. A further contribution of this paper is to demonstrate the importance of instruction for complex tasks. Without instruction, a performance-contingent incentive is not effective for a complex task.
Furthermore, the type of instruction is important. Rule-based instruction (i.e. a set of rules that can be followed in determining which variances to investigate) is found to effectively direct effort for the complex task of identifying variances. However, after a variance has been identified, the time spent investigating the variance (effort duration) becomes important. Principle-based instruction that explains the purpose of SCVA leads to a longer investigation and recommendations that focus on solving the most costly problems.
2. Background and Hypotheses Development
Performance-contingent incentives improve performance in two ways. First, they can increase arousal, attention and effort that will improve performance on effort-sensitive tasks (Ashton 1990; Sprinkle 2000, 2003). Second, incentives can be an important means of communicating expectations and thereby focussing attention and effort (Merchant & Van der Stede 2003). However, the empirical evidence suggests that the cost of providing such incentives may not always be justified, and may instead have undesirable effects (e.g. Ashton 1990; Awasthi & Pratt 1990).
There are various factors that may block or enhance the beneficial effect of a performance-contingent incentive. Bonner and Sprinkle (2002) provide a comprehensive model of the incentive-performance relationship that classifies the various moderating variables as person, task, environment or incentive-scheme variables (see Figure 1). Person variables include knowledge, and task variables include complexity. Task complexity is identified as a potential block to the beneficial effect of a performance-contingent incentive, since effort is either misdirected or not exerted due to a lack of understanding of the task requirements. Importantly, task complexity is determined by the level of knowledge that an individual brings to the task. The role of a performance-contingent incentive in directing effort, and of instruction as a way to overcome task complexity, is considered in the following hypotheses (see Figure 2).
Bonner et al. (2000) categorise vigilance and detection as the least complex tasks, and thereby the most likely to benefit from incentives. The complement of detection - of particular importance in management by exception - is avoiding the investigation of immaterial variances (i.e. avoiding the misdirection of effort). A fundamental principle of SCVA is that improvement efforts should be focussed on costly, recurring variances where future improvements can be made, rather than on random variances that are unlikely to recur (Horngren et al. 2010). Therefore, efficient SCVA requires vigilance and detection that distinguish between variances that should be investigated and those that should not.
Dugar (2010) provides one of the few studies that has considered the importance of sanctions as well as rewards in improving efficiency. The study finds that sanctions can be more effective than rewards in achieving efficiency due to 'negativity bias'. Penalties and sanctions communicate disapproval, which affects performance beyond any financial consequences (Henderlong & Lepper 2002). One way to include sanctions in a SCVA task is to incorporate the cost of investigation into the incentive calculation, thereby penalising unsuccessful variance investigation. The performance-contingent incentive used in this experiment incorporates such a penalty. Therefore, it is predicted that a performance-contingent incentive will be effective in reducing the identification of (and thus the time spent investigating) immaterial variances:
H1: A performance-contingent incentive will decrease the identification and investigation of immaterial variances.
As previously noted, incentive-induced effort does not necessarily lead to increased performance, and sometimes even degrades performance relative to a flat-rate contract (Awasthi & Pratt 1990; Ashton 1990; Drake, Haka & Ravenscroft 2001; Bonner & Sprinkle 2002; Bonner et al. 2000; Libby & Lipe 1992). Of particular interest here are previous studies which have found that performance outcomes depend on the cognitive demands of the task (Ashton 1990). Task complexity increases with the required level of attention and cognitive effort (Wood 1986), and effort will only increase performance if the individual has the necessary knowledge to complete the task (Awasthi & Pratt 1990; Libby & Lipe 1992).
Certain aspects of SCVA are more complex than others. Identifying immaterial variances that should not be investigated is a relatively simple task. A more complex task arises when the problem is recurring. Recognising a pattern of related variances is an example of component complexity (Wood 1986). Without instruction or pre-existing knowledge, a performance-contingent incentive is unlikely to improve performance on such a complex task. This leads to the following hypothesis:
H2a: Without instruction, a performance-contingent incentive will not improve performance in a complex detection task.
Bonner et al. (2000) note that it is the gap between knowledge/skill and the demands of the task that attenuates the benefits of incentives by making the task complex for the individual. Providing instruction offers an opportunity for organisations to decrease the complexity of the task (Campbell 1988; Wood 1986) and consequently direct effort appropriately, thereby increasing the positive impact of performance-contingent incentives. Thus, it is hypothesised that:
H2b: A performance-contingent incentive will increase the identification of a recurring variance when rule-based instruction is provided.
Although rule-based instruction directs attention to identifying a recurring variance, it does not convey the financial significance of the variance. In contrast, principle-based instruction emphasises the importance of investigating the root cause of a recurring variance so that an appropriate response can be made. Investigating the root cause will require more time, and therefore the effort duration directed at the recurring variance is predicted to be greater when principle-based instruction is provided:
H2c: A performance-contingent incentive will increase effort duration when principle-based instruction is provided compared to rule-based instruction.
Responding to the most costly problems is the ultimate measure of SCVA performance. Rule-based instruction was identified in H2b as a means of providing the understanding necessary to identify a complex, recurring variance. In H2c, however, it was argued that rule-based instruction does not provide the understanding necessary for individuals to recognise the implications of a recurring variance. Principle-based instruction conveys the understanding necessary to guide the investigation (measured via effort duration). From that investigation individuals will understand the importance of responding to the recurring variance. Therefore, it is predicted that individuals who receive principle-based instruction will make recommendations directed at the most costly problems:
H3: A performance-contingent incentive will increase the emphasis on solving the most costly problems when principle-based instruction is provided compared to rule-based instruction.
3. Experimental Design
In order to isolate the effects of instruction it was important to control for the many other moderating variables that might affect the relationship between a performance-contingent incentive and task performance. This level of control was achieved by conducting an experiment comprising a 2x3 between-subjects factorial design.
Overview of Experiment
The task consisted of a computerised case study of a small toy manufacturer whose actual costs had exceeded its budget. All participants received the following case information and basic SCVA1 information:
"You are in charge of lacquering each wooden car that comes from the assembly area. You are accountable for how much time you spend and how much lacquer you use. Since things never go perfectly there will always be unfavourable variances as you either spend more time, or use more lacquer, than is expected for the number of cars completed.
Budget variances for the past 15 weeks are calculated and provided to you. You will be asked to make two recommendations for improving financial performance based on the problems that you uncover in your variance investigation. The total budget variance for your workstation can be split into two main variances:
1. A Labour Efficiency Variance (LEV) occurs when you complete less cars than would be expected, given the number of hours that you worked.
2. A Material Usage Variance (MUV) occurs if you use more lacquer than would be expected, given the number of cars that you completed."
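For reference, these two variances follow the standard textbook formulations (e.g. Horngren et al. 2010). The sketch below is illustrative only; the function names and example figures are assumptions and do not come from the experimental instrument.

```python
# Standard-cost variance formulas in their usual textbook form (illustrative sketch;
# the names and example figures are assumptions, not values from the case materials).

def labour_efficiency_variance(actual_hours, std_hours_per_car, cars_completed, std_rate):
    """LEV: cost of hours worked beyond the standard allowance for the output achieved.
    A positive result is unfavourable (more hours used than expected)."""
    standard_hours_allowed = std_hours_per_car * cars_completed
    return (actual_hours - standard_hours_allowed) * std_rate

def material_usage_variance(actual_litres, std_litres_per_car, cars_completed, std_price):
    """MUV: cost of lacquer used beyond the standard allowance for the output achieved.
    A positive result is unfavourable (more lacquer used than expected)."""
    standard_litres_allowed = std_litres_per_car * cars_completed
    return (actual_litres - standard_litres_allowed) * std_price

# Example: 42 hours worked against a standard of 0.5 hours per car for 80 completed cars,
# at a standard rate of $20 per hour.
print(labour_efficiency_variance(42, 0.5, 80, 20))  # (42 - 40) * 20 = $40 unfavourable
```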
Participants then received the instruction and incentive treatments. A series of variances (in both graphical and tabular form) was presented, and participants chose which one to investigate (see Figure 3).2 The computer program then provided an explanation for the variance, and participants could choose whether or not to investigate further (see Figure 4). In order to increase the realism of the task, and to create a penalty for investigating immaterial variances, investigation was a costly process. The cost of investigating the variances ($100 for each additional piece of information) was continuously updated and conspicuously displayed (see top right corner of Figure 3). Potential savings were also identified so that all participants could make cost-benefit evaluations of their variance investigation. This also reinforced the performance-contingent incentive treatment.
After satisfying themselves that they understood the main causes of the unsatisfactory performance, participants were asked to make two recommendations to respond to the most important problems that they had identified. Measures associated with effort direction and duration through the SCVA process were considered; however, the ultimate dependent variable in this task is the extent to which the recommendations address the main causes of the unsatisfactory performance. Based on the initial variances and the explanations, the greatest costs were associated with the recurring problem ($1,150) and two problems that gave rise to costly exceptions ($800 and $450 respectively). If a participant focussed solely on investigating these three problems, the net benefit would be $1,200 ($2,400 in savings less $1,200 in investigation costs).
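As a simple arithmetic check, the net benefit of a perfectly focussed investigation can be reproduced from the figures above (a sketch; the $100-per-explanation cost is taken from the task description).

```python
# Net benefit of investigating only the three genuine problems (figures from the case above).
savings = [1150, 800, 450]   # recurring problem plus the two costly exceptions
investigation_cost = 1200    # total cost of the explanations required, at $100 each
net_benefit = sum(savings) - investigation_cost
print(net_benefit)           # 2400 - 1200 = 1200
```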
Every attempt was made to make the task realistic in order to increase the level of engagement and interest, and this appears to have been successful. In response to the questions 'Overall, I found the task interesting' and 'Overall, I enjoyed the task', the average response was 'Agree' (4.356, s.d. 0.698, and 4.300, s.d. 0.743, respectively, on a 5-point Likert scale). These responses are significantly higher than neutral (t = 21.093, p = 0.000 and t = 18.952, p = 0.000, respectively).
Manipulation of Variances
From Figure 3 it can be seen that only two variances are material exceptions to the dollar or percentage limits that were given. The Labour Efficiency Variance (LEV) for the first week in January exceeds the $200 limit and the Material Usage Variance (MUV) in the third week of April exceeds 25% of the cost.
Upon further scrutiny it can also be seen that both the LEV and MUV tend to be higher in the final week of each month (although within limits). The explanations for these variances suggested the recurring nature of the problem - that in the middle of each month a large order came through from a big toy store chain and that this 'spike' in production upstream starved downstream production until it passed through.
Care was taken to ensure that no other investigation rules (as provided in either of the forms of instruction) were satisfied.
Independent Variables
Instruction
Rule-based instruction comprised a series of rules for determining which variances to investigate. Principle-based instruction focussed on the principles underlying variance investigation. A control treatment comprised task instruction, but did not include any variance investigation instruction.
Participants in the rule-based instruction treatment received the following instruction:
"Use the following rules to determine which variances to investigate:
Rule 1. Review the table of LEV (Labour Efficiency Variances). Identify any which exceed $200.
Rule 2. Review the table of MUV (Material Usage Variances). Identify any which exceed $200.
Rule 3. Review the table of LEV (Labour Efficiency Variances). Identify any which exceed 20% of expected cost.
Rule 4. Review the table of MUV (Material Usage Variances). Identify any which exceed 20% of expected cost.
Rule 5. Review the graph of variances. Identify any recurring variances if the combined effect exceeds $350.
Rule 6. Review the graph of variances. Identify any consecutive variances if the total effect exceeds $350."
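To make the screening logic concrete, the six rules can be read as a simple filter over the weekly variances, as in the following sketch. The data layout and function names are assumptions for illustration; the experimental software did not apply the rules automatically - participants did.

```python
# Illustrative encoding of the six screening rules (an interpretation of the quoted
# instruction; names and data layout are assumptions, not the experiment's code).

def flag_single_week(lev, muv, expected_labour_cost, expected_material_cost):
    """Apply Rules 1-4 to one week's variances (all amounts in dollars)."""
    flags = []
    if lev > 200:                              # Rule 1: LEV exceeds $200
        flags.append("LEV exceeds $200")
    if muv > 200:                              # Rule 2: MUV exceeds $200
        flags.append("MUV exceeds $200")
    if lev > 0.20 * expected_labour_cost:      # Rule 3: LEV exceeds 20% of expected cost
        flags.append("LEV exceeds 20% of expected cost")
    if muv > 0.20 * expected_material_cost:    # Rule 4: MUV exceeds 20% of expected cost
        flags.append("MUV exceeds 20% of expected cost")
    return flags

def flag_pattern(related_variances, threshold=350):
    """Rules 5 and 6: a recurring or consecutive pattern is flagged when the combined
    dollar effect of the related variances exceeds the $350 threshold."""
    return sum(related_variances) > threshold

# Example (hypothetical figures): end-of-month LEVs of $120, $130 and $140 are each
# immaterial under Rules 1-4, but their combined effect of $390 satisfies Rule 5.
print(flag_pattern([120, 130, 140]))  # True
```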
Participants in the principle-based instruction treatment received the following instruction:
"The most important task in variance analysis is to understand why variances arise and then use that knowledge to promote continuous improvement.
Small variances are normal. Other variances may be uncontrollable. In either case no corrective action is necessary or possible. The objective is to determine when investigation and corrective action will be possible and worthwhile.
When a large variance occurs (i.e. > $200 or > 20% of expected cost) then it is probably going to be worth investigating. As you identify patterns and dig deeper you may also find that some problems recur on a regular basis and so large cost savings may be achieved by solving the root cause and thereby preventing the variances from recurring in the future."
Incentives
In the control treatment participants received a payment of $15, regardless of their performance (and therefore their effort) on the task. Participants in the performance-contingent treatment received a flat rate of $10 plus an additional bonus (capped at $10) calculated as 1% of the net annual cost savings that they were able to identify (i.e. the potential cost savings associated with the variances investigated, less the investigation cost).
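A minimal sketch of this payment rule follows; the function name and example figures are assumptions.

```python
# Performance-contingent payment: $10 flat plus a bonus of 1% of net cost savings
# identified (savings from variances investigated less investigation cost), capped at $10.
def performance_contingent_payment(savings_identified, investigation_cost):
    net_savings = max(0.0, savings_identified - investigation_cost)
    bonus = min(10.0, 0.01 * net_savings)
    return 10.0 + bonus

# Example (hypothetical): $450 in identified savings at an investigation cost of $200
# yields a bonus of 1% of $250, i.e. a total payment of $12.50.
print(performance_contingent_payment(450, 200))  # 12.5
```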
Performance-contingent payments ranged from $10 to $13, with an average payment of $10.20 (s.d $0.600). This is very low and therefore decreases the salience of the manipulation and hence the likelihood of a significant result. This raises interesting issues about the differences between ex ante expectations and actual payments. Furthermore, the incentive scheme communicates the evaluator's expectations (the informational role of incentives), which the individual may strive to meet, even if the rewards are not salient.
It is important to note that the performance-contingent incentive incorporates an inherent penalty for investigating immaterial variances. Without such a disincentive participants would be encouraged to investigate all variances. Furthermore, the penalty is consistent with the fact that, in practice, investigating variances is a costly process.
Dependent Variables
There are a number of key phases in the SCVA process that are of interest here. Namely: identifying which variances to investigate, investigating the variances, and providing appropriate responses. Therefore, the following measures relate to which variances are investigated (effort direction) and the time spent investigating the variance (effort duration). The choice of which variances to investigate, and the extent of the investigation, are important in that they represent substantial costs for the organisation - in terms of both managerial time expended and the opportunity cost of failing to investigate variances associated with costly problems. A problem may be indicated by a single, large variance or a pattern of recurring variances whose combined impact is substantial.
After identifying a variance to investigate, participants made choices about whether further investigation was warranted, and they were also asked to make recommendations to solve the problem. The ultimate success of the SCVA process is the cost savings achieved. The information obtained through further investigation, and the amount of the variance itself, provided information to participants about the total costs associated with the problem. Participants could then prioritise and focus their improvement efforts on the most costly problems.
Details of the calculation of these dependent variables are as follows:
Identifying which Variances to Investigate
The initial decision to investigate is indicated by clicking on a variance in either the graph or the table (see Figure 3). A number of variances were deemed to be worthy of investigation either because they were material alone, or because they formed part of a pattern of recurring variances whose combined value was material. All other variances were immaterial and care was taken in the design of the task to ensure that no other patterns existed.
Variance Investigation
The time spent investigating each problem was captured by the computer program. Participants spent more time investigating a particular problem when they chose to receive further explanations, and when they chose to investigate related variances (i.e. variances that formed a pattern). This required the participant to understand the significance of the explanation to determine whether to continue the investigation or not. After a certain number of explanation pages (maximum of four) (see Figure 4) participants were informed that further investigation had not yielded any additional information about the problem.
Value of the Recommendations Made
The recommendations were analysed to identify which, if any, of the organisation's problems they addressed. The costs associated with each problem were determined based on the size of the variance (or the set of related variances) and on the information provided in the explanations about how often the problem would recur within a year. Thus, the potential cost savings were determined for each problem. This variable captured the potential cost savings for all of the problems addressed by an individual's recommendations. No attempt was made to assess the likelihood that the recommendation would be effective in solving the problem.
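One plausible reading of this dependent variable is sketched below: each problem's potential annual saving is its variance per occurrence scaled by how often the problem recurs within the year, summed over the problems that an individual's recommendations address. The names and example figures are assumptions, not the study's data or scoring code.

```python
# Sketch of the recommendation-value measure (an interpretation; names and example
# figures are assumptions, not the study's data).
def potential_annual_saving(variance_per_occurrence, recurrences_per_year):
    return variance_per_occurrence * recurrences_per_year

def recommendation_value(problems_addressed):
    """problems_addressed: list of (variance_per_occurrence, recurrences_per_year) pairs
    for the problems targeted by a participant's two recommendations."""
    return sum(potential_annual_saving(v, n) for v, n in problems_addressed)

# Example: recommendations addressing a $50-per-occurrence problem expected to recur
# 12 more times in the year, and a one-off $450 exception.
print(recommendation_value([(50, 12), (450, 1)]))  # 600 + 450 = 1050
```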
Participants and Procedures
One hundred and eighteen undergraduate students from a regional university volunteered to participate in the experiment. Participants were both business (n= 68) and non-business students (n=50) (see Table 1).
Undergraduate students are commonly used in psychology and business research. Brownell (1995, p15) argues that the "...external validity sacrifice resulting from a pre-occupation with college sophomores as experimental subjects is vastly overwhelmed by the strength or power to make causal statements which is brought about by internal validity." A review of studies that have specifically considered the validity of student surrogates suggests that the most important factor is whether the students have the necessary knowledge to complete the task (Chang & Ho 2004). For this experiment students are particularly appropriate because they lack prior experience, which decreases the potentially confounding differences caused by their existing knowledge structure.
Participants were advised that the experiment required a one-hour commitment. So that participants could allocate their time appropriately, and thereby increase the likelihood that they would complete all aspects of the experiment, the first computer-screen provided participants with an overview as follows:
Introduction and Instruction (2-10 minutes)
Illustrative Example (5-10 minutes)
Task
Variance investigation and idea generation (5-15 minutes)
Submit two recommendations (2-10 minutes)
Post-Task Questions (5-15 minutes)
At the top of the computer screen participants were reminded which stage of the experiment they were currently in and the time remaining (see Figure 3). Importantly, participants had a great deal of discretion over the direction and duration of their effort. Another key feature of the computerised simulation is that it responded to the individual's choices and provided immediate, context-specific feedback.
The post-task questionnaire included demographic questions and three questions about whether the participant had found the task interesting, challenging and enjoyable. There were three questions to test the participant's knowledge about variance investigation, four pattern recognition questions and twenty questions to determine the individuals' 'tolerance of ambiguity'.
4. Results
Manipulation Checks
After the incentive structure was explained, participants were provided with an illustration of the task and required to calculate the correct payment. The successful completion of this manipulation check was necessary before the participant could continue. Participants were also reminded of their incentive treatment with a statement on the variances page (see Figure 3). Interestingly, participants who did not receive a performance-contingent incentive were still committed to completing the task (as shown in their effort duration).
Upon completing the instruction, participants were asked to self-assess their understanding of SCVA by responding to the question 'I have a good understanding of variance investigation'. The mean response was 3.831 on a five-point Likert scale (strongly agree was 5.0). This is significantly higher than the midpoint (neutral) of 3.0 (t = 9.827, p=0.000). Interestingly, however, this did not differ significantly between instruction types (the mean of the no-instruction treatment was 3.641). Apparently, the participants in the no-instruction treatment believed that they had sufficient instruction to complete the task, despite the lack of guidance about when to investigate a variance. This is important because a lack of self-efficacy can seriously attenuate the incentive-effort relationship (Bonner & Sprinkle 2002).
Participants were also asked three multiple-choice questions in the post-task questionnaire to evaluate the effect of the instruction treatments on their understanding of SCVA. Table 2 indicates the percentages of participants by response and treatment. Chi-square tests were performed to determine any significant differences. Consistent with the bonus calculation, participants who received the performance-contingent incentive were more likely to believe that variance investigation should be undertaken only if the anticipated benefits are greater than the anticipated costs (70.9% vs 55.6%). Unexpectedly, those who received rule-based instruction were less likely to emphasise the importance of the variance occurring again in the future when deciding which variances to investigate (48.8% vs 61.5% for no instruction and 57.9% for principle-based instruction). This is despite the fact that identifying a recurring variance was specifically mentioned in the rule-based instruction. Apparently, the other rules diluted the importance of identifying a recurring variance (individuals were required to identify the most important factor in determining an investigation). As seen later, however, individuals who received rule-based instruction and a performance-contingent incentive were the most likely to actually identify the recurring variance.
Participants were randomly assigned to treatments by the computer program. To ensure that this random allocation was effective, cells were compared in terms of age, gender, previous experience with SCVA, the number of university courses completed, whether they were business students, whether they were part-time or full-time students, and their length of work experience. Chi-square tests confirmed that there were no significant differences between cells, except for work experience, where the performance-contingent incentive with no instruction treatment had a greater number of participants with seven or more years of work experience (χ² = 7.266, df = 3, p = 0.064). As work experience is likely to increase performance in variance analysis, this is likely to counter the lack of instruction and decrease the chance of finding a significant difference in H2a, where the comparisons are made to the no-instruction treatment.
An important design feature of this study is that participants had no relevant prior knowledge that would significantly influence their behaviour in the task. Twelve percent of participants indicated that they had some previous SCVA instruction. Excluding these participants did not affect the relative ranking of the means. In addition, the results were split on the basis of business versus non-business students and again the pattern of means was found to be the same.
Participants also completed a test of their tolerance of ambiguity and four pattern recognition questions. ANOVA analysis confirmed that there were no statistically significant differences between treatments for these potentially confounding variables, or for the time spent on the introduction page or on common instruction.
Hypothesis Testing
Hypothesis 1 (H1) predicted that incentives would be an effective means of reducing the unproductive investigation of immaterial variances. To test this hypothesis, the number of immaterial variances investigated and the time spent investigating these immaterial variances were considered. A significant main effect was found for both direction (variances identified) and duration (time spent in investigation) (F(1,116) = 6.608, p = 0.006 and F(1,116) = 4.049, p = 0.024, one-tailed, respectively).3 Performance-based incentives led to fewer immaterial variances being investigated (mean number = 0.600 vs 1.333) and less time spent investigating them (9.292 vs 20.733 seconds) (see Figure 5). Therefore, H1 is supported.
Hypothesis 2a (H2a) predicted that, for the complex task of identifying a recurring variance, a performance-contingent incentive would not improve performance. Identifying the recurring variance would be a complex task for individuals who received no instruction. So, for participants who received no instruction, those who received the fixed-rate incentive were compared with those who received the performance-contingent incentive. Participants who received a performance-contingent incentive and no instruction were less likely to identify the recurring variance (72.2% vs 85.7%, marginally significant, χ² = 1.905, df = 1, n = 21, p = 0.08) (see Figure 6); thus H2a is supported. This is not surprising, as H1 showed that a performance-contingent incentive decreased the random identification and investigation of variances, including the chance identification of the recurring variance.
In H2b it was argued that task complexity could be reduced through instruction and that this would correctly direct the effort motivated by the performance-contingent incentive. Rule-based instruction provided the necessary knowledge to understand the importance of seeking out recurring variances. Of those participants who received a performance-contingent incentive, those who also received this direction (rule-based instruction) were compared with those who did not (no instruction). The results provide moderate support for the hypothesis (p = 0.057, one-tailed). Rule-based instruction led to more participants investigating the recurring variance compared with those who had received no instruction (88.89% vs 72.22%, χ² = 2.492, df = 1, n = 18, p = 0.057) (see Figure 6).
Hypothesis 2c (H2c) predicted that individuals who understood the purpose of SCVA (principle-based instruction) would spend more time investigating the recurring variance (effort duration). A planned contrast confirmed that, of those participants who received a performance-based incentive, those who received principle-based instruction spent more time investigating the recurring variance than those who received rule-based instruction (37.5185 vs 15.6779 seconds, t= 1.912, p=0.030, one-tailed) (see Figure 7 and Table 3).
The purpose of SCVA is to identify problems that have a material financial impact so that an appropriate response can be made. Hypothesis 3 (H3) was concerned with the dollar value of the problems that participants chose to address in their recommendations. It was argued that principle-based instruction - because it communicated the role of SCVA from a perspective of continuous improvement - would lead to recommendations that were more focussed on costly problems. Planned contrasts that focussed on those participants who received a performance-contingent incentive4 confirm that principle-based instruction ($1310.53) was superior to no instruction ($822.22, t=2.099, p=0.019, one-tailed) (see Figure 8 and Table 3) and rule-based instruction ($925.00, t=1.657, p=0.050, one-tailed).
From Figure 8 it appears that the type of instruction was more important than the performance-contingent incentive in determining the dollar value of the problems dealt with. In considering this result it is necessary to consider both an important implication of the performance-contingent incentive and the nature of the task. The SCVA task employed in this study includes identifying a complex, recurring variance. This variance can be identified either by chance or by carefully examining the variances. The performance-contingent incentive created a disincentive to investigate at random. This penalty reduced the investigation of immaterial variances (H1). It also decreased the chance investigation of the recurring variance. Principle-based instruction was no more effective in directing the identification of the recurring variance (see Figure 6), but when the variance was identified, these individuals understood the significance of the explanations they received and provided recommendations accordingly (H3).
5. Summary and Discussion
This study contributes to our understanding of performance-contingent incentives. The results highlight the benefits of incorporating a penalty that can be effective in reducing misdirected effort. In the SCVA task studied in this paper, feedback was provided in the form of constant information on the cost of the investigation process relative to the savings achieved.
A performance-contingent incentive was not effective in providing direction when the individual did not have the necessary knowledge. This block to performance was overcome, however, by providing rule-based instruction which was particularly effective in directing effort to the identification of a complex, recurring variance. In addition to identifying which variances to investigate, however, SCVA also requires an understanding of the purposes of the investigation. These purposes were conveyed through principle-based instruction, the benefits of which were seen in the time spent investigating the recurring variance. Principle-based instruction also led to recommendations that focussed on problems which had the greatest opportunity for cost savings. Therefore, if SCVA is to be used as a tool for continuous improvement, the results of this study highlight the importance of communicating the underlying principles so that management accountants will investigate and respond to the most costly problems.
Limitations and Further Research
While a laboratory study provides increased internal validity, there are numerous limitations to the generalisability of such results. Studies such as this one, however, provide important insights into the effects of incentives and suggest avenues for further studies of the impact of management accounting tools (such as SCVA) on organisational performance. This study supports previous research that has argued for a greater problem-solving focus for SCVA (Emsley 2000, 2001).
It should be noted that this experiment manipulated instruction, which is not the same as knowledge. Instruction conveys the preferences of the evaluator and - when accompanied by performance-contingent incentives - has the potential to focus attention in a manner that is consistent with the hypotheses. In order to limit the confounding effect of existing knowledge, the participants in this study were chosen specifically for their lack of previous instruction or experience in SCVA. Further research that specifically considers knowledge is therefore important. Furthermore, over a longer period of time a performance-contingent incentive may encourage individuals to overcome their lack of knowledge by acquiring additional information (Lee et al. 1999; Sprinkle 2000).
Incentives have both informational and motivational effects on performance. Most of the previous research has focussed on the motivational impact of incentives. In this study the value of the incentive was very low. Despite this, an incentive effect was observed. Further research distinguishing the motivational and informational value of performance-contingent incentives is warranted.
1 Note that 88% of participants had no previous experience with SCVA.
2 Note that all variances are unfavourable because ideal standards are used.
3 Outliers for effort duration are included in these results. Eliminating these outliers strengthened the results in the predicted direction.
4 The pattern of means for participants who did not receive the performance-contingent incentive were in the same direction (see Figure 8) but this was not significant.
References
Alpkan, L, Bulut, C, Gunday, G, Ulusoy, G & Kilic, K 2010, 'Organizational support for intrapreneurship and its interaction with human capital to enhance innovative performance', Management Decision, vol.48, no.5, pp732-755.
Ashton, R H 1990, 'Pressure and performance in accounting decision settings: paradoxical effects of incentives, feedback, and justification', Journal of Accounting Research, vol.28, (Supplement), pp148-180.
Awasthi, V & Pratt, J 1990, 'The effects of monetary incentives on effort and decision performance', The Accounting Review, vol.65, no.4, pp797-811.
Bonner, S E, Hastie, R, Sprinkle, G B & Young, S M 2000, 'A review of the effects of financial incentives on performance in laboratory tasks: Implications for management accounting', Journal of Management Accounting Research, vol.12, pp19-64.
Bonner, S E & Sprinkle, G B 2002, 'The effects of monetary incentives on effort and task performance: Theories, evidence, and a framework for research', Accounting, Organizations and Society, vol.27, nos.4-5, pp303-345.
Brownell, P 1995, 'Research Methods in Management Accounting', in Coopers & Lybrand Accounting Research Methodology Monograph No.2, Coopers & Lybrand and Accounting Association of Australia and New Zealand, Melbourne.
Campbell, D J 1988, 'Task complexity: A review and analysis', Academy of Management Review, vol.13, no.1, pp40-52.
Chang, C J & Ho, J L Y 2004, 'Judgment and decision making in project continuation: a study of students as surrogates for experienced managers', Abacus, vol.40, no.1, pp94-116.
Davila, A & Foster, G 2005, 'Management accounting systems adoption decisions: Evidence and performance implications from early-stage/startup companies', The Accounting Review, vol.80, no.4, pp1039-1068.
Drake, A, Haka, S F & Ravenscroft, S P 2001, 'An ABC simulation focusing on incentives and innovation', Issues in Accounting Education, vol.16, no.3, pp443-471.
Drake, A, Wong, J & Salter, S B 2007, 'Empowerment, motivation, and performance: Examining the impact of feedback and incentives on non-management employees', Behavioral Research in Accounting, vol.19, pp71-89.
Dugar, S 2010, 'Nonmonetary sanctions and rewards in an experimental coordination game', Journal of Economic Behavior and Organization, vol.73, no.3, pp377-386.
Emsley, D 2000, 'Variance analysis and performance: Two empirical studies', Accounting, Organizations and Society, vol.25, no.1, pp1-12.
Emsley, D 2001, 'Redesigning variance analysis for problem solving', Management Accounting Research, vol.12, no.1, pp21-40.
Henderlong, J & Lepper, M R 2002, 'The effects of praise on children's intrinsic motivation: A review and synthesis', Psychological Bulletin, vol.128, no.5, pp774-795.
Horngren, C T, Datar, S M, Foster, G, Rajan, M V, Ittner, C, Wynder, M, Maguire, W & Tan, R 2010, Cost Accounting: A Managerial Emphasis, Pearson, Frenchs Forest.
Lee, H, Herr, P M, Kardes, F R & Kim, C 1999, 'Motivated search: Effects of choice accountability, issue involvement, and prior knowledge on information acquisition and use', Journal of Business Research, vol.45, no.1, pp75-88.
Libby, R & Lipe, M G 1992, 'Incentives, effort, and the cognitive processes involved in accounting-related judgements', Journal of Accounting Research, vol.30, no.2, pp249-273.
Mallin, M & Pullins, E 2009, 'The moderating effect of control systems on the relationship between commission and salesperson intrinsic motivation in a customer oriented environment', Industrial Marketing Management, vol.38, no.7, pp769-777.
Merchant, K & Van der Stede, W 2003, Management control systems: Performance measurement, evaluation and incentives, Prentice Hall, New York.
Mitchell, F 2005, 'Management Accounting - Performance Evaluation', Financial Management, October, pp33-34.
Sprinkle, G 2000, 'The effect of incentive contracts on learning and performance', The Accounting Review, vol.75, no.3, pp299-326.
Sprinkle, G B 2003, 'Perspectives on experimental research in managerial accounting', Accounting, Organizations and Society, vol.28, no.2-3, pp287-318.
Wood, R E 1986, 'Task complexity: definition of the construct', Organizational Behavior and Human Decision Processes, vol.37, no.1, pp60-82.
Monte Wynder(a)*
a Faculty of Business, University of the Sunshine Coast, Australia. *[email protected]
Acknowledgements
This research was funded by a grant from the University of the Sunshine Coast. I am grateful to the participants at research seminars at the University of the Sunshine Coast, the University of Technology Sydney, The University of Lethbridge and the Australian National University for helpful comments on previous versions of this paper. Thanks also to the two blind reviewers for their insightful comments and suggestions.
Copyright University of Wollongong 2010