Software Qual J (2008) 16:23–44
DOI 10.1007/s11219-007-9022-7
Marc Bartsch · Rachel Harrison
Published online: 8 May 2007. © Springer Science+Business Media, LLC 2007
Abstract In this paper we describe an exploratory assessment of the effect of aspect-oriented programming on software maintainability. An experiment was conducted in which 11 software professionals were asked to carry out maintenance tasks on one of two programs. The first program was written in Java and the second in AspectJ. Both programs implement a shopping system according to the same set of requirements. A number of statistical hypotheses were tested. The results did seem to suggest a slight advantage for the subjects using the object-oriented system since in general it took the subjects less time to answer the questions on this system. Also, both systems appeared to be equally difficult to modify. However, the results did not show a statistically significant influence of aspect-oriented programming at the 5% level. We are aware that the results of this single small study cannot be generalized. We conclude that more empirical research is necessary in this area to identify the benefits of aspect-oriented programming and we hope that this paper will encourage such research.
Keywords Aspects · Object-orientation · Maintainability
1 Introduction
Aspect-orientation is an emerging paradigm that is based on the separation of concerns principle. It offers the idea of a new modular unit that encapsulates crosscutting concerns
M. Bartsch (✉)
School of Systems Engineering, University of Reading, Reading RG6 6AY, UK
e-mail: [email protected]

R. Harrison
Stratton Edge Consulting, Stratton Edge, School Hill, Cirencester GL7 2LS, UK
e-mail: [email protected]
An exploratory study of the effect of aspect-oriented programming on maintainability
which would otherwise be scattered across multiple modules. From very early on in the development of aspect-oriented programming, the claim was made that AOP-based implementations were easier to develop and to maintain (Kiczales et al., 1997). These claims are based on the idea that the more the concerns in an application are separated (Dijkstra, 1982), the easier it is to perform changes locally. Such a promising technique could have a major impact on software projects, potentially reducing the cost of maintaining or developing software. However, so far little empirical evidence can be found to justify this claim. This exploratory study aimed to investigate this claim and to accumulate evidence regarding the effect of aspect-oriented programming on maintainability, in particular on changing a given software system to implement a new requirement. In our context we define maintainability as understandability and modifiability.
We compared aspect-oriented programming and object-oriented programming with regard to software maintainability from the viewpoint of the maintainer. We used a between-subjects design with 11 randomly assigned professional software programmers participating. Each programmer was assigned to either an object-oriented or a corresponding aspect-oriented implementation of the same program. In the experiment, the subjects were asked to carry out a maintenance task which deals with the implementation of a new requirement on their respective program. The data were collected with the help of online questionnaires. Although the results of the experiment suggested that the object-oriented system under investigation may be more maintainable than the AO system, we could not find any statistically significant evidence for the effects of aspect-orientation.
The remainder of this paper is structured as follows: Sect. 2 describes related work, Sect. 3 gives a detailed description of the experiment and Sect. 4 presents the experimental results followed by a discussion in Sect. 5. Sect. 6 presents threats to validity and Sect. 7 finishes with conclusions and future work.
2 Related work
Despite the fact that aspect-oriented programming was introduced in the second half of the 1990s, very little experimental and/or quantitative work can be found in the literature. Research related to measurement has focused so far on defining appropriate aspect-oriented metrics and their initial application. Suites of aspect-oriented metrics have been suggested by Ceccato and Tonella (2004), Sant'Anna, Garcia, Chavez, Lucena, and von Staa (2003) and by Zhao (2004). The new AO metrics are mostly based on work by Lopes (1997) or on Chidamber and Kemerer's suite of object-oriented metrics (Chidamber & Kemerer, 1994).
Aspect-oriented metrics have been applied to investigate Java-based real-time system development (Tsang, Clarke, & Baniassad, 2004). It was found that aspect-oriented programming improved modularity, but that certain metrics suggested a negative impact on understandability and maintainability.
Design patterns have also been the target of empirical investigations. Hannemann et al. suggest aspect-oriented implementations of popular design patterns using a role-based approach (Hannemann & Kiczales, 2002). Garcia et al. present a quantitative study on these design patterns (Garcia et al., 2005) and found that most aspect-oriented implementations improve the separation of concerns principle.
Filho et al. investigated the use of aspect-oriented programming to modularize exception handling. An object-oriented system was refactored in order to modularize exception behaviour into aspects (Filho, Rubira, & Garcia, 2005). It was found that the aspect-oriented version did not offer any significant benefits over the object-oriented
solution. A subsequent study (Filho et al., 2006) found that aspects are only beneficial in simpler cases or if the design is done carefully.
Early experimental assessment of aspect-oriented programming was carried out by Walker, Baniassad, and Murphy (1999). They performed two exploratory experiments to investigate the claim that aspect-oriented programming makes it easier to reason about, develop and maintain certain kinds of application code. They compared the performance and experiences of subjects working on debugging and change tasks. They found that programmers may be better able to understand an aspect-oriented program than an object-oriented program when the effect of the aspect code has a well-defined scope. Also, there seems to be a change in coding strategies when aspect-oriented code is used.
Their initial assessment of aspect-oriented programming most closely resembles the work reported in this article. However, certain differences can be pointed out. In their study, Walker et al. used an early version of AspectJ1 (Version 0.1) that consisted of a slightly modified Java2 version (JCore) and the two purpose-driven aspect languages Cool and Ridl. Cool was designed for synchronization concerns and Ridl was designed for expressing distribution concerns. In our experiment, we used a more mature, general-purpose version of AspectJ (Version 1.3) that offers the concept of a general-purpose aspect. Our change task was therefore directed at AspectJ with Java as a control language, whereas Walker et al. used AspectJ and Emerald (Black, Hutchinson, Jul, & Levy, 1986). The experimental subjects involved also differed. We used 11 software professionals, while in their study 6 individuals were used for the change tasks; the participants were graduate students and professors of computer science and an undergraduate student in computer engineering. We also allowed more training time for each participant (see Sect. 1.2 below), and our focus was on collecting quantitative data, while in their study qualitative data was collected with the help of videotaping and questions asked after each task. We see our study as an extension of Walker et al.'s work.
Roychoudhury, Gray, Wu, Zhang, and Lin (2003) report on a survey that targets the comprehensibility of meta-programming and aspect-oriented programming. They found that aspect-oriented programming supports comprehensibility much better than the use of reflection.
One of the largest empirical studies was carried out by Sant'Anna et al. (2003). In their study they compared reuse and maintenance scenarios in pattern-oriented and aspect-oriented implementations of an agent-based system. They report that the use of aspect-oriented programming results in a better separation of concerns, lower coupling between the components of the system and fewer lines of code. As far as agent-based systems are concerned, the aspect-oriented approach offered a better alignment with high-level abstractions.
Successful experiences with AspectJ are described in Kersten and Murphy (1999), Soares, Laureano, and Borda (2002) and Rashid and Chitchyan (2003). Even though the case studies reported in these papers are not of an experimental nature they are nonetheless relevant to the work at hand since they contribute to a wider understanding about the applicability and benefits of AspectJ. Kersten and Murphy (1999) report on a case study that built a web-based learning environment using AspectJ. They found that it was possible to build a well-structured system in a reasonable amount of time. Similar results were reported by Soares, Laureano, and Borda (2002). They implemented distribution and persistence with AspectJ and concluded that separating persistence concerns allows for an
1 AspectJ, http://www.eclipse.org/aspectj
2 Java, http://java.sun.com
easier change of the persistence mechanism. Rashid and Chitchyan (2003) also focussed on persistence as an aspect and demonstrated successfully that it was possible to use AspectJ to modularize persistence in a real world application scenario.
Limitations of aspect-oriented programming were identified by Kienzle and Guerraoui (2002). They report on their experience of using AspectJ to implement concurrency control and failure management. They concluded that except for simple academic examples there is no general solution to fully separating these concerns from the main application. However, aspect-oriented technologies can help to achieve some degree of separation.
3 Description of the experiment
The aim of this exploratory study is to investigate the effect of aspect-oriented programming on maintainability, i.e., understandability and modifiability. Subjects were asked to answer a questionnaire (chosen randomly) on either an object-oriented or on an aspect-oriented implementation of the same set of requirements. In this section we will define a quality model of maintainability and related hypotheses. We will also identify response variables in our quality model, which helped us to define the experimental design and the experimental materials.
3.1 Quality model
Maintainability is an external product attribute that we cannot measure directly. Instead, we can perform experiments that involve certain response variables. A quality model describes the relationship between these response variables and the external product attribute or factor. Figure 1 shows our quality model of maintainability. We divided the factor maintainability into the criteria understandability and modifiability (Boehm, Brown, & Kaspar, 1978), because in order to maintain code it has to be understood, and the nature of a change and its impact on the software system have to be determined. Modifiability refers to the ease with which a change can then be applied to a software system.
Fig. 1 Quality model for maintainability. The model decomposes the factor maintainability into the criteria understandability and modifiability. Understandability is measured by the identification of classes/aspects and the corresponding identification time, by following the control flow to identify the output and the corresponding identification time, and by the Software Understanding (SU) rating. Modifiability is measured by change counts: change effort in NCLOC and change time in minutes.
We would like to measure understandability with the help of three different metrics or response variables. First, we are interested in how many classes or aspects a subject can correctly identify and how much time each subject needs for this task. Identifying the components is a simple task, but gives an insight into whether the experimental subjects are able to gain an overview of a software system. Second, in addition to linking understandability to the identification of the components of a software system, we think that the identification of the relationships between those components also indicates whether a system is understandable. A software system can be considered understandable if its output can be determined correctly by following the application's control flow. A software system can also be considered understandable if the identification of the output can be carried out in a short time. Third, understandability is also measured by Software Understanding (SU), which is a subjective rating on an ordinal scale (1–5): 1 denotes a system that is easy to comprehend while 5 applies to a system that is difficult to comprehend. Such a Likert-type scale (Likert, 1932) has been used in experimental work, e.g., in Pfahl, Laitenberger, Dorsch, and Ruhe (2003) and for understandability ratings in Harrison, Counsell, and Nithi (2000).
Modifiability can be measured by change counts. We consider a software system less modifiable if it takes a long time to implement a new requirement. Also, we consider a system less modifiable the more non-commented lines of code (NCLOC (Fenton & Pfleeger, 1996)) the implementation of a new requirement needs.
Thus, for our experiment, we can identify the following response variables to measure maintainability: the percentage of correctly identified components, the time to identify those components, the identification of the output and the time it takes, the SU rating, the number of NCLOC changed in order to implement a new requirement and the time it takes for this implementation.
3.2 Hypotheses investigated
Based on the quality model above, we defined four hypotheses in order to investigate the effect of aspect-oriented programming on maintainability.
H01 The first null hypothesis is that the use of aspect-oriented programming does not affect the understandability of a system.
H1 The first alternative hypothesis is that the use of aspect-oriented programming does affect the understandability of a system.
H02 The second null hypothesis is that the use of aspect-oriented programming does not affect the modifiability of a system.
H2 The second alternative hypothesis is that the use of aspect-oriented programming does affect the modifiability of a system.
It should be noted that no direction has been specified in the alternative hypotheses. There is no prediction whether the effect on understandability and modifiability would be positive or negative. Consequently we will perform two-tailed significance tests at the 5% level.
3.3 Systems investigated
In the experiment, two systems were used both of which implement an on-line shopping system with the usual functionality.3 Users can log on to the shop and put goods into a
3 Both systems are available under: http://www.personal.rdg.ac.uk/~sir04mb2/ShopSystem.zip
shopping cart which can then be bought. An administrator can add products to and remove products from the shop system. Both systems implement the same requirements and offer the same public interface. Hence, their output is identical. Source code comments have only been used to indicate code that could lead to confusion. The two systems were defined as follows:
3.3.1 System OO
The Java version of the shopping system contains about 460 non-commented lines of code (NCLOC (Fenton & Pfleeger, 1996)). Inheritance was not used. Exceptions were used to indicate erroneous behaviour. In total, System OO consisted of 11 classes. Figure 2 shows a UML diagram of System OO. Class ShopManager constitutes the central component of the system. It offers a public interface that provides entry points to carry out a shopping activity. However, many more concerns are involved that are scattered across this public interface. Each triggered action needs to be authenticated and authorized, i.e., the system must check whether a user that requests an action is indeed logged in to the system. Also, the system must check whether this user is authorized to carry out an action and whether any of the parameters passed to the public interface contain null values. What is more, each call to the public interface and the authentication manager will be logged to System.out. These are all concerns which crosscut the main concern of class ShopManager, i.e., to provide the essential functionality to carry out a shopping activity, and which are scattered across the entire public interface of the ShopManager class. The effect of such tangled concerns is a relatively small number of components compared with the corresponding aspect-oriented system (see below) and a relatively large ShopManager class (154 NCLOC).
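The kind of tangling described here can be sketched in Java as follows. This is a minimal illustration with assumed names and signatures, not the study's actual source; it shows how the authentication, authorization, null-check and logging concerns recur in each public method:

```java
import java.util.HashSet;
import java.util.Set;

class User {
    final String name;
    final boolean admin;
    User(String name, boolean admin) { this.name = name; this.admin = admin; }
}

class ShopManager {
    private final Set<String> loggedOn = new HashSet<>();

    public void logOn(User user) {
        if (user == null) throw new IllegalArgumentException("null argument"); // null check
        loggedOn.add(user.name);
        System.out.println("logOn: " + user.name);                             // logging
    }

    // The same crosscutting checks reappear at the start of every public method.
    public void addProductToShop(User user, String product) {
        if (user == null || product == null)
            throw new IllegalArgumentException("null argument");               // null check
        if (!loggedOn.contains(user.name))
            throw new IllegalStateException("not logged in");                  // authentication
        if (!user.admin)
            throw new IllegalStateException("not authorized");                 // authorization
        System.out.println("addProductToShop: " + product);                    // logging
        // ...business logic for adding the product would follow here...
    }
}
```

Every additional public method would repeat the same four preliminary checks, which is exactly the scattering that the aspect-oriented version tries to remove.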
Fig. 2 UML diagram of System OO
3.3.2 System AO
The AspectJ version of the shop system contains about 490 NCLOC. Again, inheritance was not used. Exceptions were used to indicate erroneous behaviour. Occurrences of reflection in aspects were commented. Concerns that have been implemented as aspects are: logging, authentication, authorization, null argument checking and aspect precedence. In total, System AO consisted of nine classes and eight aspects. Figure 3 gives an overview of the aspect-oriented system. The dashed lines that run from an aspect to a class indicate which classes are affected by an aspect.
The public interface of the program (ShopManager) offers methods to log in to the system and to log off as well as methods to add products to or remove products from a shopping cart. Other methods support the purchase of a shopping cart or the removal of all selected products. A logged in administrator can also add products to or remove products from the shop. Every public method expects a User object which will be taken to determine the authentication and authorization status. For example, if an attempt is made by a regular user to call a public method which is reserved for administrators, an exception will be thrown. An exception will also be thrown if a user is not logged in.
In order to derive an aspect-oriented system that is functionally equivalent to the object-oriented version, System OO was taken as a template and crosscutting code was migrated into aspects. Thus, aspects can be found that deal with the authentication and authorization of users and the administration when calls to the public interface of the system are made (LogonStatus, AuthorizationManagerUser and AuthorizationManagerAdmin). The overall logging concern has been implemented in three aspects each one dealing with a certain logging variant: logging on and off and calls to the ShopManager interface. The
Fig. 3 UML diagram of System AO
CheckArgument aspect throws an exception if one of the parameters passed to the ShopManager class contains null values. All aspects implement exactly the same functionality as in System OO. Finally, aspect AspectPrecedence makes sure that the order in which the different aspects will be woven into the system corresponds to that of the object-oriented system.
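As a purely illustrative sketch (assumed names and pointcuts, not the study's actual source), the null-argument concern and the precedence declaration might look like this in AspectJ:

```aspectj
// Hypothetical sketch: a single aspect replaces the null checks that
// System OO repeats in every public ShopManager method.
public aspect CheckArgument {
    pointcut publicCall(): execution(public * ShopManager.*(..));

    before(): publicCall() {
        for (Object arg : thisJoinPoint.getArgs()) {
            if (arg == null)
                throw new IllegalArgumentException("null argument");
        }
    }
}

// A precedence declaration fixes the weaving order, e.g. so that the
// argument check runs before any other aspect, mirroring System OO.
public aspect AspectPrecedence {
    declare precedence: CheckArgument, *;
}
```

The pointcut quantifies over the whole public interface, so the check is written once instead of being copied into each method.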
During the design of the two systems certain trade-offs had to be made. Even though the overall size of the systems in non-commented lines of code does not differ significantly (System OO 460 NCLOC, System AO 490 NCLOC), the number of components does: System AO consists of 17 components whereas System OO consists of only 11. On the one hand, more components might mean higher cognitive complexity, so that System AO might be more difficult to understand. On the other hand, the components of System AO are smaller, which might lead to better understandability. For example, class ShopManager is much smaller in System AO (80 NCLOC) than in System OO (154 NCLOC).
Besides size, another factor that influences the effort to maintain code is the separation of concerns and the clear assignment of responsibilities to components. The AO system addresses this issue by separating some concerns, such as logging or authentication, from the main business components into aspects. For example, class ShopManager implements only concerns related to the main business logic in System AO, whereas in System OO class ShopManager also implements part of the logging, authentication and authorization. Such a clearer separation of concerns and distribution of responsibilities might have a beneficial impact on the understandability of the code.
Our choice to use more components in System AO was motivated by the aim to achieve a high separation of concerns. Logging, for example, is a crosscutting concern which could have been implemented in a single aspect. However, such an aspect would have supported three different kinds of logging, each of which crosscuts a different set of classes or methods. Walker et al. conclude that programmers may be better able to understand an aspect-oriented program when the effect of the aspect code has a well-defined scope (Walker, Baniassad, & Murphy, 1999). A well-defined scope means that the aspect-core interface, i.e., the boundary between aspect code and base code, is narrow. Thus, in order to keep the aspect-core interface of each aspect as narrow as possible, we divided the logging concern into three different aspects, one for each specific logging concern and scope.
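One of the three logging aspects might look roughly like this (an illustrative AspectJ sketch with assumed names; the point is that the pointcut, and hence the aspect-core interface, is confined to a single class):

```aspectj
// Hypothetical sketch: one narrowly scoped logging aspect out of three.
public aspect LogShopManagerCalls {
    // The pointcut is the aspect-core interface; restricting it to the
    // public methods of ShopManager keeps the aspect's scope well-defined.
    pointcut shopCall(): execution(public * ShopManager.*(..));

    before(): shopCall() {
        System.out.println("call: "
            + thisJoinPoint.getSignature().getName());
    }
}
```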
Another trade-off that we needed to consider was that between system size and the motivation of the experimental subjects. None of the subjects received any compensation (see next section) and thus we relied on their own motivation to complete the entire experiment. Prechelt et al. report a mortality rate of four subjects out of 22 in their experiment (Prechelt, Unger, & Schmidt, 1997). Since we were dealing with only half as many experimental subjects, a similar mortality rate would have led to very few data points. We therefore limited each system to no more than 12 pages of printed A4 paper, also to ensure that the participants of the experiment would be able to work reasonably well with a paper version of the source code.
3.4 Subjects
The participants of the experiment were software professionals. None of the participants had prior knowledge of aspect-oriented programming. The participants were recruited in two ways. One group was personally known to one of the authors. Through them, further contacts were established with other colleagues. The second smaller group responded to a
posting for professional Java developers on a User Group Web page. Those who responded were asked to indicate their professional status. In total, 11 programmers were recruited. All subjects had a minimum of 2–5 years of experience in programming in an object-oriented language. Compensation was not offered to the subjects; they all participated voluntarily.
3.5 Preparation
Since none of the 11 participants had any experience in aspect-oriented programming, an online tutorial was designed that introduced them to AspectJ in five sessions. These five sessions were published online over a period of two weeks and each experimental subject could work on them at their own pace. The course material for each session consisted of source code that the participants were to run using the Eclipse development environment.4 In addition, they were provided with online information about the AspectJ language and with questions and exercises about each source code example.
The questions and exercises were designed to give them experience in exploring and understanding aspect-oriented concepts and to practice writing aspect-oriented software. The participants were asked to submit solutions to the programming exercises by email. Thus, their development could be monitored during the course of the tutorial. In order to guarantee at least a minimum amount of experience in AspectJ, participants were only admitted to the questionnaire if they had sent in solutions to all five exercises. The five sessions covered all language features necessary to answer the questionnaire and also focused on possible applications of aspect-oriented programming. The following list gives an overview of the five tutorials:
1. Session 1 (Hello World) introduced the basic concept of an aspect. The subjects were asked to debug a Hello World program in the Eclipse environment to see the effect that an aspect has on a given trivial program.
2. Session 2 (Error Management) was about different kinds of advice and inter-type declarations. The subjects were to implement a simple error management aspect by introducing a new member to an existing class modelling an error-prone device.
3. Session 3 (Logging) showed how aspects can be used for logging and how the reection API can be used to collect context information.
4. Session 4 (Design by Contract) asked the subjects to implement an aspect that checks invariants of classes before and after certain method calls were made.
5. Session 5 (Caching) was designed to point out how caching can be implemented with the help of aspects.
3.6 Experimental materials
Subjects were provided with an email that contained specific instructions on how to answer the questionnaire, followed by the questionnaire itself (see APPENDIX A: Questions), which consisted of four questions, reflecting the response variables identified earlier. In question 1 (Q1), the experimental subjects were asked to identify all classes and aspects in the source code. Question 2 (Q2) dealt with the identification of the output of the software.
4 Eclipse, 2006, http://www.eclipse.org
This question required the subjects to follow the control flow of the program. In the third question (Q3), the software professionals were asked to implement a new requirement: the software should throw an exception if the user has been idle for more than 5 min. This requirement is crosscutting in the sense that it relates to the entire public interface of the shop system. The last question (Q4) was an understandability rating on a scale from 1 to 5. The subjects were asked how understandable they found the software system they had been assigned to. As an attachment to the email a PDF file was provided that contained the source code of either the Java or the AspectJ program. Each subject only received one program. The PDF file was password protected and allowed printing but not copying and pasting in order to discourage the use of tools, in case this biased the results (this is discussed further in Sect. 6, Threats to validity). In the questionnaire the subjects were not only asked to perform tasks on the given source code, but also to note the current time before and after each task. This allowed for a comparison of the time it took the subjects to carry out the tasks. In the instructions, the subjects were asked to print out the source code and to mail back the answers to the questions. The subjects were also asked to conform to the following rules:
1. To answer the questionnaire in an undisturbed environment;
2. Not to look at the source code prior to the experiment;
3. To take about 60 min to answer the questionnaire;
4. Not to use any aid or resources, including pencils, internet or programs;
5. To declare that they adhered to the instructions they had been given.
The experimental subjects were told to take about 60 min to answer the questionnaire. We only gave an approximate time because we did not want them to stop after an hour but to finish the questionnaire no matter how long it took.
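For the System OO group, the Q3 idle-timeout change could only be realized by touching every public method. A hypothetical Java sketch (all names, the now() clock hook and the structure are our own illustration, not the subjects' actual answers):

```java
import java.util.HashMap;
import java.util.Map;

class IdleShopManager {
    static final long IDLE_LIMIT_MS = 5 * 60 * 1000; // the 5-minute limit from Q3
    private final Map<String, Long> lastAction = new HashMap<>();

    // Clock hook so the idle behaviour can be exercised without real waiting.
    long now() { return System.currentTimeMillis(); }

    private void checkIdle(String user) {
        Long last = lastAction.get(user);
        if (last != null && now() - last > IDLE_LIMIT_MS)
            throw new IllegalStateException("idle too long: " + user);
        lastAction.put(user, now());
    }

    // In the OO solution this call must be added to every public method,
    // which is what makes the requirement crosscutting.
    public void addProductToCart(String user, String product) {
        checkIdle(user);
        // ...business logic...
    }
}
```

A System AO subject could instead place the same check in a single piece of before() advice over the public interface, leaving the business methods untouched.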
3.7 Experimental design
A between-subjects design was used to test the hypotheses. The independent variable was the system used (object-oriented or aspect-oriented) and the dependent variables were software understanding and modifiability. Learning effects were avoided since each subject only answered questions on one system. The data were collected online. After a participant had sent in solutions to all five exercises from the aspect-oriented tutorial, he or she was sent an email as described in the previous section. The subject was asked to mail back the answers to the questionnaire. Neither during the preparation phase nor during the experimental phase were the names of the participants disclosed within the group. All participants were locally separated. Also, there were no deadlines set for the solutions to the exercises and the answers to the questionnaire. This policy was established from the beginning of the study to give the participants the chance to work at their own pace. The subjects were assigned on a random basis to each of the experimental pools.
3.8 Pilot study
In order to test the experimental design and the materials a pre-pilot and a pilot study were carried out. Prior to the pilot study two students were asked to participate in a pre-pilot assessment. Both students were PhD students of Computer Science. One of them had previous work experience in Java and the other had previous experience in
aspect-oriented programming. Firstly, the subject with experience in aspect-oriented programming answered the questionnaire on the aspect-oriented program, which led to major modifications of both the questionnaire and the software systems. Secondly, the other subject answered the modified questionnaire using the modified object-oriented system. Also, both subjects provided comments that proved helpful in improving the materials. Overall, this early assessment helped to improve the material for the pilot study and the experiment.
After the pre-pilot study, a pilot study was carried out involving 12 students from the School of Systems Engineering at the University of Reading. The participants were either MSc or PhD students of Computer Science. Since none of the students had any knowledge of aspect-oriented programming, they received a tutorial introduction to AOP, lasting about 1.5 h, one day prior to the pilot study. In the pilot study, each subject was provided with a questionnaire and a printout of a program. Half the subjects were given an object-oriented program and the other half were given the equivalent aspect-oriented program. The subjects had to identify all classes and aspects that are defined in the program. Also, they were asked to write down all the lines of output and to apply a change to the program. Finally they were asked to rate the understandability of the program on a scale from 1 to 5. The subjects were given 60 min to complete the tasks. Table 1 summarizes the results for all 12 students that participated in the pilot study.
Only one subject (#8) achieved 100% in all tasks. This subject rated the aspect-oriented system with a score of 2 (fairly easy to understand). Overall performance deteriorated in the modification task (Q3). Three subjects achieved 100% while four subjects provided no solution. The assessment framework for the modification task accepted detailed answers in pseudo code. Since the overall performance on Q3 was so low, even for System OO, it seems that the systems and tasks given to the subjects were not trivial. It needs to be said, however, that the knowledge of object-oriented programming among the entire group varied considerably.
Table 1 Pilot study results

Subject  Q1 (%)  Q2 (%)  Q3 (%)  Q4 SU (1-5)  OO or AO
1        100     100     10      3            AO
2        100     80      100     3            OO
3        100     0       0       3            OO
4        100     60      20      3            OO
5        100     50      0       5            AO
6        100     50      0       3            AO
7        100     0       0       4            AO
8        100     100     100     2            AO
9        100     100     50      2            OO
10       91      5       10      5            OO
11       100     40      20      5            AO
12       91      70      100     2            OO

Q1: classes or aspects identified; Q2: outputs identified; Q3: apply modification; SU: understandability rating (1 = very understandable, 5 = not very understandable).
The subjects also provided helpful comments on the questions. It became obvious that aspect-oriented programming is a concept that needs more instruction than could be provided in the course of this investigation. The amount of instruction the subjects received as preparation led to fully correct answers in only one case (subject #8).
4 Experimental results
Table 2 shows the results of the experiment for all 11 software professionals who participated in this study. Four subjects failed to identify either all the classes or all the classes and aspects. Subject #6 failed to identify a single class, which seems to have been caused by a misreading of the question and the assumption that only aspects were to be identified. The other three subjects (#5, #7, #8) each missed exactly one class or one aspect.
The different percentage values (91%, 94%) are caused by the different numbers of components (System OO contains 11 classes whereas System AO contains nine classes and eight aspects). The lines of output were identified correctly by nine of the 11 subjects. Subject #5 did not identify any correct output, possibly because of misreading the question. Subject #6 identified only six of the 10 lines of output.
The implementation task was carried out successfully by all the subjects. Full marks were given if the suggested solution was specific enough to lead to a correct program. In particular, the crosscutting nature of the implementation had to be correct. The subjects were free to choose either Java or AspectJ to implement the modification. Minor syntactic errors and the use of detailed pseudo-code were considered acceptable. The solutions that the experimental subjects suggested differed between the two groups. The subjects that were given the OO system extended classes without adding any aspects, whereas the other group extended or added aspects and also extended classes. None of the subjects added a class and only two subjects added an aspect. Both systems were designed so that localized implementations were possible. The fact that all subjects answered Q3 successfully might mean that the question was too easy. In this case the time that was needed to answer the
Table 2 Experimental results: questions

Subject  Q1 (%)  Q2 (%)  Q3 (%)  Q4 SU (1-5)  OO or AO
1        100     100     100     2            AO
2        100     100     100     4            AO
3        100     100     100     1            OO
4        100     100     100     1            AO
5        91      0       100     2            OO
6        0       60      100     2            OO
7        91      100     100     2            OO
8        94      100     100     2            AO
9        100     100     100     2            OO
10       100     100     100     1            AO
11       100     100     100     2            OO

Q1: classes or aspects identified; Q2: outputs identified; Q3: apply modification; SU: understandability rating (1 = very understandable, 5 = not very understandable).
question might give more insight into the effects of AOP. Both systems were given scores of 1 or 2 (easy to understand) by almost all subjects. Only subject #2 rated the AO system with a 4 (relatively difficult to understand).

Table 3 summarizes the time that the subjects needed for each question. The time is given in minutes and was calculated by taking the difference of the times that the subjects noted before and after each question. The column Time (Q2, Q3) contains the total time each subject needed to answer questions Q2 and Q3. Following the control flow (mostly in a debugger, but also by looking at the source code) is very often the first part of a maintenance activity, followed by a modification. Q2 tests whether the subject can follow the control flow in order to understand the relationships within the program, and Q3 tests whether the subjects were able to add a modification. Consequently we were not only interested in the time the subjects needed to answer Q2 and Q3 individually, but also in the total time it took them to answer Q2 and Q3. Subject #6 only needed one minute for the identification of the classes and aspects, but this is the subject that failed to identify a single class correctly.

Since our data analysis of outliers showed that two subjects (#5, #6) possibly misunderstood a question, the time it took them to answer those questions has been deleted from Table 3, shown as a dash. We also deleted the data point that is based on these times (Time (Q2, Q3)). Further statistical analysis was carried out without these values.

The experimental subjects were required to state as precisely as possible how they would implement a new requirement. They were asked to write down each code line change and each removal or addition of code. Table 4 gives an overview of how many line changes were carried out by each subject. We counted non-commented lines of code (NCLOC). The largest change was carried out in the AO system (subject #4) and so was the smallest one (subject #8). The second largest and second smallest changes were carried out in the OO system (subjects #5 and #9). It seems that an aspect-oriented implementation does not necessarily lead to less code than an object-oriented implementation. We used the MiniTab statistical software package to calculate all statistical data presented in this paper.5

5 MiniTab Statistical Software, Release 14.1, http://www.minitab.com/products/minitab/14

Table 3 Time taken for each question (min)

Subject  Time (Q1)  Time (Q2)  Time (Q3)  Time (Q2, Q3)  OO or AO
1        7          39         35         74             AO
2        5          29         14         43             AO
3        6          23         23         46             OO
4        5          25         27         52             AO
5        5          -          25         -              OO
6        -          20         21         41             OO
7        6          12         24         36             OO
8        5          40         20         60             AO
9        5          28         17         45             OO
10       6          14         30         44             AO
11       4          29         20         49             OO
Table 4 Experimental results: code changes

Subject  Changed NCLOC  OO or AO
1        17             AO
2        7              AO
3        10             OO
4        27             AO
5        20             OO
6        18             OO
7        11             OO
8        5              AO
9        6              OO
10       10             AO
11       7              OO
5 Discussion
Table 2 shows that there was little variation in the correctness of the answers to questions Q1, Q2 and Q3. With this data it is difficult to discriminate sufficiently between the two groups to draw any conclusions. Therefore, we focused our analysis of the results on the timing data, the code change data and the SU rating. First, we investigated the medians of the data rows shown in Tables 3 and 4 and the SU rating from Table 2. Table 5 summarizes the median and the other quartiles for all data rows. The median is the value of the middle-ranked item and divides a data set into two partitions. The lower quartile is the median of the lower partition and the upper quartile is the median of the upper partition. Note that Q1, Q2 and Q3 refer to question 1, question 2 and question 3, respectively.
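This quartile rule can be made concrete with the OO Time (Q2) values from Table 3 (12, 20, 23, 28 and 29 min). The helper below is a small illustrative sketch of the rule just described, not part of the original analysis; the function name is ours.

```python
def quartiles(data):
    """Return (lower quartile, median, upper quartile), where each quartile
    is the median of the corresponding half-partition, as described above."""
    xs = sorted(data)
    n = len(xs)

    def median(v):
        m = len(v) // 2
        return v[m] if len(v) % 2 else (v[m - 1] + v[m]) / 2

    lower = xs[: n // 2]        # values below the median position
    upper = xs[(n + 1) // 2 :]  # values above the median position
    return median(lower), median(xs), median(upper)

# OO Time (Q2) from Table 3 (subjects 3, 6, 7, 9, 11)
print(quartiles([23, 20, 12, 28, 29]))  # (16.0, 23, 28.5)
```

These values match the Time (Q2) OO row of Table 5 (lower quartile 16.00, median 23.00, upper quartile 28.50).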
We found that the median of the OO data equals the median of the AO data for Time (Q1) and for the SU rating (Q4). In all other cases, the AO median was higher than the OO median. To illustrate this data, the following box plots show the timing data for Q1, Q2 and Q3, the total timing data for Q2 and Q3, and the rating data for Q4 SU.
Table 5 Robust summary statistics (quartiles)

Question       System  Minimum  Lower quartile  Median  Upper quartile  Maximum
Time (Q1)      OO      4.00     4.50            5.00    6.00            6.00
               AO      5.00     5.00            5.00    6.50            7.00
Time (Q2)      OO      12.00    16.00           23.00   28.50           29.00
               AO      14.00    19.50           29.00   39.50           40.00
Time (Q3)      OO      17.00    19.25           22.00   24.25           25.00
               AO      14.00    17.00           27.00   32.50           35.00
Time (Q2, Q3)  OO      36.00    38.50           45.00   47.50           49.00
               AO      43.00    43.50           52.00   67.00           74.00
Q4 SU          OO      1.00     1.75            2.00    2.00            2.00
               AO      1.00     1.00            2.00    3.00            4.00
NCLOC          OO      10.00    13.00           15.00   20.75           26.00
               AO      6.00     6.50            16.00   31.00           33.00
Figure 4 shows the box plot for Time (Q1). Box plots provide a useful presentation of how the data points are distributed. They are constructed from three summary statistics: the lower quartile, the median and the upper quartile (Fenton & Pfleeger, 1996). The grey box indicates all three values: the lower and upper quartiles define the left and right edges of the box, and the median is drawn inside it. In the lower box plot of Fig. 4 no median line is visible because it coincides with the lower quartile. The single lines extending from the box are the lower and upper tails. These represent the theoretical bounds between which we are likely to find all the data points if the distribution is normal (Fenton & Pfleeger, 1996). An outlier is represented as a star (*). As stated, the median was the same for both systems, but the subjects using the OO system tended to need slightly less time for the identification of the classes. The data could be explained by the fact that the AO system contains almost twice as many entities as the OO system. We realize that it is difficult to draw any conclusions from Time (Q1) as far as an effect of AOP is concerned.
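A common construction, and the one we assume here, takes each tail to extend 1.5 box lengths beyond the corresponding quartile; points outside the tails are marked as outliers. As a sketch (the helper name is ours), applied to the OO Time (Q1) quartiles from Table 5:

```python
def box_plot_tails(lower_q, upper_q, scale=1.5):
    """Theoretical lower/upper tails of a box plot: each tail extends
    `scale` box lengths beyond its quartile (assumed 1.5 rule)."""
    box_length = upper_q - lower_q
    return lower_q - scale * box_length, upper_q + scale * box_length

# OO Time (Q1) quartiles from Table 5: lower 4.50, upper 6.00
lower_tail, upper_tail = box_plot_tails(4.50, 6.00)
print(lower_tail, upper_tail)  # 2.25 8.25
```

Any observation below 2.25 or above 8.25 min would therefore be plotted as an outlier in this row.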
The time that the subjects needed to answer Q2, i.e., the identification of all lines of output, shows a difference of 6 min in the median (see Fig. 5), which can be interpreted as a tendency to need less time to understand the control flow of an application for the OO system. This tendency is intuitively plausible, since the actual control flow in an AO program has to be reconstructed from knowledge of all the aspects of the system and of how they relate to the base system. The subjects were not allowed to use tool support, which could have assisted them in reconstructing the control flow.
The median time to apply a modification to the two systems (Time (Q3)) is smaller for the OO system (see Fig. 6). Also, there is more variance in the times taken for the AO system. This tendency could be explained by a more consistent answer pattern in the OO group, due to the OO group's greater experience with the OO paradigm relative to the AO group's experience with the AO paradigm.
We also investigated the distribution of Time (Q2, Q3) (see Fig. 7) because, in order to carry out maintenance, programmers typically need to follow the control flow to understand a program before a modification is applied. Again, there is more variance in the times taken for the AO system.
Fig. 4 Box plot: time (Q1)
Fig. 5 Box plot: time (Q2)
Fig. 6 Box plot: time (Q3)
Fig. 7 Box plot: time (Q2, Q3)
Fig. 8 Box plot: Q4 SU

Figure 8 shows a box plot for the rating of Q4 SU. There is one outlier for the OO system (one subject thought the system was very easy to understand). Otherwise the values were all the same. The scores for the AO system show more variance. The median values for both sets of data are the same.
Most subjects did not find it difficult to understand the two systems. This result could be explained by the fact that the participating subjects were highly motivated and probably above average in ability and intelligence, and by the fact that the systems were quite small.
Overall, the timing data seems to suggest that it takes less time to answer the questions for the OO system than for the AO system. However, the differences of the median values could still be caused by chance.
Figure 9 shows a box plot of the code changes, i.e., the number of NCLOC changed by the software professionals to implement the new requirement. The median value for the AO system is only slightly higher than the median value for the OO system. The AspectJ group shows both the largest and the smallest number of non-commented lines of code changed. This larger variation could be attributed to the fact that the subjects were not as familiar with AspectJ as they were with Java, and that some subjects needed more code to express the requirement in an aspect-oriented language.
Fig. 9 Box plot: code changes

We did not expect the data sets to be normally distributed and so we conservatively applied the non-parametric Mann-Whitney test (Conover, 1971) to find out whether the
difference of the median values for each data set was caused by chance or not. Non-parametric tests can be used in situations where it is doubtful whether the underlying distribution is normal (Freund & Simon, 1997). The Mann-Whitney test (also called the two-sample rank or two-sample Wilcoxon rank sum test) makes inferences about the difference between two population medians based on data from two independent, random samples (Conover, 1971). The p-value that is calculated represents the probability that the two median values come from the same population. Table 6 shows the results of the Mann-Whitney test. We used a 2-tailed test with a significance level of 5%: a p-value of 0.05 or less would indicate that the two median values do not originate from the same population, and if this happened we would conclude that there is a measurable difference due to the application of aspect-oriented programming in this case. A 2-tailed test was chosen because our hypotheses specified no direction.
From Table 6 we see that the probabilities for the hypothesis that the object-oriented median values of Q4 SU and NCLOC are equal to the aspect-oriented median values are quite high at 0.91 and at 0.93, respectively. The lowest probabilities can be found for Time (Q2, Q3) and Time (Q2) at 0.21 and 0.25. However, none of the probabilities are significant at the 5% or even at the 10% level. Thus, no evidence was found in the data that would justify a rejection of the null hypotheses formulated for this experiment.
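To make the procedure concrete, the sketch below applies a hand-rolled, normal-approximation version of the Mann-Whitney test to the Time (Q2) row of Table 3. The function is our own illustration, not the paper's computation: the published values came from MiniTab, whose p = 0.25 for this row differs slightly from the approximation here.

```python
import math

def mann_whitney_two_sided(x, y):
    """Two-sided Mann-Whitney test via the normal approximation, using
    midranks for ties. Illustrative only; exact tables or statistical
    software should be preferred for samples this small."""
    pooled = sorted(x + y)
    # assign midranks so tied values (e.g. 29 appears in both groups) share a rank
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + j + 1) / 2  # average of 1-based ranks i+1..j
        i = j
    n1, n2 = len(x), len(y)
    u = sum(rank[v] for v in x) - n1 * (n1 + 1) / 2  # U statistic for x
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)     # no tie correction
    z = (u - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

# Time (Q2) in minutes from Table 3 (subject #5's deleted value omitted)
oo = [23, 20, 12, 28, 29]   # subjects 3, 6, 7, 9, 11
ao = [39, 29, 25, 40, 14]   # subjects 1, 2, 4, 8, 10
u, p = mann_whitney_two_sided(oo, ao)
print(p > 0.05)  # True: not significant at the 5% level
```

The approximation yields p of roughly 0.21, agreeing with Table 6 in direction and conclusion: the Time (Q2) difference is not significant at the 5% level.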
The high p-value for NCLOC seems to suggest that there is no difference in the modifiability of object-oriented and aspect-oriented code. Whatever the underlying programming technique, both offer opportunities for shorter and longer implementations. We cannot conclude that an aspect-oriented solution to a new requirement leads to less code and thus to less effort.
We investigated the distributions of the timing data using the Anderson-Darling test (Stephens, 1974) and found that all the data sets were in fact approximately normally distributed except Time (Q1) AO. Consequently we also investigated the mean values of the data sets, which can be found in Table 7. All the mean values for the timing data were higher for the AO system than for the OO system, confirming the trend that it takes less time to answer questions on the OO system. Also, just like the median value, the mean value of NCLOC is higher for the AO system than for the OO system. We did not compute a mean value for Q4 SU, since it is measured on an ordinal scale.
We also performed a parametric test (the unpaired t-test) on Time (Q2), Time (Q3), Time (Q2, Q3) and NCLOC. Again, none of the probabilities was significant at the 5% level, or even at the 10% level, using a 2-tailed significance test.
Whenever more than one test is performed on the same data set, a Bonferroni correction can be used to account for the multiple comparisons that are being made (Harris, 1975). In our case the multiple comparisons are the investigations of the median and the mean values. In order to avoid a statistically significant result arising by chance, the Bonferroni correction raises the
Table 6 Mann-Whitney test

Data row       p-value (probability that the OO median equals the AO median)
Time (Q1)      0.65
Time (Q2)      0.25
Time (Q3)      0.46
Time (Q2, Q3)  0.21
Q4 SU          0.91
NCLOC          0.93
Table 7 Mean values

Data row       System  Mean   p-value (probability that the OO mean equals the AO mean)
Time (Q1)      OO      5.20
               AO      5.60
Time (Q2)      OO      22.40  0.27
               AO      29.40
Time (Q3)      OO      21.67  0.42
               AO      25.20
Time (Q2, Q3)  OO      43.40  0.13
               AO      54.60
NCLOC          OO      16.50  0.61
standard of proof, i.e., it either decreases the alpha level or increases the p-values. The alpha level is the probability of making a type I error, i.e., rejecting the null hypothesis although it is actually true; it is also referred to as the level of significance (Freund & Simon, 1997), in our case 0.05. For example, since we carried out two comparisons, we could lower the alpha level from 0.05 to 0.05/2 = 0.025, i.e., 2.5%. Applying a Bonferroni correction to our analysis would not change the results, since we could not show statistical significance in either the median or the mean values.
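The arithmetic of the correction is simple. A small sketch of both equivalent forms, using the smallest p-values actually observed here (0.21 from Table 6 and 0.13 from the t-tests of Table 7):

```python
alpha = 0.05
n_comparisons = 2  # median-based and mean-based tests on the same data

# Form 1: lower the per-comparison alpha level
adjusted_alpha = alpha / n_comparisons
print(adjusted_alpha)  # 0.025

# Form 2: equivalently, inflate each raw p-value (capped at 1.0)
raw_p_values = [0.21, 0.13]  # smallest p-values in this study
adjusted_p = [min(1.0, p * n_comparisons) for p in raw_p_values]
print(adjusted_p)  # [0.42, 0.26]: still far from the 5% level
```

Since even the uncorrected p-values exceed 0.05, the correction cannot change the outcome, as noted above.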
In total, we have noticed tendencies in the data which suggest that understanding and applying changes to AO code take more time than the same activities on an OO system. However, the differences between the two systems could not be shown to be statistically significant at the 5% or even at the 10% level. Our data also shows no sign that aspect-oriented programming has an effect on the modifiability of our software systems as far as the number of NCLOC changed is concerned. Therefore, we retain the null hypotheses H01 and H02 and conclude that our data does not show that AOP has a statistically significant effect on software understanding and modifiability.
6 Threats to validity
Potential threats to the validity of an experiment such as that described in this paper are a major concern. We discuss threats to internal and external validity. Internal validity is the extent to which the observed effect is caused only by the experimental treatment condition (Christensen, 1988; Perry, Porter, & Votta, 2000). In our case it concerns the question of whether any observed effect was caused only by the programming techniques involved. External validity is the extent to which the results of an experiment can be applied to and across different persons, settings and times (Christensen, 1988; Perry, Porter, & Votta, 2000).
6.1 Internal validity
Internal validity was threatened by the low number of subjects that participated in this experiment, limiting the strength of the conclusions. The size and complexity of the two software systems might also have been too low to discriminate sufficiently, since most
subjects regarded the programs as understandable. For software professionals the programming language or technique might not be an issue unless a certain level of complexity has been reached. Furthermore, the experiment was not carried out in a controlled environment, although it was designed to encourage the participants to imitate a controlled setting. The experimental subjects were asked to answer the questionnaire in an undisturbed environment and to explicitly acknowledge this requirement on the questionnaire. Moreover, the software professionals were all familiar with object-oriented programming languages. This high level of experience in OO, compared with a relatively low level of experience in AspectJ, may explain the stronger performance of the OO group. Selection effects were avoided by assigning subjects on a random basis to each of the experimental pools. Instrumentation effects due to differences in the experimental materials were reduced by ensuring that the modifications required for the OO and AO systems were equivalent. Also, the amount of tool support provided to both groups was kept to a minimum for two reasons. Firstly, we regarded the use of tools as a confounding factor, since their influence on the result of the study could not be anticipated: some subjects might be more familiar with, e.g., an IDE than others, and there might be differences between the versions of the IDE that the subjects were using. It would be difficult to control this factor remotely. Secondly, the aim of this initial study was to investigate AspectJ itself and not AspectJ in conjunction with a particular set of tools. In particular, we designed this experiment in such a way as to limit the confounding factors and produce an instrument that represented a cognitive exercise in maintenance. We regard the paper-and-email version of this experiment as the least biased approach.
6.2 External validity
This experiment was carried out with professional software developers, who represent the target population we are interested in. However, the selection process might have favoured highly motivated programmers, who might not represent the average programmer. All subjects volunteered not only to participate in the experiment but also in a five-session tutorial on AspectJ, which shows a highly motivated attitude. The other threat to the external validity of the experiment is due to the relatively small size of the systems. However, the tasks which were set are typical maintenance tasks for small-scale subsystems.
7 Conclusions and future work
This paper reported an experimental assessment of the effect that aspect-oriented programming has on software understandability and modifiability. Eleven software professionals participated in this study and were asked to answer a questionnaire on either an object-oriented or an aspect-oriented program. Although we did not find statistical evidence at a 2-tailed significance level of 5% or 10%, we can acknowledge weak indications in favour of the object-oriented system. The time that the subjects needed to answer the questions was in all cases slightly shorter for the object-oriented system, and we will investigate this in follow-up experiments. This observation is mirrored by efforts of the aspect-oriented community to provide tool support for crosscutting views, which show how aspects and base code are related to each other.6 In future work we will also investigate the benefits that crosscutting views can provide in understanding aspect-oriented systems.
6 AspectJ Development Tools (AJDT), 2006, http://www.eclipse.org/ajdt
Both groups were able to answer the questions correctly. Also, we did not find a difference in the understandability ratings of the two systems. This is a surprising result, given that no tool support was allowed for the detection of aspects and the programmers' experience of AO was minimal. In addition, our data does not show any indication that aspect-oriented programming had an effect on the modifiability of a software system. The number of NCLOC changed in order to implement a new requirement did not differ significantly between the two systems.
We are aware that the results of this study cannot be generalized. As far as the study at hand is concerned, we conclude that more research is necessary to investigate the effects and potential benefits of AOP. Future research will focus on the replication of this study in different settings to draw more reliable conclusions from the experimental results. In addition, the identification-of-components metric was not found to be useful and will be dropped in future experiments. We will also consider alternative metrics to support the different aspects of maintenance tasks.
Currently, a lot of research is focused on the development of aspect-oriented concepts and languages. We believe that it is essential for the aspect-oriented community also to focus on empirical research to determine the benefits that can be gained from aspect-oriented techniques. We need to understand the beneficial applications of aspect-orientation and also the characteristics that systems need to possess in order to benefit from this emerging programming paradigm. Even though statistical significance was not evident here, this paper may motivate other researchers interested in using empirical techniques to investigate the benefits of aspect-oriented programming. We hope that this paper will encourage such research.
Acknowledgements We would like to thank all participants of the pre-pilot study, the pilot study and the experiment for the time and effort they invested, as well as the anonymous reviewers for their helpful comments.
Appendix A: Questions
Q1. Please list all classes and aspects, if any exist, that are defined in the source code.
Q2. What is the output from this program? Please write down every line that will be printed.
Q3. Describe how you would change the program so that an exception will be thrown if a user attempts to use the shop after he has been idle for more than 5 min. Please state clearly every change you would make to the code, i.e., every line change, addition or removal of code. The choice of programming language is not important: for a correct solution you can use either Java or AspectJ.
Q4. How understandable, on a scale of 1-5, do you think this program is?

1 = very understandable; 3 = moderately understandable; 5 = not very understandable
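Q3 asks for a crosscutting change: every shop operation must check how long the user has been idle. Purely as an illustration of that crosscutting structure (not one of the experiment's systems, and in Python rather than Java or AspectJ), a decorator can play a role loosely analogous to AspectJ advice, wrapping each operation with the timeout check; all names below are hypothetical:

```python
import time

class IdleTimeoutError(Exception):
    """Raised when the shop is used after more than 5 min of inactivity."""

def fails_if_idle(method, limit_seconds=300, clock=time.monotonic):
    """Wrap a shop operation so it raises if the user was idle too long.

    The decorator is a rough analogue of before-advice on a pointcut:
    the timeout check crosscuts every wrapped operation."""
    def wrapper(self, *args, **kwargs):
        now = clock()
        last = getattr(self, "_last_access", now)  # first use never raises
        self._last_access = now
        if now - last > limit_seconds:
            raise IdleTimeoutError("idle for more than 5 minutes")
        return method(self, *args, **kwargs)
    return wrapper

class Shop:
    @fails_if_idle
    def add_to_cart(self, item):
        return f"added {item}"

shop = Shop()
print(shop.add_to_cart("book"))  # added book
```

In AspectJ the same effect would be obtained by advice on a pointcut matching the shop's public operations, which is why the change can be localized in a single aspect rather than scattered across every method.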
References

Black, A., Hutchinson, N., Jul, E., & Levy, H. (1986). Object structure in the Emerald system. ACM SIGPLAN Notices, 21(11), 78-86.
Boehm, B. W., Brown, J. R., Kaspar, J. R., et al. (1978). Characteristics of software quality. TRW series of software technology. Amsterdam: North Holland.
Ceccato, M., & Tonella, P. (2004). Measuring the effects of software aspectization. In Proceedings of the 1st Workshop on Aspect Reverse Engineering (WARE 2004), Delft, The Netherlands.
Chidamber, S. R., & Kemerer, C. F. (1994). A metrics suite for object-oriented design. IEEE Transactions on Software Engineering, 20(6), 476-493.
Christensen, L. B. (1988). Experimental methodology (4th ed.). Newton, MA: Allyn and Bacon.
Conover, W. J. (1971). Practical nonparametric statistics. John Wiley & Sons.
Dijkstra, E. W. (1982). On the role of scientific thought. In Selected writings on computing: A personal perspective (pp. 60-66). Springer-Verlag.
Fenton, N. E., & Pfleeger, S. L. (1996). Software metrics: A rigorous and practical approach (2nd ed.). International Thomson Computer Press.
Filho, F. C., Rubira, C. M. F., & Garcia, A. (2005). A quantitative study on the aspectization of exception handling. In ECOOP 2005 Workshop on Exception Handling in Object-Oriented Systems, Glasgow, UK.
Filho, F. C., Cacho, N., Maranhao, R., Figueiredo, E., Garcia, A., & Rubira, C. M. F. (2006). Exceptions and aspects: The devil is in the details. In Proceedings of the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering, Portland, Oregon, USA.
Freund, E. J., & Simon, G. A. (1997). Modern elementary statistics (9th ed.). New Jersey: Prentice Hall.
Garcia, A., Sant'Anna, C., Figueiredo, E., Kulesza, U., Lucena, C., & von Staa, A. (2005). Modularizing design patterns with aspects: A quantitative study. In 4th International Conference on Aspect-Oriented Software Development (AOSD '05), Chicago, USA.
Hannemann, J., & Kiczales, G. (2002). Design pattern implementations in Java and AspectJ. In ACM Conference on Object-Oriented Programming Systems, Languages, and Applications.
Harris, R. J. (1975). A primer of multivariate statistics. Academic Press.
Harrison, R., Counsell, S. J., & Nithi, R. (2000). Experimental assessment of the effect of inheritance on the maintainability of object-oriented systems. Journal of Systems and Software, 52(2-3), 173-179.
Kersten, M., & Murphy, G. C. (1999). Atlas: A case study in building a web-based learning environment using aspect-oriented programming. In Proceedings of the 14th ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA '99), Denver, Colorado.
Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Videira Lopes, C., Loingtier, J.-M., & Irwin, J. (1997). Aspect-oriented programming. In Proceedings of ECOOP.
Kienzle, J., & Guerraoui, R. (2002). AOP: Does it make sense? The case of concurrency and failures. In Proceedings of the 16th European Conference on Object-Oriented Programming (ECOOP '02), pp. 37-61.
Likert, R. (1932). A technique for the measurement of attitude. Archives of Psychology, 22(140), 55.
Lopes, C. (1997). D: A language framework for distributed programming. PhD thesis, College of Computer Science, Northeastern University.
Perry, D. E., Porter, A. A., & Votta, L. G. (2000). Empirical studies of software engineering: A roadmap. In ICSE Future of SE Track 2000, pp. 345-355.
Pfahl, D., Laitenberger, O., Dorsch, J., & Ruhe, G. (2003). An externally replicated experiment for evaluating the learning effectiveness of using simulations in software project management education. Empirical Software Engineering, 8(4), 367-395.
Prechelt, L., Unger, B., & Schmidt, D. (1997). Replication of the first controlled experiment on the usefulness of design patterns: Detailed description and evaluation. Technical Report wucs-97-34, Department of Computer Science, Washington University, St. Louis, MO.
Rashid, A., & Chitchyan, R. (2003). Persistence as an aspect. In Proceedings of the 2nd International Conference on Aspect-Oriented Software Development (AOSD '03), Boston, Massachusetts.
Roychoudhury, S., Gray, J., Wu, H., Zhang, J., & Lin, Y. (2003). A comparative analysis of metaprogramming and aspect-orientation. In 41st Annual ACM Southeast Conference, Savannah, GA.
Sant'Anna, C., Garcia, A., Chavez, Ch., Lucena, C., & von Staa, A. (2003). On the reuse and maintenance of aspect-oriented software: An assessment framework. In Proceedings of the Brazilian Symposium on Software Engineering (SBES '03), Manaus, Brazil, pp. 19-34.
Soares, S., Laureano, E., & Borba, P. (2002). Implementing distribution and persistence aspects with AspectJ. In Proceedings of the 17th ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA '02), Seattle, Washington.
Stephens, M. A. (1974). EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association, 69, 730-737.
Tsang, S. L., Clarke, S., & Baniassad, E. L. A. (2004). An evaluation of aspect-oriented programming for Java-based real-time systems development. In Proceedings of the IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC), Vienna, Austria.
Walker, R. J., Baniassad, E. L. A., & Murphy, G. C. (1999). An initial assessment of aspect-oriented programming. In Proceedings of the 21st International Conference on Software Engineering (ICSE '99), pp. 120-130.
Zhao, J. (2004). Measuring coupling in aspect-oriented systems. In 10th International Software Metrics Symposium (METRICS 2004), Late Breaking Paper, Chicago, USA.
Author Biographies
Marc Bartsch studied at the University of Münster, Germany, and received the first State Examination in Computer Science, Mathematics and English. He also received an MA degree in German Studies from the University of Washington, Seattle. He has more than three years' experience as a C++ programmer at Vodafone Information Systems, Germany, and is currently a PhD candidate in Computer Science at the University of Reading, UK. His research interests are in the area of empirical software engineering, including aspect-oriented programming and the validation of aspect-oriented metrics. Marc Bartsch is a member of the BCS.
Rachel Harrison obtained her MA degree in Mathematics from Oxford University, an MSc degree in Computer Science from University College London, and a PhD degree in Computer Science from the University of Southampton. Her current research interests center around empirical software engineering, particularly measurement and modeling of the aspect-oriented paradigm, and the assessment of risk in requirements engineering. Prof. Harrison is currently a Visiting Professor at the University of Reading and Managing Director of Stratton Edge Consulting. She is a member of the IEEE Computer Society, the ACM and the BCS and is also a Chartered Engineer.