Abstract: The University of South Africa has implemented a Science Foundation Provision (SFP) program in conjunction with the Department of Higher Education and Training of the Government of South Africa. This program aims to assist at-risk students in the fields of science, engineering and technology in an effort to raise their performance, increase their throughput rates and lower their dropout rates. The study that this paper reports on investigates research questions around the impact of time and assessment on throughput in a first-year programming subject in an e-learning context. An introduction is presented on aspects relating to the SFP program, the underlying pedagogy of the subject, and how it was adapted for an e-learning environment. The literature review investigates issues in e-learning research and concepts related to assessment and study time in programming modules. It explains the different types of assessment practices implemented in the course, namely self-assessments, blogs and projects. Study time and time management for programming modules in an e-learning environment are also addressed. Using quantitative data extracted from the institutional database for 2015, comparisons are made between the submission rates of formative assessments, formative assessment results, summative assessment results, and throughput of the two models of the SFP program. The discussion of the findings starts by looking at the specific differences and similarities, as related to available study time and assessment, of the two SFP program models. Conclusions show the impact of the differences between the two models on the throughput rates. Recommendations are made for the SFP program in the programming module and, more generally, for the SFP program in other courses. These recommendations can also be applied to mainstream programming modules and contribute towards addressing the international problem of low throughput rates in programming modules.
Keywords: programming, e-learning, assessment, time, throughput, science-foundation
1. Introduction
The Science Foundation Provision provides learning support in the form of tutorials within each module, but also includes mentorship and assistance in English language, mathematics, information, and computer literacies. The purpose of the Science Foundation Provision is to improve the success rate in science programmes and hence increase the throughput of our science students. An Academic Points Score (APS) is used to stream students into foundation and mainstream provision. Students may be selected for foundation provision if they have an APS below 20 for Diploma qualifications or below 24 for Bachelor of Science and Consumer Science qualifications. The points are calculated as follows:
* the score in English Language; plus
* the points from five (5) other subjects, excluding Life Orientation: first the scores of the subjects prescribed for normal entry into the regular qualification, and then the scores of the best remaining subjects (a sketch of this calculation is given below).
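A minimal sketch of this calculation, written in JavaScript (the language taught in the module discussed below), is given here. The function name and data shape are illustrative, and it assumes each subject result has already been converted to an admission points score:

```javascript
// Illustrative sketch of the APS calculation described above.
// Assumes each subject result has already been converted to an
// admission points score; names and data shapes are hypothetical.
function calculateAPS(englishPoints, otherSubjects) {
  // Life Orientation is excluded from the calculation.
  const eligible = otherSubjects.filter(s => s.name !== "Life Orientation");
  // Prescribed subjects are counted first, then the best of the rest.
  const prescribed = eligible.filter(s => s.prescribed);
  const rest = eligible.filter(s => !s.prescribed)
                       .sort((a, b) => b.points - a.points);
  const counted = prescribed.concat(rest).slice(0, 5);
  return englishPoints + counted.reduce((sum, s) => sum + s.points, 0);
}

// Example: a student would be considered for foundation provision in a
// Diploma qualification if calculateAPS(...) returns a value below 20.
```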
Currently, the science colleges run two models of the SFP program at the university. In the first, the Augmented Model (AM), foundation support is integrated within each mainstream module for selected foundation students; all students in this model complete their studies during the normal 14-week semester. In the second, the Extended Model (EM), selected students are registered for the same module but complete it over an extended period of one year.
The Diploma in Information Technology is presented by the University of South Africa. The aim of the programme is to enable qualifying students to analyse, design, develop and maintain web systems, networks, database systems, programming products and interfaces according to client requirements, applying principled methodologies in continuously changing industrial, organisational and commercial information and communication technology environments. The module Introduction to Interactive Programming forms part of the first year of the programme. The aim of the module is to provide students with the skills to develop a working computer-based program, along with the knowledge, skills and values needed to add interactive functionality to the program through structured, object-oriented programming using the JavaScript programming language.
The aims of both the diploma and the module clearly show that the skill set being taught prepares students for a vocation in the field of information technology. To be considered competent as a programmer, students need to demonstrate practically that they are able to design and code programs according to user specifications which are as close to correct as possible in design and programming (Hayes & Offutt, 2010). From this it becomes evident that although writing code is considered the core ability, students also require additional skills such as communication, analytical and problem-solving skills (Ahmed, Capretz, & Campbell, 2012). Incorporating all these skills when teaching and learning programming requires a constructivist approach which allows students to actively engage with their material on different levels; as stated by Dong, Li, Zhang, and He (2012), "..constructivism pedagogy is more appropriate to resolve the problems encountered in programming teaching".
Due to the complex nature of distance e-learning, with no face-to-face contact with students, implementation of the constructivist pedagogy is somewhat challenging. To accommodate the constructivist pedagogy in the distance e-learning environment, a blended approach was followed for the teaching and learning of the module. Lim, Morris, and Kupritz (2007:28) identified three representative definitions of blended learning: "(a) a learning method with more than one delivery mode is being used to optimize learning outcomes and reduced cost associated with program delivery, (b) any mix of" lecturer-led "training methods with technology-based learning, and (c) the mix of traditional and interactive-rich forms of classroom training with any of the innovative technologies such as multimedia, CD-ROM, video streaming, virtual classroom, email/conference calls, and online animation/video streaming technology". The module reported on in this research used a blend of printed material, the prescribed book, online delivery of the study material in the form of Learning Units, online discussion sessions via the Big Blue Button tool, and vodcasts explaining coding examples.
The primary goal in assessment design is to choose an assessment method that assesses the objectives of the module most effectively, while remaining aligned with the systems and requirements of the institution. In choosing the assessment tools for this module, the outcomes of the particular module, the broader aims of the qualification and the qualities of the graduating learner were kept in mind, as well as the systems and requirements of the institution and the current qualities and abilities of the learners.
In an attempt to get students to study the theoretical concepts, a database of 360 multiple choice, 271 fill-in-the-missing-word and 281 true/false questions was created on the myUnisa virtual learning environment using the Samigo tool. The self-assessments were set up to draw 20 new questions from the database each time a student attempts a self-assessment for a particular chapter. Students can attempt the self-assessment for each chapter as many times as they prefer, and feedback for each self-assessment is provided immediately upon submission.
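Although the Samigo tool performs this selection internally, the behaviour can be illustrated with a minimal sketch; the function below is hypothetical and is not the tool's actual implementation:

```javascript
// Illustrative sketch of drawing a fresh 20-question self-assessment
// from a chapter's question pool; not the actual Samigo implementation.
function buildSelfAssessment(questionPool, size = 20) {
  // Fisher-Yates shuffle of a copy of the pool, then take the first `size`.
  const pool = questionPool.slice();
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, size);
}

// Each new attempt calls buildSelfAssessment again, so a student who
// repeats a chapter's self-assessment sees a different mix of the
// multiple choice, missing-word and true/false questions.
```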
The blog assignments are meant to be fun assignments, where students get to have their say on what they have learnt. It is an ongoing assignment which starts in the first week of their studies and continues until the end of the semester. For these assignments students need to write blogs on the work they studied in certain chapters. They need to reflect on the work they have studied and write down what they think of it. This reflection must not be a summary of what the prescribed book says; it must be the student's own reflection. After going through the chapter, the student should close the book and write down how they would explain what they have just studied to a friend or family member. Lecturers and e-tutors use the blogs to gauge students' understanding of particular pieces of work and comment accordingly on the blogs.
As part of the formative and summative assessment, the students have to contact a small business in their area and obtain permission to design a website for the business. The site has to consist of a minimum of 10 pages, for example a home page, products pages (6 pages), an order form, a contact page, etc. Students have to add the logic design for the functionality they will be adding to the website. This logic design includes a feature that uses at least one function to perform a mathematical calculation based on user input, using if, if/else, else if or switch statements, exception handling of code, and validation of user input. An additional page also needs to be added that educates visitors about web security. Once the logic design is completed, the students are required to write the JavaScript code and implement it on the site they designed. Once all the design, development and coding is done, the students have to copy everything, including the assignment rubric, into a single document and save it as a PDF file, which must be submitted online via the myUnisa system. In the online study guide (Learning Units), students are provided with information on what they need to complete the project, how they should do this, and an approximate time frame.
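The kind of JavaScript functionality the logic design calls for can be illustrated with a minimal sketch. The discount calculation, function name and thresholds below are hypothetical, since each student implements a feature appropriate to their chosen business:

```javascript
// Illustrative sketch of the required feature: a function that performs
// a mathematical calculation on validated user input, using conditional
// statements and exception handling. The discount logic is hypothetical.
function calculateOrderTotal(quantityInput, unitPrice) {
  try {
    const quantity = parseInt(quantityInput, 10);
    // Validation of user input, as required by the logic design.
    if (isNaN(quantity) || quantity <= 0) {
      throw new Error("Please enter a positive whole number.");
    }
    let total = quantity * unitPrice;
    // Conditional logic (if / else if) as required.
    if (quantity >= 50) {
      total *= 0.85;        // 15% bulk discount
    } else if (quantity >= 10) {
      total *= 0.95;        // 5% discount
    }
    return total;
  } catch (err) {
    // Exception handling: report the problem to the visitor.
    alert(err.message);
    return 0;
  }
}
```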
2. Literature review
Programming students need both theoretical and practical knowledge to be successful (Matthews, Hin, & Choo, 2015). For a programming student to apply specific practical concepts, they have to understand the theory behind the concept. Because of the practical nature of programming, students are prone to learning how to implement practical concepts within a given context without understanding why the concepts work the way they do; once the context changes, the student is no longer able to apply the concept. Students should thus be assessed on both theory and practice. To achieve this goal, the self-assessments were created. These provide students with the opportunity to prepare for their assignments and their examinations, since the same question pool is used in both cases, thus integrating formative and summative assessment. A further purpose of the self-assessments is to provide lecturers and tutors the opportunity to identify students who are not performing as well as they should (Antal & Koncz, 2011).
When one considers the teaching and learning of a programming language, blogging is rarely considered as a means of assessment. The effect of blogging on student understanding of programming concepts has, however, been studied. Ramasamy, Valloo, Malathy, and Nadan (2010) used the number of threads and number of replies to gauge student initiative in gaining new knowledge, whereas Safran (2008) measured learning performance through blog activity, practical experience and theoretical knowledge. Both studies recognized the effectiveness of blogging in improving student understanding. Van Heerden and van der Merwe (2014) investigated the blog results versus the final results of students. They stated that "the implementation of knowledge blogging in an ODL environment is particularly well suited to introductory programming courses when such blogging demands reflective activities and continued engagement with the course work. Specifically, we suggest knowledge blogging to be a constructive learning tool in a programming environment since it promotes metacognition and differentiated instruction by nurturing multiple learning skills."
Project-based learning is ideally suited to the constructivist pedagogy, since students actively participate, learn by doing, implement their learning and solve real or simulated problems (Doppelt, 2003). This type of assessment is more than a mere evaluation of the students' knowledge; it allows students to show in practical ways that they have mastered the theory and are able to apply it in a real-world scenario (Rand, 1999). Project-based teaching, learning and assessment have been and are currently being used by numerous residential universities as the preferred method of teaching and assessing programming modules (Todorova, Hristov, Stefanova, & Kovatcheva, 2010; Vega, Jiménez, & Villalobos, 2012). Several articles indicate an improvement in the performance of students taking programming modules when project-based learning and assessment are implemented (Wilson & Ferreira, 2011; Bubas, Coric, & Orehovacki, 2012).
The researcher for this paper could not find any empirical data establishing an appropriate time frame for learning a programming language. Viewpoints on the internet vary widely, from claims that a programming language can be learnt in 10 days to the view that one never finishes: "The faster you try to 'learn programming', the longer you take. Why? Because the moment you think you learned everything more new things will appear: Programming is one of those things you never stop learning, even after you program professionally for years." (Ibañez, 2011). The researcher fully agrees with the statement made by Ibañez, considering the many changes and enhancements HTML and JavaScript have undergone in the past 10 years during which the course has been presented.
During the first semester of 2014, the University of South Africa's Directorate of Institutional Research conducted a Student Module Evaluation Report (SMER) to provide an overall picture of student views and experiences of the module in order to provide data for improvement (Archer, Liebenberg, & Chetty, 2014). The answer to the open-ended question "Would you prefer if this module was presented as a year module?" was an overwhelming yes, with students leaving comments such as "Workload a bit too much especially for part time students", "Too much information for 6 months to comprehend.", and "I feel that a semester is insufficient to cover the work especially for someone who has no background to the module". From the SMER it seems that the students feel they would perform better if they had more time to study the module.
Various research articles indicate that poor time management is one of the major causes of low throughput rates in programming modules. Zainal, Shahrani, Yatim, Rahman, Rahmat and Latih (2012) state: "There are a number of possible reasons to the limited time spent or allocated by students. One of them is due to the time constraints imposed on students enrolled for the program. Another reason is students' ineffective time management." Willman, Lindén, Kaila, Rajala, Laakso and Salakoski (2015) state that "Time management skills are among the most central skills that affect students' grades and stress levels". Egan, Cukierman and Thompson (2011) found that "...time management and academic planning skills were correlated with lower GPA." Çakiroglu (2014) also indicates that time management is not only a cause for concern in programming but is compounded in the distance learning environment: "Time management is an important issue in DL and some researchers consider it as a major concern for online students."
3. Research methodology
A basic causal-comparative design is used, involving the selection of the two SFP models, namely the Augmented Model (AM) and the Extended Model (EM) (Gay, Mills, & Airasian, 2011). The population consists of 200 students in the AM model and 236 students in the EM model.
The entire population of each group is used in the research; there is thus no sample representation. With the exception of the time frame in which the modules are presented and the number of assignments each group has to complete, both groups are similar with respect to critical variables other than the grouping variables.
Using quantitative data extracted from the institutional database for 2015, comparisons are made between the submission rates of formative assessments, formative assessment results, summative assessment participation, and summative assessment results of the Science Foundation Provision models. The researcher is attempting to determine whether more time to prepare for assessments in the EM model results in an improved submission rate and better results in comparison to the AM model.
Since it is not our intention to generalize the results, simple descriptive statistical analysis techniques that count how many times certain behaviours occurred (quantitative methods), together with nonparametric procedures, were appropriate. In particular, we made use of distribution-free methods such as tabulations and means to present the data. First, an F-test is performed to determine whether the variances of the two populations are equal. Based on the result, the appropriate t-test (assuming either equal or unequal variances) is used to determine whether there is a significant difference between the mean formative assessment and final summative assessment results in the AM model and those in the EM model; the null hypothesis predicts that there is no difference between the two models.
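For reference, these are the standard test statistics for two groups with sample means \(\bar{x}_1, \bar{x}_2\), variances \(s_1^2, s_2^2\) and sizes \(n_1, n_2\) (the paper does not specify the analysis software, so the familiar textbook forms are shown):

\[
F = \frac{s_1^2}{s_2^2}, \qquad
t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}, \qquad
s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},
\]

with \(n_1 + n_2 - 2\) degrees of freedom for the equal-variance t-test.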
4. Results
Table 1 is divided into two sections and several sub-sections. Section 1 shows results for the formative assessment. The first sub-section shows the number of students registered for the module (n) and their mean assignment submission rate expressed as a percentage. The next sub-section shows the mean of the assignment results as a percentage.
Section 2 shows results for the summative assessment. The first sub-section shows the number of students registered for the module (n) and their mean examination project submission rate expressed as a percentage. The next sub-section shows the mean of the examination project results as a percentage. The same order of presentation is repeated for the written examination.
The F-test, to determine whether the variances of the two populations based upon the formative assessment results are equal, indicated that F > F Critical one-tail (0.842 > 0.365). Therefore, we accept that the variances of the two populations are equal.
The average submission rate of the formative assessments in the AM model (80%) is much higher than that of the EM model (66%), while the average result of the formative assessments in the AM model (60%) is only slightly higher than that of the EM model (58%).
The average submission rate of the summative assessment project in the AM model (51%) is slightly higher than that of the EM model (45%), while the average result of the summative assessment project in the AM model (34%) is much lower than that of the EM model (48%).
The appropriate t-test using equal variances indicates that there is no significant difference between the mean of the formative assessment results of students in the AM model and that of students in the EM model; the null hypothesis, which predicted that there would be no difference between the two models, can thus not be rejected (p > 0.05).
The F-test, to determine whether the variances of the two populations based upon the final summative results are equal, indicated that F > F Critical one-tail (1.149 > 0.1965). Therefore, we accept that the variances of the two populations are equal.
Both models have a similar percentage (62%) of students who wrote the summative assessment. The students in the AM model performed much worse (29%) than those in the EM model (43%). The final result of the AM model (37%) is also considerably lower than that of the EM model (46%).
Based on the results, the appropriate t-test using equal variances indicates that there is no significant difference between the mean of the final summative assessment results in the AM model and that in the EM model; the null hypothesis, which predicted that there would be no difference between the two models, can thus not be rejected (p > 0.05).
5. Discussion
The biggest differences between the AM and EM models are the amount of time available to students to prepare and complete their assessments and the number of assessments they are required to submit. The AM was presented as a 14-week semester module and consisted of 200 students at the start of the semester. The EM was presented as a year module and consisted of 236 students at the start of the year. Students registered in the AM model are required to complete 7 assignments, of which 3 are multiple choice, 3 are blogs and 1 is practical. The 3 multiple choice and 3 blog assignments of the AM contributed 50% towards the students' year mark and the practical assignment the remaining 50%. Students in the EM model are required to complete 17 assignments, of which 8 are multiple choice, 8 are blogs and 1 is practical. The 8 multiple choice and 8 blog assignments of the EM contributed 50% towards the students' year mark and the practical assignment the remaining 50%.
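As an illustration, the year mark composition for either model can be sketched as follows; the equal weighting of the multiple choice and blog assignments within their combined 50% share is an assumption, since the paper does not state the internal weights:

```javascript
// Illustrative year mark calculation for either model; assumes the
// multiple choice and blog assignments are equally weighted within
// their combined 50% share, which the paper does not specify.
function yearMark(mcAndBlogMarks, practicalMark) {
  const avg = mcAndBlogMarks.reduce((sum, m) => sum + m, 0) /
              mcAndBlogMarks.length;
  return 0.5 * avg + 0.5 * practicalMark;   // all marks out of 100
}

// AM: 6 marks (3 multiple choice + 3 blog) plus the practical assignment.
// EM: 16 marks (8 multiple choice + 8 blog) plus the practical assignment.
```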
The way in which the module is presented is identical for the AM and EM models. Students in both models had access to exactly the same online study material, used the same prescribed book, received the same online support from the lecturer and e-tutors, and completed self-assessments from the same database of questions. Both sets of students had access to a study schedule for their specific registration cycles and were sent announcements, weekly in the case of the AM model and tri-weekly in the case of the EM model, to keep them informed of where they needed to be in their material to stay current with the study schedule. Students in both groups who did not submit their formative assessments on time were contacted with reminders about extended submission dates. Online discussion sessions, using the Big Blue Button tool, were presented to both models to explain the requirements for the practical assignment as well as the summative assessment.
The statistical analysis of the formative and summative assessments of the AM and EM models suggests that there is no significant difference between the results in the formative assessments (AM = 60% vs EM = 58%) and the results in the final summative assessments (AM = 35% vs EM = 46%).
6. Conclusion
The statistical analysis shows that having more assessments and more time to prepare, complete and submit assessments has no significant impact on the results of either the formative or the summative assessments. This holds despite the fact that the AM model has a higher submission rate for both its formative assessments and its summative assessment project. Students in the EM model performed only slightly worse in the formative assessments than students in the AM model, but consistently performed better in the summative assessment than students in the AM model. Considering the statistical analysis, the researcher concludes that additional time and assessments do not have any impact on the formative or summative results of students. However, we acknowledge that more research on other variables that may influence formative and summative assessment results is required.
The research conducted in this paper indicates that the SFP program, as well as other programming modules, needs to investigate additional means of raising student performance, increasing throughput rates and lowering dropout rates.
References
Ahmed, F., Capretz, L. F., & Campbell, P. (2012). Evaluating the Demand for Soft Skills in Software Development. IT Professional, 14(1), 44-49. doi:10.1109/MITP.2012.7
Antal, M., & Koncz, S. (2011). Student modeling for a web-based self-assessment system. Expert Systems with Applications, 38(6), 6492-6497. doi:10.1016/j.eswa.2010.11.096
Archer, L., Liebenberg, H., & Chetty, Y. (2014). Student Module Evaluation Report - Semester 1 ( 2014 ) (Vol. 1). Retrieved from https://www.dropbox.com/s/n2653l38l30h7df/SME Instrument (29-05-14).pdf
Bubas, G., Coric, A., & Orehovacki, T. (2012). The integration and assessment of students' artefacts created with diverse Web 2.0 applications. International Journal of Knowledge Engineering and Soft Data Paradigms, 3, 261-279. Retrieved from http://inderscience.metapress.com/content/v5h441075k726327/
Çakiroglu, Ü. (2014). Analyzing the Effect of Learning Styles and Study Habits of Distance Learners on Learning Performances: A Case of an Introductory Programming Course. International Review of Research in Open & Distance Learning, 15(4), 161-184.
Dong, L., Li, C., Zhang, W., & He, J. (2012). The Reform of Programming Teaching Based on Constructivism. In W. Hu (Ed.), Advances in Electric and Electronics (Vol. 155, pp. 425-431). Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-28744-2
Doppelt, Y. (2003). Implementation and Assessment of Project-Based Learning in a Flexible Environment. International Journal of Technology and Design Education, 13(3), 255-272. doi:10.1023/A:1026125427344
Egan, R., Cukierman, D., & Thompson, D. M. (2011). The Academic Enhancement Program in Introductory CS: A Workshop Framework Description and Evaluation. In ACM (Ed.), 16th annual joint conference on Innovation and technology in computer science education (pp. 278-282). Darmstadt, Germany. doi:10.1145/1999747.1999825
Gay, L. R., Mills, G. E., & Airasian, P. W. (2011). Educational Research, competencies for analysis and applications. (J. W. Johnston, C. Robb, L. Carlson, & P. D. Bennett, Eds.) (10th ed.). New Jersey: Pearson.
Hayes, J. H., & Offutt, J. (2010). Recognizing authors: an examination of the consistent programmer hypothesis. Software Testing, Verification and Reliability, 20(4), 329-356. doi:10.1002/stvr.412
Ibañez, A. (2011). How long should it take to learn how to program? Retrieved December 28, 2015, from https://www.quora.com/How-long-should-it-take-to-learn-how-to-program
Lim, D. H., Morris, M. L., & Kupritz, V. W. (2007). Online vs. blended learning: Differences in instructional outcomes and learner satisfaction. Journal of Asynchronous Learning Networks, 11, 27-42.
Matthews, R., Hin, H. S., & Choo, K. A. (2015). Comparative Study of Self-test Questions and Self-assessment Object for Introductory Programming Lessons. Procedia - Social and Behavioral Sciences, 176, 236-242. doi:10.1016/j.sbspro.2015.01.466
Ramasamy, J., Valloo, S., Malathy, J., & Nadan, P. (2010). Effectiveness of Blog for Programming Course in Supporting Engineering Students. Information Technology, 3, 1347-1350.
Rand, M. K. (1999). Supporting constructivism through alternative assessment in early childhood teacher education. Journal of Early Childhood Teacher Education, 20(2), 125-135.
Safran, C. (2008). Blogging in Higher Education Programming Lectures: An Empirical Study. In 12th International Conference on Entertainment and media in the ubiquitous era (pp. 131-135).
Todorova, M., Hristov, H., Stefanova, N., & Kovatcheva, E. (2010). Innovative experience in undergraduate education of software professionals - project-based learning in data structure and programming. ICERI2010 Proceedings, 5141-5150. Retrieved from http://library.iated.org/view/TODOROVA2010INN
van Heerden, M. E., & van der Merwe, T. M. (2014). Employing Objective Measures In Search Of A Relationship Between Knowledge Blogs And Introductory Programming Performance Outcome. In 9th International Conference on e-Learning (pp. 185-189). Valparaiso, Chile. ISBN 978-1-909507-69-2
Vega, C., Jiménez, C., & Villalobos, J. (2012). A scalable and incremental project-based learning approach for CS1/CS2 courses. Education and Information Technologies, 18(2), 309-329. doi:10.1007/s10639-012-9242-8
Willman, S., Lindén, R., Kaila, E., Rajala, T., Laakso, M.-J., & Salakoski, T. (2015). On study habits on an introductory course on programming. Computer Science Education, 3408(August), 1-16. doi:10.1080/08993408.2015.1073829
Wilson, T., & Ferreira, G. (2011). E-Learning and support tools for Information and Computer Sciences. In 7th Europe International Symposium on Software Industry Oriented Education. China.
Zainal, N. F. A., Shahrani, S., Yatim, N. F. M., Rahman, R. A., Rahmat, M., & Latih, R. (2012). Students' Perception and Motivation Towards Programming. Procedia - Social and Behavioral Sciences, 59, 277-286. doi:10.1016/j.sbspro.2012.09.276