Abstract
The field of education research is a complex and difficult area, particularly for early career researchers or students new to research. Focusing upon the most common types of research methods is a useful starting point, as students and those new to research are more likely to encounter these methods in journals. The research literature contains arguments regarding the most frequent type of research methodology as well as the nature of this research. This article describes the authors' review of a select number of education research journals and the surprises the researchers encountered, in particular the challenges in the development of a suitable mapping tool called the Journal Article Research Analysis (JARA) schedule. The results of this study indicate there may be implications for education researchers, editors of journals, and teachers of research.
Keywords: education research, research methodology, education research journals.
Introduction
The field of education research is broad and interrelated, with diverse disciplinary boundaries, including curriculum and pedagogy, education systems (early childhood, primary, and secondary education), and various specialist areas such as assessment, leadership, technology, and gender. While academics in universities produce the majority of education research, other stakeholders such as government departments and non-government organizations also contribute.
A prime purpose for education research is to inform classroom practice through improvement in the process of teaching and learning; therefore, teachers need to be able to read and understand research. Policy makers, principals, and teachers have questioned the use and relevance of education research to benefit and assist classroom practice, education policy, and school leadership (Gore & Gitlin, 2004; Grossman, 2008; Miretzky, 2007). If research is to be of value to staff in the education sector, teachers need to identify and understand the research process and how they can make use of research findings. Discovering the strengths and weaknesses of a particular research methodology is of value. Teachers and pre-service teacher education students in postgraduate and undergraduate research methods courses should have a basic understanding of different types of education research methods.
Lack of clarity in published research reports regarding methodology and research design is of concern (Levin, 2006; Lingard, 2013; Yates, 2006), a view students in initial teacher education research methods classes frequently expressed to the authors (Knipe & Bottrell, 2013). Several studies have examined the type and nature of education research, for example, the research design in 196 teacher education articles (Sleeter, 2014), papers from the 2011 ATEA conference (Tuinamuana, 2012), and the methodology, topics, and content of teacher education research over a 10-year period (Murray, Nuttall, & Mitchell, 2008; Nuttall, Murray, Seddon, & Mitchell, 2006). The results from these reports reflect previous claims that education research tends more toward small-scale and one-time projects (Levin, 2004; Murray, Nuttall, & Mitchell, 2008).
Barriers arise when there is no shared understanding of the different types of research methodology, such as method, data source, data gathering, and data analysis (Oancea, 2005). The widespread use of general terms such as "qualitative research" and "quantitative research," and the term "mixed method research," which largely refers to the use of both verbal and numerical data in a research study, can cause confusion. The education research field is wide-ranging and interconnected, so consumers of research, or those new to reading research, are often overwhelmed and unsure where to start.
Classifying and Categorizing Research
Several disciplines have reported methods of classifying research into various categories and the development of instruments researchers use to undertake this process. Early examples of classifications include Cooper (1984) in social science, and more recent classifications in areas such as sports science (Williams & Kendall, 2007), criminal justice (Kleck, Tark, & Bellows, 2006), and marketing (Ensign, 2006). An early attempt to categorize education research methods (Barr et al., 1931) identified eight areas, and more recently, Isaac and Michael (1995) designated nine categories. Research methods textbooks are mostly organized into chapters addressing the various aspects and components of research, rather than by methodological classification. Suter (2005) and Burns (2000) have argued that most education research can be classified as "case study research." This has become an "over-arching" term to describe education research that does not fit with experimental, historical, or descriptive research methods. Education research encompasses many different naturalistic, interpretative, hypothesis-generating models as well as hypothesis-testing models, which contributes to the difficulty in determining categories of research.
Education researchers could place an initial emphasis on case study research methodology as a starting point in teaching research methods. Consumers of research and early career researchers would likely encounter this method most readily in education research journals and would, therefore, have a useful starting point for reading research and designing a research study. The confidence researchers gain from understanding one research method well provides a springboard for understanding other types of research methodology.
The question then arises: What type of research method is most prevalent in the education research literature? Newcomers to education research are likely to read journal articles, as these are a common source of published education research. With this emphasis on journal research, an investigation of a selection of journal publications would be worthwhile. The researchers determined there was a need to review education research journals to identify the most frequent education research methodology, and they required a tool to classify education research into various research categories to undertake such a review. This article describes the process involved in developing a classification tool, or schedule, for analyzing education research in research journals: the Journal Article Research Analysis (JARA) Schedule. Further, the authors examine the complexities that arose during the exercise.
Selecting Articles for Analysis
The research articles reviewed in this study came from major education journals. To select journals appropriate for analysis, five academics from three Australian universities each nominated six research journals to which they would consider submitting a research article for publication. Each academic received the most recent list of journals under the code 'education journals' available from the former Department of Education, Employment, and Workplace Relations website.
Of the 30 nominated journals, six were nominated by two or more academics; five of these had an "A" ranking, and one had a "B" ranking. In developing the JARA schedule, the researchers randomly selected articles from the journals' websites, encompassing one year's publication of a particular journal, involving two or three issues. After an initial examination of the articles, the researchers deleted some because they were not reporting research but were opinion or position papers. At this initial stage, some articles did not provide information about the type of 'data gathered' or the nature of the 'data analysis,' a factor that became a difficulty later in the development process.
The authors established a research team for the development of the JARA Schedule. The authors had extensive experience in teaching research methods to teacher education students and at the postgraduate level. To strengthen the expertise of the team, the authors invited two academics who had expertise in research and publication, including supervision of numerous doctoral students, and in teaching research methods at the post-graduate level. The research team commenced the development process by examining schedules from other disciplines that could be applicable to the JARA Schedule. Additionally, the research team investigated a wide range of texts on education research to refine categories and reach mutual agreement on definitions. The purpose of the JARA Schedule was to categorize the type of research to estimate frequency of use, not to make a value judgment about the research method.
Developing Categories for the JARA Schedule
Developing categories for the JARA Schedule included a category for Research Method and a separate category to identify the type of data the researchers collected. For example, ex-post facto research appears in some texts classified as causal-comparative, but in other texts as applicable to different research methods such as developmental research (McMillan & Schumacher, 2006). The research team concluded that the use of 'ex-post facto' as a research method was misleading because the term refers only to the fact that the data already existed. With the advent of data mining of large databases, the use of existing data in research has wide application.
In the hypothesis-generating categories, the term "Field Study" encompassed multiple research methods, such as ethnography, grounded theory, and phenomenology, which refer to 'data gathered in the field.' "Action Research" became a separate category because of its emphasis on problem solving and its context-specific nature. "Case Study" became a category because the literature defines characteristics of case study. "Historical Research" methods were a separate category because of their distinct and different characteristics. Hypothesis-testing methods were designated categories including "Causal Comparative," "Developmental (Longitudinal)" research, "Correlational" research, and intervention studies using "Quasi-experimental" methods. The remaining category, "Descriptive" research, referred to an investigation that systematically, factually, and accurately describes a situation or area of interest, usually pursuing an objective. Descriptive research is a useful category in education to describe the many small-scale and one-off studies found in journals.
The JARA Schedule contained nine separate categories for Research Methods: four of these categories were for hypothesis-testing models, and five were for hypothesis-generating models. The JARA Schedule was useful in detecting the frequency of a group of research methods as distinct from a focus on one particular method. Categories for Data Gathering Methods encompassed six major areas: data gathered by observation, survey, interview, document analysis (including existing audio-video or electronic material), existing data in numerical form, and data from intervention studies as in quasi-experimental research designs. The coding method enabled identification of the various combinations of these areas, numbering 27 in total. The application of the JARA Schedule to the analysis of articles revealed that 'Survey' was the most frequent data gathering method and 'Interview' the second most common; more than half of the articles used only one method of data gathering. The researchers report details further in the article.
Data Source was a designated category, indicating from whom or from where the researchers gathered data. This included information from school students, teachers (including trainee teachers), school administrators (principals, deputy principals), parents, community, curriculum, policy, existing numerical data (system data, records), and data from intervention studies, plus combinations of data sources. The JARA was able to detect a pattern in whether or not multiple sources of data were used in research studies. The fourth category was Data Analysis techniques and methods. This included different methods of analysis of verbal data, including software programs; analysis of numerical data such as means, standard deviations, and statistical tests; and statistical tests and methods of analyzing intervention data.
Establishing Inter-Rater Reliability
The researchers trialed the JARA to determine inter-rater reliability and to gather feedback on the 'usability' of the schedule. They undertook the first trial with a group of academics independent of the research team, purposefully sampled from an Education Faculty at an Australian university. The aim of this phase was to measure the inter-rater reliability of the schedule. The researchers circulated an expression of interest to several academic, doctoral, and post-doctoral staff. Seven members of staff attended the information session, and five academic staff participated in the trial, which required reviewing six articles using the JARA schedule.
A training workshop for the participating staff ensured a common understanding of the schedule and allowed for questions, points of clarification, and discussion. The researchers provided participants with the "JARA Coding Schedule" together with a copy of the "JARA Key," an accompanying sheet containing definitions and explanations of the categories. The participants then read the articles and completed the coding schedule. After the participants submitted the results, the researchers held a debriefing session to capture comments from the participants regarding the use of the schedule and to discuss the process. Feedback from the participants indicated they found the schedule to be a very useful tool that allowed them to focus on the various aspects of a research report. They were surprised when an article did not articulate clearly, or omitted altogether, significant components such as the data source or the data analysis. All participants agreed that the category of research methods was the most difficult to code. One member of the JARA Schedule research team analyzed the responses from participants, and the researchers modified the schedule to refine the format as a result of the feedback.
The researchers then trialed the JARA Schedule with a group of university students undertaking a research methods subject as part of their initial teacher education program for a Master of Teaching degree. The researchers gave a copy of the JARA Coding Schedule, together with a copy of the JARA Coding Key, to 44 students and asked them individually to code one article the lecturer provided, taken from the trial with the academic staff. At the conclusion of the trial, the researchers invited students to comment on the usefulness of the JARA Schedule. Students reported that the area of concern for them was categorizing the research method; they were equally divided as to whether to classify the method as a 'Case Study' or as 'Descriptive Research.' The discussion with the students reflected opinions the academics expressed in the first trial, including the difficulty of classifying the research method the author reported and the depth of their own understanding of different research methods. The analysis from the second trial revealed more than 90% agreement in the categories Data Source, Data Gathering, and Data Analysis, but a low level of agreement for the category Research Methods.
The researchers conducted a third trial with a group of eight experienced academics, purposely selected from several Australian universities, to establish an acceptable inter-rater reliability measure. The six articles for this trial came from a recently published, randomly selected edition of an 'A rank' education journal. Results of this trial indicated an acceptable inter-rater reliability for all categories except research methods. Feedback about the process from participants in the third trial, particularly in terms of engagement with the content of the articles, revealed unexpected outcomes. Participants commented that the JARA Schedule had educational benefits; comments such as "I got more from the articles having to read them in this way as it makes you actively engage whilst reading" and "the [JARA] schedule itself helps focus your reading, particularly on the methodology used" were typical. It was interesting that the eight academics were experienced researchers, yet the views they expressed reflected the views of the post-graduate students from the earlier trial: both groups considered that the JARA Schedule has use as a learning tool as well as a research tool. As with the previous trials, participants raised concerns about the lack of detail in some research articles regarding data source and data analysis. Participants also noted an increased awareness of their own writing, as the following remark indicates: "the process is a reminder of what to include in your own presentation of research regarding methodology."
Four experienced academics were involved in the final trial of the JARA schedule. They reviewed five articles that one member of the research team purposefully sampled to eliminate opinion papers and to ensure that the articles provided adequate information for the reviewers to make a judgment. One member of the research team analyzed the results, which indicated an acceptable inter-rater reliability of 86% across all categories.
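The article does not specify which agreement statistic produced the 86% figure. For readers unfamiliar with inter-rater reliability, the following is a minimal sketch of one common approach, simple pairwise percent agreement; the rater codings shown are hypothetical, not the trial data.

```python
# Minimal sketch: simple pairwise percent agreement, one common way to
# quantify inter-rater reliability. The article does not specify which
# statistic produced the 86% figure; the ratings below are hypothetical.
from itertools import combinations

def percent_agreement(ratings):
    """ratings: one list of category codes per rater, all coding the
    same articles in the same order."""
    pairs = list(combinations(ratings, 2))
    agreements = sum(a == b for r1, r2 in pairs for a, b in zip(r1, r2))
    comparisons = len(pairs) * len(ratings[0])
    return 100 * agreements / comparisons

# Four raters coding the Research Methods category for five articles
raters = [
    ["case study", "correlational", "descriptive", "field study", "action research"],
    ["case study", "correlational", "descriptive", "case study",  "action research"],
    ["case study", "correlational", "descriptive", "field study", "action research"],
    ["descriptive", "correlational", "descriptive", "field study", "action research"],
]
print(f"{percent_agreement(raters):.0f}% agreement")  # 80% for this toy data
```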
Final Design of the JARA
The final model of the JARA schedule and score sheet incorporated feedback from those involved in four trials regarding the 'usefulness,' layout, and design. In classifying education research into mutually exclusive categories, the focus was upon research methodologies, separate from data gathered, sources of data, and data analysis techniques. The researchers designated four categories as follows:
(1) Source of Data
(e.g., teachers, students, school administrators, parents, non-school personnel, etc.)
(2) Data Gathering Technique
(e.g., interview, observation, survey, existing data, etc.)
(3) Data Analysis Techniques
(e.g., categories, themes, open and axial coding, statistical analysis, etc.)
(4) Research Methods
(e.g., case study, action research, field study, quasi-experimental, developmental, historical, etc.)
The JARA score sheet also includes three additional categories: a category for sampling methods; a category recording whether or not the author reported how issues of reliability or dependability and validity or authenticity were addressed; and a category for ethical approval, included because many journals require authors to report that they had secured ethical approval for their research.
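To make the structure of the instrument concrete, here is a sketch of how a completed JARA score sheet for one article might be represented as a record. The field names are our paraphrase of the seven categories described above, not the published instrument itself.

```python
# Illustrative sketch only: one possible record structure for a completed
# JARA score sheet. Field names paraphrase the categories described in the
# text; they are not the published instrument.
from dataclasses import dataclass, field

@dataclass
class JaraRecord:
    data_sources: list[str] = field(default_factory=list)    # e.g., ["teachers", "students"]
    data_gathering: list[str] = field(default_factory=list)  # e.g., ["interview", "survey"]
    data_analysis: list[str] = field(default_factory=list)   # e.g., ["themes", "statistical tests"]
    research_method: str = ""                                 # e.g., "case study"
    sampling_method: str = ""                                 # e.g., "purposeful"
    reliability_validity_reported: bool = False
    ethics_approval_reported: bool = False

# Coding a hypothetical article that used teacher interviews in a case study
article_17 = JaraRecord(
    data_sources=["teachers"],
    data_gathering=["interview"],
    data_analysis=["themes"],
    research_method="case study",
    sampling_method="purposeful",
)
print(article_17)
```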
JARA Schedule Research Project
The researchers undertook a research project to investigate the question for which they had designed the JARA: to find out which research methods education researchers use most frequently. The journals identified in the development of the JARA Schedule provided the articles. The researchers randomly selected the year and volume of journals from editions that participants could easily access online. The selected articles came from the following journals: Journal of Education Administration, Australian Journal of Language and Literacy, Australian Education Researcher, Journal of Education Research, Australian Journal of Education, and European Education Research Journal. The journal issues the researchers selected contained 107 articles. One member of the JARA Schedule research team undertook the task of reading, reviewing, categorizing, and scoring the selected journals using the JARA Schedule. As a means of verification to improve the validity of scoring, two other members of the research team randomly sampled 10 articles from the sample for scrutiny to verify the recorded scores. Of the 107 journal articles reviewed, 25 were not research reports but opinion or position papers, and the researchers removed them. Of the removed articles, 4 came from the 36 articles in the Journal of Education Administration, 3 from the 15 in the Australian Journal of Language and Literacy, 3 from the 25 in the Australian Education Researcher, 5 from the 6 in the Journal of Education Research, 6 from the 18 in the Australian Journal of Education, and 4 from the 9 in the European Education Research Journal. Table 1 shows this information.
Research Methods
Using the JARA Schedule to analyze the articles, results indicated that the most used research method was Correlational research (24 studies, 29.3%), followed by Descriptive research (21 studies, 25.6%). Together, correlational and descriptive research account for more than half of the methods used in the research studies (54.9%). Only one-fifth of the studies used the case study method, a result that does not support claims by Suter (2005). These results are available in Table 2.
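These percentages use the 82 research articles that remained after the 25 opinion and position papers were removed (see the Discussion section); a quick check of the arithmetic:

```python
# Quick check of the reported percentages, using the counts given in the
# text and the 82 research articles retained after screening.
total = 82
correlational, descriptive = 24, 21

print(f"Correlational: {100 * correlational / total:.1f}%")                  # 29.3%
print(f"Descriptive:   {100 * descriptive / total:.1f}%")                    # 25.6%
print(f"Combined:      {100 * (correlational + descriptive) / total:.1f}%")  # 54.9%
```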
The JARA Schedule contains four research methods that are hypothesis-generating methods and four that are hypothesis-testing methods. Descriptive methods may test hypotheses, but, as the term implies, descriptive research investigates a question or describes a situation. Table 3 shows the research methods used in the articles across these three groups, indicating that the methods most used were hypothesis-generating.
Data Gathering
The category for data gathering methods indicated that nearly two-thirds of research studies (53 studies, 64.6%) reported using one method only. Survey only and Interview only accounted for 35 studies (42.7%). Four studies used data mining as the data gathering method, and four studies used pretest-posttest data. With nearly 40% of research studies using hypothesis-generating methods, it was surprising that nearly two-thirds of researchers used only one data gathering method. This result is available in Table 4.
Data Source
Results for the category Data Source showed students and teachers as the major data sources. Researchers selected students as a source for 26 studies, teachers for 16 studies, and students and teachers together for a further 9 studies. This information is available in Table 5.
Data Analysis
The data analysis category recorded the data analysis methods researchers used. More than one-third of studies used statistical tests as the method of data analysis, which is consistent with correlational research being the most frequently used research method. Together, numerical data analysis, both descriptive and statistical, accounted for nearly two-thirds of research studies. This result is available in Table 6.
Reliability or Dependability, Validity or Trustworthiness
The researchers scrutinized articles as to whether the authors addressed issues of reliability or dependability and validity or trustworthiness, depending on the design of the data gathering instruments or mechanisms used for the research study. Fewer than 20% of articles provided information regarding the design of data gathering instruments, as Table 7 shows.
Sampling Methods and Ethics Approval
The researchers examined articles to determine the sampling methods used: whether researchers used "random" sampling methods and, if not, what type of sampling method they used. Only five articles (6.1%) reported using random sampling methods. Of the non-random sampling methods, nearly two-thirds were "purposeful" sampling methods (see Table 8). Only two articles contained information about, or reference to, seeking ethics approval.
Discussion
The results of the analysis of 107 journal articles revealed, in the first instance, that 25 articles (23.4%) were not research reports but opinion or position papers. The analysis of the 82 research articles, using the JARA Schedule, revealed, in this particular sample, that correlational research was the most frequently used method, followed by descriptive research. Our results do not support the claim by Burns (2000) and Suter (2005) that the case study method is the most frequent method in education research. We found Survey and Interview to be the two most frequently used data gathering instruments, which is consistent with the prevalence of correlational and descriptive research. However, with nearly two-thirds of studies using only one data gathering instrument, and nearly 40% of studies using case study, action research, or field study, one could have expected that a greater number of researchers would have made use of multiple data gathering methods to address issues of triangulation. A major purpose of education research is to inform and improve classroom practice, so it is not surprising that most of the research studies used students and teachers as a data source. The result showing that nearly 40% of researchers used statistical analysis is consistent with nearly 30% of researchers using correlational methods.
Few researchers, in the sample of articles in this study, addressed issues of reliability-validity or dependability-authenticity, and only two researchers referred to seeking ethics approval for their research. This result is quite surprising given the concern universities and departments of education show regarding the need to seek ethics approval for research in schools. Editors of journals should be more demanding in relation to the reporting of ethics approval. Only two studies indicated that the reported research replicated previous research the author had undertaken, a point that supports claims by Levin (2006) and Murray, Nuttall, and Mitchell (2008) about the prevalence of 'one-off' or small-scale studies.
One could question the results of this study on the basis of the limitations of the articles the researchers selected for analysis, or of the limitations of the design of the instrument, the JARA Schedule, the researchers developed to undertake the journal article analysis. Nevertheless, the findings have provided useful insight into important aspects of education research design.
Conclusion
The purpose of the project was to investigate the type of research methods used in education research. The results of this particular analysis identified a discernible pattern showing a greater frequency of correlational research and descriptive research. A useful introduction to the study of education research would be to concentrate initially on these two research methods, as distinct from attempting to cover a wide range of research methods.
The unexpected finding regarding the value of the JARA Schedule as a learning tool, and the positive feedback from experienced academics and new-to-research post-graduate students alike, was an important one. Researchers also could use the JARA Schedule with research groups and post-graduate students as a simple but useful exercise to sharpen and expand knowledge and understanding of research methods.
Discussion Questions
1. What questions arose for you regarding education research design?
2. Were the findings from this study a surprise? If so, why? If not, why not?
3. According to the JARA schedule, what categories should you address in your article?
To Cite this Article
Knipe, S., & Bottrell, C. (2015, Summer). JARA schedule: A tool for understanding research methodology. Journal of Multidisciplinary Research, 7(2), 17-30.
References
Barr, A. S., Almack, J. C., Ayer, F. C., Dashiell, J. F., Gates, A. I., Good, C. V., Johnson, P. O., Kelley, T. L., McCall, W. A., Ruch, G. M., Symonds, P. M., Toops, H. A., Trabue, M. R., Whitney, F. L., Woody, C., Kilpatrick, W. H., Henmon, V. A. C., & Freeman, F. N. (1931). Symposium on the classification of education research. Journal of Education Research, 23(5), 353-382.
Burns, R. (2000). Introduction to research methods (4th ed.). Frenchs Forest, New South Wales: Longman.
Cooper, H. (1984). The integrative research review: A systematic approach. Applied social science research methods series. Beverly Hills, CA: Sage Publications.
Ensign, P. (2006). International channels of distribution: A classification system for analyzing research studies. The Multinational Business Review, 14(3), 95-120.
Gore, J., & Gitlin, A. (2004). [Re]Visioning the academic-teacher divide: Power and knowledge in the education community. Teachers & Teaching: Theory & Practice, 10(1), 35-58.
Grossman, P. (2008). Responding to our critics: From crisis to opportunity in research on teacher education. Journal of Teacher Education, 59(1), 10-23.
Isaac, S., & Michael, W. (1995). Handbook in research and evaluation: A collection of principles, methods, and strategies useful in the planning, design, and evaluation of studies in education and the behavioral sciences (3rd ed.). San Diego, CA: EDITS Publishers.
Kleck, G., Tark, J., & Bellows, J. (2006). What methods are most frequently used in research in criminology and criminal justice? Journal of Criminal Justice, 34(4), 453-462.
Knipe, S. (2015). Development of a Journal Article Research Analysis (JARA). Paper presented at the 2015 Australian Teacher Education Association Conference, Darwin, Northern Territory, Australia.
Knipe, S., & Bottrell, C. (2013). Barriers to reading and interpreting research. Paper presented at the 2013 European Conference for Education Research, Istanbul, Turkey.
Levin, B. (2006). How can research in education contribute to policy? Australian Education Researcher, 33, 147-157.
Levin, B. (2004). Making research matter more. Education Policy Analysis Archives, 12(56), 1-20.
Lingard, B. (2013). The impact of research on education policy in an era of evidence-based policy. Critical Studies in Education, 54(2), 113-131.
McMillan, J. H., & Schumacher, S. (2006). Research in education: Evidence-based inquiry (6th ed.). Boston, MA: Pearson Education.
Miretzky, D. (2007). A view of research from practice: Voices of teachers. Theory Into Practice, 46(4), 272-280.
Murray, S., Nuttall, J., & Mitchell, J. (2008). Research into initial teacher education in Australia: A survey of literature 1995-2004. Teaching and Teacher Education, 24(1), 225-239.
Nuttall, J., Murray, S., Seddon, T., & Mitchell, J. (2006). Changing research contexts in teacher education in Australia: Charting new directions. Asia-Pacific Journal of Teacher Education, 34(3), 321-332.
Oancea, A. (2005). Criticisms of educational research: Key topics and levels of analysis. British Educational Research Journal, 31(2), 157-183.
Sleeter, C. (2014). Toward teacher education research that informs policy. Educational Researcher, 43(3), 146-153.
Suter, L. (2005). Multiple methods: Research methods in education projects at NSF. International Journal of Research & Method in Education, 28(2), 171-181.
Tuinamuana, K. (2012). What do ATEA people research and why? Conference Presentation, Australian Teacher Education Conference, Melbourne, Victoria, Australia.
Williams, J., & Kendall, L. (2007). A profile of sports science research (1983-2003). Journal of Science and Medicine in Sport, 10, 193-200.
Yates, L. (2006). Is impact a measure of quality? Producing quality research and producing quality indicators of research in Australia. Australian Education Researcher, 33, 119-132.
Sally Knipe
La Trobe University (Australia)
and
Christine Bottrell
Federation University (Australia)
About the Authors
Sally Knipe, Ed.D. ([email protected]), is Associate Professor (Education) in the College of Arts, Social Science, and Commerce at La Trobe University, in Wodonga, Victoria, Australia. Previously, she was a Course Director at Charles Sturt University, where she was responsible for strategic leadership and academic management of a range of courses, and a Senior Education Officer for the New South Wales Department of Education & Training. She has been Chief Investigator for several research projects, including projects funded by the Australian Teacher Education Association and the Dusseldorf Skills Forum. Her publications include Middle Years of Schooling: Reframing Adolescence (Pearson Education Australia, 2007) and a chapter in Big Fish Little Fish (N. Mockler & S. Groundwater-Smith, eds., Cambridge University Press, 2015). She has a Master of Education degree from Deakin University. She is a national Standards Assessor, Teacher Education.
Christine Bottrell, Ph.D. ([email protected]), is affiliated with Federation University, the Northern Territory Department of Education, the Victorian Department of Education, and Catholic Care NSW. Her involvement in research and postgraduate education in New South Wales, the Northern Territory, and Victoria over 30 years informs her practice, project work, and writing. Her expertise in evaluation and measurement, research methods, pedagogy, leadership and supervision, curriculum development, and visual cognition plays an important role in informing her current projects around social policy, education, and community capacity building.