Document Type: Original Article

Authors

1 PhD student, Universiti Sains Malaysia (USM), 11800 Penang, Malaysia; Assistant Professor, Universiti Sains Malaysia (USM), 11800 Penang, Malaysia

2 Assistant Professor, Department of Psychology, Payame Noor University, P.O. Box 19395-3697, Tehran, I.R. of Iran

3 Faculty member, Department of Psychology, Payame Noor University, P.O. Box 19395-3697, Tehran, I.R. of Iran

Abstract

The purpose of this study is to examine the score comparability of institutional English reading tests across two testing modes, i.e. paper-based and computer-based tests taken by Iranian EFL learners in four language institutes and their branches in Iran. The researcher examined whether there is any difference between the results of a computer-based test (henceforth CBT) and a paper-based test (PPT) of reading comprehension, and also explored the relationship between students' prior computer experience and their performance on the CBT. Two equivalent tests were administered to one group of EFL learners on two different occasions, one in paper-based format and the other in computer-based format. Utilizing a t-test, the means of the two modes were compared, and the results showed the superiority of PPT over CBT with .01 degree of difference at p
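Because the same group of learners took both test formats, the mode comparison described above amounts to a paired-samples t-test on the difference between each learner's two scores. A minimal sketch in Python, using hypothetical scores for illustration only (not the study's data):

```python
import math

def paired_t(scores_ppt, scores_cbt):
    """Paired-samples t statistic for two test modes taken by the same group.

    t = mean(d) / (sd(d) / sqrt(n)), where d = PPT score - CBT score.
    """
    diffs = [p - c for p, c in zip(scores_ppt, scores_cbt)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)          # standard error of the mean difference
    return mean_d / se

# Hypothetical reading scores for eight learners (illustration only)
ppt = [14, 16, 13, 18, 15, 17, 12, 16]
cbt = [13, 15, 12, 16, 14, 16, 11, 15]
t = paired_t(ppt, cbt)                 # positive t -> PPT mean exceeds CBT mean
```

A positive t statistic here indicates higher mean scores in the paper-based mode; the resulting value would be checked against a t distribution with n - 1 degrees of freedom for significance.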

References

Bachman, L. F. (2000). Modern language testing at the turn of the century: Assuring that what we count counts. Language Testing, 17(1), 1-42.

Bennett, R. E., Braswell, J., Oranje, A., Sandene, B., Kaplan, B., & Yan, F. (2008). Does it matter if I take my mathematics test on computer? A second empirical study of mode effects in NAEP. Journal of Technology, Learning, and Assessment, 6(9).

Berner, J. E. (2003). A study of factors that may influence faculty in selected schools of education in the Commonwealth of Virginia to adopt computers in the classroom. George Mason University.

Boo, J. (1997). Computerized versus paper-and-pencil assessment of educational development: Score comparability and examinee preferences. Unpublished PhD dissertation, University of Iowa, USA.

Bridgeman, B., Lennon, M. L., & Jackenthal, A. (2003). Effects of screen size, screen resolution and display rate on computer-based test performance. Applied Measurement in Education, 16, 191-205.

Chapelle, C. A. (2007). Technology and second language acquisition. Annual Review of Applied Linguistics, 27, 98-114.

Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.

Coniam, D. (2006). Evaluating computer-based and paper-based versions of an English-language listening test. ReCALL, 18, 193-211.

Cumming, A., Kantor, R., Baba, K., Erdosy, U., & James, M. (2006). Analysis of discourse features and verification of scoring levels for independent and integrated prototype written tasks for next generation TOEFL (TOEFL Monograph 30). Princeton, NJ: Educational Testing Service.

DeAngelis, S. (2000). Equivalency of computer-based and paper-and-pencil testing. Journal of Allied Health, 29, 161-164.

DeBell, M., & Chapman, C. (2003). Computer and Internet use by children and adolescents in 2001: Statistical analysis report. Washington, DC: National Center for Education Statistics.

Douglas, D., & Hegelheimer, V. (2007). Assessing language using computer technology. Annual Review of Applied Linguistics, 27, 115-132.

Fleming, S., & Hiple, D. (2004). Foreign language distance education at the University of Hawai'i. In C. A. Spreen (Ed.), New technologies and language learning: Issues and options (Technical Report #25) (pp. 13-54). Honolulu, HI: University of Hawai'i, Second Language Teaching & Curriculum Center.

Higgins, J., Russell, M., & Hoffmann, T. (2005). Examining the effect of computer-based passage presentation on reading test performance. Journal of Technology, Learning, and Assessment, 3(4). Retrieved from http://escholarship.bc.edu/jtla/

Horkay, N., Bennett, R. E., Allen, N., & Kaplan, B. (2005). Online assessment in writing. In B. Sandene, N. Horkay, R. E. Bennett, N. Allen, J. Braswell, B. Kaplan, & A. Oranje (Eds.), Online assessment in mathematics and writing: Reports from the NAEP Technology-Based Assessment Project (NCES 2005-457). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Isleem, M. (2003). Relationships of selected factors and the level of computer use for instructional purposes by technology education teachers in Ohio public schools: A statewide survey. The Ohio State University.

Jamieson, J. M. (2005). Trends in computer-based second language assessment. Annual Review of Applied Linguistics, 25, 228-242.

Scalise, K., & Gifford, B. (2006). Computer-based assessment in e-learning: A framework for constructing "intermediate constraint" questions and tasks for technology platforms. Journal of Technology, Learning, and Assessment, 4(6).

Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment minute by minute, day by day. Educational Leadership, 63(3), 19-24.

Laborda, J. (2007). On the net: Introducing standardized EFL/ESL exams. Language Learning & Technology, 11, 39.

Leeson, H. (2006). The mode effect: A literature review of human and technological issues in computerised testing. International Journal of Testing, 6(1), 1-24.

Mason, B. J., Patry, M., & Bernstein, D. J. (2001). An examination of the equivalence between non-adaptive computer-based and traditional testing. Journal of Educational Computing Research, 24, 29-39.

Mazzeo, J., Druesne, B., Raffeld, P., Checketts, K., & Muhlstein, A. (1991). Comparability of computer and paper-and-pencil scores for two CLEP general examinations (College Board Report 91-5). Princeton, NJ: ETS.

Pommerich, M. (2004). Developing computerized versions of paper-and-pencil tests: Mode effects for passage-based tests. Journal of Technology, Learning, and Assessment, 2(6).

Pomplun, M., Frey, S., & Becker, D. F. (2002). The score equivalence of paper-and-pencil and computerized versions of a speeded test of reading comprehension. Educational and Psychological Measurement, 62, 337-354.

Pomplun, M., & Custer, M. (2005). The score comparability of computerized and paper-and-pencil formats for K-3 reading tests. Journal of Educational Computing Research, 32(2), 153-166.

Pomplun, M., Ritchie, T., & Custer, M. (2006). Factors in paper-and-pencil and computer reading score differences at the primary grades. Educational Assessment, 11(2), 127-143.

Rezaee, A. A., Zainol Abidin, M. J., Issa, J. H., & Mustafa, P. O. (2012). TESOL in-service teachers' attitudes towards computer use. English Language Teaching, 5(1), 61-68.

Roussos, P. (2007). The Greek computer attitudes scale: Construction and assessment of psychometric properties. Computers in Human Behavior, 23(1), 578-590. http://dx.doi.org/10.1016/j.chb.2004.10.027

Sadik, A. (2006). Factors influencing teachers' attitudes toward personal use and school use of computers: New evidence from a developing nation. Education Review, 30(1), 86-113.

Salimi, H., Rashidy, A., Salimi, A. H., & Amini Farsani, M. (2011). Digitized and non-digitized language assessment: A comparative study of Iranian EFL language learners. International Conference on Languages, Literature and Linguistics, IPEDR vol. 26. Singapore: IACSIT Press.

Schaeffer, G., Reese, C., Steffen, M., McKinley, R., & Mills, C. (1993). Field test of a computer-based GRE General Test (Research Report ETS-RR-93-07). Princeton, NJ: ETS.

Taylor, C., Kirsch, I., Eignor, D., & Jamieson, J. (1999). Examining the relationship between computer familiarity and performance on computer-based language tasks. Language Learning, 49(2), 219-274.

Wang, S. (2004). Online or paper: Does delivery affect results? Administration mode comparability study for Stanford Diagnostic Reading and Mathematics Tests. Harcourt Assessment Inc., USA.

Yurdabakan, I. (2012). Primary school students' attitudes towards computer-based testing and assessment in Turkey. Turkish Online Journal of Distance Education, 13(3), Article 12.

Zhang, L., & Lau, C. A. (2006, April). A comparison study of testing mode using multiple-choice and constructed-response items: Lessons learned from a pilot study. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.