Readability of Texts: Human Evaluation Versus Computer Index

Authors

  • Pooneh Heydari, Islamic Azad University, Shiraz, Iran.
  • A. Mehdi Riazi, Macquarie University, Australia.

Abstract

This paper reports a study that explored whether EFL expert readers' evaluation of English text difficulty differs from computer-based evaluation. Forty-three participants, including university EFL instructors and graduate students, read 10 different English passages and completed a Likert-type scale on their perception of the different components of text difficulty. The same 10 English texts were also fed into the Microsoft Word program, and the Flesch Readability index of each text was calculated. Comparisons were then made to see whether the readers' evaluations of the texts matched the calculated indices. Results of the study revealed significant differences between participants' evaluation of text difficulty and the Flesch Readability index of the texts. Findings also indicated that there was no significant difference between EFL instructors' and graduate students' evaluations of text difficulty. The findings of the study imply that while readability formulas are valuable measures for evaluating the level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of the readability formulas and the findings of the present study.
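For reference, the Flesch Reading Ease score mentioned in the abstract is computed from average sentence length and average syllables per word: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), with higher scores indicating easier texts. The minimal Python sketch below illustrates the calculation; the syllable counter is a rough vowel-group heuristic introduced here for illustration, whereas Microsoft Word uses its own internal word, sentence, and syllable counters, so scores may differ slightly.

    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic: count groups of consecutive vowels (illustrative assumption).
        groups = re.findall(r"[aeiouy]+", word.lower())
        return max(1, len(groups))

    def flesch_reading_ease(text: str) -> float:
        # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

    # Example: short sentences with short words yield a high (easy) score.
    print(round(flesch_reading_ease("The cat sat on the mat. It was warm."), 1))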

Published

2012-01-01

How to Cite

Readability of Texts: Human Evaluation Versus Computer Index. (2012). Mediterranean Journal of Social Sciences, 3(1), 177. https://www.richtmann.org/journal/index.php/mjss/article/view/10954