Corpus-Based Study on Readability of Integrated English Textbooks

Tunan HU


Readability has been used extensively as a quantitative indicator of the difficulty of reading textbooks. Based on a corpus of 96 texts drawn from two editions of integrated English textbooks published by a top-tier Chinese publishing house in 2013 and 2020 respectively, this paper employs three readability formulas (i.e., FRE, FKGL and LR) to examine their readability trends and differences. The results show that: 1) the readability of both book sets is moderate, corresponding to the reading level of 8th-9th grade students in the US; 2) no significant differences are found between the two book sets on the three readability indices; 3) some LR subindices, such as deep cohesion, run counter to the trend in overall readability. It is argued that: 1) both book sets are difficult for students in the corresponding grades to read, yet display a well-graded hierarchy of readability; 2) upgrading a textbook does not mean that reading difficulty should increase without limit; 3) certain subindices can be adjusted purposefully to moderate overall reading difficulty. This study offers a quantitative approach to evaluating English textbooks in terms of readability.


Keywords: Corpus-based; Readability assessment; Integrated English textbooks
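Two of the formulas named in the abstract, Flesch Reading Ease (FRE; Flesch, 1948) and Flesch-Kincaid Grade Level (FKGL; Kincaid et al., 1975), are simple functions of word, sentence, and syllable counts. A minimal sketch of both (the example counts are invented for illustration; real applications would also need a syllable counter, which is not shown):

```python
def flesch_reading_ease(words, sentences, syllables):
    """FRE (Flesch, 1948): higher scores mean easier text (0-100 scale)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """FKGL (Kincaid et al., 1975): maps the same counts to a US grade level."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical passage: 100 words, 6 sentences, 150 syllables.
fre = flesch_reading_ease(100, 6, 150)   # ~63, "standard" difficulty
fkgl = flesch_kincaid_grade(100, 6, 150) # ~8.6, i.e. 8th-9th grade
```

Note that a text with these counts scores in the 8th-9th grade band on both indices, the same band the study reports for the two textbook sets.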




Benjamin, R. G. (2012). Reconstructing readability: Recent developments and recommendations in the analysis of text difficulty. Educational Psychology Review, 24(1), 63-88.

Biber, D. (1988). Variation across speech and writing. Cambridge: Cambridge University Press.

Crossley, S. A., Allen, D., & McNamara, D. S. (2011). Text readability and intuitive simplification: A comparison of readability formulas. Reading in a Foreign Language, 23(1), 84-102.

Crossley, S. A., Salsbury, T., McCarthy, P. M., & McNamara, D. S. (2008). LSA as a measure of coherence in second language natural discourse. In V. Sloutsky, B. Love, & K. McRae (Eds.), Proceedings of the 30th annual conference of the Cognitive Science Society (pp. 1906-1911). Washington, DC: Cognitive Science Society.

Crossley, S. A., Skalicky, S., & Dascalu, M. (2019). Moving beyond classic readability formulas: New methods and new models. Journal of Research in Reading, 42(3-4), 541-561.

Dale, E., & Chall, J. S. (1948). A formula for predicting readability: Instructions. Educational Research Bulletin, 27(2), 37-54.

Deng, W. B. (2013). A comparative study on new and old editions of intensive reading textbooks of 21st century college English. Journal of Changchun University, 23(1), 118-121.

Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32(3), 221-233.

Graesser, A. C., & McNamara, D. S. (2011). Computational analyses of multilevel discourse comprehension. Topics in Cognitive Science, 3(2), 371-398.

Graesser, A. C., McNamara, D. S., & Kulikowich, J. M. (2011). Coh-Metrix: Providing multilevel analyses of text characteristics. Educational Researcher, 40(5), 223-234.

Gu, X. D., & Guan, X. X. (2003). A sample study on readability of CET reading test and reading materials of college English. Journal of Xi’an International Studies University, 11(3), 39-42.

Hartley, J., Sotto, E., & Pennebaker, J. (2002). Style and substance in psychology: Are influential articles more readable than less influential ones? Social Studies of Science, 32(2), 321-334.

Jin, T., Duan, H. Q., Lu, X. F., Ni, J., & Guo, K. (2021). Do research articles with more readable abstracts receive higher online attention? Evidence from Science. Scientometrics, 126(8), 8471-8490.

Just, M. A., & Carpenter, P. A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87(4), 329-354.

Kincaid, J. P., Fishburne, R. P., Jr., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy enlisted personnel. Retrieved October 11, 2022.

Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85(5), 363-394.

Klare, G. R. (1963). Measurement of readability. Ames: Iowa State University Press.

Lively, B. A., & Pressey, S. L. (1923). A method for measuring the vocabulary burden of textbooks. Educational Administration and Supervision, 9(7), 389-398.

Lu, X., Gamson, D. A., & Eckert, S. A. (2014). Lexical difficulty and diversity in American elementary school reading textbooks: Changes over the past century. International Journal of Corpus Linguistics, 19(1), 94-117.

McLaughlin, G. H. (1969). SMOG grading: A new readability formula. Journal of Reading, 12(8), 639-646.

McNamara, D. S., Graesser, A. C., & Louwerse, M. M. (2012). Sources of text difficulty: Across genres and grades. In J. P. Sabatini, E. Albro, & T. O’Reilly (Eds.), Measuring up: Advances in how we assess reading ability (pp. 89-116). Plymouth: Rowman & Littlefield Education.

McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z. (2014). Automated evaluation of text and discourse with Coh-Metrix. New York: Cambridge University Press.

Richards, J. C., Platt, J., & Platt, H. (1992). Longman dictionary of language teaching and applied linguistics. London: Longman.

Sheehan, K. M., Kostin, I., Napolitano, D., & Flor, M. (2014). The TextEvaluator tool: Helping teachers and test developers select texts for use in instruction and assessment. The Elementary School Journal, 115(2), 184-209.

Smith, E., & Senter, R. (1967). Automated readability index (AMRL-TR-66-22). Ohio: Aerospace Medical Research Laboratories.

Stevens, R. J., Lu, X., Baker, D. P., Ray, M. N., Eckert, S. A., & Gamson, D. A. (2015). Assessing the cognitive demands of elementary school reading curricula: An analysis of reading text and comprehension tasks from 1910 to 2000. American Educational Research Journal, 52(3), 582-617.

Yang, G., & Chen, L. J. (2013). A study on use of reflections on skills under various interaction patterns during asynchronous online discussion. Foreign Languages and Their Teaching, 269(2), 16-19.

Zhao, Y., & Zheng, S. T. (2006). Theoretical analysis on some foreign systems of English textbook evaluation. Foreign Language Education, 27(3), 39-45.




Copyright (c) 2023 Author(s)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.


 Articles published in Studies in Literature and Language are licensed under Creative Commons Attribution 4.0 (CC-BY).


Address: 1055 Rue Lucien-L'Allier, Unit #772, Montreal, QC H3G 3C4, Canada.
Telephone: 1-514-558 6138 

Copyright © 2010 Canadian Academy of Oriental and Occidental Culture