Purpose

The test assesses the reading ability of students in the early grades of primary school, and more specifically evaluates decoding skills, especially among first-grade students.

Brief Description

The test consists of four subtests, each evaluating the ability to decode a different type of word. The words are either simple or complex in phonological structure. Specifically, the test includes: (a) phonologically simple words with basic consonant and vowel combinations (1st subtest), (b) words containing consonant and vowel digraphs (2nd subtest), and (c) words with consonant and vowel digraphs together with less familiar or complex consonant clusters and diphthongs (3rd and 4th subtests). The words are either real words or pseudowords and are monosyllabic or disyllabic. In constructing the pseudowords, special attention was paid to the grammatical and phonological rules of the Greek language.

Sample

The adaptation of the test was based on a sample of 476 students. Of these, 198 were first-grade students with a mean age of 81.41 months and a median of 82 months; 133 were second-grade students with a mean age of 90 months and a median of 89 months; and 145 were third-grade students with a mean age of 102.32 months and a median of 101 months. In total, 243 were girls and 233 were boys. Based on the adaptation sample, the means (M) and standard deviations (SD) of errors by grade and gender were as follows:
First grade: boys M = 66.08, SD = 43.68; girls M = 60.41, SD = 50.89.
Second grade: boys M = 27.77, SD = 38.47; girls M = 44.34, SD = 45.80.
Third grade: boys M = 34.65, SD = 30.21; girls M = 30.61, SD = 22.27.

Scoring Method

The test is administered individually. First-grade students usually require between 12 and 35 minutes. For second- and third-grade students, the time needed is typically less. The student reads the words presented one at a time on a computer screen. After each response (correct or incorrect), the examiner presses the right arrow key to move to the next word. Incorrect responses are recorded by the examiner on a special form and later entered into the “Olympos” software, which automatically calculates the student’s accuracy and generates a quantitative reading profile.
The information provided includes accuracy rates for each of the underlying grammatical/phonological categories (e.g., which phonemes or final letters are substituted, and the total number of letter transpositions, substitutions, suffix alterations, omissions, and additions), as well as the phonological decoding mechanisms involved. Quantitative analysis is based on two main units: letters/phonemes/syllables and word structure.
A major advantage of the tool is that results are stored in the computer, which maintains detailed cognitive profiles for individual students. These can be used to design highly targeted remediation programs.

Validity

The test evaluates students’ reading ability through a genuine reading process, as they must respond orally to visually presented words, which supports its construct validity. For concurrent validity, results were compared with the TORP test: Pearson correlation coefficients ranged from 0.63 to 0.93 (p < 0.01), while correlations with teacher ratings ranged from 0.77 to 0.93 (p < 0.01).
For predictive validity, three external criteria were used: first, the Reading Ability Diagnosis Test (Tafa, 1995), where statistically significant correlations were found (significance levels 0.01 and 0.05); second, students’ academic performance assessed by their teacher the following school year, again showing statistically significant correlations (0.01 and 0.05); and third, teacher ratings two years later, which also showed significant correlations (0.01 and 0.05).
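The validity coefficients above are Pearson product-moment correlations between students’ scores on this test and scores on an external criterion. As an illustration of how such a coefficient is computed (the scores below are invented for the example and are not drawn from the adaptation sample), a minimal sketch:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical error scores of six students on the two instruments
test_scores = [12, 30, 45, 8, 22, 50]
criterion_scores = [10, 28, 40, 11, 25, 47]
r = pearson_r(test_scores, criterion_scores)  # close to +1: strong agreement
```

A value near +1, as in the ranges reported above, indicates that students who make many errors on one instrument also tend to make many on the other.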

Reliability

The test’s diagnostic reliability is rooted in the fact that each phoneme or phonetic unit appears multiple times across subcategories and is also repeated during the test, increasing confidence in distinguishing between persistent cognitive difficulties and temporary phonetic/grammatical errors.
Internal consistency (Pearson r) was measured as 0.98 for first grade, 0.96 for second grade, and 0.99 for third grade. The standard error of measurement was 0.29 for the lower grades, further supporting the test’s accuracy and predictive value.
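The standard error of measurement relates a test’s reliability coefficient to the spread of its scores via the classical formula SEM = SD × √(1 − r); a highly reliable test therefore has a small SEM. The sketch below uses invented values to illustrate the formula only, and does not reproduce the 0.29 figure reported above, which depends on the test’s own score scale:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical-test-theory SEM: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical values: a score SD of 10 with reliability r = 0.98
sem = standard_error_of_measurement(10.0, 0.98)  # small relative to the SD
```

With reliabilities of 0.96 to 0.99, as reported for the three grades, the SEM is only a small fraction of the score standard deviation.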

Key References

McLeod, J., & McLeod, C. (2000). The McLeod Phonics Test. Saskatoon, SK: McLeod Educational Consultants.
Padeliadu, S., & Sideridis, G. (2000). Discriminant Validation of the Test of Reading Performance (TORP) for Identifying Children at Risk of Reading Disabilities. European Journal of Psychological Assessment, 16(2), 139–146.
Papadopoulou, M.T. (2005). Reading Difficulties: Early Identification and Error Analysis. Doctoral dissertation, Department of Early Childhood Education, University of Thessaly.
Papadopoulou, M.T., & Zafeiropoulou, M. (2002, November). Evaluation of reading skills and educational interventions using computer software. Proceedings of the Panhellenic Conference of the Pedagogical Society of Greece, Athens.
Papadopoulou, M.T., McLeod, J., & Zafeiropoulou, M. (2002). Evaluation of Reading Skills Using Computers. Proceedings of the Scientific Conference Psychopedagogy in Early Childhood. Rethymno: University of Crete.
Papadopoulou, M.T., & Zafeiropoulou, M. (2005). Comparison of Reading Errors made by Greek students and English-speaking students with Reading Problems. Australian Journal of Learning Disabilities, 10(1), 25–33.
Papadopoulou, M.T., Zafeiropoulou, M., & McLeod, J. (2004). The Most Common Reading Errors in First Grade Greek Students: Differences and Similarities with Reading Errors made by English-speaking Students. Workshop presented at the 8th BDA Conference, Warwick, England.
Tafa, E. (1995). Reading Ability Diagnosis Test. Athens: Ellinika Grammata.