Who's Outside the Box


Tuesday, October 6, 2009


We can all agree that the interpretation of cognitive test scores mostly leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Knowing this fact, it is absolutely necessary for us, as future school psychologists, to administer and score cognitive tests without any errors.

A recent study published in Psychology in the Schools ("Graduate students' administration and scoring errors on the Woodcock-Johnson III Tests of Cognitive Abilities," vol. 46, issue 7, pp. 650-657, July 2009) looked at the frequency and types of errors that occurred during administration and scoring of the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III COG). Data from 36 graduate students across 108 test records revealed a total of 500 errors! The three most frequently occurring errors were the use of incorrect ceilings, failure to record errors, and failure to encircle the correct row for the total number correct.

Can we avoid making scoring errors on cognitive tests, and if so, how? Are these errors more likely to occur on the WJ III, or do they happen regardless of the test used? Are we properly trained to administer cognitive tests? Do you think that wrong scores may result in some children being placed in the wrong settings?

And finally…What can our graduate programs do to ensure that we are all properly trained to administer cognitive tests?

Posted by Tjasa Korda.


SBartolozzi said...

There will always be room for error when giving cognitive tests; I don't think that is avoidable. However, as trained school psychologists, we should do everything in our power to make the test as accurate as possible.

I don't think the type of test matters; I think what matters is that we are especially careful when administering and scoring the test, instead of just trying to get it done. I feel that we had pretty significant practice giving and scoring tests, but that we need to go over the directions and examples before giving them just to refresh. I also believe that we should utilize the manual to score the test as accurately as possible. I think that some people might feel they've done it enough times that they can do it from memory, but we should always double-check.

Graduate programs cannot do much more than have students practice giving and scoring the tests over and over. In our program, I feel that the lab was a great way to get that done. I felt more comfortable giving the test to a peer first before going out and giving it to a student. I feel that the professors need to make sure that each person is comfortable with each test, which will definitely take a lot of time but will certainly be worth it in the end.

Courtney said...

I agree with Sue that some margin of error in test administration is unavoidable. I think the School Psychology program at NJCU did a great job of training us by first presenting us with the theoretical basis and background information for each battery, alerting us to common errors made in administration, and then allowing us the much-needed hands-on experience of administering the tests amongst our cohorts. This afforded me the confidence and training necessary to appropriately administer the battery in real-life situations. The more experience you have with a battery, the less likely administration errors are to occur.

Roxane Nassirpour said...

As Sue and Courtney said, error is unavoidable. I believe this underscores the importance of collecting a variety of data and NOT allowing test scores to be the sole determinant. You should be going into testing with a hypothesis about the child's ability based on teacher, student, and parent interviews, as well as work samples.

Jessica S said...

As everyone said, error is a fact of testing. While we need to ensure the integrity and accuracy of the test, we also needn't get carried away. Confidence intervals are designed to account for error. Is the idea to get this all-encompassing, sacred IQ score, or is it to get a general picture of the child's overall functioning? If the latter is true, I think IQ scores should be taken with a grain of salt.

This is why a carefully prepared battery of tests is always in order so that we can get an accurate picture of strengths and weaknesses.

Tammarra R. Jones said...

We have been trained to use several resources to determine the best educational programs for children. One strategy for minimizing errors with evaluation tools is to have them scored in several ways. Many school districts have computer programs that score diagnostic tests; after tests are scored manually, they are checked against the computer results. In addition, professionals who incorporate a "buddy system" in checking their work usually find they have fewer errors.
The practice of administering tests and interpreting results has been helpful in school. I am, though, still learning about the art of administering tests to people as I work in placement. As a young educator, I was trained well and learned much in school. My more substantive training began, though, when I was in the classroom, teaching children.
We can avoid most arithmetic and administration errors easily enough. Our focus can then be on the interpretation of those results and on the recommendations we make.