Who's Outside the Box


Tuesday, October 27, 2009


From the moment a child is born, he or she is welcomed into this world with the Apgar test, which measures appearance, pulse, grimace (reflex), activity, and respiration. A score below 3 is considered critically low. Then, before one knows it, it is time to test four-year-olds to determine whether they are proficient in literacy or math (this usually takes place in Head Start programs). Come kindergarten, a child undergoes testing to see whether he or she is gifted. By the time any child is 6, he or she has been on a testing roller coaster. Little do these children know that many more tests await them in the future…

Further, a child in preschool may not perform to the best of his or her abilities on cognitive tests for many reasons. Perhaps the child has trouble concentrating at this age, or is too shy to show what he or she knows. I remember one of our professors mentioning that research has shown IQ tests to be less accurate predictors of learning and achievement before the age of 6. Perhaps this research should be taken into consideration before we test yet another preschooler.

1. Should we expose preschoolers to any cognitive testing?
2. Do you feel that intelligence tests are adequate for a child who is only 4 years old?
3. Do we as a nation like tests so much that we have structured society around them?

This blog was created by Tjasa Korda.

Thursday, October 22, 2009

Mirror, mirror on the wall...who's the fairest of them all?

A projective test is a method of personality assessment in which an individual is presented with a standardized set of ambiguous, abstract stimuli and asked to interpret their meanings; the individual's responses are assumed to reveal inner feelings, motives, and conflicts.
Although clinicians frequently use projectives, the subjective interpretation of responses to projective stimuli is problematic with regard to reliability and validity (Beutler, 1995; Dawes, 1994). Recent refinements in scoring have focused on quantifying responses and comparing them to established norms (Exner, 1993), with resulting improvement in reliability and validity.
It seems imperative that users score responses to projective tests objectively if adequate reliability and validity are to result. Without such objective scoring, the door is left open to biased interpretation. For example, if one believes the respondent is aggressive, one may tend to note responses that support such an impression and pay less attention to responses that do not fit as well.

History has shown that the scientific method is the most useful method for gaining reliable knowledge about the world. Will clinical practice improve with the adoption of empirically based rules?

Is there a place for subjective interpretation in psychological assessments and treatments or should clinical practice be guided by the scientific method?

Posted by Courtney Lynch.

Resiliency: The Best View of the Sky is from the Ground

We can teach resilience.

Resilience, the ability to cope successfully with adversity, is not only a naturally developed skill that many use as a means of survival; it is also an ability that can be shaped and encouraged through specific activities that can be effectively taught in the classroom. "Contributory activities," those in which children are involved in helping others, have been shown to make children less likely to display negative or angry behaviors and to foster resilience and practical problem solving. When we convince children that they make a difference in the way we live, and we work at communicating with them in positive ways, they respond.

Even though schools are great places to develop resilience in children, parents can also encourage practical problem solving by discussing why things have to be done, holding family meetings, and collaborating on the conditions under which activities will be completed. Drs. Sam Goldstein and Robert Brooks assert in their book Raising Resilient Children that "success builds upon success," and that children faced with oceans of adversity must be helped to find islands of competence.

As school psychologists, we can contribute to the relationships our students have with us and to their desire to persevere, adapt, and thrive in their environments. We can teach ways to self-regulate, and help our students maintain strong mentoring relationships.

The circumstances in the districts where we work can be dire. Children are failed by us more often than they are served. In addition to seeing the natural sparks in the metaphorical eyes of the children we serve, we must uncover the buried sparks and ignite sparks where they have been extinguished. What are some specific interventions that can encourage children?

Posted by Tammarra R. Jones.

Tuesday, October 13, 2009

There's no I in TEAM...or is there?!?!?!

As school psychologists, we are going to be part of the Child Study Team in whatever kind of school we choose. As in any "team," there are always going to be conflicting personalities and people who are difficult to get along with. No matter what type of people we have on our team, we're going to have to learn to work together. How do you feel about team dynamics, and how will they affect our jobs? If everyone got along and had similar ways of working, that would be an ideal situation. But that will not always happen.

There are many factors that contribute to the team dynamic: age, how long members have been there, methods, opinions, and much more. Do you think there are strategies we can use to bring the members together so they work better as a unit? What role, if any, does the supervisor have in this?

Posted by Susan Bartolozzi.

Tuesday, October 6, 2009


We can all agree that the interpretation of cognitive test scores often drives decisions concerning diagnosis, educational placement, and the types of interventions used with children. Knowing this, it is absolutely necessary for us, as future school psychologists, to administer and score cognitive tests without errors.

A recent study published in Psychology in the Schools ("Graduate students' administration and scoring errors on the Woodcock-Johnson III Tests of Cognitive Abilities," vol. 46, issue 7, pp. 650-657, July 2009) looked at the frequency and types of errors that occurred during administration and scoring of the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III COG). Data from 36 graduate students across 108 test records revealed a total of 500 errors, an average of nearly five errors per record! The three most frequently occurring errors were use of incorrect ceilings, failure to record errors, and failure to circle the correct row for the total number correct.

Can we avoid making scoring errors on cognitive tests, and if so, how? Are these errors more likely to occur on the WJ III, or do they happen regardless of the test used? Are we properly trained to administer cognitive tests? Do you think that wrong scores may result in some children being placed in the wrong settings?

And finally…What can our graduate programs do to ensure that we are all properly trained to administer cognitive tests?

Posted by Tjasa Korda.