
RESPONSE TO RESPONSE TO INTERVENTION:

HAVE WE FOUND A BETTER WAY OR WILL WE BE JUST AS CONFUSED AS WE HAVE BEEN FOR THE LAST TEN YEARS?

by

Guy M. McBride

Burke County Public Schools, NC

John O. Willis

Rivier College

Ron Dumont

Fairleigh Dickinson University

Abstract

Congress is in the process of reauthorizing the Individuals with Disabilities Education Act and appears almost certain to eliminate or make optional the use of IQ tests and discrepancy formulae for identification of specific learning disabilities in favor of Response to Intervention and the Problem Solving Model, which are already being used in school districts in at least ten states. This dramatic shift in the practice of school psychology will require school psychologists, ready or not, to master new skills, new approaches, and new roles for the parts of their jobs involving learning disabilities assessments. This article attempts to analyze some of the anticipated changes and possible implications for school psychologists.

Background

Last year, both houses of Congress introduced bills (HR 1350, S 1248) reauthorizing the Individuals with Disabilities Education Act (IDEA). Although the bills had significant differences, with respect to Specific Learning Disabilities (SLD) they both offered identical amendments to the current (1997) law. The House passed its version on April 30, 2003, and, as we write this article, we await action from the Senate. Briefly, both houses of Congress proposed that the states be prohibited from requiring that Individualized Education Program (IEP) teams find a severe discrepancy between ability and achievement in order to identify a student as having a Specific Learning Disability; and both bills proposed that "a local educational agency may use a process which determines if a child responds to scientific, research-based intervention" as an alternative marker for the "true" child with SLD.

Although these changes are not discussed or elaborated upon within the context of the legislation, the history behind those changes suggests that the impact on parents, administrators, teachers, school psychologists and special educators may be analogous to being sucked into a black hole and blown out the other side.

Some of the Many Flaws in the Current "Discrepancy" Model

To understand why RTI may make sense, it is necessary to understand why the current model may be senseless.

In 1975, Congress defined a Specific Learning Disability as a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, that may manifest itself in an imperfect ability to listen, think, speak, read, write, spell, or to do mathematical calculations, including conditions such as perceptual disabilities, brain injury, minimal brain dysfunction, dyslexia, and developmental aphasia.
(ii) Disorders not included. The term does not include learning problems that are primarily the result of visual, hearing, or motor disabilities, of mental retardation, of emotional disturbance, or of environmental, cultural, or economic disadvantage.


Perhaps recognizing that the definition was so broad as to mean just about anything and everything anybody wanted it to mean (a statutory Rorschach card), Congress called upon the Office of Special Education and Rehabilitative Services (OSERS), the Education Department's sub-agency responsible for writing implementing regulations, to provide guidance to schools regarding their definition. In 1977, OSERS gave us §300.541, which included a requirement that the IEP team find a severe discrepancy between ability and achievement before determining a student was eligible for services. A number of exclusions, if found to be the primary reason for the identified "severe discrepancy," were mentioned.

OSERS never required states to adopt regulations requiring that IQ tests be given (Guard, 2002), or that individually administered standardized achievement tests be administered. As early as 1980, the Office of Special Education Programs (OSEP) informed the states that whatever guidance OSERS may offer regarding regulations for the determination of a severe discrepancy was expressly that - "guidance." IEP teams must decide, on whatever basis they deem relevant, whether a student does or does not have a severe discrepancy and what, if anything, a severe discrepancy really means. And while OSEP said states may require IEP teams to document the basis for that determination, states may not override a team's decision, even if it is based on criteria other than those suggested in their state regulations. States and local school districts have nonetheless created and imposed myriad strange and wonderful statistical formulae, with the result that multidisciplinary teams often came, over time, to mistake them for law. [See Dumont, Willis, & McBride, 2001, for a more in-depth discussion of the severe discrepancy clause.]
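
To make the arbitrariness concrete, the sketch below shows the kind of simple standard-score rule many of these formulae boiled down to. The function and the 15-point (one standard deviation) cutoff are our own illustrative assumptions, not any particular state's requirement:

```python
# Illustrative only: a bare-bones "severe discrepancy" check of the kind
# many state and district formulae imposed. The 15-point cutoff (one
# standard deviation on a mean-100, SD-15 scale) is a hypothetical value.

def severe_discrepancy(iq_score: int, achievement_score: int,
                       cutoff: int = 15) -> bool:
    """Return True if ability exceeds achievement by at least `cutoff` points."""
    return (iq_score - achievement_score) >= cutoff

# Two children with identical (poor) reading achievement: only the one with
# the higher IQ "qualifies," which is precisely the oddity discussed below.
print(severe_discrepancy(102, 89))  # False: gap of 13 points
print(severe_discrepancy(118, 89))  # True: gap of 29 points
```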

Multidisciplinary teams, in an attempt to decide about a child's eligibility for special education services, have utilized information provided by school psychologists, learning specialists, and/or independent evaluators who have been assiduously administering and comparing IQ (WISC, WAIS, DAS, KAIT, Slosson, ad nauseam) and achievement tests (WIAT, Woodcock-Johnson, PIAT, K-TEA, etc.) since 1977. Have we, in that process, misclassified children? Services delayed are services denied. Most children are not identified as having learning disabilities until age nine (Lyon et al., 2001). Yet 75% of third grade children with reading disabilities who did not receive early intervention continue to have difficulties learning to read throughout life (Lyon, 1997).

One thing on which virtually everyone seems to agree is that a Specific Learning Disability is not something you catch. If you've got it, you had it when you were in kindergarten, and you will have it when you graduate. But what seems to be the current state of affairs in special education service delivery? One common scenario seems to be as follows. A child is referred to the special education team while in kindergarten. His parents have taught him a few letter sounds and to add one or two numbers, so, when he is administered a nationally normed achievement test, he scores in the low average range and consequently, he doesn't qualify for special education services. A year or two later, the parent or teacher refers him once again for an evaluation. We call in the parents once more. "Mrs. McGillicutty," we say, "We're really sorry to have to tell you this, but we reevaluated your child, and I guess the best thing for us to do is just come to the point. Your child still isn't disabled. Despite the fact that he is falling behind in his work and showing great difficulty in acquiring the skills necessary for successful academics, there isn't a big enough gap between his IQ and his achievement scores for us to be able to identify a 'severe discrepancy.' In another year or two, however, he may fall even further behind, and we're hoping he'll qualify for help then. Better luck next time." That same child gets evaluated again in third, fourth, or fifth grade, and he finally qualifies for services. By then it's too late. He hates school, and more likely than not, even if the school were to offer him effective help, he wouldn't want help from the school. Besides, he's fallen too far behind to ever catch up. And in point of fact, what would be done for him might be little different from what would be done for children with EMD, ADHD, ED, and all the other "D's." While his mother is expecting the school to provide qualified teachers to help him with reading (80% of children identified as having a Specific Learning Disability were referred for reading problems), his teachers will instead be attending workshops on understanding regulations, understanding changes in the regulations, or on completing new forms which were just revised for the third time in a year because the people who drew up the first and second sets of forms didn't really understand what the law required.

The main problem in using IQ test results to calculate severe discrepancy is that the underlying assumption (i.e., IQ tests accurately predict achievement and establish a child's potential) is a myth. IQ test scores have never predicted academic achievement very well. Most commonly used IQ tests account for only 25% to 35% of the variance in achievement, which means that 65% to 75% of what we call achievement is affected by something other than IQ (whatever "IQ" might be). [See, for example, Hammill & McNutt (1981).] When we use IQ scores to predict specific aptitudes, such as performance on a phonics test, the amount of variance accounted for drops to about ten percent. There is nothing intrinsically good or bad about those facts. It's only when we use IQ as a marker to differentiate between children who have or do not have SLD that the whole thing starts to look a bit absurd.
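
The arithmetic behind those percentages may be worth spelling out: the proportion of variance accounted for is the square of the correlation coefficient, so, working backward from the figures just cited, they imply IQ-achievement correlations of roughly .50 to .60 and an IQ-phonics correlation of roughly .32:

    r = .50 gives r² = .25 (25% of variance); r = .59 gives r² ≈ .35 (35%); r = .32 gives r² ≈ .10 (10%)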

We have constantly and consistently talked metaphorically about how a child with SLD is characterized by having a weakness in a sea of strengths. There has never ever been any research to suggest that children who have a strength in a sea of weaknesses (or who are simply drowning in a sea of weaknesses with no life preserver at all) needed help less, or even that they would have profited less from the same directed instruction that the typical child with a SLD receives (or ought to receive). In fact, many researchers have provided evidence and opinions suggesting that IQ scores and discrepancy measures do not distinguish one disabled reader from another (e.g., Aaron, 1997; Fletcher, Francis, Rourke, Shaywitz, & Shaywitz, 1992; Mather & Healey, 1990; Fletcher et al., 1994; Stanovich, 1991, 1993).

Although the main problem with using IQ tests is that they don't predict achievement scores very well, that is not the only problem. Scores from different IQ tests are not interchangeable, nor are scores from different achievement tests. Different tests given to the same child will yield different scores, not necessarily because they are inherently flawed, but because they all measure differing aspects of the same child and because they have different psychometric characteristics (e.g., Bracken, 1988; Floyd, Clark, & Shadish, 2004). What that means is that a savvy evaluator can tip the scales for or against a child's probability of being eligible for special education if he or she has some basic information about a child's strengths and weaknesses. In short, whether a child does or does not qualify could depend to no small degree on the cleverness and versatility of the examiner in playing the "refer-test-place" game.

Perhaps to prevent the exercise of such cleverness and versatility, some school districts require their evaluators to administer the same battery of tests to every child, regardless of the child's suspected and known disabilities and other issues. Such rules, of course, not only violate the evaluation procedures outlined in §300.532 of the 1999 Regulations, they also further diminish the usefulness of evaluations.

Yet another problem with the use of total IQ scores is that they are often depressed by the same cognitive weaknesses that depress the child's achievement. If, for example, a child has a severe weakness in learning, retaining, and retrieving oral vocabulary, that weakness will be reflected in depressed scores on tests of reading comprehension and written expression. However, the same weakness will also depress verbal intelligence measures, which are large components of most total intelligence scores. This "Mark Penalty" (Willis & Dumont, 2002, pp. 131-2) may seem to eliminate a mathematical discrepancy between measures of achievement and ability for some children even when there is a genuine disparity between ability and achievement.
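
A hypothetical numeric illustration of the Mark Penalty (the numbers are ours, invented for this example): suppose a child's reasoning ability, uncontaminated by the vocabulary weakness, would earn a score of 105, but the weakness pulls reading comprehension down to 82 and, because vocabulary weighs heavily in the verbal scale, pulls the measured total IQ down to 94. A discrepancy formula sees a gap of only 94 - 82 = 12 points and, under a 15-point cutoff, reports no severe discrepancy, even though the gap between the child's actual ability (105) and achievement (82) is 23 points.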

RTI and PSM

What does this have to do with RTI and the Problem Solving Model (PSM)? In the late 1980s, Jeff Grimes and colleagues at the Heartland Area Education Agency in Iowa recognized that the federal Child Find regulation (34 CFR 300.125) did not require children to be labeled with any particular disability in order for them to be appropriately served in special education. They proposed to their state a radical new way of identifying children non-categorically, using a problem-solving model and data-driven, research-based interventions as the marker for disability. If, after intensive remediation, a child still was not thriving, the IEP team, without further assessment, could declare that student eligible for special education services. Vaughn, Linan-Thompson, and Hickman (2003) provide an empirical example of this process.

It is to that model (RTI and PSM) that OSERS turned after the 1999 regulations were promulgated. OSERS recognized that there were deficiencies in the SLD criteria, but they said they needed time to research the matter and come up with an alternative. They promised to address those deficiencies in the next reauthorization. Considerable pressure, from various advocacy groups and "think tanks," was brought to bear before that reauthorization, which is now awaiting action from the U. S. Senate.

In January 2001, the Thomas B. Fordham Foundation produced a series of 14 papers on "Rethinking Special Education for a New Century," one of which was titled "Rethinking Learning Disabilities" (Lyon et al., 2001). They called for the abolition of the ability-achievement discrepancy and suggested that the current system used to identify those eligible for special education was hurting more children than it helped. The authors said "Children who get off to a poor start rarely catch up . . . We wait -- they fail." They also suggested that the exclusionary clause related to lack of appropriate instruction was itself inappropriate because inappropriate instruction was a cause of SLD classification. ["Poor instruction causes LD and should not be exclusionary."] They cited research suggesting that early intervention could cut identification of Specific Learning Disabilities by 70%.


The National Association of School Psychologists, the Council for Exceptional Children, the Learning Disabilities Association, and others issued a National Consensus statement which noted (we're rephrasing) that SLD was intrinsic to the child; that "Individuals with SLD show intra-individual differences in skills and abilities," and that "learning disabilities persist across the life span." They concluded that "Identification should include a student-centered, comprehensive evaluation and problem-solving approach that ensures students who have a specific learning disability are efficiently identified." (Specific Learning Disabilities: Finding Common Ground, 2002).

Robert Pasternack (2002), former Assistant Secretary of OSERS, in his presentation to the NASP convention entitled The Demise of IQ Testing for Children with Learning Disabilities, called for RTI and a PSM while questioning the use and utility of IQ testing in learning disability determination ("IQ tests provide no added value in identification or intervention with LD."). Pasternack's concerns about IQ testing included his belief that such tests did not adequately differentiate the outcomes or needs of children with reading problems; that they were poor indicators of differential treatment needs between slow learners and children with discrepancies; and, importantly, that the use of IQ testing resulted in a disproportionate impact of special education identification on different groups.

The President's Commission on Excellence in Special Education (2002), composed of educators, researchers, and others, called for the abolition of the ability-achievement discrepancy and the implementation of RTI within a PSM. They too described our current system as an "antiquated model that waits for a child to fail, instead of a model based on prevention and intervention" [emphasis theirs]. Congress responded. RTI is the marker they have chosen; the Problem Solving Model (PSM) is presumably the vehicle.

OSERS has referred to Heartland, Iowa; Horry County, South Carolina; and Minneapolis, Minnesota as models worthy of emulation. However, these are not the only models using PSM and RTI as a marker for disability. Illinois, Kansas, Florida, Washington, Pennsylvania, Wisconsin, and Ohio are other states that have adopted similar models. [Ohio, for example, has done away with the IQ test requirement and instituted an RTI model similar to Heartland and Horry, but instead of identifying children as being Entitled, the children are identified as having Learning Disabilities. Their teams are still, however, burdened with showing in some fashion or another that there is a severe discrepancy between ability and achievement.]

The PSM is a self-correcting model that typically uses a four-level process. The first level involves the parents and teacher; the second the parents, teacher, and other school staff; the third level involves all of the above plus school psychologists and/or other qualified professionals; and the fourth level is when the child is referred, parent rights are given, consent is obtained for additional testing, if needed, and the team decides whether more interventions need to be tried, or whether the child can be deemed eligible for special education based upon current documentation. These decisions are data driven, as typically extensive, normed and criterion-referenced testing is done throughout the process. Although IQ tests may be administered, experience in Heartland, Iowa, and Horry County, South Carolina, suggests that the teams will be requesting them infrequently - no surprise when you consider that very few IEP goals are ever developed based on strengths and weaknesses revealed by an individualized intellectual assessment.

One component that seems to be missing from the federal bills currently before Congress, though found in virtually all of the models OSERS reviewed, is a low achievement component. Although low achievement is arbitrarily defined (that is, differently defined) in the various model programs, the programs all differ from our current system in that the job of the team isn't to find the lowest possible achievement score (and highest IQ score) in order to determine if a child qualifies for special services. Rather, the team's function will be to find consistency in or a convergence of the data showing that the child is or is not being successful. Thus, unlike the current situation, within the PSM virtually every decision should be based upon professional judgment grounded in data, not (and this for many districts will be revolutionary) a single mathematical calculation. Additionally, if the level of needed instructional intervention rises to the level of specially designed instruction (special education), and if the team determines that the instruction is needed in order for the child to continue to maintain progress, an IEP team can declare the child eligible even if his or her level of achievement has risen above the district's guidelines (6th, 8th, 10th, or 12th percentile, depending on the system). Of course, the Reauthorization bill is always just the first shoe to be dropped; OSERS's implementing regulations typically come out about two years later, and those regulations could change the face of SLD as dramatically as did their first regulations in 1977.

States that have previously documented prereferral interventions may find the new PSM process similar to what they have already employed; but, whereas in current practice, prereferral interventions may be documented by a single page, in the Problem Solving Model using RTI, that documentation might look like a small book.

Some districts may find the level of parental involvement unprecedented, as well. In the PSM/RTI model, parents will be involved at every level of the process, and they will be provided an opportunity for input, asked to participate in the nuts-and-bolts educational planning for their child, and encouraged to be equal partners in their child's remedial program by reinforcing instructional strategies recommended by the team through parental tutoring at home.

Most school psychologists went into the business with some notion that their learning disabilities evaluations might help children. Instead, in that aspect of school psychologists' jobs, many have gone from school to school carrying WISC kits and engaged in the not-so-noble pastime of trying to squeeze the little round bodies of sinking children into special education's square holes. The PSM/RTI model will attempt to bring professionals and parents together the very first time a child is referred. Of course, the PSM/RTI model will still leave at a disadvantage children whose parent(s) are not available or who cannot or will not participate in the process. While Congress seems to like RTI because they hope effective interventions will cut the numbers of served children, and OSERS likes RTI because they're tired of trying to defend an indefensible definition, and the Office for Civil Rights (OCR) likes RTI because they hope it will reduce disproportionality, the Problem Solving Model was not designed for or intended to address any of those concerns.

The PSM with RTI was designed to bring professionals together with one and only one purpose: to help children learn. It intersects neatly, but coincidentally, with No Child Left Behind (NCLB), the 2001 amendment of the Elementary and Secondary Education Act (ESEA), which was previously amended in 1994 as the Improving America's Schools Act. While NCLB holds schools as a whole accountable for addressing the needs of every group of children, even children in special education programs, the PSM offers regular educators a potential tool to help them meet those children's needs in the regular classroom setting.

Some Problems with PSM

Does the PSM have problems? Absolutely. Professionally, the PSM will challenge most school psychologists. Most school psychologists are not adequately trained in curriculum-based measurement, and many are not as well trained even in traditional, norm-based assessment of achievement as they are in cognitive assessment. Whereas many school psychologists currently hold quasi-administrative positions in relation to the teachers and the schools they serve, the PSM will require the exercise of consultative skills on a day-to-day basis. Most school psychologists are not trained in scientific, research-based instructional strategies; and if they are to be effective, not only they, but also the teachers they serve, are going to need those skills.

Implementation with Integrity

The argument against "implementation now" reminds one of the lad in a leaking lifeboat who, when asked if they shouldn't move to another lifeboat, said (pausing in his bailing), "No! It might be leaking too!" There is no single rubric for monitoring the PSM for implementation with integrity. Indeed, there is as yet no national research base suggesting that the PSM has been adequately validated, although local research from Heartland, Iowa; Minneapolis, MN; and Horry County, SC, has been very encouraging. The lack of national research has been cited by both the Council for Exceptional Children and the Learning Disabilities Association (Council for Exceptional Children, 2002, 2003; Learning Disabilities Association, undated a, b) as a rationale for delaying elimination of the severe discrepancy requirement; but even those agencies are calling for implementation of the problem-solving model in addressing children's problems. Implementation with integrity is not just a catch phrase; if we are to better serve children with RTI and the PSM, it is a necessity. Research is definitely needed. The National Research Center on Learning Disabilities, a joint research venture of Vanderbilt University and the University of Kansas, is currently sponsored by the Office of Special Education Programs, Washington, D.C. Its task includes exploring the adequacy of RTI models as alternative models of identifying students with specific learning disabilities and discovering best practices of RTI currently in use.

A Changed Professional World

However, while we do not know with certainty what problems the PSM will bring us, we do know with certainty that the lack of early intervention and remediation of learning problems has hurt some children in both the short and the long run. There is some research from the Orton school, corroborated by research out of Canada (Rourke, Young, & Leenaars, 1989), suggesting that LD children with dyslexia (about 80% of all children identified as SLD) are significantly more prone to suicide and that, upon graduation, they have significantly greater difficulty in accessing the public health system; up to fifty percent of adolescent suicides were previously identified as being learning disabled (Learning Disabilities Association of Canada, 2002).

On the other hand, not all children who have come under the current system have been harmed - indeed some children seem to have made substantial educational progress despite the faults of the service delivery model. The next few years should tell whether we can use the PSM/RTI model to stop hurting children or whether we will just hurt children in different ways or just hurt different children.

An educational world without Wechslers or Woodcocks is not just an alliterative phrase; it appears likely eventually to become a reality. Whether that eventuality is the reader's dream or nightmare probably makes little difference. With support from national professional organizations (including the National Association of School Psychologists), from the Office for Civil Rights, from the Office of Special Education and Rehabilitative Services, and from both houses of Congress, it does not appear that the RTI train can be stopped. Change brings stress, and the extensive changes brought by IDEA 2004 are likely to be particularly stressful for school psychologists. Most, we predict, will successfully adapt. However, school psychologists who have become wedded to their IQ test kits, whose first response to every referral question is to "WISC the child," and who think being a real psychologist only means being able to quote Cattell-Horn-Carroll jargon in a description of a child's strengths and weaknesses on an IQ (GIA, GCA) test - for them, the next decade could be traumatic.

Perhaps we helped bring about this change. To some extent, school psychologists and other evaluators may have brought this long-overdue angst upon ourselves, although we may not have had much choice in the matter. To follow what we believed were the current federal and state mandates (or sometimes, simply to keep our jobs), many of us have tolerated or even actively supported local policies and state rules that imposed rigid discrepancy formulae never mandated by Congress or OSERS. Some have criticized identification of SLD when children did not have sufficient discrepancies, apparently assuming the discrepancy formulae were valid and that the non-formulaic identifications were neither valid nor legal. Working within these systems, some have ignored or been ordered to ignore situations in which the same weakness (for example, processing speed) that depressed measures of ability also depressed measures of achievement and led the IEP team to conclude that there was no significant difference between the two measures, and thus no learning disability (Willis & Dumont, 2002, pp. 131-2).

We have followed orders and wasted time administering IQ tests for no purpose other than obtaining a total IQ (GCI, GCA) score when the referral concern was reading, not intelligence. Some of us have ignored research suggesting that scores beyond the total might provide useful information. We may have used cognitive ability tests for purposes that were unintended, unvalidated, and unreasonable, e.g., assessing handwriting speed with a test of transcribing a digit-symbol code instead of by timing a handwriting sample. We have also tended to follow state and district mandates to devote far more time and energy to cognitive assessment than to careful and thorough assessment of reading, writing, phonology, and math skills. At times, we have treated achievement tests with different content and different norms as if they were interchangeable. We have used one-sentence reading comprehension items when the referral question addressed difficulty reading whole chapters. No wonder Congress and OSERS found little support for what we have been doing. Had we been permitted, or had we simply taken the initiative, better evaluations could have been possible.

Individual Assessments (if any) in the Future

There will almost certainly be a diminished role for norm referenced, standardized individual assessments under the IDEA reauthorization. If the interventions were successful at Level 1 or 2, there would be no point in doing an individual assessment at Level 3. Some children, however, will not respond to generic interventions, even though those interventions have been empirically demonstrated to be effective in most cases. That outcome would lead to the third level in the four-level process, which might be the time for a very thorough, individual evaluation directed not toward classification but toward determining what individualized interventions are needed for the student. Emphasis might be placed on careful, individualized assessment of achievement using multiple probes measuring progress in the general curriculum (Curriculum Based Measurement) (e.g., reading of consonants; reading of vowels; reading of blends, digraphs, and diphthongs; reading of common syllables and affixes; structural analysis; sight vocabulary; word comprehension; sentence comprehension; paragraph comprehension; comprehension of stories and essays; silent and oral reading fluency; etc.) and of more basic skills underlying achievement (e.g., phonology, rapid naming, oral vocabulary, oral language comprehension, working memory, long-term memory storage and retrieval, etc.). The first three steps in the four-step process can provide a wealth of information to direct the individualized assessment. Intelligence tests would be used sparingly, not to provide an IQ number, but only to help explain the individual's learning problems that had not responded to intensive and well-documented intervention. Some current research (e.g., Evans, Floyd, McGrew, & Leforgee, 2002; Fiorello, Hale, McGrath, Ryan, & Quinn, 2002; Floyd, Evans, & McGrew, 2003; Hale, Fiorello, Kavanagh, Hoeppner, & Gaither, 2001) suggests that multifaceted intelligence tests may have a role in understanding academic achievement deficits.

When it comes to this specific aspect of a school psychologist's job description, many school psychologists would welcome the opportunity to do far fewer evaluations and to do those few with a clear and valuable purpose, devoting the rest of their time related to children's specific learning disabilities to Level 3 of the process. School psychologists are ideally positioned to assert leadership in the Problem Solving Model. If school psychologists follow in the footsteps of their peers in systems like Heartland and Horry, they will assume leadership at Level 3 by virtue of their knowledge and training in the PSM process. They will assist the team in defining problems, validating problems by reviewing collected data, and completing functional assessments, if not already done, using such instruments as the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and curriculum-based measurements. They will assist their systems in developing local norms for those instruments. They will be especially valued because of their training in individual assessment, as the action team deals with such problems as setting acquisition or performance goals, writing intervention plans, establishing baseline data, selecting measurement strategies for progress monitoring, providing trend lines, and assessing performance goals. Their expertise will also be invaluable in developing a decision-making plan to evaluate outcomes and effectiveness. They will also be indispensable in organizing that information in a final report to be reviewed at Level 4 if the interventions are unsuccessful and entitlement is to be considered. However, such a model will require many school psychologists to expand their knowledge of assessment, especially in the area of linking assessment to specific instructional recommendations.

The Future for School Psychologists

"Lead, follow, or get out of the way." This anonymous aphorism, quoted by many, including President Reagan, seems to apply to school psychologists in the 21st century. It is impossible to predict how fast change will occur. The Council for Exceptional Children, Learning Disabilities Association, and maybe one or two other groups are urging restraint. But the House passed RTI in April 2003, and there's no certain indication as to when the Senate will act or if it will amend its current proposals. All ten major stake holders at the LD Roundtable sponsored by OSERS were agreed that the ability-achievement discrepancy was invalid. The American Psychological Association (not invited to the LD Roundtable in 2002) did not endorse RTI or the PSM, saying "Current legislative proposals to reauthorize IDEA do not require states to take into account the discrepancy between achievement and intellectual ability in determining whether a child has a specific learning disability. Whether or not this criterion is retained (pending the development of a more valid and reliable alternative assessment), it is critical to conduct a comprehensive evaluation of a child's cognitive strengths and deficits" (American Psychological Association, 2003). However, the National Association of School Psychologists has said unequivocally that the scientifically unsupported discrepancy should be abolished. Both organizations call for the continuance of cognitive testing; yet neither group, within the context of the respective statements, provided research-based data supporting their contention that knowing a child's cognitive abilities could positively influence that student's academic outcomes. Although it might make sense to give people time to develop some nation-wide training materials, RTI could, if the Senate acts, easily be on us before anybody is ready. Some districts have implemented non-categorical procedures (e.g., Iowa for 100% of its special education children, Horry County for selected categories) in order to avoid the SLD requirements. That implementation required rules replacement. Students are called various things, but no specific label is given, e.g., "Entitled Child with an Entitlement to Special Education" (Heartland AEA, 2002b, p. 7-2). If Congress passes what it is proposing, the category "SLD" will be about the same as "Eligible Child with a Disability." Students who now have mild EMD, SLD, mild BED, Language Impairment, and ADHD (without hyperactivity) would be identified under the SLD umbrella instead of the Eligible Child with a Disability umbrella. So it really does not matter much what you call it. Low achievement is (or ought to be) part of the picture, although it is not mentioned explicitly in the proposed changes to the statute, but the other part is that the school absolutely must document resistance to instruction using research-based, scientific interventions within the context of a problem-solving model. The problem-solving model is also not explicitly mentioned in the federal regulations, but when Congress talks about a process, that is the process to which they are referring. We will be very surprised if both low achievement and resistance to intervention are not explicitly referenced in the OSERS regulations.

Documenting resistance to intervention is going to require lots and lots of assessment and record keeping, particularly at Level 3 of the 4-level process, which might (or might not) become part of school psychologists' roles. The bottom line is that nobody knows what is going to happen to school psychologists when (not if) RTI goes national. The National Association of School Psychologists is advocating for school psychologists to assume leadership roles on the teams, and that is both reasonable and supported by what is happening in most of the model sites. However, even if school psychologist positions increase in number as a result of these changes (and that is not guaranteed), there is also no guarantee that they will be filled by the same people now filling more traditional roles. In some school systems, learning disabilities assessment is only one part of a school psychologist's many-faceted responsibilities, but in others, school administrators have sought to contain costs by restricting school psychologists to mandated activities only. In those systems particularly, RTI and the PSM could pose a real challenge to psychologists seeking to justify their positions. In a preventive model, the assessments, observations, consultations, and interventions are, for the most part, being done with regular education children not yet suspected of having disabilities. Under current funding laws, schools may be prohibited from using state special education funds to pay for those services - and so the question, "What funds will be used to pay school psychologists to lead the intervention teams?" has not yet been answered.

As if RTI were not enough to cope with, NCLB has revolutionized special education (Wright, Wright, & Heath, 2004). NCLB calls for a reduction in the artificial barriers between special education and regular education, advocates for a de-emphasis on process and paperwork, and calls for a major system change. Most people do not realize that schools have never, ever actually been accountable for children with disabilities learning anything. Schools have to give parents their rights, develop an IEP, and teach to the IEP, but if the child does not learn a single thing listed on the IEP, the law has always explicitly said it "does not require that any agency, teacher, or other person be held accountable if a child does not achieve the growth projected in the annual goals and benchmarks or objectives" (34 CFR 300.350). The challenge from the President's Commission on Excellence and from NCLB is to change the focus from process-based accountability to results-based educational accountability. It is hard not to feel nostalgic for the good old days when giving it the old college try was enough -- now people actually want to see something for their money.

Additional Questions and Some Tentative Answers

In real and concrete terms, how will Levels 1 and 2 differ from what we have been doing in the past? Can Levels 1 and 2 succeed if the parents are not willing or able to be involved? The PSM calls for extensive documentation, but schools and individuals will vary in their enthusiasm and capacity for that effort. We will have to see whether and how vigorously such documentation will be enforced. The amount of change will depend in various schools upon what teachers were doing in the past. Teachers have always had conferences to apprise parents of their child's progress. In some school systems, for example, board policies require teachers to warn parents as early as January that their child might not be promoted. This model places an added burden on the teacher to try to problem-solve with the parent - and adds a requirement that the problem-solving be documented. Obviously, if the parent does not come in, problem-solving cannot be as effective. But the model assumes up to 80% of student problems will be resolved at this step. If parents refuse to become involved, the assistance team cannot refuse to serve the child at Levels 2 and 3. But it is expected that parent refusal will not be accepted without extensive documentation of the team's efforts to gain their involvement.

How long does a team wait before moving from step to step?

Will the federal regulations set some specific timeline for response to the intervention? Would one, two, four, eight, or sixteen weeks be reasonable? How much discretion will be permitted to local teams? Success in this area is heavily dependent upon local probes being normed. You cannot have a data-driven response to intervention system if you do not have comparative data for fall, winter, and spring for the instruments being used. Using district norms, the team decides at the first meeting what would be an acceptable level of performance, and the next meeting time is set before the meeting is adjourned. Parents have the right at any point to initiate a referral, which would trigger state timelines. Parental participation will help to guarantee the integrity of the process. Teams will be accumulating much more data during the Level 3 process than currently is the case, using a RIOT format (Review, Interview, Observe, Test) across the ICEL domains (Instruction, Curriculum, Environment, Learner), supplemented with classroom observations and CBM probes (Heartland AEA, 2002a, p. 54).

The goal here is not to get the lowest achievement test score or the highest IQ score, as under the present system, but to document a convergence of data showing the child is or is not making adequate progress. Graphing of the results will also be essential, because a determination of need will be based not only on absolutes (e.g., the child is at the 10th percentile) but on whether trend lines show the child making acceptable progress toward his or her goal. Our understanding is that the team will consider those data plus the intensity of service being provided to maintain progress. If a student is being maintained at above, say, the tenth percentile and the trend line is showing promising growth, the team could still consider entitlement if the level of intervention is, in the team's collective judgment, equivalent to specialized instruction (special education). The four-level system is not structured to be inflexibly sequential.
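
For readers who want the trend-line logic spelled out, here is a minimal sketch. All numbers (baseline, goal, weekly probe scores) are invented for illustration, and the comparison against an aim line is our simplification of the kind of analysis described above, not any district's actual procedure:

```python
# Minimal sketch of trend-line analysis for CBM progress monitoring.
# Weekly scores are words read correctly per minute (WCPM); the aim line
# runs from a baseline score to a goal score over a fixed number of weeks.

def trend_slope(scores):
    """Ordinary least-squares slope of scores across equally spaced weeks."""
    n = len(scores)
    weeks = range(n)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

baseline, goal, total_weeks = 20, 50, 12     # hypothetical WCPM targets
aim_slope = (goal - baseline) / total_weeks  # 2.5 WCPM gained per week

probes = [20, 22, 21, 24, 26, 25, 28]        # seven weekly probes (hypothetical)
actual_slope = trend_slope(probes)           # about 1.25 WCPM per week

# If the child's growth rate falls well short of the aim line, the team
# intensifies or changes the intervention rather than waiting for failure.
print(f"aim: {aim_slope:.2f} WCPM/week, actual: {actual_slope:.2f} WCPM/week")
```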

There are many reasons why a child might be automatically elevated to Level 3 (bypassing 1 and 2). Such reasons include:
· Parents request an evaluation;
· The child has previously been referred to Level 2;
· The child is below the tenth percentile on end-of-grade tests;
· The student does not meet grade-level standards in more than one area;
· Student is potentially harmful to self or others;
· Student appears unable to participate in any academic activities;
· Behavior consistently interferes with learning of self or others in the classroom;
· Behavior significantly disrupts the classroom's functioning;
· Student moves in from another district or area with interventions or services having been provided in the past;
· Student has had significant trauma or mental health concerns or issues.

How will teams determine whether proposed interventions are "empirically based"?

Will there be lists of prescribed interventions? Will teams be forbidden from applying common-sense solutions? Will it be legal to conduct research to test new methods that cannot, by definition, be empirically based until the research is completed? These questions appear to remain unanswered for the time being.

What will be the fate of Independent Educational Evaluations (IEEs)?

Will schools still be required to consider the results of IEEs? Will there be a bull market for IEEs? Under what circumstances, if any, will parents be able to demand IEEs at district expense? We will probably need to wait for the final version of the statute, and even longer for the final regulations, to answer these questions definitively. For those states and school districts that experience earlier and greater involvement of parents, there may be fewer requests for IEEs. The PSM may also provide more data to demonstrate that a district's evaluation has been adequate.

If the school has already attempted research based, appropriate interventions, and those have failed, what next?

Have we already tried our best? How will evaluating the student at the upper levels provide useful information that will lead to interventions different from those already tried? The real advantage of this system is earlier intervention. In the current system, the student is referred, but does not qualify, and the teacher says, "Well, I tried to help him. They let him fall between the cracks." The PSM process ensures that appropriate assessments focused on a child's actual problems are conducted as early as kindergarten, with specific interventions for remediation being made and, hopefully, implemented. Earlier intervention means fewer children would fall between the cracks, and it means that children should have a much better prospect for success than in our current wait-to-fail model.

We estimate that approximately 6% of the students for whom interventions are tried will prove resistant. Those are the children who would move to Level 4 and consideration of entitlement. If all assessment questions have not already been answered in the course of problem-solving using curriculum based measurement and applying RIOT and ICEL techniques, then, as needed, team members will participate in planning a full and individual evaluation and completing any necessary additional assessments. Horry County, however, reported a 94% reduction in time spent on additional assessments when the problem-solving model was introduced (Barbour & Schwanz, 2002).

Advantages of RTI at Level 4 (entitlement) should be obvious. Under the current model, often all that the special education teacher has for hard data are scores from the Woodcock-Johnson Tests of Achievement, 3rd Edition, or another nationally normed, standardized test battery (not very helpful in targeting specific areas of weakness in more than global terms unless the evaluator has made special efforts) and regular classroom teacher input. With the data accumulated during Level 3, the IEP will almost write itself. The teacher will know what works a little and what works not at all. Personal communications (2003) with Ben Barbour in Horry County suggest that teachers have found this level of specificity very helpful in providing interventions that more quickly teach the child what we want her or him to know.

How, and by whom, are the Level 1 and 2 interventions monitored and evaluated?

If a parent and a teacher get together to say, "Mildred has a problem, let's try something new," who monitors the intervention? Will we end up with a "been there - done that" mentality, with no empirical evidence or accountability? There will need to be new record-keeping and follow-up mechanisms so that the PSM does not just open up a new set of cracks through which children can fall. Level 1 is informal, but includes some documentation. Extensive documentation is required even at Level 2. The team at Level 3 is charged with reviewing all of that documentation and may, at its discretion, return the child to an earlier level for more extensive remediation. The principal is key in any school. Principals will be charged with enforcing integrity of intervention.

Is there some level of achievement (we have mentioned certain percentiles) at which one would assume that a child is suffering?

Will federal regulations specify some level of achievement below which schools will automatically be required to intervene? Will there be some level of achievement above which the school will be allowed to declare there is no problem? Historically, Congress and OSERS have avoided specifying numerical cut-offs and formulae, but the courts and OCR have reacted with some hostility to local and state rules that appeared to deny services to otherwise eligible children. Being at grade level will no longer be defined as being average. In North Carolina, being at grade level was defined as being at the 38th percentile or above. Renorming has dropped that cutoff point considerably. If we defined "grade level" as the 50th percentile, the NCLB goal of having every child reading at grade level would be ludicrous, since by definition roughly half of the norming population must score below the median. It may be ludicrous anyway, but we do not lower the bar just because more children are getting under it.

What will become of "gifted SLD" children?

RTI and the PSM can be expected to identify children, especially those with low scores on IQ tests, who would not have been identified before. However, children with high IQ scores and average achievement might no longer be eligible for special education identification and services. Advocates for "gifted SLD" children can be expected to be as unhappy as advocates for "slow learners" have been in the past.

Will the entire process, starting at Level 1, be considered "special education"?

If not, will there be restrictions on the involvement of personnel whose salaries are paid partly or totally by designated special education funds? This is a potentially complicated question that may not be entirely resolved by the legislation or even the new regulations. We may need to rely on later interpretations by OSEP, the courts, and other authorities.

Useful Links Related to RTI and PSM

As a resource for those thoroughly scared or upset by all these proposed changes, we offer the following incomplete list of useful links related to RTI and PSM. Some of these are "political" or "legal" documents, but we've always been a profession dominated by laws written by politicians. What they are thinking is important.


http://www.nasponline.org/information/pospaper_rwl.html (NASP article on rights without labels)
http://www.nasponline.org/futures/psmbiblio.html (NASP links)
http://tasponline.org/home.htm (Scroll down to Pasternack's PowerPoint -- it is the document that opponents of RTI and the PSM find necessary to rebut)
http://dibels.uoregon.edu/ (The best of the best CBM.)
http://www.interventioncentral.org/ (Jim Wright's Intervention Central with information on CBM)
http://education.umn.edu/CI/MREA/CBM/cbmMOD1.html (CBM Progress Measures - Stan Deno)
http://www.fsds.org/ (The Illinois version, called Flex)
http://www.nssed.k12.il.us/progserv/services/flexibleservice.htm (More Flex)
http://www.nasponline.org/futures/horrycounty.html (Description of Horry County's model)
http://www.mpls.k12.mn.us/services/speced/resources/psm/Comparison_PSM_state.pdf (Brief description of the Minneapolis model, sometimes referenced in PSM discussions)
http://www.mpls.k12.mn.us/departments/speced/resources/pdf/psm.pdf (Brief description of the Minneapolis entitlement process)
http://www.nasponline.org/publications/cq308minneapolis.html (NASP article on Minneapolis process)
http://www.nrcld.org/html/information/articles/digests/digest3.html (OSEP's researcher Consensus Statement)
http://www.nasponline.org/advocacy/ldreferences.html (NASP Position Statement and other links; scroll down to the LD link)
http://nrcld.org/html/news/symposium2003.html (A description of OSEP's research grant)
http://nrcld.org/html/symposium2003/index.html (Research links)
http://www.geocities.com/vshr1350/DEFINITIONS.htm#sec602 (Both bills on reauthorization)
http://www.apa.org/ppo/issues/ideareauthbf603.html (APA's position statement on LD)
ftp://ftp.pattan.k12.pa.us/pattan/OSEP/CY2002-4qu/Baumtrog01.pdf (OSEP says that they have never required the states to impose an IQ testing requirement for LD.)
http://www.ed.gov/inits/commissionsboards/whspecialeducation/reports/summ.html (President's Commission report)
http://www.edexcellence.net/library/special_ed/ (Links to the Fordham papers. Number 12, Rethinking Learning Disabilities, is a seminal document.)
http://www.lexialearning.com/library/newssource_final_031120.pdf ("Waiting to Fail" a variation on a common theme)
http://www.air.org/ldsummit/ (These executive summaries followed the Fordham series. You can get the gist of what the researchers reported in the Researchers' Consensus Statement (see above), but two articles, one on processing, the other on discrepancy, are worth reviewing.)
http://marketplace.psychcorp.com/PsychCorp/Images/pdf/wisciv/definingtherole.pdf (An interesting article by Dr. Holdnack (20 pages) providing rationales for continuing to give cognitive tests)
http://www.aea1.k12.ia.us/spedmanual/manual.html (Manual, 228 pages, from the Keystone AEA, which has published its material on the Internet addressing entitlement decisions in some depth)
http://www.ldonline.org/ld_indepth/legal_legislative/idea_reauthorization.html (LDA has several articles on RTI, the PSM, and its potential impact on LD children.)
http://www.ldonline.org/article.php?max=20&id=661&loc=41 (There are a number of PowerPoint files, including Pasternack's that may be worth your time.)
http://www.fcrr.org/staffpresentations/Joe/NA/Special%20Ed%20Directors--LDNA.ppt (This explores the potential relationship between Reading First and the PSM.)
http://www.cecdr.org/testimony/February25/Reschly.pdf (Discusses disparate impact and cites the PSM as a means for reducing disproportionality; excellent reference)

References

Aaron, P. G. (1997). The impending demise of the discrepancy formula. Review of Educational Research, 67, 461-502.

American Psychological Association (2003, June). APA Online. Briefing Sheet on IDEA. Recommendations for the Reauthorization of the Individuals with Disabilities Education Act (IDEA). Retrieved February 10, 2004, from http://www.apa.org/ppo/issues/ideareauthbf603.html

Barbour, C. B., & Schwanz, K. A. (2002, June). The winds of change: A problem solving model in Horry County. NASP Communiqué, 30. Retrieved February 29, 2004, from http://www.nasponline.org/futures/horrycounty.html

Bracken, B. A. (1988). Ten psychometric reasons why similar tests produce dissimilar results. Journal of School Psychology, 26(2), 155-166.

Council for Exceptional Children (2002, April). IDEA reauthorization recommendations. Retrieved February 25, 2004, from http://www.cec.sped.org/gov/IDEA_reauth_4-2002.pdf

Council for Exceptional Children (2003, September). Recommendations on House and Senate IDEA bills: Comparison of House passed bill and Senate committee passed bill, Parts A and B.

Dumont, R., Willis, J. O., & McBride, G. (2001). Yes, Virginia, there is a severe discrepancy clause, but is it too much ado about something? The School Psychologist, 55(1), 1, 4-13, 15.

Evans, J. J., Floyd, R. G., McGrew, K. S., & Leforgee, M. H. (2002). The relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and reading achievement during childhood and adolescence. School Psychology Review, 31(2), 246-262.

Fiorello, C. A., Hale, J. B., McGrath, M., Ryan, K., & Quinn, S. (2002). IQ interpretation for children with flat and variable test profiles. Learning and Individual Differences, 13, 115-125.

Floyd, R. G., Evans, J. J., & McGrew, K. S. (2003). Relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and mathematics achievement across the school-age years. Psychology in the Schools, 40(2), 155-171.

Fletcher, J. M., Francis, D., Rourke, B., Shaywitz, S., & Shaywitz, B. (1992). The validity of discrepancy-based definitions of reading disabilities. Journal of Learning Disabilities, 25, 555-561.

Fletcher, J. M., Shaywitz, S. E., Shankweiler, D., Katz, L., Liberman, I., Steubing, K., Francis, D. J., Fowler, A., & Shaywitz, B. A. (1994). Cognitive profiles of reading disability: Comparisons of discrepancy and low achievement definitions. Journal of Educational Psychology, 85, 1-18.

Floyd, R. G., Clark, M. H., & Shadish, W. R. (2004). The exchangeability of intelligence quotients. Manuscript submitted for publication.

Guard, P. (2002, October 9). Letter from Patricia Guard to Dr. Colleen Baumtrog. OSEP. Retrieved February 8, 2004, from ftp://ftp.pattan.k12.pa.us/pattan/OSEP/CY2002-4qu/Baumtrog01.pdf

Hale, J. B., Fiorello, C. A., Kavanagh, J. A., Hoeppner, J. B., & Gaither, R. A. (2001). WISC-III predictors of academic achievement for children with learning disabilities: Are global and factor scores comparable? School Psychology Quarterly, 16(1), 31-35.

Hammill, D. D., & McNutt, G. (1981). The correlates of reading: The consensus of thirty years of correlational research. Austin, TX: Pro-Ed.

Heartland AEA (2002a). Heartland AEA Program Manual for Special Education. Johnston, IA: Author.

Heartland AEA (2002b). Data-based decision making (Ch. 7, "Entitlement"). Johnston, IA: Author.

Learning Disabilities Association of Canada (2002). Statistics on learning disabilities, 2002. Retrieved February 25, 2004, from http://www.ldac-taac.ca/english/indepth/bkground/stats01.htm

Learning Disabilities Association (undated a). LDA talking points on IDEA. Retrieved February 26, 2004, from http://www.ldanatl.org/LDA%20Points%20IDEA.htm

Learning Disabilities Association (undated b). Learning Disabilities Association of America answers your questions on the reauthorization of IDEA. Retrieved February 24, 2004, from http://www.ldanatl.org/Q&A.htm

Lyon, G. R. (1997, July 10). Testimony before the Committee on Education and the Workforce, a committee of the U.S. House of Representatives.

Lyon, G. R., Fletcher, J. M., Shaywitz, S. E., Shaywitz, B. A., Torgesen, J. K., Wood, F. B., Schulte, A., & Olson, R. (2001). Rethinking learning disabilities. Progressive Policy Institute, Thomas B. Fordham Foundation. Retrieved February 4, 2004, from http://www.educationnext.org/unabridged/20012/lyon.pdf

Mather, N., & Healey, W. C. (1990). Deposing the aptitude-achievement discrepancy as the imperial criterion for learning disabilities. Learning Disabilities: A Multidisciplinary Journal, 1(2), 40-48.

Pasternack, R. H. (2002, March). The Demise of IQ Testing for Children with Learning Disabilities. Keynote address presented at the annual convention of the National Association of School Psychologists, Chicago, Illinois. Retrieved February 4, 2004, from http://www.hdc.lsuhsc.edu/Powerpoint/pasternack_nasp2002_files/frame.htm

President's Commission on Excellence in Special Education (2002, July 12). Final Report. Retrieved February 4, 2004, from http://www.ed.uiuc.edu/news/2002/specialedreport.htm

Rourke, B. P., Young, G. C., & Leenaars, A. (1989). A childhood learning disability that predisposes those afflicted to adolescent and adult depression and suicide risk. Journal of Learning Disabilities, 22, 169-175.

Specific Learning Disabilities: Finding Common Ground (2002). A report developed by the ten organizations participating in the Learning Disabilities Roundtable, sponsored by the Division of Research to Practice, Office of Special Education Programs, U.S. Department of Education, Washington, DC. Retrieved February 9, 2004, from http://www.ld.org/advocacy/pdf/CommonGround.pdf

Stanovich, K. E. (1991). Conceptual and empirical problems with discrepancy definitions of reading disability. Learning Disability Quarterly, 14, 269-280.

Stanovich, K. E. (1993). The construct validity of discrepancy definitions of reading disability. In G. R. Lyon, D. Gray, J. Kavanagh, & N. Krasnegor (Eds.), Better understanding learning disabilities: New views from research and their implications for education and public policies. Baltimore, MD: Paul H. Brookes.

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391-409.

Willis, J. O., & Dumont, R. P. (2002). Guide to identification of learning disabilities (3rd ed.). Peterborough, NH: Authors. [http://alpha.fdu.edu/psychology]

Wright, P. W. D., Wright, P. D., & Heath, S. W. (2004). Wrightslaw: No child left behind. Hartfield, VA: Harbor House Law Press.


