This is a good idea. I hope it helps clarify questions and adds to the information base.
I concur with Barb that we need a more interactive forum for people with questions and comments. Let's see how this can work. I wonder if we can easily interface with some key threads from the ISPA listserv.
I think it would be very helpful as well... a more direct and immediate way to provide feedback and information.
We will have to model for folks and train them to put key words in the subject line if they start a new thread so that it will be easy to search/identify topics of interest.
Let's ask Sean to integrate this blogging account into our website then. What do you think?
I think this is a great idea!
We could begin our blogging with a thread based on setting goals and progress monitoring...
This is a great idea for all of us to learn from the experts at IASPIRE North. Thank you.
BENCHMARKING IN HIGH PERFORMING DISTRICTS - STRAND FROM LISTSERV

From Laura McNicholas:
I currently work in a large, high-performing district where approximately 90% of children meet or exceed standards on the ISATs each year. In developing our RtI plan, the district is trying to figure out who should be benchmarked on a regular basis. Some ideas that have been put on the table include:

Plan A) Benchmark all students in order to gain local norms and ensure that all children have CBM data.
Plan B) Benchmark a random sample of students in order to gain local norms, in addition to Plan C.
Plan C) Benchmark all students in grades K-3 three times a year, and use a certain cut score on the ISATs to benchmark students in grades 4-8 who fall below standards, as well as those children who fall just above the "meets" criterion.

The thought process behind not benchmarking all students is that it would free up financial and personnel resources that could then be devoted to interventions for students who struggle. I wasn't sure what the current research and best-practice guides say about universal benchmarking in a situation where so many children already meet or exceed standards on the ISATs. Any thoughts or suggestions would be greatly appreciated!

___________________________

From Gary Cates:
The correct answer is A, with one additional step: screening must take place, preferably with a correlation-derived cut score (not national or local percentiles).

Beyond that, here is a strong consideration: think about raising your own standards in your school. Making AYP is the MINIMUM acceptable standard. Currently, AYP is 66.5% of your students getting 50% of the questions correct, a 66% graduation rate, a 90% attendance rate (elementary), and 95% participation of all disaggregated groups in high-stakes testing. Making AYP is not an overly burdensome standard; literally getting kids to show up, take the test, and get half the questions correct is considered adequate. I may be a loner out there, but I find it inadequate. Our current educational standards in the United States are minimal standards. Setting your own "meets" expectation, such as passing 75% of the questions, may be a goal to be delineated in the school improvement plan. You all could serve as a model for other high-performing schools that are currently meeting only minimal standards.
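For readers who want to make Gary's "correlation-derived cut score" concrete, here is one minimal sketch. It uses logistic regression, which is just one common way to tie a cut score to a predicted outcome, not necessarily the specific procedure Gary has in mind; the data, the 0.80 probability target, and all names below are hypothetical.

```python
# Sketch: derive a local R-CBM cut score tied to later ISAT success.
# One row per student: fall words read correct (WRC) and whether the
# student later met standards on the ISAT (1 = met, 0 = did not).
# All numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

wrc = np.array([[42], [55], [61], [78], [90], [104], [118], [131], [145], [160]])
met_isat = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

model = LogisticRegression().fit(wrc, met_isat)

# Pick the WRC score at which the predicted probability of meeting
# standards reaches 80%; the 0.80 target is a local policy choice.
target = 0.80
cut = (np.log(target / (1 - target)) - model.intercept_[0]) / model.coef_[0][0]
print(f"Flag students reading below about {cut:.0f} WRC in the fall.")
```

With real local data, the same approach yields a cut score anchored to your own outcome standard rather than to a percentile, which is exactly the distinction Gary is drawing.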
____________________

From Mark Shinn:
I agree with Gary. There are multiple reasons to sustain CBM benchmarking, even in high-performing districts, through at least Grade 5, if not Grade 6. At the upper grades, it is possible to transition to Maze to save time through group administration, with oral reading (R-CBM) follow-up.

Indeed, the ISAT is a very low bar and not sufficient to ensure success in secondary content classes that are reading based. Whether people accept it or not, problems will continue to be defined locally; that is, teachers will see a certain proportion of kids as "having disabilities," as will parents. If you stop benchmarking, you are stopping universal screening, and you will place more burden on teams to decide one at a time. Now, if historically you haven't had kids referred post Grade 3, then maybe you don't need to keep doing universal screening, but I doubt that that is true.

Plus, as part of the entitlement process to rule out underachievement, IDEA 2004 requires that formal assessment of achievement during instruction, at reasonable intervals, is provided to the parents. One could easily read between the lines and infer they were specifying a benchmark approach. What would be offered as an alternative?

I could go on and on: reading growth doesn't stop at Grade 3 and above; all teachers and parents should have data to report growth and development; it can assist in program evaluation, goal setting for individual kids, etc. Yes, getting kids off to a healthy start in K-3 is important, and yes, most students in many districts probably are pretty good readers, but let's put this into a health perspective. Doctors routinely collect vital signs, even with healthy people. Why? Because even with managed care, these vital signs can show stuff. Not all the time, but enough to make it worth it. To me, if as professionals we can't commit 5 minutes of individual reading assessment to all kids 3 times per year, or 15 minutes of individual assessment of vital signs during an entire academic year, well, we have a long way to go. I mean, I still see schools investing massive amounts of time and $$ in COGATs. What's that about? And Gates. Ditto. And, and, and, and...

_____________________________

From Stacy Bjorkman:
As long as we are on the topic of universal screening in high-performing districts... This year, we began using NWEA's MAP assessment for universal screening of reading at our high-performing middle school and are struggling to find an appropriate measure to use for progress monitoring. After identifying our at-risk students based on MAP scores (most of whom are already receiving some sort of reading support), we planned to use Maze for progress monitoring; however, when we obtained baseline Maze scores for our identified students, we found that the VAST majority of our "at-risk" students are scoring FAR above our criterion of success; in fact, some of our "at-risk" students FINISHED the Maze probe! We are now in quite a conundrum about how to proceed, as MAP and Maze have produced conflicting data about who is at risk. I can assume the recommendations from this listserv will be to Maze the entire school and develop local norms for service-delivery decisions; however, district-level administrative decisions have been made about the following points, which at this time are non-negotiable:

1. MAP (not Maze) will be used as the universal screener at the middle school level.
2. Maze (not R-CBM) will be used as the progress monitoring tool at the middle school level.
3. The 25th percentile on Illinois Maze norms will be our goal for all students being progress monitored.

Thoughts, comments, and suggestions are graciously welcomed.

_______________________________

From Joel Grafman:
My school is also in a similar situation. Besides MAP scores and any CBM scores we have, however, we also look at ISAT performance and classroom performance. Besides looking at the Illinois Maze norms, you may also want to look at cut scores on AIMSweb probes for the likelihood of students passing the ISAT. Also, how are you determining which MAP scores are "at risk"? We are currently using the "Illinois Proficiency Tables from Scale Alignment Study" from MAP to determine which students may be at risk.

One problem with looking at only one data point (MAP) is that you may not know the real reason why a student obtained the score that they did (especially on group-administered tests). If the student has behavioral issues, they may have just guessed or responded quickly without much effort in order to complete the test. An interesting tidbit: my 8th-grade teachers created a list of students who would be Tier 2 and Tier 3 students based on MAP data and are not referring over half of those students, because they either performed well on the ISAT, have a history of good MAP scores prior to this fall's scores, or are performing well enough in class that the teachers are not overly concerned about their progress.

_______________________________________

From Ben Ditkowski:
I have a few questions about the three ADMINISTRATIVE non-negotiable points, especially given that this is a high-performing district.

I suppose you could do worse than MAP as a universal screener, but if you can't use it for progress monitoring (and to my knowledge you can't), then determining progress is going to be difficult on any basis more frequent than the three times per year that you administer MAP. The solution, of course, is to use the same measure for screening and progress monitoring.

Using Maze for progress monitoring is not entirely problematic, but if Maze and MAP do not correlate, then I would suggest there is likely an implementation problem. My guess from your description (i.e., some of our "at-risk" students FINISHED the Maze probe!) is that I would want to know about the administration of the Maze. Two common problems are:

- TIMING: Sometimes teachers don't adhere to the time limits. An untimed Maze is not the same as a timed Maze, and to make things worse, with a selection-type response there are likely to be some students who are guessing.
- GUESSING: If students are guessing, the results are not worth looking at. If students are getting less than 70% of the items that they attempt correct, you should be concerned about guessing. (If the test is administered untimed, it would not be unexpected for students to complete the probe, and in doing so, I would not be surprised to see guessing.)

Here the solution is adequate proctoring and monitoring.

The biggest problem is your criterion. The 25th percentile of the Illinois norms for Maze is likely to be too low, especially for a high-performing district. The solution is that you are going to need to negotiate this one, because it does not make sense. If your admin team has made the proclamation that you will use these three "non-negotiable points," then I would suggest that you at least get a powerful intervention program with regular, built-in mastery monitoring (because your goals will likely be problematic).
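Ben's 70%-of-attempted rule is easy to turn into a routine check after each Maze administration. A minimal sketch, assuming you record attempted and correct items per student; the student records below are hypothetical.

```python
# Sketch: flag probable guessing on a timed Maze probe using the rule of
# thumb above: under 70% of *attempted* items correct is suspect.
# The records below are hypothetical.
maze_results = [
    {"student": "A", "attempted": 28, "correct": 26},
    {"student": "B", "attempted": 35, "correct": 21},  # 60%: likely guessing
    {"student": "C", "attempted": 48, "correct": 48},  # finished the probe
]

for r in maze_results:
    accuracy = r["correct"] / r["attempted"]
    if accuracy < 0.70:
        print(f"Student {r['student']}: {accuracy:.0%} of attempted items "
              "correct; retest individually or follow up with oral reading.")
```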
__________________________

From Gary Cates:
The state mandates RtI, and a hallmark of RtI is the use of evidence-based practices. Although this doesn't solve your short-term situation, I would encourage you to think long term and emphasize this hallmark when working with your administrators.

First, your response should be to point out that MAP is not a "universal screener" system, despite what the publisher says. It is a diagnostic tool to group and place students for differentiated instruction. A benchmarking system is in line with screening that predicts which students will have difficulty passing the ISAT; such measures are brief and easy to administer, score, and interpret to parents (these are not characteristics of MAP).

Second, I would encourage you to continue to collect data over time and share the data back with the administrators about which of the two best predicts ISAT performance, as well as whether the two predict better together than in isolation (regression). They will have a difficult time arguing with the data. Think long term, and think of RtI as a developing process based on evidence.

Third, I would ask them for the data suggesting that MAP is a predictor of ISAT for your building, and I would ask them for the data suggesting that Maze is a predictor of ISAT (or AYP data in general) for your building. If they don't have it, then it is not evidence based. However, on the flip side, they can ask you the same question about what you would prefer to use. So what I am suggesting is to be sophisticated in your data analysis and attempt to predict and identify based on your local data, not on what MAP says and not on what AIMSweb says. This is about improving your school as a whole relative to ISAT standards, not about improving your school relative to other schools represented in MAP (a diagnostic tool) and/or AIMSweb (screening/progress monitoring/benchmarking) norms.

Moral of the story: keep pushing for better data analysis and demanding evidence for the practices being pressed upon you.
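Gary's second suggestion, comparing how well MAP, Maze, and the two together predict ISAT, is a short analysis once the scores sit in one table. A sketch, assuming a CSV of paired local scores; the file name and column names are hypothetical placeholders.

```python
# Sketch: which measure best predicts ISAT, and does combining them help?
# Compare R-squared across three regressions on your own local data.
# "local_scores.csv" and its column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("local_scores.csv")  # columns: map_rit, maze_cr, isat_scale
y = df["isat_scale"]

for predictors in (["map_rit"], ["maze_cr"], ["map_rit", "maze_cr"]):
    X = df[predictors]
    r2 = LinearRegression().fit(X, y).score(X, y)
    print(f"{' + '.join(predictors)}: R^2 = {r2:.2f}")
```

If the combined model adds little over the better single predictor, that result is itself useful evidence to bring back to the administrators.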
___________________________

From Mark Shinn:
All this sounds like fun. I'll start with a precorrection: I don't advocate for CBM because I "like" CBM. I like CBM because it works for what it was developed, field-tested, and researched to do. First and foremost, it was designed as, and is best used as, a measure of student progress in the basic skills. Oral reading works best. It is "sensitive" to within-student differences; that is, it "detects" changes in achievement. It is also sensitive to between-person differences, which allows it to be used to distinguish among students with and without reading skills.

Maze is neither a measure of "comprehension" nor "superior" to oral reading (R-CBM) in terms of sensitivity to between- and within-person differences. It is less sensitive. That said, when you have lots of good readers and want to treat everyone equally, with older students (greater than, say, 4th grade) you can achieve economies of time by group testing. It is important to know that careful monitoring during testing is required, and suspicious scores should be retested individually or with follow-up oral reading. It is, after all, a multiple-choice test. Still, oral reading works best, so if time is not an issue (and again, I'll go back to less than 15 minutes per kid per year), well, if we can't do that, we're probably not going to do other things that are more challenging and time consuming.

Now a couple of points. I like MAP, but like Ben and Gary, I have some concerns about decision making with individual students. I like it a lot as a program evaluation tool. But let me share some data:

Time          Reading %ile   Math %ile   R-CBM (WRC) and Other Info
Fall Grade 2  87th           59th        100 WRC, Above Average
Win Grade 2                              130, Above Average
Spr Grade 2                              158, Upper 10%
Fall Grade 3  48th           74th        147, Upper 10%
Win Grade 3   76th           38th        165, Upper 10%
Spr Grade 3   61st           71st        190, Upper 10%
Fall Grade 4  27th           73rd        175 (in Grade 4 material), Upper 10%

I know this kid pretty well. Pretty darn good reader. What is his level of performance? Should MAP progress monitoring scores vary this much in reading and math? Is he an above-average reader or a low-average reader, or do we split the difference? Again, I like MAP and I wouldn't ignore the data. But, to me, when a test is taken on a computer, it presumes the student is motivated, organized, and attentive; when students are tested by examiners, the examiners can (usually) detect these attributes.

Now, the USDE/OSEP National RTI Center (www.rti4success.org) doesn't believe that just any old test can (or should) be used for screening. The test should be validated for that purpose. Sound familiar? They have published a set of standards (attached) and have issued their first call for publishers to submit their tools for independent review, as they did for progress monitoring.
___________________________

From Brenda Harness:
Tri-Valley Middle School started an RtI program this fall. We house 4th through 8th grade. We felt that, especially in the first year, benchmarking the entire school would be a tremendous project. In order to identify the first group of potential RtI Tier 2 candidates, we drew from existing records, which included ISAT scores, spring SuccessMaker grade levels (half a year below grade level was the cutoff), spring report card grades (below C), and teacher recommendation. We benchmarked those candidates with an ORF, Maze, and M-CBM. We also benchmarked students who were new to the district. From these scores, we developed a Tier 2 list of students who needed services and a "watch" list. It appears to have been a good system thus far. I have had as many as 38 students, and have "released" 4 due to their progress.

Usually, I meet with Tier 2 and 3 students in groups of 1-4 and give a progress monitoring timing to start every lesson. Most of the 7th- and 8th-grade Tier 2 math students attend scheduled times to use SuccessMaker in our computer lab, and they are making tremendous strides. All of our Tier 2 and 3 students were benchmarked in the fall and will be benchmarked a total of 3 times during this school year with ORF, Maze, and M-CBM.

We have a graduate assistant (thanks to Gary Cates) helping to correlate our ISAT scores with our computer lab SuccessMaker program, to see if it is a legitimate predictor of ISAT scores. The program tracks skills in math and reading, and it is used by our 4th through 6th grades on a regular basis. Our psychologist is pressing us to benchmark the whole school in reading and math, so we will do that with M-CBM and Maze in December. Not all students perform at ability in timed test situations, so in some cases, I believe SuccessMaker, once correlated, may give more stable, reliable results. The exception would be students who need prompts to attend, so their progress is somewhat delayed, but we have a quiet, well-run computer lab with approximately 25 students attending at a time.
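The correlation Brenda's graduate assistant is running is a one-liner once SuccessMaker levels and ISAT scores are paired by student. A sketch with hypothetical file and column names:

```python
# Sketch: is spring SuccessMaker grade level a legitimate predictor of ISAT?
# Pearson r between paired SuccessMaker levels and ISAT scale scores.
# "successmaker_isat.csv" and its columns are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("successmaker_isat.csv")  # columns: sm_level, isat_scale
r, p = pearsonr(df["sm_level"], df["isat_scale"])
print(f"r = {r:.2f} (p = {p:.3f}) across {len(df)} students")
```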