1. There is a reasonable association between SAT scores and how well students do in the first semester of college, and
2. There is a strong association between SAT scores and family income.
That makes sense: families with more financial resources have access to programs that teach students SAT test-taking strategies. A few years ago, partly in response to this sort of criticism, the College Board, in its wisdom, added a third section to the SAT. Somehow they surmised that if a test is ineffective, making it longer might solve the problem. They never asked the right questions: What is it that we actually want college students to be able to do? And what test, if any, can we design to measure that? The answer to that question very likely is none.
In Massachusetts, younger students, of course, take the MCAS. I have never met a public school teacher or principal who had anything positive to say about the test, and I am not certain that the test has told us anything we did not already know about schools. Well-funded school districts produce higher test scores than under-funded districts, which leads people to talk about an achievement gap, when in fact what we really have is an acquired-knowledge gap: a gap that is a product of limited funding, not of student ability. In recent years the Commonwealth has pointed proudly to the fact that MCAS scores are up, but one could argue that this simply means teachers have done a better job of preparing students to take this particular test. And is it developmentally appropriate to ask students to take high-stakes tests at this age?
A couple of weeks ago there was a piece in the New York Times about the challenges of teaching middle school students. I quote:
At Briarcliff Middle School almost any minute of any day can become a lesson in weathering the turmoil of adolescence.
Take the large blue and white sign outside the cafeteria urging students to control their impulses. It didn’t stop Daniel Levine, a sixth grader, from slapping a Groucho Marx mustache on his upper lip and strutting around. But he did hesitate and think about it. “I just wanted to make people laugh,” said Daniel, an energetic 11-year-old. “I fool around, but I know you have to stop sometimes – I’m still trying to learn that.”
If Daniel were in Massachusetts, he would be taking the MCAS next year. Which Daniel takes the test that week? The Groucho Marx Daniel, the reflective Daniel, or a combination? And what have we really learned about Daniel as a learner in that context? Neither the SAT, the MCAS, nor any other high-stakes test asks the right questions about assessment.
I also said that we are not talking about grading here. Grading is essentially a mechanical exercise. Elegant spreadsheet programs can produce precise grades and, at the same time, inaccurate assessments. Grading also leads to futile arguments about issues such as grade inflation. A few years ago Princeton University looked at student grades and noted that they had risen in recent years. A faculty committee designed a policy stating that no more than 35% of the students in any class could receive A’s. Again, we are asking the wrong questions. Grading on a curve does not tell us anything about what students are able to do and what they know within the context of the goals of a class. As far as I know, Princeton did not push itself to consider whether the methods, not the numerical outcomes, of assessment were appropriate for its student body. I also have to ask why a university would want a policy that strives to ensure that 65% of its students do not achieve at the highest level. In the end, most conventional educators commended Princeton for setting high standards.
In the midst of all this, we at Beaver are looking at the issue from a different perspective. We ask our teachers to design curriculum backwards: that is, we ask them to determine what they want their students to be able to do at the end of a unit of instruction or the end of a term, and then to design curriculum and assessments that enable students to get there. Our assessments need to tell us, as teachers, students, and families, where students stand in relation to those goals.
With that in mind, this fall Assistant Head of School Rob Connor and four teachers (middle school science teacher Michelle Wildes, Science Department Chair Mark Wilkins, upper school math teacher Pam Brockmeir, and upper school visual arts teacher Rebecca Roberts) spent three days in, ironically, Princeton, New Jersey, working with educational thought leaders who are doing extensive research on this topic. Back at Beaver, they have spent the year working with their divisions and departments on some of the findings from that research, and they are in the process of designing a new take on assessment.
At Beaver we want assessment to cover at least three areas.
1) Performance. Student productivity: what students do and what students know.
2) Process. Student work habits, initiative, and effort.
3) Progress. How students have improved over a period of time.
This is a complex undertaking, more complex and much more useful than designing grading systems or standardized tests, and all the work may take as long as another year to come to fruition. But our hope is that by September of ’08 we will have more effective ways to communicate, again, with ourselves, with our students, and with our families about student performance, student process, and student progress. Our ways of reporting to families may look quite different from the way they do now, particularly in the upper school. But the notion is that our reports will depict a much more thorough understanding of students as learners than conventional reports do, and will create greater accountability for the school and for students. And, yes, colleges will be just fine with this.
A number of times I have referenced an Education Week article by James Nehring, which reframes the debate over conventional education versus progressive education and, in the process, says some laudatory things about Beaver Country Day School. Our heritage demands that we continue to look at major educational issues like these through the lens of our mission and our values. The author of the Education Week article would have it that conventional schools searching for new tests and new grading systems are flailing, finding answers to lots of irrelevant questions. Our approach, and the approach of all progressive schools, is rooted in educational common sense and in questions directly related to student learning or, as the author of the article says, “We are schools grounded in the wisdom of thoughtful educators with a long history of serving children well.” That is where we focus our attention and our research. If only conventional schools, our universities, test designers, and the political arm of our school systems would do the same.