Six questions about Common Core-based tests — including MCAS 2.0

The Partnership for Assessment of Readiness for College and Careers (PARCC) recently sent this item out:

The Partnership for Assessment of Readiness for College and Careers (PARCC) is a group of states that brought teachers, administrators and experts together to develop tests to measure how well students understand and are able to apply the skills and knowledge required by the new standards. The PARCC tests are given annually in math and English language arts in grades 3-8 and are structured to be taken online for a more interactive test experience, but can also be taken with pencil and paper. The tests also are designed to provide educators with a deeper understanding of how students learn.

This announcement raised long-festering questions about the failure of the media to examine the influence of one very wealthy man on how the public schools should be shaped (from standards to tests to curriculum), independent of what many parents and teachers want. It also raised new questions about the failure of many state departments of education and local school administrators to guard the public interest. They have seemingly thrown common sense out the window.

1. Who decided that national tests should measure “how well” K-12 students could apply the skills listed by Common Core, with no assessment first of their knowledge of the subjects they study before they are asked to apply this knowledge?

2. Who decided that the same tests could be used for determining “readiness for college and careers”? Who had ever thought that these very different goals could be measured by the same tests, especially in grade 11? What is the rationale for using the same test? We once had what were called “college prep” courses — few suggested requiring those courses of all students. It was clear that the level of these courses was more advanced than the level of the average course in the same subject (often called “standard” and, after course title inflation, “honors”). It was also clear that a majority of students in a typical high school could not read the texts assigned in an advanced English class.

Advanced Placement courses were at first designed for even more advanced study, but in many subjects they were watered down to accommodate the zeal of education “reformers” who believed that these courses should be taken by all students, or at least by any student who wanted to take them, and that such a student would benefit even from receiving a 1 on the AP test. No evidence exists to support this theory. We do know that in recent years most colleges pay little attention to any mark lower than a 5, and that even an AP course whose test was passed with a 5 is not always considered worthy of college credit.

3. Who wanted “tests designed to provide educators with a deeper understanding of how students learn”? Parents never expressed an interest in such tests. Nor did teachers. If asked, most would have said they wanted tests that showed what students had learned and what teachers could do in the classroom to increase learning. If test-makers had wanted to find out “how students learn,” they should have asked cognitive psychologists to examine small groups of students to figure it out. Nowhere were parents or teachers asked if they wanted national tests to tell them “how students learn.”

4. “Teachers, administrators, and experts” were brought together to design the tests, it seems, but why administrators and not parents? And who qualifies as an expert? We are not told.

5. Why are 27 states using off-the-shelf tests in 2016, instead of the tests developed by PARCC or SBAC (the Smarter Balanced Assessment Consortium)?

6. Why are the lead Common Core standards writers selling their own material to schools to “help” them address the poorly written standards that uninformed state boards of education adopted in 2010? There has not been a whiff in the media about the conflict of interest in Student Achievement Partners helping New York City with classroom lessons, tasks, and assessments.

Even with skepticism about their worth, the public would have welcomed some tentative information from the results of national tests in 2015. But no study has given us answers to these questions. Do we have any idea how well any age-group of students in this country can apply the skills supposedly taught in Common Core? Do we have any evidence to show whether readiness for college and careers can be measured by the same high school tests? Do we now have better ideas on how any group of students learns? No, it seems.

So we head into another round of Common Core-aligned national tests designed for unwanted purposes without any indication that they are worth the huge and increasing local investment in the technology and teacher “professional development” they entail.

The public — and Congress — badly need some postmortems on why Common Core-aligned tests are being shunned by more and more states. Researchers need to be able to examine the rationale for the kind of test items developed by these “next-generation” test-developers. They also need to find out who the consultants for these new types of test items were, why they were used, and who chose them. Why were parents not included in shaping the deadly brew these tests seem to have become, judging by recent grade 12 NAEP (National Assessment of Educational Progress) results?

We know very few details to this day about the construction of the tests that the federal government is mandating all students take. Surely, Congress could have mandated some level of transparency about the nature of the instruments ESSA (Every Student Succeeds Act) was requiring states to use in the name of equity.

Sandra Stotsky, former Senior Associate Commissioner of the Massachusetts Department of Education, is Professor of Education emerita at the University of Arkansas. Read more by Sandra Stotsky here.