|Can we really trust an evaluation that uses this color scheme?|
Anyway, I wanted to look at this in a little more detail to see if I could glean any insights. The short answer is that I couldn't, but if you want to read a bit more, feel free.
The reviewers broke down their evaluation into the following categories: Content & Rigor and Clarity & Specificity. Content & Rigor was further broken down into the sub-categories: Scientific Inquiry & Methodology, Physical Sciences, Physics, Chemistry, Earth & Space Science, and Life Sciences. As I played a role in the life science standards, I immediately looked to see how Minnesota did....6/7! Not too shabby, if you ask me. Indeed, the overall sense was that 'The treatment of life science and earth and space science is excellent'. YAY us. They noted that the 'flow and logic are such as to convey an understanding of the concepts rather than coming across as a list of topics to check off.' They also noted that the 'life science content is presented quite minimally'. This latter point reads like a negative, but I do not necessarily agree that a minimalistic approach is bad. By focusing on concepts and not specific content, we allow teachers to deliver the material in the way they are most comfortable with. A teacher with a passion for gardening can deliver the genetics material using botanical examples; another with an interest in infectious disease can use pathogenic bacteria. If the reviewers want to ding us for that, fine, I can live with it. Other aspects of the standards do seem to be problematic, though, and I wonder if the reviewers were already annoyed by the time they evaluated the life science standards.
I was disappointed that the reviewers did not praise the evolution standards, because I spent considerable effort convincing my colleagues on the committee that evolutionary theory is a central and unifying concept in the life sciences that deserves to be named as such (i.e., let's stop saying 'change over time' and use the more concise term 'evolution'). Of course, the reviewers did not attend a year-plus of meetings and discussions, so I'll excuse the slight.
The map released with the review (shown above) shows the overall ratings. However, I hypothesized that this review could be used to get a sense of how political and cultural forces affect science standards. If this hypothesis is correct, then I predicted the life science scores would be the most informative (I don't see laws being debated regarding the speed of light or the periodic table). So I mapped the life science scores and decided to use a color scheme that wasn't as shitty as the one the Fordham Institute used.
|Purple is 7 and grey is 0 (7 is high)|
I also mapped the Scientific Inquiry and Methodology scores to see what kind of results I'd get.
|How does Texas get a 7 when McLeroy was in charge?!?!|
So what's going on? Well, you have to realize that every state develops its standards using different ground rules. When we worked on our standards, we used the NAEP document as well as the revised standards of Massachusetts and Virginia (I think), both of which received a 9/10 overall score. However, there were rules (aka laws) that had to be followed. For instance, in Minnesota, it was legislatively mandated that the science standards include reference to local indigenous peoples and their contributions to science. This was noted in the review: 'Though a minor issue, the standards are occasionally marred by an inappropriate focus on local beliefs.' Every state does this process differently, which means the standards documents are not written the same way: they have slightly different goals, different authors (we were fortunate to have a plethora of K-12 teachers, a few scientists, and science-related business people on our committee), and different processes of approval.
What that means is that each standards document is a quite independent venture, and there is not necessarily any connection between the documents of different states. Thus, I find the review and evaluations to be sound as a mechanism for evaluating the strength of state science standards against a clear set of criteria. However, I also find that a standards document does not necessarily predict learning outcomes or teacher effectiveness, nor does it reflect the state of science education in a particular state. While nationalizing the standards would set a bar, a standard if you will, I do not believe it is feasible in our current political and cultural climate.