Fordham's review of state science standards

As a number of people have already pointed out, the Fordham Institute has released 'The State of State Science Standards 2012'. This is of particular interest to me because I played a small role in developing the life science standards for Minnesota. I was disappointed to see that Minnesota earned only a C, and appalled that this is worse than the B the previous standards earned (more on that in another post, perhaps).
Can we really trust an evaluation that uses this color scheme?
Based on my experience as a course instructor, about this time I should be bouncing with delight because, with so many D's and F's, the evaluators have to grade on a curve. What? There's no curve? That's not fair; Professor Lightweight curves. You know, my parents' taxes pay your salary, so you better fix my grade. I mean, I worked really hard on these standards....

Anyway, I wanted to look at this in a little more detail to see if I could glean any insights. The short answer is I couldn't, but if you want to read a bit more feel free.


The reviewers broke down their evaluation into two categories: Content & Rigor and Clarity & Specificity. Content & Rigor was further broken down into the sub-categories Scientific Inquiry & Methodology, Physical Sciences, Physics, Chemistry, Earth & Space Science, and Life Sciences. As I played a role in the life science standards, I immediately looked to see how Minnesota did....6/7! Not too shabby, if you ask me. Indeed, the overall sense was that 'The treatment of life science and earth and space science is excellent'. YAY us. The reviewers noted that the 'flow and logic are such as to convey an understanding of the concepts rather than coming across as a list of topics to check off.' They also noted that the 'life science content is presented quite minimally'. This latter point reads like a negative, but I do not necessarily agree that a minimalistic approach is bad. By focusing on concepts rather than specific content, we allow teachers to deliver the material in the way they are most comfortable with. A teacher with a passion for gardening can deliver the genetics material using botanical examples; another with an interest in infectious disease can use pathogenic bacteria. If the reviewers want to ding us for that, fine, I can live with it. Other aspects of the standards do seem to be problematic, and I wonder if the reviewers were already annoyed by the time they evaluated the life science standards.

I was disappointed that the reviewers did not praise the evolution standards, because I spent considerable effort convincing my colleagues on the committee that evolutionary theory is a central and unifying concept in the life sciences and deserves explicit support (i.e., let's stop saying 'change over time' and use the more concise term 'evolution'). Of course, the reviewers did not attend a year-plus of meetings and discussions, so I'll excuse the slight.

The map released with the review (shown above) shows the overall ratings. However, I hypothesized that this review could be used to get a sense of how political and cultural forces affect science standards. If this hypothesis were correct, then I predicted the life science scores would be the most informative (I don't see laws being debated regarding the speed of light or the periodic table). So I mapped the life science scores, and I decided to use a color scheme that wasn't as shitty as the one the Fordham Institute used.
Purple is 7 and grey is 0 (7 is high)
While I can pick and choose specific states and say 'AHA!' (Oklahoma, for instance), the data are not compelling. Kansas, not exactly a beacon of life science education, earned a 7. There is really no correlation with political leanings or cultural issues that explains the spectrum of color. So my hypothesis was not supported, but I still learned something.
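
For the curious, here's roughly how a map like this can be put together. This is a minimal sketch using plotly's built-in choropleth support; the scores for Minnesota and Kansas come from the discussion above, while the remaining states would be filled in from the Fordham report, and the column names are my own.

```python
# Minimal sketch: choropleth of state life science scores (0-7)
# on a grey-to-purple scale. Assumes pandas and plotly are installed.
import pandas as pd
import plotly.express as px

# MN and KS scores are from the post; the rest would come from the report.
scores = {"MN": 6, "KS": 7}

df = pd.DataFrame({"state": list(scores.keys()),
                   "score": list(scores.values())})

fig = px.choropleth(
    df,
    locations="state",          # two-letter state abbreviations
    locationmode="USA-states",
    color="score",
    scope="usa",
    range_color=(0, 7),         # pin the scale so 0 is grey, 7 is purple
    color_continuous_scale=["lightgrey", "purple"],
)
fig.show()
```

A continuous grey-to-purple scale at least makes the ordering of scores obvious at a glance, which is the whole complaint about Fordham's palette.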

I also mapped Scientific Inquiry and Methodology to see what kind of results I'd get.
How does Texas get a 7 when McLeroy was in charge?!?!
This is even worse; there's really no informative spread in the scores. I don't think we can learn much from this either.

So what's going on? Well, you have to realize that every state develops its standards using different ground rules. When we worked on our standards, we used the NAEP document as well as the revised standards of Massachusetts and Virginia (I think), both of which received a 9/10 overall score. However, there were rules (a.k.a. laws) that had to be followed. For instance, in Minnesota it was legislatively mandated that the science standards include reference to local indigenous peoples and their contributions to science. This was noted in the review: 'Though a minor issue, the standards are occasionally marred by an inappropriate focus on local beliefs.' Every state handles this process differently, which means the standards documents are not written the same way: they have slightly different goals, different authors (we were fortunate to have a plethora of K-12 teachers, a few scientists, and science-related business people on our committee), and different processes of approval.

What that means is that each standards document is a largely independent venture, and there is not necessarily any connection between the documents of different states. Thus, I find the review and its evaluations to be sound as a mechanism for evaluating the strength of state science standards against clear criteria. However, I also find that a standards document does not necessarily predict learning outcomes or teacher effectiveness, or reflect the state of science education in a particular state. While nationalizing the standards would set a bar, a standard if you will, I do not believe it is feasible in our current political and cultural climate.
