Welcome to the 4th Reich part 1.

I've been perusing the whitehouse.gov site, and this is something I've seen:


Making Our Military Strong Again?!?!

What the fuck does that even mean?

Here are some facts, albeit not 'alternative facts' aka not shit I make up.

Here's how much of our discretionary budget goes to the military:
https://www.nationalpriorities.org/campaigns/military-spending-united-states/

Over half of the discretionary budget goes to the military, and it's more than this graph shows because I think you should include Veterans' benefits (6%) in this figure at the very least. So at least 60% of all discretionary funding goes to the military. (No mention of unfunded wars in this graph.)

How does the US stack up against the planet Earth?
From Fake News, aka people who disagree with Der Führer:
Well, not the whole planet, but the next 13 highest-spending countries don't spend as much on the military as we do COMBINED! You might notice that France, the UK, Japan, Germany, South Korea, etc. are our allies. With Trump in, you might want to throw Russia in as an ally.
"President Trump will end the defense sequester and submit a new budget to Congress outlining a plan to rebuild our military. We will provide our military leaders with the means to plan for our future defense needs."
This quote is right from whitehouse.gov. So I'm guessing Trump needs more money for the military. I wonder why. Is it to have shiny new tanks driving down Pennsylvania Ave. for the State of the Union? We spend more than China by roughly 6x! This raises the question: if we spend so much of our fucking money on the military, but (in Trump's world) the military is so fucking horrible, why in the world would we throw more money at it? They'll just fucking waste it, right?

I'm wondering if Trump and company will bring up 'data' like the fact that we have fewer ships in the Navy than in 1860. If so, I'll pit one destroyer against the entire 1860 US Navy.

Maybe our military needs more money, but I want to know why. I want a justification, because the numbers I see tell me otherwise.


Reviewing grants for NIH vs NSF: a comparison

During my career, I have reviewed grant proposals for both the National Institutes of Health (NIH) and the National Science Foundation (NSF). The standard NIH research proposal is called the R01 and generally gives 5 years of funding to a research lab; the standard NSF proposal generally gives 3 years of support to a research lab. By and large, an NIH award will provide more funding than an NSF award on a per-year basis.

How the process works in general terms:
After an investigator (or investigators) writes a proposal to either agency, the proposal is assigned to a study section or panel for review. The study section/panel is composed of expert researchers in the general research area the proposal addresses. Specific experts are recruited based on the specific proposals submitted, such that there is at least one expert working in the area of each proposal. In general, reviewers receive a stack of 8-15 proposals to review. Reviewing takes a lot of time and energy, with reviewers often referring to the literature to get up to date on specific topics. Ultimately, the reviewers all gather together in a room to discuss the proposals and make recommendations for which proposals get funded and which do not. At NIH, reviewers have much more influence on which proposals obtain funding than at NSF. In part, this is because NSF is legally bound to ensure funding is spread across the country and across different types of institutions, so NSF program officers have to weigh the reviewer recommendations against these other criteria to make funding decisions. (To be clear, proposals NSF reviewers find to be fundamentally flawed are not funded simply to spread the wealth.)
[Photo: a generic stack of grants to review]
Once the initial reviews are written, though, the process is fundamentally different between NIH and NSF. I believe the NSF model is profoundly better than the NIH model, and I'll explain why using a specific rationale that I think is readily justified, but also anecdotes, which I realize do not count as data and are therefore less reliable. (Full disclosure: I have reviewed grants for both agencies, have submitted proposals to both, and have been funded by NIH but not NSF.)

What happens at NIH pre-meeting:
When you review a proposal, you score it on a variety of criteria using a 1-10 point scale (1 being the best). You also give each proposal an overall score. For your stack of proposals, you are supposed to spread out your scores such that you don't give every proposal 1s across the board. A reviewer has to note both the strengths and weaknesses of the proposal for each of the criteria, which is the basis of the review. Any given proposal is reviewed by 3 reviewers (sometimes more, but generally not). Once all the proposals are reviewed and scored, this information is sent to NIH and becomes available to the other reviewers. Thus, a reviewer cannot 'cheat' and see what the other reviewers think before writing their own critique.

Say a proposal titled 'XYZ' is reviewed by Dr. 123, Dr. 456, and Dr. 789; a different proposal titled 'ABC' is reviewed by Dr. 123, Dr. 045, and Dr. 232; and a third proposal titled 'JKL' is reviewed by Dr. 123, Dr. 045, and Dr. 789. The point here is that different groups of reviewers are reviewing different proposals. However, there is generally overlap of reviewers because they share similar expertise. Say the study section is on signal transduction in eukaryotic systems: there might be a group of 6 experts who work with mouse models, another 8 experts who work in fungal systems, and 6 more experts who work with Drosophila. Generally speaking, every proposal using mouse models (and likely other mammalian models) would be reviewed by 3 of the 6 experts who work with mouse models. If 15 of the 90 or so proposals in the study section are studying mammalian signal transduction, you can see that these are being reviewed by a specific cohort of the study section rather than the whole panel. Same for the proposals using fungi as a model and the proposals using invertebrates as a model.
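(If it helps to see the cohort idea spelled out, here is a minimal sketch in Python. The reviewer names, cohort sizes, and random draw are invented for illustration; this is not how NIH actually assigns reviewers, just the gist of each proposal being matched to three reviewers from the relevant expertise group.)

```python
import random

# Hypothetical study section: reviewers grouped by model-organism expertise.
reviewers_by_cohort = {
    "mouse":      [f"Dr. M{i}" for i in range(1, 7)],   # 6 mouse experts
    "fungal":     [f"Dr. F{i}" for i in range(1, 9)],   # 8 fungal experts
    "drosophila": [f"Dr. D{i}" for i in range(1, 7)],   # 6 fly experts
}

# Hypothetical proposals, each tagged with the cohort it falls into.
proposals = [("XYZ", "mouse"), ("ABC", "mouse"), ("JKL", "fungal"), ("QRS", "drosophila")]

def assign_reviewers(proposals, reviewers_by_cohort, n_reviewers=3, seed=1):
    """Give each proposal three reviewers drawn from its own expertise cohort."""
    rng = random.Random(seed)
    return {title: rng.sample(reviewers_by_cohort[cohort], n_reviewers)
            for title, cohort in proposals}

for title, revs in assign_reviewers(proposals, reviewers_by_cohort).items():
    print(title, "->", revs)
```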

What happens at NIH during the meeting:
At NIH, once all the proposals are scored, they are ranked, with the lowest overall score (based on the 3 reviewers) ranked first. Depending on the cohort your proposal falls into, this could work for or against you. Some reviewers score more generously (are more likely to give 1s) than others. So one reviewer's 2.2 may be another reviewer's 1.3, even if they are equally enthusiastic about their respective proposals. Based on the luck of the (reviewer) draw, your proposal might be scored as the 10th best, but with a different draw that same proposal might be scored as the 1st (best) proposal for the entire study section. Here's the outcome of this situation: proposals are discussed in rank order, so the lowest-scoring (best) proposal is discussed first, the second best second, the third best third. Of the 90 or so proposals submitted, only the top third is actually discussed by the entire group; the other two-thirds are 'triaged' (i.e. not discussed). Of those discussed, only a handful, 0-4, are actually funded. When a proposal is discussed, it is described to the entire panel, most of whom have not read it, at least not in any depth. Usually the first page (the Specific Aims) is read by everyone, but generally not much else of the proposal. After the brief presentation, where the reviewers go over the strengths and weaknesses, any member of the panel can ask questions or comment. If a reviewer gave a proposal a 1.3 but stated nothing but weaknesses, the question would inevitably arise: 'Why did you score this so high?' Once the discussion is complete, the three reviewers give revised scores (generally these change little, and when they do, they move toward the mean). The entire panel then enters their own scores for the proposal, which are generally the average of the three reviewers' scores. Then the panel moves on to the next grant.
[Photo: NIH (FYI, panels take place at a hotel, not here)]
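(For the curious, here is a toy simulation of the score-rank-triage flow just described. The scores, number of proposals, and one-third cutoff are made up for illustration and are not NIH policy; the point is only how the pre-meeting averages set the discussion order and the triage line.)

```python
from statistics import mean

# Hypothetical preliminary scores from the three assigned reviewers (1 = best).
reviewer_scores = {
    "XYZ": [1.2, 1.5, 1.4],
    "ABC": [2.2, 1.9, 2.4],
    "JKL": [1.3, 3.0, 2.8],
    "QRS": [3.5, 3.8, 3.2],
    "TUV": [2.0, 2.1, 1.8],
    "WXY": [4.0, 4.2, 3.9],
}

# Rank by the average of the three reviewers' scores (lowest average = best).
ranked = sorted(reviewer_scores, key=lambda p: mean(reviewer_scores[p]))

# Roughly the top third gets discussed; the rest is triaged and never comes up.
cutoff = max(1, len(ranked) // 3)
discussed, triaged = ranked[:cutoff], ranked[cutoff:]
print("Discussion order:", discussed)
print("Triaged (not discussed):", triaged)

# After discussion the whole panel scores each proposal; in practice the final
# score hovers around the mean of the three reviewers, so little changes.
print("Final (panel) scores:", {p: round(mean(reviewer_scores[p]), 2) for p in discussed})
```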
How can this go wrong?
First, there is the psychological issue that the panelists know they are discussing the proposals from 'best' to 'worst'. Even though a reviewer may love their proposal, if it was ranked 10th, it is not discussed until after nine other proposals. The reviewers may lower (i.e., improve) their ultimate scores to reflect this, but the entire panel knows it was 10th and that the reviewers are changing their scores to make it not be 10th, rightly or wrongly.

Confounding this issue, the majority of grants are solid, good proposals that should be funded. Let me rephrase that: the top 20 proposals (or so) in a study section are solid, excellent proposals. Hell, you could take the top 10-15 and, in general terms, the scientific/impact difference between them is negligible, yet only the top few have any chance of funding. This means that once you reach a certain point, funding is really a luck issue and nothing more. In fact, there have been suggestions of putting the top proposals into a lottery to determine funding. This is not a new problem. When I was trying to obtain my first R01, I submitted my last attempt at funding for one project (you had three attempts). Based on previous critiques and scores, I was confident of funding. However, one of the main parts of an Aim had been completed and published during the time I spent on the first and second submissions. It would be stupid to propose doing published stuff, so I changed that Aim to focus on the follow-up studies based on what we had published. The third submission was triaged (it was actually discussed because I was a new investigator, but it was scored in the triaged range), and the biggest issue was that I had reworked an Aim and 'we have not had a chance to fix it.' (Quote from the actual reviewer.) My program officer recommended I resubmit with a new title as a new proposal, which of course I did. This 'first' submission was funded and received one of the lowest (best) possible scores. My point is how arbitrary the system can be.

Second, people suck. On a study section I have served on (ad hoc) numerous times, there are two distinct factions based on the type of organism each faction studies. Some members of one of these factions would read and critique the high-ranking proposals of the other faction in order to present 'issues' and 'faults' during the discussion session. This is completely valid if it's applied equally, but it was done with the goal of diminishing the high-ranking proposals of the other faction in order to increase the standing of proposals from their own faction. (In other words, it was not done to critically evaluate the science across the board but strategically, to help their colleagues and their field.)

Third (and most importantly, in my opinion), the scoring is done blind. Apart from the three reviewers, the rest of the panel scores the proposal in secret (again, based on my discussions with panel members, it is usually the average of the three reviewers' scores). Once a proposal is scored, it is not brought up or discussed again.

What happens at NSF pre-meeting:
It's pretty much the same as described above for NIH; however, there is not a 1-10 scale but a qualitative one (Excellent, Very good, Good, Not competitive). There are still 3 reviewers, there are still strengths and weaknesses, and there are still cohorts based on areas of expertise. NSF proposals are broken up into two sections: the 'intellectual merit,' basically the science being proposed, and the 'broader impacts,' basically how the work benefits society. Each of these sections is a critical part of the review: have excellent intellectual merit but no real broader impacts, and your proposal is not scored well.

What happens at NSF during the meeting:
On the surface, it's similar to NIH. However, the proposals are not pre-ranked/pre-scored. The order of discussion is based on reviewer availability, as some ad hoc reviewers call in. It's also based on the leadership of NSF, some of whom may be interested in a specific area and want to sit in to hear the discussion of an area they are familiar with. (At NSF, the program officers are practicing scientists who have taken a multi-year leave from their research institutions to serve at NSF; the upper leadership are usually 'permanent' NSF staff.)
[Photo: NSF headquarters (FYI, panels take place here!)]
Proposals are discussed by the reviewers and then a general discussion takes place. This discussion is more robust than what I have observed on NIH study sections. Once the discussion is done, the panel, not the reviewers, suggests a category to put the proposal in (again, Excellent, Very good, Good, or Not competitive). The Very good and Good categories are further broken up into two groups, I and II, to distinguish the very-very-good from the not-so-good goods. After the panelists make a recommendation, the reviewers can agree or disagree and another mini-discussion can ensue. Regardless, the proposal under consideration is placed on the board. (This is an Excel spreadsheet projected on the wall.) We then move on to the next proposal.

A key difference is that once a proposal is placed, it can be discussed further. This is particularly important when two similar proposals get profoundly different rankings; we can then discuss why. At NSF, proposals can move around a lot. Furthermore, everyone at the table has to agree on the category and the position within the category of every proposal (we still decide on the most excellent, the 2nd most excellent, the 3rd most excellent, etc.). I may not agree with the ultimate position of every proposal on the board, but as a group we are in agreement.
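(Here is a minimal sketch of that 'board,' again in Python, with made-up proposal names and my paraphrase of the categories. The piece the NIH process lacks is the move() step: a proposal already on the board can be pulled back up and re-placed after more discussion.)

```python
# Toy version of the NSF board: an ordered list the whole panel keeps adjusting
# until everyone agrees. Categories follow the description above.
CATEGORIES = ["Excellent", "Very good I", "Very good II", "Good I", "Good II", "Not competitive"]

board = []  # list of (title, category), kept sorted best-to-worst by category

def place(title, category):
    """Put a proposal on the board in the category the panel suggests."""
    board.append((title, category))
    board.sort(key=lambda item: CATEGORIES.index(item[1]))

def move(title, new_category):
    """Reopen discussion of an already-placed proposal and move it."""
    board.remove(next(item for item in board if item[0] == title))
    place(title, new_category)

place("XYZ", "Very good I")
place("ABC", "Excellent")
place("JKL", "Very good II")
# Two similar proposals ended up far apart, so the panel revisits one of them.
move("JKL", "Very good I")
print(board)
```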

How can this go wrong?
First, psychology still exists. If the reviewers score a proposal Excellent or Not competitive, as a panelist you are influenced by this. I did not read every proposal (although the other panelists and I would often read particularly contentious proposals at night before meeting the second day). Regardless, those initial critiques carry weight, even though we know this happens and try to avoid it.

Second, people still and always will suck (#Trump2016). My most recent NSF full proposal was not funded, and one of the reviews referred to me as 'she' and 'her,' whereas the other reviewers referred to me as 'the investigator' or 'Dr. XYZ' (the standard boilerplate when talking about the researcher). I'm not saying the reviewer was biased against me because they thought I was a woman, but it's possible. This was the only time in close to two decades I've read such a condescending review, one that attempted to explain to my feeble girl-brain what science is and how it's done by 'real' scientists. (Full disclosure: I'm not a woman, and it shouldn't matter anyway.) After talking with my program officer, I learned my proposal was the one on the fence between funding and no funding, and it unfortunately fell on the side of no funding. And here is why I think the NSF system is better...

Why the NSF system is better:
Regarding my unfunded NSF proposal: it is my fault it wasn't funded. I could complain about sexism and bias, but if my proposal had been slightly stronger, the other reviewers would have gone to bat for me more, the panel would have placed my proposal higher, and I would have been funded. This is not the case at NIH, where one slightly unenthusiastic review can tank your proposal. I expect that when my NIH proposal was dinged for getting too much done and reworking an Aim the reviewers hadn't had a chance to 'fix,' there was some brief discussion of this not being a reasonable critique (if the reviewer didn't actually say it out loud, it wouldn't have been discussed, period), and then the scores were adjusted somewhat. The reviewers who supported my proposal increased their scores slightly to show some semblance of reviewer cohesiveness, the reviewer who was an idiot decreased their score somewhat to 'fix' the BS critique, and the panel scored to the mean, which amounted to triage. If the proposal had had to be placed on a board and put in context with other proposals, I doubt it would have been triaged, and I expect it would have been funded based on the score of the subsequent submission.

In conclusion, I like the NSF system more because it is more transparent, accountable, and self-correcting.

Some potential confounding factors:
Success rates: There is really no difference in success rates between NIH and NSF; they both suck (#Trump2016), and essentially a lottery system for the top proposals seems appropriate (although NSF has additional criteria that affect who gets awarded).

Number of grants: My experience is that there is really no difference when it comes to the meetings. Some NSF programs have a preproposal stage (essentially their triage step) and then review 30 or so full proposals, which is about the number an NIH study section fully reviews. I'll point out that every preproposal is reviewed too; there is no 'it wasn't good enough to discuss' category.

Probably others I cannot think of now.

Also, I know none of this is #Trump2016's fault, but it is my go-to hashtag to express contempt at the shortsightedness of one party (Republicans).

Is Peer Review Broken?

No.




Oh, you wanted more than that? Maybe some nuanced reflection on the issue of peer review? Ok, I'll give it a go, but I won't do nuance. Nuance died when Trump was elected, actually before that, but talk about a nail in the coffin.


There have been many stories floating around for years on the failings of peer review (the vetting of scientific studies by other expert scientists in the field prior to publication). These stories usually follow the publication of some study that is fundamentally flawed or unacceptable for any number of reasons. Several examples come to mind: arsenic bacteria, cold fusion, caterpillar hybridization, etc. So the questions 'Is peer review broken?' and 'If so, how do we fix it?' come up.

For those who live in the scientific universe, you can skip the blue paragraphs; otherwise, if you want a short breakdown of the process, feel free to read them.

Once a group of scientists have made observations and gathered data, they write a story (aka a scientific manuscript). I want to note this manuscript is a story, not in a fiction sense, but in a narrative sense. The authors may not describe the experiments in the order they were conducted, because it makes a more logical narrative to describe things out of order. The authors may use their 20/20 hindsight to re-describe why an experiment was carried out, because at the time of writing the manuscript the original reason may not make sense in light of the narrative. Again, for those just looking to find reasons to disparage science, I'm not suggesting authors are manipulating data or trying to obscure their findings (although there are cases of this); I'm talking about making a compelling argument to convince a skeptical audience of experts that their interpretations of the data (aka conclusions) are correct.

Ok, once the manuscript is written, revised, and edited until most, if not all, of the authors hate the thing, it is submitted to a scientific journal for publication. The journal then assigns the paper to an editor, who decides if the paper is of sufficient rigor and interest to the readers to actually get peer reviewed. If the paper passes this hurdle, the editor sends a number of requests to other scientists asking them to evaluate the manuscript, including the experimental approaches and the interpretations of the data. (There are some variations on this model, but most journals follow it. Some have additional levels of scrutiny, but this tends to be early in the process and not by current scientists.) Many potential reviewers decline, and the editor keeps sending out requests until, usually, at least 3 agree. The 3 reviewers then read and critique the manuscript and provide feedback to the authors and editor on the pros and cons of the manuscript; this is the PEER REVIEW component. At this point the editor makes a decision on the manuscript, which ranges from (rarely) acceptance, to editorial revisions, to more-experiments-needed revisions, to outright rejection. The most common response is some kind of revision, either with or without more experiments, and the authors deal with those critiques and resubmit a revised manuscript. (In the case of outright rejection, the authors usually revise the manuscript based on the review comments and send it to another journal; they revise because the same reviewers are likely to see it again.) The revised manuscript can then be accepted or rejected by the editor or (most commonly) sent out for re-review.
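(If a cartoon of that loop helps, here is one in Python. The quality numbers, thresholds, and three-round limit are all invented; no journal works off numbers like these. It just encodes the sequence: editorial screen, three reviews, an editor decision, and the revise-and-resubmit cycle.)

```python
import random

def review_manuscript(quality, rng=None):
    """Cartoon of the peer-review loop; quality is a made-up number from 0 to 1."""
    rng = rng or random.Random(0)

    # 1. Editorial screen: is it rigorous/interesting enough to send out at all?
    if quality < 0.3:
        return "desk rejection"

    rounds = 0
    while True:
        rounds += 1
        # 2-3. Three reviewers critique the approaches and interpretations.
        votes = ["accept" if rng.random() < quality else "revise" for _ in range(3)]

        # 4. The editor weighs the critiques and decides.
        if votes.count("accept") == 3:
            return f"accepted after {rounds} round(s) of review"
        if rounds >= 3:
            return "rejected; authors revise and try another journal"

        # 5. Most common outcome: revise (with or without new experiments) and resubmit.
        quality = min(1.0, quality + 0.1)

print(review_manuscript(0.7))
```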


Peer review, much like everything humans do, is a human endeavor. So it is subject to human limitations. This is not new or particularly special; it's simply a fact that humans are not robots, have biases, and some even have ulterior motives. In a simplistic sense we can say peer review is broken, because people will and do make mistakes at all the levels described above. However, by this criterion peer review has always been broken and always will be broken. But this is a stupid criterion. Let's go through how this process can fail, because once we know how it can fail, we can make recommendations for how to fix it.

1. Poor editors

This can work for or against a manuscript. For you: the editor thinks your shit don't stink. They can send out your manuscript when they wouldn't send out the same manuscript from a different group. They can pick reviewers they know to be 'easy' or can write the invitation letter in such a way as to encourage a positive review. How can the latter happen? An editor, who is an established scientist, can send one of their former graduate students/post-docs, who is now an independent scientist, the following invitation:
Dear Prof X, I have this manuscript I think would be a perfect fit for journal Y, do you have time to review it? Abstract attached below.
Do you see how the letter could affect the review? You have extreme cases like Lynn Margulis obtaining numerous reviews until she had 3 she could use to accept an atrocious paper in PNAS (ignoring all the reviews that noted the fundamental errors in the paper).

2. Poor reviewers

I've been an editor for several journals. I can give you two easy reasons why you might get a poor review. First, you, as an editor, may not be an expert in the area the manuscript addresses. You might be generally aware of the area, but you are certainly no expert, which means you likely do not know who the experts are in that field (because it's not your field). You can do PubMed searches to identify people who have published in specific areas, but you don't know them or their research. So an editor may not be obtaining 3 rigorous expert reviewers.
Second, even in an area of expertise, the researchers you know are experts often say no when asked. This is particularly true if the journal isn't one of the top journals out there. There is little prestige in saying in your annual review that you reviewed papers for a general journal like PLoS ONE, compared to Science or Nature.

3. Poor journals

There are also two versions of this. First, there are journals that will publish anything if you pay; it's their business model. You can check out Beall's predatory journal list to identify many of these. Second, there are top-tier journals that care about mass-media dissemination of the work published there, also their business model (Science, Nature, I'm looking at you). The arsenic bacteria paper was published in one of these journals, as were the ENCODE papers. This is not simply a journal issue, as there were problems at all levels, but the journals actively advertised this work.

So how do we fix peer review?

Most of the discussion I've seen deals simply with reviewers, which I think is the least broken aspect of the peer review process. (It's like blaming teachers for poor student performance and ignoring income inequality and poverty.) The solutions I've seen generally revolve around identifying reviewers, which is stone-cold fucking stupid. I've gotten jackass reviews on publications and wish I knew who the dumbass was, but that's kind of the point of anonymous reviewers: if I disagree with a reviewer, I cannot subconsciously or consciously screw them over on one of their papers or (more importantly) grant proposals in the future. Non-anonymous review means no early-career scientists will review papers, for fear of career suicide, or early-career scientists will review papers and be favorable in the hope of favorable reviews in return going forward. Non-anonymous peer review would essentially end all the good things of peer review and solve exactly 0 of its problems.

One idea I have seen floated around is publishing the reviews, which I actually support. The reviewer remains anonymous but also has to take some ownership of their review. This could reduce what I think have been some bullshit critiques. While the reviewer would remain anonymous, the community could see what the issues were, decide whether those were reasonable (and reasonably dealt with by the authors) or unreasonable, and actually comment on it (because the age of social media has changed things profoundly).

How can we fix things? I like publishing the reviews along with the articles (make them available online). bioRxiv may help with this, as authors can post their original manuscripts for the world to see and compare the reviewer critiques against. I personally like the idea of paying reviewers. $50 a review: not enough to cover the cost of the review, but enough to provide some incentive. (I expect I spend 4 hours on every paper I review rigorously, because I check the literature, so that's not even minimum wage in some states. Some papers are so bad they can be reviewed in an hour or two, but then I wonder why the editor sent the paper out (see below).) Review 6 papers a year, which is pretty low in my experience, and you make an extra $300, which is not nothing. If you suck at reviewing, editors stop asking you, which has a financial consequence. I can see the argument that the money could allow systemic abuse, where reviewers want to appease the editors so they get more assignments, but this is essentially the amount I could make mowing a couple of lawns on a summer evening, probably in less time. (FYI, many journals make good money on the backs of free reviewers and free editors.)

One idea I have not seen floated around is making editors more accountable. Too often in my submitted manuscripts (and those manuscripts I have reviewed), the editor simply defers to the reviews and takes no responsibility. If a reviewer asks for an additional experiment, it must be done, even if it has no effect on the conclusions made in the manuscript. In too many cases the editors simply pass information between the reviewers and the authors.

How can we fix this? Make editors more accountable. First, pay them. Say $3,000 a year; this is essentially the cost of publishing one article (FYI, the authors pay to publish their work in the journal). If an editor is not doing a good job, boot them and take on another one. Screw it, hire professional editors, PhD scientists, for $90,000 a year and have them cover a research area; 30 articles covers their salary (another 10-15 covers benefits). How many Nature papers are biology-related every week?!?! Maybe the CEOs make a little less in order to support high-quality science?
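(A quick back-of-the-envelope check of the numbers floated in the two paragraphs above; all figures are the ones proposed there, not real journal economics.)

```python
# Reviewer side: $50 per review, ~4 hours per rigorous review, 6 reviews a year.
per_review = 50
hours_per_review = 4
reviews_per_year = 6
print(f"Effective rate: ${per_review / hours_per_review:.2f}/hr")   # $12.50/hr
print(f"Extra income per year: ${per_review * reviews_per_year}")   # $300

# Editor side: a professional editor at $90,000/yr, authors paying ~$3,000 per article.
editor_salary = 90_000
fee_per_article = 3_000
print("Articles needed to cover the salary:", editor_salary // fee_per_article)  # 30
```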

What about the journals? Well, authors should stop fighting to get their shit in glamour mags. I know scientists are under immense pressure to publish in C/N/S journals (Cell, Nature, Science), but do these journals really publish the best of the best? Don't know; what I do know is that any good study published in an open access journal is available to everyone, every-fucking-one with an internet connection!!! How many people actually peruse journals anymore, as opposed to doing PubMed searches? I still subscribe to Science and/or Nature, but primarily for the news, reviews, and opinion pieces, as well as to support their policy and outreach initiatives. If you are doing quality work, it will be read, because Google. If I can find a decent Chinese restaurant in Rome online, I can find interesting articles on phenotypic diversity in microorganisms online. I would point out the Nobel Prize-winning research on B cells (the antibody-producing cells of the body) was published in the fucking Journal of Poultry Science.

In summary, peer review is a human endeavor and subject to human foibles. Is it perfect? No. Can it be improved? Marginally. Is it the best we have? Absolutely, but with the caveat that minor improvements can be made and the acceptance that there is no such thing as perfection, simply the ongoing striving for perfection.

What I Read (2016)

(Grade A-F, no E's) Title by Author. Additional thoughts.

A        Injustice: Gods Among Us Year 1. In general, when the video game comes first, the next media manifestation sucks. Think of all the video-game-to-movie travesties (I'm looking at you, Super Mario Bros.). However, this graphic novel (a compilation of the first year of the comic book series), based on the Mortal Kombat-style video game, is a success. Like Watchmen, it deals with some interesting issues.

C        Die Trying by Lee Child. Another pass-the-time 'thriller'. Not that thrilling really, but an easy, quick read to pass some time.

C-       John Constantine: Hellblazer Vol 1. Meh, you'd better have some knowledge about John Constantine coming into this or it's not going to be easy figuring out what's going on initially. The last story and a half seemed unrelated to the overarching arc, tacked on to add some filler.

C-       The Obelisk Gate by N.K. Jemisin. Had to go back and see what I rated the first in this series (FYI, an A), because I was not a fan of The Obelisk Gate. Maybe I forgot too much of the earlier book, but this one provided little to no context for much of what happened in the previous book. (A brief synopsis or some character conversations to orient the reader of last year's book would be helpful.) Basically two independent stories moving along in parallel.


A-       The Sandman: A Game of You by Neil Gaiman. I enjoyed this story a lot. In general, I like stories that interconnect two different worlds, particularly the 'real world' with a fantasy world. But also, this story had some great characters.


B-       Chimera by Mira Grant. Well, my opinion of the second book seems justified by this one. I liked the way this story culminated, but I think the second and third books could have been merged to make a more concise, flowing story.

B+      Night Stone by Rick Hautala. Classic Indian-burial-site horror story. Not a feel-good happy ending, and considering that this was written in the 1980s, not par for the course. Also, Hautala gets a nod because I knew him (my mom worked with his wife) and he gave an aspiring author (me, age 9) a collection of books for my birthday (including Night Shift by Stephen King and The October Country by Ray Bradbury).
 
C        Severed Souls by Terry Goodkind. I enjoy Goodkind's Sword of Truth books, and I think I enjoyed this one when I read it, though I recall getting bogged down with the flight from the half-men. It just kept going and going, to the point of being more of the same. The reason I scored this lower is that I had to look up a synopsis to remember some of the plot lines of the book, not a good sign.


A-       Destiny of the Republic by Candice Millard. Knew little of James Garfield and now I know more. Wonderful weaving of the politics and medical science of the time, especially the motivations of individual people. While Alexander Graham Bell is a central character in the narrative, he doesn't actually contribute except as a historical notation (albeit an important one).

A-       Dust by Hugh Howey. Great conclusion to the series. Kind of a happy ending, if you forget about all the horrible things. Overall a good evaluation of the human condition, both good and bad. Some issues were never resolved adequately in my opinion, like why women and children were frozen in Silo 1, or why other silos had to be killed off instead of simply being ignored going forward.

F        Darwin's Black Box by Michael Behe. Not really any redeeming qualities to this book. Behe's writing is condescending and he treats his audience like children. The book is more propaganda than scientific analysis. After a brief intro in which the word 'literal' is misused, the following chapters are supposed to establish the idea of 'irreducible complexity' at the molecular level, but basically the point is that if we don't understand every possible thing, then the god-of-the-gaps argument is true. Some chapters are supposed to address the idea of intelligent design, but they are rewordings of the god-of-the-gaps argument. I am sure this book appealed to creationists and still does.

A-      Shift by Hugh Howey. A great sequel (more of a prequel) that provides much of the backstory for how the apocalypse came to be. It's depressing in large part, but we are talking about the end of civilization. It ties into Wool well and provides a solid foundation for the last book, Dust.


A        Wool by Hugh Howey. A great and, in my opinion, excellent world built by Howey. Life in a gigantic silo to survive a 'dead' Earth. I whipped through this book in no time and am looking forward to picking up the second and third of this trilogy.


C+      The Sorcerer's Daughter by Terry Brooks. Definitely a Shannara book (maybe it's a 'familiarity breeds...' issue), although I rank it higher because there are some novel aspects to this story, including a not-so-neatly-packaged happy ending for some of the protagonists.


A        Mr. Mercedes by Stephen King. A suspenseful and exciting mystery by Stephen King. No magical or fantastical forces at work in this story. Not really surprising, because it seems much of the 'horror' in King's work is the human kind.

B-       Galilee by Clive Barker. An engrossing story about two families, one mortal and the other less so. I began this book many years ago, but gave up on it for unrelated reasons. An enjoyable story that I didn't score higher because I found the ending, which describes how the families became entwined, rushed.

C       Killing Floor by Lee Child. An average read to pass some time. Pretty obvious who the bad guys were and what their motivations were. Kind of expected more from the book that launched the Jack Reacher franchise.

A-      Morning Star by Pierce Brown. I truly have enjoyed this series. As with The Expanse series, I enjoyed this trilogy tremendously, although I'm not usually a fan of space operas. Unlike The Expanse series, this reads more like a fantasy that takes place in space instead of in different kingdoms on a single world. There are orcs, trolls, gnomes, elves, etc., although with different names. Regardless of genre placement, I highly recommend the books.

B       The White Dragon by Anne McCaffrey. Another solid sequel in this series (3/3 so far). While using the same world established and grown in the first two books, this one has a different feel to it. The only aspect I didn't quite follow, which dropped it out of the A category (spoiler alert), is why it took the Southerners so long to realize they were desperate enough to resort to stealing a golden egg.

A-       Neverwhere by Neil Gaiman. Do you sense a trend with this string of A-'s? A thoroughly enjoyable read. I read the author's preferred version, because why wouldn't you? The world-building, exploring the border between reality and slightly-not-reality, reminds me of several Clive Barker stories. Hell, even some Stephen King stories. Great characters.

A-       Do Androids Dream of Electric Sheep? by Philip K. Dick. Blade Runner was a great movie. This book was better, but different. I don't see how you can compare the two. I love that the movie is different enough to work in its medium, compared to this book, which works so well in its own.

A-       Dragonquest by Anne McCaffrey. A great sequel! I was probably feeling better when I wrote this, to score it higher than the original installment, which I enjoyed a lot. What stuck with me in this book is that it centers on dealing with problems arising from the solution to the problem in the first book. Even our solutions, which completely solve a problem, are not without consequence.

C       The Satanic Verses by Salman Rushdie. I know this novel is considered by many to be a major contribution to our literary time. Hell, it even put a permanent death sentence on Rushdie, but I don't see it. I think the biggest contribution to society this book makes is uncovering the extreme danger of fundamentalist religion 20 years before it was obvious to America.

B       Dragonflight by Anne McCaffrey. An enjoyable romp, especially the prologue/backstory. I started this book many years ago, probably as a late teen, and didn't get into it at all. Not sure why, but I thoroughly enjoyed it this time.

A       Lord of the Flies by William Golding. Of all the crap I was forced to read in high school, The Scarlet Letter or The Fountainhead (neither of which I actually finished), why wasn't this a must-read? Great exploration of the human condition and how fragile our hold on civilization may actually be. (I'm looking at you, Ammon Bundy and Donald Trump.)


C        The Cat Who Walks Through Walls by Robert A. Heinlein. The first book I started and finished in 2016. Stranger in a Strange Land is one of my all-time favorite books, and I was extremely excited to read another Heinlein; however, this one did little for me. I don't really get the point of the ending.


B+      A Short History of Nearly Everything by Bill Bryson. An engrossing layperson's approach to what we know about the universe and our place in it. Engaging writing, with intimate descriptions of and discussions with the scientists on the ground. As a biologist, I was disappointed by the many factual errors in the sections on biology, but the main points held up. I assume the astrophysicists and geologists felt the same way about the descriptions of their fields.

D        After Alice by Gregory Maguire. Alice in Wonderland was a delightful jaunt. After Alice was forced and tedious. Tim Burton's take on Alice was infinitely better; at least there was a story.


28 books, including several graphic novels, which I know rankle some readers as not being literature.
Of the 28 books: 25 were fiction for fun, 1 was philosophy, 1 was history (although not academic), and 2 were science(ish). I was on pace to complete close to a record number, but the fall semester is a killer time-wise for me. I have three books I'm in the middle of.