Lawrence H. Summers, ex-President of Harvard University, is in hot water for suggesting that genetic differences may keep women from making it to the highest levels in science and mathematics. And well he should be. But not for sexism. Something much worse. [Summers has since gone on to help mismanage the U.S. economy.]
Our genes determine, entirely or partially, our height, sex, facial features, skin color, susceptibility to diseases, and life span. It's scientifically absurd on its face to believe they have nothing to do with our intelligences (artistic, spatial, mathematical, social, verbal, and so on) or behavior. Geneticists working with long-separated twins find amazing cases of twins who had not seen each other since childhood holding the same jobs, showing similar personality traits, even coming to reunions wearing similar clothes.
So it's entirely possible that males and females, or members of different ethnic groups, might tend to have different intellectual strengths. Let's assume, then, that Summers is right and women do tend to do research differently from men. The question everyone should be asking is why those differences should affect the perceived quality of their science. Why should male-dominated science be inherently better than female-dominated science?
The answer, of course, is it shouldn't, and therefore whatever is keeping women from the "higher echelons" of science has nothing to do with science. Summers' real crime is not making sexist remarks, but defending a pecking order that is, at bottom, mostly arbitrary.
As I look through the faculty directory of my four-year undergraduate campus, I find people with doctorates from Yale, Princeton, Columbia, Rutgers, Cornell, Purdue, Michigan, and UCLA. One possibility is that Yale, Princeton, Columbia, Rutgers, Cornell, Purdue, Michigan, and UCLA are turning out a lot of second-rate doctorates, in which case they are diploma mills and not great universities. The alternative is that there is no significant difference between faculty on my campus and those of Yale, Princeton, Columbia, Rutgers, Cornell, Purdue, Michigan, and UCLA. After all, a doctoral adviser is going to turn out a lot more doctoral graduates in a decade than there will be openings at the most affluent institutions; where will the rest of them go?
Actually there is one significant difference. We teach better. When I was a grad student at Columbia, the chemistry faculty was notorious for one Nobel laureate who routinely flunked half the students in his intro chemistry class. What exactly this Nobel laureate and Columbia thought they were accomplishing is not clear, but I can assure you that any of my chemistry colleagues turns out better-educated chemists. And our students get their lectures from professors, not teaching assistants.
I was also at a Nobel symposium at Gustavus Adolphus College in St. Peter, Minnesota a number of years ago. These events, held annually, are the only outside events authorized to use the name "Nobel." The keynote speaker at this event was a Nobel laureate in Medicine, awarded for his work in genetics. This man's ignorance of anything outside genetics just beggared belief. He had utter disdain for any science but genetics and actually said that no significant science had come from the space program. I sat there in open-mouthed incredulity that any scientist, let alone a Nobel laureate, would stand up in public and utter such remarks. So what does being in the uppermost echelons of science mean? It certainly doesn't mean being well-informed on anything outside your nanospecialty. It doesn't mean being a good human being, either. The annals of science are filled with top-ranked scientists who stole ideas, unfairly undermined rivals, philandered, and backed politically reprehensible causes.
Read any discussion of electronic publishing, and you'll find a positively manic fear, bordering on panic, that the traditional system of ranking scholarship will be weakened, abolished or bypassed. Author after author repeats the same theme: the traditional review system, though imperfect, usefully sorts out science by quality, with the "best" papers going into the most competitive journals and others finding a niche further down the line, ending in an "unrefereed vanity press" at the bottom. That phrase seems to be de rigueur in such discussions.
So in one sense, the hierarchy is very real. There are journals that reject 80% of submitted papers and others that accept most of the papers submitted. And of course the most competitive journals have higher prestige, which means that authors who can get into them don't bother with the less-selective journals. Since very inferior papers don't stand a ghost of a chance of getting into the most selective journals, there is a real quality stratification.
On the other hand, the hierarchy of king, prince, duke, marquis, earl, viscount, and baron was once also very real. And, since people who grew up in that system tended to be well educated and trained for their eventual post, it produced a real quality stratification as well. Especially capable people could be promoted to the next rank. Of course there was plenty of dumb luck, favoritism, intrigue, and back-stabbing (figurative and literal), but then everyone acknowledges that the peer-review system is flawed, too.
There was only one problem with the aristocratic hierarchy. Ultimately, it didn't mean anything. Firearms and the new tactics they engendered eliminated the distinctions between common soldiers and knights. As capitalism expanded, there soon came to be so many alternative routes to the once restricted upper echelons of society that the aristocratic ranks eventually came to have little meaning. It is quite possible that, thanks to the use of knighthood in modern monarchical societies to recognize achievement, the rank of knight has more real significance today than it ever did in the Middle Ages. In the United States, we did away with titles of nobility altogether and nobody really seems to miss them much. Place in the hierarchy turned out to be wholly arbitrary and not connected with any real qualities at all.
Of course, we have lots of hierarchies today, too, for example, the military. I have a fair amount of military experience, and while there are a lot of military people who confuse rank with reality, many more recognize that filling a slot doesn't necessarily confer superiority. It's a foolish lieutenant who doesn't listen carefully to an experienced sergeant. Most hierarchies in the military, business, and politics are heavily based on experience, and are significantly open-ended. Put in the time and a decent performance, and you have a good chance of moving up. Maybe not to general, CEO, or President, but you can move up.
Then we have other hierarchies as well where the quality distinctions are a lot more dubious. Hollywood lives on the myth of "the big break," where a previously unknown performer gets a key role or a chance to perform. This myth has enough reality behind it that we can be sure there are innumerable Keanu Reeveses and Shania Twains out there waiting tables who will never get that big break. It's axiomatic among film reviewers that box office returns and Oscars are only marginally connected with quality. For every Grammy winner there must be hundreds of equally good garage bands that will never get a recording deal or air time. And even when there is a demonstrable quality difference as measured by some standard, does it mean anything? If the entire NFL were to be wiped out in a disaster, and we filled the ranks with college players, would the resulting difference in quality of play actually mean anything? Is there enough meaningful difference in quality between starting gymnastics or ice skating at the age of eight versus fifteen to justify children wasting their childhoods on regimented sports? These are hierarchies where, even though there are measurable differences, they really don't mean much more than lining people up by height.
How much of the hierarchy in academia is real and how much is purely artificial? Well, there are real differences in quality. There is honest to goodness (or badness) junk. Unfortunately, the "top" journals don't do a terribly good job of weeding it out. The journal Nature devoted space to a paper defending the psychic claims of Uri Geller, and another on homeopathy. In my field, geology, one top journal once published a paper arguing that craters on Mars were carved by wind vortices. The same journal once published a paper on the alleged tectonic origin of certain meteor impact structures; the paper's methodology was essentially to apply the label "tectonic" liberally and then assert that the "tectonic" origin of the structures was proven. In the closing days of the debate over plate tectonics, another journal, dominated by anti-plate tectonics editors, published a steady stream of what must surely be among the worst rubbish ever put out by a major journal in any field.
Geology has a particularly unhappy history of snubbing advances: continental drift, meteor impact, the great Missoula floods of the Pacific Northwest. Nevertheless, although early papers on these subjects met with a lot of hostility, they differed in one important respect from many far superior papers today. They got published.
One of my research interests is pseudoscience, and I know better than most academics just how far downward the quality ladder extends. The Journal of the American Medical Association published an article a few years ago that asked whether the peer review process really accomplished anything. If even JAMA's editors can wonder, it's because they only get papers from qualified doctors. They'd find out in a hurry just how bad it can get if they were to be swamped by papers seriously arguing for every nutritional and folk fad around. The gulf between even the "unrefereed vanity press" and the likes of scientific creationism, Immanuel Velikovsky, and belief that the moon landings were a hoax is vast. We need a filter to keep egregious junk out of serious academic publications. The peer review system provides a filter, but it's at best a chain-link fence, not a silk screen. Most educators laugh at high schools that calculate grade point averages to four decimal places, but the system we have now of ranking journals by rejection rates, supplemented by citation indices, is little different. Both amount to trying to do brain surgery with a chain saw. There might be a difference between an article published in Science and one published in an in-house report or privately on line. Then again, there might not. Citation indices are as likely to measure the trendiness of a topic, or the failure of scholars to recognize really significant work, as they are to measure actual quality.
The occasional hoaxes of the review system show how meaningless most of it is. Some authors have re-submitted papers previously published by others. In some cases the motivation has been pure fraud; in a couple of others, it was a sociological study of the review process itself. One author re-typed and submitted a previously published novel to publishers, just to see what would happen. Four things generally come out of such efforts.
Why might women not get to the "highest levels" of science, assuming for the moment that term has any real meaning? As many analysts have noted, much of the explanation may be purely sociological. Women are more likely to take time off for child rearing, and even if their institution allows the tenure clock to be stopped, it takes time to get back up to speed in research. Women may be less likely to spend overtime at work, possibly less competitive in trying to beat rivals, more likely to do interdisciplinary or synthesis research that tends to be turned down by selective journals, and certainly more likely to be victims of bias or credit theft.
Overwhelmingly, though, the academic hierarchy measures little more than access to resources. The reason history, until recently, was written mostly by the upper class was simply and solely that the upper class was far more likely to have the leisure to write books and the connections to get them published. Of two equally qualified doctoral graduates, one of whom goes to work at a doctoral institution and the other at a baccalaureate institution, who will have the lighter teaching load, more laboratory space, access to graduate assistants, and bigger library? Who, after ten years, will have the longer publication list?
The academic hierarchy also rewards certain research emphases which, not coincidentally, are often emphases that are likely to get grant funds or utilize equipment available at large institutions. One of the most striking examples is the emphasis placed on analysis versus description. It's comparatively easy to publish a paper describing a new research technique, much more difficult to publish a paper that applies that technique and describes the results. The physicist Ernest Rutherford once said that "all science is either physics or stamp collecting," and academia has bought fully into the notion that description is intellectually trivial whereas structure, analysis, and process are the cutting edge. Meanwhile, as we confront our environmental problems, we discover that we don't know to within a factor of ten how many species live on the earth, much less have a complete catalog. Every scientist dealing with environmental problems runs headlong into the fact that basic descriptive information is lacking everywhere on earth. Research academics are like stamp collectors who decide the hobby is boring and figure they've done and seen it all because they have a stamp on every third page.
These won't ever happen, but they're fun to contemplate.
Created 21 February 2005, Last Update 30 August 2011
Not an official UW Green Bay site