Once upon a time, back in the ’60s, there was this guy named Thomas Kuhn. Well, I’m sure he was around before and after the ’60s, but it was in the ’60s that he wrote a book called The Structure of Scientific Revolutions. The book claimed that science proceeds in three main stages: normal science, crisis, and revolution. It used the words ‘paradigm’, ‘anomaly’, and ‘incommensurability’ a lot. Sometimes philosophers like to claim that it is responsible for the modern use of the term ‘paradigm shift’. Which it probably is.
Cover of the second edition of Structure.
The basic idea of Structure is deceptively simple, and it has become so ingrained in contemporary philosophical thought that it hardly requires a detailed explication. But because it opposed the then-dominant view of how science progressed, it is worth remarking on the two contrasting pictures of scientific development. The canonical picture, before Kuhn, was of science ever bettering itself, grinding eternally forward toward some teleological end. At that end of science, every bit of knowledge of the natural world would be catalogued, connected systematically to every other part, both synchronously and diachronically. Most importantly, this knowledge would be built up from the same basic concepts and theories that have been the heritage of Western natural philosophy since before the time of Socrates.
This latter aspect of the view was the one against which Kuhn leveled his central arguments. He concluded, through arguments that will not be reconstructed here, that different theories purporting to describe the same phenomena can be, and often are, so conceptually divergent from one another that they cannot necessarily be said to be describing the same thing. In fact, the differences can be so vast that the theories can be said to be incommensurable, not even comparable in terms of their claims about the way the natural world works. Theories diverge from one another to become incommensurable when the earlier theory becomes so fraught with systematic errors, or anomalies, that a new theory is required to perform the calculations and supply the explanations that the old theory no longer can. The new theory contains concepts, equations, predictions, and experimental procedures not seen in the old theory. The incommensurability of theories can be seen in the differing paradigms, that is, world views or conceptual schemes, articulated by the different theories. This process of divergence is the shift described by the move from normal science through crisis and to revolution. The shift from the Newtonian (classical) to the Einsteinian (relativistic) theory of gravitation is generally hailed as the, ahem, paradigm case of scientific revolution.
Kuhn was a very smart man, and he got a lot of things right, as far as I am concerned. Part of the power of his theory is the vagueness built into the definitions of each stage of science. He has been attacked for this vagueness before, so much so that in 1969, seven years after Structure came out, he wrote a postscript aimed at clarifying some of the ambiguities that arose from the vagueness in the original definitions. Then followed a lot of quibbling from a lot of people over when exactly the number of anomalies grows so great that it causes a crisis and an ensuing revolution.
I think part of the reason that this quibbling continued for so long (indeed, well into the 21st century) is the fact that this picture fails to tell an adequately rich or interesting story about how working theories can evolve without generating a crisis and further revolution. In Kuhn’s story, theories are not terribly interesting things when they work: he calls this period of development normal science, and characterizes it by puzzle-solving activity, painting a picture of “normal” scientists as little more than highly-trained completers of crosswords.
This can't be all there is to "normal" science.
This picture isn’t fair to scientists, and it fails to capture a very important stage of the evolution of theories, namely the extension of a theory to a new domain of application. This stage of theoretical development has not been adequately appreciated by contemporary philosophers of science, which is a problem for them. Here’s why: one of the things we as a culture rely most heavily on science to do for us is to reveal the miraculous and previously unexpected relations between parts of the natural world and to figure out how to use these revelations to improve the quality of human life. The job of philosophers of science should be to tell the story of how science can do that for us, and to provide well-reasoned suggestions for how scientists might do it better or how non-scientists might understand the aims, ends, or implications of scientific endeavors. Extending a theory to a new domain of application changes the story of the theory and, often, the aims, ends, and implications available in light of the theory. So understanding what happens when a theory is extended to a new domain is an essential part of responsible philosophy of science.
Okay, this is getting very abstract and highfalutin’, and I am asking you to take a lot on faith alone right now. I think the point I’m trying to make will be easier to see in light of a specific example of what happens when a theory is extended to a new domain. Now that you know where I’m going with it, let’s delve into the example itself.
Not actually one-billionth of a meter thick in any direction.
“Nano” is one of the hottest pop-sci buzzwords around these days. The prefix technically means 0.000000001, or one-billionth, and is most often applied to meters. And Apple devices. In chemistry, nanoscale materials are materials in which at least one dimension measures between 1 and 100 nanometers. These materials are of particular interest because they display properties that differ in systematic and useful ways from materials made of the same substances at larger dimensions.
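If you like your definitions concrete, the nanoscale criterion above can be written out as a few lines of code. This is purely my own illustration (the function name and example dimensions are invented, not from any chemistry library):

```python
NANO = 1e-9  # the "nano" prefix: one-billionth of a meter

def is_nanoscale(dimensions_m):
    """True if at least one dimension, in meters, falls between 1 and 100 nm."""
    return any(1 * NANO <= d <= 100 * NANO for d in dimensions_m)

# A hypothetical thin film: centimeters wide but only 50 nm thick.
print(is_nanoscale([0.01, 0.01, 50 * NANO]))  # True: one dimension qualifies
print(is_nanoscale([0.01, 0.01, 0.001]))      # False: all dimensions are bulk-scale
```

The point the definition makes is the same one the code makes: a material counts as nanoscale even if two of its dimensions are macroscopic, so long as one of them sits in that 1–100 nm window.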
Nothing like a bowl of graphene flakes to start your day off right!
For instance, take graphene. This potentially revolutionary material is nothing more than carbon atoms arranged in a very thin, honeycomb-patterned sheet. Its discoverers won the 2010 Nobel Prize in Physics, and it has the potential to dramatically affect electronics, gas detection, and a host of other areas of scientific and industrial development. What makes graphene so interesting is the electronic properties that arise when carbon atoms are arranged hexagonally in a single sheet, as opposed to in many stacked sheets (which would be graphite, or pencil lead) or in crystalline (diamond) or amorphous (soot or charcoal) forms. I won’t bore you with the details here, but the basic gist is that when one puts most of the atoms in a material on the surface of the material, as one does in the case of graphene, weird things happen. Specifically, for graphene as compared with bulk-material graphite, electrical conductance changes: the average bandgap between the highest-occupied and lowest-unoccupied orbitals of the carbon atoms narrows, allowing for more ready creation of excitons and other fancy-sounding electrical-conductance stuff.
Wavy lines depicting the electronic structure of graphene. For actual explanation, go here.
So that’s graphene, and here’s the thing: no theoretical revolution was needed to make, describe, explain, or use it! *BUT* neither was the synthesizing, characterizing, and outlining of applications for the stuff mere puzzle-solving. It required deep and genuine creativity to manipulate existing synthesis methods in order to grow the very thin, very sensitive material (not to mention all the nanomaterials that preceded it), and to find the right parts of electronic and chemical bonding theories needed to explain what was going on in the thin honeycomb sheet. What’s more, the theories used to characterize the new materials evolve during this extension: new, systematic kinds of predictions and connections between phenomena appear at the nanoscale that would not have been predictable from the tenets of the original theory.
I can’t defend this claim in full yet, because that’s a big part of my dissertation, which is not, well, written yet. But let me try to explain a little by way of example. Coming back to graphene: the original bonding theory used to predict molecular geometries, bond energies, and other properties of interest to those studying materials relied on the background assumption that the bonds exist within a bulk material. As I mentioned earlier, the electronic properties of graphene, which are intimately connected to the bond structure of the material, change dramatically from the bulk scale to the nanoscale. So the theory modeling the bonding behavior in graphene, and the electronic properties that rely on that bonding behavior, changes with the change in the material’s scale.
I’m not sure what to call this transformation of theories yet, but revolution is not it.
One final note. One of the biggest problems with the “puzzle-solving” moniker is the implied deterministic resolution of the affair: with most puzzles, be they crossword, sudoku, jigsaw, or LSAT logic puzzles, there is exactly one solution, and the only challenge is narrowing down what it is. Applying this characterization to a majority of scientific practice minimizes the creative effort required, and the choice among alternative possible outcomes, in the application of theory to a new problem or set of problems.
Fin. For now.