This week, This American Life ran a Valentine’s Day show. It opened with a story about some Harvard physicists who set out to calculate the odds of finding a girlfriend in the Boston area. I have heard this story before, possibly on RadioLab,* although I think last time the setting was MIT. Anyway, to solve the problem they begin with the population of Boston, halve it to account for gender, and continue to winnow down to fractional portions of the original population by estimating proportions of the population who are within a given age range, have college degrees, are single, etc. The result is a pretty quick path from 600,000 potential petites amies to fewer than 1500. Ira Glass concludes that love is hard to find and goes on to tell more heartbreaking stories about romances lost and won. But I am not Ira (unfortunately), and my own inner monologue went somewhere rather different.
Here’s the path I traveled.
The physicists’ problem-solving method was compared to the Drake Equation for predicting the odds of finding intelligent extraterrestrial life, which is one of the more famous instances of a Fermi problem. Fermi problems, named for Enrico, are a classic example of a semi-empirical computational method. Another archetypal Fermi problem is “How many piano tuners are in Chicago?” To solve it, one isolates the different kinds of information needed: how many people live in Chicago? what percentage of people have pianos? how often do pianos need tuning? how many pianos can a piano tuner tune (if a woodchuck could chuck wood) in a week? From the answers to questions like these, one can set up an equation that spits out an estimate. Problems like this became particularly famous in the early days of Microsoft, where they were used as interview questions to gauge how well candidates could think on their feet.
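The piano-tuner estimate fits in a few lines of arithmetic. Here is a minimal sketch; every input below is an invented round number of my own, chosen only to show the shape of the method:

```python
# A Fermi estimate for "How many piano tuners are in Chicago?"
# All inputs are rough, made-up assumptions -- the method, not the numbers, is the point.

population = 9_000_000            # people in greater Chicago (assumed)
piano_owners = 0.02               # fraction of people with a piano (assumed)
tunings_per_piano_per_year = 1    # assumed
tunings_per_tuner_per_day = 4     # a tuning takes roughly two hours (assumed)
working_days_per_year = 250       # assumed

pianos = population * piano_owners
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))              # ~180 on these assumptions
```

Change any input and the estimate moves, but usually not by an order of magnitude: that robustness under sloppy inputs is what makes the method useful.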
I have always thought of Fermi problems as a kind of stoichiometry where the possible range of inputs is broader than just molar masses and weights of reactants. In stoichiometry, one aims to predict some information about the outcome of a reaction by taking known empirical data about various reactant chemicals and solving for a desired piece of information about the product—or vice versa, if one is trying to calculate how much of one reactant to add to another.
For example, suppose I want to make 10 grams of sodium sulfate (Na2SO4), a common detergent. I have in my lab sodium chloride (table salt, NaCl) and sulfuric acid (H2SO4). To figure out how much of each of these reactants I need to use, I begin by writing up the reaction equation, which gives molar ratios of reactants (left-hand side) and products (right-hand side):
2 NaCl + H2SO4 → Na2SO4 + 2 HCl
Then I figure out how many moles of Na2SO4 are in 10g. One mole of Na2SO4 weighs approximately the sum of the molar masses of its component atoms: 2*23g for sodium (Na), 32g for sulfur (S), and 4*16g for oxygen (O), for a total of 142g per mole. So 10g of Na2SO4 is about 0.07mol.
If I want 0.07mol product, I need to multiply the whole reaction equation by 0.07 to find out how many moles of reactants to add.
0.07*(2 NaCl + H2SO4 → Na2SO4 + 2 HCl) =
0.14 NaCl + 0.07 H2SO4 → 0.07 Na2SO4 + 0.14 HCl
So I need 0.14mol NaCl and 0.07mol H2SO4. The molar mass of NaCl is about 58g, so I need 0.14*58=8.12g of NaCl. Sulfuric acid is usually found as a solution, in water with a concentration given in moles/liter, abbreviated M. Let’s say I have a 0.5M solution of H2SO4, so I need 0.07mol*1L/0.5mol = 0.14L of my acid solution.
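The whole calculation above can be packed into a short script. The molar masses below are the usual approximate values (I use 58.5 g/mol for NaCl where the text rounds to 58, so the gram figure comes out slightly higher):

```python
# The stoichiometry above as code: how much NaCl and 0.5 M H2SO4
# do I need to make 10 g of Na2SO4?

MOLAR_MASS = {"Na": 23.0, "Cl": 35.5, "S": 32.0, "O": 16.0}  # g/mol, approximate

m_Na2SO4 = 2 * MOLAR_MASS["Na"] + MOLAR_MASS["S"] + 4 * MOLAR_MASS["O"]  # 142 g/mol
m_NaCl = MOLAR_MASS["Na"] + MOLAR_MASS["Cl"]                             # 58.5 g/mol

target_g = 10.0
mol_product = target_g / m_Na2SO4   # ~0.07 mol Na2SO4 wanted

# Reaction ratios: 2 NaCl + H2SO4 -> Na2SO4 + 2 HCl
mol_NaCl = 2 * mol_product          # ~0.14 mol
mol_H2SO4 = 1 * mol_product         # ~0.07 mol

grams_NaCl = mol_NaCl * m_NaCl      # ~8.2 g (the text's 8.12 g pre-rounds to 0.14 mol and 58 g/mol)
liters_acid = mol_H2SO4 / 0.5       # 0.5 M solution -> ~0.14 L

print(f"{grams_NaCl:.2f} g NaCl, {liters_acid:.3f} L of 0.5 M H2SO4")
```

Note the dimensional analysis doing the work in the last two lines: mol * g/mol gives grams, and mol / (mol/L) gives liters.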
Now all that’s left to do is douse the salt with the acid solution. Well, not quite. If one were to actually carry out this reaction with the aqueous sulfuric acid solution, it would result in the formation of hydrates of sodium sulfate, that is, Na2SO4*nH2O. The quantities above would be thrown off as a result, and one would need to go back through and rerun the numbers to determine how many moles are in 10g of a hydrate of sodium sulfate. This question cannot, in fact, be answered without more information about how many water molecules attach to each Na2SO4 unit; in practice, sodium sulfate decahydrate (Na2SO4*10H2O, sal mirabilis) is quite common.
Let’s leave the example there, because it has done the work it set out to do. The procedures by which one sets up and runs stoichiometric calculations are a combination of references to empirical data and basic arithmetic and algebraic inference; in short, they are Fermi problems.
Combining mathematical and empirical methodology certainly did not start with Fermi problems; historically, the “mixed” sciences like optics and astronomy combined empirical observation with arithmetic and geometric methods for, e.g., predicting the location of a planet from its last known position and the equations that describe an ellipse. Fermi problems are often identified by their use of dimensional analysis, or unit-matching, which is a trademark feature of stoichiometric problems as well.
Okay, so, that’s Fermi and stoichiometry. This next part is a little trickier and requires some more background information if you haven’t been following along with the development of my dissertation. I am interested in how scientific reasoning works in cases where the aim of a scientific practice—that is, the problem to be solved—is synthetic, rather than descriptive. The example of sodium sulfate above is exactly the kind of reasoning I am interested in, partly because of the Fermi-problem-like aspects involved in the setup of the problem, and partly because of what happens when the original stoichiometry fails: the calculations, and the patterns of inference they signify, require revision.
I have been calling this process of revision, exemplified in the above discussion of hydrates, iterative interpolation. The basic idea is that, in the process of trying to make something, one
- tries a bunch of times and aims to be less wrong after each attempt than after the previous attempt (hence, iterative), and
- with each attempt, and each set of corrections to the problem design (e.g. the suggested hydrate-accounting-for revisions to the stoichiometry above), one reduces the error between expected and actual outcomes. The way the error is reduced can mean the system vacillates from overabundance to scarcity of the feature being corrected for (hence, interpolation).
It has been helpful in discussing this with colleagues to describe this reasoning process as analogous to a damped sinusoid, with each iteration representing a peak or a trough in the wave and the decreasing amplitude of the waves signifying reducing error.
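A toy rendering of that analogy, with decay rate and frequency chosen arbitrarily for illustration:

```python
import math

# The damped-sinusoid picture of iterative interpolation: each attempt
# overshoots the target in the opposite direction from the last, by a
# smaller margin. Amplitude and decay rate are illustrative choices only.

amplitude, decay = 1.0, 0.5

def error(iteration):
    # cos(pi * i) alternates sign: vacillation between overabundance
    # and scarcity. The exponential factor shrinks the error each time:
    # every attempt is less wrong than the one before.
    return amplitude * math.exp(-decay * iteration) * math.cos(math.pi * iteration)

errors = [error(i) for i in range(6)]
print([round(e, 3) for e in errors])
```

Each entry of `errors` is a peak or trough of the wave; the signs alternate while the magnitudes shrink toward zero, which is the sense in which the process converges on the target by interpolation.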
Spelling out the details of this aspect of reasoning in the synthetic sciences is the project I am setting out on for the next chapter of my dissertation. If I can turn down NPR long enough to get a draft written. Stay tuned!
*NB: The latest episode of RadioLab, “Speed,” offers some pretty keen insight into the role of timescales in constraining how we interact with the world around us. Those of you who are intrigued by Bob Batterman’s work on the tyranny of scales should check it out! Then go read his article about the subject in his new edited volume, The Oxford Handbook of Philosophy of Physics.