Awesome Science Terrible for Humans

Philosophers of science (and scientists, and science journalists, teachers, etc.) like vivid examples of sciencey things happening that illustrate various scientific or philosophical concepts. Examples that make you gasp or chuckle or compulsively send the link to everyone in your contacts list. Most philosophers have a list of such examples that they carry around in their mental back-pocket and pull out to explain their research to non-philosophers, impress people on airplanes, and win bar bets. 

So I have my own list. Stained-glass-is-nanoscience is way up there, and so is how nanomaterials can be used to cure cancer. There is a special subset of back-pocket examples that I think of as “Awesome Science Terrible For Humans.” Top biller here is probably nuclear weapons testing. But there are a lot of others; they are the stories that make you go “NEAT!” and then feel like a bad person for thinking something so destructive could be just utterly fascinating. “ASTFH” cases are useful because they are gripping in both talking and writing settings. When something is terrible, it is easier to see why one might want to do something about it; it’s the difference between knowing that malnutrition is a global crisis and walking by someone who is starving and asking you for help.

 

ASTFH cases cause something like the cognitive equivalent of rubber-necking at the site of a car crash, and they’re not for everyone or for every day. But I got to write about my favorite ASTFH example recently, and it inspired me to share my top four. Pace yourselves:

4. Bewitching Rye: Ergotism is a malady caused by ingestion of the ergot fungus that grows on grains such as rye and barley. It has the craziest list of associated symptoms: diarrhea, convulsions, pins and needles, psychosis, and, oh yeah, gangrene. The fungus contains the vasoconstrictor ergoline, which just wreaks effing havoc on the body, as it turns out. Ergoline is a kind of building block for a hugely diverse array of chemical compounds that some people put in other people’s bodies (or their own), including LSD and medicine for migraines. What’s even cooler than ergoline’s action as a building-block compound is its function in illustrating one way science and history can interact: over the course of 20th-century research on ergoline and on Western political history, many historians have come to suspect that ergot fungi were responsible for some of the more mass-psychosis-y behaviors in both the Salem witch trials and the Anabaptist sieges in Muenster during the latter days of the Protestant Reformation. So Monty Python should have been asking “Did she feed you beer that tasted of mushrooms?” rather than “Does she weigh more than a duck?”

3. How radium gets in the body: After Marie Curie discovered radium, people were excited to put it everywhere. In their watch dials, in their water. Radium spas became the new “it” getaway vacation. Radium paint became an invaluable asset in the trenches of World War I. And the women who worked in factories that painted watch dials and other instruments of war with radium would dip their brushes on their tongues to keep the bristles moistened and sticking together. Everyone knows the macroscopic story: radium made their teeth and hair fall out, made them lose weight (hence radium spas), made their bones so brittle they couldn’t walk. But the reason radium was so effective at destroying people from the inside out isn’t just because of radioactive decay. It is also because of the periodic table: radium is a Group 2 metal, an alkaline earth. Which means it has the same outer electronic structure as all the other alkaline earth metals, including calcium. This electronic similarity is a disguise of sorts, allowing radium to enter the body by the same channels that calcium usually makes its way in. It’s kind of like when the old cartoon skunk, Pepé Le Pew, is tricked into thinking a pretty cat is a skunk like him because the cat has accidentally walked through paint. And this is a really neat (terrible) stunt that radium can pull off. Then, of course, it gets into the calcium channels and “takes off its mask.” It starts radioactively decaying and basically bull-in-a-china-shop-ing any nearby cellular structures, leading to cell death and, if enough cells die in the right (wrong) places, human death. Bonus: If we told only bottom-up stories about the structures of atoms and molecules, we would start with the difference between radium’s nucleus and calcium’s nucleus, not the similarity in their outer electronic structures, and we wouldn’t get a good explanation of why radium so easily travels along calcium uptake channels. So, yay anti-reductionism.

2. Thalidomide: Thalidomide is a chiral molecule that gave a generation of children birth defects. It was first released on the market as a wonder drug, a sedative that also alleviated stomach issues. So it was given to pregnant mothers to ease morning sickness through the late 1950s and early 1960s. Now, thalidomide has two enantiomers, two ways of arranging itself that are not superimposable on one another–like how we have right and left hands. It turns out that one enantiomer is responsible for all the palliative effects that users experienced. The other enantiomer prevented proper development of fetuses and resulted in really horrific disfigurements of the limbs of children whose mothers took the drug during the first three months of pregnancy. This case is fun because not only is it a good illustration of anti-reductionism (lists of atomic makeup are not sufficient to describe the structure and properties of a chemical), but the story of how thalidomide got approved for, and then pulled from, commercial markets is also a gripping starting point for conversations about bioethics and science in society.

1. De Havilland Comet: Okay, check this out. The first commercial jetliners had square windows. The sharp corners of those windows concentrated stress, and the repeated cycles of pressure and temperature as the jets took to the air turned that stress into massive metal fatigue. The result? The metal body panels of the planes sheared into pieces, looking like a giant tiger had run its claws down the length of the plane, and causing the planes to depressurize, lose their aerodynamic design, crash, and, of course, kill lots of people. The continuum models of metals that were being used to design the planes failed to account for corner-based metal fatigue, which is a phenomenon explicable only by structural (i.e. not continuum) models.

Well, it turns out most of my examples are pretty chemical. No surprise there. But I am so, so curious what other episodes of Awesome Science Terrible for Humans show up in other areas of science. Please share in the comments!


Explanations, Natural Laws and Latin Masses

This isn’t going to make the cut for an academic paper, but I like it and couldn’t resist sharing:

In many explanations, especially in chemistry, natural laws play a role akin to the role played by the resurrection of Jesus in many American Easter rituals. The resurrection (be it truth or convenient fiction) inspires the whole to-do and is essential to the celebration, but its presence is only vaguely felt among the chocolate bunnies and brightly-colored plastic eggs. And while the appearance of matzah in sandwich shops can be explained by a quick cross-reference between the Gospels and the book of Exodus, it takes a bit of creative storytelling to make the connection between the crucifixion and baskets full of artificial plastic grass.

Most accounts of explanation—even the mechanistic ones that purport not to emphasize the role of natural laws as explanantia—have overlooked this distinction between what is essential to an explanation and what does the work of explaining, leading to theories of explanation that look like Latin masses: the laws are center-stage, but what is being said is far enough removed from the day-to-day activities of science that it sounds like a different language altogether.


Size Matters: Nanoethics

Nanoethics is a thing. A new thing, and a small (heh) thing, but a thing nonetheless. So sometimes when I tell people I work on philosophical issues in nanoscience, they assume I mean nanoethics. I don’t. But what I do has some bearing on what nanoethics researchers write about, and I think it would be fun to export what I do to nanoethics someday.

If that day were today, I would probably write a paper that began by reviewing what I have read in nanoethics. What I have read is a lot of very real, very interesting concern about public health and safety, and a lot of analogies to times in the past when scientists failed to consider, or actively disregarded, the health and safety of some community as they were doing research, and how bad that was. Cf. nuclear bomb testing, Tuskegee, radium spas and watch dials, genetically modified foods.

For a lot of reason-y reasons, people who write about nanoethics latch onto the analogy with genetically modified foods. The National Nanotechnology Initiative, a sort of nano-nanny group (okay, nano-nanny isn’t actually the appropriate term, but man is it fun to say. NNI is more of a public-outreach/mediator group) interested in the promotion of nanotechnology, formed in part in response to the public-health outcry about GMOs. Ronald Sandler and William Kay explain in their article, using the GMO analogy, that NNI needs to do more to both monitor public health and safety and communicate about health and safety to the public. They actually focus on differences between GMOs and nanotech, arguing that nano, as a non-food-based and non-biological technology, will be less culturally invasive, less omnipresent, and less potentially religiously offensive than GMOs—and that the public needs better access to information about these differences, so that nano can be seen as a social good.

Sandler and Kay are right. Nano is already getting a bad rap for appearing in cosmetics and food production (despite Sandler and Kay’s prediction that nano would stay out of food), and I still remember one of the first radio stories I ever heard about nano trying to answer the question of whether carbon nanotubes will give you asbestos-like lung diseases.

Right now, these individual public-health scare cases are just that—individual cases. The nanomaterials in Mary Kay’s mineral makeup line are not carbon nanotubes, nor are they the clay nanocomposites that some food and beverage manufacturers have considered mixing into their plastics. But the stories all say there is something about the nanoparticles being small that is bad for you, and there is an intuitive sense that the risk of damage is related to the fact that the materials are often smaller than our red blood cells.

The current focus of nanoethics is on health and safety, communication, industry regulation, and the like. These are important issues and should play central roles in scientific ethics conversations, clearly. But they are not concerns that are specific to nano—health and safety, communication and regulation are all important for any new technology. If you’ve been following this blog, you know that what makes nano unique as a technology is the very same smallness that seems so potentially threatening in each of the nano-safety scare cases. Unlike GMOs and radiation, which are scary because of a particular mechanism of human-body degradation that we associate with exposure, nano is not a science with a dominant mechanism. It is a science with a dominant scale. And where that scale can be seen as threatening to public or individual health in the scare-articles linked above, that scale also has immense therapeutic potential as a cancer-killer, drug-deliverer, and preventative-care imaging technology.

The take-away here is that understanding and communicating how size matters to the health and safety of nanotechnology, and making informed decisions on the basis of that understanding, is going to be what distinguishes conversations in nanoethics from conversations in other areas of scientific ethics. These conversations haven’t happened yet. But if you’ve been following this blog, you know that what I’m trying to do with my life these days is figure out how to tell the right stories to explain how size matters in nanomaterials.

So maybe I am doing nanoethics after all.

(On the other hand, if you want to read an insightful criticism of nanoethics as a discipline, go here.)


Architecture, Nanoscience and Epistemology

Writing a dissertation abstract is a bizarre exercise. I feel like I am writing a news feature story about my own research. Luckily, I like writing feature stories, and I like my research. But sometimes I get carried away in feature mode and end up writing about my favorite building for a while before I get around to telling my audience that I am actually interested in philosophical issues in nanoscience. And that, friends, is where blog posts come from:

One of the ways that we learn about ourselves and others from a young age is by comparing favorite things. From favorite colors and shapes to favorite movies and songs, sharing favorites is one of the ways that people get to know each other. It’s a form of small talk that can give rise to friendly, social-identification-improving teasing. And it is open-ended enough that games of “What’s your favorite [x]?” can go on for hours or months without exhaustion. 

When I started college, I learned that architecture was a thing you could go to college to study. I’d never really thought about where buildings came from before. But my undergraduate university has one of the best architecture schools in the country, and so I got to play what’s-your-favorite with a bunch of aspiring architects. I learned from them that one of the categories of favorite things people can have is buildings. Mine is La Sagrada Familia.

La Sagrada Familia cathedral is Antoni Gaudi’s magnum opus. A naturalistic reinterpretation of Gothic architecture, the cathedral teems with arches and flying buttresses, gargoyles and towers. Like many classical Gothic masterpieces, La Sagrada Familia features a rose window: a circular stained-glass window divided by stone into wedge-shaped panes, like the petals of a flower. But unlike the intricate filigree work that fills rose windows in classical Gothic architecture, the panes of Sagrada’s rose window are irregular, geometric blocks of color, inspired by the cubist movement that took root in Spain at the same time as the cathedral.

The rose window of La Sagrada Familia

La Sagrada’s rose window is dominated by blues and greens in a variety of shades and hues. Cubist painters would have tried to depict the window with pigment-based paints, turning especially to ultramarine pigments derived from ground-up lapis lazuli and to copper(II) acetoarsenite, or Paris green. They might have tried to overlay these paints with reflective glosses or undercoat their canvases with special primers to capture the otherworldly glow of the colors in the window’s stained glass. But their paintings would never have done the rose window justice, because the colors in the stained glass do not come from pigments or dyes. Where pigments and dyes produce color by selectively absorbing light at certain frequencies, the colors in La Sagrada’s window are produced by localized surface plasmon resonance, that is, the collective, resonant oscillation of the conduction electrons at the surface of a nanomaterial. Localized surface plasmon resonance (LSPR) occurs when a suspension of nanoparticles, such as a glass doped with finely divided metals, is stimulated by electrons or photons of a particular frequency.

LSPR is a scale-dependent material behavior, one that can only occur in nanoscale materials. While LSPR has played a role in color technology throughout the ages, it has only come to be understood as a physical phenomenon in the past three decades. Today, scientists are particularly interested in developing nanomaterials with finely-tuned LSPR responses—that is, intense resonances in response to a narrow range of stimulus frequencies. The recent surge of scientific interest in scale-dependent material behaviors like LSPR raises a variety of philosophical questions, from ontological puzzles about color classification and material identity to ethical concerns about the safety of nanomaterials and epistemological and methodological worries about how to reason about a domain of inquiry that is specified by a length scale rather than a set of common material properties or biological functions.

My dissertation answers three interrelated questions of the latter sort. My main goal is to answer the question How are theories and models used to reason about nanomaterials? To answer this question, I demonstrate that there are differences between the ways in which scientists use theories and models to reason about nanomaterials and the uses of theories and models that philosophers of science have identified up to this point. I highlight these differences by answering two further questions: 1) How do scientists respond to new modeling challenges that arise as a consequence of scale-dependent material behaviors? and 2) How do theories and models help scientists synthesize nanomaterials?

These two questions cannot be answered independently of one another. I argue that in order to gain synthetic control over nanomaterials, scientists need to be able to model the behavior of nanomaterials at multiple scales (microscopic, mesoscopic, and macroscopic), because different, interdependent behaviors of interest occur at different scales. I demonstrate that in designing nanomaterials, scientists use models and theories to achieve a balance of desirable features that trades off between behaviors at different scales.

I argue that this balance is best achieved by a particular kind of non-reductive account of relations between models at different scales. I call this the model interaction account of theory and model use. I demonstrate that novel, scale-dependent material behaviors like LSPR are modeled neither by top-down nor by bottom-up methods alone. Instead, the two approaches are combined and adapted in order to, e.g., describe LSPR and synthesize materials that exhibit LSPR behaviors. Drawing on Wilson’s “theory facade” account of concepts and theory structure in the physical sciences, I argue that concepts central to nanoscience, such as surface, change meaning between the macroscale and the nanoscale, and that model interaction is required not only to address synthetic challenges but also to develop conceptual understanding of surface and related scale-dependent concepts.


“Tuning” in your science

I wish I knew how to code LaTeX tables into WordPress interfaces.

 

Tuning is the use of theories and models to produce a desired effect, such as a substance with specified properties or a bridge that will bear a certain amount of weight and stand up to a certain amount of wind. It is a design-oriented, engineering methodology, and it plays a central role in scientific research. Philosophers of science have largely overlooked tuning, and I have written elsewhere about why this happens and why it is a problem.

For now, here is a fun piece of internet-scavenging: I have diligently researched* some Facts about the use of the word “tuning” in scientific articles published in 2013.

  • “Tuning” shows up in over 15,000 articles in each of chemistry, nano, materials sciences, physics, and biology. It shows up over 10,000 times in neuroscience and medicine.
  • “Tuning” appears almost as often as “explanation” in biology, neuroscience, chemistry, and physics.
  • It shows up more often than “explanation” in nano and materials science.
  • It does not show up as often in climate science, geology, psychiatry, psychology, or cognitive science.
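
If WordPress did take raw LaTeX, the table I wish I could post might look something like this: a minimal tabular sketch, using only the approximate counts from the list above.

```latex
\begin{tabular}{lll}
\hline
Fields & ``tuning'' hits (2013) & vs. ``explanation'' \\
\hline
Chemistry, physics, biology & $>$15{,}000 & almost as often \\
Nano, materials science & $>$15{,}000 & more often \\
Neuroscience, medicine & $>$10{,}000 & almost as often \\
Climate, geology, psychiatry, psychology, cognitive science & fewer & (not compared) \\
\hline
\end{tabular}
```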

My current paper is about theory and model use in tuning, comparing and contrasting tuning uses with explanatory, predictive or descriptive uses of theories and models. It is sometimes worth making the point that tuning is kind of a big deal and it is, therefore, extra weird that philosophers haven’t taken it up as whole-heartedly as explanation.

Due diligence disclaimer 1: Of course there are philosophers who have thought about tuning in some form or another. e.g. Mark Wilson, Ian Hacking, the design problems and philosophy of engineering crowds. Woodward and the mechanisms crew have language to talk about tuning in terms of lever-wiggling, but it doesn’t often become the subject of their discussions.

Due diligence disclaimer 2: In some of these searches, “tuning” might be referring to acoustics and not necessarily being used metaphorically to talk about design-oriented methodology. Still, it’s hard to deny that “tuning” is a major use of theories and models when you look at lists of article titles like this:

Biology: “Tuning the Dials of Synthetic Biology”
Neuroscience: “Correlations in Ion Channel Expression Emerge from Homeostatic Tuning Rules”
Physics: “Electrical Tuning of Valley Magnetic Moment through Symmetry Control in Bilayer MoS2”
Materials Science: “Tuning Molecular Adhesion via Material Anisotropy”
Chemistry: “Tuning the Surface Chemistry of Pd by Atomic C and H: A Microscopic Picture”
Climate Science: “Climate Models Sensitive to Tuning of Cloud Parameters”
Nano: “Tuning the Electrical and Optical Properties of Graphene by Ozone Treatment for Patterning Monolithic Transparent Electrodes”

*i.e., compare-and-contrast Google Scholar searches


Philosophy of science as a tool for change

Welcome to the Wednesday link-dump, brought to you by the question that kept coming up over and over this past week in Pittsburgh and New York: How can philosophy make a difference?

On Saturday, some of the editors at The American Reader talked with me about the inheritance of this question from a continental-philosophy perspective. The great experiment and great failure of Marx, and the idea that ideas can change the world, etc. I rehearsed for them some of the areas of philosophy of science that are known for finding some social engagement–bioethics, science and values–and we discussed the relationship between Science and Technology Studies and philosophy of science. We took the fact that they knew Bruno Latour’s name as evidence of the relatively wider reach of his ideas than some of the ideas I suggested (the no-miracles argument, confirmation theory), and spent a while ruminating on the tradeoffs between rigor and accessibility. It has always struck me as a sorrow of our field that the demands for rigor can make the ideas somewhat less accessible to broad audiences, and I admire people like J.D. Trout who can transcend that tension.

Then what should I find in my inbox Sunday morning but an article about the Public Philosophy Journal, a new collaborative project by groups at Penn State and Michigan State. While the project presents itself as more about creating a journal with collaborative, social-media methods than a journal aimed at publicly-minded content, both projects seem to be on the horizon. So I signed up for the listserv, and maybe you should too. It will be exciting to see how it develops!

Speaking of listservs, I manage the listserv for JCSEPHS (the Joint Caucus for Socially Engaged Philosophy and History of Science), a new group of philosophers and historians of science who are interested in doing socially engaged research, which I believe includes figuring out exactly what “socially engaged” means. The Caucus met at November’s Philosophy of Science Association/History of Science Society conference in San Diego, and while I couldn’t attend the meeting, I heard they are interested in involving philosophers and historians of science in policy decisions, among other things. Good idea.

The listserv is so new, in fact, that there haven’t been any discussions on it yet–please pardon the administrative timeline! There should be discussion starting on the listserv in the next month or two; if you want to sign up in time to catch all the action, click here.

Bringing things back to nano (because come on, I made it five whole paragraphs without referring to a length scale, you can’t honestly expect six), I had a wonderful conversation with the tech journalist Chris Baraniuk on Thursday about the prospects for nanoscience/nanotechnology and the potential role for philosophy in those prospects. AKA My Favorite Thing To Talk To Strangers About. You all know the drill by now if you’ve been following along: nano’s theories are underdeveloped, and philosophers of science have the needed expertise in analysis of concepts and methodology in order to help solve conceptual, modeling, and other theoretical conundra. We also talked about his recent Atlantic article about phone phreaking and the methodology of hackers/phreakers as an instance of multiple realizability. Which was super fun and informative.

Sunday night I saw a very different use of philosophy as a tool for change. Ross Perlin and the Ways of Being Together project put on a real, live language-games seminar-party-improv-night-event that would have made Wittgenstein proud. Yes, obviously it was in Brooklyn. The premise of WOBT is to put a bunch of people in a room and make them interact in somewhat novel and/or uncomfortable ways. In other words, they are trying to catalyze new kinds of reactions among and between attendees. (What? This is a philosophy of chemistry blog.) Ross’s idea was to bring to the foreground a variety of elements of our linguistic background. So we played language games and I found myself cursing a friend, then confessing to a stranger, then begging a very uncomfortable girl to give me her glasses. And we made up words, and guessed at the meaning of speech we didn’t understand, and slowly had our grammar stripped away as we talked about death. The whole experience of live-action philosophy of language seems like a great teaching tool, and a way of using philosophy (rather than mentioning it) in order to force more thoughtful and effective communication between strangers, coworkers, or classmates. It’d be more interesting than trust falls, anyway.

Which brings me around to a job title I would like to have someday: Philosophical Consultant. Between Ross’s project and JCSEPHS and my continued work with the Millstone Lab on epistemic kinks in nanosynthesis, I think it’s beyond time to recognize that one of the better ways philosophy of science might make a difference is by simply making itself available to science as a problem-solving strategy, in the form of people whose job it is to recognize improper inferences, problematic inconsistencies, or other methodological and epistemic gaps in the design of theories and experiments. Yeah, it’s a pipe dream and no, I wouldn’t give up teaching to do it, but wouldn’t both scientists and philosophers benefit greatly from a formalized interaction of that sort? After all, Scientific Advisor to The Stars is a much less plausible job title, and it’s already out there.


On Love, Fermi Problems, and Scientific Reasoning

This week, This American Life ran a Valentine’s Day show. It opened with a story about some Harvard physicists who set out to calculate the odds of finding a girlfriend in the Boston area. I have heard this story before, possibly on RadioLab,* although I think last time the setting was MIT. Anyway, to solve the problem they begin with the population of Boston, halve it to account for gender, and continue to winnow down to fractional portions of the original population by estimating proportions of the population who are within a given age range, have college degrees, are single, etc. The result is a pretty quick path from 600,000 potential petites amies to fewer than 1500. Ira Glass concludes that love is hard to find and goes on to tell more heartbreaking stories about romances lost and won. But I am not Ira (unfortunately), and my own inner monologue went somewhere rather different.

Here’s the path I traveled.

The physicists’ problem-solving method was compared to the Drake Equation for predicting the odds of finding intelligent extraterrestrial life, which is one of the more famous instances of a Fermi problem. Fermi problems, named for Enrico, are a classic example of a semi-empirical computational method. Another archetypical Fermi problem is “How many piano tuners are in Chicago?” To solve it, one isolates different kinds of necessary information: how many people live in Chicago? what percentage of people have pianos? how often do pianos need tuning? how many pianos can a piano tuner tune (if a woodchuck could chuck wood) in a week? Based on the answers to questions like these, it is possible to set up an equation that spits out an estimate. Problems like this became particularly famous during the early days of Microsoft, which used them as interview questions to evaluate how well interviewees could think on their feet.
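
Here’s a minimal sketch of the piano-tuner problem in Python. Every input is an invented, order-of-magnitude assumption, which is the whole point of a Fermi problem:

```python
# Fermi estimate: how many piano tuners are in Chicago?
# Every number below is an assumed, order-of-magnitude input.

population = 3_000_000             # people in Chicago (assumed)
people_per_household = 2.5         # assumed
piano_fraction = 1 / 20            # households with a piano (assumed)
tunings_per_piano_per_year = 1     # assumed
tunings_per_tuner_per_day = 4      # assumed
working_days_per_year = 250        # assumed

pianos = (population / people_per_household) * piano_fraction
tunings_demanded = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_demanded / tuner_capacity))  # ~60 tuners
```

Sixty-ish. Give or take an order of magnitude, which is all a Fermi estimate ever promises.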

I have always thought of Fermi problems as a kind of stoichiometry where the possible range of inputs is broader than just molar masses and weights of reactants. In stoichiometry, one aims to predict some information about the outcome of a reaction by taking known empirical data about various reactant chemicals and solving for a desired piece of information about the product—or vice versa, if one is trying to calculate how much of one reactant to add to another.

For example, suppose I want to make 10 grams of sodium sulfate (Na2SO4), a common ingredient in detergents. I have in my lab sodium chloride (table salt, NaCl) and sulfuric acid (H2SO4). To figure out how much of each of these reactants I need to use, I begin by writing up the reaction equation, which gives molar ratios of reactants (left-hand side) and products (right-hand side):

2 NaCl + H2SO4 → Na2SO4 + 2 HCl

Then I figure out how many moles of Na2SO4 are in 10g. The molar mass of Na2SO4 is, approximately, the sum of the molar masses of its component atoms: 2*23g for sodium (Na), 32g for sulfur (S), and 4*16g for oxygen (O), which sum to 142g per mole. So 10g of Na2SO4 is about 0.07mol.

If I want 0.07mol product, I need to multiply the whole reaction equation by 0.07 to find out how many moles of reactants to add.

0.07*(2 NaCl + H2SO4 → Na2SO4 + 2 HCl) =
0.14 NaCl + 0.07 H2SO4 → 0.07 Na2SO4 + 0.14 HCl

So I need 0.14mol NaCl and 0.07mol H2SO4. The molar mass of NaCl is about 58g, so I need 0.14*58=8.12g of NaCl. Sulfuric acid is usually found as a solution, in water with a concentration given in moles/liter, abbreviated M. Let’s say I have a 0.5M solution of H2SO4, so I need 0.07mol*1L/0.5mol = 0.14L of my acid solution.
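
Since this is exactly the kind of calculation a computer is happy to check, here’s the same arithmetic as a short Python sketch, with molar masses rounded to whole grams per mole as in the hand calculation above:

```python
# Stoichiometry for 2 NaCl + H2SO4 -> Na2SO4 + 2 HCl, targeting 10 g of product.
# Molar masses rounded to whole g/mol, as in the text.

M = {"Na": 23, "Cl": 35, "S": 32, "O": 16}

molar_mass_product = 2 * M["Na"] + M["S"] + 4 * M["O"]  # Na2SO4: 142 g/mol
molar_mass_NaCl = M["Na"] + M["Cl"]                     # 58 g/mol

mol_product = 10 / molar_mass_product  # ~0.07 mol

mol_NaCl = 2 * mol_product    # the reaction equation gives 2 mol NaCl per mol product
mol_H2SO4 = 1 * mol_product   # and 1 mol acid per mol product

grams_NaCl = mol_NaCl * molar_mass_NaCl  # a bit over 8 g
liters_acid = mol_H2SO4 / 0.5            # 0.5 M solution, so ~0.14 L

print(f"{grams_NaCl:.1f} g NaCl + {liters_acid:.2f} L of 0.5 M H2SO4")
```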

Now all that’s left to do is douse the salt with the acid solution. Well, not quite. If one were to actually carry out this reaction with the aqueous sulfuric acid solution, it would result in the formation of hydrates of sodium sulfate, that is, Na2SO4*nH2O. The quantities above would be thrown off as a result, and one would need to go back through and rerun the numbers to determine how many moles are in 10g of a hydrate of sodium sulfate. This question cannot, in fact, be answered without more information about how many water molecules attach to each Na2SO4 unit—in practice, sodium sulfate decahydrate (Na2SO4*10H2O, sal mirabilis) is quite common.

Let’s leave the example there, because it has done the work it set out to do. The procedures by which one sets up and runs stoichiometric calculations are a combination of references to empirical data and basic arithmetic and algebraic inference; in short, they are Fermi problems.

Combining mathematical and empirical methodology definitely did not start with Fermi problems; historically, the “mixed” sciences like optics and astronomy combined empirical observation with arithmetic and geometric methods for, e.g., predicting the location of a planet based on its last known position and the equations that describe an ellipse. Fermi problems are often identified by their use of dimensional analysis, or unit-matching, which is a trademark feature of stoichiometric problems as well.

Okay, so, that’s Fermi and stoichiometry. This next part is a little trickier and requires some more background information if you haven’t been following along with the development of my dissertation. I am interested in how scientific reasoning works in cases where the aim of a scientific practice—that is, the problem to be solved—is synthetic, rather than descriptive. The example of sodium sulfate above is exactly the kind of reasoning I am interested in, partly because of the Fermi-problem-like aspects involved in the setup of the problem, and partly because of what happens when the original stoichiometry fails: The calculations, and the patterns of inference they signify, require revision.

I have been calling this process of revision, exemplified in the above discussion of hydrates, iterative interpolation. The basic idea is that, in the process of trying to make something, one

  1. tries a bunch of times and aims to be less wrong after each attempt than after the previous attempt (hence, iterative), and
  2. with each attempt, and each set of corrections to the problem design (e.g. the suggested hydrate-accounting-for revisions to the stoichiometry above), one reduces the error between expected and actual outcomes. The way the error is reduced can mean the system vacillates from overabundance to scarcity of the feature being corrected for (hence, interpolation).

It has been helpful in discussing this with colleagues to describe this reasoning process as analogous to a damped sinusoid, with each iteration representing a peak or a trough in the wave and the decreasing amplitude of the waves signifying reducing error.
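
For the visually inclined, here’s a toy numerical sketch of that picture; the target, the first guess, and the damping factor are all invented for illustration:

```python
# Iterative interpolation as a damped oscillation: each correction overshoots
# the target in the opposite direction, by less than the previous attempt.

target = 10.0     # the outcome we want (e.g., grams of product)
estimate = 16.0   # first attempt, badly off
damping = 0.6     # factor by which the error shrinks at each overcorrection

for attempt in range(1, 8):
    error = estimate - target
    print(f"attempt {attempt}: estimate = {estimate:5.2f}, error = {error:+.2f}")
    # Step past the target by damping * error: the sign of the error flips
    # (overabundance to scarcity and back) while its magnitude decays.
    estimate -= (1 + damping) * error
```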

Spelling out the details of this aspect of reasoning in the synthetic sciences is the project I am setting out on for the next chapter of my dissertation. If I can turn down NPR long enough to get a draft written. Stay tuned!

*NB: The latest episode of RadioLab, “Speed,” offers some pretty keen insight into the role of timescales in constraining how we interact with the world around us. Those of you who are intrigued by Bob Batterman’s work on the tyranny of scales should check it out! Then go read his article about the subject in his new edited volume, The Oxford Handbook of Philosophy of Physics.


`Surface’ Tension

After an extended vacation from writing on the internet for public consumption, and from writing period (back surgery’s a bitch, friends), I thought it would be nice to put some things I’ve been thinking out for you all to read. So here is an excerpt from the dissertation chapter I am currently working on. It’s the introduction to a chapter about the role of surfaces in chemistry and nanoscience:

 

Surfaces are everywhere. They are the only parts of bodies with which we ever actually interact. They are the boundaries between objects—and they are the access points to objects. Surfaces are the subject, and the limit, of tactile perception. The concept of surface is so important to our understanding of the world around us that the question “What would this table/house/book/hand look like, or how would it behave, if it didn’t have a surface?” is nearly incoherent: a table cannot be a table without a surface; nor can a house or a book or a hand. Surfaces are essential parts of the objects that populate our world.

Surfaces are thus, unsurprisingly, essential parts of the objects of scientific study. In chemistry, surfaces are the sites of reactions. Because surfaces are, by definition, the boundaries between different chemical systems or environments, they are the places where chemical reactions happen. They are where bonds are broken and formed.

To understand the impact of surfaces on the way chemical change occurs, consider the following middle-school chemistry experiment: I have two glasses of water. Into glass A I drop a large crystal of table salt (sodium chloride, NaCl) of the same size and shape as a grape. Into glass B I drop a heaping tablespoon of granulated salt, of the same mass as the grape-sized crystal. If the glasses have the same amount of water and are kept at the same temperature and pressure, the salt in glass B will dissolve significantly faster than the salt in glass A.

The reason for this difference is that while the mass of the salt is the same, there is more surface area on the smaller crystals, which means there is a larger area where negatively-charged oxygen from the water can interact with positively-charged sodium from the salt, and where positively-charged hydrogen from the water can interact with negatively-charged chloride from the salt. With more reaction sites, the reaction proceeds more quickly.
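
A quick back-of-envelope check, modeling the crystals as cubes (all dimensions invented for illustration):

```python
# Same mass of salt, two subdivisions: one big cube vs. many small grains.
# At fixed total volume, surface area scales inversely with edge length.

big_edge = 2.0     # cm, one grape-sized crystal (assumed)
small_edge = 0.05  # cm, a 0.5 mm granule (assumed)

n_grains = (big_edge / small_edge) ** 3      # equal total volume and mass
area_big = 6 * big_edge ** 2                 # 24 cm^2
area_small = n_grains * 6 * small_edge ** 2  # 960 cm^2

print(f"{n_grains:.0f} grains expose {area_small / area_big:.0f}x the surface area")
```

Forty times the reaction sites at the same mass: hence the faster dissolution in glass B.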

Conversely, in glass A, while it is thermodynamically favorable for the sodium and chloride ions that make up the salt crystal to dissociate and dissolve, no oxygen or hydrogen can physically reach the ions on the interior of the crystal without going through the surface first. Every interior layer of the salt crystal must first be exposed as a surface layer before it can dissolve—in the same way that you can’t peel an onion from the inside out, it is impossible to induce a reaction without first exposing a reactive surface. Whole areas of study within chemistry are devoted to the principle that increasing surface area increases chemical reactivity: this is one of the founding principles of heterogeneous catalysis.*

Thus, surfaces are integral components of chemical reactions. Surfaces are also integral components of material objects: No solid exists without a surface. And the behavior of material surfaces often influences the behavior of the object as a whole: the physical and chemical structure of an object’s surface dictates whether two materials will attract or repel, slide along one another like well-lubricated ball bearings or stick to one another like Velcro. Bricks could not hold up bridges or buildings if there were less friction between the surfaces; steel couldn’t cut through metal or food if there were more friction—anyone who has wielded a dull knife against a day-old bagel can recognize that much.

Atoms on the surface of a material behave differently than atoms on the inside of a material. Compared to interior atoms, surface atoms are in a different relationship with other atoms in the material and with atoms in the surrounding environment. Surface atoms have fewer bonds with other atoms in the material, and they have more opportunities to interact with the surrounding chemical environment. Simply put, surfaces are a different kind of system than the interiors.

Curiously enough, many models of materials ignore the physical and chemical behavior of surfaces, rather than attempting to provide detailed, blow-by-blow descriptions of surface behavior. Mark Wilson has recently pointed out the lack of consideration for the physics and chemistry of surfaces in point-mass models of physical systems:

In point of fact, real life blocks and planes will bind very tightly together if their surfaces are appropriately cleansed and polished; the only reason we normally see behaviors that approximate more closely to frictionless sliding is because very complicated physical processes are active within the region of interfacial contact. For example, normal surfaces are quite rough on a microscopic scale and contact each other only at widely separated asperities. These contact points interact with one another through softening at quite elevated local temperatures and through other chemical alterations that remain completely invisible upon a macroscopic scale. Furthermore, the strengths of the bonding at the asperities are usually greatly diminished through atmospheric contaminants serving as a thin interfacial lubricant. And so on, through many unexpected complications. Accordingly, a very elaborate form of interfacial modeling is needed before one can realistically expect that “blocks” and “planes” assembled from mass point conglomerates will interact with one another in a manner that remotely resembles the “block and plane” behaviors invoked in our text book exercises. 

Mark Wilson, “Mixed-Level Explanation,” p. 3

Wilson goes on to point out that despite the lack of consideration of surfaces, point-mass modeling strategies are nonetheless often successful methods of modeling material behavior. He tells a rich and detailed story about the interaction of models to produce what he calls “mixed-level explanations” of how materials respond to stresses and strains from their environments, and how these mixed-level explanations carry changes in the structure of an explanation of a material system. He argues that as the scale at which a material is studied changes, the behaviors of interest change—and, consequently, so do the explanatory structures needed to model that behavior.

Throughout his career, Wilson has demonstrated the existence of changes in explanatory structure that follow changes in the kind of model being used to support explanations of a physical system. In Wandering Significance, he describes how physical concepts such as redness and hardness change with the context of application, and how changes in the concepts affect our abilities to carry out both scientific and everyday projects that require an understanding of what it means to be red, or hard. More recently, he has argued that the field of classical mechanics contains a wide variety of subtle discontinuities in the ways that concepts are applied across point-mass, rigid-body and flexible-body modeling methods, and that attention to these discontinuities provides further insight into the behavior of systems across a variety of scales, as well as insight into the structure of effective explanations of classical-mechanical systems.

Wilson is not the only author to point out the tendency of scientific concepts to slip and shift as they move from one theoretical context, or one scale, to another. Bob Batterman’s theory of asymptotic explanation relies heavily on attention to the mathematical behavior of models of materials as the models interact to produce descriptions of critical, inter-phase phenomena. Rob Phillips has written an entire materials-science textbook addressing the challenge of “modeling across scales.” All three of these authors have recognized the importance of attention to context, and especially the kind of context defined by the time- and length-scales that define the system of study, in forging descriptions and explanations of physical and material concepts and phenomena.

Surfaces, though, are not just physical. Wilson recognizes this in the description of the mechanics of surfaces quoted above, making mention of “chemical alterations” and “the strengths of the bonding.” Physics and chemistry are both needed to adequately describe the behavior of surfaces, and it should come as no surprise that scientists and philosophers looking to better understand the concept surface need to pay attention to the interaction of physical and chemical models of surfaces in order to answer questions and accomplish projects.

Nowhere is this more apparent than in models of nanoscale surfaces. At the nanoscale, the percentage of atoms on the surface of a material becomes significant: rather than making up an infinitesimal fraction of the material’s mass and structure, surfaces make up 10, 20, or up to 80-90 percent of the material. Consequently, the role of the surface in influencing the physical and chemical behavior of the material changes. Additionally, novel physical and chemical phenomena arise in the surfaces of nanomaterials. At the nanoscale, surfaces simply cannot be ignored as they often are in models of macroscopic materials.

The shifting role of surfaces in shaping the behavior of these materials is important for a variety of reasons. First, as the role of surfaces changes, the concept surface itself changes, providing a further example of the kind of conceptual change that authors like Wilson, Batterman and Phillips have addressed. In order to effectively model nanoscale systems, then, it is important to pay attention to the ways in which surface changes at the nanoscale.

Second, it is a change in the scale of the system being studied that induces this conceptual change. Wilson, Batterman and Phillips, as well as many other authors in the philosophy of physics, biology and other sciences, have all recognized the importance of modeling systems at a variety of levels or scales. Batterman in particular has emphasized the role of scale in shaping explanations of physical systems. But none of the examples considered by these authors address the curious position of nanoscience as an entire discipline framed around the study of a length scale, or the implications for modeling of uniting modeling strategies from multiple disciplinary traditions around the study of this length scale.

Third, and consequently, the variety of modeling strategies that are applied to the project of understanding of nanoscale surfaces are more diverse than the strategies used to model strictly physical, chemical, or biological systems. Contemporary neuroscience may prove the closest cousin of nanoscience here in its attempts to marry physical, chemical and biological methods in order to better understand the electrochemical mechanisms of the biological brain. As the modeling strategies used to understand nanoscale surfaces are laid out, it will be important to pay attention to the interaction of models from different scientific backgrounds in shaping the concept nanoscale surface. What a chemist means by surface is not necessarily what a materials scientist means by surface, and for many purposes one need not address what hinges on the difference in their meanings. But when the two come together to tackle the question of how nanoscale surfaces behave, the differences between their concepts of surface, and how those differences play out in the way they model surfaces, must be scrutinized.

Finally, changes in the role of surfaces at the nanoscale are crucially important for understanding the potential applications of nanomaterials in a variety of technologies. The ability to manipulate and control nanoscale surface phenomena such as localized surface plasmon resonance and the semiconductor behavior of carbon nanomaterials is precisely why nanoscience is worth doing: nearly every application of nanomaterials to solving the energy crisis, curing cancer, improving computing and otherwise making the world a better place is an application of some nanoscale surface phenomenon. Modeling and understanding nanoscale surfaces are central projects of nanoscience, and if the science is going to grow up to fulfill the promises it has made, those studying nanoscience need to understand how the surfaces of their materials work. In other words, clearing up the meaning of the concept surface at the nanoscale is a necessary step in developing the field of nanoscience.

 

*Catalysis is the use of an additional chemical agent, the catalyst, to increase reactivity or rate of reaction. Heterogeneous catalysis uses a catalyst that is in a different phase of matter than the reactants, either a solid in the presence of liquids and/or gases, or a liquid in the presence of gases. The heterogeneous catalyst provides extra reaction sites at which the reactants can interact, and the number of sites available is directly proportional to the amount of catalytic surface with which the reactants are in contact.


On ‘On Being the Right Size’

A colleague recently pointed me to an article by J.B.S. Haldane, of population genetics fame, entitled “On Being the Right Size.” It’s a good, quick read, and in it Haldane talks about evolution as a “struggle to increase surface in proportion to volume.”

Those of you following my interest in nano know that surface-to-volume ratios keep popping up all over the place, both as an explanation for novel phenomena only witnessed at the nano scale and as one of the main hindrances to stability in nanoscale systems. These issues, it turns out, are not just issues for nanosynthesis, which is pretty exciting when you think about it: what other scientific systems can be understood in terms of the struggle to maximize surface-to-volume ratios? What can we learn from studying systems in this way? What ways of understanding how theories work can promote insights of this sort, and what ways might inhibit that understanding?

On a related note, the new HBO documentary series The Weight of the Nation, which investigates research into obesity in a variety of modalities, makes the point in its first episode that the human body is designed to bear loads of a particular size. Above that scale bodily processes strain and weaken: joints ache, the liver freaks out, and of course there’s the diabeetus. Aligning a system with the scale at which it best functions, it seems, can save lives as well as provide insights of scientific and philosophical interest about the various constraints on a system’s behavior.


Even “Carbon Nanotube Physicists” Need Philosophers

Astrophysicist and NPR blogger Adam Frank recently summarized a contentious debate in the philosophy of physics and foundational physics communities as follows:

Carbon-nanotube physicists are so deep within the traditional modes of empirical (i.e., data-driven) scientific investigation that they can happily ignore what goes on in the halls of philosophy. But as Krauss’ example shows, cosmologists can push so hard and so far at the boundaries of fundamental concepts they cross over and fall prey to their own unspoken philosophical biases and misconceptions.

Frank does a decent job summarizing some of the more hotheaded points that physicists and their philosophers argue over. But leaving aside the fact that I’m pretty sure “carbon-nanotube physicists” are not a thing (and Frank’s jarring tendency to mix up “its” and “it’s” throughout the article, which is a personal pet peeve–don’t do it!), the claim that scientists who study nano systems don’t need philosophers is false, and insidiously so, for both those scientists and for philosophers of science.

Those of you who know me know this is the basic motivational claim of my dissertation. As I have not, ahem, published (written) that tome in full, let me give you a brief overview and a demonstration that I hope you will find convincing.

Point 1: Nano systems are defined by their scale, and specifically by novel material behaviors that appear at that scale and not at scales above or below it. For instance, the anisotropic metal nanoparticles I like to study exhibit localized surface plasmon resonance and quantum confinement, both scale-dependent properties that I encourage you to read about on Wikipedia (you know you already typed it in your search bar anyway). The carbon nanotubes Frank mentioned are often recognized as the strongest known materials by some measures, as well as having the ability to conduct electricity in a way bulk carbon cannot (Pro-tip: do not attempt to rewire your house with pencil leads) and there’s a whole Wikipedia page devoted just to their potential society-changing applications.

Point 2: Modeling properties of interest in nano systems in order to obtain, systematize, explain or predict that empirical data Frank referred to cannot be done without reference to the scale of the system. Because the properties of interest in nano systems are scale-dependent (Point 1), representations of those properties will in some way refer to scale.

Point 3: It turns out that a lot of material structure changes at the nano scale. The bonding behaviors (strengths, lengths, angles, crystal packing structures) that we model with molecular models and the macroscopic behaviors (ductility, malleability, thermal transport, resistance to cracking) that we model with, well, macroscopic models–sometimes called phenomenological models–are themselves based in the molecular and macroscopic scales. What actually happens at the nano scale is a little different from what happens at these molecular or macroscopic scales, and so

Point 4: Theoretical models of nano systems have to accommodate changes in material behavior based on changes in scale. This means the assumptions of the models referred to in Point 3 have to change when the models are applied to nano systems, and it turns out that a lot of physicists, chemists and materials scientists are struggling with how to systematize and justify the ways in which those changes happen, which leads us to

Point 5: Philosophers of science can help figure out how to systematize and justify those changes. That’s what we are trained to do, to highlight the assumptions being made by a particular model and question the extent to which those assumptions apply, and whether there are better ones out there.

I’m not saying philosophers are the only ones who can do this; but as far as I can tell from reading a lot of nano journals and talking to a lot of nano scientists, it’s not being done by the scientists themselves. Probably because it’s not quite as exciting to scientists, or as grant-money-conducive, as actually making the nano systems or the models themselves. But, let me put it to you this way: I talked to a professor yesterday whose whole job is to build computer models of nano systems. He’s a very smart guy who does good work and figures out valuable information; he spends a lot of time with that empirical data Frank mentioned. He didn’t realize that the capping ligands he was modeling were actually holding together his nanoparticle, and that the ‘bare,’ uncapped gold nanoparticle in his model was an unrealistic system (i.e. not a physically or chemically stable configuration of atoms). Now, it turns out that the bare nanoparticle in the model ends up being a good model of some material properties of interest. So whose job is it to explain why that highly unrealistic artifact of the model works so well to describe phenomena observed in the world? Likewise, pace Frank’s article, whose job is it to explain why a particle-free quantum field fits the definition of “nothing”? Not necessarily the scientist who came up with the model of the nanoparticle or the quantum field, as Frank clearly and concisely argues.

The job of philosophers has always been to monitor and improve reasoning processes–ask yourself if Socrates was doing anything else with his incessant questioning. That job is one that needs to be done in all areas of science (and elsewhere), not just foundational physics. It is not necessarily the job of “carbon-nanotube physicists” to notice that the computational models they use rely on assumptions that are incompatible with reality, if the models are working well; that’s the essence of the empirical mindset to which Frank refers.

When science as a practice arose out of increased specialization and professionalization from “natural philosophy”, some of the kinds of questions that used to get asked in lab drifted down the hall to the philosophy classroom. That’s fine for now, as long as the philosophers who ask the questions keep getting up and walking back down the hall to the labs every now and then to talk to the scientists–and the scientists listen. This is Frank’s point, which is well taken. It just applies more broadly than even he admits.

Still not convinced? Try this additional anecdote on for size (heh). When physicists model material systems, they often talk about “boundary conditions,” which in solid materials often refers to the surface of the material. In bulk materials, the fraction of the atoms in the material that lie on the surface of the material is infinitesimal, and the behavior of those surface atoms thus doesn’t really have much influence on the behavior of the material at large. So it’s okay to ignore boundaries and use macro-scale models to understand, predict, and explain the behavior of materials. But once materials get down to the nano scale, significant proportions of the atoms in the material actually lie on the surface of the material (see the back-of-envelope estimate below). So the behavior of surface atoms can no longer be ignored, because it becomes the dominant behavior of the system.
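
Here is that estimate as a few lines of Python, counting atoms in a simple cubic particle n atoms on an edge; the atomic spacing and particle sizes are stand-in assumptions, not measurements:

```python
# Fraction of atoms on the surface of a cube n atoms on a side:
# everything except the (n - 2)^3 interior atoms sits on the surface.

def surface_fraction(n: int) -> float:
    total = n ** 3
    interior = max(n - 2, 0) ** 3
    return (total - interior) / total

# Assuming ~0.3 nm between atoms: a 3 nm particle is ~10 atoms per edge,
# a 30 nm particle ~100, and a 1 um grain of bulk material ~3000.
for n in (10, 100, 3000):
    print(f"{n:>4} atoms per edge: {surface_fraction(n):6.1%} of atoms on the surface")
```

A 3 nm particle is nearly half surface; a bulk grain is, to within rounding, all interior.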

This fact is responsible for a lot of the interesting properties of nano systems–surface plasmon resonance, quantum confinement, differences in conductivity and catalytic behaviors between nano and macroscopic systems. It is also not only a scientific fact; it is a conceptual fact, the kind that is the province of philosophers over and above scientists. When the “surface” of a material stops being an infinitesimal part of the material and starts being a significant fraction of the matter in the material, what it means to be a surface changes. It turns out, consequently, that our very concepts are scale-dependent. Philosophers of mind and language, who deal with how words refer to parts of the world and how information about those parts of the world is stored in our brains, have to grapple with the implications of problems in the modeling of nano systems just as much as the scientists modeling those systems do. And the fruits of their labors might end up influencing further scientific advances–a standardization, for instance, of the way in which surfaces are computationally modeled at various scales.

The bottom line here is that whatever “carbon nanotube physicists” are, they need philosophers and philosophers need them, no matter what Adam Frank or Lawrence Krauss have to say about it.
