Monday, October 20, 2014

'Obstetric dilemma' skeptic has c-section and remains skeptical ... & ... Why my c-section was natural childbirth

This is a new kind of Tale for me. The rock'n'roll's turned way up, and every couple sentences I have to stop typing to twirl a blue hound dog, a bear holding an umbrella, a Flying Spaghetti Monster, and other oddities that I strung up to hypnotize this little guy into letting me type one thought at a time:

The thing that needs to be hypnotized.
Or the three wise monkeys say: The thing that makes it impossible to create or to dwell on the negative. (e.g. his birth by c-section)

That young primate's the reason I've been quiet for a while here on the MT. And he's the reason I'm a bit more emotional and I cry harder than usual at Rise of the Planet of the Apes (those poor apes!), Cujo (that poor dog!), and other tearjerkers. But he's also the reason my new favorite animal is plain old, fascinating, and drop-dead adorable Homo sapiens.

In anthropological terms, he's the reason I'm overwhelmed, not just in love but in new thinking and new questions about the evolution of human life history and reproduction, and then what culture's got to do with it and with our reconstruction of it.

Some context would help, probably.

For the past few years I've been challenging the 'obstetric dilemma' hypothesis--the idea that hominin mothers' bipedal pelves have shortened our species' gestation length and caused infant helplessness, and that antagonistic selection between big-brained babies and constrained bipedal mothers' pelves explains childbirth difficulty too.

[For background see here or here or here or here.]

As part of all that, I've been arguing that the historically recent surge of c-sections and our misguided assumptions about childbirth difficulty and mortality have muddled our thinking about human evolution.

So, once I was pregnant, you might imagine how anxious I was to experience labor and childbirth for myself, to feel what the onset of labor was like, and to feel that notorious "crunch" that is our species's particular brand of childbirth. Luckily I was not anxious about much else the future might hold because modern medicine, paid for by my ample health insurance, would always be there to make it all okay. After a long pregnancy that I didn't enjoy (and am astonished by people who do) I was very much looking forward to experiencing childbirth. In the end, however, my labor was induced and I had a bleeping c-section.

But my bleeping c-section's only worth cussing over for academic reasons because the outcome has been marvelous, and the experience itself was out of this world.

We'll get to the reasons for my c-section in a second, but before that, here are the not-reasons...

First of all, I did not have a c-section because I fell out of a tree with a full bladder.

Second of all, shut your mouth... a c-section was not inevitable because of my hips.

Okay, you got me. I've never been even remotely described as built for babymaking. My hips are only eye-catching in their asymmetry. One side flares out. It might be because when I was 15 years old I walked bent-kneed for a few months pre- and post-ACL reconstruction. That leg's iliac crest may have formed differently under those abnormal forces because, at 15, it probably wasn't fused and done growing yet. If you like thinking in paleoanthropological terms like I do, then my left side is so Lucy.

Anyway. I'm not wide-hipped. However, guess how many of the nurses, doctors, and midwives involved in our baby's birth think my pelvis was a noteworthy factor in my c-section? Not one.

Hips do lie! Inside mine there's plenty of room to birth a large baby. Two independent pelvic exams from different midwives (who knew nothing of my research interests at the time) told me so, and it sounded like routine news to boot. Although one midwife asked me "do you wear size nine and a half shoes?" (no, I wear 8) which was her way of saying, "Girl, you're running a big-and-tall business. You got this."

What you probably know from being alive and knowing other people who were also born and who are alive (or what you might hear if you ask a health professional in the childbirth biz) is that most women are able to birth babies vaginally, even larger-than-average babies. And that goes for most women who have ever lived. Today, "most women" includes many who have c-sections, since not all c-sections are performed because of a tight fit between the mother's birth canal and the baby's size. As I understand it, once the kid's started down into the birth canal and gotten stuck, a c-section's no longer in the cards. So performing c-sections for tight fit is a preventative measure based on a probability, not a reflection of an actual tight fit. In the mid 20th century, tight fit used to be estimated by x-raying pregnant women and their fetuses. Can you imagine? And this was right about the time the obstetric dilemma hypothesis was born. I don't think that's a coincidence.

Here's a list of reasons for c-sections. Tight fit is included in the first bullet point. Tight fit is one of the few quantifiable childbirth risks. No wonder it's so prominent in our minds. That list excludes "elective" ones which can be done, at least in Rhode Island, if they check the box that says "fear of childbirth". And that's not even close to a list of reasons why women around the world and throughout history have died during or as a result of childbirth. For example, about a hundred years ago women were dying all over the place because of childbed fever.

Anyway, we should assume that I am like most women and expect that I could have given birth the way Mother Nature intended: through my birth canal and with the participation of other humans. Oh yeah, when it comes to humans, social behavior and received knowledge are part of natural childbirth. Even this natural childbirth (which has inspired a forthcoming reality television show featuring women giving birth in the wild!) involves the supportive and beneficial presence of other humans as well as the culture that the mother brings to the experience.

But a c-section's just culture too, so could it be part of "natural" childbirth, then?

I'm inclined to blurt out yes, of course! because I don't support calling anything that humans do "unnatural." But I know that's not something everyone agrees with. It's politics. For example, many of you out there don't flinch an inch at the subtitle of Elizabeth Kolbert's book, "The Sixth Extinction: An Unnatural History."  And given the present energetic movement against childbirth interventions, describing c-sections as "unnatural" as climate change could help minimize unnecessary ones for those who wish to give birth vaginally.

So there we have it. These are the two enormous issues raised by my own little c-section: What can it teach us about the evolution of gestation length, infant helplessness, and childbirth difficulty? And could it be considered natural?

One way for me to get at these questions is to try to understand why I experienced "unnatural" childbirth in the first place. So here goes.

Here's why I think I had to have a c-section:

1. My pregnancy ran into overtime.


This is expected for nulliparous mothers. I visited one of my OBs on my due date. He put his finger on the calendar on the Friday that was two weeks out and joked, "Here's when we all go to jail." Then he asked me, "Who do you want to deliver your baby? I'll see when they're on call before that Friday and schedule your induction then." And I chose my favorite midwife and he scheduled the induction.

All right, so I was running late compared to most women, but that's still natural, normal. It also means the risks increase by the day. And no matter how small those risks are, the professionals know how to mitigate the biggest one of all, *death*, so they try to do exactly that. They're on alert already as it is, and then they're even more on edge when you're overdue. Especially when it's your first baby and you're a geezer, over 35 years of age.

Now, does going overdue mean the baby keeps growing? Maybe, but not necessarily and not necessarily substantially. Both of us, together, should have been reaching our maximum, metabolically. There's only so much growing a fetus can do inside a mother.

When I approached my due date, and then once I went past it, I tried to eat fewer sweets to make it less comfortable in my womb. I also went back to taking long, hard walks, five milers, even though it was hard on my bladder because I thought that might help kick him out too. I even ran the last of my five miles the day before my induction, to no avail other than the mood boost it gave me.

2. I didn't go into labor naturally by my due date or by my induction date 11 days later. 

Although my cervix was ripening, when I went in to be induced I was only dilated 0-1 cm. I had 9+ more to go before the kid could get out at 10. So a balloon catheter was inserted and filled with water, and I had to tug on the tail of it, which tugged the balloon, which put pressure on the cervix. It dilated enough that it fell out several hours into the process, and by morning I was dilated 3-4 cm. This was exactly the goal of the catheter, this many centimeters. All was going well. However, that the cervix did not open on its own is already a missing piece of going "natural," of having my own biology contribute to my childbirth experience. So starting this way is already derailing things, making it difficult for anything natural to follow, naturally.

3. The fetus's head was facing the hard way: sunnyside up.

The midwife assessed this and, cradling my belly in a bedsheet with me on all fours, she and I tried but could not twist him into a better position. His head, she said, was probably why I did not dilate naturally. When I asked an OB during my postpartum check-up, "What dilates the cervix?", he said "We don't know. But I can tell you it's not with the head like it's a battering ram." Well, then... hmph. And then I asked him whether women carrying breech fetuses have trouble dilating their cervixes, or going into labor naturally, and he said not necessarily. No. Hmph.

Regardless of what causes cervical dilation, if the head isn't facing the right direction, it's notoriously tough for the baby to get down into the birth canal, let alone through it. It's not impossible, not even close. But it's not looking good at this point either. Perhaps the contractions will jostle his head into a better position, they said. And the contractions should further dilate the cervix.

4. Contractions didn't get underway, naturally, after the catheter dilation, so the drug pitocin was used. 

Induction and pitocin increase the chances that a mother will ask for drugs to help with pain and that she will have interventions, like a c-section. See for example this paper. What the causes are, I'm not sure. But pointing out the correlation is useful at this point because at this point, without even getting into hard labor yet, and without finding out whether my cervix does its job, I'm more likely than ever to be going to the operating room.

5. After six hours of easy labor and five hours of intense labor, my cervix never dilated past 5 cm.  

It needs to get to 10 cm to get the baby moving into the birth canal. Just like with due dates, I think that blanketly assigning this number to all women is probably not consistent with variable biology, but it's how it's currently done. And maybe any higher resolution, like "Sally's cervix needs to hit 9.7 cm", is pointless.

After several hours of pitocin-induced contractions--which at first felt like the no-big-deal Braxton-Hicks ones I'd been having numerous times daily for the whole third trimester--I had only dilated 1 cm more. That's even after they upped the pitocin to make the contractions more intense.

But after they saw I'd made essentially no progress and that I was napping to save my energy for when things got bad, they woke me up and broke my bag. It would be nice if they could have let my labor progress slowly, if that's what my body wanted to do, but remember, my personal biology went out the window as soon as induction began. And then when that amniotic fluid oozed out of me, that's when bleep got real.

Every two minutes and then every one and a half, I grabbed Kevin's extended hand and breathed like an angry buffalo humping a locomotive. It was the worst pain of my life and I was afraid I'd never last to 10 cm, so I took the stadol when I told the nurse my pain was now at a 9 out of 10 (all previous answers to this question were no higher than 2). I was going to avoid the epidural no matter what, even at this point, because I was more afraid of the needle sticking out of my spine for hours of labor than I was of these contractions. I have no idea if the stadol dulled any pain, because the pain just got worse, but it did help psychologically because it put me to sleep between contractions. There was no waiting in anxiety for the next one, and time flew by. But after five hours of this, I had not dilated any more. I had vomited plenty, though! And although I'd fended off the acupuncture (FFS!), I folded weakly and, for the peace of mind of a wonderfully caring nurse, I allowed a volunteer to perform reiki on me. And what a tragedy it was! Wherever she is, there's a good chance she gave up trying to help laboring women, and she may have given up reiki altogether.

The hard labor story ends at five hours because that's about when the nurse actually screamed into the intercom for the doctor. My contractions were sending the fetus into distress.

6. After five hours of intense labor, the fetus was experiencing "distress" at every contraction, as interpreted from his heart-rate monitor. 

Basically, he was bottoming out to a scary heart-rate and only very slowly coming back to a healthy heart-rate just in time to get nailed by another contraction. By the way, this is the official reason listed in my medical records for my c-section: fetal distress.

I know that a heart-rate monitor on the fetus is another one of those medical practices that increase the chances of an "unnatural" childbirth. That's probably because all fetuses are distressed during labor, but observing the horror, and then guessing whether it's safe to let it continue, is seemingly impossible. So at some point, like with me and my fetus, they get alarmed, and then how do you back down from that? They gave me an oxygen mask, which immediately helped the fetus a bit, but like I said, hackles were already up at this point. Soon thereafter we had a talk with the doctor about how I could go several more hours like this and get absolutely nowhere with my cervix, and then there are those life and death matters. She never said c-section. I had to eke out between contractions, "So are you saying we need to perform a c-section?" and she said yes, and urgently. A c-section sounded like the only solution at this point to battered, old me, to clear-minded Kevin, and clearly to the delivery team (and in hindsight, it still does to Kevin and me). Then, lickety-split, the anaesthesiologist arrived, got acquainted with our situation, and made me vomit more. And then like a whirlwind, Kevin's putting on scrubs, and we're told to kiss, and I'm jokingly protesting "I'm a doctor too!" while being wheeled into the operating room because I cannot walk through my contractions.

It's bright white, just like Monty Python said it would be. I sat on the crucifix-shaped operating table to receive all the numbing and pain killing agents through my spine. Somehow they pulled this off while I was still having massive contractions. Then I lay down, arms splayed out to the side, and they drew a curtain across my chest, a nurse told me how creepy it was about to be, and they got to work.

Although the c-section wasn't painful, I could feel everything. This was my childbirth experience. I felt the incision as if she were simply running her finger across my belly, and I felt the tugging and the pressure lifting from my back as they extracted my baby from me. After that, and after I got a short glimpse of him dangling over my left arm--"He's beautiful! He's perfect! He's got a dimple! He growled!"--I continued to feel many things, probably the birth of my placenta, etc...

But I didn't know what exactly I was feeling until I watched a video of a c-section on YouTube. Kevin helped fill in the details too. He had caught a naughty glimpse of the afterbirth scene before being chased back to his designated OR spot with the baby. Thanks to him (and that video) I know now that I was feeling my enormous muscular uterus and some of my intestines being yanked completely out of a small hole right above my pubic bones and then stuffed back in. For a few moments, it must have looked like I was getting re-inseminated by a red octopus.

I tell everyone that it was like going to outer space to give birth. And this, if you know me, is an exciting idea so my eyes are smiling and I sound dreamy when I say "it was like going to outer space to give birth!" I bet you're thinking it's the Prometheus influence, but you'd have the wrong movie. The correct one is Enemy Mine. And it's much more than that, actually. I was as jaw-dropped and awe-struck by humanity during my childbirth experience as I am by space exploration. The orchestration, the specialization, the patience, the years of study, of planning, the calculations, the dexterity. To boldly go. Wow. Like I said, humans are my new favorite animal.

I was back in our little room quicker than most pizza deliveries, where our bright red new baby was trying hard to nurse from his daddy. Then he nursed from me. And the story's all mushy weepy cuddly stuff from now on. So let's not. Let's remember what we're here for. Okay. Right.

7. The cord was wrapped twice around his neck. 

We found this out when he was cut out of me. That didn't help with moving him around in utero to a good position, nor did it help with oxygen flow during contractions! This would not have inhibited his safe vaginal birth, however, at least not necessarily.

8. He was enormous. His head was enormous too. 

He came out a whopping 9 pounds, 13 ounces, 22.25 inches long, with a head circumference of 15.5 inches. They say that's heavier than he'd be if born vaginally because he didn't get all the fluids squeezed out of him. But still, that's large. According to the CDC he was born as heavy as an average 3.5 month-old boy. His head was about the size of an average 2.5 month-old.

Red line is our baby's head circumference at birth. (source)

Way back at the mid-pregnancy ultra-sound, we knew he was going to be something. And then if you'd seen me by the end, like on my due date, you might have guessed I was carrying twins. I was so big that my mom joked she thought maybe a second fetus was hiding behind the other one, undetected.

Smiling on my due date because pregnancy was almost over. 
(By the way, I could still jog and I dressed weird while my body was weird.)

If I hadn't had the means to eat so much like I did during pregnancy, perhaps he wouldn't have grown so large inside me. If I hadn't lived such a relaxed lifestyle while pregnant, maybe he wouldn't have grown so large inside me. If I didn't have a medical safety net waiting for us at the end, perhaps I would have been scared into curbing my appetite from the get go. I gained 40 pounds. With this body, but in a different life, a different place, a different time, maybe I wouldn't have. Probably I wouldn't have.

His size has got to have influenced a few of those other contributors to my c-section. But clearly it's more complicated than his size. And this brings us back to the obstetric dilemma. Let's say he was too big or that his large size screwed everything up, even if he could technically fit through the birth canal. Well then, why didn't I go into labor? Labor triggers are, to me, a significant problem when it comes to explaining the evolution of gestation length in humans, and whether we have a unique problem at the end.

If our pregnancy length is determined by available energy, energy use, and metabolism (here and here) then women like me who go overdue, who are clearly not killing our babies inside us either, are just ... able to do that. But doing that clearly leads to problems in our species (one of the few known) that has such a tight fit to begin with.

If our pregnancy length is determined by our birth canal size, and any anatomical correlates, then why didn't I go into labor before my fetus got so big? What went wrong? What's frustrating too is, for my n of 1, we'll never know if I could have given birth vaginally because I never got the chance to try.

These seem like simple questions but they are deceptively complex. And I think there will be some exciting discoveries to come from medicine and anthropology in the coming decades to explain just how our reproduction works which will in turn help us reconstruct how it evolved.

What's my birth experience got to do with evolution? Why, everything. It's got everything to do with evolution, because if it's not evolution, it's magic.  And that's kind of where I'm coming from when I say that my c-section was still natural childbirth. It wasn't unnatural and it certainly wasn't supernatural. Sure, it's politics. I'm invested in the perspective that humans are part of the evolving, natural world and want others to see it that way or, simply, to understand how so many of us see it that way. But it's not just evolution that's got me enveloping culture into nature and that's got me all soft on the folks who drive fancy cars who cut my baby out of me.

Who knows what could have happened to my son or to me if we didn't have these people who know how to minimize the chances of our death? It's absolutely human to accumulate knowledge, like my nurses, midwives and doctors have about childbirth. Once learned, it's difficult for that knowledge to be unseen, unheard, unspoken, unknown. Why should we expect them to throw all that away so that we can experience some form of human being prior to that knowledge?

Nature vs. Culture? That's the wrong battle.
What matters is which one can fight hardest on my behalf against the unthinkable.


Maybe childbirth is so difficult because it can be. We've got all this culture to help out when things get dicey, with or without surgeons. On that note, maybe babies are so helpless because they can be. We've got all the anatomy and cognition to care for them and, although the experiment would be impossible, it's doubtful any species but ours could keep a human baby alive for very long. It could just be our dexterous hands and arms, but it could be so much more, like awareness of their vulnerability and their mortality, and (my favorite pet idea) awareness that they're related to us. Culture births and keeps human children alive with or without obstetricians. It's in our nature. Maybe it's time we let all this culture, our fundamental nature, extend into the operating room.

Friday, October 17, 2014

BigData: scaling up, but no Big message

As technology has advanced dramatically during the past few decades, we have been able to look at the relationship between genotypes and phenotypes in ever more detail, and to address phenogenetic questions, that is, to search for putative genomic causes of traits of interest, on a dramatically increasing scale. At each point, there has been excitement and a hope (and widespread promises) that as we overcome barriers of resolution, the elusive truth will be found. The claim has been that prior methods could not identify our quarry, but that the new approach can finally do so. The most recent iteration of this is the rush to very Big Data approaches to genetics and disease studies.

Presumably there is a truth, so it's interesting to review the history of the search for it.

The specifics in this instance
In trying to understand the genetic basis of disease, modern scientific approaches began shortly after Mendel's principles were recognized, around 1900. At that time, the transmission of human traits, like important non-infectious diseases, was documented by the same sorts of methods, called 'segregation analysis', that Mendel used on his peas. Specific traits, including some diseases in humans or pea-color in plants, could be shown to be inherited as if they were the result of single genetic causal variants.

Segregation analysis had its limits, because while a convincing disorder might be usefully predicted, as for couples wanting to know if they are likely to have affected children, there was rarely any way to identify the gene itself. However, as methods for identifying chromosomal or metabolic anomalies grew, the responsible gene for some such disorders, usually serious pediatric traits present at birth, could be identified. For decades in the 20th century this was a matter of luck, but by the '80s methods were developed to search the genome systematically for causal gene locations in the cases of clearly segregating traits. The idea was to trace known genetically varying parts of the genome (called genetic 'markers', typed because they were known to vary, not because of anything they might actually cause) and search for co-occurrence between a marker and the trait among family members. This was called 'linkage mapping' because it tracked markers and causal sites that were together--were 'linked'--on the same chromosome.

Linkage analysis was all that was possible for many years, for a variety of reasons. Meanwhile, advances in protein and enzyme biology identified many genes (or their coded proteins) that were involved in particular physiology and whose variants could be associated causally with traits like disease. A classic example was the group of hemoglobin protein variants associated with malaria or anemias. The field burgeoned, and identified many genes that were likely 'candidates' for involvement in diseases of the physiology the gene was involved in. So candidate genes were studied to try to find associations between their variation and disease presence or traits.

Segregation and linkage analysis helped find many genes involved in serious early-onset disease, and a few late-onset ones (like the subset of breast cancers due to BRCA genetic variants). But too many common traits, like stature or diabetes, were not mappable in this way. The failure of diseases of interest to have clear Mendelian patterns (to ‘segregate’) in families showed that simple high-effect variants were not at work or, more likely, that multiple variants with small effects were. Candidate genes accounted for some cases of a trait, but they failed to account for the bulk. Again, this could be because responsible variants had small effects and/or were just not common enough in samples to be detected. Finding small effects requires large samples.
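To put a rough number on that last point: under the usual normal approximation for comparing allele frequencies between cases and controls, the required sample size grows with the inverse square of the frequency difference. Here's a minimal sketch, with all frequencies invented purely for illustration:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.84):
    """Approximate per-group sample size needed to detect a difference in
    allele frequency between cases (p1) and controls (p2), via the normal
    approximation for a two-proportion test (z_alpha: two-sided alpha = 0.05;
    z_power: 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# A strong, Mendelian-style signal: frequency 0.30 in cases vs 0.10 in controls.
big_effect = n_per_group(0.30, 0.10)    # a few dozen people per group
# A weak, polygenic-style signal: 0.22 in cases vs 0.20 in controls.
small_effect = n_per_group(0.22, 0.20)  # thousands of people per group
```

Shrinking the frequency difference tenfold inflates the required sample roughly a hundredfold, which is why the weak-effect era demands studies of biobank scale.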

So things went for some time. But a practical barrier was removed and a different approach became possible.

Association mapping is linkage mapping and it is candidate gene testing
As it became possible to type many thousands, and now millions, of markers across the genome, refined locations of causal effects became possible, but such high-density mapping required large samples to resolve linkage associations. Family data are hard and costly to collect, especially families with many members affected by a given disease.

However, linkage does not occur only in close relatives, because linkage relationships between nearby sites on a chromosome last for a great many generations, so a new approach became possible. Variants arise and are transmitted (if not lost from the population) generation upon generation in expanding trees of descent. So if we just compare cases and controls, we can statistically associate map locations with causal variants: we know the marker location, and the association points to a chromosomally nearby causal site.

It is perhaps not widely appreciated, especially by those with only a casual background in evolutionary genetics, but the reason markers can be associated with a trait in case-control and similar comparisons is that such 'association' studies are linkage studies: we assume the sets of individuals sharing a given marker do so because of some implicit--unknown but assumed--family connection, perhaps many generations deep. The attraction of association studies is that you don't have to ascertain all the family members connecting these individuals, though of course such direct-transmission data provide much statistical power to detect true effects. Still, if causation is tractably simple, GWAS, a statistically less powerful form of linkage analysis, should work with the huge samples available.
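For concreteness, here is the arithmetic such a case-control comparison boils down to at a single marker: a Pearson chi-square test on a 2x2 table of allele counts. All counts below are hypothetical:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table, using the standard
    shortcut formula; rows are cases/controls, columns are counts of the
    two alleles at one marker."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical marker: 600 of 1000 case alleles are 'A',
# versus 500 of 1000 control alleles.
stat = chi_square_2x2(600, 400, 500, 500)
# With 1 degree of freedom, any value above ~3.84 gives p < 0.05 in a single
# test; a genome-wide scan of millions of markers demands a far stricter
# threshold (conventionally p < 5e-8).
```

A significant statistic here says nothing about the marker itself being causal; it flags a chromosomal neighborhood, which is exactly the linkage logic described above.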

And GWA studies are essentially also a form of indirect candidate gene studies. That's because they identify chromosomal locations that then are searched for plausible genetic candidates for affecting the trait in question. GWAS are just another way of identifying what is a functional candidate. If the candidate's causal role is tractably simple, candidate gene studies should work with the huge samples available.

But by now we all know what’s been found so far—and it’s been just what good science should be: consistent and clear. The traits are not simple, no matter how much we might wish that. All the methods, from their cruder predecessors to their expansive versions today, have yielded consistent results.

Where the advocacy logic becomes flawed 
This history shows the false logic in the claim typically raised to advocate even larger and more extensive studies. The claim is that, since the earlier methods have not answered our questions, therefore the newly proposed method will do so. But it's simply untrue that because one approach failed, some specific other approach must thus succeed. The new method may work, but there is no logical footing for asserting certainty. And there is a logical reason for doubting it.

The argument has been that prior methods failed because of inadequate sample size and hence poor resolution of causal connections. And since GWAS is in essence just scaled up candidate-gene and linkage analysis, it should ‘work’ if the problem in the first place was just one of study size. Yet, clearly, the new methods are not really working, as so many studies have shown (e.g., the recent report on stature). But there's an important twist. It isn't true that the older methods didn't work!

In fact, the prior methods have worked: they first of all stimulated the technology development that enabled us to study greatly expanded sample sizes. By now, those newer, dense mapping methods have been given more than a decade of intense trial. What we now know is that the earlier and the newer methods both yield basically the same story. That is a story we have good reason to accept, even if it is not the story we wished for, namely that causation would turn out to be tractably simple. From the beginning the consistent message has been one of complexity of weak, variable causation.

Indeed, all the methods, old and new, also work in another, important sense: when there's a big, strong genetic signal, they all find it! So the absence of strong signals means just what it says: they're not there.

By now, asserting 'will work' or even 'might work' should be changed to 'are unlikely to work'. Science should learn from experience, and react accordingly.

Thursday, October 16, 2014

What if Rev Jenyns had agreed? Part III. 'Group' selection in individuals, too.

We have been using Darwin's and Wallace's somewhat different views of evolution to address some questions of evolutionary genetics and their consequences for today's attempts to understand the biological, especially genomic, basis of traits of interest. Darwin had a more particularistic, individual focus on the dynamics of evolutionary change, and Wallace a more group-focused, ecological one.

HMS Beagle in the Straits of Magellan

As a foil, we noted that a friend of Darwin's, Leonard Jenyns, was offered the naturalist's job on the Beagle first, but turned it down, opening the way for Darwin. We mused about how we might think today had Wallace's view of evolution, announced in the same year as Darwin's, been the first view of the new theory. Where we'd be now if we'd had a more group- than individual-focused perspective is of course not knowable, but we feel Wallace's viewpoint, at least in some senses, has been wrongly neglected.

Population genetic theory traces what happens to genetic variants in a population over time. Almost without exception the theory treats each individual as representing a single genotype. We take individual blood samples or cheek swabs, and let our "Next-Gen" sequencer grind out the nucleotide sequences as though on a proverbial assembly line. In this sense, each individual--or, rather, the individual's genotype--is taken to be the unit of evolution.

Populations were, and generally still are, seen as a mix of these individual, internally homogeneous units, each having a genotype. But that's an obviously inaccurate way to view life, and another reflection of the difference in viewpoints about variation in life that we've been characterizing symbolically in terms of Darwin's and Wallace's differing stresses in their views of evolution.

There is a strong tendency to equate genotypes with the traits they cause. This derives from the tendency to reduce natural selection to the screening of single genes, and the stakes are the same in both contexts: if single genes cannot be detected effectively by selection, they generally won't have high predictive value for biomedicine either. It is easy to see the issue.

But individuals are populations too
Let's ask something very simple: What is your 'genotype'? You began life as a single fertilized egg with two instances of the human genome, one inherited from each parent (here, we'll ignore the slight complication of mitochondrial DNA). Two sets of chromosomes. But that was you then, not as you are now. Now, you're a mix of countless billions of cells. They're countless for several reasons. First, cells in most of your tissues divide and produce two daughter cells, in processes that continue from fertilization to death. Second, cells die. Third, mutations occur, so that each cell division introduces numerous new DNA changes in the daughter cells. These somatic (body cell) mutations don't pass to the next generation (unless they occur in the germline) but they do affect the cells in which they are found.

But how do we determine your genotype? This is usually done from thousands or millions of cells—say, by sequencing DNA extracted from a blood sample or cheek swab. So what is usually sequenced is an aggregate of millions of instances of each genome segment, among which there is variation. The resulting analysis picks up, essentially, the most common nucleotides at each position. This is what is then called your genotype and the assumption is that it represents your nature, that is, all your cells that in aggregate make you what you are.
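To make the consensus idea concrete, here is a minimal sketch (ours, with invented numbers, not anyone's real sequencing pipeline): a 'genotype' call at one position simply reports the majority base among the sampled cells, so rare somatic variants vanish from the result.

```python
import random
from collections import Counter

random.seed(1)  # reproducibility of this toy example

# Hypothetical numbers: 10,000 cells sampled at one genomic position.
# Most carry the inherited base 'A'; a few carry somatic mutations.
cells = ['A'] * 10000
for i in random.sample(range(10000), 30):  # 30 cells with somatic changes
    cells[i] = random.choice('CGT')

counts = Counter(cells)
consensus = counts.most_common(1)[0][0]
print(consensus)  # 'A' -- the rare somatic variants are invisible in the call
```

The call is accurate as a summary, but the 30 variant cells, real parts of 'you', leave no trace in it.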

In fact, however, you are not just a member of a population of different competing individuals, each with their inherited genotypes. In every meaningful sense of the word, each person, too, is a population of genomes. A person's cells live and/or compete with each other in a Darwinian sense, and his/her body and organs and physiology are the net result of this internal variation, in the same sense that there is an average stature or blood pressure among individuals in a population.

If we were to clone a population of individuals, each from a single identical starting cell, and house them in entirely identical environments, there would still be variation among them (we see this, imperfectly, in colonies of inbred laboratory strains such as of mice). They are mostly the same, but not entirely. That’s because they are aggregates of cells, with genomes varying around their starting genome.

Yesterday we tried to describe why the traits in individuals in populations have a central tendency: most people have pretty similar stature or glucose levels or blood pressure. The reason is a group-evolutionary phenomenon. In a population, many different genomic elements contribute to the trait, and because the population is here and hence has evolved successfully in its competitive environment, the mix of elements and their individual frequencies is such that random draws of these elements mainly generate rather similar results.

It is this distribution of random draws of all the genetic variants in the population that determines the context and hence the success of a given variant. But the process is a relativistic one, rather than absolute effects of individual variants. Gene A's success depends on B's presence and vice versa, across the genome. There is always a small number of outliers, having drawn unusual combinations, and evolution screens these in a way that results in a central tendency that may shift over time, etc.
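The 'random draws' argument can be illustrated with a toy simulation (our invented numbers, not a calibrated genetic model): when a trait is the sum of many small, independently drawn contributions, most draws land near the middle and extreme combinations are rare.

```python
import random

random.seed(0)  # reproducible toy example

N_LOCI = 1000   # hypothetical number of loci contributing to the trait
FREQ = 0.5      # frequency of the '+' allele at each locus

def trait_value():
    # Diploid: two allele draws per locus; each '+' allele adds one unit.
    return sum(1 for _ in range(2 * N_LOCI) if random.random() < FREQ)

population = [trait_value() for _ in range(5000)]
mean = sum(population) / len(population)

# Fraction of individuals within roughly two standard deviations of the mean:
near_middle = sum(1 for t in population if abs(t - mean) < 50) / len(population)
print(round(mean), round(near_middle, 2))  # most individuals cluster near the mean
```

The bell shape falls out of the draws themselves; nothing in the sketch 'aims' at the middle.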

The same explanation accounts for the traits in individuals. There would be a central tendency in our hypothetical cloned mice. That’s because the somatic mutations generate many different cells, but most are not too different from each other. As in evolution in populations, if they are dysfunctional the cell dies (or, in some instances, they doom the whole cell-population to death, as when somatic mutations cause cancer in the individual). Otherwise, they usually comprise a population near the norm.

Is somatic variation important?
An individual is a group, or population, of differing cells. In terms of the contribution of genetic variation among those cells, our knowledge is incomplete, to say the least. From a given variant's point of view (and here we ignore the very challenging aspect of environmental effects), there may be some average risk--that is, average phenotype among all sampled individuals with that variant in their sequenced genome. But somatically acquired variation will affect that variant's effects, and generally we don't yet know how to take that into account, so it represents a source of statistical noise, or variance, around our predictions. If the variant's risk is 5%, does that mean that 5% of carriers are at 100% risk and the rest at zero? Or that all are at 5% risk? Currently we have little way to tell, and, manifestly, even less interest in the problem.

Cancer is a good, long-studied example of the potentially devastating nature of somatic variation, because there is what I've called 'phenotype amplification': a cell that has inherited (from the person's parents or the cell's somatic ancestors) a carcinogenic genotype will not in itself be harmful, but it will divide unconstrained so that it becomes noticeable at the level of the organism. Most somatic mutations don't lead to uncontrolled cell proliferation, but they can be important in more subtle ways that are very hard to assess at present. But we do know something about them.

Evolution is a process of accumulation of variation over time. Sequences acquire new variants by mutation in a way that generates a hierarchical relationship, a tree of sequence variation that reflects the time order in which each variant first arose. Older variants that are still around are typically more common than newer ones. This is how the individual genomes inherited by members of a population are structured, and it is part of the reason that a group perspective can be an important but neglected aspect of our desire to relate genotypes to traits, as discussed yesterday. Older variants are more common and easier to find, but are unlikely to be too harmful, or they would not still be here. Rarer variants are very numerous in our huge, recently expanded human population. They can have strong effects but their rarity makes them hard to analyze by our current statistical methods.

However, the same sort of hierarchy occurs during life as somatic mutations arise in different cells at different times in individual people. Mutations arising early in embryonic development are going to be represented in more descendant cells, perhaps even all the cells in some descendant organ system, than recent variants. But because recent variants arise when there are many cells in each organ, the organ may contain a large number of very rare, but collectively important, variants.
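A deterministic toy model (our construction, not real data) shows the same hierarchy within a cell lineage: give every daughter cell one new, unique mutation at each division, and mutations from the first division end up in half of the final cells, while last-generation mutations each sit in a single cell.

```python
from collections import defaultdict

GENERATIONS = 12                      # 2**12 = 4096 final cells
cells = [frozenset()]                 # one founding, mutation-free cell
mut_id = 0
for gen in range(GENERATIONS):
    next_gen = []
    for cell in cells:
        for _ in range(2):            # two daughters per division
            # each daughter inherits all ancestral mutations plus one new one
            next_gen.append(cell | {(gen, mut_id)})
            mut_id += 1
    cells = next_gen

# Frequency of each mutation among final cells, keyed by generation of origin
counts = defaultdict(int)
for cell in cells:
    for m in cell:
        counts[m] += 1

n = len(cells)
freq_first = max(c / n for (g, _), c in counts.items() if g == 0)
freq_last = max(c / n for (g, _), c in counts.items() if g == GENERATIONS - 1)
print(freq_first, freq_last)  # 0.5 vs 1/4096: early mutations common, late ones rare
```

The late mutations are individually negligible but, as the post notes, collectively numerous: here the final generation alone contributes 8192 distinct singleton variants.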

The mix of variants, their relative frequencies, and their distribution of resulting effects are thus a population rather than individual phenomenon, both in populations and individuals. Reductionist approaches done well are not ‘wrong’, and tell us what can be told by treating individuals as single genotypes, and enumerating them to find associations. But the reductionist approach is only one way to consider the causal nature of life.

Our society likes to enumerate things and characterize their individual effects. Group selection is controversial in the sense of explaining altruism, and some versions of group selection as an evolutionary theory have well-demonstrated failings. But properly considered, groups are real entities that are important in evolution, and that helps account for the complexity we encounter when we insist on hyper-reductionistic, individual-level thinking to the exclusion of group perspectives. The same is true of the group nature of individuals' genotypes.

We have taken Darwin and Wallace as representatives of these differing perspectives. Had Jenyns taken the boat ride he was offered, we'd have been more strongly influenced by Wallace's population perspective because we wouldn't have had Darwin's. Instead, Darwin's view won, largely because of his social position and being in the London hub of science, as has been well-documented. A consequence is that the ridicule to which group-based evolutionary arguments have been subjected is a reflection of the resulting constricted theoretical ideology of many scientists—but not of the facts that science is trying to explain.

What needs to be worked on is not, or certainly not just, increased sample size to somehow make enumerative individual prediction accurate. For reasons we've tried to suggest, retrospective fitting to the particular agglomerate of genotypes does not yield accurate individual prediction--and here we're not even considering non-genomic aspects of each genome-site's environment. Instead, we should try to develop a better population-based understanding of the mix of variants and their frequencies, and a better sense of what a given allele's 'effect' is, when we know each allele's effect is neither singular nor absolute but strictly relative to its context, both in terms of its individual and population occurrences. It's not obvious (to us, at least) how to do that, or how such an understanding might relate to whether accurate individualized prediction is likely to be possible in general.

Wednesday, October 15, 2014

What if Rev Jenyns had agreed? Part II. Would evolutionary theory be different from a population perspective?

In yesterday's post I noted some general differences between Darwin's individual-centered theory of evolution, and AR Wallace's more population-focused ideas.  Of course they both developed their ideas with the kinds of knowledge and technology then available, so we can use them to represent differing points of view we might hold today, but must realize that that is symbolic rather than literal. They were who they were, both skilled and perceptive, but their ideas were subject to modification with subsequent knowledge. One major idea that emerged after their time was that genes are point causes of biological function, that is, single locations in DNA with distinct activity.
But that idea was derived from Mendel, Morgan, Watson, Crick and a host of others who, following Mendel, pursued genetic function with independent point causation as the assumed starting point that drove their study designs.  DNA may be atoms on a string, but the assumption was misleading then, and still is today.


Alfred Russel Wallace

The modern theory of evolution, population genetics, is based on genes as point causes, and it recognizes the local nature of evolution in time and space.  A genetic variant's chances of spreading in a population are, naturally enough, seen in population perspective.  But by and large that perspective is about a genetic variant, and indeed attempts to explain functional and adaptive evolution from a single gene's point of view.  The variant's success depends on the relative success of other variants at the same locus--competition.  Of course that success depends on many things, but this perspective basically just 'integrates' away all factors other than the gene itself, computing a net-result picture.  It is very 'Darwinian' in the sense of being strongly deterministic and considering genes as points individually competing with each other for success.

This is not a fallacious picture, but I think it's not terribly relevant to the kinds of questions most people are asking these days, both in evolution and in biomedical genetics.  One needn't deny that individual genetic variants have their differential success over time, or that we can and should be aware of nucleotide differences.  To do so would be something like denying that a house is made of bricks, that the bricks can be identified and enumerated, and that they have something to do with the nature of the house.  The question is the degree to which you can explain or predict the house from the enumeration of the bricks.

There are those who suggest that evolution is more about interaction at the genome level than it is about single alleles; enumerating bricks is not enough. However, the allele-focused view would have it that it is only the 'additive' aspect of each individual allele's effect, on its own, that is transmitted. The idea is that even if the combination of alleles at and among loci affects an individual's traits (roughly, this is called 'epistasis'), s/he only transmits a roughly random half of those alleles to each offspring.  Thus, the combination effect is not inherited.  On this view, epistatic holism is an evolutionary hoax.

This venerable riposte to those arguing for a more 'holistic' or complex genomic viewpoint may be mathematically true in the abstract, but misses an important point.  In fact, the fitness (reproductive success) of a given allele entirely depends on the rest of the genome and the external environment.  If you just think about how life works (that is, metabolism, morphology, and many other complex interactions), the dependency is very unlikely to be simply additive. Things work, things adapt in combinations.  But we'll see below how this squares with the additive-only view.
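A toy numerical example (our invented fitness values, not measured ones) makes the dependency concrete: the same allele can be helpful on one genomic background and harmful on another, so its apparent 'average effect' is entirely a function of how common each background is in the population.

```python
# Invented fitness values for allele A/a at one locus, on background B/b at another:
fitness = {
    ('A', 'B'): 1.00,
    ('a', 'B'): 1.10,   # 'a' helps when paired with 'B'
    ('A', 'b'): 1.00,
    ('a', 'b'): 0.85,   # the same 'a' hurts when paired with 'b'
}

# The apparent effect of 'a' depends entirely on the frequency of 'B':
for freq_B in (0.9, 0.5, 0.1):
    effect_a = freq_B * fitness[('a', 'B')] + (1 - freq_B) * fitness[('a', 'b')]
    effect_A = freq_B * fitness[('A', 'B')] + (1 - freq_B) * fitness[('A', 'b')]
    # positive difference when B is common, negative when B is rare
    print(freq_B, round(effect_a - effect_A, 3))
```

This is how an additive 'average effect' can be computed for any allele, yet remain a property of the population's current allelic mix rather than of the allele itself.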

In fact, the collective context-dependency of each allele's functional effects means that the evolution of a population is dependent on its mix of genomic variation--which brings us back to Wallace, and is what group selection is properly about.

Group selection: why a bad reputation?
Group selection got a bad reputation in part from a 1962 book by VC Wynne-Edwards, which claimed that in many species individuals restrain their reproduction essentially for the good of the group (whether or not this is done knowingly for that purpose).  This was a kind of fitness-related altruism that was ridiculed on the grounds that if I restrain my reproduction for the good of the group, others may not be so restrained, and any genetic variant that led me to do what I did would thus be out-competed.  So group selection was out, but WD Hamilton introduced concepts of extended kinship to explain altruistic behavior, such as why I might help someone at a cost to myself--if that someone were a relative, for example.  Hamilton's rule became dogma and still explains much of the sociobiology of our era (though the rule doesn't really work very well when closely tested).

In this sense, group selection was viewed or modeled as driven by single genes and the argument was how an individual 'altruism' gene could possibly sacrifice itself and still get ahead, the one coin of the realm recognized by the most strident of Darwinists.  In recent years, various defenses of the idea and proposed mechanisms have been offered, usually with no reference to Wallace's more ecological concept.  The reason his views might be relevant is not that he thought about this in modern terms, but because he recognized that the collective qualities of the group--its overall members' traits--are what affects the group's chances of confronting the environment or other populations that it faces.

But in fact I think that while the evolution of altruism is an interesting question, it is a red herring that has given group selection a bad name.  There is a lot more to group selection than that gene-centered, restricted argument would suggest, and it's fundamental to life.  Indeed, it is possible that Wallace's idea, that the properties of the group determine its success, is more cogent than the gene-focused version--but for different, wholly non-mystical reasons.

Group selection, more properly conceived
The answer in brief is not a new fact but a different way of weighing the facts.  It is based on the indisputable fact that DNA is, by itself, quite an inert molecule. Anything it does is only in context.  The chance of an allele being successful depends on what else it finds itself combined with.  If in that context, the allele's effects are harmful, it has reduced prospects.  But if it finds itself in genomic and environmental circumstances in which it functions well, it can proliferate.

But what determines those genomes?  It's the relative frequency of their alleles in the population.  This is the result of the genomic history of the population as a reproducing unit.  Unless quickly removed, our new allele will find itself, probabilistically, in the company of other variants in the individuals who carry it.  If the variants with which it can have a positive effect are numerous and/or frequent enough, it has an increased chance of proliferating.  This is, in a legitimate sense, group selection, because genomewide the success of the group depends on its collective distribution of alleles.  (Here we're not considering how that collective success operates, whether in terms of mating, avoiding predators, finding food, dealing with local climate, etc.)

The same variant that does very well in one genomic or environmental setting may do very poorly in another.  This is another manifestation of the central fact that a variant has no predetermined effect on its own.  It's why personalized medicine, based on predicting disease from genotypes, has a long way to go, at best, for other than very severe, largely early onset traits.

It is not that the individual variant, or the individual person, isn't important, or that we can't trace the frequency change of the variant, just as has been done for decades by population genetics theory.   But it misses the important collective aspect of an allele's success.  It's like the fact that we can count the bricks that make up our building, but we are hard-pressed to understand the building that way.

Over time, a successful population accumulates enough variants in enough genes that enough newly arising alleles are in favorable 'soil' to confer viable effects on individuals who bear them.  A population too depauperate in its genomewide allelic mix dies out.  This is, in every meaningful and non-mystical sense, a group phenomenon and, if the term hadn't already been abused, group selection.  If a population perspective is really the most important one for understanding genome dynamics, then our usual genetic reductionism is misplaced.

The Normal (bell-shaped) distribution of so many traits, like stature; UConn WWI recruits
Everyone in a population differs a bit but most people, for most traits, are rather near the middle.  The roughly Normal (bell-shaped) distribution of traits like human stature is a reflection of this.  There are those in the high- or low-end tails (very tall or very short), but most are near the middle.  There is a strong 'central tendency'.  Where does that come from?  It is a direct reflection of an evolution that makes most people inherit what in their collective ancestry has evolved as a 'fit' state for that population's circumstances.  There are always new mutational variants arising, and if the population--the 'group'--had not evolved this central tendency, it would not be a healthy one, and that would affect the likely fate of new mutations.  There are exceptions, but the restricted variance of natural populations, the tendency of most individuals to be quite similar, reflects what is, in fact, a form of group-selection history.

A major way in which this can arise, given that our genomes are made of multiple chromosomes, that there is recombination, and that we are diploid but pass on only half our genome complement, is for many different genomic factors to affect a trait--for it to be 'polygenic'.   I think it is this assembly of many more or less equivalent, independently segregating parts that enables most individuals to inherit what the population's previous history has proved viable; multiple independent contributors are why such central-tendency, limited-variance characteristics are so widespread.  Gene duplication and other processes help generate this state of affairs.  It's the way molecular interaction works; if things had been too genetically unitary, survival would have been more precarious.

From this perspective, the standard 'selfish gene' viewpoint's denial of the importance of epistasis and other contextual elements of gene function is off the mark.  It misperceives the nature and vital importance of the population in which these combinations exist, and the necessity that those factors be there, in enough numbers and/or with high enough frequency.

So, Wallace again?  But wait--isn't it individuals who reproduce or not?
But what about those individuals, on whom a century of population geneticists and countless popular science writers have placed their hyper-competitive, hyper-individualized stress?  The individual, driven by some critical genetic variant, survives or not.  Individuals as wholes are viewed (or should we say dismissed), essentially, as mere carriers of the gene whose evolution is being tracked.  The context of the population may be real, as discussed above, but the individual, basically a manifestation of its genotype, is what selfishly acts and determines success. No?

Sure, in a sense.  But the variant's prospects depend on the collective, and it's mutual, or relative.  Variant One is affected by Variant Two--but Variant Two is affected by Variant One, and so on.  The individual, or worse, individual gene focus is something one can compute, but it is misleading.  And, in fact, the situation is even more problematic in respect to what individuals actually are, genomically.

In Part III, I'll discuss how individuals, too, are being misperceived as the ultimate functional units based on their individual genotypes, either as wholes or in terms of specific genes.  Again a group or population perspective has an important, largely unrecognized role to play in individuals' and hence groups' success.

Wallace was onto something that's rather absent in Darwin, and still absent today as a result of the fact that the particularist aspect of Darwin's and Mendel's view prevailed.

Tuesday, October 14, 2014

What if Rev Jenyns had agreed? Part I. Would evolutionary theory be different?

In 2006 I wrote an article about the potential long-term impact that historical quirks can have on science, based on the fact that in 1831 an Anglican cleric named Leonard Jenyns said "no, thanks" to an offer. It so happened that that offer was to be the naturalist on a surveying voyage to be undertaken by the Royal Navy. But Jenyns was interested in natural history as a hobby, rather than as a career, and he said he had to spend time with his parishioners and couldn't be away for the long years of such a voyage. He might also have used that as an excuse to avoid the known dangers of such trips at the time.

Leonard Jenyns, the reluctant reverend
Too bad, said John Henslow at nearby Cambridge University, who had recommended Jenyns. So he recommended another of his students, a fellow named Charles Darwin. Darwin was interested in natural history, too, but spent most of his time riding and shooting, as did most members of his social class, and it wasn't clear that he'd make a serious enough candidate for the position. But, after agonizing and consulting family, Charles said "Yes!" The ship was, of course, the Beagle, and the voyage was to shake the world.

I've written about this incident before (Evol. Anth., 15:47-51, 2006) because it is interesting to speculate about how biology, in particular evolutionary and genetic theory and approaches, might look today if Jenyns had agreed, and Darwin had gone fox-hunting during those important years. What might have been different? Wouldn't we have eventually ended up where we are today, celebrating Jenyns rather than Darwin? I think definitely not.

Jenyns was basically a biblical fundamentalist, which meant a creationist.  He would have gotten along famously with Captain FitzRoy, also a strong believer.  Debates (after grace) over wine and meals would not have been about the origin and distribution of variation in plants and animals.  But can we doubt that we’d have learned about evolution anyway?  No, not at all.

At roughly the same time period, another not-so-wealthy naturalist was doing his natural history in remote parts of the world (first Amazonia, then Indonesia), and he developed a clear idea of the ‘transmutation’ of species on his own.  In 1858 he sent a brief manuscript explaining his idea to a correspondent, one who had become well-known among British naturalists, the same Charles Darwin. 

This stunned Darwin, who had been working ploddingly on his own theory of evolution.  But with very good grace, he hastily assembled some bits and pieces to show his ideas (and, perhaps not so incidentally, his priority), which along with Wallace's manuscript were read to the Linnaean Society.  The world had been told, but hardly anyone was listening until the following year, when Darwin published his lengthy assertion of the idea that the diversity of life arose through a gradual historical process--his Origin of Species.

Both Darwin and Wallace were famously influenced by the economist Thomas Malthus's book arguing the inevitable pressure of growing population on available resources. That idea led them to the insight that competition for such resources in Nature inevitably favored (selected) the better competitors in terms of their future reproductive success.  Adaptation by natural selection was the process that they argued explained the diversity and functional traits of species.

But the two ideas were rather different
Darwin and Wallace placed very different stress on how this process worked.  Darwin stressed competition among individuals for survival or mates, so that in a given location the better-endowed individuals would have all the fun at the expense of their less-suited contemporaries.  Traits of organisms were at that time viewed as caused by the deterministic effects of some causal elements (which, in his own way, the Moravian monk Gregor Mendel was studying, unbeknownst to Darwin and Wallace).  The most successful competitors would transmit these elements to their offspring, and the elements would thus proliferate over time to replace less-successful elements.

Differential success was also important to Wallace.  He recognized that, of course, individuals proliferate well or not, but his stress was more on competition among groups or species, and/or of groups against the limits of their environment.  Some groups would do well and modify as successfully adapted species while others would wane.  It was the group characteristic, even though of course comprised of individual members, that told the tale.

Now, if Darwin had stuck to his guns, so to speak, we would be talking today of Wallacian, not Darwinian, evolution.  Whatever we would have discovered about the nature of inheritance, whether or not by now we had discovered DNA and its functions in the cell, we may very well not have developed our ferocious obsession with individual competition, an obsession that often drives us to view genes as if they themselves, rather than the whole individuals or whole populations or whole species, were the central competitors in the evolutionary race.

I think things today might be very different, and we might not be trying to enumerate individual genes in individuals’ genotypes when it came to accounting for genetic causation, genomic and even adaptive evolution.  The reason isn’t that individuals and their genotypes are unimportant, nor that some mysterious function unrelated to individual genes reifies the concept of population to give one population an edge over another.  The reason would simply be a different way to understand that the dynamics of both individuals and their genes are fundamentally aggregate phenomena.  And we’d have very different ideas on the role of populations and context.

In Part II, I’ll consider the collective nature of genomes in populations and how that affects their evolution in group-contextual ways.  Then in Part III, I'll try to show that individuals are themselves similarly context-driven populations of genotypes.

Monday, October 13, 2014

Morgan's insight--and Morgan's restraint

Last week, we stirred the pot by asserting that it was at best misleading for the authors of the latest human stature mega-study to say, as if reassuringly, that the number of genome locations contributing to stature was in the thousands, but that at least it was finite. We questioned that 'finite' both figuratively and literally, because it has to do with the realities and manageability of this sort of causal landscape.  And this is for what appears to be a highly genetic and easily measured trait.

Defenders of the faith tweeted sneeringly at these points.  Our view is that current practice is largely chasing rainbows, and we know it, and we had solid century-old theoretical reasons to expect the kind of complexity that's been found (countless contributing factors to complex traits).  The essential nature of the findings was clearly predicted, before and during the large-scale mapping era. Initially, one could argue that the theory of 'polygenic' inheritance was non-specific; the growth of whole-genome studies then confirmed it.  That, in itself, was a major success, not a failure, though it showed that using genomes to predict complex traits is problematic.

We have said that by now we have enough actual explicit genomic evidence to show the lay of the land--predicting phenotypes from genotypes is, and will continue to be, problematic.  It's long past time to stop chasing these rainbows and to stop making exaggerated promises of pots of medical gold to come. Some funding groups have said as much, but the push for ever bigger is not abating.

In our post, we used a quote from TH Morgan's 1926 book, The Theory of the Gene. Morgan was a major figure who laid the linkage and mapping framework for today's findings.  He made statements about stature and its complex causal basis that have stood the test of a century, and the quote we used made our point.

Of course, selective retro-quoting is as dicey as using retro-fitted data to allege predictive power.  We can mine our forebears for quotes that seem prescient... because they support our own point of view. But exegesis is a game anyone can play: one can usually find that the same author, or his contemporaries, said things that don't support our view.  Industries of professors have made their careers by mining history for antecedents whose quotes presaged major discoveries such as relativity and evolution, and/or helped stimulate Einstein or Darwin.  So, quoting Morgan was a rhetorical device for making a point, and in itself the quote has no scientific heft.

In fact, however, the quote reflects Morgan's views about what he was doing--and about what science at the time was not yet equipped to do.  Wisely, perhaps more than in today's environment, he simply refrained from doing what was not yet seriously feasible.

Morgan's contributions in the famous fly room are well documented (an interesting account is in Lords of the Fly, by RE Kohler, 1994, U Chicago Press).  He had a major, clearly important agenda.  Mendel had shown evidence that (carefully selected) traits could be inherited by what appeared to be a kind of "point" causation--single transmissible causal factors.  Mendel worked in the context of the newly developing atomic theory of chemistry, in which substances came in quantal packets (molecules composed of integral numbers of atoms), and of the discovery of point causation of infectious disease (bacteria) by Pasteur and Snow and others.  I think this general scientific environment led Mendel to think in terms of 'integral' causation, that is, causation by discrete units.

The work of Morgan and his students and colleagues was designed to explain how such point causes of inheritance worked, whatever they were at the molecular level.  In his fly experiments, also working with carefully selected traits as Mendel did (and aware that not all traits behaved this way), he used controlled, replicable experimental crosses to show that these sorts of point causes could be located to specific relative physical places in chromosomes.  This follow-up to what Mendel first clearly revealed was of course fundamental and extremely valuable.

Morgan did not, however, think of genes (the causal 'beads' on the chromosomal string, whatever their actual nature) as having a fixed functional effect.  In the absence of direct knowledge of their chemical nature he, like Mendel and everyone else up to his time, had to use phenotypic markers to reveal the presence of a given allele (genetic variant).  He recognized that 'genes' could have multiple or complex effects, but in his controlled crosses he scored flies for traits that had some discrete, enumerable state at a specific life stage, such as the newly hatched larva, so that the trait could be used to identify the presence of the causal element.  He didn't care, and was explicit about this, whether that was all the gene did, or whether the trait was even present at some other life-history stage.  One might say that he was interested in the causal layout or shape, to use a word we used in a post last week, of inheritance.  That is, his approach was a tactic to understand the nature of genetic inheritance, essentially not to explain the traits.

Morgan also explicitly eschewed working in areas like developmental genetics--or stature--because, he rightly said, there was simply not enough known at the time; more fundamental understanding was needed first.  By avoiding what was hopeless to understand at the time, and using his restricted, focused approach to get at a deeper question (genes as causal locations on chromosomes, recombination, etc.), he made some of the most important contributions to our understanding of life.

In that sense, it is fair to quote him as we did in our post because he both had the insight and the restraint to stay within what was known.

How did we get here?
The formal theories of genetics that developed in the early third of the 20th century included ways to reconcile discrete Mendelian heritable 'causation' with the causation of more obviously continuous traits--like stature.  The reconciliation was the concept of 'polygenic' rather than point causation.  The idea, in its theoretical expression, was that an effectively infinite number of individual genes, each with an infinitesimally small effect, generated the continuous population distribution of complex traits.

Like points in geometry, genes could be point causes, but were infinitesimally small in the limit, and their joint effects could have useful distributional properties (like the bell-shaped distribution of stature).  But quantitative geneticists properly refrained from trying to identify individual genes 'for' such traits (or, as time progressed, claimed that sometimes one or a few 'major' genes might be identifiable but in a polygenic background).  Whatever they were, genes behaved in individuals and were transmitted in aggregate ways that clearly fit the polygenic model, whether or not the number of causes was literally infinite.  That's like saying a line can be understood and analyzed as if made of countless infinitely small, non-enumerable points. From an aggregate point of view, it makes complete sense.
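As an aside for the quantitatively inclined, the polygenic idea is easy to sketch in a few lines of simulation (all numbers here are made up purely for illustration): add up many tiny, independent allelic effects and the aggregate trait comes out approximately bell-shaped, just as the theory says.

```python
import random
import statistics

random.seed(1)

N_LOCI = 1000   # many loci, each with a tiny effect (illustrative values only)
EFFECT = 0.1    # per-allele contribution to the trait
FREQ = 0.5      # allele frequency, assumed equal at every locus

def polygenic_trait():
    """Trait value = sum of tiny additive effects across many diploid loci."""
    alleles = sum(1 for _ in range(2 * N_LOCI) if random.random() < FREQ)
    return alleles * EFFECT

population = [polygenic_trait() for _ in range(5000)]

mean = statistics.mean(population)
sd = statistics.pstdev(population)
# By the central limit theorem the trait distribution is approximately
# normal: roughly 68% of individuals fall within one SD of the mean.
within_1sd = sum(1 for t in population if abs(t - mean) <= sd) / len(population)
print(round(mean, 1), round(within_1sd, 2))
```

Nothing here requires the effects to be literally infinitesimal: a thousand small additive contributions already behave, in aggregate, like the continuous distributions the quantitative geneticists modeled, even though no individual locus is detectable from the trait alone.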

By roughly the 1990s, while the human genome reference sequence did not yet exist, it had become technically possible to scan the whole genome for specific sites that contributed to complex traits.  The genome was viewed much more as a string of discrete beads than it is now.  Enthusiasm was high because the method (called 'linkage' analysis, done in very large families where detection power is greatest) worked: genes for breast cancer susceptibility, cystic fibrosis, and other traits were mapped by various related approaches.

Without going into the historical details, what was mappable were genes in which there were sufficiently common variants with sufficiently strong effects to appear in families in a pattern much like that which Mendel had introduced, in which the trait was an efficient marker of the presence of the causal allele.  The predictive power was strong in those families, but it was just as obvious that this was not the general case for occurrences of the same traits.  Even today, the preponderance of breast cancer cases are not due to the BRCA genes nor does the disease segregate in families in Mendelian fashion.

Still, the mapping-drug had been taken, and geneticists on a high saw a limitless landscape of possible ways to identify--to enumerate--the genomic regions that contributed causally to a host of complex, largely continuously distributed (quantitative) traits.  Just collect more data!  As technology improved, the addiction was fed because endlessly finer resolution seemed in the offing.  The 'hits' that were made were naturally trumpeted with great enthusiasm.  We could turn complex traits into Mendel's peas!

This began in earnest around 15 years ago, and money poured into genetics: the omics era had dawned.  For legitimate reasons as well as out of fashion and imitation, every problem was turned into a big-data 'omics' problem driven, rather than just enabled, by advancing technology.  Nutrigenomics, diseaseomics, microbiomics, epigenomics, proteomics, and so on.  In a sense, science has become industry, and has jumped on the 'Big Data' bandwagon.

Where are we now?
The problem as I see it is that we have reached what seems clearly to be a kind of ceiling in cost-benefit or signal-to-noise terms.  The findings over the past 20 years, both widespread and consistent, from natural as well as experimental approaches, and from all the kingdoms of life, have confirmed the century-old theory that the traits in question really are 'polygenic' in the practical sense of the term.  This is an elegant success story--but it's more in the spirit of Morgan than the transformation so vocally being asserted, which amounts to the promise of imminent medical miracles.

There will of course always be some important findings when such a huge enterprise is undertaken.  But we seem not yet willing to acknowledge, much less accept, the limits of the new knowledge, in particular that predicting complex traits from genomes at birth is not going to go as promised.

People aren't saying, with restraint, that we're just showing that discrete spots on the genome have causal effects.  Instead, we have blurred the trait being tested (e.g., stature, but only after problematic 'regression' on age, sex, etc., as if the residual were the equivalent of Morgan's marker traits), and we have assumed that random population sampling gives the same sort of information as controlled crosses.  We have also assumed that estimating these effects retrospectively gives us real predictive ability.
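To make concrete what that 'regression' step amounts to, here is a minimal sketch, with entirely made-up numbers, of adjusting a phenotype for a covariate.  With a binary covariate like sex it reduces to subtracting group means, and it is this residual, not the measured trait itself, that enters the association test.

```python
import statistics

# Hypothetical, made-up data: stature (cm) and sex for six individuals.
sexes   = [0, 1, 0, 1, 0, 1]                  # 0 = female, 1 = male (arbitrary coding)
stature = [162.0, 178.0, 160.0, 175.0, 158.0, 180.0]

# "Regressing out" a binary covariate is equivalent to subtracting the
# group mean from each observation.
mean_by_sex = {
    s: statistics.mean(h for h, sx in zip(stature, sexes) if sx == s)
    for s in (0, 1)
}
residuals = [h - mean_by_sex[s] for h, s in zip(stature, sexes)]

# The residuals sum to ~0 by construction: the adjusted "trait" is a
# statistical construct, not a directly observed phenotype like the
# discrete fly markers Morgan scored.
print([round(r, 1) for r in residuals])
```

The point of the sketch is not that adjustment is wrong, but that what gets mapped is a derived quantity several steps removed from the phenotype as lived, which is part of why the analogy to Mendelian marker traits is strained.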

At this stage, the view we've expressed is that we now have countless big-scale mapping studies, generating similar results, and it's time to think about what we've been shown, rather than to continue along the same basic path.  Some are doing that, advocating whole-genome sequencing and whole-population databases of sequence--partly to avoid the problem that association studies can't find rare causes, and hoping instead to find them in families within population databases.  This, too, will work sometimes.  But it asks a lot in return for the occasional success, and it does not ask whether what we've done has shown us that genomes work in ways far more complex than our enumerative approach aims to document--which is the view we assert.

It is of course possible that the kind of data we are collecting is, in the end, appropriate and that there isn't anything profound yet to be discovered by more careful, focused, less industry-first methods. Time will tell.

A standard wagon-circling criticism of those who say we've done more than enough of the recent mapping approach is to say that if you don't have the answer you should shut up and go home.  But that is somewhat like saying that if you see that the theater is on fire, you shouldn't say anything about it unless you have a hose in your hand.  That's a totally bogus argument.  If there is a problem, and many do now think so, and nobody is seriously rebutting the points, then there is a problem!  There are resources, fiscal and intellectual, at stake that could be used more productively.  Denial and aggressive promotion of current practice certainly keeps the motor running, but it won't solve the problem.