Thursday, April 24, 2014

Is it competition vs cooperation; or, cooperation lets competition be?

Since Darwin, emphasis in evolutionary theory has been on competition between individuals and species in the race for optimal fitness -- he or she who passes the most genes to subsequent generations wins.  Darwin saw this through the lens of the rampant cruelty of Nature, and the need for individuals to find food and mates and escape being eaten.

To many, this is a fundamental underpinning of evolution, although in recent years a number of evolutionary biologists have begun to think that cooperation deserves a larger role.  We have done our part, in our book (titled, in fact, The Mermaid's Tale: Four Billion Years of Cooperation in the Making of Living Things), here on MT, and elsewhere; despite our best efforts, though, competition remains the predominant view of how life works.

In our book we suggest that cooperation is a fundamental principle of life, arguably much more pervasive and important than competition because it happens at all levels all the time, from minuscule intracellular spaces to grander ecosystems, instantaneously as well as over evolutionary time.  It is, we think, hard to argue that competition plays such a central role.

Our friend and sometime co-blogger Reed Goodman alerted us the other day to an interesting piece in The Baffler, "What's the Point if You Can't Have Fun?" by David Graeber.  Graeber, currently Professor of Anthropology at the London School of Economics, believes that behavioral scientists have gone over the top in arguing that there must be a rational purpose to every animal behavior.
I’m simply saying that ethologists have boxed themselves into a world where to be scientific means to offer an explanation of behavior in rational terms—which in turn means describing an animal as if it were a calculating economic actor trying to maximize some sort of self-interest—whatever their theory of animal psychology, or motivation, might be.
Instead, why can't they just be having fun?  

Graeber notes that with the discovery of genes, evolutionary theorists quickly adopted the idea that everything animals do is in the service of passing along their own genes, an idea popularized by Richard Dawkins in his book The Selfish Gene but also widely accepted within evolutionary biology.  Indeed, evolutionary biologists tend to smell a competitive rat everywhere, nurturing the view that everything animals do must be adaptive, naturally selected, and for the purpose of out-reproducing the competition.

This of course raises problems like altruism, cooperation among non-kin, animals sacrificing their own lives for others, and so forth, but these have generally been hand-waved away with what we would call rather contorted arguments that reframe kindness, cooperation and self-sacrifice as just competition in disguise.  Of course, that implicitly makes a tautology of every such explanation, based on an axiom -- an assumption -- of pervasive selective determinism.

Graeber isn't at all a fan of this strict view of biology and evolution.  His essay is wide-ranging in scope, from inchworms dangling in air for the sheer fun of it, to the historical context in which the idea that the purpose of life is the propagation of DNA (did our genes thus make us invent the PCR machine, for unlimited propagation of DNA?) could gain purchase, to a discussion of free will and consciousness.  It is provocative and well worth a read.

But it was Graeber's mention of a 1902 book by Russian naturalist Peter Kropotkin (1842-1921) that most caught my eye.  In Mutual Aid: A Factor of Evolution Kropotkin argues that Darwin was wrong to place so much emphasis on competition, because cooperation -- mutual aid -- is so obviously in evidence all around us.  The idea of the struggle for life as a 'law of nature' was something he just couldn't accept because, as he wrote "...I was persuaded that to admit a pitiless inner war for life within each species, and to see in that war a condition of progress, was to admit something which not only had not yet been proved, but also lacked confirmation from direct observation." 

As a naturalist, Kropotkin spent much time traveling and observing nature. In Mutual Aid he documents  evidence of aid over conflict among animals, in humans and throughout human evolution and history, writing:
As soon as we study animals -- not in laboratories and museums only, but in the forest and prairie, in the steppe and the mountains -- we at once perceive that though there is an immense amount of warfare and extermination going on amidst various species, and especially amidst various classes of animals, there is, at the same time, as much, or perhaps even more, of mutual support, mutual aid, and mutual defense amidst animals belonging to the same species, or at least to the same society.  Sociability is as much a law of nature as mutual struggle.  
But which comes first, evidence or interpretation?
Kropotkin was a prominent figure in 19th century activist politics.  He was, according to the wisdom of the masses, a "geographer, economist, activist, philologist, zoologist, evolutionary theorist, philosopher, writer and prominent anarchist." (Wikipedia.)  He was sympathetic to the plight of the peasant in Russia as a young man, and to socialist ideas, though he eventually settled on anarchism and as a political activist, was imprisoned for subversive activities in 1876.  He escaped from prison before his trial, however, and fled to Europe, only returning to Russia after the revolution in 1917, enthusiastic about the changes he saw happening, though eventually disillusioned by the authoritarian socialism that the revolution became.

Kropotkin disliked capitalism and the idea that life must be a struggle.  As an anarchist, he preferred to believe that humans were capable of mutual aid and cooperation, and that we could effectively run our own societies.  On the other hand, competition was in the cultural air when Darwin was doing his thinking, with the British empire dominating much of the world, the beginning of the industrial age and the rise of capitalism, the economics of Thomas Malthus who was so influential to Darwin's thinking, so it was perhaps natural that Darwin, and Wallace too -- and indeed Richard Dawkins in the 1970's -- framed their theories of evolution in terms of competition.

One can assert that if Kropotkin was driven by his ideology to see in Nature what his filters allowed him to see, then the same certainly applies to the Darwinians and even to the gentle Charles himself.  If Darwin's view prevailed in the west, the cooperation-based views of Lysenko prevailed in the Soviet Union, with disastrous consequences for science.  But viewed in its context, these polarities are understandable. 

What does this say about which view is right?
I don't know.  Ken and I thought we were writing The Mermaid's Tale about biology.  As we wrote in the book, competition and cooperation are laden words, but we explicitly chose 'cooperation' as an antidote to 'competition' with all its political and cultural meaning.  More neutral and scientifically appropriate terms, like 'successful interaction' and 'differential proliferation', would serve science better and be less a matter of defining everything before it's observed.  However, our intention was to describe cooperation not just as kindness, but as cooperative interaction among genes, organelles, organs, organisms and species.  In that context, we had little to say about culture, except insofar as we would argue that culture generally, and manifestly, trumps genetically driven behaviors.

So, I was surprised (and of course pleased) to see a recent review of our book on Amazon that says, among other things, "Would more anthropologists and policy makers read this…".  It's a favorable review, so presumably the author sees political and cultural meaning where we were explicitly only intending to describe biology.

But that's okay, and as it should be.  Science is always done in cultural or political context.  To a great extent, we see what we believe.  

Wednesday, April 23, 2014

Genomics in microbiomic clothing: The next 'hula hoop'?

So we've just been through 20 years of genomics: GWAS based on genetic markers tested in samples of ever-growing huge numbers of cases compared to hordes of controls.  We know that mapping even in simple organisms, including flies, yeast, and bacteria, shows genomic causal complexity.  We know that whole-population genome sequencing is the next way the marketeers will promise immortality.  We know this teaches us lessons for biology that we don't want to listen to.

We know that even without the exotic technology that makes all this possible, there are relatively strong, simple genetic causes of some traits--this goes back in a formal sense to Mendel, and even GWAS, though often thought-free, can find these strong effects and will continue to find them here and there.  This, even though after 20 years or more of effort we still don't really understand the traits for which the mirage of an oasis of solutions has failed us (obesity, diabetes, schizophrenia, asthma, autism, ...).

So this is not no-yield science even if it is science of diminishing returns.  And it's our current way of doing business (and, perhaps, 'business' is the right word for it?).  It's our 'anthropology', how our culture works.  Big Ag is doing it, the military is doing it, NASA is doing it.   Big Data and other Big Projects are the order of the day and, to be blunt, the way to secure large long-term funds in a fickle market.

But as at least some are tiring of the over-marketing of all of this, the next genomic hula-hoop fad down this particular technology's line looks like it's the microbiome.  DNA sequencer makers can sell machines for this type of study, if the GWAS market shrinks.

But isn't it basically just the same?

We will find that the community of bugs in this or that part of your body will differ over time, will differ among sampling sites (this or that part of the skin, gut, or wherever).  It is even more complex than genomes because not only will we have the DNA sequences of large numbers of recognized microbes, but the microbial species will vary within their population and we'll have their relative frequencies in our samples.  In other words, from each person we get one genotype, alleles present or absent (or present in one or two copies), but for microbes we'll have their relative frequency in the sample (how many times their sequence was identified in the sample).  So there will be another very major variable added to the mix.

Everyone will differ in these regards, within and between populations, with age, sex, all sorts of environmental variables, and over time.  And while these variables may be enumerated one by one, their dynamics in fact involve interactions among them (often, probably, more than just two-species interactions in complex microbiome ecologies).  And there will be interactions with the host's genome as well.  If interaction among genomic sequence elements has proven largely intractable, wait till you see what the Wizards of Microbiomics turn up and start touting to the media.  This, if anything, bodes to make genomic variation look rather simple!

Of course, we will learn a lot in this endeavor.  And, as with genetics, there will be strong and important signals related to human, animal, and plant well-being.  Some of these we know of already (e.g., some nasty strains of infectious bugs, E. coli, and so on).  Many would not require expensive exhaustive enumeration studies to find.  Just as with genetics, there will be successes.  But, we think, just as with genomics, we can already see the over-selling and the faddishness and band-wagoneering of microbiomics (and, similarly, of epigenetics).  Can basically the same knowledge come with fewer, more focused and disciplined microbiomic studies?

Perhaps we are over-reacting here, or perhaps this is just how humans, individually pursuing careers, behave.  To get the minority of brilliant discoveries, perhaps the price to pay is the sea of not-so-brilliant incremental work.  Only history will filter through the din of self-promotion to show what was actually important, and in itself actually worth paying for.

If this is just how we are, in science, the arts, and business, then progress is expensive, a fact we have to live with.  Of course, if resources were not captured by this academic welfare system, many other people could actually live, literally, or have better health and living standards.  Priorities on what to do with resources are societal, and we acknowledge that the sentiments we express present a political view.

But in terms of science itself, one could discuss whether there might be a better, more focused and modest, and less costly way to enhance creativity per dollar invested.  We and others have been writing with occasional suggestions about how funding could be less institutionalized, and we recently asked whether all of those ideas are rather irrelevant to the societal processes that will actually make change, when and where it happens.  Repetition and persuasion are perhaps essential parts of the jockeying 'game' of resource distribution.

Meanwhile, we'll be treated to the predictable years of breathless fantastic unprecedented paradigm-shifting discoveries in microbiomics.  Eventually, its dust will settle.

Tuesday, April 22, 2014

Microbiomes and complexity

Obesity -- the more we know, the less we seem to know -- or, at least, the more complicated it gets.  But take heart!  Because this is turning out to be true of much of biology.  The more we learn about cellular mechanisms, how genes work, gene networks, the effects of medications, the relationship between diet and disease, the effects of environmental exposures on risk, and so much else, the better we understand that reductionist science is not necessarily the best way to explain cause and effect.  Why are some people obese and some not?  Why does a single genetic variant so often explain so little?  Why do some people benefit from a given medication and some not?  Can we predict who will get which disease?  It's complicated.  But absorbing that message can be the first step towards better understanding.

A piece in the April 17 Science, "Microbiome: A Complicated Relationship Status" by Sarah Deweerdt, elucidates this well.  "Nothing is simple about the links between the bacteria living in our guts and obesity," Deweerdt writes.  Studies comparing the gut microbiome of obese people with that of thin people have shown marked differences between them.  Indeed, researchers have shown that "...microbial genes sort the lean from the obese with 90% accuracy, whereas looking at human genes yields the right answer only 58% of the time."

Of course, this isn't predictive, it's the microbiota of individuals who are already obese.  Whether obesity is caused by obesity-related gut flora or gut flora are a by-product of obesity isn't yet known, though a number of experiments with mice, including this one ("The gut microbiota as an environmental factor that regulates fat storage", Bäckhed et al., PNAS, 2004), suggest that gut flora might in fact be causal.  A 2014 study of the effects of pre- and probiotics on lipid metabolism and weight suggests the same, as do a number of others in the intervening decade. Of course, even if that's the case, genomic and microbial and other environmental factors interact: none is 'the' cause by itself.

To test the causal relationship, Bäckhed et al. transferred the microbiota of obese mice to the guts of germ-free mice (born by Caesarean section into sterile environments).  Despite eating less than before the transfer, and expending more energy than the germ-free controls, the recipient mice showed a 60% weight gain within two weeks of receiving the microbiota from the obese donors.  However, they never actually became obese themselves.  And we wonder whether this is specific to the strain of mice used: how would the results compare if tested comparably on many other laboratory strains?

Bäckhed et al. report direct evidence of metabolic responses to the presence of the new gut flora, including increased hepatic production of triglycerides and increased monosaccharide uptake from the gut, and "increased transactivation of lipogenic enzymes... The liver has at least two ways of responding to this augmented delivery of calories: increasing inefficient metabolism (futile cycles) and exporting these calories in the form of fat for deposition in peripheral tissues."

That is, Bäckhed et al. suggest, resident gut microbes help us efficiently store calories, but in the calorie-rich environment that western grocery stores and other food provisioners create, over-efficiency can lead to obesity.  

The "thrifty genotype" becomes the "thrifty microbiome"
It should not be ignored that this is the same argument that Jim Neel used in 1962 to explain the evolution of genetic predisposition to diabetes ("Diabetes Mellitus: A 'Thrifty' Genotype Rendered Detrimental by 'Progress'").  His idea was that genes and pathways for storing energy in pre-modern times to take people through times of famine become disease risks in our time of plenty.  But this paper has been cited, and 'thrifty' rhetoric used without much restraint, even after Neel basically acknowledged that the idea was oversimplified and didn't apply to the major adult-onset diabetes epidemic.  It's highly likely that the thrifty microbiome idea will prove to be overly simplified as well.  

The microbiome is a hot item these days.  Unlike 'the' human genome, though, no one has ever suggested that there is 'a' single microbiome, which means that the recognition of complexity has been there from the start, as it should have been in the genome project.  Nonetheless, we have to be careful not to bestow too much credit for depth of insight on the microbiome bandwagon: reductionist explanations for what the microbiome can explain are tempting, perhaps especially to the media.  So, it's nice to see Deweerdt giving attention to its complexity.

Indeed, Deweerdt cites researchers who believe that microbiota can only be considered part of a causal chain with respect to obesity.  What we eat influences the bacteria in our gut, and that in turn may influence our weight.  Germ-free mice, for example, didn't gain weight on a sugar-laden diet, suggesting that if sugar is obesogenic, it's because the bacteria in the gut make calories available from carbohydrates that we can't digest on our own.  And gut bacteria can digest other components of what we eat that we ourselves can't, again increasing the number of calories we metabolize from certain foods.

As far as we know, no one is claiming that if the thrifty microbiome idea is valid, it will be the whole story behind obesity, even in a single individual.  To date, to be sure, the mouse results aren't being replicated in humans, and fecal transplants aren't causing weight loss.  But even if to some extent gut flora are involved in regulating weight gain or loss, some forms of obesity really will turn out to have a fairly simple genetic explanation, even if that will vary between people, and some really will be due to energy imbalance (more energy consumed than expended).  And, there will be other explanations as well, perhaps even including a role for inflammation which is turning out to be involved in many diseases and disorders, as well as a combination of all of the above, even in single individuals.  

And the possible involvement of microbes only pushes the question back a step.  E.g., where do these obesity microbes come from, and are some people more susceptible than others?   

The more we learn, the more complicated it gets.  

Monday, April 21, 2014

Earths galore: we're getting closer...but to what?

Well, NASA's done it again.  They've found another exciting planet lurking in the depths of near space.  This time, the BBC proclaims, we have Kepler's find 186f (illustrated, even!), the best one yet, and (maybe) it (could) be watery!  It seems that the news cycle isn't just 24/7 but longer: every time NASA releases the story of some newly found, somewhat earth-like rock, the news outlets pick it up as if it were the first time, and nobody remembers that we've seen almost the same thing many times before.  But if they can get their sales with re-runs, we can't be blamed for at least returning to the topic (e.g., we blogged about NASA's report of an exoplanet circling the star Gliese 581, as well as others), though hopefully with a little more that's different from NASA's releases!

Just like Earth! [in an artist's ebullient imagination]  Credit:
NASA Ames/SETI Institute/JPL-Caltech
A planetary plenitude
This discovery is called by the ever-sober news media an 'earth twin' or as the knowledgeable NY Times puts it, 'perhaps a cousin' (whatever that means).  Sssh!  If you keep very quiet, you might be able to hear your Keplerian kin-folk talking!

Well, we can overlook such verbiage, since this attempts to be a science blog.

Actually, the discovery of a plenitude of possible planets, or 'habitable' ones as they seem often to be referred to, is interesting and continues apace.  They now number in the hundreds and only a trivial fraction of the universe has been scanned, or is even scannable with available technologies.

These truly are interesting findings, though they are, surprisingly, not at all surprising.  After all, space is massively large and filled with a chaos of objects hot and cold, large and small.  If, as seems likely, Newton was right and gravitation is universal, then the small stuff will often be captured by the gravitational attraction of the big stuff. Big hot stuff (stars) can capture smaller wandering rocks and they'll end up in orbit.  Some even smaller rocks are captured by the pull of, and orbit around, bigger rocks (like moons around planets). Lots of other rocks and stars will be in all sorts of relationships as well.  But some of these will be special.

If we care about our sort of life, then we want what is being called a Goldilocks planet: like her porridge, the rock must be not too hot and not too cold, not too wet and not too dry, but just right!  That is, there will be water, and warmth enough to keep it liquid but not turn it all to steam, and other things of that sort.  That is where, we're told, we'll find the ETs.  Some day.

Now this is genuinely thought-provoking, but it needs none of the circus hype of the news media.  That's because it basically tells us what we already knew.  In fact, the actual facts are to us a lot more interesting than the Disneyfication.

We've previously discussed, in general terms, the idea that if there are an infinity of stars, galaxies, planets or universes, there would just as likely be all sorts of life on them.  Here, we can be a tad more specific than that.  For example, if there are hundreds of planet-like things somewhat like 186f just here in our own local galaxy (the Milky Way), and we've really just begun looking and technically can only see some of what might be out there, and if our galaxy holds in the range of 100 billion stars, then thousands and thousands of those stars must have orbiting rocks.  There are around 100 billion other galaxies (give or take a few), so we can assume that there must be thousands upon thousands of the same sorts of rocks orbiting stars, and rocks orbiting around those rocks, in each galaxy.

That is, even on the back of the proverbial envelope, one would estimate that there would be at least 100 thousand billion habitable planets.  That is 100 trillion planets (100,000,000,000,000), as a minimal estimate.  And once we knew that there were 'habitable' rocks orbiting stars (Earth, and perhaps one or two more even just around our own sun), it follows that there are likely 100 billion or more earth-maybes in the Milky Way alone!  Of course, if you hold to Genesis, our Earth could be God's only watering hole, but once we had clear evidence of other possibles, a reasoning person must accept that these larger numbers become plausible.
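The envelope arithmetic above can be written out explicitly.  Every input here is either a loose figure from the text (stars per galaxy, number of galaxies) or an assumption made purely for illustration (the habitable-rock rate), not a measurement:

```python
# Back-of-envelope estimate of habitable rocks, using the rough figures
# from the text: ~100 billion stars per galaxy, ~100 billion galaxies,
# and an assumed rate of roughly one habitable rock per 100 million stars.
stars_per_galaxy = 100e9
galaxies = 100e9
habitable_per_star = 1 / 100e6  # an assumption, not a measurement

per_galaxy = stars_per_galaxy * habitable_per_star  # habitable rocks per galaxy
total = per_galaxy * galaxies                       # across all galaxies

print(f"per galaxy: {per_galaxy:,.0f}")   # on the order of a thousand
print(f"total:      {total:,.0f}")        # ~100 trillion, as in the text
```

Play with the assumed rate and the conclusion barely budges: even a rate a thousand times stingier still leaves a hundred billion habitable rocks.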

The point is that even without the Kepler and other telescopes scanning the heavens for these things, the totally convincing plausibility argument would be that the universe is awash in 'habitable' planets.

But, ETs?
Now the fact that there are lots of warm, wet rocks out there is one thing, but it doesn't imply that there is anybody living on them. However, life--even just our sort of life--is clearly possible because we're here living it as proof.  Given that,  even a modest kind of belief in natural science would lead one to believe that if you have 100 trillion tries, there really has to be some sort of life out there, and probably lots of it, even if it's only on a trivially teeny fraction of the habitable planets.

This of course does not address whether it's our sort of life in the 'intelligent' sense, or life based on DNA.  The fact that we are here is not quite so persuasive about that, because the numbers get astronomical (so to speak, but in the other direction--of smallness).  The number of nucleotides in earth-life's genetic history, from primal RNA to global DNA today, likely dwarfs even 100 trillion.  Each has arisen and/or later been changed with a largely independent individual probability that is very, very small.  The net result is, in essence, the product of these probabilities (this and this and this...and this--the result--had to happen).  So the probability of going from primal soup to any given form of complex 'intelligence' over 3.5 billion years, that is, to our form of it, would be minuscule even relative to the number of potentially habitable planets.  This could mean that intelligent life arising more than once, even with so many trials, would be very unlikely, and thus that we are lonely in our uniqueness.
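The point about products of probabilities is easy to make numerically: multiply enough probabilities, even individually near-certain ones, and the product shrinks past anything ordinary floating point can hold, which is why such products are handled as sums of logarithms.  The step count and per-step probability below are purely illustrative assumptions:

```python
import math

# Illustration only: suppose each of a billion historical 'steps' in a
# lineage had probability 0.999 of happening just as it did.  The joint
# probability is 0.999 ** 1e9, far too small for a double to represent.
steps = 1_000_000_000
p_step = 0.999

# Work in logs: the log10 of the product is the sum of the logs.
log10_joint = steps * math.log10(p_step)
print(f"joint probability ~ 10^{log10_joint:.0f}")

# Naive multiplication underflows straight to zero:
print(p_step ** steps)
```

A joint probability on the order of 10 to the minus several hundred thousand dwarfs any count of habitable planets, which is exactly the asymmetry the paragraph describes.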

But if others just like us may never happen more than once, there are also countlessly many other pathways to intelligence: after all, each human has a different genotype, and there have been billions upon billions of us.  So it really is impossible to do more than muse about what the net resulting probabilities are.  To a great extent it depends on what we count as intelligent life.  To a greater extent, "Are we alone?" is hardly even a scientific question to ask.

Worse for NASA (and Disney), even here on Earth, where we know intelligent life has arisen, we've only been at it for, say, 1,000,000 years, being generous and depending on what 'intelligent' means.  But if it means having language and communicating by electromagnetic radiation (like radio), so that we could communicate with ETs, that's only been about 100 years, and we probably won't last much longer, either.  So the probability that smart life is present at any given time, both in us and in any other such place, is a minuscule fraction of the time that life has been around on any of these lucky 100 trillion planets.
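That fraction-of-time argument is worth putting in numbers.  Using the loose figures from the text (life on a lucky planet for ~3.5 billion years, a radio-capable window of ~100 years), a sketch, under those assumptions only:

```python
# Rough figures from the text: ~3.5 billion years of life on a lucky
# planet, and a radio-communicating window of only ~100 years.
life_span_years = 3.5e9
radio_window_years = 100.0

# Fraction of its history during which a planet can chat by radio:
chatty_fraction = radio_window_years / life_span_years
print(f"fraction of a planet's life spent chatty: {chatty_fraction:.1e}")

# The chance that two such planets, with unsynchronized histories, are
# both chatty at one given moment is roughly the square of that fraction:
both_now = chatty_fraction ** 2
print(f"both chatty at the same moment: {both_now:.1e}")
```

A few parts in a hundred million per planet, and vanishingly less for any two planets at once: the large planet counts have to fight against exactly these odds.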

In that sense, large numbers come nowhere near guaranteeing that there are smart anythings anywhere else.  The chance that us-like life is out there now, and that 'now' means we can communicate with it, may be rather minuscule.

Forget about chatting!
In one of our previous posts, about Gliese 667C, we noted the problems with thinking that we could communicate with, much less actually go to, such places (assuming we understand physics, like the limiting speed of light, correctly).

Kepler 186f is said to be about 500 light years away.  That means that a signal that we can pick up from there was sent when Da Vinci was painting the Mona Lisa.  If there was intelligent life there, and they're at all like us, they may well have obliterated themselves long ago.  But suppose they're peaceful (having evolved way beyond us), then just to send a friendly radio wave of "Hi!" to them, and get a wave back would take until the year 3014. By then most everything would have changed about human life here, with lots of world wars (though, of course, Republicans would still be trying to keep ordinary people from being able to afford a doctor).  Forget about chatting with the ETs!  Even Google will be out of business by that time.
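The timing claims above are just light-travel arithmetic, taking the quoted ~500 light-year distance at face value:

```python
# Kepler-186f signal timing, assuming the ~500 light-year distance
# quoted in the press coverage.
distance_ly = 500
this_year = 2014

# Light arriving now left the planet one light-travel time ago:
year_light_left = this_year - distance_ly          # the Mona Lisa era

# A greeting sent today is answered, at the very soonest, one full
# round trip later:
year_reply = this_year + 2 * distance_ly

print(f"light we see in {this_year} left in {year_light_left}")
print(f"earliest reply to a 'Hi!' sent now: {year_reply}")
```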

And as we said about Gliese 667C, which at a mere 22 light years away is more than 20 times closer than 186f, physically getting there would not be half, or any, of the fun.  It would be impossible in any practical sense, and even if we could actually do it, it would take millennia of space travel to get to 186f, and when we got there, there might be nobody still around to drop in on.

So, what is the purpose of the space probe?
Without being too much of a spoilsport, because up to a point this kind of exploration really is interesting and may now and then tell us important things about our universe (not likely to be comforting to dogmatic religion), we have to ask about the purpose of this kind of probing.  In a sense, for reasons we suggest above, the numbers tell us nothing that we didn't already have almost as strong a reason to believe.  It would take something like a Genesis literalist to think that there would be no other planets with life on them, or even that there would be very few.  Either these findings suggest the plausibility that forms of life must exist out there, or else the burden of proof should be on a denier to show how, in the face of these overwhelming numbers (and not counting theories about multiple independent universes), there could fail to be some such 'life' on lots and lots of planets.

Of course, this is really just science fiction, almost literally.  The vast majority of any such planets are, were, or will be millions or even billions of light-years away.  That means what we see today isn't there now, but was there eons ago.  Much of that light has been streaming here since before there was life on Earth--or even before there was an Earth!  Indeed, if a typical star's lifetime is around 10 billion years, much of what we see no longer exists as such; likewise, much or most of what actually is out there came into existence too recently (even if millions or billions of years ago) for any evidence to have reached us.

So, it takes either a television sci-fi producer, a NASA PR rep, or a real dreamer to think we could ever go there or really communicate with much or even any of what must be out there.  If we really thought anything like that, we should intensely be doing very down-to-earth studies to see if the speed of light and relativity are limiting factors or whether transformative new aspects of space itself remain to be discovered.

At what point is the research cost** not worth the number of people who could be fed by the same funds, and so on?  When does asking such questions make one just a killjoy, and when does it make one concerned for the problems, and the actual unknowns, on the one planet we actually can do something about?



**Or, as we've suggested before, if this really is mainly just entertainment, why not let the video or other entertainment industries pay for it?

Friday, April 18, 2014

Another way to look at the university research budget sand castle problem

Yesterday we noted that a clear awareness of crunch-time for university-based science and graduate training is 'in the air'.  This is the result of an oversupply of doctoral graduates and a shrinking level of research funding, and it's leaving young people, and even some older ones, high and dry.  It's associated also with the loss of support for higher education as a general societal good--legislators are backing away from providing funds to state universities.  One casualty is the tenure-track job, which is being replaced by instructional serfdom.

These things reflect a more general turn towards individualism in our society--indeed, if you want to go to college, well, pay for it yourself!  But it's also a reflection of the self-serving college and university 'bubble' by which we have advertised 'education' and our research findings so heavily, to create societal demand, but without matching substance beneath it.

Many articles and blog posts and so on are being written, wringing hands about this.  We mentioned the Alberts et al. PNAS commentary yesterday, written by experienced, senior people in science, but there have been many others.  We write in sympathy with the views expressed and have, like the authors of these and many other commentaries on this crisis, tried to suggest ways to get through trying times.

However, there is another very different way to look at this.

Social change must occur on its own terms
We, and authors of bemoaning commentaries that make recommendations for how to face these problems, are generally senior.  What we naturally tend to think of, and to suggest, amounts to ways to return to how things were done in the past--to how it was when we were young, the system we came up in, liked, got used to and which we would suggest make a come-back.  We did well during our decades and so tend to think we know what's right.  We naturally tend to propose changes meant to maintain the status quo.

But maybe that's wrong. Maybe we should not be listened to.  Maybe it's natural and right that we be put out to pasture.  We had what we view as halcyon days and they do seem definitely to have been gentler and easier than what is faced today.  But perhaps the solutions now will have to be different.

Perhaps lifetime tenure is obsolete.  Perhaps dependence on the grant system can't be reinstated, and major shifts in jobs will have to occur, and shouldn't be mourned.  Perhaps academic life will become less desirable, or will come to be something very different from what we elders knew and liked.  This is already happening, of course, not so much by design but because universities as businesses make decisions based on bottom-line considerations more than they used to, rather than what's best for scholarship or research or educational interests.  At least as we elders see those.

Perhaps, even, intensified competition is just the way it'll have to be.  Perhaps the capitalistic view that this is the hard-knocks way for society to thrive, trimming fat, intensifying effort and so on will become the norm.  Perhaps universities will have to shrink, professors losing jobs that don't really matter in the online world.  Perhaps the existence of excess labor pools--instructors who can't get tenure-track jobs and instead work by the hour when and where they can get jobs--is just going to be the way of the world because it is more economically 'efficient' (for society as a whole).  Perhaps this is a return not to the way it was for current elders, but much farther back, to the itinerant scholar days, ones who sing for their supper as individuals.  That's how it was in much of Classic times and the Middle Ages, after all.

In fact, it will just happen, however it happens.  Powers that be will struggle to keep things as they are and newcomers will struggle to change them, all in ways no one can really predict.  But perhaps the gradual de facto return to some forms of social and intellectual elitism, along with income inequity, that we are already seeing is the path of the future, even if we elders don't like that.  Perhaps our ideas about 'democracy' are just naive.

Maybe we should just not be the ones invited to write editorials about this 'crisis': maybe it's a crisis only in the mirror on the past.

Perhaps instead, young people will somehow restructure things in a way we elders can't or don't envision, and hence could never recommend.  Maybe they and only they should be writing about this---or, more realistically, maybe this needs to be worked out, by them, through the social media rather than the stodgy outlets we elders tend to use.

Given the number of stressors on the system, however, much of the change and the resolution is likely to be unplanned and will just in some meandering or chaotic way be where universities find themselves when the dust settles.  

Whatever replaces our type of world will become the new status quo, the one the new elders mourn the passing of fifty years from now, as our generation fades into the sunset of the world we have known.

Thursday, April 17, 2014

Playing in sand castles is no game! Funding science

There are rumors that the proposed federal NSF budget will cut some areas (cut, not hold steady) by amounts well into double digits (like around 20%).  That's a permanent cut imposed over just one year, I think, on top of the steady-at-best budgets of recent years.  And a new commentary by Bruce Alberts et al. in PNAS bemoans the similarly serious situation in biomedical (NIH-based) research.  These latter authors make many or most of the same points that we have often been making here (and we were not alone by any means): these are not sour-grapes rants but are widely perceived as truths about today's circumstances in science.

The points have to do with the poorly justified if not selfish excess production of PhDs, the hypercompetitive funding and publishing environment that eats up too much time while it stifles creativity, the conservative and cumbersome grant system, administrative creep and so on.

How did we get into this situation?

In a way we got into this situation because the idea of an ever-growing economy ran up against the real world (that, ironically, science is supposed to be about understanding).  We could and should have known this, but nonetheless built a sand-castle research/university welfare system too close to the shore, and now the tide of the inevitable is about to wash into or over it.

Sandcastle in Singapore; Wikimedia

We smugly expanded at exponential rates though any idiot (even a scientist!) knows that in the real world, as opposed to Disney perhaps, exponential growth must reach limits.  We behaved short-term and totally selfishly, building our programs, training ever more graduate students, asking for ever bigger grants, bloating our administrations, becoming more hooked on overhead and soft-money salaries than a downtown druggie is addicted to meth.

This was a university 'bubble' that we built, and our society bought into it.  Now we're getting our comeuppance.  It's too bad, because the people most affected will be the younger scientists, who are innocent of the greedy behavior we elders indulged in during our careers.  It is we who deserve the slap on the backside, but the bruises will fall on our students--indeed, are already falling on them.  There are not many university jobs, and in many fields of scholarship, including hard-core and softer science as well as the non-STEM subjects, the choice is stark: taxi-driving jobs compete with the prospect of a tenure-track job.

Universities are, often cravenly, saving money by denying tenure and hiring nearly unpaid adjunct instructors (but not reducing tuition accordingly, of course), and labs are laying staff off (to go compete for taxi licenses) because even some Dominant Baboon scientists can't get enough grants to feed their mills any more.

Now, we know that nationally, our Wall Street oligarchs treated themselves to a massive recession of which we, not they, were the victims, and they are getting off the hook for their evils.  But even forgetting that, the economy has had its downturn, as economies always do (the cycling tide of exponential growth).  So there is a constriction being laid on top of the overtly exponential-growth behavior of our universities.

In a downturn, there is a legitimate need to sort out priorities, which is less needed when everything is growing like Topsy.  Some areas have to be cut if we are to salvage what's really important. We here have often written critically of the puffed up, incremental rather than creative blowing away of large amounts of funding for various Big Data projects.  We've said that funding cuts might actually be a good thing if they forced people to think about their science rather than just buy more technology.  And both NIH- and NSF-related fields are guilty of devouring logs and spewing out sawdust.

But in a humane society, as ours should be, there should be a phase-out period of areas that are not delivering enough goods.  In our current system, however, there is so much lobbying and jockeying and self-promotion that this is not likely to be a humane process.  This we think is especially so if the cuts are quick, hard, and without much warning.

Either we'll continue with the brutally intense competitive environment, hostile to constructive interaction, in which we are already immersed in many areas of university science, or we'll have to bite some bullets.  We need to train substantially fewer graduate students.  Tenured faculty may need to do more actual teaching ourselves (fewer TAs).  We will have to scale back our labs to have fewer post-docs and technicians, and may need to do more actual science ourselves.  We may have to be more selective and restrictive in what we do or propose to do.  Administrations will have to make do with fewer administrators, fewer shiny new buildings, less lavish office furniture, and less addiction to overhead.  Medical schools may actually have to learn to pay their employees (rather than relying on NIH to do that).

These changes, even if they occur, won't help those we've already misled into entering fields in which the impending crunch was not hard to see, even years ago: they are the innocent victims.

We think what is needed, if it were possible, is a frank but non-partisan national discussion of what kinds of science and scholarship are most important and to phase in more funds for those and less for areas that, no matter how legitimate, are just less vital these days or less promising of major new discoveries.  We should consider academic employment practices and things like tenure and job security.  If they have to change, it should be in a phased way and not be punitive the way it is becoming now.

Alberts et al. suggest that we train them, our PhDs, for jobs other than in academe.  That's a great point, but if it's just an excuse for us to keep recruiting the same number of graduate students, it's a selfish ruse to preserve our business as usual, because we'd just quickly flood these other job areas if we did that.

The golden days of science (and scholarship--not all the important things in life are STEM things) may not be over, if we can behave properly and leave our six-guns at the coat-check.  But it does not seem likely to be easy or, worse, free of partisan politics unrelated to science itself.

What are you supposed to think, if you're a new graduate student, or a recent PhD?

Tuesday, April 15, 2014

STEMing the tide, part III: A (new) 'modest proposal'

We have been writing about the push in this country to strengthen the STEM subjects in education, science, technology, engineering and math, because of their financial, career, and material role in society. This is being done explicitly because when money is tight, subjects like the arts, humanities, and social sciences don't pay direct benefits.  This can be seen as inexcusably crass, but in a tight job market and culture increasingly embedded in things technological, with weakening public support for education, it is an understandable trend.

We happen to be Luddites in this regard, perhaps, because we think that our society should not back away from the more literary, esthetic, and contemplative aspects of life.  This is not snobbery on our part, or at least not only that--thinking that everybody ought to love watching opera or reading the Iliad.  The point is a societal one.  Much of our culture, such as pop music, sports, video games, chat sites, and the like, is called 'popular' in part because everybody likes it (opera once was 'popular').  But the point here is that you don't need formal education to be exposed to these things, indulge in them, or appreciate them for their values.

Appreciation of the broader aspects of life, such as the 'finer' literature and arts, history, and philosophical and anthropological thought, is more demanding: often outside the modern vernacular, technical, complex--even boring.  But exposure to them is greatly enhanced by formal education, just as is the case for STEM subjects.  They have snob value in our social upper crust, but they also have aspects of value and appeal that might benefit and edify the lives of many more.

Here 'education' refers first to K-12. The current way to describe topics is to group the fashionable ones under the rubric STEM and then largely dismiss the others by omission--let them be nameless!  School districts are, we regularly read, shrinking or abandoning their music and arts programs, teaching of classics and the like, because they cost money, while adding pre-college specialty courses such as calculus. In a nutshell, this is based on our cultural obsession with money above all things, because these are the subjects, we are told, that industry wants and that make money for them and thus their employees.

But if being an industrial chemist or mechanical engineer pleases the wallet, we rarely hear that they please the soul.  We have not heard of a single serious-sized school district that has abandoned its sports programs, such as football or basketball, which are quite expensive, to augment the arts.

Universities and perhaps many colleges, are racing onto (or is it 'down' to?) the same money-driven bandwagon. Abandoning part of their mission to 'educate' informed citizens, they are widely shrinking or even sometimes running completely away from the non-STEM areas (but not, of course, football or basketball).

The scientific data on successful, healthy aging
I just returned from a workshop at the National Research Council, underwritten by NIH's National Institute on Aging (NIA), to discuss what we have learned about the basis of longevity and healthy lifespan experiences.  An objective was to advise the NIA on future directions for investing its resources, based on what has been learned from the work supported heretofore.  The results, in a central area related to the question at hand, were in fact major and clear--and should provide equally clear directions for future NIA investment.

Health is a biological phenomenon (even mental health, of course, since the mind is a biological organ). The approach to human lifespan, longevity, health and life-course experience relates to the causes of negative as well as positive experience.  We should use our research technologies to find and identify the causes of either, so we can intervene with the negative and reinforce the positive.

In this case, the working model, in our scientific age that puts technology first, has been that ill health causes social and psychological decline.  If you are sick--a biological and in that sense technical state--you cannot hold a job, may be involved in abusive domestic situations, become depressed, and then invest badly in food or other resources, and the like.  If you are sick, you may be more likely to be overweight, shorter, more likely to drink too much or to smoke.  So we have a plague of people in whom to search for the misguided cells, in order to alter their behavior.

Surprisingly, however, the reported research has shown, rather clearly and in both humans and other animal models (in particular, findings in wild primates were reported at this meeting), that quite the opposite is true:  social standing and cultural milieu are major, primary determinants of life-course health and experience--even more so than money itself!  Longevity and even height are strongly determined by the degree of satisfaction or control you feel in your life and your social position; even physical resources (income) do not over-ride the social effects.  Excepting, of course, strong harmful genetic effects in a small fraction of people, disease and lifespan are mediated largely by these aspects of social environment which, in turn, affect your health prospects.  If you're born on the wrong side of the tracks, your fate is largely sealed.

Since similar results were reported in several respects, and even in other species, one need not worry about the details, which seem to be generally small relative to the main picture.  The details needn't be studied to death.  Instead--we paid for the research, the research was very carefully and well done, and we got a clear result!  The question has largely been answered, and we now know how best to invest future resources most effectively for life-course improvement.

But the answer will surprise you!

Our 'modest proposal'
In 1729, Jonathan Swift saw the problem of widespread poverty among the downtrodden in Ireland, and suggested a solution:  they should gain income by selling their excess children (of which there were many) to be cooked in various culinary ways to satisfy the rich.  Many savory recipes were provided.

Carve, saute, and don't forget the sauce.  Drawing by Dore

That essay was a vicious satirical critique of societal inequity in Swift's time, and we (living in more civilized times, we generally suppose) would never think to suggest that kind of solution to the offensive, growing inequity in our society today.  But we do have a modest suggestion for today, based on our National Institutes of Health living up to its word, and using the results of research it sponsors to improve our society's lot.

The non-STEM parts of our educational system address quality-of-life issues having to do with your assessment of the world, your sense of well-being, and your ability to integrate an understanding of civic life across different realms of human thinking.  People with a greater sense of integration and well-being will be better able (as the research shows) to negotiate society, and this will lead to better prospects, better health, and longer life.

Of course, knowledge of the STEM subjects is important in this.  But we are already pouring resources there, clearly with more to come, while pulling the plug on the non-STEM subjects that are associated with giving you a shot at being on the better side of the tracks--better and more equitable places in society--and which, we now know thanks to NIA research, lead to longer and healthier lives.  This quantitatively and qualitatively trumps the relatively smaller, consequent rather than causal, effects of the various high-technology, costly things we spend funds on in relation to the pandemic diseases like heart disease, stroke, obesity-related diseases, and so on.

So: what the NIA should do is to redirect its funds from these very sexy technological research approaches to life-course issues (like GWAS and so many other Big Data fashionable fields), and urgently pour these resources instead into intervening in the actual major causes of impaired lives.  NIA should underwrite the improvement of K-12 education nationwide, and should endow non-STEM programs in universities, conditional on those areas being retained as serious-level requirements for graduation.

If we let this recipe cook for a decade or two we'd have a more sophisticated, knowledgeable, intellectually resourceful, and more savory, equitable society with more peace of mind.  And the populace would, as a direct consequence, have more intellectual resources to engage in creative and innovative science and technology, with the economic benefits that go with that.  As a result, the rates of our common chronic diseases, including mental deterioration, and their associated misery and costs would be way down.

The diseases that would be left would be the truly biological or genetic or clear-cut environmentally caused instances of these diseases, on which cases focused research (rather than just big-data collection) might have a reasonable shot at devising cures and prevention.

That is our modest proposal for how we should use the results of the research we pay for (but we dare to suggest that it's not how we're using them now).