Can we predict a person’s face using DNA?

'Appleby believes he's come up with a way to defeat facial recognition software.'
Image from cartoonstock.com

So, do you think a person’s face can be reconstructed just from their DNA? It has long been a cherished dream for security agencies, the stuff of nightmares for common folk, and a recurring sci-fi theme. Now, in a paper published in the Proceedings of the National Academy of Sciences (PNAS), Craig Venter, the multi-millionaire genome entrepreneur, and colleagues from his San Diego based Human Longevity, Inc. (HLI) claim to have discovered just this magic potion [1].

What do they claim?

Using a very costly camera setup to capture facial information, hundreds of rounds of high-coverage sequencing for 1,000+ people and some fancy machine-learning algorithms, they tested SNPs for association with various facial features, such as the height of the cheekbones. They could correctly identify one individual out of ten drawn randomly from HLI’s database around 74% of the time. They then proceeded to caution that public genomic databases can be used to work out an individual’s facial identity and hence that privacy is at risk.

Now, this does sound very dangerous and hence quite rightly created a furore. A high chance of identifying an individual just from their genetic makeup would be a boon for catching criminals, but it would also put at risk all the published genetic databases, which routinely remove personal information before public release. And right on cue came the heavy-handed rebuttals from scores of geneticists and computer scientists well versed not only in the advanced machine-learning algorithms used here but also in human genomics.

Controversy!

The main problem with Venter & Co’s magic is that they claim to have used genomic data (SNPs) to predict the facial features of an individual, including age etc., based on their database. But their (HLI) database contains only around 1,000 individuals, and if one knows just the demographic variables such as age, sex or race, then it’s quite easy to identify an individual out of 10 without any genomic information at all. To prove this, Yaniv Erlich from the New York Genome Center undertook a small study in which he used a simple re-identification procedure that relied on basic demographic information present in many databases: age, sex, and self-reported ethnicity. Unsurprisingly, Erlich achieved a 75% success rate, compared to Venter’s 74% which relied on much sophistication and other fanfare [2].
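Erlich’s point is easy to convince yourself of with a toy simulation. Below is a minimal sketch of my own (made-up demographic categories with uniform frequencies, not Erlich’s actual procedure or data): pick a target from a lineup of ten, keep only lineup members with the same age band, sex and ethnicity, and guess among them. With even a handful of demographic categories, a lineup of ten rarely contains two people with the same profile, so the hit rate ends up far above the 10% you would get by guessing blindly.

```python
import random

random.seed(1)

# Hypothetical category frequencies, roughly mimicking a small research cohort.
AGE_BANDS = ["<30", "30-45", "45-60", "60+"]
SEXES = ["F", "M"]
ETHNICITIES = ["European", "African", "East Asian", "South Asian", "Admixed"]

def random_person():
    return (random.choice(AGE_BANDS), random.choice(SEXES), random.choice(ETHNICITIES))

def reidentify(target, lineup):
    # Keep only lineup members whose demographics match the target exactly,
    # then guess uniformly among the matches (the target is always a match).
    matches = [p for p in lineup if p == target]
    return random.random() < 1.0 / len(matches)

trials = 10_000
hits = 0
for _ in range(trials):
    lineup = [random_person() for _ in range(10)]
    target = random.choice(lineup)
    hits += reidentify(target, lineup)

print(f"Re-identification rate from demographics alone: {hits / trials:.2f}")
```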

Apart from this, Venter & Co’s claimed novelty is in using trait predictions to identify people from their facial features. Unfortunately, if one takes a careful look at those predicted features, it becomes quite clear that their re-identification power probably comes from inferring genomic ancestry and sex from the database rather than from trait-specific markers [2]. Thus, they do not predict the face structure of a specific person, or their height. Instead, they get something very close to the population average and then use those values to re-identify.

Just have a look at the images below from their paper (S11) [1, 2]:

What we can see is that the predicted faces are pretty much identical to each other. Hence, the much-vaunted Venter paper predicts a “genetic white male face” [2] rather than the actual face of a person.

Now the smelly stuff

Gymrek et al. [4] in their paper used a combination of a surname with age, state etc. to successfully re-identify a target individual. So, as Erlich [2] mentioned, if Venter et al.’s method worked astronomically well, wouldn’t it be easier to just use some ordinary public database, instead of the highly curated HLI one (which has been sequenced with great precision), to pick an individual and then successfully re-identify that person?

Now it’s quite reasonable to ask: if this paper contains such misleading information (especially the way it’s promoted), then how did it get published in a prestigious journal like PNAS? Here, the excellent coverage by Nature News puts things into perspective [3]. The paper was earlier flat-out rejected by the journal Science on the basis of negative reviewer comments. Venter then used the highly controversial PNAS submission route in which a member of the National Academy of Sciences can recommend the reviewers for a paper. Venter took this route and chose the reviewers: two of them are information-privacy experts and the remaining reviewer is a bioethicist. Can you spot the problem here? A paper that applies sophisticated machine-learning algorithms to genomic databases did not have a single genomics expert or statistician among its reviewers, and that is the main stinker here!

Many will remember that this controversial PNAS submission route was the reason why, a few years ago, we saw a highly unscientific and barmy paper get published, one which famously claimed that a “worm-like creature must have had sex with a winged, insect-like creature, and the metamorphoses of butterflies resulted from these distinct lines of genes merging” [5]. So, is this a similar case? Are we yet again seeing a modified form of cronyism here? This point is worth pondering, I say!

So what now?

According to Nature News, Venter & Co will soon come back with a fanciful reply to all the mathematically valid objections put forth by Erlich; rather than wait for those magic tricks, I would rather carry on with vigilance. Science has become very powerful and has permeated almost every sphere of human existence imaginable. We as scientists should continue to check and re-check every piece of scientific work that comes out, and should have the moral tenacity to ring the bell on misinformation.

Especially now that public genomic databases in various biobanks have become an important resource for studying complex diseases and developing personalised medicine, we should exercise due caution towards papers like Venter’s which claim non-existent threats to genetic privacy.

References:

1). Venter & Co’s paper – Identification of individuals by trait prediction using whole-genome sequencing data

2). Erlich’s neat rebuttal – Major flaws in “Identification of individuals by trait prediction using whole-genome sequencing data”

3). Nature News commentary – Geneticists pan paper that claims to predict a person’s face from their DNA

4). Gymrek et al.’s work on re-identification – Identifying personal genomes by surname inference

5). PNAS disaster paper story – Controversial caterpillar-evolution study formally rebutted


Do we really need a new revamped theory of evolution?

A short answer to this meaty question is – NO!

But if you have been keeping track of science news in the press recently, the half-baked articles all carry this vague notion that an urgent extension to the theory of evolution is needed. And if you ask why we need such an extension, the answer invariably comes back as – EPIGENETICS.

This summer we had a major controversy regarding the inadequate and plainly wrong interpretation of epigenetics and gene regulation in Siddhartha Mukherjee’s article in The New Yorker, based on his new book – The Gene: An Intimate History (Scribner, 2016). Apart from that, the Royal Society hosted a conference on “New Trends in Evolutionary Biology: Biological, Philosophical, and Social Science Perspectives”, which harped on bringing about a thorough revamp of evolutionary biology. This was followed by a few articles, such as journalist Robby Berman’s Big Think piece, “How about a new theory of evolution with less natural selection?”, and a longish essay by the eminent Carl Zimmer in Quanta Magazine, “Scientists seek to update evolution.”

So, what is this controversy about, who are the “Third Way of Evolution” group, and why, despite all these claims, does evolutionary theory not require any sort of revision? This post discusses these issues and hopes to convince readers that all is well with evolution.

The journos and the “Third Way of Evolution” group claim that the rising field of epigenetics is a way by which the environment leads to long-lasting changes in the phenotype which can be inherited without altering the DNA sequence directly.

These changes come about through the addition of methyl groups to a few nucleotides, aka DNA methylation. Such changes can then be inherited by the next few generations, and so these folks claim such changes can be subject to natural selection, leading to a form of evolutionary change similar to the old, bygone Lamarckian theory of the inheritance of acquired characters. Apparently, traditional evolutionary theory doesn’t account for this mechanism and hence is in dire need of change!

But here are some important aspects which people misread:

  • This form of neo-Lamarckian inheritance is not permanent and is wiped clean after a few generations. The most touted example of inherited epigenetic changes, in a plant, lasted for 31 generations before being erased. And to date, no evidence has come up of epigenetic changes being inherited in a vertebrate.
  • When geneticists trace adaptive changes in the genome, what they see are actual changes in the DNA sequence itself, not the presence or absence of methyl markers on the nucleotides making up that sequence.
  • An increase in the frequency of DNA sequences that are susceptible to environmentally induced methylation, because they are adaptive, is straightforward natural selection and doesn’t require a revision of evolution.
  • Some methylation changes are indeed coded by the DNA, as in the mediation of parent-offspring conflict. But this form of evolution is not driven by the environment; it has happened through normal natural selection.

Berman in his potpourri article used niche construction as an example of how epigenetics can work. But what he didn’t understand is that niche construction results more from DNA sequence changes that are adaptive in the novel environment the organism encounters, not from the environment itself. As Jerry Coyne put it brilliantly – “Berman has no idea what he’s talking about here.”

As regards the much-advertised meeting held by the Royal Society – “New Trends in Evolutionary Biology: Biological, Philosophical, and Social Science Perspectives” – many of the attendees were sponsored by the Templeton Foundation, which in recent years has led the highly stupid movement to bridge science and religion. They have funded pretty much any project which is woozy and unscientific but sounds sciency/lofty in its aims. Check some of these woozy grants out –

So, this raises huge doubts about the impartial scientific motives behind this conference. Are these so-called revisionists simply “careerists”, as Jerry Coyne puts it? Sadly, in this era of waning grants and increasing pressure to publish in high-impact journals (which is itself a crap way to measure science), some people have come up with these half-sciency, half-baloney ideas which promise the moon while standing on very shaky ground.

This misunderstanding of epigenetics and the extent of its role is not restricted to these folks; even the respected scientist and Pulitzer-winning author Siddhartha Mukherjee made a similar faux pas this summer with his article in The New Yorker. Nature even wrote an article collating the various viewpoints on the issue. His mistake was not claiming that evolution needs an overhaul but something more nuanced: he said that epigenetic markers play a huge role in gene regulation. What is known to every biologist is that it’s transcription factors (proteins) which control the rate of transcription from DNA to messenger RNA by binding to specific DNA sequences, which in turn helps regulate the expression of genes near those sequences. Now, this coming from The New Yorker and Mukherjee would convince the lay reader of a major role for epigenetic markers in gene regulation, despite it not being true! And the final nail in the coffin was when, towards the end of the article, he speculated that such inheritance of acquired characters via epigenetic markers (Lamarckism at its best!) could play a major role in evolution. As has been said again and again, there is simply no evidence for this, and needless speculation on such shaky ground is harmful to science.

For more:

  1. The Role of Methylation in Gene Expression
  2. Researcher under fire for New Yorker epigenetics article
  3. The Imprinter of All Maladies
  4. Once again: misguided calls for a thorough revamping of evolutionary biology
  5. The New Yorker screws up big time with science: researchers criticize the Mukherjee piece on epigenetics

Population Genetics Undergrad Class

A nice bunch of notes for learning a wee bit of population genetics. Covers recent advances in pop gen too!

gcbias

We’re teaching Population and Quantitative Genetics (undergrad EVE102) this quarter. We’re posting our materials here, in case they are of interest.

A pdf of the popgen notes is here

The slide pdfs are linked to below

Lecture One [Introduction and HWE]. Reading  notes up to end of Section 1.2.

Week 1

lecture_2_rellys_inbreeding  [HWE, Relatedness (IBD), Inbreeding loops] Read Sections 1.3-1.5

lecture_3_population structure [Inbreeding, FST and population structure]

1/2 class Reading Discussion Simons Genome Diversity Project and Kreitman 1983 + 1/2 class on  lecture_4 [Other common approaches to population structure, Section 1.7 of Notes optional reading]

Week 2

lecture_5_ld_drift [Linkage Disequilibrium + Discussion of Neutral Polymorphism] Reading Section 1.8 of notes.

lecture_6_drift_loss_of_heterozygosity [Genetic Drift & mutation, effective population size. Read Chapter 2, up to end of Section 2.3]

Lecture 7. Finishing up lecture 6 & Discussion of Canid paper.

Week 3.

lecture_8_coalescent. [Pairwise Coalescent & n sample Coalescent…


Evolution and cancer

In a 3-part series, Prof. Mel Greaves provides an excellent introduction to the evolution of cancer. Cancer is increasingly being looked at through the evolutionary lens, and rightly so. As with antibiotic resistance, we have to realise that the evolution of a metastasis from a single tumour cell is a dynamic process shaped by various selection pressures.

What has evolution got to do with cancer?

Darwin’s branching tree of evolutionary phylogeny

The principles of evolutionary natural selection in cancer

Our understanding of cancer is being transformed by exploring clonal diversity, drug resistance, and causation within an evolutionary framework. The therapeutic resilience of advanced cancer is a consequence of its character as a complex, dynamic, and adaptive ecosystem engendering robustness, underpinned by genetic diversity and epigenetic plasticity. The risk of mutation-driven escape by self-renewing cells is intrinsic to multicellularity but is countered by multiple restraints, facilitating increasing complexity and longevity of species. But our own species has disrupted this historical narrative by rapidly escalating intrinsic risk. Evolutionary principles illuminate these challenges and provide new avenues to explore for more effective control.

Lifetime risk of cancer now approximates to 50% in Western societies. And, despite many advances, the outcome for patients with disseminated disease remains poor, with drug resistance the norm. An evolutionary perspective may provide a clearer understanding of how cancer clones develop robustness and why, for us as a species, risk is now off the scale. And, perhaps, of what we might best do to achieve more effective control.

 

What to do when your Hessian matrix goes barmy!!!

So you ran some mixed models and got some barmy messages in return? Are they these ones?

“The Hessian (or G or D) Matrix is not positive definite. Convergence has stopped.”

OR

“The Model has not Converged. Parameter Estimates from the last iteration are displayed.”

Then this post is for you. Let’s first try to understand things right from the basics of matrix algebra itself. Before going into the Hessian matrix, let’s take a detour into the murky world of mixed models, see what’s going on there, and find out how we end up with a thing called the Hessian matrix!

A linear mixed model looks like this (from Wikipedia):

\boldsymbol{y} = X \boldsymbol{\beta} + Z \boldsymbol{u} + \boldsymbol{\epsilon}

where

  • \boldsymbol{y} is a known vector of observations, with mean E(\boldsymbol{y}) = X \boldsymbol{\beta};
  • \boldsymbol{\beta} is an unknown vector of fixed effects;
  • \boldsymbol{u} is an unknown vector of random effects, with mean E(\boldsymbol{u})=\boldsymbol{0} and variance-covariance matrix \operatorname{var}(\boldsymbol{u})=G;
  • \boldsymbol{\epsilon} is an unknown vector of random errors, with mean E(\boldsymbol{\epsilon})=\boldsymbol{0} and variance \operatorname{var}(\boldsymbol{\epsilon})=R;
  • X and Z are known design matrices relating the observations \boldsymbol{y} to \boldsymbol{\beta} and \boldsymbol{u}, respectively.
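To make the notation above concrete, here is a minimal numpy sketch (invented numbers, a toy design with two groups and a random intercept per group) that constructs y = Xβ + Zu + ε exactly as defined:

```python
import numpy as np

rng = np.random.default_rng(42)

n, n_groups = 6, 2                                      # six observations, two groups
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # design: intercept + one covariate
beta = np.array([2.0, 0.5])                             # fixed effects (assumed values)

# Z maps each observation onto its group's random intercept
group = np.repeat([0, 1], n // 2)
Z = np.zeros((n, n_groups))
Z[np.arange(n), group] = 1.0

G = 1.0 * np.eye(n_groups)    # var(u): independent random intercepts, variance 1
R = 0.25 * np.eye(n)          # var(eps): iid residual errors, variance 0.25

u = rng.multivariate_normal(np.zeros(n_groups), G)
eps = rng.multivariate_normal(np.zeros(n), R)

y = X @ beta + Z @ u + eps    # the linear mixed model from the equation above
print(y)
```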

Let’s focus on the variance-covariance matrix G (some software refers to it as D). It is the matrix of the variances and covariances of the random effects: the variances sit on the diagonal and the covariances off the diagonal. So if you have a mixed model with two random effects, say a random intercept as well as a random slope, then you have a 2 × 2 G matrix. The variances of the intercept and slope terms are on the diagonal, and the off-diagonal element contains their covariance.

Remember, this G matrix contains variances, so mathematically speaking it should be positive definite: all its eigenvalues positive, which in particular requires the diagonal elements (the variances) to be positive. As true variances are positive, this makes sense.
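A quick numerical check of positive definiteness is to attempt a Cholesky factorisation (or inspect the eigenvalues); here is a minimal sketch with two made-up 2 × 2 G matrices, one healthy and one with a zero slope variance:

```python
import numpy as np

def is_positive_definite(m):
    """A symmetric matrix is positive definite iff its Cholesky factorisation exists."""
    try:
        np.linalg.cholesky(m)
        return True
    except np.linalg.LinAlgError:
        return False

# Made-up 2 x 2 G matrices: intercept/slope variances on the diagonal, covariance off it.
good_G = np.array([[0.8, 0.1],
                   [0.1, 0.3]])
bad_G  = np.array([[0.8, 0.6],
                   [0.6, 0.0]])   # zero slope variance -> not positive definite

for name, G in [("good_G", good_G), ("bad_G", bad_G)]:
    print(name, np.linalg.eigvalsh(G), is_positive_definite(G))
```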

The Hessian matrix referred to in the warning messages is the matrix of second derivatives of the log-likelihood with respect to the covariance parameters (those in G and R); it is what the software uses to calculate the standard errors of those parameters. If the Hessian calculated for the model is not positive definite, the algorithm gets stuck and cannot find an optimised solution.

So whatever results you get out of the mixed model won’t be correct or trustworthy. What that means is that the model you specified couldn’t estimate its parameters from your data. Some might choose to ignore this warning and move ahead, but my request is: please don’t!!! This warning is important, and NO, the software doesn’t have a vendetta against you or your project.

The next step, obviously, is to ask what you can do in this circumstance and what the solution might be. One approach is to check the scaling of your predictor variables in the model. If their scales are wildly different, that can be a good reason why the software has trouble calculating the variances. So just a change in the scaling of the predictors can solve your problem here.
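Rescaling is as simple as converting units or z-standardising the offending predictors before you fit the model; a minimal pandas sketch with invented variable names:

```python
import pandas as pd

df = pd.DataFrame({
    "weight_mg": [210_000, 250_000, 190_000],   # in milligrams: huge numbers
    "dose":      [0.10, 0.20, 0.15],            # tiny numbers on a very different scale
})

# Put predictors on comparable scales: convert units and/or z-standardise.
df["weight_g"] = df["weight_mg"] / 1000.0
df["dose_z"] = (df["dose"] - df["dose"].mean()) / df["dose"].std()
print(df)
```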

Another situation is when some covariance estimates are 0, or are missing altogether, or come without standard errors (SPSS usually does this and produces blank estimates). Don’t go on ignoring this, as something is fishy with the model itself. If the best estimate of your variance is zero, it means there is zero variance within your data for the effect under consideration. For example, you may have introduced a random slope for that effect, but in actuality the slopes do not differ across the subjects of your study, and a random intercept component might well explain all the variation.

So just remember, when something like this happens the best possible solution is to respecify the random components in your model, and that could mean removing a random effect. Sometimes, even if you feel (or have been told) that a given random effect has to be included because of the design of the study, you simply won’t find any variation for it in the data. Another option is to specify a simpler covariance structure containing a smaller number of unique parameters to be estimated.

Let me give an example to highlight this situation:

A researcher wants to understand the behavioural responses of rats living in cages in a lab building, using standard behavioural tests. Since the cages are situated on different floors and in different corners of the building, the researcher wants to check, before the main experiment, whether the rats’ responses to simple behavioural tests differ. Now let’s suppose there are 1,000 rats on each floor and 10 floors in the building. That makes 10,000 rats, far too many to study individually. So we take samples of rats within each floor, and the design indicates including a random intercept component for each floor, to account for the fact that rats on the same floor may be more similar to each other than would be the case in a simple random sample. If this is true, we would want to estimate the variance of behavioural responses among floors.

But we know that modern animal-facility guidelines call for rigorous protocols, and because of that the rats are kept in similar cages under conditions as similar as possible. We can then easily see that there won’t be much variance in behavioural responses among the floors. This leads to the scenario I described before, i.e., the variance for floors = 0, and the model is unable to uniquely estimate any variation from floor to floor above and beyond the residual variance from one sampled rat to another.
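The scenario is easy to simulate. Below is a minimal sketch (invented numbers; statsmodels’ MixedLM used purely as an illustration, not the specific software that produced the warnings above) in which the true floor-to-floor variance is zero: the estimated random-intercept variance collapses to (or near) zero, typically with exactly the sort of boundary/convergence warning discussed above, and the sensible respecification is to drop the random effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

floors, rats_per_floor = 10, 50
df = pd.DataFrame({
    "floor": np.repeat(np.arange(floors), rats_per_floor),
    # Behavioural score: same mean on every floor, i.e. true floor-to-floor variance = 0.
    "score": rng.normal(loc=20.0, scale=3.0, size=floors * rats_per_floor),
})

# Random intercept for floor: the variance component is estimated at (or near) zero,
# often accompanied by a boundary / convergence warning.
mixed = smf.mixedlm("score ~ 1", data=df, groups=df["floor"]).fit()
print(mixed.cov_re)          # estimated floor-to-floor variance, ~0

# Respecified model without the uninformative random effect (plain OLS).
ols = smf.ols("score ~ 1", data=df).fit()
print(ols.params, ols.bse)
```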

Finally, another option is to use a population-averaged model instead of a linear mixed model. Population-averaged models don’t have any random effects, but they do model the correlation among the multiple responses of the sampled individuals.
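For completeness, statsmodels also provides a GEE implementation of this population-averaged approach; a minimal sketch reusing the hypothetical rat data frame from the previous example, with an exchangeable working correlation within floors:

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Population-averaged (marginal) model: no random effects, but responses within
# a floor are allowed to be correlated via the working correlation structure.
gee = smf.gee("score ~ 1", groups="floor", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()
print(gee.summary())
```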

For more, read these —

  1. West, B. T., Welch, K. B., & Galecki, A. T. (2007). Linear mixed models: A practical guide using statistical software. New York: Chapman & Hall/CRC
  2. Linear mixed models in R- http://www.r-bloggers.com/linear-mixed-models-in-r/
  3. Model Selection in Linear Mixed Models- http://arxiv.org/pdf/1306.2427v1.pdf
  4. Hessian matrix in statistics- http://www.slideshare.net/FerrisJumah/hessin

Is consciousness a hard problem and why hasn’t it been solved yet?

The birth of self-consciousness: ‘Holy smoke, I’m standing here!’

Yeah, that’s probably how many imagine consciousness to have emerged, all in one single stroke. But is consciousness really that tractable? Is it explainable? The question of what consciousness is has dominated science and philosophy for centuries. Yet a satisfactory solution to this problem still eludes the best minds amongst us.

At one time, consciousness was considered a question to be pondered only by philosophers. This came into prominence with René Descartes and his theory of Cartesian duality (though Aristotle and Plato also had their versions of mind-body duality). In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God. This whole duality regime persisted until the 18th century, when physicalism moved into the uncharted region of neurology.

And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

Then, at a conference held in Arizona in 1994, up came a chap dressed in jeans who looked as if he belonged at a rock concert rather than at a staid academic meeting, and he gave a talk. In it he introduced consciousness as the hard problem in biology. He readily acknowledged how much the sciences had achieved in explaining the inner workings of the brain, but he asked: how do you explain sensations, such as colours and tastes? Can we scientifically explain how a bunch of interconnected neurons gives rise to a highly subjective process such as sensation? David Chalmers went on to propose his zombie thought experiment, in which a zombie is a hypothetical being indistinguishable from a normal human being except that it lacks conscious experience, qualia, or sentience. For example, a philosophical zombie could be poked with a sharp object and not feel any pain, yet behave exactly as if it does (it may say “ouch” and recoil from the stimulus, or say that it is in intense pain).

The notion of a philosophical zombie is used mainly in thought experiments intended to support arguments (often called “zombie arguments”) against forms of physicalism such as materialism, behaviorism and functionalism. Physicalism is the idea that all aspects of human nature can be explained by physical means: specifically, all aspects of human nature and perception can be explained from a neurobiological standpoint. Some philosophers, like David Chalmers, argue that since a zombie is defined as physiologically indistinguishable from human beings, even its logical possibility would be a sound refutation of physicalism. However, philosophers like Daniel Dennett counter that Chalmers’s physiological zombies are logically incoherent and thus impossible.

Ever since then, research into this area, long abandoned by mainstream science, has simply exploded. An early convert to this question of consciousness was the Nobel prize winner Francis Crick.

Upon taking up work in theoretical neuroscience, Crick was struck by several things:

  • there were many isolated subdisciplines within neuroscience with little contact between them
  • many people who were interested in behaviour treated the brain as a black box
  • consciousness was viewed as a taboo subject by many neurobiologists

Crick hoped he might aid progress in neuroscience by promoting constructive interactions between specialists from the many different subdisciplines concerned with consciousness. He even collaborated with neurophilosophers such as Patricia Churchland. In 1983, as a result of their studies of computer models of neural networks, Crick and Mitchison proposed that the function of REM sleep is to remove certain modes of interaction in networks of cells in the mammalian cerebral cortex; they called this hypothetical process ‘reverse learning’ or ‘unlearning’.

In the final phase of his career, Crick established a collaboration with Christof Koch that led to the publication of a series of articles on consciousness between 1990 and 2005. Crick made the strategic decision to focus his theoretical investigation of consciousness on how the brain generates visual awareness within a few hundred milliseconds of viewing a scene. Crick and Koch proposed that consciousness seems so mysterious because it involves very short-term memory processes that are as yet poorly understood. Crick also published a book, The Astonishing Hypothesis, describing how neurobiology had reached a mature enough stage for consciousness to become the subject of a unified effort to study it at the molecular, cellular and behavioural levels, and arguing that neuroscience now had the tools required to begin a scientific study of how brains produce conscious experiences. Crick was skeptical about the value of computational models of mental function that are not based on details of brain structure and function.

But now things have reached a stage where there are two camps: one which agrees with David Chalmers and Christof Koch and their panpsychism (or collective-consciousness) theory, and another, led by Daniel Dennett and Patricia Churchland, which argues that consciousness might just be an emergent property of an interconnected network of neurons and that there is nothing special about it.

Daniel Dennett argues that consciousness, as we think of it, is an illusion: there just isn’t anything in addition to the spongy stuff of the brain, and that spongy stuff doesn’t actually give rise to something called consciousness. Common sense may tell us there’s a subjective world of inner experience – but then common sense told us that the sun orbits the Earth, and that the world was flat. Consciousness, according to Dennett’s theory, is like a conjuring trick: the normal functioning of the brain just makes it look as if there is something non-physical going on. To look for a real, substantive thing called consciousness, Dennett argues, is as silly as insisting that characters in novels, such as Sherlock Holmes or Harry Potter, must be made up of a peculiar substance named “fictoplasm”; the idea is absurd and unnecessary, since the characters do not exist to begin with. It’s this criticism that hits the panpsychism idea.

“The history of science is full of cases where people thought a phenomenon was utterly unique, that there couldn’t be any possible mechanism for it, that we might never solve it, that there was nothing in the universe like it,” said Patricia Churchland of the University of California, a self-described “neurophilosopher” and one of Chalmers’s most forthright critics. Churchland’s opinion of the Hard Problem, which she expresses in caustic vocal italics, is that it is nonsense, kept alive by philosophers who fear that science might be about to eliminate one of the puzzles that has kept them gainfully employed for years. Look at the precedents: in the 17th century, scholars were convinced that light couldn’t possibly be physical – that it had to be something occult, beyond the usual laws of nature. Or take life itself: early scientists were convinced that there had to be some magical spirit – the élan vital – that distinguished living beings from mere machines. But there wasn’t, of course. Light is electromagnetic radiation; life is just the label we give to certain kinds of objects that can grow and reproduce. Eventually, neuroscience will show that consciousness is just brain states. Churchland said: “The history of science really gives you perspective on how easy it is to talk ourselves into this sort of thinking – that if my big, wonderful brain can’t envisage the solution, then it must be a really, really hard problem!”

So, with the big brain initiatives in the US and Europe, can we finally get more answers about what consciousness is? Will it turn out to be nothing more than an emergent property of neurons, or something as fundamental as a property of the universe itself?

For more, read these:

  1. The Stanford Encyclopedia of Philosophy
  2. Internet Encyclopedia of Philosophy
  3. Four philosophical questions to make your brain hurt

Are humans still evolving? Yes, both globally and locally.

An eternal question asked by almost everyone – are humans still evolving, or are we at an evolutionary equilibrium (if such a thing even exists)? Jerry says we are still evolving. An interesting read indeed!

Why Evolution Is True

The one question I’m inevitably asked after lecturing on evolution to a general audience is this: “Are humans still evolving?” What they really want to know, of course, is whether we’re getting smarter, taller, handsomer, and so on. Well, with respect to those traits I always say, “I have no idea,” but humans are still evolving, albeit in ways that don’t excite most people. I’ve posted about this twice (see here and here), and in recent times we have evidence for H. sapiens evolving to produce, in women, earlier age of first birth, later age of last birth and (also in women) increases in height in some places and decreases in others. Studies in the U.S., which haven’t been conducted elsewhere, show the recent evolution of reduced cholesterol levels and lower blood pressure, and an increased age of menopause.

The U.S-specific data brings up the question of whether even if the entire…


Why do we love? An empirical test…

Archetypal lovers Romeo and Juliet portrayed by Frank Dicksee

Yeah, love is indeed a mysterious thing and has always captured our imagination. One of the most famous tragic love stories is Romeo and Juliet by William Shakespeare – tragic in the sense that the main protagonists die at the altar of their own love. So, what makes love so special? Or indeed, as a biologist I ask, what is the need for love? Just look at what goes into the love process: endless dating games, elaborate preparations, endless flirtations, plenty of humiliations and finally, if you are lucky, the one acceptance.

But wouldn’t it be simpler to just think about procreation alone, i.e., reproduce for the sake of propagation? After all, evolutionary struggles dictate that there is differential reproduction, and hence the propagation of one’s own genes is the thing which ultimately matters. So why do we go through this protracted cycle?

To answer this question, albeit in an indirect way, Malika Ihle, Bart Kempenaers and Wolfgang Forstmeier, all from the Department of Behavioural Ecology and Evolutionary Genetics, Max Planck Institute for Ornithology, Seewiesen, Germany, conducted a remarkable experiment. The results of this experiment were published recently in PLoS Biology – Fitness Benefits of Mate Choice for Compatibility in a Socially Monogamous Species [1].

As we know, actually conducting a cost/benefit analysis of love is easier said than done, and there would be innumerable ethical concerns regarding the bounds of experimentation with humans. The present study, however, used a model animal in an elegant experiment designed to find the reproductive consequences of mate choice.

The Experiment

The model species used here was the zebra finch (Taeniopygia guttata), a bird native to Australia.

Adult male at Dundee Wildlife Park, Murray Bridge, South Australia

They started off with a population of 160 birds that had recently been isolated from the wild, and then set them up on a sort of speed-dating session, with groups of 20 females to choose freely between 20 males (See figure 1 below). Once the birds had paired off, half of the couples (the “chosen” or C group) were allowed to live happily ever after. For the other half, however, the authors intervened like overbearing Indian parents, and split up the happy pair to forcibly pair them up with other broken-hearted individuals (the “non-chosen” or NC group). The bird couples of both C and NC groups were then left in aviaries to breed. The authors then measured the couple’s behaviour and the number and paternity of dead embryos, dead chicks and surviving offspring.

Figure 1: Experimental Design

Results

Figure 2: Relative fitness estimates (mean ± SE) of males (n = 84) and females (n = 84) from chosen (C) and non-chosen (NC) pairs

The first batch of results is elegantly shown in the figure above. The overall reproductive fitness (measured as the final number of surviving chicks) was 37% higher for individuals in chosen pairs than for those in non-chosen pairs. But since reproductive fitness is the sum total of different effects which add up to the total number of offspring produced, it’s vital to look at those component parameters and understand how mate choice in the C group affected fitness. To start with, the authors noted that both the C and NC groups laid a similar number of eggs, which suggests that the initial investment in egg laying is not affected by group membership, the females apparently oblivious to the mismatched mate picked for them by the authors. But the nests of the NC group had almost three times as many unfertilized eggs as those of the chosen pairs, and a greater number of eggs that were neglected (either buried or lost).
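As a quick aside on the arithmetic: the “37% higher” figure is a relative comparison of mean surviving-chick counts between the two groups. A toy sketch with entirely invented numbers (not the paper’s data) of how such a relative-fitness comparison is computed:

```python
import statistics as st

# Entirely invented surviving-chick counts per pair (not the paper's data).
chosen     = [5, 4, 6, 3, 5, 4, 6, 5]
non_chosen = [3, 4, 2, 4, 3, 5, 2, 3]

def mean_se(xs):
    # Mean and standard error of the mean
    return st.mean(xs), st.stdev(xs) / len(xs) ** 0.5

for name, xs in [("chosen", chosen), ("non-chosen", non_chosen)]:
    m, se = mean_se(xs)
    print(f"{name}: mean = {m:.2f} +/- {se:.2f} (SE)")

# Relative advantage of chosen pairs, analogous in spirit to the paper's figure.
advantage = (st.mean(chosen) - st.mean(non_chosen)) / st.mean(non_chosen)
print(f"chosen pairs produce {advantage:.0%} more surviving chicks")
```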

From their earlier studies the authors knew that embryo deaths arise mainly from genetic incompatibility between the parents, whereas deaths around hatching arise from behavioural incompatibility. So, the next step was to compare these two phenomena in the C and NC groups. They found that although embryo mortality was similar in both groups, the mortality of hatched chicks was considerably higher in the NC couples. This suggests that it is the behavioural incompatibility between the non-chosen (NC) parents, and not genetic incompatibility, which drives the observed reduction in overall fitness (Fig. 3, below).

Figure 3: Embryo (A) and offspring (B) mortality rates (parameter estimates [mean ± SE]) in chosen (C) and non-chosen (NC) pairs.

So, the next question the authors asked was: if behavioural incompatibility leads to greater hatchling death, can it be observed during the elaborate courtship rituals that happen before pairing? What they found was that although the NC and C couples spent a similar amount of time on courtship rituals, the NC females were far less receptive to NC males and also tended to copulate less than those in the C group. Harmonious courtship behaviour in zebra finches has been studied in detail and is taken as the sum of friendliness, mutual following, synchronous activity and so on. A couple showing more of these behaviours would be termed behaviourally compatible, or in anthropomorphic terms “in love”. An analysis of this behaviour among the C and NC couples showed that, on average, the NC couples displayed far less of it than those in the C group. Apart from these results, once the chicks hatched, a greater proportion of males in the NC group showed infidelity than in the C group, and the majority of chick deaths, which occurred in the critical period of the first 48 hours, were due to reduced paternal care in the NC group compared to the C group.

Discussion

In the end, the authors ascribe this difference in reproductive fitness to the behavioural incompatibility within the non-chosen pairs. They also mention – “The mechanisms behind such behavioural compatibility, in terms of willingness or ability to cooperate with certain individuals and in terms of coordination between partners need further study, in particular in the context of offspring provisioning.”

In humans, some studies suggest that individuals are more satisfied, more committed, and less likely to engage in domestic violence when involved in a love-based rather than an arranged marriage [2-4]. The challenge there is also to find out whether stable and happy marriages result from the motivation to cooperate (and to identify what stimulates such feelings, see [5-8]), or from congruence in the partners’ intrinsic behavioural types [9].

References:

  1. Ihle M, Kempenaers B, Forstmeier W. Fitness Benefits of Mate Choice for Compatibility in a Socially Monogamous Species. PLoS Biol. 2015; 13(9): e1002248. doi:10.1371/journal.pbio.1002248
  2. Xu XH, Whyte MK (1990) Love matches and arranged marriages—A Chinese replication. Journal of Marriage and the Family 52: 709–722.
  3. Sahin NH, Timur S, Ergin AB, Taspinar A, Balkaya NA, Cubukcu S (2010) Childhood trauma, type of marriage and self-esteem as correlates of domestic violence in married women in Turkey. Journal of Family Violence 25: 661–668.
  4. Regan PC, Lakhanpal S, Anguiano C (2012) Relationship outcomes in Indian-American love-based and arranged marriages. Psychological Reports 110: 915–924. PMID: 22897093
  5. Asendorpf JB, Penke L, Back MD (2011) From dating to mating and relating: predictors of initial and long-term outcomes of speed-dating in a community sample. European Journal of Personality 25: 16– 30.
  6. Honekopp J (2006) Once more: Is beauty in the eye of the beholder? Relative contributions of private and shared taste to judgments of facial attractiveness. Journal of Experimental Psychology-Human Perception and Performance 32: 199–209. PMID: 16634665
  7. Meltzer AL, McNulty JK (2014) “Tell me I’m sexy . . . and otherwise valuable”: Body valuation and relationship satisfaction. Personal Relationships 21: 68–87. PMID: 24683309
  8. Todd PM, Penke L, Fasolo B, Lenton AP (2007) Different cognitive processes underlie human mate choices and mate preferences. Proceedings of the National Academy of Sciences of the United States of America 104: 15011–15016. PMID: 17827279
  9. Rammstedt B, Spinath FM, Richter D, Schupp J (2013) Partnership longevity and personality congruence in couples. Personality and Individual Differences 54: 832–835.