Will The Polypill Prevent Your Heart Attack?

Giving the polypill to everybody above the age of 55 kills two birds with one stone: cardiovascular risk and the shortcomings of preventive medicine. That's what the proponents of the polypill say. The medical establishment is in uproar. Here is why you should be, too. But for a different reason.
   
We are typically sold on the notion that heart disease and stroke have become today's major killers for one simple reason: We live far longer than our ancestors of a hundred years ago, whose major cause of death was infectious disease. Its eradication has brought upon us the blessing of longer lives, and with it the detriments of aging-related cardiovascular disease. Its root cause is elevated cholesterol, a theory enshrined in the so-called lipid hypothesis. Questioning it is to the medical establishment what Galileo's theories were to the Catholic Church: plain heresy. After all, cholesterol-lowering drugs, the statins, are a blessing to mankind and a substantial reducer of cardiovascular death.
    
This is what nearly everyone believes.
The Tao Te Ching has a saying for such situations. It goes something like this: "When everyone knows something is good, this is bad already." You might reject my suggestion that such ancient wisdom could possibly apply to modern medicine. So, let's get cracking on those facts which everyone knows.

Claim 1: Heart disease, stroke and cancer are today's major killers 
Undeniably. Cardiovascular disease accounts for roughly one in three deaths (30%), followed by cancer, which kills another one in four (23%) [1]. Which means your chance of dying of either of those two clusters is roughly fifty-fifty. By the way, these data, and the ones which follow, are drawn from U.S. statistics. Unfortunately, they are typical for the rest of the developed world and pretty close to what the developing nations experience, too.

Claim 2: One hundred years ago, Infectious diseases were the main killers
Yes, indeed. In 1900, one third of all deaths were due to tuberculosis and influenza alone. 

Claim 3: Since we eliminated those infectious diseases we have a longer life expectancy and therefore we simply die of aging related diseases.
This is where it starts to get hairy. First, you must NOT confuse life expectancy with life span. Life expectancy is typically quoted as life expectancy at birth. It is the total of all years lived divided by the number of people born alive. You can imagine how sensitive this number is to the rate of deaths in infancy and early adulthood, particularly when one third of all newborns die within their first 12 months. That was a typical infant death rate not only in ancient Rome but throughout most of history until the 17th century. While this infant mortality gave Romans an average life expectancy at birth of a little less than 30 years, a considerable part of the population lived into their sixties and seventies. In fact, very few people will have died at age 30, most having done so either much earlier or much later. Back to 1900.
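To see how strongly infant deaths drag down the average, here is a toy calculation. All ages below are invented for illustration, not historical data:

```python
# Toy cohort of 10 newborns; ages at death are invented for illustration.
# Three die in infancy, the rest live into their late 50s to mid 70s.
ages_at_death = [1, 1, 1, 55, 60, 65, 68, 70, 72, 75]

# Life expectancy at birth: total years lived divided by number born alive.
life_expectancy_at_birth = sum(ages_at_death) / len(ages_at_death)

# Life expectancy of those who survived infancy.
survivors = [a for a in ages_at_death if a > 1]
life_expectancy_survivors = sum(survivors) / len(survivors)

print(life_expectancy_at_birth)             # 46.8
print(round(life_expectancy_survivors, 1))  # 66.4
```

A life expectancy at birth of about 47 years, even though everyone who survived infancy lived into their mid-60s on average. That is the whole trick of the statistic.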

In 1900, U.S. females had a life expectancy at birth of 51 years, whereas those who reached 50 had a remaining life expectancy of another 22 years, taking them to 72. Today these numbers stand at 80 years at birth and 82 years at the age of 50. Which means two things: First, while life expectancy at birth has increased dramatically, by almost 30 years over the past 100 years, life span hasn't increased nearly as much. Second, life expectancies at birth and at age 50 have become virtually the same. The reason is a substantial reduction in infectious diseases, which used to kill considerable numbers of infants, of women giving birth, and of young adults. Which brings us to ...

Claim 4: Cardiovascular disease and cancer are diseases of old age, which is why they are more prominent today than 100 years ago. 
When we compare today's death rates with those of the past, we need to keep in mind that the age distribution in 1900 was substantially different from today's. In 1900 there were far fewer people aged 65 and older than there are today. So we need to answer the question: What would the CVD mortality have been in 1900 if the population had had the same age distribution as ours does today? Thankfully, the U.S. CDC provides a standardization tool which allows us to answer this question: it simply uses the U.S. population of the year 2000 as the standard to which all other population data can be adjusted. The process is called "adjustment for age", and mortality rates treated this way become truly comparable, as so-called age-adjusted mortality rates. So, in the future, when you read something about mortality or disease rates, make sure to check which kind of rate the author uses for comparison. If he doesn't say which is which, you should be very skeptical about his interpretation.
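The arithmetic of this direct age standardization is simple enough to sketch in a few lines. The age groups, death rates and population shares below are invented placeholders, not CDC figures:

```python
# Direct age standardization, sketched with invented numbers.
# Each tuple: (age group, death rate per 100,000 in that group,
#              that group's share of the standard (year 2000) population).
strata = [
    ("0-44",    50, 0.65),
    ("45-64",  400, 0.22),
    ("65+",   2500, 0.13),
]

# The age-adjusted rate weights each group's rate by the standard
# population's share of that group, so populations with different
# age mixes become comparable.
adjusted_rate = sum(rate * share for _, rate, share in strata)
print(round(adjusted_rate, 1))  # 445.5 deaths per 100,000, age-adjusted
```

Run the same weighting on the 1900 rates and on today's rates, and the two resulting numbers finally describe the same imaginary population.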

Now here comes the surprise: The mortality rate for cardiovascular disease in 1900 was 22% vs. today's 31%. At first blush, this doesn't sound all that different. But think about it: If CVD were merely a disease of old age, why should there be any difference at all? And if there is a difference, why should we be dying of this disease at a roughly 40% higher rate when we have all the medical technology, and the statins, which our grandparents didn't have?

The entire issue becomes even weirder when you look at the development of the CVD mortality rate over the 11 decades from 1900 to today (Figure 1). CVD rose to account for 60% of all deaths by 1960 before falling steeply to today's level. You can see that in the 1950s and 1960s people died of "age-related" heart attacks and strokes at a far higher rate than 50 years earlier. Another 60 years later, we die at a quarter of the 1960s rate. Which raises the question: What happened?
Figure 1

Actually, there are two parts to this question. First: If heart disease is age-related, why was there such a dramatic rise in age-adjusted mortality over the first half of the past century, when there should have been none? I have my theories, but I will keep them for one of my next posts.

Far more pertinent to this post's subject is the second part of the question: What did happen in the 1960s and thereafter? If you think the answer is "statins happened, stupid", then you are in for a surprise. The first statin to hit the market was Merck's Lovastatin. In 1987! It's the red vertical line in the chart of figure 1. Almost 30 years after the CVD mortality rate began its steep descent. A descent which did not accelerate with the introduction of statins to the market.

Now, don't get me wrong, I'm not saying statins do not reduce the risk of dying from CVD, or the risk of experiencing a non-fatal heart attack or stroke. There is quite some evidence for their benefits. My point is that, whatever statins do, they do not show up on our mortality radar as the grand reducer of CVD death. Not within the current medical practice of risk estimation and subsequent risk-based treatment.

Enter the proponents of the polypill, which contains a statin, a blood-pressure-lowering medication, and an aspirin. Are these proponents right to say: give a statin to everyone who has hit the age of 55? Well, they have a point. Wald and colleagues ran a computer simulation to compare the simplest of all screenings, age alone, against the UK's NICE guidelines, which recommend screening everybody from age 40 at five-yearly intervals until they reach the risk threshold of a 20% chance of a cardiovascular event in the next 10 years [2]. That's the cut-off for treatment. Astonishingly, the benefits are virtually the same. What this screening routine buys at the cost of doctor visits and blood tests, the simple age threshold delivers free of charge.

This paper was so counterintuitive to the established way of medical thinking that the manuscript, first submitted to the British Medical Journal in 2009, went through a two-year odyssey of rejections by four journals and 24 reviewers before finally being published in PLoS ONE in 2011.

But costs from a societal perspective are not the costs which interest you. You might be more interested to know that even at an elevated risk of CVD, 25 people would have to swallow a statin for 5 years to prevent just 1 heart attack. How much larger will this number, the number needed to treat (NNT) as we call it, be if you are simply 55 but have no other CVD risk factor? You won't get an answer anytime soon. Big Pharma has no interest in financing a study which could deliver the answer. They don't earn much money from polypills which use only generic statins, those whose patent protection has expired.
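The NNT arithmetic behind that statement is worth making explicit. The two risk figures below are invented round numbers, chosen only to reproduce the NNT of 25 quoted above:

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
# Invented example: treatment lowers 5-year heart attack risk from 8% to 4%.
risk_without_statin = 0.08
risk_with_statin = 0.04

arr = risk_without_statin - risk_with_statin  # absolute reduction: 4 points
nnt = 1 / arr                                 # people treated per event prevented

print(round(nnt))  # 25
```

The lower your baseline risk, the smaller the absolute risk reduction any drug can deliver, and the larger the NNT. Which is exactly why the number for a healthy 55-year-old would be so interesting to know.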

To me the NNT is definitely too high. I won't take the polypill, though I crossed that age threshold just a few days back. I pursue another path to health and longevity. And I believe you might want to look at my reasoning for that path. I will introduce it progressively over the next few posts. Not to worry, I won't evangelize it. I simply believe there is a third alternative to the risk-oriented practice of preventive medicine and to the kitchen-sink approach of its polypill-wielding opponents. This third alternative is heresy to both. But with heresy I'm in good company. Dr. Ignaz Semmelweis was a heretic when he suggested in the mid-1800s that the high rate of deadly childbed fever was due to physicians not washing their hands between dissecting dead bodies and helping women deliver their children. It took about 50 years for his ideas to become medical mainstream.

That's because new ideas become accepted in medicine not upon proof of being better than the old ones, but upon the old professors, who have built their careers on the old ideas, dying out. So, let's try to survive them. 

1. Kochanek, K.D., et al., Deaths: Preliminary Data for 2009, in National Vital Statistics Reports 2011, U.S. Department of Health And Human Services.

2. Wald, N.J., M. Simmonds, and J.K. Morris, Screening for future cardiovascular disease using age alone compared with multiple risk factors and age. PLoS ONE, 2011. 6(5): p. e18742.

Why Risk Screening For Heart Disease Is As Good As Crystal Ball Gazing


If weather forecasts were as reliable as cardiovascular risk prediction tools, meteorologists would miss two thirds of all hurricanes, expect rain for 8 out of 10 sunny days, and fail to see the parallels to fortune telling.    

When you are older than 35 and visit your doctor, there is a good chance he will evaluate your risk of suffering a heart attack or stroke over the next 10 years. The motivation behind this risk scoring is to prevent such an event while you still can. After all, cardiovascular disease is the number one cause of disability and death. In Europe alone, 1.8 million people die from it every year. In fact, they die prematurely, which means at an age younger than 75.


That's why, at first blush, it sounds reasonable to develop risk prediction scores to help doctors identify the high-risk patient whose asymptomatic state makes him blissfully unaware of being a walking time bomb. Forewarned is forearmed, or something like that the reasoning goes. But what if the forewarning part is as reliable as a six-week weather forecast and the forearming as effective as the wish for world peace?

As with any medical technology, risk prediction tools should be judged by their ability to improve YOUR health outcome before they are used on YOU. While the latest publication about the UK QRISK score is an upbeat evaluation of its improved performance, it fails to convince me that using these tools actually makes sense [1]. 

Let's look at the data first: 
The QRISK score was developed for the UK population because the grande dame of risk prediction scores, the Framingham Risk Score (FRS), doesn't perform well in northern European populations: it has been found to over-predict the risk in the UK population by up to 50%. In an effort to do better, QRISK was developed. It packs many more variables into its score than FRS. In its latest version, QRISK includes age, smoking status (with a 5-level differentiation), ethnicity, blood pressure, cholesterol, BMI, family history, socioeconomic status, and various disease diagnoses. An algorithm calculates your risk, expressed as a percentage chance of suffering a heart attack or stroke over the next 10 years.

In clinical practice a 20% risk is defined as the critical threshold that separates the high-risk person from those in the low-to-moderate risk categories. 20% is an entirely arbitrary number, selected simply for convenience's sake and economic reasons. Set it too high, and you identify too few at-risk people; set it too low, and you have to deal with too many false positives, that is, people whom you would treat for elevated risk but who would not suffer an event even if you didn't treat them. The latter is clearly a strain on limited health budgets.

Now, let's see how QRISK at a threshold of 20% risk would work for you, provided you are between 30 and 84 years old, which is the age range to which QRISK is applicable. Let's also assume you are female.  

For every 1,000 women, 40 will suffer a first heart attack or stroke over the next 10 years. Of these 40 women, QRISK identifies 17 correctly. Which means the remaining 23, or almost 60% of all those who will suffer a heart attack or stroke, fly below the QRISK radar. But that's not the intriguing part. We get to that by looking at the group of women who are identified as high-risk.
If the 20% risk score threshold predicts correctly, then about 20 of every 100 women identified as high-risk will suffer a first event over the next 10 years. After all, that's what a 20% risk means: Of a hundred women having the same profile, 20 will eventually suffer a first heart attack or stroke over the next 10 years. Which brings us to the really juicy part: In the population from which QRISK was developed, 16% of the high-risk women actually did suffer that predicted heart attack or stroke. 
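A quick back-of-the-envelope check of those figures, using the rounded per-1,000 numbers from the text:

```python
# Per 1,000 women over 10 years, rounded figures from the text.
events_total = 40      # women who will suffer a first heart attack or stroke
events_flagged = 17    # of those, correctly flagged as high-risk by QRISK

missed = events_total - events_flagged     # cases flying below the radar
missed_pct = 100 * missed // events_total  # integer percent

print(missed, missed_pct)  # 23 57

# And among the flagged women: what the label promises vs. what happened.
predicted_rate = 0.20  # what a "20% risk" classification implies
observed_rate = 0.16   # event rate actually seen in the QRISK population
```

So the score catches fewer than half of the future cases, and among those it does catch, events happen at 16% instead of the predicted 20%.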

You are forgiven if you don't immediately see why I call this the juicy part. But think about it this way: The QRISK numbers were not plucked from an observational study, which simply observes and follows women for 10 years without doing anything to or with them. These numbers represent women who were identified to be at high risk by the very health care system which claims to do the risk scoring to protect them from such events in the first place. So, what happened to actually preventing those events? 16% vs. 20% doesn't sound like a terrific preventive job.

By the way, for men the figures are very much the same. The reason I chose women is that there is an inconsistency in the study's published tables, which compare the events in two age groups of men: 35-74 years and 30-80 years. The number of heart attacks and strokes is given as 54 and 50 for the first and second group respectively. But there cannot be fewer events in the wider 30-80 range than in the narrower 35-74 range. Since there is no such detectable inconsistency in the numbers for women, I chose them as the example.

Back to the risk score and a summary of its performance. First, the score misses 60% of all cases right off the bat. Second, among the correctly identified future sufferers of heart attacks and strokes, the subsequent treatment prevents only a small minority of events, amounting to about 4% of all cases happening over the 10-year period. If our preventive interventions were worth their salt, we should see no, or only a few, cases happening in the high-risk group. Because this is the group which is supposed to benefit from intensive treatment and intervention.

This public health strategy of targeting the high-risk part of the population with an intervention is appropriately called the high-risk strategy. As we have seen, it makes public health miss the majority of the disease events it set out to prevent in the first place. So what is the alternative? It's called the population strategy. And, yes, it means targeting the entire population in an effort to reduce everybody's exposure to whatever causes the disease. That necessarily entails a one-size-fits-all approach to health, which you encounter in the form of those exercise and diet recommendations preached to us from every public health pulpit.

In theory, this strategy could have a large effect on the health of the entire population, materializing as a substantial reduction in the number of heart attacks and strokes. But when you look at it from YOUR point of view, you have to invest the sizeable effort of changing your eating and exercise habits while finding the benefits hardly perceptible. After all, health is when you don't feel it. A prevented disease is never perceived as such. In public health, this situation, where an individual's large perceived sacrifice yields only an imperceptibly small personal benefit, is called the prevention paradox. It's a more academic way of saying this strategy doesn't work either.
    
The data are certainly there to prove my case. In my previous post I highlighted how little change in health behaviors has happened over the past 20 years. And the little change that did happen went mostly in the wrong direction.

Which is why we will continue to see most of us dying, ironically, from preventable diseases: heart disease, stroke, diabetes, many cancers. Which is why I'm questioning the current clinical practice of risk scoring. After all, it costs money and time.

It's this question which has led some researchers to suggest giving everybody above the age of 50 a so-called polypill: a pill which reduces blood pressure and cholesterol, and which delivers a low dose of aspirin. It aims at killing three birds with one stone: hypertension, hypercholesterolemia and thrombotic events, all of which are causally related to heart attack and stroke. But to me, the polypill is preventive medicine's declaration of bankruptcy.

In my next post, I will talk about this: about how preventive medicine may really work, and, most importantly, what it means to you. Practically and presently. Because we already have the tools to help you prevent your heart attack or stroke. And those tools don't go by the name of any known risk score. If you are still keen on scoring your risk, we have a tool on our website for you to do that. It also shows you what your risk would be if all risk factors were in the green zone, and what it will be if you maintain your current status over the next ten years. You can play around with it here, and take a couple of other tests, too. But don't get fooled by numbers. Your greatest risk is to take those risk scores too seriously.

Reference:

1. Collins, G.S. and D.G. Altman, Predicting the 10 year risk of cardiovascular disease in the United Kingdom: independent and external validation of an updated version of QRISK2. BMJ, 2012. 344.


Are You A Unique Medical Case?

Research says yes, public health doesn't listen, and you suffer the consequences: too little benefit from generic interventions. And it could be so simple.



Different people react differently to the same type of treatment. In my previous post I showed you the wide range of blood pressure changes among the more than 700 participants of the HERITAGE study's 20-week endurance exercise program (Figure 1). Unfortunately, most studies do not present their results in a way that would allow us to construct such charts. But when they do, the charts look virtually the same. Figure 2 shows how 30 obese men changed their body weight and fat weight as a consequence of a 12-week supervised exercise program [1]. As you can see, the mean change of 3.7 kg for both values (the horizontal red line) doesn't tell you anything about how these 30 men responded INDIVIDUALLY to the program.

Figure 1

When your doctor tells you what exercise to do, what diet to follow or what drug to take, she refers to studies which report their outcomes as mean values for groups of participants. But as you know now, these values don't answer your question: What would my outcome have been, had I participated in this study? Which is the same as asking what your results will be if you follow your doctor's advice.





Figure 2

The honest answer is: nobody knows. To which one might add: in all likelihood you will see some benefit; if you are very lucky you'll see an extremely large benefit. Or you might be unlucky and see no benefit at all. Call this the uncertainty principle of medicine.

You won't hear your doctor talking about it. Particularly not when she recommends lifestyle change as your first line of defense against heart attack, stroke or diabetes. For two reasons: First, public health is not concerned with your point of view; I'll get to this in a moment. Second, doctors know that lifestyle change is hard to sell as it is. So why make it even harder by telling you the truth about the uncertainty of its benefits? Think about it: we all like to enjoy now and pay later, if at all. That's certainly the case when it comes to cigarettes, salt, sugar and a sedentary lifestyle. To forgo these pleasures in favor of health benefits which may or may not materialize decades from now is simply not how we are wired.

But public health does not seem to get it. Even the American Heart Association's (AHA) latest invention, the seven health metrics, is nothing but the same song and dance which has had no discernible impact on the health of the population. Let's look at it in a little more detail:
   
The AHA has defined 7 metrics to help you navigate your way to chronic health. Four of them are behavioral: smoking, physical activity, BMI and diet. The remaining three are biomarkers: blood pressure, fasting glucose and total cholesterol.

Have all 7 in the green zone and you should do well, health-wise. Exactly how well was the question Dr. Yang and colleagues asked in a study which investigated (a) how many U.S. residents meet how many of those metrics and (b) how much of the U.S. population's death burden can be attributed to these risk factors [2]. Fast forward to the results: More than half of the population, 52.2%, meet only 3 or fewer of those 7 metrics. That's a 4% increase compared to 20 years ago. Another 25% meet just 4 metrics. At the same time, the percentage of people who meet at least 6 of the 7 metrics has gone down from 10.3% to 8.7%. The percentage of obese people has increased by 50%, and the rate of physical inactivity (that is, people who do not exercise at all!) has doubled from 15.6% to 31.9%. Compared with people who meet no more than 1 metric, those who meet at least 6 cut their risk of dying in half.

When you look at these correlations, you'll certainly agree with the researchers' statement that "the presence of a greater number of cardiovascular health metrics was associated with a graded and significantly lower risk of total and CVD mortality". That's nice to know, but you are probably not so much interested in the number of deaths in the population attributable to whatever health metric score is the flavor of the day. You are interested in the answers to three questions: (a) What does it mean for you if you don't meet those metrics? (b) How much does the effort of getting these metrics into the green zone reduce your risk? And (c) which strategy should you use to lower your risk most effectively?

Fortunately, with a little bit of digging into published numbers, we can get fairly good answers to these questions. So, let's start with the first one: 
Dong and colleagues did a fairly similar investigation, asking how the number of AHA health metrics correlated with cardiovascular events (heart attack and stroke) in the Northern Manhattan Study cohort [3]. The study's almost 3,000 participants were on average 69 years old when they entered the study, and they were followed up for 11 years. Of those who met at least 4 health metrics, 28% suffered a cardiovascular event during that time, vs. 32% of those who met 3 or fewer metrics.
That's an absolute improvement of 4 percentage points.
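To put that difference into the usual epidemiological terms, here is a sketch using the two event rates quoted above:

```python
# Event rates from the Northern Manhattan figures quoted above.
risk_3_or_fewer_metrics = 0.32  # events among those meeting <= 3 metrics
risk_4_or_more_metrics = 0.28   # events among those meeting >= 4 metrics

arr = risk_3_or_fewer_metrics - risk_4_or_more_metrics  # absolute reduction
rrr = arr / risk_3_or_fewer_metrics                     # relative reduction
nnt = 1 / arr                                           # "number needed to treat"

print(round(arr, 2), round(rrr, 3), round(nnt))  # 0.04 0.125 25
```

In other words: if the association were causal, 25 people would have to keep at least 4 metrics in the green zone for 11 years to prevent a single event. A marketer would of course quote the 12.5% relative reduction instead.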


I don't know how you feel about it, but my experience with our health lab's clients is that a 4-percentage-point risk reduction doesn't make them go nuts about exercise and health food. I sympathize, because life is not all about self-flagellation with veggie burgers, tofu swill and weekly marathons. Which is why it is justified to go for the biggest possible health benefit achievable with the smallest possible effort. The answer hinges on the question of which health metric is the most critical. Back to Yang's investigation.


He asked: which of the seven metrics, if met, would yield the largest reduction in deaths?
If your bet was on smoking and obesity, you might be surprised to hear that blood pressure turned out to be a far more effective executioner, being responsible for 30% of the deaths in this cohort. With 24%, smoking took second place, and obesity didn't show up as a killer at all. Which does not mean obesity doesn't cause death. You have to keep in mind that the average age of the Yang study cohort was 45 years and the median observation period was 14 years; obesity's toll may simply take longer than that to show up.


Again, what does all that mean for you? Principally, you decide for yourself. I can only tell you what I practice with our clients in our health lab. For each case we define a benchmark biomarker depending on the individual's health profile. In many cases that's blood pressure or, better still, a biomarker of arterial function (I'll talk about the amazing role of arterial function in one of my next posts). We then agree on a certain exercise and dietary strategy, whose effect we carefully measure in terms of the change in the chosen biomarker. If that change does happen, and if it goes in the right direction, fine. If the client turns out to be one of the fringe cases, we adjust the strategy. We do that until we get it right. That's individualized prevention. While it does not eliminate the uncertainty principle of medicine, it makes prevention efforts far more effective and much more rewarding. It certainly beats following generic advice drawn from studies whose mean effect values conceal a wide range of possible individual outcomes.

Let's see when public health will finally see the light. Fortunately, you don't need to wait for that to happen. Arm yourself with one of those home measurement devices, and actively measure and chart your progress against your chosen lifestyle change strategy. You'll see very soon how unique you are as a medical case.


1. King, N.A., et al., Individual variability following 12 weeks of supervised exercise: identification and characterization of compensation for exercise-induced weight loss. Int J Obes (Lond), 2008. 32(1): p. 177-84.

2. Yang, Q., et al., Trends in Cardiovascular Health Metrics and Associations With All-Cause and CVD Mortality Among US Adults. JAMA: The Journal of the American Medical Association, 2012.

3. Dong, C., et al., Ideal Cardiovascular Health Predicts Lower Risks of Myocardial Infarction, Stroke, and Vascular Death across Whites, Blacks and Hispanics: the Northern Manhattan Study. Circulation, 2012.


10 Good Reasons Not To Exercise?


Exercise may actually be bad for you! A professor says he stumbled upon this "potentially explosive" insight. The New York Times has been quick to peddle it. And couch potatoes descend on it like vultures on road kill. But professors can get it wrong, too. 

Before we judge the verity of the "exercise may be bad" claim, let's first look at how the media present it to us. We shall use the recent article in The New York Times headlined "For Some, Exercise May Increase Heart Risk". Its first paragraph confronts us with a journalist's preferred procedure for feeding us contentious scientific claims: presenting an authoritative author with stellar academic credentials and a publication list longer than your arm. While that is certainly better than having, say, Paris Hilton as the source of scientific insights, it is a far cry from actually investigating such claims. Which is what we want to do now.

The basis of the exercise-may-be-bad claim is a study which investigated the question "whether there are people who experience adverse changes in cardiovascular risk factors" in response to exercise [1]. The chosen risk factors in question were some of the usual suspects: systolic blood pressure, HDL-cholesterol, triglycerides and insulin. The research question: Are there people whose risk factors actually get worse when they change from sedentary to more active lifestyles? 

Sounds simple enough to investigate: Put a group of couch potatoes on a work-out program for a couple of weeks and see how their risk factors change. Only it is not that simple. In the realm of biomedicine, every measurement of every biomarker is subject to (a) measurement error and (b) other sources of variability. This makes it virtually impossible to see exactly the same results on your lab report for, say, blood pressure, cholesterol, glucose or any other parameter when you have them measured two or more days in a row. Even if you were to eat exactly the same food every day and perform exactly the same activities.

Now imagine you conducted an intervention study on your couch-potato subjects and found their risk factors changed after a couple of weeks of exercise. You could, theoretically, be seeing nothing but random variation caused by the error inherent in such measurements.

To avoid falsely interpreting such variation as a change in one or the other direction, it makes good sense to know the bandwidth of these errors for each biomarker before you embark on interpreting the results of your study. Which is what the authors of this particular study did. They took 60 people and measured their risk factors three times over three weeks. From these measurements they were able to calculate the margin of error. Actually, they didn't do this for this particular paper; they had done it as an ancillary study within the earlier HERITAGE study. The HERITAGE study had investigated the effects of a 20-week endurance training program on various risk factors in previously sedentary adults. Whether heritability plays a role in this response was a key question, which is why the study recruited entire families, that is, parents up to the age of 65 together with their adult children.
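What such a margin-of-error calculation can look like is easy to sketch. The repeat measurements below are invented, and the 1.96 × √2 × SD threshold is one common way to define the smallest change distinguishable from noise; it is not necessarily the exact procedure the HERITAGE authors used:

```python
import math

# Invented repeat measurements of systolic blood pressure (mmHg):
# three visits per subject, no intervention in between.
repeats = [
    [122, 128, 125],
    [118, 114, 117],
    [135, 131, 138],
]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Pool the within-subject variance across subjects, then take the root.
within_sd = math.sqrt(sum(sample_variance(r) for r in repeats) / len(repeats))

# Smallest change distinguishable from measurement noise at ~95% confidence.
minimal_detectable_change = 1.96 * math.sqrt(2) * within_sd

print(round(within_sd, 1), round(minimal_detectable_change, 1))  # 2.9 8.1
```

With these made-up numbers, an exercise-induced blood pressure change would have to exceed about 8 mmHg before you could call it real rather than noise.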

I mention this because the paper we are deciphering now is a re-hash of the HERITAGE study's results, to which the authors added the data of another five exercise studies. That's what is called a meta-analysis. In this case it covers more than 1600 people, with the HERITAGE study delivering almost half of them. 

Fast forward to answering the question of how many of those participants had experienced a worsening of at least one risk factor. Close to 10%. That is, about 10% of the participants had an adverse change of a risk factor in excess of the margin of error I mentioned earlier. I'm going to demonstrate the results using systolic blood pressure and the HERITAGE study as the example. I do this for three reasons: First, blood pressure is the most serious of the investigated risk factors. Secondly, the HERITAGE study delivers most of the participants. And thirdly, the effects seen and discussed with respect to blood pressure and HERITAGE apply similarly to the other five studies and risk factors. 
But before we go there I need to familiarize you with a basic concept of statistics. It is called the "normal distribution of data". It is an amazing observation of how data are distributed when you take many measurements. Let's take blood pressure as an example. 

If you were to measure the blood pressure values for every individual living in your village, city or country, you could easily calculate the average blood pressure for this group of people. You could put all those data into a chart such as the one in figure 1. 

Figure 1
On the x-axis, the horizontal axis, you write down the blood pressure values, and on the y-axis (the vertical axis) you write down the number of observations, that is, how often a particular blood pressure reading has been observed. You will find that most people have a blood pressure value pretty close to the average. Fewer people will have values which lie further away from this average, and very few people will have extreme deviations from the average. 


It so turns out that when you map almost any naturally occurring value, be it blood pressure, IQ or the number of hangovers over the past 12 months, the curve you get from connecting all the data points in your graph will look very similar in shape. Some curves are a bit flatter and broader, while others are a bit steeper and narrower. But the underlying shape is called the "normal distribution", and it means just that: it's how data are normally distributed over a range of possible values. The curve's shape, being reminiscent of a bell, has led to its other name: the "bell curve". 

In statistics, especially when we use them to interpret study data, we go through quite some effort to ensure that the data we measure are normally distributed. That's because many statistical tools don't give us reliable answers if the distribution is not normal.

Back to our famous study. What you see in figure 2 is how the authors present their results for the blood pressure response of the HERITAGE participants. 

Figure 2
For each individual (x-axis) they drew a thin bar representing the size of that person's change in blood pressure after 20 weeks of exercise. Bars extending below the x-axis represent reduced blood pressure, and those extending above it represent increased blood pressure. The bars in red belong to the people whose blood pressure increase was in excess of the error margin of about 8 mmHg. 




Now, Claude Bouchard, the lead author of the paper, is quoted in the NYT as saying that the counterintuitive observation of exercise causing systolic blood pressure to worsen "is bizarre". 
Here is why it is neither counterintuitive nor bizarre: When we accept that the blood pressure values of our study population are distributed normally, we have every reason to expect the change in blood pressure to be distributed normally, too, especially since all participants went through the same type of intervention. 

Figure 3

If we now run a computer simulation, using the same number of people, the same mean change in blood pressure, and the same error values, then we can construct a curve for this group, too. Which is what you see in figure 3. Eerily similar to the one in figure 2, isn't it? 
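You can reproduce the gist of such a simulation in a few lines. The parameters below are illustrative, not the authors' exact inputs: a mean change near zero and a spread chosen so that the +8 mmHg error margin sits a little over one standard deviation out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation: 723 participants (the HERITAGE count quoted in
# the paper), mean blood-pressure change 0.2 mmHg, and an assumed spread
# of SD = 6.3 mmHg. The SD is my assumption, picked so that roughly a
# tenth of the simulated changes exceed the +8 mmHg error margin.
changes = rng.normal(loc=0.2, scale=6.3, size=723)

# What fraction "worsened" beyond the 8 mmHg error margin? This is just
# the upper tail of a normal distribution, nothing bizarre about it.
fraction_worse = np.mean(changes > 8.0)
print(f"{fraction_worse:.0%} exceeded the margin")
```

No exercise-specific biology is needed to produce the "adverse responders": the tail of the bell curve does it all by itself.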






That's because we are looking at a normal distribution of the biomarker called 'blood pressure change'. It is an inevitable fact of nature that a few of our participants will change "for the worse". And I'm putting this in inverted commas because we don't really know whether this change is for the worse. 
After all, we are talking risk factors, not actual disease events. In the context of this study you need to keep in mind that all participants had normal blood pressure values to begin with. The average was about 120 mmHg. The mean change was reported as 0.2 mmHg. That's not only clinically insignificant, it's way below the measurement capability of clinical devices. 

When I started to dig deeper into this study, I found quite a number of inconsistencies with earlier publications. For example, in the latest paper, the one discussed in the NYT, the number of HERITAGE participants was stated as 723. In a 2001 paper, which investigated participants' blood pressure change at a 50-watt work rate, the number was stated as 503 [2]. In the same year, Bouchard had published a paper putting this number at 723 [3]. Anyway, the observation that the blood pressure change during exercise was significantly larger (about -8 mmHg) than the marginal change of resting blood pressure indicates that there probably was some effect of exercise. 

So, what's the take-home point of all this? With the "normal distribution" being a natural phenomenon that underlies so many biomarkers, it is neither bizarre nor in any other way astonishing to find "adverse" reactions in everything from pharmaceutical to behavioral interventions and treatments. Whether such reactions are truly adverse can't be answered by a study like the one which is now bandied about in the media. That's because risk factors are not disease endpoints. They are actually very poor predictors of the latter, as I have explained in my post "Why Risk Factors For Heart Attack Really Suck". 

So, keep in mind that there is no treatment or intervention which has the same effect on everybody. Pharmaceutical research uses this knowledge, for example, when determining the toxicity of a substance. This toxicity is often defined as the LD50 value, that is, the lethal dose which kills 50% of the experimental animals. Meaning, the same dose which kills half the animals leaves the other half alive and kicking. 
And correspondingly, the same dose of exercise which cures your neighbor of hypertension may have no effect on you, because you belong to the 10% who react differently. But are those 10% a good reason not to exercise? How to deal with this question will be the subject of my next post. Until then, stay skeptical. 

1. Bouchard, C., et al., Adverse Metabolic Response to Regular Exercise: Is It a Rare or Common Occurrence? PLoS ONE, 2012. 7(5): p. e37887.

2. Wilmore, J.H., et al., Heart rate and blood pressure changes with endurance training: the HERITAGE Family Study. Medicine and Science in Sports and Exercise, 2001. 33(1): p. 107-16.

3. BOUCHARD, C. and T. RANKINEN, Individual differences in response to regular physical activity. Medicine and Science in Sports and Exercise, 2001. 33(6): p. S446-S451.




Why You Should Arm Your Bullshit Alarm Before Reading Diet News.


In the fight over the best diet for health and weight loss, it's protein lovers vs. vegetarian zealots. So far, no clear winner has emerged. Only one loser: you, the victim of biased research. Here is an example of why you should keep your bullshit alarm on high alert when reading about weight loss diets.  
[tweet this].


Ellen M. Evans and colleagues wanted to know whether overweight men and women differ in their body composition responses to different weight loss diets [1]. So they enrolled 58 men and 72 women with a BMI greater than 26, and randomized them into two diet groups.
One group was instructed to follow a high-protein, low-carbohydrate diet, which delivered 1.6 g of protein per kg body weight per day. The high-carb group received only half that amount of protein, and both groups' fat intake was capped at 30% of total energy intake. Both diets contained the same amount of fiber. Women received a daily total of 1700 calories, men 1900 calories. The intervention lasted for 4 months, followed by an 8-month weight maintenance period. Fast forward to the 12-month results:

Both diet groups and both genders lost about 10% of their body weight. But expressing weight loss in kilos of body weight can be a deceptive thing. Ideally we want that loss to be fat loss rather than loss of lean mass, that is, muscle mass. In the study at hand, for men on the high-carb diet, a little over one third of their weight loss came from lean body mass. Meaning, of the 14 kilos they lost on average, 5 kilos came from a reduction in muscle tissue. The high-protein guys maintained their muscle mass to a greater extent: only 20% of their weight loss came from wasted muscle. For the women the picture looked almost identical: muscle mass contributed 37% to the weight loss of the high-carb women, compared to 23% in the high-protein group. 

You would be forgiven if you now agreed with the authors' statement that the high-protein diet "...was more effective in reducing percent body fat...". Or in other words, a high-protein diet is superior to a high-carb alternative, as losing lean mass isn't a good thing in weight loss. I'll get to that point shortly in a little more detail. 

Before we go there, let me state that, being a firm supporter of the high-protein, low-carb dietary philosophy, I loved to read this study. But I'm an equally firm supporter of proper scientific methods. And they have been prostituted in this case, which is why I love this study a lot less than its results. 
Here is why: When I read the tables in which the authors present the results, I was impressed by the fact that both groups not only managed to carry the 4-month weight loss across the 12-month finish line, but even increased this weight loss a little. When you have read literally hundreds of studies on weight loss interventions, as I have, you'll find this observation to be in stark contrast to what we typically see: a reversal of weight loss. That is, at least a partial post-intervention regain of the weight lost during the dietary period. 

We find the explanation for this miraculous exception in the number of participants. Or rather, in the number of disappearing participants. Of the 66 participants who started in the high-carb group, only 30 made it to the finish line 12 months later. That's a drop-out rate of more than 50%! And of the 64 participants in the high-protein group, 23 (or 36%) had dropped out by month 12. 

High drop-out rates are nothing unusual in weight loss trials, but it is good practice for researchers to tell their readers how they accounted for these drop-outs in the statistics with which they interpret the data. Nothing of that in this paper. So we don't know whether the drop-outs simply did not show up for their measurements, or whether the researchers discarded the data of those participants who failed to achieve some arbitrary weight loss threshold. The latter is an absolute no-no. It enables researchers to skew the results every which way they want. And the former is reason to investigate whether the drop-outs differed significantly in some way from the adherent participants. Such differences often affect the interpretation of the results. 

One interpretation emerges right away when checking the differences in relative fat loss while considering the drop-out rates: the smaller relative loss of muscle mass in the high-protein group is not significantly different from the loss observed in the high-carb group. That does not mean there is no difference between these two diet types. It only means the study was underpowered to detect such a difference, if there was one. And if it was underpowered to detect the difference between diet groups, it was certainly underpowered to differentiate between men and women in this respect. 
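To make the "underpowered" point concrete, here is a rough power calculation for the completers (30 high-carb vs. 41 high-protein men and women combined, per the drop-out figures above). The between-subject standard deviation of "percent of weight loss from lean mass" is not reported in what we have here; the 35 percentage points below is purely my assumption for illustration.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Completers: 30 (high-carb) vs 41 (high-protein).
n1, n2 = 30, 41
# Observed gap: ~37% vs ~23% of weight loss coming from lean mass.
delta = 14.0
# ASSUMED between-subject SD (percentage points). Not from the study;
# individual variability in body-composition change is typically large.
sd = 35.0

se = sd * math.sqrt(1 / n1 + 1 / n2)   # standard error of the group difference
z_alpha = 1.96                          # two-sided test at alpha = 0.05
power = normal_cdf(delta / se - z_alpha)
print(f"power ≈ {power:.0%}")
```

Under these assumptions the chance of detecting a real 14-point difference is well under the conventional 80% target; with a smaller assumed SD the picture improves, which is exactly why the missing variability data matter.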

If you still want the final verdict on high-carb vs. high-protein, I'm afraid I can't give it to you, even though I'm leaning heavily in favor of the high-protein version. I base my judgment on a 2009 systematic review of all randomized controlled trials performed between 2000 and 2007 which had pitted high-carb against high-protein strategies [2]. This review demonstrated that high-protein diets are more effective with respect to weight loss, and probably with respect to cardiovascular risk factors, than high-carb diets. At least over observation periods of 6 to 12 months. 

Only long-term observations, comparing hard endpoints, can decide which diet may be better. Those studies are a long way off. To complicate matters, we might find that different people react differently to the same type of dietary strategy. Until we know better, we need to go with what we know: 

The preservation of lean body mass certainly is a key aspect. Muscle tissue is an important endocrine organ, which, when exercised, produces potent anti-inflammatory substrates and hormones. These are the key elements of physical activity's protection against the initiating step of heart disease: atherosclerosis. Muscle tissue is also the body's primary site for storing dietary carbohydrate, in the form of glycogen; the other site is the liver. With a high-carb diet, these storage sites are easily overwhelmed, which leads to the conversion of carbs to fat. When, ironically, a high-carb diet nibbles away at the body's carb storage sites, you can imagine what this means for the body's relative fat content. Another aspect is that muscle tissue consumes energy, even at rest. The loss of this "burner" during weight loss makes weight rebound more likely.

So, if all these matters are known and understood, why perform a study which is underpowered and fraught with questionable interpretations? Why produce the food equivalent of a Scientology propaganda piece?  

Beats me. Maybe because part of the study's funding came from the National Cattlemen's Beef Association and The Beef Board. Both of which are, of course, entirely neutral to the outcome of research funded by them, and unbiased to its interpretation. 

It also beats me, why a respected journal and its peer reviewers facilitate the publication of such a study. Maybe because its senior author, Professor DK Layman, is a leading researcher in nutrition science, and... 
...the Egg Nutrition Center's director of research. 

As much as my dietary preferences place me in the protein camp of this contest, my bullshit alarm is set to high-sensitivity. And so should yours be. 
[tweet this].

  
1. Evans, E., et al., Effects of protein intake and gender on body composition changes: a randomized clinical weight loss trial. Nutrition and Metabolism, 2012. 9(1): p. 55.
2. Hession, M., et al., Systematic review of randomized controlled trials of low-carbohydrate vs. low-fat/low-calorie diets in the management of obesity and its comorbidities. Obesity Reviews, 2009. 10(1): p. 36-50.


Can Chocolate Save You From Heart Attack?


The media says yes. Science says maybe. In the end, you decide. Here are the facts:


A truffle treatment for heart disease is imminent. That's what a recent article suggests, headlined in the New York Daily News as: "Dark chocolate cuts heart deaths; Study shows benefits for high risk cardiac patients." 

The funny thing is, the cited study does not show what the media geniuses claim it does. So, let's look at this masterpiece of research journalism and ...
do a little fact check. [tweet this].

The cited study was performed by Ella Zomer and colleagues in Australia [1]. The researchers wanted to answer the question of what the daily consumption of dark chocolate would do to the heart health of a given population. Contrary to what you might believe, the researchers didn't pit chocolate eaters against abstainers. They simply ran an algorithmic model on the computer. In this case, a 10-year forward projection of what might happen, heart-wise, in a given population. Nothing wrong with that, as long as we keep in mind that such models are based purely on assumptions. You need to know those assumptions before you start investing a part of your daily food budget in chocolate. So, let's take a more detailed look than the anonymous AFP writer did, whose masterpiece the NYDN bought to educate their readers.

The researchers selected the data sets of 2,013 AusDiab study participants who were free from cardiovascular disease and diabetes, but who had the metabolic syndrome. The latter is not a disease in itself but an arbitrary risk definition along five risk factors: abdominal obesity, elevated triglycerides, blood sugar and blood pressure, and low "good" cholesterol (high-density lipoprotein, HDL). Have any three of those five, and you are said to have the metabolic syndrome. 

To calculate the risk of suffering a heart attack or stroke, the researchers used the algorithms developed from the Framingham study. Those risk calculations are widely used in clinical practice. They inform your doctor about the need and urgency of treating you to prevent a heart attack or stroke. I have written about the sense and nonsense of such risk factors in my earlier post "When risk factors for heart attack really suck!".  Now, in order to calculate what the blissful consumption of chocolate will do to prevent such heart attacks, we need some more data. The researchers took those from 13 studies, which investigated the effects of chocolate consumption on blood pressure and on cholesterol levels.  

Now here is the first problem: While the Framingham study's algorithms have been tuned on the correlation of risk factors with hard outcomes (real heart attacks and strokes) for more than half a century, the longest clinical trial on the effects of chocolate lasted just 18 weeks. Meaning, for the effects of chocolate consumption, we don't have anything remotely equivalent to the Framingham data. And we probably never will, because it is difficult to imagine a study in which we expose half the participants to a daily chocolate load for many years, while the other half doesn't get any, with us waiting and watching what happens in terms of heart disease. Which is to say, the 13 studies used by the researchers are the next best choice. They inform us about the effects of chocolate consumption on blood pressure and cholesterol. The researchers plugged those data into the mathematical model, together with the Framingham algorithms and the life tables available for their Australian population. The entire exercise is based on a so-called "Markov model", which is simply a probability-based simulation of processes over time. 
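For readers wondering what a Markov model actually does, here is a toy sketch: a population moves between health states year by year, each transition governed by a fixed probability. The transition probabilities below are invented purely for illustration; the actual Zomer model draws its inputs from the Framingham equations and AusDiab data.

```python
import numpy as np

# Toy 3-state Markov model: event-free, post-event, dead.
# Each row gives the probability of moving to each state within one year.
# These numbers are made up for illustration only.
P = np.array([
    [0.97, 0.02, 0.01],   # event-free -> event-free / post-event / dead
    [0.00, 0.93, 0.07],   # post-event -> post-event / dead
    [0.00, 0.00, 1.00],   # dead is an absorbing state
])

state = np.array([1.0, 0.0, 0.0])  # everyone starts event-free
for _ in range(10):                # ten 1-year cycles
    state = state @ P

print(state.round(3))  # population distribution after 10 simulated years
```

To model an intervention, you would lower the event-free-to-event probability (via the risk-factor effect) and compare the two 10-year distributions. That comparison, not any observed chocolate eater, is where the study's "prevented events" come from.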
  
Now that we are clear about the methods and assumptions, let's look at what you read in the article: 
"Australian researchers have found that eating a block of chocolate daily over 10 years has 'significant' benefits for high risk cardiac patients and could prevent heart attacks and strokes." Well, didn't I just tell you that the participants' data sets had been selected such that only those who were not cardiac patients were considered in the model? Yep, that's what it says in the methods section of the study. But methods are tedious to read and, admittedly, a bit difficult to understand sometimes, so we forgive our writer for this little slip-up. Also, none of the participants had eaten a chocolate bar daily for 10 years, so really nobody could "have found" what that would have done for heart risk. But let's not dwell on such trifles. On to the next paragraph:

"A study .... found that the consumption of ... chocolate ... was an effective measure to reduce risk." Whoa, that one we can't forgive. What we do have are 13 studies, which show that a daily chocolate consumption of about 100 grams (3.5 ounces) reduces systolic blood pressure in hypertensive people by 5 mmHg on average, and total cholesterol by 0.21 mmol/L (8 mg/dL). What these studies do not show is a reduction of risk for heart disease, that is, a reduction of real heart attacks and strokes. 

Given the size of the improvements, I have serious doubts about those effects anyway. From what I observe in daily clinical practice, a 5 mmHg difference in blood pressure is within the error margin of many physicians' and nurses' measurement skills. And a 0.21 mmol/L difference in cholesterol is deep within the bandwidth of variation which most people would see if they were to measure their cholesterol levels for a few days in a row. We have done that in our lab, and found the intra-individual variability to be way above those 0.21 mmol/L. In other words, if your cholesterol level is measured today, tomorrow, and the day after tomorrow, the values will vary by more than those 0.21 mmol/L, even if your blood was drawn at the same time of day, and always after an overnight fast.  

On to the next paragraph: "Lead researcher Ella Zomer said the team found 70 fatal and 15 non-fatal cardiovascular events per 10,000 people could be prevented over 10 years if patients at risk of having a heart attack or stroke ate dark chocolate." Throwing out numbers always looks good, but what do these numbers mean for YOU? I operate under the assumption that you are not interested in the 85 events among the 10,000 people, but that your interest is with the ONE possible event in YOU, right? 


OK, let's look at that. Of course, I don't have the data set of Zomer and colleagues, but we can make quite an educated calculation using the Framingham risk algorithm, the average risk profile of the participants and the researchers' statement of the effect size: 85 prevented events per 10,000 people. And here is what it means to you: 
If your profile is that of the average participant (that is, you are 53 years old, have a systolic blood pressure of 141 mmHg, total cholesterol of 6.1 mmol/L and HDL-cholesterol of 1.2 mmol/L), your chance of suffering a heart attack or stroke over the next 10 years would be roughly 10%. 
Eating chocolate every day would reduce this risk by a whopping 0.3 percentage points, from 10% to 9.7%. Wow. 
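One way to put that 0.3 percentage-point reduction in perspective is the "number needed to treat" (NNT): how many people with this risk profile would have to eat chocolate daily for ten years to prevent a single event. A back-of-the-envelope sketch, using only the figures quoted above:

```python
# Figures from the text: a 10-year baseline risk of 10% for the "average
# participant", reduced to 9.7% with daily chocolate.
baseline_risk = 0.100
treated_risk = 0.097

arr = baseline_risk - treated_risk   # absolute risk reduction
nnt = 1 / arr                        # number needed to treat
print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.0f}")
```

Somewhere around three hundred people chewing through 100 grams of chocolate a day, every day, for a decade, so that one of them avoids an event. Keep that in mind when you read the word "significant".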

So, over to you: do you agree with the article's next paragraph, where it quotes the researchers as saying that "Our findings indicate dark chocolate therapy could provide an alternative to or be used to complement drug therapeutics in people at high risk of cardiovascular disease."? I can already hear people telling their friends about the chocolate therapy they are on. Sounds fully compliant with the researchers' next statement "... here is a dietary alternative which may be quite appealing to a lot of people. In fact, chocolate studies have shown really good compliance rates." 

Well, except for the morsel about compliance rates, I don't buy it. Think about it: 100 grams of chocolate pump 550 kcal into your body (coincidentally the same as a Big Mac), delivered by 50 grams of sugar and 36 grams of fat, most of it saturated. And one more thing: the ingredient which, we believe, is the cause of chocolate's beneficial effect is its flavanol content. I won't go into details about this member of the flavonoid family, but suffice it to say, you'll find it in effective doses only in chocolates with at least 70% cocoa. That is bitter chocolate, also known as dark chocolate. But the latter name already holds a potential for deception by the manufacturer. The flavanols are inherently bitter, which is where bitter chocolate originally got its name. To make it more pleasant to your taste buds, manufacturers can take the flavanols out and compensate by making the chocolate darker by other means. The end result is a dark chocolate which delivers all the sugar and fat and calories but none of the benefits for which you might have chosen it. 

Also, the flavanol content of cocoa can vary substantially, depending on where it comes from. No wonder you find nothing about your favorite chocolate's flavanol content on its nutrition label. And by the way, you encounter the flavonoid superfamily, of which chocolate's flavanols are members, in virtually all fruits and vegetables. Often in concentrations which far exceed what chocolate has to offer. 

So here is what we see in this article of the New York Daily News: there is public health, there is you and then there is the media. To public health, 85 avoided heart attacks and strokes per 10,000 people is worth something, particularly when public health doesn't need to pay for it. Because you do, by buying your "chocolate therapy". Public health couldn't care less whether your 0.3% risk reduction is meaningful for YOU. But you care, I presume. Which is why I find it regrettable that you have to deal with the degree of misinformation doled out by those media geniuses, who, like public health, are not interested in YOU. They are interested in your subscription and your dollar. 


So, if you think that some of your friends would benefit from knowing about this, then send them this post. If you or they have an elevated risk for heart disease, there are many proven ways to reduce that risk. Eating chocolate is not one of them. [tweet this].  


1. Zomer, E., et al., The effectiveness and cost effectiveness of dark chocolate consumption as prevention therapy in people at high risk of cardiovascular disease: best case scenario analysis using a Markov model. BMJ, 2012. 344(may30 3): p. e3657-e3657.



Can A Genetic Test Say Why You Are Fat?

With the decoding of the human genome came the hope of getting a lever on the chronic diseases which kill most of us today: heart disease, stroke, diabetes and many cancers. And since overweight and obesity are a common cause of those diseases, many obese people were, and still are, yearning for the exculpatory headline: "It's all in your genes!" Why and how this headline is unlikely to ever appear in any serious media was the subject of my earlier post "It's not your genes, stupid!".

Now, a group of researchers have looked at the data of a 30-year investigation of health and behavior, ...
which you might call the New Zealand equivalent of the famous U.S. Framingham study [1]. If you ever wondered whether it would make sense to get your children, or yourself, tested for your genetic risk of obesity, you will be surprised to learn what this study tells you. But one step at a time. Let's first have a look at this outstanding piece of research. [tweet this].

The study population consists of all the 1037 babies born in Dunedin, New Zealand, between 1st April 1972 and 31st March 1973 at the Queen Mary Maternity Hospital. Comprehensive health assessments were done at ages 3, 5, 7, 9, 11, 13, 15, 18, 21, 26, 32 and 38. These investigations will be extended into the future and into the next generation. This is a massive and admirable effort. With data having been collected about virtually all aspects of health and behavior, this project provides a rare opportunity to match those data with genetic information. While genetic profiling wasn't possible in the seventies, it is possible and feasible now. And since study participants' genetic make-up hasn't changed since the time of their conception, we can retrospectively look at the correlation of biomarkers and genes, in this case those that correlate with obesity. To understand this study let me familiarize you with some facts and terms first.

So-called genome-wide association studies (GWAS) have thrown up more than 30 individual single-nucleotide polymorphisms (SNPs, pronounced 'snips'); that's geneticists' speak for a variation in a single building block (nucleotide) of a gene. The drawback: those SNPs individually correlate only very weakly with obesity. That is, while there is a statistical correlation with obesity, there are obese people who don't carry a given SNP, and there are carriers of that SNP who are not obese. To complicate matters a little further, not all SNPs which show statistical correlations in one population, say the U.S., do so in another, say New Zealand. Which is why the Dunedin researchers developed a risk score from the 32 SNPs known from other studies. Of those 32 they could find 29 in their study cohort, and so they developed their score from those 29 SNPs. Participants were grouped according to their score into either high or low risk.

The next step was to look at how the participants' genetic risk score (GRS) correlated with BMI in each decade, starting with ages 15-18, followed by ages 21-26, and then ages 32-38. In the second decade (ages 15-18), people with a high risk score had 2.4 times the risk of being obese compared with those who scored low on the GRS. Had this been you, a high risk score would have made you almost two and a half times as likely to be obese as a teenager as your buddies of the low-risk persuasion. That sounds like a lot, and you might be tempted to think that screening your child for genetic risk would help you to be more vigilant in watching over his or her BMI while he or she is still under your care.

The authors certainly seem to think so when they say that "These findings have implications for clinical practice..." and that "the results suggest promise for using genetic information in obesity risk assessments." I respectfully disagree, and so might you.

Let's simply take your point of view for a moment, rather than that of public health, where we are interested in one patient only: the population under our care. In contrast, the only patient you are interested in is you, or maybe your child. To you, a relative risk of 2.4 doesn't tell you much. What you really want to know is what a high or low risk score means for you. The right questions to ask would be along the lines of "what are my chances of becoming obese when my risk score is high?" and "what are my chances of not becoming obese when my risk score is low?". The answers to these two questions come in the shape of values which we call the positive predictive value (PPV) and the negative predictive value (NPV). Unfortunately the Dunedin researchers don't report those values. But we can calculate them, which I did.

And here is the surprising answer: if you had a high score, your risk of being obese as an adolescent is just about 10%. In other words, even with a high-risk score, you stand a 90% chance of not being obese as an adolescent. And if your risk score had been low you would have a 95% chance of not becoming obese. Beats me, but I can't see the benefit of genetic testing.
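The arithmetic behind these percentages is easy to replay. The sketch below uses an invented 2x2 table; the counts are made up, chosen only so that the summary figures land close to those quoted in the text (PPV near 10%, NPV 95%, relative risk 2.4), and are not the study's actual data.

```python
# Invented 2x2 counts, chosen so the summary statistics land near the
# figures quoted in the text; these are NOT the study's actual numbers.
high_obese, high_lean = 24, 176   # high-risk group, n = 200
low_obese, low_lean = 40, 760     # low-risk group,  n = 800

ppv = high_obese / (high_obese + high_lean)    # P(obese | high score)
npv = low_lean / (low_obese + low_lean)        # P(not obese | low score)
risk_low = low_obese / (low_obese + low_lean)  # P(obese | low score)
rr = ppv / risk_low                            # relative risk

print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}, relative risk = {rr:.1f}")
```

Note what the toy table makes obvious: even with a relative risk of 2.4, the overwhelming majority of high-risk adolescents are not obese, which is exactly the point.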

I deliberately talk only about the risk at adolescence. There is a simple reason for this. The researchers found that the relative risk of obesity between the high- and low-risk categories diminished progressively from 2.4 in the second decade to 1.6 in the fourth (ages 32-38). That means adolescence affords us a look at a time when the participants' exposure to environmental and behavioral influences had still been relatively short. Over the years, environment and behavior further erode the predictive power of the genetic score. Which is akin to saying: your lifestyle choices give you greater power over your BMI than your genes do. And by extension, the choices you make for your children's lifestyle beat their genes easily, too. In other words, it's not so much the luck of the draw which determines your body weight, but rather your skill at playing the deck of (genetic) cards you were dealt at the moment of conception. The study's data say the same thing in other words: at birth, the high-risk babies were not any heavier than their low-risk peers. Only once they were exposed to the outside world did BMI careers begin to diverge. For some of them.

This tells us one thing: when it comes to obesity, habits and environment are the key, not a potpourri of SNPs. Of course, if you are in the business of peddling genetic tests, you will disagree. The same goes if selling a guilt-free conscience to obese readers is what pays your bills. Which is why I'm curious to see how the media will portray this study. Let's stay tuned. [tweet this].

1.    Belsky, D.W., et al., Polygenic Risk, Rapid Childhood Growth, and the Development of Obesity: Evidence From a 4-Decade Longitudinal Study. Archives of Pediatrics & Adolescent Medicine, 2012. 166(6): p. 515-521. PMID: 22665028
