
Can A Genetic Test Say Why You Are Fat?

With the decoding of the human genome came the hope of getting a lever on the chronic diseases that kill most of us today: heart disease, stroke, diabetes and many cancers. And since overweight and obesity are a common cause of those diseases, many obese people were, and still are, yearning for that exculpatory headline: "It's all in your genes!" Why this headline is unlikely ever to appear in any serious media was the subject of my earlier post "It's not your genes, stupid!".

Now, a group of researchers has looked at the data of a 30-year investigation of health and behavior, which you might call the New Zealand equivalent of the famous U.S. Framingham study [1]. If you ever wondered whether it would make sense to have your children, or yourself, tested for the genetic risk of obesity, you will be surprised to learn what this study tells you. But one step at a time. Let's first have a look at this outstanding piece of research.

The study population consists of all the 1037 babies born in Dunedin, New Zealand, between 1st April 1972 and 31st March 1973 at the Queen Mary Maternity Hospital. Comprehensive health assessments were done at ages 3, 5, 7, 9, 11, 13, 15, 18, 21, 26, 32 and 38. These investigations will be extended into the future and into the next generation. This is a massive and admirable effort. With data having been collected about virtually all aspects of health and behavior, this project provides a rare opportunity to match those data with genetic information. While genetic profiling wasn't possible in the seventies, it is possible and feasible now. And since study participants' genetic make-up hasn't changed since the time of their conception, we can retrospectively look at the correlation of biomarkers and genes, in this case those that correlate with obesity. To understand this study let me familiarize you with some facts and terms first.

So-called genome-wide association studies (GWAS) have thrown up more than 30 individual single-nucleotide polymorphisms (SNPs, pronounced 'snips'), geneticists' speak for a variation in a single building block (nucleotide) of a gene. The drawback: those SNPs individually correlate only very weakly with obesity. That is, while there is a statistical correlation with obesity, there are obese people who don't carry a given SNP, and there are carriers who are not obese. To complicate matters a little further, not all SNPs that show statistical correlations in one population, say the U.S., do so in another, say New Zealand. Which is why the Dunedin researchers developed a risk score from the 32 SNPs known from other studies. Of those 32 they could find 29 in their study cohort, so they built their score from those 29 SNPs. Participants were grouped according to their score into either high or low risk.
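To make the idea of such a score concrete, here is a minimal, unweighted sketch in Python. The three SNP names are real obesity-associated variants from GWAS, but the panel and the genotype are invented for illustration; the Dunedin team built their score from 29 SNPs, and the exact construction may differ from this simple allele count.

```python
# Minimal sketch of an additive genetic risk score: count risk alleles
# (0, 1 or 2 per SNP) across a panel of SNPs. Panel and genotype are
# hypothetical; the Dunedin study scored 29 obesity-associated SNPs.

def genetic_risk_score(genotype, snp_panel):
    """Sum the number of risk alleles carried across all SNPs in the panel."""
    return sum(genotype.get(snp, 0) for snp in snp_panel)

snp_panel = ["rs9939609", "rs17782313", "rs2815752"]      # e.g. FTO, MC4R, NEGR1
genotype = {"rs9939609": 2, "rs17782313": 1, "rs2815752": 0}

score = genetic_risk_score(genotype, snp_panel)
print(score)  # 3; participants above the cohort median would be labelled "high risk"
```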

The next step was to look at how the participants' genetic risk score (GRS) correlated with BMI in each decade, starting from 15-18 years of age, followed by 21-26 years, and then from 32-38 years. In the second decade (ages 15-18), people with a high risk score had 2.4 times the risk of being obese compared with those who scored low on the GRS. Had this been you, a high risk score would have made you almost two and a half times more likely to be obese as a teenager than your buddies of the low-risk persuasion. That sounds like a lot, and you might be tempted to think that screening your child for genetic risk would help you to be more vigilant in watching over his or her BMI while he or she is still under your care.

The authors certainly seem to think so when they say that "These findings have implications for clinical practice..." and that "the results suggest promise for using genetic information in obesity risk assessments." I respectfully disagree, and so might you.

Let's take your point of view for a moment, and not that of public health, where we are interested in one patient only: the population under our care. The only patient you are interested in is you, or maybe your child. For you, a relative risk of 2.4 doesn't tell you much. What you really want to know is what a high or low risk score means for you personally. The right questions to ask are along the lines of "what are my chances of becoming obese when my risk score is high?" and "what are my chances of not becoming obese when my risk score is low?". The answers to these two questions come in the shape of values we call the positive predictive value (PPV) and the negative predictive value (NPV). Unfortunately the Dunedin researchers don't report those values. But we can calculate them, which I did.

And here is the surprising answer: if you had a high score, your risk of being obese as an adolescent is just about 10%. In other words, even with a high-risk score, you stand a 90% chance of not being obese as an adolescent. And if your risk score had been low you would have a 95% chance of not becoming obese. Beats me, but I can't see the benefit of genetic testing.
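For readers who want to see the mechanics, here is a back-of-the-envelope sketch. The group sizes are invented so that they roughly reproduce the figures above (PPV about 10%, NPV about 95%, relative risk about 2.4); they are not the actual Dunedin counts.

```python
# Hypothetical 2x2 counts chosen to mimic the reported figures; not study data.
high_n, high_obese = 500, 50    # adolescents with a high genetic risk score
low_n,  low_obese  = 500, 21    # adolescents with a low genetic risk score

ppv = high_obese / high_n                # P(obese | high score)
npv = (low_n - low_obese) / low_n        # P(not obese | low score)
rr  = ppv / (low_obese / low_n)          # relative risk, high vs. low score

print(f"PPV = {ppv:.0%}, NPV = {npv:.1%}, relative risk = {rr:.1f}")
# PPV = 10%, NPV = 95.8%, relative risk = 2.4
```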

I deliberately talk only about the risk at adolescence. There is a simple reason for this. The researchers found that the relative risk of obesity between the high- and low-risk categories diminished progressively from 2.4 in the second decade to 1.6 in the fourth (ages 32-38). Adolescence shows us a time when study participants' exposure to environmental and behavioral influences had still been relatively short. Over the years, environment and behavior further diminish the predictive power of the genetic score. Which is akin to saying: your lifestyle choices give you greater power over your BMI than your genes do. And by extension, the choices you make for your children's lifestyle beat their genes easily, too. In other words, it's not so much the luck of the draw that determines your body weight, but rather your skill at playing the deck of (genetic) cards we were dealt at the moment of conception. The study's data say the same thing in other words: at birth the high-risk babies were not any heavier than their low-risk peers. Only once they were exposed to the outside world did BMI careers begin to diverge. For some of them.

This tells us one thing: when it comes to obesity, habits and environment are the key, not a potpourri of SNPs. Of course, if you are in the business of peddling genetic tests, you will disagree. And so will you if selling a guilt-free conscience to obese readers is what pays your bills. Which is why I'm curious to see how the media will portray this study. Let's stay tuned.

1.    Belsky, D.W., et al., Polygenic Risk, Rapid Childhood Growth, and the Development of Obesity: Evidence From a 4-Decade Longitudinal Study. Archives of Pediatrics and Adolescent Medicine, 2012. 166(6): p. 515-521. PMID: 22665028


No Time To Exercise? You Are Not Alone!

Lack of time is the most often cited excuse for not exercising. I deliberately chose the word "excuse" over its less judgmental alternative, "obstacle". Simply because I cannot see an "obstacle" when I compare two simple metrics: the hours people spend watching TV and the minutes needed to maintain one's health with exercise. With high intensity interval training, or HIT, health-enhancing exercise can be compressed into an amazingly short amount of time. When done right.
According to the Nielsen "Three Screen Report", Americans spend 5.1 hours daily in front of their TV. But they admit to "only" half that time, according to a survey by the Bureau of Labor Statistics. To be fair, I take the survey's figure of 2.7 hours for the comparison with the American College of Sports Medicine (ACSM) current guidelines for the quantity and quality of exercise [1]: the ACSM recommends 2.5 hours of exercise PER WEEK, against 2.7 hours in front of the TV PER DAY. Spread over seven days, those 2.5 weekly hours come to about 21 minutes a day. Cut your 162 minutes of daily TV watching by just those 21 minutes, and you are still left with more than 2 hours for mind-numbing soaps.

On a cautionary note to my fellow German readers: don't think for one minute that our TV habits are in any way better than those of our U.S. friends. According to statista's "Daten & Fakten zur Mediennutzung" we spend on average 220 minutes in front of the dumb tube. So, either we have, for once, outdone our U.S. friends, or their self-admitted 2.7 hours are an understatement. Anyway, those figures tell you why I talk about excuses and not obstacles.

But I'm a realist. Whatever my view on the issue of having time, it won't change other people's views. Which is why my colleagues in public health have begun to look into how to get the same health punch out of dramatically shorter exercise routines. And, as I mentioned in my previous post, a solution might have been found. It is called high intensity interval training, or HIT.

HIT is an exercise routine that consists of brief bouts of vigorous activity alternating with "active recovery" periods of more moderate intensity. Until very recently, researchers focused on comparing HIT with the conventional continuous endurance exercise of moderate-to-vigorous intensity, which is what those public health guidelines are all about. Most studies comparing those two exercise alternatives matched them for energy expenditure. Since energy expenditure is higher during the intense bouts, the overall time needed to expend the same amount of energy is shorter in HIT than in continuous exercise.

The latest research efforts, however, try to answer the question whether those high-intensity bouts might even compensate for a lower overall energy volume. In other words, could we reduce not only the time spent on exercise but also the total exercise volume simply by doing HIT, and thereby cut the time required for exercise even further? The latest study, conducted by Katharine D. Currie and her colleagues, seems to suggest just that [2]. Before I go into the details, let me explain why I find her line of investigation very appealing and important.

The overall purpose of exercise is to maintain functional health. The reason why exercise is key to human functional health is that humans are made to move. Only, today they don't move anymore. That's why my primary interest in exercise is its link to health. Anything else, such as weight loss, is secondary. Because if I can improve health by exercising, I have achieved my objective, regardless of whether weight loss has materialized as a side effect or not. Weight loss for its own sake, without any improvement in health, is a purely cosmetic issue, which doesn't interest me that much.

One of the main health issues attached to exercise is arterial function. Its impairment is the first step that leads to atherosclerotic plaque build-up in your arteries and ultimately to heart attack or stroke. The entire process typically lasts decades, and our current portfolio of risk factors, such as high cholesterol, alerts us way too late to this situation. I have written about this in my earlier post "When Risk Factors For Heart Attack Really Suck". Which is why I believe that arterial function is THE benchmark for testing the efficacy of exercise: it's an extremely sensitive early warning signal and a reliable tool to measure the effect of your exercise efforts. This is what Currie and colleagues had in mind. They wanted to see how a low-volume HIT routine affected the arterial function and fitness of 10 participants with existing heart disease.

Participants were tested individually for their fitness on a cycle ergometer. The researchers used the results of the fitness test to set the parameters for the two exercise routines, which all participants had to perform. The endurance protocol was set at 55% of each participant's peak power output as determined during the fitness test. In the endurance exercise bout, participants had to cycle at this intensity for 30 consecutive minutes.

The HIT protocol consisted of 10 1-minute bouts of exercise at 80% of peak power output, separated by 1-minute bouts at 10% of peak power output. That's 30 minutes of continuous exercise vs. 19 minutes of HIT, not considering warm-up and cool-down which were the same for both protocols.
Interestingly, while all participants completed the HIT protocol, 2 participants were unable to last through the endurance protocol. Arterial function improved similarly after both exercise protocols, despite the fact that the total work performed in the endurance protocol was significantly greater than in the HIT protocol.
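To put rough numbers on that difference, here is a small sketch. The peak power output of 200 W is an assumption for illustration only; the study set each participant's intensities individually, and warm-up and cool-down are ignored, as in the text.

```python
# Rough comparison of session time and total work for the two protocols,
# assuming a hypothetical peak power output of 200 W.

peak_power_w = 200.0  # assumed, for illustration only

# Endurance: 30 consecutive minutes at 55% of peak power
endurance_kj = 0.55 * peak_power_w * 30 * 60 / 1000

# HIT: 10 x 1 min at 80% of peak power, separated by 9 x 1 min at 10%
hit_kj = (10 * 0.80 * peak_power_w * 60 + 9 * 0.10 * peak_power_w * 60) / 1000

print(f"endurance: 30 min, ~{endurance_kj:.0f} kJ of work")   # ~198 kJ
print(f"HIT:       19 min, ~{hit_kj:.0f} kJ of work")         # ~107 kJ
# HIT needs about two thirds of the time and roughly half the total work.
```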

Now, 10 participants is a rather small number of subjects for such a study. The problem with a small number is insufficient statistical power to detect a difference in arterial function between the two protocols, if there was one. Which is why we look forward to seeing larger trials investigate this question.

The researchers also show one thing which is always close to my heart but which is rarely reported in study publications: the very different outcomes between individuals. After the endurance exercise one participant saw a dramatic improvement in arterial function, 4 participants had a more modest improvement, and the remaining 5 no improvement. Following the HIT routine, there were 2 participants with a dramatic improvement of arterial function, 2 with a more moderate improvement, 1 whose arterial function actually got worse and the remaining 5 with no change. Unfortunately the researchers do not tell us whether those who improved or didn't improve in one routine showed corresponding effects in the other routine. My guess is, for at least some of the participants, the reaction will have been different. But even if that was not the case, we can see again, that the presentation of group results masks the fact that different people react very differently to the same type of intervention. I have presented an example of this effect in my earlier post "Am I shittin' you? Learn to be a skeptic".

A similar degree of inter-individual difference was seen in a study which used the same low-volume HIT protocol, but this time on healthy sedentary adults. The question was whether 2 weeks of performing the HIT routine 3 times per week would improve the participants' ability to burn fat instead of carbohydrates. This so-called oxidative capacity is a marker of metabolic health and gives you a clue about your diabetes risk. True enough, the results support the idea that this minimal amount of exercise can substantially improve metabolic function. But again, the wide standard deviation of the group results points at substantial differences between individuals [3].

These inter-individual differences make prescription of exercise always a trial-and-error effort. As much as you would like to hear from your coach or doctor that a specific type of exercise will have a specific effect on your health, nobody can give you that certainty. In fact, if you encounter a coach who talks certainty, you know a coach whose knowledge is too limited to make him recognize his own limitations. That's something to be wary about.

Now, what if you would like to try HIT for yourself? How would you design a HIT routine? Before I give you a few pointers, let me warn you first: Do not take my advice as a medical recommendation. You follow it at your own risk. If you have been sedentary, and you have any doubt as to whether exercise at high intensity is good for you, seek medical advice first.

Obviously the best way of designing a maximally effective HIT routine is to go through a fitness test first. Ideally, one which tests things like your maximal oxygen consumption. The gold standard is the cardiopulmonary exercise test during which gas exchange is measured together with heart rate or ECG. The measured values will allow your coach to tailor the intensity of the intervals to maximum effect. But there is a simple do-it-yourself way, too. Here is how it works:

In exercise research we know that people's perception of exertion correlates quite reliably and closely with biomarkers of exertion (e.g. heart rate, oxygen consumption). We call this subjective perception the "rate of perceived exertion" or RPE. And we have scales for you to express this RPE. The most commonly used one is the Borg scale of perceived exertion. I personally prefer the OMNI version because its 0-10 scale is so much more intuitive than Borg's 6-20 scale. 

It doesn't matter whether you run or cycle or do any other type of endurance exercise. What you would describe as "extremely hard" (9-10) is the most strenuous intensity at which you can currently perform your exercise, regardless of your personal maximal oxygen capacity. That means an Olympic marathon runner has his 100% max at 10, and so do you as a couch potato, even though both of you have vastly different capacities. Since we want to exercise at 80% of that capacity, it doesn't matter what its absolute value is. The only thing that matters is that we hit the 80%. Which is what these scales are so good for.

At the left end (0) of the scale you find the descriptor "extremely easy", which is the way you would describe an exercise that you could perform for very long durations without any distress. The point is to get your exercise intensity during the high-intensity intervals to where you would describe the feeling as "hard", that is, at a 7-8 out of 10. That point correlates pretty closely with the 80-85% of maximal effort used by the researchers. The period of active recovery, which separates two high-intensity intervals, should get you to a perception in the range of 4-6.

Keeping this scale in mind, you can now perform your own interval training with whatever exercise you fancy, whether it's cycling, running, skating, swimming, or anything else. From experience with our own study participants I find a HIT routine of 1-minute high-intensity intervals, separated by 4-minute active recovery intervals, the most agreeable to start with. If that's too tough, cut the high-intensity interval down to 45 or 30 seconds. Try to get 3 to 4 high-intensity bouts into one exercise session. And don't be frustrated if initially you can manage only two. Do this 3 times a week, always with one day between 'HIT days', and you'll find your fitness level responding very fast to this minimal effort. Increasing this effort will be no problem. You can play around with different ways of doing that. Shortening the active recovery period is one way. Stringing more intervals into your exercise bout is another. The variations are limitless.
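As a concrete starting point, here is a minimal sketch of such a beginner session, built from the rules of thumb above. The 5-minute warm-up and cool-down and the exact RPE targets are my assumptions, not a prescription from the studies discussed here.

```python
# Sketch of a beginner HIT session: 1-minute "hard" intervals (RPE 7-8 on the
# 0-10 OMNI scale) separated by 4-minute active recovery (RPE 4-6).

def hit_session(n_intervals=3, work_s=60, recovery_s=240):
    """Return the session as a list of (phase, duration in seconds, target RPE)."""
    plan = [("warm-up", 300, "2-3")]                 # assumed 5-minute warm-up
    for i in range(n_intervals):
        plan.append(("hard interval", work_s, "7-8"))
        if i < n_intervals - 1:
            plan.append(("active recovery", recovery_s, "4-6"))
    plan.append(("cool-down", 300, "2-3"))           # assumed 5-minute cool-down
    return plan

for phase, seconds, rpe in hit_session():
    print(f"{phase:16s} {seconds:4d} s   RPE {rpe}")
```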

If there is one particular biomarker you want to improve, be it blood pressure, blood sugar or arterial function, get it tested before you start and then again a couple of weeks after you have persisted with the weekly HIT routine. Seeing the health effects of your efforts can be a strong motivator to go on, or to do even more. Getting from 20 minutes three times a week to 20 minutes daily would be a huge improvement. It still leaves you with plenty of TV time, and probably with enough time to wonder how you could ever have thought of time as an obstacle to exercise.

You'll probably not be tempted to do what I did 10 years ago: I threw out my TV and never replaced it. Which is why I can now work, study, exercise and write a blog. Which also means that to compensate for my zero TV time, somebody must spend a lot longer in front of the TV than the average 2.7 hours. Could that be you? Or someone you know, who would benefit from reading this?

1.    Garber, C.E., et al., Quantity and Quality of Exercise for Developing and Maintaining Cardiorespiratory, Musculoskeletal, and Neuromotor Fitness in Apparently Healthy Adults: Guidance for Prescribing Exercise. Medicine & Science in Sports & Exercise, 2011. 43(7): p. 1334-1359. DOI: 10.1249/MSS.0b013e318213fefb

2.    Currie, K.D., R.S. McKelvie, and M.J. Macdonald, Flow-Mediated Dilation Is Acutely Improved following High-Intensity Interval Exercise. Medicine and Science in Sports and Exercise, 2012. PMID: 22648341

3.    Hood, M.S., et al., Low-volume interval training improves muscle oxidative capacity in sedentary adults. Medicine and Science in Sports and Exercise, 2011. 43(10): p. 1849-56. PMID: 21448086


3 Ways to Spot Their Lies About Healthy Recipes

Briefly: If I had to name the one word that is most often used to label something as what it is not, my vote would go to "healthy". Whether it's the issue of sugar vs. honey, of butter vs. oil or of calories vs. nutrients, science and evidence are clearly not playing the lead role in the culinary theater of the world wide web. Judging by its popularity, that's a missed opportunity.
I recently gave a talk on the lies and deceptions the food industry uses in labeling and marketing its products. A German corporate health insurer had asked me to give that presentation to their clients. Naturally, a large percentage of the audience were women. Judging from the lively and entertaining discussion that followed my presentation, almost all of them prefer home-cooked food for their families over take-out or eating out. The most often cited reason was that home-cooked food is the healthier choice. I'm not convinced that they get it. Not if they get their food information from where they professed to search for it: the internet.
I know this because, in preparation for my talk, I followed my wife on one of her culinary search trips through the web. The number of recipe sites is staggering. So is the degree of misinformation disseminated by them, most of it in the form of labeling something as healthy when it clearly isn't. Let's look at three commonly encountered misperceptions on randomly chosen recipe sites. I won't give you the links, because to single them out would be unfair. What I found there is so ubiquitous that you will encounter it virtually everywhere once you start surfing the culinary side of the web.

Honey vs. Sugar

A self-proclaimed holistic health counselor shares her recipe for a "Healthier Flourless Chocolate Cake". Which immediately begs the question: healthier compared to what? The answer comes in brackets directly behind the title, where it says "refined-sugar free". Reducing sugar in our daily diet is certainly a big step towards better health. But you won't get there by replacing sugar with honey. The difference between sugar and honey is simple: sugar is 100% sugar, honey is roughly 80% sugar. Admittedly, that's a little oversimplified. Honey does have ingredients which sugar doesn't. But these are not an issue when it comes to reducing calories or the metabolic impact of sugar. Whether you sprinkle granulated sugar into the dough or fold honey into it, what your metabolism has to deal with is their common denominator, the breakdown molecule which ends up in your blood: glucose. Of the recipe's remaining 4 ingredients - butter, eggs, cocoa powder and baking chocolate - the butter is evidence that our holistic health counselor has missed out on another common diet misperception:

Butter vs. Oil

On another website we find the "Best Ever Healthier Chocolate Brownies". Honey isn't an issue for this lady. Her claim to healthiness is based on the conviction that other recipes use "... butter rather than olive oil", and that "olive oil contains healthier fats". This butter vs. oil issue is not as straightforward as the glucose theme, so let's look at it in greater detail.
The fats for human nutrition come either from animal or plant sources, and you can think of them in 3 major categories: saturated fats, and mono- and poly-unsaturated fats. We don't need to go into the molecular details of the fats - or fatty acids (FA), as they are more correctly called. Suffice it to say that the "unsaturated" part of the descriptor refers to one (mono) or more (poly) carbon atoms of the fatty acid molecule having less than the maximally possible number of hydrogen atoms linked to them. Depending on the position of the first "unsaturated" atom in the chain of carbon atoms, the poly-unsaturated fats are called omega-3 or omega-6 poly-unsaturated fatty acids (PUFA). There is one more thing you should be aware of: the human body can manufacture most of the fatty acids which it needs for its metabolism and maintenance. But there are two that we need to supply through our food intake: alpha-linolenic acid (ALA), an omega-3 FA, and linoleic acid (LA), an omega-6 FA. Our organism uses them to produce other fatty acid variants which are essential for our health.
Armed with this knowledge we can now ask ourselves an obvious question: What's the health issue with fats? You have probably heard that a high fat diet promotes high levels of cholesterol in your blood (partly true) and that high cholesterol is the cause of heart disease (not true). You have also heard that saturated fats, such as butter, are bad for you and that replacing it with olive oil is good for your health.
Now let's hear the facts as we know them today: dietary trials in which saturated fat, such as butter, was replaced by PUFA led to a reduction in the risk of cardiovascular disease [1]. However, when those PUFAs were mainly of the omega-6 variety, there was no reduction, or even a slight increase, in the risk of heart disease. It looks like replacing saturated fats with oils isn't going to do you any good if you don't choose the oils for their content of omega-3 FAs.
These observations match nicely with what we know from evolutionary biology. Comparing the fat intake between our hunter/gatherer ancestors and us, we notice that the ratio of omega-6 to omega-3 fatty acids has undergone a dramatic change. While that ratio stood at 1:1 or even lower throughout most of human evolution, our modern western diet has upped that ratio to a whopping 16:1 [2], and even greater than that, depending on where you live. When I now tell you that the downstream products of your omega-6 FA intake are pro-inflammatory whereas the products of ALA have the opposite effects, you might begin to see the picture. With heart disease and stroke being the late-stage consequences of chronic inflammation of the arteries, as I highlighted in my earlier post "Your Shortcut To Longevity", the type of fat appears to have an effect on your arterial health. And therefore on your risk of heart disease.
How large this effect really is, remains unclear. In a recently updated review of randomized clinical trials the Cochrane Collaboration came to the conclusion that reducing the content of saturated fat in favor of unsaturated fats had some effect on cardiovascular disease events in men only (not in women) and only if such dietary habit change lasted at least 2 years [3]. There was no detectable effect on the risk of dying from cardiovascular disease. Importantly, it was unclear whether the reduction in disease events was due to poly- or mono-unsaturated fatty acids.
There is another issue I have with that song and dance about olive oil. Its omega-6:omega-3 ratio is around 13, which doesn't exactly make it heart healthy. In comparison, the much maligned coconut oil has no omega-3 FAs, only omega-6, but it delivers only about a third of the omega-6 FA of olive oil. Sunflower oil also has no omega-3 component but delivers almost 6 times as much omega-6 as olive oil. The only really stand-up guy in the vegetable oil department is flaxseed oil: its omega-6:omega-3 ratio is 0.3, which makes it as good as any of the fish oils, which are considered healthy. But don't get too excited about flaxseed oil taking over your kitchen anytime soon. It can't hide its similarity with fish oil. I tried it. It's OK in a salad, and so are the seeds. Heat up the oil, though, and you think you are frying a cod liver. That taste doesn't go too well with a chocolate cake, or many other dishes.
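If you want to play with those numbers yourself, here is a small sketch. The fatty acid percentages are rounded ballpark figures chosen to be consistent with the ratios quoted above; actual oils vary by cultivar and brand.

```python
# Ballpark omega-6 : omega-3 comparison of a few common oils.
# Percentages of total fat are rounded illustrative values, not lab data.

oils = {                  # name: (omega-6 %, omega-3 %)
    "olive":     (10.0, 0.8),
    "coconut":   ( 3.0, 0.0),
    "sunflower": (65.0, 0.0),
    "flaxseed":  (14.0, 53.0),
}

for name, (n6, n3) in oils.items():
    ratio = n6 / n3 if n3 else float("inf")
    print(f"{name:9s}  omega-6 {n6:4.1f}%   omega-3 {n3:4.1f}%   ratio {ratio:.1f}")
# olive comes out around 13, flaxseed around 0.3, matching the text above
```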
So, what's the point? Of course, you can read the evidence as you like, but I wouldn't call a brownie or chocolate cake healthier when the only merit to this claim is its oil content. To me, the excess in calories by far outweighs the relative merits of the carriers of these calories. Which brings me to the third issue:

Calorie Density vs. Nutrient Density

When I added up the calories for the brownie and the cake, the calorie-to-weight ratio was in excess of 4. That is, every 100 grams of these buggers deliver more than 400 calories. This calorie density of 4 is way in excess of what man was exposed to through most of evolution. Think about it: fruits come with a ratio of 0.6 on average, vegetables with a ratio of 0.3, and game meat, the only meat available to our cave-dwelling ancestors, delivers on average 200 calories for every 100 grams. We can reasonably assume that our ancestors had to survive on an overall calorie-to-weight ratio of less than 2. Add to this the fact that they expended far more calories than we do today. Just to maintain calorie balance our ancestors had to eat a much larger quantity of food than we do today. And that food, while low in calories, was packed with nutrients. So their nutrient-to-calorie ratio was certainly far more favorable than ours is today.
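The arithmetic is easy to reproduce. The ingredient amounts below describe a made-up, brownie-like recipe with rough textbook calorie values; the point is only to show how quickly such a bake climbs past a calorie-to-weight ratio of 4.

```python
# Calorie density (kcal per gram) of a hypothetical brownie-style recipe,
# compared with the whole-food figures quoted above. All values approximate.

ingredients = {                 # grams, kcal (rough textbook values)
    "butter":           (200, 1480),
    "sweetener":        (150,  560),
    "eggs":             (150,  215),
    "cocoa/chocolate":  (200, 1000),
}

total_g    = sum(g for g, _ in ingredients.values())
total_kcal = sum(k for _, k in ingredients.values())
print(f"brownie batter: {total_kcal / total_g:.1f} kcal per gram")   # ~4.7

for food, kcal_per_g in [("fruit", 0.6), ("vegetables", 0.3), ("game meat", 2.0)]:
    print(f"{food}: {kcal_per_g} kcal per gram")
```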
There are many more issues plaguing much of the web's culinary universe. By rights, the word "healthy" shouldn't be anywhere near most of these places. Particularly when those places are all about eating and nothing about exercise. You can eat as healthily as you like, but if you fail to exercise at the right frequency, intensity and volume, then payback day is almost inevitable.
How your arteries benefit from exercise, and how you can make that exercise exactly right for you with the least possible effort, will be the subject of my next post. Until then, don't get hooked too much on the culinary web. I worked up a hell of an appetite during my recipe-surfing exercise with my wife. It didn't do any good to my waistline, and probably not to my arteries. But, what the hell, we need to enjoy ourselves sometimes, too.


1.    Kuipers, R.S., et al., Saturated fat, carbohydrates and cardiovascular disease. The Netherlands Journal of Medicine, 2011. 69(9): p. 372-8. PMID: 21978979

2.    Simopoulos, A.P., The Importance of the Omega-6/Omega-3 Fatty Acid Ratio in Cardiovascular Disease and Other Chronic Diseases. Experimental Biology and Medicine, 2008. 233(6): p. 674-688. DOI: 10.3181/0711-MR-311

3.    Hooper, L., et al., Reduced or modified dietary fat for preventing cardiovascular disease. Cochrane Database of Systematic Reviews, 2012. 5. PMID: 22592684

The Death Of Good Cholesterol

Briefly

There were always two types of cholesterol, the good and the bad. Until now. A large new study tells us that good cholesterol might have been an impostor. That's food for the media types. For those who think before they type, the real news is that we are finally getting closer to uncovering the impostors. Thanks to the genetics revolution which seems to be paying off in an unexpected area.

HDL - The Knight in Shining Armor

In the cholesterol universe there are two camps: good cholesterol, also known as HDL, and bad cholesterol, often referred to as non-HDL cholesterol. The latter comes in a variety of flavors, of which LDL is the most prominent and best known. From many large observational studies we know that high levels of LDL and low levels of HDL associate with an elevated risk for heart disease and stroke. Certain limits have been derived from these studies, above which your LDL shouldn't rise and below which your HDL shouldn't fall. The magic level for HDL is 60 mg/dL of blood. Above that limit, we are assured, HDL will even offset some other risk factor, such as age or being of the male persuasion. Given that a large percentage of people fail to achieve these desirable levels, researchers have been eagerly searching for pharmaceutical means to increase HDL. Now a new study tells us that HDL might have to be stripped of its White-Knight title, much for the same reason as "Dr." Karl-Theodor zu Guttenberg, the former German defense minister, had to be stripped of his doctorate last year: for being an impostor.

Epidemiology 101

If you have been following biomedical research for a couple of years, you will have noticed that results are often conflicting. So you might discount the findings of one study if hundreds of others come to a different conclusion. Only in this case you should pay closer attention, because what Voight and colleagues have produced strikes at the foundation of how we do research in epidemiology, the science which studies the health of populations [1]. To appreciate the gravity of the situation, I need to familiarize you with a basic concept of epidemiological studies: confounding. I'll use a very simple and hypothetical example.
Let's say we are interested in the causes of health and disease in children in the hypothetical and impoverished state of Maladipore. Our astonishing finding: children growing up in a household which owns a TV are significantly less likely to die during childhood than children growing up without the boob tube. The correlation between TV ownership status and survival is very strong and compelling.
On the face of it, we could now recommend that the prime minister improve the health of the nation by simply installing a TV in every household in which there are children. Since we know that this is nonsense, we take our epidemiology tools and look for another factor which has an influence on TV ownership AND on survival.
And so we discover that wealth is this third factor. We call it a confounder. Wealth has confounded our original finding: the wealthy can afford a TV, and they can also afford medical care and immunization for their children, whereas the inability to buy a TV reflects the inability to buy medical care, too. When we repeat our analysis of the data gathered during our observational study, we find that the link between TV ownership and survival disappears once we bring the third variable, wealth, into the equation. Clearly, providing every household with a TV wouldn't have reduced the rate of child deaths. Greater wealth, however, would.
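You can watch the confounder do its work in a toy simulation. All numbers here are invented: wealth drives both TV ownership and survival, while TV itself does nothing, yet the crude comparison makes TV look protective until we stratify by wealth.

```python
# Toy Maladipore simulation: wealth raises both TV ownership and survival.
import random
random.seed(1)

children = []
for _ in range(10_000):
    wealthy  = random.random() < 0.3
    has_tv   = random.random() < (0.8 if wealthy else 0.1)    # wealth -> TV
    survives = random.random() < (0.98 if wealthy else 0.85)  # wealth -> survival
    children.append((wealthy, has_tv, survives))

def survival_rate(rows):
    return sum(surv for _, _, surv in rows) / len(rows)

with_tv    = [c for c in children if c[1]]
without_tv = [c for c in children if not c[1]]
print(f"crude: TV {survival_rate(with_tv):.1%} vs no TV {survival_rate(without_tv):.1%}")

for wealthy in (True, False):
    stratum    = [c for c in children if c[0] == wealthy]
    tv_rows    = [c for c in stratum if c[1]]
    no_tv_rows = [c for c in stratum if not c[1]]
    label = "wealthy" if wealthy else "poor"
    print(f"{label}: TV {survival_rate(tv_rows):.1%} vs no TV {survival_rate(no_tv_rows):.1%}")
# Within each wealth stratum the TV "effect" disappears.
```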
In the case of Maladipore, common sense is all it takes to suspect and find the confounder. In real life it is almost never that simple. When we find an association between cholesterol and heart disease, we typically have some idea about the way cholesterol might contribute to heart disease. At that stage our ideas are merely hypothetical. The classic way of investigating them is through clinical trials in which we randomize participants into 2 groups, one in which we lower (bad) cholesterol and another in which we don't, the control group. Then we observe them for a period and note the rate at which people in both groups develop heart disease or die from it. If we find that the control group, the one which didn't receive the benefit of having its cholesterol lowered, has a significantly higher rate of falling ill, we conclude that lowering cholesterol is the way to go. Sounds easy, but it isn't, for several reasons. In the case of cholesterol, the time between developing high bad, or low good, cholesterol and suffering a heart attack or stroke is measured in decades rather than years. We also cannot just experiment with people as we would like to in the name of science. Ethics boards look very closely at the potential risks and benefits associated with what we do in trials. We cannot simply withhold treatment from a control group with scientific curiosity as the motivation. With these obstacles, we had to draw our conclusions from observational studies, which tell us a lot about associations but nothing about cause and effect. Until now, we simply had no other choice. But not anymore:

It's Mendel All Over Again

With larger and larger databases being developed from genetic research we can now do something else: Mendelian randomization studies. Which is what Voight and colleagues did. The concept behind it is amazingly simple and elegant, though not as brand new as you might think. It has been named after Gregor Mendel, the father of modern genetics, who first observed and described how traits are inherited. As always, a concept is best understood using an example. In the 1980s some researchers thought that very low cholesterol levels might increase the risk of cancer. There was definitely an association between cancer and low cholesterol, but nobody knew which was the cause and which the effect. Or whether there was a third confounding variable, as yet unknown. Now, you can't run a study in which you lower the cholesterol in some people just to see whether they will develop cancer. Go and find volunteers for that one.
So Martijn Katan had another idea [2]. In 1986 he pointed out that there existed a certain variation in one gene (the gene which encodes apolipoprotein E) which, if you carried it, would give you extremely low cholesterol levels. He also knew, of course, that we inherit our genes from our mother and our father in a random way. That means your hodgepodge of genes and my hodgepodge of genes are not systematically different from each other. Both are just random assemblies of genes from among all possible variations. If you inherited that low-cholesterol gene variant and I didn't, it was just the luck of the draw. The important point is that there is no room for confounders in this random allocation of genes.
So, if the "unconfoundable" low-cholesterol gene directly affects cholesterol levels and nothing else, then people who carry this gene should be found more often among patients with cancer than among people who are free from cancer. That was Katan's suggestion for a study design to test this cholesterol-cancer hypothesis.
Unfortunately, in 1986 it was impossible to realize this study design; the required genetic data were not yet available. That has changed. While, to the best of my knowledge, Katan's proposal has not been carried out for the cholesterol-cancer question, Voight and colleagues used his proposed design to investigate the HDL-heart disease theory.
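In its simplest form, the Katan-style analysis boils down to a frequency comparison: are carriers of the cholesterol-lowering variant over-represented among cases? The counts below are invented purely to illustrate the logic.

```python
# Toy Mendelian randomization comparison with invented counts.
carriers_in_cases,    n_cases    = 180,  2_000    # hypothetical cancer patients
carriers_in_controls, n_controls = 900, 10_000    # hypothetical cancer-free people

freq_cases    = carriers_in_cases / n_cases
freq_controls = carriers_in_controls / n_controls
print(f"variant carriers: {freq_cases:.1%} of cases vs {freq_controls:.1%} of controls")

# If genetically low cholesterol caused cancer, carriers would show up more
# often among cases. Similar frequencies argue against a causal effect, and
# because genes are allotted at random, a confounder cannot fake the result.
```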

The Death of Cholesterol?

They looked at a rare gene variant, which, as far as we know today, correlates strongly with HDL concentration, but not with any other cholesterol type. That's important, because we need to disentangle the effects of HDL from those of LDL. In their analysis, using data from 21,000 heart disease patients and 95,000 controls (people free of heart disease), the researchers could not find any association between HDL level and risk of heart disease. But Voight and colleagues didn't leave it at that. They also formulated a genetic risk score using 14 common gene variants with known effects on HDL (but not on LDL) and examined the score's association with heart disease in over 12,000 patients and over 41,000 controls. Again, nothing. Elevated HDL did not show up as the cherished knight in shining armor.
What do we make of this? First, that raising HDL cholesterol may not be a way to reduce the risk of heart disease. Therefore, secondly, let's not think that treating a so-called risk factor will reduce risk (more on that in my post "when risk factors for heart disease really suck"). Third, let's hope Pfizer & Co. get this message, too. Because drugs, which treat risk factors but not risk, are like impostors: they never deliver.



1.    Voight, B., et al., Plasma HDL cholesterol and risk of myocardial infarction: a mendelian randomisation study. The Lancet, 2012. DOI: 10.1016/S0140-6736(12)60312-2

Individualized Medicine, Ignorant Medics And An Invitation To Lose Weight.

In my previous post I promised to talk about your individualized way to achieving optimal health. If that made you think of personalized medicine, you were right. Almost. Because personalized medicine is still light-years away from us. That's the bad news. The good news: personalized prevention is an emerging reality. At least in my lab. Which is why I would like to invite you to become a part of it. No strings attached. But before we get to this, let's first get on the same page about the personalization of medicine.
Two questions we need to ask ourselves: What is personalized medicine and why would we want it?
Professor Jeremy K. Nicholson of Imperial College London defined personalized medicine as "effective therapies that are tailored to the exact biology or biological state of an individual" [1]. Such tailoring of a treatment, say for your high blood pressure, would require your doctor to evaluate your biochemical and metabolic profile in order to prescribe the most effective drug or treatment at the most effective dose, with the least possibility of side effects.
Now, why would we want this?
Simply because we don't have it. Because our current drugs do not work optimally in most people [2]. But don't just take my word for it. Take that of Dr. Allen D. Roses, head of the Drug Discovery Institute at Duke University School of Medicine. In an interview he told a UK newspaper, The Independent, that more than 90% of modern drugs work, at best, in 30-50% of the people. He said that in 2003. At the time, Roses was also senior vice president for genetics research and pharmacogenetics at GlaxoSmithKline. 
Contrary to what you might think, Roses did not reveal any nasty industry secret. What he said is plainly visible to everyone who can read the results of clinical trials through the lens of statistics. I simply quote Roses for effect. After all, he knows what he is talking about. Contrary to many medical doctors, who have an amusingly limited grasp of the basic statistics used to interpret and present the results of clinical trials. Just how limited has recently been demonstrated for the case of cancer screening in a mock-up trial investigating the understanding of practicing physicians [3].
Before I tell you the results of this trial, let me explain what it was about. One big question in cancer screening is whether screening helps to reduce the number of people dying from cancer. Let's take a hypothetical example, and here I reuse the one which the study's authors used to explain statistical outcomes. Let's say cancer was detected in a group of people at age 67, and all of them died of their cancer at age 70. The 5-year survival rate from diagnosis would stand at 0% (they all died before 5 years were over). Now imagine that all those cancer cases had been detected at age 60 with a screening test, and also imagine that all of them still died at age 70. In this case the 5-year survival rate would have been 100% (they were all still alive at age 65). You see the issue: the survival rate was better with screening, but the rate of dying remained the same. In epidemiology we call this sort of thing lead-time bias. That is, simply detecting a disease earlier might improve the survival rate without anyone living a day longer. Such lead-time bias is rarely an all-or-nothing affair as in this hypothetical case. Most of the time it comes in degrees. But in any case, it would help you as a patient if your doctor were able to see through the reporting and to question the clinical relevance of the results so presented. Your doctor should look for the mortality rate, the rate of dying, not the survival rate.
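Here is the same example in a few lines of code, so you can see how the headline statistic moves while the outcome doesn't. The ages are the hypothetical ones from the text.

```python
# Lead-time bias in numbers: everyone dies at 70, only the diagnosis age changes.
AGE_AT_DEATH = 70

def five_year_survival(age_at_diagnosis, age_at_death=AGE_AT_DEATH):
    """All-or-nothing 5-year survival for this stylized cohort."""
    return 1.0 if age_at_death - age_at_diagnosis >= 5 else 0.0

for label, dx_age in [("without screening", 67), ("with screening", 60)]:
    surv = five_year_survival(dx_age)
    print(f"{label}: diagnosed at {dx_age}, dead at {AGE_AT_DEATH}, "
          f"5-year survival {surv:.0%}")
# Survival "improves" from 0% to 100%, yet not a single death is prevented.
```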
Back to the results of the mock-up trial about physicians' skill in interpreting clinical research publications. If the results of this trial are representative of the population of your doctors, then you should be worried. Of the over 200 practicing physicians enrolled in this trial, fully 76% would recommend this useless screening test to you. They considered an improved 5-year survival rate as proof of the test's efficacy! These were not undereducated physicians of a third world country, mind you. They were randomly selected from the Harris Interactive Physician Panel, which is representative of the general U.S. physician population.
OK, you may say that this was a test related to cancer screening. What has it got to do with understanding the efficacy of a drug your doctor prescribes for you? Well, maybe your doctor aces the statistics test on drug trials after he has flunked the one on cancer screening. If you believe that, you probably also believe in the tooth fairy and in Santa Claus. But you may have another question: Can trial results be presented in such misleading ways? Aren't researchers supposed to report their results honestly and correctly? And what use is the peer-review process which every published paper has to go through?
With 70% of all medical research being financed by the private sector, data are a commodity. So, whether you develop a screening test, a drug or a treatment, you will want to dress it up as a magic bullet. Because when you have the magic bullet for, say, high blood pressure or high cholesterol, it will make it into every physician's armory. That's where the money is. It's certainly not in personalized medicine, which may find your competitors' drugs to be more suitable solutions for a variety of cases.
Which brings us back to personalized medicine. I have told you in my previous post how much it costs to develop a drug. Which is why Big Pharma would love to concentrate its research on the areas where the probability of success is high and the potential risk of failure is low. That's the area of follow-up drugs, drugs of the same class as established drugs, but with incremental improvements over the older version. Ironically, our health care system discourages this type of pharmacological research. Incrementally improved drugs are typically reimbursed at the same rate as older drugs. Not much profit potential there. Particularly when competition is fierce.
Which is why Big Pharma looks for new ground, that is, new therapeutic classes, for which, of course, there needs to exist a large market [4]. Again, individualization is certainly not desirable, as it would fragment any market. There is another drawback: when you break new ground, it takes a lot longer to get off that ground with a new product. Which is what we see in the FDA's records of drug approvals over the past 10-15 years [5]. Ten years ago the FDA approved on average 90-100 new drugs every year. For the past few years this number has dwindled to 20-30 drugs per year, with the average development period for a drug increasing from 10 years to 14 years. Seven of those years are locked up in the clinical trials required by the FDA. Faced with these risks and costs, how eager, do you think, is Big Pharma to develop niche products for individualized medicine?
Even if we didn't have all those economic issues, individualizing medicine is not as easy as running some genetic test and reading the right drug combo and dosage for your ailment from it. True, genetic testing has become possible and prices are coming down. But knowing your organism's blueprint doesn't mean knowing what your organism does with this blueprint. In my earlier post I explained epigenetics, and how environmental and behavioral factors have a great influence on how your genes play out in the final version of "you". I'm afraid that without this knowledge we can't get individualized medicine off the ground. Not to the extent it exists in most people's fantasy.
How about personalized prevention? What's the big difference from personalized medicine? Well, for one, we don't need to develop a drug. When I talk about prevention, I talk about preventing what kills most of us today: heart disease, stroke, cancer, and diabetes. Actually, diabetes per se does not kill us; it's those cardiovascular diseases which ride on it. Anyway, preventing them, and diabetes and many cancers, takes only some modifications to your lifestyle, chiefly not smoking, not being overweight, being physically active and eating a healthy diet. All of these come without undesirable side effects. And for all of them an incredibly large number of studies has investigated their effects under virtually all possible combinations of risk factors, biomarkers and population characteristics.
What doesn't exist is the knowledge of what will work best for you. For two reasons: First, most of this research has been correlated with our classical risk factors. In an earlier post I suggested why these risk factors really suck when it comes to predicting your risk for disease or your health career. Second, there is no knowing how you will react to any intervention even if a research paper tells you that this-and-this exercise routine has cut blood pressure in the participants from 140 to 120 mmHg. Each participant will have experienced a different effect on his blood pressure, ranging from a lot more to no effect at all. The 20-mmHg reduction is merely an average value. We would need to know how similar you are to which participant to tell you exactly what you might expect.
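A tiny numerical illustration of that point: the individual responses below are invented, but they average out to the 20-mmHg drop a paper would report, while individual participants experience anything from no change to a very large one.

```python
# Invented systolic blood pressure changes (mmHg) for ten hypothetical participants.
individual_changes = [-42, -35, -30, -25, -20, -18, -15, -10, -5, 0]

mean_change = sum(individual_changes) / len(individual_changes)
print(f"reported group effect: {mean_change:.0f} mmHg")        # -20 mmHg
print(f"actual range: {min(individual_changes)} to {max(individual_changes)} mmHg")
# The published "from 140 to 120" average says little about what happens to you.
```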
These are the two issues which we work on in my lab: getting away from inconclusive risk factors to what really predicts health, disease and longevity. And making this trial-and-error approach a systematic one. Instead of working with risk factors we have identified key organic functions which predict health and disease much more accurately than risk factors do. And instead of dishing out the generic "spend-150-minutes-per-week-on-exercise" advice we are building a database of biomedical knowledge which will match your profile with the most promising exercise and dietary interventions to help you achieve your personal goals with the least possible effort. And to monitor the effects of your efforts on your organic functions, we are developing tools for you to precisely measure them. For convenience's sake, preferably at home, or at least in your fitness center, at your office or your doctor's practice. 
We do walk entirely new ways to achieve all this, but we never stray from the scientific method. I will, in the twice-weekly postings of this blog, report occasionally on the progress we make. I can't hold my tongue, simply because this work is so fascinating and exciting, at least to me. Of course, I do know that most people are obviously not interested in their health. Judging by the fact that less than 2% of Americans achieve ideal health metrics [6]. But for those who really want to achieve chronic health and functional longevity, we will have something to offer. In fact, I have something right now:
With overweight being one of the biggest issues, we have developed a little tool with which you train what we call a 6th sense for your calorie balance. We have tested this tool in a successful proof-of-concept study. Which is why I would like to invite you to use it. Free of charge, no strings attached. Except for the following three:
First, bear with us for the design of this web-based tool. It can't compete with what you are used to from the design gods of Apple. Second, give me your feedback and suggestions. And third, use it as it is intended to be used: daily. You'll see what I mean when you get there. 
You can find it on facebook. Just type the name "adiphea" into the search bar and click on the app. Or call it up directly from here. It doesn't cost anything, and there is no advertisement other than what facebook puts on all our pages. The tool itself is described in all details on its app-page on facebook. Most of the explanations come in the form of short videos. Which is why I'm not going into details right here. Only one thing I need to mention: Ideally, you should have a body-fat scale instead of the regular bathroom scale. Body fat scales calculate your body water, too.  And the app works best when you enter body water together with your weight daily.
We have set aside a limited contingent for users who are truly interested to work on their health and on their weight in an entirely new way. For those who are determined enough to use our tool properly and thereby help us to perfect it, it will remain accessible free of charge. For all others, utilization will be terminated after one month.
If you are a coach, operate a fitness center, run a company or a medical practice, and you want the app for a group of your clients, staff or patients, talk to me. You'll find my email on my lab's website (www.adiphea.com). I will arrange for you to get administrative functions, so that you can manage your clients. And not to worry, the tool is built on top of an electronic patient data file, which meets the strictest data security and privacy requirements. We also do not use your email address for anything other than responding to your inquiry.
Let's see whether we can make personalized prevention fly. Big Pharma certainly wouldn't like it. They can't make money from chronically healthy people. But you could be on your way to NOT becoming one of the 50-70% of people in whom Big Pharma's drugs don't work so well. Now, is that an inducement or what?



Nicholson, J. (2006). Global systems biology, personalized medicine and molecular epidemiology. Molecular Systems Biology, 2. DOI: 10.1038/msb4100095

Wegwarth, O., Schwartz, L.M., Woloshin, S., Gaissmaier, W., & Gigerenzer, G. (2012). Do physicians understand cancer screening statistics? A national survey of primary care physicians in the United States. Annals of Internal Medicine, 156(5), 340-349. PMID: 22393129

Pammolli, F., Magazzini, L., & Riccaboni, M. (2011). The productivity crisis in pharmaceutical R&D. Nature Reviews Drug Discovery, 10(6), 428-438. DOI: 10.1038/nrd3405

Loscalzo, J. (2012). Personalized cardiovascular medicine and drug development: time for a new paradigm. Circulation, 125(4), 638-645. DOI: 10.1161/CIRCULATIONAHA.111.089243

Yang, Q., Cogswell, M.E., Flanders, W.D., Hong, Y., Zhang, Z., Loustalot, F., Gillespie, C., Merritt, R., & Hu, F.B. (2012). Trends in cardiovascular health metrics and associations with all-cause and CVD mortality among US adults. JAMA, 307(12), 1273-1283. DOI: 10.1001/jama.2012.339

How to survive the health care system.

You have heard about good and bad cholesterol. You have heard that increasing the former and reducing the latter will cut your risk of heart disease. You will now hear what is fundamentally wrong with this strategy of attacking risk factors, and how it prevents us from eradicating the heart disease epidemic sweeping the globe.
On 30th November 2006, Jeff Kindler, the CEO of Pfizer, praised their about-to-be-released drug for increasing good cholesterol as "...one of the most important compounds of our generation."
Three days later Pfizer halted the phase 3 clinical trial of its hoped-for blockbuster. The simple reason: the drug's active ingredient, torcetrapib, did what it was supposed to do: increase good cholesterol. But it also increased patients' risk of heart attack, stroke or death from any cause [1]. You probably see the common theme behind this and the story of my previous post: a drug improves a risk factor but worsens risk. Fortunately, Pfizer didn't cover that up.
Now, before you admire Pfizer as an outstanding citizen of the pharma world, let's look at their track record in the ethics department: In 2009 Pfizer pleaded guilty to a felony violation of the Food, Drug and Cosmetic Act for misbranding their drug Bextra and three others "with the intent to defraud or mislead".
The anti-inflammatory drug Bextra had been pulled off the market 4 years earlier, but Pfizer bribed its way into physicians' prescription pads. The consequence: a $2.3 billion criminal fine, the largest ever imposed. You would think such a fine would put a serious dent into a company's balance sheet, and indeed, 4th-quarter profits in 2008 were only 10% of what they used to be, because Pfizer had made a provision for what they knew was coming. But compared to $50 billion in annual sales, that's nothing over which a CEO would lose sleep.
Last month, Merck was fined $321 million for similar offenses related to Vioxx, a drug of the same class as Bextra.
Altogether, Big Pharma has been fined $8 billion over the past 10 years for repeatedly defrauding the U.S. health care system. If they do it repeatedly, it's probably not because they are slow learners. It's because - 'cherchez l'argent' - there is money in it. If that's the case, then what has been brought into the open may just be the proverbial tip of the iceberg. Now, an iceberg of monetary fraud is one thing; an iceberg of defrauding you of your health is quite another. Let me back up my suspicions, again with a recent example: Tamiflu.
Roche's Tamiflu is the only orally administered influenza antiviral drug in its class (neuraminidase inhibitors). Governments all over the world had been stockpiling it before the 2009 influenza outbreak. So did the U.S. government, at a cost of $1.5 billion for Tamiflu and Relenza.
Based on what evidence? Chiefly on a meta-analysis of 10 studies which, in 2003, informed the public that Tamiflu substantially reduced the need for antibiotics and also reduced the risk of developing serious complications [2]. What's wrong with that? That 8 of the 10 studies cited in this Roche-sponsored meta-analysis had not been published in peer-reviewed journals. That is, independent researchers have been unable to verify such claims. Not that they didn't try. The most authoritative organization for conducting meta-analyses is the independent Cochrane Collaboration. The Cochrane researchers Peter Doshi and colleagues attempted to verify the claims of the Kaiser meta-analysis, so named after its lead author. But despite repeated requests, Roche has remained uncooperative in sharing the full data reports, offering reasons of which Doshi and colleagues had to say that "none seemed credible" [3].
In a March 2012 interview with the Swiss newspaper Neue Zürcher Zeitung, the head of the German arm of the Cochrane Collaboration, Prof. Gerd Antes, enlightened readers about Roche's way of doctoring the Kaiser meta-analysis: not only did Kaiser NOT have access to the 8 unpublished studies, Roche had simply given him their own evaluation of these Roche-sponsored studies' data. And the other two studies alone do NOT support the overall "result" of the meta-analysis. If it weren't OUR tax dollars and OUR health, I'd say it was funny how governments all over the world are gullible (that's the polite version of 'stupid and reckless') enough to shovel billions of dollars and euros into Big Pharma's pockets, based on nothing other than doctored data. With my small lab we are currently jumping through hoops to get a few tens of thousands of euros from a government research fund for a project that will take us 2 years to accomplish. I'm not complaining, I just mention it for contrast.
If I were a conspiracy buff, which I am really not, I would suspect a put-up job: politicians first accept lobbyists' contributions, in return for which they waste our money on useless, or even dangerous, drugs, and then they earn brownie points from us by publicly wrist-slapping Big Pharma for its deception with fines that look big but really are not. With Pfizer alone spending around $12 million annually on lobbying lawmakers, and Roche and Merck each $8 million, this scenario doesn't look all that far-fetched. At least for Big Pharma, the return on (lobbying) investment appears obscenely good.

Anyway, what you have seen in the case of Tamiflu is called publication bias. It means that positive results are more often reported and published than negative ones. This bias comes with a serious side effect: meta-analyses and reviews, which typically inform medical practice guidelines, can only be as accurate as their knowledge base. If that base is biased, so is the treatment or advice you receive from your doctor.
How can we gauge the extent of this problem? It's surprisingly simple, provided you have access to the files of ethics boards and committees. Every study must be approved by an ethics committee before it can be carried out, and Prof. Antes has this type of access. He and his Cochrane team found that of all studies that received an ethics committee's approval since 2000, only 50% had their results published. Which simply means that for every published study there is another one that, in all likelihood, comes to a different conclusion about the effect and risk of its subject drug or treatment.
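To see what that does to the evidence base, here is a small simulation, purely illustrative and not based on any real trial data: 20 trials of a drug with NO true effect, pooled once in full and once after the "negative" half has quietly stayed in the drawer.

```python
import random

random.seed(1)

# 20 small trials of a drug whose TRUE effect is zero (values are invented)
trials = [random.gauss(0.0, 0.3) for _ in range(20)]  # observed effect sizes

def pooled(effects):
    """Naive pooled estimate: the plain average of the observed effects."""
    return sum(effects) / len(effects)

# publication bias: only the "positive" results make it into journals
published = [e for e in trials if e > 0]

print(f"pooled effect, all 20 trials : {pooled(trials):+.2f}")
print(f"pooled effect, published only: {pooled(published):+.2f}")
```

The first number should sit near the true value of zero; the second sits well above it. Pool only what got published, and the same worthless drug looks like a winner. That, in miniature, is the side effect of publication bias described above.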
That is frightening. You might wonder how medicine can be such an inaccurate science. It's because we still know so little. Every biomarker, every hormone, every molecule represents one small node in an immensely intricate network of biochemical pathways. At each node several pathways may intersect; some of them we know, many we don't. So when we develop a drug that manipulates one pathway through one of these nodes, we inadvertently interfere with other pathways. These effects may or may not turn up in the three phases of clinical trials. Recall the torcetrapib example, which I mentioned at the beginning of this post, and the glitazone example from the previous post.
That's why reducing risk is not as straightforward as reducing risk factors. Take the all-time favorite, high blood cholesterol. Almost two thirds of newly diagnosed Indian patients with coronary artery disease (CAD) have normal blood cholesterol levels [4]. Then there is the mother of all risk scores, the Framingham Risk Score (FRS). It uses age, gender, total cholesterol, good cholesterol, blood pressure, smoking and diabetes status to compute your 10-year risk of heart disease. But more than 40% of the people who develop the disease fly below the FRS radar. That is, they wouldn't have raised a red flag even if their doctor had examined them the day before their heart attack [5].
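To make concrete what "computing your 10-year risk" means, here is a minimal sketch of how a points-based score of this kind works. The point values, thresholds and risk brackets below are invented for illustration; they are NOT the published Framingham coefficients.

```python
def toy_risk_score(age, male, total_chol, hdl, systolic_bp, smoker, diabetic):
    """Map risk factors to points, sum them, and look the sum up in a risk table."""
    points = 0
    points += (age - 40) // 5                # older -> more points (invented scale)
    points += 2 if male else 0
    points += 2 if total_chol > 240 else 0   # mg/dL threshold, invented
    points -= 1 if hdl >= 60 else 0          # high "good" cholesterol earns a point back
    points += 2 if systolic_bp >= 140 else 0
    points += 3 if smoker else 0
    points += 3 if diabetic else 0
    # invented lookup: total points -> 10-year risk bracket
    if points < 5:
        return "<5%"
    elif points < 9:
        return "5-10%"
    return ">10%"

# example: a 55-year-old male smoker with borderline lab values
print(toy_risk_score(age=55, male=True, total_chol=230, hdl=45,
                     systolic_bp=145, smoker=True, diabetic=False))  # -> ">10%"
```

The structural weakness is visible right in the sketch: whoever stays below the cut-offs, for whatever reason, never raises a flag, which is exactly how so many future patients fly below the radar.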
Now, you probably think that we are making progress in identifying more and better biomarkers to predict disease risk. Boy, are you wrong. In a review published just 2 months ago, the authors rounded up 36 new biomarkers that are not already included in the FRS [6]. Of those 36, they investigated the 10 most promising, as determined by the number of studies published. Together these 10 had almost 123,000 studies to their names. Mind you, that's only the number of studies with a focus on cardiovascular disease! To what effect? None for most of them. And for the three with a moderate predictive value, BNP, CRP and fibrinogen, the authors found evidence for, you guessed it, bias.
Performing studies is no cheap business. So why burn money on biomarkers that we know to be of little use? Because they have great monetary value. Think about it: once a biomarker, such as cholesterol, has been accepted into clinical practice as a risk predictor, developing drugs that improve that biomarker is the logical next step. And once the widespread use of the cholesterol-lowering drug is found to correlate with some reduction in heart attacks and strokes, public health is happy with this drug. Because in public health, we are not concerned with you or with your personal health; we are concerned with the health of the population at large. Back to our example of the cholesterol-lowering drug: the more people qualify for its prescription, the larger the business of developing, producing and distributing it. Voilà, here is Big Pharma's big business.
Now, let me ask you a hypothetical question: what would you be prepared to do to prevent a $1.8 billion investment from going down the drain? That's the current cost estimate for bringing one drug to market [7]. It takes into account that only a fraction of the molecules that show some promise in laboratory rats will make it into a phase 1 clinical trial, where they are tested on humans. And only one in five of those that make it to phase 1 trials will make it all the way through phase 3 and from there to your pharmacist. With more than 10 years for the entire process, mind you. So let me refine my question: what would you do with your drug which, in the phase 3 trial, shows some effect but also a side effect in quite a number of participants? You would certainly NOT be tempted to hide these effects from the public, because you are such a nice person. But some other people, who also have shareholders breathing down their necks, well, they might be tempted to swing the trial data in favor of the expected blockbuster earnings, don't you think?
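To see how attrition turns modest per-study costs into a figure like that, here is a back-of-the-envelope sketch. The per-phase survival rates and costs are made up for illustration; they are not the numbers from Paul and colleagues [7].

```python
# invented per-phase survival probabilities and out-of-pocket costs per candidate
phases = [
    ("preclinical", 0.50,   5e6),
    ("phase 1",     0.60,  15e6),
    ("phase 2",     0.35,  40e6),
    ("phase 3",     0.60, 150e6),
]

cost_per_launch = 0.0
for i, (name, _, cost) in enumerate(phases):
    # probability that a candidate entering this phase eventually reaches the market
    p_launch = 1.0
    for _, p, _ in phases[i:]:
        p_launch *= p
    # each launched drug has to carry the cost of the candidates that failed here
    cost_per_launch += cost / p_launch

print(f"expected cost per launched drug: ${cost_per_launch / 1e9:.2f} billion")
# -> about $0.64 billion with these toy numbers; capitalizing the cost of money
#    over a 10+ year timeline is what pushes published estimates much higher.
```

Every drug that reaches the pharmacy has to pay for all the candidates that died along the way, which is why so much rides on a single phase 3 result.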
Which brings us to the question: how can you survive this system with your health intact? The simple answer is: don't develop any risk factors in the first place. Stay out of this health care system, or at least don't find yourself in it as a patient with a chronic condition. What that means for your chances of longevity and of celebrating your 80th birthday in full health, you have read in my previous post "When risk scores for heart attack really suck!"
In the next post I will tell you what your individual path to optimal health and longevity will look like, and how we, in my lab, are working on making an individualized prevention strategy a reality, soon. For everyone. At least for everyone who wants it. Stay tuned.


Research Blogging:
Barter, P., Caulfield, M., Eriksson, M., Grundy, S., Kastelein, J., Komajda, M., Lopez-Sendon, J., Mosca, L., Tardif, J., Waters, D., Shear, C., Revkin, J., Buhr, K., Fisher, M., Tall, A., & Brewer, B. (2007). Effects of torcetrapib in patients at high risk for coronary events. New England Journal of Medicine, 357(21), 2109-2122. DOI: 10.1056/NEJMoa0706628
Kaiser, L. (2003). Impact of oseltamivir treatment on influenza-related lower respiratory tract complications and hospitalizations. Archives of Internal Medicine, 163(14), 1667-1672. DOI: 10.1001/archinte.163.14.1667
Doshi, P., Jefferson, T., & Del Mar, C. (2012). The imperative to share clinical study reports: recommendations from the Tamiflu experience. PLoS Medicine, 9(4). PMID: 22505850
Wannamethee, S. (2005). Metabolic syndrome vs Framingham Risk Score for prediction of coronary heart disease, stroke, and type 2 diabetes mellitus. Archives of Internal Medicine, 165(22), 2644-2650. DOI: 10.1001/archinte.165.22.2644


Ioannidis, J.P., & Tzoulaki, I. (2012). Minimal and null predictive effects for the most popular blood biomarkers of cardiovascular disease. Circulation Research, 110(5), 658-662. PMID: 22383708

Paul, S., Mytelka, D., Dunwiddie, C., Persinger, C., Munos, B., Lindborg, S., & Schacht, A. (2010). How to improve R&D productivity: the pharmaceutical industry's grand challenge. Nature Reviews Drug Discovery. DOI: 10.1038/nrd3078