Sunday, September 2, 2012
A 95-year-old psychology article holds the key to solving the obesity epidemic. It's not a long-forgotten medicine or an ancient psycho-trick. It's a simple observation about the dynamics of feeding. Vindicated by neurohormonal research, here is what it means for your struggle with extra pounds. [tweet this].
When Wallace Craig dissected the feeding behavior of doves, his experimental animal of choice, he discovered two distinct phases: an appetitive and a consummatory phase. He defined appetite as "a state of agitation", which continues until food is presented, whereupon phase 2 begins. That's the phase you and I call eating. It's followed by a third phase of relative rest, which Craig called the state of satisfaction. You are forgiven if you now ask what science nugget could possibly be hidden in this platitude. But the best-hidden gems are often those in plain sight. In this case it's nothing less than the model explaining why so many of us wear dress sizes ranging from "XL" to "Oh my God, look at this!", while none of us actually wants to be seen in them.
Before I get to the beauty of Craig's observation, let me tell you the acid test for any biological model: it must make sense in evolutionary biology. If it does, it still may not be the final word; if it doesn't, we can safely discard it onto the heap of wishful thinking. Keeping this in mind, let's get cracking.
When Craig published his paper in 1917, he described the behaviors of his doves as instinctive. In other words, driven by innate processes that require neither conscious decision-making nor any degree of intellect. Today we know a lot more about those "innate processes", particularly that they are the result of a complex conversation between neurons and hormones playing out in the recesses of the animal brain. Not only do we know the chains of command running from brain centre to periphery, we also know the hormones (at least some of them) by name, such as Neuropeptide Y (NPY) and leptin. You don't need to remember them. What you need to remember is that "instinctive" has matured from a black-box stage to the stage of neurohormonal mechanisms, which can be tested quantitatively in the lab with experimental animals.
As you may have guessed, I didn't mention NPY and leptin by chance. Both are relatively new kids on the block: NPY was discovered in 1982, followed by leptin 12 years later. Now what do they have to do with eating behavior?
NPY is the most potent "orexigenic" peptide currently known. That's science speak for an appetite-stimulating peptide. Now you also know what it means when I tell you that leptin's effect is just the opposite, that is, anorexigenic, or appetite-suppressing. Inject NPY into the right places of a rat's brain and it will turn into a voracious eater. Give obese rats leptin, and they slim down.
Wonderful, you might say, so we do have a cure for obesity. That's what researchers thought, too. But it turned out that leptin administration does not help overweight people lose weight. That's one of the problems with animal experiments: what works in rats does not necessarily work in man, even if our behaviors are often indistinguishable.
Back to our feeding issue. With ethics boards being as they are, we won't get their approval for experimental NPY injections into people's brains just to watch their eating behaviors. But we do know that NPY is operational in humans as much as in rats and many other animals, and within the same brain centers. So a rat model is still our best bet for studying how hormones affect human feeding habits.
Now, one of behavioral science's problems with lab experiments is that an animal kept in a cage is not a good model for that animal's behavior under free-living conditions. Of relevance to our topic is the free-living animal's need to procure food before it can consume it. This go-and-get-food stage typically involves a fair amount of foraging or hunting, depending on the type of animal and its position in the food chain. It's what Craig had called the state of agitation in his captive doves. How is all this relevant to NPY's orexigenic role? Well, in 1995 Seeley and his colleagues wanted to know exactly how NPY does its fattening job. In an experiment they discovered that NPY injection did not increase the amount of an orally infused sugar solution the rats ingested. What it did increase was their trips to a bottle from which they had been trained to take food (sugar solution). In other words, NPY sort of activated the rats' go-get-food autopilot, but it didn't drive them to ingest more food when this food was presented intra-orally.
That's when the first suspicions arose that the equation "NPY = feeding frenzy" is not as straightforward as it had seemed. In an ingeniously designed experiment with lab rats, Ammar and colleagues wanted to see whether and how NPY and leptin affect the appetitive and the consummatory aspects of feeding behavior differently.
The results of this experiment show a much more complex hormonal picture than what had previously been thought. While NPY infusion made rats increase their physical activity to get food, it inhibited their intake when food was made available to them by oral infusion while they were on the "go-get" for food. Just the opposite was the case with leptin: while leptin made rats drop their efforts to get food, it made them increase their intake when food was delivered by oral infusion. The researchers experimented with male rats exclusively, not because they thought that females would react differently, but because they wanted to see how specific NPY's effect on the animal's drive to become physically active really is. So they used the one stimulus with the best track record of throwing males off any course of action, regardless of whether those males are rats or men: the presence of a sexually receptive female. Now, this must be very illuminating to my female readers: under the influence of NPY, male rats were "more into food than into females", so to speak. Definitely more than when NPY was taken out of the equation.
Just as an aside: what does this mean for women's belief that men operate under one of three mindsets ("I'm hungry, I'm thirsty, I'm horny")? It means that the order of priority is obviously determined more by hormones such as NPY than by female manipulation. Of course, as I already mentioned, rats are not necessarily a template for human biology. Which is why we should hope, for the sake of female rodents, that their male peers' mindsets are a little more sophisticated than what human females observe in their male counterparts.
But let's not talk rats for a while; let's talk humans. More specifically, humans in what was their natural habitat throughout 99% of our evolution: the pre-agricultural world which our ancestors roamed as hunter-gatherers. In this world it would have made great sense to be kick-started into a go-get-food mood when one's energy reserves began to deplete. And it would have made equally great sense to have one's preoccupation with food knocked down a few pegs once energy reserves had been replenished.
Interestingly, the circuitry which accomplishes this behavioral feat has been preserved over the eons of evolution, from mouse to man. And in this circuitry we find the same hormones, NPY and leptin, playing essentially the same roles, too. Obviously, this neurohormonal architecture has been a recipe for survival throughout the evolution of species. It is easy to see how evolution trained this architecture to align each species' feeding behavior (a) with its energy needs, (b) with the energy density of the food available to it, and (c) with the effort necessary to get this food.
Now here is where it all comes together in our obesity-plagued times: in our modern environment this inherited circuitry has turned from an asset into a liability. What use is NPY's drive to get physically active for the procurement of food, when the necessary physical activity has been reduced from spear-hunting a deer to opening the fridge? What use is NPY signaling the reduction of energy stores when your and my stores are too high in the first place? What use is leptin's stimulative effect on eating once food is available, when food is available everywhere, all the time? Not only have these drives become useless, they have become hazardous to our health. I'll spare you the proof: the much-quoted statistics of overweight and obesity.
When Wallace Craig first painted the architecture of feeding 95 years ago, obesity wasn't a problem at all. Now you might say "Wait a minute, his generation wasn't exactly the hunter-gatherer type. If all this talk about hormones and feeding behavior is correct, why were they not fat?" Good point. One part of the answer is that the obesity epidemic has been paralleled by an epidemic of rapidly declining physical activity. The other part of the answer is, again, a hormonal circuitry. A much more dangerous one. I will address it in my next post, and then we will construct a comprehensive model of feeding behavior which not only explains your personal failures, or triumphs, in your war against the XL sizes. It will also explain why and how, despite all those inherited neurohormonal mechanisms driving our feeding behavior, you can still win this war. [tweet this].
1. Craig W: Appetites and Aversions as Constituents of Instincts. Proc Natl Acad Sci U S A 1917, 3(12):685-688.
2. Seeley RJ, Payne CJ, Woods SC: Neuropeptide Y fails to increase intraoral intake in rats. Am J Physiol 1995, 268(2 Pt 2):R423-427.
3. Ammar AA, Sederholm F, Saito TR, Scheurink AJ, Johnson AE, Sodersten P: NPY-leptin: opposing effects on appetitive and consummatory ingestive behavior and sexual behavior. American journal of physiology Regulatory, integrative and comparative physiology 2000, 278(6):R1627-1633.
Sunday, August 19, 2012
Evolutionary selection favored those who became fat easily. That's the essence of the "thrifty gene hypothesis". It's like Madonna: on the wrong side of 50, and ripe to be dethroned by something with greater sex appeal. In this case the contender's name is the "drifty gene hypothesis". Here is why you shouldn't be too dazzled by it. [tweet this].
Exactly 50 years ago, Neel suggested that the high rate of diabetes in our society is the result of evolutionary selection favoring those of our ancestors whose genes made them store fat more efficiently during periods of food abundance. It's such a marvelously simple explanation that it doesn't take the brains of an Einstein to chatter about it at any dinner party where one wants to be remembered as quite the hobby geneticist. But to every party there is a party pooper. In this case, two of them: John R. Speakman and Klaas R. Westerterp are telling us that the high prevalence of obesity and diabetes actually disproves the thrifty gene hypothesis (TGH).
In a nutshell, their argument goes like this: our human and hominin ancestors went through so many feast-and-famine cycles over the past 2 million years that, if genetic selection were at work, we should by now all be carriers of the genes that made cavemen survive and modern man fat and diabetic. Since this is clearly not the case, the TGH can't be correct.
I'm a sucker for theories that challenge common wisdom, so I enthusiastically read the authors' arguments. Now, let's see how this enthusiasm evaporated.
To a considerable extent, obesity is determined by genes. If you want to put a number on it, genetic factors explain about 60% of the variance in obesity metrics such as the body mass index (BMI). Those are the numbers we get from studies that compare such metrics between identical twins and other sibling types. Just as an aside: when you consider genes as the one condition you can't change, 60% heritability still leaves a lot of wiggle room for you to fashion your own fate. That's good, because obesity comes with a host of nasty diseases, none of which makes your life longer or more pleasant. Think diabetes. Of course, you know all that, and it is not really our subject here. We want to know why there is such a high prevalence of obesity-prone people.
To answer this question, Speakman and Westerterp compiled some insights from genetics and put them through a mathematical blender. That sounds far simpler than it really was. For that blender to give you an intelligent answer, you need to feed it with intelligent data. Otherwise it's the old "nonsense in, nonsense out" paradigm. In the case at hand there are three data segments which need to be considered.
First, there is biology: what happens to a human organism when it is exposed to fasting? How long will it survive?
Second, there is genetics: what do we know about those 60% of genetic causes? Are they concentrated in a handful of genes, or are they spread over hundreds? And what do we know about the mutation rates of genes? Obviously, the more causative genes and the smaller the mutation rates, the longer it will take for any genetic mutation (or allele) to become fixed in the gene pool. "Fixed" being geneticist speak for "(almost) everybody has it".
Third, there is evolution & environment: how often did famines happen, and how many of our ancestors were affected by them at any one event?
Get the figures slightly wrong in any of those three segments and your result will be off track. And so will your conclusions.
To get intelligent data, the two authors first went through an exemplary exercise of modeling what happens to a human organism when it is exposed to a zero-intake famine. That's not as straightforward as you might think, because our metabolism goes through at least three distinct phases during extreme fasting. These phases are determined by our organism's way of storing energy reserves.
First, there is glucose, the building block of virtually all carbohydrates in our food. While our brain thrives almost exclusively on glucose, the body's glucose stores are remarkably small. Glucose is predominantly stored in the form of glycogen in muscle and liver tissue. These reserves are tapped first, and they are typically depleted within 24 hours. If you are a marathon runner, you do this depletion business a lot faster, say after 20 miles or so.
Since your brain still needs glucose, your body then starts to produce its own, largely from lactate and from glycerol, a component of fat. Which brings us to the second phase, in which the body metabolizes its fat reserves. But even fat reserves don't last forever. Once they are depleted, the body begins to cannibalize its protein. Actually, weight loss in phase 2 never comes from fat alone; proteins are burnt at the same time, but at a lesser rate, until the fat reserves have been depleted. And that's where fasting gets critical, because to your body, burning proteins for energy is like burning banknotes to warm your house: you go broke in no time. And "broke" means "dead" to your body.
Since time to death is a critical element in the mathematical model, the authors went through an exemplary effort of mapping the course from fully fed to fully dead. Interestingly, everybody reacts differently to this fasting business. Some people survive longer than others, even when they start with the same BMI. That's why Speakman and Westerterp applied three different models to predict survival time, each representing one of those known ways of adapting to starvation. For a severely obese, 1.64 m tall female weighing 100 kg, the models predicted a survival time of 249-289 days. Imagine, that's about 8-9 months with no food at all.
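As a plausibility check, that 249-289 day figure can be reproduced with simple energy arithmetic. All three numbers below are my own illustrative assumptions (the fat mass, the energy yield of adipose tissue, the average daily expenditure during starvation), not parameters taken from the authors' models:

```python
# Back-of-envelope version of the starvation survival estimate.
# All three figures are illustrative assumptions, not the authors' parameters.
FAT_MASS_KG = 45.0          # assumed fat mass of a 100 kg, 1.64 m female
KCAL_PER_KG_FAT = 7700.0    # rule-of-thumb energy yield of adipose tissue
KCAL_PER_DAY = 1300.0       # assumed average expenditure during starvation

survival_days = FAT_MASS_KG * KCAL_PER_KG_FAT / KCAL_PER_DAY
print(round(survival_days))  # ~267 days, inside the reported 249-289 day window
```

The authors' actual models are far more detailed (they track the glycogen, fat and protein phases separately), which is why they produce a range of survival times rather than a single number.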
On to the genetics assumptions. The one thing we know for sure is that obesity is a multi-gene condition. Very multi-gene, in fact, because genome-wide association studies (GWAS) have thrown up some 30-odd genes whose combined effect explains only 7% of those 60% of weight variance. So we assume that the unexplained difference resides within another 200 or so genes which we haven't even identified yet. Speakman translated this knowledge into the assumption that each individual gene has a net effect on fat storage of about 80 g. That is, a carrier of a gene's "thrifty" mutation (or allele) would store 80 g more fat than his peer with the "lean" version of the gene, with those 80 g translating into a 0.25% better chance of surviving a famine. With these assumptions the authors could calculate how many famines it would take to weed out the unlucky ones whose "lean" genes didn't give them the 80 g advantage. That calculation in itself is no rocket science. The authors took a given population of 5 million people and exposed them to a virtual famine, after which the population had been appropriately decimated and the percentage of "thrifty gene" carriers among the survivors had increased. They all mated happily after that until the population again reached 5 million. Then the next virtual famine struck, and so on.
How many famines would it take to eliminate the lean gene from the gene pool? Under the authors' assumptions about 6000 famine events.
They then made their final assumption: one famine every 150 years. That's 900,000 years altogether for those 6,000 famine events. Their conclusion: if the thrifty gene hypothesis and its assumption of selection pressure from catastrophic events were correct, we should all be obese today. Since we are not, the TGH is false.
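The selection arithmetic can be replayed as a toy simulation. Only the 0.25% survival advantage comes from the paper; the starting allele frequency and the 99% "fixation" threshold are my own simplifying assumptions, which is why this sketch lands in the same few-thousand range rather than at exactly 6,000 famines:

```python
# Deterministic toy model: how many famines until a "thrifty" allele
# with a 0.25% survival advantage dominates the gene pool?
ADVANTAGE = 0.0025   # relative survival benefit per famine (from the paper)
p = 0.01             # assumed starting frequency of the thrifty allele
famines = 0
while p < 0.99:      # treat >99% as "fixed": (almost) everybody has it
    # carriers are slightly over-represented among each famine's survivors
    p = p * (1 + ADVANTAGE) / (p * (1 + ADVANTAGE) + (1 - p))
    famines += 1

print(famines)         # a few thousand famine events
print(famines * 150)   # years required at one famine per 150 years
```

Starting from a rarer mutation (say, a single carrier in a population of millions) pushes the count up toward the roughly 6,000 famines the authors report; the point is the order of magnitude, not the exact number.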
The alternative explanation which the authors offer is a "drifty gene hypothesis", as opposed to the thrifty version. "Drifty" refers to genetic drift: the mutations of the genes which regulate fat storage were never really subject to selection pressure, and what we see today is simply the result of a natural drift of genetic mutations over the eons of human existence.
The authors argue further that excessive fat storage was a distinct disadvantage for our earliest hominin ancestors, for reasons of predation. Think of it like this: while neither a fat man nor a lean man can outrun a saber-toothed tiger, it's enough for the lean guy to run just a little faster than his fat bro'. Call it a stone-age version of the "first come, first served" principle, at least from the tiger's perspective: the first man I catch is the first man to serve me as breakfast.
The authors then suggest that once our ancestors discovered fire and spears and other things which placed them at the top of the food chain, the selective pressure for the lean gene vanished. Its thrifty sibling started to flourish, not because it was favored by famine-based selection pressure, but simply because man had taken tiger and co. out of the equation, and with them the selective pressure to NOT get fat. During those zillions of generations which separate the man-known-for-throwing-spears from the man-known-for-throwing-tantrums-when-the-iPhone-doesn't-work, those 200-odd genes accumulated just enough mutations for many, but not all, of us to become obese and diabetic.
Up to this point one might buy into Speakman's and Westerterp's story. But here is the twist:
Speakman has written about the subject before, with a different tagline. In his 2006 paper he suggested that the selection pressure of famines in human history was too small to have caused the effects attributed to it by the thrifty gene hypothesis. According to that paper, famines with severe mortality rates were rare and, most tellingly, a phenomenon of agricultural societies.
Indeed, the consensus view on famines in pre-agricultural vs. agricultural societies is that our hunter-gatherer ancestors were better fed and better protected against famines than their agricultural descendants. The hunter simply doesn't depend on a crop, whereas when a crop fails, food shortage is inevitable for the agriculturalist. But even then, a true famine, where there is no food at all, typically requires a back-to-back failure of crops in consecutive years. And even then, as Speakman pointed out in his 2006 paper, mortality rates rarely exceeded 10% of the population, with those 10% coming almost exclusively from those who are either too young or too old to reproduce and thereby contribute to the gene pool after the famine is over. The author's message in 2006: genetic mutations towards thrifty genes didn't have sufficient advantage or time to spread.
This little twist shows us that somebody is taking potshots at the TGH:
Shot 1 (2006): Famines haven't been with us for long enough nor with sufficient severity to have exerted the selective pressure on which the thrifty gene hypothesis rests. Ergo, TGH is wrong.
Shot 2 (2012): Famines were so numerous and severe throughout human history that their combined selective pressure on the thrifty genes would have been sufficient to make them a fixture in EVERYBODY'S genetic make-up. Since this is not the case, the TGH is wrong.
Science shouldn't be about taking potshots. Science is about the testing of falsifiable hypotheses in reproducible experiments. A mathematical model, such as the one presented in Speakman's most recent paper, does not qualify as such.
Here is why: given that mutations happen at a rate of roughly 1.1 per 30-100 million base pairs, we all carry about 100 to 200 new mutations in our DNA. Those mutations do not necessarily affect actual genes coding for proteins. And if they do, most confer a slight disadvantage, many have no effect on an organism's fitness, and only a few are favorable. Natural selection will weed out the deleterious ones, quickly fix the favorable ones, and let the neutral ones accumulate at the given mutation rate. To complicate matters, all those processes happen at vastly different rates depending on the location in the DNA. That much we do know. What we don't know is how much these rates differ. We certainly can't know it for genes which we haven't even identified yet, as is the case for most of the hypothesized fat storage genes. That's why the mathematical model with which Speakman supports his argument against the validity of the thrifty gene hypothesis is in all likelihood not reflective of what has happened throughout evolution. Which means it doesn't add any quantitative or objective evidence against the TGH.
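The "100 to 200 new mutations" figure follows from simple arithmetic. The roughly 3.2 billion base-pair genome size is standard; counting both inherited chromosome sets (the diploid genome) is my own assumption about how the estimate is meant, and the resulting range brackets the 100-200 quoted above:

```python
# Rough arithmetic behind "we all carry about 100-200 new mutations".
HAPLOID_BP = 3.2e9             # approximate human genome size in base pairs
DIPLOID_BP = 2 * HAPLOID_BP    # both inherited chromosome sets (my assumption)
RATE_LOW = 1.1 / 100e6         # mutations per base pair, lower bound
RATE_HIGH = 1.1 / 30e6         # mutations per base pair, upper bound

low = DIPLOID_BP * RATE_LOW    # ~70 new mutations per person
high = DIPLOID_BP * RATE_HIGH  # ~235 new mutations per person
print(round(low), round(high))
```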
In my next post I will tell you why I believe that the entire discussion misses the point. What we really want to know is how to help people avoid becoming fat and diabetic in the first place. Decoding the genome and its evolutionary history doesn't do that trick, because genes do not make us fat and diabetic; genes make proteins, nothing else. One subset of those proteins are the hormones. They drive our moods and emotions, our likes and our dislikes and, believe it or not, all our behaviors, from feeding to physical activity. For those latter two I have suggested an explanatory model in my dissertation thesis.
This model tries to explain why we eat too much and move too little, despite having the best intentions to do otherwise, and despite being aware of all the life-threatening consequences. But, more importantly, without requiring a complete understanding of all those hormonal happenings, the model suggests a practical and testable way to oppose those genetically encoded mechanisms for a longer and healthier life. Think of your car: you don't need to understand the mechanism of its gearbox to operate it for an optimal ride.
Achieving the same thing with your life could turn out to be a gratifying pastime while my geneticist colleagues work on unraveling the enigma of the genetics of obesity. Whatever newer and sexier model they develop to explain the genetic origins of obesity, we might look at it the way we look at Madonna and her variants: lots of entertainment value, but little of practical use. [tweet this].
1. Neel JV: Diabetes mellitus: a "thrifty" genotype rendered detrimental by "progress"? Am J Hum Genet 1962, 14:353-362.
2. Speakman JR, Westerterp KR: A mathematical model of weight loss under total starvation and implications of the genetic architecture of the modern obesity epidemic for the thrifty-gene hypothesis. Disease models & mechanisms 2012.
3. Segal NL, Allison DB: Twins and virtual twins: bases of relative body weight revisited. Int J Obes Relat Metab Disord 2002, 26(4):437-441.
4. Speakman JR: Thrifty genes for obesity and the metabolic syndrome--time to call off the search? Diabetes & vascular disease research : official journal of the International Society of Diabetes and Vascular Disease 2006, 3(1):7-11.
5. Xue Y, Wang Q, Long Q, Ng BL, Swerdlow H, Burton J, Skuce C, Taylor R, Abdellah Z, Zhao Y et al: Human Y chromosome base-substitution mutation rate measured by direct sequencing in a deep-rooting pedigree. Curr Biol 2009, 19(17):1453-1457.
Sunday, August 5, 2012
Public health has been telling you for years: you are fat because you move too little and eat too much. And yes, it's your fault if you don't break a sweat every day to keep your waistline in check. But research says that's not the entire truth. In fact, public health might have taken the easy way out, and here is how it could finally make amends. [tweet this].
If an alien scientist came to earth to study us the way we study lab rats, he would come to the same simple conclusion we do: give those animals more than enough food, take away the need to move around, and what you'll get is a population of mostly overweight individuals. I say "mostly" because there are always the odd ones who fall away from the norm. What fascinates me most in this image is that, while mice and rats probably do not communicate to each other the benefits of staying slim, we humans do, and still the result is the same. What our alien researcher sees is biology trumping consciousness. For a good reason: neither rats nor humans would survive in their natural habitat without the ability to store excess calories as fat, which then sees them through the inevitable lean periods. It gave our ancestors a good shot at survival, with little or no chance of becoming overweight. At least not then.
Today, obesity is the new normal. I won't bore you with the percentages; you hear and read about them in the media almost daily, with one pundit or another citing the ever-increasing number of people who are overweight or outright fat (the politically correct term being "obese"). Not that any of those pundits offers any solution or view other than that too little exercise and too much food are the cause. Those platitudes are typically topped off by denouncing people's weakness to do something about it, such as exercising more and eating less. When you look at the effectiveness of public health calls to exercise more and eat less, you'll find that overweight and obesity have increased nicely alongside those calls. Which means one thing: we need to do something differently.
Now, remember, I said there are always some odd individuals who seem to escape the fate of the majority of our experimental animals, be they rats in the lab or humans in free-living conditions. It is here that we ought to look at what makes them so different, and whether this difference lies in their genetic program or in their mental ability to override this program.
The funny thing is, the answer to this question has been relatively clear for years, but hardly anybody seems to draw the right conclusions from it. Just about a week ago, another wonderful study emerged on this subject.
Britt Eriksson and her colleagues investigated the correlation between body composition development and energy expenditure through physical activity in 1.5-year-old infants. That's not a first, but the way they did it is. When you look at any individual's energy expenditure, you need to know how much of it comes from the basal metabolic rate (BMR). The BMR tells us how much energy an organism needs to maintain life under resting conditions. There are large differences in these rates between individuals, such that two persons who share the same body weight, height and composition and who do the same type of exercise may burn substantially different amounts of calories, simply because one person has a higher basal metabolic rate than the other. So, if you want to know exactly how much of an individual's total energy expenditure comes from physical activity, you had better have accurate knowledge of his basal metabolic rate, because you need to subtract it from the total energy he or she burns.

In previous studies of infants, physical activity levels (PAL) had been estimated from predicted rather than actually measured BMR. Obviously, if your BMR prediction is incorrect, so will be your conclusions about PAL. That's why Eriksson and her colleagues measured basal metabolic rates objectively, by analyzing carbon dioxide production and oxygen consumption while the infants slept under a ventilated hood system. Add to this the researchers' way of measuring total energy expenditure with the gold-standard doubly labeled water method, and what you get is the most accurate differentiation between BMR and PAL possible in living humans.
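Why a measured BMR matters is visible in the arithmetic itself: PAL is conventionally computed as total energy expenditure divided by BMR, so any error in the BMR estimate propagates directly into the activity estimate. The numbers below are made up for illustration, not data from the study:

```python
# PAL = total energy expenditure (TEE) / basal metabolic rate (BMR).
def physical_activity_level(tee_kcal_per_day: float,
                            bmr_kcal_per_day: float) -> float:
    """Return the physical activity level, a dimensionless multiple of BMR."""
    return tee_kcal_per_day / bmr_kcal_per_day

# Two hypothetical infants with identical total expenditure but different
# (measured) BMRs yield quite different activity estimates:
print(round(physical_activity_level(1000, 700), 2))  # 1.43
print(round(physical_activity_level(1000, 600), 2))  # 1.67
```

A predicted BMR that misses the true value by 100 kcal/day would shuffle these two infants' activity rankings, which is exactly the error the measured-BMR design avoids.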
Our researchers did all those measurements on 44 children aged 1.5 years. All of them had participated in a body composition study when they were 1 and 12 weeks old, and body composition was measured again in the current study. Before we look at the correlation between body fatness and PAL in those 1.5-year-old children, let's recall what is normal in human development during infancy.
Healthy infants typically gain body fat, expressed as a percentage of body weight, during the first 6 months of life, after which the total body fat percentage (TBF%) slowly decreases. By the way, that was the case in only about 20% of the infants in this study. The majority increased their body fat percentage, but with large differences between individuals: at age 1.5 years, TBF% varied between 21% and 35%. And these changes in body fat correlated with the physical activity levels of the infants, such that those with a higher PAL had decreased their body fat percentage more than those with a lower PAL. The beauty of investigating these associations in infants is that you don't need to worry about your study subjects' volitional exercise habits, such as treadmill running, mountain biking or kicking ass instead of writing anonymous comments to blog posts. All their physical activity is non-exercise activity. I'll get to this important distinction in a moment. The point here is: genetic influences show up relatively unmasked. If such large inter-individual differences in body fat development are already evident in the earliest years of life, we have every reason to assume that there are phenotypes and genotypes which are better protected against fat gain than others. We also know that body fat percentage in the youngest years tracks into adolescence and on into adulthood.
Which of course also means that we should see such differences in adults, too. In fact we have been seeing them for more than 10 years, but somehow these observations don't make it into the media where the doom and gloom prophets of obesity have our ears and eyes but no solutions to offer.
Back to those studies: Levine and colleagues put 16 non-obese young adults, aged 25-36, on an 8-week supervised diet which provided a daily excess of 1,000 kcal over what each individual needed for weight maintenance . The participants had to maintain their usual level of exercise throughout the experiment. Physical activity and body composition were measured with the same gold-standard methods which the Eriksson group used on their infants. As a group, the participants of the overfeeding experiment stored 44% of the excess calories as fat and dissipated 53% through increased energy expenditure.
But those average values over a group of people don't interest us here. What we want to know is how much difference there was between participants. Well, fat gain varied more than 10-fold, from a minimal increase of 360 grams to a whopping 4.23 kg. Think about this for a moment: you let 16 people gorge themselves on a daily excess of 1,000 kcal for 8 weeks, and what you get is one person whose weight remains virtually the same, while another gains more than 9 pounds, and the other 14 show up anywhere in between those two.
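For perspective, here is a back-of-the-envelope check on those numbers. This is my own arithmetic, not from the paper, and it assumes the common approximation of roughly 7,700 kcal per kilogram of adipose tissue gained.

```python
# How much fat COULD have been gained if every excess kcal had been stored,
# and what fraction of the excess the two extreme participants actually stored.

KCAL_PER_KG_FAT = 7700          # common approximation for adipose tissue
excess_kcal = 1000 * 7 * 8      # 1,000 kcal/day for 8 weeks = 56,000 kcal

max_gain_kg = excess_kcal / KCAL_PER_KG_FAT
print(round(max_gain_kg, 1))    # ~7.3 kg if every excess kcal were stored

# Fraction of the excess actually stored by the two extremes:
for gain_kg in (0.36, 4.23):
    stored = gain_kg * KCAL_PER_KG_FAT / excess_kcal
    print(f"{gain_kg} kg gained -> {stored:.0%} of the excess stored")
```

So even the biggest gainer stored only a bit over half of the surplus, while the best-protected participant stored around one-twentieth of it. The rest had to go somewhere.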
The laws of physics tell us that energy cannot be lost or created, it can only be converted from one form to another. What this means to our weight gain experiment is that those who didn't store the energy as fat must have burned it somehow through physical activity. But how could that have happened if all participants kept their exercise on an even keel throughout the experiment? Had an enormous increase of BMR protected them against weight gain? Our researchers didn't think so, because experiments on BMR response to over- and underfeeding have been fairly consistent, showing only small changes in the range of 5%. Levine's participants were no exception to that rule. So, what happened?
The answer is in the details of what constitutes physical activity. There are two components, one of which you certainly know: exercise. Then there is the other, which I mentioned just a few lines earlier. It's called NEAT, which is short for non-exercise activity thermogenesis. In a less convoluted way, it means the energy you burn through activities of daily living, fidgeting, spontaneous muscle contractions and maintaining or adjusting posture while not lying down. In other words, the energy you burn through physical activity which is not volitional exercise.
NEAT accounted for over 70% of the increase in daily energy expenditure, with an average increase of 336 kcal/day. Mind you, this was the average over the entire group. Far more interesting, again, is the range, which spanned from a decrease of 98 kcal/day to an increase of 692 kcal/day. It's the same picture we saw in the fat-weight development. And yes, the larger a participant's increase in NEAT, the smaller his weight gain. The fellow with the 692 kcal/day increase subconsciously moved around more often. He had increased his strolling-equivalent activity by an average of 15 minutes per waking hour! Interestingly, the 4 female participants in this study had the smallest changes in NEAT. While this study is certainly underpowered to tell us anything about gender differences, its observation fits neatly with another one: the age-dependent increase of obesity risk is steeper for women than for men.
Now, back to the study results. If NEAT is NON-VOLUNTARY activity energy expenditure, then conscious rationally driven behavior has nothing to do with it. It's purely physiology talking. It's our genes' handwriting. And if this handwriting reveals such a substantial effect on weight development, shouldn't we look at means to increase NEAT, rather than keeping our current tunnel vision on exercise, which we already know is so difficult to adopt for most people? Let's put some effort into designing "obligatory" NEAT into our life. Or rather, designing NEAT killers (such as remote controls) out of it.
To our alien researcher, this might just be the next experiment, as it is for his human peers, who are already experimenting with running wheels and wheel locks in their lab rats' cages. After all, a 336 kcal/day deficit translates into almost 14 kg of fat over a year. That's certainly something public health ought to be interested in.
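The arithmetic behind that conversion is simple. My own simplification: it assumes pure fat tissue at roughly 9 kcal per gram and ignores any metabolic adaptation over the year.

```python
# Convert a sustained daily energy deficit into its fat-mass equivalent per year.

def yearly_fat_kg(daily_kcal_deficit, kcal_per_gram=9.0):
    """Fat mass (kg) equivalent to a daily kcal deficit held for 365 days."""
    return daily_kcal_deficit * 365 / (kcal_per_gram * 1000)

# The average NEAT increase of 336 kcal/day, sustained for a year:
print(round(yearly_fat_kg(336), 1))  # ~13.6 kg -- "almost 14 kg of fat"
```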
1. Eriksson B, Henriksson H, Löf M, Hannestad U, Forsum E: Body-composition development during early childhood and energy expenditure in response to physical activity in 1.5-y-old children. The American Journal of Clinical Nutrition 2012.
2. Levine JA, Eberhardt NL, Jensen MD: Role of Nonexercise Activity Thermogenesis in Resistance to Fat Gain in Humans. Science 1999, 283(5399):212-214.
Eriksson B, Henriksson H, Löf M, Hannestad U, & Forsum E (2012). Body-composition development during early childhood and energy expenditure in response to physical activity in 1.5-y-old children. The American journal of clinical nutrition PMID: 22836033
Levine JA, Eberhardt NL, & Jensen MD (1999). Role of nonexercise activity thermogenesis in resistance to fat gain in humans. Science (New York, N.Y.), 283 (5399), 212-4 PMID: 9880251
Sunday, July 22, 2012
We live in interesting times. Admittedly, my view of the times is myopic: it's focused on the biomedical. So, I'm obviously not referring to Greece teetering on the economic brink. In biomedicine, our Greeks are the cherished wisdoms about salt, fat and BMI. Similarity 1: They are not doing so well. Similarity 2: Their balance sheets scream bankruptcy. Similarity 3: Our authorities won't kick them out for fear of a domino effect.
Actually, medical history is full of interesting times. Remember when a young doctor suggested that simply washing hands between dissecting cadavers and helping women give birth would seriously reduce the regular one-in-five death rate from childbed fever?
Of course, you won't remember this: the place was the Vienna General Hospital in Austria, the year was 1849 and the young doctor's name was Ignaz Semmelweis. While he didn't publish his observations, one of his students did, in the grand old dame of British medical journals, the Lancet . At that time infection was known per se, but it was believed to spread through a "peculiar morbid atmospheric influence which extends beyond the range of personal communication". Semmelweis believed in washing hands. And he had the numbers to prove his belief. After introducing a hand-washing rule in his department, childbed deaths dropped by 75%. The response of Semmelweis' peers and superiors to his challenging notion is today known as the Semmelweis reflex. Unlike Semmelweis, it is very much alive. It is the reflex-like rejection of new insights because they disagree with entrenched beliefs.
Semmelweis' observations were one of the initial steps in the development of the germ theory of disease. Three more names are attached to its development: Louis Pasteur, Robert Koch and Joseph Lister. The latter was the first to apply this new theory to surgical procedures. So, the next time you gargle with Listerine, you might want to say a silent thank you to all the men and women of science who had the guts to challenge those pompous idiots who, in true Semmelweis-reflex mode, did everything to discredit the new insights. And they usually are quite successful. Who wants to argue with a praetorian guard of honorable old professors? In Semmelweis' case, they got him barred from medical practice, they publicly ridiculed him and ultimately drove him to insanity. All the while, women continued to die in childbed. Unsurprisingly, because medical textbooks continued to teach the old views on childbed fever until the 1890s. But once germ theory, and with it hygiene, was adopted into medicine and daily life, the mortality landscape changed dramatically. Infectious diseases disappeared from the pole position of the death tables, and the 1900s witnessed the emergence of their replacements: cardiovascular disease (CVD) in all its flavors, from hypertension and atherosclerosis to heart attack, stroke and heart failure.
Recently, potential triggers for those Semmelweis reflexes have been coming out of research, though probably not as dramatic in consequence as the infection theory.
Salt is no evil
I have written about the potentially flawed obsession with trying to get everybody to reduce salt intake in order to reduce blood pressure and stroke in the entire population. The latest nail in this obsession's coffin is Alderman and Cohen's review of 23 observational studies and 7 randomized controlled trials (RCTs), all investigating the effects of salt intake on parameters of health . The observational studies accumulated data on 360,000 participants and recorded 26,000 disease events. While 7 studies showed a direct relationship between increasing sodium and increasing CVD events, another 6 studies demonstrated the opposite: a clear inverse relationship. Two studies showed a J-shaped relationship, in which low and high intakes increased disease risk, whereas 8 studies showed no relationship at all, or only inconsistent results.
Only 9 of the 23 studies measured salt intake objectively, that is by measuring participants' urinary sodium excretion. I mention this because the other 14 studies relied on self-reported sodium intake, which is prone to recall errors. Of those 9 studies with objective measurements, only 1 showed a direct association, whereas in three studies higher intake was associated with fewer disease events. Two studies showed a J-shaped relationship and one didn't show any relation.
Is it now wise to simply balance those scores, like in football, and let the highest scorer win the tournament? In our case the team "inverse relation" would win hands down over its competitors "direct relation", "J-shaped relation" and "no relation". Should we draw our conclusions in this way? No, science does not work this way. And it's not the way the authors chose. When they looked a little closer at the sodium intake ranges in each of the RCTs, they found that the seemingly conflicting results of the observational studies could be reconciled easily.
It turns out that it all depends on the baseline intake from which you increase or decrease your salt consumption. The safe range is actually quite large: between 2.5 and 6.0 g/day. Go below or above that and you will face some increased risk. Interestingly, this range is way in excess of the current authoritative recommendation of "less than 2.0 g/day". What those "authoritative guidelines" also rarely mention are some other side effects of lowering salt intake. In several of those RCTs, salt reduction came with an increase in blood cholesterol, insulin resistance, adrenaline secretion and sympathetic nerve activity. None of those effects is beneficial to health. And it's ironic that all of them, except for cholesterol, tend to raise blood pressure. So the net effect of a salt reduction, or increase, in you is always a composite of all those biochemical responses to the change in sodium intake.
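In code, the J-shaped finding boils down to a simple banding rule. The 2.5-6.0 g/day band is the one reported in the review; the labels, and the idea of expressing it as a function, are mine.

```python
# Toy classifier for the J-shaped sodium-risk relationship described above.
# Band boundaries from the review; labels and structure are illustrative.

def sodium_risk(sodium_g_per_day):
    if sodium_g_per_day < 2.5:
        return "below safe range (risk may increase)"
    if sodium_g_per_day > 6.0:
        return "above safe range (risk may increase)"
    return "within reported safe range"

print(sodium_risk(1.8))  # note: the "<2.0 g/day" guideline falls below the band
print(sodium_risk(4.0))
print(sodium_risk(7.5))
```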
Want to bet whether these insights will trigger the Semmelweis reflex in some of those who have built their career on maligning salt? On to the next subject:
Fat isn't so bad either
Khaw and colleagues wanted to know the answer to an old question: does the fat in your diet give you heart disease ? The belief that dietary fat and heart disease march in lock-step is so ingrained that you are forgiven for wondering why anybody would spend time and effort on this question. Well, that's because a recent meta-analysis of 21 studies, following 347,000 people for 5 to 23 years, could NOT find any association between the two .
Since all of the previous studies had been of an observational nature, which does not allow for conclusions about causality, the researchers used data from a prospective trial which had investigated the correlation of diet with cancer outcomes, the European Prospective Investigation into Cancer (EPIC) study. They looked at the fats in the blood of 10,000 participants aged 40–79 years, and they followed them from 1993–1997 through 2011. During this period 2,424 participants were diagnosed with heart disease. From among the remaining participants the researchers chose 4,930 disease-free controls for a comparative evaluation. They checked whether, and how strongly, saturated and unsaturated fatty acids in the blood correlated with heart disease.
Since the fatty acid composition in the blood mirrors dietary fatty acid intake, this is as close as you can get to a conclusion about how the intake of type of fat affects your risk of heart disease. Of course, you need to adjust for other risk factors, such as age, sex, BMI, smoking, physical activity, alcohol intake, diabetes, blood pressure, cholesterol and other known risk factors for heart disease.
The researchers did those adjustments, and they found the saturated fatty acids to be only weakly related to heart disease. But that's not the surprising find. The real surprise was about the unsaturated fatty acids of the famous omega-3 and omega-6 persuasion. You have always heard how omega-3, the fish-oil variant, is so good for your heart and omega-6 is not. Well, listen to this: omega-6 turned out to be protective against heart disease, and omega-3 wasn't. That's contrary to our hitherto held beliefs that popping fish oil pills will make you a Methuselah, and that reducing omega-6 intake will do wonders against inflammation in your arteries. See a Semmelweis reflex on the horizon?
BMI is a useless crutch
There is probably no other number which has become so ingrained in our medical psyche as the BMI. This ratio of body weight to height squared is the human equivalent of a meat stamp: if it's below 25, you are the prime cut which every health insurer wants on his client list. Bring it above 30 and you are a fat and soon-to-be-sick bum whom nobody wants to talk to, unless your name is John Candy. In a society where we determine the winner of a Formula 1 race with millisecond precision, we accept being stamped at-risk with the accuracy of 20/100 vision (20/20 being normal, and 20/200 being the cut-off for legal blindness).
The problem with BMI is, it doesn't differentiate between muscle and fat tissue, which makes a body-builder look fat, and bad, on the chart. BMI also doesn't tell you where your fat resides, on your buttocks or on your waist. The latter is certainly worse for your health than the former. Still BMI is THE number to judge you by. Maybe not any more.
Krakauer and Krakauer have developed a new measure by looking at the numbers of the National Health and Nutrition Examination Survey (NHANES) 1999-2004 and correlating those numbers with the death statistics. They wanted to blend weight, height and body shape into a more informative indicator of disease risk, which is why they called it A Body Shape Index, or ABSI. They also wanted this number to be easy to calculate from parameters which everyone can measure, which is why the ABSI only calls for waist circumference (WC) to be measured in addition to the BMI's parameters of height and weight. I won't bore you with the details of the statistical development of this ABSI, but suffice it to say, it's been done beautifully and very thoroughly. Then, after adjusting for other known risk factors, such as age, smoking, diabetes, blood pressure and cholesterol, the authors correlated BMI, WC and ABSI with death. While WC and BMI didn't show any correlation, ABSI was strongly correlated. That's surprising, given the relatively short follow-up period of 5 years.
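For the curious, both indices are easy to compute. The ABSI formula below is the one from the Krakauer paper as best I can reconstruct it (waist and height in metres, weight in kilograms); treat the exact exponents as an assumption to be checked against the original before relying on them.

```python
# Sketch of BMI vs ABSI. ABSI formula assumed to be WC / (BMI^(2/3) * height^(1/2))
# per Krakauer & Krakauer -- verify against the original paper.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def absi(waist_m, weight_kg, height_m):
    b = bmi(weight_kg, height_m)
    return waist_m / (b ** (2 / 3) * height_m ** 0.5)

# Same weight and height (hence identical BMI), different waistlines:
print(round(bmi(80, 1.80), 1))   # 24.7 for both
print(absi(0.85, 80, 1.80))      # slimmer waist -> lower ABSI
print(absi(1.00, 80, 1.80))      # larger waist -> higher ABSI, same BMI
```

This is exactly the point of the index: two people who are indistinguishable on the BMI chart get different ABSI scores once the waistline enters the formula.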
Of course, there are a lot more questions to be answered before the ABSI will make it into medicine's hall of fame. The most important: does lowering the ABSI translate into better health, lower risk or a longer lifespan?
So, be prepared for quite some time to pass before your doctor tells you that your ABSI requires some serious attention. Once that happens, you might remember this post and look up the time that has passed between its publication and your encounter with the ABSI in a medical environment. You will then see that medical science grinds slowly. But ultimately it grinds.
In this case, medical science might grind a little slower, because the ABSI isn't the brainchild of a biomedical or public health scientist. Its developer is an assistant professor in the Department of Civil Engineering of the City College of New York.
Now, how can a civil engineer's index beat our beloved BMI when biomedicine's best brains have been laboring over a BMI replacement for years?
Great potential for a Semmelweis effect, don't you think? [tweet this].
1. Routh CH: On the Causes of the Endemic Puerperal Fever of Vienna. Medico-chirurgical transactions 1849, 32:27-40.
2. Alderman MH, Cohen HW: Dietary Sodium Intake and Cardiovascular Mortality: Controversy Resolved? Am J Hypertens 2012, 25(7):727-734.
3. Khaw K-T, Friesen MD, Riboli E, Luben R, Wareham N: Plasma Phospholipid Fatty Acid Concentration and Incident Coronary Heart Disease in Men and Women: The EPIC-Norfolk Prospective Study. PLoS Med 2012, 9(7):e1001255.
4. Siri-Tarino PW, Sun Q, Hu FB, Krauss RM: Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease. Am J Clin Nutr 2010, 91(3):535-546.
Routh CH (1849). On the Causes of the Endemic Puerperal Fever of Vienna. Medico-chirurgical transactions, 32, 27-40 PMID: 20895917
Alderman MH, & Cohen HW (2012). Dietary sodium intake and cardiovascular mortality: controversy resolved? American journal of hypertension, 25 (7), 727-34 PMID: 22627176
Khaw KT, Friesen MD, Riboli E, Luben R, & Wareham N (2012). Plasma Phospholipid Fatty Acid Concentration and Incident Coronary Heart Disease in Men and Women: The EPIC-Norfolk Prospective Study. PLoS medicine, 9 (7) PMID: 22802735
Siri-Tarino PW, Sun Q, Hu FB, & Krauss RM (2010). Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease. The American journal of clinical nutrition, 91 (3), 535-46 PMID: 20071648
Sunday, July 15, 2012
"It is now well known that spending too much time sitting down is bad for the heart, even if one takes regular exercise."
That's what the Daily Telegraph told us on 10 July this year. Behind this piece of insight is a study published by Katzmarzyk and colleagues a few days earlier. The authors investigated what effect the time we spend sitting down every day has on health and life expectancy. In the USA, that is.
Maybe the Daily Telegraph lives in a different knowledge universe, but in the one where biomedical research counts, the association between sitting and heart disease is not as clear as the reporters make it out to be. Call me a fusspot, but the only study designs which allow us to draw conclusions about causality are those where we expose a randomly assembled group of individuals to a certain intervention (in this case: sitting down for extended hours every day) and then compare the outcome in that group with the outcome of another randomly assembled group which didn't get our intervention. Assuming the two groups didn't differ in any meaningful way from each other at the outset of our experiment, we can, at the end of it, ascribe a possible difference in outcome between the groups to our intervention. That's what I want you to keep in mind while I walk you through the study which prompted the Daily Telegraph to tell you that sitting too long will cut your life expectancy.
Let's first look at the background to the authors' research question, which was "To determine the impact of sitting and television viewing on life expectancy in the USA" . Over the past 60 years we have accumulated a vast body of evidence for the benefits of physical activity on health. The results of this research are reflected in every guideline on how, and how much, we should exercise. You could say "made to move" is written all over our genes. Only very recently have we begun discovering a sub-clause, written in small print, saying "man is not made to sit", which we interpret to mean that cramming movement into a brief period of time every day doesn't help us much if sitting around is what we do for the rest of the day. Katzmarzyk and Lee simply wanted to extract from the available evidence how a violation of this newly discovered sub-clause impacts our health and longevity.
So, they set out to identify all the studies from which reliable data could be gleaned about the effects of sitting and television time on the risk of dying. Only 5 studies matched those criteria. From these they pooled the relative risk results into a meta-analysis. Then they looked at the sedentary behaviors of the U.S. population. For that purpose they consulted the data of the National Health and Nutrition Examination Surveys (NHANES), and they also looked at the latest life tables for this population as published by the World Health Organization (WHO). We don't need to go into the statistical intricacies of the procedure. It is a very thorough and methodical attempt at coming up with an educated guess about the impact of extended sitting on the life expectancy of a population. By way of analogy: the authors threw all those data into the statistics blender and came up with what we call the population attributable fraction, or PAF, which tells you how many deaths (or disease cases) could be avoided in a population if the risk factor or exposure were eliminated, in our case the exposure being extended sitting time.
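The PAF concept itself fits in a few lines. What follows is the generic textbook (Levin) formula with invented numbers, an illustration of the idea only, and not the authors' cause-deleted life-table method, which is considerably more involved.

```python
# Levin's population attributable fraction:
# PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the exposure prevalence
# and RR the relative risk associated with the exposure.

def paf(prevalence, relative_risk):
    """Fraction of cases avoidable if the exposure were eliminated."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Invented example: if 60% of a population sat >3 h/day and that carried RR = 1.2:
print(f"{paf(0.60, 1.2):.1%}")  # ~10.7% of deaths attributable
```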
Fast forward to the results, which the authors comment as follows: "The results of this study indicate that limiting sitting to less than 3 hrs/day and limiting television viewing to less than 2 hrs/day may increase life expectancy at birth in the USA by approximately 2.0 and 1.4 years respectively, assuming a causal relationship." That's what I like about the authors, whose work I have been following for quite some time. They point out that this conclusion is only valid UNDER THE ASSUMPTION that sitting and dying early are causally related. They also go on to emphasize that this is "...a theoretical estimate..." (emphasis in italics by the authors) and that "This should not be interpreted to mean that people who are more sedentary can expect to live 1.4 or 2.0 years less than someone who does not engage in these behaviours as much." That's obviously addressed at those media types who, of course find it far more sexy to tell you that spending too much time on your butt cuts down your life expectancy.
Now, instead of picking the raisins out of this nicely done study, I want to walk you briefly through the 5 studies from which the authors extracted their results. After that, you can still judge for yourself how much trust you want to put into the Daily Telegraph interpretation.
The first of the 5 studies was conducted by the same lead author, Dr Katzmarzyk. It was a study of 17,013 adults of the 1981 Canada Fitness Survey (CFS) who had been followed up for 12 years . At baseline, the survey participants had been asked, among other things, about their time spent sitting. Deaths from cardiovascular and other causes were the outcome measure. In such a study it wouldn't make sense to simply correlate sitting time with death. After all, there are a lot of other factors which determine our demise, age being one of them. My chances of dying in the next 12 years are quite a bit greater if I'm 70 than if I'm 35. So, believe me when I say that the authors adjusted as much as possible for such factors. And it is this "as much as possible" where we begin to find hairs in the soup.
First of all, cardiovascular disease (CVD) is a main cause of death today. So, we should account for all those people who already had CVD when they entered the study. But that's not as simple as it sounds. CVD has a mean streak in that it remains asymptomatic for years, often decades, before it hits you with a heart attack or stroke. So, eliminating those cases who had reported such events at baseline, doesn't mean our survey participants had a clean bill of cardiovascular health. At the average age of over 40, there will certainly have been quite a number of people who had such silent stages of CVD. The principal manifestation of "silent" cardiovascular diseases are those atherosclerotic plaques which narrow the arteries and arterioles. While the authors used the PAR-Q (physical activity readiness questionnaire) which asks, in five questions, about symptoms of CVD, silent CVD would have flown below that radar. So, not accounting for those silent cases may, in all likelihood, have biased the results. Think about it, if those with silent CVD don't move as much, simply because exercising causes them discomfort (which happens when narrowed arteries don't supply enough blood to a working muscle, or heart), it is not the sitting time, but the silent CVD which correlates with an earlier death.
On to the 2nd study: Patel and colleagues looked at 123,216 adults, aged 60+, of the CPS-II nutrition cohort, who had been followed up for 14 years . Again the results support an association between sitting time and CVD mortality, but, again, silent asymptomatic disease had not been assessed. Interestingly, in this study the association was far stronger in women than in men. Tellingly, age 60+ is also the age at which women start to "catch up" with their male peers with respect to CVD risk.
In the third study, Dunstan and colleagues had looked at the correlation between television viewing time and death among 8,800 adults aged 50+ with a median follow-up period of 6.6 years. In contrast to the previous 2 studies, the authors were able to adjust for known CVD risk factors such as hypertension, blood lipids, blood glucose and diabetic status. Those who reported sitting in front of the TV for more than 4 hours per day, had a 50% higher risk of dying from any cause and an 80% higher risk of dying from CVD causes. But adjusting for risk factors of CVD is not the same as adjusting for CVD.
In the fourth study, Stamatakis and colleagues had looked at the data of 4512 people, aged 57+, of the Scottish Health Survey, who had been interviewed in 2003 and followed up until 2007. Those who had reported watching more than 2 hours of TV per day had an increased risk of CVD events (not of CVD death), and only those who had reported watching TV for more than 4 hours per day had a statistically significant risk increase of dying from any cause.
In the fifth and final study Wijndaele investigated the data of 13,197 adults aged 60+ of the EPIC study cohort. Those people had been assessed at the 1998-2000 baseline and followed up for 9.5 years. Like in the other 4 studies, the association between increased TV viewing time and all-cause and CVD death was evident. This observation prompted the authors to say that: "Given the high prevalence of excessive TV watching, ... these results indicate the importance of public health recommendations aimed at decreasing TV time and possibly overall sedentary behaviour." So, will throwing away your TV make you live longer?
You'll probably appreciate the difference between Wijndaele's and Katzmarzyk's ways of interpreting essentially similar results. I personally go with Katzmarzyk's more careful interpretation. It does not outright assume a causal correlation to exist. There are still too many question marks. For example: we know that self-reported physical activity, self-reported screen time, well, self-reported anything, is inherently fraught with over- and under-reporting of facts. Dunstan and colleagues were adamant in pointing out that this couldn't have affected their results. But when you look at how well, or how poorly, their questionnaire really performs, you will be forgiven for being less enthusiastic than the authors. Use that questionnaire twice on the same person to assess the same level of PA, and chances are you'll get two different answers. That's not just me being the party pooper; it has been confirmed in validation studies which have shown, at best, only a moderate level of agreement between two rounds of questioning (the parameter is the intraclass correlation coefficient, or ICC) . If repeated questioning is already fraught with inconsistencies, how large, do you think, will such inconsistencies be between the answers of any given respondent and his actual physical activity level?
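For those who want to see what that agreement statistic actually does, here is a minimal one-way ICC(1,1) computed from an ANOVA decomposition. This is my own sketch with made-up test-retest answers, not data from any of the cited validation studies.

```python
# One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k-1)*MSW),
# where MSB/MSW are between- and within-subject mean squares.

def icc_1_1(pairs):
    """pairs: list of (rating1, rating2) per subject; returns ICC(1,1)."""
    n, k = len(pairs), 2
    grand = sum(sum(p) for p in pairs) / (n * k)
    # between-subject mean square
    msb = k * sum((sum(p) / k - grand) ** 2 for p in pairs) / (n - 1)
    # within-subject mean square
    msw = sum((x - sum(p) / k) ** 2 for p in pairs for x in p) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented: self-reported weekly activity hours, asked twice of the same 5 people:
answers = [(2, 6), (10, 4), (5, 5), (0, 3), (8, 12)]
print(round(icc_1_1(answers), 2))  # ~0.45 -- only moderate agreement
```

An ICC of 1.0 would mean both rounds of questioning agree perfectly; values in the 0.4-0.6 range are the "moderate agreement" the validation studies report.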
So, what are we to make of all this? I can only give you my personal opinion. I tend to believe that there is a threshold volume and intensity of DAILY physical activity, which protects you against the effects of extended sitting time. Only we can't see this level in the 5 discussed studies for obvious reasons. Their ways of assessing PA were not accurate enough.
I have to admit that my belief is biased: I don't know about you, but less than 3 hours of sitting time appears unachievable for most of us today. And while I'm working at a desk which allows me to alternate between standing and sitting, seen through the lenses of these 5 studies I still have what those studies proclaim to be a risk factor for premature death: extended sitting time. But I also exercise on a daily basis at an intensity and with a volume which far exceed what 90% of the population is doing. That's why I love to think of this effort as being CVD-protective. This belief is founded in a large body of evidence which essentially says: exercise triggers biochemical reactions and mechanisms with a vast array of protective effects. In a dose-dependent way.
Fortunately, I'm able to measure the effects of my personal dose of exercise in my health lab. And from doing the same thing for our clients, I happen to know that everyone is unique in his response to an intervention, be that exercise, diet or a pharmacological treatment. Which is why I am quite confident when I tell you not to lose any sleep over those attention-grabbing headlines. Especially when they suggest cause-effect relationships from studies which simply can't establish such relationships. In the case at hand, none of the 5 studies could have adjusted for pre-existing silent CVD. CVD is a cause of premature death and, as I have argued, it can be a reason for people to avoid exercise and spend more time sitting, simply because exercise causes them discomfort. So, here is my question: are people dying early because they sit too long, or are they sitting so long because they'll die earlier? Stay skeptical! [tweet this].
1. Katzmarzyk PT, Lee IM: Sedentary behaviour and life expectancy in the USA: a cause-deleted life table analysis. BMJ Open 2012, 2(4).
2. Katzmarzyk PT, Church TS, Craig CL, Bouchard C: Sitting time and mortality from all causes, cardiovascular disease, and cancer. Med Sci Sports Exerc 2009, 41(5):998-1005.
3. Patel AV, Bernstein L, Deka A, Feigelson HS, Campbell PT, Gapstur SM, Colditz GA, Thun MJ: Leisure Time Spent Sitting in Relation to Total Mortality in a Prospective Cohort of US Adults. Am J Epidemiol 2010:kwq155.
4. Brown WJ, Trost SG, Bauman A, Mummery K, Owen N: Test-retest reliability of four physical activity measures used in population surveys. J Sci Med Sport 2004, 7(2):205-215.
Sunday, July 8, 2012
One of the enduring diet questions is whether supplements are a good tool to (a) improve health, and (b) compensate for nutritional deficits of an enjoyable but less than healthy dietary habit.
To most people, the answer seems to be a resounding "Yes". In the U.S. more than 65% of the population are regular supplement users. They spend north of 28 billion US$ annually on their pills and potions. To put this into perspective: 28 billion is more than the gross domestic product of Cyprus - the latest EU country in need of a bailout. While Cyprus circles the drain, the supplement industry doesn't. In fact, it is growing by 10% annually - growth which Dr. Daniel Fabricant, then vice president of the Natural Products Association (NPA), correctly predicted back in 2008. He knew its drivers: "...the products that grow are the ones with science behind them. When there’s good science like there is behind ... vitamin D and omega 3s, that’s really where the dollar is going to be spent.” So, let's have a look at how good that science really is.
Remember the time when Vitamin E and beta-carotene - the stuff in veggies and fruits which your body turns into Vitamin A - were found to be associated with a decreased risk of lung cancer? The year was 1981, and the knowledge of that time had been summarized in the journal Nature. You must keep in mind: if it's in Nature, it's like God's gospel. Also keep in mind that those studies were observational by design; that is, they observed an association between increased beta-carotene intake and lower incidence of lung cancer. Such observations do not allow us to say that one causes the other, even though the media types are typically quick to do just that. So, the natural conclusion from these association studies was: give smokers, the people at highest risk of getting lung cancer, a vitamin supplement to reduce their risk.
Then, in 1985, a group of Finnish researchers (the Alpha-Tocopherol, Beta Carotene Cancer Prevention Study Group, ATBC) did the one and only thing which can establish a cause-effect relationship: a randomized trial in which some male smokers, the people at highest risk for lung cancer, received the supplement while others did not. The 29,000 participants had been randomized into one of 4 equal-sized groups, with group A receiving Vitamin E, group B receiving beta-carotene, group C receiving both, and group D getting simply a placebo. In 1994 the results came out. Certainly not in favor of the supplements. The guys on beta-carotene had an 18% higher rate of developing lung cancer than their peers who did not get it. Worse, this excess risk grew over time.
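For those who like to see the arithmetic: the 18% figure is simply a relative risk, the event rate in the supplement arm divided by the rate in the comparison arm. The counts below are made up for illustration; they are not the actual ATBC data.

```python
# Hedged sketch: how an "18% higher rate" falls out of trial counts.
# The numbers below are illustrative, NOT the actual ATBC results.

def relative_risk(cases_treated, n_treated, cases_control, n_control):
    """Event rate in the supplement arm divided by the rate in the other arm."""
    risk_treated = cases_treated / n_treated
    risk_control = cases_control / n_control
    return risk_treated / risk_control

# Suppose 472 lung cancers among 14,560 men on beta-carotene
# versus 400 among 14,560 men without it (made-up counts):
rr = relative_risk(472, 14_560, 400, 14_560)
print(f"relative risk: {rr:.2f}")   # 472/400 = 1.18, i.e. an 18% higher rate
```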
Another large trial, the Beta-Carotene and Retinol Efficacy Trial (CARET), did essentially the same thing. It investigated the effect of beta-carotene on lung cancer risk in more than 18,000 participants at elevated risk because they were smokers or had been exposed to asbestos. CARET was done in the U.S., and it delivered even more sobering results: a 28% increase in lung cancer risk among those who had been randomized to receive the beta-carotene supplement. The trial was halted, and follow-up observations showed a gradual reversal of the elevated risk. That's a clear indication that the increased risk of lung cancer was attributable to the supplementation with beta-carotene and retinol.
While these results certainly put a damper on the enthusiasm for vitamins A & E, the truly interesting finding is often overlooked and underreported: for the placebo guys in the ATBC study, there was a clear inverse relationship between the intake of FOODS high in vitamins E & A and the risk of lung cancer. The men with the lowest intake of those veggies and fruits which deliver vitamins E & A had a 50% higher risk of developing lung cancer than those with the highest intake of fruits and veggies.
These observations have been confirmed in the EPIC study, which investigated the effects of diet on cancer. Here, too, a high intake of fruit and vegetables, not supplements, was found to considerably reduce smokers' risk of lung cancer.
With these facts about nutrition science, and how the supplement industry uses it, I simply wanted to set the mood. Now, let's look at how this science is doing in the vitamin D and omega-3 departments which Dr. Fabricant emphasized.
Vitamin D supplements are believed to improve or maintain bone health in older adults, particularly in women. Indeed, what comes out of the science labs seems to support this notion. Dr. Bischoff-Ferrari and her colleagues evaluated 11 randomized controlled trials to answer the question whether vitamin D supplementation reduces fracture risk in women aged 65 and older. It does, but only in those with the highest daily intake of more than 800 IU. Good news for the supplement industry? You bet. But is it good news for you, too? Maybe not. Vitamin D needs to be taken with calcium to be effective, and high calcium intake by way of supplements appears to increase the risk for heart attacks, whereas dietary calcium, say from milk and cheese, does not.
In view of all this evidence, the United States Preventive Services Task Force (USPSTF) recently issued a draft recommendation saying that there is insufficient evidence "...to assess the balance of the benefits and harms of combined vitamin D and calcium supplementation...". But rest assured, the supplement industry has all the evidence and science which the USPSTF has not. Or so they want you to believe.
Let's move over to the famous fish oils and their Omega-3s.
The omega-3 fatty acids are often praised as the constituents of fish oil which protect against heart disease. At least, that's what the supplement industry says. Science says something else. A double-blind prospective study of 2,500 men and women aged 45 to 80, who had experienced a heart attack or stroke, investigated whether omega-3 supplementation would prevent further cardiovascular events. It didn't.
You might ask why this study looked only at people who already had cardiovascular disease. Maybe they are so far down the drain that fish oil can't do its trick any more. Wouldn't it be nice to know whether omega-3 is protective in people who do not have cardiovascular disease? Yeah, it would. It would also be nice for you to tell me how to run such a study. Realistically. You would have to enroll thousands of healthy people, randomize them into those who MUST NOT EVER get their hands on omega-3 supplements and those who MUST take them every day for many years. Go find those people. Then, after many years, you would have to compare the outcomes between the two groups. And you would also have to rule out that those outcomes were affected by such factors as physical activity and all the different food habits those thousands of people have. Of course, you would need funding for this type of research. Only, who will give you the funds? Certainly not the pharmaceutical industry. It pumps billions into research, yes, but only for proprietary chemicals. There is nothing proprietary about a nutrient which every Tom, Dick and Harry can put into a pill. Which is why even the supplement industry won't give you a single dollar for your research. Now you know why such studies are not being performed. And why nutrition science is so fickle with its results.
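The thought experiment above can be made concrete with a standard sample-size calculation for comparing two proportions. A hedged sketch, with illustrative assumptions that come from me, not from any of the cited studies: a 2% event rate among healthy adults over the study period, and a hoped-for 20% relative risk reduction from the supplement.

```python
import math

# Hedged sketch: why a prevention trial in HEALTHY people needs huge numbers.
# Assumed figures are illustrative only: a 2% event rate in healthy adults
# and a hoped-for 20% relative risk reduction (i.e. 2.0% -> 1.6%).

def n_per_group(p_control, p_treated, z_alpha=1.96, z_beta=0.84):
    """Classic two-proportion sample size (80% power, two-sided alpha = 0.05)."""
    p_bar = (p_control + p_treated) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p_control * (1 - p_control)
                                + p_treated * (1 - p_treated))) ** 2
    return math.ceil(num / (p_control - p_treated) ** 2)

n = n_per_group(0.02, 0.016)
print(f"participants needed per group: {n}")
```

Well over 17,000 healthy people per arm, under these optimistic assumptions, all of whom must stick to their assigned regimen for years. That is the recruiting and funding problem in a nutshell.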
What's the take-home point? When it comes to nutrient-health interactions, it is obviously not as simple as boiling down the effects of food to an individual vitamin or other nutrient. Neither is it as simple as stuffing this nutrient into a pill and shoving it down your throat. In the words of Einstein: "Make things as simple as possible, but not simpler." When, as a result of such oversimplification, nutrition science makes you jump from one supplement to the next, what does the supplement industry do? It laughs all the way to the bank. And, as we have seen, Dr. Fabricant knows why. He is no longer with the NPA, though. He has switched sides and now works for the FDA as director of its Dietary Supplement Programs division. Let's hope the FDA's view on nutrition science remains as skeptical as it ought to be. In the interest of your health. [tweet this].
1. Peto R, Doll R, Buckley JD, Sporn MB: Can dietary beta-carotene materially reduce human cancer rates? Nature 1981, 290(5803):201-208.
2. The effect of vitamin E and beta carotene on the incidence of lung cancer and other cancers in male smokers. The Alpha-Tocopherol, Beta Carotene Cancer Prevention Study Group. N Engl J Med 1994, 330(15):1029-1035.
3. Goodman GE, Thornquist MD, Balmes J, Cullen MR, Meyskens FL, Jr., Omenn GS, Valanis B, Williams JH, Jr.: The Beta-Carotene and Retinol Efficacy Trial: incidence of lung cancer and cardiovascular disease mortality during 6-year follow-up after stopping beta-carotene and retinol supplements. J Natl Cancer Inst 2004, 96(23):1743-1750.
4. Gonzalez CA, Riboli E: Diet and cancer prevention: Contributions from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. Eur J Cancer 2010, 46(14):2555-2562.
5. Li K, Kaaks R, Linseisen J, Rohrmann S: Associations of dietary calcium intake and calcium supplementation with myocardial infarction and stroke risk and overall cardiovascular mortality in the Heidelberg cohort of the European Prospective Investigation into Cancer and Nutrition study (EPIC-Heidelberg). Heart 2012, 98(12):920-925.
6. Galan P, Kesse-Guyot E, Czernichow S, Briancon S, Blacher J, Hercberg S: Effects of B vitamins and omega 3 fatty acids on cardiovascular diseases: a randomised placebo controlled trial. BMJ 2010, 341.