Sunday, October 8, 2017

Can taking vitamin D help women burn more fat?

* * * * * * * * * *

Sorry I've been AWOL these last six months. Life, as you well know, gets busy and things get in the way. Blogging is a hobby for me, though. I have no aspirations to monetize or grow this practice, per se. Frankly, I'll be surprised if you're still reading. Props for your patience, though, if you are.

Of course, despite life's busyness, I have been keeping up with various bits and pieces of the scientific literature. As much as is possible, anyway, what with school and work and my recent engagement - I asked, he said yes! In any event, allow me to pander to something other than my rather uninteresting personal life, like... fat loss from a pill? One could only hope.

Salehpour et al. (2012) purport to show that, after 12 weeks of vitamin D supplementation, 39 "healthy" overweight, non-pregnant/non-lactating females lost more body fat than did a parallel cohort of 38 comparably overweight females taking a placebo.

First, everything was free-living and self-reported, but what else is new in diet-related research? Naturally, all the standard limitations apply here. Second, although this was a supplement trial, they collected food-frequency questionnaires (FFQ) and 24-hour dietary records to try to ensure standardization across the board, so that nobody was getting away with significantly lower food intakes and skewing the statistics in favor of one arm or the other. (They tried to standardize physical activity as well.) Per usual, these techniques are quite poor, however "validated" nutrition scientists claim they are; if you don't already understand why, Schoeller, et al. (2013) provide a good explanation. Luckily, we don't need them to be great for this particular study, and I'm glad they tried to do something to standardize the groups - sometimes a little something is better than nothing.

They also claimed to have counted how many of the pills each participant, in both the intervention and placebo arms, had consumed at weeks 4 and 8, and adherence was estimated at roughly 87%. You might be tempted to complain that this number isn't higher, but it might as well be 90%, and a score of 9/10 (essentially an A-) is pretty darn good. Whatever small added benefit one might have achieved, hypothetically, from the occasional dose skipped while rushing out the door in the morning is probably too small to be worth considering. Asking people to give A+ effort at all times simply doesn't happen in the general populace. We are interested in real life, after all. But there are other reasons to be skeptical of their resultant data, which I will cover momentarily.

Participants were randomly allocated from an 85-person list to receive either 25 mcg/day of cholecalciferol (vitamin D3) from seal oil or 25 mcg/day of lactose (placebo), although the authors do not mention how this randomization was performed (e.g., whether by random number generation or some other method). This is a minor bellyache, but I still prefer to see all the data, and since the Nutrition Journal is open access and has no page limitations that I am aware of, there's really no excuse to publish papers without the full sequence of methods laid out plainly for all to see and validate - or even replicate independently, if they so chose.

The study ran for 12 weeks. Subjects' food frequency questionnaires were reviewed once per month (the authors never mention coaching participants on their 24-h diet records to improve adherence there), so presumably three times over the course of the study, which means only the first two reviews really mattered much for keeping participants on course, assuming such reviews do so at all.

At baseline, the data from all randomized participants were normally distributed across all measures, with the exception of serum calcium and fat-free mass (2.2 mmol/L vs 2.3 mmol/L, and 44 kg vs 46 kg, respectively). These figures are close enough that I'm not sure the difference matters. One rather important measurement these authors did not take was resting metabolic rate (RMR), which they admit in the discussion.

Over the life of this study, eight participants dropped out for various reasons, bringing the sample down from 85 to 77 persons. No big deal. But then, instead of coping with these dropouts - the noise in the system, as it were - by following through with their originally proposed intention-to-treat analysis, the authors replaced it with a per-protocol analysis, meaning they only analyzed data from those who actually completed the intervention and adhered to the protocol as asked. Ranganathan, Pramesh, & Aggarwal (2016) do a good job of explaining why this approach can be problematic, but, essentially, doing only the per-protocol analysis undermines the randomization and biases subsequent interpretations of the data.

Common though it is, unfortunately, the authors did not report much in terms of their statistical analyses: they merely posited an alpha (statistical significance threshold) of p < 0.05, utilized analyses of covariance (ANCOVA) for biochemical variables**, and then computed Pearson correlation coefficients to try to show some kind of relationship between 25(OH)D/iPTH and body fat mass. I'd like to have known explicitly what their statistical power (1 − β) was, but I can only assume it was set at 0.8 (80%), as most of these trials tend to be. Working off that assumption, an alpha of 0.05 and a power of 0.8 would put the detectable Cohen's d (effect size) at approximately 0.32, which translates to a Pearson correlation coefficient (r) of about 0.16 - a very small linear relationship, at best. (For those interested, a d of 0.32 would correspond to a number needed to treat (NNT) of approximately 11, meaning 11 people would have to be treated with this supplement for 1 person to glean whatever benefits there might be, assuming there are any.)

**I must say, I am happy to see that the authors knew the importance of quantifying biochemical variables in the serum (vitamin D, PTH, etc.) and correlated them back to the outcome measures of interest. It seems self-evident that this ought to be done, but you might be surprised at how many studies purport to show that a supplement, drug or substance does or does not produce some benefit or harm, when the authors never actually measured its concentrations in the serum, and so we actually have no idea what they're "measuring" at all.
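For what it's worth, the effect-size-to-correlation conversion a couple of paragraphs up can be checked in a few lines of code. This is just a sketch: the 0.32 figure is my assumed detectable effect size, not something the authors reported, and the formula assumes equal group sizes.

```python
from math import sqrt

def d_to_r(d):
    """Convert Cohen's d to a Pearson r, assuming equal group sizes."""
    return d / sqrt(d**2 + 4)

# The assumed detectable effect size from the power discussion above.
print(round(d_to_r(0.32), 2))  # 0.16 - a very small linear relationship
```

The conversion makes plain just how weak a correlation this study was positioned to detect.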

So what were their results?

Serum 25(OH)D levels increased in the intervention arm, as would be expected, since 25 mcg/day is about 1,000 IU; this brought their serum values from roughly 15 ng/mL at baseline to approximately 30 ng/mL at the end of the study. 1,000 IU isn't an awfully large amount by common standards, but it was probably a reasonable dose. It also strikes me that the subjects' initial values were very low (< 20 ng/mL). Should we really be looking at two separate trials here - one to demonstrate the efficacy of this kind of intervention in people with normal 25(OH)D levels, and another to demonstrate whether it is efficacious in those with abnormally low levels, such as the participants in this trial? Something to ponder, anyway.

Whereas serum iPTH values decreased slightly in the intervention arm (-0.26 pmol/L), they increased slightly in the placebo arm (0.27 pmol/L), p < 0.001.

Body weight change was minuscule and non-significantly different between groups: the intervention arm lost 0.3 kg (0.7 lb.) and the placebo arm lost 0.1 kg (0.2 lb.).

They seem to suggest that waist circumference was somehow meaningfully different between groups, where the intervention arm lost 0.3 cm around the waist while the placebo arm gained 0.4 cm, but this was a non-significant change (p = 0.38). Besides, even if it were statistically meaningful - and it's not - after 12 weeks of fairly religious pill-popping, we're talking about a difference of less than one centimeter, here!

Hip circumference decreased non-significantly in both groups (-0.39 cm vs. -0.9 cm for the intervention and placebo arms, respectively; p = 0.36).

Body fat mass supposedly decreased in both groups, where the vitamin D group (intervention arm) supposedly lost 2.7 kg (~6 lb.) and the placebo arm supposedly lost 0.4 kg (~1 lb.). So, let me get this straight: there's a 0.5 lb. difference in body weight between arms, but a 5.0 lb. difference in fat mass between arms? How on earth is that possible? Did the average subject in the intervention arm both lose 6 lb. of fat and gain 5+ lb. of muscle in 12 weeks? Give me a break. How much of this purported change could easily be explained away by the fact that the researchers used bioelectrical impedance to estimate body fat percentage in order to calculate these fat mass values? That would be my primary contention. So, no, I don't buy it. Perhaps I would if their body weights were at least somewhat reflective of this kind of change. (But no cigar.)
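To see why this strains credulity, subtract the reported fat-mass changes from the reported weight changes; whatever is left over must, by definition, be a change in fat-free mass. A back-of-the-envelope sketch using the group means quoted above:

```python
# Group means reported in the paper, in kg (losses are negative).
weight_change = {"vitamin D": -0.3, "placebo": -0.1}
fat_change = {"vitamin D": -2.7, "placebo": -0.4}

for arm in weight_change:
    # Implied fat-free mass change = total weight change - fat mass change.
    ffm = weight_change[arm] - fat_change[arm]
    print(f"{arm}: {ffm:+.1f} kg of fat-free mass over 12 weeks")
# vitamin D: +2.4 kg (~5.3 lb.) of implied fat-free mass gained - in
# overweight women doing no structured resistance training
```

An implied 2.4 kg of lean tissue gained from a vitamin pill is exactly the kind of figure a bioelectrical impedance estimate could manufacture out of thin air.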

They claim to have demonstrated a statistically significant inverse Pearson correlation coefficient (r) between the changes in serum 25(OH)D and body fat mass from baseline (r = -0.319, p = 0.005), and, on the flip side, a positive r between the changes in serum iPTH concentrations and body fat mass from baseline (r = 0.32, p = 0.004). However, the fact that these values were statistically significant doesn't change the fact that the correlation coefficients themselves were small, as can be seen in the scatter plots provided below.

Lastly, they say they've shown that changes in these values correlate linearly with the outcomes posited above, yet their last statement in the results section states:

How in the world could it be that changes in 25(OH)D and iPTH were linearly correlated with fat mass, while serum 25(OH)D and iPTH concentrations were simultaneously not correlated with fat mass?

It took me a while to realize that, although there was apparently some kind of a linear relationship between the serum changes (from baseline) of these values to the outcomes they've posited above, the actual serum concentrations of 25(OH)D and iPTH at any given moment were not linearly correlated to these same outcomes. (And notice how they didn't give a value for r or an alpha for this last measure. Bit sneaky, if you ask me.)

And, ultimately, what do you think these scatter plots and correlation coefficients would look like, if the authors had kept their analyses true to the original intention-to-treat?

I'll say it again: I don't buy it. Do you? Until next time.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~


Ranganathan, Pramesh, & Aggarwal. (2016). Common pitfalls in statistical analysis: Intention-to-treat versus per-protocol analysis. Perspectives in Clinical Research, 7(3), 144-146.
Salehpour, et al. (2012). A 12-week double-blind randomized clinical trial of vitamin D3 supplementation on body fat mass in healthy overweight and obese women. Nutrition Journal, 11, 78.
Schoeller, et al. (2013). Self-report–based estimates of energy intake offer an inadequate basis for scientific conclusions. The American Journal of Clinical Nutrition, 97(6), 1413-1415.

Sunday, April 16, 2017

Does eating too much, too often increase liver fat?

Possibly, but this trial doesn't give us the answer....

* * * * * * * * * *

It has been postulated that a major proximate cause of non-alcoholic fatty liver disease (NAFLD) - ignoring hepatitis C infection and things of that sort - is obesity itself (Yilmaz & Younossi, 2014). Since we're told obesity is "caused by an imbalance in energy intake versus expenditure," it must be that NAFLD is caused by a sustained increase in intrahepatic triglycerides (IHTG), due to a general overconsumption of Calories.

Fair enough. But, for now, I'm not interested in getting into whether a specific nutrient, or lack thereof, is on the hook for contributing to fatty liver disease per se, or whether an overconsumption of Calories is in itself sufficient to cause it. What I am interested in is the question posed in an article by Koopman et al. (2014): whether hypercaloric between-meal snacking might lead to more IHTG accumulation, independent of the Caloric or macronutrient content of the diet - which is in fact what they ended up concluding in their Discussion section.

It's a bold claim, that between-meal snacking independently predisposes to NAFLD - which, if you follow their line of reasoning to its logical conclusion, is effectively the underlying implication of this idea. I want to discuss this trial in particular because it purports to be the first of its kind: a human intervention study showing that snacking increases IHTG and intra-abdominal fat mass had never been reported before it. Yet that's what these authors claim to have done here. So, let's see what they did.

Most of my concerns and comments will be those typed in red, below.


Two important things, right out of the gate:

First, this study purports to be a randomized controlled trial, testing the effect of various dietary interventions against a control group. I will come back to this, but I don't consider this study properly randomized or adequately controlled. It is, in my estimation, a glorified observational study, whose conclusions are meager at best, and, at worst, totally unwarranted.

Secondly, since this was not a metabolic ward experiment, but a free-living study, the results must be interpreted with caution, since there is little opportunity for anyone to validate whether or not the participants adhered in any meaningful way to their respective interventions (or lack thereof).

  • 37 lean (mean BMI: 22.5), otherwise healthy (no NIDDM) young men (mean age: 22 years)
All studies have limitations, and this one is no different. It will contain its fair share, and that's fine; depending on what it is, a limitation needn't necessarily be a trial-ruiner. That said, it's important to recognize the first, which is that, should these results ring true in the end, these data may not be extrapolated and applied to women, geriatric or pediatric populations, persons of other ethnicities, and possibly individuals with acute or chronic disease; the mechanisms alluded to may have radically different molecular implications for obese or ill individuals. Therefore, although these data could be interesting to a young, healthy, 20-something-year-old male, their generalizability outside of that cohort is already of a "low-to-no" sort.

Exclusion Criteria:
  • Eating disorders, psychiatric disorders, type II diabetes, an "unhealthy ad libitum diet," (according to Dutch guidelines), and exercising > 3 hours/week
It may be worth considering that the Dutch guidelines have asked people to consume a largely plant-based diet. The notion that a more carnivorous or meat-based diet may lead to more IHTG or NAFLD has, as far as I am aware, never been demonstrated, thus making this exclusion criterion somewhat strange in my view. Then again, as with anything, there are some good things about their guidelines, and I suspect they wanted some kind of baseline to iron out any significant fluctuations in intergroup diet variability.

The idea that these young, healthy guys were told to refrain from exercising for more than 3 hours per week is a little funny, although I get why they asked it of them. But do we believe that each of these 37 subjects complied with this 3-hour-per-week exercise maximum? They were all lean and prone to spontaneous activity, and might easily have forgotten about, misunderstood, underestimated, or simply not cared much about the importance of this aspect of the study - and it is important. Since this is a free-living study, there's no way to know for sure that they weren't doing more. Even if everything were perfect, and these results held true, who's to say a more frequent overconsumption of Calories doesn't contribute to increased IHTG only in the absence of concomitant exercise, or only in the sedentary? In that case, the implication would be less relevant to diet and geared more toward a recommendation to increase physical activity. Of course, this question would require a separate trial altogether to answer; still, I think it's worth considering.

Study Design:
  • 37 subjects were split into 5 groups:
    • control group consumed their usual, baseline diet
    • one intervention group consumed their usual diet plus a high fat, high sugar supplement, which they were asked to consume with their meals, ensuring (at least hypothetically) that they were not snacking between meals. This arm was labeled: high fat, high sugar-SIZE (HFHS-S)
    • a second intervention group consumed the same aforementioned supplement, but were asked to consume them as snacks, 2-3 hours after their daily meals, instead of with them. This arm was labeled: high fat, high sugar-FREQUENT (HFHS-F)
    • a third intervention group consumed their normal meals, plus a high sugar supplement (devoid of Calories from protein or fat), which they were asked to consume with their daily meals. This arm was labeled: high sugar-SIZE (HS-S)
    • a fourth intervention group consumed their normal meals, plus the same aforementioned high sugar supplement, but, as with group two, were required to consume these as snacks, 2-3 hours after each meal. This arm was labeled: high sugar-FREQUENT (HS-F)
  • Subjects were asked to consume three meals per day and three supplement drinks per day
  • In theory, following these requests should have led to a 40% increase in Calories (1.4 x REE)
    • The HS-S and HS-F drink (x3/day) was essentially just three 1,000 mL servings of any of a variety of nutritionally comparable sugar-sweetened sodas (e.g. Coca Cola, Pepsi, etc.)
      • HS-S and HS-F drinks contained:
        • 43.3 kcal/100 mL x 10 x 3 servings/day = 1,299 kcal/day
        • 10.3 g sucrose/100 mL x 10 x 3 servings/day = 309 g sugar/day
    • The HFHS-S and HFHS-F drink (x3/day) contained:
      • 240 kcal/100 mL x 3 servings/day = 720 kcal/day
      • 9.6 g casein protein x 3 servings / day = 28.8 g protein/day
      • 9.3 g fat x 3 servings/day = 27.9 g fat/day
      • 29.4 g carbohydrate x 3 servings/day = 88.2 g carbohydrate/day
  • This study was undertaken for six weeks
    • Anthropometric measurements, laboratory testing and intensive counseling were said to have been implemented weekly to attempt to account for any inconsistencies in subjects' compliance with the protocol

Note that all four intervention diets are high sugar. There are no purely high fat intervention arms, thus precluding any conclusions about high fat intake under hypercaloric conditions contributing to IHTG one might want to make.

It may be worth mentioning that, considering the sugar-sweetened beverages in the HS-S/F groups were comprised of added sucrose, the subjects' fructose consumption should have been approximately 154.5 g/day. This is significant, given the impact we think the chronic overconsumption of fructose might have on hepatocytes (Ouyang et al., 2008). Therefore, it might have been wise to study this in such a way that only glucose or only fructose was being manipulated. This seems to add another confounder to the mix.

Perhaps my biggest problem with this aspect of the methodology is that they've presented the intervention groups as all being in a hypercaloric phase of 1.4 X REE, or 140% of their normal diet in terms of Calories, and that each group would be isocaloric. Yet, given the figures presented above, the HS-S/F arms should have been consuming a mean 579 kcal/day more than the HFHS-S/F arms. By the end of the trial (42 days), this would have amounted to about 24,318 kcal, between groups, assuming 100% dietary compliance. Assuming stable BMR for all subjects, this should come out to be roughly 7 lb. of fat gained over and above the HFHS-S/F intervention arms, given ~3,500 Calories per pound of fat. But what was the actual difference in weight gained? None; or, rather, no statistically significant difference was detected. Something doesn't add up. Shall we simply assume subjects were noncompliant with the protocol?
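My arithmetic here, sketched out. (The 3,500 kcal per pound of fat figure is, of course, a crude rule of thumb, and "100% compliance, stable BMR" is the stated assumption.)

```python
# Daily Calories from the supplemental drinks, per the bullet points above.
hs_kcal = 43.3 / 100 * 1000 * 3   # soda: 43.3 kcal/100 mL, 1,000 mL, 3x/day
hfhs_kcal = 240 * 3               # supplement: 240 kcal/serving, 3x/day

daily_gap = hs_kcal - hfhs_kcal   # 579 kcal/day between HS and HFHS arms
trial_gap = daily_gap * 42        # ~24,318 kcal over the 6-week trial
fat_gain_lb = trial_gap / 3500    # ~7 lb., assuming 3,500 kcal per lb. of fat

print(round(daily_gap), round(trial_gap), round(fat_gain_lb, 1))  # 579 24318 6.9
```

If the arms really were compliant, roughly seven pounds of extra fat should separate the HS and HFHS groups; the reported null weight difference is what doesn't add up.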

Statistical Methods:
  • The authors of this paper had published a previous report - one I have not yet read through - from which they determined their prior: an effect size of 0.46, predicated on previous data demonstrating that a hypercaloric diet increased HOMA-IR by 0.46 +/- 0.17 (Brands et al., 2013)
  • From this predetermined effect size, they reasoned that they would need 7 subjects per group to maintain a statistical power of 0.8 (80%) for an alpha level or significance set at p < 0.05, in order to detect statistically meaningful differences in insulin sensitivity between groups
The control group, with only 5 subjects, already violated the 7-person-per-group rule. Then again, as will be discussed shortly, the controls were never compared to the intervention arms in the final analysis....
  • Subjects were "randomly allocated" to five groups via simple, non-stratified lot drawing. The randomization process was not blinded.
  • To determine normal distribution curves, they did normality testing, prior to paired Student t tests. Otherwise, they used Wilcoxon matched pairs. Between-group differences were analyzed with two way ANOVA and then post-hoc Bonferroni.
It is true that with an effect size (Cohen's d) of 0.46, a power (1 − β) of 0.8 (or 80%) is right on the money, but only if n = 37 in the final analysis. However, if you look at what they did after random allocation, they dropped three participants from the trial: two for "uncertain diet compliance," and one for alcohol abuse. Fine - but then they added two subjects after that. They just added a couple of folks, and that's all they say about it, anywhere. Not only were subjects not blindly allocated to intervention groups, which is itself problematic (Dettori, 2010), but the run-in and diet protocol phases of the study were complete before they plopped two new guys into the analysis. Not okay. All trials lose people; that's why you shoot for a larger n from the start, to account for loss to follow-up. But you don't get to add new guys to the mix after the fact because you feel like it.

Suppose we now calculate a new power for a Cohen's d of 0.46, knowing that we've lost three and then gained two subjects. This leaves us with a total of 36 participants, rather than 37. If you do the calculations, this brings the study's power down from 0.8 to 0.79 (79%). That's not terrible, although the estimated type II error rate is approximately 21%. This is somewhat typical - but then, if you notice that they didn't actually analyze the control group in the final analyses, or compare its results to those of the four intervention arms by the end, we must conclude that the true n is really 32. Your n is only your analyzed n. This makes the study significantly more underpowered, bringing the original power of 0.8 down to 1 − β = 0.74 (74%), and increasing the type II error rate to ~26%.
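These power figures can be reproduced with a simple normal approximation for a paired-design t-test. (The paired design is my assumption; the authors don't spell out how they did their power calculation, and the normal approximation slightly overstates the exact noncentral-t power.)

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d, n):
    """Normal-approximation power for a paired t-test, two-sided alpha = 0.05."""
    z_crit = 1.96  # critical z for two-sided alpha = 0.05
    return norm_cdf(d * sqrt(n) - z_crit)

for n in (37, 36, 32):
    print(n, round(approx_power(0.46, n), 2))
# roughly 0.80, 0.79, and 0.74 - the figures discussed above
```

The point of the exercise: shrinking the analyzed n from 37 to 32 quietly costs several points of power before any multiple-comparison corrections are even applied.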

Furthermore, the overall power is reduced even further, in the end, by the repeated Bonferroni corrections (Nakagawa, 2004). I liked that they thought to do this, in that Bonferroni corrections attempt to account for multiple comparisons. The problem, as described by Nakagawa (2004), is that the more of these one performs, the lower 1 − β tends to become. And 1 − β in the Koopman et al. (2014) trial was low to begin with.


  • BMI remained stable for all six weeks of the trial
  • "Caloric intake and intake of specific macronutrients were stable during the observational period (data not shown)"
Of course they're not shown, these data are assumed.
  • IHTG content, abdominal fat, insulin-mediated suppression of EGP, and peripheral rate of disappearance of glucose (Rd) were not statistically different after the observational period
  • "Control subjects were included to show reproducibility of the measurements only and are therefore not further analyzed"
In other words, these five subjects were not analyzed alongside the other 32, thus bringing 1 − β down, as I've described above. This is the only pertinent information we are given about the control group. What I would like to have seen - and what I think would have constituted a true control arm - is a comparison of these data with results from the intervention groups. I cannot help but wonder why they chose not to do this.

Food Intake:
  • Caloric intake between intervention groups was considered equivalent
This is incorrect. Take a few moments to work through the math:

If the HFHS arms are consuming their Nutridrink Compact supplement 3x/day, at 240 kcal a pop, this equates to 720 kcal per day. On the other hand, if the HS arms are consuming their soda in 1,000 mL units, at 433 kcal x 3 drinks/day, this equates to a total of 1,299 kcal. As I said before, that's a difference of 579 kcal. In what world is that equivalent? And yet, participants across arms came out to roughly the same weight, BMI, and fatness in the final analysis? I am left wondering whether the authors made an error in the report, the subjects were noncompliant with their protocols, or something similar.

Even if, in the end, it was demonstrated that consuming one of the HS diets led to greater increases in IHTG, we wouldn't be able to tell whether or not this was a product of the diet composition, meal frequency, or merely by virtue of the sheer difference in Calories they were consuming above the other interventions.

BMI and Resting Energy Expenditure (REE):
  • Subjects gained a mean 2.5 kg (5.5 lb.) over the course of the 6 weeks
If the subjects were consuming their diets precisely as described, the HS groups should have gained more fat than the HFHS groups over the same six weeks. Again, this points to noncompliance (or some other error in reporting).
  • All subjects' BMI increased, and there were no differences between them
  • REE did not change in any of the intervention groups, throughout the 6 weeks
Intra-Hepatic Triglyceride (IHTG):
  • "IHTG significantly increased in the HFHS-F and the HS-F groups"
  • "The increase in IHTG tended to be higher in the HS-frequency group"
I take issue with "tended to be higher." Statistically significant figures are only significant if they've actually reached significance! Trending toward significance is just a sneaky way to imply that an alpha level wasn't reached, but came close. Sorry, no cigar - that's not how this stuff works. The result of a significance test is either significant or it isn't; a non-significant finding is just that. (In this case, the purported "increase in IHTG... in the HS-F group" came in at p = 0.07.)

0.07 =/= 0.05

It's true that the result of a significance test could be statistically non-significant, yet still be practically significant. Unfortunately, this was not the case for these data. These results were meager, and may have actually been meaningless. Therefore, not only were their results statistically non-significant, they were also practically insignificant.
  • "In the two groups with increased meal size, IHTG did not change"
Abdominal Fat:
  • Total abdominal fat increased in the HFHS-F group
By a mere 0.1 kg or 0.22 lb. How clinically meaningful are we supposed to believe this is, assuming it's not just a statistical error?
  • Total abdominal fat tended to increase in the HS-F group
No. It was non-significant.
0.051 =/= 0.05
  • In the HFHS-S and HS-S groups, abdominal fat did not change
  • The increase in abdominal fat was not different between the two frequency groups
  • The increase in total abdominal fat was mainly caused by an increase in subcutaneous fat in both frequency groups
By a mere 0.035 kg (0.077 lb.). Of what practical significance is that?
  • (In the HFHS-F arm) visceral fat tended to increase, but was unchanged in all other groups
No. It was non-significant.
0.074 =/= 0.05

Glucose Metabolism:
  • Fasting glucose and EGP did not change
  • Fasting insulin levels slightly but significantly increased in the HS-S group only
By 12 pmol/L, going from 36 to 48 pmol/L
(A fasting insulin level < 174 pmol/L is generally considered to be within normal limits (WNL))
  • Hepatic insulin sensitivity (expressed as percent insulin-mediated suppression of baseline EGP) tended to decrease in the HFHS-F group only
No. It was non-significant.
0.083 =/= 0.05
  • Peripheral insulin sensitivity did not change in any of the diet groups
  • In the HFHS-F group insulin-mediated suppression of FFA significantly decreased
Only by 4.6%, and only in Step 1 of a two-step analysis.

Glucoregulatory Hormones, Leptin and Plasma Lipids:
  • Plasma leptin concentrations increased in all diet intervention groups
All except HS-F, where the increase was non-significant at p = 0.075. (0.075 =/= 0.05) Fasting leptin increased by a mean 1.25 ng/mL in all but the HS-F group.

Fasting leptin levels are considered WNL when they are < 15 ng/mL. It makes sense that leptin increased a little, given a few lb. of fat gain over the 6 weeks, but none of the intervention arms ever got over 5 ng/mL. Considering a well-articulated argument by Askari, Tykodi, Liu & Dagogo-Jack (2010) that fasting hyperleptinemia may act as an appropriate surrogate endpoint for identifying impaired insulin action - and since Koopman et al. (2014) have claimed not only that more frequent hypercaloric feeding induces more IHTG, but also that it induces insulin resistance - their own fasting leptin data seem to further undercut this latter claim.
  • Glucoregulatory hormones did not change
  • Fasting triglycerides increased in the HFHS-F diet only
By a mere 0.28 mmol/L (a TAG level of < 1.7 mmol/L is considered desirable)

None of the participants ever got above 0.85 mmol/L, and thus were never hypertriacylglycerolemic.

(Overall) Meal Size vs. Meal Frequency:
  • BMI significantly increased in both groups
By a mere 0.7 kg/m^2... and none of the participants came close to approaching an overweight BMI of > 25 kg/m^2. 
  • Only increasing meal frequency significantly increased IHTG and total abdominal fat
By a mere 0.96%. According to Ress & Kaser (2016), if < 5% of hepatocytes are afflicted with an over-accumulation of IHTG, this is considered a Grade 0 on a 0-3 Grade scale. So, whether statistically significant or not, here, I would venture to guess that an increase of < 1% (and thus a Grade 0 on the aforementioned scale) isn't especially meaningful.
  • Only increasing meal frequency reduced insulin-mediated suppression of FFA
Sure, but only at Step 1 of a two-step analysis, and only by 2.9%. Does that matter?


Finally, the first thing in this paper I agree with, 100%.

~ ~ ~ ~ ~ ~ ~ ~ ~ ~

My TL;DR Takeaway

Given the limitations in this paper - errors in random allocation, the addition of two new subjects with no further information, the low statistical power and small sample size, a lack of individual participant data points, the fact that the control group wasn't compared to the intervention groups, mathematical errors in the intervention arms, etc. - and the meager effects reported in the Results section, my takeaway is that even if you are a young, healthy, 20-something-year-old male, you shouldn't concern yourself (based on these data alone) with reducing hypercaloric meal frequency, versus meal size, for the sake of preventing the accumulation of intrahepatic triglycerides.

A more straightforward and elegantly designed trial - with a larger n, better reliability, concealed randomization, and intervention arms actually compared to controls - would be necessary to fully answer the question posed by Koopman et al. (2014). Those hypothetical data would then need to be replicated, and their results reproduced by an independent research group, before we could decide that disturbances in meal frequency such as those described above would indeed be expected to increase liver fat stores and potentially predispose to non-alcoholic fatty liver disease.


Askari, H., Tykodi, G., Liu, J., & Dagogo-Jack, S. (2010). Fasting plasma leptin level is a surrogate measure of insulin sensitivity. The Journal of Clinical Endocrinology & Metabolism, 95(8), 3836-3843.
Brands, M., Swat, M., Lammers, N. M., Sauerwein, H. P., Endert, E., Ackermans, M. T., ... & Serlie, M. J. (2013). Effects of a hypercaloric diet on β-cell responsivity in lean healthy men. Clinical Endocrinology, 78(2), 217-225.
Dettori, J. (2010). The random allocation process: two things you need to know. Evidence-Based Spine-Care Journal, 1(3), 7-9.
Koopman, K. E., Caan, M. W., Nederveen, A. J., Pels, A., Ackermans, M. T., Fliers, E., ... & Serlie, M. J. (2014). Hypercaloric diets with increased meal frequency, but not meal size, increase intrahepatic triglycerides: a randomized controlled trial. Hepatology, 60(2), 545-553.
Nakagawa, S. (2004). A farewell to Bonferroni: the problems of low statistical power and publication bias. Behavioral Ecology, 15(6), 1044-1045.
Ouyang, X., Cirillo, P., Sautin, Y., McCall, S., Bruchette, J. L., Diehl, A. M., ... & Abdelmalek, M. F. (2008). Fructose consumption as a risk factor for non-alcoholic fatty liver disease. Journal of Hepatology, 48(6), 993-999.
Ress, C., & Kaser, S. (2016). Mechanisms of intrahepatic triglyceride accumulation. World Journal of Gastroenterology, 22(4), 1664.
Yilmaz, Y., & Younossi, Z. M. (2014). Obesity-associated nonalcoholic fatty liver disease. Clinics in Liver Disease, 18(1), 19-31.