Reviewing “Replacing Darwin” – Part 7: A Nuclear Catastrophe

Here we go again. This blog post will go through the second chapter of the final section of Jeanson’s book, wherein his goal is to lay out his new “creation model” intended to “replace Darwin’s”. Last chapter we saw him try to tackle mitochondrial DNA data and fail spectacularly; now he’s going to be covering nuclear DNA data. These two chapters are the most substantive in the entire book – “substantive” meaning “containing the most claims and discussion of data”, which means they’re also the two chapters containing the most errors.

That being said, this chapter is mercifully short compared to the previous one, so this blog post is half the length of my last.

Chapter 8 – A Preexisting Answer

Sequence differences and their functions

In keeping with the last chapter, Jeanson begins this one with another consideration of nested hierarchies:

“at the level of genes, comparative genetic analyses have frequently been performed. And, similar to our observations of mtDNA, nuclear genes fall into nested hierarchical patterns.2”

Once again, the genes fall into a nested hierarchical pattern, as evolution would predict. And once again, Jeanson claims that this is also what would be predicted by creationism. I’ve previously discussed reasons why this isn’t the case, but Jeanson states it matter-of-factly, again dismissing nested hierarchies as a piece of evidence for or against either evolution or creationism.

According to Jeanson, we should instead be looking at the specific functions of sequence differences between families:

“Evolutionists ascribe the origin of all DNA differences ultimately to mutations. Thus, between two species that share similar genes, they generally expect sequence differences between these genes to be functionally neutral.3 In contrast, among species from separate families, creationists predict high levels of function for DNA sequence differences.4”

I don’t really have any problems with this paragraph, although I did find footnote 3 pretty interesting, as it sounds like Jeanson is suggesting that we’ve only just started thinking more about the effects of selection on gene sequence divergence between species (my emphasis):

“3. However, it appears that the typical evolutionary explanation of protein differences is being modified. Rather than attributing changes primarily to a nuclear DNA “clock,” evolutionists are beginning to invoke more of a role for natural selection. See the following paper as an example: J. Parker et al., “Genome-wide Signatures of Convergent Evolution in Echolocating Mammals,” Nature, 2013, 502:228–231.”

Remember, the expectation is that most nucleotide differences between species are functionally neutral – obviously some of them have to be functional in order to get the diversity of phenotypes we see between species. It’s odd, then, that Jeanson would cite a paper detailing one such example as though it represents some kind of trend against the neutrality of mutations. Is he trying to make a point about convergent evolution? It’s not as though convergence, or even molecular convergence, is a recent idea. It’s really not clear to me what Jeanson is trying to say here.

Anyway, Jeanson’s predictions are made clear: evolution would predict most nucleotide differences between species and clades are neutral, while creationists would predict that they’re mostly functional between “kinds”, as they would have been originally designed in by God for the purpose of making the “kinds” different. Jeanson correctly quotes Graur et al. (2013) explaining that evolution would predict that the human genome is mostly non-functional. This is for reasons I’ll discuss shortly, but despite quoting a paper that explains some of these reasons, Jeanson doesn’t really engage with them. Instead, Jeanson jumps to a discussion of ENCODE and how jolly controversial the question of functionality in the human genome is.

The section is too long to quote in full, but to paraphrase, Jeanson says ENCODE claimed to have shown that 80% of the human genome was functional, and that some evolutionary biologists dismissed this out of hand because it would contradict evolution. Jeanson separately notes that there are technical critiques of ENCODE’s claims, and gives the example that it’s difficult to unequivocally assign the label “functional” or “non-functional” without doing functional assays like knock-outs. Since these tests haven’t been done for over 90% of the human genome, ENCODE’s claims are “very preliminary”.

Jeanson says that because of this lack of rigorous data on functionality, everyone should just reserve judgement. The question should be completely up in the air – maybe all of the genome is functional, maybe most of it isn’t. We just don’t know. So, Jeanson chastises us “evolutionists” for daring to make arguments for evolution involving non-functional sequences. For example, “assuming” that pseudogenes are non-functional and that therefore shared pseudogenes in multiple species represent “shared mistakes” indicative of common ancestry.

He cites a study purporting to show that 80% of human pseudogenes have “at least one line of biochemical evidence for function”. Based on this, he immediately reneges on his previous statement that this kind of biochemical evidence for function is “very preliminary”, and says that it actually points towards a particular conclusion (my emphasis):

“Again, these results are only biochemical experiments — not genetic knockouts. But they have set a trajectory that points toward pervasive, genome-wide nuclear DNA function.”

These claims of ubiquitous genome function are common among creationists. Aside from the positive arguments for evolution that can be made using neutral DNA sequences, it’s admittedly a bit difficult for special creationists to explain why exactly God would create us (and other organisms) with vast stretches of “junk” DNA.

Jeanson discussed ENCODE a bit in chapter 3, and I responded in my review of that chapter without getting too much into the details, so I’ll expand a little more on the subject now (although it still deserves its own article one day). Once again, Graur et al. (2013) is an excellent (and accessible for the lay reader) review of ENCODE’s claims and why they don’t hold up to scrutiny. Jeanson cites this paper, so he’s presumably read it, but chooses not to discuss the subject in any detail.

ENCODE surveyed the human genome and looked for certain biochemical signatures: transcription, histone modifications, chromatin conformation, protein binding, etc. By showing how much of the genome is characterised by at least one of those signatures, they obtained the figure of ~80%, and went on to claim that this number represents the fraction of the genome that is “functional”. The problem is, however, that none of these signatures by themselves are indicative of function at all. Some proteins can randomly bind to sequences in the genome that randomly happen to resemble binding sites. Random sequences can get randomly transcribed when the right proteins happen to bind nearby. The list goes on. Only a sequence with multiple overlapping markers could truly be considered likely to be functional. The fraction of sequences that meet those criteria is much smaller than 80%: on the order of 10%.


It’s a similar story with Sisu et al. (2014), the paper that Jeanson cites as demonstrating that 80% of human pseudogenes have signatures of function. The proportion of pseudogenes in humans, roundworms, and fruit flies that the authors say show evidence of being “highly active” (AKA good evidence for function) is just ~5%. ~75% have some evidence for being “partially active”, and the remaining ~20% have no evidence of functionality at all. For humans specifically, these numbers are 0.8%, 78%, and 21%, respectively. Figure 1 shows a breakdown of all 11,216 pseudogenes analysed. It shows, for example, that of the 11,216 pseudogenes, only 1,441 had evidence of transcription, and of these 1,441, only 150 had an active Pol II binding site upstream, and so on.

Figure 1 | Signatures of activity in human pseudogenes. Human pseudogenes categorised by the presence (red) or absence (white) of 4 different indicators of activity. Tnx: Transcription, Pol II: active Pol II binding site upstream, AC: active chromatin, TF: transcription factor binding sites upstream. The numbers below the hierarchy diagram represent the number of signatures present in each of the 16 bins of pseudogenes, from 0-4. Pseudogenes with 0 signatures of activity are labelled “dead” (D), pseudogenes with 1-3 signatures are labelled “partially active” (P), and pseudogenes with all 4 signatures are labelled “highly active” (H). The table at the bottom breaks down the total number of pseudogenes based on the total number of signatures of activity they have. Highlighted in the blue box are the 6,991 pseudogenes whose only signature of functionality is being present in active chromatin. Figure adapted from Sisu et al. (2014).

Within the 78% “partially active” pseudogenes, the vast majority (6991; 80%) were marked by just one particular “signature” of function – residing in “active chromatin” (AC) also called “open chromatin”. This is a poor indicator of function in so-called “processed” pseudogenes, as these are pseudogenes inserted by retrotransposition from an mRNA molecule (see full explanation in my previous blog post about processed pseudogenes). They are preferentially inserted into areas of open chromatin in the first place! “Closed” chromatin, as the name would suggest, is much less accessible. Finding processed pseudogenes, which make up the vast majority of all human pseudogenes, in open chromatin is like finding sand at the beach. It’s expected, and isn’t the least bit compelling as evidence of functionality by itself.
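For readers who like to check the arithmetic, here’s a quick sketch using the rounded figures reported above (the percentages and counts come from the paper and its Figure 1; expect small discrepancies from rounding):

```python
# Illustrative arithmetic on the Sisu et al. (2014) human pseudogene numbers.
total = 11216                     # human pseudogenes analysed
partially_active_pct = 0.78       # 78% carry 1-3 signatures of activity
ac_only = 6991                    # pseudogenes whose ONLY signature is active chromatin

partially_active = round(total * partially_active_pct)  # ~8,748 pseudogenes
share = ac_only / partially_active

print(f"'partially active' pseudogenes: ~{partially_active:,}")
print(f"share of those with only the AC mark: {share:.0%}")  # ~80%
```

In other words, the big-sounding “80% show evidence of function” headline is mostly propped up by the single weakest signature in the set.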

I’d say the study reported results indicative of ~1% of human pseudogenes being “likely functional”, and maybe a further ~5-10% being “possibly functional”. Around 20% were shown to almost certainly be non-functional, and the remaining ~65-70% are probably not functional. Jeanson, on the other hand, just reports the results as “80% have evidence of function” and then says this is another trajectory that “points to” pervasive genome function, so his vague reporting of the paper’s results is quite convenient to his chosen narrative. How lucky.

It’s important to note that even functional pseudogenes don’t lose their utility as evidence for common ancestry. Often, only a fraction of the pseudogene has been “exapted” for some kind of function, and it’s still perfectly obvious that the pseudogene as a whole was once a functional protein-coding gene that lost that original function. To use an analogy, a wrecked rowing boat on the shore might be converted into a makeshift yet effective shelter in a rainstorm, but no passer-by would be fooled into thinking that a craftsman built the splintered hull, cracked keel, tattered seats, and bent oarlocks for the express purpose of being turned upside down and used as a shelter. Instead, it’s exactly what it looks like: a broken boat that’s been salvaged and used for a radically different purpose.

Jeanson’s response to this kind of argument is that we simply don’t understand the genome well enough to be able to judge whether we’re looking at a “broken” gene or an exquisitely designed gene that has an unusual function. To continue the analogy from above, he would look at the makeshift shelter and say “we just don’t understand the designer’s motives for making this shelter with what looks just like a cracked keel on the roof and tattered seats upside down on the ceiling. They probably aid in the function of the shelter in some way that we haven’t figured out yet.”

Two good papers identifying such obvious pseudogenes in primates are Zhu et al. (2007) and Zhang et al. (2010). There are numerous cases of pseudogenes that look entirely normal (just like the corresponding functional protein-coding gene in other species) except for one or two disruptive mutations that stick out like a sore thumb. These include premature stop codons that would cut off the back half of the protein sequence, or mutations at a splice site in an intron which cause the intron not to be spliced out of the mRNA transcript, again massively disrupting a protein that was otherwise highly conserved among a large set of relatives.

Positive evidence for pervasive non-functionality of the genome

So, what are the reasons that so many “evolutionists” (read: biologists) think that a minority of the genome is functional, aside from the lack of compelling evidence mentioned above? Jeanson doesn’t mention any; he argues that it’s based purely on the lack of evidence for function, and that this is just because we haven’t studied the genome enough yet. In reality, biologists also have ample positive evidence for non-functionality, not just a lack of evidence for functionality. Palazzo and Gregory (2014) explain many such lines of evidence in their aptly named paper “The Case for Junk DNA”. Their open-access paper is well-written and should be understandable to lay readers, so I highly recommend it.

In fairness, Jeanson does use a lot of cautious language here, and never quite outright says “study X or data Y PROVES most of the genome is functional”, which is a nice change of pace compared to most creationists. On the other hand, he only really discusses what he claims is positive evidence for this conclusion, and leaves out any mention of the evidence in favour of the opposite conclusion. He’s adamant the tide is turning in his favour as more studies are published.

Human Chromosome 2 Fusion

Next, continuing on the theme of genetic mistakes (supposedly) turning out not to be genetic mistakes, Jeanson brings up human chromosome 2 (HC2) fusion. Specifically, citing much of Jeffrey Tomkins’ “research” claiming to have debunked the fusion. I’ve blogged on the subject of HC2 before, although that post wasn’t focused on the actual evidence for it. I think I’ll definitely have to make an extensive post about it in the future, as it’s an oft-discussed subject, and some associates and I have put some work in over the years to analyse Tomkins’ claims directly, so it would be good to have all that material outlined in one place. For now though, I’ll just run through it very quickly since Jeanson only devotes a single paragraph to the subject, leaving his references to Tomkins’ “papers” in the “Journal of Creation” and “Answers Research Journal” to do the heavy lifting for him.

For those who aren’t familiar with the basic idea behind HC2 fusion, this video features Ken Miller explaining it in just 4 minutes, so give that a watch before coming back and continuing. Right. Now I’m assuming everyone is familiar with the basics.

So, Jeanson says:

“This hypothesis leads to expectations about function at the site where the proposed fusion occurred. At the site of the supposed fusion in human chromosome 2, evolutionists expect to find a genetic scar. They have even claimed that such a scar exists.
In contrast, recent investigations have demonstrated that the supposed fusion site does not bear the scar of an accidental chromosomal crash. Rather, the site sits in the middle of a functional gene, and the purported fusion sequence appears to participate in the regulation of this gene.9”

First, it’s worth pointing out that there’s no contradiction between a sequence being both a “genetic scar” and also part of a functional gene. The fusion could have happened, and then a gene and regulatory region could have evolved later from the sequence of the fusion site. This option isn’t considered plausible by creationists though, as they’re (mostly) adamant that “new information” in the form of new functional genes can’t evolve.

That being said, there is no good evidence of a functional gene spanning the fusion site, nor of the fusion site itself regulating this gene. The “recent investigations” Jeanson references in the above quote are almost exclusively the work of the aforementioned Dr. Jeffrey Tomkins of the Institute for Creation Research (ICR). The so-called “functional gene” is named DDX11L2. It’s a pseudogene, a fragment of a gene that originally encoded a DNA helicase enzyme. This pseudogene has incredibly minimal evidence for transcription, but there are also two different transcripts of interest to us in this context. One is short, and doesn’t span the fusion site. The other is long, and does span the fusion site. Tomkins used the minimal evidence for expression of the short transcript as evidence that the long transcript (that crosses the fusion site) is functional. That’s how he arrived at the claim that the fusion site “contains a functional gene”. Tomkins’ evidence that the 798bp fusion site itself is a regulatory element controlling this “functional gene” is simply that the fusion site contains some transcription factor-binding sites. This, too, is spurious evidence. Not only is the binding incredibly weak, but telomeric sequences naturally contain such binding sites, so we’d expect to find them in a clear head-to-head telomere fusion site such as the sequence in question. Of course, Tomkins didn’t do any lab-based functional assays like, say, knockouts, to bolster any of his conclusions. Curious then, that even in the absence of rigorous evidence for function, Jeanson is willing to discard all the cautious language from earlier and say matter-of-factly that the purported fusion site is in the middle of a functional gene.

Jeanson concludes this part of the chapter by claiming that in just the last few pages he’s demonstrated that all of the nuclear DNA patterns (nested hierarchies, pseudogenes, HC2 fusion) aren’t evidence of evolution. He doesn’t mention other patterns like ERVs, but presumably he doesn’t consider them evidence either, for one reason or another.

“In summary, in the arena of nuclear DNA patterns, the patterns themselves cannot distinguish between the evolutionary model and the creationist/design models. Rather, the function of the nuclear DNA differences can distinguish between these two — and the trajectory of recent biochemical experiments is pointing toward high levels of function.
This trajectory would eventually have even bigger ramifications for the origin of species.”

Here I feel like Jeanson is conflating the functionality of nuclear DNA differences with the functionality of the genome. These aren’t synonymous. Hypothetically, the entire human (and chimp) genome could be completely filled with functional elements, from protein-coding genes to non-coding RNAs and regulatory elements, and yet the differences between them could be mostly neutral. This is because there can still be neutral changes within functional elements. That being said, despite Jeanson’s hand-waving, there is no such trajectory towards the human genome being considered anywhere near 100% functional.

Anyway, all of this so far has been leading to Jeanson’s primary argument in this chapter (and the next) regarding nuclear DNA clocks and “created heterozygosity”, so let’s finally move on to that.

Nuclear DNA sequence differences

In the previous chapter, Jeanson focused on mitochondrial DNA mutation rates and sequence differences between species, and now he does the same for nuclear DNA (nDNA). He begins by citing a human nDNA mutation rate, obtained from pedigrees, of around 78 substitutions per generation. Unlike his previously mentioned mtDNA mutation rate, this is actually accurate. Certainly in the right ballpark. He cites Venn et al. 2014 to support the claim that chimpanzees appear to have essentially the same nuclear mutation rate. So far so good.

Using these mutation rates, Jeanson estimates that if humans and chimpanzees diverged 4.5-7 million years ago, given a realistic generation time, then they should only have accumulated up to 13 million nDNA sequence differences (7 million years * 2 lineages * 4.25×10^-10 mutations per bp per year * 2.2 billion bp of aligned nDNA = 13 million differences). However, more than 26 million differences exist between human and chimp nDNA, so the “evolutionary prediction” seems to be inaccurate: there are twice as many differences between human and chimp nDNA as expected.
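For transparency, here’s that back-of-the-envelope calculation spelled out (the input values are the ones quoted in the chapter; the per-year rate corresponds to the ~78-mutations-per-generation pedigree figure spread over the genome and a realistic generation time):

```python
# Jeanson's nuclear DNA prediction, reproduced as I read it.
years = 7e6           # upper bound on human-chimp divergence (years)
lineages = 2          # mutations accumulate along both branches
rate = 4.25e-10       # mutations per bp per year (pedigree-derived)
aligned_bp = 2.2e9    # aligned nuclear DNA being compared

predicted = years * lineages * rate * aligned_bp
observed = 26e6       # differences actually observed (lower bound)

print(f"predicted differences: {predicted / 1e6:.1f} million")  # ~13 million
print(f"observed / predicted: {observed / predicted:.1f}")      # ~2x
```

So the arithmetic itself is fine; the question, as we’ll see, is whether the assumptions feeding it (a constant, modern, human mutation rate) are justified.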

Jeanson cites Venn et al. (2014), who calculated a divergence time 2x older to explain these differences – about 13 million years ago – but he rules out this possibility since extrapolating this divergence time increase to other primate species results in far too ancient divergence times. I agree that this linear extrapolation results in unrealistically old divergence times for early primates, but Jeanson neglects to mention a highly-relevant set of explanations for this.

Chief among them relates to the so-called “Hominoid slowdown hypothesis”, which was first proposed about 60 years ago and posits that Hominoid mutation rates were faster in the past. Since then, evidence has accumulated that suggests the per-year mutation rate has progressively decreased over the course of great ape evolution as body sizes, and hence generation times, increased (Scally and Durbin, 2012; Steiper and Seiffert, 2012). Modern human and chimpanzee average generation times differ by approximately 5 years (chimp: 24 years, humans: 29 years).

This slowdown would mean that the extrapolation of the 2x older divergence to distant primate relatives would also be unjustified. In other words, the human and chimp “discrepancy” doesn’t necessitate throwing off the entire primate evolution timeline as mentioned earlier. If the mutation rates of our ancestors were faster than ours, then it would obviously be inappropriate to extrapolate our slower mutation rate across the entire primate family tree, inflating the lengths of time needed to account for all the observed differences.

Accounting for life history, including generation times, also reduces the human-chimp divergence times closer to “traditional” estimates (Amster and Sella, 2016), especially when also using mutation types that are less sensitive to certain life history factors (Moorjani et al., 2016). To quote the final sentence of Moorjani et al.:

“Thus, within hominines, there is no obvious discrepancy between phylogenetic and pedigree-based estimates of mutation rates, once the effect of life history traits on mutation rates is taken into account.”

Another study using a completely independent method based on recombination blocks to estimate human mutation rates in the recent past (<100,000 years) found a significantly higher nDNA mutation rate than that estimated by pedigrees – it was intermediate between pedigree-based estimates and phylogeny-based estimates (Lipson et al., 2015). This observation fits the hypothesis of a recent reduction in human mutation rates but it might alternatively suggest that current pedigree studies are in fact underestimating current human mutation rates.

In addition, recent pedigree studies suggest that the chimp mutation rate is in fact significantly faster than the current estimate for humans, contrary to the findings of Venn et al. (2014) that I mentioned Jeanson cited earlier. Tatsumoto et al. (2017) estimated the chimp mutation rate to be approximately 30% higher than humans, and Besenbacher et al. (2019) found that chimps, gorillas, and orangutans all had significantly faster mutation rates than humans, around 40-50% higher (Figure 2). To be fair to Jeanson, both of these papers came out after he published his book, so it’s understandable that he didn’t report their results, but they both strongly support the hypothesis that the mutation rate in the human lineage specifically has recently slowed down. The idea that the modern human mutation rate can be applied to the whole of primate evolution is just untenable at this point.

Figure 2 | Divergence and speciation times within the great apes. Speciation times (red) are more recent than divergence times (blue) according to ancestral effective population size estimates (purple). On the left hand side, several relevant fossil species and their ages are added for context. Measured mutation rates for each species are listed at the bottom. Figure from Besenbacher et al. (2019). Reprinted with permission from Nature Ecology & Evolution, Copyright 2019.

Finally, it’s important to note that what is calculated by these molecular clock analyses are technically divergence times, not speciation times. Speciation times represent the time when a population splits into two separate, non-interbreeding populations, while divergence times represent the time when the two sequences being examined actually had their last common ancestor. These are often not the same time, with the divergence times usually predating the speciation time significantly. The length of the gap between divergence time and speciation time is a function of the ancestral population size. Larger ancestral population = longer gap.

“Traditional” figures like 4.5-7 million years ago usually refer to the speciation time of humans and chimps because they’re calibrated on fossil data, so it’s not surprising that molecular clock analyses give somewhat older dates than this – the divergence time is older! For example, Moorjani et al. (2016) estimated a divergence time of 12.1 million years ago and a speciation time of around 7.9 million years ago, while Besenbacher et al. (2019) estimated a divergence time between humans and chimps of 10.6 million years ago and a speciation time of 6.6 million years ago (Figure 2).
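To illustrate how a divergence/speciation gap falls naturally out of population genetics, here’s a minimal sketch. Under the simplest neutral coalescent, two sequences sampled at the moment of speciation already differ by, on average, roughly 2·Ne generations’ worth of extra history in the ancestral population (Ne being the diploid effective population size). The 25-year generation time below is my own round-number assumption, purely for illustration:

```python
# Toy coalescent sketch: what ancestral effective population size is implied
# by the gap between a divergence time and a speciation time?
def implied_ancestral_ne(divergence_mya, speciation_mya, gen_time_years=25):
    """Expected coalescence of two lineages takes ~2*Ne generations,
    so gap_in_generations / 2 gives a rough Ne estimate."""
    gap_years = (divergence_mya - speciation_mya) * 1e6
    return gap_years / (2 * gen_time_years)

# Moorjani et al. (2016): divergence ~12.1 Mya, speciation ~7.9 Mya
print(f"implied ancestral Ne: ~{implied_ancestral_ne(12.1, 7.9):,.0f}")
```

Plugging in Moorjani et al.’s dates implies an ancestral effective population size in the tens of thousands, which is right in line with standard estimates for the human-chimp ancestor. No special pleading required.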

Suffice it to say, this is a very complex subject, and Jeanson doesn’t come close to doing it justice. Despite what he says, there are a number of very plausible explanations for the apparent “discrepancy” between human/chimp mutation rates and divergence times that don’t involve completely overhauling the timeline of primate evolution.

Conflicting explanations?

Let’s get back to Jeanson’s version of events. Between this and the last chapter, he thinks he’s on to a real stumper for evolution. Recall from chapter 7 that he purported to show the evolutionary model severely over-predicting the number of mitochondrial DNA differences that should exist between humans and chimps, and now he purports to show evolution under-predicting the number of nuclear DNA differences between them:

“This contrast constrained the explanatory options for the evolutionary model. Consider the most likely evolutionary explanation for the mtDNA discrepancy. Given the massive number of predicted differences — differences that exceeded the length of the mtDNA genome — I anticipate that evolutionists will invoke natural selection to reconcile prediction with fact. Yet, in the realm of nuclear DNA, natural selection is excluded from the discussion, almost by definition. Since the nuclear DNA predictions underestimated the actual level of DNA differences, elimination of mutations via natural selection would only make this discrepancy worse — it would reduce the number of predicted differences even more. This presents a conflict for evolution. ”

First of all, Jeanson says he “anticipates” that natural selection will be brought up in regard to the mtDNA data in the previous chapter? “Anticipates”? That implies that Jeanson doesn’t already know for a fact that natural selection is invoked as an explanation, having read at least one paper on time-dependent rate slowdown (see the previous part of this review). I could be reading too much into things here, but I get the impression that this is another subtle way that Jeanson tries to convince his audience that his results are in some way “new” or “unexpected” to “evolutionists” – by making it sound as though he’s thinking several steps ahead of “evolutionists” who will only find out about the data upon reading his book and be scrambling to come up with ad hoc explanations.

Anyway, in the quote above, Jeanson says that this is a challenge for “evolutionists” because as the two discrepancies are in opposite directions, they can’t be explained by the same phenomena. He also bemoans the fact that the explanations might be applied completely ad hoc, invoked when needed to explain any inconvenient observations.

It’s true that there are different explanations for these phenomena, but there’s nothing contradictory or inconsistent about that, and this becomes obvious when you consider the differences between mitochondrial DNA and nuclear DNA. The primary explanations for time-dependent rate slowdown in mtDNA, natural selection and saturation, have a much more significant effect on mutations in the mtDNA than in the nDNA because mtDNA is more densely packed with functional elements (e.g. genes), and accumulates substitutions at a far higher per-site rate. A higher proportion of mutations that occur in mtDNA will be deleterious and be removed by selection since there’s a higher density of functional elements to hit. Saturation will play a larger role in the mtDNA because, well, saturation is a per-site phenomenon: individual sites in the fast-evolving 16-thousand-base-pair mitochondrial genome get hit by repeat substitutions long before sites in the slowly-evolving 3-billion-base-pair nuclear genome do. These two mechanisms act to reduce the number of mtDNA differences, but won’t do nearly as much to reduce the number of nDNA differences. As a side-note, this is also why time-dependency is so extensively documented in the tiny and function-dense genomes of viruses (Aiewsakun and Katzourakis, 2016).
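To make the saturation point concrete, here’s a toy calculation under the Jukes-Cantor model, where the observed proportion of differing sites is p = 3/4·(1 − e^(−4d/3)) for d true substitutions per site. The per-site rates below are rough illustrative values I’ve picked for the sketch, not figures from the book:

```python
# Toy Jukes-Cantor illustration of saturation in mtDNA vs nDNA.
import math

def observed_diff(d):
    """JC69: expected fraction of sites observed to differ,
    given d true substitutions per site."""
    return 0.75 * (1 - math.exp(-4 * d / 3))

# Rough illustrative per-site substitution rates (subs/site/year);
# my own ballpark values, chosen only to show the asymmetry.
rates = {"mtDNA": 2e-8, "nDNA": 5e-10}

for label, rate in rates.items():
    d = rate * 2 * 7e6          # two lineages, ~7 million years
    p = observed_diff(d)
    hidden = 1 - p / d          # fraction of true substitutions masked by repeat hits
    print(f"{label}: true d = {d:.4f}, observed p = {p:.4f}, hidden = {hidden:.0%}")
```

Even in this toy version, a substantial chunk of mtDNA substitutions are hidden by repeat hits at the same sites, while the effect on nDNA is negligible – exactly the asymmetry described above.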

On the other hand, some (not all) of the factors that explain the nDNA “discrepancy” could well impact the mtDNA estimates (e.g. increasing generation time), but not by all that much, relatively speaking, since the “discrepancies” are of different magnitudes.

Time dependence

As I spoiled in the last part of this review, Jeanson now finally brings up time-dependent rate slowdown as an explanation for his mtDNA results in the previous chapter. I’ll reiterate what I said back then: the way Jeanson talks about time-dependent rate slowdown indicates he doesn’t actually understand what it is. As soon as he finishes dismissing the explanation involving the effects of natural selection on mtDNA substitution rates, he says (my emphasis):

“Consider another potential evolutionary explanation. In the realm of mtDNA, evolutionists have already discussed a phenomenon termed time dependency.20”

It sure sounds to me as though Jeanson thinks time-dependency is an entirely different explanation, unrelated to natural selection. He also says (my emphasis):

“In other words, evolutionists have suggested that the mtDNA clock ticks at different rates at different points in history. Specifically, evolutionists have argued that mtDNA mutation rates have been slower in the distant past — an explanation which could, in theory, reconcile the erroneous predictions of the previous chapter with actual mtDNA differences.”

Here it sounds like he’s claiming that we’re suggesting that mitochondrial mutation rates were actually slower in the past, when in reality the mutation rate might have been entirely unchanged. It’s the substitution rate that changed, as a result of natural selection and saturation. Finally, he says:

“Yet, for nuclear DNA, a slower rate in the distant past would aggravate the magnitude of the underestimate. When does the molecular clock speed up and slow down? Can the evolutionary model predict when it accelerates and when it doesn’t? Or will time dependency always be an idea that is retrofitted to any result as needed — a “time dependency did it” type of explanation?”

If Jeanson knew that time dependency was related to natural selection, perhaps he might also know the answer to these questions. The fact is that time dependency is not simply retrofitted to the data when convenient – it applies to the nuclear DNA as well as the mitochondrial DNA. However, because nDNA is different to mtDNA, the dynamics of time-dependency are different too. As only a small fraction of the human nuclear genome is functional, only that small fraction would be expected to be affected by selection-caused time-dependent rate slowdown. This is precisely what the data shows. Subramanian and Lambert (2012) found that a strong time-dependent signature was present in non-synonymous SNPs in highly constrained nuclear genes. Put more simply, the changes that were more likely to impact function displayed this signature of time-dependence, just as in mtDNA. Indeed, the fact that this time-dependent signal is absent from the majority of the genome is actually another independent line of evidence suggesting that the majority of the genome is not subject to purifying selection, and is therefore non-functional. I hope you’re noticing how several different lines of evidence are cross-confirming one another in this regard, dear reader.

Pre-existing heterozygosity

A little earlier I mentioned that the length of time separating the speciation time and coalescence time of a pair of species is a function of population size. This is because if the ancestral population that existed prior to the speciation was large, then coalescence of two sequences would take longer, pushing back the coalescence time. The same principle applies to populations as well as species. What we now recognise as “modern humans” arose approximately 200,000 years ago (some recent research indicates this could be a bit of an underestimate, but I’ll stick with this nice round number for now). However, the human population contains more variation than what mutations alone could generate in 200,000 years. Why? Because in addition to mutations that occurred in the last 200,000 years, we also inherited mutations that were present in the ancestral populations that gave rise to modern humans. This ancestral population had accumulated these mutations during its history, having in turn inherited some from even earlier ancestral populations, and so on.
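The relationship between ancestral population size and coalescence time can be sketched with the standard neutral-coalescent expectation. This is a simplification, and the effective population size (~10,000) and generation time (25 years) below are illustrative assumptions, not figures from the book:

```python
# Under the neutral coalescent, two sequences sampled from a diploid
# population with effective size Ne coalesce, on average, 2*Ne
# generations in the past.

def mean_coalescence_years(ne, generation_time_years):
    """Expected time to the common ancestor of two sequences, in years."""
    return 2 * ne * generation_time_years

# Illustrative human-like numbers: Ne ~ 10,000, ~25-year generations.
print(mean_coalescence_years(10_000, 25))  # 500000
```

A coalescence ~500,000 years back comfortably predates the ~200,000-year origin of modern humans, which is exactly why so much of our variation is “pre-existing”.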

Jeanson presents a bar chart showing that around 12% of DNA differences that separate distant populations of modern humans today represent mutations that occurred in the past 200,000 years, so the other 88% were pre-existing in the ancestral population. He frames this as a rescuing device for the timeline of evolution, as though “evolutionists” just pick a preferred date for the origin of modern humans, count how many DNA differences can be explained by mutations, and then simply chalk all the remaining DNA differences up to “pre-existing variation” and call it a day.

He then points out that a YEC timeline of just 6,000 years at measured mutation rates accounts for just 0.4% of human variation, whereas the “evolutionary” timeline accounted for 12%. Finally, he asks his innocent question:

“But if evolutionists can invoke preexisting differences, why can’t creationists (Figure 8.11)? If evolutionists explain 88% of our differences as preexisting, is it too much of a stretch to bump the number up to 99.6%?”

The answer is yes, it is a massive stretch, especially when creationists posit that those 99.6% of differences were pre-existing due to design, rather than due to ancient mutations. More on that later.

Some amount of pre-existing differences is completely expected by evolution, given that the human population evolved from an ancestral population that would have accumulated many differences in the course of their history. In other words, there’s nothing ad hoc about this explanation since it flows necessarily from evolutionary theory. Populations have ancestry, and ancestral populations contain standing (“pre-existing”) variation. On the other hand, Jeanson has no such a priori reason to expect pre-existing differences to have been present in the earliest humans, so he has had to come up with this idea of 99.6% pre-existing differences specifically to explain the observed data after the fact. It’s the very definition of an ad hoc explanation. In effect, Jeanson is saying “God specially created humans with all the variation they would have had if they’d descended from an earlier population in the distant past. But they definitely didn’t.”

Jeanson then shows a series of bar charts demonstrating that indeed, the YEC timescale requires >95% of nuclear DNA differences found within several extant animal and fungal “kinds” (great apes, mice, flycatchers, fruit flies, yeast) to have been specially created, because there are far too many differences to have accumulated in just a few thousand years at current mutation rates. Again, this is a completely ad hoc explanation, with no independent supporting evidence.

Instead of providing supporting evidence, Jeanson argues that his model of specially-created nuclear variation is “scientifically coherent” because it doesn’t require conflicting explanations or timescales for the mitochondrial genome versus the nuclear genome. As discussed previously, his evidence that the evolutionary model involves “conflicting” explanations for these different genomes is fundamentally flawed, but he’s kind enough to offer up yet another example of his difficulty in understanding some basic biology:

“With respect to other species, mutation rates and divergence times run into additional problems. For example, among yeast species, the current mutation rate over the 15 million-year evolutionary time of divergence predicts45 far too many mutations among yeast species (Figure 8.18). Just like we observed for mtDNA, the number of predicted mutations actually exceeded the yeast genome size. This result raises again the questions of what role natural selection plays, when it plays its role, and how much of a role it plays in each compartment. If nothing else, it demonstrates that evolutionary divergence times do not consistently predict mutation rates.”

Notice that once again, Jeanson neglects this opportunity to talk about mutational saturation when he says that the number of predicted mutations on an evolutionary timescale supposedly exceeds the yeast genome size. The upper end of Jeanson’s calculations predicts that divergent yeast nuclear genomes should differ by more than 700 million mutations, a distinct problem given that the genome is only about 12 million nucleotides in length.

That aside, Jeanson is making the point that the “evolutionary explanation” for this discrepancy between the calculation and reality would be to invoke natural selection as an ad hoc rescuing device. The problem is that this explanation isn’t ad hoc at all, as I discussed earlier. The yeast genome is much more densely packed with functional elements than the human genome, or the genomes of any of the other species Jeanson makes these calculations for. As a comparison, the human genome contains about 20,000 genes in 3 billion nucleotides, while the yeast genome contains about 6,000 genes in just 12 million nucleotides. The density of genes in the yeast genome is about 75 times higher than in the human genome. As mutations are much more likely to hit a functional element in yeast, and because the population size is very large, more mutations will be removed from the population by natural selection. This naturally follows from evolutionary theory. Perhaps Jeanson rejects this reasoning because he believes that the vast majority of bases in genomes are functional, but he should at least accurately describe the “evolutionary model” to his readers rather than giving them the impression that natural selection is a magic wand employed by “evolutionists” in any inconvenient situation, regardless of basic biological principles.
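That gene-density comparison is easy to verify with the round figures quoted above (these are the text’s approximations, not exact genome annotations):

```python
# Gene density: yeast vs. human, using the round figures from the text.
human_genes, human_bp = 20_000, 3_000_000_000
yeast_genes, yeast_bp = 6_000, 12_000_000

human_density = human_genes / human_bp  # genes per nucleotide
yeast_density = yeast_genes / yeast_bp

print(round(yeast_density / human_density))  # 75
```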

Where did nuclear DNA differences come from?

Jeanson’s claim is that the vast majority of the variation within the human population was specially created as “created heterozygosity” (variation present in the genomes of Adam and Eve), and that all of the genetic differences that separate humans and chimpanzees were also the result of special creation. What does the evidence say? What if there’s evidence that the differences were actually caused by mutations? As it turns out, there’s actually quite a lot. As I began writing about some of it for this part of the review, the text grew to several thousand words, so I decided the topic deserved its own dedicated blog post, which I published somewhat ahead of this one. Here’s the link:

Human Genetics Confirms Mutations as the Drivers of Diversity and Evolution

So, not only is the recent conception of “created heterozygosity” a flimsy, ad hoc rescuing device for young-earth creationists, it also flatly contradicts reality according to observed data. Without this device, Jeanson has no explanation for the extensive nuclear DNA diversity of humans and other species, which forms the basis of his entire “creation model” of origins that he attempts to lay out in his book. Once again, it seems excessive to continue beating Jeanson’s dead horse of a thesis after this, but I’ll see this through to the end nonetheless.

Y no Y-chromosome analysis?

So far in the book, Jeanson has “analysed” mitochondrial DNA and nuclear DNA in general, but he hasn’t specifically focused on the Y-chromosome. Towards the end of this chapter, he gives it a passing mention, first pointing out a similarity between the Y-chromosome and the mtDNA:

“Because of its uniparental inheritance, the Y chromosome shares a number of characteristics with the other uniparentally inherited genetic compartment, the mtDNA genome. In both compartments, all modern DNA differences are the result of mutations to the sequence that was present in the first ancestors.”

As he says, he believes that all variation in the Y-chromosome is the result of mutations that have occurred since Adam. Since Adam possessed the only human Y-chromosome in existence after the original creation, no “created variation” is involved, just simple mutations, as in mtDNA. He even goes as far as saying:

“Thus, both the mtDNA and Y chromosome sequences have the potential to act as strict, absolute molecular clocks. The rate of mutation in each of these compartments will determine how precisely each can measure time. But the fact of a clock is true in both compartments. Therefore, the timing of the origin and migration of various people groups can be interrogated with these genetic tools.”

While the Y-chromosome and mtDNA are indeed similar in this respect, they differ greatly in another: size and gene density. The mitochondrial genome is about 16,000bp long, while the human Y-chromosome is almost 60,000,000bp long. It’s several thousand times larger yet contains only about twice as many genes as the mtDNA (78 vs 37). The Y-chromosome is therefore much less densely populated with functional elements than the mtDNA, meaning a relatively smaller fraction of its mutations would be removed by purifying selection.

Why, then, does Jeanson not simply repeat some of the types of analyses he described in the previous chapter? All the data is available: why not count up all the differences in human Y-chromosomes, get an empirical mutation rate from the literature, and see what sort of timescale is required to explain the number of differences? In fact, Jeanson doesn’t even have to go through all that effort, because such analyses are already published in the scientific literature. Let’s see what they found.

First, I have to add the caveat that the Y-chromosome is a more complex beast than the mitochondrial genome. While the Y-chromosome is largely inherited in isolation like mtDNA, without any recombination with the other chromosomes, there are exceptions. The tips of the ends of the Y-chromosome (the so-called “pseudoautosomal regions”) do actually recombine with the X-chromosome, so these are excluded from analyses of Y-chromosome-specific mutations. The remaining 95% of the Y-chromosome doesn’t recombine with other chromosomes, and is termed the “male-specific region”, or MSY for short. The MSY has a complex structure with a lot of repeated sequences, so it is actually very difficult to sequence with most existing technologies. As such, most analyses are limited to a set of relatively easy-to-sequence regions throughout the Y-chromosome, comprising about 10-15 million bases of the total ~60 million.

Now that that’s out of the way, let’s look at the mutation rate. A few studies have estimated the Y-chromosome mutation rate based on sequencing pedigrees. Xue et al. (2009) found 4 mutations after sequencing two Y-chromosomes from individuals separated by 13 generations, yielding a mutation rate of 1.0x10^-9 mutations per nucleotide per year. Helgason et al. (2015) sequenced the Y-chromosomes of 753 men from 274 patrilines (pedigrees), and estimated the mutation rate to be 0.87x10^-9 mutations per nucleotide per year. Balanovsky et al. (2015) found a mutation rate of 0.78x10^-9 mutations per nucleotide per year. As you can see, the empirical estimates fall within a fairly tight range, centred around approximately 0.80x10^-9 mutations per nucleotide per year (Figure 3).

There are also a couple of studies that estimate the mutation rate based on calibration with ancient timescales. For example, Poznik et al. (2013) estimated the mutation rate by using the date for the colonisation of the Americas, 15,000 years ago, to calibrate their molecular clock and found a result of 0.82x10^-9 mutations per nucleotide per year. Fu et al. (2014) sequenced the Y-chromosome of human remains from Siberia that were carbon dated to be 45,000 years old. Based on the differences between this ancient Y-chromosome and modern Y-chromosomes, and the assumption that those differences had accumulated over 45,000 years, the authors obtained a mutation rate estimate of 0.76x10^-9 mutations per nucleotide per year. The fact that these estimates line up almost perfectly with the estimates obtained from direct (pedigree) studies is compelling evidence by itself that the ancient timescales used in the latter two studies are accurate. Figure 3 shows the range of estimates from all the aforementioned studies, in addition to two more.

Figure 3 | Estimated Y-chromosome mutation rates. Reported estimates of the Y-chromosome mutation rate from 7 studies published between 2009 and 2015. Colours of the data points represent the type of study/calibration (families = pedigrees). The horizontal blue bar represents the consensus mutation rate, while the horizontal yellow bar represents the autosomal mutation rate, so you can see that the Y-chromosome mutates at approximately twice the rate of the autosomes. Figure from Jobling and Tyler-Smith (2017). Reprinted with permission from Nature Reviews Genetics, Copyright 2019.

So, we have our mutation rate: approximately 0.80×10^-9 mutations/nucleotide/year, but what about the number of DNA differences within modern humans? Poznik et al. (2016) sequenced 10.3Mb of the Y-chromosomes of over 1,200 men from around the world in order to find out. Before I reveal how many they found, let’s consider roughly how many we might expect according to a YEC version of human history. Let’s make a prediction. The last Y-chromosomal common ancestor of humanity should be Noah, so let’s put the YEC timescale at ~5,000 years. We can plug all these numbers into the mutation rate calculation to generate our prediction: 0.80×10^-9 mutations per nucleotide per year, multiplied by 10.3 million nucleotides, multiplied by 5,000 years, equals ~41 mutations. Multiplying this by 2 (for two divergent human lineages), our YEC prediction for how many single-nucleotide differences should separate the two most divergent human Y-chromosomes works out to ~82.
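This back-of-envelope prediction is simple enough to reproduce, using the same numbers quoted above:

```python
# YEC prediction for Y-chromosome differences, using the figures above.
rate = 0.80e-9      # mutations per nucleotide per year
length = 10.3e6     # nucleotides sequenced (Poznik et al., 2016)
years = 5_000       # YEC timescale since Noah

per_lineage = rate * length * years  # ~41 mutations per lineage
prediction = 2 * per_lineage         # two diverging lineages

print(round(prediction))  # 82
```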

How many are actually found? Around 3000 (see Supplementary Fig. 14 of Poznik et al., 2016). Let that sink in.

Poznik et al. applied the aforementioned mutation rate (actually they used 0.76×10^-9, but it’s close enough) to the modern human Y-chromosomal variation to produce a time-calibrated phylogeny. It’s a more sophisticated version of the analyses Jeanson does, producing a phylogeny with all the time information on it rather than a simple bar chart representing a coalescence or divergence calculation performed based on a single time point. The result is shown below in Figure 4. As you can see, it puts the last human Y-chromosomal common ancestor at 190,000 years ago and once again, it recovers an African origin of humans. This number is based on observed genetic variation and observed mutation rates.

Figure 4 | A time-calibrated phylogeny of human Y-chromosomes and world map displaying the distribution of haplotypes. On the left you can see the timescale in kya (thousands of years ago). Figure from Poznik et al. (2016). Reprinted with permission from Nature Genetics, Copyright 2019.

In order to compress this variation into YEC timescales, creationists would have to increase mutation rates or decrease generation times by about 36x in total. There are pretty obvious reasons why the generation times couldn’t be significantly lower, so really they require a Y-chromosomal mutation rate about 35x higher than what is observed today.
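That compression factor falls straight out of the observed-versus-predicted comparison (~3,000 observed differences versus the ~82 predicted by a 5,000-year timescale):

```python
# Shortfall between observed Y-chromosome differences and the YEC
# prediction, using the numbers from the text.
observed = 3_000                          # Poznik et al. (2016)
predicted = 2 * 0.80e-9 * 10.3e6 * 5_000  # ~82 under a YEC timescale

print(round(observed / predicted))  # 36
```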

In fact, this 190,000 years’ worth of variation is actually an underestimate, as Poznik et al.’s dataset of human variation didn’t include any Y-chromosome sequences from the earliest-branching African haplogroup “A00”. Luckily, another study performed a similar analysis that did include this haplogroup and found that the last common ancestor of the A00 haplogroup and the other haplogroups seen in Figure 4 lived 275,000 years ago (Mendez et al. 2016). This paper also analysed Neanderthal Y-chromosome sequences and estimated that the Y-chromosomal last common ancestor of modern humans and Neanderthals lived about 588,000 years ago. In other words, there’s 588,000 years’ worth of mutations to account for, given observed mutation rates. Naturally, Jeanson will dismiss this Neanderthal DNA data since it’s supposedly “unreliable” ancient DNA, but then he still has to contend with 275,000 years’ worth of mutations found within extant human Y-chromosomes. Will he invoke increases in Y-chromosome mutation rates up to 55x what the current data supports to rescue his timescale?

What about using the Y chromosome for inter-species comparisons, like between humans and chimps? Human and chimp Y-chromosomes differ by about 1.78% in terms of SNPs (Kuroki et al., 2006), so if we assume a human-chimp divergence of about 7-12 million years ago, we find that the expected mutation rate should be between 0.74x10^-9 and 1.27x10^-9, a range that encompasses the observed human Y-chromosome mutation rate nicely. It’s still a range a little on the high side though, with its midpoint around 1.0x10^-9. One explanation for this could be that the chimpanzee Y-chromosome mutation rate is slightly higher than the human rate and is therefore dragging up the average, as we saw earlier with the autosomal mutation rates. We have good theoretical reasons to expect this to be the case, as well as some indirect data: chimpanzees have a promiscuous mating system where a female mates with several males to conceive, and as such there is intense competition between the males to be the one who actually fertilises the female’s egg. This causes there to be intense selection pressure on sperm (so-called “sperm competition”), and as many key sperm-related genes reside in the Y-chromosome, this positive selection can drive an increase in mutation rates and/or substitution rates in the Y-chromosome specifically (Presgraves and Yi, 2009).
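The implied-rate range can be checked directly from the text’s figures: 1.78% divergence, split times of 7 and 12 million years, and a factor of two because mutations accumulate along both lineages since the split. A sketch:

```python
# Mutation rate implied by human-chimp Y-chromosome divergence.
divergence = 0.0178  # 1.78% SNP divergence (Kuroki et al., 2006)

def implied_rate(split_time_years):
    # Differences accumulate along both lineages, hence the factor of 2.
    return divergence / (2 * split_time_years)

for t in (7e6, 12e6):
    print(f"{t/1e6:.0f} Mya split -> {implied_rate(t):.2e} per nt per year")
```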

Once again, this is a complex subject with a lot of factors to consider, but simply put, the Y-chromosome molecular clock is consistent with the expectations according to the model of an ancient origin of humans and human-chimp common ancestry. Despite the fact that this literature is all readily available to Jeanson, and that it’s precisely the kind of analysis he uses on mtDNA to make his case for creationism, all of this data on the Y-chromosome is glaringly absent in his book. I find it difficult to believe that Jeanson isn’t familiar with at least some of these papers, or that he didn’t try to do this analysis himself and find similar results. Given what the data shows, it’s perhaps not surprising that Jeanson would leave it all out. Instead of this analysis, Jeanson says he wants to test his YEC model using an entirely different method, looking for signatures of the slave trade:

“Currently, I’m exploring whether the timing of the Trans-Atlantic slave trade has left a genetic signature in the Y chromosome. If so, I can theoretically test the predictions of the 6,000-year model for the origin of humanity.”

Well, as we’ve just seen, that test has already been performed according to Jeanson’s standards, and it strongly conflicts with the 6,000-year model for the origin of humanity. I’m curious to see how he will manage to contort the data from signatures of the Trans-Atlantic slave trade in the Y-chromosomes of modern Americans into supporting a YEC model though – I’m still waiting for it, now 18 months after Jeanson published this book.

Jeanson’s final remarks

Jeanson ends this chapter by reiterating two main points: evidence is accumulating that the vast majority of nucleotides in genomes of species from humans to roundworms are functional, supporting the creationist model of pre-existing heterozygosity, and that the creation model “harmoniously” explains both the mtDNA and nuclear DNA data, while the evolutionary model has much more difficulty.

I think I’ve already covered how silly the first claim is, but I don’t think I can emphasise enough how nonsensical the second is, so I’ll say it one more time. Jeanson’s model says that mtDNA differences within “kinds” are the result of mutation, while nuclear DNA differences are the result of creation. Of these two claims, only the first even comes close to being quantitative (and as I showed in my last blog post, is contradicted by the data). The second is completely open-ended. In other words, it can be used to justify almost any amount of genetic variation above the minimum dictated by just 6,000 years of mutations. Given the current data, Jeanson says that pre-existing heterozygosity explains ~4.3 million nucleotide differences between humans. If the data was different, say there were 430 thousand or 430 million nucleotide differences between humans instead, his “explanation” would be absolutely identical. When your explanation is literally “god did it”, is it really surprising that you can mould it to fit any data?

As usual though, Jeanson hints at more evidence in later chapters:

“in this chapter, the evidences for preexisting nuclear DNA differences were not the end of the discussion on this question. They were the start.”

There are only two chapters left, so Jeanson is cutting it a bit close. Unsurprisingly, he doesn’t deliver in the next chapters either, but those will be the topic of subsequent blog posts.


This chapter covered several aspects of nuclear DNA, with Jeanson ultimately trying to make the case that nuclear DNA variation is best explained by “created heterozygosity” as the evolutionary explanations just don’t hold up. After again dismissing nested hierarchies as evidence for evolution, Jeanson tackles nuclear DNA “clocks”, arguing first that the number of nuclear DNA differences between humans and chimpanzees is unexpectedly high and difficult for evolution to explain. Difficult, that is, if you ignore the well-supported hypothesis of hominid mutation rate slowdown. When it comes to intra-human (and intra-kind) nuclear DNA differences, Jeanson acts as though his ad hoc “created heterozygosity” idea is superior to evolutionary explanations, but he doesn’t have a leg to stand on. I feel like I’m a broken record at this point, but I don’t know how many times I can rephrase the same sentiment: he consistently ignores overwhelming contradictory evidence, and what little data he does attempt to tackle, he manages to misrepresent beyond recognition. His model is also wholly dependent on the claim/prediction that the vast majority of both intra- and inter-“kind” genetic variation is functionally relevant, which is completely at odds with everything we know about genetics. I’m not complaining – if Jeanson wants to lay his entire “creation science” model on a foundation made of quicksand, who am I to stop him?

In the next chapter, Jeanson will elaborate more on the details of “created heterozygosity”. I don’t know how much more there is to say on this subject, so maybe the next post will end up being nice and short. Who knows.

Comments and queries are welcome.


5 thoughts on “Reviewing “Replacing Darwin” – Part 7: A Nuclear Catastrophe”

  1. Another great job, especially with the Y-chromosome analysis. I’m beginning to think Jeanson represents in part just an ignorance of population genetics and molecular evolution but also I think sometimes he knows full well the data don’t support his religious agenda and that’s the stuff he will conveniently avoid (your Y-chromosome observation is I think a good example of the latter).


  2. Just read the whole review. Congratulations on a thorough analysis. Let me suggest another reason for chimps having a faster nuclear substitution and mutation rate: a different aspect of sperm competition. Chimps have much larger testicles and produce much more sperm than humans, so the number of sperm generations in an organismal generation is considerably greater than in humans, and thus if most mutation occurs during replication, a higher mutation rate. This would of course apply to a lesser degree to autosomal mutation rates and to a still lesser degree to X chromosome mutation rates. But it doesn’t involve selection at all (except for selection on greater sperm production, of course).


  3. EvoGrad (and Mays if you see this). I’m a biblical creationist putting together a talk on creation (and writing an associated blog). One topic I’m going to discuss is beneficial mutations. But I’d like to do my research a bit before discussing it.

    Do you mind taking a moment and listing what you would consider the most convincing examples of beneficial mutations?
    Thank you,

