Reviewing the Intelligent Design Literature – Part 2: Why Are These Here?

This is the second post in my series reviewing the Discovery Institute’s (DI) list of peer-reviewed papers that they say “support the theory of intelligent design”. In Part 1 I narrowed the list down from 123 publications to 55 papers, and in this part of the series I’ll cut away a few more – 5 papers that seem to have nothing at all to do with intelligent design (ID), so I’m really not sure why they made the final cut.

 

1. Winston Ewert, William A. Dembski, Robert J. Marks II, “Measuring meaningful information in images: algorithmic specified complexity,” IET Computer Vision, Vol. 9 (6): 884-894 (December, 2015).

In this paper, Ewert, Dembski, and Marks describe a new measure of “meaningful information” they call “Algorithmic Specified Complexity” (ASC), based on conditional Kolmogorov–Chaitin–Solomonoff (KCS) complexity. KCS complexity is the length (in bits) of the shortest computer program required to produce a result, and conditional KCS complexity modifies this by supplying the program with a library of existing results, so it only has to describe how to transform one of them into the new result. To use an example with images, since that’s what the paper is about, think about a portrait of yourself. The KCS complexity would be the length of the minimal program required to produce that portrait from scratch, while the conditional KCS complexity would be the length of the minimal program that, given a library of pre-existing portraits, modifies one of those portraits into the portrait of you. The authors make it clear in their introduction what they mean by “meaningful” when it comes to information in images:

For an image to be meaningfully distinguishable, it must relate to some external independent pattern or specification. The image of the sunset is meaningful because the viewer experientially relates it to other sunsets in their experience. Any image containing content rather than random noise fits some contextual pattern. Naturally, any image looks like itself, but the requirement is that the pattern must be independent of the observation and therefore the image cannot be self-referential in establishing meaning. External context is required.

The authors define ASC as:

ASC(X, C, P) = I(X) − K(X|C)

Where I(X) is basically how improbable it is that the image would be produced by a stochastic (random) process, and K(X|C) is basically the complexity of the program required to produce the image by transforming images from the pre-existing library. Both of these quantities are measured in bits (I(X) is the negative log of the image’s probability).

I(X) is high when the image is unlikely to be produced by a stochastic process, and K(X|C) is high when the image is very different from the images in the library (as the program required to transform the library images will be large). When an image is very complex (high I(X)) and very similar to images in the library (low K(X|C)), ASC will be high. When an image is very complex and very dissimilar to images in the library (high K(X|C)), ASC will be low. When an image is very simple (low I(X)), ASC will be low.
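To make this a little more concrete, here’s a toy sketch in Python. To be clear, this is my own illustration, not the authors’ code: true KCS complexity is uncomputable, so any real implementation has to approximate it somehow. Here I use a uniform probability model for I(X) (every byte string of a given length is equally likely) and zlib compression as a crude stand-in for the conditional complexity K(X|C):

```python
import os
import zlib

def asc_bits(image_bytes, context_bytes):
    """Crude ASC estimate: ASC(X, C, P) = I(X) - K(X|C).
    I(X): surprisal under a uniform model (8 bits per byte), so it simply
    scales with length - every string of the same length is equally likely.
    K(X|C): approximated by the extra bits zlib needs to encode X when the
    context C is prepended, versus encoding C alone."""
    i_x = 8 * len(image_bytes)
    k_c = 8 * len(zlib.compress(context_bytes, 9))
    k_cx = 8 * len(zlib.compress(context_bytes + image_bytes, 9))
    k_x_given_c = max(k_cx - k_c, 0)
    return i_x - k_x_given_c

context = b"sunset over the sea " * 50  # the 'library' of prior content
similar = b"sunset over the sea " * 10  # resembles the context
noise = os.urandom(200)                 # incompressible random noise

print(asc_bits(similar, context))  # high: improbable under P, simple given C
print(asc_bits(noise, context))    # near zero: K(X|C) is about as big as I(X)
```

The repeated-text input scores highly because it’s long (high I(X)) yet cheap to describe given the context (low K(X|C)), while the random noise scores near zero because it’s just as expensive to describe with the context as without it.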

The result of all this is that you can supply a series of images to a program as context, then more images for it to judge, and using the measure of ASC it will be able to tell you which of the latter images are “meaningful”, AKA which images are similar to the images you originally supplied as “context”. Comparing images to look for and measure similarities is certainly useful, but hardly novel, which might explain why this paper seems to have garnered no citations in the 2 years since it was published. I really don’t understand why anyone would think this paper supports ID in any way. It’s about looking for similarities between images and distinguishing them from very simple images or images of random noise, that’s all.

 

2. Mohit Mishra, Utkarsh Chaturvedi, K. K. Shukla, “Heuristic algorithm based on molecules optimizing their geometry in a crystal to solve the problem of integer factorization,” Soft Computing, DOI 10.1007/s00500-015-1772-8 (July 23, 2015).

This is another computing paper, about integer factorisation – the problem of decomposing a given integer into its component prime factors. It’s obviously very easy to multiply 2 prime numbers together to get a result, but it’s extremely difficult to work backwards from the result to find the original primes. This is a well-known hard problem in computing, which is why the principle is used in digital security and cryptography.

There have been such alternative approaches to solving integer factorization found in Chan (2002), Meletiou et al. (2002), Jansen and Nakayama (2005), Laskari et al. (2006), Mishra et al. (2014) and Yampolskiy (2010) which suggest the growing interest of researchers across the academia towards nature-inspired computing for integer factorization. In this paper, we present a new technique derived from computational chemistry: a molecular geometry optimization approach (Energy Minimization 2014). Molecular geometry optimization is basically a process of finding an optimized arrangement of a group of atoms in space that reduces the surface potential energy. In such an arrangement, the atoms are positioned in space such that the net atomic forces on each other is minimum (close to zero).

The authors also explicitly explain their motivation for using a nature-inspired method for the task of integer factorisation:

There is absolutely no need in nature for factorization. Then, one may question that why would we model natural phenomenon for solving the problem of integer factorization. The trick here is to transform the problem of integer factorization (IF) into an optimization task. Once this can be done, nature-inspired methods for optimization jobs can be easily applied to them to approach an approximate, and in some cases exact, solutions.
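To illustrate the “factorisation as optimisation” trick, here’s a minimal sketch in Python. This is my own toy hill-climber, not the authors’ MGO algorithm: it treats the distance |n − p·q| as the “energy” to be minimised and accepts random steps that don’t increase it.

```python
import random

def energy(n, p, q):
    # How far the candidate product is from the target; 0 means factored.
    return abs(n - p * q)

def factor_by_search(n, iterations=500_000, seed=1):
    """Toy stochastic local search over (p, q) pairs. Illustrative only -
    not the authors' MGO method, and not guaranteed to succeed."""
    rng = random.Random(seed)
    p, q = rng.randrange(2, n), rng.randrange(2, n)
    best = energy(n, p, q)
    for _ in range(iterations):
        # Propose a small random step in the 2-D integer search space.
        p2, q2 = p + rng.randint(-5, 5), q + rng.randint(-5, 5)
        if p2 < 2 or q2 < 2:
            continue
        e = energy(n, p2, q2)
        if e <= best:  # accept moves that don't increase the 'energy'
            p, q, best = p2, q2, e
            if best == 0:
                return p, q
    return None  # gave up within the budget

print(factor_by_search(53 * 61))  # may print (53, 61)/(61, 53), or None
```

Whether a search like this scales to large numbers is another matter entirely – which is the authors’ own caveat, quoted below.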

Basically the algorithm that they use, which they call the molecular geometry optimization algorithm (MGO), is a kind of genetic (evolutionary) algorithm based on the idea of searching through the number space and “selecting” for steps that move the algorithm closer to the answer. This is the only way I can imagine the DI interpreting this as relevant to evolution or ID – the authors say:

One must also note that the problem of integer factorization has an all-or-nothing nature—either a number is a factor or it is not. So, at the large extent, no heuristic information is available. Hence, it becomes rather questionable if nature-inspired methods like MGO in this case would actually scale up to a large extent, by discarding incorrect solutions or to navigate to more promising areas of search space. This in fact is an open problem for researchers in this field.

I imagine that the DI see this as relating to searches through evolutionary sequence space, looking for new functions or something similar. However, this case of integer factorisation is extremely far removed from any biological system, and the principle is trivial – of course a search algorithm won’t perform well if the space offers no heuristic information. Overall this paper is so far removed from any implications for ID that I felt justified in including it in this section of the series instead of the later one, which will look at weak arguments for ID.

 

3. Steinar Thorvaldsen and Peter Øhrstrøm, “Darwin’s Perplexing Paradox: intelligent design in nature,” Perspectives in Biology and Medicine, Vol. 56 (1): 78-98 (Winter, 2013).

This paper sounds a bit more promising, doesn’t it? It even has “intelligent design” in the title! This paper is not a research paper, or even an opinion piece. It’s a historical narrative about Darwin and his conversations with his contemporaries around the time he published On the Origin of Species, including Asa Gray, William Whewell, and John Herschel. These conversations were theological in nature, covering how Darwin’s theory related to teleology, predestination, the problem of evil, and so on. Gray, Whewell, and Herschel were critical of Darwin to varying degrees, arguing that his theory should also take into account the guiding hand of a divine designer in the evolutionary process, while Darwin disagreed. The authors summarised the paper well in 2 sentences:

We argue that Darwin made a distinction between at least two kinds of intelligent design, one general (or cosmological) and one specific (related to the individual species). While he accepted the former kind of intelligent design as the basis of a correct understanding of the existence of natural laws, he rejected the latter idea as realized in the biological world.

They also describe at length how Darwin struggled with the paradox between these two positions: on the one hand he saw evidence for design in the universe as a whole, but on the other he couldn’t see it in individual species. He felt this conflict keenly, and towards the end of his life he described himself as an agnostic, apparently fluctuating between theism and atheism to some extent.

It’s an interesting paper, and in the discussion it comments on modern interpretations of the positions of the aforementioned men, and how modern theology has tried to reconcile these problems, but by no means does it “support the theory of intelligent design”. Unless you consider the fact that 19th century natural philosophers believed in some form of intelligent design (general and/or specific – see the above quote), including Darwin, to be evidence that supports the validity of ID. I don’t.

 

4. Kirk K. Durston, David K.Y. Chiu, Andrew K.C. Wong, and Gary C.L. Li, “Statistical discovery of site inter-dependencies in sub-molecular hierarchical protein structuring,” EURASIP Journal on Bioinformatics and Systems Biology, Vol. 2012 (8).

Finally, some biology! This paper is about protein structure and the relationships between amino acid residues. Since a functional protein generally consists of a specific folded structure, residues that are far apart in the linear sequence can end up right next to each other. In fact, it’s these kinds of relationships that are responsible for the way the protein folds in the first place: particular residues have affinities for each other, others repel each other, others are hydrophilic or hydrophobic, etc. As a result, a protein has interdependencies between particular sites along its sequence. For example, in Protein X residues 38 and 294 might have to complement each other’s charge in order to attract each other and fold the protein in a particular way, so residues 38 and 294 will be correlated in a statistical analysis of functional proteins – they’re not free to vary independently of each other.
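As a toy illustration of how such site couplings can be detected statistically, here’s a sketch that computes the mutual information between two columns of a (made-up) alignment. This is the generic textbook approach, not the authors’ specific measure, and the sequences are hypothetical:

```python
from collections import Counter
from math import log2

def column_mi(msa, i, j):
    """Mutual information (bits) between alignment columns i and j.
    High MI means the residues at the two sites do not vary independently."""
    n = len(msa)
    pi = Counter(seq[i] for seq in msa)
    pj = Counter(seq[j] for seq in msa)
    pij = Counter((seq[i], seq[j]) for seq in msa)
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Hypothetical aligned sequences: sites 1 and 3 are perfectly coupled
# (a K at site 1 always pairs with a D at site 3, and vice versa),
# while site 2 varies freely.
msa = ["AKGD", "AKAD", "SKCD", "ADGK", "SDAK", "ADCK"]
print(column_mi(msa, 1, 3))  # ~1.0 bit: coupled sites
print(column_mi(msa, 1, 2))  # 0.0 bits: independent sites
```

Real analyses also have to correct for things like phylogenetic relatedness and small-sample biases, but the basic signal being hunted for is the same.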

Specifically, the aim of the paper was to use multiple sequence alignment (MSA) and a statistical information measure to infer which sites are associated. As I mentioned above, this works by aligning hundreds of different functional sequences of a particular protein (e.g. from different species) and then performing a statistical analysis on the individual sites along the aligned sequences to look for associations. The authors conclude:

We have introduced here a powerful new approach for analyzing the multiple sequence alignment data of protein families and discovering key associations among aligned sites and their importance within the 3D structure. Using two proteins of known structure and function as a test bench, our method revealed key associations among residues and sites that appear to have important structural and functional significance. It can, therefore, be applied to protein families of unknown structure and function.

Again, while this method may indeed be useful, it’s certainly not novel in concept or practice, which is probably why this paper has only been cited 4 times in the past 5 years – 3 of those citations by the original authors themselves, and the fourth by a collaborator. So how does using MSA to detect relationships between amino acids in a protein structure support ID? Your guess is as good as mine.

 

5. David K.Y. Chiu and Thomas W.H. Lui, “Integrated Use of Multiple Interdependent Patterns for Biomolecular Sequence Analysis,” International Journal of Fuzzy Systems, Vol. 4(3):766-775 (September 2002).

This is a tricky one, because I’ve scoured the internet high and low for this paper, and it just doesn’t seem to be available online. The journal has only digitised its post-2015 volumes, and this paper was published in 2002. For that reason, I can only go by what other sources say about it. TalkOrigins says the authors “mention complex specified information in passing, but go on to develop another method of pattern analysis”. It doesn’t sound as though the paper had much to do with ID; it was probably just another case of a paper citing ID proponents in passing, inflating their citation counts.

 

I mentioned in Part 1 that of the 55 peer-reviewed papers, only 19 could best be described as “research articles” that did some kind of original data collection and/or analysis. 4 of those 19 were featured in this post (1, 2, 4, 5), which leaves 15 supposedly supporting ID, which I’ll get to in due course. These 5 papers stood out to me as being unrelated to ID, but they aren’t really the only ones. Later in the series we’ll look at the other papers whose only real relevance to ID is that they might have implications for evolution – for example, the way proteins evolve and how dependent their function is on their structure. These are interpreted as supporting ID by ID proponents, who like it when there’s even a hint of some kind of constraint on evolution. I’m getting ahead of myself though; that will be a few posts away. Next time we’ll see a selection of papers that I would classify as making arguments akin to “look at the trees!”, either directly in the paper’s text or as an interpretation.

50 left.

 

Comments and queries are welcome.

-RM

 

2 thoughts on “Reviewing the Intelligent Design Literature – Part 2: Why Are These Here?”

  1. Granted a late reply, but I think it’s worth mentioning something regarding the first article (written by Dembski and crew). It seems like Dembski is taking another stab at information theory and probability as a way to prove intelligent design. The following is from talkreason.org and part of one (of many) critiques:

    -In an essay titled “Specification: The Pattern That Signifies Intelligence,” it is said that “Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence.” In No Free Lunch, the index entry “Specification, definition of” leads to a page where specification is used as a synonym for rejection region. The [explanatory] filter requires us at some point to compute a probability, so whatever “specification” is, it must be possible to convert it into the mathematical object of a set.- brackets from me

    The previous probability and information theory efforts were shown to be quite wrong. I have no specific knowledge but it seems to me algorithmic specified complexity is part of another attempt to measure information and use probability to “prove” intelligent design.

