Tuesday, January 17, 2012

How ID sheds light on the classic free will dilemma

Copied from Uncommon Descent:

http://www.uncommondescent.com/philosophy/how-id-sheds-light-on-the-classic-free-will-dilemma/


The standard argument against free will is that it is incoherent.  It claims that a free agent must either be determined or non-determined.  If the free agent is determined, then it cannot be responsible for its choices.  On the other hand, if it is non-determined, then its choices are random and uncontrolled.  Neither case preserves the notion of responsibility that proponents of free will wish to maintain.  Thus, since there is no sensible way to define free will, it is incoherent. [1]
Note that this is not really an argument against free will, but merely an argument that we cannot talk about free will.  So, if someone were to produce another way of talking about free will, the argument would be satisfied.
Does ID help us in this case?  It appears so.  If we relabel “determinism” and “non-determinism” as “necessity” and “chance”, ID shows us that there is a third way we might talk about free will.
In the universe of ID there are more causal agents than the duo of necessity and chance.  There is also intelligent causality.  Dr. Dembski demonstrates this through his notion of the explanatory filter.  While the tractability of the explanatory filter may be up for debate, it is clear that the filter is a coherent concept.  The very fact that there is debate over whether it can be applied in a tractable manner means the filter is well defined enough to be debated.
The explanatory filter is a three-stage process for detecting design in an event.  First, necessity must be eliminated as a causal explanation.  This means the event cannot have been the precisely determined outcome of a prior state.  Second, chance must be eliminated.  To do so, the event must be so unlikely to have occurred that it would not have been possible to query half or more of the event space with the number of queries available.
At this point, it may appear we’ve arrived at our needed third way, and quite easily at that.  We need merely deny that an event is caused by chance or necessity.  However, things are not so simple.  The problem is that these criteria do not specify an event.  If an event does meet these criteria, then the unfortunate implication is that so does every other event in the event space.  In the end the criteria become a distinction without a difference, and we are thrust right back into the original dilemma.  Removing chance and necessity merely gives us improbability (P < 0.5), also called “complexity” in ID parlance.
What we need is a third criterion, called specificity.  This criterion can be thought of as a sort of compression: it describes the event in simpler terms.  One example is a STOP sign.  The basic material of the sign is a set of particles in a configuration.  To describe the sign in terms of that configuration is a very arduous and lengthy task, essentially a list of each particle’s type and position.  However, we can describe the sign in a much simpler manner by providing a computer, which knows how to compose particles into a sign according to a pattern language, with the instruction to write the word STOP on a sign.
According to a concept called Kolmogorov complexity [2], such machines and instructions form a compression of the event, and thus specify a subset of the event space in an objective manner.  This solves the previous problem where no events were specified.  Now, only a small set of events is specified.  While Kolmogorov complexity is not a necessary component of Dr. Dembski’s explanatory filter, it can be considered a sufficient criterion for specificity.
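To make the compression intuition concrete, here is a minimal sketch (my own illustration, not part of Dr. Dembski’s formal apparatus) that uses off-the-shelf zlib compression as a computable stand-in for Kolmogorov complexity, which is uncomputable in general.  A highly patterned “STOP”-style string compresses far below its raw length, while a random string of the same length barely compresses at all:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # zlib only gives an upper bound on Kolmogorov complexity,
    # but it is enough to separate patterned data from random data.
    return len(zlib.compress(data, 9))

patterned = b"STOP" * 250          # 1000 bytes that follow a short pattern
random_data = os.urandom(1000)     # 1000 bytes with no exploitable pattern

print(len(patterned), compressed_size(patterned))        # 1000 -> roughly 20 bytes
print(len(random_data), compressed_size(random_data))    # 1000 -> roughly 1000 bytes (no real compression)
```

Only a tiny fraction of all 1000-byte strings admit such a short description, which is what allows the short description to function as a specification.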
With this third criterion of specificity, we now have a distinction that makes a difference.  Namely, it shows we still have something even after removing chance and necessity: we have complex specified information (CSI).  CSI has two properties that make it useful for the free will debate.  First, it is a definition of an event that is caused by neither necessity nor chance.  As such, it is not susceptible to the original dilemma.  Furthermore, it provides a subtle and helpful distinction for the argument.  CSI does not avoid the distinction between determinism and non-determinism.  It still falls within the non-determinism branch.  However, CSI shows that randomness is not an exhaustive description of non-determinism.  Instead, the non-determinism branch further splits into a randomness branch and a CSI branch.
The second advantage of CSI is that it is a coherent concept defined with mathematical precision.  And, with a coherent definition in hand, the original argument vanishes.  As pointed out at the beginning of the article, the classic argument against free will is not an argument against something.  It is merely an argument that we cannot talk about something because we do not possess sufficient language.  Properly understood, the classical argument is more of a question, asking what the correct terminology is.  But, with the advent of CSI we now have at least one answer to the classical question about free will.
So, how can we coherently talk about a responsible free will if we can only say it is either determined and necessary, or non-determined and potentially random?  One precise answer is that CSI describes an entity that is both non-determined while at the same time non-random.
——————-
[1] A rundown of many different forms of this argument is located here:
http://www.informationphilosopher.com/freedom/standard_argument.html
[2] http://en.wikipedia.org/wiki/Kolmogorov_complexity

Infinite Probabilistic Resources Make ID Detection Easier (Part 2)

Copied from Uncommon Descent:

http://www.uncommondescent.com/philosophy/the-effect-of-infinite-probabilistic-resources-on-id-and-science-part-2/

Previously [1], I argued that not only may a universe with infinite probabilistic resources undermine ID, it definitely undermines science. Science operates by fitting models to data using statistical hypothesis testing with an assumption of regularity between the past, present, and future. However, given the possible permutations of physical histories, the majority are mostly random. Thus, a priori, the most rational position is that any detection of order cannot imply anything beyond the bare detection, and most certainly implies nothing about continued order in the future or that order existed in the past.
Furthermore, since such detections of order encompass any observations we may make, we have no other means of determining a posteriori whether science’s assumption of regularity is valid to any degree whatsoever. And, as the probabilistic resources increase, the problem only gets worse. This is the mathematical basis for Hume’s problem of induction. Fortunately, ID provides a way out of this conundrum. Not only does intelligent design detection become more effective as the probabilistic resources increase, but it also provides a basis (though not a complete answer) for the assumption of regularity in science.
In [1], I point out that as the resources approach infinity, the proportion of coherent configurations in the space of possible configurations approaches zero. This is important because Intelligent Design is really about hypothesis testing and specifying rejection regions [2], and coherent configurations allow us to form a rejection region. In hypothesis testing, the experimenter proposes a hypothesis and a probability distribution over potential evidence, signifying what results the hypothesis predicts. If the experiments produce results outside the predicted range to a great enough degree, then the results fall within the rejection region and the hypothesis is considered statistically unlikely and consequently rejected. Note that in this case it is actually better to have more result samples rather than fewer. With a few samples the variance is large enough that the results don’t render the hypothesis statistically unlikely. But, with enough samples the variance is reduced to the point where the hypothesis can be rejected. With an infinite number of samples we can see almost exactly whether the true distribution matches the predicted distribution.
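As a hedged illustration of why more samples help rather than hurt (the numbers here are my own toy example, not from the original argument): suppose the hypothesis under test predicts a fair coin (p = 0.5) but the observed frequency of heads is 0.52. The standard one-proportion z-statistic grows with the sample size, so a deviation that is invisible at small n becomes decisive at large n:

```python
import math

p0 = 0.5      # frequency predicted by the hypothesis under test
p_hat = 0.52  # observed frequency (hypothetical data)

for n in (100, 1_000, 10_000, 100_000, 1_000_000):
    # Normal-approximation z-statistic for a one-proportion test.
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    verdict = "reject" if abs(z) > 1.96 else "cannot reject"
    print(f"n = {n:>9,}   z = {z:6.2f}   {verdict}")
```

At n = 100 the deviation is well within chance (z ≈ 0.4); by n = 10,000 it is decisive (z = 4); and in the limit of infinite samples any real discrepancy between the true and predicted distributions is exposed.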
Such is the case with an infinite universe and infinite samples. The infinite universe is composed of all possible configurations, which create a probability distribution over how ordered an arbitrary sample is expected to be. With an assumption of infinite samples (i.e. a conscious observer in every configuration), we can say in what proportion of the configurations intelligent design detection will be successful, which is the inverse of the proportion of ordered configurations. Unfortunately, the number of unsuccessful detections never actually reaches zero, since there will always be coherent configurations as long as they are possible. If I happen to find myself in a coherent configuration I may just be extremely lucky. But, in the majority of configurations the chance and necessity hypothesis will be validly rejected in favor of the intelligent design hypothesis.
At this point it may seem suspicious that I write that we can reject a hypothesis in favor of another. Why should rejecting one hypothesis favor another hypothesis? This begins to sound like a god-of-the-gaps argument; just because we’ve dismissed chance and necessity doesn’t necessarily imply we can accept design. There may be yet another alternative we’ve yet to think of. While this is a good caution, science does not deal with unknown hypotheses. Science deals with discriminating between known hypotheses to select the best description of the data. But, what is the probability distribution over the evidence that ID provides? Well, the specific prediction of ID is that orderly configurations will be much more common than statistically expected. For example, we can see this in that Kolmogorov complexity provides an objective specification for CSI calculations [2]. Kolmogorov complexity is a universal measure of compression, and orderliness is a form of compression. [3] So, when I end up in a configuration that is orderly, I have a higher probability of being in a configuration that is the result of ID than in one that is the result of chance and necessity. Hence, an orderly configuration allows me to discriminate between the chance-and-necessity hypothesis and the ID hypothesis, in favor of the latter. Additionally, since orderly configurations drop off so quickly as the space of configurations approaches infinity, infinite resources actually make it extremely easy to discriminate in favor of ID when faced with an orderly configuration. Thus, intelligent design detection becomes more effective as the probabilistic resources increase.
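The drop-off in orderly configurations can be made precise with a standard counting argument from Kolmogorov complexity theory (a textbook bound, sketched here rather than derived): there are fewer than $2^{n-c+1}$ binary programs of length at most $n-c$, so at most that many strings of length $n$ can be compressed by $c$ or more bits,

$$\frac{\#\{x \in \{0,1\}^n : K(x) \le n-c\}}{2^n} < 2^{-c+1}.$$

If “orderly” means compressible by some fixed fraction of the configuration’s size, say $c = \alpha n$, the orderly proportion is below $2^{-\alpha n + 1}$ and so vanishes exponentially as the configurations grow.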
Now that I’ve addressed the question of whether infinite probabilistic resources makes ID detection impossible or much, much easier, let’s see whether ID can in turn do science a favor and provide a basis for its regularity assumption. I will attempt to do this in part 3 of my series.
[1] http://www.uncommondescent.com/intelligent-design/the-effect-of-infinite-probabilistic-resources-on-id/
[2] http://www.designinference.com/documents/2005.06.Specification.pdf
[3] Interestingly, Kolmogorov complexity is uncomputable in the general case due to the halting problem. This means that in general no algorithm can generate orderliness more often than is statistically expected to show up by chance. Hence, if some entity is capable of generating orderliness more often than statistically predicted, it must be capable, at least to some extent, of solving the halting problem.

The Effect of Infinite Probabilistic Resources on ID and Science (Part 1)

Copied from Uncommon Descent:

http://www.uncommondescent.com/intelligent-design/the-effect-of-infinite-probabilistic-resources-on-id/


One common critique of intelligent design is that, since it is based on probabilities, with enough probabilistic resources it is possible to make random events appear designed. For instance, suppose that we live in a universe with infinite time, space and matter. Now suppose we’ve found an artifact that, to the best of our knowledge (assuming finite probabilistic resources), passes the explanatory filter and exhibits CSI. However, one of the terms in the CSI calculation is the probabilistic resources available. If the resources are indeed infinite, then the calculation will never give a positive result for design. Consequently, if the infinite universe critique holds, then not only does it undermine ID, but every huckster, conman, and scam artist will have a field day.
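To see exactly where the resources term enters, recall the form of Dembski’s specified complexity measure from his 2005 specification paper (the paper cited in Part 2 above; I restate it here only as a sketch), where the factor of $10^{120}$ is his estimate of the universe’s available replicational resources:

$$\chi = -\log_2\!\big[\,10^{120} \cdot \varphi_S(T) \cdot P(T \mid H)\,\big],$$

with design inferred only when $\chi > 1$. Replace the $10^{120}$ with an unbounded resource count $M$ and $\chi \to -\infty$ as $M \to \infty$, so no event, however tightly specified, ever crosses the design threshold.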
Say I had a bet with you that I’m flipping a coin, and whenever it came up heads I’d pay you $100 and whenever it came up tails you’d pay me $1. Seems like a safe bet, right? Now, say that I flipped 100 tails in a row and you now owe me $100. Would you be suspicious? I might say 100 tails is just as likely, probabilistically speaking, as 50 tails followed by 50 heads, or alternating tails and heads, or any other permutation of 100 flips, which would be mathematically correct. To counter me, you bring in the explanatory filter and say, “Yes, 100 tails is equally probable, but it also exhibits CSI because there is a pattern it conforms to.” In a finite universe, this counter would also be mathematically valid. I’d be forced to admit foul play. But, if we lived in an infinite universe, then even events that seem to exhibit CSI will turn up, and I could claim there is no rational reason to suspect foul play. I could keep this up for 1,000 or 1,000,000 or 1,000,000,000,000 tails in a row, and you’d still have no rational reason to call foul play (though you may have rational reason to question my sanity).
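A small numeric sketch of why the specification matters (the trial counts below are illustrative only): every particular 100-flip sequence has probability $2^{-100}$, but the specification “all tails” picks out exactly one sequence out of the $2^{100}$ possible ones, so the expected number of all-tails runs among N independent 100-flip trials is $N \cdot 2^{-100}$:

```python
# Probability of any one particular 100-flip sequence (all-tails included).
p_sequence = 0.5 ** 100

# Expected number of all-tails runs for various numbers of independent
# 100-flip trials, i.e., for various amounts of probabilistic resources.
for trials in (10**6, 10**12, 10**30, 2**100):
    expected = trials * p_sequence
    print(f"trials = {trials:.3e}   expected all-tails runs = {expected:.3e}")
```

Only when the number of trials itself approaches $2^{100}$ (about $1.3 \times 10^{30}$) does the expected count reach one, which is precisely the sense in which granting infinite resources launders away any inference of foul play.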
Not only do many incredible events become reality, but we begin to lose our grip on reality itself. For instance, it is much more likely, in terms of a priori probability, that we are merely Boltzmann brains [2] instantiated with a momentary existence, only to disappear the next instant. Furthermore, it is much more likely that our view of reality itself is an illusion and the objective world is merely a random configuration that just happens to give us a coherent perception. As a result, in an infinite universe, our best guess is that we are hallucinating, instantaneous brains floating in space, or perhaps worse.
A more optimistic person might say, “Yes, but such a pessimistic situation only exists if we make assumptions about the a priori probability, such as it is a uniform or nearly uniform distribution. There are many other distributions that lead to a coherent universe where we are persistent beings that have a grasp on objective reality. Why make the pessimistic assumption instead of the optimistic assumption?”
Of course, this is good advice whenever we have such a choice of alternatives. Unfortunately, it ignores the mathematical structure of the problem. The proportion of coherent distributions to incoherent distributions drops off exponentially, and as the probabilistic resources approach infinity the decay becomes an almost binary drop-off. This means that as probabilistic resources approach infinity, the proportion of coherent distributions approaches zero. Nor does the situation get any better if we talk about probability distributions over probability distributions; the problem remains unchanged, or even gets exponentially worse, with every additional layer.
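One hedged way to picture the drop-off (a toy model of my own, not a claim about the actual space of distributions): if “coherence” requires each of $n$ independent degrees of freedom to land inside some constrained fraction $q < 1$ of its range, then the coherent proportion is $q^n$, which decays exponentially in $n$; past a fairly narrow band of $n$ it is effectively zero, which is the almost binary behavior described above.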
The end result is that with an infinite number of probabilistic resources the case for ID may be discredited, but then so is every other scientific theory.
However, perhaps there is a rational way to preserve science even if there are infinite probabilistic resources. If so, what effect does this have on ID? Maybe ID even has a hand in saving science? More to follow…
[1] http://en.wikipedia.org/wiki/Law_of_large_numbers
[2] http://en.wikipedia.org/wiki/Boltzmann_brain

Broader Implications of ID

Copied from Uncommon Descent:

http://www.uncommondescent.com/intelligent-design/broader-implications-of-id/


In the popular media, ID is often portrayed as Creationism in new clothes.  And indeed, even among ID proponents, the creation implications tend to be predominantly emphasized.  Yet the theory underpinning Intelligent Design has implications beyond the realm of biological history; perhaps it is a much broader theory than most realize at first.  In fact, it may even describe a comprehensive worldview.  The primary reason that ID has such an impact is that materialism underlies many areas of modern thought, and ID is an alternative hypothesis to materialism.
To understand the insights that ID brings, it is important to have a bit of philosophical background to begin with.  There are two basic concepts that are important to know: efficient and final causes.  This may seem a bit off the beaten trail, but stay with me here.  For any event there are two questions you can ask.  You can ask “how did this happen?” and you can ask “why did this happen?”.  As an example, the event of your web browser navigating to this article can either be described in terms of the very complex computer and network architecture and accompanying electrical signals that lead to the retrieval and display of this article (how), or it can be described in terms of the fact that you wished to view this article (why).  Both are valid explanations.  The first explanation is the efficient causal explanation and the second is the final causal explanation.  Now, to relate these concepts back to the interplay between materialism and ID: materialism implies that all events have only efficient causal explanations, and any perceived final causal explanations can be reduced to efficient causal explanations.  On the other hand, ID implies that some events may have irreducible final causal explanations, so that no matter how much one may know about how an event occurred, one will not be able to completely explain its occurrence.
For an application of these two concepts and ID, consider the realm of economics.  Generally there tend to be two schools of thought regarding economics: the decentralized Austrian school and the centralized Keynesian school.  ID allows us to say that one school is strictly and objectively better than the other.  To see this, consider how wealth is created.  Wealth is created by the creation of new information in the form of complex, specified inventions.  These irreducibly complex devices are formed from many integrated parts to accomplish a specific function or set of functions.  According to ID, individual intelligent agents are the creators of this information.  Thus, an economic system that incentivizes individuals to create new inventions to fulfill useful functions is strictly better than a system that does not.  In a centrally planned economy, there are only a few empowered information creators, who decide how resources are divided amongst the populace.  However, in a decentralized economy, all individuals are empowered to create information.  Since an Austrian economy focuses on decentralizing information production, it is strictly better than a Keynesian economy at creating wealth, since it enables an enormously larger pool of information-creating intelligent agents.
But how are materialistic assumptions at play in modern economic theory?  The impact of materialism primarily has to do with the notion of wealth.  If you recall the introductory distinction between efficient and final causes, materialism implies that there is no such thing as an irreducible final cause, while ID says there may really be final causes.  The added concept you need to see how this applies to economics is that when an event occurs due to a final cause, information is created at that point.  Conversely, if there is no such thing as a final cause, as materialism claims, then no information is ever created.  And, if information is tied to wealth creation, then the further implication is that wealth is not created.  In which case, wealth is no longer tied to inventions, but is instead tied to resources.  Since there are only a limited number of resources in the world, economics becomes primarily concerned with the proper distribution of these resources amongst the population, instead of being concerned with allowing the creation of greater amounts of resources.  So, a centralized Keynesian economy becomes the best kind of economy within a materialistic paradigm, since it least wastefully allocates resources (at least in theory).  But, if the materialism assumption is removed, then the emphasis for economies changes.  Once the door is opened to the idea that wealth can be created, then economies can look to provide better avenues for wealth creation.  As discussed above, ID further implies that wealth is better created through a decentralized economy than through a centralized one.
Now let’s consider a very right-brained topic, one rarely under the purview of common ID discussion.  Namely, how are the humanities related to the sciences?  Commonly, they are considered two separate spheres with little interrelation.  Additionally, the humanities nowadays tend to be somewhat looked down upon by the more technically oriented fields.  And, due to the greater difficulty in establishing an ROI for the humanities, it becomes much harder to secure grant money and stay afloat in academia.  Consequently, out of a combination of insecurity and poverty, the humanities are beginning to sell out more and more in academia, adopting the false robes of quantifiable, empirical fields and needlessly obtuse technical language.
How does ID shed light on a solution here?  Well, underlying the difficulties that the humanities face is the worldview of materialism.  Materialism asserts that the only reality is matter.  If the only reality is matter, then only the fields dealing with the description of matter, matter.  Since the humanities ostensibly do not deal with matter, and in fact traditionally deal with entities such as the soul, God, and other such topics, the humanities are considered to be at best entertaining and at worst dangerous deceptions (per the recent spate of cantankerous anti-religion literature).  ID provides a helping hand here by showing that, at the very least, there is room to doubt that there is nothing more to reality than particles colliding and quantum waveforms collapsing.  Again, to understand why ID helps, we can rely on the handy distinction between efficient and final causes.  Simply enough, if ID is at least possibly true, then there may be other entities at work than the particles and waves.  Furthermore, if ID is true, then final causal explanations are true and important, and final causal explanations are entirely in the realm of the humanities.  The humanities primarily occupy themselves with answering the question why?, and since final causes are the source of intelligently designed events, the humanities turn out to be even more important than the sciences, at least as far as intelligent design is concerned.
And, ID goes further than even this, as we’ll see in the realm of philosophy.
As any student of the history of philosophy can tell you, the modern era has marked a dramatic change of focus in philosophy.  What used to be a holistic field, one that attempted to understand man and his relation to reality in totality through rationality, has bifurcated into two realms: the continental and analytic philosophical traditions.  The continental tradition tends to be occupied with questions of meaning and purpose, while the analytic tradition attempts to remove all ambiguity from discourse.  Can we perhaps explain this divide in terms of our efficient and final cause distinction?  Perhaps we can, if we first look at how this distinction applies to language and thought.  The distinction between efficient and final causes shows up in linguistics as the distinction between syntax and semantics.  Syntax describes how a language works, the efficient causal portion of language, while semantics deals with the content of language, the purposeful thought and final cause behind a particular word choice.  Analytic philosophy tends to be primarily concerned with the syntax of our thought and language, and concentrates heavily on the fields of logic and language syntax.  Continental philosophy tends to be primarily concerned with the semantics, and is often occupied with fields such as phenomenology and qualia.
So, here, even in the realm of philosophy we can see the same bifurcation as we saw in the humanities.  And, as in the humanities, the analytic portion of philosophy is often considered the more reliable.  However, continental philosophy, instead of trying to make itself more quantifiable and objective, has decided to embrace subjectivity.  Here again, ID is able to provide a useful perspective.  As we saw with the humanities, ID implies that the field of final causes may be much more relevant than it is usually credited to be nowadays, and so implies that the syntax of analytic philosophy provides a substrate for the content of continental philosophy’s semantics, in the same way that we need grammar and vocabulary in order to express ideas in language.  And thus, ID provides a precise way of describing the relationship between analytic and continental philosophy, which can provide an approach for integrating the two fields.
By unifying the humanities and sciences, and the fields of philosophy, ID now opens the way for providing a framework for ethics and morality.  In the middle ages, and throughout much of western history, morality was understood within a framework of natural law.  This framework was explained by Aristotle through the notion that everything had a function, and that life was lived well by fulfilling one’s function.  Thus, morality was explained in terms of living according to a purpose, a final cause.  However, with the advent of materialism, the notion of natural function became discredited.  Why this happened is easy to see if we think of functions as final causes.  As explained previously, materialism does away with final causes, replacing them all with efficient causes.  Consequently, with the removal of final causes, function and thus natural-law-based morality were removed as well.  But, if materialism is not a foregone conclusion, then there may well be a system of functionality embedded in our world, within which we can define a moral theory based on natural law.
And with that, I bring to a close my brief but in-depth look at some of the non-biological implications of intelligent design theory.  There are numerous other interesting implications of ID, but I will need to cover them in a new article.

Follow up to critics agreeing with Dembski re: NFL

Copied from Uncommon Descent:

http://www.uncommondescent.com/intelligent-design/follow-up-to-critics-agreeing-with-dembski-re-nfl/


Joe Felsenstein (a zoologist) at Panda’s Thumb responded [1] to my previous article [2], which showed that a couple of critics, in particular Wolpert, who created the NFL theorems, actually agree with Dembski.
He refers me to a paper he wrote [3] where he explains that the problem with Dembski’s argument is that the relevant fitness landscape for evolution does not fall under the domain of the NFL theorems.  While he may be right, I’m skeptical, since Wolpert explicitly denies this in his paper.
Here is the quote again:
“In general in biological coevolution scenarios (e.g., evolutionary game theory), there is no notion of a champion being produced by the search and subsequently pitted against an antagonist in a “bake-off”. Accordingly, there is no particular signifcance to results for C’s that depend on f.
This means that so long as we make the approximation, reasonable in real biological systems, that x’s are never revisited, all of the requirements of Ex. 1 are met. This means that NFL applies.” [formatting mine] [4]
Additionally, there are other reasons I am skeptical.  For one, while there are a few domains where the NFL doesn’t apply, the NFL applies to most.  It may be that evolution just happens to be concerned with the extremely small subset where the NFL doesn’t apply.  But even if it does, within this select group most of the landscapes are still mostly random.  While there are relevant citations within other papers by Wolpert and other authors on the NFL (which I can dig up if requested), it is easy to see this result mathematically.  According to Kolmogorov complexity, the majority of landscapes will be completely random in structure.  Of the rest, only a small subset has any significant amount of structure, and this subset shrinks at an exponential rate as the landscape size increases.  So, given the timescales, number of creatures, and variety of environments that evolution posits for its effectiveness, it is highly unlikely the landscape is suitably structured for any manner of effective search.
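As a hedged toy illustration of NFL-style behavior on an unstructured landscape (my own sketch, not an argument from Wolpert’s paper): assign independent random fitness values to 16-bit genomes, then compare a mutation-based hill climber against blind random sampling given the same number of fitness evaluations.  Because neighboring genomes carry no information about one another on a random landscape, the climber gains essentially nothing from its search:

```python
import random

random.seed(0)
N = 16                                                # 16-bit genomes: a 65,536-point search space
landscape = [random.random() for _ in range(2 ** N)]  # structureless (random) fitness values

def hill_climb(evals):
    x = random.randrange(2 ** N)
    best = landscape[x]
    for _ in range(evals - 1):
        y = x ^ (1 << random.randrange(N))            # flip one random bit (a "mutation")
        if landscape[y] >= landscape[x]:
            x = y                                     # accept non-worsening moves
        best = max(best, landscape[y])
    return best

def random_search(evals):
    return max(landscape[random.randrange(2 ** N)] for _ in range(evals))

trials, evals = 500, 100
print(sum(hill_climb(evals) for _ in range(trials)) / trials)     # roughly 0.99
print(sum(random_search(evals) for _ in range(trials)) / trials)  # roughly 0.99
```

On a landscape with exploitable structure (say, fitness correlated with the number of 1-bits in the genome) the climber would pull ahead; the point of the sketch is only that such structure is exactly what cannot be assumed for free.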
Second, Dr. Felsenstein’s big promise that obviates any concerns about the NFL is:
“They have overlooked the NFL theorem’s unrealistic assumptions about the random way that fitnesses are associated with genotypes, which in effect assumes mutations to have disastrously bad fitness. ”
As far as I know, the current consensus of population geneticists is that mutations do indeed have disastrously bad fitness.
Finally, perhaps Dr. Felsenstein is right after all, and evolution does happen to possess the extremely rare and valuable kind of fitness landscape on which algorithmic search is effective enough to be worthwhile.  In this case Dembski would indeed be wrong about the applicability of the NFL.  However, given the high specificity of such a landscape, this would mean that evolution itself is intelligently designed to an extraordinary degree.
[1] http://www.pandasthumb.org/archives/2011/08/criticisms-of-d.html#comment-panels
[2] http://www.uncommondescent.com/intelligent-design/critics-agree-with-dembski-the-no-free-lunch-theorem-applies-to-evolution/
[3] http://ncse.com/rncse/27/3-4/has-natural-selection-been-refuted-arguments-william-dembski
[4] http://cs.calstatela.edu/wiki/images/1/15/Wolpert-Coevolution.pdf

Critics agree with Dembski, the No Free Lunch theorem applies to evolution

Copied over from Uncommon Descent:

http://www.uncommondescent.com/intelligent-design/critics-agree-with-dembski-the-no-free-lunch-theorem-applies-to-evolution/


We’ve all noticed that ID critics tend to speak outside their realm of expertise. Biologists expound their expert opinions on mathematics, mathematicians make claims about computer science, and computer scientists think they know it all when it comes to evolution.
So, I thought, what happens if I only listen to their opinions in their actual realms of expertise?
Here’s a mathematician, MarkCC, author of the blog “Good Math, Bad Math.”
What’s his expertise? Math. What does he say about Dembski’s mathematics?
“he’s actually a decent mathematician”
What is not his expertise? Computer science. What does he say in the domain of computer science?
“But his only argument for making those modifications have nothing to do with evolution: he’s carefully picking search spaces[competitive agent (co-evolutionary) algorithms] that have the properties he want, even though they have fundamentally different properties from evolution.” [formatting mine]
Here MarkCC misunderstands the point of said paper, which is to define how the fitness of agents in co-evolutionary algorithms should be measured in general, regardless of the search space. (As an aside, he also doesn’t realize the triangle inequality can apply to evolutionary scenarios as well: B outbreeds and eliminates A, C outbreeds and eliminates B; but A could have outbred C given the chance.)
But, MarkCC is excused since both of these issues are outside of his realm of expertise.
Alright, let’s look at what the computer science experts have to say, namely Wolpert. Wolpert responds to Dembski’s earlier work on the NFL, which didn’t address co-evolution.
Let’s remind ourselves that Wolpert’s expertise lies in algorithms, not in biology. Does he detect any problem with Dembski’s understanding of the NFLT? Well, if Wolpert does, he says nothing. Instead, the supposed problem is:
“…throughout there is a marked elision of the formal details of the biological processes under consideration. Perhaps the most glaring example of this is that neo-Darwinian evolution of ecosystems does not involve a set of genomes all searching the same, fixed fitness function, the situation considered by the NFL theorems. Rather it is a co-evolutionary process.” [formatting mine]
So, within Wolpert’s domain of expertise he detects no problem with Dembski’s work, just like MarkCC, or is at least silent. Wolpert’s only complaint lies outside his field: whether Dembski correctly formalizes evolutionary processes within his argument (not that Wolpert has much sympathy for Darwinists either*).  He does indicate that he believes the NFL does not apply to co-evolution**:
“Roughly speaking, as each genome changes from one generation to the next, it modifies the surfaces that the other genomes are searching. And recent results indicate that NFL results do not hold in co-evolution.”
Now for the punch line: what happens when Wolpert does examine the evolutionary details and whether the NFL applies to them?
“In general in biological coevolution scenarios (e.g., evolutionary game theory), there is no notion of a champion being produced by the search and subsequently pitted against an antagonist in a “bake-off”. Accordingly, there is no particular signifcance to results for C’s that depend on f.
This means that so long as we make the approximation, reasonable in real biological systems, that x’s are never revisited, all of the requirements of Ex. 1 are met. This means that NFL applies.” [formatting mine]
It is commonly noted that when smart people achieve expertise in a certain area, they suddenly think they are experts in many others, even when lacking the necessary knowledge. When listening to smart people, it is always wise to take this into consideration, and to listen most closely to their opinions about what they’ve carefully studied.
The ID debates are no exception.
—————
* “First, biologists in particular and scientists in general are horribly confused defenders of their field. When responding to attacks from non-scientists, rather than attempt the rigor that the geometry of induction and similar bodies of statistics provide, they fall back on Popperian incantations, trying to browbeat their opponents into acceding to the homily that if one follows certain magic rituals—the vaunted “scientific method”—then one is rewarded with The Truth. No mathematically precise derivation of these rituals from first principles is provided. The “scientific method” is treated as a first-category topic, opening it up to all kinds of attack. In particular, in defending neo-Darwinism, no admission is allowed that different scientific disciplines simply cannot reach the same level of certainty in their conclusions due to intrinsic differences in the accessibility of the domains they study.”
** From the comments regarding how exactly the NFL applies to co-evolution:
What Wolpert is saying here is that co-evolution can produce fitter competitors, but it still cannot produce complex functionality:
“For example, say the problem is to design a value y that maximizes a provided function g(y), e.g., design a biological organ that can function well as an optical sensor. Then, even if we are in the general coevolutionary scenario of interacting populations, we can still cast the problem as a special case of Example 1….
Due to the fact that they’re a special case of Example 1, the NFL theorems hold in such scenarios. The extra details of the dynamics introduced by the general biological coevolutionary process do not affect the validity of those theorems, which is independent of such details.”
However, it can possibly produce a better survivor:
“On the other hand, say the problem is to design an organism that is likely to avoid extinction (i.e., have a non-zero population frequency) in the years after a major change to the ecosystem. More precisely, say that our problem is to design that organism, and then, after we’re done its ecosystem is subjected to that change, a change we know nothing about a priori. For this problem the coevolution scenario is a variant of self-play; the “years after the major change to the ecosystem” constitute the “subsequent game against an antagonist”. Now it may be quite appropriate to choose a C that depends directly on f. In this situation NFL may not hold.”
Note that this is consistent with ID’s claim that evolution cannot produce complex functionality.