Tuesday, January 17, 2012

Infinite Probabilistic Resources Make ID Detection Easier (Part 2)

Copied from Uncommon Descent:

http://www.uncommondescent.com/philosophy/the-effect-of-infinite-probabilistic-resources-on-id-and-science-part-2/

Previously [1], I argued that not only may a universe with infinite probabilistic resources undermine ID, it definitely undermines science. Science operates by fitting models to data using statistical hypothesis testing, with an assumption of regularity between the past, present, and future. However, of the possible permutations of physical histories, the vast majority are mostly random. Thus, a priori, the most rational position is that a detection of order cannot imply anything beyond the bare detection itself, and it certainly implies nothing about continued order in the future or prior order in the past.
Furthermore, since such detections of order encompass any observations we may make, we have no other means of determining a posteriori whether science’s assumption of regularity is valid to any degree whatsoever. And as the probabilistic resources increase, the problem only gets worse. This is the mathematical basis for Hume’s problem of induction. Fortunately, ID provides a way out of this conundrum. Not only does intelligent design detection become more effective as the probabilistic resources increase, but it also provides a basis (though not a complete answer) for the assumption of regularity in science.
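To make the counting argument concrete, here is a minimal sketch in Python (my own illustration, not part of the original post) of the standard counting bound: there are 2^n possible n-bit histories, but fewer than 2^(n-k) descriptions shorter than n-k bits, so at most a 2^-k fraction of histories can be compressed by k or more bits, no matter how large n grows.

    # Counting bound: the fraction of n-bit histories compressible by k or
    # more bits is below 2**-k, independent of n. Order is rare a priori.
    for k in (10, 20, 50, 100):
        print(f"fraction of histories compressible by >= {k} bits: < {2.0 ** -k:.3e}")
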
In [1], I point out that as the resources approach infinity, the proportion of coherent configurations in the space of possible configurations approaches zero. This is important because Intelligent Design is really about hypothesis testing and specifying rejection regions [2], and coherent configurations allow us to form a rejection region. In hypothesis testing, the experimenter proposes a hypothesis and a probability distribution over potential evidence, signifying what results the hypothesis predicts. If the experiments produce results outside the predicted range to a great enough degree, then the results fall within the rejection region, and the hypothesis is considered statistically unlikely and consequently rejected. Note that in this case it is actually better to have more samples rather than fewer. With a few samples, the variance is large enough that the results don’t render the hypothesis statistically unlikely. But with enough samples, the variance is reduced to the point where the hypothesis can be rejected. With an infinite number of samples, we can see almost exactly whether the true distribution matches the predicted distribution.
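As a toy illustration of the effect of sample size (a sketch of my own; the fair coin and the 60% bias are arbitrary stand-ins, not from the original post), the same observed rate of heads fails to put a small sample in the rejection region but puts a large sample there decisively:

    from math import comb

    def p_value_at_least(heads, flips):
        """One-sided p-value under the fair-coin hypothesis: P(X >= heads)."""
        return sum(comb(flips, i) for i in range(heads, flips + 1)) / 2 ** flips

    # Observe 60% heads at two sample sizes; the hypothesis predicts 50%.
    for flips in (20, 1000):
        heads = round(0.6 * flips)
        p = p_value_at_least(heads, flips)
        verdict = "reject" if p < 0.05 else "cannot reject"
        print(f"{heads}/{flips} heads: p = {p:.3g} -> {verdict} the fair coin")
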
Such is the case with an infinite universe and infinite samples. The infinite universe is composed of all possible configurations, which together define a probability distribution over how ordered an arbitrary sample is expected to be. With an assumption of infinite samples (i.e. a conscious observer in every configuration), we can say in what proportion of the configurations intelligent design detection will be successful, which is the complement of the proportion of ordered configurations. Unfortunately, the number of unsuccessful detections never actually reaches zero, since there will always be coherent configurations as long as they are possible. If I happen to find myself in a coherent configuration, I may just be extremely lucky. But in the majority of configurations, the chance-and-necessity hypothesis will be validly rejected in favor of the intelligent design hypothesis.
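As a toy version of this claim (my own sketch; “coherent” is crudely proxied here by “few runs of identical bits”, not by Dembski’s specification measure), we can enumerate every 16-bit configuration and compute the exact proportion that is ordered: small, but never zero.

    N = 16  # bits per toy configuration

    def runs(x):
        """Number of maximal runs of identical bits in an N-bit integer."""
        bits = [(x >> i) & 1 for i in range(N)]
        return 1 + sum(bits[i] != bits[i + 1] for i in range(N - 1))

    ordered = sum(1 for x in range(2 ** N) if runs(x) <= 4)
    total = 2 ** N
    print(f"ordered configurations: {ordered}/{total} = {ordered / total:.4%}")
    print(f"complement, where detection succeeds: {(total - ordered) / total:.4%}")
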
At this point it may seem suspicious that I write that we can reject one hypothesis in favor of another. Why should rejecting one hypothesis favor another? This begins to sound like a god-of-the-gaps argument; just because we’ve dismissed chance and necessity doesn’t necessarily mean we can accept design. There may be yet another alternative we’ve not thought of. While this is a good caution, science does not deal with unknown hypotheses. Science deals with discriminating between known hypotheses to select the best description of the data. But what is the probability distribution over the evidence that ID provides? The specific prediction of ID is that orderly configurations will be much more common than statistically expected. For example, we can see this in that Kolmogorov complexity provides an objective specification for CSI calculations [2]. Kolmogorov complexity is a universal measure of compression, and orderliness is a form of compression [3]. So, when I end up in a configuration that is orderly, I have a higher probability of being in a configuration that is the result of ID than in one that is the result of chance and necessity. Hence, an orderly configuration allows me to discriminate between the chance-and-necessity hypothesis and the ID hypothesis, in favor of the latter. Additionally, since the proportion of orderly configurations drops off so quickly as the space of configurations approaches infinity, infinite resources actually make it extremely easy to discriminate in favor of ID when faced with an orderly configuration. Thus, intelligent design detection becomes more effective as the probabilistic resources increase.
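Since Kolmogorov complexity is uncomputable (see [3]), any practical check must substitute a real compressor as a computable upper bound. Below is a minimal sketch of the discrimination step, assuming zlib as the stand-in compressor and an arbitrary 1000-byte example; both choices are mine, not from the post or from [2].

    import os
    import zlib

    def description_length(data):
        """Compressed size in bytes: a computable upper bound on
        Kolmogorov complexity, up to the compressor's overhead."""
        return len(zlib.compress(data, 9))

    orderly = b"01" * 500       # a highly regular 1000-byte configuration
    typical = os.urandom(1000)  # a typical (incompressible) configuration

    for name, data in (("orderly", orderly), ("typical", typical)):
        # A description far shorter than the data itself signals more order
        # than chance predicts, favoring the design hypothesis above.
        print(f"{name}: {len(data)} bytes -> compresses to {description_length(data)} bytes")
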
Now that I’ve addressed the question of whether infinite probabilistic resources make ID detection impossible or much, much easier, let’s see whether ID can in turn do science a favor and provide a basis for its regularity assumption. I will attempt to do this in part 3 of this series.
[1] http://www.uncommondescent.com/intelligent-design/the-effect-of-infinite-probabilistic-resources-on-id/
[2] http://www.designinference.com/documents/2005.06.Specification.pdf
[3] Interestingly, Kolmogorov complexity is uncomputable in the general case due to the halting problem. This means that, in general, no algorithm can generate orderliness more often than is statistically expected to show up by chance. Hence, if some entity is capable of generating orderliness more often than statistically predicted, it must be capable, at least to some extent, of solving the halting problem.
