Thoughts on the Fine-Tuning Fallacy


Proponents of the fine-tuning argument, typically religious apologists, claim that the conditions necessary for life could exist only if certain physical constants fell within a very narrow range. For instance, they argue that if the critical density of the early universe had differed by as little as 1 part in 10^40, the universe as we know it would not exist, and stars and galaxies would never have formed. Another example they cite is the cosmological constant, said to be fine-tuned to 1 part in 10^120. Apologists contend that these finely tuned quantities are essential for our existence, and they take this to suggest a conscious designer who intentionally set these conditions for our creation.

Is the Universe Really Finely-Tuned?

Many scientists have written papers arguing that even if the constants could vary, and even if their values were very different from those in our universe, life would still be possible.


Physicist Victor Stenger directly confronted the fine-tuning argument in a technical article titled Natural Explanations for the Anthropic Coincidences. He took four constants from which one can compute the average lifetime of a star, the sizes of planets, and other traits that predict whether a universe might allow life. His simulation randomly varied these constants over a range spanning five orders of magnitude above and five below their actual values to see what kind of universe each combination creates. His conclusion: “A wide variation of constants of physics has been shown to lead to universes that are long-lived enough for complex matter to evolve.”
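To make the structure of such a scan concrete, here is a minimal Python sketch of the kind of Monte Carlo Stenger describes. It is not his actual program: the log-uniform sampling range follows the description above, but the stellar-lifetime formula is only a crude dimensional stand-in, and the printed percentage should not be read as reproducing his published numbers.

```python
import random

# Illustrative sketch (not Stenger's actual code): vary the three parameters
# he cites for stellar lifetimes (electron mass, proton mass, and the
# electromagnetic coupling) log-uniformly over five orders of magnitude
# above and below their real values, and ask how often a crude proxy for
# the minimum stellar lifetime exceeds a billion years.

HBAR = 1.05e-34               # J*s
C = 3.0e8                     # m/s
G = 6.7e-11                   # m^3 kg^-1 s^-2
BILLION_YEARS = 1e9 * 3.15e7  # seconds

REF = {
    "m_e": 9.1e-31,      # electron mass, kg
    "m_p": 1.7e-27,      # proton mass, kg
    "alpha": 1 / 137.0,  # electromagnetic coupling
}

def sample(spread=5):
    """Draw each parameter log-uniformly within `spread` decades of its reference value."""
    return {k: v * 10 ** random.uniform(-spread, spread) for k, v in REF.items()}

def proxy_lifetime(p):
    """Crude dimensional estimate of a stellar lifetime (a placeholder, not
    Stenger's formula): t ~ (alpha^2 / alpha_G) * (m_p / m_e)^2 * hbar / (m_e c^2),
    with the gravitational coupling alpha_G = G * m_p^2 / (hbar * c)."""
    alpha_g = G * p["m_p"] ** 2 / (HBAR * C)
    return (p["alpha"] ** 2 / alpha_g) * (p["m_p"] / p["m_e"]) ** 2 * HBAR / (p["m_e"] * C ** 2)

trials = 100_000
long_lived = sum(proxy_lifetime(sample()) > BILLION_YEARS for _ in range(trials))
print(f"{100 * long_lived / trials:.1f}% of sampled parameter sets give billion-year stars")
```

The point of the exercise is simply that once all the parameters are allowed to vary together, long-lived stars turn out to occupy a broad region of parameter space rather than a knife edge.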

Apologists criticized it for addressing only four rather than six constants, but in fact only four constants are relevant for generating long-lived stars, whose existence makes the conditions for life highly probable regardless of what the other constants turn out to be, as Stenger argues in the above-mentioned paper. (Carrier, 2001)


In the book God: The Failed Hypothesis (pp. 148-149, 164), Dr. Stenger discussed his findings in more depth: "Only four parameters are needed to specify the broad features of the universe as it exists today: the masses of the electron and proton and the current strengths of the electromagnetic and strong interactions. [20-1] (The strength of gravity enters through the proton mass, by convention.) I have studied how the minimum lifetime of a typical star depends on the first three of these parameters. [21] Varying them randomly in a range of ten orders of magnitude around their present values, I find that over half of the stars will have lifetimes exceeding a billion years. Large stars need to live tens of millions of years or more to allow for the fabrication of heavy elements. Smaller stars, such as our sun, also need about a billion years to allow life to develop within their solar system of planets. Earth did not even form until nine billion years after the big bang. The requirement of long-lived stars is easily met for a wide range of possible parameters. The universe is certainly not fine-tuned for this characteristic.

One of the many major flaws with most studies of the anthropic coincidences is that the investigators vary a single parameter while assuming all the others remain fixed. They further compound this mistake by proceeding to calculate meaningless probabilities based on the grossly erroneous assumption that all the parameters are independent. [22] In my study I took care to allow all the parameters to vary at the same time. Physicist Anthony Aguirre has independently examined the universes that result when six cosmological parameters are simultaneously varied by orders of magnitude, and found he could construct cosmologies in which "stars, planets, and intelligent life can plausibly arise." [23] Physicist Craig Hogan has done another independent analysis that leads to similar conclusions. [24] And, theoretical physicists at Kyoto University in Japan have shown that heavy elements needed for life will be present in even the earliest stars independent of what the exact parameters for star formation may have been. [25]"

Dr. Stenger also published replies to some of the counter-arguments that apologists such as Robin Collins and Luke Barnes presented in response to his objections.

Dr. Stenger also discussed the alleged fine-tuning of the vacuum energy (pp. 151-153): "Next, let us examine the claim that the vacuum energy of the universe is fine-tuned. Normally we think of the vacuum as being empty of matter and energy. However, according to general relativity, gravitational energy can be stored in the curvature of empty space. Furthermore, quantum mechanics implies that a vacuum could contain a minimum zero-point energy. [Steven] Weinberg referred to this as the cosmological constant problem, since any vacuum energy density is equivalent to the parameter in Einstein's theory of general relativity called the cosmological constant that relates to the curvature of empty space-time. A better term is vacuum energy problem. Crude calculations gave a value for the vacuum energy density that is some 120 orders of magnitude greater than its maximum value from observations. Since this density is constant, it would seem to have been fine-tuned with this precision from the early universe, so that its value today allowed for the existence of life. Until recent years, it was thought that the cosmological constant is exactly zero, in which case there was no need for fine-tuning, although no theoretical reason was known. However, in 1998, two independent research groups studying distant supernovae were astonished to discover that the current expansion of the universe is accelerating. 

More recent observations from other investigators have confirmed this result. The universe is falling up! The source of this cosmic acceleration may be some still-unidentified dark energy, which constitutes 70 percent of the mass of the universe. One possible mechanism is gravitational repulsion by means of the cosmological constant, that is, by way of a vacuum energy field, which is allowed by general relativity. If that is the case, then the cosmological constant problem resurfaces. In the meantime, we now have plausible reasons to suspect that the original calculation was incomplete and that a proper calculation will give zero for the vacuum energy density. Until these newer estimates are shown to be wrong, we cannot conclude that the vacuum is fine-tuned for life and we have no particularly strong need to invoke a designer deity. But, then, what is responsible for cosmic acceleration, that is, what is the nature of the dark energy? A cosmological constant is not the only possible source of gravitational repulsion. According to general relativity, any matter field will be repulsive if its pressure is sufficiently negative. Theorists have proposed that the dark energy may be a matter field, called quintessence, which requires no fine-tuning."

Further, a quintessence field may not even be necessary. Cosmologist Sean Carroll has challenged the assumption that such quantities are genuinely fine-tuned, using the expansion rate of the early universe as an example:

“[T]he fine-tunings you think are there might go away once you understand the universe better. They might only be apparent. There’s a famous example theists like to give, or even cosmologists who haven’t thought about it enough, that the expansion rate of the early universe is tuned to within 1 part in 10⁶⁰. That’s the naïve, back of the envelope, pencil and paper estimate you would do. But in this case, you can do better. You can go into the equations of general relativity and there is a correct rigorous derivation of the probability [Ref: Hawking et al., 1987 and Carroll & Tam, 2010]. If you ask the same question using the correct equations, you find that the probability is 1. Mathematically, almost all early universe cosmologies have the right expansion rate to live for a long time and allow life to exist. We can’t currently say that all parameters fit into that paradigm, but until we know the answer, we also can’t claim that they’re definitely finely-tuned.” (God & Cosmology, pp. 49-50)


Elsewhere, Dr. Carroll pointed out that we don’t know enough about life and the universe to determine whether life of any kind would really be impossible if the constants were different:

“[W]e don’t know that much about whether life would be possible if the numbers of our universe were very different. Think of it this way: if we didn’t know anything about the universe other than the basic numbers of the Core Theory [i.e., the Standard Model of Particle Physics] and cosmology, would we predict that life would come about? It seems highly unlikely. It’s not easy to go from the Core Theory to something as basic as the periodic table of the elements, much less all the way to organic chemistry and ultimately to life. Sometimes the question is relatively simple – if the vacuum energy were enormously larger, we wouldn’t be here. But when it comes to most of the numbers characterizing physics and astronomy, it’s very hard to say what would happen were they to take on other values. There’s little doubt that the universe would look quite different, but we don’t know whether it would be hospitable to biology. ... Life is a complex system of interlocking chemical reactions, driven by feedback and free energy. Here on Earth, it has taken a particular form, making use of the wonderful flexibility of carbon-based chemistry. Who is to say what other forms analogous complex systems might take? Fred Hoyle... wrote a science-fiction novel called The Black Cloud, in which the Earth is menaced by an immense, living, intelligent cloud of interstellar gas. Robert Forward, another scientist with a science-fictional bent, wrote Dragon’s Egg, about microscopic life-forms that live on the surface of a neutron star. Perhaps a trillion trillion years from now, long after the last star has winked out, the dark galaxy will be populated by diaphanous beings floating in the low-intensity light given off by radiating black holes, with the analogue of heartbeats that last a million years. Any one possibility seems remote, but we know of a number of physical systems that naturally develop complex behavior as entropy increases over time; it’s not at all hard to imagine that life could develop in unexpected places.” (The Big Picture, p.305)
  

Cosmologist Will Kinney made a similar point: “[This argument] is based on a very narrow concept of “life” and what physical conditions make life possible. We currently know of only one example of an environment containing life, which introduces a massive bias in how we conceive of what makes life possible. ... The problem is that we have no idea how common life is in our own universe, much less how common it is likely to be in a multiverse with a wide distribution of physical laws. We have one example of one planet with life, and the logic of the anthropic principle requires the extrapolation that any life must be more or less similar to us. We are biochemical machines, built out of atoms, with a structure determined primarily by electromagnetism. ... But is “life like us” the only possibility for how one might build life? There is no particular reason to think so. Natural selection does not contain biochemistry as a fundamental assumption. Any system with sufficient complexity to self-replicate is sufficient. Furthermore, natural selection tells us that life will optimize itself to suit the environment in which it evolves. We are highly specialized to survive on a little ball of rock with liquid water because we evolved on a little ball of rock with liquid water. Life that evolves in radically different circumstances will be specialized to survive in those circumstances and might not look anything even remotely like us. We simply have no data on the environments in which other life-forms can (or do) exist. One thing that is being realized in modern biology is that even our narrow, carbon-based definition of life is astoundingly flexible; so-called extremophile life exists in environments so harsh that they were previously assumed to be lifeless. The one lesson that we are learning over and over is that life is more adaptable as well as ubiquitous than we ever previously expected. ... Perhaps there are organisms composed only of dark matter or black holes, based on gravity instead of electromagnetism. Why not? (Physicist Lisa Randall, for example, has proposed the possibility of complex “shadow” life existing in a “dark sector” that does not interact directly with normal matter.) Such life-forms could populate a universe that a naive application of the anthropic principle would rule out as “suitable for life.” The known unknowns are many; the unknown unknowns are limitless.” (An Infinity of Worlds, Ch.7)

In his recent (2023) book, cosmologist Lawrence Krauss echoed this idea: "I should state at the outset that there isn’t a shred of evidence to suggest design in nature, and there is plenty of evidence against it. ... It is true that if the parameters of the universe were different, we might not be here, but since we have no idea what the complete set of possibilities are for life, especially what the possibilities would be if the laws of physics were slightly different, then who are we to say that there wouldn’t be some different kind of life that could arise in such a universe? A black cloud perhaps? And I expect in such a universe these lifeforms would be wondering why their universe was so fine-tuned for their existence! The point is that the universe isn’t fine-tuned for life. ... Hoyle’s black cloud, and the considerations that stem from it, should give us some cosmic humility. Not only are we not likely to be special gifts of creation, but the possible existence of lifeforms with nothing in common with us lends further incredulity to the incredibly non-humble suggestion that the universe was made for us." (The Known Unknowns)


In the article Evidence Against Fine Tuning for Life, cosmologist Don N. Page argues that the fraction of baryons that condense gravitationally into structures large enough to allow for the development of life depends on the cosmological constant in such a way that this fraction monotonically decreases for all positive values of the constant. Thus, Page concludes, the observed value of the cosmological constant is not optimal for the evolution of life – any smaller positive number would be better. He estimates that a small negative value would in fact be optimal. Consequently, our universe is not fine-tuned for life.

The relation between the cosmological constant and baryon condensation is more subtle than Page takes it to be, and there are other reasons, which Page himself mentions, why this conclusion might not hold. It could be, for example, that an unknown constraint prevents varying the cosmological constant independently without also altering other constants. Or the fraction of condensed baryons might not be monotonically related to the probability of forming life; though this relation seems plausible, it is an additional assumption. (Hossenfelder, 2011)

In a science article titled Life Beyond our Universe, the author wrote: "One physical parameter that does appear to be extremely finely tuned is the cosmological constant – a measure of the pressure exerted by empty space, which causes the universe to expand or contract. When the constant is positive, space expands, when negative, the universe collapses on itself. In our universe, the cosmological constant is positive but very small – any larger value would cause the universe to expand too rapidly for galaxies to form. However, Wise and his colleagues have shown that it is theoretically possible that changes in primordial cosmological density perturbations could compensate at least for small changes to the value of the cosmological constant."

In the paper Finetuning and Naturalness in the Foundations of Physics (p.15), Sabine Hossenfelder wrote: "I have argued here that the popularity of arguments from naturalness and finetuning in the foundations of physics is problematic. These arguments are not mathematically well defined because they refer to probabilities without a probability distribution." (Note: In one section, Sabine says that she will not be discussing the notion of fine-tuning for life, since that is based on an “entirely different logic”. However, this paper details how the two senses of fine-tuning are related.)

Elsewhere, Sabine commented specifically about the fine-tuning for life:

“One then needs to show, though, that the values we observe are indeed the only ones (or at least very probable) if one requires that life is possible. And this is highly controversial. The typical claim... goes like this: if parameter x was just a little larger or smaller, we wouldn’t exist. ... The problem with such arguments is that small variations in one out of two dozen parameters ignore the bulk of possible combinations. You’d really have to consider independent modifications of all parameters to be able to conclude there is only one combination supportive of life. But this is not presently a feasible calculation. Though right now we cannot scan the whole parameter space to find out which combinations might support life, we can try at least a few. This has been done and is the reason we now think there is more than one combination of parameters that will create a universe hospitable to life. For example in... 2013, Abraham Loeb, of Harvard, argued that a primitive form of life might have been possible in the early universe.
...
However, the anthropic principle might still work for some parameter if that parameter’s effect is almost independent of what the other parameters do. That is, even if we cannot use the anthropic principle to explain the values of all parameters because we know there are other combinations allowing for the preconditions of life, certain parameters might need to have the same value in all cases because their effect is universal. ... Still, if we want to derive a probability rather than a constraint, we need a probability distribution for the possible theories, and that distribution can’t come from the theories themselves – it requires additional mathematical structure, a metatheory that tells you how probable each theory is. It’s the same problem that occurs with naturalness: the attempt to get rid of human choice just moves the choice elsewhere. And in both cases, one has to add unnecessary assumptions – in this case, the probability distribution – that could be avoided by just using a simple fixed parameter.” (Lost In Math: How Beauty Leads Physics Astray)

Prominent cosmologist Laura Mersini-Houghton also reached the conclusion that fine-tuning likely does not exist, as she explained in her book Before the Big Bang:

"Later on, Fred [Adams] and I and two other colleagues, the pioneering physicist Stephon Alexander at Brown University and Evan Grohs, then at the University of Michigan, expanded our investigation into the merits of the anthropic arguments. We inquired whether the fine-tuned values of the constants of nature observed in our universe provided the only possible conditions for life. For life to arise, we need a handful of requirements, including a certain amount of complexity of at least 10^15 particles in the universe and long-lived stars that act as factories to produce heavy elements from lighter elements under the gravitational pressure inside the star’s core.

The result of our investigation surprised even us: Habitable universes could exist even if we made the strength of gravity a lot weaker or a lot stronger and even if we changed other constants of nature (like the constant that controls the strength of electromagnetism) by many millions of times from their known values. We concluded that the constants of nature in our universe are not specially selected to allow for habitation. Even worse for the anthropic argument, we found that our universe seemed only borderline habitable based on the anthropic selection rules. There were many other possible universes with very different constants of nature from ours that would be more likely to allow life to arise. These findings made me even more convinced that the application of the anthropic principle was like throwing in the towel on science."

Here are some other papers that challenge the claim that the universe is finely-tuned for life:

"This paper develops constraints on the values of the fundamental constants that allow universes to be habitable. ... The results indicate that viable universes  with working stars and habitable planets  can exist within a parameter space where the structure constants α and αG vary by several orders of magnitude." (Adams, 2016)

"Motivated by the possible existence of other universes, with possible variations in the laws of physics, this paper explores the parameter space of fundamental constants that allows for the existence of stars. ... As a result, the set of parameters necessary to support stars are not particularly rare." (Adams, 2008)

"Many previous authors have noted that stars in our universe would have difficulty producing carbon and other heavy elements in the absence of the well-known 12C resonance at 7.6 MeV... Using both a semi-analytic stellar structure model as well as a state-of-the-art stellar evolution code, we find that viable stellar configurations that produce beryllium exist over a wide range of parameter space. Finally, we demonstrate that carbon can be produced during later evolutionary stages." (Grohs, 2016)

"Motivated by the possibility that different versions of the laws of physics could be realized within other universes, this paper delineates the galactic parameters that allow for habitable planets and revisits constraints on the amplitude Q of the primordial density fluctuations. ... However, the outer parts of the galaxy always allow for habitable planets, so that the value of Q does not have a well-defined upper limit. Moreover, some Galactic Habitable Zones are large enough to support more potentially habitable planets than the galaxies found in our universe. These results suggest that the possibilities for habitability in other universes are somewhat more favorable and far more diverse than previously imagined." (Bloch, 2015)

"Motivated by the possibility that the laws of physics could be different in other regions of space-time, we consider nuclear processes in universes where the weak interaction is either stronger or weaker than observed. ... Although stars in these universes are somewhat different, they have comparable surface temperatures, luminosities, radii, and lifetimes, so that a wide range of such universes remain potentially habitable." (Howe, 2018)

"Both fundamental constants that describe the laws of physics and cosmological parameters that determine the cosmic properties must fall within a range of values in order for the universe to develop astrophysical structures and ultimately support life. This paper reviews current constraints on these quantities. ... We consider specific instances of possible fine-tuning in stars, including the triple alpha reaction that produces carbon, as well as the effects of unstable deuterium and stable diprotons. For all of these issues, viable universes exist over a range of parameter space, which is delineated herein. Finally, for universes with significantly different parameters, new types of astrophysical processes can generate energy and support habitability." (Adams, 2019)

"In work recently featured in a cover story in Scientific American, MIT physics professor Robert Jaffe, former MIT postdoc, Alejandro Jenkins, and recent MIT graduate Itamar Kimchi showed that universes quite different from ours still have elements similar to carbon, hydrogen, and oxygen, and could therefore evolve life forms quite similar to us. Even when the masses of the elementary particles are dramatically altered, life may find a way. “You could change them by significant amounts without eliminating the possibility of organic chemistry in the universe,” says Jenkins." (Trafton, 2010)

Philosophical Objections

What sense does it make to say that the constants that define our universe are improbable? We don’t have billions of universes to evaluate, some designed and some natural, so that we have some probabilistic framework in which to place our own universe and evaluate its likeliness. There are no antecedent conditions that could determine such a probability. Hence, if the universe is the ultimate brute fact, it is neither likely nor unlikely, neither probable nor improbable; it simply is. If we were in a position to witness the birth of many worlds – some designed, some undesigned – then we might be in a position to say of any particular world that it had such-and-such a probability of existing undesigned. But we simply are not in such a position. We have absolutely no empirical basis for assigning probabilities to ultimate facts. Therefore, imagining that we can evaluate the likelihood of our own poorly understood universe makes no sense. You say our universe looks designed? Compared to what?

We must say that the values of the constants are neither probable nor improbable; they just are. In that case the only rational expectation of the values of the constants is that they will be whatever we find them to be. The universe and its laws could be brute, inexplicable facts. There is nothing absurd about this conclusion. (Seidensticker 2014 & 2017)

Also, the argument can be turned on the apologist, as philosopher Keith Parsons pointed out. If we are so improbably lucky to be in a life-permitting universe that there must have been a supernatural Fine Tuner (per the apologist's reasoning), then by recursively applying that reasoning to the Fine Tuner, the fine-tuning problem falls back on the apologist. That God would happen to prefer these specific fundamental laws, out of all the preferences God could have had consistent with God's desires, is itself improbable. Moreover, there is a myriad of conceivable supernatural beings. Apologists must marvel at our good fortune to have one who wanted humans (rather than any of the infinite number of other logically possible intelligent life forms) and had the power to fine-tune the universe so that we’re here to seek out this Creator.

Indeed, it seems that any one (or more) of an indefinitely numerous set of supernatural entities could have been the ultimate existent(s) instead of the theistic God. Maybe, for instance, there could have been Platonic ideas, or a neo-Platonic One, or a being totally indifferent to created beings, or, tragically, a lonely god who yearns for companionship but does not have the power to create.

The upshot is that if it is possible to have rational expectations about which of a range of possible worlds is likely to be actualized, where do we stop? If someone insists that we are very, very lucky – impossibly lucky – to have a universe as “life permitting” as the one we inhabit, and therefore there must have been a supernatural fine-tuner to set things up, don’t we have to ask why that same reasoning should not apply to putative supernatural beings? Why is it, that of all the ultimate, uncaused supernatural beings that might have existed, we were so impossibly lucky as to get one that was a personal being who, amazingly, just happened to want creatures like us and also had the power to do the fine tuning? Instead of solving the fine-tuning problem, doesn’t the hypothesis of theism merely set it back a step? Instead of a finely-tuned universe we seem to need a finely-tuned God. If the former is wildly unlikely, then why not the latter? If the universe is rationally unexpected, then why not God? (Parsons, 2010)

Counter-apologist Jeffrey Lowder mentioned another problem with the fine-tuning argument: "Varying the Constants but Fixing the Physics: [The religious apologetic] argument depends upon counting the number of possible universes with different values for the anthropic constants but with the same laws of physics. But why restrict the set of possible universes to only those with the same laws of physics? Why not also include possible universes with different physics? Bradley Monton makes this point extremely well; it's worth quoting him at length:

'The general point is as follows: when faced with the fine-tuning evidence, it is reasonable to not be surprised. We already knew that there are many possible universes that are not life-permitting, and yet are similar in certain ways to our actual universe. The fine-tuning argument encourages us to focus our attention on those possible universes that have the same laws of physics as ours, but different fundamental constants. But why not focus on those possible universes that have the same types of particles as ours, but different fundamental laws? Or why not focus on those possible universes that have the same density distribution as ours, but different types of particles?' [Monton, 2006]

The upshot is that if our goal is to count the relative frequency of life-permitting universes among all possible universes, then we have to consider all possible universes, not just those with the same [constants]. Since [apologists] have [not] done that, it follows that their defense of this crucial premise (and hence their design argument as a whole) is, at best, incomplete."

See Aron Lucas, 2015 for a robust defense of this objection.

Another interesting philosophical objection was presented by Sean Carroll:

“The physical parameters of our universe govern what can happen according to the laws of physics. But under theism, “life” is generally something other than a simple manifestation of the laws of physics. Theists tend to be non-physicalists; they believe that living organisms are more than simply the collective behavior of their physical parts. There is a spirit, soul, or life-force that is the most important part of what life really is. The physical aspects may be important, but they are not at the heart of what we mean by “life.” And if that’s true, it’s unclear why we should care about fine-tuning of physical features of the universe at all. The physical world could behave in any way it pleases; God could still create “life,” and associate it with different collections of matter in whatever way he might choose. The requirement that our physical situation be compatible with complex networks of chemical reactions that perpetuate themselves and feed off of free energy in the way we usually associate with living organisms is only relevant if naturalism is true. If anything, the fact that our universe does allow for these physical configurations should be taken to increase our credence for naturalism at the expense of theism.” (The Big Picture, p.310)

I suppose a physicalist Christian (e.g., Peter van Inwagen or Lynne Rudder Baker) isn't going to be persuaded by Dr. Carroll's point, since such a Christian doesn't think that human minds are immaterial. However, if the Christian is convinced that God is omnipotent, she should at least be open to the idea that God could have created immaterial human minds (there is nothing unintelligible or incoherent about that). And if God could have created immaterial minds, there would be no need to fine-tune the physical world for life, which implies that theism doesn't predict fine-tuning (and that fine-tuning therefore isn't evidence favoring theism).

Philosopher Alex Malpass defended an objection similar to Dr. Carroll's in his article Fine-tuning and Consciousness.

The Sharpshooter Analogy

An amusing strategy that apologists often adopt is to appeal to intuition by using bad analogies. Consider, for example, Hugh Ross's use of the sharpshooter analogy. In the example, a prisoner is to be executed by a firing squad consisting of 100 sharpshooters, but although they all fire their guns he fails to get shot. Two hypotheses are put forward to explain the remarkable event. One of them, which is supposed to correspond to naturalism, is that all 100 sharpshooters missed by sheer accident. The other hypothesis, which is supposed to correspond to supernaturalism, is that there was a plot to prevent the execution. Naturally, the plot hypothesis is more reasonable than the accidental-miss hypothesis, which is supposed to show that supernaturalism is more reasonable than naturalism. I find this to be a very bad analogy to the case of the universe's physical constants. There is nothing in the case of the universe that corresponds to a scheduled execution by firing squad. We know perfectly well how firing squads operate, based on how they have operated in the past. We know that if they are intent on doing their job, then they simply do NOT all miss! But there is no corresponding information about the process by which universes might acquire their physical constants. We would need some background knowledge about that process and about how likely its various outcomes are, but we simply do not have any such information, and that in turn destroys the analogy. Those who put forward such bad analogies are simply showing their confusion about the issue at hand. (Drange, 1998)

Keith Parsons makes a similar point about our lack of background knowledge in this context:

"The alleged improbability of the cosmic "coincidences" is often illustrated by comparing it to extremely improbable mundane situations – such as every gun jamming at once in a firing squad. But such analogies are not apt. We have prior experience of rifles and their performance and, on the basis of that experience, we know how unlikely it is that, say, ten of them would simultaneously jam. We have no similar experiences that would justify such an inference about the cosmic "coincidences."... Once this disanalogy is recognized, the "fine tuning" argument loses all of its intuitive appeal." (Is There a Case for Christian Theism?, p.182)

Since the argument is an argument from analogy, the likelihood of the conclusion turns on the degree of similarity between the two things compared in the premises. Unfortunately, as I explained, the degree of similarity between the firing squad and the constants is too low to warrant a confident inference to the intelligent design of the latter.

Different expositions spell out the form of arguments from analogy in slightly different ways, but here's a standard one:


1. X is similar to Y in respects A, B, C.

2. X also has feature D.
------------------------------
3. So, probably, Y has feature D as well.

Two salient criteria of good arguments from analogy are that the analogues (X and Y) have a large number of similarities, and that such similarities are relevant to the target attribute in the conclusion. Thus, relevant similarities strengthen the case for the conclusion; relevant dissimilarities diminish it.

Ideally, you want the analogues to bear an exact similarity in observed attributes (e.g., two patients have exactly similar symptoms). But if the two are relevantly dissimilar in even just one or two respects (e.g., one patient has a loss of appetite, while the other does not), then the probability that they're similar in the target unobserved respect (e.g., the patients have the same illness) goes down – sometimes precipitously. (Leon, 2006)

Cyclic Models

Another way to get around the problem is to postulate that the universe is somehow cyclic and that in each cycle it has different constants of nature. In the article A Cyclic Universe Approach to Fine-Tuning, Sam Cormack et al. proposed a "closed bouncing universe model... where the values of the couplings vary randomly from one cycle to the next", which serves as a viable "alternative cosmological model to anthropic arguments... for explaining the values of the coupling constants of the Standard Model". Even though the authors used String Theory to vary the constants, they note that the "mechanism [merely] stands alone as an illustration of how to implement random changes in couplings in a bounce universe."

In the paper Can Universe Experience Many Cycles with Different Vacua?, Yun-Song Piao wrote: “We show in an operable model of cyclic universe that the universe can experience many cycles with different vacua, which is a generic behavior independent of the details of the model.”

Nobel Prize Winner Steven Weinberg also made this suggestion: "One very simple possibility proposed by Hoyle is that the constants of nature vary from region to region, so that each region of the universe is a sort of subuniverse. The same sort of interpretation of multiple universes might be possible if what we usually call the constants of nature were different in different epochs of the history of the universe." (Dreams of a Final Theory, p.209)

According to philosopher of physics Quentin Smith, this proposal is quite old and can be traced back to John Wheeler: "John Wheeler sweeps away these objections to an infinitely oscillating universe by supposing that at the end of each contracting phase all the constants and laws of that cycle disappear and the universe is “reprocessed probabilistically” (Misner, Thorne and Wheeler 1973, p. 1214) so as to acquire new constants and laws in the next cycle." (Smith, 1988)

The Universe is Inhospitable for Life

Even if all the scientific and philosophical objections mentioned above fail, the fine-tuning argument might still not establish design, as the counter-apologist Richard Carrier pointed out. This is so simply because our universe is 99.99999 percent composed of lethal radiation-filled vacuum, and 99.99999 percent of all the material in the universe comprises stars and black holes on which nothing can ever live, and 99.99999 percent of all other material in the universe (all planets, moons, clouds, asteroids) is barren of life or even outright inhospitable to life. In other words, the universe we observe is extraordinarily inhospitable to life. Even what tiny inconsequential bits of it are at all hospitable are extremely inefficient at producing life – at all, but far more so intelligent life. One way or another, a universe perfectly designed for life would easily, readily, and abundantly produce and sustain life. Most of the contents of that universe would be conducive to life or benefit life. Yet that’s not what we see. Instead, almost the entire universe is lethal to life – in fact, if we put all the lethal vacuum of outer space swamped with deadly radiation into an area the size of a house, you would never find the comparably microscopic speck of area that sustains life (it would literally be smaller than a single proton). It’s exceedingly difficult to imagine a universe less conducive to life than that – indeed, that’s about as close to being completely incapable of producing life as any random universe can be expected to be, other than of course being completely incapable of producing life.
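The arithmetic behind this point is simple: the hospitable share of each successive layer multiplies. Here is a rough sketch using the rhetorical round numbers quoted above; they are illustrative figures from the text, not measured values.

```python
# Back-of-the-envelope combination of the hospitable fractions quoted above.
# The 99.99999% figures are the text's rhetorical round numbers, not measured
# values; the point is only that the hospitable fractions of successive
# "layers" multiply together.

lethal_share = 0.9999999
hospitable_per_layer = 1 - lethal_share   # roughly 1e-7

# Layer 1: not lethal vacuum; layer 2: not stars or black holes;
# layer 3: not barren planets, moons, clouds, or asteroids.
combined_hospitable = hospitable_per_layer ** 3

print(f"Combined hospitable share of the universe: about {combined_hospitable:.0e}")
# -> about 1e-21 under these illustrative figures
```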

And yet that is exactly what we would have to see if life arose by accident, because life can arise by accident only in a universe that large and old. The fact that we observe exactly what the theory of accidental origin predicts is evidence favoring naturalism.

Without a designer, life can exist only by chemical accident; such an accident will be exceedingly rare, and exceedingly rare things commonly happen only in vast universes where countless tries are made over vast spans of time. Likewise, a universe not designed for us will not look well suited to us but will be almost entirely unsuited to us, and we will survive only in a few tiny chance pockets of survivable space within it. Naturalism thus predicts several bizarre features of the universe (its vast size, age, and lethality to life), whereas we cannot deduce any of those features from the design hypothesis.

The extraordinary lethality, size, and age of the universe are all what a universe will almost certainly look like if it is a product of chance, and not what a universe would likely look like if it were designed for us (or for any life whatever). If the evidence points to chance rather than intentional design as what caused this universe to contain life, then fine-tuning is evidence for chance and against design. Naturalism is therefore supported by the evidence, and it requires far fewer assumptions beyond already established science than theism does. Its probability relative to the theistic alternative is therefore increased.

William Lane Craig thinks this objection is irrelevant: the universe's inhospitability, he says, doesn't change the fact (assuming it is a fact) that only a small range of the constants is life-permitting and that we exist within that small range.

In response to this objection, counter-apologist Jeff Lowder observed that while the universe’s hostility to life is compatible with the fact that the universe is life-permitting, it decreases the probability of it being created for us.

Craig’s version of the fine-tuning argument is a textbook example of how the distinction between deductive and inductive arguments can mask uncertainty. His argument is a valid (deductive) argument: the conclusion has to be true if both of the premises are true. The fact that the conclusion follows from the premises, however, tells us nothing about the probability of the conclusion, which Craig wrongly ignores. He seems to think that all that matters is whether the premises are more probable than not. In his words:

"This is a logically valid argument. That is to say, if the two premises are true, then the conclusion necessarily follows. The only question is: are those two premises more plausibly true than false?"

But the expression, “the conclusion necessarily follows,” gives the illusion of certainty, which is not warranted if at least one premise is uncertain. For example, suppose we are certain that (1) is true–i.e., we believe the probability of (1) is 100%–but only 70% confident that (2) is true. In that case, it follows that our degree of belief in (3) should also be 70%. That is all the hostility objection needs to get going.

Assume that we are 70% confident that the life-permitting conditions of our universe are due to design. By itself, the fact that we are 70% confident that the life-permitting conditions of our universe are due to design does not justify us in being 70% confident that our universe is due to design. That would be the case only if we did not know of any other factors about our universe, besides life-permitting conditions, which were relevant to the probability that our universe was designed. But that’s false. We know much more about our universe’s habitability than the fact that it is life-permitting. We also know that the vast majority of our universe is hostile to life. Given that our universe is life-permitting, the fact that so much of our universe is hostile to life is more probable on the assumption that naturalism is true than on the assumption that theism is true. Once the evidence about our universe’s habitability is fully stated, it’s far from obvious that it favors theism.
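Lowder's point can be put in simple Bayesian terms. The sketch below uses invented likelihood ratios purely to show the structure of the reasoning: the life-permitting evidence may favor design, but the hostility evidence, weighed after it, pushes the other way, and the net effect depends on both.

```python
# Minimal Bayesian sketch of the hostility objection. All numbers are
# illustrative assumptions, not values defended in the text; only the
# structure matters: E2 ("almost all of the universe is hostile to life")
# is weighed after conditioning on E1 ("the universe is life-permitting").

def posterior_odds(prior_odds, *likelihood_ratios):
    """Multiply prior odds (theism : naturalism) by each likelihood ratio
    P(evidence | theism) / P(evidence | naturalism)."""
    odds = prior_odds
    for ratio in likelihood_ratios:
        odds *= ratio
    return odds

prior = 1.0               # neutral prior odds (assumption)
lr_life_permitting = 3.0  # suppose E1 favors theism threefold (assumption)
lr_mostly_hostile = 0.2   # suppose E2, given E1, favors naturalism fivefold (assumption)

odds = posterior_odds(prior, lr_life_permitting, lr_mostly_hostile)
print(f"Posterior odds (theism : naturalism) = {odds:.2f}")
# -> 0.60: with these toy numbers, the total evidence no longer favors theism
```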

The Multiverse Hypothesis and Chance

An effective way of attacking the fallacy of fine-tuning is to bring up the multiverse hypothesis. It may be that there are many regions with different constants. If such regions exist, then there is nothing remarkable in the fact that we observe the particular physical constants that we do, since we could only find ourselves in a region that permits life.

As was mentioned before, it is not clear that analogies are appropriate when talking about the origin of the universe's constants. However, let's suppose, arguendo, that this critique is invalid. Imagine a very large planet (say, the same size as Jupiter), and suppose it is inhabited by only one human being. Suppose further that just one relatively small asteroid (say, the size of a basketball) is heading towards this planet. We would think this person is very, very, very unlucky if the small asteroid were to hit his head exactly, rather than hitting any of the other billions of points on that planet.

Now let's change the scenario a little bit. Suppose that instead of one asteroid, there were millions and millions of them heading towards this planet to bombard it from all sides. Now it wouldn't seem that surprising if one asteroid eventually found its way and hit the guy's head. A similar logic could be applied in the case of the multiverse and the values of the constants of nature.
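A toy calculation makes the analogy explicit. With illustrative numbers (the per-asteroid hit probability below is made up), the chance of at least one hit grows rapidly with the number of independent tries:

```python
# Toy version of the asteroid analogy. p_single is a made-up probability that
# any one asteroid strikes the planet's lone inhabitant; with N independent
# asteroids, the chance that at least one scores a hit is 1 - (1 - p)^N.
# Analogously, one universe with life-permitting constants would be surprising,
# but across enough universes with varying constants, some are to be expected.

p_single = 1e-9   # illustrative per-asteroid hit probability

for n in (1, 10**6, 10**9, 10**10):
    p_at_least_one = 1 - (1 - p_single) ** n
    print(f"N = {n:>14,}: P(at least one hit) = {p_at_least_one:.3g}")
```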


Apologists might respond that the design hypothesis is simpler than the multiverse hypothesis. For on the naturalistic hypothesis, the explanation of the apparent fine-tuning of our universe requires that there are lots and lots of other universes – perhaps infinitely many. By contrast, the design hypothesis explains the apparent fine-tuning of our universe in terms of just a single entity: the god of traditional theism. Thus, even granting that theism leaves unexplained and brute at least some order (God's intellect and will), it is a much more economical/parsimonious explanation of the data of apparent fine-tuning. (Leon, 2006)

The inflationary multiverse hypothesis does not really imply the existence of a multiplicity of "universes" in the absolute sense (viz., spatio-temporal manifolds totally disconnected/isolated from each other). Rather, it entails a single spacetime manifold in which many bubbles (or inflationary regions) are constantly appearing and growing. But those bubbles are not really spatially disconnected from each other, as Prof. Alex Vilenkin explained: "There are now popular models [that postulate] many universes, but what we mean here are different [and] very remote regions of the same spacetime." Cosmologist Sean Carroll confirmed this point: "In inflationary cosmology, however, these different regions can be relatively self-contained – 'pocket universes,' as Alan Guth calls them... So there is a good reason to think about them as separate universes, even if they're all part of the same underlying spacetime." If that is not clear enough, Dr. Carroll made it more clear in one of his books: “One way in which spacetime might fluctuate was studied in the 1990s by Edward Farhi, Alan Guth, and Jemal Guven. They suggested that spacetime could not only bend and stretch, as in ordinary classical general relativity, but also split into multiple pieces. In particular, a tiny bit of space could branch off from a larger universe and go its own way. The separate bit of space is, naturally, known as a baby universe. (In contrast to the “pocket universes” [of the multiverse of eternal inflation] mentioned in the last chapter, which remained connected to the background spacetime.)” (From Eternity to Here, p.581)

The apologist might reply: "While it is true that in this hypothesis there is only one space-time manifold, you're postulating a larger manifold with more entities, i.e., more regions of matter with different constants. Therefore, the argument still applies." Fair enough. Fortunately for the non-theist, there are other problems with this apologetic argument.

(1) First of all, it is not clear that parsimony (as understood by most scientists) refers to the quantity of posited entities in a theory rather than to the number of assumptions, as the scientist Victor Stenger pointed out: “Some [apologists] have objected to the multiverse hypothesis as being... nonparsimonious, because [it] violates Ockham's razor. But these objections are not legitimate. ... Ockham (ca. 1285–1349)... is frequently quoted as having said, ‘Entities should not be posited without necessity.’ However, this statement does not appear in his works, and the notion that we should always seek the simplest explanation probably goes back to an earlier time. In science, Ockham's razor is usually interpreted to mean that a theory should not contain any more premises than are required to fit the data. It does not mean that the number of objects in the theory must be minimum. The atomic theory of matter introduced a trillion-trillion more objects in a gram of matter than the theories that considered matter in bulk, yet it was more parsimonious with fewer premises and agreed better with the data.” (Stenger, 2012) Following the same line of reasoning, scientist Sean Carroll stated: “I judge simplicity [of the multiverse picture] by the number of ideas and concepts, not by the number of universes.” (Stewart, 2016, p.95)

(2) The apologist mistakenly assumes that there is only one kind of theoretical parsimony, viz., quantitative parsimony (i.e., the explanation postulates fewer entities). However, as David Lewis has taught us, another type is qualitative parsimony (i.e., the explanation postulates fewer kinds of entities). As philosopher Alan Baker explained: "One further distinction should be mentioned. This distinction is between qualitative parsimony (roughly, the number of types (or kinds) of thing postulated) and quantitative parsimony (roughly, the number of individual things postulated). The default reading of Occam's Razor in the bulk of the philosophical literature is as a principle of qualitative parsimony. Thus Cartesian dualism, for example, is less qualitatively parsimonious than materialism because it is committed to two broad kinds of entity (mental and physical) rather than one." (Simplicity, Stanford Encyclopedia of Philosophy) Or as philosophers Trent Dougherty and Logan Gage observed: “David Lewis and others dismiss [quantitative parsimony] on explanation. Is the hypothesis that a particular human brain contains x number of brain cells really automatically superior to the hypothesis that it contains x+1 cells?” (The Routledge Handbook of Contemporary Philosophy of Religion, p.57) And while the design hypothesis is a much more quantitatively parsimonious explanation of the data (it explains all of the data in terms of just one entity, viz., a god), the many-universes hypothesis is a more qualitatively parsimonious explanation of the data (since it explains all of the data solely in terms of one kind of entity, viz., physical objects). (Leon, 2006) For example, a reasonable scientist investigating a supposedly haunted house might first try to explain the observed phenomena (strange sounds, etc.) by appealing to multiple physical explanations before conjuring up a single, but non-physical, kind of substance, e.g., a ghost or demon. As philosopher Walter Sinnott-Armstrong observed: "Craig rejects this multiple-cosmoi hypothesis as 'arguably inferior to the design hypothesis, because the design hypothesis is simpler.' However, the multiple-cosmoi hypothesis postulates more tokens of the same type (Big Bangs), whereas the design hypothesis postulates a wholly new type of thing (God). What matters is new types, not new tokens. To see this, compare a scientist who postulates a wholly new type of element when the evidence can be explained just as well by postulating only new samples of the same old types of elements. This scientist's new-element hypothesis would and should be rejected as less simple than the old-elements hypothesis. For the same reasons, the God hypothesis should be rejected as less simple than the multiple-cosmoi hypothesis." (A Debate Between a Christian and an Atheist, p.49) Philosopher Neil Manson made a similar point: “Regarding the second common objection to the multiverse hypothesis, we should be wary of measuring simplicity too simplistically... The simplicity of a hypothesis is not merely a function of the raw number of entities it posits. The Standard Model in particle physics posits a small number of types of subatomic particle, but of course there are countless tokens (instances) of each of these types. Yet the Standard Model is rightly regarded as a good scientific explanation – one perfectly in accord with Occam’s razor – because of its symmetry and because it invokes a small number of types. Depending on how it is fleshed out, the multiverse hypothesis, too, could exhibit simplicity in these regards. [In addition,] skeptics like Narveson will retort that the design hypothesis is hardly simple if it just is the hypothesis that there exists an eternal, personal being of unlimited power, knowledge, and goodness. They find such a being incomprehensibly complex.” (The Teleological Argument and Modern Science, p.19)

For critiques of this notion in the philosophy of religion (specifically Swinburne’s appeal to simplicity), see God or the Multiverse? Swinburne on Fine-Tuning, Is Simplicity an Adequate Criterion of Theory Choice?, Is Simplicity Evidence of Truth?, Simplicity and Natural Theology, and Swinburne on Simplicity as Evidence of Truth.

Another objection to the multiverse hypothesis is that we have never seen such worlds, and we have no evidence that they exist.

(1) This objection fails to see that the point of constructing these hypotheses in the first place is precisely that we have no way of directly observing the cause of the apparent fine-tuning of the fundamental constants of our universe. And it's just part of the nature of such hypotheses that they accrue probability to the extent that they can explain the range of data in question. Thus, it's not true that we have no evidence that other universes exist. Rather, the extent to which the hypothesis can explain the data just is the grounds for according it some degree of probability. And the same is true of the design hypothesis, of course – we only have reason to think that it is probable to the extent that it can explain the data of apparent fine-tuning. That's what the theory-data relationship is all about. (Leon, 2006) Indeed, according to Prof. Vilenkin, the very fact of fine-tuning is evidence of other universes. In the paper Fine-tuning as Evidence For a Multiverse, philosopher Mark Saward defended this idea against objections, and more recently, in the paper Multiple Universes and Self-Locating Evidence, Hawthorne & Sanford reached the same conclusion. Bradford Saad presented his own argument in favor of this proposition in his paper Fine-Tuning Should Make Us More Confident that Other Universes Exist.

(2) The precedent of scale increases the probability that other universes exist. We observe that there are frequently many moons around a planet, frequently many planets around a star, always many stars in a galaxy, always many galaxies in a cluster, always many clusters in the cosmos. And the cosmos appears to stretch to vast quantities of such contents. The probability that this pattern continues is therefore higher than 50/50: we should expect there to be many other universes. This would not be the case if we inhabited, say, a lone star-planet system. Therefore the actual pattern we observe increases the probability of other universes by exactly as much as observations not exhibiting that pattern would have decreased it. (Carrier, 2016)

(3) Most common models of inflation predict a multiverse, and according to Alexander Vilenkin, models that avoid it tend to be contrived and unrealistic. “With the simplest assumptions, you end up with eternal inflation and the multiverse,” says physicist Andreas Albrecht of the University of California. "It's hard to build models of inflation that don't lead to a multiverse," said Alan Guth, an MIT theoretical physicist. "It's not impossible, so I think there's still certainly research that needs to be done. But most models of inflation do lead to a multiverse, and evidence for inflation will be pushing us in the direction of taking the multiverse seriously." Guth also said, "There are ways of constructing inflation so that would not be eternal... [but] those models are pretty contrived just in terms of the dynamics that they assume... It is very hard to construct a version of inflation that would not sometimes become eternal and my view is that if it can sometimes become eternal – since eternal is forever – that just plainly makes it eternal." Other researchers agreed on the link between inflation and the multiverse. "In most of the models of inflation, if inflation is there, then the multiverse is there," said Andrei Linde, a Stanford University theoretical physicist. "It's possible to invent models of inflation that do not allow [a] multiverse, but it's difficult. Every experiment that brings better credence to inflationary theory brings us much closer to hints that the multiverse is real." Cosmologist Paul Steinhardt also commented on the subject: "Some suggest trying to construct theories of inflation that are not eternal... But eternality is a natural consequence of inflation plus quantum physics." Cosmologist George Efstathiou confirmed this: "The type of inflation [that] the Planck data says happened strongly favors flat potentials; a scalar field evolving in a flat potential. [If that's the case] then inflation is eternal. So, the Planck data says that inflation is eternal. And if inflation is eternal, then you have a multiverse. That's why I say this is one of the most important results from Planck, where we're being pushed towards, now experimentally, in the direction of a multiverse."

In addition, there may be some evidence that the constants of nature slightly vary in different regions of space: "The constants appeared to remain constant over time. But that still leaves open the possibility that the fundamental constants may be subtly different in space rather than time. ... The most common target for studying the fundamental constant is something called the 'fine structure constant.' The fine structure constant is a combination of four other fundamental constants: the speed of light, the charge on the electron, Planck’s constant, and the permittivity of free space. ... Results fit with more recent findings: that the fine structure constant varies with direction. Earlier results have shown that the fine structure is slightly different along a specific axis of the Universe, called a dipole. Now, the latest result is from a single light source along a specific direction, so it's not definitive on its own. Yet the result fits with the previous data." (Chris Lee, 2020)

Furthermore, the naturalist does not have to believe in the multiverse hypothesis. He need only present it as a naturalistic alternative that solves the fine-tuning problem just as the theistic hypothesis does. Since the apologists are making a positive claim here (i.e., that God is the only valid explanation of fine-tuning), they bear the burden of ruling out all plausible competing hypotheses.

Another claim is that the multiverse idea is not falsifiable or testable, so it is pseudoscience. In the article Physics in the Multiverse, French physicist and philosopher Aurélien Barrau responded to this claim: "At first glance, the multiverse seems to lie outside science because it cannot be observed. How, following the prescription of Karl Popper, can a theory be falsifiable if we cannot observe its predictions? This way of thinking is not really correct for the multiverse for several reasons. First, predictions can be made in the multiverse: it leads only to statistical results but this is also true for any physical theory within our universe, owing both to fundamental quantum fluctuations and to measurement uncertainties. Secondly, it has never been necessary to check all the predictions of a theory to consider it as legitimate science. General relativity, for example, has been extensively tested in the visible world and this allows us to use it within black holes even though it is not possible to go there to check."

More importantly, it is possible to test this hypothesis. An interesting way to do it is to look for supermassive black holes with a specific mass distribution, as cosmologist Alexander Vilenkin explained:

"[In] this multiverse picture... of eternal inflation all these vacuum states will be populated to have bubbles within bubbles, within bubbles. When inflation was going on in our region of space, bubbles of different vacua popped out and expanded. When we worked on this idea we thought, 'What is going to happen to these bubbles when inflation ends [in our region]?' The answer is that instead of expanding they will start contracting and they will collapse; they will form black holes. And we've calculated the mass distribution of these black holes. So, there is a very uniquely defined distribution of masses. And, for one thing, these black holes are interesting because they may explain, say, the origin of supermassive black holes that we observe in galactic centers. But also if we really detect black holes with this predicted mass distribution, that would be evidence for the multiverse, that we indeed had this period where bubbles were nucleating. So, these are basically failed bubbles, these big black holes. So, these are direct tests. If we are lucky enough, we will be able to observe these things."

Physicist Victor Stenger wrote: “Some [apologists] have objected to the multiverse hypothesis as being nonscientific, since other universes are unobservable, [however], science talks about the unobservable all the time, such as quarks... They are components of models that agree with observations. ... Furthermore, proposals have been made for the possible verification of and even possible evidence for multiple universes. The basic idea is that gravitational interaction with an outside universe might produce a detectable asymmetry in the cosmic background radiation in our universe.” (Stenger, 2012)

Apologists also accuse multiverse proponents of committing the inverse gambler’s fallacy. However, a proper response has already been given (see Friederich and Oppy, p. 216, and Leslie).

Even physicist Roger Penrose tried to poke holes in the multiverse idea. He has calculated that the odds of our universe’s low-entropy condition obtaining by chance alone are on the order of 1 in 10^(10¹²³), an inconceivable number. If our cosmos were indeed but one member of a much vaster multiverse of randomly ordered worlds, then it is vastly more probable that we should be observing a much smaller universe, since a chance fluctuation producing only a small ordered region is far less improbable than one producing the vast low-entropy universe we actually see. For comparison, the odds of our solar system forming randomly are about 1 in 10^(10⁶⁰) – a vast number, but inconceivably smaller than 10^(10¹²³).

Penrose’s argument is an extreme version of an idea originally due to Boltzmann, who near the end of the 19th century argued that the direction of time is a consequence of the increase of entropy in the future but not in the past, which requires an extremely unlikely initial state. However, this kind of reasoning is as brilliant as it is controversial (Mersini-Houghton 2008; Callender 2004, 2010; Earman 2006; Eckhardt 2006; Wallace 2010; Schiffrin and Wald 2012). More generally, the more extreme the asserted fine-tuning is, the more adventurous the underlying arguments are. Additionally, Prof. Alex Vilenkin argued that "entropy is an extensive quantity; it is proportional to the volume. And the very small universe will necessarily have a very low entropy. But also it is filled with vacuum, and that is also the lowest entropy that you can have." In an email to me, Prof. Vilenkin clarified his point: "It is true that the initial conditions required for our universe are highly unlikely if we assume that they were chosen at random. I think they were not chosen at random and that we need a theory of initial conditions. The most promising theory in my view is based on quantum cosmology, as discussed in Chapter 23 of 'Cosmology for the Curious'. This suggests that the universe originated as a small closed universe filled with a high-energy vacuum. It had a very low entropy." Physicist Nikodem Poplawski made a similar point: "Entropy was small at the early universe because there was no particle production yet. Once quantum effects in the strong gravitational field start particle production, the entropy grows."

A similar argument is based on the Boltzmann Brain problem. However, physicists have already presented various solutions (Vilenkin, 2021).

Finally, it should be noted that eternal inflation postulates an infinite number of bubble 'universes' and, as a consequence, faces a challenge called the 'measure problem'. This is a real problem, as Alan Guth and others admit. However, not all cosmologists agree that the multiverse is composed of an infinite number of universes. For example, physicists Laura Mersini-Houghton and Malcolm Perry proposed that inflation "cannot sustain an eternal replenishment of ‘free lunches’ of new universes after a time... since the background space grows highly inhomogeneous." (The End of Eternal Inflation, p.5) It is not clear whether Mersini-Houghton still accepts this conclusion or whether her colleagues agree with it, but it is interesting to point out that the idea of a finite multiverse has a precedent in the literature.

A Serious Objection to the Multiverse

In an essay titled "The Teleological Argument", apologist Robin Collins argued that a “generator of universes” doesn’t eliminate the need for fine-tuning. The analogy he uses is that of a bread machine, which must have the right structure, programs, and ingredients (flour, water, yeast, and gluten) in order to produce decent loaves of bread. Similarly, the fundamental laws of the "generator" must be just right – i.e. "fine-tuned" – in order for it to produce universes whose constants and initial conditions permit the subsequent emergence of life. Thus, according to him, invoking some sort of "generator of universes" to explain the fine-tuning of our universe merely pushes the fine-tuning up one level: it doesn’t make it go away. As Collins puts it:

"As a test case, consider the inflationary type multiverse generator. In order for it to explain the fine-tuning of the constants, it must hypothesize one or more “mechanisms” for laws that will do the following [four] things: (i) cause the expansion of a small region of space into a very large region; (ii) generate the very large amount of mass-energy needed for that region to contain matter instead of merely empty space; (iii) convert the mass-energy of inflated space to the sort of mass-energy we find in our universe; and (iv) cause sufficient variations among the constants of physics to explain their fine-tuning. In order to get the parameters of physics to vary from universe to universe, however, there must be a further physical mechanism/law to cause the variation. Currently, many argue that this mechanism/law is given by superstring theory. Although constants can vary... these fundamental laws and principles... cannot be explained as a multiverse selection effect."

My initial objection is that the “multiverse generator” creates more lifeless than habitable universes. So, isn't it more reasonable to conclude that if this generator has some purpose, it is to create universes without life? If it were designed for life, then the majority of the universes would contain living beings, not a ridiculously small percentage. Shouldn't we expect all, or at least most, universes to be habitable? Collins is essentially proposing that the machine was perfectly designed to produce fresh, brownish, crunchy loaves with a pleasant roasty aroma, fine slicing characteristics, and a soft, elastic crumb, but 99.99999999% of the time it simply burns all the ingredients to the point where they turn into a pile of black dust. Would you conclude it was designed to produce bread? I surely would not – or, at most, I would conclude it has some serious defect.

But why use a "bread machine" analogy in the first place? Why not compare it to an avalanche? In order for an avalanche to occur, several necessary conditions must be just right: gravity, a planet with the right composition, size, and pressure, some material like snow, and finally a trigger: "The recipe for an avalanche may seem simple: a mountain slope and a thick layer of snow. But Simon Trautman, an avalanche specialist ... says [that] to get an avalanche, you need a surface bed of snow, a weaker layer that can collapse, and an overlaying snow slab. The highest risk period is during and immediately after a snow storm. Underlying snowpack, overloaded by a quick deluge of snow, can cause a weak layer beneath the slab to fracture naturally. Human-triggered avalanches start when somebody walks or rides over a slab with an underlying weak layer. The weak layer collapses, causing the overlaying mass of snow to fracture and start to slide." (Avalanches Explained, National Geographic)

So many necessary conditions, and yet avalanches kill and destroy. Animals caught off-guard may be swept to their deaths; hoofed mammals inhabiting the high country, such as mountain sheep and ibex, can be particularly vulnerable. Over history, the loss to avalanches of human life and property – from utility lines to ski-resort complexes – has been significant. Snow slides in the Alps have swept away entire villages and, during World War I, thousands of troops at a time. Skiers, mountaineers and other outdoor recreationists are most at risk. 

That is not to say that accidental and mindless disasters cannot have some positive effects (for example, avalanches may build habitat and diversify the landscape sometimes). But that's the point. There is no reason to think avalanches – even though requiring many conditions in order to properly work – have a purpose behind them, and yet they can have some positive effects. The same seems to be true of the multiverse.

Alternatively, think of serious unintentional occurrences/accidents (say, plane crashes) that caused mass deaths in history. Think of all the necessary conditions that had to be in place in order for these unfortunate events to occur. Should we conclude there is design behind these events just because many conditions had to be in place? That just seems absurd.

That's one reason we should be suspicious of religious apologetic analogies. As distinguished philosopher J. J. C. Smart wrote: "With enough ingenuity one can find some analogy between almost any two things." (Atheism and Theism, p.22)

Also, the first three conditions Collins mentioned are part of Inflationary cosmology (which is the standard view today and was constructed to solve several problems of the classical Hot Big Bang theory). The expansion (or Inflation) of a small region of space, as well as the conversion of vacuum energy into matter, happens even if the multiverse prediction is wrong.

Now, Collins claimed that String Theory is necessary for the constants to vary in different bubble universes.

I sent an email to physicist Anthony Aguirre (who received his doctorate in astronomy from Harvard University and published several articles about Inflation) asking the following question: "Is string theory necessary in order for the constants of nature to vary in different "bubble" or "pocket" universes in the Inflationary Multiverse?" and he responded: "No – it’s just a mechanism believed by some for other reasons. But there are other ways to make a constant of nature depend on a field that varies in space-time, and that’s all that’s needed."

I also contacted cosmologist Sean Carroll. I asked him if String Theory is the only hypothesis that allows the constants to vary. He wrote: "You don't need string theory to get varying constants, no. But string theory is the most full-blown theory we have (which doesn't mean it's right). Other approaches are pretty ad hoc, which might be okay for our current state of knowledge."

Hence, it seems this is not yet settled. It is plausible that there are other ways to vary the constants of nature which do not require extra dimensions (e.g., the variation of constants in Lee Smolin's Cosmological Natural Selection model). (For further reading about alternative ways to vary the constants, see Anthropic Selection of Physical Constants, pp. 6-9, by Mariusz P. Dabrowski, and The Fundamental Constants and Their Variation, pp. 42-44, and Varying Constants, Gravitation and Cosmology, p. 96, both by Jean-Philippe Uzan.)
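To give a concrete (and deliberately toy) picture of what Aguirre and Carroll are describing – a "constant" that is really a function of a scalar field, so that regions where the field freezes at different values end up with different effective constants – here is a small sketch. It is my own illustration, not a model proposed by any of the physicists quoted here, and the field values and coupling are invented purely for illustration.

    import random

    ALPHA_0 = 1 / 137.036   # measured fine-structure constant in our own region

    def effective_alpha(phi, coupling=0.01):
        # Toy model: the "constant" depends on a scalar field phi that varies across
        # space-time; each bubble universe sees its own frozen value of phi.
        return ALPHA_0 * (1 + coupling * phi)

    random.seed(0)
    bubble_fields = [random.uniform(-50, 50) for _ in range(5)]   # hypothetical frozen field values
    for i, phi in enumerate(bubble_fields):
        print(f"bubble {i}: phi = {phi:+6.1f}, effective alpha = {effective_alpha(phi):.5f}")

Nothing in this picture requires extra dimensions; any mechanism that lets a field settle to different values in different regions would do, which is the point both physicists make.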

But leaving this aside, isn't it puzzling that Collins used randomness as evidence of design? Random variation is the opposite of order and fine-tuning; it is pure disorder! Why would God play dice and create a whole ensemble of universes with random variations until a suitable one appears, when he could have simply created a single universe with the desirable constants? This does not fit well with theism, but it is expected on naturalism: naturalism predicts a totally random and mindless reason why the constants are the way they are, and the multiverse is one of the few explanations that appeal to such randomness. Therefore, it is more likely on naturalism.

But Collins doesn’t stop here:

“In addition to the four factors listed, the fundamental physical laws underlying a multiverse generator must be just right in order for it to produce life-permitting universes, instead of merely dead universes. Specifically, these fundamental laws must be such as to allow the conversion of the mass-energy into material forms that allow for the sort of stable complexity needed for complex intelligent life. For example, without the Principle of Quantization, all electrons would be sucked into the atomic nuclei, and, hence atoms would be impossible; without the Pauli Exclusion Principle, electrons would occupy the lowest atomic orbit, and hence complex and varied atoms would be impossible; without a universally attractive force between all masses, such as Gravity, matter would not be able to form sufficiently large material bodies (such as planets) for life to develop or for long-lived stable energy sources such as stars to exist.
....
In sum, even if an inflationary-superstring multiverse generator exists, it must have just the right combination of laws and fields for the production of life-permitting universes: if one of the components were missing or different, such as Einstein’s equation or the Pauli Exclusion Principle, it is unlikely that any life-permitting universes could be produced. Consequently, at most, this highly speculative scenario would explain the fine-tuning of the constants of physics, but at the cost of postulating additional fine-tuning of the laws of nature.”

Collins is essentially saying that all the right combinations of properties – e.g., gravity, a planet with the right composition, size, pressure, some material like snow, a trigger, a mountain slope, a weaker layer that can collapse, and an overlaying snow slab – must be present so that the avalanche can kill animals, skiers, mountaineers, and outdoor recreationists and destroy utility lines and ski-resort complexes. If we remove any of these properties, then no sufficiently deadly avalanche would occur. Does that mean these properties exist for the sake of avalanches, just because they allow avalanches to form? What kind of logic is that? There is simply no reason to think that the conditions which allow something to exist must have that something as their purpose. This is a non sequitur.

But if that's not enough to convince you of his flawed logic, the following should be sufficient.

(1) It seems Collins is suggesting that only the constants can vary in different inflationary regions, but that's simply not true, as Dr. Aguirre observed: "If something like string theory is true, then so-called fundamental constants like α could vary. Moreover, it may go beyond the constants. Electromagnetism, for example, is part of the standard model of particle physics and is distinct from the other forces (weak and strong) at low enough energy scales. Like the fundamental constants, the properties and even existence of various particles in string theory are governed by the compact dimensions. We can then imagine a different geometry for these tiny dimensions that corresponds to a modified particle physics model without electromagnetism... it would have no light at all." (Cosmological Koans, p.207)

(2) Moreover, Collins is probably wrong about his examples, as counter-apologist Richard Carrier pointed out: "The Pauli-exclusion principle, for example, can't even be 'finely' tuned, since it only has two values, on and off, and if it can be tuned at all, it is tuned by chaotic inflation ... Likewise Einstein’s equation, is just an assembly of random facts (which forces exist, what dimensions they propagate in, what the terminal velocity is which is probably entailed by the quantum of time and space selected). So there isn’t any sense in which these things cannot be generated at random. Other random assemblies (we need only consider ones logically possible), produce similar or the same effects vis-a-vis Chaotic Inflation’s budding into new universes. There is just no sense in which anything is required but a very small selection of very simple physical laws (which are just descriptions of very simple physical facts, far simpler than gods, and vastly simpler than any fine-tuning required)." (Carrier, 2006 & 2018) To confirm Dr. Carrier's claims, I asked physicist Sergey Rubin whether "laws and principles (e.g., the Pauli Exclusion Principle) that govern atomic interactions could change in different regions of the multiverse." He replied, "Yes, but most likely there are no atoms in such universes. ... [A] huge amount of universes with absolutely different properties can be produced."

(3) But even if this is wrong and the principles Collins mentioned would not vary in String Theory, the problem is that we simply do not know whether String Theory is right. It seems equally plausible that in the final Theory of Everything, the overwhelming majority of fundamental principles will indeed vary in different 'universes' – except for a very simple and small set of principles required for the production of different inflationary regions.

In addition, there is some evidence suggesting that life could still exist without some fundamental physics. For example, in 2006, Roni Harnik, Graham Kribs, and Gilad Perez constructed a model universe without any weak nuclear interactions. They find that this universe undergoes big bang nucleosynthesis, matter domination, structure formation, and star formation. Stars burn for billions of years, synthesizing elements up to iron and undergoing supernova explosions, dispersing heavy elements into the interstellar medium. Chemistry and nuclear physics are essentially unchanged. Louis Clavelli and Raymond White, however, argued that life would be strongly inhibited in a universe without weak interactions, since insufficient oxygen would be produced, and oxygen is critical to our form of life in many ways – we need oxygen for water, which is needed as a universal solvent. The problem is that they assume that any form of life must necessarily be very much like our own. Harnik responded that the claim that life needed a universal solvent is “not based on evidence.” While it is true for our form of life, “that's not a large sample.” In any case, he says, “even in the weakless universe there can be plenty of water.” (See The Fallacy of Fine-Tuning, by Victor Stenger.) More recently (2018), Alex Howe and Fred Adams wrote a paper confirming the initial conclusion, saying: "We investigate a class of universes in which the weak interaction is not in operation... Although somewhat different from our own, such universes remain potentially habitable."

Now, if the inflationary and black-hole-type multiverses postulated by cosmologists are shown to be unable to successfully explain fine-tuning, it is still possible to forget Inflationary Theory, forget the String Landscape (and alternatives), forget Cosmological Natural Selection, and simply say: our spacetime manifold is one among many spatially and causally disconnected – infinitely old or, alternatively, finite (and yet uncaused) – manifolds. These manifolds possess different fundamental physics and different values of the constants. Another possibility is that there is some kind of physical generator of universes (not the inflationary one) which is supremely simple, but we can't understand how, because it may be very different from the physical world we're acquainted with (for instance, it may obey fundamentally different laws – or just one law – of physics). This non-scientific (i.e., metaphysical) alternative avoids all the purported problems of the scientific ones.

An apologist might claim this possibility has no explanatory power because it makes no predictions – we could never have access to these other manifolds and consequently we would never confirm this hypothesis via some direct observation.

However, just because we couldn't confirm it by direct observation doesn't mean we can't test it indirectly. How so? If the many-universes hypothesis (in the absence of God) is indeed the only plausible naturalistic alternative to the hypothesis that there is only one space-time and it was finely tuned for life by God, then evidence that the naturalistic view is true (and hence that the theistic view is false) will be evidence that the many-universes hypothesis is true. For example, suppose theism predicts that our minds are not reducible to physical matter. Then evidence that your mind is your brain will be evidence that naturalism is true. But for naturalism to be true, the many-universes hypothesis must be true (this assumes there are no other naturalistic ways to explain fine-tuning). Therefore, even though we cannot derive direct predictions to test it, we do have indirect predictions. In other words, evidence for naturalism will be evidence for the many-universes hypothesis. And since both the theistic and naturalistic hypotheses begin with the same probability – given that fine-tuning is, prima facie, equally explained by the two – the naturalist will not lag behind the theist.
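To make the structure of this indirect inference explicit, here is a minimal numerical sketch – my own illustration, with invented probabilities – of the point above: if naturalism can only accommodate fine-tuning via many universes, then naturalism entails the many-universes hypothesis, so its probability can never exceed that of the many-universes hypothesis, before or after any evidence comes in.

    # A minimal sketch of the "indirect test" described above.
    # All probabilities are invented for illustration; only the structure of the inference matters.
    # Assumption: naturalism (N) can only accommodate fine-tuning via many universes (M),
    # so N entails M and therefore P(M) >= P(N) on any body of evidence.

    p_N = 0.5              # illustrative prior probability of naturalism
    p_E_given_N = 0.9      # e.g., evidence that the mind is the brain, expected on N
    p_E_given_notN = 0.3   # the same evidence is less expected if theism is true (illustrative)

    p_E = p_E_given_N * p_N + p_E_given_notN * (1 - p_N)   # total probability of the evidence
    p_N_given_E = p_E_given_N * p_N / p_E                  # Bayes' theorem

    # Because N entails M, the posterior probability of M is at least that of N:
    print(f"P(N|E) = {p_N_given_E:.2f}, so P(M|E) >= {p_N_given_E:.2f}")

With these illustrative numbers the posterior probability of naturalism rises from 0.5 to 0.75, and since naturalism entails many universes, the posterior probability of the many-universes hypothesis is at least 0.75 as well – which is all the indirect test needs.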

Finally, philosopher Felipe Leon argued that the designer itself must be finely-tuned, which means it must be designed:

"There are final causes in God's nature that are ontologically prior to his intelligent agency. For example, God's intellect and will work together to perform various functions, such as designing and creating things. God's life is also meaningful and purposeful according to classical theism. On classical theism, therefore, final causes are built into God's nature without a prior cause. But if that's right, then classical theism entails the existence of final causes at the metaphysical ground floor that God cannot create. And if that's right, then theism entails that non-conscious teleology is a more fundamental feature of reality than teleology caused by intelligence. And if that's right, then we'd expect base-level teleology in the universe that's not caused by God on the hypothesis of theism. Therefore, absent a further reason for thinking cosmic fine-tuning isn't expected unless caused by a divine fine-tuner, cosmic fine-tuning doesn't confirm theism vis-a-vis naturalism." (Leon, 2023)

Dr. Leon then addressed a possible objection to his argument:

"In a section defending the argument against the "Who Designed the Designer" criticism, Collins uses Swinburne's reply that, roughly, an explanatory posit y can explain some phenomenon x even if y is itself complex-yet-unexplained. However, when Collins discusses the "Many Universes" criticism of the fine-tuning argument, he argues that such a hypothesis is implausible, on the grounds that the mechanism that would be required to produce the universes on that hypothesis is complex and functional, and thus would itself require a designer. In short, Collins seems to accept the following principle when he responds to the "who designed the designer?" criticism of the design argument:

(*) A theoretical posit y can be an adequate explanation of some phenomenon x, even if y is itself complex-yet-unexplained.

However, when the critic uses the Many Universes hypothesis to explain the data of fine-tuning, Collins responds in a way that is in conflict with (*).

So the worry is this. There seems to be no principled way to qualify (*) in a way that makes it legitimate to apply to theism, yet illegitimate to apply to the many universes hypothesis. But if not, then he must either accept (*) in its current, unqualified form or reject it. Now if he rejects it, then unless he takes God to be simple in structure, his hypothesis of a theistic Fine-Tuner can't be a legitimate explanation of cosmic fine-tuning, and thus his design argument collapses. So he can't take that route. So he must accept (*). But if so, then he loses the reply to the Many Universes hypothesis, and with it any explanatory upper-hand the theistic hypothesis may have had with respect to the data of fine-tuning.

Of course, he could reply that God is simple – e.g., he could hold to the Thomistic conception of God as a being whose existence is identical with his essence, and that all of his attributes are identical. But he seems to be hesitant to do so in his chapter (and rightly so...). Still, I guess that one could say that, contrary to anything we can imagine, God is absolutely simple, and our inability to comprehend God's simplicity is due to our cognitive limitations. But at that point, the pretense of offering an explanatory account of fine-tuning disappears; and in any case the many-universe proponent could play that game: perhaps at the most fundamental level of reality in the multiverse, reality is ultimately simple." (Leon, 2008)