Paley's metaphor of the watch, alluded to by Ridcully, remains powerful; powerful enough for Richard Dawkins to title his neo-Darwinian riposte of 1986 The Blind Watchmaker. Dawkins[8] made it clear that in his view, and in that of most evolutionary biologists over the past fifty years, there was no watchmaker for living organisms in Paley's sense: 'Paley's argument is made with passionate sincerity and is informed by the best biological scholarship of his day, but it is wrong, gloriously and utterly wrong.' But, says Dawkins, if we must give the watchmaker a role, then that role must be the process of natural selection that Darwin expounded. If so, the watchmaker has no sense of purpose: it is blind. It's a neat title but easily misunderstood, and it opens the way to replies, such as the recent book by William Dembski, How Blind Is the Watchmaker? Dembski is an advocate of `intelligent design', a modern reincarnation of Paley with updated biology, which repeats the old mistakes in new contexts[9].
If you did find a watch on a heath, your first thought would probably not be that there must have been a watchmaker, but a watch-owner. You would either wish to get the owner's property back to them, or look guiltily around to make sure they weren't anywhere nearby before you snaffled it. Paley tells us that if we find, say, a spider on the path, then we are compelled to infer the existence of a spider-maker. But he finds no such compulsion to infer the existence of a spider-owner. Why is one human social role emphasised, but the other suppressed?
Moreover, we know what a watch is for, and this colours our thinking. Suppose, instead, that our nineteenth-century heath-walker chanced upon a mobile phone, left there by some careless time traveller from the future. He would probably still infer `design' from its intricate form ... but purpose? What conceivable purpose would a mobile phone have in the nineteenth century, with no supporting network of transmission towers? There is no way to look at a mobile phone and infer some evident purpose. If its battery has run down, it doesn't do anything. And if what was found on the path was a computer chip - say, the engine manager of a car - then even the element of design would be undetectable, and the chip might well be dismissed as some obscure crystalline rock. Chemical analysis would confirm the diagnosis by showing that it was mostly silicon. Of course, we know that these things do have a designer; but in the absence of any clear purpose, Paley's heath-walker would not be entitled to make any such inference.
In short, Paley's logic is heavily biased by what a human being would know about a watch and its maker. And his analogy breaks down when we consider other features of watches. If it doesn't even work for watches, which we do understand, there's no reason for it to apply to organisms, which we don't.
He is also rather unfair to stones.
Some of the oldest rocks in the world are found in Greenland, in a 25-mile-long band known as the Isua supracrustal belt. They are the oldest known rocks among those that have been laid down on the surface of the Earth, instead of rising from the mantle below. They are 3.8 billion years old, unless we cannot reliably make inferences from observations, in which case the evidence for cosmic design has to be thrown out along with the evidence of the rocks. We know their age because they contain tiny crystals of zircon. We mention them here because they show that Paley's lack of interest in `stones', and his casual acceptance that they might have `lain there forever', are unjustified. The structure of a stone is nowhere near as simple as Paley assumed. In fact, it can be just as intricate as an organism, though not as obviously `organised'. Every stone has a story to tell.
Zircons are a case in point.
Zirconium is the 40th element in the periodic table, and zircon is zirconium silicate. It occurs in many rocks, but usually in such tiny amounts that its presence is ignored. It is extremely hard - not as hard as diamond, but harder than the hardest steel. Jewellers sometimes use it as a diamond substitute.
Zircons, then, are found in most rocks, but in this instance the important rock is granite. Granite is an igneous rock, which wells up from the molten layers beneath the Earth's crust, forcing a path through the overlying sedimentary rock that has been deposited by wind or water. Zircons form in granite that solidifies about 12 miles (20 km) down inside the Earth. The crystals are truly tiny: one 10,000th of an inch (2 microns) is typical.
Over the last few decades we have learned that our apparently stable planet is highly dynamic, with continents that wander around over the surface, carried by gigantic `tectonic plates' which are 60 miles (100 km) thick and drift on the hot, slowly flowing rock of the mantle below. Sometimes they even crash into each other. They move less than an inch (about 2 cm) per year, on average, and on a geological timescale that's fast.
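How fast? A one-line sum shows the scale (taking the round figure of 2 cm a year over a typical geological interval of 100 million years):

\[ 2\ \text{cm/year} \times 10^{8}\ \text{years} \;=\; 2 \times 10^{8}\ \text{cm} \;=\; 2000\ \text{km}, \]

which is ample to rearrange the map of the world.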
The north-west of Scotland was once part of North America, when the North American plate collided with the Eurasian plate; when the plates later split apart, a piece of America was left behind, forming the Moine thrust. When plates collide, they slide over each other, often creating mountains. The highest mountains on Earth today, the Himalayas, formed when India collided with the Asian mainland. They are still rising today by more than half an inch (1.3 cm) a year, though they are often weathered away almost as fast, and India is still moving northwards.
At any rate, granite deep within the Earth may be uplifted by the collision of continental plates, to appear at the surface as part of a mountain range. Being a hard rock, it survives when the softer sedimentary rocks that surround it weather away. But eventually, even granite weathers, so the mountain erodes. The zircon crystals are even harder, so they survive weathering; they separate out from the granite, to be washed down to the coast by streams and rivers, deposited on the sandy shore, and incorporated into the next layer of sedimentary rock.
As well as being very hard, zircon is chemically very stable, and it resists most chemical changes. So, as the sediment builds up, and the zircon crystal is buried under accumulating quantities of incipient rock, the crystal is relatively immune to the increasing heat and pressure. Even when the rock is cooked by deep heat, becoming metamorphic - changing its chemical structure - the crystal of zircon survives. Its one concession to the extreme environment around it is that eventually it builds a new layer, like a skin, on its surface. This `rim', as it is called, is roughly the same age as the surrounding rock; the inner core is far older.
Now the process may repeat. The core of zircon, with its new rim, may be pushed up with the surrounding rocks to make a new mountain range. When those mountains weather, the zircon may return to the depths, to acquire a second rim. Then a third, a fourth ... Just as tree rings indicate the growth of a tree, so `zircon rims' reflect a sequence of mountain-building and erosion. The main difference is that each ring on a tree corresponds to a period of one year, whereas the rims on the tiny zircon crystal correspond to geological cycles that typically last hundreds of millions of years. But, just as the widths of tree rings tell us something about the climate in the years that are represented, so the zircon rims tell us something about the conditions that occurred during a given geological cycle.
By one of those neat coincidences that Paley would interpret as the Hand of God but nowadays we recognise as an inevitable consequence of the sheer richness of the universe (yes, we do see that those statements might be the same), the zirconium atom has the same electric charge, and is much the same size, as an atom of uranium. So uranium impurities can easily sneak into that zircon crystal. This is good for science, because uranium is radioactive. Over time, it decays into lead. If we measure the ratio of uranium to lead then we can estimate the time that has elapsed since any given part of the zircon crystal was laid down. Now we have a powerful observational tool, a geological stopwatch. And we also have a simple prediction that lets us test the hypothesis that the zircon crystal forms in successive stages: the core should be the oldest part of the crystal, and each successive rim should be younger than the one inside it.
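For readers who want to see the stopwatch's dial, the standard uranium-lead age equation is enough for a rough calculation. (The half-life quoted is the textbook value for uranium-238 decaying, through a chain of short-lived intermediates, to lead-206; real geochronology uses more refined versions, but the principle is this.)

\[ t \;=\; \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\mathrm{Pb}}{\mathrm{U}}\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad t_{1/2}\bigl(^{238}\mathrm{U}\bigr) \approx 4.47\ \text{billion years}. \]

A zircon zone in which the lead-206 atoms have built up to about 80 per cent of the remaining uranium-238 works out, on this formula, at roughly 3.8 billion years old.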
A typical crystal might have, say, four layers. The core might date to 3.7 billion years ago, the next to 3.6 billion years, the third to 2.6 billion years, and the last one to 2.3 billion years. So here, in a simple `stone', we have evidence for geological cycles that last between 100 million and one billion years. The order of the ages agrees with the order in which the crystal must have been deposited. If the general scenario envisaged by geologists were wrong, then it would take only a single grain of sand to disprove it. Of course that doesn't confirm the huge geological cycles: those are deduced from other evidence. Science is a crossword puzzle.
Zircons can teach us more. It is thought that the ratio of two isotopes of carbon, carbon-12 and carbon-13, may distinguish organic sources of carbon from inorganic ones. There is carbon in the Isua formation, and the ratio there suggests that life may have existed 3.8 billion years ago, surprisingly soon after the Earth's surface solidified. But this conclusion is controversial, and many scientists are not convinced that other explanations can be excluded.
At any rate, for the Isua zircons we know that it is not an option for them to have `lain there for ever'. Stones are far more interesting than they might seem, and anyone who knows how to read the rocks can deduce many things about their history. Paley believed that he could deduce the existence of God from the complexity of an eye. We can't get God from a zircon, but we can get vast geological cycles of mountain-building and erosion ... and just possibly, evidence for exceedingly ancient life.
Never underestimate the humble stone. It may be a watch in disguise.
Paley's position is that what you see is what you get. The appearance is the reality. His title Natural Theology says as much, and his subtitle could scarcely be plainer. Organisms look designed because they are designed, by God; they appear to have a purpose because they do have a purpose: God's. Everywhere Paley looked, he saw traces of God's handiwork; everything around him was evidence for the Creator.
That kind of `evidence' exists in such abundance that there is no difficulty in accumulating examples. Paley's central example was the eye. He noted its similarity to a telescope, and deduced that since a telescope is designed, so must an eye be. The camera did not exist in his day[10], but if it had existed, he would have found even closer similarities. The eye, like a telescope or a camera, has a lens to bring incoming light to a sharp focus, forming an image. The eye has a retina to receive that image, just as a telescope has an observer, or a screen on to which the image is projected.
The lens of the eye is useless without the retina; the retina is useless without the lens. You can't put an eye together piecemeal - you need all of it, at once, or it can't work. Later supporters of theist explanations of life turned Paley's subtle arguments into a simplistic slogan: `What use is half an eye?'
One reason to doubt Paley's explanation of `design' is that in science, you very seldom get what you see. Nature is far from obvious. The waves on the ocean may seem to be travelling, but the water is mainly going round and round in tiny circles. (If it wasn't, the land would quickly be swamped.) The Sun may appear to orbit the Earth, but actually it's the other way round. Mountains, apparently solid and stable, rise and fall over geological timescales. Continents move. Stars explode. So the explanation `it appears designed because it is designed' is a bit too trite, a bit too obvious, a bit too shallow. That doesn't prove it's wrong, but it gives us pause.
Darwin was one of a select group of people who realised that there might be an alternative. Instead of some cosmic designer creating the impressive organisation of organisms, that organisation might come into being of its own accord. Or, more accurately, as an inevitable consequence of the physical nature of life, and its interactions with its environment. Living creatures, Darwin suggested, are not the product of design, but of what we now call `evolution' - a process of slow, incremental change, almost imperceptible from one generation to the next, but capable of accumulating over extensive periods of time. Evolution is a consequence of three things. One is the ability of living creatures to pass on some of their attributes to their offspring. The second is the slightly hit-and-miss nature of that ability: what they pass on is seldom a precise copy, though it usually comes close. The third is `natural selection' - creatures that are better at survival are the ones that manage to breed, and pass on their survival attributes.
Natural selection is slow.
As an accomplished student of geology - Victorian-style field geology, where you traipse about the landscape trying to work out what rocks lie under your feet, or halfway up the next mountain, and how they got there - Darwin was well aware of the sheer abyssal depth of geological time. The record of the rocks offered compelling evidence that the Earth must be very, very old indeed: tens or hundreds of millions of years, maybe more. Today's figure of 4.5 billion years is even longer than the Victorian geologists dared imagine, but probably would not have surprised them.
Even a few million years is a very long time. Small changes can turn into huge ones over such a span. Imagine a species of worm four inches (10 cm) long, whose length increases by one ten-thousandth of a per cent of its original length every year, so that even very accurate measurements would not detect any change on a yearly basis. In a hundred million years, the descendants of that worm would be 30 feet (10 m) long. From annelid to anaconda. The longest worm alive today sometimes reaches lengths of 150 feet (50 m), but it is a marine worm: Lineus longissimus, which lives in the North Sea and can be found under boulders at low tide. Earthworms are a lot shorter, but the Megascolecid worms of Australia can grow to a length of 10 feet (3 m), which is still impressive.
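To check the arithmetic (treating the growth as a fixed increment of the worm's original length each year, which is the simple-minded reading intended here): one ten-thousandth of a per cent is a fraction of 10^{-6}, so

\[ 10\ \text{cm} \times \bigl(1 + 10^{-6} \times 10^{8}\bigr) \;=\; 10\ \text{cm} \times 101 \;\approx\; 10\ \text{metres}. \]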
We're not suggesting that evolution happens with quite that degree of simplicity or regularity, but there's no question that geological time allows huge changes to occur by imperceptible steps. In fact, most evolutionary changes are a lot faster. Observations of `Darwin's finches', 13 species of bird that inhabit the Galapagos Islands, reveal measurable changes from one year to the next - for example, in the average sizes of the birds' beaks.
If we want to explain the rich panoply of life on Earth, it is not enough to observe that living creatures can change as the generations pass. There must also be something that drives those changes in a `creative' direction. The only driving force that Paley could imagine was God, making conscious, intelligent choices and designing them in from the beginning. Darwin was more acutely aware that organisms can and do change from each generation to the next. Both the fossil record and his experience with the breeding of new varieties of plants and domestic animals made that fact plain. But breeding is also a choice imposed from outside, by the breeder, so if anything, domestic animals look like evidence in favour of Paley.
On the other hand, no human agency ever bred dinosaurs. Does that imply that the agency was God - or did the dinosaurs somehow breed themselves into new forms? Darwin realised that there is another kind of `choice', imposed not by intelligent will but by circumstance and context. This is `natural selection'. In the vast, ongoing competition for food, living space, and the opportunity to breed, nature will automatically favour winners over losers. Competition introduces a kind of ratchet, which mostly moves in one direction: towards whatever works better. So we should not be surprised that tiny incremental changes from one generation to the next should possess some sort of overall `direction', or dynamic, with changes accumulating coherently across the aeons to produce something entirely different.
This kind of description is easily misunderstood as implying an inbuilt tendency towards 'progress' - ever onwards, ever upwards. Ever more complex. Many Victorians took the message that the purpose of evolution was to bring humanity into being. We are the highest form of creation, we are at the top of the evolutionary tree. With us, evolution has arrived; it will now stop, having achieved its ultimate goal.
Rubbish. `Works better' is not an absolute statement. It applies in a context that is itself changing. What works better today might not do so in a million years' time - or even tomorrow. Maybe for a time, a bird's beak will `work better' if it is bigger and stronger. If so, that's how it will change. Not because the birds know what kind of beak will work better: because the kind of beak that works better is the kind that survives more effectively and is therefore more likely to be inherited by succeeding generations. But the results of the competition may change the rules of the game, so that later on, big beaks may become a disadvantage; for instance, suitable food may disappear. So now smaller beaks will win.
In short, the dynamic of evolution is not prescribed in advance: it is `emergent'. It creates its own context, and reacts to that context, as it proceeds. So at any given time we expect to find some sensible directionality to evolutionary change, consistent over many generations, but often the universe itself only finds out what that direction is by exploring what's possible and discovering what works. Over a longer timescale, the direction itself can change. It's like a river that flows through an eroding landscape: at any given time there is a clear direction to the flow, but in the long run the passage of the river can slowly change its own course.
It is also important to appreciate that individual organisms do not compete in isolation, or against a fixed background. Billions of competitions go on all the time, and their outcome may be affected by the results of other competitions. It's not like the Olympics, where the javelin-throwers politely wait for the marathon-runners to stream past. It's more like a version of the Olympics where the javelin-throwers try to spear as many marathon-runners as they can, while the steeplechasers are trying to steal their javelins to turn each hurdle into a miniature pole vault, and the marathon-runners' main aim in life is to drink the water-jump before the steeplechasers get to it and drink it first. This is the Evolympics, where everything happens at once.
The evolutionary competitions, and their outcomes, also depend on context. Climate, in particular, plays a big role. In the Galapagos, selection for beak size in Darwin's finches depends on how many birds have what size of beak, and on what kinds of food - seeds, insects, cactus - are available and in what quantities. The amount and type of food depend on which plants and insects are competing best in the struggle to survive - not least from being eaten by finches - and breed. And all of this is played out against a background of climatic variations: wet or dry summers, wet or dry winters. Observations published in 2002 by Peter and Rosemary Grant show that the main unpredictable feature of finch evolution in the Galapagos is climate. If we could forecast the climate accurately, we could predict how the finches would evolve. But we can't predict the climate well enough, and there are reasons to think that this may never be possible.
That doesn't prevent evolution from being `predictive', hence a science, any more than it prevents meteorology from being a science. But the evolutionary predictions are contingent upon the behaviour of the climate. They predict what will happen in what circumstances, not when it will happen.
Darwin almost certainly read Paley's masterwork as a young man, and in later life he may well have used it as a touchstone for his own, more radical and far more indirect, views. Paley succinctly expressed many of the most effective objections to Darwin's ideas, long before Darwin arrived at them. Intellectual honesty demanded that Darwin should find convincing answers to Paley. Such answers are scattered throughout Darwin's epic treatise The Origin of Species, though Paley's name does not appear.
In particular, Darwin found it necessary to tackle the thorny question of the eye. His answer was that although the human eye appears to be a perfected mechanism, with many interdependent parts, there are plenty of different `eyes' in the animal kingdom, and a lot of those are relatively rudimentary. They can even be arranged in a rough progression from simple light-sensing patches to pinhole cameras to complex lenses (though this arrangement should not be interpreted as an actual evolutionary sequence). Instead of half an eye, we find an eye that is half as effective at detecting light. And this is far, far better than no eye at all.
Darwin's approach to the eye is complemented by some computer experiments published by Dan-Eric Nilsson and Susanne Pelger[11] in 1994. They studied a simple model of the evolution of a light-sensing patch of cells, whose geometry could change slightly at every `generation', and which was equipped with the capacity to develop accessories such as a lens. In their simulations, a mere 100,000 generations were enough to transform a light-sensing patch into something approaching the human eye, including a lens whose refractive index varied from place to place, to improve its focus. The human eye possesses just such a lens. Moreover, and crucially, at every one of those 100,000 steps, the eye's ability to sense light got better.
This simulation was recently criticised on the grounds that it gets out what it puts in. It doesn't explain how those light-sensing cells can appear to begin with, or how the eye's geometry can change. And it uses a rather simplistic measure of the eye's performance. These would be important criticisms if the model were being used as some kind of proof that eyes must evolve, and as an accurate description of how they did it. However, that was never the purpose of the simulation. It had two main aims. One was to show that in the simplified context of the model, evolution constrained by natural selection could make incremental improvements and get to something resembling a real eye. It wouldn't get stuck along the way with some dead-end version of the eye that could be improved only by scrapping it and starting afresh. The second aim was to estimate the time required for such a process to take place (look at the title of the paper), on the assumption that the necessary ingredients were available.
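To give a flavour of how such a simulation works, here is a deliberately crude sketch in Python. It is emphatically not the Nilsson-Pelger model: the three `traits' and the scoring function are toy assumptions invented purely for illustration. All it shows is the bare mechanism - make a small random change each generation, and keep it only if a crude measure of visual performance does not get worse.

    import random

    # A toy 'eye' described by three traits, each scaled 0..1:
    #   depth    - how far the light-sensing patch has sunk into a cup
    #   aperture - how wide the opening of the cup is (1 = fully open)
    #   lens     - how well a lens focuses the incoming light (0 = no lens)
    # The scoring function is a made-up stand-in for visual performance,
    # not the optical model used by Nilsson and Pelger.

    def performance(depth, aperture, lens):
        blur = aperture * (1.0 - 0.9 * lens) / (0.1 + depth)  # crude blur angle
        acuity = 1.0 / (blur + 0.01)          # sharper image -> higher acuity
        sensitivity = aperture ** 2           # wider opening -> more light gathered
        return acuity * sensitivity

    def evolve(generations=100_000, step=0.005, seed=1):
        random.seed(seed)
        eye = {"depth": 0.01, "aperture": 1.0, "lens": 0.0}  # a flat, lensless patch
        best = performance(**eye)
        for gen in range(generations):
            trait = random.choice(list(eye))
            candidate = dict(eye)
            lower = 0.05 if trait == "aperture" else 0.0      # keep some opening
            candidate[trait] = min(1.0, max(lower, candidate[trait] + random.gauss(0.0, step)))
            score = performance(**candidate)
            if score >= best:                 # selection: keep changes that don't hurt
                eye, best = candidate, score
            if gen % 20_000 == 0:
                print(f"generation {gen:>6}: performance {best:8.3f}  {eye}")
        return eye

    if __name__ == "__main__":
        print(evolve())

Run it, and the flat patch duly deepens into a cup and acquires a focusing `lens', with the score never once decreasing along the way. The real Nilsson-Pelger model is, of course, far more careful about the optics; the point of the sketch is only the mechanism of cumulative, non-regressing change.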
Some of the model's assumptions are easily justified, as it happens. Light carries energy and energy affects chemical bonds, so it is not surprising that many chemicals respond to light. Evolution has an immense range of molecules to draw on - proteins specified by DNA sequences in genes. The combinatorial possibilities here are truly vast: the universe is not big enough, and has not lasted long enough, to make one molecule of each possible protein as complex as, say, haemoglobin, the oxygen-carrier in blood. It would be utterly astonishing if evolution could not come up with at least one light-sensing pigment, and incorporate it into a cell.
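To put a rough number on `truly vast': a single protein chain the size of one of haemoglobin's subunits is about 140 amino acids long, with 20 amino acids to choose from at each position, so the number of possible sequences is

\[ 20^{140} \approx 10^{182}, \]

against roughly 10^{80} atoms in the observable universe and fewer than 10^{18} seconds since the Big Bang. The universe could not hold, let alone manufacture, one molecule of each. Plenty of room, then, for a light-sensing pigment to turn up.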
There are even some ideas of how this may have happened. In Debating Design, Bruce Weber and David Depew point out that light-sensitive enzyme systems can be found in bacteria, and these systems are probably very ancient. The bacteria don't use them for vision, but as part of their metabolic (energy-gaining) processes. Proteins in the human lens are very similar to metabolic enzymes found in the liver. So the proteins that make the eye did not start out as components of a system whose purpose was vision. They arose elsewhere and had quite different `functions'. Their form and function were then selectively modified when their rudimentary light-sensing powers turned out to offer an evolutionary advantage.
Although we now know quite a lot about the genetics of the human eye, no biologist claims to know exactly how it evolved. The fossil record is poor, and humanoid eyes don't fossilise (though trilobite eyes do). But biologists can offer simple reasons why and how the eye could have evolved, and these alone are sufficient to demolish claims that its evolution is impossible in principle because the eye's components are interdependent and removing any one of them causes the eye to malfunction. The eye did not evolve one component at a time. Its structure evolved in parallel. The instigators of more recent revivals of Paley's doctrine, albeit in less overtly theist tones, have taken on board the message of the eye as a specific case ... but its more generic aspects seem to have eluded them.

Darwin's discussion of the eye, and the Nilsson-Pelger computer experiment, are not limited to eyes. Here is the deeper message. When confronted with a complex living `mechanism', do not assume that the only way it can evolve is component by component, piece by piece. When you see a watch, do not think of hooking up springs and adding cogwheels from some standard box of spare parts. Think more of a Salvador Dali `soft watch' that can flow and distort, deform, split apart, and rejoin. Think of a watch whose cogwheels can change shape, grow new teeth, and whose axles and supports evolve along with the cogs so that at every stage the whole thing fits together. Think of a watch that may have started out as a paper clip, and along the way became a pogo-stick. Think not of a watch that does and always did have a single purpose, which was to tell the time. Think of a watch that once held sheets of paper together and could also be straightened out to form a toothpick, and which later turned out to be great for bouncing, and started to be used for measuring time only when someone noticed that its rhythmic movements could chart the passing seconds.
Yes, proponents of intelligent design understand the eye ... but only as one example, not as the basis of a general principle. `Oh, yes, we know all about the eye,' they say (we paraphrase). `We're not going to ask what use half an eye is. That's simple-minded nonsense.' So instead, they ask what use half a bacterial flagellum is, and thereby repeat the identical error in a different context.
We owe this example to Michael Behe, a biochemist who was baffled by the complexity of bacterial flagella. These are the `tails' that bacteria use to move around, tiny `screws' like a ship's propeller, driven by a rotary molecular motor. Some forty proteins are involved in making such a motor, and if you miss any of them out, it won't work. In his 1996 Darwin's Black Box, Behe claimed that the only possible way to make a flagellum was to encode the whole structure, in advance, in bacterial DNA. This code could not have evolved from anything simpler, because the flagellum is `irreducibly complex'. An organ or biochemical system is said to be irreducibly complex if removing any of its parts causes it to fail. Behe deduced that no irreducibly complex system can evolve. The example of the bacterial flagellum quickly became a cornerstone of the intelligent design movement, and Behe's principle of irreducible complexity was promoted as an unavoidable barrier to the evolution of complex structures and functions.
There are several excellent books that debate intelligent design: we've mentioned two earlier in a footnote. It's fair to say that the antis are winning the debate hands down - even in books edited by the pros, such as Debating Design. Perhaps the biggest problem for the pros is that Behe's fundamental concept of `irreducible complexity' has fatal flaws. With his definition, the deduction that an irreducibly complex system cannot evolve is valid only if evolution always consists of adding new parts. If that were the case, the logic would be clear. Suppose we have an irreducibly complex system, and suppose that there is an evolutionary route leading to it. Focus on the final step, where the last part is added. Whatever came before lacked that part, so by definition it was a failure - and a failure could not have survived for natural selection to complete the job. The supposed route leads to a contradiction, so no such route exists: end of story.
However, evolution need not merely add identifiable components, like a factory-worker assembling a machine. It can also remove them - like a builder using scaffolding and then taking it down once it's done its job. Or the entire structure can evolve in parallel. Either possibility allows an irreducibly complex system to evolve, because the final step no longer has to start from a system that lacks a vital piece. Instead, it can start from a working system with an extra piece, and remove that piece. Or it can add two vital pieces simultaneously. Nothing in Behe's definition of irreducible complexity prohibits either of these.
Moreover, `fail' is a slippery concept: a watch that lacks hands is a failure at telling the time, but you can still use it to detonate a timebomb, or hang it on a string to make a plumb-line. Organs and biochemical systems often change their functions as they evolve, as we've just seen in the context of the eye. No satisfactory definition of `irreducible complexity' - one that really does constitute a barrier to evolution - has yet been suggested.
According to Kenneth Miller in Debating Design: `the great irony of the flagellum's increasing acceptance as an icon of the antievolutionist movement is the fact that research had demolished its status as an example of irreducible complexity almost at the very moment it was first proclaimed'. Removing parts from the flagellum does not cause it to `fail'. The base of the bacterial motor is remarkably similar to a system that bacteria use to attack other cells, the `type III secretory system'. So here we have the basis of an entirely sensible and plausible evolutionary route to the flagellum, in which protein components do get added on. When you remove them again, you don't get a working flagellum - but you do get a working secretory system. The bacterial method of propulsion may well have evolved from an attack mechanism.
To their credit, proponents of intelligent design are encouraging this kind of debate, but they have not yet conceded defeat, even though their entire programme rests on shaky foundations and is collapsing in ruins. Creationists, desperate to snatch at any straw of scientific respectability for their political programme to lever religion into the American state school system[12], have not yet noticed that what they are currently taking as their scientific support is falling apart at the seams. The theory of intelligent design itself is not overtly theist - indeed its proponents try very hard not to draw religious conclusions. They want the scientific arguments to be considered as science. Of course that's not going to happen, because the theist implications are a little too obvious - even to atheists.
There are some things that evolution does not explain - which will gladden the heart of anyone who feels that, Darwin notwithstanding, there are some issues that science cannot address.
It is perfectly possible to agree with Darwin and his successors that the Earth is 4.5 billion years old, and that life has evolved, by purely physical and chemical processes, from inorganic beginnings - yet still find a place for a deity. Yes, in a rich and complex universe, all these things can happen without divine intervention. But ... how did that rich and complex universe come into being?
Here, today's cosmology offers descriptions of how (Big Bang, various recent alternatives) and when (about 13 billion years ago), but not why. String theory, a recent innovation at the frontiers of physics, makes an interesting attempt at `why?' However, it leaves an even bigger `why?' unanswered: why string theory? Science develops the consequences of physical rules (`laws'), but it doesn't explain why those rules apply, or how such a set-up came to exist.
These are deep mysteries. At the moment, and probably for ever, they are not accessible to the scientific method. Here religions come into their own, offering answers to riddles about which science chooses to remain mute.
If you want answers, they are available.
Rather a lot of different ones, in fact. Choose whichever one makes you feel most comfortable.
Feeling comfortable, however, is not a criterion recognised by science. It may make us feel warm and fuzzy, but the historical development of scientific understanding shows that, time and again, warm and fuzzy is just a polite way of saying `wrong'.
Belief systems rely on faith, not evidence. They provide answers - but they don't provide any rational process to assess those answers. So although there are questions beyond the capacity of science to answer, that's mostly because science sets itself high standards for evidence, and holds its tongue when there isn't any. The alleged superiority of belief systems compared to science, when it comes to these deep mysteries, stems not from a failure of science, but from the willingness of belief systems to accept authority without question.
So the religious person can take comfort that his or her beliefs provide answers to deep questions of human existence that are beyond the powers of science, and the atheist can take comfort that there is absolutely no reason to expect those answers to be right. But also no way to prove them wrong, so why don't we just coexist peacefully, stay off each other's turf, and each get on with our own thing? Which is easy to say but harder to do, especially when some people refuse to stick to their own turf, and use political means, or violence, to promote their views, when rational debate long ago demolished them.
Some aspects of some belief systems are testable, of course - the Grand Canyon is not evidence for Noah's flood, unless God is having a quiet joke at our expense, which admittedly would be a very Discworld thing to do. And if He is, then all bets are off, because His revealed word in [insert your preferred Holy Book] may well be a joke too. Other aspects are not testable: the deeper issues stray into intellectual territory where, in the end, you have to settle for whatever explanation your type of mind finds convincing, or just stop asking that kind of question.
But remember: what's most interesting about your beliefs, to anyone who does not share them, is not whether you're right - it's that what you believe is a window into the workings of your mind. `Ah, so you think like that, do you?'
This is where the great mystery of human existence leads, and where all explanations are true - for a given value of `true'.