In a conversation on Edge.org, historian Yuval Noah Harari discussed how society may change in the future due to advances in technology. He foresees a time of social change and unrest when the elite can afford advanced medicine (possibly even achieving eternal life on earth, he says) and the poor are left farther and farther behind. He compares his predicted social problems to the upheaval caused by the Industrial Revolution:
What is certain is that the old answers were irrelevant [in dealing with the results of the Industrial Revolution]. Today, everybody is talking about ISIS, and the Islamic fundamentalism, and the Christian revival, and things like that. There are new problems, and people go back to the ancient texts, and think that there is an answer in the Sharia, in the Qur'an, in the Bible. We also had the same thing in the 19th century. You had the Industrial Revolution. You had huge sociopolitical problems all over the world, as a result of industrialization, of modernization. You got lots of people thinking that the answer is in the Bible or in the Qur'an. You had religious movements all over the world….
Eventually, people came up with new ideas, not from the Sharia, and not from the Bible, and not from some vision. People studied industry, they studied coal mines, they studied electricity, they studied steam engines, railroads, they looked at how these developments transformed the economy and society, and they came up with some new ideas.
New ideas, rooted in scientific understanding, did help bring societies through the turbulence of industrialization. But the reformers who made the biggest differences — the ones who worked in the slums and with the displaced, attacked cruelties and pushed for social reforms, rebuilt community after it melted into air — often blended innovations with very old moral and religious commitments.
When technological progress helped entrench slavery, the religious radicalism of abolitionists helped destroy it. When industrial development rent the fabric of everyday life, religious awakenings helped reknit it. When history’s arc bent toward eugenics, religious humanists helped keep the idea of equality alive….
[T]he assumption, deeply ingrained in our intelligentsia, that everything depends on finding the most modern and “scientific” alternative to older verities has been tested repeatedly — with mostly dire results. The 19th-century theories that cast themselves as entirely new and modern were the ones that devastated the 20th century, loosing fascism and Marxism on the world.
Which makes Harari’s concluding provocation feel like an unintended warning: “In terms of ideas, in terms of religions,” he argues, “the most interesting place today in the world is Silicon Valley, not the Middle East.” It’s in Silicon Valley that people are “creating new religions” — techno-utopian, trans-humanist — and it’s those religions “that will take over the world.”
He could be right. But if those new ideas are anything like the ones that troubled the 20th century, we may find ourselves looking to older ones for rescue soon enough.
I posted a quote last week from an atheist who warned that science can be used to promote a wide variety of values: “There is no more reason to think science can determine human values today than there was at the time of Haeckel or Huxley [who argued for eugenics based on science].” Anyone looking to technology to lead our society as a “new religion” will eventually find it can’t be counted on to create and uphold beliefs in intrinsic human value, universal human rights, or even compassion. By its very nature, it’s not capable of that.
Humanity is what it is. Read Shakespeare and you’ll find you can relate, even though he lacked your technology. Even if I didn’t think Christianity is true, it would still seem to me that wisdom about ethics and human flourishing is more likely to reside in the time-tested ideas that built thousands of years of civilization than in something brand new.
Atheist John Gray argues in the Guardian that atheists who think science alone can support their preferred system of morality are fooling themselves:
It’s probably just as well that the current generation of atheists seems to know so little of the longer history of atheist movements. When they assert that science can bridge fact and value, they overlook the many incompatible value-systems that have been defended in this way. There is no more reason to think science can determine human values today than there was at the time of Haeckel or Huxley [who argued for eugenics based on science]. None of the divergent values that atheists have from time to time promoted has any essential connection with atheism, or with science. How could any increase in scientific knowledge validate values such as human equality and personal autonomy? The source of these values is not science. In fact, as the most widely-read atheist thinker of all time argued, these quintessential liberal values have their origins in monotheism….
It’s impossible to read much contemporary polemic against religion without the impression that for the “new atheists” the world would be a better place if Jewish and Christian monotheism had never existed. If only the world wasn’t plagued by these troublesome God-botherers, they are always lamenting, liberal values would be so much more secure. Awkwardly for these atheists, Nietzsche understood that modern liberalism was a secular incarnation of these religious traditions….
To be sure, evangelical unbelievers adamantly deny that liberalism needs any support from theism…. Canonical liberal thinkers such as John Locke and Immanuel Kant may have been steeped in theism; but ideas are not falsified because they originate in errors. The far-reaching claims these thinkers have made for liberal values can be detached from their theistic beginnings; a liberal morality that applies to all human beings can be formulated without any mention of religion. Or so we are continually being told. The trouble is that it’s hard to make any sense of the idea of a universal morality without invoking an understanding of what it is to be human that has been borrowed from theism. The belief that the human species is a moral agent struggling to realise its inherent possibilities – the narrative of redemption that sustains secular humanists everywhere – is a hollowed-out version of a theistic myth. The idea that the human species is striving to achieve any purpose or goal – a universal state of freedom or justice, say – presupposes a pre-Darwinian, teleological way of thinking that has no place in science. Empirically speaking, there is no such collective human agent, only different human beings with conflicting goals and values. If you think of morality in scientific terms, as part of the behaviour of the human animal, you find that humans don’t live according to iterations of a single universal code. Instead, they have fashioned many ways of life. A plurality of moralities is as natural for the human animal as the variety of languages.
As Gray says, “It’s not that atheists can’t be moral – the subject of so many mawkish debates. The question is which morality an atheist should serve.” And that is the problem. Too many atheists still don’t understand the extent to which their moral views are influenced by theism, and therefore they still don’t understand the consequences of banishing that theism.
We’ve posted before about the problem of atheists declaring that the design of this or that body part is sub-optimal (and therefore isn’t designed). Electrical engineer Bill Pratt explained it this way:
Over the years, I have often heard young engineers, who did not design a particular [integrated circuit], criticize the design of that IC by saying it is sub-optimal, that they could do a better job. I have then seen these same engineers eat crow when they finally talk to the original designer and discover the constraints that original engineer was under when he designed the IC and the purposes for which he designed the IC.
It is impossible to judge a design as optimal or sub-optimal without knowing the purposes of the designer and without knowing the constraints the designer faced during the design.
From a practical standpoint, the wiring of the human eye – a product of our evolutionary baggage – doesn't make a lot of sense. In vertebrates, photoreceptors are located behind the neurons in the back of the eye – resulting in light scattering by the nervous fibers and blurring of our vision. Recently, researchers at the Technion – Israel Institute of Technology have confirmed the biological purpose for this seemingly counterintuitive setup.
"The retina is not just the simple detector and neural image processor, as believed until today," said Erez Ribak, a professor at the Technion – Israel Institute of Technology. "Its optical structure is optimized for our vision purposes." …
Previous experiments with mice had suggested that Müller glia cells, a type of metabolic cell that crosses the retina, play an essential role in guiding and focusing light scattered throughout the retina. To test this, Ribak and his colleagues ran computer simulations and in-vitro experiments in a mouse model to determine whether colors would be concentrated in these metabolic cells. They then used confocal microscopy to produce three-dimensional views of the retinal tissue, and found that the cells were indeed concentrating light into the photoreceptors.
"For the first time, we've explained why the retina is built backwards, with the neurons in front of the photoreceptors, rather than behind them," Ribak said.
It’s funny how the writer refers to the “biological purpose” of this “evolutionary baggage.” There is no such thing as “purpose” in “evolutionary baggage”—there’s only what happens to survive. And yet, looking at things such as the eye, this science writer can’t help but use the word.
It’s also funny he would refer to the “mystery of the reverse-wired eyeball,” as if we should have assumed there was a secret to discover—a reason why the eye ought to be the way it is. On unguided evolution, there’s no reason to expect the structure to be optimal (it certainly isn’t “built”); it’s simply whatever happened to survive, so there’s no “mystery” waiting to be solved. Calling the structure of the eye a “mystery” presupposes purpose and optimal design, and that’s not an assumption unguided evolution can support. Why use this kind of language?
For since the creation of the world His invisible attributes, His eternal power and divine nature, have been clearly seen, being understood through what has been made, so that they are without excuse (Romans 1:20).
Last week, Biola hosted a panel discussion between William Lane Craig, J.P. Moreland, and John Lennox (moderated by Hugh Hewitt) on the topic of “God, Science, and the Big Questions.”
In response to a question posed to William Lane Craig about the biggest challenge to Christianity from science that Christians need to work on, J.P. Moreland (at 1:23:04) reminded the audience that a theory (scientific, theological, etc.) ought not be rejected just because there’s an anomaly that can’t yet be reconciled with it. Instead, it’s legitimate to take time to work on finding an answer that resolves the alleged contradiction. He referenced the work he’s done on how to evaluate theories in light of anomalies:
I did a study of how people weigh and change theories, and one of the things I learned is that a theory of any kind—if it’s an economic theory, a scientific theory, it could be a theological theory—will have core commitments that are called the “paradigm carriers.” They’re the key things to the theory…. And then there will be less important commitments that are around the periphery of the theory.
Now, when does it become reasonable to think that an anomaly on the periphery—a problem—turns out to really be a falsification of the theory, as opposed to an anomaly that we can explain or it’s okay for us to work on it over a while?
Here’s what I think it is: … If the evidence for the central part of the theory is stronger than the evidence that this [anomaly] falsifies the theory, then you are within your intellectual rights to say I don’t have an answer to this yet, but I can’t bring myself to think it falsifies the theory—not because I don’t want to, [but] because we have a ton of evidence for this theory.
He then gives an example of how this has worked for a scientific theory in the past. His ideas on this subject are well worth thinking about, as I think people often misunderstand how to evaluate evidence and anomalies, thinking any anomaly ought to put an end to consideration of the theory (in this case, Christianity). This is simply a misunderstanding of how evidence works.
In forming a theory, you start with the clear cases, not the borderline ones.
The presence of as yet unexplained anomalies does not necessarily disprove the theory.
No single instance (or even a few) of a class has the power to prove or disprove a theory about that class (even if, taken alone, it would seem to); each instance must be studied as a member of its larger class, in light of the evidence from the other examples of its kind.
One important insight I learned from J. Warner Wallace—who spent his career dealing with evidence—is that evidence is messy. We shouldn’t expect everything to line up perfectly. There will be anomalies or things that will remain unexplained, and yet it’s still reasonable to settle on the conclusion that makes the best sense of all of the evidence as it stands, even with loose ends. The loose ends shouldn’t panic you.
The whole panel discussion is worth watching.
(You can find the podcast series on creation and evolution Dr. Craig referenced in the discussion here; and specifically, his response to the problem reconciling population genetics with Adam and Eve—the challenge he cited that initiated the conversation about anomalies—can be found in Part 11.)
The age of the earth is not a topic we discuss much on our website. We have an article by Greg on why he believes in an old earth, but not every employee we’ve had has agreed with that, and it’s not something we take a hard stance on as an organization. (We do, however, affirm the primacy of Scripture and a historical Adam and Eve, and we’ve argued against theistic evolution). We think Christians can take different positions on the age of the earth and still be orthodox—that the difference is often one of biblical interpretation rather than one of biblical rejection, and that the real fight on our hands when it comes to creation is not with our fellow believers, but with the naturalists.
But while the controversy over the age of the earth is not an area of my expertise, I have a great interest in promoting grace among Christians on this issue. I receive letters from time to time from people who are angry about what Greg has said about this and are convinced the old earth position is rooted in a low view of the Bible that will tear down the faith.
I don’t think this is necessarily true of old-earthers (though it may be true for some), and it shouldn’t be assumed. A few years ago, I posted a video of R.C. Sproul (who holds to a literal six-day creation) exhibiting grace on this issue, and I think he models well the grace we need to have for each other.
I want to suggest there are some good, textual reasons—in the creation account itself—for questioning the exegesis that insists on the days as strict 24-hour periods. Am I as certain of this as I am of the resurrection of Christ? Definitely not. But in some segments of the church, I fear that we’ve built an exegetical “fence around the Torah,” fearful that if we question any aspect of young-earth dogmatics we have opened the gate to liberalism. The defenders of inerrancy above show that this is not the case. And a passion for sola Scriptura provides us with the humility and willingness to go back to the text again to see if these things are so.
Old-earthers, respect the admirable passion of young-earthers to honor the Word of God. Young-earthers, understand that many old-earthers share that passion and do not see a contradiction between an old earth and the Word of God. You will likely still disagree with each other on the age of the earth after reading Taylor’s article, and I'm not saying you shouldn't continue to debate it, but my hope is that grace will increase.
This challenge comes from the first item in the Pro-Choice Action Network’s article refuting “some common misconceptions about abortion”:
“Human life begins at conception.” There is no scientific consensus as to when human life begins. It is a matter of philosophic opinion or religious belief. Human life is a continuum—sperm and eggs are also alive, and represent potential human beings, but virtually all sperm and eggs are wasted. Also, two-thirds of human conceptions are spontaneously aborted by nature.
What mistakes are made here? How would you argue that we can objectively know when a new human being comes into existence? Answer this challenge in the comments below, and then we’ll hear Alan’s answer on Thursday.
This past November I wrote that embryonic stem cell research (ESCR) had not led to any successful human treatments. I was wrong.
It turns out there are several clinical studies that have used stem cells derived from human embryos to successfully treat human conditions. In one case, embryonic stem cells were used to treat patients with macular degeneration and macular dystrophy. Researchers transplanted human embryonic stem cells into the affected eyes and showed measurable improvement. So, I can’t claim that embryonic stem cells have treated zero conditions.
None of this changes the main points I make on this topic, though. Adult-derived stem cells are still the superior choice. It’s still true that adult stem cells have been far more successful at treating conditions in humans. It’s still true that it’s not necessary to clone human beings with adult stem cells. And, most importantly, it’s still true that treating conditions using adult stem cells doesn't require you to kill innocent human beings.