
December 20, 2013

Comments

Bruce Metzger says that we have so many manuscripts of the NT that, even if we lost all of them, we would still be able to reconstruct the NT from the quotations of the early church fathers. We basically have the original texts as they were.

One can reconstruct an indication of how a text stood at a certain point in time.

To say that this point in time is the time of the original, you need an argument.

You can't simply 'acknowledge' it.

Don't Ehrman's and others' arguments fly in the face of what is an established science? I don't see these same criticisms being directed at, say, the writings of Euclid, which laid much of our mathematical foundations. Just another example of the double standard that exists when it comes to the authenticity of the Bible...

The fact of the matter is, it has been conclusively demonstrated that out of all ancient texts, the Bible has by far the most compelling evidence and scientific research for its authenticity. It's not even close! Even most secular scholars would dismiss Ehrman's critiques as not holding water.

JB,

How does being able to reconstruct the NT we have now - by whatever method - tell us that the NT we have now is the original?

d,

What arguments of Ehrman and others fly in the face of which established science?

In what way do most secular scholars dismiss Ehrman's critiques?

What can we do about changes to a text that were made prior to our earliest manuscript witness to that text?


Well, Ron, I would suspect that the noisiness of the copy process might be gauged from the copies made after the earliest copies we have. You could look at how much the copies differ from each other to take a measure of that.

When you focus on the transmission of information (rather than letter-by-letter transmission) my guess is that there is not a whole lot of variation in the information being transmitted by the copy process.

From that you might be able to extrapolate back to how much the original text could vary from the reconstructed text.

While I've obviously not carried out the said gauging and extrapolation, I really doubt that the original could vary by an awful lot from the reconstructed copy.
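
To make that concrete, here is a toy sketch of what such gauging might look like (the 'copies' below are invented strings, not actual manuscript data):

```python
# A toy sketch of gauging the noisiness of the copy process by comparing
# surviving copies of a short passage. The strings are invented for
# illustration; they are not real manuscript readings.
from difflib import SequenceMatcher
from itertools import combinations

copies = [
    "in the beginning was the word and the word was with god",
    "in the begining was the word and the word was with god",   # spelling slip
    "in the beginning was the word and the word was god",       # dropped word
]

def difference(a: str, b: str) -> float:
    """Fraction of the text that differs between two copies (0 = identical)."""
    return 1.0 - SequenceMatcher(None, a, b).ratio()

pairwise = [difference(a, b) for a, b in combinations(copies, 2)]
print(f"average pairwise difference: {sum(pairwise) / len(pairwise):.3f}")

# Rough extrapolation: if each generation of copying adds about this much
# change, then N generations back to the original adds roughly N times it.
```

The measure is crude - it works letter by letter rather than on the information conveyed - but it gives the flavor of the thing.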

And do you really think that the variance could be great? Or are you just being disputatious for the sake of dispute?

WL,

I like these thoughts you express - except for the last one. :)

I agree. I think you could, as you put it, 'gauge the copy process' using the earliest generations of copies we have. I have had the same thoughts.

Have you also thought about the problems and limitations of the project?

You might calculate an error rate. That would tend to apply to honest errors - characters, words, lines. It would not apply very well to intentional edits.

Such an error rate might have large error bars if based on the earliest manuscripts - because of their smaller numbers.

And, calculating such an error rate doesn't tell you what was different in versions that were lost before the existing copies were made.

Nor does it tell you that the process was equally good prior to the earliest generations we have. It likely wasn't.

As more copies circulate, the population of copies should tend to agree more.

Why?

Because, if you can compare 3 versions and 2 agree, you throw out the third - other things being equal. Redundancy naturally helps with honest errors.

Also because the more copies in circulation, the harder it is to advance an agenda: you have to get your edit copied at a higher rate to catch up.
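
As a toy illustration of the redundancy point (the 'copies' here are invented, not real readings):

```python
# A toy illustration of redundancy: where independent copies disagree,
# take the majority reading, position by position. The copies are invented.
from collections import Counter

copies = [
    "jesus wept and prayed".split(),
    "jesus wept and prayed".split(),
    "jesus slept and prayed".split(),   # one copy carries an honest slip
]

reconstructed = []
for readings in zip(*copies):           # compare the same position in each copy
    word, _count = Counter(readings).most_common(1)[0]
    reconstructed.append(word)          # 2-of-3 agreement outvotes the slip

print(" ".join(reconstructed))          # -> "jesus wept and prayed"
```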

Conversely, going back in time, as fewer copies circulate, the population will tend to agree less.

Going back in time, redundancy helps less and less with honest errors.

Advancing an agenda is easiest at the beginning - just where our data is nil.

I don't know if anybody has calculated such an error rate for NT copying. I do know that, other things being equal, it will not be constant as the population of manuscripts changes.

Extrapolating back would be problematic for the honest class of errors and hopeless for the others. Hopeless.

Regarding your final question: No.

I do not claim to know how different the originals were from what we have.

I don't even claim to know there is any difference.

Kruger and Epp, on the other hand, claim to know that we have the original text.

I dispute that they know this.

That's what my comments are about.

I don't do this for the sake of being disputatious.

I do it because there is plenty of room for Kruger and Epp to be very wrong.

So, in that, I disagree with you too: the originals might have been very different from what we have - different in game-changing ways.

And that's without saying anything about the veracity of it all.

This would seem to be an issue for the Muslim on the claim that the Quran is itself somehow divine, but such is not necessary for Christianity. The original text, in Islam, is to be, literally, unchanged, and this is so much the case that even a translation is considered a no-go: one must read it in its original language to achieve understanding. That the original text cannot be found is thus a likely issue for the Muslim to answer, though the OT and NT make no such claims about themselves. Scripture's ontological bedrock can be translated into any language, and its ontological application to humanity transcends mere time, location, culture, translation, and paraphrase.

Admittedly, this OP’s topic is a real problem for the Muslim/Quran, however, even if not for the OT/NT.


"You might calculate an error rate. That would tend to apply to honest errors - characters, words, lines. It would not apply very well to intentional edits."

I don't think this is right. Intentional edits are as much noise in the copy process as unintentional slips are. The question is simply, "How noisy is the copy process?" Not "Why is there noise in the copy process?"

"Such an error rate might have large error bars if based on the earliest manuscripts - because of their smaller numbers."

The earliest manuscripts would be more likely to be copied from originals, so their lower numbers are likely to be offset by the fewer generations to the original. Just taking the limiting case of your remark, I don't think, for example, that the very first copy of the Gospel of Matthew is more likely to have errors than the 900th. It's likely to have fewer, in fact.

"Nor does it tell you that the process was equally good prior to the earliest generations we have. It likely wasn't."

Why is that? Human hands and eyes improved in that time? Did the tendency of human copyists to make 'helpful' (or deliberately unhelpful) changes to the source material go down?

"As more copies circulate, the population of copies should tend to agree more."

While this is true, it is not a problem for anything I've said. What you are describing here is why information can be faithfully transmitted through the noisy channel of the hand-copy process.

"Intentional edits are as much noise in the copy process as unintentional slips are."

Intentional edits will always be meaningful. That's the opposite of noise. Intentional edits will be written to blend with the surrounding text; noise will not.

"The earliest manuscripts would be more likely to be copied from originals, so their lower numbers are likely to be offset by the fewer generations to the original. Just taking the limiting case of your remark, I don't think, for example, that the very first copy of the Gospel of Matthew is more likely to have errors than the 900th. It's likely to have fewer, in fact."

Agreed: the earlier copies should - must? - have the fewest errors. But I was not talking about how many errors exist in a given generation.

I was talking about the uncertainty in our calculated copying error rate. This is not how accurate a copy is; it's how accurate the copying process is.

The copying error rate might be expressed as the number of errors introduced per generation per page. For example, if a copyist does 1000 pages and introduces 20 errors, then the copying error rate is 20/1000 = 0.02 errors per page per generation.

The copying error rate tells you nothing about how accurate a given generation is.

Because early manuscripts are fewer than later manuscripts, our calculated error rate for early manuscripts will be less certain. A smaller sample size always leads to more uncertainty.
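
To put rough numbers on that - using invented counts, just to show the effect of sample size:

```python
# A toy sketch: the same copying error rate estimated from a small sample
# versus a large one. The counts are invented for illustration.
from math import sqrt

def rate_with_error_bar(errors: int, pages: int) -> tuple[float, float]:
    """Errors per page per generation, with a rough Poisson standard error."""
    rate = errors / pages
    std_err = sqrt(errors) / pages if errors else float("nan")
    return rate, std_err

for errors, pages in [(2, 100), (20, 1000)]:    # same 0.02 point estimate
    rate, err = rate_with_error_bar(errors, pages)
    print(f"{pages:4d} pages: {rate:.3f} +/- {err:.3f} errors per page per generation")

# The 100-page sample gives the same rate but a much wider error bar.
```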

"Why is that? Human hands and eyes improved in that time?"

You ask why I think the copying process got better over time.

I already said: Early on, there were fewer copies around. One had a better chance of pushing an agenda.

Early on, there were fewer people involved. So there were fewer people to choose copyists from. As a new sport grows in popularity, its records fall rapidly.

Consider your chances of making a change as a copyist today. Zero.

"What you are describing here is why information can be faithfully transmitted through the noisy channel of the hand-copy process."

What I'm describing here is why Kruger and Epp can't know what they claim to know, i.e., that we have the original NT text.

Ron-

I think that your argument simply shows that the cleaning up of a signal in a noisy channel is not a linear function of time (which was never my claim).

I certainly don't want to put words in your mouth just so I can correct them.

My dispute is, again, with the above claim of Epp and Kruger.

Based on the evidence, the presuppositions of E & K are justified. Preserved content. The weight of evidence... presuppositions...


I agree with RonH's point.

The claim: "But the original text is not a physical object. The autographs contain the original text, but the original text can exist without them...." is coherent. If the presupposition of preservation is without merit, then, so too, is any presupposition of, or claim of, loss of content. RonH is not making this latter claim, that of loss of content.

And yet we find this challenge against the OT and NT all the time.

Now, should this challenge be, forever, dropped by scripture's critics, then, perhaps, this exercise would be pointless.

But there is great value in this thread nonetheless, for, we find that the presupposition of lack of preservation is just that: a presupposition.

Now, we all know how hard it is to prove and disprove a presupposition.

So, all that is left are the usual, everyday standards we all use in the typical fashion to evaluate "old documents". And the OT and NT, without equal, have passed all the usual tests by a far greater margin than others.

Evidence.

Presupposition.

It is reasonable to claim preservation, based on the evidence and based on the methodology by which we evaluate these sorts of documents.

RonH is correct on challenging the OP's presupposition, but, the presupposition RonH is challenging has more beefy evidence behind it than the opposite presupposition of lost-content, based on the methodology by which we answer such a question. And that is what this is all about. Evidence and how much it does, or does not, line up with one's presupposition.

Game-changers have never been shown, and RonH has not claimed they ever have been. But others insinuate such all the time, despite the evidence.

Whereas, more than reasonable performance on the sorts of exams old documents must suffer through has left the OT and NT at the head of the class.

I agree with RonH's point, which is why I am left choosing between two presuppositions: that of preservation, and that of lost content. Methodology and evidence come in at this point. I am left believing the evidence: the OT and NT have maintained content to a high degree. And, of course, the OT and NT are not the Quran, and thus, Truth transcending one's particular culture and language, we expect the Truth of Love's Ontology to be the content which is transmitted; the sort of "Bob went" vs. "went Bob" variations, which change neither message nor content in any holistic way, are, while problematic for the Quran, simply of no concern to the Christian.

E Pluribus Unum, that fully singular, that fully triune, Self-Other-Us, is the Truth of the matter, and, there is but one story on Earth wherein every implication of that Unchanging Truth is nuanced to the Nth degree in a seamlessly coherent Prescriptive-Descriptive. Such has been transmitted wholly, fully, intact. In fact, such is so unmistakable within Scripture's A-to-Z that the only way one could miss it is by intention.


I'm not sure you completely understand textual criticism, Ron (I've said this before). Manuscripts take us back to the early 2nd century AD, and there is evidence that the early church copied some of the original autographs almost from the beginning. There is not really a question among scholars whether the books of the NT are like the originals. However, some scholars do make the case that some epistles are forgeries in Paul's or Peter's name, and were not written by them at all. Honestly, most scholars think that 2nd Peter was written by Peter, etc. You can make that claim, but honestly, there is very little evidence for such a claim.

I guess you could say that the gospels are all made up, or that they distort the story. But that is then not a question of manuscript transmission. Scribes in the ancient world were actually extremely careful and did not usually add or take away text if not authorized. The argument you would need to make would be about the time between the composing of the epistles/gospels and the events (about 20 years in the case of Paul, and anywhere between 20 and 50 for the gospels).

So in those 20 years, you would need to show how the story got all muddled up and Jesus did not rise from the dead in the actual historical events. This is the line that Crossan and Ehrman take, for instance. Pretty much all scholars accept that by the time these documents were composed, the story as we have it was set in stone, and that what we have matches the original autographs. You can argue otherwise, but it does not seem highly likely that someone came in and made sweeping changes. In that case, they would have had to make sweeping changes to every single letter and story so as to add the resurrection!! That is just not very likely, and in order to believe such a thing, I would need quite a bit of persuasion. You also do have some scholars who argue that some letters were spliced together (like 2nd Corinthians), or that John's epilogue was tacked on at a later date (which could have just been that the writer finished the story but then felt compelled to add that last bit, too), but all of that is just conjecture. I think that, for the most part, using manuscripts across time, we have each epistle and gospel about 99 percent just as they were originally penned (depending on the translation).

Could they have been changed, say, between 50 AD and the first manuscripts we have from the early 2nd century? Such a thing is possible in theory, but it's just not highly likely. First of all, why would someone have done that? What motivation would they have had to do that? Second, why in the world would they have changed it into the story that they did? The story of Jesus is incredibly strange. If you think it is strange for us now, then it would have been even weirder in the first century. Third, you are presuming that the early Christian community would have accepted such a thing. Don't think for a minute that people in the ancient world were stupid (or at least any stupider than people now). They controlled traditions and writings just as they do now. That's why there are a host of apocalyptic forgeries and stories that were never accepted (as early as the second century) by the church. But of course, no one really argues that those forgeries (when I say forgery, I mean someone claiming to be the author, say Paul, when they are not) differ from the letter/gospel as it was originally penned. That is just unlikely due to the nature of scribes and transmission.

If you want to take a more realistic approach, then you would need to take Crossan's or Ehrman's approach, which is basically to say that these people had some ulterior motive, or that we misunderstand the language they are using, etc. Jesus did not really rise from the dead; this is just a tale these foolish pre-modern primitive people made up. Or they distorted the reality, or we misunderstood what they meant by resurrection. But there is a reason no one argues that Luke got completely changed and that the author's gospel did not look anything like what he penned. Just consider that Mark and Matthew were composed in another part of the world from where Paul was when he composed 1st Corinthians 15, and these accounts match up with the ideology and story that Paul tells in 1st Corinthians 15. Is it really that likely that someone, between 50 AD and the early 2nd century, added those two things both to Paul's letters and to those two gospels, and then managed to make sure that those versions were the only ones that ever circulated among the early Christian community? It just does not seem highly likely. It's more logical to assume that, since we have manuscripts dated to within a few decades of the autographs, we have the same basic stories as the originals.

After that, you can take a different tack and argue that Jesus didn't rise from the dead, but not using the argument you are making. No real Biblical scholar does that, even the extremely skeptical ones.

Ron, I thought I would address some of your comments since I do know a bit about this subject and help explain how scholars determine what gets in and what doesn't:

You might calculate an error rate. That would tend to apply to honest errors - characters, words, lines. It would not apply very well to intentional edits.

Scholars have in fact calculated such an error rate, and despite your belief, we read this stuff so much that we can tell if there is an intentional edit. Also, if there is an intentional edit (the story of the adulteress in John 7:53-8:11, for instance), the manuscript traditions will not agree. Believe it or not, the disagreements and intentional edits have all likely been found by now. There are arguments and disagreements about what might or might not be an intentional edit, though.

Such an error rate might have large error bars if based on the earliest manuscripts - because of their smaller numbers.

It's also highly unlikely that the early manuscripts had more intentional edits, because the writer would have been around to confirm or deny such edits.

And, calculating such an error rate doesn't tell you what was different in versions that were lost before the existing copies were made.

True, but see above. If Paul visited a community and saw that their copy of his letter was wrong, what do you think he would have done?

Nor does it tell you that the process was equally good prior to the earliest generations we have. It likely wasn't.

It likely was the same. Scribes are much more careful than you think they are. If you hired a scribe in the ancient world, their copy work was very diligent. It's pretty much the same process until the era of the codex. And then it was still pretty much the same.

As more copies circulate, the population of copies should tend to agree more.

It doesn't work like that at all. What happens over time is that you have manuscript groups. Let's say that manuscript A gets corrupted. Spawned from A, a whole group of manuscripts may emerge that carry the error. But manuscript B did not have the error, so the group of manuscripts descended from B will be copied without it. The only exception is if a scribe later on had copies from both manuscript groups and tried to "sync" them, which rarely happens because manuscript groups are spread across geography. Manuscript groups are important because we can often actually track where an error began to occur. Sometimes scribes felt compelled to correct these errors, and such corrections are usually apparent within a manuscript group as well.
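
A crude way to picture it, with made-up names and readings:

```python
# A toy illustration of manuscript groups: an error introduced in copy A is
# inherited by A's descendants (A1, A2) but not by B's line, so the witnesses
# cluster by reading. All names and readings are invented.
family = {
    "A":  "he said unto them",    # corrupted copy
    "A1": "he said unto them",    # copied from A, inherits the change
    "A2": "he said unto them",
    "B":  "he said to them",      # independent line, no error
    "B1": "he said to them",
}

groups: dict[str, list[str]] = {}
for name, reading in family.items():
    groups.setdefault(reading, []).append(name)

for reading, witnesses in groups.items():
    print(f"{reading!r}: {witnesses}")
# -> 'he said unto them': ['A', 'A1', 'A2']
#    'he said to them': ['B', 'B1']
```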

Why?

Because, if you can compare 3 versions and 2 agree, you throw out the third - other things being equal. Redundancy naturally helps with honest errors.

That is true, but you do not throw out the 3rd if you think it came from an earlier manuscript group that might be right. When evaluating errors, what is more important is this: which reading is more difficult? Because the scribe would have been more likely to change the difficult reading to smooth it out. So if they had three readings, they might have harmonized them to make them fit well. So that criterion is first and foremost the most important one. Manuscript groups are the second. If we have manuscripts earlier than the ones from group B that disagree with them, we might consider all manuscripts from group B to be tainted for that particular passage.

Also because the more copies in circulation, the harder it is to advance an agenda: you have to get your edit copied at a higher rate to catch up.

And there is strong evidence that they started copying Paul's letters from the get go.

Conversely, going back in time, as fewer copies circulate, the population will tend to agree less.

Not necessarily. If those few copies were copied highly faithfully, then there would be less disagreement. There is no way to know that.

Going back in time, redundancy helps less and less with honest errors.

Redundancy in the beginning holds the same as redundancy 500 years later. If there is an error in the beginning, it got copied over and over. If there is not, then it didn't.

The further you go back, and the more the manuscripts agree, the more accurate they are.

Advancing an agenda is easiest at the beginning - just where our data is nil.

Except for the fact that the people would have been alive to halt circulation of false manuscripts, which throws a major crimp in your idea. By the time they and their friends were dead, we already have manuscripts of the NT.

I don't know if anybody has calculated such an error rate for NT copying. I do know that, other things being equal, it will not be constant as the population of manuscripts changes.

No, it won't be constant, because we keep finding manuscripts. But for the most part it will be, because we can group them into manuscript groups and eliminate an error as part of its group. Since there are not many errors, the error rate won't change much. It's just an estimate, anyway. No one has even reviewed all the manuscripts.

Extrapolating back would be problematic for the honest class of errors and hopeless for the others. Hopeless.

Not so much. Most intentional addenda come much, much later. Such major errors probably would not have caught on in the beginning, because the writers of the early letters would have been alive, or at least people who knew them personally. If stories had been so drastically altered, the community itself would have thrown them out. Really, you must think pre-modern people are completely gullible, Ron. I have lived with pre-modern culture, and let me tell you: they are not so gullible.

Really, you are being an extreme skeptic here. It's one thing for you to be skeptical and not believe in God, but the argumentation you are advancing here is just really spotty and doesn't make sense. Also, this is much more complex than you are making it out to be.

Have to consider that RonH's point is valid.

There's that pesky blank interval between when the autograph came into being and the first known copies and/or citations.

The question is,

"So what"?

What conclusion do you draw, RonH? And what action does that conclusion lead you to take?

Goat Head 5

RonH, the problem is that for it even to be possible for someone to change all the copies--to remove some information from every single manuscript line--it would have to have happened right at the very, very beginning when there were only a couple of copies (or one) in a single location. But this is the least likely scenario of all, because at that early stage, 1) only the most trusted people who respected the documents and were trusted by the churches would have had access to the documents (they would have been protecting them), 2) the people who had seen the documents--who knew the people involved and were familiar with the situation from the beginning--would have known immediately if someone had tampered with them, and they would have stopped it, and 3) what reason would Christians in the churches in which the letters were being read have had to change anything at that point? To me, it just all sounds like a bizarre and extremely unlikely scenario. Furthermore, there's no evidence suggesting it and no good reason to believe it.

And once any of the texts was copied and circulated, there's no reason to think anyone could have expunged something from all of them. This is the whole point of the piece above. That is, there are enough lines of transmission that we can trust the original reading can be determined by comparing them, and there's no reason to think the true reading alone was never copied and didn't create its own line of transmission.

Well said, Amy.
