Theologian David R. Law writes that biblical scholars usually employ
textual,
source,
form, and
redaction criticism together. The Old Testament (the Hebrew Bible) and the New Testament, as distinct bodies of literature, each raise their own problems of interpretation, so the two are generally studied separately. For purposes of discussion, the individual methods are separated here and the Bible is addressed as a whole, but this is an artificial approach used only for description; it is not how biblical criticism is actually practiced.

== Textual criticism ==
Textual criticism is one of the largest areas of biblical criticism in terms of the sheer amount of information it addresses. The roughly 900 manuscripts found at Qumran include the oldest extant manuscripts of the Hebrew Bible. They represent every book except Esther, though most books appear only in fragmentary form. The
New Testament has been preserved in more manuscripts than any other ancient work, having over 5,800 complete or fragmented
Greek manuscripts, 10,000
Latin manuscripts and 9,300 manuscripts in various other ancient languages including
Syriac,
Slavic,
Gothic,
Ethiopic,
Coptic, and
Armenian texts. The dates of these manuscripts are generally accepted to range from c. 110–125 (the earliest papyrus) to the introduction of printing in Germany in the fifteenth century. There are also approximately one million direct New Testament quotations in the collected writings of the
Church Fathers of the first four centuries. (As a comparison, the next best-sourced ancient text is the
Iliad, traditionally attributed to the ancient Greek poet
Homer in the late eighth or early seventh century BCE, which survives in more than 1,900 manuscripts, though many are of a fragmentary nature.) The oldest existing fragment of New Testament papyrus contains phrases from the Gospel of John. These texts were all written by hand, by copying from another handwritten text, so they are not alike in the manner of printed works. The differences between them are called variants. A variant is simply any variation between two texts. Many variants are simple misspellings or mis-copyings: a scribe might drop one or more letters, skip a word or line, write one letter for another, or transpose letters. Some variants represent a scribal attempt to simplify or harmonize by changing a word or a phrase. The exact number of variants is disputed, but the more texts that survive, the more likely there will be variants of some kind. Variants are not evenly distributed throughout any set of texts. Charting the variants in the New Testament shows it is 62.9 percent variant-free. The impact of variants on the reliability of a single text is usually tested by comparing it to a manuscript whose reliability has been long established. Though many new early manuscripts have been discovered since 1881, there are critical editions of the Greek
New Testament, such as NA28 and UBS5, that "have gone virtually unchanged" despite these discoveries. "It also means that the fourth century 'best texts', the
'Alexandrian' codices
Vaticanus and
Sinaiticus, have roots extending throughout the entire third century and even into the second". The Alexandrian textual family is based on such codices. Variants are classified into families. Say scribe
A makes a mistake and scribe
B does not. Copies of scribe A's text with the mistake will thereafter contain that same mistake. Over time the texts descended from
A that share the error, and those from
B that do not share it, will diverge further, but later texts will still be identifiable as descended from one or the other because of the presence or absence of that original mistake. Forerunners of modern textual criticism can be found in both early
Rabbinic Judaism and in the
early church. Classical textual criticism involves two basic processes:
• Recension is the selection of the most trustworthy evidence on which to base a text.
• Emendation is the attempt to eliminate the errors found even in the best manuscripts.
Jerome McGann says these methods innately introduce a subjective factor into textual criticism despite its attempt at objective rules. Alan Cooper discusses this difficulty using the example of
Amos 6:12, which reads: "Does one plough with oxen?" The obvious answer is "yes", but the context of the passage seems to demand a "no". Cooper explains that a recombination of the consonants allows it to be read "Does one plough
the sea with oxen?" The emendation has a basis in the text, which is believed to be corrupted, but is nevertheless a matter of personal judgment. This contributes to textual criticism being one of the most contentious areas of biblical criticism, as well as the largest, with scholars such as Arthur Verrall referring to it as the "fine and contentious art". It uses specialized methodologies, enough specialized terms to create its own lexicon, and a number of guiding principles, yet any of these principles, and the conclusions drawn from them, can be contested. For example, in the late 1700s, textual critic
Johann Jacob Griesbach (1745–1812) developed fifteen critical principles for determining which texts are likely the oldest and closest to the original. Latin scholar Albert C. Clark challenged Griesbach's preference for the shorter reading in 1914. Some twenty-first century scholars have advocated abandoning these older approaches to textual criticism in favor of new computer-assisted methods for determining manuscript relationships in a more reliable way.

The French physician Jean Astruc presumed in 1753 that Moses had written the book of Genesis (the first book of the Pentateuch) using ancient documents; he attempted to identify these original sources and to separate them again.
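The computer-assisted methods for determining manuscript relationships mentioned above generally formalize the genealogical principle described earlier: witnesses that share the same variants probably share an ancestor. A minimal illustrative sketch, using invented sigla and readings rather than real manuscript data, might group witnesses by agreement like this:

```python
# Illustrative sketch: grouping manuscript witnesses into families by
# shared readings. Sigla (W1-W4) and readings are invented for the
# example; real tools use far richer models of variation and descent.
from itertools import combinations

# Each witness records its reading at a handful of variation units.
witnesses = {
    "W1": ["a", "b", "c", "d"],
    "W2": ["a", "b", "c", "e"],   # close to W1
    "W3": ["x", "b", "y", "d"],
    "W4": ["x", "b", "y", "e"],   # close to W3
}

def agreement(u, v):
    """Fraction of variation units where two witnesses agree."""
    return sum(r1 == r2 for r1, r2 in zip(u, v)) / len(u)

# Pairwise agreement table, keyed by alphabetically ordered pairs.
scores = {
    (a, b): agreement(witnesses[a], witnesses[b])
    for a, b in combinations(sorted(witnesses), 2)
}

# Naive clustering: a witness joins a family only if it agrees with
# every member above the threshold, otherwise it starts a new family.
threshold = 0.7
families = []
for w in sorted(witnesses):
    for fam in families:
        if all(scores[(min(w, m), max(w, m))] >= threshold for m in fam):
            fam.append(w)
            break
    else:
        families.append([w])

print(families)  # W1/W2 form one family, W3/W4 another
```

This is only a toy: actual stemmatic software must also weigh which agreements are in error (agreement in a shared mistake is far stronger evidence of kinship than agreement in a correct reading) and handle contamination between lines of descent.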
== Source criticism of the Old Testament: Wellhausen's hypothesis ==
Source criticism's most influential work is
Julius Wellhausen's
Prolegomena zur Geschichte Israels (
Prolegomena to the History of Israel, 1878), which sought to establish the sources of the first five books of the Old Testament, collectively known as the Pentateuch. The
Wellhausen hypothesis (also known as the
JEDP theory, or the
Documentary hypothesis, or the Graf–Wellhausen hypothesis) proposes that the Pentateuch was combined from four separate and coherent sources (not fragments). J stands for the
Yahwist source, (
Jahwist in German), and was considered to be the most primitive in style and therefore the oldest. E (for
Elohist) was thought to be a product of the Northern Kingdom before 721 BCE; D (for
Deuteronomist) was said to be written shortly before it was found in 621 BCE by King
Josiah of Judah (
2 Chronicles 34:14–30). P (for the Priestly source) was held to be the latest of the four. Later scholars added to and refined Wellhausen's theory. For example, the
Newer Documentary Thesis inferred more sources, with increasing information about their extent and inter-relationship. The
fragmentary theory was a later alternative, developed under the influence of form criticism, which argues that fragments of documents, rather than continuous, coherent documents, are the sources for the Pentateuch. Its proponents argue that this approach explains the peculiar character of the material labeled P, which reflects the perspective and concerns of Israel's priests. Wellhausen's theory went virtually unchallenged until the 1970s, when it began to be heavily criticized. By the end of the 1970s and into the 1990s, "one major study after another, like a series of hammer blows, has rejected the main claims of the Documentary theory, and the criteria on the basis of which they were argued". Problems and criticisms of the Documentary hypothesis have been raised by literary analysts, who point out the error of judging ancient Eastern writings as if they were the products of western European Protestants; by advances in anthropology that undermined Wellhausen's assumptions about how cultures develop; and by various archaeological findings showing that the cultural environment of the early Hebrews was more advanced than Wellhausen thought. As a result, few biblical scholars of the twenty-first century hold to Wellhausen's Documentary hypothesis in its classical form.
== Source criticism of the New Testament: the synoptic problem ==
In New Testament studies, source criticism has taken a slightly different approach from Old Testament studies, focusing on identifying the common sources of multiple texts rather than the multiple sources of a single set of texts. The gospels of
Matthew,
Mark and
Luke are partially dependent on each other and partially independent of each other. This is called the
synoptic problem, and explaining it is the single greatest dilemma of New Testament source criticism. Any explanation offered must "account for (a) what is common to all the Gospels; (b) what is common to any two of them; (c) what is peculiar to each". Multiple theories exist to address the dilemma, with none universally agreed upon, but the
two-source hypothesis is the most common, though alternatives like the
Farrer hypothesis are increasing in popularity. Mark is the shortest of the four gospels, with only 661 verses, but 600 of those verses appear in Matthew and 350 of them in Luke. Some of this shared material is verbatim. Most scholars agree this indicates that Mark was a source for Matthew and Luke. There is also some verbatim agreement between Matthew and Luke in verses not found in Mark. In 1838, the religious philosopher
Christian Hermann Weisse developed a theory about this. He postulated a hypothetical collection of the sayings of Jesus from
an additional source called Q, taken from
Quelle, which is German for "source". Biblical scholar
B. H. Streeter used this insight to refine and expand the two-source theory into a four-source theory in 1925, positing additional M and L sources behind the material unique to Matthew and to Luke. The study of such pre-Gospel sources has since declined in scholarship, with most scholars today rejecting unified M and L sources.
=== Two-source theory critique ===
While the two-source theory remains the most popular solution, some scholars say it does not solve the synoptic problem satisfactorily, and theories that Luke used Matthew directly (or vice versa) without Q are increasing in popularity within scholarship.
Donald Guthrie says no single theory offers a complete solution as there are complex and important difficulties that create challenges to every theory. One example is
Basil Christopher Butler's challenge to the legitimacy of the two-source theory: Butler argued that it rests on a
Lachmann fallacy, and that it loses cohesion once it is acknowledged that no source can be established for Mark.
F. C. Grant posits multiple sources for the Gospels.

== Form criticism ==
Bible scholar
Richard Bauckham says this "most significant insight," which established the foundation of form criticism, has never been refuted.
Hermann Gunkel (1862–1932) and
Martin Dibelius (1883–1947) built from this insight and pioneered form criticism. By the 1950s and 1960s,
Rudolf Bultmann and form criticism were the "center of the theological conversation in both Europe and North America". Form criticism breaks biblical passages down into short units called
pericopes, which are then classified by genre: prose or verse, letters, laws, court archives, war hymns, poems of lament, and so on. Form criticism then theorizes concerning the individual pericope's
Sitz im Leben ("setting in life" or "place in life"). Based on their understanding of
folklore, form critics believed the early Christian communities formed the sayings and teachings of Jesus themselves, according to their needs (their "situation in life"), and that each form could be identified by the situation in which it had been created and vice versa.
=== Critique of form criticism ===
In the early- to mid-twentieth century, form critics thought finding oral "laws of development" within the New Testament would prove their assertion that the texts had evolved within the early Christian communities according to their
Sitz im Leben. Since Mark was believed to be the first gospel, form critics looked for the addition of proper names for anonymous characters, indirect discourse turned into direct quotation, and the elimination of Aramaic terms and forms, with details becoming more concrete in Matthew, and then more so in Luke. Instead, in the 1970s, New Testament scholar
E. P. Sanders wrote: "There are no hard and fast laws of the development of the Synoptic tradition... On all counts the tradition developed in opposite directions. It became both longer and shorter, both more and less detailed, and both more and less Semitic". The form critics' skeptical presuppositions rested largely on their understanding of oral transmission and folklore, and during the latter half of the twentieth century, field studies of cultures with living oral traditions undermined many of those presuppositions. According to religion scholar
Werner H. Kelber, form critics throughout the mid-twentieth century were so focused on finding each pericope's original form that they neglected any serious consideration of memory as a dynamic force in the construction of the gospels or the early church community tradition. Recognition of what Kelber calls the "astounding myopia" of the form critics has revived interest in memory as an analytical category within biblical criticism.

== Redaction criticism ==
As in source criticism, it is necessary to identify the traditions before determining how the redactor used them. Redaction critics reject source and form criticism's description of the biblical texts as mere collections of fragments. Where form critics fracture the biblical elements into smaller and smaller individual pieces, redaction critics attempt to interpret the whole literary unit. In the New Testament, redaction critics attempt to discern the original author/evangelist's theology by focusing on the differences between the gospels, yet it is unclear whether every difference has theological meaning, how much meaning, or whether a given difference is stylistic or even accidental. Nor is it clear that a given difference was made by the evangelist, who could have used an already changed version of the story when writing a gospel.
Mark Goodacre says: "Some scholars have used the success of redaction criticism as a means of supporting the existence of Q, but this will always tend toward circularity, particularly given the hypothetical nature of Q which itself is reconstructed by means of redaction criticism". Hans Frei proposed that "biblical narratives should be evaluated on their own terms" rather than by taking them apart in the manner in which we evaluate works of philosophy or history. Biblical scholar
Paul R. House says the discipline of linguistics, new views of historiography, and the decline of older methods of criticism were also influential in that shift.
Brevard S. Childs (1923–2007) proposed an approach to bridge that gap, which came to be called canonical criticism. Canonical criticism "signaled a major and enduring shift in biblical studies". John Barton says that canonical criticism does not simply ask what the text might originally have meant; it asks what it means to the current believing community, and it does so in a manner different from any type of historical criticism. Canonical critics approach the books as whole units instead of focusing on pieces. They accept that many texts were composed over long periods of time, but the canonical critic wishes "to interpret the
last edition of a biblical book" and then relate books to each other.

== Rhetorical criticism ==
It is
Herbert A. Wichelns who is credited with "creating the modern discipline of rhetorical criticism" with his 1925 essay
"The Literary Criticism of Oratory". In that essay, Wichelns says that rhetorical criticism and other types of literary criticism differ from each other because rhetorical criticism is only concerned with "effect. It regards a speech as a communication to a specific audience, and holds its business to be the analysis and appreciation of the orator's method of imparting his ideas to his hearers".
Phyllis Trible, a student of James Muilenburg (whose work brought rhetorical criticism into biblical studies), has become one of the leaders of rhetorical criticism and is known for her detailed literary analysis and her
feminist critique of biblical interpretation.
== Narrative criticism ==
In the last half of the twentieth century, historical critics began to recognize that being limited to the historical meant the Bible was not being studied in the manner of other ancient writings. In 1974, Hans Frei pointed out that a historical focus neglects the "narrative character" of the gospels. Critics began asking whether these texts should be understood on their own terms before being used as evidence of something else. Christopher T. Paris says that "narrative criticism admits the existence of sources and redactions but chooses to focus on the artistic weaving of these materials into a sustained narrative picture". Narrative criticism was first used to study the New Testament in the 1970s, with the works of
David Rhoads,
Jack D. Kingsbury, R. Alan Culpepper, and Robert C. Tannehill. Stephen D. Moore has written that "as a term, narrative criticism originated within biblical studies", but its method was borrowed from
narratology. == Legacy ==