Types of conclusions


What is a logical conclusion?

  • When an arguer's conclusion is a recommendation to do something, the arguer will often provide just one good reason to do that thing. One thing to be aware of here is the assumption that the benefits outweigh the drawbacks.
  • When an arguer's conclusion is a prediction, the arguer may be assuming that the current evidence will remain unchanged in the future.

Comparisons

  • It’s clear that this year’s candidate is stronger than last year’s candidate.
  • It’s clear that this year’s candidate understands the public’s wishes better than she did a year ago.

Causation

  • Last night, I took cough medicine and today I feel much better. So that cough medicine is really effective. (Cause: cough medicine; effect: feeling better)
  • Jonathan gets good grades without trying very hard, and his teachers have said multiple times how much they like him. The only possible way that Jonathan maintains his good grades is because of how much his teachers like him. (Cause: teachers liking Jonathan; effect: good grades)

Assessments

  • The flower is beautiful.
  • This policy is very helpful.
  • The outcome will be important.

Recommendations

  • In treating this disease, then, physicians should favor Treatment X.
  • It’s likely that extending the warranty is the only way to gain new customers.

Predictions

  • Obviously, the tennis match will be rescheduled.
  • Our homeless population may not be reduced by next year.

Simple Beliefs

  • It’s clear that the student cheated on the test.
  • The thief is probably still in the house somewhere.

Degrees of conclusion

Conclusions may be definite or indefinite. Indefinite conclusions are softened by qualifying words that signal degree, for example:

  • Likelihood: likely, unlikely, possible, could, might
  • Quantity: some, most, more
  • Frequency: rarely, seldom, often, sometimes, usually
  • Proximity: almost, nearly


1.1 Introduction

Hermione Granger got it right when, facing the potion-master's test in Harry Potter, she said: "This isn't magic - it's logic - a puzzle. A lot of the greatest wizards haven't got an ounce of logic; they'd be stuck here forever."

In the real world, we are better off. We use Logic in just about everything we do. We use it in our professional lives - in proving mathematical theorems, in debugging computer programs, in medical diagnosis, and in legal reasoning. And we use it in our personal lives - in solving puzzles, in playing games, and in doing school assignments, not just in Math but also in History and English and other subjects.

Just because we use Logic does not mean we are necessarily good at it. Thinking correctly and effectively requires training in Logic, just as writing well requires training in English and composition. Without explicit training, we are likely to be unsure of our conclusions; we are prone to make mistakes; and we are apt to be fooled by others.

The ancient Greeks thought Logic sufficiently important that it was one of the three subjects in the Greek educational Trivium, along with Grammar and Rhetoric. Oddly, Logic occupies a relatively small place in the modern school curriculum. We have courses in the Sciences and various branches of Mathematics, but very few secondary schools offer courses in Logic; and it is not required in most university programs.

Given the importance of the subject, this is surprising. Calculus, for example, is important to physics and is widely taught at the high-school level. Logic is important across the sciences and mathematics, and it is essential in computer science. Yet it is rarely offered as a standalone course, leaving students without explicit training in a skill they are expected to use everywhere.

This course is a basic introduction to Logic. It is intended primarily for university students. However, it has been used by motivated secondary school students and post-graduate professionals interested in honing their logical reasoning skills.

There are just two prerequisites. The course presumes that the student understands sets and set operations, such as union, intersection, and complement. The course also presumes that the student is comfortable with symbolic mathematics, at the level of high-school algebra. Nothing else is required.

This chapter is an overview of the course. We start with a look at the essential elements of logic - logical sentences, logical entailment, and logical proofs. We then see some of the problems with the use of natural language and see how those problems can be mitigated through the use of Symbolic Logic. Finally, we discuss the automation of logical reasoning and some of the computer applications that this makes possible.

1.2 Logical Sentences

For many, Logic seems an esoteric subject - something used primarily by mathematicians in proving complicated theorems in geometry or number theory, all about writing formal proofs to be published in scholarly papers that have little to do with everyday life. Nothing could be further from the truth.

As an example of using Logic in everyday life, consider the interpersonal relations of a small group of friends. There are just four members - Abby, Bess, Cody, and Dana. Some of the girls like each other, but some do not.

The figure on the left below shows one set of possibilities. The checkmark in the first row here means that Abby likes Cody, while the absence of a checkmark means that Abby does not like the other girls (including herself). Bess likes Cody too. Cody likes everyone but herself. And Dana also likes the popular Cody. Of course, this is not the only possible state of affairs. The figure on the right shows another possible world. In this world, every girl likes exactly two other girls, and every girl is liked by just two girls.
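Since the figures themselves are not reproduced here, the world on the left can be encoded directly from the prose description - for example, as a set of (liker, liked) pairs in Python. This is just one possible encoding, chosen for illustration:

```python
GIRLS = ["abby", "bess", "cody", "dana"]

# The world on the left, encoded from the prose description:
# Abby likes Cody; Bess likes Cody; Cody likes everyone but herself;
# Dana likes Cody.
world_left = {
    ("abby", "cody"),
    ("bess", "cody"),
    ("cody", "abby"), ("cody", "bess"), ("cody", "dana"),
    ("dana", "cody"),
}

def likes(world, a, b):
    """True if a likes b in the given world."""
    return (a, b) in world

print(likes(world_left, "dana", "cody"))   # True
print(likes(world_left, "abby", "abby"))   # False
```

Any other state of affairs, such as the world on the right, would simply be a different set of pairs.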

Let's assume that we do not know the likes and dislikes of the girls ourselves but we have informants who are willing to tell us about them. Each informant knows a little about the likes and dislikes of the girls, but no one knows everything.

This is where Logic comes in. By writing logical sentences, each informant can express exactly what he or she knows - no more, no less. The following sentences are examples of different types of logical sentences. The first sentence is straightforward; it tells us directly that Dana likes Cody. The second and third sentences tell us what is not true without saying what is true. The fourth sentence says that one condition holds or another but does not say which. The fifth sentence gives a general fact about the girls Abby likes. The sixth sentence expresses a general fact about Cody's likes. The last sentence says something about everyone.

Looking at the worlds above, we see that all of these sentences are true in the world on the left. By contrast, several of the sentences are false in the world on the right. Hence, we can rule out the second world.

Of course, in general, there are more than two possible worlds to consider. As it turns out, there are quite a few possibilities. Given four girls, there are sixteen possible instances of the likes relation - Abby likes Abby, Abby likes Bess, Abby likes Cody, Abby likes Dana, Bess likes Abby, and so forth. Each of these sixteen can be either true or false. There are 2^16 (65,536) possible combinations of these true-false possibilities, and so there are 2^16 possible worlds.
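The counting argument above is easy to verify in a couple of lines of Python:

```python
GIRLS = ["abby", "bess", "cody", "dana"]

# One instance of the likes relation per ordered pair of girls
# (a girl may or may not like herself), giving 4 * 4 = 16 instances.
pairs = [(a, b) for a in GIRLS for b in GIRLS]
print(len(pairs))        # 16

# Each instance is independently true or false,
# so there are 2**16 possible worlds.
print(2 ** len(pairs))   # 65536
```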

Logical sentences like the ones above constrain the possible ways the world could be. Each sentence divides the set of possible worlds into two subsets, those in which the sentence is true and those in which the sentence is false, as suggested by the following figure. Believing a sentence is tantamount to believing that the world is in the first set.

Given two sentences, we know the world must be in the intersection of the set of worlds in which the first sentence is true and the set of worlds in which the second sentence is true.

Ideally, when we have enough sentences, we know exactly how things stand.

Effective communication requires a language that allows us to express what we know, no more and no less. If we know the state of the world, then we should write enough sentences to communicate this to others. If we do not know which of various ways the world could be, we need a language that allows us to express only what we know, i.e. which worlds are possible and which are not. The language of Logic gives us a means to express incomplete information when that is all we have and to express complete information when full information is available.

1.3 Logical Entailment

Once we know which world is correct, we can see that some sentences must be true even though they are not included in the premises we are given. For example, in the first world we saw above, we can see that Bess likes Cody, even though we are not told this fact explicitly. Similarly, we can see that Abby does not like Bess.

Unfortunately, things are not always so simple. Although logical sentences can sometimes pinpoint a specific world from among many possible worlds, this is not always the case. Sometimes, a collection of sentences only partially constrains the world. For example, there are four different worlds that satisfy the sentences in the previous section.

In situations like this, which world should we use in answering questions? The good news is that sometimes it does not matter. Even though a set of sentences does not determine a unique world, there are some sentences that have the same truth value in every world that satisfies the given sentences, and we can use that value in answering questions.

This is logical entailment. We say that a set of premises logically entails a conclusion if and only if every world that satisfies the premises also satisfies the conclusion.

What can we conclude from the bits of information in our sample logical sentences? Quite a bit, as it turns out. In our example, we see that Bess likes Cody in all four worlds. We also see Bess does not like Abby in all four worlds. Does Dana like Bess? Since there are different values in different worlds, we cannot say yes and we cannot say no. All we can say is Maybe. In the real world, either Dana likes Bess or she doesn't. However, we do not have enough information to say which case is correct.

Model checking is the process of examining the set of all worlds to determine logical entailment. To check whether a set of sentences logically entails a conclusion, we use our premises to determine which worlds are possible and then examine those worlds to see whether or not they satisfy our conclusion. If the number of worlds is not too large, this method works well.
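As a concrete sketch of this procedure, the following Python fragment enumerates all 2^16 Sorority World states and tests entailment by brute force. The two premises here are hypothetical stand-ins, since the book's actual sentences appear in figures not reproduced in this text:

```python
from itertools import product

# A minimal model checker for the Sorority World. A "world" is the set
# of (liker, liked) pairs that hold in it; with four girls there are
# 16 pairs and 2**16 possible worlds.
GIRLS = ["abby", "bess", "cody", "dana"]
PAIRS = [(a, b) for a in GIRLS for b in GIRLS]

def all_worlds():
    """Enumerate every possible world as a set of true pairs."""
    for bits in product([False, True], repeat=len(PAIRS)):
        yield {p for p, bit in zip(PAIRS, bits) if bit}

def entails(premises, conclusion):
    """Premises entail the conclusion iff the conclusion holds in
    every world that satisfies all of the premises."""
    return all(conclusion(w)
               for w in all_worlds()
               if all(p(w) for p in premises))

# Hypothetical premises for illustration:
premises = [
    lambda w: ("dana", "cody") in w,  # Dana likes Cody.
    # Each girl likes Cody if and only if she likes Dana.
    lambda w: all((((g, "cody") in w) == ((g, "dana") in w)) for g in GIRLS),
]

print(entails(premises, lambda w: ("dana", "dana") in w))  # True
print(entails(premises, lambda w: ("abby", "cody") in w))  # False
```

The first conclusion holds in every satisfying world (Dana likes Cody, hence by the second premise she likes Dana too); the second has different values in different worlds, so all we can say is Maybe.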

Unfortunately, in general, there are many, many possible worlds; and, in some cases, the number of possible worlds is infinite, in which case model checking is impossible. So what do we do? The answer is logical reasoning and logical proofs.

1.4 Logical Proofs

Logical proofs are analogous to derivations in algebra. We can try solving algebraic equations by randomly trying different values for the variables in those equations. However, we can usually get to an answer faster by manipulating our equations syntactically. Logical reasoning is similar. Rather than checking all worlds, we simply apply syntactic operations to the premises we are given to generate conclusions.

One of Aristotle's great contributions to philosophy was the identification of syntactic operations that always lead from true premises to true conclusions. Such operations are typically called rules of inference. By applying rules of inference to premises, we produce conclusions that are entailed by those premises. A proof is a sequence of such rule applications. We can think of individual reasoning steps as the atoms out of which proof molecules are built.

As an example of a rule of inference, consider the reasoning step shown below. We know that all Accords are Hondas, and we know that all Hondas are Japanese cars. Consequently, we can conclude that all Accords are Japanese cars.

Now consider another example. We know that all borogoves are slithy toves, and we know that all slithy toves are mimsy. Consequently, we can conclude that all borogoves are mimsy. What's more, in order to reach this conclusion, we do not need to know anything about borogoves or slithy toves or what it means to be mimsy.

What is interesting about these examples is that they share the same reasoning structure, viz. the pattern: All x are y. All y are z. Therefore, all x are z.

Two things are worthy of note here. First of all, correctness in logical reasoning is determined by the logical operators in our sentences, not the objects and relationships mentioned in those sentences. Second, the conclusion is guaranteed to be true only if the premises are true.

The philosopher Bertrand Russell summed this situation up as follows: Logic may be defined as the subject in which we never know what we are talking about nor whether what we are saying is true. We do not need to know anything about the concepts in our premises except for the information expressed in those premises. Furthermore, while our conclusion must be true if our premises are true, it can be false if one or more of our premises is false.

The existence of reasoning patterns is fundamental in Logic but raises important questions. Which rules of inference are correct? Are there many such patterns or just a few?

Let us consider the first of these questions. Obviously, there are patterns that are just plain wrong in the sense that they can lead to incorrect conclusions. Consider, as an example, the faulty reasoning pattern: All x are y. Some y are z. Therefore, some x are z.

Now let us take a look at an instance of this pattern. If we replace x by Toyotas and y by cars and z by made in America, we get the following line of argument, leading to a conclusion that happens to be correct: All Toyotas are cars. Some cars are made in America. Therefore, some Toyotas are made in America.

On the other hand, if we replace x by Toyotas and y by cars and z by Porsches, we get a line of argument leading to a conclusion that is questionable: All Toyotas are cars. Some cars are Porsches. Therefore, some Toyotas are Porsches.

What distinguishes a correct pattern from an incorrect one is that a correct pattern must always lead to correct conclusions, i.e. its conclusions must be correct so long as the premises on which they are based are correct. As we will see, this is the defining criterion for what we call deduction.

Now, it is noteworthy that there are patterns of reasoning that are not always correct but are sometimes useful. There is induction, abduction, analogy, and so forth.

Induction is reasoning from the particular to the general. A classic illustration: every raven we have observed so far is black; therefore, all ravens are black. In this case, the induction is incomplete. Although we have seen many ravens, we have not seen them all. However, if we see enough cases in which something is true and we never see a case in which it is false, we tend to conclude that it is always true. Unfortunately, when induction is incomplete, as in this case, it is not sound. There might be an albino raven that we have not yet seen.

Incomplete induction is the basis for Science (and machine learning); deduction is the subject matter of Logic. Science aspires to discover and propose new knowledge; Logic aspires to apply and analyze existing knowledge.

This distinction was at the heart of a famous disagreement between the physicist Albert Einstein and his contemporary Niels Bohr, in which Bohr, a fan of induction, derided what he saw as Einstein's excessive emphasis on deduction. He reputedly told Einstein: You are not thinking; you are just being logical.

Of all types of reasoning, deduction is the only one that guarantees its conclusions in all cases: it produces only those conclusions that are logically entailed by one's premises.

In talking about Logic, we now have two notions - logical entailment and provability. A set of premises logically entails a conclusion if and only if every possible world that satisfies the premises also satisfies the conclusion. A sentence is provable from a set of premises if and only if there is a finite sequence of sentences in which every element is either a premise or the result of applying a deductive rule of inference to earlier members in the sequence.

These concepts are quite different. One is based on possible worlds; the other is based on symbolic manipulation of expressions. Yet, for "well-behaved" logics, it turns out that logical entailment and provability are identical - a set of premises logically entails a conclusion if and only if the conclusion is provable from the premises. Even if the number of worlds is infinite, it is possible in such logics to produce a finite proof of the conclusion, i.e. we can determine logical entailment without going through all possible worlds. This is a very big deal.

1.5 Symbolic Logic

So far, we have illustrated everything with sentences in English. While natural language works well in many circumstances, it is not without its problems. Natural language sentences can be complex; they can be ambiguous; and failing to understand the meaning of a sentence can lead to errors in reasoning.

As an example of ambiguity, suppose I were to write the sentence There's a girl in the room with a telescope. This sentence has two possible meanings. Am I saying that there is a girl in a room containing a telescope? Or am I saying that there is a girl in the room and she is holding a telescope?

Such complexities and ambiguities can sometimes be humorous if they lead to interpretations the author did not intend; newspaper headlines with multiple readings are an infamous example. Using a formal language eliminates such unintentional ambiguities (and, for better or worse, avoids any unintentional humor as well).

As an illustration of errors that arise in reasoning with sentences in natural language, consider the following examples. In the first, we use the transitivity of the better relation to derive a conclusion about the relative quality of champagne and soda from the relative quality of champagne and beer and the relative quality of beer and soda: Champagne is better than beer. Beer is better than soda. Therefore, champagne is better than soda. So far so good.

This makes sense. It is an example of a general rule about the transitivity of the better relation. If x is better than y and y is better than z, then x is better than z.

Now, consider what happens when we apply this rule in the case illustrated below. Bad sex is better than nothing. Nothing is better than good sex. Therefore, bad sex is better than good sex. Really?

The form of the argument is the same as in the previous example, but the conclusion is somewhat less believable. The problem in this case is that the use of nothing here is syntactically similar to the use of beer in the preceding example, but in English it means something entirely different.

Logic eliminates these difficulties through the use of a formal language for encoding information. Given the syntax and semantics of this formal language, we can give a precise definition for the notion of logical conclusion. Moreover, we can establish precise reasoning rules that produce all and only logical conclusions.

In this regard, there is a strong analogy between the methods of Formal Logic and those of high school algebra. To illustrate this analogy, consider the following algebra problem.

Xavier is three times as old as Yolanda. Xavier's age and Yolanda's age add up to twelve. How old are Xavier and Yolanda?

Typically, the first step in solving such a problem is to express the information in the form of equations. If we let x represent the age of Xavier and y represent the age of Yolanda, we can capture the essential information of the problem with two equations: x = 3y and x + y = 12.

Using the methods of algebra, we can then manipulate these expressions to solve the problem. First we subtract the second equation from the first, which yields -4y = -12.

Next, we divide each side of the resulting equation by -4 to get a value for y, namely y = 3. Then, substituting back into one of the preceding equations, we get a value for x, namely x = 9.
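For readers who want to check the arithmetic, here is a quick sketch in Python (the equations x = 3y and x + y = 12 follow directly from the problem statement):

```python
# Quick check of the algebraic derivation.
y = -12 // -4       # divide both sides of -4y = -12 by -4
x = 3 * y           # Xavier is three times as old as Yolanda
assert x + y == 12  # and their ages indeed add up to twelve
print(x, y)         # 9 3
```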

Now, consider the following logic problem.

If Mary loves Pat, then Mary loves Quincy. If it is Monday and raining, then Mary loves Pat or Quincy. If it is Monday and raining, does Mary love Quincy?

As with the algebra problem, the first step is formalization. Let p represent the possibility that Mary loves Pat; let q represent the possibility that Mary loves Quincy; let m represent the possibility that it is Monday; and let r represent the possibility that it is raining.

With these abbreviations, we can represent the essential information of this problem with the following logical sentences. The first says that p implies q, i.e. if Mary loves Pat, then Mary loves Quincy. The second says that m and r implies p or q, i.e. if it is Monday and raining, then Mary loves Pat or Mary loves Quincy.

As with Algebra, Formal Logic defines certain operations that we can use to manipulate expressions. The operation shown below is a variant of what is called Propositional Resolution . The expressions above the line are the premises of the rule, and the expression below is the conclusion.

There are two elaborations of this operation. (1) If a proposition on the left hand side of one sentence is the same as a proposition on the right hand side of the other sentence, it is okay to drop the two symbols, with the proviso that only one such pair may be dropped. (2) If a constant is repeated on the same side of a single sentence, all but one of the occurrences can be deleted.

We can use this operation to solve the problem of Mary's love life. Looking at the two premises above, we notice that p occurs on the left-hand side of one sentence and the right-hand side of the other. Consequently, we can cancel the p and thereby derive the conclusion that, if it is Monday and raining, then Mary loves Quincy or Mary loves Quincy.

Dropping the repeated symbol on the right hand side, we arrive at the conclusion that, if it is Monday and raining, then Mary loves Quincy.
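The derivation above can be sketched in code. This is a minimal illustration, not the book's notation: an implication is represented as a pair of sets (left-hand side, right-hand side), which also makes the duplicate-dropping elaboration automatic, since sets collapse "q or q" to "q":

```python
# A minimal sketch of the restricted Propositional Resolution rule.
# The pair (frozenset({"m", "r"}), frozenset({"p", "q"})) stands for
# "if m and r, then p or q".

def resolve(s1, s2, shared):
    """Resolve two implications on a proposition that appears on the
    left-hand side of s1 and the right-hand side of s2, dropping it."""
    left1, right1 = s1
    left2, right2 = s2
    assert shared in left1 and shared in right2
    return ((left1 - {shared}) | left2, right1 | (right2 - {shared}))

# Premises: p => q, and (m and r) => (p or q).
p_implies_q = (frozenset({"p"}), frozenset({"q"}))
mr_implies_pq = (frozenset({"m", "r"}), frozenset({"p", "q"}))

# Cancel p: the result is (m and r) => q.
conclusion = resolve(p_implies_q, mr_implies_pq, "p")
print(sorted(conclusion[0]), sorted(conclusion[1]))  # ['m', 'r'] ['q']
```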

This example is interesting in that it showcases our formal language for encoding logical information. As with algebra, we use symbols to represent relevant aspects of the world in question, and we use operators to connect these symbols in order to express information about the things those symbols represent.

The example also introduces one of the most important operations in Formal Logic, viz. Resolution (in this case a restricted form of Resolution). Resolution has the property of being complete for an important class of logic problems, i.e. it is the only operation necessary to solve any problem in the class.

1.6 Automation

The existence of a formal language for representing information and the existence of a corresponding set of mechanical manipulation rules together have an important consequence, viz. the possibility of automated reasoning using digital computers.

The idea is simple. We use our formal representation to encode the premises of a problem as data structures in a computer, and we program the computer to apply our mechanical rules in a systematic way. The rules are applied until the desired conclusion is attained or until it is determined that the desired conclusion cannot be attained. (Unfortunately, in some cases, this determination cannot be made; and the procedure never halts. Nevertheless, as discussed in later chapters, the idea is basically sound.)

Although the prospect of automated reasoning has achieved practical realization only in the last few decades, it is interesting to note that the concept itself is not new. In fact, the idea of building machines capable of logical reasoning has a long tradition.

One of the first individuals to give voice to this idea was Leibniz. He conceived of "a universal algebra by which all knowledge, including moral and metaphysical truths, can some day be brought within a single deductive system". Having already perfected a mechanical calculator for arithmetic, he argued that, with this universal algebra, it would be possible to build a machine capable of rendering the consequences of such a system mechanically.

Boole gave substance to this dream in the 1800s with the invention of Boolean algebra, which reduced a significant portion of logical reasoning to algebraic calculation that could, in principle, be carried out mechanically.

The early twentieth century brought additional advances in Logic, notably the formalization of the predicate calculus by Russell and Whitehead and the proofs of the corresponding completeness and incompleteness theorems by Gödel in the 1930s.

The advent of the digital computer in the 1940s gave increased attention to the prospects for automated reasoning. Research in artificial intelligence led to the development of efficient algorithms for logical reasoning, highlighted by Robinson's invention of resolution theorem proving in the 1960s.

Today, the prospect of automated reasoning has moved from the realm of possibility to that of practicality, with the creation of logic technology in the form of automated reasoning systems, such as Vampire, Prover9, the Prolog Technology Theorem Prover, and others.

The emergence of this technology has led to the application of logic technology in a wide variety of areas. The following paragraphs outline some of these uses.

Mathematics. Automated reasoning programs can be used to check proofs and, in some cases, to produce proofs or portions of proofs.

Engineering. Engineers can use the language of Logic to write specifications for their products and to encode their designs. Automated reasoning tools can be used to simulate designs and in some cases validate that these designs meet their specification. Such tools can also be used to diagnose failures and to develop testing programs.

Database Systems. By conceptualizing database tables as sets of simple sentences, it is possible to use Logic in support of database systems. For example, the language of Logic can be used to define virtual views of data in terms of explicitly stored tables, and it can be used to encode constraints on databases. Automated reasoning techniques can be used to compute new tables, to detect problems, and to optimize queries.

Data Integration. The language of Logic can be used to relate the vocabulary and structure of disparate data sources, and automated reasoning techniques can be used to integrate the data in these sources.

Law and Business. The language of Logic can be used to encode regulations and business rules, and automated reasoning techniques can be used to analyze such regulations for inconsistency and overlap.

1.7 Reading Guide

Although Logic is a single field of study, there is more than one logic in this field. In the three main units of this book, we look at three different types of logic, each more sophisticated than the one before.

Propositional Logic is the logic of propositions. Symbols in the language represent "conditions" in the world, and complex sentences in the language express interrelationships among these conditions. The primary operators are Boolean connectives, such as and , or , and not .

Relational Logic expands upon Propositional Logic by providing a means for explicitly talking about individual objects and their interrelationships (not just monolithic conditions). In order to do so, we expand our language to include object constants and relation constants, variables and quantifiers.

Functional Logic takes us one step further by providing a means for describing worlds with infinitely many objects. The resulting logic is much more powerful than Propositional Logic and Relational Logic. Unfortunately, as we shall see, some of the nice computational properties of the first two logics are lost as a result.

Each logic introduces new issues and capabilities. Despite their differences, there are many commonalities among these logics. In particular, in each case, there is a language with a formal syntax and a precise semantics; there is a notion of logical entailment; and there are legal rules for manipulating expressions in the language.

These similarities allow us to compare the logics and to gain an appreciation of the fundamental tradeoff between expressiveness and computational complexity. On the one hand, the introduction of additional linguistic complexity makes it possible to say things that cannot be said in more restricted languages. On the other hand, the introduction of additional linguistic flexibility has adverse effects on computability. As we proceed through the material, our attention will range from the completely computable case of Propositional Logic to a variant that is not at all computable.

There are also some topics that are relevant to Logic but are out of scope for this course, such as probability, metaknowledge (knowledge about knowledge), and paradoxes (e.g. This sentence is false.). Other such topics include negation as failure (knowing not versus not knowing), non-deductive reasoning methods (like induction), and paraconsistent reasoning (i.e. reasoning from inconsistent premises). We touch on these extensions in this course, but we do not discuss them in any depth.

One final comment. In the hopes of preventing difficulties, it is worth pointing out a potential source of confusion. This book exists in the meta world. It contains sentences about sentences; it contains proofs about proofs. In some places, we use similar mathematical symbology both for sentences in Logic and sentences about Logic. Wherever possible, we try to be clear about this distinction, but the potential for confusion remains. Unfortunately, this comes with the territory. We are using Logic to study Logic. It is our most powerful intellectual tool.

Logic is the study of information encoded in the form of logical sentences. Each logical sentence divides the set of all possible worlds into two subsets - the set of worlds in which the sentence is true and the set of worlds in which the sentence is false. A set of premises logically entails a conclusion if and only if the conclusion is true in every world in which all of the premises are true. Deduction is a form of symbolic reasoning that produces conclusions that are logically entailed by premises (distinguishing it from other forms of reasoning, such as induction, abduction, and analogical reasoning). A proof is a sequence of simple, more-or-less obvious deductive steps that justifies a conclusion that may not be immediately obvious from given premises. In Logic, we usually encode logical information as sentences in formal languages, and we use rules of inference appropriate to these languages. Such formal representations and methods are useful for us to use ourselves. Moreover, they allow us to automate the process of deduction, though the computability of such implementations varies with the complexity of the sentences involved.

Exercise 1.1: Consider the state of the Sorority World depicted below.

For each of the following sentences, say whether or not it is true in this state of the world.

Exercise 1.2: Consider the state of the Sorority World depicted below.

Exercise 1.3: Consider the state of the Sorority World depicted below.

Exercise 1.4: Come up with a table of likes and dislikes for the Sorority World that makes all of the following sentences true. Note that there is more than one such table.

Exercise 1.5: Consider a set of Sorority World premises that are true in the four states of Sorority World shown in Section 1.4. For each of the following sentences, say whether or not it is logically entailed by these premises.

Exercise 1.6: Consider the sentences shown below.

Say whether each of the following sentences is logically entailed by these sentences.

Exercise 1.7: Say whether or not the following reasoning patterns are logically correct.

Logical Consequence

A good argument is one whose conclusions follow from its premises; its conclusions are consequences of its premises. But in what sense do conclusions follow from premises? What is it for a conclusion to be a consequence of premises? Those questions, in many respects, are at the heart of logic (as a philosophical discipline). Consider the following argument:

  • If we charge high fees for university, only the rich will enroll. We charge high fees for university. Therefore, only the rich will enroll.

There are many different things one can say about this argument, but many agree that if we do not equivocate (if the terms mean the same thing in the premises and the conclusion), then the argument is valid, that is, the conclusion follows deductively from the premises. This does not mean that the conclusion is true. Perhaps the premises are not true. However, if the premises are true, then the conclusion is also true, as a matter of logic. This entry is about the relation between premises and conclusions in valid arguments.

Contemporary analyses of the concept of consequence—of the follows from relation—take it to be both necessary and formal, with such answers often being explicated via proofs or models (or, in some cases, both). Our aim in this article is to provide a brief characterisation of some of the notions that play a central role in contemporary accounts of logical consequence.

We should note that we only highlight a few of the philosophical aspects of logical consequence, leaving out almost all technical details, and also leaving out a large number of philosophical debates about the topic. Our rationale is that one will get the technical details, and the particular philosophical issues that motivated them, from looking at specific logics—specific theories of logical consequence (e.g., relevant logics, substructural logics, non-monotonic logics, dynamic logics, modal logics, theories of quantification, and so on). (Moreover, debates about almost any feature of language—structure versus form of sentences, propositions, context sensitivity, meaning, even truth—are relevant to debates about logical consequence, making an exhaustive discussion practically impossible.) Our aim here is simply to touch on a few of the very basic issues that are central to logical consequence.

1. Deductive and Inductive Consequence

  • 2. Formal and Material Consequence
  • 3.1 The Model-Theoretic Account of Logical Consequence
  • 3.2 The Proof-Theoretic Account of Logical Consequence
  • 3.3 Between Models and Proofs
  • 4. Premises and Conclusions
  • 5. One or Many
  • History of Logical Consequence
  • 20th Century Developments
  • Philosophy of Logical Consequence
  • Other Internet Resources
  • Related Entries

Some arguments are such that the (joint) truth of the premises is necessarily sufficient for the truth of the conclusions. In the sense of logical consequence central to the current tradition, such “necessary sufficiency” distinguishes deductive validity from inductive validity. In inductively valid arguments, the (joint) truth of the premises is very likely (but not necessarily) sufficient for the truth of the conclusion. An inductively valid argument is such that, as it is often put, its premises make its conclusion more likely or more reasonable (even though the conclusion may well be untrue given the joint truth of the premises). The argument

  • All swans observed so far have been white. Smoothy is a swan. Therefore, Smoothy is white.

is not deductively valid because the premises are not necessarily sufficient for the conclusion. Smoothy may well be a black swan.

Distinctions can be drawn between different inductive arguments. Some inductive arguments seem quite reasonable, and others are less so. There are many different ways to attempt to analyse inductive consequence. We might consider the degree to which the premises make the conclusion more likely (a probabilistic reading), or we might check whether the most normal circumstances in which the premises are true render the conclusion true as well. (This leads to some kinds of default or non-monotonic inference.) The field of inductive consequence is difficult and important, but we shall leave that topic here and focus on deductive validity.

(See the entries on inductive logic and non-monotonic logic for more information on these topics.)

The constraint of necessity is not sufficient to settle the notion of deductive validity, for the notion of necessity may also be fleshed out in a number of ways. To say that a conclusion necessarily follows from the premises is to say that the argument is somehow exceptionless, but there are many different ways to make that idea precise.

A first stab at the notion might use what we now call metaphysical necessity. Perhaps an argument is valid if it is (metaphysically) impossible for the premises to be true and the conclusion untrue; that is, valid if, holding fixed the interpretations of premises and conclusion, in every possible world in which the premises hold, so does the conclusion. This constraint is plausibly thought to be a necessary condition for logical consequence (if it could be that the premises are true and the conclusion isn't, then there is no doubt that the conclusion does not follow from the premises); however, on most accounts of logical consequence, it is not a sufficient condition for validity. Many admit the existence of a posteriori necessities, such as the claim that water is H\(_2\)O. If that claim is necessary, then the argument:

  • \(x\) is water. Therefore, \(x\) is H\(_2\)O.

is necessarily truth preserving, but it seems a long way from being deductively valid. It was a genuine discovery that water is H\(_2\)O, one that required significant empirical investigation. While there may be genuine discoveries of valid arguments that we had not previously recognised as such, it is another thing entirely to think that these discoveries require empirical investigation.

An alternative line on the requisite sort of necessity turns to conceptual necessity . On this line, the conclusion of (3) is not a consequence of its premise given that it is not a conceptual truth that water is H\(_2\)O. The concept water and the concept \(H_2O\) happen to pick out the same property, but this agreement is determined partially by the world.

A similar picture of logic takes consequence to be a matter of what is analytically true, and it is not an analytic truth that water is H\(_2\)O. The word “water” and the formula “H\(_2\)O” agree in extension (and necessarily so) but they do not agree in meaning .

If metaphysical necessity is too coarse a notion to determine logical consequence (since it may be taken to render too many arguments deductively valid), an appeal to conceptual or analytic necessity might seem a better route. The trouble, as Quine argued, is that the distinction between analytic and synthetic (and similarly, conceptual and non-conceptual) truths is not as straightforward as we might have thought at the beginning of the 20th Century. (See the entry on the analytic/synthetic distinction.) Furthermore, many arguments seem to be truth-preserving on the basis of analysis alone:

  • Peter is Greg’s mother’s brother’s son. Therefore, Peter is Greg’s cousin.

One can understand that the conclusion follows from the premises, on the basis of one’s understanding of the concepts involved. One need not know anything about the identity of Peter, Greg’s cousin. Still, many have thought that (4) is not deductively valid, despite its credentials as truth-preserving on analytic or conceptual grounds. It is not quite as general as it could be because it is not as formal as it could be. The argument succeeds only because of the particular details of family concepts involved.

A further possibility for carving out the distinctive notion of necessity grounding logical consequence is the notion of apriority . Deductively valid arguments, whatever they are, can be known to be so without recourse to experience, so they must be knowable a priori . A constraint of apriority certainly seems to rule argument (3) out as deductively valid, and rightly so. However, it will not do to rule out argument (4). If we take arguments like (4) to turn not on matters of deductive validity but something else, such as an a priori knowable definition, then we must look elsewhere for a characterisation of logical consequence.

The strongest and most widespread proposal for finding a narrower criterion for logical consequence is the appeal to formality . The step in (4) from “Peter is Greg’s mother’s brother’s son” to “Peter is my cousin” is a material consequence and not a formal one, because to make the step from the premise to the conclusion we need more than the structure or form of the claims involved: we need to understand their contents too.

What could the distinction between form and content mean? We mean to say that consequence is formal if it depends on the form and not the substance of the claims involved. But how is that to be understood? We will give at most a sketch, which, again, can be filled out in a number of ways.

The obvious first step is to notice that all presentations of the rules of logical consequence rely on schemes . Aristotle’s syllogistic is a proud example.

Ferio: No \(F\) is \(G\). Some \(H\) is \(G\). Therefore some \(H\) is not \(F\).

Inference schemes, like the one above, display the structure of valid arguments. Perhaps to say that an argument is formally valid is to say that it falls under some general scheme of which every instance is valid, such as Ferio.

That, too, is an incomplete specification of formality. The material argument (4) is an instance of:

  • \(x\) is \(y\)’s mother’s brother’s son. Therefore, \(x\) is \(y\)’s cousin.

every instance of which is valid. We must say more to explain why some schemes count as properly formal (and hence a sufficient ground for logical consequence) and others do not. A general answer will articulate the notion of logical form , which is an important issue in its own right (involving the notion of logical constants , among other things). Instead of exploring the details of different candidates for logical form, we will mention different proposals about the point of the exercise.

What is the point in demanding that validity be underwritten by a notion of logical form? There are at least three distinct proposals for the required notion of formality, and each provides a different kind of answer to that question.

We might take the formal rules of logic to be totally neutral with respect to particular features of objects. Laws of logic, on this view, must abstract away from particular features of objects. Logic is formal in that it is totally general. One way to characterise what counts as a totally general notion is by way of permutations. Tarski (1986) proposed that an operation or predicate on a domain counts as general (or logical) if it is invariant under permutations of objects. (A permutation of a collection of objects assigns to each object a unique object in that collection, such that no object is assigned more than once. A permutation of \(\{a, b, c, d\}\) might, for example, assign \(b\) to \(a\), \(d\) to \(b\), \(c\) to \(c\), and \(a\) to \(d\).) A \(2\)-place predicate \(R\) is invariant under permutation if for any permutation \(p\), whenever \(Rxy\) holds, \(Rp(x)p(y)\) holds too. You can see that the identity relation is permutation invariant—if \(x = y\) then \(p(x) = p(y)\)—but the mother-of relation is not: we may have permutations \(p\) such that even though \(x\) is the mother of \(y\), \(p(x)\) is not the mother of \(p(y)\). We may use permutation to characterise logicality for more than predicates too: we may say that a one-place sentential connective '\(\bullet\)' is permutation invariant if and only if, for all \(A\), \(p(\bullet A)\) is true if and only if \(\bullet p(A)\) is true. Defining this rigorously requires establishing how permutations operate on sentences, which takes us beyond the scope of this article. Suffice it to say, an operation such as negation passes the test of invariance, but an operation such as 'JC believes that' fails.
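The invariance test just described is easy to run mechanically on a finite domain. The sketch below, with an invented toy domain and toy relations, checks whether a binary relation's extension is fixed by every permutation of the domain: the identity relation passes, a mother-of style relation does not. All names here are our own illustrative assumptions.

```python
from itertools import permutations

# A binary relation R (a set of pairs) is permutation invariant iff
# for every permutation p of the domain, the image of R under p is R itself.

domain = ["a", "b", "c"]
identity = {(x, x) for x in domain}
mother_of = {("a", "b")}  # toy assumption: a is the mother of b

def invariant(R, domain):
    """True iff R's extension is unchanged by every permutation of the domain."""
    for perm in permutations(domain):
        p = dict(zip(domain, perm))
        if {(p[x], p[y]) for (x, y) in R} != R:
            return False
    return True

print(invariant(identity, domain))   # True: p(x) = p(y) whenever x = y
print(invariant(mother_of, domain))  # False: swapping a and b moves the pair
```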

A closely related analysis for formality is that formal rules are totally abstract . They abstract away from the semantic content of thoughts or claims, to leave only semantic structure. The terms ‘mother’ and ‘cousin’ enter essentially into argument (5). On this view, expressions such as propositional connectives and quantifiers do not add new semantic content to expressions, but instead add only ways to combine and structure semantic content. Expressions like ‘mother’ and ‘cousin’, by contrast, add new semantic content.

Another way to draw the distinction (or perhaps to draw a different distinction) is to take the formal rules of logic to be constitutive norms for thought, regardless of its subject matter. It is plausible to hold that no matter what we think about, it makes sense to conjoin, disjoin and negate our thoughts to make new thoughts. It might also make sense to quantify. The behaviour, then, of logical vocabulary may be used to structure and regulate any kind of theory, and the norms governing logical vocabulary apply totally universally. The norms of valid argument, on this picture, are those norms that apply to thought irrespective of the particular content of that thought. [ 1 ]

3. Mathematical Tools: Models and Proofs

Twentieth Century technical work on the notion of logical consequence has centered on two different mathematical tools, proof theory and model theory. Each of these can be seen as explicating different aspects of the concept of logical consequence, backed by different philosophical perspectives.

We have characterized logical consequence as necessary truth preservation in virtue of form . This idea can be explicated formally. One can use mathematical structures to account for the range of possibilities over which truth needs to be preserved. The formality of logical consequence can be explicated formally by giving a special role to the logical vocabulary, taken as constituting the forms of sentences. Let us see how model theory attends to both these tasks.

The model-centered approach to logical consequence takes the validity of an argument to be the absence of a counterexample. A counterexample to an argument is, in general, some way of manifesting the manner in which the premises of the argument fail to lead to the conclusion. One way to do this is to provide an argument of the same form for which the premises are clearly true and the conclusion is clearly false. Another way to do this is to provide a circumstance in which the premises are true and the conclusion is false. In the contemporary literature, the intuitive idea of a counterexample is developed into a theory of models.

The exact structure of a model will depend on the kind of language at hand (extensional/intensional, first/higher-order, etc.). A model for an extensional first order language consists of a non-empty set which constitutes the domain , and an interpretation function , which assigns to each nonlogical term an extension over the domain—any extension agreeing with its semantic type (individual constants are assigned elements of the domain, function symbols are assigned functions from the domain to itself, one-place first-order predicates are assigned subsets of the domain, etc.).

The contemporary model-theoretic definition of logical consequence traces back to Tarski (1936), and it builds on the definition of truth in a model given by Tarski (1935). Tarski defines truth of a sentence in a model recursively, by giving truth (or satisfaction) conditions on the logical vocabulary. A conjunction, for example, is true in a model if and only if both conjuncts are true in that model. A universally quantified sentence \(\forall xFx\) is true in a model if and only if each instance is true in the model. (Or, on the Tarskian account of satisfaction, if and only if the open sentence \(Fx\) is satisfied by every object in the domain of the model. For detail on how this is accomplished, see the entry on Tarski's truth definitions.) Now we can define logical consequence as preservation of truth over models: an argument is valid if in any model in which the premises are true (or in any interpretation of the premises according to which they are true), the conclusion is true too.
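The recursive truth definition can be illustrated with a toy evaluator. The encoding of models and sentences below (nested tuples, a domain plus an interpretation dictionary) is our own ad hoc sketch, covering only the clauses mentioned above: predication, conjunction, negation, and the universal quantifier.

```python
# Model = (domain, interpretation). The interpretation assigns elements
# to constants and subsets of the domain to one-place predicates.

def true_in(model, sentence, env=None):
    """Recursively evaluate a sentence in a model, given a variable assignment."""
    domain, interp = model
    env = env or {}
    op = sentence[0]
    if op == "pred":                     # ("pred", P, t): the referent of t is in P's extension
        _, P, t = sentence
        return env.get(t, interp.get(t)) in interp[P]
    if op == "and":                      # true iff both conjuncts are true
        return true_in(model, sentence[1], env) and true_in(model, sentence[2], env)
    if op == "not":
        return not true_in(model, sentence[1], env)
    if op == "forall":                   # ("forall", x, body): body satisfied by every object
        _, x, body = sentence
        return all(true_in(model, body, {**env, x: d}) for d in domain)
    raise ValueError(op)

# Toy model: domain {1, 2, 3}; F holds of everything, G only of 1; a names 1.
model = ({1, 2, 3}, {"F": {1, 2, 3}, "G": {1}, "a": 1})
print(true_in(model, ("forall", "x", ("pred", "F", "x"))))              # True
print(true_in(model, ("forall", "x", ("pred", "G", "x"))))              # False
print(true_in(model, ("and", ("pred", "G", "a"), ("pred", "F", "a"))))  # True
```

Validity in the model-theoretic sense would then amount to truth preservation by `true_in` across all models, exactly as the paragraph above describes.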

The model-theoretic definition is one of the most successful mathematical explications of a philosophical concept to date. It promises to capture both the necessity of logical consequence—by looking at truth over all models, and the formality of logical consequence—by varying the interpretations of the nonlogical vocabulary across models: an argument is valid no matter what the nonlogical vocabulary means. Yet, models are just sets, which are merely mathematical objects. How do they account for the range of possibilities, or circumstances required? John Etchemendy (1990) offers two perspectives for understanding models. On the representational approach , each model is taken to represent a possible world. If an argument preserves truth over models, we are then guaranteed that it preserves truth over possible worlds, and if we accept the identification of necessity with truth in all possible worlds, we have the necessary truth preservation of logical consequence. The problem with this approach is that it identifies logical consequence with metaphysical consequence, and it gives no account of the formality of logical consequence. On the representational approach, there is no basis for a distinction between the logical and the nonlogical vocabulary, and there is no explanation of why the interpretations of the nonlogical vocabulary are maximally varied. The second perspective on models is afforded by the interpretational approach , by which each model assigns extensions to the nonlogical vocabulary from the actual world: what varies between models is not the world depicted but the meaning of the terms. Here, the worry is that necessity isn’t captured. For instance, on the usual division of the vocabulary into logical and nonlogical, identity is considered a logical term, and can be used to form statements about the cardinality of the domain (e.g., ‘‘there are at least two things’’) which are true under every reinterpretation, but perhaps are not necessarily true. 
On this approach, there is no basis for considering models with domains other than the universe of what actually exists, and specifically, there is no explanation of model theory's use of domains of different sizes. Each approach, as described here, is flawed with respect to our analysis of logical consequence as necessary and formal: the interpretational approach, by looking only at the actual world, fails to account for necessity, and the representational approach fails to account for formality (for details, see Etchemendy 1990, Sher 1996, and Shapiro 1998; for refinements see Etchemendy 2008). A possible response to Etchemendy would be to blend the representational and the interpretational perspectives, viewing each model as representing a possible world under a re-interpretation of the nonlogical vocabulary (Shapiro 1998; see also Sher 1996 and Hanson 1997 for alternative responses).

One of the main challenges set by the model-theoretic definition of logical consequence is to distinguish between the logical and the nonlogical vocabulary. The logical vocabulary is defined in all models by the recursive clauses (such as those mentioned above for conjunction and the universal quantifier), and in that sense its meaning is fixed. The choice of the logical vocabulary determines the class of models considered when evaluating validity, and thus it determines the class of the logically valid arguments. Now, while each formal language is typically defined with a choice of a logical vocabulary, one can ask for a more principled characterization of logical vocabulary. Tarski left the question of a principled distinction open in his 1936, and only gave the lines of a relativistic stance, by which different choices of the logical vocabulary may be admissible. Others have proposed criteria for logicality, demanding that logical constants be appropriately formal, general or topic neutral (for references and details, see the entry on logical constants ). Note that a choice of the logical vocabulary is a special case of setting constraints on the class of models to be used. It has been suggested that the focus on criteria for the logical vocabulary misses this point, and that more generally the question is which semantic constraints should be adopted, limiting the admissible models for a language (Sagi 2014a, Zinke 2017).

Another challenge faced by the model-theoretic account is due to the limitations of its set-theoretic basis. Recall that models are sets. The worry is that truth preservation over models might not guarantee necessary truth preservation—moreover, it might not even guarantee material truth preservation (truth preservation in the actual world). The reason is that each model domain is a set, but the actual world presumably contains all sets, and since a collection that includes all sets is too "large" to be a set (it constitutes a proper class), the actual world is not accounted for by any model (see Shapiro 1987).

One way of dealing with this worry is to employ external means, such as proof theory, in support of the model-theoretic definition. This is done by Georg Kreisel in his “squeezing argument”, which we present in section 3.3. Kreisel’s argument crucially depends on the language in question having a sound and complete proof system. Another option is to use set-theoretic reflection principles. Generally speaking, reflection principles state that whatever is true of the universe of sets, is already true in an initial segment thereof (which is always a set). If reflection principles are accepted, then at least as concerns the relevant language, one can argue that an argument is valid if and only if there is no counter set-model (see Kreisel 1967, Shapiro 1987, Kennedy & Väänänen 2017).

Finally, the explanation of logical consequence in terms of truth in models is typically preferred by “Realists”, who take truth of sentences to be independent of what can be known. Explaining logical consequence in terms of truth in models is rather close to explaining logical consequence in terms of truth , and the analysis of truth-in-a-model is sometimes taken to be an explication of truth in terms of correspondence, a typically Realist notion. Some, however, view logical consequence as having an indispensable epistemic component, having to do with the way we establish the conclusion on the basis of the premises. “Anti-realists”, who eschew taking truth (or at least, correspondence-truth) as an explanatory notion, will typically prefer explaining logical consequence in terms of proof —to which we turn next.

On the proof-centered approach to logical consequence, the validity of an argument amounts to there being a proof of the conclusions from the premises. Exactly what proofs are is a big issue, but the idea is fairly plain (at least if you have been exposed to some proof system or other). Proofs are made up of small steps, the primitive inference principles of the proof system. The 20th Century has seen very many different kinds of proof systems, from so-called Hilbert proofs, with simple rules and complex axioms, to natural deduction systems, with few (or even no) axioms and very many rules.

The proof-centered approach highlights epistemic aspects of logical consequence. A proof does not merely attest to the validity of the argument: it provides the steps by which we can establish this validity. And so, if a reasoner has grounds for the premises of an argument, and they infer the conclusion via a series of applications of valid inference rules, they thereby obtain grounds for the conclusion (see Prawitz 2012). One can go further and subscribe to inferentialism , the view by which the meaning of expressions is determined by their role in inference. The idea is that our use of a linguistic expression is regulated by rules, and mastering the rules suffices for understanding the expression. This gives us a preliminary restriction on what semantic values of expressions can be: they cannot make any distinctions not accounted for by the rules. One can then go even further, and reject any kind of meaning that goes beyond the rules—adopting the later Wittgensteinian slogan “meaning is use”. This view is favored by anti-realists about meaning, since meaning on this view is fully explained by what is knowable.

The condition of necessity on logical consequence obtains a new interpretation in the proof-centered approach. The condition can be reformulated thus: in a valid argument, the truth of the conclusion follows from the truth of the premises by necessity of thought (Prawitz 2005). Let us parse this formulation. Truth is understood constructively: sentences are true in virtue of potential evidence for them, and the facts described by true sentences are thus conceived as constructed in terms of potential evidence. (Note that one can completely forgo reference to truth, and instead speak of assertibility or acceptance of sentences.) Now, the necessity of thought by which an argument is valid is explained by the meaning of the terms involved, which compels us to accept the truth of the conclusion given the truth of the premises. Meanings of expressions, in turn, are understood through the rules governing their use: the usual truth conditions give way to proof conditions of formulas containing an expression.

One can thus provide a proof-theoretic semantics for a language (Schroeder-Heister 1991). When presenting his system of natural deduction, Gentzen remarked that the introduction rules for the logical expressions represent their “definitions,” and the elimination rules are consequences of those definitions (Gentzen 1933). For example, the introduction rule for conjunction dictates that a conjunction \(A \amp B\) may be inferred from both conjuncts \(A\) and \(B\), and this rule captures the meaning of the connective. Conversely, the elimination rule for conjunction says that from \(A \amp B\) one may infer both \(A\) and \(B\). The universal quantifier rules tell us that from the universally quantified claim \(\forall xFx\) we can infer any instance \(Fa\), and we can infer \(\forall xFx\) from the instance \(Fa\), provided that no other assumption has been made involving the name \(a\). Under certain requirements, one can show that the elimination rule is validated by the introduction rule.
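The conjunction rules just described can be made concrete in a toy proof checker, where a proof is a list of steps that are either premises or applications of the introduction and elimination rules to earlier lines. The encoding, the rule names, and the `check` function are our own illustration, not a standard system.

```python
# A proof is a list of steps; each step is either ("premise", A),
# ("and_intro", i, j) building the conjunction of lines i and j, or
# ("and_elim", i, side) extracting a conjunct from line i.

def check(proof, premises):
    """Replay each step, verifying it against the rules; return the last line."""
    derived = []
    for step in proof:
        kind = step[0]
        if kind == "premise":
            assert step[1] in premises, "not a premise"
            derived.append(step[1])
        elif kind == "and_intro":            # from A and B, infer ("and", A, B)
            i, j = step[1], step[2]
            derived.append(("and", derived[i], derived[j]))
        elif kind == "and_elim":             # from ("and", A, B), infer A or B
            i, side = step[1], step[2]
            conj = derived[i]
            assert conj[0] == "and", "elimination needs a conjunction"
            derived.append(conj[1] if side == "left" else conj[2])
        else:
            raise ValueError(kind)
    return derived[-1]

# From premises A and B, derive B & A.
proof = [("premise", "A"), ("premise", "B"), ("and_intro", 1, 0)]
print(check(proof, {"A", "B"}))  # ('and', 'B', 'A')
```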

One of the main challenges for the proof-centered approach is that of distinguishing between rules that are genuinely meaning-determining and those that are not. Some rules for connectives, if added to a system, would lead to triviality. Prior (1960) offered the following rules for a connective “\(\tonk\)”. Its introduction rule says that from \(A\) one can infer \(A \tonk B\), and its elimination rule says that from \(A \tonk B\) one can infer \(B\). With the introduction of these rules, the system becomes trivial so long as at least one thing is provable, since from any assumption \(A\) one can derive any conclusion \(B\). Some constraints have to be placed on inference rules, and much of the subsequent literature has been concerned with these constraints (Belnap 1962, Dummett 1991, Prawitz 1974).
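Prior's point can be seen by writing the tonk rules in the same toy style: chaining one introduction step and one elimination step takes us from any premise to any conclusion whatsoever. The encoding below is our own sketch.

```python
# Prior's "tonk": the introduction rule licenses (A tonk B) from A alone,
# for ANY B, and the elimination rule extracts B. Composing the two rules
# derives an arbitrary conclusion from an arbitrary premise.

def tonk_intro(a, b):
    return ("tonk", a, b)          # from A, infer A tonk B (B is unconstrained!)

def tonk_elim(t):
    assert t[0] == "tonk"
    return t[2]                    # from A tonk B, infer B

premise = "snow is white"
anything = "the moon is made of cheese"
print(tonk_elim(tonk_intro(premise, anything)))  # 'the moon is made of cheese'
```

The triviality is visible in the code: nothing ties the `b` argument of `tonk_intro` to the premise, so every sentence becomes derivable once any sentence is.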

To systematize the notions of proof and validity, Prawitz introduced the notion of a canonical proof. A sentence might be proved in several different ways, but it is the direct, or canonical, proof that is constitutive of its meaning. A canonical proof is a proof whose last step is an application of an introduction rule, and whose immediate subproofs are canonical (unless they have free variables or undischarged assumptions—for details see Prawitz 2005). A canonical proof is conceived as giving direct evidence for the sentence proved, as it establishes the truth of the sentence by the rule constitutive of the meaning of its connectives. For more on canonical proofs and the ways other proofs can be reduced to them, see the entry on proof-theoretic semantics.

We have indicated how the condition of necessity can be interpreted in the proof-centered approach. The condition of formality can be accounted for as well. Note that on the present perspective as well, there is a division of the vocabulary into logical and nonlogical. This division can be used to define substitutions of an argument. A substitution of an argument is an argument obtained from the original one by replacing the nonlogical terms with terms of the same syntactic category in a uniform manner. A definition of validity that respects the condition of formality will entail that an argument is valid if and only if all its substitutions are valid, and in the present context, this is a requirement that there is a proof of all its substitutions. This condition is satisfied in any proof system where rules are given only for the logical vocabulary. Of course, in the proof-centered approach as well, there is a question of distinguishing the logical vocabulary (see the entry on logical constants ).

Finally, it should be noted that a proof theoretic semantics can be given for classical logic as well as a variety of non-classical logics. However, due to the epistemic anti-realist attitude that lies at the basis of the proof-centered approach, its proponents have typically advocated intuitionistic logic (see Dummett 1991).

For more on the proof-centered perspective and on proof-theoretic semantics, see the entry on proof-theoretic semantics .

The proof-theoretic and model-theoretic perspectives have been considered as providing rival accounts of logical consequence. However, one can also view “logical consequence” and “validity” as expressing cluster concepts : “A number of different, closely related notions go by those names. They invoke matters of modality, meaning, effectiveness, justification, rationality, and form” (Shapiro 2014). One can also note that the division between the model-theoretic and the proof-theoretic perspectives is a modern one, and it was only made possible when tools for metamathematical investigations were developed. Frege’s Begriffsschrift , for instance, which predates the development of those tools, is formulated as an axiomatic proof system, but the meanings of the connectives are given via truth conditions.

Once there are two different analyses of a relation of logical consequence, one can ask about possible interactions, and we’ll do that next. One can also ask what general features such a relation has independently of its analysis as proof-theoretic or model-theoretic. One way of answering this question goes back to Tarski, who introduced the notion of consequence operations. For our purposes, we note only some features of such operations. Let \(Cn(X)\) be the consequences of \(X\). (One can think of the operator \(Cn\) as deriving from a prior consequence relation which, when taking \(X\) as ‘input (or premise)’ set, tells you what follows from \(X\). But one can also see the ‘process’ in reverse, and a key insight is that consequence relations and corresponding operations are, in effect, interdefinable. See the entry on algebraic propositional logic for details.) Among the minimal conditions one might impose on a consequence relation are the following two (from Tarski):

  • \(X\) is a subset of \(Cn(X)\).
  • \(Cn(Cn(X)) = Cn(X)\).

If you think of \(X\) as a set of claims, then the first condition tells you that the consequences of a set of claims include the claims themselves. The second condition demands that the consequences of \(X\) just are the consequences of the consequences of \(X\): applying \(Cn\) a second time adds nothing new. Both of these conditions can be motivated from reflection on the model-theoretic and proof-theoretic approaches, and there are other such conditions too. (For a general discussion, see the entry on algebraic propositional logic.) But as with many foundational issues (e.g., ‘what are the essential features of consequence relations in general?’), even such minimal conditions are contentious in philosophical logic and the philosophy of logic. For example, some might take condition (2) to be objectionable on the grounds that, for reasons of vagueness (or more), important consequence relations over natural languages (however formalized) are not generally transitive in ways reflected in (2). (See Tennant 1994, Cobreros et al. 2012, and Ripley 2013 for philosophical motivations against transitive consequence.) But we leave these issues for more advanced discussion.
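Tarski's two conditions can be checked concretely on a toy consequence operation. The sketch below is an illustration only: the string encoding of conditionals ("p->q") and the choice of modus ponens as the sole rule are assumptions made for the example, not part of any standard apparatus.

```python
# A toy consequence operation Cn for a propositional fragment, closing a
# premise set under modus ponens. Illustrative sketch only: the formula
# encoding and the single inference rule are assumptions for this example.

def cn(premises):
    """Return the closure of `premises` under modus ponens."""
    closed = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(closed):
            if "->" in f:
                antecedent, consequent = f.split("->", 1)
                if antecedent in closed and consequent not in closed:
                    closed.add(consequent)
                    changed = True
    return closed

X = {"p", "p->q", "q->r"}

# Tarski's first condition: X is a subset of Cn(X).
assert X <= cn(X)

# Tarski's second condition (idempotence): Cn(Cn(X)) = Cn(X).
assert cn(cn(X)) == cn(X)

print(sorted(cn(X)))  # → ['p', 'p->q', 'q', 'q->r', 'r']
```

Any closure-to-fixpoint construction of this shape satisfies both conditions automatically, which is one way of motivating them from the proof-theoretic side; whether they hold for consequence relations over natural languages is, as the text notes, contested.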

While the philosophical divide between Realists and Anti-realists remains vast, proof-centered and model-centered accounts of consequence have been united, at least in extension, in many cases. The great soundness and completeness theorems for different proof systems (or, from the other angle, for different model-theoretic semantics) show that, in an important sense, the two approaches often coincide. A proof system is sound with respect to a model-theoretic semantics if every argument that has a proof in the system is model-theoretically valid. A proof system is complete with respect to a model-theoretic semantics if every model-theoretically valid argument has a proof in the system. While soundness is a principal condition on any proof system worth its name, completeness cannot always be expected. Admittedly, these definitions are biased towards the model-theoretic perspective: the model-theoretic semantics sets the standard for what is “sound” and “complete”. Leaving terminological issues aside, if a proof system is both sound and complete with respect to a model-theoretic semantics (as, significantly, in the case of first order predicate logic), then the proof system and the model-theoretic semantics agree on which arguments are valid.

Completeness results can also support the adequacy of the model-theoretic account, as in Kreisel’s “squeezing argument”. We have noted a weakness of the model-theoretic account: all models are sets, and so it might be that no model represents the actual world. Kreisel has shown that if we have a proof system that is “intuitively sound” and complete with respect to the model-theoretic semantics, we won’t be missing any models: every intuitively invalid argument will have a set-theoretic counter-model. Let \(L\) be a first order language. Let \(Val\) denote the set of intuitively valid arguments in \(L\). Kreisel takes intuitive validity to be preservation of truth across all structures (whether sets or not). His analysis privileges the modal analysis of logical consequence—but note that the weakness we are addressing is that considering set-theoretic structures might not be enough. Let \(V\) denote the set of model-theoretic validities in \(L\): arguments that preserve truth over models. Let \(D\) be the set of deductively valid arguments, by some accepted proof system for first order logic. Now, any such proof system is “intuitively sound”, meaning that what is deductively valid by the system is intuitively valid. This gives us \(D \subseteq Val\). And obviously, by the definitions we’ve given, \(Val \subseteq V\), since an argument that preserves truth over all structures will preserve truth over set-structures.

By the completeness result for first order logic, we have \(V \subseteq D\). Putting the three inclusions together (the “squeeze”), we get that all three sets must be equal, and in particular \(V = Val\). In this way, we’ve proven that if there is some structure that is a counterexample to a first order argument, then there is a set-theoretic one.
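The structure of the argument can be summarized in a single chain of inclusions, using the sets just defined:

```latex
% Intuitive soundness gives the first inclusion, the definitions of
% Val and V give the second, and completeness gives the third.
% A cycle of inclusions forces the three sets to coincide.
D \subseteq Val \subseteq V \subseteq D
\quad \Longrightarrow \quad
D = Val = V .
```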

Another arena for the interaction between the proof-theoretic and the model-theoretic perspectives has to do with the definition of the logical vocabulary. For example, one can hold a “moderate” inferentialist view which gives the meanings of logical connectives through their semantics (i.e., truth conditions) but demands that this semantics be determined by inference rules. Carnap famously showed that the classical inference rules allow non-standard interpretations of the logical expressions (Carnap 1943). Much recent work in the field has been devoted to the exact nature and extent of Carnap’s categoricity problem (Raatikainen 2008, Murzi and Hjortland 2009, Woods 2012, Garson 2013, Peregrin 2014, Bonnay and Westerståhl 2016; see also the entry on sentence connectives in formal logic).

Finally, we should note that while model theory and proof theory are the most prominent contenders for the explication of logical consequence, there are alternative frameworks for formal semantics, such as algebraic semantics, game-theoretic semantics and dynamic semantics (see Wansing 2000).

There has also been dissent, even in Aristotle’s day, as to the “shape” of logical consequence. In particular, there is no settled consensus on the number of premises or conclusions appropriate to “tie together” the consequence relation.

In Aristotle’s syllogistic, a syllogism relates two or more premises and a single conclusion. In fact, Aristotle focuses on arguments with exactly two premises (the major premise and the minor premise), but nothing in his definition forbids arguments with three or more premises. Surely, such arguments should be permitted: if, for example, we have one syllogism from two premises \(A\) and \(B\) to a conclusion \(C\), and we have another from the premises \(C\) and \(D\) to the conclusion \(E\), then in some sense, the longer argument from premises \(A, B\) and \(D\) to conclusion \(E\) is a good one. It is found by chaining together the two smaller arguments. If the two original arguments are formally valid, then so too is the longer argument from three premises. On the other hand, on a common reading of Aristotle’s definition of syllogism, one-premise arguments are ruled out—but this seems arbitrary, as even Aristotle’s own “conversion” inferences are thus excluded.

For such reasons, many have taken the relation of logical consequence to pair an arbitrary (possibly infinite) collection of premises with a single conclusion. This account has the added virtue of including the special case of an empty collection of premises. Arguments to a conclusion from no premises whatsoever are those in which the conclusion is true by logic alone. Such “conclusions” are logical truths (sometimes tautologies) or, on the proof-centered approach, theorems.

Perhaps there is a reason to allow the notion of logical consequence to apply even more broadly. In Gentzen’s proof theory for classical logic, a notion of consequence is defined to hold between multiple premises and multiple conclusions. The argument from a set \(X\) of premises to a set \(Y\) of conclusions is valid if the truth of every member of \(X\) guarantees (in the relevant sense) the truth of some member of \(Y\). There is no doubt that this is formally perspicuous, but the philosophical applicability of the multiple-premise, multiple-conclusion sense of logical consequence remains an open philosophical issue. In particular, those anti-Realists who take logical consequence to be defined in terms of proof (such as Michael Dummett) reject a multiple conclusion analysis of logical consequence. For an Anti-realist, who takes good inference to be characterised by the way warrant is transmitted from premise to conclusion, a multiple conclusion analysis of logical consequence seems out of the question. In a multiple conclusion argument from \(A\) to \(B, C\), any warrant we have for \(A\) does not necessarily transmit to \(B\) or to \(C\): the only conclusion we are warranted to draw is the disjunction \(B\) or \(C\). So it seems that, for an analysis of consequence in terms of warrant, we need to understand some logical vocabulary (in this case, disjunction) in order to understand the consequence relation. This is unacceptable if we hope to use logical consequence as a tool to define that logical vocabulary. No such problems appear to arise in a single conclusion setting. (However, see Restall (2005) for a defence of multiple conclusion consequence for Anti-realists; and see Beall (2011) for a defence of certain sub-classical multiple-conclusion logics in the service of non-classical solutions to paradox.)
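On the classical reading, Gentzen's multiple-conclusion consequence can be checked mechanically for small propositional cases. The following sketch is illustrative only (the nested-tuple encoding of formulas is an assumption for the example): it enumerates truth-value assignments and tests whether every assignment making all premises true makes some conclusion true.

```python
# A brute-force checker for classical multiple-conclusion validity over
# propositional atoms. Illustrative sketch; the formula representation
# (nested tuples) is an assumption made for this example.

from itertools import product

def atoms(formula):
    """Collect the atomic sentence letters occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(atoms(part) for part in formula[1:]))

def evaluate(formula, valuation):
    """Evaluate a formula under a dict mapping atoms to truth values."""
    if isinstance(formula, str):
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], valuation)
    if op == "or":
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    if op == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    raise ValueError(op)

def valid(premises, conclusions):
    """X |- Y: every valuation making all of X true makes some of Y true."""
    all_atoms = sorted(set().union(*map(atoms, premises + conclusions)) or {"p"})
    for values in product([True, False], repeat=len(all_atoms)):
        v = dict(zip(all_atoms, values))
        if all(evaluate(f, v) for f in premises) and \
           not any(evaluate(f, v) for f in conclusions):
            return False
    return True

# From the premise "p or q" we may conclude the pair {p, q}...
print(valid([("or", "p", "q")], ["p", "q"]))   # True
# ...but neither disjunct follows on its own (the Anti-realist's worry).
print(valid([("or", "p", "q")], ["p"]))        # False
```

The two calls illustrate the worry in the text: \(p \lor q \vdash p, q\) holds, yet neither single-conclusion argument \(p \lor q \vdash p\) nor \(p \lor q \vdash q\) does. An empty premise set, as in the previous section, yields the logical truths: `valid([], [("or", "p", ("not", "p"))])` returns `True`.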

Another line along which the notion has been broadened (or along which some have sought to broaden it) involves recent work on substructural logic. The proposal here is that we may consider doing without some of the standard rules governing the way that premises (or conclusions) of an argument may be combined. Structural rules deal with the shape or structure of an argument in the sense of the way that the premises and conclusions are collected together, and not the way that those statements are constructed. The structural rule of weakening, for example, states that if an argument from some collection of premises \(X\) to a conclusion \(C\) is valid, then the argument from \(X\) together with another premise \(A\) to the conclusion \(C\) is also valid. This rule has seemed problematic to some, chiefly on the grounds that the extra premise \(A\) need not be used in the derivation of the conclusion \(C\), and hence that \(C\) does not follow from the premises \(X, A\) in the appropriate sense. Relevant logics are designed to respect this thought, and do without the structural rule of weakening. (For the proof-theoretic picture, see Negri and von Plato (2001).)

Other structural rules have also been called into question. Another possible application of substructural logic is found in the analysis of paradoxes such as Curry’s paradox. A crucial move in the reasoning in Curry’s paradox, and in other paradoxes like it, seems to require a step reducing two applications of an assumption to a single one (which is then discharged). According to some, this step is problematic, and so they must distinguish an argument from \(A\) to \(B\) from an argument from \(A, A\) to \(B\): the structural rule of contraction is rejected.
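The distinction can be made vivid with a multiset representation of premises. This is a hypothetical encoding for illustration, not a standard one: treating the premises as a multiset (here via `collections.Counter`) keeps the two occurrences of \(A\) apart, exactly as a contraction-free logic requires, whereas treating them as a set collapses the distinction.

```python
# Sequents with multiset premises: in a substructural setting, the argument
# from A, A to B is a different object from the argument from A to B.
# Hypothetical representation for illustration only.

from collections import Counter

def sequent(premises, conclusion):
    """A sequent whose premises form a multiset: occurrences matter."""
    return (Counter(premises), conclusion)

two_uses = sequent(["A", "A"], "B")
one_use = sequent(["A"], "B")

# As multisets the premise collections differ...
print(two_uses == one_use)                  # False
# ...though as mere sets they are identical, erasing exactly the
# distinction that rejecting contraction is meant to preserve.
print(set(two_uses[0]) == set(one_use[0]))  # True
```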

In yet other examples, the order in which premises are used is important, and an argument from \(A, B\) to \(C\) is to be distinguished from an argument from \(B, A\) to \(C\). (For more details, consult the entry on substructural logics.) There is no doubt that the formal systems of substructural logics are elegant and interesting, but the case for the philosophical importance and applicability of substructural logics is not closed.

We have touched only on a few central aspects of the notion of logical consequence, leaving further issues, debates and, in particular, details to emerge from particular accounts (accounts that are well-represented in this encyclopedia). But even a quick glance at the related links section (below) will attest to a fairly large number of different logical theories, different accounts of what (logically) follows from what. And that observation raises a question with which we will close: Is there one notion of logical consequence that is the target of all such theories, or are there many?

We all agree that there are many different formal techniques for studying logical consequence, and very many different formal systems that each propose different relations of logical consequence. But given a particular argument, is the question as to whether it is deductively valid an all-or-nothing affair? The orthodoxy, logical monism, answers affirmatively. There is one relation of deductive consequence, and different formal systems do a better or worse job of modelling that relation. (See, for example, Priest 1999 for a defence of monism.) The logical contextualist or relativist says that the validity of an argument depends on the subject matter or the frame of reference or some other context of evaluation. (For example, a use of the law of the excluded middle might be valid in a classical mathematics textbook, but not in an intuitionistic mathematics textbook, or in a context where we reason about fiction or vague matters.) The logical pluralist, on the other hand, says that of one and the same argument, in one and the same context, there are sometimes different things one should say with respect to its validity. For example, perhaps one ought to say that the argument from a contradictory collection of premises to an unrelated conclusion is valid in the sense that, in virtue of its form, it is not the case that the premises are true and the conclusion untrue (so it is valid in one precise sense), but that nonetheless, in another sense, the form of the argument does not ensure that the truth of the premises leads to the truth of the conclusion. The monist or the contextualist holds that in the case of the one argument a single answer must be found for the question of its validity. The pluralist denies this. The pluralist holds that the notion of logical consequence itself may be made more precise in more than one way, just as the original idea of a “good argument” bifurcates into deductive and inductive validity (see Beall and Restall 2000 for a defence of pluralism).

Expositions

  • Coffa, J. Alberto, 1993, The Semantic Tradition from Kant to Carnap , Linda Wessels (ed.), Cambridge: Cambridge University Press. An historical account of the Kantian origins of the rise of analytic philosophy and its development from Bolzano to Carnap.
  • Kneale, W. and Kneale, M., 1962, The Development of Logic , Oxford: Oxford University Press; reprinted, 1984. The classic text on the history of logic until the middle 20th Century.

Source Material

  • Ewald, William, 1996, From Kant to Hilbert: a source book in the foundations of mathematics (Volumes I and II), Oxford: Oxford University Press. Reprints and translations of important texts, including Bolzano on logical consequence.
  • van Heijenoort, Jean, 1967, From Frege to Gödel: a sourcebook in mathematical logic 1879–1931 , Cambridge, MA: Harvard University Press. Reprints and translations of central texts in the development of logic.
  • Husserl, Edmund, 1900 [2001], Logical Investigations (Volumes 1 and 2), J. N. Findlay (trans.), Dermot Moran (intro.), London: Routledge.
  • Mill, John Stuart, 1872 [1973], A System of Logic (8th edition), in J. M. Robson (ed.), Collected works of John Stuart Mill (Volumes 7 & 8), Toronto: University of Toronto Press.
  • Anderson, A.R., and Belnap, N.D., 1975, Entailment: The Logic of Relevance and Necessity (Volume I), Princeton: Princeton University Press.
  • Anderson, A.R., Belnap, N.D. Jr., and Dunn, J.M., 1992, Entailment (Volume II), Princeton: Princeton University Press. This book and the previous one summarise the work in relevant logic in the Anderson–Belnap tradition. Some chapters in these books have other authors, such as Robert K. Meyer and Alasdair Urquhart.
  • Dummett, Michael, 1991, The Logical Basis of Metaphysics , Cambridge, MA: Harvard University Press. Groundbreaking use of natural deduction proof to provide an anti-realist account of logical consequence as the central plank of a theory of meaning.
  • Gentzen, Gerhard, 1969, The Collected Papers of Gerhard Gentzen , M. E. Szabo (ed.), Amsterdam: North Holland.
  • Mancosu, Paolo, 1998, From Brouwer to Hilbert , Oxford: Oxford University Press. Reprints and translations of source material concerning the constructivist debates in the foundations of mathematics in the 1920s.
  • Negri, Sara and von Plato, Jan, 2001, Structural Proof Theory , Cambridge: Cambridge University Press. A very accessible exposition of so-called structural proof theory (which involves a rejection of some of the standard structural rules at the heart of proof theory for classical logic).
  • Shoesmith D. J. and Smiley, T. J., 1978, Multiple-Conclusion Logic , Cambridge: Cambridge University Press. The first full-scale exposition and defence of the notion that logical consequence relates multiple premises and multiple conclusions.
  • Restall, Greg, 2000, An Introduction to Substructural Logics , London: Routledge. (Précis available online ) An introduction to the field of substructural logics.
  • Tarski, Alfred, 1935, “The Concept of Truth in Formalized Languages,” J.H. Woodger (trans.), in Tarski 1983, pp. 152–278.
  • –––, 1936, “On The Concept of Logical Consequence,” J.H. Woodger (trans.), in Tarski 1983, pp. 409–420.
  • –––, 1983, Logic, Semantics, Metamathematics: papers from 1923 to 1938 , second edition, J. H. Woodger (trans.), J. Corcoran (ed.), Indianapolis, IN: Hackett.

There are many (many) other works on this topic, but the bibliographies of the following will serve as a suitable resource for exploring the field.

  • Avron, Arnon, 1994, “What is a Logical System?” in What is a Logical System? , D.M. Gabbay (ed.), Oxford: Clarendon Press (Studies in Logic and Computation: Volume 4), pp. 217–238.
  • Beall, Jc, 2011, “Multiple-conclusion LP and default classicality,” Review of Symbolic Logic , 4(2): 326–336.
  • Beall, Jc and Restall, Greg, 2000, “Logical Pluralism,” Australasian Journal of Philosophy , 78: 457–493.
  • Belnap, Nuel D., 1962, “Tonk, Plonk and Plink,” Analysis , 22 (6): 130–134.
  • Bonnay, Denis and Westerståhl, Dag, 2012, “Consequence Mining: Constants Versus Consequence Relations,” Journal of Philosophical Logic , 41(4): 671–709.
  • –––, 2016, “ Compositionality Solves Carnap’s Problem,” Erkenntnis , 81 (4): 721–739.
  • Brandom, Robert, 1994, Making It Explicit , Cambridge, MA: Harvard University Press. [See especially Chapters 5 and 6 on the account of logical consequence according to which truth is not a fundamental explanatory notion.]
  • Caret, Colin R. and Hjortland, Ole T. (eds.), 2015, Foundations of Logical Consequence , Oxford: Oxford University Press.
  • Carnap, Rudolf, 1943, Formalization of Logic , Cambridge, MA: Harvard University Press.
  • Cobreros, Pablo; Égré, Paul; Ripley, David and van Rooij, Robert, 2012, “Tolerance and mixed consequence in the s’valuational setting,” Studia Logica , 100(4): 855–877.
  • Etchemendy, John, 1990, The Concept of Logical Consequence , Cambridge, MA: Harvard University Press.
  • –––, 2008, “Reflections on Consequence”, in D. Patterson (ed.), 2008.
  • Garson, James W., 2013, What Logics Mean: From Proof Theory to Model-Theoretic Semantics , Cambridge: Cambridge University Press.
  • Gomez-Torrente, Mario, 1996, “Tarski on Logical Consequence,” Notre Dame Journal of Formal Logic , 37: 125–151.
  • Griffiths, Owen, and Paseau, A.C., 2022, One True Logic: A Monist Manifesto , Oxford: Oxford University Press.
  • Hanson, William H., 1997, “The Concept of Logical Consequence,” The Philosophical Review , 106 (3): 365–409.
  • Kennedy, Juliette and Väänänen, Jouko, 2017, “Squeezing arguments and strong logics,” in Hannes Leitgeb, Ilkka Niiniluoto, Elliot Sober and P. Seppälä (eds.), Logic, Methodology, and the Philosophy of Science: Proceedings of the Fifteenth International Congress (CLMPS 2015), London: College Publications.
  • Kreisel, Georg, 1967, “Informal Rigour and Completeness Proofs,” in I. Lakatos (ed.), Problems in the Philosophy of Mathematics , (Studies in Logic and the Foundations of Mathematics: Volume 47), Amsterdam: North Holland, pp. 138–186.
  • McGee, Vann, 1992, “Two Problems with Tarski’s Theory of Consequence,” Proceedings of the Aristotelian Society , 92: 273–292.
  • Murzi, Julien and Carrara, Massimiliano, 2014, “More Reflections on Consequence,” Logique et Analyse , 57 (227): 223–258.
  • Murzi, Julien and Hjortland, Ole T., 2009, “Inferentialism and the Categoricity Problem: Reply to Raatikainen,” Analysis , 69 (3): 480–488.
  • Patterson, Douglas, (ed.), 2008, New Essays on Tarski and Philosophy , Oxford: Oxford University Press.
  • Peregrin, Jaroslav, 2014, Inferentialism: Why Rules Matter , UK: Palgrave Macmillan.
  • Prawitz, Dag, 1974, “On the Idea of a General Proof Theory,” Synthese , 27 (1–2): 63–77.
  • –––, 1985, “Remarks on some approaches to the concept of logical consequence,” Synthese , 62: 153–171.
  • –––, 2005, “Logical Consequence from a Constructivist Point of View,” in S. Shapiro (ed.), The Oxford Handbook of the Philosophy of Mathematics and Logic , Oxford: Oxford University Press, pp. 671–695.
  • –––, 2012, “The Epistemic Significance of Valid Inference,” Synthese , 187: 887–898.
  • Priest, Graham, 1999, “Validity,” European Review of Philosophy , 4: 183–205 (Special Issue: The Nature of Logic , Achille C. Varzi (ed.), Stanford: CSLI Publications).
  • Prior, Arthur N., 1960, “The Runabout Inference-Ticket,” Analysis , 21 (2): 38–39.
  • Putnam, Hilary, 1971, Philosophy of Logic , New York: Harper & Row.
  • Quine, W.V.O., 1986 (2nd Ed.), Philosophy of Logic , Cambridge, MA: Harvard University Press.
  • Raatikainen, Panu, 2008, “On Rules of Inference and the Meanings of Logical Constants,” Analysis , 68 (300): 282–287.
  • Ray, Greg, 1996, “Logical Consequence: A Defense of Tarski,” The Journal of Philosophical Logic , 25 (6): 617–677.
  • Read, Stephen, 1994, “Formal and Material Consequence,” The Journal of Philosophical Logic , 23 (3): 247–265.
  • Restall, Greg, 2005, “Multiple Conclusions,” in P. Hájek, L. Valdés-Villanueva, and D. Westerståhl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the Twelfth International Congress , London: KCL Publications, pp. 189–205. [ Preprint available online in PDF ].
  • Ripley, David, 2013, “Paradoxes and failures of cut,” Australasian Journal of Philosophy , 91(1): 139–164. doi: 10.1080/00048402.2011.630010.
  • Sagi, Gil, 2014a, “Formality in Logic: From Logical Terms to Semantic Constraints,” Logique et Analyse , 57 (227): 259–276.
  • –––, 2014b, “Models and Logical Consequence,” Journal of Philosophical Logic , 43 (5): 943–964.
  • Shapiro, Stewart, 1987, “Principles of Reflection and Second Order Logic,” Journal of Philosophical Logic 16 (3): 309–333.
  • –––, 1998, “Logical Consequence: Models and Modality,” in M. Schirn (ed.), The Philosophy of Mathematics Today , Oxford: Oxford University Press, pp. 131–156.
  • –––, 2005, “Logical Consequence, Proof Theory, and Model Theory,” in S. Shapiro (ed.), The Oxford Handbook of the Philosophy of Mathematics and Logic , Oxford: Oxford University Press, pp. 651–670.
  • –––, 2014, Varieties of Logic , Oxford: Oxford University Press.
  • Sher, Gila, 1991, The Bounds of Logic , Cambridge, MA: MIT Press.
  • –––, 1996, “Did Tarski Commit Tarski’s Fallacy?,” Journal of Symbolic Logic , 61 (2): 653–686.
  • –––, 2022, Logical Consequence , Cambridge: Cambridge University Press.
  • Schroeder-Heister, Peter, 1991, “Uniform Proof-Theoretic Semantics for Logical Constants (Abstract),” Journal of Symbolic Logic , 56: 1142.
  • Tarski, Alfred, 1986, “What are Logical Notions,” History and Philosophy of Logic , 7: 143–154.
  • Tennant, Neil, 1994, “The Transmission of Truth and the Transitivity of Deduction,” in What is a Logical System? (Studies in Logic and Computation: Volume 4), D.M. Gabbay (ed.), Oxford: Clarendon Press, pp. 161–177.
  • Wansing, Heinrich, 2000, “The Idea of a Proof-Theoretic Semantics and the Meaning of the Logical Operations,” Studia Logica , 64 (1): 3–20.
  • Westerståhl, Dag, 2012, “From constants to consequence, and back,” Synthese , 187 (3): 957–971.
  • Woods, Jack, 2012, “Failures of Categoricity and Compositionality for Intuitionistic Disjunction,” Thought: A Journal of Philosophy , 1 (4): 281–291.
  • Zinke, Alexandra, 2018, The Metaphysics of Logical Consequence (Studies in Theoretical Philosophy: Volume 6), Frankfurt am Main: Vittorio Klostermann.
  • MacFarlane, John, 2000, What Does it Mean to Say that Logic is Formal? , Ph. D. Dissertation, Philosophy Department, University of Pittsburgh.

Aristotle, General Topics: logic | Bolzano, Bernard | Carnap, Rudolf | Frege, Gottlob: theorem and foundations for arithmetic | logic, normative status of | logic: algebraic propositional | logic: classical | logic: inductive | logic: intuitionistic | logic: non-monotonic | logic: substructural | logical constants | logical form | logical pluralism | logical truth | model theory | proof theory | Russell, Bertrand | schema | semantics: proof-theoretic

Copyright © 2024 by Jc Beall, Greg Restall, and Gil Sagi <gilisagi@gmail.com>


The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Using Logic in Writing


Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

Understanding how to create logical syllogisms does not automatically mean that writers understand how to use logic to build an argument. Crafting a logical sequence into a written argument can be a very difficult task. Don't assume that an audience will easily follow the logic that seems clear to you. When converting logical syllogisms into written arguments, remember to:

  • lay out each premise clearly
  • provide evidence for each premise
  • draw a clear connection to the conclusion

Suppose a writer is crafting an editorial to argue against using taxpayer dollars for the construction of a new stadium in the town of Mill Creek. The writer's logic may look like this:

Premise 1: Projects funded by taxpayer dollars should benefit a majority of the public.
Premise 2: The proposed stadium construction benefits very few members of the public.
Conclusion: Therefore, the stadium construction should not be funded by taxpayer dollars.

This is a logical conclusion, but without elaboration it may not persuade the writer's opposition, or even people on the fence. Therefore, the writer will want to expand her argument like this:

Historically, Mill Creek has only funded public projects that benefit the population as a whole. Recent initiatives to build a light rail system and a new courthouse were approved because of their importance to the city. Last election, Mayor West reaffirmed this commitment in his inauguration speech by promising "I am determined to return public funds to the public." This is a sound commitment and a worthy pledge.

However, the new initiative to construct a stadium for the local baseball team, the Bears, does not follow this commitment. While baseball is an enjoyable pastime, it does not receive enough public support to justify spending $210 million in public funds for an improved stadium. Attendance in the past five years has been declining, and last year only an average of 400 people attended each home game, meaning that less than 1% of the population attends the stadium. The Bears have a dismal record of 0–43, which generates little public interest in the team.

The population of Mill Creek is plagued by many problems that affect the majority of the public, including its decrepit high school and decaying water filtration system. Based on declining attendance and interest, a new Bears stadium is not one of those needs, so the project should not be publicly funded. Funding this project would violate the mayor's commitment to use public money for the public.

Notice that the piece uses each paragraph to focus on one premise of the syllogism (this is not a hard and fast rule, especially since complex arguments require far more than three premises and paragraphs to develop). Concrete evidence for both premises is provided. The conclusion is specifically stated as following from those premises.

Consider this example, where a writer wants to argue that the state minimum wage should be increased. The writer does not follow the guidelines above when making his argument.

It is obvious to anyone thinking logically that minimum wage should be increased. The current minimum wage is an insult and is unfair to the people who receive it. The fact that the last proposed minimum wage increase was denied is proof that the government of this state is crooked and corrupt. The only way for them to prove otherwise is to raise minimum wage immediately.

The paragraph does not build a logical argument for several reasons. First, it assumes that anyone thinking logically will already agree with the author, which is clearly untrue. If that were the case, the minimum wage increase would have already occurred. Second, the argument does not follow a logical structure: there is no development of premises that lead to a conclusion. Third, the author provides no evidence for the claims made.

In order to develop a logical argument, the author first needs to determine the logic behind his own argument. It is likely that the writer did not consider this before writing, which demonstrates that arguments that could be logical are not automatically logical; they must be made logical by careful arrangement.

The writer could choose several different logical approaches to defend this point, such as a syllogism like this:

Premise 1: Minimum wage should match the cost of living in society.
Premise 2: The current minimum wage does not match the cost of living in society.
Conclusion: Therefore, minimum wage should be increased.

Once the syllogism has been determined, the author needs to elaborate each step in writing that provides evidence for the premises:

The purpose of minimum wage is to ensure that workers can provide basic amenities to themselves and their families. A report in the Journal of Economic Studies indicated that workers cannot live above the poverty line when minimum wage is not proportionate with the cost of living. It is beneficial to society and individuals for a minimum wage to match living costs.

Unfortunately, our state's minimum wage no longer reflects an increasing cost of living. When the minimum wage was last set at $5.85, the yearly salary of $12,168 guaranteed by this wage was already below the poverty line. Years later, after inflation has consistently raised the cost of living, workers earning minimum wage must struggle to support a family, often taking two or three jobs just to make ends meet. Fully 35% of our state's poor population is made up of people with full-time minimum wage jobs.

To remedy this problem and support the workers of this state, the minimum wage must be increased. A modest increase could help alleviate the burden placed on the many residents who work too hard for too little just to make ends meet.

This piece explicitly states each logical premise in order, allowing them to build to their conclusion. Evidence is provided for each premise, and the conclusion is closely related to the premises and evidence. Notice, however, that even though this argument is logical, it is not irrefutable. An opponent with a different perspective and logical premises could challenge this argument. See the next section for more information on this issue.

Definition and Examples of Conclusions in Arguments


In argumentation, a conclusion is the proposition that follows logically from the major and minor premises in a syllogism. An argument is valid when the premises genuinely support the conclusion, and sound when it is valid and the premises are also true (or believable).

"We can always test an argument," says D. Jacquette, "by seeing whether and how far we can modify it in order to attain the opposite conclusion" ("Deductivism and the Informal Fallacies" in  Pondering on Problems of Argumentation , 2009).

Examples and Observations

  • "Here is a simple list of statements: Socrates is a man. All men are mortal. Socrates is mortal. The list is not an argument, because none of these statements is presented as a reason for any other statement. It is, however, simple to turn this list into an argument. All we have to do is to add the single word 'therefore': Socrates is a man. All men are mortal. Therefore, Socrates is mortal. Now we have an argument. The word 'therefore' converts these sentences into an argument by signaling that the statement following it is a conclusion and the statement or statements that come before it are offered as reasons on behalf of this conclusion. The argument we have produced in this way is a good one, because the conclusion follows from the reasons stated on its behalf." (Walter Sinnott-Armstrong and Robert J. Fogelin, Understanding Arguments: An Introduction to Informal Logic , 8th ed. Wadsworth, 2010)
  • Premises That Lead to a Conclusion "Here is an example of an argument. This job description is inadequate because it is too vague. It doesn't even list the specific tasks that should be performed, and it doesn't say how my perfomance will be evaluated. 'This job description is inadequate' is the conclusion and is stated first in the argument. The reasons advanced to support this conclusion are: 'It is too vague,' 'It doesn't list specific tasks,' and 'It doesn't state how performance will be evaluated.' They are the premises. If you accept the premises as true, you have good grounds for accepting the conclusion 'The job description is inadequate' is true." (Michael Andolina, Practical Guide to Critical Thinking . Delmar, 2002)
  • The Conclusion as Claim "When someone makes an argument, typically that person is, at the minimum, advancing a claim — a statement the advocate believes or is in the process of evaluating —and also providing a reason or reasons for believing or considering that claim. A reason is a statement advanced for the purpose of establishing a claim. A conclusion is a claim that has been reached by a process of reasoning . The rational movement from a particular reason or reasons to a particular conclusion is called an inference , a conclusion drawn on the basis of reasons ." (James A. Herrick, Argumentation: Understanding and Shaping Arguments , 3rd ed. Strata, 2007)
  • Misdirected Argumentation "This general fault [ misdirected argumentation ] refers to cases in which there is a line of argumentation moving along other than the path of argumentation leading towards the conclusion to be proved. In some such cases the path leads to the wrong conclusion, and in these cases the fallacy of wrong conclusion can be said to have been committed. In other cases the path leads away from the conclusion to be proved, but not to any specific alternative conclusion, as far as we can judge from the data given in the case. [See the fallacy of the red herring .]" (Douglas Walton,  Argumentation Methods for Artificial Intelligence in Law . Springer, 2005)

Example sentences: "logical conclusion"

It takes to its extreme logical conclusion the pervasive notion that if you are innocent, a good citizen, you have nothing to hide.

But it merely takes to its logical conclusion an attitude that we often encounter in the street.

Be responsive to change across the board and be prepared to follow things through to their logical conclusion.

Taken to a logical conclusion and applied more widely it could, say, lead to fast-food retailers being barred from certain areas.

Yet once committed, he never wavered, driving deals through to what by then he was certain was their logical conclusion.


Understanding Logical Statements

Learning Objectives

  • Identify the hypothesis and conclusion in a logical statement.
  • Determine whether mathematical statements involving linear, quadratic, absolute value expressions, equations, or inequalities are always, sometimes, or never true.
  • Use counterexamples to show that a statement is false, and recognize that a single counterexample is sufficient.

Introduction

Logic is an essential part of the study of mathematics. Much of mathematics is concerned with the characteristics of numbers and other mathematical objects (such as geometric figures or variables), and being able to make decisions about what must be true based on known characteristics and other facts is vital.

The Parts of a Logical Statement

A logical statement is a statement that, when true, allows us to take a known set of facts and infer (or assume) a new fact from them. Logical statements have two parts: the hypothesis, the premise or set of facts that we start with (in the statement “If `x` then `y`,” the hypothesis is `x`), and the conclusion, the result that we can infer when the hypothesis is true (in “If `x` then `y`,” the conclusion is `y`). (Note: If you've used "hypothesis" in science class, you've probably noticed that this is a fairly different definition. Be careful not to get confused!)

Consider this statement:

If you go outside without any rain gear or cover when it’s pouring rain, you will get wet.

Here, the hypothesis is “you go outside without any rain gear or cover when it’s pouring rain.” The hypothesis must be completely true before we can use the statement to infer anything new from it. What does this statement say about someone who doesn’t go outside? About someone who uses an umbrella? About what happens to someone when it’s not pouring rain? Nothing. This statement doesn't apply to anyone in those cases.

The conclusion of this statement is “you will get wet.” Suppose it’s raining, and someone walks outside, and doesn't have any rain gear or other kind of cover—what will happen? All the parts of the hypothesis have been met, so—if the statement is true—we can infer that the person is going to get wet. It certainly seems reasonable that they would!

Note that in this example, if the hypothesis isn’t true, the person still could get wet. On a sunny day with no rain, someone might go outside to wash his car and get sprayed by the hose. Someone else might go swimming, and then they would really get wet! The statement says nothing when the hypothesis is false. It's only helpful when the hypothesis is true.

Not all logical statements are written as “If (something is true) then (something else is true).” To identify the hypothesis and conclusion, you may need to try to rewrite a statement in an “if-then” format.
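The if-then logic described above can be modeled in a few lines of code. This is an illustrative sketch, not part of the original lesson, and the `statement_holds` helper is a hypothetical name: the key fact is that an if-then statement is violated only when its hypothesis is true and its conclusion is false.

```python
def statement_holds(hypothesis: bool, conclusion: bool) -> bool:
    """An if-then statement fails only when its hypothesis is true and
    its conclusion is false; in every other case it holds (vacuously
    so when the hypothesis is false, since the statement says nothing)."""
    return (not hypothesis) or conclusion

# "If you go outside uncovered when it's pouring rain, you will get wet."
assert statement_holds(True, True)       # went out uncovered and got wet
assert not statement_holds(True, False)  # went out uncovered, stayed dry: the only violation
assert statement_holds(False, True)      # stayed in but got sprayed by a hose: still consistent
assert statement_holds(False, False)     # stayed in and stayed dry: the statement is silent
```

This matches the rain-gear example: the statement makes no claim about anyone for whom the hypothesis is false, so those cases can never refute it.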

Logical statements can also be about mathematics, of course! Anything that lets us infer a new fact about something mathematical from given information is a logical statement. For example, “The diagonals of a rectangle have the same length” is a logical statement; in if-then form, it reads “If a figure is a rectangle, then its diagonals have the same length.” The hypothesis is the part that can help us once we know it's true. When could this statement be useful?

With algebraic statements, the hypothesis is often an assumption about what values are allowed for a variable. For example, you might have seen a statement like “`a + b = b + a`, where `a` and `b` are real numbers.” Treated as a logical statement, the hypothesis is “`a` and `b` are real numbers,” and the conclusion is “`a + b = b + a`.”

Testing for Truth

Critical thinking is important, not just in mathematics but in everyday life. Have you ever heard someone make a statement and then thought, “Wait. Is that true?” Sometimes people have reasons for thinking something is true even though it isn’t. Determining if a statement is true is a great skill to have!

When determining if a statement is true, most people start by looking for examples: situations for which the statement does turn out to be true (both the hypothesis and the conclusion are true). A more powerful thing to find, if one exists, is a counterexample: a situation for which the statement turns out to be false (the hypothesis is true, but the conclusion is false). Why are counterexamples so powerful?

Consider a person who sees the moon many times at night and then thinks: I’ve never seen the moon during the day. The person might then make the statement, “The moon only comes out at night.” As an if-then statement, this is the same as “If the moon is out, then it’s night.” We can all probably think of many times when we saw the moon out, and it was nighttime. These are examples, situations when the statement was true.

But in fact, the statement is not always true, and we only need to see the moon during the day once —only one counterexample—to know that the statement is not true. Many, many examples cannot prove the statement is true, but we only need one counterexample to prove it’s not!

For a given statement, then, we have three possibilities: the statement is always true (no counterexamples exist), sometimes true (there are both examples and counterexamples), or never true (there is no case in which both the hypothesis and the conclusion are true).

So how can we be sure that something is true (or never true, for that matter) if we can't rely on lots of examples? With algebraic statements, we can sometimes turn to a graph for help.

Another way to decide if something is always, sometimes, or never true is to reason from other things we know are true: we can start with something we know is true and try to derive the original statement from it.

When we try to put together logical arguments like that, the biggest problem can be knowing where to start. There's a good chance our first attempts will run into a dead end, and we'll need to start over. Practice does make it easier. Sometimes it helps to work backward: start by assuming the conclusion is true, try to think of a related statement known to be true (or false), and then connect them.

Another way to test the truth of a statement is to look for counterexamples, and graphs can help us there, too.

Although we know  `|-x| = -x` is not always true if `x` is any real number, we also know that it is sometimes true. In fact, we can specify when it’s true—using the graph from the example, we can see it’s true when `x<=0` . We can use that fact to create a new statement:

If `x<=0` , then `|-x| = -x` .

Because of the narrower hypothesis, this statement is always true.
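The narrowed statement can be spot-checked numerically. The following is a quick sketch added for illustration (not part of the original lesson), using Python's built-in `abs`:

```python
# Spot-check "If x <= 0, then |-x| = -x" over a spread of sample values.
samples = [k / 2 for k in range(-10, 11)]  # -5.0, -4.5, ..., 4.5, 5.0

for x in samples:
    if x <= 0:                  # hypothesis: only these cases count
        assert abs(-x) == -x    # conclusion holds for every such x

# Without the narrowed hypothesis, one counterexample disproves the claim:
x = 1
assert abs(-x) != -x            # |-1| is 1, but -x is -1
```

Passing for twenty-one sample values does not prove the statement, but finding no counterexample where the hypothesis holds is exactly what "always true" predicts.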

Special Cases

When we look for examples, and particularly for counterexamples, there are some special cases that are easy to overlook. Keeping these cases in mind is often very helpful. Look through some special cases and consider if any of them provides a counterexample for this statement: “When two numbers are multiplied, the product is larger than each of the factors.”
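A short loop over the special cases (zero, one, a negative, a fraction) turns up counterexamples to the statement about products. This sketch is added for illustration and is not part of the original lesson:

```python
# Hunt for counterexamples to: "When two numbers are multiplied,
# the product is larger than each of the factors."
special_cases = [0, 1, -2, 0.5, 3]   # zero, one, a negative, a fraction, a larger integer

counterexamples = []
for a in special_cases:
    for b in special_cases:
        product = a * b
        if not (product > a and product > b):  # the conclusion fails here
            counterexamples.append((a, b, product))

# One hit is enough to disprove the statement; the special cases give
# several, e.g. 0 * 0 = 0 and 0.5 * 0.5 = 0.25, where the product is
# not larger than the factors.
assert (0, 0, 0) in counterexamples
assert (0.5, 0.5, 0.25) in counterexamples
```

Notice that ordinary cases like 3 × 3 = 9 satisfy the statement; it is precisely the overlooked special cases that disprove it.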

Here are some more examples. Consider the special cases above as you read through these.

Logical statements have two parts, a hypothesis that presents facts that the statement needs to be true, and a conclusion that presents a new fact we can infer when the hypothesis is true.

For a statement to be always true, there must be no counterexamples for which the hypothesis is true and the conclusion is false. If there are examples for which the statement is true, but there are also counterexamples, then the statement is sometimes true. These sometimes true statements can be made into always true statements by changing the hypothesis. A statement is never true if there are no examples for which both the hypothesis and the conclusion are true. When looking for counterexamples and examples, there are some special cases (such as negative numbers and fractions) that should be considered.
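The always/sometimes/never classification above can be written out as a small function over observed cases. This is an illustrative sketch (the `classify` function is a hypothetical name), and it can only report on the cases it is shown, since no finite list of examples proves a statement true in general:

```python
def classify(cases):
    """Classify a statement over observed (hypothesis, conclusion)
    truth-value pairs. Only cases where the hypothesis holds count."""
    relevant = [concl for hyp, concl in cases if hyp]
    has_example = any(relevant)              # both parts true somewhere
    has_counterexample = not all(relevant)   # hypothesis true, conclusion false
    if has_example and not has_counterexample:
        return "always"     # no counterexample among the observed cases
    if has_example and has_counterexample:
        return "sometimes"
    return "never"          # no observed case where both parts are true

# "If the moon is out, then it's night": many nighttime examples,
# but one daytime sighting makes the statement only sometimes true.
assert classify([(True, True), (True, True), (True, False)]) == "sometimes"
```

As the lesson stresses, many examples with no counterexample only means "always true so far"; a single counterexample, however, settles the matter for good.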


Syllogism Definition

What is a syllogism? Here’s a quick and simple definition:

A syllogism is a three-part logical argument, based on deductive reasoning, in which two premises are combined to arrive at a conclusion. So long as the premises of the syllogism are true and the syllogism is correctly structured, the conclusion will be true. An example of a syllogism is "All mammals are animals. All elephants are mammals. Therefore, all elephants are animals." In a syllogism, the more general premise is called the major premise ("All mammals are animals"). The more specific premise is called the minor premise ("All elephants are mammals"). The conclusion joins the logic of the two premises ("Therefore, all elephants are animals").

Some additional key details about syllogisms:

  • First described by Aristotle in Prior Analytics , syllogisms have been studied throughout history and have become one of the most basic tools of logical reasoning and argumentation.
  • Sometimes the word syllogism is used to refer generally to any argument that uses deductive reasoning.
  • Although syllogisms can have more than three parts (and use more than two premises), it's much more common for them to have three parts (two premises and a conclusion). This entry only focuses on syllogisms with three parts.

Syllogism Pronunciation

Here's how to pronounce syllogism: SIL-uh-jiz-um

Structure of Syllogisms

Syllogisms can be represented using the following three-line structure, in which A, B, and C stand for the different terms:

  • All A are B.
  • All C are A.
  • Therefore, all C are B.

Notice how the A term functions as a kind of "middle" connecting the other two terms: every A is a B, and every C is an A, so every C must be a B.

Types of Syllogism

Over the years, more than two dozen different variations of syllogisms have been identified. Most of them are pretty technical and obscure. But it's worth being familiar with the most common types of syllogisms.

Universal Syllogisms

Universal syllogisms are called "universal" because they use words that apply completely and totally, such as "no" and "none" or "all" and "only." The two most common forms of universal syllogisms are:

  • All mammals are animals.
  • All elephants are mammals.
  • Therefore, all elephants are animals.
  • No mammals are frogs.
  • All elephants are mammals.
  • Therefore, no elephants are frogs.

Particular Syllogisms

Particular syllogisms use words like "some" or "most" instead of "all" or "none." Within this category, there are two main types:

  • All elephants have big ears.
  • Some animals are elephants.
  • Therefore, some animals have big ears.
  • No doctors are children.
  • Some immature people are doctors.
  • Therefore, some immature people are not children.

Enthymemes

Enthymemes are logical arguments in which one or more of the premises is not explicitly stated but is instead implied. Put another way: an enthymeme is a kind of abbreviated syllogism in which the writer presumes that the audience will accept the implied, unstated premise. For instance, the following statement is an enthymeme:

  • "Socrates is mortal because he's human."

This enthymeme is an abbreviation of a famous syllogism:

  • All humans are mortal.
  • Socrates is human.
  • Therefore Socrates is mortal.

The enthymeme leaves out the major premise. It instead assumes that all readers will understand and agree that "Socrates is mortal because he's human" without needing the explicit statement that "all humans are mortal."

Syllogistic Fallacies

A "fallacy" is the name for a mistake in logic. Syllogisms often seem like very simple statements, but you may be surprised how often people make logical mistakes when trying to put together simple syllogisms. For example, it may seem logical to make a statement like "Some A are B, and some C are A, therefore some C are B," such as:

  • Some nice people are teachers.
  • Some people with red hair are nice.
  • Therefore, some teachers have red hair.

Each of these categorical propositions is, after all, true—but in fact the final proposition, while true in itself, is not the logical conclusion of the two preceding premises. In other words, the first two propositions, when combined, don't actually prove that the conclusion is true. So even though each statement is independently true, the "syllogism" above is actually a logical fallacy. Here's an example of a false syllogism whose logical fallacy is a bit easier to see.

  • Some trees are tall things.
  • Some tall things are buildings.
  • Therefore, some trees are buildings.

The error in both of the above examples is called the "fallacy of the undistributed middle," since in each example the A is not "distributed" across the B and C in such a way that the B and C terms actually overlap. Other types of syllogistic fallacies exist, but this is by far the most common logical error people make with syllogisms.
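The fallacy is easy to see in the set picture: "some" only asserts a non-empty overlap with the middle term, and two separate overlaps with "tall things" need not overlap each other. A small sketch with hypothetical items:

```python
# "Some trees are tall" and "some tall things are buildings" are both
# true here, yet no tree is a building: the middle term "tall" is
# never distributed, so the two overlaps can be disjoint.
trees     = {"oak", "bonsai"}
tall      = {"oak", "skyscraper"}
buildings = {"skyscraper", "cottage"}

assert trees & tall               # some trees are tall things
assert tall & buildings           # some tall things are buildings
assert not (trees & buildings)    # ...and yet no tree is a building
```

Contrast this with the universal syllogism, where "all" forces full containment and the conclusion cannot fail.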

Syllogism Examples

Syllogisms appear more often in rhetoric and logical argumentation than they do in literature, but the following are a few of the more memorable examples of the use of syllogism in literature.

Syllogism in Timon of Athens by Shakespeare

In this passage from a lesser-known work of Shakespeare, titled Timon of Athens, the character Flavius asks Timon whether he has forgotten him. Timon responds with a syllogism.

Flavius: Have you forgot me, sir?

Timon: Why dost ask that? I have forgot all men; Then, if thou grant’st thou’rt a man, I have forgot thee.

So the structure of Timon's syllogism is as follows:

  • All men are men that Timon has forgotten.
  • Flavius is a man.
  • Therefore, Flavius is a man that Timon has forgotten.

Syllogistic Fallacy in The Merchant of Venice by Shakespeare

In The Merchant of Venice, a beautiful young woman named Portia is to marry whoever correctly guesses which of three caskets contains her portrait: the gold, the silver, or the lead casket. A prince comes to solve the riddle and thinks he has worked out the answer when he reads the following inscription on the gold casket:

“Who chooseth me shall gain what many men desire.”

Upon reading this inscription, the suitor immediately exclaims:

Why, that’s the lady. All the world desires her.

But he's mistaken; the gold casket does not contain the portrait of Portia. The suitor clearly thinks he has made a logical deduction using the structure of a syllogism:

  • All men desire Portia;
  • Many men desire what is in this chest;
  • Therefore what is in the chest is (the portrait of) Portia.

But this "syllogism" is actually an example of the "fallacy of the undistributed middle," as we described above. In other words, it's the equivalent of saying "some trees are tall, and some buildings are tall, so therefore some buildings are trees."

Syllogism in "Elegy II" by John Donne

The following lines by John Donne contain a syllogism, though Donne takes some liberties with language in putting it together:

All love is wonder; if we justly do
Account her wonderful, why not lovely too?

These lines could be translated into the structure of a syllogism like so:

  • All love is wonder.
  • She inspires wonder (or she is "wonderful").
  • Therefore, she inspires love (or she is "lovely").

Syllogism in "To His Coy Mistress" by Andrew Marvell

If you look closely, you can see that this poem by Andrew Marvell contains a subtle syllogism scattered throughout. Embedded in the beginning, middle, and end of the poem are the major premise, the minor premise, and the conclusion.

Had we but world enough and time,
This coyness, lady, were no crime.
We would sit down, and think which way
To walk, and pass our long love’s day.
Thou by the Indian Ganges’ side
Shouldst rubies find; I by the tide
Of Humber would complain. I would
Love you ten years before the flood,
And you should, if you please, refuse
Till the conversion of the Jews.
My vegetable love should grow
Vaster than empires and more slow;
An hundred years should go to praise
Thine eyes, and on thy forehead gaze;
Two hundred to adore each breast,
But thirty thousand to the rest;
An age at least to every part,
And the last age should show your heart.
For, lady, you deserve this state,
Nor would I love at lower rate.
But at my back I always hear
Time’s wingèd chariot hurrying near;
And yonder all before us lie
Deserts of vast eternity.
Thy beauty shall no more be found;
Nor, in thy marble vault, shall sound
My echoing song; then worms shall try
That long-preserved virginity,
And your quaint honour turn to dust,
And into ashes all my lust;
The grave’s a fine and private place,
But none, I think, do there embrace.
Now therefore, while the youthful hue
Sits on thy skin like morning dew,
And while thy willing soul transpires
At every pore with instant fires,
Now let us sport us while we may,
And now, like amorous birds of prey,
Rather at once our time devour
Than languish in his slow-chapped power.

You might translate the above as follows:

  • Being coy is fine so long as there's lots of time.
  • There isn't any time. (I can hear "time's winged chariot" right behind me!)
  • Therefore, don't be coy. (We should be amorous "while we may.")

Why Do Writers Use Syllogisms?

Writers use syllogisms because they're a useful tool for making an argument more convincing in persuasive writing and rhetoric. More specifically, writers might choose to use syllogism because:

  • Using a syllogism can help make a logical argument sound indisputable, whether it's being used to illustrate a simple point or a complex one.
  • Although syllogisms may seem somewhat tedious—since often they are just spelling out things that most people already know—it is helpful to clarify the terms and basic assumptions of an argument before proceeding with your main points.
  • As shown in the Merchant of Venice example from above, even a false or poorly constructed syllogism can help make an ill-conceived argument sound airtight, since using the language and structure of logical argumentation can be very convincing even if the logic itself isn't sound.

Other Helpful Syllogism Resources

  • The Wikipedia Page on Syllogism: A helpful, though technical, resource if you want to learn more about the history of syllogisms, the many different types, and the different ways they can be described using symbols.
  • The Dictionary Definition of Syllogism: A basic definition, including a bit on the etymology: the root of the word is the Greek verb "to infer."
  • To-the-point syllogism: A concise page on basic syllogisms and enthymemes.



Definition of conclusion

  • consequence
  • determination


Word History

Middle English, from Anglo-French, from Latin conclusion-, conclusio , from concludere — see conclude

14th century, in the meaning defined at sense 1a

Phrases Containing conclusion

  • bring to a conclusion
  • bring to conclusion
  • come to the conclusion
  • draw a conclusion
  • foregone conclusion
  • in conclusion
  • reach a conclusion
  • reach its conclusion


How to Write a Good Conclusion (With Examples) 


  • Smodin Editorial Team
  • Published: May 31, 2024

Students often spend a great deal of time crafting essay introductions while leaving the conclusion as an afterthought. While the introduction is one of the most vital aspects of an essay, a good conclusion can have just as much of an impact on its effectiveness. Knowing how to write a good conclusion is crucial, as it encapsulates your main points and leaves a lasting impression on the reader.

A well-crafted conclusion should serve as the final pitch for your arguments. Your reader should walk away with a clear understanding of what they just read and how it applies to the core of your thesis. With the right approach, your conclusion can transform a good essay into a great one, making it both memorable and impactful.

This article will guide you through four simple steps of writing compelling conclusions. Each step is designed to help you reinforce your thesis and articulate your final thoughts in a way that will resonate with your teacher or professor. With a bit of practice, you can learn how to stick the landing and give every essay the finale it deserves.

What Is the Purpose of the Conclusion Paragraph?

Understanding the purpose of the conclusion paragraph is essential for effective essay writing. The conclusion paragraph should be more than just a summary of your essay. It should consolidate all your arguments and tie them back to your thesis.

Remember, all good writing inspires emotion. Whether to inspire, provoke, or engage is up to you, but the conclusion should always leave a lasting impression.

If in doubt, Smodin’s AI Chat tool can be handy for gauging the emotional impact of your conclusion.

By mastering the art of writing a powerful conclusion, you equip yourself with the tools to ensure your essays stand out. Whether it’s the first or last essay you’re writing for the class, it’s your chance to leave a definitive mark on your reader.

How to Write a Good Conclusion


A structured approach ensures your conclusion adds value and reinforces the coherence of your arguments. Here are three simple and effective practices to help you craft a solid conclusion.

Restating Your Thesis

Restating your thesis in the conclusion is a common practice in essay writing, and for good reason. It helps underscore how your understanding has deepened or shifted based on the evidence you provided.

Just understand that a restatement of your original thesis doesn’t mean a complete word-for-word repeat. You should rephrase your original thesis so that it elucidates the insights you touched on throughout the essay. Smodin’s AI Rewriter can help refine your restatement to ensure it is fresh and impactful.

Here are a few tips to effectively restate your thesis:

  • Show Complexity : If your essay added layers or nuances to the original statement, be sure to articulate that clearly.
  • Integrate Key Findings : Incorporate the main findings of your essay to reinforce how they supported or refined your thesis.
  • Keep It Fresh : Again, you want to avoid repeating the same things twice. Use different wording that reflects a nuanced perspective.

Finally, always ensure that the restated thesis connects seamlessly with the rest of your essay. Always try to showcase the coherence of your writing to provide the reader with a strong sense of closure.

Using AI tools like Smodin’s Outliner and Essay Writer can ensure your writing flows smoothly and is easy to follow.

Providing an Effective Synthesis

Providing an effective synthesis should enhance your original thesis. All good arguments should evolve and shift throughout the essay. Rather than simply summarizing these findings, you should integrate critical insights and evidence to demonstrate a deeper or more nuanced understanding.

Draw connections between the main points discussed and show how they collectively support your thesis. Also, reflect on the implications of these insights for the broader context of your subject. And once again, always use fresh and engaging language to maintain the reader’s interest.

The last thing you want is for your reader to view your essay as a collection of individual points. A good essay should read as a unified whole, with all the pieces tying together naturally. You affirm your argument’s significance when you tie all the pieces together in your conclusion.

Providing New Insights


Think of this step as your opportunity to propose future research directions based on your findings. What could a student or researcher study next? What unanswered questions remain? If you’re having trouble answering these questions, consider using Smodin’s research tools to expand your knowledge of the topic.

That isn’t to say you should leave open-ended or unanswered questions about your own thesis. On the contrary, your conclusion should firmly establish the validity of your argument. That said, any deep and insightful analysis naturally leads to further exploration. Draw attention to these potential areas of inquiry.

(Optional) Form a Personal Connection With the Reader

Forming a connection with the reader in the conclusion can personalize and strengthen the impact of your essay. This technique can be powerful if implemented correctly, making your writing more relatable, human, and memorable.

That said, some academics discourage using “I” in formal essays. It’s always best to clarify your teacher’s or professor’s stance before submitting your final draft.

If it is allowed, consider sharing a brief personal reflection or anecdote that ties back to the main themes of your essay. A personal touch can go a long way toward humanizing your arguments and creating a connection with the reader.

Whatever you choose, remember that your conclusion should always complement the analytical findings of your essay. Never say anything that detracts from your thesis or the findings you presented.

Examples of Good Conclusions

Let’s explore some examples to illustrate what a well-crafted conclusion looks and sounds like. The following are two hypothetical thesis essays from the fields of science and literature.

  • Thesis Topic: The Impact of Climate Change on Coral Reefs
  • Introduction: “Coral reefs act as the guardians of the ocean’s biodiversity. These underwater ecosystems are among the most vibrant and essential on the entire planet. However, the escalating impact of climate change poses a severe threat to their health and survival. This essay aims to dissect specific environmental changes contributing to coral degradation while proposing measures for mitigation.”
  • Conclusion: “This investigation into the impact of climate change on coral reefs has revealed a disturbing acceleration of coral bleaching events and a significant decline of reef biodiversity. The findings presented in this study establish a clear link between increased sea temperatures and coral reef mortality. Future research should focus on the resilience mechanisms of coral species that could influence conservation strategies. The fate of the coral reefs depends on humanity’s immediate and concentrated action to curb global emissions and preserve these vital ecosystems for future generations.”

Notice how the conclusion doesn’t simply restate the thesis. Instead, it highlights the definitive connection between climate change and coral health. It also reiterates the issue’s urgency and extends a call to action for ongoing intervention. The last sentence is direct, to the point, and leaves a lasting impression on the reader.

If you’re struggling with your closing sentence (or any sentence, for that matter), Smodin’s Rewriter can create hundreds of different sentences in seconds. Then, choose the sentences and phrases that resonate the most and use them to craft a compelling conclusion.

  • Thesis Topic: The Evolution of the American Dream in 20th-Century American Literature
  • Introduction: “The American Dream was once defined by prosperity and success. However, throughout the 20th century, the representation of the American Dream in popular literature has undergone significant changes. Are these representations indicative of a far-reaching sentiment that lay dormant among the American public? Or were these works simply the result of disillusioned writers responding to the evolving challenges of the times?”
  • Conclusion: “Works by F. Scott Fitzgerald, John Steinbeck, and Toni Morrison illustrate the American Dream’s evolution from unbridled optimism to a more critical examination of the American ethos. Throughout modernist and post-modernist literature, the American Dream is often at odds with core American values. These novels reflect broader societal shifts that continue to shape the national consciousness. Further research into contemporary literature could provide greater insight into the complexities of this concept.”

You will know exactly what this essay covers by reading the introduction and conclusion alone. It summarizes the evolution of the American Dream by examining the works of three unique authors. It then analyzes these works to demonstrate how they reflect broader societal shifts. The conclusion works as both a capstone and a bridge to set the stage for future inquiries.

Write Better Conclusions With Smodin

Always remember the human element behind the grading process when crafting your essay. Your teachers or professors are human and have likely spent countless hours reviewing essays on similar topics. The grading process can be long and exhaustive. Your conclusion should aim to make their task easier, not harder.

A well-crafted conclusion serves as the final piece to your argument. It should recap the critical insights discussed above while shedding new light on the topic. By including innovative elements and insightful observations, your conclusion will help your essay stand out from the crowd.

Make sure your essay ends on a high note to maximize your chances of getting a better grade now and in the future. Smodin’s comprehensive suite of AI tools can help you enhance every aspect of your essay writing. From initial research to structuring, these tools can streamline the process and improve the quality of your essays.


Propositional Logic in Artificial Intelligence

Propositional logic, also known as propositional calculus or sentential logic, forms the foundation of logical reasoning in artificial intelligence (AI). It is a branch of logic that deals with propositions, which can either be true or false. In AI, propositional logic is essential for knowledge representation, reasoning, and decision-making processes. This article delves into the fundamental concepts of propositional logic and its applications in AI.

Table of Contents

  • What is Propositional Logic in Artificial Intelligence?
  • Examples of Propositional Logic
  • Basic Concepts of Propositional Logic
    1. Propositions
    2. Logical Connectives
    3. Truth Tables
    4. Tautologies, Contradictions, and Contingencies
  • Facts About Propositional Logic
  • Syntax of Propositional Logic
  • Logical Equivalence
  • Properties of Operators
  • Applications of Propositional Logic in AI
  • Limitations of Propositional Logic

What is Propositional Logic in Artificial Intelligence?

Propositional logic is a form of logic whose basic unit of expression is the proposition: a statement that can be either true or false, but not both at the same time. In AI, propositions represent facts, conditions, or other assertions about a particular situation or about the world. Propositional logic uses propositional symbols, connective symbols, and parentheses to build up compound expressions, themselves referred to as propositions.

Propositional operators such as conjunction (∧), disjunction (∨), negation (¬), implication (→), and biconditional (↔) enable propositions to be manipulated and combined in order to represent the underlying logical relations and rules.

Examples of Propositional Logic

In propositional logic, well-formed formulas, also called propositions, are declarative statements that may be assigned a truth value of either true or false. They are often denoted by letters such as P, Q, and R. Here are some examples:

  • P: “The sky is blue.”
  • Q: “It is raining.”
  • R: “The ground is wet.”

These propositions can be combined using logical connectives to form more complex statements. For instance:

  • P∧Q: “The sky is blue and it is raining.”
  • P∨Q: “The sky is blue or it is raining.”
  • ¬P: “The sky is not blue.”

Basic Concepts of Propositional Logic

1. Propositions

A proposition is a declarative statement that is either true or false. For example:

  • “The sky is blue.” (True)
  • “It is raining.” (False)

2. Logical Connectives

Logical connectives are used to form complex propositions from simpler ones. The primary connectives are:

  • Conjunction (∧): “It is sunny ∧ It is warm” is true only if both propositions are true.
  • Disjunction (∨): “It is sunny ∨ It is raining” is true if at least one proposition is true.
  • Negation (¬): “¬It is raining” is true if “It is raining” is false.
  • Implication (→): “If it rains, then the ground is wet” (It rains → The ground is wet) is true unless it rains and the ground is not wet.
  • Biconditional (↔): “It is raining ↔ The ground is wet” is true if both are true or both are false.
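As a rough illustration (our own sketch, not from the original article), each connective can be written as a plain Python function over booleans; the function names here are invented for clarity:

```python
# Hypothetical helpers, one per connective. Names are illustrative only.
def NOT(p):
    return not p

def AND(p, q):
    return p and q

def OR(p, q):
    return p or q

def IMPLIES(p, q):
    # P → Q is false only when P is true and Q is false.
    return (not p) or q

def IFF(p, q):
    # P ↔ Q is true exactly when both sides have the same truth value.
    return p == q

# "If it rains, then the ground is wet", with rain but a dry ground:
print(IMPLIES(True, False))  # False
```

This makes the implication rule concrete: the only falsifying case is a true premise with a false conclusion.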

3. Truth Tables

Truth tables are used to determine the truth value of complex propositions based on the truth values of their components. They exhaustively list all possible truth value combinations for the involved propositions.

4. Tautologies, Contradictions, and Contingencies

  • Tautology (always true): “P ∨ ¬P”
  • Contradiction (always false): “P ∧ ¬P”
  • Contingency (true under some assignments, false under others): “P ∧ Q”

Facts About Propositional Logic

  • Bivalence: Every proposition is either true or false; there is no third value, and no proposition can be both at once.
  • Compositionality: The truth value of a compound proposition is determined entirely by the truth values of its parts and the connectives that join them.
  • Non-ambiguity: Every well-formed proposition has exactly one interpretation; its meaning is never ambiguous.
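These classifications can be checked mechanically by enumerating a truth table. Here is a minimal sketch in Python (our own illustration; the helper names `truth_table` and `classify` are not from the article):

```python
from itertools import product

def truth_table(expr, num_vars):
    """List (assignment, value) pairs for every combination of truth values."""
    return [(values, expr(*values))
            for values in product([True, False], repeat=num_vars)]

def classify(expr, num_vars):
    """Label a formula as a tautology, contradiction, or contingency."""
    results = [value for _, value in truth_table(expr, num_vars)]
    if all(results):
        return "tautology"       # true on every row
    if not any(results):
        return "contradiction"   # false on every row
    return "contingency"         # mixed

print(classify(lambda p: p or not p, 1))     # tautology
print(classify(lambda p: p and not p, 1))    # contradiction
print(classify(lambda p, q: p and q, 2))     # contingency
```

Note the exponential cost built into this approach: a formula with n variables needs 2**n rows, which is exactly the scalability limitation discussed later.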

Syntax of Propositional Logic

The syntax of propositional logic defines how well-formed propositions and statements are constructed. The main components include:

  • Propositions: Denoted by capital letters (for example, P, Q).
  • Logical Connectives: Symbols used to combine propositions (e.g., ∧, ∨, ¬).
  • Parentheses: Used to group subexpressions and make the order of operations explicit.

In propositional logic, a well-formed formula (WFF) is a symbolic expression that satisfies these grammar rules.

Logical Equivalence

Two statements are logically equivalent if they have the same truth value under every possible assignment of truth values to their component propositions. For instance:

  • S → T is equivalent to ¬S ∨ T.
  • P ↔ Q is equivalent to (P → Q) ∧ (Q → P).

Logical equivalence can be verified using truth tables or by applying known equivalence laws.
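The truth-table check amounts to comparing two formulas on every assignment. A small sketch (illustrative, not from the article; `equivalent` and `implies` are our own names):

```python
from itertools import product

def equivalent(f, g, num_vars):
    """Two formulas are logically equivalent iff they agree on all assignments."""
    return all(f(*values) == g(*values)
               for values in product([True, False], repeat=num_vars))

def implies(p, q):
    return (not p) or q

# S → T  ≡  ¬S ∨ T
print(equivalent(lambda s, t: implies(s, t),
                 lambda s, t: (not s) or t, 2))                      # True

# P ↔ Q  ≡  (P → Q) ∧ (Q → P)
print(equivalent(lambda p, q: p == q,
                 lambda p, q: implies(p, q) and implies(q, p), 2))   # True
```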

Properties of Operators

The logical operators in propositional logic have several important properties:

1. Commutativity:

  • P ∧ Q ≡ Q ∧ P
  • P ∨ Q ≡ Q ∨ P

2. Associativity:

  • (P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R)
  • (P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R)

3. Distributivity:

  • P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
  • P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)

4. Identity:

  • P ∧ true ≡ P
  • P ∨ false ≡ P

5. Domination:

  • P ∨ true ≡ true
  • P ∧ false ≡ false

6. Double Negation:

  • ¬(¬P) ≡ P

7. Idempotence:

  • P ∧ P ≡ P
  • P ∨ P ≡ P

Applications of Propositional Logic in AI

1. Knowledge Representation:

Propositional logic is used to represent knowledge in a structured and unambiguous way. It allows AI systems to store and manipulate facts about the world. For instance, in expert systems, knowledge is encoded as a set of propositions and logical rules.

2. Automated Reasoning:

AI systems use propositional logic to perform automated reasoning. Logical inference rules, such as Modus Ponens and Modus Tollens, enable systems to derive new knowledge from existing facts. For example:

  • Modus Ponens : If “P → Q” and “P” are true, then “Q” must be true.
  • Modus Tollens: If “P → Q” and “¬Q” are true, then “¬P” must be true.
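To give a flavor of how a system might apply Modus Ponens repeatedly, here is a tiny forward-chaining sketch (our own example; the fact names and the `forward_chain` helper are invented, not from the article):

```python
def forward_chain(facts, rules):
    """Apply Modus Ponens until no new facts can be derived.

    `rules` is a list of (premise, conclusion) pairs, each encoding P → Q.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)   # P and P → Q together give Q
                changed = True
    return facts

rules = [("rain", "wet_ground"), ("wet_ground", "slippery")]
print(forward_chain({"rain"}, rules))   # derives wet_ground, then slippery
```

Chained rules fire transitively: knowing "rain" yields "wet_ground", which in turn yields "slippery".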

3. Problem Solving and Planning:

Propositional logic is fundamental in solving problems and planning actions. AI planners use logical representations of actions, states, and goals to generate sequences of actions that achieve desired outcomes. For example, the STRIPS planning system employs propositional logic to represent preconditions and effects of actions.
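To give a flavor of STRIPS-style representations, here is a toy sketch (our own illustration, not the actual STRIPS system; the actions and propositions are invented): states are sets of true propositions, each action is a precondition/add/delete triple, and breadth-first search recovers a plan.

```python
from collections import deque

# Each action: (preconditions, add effects, delete effects), all sets of propositions.
ACTIONS = {
    "open_door":    ({"door_closed"}, {"door_open"}, {"door_closed"}),
    "walk_through": ({"door_open"},   {"inside"},    set()),
}

def plan(start, goal):
    """Breadth-first search from the start state to any state satisfying the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:               # every goal proposition holds
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:            # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                         # no plan exists

print(plan({"door_closed"}, {"inside"}))  # ['open_door', 'walk_through']
```

Because BFS explores shorter action sequences first, the returned plan is minimal in length.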

4. Decision Making:

In decision-making processes, propositional logic helps AI systems evaluate various options and determine the best course of action. Logical rules can encode decision criteria, and truth tables can be used to assess the outcomes of different choices.

5. Natural Language Processing (NLP):

Propositional logic is applied in NLP for tasks like semantic parsing, where natural language sentences are converted into logical representations. This helps in understanding and reasoning about the meaning of sentences.

6. Game Theory and Multi-Agent Systems:

In game theory and multi-agent systems, propositional logic is used to model the beliefs and actions of agents. Logical frameworks help in predicting the behavior of agents and designing strategies for interaction.

While propositional logic is powerful, it has several limitations:

  • Lack of Expressiveness: Propositional logic cannot represent relations between objects or use variables to refer to objects in the world (e.g., “All human beings are mortal”).
  • Scalability: Truth tables grow exponentially with the number of propositions, making large practical problems infeasible.
  • Limited Inference: Propositions are only ever true or false; there is no way to express probabilities or degrees of truth.
  • Absence of Quantifiers: Unlike predicate logic, propositional logic does not allow quantifiers such as “for all” (∀) or “there exists” (∃), both of which are powerful expressive tools.

Propositional logic is one of the cornerstones of artificial intelligence and computer science, forming a basis upon which many algorithms are developed. It is employed in areas such as knowledge representation, reasoning, and digital circuit design. Its weaknesses do not detract from its effectiveness in building and applying AI systems. Understanding its principles, syntax, and properties is crucial for anyone engaged in AI-related work.



What is a CMOS Battery? How It Works & How to Replace One [2024]

CMOS stands for Complementary Metal Oxide Semiconductor and despite its size, it does much more than you think. Let’s clarify a few things first. A CMOS chip is vastly different from the CMOS battery on your motherboard.

Table of Contents

  • Introduction: What is “CMOS”?
  • The Role of a CMOS Battery
  • Issues with the CMOS Battery
    1. Incorrect Date and Time
    2. BIOS Passwords May Get Reset
    3. Disturbed Boot Device Sequence
    4. Constant Beeping Sound
    5. Irresponsive Peripherals
    6. Checksum Error
  • How to Reset/Replace the CMOS Battery

Introduction: What is “CMOS”?

CMOS is a MOSFET -type technology used for logical operations. In simple terms, it is used to make chips that have low static power consumption . This chip quite efficiently stores your important BIOS configurations, powers the RTC (Real Time Clock), and holds many other important settings.

For the last decade or so, the CMOS chip has been integrated as a part of the ICH / South Bridge chip on your motherboard.

The coin-shaped object on the contrary is the CMOS battery , a Lithium coin cell. This is what powers the aforementioned chip/memory. While storing BIOS/UEFI preferences can be offloaded to say your Hard Disk , since it is non-volatile, a battery is required to keep the time and date in check (Real Time Clock).

The terms CMOS battery and CMOS chip/memory are often used interchangeably, but it is best to know the difference between both.

READ MORE: What is Power-On Self-Test in Computers? POST Explained ➜

How Exactly Does a PC Function?

Let’s start from the basics. Your CPU is the brain of your computer: it does all the complex calculations and determines your PC’s performance. The CPU is attached to a motherboard, on which all the I/O is connected. You use a keyboard and mouse for input, and in most cases a monitor displays the output. The BIOS establishes the relationship between all these hardware devices.

When you boot up your PC, your BIOS identifies and configures the hardware connected. If you have 2 SSDs , one SATA and one M.2 , say you want the SATA to be your primary for whatever reason, the BIOS does all the handling. The BIOS or UEFI is the first software that loads up, even without an Operating System .

But let’s say you want to save these preferences, I mean who wants to set a manual overclock or change the boot device priority repeatedly every single time their PC boots up? This is where the CMOS memory/chip comes into play. Just like how you require a Hard Drive to store your games , your BIOS requires the CMOS memory to store all the settings and configurations.

Let’s talk more about the importance of CMOS in how your PC works.

The Role of a CMOS Battery

If you’ve assembled your computer for the first time and boot up, the CMOS is empty. The BIOS starts collecting information regarding your PC, like your CPU frequency (Overclocks, if any), memory speeds (XMP), memory latency, which settings you prefer to keep on/off, and so on. It saves this data in the CMOS memory.

The next time you boot up your PC, all this data is tallied against the stored information in the CMOS memory. If you replace any hardware part or change a setting, that information is also immediately saved to the CMOS memory.

Most importantly, a CMOS battery is crucial to maintain the RTC (Real Time Clock) on your computer. This allows proper tracking of time even when your PC is powered off. While most PCs nowadays are connected to the Internet , it is still convenient to have a one-stop solution for all systems. Let’s recap what we have discussed thus far, that is the CMOS battery:

  • Saves your BIOS / UEFI preferences and settings.
  • Maintains the Real Time Clock .
  • Provides power to the CMOS memory.

Since all batteries have a limited lifespan, you may need to replace your CMOS battery eventually. Typically, CMOS batteries last for around 3 years when unplugged from the PSU . But once they start getting weak, you may notice all sorts of problems.

READ MORE: 2 ways to change ‘Critical Battery Percentage Levels on Windows’ ➜

Issues with the CMOS Battery

A few issues that may arise due to a weakened or dying CMOS battery are as follows:

1. Incorrect Date and Time

As the CMOS battery is primarily responsible for powering the RTC, a weak battery may lead to an incorrect date and time on your PC. While this can be overcome by synchronizing your PC’s date/time with the Internet, it is a crystal-clear symptom of a failing CMOS battery.


Likewise, certain websites will display an error message stating “Your clock is ahead/behind.” This indicates that the date and time are not synchronized properly, which is most likely caused by your CMOS battery.

READ MORE: How to Fix Real Time Clock Error ➜

2. BIOS Passwords May Get Reset

BIOS passwords are different from Windows user passwords since they prevent unauthorized users from accessing your system BIOS. As the BIOS data is dependent on the CMOS memory and battery, the BIOS may frequently get reset due to a malfunctioning battery. This in turn also resets all BIOS-level passwords on your system.

If you manage an organization or a school, this can lead to many vulnerability and security issues. As a side note, you can use this to your advantage as well. If you somehow forget your BIOS password, just remove your CMOS battery, wait for a bit ( 10 – 15 minutes ), reinsert it and your password will be reset.

3. Disturbed Boot Device Sequence

If your PC out of nowhere boots off another Hard Disk, or your primary boot device suddenly becomes secondary or even goes undetected, it could be a telltale sign of a failing CMOS battery.

As we explained above, all this boot sequence data is saved in the CMOS memory. If the CMOS battery, which supplies power to the CMOS memory gets weak, this data may get corrupted or even lost.

4. Constant Beeping Sound

If you hear a beeping sound when your PC starts, it might be a beep code indicating a failing CMOS battery. Such beeps occur when your PC undergoes the procedure of POST (Power On Self Test). While each code has a separate meaning, specific to each OEM, this in tandem with the aforementioned problems hints towards an issue with the CMOS battery.

For example, 10 beeps on an AMI BIOS indicate a CMOS shutdown register read/write issue.

5. Irresponsive Peripherals

Another sign of a dying CMOS battery is your keyboard and mouse failing to respond or behaving oddly. If you use a custom keyboard layout, it may get reset. This may be a result of improperly configured drivers, courtesy of the CMOS battery.

6. Checksum Error

When you boot your computer or laptop with a failing CMOS battery, you may experience a CMOS checksum error. This basically implies that your BIOS has not been able to verify the details stored regarding your hardware in the CMOS memory, as we explained above.

The CMOS memory holds all the information stored in the BIOS for the 1st boot. When you boot for the 2nd time, the new information is matched against the data stored from the 1st boot.

READ MORE: How to Fix the CMOS Checksum Error on Windows? ➜

All electronic appliances have a limited lifespan. After a number of years, your CMOS battery might not supply enough voltage to the CMOS memory, resulting in the aforementioned problems.

How to Reset/Replace the CMOS Battery

If your CMOS battery needs a replacement, or if you just want to reset your BIOS, the hardest part is actually locating the battery itself. The rest is just pulling it out and plugging it back in. As a matter of fact, CMOS batteries are quite cheap and easy to replace. Follow these steps to do so:

  • Take necessary ESD protections.

  • Open your PC’s case and locate the motherboard on which your CPU cooler, GPU and RAM have been installed. (Reference image from GIGABYTE)

  • Locate the CMOS battery, typically shaped like a silver coin. In the above image, it is located just below the PCIe Slot . You may need to disconnect cables and remove drives or even your GPU to gain access to the CMOS battery.
  • If you are on a laptop or a server motherboard, kindly check the manufacturer’s guide for more details. Some laptops have a casing that gives access to all components beneath. Other laptops may have several layers, each packing different components. In the worst-case scenario, you’d have to unscrew all the casings and explore a bit.
  • Next, use your fingers to hold the edge of the battery and gently pull it out. Some manufacturers will opt to use a clip that you must pull up to remove the battery.
  • Once the battery is out of the socket, gently reinsert the replacement in the socket and you’ve successfully replaced your CMOS battery. As a reminder, this process is easier than it sounds.

READ MORE: How to Run A Computer Performance (Benchmark) Test on Windows ➜

The CMOS battery is responsible for storing your BIOS settings and maintaining the date and time on your PC. As is the case with all cells, the CMOS battery may degrade over time if left unpowered for a long while. You will start to notice some serious hiccups if your battery starts to fail.

It is highly recommended you swap out your faulty CMOS battery for a new one. CR2032 Lithium cells cost as low as $1, and replacing your existing CMOS battery with a new one is quite straightforward.

No. CMOS is a technology that is used to make chips, mainly the CMOS memory. The CMOS battery powers the CMOS memory. The BIOS on the contrary initializes the hardware and stores related data in the CMOS memory for future use.

No! Modern PCs use an RTC (Real Time Clock) which keeps track of the date and time on your PC. It requires an active power source to do so. Furthermore, many other BIOS-related settings are dependent on the CMOS battery.

It depends. While most CMOS batteries can last for around 5-10 years, you may find a few that die within just 3 years. Luckily enough, replacing them isn’t a hassle since most of them are inexpensive CR2032 Lithium cells.

The post What is a CMOS Battery? How It Works & How to Replace One [2024] appeared first on Appuals .


COMMENTS

  1. Types of conclusions (article)

    When an arguer's conclusion is a recommendation for something, he or she often will provide one good reason to do that thing. One thing to be aware of here is the assumption that the benefits outweigh the drawbacks. When an arguer's conclusion is a prediction, the arguer may be assuming that the current evidence will remain unchanged in the future.

  2. Logical reasoning

    Logical reasoning is a mental activity that aims to arrive at a conclusion in a rigorous way. It happens in the form of inferences or arguments by starting from a set of premises and reasoning to a conclusion supported by these premises. The premises and the conclusion are propositions, i.e. true or false claims about what is the case.

  3. Introduction to Logic

    Logic eliminates these difficulties through the use of a formal language for encoding information. Given the syntax and semantics of this formal language, we can give a precise definition for the notion of logical conclusion. Moreover, we can establish precise reasoning rules that produce all and only logical conclusions.

  4. Using Logic

    Before using logic to reach conclusions, it is helpful to know some important vocabulary related to logic. Premise: Proposition used as evidence in an argument. Conclusion: Logical result of the relationship between the premises. Conclusions serve as the thesis of the argument. Argument: The assertion of a conclusion based on logical premises.

  5. Logical Consequence

    Such "conclusions" are logical truths (sometimes tautologies) or, on the proof-centered approach, theorems. Perhaps there is a reason to allow the notion of logical consequence to apply even more broadly. In Gentzen's proof theory for classical logic, a notion of consequence is defined to hold between multiple premises and multiple ...

  6. Logic in Writing

    This is a logical conclusion, but without elaboration it may not persuade the writer's opposition, or even people on the fence. Therefore, the writer will want to expand her argument like this: Historically, Mill Creek has only funded public projects that benefit the population as a whole. Recent initiatives to build a light rail system and a ...

  7. Definition and Examples of Conclusions in Arguments

    Definition and Examples of Conclusions in Arguments. Words such as therefore, so, hence, and thus are called conclusion-indicators: they signal the arrival of a conclusion in an argument. (Gustav Dejert/Getty Images) In argumentation, a conclusion is the proposition that follows logically from the major and minor premises in a syllogism . An ...

  8. Episode 9: Logical Conclusions

    This video is about Episode Logical Conclusions and Inferences. Filmed and edited by Kevin M. McAllister

  9. LOGICAL CONCLUSION definition in American English

    Be responsive to change across the board and be prepared to follow things through to their logical conclusion. The Sun. Taken to a logical conclusion and applied more widely it could, say, lead to fast-food retailers being barred from certain areas. Times, Sunday Times.

  10. Inductive Reasoning

    Inductive reasoning is a logical approach to making inferences, or conclusions. People often use inductive reasoning informally in everyday situations. You may have come across inductive logic examples that come in a set of three statements. These start with one specific observation, add a general pattern, and end with a conclusion.

  11. Logical consequence

    Logical consequence (also entailment) is a fundamental concept in logic which describes the relationship between statements that hold true when one statement logically follows from one or more statements. A valid logical argument is one in which the conclusion is entailed by the premises, because the conclusion is the consequence of the premises.
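
For propositional arguments, the entailment relation described above can be checked mechanically by enumerating truth assignments: the conclusion must be true in every assignment that makes all the premises true. A minimal sketch (the `entails` helper and the lambda encoding are my own illustration, not from the cited page):

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force semantic entailment over a truth table: return False
    iff some assignment makes every premise true but the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found
    return True

# Premises: p -> q, and p.  Conclusion: q  (modus ponens)
premises = [lambda e: (not e["p"]) or e["q"], lambda e: e["p"]]
conclusion = lambda e: e["q"]
print(entails(premises, conclusion, ["p", "q"]))  # True
```

Swapping in an invalid argument (say, premises `p -> q` and conclusion `p`) makes the checker return False, exhibiting the assignment that serves as a counterexample.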

  12. LOGICAL CONCLUSION definition and meaning

    LOGICAL CONCLUSION definition | Meaning, pronunciation, translations and examples

  13. What Is Deductive Reasoning?

    Validity and soundness. Validity and soundness are two criteria for assessing deductive reasoning arguments. Validity. In this context, validity is about the way the premises relate to each other and to the conclusion. This is a different concept from research validity. An argument is valid if the premises logically support and relate to the conclusion.

  14. Understanding Logical Statements

    A logical statement (a statement that allows drawing a conclusion or result based on a hypothesis or premise) is a statement that, when true, allows us to take a known set of facts and infer (or assume) a new fact from them. Logical statements have two parts: the hypothesis, the part of a logical statement that provides the premise on which the conclusion is based.

  15. Conclusion

    In logic: Scope and basic concepts. …new proposition, usually called the conclusion. A rule of inference is said to be truth-preserving if the conclusion derived from the application of the rule is true whenever the premises are true. Inferences based on truth-preserving rules are called deductive, and the study of such inferences is known as ...

  16. Syllogism

    A syllogism is a three-part logical argument, based on deductive reasoning, in which two premises are combined to arrive at a conclusion. So long as the premises of the syllogism are true and the syllogism is correctly structured, the conclusion will be true. An example of a syllogism is "All mammals are animals.
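
Reading "all A are B" as a subset relation makes a syllogism of this form easy to verify programmatically: with true premises, the conclusion follows because subset relations are transitive. A small sketch using Python sets (the category members are invented placeholders, not data from the article):

```python
# Model "All mammals are animals; all dogs are mammals; therefore
# all dogs are animals" as subset relations between sets.
animals = {"dog", "cat", "whale", "sparrow"}
mammals = {"dog", "cat", "whale"}
dogs = {"dog"}

premise_1 = mammals <= animals   # all mammals are animals
premise_2 = dogs <= mammals      # all dogs are mammals
conclusion = dogs <= animals     # all dogs are animals

# Subset inclusion is transitive, so true premises in this form
# guarantee a true conclusion.
print(premise_1, premise_2, conclusion)  # True True True
```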

  17. Logical Fallacies

    A logical fallacy is an argument that may sound convincing or true but is actually flawed. Logical fallacies are leaps of logic that lead us to an unsupported conclusion. People may commit a logical fallacy unintentionally, due to poor reasoning, or intentionally, in order to manipulate others. Logical fallacy example.

  18. LOGICAL CONCLUSION collocation

    Examples of LOGICAL CONCLUSION in a sentence, how to use it. 18 examples: If this is the case, a logical conclusion is that there must be a categorization device that allows…

  19. Conclusion Definition & Meaning

    The meaning of CONCLUSION is a reasoned judgment : inference. How to use conclusion in a sentence.

  20. How to Write a Good Conclusion (With Examples)

    The conclusion paragraph should be more than just a summary of your essay. It should consolidate all your arguments and tie them back to your thesis. Remember, all good writing inspires emotion. Whether to inspire, provoke, or engage is up to you, but the conclusion should always leave a lasting impression.

  21. Propositional Logic in Artificial Intelligence

    Propositional logic, also known as propositional calculus or sentential logic, forms the foundation of logical reasoning in artificial intelligence (AI). It is a branch of logic that deals with propositions, which can either be true or false. ... Conclusion. Propositional logic is one of the cornerstones of artificial intelligence and computer ...
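
Because every proposition in sentential logic is simply true or false, any compound sentence can be evaluated exhaustively over all assignments. A hedged sketch (the formula `(p AND q) -> r` is an arbitrary example chosen for illustration, not one from the article):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Print the full truth table of (p AND q) -> r.
for p, q, r in product([True, False], repeat=3):
    print(p, q, r, implies(p and q, r))
```

Only one of the eight rows comes out false (p and q true, r false), which is exactly the case a reasoning system must rule out before concluding r.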

  23. [Solved] State the premise, conclusion, and logical fallacy. If the

    1. Premise: Teachers don't get paid enough. 2. Conclusion: Teachers need to be paid extra for new programs. Teachers should volunteer for community service. There is a need for more community service volunteers. 3. Logical fallacy: Straw man (the teacher's argument against mandatory community service is misrepresented or exaggerated).

  24. Read the Conclusions and then decide which of the given conclusion

    The following question has one statement and three conclusions. Check the conclusions on the basis of the statement and choose the correct option. Statement: Meena is the only daughter of Mr. & Mrs. Joshi. Conclusion: (i) Mr. & Mrs. Joshi have two children. (ii) Meena has no brother. (iii) Meena has a step brother.

  25. Hanifa murder: I'll follow case to logical conclusion

    Governor Abba Yusuf of Kano State has vowed to resurrect the murder case of 5-year-old schoolgirl Hanifa Abubakar, killed by her school proprietor Abdulmalik Tanko, with the intent to follow the case to a ...

  26. How to Execute an IF…THEN Logic in an SQL SELECT Statement

    The CASE statement acts as a logical IF-THEN-ELSE conditional statement. We can use it to perform conditional branching within the SELECT statement across various SQL databases, including SQL Server, MySQL, and PostgreSQL. Within SQL SELECT, we can use the WHEN-ELSE statement instead of the traditional IF-ELSE. It evaluates a condition and returns a specific value depending on the outcome.
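
The CASE branching described above can be tried end-to-end with Python's built-in sqlite3 module against an in-memory database. The `orders` table and its columns are invented for illustration:

```python
import sqlite3

# Build a throwaway in-memory database with a hypothetical orders table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 50.0), (2, 150.0), (3, 500.0)])

# CASE ... WHEN ... ELSE inside SELECT: each row's amount is mapped
# to a label, much like an IF-THEN-ELSE chain.
rows = con.execute("""
    SELECT id,
           CASE
               WHEN amount < 100 THEN 'small'
               WHEN amount < 300 THEN 'medium'
               ELSE 'large'
           END AS size
    FROM orders
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'small'), (2, 'medium'), (3, 'large')]
```

WHEN clauses are evaluated top to bottom and the first match wins, so the `ELSE` branch catches everything the earlier conditions let through.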