Logic is intimately familiar to us all, yet people tend to struggle to explain what it is and how it obtains. Paradoxically, it is this very acquaintance that leads many of us to form conclusions about logic that are quite mistaken, e.g. that the laws of logic constrain the world to act a certain way, that we need reified laws of logic, and so on. My aim here is to explain the nature of logic in an understandable way. (Note that I will omit many technical details for the sake of brevity.)
Now, having written things like this before, I know most of you will never read beyond this first paragraph! If you are skimming and this happens to catch your eye, I implore you: if you are interested in the nature of logic and its foundations, look into “metalogic”.
First, consider this excerpt from the SEP entry “Logic and Ontology”:
“On the one hand, logic is the study of certain mathematical properties of artificial, formal languages. It is concerned with such languages as the first or second order predicate calculus, modal logics, the lambda calculus, categorial grammars, and so forth[…] In any case, logic deals with inferences whose validity can be traced back to the formal features of the representations that are involved in that inference, be they linguistic, mental, or other representations[…] Such a conception of logic thus distinguishes validity from formal validity. An inference is valid just in case the truth of the premises guarantees the truth of the conclusion[…] validity so understood is not what logic is concerned with. Logic is concerned with formal validity, which can be understood as follows. In a system of representations, for example a language, it can be that some inferences are always valid as long as the representational or semantic features of certain parts of the representations are kept fixed, even if we abstract from or ignore the representational features of the other parts of the representations. So, for example, as long as we stick to English, and we keep the meanings of certain words like “some” and “all” fixed, certain patterns of inference, like some of Aristotle’s syllogisms, are valid no matter what the meaning of the other words in the syllogism. To call an inference formally valid is to assume that certain words have their meaning fixed, that we are within a fixed set of representations, and that we can ignore the meaning of the other words. The words that are kept fixed are the logical vocabulary, or logical constants, the others are the non-logical vocabulary. And when an inference is formally valid then the conclusion logically follows from the premises[...] Logic is the study of such inferences, and certain related concepts and topics… “
What is said in that entry is vital to understanding logic, and it will guide our investigation here, since we now know that logic:
(1) studies formal validity and
(2) depends on the fixed meaning of logical constants
(Also, as a side note, the logical laws are much more than the so-called laws of thought, i.e. the law of identity, the law of non-contradiction, the law of excluded middle, etc. Indeed there are numerous laws of logic, and our discussion here will look at logic in a general manner so as to see the foundations of all of these laws.)
1.0 Formal Languages
A formal language requires a set of formation rules, that is, a specification of the kinds of expressions that count as “Well-Formed Formulas”, which are its sentences (i.e. its meaningful expressions). Crucially, these well-formed formulas are specified in a computable way, such that, as Hao Wang put it, “a machine could check whether a candidate satisfies the requirements.” The specification will usually contain three things:
(1) Primitive symbols (the smallest, indivisible units of the language, much like the letters of an alphabet): These can include variables, predicate symbols, connectives, constants (names for objects), etc.
(2) Atomic sentences (the smallest, non-compound expressions that count as Well-Formed Formulas): The role of these is to state simple facts or relationships
(3) Set(s) of inductive clauses: These are the rules that let you take legitimate sentences (Atomic or complex) and conjoin them using logical connectives and/or quantifiers to form new, more complex, but still syntactically correct, Well-Formed Formulas
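To illustrate Wang’s point that well-formedness is machine-checkable, here is a minimal sketch of a propositional language. The encoding (nested tuples, the names `ATOMS` and `is_wff`) is my own toy illustration, not any standard library:

```python
# A toy formal language for propositional logic, with a machine-checkable
# notion of well-formedness. Formulas are nested tuples; the atomic
# sentences are the strings "P", "Q", "R".

ATOMS = {"P", "Q", "R"}              # (2) atomic sentences
CONNECTIVES = {"not", "and", "or"}   # part of (1) primitive symbols

def is_wff(expr):
    """Return True iff expr is a Well-Formed Formula.
    The inductive clauses (3): an atom is a WFF; if A is a WFF, so is
    ("not", A); if A and B are WFFs, so are ("and", A, B) and ("or", A, B)."""
    if expr in ATOMS:                 # base case: atomic sentence
        return True
    if isinstance(expr, tuple):
        if len(expr) == 2 and expr[0] == "not":
            return is_wff(expr[1])
        if len(expr) == 3 and expr[0] in {"and", "or"}:
            return is_wff(expr[1]) and is_wff(expr[2])
    return False

print(is_wff(("or", "P", ("not", "P"))))  # → True
print(is_wff(("and", "P")))               # → False: a lone conjunct
```

The point is only that checking membership in the set of WFFs is a purely mechanical, syntactic procedure; no meanings are consulted anywhere in the function.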
Keep that in mind: this is a major part of how we obtain truth-functional semantics, which will be discussed later, and of how natural language is translated into formal language. That aside, let’s look at something else quickly.
1.1 Formal Language and Logic
It’s important to note that formal languages and logics are not necessarily the same thing. Otávio Bueno details this in his paper “Troubles with Trivialism”:
“There is an extremely idealized notion of language—the notion of a formal language—that… according to which language and logic are, at least in principle, separate notions. A language is taken here mainly as a list of symbols, some syntactic rules of construction of expressions, and some rough and ready rules of interpretation of these expressions. A logic, in turn, is basically a system of derivation of expressions. Given that a formal language is detached from a logic, a language cannot be inconsistent, since it doesn’t presuppose logic. On this conception of (formal) language, it is ultimately a category mistake to state that a language is consistent”
Now, you need not worry about his statements about consistency and inconsistency in that quote, as those pertain to the separate topic of trivialism and natural language he addresses in his paper. More importantly, from this we can see that formal languages and logic are (in principle) distinct. Bueno’s statement is consistent with what was said before (see 1.0): a formal language, as he correctly notes, is simply a language reduced to its most basic structural components (symbols, syntactic rules, etc.). A logic, in turn, is a “system of derivation of expressions”, meaning it provides the rules for reasoning, proof, and determining what follows from what (i.e. what we can “derive”).
2.0 Interpreting Formal Language
Great, so now we understand, based on what we know about formal languages, what the “formal features of the representations” involved in logical inferences are. How does logic work with these features? The answer is that logic interprets the features of formal languages with specific stipulations.
This is largely where the conversation about semantics comes in, because we want to take these highly idealized and abstract formal languages and give their expressions truth-values, among other things.
We will focus primarily on propositional (sentential) logic for simplicity, but a great deal of what is said here can be generalized to other logics. Now, a formal connection between the symbols of the language and a specific "world" or context is called an interpretation or a model.
You won’t see this in sentential logic, but to help conceptualize it, consider domains: a domain is the specific collection of things (objects, entities, etc.) that you are talking about. E.g. a mathematician could use the set of all natural numbers (0, 1, 2, 3…) as a domain.
Further, we stipulate which objects of the domain are denoted by which constants of the language, and which relations and functions are denoted by which predicate letters and/or function symbols. Denotation, as understood here, is simply the stipulation of which objects of a domain correspond to the basic symbols of the language. E.g. we can stipulate that the constant “a” denotes the number 5 and that the predicate P denotes the property of being a prime number, and we obtain an atomic sentence, P(a), that has a fixed, non-ambiguous meaning: [P(a) = “The number 5 is prime”]. In propositional logic, a common example is P = “It is raining”.
With an interpretation set, you can then computationally determine the truth-value of atomic sentences. For simplicity, just use a pre-theoretic notion of truth (meaning an understanding that exists before any formal, developed theory, system, or rigorous analysis). From that, we assign an atomic sentence the truth-value true or false based on whether the relation or property it describes actually holds for the objects it denotes within the specified domain.
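To make the stipulations concrete, here is the running example as a toy interpretation in code. The names (`domain`, `denotation`, `extension_of_P`, `eval_atomic`) are my own illustrative choices, not standard terminology of any library:

```python
# A toy interpretation: the domain is the natural numbers up to 20,
# the constant "a" denotes 5, and the predicate P denotes primality.

domain = range(21)            # the collection of things we are talking about
denotation = {"a": 5}         # the constant "a" denotes the number 5

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

# The extension of P: the set of objects in the domain that are prime.
extension_of_P = {n for n in domain if is_prime(n)}

def eval_atomic(pred_extension, constant):
    """Truth of an atomic sentence like P(a): does the object denoted by
    the constant fall within the predicate's extension?"""
    return denotation[constant] in pred_extension

print(eval_atomic(extension_of_P, "a"))  # P(a) = "5 is prime" → True
```

Notice that once the stipulations are in place, the truth-value of P(a) is fixed and mechanically decidable; no further judgment is required.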
2.1 Truth-Functional Semantics and Validity
Now enter the magic we have been waiting for. Let us focus again on classical propositional logic. The final step of interpreting formal languages is where we can see the elegance of logic.
2.1.1 The Content Neutrality of logic
First, understand that logic is understood to be content and domain neutral:
All X are Y
Z is an X
Therefore, Z is Y
This works identically regardless of what is being spoken of: Philosophers (All Greeks are mortal / Socrates is Greek / Therefore Socrates is mortal), Chemistry (All acids have pH < 7 / Hydrochloric acid is an acid / Therefore it has pH < 7), Fiction (All unicorns have horns / Sparkle is a unicorn / Therefore Sparkle has horns)
Or say we had a tautology, “Mars is dry or not dry” (P ∨ ¬P): the statement would compute as true even if “Mars” referred to my eye and “dry” referred to the property of being a prime number. The statement is true under all interpretations of its non-logical terms, the logical vocabulary being held fixed.
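That claim can be checked mechanically. A minimal sketch of my own, brute-checking the two possible assignments:

```python
# Whatever "P" is taken to mean, P ∨ ¬P comes out true on every possible
# truth-value assignment to P. Check both cases directly:

for P in (True, False):
    assert (P or not P) is True

print("P ∨ ¬P holds under every assignment of P")
```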
So logical truths and schemas are not true in virtue of the world, indeed, they do not care about the world. Why is that?
2.1.2 Truth-Functional Rules
This is possibly the most important section of this paper. Remember how I said truth-functional semantics was the final step of interpretation? That is because the final step uses the standard interpretation of the logical connectives to calculate the truth-values of complex, non-atomic sentences. Logical connectives like ∧ (“and”), ∨ (“or”), and ¬ (“not”) have fixed, agreed-upon rules for how they combine truth-values, and these rules are defined purely by truth tables. (Validity and semantic entailment are, for our purposes, essentially the same notion.)
What is meant by a connective being truth-functional is that the truth of a complex sentence is determined by the truth of its parts. To figure out if a truth-functional logical sentence is true, all you need is a truth table showing the truth values of its atomic parts.
Consider any propositional connective or operator e.g., conjunction ∧. Its semantics is defined purely truth-functionally: P∧Q is true iff both P and Q are true. This is independent of what P and Q mean.
Whether P = “the grass is green” and Q = “the sky is blue”, or instead P = “2 is prime” and Q = “my chair is equitable”, the rule is the same.
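The full truth table for ∧ can be generated mechanically from that rule alone. A small sketch:

```python
from itertools import product

# The truth table for ∧, computed directly from its truth-functional rule:
# P ∧ Q is true iff both P and Q are true. No meanings are consulted.

rows = [(P, Q, P and Q) for P, Q in product((True, False), repeat=2)]

print("P      Q      P ∧ Q")
for P, Q, conj in rows:
    print(f"{P!s:<6} {Q!s:<6} {conj}")
```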
If you give me a big TFL sentence—e.g.:
¬[(P ∧ Q) → (R ∨ ¬S)]
I can compute its truth value using only this information:
the truth value of P
the truth value of Q
the truth value of R
the truth value of S
There is no need to know what the sentences mean, describe, imply, suggest, emphasize, or emotionally color.
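As a sketch of that computation (encoding → as the standard classical material conditional, i.e. A → B is ¬A ∨ B; the function names are my own):

```python
# Computing the truth value of ¬[(P ∧ Q) → (R ∨ ¬S)] from nothing but
# the four atomic truth-values.

def implies(a, b):
    """Material conditional: A → B is ¬A ∨ B."""
    return (not a) or b

def formula(P, Q, R, S):
    return not implies(P and Q, R or (not S))

# One assignment, picked arbitrarily for illustration:
print(formula(P=True, Q=True, R=False, S=True))  # → True
```

Feed it any of the sixteen possible assignments and it returns the sentence’s truth-value; at no point does it need to know what P, Q, R, or S say about the world.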
The logical rule governing “∧” does not change. Logic cares only about the truth-values of the atomic components, not about their “semantic content” in the world. This is why truth tables exhaust the space of possibilities: a truth table for ∧ shows the evaluation of “P ∧ Q” under every possible assignment of truth-values to P and Q, meaning all possible situations in which a conjunction could be true or false, because propositional logic abstracts away from objects, properties, and facts to the bare truth-value pattern.
Thus, we obtain that a formula is a tautology iff it is true under every truth-value assignment, because tautologies depend only on the meanings of the logical constants: they come out true given the truth-functional roles of the operators used. Crucially, this matches the idea that logical truth holds across all possible worlds, because any world corresponds to some assignment of truth-values to P, Q, etc. So, by applying these rules repeatedly, you can determine the truth-value of every well-formed formula in the language under a specific (or any) interpretation.
In any case when talking about propositional logic: Truth in all truth-value assignments ⇔ truth in all possible interpretations ⇔ truth in all possible worlds. These collapse because propositional logic has no structure beyond truth-values.
2.1.3 Grounding Logical Truths
With all of that said, let us recap succinctly to see how logical truths can be grounded. There are two major ways to do this (no, they do not include God): proof-theoretic accounts and model-theoretic accounts. Take what we already know:
(1) Logical constants like ¬, ∨, and ∧ have fixed meanings across all interpretations.
(2) These meanings are truth-functional rules.
(3) A tautology has a form such that the truth-functional rules make it true no matter what truth-values its atomic parts have.
(4) Those truth-values can vary arbitrarily (in different worlds, interpretations, or assignments).
(5) So a tautology is unavoidable: every way the world might be makes it true.
As per the brilliant Graham Seth Moore, it's a law of logic that any instance of the schema [~(p v q) <-> (~p & ~q)] is true. What grounds this fact are the truth-functions denoted by '~', 'v', '&', and '<->'. Those truth-functions are sufficient to explain the truth of any instance, because they obtain in virtue of this fixed logical vocabulary.
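Moore’s schema (a De Morgan equivalence) can be confirmed by brute enumeration of the four truth-value assignments. A sketch, writing ~ as `not`, v as `or`, and <-> as equality of truth-values:

```python
from itertools import product

def iff(a, b):
    """The biconditional <->: true iff both sides have the same truth-value."""
    return a == b

# ~(p v q) <-> (~p & ~q): check every assignment of truth-values to p, q.
is_tautology = all(
    iff(not (p or q), (not p) and (not q))
    for p, q in product((True, False), repeat=2)
)
print(is_tautology)  # → True
```

Since the check ranges over every possible assignment, the result depends only on the truth-functions of the connectives, exactly as the grounding story says.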
From there, you can get into more sophisticated waters and decide whether you want to use proof-theoretic accounts or model-theoretic ones. E.g. logical truths are grounded in the definitions of logical constants and the existence of objects in the world, requiring no "laws of logic" over and above the objects themselves. So logical truths may commit us to the existence of objects, although logic remains formally neutral (the schema P ∨ ¬P is neutral) whilst being worldly via its instantiations (the sentence "Mars is dry or not dry" requires Mars). To quote Moore exactly, "The general phenomenon of logical truth does not depend on any specific object... but the actual full range of logical truths presupposes actual objects".
That is one example. Conversely, the proof-theoretic view is syntactic in nature, defining logic as a system of derivation where validity is determined by whether a conclusion can be deduced from premises using formal inference rules, independent of any external "meaning". The model-theoretic view, by contrast, is semantic, defining logic through the relationship between formal languages and other structures (like mathematical ones), where validity is determined by whether a statement remains true across all possible interpretations (models) of that language. These two are not necessarily in rivalry and can be complementary: the model-theoretic story explains why certain sentences come out true in every model; the proof-theoretic story explains why certain formal derivations are admissible. The two notions coincide for classical first-order logic (by the completeness theorem), but they come apart in many non-classical logics. I hold to the standard Tarskian model-theoretic view, but I can explain that fully in another paper.
For first-order quantifiers, the same principle generalizes: their fixed semantic rules ensure truth across all interpretations, where the domain and predicate extensions vary arbitrarily. Once we understand that, we can understand why other logic-related things, like contradictions, don't happen, based on semantic-constitutive accounts, and why we don't need reified laws of logic or anything of that sort.
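As a rough sketch of the fixed quantifier rule (illustrative code of my own, not a full first-order semantics):

```python
# The fixed rule for ∀: ∀x P(x) is true in an interpretation iff the
# predicate holds of every object in the domain, whatever that domain is.

def forall(domain, pred):
    return all(pred(x) for x in domain)

print(forall(range(5), lambda n: n < 5))       # → True
print(forall(range(5), lambda n: n % 2 == 0))  # → False
```

The rule itself never varies; only the domain and the predicate extensions do, which is exactly the sense in which the quantifier belongs to the fixed logical vocabulary.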
Conclusion
I hope this was edifying and cleared up things you may have been confused about. Of course, it was not exhaustive or entirely substantive by any means, but I think it clears up most things. If you have any further questions or disagreements, just let me know.
