These notes are based on Sandhya's Magic Handouts for Uni Leipzig.
I regret to inform all German speakers that Johannes Dölling's Magic Handouts are no longer available online (last checked on 22nd September 2019).
Set theory
A set (Notation: {a,b,c}) is a collection of objects, called its members or elements, that are
- distinct: no member may be repeated, i.e. {x} = {x, x}. An object is either a member of a set or not, there is no such thing as halfway, multiple, or gradual membership.
- unordered: the relative order of the set members does not matter (unlike in ordered tuples), i.e. {x,y} = {y,x}.
- potentially unbounded: there is no limit to the number of members a set can have, it can be infinite (e.g. the set of natural numbers), finite or even empty (written as ∅ or {}).
- abstract: the members can be literally anything.
A set can also be a member of a set (recursive sets).
NB: It is important to keep the objects / concepts separate from the names of the objects: if {Georg Cantor} is the set containing the man who came up with set theory and {“Georg Cantor”} is the set containing the name “Georg Cantor”, then
{Georg Cantor} = {the man who came up with set theory} but
{Georg Cantor} ≠ {“Georg Cantor”}
Sets can be described using
- List notation: A = {a,e,i,o,u}
- Venn diagrams (see below)
- Predicate notation: A = {a | a is a vowel in Spanish}
For any two sets A and B
- if every member of A is a member of B, then A is a subset of B (A ⊆ B) and B is a superset of A (B ⊇ A)
- if every member of A is in B and every member of B is in A, then A = B; equivalently, A ⊆ B and A ⊇ B (or B ⊆ A and B ⊇ A). So every set is a subset of itself.
- if every member of A is a member of B but not every member of B is a member of A, then A is a proper subset of B (A ⊂ B) and B is a proper superset of A (B ⊃ A).
For a set A not to be a subset of B (A ⊈ B), or for B not to be a superset of A (B ⊉ A), A must have at least one member that is not in B. As the empty set (written as ∅ or {}) has no members at all, it has no member that is not in B and is therefore a subset of every set B.
NB: A member of a set ≠ a subset of a set: a subset is always a set, a member may or may not be a set. If a member M of set A is itself a set, it is not thereby a subset of A, but the set containing M, i.e. {M}, is a subset of A.
The union
of two sets A and B
(A ⋃ B) is the set {x | x ∈ A or x ∈ B}, i.e. the set containing the members of
A and the members of B.
The intersection of two sets A and B (A ⋂ B) is the
set {x | x ∈ A and x ∈ B} i.e. the set containing all objects that are both in
A and in B.
The difference of two sets A and B (A-B) is the set {x | x ∈ A and x ∉
B}, i.e. the set containing all the members of A that are not in B.
The complement of a set A (A’) is the set {x | x ∉
A}, i.e. the set of all objects in the universe of discourse U (the set
of things we are talking about) that are not in A.
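All of these operations map directly onto Python's built-in set type; a minimal sketch with invented example sets (U plays the role of the universe of discourse):

# Invented universe of discourse and two sets inside it
U = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}
B = {3, 4, 5}

print(A | B)       # union A ∪ B          -> {1, 2, 3, 4, 5}
print(A & B)       # intersection A ∩ B   -> {3}
print(A - B)       # difference A - B     -> {1, 2}
print(U - A)       # complement A' relative to U -> {4, 5, 6}
print(set() <= B)  # the empty set is a subset of any set -> True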
Idempotent Laws
X ∪ X = X
X ∩ X = X
Commutative Laws
X ∪ Y = Y ∪ X
X ∩ Y = Y ∩ X
Associative Laws
(X ∪ Y) ∪ Z = X ∪ (Y ∪ Z)
(X ∩ Y) ∩ Z = X ∩ (Y ∩ Z)
Identity Laws
X ∪ ∅ = X, X ∪ U = U
X ∩ ∅ = ∅, X ∩ U = X
Complement Laws
X ∪ X’ = U, X ∩ X’ = ∅
(X’)’ = X, X - Y = X ∩ Y’
Consistency Principle
X ⊆ Y iff X ∪ Y = Y
X ⊆ Y iff X ∩ Y = X
De Morgan’s Laws
(X ∪ Y)’ = X’ ∩ Y’
(X ∩ Y)’ = X’ ∪ Y’
Distributive Laws
X ∪ (Y ∩ Z) = (X ∪ Y) ∩ (X ∪ Z)
X ∩ (Y ∪ Z) = (X ∩ Y) ∪ (X ∩ Z)
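These laws can be checked mechanically on concrete sets; a quick sketch reusing the invented U, A, B from above (the helper comp is assumed to compute the complement relative to U):

U = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}
B = {3, 4, 5}

def comp(X):
    # complement relative to the universe U
    return U - X

# De Morgan's Laws
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)

# Consistency Principle: A ⊆ B iff A ∪ B = B iff A ∩ B = A
assert (A <= B) == (A | B == B) == (A & B == A)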
An ordered tuple (Notation: <a,b,c>) is a finite collection of members. Unlike in a set, the members of an ordered tuple may be repeated and their relative order matters. An ordered tuple that consists of exactly two elements is called an ordered pair.
The Cartesian
Product (A × B) of two given sets A and B is the set consisting of all
ordered pairs formed from the sets A and B where the first element is in A and
the second element is in B. A × B = {<x,y> | x ∈ A and y ∈
B}.
To find the smallest A and B such that a given set X of ordered pairs is a subset of A × B, we take A = {a | <a,b> ∈ X for some b} and B = {b | <a,b> ∈ X for some a} and form the Cartesian Product of the two. These two sets are called the projections of X onto the first and second coordinates, respectively.
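A sketch of the Cartesian product and the two projections in Python, with invented sets and pairs:

from itertools import product

A = {'a', 'b'}
B = {1, 2, 3}

# Cartesian product A × B as a set of ordered pairs (tuples)
AxB = set(product(A, B))

# A set X of ordered pairs that is a subset of A × B
X = {('a', 1), ('a', 2), ('b', 3)}
assert X <= AxB

# Projections of X onto the first and second coordinates
proj1 = {a for (a, b) in X}   # {'a', 'b'}
proj2 = {b for (a, b) in X}   # {1, 2, 3}
# The smallest A' and B' with X ⊆ A' × B' are exactly these projections
assert X <= set(product(proj1, proj2))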
Relations and functions
For
any sets A and B, a relation R holding between objects in A and B may be
expressed as R ⊆ A × B, i.e. a relation R is a set of ordered pairs <a,b>
for which R holds between a and b.
R =
{<a,b> | aRb}.
The projection of R onto the first coordinate is the domain of R; the projection of R onto the second coordinate is the range of R (the codomain is the whole set B, of which the range is a subset).
The complement of R, R’ =def (A × B) - R, for any two sets A and B.
The inverse of R, R⁻¹ = the set containing all the ordered pairs in R, but with the first and second member in each ordered pair reversed.
A relation R from A to B is a function iff
- each element in the domain of R is paired with only one element in the range
- the domain of R = A
A partial function from A to B is a relation R that satisfies the first condition but not the second, i.e. it is a total function on some proper subset of A.
A function that is a subset of A × B is a
function from A to B, also written
as F: A → B.
A function that is a subset of A × A is a
function in A.
The members of the domain of a function are
called its arguments, the members of the range of a function are called its
values.
Functions from A to B are in general said to
be into B (also called into functions) while functions whose
range = B are said to be onto B (also called onto functions or surjective
functions).
A function from A to B in which at least one
member of B is assigned to more than one member in A (i.e. repeated) is called many-to-one, while a function in which
no member in B is assigned to more than one member in A is called one-to-one (or injective).
A function which is both one-to-one (injective)
and onto (surjective) is called a one-to-one
correspondence (or bijective
function).
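Representing a function from A to B as a Python dict (arguments as keys, values as dict values), these properties can be tested directly; the helper names below are invented for illustration:

A = {1, 2, 3}
B = {'x', 'y', 'z'}

f = {1: 'x', 2: 'y', 3: 'y'}     # a function from A to B (domain = A)

def is_function_from(f, A, B):
    # every element of A has exactly one value, and all values lie in B
    return set(f.keys()) == A and set(f.values()) <= B

def is_onto(f, B):               # surjective: range = B
    return set(f.values()) == B

def is_one_to_one(f):            # injective: no value shared by two arguments
    return len(set(f.values())) == len(f)

print(is_function_from(f, A, B))  # True
print(is_onto(f, B))              # False: 'z' is never a value
print(is_one_to_one(f))           # False: 2 and 3 share the value 'y'
# A bijection (one-to-one correspondence) would be both onto and one-to-one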
Reflexivity
Given a set A and a relation R in A, R is
reflexive iff all the ordered pairs of the form <x,x> are in R for every
x in A.
Reflexivity definition in Quantitative Methods
Empirical relative: <A, ~>
a ~ a
Representation in the numerical relative: <A, ~> ⇔ <R, =>
a ~ b ⇔ 𝜑(a) = 𝜑(b)
(𝜑(x) is any number assigned to x)
A relation that is not reflexive is called non-reflexive.
A relation which contains no ordered pair of
the form <x,x> is irreflexive.
Symmetry
For any set A, a binary relation R in A is
symmetric iff for every ordered pair <x,y> in R, the pair <y,x> is
also in R.
Symmetry definition in Quantitative Methods
Empirical relative: <A, ~>
a ~ b ⇒ b ~ a
Representation in the numerical relative: <A, ~> ⇔ <R, =>
a ~ b ⇔ 𝜑(a) = 𝜑(b)
(𝜑(x) is any number assigned to x)
A relation that is not symmetric is non-symmetric.
A relation in which it is never the case that
for an ordered pair <x,y> in it, <y,x> is also a member, is asymmetric.
A relation in which, whenever both <x,y>
and <y,x> are in R, then x=y, is antisymmetric.
An antisymmetric relation does not have to be reflexive.
Transitivity
A relation R is transitive iff for all ordered
pairs <x,y> and <y,z> in R, <x,z> is also in R.
NB: x, y and z are variables so they need not
be distinct.
Transitivity definition in Quantitative Methods
Empirical relative: <A, ~>
a ~ b ∧ b ~ c ⇒ a ~ c
<A, ≽>
a ≽ b ∧ b ≽ c ⇒ a ≽ c
Representation in the numerical relative: <A, ~> ⇔ <R, =>
a ~ b ⇔ 𝜑(a) = 𝜑(b)
<A, ≽> ⇔ <R, ≥>
a ≽ b ⇔ 𝜑(a) ≥ 𝜑(b)
(𝜑(x) is any number assigned to x)
A relation that is not transitive is non-transitive. If for no pairs
<x,y> and <y,z> in R, the ordered pair <x,z> is in R, then R
is intransitive.
Connectedness
A
relation R in a set A is connected or connex iff for every two distinct
elements x and y in A, <x,y> ∈ R or <y,x> ∈ R or both.
NB: The definition of connectedness applies to all members of A. Ordered
pairs of the form <x,x> may also occur in a connected relation, but they
are irrelevant for connectedness.
Connectedness definition in Quantitative Methods
Empirical relative: <A, ≽>
a ≽ b ∨ b ≽ a ∨ ((a ≽ b) ∧ (b ≽ a));
((a ≽ b) ∧ (b ≽ a)) ⇔ a ~ b
Representation in the numerical relative: <A, ≽> ⇔ <R, ≥>
a ≽ b ⇔ 𝜑(a) ≥ 𝜑(b)
(𝜑(x) is any number assigned to x)
A relation that is not connected is non-connected.
If R (not ∅) is: | then R⁻¹ is: | and R’ is:
Reflexive | Reflexive | Irreflexive
Irreflexive | Irreflexive | Reflexive
Symmetric | Symmetric | Symmetric
Asymmetric | Asymmetric | Non-symmetric
Anti-symmetric | Anti-symmetric | Depends on R
Transitive | Transitive | Depends on R
Intransitive | Intransitive | Depends on R
Connected | Connected | Depends on R
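For a small finite relation these properties, as well as R⁻¹ and R’, can be checked mechanically; a sketch with an invented relation on {1, 2, 3}:

from itertools import product

A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}   # an invented relation in A

reflexive     = all((x, x) in R for x in A)
symmetric     = all((y, x) in R for (x, y) in R)
transitive    = all((x, z) in R
                    for (x, y) in R for (y2, z) in R if y == y2)
antisymmetric = all(x == y for (x, y) in R if (y, x) in R)
connected     = all((x, y) in R or (y, x) in R
                    for x in A for y in A if x != y)

R_inverse    = {(y, x) for (x, y) in R}        # R⁻¹: reverse every pair
R_complement = set(product(A, A)) - R          # R': (A × A) - R

print(reflexive, symmetric, transitive, antisymmetric, connected)
# True True True False False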
An equivalence
relation is one which is reflexive, symmetric and transitive.
The most typical equivalence relation is =
(equality), but we can easily create other equivalence relations such as “same
height as”, “same age as”, etc. The use of equivalence relations on a domain
divides this domain into subsets whose members are equivalent with respect to a
given relation. These subsets are mutually
disjoint and are called equivalence
classes.
An equivalence class [[x]] is defined as the set of all y such that <x,y> ∈ R, where R is an equivalence relation.
For a non-empty set A, a partition of A is a set consisting of non-empty subsets of A such
that
- For any two distinct subsets X and Y, X ∩ Y = ∅
- The union of all the subsets in the partition = A
The members of the partition are called the cells of the partition.
For a given equivalence relation R in a set A,
we can make a partition of A in which x and y are in the same cell of the
partition iff x and y are related by R, i.e. an equivalence class is simply a
cell in a possible partition of A.
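A sketch of how an equivalence relation induces a partition, using the invented relation “same remainder modulo 2” on a small set:

A = {1, 2, 3, 4, 5}

# An equivalence relation: "x and y have the same remainder modulo 2"
R = {(x, y) for x in A for y in A if x % 2 == y % 2}

# The equivalence class [[x]] = {y | <x,y> ∈ R}
def equivalence_class(x):
    return frozenset(y for y in A if (x, y) in R)

# The partition of A induced by R: the set of all equivalence classes
partition = {equivalence_class(x) for x in A}
print(partition)   # {frozenset({1, 3, 5}), frozenset({2, 4})}

# The cells are pairwise disjoint and their union is A
cells = list(partition)
assert all(c1.isdisjoint(c2) for c1 in cells for c2 in cells if c1 != c2)
assert set().union(*cells) == A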
An order
is a binary relation which is transitive
and additionally
- reflexive and anti-symmetric (weak order) or
- irreflexive and asymmetric (strict / strong order).
If R is an order (weak or strict) and <x,y> ∈ R, then x precedes y (x is a predecessor of y) and y succeeds x (y is a successor of x).
W1 = {<a,b>, <a,c>, <a,d>, <b,c>,
<a,a>, <b,b>, <c,c>, <d,d>}
S1 = {<a,b>, <a,c>, <a,d>, <b,c>}
W2 = {<b,a>, <b,b>, <a,a>, <c,c>,
<d,d>, <c,b>, <c,a>}
S2 = {<b,a>, <c,b>, <c,a>}
In W1
and S1 b is between a and c, so a precedes c but does not immediately precede c.
In W2 and S2, c is an
immediate predecessor of b and b is an immediate
predecessor of a.
For an order R in a set A, an element x in A is
- minimal iff there is no other element in A which precedes x
- maximal iff there is no other element in A which succeeds x
- least iff x precedes every other element in A
- greatest iff x succeeds every other element in A.
A least element and a greatest element in an order are always unique (if there were two such elements, each one would have to precede / succeed the other, yielding both <x,y> and <y,x>; asymmetry rules this out directly, and anti-symmetry forces the two elements to be identical).
On the other hand, an order can have more than
one minimal and more than one maximal element.
An order might also have no maximal, minimal,
least or greatest element (as for example the relation “greater than” defined
on the set of all positive and negative numbers and 0).
If an order (weak or strong) is also connected (i.e. any two distinct elements are related in one direction or the other), then it is a total or linear order.
W3 = {<d,c>, <d,b>, <d,a>, <c,b>,
<c,a>, <a,a>, <b,b>, <c,c>, <d,d>, <b,a>}
S3 = {<d,c>, <d,b>, <d,a>, <c,b>,
<c,a>, <b,a>}
W3 and S3 are total / linear.
The following elements are minimal: a in W1 / S1,
c and d in W2 / S2, d in W3 / S3
The following elements are maximal: c and d in W1 / S1, a and d in
W2 / S2, a in W3 / S3
The following elements are least: a in W1 / S1, d in W3 / S3, none in W2 / S2
The following elements are greatest: a in W3 / S3, none in W1 / S1 and W2 / S2
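These notions can also be computed mechanically; a sketch (with invented helper names) that reproduces the claims for S1 above:

A = {'a', 'b', 'c', 'd'}
S1 = {('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c')}   # the strict order S1

def minimal(A, R):   # no other element precedes x
    return {x for x in A if not any((y, x) in R for y in A if y != x)}

def maximal(A, R):   # no other element succeeds x
    return {x for x in A if not any((x, y) in R for y in A if y != x)}

def least(A, R):     # x precedes every other element
    return {x for x in A if all((x, y) in R for y in A if y != x)}

def greatest(A, R):  # x succeeds every other element
    return {x for x in A if all((y, x) in R for y in A if y != x)}

print(minimal(A, S1), maximal(A, S1))   # {'a'} {'c', 'd'}
print(least(A, S1), greatest(A, S1))    # {'a'} set()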
A set A is well-ordered by a relation R iff
- R is a total order
- Every non-empty subset of A has a least element in R
A relation R is dense
iff for every <x,y> ∈ R where x ≠ y
there exists a z ∈ A, z ≠ x and z ≠ y such that <x,z> ∈ R and
<z,y> ∈ R.
Dense =def: for a relation R in A, if <x,y> ∈ R and x ≠ y, then ∃z ∈ A, z ≠ x ∧ z ≠ y, such that {<x,z>, <z,y>} ⊆ R
The relation “greater than” defined on the set of real numbers is dense, as there are infinitely many numbers between any two numbers x and y. However, the relation “greater than” defined on the set of natural numbers is not dense, as there is no element z between two consecutive natural numbers (e.g. 1 and 2).
The cardinality
|A| of a set A = the number of members / elements inside A.
If
N = the set of natural numbers, then |N| = ∞ (infinity); |∅| = 0
Two sets are considered equivalent if there is a one-to-one correspondence between them, i.e. for any two sets A and B, every element of A must be mapped onto exactly one element of B (one-to-one mapping) and every element of B must be the image of exactly one element of A (onto mapping). In this case, it is also reasonable to say that |A| = |B|.
Notation: A ~ B (A is equivalent to B).
NB: Set equality ≠ set equivalence!
Two
sets are equal iff they have the same members; they are equivalent iff they
have the same number of members. Equal sets ⊂ equivalent sets.
A set is infinite
iff it is equivalent to a proper subset of itself.
Example:
X = {a | a is a natural number}
Y = {b | b is an even natural number}
RXY: ∀<a,b> ∈ RXY, b = 2a (a one-to-one correspondence between X and its proper subset Y)
Formal logic
A formal
system in general consists of
- a non-empty set of primitives (objects we are interested in investigating further)
- a set of axioms (statements that we assume to be true but that are relative to the time and place we live in and thus may change with time) about these primitives
- a way to reason (make further statements) about these axioms by
a. using a set of recursive rules of definition
b. investigating the background logic of the language in which the axioms are stated (e.g. an axiom in statement logic would consist of subparts in predicate logic)
c. no explicit means of derivation, just looking at what logically follows from the axioms
We intuitively recognize that the first pattern of reasoning below is valid while the second is not:

All X are Y
Z is an X
____________
Z is a Y
(valid)

Some X are Y
Z is an X
___________
Z is a Y
(invalid)
Language systems comprise natural languages (e.g. English, Italian), artificial languages (e.g. Esperanto, Assyrani), programming languages (e.g. Python, Haskell), but also arithmetic, set theory, music etc. They are basically formal systems with
- a vocabulary
- a syntax that deals with the properties of the expressions (primitives, axioms, rules of inference or composition, etc.) within the system itself (“the form”)
- a semantics that deals with the interactions between the system and its interpretations / models (“the content”)
Unlike natural language systems, which are very complex, statement logic and predicate logic are strictly binary on the semantic side. On the syntactic side, the expressions of these formal languages are all declaratives (there are no interrogatives, imperatives, performatives etc.) and the generative system (the means of combining sentences to form more complex sentences) is also limited (whereas natural languages allow sentences of unbounded length and embedding depth).
The formal language system that is the object of our studies is the object language.
The formal language system we use to talk about the object language
is the meta language.
Statement logic
Statement logic represents a formal system
where the primitives are all statements. It has
- An infinite vocabulary of atomic statements p, q, r, s (atomic in the sense that they are all in their simplest form, i.e. primitives).
- Rules of syntax:
a. An atomic statement itself is a well-formed formula (wff).
b. Any wff preceded by ¬ (negation) is also a wff.
c. Any two distinct wffs can be made into another wff by means of adding any of the connectives ∧ (conjunction), ∨ (disjunction), → (conditional), ↔ (biconditional) between them and enclosing the result in parentheses.
- Rules of semantics:
a. Statements - both atomic and complex - all have a truth value which is set to either True (1) or False (0). No other options are possible.
b. The truth value of an atomic statement p depends on nothing but the content of p.
c. The truth value of a complex statement built out of atomic statements and connectives depends on the truth values of the atomic statements and the truth functional properties of the connectives. Unlike the truth values of statements, which vary according to the content of the statements, the truth functional properties of connectives are uniform and universal.
Truth tables for connectives

p | ¬ p
0 | 1
1 | 0

p | q | p ∧ q | p ∨ q | p ⊻ q | p → q | p ↔ q
0 | 0 | 0 | 0 | 0 | 1 | 1
1 | 0 | 0 | 1 | 1 | 0 | 0
0 | 1 | 0 | 1 | 1 | 1 | 0
1 | 1 | 1 | 1 | 0 | 1 | 1
¬ reverses the truth value of the statement it attaches to.
(P ∧ Q) is true iff both P and Q are true.
(P ∨ Q) is true iff at least one of P and Q is true.
(P ⊻ Q) is true iff either P or Q (not both!) is true.
(P → Q) is true iff a) Q is true or b) both P and Q are false.
(P ↔ Q) is true iff P → Q and Q → P are both true, that is, iff a) both P and Q are true or b) both P and Q are false.
Computing the truth values of complex expressions
The number of rows in a truth table for a complex expression is determined by the requirement that we compute all possible truth values for every atomic statement in that expression. As every atomic statement has two possible values, the truth table for an expression X with n atomic statements has 2^n rows. In general we compute the truth values of the innermost expressions before moving on to the outer expressions.
A logical tautology is a statement
that is always true (regardless of the truth values of the atomic statements in
it), e.g. (p
→ (q → p))
A logical contradiction is a statement that is always false
(regardless of the truth values of the atomic statements in it), e.g. (p ∧ (¬
p))
A logical contingency is a statement whose truth value
depends (is contingent) on the truth value of the atomic statements in it, e.g.
(p → q)
You can test whether an expression is a tautology or a contingency by
a) drawing a truth table: iff the final column contains only the value 1 (true), then the expression is a tautology
b) reductio ad absurdum: Assume the expression is not a tautology but a contingency, i.e. one of its possible truth values is 0 (false). Reason backwards from this assumption to compute the truth values of the atomic statements. If you run into a contradiction, the expression is indeed a tautology, otherwise it is a contingency.
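Method a) can be automated: a minimal sketch (with invented helper names) that enumerates all 2^n rows of the truth table for a formula written as a Python function of its atomic statements:

from itertools import product

def truth_values(formula, n):
    """All values of an n-place truth function, one per row of its truth table."""
    return [formula(*row) for row in product([False, True], repeat=n)]

def classify(formula, n):
    values = truth_values(formula, n)
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "contingency"

def implies(p, q):
    return (not p) or q

print(classify(lambda p, q: implies(p, implies(q, p)), 2))   # tautology
print(classify(lambda p: p and not p, 1))                    # contradiction
print(classify(implies, 2))                                  # contingency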
In a biconditional statement P ↔ Q that is a logical tautology, the expressions P and Q (i.e. the
ones on both sides of ↔) are logically equivalent to each other. Notation: P ⇔ Q.
Rule of Substitution: Iff two expressions P and Q are logically equivalent they can be
substituted for each other in a larger expression.
In a conditional statement P → Q that is a logical tautology, the consequent Q (the expression on the right side of →) is a logical consequence of the antecedent P (the expression on the left side of →), or: the antecedent P logically implies the consequent Q.
Notation: P ⇒ Q.
Idempotent Laws:
(P ∨ P) ⇔ P
(P ∧ P) ⇔ P
Commutative Laws:
(P ∨ Q) ⇔ (Q ∨ P)
(P ∧ Q) ⇔ (Q ∧ P)
Associative Laws:
((P ∨ Q) ∨ R) ⇔ (P ∨ (Q ∨ R))
((P ∧ Q) ∧ R) ⇔ (P ∧ (Q ∧ R))
Distributive Laws:
(P ∨ (Q ∧ R)) ⇔ ((P ∨ Q) ∧ (P ∨ R))
(P ∧ (Q ∨ R)) ⇔ ((P ∧ Q) ∨ (P ∧ R))
Identity Laws:
(P ∨ F) ⇔ P, (P ∨ T) ⇔ T
(P ∧ F) ⇔ F, (P ∧ T) ⇔ P
(where T is any tautology and F any contradiction)
Complement Laws:
(P ∨ ¬P) ⇔ T, (P ∧ ¬P) ⇔ F
¬¬P ⇔ P
De Morgan’s Laws:
¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)
¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)
Conditional Laws:
(P → Q) ⇔ (¬P ∨ Q)
(P → Q) ⇔ (¬Q → ¬P)
Biconditional Laws:
(P ↔ Q) ⇔ ((P → Q) ∧ (Q → P))
(P ↔ Q) ⇔ ((P ∧ Q) ∨ (¬P ∧ ¬Q))
A proof is a formal
representation of valid patterns of reasoning. It consists of premises (statements that we, for the
sake of argument, assume to be true) and a conclusion
(a statement whose truth is demonstrated to necessarily follow from the assumed
truth of the premises).
For a given proof X, if P1, P2, …, Pn
are the premises of X and Q is the conclusion of X, then
X is valid iff ((P1 ∧ P2 ∧ … ∧ Pn) → Q) is a logical tautology
X is invalid iff ((P1 ∧ P2 ∧ … ∧ Pn) → Q) is not a logical tautology
Example:
p → q
p
_______
∴ q

If it rains, the street is wet.
It rains
_______
∴ The street is wet

As (((p → q) ∧ p) → q) is a logical tautology, this is a valid proof.
Logical fallacies

Fallacy of affirming the consequent
p → q
q
_______
∴ p

If it rains, the street is wet.
The street is wet
____________
∴ It rains

Fallacy of denying the antecedent
p → q
¬p
_______
∴ ¬q

If it rains, the street is wet.
It does not rain
____________
∴ The street is not wet
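The same truth-table idea tests validity: an argument is valid iff the conditional from the conjoined premises to the conclusion is a tautology. A sketch (with an invented helper is_valid) contrasting the valid pattern above (p → q, p ∴ q) with the fallacy of affirming the consequent:

from itertools import product

def is_valid(premises, conclusion, n):
    """Valid iff (P1 ∧ ... ∧ Pn) → Q is true in every row of the truth table."""
    for row in product([False, True], repeat=n):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False
    return True

def implies(p, q):
    return (not p) or q

# p → q, p ∴ q  (valid)
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q, 2))          # True

# p → q, q ∴ p  (affirming the consequent: invalid)
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p, 2))          # False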
You can test the validity of a (complex) proof by
a) drawing a truth table
b) breaking down the argument into simpler arguments. If each of these simpler arguments has already been proven valid, then we can be certain that the complex argument is also valid.
Rules of inference

Modus Ponens (MP)
P → Q
P
_______
∴ Q

Modus Tollens (MT)
P → Q
¬ Q
_______
∴ ¬P

Hypothetical Syllogism (HS)
P → Q
Q → R
_______
∴ P → R

Disjunctive Syllogism (DS)
P ∨ Q
¬P
_______
∴ Q

Simplification (Simpl.)
P ∧ Q
_______
∴ P

Conjunction (Conj.)
P
Q
_______
∴ P ∧ Q

Addition (Add.)
P
_______
∴ P ∨ Q
Types of proofs
Proofs can be made from rules of inference only (simple proofs, example a.),
but often the premises are not of the right syntactic form and must be converted into logically equivalent
forms (using the laws of statement logic) to
which the rules of inference can be applied directly (example b.)
a. Prove t from p → q, p ∨ s, q → r, s → t and ¬r
1. p → q (premise)
2. p ∨ s (premise)
3. q → r (premise)
4. s → t (premise)
5. ¬r (premise)
6. ¬q (3,5 MT)
7. ¬p (1,6 MT)
8. s (2,7 DS)
9. t (4,8 MP)

b. Prove (p → q) from (p → (q ∨ r)) and (¬r)
1. p → (q ∨ r) (premise)
2. ¬r (premise)
3. ¬p ∨ (q ∨ r) (1 Cond.)
4. (¬p ∨ q) ∨ r (3 Assoc.)
5. r ∨ (¬p ∨ q) (4 Commut.)
6. ¬p ∨ q (2,5 DS)
7. p → q (6 Cond.)
In some cases the Rule of Substitution is applied to replace parts of premises (or entire premises) by expressions that are logically equivalent to them (proofs with substitution of wffs):
c. Prove (p ∧ (¬(q → r))) from (¬(p → (¬q))) and (¬r)
1. ¬(p → ¬q) (premise)
2. ¬r (premise)
3. ¬(¬p ∨ ¬q) (1 Cond. (Sub.))
4. ¬¬p ∧ ¬¬q (3 De Morgan)
5. p ∧ ¬¬q (4 Comp.)
6. p ∧ q (5 Comp. (Sub.))
7. (p ∧ q) ∧ ¬r (2,6 Conj.)
8. p ∧ (q ∧ ¬r) (7 Assoc.)
9. p ∧ (¬¬q ∧ ¬r) (8 Comp.)
10. p ∧ ¬(¬q ∨ r) (9 De Morgan)
11. p ∧ ¬(q → r) (10 Cond. (Sub.))
Suppose a proof has premises P1,P2,...Pn
and (Q→R) as the conclusion (i.e.
the conclusion contains a conditional as
the main connective).
A direct conditional proof
is made by adding the antecedent Q of the conclusion as an additional auxiliary premise and then deriving the
consequent R of the conclusion from the premises P1,P2...Pn
and Q.
An auxiliary premise can be any wff / syntactic constituent of the
proof that will help us forward with the proof BUT the auxiliary premise must
be cancelled by the rule of
Conditional Proof before ending the entire proof.
By convention, every line of the proof which is based on the auxiliary
premise has to be indicated with a vertical bar.
A conditional proof may have multiple levels of embedding, i.e. there
may be multiple auxiliary premises, where the auxiliary premise APn
crucially depends on the auxiliary premise APn-1 (i.e. the second
auxiliary premise crucially depends on the first, the third on the second,
etc.) The exiting of a lower to a higher level of embedding is always indicated
by a conditional whose antecedent is the auxiliary premise from the lower level
and whose consequent is the formula that has just been derived.
A statement from a higher level may always be used in a lower level but
a statement from a lower level may not
be used in a higher level, i.e. when we exit a lower level, that level is
considered closed off.
The validity of the conditional proof is based on
((P1 ∧ P2 ∧ ... ∧ Pn) → (Q → R)) ⇔ ((P1 ∧ P2 ∧ ... ∧ Pn ∧ Q) → R)
or generally
(P → (Q → R)) ⇔ ((P ∧ Q) → R)
This equivalence is very important when dealing with semantics and/or the programming language Haskell, where substituting ((P ∧ Q) → R) with (P → (Q → R)) is called Schönfinkelization (in semantics) or currying (in both semantics and Haskell) and substituting (P → (Q → R)) with ((P ∧ Q) → R) (which is what we do in the direct conditional proof) is called uncurrying.
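The notes mention Haskell; the same idea can be sketched in Python with two invented higher-order helpers, curry and uncurry:

def curry(f):
    """Turn a two-argument function f(x, y) into a function g with g(x)(y) == f(x, y)."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """Turn a curried function g(x)(y) back into a function of two arguments."""
    return lambda x, y: g(x)(y)

def implies(p, q):          # takes both arguments at once: the "uncurried" form
    return (not p) or q

implies_curried = curry(implies)   # takes its arguments one at a time: the "curried" form

assert implies_curried(True)(False) == implies(True, False)            # both False
assert uncurry(implies_curried)(False, True) == implies(False, True)   # both True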
Suppose a proof has premises P1, P2,...Pn and conclusion Q.
An indirect proof is made by
introducing the negation of the
conclusion, ¬Q, as an auxiliary premise, and then trying to derive a
contradiction, i.e. we indirectly prove that Q follows from P1, P2,...Pn by showing that ¬Q is not compatible with P1,
P2,...Pn.
Note that this form of argumentation uses the same logic as reductio ad absurdum.
Predicate Logic
Unlike in statement logic where the primitives are all statements, in
predicate logic the atomic constituents come in two parts: predicates and
terms.
- Vocabulary:
- Terms
- Individual constants j, k, l
- Individual variables x, y, z
- Predicates: P,Q,R
- Logical connectives: ¬∧∨→↔
- Quantifiers: ∀ (universal quantifier), ∃ (existential quantifier)
- Auxiliary symbols: (,) / [,]
Predicates can be understood as a type of function that takes terms as its arguments. An n-place predicate is one that takes n terms as its arguments (it has to combine with n terms in order to form a statement). When the arity / valency (= number of terms it requires) of a predicate does not match the number of terms it combines with, the result is ill-formed.
One-place predicates
H(s)
H: Human, s: Susan
‘Susan is a human’

Two-place predicates
S(j,p)
S: see, j: John, p: Peter
‘John sees Peter’

Three-place predicates
G(a,b,c)
G: give, a: Alice, b: the book, c: Cynthia
‘Alice gives the book to Cynthia’
When a predicate combines with individual constants, the result is a statement, when it combines with
individual variables, the result is an open
statement or propositional function.
e.g. H(s) ‘Susan is a human’ is a statement whereas H(x) ‘x is a human’
with x referring to some individual in the domain
of discourse is an open statement.
Open statements can be converted into statements by attaching quantifiers to the expression:

∀ (each, every, all)
∀x H(x): for all x, H holds for x
e.g.: all x are humans
false iff ∃x ¬H(x), i.e. iff there is at least one x for which H does not hold
e.g.: iff at least one x is not a human

∃ (some, at least one)
∃x H(x): there is at least one x for which H holds
e.g.: at least one x is a human
false iff ∀x ¬H(x), i.e. iff for all x, H does not hold
e.g.: iff all x are not humans / iff no x is a human
- Syntax:
- If P is an n-ary predicate and t1, ..., tn are terms, then P(t1, ..., tn) is a formula. This generates atomic formulas.
- If 𝛼 and 𝛽 are formulas, then ¬𝛼, 𝛼∧𝛽, 𝛼∨𝛽, 𝛼→𝛽, 𝛼↔𝛽 are formulas. This holds for statements as well as open statements.
- If 𝜙 is a formula and x is an individual variable, then (∀x) 𝜙 and (∃x) 𝜙 are formulas. A quantifier plus variable may be prefixed to any expression, even if that expression does not contain this variable (vacuous quantification).
- Nothing else is a formula.
In a formula (∀x) 𝜙 or (∃x) 𝜙, 𝜙 is the scope of the attached quantifier. A
quantifier may have a formula as its scope even if it only vacuously
quantifies. The scope of a quantifier may be contained in another. An
occurrence of a variable x is bound if it occurs in the scope of (∀x) or
(∃x).
A variable is free if it is
not bound. Tertium non datur (there
are no other options). Individual constants do not interact with quantifiers.
A statement in predicate logic is a formula that does not contain any
free variables whereas an open statement contains at least one free variable.
- Semantics:
A statement S can have one of two values: True (1) and False (0).
A constant c denotes, i.e.
means, refers to an individual in a
domain of discourse D whose members we know in advance.
Notation: ⟦𝛼⟧ : semantic value / denotation of 𝛼, ⟦⟧
: denotation symbol
A one-place predicate P1 denotes a subset of D: the set of individuals in D for which P1 holds.
A two-place predicate P2 denotes a subset of D × D (the
Cartesian Product of D with itself) : the set of pairs of individuals in D between which P holds.
An n-place predicate Pn denotes a subset of D × D × ... × D (n times): the set of n-tuples of individuals between which Pn holds.
Any statement Pn(c1, ..., cn) is true iff the tuple of the denotations of its terms, <⟦c1⟧, ..., ⟦cn⟧>, is a member of ⟦Pn⟧:
⟦Pn(c1, ..., cn)⟧ = true iff <⟦c1⟧, ..., ⟦cn⟧> ∈ ⟦Pn⟧
⟦c1⟧
= individual 1
⟦cn⟧
= individual n
⟦Pn⟧
= the set of n-tuples of individuals in D for which Pn holds
The truth value of a statement (both atomic and complex) is therefore not absolute but contingent on (i.e. depends on) the membership of the domain of discourse D and the choice of the denotations of the constants and predicates involved. When these are fully specified, we have a model for predicate logic, so the truth value of a statement is relative to a particular model M. Sometimes ⟦P(t)⟧ is written as ⟦P(t)⟧M (with the model M as a superscript) to indicate the relativity of the denotation to the logical model.
A logical model consists
of the domain of discourse D and a function which assigns
to each individual constant, a member of D
to each one-place predicate, a subset of D
to each two-place predicate, a subset of D × D
to each n-place predicate, a subset of D × D × ... × D (n times)
The truth value of a complex statement depends on the truth values of the atomic statements in it, the truth functional properties of the connectors in it and the quantifier scopes.
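A sketch of a logical model and the evaluation of atomic and quantified statements over a small, invented domain of discourse:

# An invented model M: a domain of discourse plus denotations
D = {'susan', 'john', 'peter'}
HUMAN = {'susan', 'john', 'peter'}          # ⟦H⟧ ⊆ D
SEE   = {('john', 'peter')}                 # ⟦S⟧ ⊆ D × D

# Atomic statements: P(c) is true iff ⟦c⟧ ∈ ⟦P⟧
print('susan' in HUMAN)                     # H(s): True
print(('john', 'peter') in SEE)             # S(j, p): True

# Quantified statements, evaluated over the whole domain D
print(all(x in HUMAN for x in D))           # ∀x H(x): True
print(any((x, 'peter') in SEE for x in D))  # ∃x S(x, p): True

# ∀x H(x) is false iff ∃x ¬H(x)
assert all(x in HUMAN for x in D) == (not any(x not in HUMAN for x in D))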