I'm teaching an intermediate/advanced logic course this semester. So
I had to ask myself how to introduce the semantics of quantifiers, with
an eye to proving soundness and completeness. The standard approach,
going back to Tarski, defines a satisfaction relation between a formula,
a model, and an assignment function, and then defines truth by
supervaluating over all assignments: a formula is true in a model iff it
is satisfied by every assignment. The main alternative, often found
in intro logic textbooks, is Mates' approach, on which ∀xA(x) is
true in a model M iff A(c) is true in every c-variant of M (every model
that differs from M at most in the interpretation of c), where c is a
constant not occurring in A.
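
To make the contrast concrete, here is a minimal sketch of both truth
definitions in Python, for a toy language with unary predicates,
negation, and ∀. All the names (Model, satisfies, mates_true, and so
on) are ad-hoc choices of mine, not anyone's official formulation:

    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Const:
        name: str

    @dataclass(frozen=True)
    class Pred:          # atomic formula P(t), t a Var or Const
        symbol: str
        term: object

    @dataclass(frozen=True)
    class Not:
        sub: object

    @dataclass(frozen=True)
    class Forall:
        var: str
        body: object

    @dataclass(frozen=True)
    class Model:
        domain: frozenset
        consts: dict     # constant name -> domain element
        preds: dict      # predicate symbol -> set of domain elements

    def denotation(term, model, g):
        return g[term.name] if isinstance(term, Var) else model.consts[term.name]

    def free_vars(a):
        if isinstance(a, Pred):
            return {a.term.name} if isinstance(a.term, Var) else set()
        if isinstance(a, Not):
            return free_vars(a.sub)
        return free_vars(a.body) - {a.var}

    # Tarski: satisfaction relative to a model and an assignment g
    def satisfies(model, g, a):
        if isinstance(a, Pred):
            return denotation(a.term, model, g) in model.preds[a.symbol]
        if isinstance(a, Not):
            return not satisfies(model, g, a.sub)
        # Forall: shift g at the bound variable through the whole domain
        return all(satisfies(model, {**g, a.var: d}, a.body)
                   for d in model.domain)

    def tarski_true(model, a):
        # truth = satisfaction by every assignment to the free variables;
        # for a sentence there is nothing to vary, so this is one check
        vs = sorted(free_vars(a))
        return all(satisfies(model, dict(zip(vs, vals)), a)
                   for vals in product(model.domain, repeat=len(vs)))

    def substitute(a, x, c):
        # replace free occurrences of the variable x by the constant c
        if isinstance(a, Pred):
            hit = isinstance(a.term, Var) and a.term.name == x
            return Pred(a.symbol, c) if hit else a
        if isinstance(a, Not):
            return Not(substitute(a.sub, x, c))
        return a if a.var == x else Forall(a.var, substitute(a.body, x, c))

    # Mates: truth for sentences, with no assignment functions in sight
    def mates_true(model, a, fresh=0):
        if isinstance(a, Pred):
            # sentences only: every term is a constant at this point
            return model.consts[a.term.name] in model.preds[a.symbol]
        if isinstance(a, Not):
            return not mates_true(model, a.sub, fresh)
        # Forall: instantiate the body with a constant not occurring in it
        # (assuming no user constant is named "_c<n>"), then demand truth
        # in every c-variant of the model
        c = Const(f"_c{fresh}")
        inst = substitute(a.body, a.var, c)
        return all(mates_true(Model(model.domain,
                                    {**model.consts, c.name: d},
                                    model.preds),
                              inst, fresh + 1)
                   for d in model.domain)

    # Example: ∀x¬P(x) in a two-element model where P is empty.
    M = Model(frozenset({1, 2}), {}, {"P": set()})
    A = Forall("x", Not(Pred("P", Var("x"))))
    assert tarski_true(M, A) and mates_true(M, A)

For sentences the two definitions agree (on this toy language, at
least); the trade-off is that the Mates clause never mentions
assignment functions, at the price of re-interpreting a fresh constant
at every quantifier.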
In the dark old days of early logic, there was only syntax. People
introduced formal languages and laid down axioms and inference rules,
but there was nothing to justify them except an appeal to
"self-evidence". Of course, the languages were assumed to be meaningful,
but there was no systematic theory of meaning, so the axioms and rules
could not be justified by the meaning of the logical constants.
All this changed with the development of model theory. Now one could
give a precise semantics for logical languages. The intuitive idea of
entailment as necessary truth-preservation could be formalized. One
could check that a proposed system of axioms and rules was sound, and
one could confirm – something that had been impossible before – that it
was complete: since every valid formula was then already derivable, any
further, non-redundant axiom or rule would break the system's soundness.
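
Spelled out in the now-standard notation (a sketch, abstracting from
the historical details; Γ a set of formulas, A a formula):

    Γ ⊨ A   iff   A is true in every model in which all members of Γ are true
    soundness:     if Γ ⊢ A then Γ ⊨ A
    completeness:  if Γ ⊨ A then Γ ⊢ A

In particular, a non-redundant axiom B is one with ⊬ B; by completeness
it follows that ⊭ B, so the extended system proves an invalid formula.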