Computational logic is the branch of logic and computer science concerned with the application of formal logical reasoning to computation, and conversely, the use of computational methods to solve problems in logic. It encompasses a wide family of formalisms including propositional logic, predicate logic, modal logic, temporal logic, and type theory, all studied through the lens of algorithmic processes. The discipline provides the theoretical backbone for program verification, automated theorem proving, logic programming, database query languages, and artificial intelligence.
The roots of computational logic trace back to the foundational work of George Boole, Gottlob Frege, Kurt Gödel, Alonzo Church, and Alan Turing in the 19th and 20th centuries. Boole's algebraic treatment of logic established the basis for digital circuit design, while Gödel's incompleteness theorems revealed fundamental limits of formal systems. Church's lambda calculus and Turing's machines provided equivalent models of computation, and the Curry-Howard correspondence later revealed a deep structural isomorphism between proofs and programs. These ideas crystallized into the modern field when researchers such as John Alan Robinson developed resolution-based theorem proving in 1965 and Robert Kowalski formalized logic programming in the 1970s.
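Robinson's resolution principle can be illustrated at the propositional level. The following sketch (an illustrative simplification, not Robinson's full first-order procedure with unification) represents clauses as sets of integer literals, where a negative integer denotes a negated variable, and saturates the clause set until it derives the empty clause, which signals a contradiction:

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses. Clauses are frozensets of
    integer literals; -x denotes the negation of variable x."""
    resolvents = []
    for lit in c1:
        if -lit in c2:  # complementary pair found: resolve on it
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return resolvents

def resolution_refutation(clauses):
    """Saturate the clause set under binary resolution. Returns True if
    the empty clause is derivable (the set is unsatisfiable)."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True   # empty clause derived: contradiction
                new.add(r)
        if new <= clauses:
            return False          # saturated with no contradiction
        clauses |= new

# Example: p, p -> q, and not-q together are contradictory (1 = p, 2 = q).
print(resolution_refutation([{1}, {-1, 2}, {-2}]))  # True (unsatisfiable)
print(resolution_refutation([{1}, {-1, 2}]))        # False (satisfiable)
```

Full resolution as used in automated theorem provers extends this idea to first-order clauses by unifying complementary literals rather than matching them syntactically.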
Today, computational logic underpins critical areas of technology and science. SAT solvers and SMT solvers verify hardware and software systems with millions of components. Proof assistants like Coq, Lean, and Isabelle allow mathematicians and engineers to construct machine-checked proofs. Type systems rooted in constructive logic enforce strong correctness guarantees in languages like Haskell and Rust. In artificial intelligence, knowledge representation and reasoning engines rely on description logics and non-monotonic reasoning. The field continues to expand into areas such as quantum logic, probabilistic logic, and the formal verification of machine-learning systems.
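The core of the search procedures behind modern SAT solvers can be seen in the classic DPLL algorithm: unit propagation followed by branching on an unassigned variable. The sketch below is a minimal illustration under the same clause encoding as before (integer literals, negation by sign); production solvers add clause learning, watched literals, and heuristics on top of this skeleton:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL sketch. Clauses are lists of integer literals
    (-x = negation of x). Returns a satisfying set of literals,
    or None if the formula is unsatisfiable."""
    if assignment is None:
        assignment = set()
    # Unit propagation: assign literals forced by one-literal clauses.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None  # clause falsified under this assignment
            if len(unassigned) == 1:
                assignment.add(unassigned[0])
                changed = True
    # Branch: pick any unassigned variable and try both truth values.
    for clause in clauses:
        for lit in clause:
            if lit not in assignment and -lit not in assignment:
                for choice in (lit, -lit):
                    result = dpll(clauses, assignment | {choice})
                    if result is not None:
                        return result
                return None  # neither branch succeeded
    return assignment  # every clause is satisfied

# (p or q) and (not-p or q) and (not-q or r): satisfiable.
print(dpll([[1, 2], [-1, 2], [-2, 3]]))
# p and not-p: unsatisfiable.
print(dpll([[1], [-1]]))  # None
```

The gap between this toy solver and an industrial one such as MiniSat or Z3 lies almost entirely in engineering of the search, not in the underlying logic.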