In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY or SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
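The definition above can be made concrete with a tiny brute-force checker. This is an illustrative sketch, not a practical solver: the function name `is_satisfiable` and the dictionary-based encoding of assignments are assumptions of this example, and the exhaustive search is exponential in the number of variables.

```python
from itertools import product

def is_satisfiable(formula, variables):
    """Brute-force SAT check: try all 2^n assignments.

    `formula` is any function mapping an assignment dict to True/False.
    Exponential time, but fine for tiny illustrative formulas.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return True
    return False

# "a AND NOT b" is satisfiable; "a AND NOT a" is not.
sat = is_satisfiable(lambda v: v["a"] and not v["b"], ["a", "b"])
unsat = is_satisfiable(lambda v: v["a"] and not v["a"], ["a"])
```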
SAT is the first problem that was proven to be NP-complete; see the Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists; yet this belief has not been proven mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing.
Nevertheless, as of 2007, heuristic SAT algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols,^{[1]} which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design, and automatic theorem proving.
A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory, algorithmics, cryptography and artificial intelligence.
There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a particular structure. A literal is either a variable, called a positive literal, or the negation of a variable, called a negative literal. A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses (or a single clause). For example, x_{1} is a positive literal, ¬x_{2} is a negative literal, and x_{1} ∨ ¬x_{2} is a clause. The formula (x_{1} ∨ ¬x_{2}) ∧ (¬x_{1} ∨ x_{2} ∨ x_{3}) ∧ ¬x_{1} is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x_{1} = FALSE, x_{2} = FALSE, and x_{3} arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x_{3}) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x_{3}) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a=TRUE or a=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively.
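The worked CNF example above can be checked mechanically. The sketch below encodes a clause as a list of nonzero integers in the DIMACS convention (k for x_k, -k for ¬x_k); the helper name `eval_cnf` is an assumption of this example.

```python
def eval_cnf(clauses, assignment):
    """Evaluate a CNF formula under an assignment.

    Clauses are lists of nonzero ints: literal k means variable x_k,
    -k means its negation (DIMACS-style). `assignment` maps variable
    indices to booleans.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3) AND (NOT x1)
cnf = [[1, -2], [-1, 2, 3], [-1]]
# The assignment from the text: x1 = FALSE, x2 = FALSE, x3 arbitrary.
ok = eval_cnf(cnf, {1: False, 2: False, 3: True})

# a AND NOT a, as two unit clauses, is satisfied by neither value of a:
contradiction = [[1], [-1]]
unsat = not any(eval_cnf(contradiction, {1: v}) for v in (False, True))
```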
For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form R(l_{1},...,l_{n}) for some Boolean operator R and (ordinary) literals l_{i}. Different sets of allowed Boolean operators lead to different problem versions. As an example, R(¬x,a,b) is a generalized clause, and R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z) is a generalized conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just when exactly one of its arguments is.
Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x_{1}∧y_{1}) ∨ (x_{2}∧y_{2}) ∨ ... ∨ (x_{n}∧y_{n}) into conjunctive normal form yields

(x_{1} ∨ x_{2} ∨ … ∨ x_{n}) ∧ (y_{1} ∨ x_{2} ∨ … ∨ x_{n}) ∧ (x_{1} ∨ y_{2} ∨ … ∨ x_{n}) ∧ (y_{1} ∨ y_{2} ∨ … ∨ x_{n}) ∧ … ∧ (y_{1} ∨ y_{2} ∨ … ∨ y_{n}),

i.e. the conjunction of all 2^{n} clauses obtainable by choosing either x_{i} or y_{i} for each i; while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^{n} clauses of n variables.
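The exponential blowup can be demonstrated directly: distributing a disjunction of conjunctions into CNF amounts to picking one literal from every term for each clause. The helper name `dnf_to_cnf` and the string encoding of literals are assumptions of this sketch.

```python
from itertools import product

def dnf_to_cnf(terms):
    """Distribute a DNF (a disjunction of conjunctions of literals)
    into an equivalent CNF: each clause picks one literal from every
    term, so a DNF of n binary terms yields 2^n clauses."""
    return [list(choice) for choice in product(*terms)]

# (x1 AND y1) OR (x2 AND y2) OR (x3 AND y3), literals as strings:
n = 3
dnf = [[f"x{i}", f"y{i}"] for i in range(1, n + 1)]
cnf = dnf_to_cnf(dnf)
num_clauses = len(cnf)   # 2^n = 8 clauses of n = 3 literals each
```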
SAT was the first known NP-complete problem, as proved by Stephen Cook at the University of Toronto in 1971^{[2]} and independently by Leonid Levin at the National Academy of Sciences in 1973.^{[3]} Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class NP can be reduced to the SAT problem for CNF^{[note 1]} formulas, sometimes called CNF-SAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments.
NP-completeness refers only to the runtime on worst-case instances. Many of the instances that occur in practical applications can be solved much more quickly. See Algorithms for solving SAT below.
SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, disjunctions of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; for an example, exchange "∧" and "∨" in the above exponential blowup example for conjunctive normal forms.
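The linear-time check described above is short to implement: a DNF term is satisfiable exactly when it contains no complementary pair of literals. The function name `dnf_satisfiable` and the signed-integer literal encoding are assumptions of this sketch.

```python
def dnf_satisfiable(terms):
    """A DNF is satisfiable iff some term contains no complementary
    pair x, NOT x. Literals: k for x_k, -k for NOT x_k.
    Linear in the total number of literals."""
    for term in terms:
        literals = set(term)
        if not any(-lit in literals for lit in literals):
            return True   # this term alone yields a satisfying assignment
    return False

# (x1 AND NOT x1) OR (x2 AND x3): the second term is consistent.
sat_dnf = dnf_satisfiable([[1, -1], [2, 3]])
# Every term contains a complementary pair: unsatisfiable.
unsat_dnf = dnf_satisfiable([[1, -1], [2, -2]])
```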
Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete as well; this problem is called 3-SAT, 3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause l_{1} ∨ ⋯ ∨ l_{n} to a conjunction of n − 2 clauses

(l_{1} ∨ l_{2} ∨ x_{2}) ∧ (¬x_{2} ∨ l_{3} ∨ x_{3}) ∧ (¬x_{3} ∨ l_{4} ∨ x_{4}) ∧ ⋯ ∧ (¬x_{n−3} ∨ l_{n−2} ∨ x_{n−2}) ∧ (¬x_{n−2} ∨ l_{n−1} ∨ l_{n}),

where x_{2}, ⋯, x_{n−2} are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent, they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original; i.e., the length growth is polynomial.^{[4]}
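The clause-splitting step can be sketched as follows; the function name `clause_to_3cnf` and the convention of passing the first unused variable index are assumptions of this example.

```python
def clause_to_3cnf(clause, next_fresh):
    """Split a clause of n literals into n-2 equisatisfiable
    3-literal clauses by chaining fresh variables.

    Literals are nonzero ints (k for x_k, -k for NOT x_k);
    `next_fresh` is the first unused variable index. Returns the
    new clause list and the updated fresh-variable counter.
    """
    n = len(clause)
    if n <= 3:
        return [list(clause)], next_fresh
    fresh = list(range(next_fresh, next_fresh + n - 3))
    out = [[clause[0], clause[1], fresh[0]]]
    for i in range(n - 4):
        # middle links: (NOT fresh_i OR l_{i+3} OR fresh_{i+1})
        out.append([-fresh[i], clause[i + 2], fresh[i + 1]])
    out.append([-fresh[-1], clause[n - 2], clause[n - 1]])
    return out, next_fresh + n - 3

# Splitting (l1 OR l2 OR l3 OR l4 OR l5) yields n - 2 = 3 clauses:
three_cnf, counter = clause_to_3cnf([1, 2, 3, 4, 5], 6)
```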
3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are also NP-hard.^{[note 2]} This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses, the corresponding graph consists of a vertex for each literal and an edge between each two non-contradicting^{[note 3]} literals from different clauses; cf. picture. The graph has a c-clique if and only if the formula is satisfiable.^{[5]}
There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)^{n}, where n is the number of variables in the 3-SAT proposition, and correctly decides 3-SAT with high probability.^{[6]}
The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any k > 2) in exp(o(n)) time (i.e., fundamentally faster than exponential in n).
Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in the number of recursive calls made by a DPLL algorithm.^{[7]}
3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), in which formulas in CNF are considered with each clause containing up to k literals. However, since for any k ≥ 3 this problem can be neither easier than 3-SAT nor harder than SAT, and the latter two are NP-complete, k-SAT must be NP-complete as well.
Some authors restrict k-SAT to CNF formulas with exactly k literals. This doesn't lead to a different complexity class either, as each clause l_{1} ∨ ⋯ ∨ l_{j} with j < k literals can be padded with fixed dummy variables to l_{1} ∨ ⋯ ∨ l_{j} ∨ d_{j+1} ∨ ⋯ ∨ d_{k}. After padding all clauses, 2^{k}−1 extra clauses^{[note 4]} have to be appended to ensure that only d_{1}=⋯=d_{k}=FALSE can lead to a satisfying assignment. Since k doesn't depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses, as in ¬x ∨ ¬y ∨ ¬y.
A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast, ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability problem is called one-in-three positive 3-SAT.
One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem "LO4" in the standard reference, Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. One-in-three 3-SAT was proved to be NP-complete by Thomas Jerome Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete.^{[8]}
Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six fresh Boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by some setting of the fresh variables if and only if at least one of x, y, or z is TRUE; see picture (left). Thus any 3-SAT instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with 5m clauses and n+6m variables.^{[9]} Another reduction involves only four fresh variables and three clauses: R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z); see picture (right).
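Since the gadget involves only six fresh variables, its defining property can be verified exhaustively. The helper names `R` and `gadget_satisfiable` are assumptions of this sketch.

```python
from itertools import product

def R(p, q, r):
    """Ternary 'exactly one argument is TRUE' operator."""
    return (p + q + r) == 1   # bools behave as 0/1 in Python

def gadget_satisfiable(x, y, z):
    """Schaefer's gadget: some setting of the fresh variables a..f
    satisfies all five R-clauses iff at least one of x, y, z is TRUE."""
    return any(
        R(x, a, d) and R(y, b, d) and R(a, b, e)
        and R(c, d, f) and R(z, c, False)
        for a, b, c, d, e, f in product([False, True], repeat=6)
    )

# Check the claimed equivalence over all 8 settings of (x, y, z):
gadget_ok = all(
    gadget_satisfiable(x, y, z) == (x or y or z)
    for x, y, z in product([False, True], repeat=3)
)
```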
Another variant is the not-all-equal 3-satisfiability problem (also called NAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether an assignment to the variables exists such that in no clause all three literals have the same truth value. This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem.^{[8]}
SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT. This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally all OR operations in clauses are changed to XOR operations, the result is called exclusive-or 2-satisfiability, which is a problem complete for the complexity class SL = L.
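One classical polynomial-time approach to 2-SAT, due to Aspvall, Plass, and Tarjan, builds the implication graph (each clause a ∨ b yields edges ¬a → b and ¬b → a) and checks that no variable shares a strongly connected component with its negation. The sketch below uses Kosaraju's SCC algorithm; the function name `two_sat` and the literal encoding are assumptions of this example.

```python
def two_sat(n, clauses):
    """Decide 2-SAT over variables 1..n via the implication graph.

    `clauses` is a list of pairs of nonzero ints (k = x_k, -k = NOT x_k).
    Returns True iff no variable is in the same strongly connected
    component as its negation.
    """
    def node(l):                      # literal -> graph node index
        return 2 * (abs(l) - 1) + (l < 0)

    graph = [[] for _ in range(2 * n)]
    for a, b in clauses:              # (a OR b): NOT a -> b, NOT b -> a
        graph[node(-a)].append(node(b))
        graph[node(-b)].append(node(a))

    # Kosaraju pass 1: record vertices in order of DFS finish time.
    visited, order = [False] * (2 * n), []
    def dfs(u):
        visited[u] = True
        stack = [(u, iter(graph[u]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(graph[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    for u in range(2 * n):
        if not visited[u]:
            dfs(u)

    # Kosaraju pass 2: label SCCs on the reversed graph.
    rgraph = [[] for _ in range(2 * n)]
    for u in range(2 * n):
        for v in graph[u]:
            rgraph[v].append(u)
    comp = [-1] * (2 * n)
    for label, u in enumerate(reversed(order)):
        if comp[u] == -1:
            stack = [u]
            comp[u] = label
            while stack:
                v = stack.pop()
                for w in rgraph[v]:
                    if comp[w] == -1:
                        comp[w] = label
                        stack.append(w)

    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))
```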
The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or HORN-SAT. It can be solved in polynomial time by a single step of the unit propagation algorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is P-complete; it can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time.^{[10]}
Horn clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause ¬x_{1} ∨ ... ∨ ¬x_{n} ∨ y can be rewritten as x_{1} ∧ ... ∧ x_{n} → y; that is, if x_{1},...,x_{n} are all TRUE, then y needs to be TRUE as well.
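The implication reading leads directly to the unit-propagation algorithm mentioned above: repeatedly "fire" every clause whose premises are already TRUE, forcing its positive literal TRUE, until nothing changes. The function name `horn_sat` and the signed-integer encoding are assumptions of this sketch (a simple fixpoint loop, not the optimized linear-time version).

```python
def horn_sat(clauses):
    """Decide HORN-SAT by unit propagation and return the minimal
    model (set of variables forced TRUE), or None if unsatisfiable.

    Literals are nonzero ints (k = x_k, -k = NOT x_k); every clause
    must contain at most one positive literal.
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # A clause 'fires' once all its negative literals' variables
            # are already TRUE (its premises x_1..x_n hold).
            if all(-l in true_vars for l in clause if l < 0):
                pos = [l for l in clause if l > 0]
                if not pos:
                    return None            # goal clause violated: unsat
                if pos[0] not in true_vars:
                    true_vars.add(pos[0])  # conclusion forced TRUE
                    changed = True
    return true_vars

# x1, and x1 -> x2; the clause (NOT x2 OR NOT x3) stays satisfied:
model = horn_sat([[1], [-1, 2], [-2, -3]])
```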
A generalization of the class of Horn formulae is that of renameable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation. For example, (x_{1} ∨ ¬x_{2}) ∧ (¬x_{1} ∨ x_{2} ∨ x_{3}) ∧ ¬x_{1} is not a Horn formula, but can be renamed to the Horn formula (x_{1} ∨ ¬x_{2}) ∧ (¬x_{1} ∨ x_{2} ∨ ¬y_{3}) ∧ ¬x_{1} by introducing y_{3} as the negation of x_{3}. In contrast, no renaming of (x_{1} ∨ ¬x_{2} ∨ ¬x_{3}) ∧ (¬x_{1} ∨ x_{2} ∨ x_{3}) ∧ ¬x_{1} leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P, as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula.
[Box: solving an XOR-SAT example by Gaussian elimination]
Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain) OR operators.^{[note 5]} This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by Gaussian elimination;^{[11]} see the box for an example. This recast is based on the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field. Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT; cf. picture. As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable.
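The reduction to linear algebra over GF(2) can be sketched as follows; the function name `xor_sat` and the coefficient-matrix encoding are assumptions of this example, which decides consistency only (extracting a concrete solution would use back-substitution).

```python
def xor_sat(rows, rhs):
    """Decide an XOR-SAT instance as linear equations over GF(2).

    rows[i] is the 0/1 coefficient list of equation i (1 iff the
    variable occurs in that XOR-clause); rhs[i] is the required
    parity. Gaussian elimination mod 2; returns True iff consistent.
    """
    aug = [r[:] + [b] for r, b in zip(rows, rhs)]   # augmented matrix
    n_vars = len(aug[0]) - 1
    pivot_row = 0
    for col in range(n_vars):
        # find a row with a 1 in this column to use as pivot
        for r in range(pivot_row, len(aug)):
            if aug[r][col]:
                aug[pivot_row], aug[r] = aug[r], aug[pivot_row]
                break
        else:
            continue
        for r in range(len(aug)):
            if r != pivot_row and aug[r][col]:
                # XOR is both addition and subtraction mod 2
                aug[r] = [a ^ b for a, b in zip(aug[r], aug[pivot_row])]
        pivot_row += 1
    # inconsistent iff some reduced row reads 0 = 1
    return not any(not any(row[:-1]) and row[-1] for row in aug)

# (a XOR b) = 1, (b XOR c) = 1, (a XOR c) = 0: consistent (a=1,b=0,c=1)
consistent = xor_sat([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 1, 0])
# (a XOR b) = 1 and (a XOR b) = 0: inconsistent
inconsistent = xor_sat([[1, 1], [1, 1]], [1, 0])
```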
Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT.
The restrictions above (CNF, 2CNF, 3CNF, Horn, XORSAT) bound the considered formulae to be conjunctions of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be subformulae in 2CNF.
Schaefer's dichotomy theorem states that, for any restriction to Boolean operators that can be used to form these subformulae, the corresponding satisfiability problem is in P or NP-complete. The memberships in P of 2CNF-, Horn-, and XOR-satisfiability are special cases of this theorem.^{[8]}
An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT), which can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions,^{[12]} etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints.
The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed to bind the Boolean variables. An example of such an expression would be ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z); it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-called tautology problem is obtained, which is co-NP-complete. If both quantifiers are allowed, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time.^{[13]}
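The quantified example above can be checked by a naive recursive evaluator that branches on each quantified variable in turn (exponential time, as expected for PSPACE-hard problems). The function name `qbf_true` and the 'A'/'E' quantifier encoding are assumptions of this sketch.

```python
def qbf_true(quantifiers, matrix, assign=()):
    """Evaluate a prenex QBF by exhaustive recursion.

    `quantifiers` lists 'A' (forall) or 'E' (exists) for the
    variables in order; `matrix` maps a tuple of booleans to the
    truth value of the quantifier-free part.
    """
    if len(assign) == len(quantifiers):
        return matrix(assign)
    branches = (qbf_true(quantifiers, matrix, assign + (v,))
                for v in (False, True))
    if quantifiers[len(assign)] == 'A':
        return all(branches)
    return any(branches)

# forall x forall y exists z: (x OR y OR z) AND (NOT x OR NOT y OR NOT z)
valid = qbf_true(
    ['A', 'A', 'E'],
    lambda a: (a[0] or a[1] or a[2]) and (not a[0] or not a[1] or not a[2]),
)
```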
Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments:
Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, and 0-1 integer programming.
The SAT problem is self-reducible; that is, each algorithm which correctly answers whether an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x_{1}=TRUE}, i.e. Φ with the first variable x_{1} replaced by TRUE, and simplified accordingly. If the answer is "yes", then x_{1}=TRUE; otherwise x_{1}=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the algorithm are required, where n is the number of distinct variables in Φ.
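The self-reduction can be sketched with any yes/no decision procedure plugged in as an oracle; here a brute-force decider stands in, and the names `find_assignment`, `simplify`, and `brute_force_decider` are assumptions of this example.

```python
from itertools import product

def brute_force_decider(clauses, n):
    """Stand-in yes/no SAT oracle (exponential brute force).
    Literals are DIMACS-style ints over variables 1..n."""
    return any(
        all(any((lit > 0) == vals[abs(lit) - 1] for lit in c)
            for c in clauses)
        for vals in product([False, True], repeat=n)
    )

def simplify(clauses, var, value):
    """Substitute var := value: drop satisfied clauses and delete
    falsified literals from the remaining ones."""
    out = []
    for c in clauses:
        if (var if value else -var) in c:
            continue                            # clause satisfied
        out.append([l for l in c if abs(l) != var])
    return out

def find_assignment(clauses, n, decider=brute_force_decider):
    """Self-reduction: n+1 oracle calls recover a satisfying
    assignment, or None if the formula is unsatisfiable."""
    if not decider(clauses, n):
        return None
    assignment = {}
    for var in range(1, n + 1):
        value = decider(simplify(clauses, var, True), n)
        assignment[var] = value
        clauses = simplify(clauses, var, value)
    return assignment

# (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3) AND (NOT x1)
found = find_assignment([[1, -2], [-1, 2, 3], [-1]], 3)
```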
This property of selfreducibility is used in several theorems in complexity theory:
Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. clauses).^{[1]} Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors,^{[12]} automatic test pattern generation, routing of FPGAs,^{[19]} planning, and scheduling problems. A SAT-solving engine is now considered to be an essential component in the EDA toolbox.
A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL").^{[20]}^{[21]} Many modern approaches to practical SAT solving are based on the DPLL algorithm and share the same structure. Often they only improve the efficiency of certain classes of SAT problems such as instances that appear in industrial applications or randomly generated instances.^{[22]} Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms.
Algorithms that are not part of the DPLL family include stochastic local search algorithms. One example is WalkSAT. Stochastic methods try to find a satisfying interpretation but cannot deduce that a SAT instance is unsatisfiable, as opposed to complete algorithms, such as DPLL.^{[22]}
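A WalkSAT-style local search can be sketched in a few lines: start from a random assignment and repeatedly flip a variable of some unsatisfied clause, choosing randomly with probability p and greedily otherwise. This is a simplified sketch (parameter names, the DIMACS literal encoding, and the greedy scoring are assumptions of this example, not the original WalkSAT implementation); note that it can only answer "satisfiable" or "don't know", never "unsatisfiable".

```python
import random

def walksat(clauses, n, max_flips=10_000, p=0.5, seed=0):
    """Incomplete WalkSAT-style local search over variables 1..n.

    Returns a satisfying assignment (dict var -> bool) if one is
    found within max_flips, else None. Never proves unsatisfiability.
    """
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}

    def sat_clause(c):
        return any(assign[abs(l)] == (l > 0) for l in c)

    def num_unsat():
        return sum(not sat_clause(c) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat_clause(c)]
        if not unsat:
            return assign                  # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))  # random walk step
        else:
            # greedy step: flip the variable leaving fewest unsat clauses
            def cost(v):
                assign[v] = not assign[v]
                score = num_unsat()
                assign[v] = not assign[v]
                return score
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None
```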
In contrast, randomized algorithms like the PPSZ algorithm by Paturi, Pudlák, Saks, and Zane set variables in a random order according to some heuristics, for example bounded-width resolution. If the heuristic cannot find the correct setting, the variable is assigned randomly. The PPSZ algorithm has a runtime^{[clarify]} of for 3-SAT. This was the best-known runtime for this problem until a recent improvement by Hansen, Kaplan, Zamir and Zwick that has a runtime of for 3-SAT and currently the best known runtime for k-SAT, for all values of k. In the setting of many satisfying assignments the randomized algorithm by Schöning has a better bound.^{[6]}^{[23]}^{[24]}
Modern SAT solvers (developed in the 2000s) come in two flavors: "conflict-driven" and "look-ahead". Both approaches descend from DPLL.^{[22]} Conflict-driven solvers, such as conflict-driven clause learning (CDCL), augment the basic DPLL search algorithm with efficient conflict analysis, clause learning, non-chronological backtracking (a.k.a. backjumping), as well as "two-watched-literals" unit propagation, adaptive branching, and random restarts. These "extras" to the basic systematic search have been empirically shown to be essential for handling the large SAT instances that arise in electronic design automation (EDA).^{[25]} Well-known implementations include Chaff^{[26]} and GRASP.^{[27]} Look-ahead solvers have especially strengthened reductions (going beyond unit-clause propagation) and the heuristics, and they are generally stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large instances which actually have an easy instance inside).
Modern SAT solvers also have a significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others. Powerful solvers are readily available as free and open source software. In particular, the conflict-driven MiniSAT, which was relatively successful at the 2005 SAT competition, has only about 600 lines of code. A modern parallel SAT solver is ManySAT.^{[28]} It can achieve superlinear speedups on important classes of problems. An example of a look-ahead solver is march_dl, which won a prize at the 2007 SAT competition.
Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly in hardware design and verification applications, satisfiability and other logical properties of a given propositional formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD).
Almost all SAT solvers include timeouts, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, others at finding solutions. All of these behaviors can be seen in the SAT solving contests.^{[29]}
Parallel SAT solvers come in three categories: portfolio, divide-and-conquer, and parallel local search algorithms. With parallel portfolios, multiple different SAT solvers run concurrently, each of them solving a copy of the SAT instance, whereas divide-and-conquer algorithms divide the problem between the processors. Different approaches exist to parallelize local search algorithms.
The International SAT Solver Competition has a parallel track reflecting recent advances in parallel SAT solving. In 2016,^{[30]} 2017^{[31]} and 2018,^{[32]} the benchmarks were run on a shared-memory system with 24 processing cores; therefore solvers intended for distributed memory or many-core processors might have fallen short.
In general there is no SAT solver that performs better than all other solvers on all SAT problems. An algorithm might perform well for problem instances others struggle with, but will do worse with other instances. Furthermore, given a SAT instance, there is no reliable way to predict which algorithm will solve this instance particularly fast. These limitations motivate the parallel portfolio approach. A portfolio is a set of different algorithms or different configurations of the same algorithm. All solvers in a parallel portfolio run on different processors to solve the same problem. If one solver terminates, the portfolio solver reports the problem to be satisfiable or unsatisfiable according to this one solver. All other solvers are terminated. Diversifying portfolios by including a variety of solvers, each performing well on a different set of problems, increases the robustness of the solver.^{[33]}
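The portfolio idea itself is simple to sketch with standard concurrency primitives: launch every solver on its own copy of the instance and take the first answer. This sketch uses Python threads for illustration only (real portfolio solvers use separate processes or machines, and Python threads cannot be forcibly killed, so cancellation here is best-effort); the name `portfolio_solve` and the solver-callable interface are assumptions of this example, and `cancel_futures` requires Python 3.9+.

```python
import concurrent.futures as cf

def portfolio_solve(instance, solvers, timeout=None):
    """Parallel-portfolio sketch: run each solver (a callable
    instance -> True/False) concurrently and report the first
    result; remaining solvers are cancelled best-effort."""
    pool = cf.ThreadPoolExecutor(max_workers=len(solvers))
    try:
        futures = [pool.submit(solver, instance) for solver in solvers]
        done, _ = cf.wait(futures, timeout=timeout,
                          return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result() if done else None
    finally:
        # don't block on unfinished solvers; drop queued work
        pool.shutdown(wait=False, cancel_futures=True)
```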
Many solvers internally use a random number generator. Diversifying their seeds is a simple way to diversify a portfolio. Other diversification strategies involve enabling, disabling or diversifying certain heuristics in the sequential solver.^{[34]}
One drawback of parallel portfolios is the amount of duplicate work. If clause learning is used in the sequential solvers, sharing learned clauses between parallel running solvers can reduce duplicate work and increase performance. Yet, even merely running a portfolio of the best solvers in parallel makes a competitive parallel solver. An example of such a solver is PPfolio.^{[35]}^{[36]} It was designed to find a lower bound for the performance a parallel SAT solver should be able to deliver. Despite the large amount of duplicate work due to lack of optimizations, it performed well on a shared memory machine. HordeSat^{[37]} is a parallel portfolio solver for large clusters of computing nodes. It uses differently configured instances of the same sequential solver at its core. Particularly for hard SAT instances, HordeSat can produce linear speedups and therefore reduce runtime significantly.
In recent years parallel portfolio SAT solvers have dominated the parallel track of the International SAT Solver Competitions. Notable examples of such solvers include Plingeling and painlessmcomsps.^{[38]}
In contrast to parallel portfolios, parallel divide-and-conquer tries to split the search space between the processing elements. Divide-and-conquer algorithms, such as the sequential DPLL, already apply the technique of splitting the search space, hence their extension towards a parallel algorithm is straightforward. However, due to techniques like unit propagation, following a division, the partial problems may differ significantly in complexity. Thus the DPLL algorithm typically does not process each part of the search space in the same amount of time, yielding a challenging load balancing problem.^{[33]}
Due to non-chronological backtracking, parallelization of conflict-driven clause learning is more difficult. One way to overcome this is the Cube-and-Conquer paradigm.^{[39]} It suggests solving in two phases. In the "cube" phase the problem is divided into many thousands, up to millions, of sections. This is done by a look-ahead solver, which finds a set of partial configurations called "cubes". A cube can also be seen as a conjunction of a subset of variables of the original formula. In conjunction with the formula, each of the cubes forms a new formula. These formulas can be solved independently and concurrently by conflict-driven solvers. As the disjunction of these formulas is equivalent to the original formula, the problem is reported to be satisfiable if one of the formulas is satisfiable. The look-ahead solver is favorable for small but hard problems,^{[40]} so it is used to gradually divide the problem into multiple subproblems. These subproblems are easier but still large, which is the ideal form for a conflict-driven solver. Furthermore, look-ahead solvers consider the entire problem, whereas conflict-driven solvers make decisions based on information that is much more local. Three heuristics are involved in the cube phase. The variables in the cubes are chosen by the decision heuristic. The direction heuristic decides which variable assignment (true or false) to explore first. In satisfiable problem instances, choosing a satisfiable branch first is beneficial. The cutoff heuristic decides when to stop expanding a cube and instead forward it to a sequential conflict-driven solver. Preferably the cubes are similarly complex to solve.^{[39]}
Treengeling is an example of a parallel solver that applies the Cube-and-Conquer paradigm. Since its introduction in 2012 it has had multiple successes at the International SAT Solver Competition. Cube-and-Conquer was used to solve the Boolean Pythagorean triples problem.^{[41]}
One strategy towards a parallel local search algorithm for SAT solving is trying multiple variable flips concurrently on different processing units.^{[42]} Another is to apply the aforementioned portfolio approach; however, clause sharing is not possible since local search solvers do not produce clauses. Alternatively, it is possible to share the configurations that are produced locally. These configurations can be used to guide the production of a new initial configuration when a local solver decides to restart its search.^{[43]}
Modern SAT solvers can often handle problems with millions of constraints and hundreds of thousands of variables.
References are ordered by date of publication:
SAT Game: try solving a Boolean satisfiability problem yourself
A SAT problem is often described in the DIMACS CNF format: an input file in which each line represents a single disjunction. For example, a file with the two lines
1 -5 4 0
-1 5 3 4 0
represents the formula "(x_{1} ∨ ¬x_{5} ∨ x_{4}) ∧ (¬x_{1} ∨ x_{5} ∨ x_{3} ∨ x_{4})".
Another common format for this formula is the 7-bit ASCII representation "(x1 | ~x5 | x4) & (~x1 | x5 | x3 | x4)".
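The DIMACS convention described above is easy to parse: literals are nonzero integers, each clause ends with 0, and lines starting with "c" (comments) or "p" (the problem header) carry no clause data. The function name `parse_dimacs` is an assumption of this sketch, which handles only this simple fragment of the format.

```python
def parse_dimacs(text):
    """Parse a fragment of the DIMACS CNF format into a clause list.

    'c' lines are comments and the 'p cnf <vars> <clauses>' header is
    ignored; every clause is a run of nonzero ints terminated by 0
    (clauses may span or share physical lines).
    """
    clauses, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in "cp":
            continue
        for token in line.split():
            lit = int(token)
            if lit == 0:
                clauses.append(current)   # 0 terminates the clause
                current = []
            else:
                current.append(lit)
    return clauses

example = "c sample instance\np cnf 5 2\n1 -5 4 0\n-1 5 3 4 0\n"
parsed = parse_dimacs(example)
```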
This article includes material from a column in the ACM SIGDA e-newsletter by Prof. Karem Sakallah.