Talk:Sole Sufficient Operator

truth-functional completeness

What is here called "functional completeness" is more usually called "truth-functional completeness".

The connectives described on this page are truth-functional, meaning that the truth of a sentence built using these operators is a function of the truths of the component sentences; e.g. the truth of a negation is the negation of the truth of its (sole) sub-formula. There are (natural, e.g. English) connectives that are not truth-functional, e.g. "because", which therefore cannot be modeled by truth-functional connectives in logic. "A because B" might be true if A and B are both true, but it might equally well be false under those conditions; causality depends on more than truth.

A collection of connectives is then called truth-functionally complete if every possible truth function (of every possible arity) can be constructed using those connectives. It really is not clear what "functionally complete" might mean.
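That definition can be checked mechanically for a candidate set; a small Python sketch (illustrative, using {NAND} as the candidate set) closes the two binary projections under NAND and counts the binary truth functions reached:

```python
# Truth tables over inputs (x, y) in the order (0,0), (0,1), (1,0), (1,1).
X = (0, 0, 1, 1)
Y = (0, 1, 0, 1)

def nand(a, b):
    """Pointwise NAND of two truth tables."""
    return tuple(1 - (p & q) for p, q in zip(a, b))

# Close {x, y} under composition with NAND until no new tables appear.
known = {X, Y}
while True:
    new = {nand(a, b) for a in known for b in known} - known
    if not new:
        break
    known |= new

# There are 2**(2**2) = 16 binary truth functions; NAND alone reaches all of them.
print(len(known))  # prints 16
```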

This distinction should be incorporated into the article. Gregbard 05:51, 8 July 2007 (UTC)
Another synonym fashionable in college texts is "expressive adequacy." 05:48, 7 November 2007 (UTC)

Functionally complete sets

I do not understand what is going on here. It is my understanding that there are 9 pairs of expressively adequate connectives, while this section shows 15. Also, universal algebra suggests a way of organizing the expressively adequate pairs, and I've introduced that into the entry. If you understand my intention, go ahead and modify what I've done. 05:48, 7 November 2007 (UTC)


I'm pretty sure this deserves some kind of mention, link, merge, etc. with this article. It was taken out as a "distraction." That's not correct. Pontiff Greg Bard (talk) 22:34, 27 November 2007 (UTC)

universal gates

I think there are 4 missing universal gates. These gates are asymmetric and don't have names afaik, but they can be used to construct any other gate. --Amenghra (talk) 20:17, 20 November 2013 (UTC)

Someone deleted a couple of sentences mentioning the Fredkin gate and Toffoli gate, with the comment "these are not Boolean operators".

I restored those sentences, because those gates are "functionally complete".

I wouldn't mind moving those sentences to some other article -- Is there some other article that is more appropriate for discussing universal gates? -- (talk) 15:01, 25 August 2008 (UTC)

I'm claiming that the Fredkin gate is functionally complete. I fail to see how it is relevant whether or not the Fredkin gate is a Boolean operator.

Allow me to break that claim into parts:

  • (a) Fredkin gates can be wired up such that they act exactly like Boolean AND, OR, and NOT gates.
  • (b) any set of "things" that can be wired up such that they act exactly like Boolean AND, OR, and NOT gates, is functionally complete -- no matter what bizarre non-Boolean stuff these "things" may happen to do internally.
  • (c) Therefore, the Fredkin gate is functionally complete.

If you disagree with conclusion (c), *please* tell me whether the problem is with (a) or (b).

I admit that (a) is not at all obvious from skimming the current Fredkin gate article. I hope that article will be greatly improved. Would 3 little drawings showing the Fredkin gate implementation of AND, OR, and NOT be helpful?
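Claim (a) can be checked exhaustively in a few lines; the wirings below (with hard-wired 0/1 constants on some inputs, and particular output wires chosen for illustration) are one standard construction:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: passes c through, swaps a and b iff c == 1."""
    return (c, b, a) if c else (c, a, b)

# With constants hard-wired onto some inputs, a single Fredkin gate
# reproduces NOT, AND, and OR on its third output wire:
def NOT(x):    return fredkin(x, 0, 1)[2]
def AND(x, y): return fredkin(x, y, 0)[2]
def OR(x, y):  return fredkin(x, 1, y)[2]

for x in (0, 1):
    assert NOT(x) == 1 - x
    for y in (0, 1):
        assert AND(x, y) == (x & y)
        assert OR(x, y) == (x | y)
```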

I am far more familiar with logic gates (AND, OR, NOT, etc.) than with logical connectives (conjunction ∧, disjunction ∨, negation ¬, etc.). I am assuming that the functions performed by those gates are the same as the logical connectives, and so stuff that applies to one, such as (b), also applies to the other. Or is there some subtle difference I am missing here?

-- (talk) 05:44, 26 August 2008 (UTC)

This is an article about functional completeness of sets of Boolean functions, which is a precisely defined concept introduced and discussed in the article. You cannot, out of the blue and without any explanation, insert something pertaining to a different concept dealing with another type of object, just like you cannot sneak properties of oranges into an article about apples. The Fredkin gate is not functionally complete in the sense used in this article, simply because it has more than one output. (Moreover, your explanation above does not at all fit with your continued references to reversible computing, because ignoring extra output wires is exactly the sort of thing which you cannot do in reversible circuits.)
I tried to preserve your added information by providing a proper context and explanation for the reader. However, your stubborn refusal of any compromise, reverting to your version, which breaks the flow as well as the meaningfulness of the text, makes me think that you are just trolling. -- Emil J. (formerly EJ) 09:58, 26 August 2008 (UTC)
Oh, and the Fredkin gate is not functionally complete in the expected way even if you allow ignoring output wires. The gate preserves both constants (i.e., an all-0 input gives an all-0 output, and the same for 1s), hence it can only express constant-preserving functions. It thus cannot express the constant functions 0 and 1, the negation function, and many others. This just further stresses that the notion of functional completeness used in reversible computing is quite different from the one used here, and cannot be mixed freely with it. -- Emil J. (formerly EJ) 10:05, 26 August 2008 (UTC)
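The constant-preserving property is easy to verify exhaustively; a minimal Python sketch:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: passes c through, swaps a and b iff c == 1."""
    return (c, b, a) if c else (c, a, b)

# Without hard-wired constants, the gate maps the all-0 input to all-0 and
# the all-1 input to all-1, so no circuit built from it alone can compute
# NOT (which must map 0 to 1).
assert fredkin(0, 0, 0) == (0, 0, 0)
assert fredkin(1, 1, 1) == (1, 1, 1)
```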
I agree that something that cannot express the negation function is not functionally complete.
I also agree that something that can only express "the reversible Boolean operators", and not the complete set of logical connectives, is not functionally complete.
And I see that the current article on the Fredkin gate implies that it can only express "reversible Boolean operators".
However, I see published books that build AND, OR, and NOT gates out of Fredkin gates.
Have I been tricked into incorrectly believing that the "oranges" { AND, OR, NOT } gates mentioned in those books implement the same functions as the "apples" { conjunction ∧, disjunction ∨, negation ¬ } logical connectives that this article discusses?
I see now that Fredkin gates alone cannot express the constant functions 0 and 1, as you pointed out.
This article currently claims that S is "functionally complete" if "every possible logical connective can be defined in terms of the members of S."
Is there any logical connective that cannot be defined in terms of the members of { Fredkin gate, 0, 1 }? -- (talk) 13:12, 28 August 2008 (UTC)

First, I'm still not sure we understand each other. An apple (Boolean function, or propositional connective) is a function which assigns a 0-1 output to an n-tuple of 0-1 inputs. ∧, ∨, and ¬ are apples. AND, OR, and NOT are also apples, as they are just a different notation for the same thing. On the other hand, a Fredkin gate is an orange, as it has three output bits, not one.
As you can see from the diagrams, the references you give cheat by using hard-wired constants 0 and 1, so the NOT gate is really constructed from 0, 1, and the Fredkin gate, rather than the Fredkin gate alone.
If you allow discarding extra output garbage, then a gate with three outputs is just a cheap trick to put three Boolean functions into one object. In the case of the Fredkin gate, the three functions are the identity function, and two copies of the conditional operator on different permutations of the input variables. Thus with respect to definability, the Fredkin gate is just a complicated way to refer to the conditional operator, and you are really asking whether every Boolean function can be defined in terms of 0, 1, and the conditional operator. The answer is yes: ¬x = (if x then 0 else 1), x ∧ y = (if x then y else 0), and x ∨ y = (if x then 1 else y). -- Emil J. (formerly EJ) 14:24, 28 August 2008 (UTC)
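The three identities for the conditional operator can be verified over all inputs; a minimal Python sketch (with `cond` standing in for the conditional operator):

```python
def cond(x, y, z):
    """The conditional operator: y if x is true, else z."""
    return y if x else z

# NOT, AND, and OR from cond plus the constants 0 and 1:
for x in (0, 1):
    assert cond(x, 0, 1) == 1 - x            # NOT x
    for y in (0, 1):
        assert cond(x, y, 0) == (x & y)      # x AND y
        assert cond(x, 1, y) == (x | y)      # x OR y
```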
Currently this article (as well as the Fredkin gate article) implies that the Fredkin gate can't define "every possible logical connective", merely the smaller subset of "every reversible operator".
However, "an arbitrary logic gate, reversible or not, can be reproduced by a suitable arrangement of Fredkin gates" -- "Cellular Automata" p. 314
-- if we allow "cheating" by using lots of hard-wired "0" and "1" constants and ignoring most outputs. :-)
What can I do to fix this incorrect implication? -- (talk) 00:35, 4 September 2008 (UTC)
You cannot implement an electrical NOT gate without hard-wiring a 1/H (or a 0/L, if you switch the mapping from electrical values to Boolean values). It would be valuable to readers if the fact that the 1 and 0 signals are there implicitly were clarified. -- Preceding unsigned comment added by 2620:0:1CFE:9A:24F2:78C0:604B:C228 (talk) 08:01, 4 February 2014 (UTC)

(Truth-) functional completeness is not computational universality

I've removed from the opening sentence an incorrect reference to (truth-) "functionally complete" as being "also called computation universal" (sic), which had a non-working link to Tinkertoy computers. (Truth) functional completeness is, of course, not a sufficient condition for computational universality in the usual sense of Turing completeness. --r.e.s. (talk) 00:55, 19 September 2008 (UTC)

That article about Tinkertoy computers defines "computation universal" as "the theoretical term for a set of components from which a fully programmable computer can be constructed." Is there some term other than "computation universal" that is more commonly used to describe such sets of components? Something more descriptive than "universal" or "complete" by itself?
I think such sets of components are worth mentioning somewhere in Wikipedia. Alas, the "Turing completeness" article does not mention such sets of components, there is not yet a "computation universality" article, and now R.e.s. has deleted the mention of such components from this "functional completeness" article. Which article is the best article for describing such sets of components? Or should we start a new article on just that subject? What should we name that article?
Would such a set of components be functionally complete, since "every possible logical connective can be defined in terms of [its] members"?
Given a functionally complete set, what else is needed to give "sufficient conditions for computational universality"? It may be obvious to you, but -- WP:OBVIOUS -- I think this article should state them anyway (or else link to some other article that states them). -- (talk) 05:11, 27 September 2008 (UTC)
I see that the diode logic mentions that "Diode logic ... is not a complete logic family." Going into detail in that article about what "complete" means in that context seems out of place -- is there a more appropriate article I could link to? -- (talk) 04:22, 2 October 2008 (UTC)


"The affine connectives, such that each connected variable either always or never affects the truth value these connectives return," (NOT, TAUTOLOGY, CONTRADICTION, IFF, NOT IFF).

How does this meaning of affine relate to the linked-to article, Affine transformation? Dependent Variable (talk) 22:09, 19 July 2010 (UTC)

They are exactly the affine maps in the usual sense of linear algebra when the set of truth values is identified with the 2-element field.--Emil J. 11:23, 20 July 2010 (UTC)
Thanks Emil. Could you explain a bit more? Are you saying that TRUE and FALSE are the elements of the underlying set of the field? How are addition and multiplication defined? Dependent Variable (talk) 00:56, 21 July 2010 (UTC)
Sorry, I assumed you knew something about algebra. There is only one two-element field up to isomorphism, and the tables of its operations are forced by the field axioms (0 is additive identity, 1 is multiplicative identity, 0 is multiplicative zero, and addition is cancellative). In other words, in this case multiplication is conjunction and addition is xor. See field, finite field, linear algebra, and indeed, affine transformation.--Emil J. 10:18, 21 July 2010 (UTC)
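The forced tables can be spelled out; a minimal sketch identifying multiplication with AND (conjunction) and addition with XOR, then checking the axioms Emil lists:

```python
# The two-element field GF(2): multiplication is AND, addition is XOR.
def add(a, b): return a ^ b
def mul(a, b): return a & b

for a in (0, 1):
    assert add(a, 0) == a      # 0 is the additive identity
    assert mul(a, 1) == a      # 1 is the multiplicative identity
    assert mul(a, 0) == 0      # 0 is the multiplicative zero
    assert add(a, a) == 0      # each element is its own additive inverse
```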

Ah, thanks. I see the section of Affine transformation called "Example of an affine transformation" actually deals with this case, GF(2). Are the "connected variables" the vectors composed of sequences of 1s and 0s, or are the "connected variables" the individual terms of these sequences (the individual components of the vectors)? Is always or never affecting the returned truth value a property of all affine connectives, or only a proper subset? Dependent Variable (talk) 13:36, 21 July 2010 (UTC)

The variables are individual components of the vectors (they cannot be anything else, since they are binary). The property holds for all affine connectives, and only for them. However, note that this kind of characterization of affine transformations really only works for functions GF(2)^n -> GF(2), it does not work over other fields (in general, the property is necessary, but by no means sufficient).--Emil J. 14:07, 21 July 2010 (UTC)
Thanks for your patience with my questions, Emil. Is there a more common term for "affine connectives"? (Almost all of the few hits on Google are for this article.) How exactly does the matrix equation in Affine transformation relate to affine connectives such as NOT, IFF, etc.? Does affinity mean that an operation can be expressed as a sequence of expressions connected by XORs? Take, for example, P iff Q. How would this be expressed in terms of an affine transformation? Should I begin by writing P and Q as components of a 2x1 matrix? Dependent Variable (talk) 23:02, 21 July 2010 (UTC)
As for terminology, many people call them "linear" instead of "affine". (This is not restricted to the GF(2) case: it's common in computer science to call general affine transformations "linear", on the grounds that they are degree 1 polynomials, even though this conflicts with the standard terminology in linear algebra.) Also, while this paragraph of the article speaks about "connectives", it's more usual to call them "Boolean functions".
Now, writing affine Boolean functions with matrices is possible but unnecessarily complicated, as the output is 1-dimensional, hence the matrix will be just a vector. Moreover, over GF(2) you do not need to explicitly mention scalar multiplication: the product a^T x of two vectors is just the sum of those entries of x whose corresponding entry in a is 1. The general form of an affine Boolean function is thus f(x_1, ..., x_n) = x_{i_1} + ... + x_{i_k} + c, where 1 <= i_1 < ... < i_k <= n and c is 0 or 1. For example, P <-> Q is P + Q + 1. --Emil J. 10:44, 22 July 2010 (UTC)

I see. Thanks, Emil. So "not P" = P + 1, and tautology and contradiction have c = 1 and c = 0 respectively, with k = 0. Dependent Variable (talk) 16:28, 22 July 2010 (UTC)
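The general affine form can be checked numerically; a small sketch (the helper name `iff` is illustrative), using XOR for the GF(2) "+":

```python
# An affine Boolean function is a XOR of a subset of its variables plus a
# constant c. Checking that P <-> Q fits the form P + Q + 1 over GF(2):
def iff(p, q):
    return p ^ q ^ 1

for p in (0, 1):
    for q in (0, 1):
        assert iff(p, q) == (1 if p == q else 0)
    assert p ^ 1 == 1 - p   # NOT P = P + 1 (c = 1, one variable)
```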

Comparing other kinds of completeness


--Krauss (talk) 09:50, 20 August 2012 (UTC)

See the article Completeness (logic) -- however, you are right -- this article should include a section that at least mentions, compares, and contrasts other forms of completeness. Right now, it seems very Boolean-logic influenced. See also List of logic systems. (talk) 02:06, 31 August 2016 (UTC)

minimal functionally complete operator sets always consist of at least two primitive operators

To call a gate, or a truth table, nand does not make nand something primitive. Nand can only be understood as the logical combination of the primitive not and the primitive and operators. Similarly for nor. -- Preceding unsigned comment added by (talk) 00:24, 15 August 2015 (UTC)

Nand is also known as the Sheffer stroke, which is known to be complete! So, yes, in fact, nand is primitive. (talk) 02:09, 31 August 2016 (UTC)
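That nand needs no "more primitive" parts can be seen from the usual Sheffer-stroke constructions, checked exhaustively here:

```python
def nand(x, y):
    """The Sheffer stroke: NOT (x AND y)."""
    return 1 - (x & y)

# NOT and AND recovered from nand alone, so {nand} is functionally complete
# without presupposing not/and as separate primitives:
for x in (0, 1):
    assert nand(x, x) == 1 - x                          # NOT x
    for y in (0, 1):
        assert nand(nand(x, y), nand(x, y)) == (x & y)  # x AND y
```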

What does this mean?

"changing the truth value of any connected variables from F to T without changing any from T to F never makes these connectives change their return value from T to F" 2A00:23C4:AC83:CE00:59DD:19D1:DCA0:F91 (talk) 09:32, 29 December 2016 (UTC)
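The quoted sentence is the article's description of the monotonic connectives: flipping an input from false to true can never flip the output from true to false. A small sketch of that test (helper name illustrative):

```python
from itertools import product

def monotone(f, n):
    """True if flipping any input of f from 0 to 1 never flips f from 1 to 0."""
    for xs in product((0, 1), repeat=n):
        for i in range(n):
            if xs[i] == 0:
                ys = xs[:i] + (1,) + xs[i + 1:]
                if f(*xs) == 1 and f(*ys) == 0:
                    return False
    return True

assert monotone(lambda x, y: x & y, 2)     # AND is monotone
assert not monotone(lambda x: 1 - x, 1)    # NOT is not
```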

Nonsense text

The Examples section contains the sentence: "Note that, an electronic circuit or a software function is optimized by the reuse, that reduce the number of gates." This isn't grammatical, and I'm not sure what the original author was intending to say. If anyone knows, please fix. Ross Fraser (talk)



