There have been many academic debates about the meaning, relevance and utility of fuzzy concepts, as well as their appropriate use.

Rudolf E. Kálmán stated in 1972 that "there is no such thing as a fuzzy concept... We do talk about fuzzy things but they are not scientific concepts". The suggestion is that, to qualify as a concept, an idea must always be clear and precise, without any fuzziness; a vague notion would be at best a prologue to formulating a concept. Nevertheless, "Ever since the introduction of fuzzy sets by L. A. Zadeh, the fuzzy concept has invaded almost all branches of mathematics". In 2011, three Chinese engineers alleged that "Fuzzy set, its t-norm, s-norm and fuzzy complement theories have already become the academic virus in the world".
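For orientation, the t-norms, s-norms and complements referred to in that quotation are the standard fuzzy generalizations of intersection, union and negation on membership degrees in [0, 1]. A minimal sketch in Python, using Zadeh's original min/max operators (one illustrative choice among many admissible pairs):

```python
# Fuzzy-set connectives on membership degrees in [0, 1],
# using Zadeh's original operators.

def t_norm(a: float, b: float) -> float:
    """Fuzzy intersection (t-norm): here, the minimum operator."""
    return min(a, b)

def s_norm(a: float, b: float) -> float:
    """Fuzzy union (s-norm, or t-conorm): here, the maximum operator."""
    return max(a, b)

def complement(a: float) -> float:
    """Standard fuzzy complement: 1 minus the membership degree."""
    return 1.0 - a

# Degrees to which some object belongs to the fuzzy sets "tall" and "old":
tall, old = 0.8, 0.3
print(t_norm(tall, old))   # 0.3: degree of "tall AND old"
print(s_norm(tall, old))   # 0.8: degree of "tall OR old"
print(complement(tall))    # 0.2 (up to float rounding): "NOT tall"
```

The min/max pair is only one choice; other t-norm/s-norm pairs (for instance, the product t-norm) satisfy the same axioms.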
"Fuzzy" label Lotfi A. Zadeh himself confessed that: However, the impact of the invention of fuzzy reasoning went far beyond names and labels. When Zadeh gave his acceptance speech in Japan for the 1989 Honda Foundation prize, which he received for inventing fuzzy theory, he stated that "The concept of a fuzzy set has had an upsetting effect on the established order."
===Frege and Wittgenstein===
According to The Foundations of Arithmetic by the logician Gottlob Frege, a concept must have a sharp boundary: an expression whose application is not sharply determined for every object is, in Frege's view, wrongly called a concept at all. In his notes on language games, Ludwig Wittgenstein replied to Frege's argument by observing that a concept with blurred edges can still be perfectly usable: an indistinct photograph is still a picture of a person, and an indistinct picture is often exactly what is needed.
===The categorical status of concepts===
There is no general agreement among philosophers and scientists about how the notion of a "concept" (and in particular, a scientific concept) should be defined. A concept could be defined as a mental representation, as a cognitive capacity, as an abstract object, as a cluster of linked phenomena, etc. Edward E. Smith & Douglas L. Medin stated that "there will likely be no crucial experiments or analyses that will establish one view of concepts as correct and rule out all others irrevocably." Of course, scientists also quite often use imprecise analogies in their models to help understand an issue. A concept can be clear enough, but not (or not sufficiently) precise.

Terminology scientists at the German National Standards Institute (Deutsches Institut für Normung) provided the first official standard definition of what a concept is (in the terminology standard DIN 2330 of 1957, revised in 1974 and last revised in 2022). According to DIN 2330, a concept is "a unit of thought formed by abstraction from a set of objects by identifying their common characteristics". According to the ISO 1087 terminology standard of the International Organization for Standardization, a concept is defined as "a unit of knowledge created by a unique combination of characteristics". A concept is regarded as language-independent: it exists independently of how exactly it is symbolized or referred to in natural language.
Individual concepts refer to a single object or instance, while general concepts refer to a class of objects with shared characteristics. The ISO 704 standard adds that a concept, as a unit of thought, comprises two parts: its extent and its intent. The extent comprises all objects belonging to the concept, and the intent comprises all attributes shared by those objects. Different standard definitions and terminologies for concepts exist for various systems in cyberspace. The official terminological standards are useful for many practical purposes.
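A minimal sketch of this two-part model of a concept, with illustrative attribute and object names that are not drawn from the ISO standards:

```python
# Illustrative model of a concept as an intent (shared attributes)
# plus an extent (the objects that bear all of those attributes).

objects = {
    "sparrow": {"has_feathers", "lays_eggs", "can_fly"},
    "penguin": {"has_feathers", "lays_eggs"},
    "bat":     {"can_fly", "bears_live_young"},
}

concept_bird_intent = {"has_feathers", "lays_eggs"}

# The extent is derived from the intent: every object whose
# attributes include all of the concept's characteristics.
concept_bird_extent = {
    name for name, attrs in objects.items()
    if concept_bird_intent <= attrs
}

print(concept_bird_extent)  # {'sparrow', 'penguin'} (order may vary)
```

Here the extent is computed mechanically from the intent; the Gestalt objection discussed below is precisely that not every concept can be captured this way.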
For more complex concepts, however, the standards may not be so helpful. The reason is that complex concepts do not necessarily denote only a collection of objects which have something in common. A complex concept may, for example, express a Gestalt, i.e. it may express a totality which is more, does more and means more than the sum of its parts (as recognized in Aristotle's Metaphysics). It may be that the parts cannot exist other than within the totality. The totality could also be a "totality of totalities". In such cases, the definition of the complex concept is not (or not fully) reducible to what its parts have in common. Modelling such a concept requires more than identifying and enumerating the parts that are included in (and excluded from) the concept. It also requires a specification of what all the parts together "add up to", or what they constitute collectively. In some respects at least, the totality differs qualitatively from any of its parts. The Gestalt could be a fuzzy object, figure or shape.
===Potential corruption===
Reasoning with fuzzy concepts is often viewed as a kind of "logical corruption" or scientific perversion because, it is claimed, fuzzy reasoning rarely reaches a definite "yes" or a definite "no". The worry is that a clear, precise and logically rigorous conceptualization is no longer treated as a necessary prerequisite for carrying out a procedure, a project, or an inquiry, since "somewhat vague ideas" can always be accommodated, formalized and programmed with the aid of fuzzy expressions. The purist idea is that either a rule applies, or it does not apply; when a rule is said to apply only "to some extent", then in truth the rule does not apply. Thus, a compromise with vagueness or indefiniteness is, on this view, effectively a compromise with error — an error of conceptualization, an error in the inferential system, or an error in physically carrying out a task.

The computer scientist William Kahan warned in 1975 that "the danger of fuzzy theory is that it will encourage the sort of imprecise thinking that has brought us so much trouble", a criticism he repeated in later statements. According to Kahan, statements of a degree of probability are usually verifiable: there are standard statistical tests one can do. By contrast, there is no conclusive procedure which can decide the validity of assigning particular fuzzy truth values to a data set in the first instance. It is simply assumed that a model or program will work "if" particular fuzzy values are accepted and used, perhaps based on some statistical comparisons or try-outs.
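Kahan's verifiability point can be illustrated with a sketch (the data, thresholds and membership value are hypothetical modelling choices, not drawn from Kahan):

```python
# A probability claim can be tested against observed frequencies;
# a fuzzy membership assignment has no analogous empirical test.
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(10_000)]
observed = sum(flips) / len(flips)

# Frequency check of the claim "P(heads) = 0.5":
print(abs(observed - 0.5) < 0.02)  # True: the claim survives the test

# By contrast, no data can confirm or refute a stipulation such as
# "the membership of 22 degrees C in the fuzzy set 'warm' is 0.6":
warm_membership_22c = 0.6  # a modelling choice, not a measurement
```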
===Bad design===
In programming, a problem can usually be solved in several different ways, not just one way; an important issue, though, is which solution works best in the short term and which in the long term. Kahan implies that fuzzy solutions may create more problems in the long term than they solve in the short term. For example, if one starts off designing a procedure not with well thought-out, precise concepts, but rather with fuzzy or approximate expressions which conveniently patch up (or compensate for) badly formulated ideas, the ultimate result could be a complicated, malformed mess that does not achieve the intended goal. Had the reasoning and conceptualization been much sharper at the start, the design of the procedure might have been much simpler, more efficient and more effective — and fuzzy expressions or approximations would not be necessary, or would be required much less. Thus, by allowing the use of fuzzy or approximate expressions, one might actually foreclose more rigorous thinking about design, and one might build something that ultimately does not meet expectations.

If (say) an entity X turns out to belong 65% to category Y and 35% to category Z, how should X be allocated? One could plausibly decide to allocate X to Y, making a rule that if an entity belongs 65% or more to Y, it is to be treated as an instance of category Y, and never as an instance of category Z (a sketch of such a threshold rule appears at the end of this section). One could, however, alternatively decide to change the definitions of the categorization system, to ensure that all entities such as X fall 100% into one category only. This kind of argument claims that boundary problems can be resolved (or vastly reduced) simply by using better categorization or conceptualization methods. If we treat X "as if" it belongs 100% to Y, while in truth it belongs only 65% to Y, then arguably we are misrepresenting things. If we keep doing that with many related variables, we can greatly distort the true situation, and make it look like something that it isn't.

In a "fuzzy permissive" environment, it might become far too easy to formalize and use a concept which is itself badly defined, and which could have been defined much better. In that environment, there is always a quantitative way out for concepts that do not quite fit, or which do not quite do the job for which they are intended. The cumulative adverse effect of the discrepancies might, in the end, be much larger than ever anticipated.
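A minimal sketch of the threshold rule described above (the function and values are illustrative, not a prescribed method):

```python
# Hypothetical illustration of the 65% threshold rule discussed above.

def allocate(membership_y: float, threshold: float = 0.65) -> str:
    """Crisp allocation: treat an entity as a full instance of Y
    if its membership in Y meets the threshold, otherwise as Z."""
    return "Y" if membership_y >= threshold else "Z"

x_membership = {"Y": 0.65, "Z": 0.35}  # X belongs 65% to Y, 35% to Z
print(allocate(x_membership["Y"]))     # -> "Y"

# The crisp label "Y" discards the 35% of X that belonged to Z;
# repeated over many related variables, such roundings can compound
# into a distorted overall picture.
```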
===Counter-argument===
A typical reply to Kahan's objections is that fuzzy reasoning never "rules out" ordinary binary logic, but instead presupposes ordinary true-or-false logic. Lotfi Zadeh stated that "fuzzy logic is not fuzzy. In large measure, fuzzy logic is precise." It is a precise logic of imprecision. Fuzzy logic is not a replacement of, or substitute for, ordinary logic, but an enhancement of it, with many practical uses. Fuzzy thinking does oblige action, but primarily in response to a change in quantitative gradation, not in response to a contradiction. One could say, for example, that ultimately one is either "alive" or "dead", which is perfectly true. In the meantime, though, one is "living", which is also a significant truth — yet "living" is a fuzzy concept.
It is true that fuzzy logic by itself usually cannot eliminate inadequate conceptualization or bad design. Yet it can at least make explicit what exactly the variations are in the applicability of a concept which has unsharp boundaries. If one always had perfectly crisp concepts available, perhaps no fuzzy expressions would be necessary. In reality, though, one often does not have all the crisp concepts to start off with. One might not have them for a long time yet, or ever — or several successive "fuzzy" approximations might be needed to get there. A "fuzzy permissive" environment may be appropriate and useful precisely because it permits things to be actioned that would never have been achieved if there had been crystal clarity about all the consequences from the start, or if people had insisted on absolute precision prior to doing anything. Scientists often try things out on the basis of "hunches", and processes like serendipity can play a role. Learning something new, or trying to create something new, is rarely a completely formal-logical or linear process. There are not only "knowns" and "unknowns" involved, but also "partly known" phenomena, i.e. things which are known or unknown "to some degree". Even if, ideally, we would prefer to eliminate fuzzy ideas, we might need them initially in order to get there further down the track.

Any method of reasoning is a tool. If its application has bad results, it is not the tool itself that is to blame, but its inappropriate use. It would be better to educate people in the best use of the tool, if necessary with appropriate authorization, than to ban the tool pre-emptively on the ground that it "could" or "might" be abused. Exceptions to this rule would include things like computer viruses and illegal weapons that can only cause great harm if they are used. The US Food and Drug Administration and the European Food Safety Authority, for example, ban the use of certain foodstuffs, drugs and medicines completely on the basis of scientific evidence, or restrict their use. There is no evidence, though, that fuzzy concepts as a species are intrinsically harmful or evil, even if some bad concepts can cause harm — if used in inappropriate contexts. There are no laws against (say) the possession of a hammer, even though a hammer could be used to hurt or kill people. Instead, the use of a hammer to cause harm can be prosecuted in civilized society, according to the criminal code. If the possession and use of hammers were banned altogether, then carpenters and mechanics would not be able to do their jobs, and many industries would break down — unless perhaps a new piece of safe machinery could be used as an alternative to hammers.

The peaceful use of drones (which may involve fuzzy logic technologies) is generally accepted, except where drones interfere with traffic, contravene privacy or residents' rights, or violate police orders or military decrees. A drone strike that intentionally targets civilians in a war, or that is indiscriminate or disproportionate, is unlawful under International Humanitarian Law. No government has publicly admitted to using a fuzzy-logic engine in a war zone which autonomously chooses drone or missile targets, or which fires weapons based on a fuzzy rule set without human approval. In practice, though, there are risks, because such technology can be used, and combatants may not stick to legal norms. What really happens on the battlefield, or in a command centre, may be difficult to audit for legal compliance. There may also be unpredictable interactions between different rules applicable on the battlefield, and certain uses of weapons are difficult to verify or validate for safety certification. So the definition of what actually happened may itself be fuzzy. An example is a new loitering munition, the V2U drone developed by the Russian armed forces.
===Reducibility===
Susan Haack once claimed that a many-valued logic requires neither intermediate terms between true and false, nor a rejection of bivalence. She implied that the intermediate terms (i.e. the gradations of truth) can always be restated as conditional if-then statements, and, by implication, that fuzzy logic is fully reducible to binary true-or-false logic. This interpretation is disputed (it assumes that the knowledge already exists to fit the intermediate terms into a logical sequence), but even if it were correct, assigning a number to the applicability of a statement is often enormously more efficient than a long string of if-then statements that would have the same intended meaning. That point is obviously of great importance to computer programmers, educators and administrators seeking to code a process, activity, message or operation as simply as possible, according to logically consistent rules. Prof. Haack is, of course, quite correct when she argues that fuzzy logic does not do away with binary logic.
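The efficiency point can be illustrated by contrast: one membership value carries the information that a binary restatement must spell out branch by branch. A sketch (the temperature rules and scale are illustrative assumptions):

```python
# One graded value versus the if-then cascade that approximates it.

def warmth_if_then(temp_c: float) -> str:
    # A binary-logic restatement needs one branch per gradation,
    # and the cascade grows with every distinction we wish to draw.
    if temp_c >= 30:
        return "hot"
    if temp_c >= 25:
        return "quite warm"
    if temp_c >= 20:
        return "warm"
    if temp_c >= 15:
        return "mild"
    if temp_c >= 10:
        return "cool"
    return "cold"

def warmth_fuzzy(temp_c: float) -> float:
    # One number encodes all the gradations at once: the degree of
    # membership in "warm" on a 0..1 scale.
    return max(0.0, min(1.0, (temp_c - 10) / 20))

print(warmth_if_then(22), warmth_fuzzy(22))  # warm 0.6
```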
===Quantification===
It may be wonderful to have an unlimited number of distinctions available to define what one means, but not all scholars would agree that any concept is equal to, or reducible to, a mathematical set. Some phenomena are difficult or impossible to quantify and count, in particular if they lack discrete boundaries (for example, clouds). George Lakoff emphasized that fuzzy-set theory is not the only, nor necessarily the most appropriate, way to start modelling concepts.
===Formalization===
Qualities may not be fully reducible to quantities – if there are no qualities, it may become impossible to say what the numbers are numbers of, or what they refer to, except that they refer to other numbers or numerical expressions such as algebraic equations. A measure requires a counting unit defined by a category, but the definition of that category is essentially qualitative; a language used to communicate data is difficult to operate without any qualitative distinctions and categories. We may, for example, transmit a text in binary code, but the binary code does not tell us directly what the text intends; it has to be translated, decoded or converted first, before it becomes comprehensible. In creating a formalization or formal specification of a concept, for example for the purpose of measurement, administrative procedure or programming, part of the meaning of the concept may be changed or lost. For example, if we deliberately program an event according to a concept, it might kill off the spontaneity, spirit, authenticity and motivational pattern which is ordinarily associated with that type of event.
Quantification is not an unproblematic process. To quantify a phenomenon, we may have to introduce special assumptions and definitions which disregard part of the totality of the phenomenon.

• The economist John Maynard Keynes concluded that formalization "runs the risk of leaving behind the subject matter we are interested in" and "also runs the risk of increasing rather than decreasing the muddle".
• Friedrich Hayek stated that "it is certainly not scientific to insist on measurement where you don't know what your measurements mean. There are cases where measurements are not relevant."
• The Hayekian big data guru Viktor Mayer-Schönberger states that "A system based on money and price solved a problem of too much information and not enough processing power, but in the process of distilling information down to price, many details get lost."
• Michael Polanyi stated that "the process of formalizing all knowledge to the exclusion of any tacit knowing is self-defeating", since to mathematize a concept we need to be able to identify it in the first instance without mathematization.
===Measurement===
Programmers, statisticians and logicians are concerned in their work with the main operational or technical significance of a concept, which is specifiable in objective, quantifiable terms. They are not primarily concerned with all kinds of imaginative frameworks associated with the concept, or with those aspects of the concept which seem to have no particular functional purpose – however entertaining they might be. However, some of the qualitative characteristics of the concept may not be quantifiable or measurable at all, at least not directly. The temptation exists to ignore them, or to try to infer them from data results.

If, for example, we want to count the number of trees in a forest area with any precision, we have to define what counts as one tree, and perhaps distinguish trees from saplings, split trees, dead trees and fallen trees. Soon enough it becomes apparent that the quantification of trees involves a degree of abstraction: we decide to disregard some timber, dead or alive, from the population of trees, in order to count those trees that conform to our chosen concept of a tree. We operate, in fact, with an abstract concept of what a tree is, which diverges to some extent from the true diversity of trees there are. Even so, there may be some trees of which it is not very clear whether they should be counted as a tree or not. It may also be difficult to define the exact boundary where the forest begins and ends, and the forest boundary might change somewhat in the course of time. A certain amount of "fuzziness" in the definition of a tree and of the forest may therefore remain. The implication is that the seemingly "exact" number offered for the total quantity of trees in the forest may be much less exact than one might think: it is probably more an estimate or indication of magnitude than an exact description. Yet — and this is the point — the imprecise measure can be very useful and sufficient for all intended purposes.
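How strongly the "exact" count depends on the chosen definition can be sketched as follows (the survey data and height cutoffs are hypothetical):

```python
# Hypothetical survey data: stem heights in metres for the woody
# plants found in the survey area.
stems_m = [0.4, 1.1, 2.5, 3.0, 0.9, 7.2, 4.8, 1.9, 6.1]

def count_trees(heights, min_height_m):
    """Crisp count: a plant 'is a tree' iff it meets the height cutoff."""
    return sum(1 for h in heights if h >= min_height_m)

# The "exact" total depends entirely on where the line is drawn
# between saplings and trees:
print(count_trees(stems_m, 2.0))  # -> 5
print(count_trees(stems_m, 3.0))  # -> 4
```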
It is tempting to think that if something can be measured, it must exist, and that if we cannot measure it, it does not exist. Neither might be true. Researchers try to measure such things as intelligence or gross domestic product without much scientific agreement about what these things actually are, how they exist, and what the correct measures might be.

When one wants to count and quantify distinct objects using numbers, one needs to be able to distinguish between all of the separate objects as countable units. If this is difficult or impossible, then, although it may not invalidate a quantitative procedure as such, quantification is not really possible in practice. At best, we may be able to assume or infer indirectly a certain distribution of quantities that must be there. In this sense, scientists often use proxy variables as substitute measures for variables which are known (or thought) to be there, but which cannot themselves be observed or measured directly.
===Vague or fuzzy===
The exact relationship between vagueness and fuzziness is disputed.
====Philosophical interpretation====
Philosophers often regard fuzziness as a particular kind of vagueness, and consider that "no specific assignment of semantic values to vague predicates, not even a fuzzy one, can fully satisfy our conception of what the extensions of vague predicates are like". Surveying recent literature on how to characterize vagueness, Matti Eklund states that appeal to a lack of sharp boundaries, to borderline cases, and to "sorites-susceptible" predicates are the three informal characterizations of vagueness most common in the literature.
====Zadeh's argument====
However, Lotfi A. Zadeh claimed that "vagueness connotes insufficient specificity, whereas fuzziness connotes unsharpness of class boundaries". Thus, he argued, a sentence like "I will be back in a few minutes" is fuzzy but not vague, whereas a sentence such as "I will be back sometime" is fuzzy and vague. His suggestion was that fuzziness and vagueness are logically quite different qualities, rather than fuzziness being a type or subcategory of vagueness. Zadeh claimed that "inappropriate use of the term 'vague' is still a common practice in the literature of philosophy".
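Zadeh's distinction can be given concrete form: "a few minutes" can be modelled as a fuzzy set over clock time, with unsharp class boundaries but reasonable specificity. A sketch (the triangular membership shape and its parameters are assumptions for illustration):

```python
# Hypothetical membership function for "a few minutes": fully
# applicable around 3 minutes, fading to 0 at 0 and 10 minutes.

def a_few_minutes(t: float) -> float:
    if t <= 0 or t >= 10:
        return 0.0
    return t / 3 if t < 3 else (10 - t) / 7

for t in (1, 3, 6, 9):
    print(t, round(a_few_minutes(t), 2))  # 0.33, 1.0, 0.57, 0.14

# The class boundary is unsharp (fuzzy), yet the statement remains
# fairly specific; "I will be back sometime" is vague as well.
```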
===Ethics and law===
In the scholarly inquiry about ethics and meta-ethics, vague or fuzzy concepts and borderline cases are standard topics of controversy. Central to ethics are theories of "value" (what is "good" or "bad" for people, and why), and the idea of "rule following" as a condition for moral integrity, consistency and non-arbitrary behaviour. Yet if human valuations or moral rules are only vague or fuzzy, then they may not be able to orient or guide behaviour: it may become impossible to operationalize rules, and evaluations may not permit definite moral judgements. Hence, clarifying fuzzy moral notions is usually considered to be critical for the ethical endeavour as a whole.
====Excessive precision in rule-making====
Nevertheless, Scott Soames has made the case that vagueness or fuzziness can be valuable to rule-makers, because "their use of it is valuable to the people to whom rules are addressed". It may be more practical and effective to allow for some leeway (and personal responsibility) in the interpretation of how a rule should be applied, bearing in mind the overall purpose which the rule intends to achieve. If a rule or procedure is stipulated too exactly, it can sometimes have a result which is contrary to the aim which it was intended to help achieve. For example, "The Children and Young Persons Act could have specified a precise age below which a child may not be left unsupervised. But doing so would have incurred quite substantial forms of arbitrariness (for various reasons, and particularly because of the different capacities of children of the same age)".
====Conflicting rules====
A related sort of problem is that, if the application of a legal concept is pursued too exactly and rigorously, it may have consequences that cause a serious conflict with another legal concept. This is not necessarily a matter of bad law-making. When a law is made, it may not be possible to anticipate all the cases and events to which it will later apply (even if 95% of possible cases are predictable). The longer a law is in force, the more likely it is that people will run into problems with it that were not foreseen when the law was made. So the further implications of one rule may conflict with another rule. "Common sense" might not be able to resolve things, and in that scenario too much precision can get in the way of justice. Very likely a special court ruling will have to set a norm. The general problem for jurists is whether "the arbitrariness resulting from precision is worse than the arbitrariness resulting from the application of a vague standard". David Lanius has examined nine arguments for the "value of vagueness" in different contexts.
===Mathematical ontology===
The definitional disputes about fuzziness remain unresolved so far, mainly because, as anthropologists and psychologists have documented, different languages (or symbol systems) created by people to signal meanings suggest different ontologies. Put simply: it is not merely that describing "what is there" involves symbolic representations of some kind. How distinctions are drawn influences perceptions of "what is there", and, vice versa, perceptions of "what is there" influence how distinctions are drawn. This is an important reason why, as Alfred Korzybski noted, people frequently confuse the symbolic representation of reality, conveyed by languages and signs, with reality itself. For example, watching the TV news, the human brain spontaneously assumes that the TV images being shown are the same as what the viewers would see themselves if they had been physically present at the same scene at the same moment, even though the viewers do not have access to everything that exists or happens outside the image frame shown. In this way, the TV image shapes the meaning of what there is, and it does so for most viewers at the same time. A common saying is "seeing is believing", but "believing is seeing" could also be true to a certain extent.

Fuzziness implies that there exists a potentially infinite number of truth values between complete truth and complete falsehood (the endpoints of a scale). If that is the case, a foundational issue arises: what can justify or prove the existence of the categorical absolutes which are assumed by logical or quantitative inference? If there is an infinite number of shades of grey, how do we know what is totally black and what is totally white, and how could we identify it? How do we reach the endpoints of the scale?
====Tegmark's mathematical universe====
To illustrate the ontological issues, the cosmologist Max Tegmark argues boldly that the universe consists of math: "If you accept the idea that both space itself, and all the stuff in space, have no properties at all except mathematical properties," then the idea that everything is mathematical "starts to sound a little bit less insane." Tegmark moves from the epistemic claim that mathematics is the only known symbol system which can in principle express absolutely everything, to the methodological claim that everything is reducible to mathematical relationships, and then to the ontological claim that ultimately everything that exists is mathematical (the mathematical universe hypothesis). The argument is then reversed, so that because everything is mathematical in reality, mathematics is necessarily the ultimate universal symbol system.

The main criticisms of Tegmark's approach are that (1) the steps in this argument do not necessarily follow; (2) no conclusive proof or test is possible for the claim that a total reduction of everything to mathematics is feasible, among other things because qualitative categories remain indispensable to understand and navigate what quantities mean; and (3) it may be that a complete reduction to mathematics cannot be accomplished without at least partly altering, negating or deleting the non-mathematical significance of phenomena, experienced perhaps as qualia. An additional complication is that mathematical theory is not something fixed and final, served up like a finished pancake, any more than language is a finished, unchangeable system. Mathematical theory and language are ways of understanding the world which keep developing and changing in response to new discoveries (without certainty about what the future could bring).
====Zalta's metaphysics====
In his meta-mathematical metaphysics, Edward N. Zalta has claimed that for every set of properties of a concrete object, there always exists exactly one abstract object that encodes exactly that set of properties and no others — a foundational assumption or axiom for his ontology of abstract objects. By implication, for every fuzzy object there would always exist at least one defuzzified concept which encodes it exactly. This is a modern interpretation of Plato's metaphysics of knowledge, which expresses confidence in the ability of science to conceptualize the world exactly. However, such a theory — like any metaphysical theory — is impossible to test definitively. According to the Dutch computational linguist Kees van Deemter, "The fact that vagueness abounds in the presentation of mathematical results suggests that vagueness plays an important role in our thinking, even when the concepts about which we think are completely crisp."
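In fuzzy-logic engineering, "defuzzification" standardly means collapsing a fuzzy set into a single crisp value, for instance by the centroid method; a minimal sketch of that operation (the membership function is a made-up example):

```python
# Centroid defuzzification: reduce a fuzzy membership function over
# a numeric domain to one crisp representative value.

def centroid(domain, mu):
    """Average of the domain points, weighted by membership degree."""
    total = sum(mu(x) for x in domain)
    return sum(x * mu(x) for x in domain) / total

# Illustrative fuzzy set "roughly 5" on the integers 0..10:
mu = lambda x: max(0.0, 1.0 - abs(x - 5) / 3)
print(centroid(range(11), mu))  # -> 5.0, the crisp stand-in
```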
====Platonism versus cognitive realism====
The Platonic-style interpretation of concepts was critiqued by Hartry H. Field. Mark Balaguer argues that we do not really know whether mind-independent abstract objects exist or not; so far, we cannot prove whether Platonic realism is definitely true or false. Defending a cognitive realism, Scott Soames argues that this conundrum has persisted unsolved because the ultimate constitution of the meaning of concepts and propositions was misconceived. Traditionally, it was thought that concepts can be truly representational because they are ultimately related to intrinsically representational Platonic complexes of universals and particulars (see theory of forms). However, once concepts and propositions are regarded as cognitive-event types, it is possible to claim that they are able to be representational because they are constitutively related to intrinsically representational cognitive acts in the real world. As another philosopher put it, the idea here is that we can know the world and represent it realistically because we are ourselves part of the world and within the world. Along these lines, it could be argued that reality, and the human cognition of reality, will inevitably contain some fuzzy characteristics, which can perhaps be represented only by concepts which are themselves fuzzy to some extent. Hongxing Li et al. have commented along similar lines.
===Paradoxes===
Even when using ordinary set theory and binary logic to reason something out, logicians have discovered that it is possible to generate statements which are, logically speaking, not completely true, or which imply a paradox, even though in other respects they conform to logical rules (see Russell's paradox). If a margin of indeterminacy therefore persists, binary logic cannot totally remove fuzziness.
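For reference, the paradox can be stated in one line: the "set of all sets that are not members of themselves" can neither contain itself nor fail to contain itself.

```latex
% Russell's paradox in naive set theory:
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
(R \in R) \iff (R \notin R)
```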
David Hilbert concluded that the existence of logical paradoxes tells us "that we must develop a meta-mathematical analysis of the notions of proof and of the axiomatic method; their importance is methodological as well as epistemological".

==Social science and the media==