in logic, the property of a sentence of some theory or of a formula of a calculus whereby neither the sentence (formula) itself nor its negation can be derived from a given system of sentences (for example, a system of axioms) or, correspondingly, from a conjunction of given formulas.
A sentence can be shown to be independent of a given system of axioms by proving that both of the systems obtained by adding to it, respectively, the sentence itself and its negation are consistent. (The classical example is Euclid’s parallel postulate, whose independence of the remaining axioms of geometry follows from the consistency of both Euclidean and non-Euclidean geometry.) Independence is also related to the property of deductive completeness of axiomatic theories: if a consistent system of axioms is deductively complete, then adding to it as an axiom any sentence of the given theory that is independent of the system produces a contradiction. In speaking of the independence of intuitively formulated sentences, “derivability” is understood intuitively, “in accordance with the laws of logic.” On the other hand, in considering formal calculi, strictly defined rules of inference (the question of whose independence can also be raised) are always fixed.
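In modern notation (this restatement is ours, not the article’s, and presupposes classical logic), the method just described rests on the following equivalence:

```latex
\[
  \sigma \text{ is independent of } T
  \quad\Longleftrightarrow\quad
  \operatorname{Con}\bigl(T \cup \{\sigma\}\bigr)
  \ \text{and}\
  \operatorname{Con}\bigl(T \cup \{\neg\sigma\}\bigr)
\]
```

since, in classical logic, T fails to derive σ exactly when T together with ¬σ is consistent, and symmetrically for ¬σ.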
It is possible to speak of “expressive” independence in a way analogous to the “deductive” independence described above. In this case a concept (term) is said to be independent of a given system of concepts (terms) if it cannot be defined solely by means of these concepts (terms); again, as above, it is assumed that a set of rules of definition has been fixed with respect to which the question of independence is raised. The term “independence” (in both senses) is, finally, also applied to sets of sentences (formulas) or concepts (terms). A set is said to be independent (also nonredundant, or minimal) if every one of its members is independent of the remaining members in the sense defined above. A number of highly important results concerning independence have been obtained in the axiomatic theory of sets (for example, the independence of the continuum hypothesis from the generally accepted axioms of set theory) and in mathematical logic.
IU. A. GASTEV
one of the most important concepts in probability theory. We give as an example the definition of the independence of two random events. Let A and B be two random events, and P(A) and P(B) their probabilities. The conditional probability P(B|A) of the event B under the condition that A occurs is defined by the equality

P(B|A) = P(A&B)/P(A)

where P(A&B) is the probability that A and B occur simultaneously. The event B is said to be independent of A if
(*) P(B|A) = P(B)
Equation (*) can be written in a form symmetric in A and B:
P(A&B) = P(A)P(B)
from which it is evident that if B is independent of A, then A is independent of B (to pass from (*) to this form, multiply both sides of (*) by P(A) and apply the defining equality for P(B|A)). Thus, we may simply speak of the independence of two events.
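As an illustration (the example and all names below are ours, not the article’s), the product form of independence can be checked exactly on a finite sample space, here a single roll of a fair die with A = “the roll is even” and B = “the roll is at most 4”:

```python
from fractions import Fraction

# Sample space of one fair die roll; all outcomes equally likely.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event (a subset of omega) under the uniform measure."""
    return Fraction(len(event), len(omega))

A = {2, 4, 6}        # the roll is even
B = {1, 2, 3, 4}     # the roll is at most 4

# P(A&B) = P(A)P(B) holds: 1/3 == 1/2 * 2/3, so A and B are independent.
assert prob(A & B) == prob(A) * prob(B)
print(prob(A), prob(B), prob(A & B))  # 1/2 2/3 1/3
```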
The specific meaning of this definition of independence can be clarified in the following manner. It is known that the probability of an event is approximated by the frequency of its occurrence in a long series of trials. Therefore, if a large number N of trials is carried out, then the frequency with which the event B appears among all N trials and the frequency with which it appears among only those trials in which the event A occurs will be approximately equal. Thus, the independence of events indicates either that there is no relation between the occurrence of these events or that the relation is not essential. For example, the event that a randomly selected person has a last name beginning with the letter “A” and the event that this person will win the next drawing of a lottery are independent.
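This frequency interpretation can be illustrated by simulation. In the following sketch (our illustration; the two events are the same arbitrary die events as above), the relative frequency of B among all trials and among only those trials where A occurred come out approximately equal:

```python
import random

random.seed(0)
N = 100_000

count_A = 0       # trials in which A occurs
count_B = 0       # trials in which B occurs
count_AB = 0      # trials in which both occur

for _ in range(N):
    roll = random.randint(1, 6)
    a = roll % 2 == 0      # A: the roll is even
    b = roll <= 4          # B: the roll is at most 4
    count_A += a
    count_B += b
    count_AB += a and b

# Frequency of B over all trials vs. over only those trials where A occurred:
print(count_B / N)          # close to P(B) = 2/3
print(count_AB / count_A)   # also close to 2/3, since A and B are independent
```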
Pairwise and mutual independence are distinguished in defining the independence of several (more than two) events. The events A1, A2, . . ., An are said to be pairwise independent if any two of them are independent in the sense of the definition given above; they are said to be mutually independent if the probability of any one of them is unaffected by the occurrence of an arbitrary combination of the others. Mutual independence implies pairwise independence, but the converse does not hold, as the following sketch illustrates.
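A standard counterexample (ours, not the article’s) uses two fair coin tosses with A = “first toss is heads”, B = “second toss is heads”, and C = “the two tosses agree”. Each pair of these events is independent, yet the three together are not, since any two of them determine the third:

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair coin tosses, all four outcomes equally likely.
omega = set(product("HT", repeat=2))

def prob(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == "H"}    # first toss is heads
B = {w for w in omega if w[1] == "H"}    # second toss is heads
C = {w for w in omega if w[0] == w[1]}   # the two tosses agree

# Pairwise independence holds:
assert prob(A & B) == prob(A) * prob(B)
assert prob(A & C) == prob(A) * prob(C)
assert prob(B & C) == prob(B) * prob(C)

# ...but mutual independence fails: P(A&B&C) = 1/4, not 1/8.
assert prob(A & B & C) != prob(A) * prob(B) * prob(C)
```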
The concept of independence is also extended to random variables. The random variables X and Y are said to be independent if for any two intervals Δ1 and Δ2, the events that X belongs to Δ1 and that Y belongs to Δ2 are independent. Highly important schemes in probability theory are based on the hypothesis that various events or random variables are independent.
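As a final sketch (again our own illustration, with arbitrarily chosen intervals), the defining property for random variables can be checked empirically: for independently drawn X and Y, the frequency of the joint event {X ∈ Δ1, Y ∈ Δ2} is close to the product of the separate frequencies:

```python
import random

random.seed(1)
N = 200_000

in_d1 = in_d2 = in_both = 0
for _ in range(N):
    x = random.random()   # X uniform on [0, 1)
    y = random.random()   # Y uniform on [0, 1), drawn independently of X
    a = 0.2 <= x < 0.5    # X falls in the interval Delta_1 = [0.2, 0.5)
    b = 0.6 <= y < 0.9    # Y falls in the interval Delta_2 = [0.6, 0.9)
    in_d1 += a
    in_d2 += b
    in_both += a and b

# The joint frequency factors, approximately, into the product of the marginals:
print(in_d1 / N, in_d2 / N)   # each close to 0.3
print(in_both / N)            # close to 0.3 * 0.3 = 0.09
```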