NIP theories (also known as dependent theories or shatterproof theories) are a class of theories generalizing stable theories, but allowing for an ordering. Aside from stable theories, important examples are real closed fields, ACVF, {\displaystyle p}-adically closed fields, and o-minimal theories.

Definitions

Fix some complete theory {\displaystyle T}, with monster model {\displaystyle \mathbb {U} }. A formula {\displaystyle \phi (x;y)} is said to have the independence property if there exist {\displaystyle \langle a_{i}\rangle _{i\in \mathbb {N} }} and {\displaystyle \langle b_{S}\rangle _{S\subset \mathbb {N} }} such that for every {\displaystyle i} and {\displaystyle S}, {\displaystyle i\in S\iff \mathbb {U} \models \phi (a_{i};b_{S})}. Note that the variables {\displaystyle x} and {\displaystyle y} could be tuples, and the location of the semi-colon matters.
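
For a concrete example (a standard one, added here for illustration): in the theory of the random graph, the edge relation {\displaystyle E(x;y)} has the independence property.

```latex
% Fix distinct vertices (a_i)_{i \in \mathbb{N}}.  For each S \subseteq \mathbb{N}, the
% extension axioms of the random graph provide a vertex b_S adjacent to exactly
% the a_i with i \in S, so that
\[
  i \in S \iff \mathbb{U} \models E(a_i ; b_S).
\]
```

By contrast, the order relation {\displaystyle x<y} in a dense linear order is NIP: it has the order property, but no set of two elements can be shattered by its instances.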

A formula is said to be NIP if it does not have the independence property.

It turns out that the following conditions on a formula {\displaystyle \phi (x;y)} are each equivalent to {\displaystyle \phi (x;y)} being NIP:
- {\displaystyle \phi (x;y)} has finite VC dimension: there is some {\displaystyle n} such that no set of {\displaystyle n} tuples {\displaystyle a_{1},\ldots ,a_{n}} is shattered by instances of {\displaystyle \phi }.
- The opposite formula {\displaystyle \phi ^{opp}(y;x):=\phi (x;y)}, obtained by exchanging the roles of the two groups of variables, is NIP.
- For every indiscernible sequence {\displaystyle \langle a_{i}\rangle _{i<\omega }} and every {\displaystyle b}, the truth value of {\displaystyle \phi (a_{i};b)} is eventually constant.

A theory is said to be NIP if no formula has the independence property, i.e., every formula is NIP. A non-trivially equivalent condition is that every formula {\displaystyle \phi (x;y)} with {\displaystyle x} a singleton is NIP. That is, when checking NIP, one only needs to check formulas {\displaystyle \phi (x;y)} with {\displaystyle x} a singleton. This is a useful criterion when we have some control over one-dimensional definable sets, such as in the case of strongly minimal, o-minimal, or C-minimal theories.

Properties

Like most classification-theoretic boundaries, NIP is preserved under adding imaginary sorts (passing to {\displaystyle M^{eq}}), under reducts, and under naming parameters with new constant symbols.

Stable theories are NIP. In fact they are exactly the NIP theories which are also simple, or which are also NSOP.

Forking in NIP theories does not have many of the nice properties of forking in simple theories; for example, symmetry can fail, and forking can differ from dividing over arbitrary sets. However, it is still a useful tool. Here are some of the results on forking and dividing in NIP theories:
- Forking and dividing agree over models (and, more generally, over extension bases). This is a theorem of Chernikov and Kaplan, and holds in the wider class of NTP2 theories.
- A global type does not fork over a small set {\displaystyle A} if and only if it is Lascar-invariant over {\displaystyle A} (equivalently, invariant over {\displaystyle \operatorname {bdd} (A)}). In particular, the global types which do not fork over a model {\displaystyle M} are exactly the {\displaystyle M}-invariant ones.

Some facts from stable group theory generalize to the NIP setting. For example, {\displaystyle G^{00}} always exists (the smallest type-definable subgroup of bounded index). This was proven by Shelah; a proof appears in a paper by Hrushovski, Peterzil, and Pillay. Similarly {\displaystyle G^{0}}, the intersection of all definable subgroups of finite index, does not depend on the parameter set over which it is computed, so it too makes sense in the NIP setting. But {\displaystyle G^{0}} and {\displaystyle G^{00}} need not agree, unlike in the stable case. For example, if {\displaystyle G} is the circle group, or any compact definable group in RCF, then {\displaystyle G^{0}} is the topological connected component of {\displaystyle G}, while {\displaystyle G^{00}} is the infinitesimal neighborhood of the origin.

Also, {\displaystyle G^{000}} always exists, where {\displaystyle G^{000}} is the smallest subgroup of bounded index which is invariant under automorphisms of the monster model fixing some small set. This was proven by Gismatullin, building on work of Shelah.
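
For reference, here are the definitions of the three connected components being used (standard definitions, spelled out here for convenience); all three are computed relative to a small parameter set {\displaystyle A}, and the content of the results above is that in an NIP theory they do not depend on {\displaystyle A}.

```latex
\[
\begin{aligned}
G^{0}_{A}   &= \bigcap \,\{ H \leq G : H \text{ is definable over } A \text{ and } [G:H] \text{ is finite} \},\\
G^{00}_{A}  &= \text{the smallest subgroup of bounded index which is type-definable over } A,\\
G^{000}_{A} &= \text{the smallest subgroup of bounded index which is } \operatorname{Aut}(\mathbb{U}/A)\text{-invariant}.
\end{aligned}
\]
```

One always has {\displaystyle G^{000}\subseteq G^{00}\subseteq G^{0}\subseteq G}.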

A version of Baldwin-Saxl exists for NIP groups. It says that if {\displaystyle G} is a type-definable group in an NIP theory, and {\displaystyle \phi (x;y)} is a formula, then there is an integer {\displaystyle n} such that any finite intersection of relatively definable subgroups cut out by {\displaystyle \phi (x;y)} is an intersection of at most {\displaystyle n} of them.
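
In symbols: writing {\displaystyle H_{b}=\phi (G;b)} for those parameters {\displaystyle b} for which this set is a subgroup, the statement reads as follows (a restatement of the above, with notation chosen here).

```latex
% Baldwin--Saxl for an NIP formula \phi(x;y): there is n = n(\phi) such that for
% every finite set B of parameters b with H_b a subgroup, there is B_0 \subseteq B
% with |B_0| \leq n and
\[
  \bigcap_{b \in B} H_b \;=\; \bigcap_{b \in B_0} H_b .
\]
```

In particular, if each {\displaystyle H_{b}} has index at most {\displaystyle k} in {\displaystyle G}, then the whole intersection has index at most {\displaystyle k^{n}}.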

Generically stable types

In NIP theories, there is a well-defined notion of generically stable types, which have many of the properties of types in stable theories. A global {\displaystyle C}-invariant type {\displaystyle p} is said to be generically stable if it satisfies one of the following equivalent conditions:
- {\displaystyle p} is definable (over {\displaystyle C}) and finitely satisfiable in some small model containing {\displaystyle C}.
- {\displaystyle p_{x}\otimes p_{y}=p_{y}\otimes p_{x}}, i.e., {\displaystyle p} commutes with itself.
- Every Morley sequence of {\displaystyle p} is totally indiscernible, i.e., is an indiscernible set.
- {\displaystyle p} is the limit of any of its Morley sequences: if {\displaystyle \langle a_{i}\rangle _{i<\omega }} is a Morley sequence of {\displaystyle p}, then every formula in {\displaystyle p} holds of all but finitely many of the {\displaystyle a_{i}}.

In light of these properties, it makes sense to say that a complete type {\displaystyle p} over a set {\displaystyle C} is generically stable: this means that {\displaystyle p} is stationary, and the unique nonforking global extension of {\displaystyle p} is generically stable.

Even outside of the NIP setting, there is a notion of a generically stable type, but it is much more constrained, and the above equivalences fail.

Generically stable types are equivalent to stably dominated types in ACVF, and play an important role there. But there are many theories, such as RCF and {\displaystyle p}-adically closed fields, in which the only generically stable types are the realized ones. In these settings, one must instead look at generically stable measures…

Keisler measures

"Smooth measures" exist.

Also, every global Keisler measure {\displaystyle \mu } which is invariant over a small model {\displaystyle M} is Borel definable over {\displaystyle M}.
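
Here Borel definability means the following (the standard definition, recorded here for convenience): since {\displaystyle \mu } is {\displaystyle M}-invariant, the value {\displaystyle \mu (\phi (x;b))} depends only on {\displaystyle \operatorname {tp} (b/M)}, and one asks that this dependence be Borel.

```latex
% \mu is Borel definable over M if for every formula \phi(x;y) and every r \in [0,1],
% the set of types
\[
  \{\, q \in S_y(M) : \mu(\phi(x;b)) \geq r \text{ for some (equivalently, every) } b \models q \,\}
\]
% is a Borel subset of the compact space S_y(M).
```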

Counting types; dependent dreams

If {\displaystyle \kappa } is a cardinal, {\displaystyle ded(\kappa )} is the supremum of the cardinals {\displaystyle \lambda } such that there is a linear order of cardinality {\displaystyle \kappa } with at least {\displaystyle \lambda }-many Dedekind cuts. One always has {\displaystyle \kappa \leq ded(\kappa )\leq 2^{\kappa }}. Under some set-theoretic assumptions, {\displaystyle ded(\kappa )=2^{\kappa }} for every {\displaystyle \kappa }, though under other set-theoretic assumptions, one has {\displaystyle ded(\kappa )<2^{\kappa }} for some {\displaystyle \kappa }. At any rate, ZFC does not decide the matter.
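
For instance, {\displaystyle ded(\aleph _{0})=2^{\aleph _{0}}} outright: the countable dense order {\displaystyle (\mathbb {Q} ,<)} already has continuum many Dedekind cuts, one for each real number.

```latex
% Each real r gives a distinct Dedekind cut of (\mathbb{Q}, <):
\[
  r \;\longmapsto\; \bigl(\{\, q \in \mathbb{Q} : q < r \,\},\ \{\, q \in \mathbb{Q} : q \geq r \,\}\bigr).
\]
```

The interesting set-theoretic behaviour of {\displaystyle ded} therefore only shows up at uncountable cardinals.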

Shelah proved (something like) the following result: if {\displaystyle T} is NIP and {\displaystyle A} is a set of parameters of size {\displaystyle \kappa \geq |T|}, then the number of complete types over {\displaystyle A} is at most {\displaystyle ded(\kappa )^{|T|}}; on the other hand, if {\displaystyle T} has the independence property, then for every {\displaystyle \kappa \geq |T|} there are parameter sets {\displaystyle A} of size {\displaystyle \kappa } with {\displaystyle 2^{\kappa }}-many complete types over {\displaystyle A}.

Consequently, one can characterize NIP theories (in some models of ZFC) by a type-counting description, in the same way that one can characterize stable theories by counting types.

The result can also be stated formula by formula: if {\displaystyle \phi (x;y)} is an NIP formula, then the number of {\displaystyle \phi }-types over a set of size {\displaystyle \kappa } is bounded in terms of {\displaystyle ded(\kappa )}, whereas if {\displaystyle \phi (x;y)} has the independence property, there are sets of size {\displaystyle \kappa } with {\displaystyle 2^{\kappa }}-many {\displaystyle \phi }-types over them.

(Historical note: the original proof that NIP can be checked on formulas {\displaystyle \phi (x;y)} with {\displaystyle x} a singleton apparently went through this type-counting criterion; more direct combinatorial proofs came later.)

Types over finite sets

NIP formulas can be characterized by counting types over finite sets. If {\displaystyle \phi (x;y)} is a formula, consider the function {\displaystyle f:\mathbb {N} \to \mathbb {N} } which maps {\displaystyle n} to the maximum number of {\displaystyle \phi }-types over a set of size {\displaystyle n}. The Sauer-Shelah lemma says that one of two things happens:
- If {\displaystyle \phi (x;y)} has the independence property, then {\displaystyle f(n)=2^{n}} for every {\displaystyle n}.
- If {\displaystyle \phi (x;y)} is NIP, then {\displaystyle f(n)} grows only polynomially: {\displaystyle f(n)\leq {\binom {n}{0}}+{\binom {n}{1}}+\cdots +{\binom {n}{d}}=O(n^{d})}, where {\displaystyle d} is the (dual) VC dimension of {\displaystyle \phi }.
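
Here is a small Python sketch (added for illustration, not from the article) that computes traces and VC dimension for a toy family of sets, exhibiting the dichotomy on an NIP-like example of half-lines, where the number of types over an {\displaystyle n}-element set grows linearly rather than like {\displaystyle 2^{n}}.

```python
from itertools import combinations

def traces(family, points):
    """Number of distinct traces {S & points : S in family}; this is the
    combinatorial analogue of counting phi-types over the finite set `points`."""
    pts = frozenset(points)
    return len({frozenset(S) & pts for S in family})

def shatters(family, points):
    """True if every subset of `points` arises as the trace of some member of the family."""
    pts = set(points)
    return all(any(set(S) & pts == set(sub) for S in family)
               for r in range(len(pts) + 1)
               for sub in combinations(pts, r))

def vc_dimension(family, universe):
    """Largest size of a shattered subset of `universe` (the VC dimension)."""
    d = 0
    for r in range(1, len(universe) + 1):
        if any(shatters(family, sub) for sub in combinations(universe, r)):
            d = r
        else:
            break  # shattering is closed under subsets, so we can stop here
    return d

# Toy example: half-lines {y : y <= a}, mimicking the NIP formula "x <= y" in a
# linear order.  The VC dimension is 1, and the number of traces on this
# 10-element set is 10 (linear growth), far below the 2^10 one would see for a
# formula with the independence property.
universe = list(range(10))
family = [{y for y in universe if y <= a} for a in universe]
print(vc_dimension(family, universe))       # 1
print(traces(family, universe), 2 ** 10)    # 10 vs 1024
```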

It is conjectured that the polynomial growth can be witnessed by a uniform definability of types over finite sets (UDTFS). Specifically, it is conjectured that if {\displaystyle \phi (x;y)} is an NIP formula, then there is some formula {\displaystyle \psi (y;y_{1},\ldots ,y_{n})} such that for every finite set {\displaystyle A} and every {\displaystyle \phi }-type {\displaystyle p(x)} over {\displaystyle A}, there exist {\displaystyle b_{1},\ldots ,b_{n}\in A} such that for every {\displaystyle b\in A}, the formula {\displaystyle \phi (x;b)} is in {\displaystyle p(x)} if and only if {\displaystyle \psi (b;b_{1},\ldots ,b_{n})} holds. So {\displaystyle \psi (-;b_{1},\ldots ,b_{n})} is the {\displaystyle \phi }-definition of {\displaystyle p(x)}. Of course any type over a finite set is definable over that set, but the conjecture is that the definition can be chosen uniformly across all finite sets. This conjecture turns out to be equivalent to a conjecture in machine learning.
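
As an illustration (an example added here, with notation chosen for the occasion), take {\displaystyle \phi (x;y)} to be {\displaystyle x<y} in the theory of dense linear orders without endpoints, and consider a {\displaystyle \phi }-type containing at least one positive instance.

```latex
% Let A = \{c_1 < \dots < c_k\} be finite and let p(x) be a \phi-type over A containing
% some formula x < c_i.  The set \{c \in A : (x < c) \in p\} is upward closed in A;
% if c_j is its least element, then for every c \in A
\[
  (x < c) \in p(x) \;\iff\; c_j \leq c ,
\]
% so \psi(y; y_1) := (y_1 \leq y), with parameter c_j \in A, is the \phi-definition
% of p over A, uniformly in A.
```

The remaining type, the one containing {\displaystyle \neg (x<c)} for every {\displaystyle c\in A}, has to be handled separately, which is one reason uniform definability is a subtler matter than definability over each finite set individually.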

The conjecture is known in the case when the ambient theory is NIP, i.e., when all formulas are NIP. This was proven by…