Consider a set that is even smaller than a null set, i.e., a subset of one. A null set is a set $S$ such that $\Pr(S) = 0$.
Intuitively, a subset of a null set should also have probability $0$. However, this intuition cannot be derived from the definition of a probability measure alone.
Why? Because for a $\sigma$-field $\mathcal{F}$, there is no guarantee that $A \in \mathcal{F}$ and $B \subseteq A$ imply $B \in \mathcal{F}$.
If $B \in \mathcal{F}$ (i.e., if we knew it was measurable), then the claim follows by monotonicity. But we do not know whether the subset is even measurable (in $\mathcal{F}$).
Definition: Complete Probability Space
A probability space $(\Omega, \mathcal{F}, \Pr)$ is complete if
$$A \subseteq B,\ B \in \mathcal{F} \text{ and } \Pr(B) = 0 \implies A \in \mathcal{F} \quad (\text{and consequently } \Pr(A) = 0).$$
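A classical example of an incomplete space (a standard fact, not part of these notes): take $\Omega = [0,1]$ with the Borel $\sigma$-field $\mathcal{B}([0,1])$ and Lebesgue measure $\lambda$. The Cantor set $C$ is a Borel null set with continuum many points, so a cardinality count shows some subset of $C$ is not Borel:

```latex
\[
  \lambda(C) = 0, \qquad |C| = 2^{\aleph_0}, \qquad
  |\mathcal{P}(C)| = 2^{2^{\aleph_0}} \;>\; 2^{\aleph_0} = |\mathcal{B}([0,1])|,
\]
\[
  \text{hence some } N \subseteq C \text{ satisfies } N \notin \mathcal{B}([0,1]),
  \text{ so } ([0,1], \mathcal{B}([0,1]), \lambda) \text{ is not complete.}
\]
```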
If $(\Omega, \mathcal{F}, \Pr)$ is complete, then for a set $A'$: if there exists $A \in \mathcal{F}$ such that $A \,\Delta\, A' \subseteq B$ for some $B \in \mathcal{F}$ with $\Pr(B) = 0$, then $A' \in \mathcal{F}$ and $\Pr(A') = \Pr(A)$.
Note: $A \,\Delta\, A'$ is the symmetric difference, defined as $A \,\Delta\, A' = (A \cup A') - (A \cap A')$.
Since $A \,\Delta\, A' \subseteq B$ and $\Pr(B) = 0$, the sets $A \cap A'^c$ and $A^c \cap A'$ are subsets of a null set; in a complete space they are therefore measurable null sets.
This implies $\Pr(A) = \Pr(A') = \Pr(A \cap A')$.
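Spelling out this equality: split $A$ and $A'$ along $A \cap A'$ and use finite additivity together with the two null sets just identified,

```latex
\begin{align*}
  \Pr(A)  &= \Pr(A \cap A') + \Pr(A \cap A'^{c}) = \Pr(A \cap A') + 0,\\
  \Pr(A') &= \Pr(A \cap A') + \Pr(A^{c} \cap A') = \Pr(A \cap A') + 0.
\end{align*}
```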
A probability space always has a complete extension.
Theorem
Let $(\Omega, \mathcal{F}, \Pr)$ be a probability space. Then there exists a complete probability space $(\Omega, \mathcal{F}', \Pr')$ such that $\mathcal{F} \subseteq \mathcal{F}'$ and $\Pr(A) = \Pr'(A)$ for all $A \in \mathcal{F}$.
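For reference, the completed $\sigma$-field in this theorem admits a standard explicit description (the usual completion construction; stated here without proof and not spelled out in these notes):

```latex
\[
  \mathcal{F}' = \bigl\{\, A \cup N \;:\; A \in \mathcal{F},\
  N \subseteq B \text{ for some } B \in \mathcal{F} \text{ with } \Pr(B) = 0 \,\bigr\},
  \qquad \Pr'(A \cup N) = \Pr(A).
\]
```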
Recall: The outer measure $\Pr^*$ defined in the extension result is a probability measure on the $\sigma$-field $\mathcal{M}$, where $\mathcal{M}$ is the class of all $\Pr^*$-measurable sets:
$$A \in \mathcal{M} \iff \Pr^*(E) = \Pr^*(A \cap E) + \Pr^*(A^c \cap E) \quad \forall E \subseteq \Omega.$$
We will show that $(\Omega, \mathcal{M}, \Pr^*)$ is a complete probability space; it will be the required extension.
Let $\Pr^*(B) = 0$ and $A \subseteq B$. To establish completeness we must show $A \in \mathcal{M}$; $\Pr^*(A) = 0$ will then follow by monotonicity.
For any $E \subseteq \Omega$, consider
$$\Pr^*(A \cap E) + \Pr^*(A^c \cap E).$$
Note that, by monotonicity of $\Pr^*$:
$$A \cap E \subseteq A \subseteq B \implies \Pr^*(A \cap E) \le \Pr^*(B) = 0, \quad \text{so } \Pr^*(A \cap E) = 0;$$
$$A^c \cap E \subseteq E \implies \Pr^*(A^c \cap E) \le \Pr^*(E).$$
Therefore:
$$\Pr^*(A \cap E) + \Pr^*(A^c \cap E) \le 0 + \Pr^*(E) = \Pr^*(E).$$
The other side of the inequality is immediate from subadditivity of the outer measure, since $E = (A \cap E) \cup (A^c \cap E)$:
$$\Pr^*(E) \le \Pr^*(A \cap E) + \Pr^*(A^c \cap E).$$
Thus, we have equality for every $E \subseteq \Omega$:
$$\Pr^*(A \cap E) + \Pr^*(A^c \cap E) = \Pr^*(E).$$
This implies that $A$ is $\Pr^*$-measurable, so $A \in \mathcal{M}$.
Also, $\Pr^*(A) = 0$ follows from the monotonicity of $\Pr^*$ (since $A \subseteq B$ and $\Pr^*(B) = 0$).
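The argument can be checked concretely on a small finite space. The sketch below is illustrative code, not from the notes (the space, $\sigma$-field, and measure are made up for the example): it builds the $\sigma$-field generated by a two-block partition, induces the outer measure as an infimum over measurable covers, and verifies that a non-measurable subset of a null set satisfies the Carathéodory criterion against every test set $E$.

```python
from itertools import chain, combinations

# Toy space: Omega = {0,1,2,3} with the sigma-field generated by the
# partition {{0,1}, {2,3}} -- deliberately NOT complete.
Omega = frozenset({0, 1, 2, 3})
F = [frozenset(), frozenset({0, 1}), frozenset({2, 3}), Omega]

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r)
                                         for r in range(len(s) + 1))]

def P(S):
    """Probability measure on F: all mass on {2,3}, so {0,1} is a null set."""
    return 1.0 if {2, 3} <= S else 0.0

def P_star(E):
    """Outer measure induced by P; on a finite space the inf is a min."""
    return min(P(S) for S in F if E <= S)

B = frozenset({0, 1})   # null set: P(B) = 0
A = frozenset({0})      # subset of B, not in F

assert A not in F               # A is not measurable in the original F
assert P_star(A) == 0.0         # but its outer measure is 0

# Caratheodory criterion: A splits every test set E additively.
for E in powerset(Omega):
    assert P_star(A & E) + P_star((Omega - A) & E) == P_star(E)
print("A is P*-measurable")
```

Running the loop over all $16$ test sets confirms the proof's conclusion: the subset of the null set belongs to $\mathcal{M}$ even though it is missing from $\mathcal{F}$.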