Sample Space and Experimental Outcomes
We can use the letter \(S\) or \(\Omega\) to denote the sample space. Suppose we have a set of \(n\) experimental outcomes:
\[S = \left\{ E_1, E_2, ..., E_{n-1}, E_n\right\}\]
We can call \(S\) a sample space if the experimental outcomes in it are mutually exclusive (no two can happen at the same time) and jointly exhaustive (one of them must happen).
Thinking about the world in terms of sample space is pretty amazing.
Assigning Probability to Experimental Outcomes
We can assign probabilities to events of a sample space. Since experimental outcomes are themselves events as well, we can assign probabilities to each experimental outcome.
For the mutually exclusive and jointly exhaustive experimental outcomes of the sample space, there are two requirements for assigning probabilities:

- Each element of the sample space cannot have a negative probability of happening, and also cannot have a probability of happening greater than \(1\). With \(P\) denoting probability: \[0 \le P(E_i) \le 1\]
- The probabilities of all the mutually exclusive and jointly exhaustive experimental outcomes in the sample space sum to \(1\). For an experiment with \(n\) experimental outcomes: \[\sum_{i=1}^{n} P(E_i) = 1\]
# Load libraries
# kable() is from the knitr package; kable_styling_fc() is a custom
# table-styling helper from the author's REconTools toolkit
library(tidyverse)
library(knitr)
# Define a list of experimental outcomes
experimental.outcomes.list <- c('Heavy Rain', 'Light Rain', 'No Rain')
# Probabilities on experimental outcomes
experimental.outcome.prob <- c(0.1, 0.2, 0.7)
# Show these in a tibble
kable(tibble(tomorrow.experimental.outcomes = experimental.outcomes.list,
             experimental.outcome.prob = experimental.outcome.prob)) %>%
  kable_styling_fc()
| tomorrow.experimental.outcomes | experimental.outcome.prob |
|---|---|
| Heavy Rain | 0.1 |
| Light Rain | 0.2 |
| No Rain | 0.7 |
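As a quick check, we can verify in R that the assigned probabilities satisfy the two requirements above. This is a small sketch; `all.equal` is used rather than `==` to guard against floating-point rounding.

```r
# Probabilities assigned to the three experimental outcomes
experimental.outcome.prob <- c(0.1, 0.2, 0.7)

# Requirement 1: each probability is between 0 and 1
all(experimental.outcome.prob >= 0 & experimental.outcome.prob <= 1)

# Requirement 2: probabilities sum to 1
# (all.equal guards against floating-point rounding error)
isTRUE(all.equal(sum(experimental.outcome.prob), 1))
```

Both expressions return `TRUE` for the probabilities assigned above.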
# What could happen tomorrow?
# We live in a probabilistic world, from today's perspective, tomorrow is uncertain
# If we draw tomorrow from a hat, given our possible outcomes
# and the probabilities associated with the outcomes
# what are the possible tomorrows?
number.of.tomorrow.to.draw <- 20
tomorrow.weather.draws <- sample(experimental.outcomes.list,
                                 size = number.of.tomorrow.to.draw,
                                 replace = TRUE,
                                 prob = experimental.outcome.prob)
# A little tibble to show results
# There are only three unique tomorrows, each of three weather outcomes
# could happen, but the chance of each happening differs by the probabilities
# we set in experimental.outcome.prob
kable(tibble(which.tomorrow = paste0('tomorrow:', 1:number.of.tomorrow.to.draw),
tomorrow.weather = tomorrow.weather.draws)) %>% kable_styling_fc()
| which.tomorrow | tomorrow.weather |
|---|---|
| tomorrow:1 | Heavy Rain |
| tomorrow:2 | Light Rain |
| tomorrow:3 | No Rain |
| tomorrow:4 | No Rain |
| tomorrow:5 | No Rain |
| tomorrow:6 | No Rain |
| tomorrow:7 | No Rain |
| tomorrow:8 | No Rain |
| tomorrow:9 | No Rain |
| tomorrow:10 | No Rain |
| tomorrow:11 | No Rain |
| tomorrow:12 | No Rain |
| tomorrow:13 | Light Rain |
| tomorrow:14 | Heavy Rain |
| tomorrow:15 | No Rain |
| tomorrow:16 | No Rain |
| tomorrow:17 | Light Rain |
| tomorrow:18 | No Rain |
| tomorrow:19 | No Rain |
| tomorrow:20 | No Rain |
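With only 20 draws, the realized shares of each weather outcome need not match the assigned probabilities. As the number of draws grows, the empirical frequencies approach the probabilities we set. This is a sketch; the seed and the 10,000-draw count are arbitrary choices, not from the original example.

```r
# Re-define outcomes and probabilities so this block is self-contained
experimental.outcomes.list <- c('Heavy Rain', 'Light Rain', 'No Rain')
experimental.outcome.prob <- c(0.1, 0.2, 0.7)

# Draw many tomorrows, then compare empirical shares
# to the assigned probabilities
set.seed(123)
many.tomorrow.draws <- sample(experimental.outcomes.list,
                              size = 10000,
                              replace = TRUE,
                              prob = experimental.outcome.prob)
table(many.tomorrow.draws) / length(many.tomorrow.draws)
```

The printed shares should be close to 0.1, 0.2, and 0.7.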
Probabilities for Complements and Union
The Probabilities of Complements add up to 1: \[P(A) + P(A^c) = 1\]
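The weather example above illustrates the complement rule: if \(A\) is the event that it rains at all tomorrow, then \(A^c\) is the "No Rain" outcome. A minimal sketch in R (variable names are illustrative):

```r
# P(No Rain) from the weather example above
p.no.rain <- 0.7
# Complement rule: P(any rain) = 1 - P(no rain)
p.any.rain <- 1 - p.no.rain
p.any.rain
# 0.3
```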
The Addition Law: \[P (A \cup B) = P(A) + P(B) - P (A \cap B)\]
If two events \(A\) and \(B\) are mutually exclusive, which means they do not share any experimental outcomes (sample points), then: \(P (A \cap B) = 0\), and \(P (A \cup B) = P(A) + P(B)\).
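In the weather example, "Heavy Rain" and "Light Rain" are mutually exclusive experimental outcomes, so the addition law simplifies as described. A small sketch (variable names are illustrative):

```r
# Probabilities from the weather example above
p.heavy.rain <- 0.1
p.light.rain <- 0.2
# Heavy and light rain are mutually exclusive, so the
# intersection term P(Heavy and Light) is 0 and
# P(Heavy or Light) = P(Heavy) + P(Light)
p.heavy.or.light <- p.heavy.rain + p.light.rain
p.heavy.or.light
```

This matches the complement-rule result \(1 - P(\text{No Rain}) = 0.3\).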
The Multiplication Law for Independent Events: \[P (A \cap B) = P(A) \cdot P(B)\]
If the probability of event \(A\) happening does not change the probability of event \(B\) happening, and vice versa, then the two events are independent. Below we arrive at this formulation from conditional probability.
We use a vertical bar \(\mid\) to denote conditional probability. Given that \(B\) happens, what is the probability of \(A\) happening?
\[P (A \mid B) = \frac{P(A \cap B)}{P(B)}\]
This says the probability of \(A\) happening given that \(B\) happens is equal to the ratio of the probability that both \(A\) and \(B\) happen divided by the probability of \(B\) happening.
The formula also means that the probability that both \(A\) and \(B\) happen is equal to the probability that \(B\) happens times the probability that \(A\) happens conditional on \(B\) happening: \[ P(A \cap B) = P (A \mid B)\cdot P(B)\]
If \(A\) and \(B\) are independent, meaning the probability of \(A\) happening does not change whether or not \(B\) happens, then \(P (A \mid B) = P(A)\), and: \[ \text{If A and B are independent: } P(A \cap B) = P(A) \cdot P(B)\] This is what we wrote down earlier as well.
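A small simulation can illustrate both formulas. The coin-flip setup below is a hypothetical example, not from the text: two independent fair coins, with \(A\) = first coin lands heads and \(B\) = second coin lands heads, so \(P(A \cap B) = P(A)\cdot P(B) = 0.25\) and \(P(A \mid B) = P(A) = 0.5\).

```r
set.seed(456)
n.draws <- 100000
# Two independent fair coin flips
coin.one <- sample(c('H', 'T'), n.draws, replace = TRUE)
coin.two <- sample(c('H', 'T'), n.draws, replace = TRUE)

# Empirical probabilities
p.a <- mean(coin.one == 'H')
p.b <- mean(coin.two == 'H')
p.a.and.b <- mean(coin.one == 'H' & coin.two == 'H')

# Conditional probability: P(A | B) = P(A and B) / P(B)
# Since the draws are independent, this should be close to p.a
p.a.given.b <- p.a.and.b / p.b
```

Up to sampling noise, `p.a.and.b` is close to `p.a * p.b` and `p.a.given.b` is close to `p.a`, matching the multiplication law and the independence condition above.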