Answer: Both of these are zero, because the die cannot take these values.
Answer: The CDF gives the total probability of observing a value less than or equal to its argument. Thus \(F(3)=1/2\), \(F(7)=1\), and \(F(1.5)=1/6\).
Answer: The normal distribution is symmetric around its mean, with half of its probability on each side. Thus, \(F(0)=1/2\).
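A quick numerical check of both answers (a sketch, assuming a fair six-sided die for the first CDF):

```r
# CDF of a fair six-sided die: total probability of outcomes <= x
die_cdf <- function(x) sum(1:6 <= x) / 6

die_cdf(3)    # 0.5
die_cdf(7)    # 1
die_cdf(1.5)  # 1/6

# Standard normal CDF evaluated at its mean
pnorm(0)      # 0.5
```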
\(i\) | 1 | 2 | 3 | 4 |
---|---|---|---|---|
\(Z_i\) | 2.0 | -2.0 | 3.0 | -3.0 |
Solution: The sample mean is \(\bar{Z}=0\), so \[
s^2= \frac{\sum_{i=1}^N(Z_i - \bar{Z})^2}{N-1} = \frac{2^2+(-2)^2+3^2+(-3)^2}{4-1} = \frac{26}{3}
\approx 8.67.
\]
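As a check, the same calculation in R, using the \(Z_i\) values from the table above:

```r
# Z values from the table
Z <- c(2.0, -2.0, 3.0, -3.0)

# Sample variance by hand: sum of squared deviations over N - 1
sum((Z - mean(Z))^2) / (length(Z) - 1)  # 26/3, about 8.67

# Built-in check
var(Z)
```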
Solution: First, we multiply through and distribute: \[
\sum_{i=1}^N(X_i-\bar{X})(Y_i-\bar{Y}) = \sum_{i=1}^N(X_i-\bar{X})Y_i
- \sum_{i=1}^N(X_i-\bar{X})\bar{Y}
\] Next, note that \(\bar{Y}\) (the mean of the \(Y_i\)s) doesn't depend on \(i\), so we can pull it out of the summation: \[
\sum_{i=1}^N(X_i-\bar{X})(Y_i-\bar{Y}) = \sum_{i=1}^N(X_i-\bar{X})Y_i
- \bar{Y} \sum_{i=1}^N(X_i-\bar{X}).
\] Finally, the last sum must be zero because \[
\sum_{i=1}^N(X_i-\bar{X}) = \sum_{i=1}^N X_i- \sum_{i=1}^N \bar{X} = N\bar{X} - N\bar{X}=0.
\] Thus \[\begin{align*}
\sum_{i=1}^N(X_i-\bar{X})(Y_i-\bar{Y}) &= \sum_{i=1}^N(X_i-\bar{X})Y_i - \bar{Y}\times 0\\
& = \sum_{i=1}^N(X_i-\bar{X})Y_i.
\end{align*}\]
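A small numerical check of this identity, using arbitrary simulated data (the specific values don't matter; the two sums agree for any sample):

```r
set.seed(1)
X <- rnorm(10)
Y <- rnorm(10)

lhs <- sum((X - mean(X)) * (Y - mean(Y)))
rhs <- sum((X - mean(X)) * Y)

all.equal(lhs, rhs)  # TRUE
```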
Using the definition of expected value above, and assuming \(X\) and \(Y\) have the same probability distribution, show that:
\[\begin{align*} \text{E}[X+Y] & = \text{E}[X] + \text{E}[Y]\\ & \text{and} \\ \text{E}[cX] & = c\text{E}[X]. \\ \end{align*}\]
Given these, and the fact that \(\mu=\text{E}[X]\), show that:
\[\begin{align*} \text{E}[(X-\mu)^2] = \text{E}[X^2] - (\text{E}[X])^2 \end{align*}\]
This gives a formula for calculating variances (since \(\text{Var}(X)= \text{E}[(X-\mu)^2]\)).
Solution: Assume \(X\) and \(Y\) both have the same probability density \(f(x)\). The expectation of \(X+Y\) is then
\[\begin{align*} \text{E}[X+Y] & = \int (X+Y) f(x)dx \\ & = \int (X f(x) +Y f(x))dx \\ & = \int X f(x)dx +\int Y f(x)dx \\ & = \text{E}[X] + \text{E}[Y]. \end{align*}\]
Similarly,
\[\begin{align*} \text{E}[cX] & = \int cXf(x)dx \\ & = c \int Xf(x) dx \\ & = c\text{E}[X]. \end{align*}\]
Using these two results (and the fact that \(\mu=\text{E}[X]\)), we can rewrite
\[\begin{align*} \text{E}[(X-\mu)^2] & = \text{E}[ X^2 - 2X\mu + \mu^2] \\ & = \text{E}[X^2] - 2\mu\text{E}[X] + \mu^2 \\ & = \text{E}[X^2] -2\mu^2 + \mu^2 \\ & = \text{E}[X^2] - \mu^2 \\ & = \text{E}[X^2] - (\text{E}[X])^2. \end{align*}\]
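The identity can also be checked exactly for a simple discrete distribution; here is a sketch using a fair six-sided die (any distribution would do):

```r
# Fair die: outcomes and probabilities
x <- 1:6
p <- rep(1/6, 6)

mu  <- sum(x * p)             # E[X] = 3.5
lhs <- sum((x - mu)^2 * p)    # E[(X - mu)^2]
rhs <- sum(x^2 * p) - mu^2    # E[X^2] - (E[X])^2

c(lhs, rhs)                   # both equal 35/12, about 2.917
```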
Suppose that \(\mathrm{E}[X]=\mathrm{E}[Y]=0\), \(\mathrm{var}(X)=\mathrm{var}(Y)=1\), and \(\mathrm{corr}(X,Y)=0.5\).
Solution:
Suppose we have a random sample \(\{Y_i, i=1,\dots,N \}\), where \(Y_i \stackrel{\mathrm{i.i.d.}}{\sim}N(\mu,4)\) for \(i=1,\ldots,N\).
\[\displaystyle \mathrm{Var}(\bar{Y}) = \mathrm{Var}\left(\frac{1}{N}\sum_{i=1}^N Y_i\right) = \frac{1}{N^2}\sum_{i=1}^N \mathrm{Var}(Y_i) = \frac{N}{N^2}\mathrm{Var}(Y) =\frac{4}{N}.\]
This is the derivation for the variance of the sampling distribution.
\[\displaystyle\mathrm{E}[\bar{Y}] = \frac{1}{N}\sum_{i=1}^N \mathrm{E}[Y_i] = \frac{N}{N}\mathrm{E}[Y] = \mu.\] This is the mean of the sampling distribution.
\(\displaystyle \mathrm{Var}(Y) = 4\), because each observation is drawn directly from the population distribution, whose variance is 4.
Here, again, we are looking at the distribution of the sample mean, so we must consider the sampling distribution; the standard error (i.e., the standard deviation of the sampling distribution) is just the square root of the variance from part i.
\[\displaystyle \mathrm{se}(\bar{Y}) = \sqrt{\mathrm{Var}(\bar{Y})} =\frac{2}{\sqrt{N}}.\]
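A simulation sketch of these results (the choices \(\mu = 10\) and \(N = 25\) are arbitrary, used only for illustration):

```r
set.seed(42)
mu <- 10; N <- 25; reps <- 1e5

# Draw many samples of size N from N(mu, 4) and record each sample mean
ybar <- replicate(reps, mean(rnorm(N, mean = mu, sd = 2)))

mean(ybar)  # close to mu
var(ybar)   # close to 4 / N = 0.16
sd(ybar)    # close to 2 / sqrt(N) = 0.4
```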
Suppose we sample some data \(\{Y_i, i=1,\dots,n \}\), where \(Y_i \stackrel{\mathrm{i.i.d.}}{\sim}N(\mu,\sigma^2)\) for \(i=1,\ldots,n\), and that you want to test the null hypothesis \(H_0: ~\mu=12\) vs. the alternative \(H_a: \mu \neq 12\), at the \(0.05\) significance level.
This question is asking you to think about the hypothesis that the mean of your distribution is equal to 12. I give you the distribution of the data themselves (i.e., that they're normal). To test the hypothesis, you work with the sampling distribution (i.e., the distribution of the sample mean), which is \(\bar{Y}\sim N\left(\mu, \frac{\sigma^2}{n}\right)\).
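One way to carry out such a test in R is with `t.test()`, which estimates \(\sigma\) with the sample standard deviation and compares the standardized difference from 12 to the appropriate reference distribution. The data below are simulated purely for illustration:

```r
set.seed(7)
y <- rnorm(30, mean = 13, sd = 3)  # stand-in for the observed sample

# Two-sided test of H0: mu = 12 at the 0.05 level
t.test(y, mu = 12, alternative = "two.sided", conf.level = 0.95)
```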
Across repeated sampling, for 19 out of 20 (i.e., 95%) of different samples, an interval constructed in this way will include the true value of the mean, \(\mu\).
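A coverage simulation illustrates this interpretation (the values of \(\mu\), \(\sigma\), and \(n\) below are arbitrary):

```r
set.seed(123)
mu <- 12; n <- 30; reps <- 1e4

# For each replication, draw a sample, build a 95% CI, and check coverage
covered <- replicate(reps, {
  y  <- rnorm(n, mean = mu, sd = 3)
  ci <- t.test(y)$conf.int
  ci[1] <= mu && mu <= ci[2]
})

mean(covered)  # close to 0.95, i.e., about 19 out of 20
```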