Thursday, November 29, 2007

8.2 due on 11/30

Difficult
In the ellipse example, I'm not sure how |C| = (pi) ab. Where did the pi come from? I'm assuming it has to do with some equation for an ellipse, but I'm not sure. I'm also a little confused about how P = (arctan(a/b)) / pi. I know the book says it's because Theta is uniformly distributed, but I still don't see how that works out. Do we take an integral? Or is it something conceptual?
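A sketch of where the pi might come from, assuming the ellipse in the example is x^2/a^2 + y^2/b^2 <= 1 and Theta is uniform over an interval of length pi (both assumptions on my part). The substitution x = au, y = bv turns the ellipse into the unit disk, whose area is pi, stretched by a factor of ab:
\[ |C| = \iint_{x^2/a^2 + y^2/b^2 \le 1} dx\,dy = ab \iint_{u^2 + v^2 \le 1} du\,dv = \pi ab. \]
And for a uniformly distributed angle, no further integral is needed: the probability is just (length of the favorable interval) / (length of the whole interval),
\[ P\big(0 \le \Theta \le \arctan(a/b)\big) = \frac{\arctan(a/b)}{\pi}. \]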
Reflective
In the last few sections, I realized how out of practice I am with derivatives and integration. In this section, the introduction of the Jacobian makes me feel even more out of practice. The change-of-variables theorem for joint densities looks a lot like something from one of the Math 30-series classes (I'm not even sure which one!). So even though this section wasn't incredibly difficult concept-wise, I feel it'll still be challenging because I'm so rusty with these kinds of calculations.
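For reference, the change-of-variables theorem for joint densities (it is essentially the substitution rule from multivariable calculus, which is why it looks like the Math 30-series): if (U, V) = T(X, Y) for a smooth invertible T with inverse x = x(u,v), y = y(u,v), then
\[ f_{U,V}(u,v) = f_{X,Y}\big(x(u,v),\, y(u,v)\big)\, |J(u,v)|, \qquad J(u,v) = \det\begin{pmatrix} \partial x/\partial u & \partial x/\partial v \\ \partial y/\partial u & \partial y/\partial v \end{pmatrix}. \]
The Jacobian |J| plays exactly the role of the stretching factor in a double-integral substitution.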

Tuesday, November 27, 2007

8.1 due on 11/28

Difficult
The triangle example gave me a couple of problems. For part c (show it is possible to construct a triangle...), I'm not quite sure what the solution means. I felt like there needed to be more than a couple of sentences as an answer. For part d, I didn't even know where to begin. The problem seemed very tricky, even after I read how the book did it. I would also like to see the Bivariate Normal Density example worked out, since I feel like the book skipped a few steps.
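For reference while working through that example, the standard form of the bivariate normal density (the book's version may use the standardized case, with means 0 and variances 1):
\[ f(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \frac{(x-\mu_X)^2}{\sigma_X^2} - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y} + \frac{(y-\mu_Y)^2}{\sigma_Y^2} \right] \right\}. \]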
Reflective
For the most part, a lot of the material in this section was concepts we already know applied to joint continuous random variables. Densities, distributions, and marginals are all analogous to their earlier definitions. It's nice to be able to see how the same concept works differently depending on the type of random variable(s) you have. The idea behind these concepts is the same in all cases, but the way you calculate them varies.
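Marginals are a good side-by-side example of that last point: the concept is the same ("sum out the other variable"), but the calculation is a sum in the discrete case and an integral in the continuous case:
\[ f_X(x) = \sum_y f_{X,Y}(x,y) \quad \text{(discrete)}, \qquad f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dy \quad \text{(continuous)}. \]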

Sunday, November 25, 2007

7.4 due on 11/26

Difficult
The Pareto Density example confused me a little bit. Conceptually, I understand the expectation is infinity and why, but when I tried to work it out on my own, I got stuck doing the calculations. Also, the justification for using Definition 1 really confused me. I understand that the book is using a discrete approximation, but once again I get confused when trying to do the calculations. It's frustrating because I want to be able to see it work out mathematically instead of just accepting these statements as fact. At the end of the section, Jensen's inequality is mentioned. What does it mean for a function to be convex? Also, Chebyshev's inequality is simply stated, but there's no proof. I can get it to work out mathematically, but what does it mean conceptually?
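A sketch of the sticking points, assuming the Pareto density is f(x) = alpha/x^(alpha+1) for x >= 1 (the book's parametrization may differ). The expectation calculation:
\[ E[X] = \int_1^\infty x \cdot \frac{\alpha}{x^{\alpha+1}}\,dx = \alpha \int_1^\infty x^{-\alpha}\,dx = \begin{cases} \dfrac{\alpha}{\alpha-1}, & \alpha > 1, \\[4pt] \infty, & \alpha \le 1, \end{cases} \]
since x^(-alpha) decays too slowly to have a finite integral when alpha <= 1. A function f is convex if every chord lies on or above its graph:
\[ f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda) f(y) \quad \text{for all } x, y \text{ and } 0 \le \lambda \le 1. \]
And Chebyshev's inequality, P(|X - mu| >= t) <= Var(X)/t^2, conceptually says the variance limits how much probability can sit far from the mean; the one-line proof is
\[ \operatorname{Var}(X) = E\big[(X-\mu)^2\big] \ge E\big[(X-\mu)^2 \mathbf{1}\{|X-\mu| \ge t\}\big] \ge t^2\, P(|X-\mu| \ge t). \]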
Reflective
The section was relatively easy to understand since we've done expectations twice already. As for the justification of Definition 1, I know the book tried to show us mathematically why it works. But instead, I just try to remember that we're dealing with continuous random variables, so we can't sum over every possible value of x, since P(X=x) = 0 for each x. Surely the expectation of every random variable is not 0! So we use integrals, since an integral gives us a "sum" over all possible values of x without always coming out to 0.
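That intuition can be made a little more concrete. Chopping the range of X into cells of small width Delta, the discrete-style sum over cells turns into an integral as Delta -> 0:
\[ E[X] \approx \sum_i x_i\, P(x_i < X \le x_i + \Delta) \approx \sum_i x_i f(x_i)\,\Delta \;\longrightarrow\; \int_{-\infty}^{\infty} x f(x)\,dx, \]
which is presumably the discrete approximation the book has in mind.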

Tuesday, November 20, 2007

7.2 due on 11/21

Difficult
In Example (2), I'm a little confused about why F(y) is non-zero for y <= 0 and then f(y) is non-zero for y > 0. Why did the values of y switch? Also, the book says the derivative does not exist at y = 0. I may be out of practice with derivatives, but I thought the derivative at y = 0 was 0. Also, in the step function example, I don't quite see how FS(x) >= FX(x). I'm also having trouble understanding how an integer-valued random variable can approximate a continuous random variable.
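On the derivative at y = 0, a minimal sketch, assuming the example's distribution function has an exponential-type shape, F(y) = 0 for y <= 0 and F(y) = 1 - e^(-y) for y > 0. The two one-sided derivatives at 0 both exist but disagree:
\[ \lim_{h \to 0^-} \frac{F(0+h) - F(0)}{h} = 0, \qquad \lim_{h \to 0^+} \frac{F(0+h) - F(0)}{h} = \lim_{h \to 0^+} \frac{1 - e^{-h}}{h} = 1, \]
so F'(0) does not exist even though F looks flat from the left, and that is also why the density is only specified for y > 0.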
Reflective
This section on functions of random variables is very similar to what we covered in section 5.4 on sums and products of random variables. Only now we're dealing with more complicated functions, such as logs or exponentials. I thought the examples on inverse functions were helpful, since the ideas are fairly easy to grasp but not so obvious mathematically.
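For reference, the inverse-function idea in one formula: if Y = g(X) with g strictly monotone and differentiable, then
\[ f_Y(y) = f_X\big(g^{-1}(y)\big)\, \left| \frac{d}{dy}\, g^{-1}(y) \right|. \]
For example, with Y = e^X this gives f_Y(y) = f_X(log y) / y for y > 0, since g^(-1)(y) = log y.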

Sunday, November 18, 2007

Rest of 7.1 due on 11/19

Difficult
In Example (17), for x = 0, f(x) = 0. But then the book implies you could set f(x) = lambda/2 because it doesn't matter. Why does it not matter? Why is that point "exceptional"? Also, I thought the later examples in the section were a bit difficult to follow. I think I got confused by the notation and all the Greek letters. I was able to (somewhat) follow up to the Normal Density example. After that, I was very confused.
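On why the single point doesn't matter: every probability computed from a density comes through an integral,
\[ P(a < X \le b) = \int_a^b f(x)\,dx, \]
and changing the integrand at one point (or any finite set of points) changes no integral's value. So f(0) = 0 and f(0) = lambda/2 define exactly the same distribution; the point is "exceptional" only in that the formula doesn't pin its value down.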
Reflective
The book says that there are continuous random variables that do not have a density. What would these random variables look like? It must be something with a non-differentiable distribution function F, but I'm curious to see an example.
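For the record, the classic example (not from this section) is the Cantor distribution: its distribution function F is continuous and increases from 0 to 1, yet F'(x) = 0 at almost every x. No density can reproduce it, because a density would have to agree with F' almost everywhere:
\[ f = F' = 0 \ \text{a.e.} \;\Rightarrow\; \int_{-\infty}^{\infty} f(v)\,dv = 0 \ne 1. \]
So the guess is right: it is exactly a continuous but badly-behaved (almost-everywhere-flat) distribution function.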

Tuesday, November 13, 2007

5.5 until p.179 (conditional expectation), 7.1 until p.291 (Example 4 exclusive) due on 11/14

Difficult
The definition of conditional mass function is very similar to the definition of conditional probability for events. However, the denominator in the definition confuses me. Shouldn't the denominator be fY(y) instead of fY(x)? That way, it would be analogous to the conditional probability for events. Also, the examples use fY(y) instead. In section 7.1, I'm not sure what the book means by "state space." Is this similar to a sample space, only with an uncountable number of possible outcomes? I also don't fully understand the proof that P(X=x) = 0 if F is continuous. I'm not sure where the (x - 1/n) comes from. And lastly, in the definition of a continuous random variable, does the v in the integral stand for anything?
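A sketch of a few of these, assuming the standard definitions. The denominator should indeed be fY(y) (the fY(x) is presumably a typo), which makes the definition exactly analogous to P(A|B) = P(A and B)/P(B):
\[ f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}, \qquad f_Y(y) > 0. \]
For P(X = x) = 0: the events {x - 1/n < X <= x} shrink down to {X = x} as n grows, which is where the x - 1/n comes from, and continuity of F makes the difference vanish:
\[ P(X = x) = \lim_{n \to \infty} \big( F(x) - F(x - 1/n) \big) = 0. \]
And the v is just a dummy variable of integration in F(x) = \int_{-\infty}^{x} f(v)\,dv; the book can't reuse x because x is already the upper limit.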
Reflective
Even though chapter 5 deals with two random variables, I like how conditional probability works essentially the same way as it does for events. Although this is "new" material, it's more like a reminder of what we've previously learned. The same is true for the distribution function F in section 7.1. At this point, I feel like a lot of the definitions are being reapplied to different situations.

Thursday, November 8, 2007

5.4 due on 11/9

Difficult
The theorem for the inclusion-exclusion inequalities makes sense, but I don't understand the proof. I don't see how or why combinations appear, and from that point on I can't follow the rest of the proof. For the formula for a geometric random variable, it may be a small detail, but I thought it raised (1-p) to the (k-1) power instead of the k power, unless X represents the number of tails before the first head rather than the total number of tosses. In part (b) of Example 13, I'm not sure how we came up with the probability of z tails. How did we get (z+n-1) C (n-1)?
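On the geometric convention and the count of (z+n-1) C (n-1), both of which come down to bookkeeping. The two versions of the geometric mass function correspond to the two interpretations:
\[ P(X = k) = (1-p)^{k-1} p \quad (k \ge 1,\ \text{toss of the first head}), \qquad P(X = k) = (1-p)^{k} p \quad (k \ge 0,\ \text{tails before the first head}). \]
For z tails before the n-th head: the final toss must be the n-th head, and the n - 1 earlier heads can sit anywhere among the first z + n - 1 tosses, so
\[ P(Z = z) = \binom{z+n-1}{n-1} p^n (1-p)^z. \]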
Reflective
The examples with the binomial random variable weren't as difficult as the rest of the section, since the binomial RV was introduced as a series of Bernoulli trials. Even though there's a theorem stating that any discrete RV can be written as a linear combination of Bernoulli trials, the binomial RV is the only one I can clearly visualize. I wonder how it is possible to represent a Poisson RV as a series of Bernoulli trials.
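For the binomial, the Bernoulli representation is literal: X = X_1 + ... + X_n with the X_i independent Bernoulli(p). For the Poisson, the connection I know of is a limit rather than a finite sum: taking X_n ~ Binomial(n, lambda/n),
\[ P(X_n = k) = \binom{n}{k} \left(\frac{\lambda}{n}\right)^{k} \left(1 - \frac{\lambda}{n}\right)^{n-k} \;\longrightarrow\; e^{-\lambda} \frac{\lambda^k}{k!} \quad \text{as } n \to \infty, \]
i.e., a Poisson RV is the limit of many Bernoulli trials, each with a small success probability.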

Sunday, November 4, 2007

5.2, 5.11(a), (b) due on 11/5

Difficult
I don't see how Example 1 in section 5.2 relates to independent random variables. I'm assuming it's used to show how two joint mass functions can be different even though their marginal mass functions are the same, but I'm not sure. In section 5.11 part (a), I don't follow solution II. I don't know what the event H includes. I thought it was the event that a hole is halved, but isn't that what Z is measuring?
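A tiny example of the phenomenon I suspect Example 1 is illustrating (my own construction, not the book's): take X and Y each valued in {0, 1}, and compare
\[ \text{Joint A: } p(0,0)=p(0,1)=p(1,0)=p(1,1)=\tfrac{1}{4}, \qquad \text{Joint B: } p(0,0)=p(1,1)=\tfrac{1}{2},\; p(0,1)=p(1,0)=0. \]
Both joints give the same marginals, P(X=0)=P(X=1)=P(Y=0)=P(Y=1)=1/2, but only Joint A makes X and Y independent, so the marginals alone can't determine the joint.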
Reflective
The proof of Theorem 7(a) in section 5.2 reminds me of the double integral from calculus, only with double sums instead. The idea seems very similar: as long as the x terms are grouped together and the y terms are grouped together, the double sum factors.
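Assuming Theorem 7(a) is the factorization of expectations for independent random variables, the grouping works like this:
\[ \sum_x \sum_y g(x) f_X(x)\, h(y) f_Y(y) = \Big( \sum_x g(x) f_X(x) \Big) \Big( \sum_y h(y) f_Y(y) \Big), \]
so for independent X and Y, E[g(X)h(Y)] = E[g(X)] E[h(Y)], exactly the way an iterated integral of a product of one-variable functions factors.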