Difficult
There's not much new material in this section, since the few theorems we are given are just special cases of change of variables (section 8.2). This section has shown me, though, that I still need to practice finding marginals. Other than that, the only part I had trouble with was the bivariate normal distribution example. I know the problem is mostly book-keeping, but I found it tricky to manipulate the equation so that the answer comes out nicely.
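To get that practice with marginals, I tried a quick numeric check in Python (my own toy example, not the book's): for the joint density f(x, y) = x + y on the unit square, the marginal should be f_X(x) = integral over y in [0, 1] of (x + y) dy = x + 1/2.

```python
# Toy example (not from the book): joint density f(x, y) = x + y on [0, 1]^2.
# The marginal f_X(x) = integral_0^1 (x + y) dy = x + 1/2; check it with a
# midpoint-rule Riemann sum over y.

def joint(x, y):
    return x + y

def marginal_x(x, n=10_000):
    """Approximate f_X(x) by numerically integrating the joint density over y."""
    dy = 1.0 / n
    return sum(joint(x, (j + 0.5) * dy) for j in range(n)) * dy

print(marginal_x(0.3))  # should be close to 0.3 + 0.5 = 0.8
```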
Reflective
Overall, this section wasn't very difficult. The special equations are nice to know in order to save time, but it's not much harder to work things out on your own. It really helped, though, to see more examples of these types of problems; they cleared up a few issues I was having earlier in section 8.2.
Thursday, December 6, 2007
Wednesday, December 5, 2007
8.3 due on 12/5
Difficult
I still don't feel fully confident about change of variables, so I think it might be hard to determine whether the random variables U = g(X) and V = h(Y) are independent. It's not so much the concept that gives me trouble as the lack of practice. In the Uniform Distribution example, I'm not sure about the reasoning behind the solution to part (a). The solution considers a set outside C and the intersection of a set with C; I thought we only had to look at sets of (x, y) inside C. Also, in the Normal Densities example, I'm completely lost on part (c). I'm not even sure what the question is asking. Perhaps a picture showing what we're trying to find the probability of would help?
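On the intersection question, I think the point is that for (X, Y) uniform on C, P((X, Y) in B) = area(B ∩ C) / area(C) for any set B, so the part of B outside C simply contributes 0. A quick simulation with my own toy sets (not the book's): C = unit disk, B = the unit square [0, 1] x [0, 1], where B ∩ C is a quarter disk and the probability should be (pi/4)/pi = 1/4 even though a corner of B lies outside C.

```python
import random

random.seed(0)

# (X, Y) uniform on the unit disk C.  For any set B,
# P((X, Y) in B) = area(B ∩ C) / area(C): only the part of B inside C matters.
# Toy check: B = [0, 1] x [0, 1], so B ∩ C is a quarter disk and P = 1/4.

def uniform_on_disk():
    while True:  # rejection sampling from the bounding square
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

n = 100_000
hits = 0
for _ in range(n):
    x, y = uniform_on_disk()
    if 0 <= x <= 1 and 0 <= y <= 1:
        hits += 1

print(hits / n)  # should be near 0.25
```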
Reflective
Even though we've gone over independence before, I feel like I might still have trouble with this section. The concept is the same; only the way you compute it is different. Before, we looked at events and discrete random variables. I believe the trick is to remember that for jointly distributed random variables, we don't look at the density f (the analogue of the p.m.f.) but rather at the distribution function F.
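To make that concrete, here's a Monte Carlo check (my own toy choices of g and h, not the book's) that for independent X and Y, the transformed variables U = g(X) and V = h(Y) satisfy the distribution-function factorization F_{U,V}(u, v) = F_U(u) F_V(v):

```python
import math
import random

random.seed(1)

# If X and Y are independent, then U = g(X) and V = h(Y) are independent,
# which shows up in distribution functions as F_{U,V}(u, v) = F_U(u) * F_V(v).
# Monte Carlo check at one point (u0, v0), with toy g and h:
n = 200_000
u0, v0 = 0.5, 2.0
both = cu = cv = 0
for _ in range(n):
    x, y = random.random(), random.random()  # X, Y independent Uniform(0, 1)
    u, v = x * x, math.exp(y)                # U = g(X), V = h(Y)
    a, b = u <= u0, v <= v0
    both += a and b
    cu += a
    cv += b

print(both / n, (cu / n) * (cv / n))  # the two numbers should be close
```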
Thursday, November 29, 2007
8.2 due on 11/30
Difficult
In the ellipse example, I'm not sure how |C| = πab. Where did the π come from? I'm assuming it comes from some formula for an ellipse, but I'm not sure. I'm also a little confused about how P = arctan(a/b) / π. I know the book says it's because Θ is uniformly distributed, but I still don't see how that works out. Do we take an integral? Or is it something conceptual?
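Answering part of my own question: π shows up because πab is just the area of the ellipse x²/a² + y²/b² ≤ 1 (with a = b = r it reduces to the circle area πr²). A quick Monte Carlo sanity check, with my own made-up a and b:

```python
import math
import random

random.seed(2)

# Hit-or-miss estimate of the area of the ellipse x^2/a^2 + y^2/b^2 <= 1,
# sampling from its bounding box [-a, a] x [-b, b] (box area = 4ab).
a, b = 3.0, 2.0
n = 200_000
inside = 0
for _ in range(n):
    x = random.uniform(-a, a)
    y = random.uniform(-b, b)
    if x * x / (a * a) + y * y / (b * b) <= 1:
        inside += 1

area_estimate = 4 * a * b * inside / n
print(area_estimate, math.pi * a * b)  # the two should be close
```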
Reflective
In the last few sections, I realized how out of practice I am with derivatives and integration. The introduction of the Jacobian in this section makes me feel even more out of practice. The theorem for the joint density under a change of variables looks a lot like something from one of the Math 30-series classes (I'm not even sure which one!). So even though this section wasn't incredibly difficult concept-wise, I feel it'll still be challenging because I'm so rusty with these kinds of calculations.
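To shake off the rust, I checked a Jacobian numerically (my own exercise, not from the book): for polar coordinates (r, t) → (x, y) = (r cos t, r sin t), the Jacobian determinant should come out to r.

```python
import math

# The Jacobian in the change-of-variables theorem is the same one from
# multivariable calculus.  For (r, t) -> (r cos t, r sin t) the determinant
# is r; check with central finite differences.

def transform(r, t):
    return r * math.cos(t), r * math.sin(t)

def jacobian_det(r, t, h=1e-6):
    x_r = (transform(r + h, t)[0] - transform(r - h, t)[0]) / (2 * h)
    y_r = (transform(r + h, t)[1] - transform(r - h, t)[1]) / (2 * h)
    x_t = (transform(r, t + h)[0] - transform(r, t - h)[0]) / (2 * h)
    y_t = (transform(r, t + h)[1] - transform(r, t - h)[1]) / (2 * h)
    return x_r * y_t - x_t * y_r

print(jacobian_det(2.0, 0.7))  # should be close to r = 2
```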
Tuesday, November 27, 2007
8.1 due on 11/28
Difficult
The triangle example gave me a couple of problems. For part (c) (show it is possible to construct a triangle...), I'm not quite sure what the solution means; I felt like the answer needed to be more than a couple of sentences. For part (d), I didn't even know where to begin. The problem seemed very tricky, even after I read how the book did it. I would also like to see the Bivariate Normal Density example worked out, since I feel the book skipped a few steps.
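If the triangle example is the classic broken-stick problem (break a unit stick at two independent Uniform(0, 1) points), then the three pieces form a triangle exactly when every piece is shorter than 1/2, and the probability should be 1/4. This is my guess at the setup, not necessarily the book's, but a quick simulation at least shows what the answer would look like:

```python
import random

random.seed(3)

# Guess at the setup: break a unit stick at two independent Uniform(0, 1)
# points; the three pieces form a triangle iff each is shorter than 1/2.

def forms_triangle():
    u, v = sorted((random.random(), random.random()))
    pieces = (u, v - u, 1 - v)
    return max(pieces) < 0.5

n = 100_000
p = sum(forms_triangle() for _ in range(n)) / n
print(p)  # should be close to 0.25
```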
Reflective
For the most part, this section applied concepts we already know to jointly continuous random variables. Densities, distributions, and marginals are all analogous to their earlier definitions. It's nice to see how the same concept works differently depending on the type of random variable(s) you have. The idea behind these concepts is the same in all cases, but the way you calculate them varies.
Sunday, November 25, 2007
7.4 due on 11/26
Difficult
The Pareto Density example confused me a little. Conceptually, I understand that the expectation is infinite and why, but when I tried to work it out on my own, I got stuck on the calculations. Also, the justification for using Definition 1 really confused me. I understand that the book is using a discrete approximation, but once again I got confused when trying to do the calculations. It's frustrating, because I want to see it work out mathematically instead of just accepting these statements as fact. At the end of the section, Jensen's inequality is mentioned. What does it mean for a function to be convex? Also, Chebyshev's inequality is simply stated, with no proof. I can get it to work out mathematically, but what does it mean conceptually?
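To see the infinite-expectation claim work out numerically, I computed the truncated expectation of a Pareto density (using α = 1 as a concrete case; the book's example may use a different α). With f(x) = α x^(-(α+1)) on [1, ∞), the truncated integral ∫ from 1 to M of x f(x) dx equals ln M when α = 1, which grows without bound, so E[X] = ∞:

```python
import math

# Pareto(alpha) density on [1, inf): f(x) = alpha * x**(-(alpha + 1)).
# For alpha = 1, integral_1^M x f(x) dx = integral_1^M (1/x) dx = ln(M),
# which diverges as M -> inf, so the expectation is infinite.
alpha = 1.0

def truncated_expectation(M, n=100_000):
    """Midpoint-rule approximation of integral_1^M x * f(x) dx."""
    dx = (M - 1) / n
    total = 0.0
    for i in range(n):
        x = 1 + (i + 0.5) * dx
        total += x * alpha * x ** (-(alpha + 1)) * dx
    return total

for M in (10, 100, 1000):
    print(M, truncated_expectation(M), math.log(M))  # the last two agree
```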
Reflective
The section was relatively easy to understand, since we've done expectations twice already. In justifying Definition 1, the book tried to show us mathematically why it works. Instead, I just try to remember that we're dealing with continuous random variables, so we can't sum over every possible value of x, since P(X = x) = 0 for each one. Surely not every random variable has expectation 0! So we use integrals, which give us a "sum" over all possible values of x without it always coming out 0.
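The discrete-approximation idea, written out as a toy check (my own, not the book's derivation): chop the range of X ~ Uniform(0, 1) into bins of width 1/n, weight each bin's left endpoint by the probability of landing in that bin, and the sum approaches E[X] = ∫ from 0 to 1 of x dx = 1/2.

```python
# Discrete approximation of E[X] for X ~ Uniform(0, 1):
# round X down to a multiple of 1/n; each bin has probability dx = 1/n.
# The expectation of the discretized variable tends to 1/2 as n grows.

def discretized_mean(n):
    dx = 1.0 / n
    return sum((i * dx) * dx for i in range(n))

for n in (10, 100, 1000):
    print(n, discretized_mean(n))  # 0.45, 0.495, 0.4995 -> 1/2
```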
Tuesday, November 20, 2007
7.2 due 11/21
Difficult
In Example (2), I'm a little confused about why F(y) is non-zero for y <= 0 and then f(y) is non-zero for y > 0. Why did the ranges of y switch? Also, the book says the derivative does not exist at y = 0. I may be out of practice with derivatives, but I thought the derivative at y = 0 was 0. Also, in the step function example, I don't quite see how F_S(x) >= F_X(x). I'm also having trouble understanding how an integer-valued random variable approximates a continuous one.
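My best guess (it may not match the book's Example (2)) is that the example is something like Y = X² with X standard normal, so F_Y(y) = erf(√(y/2)) for y ≥ 0 and 0 otherwise. Checking the difference quotient at y = 0 numerically shows why the derivative fails to exist there: the slope isn't 0, it blows up.

```python
import math

# Guess at the example: Y = X^2, X standard normal.
# Then F_Y(y) = P(X^2 <= y) = 2*Phi(sqrt(y)) - 1 = erf(sqrt(y/2)) for y >= 0.
# The right-hand difference quotient at y = 0 grows like 1/sqrt(h),
# so F_Y has no (finite) derivative at 0.

def F(y):
    return math.erf(math.sqrt(y / 2)) if y > 0 else 0.0

for h in (1e-2, 1e-4, 1e-6):
    print(h, (F(h) - F(0)) / h)  # quotient keeps growing as h shrinks
```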
Reflective
This section on functions of random variables is very similar to what we covered in section 5.4 on sums and products of random variables; only now we're dealing with more complicated functions, such as logs or exponentials. I thought the examples on inverse functions were helpful, since the ideas are fairly easy to grasp but not so obvious mathematically.
Sunday, November 18, 2007
Rest of 7.1 due on 11/19
Difficult
In Example (17), f(x) = 0 for x = 0. But then the book implies you could just as well set f(0) = λ/2, because it doesn't matter. Why doesn't it matter? Why is that point "exceptional"? Also, I thought the later examples in the section were a bit hard to follow. I think I got confused by the notation and all the Greek letters. I was able to (somewhat) follow things up to the Normal Density example; after that, I was very confused.
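To convince myself why one point doesn't matter: probabilities come from integrals of f, and a single point contributes (value) × (width) → 0 as the width shrinks. A toy check with the exponential density (λ = 1, my own numbers, not the book's):

```python
import math

# Set f(0) to either 0 or 1/2 for the exponential density (lambda = 1) and
# compute P(X <= 1) with a left-endpoint Riemann sum.  The gap between the
# two answers is 0.5 * dx, which vanishes as the grid is refined -- so the
# value of a density at a single point has no effect on probabilities.
lam = 1.0

def prob(f0, n):
    dx = 1.0 / n
    total = 0.0
    for i in range(n):
        x = i * dx
        fx = f0 if x == 0 else lam * math.exp(-lam * x)
        total += fx * dx
    return total

exact = 1 - math.exp(-1)
for n in (100, 10_000):
    print(n, abs(prob(0.0, n) - prob(0.5, n)))  # gap = 0.5/n -> 0
```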
Reflective
The book says that there are continuous random variables that do not have a density. What would these random variables look like? It must be something with a non-differentiable distribution function F, but I'm curious to see an example.
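Partly answering my own question: I believe the standard example (worth double-checking against the book) is the Cantor distribution, X = Σ 2Bᵢ/3ⁱ with the Bᵢ independent Bernoulli(1/2). Its distribution function is continuous but flat off the Cantor set, so F′ = 0 almost everywhere and no density can exist. A sampling sketch:

```python
import random

random.seed(4)

# Cantor distribution: base-3 expansion with digits 0 or 2, each chosen by a
# fair coin.  The CDF is the continuous "devil's staircase," which has
# derivative 0 almost everywhere, so this random variable has no density.

def cantor_sample(digits=30):
    x = 0.0
    for i in range(1, digits + 1):
        x += 2 * random.randint(0, 1) / 3 ** i
    return x

samples = [cantor_sample() for _ in range(20_000)]
print(min(samples), max(samples), sum(samples) / len(samples))  # mean near 1/2
```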