Thursday, December 6, 2007

8.4 due on 12/7

Difficult
There's not much new material in this section since the few theorems we are given are just special cases of change of variables (section 8.2). That said, this section has shown me that I still need to practice finding marginals. Other than that, the only part I had trouble with was the bivariate normal distribution example. I know the problem is mostly book-keeping, but I found it tricky to manipulate the equation so that the answer comes out nicely.
Reflective
Overall, this section wasn't very difficult. The special equations are nice to know in order to save time, but it's not that much harder to work things out on your own. That said, it really helped to see more examples of these types of problems. It cleared up a few issues I was having earlier in section 8.2.

Wednesday, December 5, 2007

8.3 due on 12/5

Difficult
I still don't feel fully confident about change of variables, so I think it might be hard to determine whether the RVs U = g(X) and V = h(Y) are independent. It's not so much the concept that gives me trouble but the lack of practice. In the Uniform Distribution example, I'm not sure about the reasoning behind the solution to part (a). The solution considers a set outside C and the intersection of a set with C. I thought we just had to look at sets of (x,y) in C. Also, in the Normal Densities example, I'm completely lost on part (c). I'm not even sure what the question is asking. Perhaps a picture would help show what we're trying to find the probability of?
Reflective
Even though we've gone over independence before, I feel like I might still have trouble with this section. The concept is the same; it's just that the way you compute it is different. Before, we looked at events and discrete random variables. I believe the trick is to remember that for jointly distributed random variables, we don't look at the density f (~ p.m.f.), but rather the distribution function F.
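To pin this down for myself, here's the criterion as I understand it (a standard fact, stated in my own notation rather than the book's): the joint distribution function factors, and when a joint density exists, it factors too.

```latex
% X and Y are independent exactly when
F(x,y) = F_X(x)\,F_Y(y) \quad \text{for all } x, y;
% when a joint density exists, this is equivalent to
f(x,y) = f_X(x)\,f_Y(y).
```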

Thursday, November 29, 2007

8.2 due on 11/30

Difficult
In the ellipse example, I'm not sure how |C| = (pi) ab. Where did the pi come from? I'm assuming it has to do with some equation for an ellipse, but I'm not sure. I'm also a little confused how P = (arctan(a/b)) / pi. I know the book says it's because Theta is uniformly distributed, but I still don't know how that worked out. Do we take an integral? Or perhaps it's something conceptual?
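After writing this, I tried to reconstruct both facts myself. The area comes from a plain integral, and the probability is just the length of the angle interval over the total length, assuming Theta is uniform on an interval of length pi (my guess at the book's setup):

```latex
|C| = \int_{-a}^{a} 2b\sqrt{1 - x^2/a^2}\,\mathrm{d}x
    = 2ab \int_{-1}^{1} \sqrt{1 - u^2}\,\mathrm{d}u
    = 2ab \cdot \frac{\pi}{2} = \pi a b,
\qquad
P\big(0 \le \Theta \le \arctan(a/b)\big) = \frac{\arctan(a/b)}{\pi}.
```

So no integral is needed for the second part: for a uniform random variable, the probability of landing in a subinterval is just its length divided by the total length.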
Reflective
In the last few sections, I realized how out of practice I am with derivatives and integration. In this section, the introduction of the Jacobian makes me feel even more out of practice. The theorem for the joint density of change of variables looks a lot like something from one of the Math 30-series classes (I'm not even sure which one!). So, even though this section wasn't incredibly difficult concept-wise, I feel it'll still be challenging because I'm so rusty with these kinds of calculations.

Tuesday, November 27, 2007

8.1 due on 11/28

Difficult
The triangle example gave me a couple of problems. For part c (show it is possible to construct a triangle...), I'm not quite sure what the solution means. I felt like there needed to be more than a couple of sentences as an answer. For part d, I didn't even know where to begin. The problem seemed very tricky, even after I read how the book did it. I would also like to see the Bivariate Normal Density example worked out, since I feel like the book skipped a few steps.
Reflective
For the most part, a lot of the material in this section was concepts we already know applied to joint continuous random variables. Densities, distributions, and marginals are all analogous to their earlier definitions. It's nice to be able to see how the same concept works differently depending on the type of random variable(s) you have. The ideas behind these concepts are the same in all cases, but the way you calculate them varies.

Sunday, November 25, 2007

7.4 due on 11/26

Difficult
The Pareto Density example confused me a little bit. Conceptually, I understand the expectation is infinity and why, but when I tried to work it out on my own, I got stuck doing the calculations. Also, the justification for using Definition 1 really confused me. I understand that the book is using a discrete approximation, but once again I get confused when trying to do the calculations. It's frustrating because I want to be able to see it work out mathematically instead of just accepting these statements as fact. At the end of the section, Jensen's inequality is mentioned. What does it mean for a function to be convex? Also, Chebyshov's inequality is simply stated, but there's no proof. I can get it to work out mathematically, but what does it mean conceptually?
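I eventually worked the expectation out. Assuming the Pareto density has the standard form f(x) = alpha/x^(alpha+1) for x >= 1 (the book's parametrization may differ), the integral makes the divergence visible:

```latex
E(X) = \int_{1}^{\infty} x \cdot \frac{\alpha}{x^{\alpha+1}}\,\mathrm{d}x
     = \int_{1}^{\infty} \alpha x^{-\alpha}\,\mathrm{d}x
     = \begin{cases} \dfrac{\alpha}{\alpha - 1}, & \alpha > 1,\\[4pt]
                     \infty, & \alpha \le 1. \end{cases}
```

As for convexity: g is convex when every chord lies on or above its graph, i.e. g(lambda*x + (1-lambda)*y) <= lambda*g(x) + (1-lambda)*g(y) for all lambda in [0,1].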
Reflective
The section was relatively easy to understand since we've done expectations twice already. When justifying why we use Definition 1, I know the book tried to show us mathematically why it works. But instead, I just try to remember that we're dealing with continuous random variables, so we can't sum over every possible value of x since P(X=x) = 0. Surely the expectation for all random variables is not 0! So we use integrals since that will give us a "sum" of all possible values of x without equaling 0 all the time.
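A sketch of the discrete approximation that I find easier to remember than the book's justification: chop the line into small intervals of width Delta x, use P(x < X <= x + Delta x) ~ f(x) Delta x, and the sum becomes an integral:

```latex
E(X) \approx \sum_{i} x_i\, P(x_i < X \le x_i + \Delta x)
      \approx \sum_{i} x_i f(x_i)\,\Delta x
      \;\longrightarrow\; \int_{-\infty}^{\infty} x f(x)\,\mathrm{d}x .
```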

Tuesday, November 20, 2007

7.2 due 11/21

Difficult
In Example (2), I'm a little confused about why F(y) is non-zero for y <= 0 and then f(y) is non-zero for y > 0. Why did the values of y switch? Also, the book says the derivative does not exist at y = 0. I may be out of practice on derivatives, but I thought the derivative at y = 0 was 0. Also, in the step function example, I don't quite see how F_S(x) >= F_X(x). I'm also having trouble understanding how an integer-valued random variable is an approximation of a continuous random variable.
Reflective
This section on functions of random variables is very similar to what we covered in section 5.4 - sums and products of random variables. Only now, we're dealing with more complicated functions, such as logs or exponents. I thought the examples on inverse functions were helpful since the ideas are kind of easy to grasp, but not so obvious mathematically.

Sunday, November 18, 2007

Rest of 7.1 due on 11/19

Difficult
In Example (17), for x=0, f(x) = 0. But then the book implies you could set f(x) = lambda/2 because it doesn't matter. Why does it not matter? Why is that point "exceptional?" Also, I thought the later examples in the section were a bit difficult to follow. I think I got confused with the notation and all the Greek letters. I was able to (somewhat) follow up to the Normal Density example. After that, I was very confused.
Reflective
The book says that there are continuous random variables that do not have a density. What would these random variables look like? It must be something with a non-differentiable distribution function F, but I'm curious to see an example.

Tuesday, November 13, 2007

5.5 until p.179 (conditional expectation), 7.1 until p.291 (Example 4 exclusively) due on 11/14

Difficult
The definition of conditional mass function is very similar to the definition of conditional probability for events. However, the denominator in the definition confuses me. Shouldn't the denominator be fY(y) instead of fY(x)? That way, it would be analogous to the conditional probability for events. Also, the examples use fY(y) instead. In section 7.1, I'm not sure what the book means by "state space." Is this similar to a sample space, only instead the number of possible outcomes is uncountable? I also don't fully understand the proof of P(X=x) = 0 if F(x) is continuous. I'm not sure how they got (x - 1/n). And lastly, in the definition of a continuous random variable, does the v in the integral stand for anything?
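For my own reference, here is the definition with the denominator I'd expect (so the fY(x) printed in the book is presumably a typo):

```latex
f_{X \mid Y}(x \mid y) = P(X = x \mid Y = y)
  = \frac{f_{X,Y}(x, y)}{f_Y(y)}, \qquad \text{defined when } f_Y(y) > 0,
```

which is exactly P(A ∩ B)/P(B) with A = {X = x} and B = {Y = y}.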
Reflective
Even though chapter 5 deals with two random variables, I like how conditional probability is essentially the same as when dealing with events. Although this is "new" material, it's more like a reminder of what we've previously learned. The same is true for the distribution function F in section 7.1. At this point, I feel a lot of the definitions are being reapplied to different situations.

Thursday, November 8, 2007

5.4 due on 11/9

Difficult
The theorem for inclusion-exclusion inequalities makes sense, but I don't understand the proof. I don't know how or why there are combinations and from then on, I can't follow the rest of the proof. For the equation of a geometric random variable, it may be a small detail, but I thought the equation raised (1-p) to the (k-1) power instead of the k power. Unless X is the random variable representing the number of tails before the first head, instead of the total number of tosses. In part (b) of Example 13, I'm not sure how we came up with the probability of z tails. How did we get (z+n-1) C (n-1) ?
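Writing out both conventions side by side cleared up the (1-p) exponent for me, and the binomial coefficient in Example 13 is just a count of arrangements (this is the standard negative-binomial argument, which I'm assuming is what the book is doing):

```latex
% Two conventions for the geometric RV:
P(X = k) = (1-p)^{k-1} p \ (k \ge 1,\ \text{total tosses}),
\qquad
P(X = k) = (1-p)^{k} p \ (k \ge 0,\ \text{tails before the first head}).
% Negative binomial: z tails before the n-th head. The last toss must be
% the n-th head, so the z tails sit anywhere among the first z+n-1 tosses:
P(Z = z) = \binom{z+n-1}{n-1} p^{n} (1-p)^{z},
\qquad \binom{z+n-1}{n-1} = \binom{z+n-1}{z}.
```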
Reflective
The examples with the binomial random variable weren't as difficult as the rest of the section since the binomial RV was introduced as a series of Bernoulli trials. Even though there's a theorem stating any discrete RV can be written as a linear combination of Bernoulli trials, the binomial RV is the only one I can clearly visualize. I wonder how it is possible to represent a Poisson RV as a series of Bernoulli trials?

Sunday, November 4, 2007

5.2, 5.11(a), (b) due on 11/5

Difficult
I don't see how Example 1 in section 5.2 relates to independent random variables. I'm assuming it's used to show how two marginal mass functions can be the same but the joint mass functions are different, but I'm not sure. In section 5.11 part (a), I don't follow solution II. I don't know what the event H includes. I thought it was the event a hole is halved, but isn't that what Z is measuring?
Reflective
The proof of Theorem 7(a) in section 5.2 reminds me of the double integral in calculus, only with double sums instead. I thought it was very similar; as long as the X variables are together and the Y variables are together, the equation works.
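The factorization I mean, written out (standard algebra, nothing book-specific):

```latex
\sum_{x}\sum_{y} g(x)\,h(y)
  = \sum_{x} g(x) \Big( \sum_{y} h(y) \Big)
  = \Big( \sum_{x} g(x) \Big) \Big( \sum_{y} h(y) \Big),
```

valid because g(x) is a constant with respect to the inner sum over y.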

Tuesday, October 30, 2007

4.4, 4.9(c) due on 10/31

Difficult
I was able to follow the coin example up until equation 11. Why is E(X) = E(X')? I'm guessing it's because their probabilities are equal, so does that mean that knowing the first toss is heads makes no difference? Also, is the problem asking for the expected value of the length of runs of HEADS or TAILS? I assume we are looking at runs of heads out of convenience, but I'm not sure. The computation of the variance also confused me. I think it's because I haven't practiced it as much, but it was hard for me to follow.
Reflective
Aside from the above confusion, I found section 4.4 fairly straightforward since most of it relates to what we've already learned about conditional probability. It's nice to get a review of the material and also see it being applied in a different way. In section 4.9, I thought it was a little tricky since you had to think about writing p as np/n and realize that (1 + x/n)^n approaches e^x as n gets large. Otherwise, that example was also straightforward.
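The limit in question, spelled out (this is the standard Poisson approximation to the binomial with lambda = np held fixed, which I'm assuming is the setup in 4.9):

```latex
\binom{n}{k} \Big(\frac{\lambda}{n}\Big)^{k} \Big(1 - \frac{\lambda}{n}\Big)^{n-k}
\;\xrightarrow[n \to \infty]{}\; \frac{\lambda^{k} e^{-\lambda}}{k!},
\qquad \text{using } \Big(1 - \frac{\lambda}{n}\Big)^{n} \to e^{-\lambda}.
```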

Sunday, October 28, 2007

4.10 due on 10/29

Difficult
Once again, I'm having trouble applying the formula. In part (b), I kind of followed along, but what happens to the k! in the denominator? Is it because the book only considered the cases where k=0 or k=1, since then k! would be 1? Also, to find the number of bites a random postman sustains, do we take the derivative? If so, why?
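Checking my own guess about the k!: for k = 0 and k = 1 it is indeed 1, so those two Poisson terms are

```latex
P(X = 0) + P(X = 1)
  = e^{-\lambda} \frac{\lambda^{0}}{0!} + e^{-\lambda} \frac{\lambda^{1}}{1!}
  = e^{-\lambda}(1 + \lambda),
```

with 0! = 1! = 1, which is why no factorial shows up in the final answer.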
Reflective
I was very happy that I could understand the derivation of the Poisson distribution in part (a). The book is hard to follow, but after playing with the equation a bit I got the final formula. However, as mentioned above, I'm still confused about which values are assigned to which variables when applying a formula.

Thursday, October 18, 2007

4.3 due on 10/19

Difficult
I'm not sure how we can assign positive infinity or negative infinity to E(X) if, by definition, E(X) must be finite. I'm also not sure what the purpose is of the real-valued function g(.). Is the domain of g the actual random variable or is it the range of X? What does it mean to find the expected value of X^2? What is the definition of a moment? I'm also very confused by the Coupons example, as well as others in this section. I think the reason I'm having so much trouble following these examples is because I still don't completely understand the previous section.
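Looking these up afterwards: for a function g of a discrete RV X, the expectation can be computed without first finding the distribution of g(X), and the k-th moment is the special case g(x) = x^k (these are the standard definitions, so they should match the book's):

```latex
E\big(g(X)\big) = \sum_{x} g(x)\, f(x),
\qquad
E\big(X^{k}\big) = \sum_{x} x^{k} f(x) \quad (k\text{-th moment}).
```

So E(X^2) just weighs each squared value by the same probabilities f(x), and the domain of g is the range of X.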
Reflective
The idea of expectation makes a lot of sense to me. Each value of X is weighted by its probability, so the values with greater probability pull the expectation toward them. Pictorially it's like a graph, where the most heavily weighted values are where the area below is largest.

Tuesday, October 16, 2007

4.2 due on 10/17

Difficult
This reading introduced a few new concepts, leaving me with lots of questions. First off, I understand the definition of a probability mass function but I'm having trouble applying it. I'm running into the same problem as before: I don't know where the numbers come from. In the poker example, I know the numerator is all the outcomes which satisfy 2 pairs, but I don't know how to get that numerator. I also thought the equations for a proper random variable and the Key Rule looked similar. What's the difference between the two? And what's an example of where the sum of probability mass functions is less than one?
I also did not fully understand the Poisson Distribution and the Negative Binomial Distribution. For Poisson, I don't see how the sum equals 1. I referred back to Theorem 3.6.9, but that only confused me more. For the Negative Binomial, how do we decide what f(r) equals? And how do we know f(r) lies within [0,1]? The cumulative distribution function F(x) completely confused me. I don't understand what it does or its relationship to f(x). Figure 4.1 didn't really help either. Is F(x) like finding the integral of f(x)?
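Two of these I managed to resolve by writing them out. The Poisson mass function sums to 1 by the exponential series, and F is a running sum of f (the discrete analogue of an integral):

```latex
\sum_{k=0}^{\infty} \frac{\lambda^{k} e^{-\lambda}}{k!}
  = e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^{k}}{k!}
  = e^{-\lambda} e^{\lambda} = 1,
\qquad
F(x) = P(X \le x) = \sum_{y \le x} f(y).
```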
Reflective
Overall, I feel like the book is skipping steps and I'm having trouble filling in the missing pieces. I feel like I should be able to figure out some of these steps on my own but I can't. I had the most trouble in this reading with the cumulative distribution function. I want to say that F(x) is like finding an integral since it's a distribution, but I'm still not sure.

Sunday, October 14, 2007

4.1 due on 10/15

Difficult
This reading was not as difficult as the last couple of readings. The part I had the most trouble with was the definition of a discrete random variable. At first, I thought X(w) was a probability function instead of a function mapping Omega to a countable set of real numbers. What I don't understand is why the sample space doesn't need to be countable. From the Darts example, when w is an outcome where the dart doesn't hit the board, then X(w) = 0. Does that mean that even though the sample space is uncountable, the only set of outcomes that matters is when the dart hits the board, which is countable?
Reflective
The concept of an indicator reminded me of what I've learned in my programming classes. It's like an indicator is a "true or false" test, with 1 being true and 0 being false.
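The analogy can be made literal in a few lines of Python (a toy sketch of mine, not anything from the book):

```python
def indicator(A):
    """Return the indicator function I_A: 1 if the outcome is in A, else 0."""
    return lambda w: 1 if w in A else 0

# Sample space for one die roll, and the event "the roll is even".
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
I_A = indicator(A)

print(I_A(4))                                   # 1 ("true")
print(I_A(3))                                   # 0 ("false")
print(sum(I_A(w) for w in omega) / len(omega))  # 0.5 = P(A) for a fair die
```

The last line is the handy fact that averaging an indicator over equally likely outcomes gives the probability of the event.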

Thursday, October 11, 2007

3.3, 3.6(9)-(12), 3.12 due on 10/12

Difficult
Theorem 3.3.3 confused me a little because I'm not sure how we derived the equation. It seems we take the product of (n_i + 1) instead of only n_i because we're adding any number of symbols from zero to n_i, which gives (n_i + 1) choices. However, I don't understand why we also subtract 1. I also don't understand how to derive the exponential function theorem and the binomial theorems.
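After some thought, I think the subtract-1 is about the empty selection. If symbol i can appear anywhere from 0 to n_i times, each symbol contributes (n_i + 1) independent choices, and the all-zeros choice (picking nothing) gets thrown out -- assuming the theorem counts nonempty selections, which is my reading of it:

```latex
\#\{\text{nonempty selections}\} = \prod_{i=1}^{r} (n_i + 1) \; - \; 1 .
```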
Reflective
I did not find Example 3.12 helpful at all. I feel like I'm missing the reasoning behind the problem. As I've said before, it's nice to see examples of how and when to apply certain theorems, but not knowing why does not make me feel confident when approaching homework problems. As mentioned in my first blog post, grasping new math concepts is still something that I need to work on.

Tuesday, October 9, 2007

3.1, 3.2 due on 10/10

Difficult
I understood most of section 3.1 because a lot of it I've learned before. However, the following definition confused me: "a number n (say) of objects or things are to be divided or distributed into r classes or groups." In the example following this definition, there wasn't any reference to this definition or to what n and r correspond to in a given problem. In section 3.2, this was cleared up, since theorems (1) and (2) gave better explanations of how we can split n objects into r groups. However, theorem (3) gave me some trouble because I don't quite follow the proof and I'm not sure what a multinomial coefficient is.
Reflective
The example in section 3.2 really helped me understand how the multinomial coefficient works, even though I'm still not sure what it is. It seems to me that if you have M_n(x, y, z), then there are n total objects, with x objects of type 1, y objects of type 2, and z objects of type 3. The reason you divide n! by the product x! y! z! is to avoid repeats among the permutations.
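A small check of my reading of it: arranging the four letters AABC (n = 4, with x = 2 copies of A, y = 1 B, z = 1 C):

```latex
\binom{4}{2,\,1,\,1} = \frac{4!}{2!\,1!\,1!} = \frac{24}{2} = 12,
```

which matches listing the arrangements by hand; dividing by 2! collapses the two orderings of the identical A's.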

Sunday, October 7, 2007

2.2, 2.13 due on 10/8

Difficult
I had trouble understanding the definitions of conditionally independent and pairwise independent. It seems conditionally independent means two events are still independent of each other given a third event. For pairwise independence, I thought of it as an extension of an independent collection, where the finite set F has cardinality 2. I also had trouble with the idea of protocol. It seemed like the first example (Tom and siblings) was like the ones we've done in class. It didn't occur to me that Tom could be a twin or a girl. For the goat/car example, I'm still not sure how the emcee's involvement changes the probability.
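To convince myself about pairwise independence, I checked the classic two-coin example in Python: A = first toss heads, B = second toss heads, C = the tosses match. Every pair is independent, but the three together are not. (This is a sketch I wrote myself, not from the book.)

```python
from fractions import Fraction
from itertools import product

# Two fair coin tosses, all four outcomes equally likely.
omega = list(product("HT", repeat=2))

def prob(event):
    """Probability of an event given as a predicate on outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == "H"   # first toss is heads
B = lambda w: w[1] == "H"   # second toss is heads
C = lambda w: w[0] == w[1]  # the two tosses match

# Pairwise independent: P(X and Y) == P(X) P(Y) for every pair.
pairs = [(A, B), (A, C), (B, C)]
print(all(prob(lambda w: X(w) and Y(w)) == prob(X) * prob(Y)
          for X, Y in pairs))                      # True

# ...but not mutually independent: P(A and B and C) = 1/4, not 1/8.
print(prob(lambda w: A(w) and B(w) and C(w)))      # 1/4
print(prob(A) * prob(B) * prob(C))                 # 1/8
```

Knowing any one of A, B, C tells you nothing about any single other one, yet any two of them together determine the third.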
Reflective
Despite my confusion, I find the idea of protocol very intriguing. It shows we have to consider all the factors of a problem or question before thinking about the answer. It was interesting to learn that when the examples in section 2.13 were published, the majority of readers (myself included) were deceived and answered incorrectly.

Thursday, October 4, 2007

2.1, 2.7 due on 10/5

Difficult
I understand definition 2.1.1 and theorem 2.1.2, but the poker revisited example completely throws me off because I don't know how they got the numbers. What I mean is, how do you come up with 1/(52C5) as the probability for the intersection of R and SA or (51C4)/(52C5) as the probability of SA? I thought the probability of drawing the ace of spades is 1/52. I also had trouble with repellent and attractive events. The second example using Bayes's Theorem made no sense to me. Like in the poker problem, I don't know how they got certain values for the probabilities.
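The counting I later pieced together (standard counting, so it should match the book): here the equally likely outcomes are five-card hands, not single cards, so each hand has probability 1/(52 C 5), and an intersection with probability 1/(52 C 5) means exactly one hand satisfies both events. For S_A:

```latex
P(S_A) = \frac{\binom{51}{4}}{\binom{52}{5}}
       = \frac{51!/(4!\,47!)}{52!/(5!\,47!)}
       = \frac{5}{52},
```

where the numerator counts hands containing the ace of spades: that card is fixed, and the other four come from the remaining 51. The 1/52 I had in mind is the chance that a single drawn card is the ace of spades; for a five-card hand it becomes 5 x (1/52) = 5/52.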
Reflective
This reading has shown me that I need to review combinations. I believe that is the main reason why I am so confused by the poker example. This reading has also pointed out that I'm able to understand concepts, but when I try to apply the formulas, I'm not sure which equation to use or when to use it.

Tuesday, October 2, 2007

1.8, 1.12 due on 10/3

Difficult
I didn't find this reading really difficult to understand, but there were some parts that were confusing when I first read it. The example in section 1.12 threw me off for a little bit. I tried to think about how I would go about solving the problem on my own before reading the solution below. Part (a) was pretty easy, but I was stumped by parts (b) and (c). I had to think about why we could consider some births "fictitious" when dealing with families with less than three children. If the solution was not given to me, I'm not sure I would have figured that out by myself.
Reflective
I like examples because I find them very useful. This reading cleared up how to apply some of the concepts we've learned in the previous readings. As mentioned above, I found section 1.12 very helpful. Section 1.8 was also good because I got to see a case in which using the complementary event is more efficient (faster) than a more direct approach.

Sunday, September 30, 2007

Week 1 - Monday

Difficult
In section 1.4, I still don't understand how property #8 works. I'm assuming this property applies when the events are not disjoint, or else property #3 would apply. When comparing this with Example (13), I come up with more questions. Why do we add some probabilities and subtract others? How can we be sure that this outcome will be less than or equal to 1? What happens when there are more than three events?
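Writing the property out for three events makes the add/subtract pattern visible (this is the standard inclusion-exclusion identity, which I'm assuming is what property #8 is):

```latex
P(A \cup B \cup C) = P(A) + P(B) + P(C)
  - P(A \cap B) - P(A \cap C) - P(B \cap C)
  + P(A \cap B \cap C).
```

The subtractions correct for outcomes counted twice in the single-event terms, and the final addition restores the triple overlap that got subtracted too many times. Since the left side is a probability, the whole expression is automatically <= 1, and the same alternating pattern continues for more than three events.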
Reflective
I'm having a little trouble grasping the concept of F, a collection of subsets of omega, but I think it's analogous to the concept of a field F in linear algebra. When deciding whether or not A is an event, you must make sure A and its complement are in F. But if F is a collection of subsets of omega, how can omega itself be in F?

Week 0 - Friday

Difficult
It's always difficult to get used to a new textbook and the type of terminology and notation it uses. I'm still a little unsure about what the author means by "symmetry." Based on the context, I'm assuming that it means all possible outcomes have an equal chance of occurring, since "symmetry" is not used to describe the case with the weighted die. I'm also confused about when "capital omega" denotes the universal set or a sample space.
Reflective
It was nice to get some review on series and limits, since it's been a while since I've seen the formal notation. I found the Venn diagrams very useful in trying to visualize unions, intersections, and differences. I thought Example (13) on page 29 was a very good problem to show how we can use sets to draw more conclusions.

Thursday, September 27, 2007

HW 0

My name is Sara Chan and I am a third-year math major at UCLA. So far, I've taken Math 31B, 32A, 32B, 33A, 33B, 61, and 115A.

What I like about math is that once you figure out the pattern or process behind a problem, you can solve any similar problem. I'm a very detail-oriented and systematic person; if you show me something step-by-step, I will always follow each step in that same order. I really like finding patterns and I think that's what makes me "strong" in math.

Of course, everyone has their weaknesses. As much as I love patterns, I do find it difficult at times to actually *find* them. Often times, a TA or professor may ask "what pattern do you see?" and my mind draws a blank. It's only after I stare at the problem for a long time that I can see a pattern. New math concepts no longer come to me as quickly as they did in high school, which frustrates me. Now in college, I find that I have to work harder in my math classes. This doesn't make me dislike math, but it's definitely a challenge to change my mindset and realize that I have to put more effort into my math work in order to succeed.

I believe a good math teacher not only needs to explain the material well, but also needs to be enthusiastic about math. My favorite math teachers are the ones who make me want to go to class and learn. When I see their enthusiasm in their teaching, it makes me want to continue my line of study. I also believe a good math teacher should be flexible and approachable. In my experience, when a teacher seems reserved or intimidating, it's more difficult for students to build up the courage to go to office hours. It's difficult to get excited about math when your professor is always facing the blackboard and lecturing in a monotone voice.

For this course, I've read the syllabus and understand the following. After final exams are graded, they are kept for one quarter and then available for pickup the following quarter. After two quarters have passed, they are recycled. An assignment is considered semi-late if it is turned in during class (between 11:00 and 11:50 am). The five minutes rule states that if I run into Professor Brose, she'll (almost) always have five minutes to talk with me.