Week 3
Chapter 3: Expected Value and Variability of Random Variables

0.1 Expected Value

Motivation

Let's consider a lottery game to illustrate the concept of expected value. In this lottery, you buy a ticket for ₹500, and there are three possible prizes that you can win:

• a ₹1,00,000 prize,
• a ₹10,000 prize,
• a ₹1,000 prize.

There is also a chance that you win nothing at all. The probabilities of winning each prize are as follows:

Probability of winning ₹1,00,000 = 0.001
Probability of winning ₹10,000 = 0.01
Probability of winning ₹1,000 = 0.05
Probability of winning nothing = 1 − (0.001 + 0.01 + 0.05) = 0.939

1. Would you like to play this game?
2. What would you base your decision on?

You might want to look at a single number that represents your expected gain. In probability and statistics, the expected value of a random variable is defined formally to answer exactly such questions. To calculate the expected value, we use the formula:

Expected Value = Σ (value of outcome × probability of outcome)

Now, let's calculate each prize's contribution to the expected value:

Expected value from the ₹1,00,000 prize = ₹1,00,000 × 0.001 = ₹100
Expected value from the ₹10,000 prize = ₹10,000 × 0.01 = ₹100
Expected value from the ₹1,000 prize = ₹1,000 × 0.05 = ₹50
Expected value from winning nothing = ₹0 × 0.939 = ₹0

Summing these values gives the total expected value of the winnings:

Expected Value = ₹100 + ₹100 + ₹50 + ₹0 = ₹250

However, since you pay ₹500 to participate in the lottery, we need to subtract the ticket price from the expected value of the winnings:

Net Expected Value = ₹250 − ₹500 = −₹250

Conclusion: On average, you can expect to lose ₹250 every time you play this lottery. The concept of expected value helps you understand whether a game or decision is favorable in the long run. In this case, the negative expected value indicates that this lottery is not a good bet financially over time.

0.1.1 Expected Value of a Random Variable

The expected value is one of the most fundamental concepts in probability and statistics. It gives the average outcome of a random variable over many repetitions of an experiment. Formally, for a discrete random variable X with probability mass function P(X = x), the expected value is

E(X) = \sum_x x \, P(X = x).

For continuous random variables, the expected value is given by the integral

E(X) = \int_{-\infty}^{\infty} x f_X(x) \, dx,

where f_X(x) is the probability density function of X.

Example 1: Consider a discrete random variable X that represents the outcome of rolling a fair six-sided die. The expected value is

E(X) = \frac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = 3.5.
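The lottery and fair-die calculations can be checked with a short Python sketch. The prize values, probabilities, and ticket price below are taken directly from the example; the variable names are illustrative.

```python
# Checking the lottery and fair-die expected-value calculations.
prizes = {100_000: 0.001, 10_000: 0.01, 1_000: 0.05, 0: 0.939}
ticket_price = 500

# E(winnings) = sum of (value of outcome x probability of outcome)
expected_winnings = sum(value * prob for value, prob in prizes.items())
net_expected_value = expected_winnings - ticket_price

print(round(expected_winnings, 2))   # 250.0
print(round(net_expected_value, 2))  # -250.0

# Fair six-sided die: each face has probability 1/6, so E(X) = 21/6 = 3.5
die_expectation = sum(range(1, 7)) / 6
print(die_expectation)               # 3.5
```

Playing the lottery many times and averaging the net gain would converge to the −₹250 figure, which is exactly what the expected value predicts.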
This result tells us that if we roll the die many times, the average result will approach 3.5.

Example 2: For a continuous random variable X representing the lifetime of a lightbulb (in hours) with probability density function f_X(x) = \frac{1}{100} e^{-x/100} for x ≥ 0, the expected lifetime is

E(X) = \int_0^{\infty} x \cdot \frac{1}{100} e^{-x/100} \, dx = 100 hours.

The expected value is often referred to as the mean or the first moment of a random variable.

Expected Value of Some Common Distributions

1. X ∼ Geometric(p):

E[X] = \sum_{t=1}^{\infty} t (1-p)^{t-1} p = \frac{1}{p}

2. X ∼ Poisson(λ):

E[X] = \sum_{t=0}^{\infty} t \, \frac{\lambda^t e^{-\lambda}}{t!} = \lambda

3. X ∼ Binomial(n, p):

E[X] = \sum_{t=0}^{n} t \binom{n}{t} p^t (1-p)^{n-t} = np

0.1.2 Properties of Expected Value

Expected value of a function of random variables

Theorem (Expected Value of a Function). Suppose X_1, \ldots, X_n have joint PMF f_{X_1 \cdots X_n}, with the range of X_i denoted T_{X_i}. Let g : T_{X_1} \times \cdots \times T_{X_n} \to \mathbb{R} be a function, and let Y = g(X_1, \ldots, X_n) have range T_Y and PMF f_Y. Then

E[g(X_1, \ldots, X_n)] = \sum_{t \in T_Y} t f_Y(t) = \sum_{t_1 \in T_{X_1}} \cdots \sum_{t_n \in T_{X_n}} g(t_1, \ldots, t_n) f_{X_1 \cdots X_n}(t_1, \ldots, t_n).
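The closed-form means for the common distributions, and the lightbulb integral, can be sanity-checked numerically. The sketch below uses arbitrary illustrative parameters (p = 0.3, λ = 4, n = 10), truncates the infinite sums where the remaining tail is negligible, and approximates the integral with a simple midpoint rule on [0, 5000]:

```python
import math

p, lam, n = 0.3, 4.0, 10  # arbitrary illustrative parameters

# Geometric mean: sum of t (1-p)^(t-1) p, truncated at a large t
geom_mean = sum(t * (1 - p) ** (t - 1) * p for t in range(1, 500))

# Poisson mean: sum of t * lam^t e^(-lam) / t!, truncated
pois_mean = sum(t * lam ** t * math.exp(-lam) / math.factorial(t)
                for t in range(100))

# Binomial mean: finite sum, no truncation needed
binom_mean = sum(t * math.comb(n, t) * p ** t * (1 - p) ** (n - t)
                 for t in range(n + 1))

# Lightbulb lifetime: midpoint-rule approximation of the integral on [0, 5000]
def f(x):
    return x * math.exp(-x / 100) / 100

h = 0.1
lifetime = sum(f(k * h + h / 2) * h for k in range(50_000))

print(round(geom_mean, 4))   # 3.3333  (= 1/p)
print(round(pois_mean, 4))   # 4.0     (= lam)
print(round(binom_mean, 4))  # 3.0     (= n*p)
print(round(lifetime, 2))    # 100.0   (hours)
```

Each numerical result matches the corresponding closed form 1/p, λ, np, and 100 hours.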
We have seen how to find f_Y, the PMF of a function of multiple random variables. The theorem above states that to find E[Y], you do not always need f_Y: the joint PMF of X_1, \ldots, X_n can be used directly. This simple-sounding property has far-reaching consequences! The examples below show how to calculate the expected value of a function of random variables without explicitly computing the PMF of that function, using only the PMF of the underlying random variables.

Examples: Expected Value of Functions of Random Variables

1. Let X ∼ Uniform{−2, −1, 0, 1, 2} and g(X) = X^2, which takes values in {0, 1, 4} with probabilities \frac{1}{5}, \frac{2}{5}, \frac{2}{5}. Then,

E[g(X)] = 0 \cdot \frac{1}{5} + 1 \cdot \frac{2}{5} + 4 \cdot \frac{2}{5} = 2.

Alternatively, using the property of expectation of a function of a random variable (summing over the PMF of X itself),

E[g(X)] = (-2)^2 \cdot \frac{1}{5} + (-1)^2 \cdot \frac{1}{5} + 0^2 \cdot \frac{1}{5} + 1^2 \cdot \frac{1}{5} + 2^2 \cdot \frac{1}{5} = 2.

2. Let (X, Y) ∼ Uniform{(0, 0), (1, 0), (0, 1), (1, 1), (−1, 1), (1, −1)} and g(X, Y) = X^2 + XY + Y^2, which takes values in {0, 1, 3} with probabilities \frac{1}{6}, \frac{4}{6}, \frac{1}{6}. Then,

E[g(X, Y)] = 0 \cdot \frac{1}{6} + 1 \cdot \frac{4}{6} + 3 \cdot \frac{1}{6} = \frac{7}{6}.

Alternatively, summing over the joint PMF of (X, Y),

E[g(X, Y)] = 0 \cdot \frac{1}{6} + 1 \cdot \frac{1}{6} + 1 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 1 \cdot \frac{1}{6} + 1 \cdot \frac{1}{6} = \frac{7}{6}.

Some Properties of Expectation

Linearity of Expected Value

1. E[cX] = c E[X] for a random variable X and a constant c.
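Both examples, plus the linearity property, can be verified with exact arithmetic using Python's fractions module. In each case, E[g] is computed twice: once from the PMF of g itself, and once directly from the PMF of X (or the joint PMF of (X, Y)), as the theorem allows. The linearity check on a fair die with c = 3 is an extra illustration not in the text.

```python
from fractions import Fraction

# Example 1: X ~ Uniform{-2, -1, 0, 1, 2}, g(X) = X^2
support = [-2, -1, 0, 1, 2]

# Via the PMF of Y = g(X): P(Y=0) = 1/5, P(Y=1) = 2/5, P(Y=4) = 2/5
e_via_pmf_of_g = 0 * Fraction(1, 5) + 1 * Fraction(2, 5) + 4 * Fraction(2, 5)

# Via the PMF of X directly: sum of g(t) * P(X = t)
e_via_pmf_of_x = sum(Fraction(t * t, 5) for t in support)

print(e_via_pmf_of_g, e_via_pmf_of_x)   # 2 2

# Example 2: (X, Y) uniform on six points, g(x, y) = x^2 + xy + y^2
points = [(0, 0), (1, 0), (0, 1), (1, 1), (-1, 1), (1, -1)]
e_joint = sum(Fraction(x * x + x * y + y * y, 6) for x, y in points)
print(e_joint)                          # 7/6

# Linearity check E[cX] = c E[X] on a fair die, with c = 3
e_die = sum(Fraction(t, 6) for t in range(1, 7))
e_3die = sum(Fraction(3 * t, 6) for t in range(1, 7))
print(e_3die == 3 * e_die)              # True
```

Note that the second route never constructs the PMF of g: it only needs g evaluated at each point of the (joint) support, weighted by the original probabilities.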
