NICHOLAS N.N. NSOWAH-NUAMAH

ADVANCED TOPICS IN INTRODUCTORY PROBABILITY
A FIRST COURSE IN PROBABILITY THEORY – VOLUME III

2nd edition
© 2018 Nicholas N.N. Nsowah-Nuamah & bookboon.com
ISBN 978-87-403-2238-5
CONTENTS

Part 1  Bivariate Probability Distributions

Chapter 1  Probability and Distribution Functions of Bivariate Distributions
1.1 Introduction
1.2 Concept of Bivariate Random Variables
1.3 Joint Probability Distributions
1.4 Joint Cumulative Distribution Functions
1.5 Marginal Distribution of Bivariate Random Variables
1.6 Conditional Distribution of Bivariate Random Variables
1.7 Independence of Bivariate Random Variables

Chapter 2  Sums, Differences, Products and Quotients of Bivariate Distributions
2.1 Introduction
2.2 Sums of Bivariate Random Variables
2.3 Differences of Random Variables
2.4 Products of Bivariate Random Variables
2.5 Quotients of Bivariate Random Variables

Chapter 3  Expectation and Variance of Bivariate Distributions
3.1 Introduction
3.2 Expectation of Bivariate Random Variables
3.3 Variance of Bivariate Random Variables

Chapter 4  Measures of Relationship of Bivariate Distributions
4.1 Introduction
4.2 Product Moment
4.3 Covariance of Random Variables
4.4 Correlation Coefficient of Random Variables
4.5 Conditional Expectations
4.6 Conditional Variances
4.7 Regression Curves

Part 2  Statistical Inequalities, Limit Laws and Sampling Distributions

Chapter 5  Statistical Inequalities and Limit Laws
5.1 Introduction
5.2 Markov's Inequality
5.3 Chebyshev's Inequality
5.4 Law of Large Numbers
5.5 Central Limit Theorem

Chapter 6  Sampling Distributions I: Basic Concepts
6.1 Introduction
6.2 Statistical Inference
6.3 Probability Sampling
6.4 Sampling With and Without Replacement

Chapter 7  Sampling Distributions II: Sampling Distribution of Statistics
7.1 Introduction
7.2 Sampling Distribution of Means
7.3 Sampling Distribution of Proportions
7.4 Sampling Distribution of Differences
7.5 Sampling Distribution of Variance
Chapter 8  Distributions Derived from the Normal Distribution
8.1 Introduction
8.2 χ² Distribution
8.3 t Distribution
8.4 F Distribution

Statistical Tables
Answers to Odd-Numbered Exercises
Bibliography
PART 1
BIVARIATE PROBABILITY DISTRIBUTIONS

I salute the discovery of a single even insignificant truth more highly than
all the argumentation on the highest questions which fails to reach a truth.

GALILEO (1564–1642)
Chapter 1
PROBABILITY AND DISTRIBUTION FUNCTIONS OF BIVARIATE DISTRIBUTIONS

1.1 INTRODUCTION

So far, all discussions in the two volumes of my book on probability (Nsowah-Nuamah, 2017 and 2018) have been associated with a single random variable
X (that is, a one-dimensional or univariate random variable). Frequently,
we may be concerned with multivariate situations that simultaneously involve two or more random variables. For instance, if we wanted to study
the relationship between weight and height of individual students we might
consider weight and height to be two random variables X and Y , respectively, whose values are determined by measuring the weights and heights of
the students in the school. Such a study will produce the ordered pair (X, Y).

1.2 CONCEPT OF BIVARIATE RANDOM VARIABLES

1.2.1 Definition of Bivariate Random Variables

Many of the concepts discussed for one-dimensional random variables
also hold for the higher-dimensional case. Here, in most cases, we shall limit
ourselves to the two-dimensional (bivariate) case; more complex multivariate
situations are straightforward generalisations.
Definition 1.1 BIVARIATE RANDOM VARIABLE
If X = X(s) and Y = Y(s) are two real-valued functions on the
sample space S, then the pair (X, Y) that assigns a point in the real
(x, y) plane to each point s ∈ S is called a bivariate random variable.

Synonyms for a bivariate random variable are two-dimensional random
variable and random vector. Fig. 1.1 is an illustration of a bivariate random variable.

[Fig. 1.1: a bivariate random variable mapping sample points s ∈ S to points (x, y) in the plane; figure not reproduced]

1.2.2 Types of Bivariate Random Variables

Multivariate situations, similar to univariate cases, may involve discrete as
well as continuous random variables.
Definition 1.2 DISCRETE BIVARIATE RANDOM VARIABLE
(X, Y) is a discrete bivariate random variable if each of the random
variables X and Y is discrete.

Definition 1.3 CONTINUOUS BIVARIATE RANDOM VARIABLE
(X, Y) is a continuous bivariate random variable if each of the random
variables X and Y is continuous.

There are cases where one variable is discrete and the other continuous,
but these will not be considered here.
1.3 JOINT PROBABILITY DISTRIBUTIONS

A joint distribution is a distribution having two or more random variables,
with each random variable still having its own probability distribution, expected value and variance. In addition, probabilities can be assigned to ordered pairs of values of the random variables, and the strength of any relationship
In the multivariate case as in the univariate case we often associate a
probability (mass) function with discrete random variables and a probability
density function with continuous random variables. We shall take up the
discrete case first since it is the easier one to understand.
1.3.1 Joint Probability Distribution of Discrete Random Variables

Suppose that X and Y are discrete random variables, where X takes values
xi, i = 1, 2, · · · , n, and Y takes values yj, j = 1, 2, · · · , m. Most often, such a
joint distribution is given in table form. Table 1.1 is an n-by-m array which
displays the number of occurrences of the various combinations of values
of X and Y . We may observe that each row represents a value of X and
each column represents a value of Y . The row and column totals are called
marginal totals. Such a table is called the joint frequency distribution.
Table 1.1 Joint Frequency Distribution of X and Y

                              Y
   X        y1         y2       ···       ym       Row Totals
   x1    #(x1, y1)  #(x1, y2)   ···   #(x1, ym)      #(x1)
   x2    #(x2, y1)  #(x2, y2)   ···   #(x2, ym)      #(x2)
   ..       ..          ..      ···       ..           ..
   xn    #(xn, y1)  #(xn, y2)   ···   #(xn, ym)      #(xn)
 Column
 Totals    #(y1)      #(y2)     ···     #(ym)     Σᵢ Σⱼ #(xi, yj) = N

For example, suppose X and Y are discrete random variables, and X takes
values 0, 1, 2, 3, and Y takes values 1, 2, 3. Each of the nm row-column intersections in Table 1.2 represents the frequency that belongs to the ordered
pair (X, Y ).
Table 1.2 Joint Frequency Distribution of X and Y

                    Values of Y
 Values of X      1      2      3     Row Totals
      0           1      0      0         1
      1           0      2      1         3
      2           0      2      1         3
      3           1      0      0         1
 Column Totals    2      4      2         8

Definition 1.4 JOINT PROBABILITY DISTRIBUTION
Let X and Y be discrete random variables with possible values
xi , i = 1, 2, ..., n and yj , j = 1, 2, 3, ..., m, respectively. The joint
(or bivariate) probability distribution for X and Y is given by
p(xi , yj ) = P ({X = xi } ∩ {Y = yj })
defined for all (xi , yj )
The function p(xi , yj ) is sometimes referred to as the joint probability
mass function (p.m.f.) or the joint probability function (p.f.) of X and Y .
This function gives the probability that X will assume a particular value x
while at the same time Y will assume a particular value y.
Note
(a) The notation p(x, y) for all (x, y) is the same as writing p(xi, yj) for
i = 1, 2, ..., n and j = 1, 2, 3, ..., m. Sometimes, when there is no
ambiguity, we shall simply use p(x, y).
(b) The joint probability p(xi , yj ) is sometimes denoted as
P (X = x, Y = y),
where the comma stands for ‘and’ or ‘∩’.
Definition 1.5
If X and Y are discrete random variables with joint probability mass
function p(xi, yj), then
(a) p(xi, yj) ≥ 0,   for all i and j
(b) Σ_{i=1}^{n} Σ_{j=1}^{m} p(xi, yj) = 1

Once the joint probability mass function is determined for discrete random
variables X and Y , calculation of joint probabilities involving X and Y is
straightforward.
Let the value that the random variables X and Y jointly take be denoted by the ordered pair (xi , yj ). The joint probability p(xi , yj ) is obtained
by counting the number of occurrences of that combination of values of X and
Y and dividing the count by the total number of all the sample points. Thus,
P({X = xi} ∩ {Y = yj}) = #({X = xi} ∩ {Y = yj}) / Σ_{i=1}^{n} Σ_{j=1}^{m} #({X = xi} ∩ {Y = yj})
                       = #(xi, yj) / Σ_{i=1}^{n} Σ_{j=1}^{m} #(xi, yj)

where
  #(xi, yj) is the number of occurrences in the cell of the ordered pair (xi, yj);
  Σ_{i=1}^{n} Σ_{j=1}^{m} #(xi, yj) is the total number of all sample points (cells)
  of the ordered pairs (xi, yj), denoted by N.
Joint Probability Distribution of Bivariate Random Variables in Tabular Form
The joint probability distribution may be given in the form of a table of
n rows and m columns (See Table 1.3). The upper margins of the table
indicate the possible distinct values of X and Y . The numbers in the body
of the table are the probabilities for the joint occurrences of the two events
corresponding to X = xi (1 ≤ i ≤ n) and Y = yj (1 ≤ j ≤ m). The row and
column totals are the probabilities for the individual random variables and
are called marginal probabilities because they appear on the margins of
the table. Such a table is also called the joint relative frequency distribution.
Table 1.3 Joint Probability Distribution of X and Y

                              Y
   X        y1         y2        ···       ym       Row Totals
   x1    p(x1, y1)  p(x1, y2)    ···   p(x1, ym)      p(x1)
   x2    p(x2, y1)  p(x2, y2)    ···   p(x2, ym)      p(x2)
   ..       ..          ..       ···       ..           ..
   xn    p(xn, y1)  p(xn, y2)    ···   p(xn, ym)      p(xn)
 Column
 Totals    p(y1)      p(y2)      ···     p(ym)     Σᵢ Σⱼ p(xi, yj) = 1

Note
The marginal probabilities for X are simply the probabilities that X = xi,
summed over the values yj, where j runs from 1 to m. Similarly, the
marginal probabilities for Y are the probabilities that Y = yj, summed over
the values xi, where i runs from 1 to n.

It is important to note that the distribution satisfies the properties of a joint probability
function, namely,
(a) p(xi, yj) ≥ 0,   for all i = 1, 2, · · · , n; j = 1, 2, · · · , m
(b) Σ_{i=1}^{n} Σ_{j=1}^{m} p(xi, yj) = 1

Example 1.1
(a) For the data in Table 1.2, calculate the joint probabilities of X and Y .
(b) Does this distribution satisfy the properties of a joint probability function?

Solution
(a) From Table 1.2, the cell ({X = 0} ∩ {Y = 1}) = (0, 1) contains one
element; the total number of elements in all cells is 8. Hence

P({X = 0} ∩ {Y = 1}) = p(0, 1)
                     = #({X = 0} ∩ {Y = 1}) / Σᵢ Σⱼ #({X = xi} ∩ {Y = yj})
                     = 1/8
i j Similarly,
Similarly, P ({X
P ({X
P ({X
P ({X
P ({X
P ({X
P ({X
P ({X
P ({X
P ({X = 0} ∩ {Y
= 0} ∩ {Y
= 0} ∩ {Y
= 0} ∩ {Y
= 1} ∩ {Y
= 1} ∩ {Y
= 1} ∩ {Y
= 1} ∩ {Y
= 1} ∩ {Y
= 1} ∩ {Y = 2})
= 2})
= 3})
= 3})
= 1})
= 1})
= 2})
= 2})
= 3})
= 3}) =
=
=
=
=
=
=
=
=
= p(0, 2) =
p(0, 2) =
p(0, 3) =
p(0, 3) =
p(1, 1) =
p(1, 1) =
p(1, 2) =
p(1, 2) =
p(1, 3) =
p(1, 3) = 0
08
80
08
80
08
82
28
81
18
8 =0
=0
=0
=0
=0
=0
1
=1
= 4
41
=1
= 8
8 When probabilities of all possible joint events, P (X = xi , Y = yj ),
When
probabilities
of in
allthis
possible
joint
(X probability
= xi , Y = ydisj ),
have been
determined
fashion,
weevents,
have a P
joint
have
been
determined
in
this
fashion,
we
have
a
joint
probability
distribution of X and Y and these results may be presented in a two-way
tribution
of X and
Y and
results may be presented in a two-way
table as shown
in the
tablethese
below:
table as shown in the table below:
Download free eBooks at bookboon.com
13 8
2
1
P ({X = 1} ∩ {Y = 2}) = p(1, 2) = =
8
4
ADVANCED TOPICS IN INTRODUCTORY
1
1
PROBABILITY: A FIRST COURSE IN
AND DISTRIBUTION FUNCTIONS
P ({X = 1} ∩ {Y = 3}) = p(1, 3) PROBABILITY
= =
PROBABILITY THEORY – VOLUME III
OF BIVARIATE DISTRIBUTIONS
8
8 10
10 When probabilities of all possible joint events, P (X = xi , Y = yj ),
Topics
in Introductory
Probability
have been determined inAdvanced
this fashion,
we have
a joint probability
distribution of X and Y andAdvanced
these results
mayinbe
presented inProbability
a two-way
Topics
Introductory
table as shown in the table below:
          Y
   X      1      2      3
   0     1/8     0      0
   1      0     2/8    1/8
   2      0     2/8    1/8
   3     1/8     0      0
(b) From the table above,
(i) p(xi, yj) ≥ 0, for all i = 0, 1, 2, 3; j = 1, 2, 3.
(ii) Σ_{i=0}^{3} Σ_{j=1}^{3} p(xi, yj) = 1/8 + 1/8 + 2/8 + 2/8 + 1/8 + 1/8 = 1

Hence this distribution is a joint probability function.
Joint Probability Distribution of Bivariate Random Variables in Expression Form

Sometimes the joint probability distribution of discrete random variables X
and Y is given by a formula.

Example 1.2
Given the function

p(x, y) = k(3x + 2y),   x = 0, 1;  y = 0, 1, 2

(a) Find the constant k > 0 such that p(x, y) is a joint probability
mass function.
(b) Present it in tabular form, giving the probabilities associated with the
sample points (x, y). Obtain the row and column totals.
Solution
(a) (i) p(x, y) ≥ 0

(ii) Σ_{x=0}^{1} Σ_{y=0}^{2} k(3x + 2y) = k Σ_{x=0}^{1} [(3x + 0) + (3x + 2) + (3x + 4)]
                                        = k Σ_{x=0}^{1} (9x + 6)
                                        = k[(0 + 6) + (9 + 6)]
                                        = 21k

For p(x, y) to be a joint probability function we must have 21k = 1, from
which
k = 1/21
(b) For the sample point {X = 0, Y = 0} = (0, 0),

p(0, 0) = (1/21)[3(0) + 2(0)] = 0
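Part (a), and the table asked for in part (b), can be cross-checked with a brute-force sketch; the names `total`, `k` and `p` are illustrative, not from the text.

```python
from fractions import Fraction

# p(x, y) = k(3x + 2y) over x = 0, 1 and y = 0, 1, 2.
# k must normalise the probabilities so that they sum to 1.
total = sum(3 * x + 2 * y for x in (0, 1) for y in (0, 1, 2))
k = Fraction(1, total)
print(total, k)  # 21 1/21

p = {(x, y): k * (3 * x + 2 * y) for x in (0, 1) for y in (0, 1, 2)}
assert p[(0, 0)] == 0        # agrees with p(0, 0) computed above
assert sum(p.values()) == 1  # a valid joint probability function
```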