Accompanying Laws. In the study of limit theorems for sums of independent random variables, infinitely divisible distributions play a very important role. Exercise: show that the convolution of any two infinitely divisible distributions is again infinitely divisible. We now investigate when such sums will have a weak limit. This is done as follows. We assume always that the uniform infinitesimality condition 3. holds.
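For the convolution exercise, one special case can be checked numerically: the characteristic function of a Poisson distribution with parameter lam is exp(lam(e^{it} - 1)), and the product of two such characteristic functions is again of the same form, with the parameters added. The sketch below is our own illustration, not from the text; the parameter values are arbitrary.

```python
import cmath

def poisson_cf(lam, t):
    # Characteristic function of a Poisson(lam) distribution.
    return cmath.exp(lam * (cmath.exp(1j * t) - 1))

# The convolution of two distributions has the product of their
# characteristic functions; for Poisson(a) and Poisson(b) the product
# is the Poisson(a + b) characteristic function, so the convolution
# is again infinitely divisible.
a, b = 1.3, 2.4
checks = [abs(poisson_cf(a, t) * poisson_cf(b, t) - poisson_cf(a + b, t))
          for t in (-2.0, -0.5, 0.0, 0.7, 3.1)]
max_err = max(checks)
```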
In fact it is the characteristic function of an infinitely divisible probability distribution. Let us call measures that satisfy 3. As the normal distribution is also an infinitely divisible probability distribution, we arrive at the following Theorem 3. The main theorem of this section is Theorem 3. For every bounded continuous function f that vanishes in some neighborhood of 0, the limit relation 3. holds. Let us prove the sufficiency first.
Condition 3. We now turn to proving the necessity. In order to complete the proof of necessity we need only establish the uniqueness of the representation, which is done in the next lemma. Finally Corollary 3. Convergence to the Poisson Distribution. Since we have to center by the mean, we can pick any level, say 1/2, for truncation. Then the truncated means are all 0. Convergence to the normal distribution. Exercise 3. The answer is no, and this is not hard to show, because the normal distribution has an infinitely long tail.
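Convergence to the Poisson distribution can be watched numerically through the law of rare events: Binomial(n, lam/n) approaches Poisson(lam) as n grows. The sketch below is our own illustration; the choice lam = 2 and the truncation level kmax are arbitrary.

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    return exp(-lam) * lam**k / factorial(k)

def tv_distance(n, lam, kmax=60):
    # Half the l1 distance between Binomial(n, lam/n) and Poisson(lam);
    # the tail beyond kmax is negligible for lam = 2.
    return 0.5 * sum(abs(binom_pmf(n, lam / n, k) - poisson_pmf(lam, k))
                     for k in range(min(n, kmax) + 1))

lam = 2.0
d10, d100, d1000 = tv_distance(10, lam), tv_distance(100, lam), tv_distance(1000, lam)
```

The distances shrink roughly like 1/n, consistent with Le Cam's bound.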
Since it cannot be 0 it must be 1. The law of the iterated logarithm provides an answer. Each term in the sum of 3. We saw that the law of the iterated logarithm depends on two things. Both inequalities can be obtained from a uniform rate of convergence in the central limit theorem. Such an error estimate is provided in the following theorem. Theorem 3. (Berry-Esseen theorem). Assume that the i.i.d. summands have a finite third moment. The proof will be carried out after two lemmas. This is essentially the Fourier inversion formula. We now proceed with the proof of the theorem.
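The uniform rate in the central limit theorem can be observed directly in the simplest lattice case. The sketch below (our own, not from the text; the choice of fair coin flips is just for convenience) computes the Kolmogorov distance between the law of S_n / sqrt(n) and the standard normal, which the Berry-Esseen theorem bounds by a constant times n^{-1/2}.

```python
from math import comb, erf, sqrt

def phi(x):
    # Standard normal distribution function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def kolmogorov_distance(n):
    # sup_x |P(S_n / sqrt(n) <= x) - Phi(x)| for S_n a sum of n
    # independent +/-1 coin flips; the sup is attained at the atoms.
    pmf = [comb(n, k) / 2**n for k in range(n + 1)]
    cdf, worst = 0.0, 0.0
    for k in range(n + 1):
        x = (2 * k - n) / sqrt(n)
        worst = max(worst, abs(cdf - phi(x)))  # just below the jump
        cdf += pmf[k]
        worst = max(worst, abs(cdf - phi(x)))  # at the jump
    return worst

d25, d400 = kolmogorov_distance(25), kolmogorov_distance(400)
```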
This involves dividing 0 by 0 and should involve differentiation of some kind. The way to prove the above result will take us on a detour. Countable additivity again holds, in either of the following two equivalent senses. A measurable subset of a totally positive set is totally positive.
Any countable union of totally positive subsets is again totally positive. Lemma 4. Now let us complete the proof. At least one of the two sets E, E c will have the second property and we can call it A1.
Theorem 4. Hahn-Jordan Decomposition. Totally positive sets are closed under countable unions, disjoint or not. Remark 4. Definition 4. There is trouble with sets of measure 0 for every comparison between two rationals a1 and a2.
Collect all such troublesome sets (only a countable number) and throw them away. Let us take two real numbers a < b. Exercise 4. Disintegration Theorem. The converse is of course easier. The state of the system is described by a point x in the state space X of the system. We may stop at some finite terminal time N or go on indefinitely. An important subclass is generated when the transition probability depends on the past history only through the current state.
Such processes are called time-homogeneous Markov Processes or Markov Processes with stationary transition probabilities. Chapman-Kolmogorov Equations. The identity is basically algebra.
The multiple integral can be carried out by iteration in any order, and after enough variables are integrated we get our identity. The Markov property in the reverse direction is the similar condition for bounded measurable functions f on X. They look different. In view of the symmetry it is sufficient to prove the following: Theorem 4. Let us fix f and g. Let us denote the common value in 4. which is 4. Conversely, we assume 4. Let b(y) be a bounded measurable function on Y. Let us look at some examples.
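For a finite state space the Chapman-Kolmogorov identity is just the statement that matrix powers multiply: the (m+n)-step transition matrix is the product of the m-step and n-step ones, in either order. A quick numerical check (the 3 x 3 stochastic matrix below is an arbitrary example of ours):

```python
def matmul(a, b):
    # (a b)[i][j] = sum_k a[i][k] b[k][j]
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# A 3-state transition matrix: each row sums to 1.
p = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

p2 = matmul(p, p)        # two-step transition probabilities
p3 = matmul(p2, p)       # three steps, iterated as (2) then (1)
p3_alt = matmul(p, p2)   # three steps, iterated as (1) then (2)

err = max(abs(p3[i][j] - p3_alt[i][j]) for i in range(3) for j in range(3))
row_sum = sum(p3[0])
```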
Suppose we have an urn containing a certain (nonzero) number of balls, some red and others green. A ball is drawn at random and its color is noted. Then it is returned to the urn along with an extra ball of the same color. Then a new ball is drawn at random and the process continues ad infinitum. Consider a queue for service in a store. If the queue is nonempty at some time, then exactly one customer will be served and will leave the queue at the next time point.
The numbers of new arrivals at distinct times are assumed to be independent. Consider a reservoir into which water flows. We may wish to assume a percentage loss due to evaporation. The current amount in storage is a Markov process with stationary transition probabilities. This is often referred to as a random walk.
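Returning to the urn example above: the fraction of red balls keeps its initial expected value as the urn grows (it is in fact a martingale, a notion taken up later), which can be checked by simulation. The sketch below is ours; the starting composition of one red and one green ball and the sample sizes are arbitrary choices.

```python
import random

def polya_urn(red, green, draws, rng):
    # Draw a ball at random and return it with one extra ball of the
    # same color, as in the urn scheme described above.
    for _ in range(draws):
        if rng.random() < red / (red + green):
            red += 1
        else:
            green += 1
    return red, green

rng = random.Random(0)
fractions = []
for _ in range(2000):
    r, g = polya_urn(1, 1, 200, rng)
    fractions.append(r / (r + g))

# Starting from (1, 1) the expected final fraction of red is 1/2.
mean_fraction = sum(fractions) / len(fractions)
```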
They could have two components, like inflow and demand. The new state is a deterministic function of the old state and the noise. Verify that the first two examples can be cast in the above form. In fact P is stationary, i.e. it does not depend on n. Many applications fall in this category and an understanding of what happens in this situation will tell us what to expect in general. We will assume for simplicity that every state communicates with every other state. Such Markov chains are called irreducible.
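As an illustration of the exercise, the queue example can be cast in the form "new state = deterministic function of old state and noise": if the noise counts the new arrivals, the next queue length is max(x - 1, 0) plus the arrivals. The simulation sketch below is ours; the arrival distribution is a hypothetical choice with mean 0.9 < 1, so the queue stays stable.

```python
import random

def queue_step(x, arrivals):
    # One time step: one customer (if any) is served and leaves,
    # then the new arrivals join the queue.
    return max(x - 1, 0) + arrivals

rng = random.Random(1)
x, path = 0, []
for _ in range(10000):
    # Hypothetical arrival law: 0, 1 or 2 arrivals with mean 0.9.
    arrivals = rng.choices((0, 1, 2), weights=(0.4, 0.3, 0.3))[0]
    x = queue_step(x, arrivals)
    path.append(x)

mean_queue = sum(path) / len(path)
```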
Let us first limit ourselves to the study of irreducible chains. If for a not necessarily irreducible chain starting from x, the probability of ever visiting y is positive then so is the probability of visiting y before returning to x. Assume that for the chain starting from x the probability of visiting y before returning to x is zero. But when it returns to x it starts afresh and so will not visit y until it returns again. This reasoning can be repeated and so the chain will have to visit x infinitely often before visiting y.
But this will use up all the time and so it cannot visit y at all. For an irreducible chain all states x are of the same type. Let x be recurrent and y be given. By the previous lemma, for the chain starting from x, there is a positive probability of visiting y before returning to x.
After each successive return to x, the chain starts afresh and there is a fixed positive probability of visiting y before the next return to x. Since there are infinitely many returns to x, y will be visited infinitely many times as well. So y is also a recurrent state. We now prove that if x is positive recurrent then so is y.
We now turn to the proof. Hence both sets have the same greatest common divisor. The proof can be completed by induction. Let X be irreducible and positive recurrent with period d. Let us collect all the transient states and call the set Xtr. The complement consists of all the recurrent states and will be denoted by Xre. By the renewal property these are independent events and so y will be recurrent too.
The set of recurrent states Xre can be divided into one or more equivalence classes according to the following procedure. The restriction of the chain to a single equivalence class is irreducible and possibly periodic. Different equivalence classes could have different periods; some could be positive recurrent and others null recurrent. We can combine all our observations into the following theorem. Theorem 4.
If y is transient then the n-step transition probabilities p_n(x, y) tend to 0. In such a case the limit exists along multiples of the period d. The only statement that needs an explanation is the last one. The chain starting from a transient state x may at some time get into a positive recurrent equivalence class Xj with period d. If it does, it never leaves that class and so gets absorbed in that class.
The probability of this is f x, y where y can be any state in Xj. Depending on which subclass the chain enters and when, the phase of its future is determined.
There are d such possible phases. Example 4. Simple Random Walk. The chain is easily seen to be irreducible, but periodic with period 2. Return to the starting point is possible only after an even number of steps. Since the behaviour is similar at both points, let us concentrate near the origin. We have a similar lower bound as well. Exercise 4. This is the transient behavior of the queue.
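For the simple symmetric random walk the return probabilities can be computed exactly: P(S_{2n} = 0) = C(2n, n) / 4^n, and Stirling's formula shows this behaves like 1 / sqrt(pi n), so the expected number of returns diverges and the walk is recurrent. A sketch checking the asymptotics (our own illustration):

```python
from math import comb, pi, sqrt

def return_prob(n):
    # P(S_{2n} = 0) for the simple symmetric random walk; returns to 0
    # are possible only after an even number of steps (period 2).
    return comb(2 * n, n) / 4**n

# Stirling's formula gives P(S_{2n} = 0) ~ 1/sqrt(pi n), so the expected
# number of returns, the sum over n, diverges: the walk is recurrent.
ratios = [return_prob(n) * sqrt(pi * n) for n in (10, 100, 1000)]
```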
To prove positive recurrence one shows that the relevant equation has a nontrivial nonnegative solution. One would hope that if we solve these equations then we have our U. This requires uniqueness.
Since our U is bounded (in fact by 1), it is sufficient to prove uniqueness within the class of bounded solutions of 4. We will now establish that any bounded solution U of equation 4. Then we will prove, by induction, that for any solution U of equation 3. Actually if p or q is initially 0 it remains so for 4. Actually the limit exists almost surely, and we will show this when we discuss martingales later.
Example 4. Branching Process. Consider a population in which each individual member replaces itself at the beginning of each day by a random number of offspring. Every individual has the same offspring distribution, but the numbers of offspring for different individuals are distributed independently of each other.
Xn is seen to be a Markov chain on the set of nonnegative integers. Note that if Xn ever becomes zero, i.e. the population dies out, it stays at zero forever. Let us denote by m the expected number of offspring of any individual, i.e. the mean of the offspring distribution. If we have initially i individuals, each of the i family lines has to become extinct for the entire population to become extinct. If every individual always has at least one offspring, there is no chance of the population becoming extinct. But g(z) is convex and therefore there can be at most one more root.
If we can rule out the possibility of extinction probability being equal to 1, then this root q must be the extinction probability when we start with a single individual at time 0.
Let us denote by qn the probability of extinction within n days. Actually the converse is also true. Determine the conditions for positive recurrence in the previous example. Such processes are called birth and death processes.
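The iteration behind qn can be sketched in code: if g is the offspring generating function, then q_{n+1} = g(q_n) with q_0 = 0, and q_n increases to the smallest root of g(z) = z. The offspring law below (probabilities 1/4, 1/4, 1/2 for 0, 1, 2 offspring, mean m = 1.25 > 1) is a hypothetical example of ours; for it the smallest root, and hence the extinction probability, is 1/2.

```python
def g(z, p=(0.25, 0.25, 0.5)):
    # Offspring generating function g(z) = sum_k p_k z^k for a
    # hypothetical offspring law with mean m = 1.25 > 1.
    return sum(pk * z**k for k, pk in enumerate(p))

# q_n = g(q_{n-1}) with q_0 = 0 is the probability of extinction within
# n days; it increases to the smallest root q of g(z) = z, here 1/2
# (the other root of 0.5 z^2 - 0.75 z + 0.25 = 0 is z = 1).
q = 0.0
for _ in range(200):
    q = g(q)

mean_offspring = 0.25 * 0 + 0.25 * 1 + 0.5 * 2  # m = 1.25
```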
Work out the conditions in that case. Formulate it precisely. Can the transition probabilities of the reversed chain be determined by the transition probabilities of the forward chain? If the forward chain has stationary transition probabilities, does the same hold true for the reversed chain? What if we assume that the chain has a finite invariant probability distribution and we initialize the chain to start with an initial distribution which is the invariant distribution?
Determine the invariant probability measure in the positive recurrent case. Show that any null recurrent equivalence class must necessarily contain an infinite number of states.
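In the finite, irreducible, aperiodic case the invariant probability measure can be found by iterating the transition matrix, since the distribution pi P^n converges to it from any starting distribution. A sketch (our own; the 3 x 3 chain is an arbitrary example):

```python
def step(pi, p):
    # One application of the transition matrix: (pi P)_y = sum_x pi_x p(x, y).
    return [sum(pi[x] * p[x][y] for x in range(len(pi))) for y in range(len(pi))]

# An irreducible aperiodic 3-state chain: positive recurrent, with a
# unique invariant probability measure.
p = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

pi = [1.0, 0.0, 0.0]
for _ in range(500):
    pi = step(pi, p)

# At the invariant measure, one more step changes nothing.
residual = max(abs(a - b) for a, b in zip(pi, step(pi, p)))
total = sum(pi)
```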
In particular any Markov chain with a finite state space has only transient and positive recurrent states, and moreover the set of positive recurrent states must be nonempty. A formal definition is given below. Definition 5. Remark 5. Remark 5. Such sequences are called martingale differences. We can generate martingale sequences by the following procedure. Of course every finite martingale sequence is generated this way, for we can always take X to be Xn, the last one.
For infinite sequences this raises an important question that we will answer later. A related notion is that of a super or sub-martingale. If, in the definition of a martingale, we replace the equality in iii by an inequality we get super or sub-martingales. Summing up 5. For example Corollary 5.
The proof is a consequence of the following fairly general lemma. Lemma 5. This gives us uniform bounds on the integrals of YNp and we can pass to the limit. Exercise 5.
It corresponds to a partition with 2^n sets. We would like to prove Theorem 5. Assume that X is a bounded function. See Exercise 4. Theorem 5. Suppose the Lp norms of the Xn are uniformly bounded. See [7] or [3].
We can therefore choose a subsequence Xnj that converges weakly in Lp to a limit in the weak topology. We call this limit X.
We can now apply the preceding theorem. Example 5. We can show that the convergence in the preceding theorems is also valid almost everywhere. Clearly M is a linear subset of L1. We will prove that it is closed in L1 and that it is dense in L1. If Xn is an L1 bounded martingale, it is not clear that it comes from an X. If it did arise from an X, then Xn would converge to it in L1 and in particular would have to be uniformly integrable. The converse is also true.
The uniform integrability of Xn implies the weak compactness in L1 and if X is any weak limit of Xn [see [7]], it is not difficult to show as in Theorem 5. The L1 bounded martingale that we constructed earlier in Exercise 5. We will defer the analysis of L1 bounded martingales to the next section. In some sense the simplest example is also the most general. More precisely the decomposition theorem of Doob asserts the following. Doob decomposition theorem.
Yn , Fn is a martingale. Xn determines Yn and An uniquely. Property 2 is then plainly equivalent to the submartingale property. To establish the representation, we define An inductively by 5.
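The inductive definition of An can be made concrete in a simple case (our own example, not from the text): for a simple symmetric random walk S_n, the submartingale X_n = S_n^2 has conditional increments E[X_n - X_{n-1} | F_{n-1}] = 1 regardless of the current position, so the predictable increasing part is A_n = n and Y_n = S_n^2 - n is a martingale.

```python
def compensator_increment(s):
    # E[(S + xi)^2 - S^2 | S = s] for xi = +/-1 with probability 1/2 each:
    # averaging the two possible increments gives exactly 1, whatever s is.
    return 0.5 * (((s + 1)**2 - s**2) + ((s - 1)**2 - s**2))

# Doob decomposition of X_n = S_n^2: increments of A_n are identically 1,
# so A_n = n and Y_n = S_n^2 - n is a martingale.
increments = [compensator_increment(s) for s in range(-5, 6)]
a_20 = sum(compensator_increment(s) for s in [0] * 20)  # A_20 = 20
```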
The decomposition is always unique. It is easy to verify from 5. Such a decomposition is called the semi-martingale decomposition. If we have to deal with continuous time this will become a thorny issue. We now return to the study of L1 bounded martingales.
One easy way to generate L1 bounded martingales is to take the difference of two nonnegative martingales. We have the converse as well. Let Xn be an L1 bounded martingale. We can always assume that our nonnegative martingale has its expectation equal to 1, because we can always multiply by a suitable constant. Here is a way in which such martingales arise.
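A standard source of nonnegative martingales with expectation 1 is likelihood ratios: if the X_i are drawn from a law g, then the product L_n of the ratios f(X_i)/g(X_i) is a nonnegative martingale with E[L_n] = 1. The sketch below (ours; the two Bernoulli laws are arbitrary choices) verifies the mean exactly by enumerating all sample paths.

```python
from itertools import product

def likelihood_ratio(xs, f=(0.3, 0.7), g=(0.5, 0.5)):
    # L_n = product over i of f(x_i)/g(x_i): a nonnegative martingale
    # with mean 1 when the x_i are actually drawn from g.
    l = 1.0
    for x in xs:
        l *= f[x] / g[x]
    return l

n = 8
# Exact expectation under g by enumerating all 2^n sample paths,
# each of probability (1/2)^n under the fair-coin law g.
mean_l = sum(likelihood_ratio(xs) * 0.5**n for xs in product((0, 1), repeat=n))
```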
Can you have equality above? Exact calculations as in Example 5. Let us try to estimate again the exit time from a ball of radius R. Now proceed as in Example 5. We can use these methods for proving positive recurrence as well.
This is a contradiction. Let us return to our example of a branching process, Example 4. It then has an almost sure limit, which can only be 0 or 1. If q is the probability that the limit is 1. Chapter 6. Stationary Stochastic Processes. The stationarity of the process is reflected in the invariance of P with respect to T, i.e. P(T^{-1}A) = P(A) for every measurable set A. One says that P is an invariant measure for T, or that T is a measure preserving transformation for P.
The study of stationary stochastic processes is then more or less the same as the study of measure preserving transformations. In other words U acts as an isometry, i.e. it preserves norms. A basic theorem, known as the Ergodic Theorem, asserts the following. Theorem 6.
See Exercise 6. There is an alternate characterization of H0. Functions f in H0 are invariant under T, i.e. f(Tx) = f(x) almost everywhere. First we will establish an inequality called the maximal ergodic theorem. Theorem 6. Maximal Ergodic Theorem. Lemma 6. We are done. Given the lemma, the proof of the almost sure ergodic theorem follows along the same lines as the proof of almost sure convergence in the martingale context.
But such functions are dense in L1(P). See the proof of Theorem 5. Exercise 6. For any bounded linear transformation A on a Hilbert space H, show that the closure of the range of A is the orthogonal complement of the null space of its adjoint A*. Show that any almost invariant set differs by a set of measure 0 from an invariant set, i.e. a set E with T^{-1}E = E. Although the ergodic theorem implies a strong law of large numbers for any stationary sequence of random variables, in particular a sequence of independent identically distributed random variables, it is not quite the end of the story.
Any product measure is ergodic for the shift. Let A be an invariant set. The set M, which may be empty, is easily seen to be a convex set. Show that M is empty. A point of a convex set is extreme if it cannot be written as a nontrivial convex combination of two other points from that set.
One of the questions in the theory of convex sets is the existence of sufficiently many extremal points, enough to recover the convex set by taking convex combinations. In particular one can ask if any point in the convex set can be obtained by taking a weighted average of the extremals. The next theorem answers the question in our context. Our integral representation in terms of ergodic measures will just be an immediate consequence of the change of variables formula.
Let us first prove stationarity. We have to negotiate carefully through null sets. We now turn to ergodicity. Again there is a minefield of null sets to negotiate. We therefore need only 6. This completes the proof. Show that any two distinct ergodic invariant measures P1 and P2 are orthogonal on I, i.e. there is an invariant set A with P1(A) = 1 and P2(A) = 0. If a is irrational there is just one invariant measure P, namely the uniform distribution on [0, 1). This is seen by Fourier analysis.
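For the irrational rotation the ergodic theorem can be watched numerically: time averages of a continuous function converge to its integral against the uniform distribution, for every starting point. A sketch (our own; the choices a = sqrt(2) - 1 and f(x) = cos(2 pi x), whose integral is 0, are arbitrary):

```python
from math import cos, pi, sqrt

def birkhoff_average(f, x, a, n):
    # (1/n) sum_{k < n} f(T^k x) for the rotation T x = x + a (mod 1).
    total, y = 0.0, x
    for _ in range(n):
        total += f(y)
        y = (y + a) % 1.0
    return total / n

a = sqrt(2) - 1                 # irrational, so the rotation is uniquely ergodic
f = lambda x: cos(2 * pi * x)   # integral over [0, 1) is 0
avg = birkhoff_average(f, 0.1, a, 100000)
```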
See Remark 2. We can denote this distribution by Px. Such a process, if it had started a long time back, would be found nowhere today! So it does not exist. On the other hand if we take X to be the set of all integers then P is seen to exist. In fact there are lots of them. The following theorem connects stationarity and the Markov property.
Prove the above theorem. Use Remark 4. We say that a Markov process is reversible if the time-reversed process Q of the previous example coincides with P. That it is also sufficient is a little bit of a surprise. The following theorem is the key step in the proof. The remaining part is routine. Suppose that the state space can be partitioned nontrivially, i.e. into two invariant sets A and Ac of positive measure. See Theorem 4. If the Markov process starts from A or Ac, it does not ever leave it.
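Reversibility can be checked through the detailed balance equations pi_x p(x, y) = pi_y p(y, x). Birth and death chains are the standard reversible examples; the following sketch (ours, with hypothetical rates b = 0.3 and d = 0.5 on four states) verifies detailed balance for the weights pi_x proportional to (b/d)^x.

```python
# A birth and death chain on {0, 1, 2, 3}: up-probability b, down-probability d.
b, d = 0.3, 0.5
states = range(4)

def p(x, y):
    # Transition probabilities; the chain stays put with the leftover mass.
    if y == x + 1 and x < 3:
        return b
    if y == x - 1 and x > 0:
        return d
    if y == x:
        return 1.0 - (b if x < 3 else 0.0) - (d if x > 0 else 0.0)
    return 0.0

# The weights pi_x proportional to (b/d)^x solve the detailed balance
# equations pi_x p(x, y) = pi_y p(y, x), so the chain is reversible.
pi = [(b / d)**x for x in states]
z = sum(pi)
pi = [w / z for w in pi]

imbalance = max(abs(pi[x] * p(x, y) - pi[y] * p(y, x))
                for x in states for y in states)
```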
Remark 6. We have combined two distinct possibilities into one. What we have shown is that when we have multiple invariant measures they essentially arise in this manner.
Remark 6. The extremals of this convex set are precisely the ones that correspond to ergodic stationary processes, and they are called ergodic or extremal invariant measures. Clarify this statement. Let f be a bounded measurable function on X. We saw in the earlier section that any stationary process is an integral over stationary ergodic processes. Exercise 6. If there are at least two invariant measures, then there are at least two ergodic ones which are orthogonal.
One of the questions that is important in the theory of Markov processes is the rapidity with which the memory of the initial state is lost. There is no unique way of assessing it, and depending on the circumstances this could happen in many different ways at many different rates.
Probability Theory. Volume 7.