
Homework for Monday 23 April


See the notes on this page.

Instructions for some calculators are linked on this page.

 

This is practice computing normal probabilities. Make sure that you practice using the calculator you will use on the tests, since you will not be allowed to use R during tests.

The problems are posted in pdf form in the “Resources” page in Piazza.

Section 4.3 p. 245 #4.3.1, 4.3.2, 4.3.5(a, b), 4.3.9, 4.3.11

4.3.9 and 4.3.11 are examples of binomial probabilities that are estimated by a normal probability using the De Moivre-Laplace Theorem. See the notes linked at the top of this page.

Also, more practice finding critical values of Z: use your calculator please!

Find values of the standard normal z such that:

$P(Z \le z) = 0.01$

$P(Z \ge z) = 0.01$

$P(Z \ge z) = 0.005$
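If you want to check your calculator answers afterwards (not during a test!), here is a minimal sketch in R using the built-in qnorm function:

# critical values of the standard normal Z
qnorm(0.01)                        # z with P(Z <= z) = 0.01, about -2.326
qnorm(0.01, lower.tail = FALSE)    # z with P(Z >= z) = 0.01, about 2.326
qnorm(0.005, lower.tail = FALSE)   # z with P(Z >= z) = 0.005, about 2.576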

 

Also, don’t forget about R programming assignment 1.

 

Don’t forget, if you get stuck on a problem, you can post a question on Piazza. Make sure to give your question a good subject line and tell us the problem itself – we need this information in order to answer your question. And please only put one problem per posted question!

 

Notes from Wednesday 18 April class


Estimating binomial using normal: This makes use of the De Moivre-Laplace Theorem. Essentially, what this theorem says is that under certain conditions (see below) a binomial probability distribution with parameters $n$ and $p$ is very close to a normal probability density with parameters $\mu = np$ and $\sigma = \sqrt{np(1-p)}$. In the past this was used to estimate binomial probabilities for computational purposes; that use has become less important now that computers perform such computations. The real importance of this theorem is that it underlies the estimation techniques we will use when we do inferential statistics (coming right up!)

The conditions required are that $n$ should be large and $p$ should not be too close to either 0 or 1. A common rule of thumb is to require that $np$ and $n(1-p)$ both be at least 5, but 10 is a better choice for the cutoff – see the computation below.

————–

Here is the example that I worked in class estimating a binomial probability by a normal probability. Obviously we do not need this estimate, since we can easily compute the binomial probability itself; the example is merely to illustrate one slightly tricky part of the method, and to show that the method gives excellent estimates when we meet the conditions of the rule of thumb given above, especially if we stay away from the lower bound 5 for the mean and the “failure mean”.

Suppose we flip a coin 20 times. This is an unbalanced coin, so that the probability of heads showing is 40%. What is the probability that we will get heads exactly 6 times?

This is binomial: $P(X = 6) = b(6; 20, 0.4) = \binom{20}{6}(0.4)^{6}(0.6)^{14} \approx 0.1244$

Check that it is OK to approximate it by a normal probability:

Is $\mu = np \ge 5$? $np = 20(0.4) = 8$, yes.

Is $n(1-p) \ge 5$? $n(1-p) = 20(0.6) = 12$, yes.

Now to estimate it, we have to be careful because we are using a continuous density to estimate a discrete probability distribution! So if you think about estimating the area of the bar of the probability distribution histogram by the area under a normal curve, you can see that we need to find the normal probability that X is between 5.5 and 6.5 (the edges of the bar). We estimate by a normal probability with mean $\mu = 8$ and standard deviation $\sigma = \sqrt{20(0.4)(0.6)} = \sqrt{4.8}$.

Therefore $P(X=6) \approx n(5.5<X<6.5; 8, \sqrt{4.8})$

$\approx 0.1199$ by using the normal cdf function.

In R, we compute this by using pnorm(6.5, 8, sqrt(4.8)) - pnorm(5.5, 8, sqrt(4.8))
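As a check (this is just a sketch, using only built-in R functions), you can compute both the exact binomial probability and the normal approximation and compare them:

# exact binomial probability P(X = 6) with n = 20, p = 0.4
dbinom(6, 20, 0.4)                                    # about 0.1244
# normal approximation with the continuity correction (mu = 8, sigma = sqrt(4.8))
pnorm(6.5, 8, sqrt(4.8)) - pnorm(5.5, 8, sqrt(4.8))   # about 0.1199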

In your calculator, locate the normal cdf functions. Instructions for some calculators are linked on this page.

 

So the normal approximation agrees with the exact binomial probability to two decimal places (both round to 0.12), but the third decimal place is off.

This is good enough for many purposes, but not always good enough. Because the mean 8 is close to the cutoff 5, we are seeing the limits of this kind of estimate. The reason for using the higher cutoff of 10 for $np$ and $n(1-p)$ is to get more accuracy (a third decimal place).

 

Some more examples, with pictures of the graphs, are worked out here.

Using calculators to compute special distribution probabilities

I will add to this as I find more resources. If you find something useful, please let us know!

 

Using TI-84 for binomial probability: [pdf]

Other TI graphing calculators probably work similarly.

Here is a resource that discusses many useful computations for probability and statistics on TI-84, including finding binomial and Poisson probabilities:

For the Casio fx-9750gii Power Graphic calculator, here is a comprehensive guide to statistics and probability functions.

 

 

If you are using a calculator not listed above and these instructions don’t help, let me know!

Homework for Wednesday 18 April

Add to the problems in the previous assignment:

Section 4.2 p. 226 #4.2.5, 4.2.7

Section 4.2 p. 233 #4.2.10, 4.2.11, 4.2.13, 4.2.17, 4.2.19

Also don’t forget the R assignment: see also these notes

and to work on corrections to Test 2 if you wish to do so. (DO NOT OMIT the part about stating what you did wrong in the original Test!)

 

There will be a Quiz on Wednesday, based on one of the problems from the assignment above.

Don’t forget, if you get stuck on a problem, you can post a question on Piazza. Make sure to give your question a good subject line and tell us the problem itself – we need this information in order to answer your question. And please only put one problem per posted question!

 

 

Math Club this Thursday

This might be interesting!

 

This week in the City Tech Math Club:

 

Title: “Statistical Mechanics and Combinatorics”

Speaker: Dr. Ezra Halleck (NYCCT)

Date/Room: Thursday April 19, 2018, 12:50-2:00pm, Namm 720

 

Abstract: In this picture-rich and proof-light treatment, I will begin with the connections between the 2 subjects but focus on enumerative and bijective aspects. One example is tiling using dimers. Another is a model of ice, again in a plane. There will be several hands-on activities as well as recursive programming examples in MATLAB, Python and R.

 

Pizza and refreshments will be served at 12:45pm. Feel free to stop by anytime and let interested students know about this event. We are still in need of volunteers to give talks in May.

https://openlab.citytech.cuny.edu/mathclub/

R programming assignment 1

(This is in addition to the DataCamp assignments, of course!)

Below you will find an R script which makes a histogram of the probability distribution function for problem 3.2.2a. I have added some comments, to include the description of the problem and to explain some of the coding.

I have also included the graph, which I exported as a jpeg.

 

Your assignment is to write a similar script for the probability distribution functions of problems 3.2.1(a and b) and 3.2.2(b). Each problem should have a separate script.

You may do this by editing my script, but make sure that you change everything that needs to be changed. Also make sure that your variable name(s) are good and descriptive.

You should also explore the “help” in RStudio for the barplot function and see what features you can add or change in the graph. If you add a feature, put in a comment to describe what you did (there is a small example after my script below).

It is possible to write the scripts in a word processing program and save as a text file with the extension .r (although your word processor may object to that!), but you will need to run them in RStudio anyway, so it is probably best to do the final editing in RStudio. The “R script” menu item is found by clicking on the green + sign at the upper left of the RStudio window.

 

Save your scripts with names of the following format:

Lastname_Firstname_problemnumber_Graph.r

Where Lastname = your last name

Firstname = your first name

problemnumber = the number of the problem

For example, my script was saved under the name

Shaver_Sybil_3.2.2a_Graph.r

 

Also, export the graphs as either jpegs or pdfs, your choice. The “export” is at the top of the Plots tab. Save them under the same names as the scripts, but with the extension .jpeg or .pdf instead of .r

 

Post the three scripts and the three graphs in Piazza in a private note to me. This is how you will submit your work.

The scripts and graphs are due by 10 PM Monday the 23rd of April.

 

Here is my R script and after it is my graph:

# Problem 3.2.2a: two numbers are selected from the integers 1 through 5, with replacement.
# X represents the larger of the two numbers. This is the pdf for X.
problem3_2_2a_dist <- c(1/25, 3/25, 5/25, 7/25, 9/25)
# The line below adds labels to the bars showing the X value.
# as.character is used because the "names" attribute must be of character type.
# If we wanted to list the numbers, we would have to put them in quotes to make them characters.
names(problem3_2_2a_dist) <- as.character(1:5)
# I could put a comment here to explain the features I have added to this graph.
barplot(problem3_2_2a_dist, space=0, xlab = "larger number", ylab = "probability", col="blue")
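If you add a feature, the barplot help page lists the available options. For example (purely as an illustration – this particular feature is my choice, not a requirement), you could add a main title and describe it in a comment:

# Added feature (example): a main title for the graph
barplot(problem3_2_2a_dist, space=0, xlab = "larger number", ylab = "probability", col="blue", main = "Problem 3.2.2a")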

 

 

Homework for Monday 16 April

The problems are posted in pdf form in the “Resources” page in Piazza. Try to do these by Wednesday at the latest, when there will be a quiz on them.

Section 3.2 p. 108 #3.2.1, 3.2.5, 3.2.7, 3.2.9, 3.2.11, 3.2.15

Section 3.2 p. 117 #3.2.21, 3.2.23, 3.2.28

Section 4.2 p. 236 # 4.2.1

 

 

Note: You may redo missed problems from Test 2 following the same rules as the redo for Test 1 – don’t forget that you must tell what you did wrong as well as correcting your error! This is due no later than Monday the 23rd of April.

MAT2572Test2RedoInstructions

Homework for Wednesday 28 March

See the notes from Monday and the Wednesday before

• There is a new DataCamp assignment from a new course, Working in RStudio. We probably won’t do the whole course (although you can if you want to!), but this will help with the future assignment that you will have to work out in RStudio and “hand in” via Piazza. (Coming soon!)

• Problems to work on expected value and variance (and standard deviation) of RVs: remember, the standard deviation is just the square root of the variance!

Find the expected value, variance, and standard deviation for the RVs of each of these previous homework problems (for which you have already found the probability distribution function):

p. 128 #3.3.1, 3.3.2,  3.3.3, 3.3.5, 3.3.7

Also do the following:

p. 159 #3.6.5 (The theorem they mention is the “computational formula” we used in class.)

For X with exponential probability density $f(x) = 3e^{-3x}$, $x > 0$, compute the mean, the variance, and the standard deviation. You will have to use integration by parts, but it’s not too hard. A nice little review is here: there is an example which integrates $x\cdot e^{x}$, which is more or less what you will be doing. Also check out Question 6 at the bottom: once you’ve chosen your answer, it will show you a step-by-step solution.
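As a reminder (this is just the general formula, not the solution), integration by parts says

$\displaystyle \int u\,\textrm{d}v = uv - \int v\,\textrm{d}u$

and for the mean you will be integrating something of the form $x\cdot e^{-3x}$, so taking $u = x$ is the natural choice.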


Please read the following. There is a problem for you to do at the end.

• Go back to problems 3.3.1 and 3.3.2 on p. 128.

In problem 3.3.1, we have five balls numbered 1 through 5, and we select two of them successively (without replacement). We define the RV X = the larger of the two numbers, so its possible values are 2, 3, 4, 5. We are told to find the pdf for X.

If you look at the answers to 3.3.1 in the book (and we did it in class), they all have denominator 10, which suggests that the sample space had 10 outcomes in it, in other words that the sample space was

$S = \{(1,2), (1,3), (1,4), (1,5), (2,3), (2,4), (2,5), (3,4), (3,5), (4,5)\}$

As far as the value of X is concerned, the order in which we choose the numbered balls does not matter, so this seems fine. Or is it?

This sample space, assuming the outcomes are equally likely, does give the correct pdf for X. But there is a little cheating going on, which catches up to us if we try to extend this sample space to use it in problem 3.3.2.

In 3.3.2, we have the same problem except that now we select with replacement. That means that the sample space must include outcomes where both numbers are the same: it must include (1,1), (2,2), and so on. If we just throw these into our previous sample space, we come up with the sample space

? $S_{1} =  \{(1,1), (1,2), (1,3), (1,4), (1,5), (2,2), (2,3), (2,4), (2,5), (3,3), (3,4), (3,5), (4,4), (4,5), (5,5)\}$

I’ve put a question mark in front because I suspect that this sample space does not have equally likely outcomes. (I will explain next time, but you may already see why.) This sample space has 15 outcomes in it, so if they are indeed equally likely outcomes, the probability distribution function would have denominators 15.

Suppose that we had never heard of problem 3.3.1. Working from scratch, if we are selecting two things with replacement from a set of five objects, there should be $5^{2} = 25$ possible outcomes, and the sample space would be similar to what we used for the “rolling two dice” example:

$S_{2} =  \{(1,1), (1,2), (1,3), (1,4), (1,5),$
$(2,1), (2,2), (2,3), (2,4), (2,5),$
$(3,1), (3,2), (3,3), (3,4), (3,5),$
$(4,1), (4,2), (4,3), (4,4), (4,5),$
$(5,1), (5,2), (5,3), (5,4), (5,5)\}$

This sample space has 25 outcomes, so if they are indeed equally likely outcomes, the probability distribution function would have denominators 25.

These two things cannot both be true; at most one of them can be. You might think that the two pdfs would come out the same after reducing to lowest terms, but when you work them out you will see that they are not the same.

How can we decide? There are two ways to think about that question: is there a mathematical way to show that one of them has equally likely outcomes and the other does not? Or we could ask, if we do this experiment in the real world, which one gives the actual probabilities?

I’ll give a mathematical answer, but the second question is the more interesting one, because if there were no way to test probability theory in the real world, that would be a very sad state of affairs!

We will test these two models against each other by using the frequentist approach: we will repeat the experiment a very large number of times, and see what proportion (relative frequency) of the time each possible value of X shows up. According to the frequentist interpretation of probability, if we repeat the experiment a very large number of times, those relative frequencies should be close to the actual probabilities.

In fact, we won’t do the experiment in real life (by drawing actual physical numbered balls), but we will use R to simulate it. That will mean that we can easily repeat the “experiment” (the simulation) 1000 times or more if we want! That’s a pretty large number.
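To give an idea of what such a simulation might look like, here is a minimal sketch in R (the variable names are just my choices for illustration):

# simulate choosing two numbers from 1 to 5 WITH replacement, 1000 times,
# recording the larger of the two numbers each time
larger <- replicate(1000, max(sample(1:5, 2, replace = TRUE)))
# relative frequencies of each value of X, to compare with the two candidate pdfs
table(larger) / 1000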

Your job: (for now)

Find the pdf for X using $S_{1}$ and assuming the outcomes are equally likely. Then do the same for $S_{2}$ assuming its outcomes are equally likely. Verify that the two pdfs are not the same.

 

Notes for Monday 26 March (includes Monday 19 March)

(after Test 2)

Last Monday we discussed:

• Computing the expected value for a continuous RV

• Defining the variance and standard deviation for discrete and for continuous RVs

• Computing the variance for discrete and continuous RVs, using the definition and also the “computational formula”.

All definitions and many of the computations are in the slideshows:

Math2501ExpectedValueForRVs-slideshow

Math2501VarianceForRVs-slideshow

Make sure that you know the definitions and the notations! There is a pretty good summary of the notation, facts, and definitions about random variables in the box on the first page of these notes

(which are on the distributions and densities we will study next, so you may want to look through them.)

 

Another example worked in class: We also computed the expected value and variance for a continuous RV with pdf $f(x) = 3x^{2}$ for $0 < x < 1$. Here are the computations: (I omit a few details of the computation which you can probably fill in without much trouble)

Computing the expected value (the mean) of X:

$E(X)$ or $\mu_{X}$ = $\displaystyle \int_{0}^{1}x\cdot3x^{2}\textrm{d}x = 3\int_{0}^{1}x^{3}\textrm{d}x$

$= 3\left[\frac{x^{4}}{4}\right]_{0}^{1}$

$= 3\cdot \frac{1}{4} = \frac{3}{4}$

Computing the variance of X using the definition of the variance:

$Var(X)$ or $\sigma_{X}^{2}$ = $\displaystyle \int_{0}^{1}\left(x -\frac{3}{4}\right)^{2}\cdot 3x^{2}\textrm{d}x$

$= 3\displaystyle \int_{0}^{1}\left(x^{2} - \frac{3}{2}x + \frac{9}{16}\right)\cdot x^{2}\textrm{d}x$

$= 3\displaystyle \int_{0}^{1}\left(x^{4} - \frac{3}{2}x^{3} + \frac{9}{16}x^{2}\right)\textrm{d}x$

$= 3\displaystyle \left[\frac{x^{5}}{5} - \frac{3}{2}\cdot\frac{x^{4}}{4} + \frac{9}{16}\cdot\frac{x^{3}}{3}\right]_{0}^{1}$

$= 3\displaystyle \left[\frac{1}{5} - \frac{3}{8} + \frac{3}{16}\right] = \frac{3}{80}$

So the variance is $\frac{3}{80}$

NOTE: the standard deviation is then $\sqrt{\frac{3}{80}}$.

 

We can simplify this computation of the variance somewhat by using the computational formula (a result of a theorem)

$\sigma^{2}_{X} = E(X^{2}) - \mu_{X}^{2}$

Applying it to our RV, we already know $\mu_{X} = \frac{3}{4}$. We need $E(X^{2})$:

$E(X^{2}) = \displaystyle \int_{0}^{1}x^{2}\cdot 3x^{2}\textrm{d}x$

$= 3 \displaystyle \int_{0}^{1}x^{4}\textrm{d}x$

$= 3 \displaystyle \left[\frac{x^{5}}{5}\right]_{0}^{1}$

$ = \frac{3}{5}$

 

Now to compute the variance:

$\sigma^{2}_{X} = E(X^{2}) - \mu_{X}^{2} = \frac{3}{5} - \left(\frac{3}{4}\right)^{2} = \frac{3}{5} - \frac{9}{16} = \frac{3}{80}$

We get the same answer as we did using the definition (as we should, since the computational formula is a mathematical theorem).
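If you would like a numerical check of these computations in R (just a sketch using the built-in integrate function), you can do:

# pdf f(x) = 3x^2 on (0, 1)
f <- function(x) 3 * x^2
EX <- integrate(function(x) x * f(x), 0, 1)$value      # expected value, 0.75 = 3/4
EX2 <- integrate(function(x) x^2 * f(x), 0, 1)$value   # E(X^2), 0.6 = 3/5
EX2 - EX^2                                             # variance, 0.0375 = 3/80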

 

Please note that the variance cannot ever be a negative number. That is because, by definition, we are taking the mean of the squared deviations, and squared real numbers cannot be negative. If your variance ever comes out negative, you have made an error somewhere!