There are 2 graduate-level assignments for Quantitative Analysis for Managers. Please give your opinions, show your ethics, and discuss deeply.

PS: These topics are Discussion Board questions, not essays. For details, see: 2 Graduate Level Assignments of Quantitative Analysis for Managers.docx

The reading materials are listed below:
1. Reading 4 – Sampling.pdf
2. Topic 2 – Sampling Example.xlsx

Topic 1 – Devotional Question:

Please respond to the following questions:

Sampling

Read Luke 10:29-37

Since we cannot minister to all people, what is the chance God will bring someone in need across your path? How should you respond? How do you respond? How often will God “sample” your response?

Topic 2 – Discussion Question:

A. Read (Reading 4 – Sampling.pdf), then respond to the following question:

Why is it important to understand random sampling when drawing conclusions about a population from a sample? Would random selection be important if you were making a decision to accept or reject a shipment of goods from a supplier? Why or why not? Cite a couple of examples where the concept of random sampling is important in the business environment.

B. Prepare an Excel spreadsheet by following the example spreadsheet attached (Topic 2 – Sampling Example.xlsx).

Illustrate a systematic sample of a population. Calculate the required descriptive statistics. Be different from the example. Create a sample composed of at least 9 values. Determine a 95% Confidence Interval of the Mean.

Reading 4: Sampling Distributions & Inferential Statistics Review

(File008r reference only)


Sampling Distributions & Inferential Statistics Review

Inferential statistics requires the inference of something about a population from a sample. Sampling is the process of making a selection from a population of interest such that the sample has a high probability of representing the population. If we were interested in the mean life of our company’s tires versus the mean life of our competitor’s tires, we would take a sample from each population to make that comparison and draw conclusions.

Reasons for Sampling:

1) Many times, contacting the entire population is too time consuming. The chances of a person being elected to an office are never based on the entire population of registered voters.

2) The cost of studying the entire population may be prohibitive. To test the possible success of marketing a new cereal, you could not afford to sample the entire population of people who have children to whom the product might be targeted.

3) Often a sample value is just as adequate as knowing the entire population value. The Bureau of Labor Statistics (BLS) samples grocery stores scattered throughout the nation to determine the cost of milk, bread, beans and other food items. Including ALL grocery stores throughout the nation would not yield substantial differences. A sample is just as good as a 100% survey.

4) Sampling is practical, especially if you must use destructive testing to determine the quality of a product. Examples: ammunition testing, testing a critical part to the point of destruction.

5) Sometimes it is physically impossible to check all of the items in the population. Examples: fish, birds, snakes, dogs or cats.

The process of sampling must produce a fair representation of the population of interest. You cannot bias – intentionally or unintentionally – the results of your sample and then apply inferential concepts.

Sampling error is defined as the difference between the parameter and the statistic used to estimate it: µ − X̄ is the sampling error of the mean, and σ² − S² is the sampling error of the variance. µ and σ² are parameters; X̄ and S² are statistics. We use statistics to estimate parameters.


Two Types of Statistics:

There are two basic types of statistics – descriptive and inferential. Generally, descriptive statistics will describe the data set, whereas inferential statistics will infer something about the population value from taking a sample.

Descriptive statistics describes data sets.

Inferential statistics infers something about a population (larger group of data) from a sample (smaller group of data).

When using descriptive statistics, we make use of measures of central tendency and measures of dispersion.

Central Tendency Measures

Mean – arithmetic average.
Median – the one in the middle.
Mode – the one that occurs most often.
Weighted Mean – uses the probability of each possible outcome to develop a mean.

Dispersion Measures

Range – difference between the high and the low.
Average deviation – the mean of the signed deviations around the mean (always zero).
Variance – mean of the squared deviations around the mean.
Standard deviation – the square root of the variance.
Percentiles, quartiles, deciles – divide the data set into equal increments.
Inter-quartile range (IQR) – the difference between the first quartile and the third quartile.
Outliers – data set aberrations which need to be adjusted or eliminated.
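These measures map directly onto Python's standard statistics module. A quick sketch using a small made-up data set (the numbers are illustrative only, not from the reading):

```python
import statistics

# Hypothetical data set (illustrative values only):
data = [4, 8, 6, 5, 3, 8, 7, 6, 9, 6]

mean = statistics.mean(data)           # arithmetic average
median = statistics.median(data)       # the one in the middle
mode = statistics.mode(data)           # the one that occurs most often
data_range = max(data) - min(data)     # difference between the high and the low
variance = statistics.pvariance(data)  # mean of the squared deviations around the mean
std_dev = statistics.pstdev(data)      # square root of the variance

# Quartiles divide the data set into equal increments; the IQR is the
# difference between the first and third quartiles.
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
```

The `pvariance`/`pstdev` calls treat the list as a full population; `variance`/`stdev` would give the sample versions used later in this reading.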

Important Concepts for Inferential Statistics:

Population – all of the observations of interest to the researcher.

Sample – a representative portion of the population.

Populations are defined by the researcher. All of the measures of central tendency or all of the measures of dispersion can be classified as a parameter or a statistic. A parameter is a descriptive measure of the population (both begin with P). A statistic is a descriptive measure of the sample (both begin with S). In other words, I can have a descriptive measure, say the mean, which can be called a parameter or can be called a statistic. The mean describes a part of the data set. If reference is made to the mean as a parameter, this term refers to the population value. If reference is made to the mean as a statistic, this term refers to the sample value.

Variables – Quantitative and Qualitative.

Quantitative variables are quantifiable. They take on two particular types – discrete (whole numbers) and continuous (range of values).

Qualitative variables are not quantifiable, such as gender, religious preference or hair color. A technique called dummy variables will allow us to quantify this type of variable.

To work with inferential statistics, one must use a random sampling concept. A random sample occurs when each and every element in the population has an equal and independent chance of being selected. Without random selection, the interpretation of the results of any study is very difficult if not impossible.

There are four basic methods of random sampling:

Random Numbers – each and every element in the population has an equal and independent chance of being selected.

Stratified Sampling – divide the population into strata (homogeneous sub-sets) and then take proportionate random samples from the population which match the strata percentages.

Systematic Sampling – the population is placed in random order, then a starting point is selected at random. Next, every 10th or 20th or 50th or “ith” item may be selected for sampling.

Cluster Sampling – the population is divided into geographically similar units and from those units a number of clusters are selected at random. Once the cluster is selected, 100% of that cluster will be sampled.
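As a rough sketch of the systematic method (the population values below are hypothetical), a systematic sample can be drawn by choosing a random starting point and then taking every k-th element:

```python
import random

def systematic_sample(population, k):
    """Systematic sampling: pick a random starting point within the
    first k items, then take every k-th item after that."""
    start = random.randrange(k)
    return population[start::k]

# Hypothetical population of 100 items:
population = list(range(1, 101))
sample = systematic_sample(population, 10)   # every 10th item
```

Because every selected item sits exactly k positions from the previous one, a population of 100 with k = 10 always yields a sample of 10.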

If we do not select a sample that meets the condition of randomness from a population, then we cannot apply the inferential concepts in our interpretation. We cannot assert that the sample represents the population if the condition of randomness is not present in our sample.


Division of Population into Sub-Sets or Different Samples:

In inferential statistics, a sample is selected to represent the population. Let’s assume you want to know the average age of a class of 22 people. That’s not hard to figure out. You would go around the room and ask each person his or her age, sum the ages and divide by 22. Let’s say we did not want to take the time or trouble to get the average age of all the students, since all students in the class were first year college students. You could select three at random from among the 22 and let their age represent the population age. This is letting a sample represent the population. The problem is there are many possible arrangements of the class of 22 into data sets three at a time. Each sample selected would have a mean of that sample. The mean of that sample then is supposed to represent the mean of the population, but there are a bunch (a sophisticated statistical term) of arrangements.

Okay, you say, how do we measure a bunch?

Let’s take a simple example and determine how many sub-sets can be developed from the 22 students 3 at a time.

There are two basic methods for dividing our population into sub-sets. One is called permutations and the other is combinations. In using permutations, order is important, but in combinations order is not important.

nPr = n! / (n − r)!

nCr = n! / [r!(n − r)!]

Take the elements ABC and arrange them (develop the sub-sets) of three elements three at a time. Assume 0! = 1. There are 6 arrangements of the data set using permutations, but only 1 arrangement of the data set using combinations.

The calculations:

3P3 = 3! / (3 − 3)! = (3 × 2 × 1) / 0! = 6

If order is important, then ABC is different from ACB, which is different from BAC, and so forth.


3C3 = 3! / [3!(3 − 3)!] = (3 × 2 × 1) / [3 × 2 × 1 × (0!)] = 1

If order is not important, then ABC is the same as BAC and CAB, and so forth.

Let’s now turn our attention to the average age of 22 students using a sample sub-set of 3 at a time. Determine the possible arrangements of a class of 22 arranged 3 at a time. Assume order is not important.

22C3 = 22! / [3!(22 − 3)!] = (22 × 21 × 20 × 19!) / [3 × 2 × 1 × (19!)]

The 19! (read: 19 factorial) in both the numerator and denominator cancels out, so there is no need to do excessive multiplication.

22C3 = (22 × 21 × 20) / 6 = 9,240 / 6 = 1,540

When order is not important, there are 1,540 possible arrangements of our data set of 22 students arranged 3 at a time. Since we were interested in the average age of the students in the class and wanted to take a sample of 3 at a time from among the full population of 22, there would be 1,540 possible sample means. Each sample mean would then be representative of the population mean as long as the rule of randomness is not violated.
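These counts can be verified with Python's math module, which provides permutation and combination functions directly:

```python
import math

# Arrangements of 3 elements taken 3 at a time:
perms = math.perm(3, 3)   # order important: 3!/(3 - 3)! = 6
combs = math.comb(3, 3)   # order not important: 3!/(3!(3 - 3)!) = 1

# The class of 22 students sampled 3 at a time, order not important:
possible_samples = math.comb(22, 3)   # 1,540 possible sample means
```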

The distribution of these means is referred to as the sampling distribution of sample means. It is important to know that the Central Limit Theorem (which was introduced in the last reading) allows us to assume the distribution of these sample means is itself normally distributed as long as “n” (the sample size) is sufficiently large (usually 30 or greater). When this occurs, we can then use the Z-process to determine probabilities for the sampling distribution of sample means.
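As a sketch of the Z-process applied to a sampling distribution (the population parameters and observed sample mean below are hypothetical, not from the reading):

```python
import math
from statistics import NormalDist

# Hypothetical population: mu = 50, sigma = 12; sample size n = 36 (>= 30).
mu, sigma, n = 50, 12, 36
x_bar = 53                         # an observed sample mean (hypothetical)

std_error = sigma / math.sqrt(n)   # sigma / sqrt(n) = 2.0
z = (x_bar - mu) / std_error       # (53 - 50) / 2 = 1.5

# Probability that a sample mean falls at or below 53:
p = NormalDist().cdf(z)            # about 0.933
```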

Standard Deviation, Standard Error and Sampling Error:

Variance and the variance of the standard error are essentially the same thing. Both are variances. There is one difference: the variance is associated with a sample, and the variance of the standard error is associated with the sampling distribution of sample means.

Sampling error is, however, a different term. Sampling error is the difference between the population mean and the sample mean (µ less X̄).


Any time n is greater than 1, the sampling distribution of sample means is the distribution of interest. Samples greater than 1 make all calculations using the standard error (the square root of the variance of the standard error). A single sample, however, has a standard deviation and not a standard error. When comparing the parameter (mean of the population) to the statistic (mean of the sample), the difference will be sampling error.

Even though this was discussed in the review on probability, let’s review the concept one more time. In the review on probability, we looked at the normal distribution and the Z-process. Remember the Z-process is used when working with normal distributions and continuous data sets.

Z = (X − µ) / σ

This Z approach would be used if “n” is unspecified or n = 1. If “n” is greater than 1, the standard error is used. Here the Z-process is calculated as follows:

Z = (X̄ − µ) / σx̄

where the standard error is

σx̄ = σ / √n

As “n” increases, the standard error will decrease.

For example:

Using the standard error formula just above and given a standard deviation of 2, determine the standard error for the following values of n: 4, 16, 25, 36.

Try to make these calculations before looking at the solution below. Don’t peek yet!!!


Solution:

n     σx̄
4     1.00
16    0.50
25    0.40
36    0.33

Notice that as the sample size increases (n goes up), the standard error (not sampling error, but standard error) decreases. Technically, if we survey 100% of the population, we will have no standard error. We will still have a standard deviation, but no standard error. This inverse relationship is quite useful.
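The solution table above can be reproduced in a couple of lines; with sigma = 2, the standard error shrinks as n grows:

```python
import math

sigma = 2
# Standard error for each sample size: sigma / sqrt(n)
std_errors = {n: sigma / math.sqrt(n) for n in (4, 16, 25, 36)}
# n = 4 -> 1.00, n = 16 -> 0.50, n = 25 -> 0.40, n = 36 -> 0.33 (rounded)
```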

One of the most important theorems in all of inferential statistics is the Central Limit Theorem (CLT). As demonstrated above, as the sample size increases, the underlying distribution of sample means will approach a normal distribution. This occurs when “n” is equal to or greater than 30. This is important because when the distribution is normal, the Z-distribution can be used to determine probabilities. As you should recall from your statistics course, if the sample size is less than 30, the t-distribution must be used instead of the Z-distribution.

Two techniques are useful when working with sampling error (the difference between X̄ and µ). One is confidence intervals and the other is hypothesis testing. Confidence intervals measure how confident we are that the true, unknown population mean is between an upper and a lower confidence limit. Hypothesis testing measures the hypothesized mean of the population against a sample value to determine if the null hypothesis is true or not true. This course will not concern itself with the details of either of those two techniques. They are simply mentioned here as a reminder.

Forecasting Using Samples:

In Reading 3, some time was spent on the ideas of distinguishable (not parallel, different or not similar) and indistinguishable (parallel, not different or similar) data sets. A newspaper vendor was used as an example. Going one step further, suppose the vendor wanted to know how many newspapers the vendor could sell. To do this, the vendor believes that the use of historical data would best forecast future results (sales in the future). From Reading 3, the basic data set is repeated below. This provides the starting point for the historic forecast.


Newspaper Demand Data Set (Original Before Analysis):

Date  Day        Sold   |  Date  Day        Sold
24    Wednesday   70    |  11    Sunday      41
25    Thursday    73    |  12    Monday      59
26    Friday      68    |  13    Tuesday     43
27    Saturday    56    |  14    Wednesday   46
28    Sunday      58    |  15    Thursday    49
29    Monday      71    |  16    Friday      52
30    Tuesday     65    |  17    Saturday    39
31    Wednesday   55    |  18    Sunday      40
1     Thursday    47    |  19    Monday      44
2     Friday      54    |  20    Tuesday     59
3     Saturday    42    |  21    Wednesday   52
4     Sunday      44    |  22    Thursday    41
5     Monday      61    |  23    Friday      56
6     Tuesday     51    |  24    Saturday    44
7     Wednesday   48    |  25    Sunday      40
8     Thursday    50    |  26    Monday      54
9     Friday      54    |  27    Tuesday     46
10    Saturday    45    |

Adjusting for Distinguishable Data (different – grey shade) and Indistinguishable Data (similar – un-shaded area):

(In the original reading, the same demand table is repeated here with grey shading marking the distinguishable observations; the shading does not survive in this text version, so the duplicate table is not reproduced.)


There is a difference in demand between weekday and weekend sales (distinguishable data or different data). There was an environmental change (unmatched price increase) at the first of the month which rendered the last 8 days of the previous month different (distinguishable from the next month).

After adjusting the data set, there remained two separate indistinguishable (similar) data sets. The weekdays are indistinguishable from each other and the weekends are indistinguishable from other weekends. Using these two data sets, a forecast of future sales for the weekdays and a separate forecast of future sales for the weekends may be accomplished. The forecast for the weekday is shown in the following table.

Indistinguishable Weekday Data Set

Date  Weekday     Demand  |  Date  Weekday     Demand
1     Thursday     47     |  19    Monday       44
2     Friday       54     |  20    Tuesday      59
5     Monday       61     |  21    Wednesday    52
6     Tuesday      51     |  22    Thursday     41
7     Wednesday    48     |  23    Friday       56
8     Thursday     50     |  26    Monday       54
9     Friday       54     |  27    Tuesday      46
12    Monday       59     |
13    Tuesday      43     |
14    Wednesday    46     |
15    Thursday     49     |
16    Friday       52     |

Mean = 50.84
Standard Deviation = 5.65
Standard Error = σ / √n = 5.65 / √19 = 1.30
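The weekday figures can be verified with a short script; the 19 demand values come from the weekday table above:

```python
import math
import statistics

# The 19 weekday demand values from the table:
weekday_demand = [47, 54, 61, 51, 48, 50, 54, 59, 43, 46, 49, 52,
                  44, 59, 52, 41, 56, 54, 46]

mean = statistics.mean(weekday_demand)                # about 50.84
std_dev = statistics.stdev(weekday_demand)            # sample standard deviation, about 5.65
std_error = std_dev / math.sqrt(len(weekday_demand))  # about 1.30
```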

The mean sales figure is 50.84 (rounded to 51). The standard deviation is 5.65, with a standard error of 1.30 papers. The expected future sales on weekdays is 51 papers. However, how is the standard error useful, you might ask?

The vendor will use the 51 newspapers sold on the weekdays as a point estimate of future weekday sales. However, the point estimate is an estimate of the sales at a given point. From the basic statistics course, point estimates are just that – point estimates. It is quite difficult if not impossible to have the forecasted point estimate of the sales, in this instance, equal the actual sales. It is more practical to place around the point estimate an interval which then can be interpreted using either the Empirical Rule or Chebyshev’s Theorem.


Several rules apply for the use of the Empirical Rule. First, the data set must be continuous data (range of values) and the distribution must be normal. In this case, the data set is discrete (whole numbers) and there is no evidence of a normal distribution. Does this mean the forecast cannot have an interval estimate, you ask? Of course not, I reply. Whatever the interpretation (Empirical Rule or Chebyshev’s Theorem), the calculation will be the same.

X̄ ± 1Sx̄
X̄ ± 2Sx̄
X̄ ± 3Sx̄

Around the point estimate of 51, an interval is calculated which is based on 1, 2 or 3 times the standard error (1, 2 or 3 is the number of standard errors). Here the assumption is that the sampling distribution of sample means is being encountered (n ≥ 30).

Arbitrarily selecting 2, the interval estimate is calculated to be the following (the standard error comes from the table):

51 ± 2 (1.30)
51 ± 2.6
48.4 to 53.6
48 to 54 (rounded)

The calculation is rather simple. However, how is it interpreted?

Since the data set is discrete and non-normal, the Empirical Rule interpretation cannot be used. Chebyshev’s Theorem applies to any data set (discrete or continuous). Chebyshev’s formula is as follows:

1 − 1/K²

where K = the number of standard deviations (standard errors here). K must also be greater than 1 (K > 1). If 1 is used in the formula, the result is zero, which is impractical for an interpretation.


Since K was selected to be 2, two can be substituted into the formula to obtain an interpretation:

1 − 1/2² = 1 − 0.25 = 0.75

At least 75% is the proper interpretation.

Putting the interpretation into more understandable English, the following can be said: at least 75% of the time, the weekday newspaper sales should be between 48 and 54 newspapers. This knowledge helps the vendor to order newspapers for the weekdays.
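The interval and the Chebyshev bound worked above reduce to a few lines of arithmetic:

```python
point_estimate = 51    # rounded mean weekday sales
std_error = 1.30
k = 2                  # number of standard errors

lower = point_estimate - k * std_error   # 48.4
upper = point_estimate + k * std_error   # 53.6

# Chebyshev's Theorem: for any distribution, at least 1 - 1/K^2 of the
# outcomes fall within K standard errors of the mean (K > 1).
chebyshev_bound = 1 - 1 / k**2           # 0.75, i.e., at least 75%
```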

The variance and standard deviation calculations are not shown in detail here, but they follow the same pattern as the variance and standard deviation calculations learned in the basic statistics course.

In words, the variance is the mean of the squared deviations around the mean. The standard deviation is the square root of the variance. The standard error is the standard deviation divided by the square root of the size of the sample. These are conceptual issues, so they should not hold up the calculation and interpretation issues surrounding this forecast.

Let’s complete the forecast for the indistinguishable weekend data set.

Indistinguishable Weekend Data Set

Date  Weekend     Demand
3     Saturday     42
4     Sunday       44
10    Saturday     45
11    Sunday       41
17    Saturday     39
18    Sunday       40
24    Saturday     44
25    Sunday       40

Mean = 41.88
Standard Deviation = 5.90 (see table below)
Standard Error = σ / √n = 5.90 / √8 = 2.10


The sale of newspapers on the weekend is less than on the weekdays. This is something that is logical and should be expected by the newspaper vendor. The calculation of the interval estimate is as follows (using 2 once again):

42 ± 2 (2.10)
42 ± 4.2
37.8 to 46.2
37 to 47 (rounded)

At least 75% of the time, the weekend sales will be between 37 and 47 newspapers.

The forecast for the weekdays is higher and more tightly compacted around the mean sales than is the weekend forecast. The distribution for weekend sales is broader or wider than that of the weekday sales.

Our weekday sales are higher than the weekend sales. The weekday sales are on average about 1.21 times the weekend sales, or about 21% higher (51 ÷ 42 = 1.2143). It should make some sense why the data set must be divided into two data sets – distinguishable (not similar) and indistinguishable (similar).


Supplemental Calculations:

For those of you who want to know how to calculate the variance and the standard deviation, the table below shows the calculation of the variance and the standard deviation for the weekend data set. Simply said, this is the pattern that should be used to calculate the variance and standard deviation for the weekday sales; that calculation has not been shown prior to this point in the reading.

The variance is the mean of the squared deviations around the mean. The standard deviation is the square root of the variance.

Data (X)  Mean (X̄)  |X − X̄|  (X − X̄)²
42        41.88      0.12      0.0144
44        41.88      2.12      4.4944
45        41.88      3.12      9.7344
41        41.88      0.88      0.7744
39        41.88      2.88      8.2944
40        41.88      1.88      3.5344
44        41.88      2.12      4.4944
40        41.88      1.88      3.5344

Variance = 34.8608
Standard Deviation (square root of the variance) = 5.9043

Example Spreadsheet (Topic 2 – Sampling Example.xlsx):

Item   Random Value   Systematic Sample (k=4)
1      0.7205         0.7205
2      0.7130
3      0.3661
4      0.5266
5      0.6313         0.6313
6      0.5031
7      0.8798
8      0.6146
9      0.1369         0.1369
10     0.7524
11     0.9638
12     0.1891
13     0.6256         0.6256
14     0.9458
15     0.3894
16     0.1384
17     0.7102         0.7102
18     0.9346
19     0.3860
20     0.5424
21     0.4260         0.4260
22     0.4385
23     0.8111
24     0.4610
25     0.8322         0.8322

Use the Excel random number generator, =RAND(), to fill the Random Value column.

Descriptive Statistics
Mean = 0.7764, =AVERAGE(C2,C26)
Standard Deviation = 0.0790, =STDEV.S(C2,C26)
Sample size = 7
95% Margin of error = 0.0730, =CONFIDENCE.T(0.05, D3, D4)

Confidence Interval
Lower limit = 0.7033
Upper limit = 0.8494
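The spreadsheet's 95% confidence interval can also be computed outside Excel. This sketch uses a hypothetical 9-value sample (matching the assignment's minimum of 9) and a hard-coded t critical value of 2.306 for 8 degrees of freedom, since Python's standard library has no t-distribution; the margin mirrors what CONFIDENCE.T(0.05, s, n) returns:

```python
import math
import statistics

# Hypothetical sample of 9 values (the assignment requires at least 9):
sample = [12, 15, 11, 14, 13, 16, 12, 15, 14]

mean = statistics.mean(sample)
std_dev = statistics.stdev(sample)            # sample standard deviation
std_error = std_dev / math.sqrt(len(sample))

t_critical = 2.306                            # t(0.975, df = 8), from a t-table
margin = t_critical * std_error               # CONFIDENCE.T-style margin of error

lower = mean - margin                         # lower confidence limit
upper = mean + margin                         # upper confidence limit
```

With a real data set, scipy.stats.t.ppf(0.975, n - 1) would replace the hard-coded critical value.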

