# Statistics - Hypothesis Testing (F-Test)

The F-test is named after the eminent statistician R.A. Fisher. It is used to test whether two independent estimates of population variance differ significantly, or whether the two samples may be regarded as drawn from normal populations having the same variance. To carry out the test, we calculate the F-statistic, defined as:

## Formula

${F} = \frac{\text{Larger estimate of population variance}}{\text{Smaller estimate of population variance}} = \frac{{S_1}^2}{{S_2}^2}, \text{ where } {S_1}^2 \gt {S_2}^2$
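The convention above can be sketched in a few lines of Python; the variance estimates here are hypothetical values chosen only for illustration.

```python
# Minimal sketch of the F-ratio convention: the larger variance estimate
# goes in the numerator, so F is always >= 1 (values are hypothetical).
s1_sq, s2_sq = 4.0, 2.5
F = max(s1_sq, s2_sq) / min(s1_sq, s2_sq)
print(F)  # 1.6
```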

### Procedure

Its testing procedure is as follows:

1. Set up the null hypothesis that the two population variances are equal, i.e. ${H_0: {\sigma_1}^2 = {\sigma_2}^2}$.
2. The variances of the random samples are calculated by using formula:

${S_1^2} = \frac{\sum(X_1- \bar X_1)^2}{n_1-1}, \\[7pt] \ {S_2^2} = \frac{\sum(X_2- \bar X_2)^2}{n_2-1}$

3. The variance ratio F is computed as:

${F} = \frac{{S_1}^2}{{S_2}^2}, \text{ where } {S_1}^2 \gt {S_2}^2$

4. The degrees of freedom are computed. The degrees of freedom for the larger estimate of the population variance are denoted by ${v_1}$ and those for the smaller estimate by ${v_2}$. That is,
   1. ${v_1}$ = degrees of freedom for the sample having the larger variance = ${n_1-1}$
   2. ${v_2}$ = degrees of freedom for the sample having the smaller variance = ${n_2-1}$
5. Then, from the F-table, the critical value of ${F}$ is found for ${v_1}$ and ${v_2}$ degrees of freedom at the 5% level of significance.
6. Finally, we compare the calculated value of ${F}$ with the table value of ${F_{.05}}$ for ${v_1}$ and ${v_2}$ degrees of freedom. If the calculated value of ${F}$ exceeds the table value, we reject the null hypothesis and conclude that the difference between the two variances is significant. On the other hand, if the calculated value of ${F}$ is less than the table value, the null hypothesis is accepted and we conclude that the difference between the two variances is not significant, i.e. both samples may be regarded as drawn from populations with the same variance.
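The full procedure can be sketched as a small Python function. This is a minimal sketch under the assumptions of the steps above: the tabulated critical value `f_table` is supplied by the caller (it could equally be obtained from `scipy.stats.f.ppf(1 - alpha, v1, v2)`), and the function name `f_test` is illustrative, not a standard API.

```python
def f_test(sample1, sample2, f_table):
    """Sketch of the F-test steps above; f_table is the tabulated
    critical value of F for (v1, v2) degrees of freedom at the chosen level."""
    n1, n2 = len(sample1), len(sample2)
    mean1 = sum(sample1) / n1
    mean2 = sum(sample2) / n2
    # Step 2: unbiased estimates of the two population variances.
    s1_sq = sum((x - mean1) ** 2 for x in sample1) / (n1 - 1)
    s2_sq = sum((x - mean2) ** 2 for x in sample2) / (n2 - 1)
    # Steps 3-4: the larger estimate goes in the numerator, so F >= 1.
    F = max(s1_sq, s2_sq) / min(s1_sq, s2_sq)
    # Step 6: reject H0 when the calculated F exceeds the table value.
    return F, F > f_table
```

For example, `f_test([1, 2, 3, 4, 5], [2, 4, 6, 8, 10], 6.39)` gives variance estimates 2.5 and 10, hence F = 4.0, which is below the critical value, so the null hypothesis is not rejected.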
### Example

Problem Statement:

In a sample of 8 observations, the sum of squared deviations of items from the mean was 94.5. In another sample of 10 observations, this value was found to be 101.7. Test whether the difference is significant at the 5% level. (You are given that at the 5% level of significance, the critical value of ${F}$ for ${v_1}$ = 7 and ${v_2}$ = 9 is ${F_{.05}}$ = 3.29.)

Solution:

Let us take the hypothesis that the difference in the variances of the two samples is not significant, i.e. ${H_0: {\sigma_1}^2 = {\sigma_2}^2}$.

We are given the following:

${n_1} = 8, {\sum {(X_1 - \bar X_1)}^2} = 94.5, {n_2} = 10, {\sum {(X_2 - \bar X_2)}^2} = 101.7, \\[7pt] {S_1^2} = \frac{\sum(X_1- \bar X_1)^2}{n_1-1} = \frac {94.5}{8-1} = \frac {94.5}{7} = {13.5}, \\[7pt] {S_2^2} = \frac{\sum(X_2- \bar X_2)^2}{n_2-1} = \frac {101.7}{10-1} = \frac {101.7}{9} = {11.3}$

Applying the F-test:

${F} = \frac{{S_1}^2}{{S_2}^2} = \frac {13.5}{11.3} = {1.195}$

For ${v_1}$ = 8 - 1 = 7 and ${v_2}$ = 10 - 1 = 9, the table value is ${F_{.05}}$ = 3.29. The calculated value of ${F}$ (1.195) is less than the table value. Hence, we accept the null hypothesis and conclude that the difference in the variances of the two samples is not significant at the 5% level.
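The worked example can be reproduced directly from the given summary figures:

```python
# Reproducing the worked example with the given summary figures.
ss1, n1 = 94.5, 8     # sum of squared deviations and size, sample 1
ss2, n2 = 101.7, 10   # sum of squared deviations and size, sample 2

s1_sq = ss1 / (n1 - 1)   # 94.5 / 7  = 13.5
s2_sq = ss2 / (n2 - 1)   # 101.7 / 9 = 11.3
F = s1_sq / s2_sq        # larger estimate is already in the numerator

f_table = 3.29           # given critical value for v1 = 7, v2 = 9 at 5%
print(round(F, 3), F < f_table)  # 1.195 True -> accept H0
```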
