The F-test is named after the statistician R.A. Fisher. It is used to test whether two independent estimates of population variance differ significantly, or whether the two samples may be regarded as drawn from normal populations having the same variance. To carry out the test, we calculate the F statistic, defined as:

## Formula

${F} = \frac{\text{Larger estimate of population variance}}{\text{Smaller estimate of population variance}} = \frac{S_1^2}{S_2^2}, \quad \text{where } S_1^2 > S_2^2$

### Procedure

Its testing procedure is as follows:

- Set up the null hypothesis that the two population variances are equal, i.e. ${H_0: {\sigma_1}^2 = {\sigma_2}^2}$
- The variances of the random samples are calculated using the formulas:
  $S_1^2 = \frac{\sum(X_1 - \bar X_1)^2}{n_1 - 1}, \qquad S_2^2 = \frac{\sum(X_2 - \bar X_2)^2}{n_2 - 1}$
- The variance ratio F is computed as:
  $F = \frac{S_1^2}{S_2^2}, \quad \text{where } S_1^2 > S_2^2$

- The degrees of freedom are computed. The degrees of freedom of the larger estimate of the population variance are denoted by v1 and the smaller estimate by v2. That is,
- ${v_1}$ = degrees of freedom for sample having larger variance = ${n_1-1}$
- ${v_2}$ = degrees of freedom for sample having smaller variance = ${n_2-1}$
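The steps above can be sketched in plain Python. This is a minimal illustration, not from the original article: the helper computes each unbiased sample variance with the $n-1$ denominator, then places the larger estimate in the numerator so that $F \ge 1$, returning the statistic along with $v_1$ and $v_2$.

```python
def f_statistic(sample1, sample2):
    """F statistic for testing equality of two population variances."""
    def sample_variance(xs):
        n = len(xs)
        mean = sum(xs) / n
        # unbiased estimate: sum of squared deviations divided by n - 1
        return sum((x - mean) ** 2 for x in xs) / (n - 1)

    s1_sq = sample_variance(sample1)
    s2_sq = sample_variance(sample2)
    # the larger variance estimate goes in the numerator, so F >= 1;
    # v1 is the degrees of freedom of the larger estimate, v2 of the smaller
    if s1_sq >= s2_sq:
        return s1_sq / s2_sq, len(sample1) - 1, len(sample2) - 1
    return s2_sq / s1_sq, len(sample2) - 1, len(sample1) - 1
```

The computed F is then compared against the tabulated critical value for $(v_1, v_2)$ degrees of freedom at the chosen significance level.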

### Example

**Problem Statement:**

In a sample of 8 observations, the sum of squared deviations of items from the mean was 94.5. In another sample of 10 observations, the value was found to be 101.7. Test whether the difference is significant at the 5% level. (You are given that at the 5% level of significance, the critical value of ${F}$ for ${v_1}$ = 7 and ${v_2}$ = 9 is ${F_{.05}}$ = 3.29.)

**Solution:**

Let us take the hypothesis that the difference in the variances of the two samples is not significant i.e. ${H_0: {\sigma_1}^2 = {\sigma_2}^2}$

We are given the following:

$S_1^2 = \frac{\sum(X_1 - \bar X_1)^2}{n_1 - 1} = \frac{94.5}{8 - 1} = \frac{94.5}{7} = 13.5, \\[7pt]
S_2^2 = \frac{\sum(X_2 - \bar X_2)^2}{n_2 - 1} = \frac{101.7}{10 - 1} = \frac{101.7}{9} = 11.3$

Applying the F-test:

${F} = \frac{{S_1}^2}{{S_2}^2} = \frac {13.5}{11.3} = {1.195}$

For ${v_1}$ = 8 - 1 = 7 and ${v_2}$ = 10 - 1 = 9, ${F_{.05}}$ = 3.29. The calculated value of ${F}$ is less than the table value. Hence, we accept the null hypothesis and conclude that the difference in the variances of the two samples is not significant at the 5% level.
