numpy.var
Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis.
Parameters : a array_like
Array containing numbers whose variance is desired. If a is not an array, a conversion is attempted.
axis None or int or tuple of ints, optional
Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array.
If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before.
dtype data-type, optional
Type to use in computing the variance. For arrays of integer type the default is float64 ; for arrays of float types it is the same as the array type.
out ndarray, optional
Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary.
ddof int, optional
“Delta Degrees of Freedom”: the divisor used in the calculation is N - ddof, where N represents the number of elements. By default ddof is zero (see the sketch after this parameter list).
keepdims bool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array (see the sketch after this parameter list).
If the default value is passed, then keepdims will not be passed through to the var method of sub-classes of ndarray; however, any non-default value will be. If the sub-class's method does not implement keepdims, any exceptions will be raised.
where array_like of bool, optional
Elements to include in the variance. See reduce for details.
mean array_like, optional
Provide the mean to prevent its recalculation. The mean should have a shape as if it was calculated with keepdims=True. The axis for the calculation of the mean should be the same as used in the call to this var function.
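A minimal sketch of the ddof and keepdims behavior described above, computed for a small 2x2 float array (exact scalar reprs may vary slightly across NumPy versions):
>>> import numpy as np
>>> a = np.array([[1.0, 2.0], [3.0, 4.0]])
>>> np.var(a)                             # ddof=0: divisor is N = 4
1.25
>>> np.var(a, ddof=1)                     # divisor is N - ddof = 3
1.6666666666666667
>>> np.var(a, axis=1)                     # reduced axis dropped: shape (2,)
array([0.25, 0.25])
>>> np.var(a, axis=1, keepdims=True)      # reduced axis kept: shape (2, 1)
array([[0.25],
       [0.25]])
>>> a - np.var(a, axis=1, keepdims=True)  # broadcasts against the input
array([[0.75, 1.75],
       [2.75, 3.75]])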
Returns : variance ndarray, see dtype parameter above
If out=None, returns a new array containing the variance; otherwise, a reference to the output array is returned.
The variance is the average of the squared deviations from the mean, i.e., var = mean(x), where x = abs(a - a.mean())**2.
The mean is typically calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.
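A quick sketch of the formula above, using a float version of the 2x2 array from the Examples below (x.size plays the role of N in the flattened case):
>>> import numpy as np
>>> a = np.array([[1.0, 2.0], [3.0, 4.0]])
>>> x = abs(a - a.mean())**2   # squared deviations from the mean
>>> x.sum() / x.size           # mean of x, divisor N (ddof=0)
1.25
>>> np.var(a)
1.25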
Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative.
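A short sketch of the complex case, using a hypothetical two-element complex array; the result is real and nonnegative:
>>> import numpy as np
>>> z = np.array([1 + 1j, 1 - 1j])
>>> z - z.mean()               # deviations are purely imaginary here
array([0.+1.j, 0.-1.j])
>>> np.var(z)                  # abs() is applied before squaring
1.0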
For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue.
>>> a = np.array([[1, 2], [3, 4]])
>>> np.var(a)
1.25
>>> np.var(a, axis=0)
array([1., 1.])
>>> np.var(a, axis=1)
array([0.25, 0.25])
In single precision, var() can be inaccurate:
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.var(a)
0.20250003
Computing the variance in float64 is more accurate:
>>> np.var(a, dtype=np.float64)
0.20249999932944759 # may vary
>>> ((1-0.55)**2 + (0.1-0.55)**2)/2
0.2025
Specifying a where argument:
>>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]])
>>> np.var(a)
6.833333333333333 # may vary
>>> np.var(a, where=[[True], [True], [False]])
4.0
Using the mean keyword to save computation time:
>>> import numpy as np
>>> from timeit import timeit
>>>
>>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]])
>>> mean = np.mean(a, axis=1, keepdims=True)
>>>
>>> g = globals()
>>> n = 10000
>>> t1 = timeit("var = np.var(a, axis=1, mean=mean)", globals=g, number=n)
>>> t2 = timeit("var = np.var(a, axis=1)", globals=g, number=n)
>>> print(f'Percentage execution time saved {100*(t2-t1)/t2:.0f}%')  #doctest: +SKIP
Percentage execution time saved 32%
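Whether or not the precomputed mean is supplied, the result should be identical; a quick sketch continuing with a and mean from the snippet above (assumes a NumPy version that accepts the mean keyword):
>>> np.allclose(np.var(a, axis=1, mean=mean), np.var(a, axis=1))
True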