- Linear algebra ( numpy.linalg )#
- The @ operator#
- Matrix and vector products#
- Decompositions#
- Matrix eigenvalues#
- Norms and other numbers#
- Solving equations and inverting matrices#
- Exceptions#
- Linear algebra on several matrices at once#
- numpy.linalg.svd#
- Singular Value Decomposition (SVD) in Python
- Singular Value Decomposition
- Implementing SVD in Python
- 1. Using NumPy
- 2. Using scikit-learn
- Conclusion
Linear algebra ( numpy.linalg )#
The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient low-level implementations of standard linear algebra algorithms. Those libraries may be provided by NumPy itself using C versions of a subset of their reference implementations but, when possible, highly optimized libraries that take advantage of specialized processor functionality are preferred. Examples of such libraries are OpenBLAS, MKL (TM), and ATLAS. Because those libraries are multithreaded and processor dependent, environment variables and external packages such as threadpoolctl may be needed to control the number of threads or to specify the processor architecture.
The SciPy library also contains a linalg submodule, and there is overlap in the functionality provided by the SciPy and NumPy submodules. SciPy contains functions not found in numpy.linalg , such as functions related to LU decomposition and the Schur decomposition, multiple ways of calculating the pseudoinverse, and matrix transcendentals such as the matrix logarithm. Some functions that exist in both have augmented functionality in scipy.linalg . For example, scipy.linalg.eig can take a second matrix argument for solving generalized eigenvalue problems. Some functions in NumPy, however, have more flexible broadcasting options. For example, numpy.linalg.solve can handle “stacked” arrays, while scipy.linalg.solve accepts only a single square array as its first argument.
The term matrix as it is used on this page indicates a 2d numpy.array object, and not a numpy.matrix object. The latter is no longer recommended, even for linear algebra. See the matrix object documentation for more information.
The @ operator#
Introduced in NumPy 1.10.0, the @ operator is preferable to other methods when computing the matrix product between 2d arrays. The numpy.matmul function implements the @ operator.
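As a quick illustration (a minimal sketch using only NumPy, with arbitrarily chosen matrices), the following checks that `a @ b` and `np.matmul(a, b)` agree for 2d arrays:

```python
import numpy as np

# For 2d arrays, the @ operator and np.matmul compute the same matrix product.
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a @ b)                                 # matrix product via @
print(np.allclose(a @ b, np.matmul(a, b)))   # True
```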
Matrix and vector products#
dot (a, b[, out])
Dot product of two arrays.
linalg.multi_dot (arrays, *[, out])
Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order.
vdot (a, b)
Return the dot product of two vectors.
inner (a, b)
Inner product of two arrays.
outer (a, b[, out])
Compute the outer product of two vectors.
matmul (x1, x2, /[, out])
Matrix product of two arrays.
tensordot (a, b[, axes])
Compute tensor dot product along specified axes.
einsum (subscripts, *operands[, out, dtype, ...])
Evaluates the Einstein summation convention on the operands.
einsum_path (subscripts, *operands[, optimize])
Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays.
linalg.matrix_power (a, n)
Raise a square matrix to the (integer) power n.
kron (a, b)
Kronecker product of two arrays.
Decompositions#
linalg.qr (a[, mode])
Compute the qr factorization of a matrix.
linalg.svd (a[, full_matrices, compute_uv, ...])
Singular Value Decomposition.
Matrix eigenvalues#
linalg.eig (a)
Compute the eigenvalues and right eigenvectors of a square array.
linalg.eigh (a[, UPLO])
Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix.
linalg.eigvals (a)
Compute the eigenvalues of a general matrix.
linalg.eigvalsh (a[, UPLO])
Compute the eigenvalues of a complex Hermitian or real symmetric matrix.
Norms and other numbers#
linalg.cond (x[, p])
Compute the condition number of a matrix.
linalg.det (a)
Compute the determinant of an array.
linalg.matrix_rank (A[, tol, hermitian])
Return matrix rank of array using SVD method.
linalg.slogdet (a)
Compute the sign and (natural) logarithm of the determinant of an array.
trace (a[, offset, axis1, axis2, dtype, out])
Return the sum along diagonals of the array.
Solving equations and inverting matrices#
linalg.solve (a, b)
Solve a linear matrix equation, or system of linear scalar equations.
linalg.tensorsolve (a, b[, axes])
Solve the tensor equation a x = b for x.
linalg.lstsq (a, b[, rcond])
Return the least-squares solution to a linear matrix equation.
linalg.inv (a)
Compute the (multiplicative) inverse of a matrix.
linalg.pinv (a[, rcond, hermitian])
Compute the (Moore-Penrose) pseudo-inverse of a matrix.
linalg.tensorinv (a[, ind])
Compute the 'inverse' of an N-dimensional array.
Exceptions#
linalg.LinAlgError
Generic Python-exception-derived object raised by linalg functions.
Linear algebra on several matrices at once#
Several of the linear algebra routines listed above are able to compute results for several matrices at once, if they are stacked into the same array.
This is indicated in the documentation via input parameter specifications such as a : (…, M, M) array_like . This means that if for instance given an input array a.shape == (N, M, M) , it is interpreted as a “stack” of N matrices, each of size M-by-M. Similar specification applies to return values, for instance the determinant has det : (…) and will in this case return an array of shape det(a).shape == (N,) . This generalizes to linear algebra operations on higher-dimensional arrays: the last 1 or 2 dimensions of a multidimensional array are interpreted as vectors or matrices, as appropriate for each operation.
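The stacking behavior described above can be sketched as follows; the array shapes are illustrative choices, not taken from the documentation:

```python
import numpy as np

# An array of shape (N, M, M) is treated as a stack of N matrices,
# each M-by-M, by many linalg routines.
rng = np.random.default_rng(42)
a = rng.standard_normal((4, 3, 3))   # stack of 4 matrices, each 3x3

d = np.linalg.det(a)                 # one determinant per matrix
print(d.shape)                       # (4,)

b = rng.standard_normal((4, 3, 1))   # one column right-hand side per matrix
x = np.linalg.solve(a, b)            # solves all 4 systems at once
print(x.shape)                       # (4, 3, 1)
```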
numpy.linalg.svd#
When a is a 2D array, and full_matrices=False , then it is factorized as u @ np.diag(s) @ vh = (u * s) @ vh , where u and the Hermitian transpose of vh are 2D arrays with orthonormal columns and s is a 1D array of a’s singular values. When a is higher-dimensional, SVD is applied in stacked mode as explained below.
Parameters : a (…, M, N) array_like
A real or complex array with a.ndim >= 2 .
full_matrices bool, optional
If True (default), u and vh have the shapes (…, M, M) and (…, N, N) , respectively. Otherwise, the shapes are (…, M, K) and (…, K, N) , respectively, where K = min(M, N) .
compute_uv bool, optional
Whether or not to compute u and vh in addition to s. True by default.
hermitian bool, optional
If True, a is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False.
Returns : When compute_uv is True, the result is a namedtuple with the following attribute names:
U { (…, M, M), (…, M, K) } array
Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True.
S (…, K) array
Vector(s) with the singular values, within each vector sorted in descending order. The first a.ndim - 2 dimensions have the same size as those of the input a.
Vh { (…, N, N), (…, K, N) } array
Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True.
Raises : LinAlgError
If SVD computation does not converge.
See also
scipy.linalg.svd
Similar function in SciPy.
scipy.linalg.svdvals
Compute singular values of a matrix.
Changed in version 1.8.0: Broadcasting rules apply, see the numpy.linalg documentation for details.
The decomposition is performed using LAPACK routine _gesdd .
SVD is usually described for the factorization of a 2D matrix \(A\) . The higher-dimensional case will be discussed below. In the 2D case, SVD is written as \(A = U S V^H\) , where \(A = a\) , \(U = u\) , \(S = \mathtt{np.diag}(s)\) and \(V^H = vh\) . The 1D array s contains the singular values of a and u and vh are unitary. The rows of vh are the eigenvectors of \(A^H A\) and the columns of u are the eigenvectors of \(A A^H\) . In both cases the corresponding (possibly non-zero) eigenvalues are given by s**2 .
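The eigenvalue relationship stated above can be checked numerically; this is a small sketch with an arbitrarily chosen matrix:

```python
import numpy as np

# Check that the squared singular values of A equal the (nonzero)
# eigenvalues of A^H A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

s = np.linalg.svd(A, compute_uv=False)       # singular values, descending
evals = np.linalg.eigvalsh(A.conj().T @ A)   # eigenvalues of A^H A, ascending

# Compare both spectra after sorting in descending order
print(np.allclose(np.sort(s**2)[::-1], np.sort(evals)[::-1]))  # True
```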
If a has more than two dimensions, then broadcasting rules apply, as explained in Linear algebra on several matrices at once . This means that SVD is working in “stacked” mode: it iterates over all indices of the first a.ndim - 2 dimensions and for each combination SVD is applied to the last two indices. The matrix a can be reconstructed from the decomposition with either (u * s[..., None, :]) @ vh or u @ (s[..., None] * vh) . (The @ operator can be replaced by the function np.matmul for python versions below 3.5.)
If a is a matrix object (as opposed to an ndarray ), then so are all the return values.
>>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6)
>>> b = np.random.randn(2, 7, 8, 3) + 1j*np.random.randn(2, 7, 8, 3)
Reconstruction based on full SVD, 2D case:
>>> U, S, Vh = np.linalg.svd(a, full_matrices=True)
>>> U.shape, S.shape, Vh.shape
((9, 9), (6,), (6, 6))
>>> np.allclose(a, np.dot(U[:, :6] * S, Vh))
True
>>> smat = np.zeros((9, 6), dtype=complex)
>>> smat[:6, :6] = np.diag(S)
>>> np.allclose(a, np.dot(U, np.dot(smat, Vh)))
True
Reconstruction based on reduced SVD, 2D case:
>>> U, S, Vh = np.linalg.svd(a, full_matrices=False)
>>> U.shape, S.shape, Vh.shape
((9, 6), (6,), (6, 6))
>>> np.allclose(a, np.dot(U * S, Vh))
True
>>> smat = np.diag(S)
>>> np.allclose(a, np.dot(U, np.dot(smat, Vh)))
True
Reconstruction based on full SVD, 4D case:
>>> U, S, Vh = np.linalg.svd(b, full_matrices=True)
>>> U.shape, S.shape, Vh.shape
((2, 7, 8, 8), (2, 7, 3), (2, 7, 3, 3))
>>> np.allclose(b, np.matmul(U[..., :3] * S[..., None, :], Vh))
True
>>> np.allclose(b, np.matmul(U[..., :3], S[..., None] * Vh))
True
Reconstruction based on reduced SVD, 4D case:
>>> U, S, Vh = np.linalg.svd(b, full_matrices=False)
>>> U.shape, S.shape, Vh.shape
((2, 7, 8, 3), (2, 7, 3), (2, 7, 3, 3))
>>> np.allclose(b, np.matmul(U * S[..., None, :], Vh))
True
>>> np.allclose(b, np.matmul(U, S[..., None] * Vh))
True
Singular Value Decomposition (SVD) in Python
Singular Value Decomposition (SVD) is one of the most widely used dimensionality reduction techniques. SVD decomposes a matrix into three other matrices.
If we view matrices as transformations acting on a space, then with singular value decomposition we break a single transformation down into three simpler movements.
In this article, we will look at different methods for implementing SVD.
Singular Value Decomposition
SVD factors a single matrix into the matrices U, D, and V* respectively.
- U and V* are orthogonal matrices.
- D is a diagonal matrix of singular values.
SVD can also be seen as the decomposition of one complex transformation into three simpler transformations (rotation, scaling, and rotation).
In terms of transformations
Essentially, this lets us express the original matrix as a linear combination of low-rank matrices. Only the first few singular values are large.
The terms beyond the first few can be ignored without losing much information, which is why SVD is called a dimensionality reduction technique.
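The low-rank idea above can be sketched with plain NumPy; the names k and A_k here are our own illustrative choices, not from the article:

```python
import numpy as np

# Keep only the k largest singular values and watch the reconstruction
# error shrink as k grows (Eckart-Young: the rank-k truncation is the
# best rank-k approximation in the Frobenius norm).
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
for k in (1, 3, 6):
    A_k = (U[:, :k] * s[:k]) @ Vh[:k, :]   # rank-k approximation
    err = np.linalg.norm(A - A_k)          # Frobenius norm of the residual
    print(k, err)
# At k=6 (full rank here) the error is ~0 up to floating point noise.
```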
Implementing SVD in Python
Let's start implementing SVD in Python. We will work with several libraries to demonstrate how the implementation proceeds.
1. Using NumPy
NumPy, which provides implementations of most linear algebra methods in Python, offers an easy way to perform SVD.
We will use the numpy.linalg module, whose svd function performs SVD on a matrix.
import numpy as np

#Creating a matrix A
A = np.array([[3,4,3],[1,2,3],[4,2,1]])

#Performing SVD
U, D, VT = np.linalg.svd(A)

#Checking if we can remake the original matrix using U, D, VT
A_remake = (U @ np.diag(D) @ VT)
print(A_remake)
Note that D is a 1D array rather than a 2D array. Mathematically, D is a diagonal matrix in which most entries end up zero; such a matrix is called a sparse matrix. To save space, it is returned as a 1D array of the diagonal values.
2. Using scikit-learn
We will use the TruncatedSVD class from the sklearn.decomposition module.
With TruncatedSVD we need to specify the number of components we want in the output, so instead of computing the whole decomposition we only compute the required singular values and trim the rest.
#Importing required modules
import numpy as np
from sklearn.decomposition import TruncatedSVD

#Creating array
A = np.array([[3,4,3],[1,2,3],[4,2,1]])

#Fitting the SVD class
trun_svd = TruncatedSVD(n_components = 2)
A_transformed = trun_svd.fit_transform(A)

#Printing the transformed matrix
print("Transformed Matrix:")
print(A_transformed)
Conclusion
In this article, we saw how to implement Singular Value Decomposition (SVD) using libraries such as NumPy and scikit-learn.