Spectral methods in machine learning and new strategies for very large datasets


Communicated by David L. Donoho, Stanford University, Stanford, CA, October 21, 2008 (received for review December 19, 2007)

Article Figures & SI Info & Metrics PDF


Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here 2 new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these—based on sampling—leads to a randomized algorithm wherein the kernel induces a probability distribution on its set of partitions, whereas the latter approach—based on sorting—provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods.

Keywords: statistical data analysis | kernel method | low-rank approximation

Spectral methods hold a central place in statistical data analysis. Indeed, the spectral decomposition of a positive-definite kernel underlies a variety of classical approaches such as principal components analysis (PCA), in which a low-dimensional subspace that explains most of the variance in the data is sought; Fisher discriminant analysis, which aims to determine a separating hyperplane for data classification; and multidimensional scaling (MDS), used to realize metric embeddings of the data. Moreover, the importance of spectral methods in modern statistical learning has been reinforced by the recent development of several algorithms designed to treat nonlinear structure in data—a case where classical methods fail. Popular examples include isomap (1), spectral clustering (2), Laplacian (3) and Hessian (4) eigenmaps, and diffusion maps (5). Though these algorithms have different origins, each requires the computation of the principal eigenvectors and eigenvalues of a positive-definite kernel.

Although the computational cost (in both space and time) of spectral methods is but an inconvenience for moderately sized datasets, it becomes a genuine barrier as data sizes increase and new application areas appear. A variety of techniques, spanning fields from classical linear algebra to theoretical computer science (6), have been proposed to trade off analysis precision against computational resources; however, it remains the case that the methods above do not yet "scale up" effectively to modern-day problem sizes on the order of tens of thousands. Practitioners must often resort to ad hoc techniques such as setting small kernel elements to zero, even when the effects of such schemes on the resulting analysis may not be clear (3, 4).

The goal of this article is twofold. First, we aim to demonstrate quantifiable performance-complexity trade-offs for spectral methods in machine learning, by exploiting the distinction between the amount of data to be analyzed and the amount of information those data represent relative to the kernel approximation task at hand. Second, and equally important, we seek to provide practitioners with new strategies for very large datasets that perform well in practice. Our approach depends on the Nyström extension, a kernel approximation technique for integral equations whose potential as a heuristic for machine learning problems has been previously noted (7, 8). We make this notion precise by revealing the power of the Nyström method and giving quantitative bounds on its performance.

Our main results yield two efficient algorithms—one randomized, the other deterministic—that determine a way of sampling a dataset prior to application of the Nyström method. The former involves sampling from an induced probability distribution, and the latter computes a simple rank statistic of the data. Each of these approaches yields easily implementable numerical schemes, for which we provide empirical evidence of improved performance in simulation relative to existing methods for low-rank kernel approximation.

Spectral Methods in Machine Learning

Before describing our main results, we briefly survey the different spectral methods used in machine learning, and show how our results can be applied to a variety of classical and more contemporary algorithms. Let {x1, …, xn} be a collection of data points in ℝ^m. Spectral methods can be classified according to whether they rely on:

Outer characteristics of the point cloud.

These are methods such as PCA or Fisher discriminant analysis. They require the spectral analysis of a positive-definite kernel of dimension m, the extrinsic dimensionality of the data.

Inner characteristics of the point cloud.

These are methods such as MDS, along with recent extensions that rely on it (more or less) to perform an embedding of the data points. They require the spectral analysis of a kernel of dimension n, the cardinality of the point cloud.

In turn, the requisite spectral analysis task becomes prohibitive as the (intrinsic or extrinsic) size of the dataset becomes large. For methods such as PCA and MDS, the analysis task consists of finding the best rank-k approximation to a symmetric, positive-semidefinite (SPSD) matrix—a problem whose efficient solution is the main focus of our article. Many other methods (e.g., refs. 1–5) are reduced by only a few adjustments to this same core problem of kernel approximation.

In particular, techniques such as Fisher discriminant analysis or Laplacian eigenmaps require the solution of a generalized eigenvalue problem of the form Av = λBv, where B is an SPSD matrix. It is well known that the solution to this problem is related to the eigendecomposition of the kernel B^{−1/2}AB^{−1/2} according to

B^{−1/2}AB^{−1/2}(B^{1/2}v) = λ(B^{1/2}v).

Notice that if A is also SPSD, the case for the methods mentioned above, then so is B^{−1/2}AB^{−1/2}. In the case of Laplacian eigenmaps, B is diagonal, and translating the original problem into one of low-rank approximation can be done efficiently. As another example, both Laplacian and Hessian eigenmaps require eigenvectors corresponding to the k smallest eigenvalues of an SPSD matrix H. These may be obtained from a rank-k approximation to Ĥ := tr(H)I − H, as Ĥ is positive definite and admits the same eigenvectors as H, but with the order of associated eigenvalues reversed.
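As a concrete illustration of these two reductions, the following minimal numpy sketch (ours, not from the paper; the function names are hypothetical) forms the SPSD kernel B^{−1/2}AB^{−1/2} from a generalized eigenproblem and applies the trace trick Ĥ = tr(H)I − H that reverses the eigenvalue order.

import numpy as np

def symmetrized_kernel(A, B):
    # Form K = B^{-1/2} A B^{-1/2}; if A and B are SPSD (B nonsingular), K is SPSD
    w, V = np.linalg.eigh(B)                     # B = V diag(w) V^T
    B_inv_sqrt = (V / np.sqrt(w)) @ V.T          # B^{-1/2}
    return B_inv_sqrt @ A @ B_inv_sqrt

def reverse_spectrum(H):
    # H_hat = tr(H) I - H has the same eigenvectors as H with eigenvalue order
    # reversed, so the k smallest eigenpairs of H become the k largest of H_hat
    return np.trace(H) * np.eye(H.shape[0]) - H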

Low-Rank Approximation and the Nyström Extension

Let G be a real, n × n, positive quadratic form. We may express it in spectral coordinates as G = UΛU^T, where U is an orthogonal matrix whose columns are the eigenvectors of G, and Λ = diag(λ1, λ2, …, λn) is a diagonal matrix containing the ordered eigenvalues λ1 ≥ λ2 ≥ … ≥ λn ≥ 0 of G. Owing to this representation, the optimal rank-k approximation to G, for any choice of unitarily invariant* norm ∥·∥, is simply

G_k := UΛ_k U^T, with Λ_k := diag(λ1, …, λk, 0, …, 0).   [1]

In other words, among all matrices of rank k, G_k minimizes ∥G − G_k∥. We adopt in this article the Frobenius norm ∥G∥² := ∑_{ij} G_{ij}², but the results we present are easily transposed to other unitarily invariant norms. Under the Frobenius norm, the squared error incurred by the optimal approximant G_k is ∥G − G_k∥² = ∑_{i=k+1}^{n} λ_i², the sum of squares of the n − k smallest eigenvalues of G.
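For concreteness, the short numpy sketch below (ours; illustrative only) computes the optimal rank-k approximant of Eq. 1 and checks the tail-eigenvalue expression for its Frobenius error.

import numpy as np

def best_rank_k(G, k):
    w, U = np.linalg.eigh(G)               # eigenvalues in ascending order
    w, U = w[::-1], U[:, ::-1]             # reorder so that w[0] >= ... >= w[n-1]
    return (U[:, :k] * w[:k]) @ U[:, :k].T, w

G = np.cov(np.random.randn(20, 200))       # a toy 20 x 20 SPSD matrix
Gk, w = best_rank_k(G, 5)
err = np.linalg.norm(G - Gk, 'fro')
print(err, np.sqrt(np.sum(w[5:] ** 2)))    # the two quantities agree, per Eq. 1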

The price to be paid for this optimal approximation is the expression of G in spectral coordinates—the standard complexity of which is O(n³). Although a polynomial complexity class is appealing by theoretical standards, this cubic scaling is often prohibitive for the sizes of modern datasets typically seen in practice. With this impetus, a number of heuristic approaches to obtaining alternative low-rank decompositions have been applied in the statistical machine-learning literature, many of them relying on the Nyström method to approximate a positive-definite kernel (7, 8), which we now describe.

Historically, the Nyström extension was introduced to obtain numerical solutions to integral equations. Let g : [0,1] × [0,1] → ℝ be an SPSD kernel and (u_i, λ_i^u), i ∈ ℕ, denote its pairs of eigenfunctions and eigenvalues as follows:

∫_0^1 g(x, y) u_i(y) dy = λ_i^u u_i(x).

The Nyström extension provides a means of approximating k eigenvectors of g(x, y) based on an evaluation of the kernel at k² distinct points {(x_m, x_n)}_{m,n=1}^{k} in the interval [0,1] × [0,1]. Defining a kernel matrix G(m, n) ≡ G_mn := g(x_m, x_n) composed of these evaluations leads to the k coupled eigenvalue problems

(1/k) ∑_{n=1}^{k} G_mn v_i(n) = λ_i^v v_i(m),  m = 1, …, k,

where (v_i, λ_i^v) represent the k eigenvector–eigenvalue pairs associated with G. These pairs may then be used to form an approximation ũ_i ≈ u_i to the eigenfunctions of g as follows:

ũ_i(x) = (1/(k λ_i^v)) ∑_{n=1}^{k} g(x, x_n) v_i(n).
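A minimal numerical sketch of this construction (ours; the uniform-weight quadrature normalization and the test kernel are our assumptions) applies the extension to g(x, y) = min(x, y), whose leading eigenvalue on [0, 1] is known to be 4/π².

import numpy as np

g = lambda x, y: np.minimum(x, y)              # test kernel with known spectrum
k = 20
nodes = (np.arange(k) + 0.5) / k               # k quadrature points in [0, 1]
G = g(nodes[:, None], nodes[None, :])          # k x k kernel matrix G_mn = g(x_m, x_n)
lam, V = np.linalg.eigh(G)
lam, V = lam[::-1] / k, V[:, ::-1]             # lambda_i^v, estimates of lambda_i^u

def u_tilde(i, x):
    # Nystrom extension of the i-th eigenvector to an approximate eigenfunction
    return g(x[:, None], nodes[None, :]) @ V[:, i] / (k * lam[i])

print(lam[0], 4 / np.pi ** 2)                  # approximate vs. exact leading eigenvalue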

The essence of the method is hence to use only partial information about the kernel to first solve a simpler eigenvalue problem, and then to extend the eigenvectors obtained therewith by using complete knowledge of the kernel. The same idea may in turn be applied to extend the solution of a reduced matrix eigenvalue problem to approximate the eigenvectors of an SPSD matrix G (8).

Specifically, one may approximate k eigenvectors of G by decomposing and then extending a k × k principal submatrix of G. First, let G be partitioned as

G = [ A  B^T ; B  C ],

with A ∈ ℝ^{k×k}; we say that this partition corresponds to the multi-index I = {1, 2, …, k}. Now define spectral decompositions G = UΛU^T and A = U_A Λ_A U_A^T; the Nyström extension then provides an approximation for k eigenvectors in U as

Ũ := [ U_A ; B U_A Λ_A^{−1} ].   [2]

In turn, the approximations Ũ ≈ U and Λ_A ≈ Λ may be composed to yield an approximation G̃ ≈ G according to

G̃ := Ũ Λ_A Ũ^T = [ A  B^T ; B  B A^{−1} B^T ].   [3]

We call G̃ the Nyström approximation to G corresponding to I = {1, 2, …, k}; the extension of this definition to arbitrary multi-index I will be made formal below. We see from Eq. 2 that the main computational burden now takes place on a principal submatrix A of dimension k < n, and hence the Nyström extension provides a practical means of scaling up spectral methods in machine learning to very large kernels. From Eqs. 1 and 3 we deduce the resultant approximation error to be

∥G − G̃∥ = ∥C − B A^{−1} B^T∥ = ∥SC(A)∥,   [4]

where SC(A) := C − BA^{−1}B^T is known as the Schur complement of A in G (9). The characterization of Eq. 4 ties the quality of the Nyström approximation explicitly to the partitioning of G; intuitively, this error reflects the loss of information that results from discarding submatrix C while retaining A and B.
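The block formulas above translate directly into a few lines of numpy. The sketch below (ours, using a synthetic low-rank-plus-ridge kernel for illustration) builds the Nyström approximant from the leading k × k block and confirms that the error equals the Frobenius norm of the Schur complement, as in Eq. 4.

import numpy as np

def nystrom_leading_block(G, k):
    A, C = G[:k, :k], G[k:, k:]
    B = G[k:, :k]                                 # so that G = [A B^T; B C]
    wA, UA = np.linalg.eigh(A)
    U_tilde = np.vstack([UA, B @ UA / wA])        # Eq. 2: [U_A; B U_A Lambda_A^{-1}]
    G_tilde = (U_tilde * wA) @ U_tilde.T          # Eq. 3
    SC = C - B @ np.linalg.solve(A, B.T)          # Schur complement of A in G
    return G_tilde, SC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
G = X @ X.T + 1e-6 * np.eye(200)                  # SPSD, effective rank about 8
G_tilde, SC = nystrom_leading_block(G, 8)
print(np.linalg.norm(G - G_tilde, 'fro'), np.linalg.norm(SC, 'fro'))   # equal, per Eq. 4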

Main Results

The Nyström method yields a means of approximating G conditioned on a particular choice of partition, hence shifting the computational load to determining that partition. To this end, we provide two algorithms for efficiently selecting from among all (n choose k) possible partitions of G while controlling the approximation error of Eq. 4. We first generalize the partitioning introduced above as follows. Let I, J ⊂ {1, 2, …, n} be multi-indices of respective cardinalities k and l that contain pairwise distinct elements in {1, 2, …, n}. We write I = {i_1, …, i_k}, J = {j_1, …, j_l}, and denote by Ī the complement of I in {1, …, n}. In order to characterize the Nyström approximation error induced by an arbitrary partition, we write G_{I×J} for the k × l matrix whose (p, q)-th entry is given by (G_{I×J})_{pq} = G_{i_p j_q}, and abbreviate G_I for G_{I×I}.

Determining an optimal partition of G is thus seen to be equivalent to selecting a multi-index I such that the error

∥G − G̃∥ = ∥G_Ī − G_{Ī×I} G_I^{−1} G_{I×Ī}∥   [5]

induced by the Nyström approximation G̃ corresponding to I is minimized. This naturally leads us to the algorithmic question of how to select the multi-index I in an efficient yet effective manner. In the sequel we propose both a randomized and a deterministic algorithm for accomplishing this task, and derive the resultant average or worst-case approximation error. To understand the power of this approach, however, it is helpful to first consider conditions under which the Nyström method is capable of providing perfect reconstruction of G.

Of course, if we take for I the entire set {1, 2, …, n}, then the Nyström extension yields G̃ = G trivially. However, note that if G is of rank k < n, then there exist multi-indices I of cardinality k such that the Nyström method provides an exact reconstruction: exactly those such that rank(G_I) = rank(G) = k, since this implies

G_Ī = G_{Ī×I} G_I^{−1} G_{I×Ī}.   [6]

We verify Eq. 6 presently, but the intuition behind it is as follows. If G is SPSD and of rank k, then it can be expressed as a Gram matrix whose entries comprise the inner products of a set of n vectors in ℝ^k. Knowing the correlation of these n vectors with a subset of k linearly independent vectors in turn allows us to reconstruct them exactly. Hence, in this case, the information contained in G_I is sufficient to reconstruct G, and the Nyström method performs the reconstruction.

Before introducing our two algorithms for efficient partition selection and bounding their performance, we require the following result, which gives an explicit characterization of the Schur complement in terms of ratios of determinants.

Lemma 1 [Crabtree–Haynsworth (10)]. Let G_I be a nonsingular principal submatrix of some SPSD matrix G. Then the Schur complement of G_I in G is given element-wise by

(SC(G_I))_{ij} = det(G_{(I∪{i})×(I∪{j})}) / det(G_I),  i, j ∈ Ī.   [7]

We may use the Crabtree–Haynsworth characterization of Lemma 1 to deduce Eq. 6 as follows. First, notice that if rank(G) = k = |I|, then Eq. 7 implies that the diagonal of SC(G_I) is zero. To wit, we have SC(G_I)_{ii} = det(G_{I∪{i}})/det(G_I), with the numerator the determinant of a (k + 1)-dimensional principal submatrix of a positive-definite matrix of rank k, and hence zero. However, it is known that positive definiteness of G implies positive definiteness of SC(G_I) for any multi-index I (9), allowing us to conclude that SC(G_I) is identically zero if rank(G_I) = rank(G) = k.
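A quick numerical check of this exact-reconstruction property (our sketch, using a generic multi-index for a synthetic rank-k Gram matrix):

import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 4
X = rng.standard_normal((n, k))
G = X @ X.T                                   # SPSD with rank(G) = k

I = rng.choice(n, size=k, replace=False)      # a generic I has det(G_I) != 0
Ibar = np.setdiff1d(np.arange(n), I)
G_I = G[np.ix_(I, I)]
G_IbarI = G[np.ix_(Ibar, I)]
# Schur complement of G_I in G, whose norm is the Nystrom error of Eq. 5
SC = G[np.ix_(Ibar, Ibar)] - G_IbarI @ np.linalg.solve(G_I, G_IbarI.T)
print(np.linalg.norm(SC, 'fro'))              # numerically zero, as Eq. 6 predicts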

Randomized Multi-index Selection by Weighted Sampling.

Our first algorithm for selecting a multi-index I rests on the observation that since G is positive definite, it induces a probability distribution on the set of all I : |I| = k as follows:

p_{G,k}(I) = det(G_I) / Z,   [8]

where Z = ∑_{I : |I|=k} det(G_I) is a normalizing constant.

Our corresponding randomized algorithm for low-rank kernel approximation consists of first selecting I by sampling I ∼ p_{G,k}(I) according to Eq. 8, and then implementing the Nyström extension to obtain G̃ from G_I and G_{Ī×I} in analogy to Eqs. 2 and 3. This algorithm is well behaved in the sense that if G is of rank k and we seek a rank-k approximant G̃, then G̃ = G and we realize the potential for perfect reconstruction afforded by the Nyström extension. Indeed, det(G_I) ≠ 0 implies that rank(G_I) = k, and so Eq. 6 in turn implies that ∥G_Ī − G_{Ī×I} G_I^{−1} G_{I×Ī}∥ = 0 when rank(G) = k.
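When n is small enough to enumerate all multi-indices, one can sample from p_{G,k} exactly; the sketch below (ours, purely illustrative) does so by brute force, which also makes the role of the normalizing constant Z explicit. The Metropolis scheme described later replaces this enumeration for large n.

import itertools
import numpy as np

def sample_multi_index(G, k, rng):
    n = G.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    dets = np.array([np.linalg.det(G[np.ix_(I, I)]) for I in subsets])
    probs = dets / dets.sum()                 # Eq. 8: p_{G,k}(I) = det(G_I) / Z
    return subsets[rng.choice(len(subsets), p=probs)]

rng = np.random.default_rng(1)
X = rng.standard_normal((12, 3))
G = X @ X.T + 0.01 * np.eye(12)
print(sample_multi_index(G, 3, rng))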

For the general case in which rank(G) ≥ k, we have the following error bound in expectation:

Theorem 1. Let G be a real, n × n, positive quadratic form with eigenvalues λ1 ≥ … ≥ λn. Let G̃ be the Nyström approximation to G corresponding to I, with I ∼ p_{G,k}(I). Then

E∥G − G̃∥ ≤ (k + 1) ∑_{i=k+1}^{n} λ_i.   [9]

Proof: By Eq. 5, we seek to bound

E∥G − G̃∥ = ∑_{I : |I|=k} p_{G,k}(I) ∥SC(G_I)∥.   [10]

Denote the eigenvalues of SC(G_I) as {λ̄_j}_{j=1}^{n−k}; positive definiteness and subadditivity of the square root imply that

∥SC(G_I)∥ = ( ∑_{j=1}^{n−k} λ̄_j² )^{1/2} ≤ ∑_{j=1}^{n−k} λ̄_j = tr(SC(G_I)).

The Crabtree–Haynsworth characterization of Lemma 1 yields

tr(SC(G_I)) = ∑_{i∈Ī} det(G_{I∪{i}}) / det(G_I),

and thus

E∥G − G̃∥ ≤ (1/Z) ∑_{I : |I|=k} ∑_{i∈Ī} det(G_{I∪{i}}),   [11]

where we recall that Z = ∑_{I : |I|=k} det(G_I).

Every multi-index of cardinality k + 1 appears exactly k + 1 times in the double sum of Eq. 11, whence

∑_{I : |I|=k} ∑_{i∈Ī} det(G_{I∪{i}}) = (k + 1) ∑_{I′ : |I′|=k+1} det(G_{I′}).   [12]

As G is an SPSD matrix, the Cauchy–Binet theorem tells us that the sum of its principal (k + 1)-minors can be expressed as the sum of (k + 1)-fold products of its ordered eigenvalues:

∑_{I′ : |I′|=k+1} det(G_{I′}) = ∑_{i_1 < ⋯ < i_{k+1}} λ_{i_1} λ_{i_2} ⋯ λ_{i_{k+1}}.

It thus follows that

∑_{I′ : |I′|=k+1} det(G_{I′}) ≤ ( ∑_{i=k+1}^{n} λ_i ) ∑_{I : |I|=k} det(G_I) = Z ∑_{i=k+1}^{n} λ_i.

Combining the above relation with Eq. 12, we obtain

E∥G − G̃∥ ≤ (k + 1) ∑_{i=k+1}^{n} λ_i,

which concludes the proof.

Deterministic Multi-index Selection by Sorting.

Theorem 1 provides for an SPSD approximant G̃ such that E∥G − G̃∥ ≤ (k + 1) ∑_{i=k+1}^{n} λ_i in the Frobenius norm, compared with the optimal deterministic result ∥G − G_k∥ = ( ∑_{i=k+1}^{n} λ_i² )^{1/2} afforded by the full spectral decomposition. However, this probabilistic bound raises two practical algorithmic issues. First of all, sampling from the probability distribution p_{G,k}(I) ∝ det(G_I), whose support has cardinality (n choose k), does not necessarily offer any computational savings over an exact spectral decomposition—a consideration we address in detail later, through the introduction of approximate sampling methods.

Moreover, in certain situations, practitioners may require a greater level of confidence in the approximation than is given by a bound in expectation. Although we cannot necessarily hope to preserve the quality of the bound of Theorem 1, we may sacrifice its power to obtain corresponding gains in the deterministic nature of the result and in computational efficiency. To this end, our deterministic algorithm for low-rank kernel approximation consists of letting I contain the indices of the k largest diagonal elements of G and then implementing the Nyström extension analogously to Eqs. 2 and 3. The following theorem bounds the corresponding worst-case error:

Theorem 2. Let G be a real positive-definite kernel, let I contain the indices of its k largest diagonal elements, and let G̃ be the corresponding Nyström approximation. Then

∥G − G̃∥ ≤ ∑_{i∈Ī} G_{ii}.   [13]

The proof of Theorem 2 is straightforward, once we have the following generalization of the Hadamard inequality (9):

Lemma 2 [Fischer's Lemma]. If G is a positive-definite matrix and G_I a nonsingular principal submatrix, then

det(G) ≤ det(G_I) det(G_Ī).

Proof of Theorem 2: We have from Eq. 10 that ∥G − G̃∥ ≤ tr(SC(G_I)); applying Lemma 1 in turn gives

∥G − G̃∥ ≤ ∑_{i∈Ī} det(G_{I∪{i}}) / det(G_I),

after which Lemma 2 yields the final result.

While yielding only a worst-case error bound, this algorithm is easily implemented and appears promising in the context of array signal processing (11). Beginning with the case k = 1, it may be seen through repeated application of Theorem 2 to constitute a simple stepwise-greedy approximation to optimal multi-index selection.
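The deterministic scheme amounts to a sort followed by the extension of Eqs. 2 and 3; a compact numpy sketch (ours, not the authors' code) is:

import numpy as np

def nystrom_deterministic(G, k):
    n = G.shape[0]
    I = np.argsort(np.diag(G))[::-1][:k]     # indices of the k largest diagonal entries
    Ibar = np.setdiff1d(np.arange(n), I)
    order = np.concatenate([I, Ibar])
    A = G[np.ix_(I, I)]
    B = G[np.ix_(Ibar, I)]
    wA, UA = np.linalg.eigh(A)
    U = np.vstack([UA, B @ UA / wA])          # Eq. 2 applied to the permuted kernel
    G_perm = (U * wA) @ U.T                   # Eq. 3
    G_tilde = np.empty_like(G)
    G_tilde[np.ix_(order, order)] = G_perm    # undo the permutation
    return G_tilde, I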

Remarks and Discussion.

The Nyström extension, in conjunction with efficient techniques for multi-index selection, hence provides a means of approximate spectral analysis in situations where the exact eigendecomposition of a positive-definite kernel is prohibitively expensive. As a strategy for dealing with very large, high-dimensional datasets in the context of both the classical and contemporary statistical analysis techniques described earlier, this approach lends itself easily to a straightforward implementation in practical settings, and also carries with it the accompanying performance guarantees of Theorems 1 and 2 through the two algorithms presented above.

In considering the performance and complexity of these 2 algorithms, we first compare them with the only other result known to us for explicitly quantifying the approximation error of an SPSD matrix using the Nyström extension (12). This algorithm consists of choosing row/column subsets by sampling, independently and with replacement, indices in proportion to elements of {G_{ii}²}_{i=1}^{n}, the squares of the main diagonal entries of G. The resultant probabilistic bound is written to include the possibility of sampling c ≥ k indices to obtain a rank-k approximation obeying (in Frobenius norm)

E∥G − G̃∥ ≤ ∥G − G_k∥ + (64k/c)^{1/4} ∑_{i=1}^{n} G_{ii}²,   [14]

an additive error bound relative to that of the optimal rank-k approximation G_k obtained via exact spectral decomposition.
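For reference, the sampling rule attributed above to ref. 12 is a one-line operation; the sketch below (ours, for illustration only) draws c indices independently and with replacement in proportion to G_ii².

import numpy as np

def sample_diag_squared(G, c, rng):
    p = np.diag(G) ** 2
    return rng.choice(G.shape[0], size=c, replace=True, p=p / p.sum())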

Two important points follow from a comparison of the bounds of our Theorems 1 and 2 with that of Eq. 14. First, inspection of Eq. 13 (Theorem 2) and Eq. 14 reveals that a conservative sufficient condition for the former to improve upon the latter when c = k is that tr(G) ≥ n (also bearing in mind that Eq. 13 is deterministic, whereas Eq. 14 holds only in expectation). A comparison of Eq. 9 (Theorem 1) and Eq. 14 reveals the more desirable relative form of the former, which involves only the n − k smallest eigenvalues of G and avoids an additive error term. Recall that Eq. 9 also guarantees zero error for an approximation whose rank k equals the rank of G.

A direct implementation of Theorem 1, however, requires sampling from p_{G,k}(I), which may be computationally infeasible. In the sequel we demonstrate that an approximate sampling is sufficient to outperform other algorithms for SPSD kernel approximation. Moreover, a sharp decrease in error is observed in simulations when k meets or exceeds the effective rank of G. This feature is especially desirable for modern spectral methods such as those described in the introduction, which yield very large matrices of low effective rank: whereas the number of data points n determines the dimensionality of the kernel matrix G, its effective rank is given by the number of components of the manifold M from which the data are sampled plus dim(M), a sum typically much smaller than n.

We also remark on similarities and differences between our strategies and ongoing work in the theoretical computer science community to derive complexity-class results for randomized low-rank approximation of arbitrary m × n matrices. Though our goals and corresponding algorithms are quite different in their approach and scope of application, it is of interest to note that our Theorem 1 can in fact be viewed as a kernel-level version of a theorem of ref. 13, where a related notion termed volume sampling is employed for column selection. However, in ref. 13, as in the seminal work of ref. 6 and others building upon it, approximations are obtained by applying linear projections to the approximand; although different algorithms define different projections, they do not in general guarantee the return of an SPSD approximant when applied to an SPSD matrix. The same holds true for approaches motivated by numerical analysis; in recent work, the authors of ref. 14 apply the method of ref. 15 to obtain a low-rank approximation termed an interpolative decomposition, and focus on its use in obtaining accurate and stable approximations to matrices with low numerical rank.

With reference to these various lines of work, we remark that a projection method applied to a matrix A can naturally be related to the Nyström extension applied to AA^T, though in our application setting it is of specific interest to work directly with the kernel in question. In particular, our results indicate how, by restricting to quadratic forms, one is able to exploit more specialized results from linear algebra than in the case of arbitrary rectangular matrices. We refer the reader to ref. 12 for an extended discussion of the various differences between projection-based approaches and the Nyström extension.

We conclude these remarks with a discussion of the computational complexity of the above algorithms for spectral decomposition. Recall that an exact spectral decomposition requires O(n³) operations, with algorithms specialized for sparse matrices running in time O(n²) (4). The deterministic algorithm of Theorem 2 requires finding the k largest diagonal elements of G, which can be done in O(n log k) steps. In analogy to Eq. 2, the subsequent spectral decomposition of G_I = U_I Λ_I U_I^T can be done in O(k³), and the final step of calculating G_{Ī×I} U_I Λ_I^{−1} requires time O((n − k)k² + k²), as Λ_I is diagonal. The total running time of this deterministic algorithm is hence O(n log k + k³ + (n − k)k²), which compares favorably with previously known methods when k is small. The algorithm of Theorem 1 selects multi-index I at random, and thus the sorting complexity O(n log k) is replaced by the complexity of sampling from p_{G,k}(I) ∝ det(G_I). Below we describe an approximate sampling technique based on stochastic simulation whose complexity is O(k³), owing to the computation of determinants, with a multiplicative constant depending on the precise simulation method employed.

Numerical Implementation and Simulation Results

We now detail the implementation of our algorithms, and present simulation results for cases of practical interest that are representative of recent and more classical methods in spectral machine learning. Though simulations imply the adoption of a measure on the input space of SPSD matrices, our results hold for every SPSD matrix.

We first describe an approximate sampling technique adopted as an alternative to sampling directly from p_{G,k}(I) according to Eq. 8. Among several standard approaches (16), we chose to employ the Metropolis algorithm to simulate an ergodic Markov chain that admits p_{G,k}(I) as its equilibrium distribution, via a traversal of the state space {I : |I| = k} according to a straightforward uniform proposal step that seeks to exchange one element of I with one of Ī at each iteration. We made no attempt to optimize this choice, as its performance in practice was observed to be satisfactory, with distance to p_{G,k}(·) in total variation norm typically observed to be small after on the order of 50|I| iterations of the chain. This approximate sampling technique yields a complete algorithm for low-complexity spectral analysis, as described in the algorithm that follows and implemented in subsequent experiments.†

Input: X = {x1, x2, …, xn} ∈ ℝ^m // input dataset

k < n // desired dimension of the approximation

T > 0 // number of iterations for approximate sampling

Output: Ũ = {ũ1, ũ2, …, ũk} ∈ ℝ^n // approximant eigenvectors

Λ̃ = {λ̃1, λ̃2, …, λ̃k} // approximant eigenvalues

// Initialization: select a multi-index at random and build kernel

N ⇐ {1, 2, …, n}

pick I(0) = {i1, i2, …, ik} ⊂ N uniformly at random

GI ⇐ EvaluateKernel(X, I(0), I(0))

// Sampling: attempt to swap a randomly selected pair of indices

for t = 1 to T do

pick s ∈ {1, 2, …, k} uniformly at random

pick i′s ∈ N \ I(t−1) uniformly at random

I′ ⇐ {i′s} ∪ I(t−1) \ {is}

GI′ ⇐ EvaluateKernel(X, I′, I′)

with probability min(1, det(GI′)/det(GI)) do

I(t) ⇐ I′

GI ⇐ GI′

otherwise

I(t) ⇐ I(t−1)

end do

end for

// Nyström approximation: obtain the eigenvectors and extend them

I ⇐ I(T)

GĪ×I ⇐ EvaluateKernel(X, N \ I, I)

[UI, ΛI] ⇐ EigenDecomposition(GI)

Ũ ⇐ [UI ; GĪ×I UI ΛI⁻¹] // extension of the eigenvectors, cf. Eq. 2

Λ̃ ⇐ diag(ΛI)
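For readers who prefer working code to pseudocode, the following is a compact Python rendering of the algorithm box above (our sketch: EvaluateKernel is realized as a Gaussian kernel purely for illustration, and no caching or annealing refinements are attempted).

import numpy as np

def evaluate_kernel(X, rows, cols, sigma=1.0):
    # Gaussian kernel block G[rows, cols]; stands in for EvaluateKernel above
    D = np.sum((X[rows][:, None, :] - X[cols][None, :, :]) ** 2, axis=-1)
    return np.exp(-D / (2 * sigma ** 2))

def approximate_spectral(X, k, T, rng):
    n = X.shape[0]
    I = rng.choice(n, size=k, replace=False)              # initialization
    G_I = evaluate_kernel(X, I, I)
    for _ in range(T):                                    # Metropolis sampling
        s = rng.integers(k)
        new = rng.choice(np.setdiff1d(np.arange(n), I))
        I_prop = I.copy()
        I_prop[s] = new
        G_prop = evaluate_kernel(X, I_prop, I_prop)
        # log-domain acceptance ratio det(G_I')/det(G_I) for numerical stability
        log_ratio = np.linalg.slogdet(G_prop)[1] - np.linalg.slogdet(G_I)[1]
        if np.log(rng.random()) < min(0.0, log_ratio):
            I, G_I = I_prop, G_prop
    Ibar = np.setdiff1d(np.arange(n), I)                  # Nystrom extension, cf. Eq. 2
    w, U_I = np.linalg.eigh(G_I)
    U_ext = evaluate_kernel(X, Ibar, I) @ U_I / w
    U_tilde = np.empty((n, k))
    U_tilde[I], U_tilde[Ibar] = U_I, U_ext
    return U_tilde, w                                     # approximant eigenvectors/values

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
U_tilde, lam = approximate_spectral(X, k=20, T=1000, rng=rng)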

Our first experiment was designed to evaluate the relative approximation error 20 log₁₀(∥G − G̃∥/∥G∥) incurred by the Nyström extension for the randomized algorithms of Theorem 1 and ref. 12. To do so we simulated G from the ensemble of Wishart matrices‡ according to G = G1 + 5 × 10⁻⁷ G2, where G1 ∼ W_k(I, n) and G2 ∼ W_n(I, n); all generated matrices G were thus SPSD and of full rank, but with their k principal eigenvalues significantly larger than the remainder. We set n = 500 and k = 50, and averaged over 10,000 matrices drawn at random, with outputs averaged over 100 trials for each realization of G. A third algorithm indicating the Nyström extension's baseline performance was provided by selecting a multi-index of cardinality k uniformly at random. Fig. 1 shows the comparative results of these three algorithms, with that of Theorem 1 (implemented using the algorithm described above) outperforming that of ref. 12, whose sampling in proportion to G_{ii}² fails to yield an improvement over the baseline method of sampling uniformly over the set of all multi-indices. Additionally, for approximants of rank 50 or higher, we observe a marked decline in approximation error for the algorithm of Theorem 1, as expected according to Eq. 6.
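For reproducibility, a sketch of our reading of this simulation setup (the Wishart construction and the error metric; not the authors' original code) is:

import numpy as np

def wishart(n, k, rng):
    # W_k(I, n): G = X X^T with X an n x k standard normal matrix (see footnote)
    X = rng.standard_normal((n, k))
    return X @ X.T

def rel_error_db(G, G_tilde):
    return 20 * np.log10(np.linalg.norm(G - G_tilde, 'fro') / np.linalg.norm(G, 'fro'))

rng = np.random.default_rng(0)
n, k = 500, 50
G = wishart(n, k, rng) + 5e-7 * wishart(n, n, rng)   # SPSD, full rank, k dominant eigenvalues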

Fig. 1.

Relative approximation error of the randomized algorithms of Theorem 1 and ref. 12 as a function of approximant rank, shown relative to a baseline Nyström reconstruction obtained by sampling multi-indices uniformly at random.

In a second experiment, we compared the performance of these 3 algorithms in the context of nonlinear embeddings. To do so we sampled 500 points uniformly at random from the unit circle, and then computed the approximate spectral decomposition of the 500-dimensional matrix required by the diffusion maps algorithm of ref. 5. Corresponding kernel approximation errors in the Frobenius norm were measured for each of the randomized algorithms described in the preceding paragraph, as well as for the optimal rank-k approximant obtained by exact spectral decomposition. We replicated this experiment over 1,000 different sets of points and averaged the resultant errors over 100 trials for each replication. As indicated by Fig. 2, the algorithm of Theorem 1 yields the lowest error relative to the optimal approximation obtained by exact spectral decomposition.

Fig. 2.

Diffusion maps kernel approximation error as a function of approximant rank, shown for the 3 randomized algorithms of Fig. 1, along with the minimum approximation error attained by exact spectral decomposition.

We also tested the performance of the deterministic algorithm implied by Theorem 2 in a worst-case construction of nonlinear embeddings. We proceeded by simulating positive-definite kernels for use with diffusion maps exactly as in the scenario shown in Fig. 2, but with 10,000 experimental replications in total. Then, rather than averaging over 100 trials for each replication, we instead took the worst of 10 different kernel approximation realizations for each randomized algorithm. As shown in Fig. 3, our deterministic algorithm consistently outperforms both the randomized algorithm of ref. 12 and the baseline method of uniform sampling in this worst-case scenario.

Fig. 3.

Worst-case approximation error over 10 realizations of the random sampling schemes of Fig. 2, along with the deterministic algorithm of Theorem 2.

As a final example, we applied our randomized algorithm to realize a low-dimensional embedding via Laplacian eigenmaps (3) of a synthetic dataset containing 10⁵ points. The nearly 5 × 10⁹ distinct entries of the corresponding kernel matrix make it too large to store in the memory of a typical desktop computer, and hence preclude its direct spectral decomposition. As shown in the top portion of Fig. 4, the input "fishbowl" dataset—widely used as a benchmark—comprises a sphere embedded in ℝ³ whose top cap has been removed. The correct realization of a low-dimensional embedding will "unfold" this dataset and recover its 2-dimensional structure; to this end, Fig. 4 shows representative results obtained by choosing a multi-index I of cardinality 30 uniformly at random (Bottom Left) and in proportion to det(G_I) (Bottom Right). We see that the former realization fails to recover the 2-dimensional structure of this dataset, as indicated by the folding observed on the left-hand side of the resultant projection. The latter embedding is seen to yield a representation more faithful to the underlying structure of the data, indicating the efficacy of our method for kernel approximation in this context.

Fig. 4.

Recovery via Laplacian eigenmaps of a low-dimensional embedding from a 100,000-point realization of the "fishbowl" data set (Top), implemented using approximate spectral decompositions based on sampling multi-indices uniformly at random (Bottom Left) and according to the algorithm of Theorem 1 (Bottom Right).


In this article we have introduced two alternative strategies for the approximate spectral decomposition of large kernels, and demonstrated their applicability to machine learning tasks. We used the Nyström extension to transfer the main computational burden from one of kernel eigen-analysis to a combinatorial task of partition selection, thereby rendering the overall approximation problem more amenable to quantifiable complexity-precision trade-offs. We then presented 2 new algorithms to determine a partition of the kernel prior to Nyström approximation, with one employing a randomized approach to multi-index selection and the other a rank statistic. For the former, we gave a relative error bound in expectation for positive-definite kernel approximation; for the latter, we bounded its deterministic worst-case error. We also detailed a practical implementation of our algorithms and verified via simulations the improvements in performance yielded by our approach. In cases where optimal approaches rely on an exact spectral decomposition, our results yield strategies for very large datasets, and come with accompanying performance guarantees. In this way they provide practitioners with direct access to spectral methods for large-scale machine learning and statistical data analysis tasks.


M.-A.B. thanks Roger Brockett for valuable discussions. This work is supported in part by Defense Advanced Research Projects Agency Grant HR0011-07-1-0007 and by National Science Foundation Grants DMS-0631636 and CBET-0730389.


¹To whom correspondence should be addressed. E-mail: patrick{at}seas.harvard.edu

Author contributions: M.-A.B. and P.J.W. designed research, performed research, and wrote the paper.

The authors declare no conflict of interest.

*A matrix norm ∥·∥ is said to be unitarily invariant if ∥A∥ = ∥U^T A V∥ for any matrix A and unitary transformations U and V.

†To define the transition kernel of the algorithm, let d(I, I′) = (1/2)(|I ∪ I′| − |I ∩ I′|), a measure of the distance between two subsets I, I′ ⊂ {1, …, n} such that if |I| = |I′|, then d(I, I′) is the number of elements that differ between I and I′. Given a set I with |I| = k, our proposal distribution is p(I′|I) = 1/(k(n − k)) if d(I, I′) = 1, and zero otherwise.

‡The Wishart ensemble W_k(V, n) is the set of random matrices of the form G = XX^T, where X is an n × k matrix whose rows are independent and identically distributed according to a zero-mean multivariate normal with covariance described by the k × k SPSD matrix V.

© 2009 by The National Academy of Sciences of the USA


1. Tenenbaum JB, de Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290:2319–2323.
2. Shi J, Malik J (2000) Normalized cuts and image segmentation. IEEE Trans Pattern Anal Machine Intell 22:888–905.
3. Belkin M, Niyogi P (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput 15:1373–1396.
4. Donoho DL, Grimes C (2003) Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci USA 100:5591–5596.
5. Coifman RR, et al. (2005) Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. Proc Natl Acad Sci USA 102:7426–7431.
6. Frieze AM, Kannan R, Vempala S (1998) Fast Monte-Carlo algorithms for finding low-rank approximations. Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Washington, DC), pp 370–378.
7. Fowlkes C, Belongie S, Chung F, Malik J (2004) Spectral grouping using the Nyström method. IEEE Trans Pattern Anal Machine Intell 26:214–225.
8. Williams CKI, Seeger M (2001) Using the Nyström method to speed up kernel machines. Neural Information Processing Systems, eds Dietterich TG, Becker S, Ghahramani Z (MIT Press, Cambridge, MA), Vol 14, pp 585–591.
9. Horn RA, Johnson CR (1999) Matrix Analysis (Cambridge Univ Press, New York).
10. Crabtree DE, Haynsworth EV (1969) An identity for the Schur complement of a matrix. Proc Am Math Soc 22:364–366.
11. Belabbas M-A, Wolfe PJ (2007) Fast low-rank approximation for covariance matrices. Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (IEEE, Piscataway, NJ), pp 293–296.
12. Drineas P, Mahoney MW (2005) On the Nyström method for approximating a Gram matrix for improved kernel-based learning. J Machine Learn Res 6:2153–2175.
13. Deshpande A, Rademacher L, Vempala S, Wang G (2006) Matrix approximation and projective clustering via volume sampling. Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (Society for Industrial and Applied Mathematics, Philadelphia), pp 1117–1126.
14. Liberty E, Woolfe F, Martinsson P-G, Rokhlin V, Tygert M (2007) Randomized algorithms for the low-rank approximation of matrices. Proc Natl Acad Sci USA 104:20167–20172.
15. Sarlós T (2006) Improved approximation algorithms for large matrices via random projections. Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Washington, DC), pp 143–152.
16. Robert CP, Casella G (2004) Monte Carlo Statistical Methods (Springer, New York), 2nd Ed.