Explanation of the principle

LLE (Locally Linear Embedding): after dimensionality reduction, the topological structure of the original high-dimensional data is preserved. This topology takes the form of local neighborhood relations among the data points: each point is assumed to lie approximately on a locally linear patch, so it can be reconstructed as a weighted combination of its nearest neighbors, and the low-dimensional embedding is chosen to preserve those reconstruction weights.
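The local reconstruction step can be sketched numerically. Below is a minimal NumPy illustration (Python is used here only for illustration; the report's implementations are in MATLAB): for one point on a noisy line in 3-D, solve for weights over its k nearest neighbors that reconstruct it and sum to 1.

```python
import numpy as np

# Toy data: points near a 1-D line embedded in 3-D.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 20)
X = np.stack([t, 2 * t, 3 * t], axis=0) + 1e-3 * rng.standard_normal((3, 20))  # 3 x 20

i, k = 10, 4
# k nearest neighbors of point i (excluding the point itself)
d2 = np.sum((X - X[:, [i]]) ** 2, axis=0)
nbrs = np.argsort(d2)[1:k + 1]

# Solve for reconstruction weights: minimize ||x_i - sum_j w_j x_j||^2 s.t. sum(w) = 1
Z = X[:, nbrs] - X[:, [i]]              # shift the i-th point to the origin
C = Z.T @ Z                             # local Gram ("covariance") matrix
C += np.eye(k) * 1e-3 * np.trace(C)     # regularize in case C is singular
w = np.linalg.solve(C, np.ones(k))
w /= w.sum()                            # enforce the sum-to-one constraint

recon_err = np.linalg.norm(X[:, i] - X[:, nbrs] @ w)
print(w.sum())       # sums to 1 by construction
print(recon_err)     # small: x_i lies near the affine span of its neighbors
```

The sum-to-one constraint makes the weights invariant to translations of the data, which is what lets the same weights be reused in the low-dimensional space.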

Source code

%% Write programs implementing the PCA and LLE algorithms
% Run a dimensionality-reduction experiment on images and display the reconstructed images
% Run an existing (reference) implementation and compare it with my own
% Report: pseudocode (or flowchart), source code, experimental results and analysis

%% Preprocessing and data input
clc
clear
addpath(genpath(pwd)); % add all subdirectories to the working path
load face_images;      % load the data
data = images;
%data = data(:,1:50);
%% Initialize parameters
d = 2; len = 64; wid = 64; k = 12;
%%
[p,N] = size(data); % feature dimension and number of samples
[IDX,~] = knnsearch(data',data','K',k+1);
IDX = IDX(:,2:end); % drop each point itself
W = zeros(N);
for i = 1:N
    xk = data(:,i);
    index = IDX(i,:);
    Qk_temp = repmat(xk,1,k) - data(:,index);
    Qk = Qk_temp'*Qk_temp;     % local Gram matrix
    wk_temp = Qk\ones(k,1);    % solve for the reconstruction weights
    wk = wk_temp/sum(wk_temp); % enforce sum(w) = 1
    W(i,index) = wk;
end
W = W';
I = eye(N);
M = (I-W)*(I-W)';
[P,L] = eigs(M,d+1,0); % d+1 eigenvectors with eigenvalues nearest 0
P = P(:,2:end);        % discard the constant eigenvector
Y = (P*sqrt(N))';

Experimental results and analysis

Analysis of the experimental results

%% Reduce dimensionality with a toolbox and compare with my own results
clc
clear
close all

method = 'LLE'; % choose 'LLE' or 'PCA'
addpath(genpath(pwd));

% Generate test data
%[X, labels] = generate_data('helix', 2000);
if strcmp(method,'PCA')
    load AR % load the data
    [p,N] = size(AR);
    X = double(AR);
else
    load face_images % load the data
    [p,N] = size(images);
    X = double(images);
end

% Estimate the intrinsic dimensionality
%no_dims = round(intrinsic_dim(X, 'MLE'));
%disp(['MLE estimate of intrinsic dimensionality: ' num2str(no_dims)]);
d = 2;
k = 12;
% Dimensionality reduction with PCA or with LLE
[mappedX, mapping] = compute_mapping(X',method,d);
Y = mappedX';

if strcmp(method,'PCA')
x0 = (mapping.mean)';
W = (mapping.M);
AR_shift = X - repmat(x0,1,N);

%%
close all;
k = 1;
y = Y(:,k);
X_rebuild = W*y + x0; % reconstruct the k-th image from its low-dimensional code

image = AR(:,k);
image = reshape(image,50,40);
imshow(mat2gray(image)); % normalize the original image for display

figure;
image_re = X_rebuild;
image_re = reshape(image_re,50,40);
imshow(mat2gray(image_re)); % normalize the reconstruction for display

end
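The PCA branch above reconstructs an image as X_rebuild = W*y + x0, i.e. basis times low-dimensional code plus the data mean. A hedged NumPy sketch of that round trip, on synthetic data rather than the AR faces:

```python
import numpy as np

# Synthetic data: 5-D samples whose variance is concentrated in 2 coordinates.
rng = np.random.default_rng(2)
p, N, d = 5, 40, 2
X = rng.standard_normal((p, N)) * np.array([[5.], [3.], [.1], [.1], [.1]])

x0 = X.mean(axis=1, keepdims=True)   # data mean (the "mapping.mean" role)
Xc = X - x0                          # centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = U[:, :d]                         # p x d basis of top principal directions
Y = W.T @ Xc                         # d x N low-dimensional codes

X_rebuild = W @ Y + x0               # reconstruct all samples: basis * code + mean
err = np.linalg.norm(X - X_rebuild) / np.linalg.norm(X)
print(err)                           # small: most variance lies in the top 2 directions
```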

% LLE ALGORITHM (using K nearest neighbors)
% [Y] = lle(X,K,dmax)
% X    : data as D x N matrix (D = dimensionality, N = #points)
% K    : number of neighbors
% dmax : max embedding dimensionality
% Y    : embedding as dmax x N matrix
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%function [Y] = lle(X,K,d)
addpath(genpath(pwd)); % add all subdirectories to the working path
load face_images;      % load the data
data = images;
X = data;
K = 12;
d = 2;
%%
[D,N] = size(X);
fprintf(1,'LLE running on %d points in %d dimensions\n',N,D);

%% Step 1: compute pairwise distances & find neighbours
fprintf(1,'-->Finding %d nearest neighbours.\n',K);
X2 = sum(X.^2,1);
distance = repmat(X2,N,1)+repmat(X2',1,N)-2*X'*X;
[sorted,index] = sort(distance);
neighborhood = index(2:(1+K),:);

% Step 2: solve for reconstruction weights
fprintf(1,'-->Solving for reconstruction weights.\n');
if(K>D)
  fprintf(1,'   [note: K>D; regularization will be used]\n');
  tol=1e-3; % regularizer in case constrained fits are ill conditioned
else
  tol=0;
end
W = zeros(K,N);
for ii=1:N
   z = X(:,neighborhood(:,ii))-repmat(X(:,ii),1,K); % shift ith pt to origin
   C = z'*z;                                        % local covariance
   C = C + eye(K,K)*tol*trace(C);                   % regularization (K>D)
   W(:,ii) = C\ones(K,1);                           % solve Cw=1
   W(:,ii) = W(:,ii)/sum(W(:,ii));                  % enforce sum(w)=1
end

% Step 3: compute embedding from eigenvectors of cost matrix M=(I-W)'(I-W)
fprintf(1,'-->Computing embedding.\n');
% M = eye(N,N); % use a sparse matrix with storage for 4KN nonzero elements
M = sparse(1:N,1:N,ones(1,N),N,N,4*K*N);
for ii=1:N
   w = W(:,ii);
   jj = neighborhood(:,ii);
   M(ii,jj) = M(ii,jj) - w';
   M(jj,ii) = M(jj,ii) - w;
   M(jj,jj) = M(jj,jj) + w*w';
end

% calculation of embedding
options.disp = 0;
options.isreal = 1;
options.issym = 1;
[Y,eigenvals] = eigs(M,d+1,0,options);
Y = Y(:,2:d+1)'*sqrt(N); % bottom evect is [1,1,1,1...] with eval 0
fprintf(1,'Done.\n');

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% other possible regularizers for K>D
%   C = C + tol*diag(diag(C));                       % regularization
%   C = C + eye(K,K)*tol*trace(C)*K;                 % regularization
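Why the regularizer matters when K > D: the K x K Gram matrix C = z'z has rank at most D, so it is singular and the system C w = 1 is ill-posed. A small NumPy check (Python used only for illustration) of the same trace-scaled regularizer as above:

```python
import numpy as np

# K = 5 neighbors in D = 2 dimensions: C is 5 x 5 but has rank at most 2.
rng = np.random.default_rng(3)
D, K = 2, 5
Z = rng.standard_normal((D, K))
C = Z.T @ Z

print(np.linalg.matrix_rank(C))            # 2: C is singular
tol = 1e-3
Creg = C + np.eye(K) * tol * np.trace(C)   # the trace-scaled regularizer
w = np.linalg.solve(Creg, np.ones(K))      # now well-posed
w /= w.sum()                               # enforce sum(w) = 1
```

The regularizer biases the solution toward spreading weight evenly among neighbors while leaving well-conditioned cases essentially unchanged.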

The results of the "online" (toolbox) version match those of my own implementation. Turning on MATLAB's Profiler to compare running times gives the following results:
