tzczsq / Learning-Affective-Features-with-a-Hybrid-Deep-Model-for-Audio-Visual-Emotion-Recognition

Example code for audio-visual emotion recognition via a hybrid deep model.

context_window() function #1

Open leluoye opened 6 years ago

leluoye commented 6 years ago

Hi @tzczsq, thank you for your great work. The file `logm_spectrogram_RGB_64_64_extraction.m` calls a `context_window()` function: `logms_context0 = context_window(logms, context_size, num_bands, shift);`. However, I could not find where this function is defined. Could you please upload the `context_window()` function file? Thank you.

tzczsq commented 5 years ago

@leluoye I have attached the detailed code here.

```matlab
function [A] = context_window(data, n, num_bands, shift)
% Slide a context window of n frames over a spectrogram.
% input:  data = num_bands x num_frames matrix (e.g. n = 15, num_bands = 40)
% output: A    = cell array, one num_bands x n window per cell
m = size(data, 2);  % number of frames
if m < n
    % edge-pad so that at least one full window fits
    data = [repmat(data(:,1), 1, floor((n-m)/2)) data repmat(data(:,end), 1, ceil((n-m)/2))];
else
    k = mod(m - n + 1, shift);
    if k ~= 0
        % edge-pad so the number of window positions is a multiple of shift
        p = shift - k;
        data = [repmat(data(:,1), 1, floor(p/2)) data repmat(data(:,end), 1, ceil(p/2))];
    end
end

A = [];
start = 1;
stop = n;
while stop <= size(data, 2)
    T = data(:, start:stop);  % one num_bands x n context window
    A = [A; {T}];             % collect windows in a cell array
    start = start + shift;
    stop = stop + shift;
end
end
```
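For readers working outside MATLAB, a hypothetical NumPy translation of the same sliding-window logic is sketched below. The function name and edge-padding behaviour mirror the MATLAB routine above; padding by `shift - k` columns is assumed to be the intent (making the number of window positions divisible by `shift`), and the cell array becomes a Python list of `(num_bands, n)` arrays.

```python
import numpy as np

def context_window(data, n, num_bands, shift):
    """Slide an n-frame context window over a (num_bands, num_frames) array.

    Hypothetical NumPy sketch of the MATLAB routine above; returns a list
    of (num_bands, n) windows, one per hop of `shift` frames.
    """
    m = data.shape[1]
    if m < n:
        # edge-pad so that at least one full window fits
        left = np.repeat(data[:, :1], (n - m) // 2, axis=1)
        right = np.repeat(data[:, -1:], (n - m + 1) // 2, axis=1)
        data = np.hstack([left, data, right])
    else:
        k = (m - n + 1) % shift
        if k != 0:
            # edge-pad so the number of window positions divides by shift
            p = shift - k
            left = np.repeat(data[:, :1], p // 2, axis=1)
            right = np.repeat(data[:, -1:], (p + 1) // 2, axis=1)
            data = np.hstack([left, data, right])
    windows = []
    start = 0
    while start + n <= data.shape[1]:
        windows.append(data[:, start:start + n])  # one n-frame window
        start += shift
    return windows
```

For example, a 40-band log-Mel spectrogram with `n = 15` and `shift = 1` yields a list of overlapping 40x15 patches, matching the 2-D inputs the extraction script feeds to the CNN.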