Hi Or - yeah, that function was a bit experimental. I don't actively use it; I use electrophysiological landmarks programmatically instead (e.g. I load in a given dataset, find a particular feature, and use that to figure out the probe depth). I guess I should pull it from the repo or update it to be usable so it's not confusing.
I think aligning by landmarks might be safer - do you have any available in your recordings you could use? When you ask about "general power" - do you mean something like total LFP power? I don't know folks that use that - some I've used are 1) blocks of correlated LFP where the probe is outside the brain, 2) lack of spikes when going through a ventricle or white matter, and 3) sudden shifts in multiunit correlation between areas.
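For the multiunit-correlation idea, here's a minimal sketch (not from the pipeline in this thread, just an illustration): it bins spikes by depth and time and looks at the correlation of multiunit activity across depths. Variable names are placeholders - it assumes you already have spike_times (in seconds) and spike_depths (in um) for every spike.
% Sketch: multiunit correlation across depths (placeholder variable names).
% Assumes spike_times (s) and spike_depths (um) exist for every spike.
depth_bin_size = 100; % um
time_bin_size = 0.05; % s
depth_bins = 0:depth_bin_size:3840; % (NP1.0 bank length - adjust for your probe)
time_bins = 0:time_bin_size:max(spike_times);
mua = histcounts2(spike_depths,spike_times,depth_bins,time_bins);
mua_corr = corrcoef(mua'); % depth x depth multiunit correlation
figure;
imagesc(depth_bins(1:end-1),depth_bins(1:end-1),mua_corr);
axis image
xlabel('Depth (\mum)'); ylabel('Depth (\mum)');
title('Multiunit correlation by depth');
Sharp blocks or boundaries in that matrix are the kind of sudden shifts in multiunit correlation mentioned above.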
Or if you wanted to use the GUI (and I'm happy to upgrade it to a working standard):
Do you use kilosort to sort spikes? If so, channel_positions comes from channel_positions.npy.
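If useful, a minimal way to read that in MATLAB - assuming the readNPY function from npy-matlab is on your path (it usually is in kilosort workflows) and myKsDir is a placeholder for your kilosort output folder:
% channel_positions.npy is [n_channels x 2]: (x,y) positions in um along the probe
channel_positions = readNPY(fullfile(myKsDir,'channel_positions.npy'));
% the AP_align_probe_histology call discussed in this thread takes the second
% (depth) column, i.e. channel_positions(:,2)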
For LFP - I use OpenEphys and I have a little code to load that in if useful.
Also, here's another version of this GUI that you might want to start with instead - it was written by Enny van Beest in our lab and is probably a little cleaner and allows for stretching coordinates: https://github.com/EnnyvanBeest/GeneralNeuropixelAnalysisFunctions/blob/main/Histology/alignatlasdata.m
Hers, I think, assumes the LFP data came from SpikeGLX, so it loads that format.
Hi Andy, Thank you so much.
I'm afraid I don't really have any significant landmarks, since in my recordings I'm trying to record from the auditory cortex (AUDp, AUDv, TeA, and I'll take some AUDd), and these areas are not very distinct from each other in their activity profiles. Usually I will hit some other cortical areas nearby, such as entorhinal, ectorhinal, or supplemental somatosensory, but I don't have any white matter or hippocampus. By "general power" I did mean LFP, though I haven't touched that data yet - only the AP recordings.
I am using SpikeGLX to record, and Kilosort 2.5 + phy2 for spike sorting. This GUI looks great, but I'm struggling to use it - I keep getting errors and I don't fully understand why. I thought I should use the probe_ccf.mat file I get from your GUI as input, but it keeps looking for struct fields that don't exist.
Anyway, is it okay that I'm bringing this up here? If not, I can of course ask on Slack to get help from other people in the community (I saw there that they recommended using your GUI for issues similar to mine).
Thanks again!
Does that mean your trajectory looks something like this?
If so - looks like some of the probe will be sticking out of the brain, and you could use that? Or maybe there are changing properties once you hit something like the entorhinal (multiunit correlation? auditory responses?)?
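To make the "auditory responses" idea concrete, here is a rough sketch of checking stimulus-evoked vs. baseline firing by depth. All variable names are placeholders - it assumes spike_times/spike_depths per spike and a vector of stimulus onset times stim_onsets, all in seconds/um:
% Sketch: stimulus-evoked minus baseline firing rate per depth bin, to see
% which depths look auditory-responsive (placeholder variable names).
depth_bin_size = 100; % um
depth_bins = 0:depth_bin_size:3840; % (adjust for your probe)
baseline_win = [-0.2,0]; % s relative to stimulus onset
response_win = [0,0.1]; % s relative to stimulus onset
n_depths = length(depth_bins)-1;
evoked_rate = nan(n_depths,1);
baseline_rate = nan(n_depths,1);
for curr_depth = 1:n_depths
    curr_spikes = spike_times(spike_depths >= depth_bins(curr_depth) & ...
        spike_depths < depth_bins(curr_depth+1));
    % spike counts in baseline/response windows around each stimulus onset
    baseline_counts = arrayfun(@(t) sum(curr_spikes >= t+baseline_win(1) & ...
        curr_spikes < t+baseline_win(2)),stim_onsets);
    response_counts = arrayfun(@(t) sum(curr_spikes >= t+response_win(1) & ...
        curr_spikes < t+response_win(2)),stim_onsets);
    baseline_rate(curr_depth) = mean(baseline_counts)/diff(baseline_win);
    evoked_rate(curr_depth) = mean(response_counts)/diff(response_win);
end
figure;
plot(evoked_rate-baseline_rate,depth_bins(1:end-1),'k');
set(gca,'YDir','reverse');
xlabel('Evoked - baseline rate (sp/s)'); ylabel('Depth (\mum)');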
It sounds like some of your histology uncertainty might be in lateral position rather than just depth - or are you referring only to uncertainty in depth? The histology pipeline assumes that the reconstructed trajectory is correct; it's just a matter of placing the probe somewhere along that trajectory in a way that matches the ephys data.
Yup fine to bring it up here! If you're talking about the Neuropixels Slack, I don't look at that regularly, so I probably wouldn't see things on it (hopefully I haven't missed questions about my code there!)
We insert the probe at a 70 degree angle at AP -2.5, ML 4.1, in order to avoid having to remove muscle tissue - so usually all of the probe is inside. I can definitely try to look at which regions do not have an auditory response, though I'm not sure how to do so technically.
My uncertainty is unfortunately due to inefficient marking of penetrations with DiO, and some not-so-good histology work that ended up with the probe trajectory appearing as a mere dot in multiple slices, for example:
There are 3 different probe penetrations (separate recordings) here, two using DiI and one using DiO.
Haven't seen any questions there, just recommendations so far :)
Having the probe be a dot in each slice is totally fine as long as you can follow it across a few slices to figure out the direction the probe was going, is that the case in yours?
But I see what you mean - if your whole probe is in the brain, I'm not sure what landmarks you could use besides the presence/lack of auditory responses. I'm a bit surprised you're able to fit the whole probe in the brain with that trajectory - if I try to do that in the trajectory explorer it's a really tight fit (and requires a more medial coordinate than yours):
In this case though, the ephys alignment GUI wouldn't help you since it requires the presence of some kind of landmarks to align things. You could instead potentially estimate the endpoint of the probe from the last DiI point, though that's not usually that accurate.
Indeed - I am able to track the direction of the probe.
I had another look at my notes and my answer wasn't 100% accurate (like my penetration location...). In some cases I am not inserting the whole probe into the brain, because of tilts in the angle caused by the headbar location and by the penetration point in the craniotomy (we usually make it 1 mm in diameter). So in the cases where not all of the probe is inside the brain, I think the GUI will be helpful - maybe I can start from there and see how it affects my data. Do you know why it seems to be looking for fields that do not exist in the probe_ccf file?
Thank you again!
BTW, will you be attending FENS in a few weeks?
Not sure about non-existent fields from Enny's GUI, I haven't used it.
If part of your probe's outside of the brain, feel free to adapt the code I've already got for that, copied below - it requires a snippet of all LFP channels loaded in as the variable lfp, with channels in order of most superficial to most deep (I load 10 seconds' worth of data; I use OpenEphys though, so I don't have code to load the SpikeGLX format).
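Since the recordings here are SpikeGLX, a rough sketch of grabbing a 10 s LFP snippet straight from the .lf.bin - this isn't from the pipeline below, just a minimal example assuming a standard Neuropixels 1.0 recording with 385 saved channels at 2500 Hz (check nSavedChans and imSampRate in your .lf.meta; the last saved channel is the sync line). readNPY is from npy-matlab and myKsDir is a placeholder for your kilosort output folder:
% Sketch: load a 10 s LFP snippet from a SpikeGLX .lf.bin
% (check .lf.meta for the actual channel count and sample rate).
lfp_filename = 'yourRecording_g0_t0.imec0.lf.bin'; % placeholder path
n_chan_saved = 385; % 384 LFP + 1 sync for NP1.0 - confirm in .lf.meta
lfp_sample_rate = 2500; % Hz - confirm in .lf.meta
fid = fopen(lfp_filename,'r');
lfp_snippet = fread(fid,[n_chan_saved,10*lfp_sample_rate],'int16');
fclose(fid);
lfp_snippet = lfp_snippet(1:end-1,:); % drop the sync channel

% Re-order rows superficial-to-deep with the kilosort channel map/positions
% (here 'superficial' is taken as the largest y in channel_positions, i.e.
% farthest from the tip - double-check this, and note the pipeline below may
% use a different depth convention for lfp_channel_positions/template_depths)
channel_map = readNPY(fullfile(myKsDir,'channel_map.npy')) + 1; % 0- to 1-based
channel_positions = readNPY(fullfile(myKsDir,'channel_positions.npy'));
[lfp_channel_positions,depth_sort_idx] = sort(channel_positions(:,2),'descend');
lfp = lfp_snippet(channel_map(depth_sort_idx),:);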
What this does: channels outside the brain are usually very correlated with each other, with a steep drop-off once a channel enters the brain. This code gets the correlation across channels, finds where the correlation with the superficial channels drops off, and sets that as the 'start of the brain'. That gives an anchor point to match the probe to the CCF: it then finds where the probe sits along the probe_ccf trajectory based on the probe's length, and gets the CCF areas and borders across the probe. It's not particularly clean code - it's from my own pipeline and not intended for general use - but hopefully it's a useful starting point.
Yeah I'll be at FENS, I'll have a poster at Poster Session 2.
% Cortex alignment: assume top of probe out of brain and very
% correlated, look for big drop in correlation from top-down
lfp_corr = corrcoef(double(transpose(lfp-nanmedian(lfp,1))));
lfp_corr_diag = lfp_corr;
lfp_corr_diag(triu(true(size(lfp_corr)),0)) = NaN;
lfp_corr_from_top = nanmean(lfp_corr_diag,2)';
n_lfp_medfilt = 10;
lfp_corr_from_top_medfilt = medfilt1(lfp_corr_from_top,n_lfp_medfilt);
lfp_corr_from_top_medfilt_thresh = ...
(max(lfp_corr_from_top_medfilt) - ...
range(lfp_corr_from_top_medfilt)*0.2);
ctx_start = lfp_channel_positions( ...
find(lfp_corr_from_top_medfilt > lfp_corr_from_top_medfilt_thresh,...
1,'last'));
if verbose
    figure;
    imagesc(lfp_channel_positions,lfp_channel_positions,lfp_corr_diag);
    axis image
    colormap(brewermap([],'*RdBu'));
    caxis([-1,1])
    xlabel('Depth (\mum)');
    ylabel('Depth (\mum)');
    c = colorbar;
    ylabel(c,'Med. sub. correlation');
    line(xlim,[ctx_start,ctx_start],'color','k','linewidth',2);
    line([ctx_start,ctx_start],ylim,'color','k','linewidth',2);
    title(sprintf('%s %s: LFP correlation and cortex start',animal,day));
end
% (give leeway for cortex to start relative to first unit - if within
% this leeway then back up cortex start, if outside then keep but throw
% a warning)
ctx_lfp_spike_leeway = 100; % um leeway for lfp/unit match
ctx_lfp_spike_diff = ctx_start-min(template_depths);
if ctx_lfp_spike_diff > ctx_lfp_spike_leeway
    warning('%s %s: LFP-estimated cortex start is after first unit %.0f um', ...
        animal,day,ctx_lfp_spike_diff);
elseif ctx_lfp_spike_diff > 0 && ctx_lfp_spike_diff < ctx_lfp_spike_leeway
    ctx_start = min(template_depths)-1; % (back up to 1um before first unit)
end
%%% If histology is aligned, get areas by depth
[probe_ccf_fn,probe_ccf_fn_exists] = AP_cortexlab_filename(animal,[],[],'probe_ccf');
if ~probe_ccf_fn_exists
    if verbose; disp('No histology alignment for probe...'); end
elseif probe_ccf_fn_exists
    if verbose; disp('Estimating histology-aligned cortical areas on probe...'); end

    % (load the CCF structure tree)
    allen_atlas_path = fileparts(which('template_volume_10um.npy'));
    st = loadStructureTree([allen_atlas_path filesep 'structure_tree_safe_2017.csv']);

    % Load probe CCF alignment
    load(probe_ccf_fn);

    % Get area names and borders across probe
    [~,dv_sort_idx] = sort(probe_ccf.trajectory_coords(:,2));
    dv_voxel2um = 10*0.945; % CCF DV estimated scaling
    probe_trajectory_depths = ...
        pdist2(probe_ccf.trajectory_coords, ...
        probe_ccf.trajectory_coords((dv_sort_idx == 1),:))*dv_voxel2um;
    probe_depths = probe_trajectory_depths + ctx_start;

    % Get recorded areas and boundaries (ignore layer distinctions)
    probe_areas = unique(regexprep( ...
        st(probe_ccf.trajectory_areas(probe_depths > 0 & ...
        probe_depths < max(channel_positions(:,2))),:).safe_name, ...
        ' layer .*',''));
    probe_area_boundaries = cellfun(@(area) ...
        prctile(probe_depths(contains( ...
        st(probe_ccf.trajectory_areas,:).safe_name,area)),[0,100]), ...
        probe_areas,'uni',false);
end
Amazing! Thank you so much! Hope I'll be able to find you there and buy you a beer or something (I'm in poster session 6).
Dear Andrew,
First, thank you so much for this pipeline, it's amazing!
I really want to align my probe registration with the recorded electrophysiology. However, the current 'AP_align_probe_histology' uses a few variables I'm not sure how to get, and I can't find any explanation on the Wiki page. The current use is AP_align_probe_histology(st, slice_path, spike_times, spike_templates, template_depths, lfp, channel_positions(:,2), use_probe); I got spike_times, spike_templates and template_depths by using this code (is it correct?): spike_struct = loadKSdir(myKsDir); [spike_times, spike_templates, template_depths, spikeSites] = ksDriftmap(myKsDir);
But I'm struggling to figure out on how to get 'lfp' and 'channel_positions'.
If I'm bothering you with this issue, I might as well ask: do you know of any way to use recorded activity for the alignment beyond general power? My histology isn't the best, but if I can see an auditory response it would help tell me I'm in auditory cortex and not in some somatosensory area.
Thank you so so much in advance!