tomtom13 opened 8 years ago
So what I would like this feature to change to is: rather than creating links inside the physical share - that's what causes the circular references - create links in a special directory that samba can look into for snapshots. All links would have names containing the creation date & time. Through that, one could have several different snapshot schedulers running, each keeping a different number of snapshots ... you know, a very simple way of keeping the snapshot count to a minimum, so we don't hammer btrfs with a high snapshot number, and of making it easier for a user to search historical data during recovery.
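Something like this sketch is what I mean (all the paths here are made up for illustration; the `@GMT-...` name is just the default timestamp format that samba's shadow_copy2 module understands):

```sh
# Proposed scheme (hypothetical paths): real snapshots stay in one place,
# and a dedicated directory holds one dated link per snapshot for samba
# to scan, so nothing inside the share can ever point back at itself.
SNAPDIR=/mnt2/main_pool/.shadow/share                          # assumed location
SNAP=/mnt2/main_pool/.snapshots/share/share_5min_201609191525  # an existing snapshot
mkdir -p "$SNAPDIR"
# the link name carries the creation date & time, so several schedulers
# can coexist and old entries are trivial to find and prune
ln -s "$SNAP" "$SNAPDIR/@GMT-2016.09.19-15.25.00"
```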
Can somebody explain the NFS snapshot export thing to me? Does it even work? Does NFS support shadow copy like samba does?
No, NFS doesn't have any integration like samba, but where did you see NFS snapshots?
It is a bit messy and there is a:
but in `_toggle_visibility` there are some calls like `if (NFSExport.objects.filter(share=share).exists()):`
So yeah, it's a bit confusing :/ and tracing all the calls back to their declarations is cumbersome ...
Oh yeah, it just makes the share's snapshots available as (hidden) directories, so you could manually restore snapshot data by copying from those. It's not exactly the same as what you get with Windows' Previous Versions or Apple's Time Machine features.
So Steven, to summarise:
Mirko: could you please explain? Call me stupid, but I can't work out what you mean here :)
Hi @tomtom13, sorry about that - I hope you didn't find it impolite - I thought this was the old issue about "writable" snapshots, while this one is about visible/not visible snapshots. Sorry again :wink:
My two cents on visible snapshots over samba & shadow copies: I ran some tests simulating ransomware / deleting snapshots, and finally had snapshots inside snapshots inside snapshots -> infinite. IMHO: shadow copies are useful, but apart from ransomware, users too can mess them all up (SysAdmin POV: better to waste time and roll back after a user request than to have users able to do it / mess things up on their own)
Well, a feature in the sense that Windows clients will show the version history in file/directory properties. As for Time Machine, it's a back-up feature I confused with snapshots, so never mind.
But yes, in the other cases the snapshot data is simply shared. They aren't symlinks though; the "visible" snapshots are actually mounted in their respective share, in addition to their regular location /mnt2/pool/.snapshots/share/snap-name.
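Roughly speaking, making a snapshot "visible" amounts to something like this (device and snapshot names are placeholders; this is a sketch of the effect, not Rockstor's actual code):

```sh
# the snapshot subvolume gets a second mount inside the share itself,
# in addition to its regular mount under the pool
mount -t btrfs -o subvol=.snapshots/share/snap-name /dev/sdb /mnt2/share/.snap-name
```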
No worries Mirko, after a decade or so in development environments where people quite literally will call you an imbecile for making an error ... it takes a bit more than that to upset me :)))))
Anyway Mirko, I've contributed the read-only thingy specifically to counteract ransomware (actually I would never trust users with a writable backup, but your point about ransomware was spot on!), so now anybody will struggle to mess up the shadow copies.
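(For reference, the read-only part is plain btrfs functionality; something along these lines, with example paths:)

```sh
# create a snapshot that is read-only from the start
btrfs subvolume snapshot -r /mnt2/share /mnt2/main_pool/.snapshots/share/share_5min_201609191535
# or flip an existing snapshot to read-only after the fact
btrfs property set -ts /mnt2/main_pool/.snapshots/share/share_5min_201609191535 ro true
```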
Steven, I've actually checked it in my setup, and yes, snapshots get mounted in /mnt2/pool/snapshots/share/snap-name (yes, there is NO dot), and when a snapshot is made visible, a python script makes a symlink /mnt2/share/.snap-name -> /mnt2/pool/snapshots/share/snap-name
I'm just retesting this now on a vmware installation, and I get the btrfs snapshots directly mounted into the share ... no links ... WTF?! Did anything change within the past 4 weeks? On my work server I was getting symlinks in the shared folder! :/
I'm not sure Tomasz, as far as I know and can see they were always like this :open_mouth:.
I'll dig into it when I'm back at the office, but so far I think I need to stay off my meds :)
The more I look into it, the more I think that I should really change my meds for something stronger :/
So, conclusion:
You guys are right, there were never any symlinks, just double-mounted subvolumes. By double I mean:
Location 1: /mnt2/pool/snapshots/share/snap-name
Location 2: /mnt2/share/.snap-name
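You can confirm the double mount with something like this (the device name will obviously differ on your box):

```sh
# both targets should report the same source device and the same subvol= option
findmnt -t btrfs -o TARGET,SOURCE,OPTIONS | grep snap-name
```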
I don't know why I was getting a circular reference on this share when I was testing it. The share is used exclusively by windows users, so they don't even know what a symlink is :/
Anyway, for some strange reason, when I've got the snapshots NOT mounted in the share folder, samba works OK with around 12000 snapshots now, and btrfs just keeps going. It's also compressed with LZO :) so it's even more strange.
It might be the case that having thousands of snapshots mounted in the share folder makes samba crawl through the whole directory structure, and then makes "shadow_copy" crawl through those subfolders again.
Folders for snaps are definitely created!
//---speculation_mode=TRUE---->> Maybe, just maybe, when the shadow copy module is crawling through the directories and reading data about folders and files, the access time gets updated (I have noatime), or some form of kernel notifier fires about new data being available in a folder that samba is watching, and that triggers samba to crawl through the folder structure again just to have a fresh tree for speedy network operation.
This is just speculation, but right now I have no other explanation for why keeping snapshots outside of the share and pointing shadow_copy elsewhere manually makes samba work with tens of thousands of snapshots.
This issue relates to the problem that I've reported in the forum:
https://forum.rockstor.com/t/circular-snapshots-visibility-samba-shadow-copy-and-fun/2080
///---- quote---->>>> Hi,
I've somehow discovered the reason for the slowdown when creating a lot of snapshots on a share that is visible via samba.
SO, to be able to see snapshots in samba as shadow copies, the snapshots created need to be set to "visible".
To make it more graphical, let's say you've got a pool called "main_pool" and a share called "share". Those will sit in:
/mnt2/main_pool
/mnt2/share
Fair enough, right? Now, if the rockstor script creates a scheduled snapshot (called, for the sake of it, "share_5min") which is set to be visible, it will create a snapshot located here: /mnt2/main_pool/.snapshots/share/share_5min_201609191525
but it will also create a symlink: /mnt2/share/.share_5min_201609191525
So when samba comes in and tries to find a shadow copy: here is a symlink, go away and look for veto files there. So far so good :) Now, let's say the scheduler creates another snapshot; it will locate it here: /mnt2/main_pool/.snapshots/share/share_5min_201609191530 and create another symlink: /mnt2/share/.share_5min_201609191530
BUT!!! Bear in mind that your share now has TWO (2) symlinks on it!!! And your second snapshot contains a symlink pointing to the previous snapshot!!!
So, samba comes in and looks for veto files. Let's follow a symlink: /mnt2/share/.share_5min_201609191530 -> that points you to a snapshot -> /mnt2/main_pool/.snapshots/share/share_5min_201609191530 -> hey, that one has a symlink -> /mnt2/share/.share_5min_201609191525 -> that points you to a snapshot -> /mnt2/main_pool/.snapshots/share/share_5min_201609191525
That causes samba to chase through a whole chain of n snapshots (where n = the total number of snapshots your share has) every time it wants to find a shadow copy.
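You can watch the blow-up yourself with a plain `find -L` (names from the example above; every extra snapshot multiplies the paths that get walked):

```sh
# recreate the two symlinks the scheduler leaves behind
ln -s /mnt2/main_pool/.snapshots/share/share_5min_201609191525 /mnt2/share/.share_5min_201609191525
ln -s /mnt2/main_pool/.snapshots/share/share_5min_201609191530 /mnt2/share/.share_5min_201609191530
# follow the links: every snapshot drags in the symlinks it captured at
# creation time, so the path count explodes as snapshots accumulate
find -L /mnt2/share | wc -l
```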
To fix that:
- switch off the "visible" option on your snapshots
- delete all the snapshots that are visible
- delete all the silly symlinks you have on your share
- disable "shadow copy" on the samba share
- go to the services configuration and add these lines to the samba main config:
  `wide links = yes`
  `unix extensions = no`
- go to the configuration of your share and manually add these options:
  `vfs objects = shadow_copy2`
  `shadow:sort = desc`
  `shadow:format = share_5min_%Y%m%d%H%M`
  `shadow:snapdir = /mnt2/main_pool/.snapshots/share`
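Put together, the relevant smb.conf sections end up looking roughly like this (section names and the scratch path are examples; since rockstor regenerates smb.conf, these lines really belong in the service/share custom-config fields rather than in hand edits):

```sh
# sketch of the final config, written to a scratch file and validated
# with samba's own parser
cat > /tmp/smb-shadow-example.conf <<'EOF'
[global]
    wide links = yes
    unix extensions = no

[share]
    path = /mnt2/share
    vfs objects = shadow_copy2
    shadow:sort = desc
    shadow:format = share_5min_%Y%m%d%H%M
    shadow:snapdir = /mnt2/main_pool/.snapshots/share
EOF
testparm -s /tmp/smb-shadow-example.conf
```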
And that's it, you're golden :D I sure hope that this will help somebody and point the rockstor developers away from this crazy circular creation scheme (FYI, btrfs gets confused as hell while creating "visible" snapshots, and very often you can see a kworker thread sitting there at 100%)
<<-----quote------//