datto / dattobd

kernel module for taking block-level snapshots and incremental backups of Linux block devices
GNU General Public License v2.0

Does file_write(...) incur extra I/O? #243

Closed nickchen-cpu closed 3 years ago

nickchen-cpu commented 3 years ago

Does writing to the cow file generate additional I/O of its own? I mean, when file_write(...) is called, why don't we get an infinite loop?

static int __cow_write_data(struct cow_manager *cm, void *buf){
        int ret;
        char *abs_path = NULL;
        int abs_path_len;
        uint64_t curr_size = cm->curr_pos * COW_BLOCK_SIZE;

        if(curr_size >= cm->file_max){
                ret = -EFBIG;

                file_get_absolute_pathname(cm->filp, &abs_path, &abs_path_len);
                if(!abs_path){
                        LOG_ERROR(ret, "cow file max size exceeded (%llu/%llu)", curr_size, cm->file_max);
                }else{
                        LOG_ERROR(ret, "cow file '%s' max size exceeded (%llu/%llu)", abs_path, curr_size, cm->file_max);
                        kfree(abs_path);
                }

                goto error;
        }

        ret = file_write(cm->filp, buf, curr_size, COW_BLOCK_SIZE);
        if(ret) goto error;

        cm->curr_pos++;

        return 0;

error:
        LOG_ERROR(ret, "error writing cow data");
        return ret;
}
nixomose commented 3 years ago

Because somewhere in the cow manager (I forget where) it checks whether the block being written belongs to the cow file, and if it does, it won't try to cow it.

nixomose commented 3 years ago

from your email:

Hi, I mean every write to the cow file will incur additional I/O on the machine; does that make sense?

Yes. While in snapshot/cowing mode, every write (well, every unique write within the snapshot) adds two additional I/Os: one to read the block to be cowed, and one to write that block to the cow file.