dotnet / runtime

.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
https://docs.microsoft.com/dotnet/core/
MIT License

File writing gets a Permission Denied (13) error on Win10 when the same file is accessed from a read-only network share on Ubuntu 22.04 LTS #92275

Open aDisplayName opened 1 year ago

aDisplayName commented 1 year ago

Description

We are facing a problem: an ongoing local file-writing operation (using the C++ standard library) is blocked as soon as the same file is opened in read-only mode, via a remote network share, by a data reader running on Ubuntu 22.04, even though that second party opened the file after the original file had already been opened for writing.

The file generator must write in blocks of 8192 bytes or more for the problem to show up.

The problem showed up when we upgraded the OS hosting our remote data reader from Ubuntu 18.04 to Ubuntu 22.04 LTS. It does not occur when the data reader runs under Ubuntu 18.04 or 20.04.

The problem applies to .NET 6.0, 7.0, and 8.0 RC1.

Reproduction Steps

Setup

We are using a read-only network share to share file content across two operating systems:

- OS 1: Windows 10 at IP 192.168.1.100; a local folder shared_data is shared over the network in read-only mode.
- App 1 (data generator): a Win32 program written in C++, writing to a file in that local folder.
- OS 2: Ubuntu 22.04 LTS on the same LAN as OS 1; the shared folder is mounted read-only using mount.cifs at ~/fileserver.
- App 2 (remote data reader): a .NET 6.0 console application running on the Ubuntu OS, reading data from the same file within the mounted folder ~/fileserver.

sudo mount -t cifs -o ro,vers=3.0,username=user,password=***,file_mode=0444 -v //192.168.1.100/shared_data ~/fileserver

According to the mount.cifs(8) man page, we also tried various combinations of -o options, with no change in behavior:

- nolease
- cache=none and cache=loose (the default is cache=strict)
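Spelled out, the variant mounts we tried looked like the following (per the report, the options included nolease and cache=none/cache=loose; the share path and credentials are reused from the original mount command):

```shell
# Variant 1: disable SMB2+ leases for this mount.
sudo mount -t cifs -o ro,vers=3.0,username=user,password=***,file_mode=0444,nolease \
  -v //192.168.1.100/shared_data ~/fileserver

# Variant 2: change the client page-cache policy (the default is cache=strict).
sudo mount -t cifs -o ro,vers=3.0,username=user,password=***,file_mode=0444,cache=none \
  -v //192.168.1.100/shared_data ~/fileserver
```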

Step to reproduce

App 1 @ Windows 10, Microsoft VC++ 14.3: a C++ program that continuously writes data to the file shared_data\test.dat:

#include <iostream>
#include <string>
#include <cstdio>   // FILE, _fsopen, fwrite, fclose
#include <conio.h>  // _getch
#include <share.h>  // _SH_DENYWR
#include <cerrno>

int main(int argc, char** argv)
{
    if (argc < 3)
    {
        std::cout << "Usage: targetFilePath bytesToWrite" << std::endl;
        return 0;
    }

    const char* filename = argv[1];
    std::cout << "Target File: " << filename << std::endl;
    const unsigned long size = std::stoi(argv[2]);
    std::cout << "Create File " << filename << " with " << size << " bytes per batch." << std::endl;

    FILE* f = _fsopen(filename, "wb", _SH_DENYWR);
    if (f == nullptr)
    {
        std::cout << "Failed to open " << filename << "; errno: " << errno << std::endl;
        return 1;
    }
    long totalDataWritten = 0;

    const auto data=new char[size];
    for (;;) {

        std::cout << "[Enter] next batch. [Esc] Stop";
        auto k = _getch();
        if (k == 0x1b || k == 0x03)
        {
            std::cout << "\r                              \r";
            break;
        }

        if (k == '\r')
        {
            try
            {

                std::cout << "\r                              \r";

                // ==== LABEL A ====
                const auto data_written = fwrite(data, sizeof(char), size, f);
                totalDataWritten += data_written;
                if (data_written != size)
                {
                    // When the file is being opened in Ubuntu OS over network share, the file writing will fail with errno set to 13.
                    std::cout << "Data size written expected: " << size << "; Data actual written: " << data_written << "; Error code: " << errno;
                }
                else
                {
                    std::cout << size << " bytes written " << std::endl;
                }
            }
            catch (const std::exception& e)
            {
                std::cout << "exception caught" << e.what();
            }
            catch (...)
            {
                std::cout<< ":(";
            }
        }
    }

    std::cout << "Press any key to close file";
    _getch();

    auto err = fclose(f);
    delete[] data;
    std::cout << "Total data written: " << totalDataWritten << std::endl;
    return err == 0 ? 0 : 1;
}

To reproduce the problem, the block size must be 8192 bytes or more:

FileGenerator shared_data/test.dat 8192

App 2: after App 1 has started on the Windows side, run the following .NET 6.0/7.0/8.0 RC1 C# program under Ubuntu 22.04 LTS against the same file via the mounted share folder, ~/fileserver/test.dat:

const int MaxBlockSize = 1024 * 1000;
var _buffer = new byte[MaxBlockSize];
var dataSize = new FileInfo(filename).Length;
await using (var fs = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    fs.Seek(0, SeekOrigin.Begin);
    // Simulating the data processing
    await Task.Delay(TimeSpan.FromSeconds(5));

    while (dataSize > 0)
    {
        var dataSizeCurrentBatch = (int)Math.Min(dataSize, MaxBlockSize);
        // stoppingToken is supplied by the hosting service
        var dataRead = await fs.ReadAsync(_buffer, 0, dataSizeCurrentBatch, stoppingToken);
        if (dataRead == 0) break; // end of stream
        dataSize -= dataRead;
    }
}

Strangely, as soon as we start App 2 on the Ubuntu side, the write at 'LABEL A' in App 1 fails:

- data_written is 0 instead of the desired length
- errno is 13, indicating a permission violation

Here is the explanation from Microsoft about the error code 13: https://learn.microsoft.com/en-us/cpp/c-runtime-library/errno-constants?view=msvc-170#remarks

Permission denied. The file's permission setting doesn't allow the specified access. An attempt was made to access a file (or, in some cases, a directory) in a way that's incompatible with the file's attributes.

For example, the error can occur when an attempt is made to read from a file that isn't open. Or, on an attempt to open an existing read-only file for writing, or to open a directory instead of a file. Under MS-DOS operating system versions 3.0 and later, EACCES may also indicate a locking or sharing violation.

The error can also occur in an attempt to rename a file or directory or to remove an existing directory.
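The EACCES at LABEL A is consistent with the last case above (a locking or sharing violation) rather than a true permission problem. On Unix, .NET emulates Windows sharing semantics with advisory flock locks, and a CIFS client may surface locking state to the SMB server. As a purely local illustration of advisory-lock semantics (this sketch does not involve CIFS, and the file name is a hypothetical stand-in):

```python
import fcntl
import os
import tempfile

# Create a scratch file to lock (stand-in for test.dat).
path = os.path.join(tempfile.mkdtemp(), "test.dat")
with open(path, "wb") as f:
    f.write(b"\0" * 8192)

# Two read-only opens can hold shared locks simultaneously.
reader1 = open(path, "rb")
reader2 = open(path, "rb")
fcntl.flock(reader1.fileno(), fcntl.LOCK_SH)
fcntl.flock(reader2.fileno(), fcntl.LOCK_SH)

# While any shared lock is held, a non-blocking exclusive lock is refused.
writer = open(path, "r+b")
try:
    fcntl.flock(writer.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    exclusive_ok = True
except BlockingIOError:
    exclusive_ok = False
print("exclusive lock while readers hold shared locks:", exclusive_ok)  # False

# After the readers release their locks, the exclusive lock succeeds.
fcntl.flock(reader1.fileno(), fcntl.LOCK_UN)
fcntl.flock(reader2.fileno(), fcntl.LOCK_UN)
fcntl.flock(writer.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
```

If the CIFS client forwards the reader's lock state to the server (for example via leases), this is one plausible mechanism by which the Windows writer's next fwrite fails with a sharing violation that the CRT reports as errno 13.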

Expected behavior

No file access conflicts for either the data generator on the Windows side or the data reader on the Ubuntu side. Even if the data reader has trouble reading the data, file generation on the Windows side should not be affected.

Actual behavior

File generation on the Windows side was denied access while the same file was being read by the .NET application in read-only mode, as shown in the sample code.

Regression?

No response

Known Workarounds

We have tried the following non-.NET data readers, and they do not cause problems on the data generation side:

rsync:

while :; do rsync -v --size-only fileserver/test.dat filetarget; retVal=$?; if [ $retVal -ne 0 ]; then break; fi; done

A simple Python 3.10 program:

import argparse
import os
import datetime

parser = argparse.ArgumentParser()
parser.add_argument("targetFile", help="name of the product")
parser.add_argument("datalength", type=int, help="Size of data to read")
args = parser.parse_args()

print(f'{datetime.datetime.now()}: Read {args.datalength} from {args.targetFile}')
f = open(args.targetFile, "rb")
f.seek(-args.datalength, os.SEEK_END)
data = f.read(args.datalength)

Configuration

No response

Other information

No response

ghost commented 1 year ago

Tagging subscribers to this area: @dotnet/area-system-io See info in area-owners.md if you want to be subscribed.

Issue Details
Author: aDisplayName
Assignees: -
Labels: `area-System.IO`, `untriaged`
Milestone: -
adamsitnik commented 1 year ago

Hi @aDisplayName

First of all, there are a lot of differences between Unix and Windows when it comes to file locking:

(image: table comparing file-locking behavior on Windows and Unix)

Because of that, the behavior may be specific to OS and file system.

The problem shows up when we upgrade the OS hosting our remote data reader from Ubuntu 18.04 to Ubuntu 22.04 LTS. The problem does not show up when the data reader runs under Ubuntu 18.04 or Ubuntu 20.04

The problem applies to .NET 6.0, 7.0 and 8.0 RC1

Most likely Ubuntu has changed something related to file locking (or we are no longer able to recognize the file system). We will most likely need to recognize it similarly to what we did in https://github.com/dotnet/runtime/pull/55256 for NFS/CIFS/SMB.
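The PR mentioned above keys off the file-system type reported by statfs. As an illustrative sketch only (not the runtime's actual code) of how a process can detect that a path lives on a CIFS mount, here is a version that reads Linux's /proc/mounts instead:

```python
import os

def fs_type_of(path):
    """Return the file-system type of the mount containing `path`.

    Illustrative only: reads Linux's /proc/mounts, whereas dotnet/runtime
    PR #55256 inspects the statfs f_type magic number instead.
    """
    path = os.path.realpath(path)
    best_point, best_type = "", ""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            mount_point, fs_type = line.split()[1:3]
            # The longest mount point that prefixes the path wins.
            if (path == mount_point
                    or path.startswith(mount_point.rstrip("/") + "/")):
                if len(mount_point) > len(best_point):
                    best_point, best_type = mount_point, fs_type
    return best_type

# A CIFS mount such as ~/fileserver would report "cifs"; a runtime could
# use that to adjust or skip advisory locking on network file systems.
print(fs_type_of("/"))
```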

To unblock you, you can disable file locking. This can be done by using System.IO.DisableFileLocking app context switch or DOTNET_SYSTEM_IO_DISABLEFILELOCKING environment variable.

It makes .NET stop respecting file locks, so for example if you open a file with exclusive access you can no longer assume nobody else is reading from it/writing to it. It can be a major problem if you are using files for synchronization.
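For reference, besides the environment variable, the switch can be set in the project's runtimeconfig.template.json (merged into the generated runtimeconfig.json at build time):

```json
{
  "configProperties": {
    "System.IO.DisableFileLocking": true
  }
}
```

It can also be set programmatically with AppContext.SetSwitch("System.IO.DisableFileLocking", true), provided that runs before the first file is opened.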

aDisplayName commented 1 year ago

To unblock you, you can disable file locking. This can be done by using System.IO.DisableFileLocking app context switch or DOTNET_SYSTEM_IO_DISABLEFILELOCKING environment variable.

It makes .NET stop respecting file locks, so for example if you open a file with exclusive access you can no longer assume nobody else is reading from it/writing to it. It can be a major problem if you are using files for synchronization.

@adamsitnik Thank you for the insight. We've tested the workaround using the environment variable, and the conflicts on the data generation side seem to have disappeared.