golang / go

The Go programming language
https://go.dev
BSD 3-Clause "New" or "Revised" License

archive/tar: add support for writing tar containing sparse files #13548

Open grubernaut opened 8 years ago

grubernaut commented 8 years ago

I've created a GitHub repo with all the steps needed to reproduce this on Ubuntu 12.04 using Go 1.5.1. I've also verified that the same behavior occurs with Go 1.5.2.

Run vagrant create then vagrant provision from repository root.

vagrant create
vagrant provision

Expected Output:

$ vagrant provision
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: go version go1.5.2 linux/amd64
==> default: Creating Sparse file
==> default: Proving file is truly sparse
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:26 sparse.img
==> default: Compressing in Go without sparse
==> default: Compressing in Go with sparse
==> default: FileInfo File Size: 536870912
==> default: Proving non-sparse in Go gained size on disk
==> default: 512M -rw-r--r-- 1 root root 512M Dec  9 15:26 non_sparse/sparse.img
==> default: Proving sparse in Go DID keep file size on disk
==> default: 0 -rw-r--r-- 1 root root 0 Dec  9 15:26 sparse/sparse.img
==> default: Compressing via tar w/ Sparse Flag set
==> default: Proving sparse via tar DID keep file size on disk
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:26 tar/sparse.img

Actual Output:

$ vagrant provision
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: go version go1.5.2 linux/amd64
==> default: Creating Sparse file
==> default: Proving file is truly sparse
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:35 sparse.img
==> default: Compressing in Go without sparse
==> default: Compressing in Go with sparse
==> default: Proving non-sparse in Go gained size on disk
==> default: 513M -rw-r--r-- 1 root root 512M Dec  9 15:35 non_sparse/sparse.img
==> default: Proving sparse in Go DID NOT keep file size on disk
==> default: 512M -rw-r--r-- 1 root root 512M Dec  9 15:35 sparse/sparse.img
==> default: Compressing via tar w/ Sparse Flag set
==> default: Proving sparse via tar DID keep file size on disk
==> default: 0 -rw-r--r-- 1 root root 512M Dec  9 15:35 tar/sparse.img

The Vagrantfile supplied in the repository runs the following shell steps:

This is somewhat related to #12594.

I could also be creating the archive incorrectly; I've tried a few different methods for creating the tar archive, but none of them kept the sparse files intact upon extraction. This also cannot be reproduced on OS X, as HFS+ has no concept of sparse files and immediately destroys any file sparseness, hence the need to run the reproduction case in a Vagrant VM.

Any thoughts or hints into this would be greatly appreciated, thanks!

bradfitz commented 8 years ago

/cc @dsnet who's been going crazy on the archive/tar package in the Go 1.6 tree ("master" branch)

dsnet commented 8 years ago

This isn't a bug per se, but more of a feature request. Sparse file support is only provided for tar.Reader, not tar.Writer. It's a bit asymmetrical at the moment, but supporting sparse files in tar.Writer requires an API change, which may take some time to think through.

Also, this is mostly unrelated to #12594, although that bug should definitely be fixed before any attempt at this is made. For the time being, I recommend putting this in the "unplanned" milestone; I'll revisit this issue once the other tar bugs are fixed.

grubernaut commented 8 years ago

@dsnet should I keep this here as a feature request, or is there another preferred format for those?

dsnet commented 8 years ago

The issue tracker is perfect for that. So this is just fine.

dsnet commented 8 years ago

This is my proposed addition to the tar API to support writing sparse files.

First, we modify tar.Header to have an extra field:

type Header struct {
    ...

    // SparseHoles represents a sequence of holes in a sparse file.
    //
    // The regions must be sorted in ascending order, not overlap with
    // each other, and not extend past the specified Size.
    // If len(SparseHoles) > 0 or Typeflag is TypeGNUSparse, then the file is
    // sparse. It is optional for Typeflag to be set to TypeGNUSparse.
    SparseHoles []SparseEntry
}

// SparseEntry represents a Length-sized fragment at Offset in the file.
type SparseEntry struct {
    Offset int64
    Length int64
}

On the reader side, nothing much changes. We already support sparse files. All that's being done is that we're now exporting information about the sparse file through the SparseHoles field.

On the writer side, the user must set the SparseHoles field if they intend to write a sparse file. It is optional for them to set Typeflag to TypeGNUSparse (there are multiple formats to represent sparse files, so this is not important). The user then proceeds to write all the data for the file. For sparse holes, they are required to write Length zeros for the given hole. Writing zeros for the holes is a little inefficient, but I decided on this approach because:

I should note that the tar format represents sparse files by indicating which regions have data, and treating everything else as a hole. The API exposed here does the opposite; it represents sparse files by indicating which regions are holes, and treating everything else as data. The reason for this inversion is that it fits the Go philosophy that the zero value be meaningful. The zero value of SparseHoles indicates that there are no holes in the file, and thus it is a normal file; i.e., the default makes sense. If we were to use SparseDatas instead, its zero value would indicate that there is no data in the file, which is rather odd.
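
For concreteness, here is a hedged sketch of how a caller might fill in the proposed field for a 512 MiB file whose only data is the first 4 KiB. This is written against the API proposed above, not a shipped one; the file name and sizes are made up:

hdr := &tar.Header{
    Name:     "sparse.img",
    Mode:     0644,
    Size:     512 << 20, // logical size: 512 MiB
    Typeflag: tar.TypeReg,
    // Everything after the first 4 KiB of data is one large hole.
    SparseHoles: []tar.SparseEntry{
        {Offset: 4 << 10, Length: (512 << 20) - (4 << 10)},
    },
}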

Requiring users to write zeros is a little inefficient, and the bottleneck will be memory bandwidth when transferring potentially large runs of zeros. Though not necessary, the following methods may be worth adding as well:

// Discard skips the next n bytes, returning the number of bytes discarded.
// This is useful when dealing with sparse files to efficiently skip holes.
func (tr *Reader) Discard(n int64) (int64, error) {}

// FillZeros writes the next n bytes by filling them in with zeros.
// It returns the number of bytes written, and an error if any.
// This is useful when dealing with sparse files to efficiently skip holes.
func (tw *Writer) FillZeros(n int64) (int64, error) {}

Potential example usage: https://play.golang.org/p/Vy63LrOToO
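
A hedged sketch of the writer-side flow using the proposed FillZeros (again, proposed API only; writeSparse and its arguments are made up for illustration):

func writeSparse(tw *tar.Writer, hdr *tar.Header, data []byte) error {
    // hdr.SparseHoles marks everything past len(data) as a hole.
    if err := tw.WriteHeader(hdr); err != nil {
        return err
    }
    // Write the real data fragment at the front of the file.
    if _, err := tw.Write(data); err != nil {
        return err
    }
    // Skip the trailing hole without materializing hdr.Size-len(data) zero bytes.
    _, err := tw.FillZeros(hdr.Size - int64(len(data)))
    return err
}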

ianlancetaylor commented 8 years ago

If Reader and Writer support sparse files transparently, why export SparseHoles? Is the issue that when writing you don't want to introduce a sparse hole that the caller did not explicitly request?

dsnet commented 8 years ago

The Reader expands sparse files transparently. The Writer is "transparent" in the sense that a user can just do io.Copy(tw, sparseFile), and so long as the user has already specified where the sparse holes are, it will avoid writing the long runs of zeros.

Purely transparent sparse files for Writer cannot easily be done, since the tar.Header is written before the file data; the Writer cannot know what sparse map to encode in the header before seeing the data itself. Thus, Writer.WriteHeader needs to be told where the sparse holes are.

I don't think tar should automatically create sparse files (for backwards compatibility). As a data point, the tar utilities do not automatically generate sparse files unless the -S flag is passed in. However, it would be nice if the user didn't need to come up with the SparseHoles themselves. Unfortunately, I don't see an easy solution to this.


There are three main ways that sparse files may be written:

  1. In the case of writing a file from the filesystem (the use case that spawned this issue), I'm not aware of any platform-independent way to easily query for all the sparse holes. There is a way to do this on Linux and Solaris with SEEK_DATA and SEEK_HOLE (see my test in CL/17692, and the sketch after this list), but I'm not aware of ways to do this on other OSes like Windows or Darwin.
  2. In the case of a round-trip read-write, a tar.Header read from Reader.Next and written to Writer.WriteHeader will work just fine as expected since tar.Header will have the SparseHoles field populated.
  3. In the case of writing a file from memory, the user will need to write their own zero-detection scheme (assuming they don't already know where the holes are).
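
As mentioned in item 1, here is a rough, Linux-only sketch of hole detection with SEEK_DATA/SEEK_HOLE. This is a hedged illustration, not an existing API: seekData/seekHole are the Linux lseek whence values, detectHoles is a made-up helper, and the standard os package is assumed to be imported.

const (
    seekData = 3 // SEEK_DATA on Linux
    seekHole = 4 // SEEK_HOLE on Linux
)

// detectHoles returns the hole regions of f as (offset, length) pairs.
// It assumes Linux/Solaris lseek semantics; other OSes need different code.
func detectHoles(f *os.File, size int64) ([][2]int64, error) {
    var holes [][2]int64
    for pos := int64(0); pos < size; {
        // Find the start of the next hole at or after pos.
        // (Per SEEK_HOLE semantics there is always a virtual hole at EOF.)
        holeStart, err := f.Seek(pos, seekHole)
        if err != nil {
            return nil, err // the filesystem may not support SEEK_HOLE
        }
        if holeStart >= size {
            break
        }
        // Find where data resumes after the hole; an error (ENXIO) means
        // the hole extends to the end of the file.
        dataStart, err := f.Seek(holeStart, seekData)
        if err != nil {
            dataStart = size
        }
        holes = append(holes, [2]int64{holeStart, dataStart - holeStart})
        pos = dataStart
    }
    return holes, nil
}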

I looked at the source for GNU and BSD tar to see what they do:

I'm not too fond of the OS-specific things they do to detect holes (granted, archive/tar already has many OS-specific things in it). I think it would be nice if tar.Writer provided a way to write sparse files, but we should delegate detection of sparse holes to the user for now. If possible, we can try to get sparse info during FileInfoHeader, but I'm not sure that os.FileInfo has the necessary information to do the queries that are needed.

AkihiroSuda commented 7 years ago

@dsnet Design SGTM (non-binding), do you plan to implement that feature?

dsnet commented 7 years ago

I'll try and get this into the Go 1.9 cycle. However, a major refactoring of the tar.Writer implementation needs to happen first.

dsnet commented 7 years ago

That being said, for all those interested in this feature, can you mention what your use case is?

For example, are you only interested in being able to write a sparse file where you have to specify explicitly where the holes in the file are? Or do you expect to pass an os.FileInfo and have the tar package figure it out (I'm not sure this is possible)?

willglynn commented 7 years ago

My use case is go_ami_tools/aws_bundle, a library that makes machine images for Amazon EC2. The inside of the Amazon bundle format is a sparse tar, which is a big advantage for machine images since there are usually lots of zeroes. go_ami_tools currently writes all the zeroes and lets them get compressed away, but a sparse tar would be better.

I'd like to leave zero specification up to the user of my library. ec2-bundle-and-upload-image – my example tool – would read zeroes straight from the host filesystem, but someone could just as easily plug the go_ami_tools library into a VMDK or QCOW reader, in which case the zeroes would be caller-specified.

AkihiroSuda commented 7 years ago

My use case is to solve a Docker issue (https://github.com/docker/docker/issues/5419#issuecomment-41786665), which causes docker build to fail with ENOSPC when the container image contains a sparse file.

grubernaut commented 7 years ago

We (Hashicorp) run Packer builds for customers on our public SaaS, Atlas. We offer up an Artifact Store for Atlas customers so that they can store their created Vagrant Boxes, VirtualBox (ISO, VMX), QEMU, or other builds inside our infrastructure. If the customer specifies using the Atlas post-processor during a Packer build, we first create an archive of the resulting artifact, and then we create a POST to Atlas with the resulting archive.

Many of the resulting QEMU, VirtualBox, and VMware builds can be fairly large (10-20GB), and we've had a few customers sparsify the resulting disk image, which can lower the resulting artifact's size to ~500-1024MB. This, of course, allows for faster downloads, less bandwidth usage, and a better experience overall.

We first start to create the archive from the Atlas Post-Processor in Packer (https://github.com/mitchellh/packer/blob/master/post-processor/atlas/post-processor.go#L154). We then archive the resulting artifact directory, and walk the directory. Finally, we write the file headers, and perform an io.Copy: (https://github.com/hashicorp/atlas-go/blob/master/archive/archive.go#L381).

In this case, we wouldn't know explicitly where the holes in the file are, and would have to rely on os.FileInfo or something similar to generate the sparse map of the file, although I'm not entirely sure that this is possible.

vbatts commented 7 years ago

@dsnet the use case is largely around container images. The Reader design you proposed SGTM, though it would be nice if the tar reader also provided an io.Seeker to accommodate the SparseHoles; that is not a terrible issue, just less than ideal. For the Writer, either passing the FileInfo, or some form of quick detection and perhaps an io.Writer wrapper with a type assertion? Both sides would be useful, though. Thanks for your work on this.

dsnet commented 7 years ago

Sorry this got dropped in Go 1.9. I have a working solution out for review for Go 1.10.

gopherbot commented 7 years ago

Change https://golang.org/cl/56771 mentions this issue: archive/tar: refactor Reader support for sparse files

gopherbot commented 7 years ago

Change https://golang.org/cl/57212 mentions this issue: archive/tar: implement Writer support for sparse files

rasky commented 7 years ago

I think the proposed API is suboptimal because it leaves users of the library with the daunting task of correctly doing hole detection if they want to properly handle sparse files without blowing up disk usage. I have a proposal for a different API.

Reader

Change it so that sparse files can be transparently extracted by using io.Copy to disk.

Writer

Change it so that sparse files can be transparently packed by using io.Copy from disk, with best-effort hole detection.

Notes

dsnet commented 7 years ago

The current API may be sub-optimal in performance, but it is complete in functionality. The suggestions you have are reasonable approaches in addition to what's currently sent out for review.


Your suggestion to add Reader.WriteTo seems reasonable.

However, an implementation of Writer.ReadFrom is not so easy. There are several problems:

If sparse-file detection were more prevalent across all OSes, reliable, and easy to access, then I would support Writer.ReadFrom, but it's currently too magical in how it works.


In terms of performance, the current API can be augmented by Reader.Discard and Writer.FillZeros, which allow you to skip through the holes very quickly. While it is a disadvantage that the user is responsible for skipping over the holes themselves using Header.SparseHoles, the approach is much more explicit and clear in how it works.
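
For illustration, a hedged sketch of what explicit hole handling on the reader side could look like with the proposed Header.SparseHoles and Reader.Discard (proposed API only; the helper name and variables are made up):

func extractSparse(tr *tar.Reader, hdr *tar.Header, out *os.File) error {
    pos := int64(0)
    for _, hole := range hdr.SparseHoles {
        // Copy the data fragment that precedes this hole.
        if _, err := io.CopyN(out, tr, hole.Offset-pos); err != nil {
            return err
        }
        // Skip the hole on both sides: discard the synthesized zeros from
        // the archive and seek forward in the output file to leave a hole.
        if _, err := tr.Discard(hole.Length); err != nil {
            return err
        }
        if _, err := out.Seek(hole.Length, io.SeekCurrent); err != nil {
            return err
        }
        pos = hole.Offset + hole.Length
    }
    // Copy any trailing data, then make sure the file has its full logical size.
    if _, err := io.CopyN(out, tr, hdr.Size-pos); err != nil {
        return err
    }
    return out.Truncate(hdr.Size)
}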

dsnet commented 7 years ago

As a compromise, here's a possibility that has the performance advantages of Reader.WriteTo and Writer.ReadFrom along with (more) explicit handling of sparse files.

We can do the following:

The above has the advantage that Writer.ReadFrom only needs to check for io.ReadSeeker and doesn't need to assume SEEK_HOLE and SEEK_DATA support. It avoids any magic in Writer.WriteHeader where it would cache the header, possibly change it again, and write it on first write operation. Population of Header.SparseHoles is the responsibility of FileInfoHeader, which is already an OS-specific function given that it takes in an os.FileInfo.
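
A hedged sketch of the end-to-end flow this compromise implies (none of the sparse-specific behavior here is a settled API; addFile is a made-up helper):

func addFile(tw *tar.Writer, f *os.File) error {
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    hdr, err := tar.FileInfoHeader(fi, "")
    if err != nil {
        return err
    }
    // Under this proposal, hdr.SparseHoles would be populated here by
    // FileInfoHeader (or by a dedicated helper, as discussed below).
    if err := tw.WriteHeader(hdr); err != nil {
        return err
    }
    // *os.File is an io.ReadSeeker, so ReadFrom could seek over the holes
    // instead of reading and writing long runs of zeros.
    _, err = tw.ReadFrom(f)
    return err
}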

rasky commented 7 years ago

I like your suggestion because it manages to avoid the implicit header caching, and moves hole detection into header creation, where it belongs. But I don't see how it can be implemented. Main question: how can FileInfoHeader populate Header.SparseHoles? It only gets an os.FileInfo as input, and there's no way to open a file from a FileInfo (there's no full path information in it).

Keeping SparseHoles exported also raises some consistency questions:

but I guess this can be fixed with documentation.

dsnet commented 7 years ago

But I don't see how it can be implemented. Main question: how can FileInfoHeader populate Header.SparseHoles?

Agreed. I tried implementing it and it's not possible. I don't see a way around this other than a new constructor function func FileHeader(f *os.File) (*Header, error). I would still like to confirm whether that signature is sufficient to detect sparse holes on OS X and Windows.

What happens if a user sets SparseHoles in the header but then write non-zero bytes in the holes using Writer.Write?

The documentation for Writer.Write in CL/57212 says it must be written with NUL-bytes.

What happens if a user sets SparseHoles but not Typeflag to TypeGNUSparse?

That's fine. TypeGNUSparse implies that the format will be GNU; otherwise it will be PAX. Both are valid. I'll document it more when the format is actually exposed to the user in #18710.

rasky commented 7 years ago

I don't see a way around this other than a new constructor function func FileHeader(f *os.File) (*Header, error).

That is really unfortunate, as it would not even be a superset of FileInfoHeader() (FileInfoHeader() works with any os.FileInfo, not only those that come from os.File; I have used it many times to generate a header from in-memory structures that exposed an os.FileInfo as a way to fake a filesystem node). So we would end up with two similar functions, neither of which is able to handle all required cases, and the user would be forced to use one or the other depending on the context.

So it looks like there are currently two options on the table:

Any other idea? Do you have a final call on this?

I would still like to see a solution for whether that signature is sufficient to detect sparse holes on OSX and Windows.

On Windows, you can use os.File.Fd() to access the underlying HANDLE, with which you can call DeviceIoControl with the control code FSCTL_QUERY_ALLOCATED_RANGES to access the hole list (see this example).

Currently released versions of macOS (or rather HFS+) don't support sparse files. The new APFS filesystem supports them, but the documentation is rather sparse at the moment, given that macOS with APFS is still in beta (this is the only APFS-related API list I found, and it touches several features but not sparse files).

I did some quick tests on both beta and non-beta versions of macOS, and it looks like APFS allows creating sparse files just like Linux, by simply seeking; for instance, I ran dd if=/dev/zero of=file.img bs=1 count=0 seek=512000000 to create a file with an apparent size of 512 MB that occupies zero bytes (verified with du file.img). Also, the man page of lseek includes SEEK_HOLE and SEEK_DATA; I haven't directly tested them, but they're described as working exactly as they do on Linux and Solaris. So it looks like macOS support will be achieved with the same code that will be used on Linux.

dsnet commented 7 years ago

(transferring over discussion from CL/57212)

There are 3 distinct tasks with regard to sparse files:

  A. Representing sparse files in the tar format itself (what Reader decodes and Writer encodes).
  B. Efficiently reading/writing the sparse file data to/from disk (skipping holes instead of copying runs of zeros).
  C. Detecting where the sparse holes are in a file on the local filesystem.

While they are obviously related, they are independent problems, and I believe conflating them together is a mistake.

As it currently stands, the API for Writer is split into two parts: WriteHeader and Write (of which zero or more calls are made to populate the data for the previously written header). This API exactly reflects how TAR files are serialized.

Any solution for sparse files must have the information about sparse holes available at the time that WriteHeader is called (which implies that information about sparse holes is held within the Header as either exported or unexported information). I am a proponent of having that information exported, since there are other ways I want to create sparse files besides just pulling them straight from disk. While I understand that this information is "lower-level" than what users may want, it is a literal representation of what the sparse file looks like and is sufficient for representing sparse files in both GNU and PAX format. Users that want to use higher-level APIs to populate this field do not need to care about it. In the same way, if you use FileInfoHeader, you never need to care about setting Header.Mode yourself, but the fact that Mode is available is still very useful when crafting the Header manually.

That being said, we can separate out task C as a helper method or function that takes an *os.File and populates the Header.SparseHoles field. It seems that we can't use FileInfoHeader(os.FileInfo) (*Header, error) because of a lack of information, and there are disadvantages to FileHeader(*os.File) (*Header, error). We could make it even more surgical and only generate the sparse holes: (*Header) SetSparseHolesFrom(*os.File) error.

API aside, the implementation itself is actually hard because support for sparse-hole detection varies widely across operating systems. (Anyone who's looked at the code for GNU or BSD tar will see a host of #ifdef special-casing logic for different platforms; yuck.) The fact that detection relies on OS-specific details is all the more reason why we should not conflate C into B or A; that is, Writer or Reader should not change behavior depending on OS-specific details (it's fine for OS-specific information to affect the creation of a Header, but not Reader or Writer directly).

In regard to B, how to efficiently and easily write/read a file to/from disk is a separate problem from how it is represented in the tar format (which is task A and addressed by CL/57212). The suggestions given above regarding how to resolve B are both compatible with the approach taken for A. For example, Reader.Discard and Writer.FillZeros are actually implemented (but unexported) in CL/57212; the unit tests use them to efficiently write a sparse file with a very large logical size. Also, Reader.WriteTo and Writer.ReadFrom can be added that special-case inputs that are also an io.WriteSeeker or io.ReadSeeker. WriteTo/ReadFrom can be implemented internally in terms of Discard/FillZeros, and the use of io.Seeker does not need to depend on OS-specific details like SEEK_HOLE and SEEK_DATA, only io.SeekCurrent to skip past holes. Again, neither of these extensions conflicts with A.
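
As a usage sketch of the WriteTo side of B (hedged; WriteTo on tar.Reader is part of the proposal here, not a settled API, and extract is a made-up helper):

func extract(tr *tar.Reader, hdr *tar.Header) error {
    out, err := os.Create(hdr.Name)
    if err != nil {
        return err
    }
    defer out.Close()
    // Because *os.File is an io.WriteSeeker, WriteTo can seek past the
    // holes rather than writing long runs of zeros.
    _, err = tr.WriteTo(out)
    return err
}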

In regard to A, I don't think there's any controversy here. Support for sparse files clearly requires logic to encode a valid tar file representing the sparse holes.


It seems to me that the biggest unknown is how to accomplish C. Your research seems to confirm that *os.File is sufficient to detect holes on all major platforms. Whether we add a new function or a new method to Header, and what it looks like, is still up for debate, but I am fairly convinced that this is the right direction to be headed.

That being said, the fact that C is not fully thought through should not prevent A and B from happening. In fact, doing A and B first gives us a testing ground to prototype what C should look like. It's okay if only support for A (and B, if time permits) lands in Go 1.10 while users still need to write their own logic for C. We can merge the rest into Go 1.11, based on experience reports.

rasky commented 7 years ago

I understand your line of reasoning. I have a couple of comments:

I'm still very worried about releasing 1.10 with only A and B, as we might make mistakes on the API that are hard to revert afterwards. But that's your call.

dsnet commented 7 years ago

You seem not to care much about the fact that existing code will have to be modified to fully support sparse files.

It's not that I don't care, but that when considering competing concerns, I feel this is not a compelling benefit:

You seem to want to avoid OS-specific code in Reader / Writer. I'm afraid that's not fully possible because on Windows you need to create holes through a specific API; seeking by itself does not create holes, just zeros. So Reader.WriteTo will have to call OS-specific code, when Windows support is added.

That is unfortunate, but it still works fine with WriteTo and ReadFrom, since they will be documented as only handling io.Seeker specially. In the case of Windows, WriteTo will be equivalent to:

io.Copy(dst, struct { io.Reader }{tr})

Which is exactly what happens today.

Also, Windows users who really care about sparse writing still have alternatives:

dsnet commented 7 years ago

I'm still very worried about releasing 1.10 with only A and B, as we might make mistakes on the API that are hard to revert afterwards.

I understand.

The only exposed API in A is Header.SparseHoles. I know you have reservations about exposing that information, but I believe users should be able to craft a sparse file manually. If we only rely on what results from C to produce sparse files, then manual crafting would not be possible.

To a degree, I do share your concern regarding B. I don't feel rushed to expose this for Go1.10, but it would be nice.

rasky commented 7 years ago

OK so the roadmap is clear now. Do you want me to send CLs about some specific parts?

dsnet commented 7 years ago

After I submit the CL for A, feel free to send out a CL to add WriteTo/ReadFrom to address B. We can continue discussing here for a design for C. Do you have any proposals for C?

rasky commented 7 years ago

While implementing Reader.WriteTo, I realized that the io.WriteSeeker abstraction alone is not sufficient even on Linux/macOS to create sparse files, because it is not able to create a hole at the end of the file; you need to call os.File.Truncate() for that to really happen. If you seek past EOF and close the file, the file size is not changed. This is in addition to io.WriteSeeker being insufficient on Windows.

We have a few possibilities:

Any comment?

dsnet commented 7 years ago

Another possibility is to use io.Seeker to seek to one byte before the end of the last fragment and write a single byte.

My evaluation of the approaches:


Seek to 1-byte before the last hole and write a single zero byte.

For consistency, Writer.ReadFrom can also use the same 1-byte-before-EOF technique to ensure the file really is that long (since you can Seek to arbitrary offsets and most io.Seeker implementations won't tell you that an offset is past EOF).

I don't think byte-for-byte reproduction (in terms of where the hole regions are) of sparse files is necessary. So I'm okay if this implicitly causes a single block to be allocated at the end of the file. The reality is that the sparse file generated is still at the whim of the underlying filesystem, which may not be able to exactly respect the hole regions from the original tar file (the source FS may have 4KiB blocks, and the target FS may have a different block size and can't represent holes at offsets from the original FS).

Special-case Reader.WriteTo for os.File rather than io.WriteSeeker

For consistency, WriteTo/ReadFrom should then both use os.File. The upside of this approach is that it is more clearly optimized for os.File, which has stronger guarantees about the behavior of seeking past EOF. The downside is that you can't use a wrapper around os.File that does the hole-punching yourself (in the case of Windows).

Special-case Reader.WriteTo for io.WriteSeeker, plus tell the users that they need to call Truncate themselves.

The downside is that this is a very subtle requirement for the user.

Special-case Reader.WriteTo for io.WriteSeeker and os.File (or a non-idiomatic Truncater interface).

The downside is more special-casing. There is value in having as few special-cases as possible.


My first vote goes to the "seek 1-byte before" technique. My second vote is special-casing for os.File only.
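
A minimal sketch of the "seek 1-byte before" technique (assuming an io.WriteSeeker destination; finishSparseFile is a made-up helper name):

func finishSparseFile(w io.WriteSeeker, size int64) error {
    if size == 0 {
        return nil
    }
    // Seek to one byte before the logical end and write a single zero byte,
    // so the file really has the full size even when it ends in a hole and
    // the destination does not grow on a bare seek past EOF.
    if _, err := w.Seek(size-1, io.SeekStart); err != nil {
        return err
    }
    _, err := w.Write([]byte{0})
    return err
}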

dsnet commented 7 years ago

This bug seems relevant to what we're trying to do here: #21681

dsnet commented 7 years ago

@rasky, have you started working on B yet? I have a working version of it using the "seek 1-byte before" technique.

gopherbot commented 7 years ago

Change https://golang.org/cl/60871 mentions this issue: archive/tar: add Header.DetectSparseHoles

gopherbot commented 7 years ago

Change https://golang.org/cl/60872 mentions this issue: archive/tar: add Reader.WriteTo and Writer.ReadFrom

vbatts commented 7 years ago

Oh nice!

On Wed, Sep 20, 2017, 18:14 GopherBot notifications@github.com wrote:

Closed #13548 (https://github.com/golang/go/issues/13548) via commit 1eacf788 (https://github.com/golang/go/commit/1eacf78858fd18b100d25f7a04c4c62d96a23020).


rasky commented 7 years ago

Now that support has been added, it would be great if people interested in this feature would provide feedback at least on the API, before it gets shipped and can’t be changed anymore.

Have a look at https://tip.golang.org/pkg/archive/tar/

grubernaut commented 7 years ago

cc: @mwhooker, as the original case for this issue came from an end user requiring sparse support inside Atlas-Go after creating a sparse image via Packer. More detail and the function to be patched are linked here: https://github.com/golang/go/issues/13548#issuecomment-265770745

astromechza commented 7 years ago

Came across this issue looking for sparse-file support in Go. The API looks good to me and certainly fits my use case :). Is there no sysSparsePunch needed for Unix?

dsnet commented 7 years ago

On Unix OSes that support sparse files, seeking past EOF and writing or resizing the file to be larger automatically produces a sparse file.
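
A small illustration of that (assumes a Unix filesystem with sparse-file support, e.g. ext4; the file name is arbitrary):

package main

import (
    "log"
    "os"
)

func main() {
    f, err := os.Create("sparse.img")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    // Resizing past the written data (or seeking past EOF and writing)
    // leaves a hole: the apparent size is 512 MiB, the disk usage ~0.
    if err := f.Truncate(512 << 20); err != nil {
        log.Fatal(err)
    }
    // `ls -lh sparse.img` reports 512M while `du -h sparse.img` reports ~0.
}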

astromechza commented 7 years ago

Cool, so it detects that you've skipped past a block without writing anything to it and automatically assumes it's sparse? Nice 👍

gopherbot commented 6 years ago

Change https://golang.org/cl/78030 mentions this issue: archive/tar: partially revert sparse file support

rasky commented 6 years ago

Unfortunately, the code had to be reverted and will not be part of 1.10 anymore. This bug should probably be reopened.

gogowitsch commented 3 years ago

Dear Go heroes, please try to get sparse support into tar.Writer. Thanks!

realtebo commented 6 months ago

Is this bug still present?