sfackler opened this issue 4 years ago · Status: Open
Is there a situation under which this would not be better off as an associated constant?
Those are not compatible with trait objects.
Is this a design choice, or just something that has not been implemented yet?
That is inherent to how constants work.
Is there a timeline to stabilize this feature? Knowing whether a socket supports vectored I/O or not gives applications a bunch of optimization opportunities :)
As one potential argument for the acceptance of `io::Write::is_write_vectored()`, I would like to note that Tokio has stabilized `tokio::io::AsyncWrite::is_write_vectored()`. However, `tokio::io::AsyncRead` does not expose any analogue of `is_read_vectored()`, which may be a potential argument against `io::Read::is_read_vectored()`. My guess, though, is that vectored reads are less likely to improve performance in async code than in largely synchronous local filesystem I/O, so Tokio's decision to omit vectored reads is probably not pertinent to the general utility of vectored reads.
As @xiaoyawei noted, it would be especially nice to be able to detect support for vectored writes on stable. In an experimental branch of the `zip` crate, I demonstrated writing zip headers in a single call using vectored writes (https://github.com/cosmicexplorer/zip/blob/f755697eff4f342c842e2c76ae2bb387515de56b/src/spec.rs#L214-L241):
```rust
pub async fn write_async<T: io::AsyncWrite>(&self, mut writer: Pin<&mut T>) -> ZipResult<()> {
    let block: [u8; 22] = unsafe {
        mem::transmute(CentralDirectoryEndBuffer {
            magic: CENTRAL_DIRECTORY_END_SIGNATURE,
            disk_number: self.disk_number,
            disk_with_central_directory: self.disk_with_central_directory,
            number_of_files_on_this_disk: self.number_of_files_on_this_disk,
            number_of_files: self.number_of_files,
            central_directory_size: self.central_directory_size,
            central_directory_offset: self.central_directory_offset,
            zip_file_comment_length: self.zip_file_comment.len() as u16,
        })
    };
    if writer.is_write_vectored() {
        /* TODO: zero-copy!! */
        let block = IoSlice::new(&block);
        let comment = IoSlice::new(&self.zip_file_comment);
        writer.write_vectored(&[block, comment]).await?;
    } else {
        /* If no special vector write support, just perform two separate writes. */
        writer.write_all(&block).await?;
        writer.write_all(&self.zip_file_comment).await?;
    }
    Ok(())
}
```
I suppose it's true that I could have done this without checking `is_write_vectored()`, but if vectored write support isn't guaranteed, I don't want to risk a non-atomic write (not sure if this really matters; I guess it just makes me uncomfortable). However, I'm now also realizing that the above code is probably incorrect, as I don't check whether tokio's async `write_vectored()` writes all of my data, so I'm probably looking for #70436/`write_all_vectored()` to be stabilized as well (and I can see there is a legitimate discussion required before that can happen). I will conclude this note by proposing that, especially in the absence of a stabilized `write_all_vectored()`, it becomes even more important to avoid complex vectored writes unless necessary, and I think this makes `is_{read,write}_vectored()` even more useful to stabilize first.
Regarding @m-ou-se's comment on `io::Chain` behavior (https://github.com/rust-lang/rust/issues/69941#issuecomment-1366723373):

> https://github.com/rust-lang/rust/pull/105917 is an interesting case to think about: `io::Chain`. When chaining one stream that is write vectored with another one that isn't, `io::Chain::is_write_vectored` could make its return value depend on whether it's currently in the first or second stream, I suppose. (Not saying that it should. But we should probably document what is expected.)
@workingjubilee's proposal at https://github.com/rust-lang/rust/pull/105917/files#r1278629167 to make `io::Chain` only report `is_read_vectored()` if both `first` and `second` support vectored reads seems like the most intuitive response to this question.
Thanks so much for reading, I really appreciate your work.
Unresolved Questions
Implementation history

- #67841