**Closed** · grrtrr · closed 2 months ago
The same problem exists for `bufferstream`. Test program:
```cpp
#include <boost/interprocess/streams/bufferstream.hpp>
#include <vector>
#include <cstdlib>
#include <iostream>
#include <cmath>

int main(int argc, char* argv[])
{
    // some numbers such that the total buffer is larger than 2GB
    const std::streamoff s1(2000);
    const std::streamoff s2(314748);
    std::vector<float> buffer(s1 * s2);
    const auto buf_size = static_cast<std::streamsize>(buffer.size()) * sizeof(float);
    boost::interprocess::bufferstream s(reinterpret_cast<char*>(buffer.data()), buf_size,
                                        std::ios::in | std::ios::out | std::ios::binary);

    // write some data from a smaller buffer
    std::vector<float> buffer2(s2, 1.F);
    for (std::streamoff i = 0; i < s1; ++i)
    {
        std::cout << "before (" << i << ") " << s.tellp() << std::endl;
        s.write(reinterpret_cast<const char*>(buffer2.data()), s2 * sizeof(float));
    }
    if (std::abs(buffer[s1 * s2 - 1] - 1.F) > .001F)
    {
        std::cerr << "wrong output\n";
        return 1;
    }
    return 0;
}
```
This segfaults with g++-8 and Boost 1.65.1, and also with g++-12 and Boost 1.74.0:
```
...
before (1703) 2144063376
before (1704) 2145322368
before (1705) 2146581360
before (1706) 2147840352
Segmentation fault (core dumped)
```
If you comment out the `tellp()` call, it runs fine.
We observed this problem on Boost 1.71, but it is also present in 1.81.

### Problem description
On a system with a 4-byte `int` and an `INT_MAX` of 2147483647, attempting to store more than that in a `basic_vectorbuf` causes problems due to converting a larger type (`std::char_traits::off_type` or `std::vector::difference_type`) to an `int`:

- `std::streambuf::pbump` and `std::streambuf::gbump` both take an `int`, so larger offset types are implicitly converted;
- passing an offset greater than `INT_MAX` to these functions moves `pptr`/`gptr` before `pbase`/`eback`, and subsequent access causes a segmentation fault;
- the narrowing to `int` would also produce negative offsets due to overflow.

### How to reproduce
The problem is fully reproducible by storing more than `INT_MAX` elements in a `basic_vectorbuf`, or using `reserve()` to allocate more than `INT_MAX` elements, and then calling `seekoff` or `tellp()` once the write position has advanced past `INT_MAX`.