Closed flying-sheep closed 7 years ago
Hi Philip, I had implemented this but hadn't released it yet. I've made a release, 0.9.0, which should support large data arrays.
I've also uploaded pyRserve 0.9.0 to PyPI. Can you please check that large data support works for you as well? If it does, could you close this issue?
You seem to have uploaded the wrong version as 0.9.0: when I try to use it, there's an exception in `finalize`. The version tagged 0.9.0 looks different; the version on PyPI (i.e. the one I have installed) looks like this:
```python
def finalize(self):
    """Finalize the message package before actually sending/writing it out.
    -> Set the length of the entire data package in the general message hdr
       as number of bytes of the entire message minus the general hdr
    """
    # Jump to end of buffer to determine its length:
    self._buffer.seek(0, os.SEEK_END)
    messageSize = self._buffer.tell() - rtypes.RHEADER_SIZE
    if DEBUG:
        print('writing size of header: %2d' % messageSize)
    # Goto position 4 of the general Rserve package header and write the
    # size of the overall rserve message there. For message sizes > 2**32
    # the size is split into two parts: the lower 32 bits are written at
    # position 4, the higher part is written at position 12 (see QAP1 docs)
    bin_messageSize = struct.pack('<Q', messageSize)
    bin_messageSize_lo = bin_messageSize[:4]
    bin_messageSize_hi = bin_messageSize[4:]
    self._buffer.seek(4)
    self._buffer.write(bin_messageSize_lo)
    self._buffer.write('\x00\x00\x00\x00')  # data offset, zero by default
    self._buffer.write(bin_messageSize_hi)
    return self._getRetVal()
```
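To illustrate the lo/hi split the comment above describes, here is a small standalone sketch (my own illustration, not pyRserve code): the 64-bit message size is packed little-endian, the first four bytes hold the low 32 bits, the last four the high 32 bits, and reassembling the two halves recovers the original size.

```python
import struct

# Sketch of the QAP1-style length split: a 64-bit message size is stored
# as a low 32-bit half and a high 32-bit half (illustrative values only).
message_size = 5_000_000_000          # deliberately > 2**32
packed = struct.pack('<Q', message_size)
lo, hi = packed[:4], packed[4:]       # low 32 bits, high 32 bits

# Recombining both halves yields the original 64-bit size again:
restored = struct.unpack('<I', lo)[0] | (struct.unpack('<I', hi)[0] << 32)
assert restored == message_size
```

For sizes below 2**32 the high half is simply four zero bytes, so small messages remain compatible with readers that only look at the 32-bit field.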
Right, I hadn't pushed my patch. It is now uploaded and correctly tagged.
Seems to work well.
A test is as easy as:

```python
import numpy as np
import pyRserve

con = pyRserve.connect()
con.r.test = np.arange(40000000)
```
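For context, a quick back-of-the-envelope calculation (my own, not from the thread) suggests why the cutoff mentioned below might sit near 32M entries: 32M 8-byte values is exactly 256 MiB, which, if I recall correctly, matches Rserve's default input buffer limit (`maxinbuf`), so the server side may be rejecting the payload rather than the client protocol failing.

```python
# Back-of-the-envelope arithmetic (assumption, not measured in the thread):
entries = 32 * 2**20          # ~32M vector entries
bytes_per_entry = 8           # one 64-bit integer/double per entry
total_bytes = entries * bytes_per_entry
print(total_bytes)            # 268435456 bytes, i.e. exactly 256 MiB
```

If that is the cause, raising `maxinbuf` in the Rserve configuration would move the limit, independently of any client-side fix.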
Somewhere around 32M entries, vectors can't be transferred anymore. I assume `DT_LARGE` would be used for that?
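For anyone following along: as I understand the QAP1 docs, `DT_LARGE` is a flag bit in the parameter type byte; when set, the parameter header grows from 4 to 8 bytes and the length occupies 56 bits instead of 24. A rough sketch of that header encoding (an assumption based on my reading of the spec, not pyRserve's actual code; `pack_param_header` and the `DT_SEXP` value used here are illustrative):

```python
import struct

DT_LARGE = 0x40   # QAP1 flag: parameter uses an 8-byte header, 56-bit length
DT_SEXP = 10      # example parameter type (an encoded R expression)

def pack_param_header(dtype, length):
    """Sketch of a QAP1 parameter header (illustrative, not pyRserve code):
    a 4-byte header (type byte + 24-bit length) for small payloads, or an
    8-byte header with DT_LARGE set (type byte + 56-bit length) for large ones.
    """
    if length < 2**24:
        return struct.pack('<I', dtype | (length << 8))
    return struct.pack('<Q', (dtype | DT_LARGE) | (length << 8))

# Small payloads keep the compact header; large ones switch to 8 bytes:
assert len(pack_param_header(DT_SEXP, 100)) == 4
assert len(pack_param_header(DT_SEXP, 2**32)) == 8
```

So supporting `DT_LARGE` on the sending side mostly means checking the payload size and emitting the wider header when it no longer fits in 24 bits.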