Open gospodnetic opened 7 years ago
It wouldn't be as efficient as indexing a single vector yourself, but if you truly needed the data in a vector of vectors, you could do the following:
```cpp
cnpy::NpyArray arr = cnpy::npy_load("arr1.npy");
double* loaded_data = arr.data<double>();
size_t nrows = arr.shape[0];
size_t ncols = arr.shape[1];
std::vector<std::vector<double>> vec2d;
vec2d.reserve(nrows);
for(size_t row = 0; row < nrows; row++) {
    vec2d.emplace_back(ncols);
    for(size_t col = 0; col < ncols; col++) {
        vec2d[row][col] = loaded_data[row*nrows+col];
    }
}
```
Just for the sake of visibility:
```cpp
cnpy::NpyArray arr = cnpy::npy_load("arr1.npy");
double* loaded_data = arr.data<double>();
size_t nrows = arr.shape[0];
size_t ncols = arr.shape[1];
std::vector<std::vector<double>> vec2d;
vec2d.reserve(nrows);
for(size_t row = 0; row < nrows; row++) {
    vec2d.emplace_back(ncols);
    for(size_t col = 0; col < ncols; col++) {
        vec2d[row][col] = loaded_data[row*nrows+col];
    }
}
```
Some code edits that I felt were necessary while compiling the code...
```cpp
cnpy::NpyArray arr = cnpy::npy_load("arr1.npy");
double* loaded_data = arr.data<double>();
size_t nrows = arr.shape[0];
size_t ncols = arr.shape[1];
std::vector<std::vector<double> > vec2d;
vec2d.reserve(nrows);
for(size_t row = 0; row < nrows; row++) {
    vec2d.emplace_back(ncols);
    for(size_t col = 0; col < ncols; col++) {
        vec2d[row][col] = loaded_data[row*nrows+col];
    }
}
```
It compiles, but the indexing is wrong. It should be: `vec2d[row][col] = loaded_data[row*ncols+col];`
> It compiles but the indexing is wrong. Should be: `vec2d[row][col] = loaded_data[row*ncols+col];`
It depends on how you are counting. If the matrix is row-major then what you have specified is correct. However, the code I wrote is for the matrix that has been stored in column-major format.
Sorry, I need to disagree. Two things:

1. NumPy writes C-ordered arrays with `fortran_order: False` in the .npy header, and during `npy_load` cnpy checks `assert(!fortran_order)`, i.e. all matrices it loads are row-major. If you ignore this, you will get different results when loading the same matrix with numpy and cnpy.
2. Column-major indexing would be `vec2d[row][col] = loaded_data[col*nrows+row]`. Your proposed indexing (`row*nrows+col`) coincidentally works for square matrices but not for arbitrary ones.

Please correct me if I'm missing something.
We've integrated cnpy into xtensor & xtensor-io, by the way, in case you want to use a "NumPy-like" container directly in C++ without needing to resort to inefficient vector-of-vectors constructs.
I was going to ask if cnpy could automatically load data types/shapes, but it seems one should use xtensor instead? @wolfv, should this be mentioned in the README?
Hello, I am unclear: if I have a 2D array stored in an .npz file, how can I access the data and store it in a `vector<vector>`?
Also, if a scalar is saved to an .npz from simple Python code (below), its shape[0] is not 1 (as shown in the example) but 0.
Is there a way to determine the exact underlying data type within the npz file if all I know is that it contains floating-point values ranging from 0 to 255?
Thank you!