Open Alex555github opened 1 month ago
The error message looks like it's not from trying to load the JSON file (at runtime), but from trying to compile the C++ code that you wrote to load the JSON file.
I'm not aware of any dependencies on GMP, MPFR, MPC, or ISL.
Please make sure you're using a modern enough compiler on each machine and have the latest versions of frugally-deep and its dependencies (like nlohmann/json) installed.
If this does not help, maybe you can re-create the problem in a Dockerfile, so that I can reproduce it locally and debug it.
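For reference, a minimal reproduction image along these lines might look like the sketch below. The repository URLs are the upstream ones, but the exact build steps, tags, and the program to copy in are assumptions:

```dockerfile
# Sketch of a reproduction image; steps and versions are assumptions.
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y g++ cmake git

# frugally-deep and its header-only dependencies
RUN git clone https://github.com/Dobiasd/FunctionalPlus && \
    cmake -S FunctionalPlus -B FunctionalPlus/build && \
    cmake --install FunctionalPlus/build
RUN git clone https://gitlab.com/libeigen/eigen && \
    cmake -S eigen -B eigen/build && \
    cmake --install eigen/build
RUN git clone https://github.com/nlohmann/json && \
    cmake -S json -B json/build -DJSON_BuildTests=OFF && \
    cmake --install json/build
RUN git clone https://github.com/Dobiasd/frugally-deep && \
    cmake -S frugally-deep -B frugally-deep/build && \
    cmake --install frugally-deep/build

# COPY in the minimal program and model JSON that trigger the failure,
# then build and run it in this image.
```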
Hello, Tobias. Your frugally-deep project is very interesting. I tried to load neural network models (JSON) on two local machines.

On the first machine (ubuntu1~18.04, compiled by GNU C version 7.5.0, GMP version 6.1.2, MPFR version 4.0.1, MPC version 1.1.0, isl version isl-0.19-GMP), the neural network model file loads successfully.

On the other machine (ubuntu1~20.04.2, g++ 9.4.0, gmp-6.1.2, mpfr-4.0.2, mpc-1.1.0, isl-0.15 or later), calling fdeep::load_model("file_model_name.json"); fails to load and build the neural network model file (JSON), and the following entry appears in the console:
daemon.err my_custom_package[1]: my_custom_package: /home/jenkins/workspace/owrtmain_dev/staging_dir/target-x86_64_glibc/usr/include/nlohmann/json.hpp:2147: const value_type& nlohmann::json_abi_v3_11_3::basic_json<ObjectType, ArrayType, StringType, BooleanType, NumberIntegerType, NumberUnsignedType, NumberFloatType, AllocatorType, JSONSerializer, BinaryType, CustomBaseClass>::operator[](const typename nlohmann::json_abi_v3_11_3::basic_json<ObjectType, ArrayType, StringType, BooleanType, NumberIntegerType, NumberUnsignedType, NumberFloatType, AllocatorType, JSONSerializer, BinaryType, CustomBaseClass>::object_t::key_type&) const [with ObjectType = std::map; ArrayType = std::vector; StringType = std::__cxx11::basic_string; BooleanType = bool; NumberIntegerType = long int; NumberUnsignedType = long unsigned int; NumberFloatType = double; AllocatorType = std::allocator; JSONSerializer = nlohmann::json_abi_v3_11_3::adl_serializer; BinaryType = std::vector; CustomBaseClass = void; nlohmann::json_abi_v3_11
Please tell me: are there any minimum version requirements on the GMP, MPFR, MPC, or ISL libraries for using frugally-deep? Or do you know of another reason why loading the neural network model file (JSON) fails? Thank you very much in advance for your answer.