-
I am currently setting up Sockeye 2.3.2 and it specifically needs MXNet 1.7.0.post1. Is there a way I can get a wheel for this version, or build it from source?
-
### 🚀 The feature, motivation and pitch
Fuyou Training Framework Integration for PyTorch
Description:
Integrate the Fuyou training framework into PyTorch to enable efficient fine-tuning of larg…
-
MXNet already has experimental AMP (Automatic Mixed Precision) support, exposed in the mxnet.contrib package. It is used for automatically casting models to both float16 and bfloat16. This RFC covers moving i…
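For context, the existing contrib AMP API is used roughly as sketched below. This is a minimal sketch, not the RFC's proposed design: the toy Dense network, random data, and the assumption of a single GPU are placeholders for illustration.

```python
# Minimal sketch of mxnet.contrib.amp usage (toy network and data are
# placeholders; assumes a CUDA GPU is available for float16 execution).
import mxnet as mx
from mxnet import autograd, gluon
from mxnet.contrib import amp

amp.init(target_dtype='float16')   # patch operators for mixed-precision casting

ctx = mx.gpu(0)
net = gluon.nn.Dense(10)
net.initialize(ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
amp.init_trainer(trainer)          # enable dynamic loss scaling for the trainer

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
data = mx.nd.random.uniform(shape=(4, 16), ctx=ctx)
label = mx.nd.array([0, 1, 2, 3], ctx=ctx)

with autograd.record():
    out = net(data)
    loss = loss_fn(out, label)
    with amp.scale_loss(loss, trainer) as scaled_loss:
        autograd.backward(scaled_loss)
trainer.step(batch_size=4)
```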
-
We are beginning a migration to a new approach to Magick. This includes a full rewrite of our core agent library, with a design consideration for developer usage and consumption. We are wrapping thi…
-
## Description
MXNet does not support NumPy 1.24
### Error Message
Converting MXNet models with the OpenVINO Model Optimizer raises this error:
python3.8/site-packages/mxnet/numpy/utils.…
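A minimal compatibility guard, assuming the failure comes from NumPy 1.24 removing deprecated aliases (np.bool, np.object, …) that mxnet/numpy/utils.py still references; pinning an older NumPy before importing mxnet avoids it. This is a workaround sketch, not a fix in MXNet itself.

```python
# Guard sketch: refuse to import mxnet under NumPy >= 1.24, where the
# removed aliases break mxnet/numpy/utils.py in MXNet 1.9.x.
import numpy as np

major, minor = (int(x) for x in np.__version__.split(".")[:2])
if (major, minor) >= (1, 24):
    raise RuntimeError(
        "MXNet 1.9.x is known to fail with NumPy >= 1.24; "
        "install an older NumPy, e.g. pip install 'numpy<1.24'"
    )

import mxnet as mx  # safe to import once the NumPy version is compatible
print(mx.__version__)
```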
-
I was not able to run GluonCV with MXNet on my M1 Mac, so I tried using the Docker image as the Python interpreter:
mxnet/python:1.9.1_aarch64_cpu_py3, which runs Python 3.7.13 and mxnet==1.9.…
-
What are the minimum and recommended hardware requirements for running the model and for training?
1. How much GPU Memory (VRAM) is required?
2. How much RAM is required?
3. What GPUs are recommended?
…
-
## Description
The [Get Started instructions](https://mxnet.apache.org/versions/1.9.1/get_started?platform=windows&language=python&processor=cpu&environ=pip&) for MXNet 1.9.1 say to simply…
-
## Description
On an NVIDIA V100 the code works well.
On an NVIDIA A100 the code blocks in gluon_net.load_params without any error.
I am sure the params file exists and the GPU is available.
Anyone have the same p…
-
Traceback (most recent call last):
File "slowfast_export.py", line 273, in
converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file, verbose = True)
…
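For reference, the call from the traceback in a self-contained form; the symbol/params file names, input shape, and output path below are placeholders, not values from the original report.

```python
# Sketch of the ONNX export call shown in the traceback (paths and shape are
# hypothetical); onnx_mxnet.export_model writes the converted model to disk.
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

sym = "slowfast-symbol.json"          # hypothetical exported symbol file
params = "slowfast-0000.params"       # hypothetical parameters file
input_shape = (1, 3, 32, 224, 224)    # placeholder video-clip input shape
onnx_file = "slowfast.onnx"

converted_model_path = onnx_mxnet.export_model(
    sym, params, [input_shape], np.float32, onnx_file, verbose=True
)
print("ONNX model written to", converted_model_path)
```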