marian-nmt / marian

Fast Neural Machine Translation in C++
https://marian-nmt.github.io

bug: tcmalloc: large alloc when using factors #387

Open · Sarah-Callies opened this issue 2 years ago

Sarah-Callies commented 2 years ago

Marian version: v1.11.0, training with factored word embeddings.
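
For readability, here are the factor-related settings, pulled out of the config dump in the log below (values verbatim from the log; the comments are mine):

```yaml
# excerpt of the effective configuration, reconstructed from the [config] dump;
# not the complete config_factor.yml
vocabs:
  - ./resource/vocab.fsv                    # factored vocabulary, shared by source and target
  - ./resource/vocab.fsv
train-sets:
  - ./resource/ja-bped-factored-train.fsv   # factored corpus, tokens like 簡|s1
  - ./resource/en-bped-factored-train.fsv
factors-combine: sum                        # factor embeddings are summed onto the lemma embedding
factors-dim-emb: 0
factor-weight: 1
lemma-dim-emb: 0
```

The full training log: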

```
[2022-05-25 23:28:43] [marian] Marian v1.11.0 f00d0621 2022-02-08 08:39:24 -0800
[2022-05-25 23:28:43] [marian] Running on dnnl as process 7562 with command line:
[2022-05-25 23:28:43] [marian] /dnn98/yandaowei/marian/build/marian -c ./config_factor.yml
[2022-05-25 23:28:43] [config] after: 0e
[2022-05-25 23:28:43] [config] after-batches: 0
[2022-05-25 23:28:43] [config] after-epochs: 0
[2022-05-25 23:28:43] [config] all-caps-every: 0
[2022-05-25 23:28:43] [config] allow-unk: false
[2022-05-25 23:28:43] [config] authors: false
[2022-05-25 23:28:43] [config] beam-size: 6
[2022-05-25 23:28:43] [config] bert-class-symbol: "[CLS]"
[2022-05-25 23:28:43] [config] bert-mask-symbol: "[MASK]"
[2022-05-25 23:28:43] [config] bert-masking-fraction: 0.15
[2022-05-25 23:28:43] [config] bert-sep-symbol: "[SEP]"
[2022-05-25 23:28:43] [config] bert-train-type-embeddings: true
[2022-05-25 23:28:43] [config] bert-type-vocab-size: 2
[2022-05-25 23:28:43] [config] build-info: ""
[2022-05-25 23:28:43] [config] check-gradient-nan: false
[2022-05-25 23:28:43] [config] check-nan: false
[2022-05-25 23:28:43] [config] cite: false
[2022-05-25 23:28:43] [config] clip-norm: 5
[2022-05-25 23:28:43] [config] cost-scaling:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] cost-type: ce-mean-words
[2022-05-25 23:28:43] [config] cpu-threads: 0
[2022-05-25 23:28:43] [config] data-threads: 8
[2022-05-25 23:28:43] [config] data-weighting: ""
[2022-05-25 23:28:43] [config] data-weighting-type: sentence
[2022-05-25 23:28:43] [config] dec-cell: gru
[2022-05-25 23:28:43] [config] dec-cell-base-depth: 2
[2022-05-25 23:28:43] [config] dec-cell-high-depth: 1
[2022-05-25 23:28:43] [config] dec-depth: 6
[2022-05-25 23:28:43] [config] devices:
[2022-05-25 23:28:43] [config] - 2
[2022-05-25 23:28:43] [config] - 3
[2022-05-25 23:28:43] [config] dim-emb: 512
[2022-05-25 23:28:43] [config] dim-rnn: 1024
[2022-05-25 23:28:43] [config] dim-vocabs:
[2022-05-25 23:28:43] [config] - 0
[2022-05-25 23:28:43] [config] - 0
[2022-05-25 23:28:43] [config] disp-first: 0
[2022-05-25 23:28:43] [config] disp-freq: 500
[2022-05-25 23:28:43] [config] disp-label-counts: true
[2022-05-25 23:28:43] [config] dropout-rnn: 0
[2022-05-25 23:28:43] [config] dropout-src: 0
[2022-05-25 23:28:43] [config] dropout-trg: 0
[2022-05-25 23:28:43] [config] dump-config: ""
[2022-05-25 23:28:43] [config] dynamic-gradient-scaling:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] early-stopping: 100
[2022-05-25 23:28:43] [config] early-stopping-on: first
[2022-05-25 23:28:43] [config] embedding-fix-src: false
[2022-05-25 23:28:43] [config] embedding-fix-trg: false
[2022-05-25 23:28:43] [config] embedding-normalization: false
[2022-05-25 23:28:43] [config] embedding-vectors:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] enc-cell: gru
[2022-05-25 23:28:43] [config] enc-cell-depth: 1
[2022-05-25 23:28:43] [config] enc-depth: 6
[2022-05-25 23:28:43] [config] enc-type: bidirectional
[2022-05-25 23:28:43] [config] english-title-case-every: 0
[2022-05-25 23:28:43] [config] exponential-smoothing: True
[2022-05-25 23:28:43] [config] factor-weight: 1
[2022-05-25 23:28:43] [config] factors-combine: sum
[2022-05-25 23:28:43] [config] factors-dim-emb: 0
[2022-05-25 23:28:43] [config] gradient-checkpointing: false
[2022-05-25 23:28:43] [config] gradient-norm-average-window: 100
[2022-05-25 23:28:43] [config] guided-alignment: none
[2022-05-25 23:28:43] [config] guided-alignment-cost: mse
[2022-05-25 23:28:43] [config] guided-alignment-weight: 0.1
[2022-05-25 23:28:43] [config] ignore-model-config: false
[2022-05-25 23:28:43] [config] input-types:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] interpolate-env-vars: false
[2022-05-25 23:28:43] [config] keep-best: false
[2022-05-25 23:28:43] [config] label-smoothing: 0.1
[2022-05-25 23:28:43] [config] layer-normalization: false
[2022-05-25 23:28:43] [config] learn-rate: 0.0003
[2022-05-25 23:28:43] [config] lemma-dependency: ""
[2022-05-25 23:28:43] [config] lemma-dim-emb: 0
[2022-05-25 23:28:43] [config] log: ./model/train.log
[2022-05-25 23:28:43] [config] log-level: info
[2022-05-25 23:28:43] [config] log-time-zone: ""
[2022-05-25 23:28:43] [config] logical-epoch:
[2022-05-25 23:28:43] [config] - 1e
[2022-05-25 23:28:43] [config] - 0
[2022-05-25 23:28:43] [config] lr-decay: 0
[2022-05-25 23:28:43] [config] lr-decay-freq: 50000
[2022-05-25 23:28:43] [config] lr-decay-inv-sqrt:
[2022-05-25 23:28:43] [config] - 16000
[2022-05-25 23:28:43] [config] lr-decay-repeat-warmup: false
[2022-05-25 23:28:43] [config] lr-decay-reset-optimizer: false
[2022-05-25 23:28:43] [config] lr-decay-start:
[2022-05-25 23:28:43] [config] - 10
[2022-05-25 23:28:43] [config] - 1
[2022-05-25 23:28:43] [config] lr-decay-strategy: epoch+stalled
[2022-05-25 23:28:43] [config] lr-report: True
[2022-05-25 23:28:43] [config] lr-warmup: 16000
[2022-05-25 23:28:43] [config] lr-warmup-at-reload: false
[2022-05-25 23:28:43] [config] lr-warmup-cycle: false
[2022-05-25 23:28:43] [config] lr-warmup-start-rate: 0
[2022-05-25 23:28:43] [config] max-length: 100
[2022-05-25 23:28:43] [config] max-length-crop: false
[2022-05-25 23:28:43] [config] max-length-factor: 3
[2022-05-25 23:28:43] [config] maxi-batch: 512
[2022-05-25 23:28:43] [config] maxi-batch-sort: trg
[2022-05-25 23:28:43] [config] mini-batch: 64
[2022-05-25 23:28:43] [config] mini-batch-fit: True
[2022-05-25 23:28:43] [config] mini-batch-fit-step: 10
[2022-05-25 23:28:43] [config] mini-batch-round-up: true
[2022-05-25 23:28:43] [config] mini-batch-track-lr: false
[2022-05-25 23:28:43] [config] mini-batch-warmup: 0
[2022-05-25 23:28:43] [config] mini-batch-words: 0
[2022-05-25 23:28:43] [config] mini-batch-words-ref: 0
[2022-05-25 23:28:43] [config] model: ./model/model.npz
[2022-05-25 23:28:43] [config] multi-loss-type: sum
[2022-05-25 23:28:43] [config] n-best: false
[2022-05-25 23:28:43] [config] no-nccl: false
[2022-05-25 23:28:43] [config] no-reload: false
[2022-05-25 23:28:43] [config] no-restore-corpus: false
[2022-05-25 23:28:43] [config] normalize: 0.6
[2022-05-25 23:28:43] [config] normalize-gradient: false
[2022-05-25 23:28:43] [config] num-devices: 0
[2022-05-25 23:28:43] [config] optimizer: adam
[2022-05-25 23:28:43] [config] optimizer-delay: 1
[2022-05-25 23:28:43] [config] optimizer-params:
[2022-05-25 23:28:43] [config] - 0.9
[2022-05-25 23:28:43] [config] - 0.98
[2022-05-25 23:28:43] [config] - 1e-09
[2022-05-25 23:28:43] [config] output-omit-bias: false
[2022-05-25 23:28:43] [config] overwrite: false
[2022-05-25 23:28:43] [config] precision:
[2022-05-25 23:28:43] [config] - float32
[2022-05-25 23:28:43] [config] - float32
[2022-05-25 23:28:43] [config] pretrained-model: ""
[2022-05-25 23:28:43] [config] quantize-biases: false
[2022-05-25 23:28:43] [config] quantize-bits: 0
[2022-05-25 23:28:43] [config] quantize-log-based: false
[2022-05-25 23:28:43] [config] quantize-optimization-steps: 0
[2022-05-25 23:28:43] [config] quiet: false
[2022-05-25 23:28:43] [config] quiet-translation: True
[2022-05-25 23:28:43] [config] relative-paths: false
[2022-05-25 23:28:43] [config] right-left: false
[2022-05-25 23:28:43] [config] save-freq: 5000
[2022-05-25 23:28:43] [config] seed: 1111
[2022-05-25 23:28:43] [config] sentencepiece-alphas:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] sentencepiece-max-lines: 2000000
[2022-05-25 23:28:43] [config] sentencepiece-options: ""
[2022-05-25 23:28:43] [config] sharding: global
[2022-05-25 23:28:43] [config] shuffle: data
[2022-05-25 23:28:43] [config] shuffle-in-ram: false
[2022-05-25 23:28:43] [config] sigterm: save-and-exit
[2022-05-25 23:28:43] [config] skip: false
[2022-05-25 23:28:43] [config] sqlite: ""
[2022-05-25 23:28:43] [config] sqlite-drop: false
[2022-05-25 23:28:43] [config] sync-freq: 200u
[2022-05-25 23:28:43] [config] sync-sgd: True
[2022-05-25 23:28:43] [config] tempdir: /tmp
[2022-05-25 23:28:43] [config] tied-embeddings: false
[2022-05-25 23:28:43] [config] tied-embeddings-all: True
[2022-05-25 23:28:43] [config] tied-embeddings-src: false
[2022-05-25 23:28:43] [config] train-embedder-rank:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] train-sets:
[2022-05-25 23:28:43] [config] - ./resource/ja-bped-factored-train.fsv
[2022-05-25 23:28:43] [config] - ./resource/en-bped-factored-train.fsv
[2022-05-25 23:28:43] [config] transformer-aan-activation: swish
[2022-05-25 23:28:43] [config] transformer-aan-depth: 2
[2022-05-25 23:28:43] [config] transformer-aan-nogate: false
[2022-05-25 23:28:43] [config] transformer-decoder-autoreg: average-attention
[2022-05-25 23:28:43] [config] transformer-decoder-dim-ffn: 0
[2022-05-25 23:28:43] [config] transformer-decoder-ffn-depth: 0
[2022-05-25 23:28:43] [config] transformer-depth-scaling: false
[2022-05-25 23:28:43] [config] transformer-dim-aan: 2048
[2022-05-25 23:28:43] [config] transformer-dim-ffn: 2048
[2022-05-25 23:28:43] [config] transformer-dropout: 0.1
[2022-05-25 23:28:43] [config] transformer-dropout-attention: 0
[2022-05-25 23:28:43] [config] transformer-dropout-ffn: 0
[2022-05-25 23:28:43] [config] transformer-ffn-activation: swish
[2022-05-25 23:28:43] [config] transformer-ffn-depth: 2
[2022-05-25 23:28:43] [config] transformer-guided-alignment-layer: last
[2022-05-25 23:28:43] [config] transformer-heads: 8
[2022-05-25 23:28:43] [config] transformer-no-projection: false
[2022-05-25 23:28:43] [config] transformer-pool: false
[2022-05-25 23:28:43] [config] transformer-postprocess: dan
[2022-05-25 23:28:43] [config] transformer-postprocess-emb: d
[2022-05-25 23:28:43] [config] transformer-postprocess-top: ""
[2022-05-25 23:28:43] [config] transformer-preprocess: ""
[2022-05-25 23:28:43] [config] transformer-tied-layers:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] transformer-train-position-embeddings: false
[2022-05-25 23:28:43] [config] tsv: false
[2022-05-25 23:28:43] [config] tsv-fields: 0
[2022-05-25 23:28:43] [config] type: transformer
[2022-05-25 23:28:43] [config] ulr: false
[2022-05-25 23:28:43] [config] ulr-dim-emb: 0
[2022-05-25 23:28:43] [config] ulr-dropout: 0
[2022-05-25 23:28:43] [config] ulr-keys-vectors: ""
[2022-05-25 23:28:43] [config] ulr-query-vectors: ""
[2022-05-25 23:28:43] [config] ulr-softmax-temperature: 1
[2022-05-25 23:28:43] [config] ulr-trainable-transformation: false
[2022-05-25 23:28:43] [config] unlikelihood-loss: false
[2022-05-25 23:28:43] [config] valid-freq: 5000
[2022-05-25 23:28:43] [config] valid-log: ./model/valid.log
[2022-05-25 23:28:43] [config] valid-max-length: 1000
[2022-05-25 23:28:43] [config] valid-metrics:
[2022-05-25 23:28:43] [config] - ce-mean-words
[2022-05-25 23:28:43] [config] - perplexity
[2022-05-25 23:28:43] [config] - translation
[2022-05-25 23:28:43] [config] valid-mini-batch: 64
[2022-05-25 23:28:43] [config] valid-reset-stalled: false
[2022-05-25 23:28:43] [config] valid-script-args:
[2022-05-25 23:28:43] [config] []
[2022-05-25 23:28:43] [config] valid-script-path: ./je.validate.sh
[2022-05-25 23:28:43] [config] valid-sets:
[2022-05-25 23:28:43] [config] - ./resource/ja-bped-factored-dev.fsv
[2022-05-25 23:28:43] [config] - ./resource/en-bped-factored-dev.fsv
[2022-05-25 23:28:43] [config] valid-translation-output: ./model/je.valid.output
[2022-05-25 23:28:43] [config] vocabs:
[2022-05-25 23:28:43] [config] - ./resource/vocab.fsv
[2022-05-25 23:28:43] [config] - ./resource/vocab.fsv
[2022-05-25 23:28:43] [config] word-penalty: 0
[2022-05-25 23:28:43] [config] word-scores: false
[2022-05-25 23:28:43] [config] workspace: 12000
[2022-05-25 23:28:43] [config] Model is being created with Marian v1.11.0 f00d0621 2022-02-08 08:39:24 -0800
[2022-05-25 23:28:43] Using synchronous SGD
[2022-05-25 23:28:43] [comm] Compiled without MPI support. Running as a single process on dnnl
[2022-05-25 23:28:43] Synced seed 1111
[2022-05-25 23:28:43] [vocab] Loading vocab spec file ./resource/vocab.fsv
[2022-05-25 23:28:43] [vocab] Factor group '(lemma)' has 35995 members
[2022-05-25 23:28:43] [vocab] Factor group '|s' has 2 members
[2022-05-25 23:28:43] [vocab] Factored-embedding map read with total/unique of 71987/35997 factors from 35995 example words (in space of 107,988)
[2022-05-25 23:28:43] [vocab] Expanding all valid vocab entries out of 107,988...
[2022-05-25 23:28:44] [vocab] Completed, total 71987 valid combinations
[2022-05-25 23:28:44] [data] Setting vocabulary size for input 0 to 71,987
[2022-05-25 23:28:44] [vocab] Reusing existing vocabulary object in memory (vocab size 71987)
[2022-05-25 23:28:44] [data] Setting vocabulary size for input 1 to 71,987
[2022-05-25 23:28:44] [batching] Collecting statistics for batch fitting with step size 10
[2022-05-25 23:28:44] [memory] Extending reserved space to 12032 MB (device gpu2)
[2022-05-25 23:28:44] [memory] Extending reserved space to 12032 MB (device gpu3)
[2022-05-25 23:28:44] [comm] Using NCCL 2.8.3 for GPU communication
[2022-05-25 23:28:44] [comm] Using global sharding
[2022-05-25 23:28:45] [comm] NCCLCommunicators constructed successfully
[2022-05-25 23:28:45] [training] Using 2 GPUs
[2022-05-25 23:28:45] [embedding] Factored embeddings enabled
[2022-05-25 23:28:45] [embedding] Factored outputs enabled
[2022-05-25 23:28:45] [logits] Applying loss function for 2 factor(s)
[2022-05-25 23:28:45] [memory] Reserving 274 MB, device gpu2
[2022-05-25 23:28:45] [gpu] 16-bit TensorCores enabled for float32 matrix operations
[2022-05-25 23:28:45] [memory] Reserving 274 MB, device gpu2
[2022-05-25 23:29:09] [batching] Done. Typical MB size is 19,618 target words
[2022-05-25 23:29:09] [memory] Extending reserved space to 12032 MB (device gpu2)
[2022-05-25 23:29:09] [memory] Extending reserved space to 12032 MB (device gpu3)
[2022-05-25 23:29:10] [comm] Using NCCL 2.8.3 for GPU communication
[2022-05-25 23:29:10] [comm] Using global sharding
[2022-05-25 23:29:10] [comm] NCCLCommunicators constructed successfully
[2022-05-25 23:29:10] [training] Using 2 GPUs
[2022-05-25 23:29:10] Training started
[2022-05-25 23:29:10] [data] Shuffling data
[2022-05-25 23:29:24] [data] Done reading 13,439,256 sentences
[2022-05-25 23:30:20] [data] Done shuffling 13,439,256 sentences to temp files
[2022-05-25 23:30:24] WARNING: Unknown factor '簡' in '簡|s1'; mapping to ''
[2022-05-25 23:30:24] WARNING: Unknown factor '迅' in '迅|s1'; mapping to ''
[2022-05-25 23:30:24] WARNING: Unknown factor '殊' in '殊|s0'; mapping to ''
[2022-05-25 23:30:24] WARNING: Unknown factor '述' in '述|s1'; mapping to ''
[2022-05-25 23:30:24] WARNING: Unknown factor '往' in '往|s1'; mapping to ''
[2022-05-25 23:30:25] [training] Batches are processed as 1 process(es) x 2 devices/process
[2022-05-25 23:30:25] [memory] Reserving 274 MB, device gpu2
[2022-05-25 23:30:25] [memory] Reserving 274 MB, device gpu3
[2022-05-25 23:30:25] [memory] Reserving 274 MB, device gpu2
[2022-05-25 23:30:25] [memory] Reserving 274 MB, device gpu3
[2022-05-25 23:30:26] Parameter type float32, optimization type float32, casting types false
[2022-05-25 23:30:26] Parameter type float32, optimization type float32, casting types false
[2022-05-25 23:30:26] Allocating memory for general optimizer shards
[2022-05-25 23:30:26] [memory] Reserving 137 MB, device gpu2
[2022-05-25 23:30:26] [memory] Reserving 137 MB, device gpu3
[2022-05-25 23:30:26] Allocating memory for Adam-specific shards
[2022-05-25 23:30:26] [memory] Reserving 274 MB, device gpu2
[2022-05-25 23:30:26] [memory] Reserving 274 MB, device gpu3
[2022-05-25 23:33:18] Ep. 1 : Up. 500 : Sen. 129,622 : Cost 9.88837147 : Time 248.90s : 20946.26 words/s : gNorm 3.2351 : L.r. 9.3750e-06
[2022-05-25 23:36:11] Ep. 1 : Up. 1000 : Sen. 260,394 : Cost 7.20386457 : Time 173.27s : 30060.32 words/s : gNorm 2.6923 : L.r. 1.8750e-05
[2022-05-25 23:39:05] Ep. 1 : Up. 1500 : Sen. 390,530 : Cost 5.94917393 : Time 173.66s : 29775.11 words/s : gNorm 2.8178 : L.r. 2.8125e-05
[2022-05-25 23:42:00] Ep. 1 : Up. 2000 : Sen. 519,637 : Cost 5.51556253 : Time 175.20s : 29776.81 words/s : gNorm 2.4543 : L.r. 3.7500e-05
[2022-05-25 23:44:54] Ep. 1 : Up. 2500 : Sen. 649,601 : Cost 5.26803017 : Time 173.97s : 29840.71 words/s : gNorm 2.0656 : L.r. 4.6875e-05
[2022-05-25 23:47:48] Ep. 1 : Up. 3000 : Sen. 780,501 : Cost 5.09393597 : Time 173.91s : 30115.05 words/s : gNorm 2.0582 : L.r. 5.6250e-05
[2022-05-25 23:50:43] Ep. 1 : Up. 3500 : Sen. 911,616 : Cost 4.96280003 : Time 174.81s : 30083.03 words/s : gNorm 1.8415 : L.r. 6.5625e-05
[2022-05-25 23:53:36] Ep. 1 : Up. 4000 : Sen. 1,041,696 : Cost 4.83178806 : Time 172.60s : 30176.75 words/s : gNorm 1.6385 : L.r. 7.5000e-05
[2022-05-25 23:56:28] Ep. 1 : Up. 4500 : Sen. 1,172,393 : Cost 4.73234224 : Time 172.40s : 30255.05 words/s : gNorm 1.6617 : L.r. 8.4375e-05
[2022-05-25 23:59:22] Ep. 1 : Up. 5000 : Sen. 1,301,853 : Cost 4.64060259 : Time 173.66s : 30132.48 words/s : gNorm 1.6165 : L.r. 9.3750e-05
[2022-05-25 23:59:22] Saving model weights and runtime parameters to ./model/model.iter5000.npz
[2022-05-25 23:59:23] Saving model weights and runtime parameters to ./model/model.npz
[2022-05-25 23:59:27] Saving Adam parameters
[2022-05-25 23:59:29] [training] Saving training checkpoint to ./model/model.npz and ./model/model.npz.optimizer.npz
[2022-05-26 00:26:33] [valid] Ep. 1 : Up. 5000 : ce-mean-words : 3.51406 : new best
[2022-05-26 00:53:34] [valid] Ep. 1 : Up. 5000 : perplexity : 33.5842 : new best
tcmalloc: large alloc 12616466432 bytes == 0x7f77ca000000 @
tcmalloc: large alloc 12750684160 bytes == 0x7f6ec0000000 @
tcmalloc: large alloc 12884901888 bytes == 0x7f68c0000000 @
tcmalloc: large alloc 13019119616 bytes == 0x7f62b0000000 @
tcmalloc: large alloc 13153337344 bytes == 0x7f5c90000000 @
tcmalloc: large alloc 13287555072 bytes == 0x7f5660000000 @
tcmalloc: large alloc 13421772800 bytes == 0x7f5020000000 @
tcmalloc: large alloc 13555990528 bytes == 0x7f49d0000000 @
tcmalloc: large alloc 13690208256 bytes == 0x7f4370000000 @
tcmalloc: large alloc 13824425984 bytes == 0x7f3d00000000 @
tcmalloc: large alloc 13958643712 bytes == 0x7f3680000000 @
tcmalloc: large alloc 14092861440 bytes == 0x7f2ff0000000 @
tcmalloc: large alloc 14227079168 bytes == 0x7f2950000000 @
tcmalloc: large alloc 14361296896 bytes == 0x7f22a0000000 @
tcmalloc: large alloc 14495514624 bytes == 0x7f1be0000000 @
tcmalloc: large alloc 14629732352 bytes == 0x7f1510000000 @
tcmalloc: large alloc 14763950080 bytes == 0x7f0e30000000 @
tcmalloc: large alloc 12616466432 bytes == 0x7f77ca000000 @
tcmalloc: large alloc 12750684160 bytes == 0x7f6ec0000000 @
tcmalloc: large alloc 12884901888 bytes == 0x7f68c0000000 @
tcmalloc: large alloc 13019119616 bytes == 0x7f62b0000000 @
tcmalloc: large alloc 13153337344 bytes == 0x7f5c90000000 @
tcmalloc: large alloc 13287555072 bytes == 0x7f5660000000 @
tcmalloc: large alloc 13421772800 bytes == 0x7f5020000000 @
tcmalloc: large alloc 13555990528 bytes == 0x7f49d0000000 @
tcmalloc: large alloc 13690208256 bytes == 0x7f4370000000 @
tcmalloc: large alloc 13824425984 bytes == 0x7f3d00000000 @
```
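
Two things stand out in the log: the large allocations only begin after the ce-mean-words and perplexity validators have finished, i.e. apparently while the translation validation metric is decoding the dev set, and the requested sizes grow in exact 128 MiB steps, which looks like a single buffer being repeatedly grown. A quick check of the step size (simple arithmetic on the numbers above, not from the original log):

```bash
# consecutive tcmalloc allocation sizes from the log differ by a constant step
echo $(( 12750684160 - 12616466432 ))              # 134217728 bytes
echo $(( (13019119616 - 12884901888) / 1048576 ))  # 128 MiB per step
```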

emjotde commented 2 years ago

We just fixed a bug in the development branch concerning memory allocation during decoding; quite likely that is the culprit here. I am waiting for a few more positive test results, and then it will be released here. A day or two, I hope.
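
In the meantime, a minimal sketch for trying the development code, assuming the fix sits on the master branch of the marian-nmt/marian-dev repository (where Marian development happens) and that you build with the same options as your current binary:

```bash
# build the development version of Marian to test the decoding memory fix
git clone https://github.com/marian-nmt/marian-dev
cd marian-dev && mkdir build && cd build
cmake .. -DCOMPILE_CUDA=on   # match the CMake options of your existing build
make -j8
# then rerun training with the unchanged config:
# ./marian -c ./config_factor.yml
```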

Sarah-Callies commented 1 year ago

Has this bug been fixed? Thanks.