-
Dear beautiful developers of Woltka,
Please note that using [kegg_query.py](https://github.com/qiyunzhu/utils#kegg_querypy) returns an empty mapping file for "ko-to-ec.txt".
This might be because th…
-
Hello!
I'm facing significant performance degradation in the KafkaEngine consumer after upgrading from 23.2.2 to 23.9.5.29.
We have a large Kafka topic with 18 partitions and about 25 billion messages per day.
…
-
### What version of Bun is running?
1.1.9+bb13798d9
### What platform is your computer?
Microsoft Windows NT 10.0.19044.0 x64
### What steps can reproduce the bug?
Use Bun.js version 1.…
-
During forward inference with an embedding model, the same text produces embeddings that differ in the last few decimal places depending on the batch size. What causes this? In theory, identical inputs should produce exactly identical outputs, shouldn't they?
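One likely explanation (an assumption here, not confirmed for this particular model): floating-point addition is not associative, and different batch sizes can trigger different reduction orders or kernels, so the low-order digits change even though the computation is mathematically identical. A minimal NumPy sketch of the effect:

```python
import numpy as np

# Floating-point addition is not associative: grouping changes the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False: 0.6000000000000001 vs 0.6

# The same effect in a reduction: summing identical float32 values in a
# different order (one-at-a-time vs block-wise, as a batched kernel might)
# agrees only to several decimal places, not bit-for-bit.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000).astype(np.float32)
seq = np.float32(0.0)
for v in x:
    seq += v                                    # sequential accumulation
blocked = x.reshape(10, 100).sum(axis=1).sum()  # block-wise accumulation
print(float(seq), float(blocked))  # very close, but may differ in low digits
```

If the differences are around the float32 epsilon scale (1e-6 to 1e-7 relative), this kind of non-determinism is the usual cause, and comparing embeddings with a small tolerance is the common remedy.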
The example code I used:
```
sentences_1 = ["样例数据"]
sentences_2 = ["样例数据", "样例数据", "样例数据", "样例数据"]
model_path='…
-
I imagine it can be very frustrating for the user if the cache does not work as expected (inconsistencies, corrupted output, performance degradation). I think we should always be able to deactivate th…
adpi2 updated
1 month ago
-
ARM Cortex-M3 is little-endian, so with NATIVE_LITTLE_ENDIAN I expected performance to increase. However, it instead decreases.
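For context, a sketch (in Python, only to illustrate; the real code is C) of the two load strategies such a macro typically toggles between: the portable byte-by-byte path versus the direct native little-endian load. The helper names below are illustrative, not from the actual codebase:

```python
import struct

def load32_le_portable(buf):
    # Byte-by-byte assembly: correct on a host of any endianness.
    return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24)

def load32_le_native(buf):
    # Direct little-endian load: what a NATIVE_LITTLE_ENDIAN-style path
    # compiles down to (a plain word load / memcpy) on little-endian hosts.
    return struct.unpack('<I', buf)[0]

buf = bytes([0x78, 0x56, 0x34, 0x12])
assert load32_le_portable(buf) == load32_le_native(buf) == 0x12345678
```

On a little-endian core the native path should be at least as fast, so one plausible suspect for the slowdown (speculation, not verified on this hardware) is how the compiler handles the resulting unaligned word accesses rather than the byte order itself.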
For 1M cycles of hashing a 32-byte message on Teensy 3 (ARM Cortex-M…
-
The question that troubles me is whether combining several models for separation degrades the output sound, for example combining Demucs with mdx or others.
Does such com…
-
For grouping ReactionRules in a Model, the Model class should provide an interface to tag them, something like the following:
``` python
with reaction_rules():
A > B | 1 | _tag('upstream')
…
kaizu updated
7 years ago
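One possible shape for the proposed interface, sketched as a standalone context manager (the `_tag` and `Rule` names here are hypothetical, not the project's API; the snippet above uses a pipe-combinator syntax, and this sketch only shows how tags could be captured at rule-creation time):

```python
from contextlib import contextmanager

_current_tags = []  # tags active in the enclosing with-blocks

class Rule:
    def __init__(self, expr):
        self.expr = expr
        self.tags = list(_current_tags)  # capture active tags at creation

@contextmanager
def _tag(label):
    # Push a tag for the duration of the block so every rule created
    # inside it is grouped under that label.
    _current_tags.append(label)
    try:
        yield
    finally:
        _current_tags.pop()

with _tag('upstream'):
    r1 = Rule('A > B')
r2 = Rule('B > C')

print(r1.tags)  # ['upstream']
print(r2.tags)  # []
```

A Model could then group its ReactionRules by filtering on `tags`, without changing how untagged rules are declared.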
-
First of all, I appreciate your amazing work.
In your research paper, model inference via TensorRT is much faster than via torch.
I wonder if you evaluated models via both torch and Tenso…
-
I tried inference using the BigDL-LLM optimized Qwen-14B-Chat model on Aliyun (64 cores, Intel(R) Xeon(R) Platinum 8369B, 256 GB). For a simple test case (one simple question), there was a big performa…