-
- ### How long do I need to train on a custom dataset?
- ### How do I know if the training is complete?
- ### How to get the result/output for a custom dataset?
- ### How to calculate the FID?
…
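Of these questions, the FID computation is concrete enough to sketch. Assuming you have already extracted Inception activation statistics (mean vector and covariance matrix) for the real and generated image sets, a minimal NumPy version of the Fréchet distance could look like the following; the helper name `fid` and the eigenvalue-based trace computation are illustrative, not taken from any particular library:

```python
import numpy as np

def fid(mu1, sigma1, mu2, sigma2):
    # Frechet Inception Distance between two Gaussians fitted to
    # Inception activations:
    #   FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 @ S2)^(1/2))
    diff = mu1 - mu2
    # Tr((S1 @ S2)^(1/2)) equals the sum of square roots of the
    # eigenvalues of S1 @ S2 (clipped to guard against tiny
    # negative values from numerical noise).
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_sqrt)
```

Identical statistics give an FID of zero, and the score grows as the two distributions drift apart.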
-
# 0.3.0
Greetings! This issue aims to provide a roadmap for the `neuronika` 0.1.1 release.
## To Add
- [x] Kullback-Leibler divergence loss function.
- [x] Learning Rate schedulers…
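As an aside, the Kullback-Leibler divergence loss on the checklist has a simple closed form. A hedged NumPy sketch of the math (neuronika itself is a Rust library, so the helper name `kl_div_loss` here is purely illustrative and not its API) is:

```python
import numpy as np

def kl_div_loss(p, q, eps=1e-12):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i), averaged over the batch.
    # p and q hold discrete probability distributions, shape (batch, k);
    # values are clipped away from zero to keep the log finite.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))
```

The divergence is zero when the two distributions coincide and strictly positive otherwise.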
frjnn updated 2 years ago
-
### 🚀 The feature, motivation and pitch
LayerNorm is starting to be applied to image data on a per-channel basis (e.g. in the ConvNeXt model).
`torch.nn.LayerNorm` supports normalization only over the last se…
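The per-channel variant being requested can be expressed directly. A minimal NumPy sketch for channels-first `(N, C, H, W)` data, normalizing over the channel axis only (the helper name `channel_layernorm` is illustrative, not a PyTorch API, and the learnable affine parameters are omitted):

```python
import numpy as np

def channel_layernorm(x, eps=1e-5):
    # x has shape (N, C, H, W); statistics are computed over the
    # channel axis alone, as ConvNeXt-style channels-first
    # LayerNorm does, rather than over the trailing dimensions.
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

After this transform, each spatial position has (approximately) zero mean and unit variance across its channels.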
-
The benchmarks this time around are interesting, with some fairly clear trends emerging for the near future.
### Looking Back
First, some appreciation for where things are:
- 9 months ago, we were ~3…
-
### Issue type
Documentation Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.14.0
### Custom code
Yes
### OS platform and distr…
-
Hey Andre, I had a question regarding model conversion. I am trying to use a MobileNet-trained model, which is already based on TensorFlow. Do I need to perform model conversion for that? I don't think I s…
-
Some code to start working through the integration. This is ugly and very hacky. Of course, the ultimate objective is to write a full-featured `htmlwidget` that is usable by an R user with no knowle…
-
-
### Describe the bug
When using `AsyncInferenceClient.chat_completion(stream=True)`, the client expects the endpoint to emit a `[DONE]\n` token at the end of the stream (which TGI does). However, due …
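For context, the `[DONE]` sentinel comes from the OpenAI-style server-sent-events convention for chat streams. A minimal sketch of a client-side parser that relies on it (plain Python, not the actual `huggingface_hub` implementation; `parse_sse_chat_stream` is a hypothetical helper) looks like:

```python
import json

def parse_sse_chat_stream(lines):
    """Yield parsed JSON chunks from an OpenAI-style SSE stream.

    Returns cleanly when the server emits the `[DONE]` sentinel.
    A client that *requires* the sentinel, as described in the bug
    report, would hang or error if the stream ends without it.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return  # end-of-stream sentinel: stop iterating
        yield json.loads(payload)
```

Anything sent after `[DONE]` is ignored, which is why a missing sentinel changes the client's termination behavior.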
-
Using the parameter `run_eagerly=True` leads to very long training times. Is there any way to use this loss function in TensorFlow's native graph mode?
…
roggf updated 5 months ago