-
**Describe the bug**
I'm training nerfacto on my dataset for 300k iterations (since I want to keep the original data resolution, and it's 10 images at 6kx4k). I know the standard is training for 30k, but…
-
Hi, first thanks for the awesome package!
1. Regularization:
After some experiments with the ALS recommender on my data, it seems I'm not using the `regularization` hyperparameter in the right…
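Whatever the library's exact convention, in ALS the `regularization` value is the ridge coefficient λ in each alternating least-squares subproblem: larger values shrink the latent factors toward zero. A minimal NumPy sketch of a single user-factor update (function and variable names are hypothetical, not the library's API):

```python
import numpy as np

def als_user_update(Y, r_u, lam):
    """Solve one regularized ALS subproblem for a single user:
    (Y^T Y + lam * I) x_u = Y^T r_u, where Y holds the item factors,
    r_u is the user's ratings vector, and lam is the regularization."""
    k = Y.shape[1]
    A = Y.T @ Y + lam * np.eye(k)  # ridge term lam * I keeps A well-conditioned
    b = Y.T @ r_u
    return np.linalg.solve(A, b)
```

With `lam = 0` this is plain least squares; raising `lam` damps the factors, which is usually tuned against validation error rather than set a priori.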
-
The author wrote the following in the paper:
Additionally, we found that it was important to put very little or no weight decay (l2 regularization) on the depthwise filters since there are so few param…
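In practice that observation amounts to putting the depthwise filters in their own optimizer parameter group with zero (or tiny) weight decay. A framework-agnostic sketch of the grouping logic, assuming PyTorch-style named parameters and treating any 4-D filter with a single input channel per group (shape `(C, 1, kH, kW)`) as depthwise; the names and shapes below are hypothetical:

```python
def split_weight_decay(named_shapes, weight_decay=1e-4):
    """Partition parameter names into decay / no-decay groups, in the
    shape an optimizer's per-parameter-group API typically expects."""
    decay, no_decay = [], []
    for name, shape in named_shapes:
        # Depthwise conv filters: one input channel per group.
        is_depthwise = len(shape) == 4 and shape[1] == 1
        (no_decay if is_depthwise else decay).append(name)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},  # little/no decay here
    ]
```

In PyTorch these two dicts (with the actual tensors instead of names) would be passed directly to the optimizer constructor.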
-
Hello, author. First of all, thank you very much for sharing the code for this article. I learned a lot from reading your articles and code. I have only a little experience with neural networks, and I can't get …
-
Takes data from the Data Processor, creates inputs (e.g. embedded battles) and values, and trains models with the given parameters. Needs to scale to ESCHER. Blocked on the embedder and data processor.
Also add tests to understa…
-
I was trying to create a reproducible example of another issue I'm having, where JoinLayers() takes an indefinite amount of time (I killed it manually after ~12 hours).
The dataset I used is from [here](…
-
Tune the learning rate, regularization, input window size, number of epochs, and hidden layer size.
Write a program that iterates through the different combinations and saves the best model's parameters.
printStats can be co…
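The iteration over combinations is a plain grid search with `itertools.product`. A minimal sketch, where `train_and_eval` stands in for the (unspecified) training routine and the value ranges are placeholders, not recommendations:

```python
from itertools import product

# Hypothetical search space; replace with ranges that suit the model.
grid = {
    "learning_rate": [1e-3, 1e-2],
    "regularization": [0.0, 0.1],
    "window_size": [5, 10],
    "epochs": [10],
    "hidden_size": [32, 64],
}

def grid_search(train_and_eval, grid):
    """Try every combination; keep the parameters with the best score.
    train_and_eval(**params) must return a validation score (higher is better)."""
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_eval(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

Saving the best model would then just mean serializing `best_params` (and the corresponding trained model) once the loop finishes.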
-
Hi, may I ask how to optimize the nimble params to fit nimble meshes to mano meshes?
I took your advice from the previous [issue](https://github.com/korrawe/harp/issues/4) to use the [method](https://…
-
Regularization is kind of like putting a penalty on large values for the parameters of your model... so is that considered a penalty function, or am I confusing two completely different things?
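That intuition is right: L2 regularization literally adds a penalty term to the objective, so the regularized loss is a penalized loss. A toy sketch (names are illustrative):

```python
def l2_penalty(weights, lam):
    # The penalty grows with the squared magnitude of the parameters,
    # so large weights are discouraged in proportion to lam.
    return lam * sum(w * w for w in weights)

def total_loss(data_loss, weights, lam=0.1):
    # Regularized objective = data-fit loss + penalty term.
    return data_loss + l2_penalty(weights, lam)
```

The "penalty function" in the optimization literature is this added term; minimizing `total_loss` trades data fit against keeping the weights small.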
-
Problem Description:
The aim is to build a model that predicts the selling price of a car based on features such as age, fuel type, seller type, transmission, etc. This problem is critical for car de…
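Before any regressor can be fit, the categorical columns need a numeric encoding. A minimal sketch of turning one car record into a feature vector of age plus one-hot categories; the column names and category sets are assumptions, since the dataset itself isn't shown:

```python
# Hypothetical category sets for the categorical features.
FUEL = ["Petrol", "Diesel", "CNG"]
SELLER = ["Dealer", "Individual"]
TRANSMISSION = ["Manual", "Automatic"]

def encode_car(car, current_year=2024):
    """Convert one car record (a dict) into a numeric feature vector:
    the car's age followed by one-hot encodings of each categorical column."""
    vec = [float(current_year - car["year"])]  # age in years
    for value, cats in [(car["fuel"], FUEL),
                        (car["seller_type"], SELLER),
                        (car["transmission"], TRANSMISSION)]:
        vec += [1.0 if value == c else 0.0 for c in cats]
    return vec
```

These vectors can then be fed to any standard regressor (linear regression, gradient-boosted trees, etc.) to predict the selling price.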