kyungheee / 2024-Samsung-AI-Challenge-Black-box-Optimization

2024 Samsung AI Challenge : Black-box Optimization

Code for the diffusion model #31

Open rhycha opened 2 months ago

rhycha commented 2 months ago

Are we implementing this with just a plain neural network, without any noising/denoising code? To implement that denoising paper, I think we need to look into how to structure the model and which neural network architecture is effective for a 1-D problem.
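For reference, the noising half of a DDPM-style model has a simple closed form. A minimal NumPy sketch, assuming a linear beta schedule; the names (`T`, `betas`, `alpha_bars`) and the step count are my own choices, not from this repo:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100                             # number of diffusion steps (assumption)
betas = np.linspace(1e-4, 0.02, T)  # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)     # cumulative product: \bar{alpha}_t

def noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form (no step-by-step loop needed)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    # A denoising network would be trained to predict eps from (xt, t).
    return xt, eps

x0 = rng.standard_normal((4, 11))   # batch of 11-dimensional samples
xt, eps = noise(x0, t=T - 1)        # at the last step, xt is mostly noise
```

For a 1-D problem like ours, the denoiser itself can be a small MLP conditioned on `t` rather than the U-Net used for images.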

rhycha commented 2 months ago

Also, please review the network dimensions. The data dimension is 11, so I don't think expanding it to 128 is the right call.

rhycha commented 2 months ago

Before the fix: [image]

rhycha commented 2 months ago
  1. Layer Sizes: The model's complexity should match the size of your input data. Since your input dimension is only 11, using relatively large layers like 128 and 64 units might be too complex, leading to overfitting, especially if your dataset is small. Consider reducing the layer sizes to something more proportional to the input dimension. For example, you might use 32 and 16 units instead of 128 and 64.
  2. Latent Dimension: The choice of latent dimension is crucial. If your input dimension is 11, you might not need a very large latent space. The latent dimension should typically be less than the input dimension unless there is a specific reason for it. For example, you could try latent dimensions like 2, 4, or 8 depending on the complexity of the underlying data patterns you wish to capture.
  3. Normalization: Ensure that your input data is normalized (e.g., between 0 and 1 or standardized) before feeding it into the model. This can help with training stability and performance.
  4. Activation Functions: The ReLU activation functions are standard choices, but you could experiment with other activation functions (e.g., LeakyReLU or Tanh) depending on your data characteristics.
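Putting points 1–3 together, a minimal NumPy sketch of an encoder sized proportionally to the 11-dimensional input (the 11 → 32 → 16 → 4 sizes follow the suggestion above and are illustrative, not the repo's actual layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Encoder sized to the input: 11 -> 32 -> 16 -> 4 (latent),
# instead of the oversized 128/64 hidden layers.
sizes = [11, 32, 16, 4]
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)  # He init for ReLU
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def encode(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]   # linear latent layer

# Standardize inputs (point 3) before feeding the model.
x = rng.standard_normal((8, 11))
x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
z = encode(x)                             # latent codes, shape (8, 4)
```

Swapping ReLU for LeakyReLU or Tanh (point 4) is a one-line change in `encode`.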
rhycha commented 2 months ago

After the fix: the loss got a tiny bit smaller. [image]

rhycha commented 2 months ago

As mentioned in another issue, the results are better without the skew. [image]