youxch / Inverse-design-of-patch-antennas

This repository hosts a simple demonstration of a deep learning approach for the inverse design of patch antennas. The goal is to explore energy-efficient designs and to significantly reduce simulation cost compared to conventional methods.
MIT License

While running "predict.py" a memory leak occurred #2

Open hair-an opened 5 months ago

hair-an commented 5 months ago

Before running 'predict.py' I had already run 'train.py', which worked fine. But when running 'predict.py', the terminal kept printing "1/1 [==============================] - 0s 10ms/step" until my memory usage reached 63.8/64 GB, and it still wouldn't stop!

youxch commented 5 months ago

Thank you for your question. The memory overflow issue arises from the need to predict over 2 million samples. If we predict these samples sequentially on a CPU without using a GPU, it can indeed lead to memory overflow.

We have addressed this in the new version of the code by predicting only 100 samples at a time and then writing out the results, which resolves the memory overflow issue. In the future we will also guide users on how to use GPUs for parallel prediction to further accelerate the process; this is currently under development.
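The batching idea described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the model, the `samples` array, and the output filename are all hypothetical stand-ins. The key point is writing each chunk of predictions to disk immediately instead of accumulating 2 million results in RAM.

```python
import numpy as np
from tensorflow import keras

# Hypothetical stand-ins for the trained model and the candidate samples;
# names and shapes here are illustrative, not the repository's actual API.
model = keras.Sequential([keras.layers.Input(shape=(3,)), keras.layers.Dense(4)])
samples = np.random.rand(1000, 3).astype("float32")

BATCH = 100  # predict 100 samples at a time, then flush the results

with open("predictions.txt", "w") as f:
    for start in range(0, len(samples), BATCH):
        batch = samples[start:start + BATCH]
        # predict_on_batch avoids per-call overhead inside the loop
        preds = model.predict_on_batch(batch)
        np.savetxt(f, preds)  # write out now instead of holding everything in memory
```

Because each batch is written and discarded, peak memory stays roughly constant regardless of how many total samples are swept.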

If you have any more questions, please feel free to ask, and I'd be happy to help you with your inquiries.

youxch commented 5 months ago

For GPU usage, employ predict-gpu.py to significantly enhance the prediction output rate, offering a 500x speedup compared to the CPU version.
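Before relying on predict-gpu.py, it may be worth confirming that TensorFlow can actually see a CUDA device; if the list below is empty, prediction silently falls back to the CPU. This is a standard TensorFlow check, not code from this repository.

```python
import tensorflow as tf

# An empty list here means TensorFlow found no usable CUDA device,
# and predict-gpu.py would effectively run on the CPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```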

hair-an commented 5 months ago

Thank you for your reply. I have seen the latest release and hope it works. Also, I'm curious why you used TensorFlow instead of PyTorch, since PyTorch has stronger CUDA support than TensorFlow, which can make it easier to install across different OS versions.


hair-an commented 5 months ago

So, after some debugging it works, but I left "predict.py" running overnight on an RTX A5000 and found that the CUDA cores are barely utilized while the GPU memory is full, and as the run goes on, the inference time per step increases from 20 ms to 200 ms. Is this normal? (screenshot attached)

youxch commented 5 months ago

Thank you for your feedback. Regarding your first question (why not use PyTorch), it’s because the Keras code logic is more beginner-friendly. We will launch a PyTorch version in the future, so please stay tuned for updates.

For your second question, I have two suggestions: 1) You can use the predict-gpu code, which is more efficient than using the CPU. 2) In our predict.py, we output a total of 2 million samples, which is a very large number (and not actually necessary). You can reduce the number of iterations in the for loop to decrease the number of prediction samples. Hopefully, this will address your issue.
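One common cause of the per-step slowdown described above (20 ms growing to 200 ms) is calling `model.predict()` once per loop iteration, which rebuilds internal state on each call and lets memory creep upward. A lighter-weight pattern is to call the model directly on each batch, which reuses a single compiled graph. The model and batch below are hypothetical placeholders, shown only to illustrate the pattern.

```python
import numpy as np
from tensorflow import keras

# Illustrative model; the real network in the repository will differ.
model = keras.Sequential([keras.layers.Input(shape=(3,)), keras.layers.Dense(4)])

# Calling model.predict() inside a tight Python loop re-enters the full
# predict pipeline every iteration; invoking the model directly keeps one
# compiled graph, so per-step time stays flat across iterations.
for _ in range(5):
    batch = np.random.rand(100, 3).astype("float32")
    preds = model(batch, training=False).numpy()
```

Reducing the number of loop iterations, as suggested above, shrinks the total work; this pattern additionally keeps each remaining step fast.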

hair-an commented 5 months ago

What is the correspondence between 'ScaleSlot' and 'Uw3' in the code and the variables in the paper? I'd appreciate it if you could explain how the parameters in the output txt files map to those in the paper. Thanks a lot!

youxch commented 5 months ago

Thank you for your question. You can refer to the new schematic diagram I made below. Figure 1 is the original structure by Professor K. L. Wong. Figure 2 is the structural parameters extracted from the original structure (which correspond exactly to the code). Figure 3 is the annotation of the meaning of each variable (which also corresponds exactly to the code). I hope this helps you. Please feel free to ask any questions.

If you have any other questions, please create a new issue so that others can see and address it. Thank you.

[Figure 1: original structure by Professor K. L. Wong]

[Figure 2: structural parameters extracted from the original structure]

[Figure 3: annotation of the meaning of each variable]