PacktPublishing / Deep-Reinforcement-Learning-Hands-On

Hands-on Deep Reinforcement Learning, published by Packt
MIT License

Error in running the code in chapter 8 #79

Closed ghost closed 4 years ago

ghost commented 4 years ago

Hi,

How can I solve this error when training the model? Error.docx

I would be happy to solve the problem as soon as possible.

Thanks. Yaser

Shmuma commented 4 years ago

Hi!

Could you please post your error message as plain text here? A Word document is not very convenient to work with.

ImGonnaDans commented 4 years ago

Find where the error happens, probably where the variable "done" is set up. You need to replace ".uint8" with ".bool", because indexing with dtype torch.uint8 is deprecated in the PyTorch version you are using. You can use a search shortcut such as Ctrl+F to find it.

ghost commented 4 years ago

This is the error I face when running training_model.py:

../aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead.

(the same warning is repeated many times)

ghost commented 4 years ago

Find where the error happens, probably where the variable "done" is set up. You need to replace ".uint8" with ".bool".

Hi,

The point is that I could not find any place that uses .uint8.

Shmuma commented 4 years ago

You're using the wrong version of PyTorch. Please install the version stated in requirements.txt. This warning is produced by a newer PyTorch version, which shouldn't be used.

ImGonnaDans commented 4 years ago

Hi, I found where the warning may arise: in Deep-Reinforcement-Learning-Hands-On/Chapter08/lib/common.py. [screenshot]

ghost commented 4 years ago

Hello,

Thanks a lot for your quick reply.

Thanks, Yaser

ImGonnaDans commented 4 years ago

Yes, uint8 is an unsigned byte, so the two variables need to be replaced. But on line 95, you need to use "torch.BoolTensor". You're welcome.
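For context, here is a minimal sketch of the kind of change being discussed; the variable names (`dones`, `done_mask`, `next_state_values`) are illustrative, not necessarily the book's exact code:

```python
import torch

# Old style (deprecated in newer PyTorch): a uint8 mask built with torch.ByteTensor
# triggers the "indexing with dtype torch.uint8 is now deprecated" warning.

# New style: build the terminal-state mask with a boolean dtype instead.
dones = [False, False, True, False]
done_mask = torch.BoolTensor(dones)      # or: torch.tensor(dones, dtype=torch.bool)

next_state_values = torch.tensor([1.0, 2.0, 3.0, 4.0])
next_state_values[done_mask] = 0.0       # boolean indexing: zero out terminal states
```

With the PyTorch version pinned in requirements.txt the original ByteTensor code still runs; the warning only appears on newer releases.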

ghost commented 4 years ago

Hi,

Thanks a lot. It works properly. Just one question: how can I replace double DQN with DQN in this code (the purpose is to see the performance of another algorithm)? Is this possible based on this code?

Best, Yaser

ImGonnaDans commented 4 years ago

Hi,

Yes. You need to replace the code inside the red rectangle in the following picture. [screenshot]

ghost commented 4 years ago

Hi,

Thanks a lot for useful comments.

The last issue: how can I transfer all the code related to stock trading to a Jupyter Notebook and then do the testing part with data after training? (This is very important to me, as I am new to ML/RL algorithms.)

Best, Yaser

ghost commented 4 years ago

And then plot the graph of the reward function in the Jupyter Notebook.

  • As far as I understood, I would need to take the double DQN part and substitute it in lines 97-100.

ghost commented 4 years ago

Hi,

I think that to do the testing with all the figures appearing, one needs to run this command: ./run_model.py -d data/YNDX_160101_161231.csv -m saves/ff-YNDX16/mean_val-0.332.data -b 10 -n test

I did it, but it gives me this error:

WARN: Box bound precision lowered by casting to float32
  warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
Traceback (most recent call last):
  File "./run_model.py", line 35, in <module>
    net.load_state_dict(torch.load(args.model, map_location=lambda storage, loc: storage))
  File "/Users/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 419, in load
    f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'saves/ff-YNDX16/mean_val-0.332.data'

Best, Yaser

ImGonnaDans commented 4 years ago

Hi,

  1. You just need to use the DQN formulation in place of the DDQN code in the original code. The DQN formulation is $$loss = r + \gamma \max_{a'}{A(s',a';\theta)} - Q(s,a;\theta)$$

  2. If plotting code exists in the original code, you will get the plot; otherwise you need to write it yourself if you want it.

  3. The error is kind of obvious: there is no such file at the path you gave. You can check whether mean_val-0.332.data exists.

ypb_personal@163.com
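To make the difference between the two algorithms concrete, here is a hedged sketch of the two target computations; the function names, tensor shapes, and the separate online/target networks are illustrative, not the book's exact common.py code:

```python
import torch

def dqn_target(rewards, next_q_tgt, dones, gamma=0.99):
    """Plain DQN: the target network both selects and evaluates the next action."""
    next_vals = next_q_tgt.max(dim=1).values
    next_vals[dones] = 0.0               # no bootstrapping from terminal states
    return rewards + gamma * next_vals

def ddqn_target(rewards, next_q_online, next_q_tgt, dones, gamma=0.99):
    """Double DQN: the online network selects the action, the target net evaluates it."""
    best_actions = next_q_online.argmax(dim=1)
    next_vals = next_q_tgt.gather(1, best_actions.unsqueeze(1)).squeeze(1)
    next_vals[dones] = 0.0
    return rewards + gamma * next_vals
```

Swapping double DQN for plain DQN then amounts to replacing the action-selection step with a plain max over the target network's Q-values.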

ghost commented 4 years ago

Hi,

Last request: can you please tell me how to run run_model.py? It gives me this error: run_model.py: error: the following arguments are required: -d/--data, -m/--model, -n/--name

Yaser.

ImGonnaDans commented 4 years ago

You can run this command: python run_model.py -d ./data/YNDX_150101_151231.csv -m "the model's path" -n produced

As you can see, these parameters are required, and their meanings are written in the help. The other parameters are optional. [screenshot]
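For illustration, required options in argparse behave like the error above describes; the option names mirror the thread, but the real run_model.py may define them differently:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-d", "--data", required=True, help="CSV file with price data")
parser.add_argument("-m", "--model", required=True, help="path to the saved model file")
parser.add_argument("-n", "--name", required=True, help="name used for output files")
parser.add_argument("-b", "--bars", type=int, default=10, help="optional argument")

# Omitting -d/-m/-n makes parse_args() exit with the "arguments are required" error;
# providing all three succeeds:
args = parser.parse_args(["-d", "data/YNDX_150101_151231.csv",
                          "-m", "saves/model.data", "-n", "test"])
```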

ghost commented 4 years ago

Hi,

I could run run_model.py after training with this command: ./run_model.py -d data/YNDX_160101_161231.csv -m lib/models.py -b 10 -n test

but it gives me this error:

Traceback (most recent call last):
  File "./run_model.py", line 35, in <module>
    net.load_state_dict(torch.load(args.model, map_location=lambda storage, loc: storage))
  File "/Users/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 426, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/Users/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 603, in _load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: could not find MARK

Yaser.

ImGonnaDans commented 4 years ago

No, "lib/models.py" is not the model file; the model file was produced there. [screenshot]
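The UnpicklingError above comes from pointing torch.load at a Python source file: torch.load expects a checkpoint written by torch.save. A minimal sketch of the relationship (the Linear layer and the file name "checkpoint.data" are placeholders for the book's network and its save path):

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)  # placeholder for the book's DQN network

# Training produces a checkpoint (e.g. saves/ff-YNDX16/mean_val-*.data) via torch.save:
torch.save(net.state_dict(), "checkpoint.data")

# run_model.py must be given that checkpoint file, not lib/models.py:
state = torch.load("checkpoint.data", map_location=lambda storage, loc: storage)
net.load_state_dict(state)
```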

ghost commented 4 years ago

Thanks, but it does not work for me. Can we have a short Zoom/Skype meeting this afternoon? I need to present the results of this code soon.

Yaser.

ghost commented 4 years ago

Hi,

My email account is: yaser.kord@yahoo.com
My Skype account is: Yaser Faghan (the one in Lisbon)

Thanks in advance, Yaser

On 3 Jul 2020, at 08:54, ImGonnaDans notifications@github.com wrote:

I gave you my email address just now. You can send me your contact info there.

ghost commented 4 years ago

Hi again,

Can we talk on Skype on Monday?

I ran the code and it works for both training and testing, with the plot of the reward evolution (great!). The only thing I still need: I change the states a bit (for adversarial-attack purposes) and then need both the unperturbed and perturbed reward plots in the same graph. I do not know how to do that in the Notebook!
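One way to overlay the two reward curves is a plain matplotlib sketch like the following; the reward lists here are placeholder data standing in for whatever values each run records:

```python
import matplotlib
matplotlib.use("Agg")            # headless backend; drop this line inside a notebook
import matplotlib.pyplot as plt

steps = list(range(10))
rewards_clean = [0.1 * s for s in steps]        # placeholder: unperturbed run
rewards_perturbed = [0.08 * s for s in steps]   # placeholder: perturbed run

plt.plot(steps, rewards_clean, label="unperturbed")
plt.plot(steps, rewards_perturbed, label="perturbed")
plt.xlabel("step")
plt.ylabel("reward")
plt.legend()
plt.savefig("rewards.png")       # or plt.show() in a notebook
```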

Thanks in advance, Yaser

  • Can we talk tomorrow, more relaxed? Today I need to prepare the presentation of this code's results and graphs.

  • If yes, please let me know what time works for you (accounting for the time difference); I am in Lisbon (UK time).

ghost commented 4 years ago

Hi,

I trained the code and then ran it, but the graph is completely different from yours. Why? Can you please help me?

Yaser.

ImGonnaDans commented 4 years ago

You can post a new issue. Maybe the author cannot see this any more, because he closed this issue.
