Open Harry-maximum opened 4 months ago
Hi! After you load an agent with

```python
from mushroom_rl.core import Agent

path = "your_path"
agent = Agent.load(path)
```

you can access the Gaussian policy's mean network with `agent.policy._mu.network` and save it however you like. Instead of saving in .msh and reloading, you can access and save the mean network the same way during training.
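To make the .msh → .pt conversion above concrete, here is a minimal sketch. The `agent.policy._mu.network` attribute path is taken from the reply; since loading a real agent requires a saved .msh file, a small stand-in `torch.nn.Module` (made-up architecture) is used in its place so the snippet is self-contained.

```python
import torch
import torch.nn as nn

# In practice the network comes from a loaded MushroomRL agent:
#   from mushroom_rl.core import Agent
#   agent = Agent.load("your_path")
#   mu_network = agent.policy._mu.network
# A stand-in module (hypothetical architecture) is used here instead.
mu_network = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))

# Save the mean network's weights in PyTorch's native .pt format
torch.save(mu_network.state_dict(), "mu_network.pt")

# Later: rebuild the same architecture and load the weights back
restored = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
restored.load_state_dict(torch.load("mu_network.pt"))
```

Saving only the `state_dict` (rather than pickling the whole module) keeps the .pt file portable across code changes; you just need to rebuild the same architecture before calling `load_state_dict`.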
Dear Firas,

Thank you for your reply! I can now convert the .msh file to a .pt file and load it into the environment. I am working on the GR1T1 and GR1T2 from Fourier Intelligence. The expert trajectory looks very good and behaves much like a human, but after many epochs of training the robot can only walk with small steps. Could you take a look at my results and speculate on the reasons? The training hyperparameters are kept similar to those used for the Unitree H1 and G1.

Best regards, Xuanbo
run_expert.mp4 https://drive.google.com/file/d/1fZQ9lgWznhHIHovBI6wLWWiKHdieTxVl/view?usp=drive_web
walk_expert.mp4 https://drive.google.com/file/d/1lB7-GKh90Jc-f0qitpwoOX3Fo1pW_XsA/view?usp=drive_web
2024-07-12 09-59-18.mp4 https://drive.google.com/file/d/1D4ehH0u7WVGTSrY-u1hJEFd3LO2dHCR9/view?usp=drive_web
Firas Al-Hafez replied on July 11, 2024 (quoted above).
Hi! Recently I trained my own agent in a new robot env, but I found that the project uses MushroomRL, which saves agents in the .msh format. Is there any way to convert the .msh file to a .pt file?