Closed ppap36 closed 7 months ago
Oh, I found another bug: the mesh files referenced in your rrr_arm.urdf are missing! Where are they?
Hi there, thanks for reaching out, and sorry for the trouble. I was in a hurry while adding the link, and it might have ended up private-only. Here is the new link. One more thing: the notes aren't fully complete. While working on this repo I didn't expect it to get stars or issues raised by other people, as it was just a hobby project. A few commits, which contain the files you mentioned, are still to be pushed from my local repo.
I am not able to work on this project given my time constraints. However, I am considering completing the notes for everything I have implemented so far; let me know which part of the project you want to see covered first in the missing notes (I will work on it this weekend).
One last thing: if you want a detailed explanation of most of the optimal control material, you can refer to Optimal Control by Zachary Manchester, highly recommended. I am always open to discussing optimal control and robotics, so please feel free to tag me whenever you want.
thank you so much!!!!
Maybe you can complete the manipulator's mesh files. I have run your cartpole example and it works perfectly; now I want to check the manipulator's result.
I will update these files tonight and let you know once I'm done.
Hi @ppap36, the issue with the URDF is fixed; you can try running it on your side and let me know if you face any problems. After pulling updates from the remote, use git submodule update --init --recursive.
Also, to run the cart pole iLQR you have to make changes to gym's classic control files, as by default there are constraints on the cart pole's states that need to be removed for the nonlinear case. I will update those tomorrow.
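In case it helps while I push those changes: here is a toy sketch of the cart-pole update without gym's limits. The parameter values match gym's classic_control cart-pole defaults; the function name and structure are just illustrative, not the code in this repo.

```python
import numpy as np

# Sketch of cart-pole dynamics WITHOUT gym's termination limits
# (|x| > 2.4 m, |theta| > 12 deg). Parameters and equations follow
# gym's classic_control cart-pole source; Euler integration, tau = 0.02 s.
G, M_CART, M_POLE, LENGTH, TAU = 9.8, 1.0, 0.1, 0.5, 0.02
TOTAL_MASS = M_CART + M_POLE
POLE_MASS_LENGTH = M_POLE * LENGTH

def step(state, force):
    x, x_dot, theta, theta_dot = state
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    temp = (force + POLE_MASS_LENGTH * theta_dot**2 * sin_t) / TOTAL_MASS
    theta_acc = (G * sin_t - cos_t * temp) / (
        LENGTH * (4.0 / 3.0 - M_POLE * cos_t**2 / TOTAL_MASS))
    x_acc = temp - POLE_MASS_LENGTH * theta_acc * cos_t / TOTAL_MASS
    # No clipping, no "done" flag: iLQR needs the raw dynamics.
    return np.array([x + TAU * x_dot,
                     x_dot + TAU * x_acc,
                     theta + TAU * theta_dot,
                     theta_dot + TAU * theta_acc])

state = np.zeros(4)
peak = 0.0
for _ in range(100):                    # 2 s of a constant 10 N push
    state = step(state, 10.0)
    peak = max(peak, abs(state[2]))     # largest pole angle reached
```

The point is only that nothing clips the state or ends the episode, so an iLQR rollout can pass through regions the default env would terminate in.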
Great! I have tried it and the 3 Python files run well, except 3R_manipulator.py.
But I think it is not necessary to fix that, because it doesn't have much content.
My tutor recommended that I learn from your project; you are famous in our laboratory. Thank you so much!
That's great. The files you mentioned weren't supposed to be there; I removed all the unnecessary ones in my recent commits.
Sorry to bother you, my friend. Did you take notes when deploying iLQR on the manipulator? If so, could you do me a favor and share them? Thank you very much.
No, I haven't, but it's simple, so I will try to summarise it here.
I hope this helps. I'll try to write this up in the notes in more detail; in the meantime, you can refer to this blog on iLQR, which I think might help (here).
I almost got your code's idea. However, I suddenly noticed that the final error of the manipulator may be too large (check the picture below). The 5th component of the final state error is over 1; isn't that too large?
I have run into this problem many times. The best solution is to increase the cost on that particular state until you see good enough performance.
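As a toy example of what I mean (a hypothetical 6-state manipulator cost, not the exact weights in the repo):

```python
import numpy as np

# Hypothetical 6-state (3 joint angles + 3 joint velocities) quadratic
# cost; the actual weights in the repo may differ. The diagonal of Q
# is the tuning knob: raise the entry for the state that converges badly.
Q = np.diag([10.0, 10.0, 10.0, 1.0, 1.0, 1.0])  # state weights
R = 0.1 * np.eye(3)                              # control weights

# If the 5th state (index 4) has too much final error, raise its weight:
Q[4, 4] = 50.0

def stage_cost(x, x_goal, u):
    dx = x - x_goal
    return 0.5 * dx @ Q @ dx + 0.5 * u @ R @ u
```

With Q[4, 4] raised, the optimizer trades error in the other states for accuracy in the problematic one.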
Another issue with trajectory optimization which I faced a lot is the state deviating from the desired state at the end of the horizon, in both direct collocation and iLQR. I think it might be due to some ill-conditioning at the end. You can try removing the last few time steps' states and control inputs and see how it works.
When I increase the position cost, the position error gets smaller and the velocity error gets larger (obviously).
But why doesn't the velocity error show up in PyBullet? I would expect the arm to move too fast and overshoot the target position.
Yeah, it will overshoot the desired state with that velocity, but I am running the simulation only for the length of the planning horizon, which is why it's not visible in PyBullet.
I just checked the issue you mentioned. The problem is that there is no terminal cost incorporated in my code, which is general practice in optimal control. You can see in the plot that the velocities slowly deviate from the desired values over the last 100-200 time steps, which leads to this error.
These values of the Q and R matrices should solve the issue without a terminal cost:
As you can see, the second joint requires a lot more torque than the other two, so for the states to converge properly the torque cost should be low.
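For reference, this is the standard way a terminal cost enters the trajectory cost (a generic sketch, not the code currently in the repo):

```python
import numpy as np

def total_cost(xs, us, x_goal, Q, R, Qf):
    """Trajectory cost with a terminal term.

    xs: (N+1, n) states, us: (N, m) controls. Qf penalizes only the
    final state error (no control term), pulling the end of the
    trajectory onto the goal; it is typically much larger than Q.
    """
    J = 0.0
    for x, u in zip(xs[:-1], us):          # running (stage) cost
        dx = x - x_goal
        J += 0.5 * dx @ Q @ dx + 0.5 * u @ R @ u
    dxN = xs[-1] - x_goal
    J += 0.5 * dxN @ Qf @ dxN              # terminal cost
    return J
```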
Great! I knew you didn't have a terminal cost before, but I didn't know it was so important to have one.
By the way, I think your code is convenient for beginners, and I want to make a video about it on https://www.bilibili.com/ Maybe it can get you more stars. Are you interested?
Yeah sure, that sounds great!!
By the way, if you don't mind, can I ask what you are working on?
Great! I knew you didn't have a terminal cost before, but I didn't know it was so important to have one.
Yeah, the terminal cost doesn't include the control inputs, so it tries to minimize the state error as much as possible, which avoids situations like this.
OK! I am a college student, and my research field is robot arms.
Ohh I see, that's nice. In case you are using iLQR for MPC: my code is very slow for that. From my experience with ROS+MPC, you need a node frequency of at least 10-20 Hz for a TurtleBot, and probably higher for chaotic dynamics like a manipulator arm. I would suggest using direct collocation (with an initial guess) for trajectory optimization instead; it would be much faster.
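If it's useful, here is a minimal trapezoidal direct collocation sketch using scipy's SLSQP on a double integrator, just a toy stand-in for the manipulator to show the transcription (all names and parameter values here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Trapezoidal direct collocation on a double integrator (x' = v, v' = u):
# move from rest at position 0 to rest at position 1 with minimum effort.
N, T = 20, 2.0
h = T / N  # spacing between the N+1 knot points

def dyn(x, u):
    return np.array([x[1], u])  # state = [position, velocity]

def unpack(z):
    xs = z[: 2 * (N + 1)].reshape(N + 1, 2)  # states at knot points
    us = z[2 * (N + 1):]                     # controls at knot points
    return xs, us

def objective(z):                 # minimize integrated control effort
    _, us = unpack(z)
    return h * np.sum(us ** 2)

def defects(z):                   # trapezoidal dynamics constraints
    xs, us = unpack(z)
    d = [xs[k + 1] - xs[k]
         - 0.5 * h * (dyn(xs[k], us[k]) + dyn(xs[k + 1], us[k + 1]))
         for k in range(N)]
    return np.concatenate(d)

def boundary(z):                  # start at [0, 0], end at [1, 0]
    xs, _ = unpack(z)
    return np.concatenate([xs[0] - [0.0, 0.0], xs[-1] - [1.0, 0.0]])

z0 = np.zeros(2 * (N + 1) + (N + 1))
z0[: 2 * (N + 1)] = np.linspace([0, 0], [1, 0], N + 1).ravel()  # initial guess

res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}],
               options={"maxiter": 300})
xs_opt, us_opt = unpack(res.x)
```

For the real manipulator you would swap dyn for the arm dynamics and add torque bounds; the initial guess matters a lot more there than in this toy problem.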
By the way, my dear friend, when you encounter problems with robotics algorithms, where do you usually look for forums or websites, apart from GitHub? Do you join any groups to ask questions? I am a beginner with OCS2 (https://leggedrobotics.github.io/ocs2/overview.html), which is used for solving optimal control problems, and its developers do not provide a complete tutorial (you know, just some robot examples; they don't explain how to use the functions or the structure of the code). I can only develop with it by mimicking the example code.
Unfortunately, material on robotics algorithms is pretty scattered, especially for optimal control. I previously joined a Discord server for control theory discussion, but I haven't used it much; you can give it a try. Apart from that, there is the Drake library, which has very good tutorials in Python (Prof. Russ sometimes replies to clear up doubts). I don't think there are any forums or websites for optimal control that I specifically refer to.
And yeah, it's kind of true: all the optimal control libraries I have seen so far are pretty bad at maintaining documentation on how to use them.
Hey bro! I have made a video about your program on bilibili. I hope you will get more stars! https://www.bilibili.com/video/BV1nK421t7Jg/?vd_source=5d5065c07632b817f1765a8d48bbdb4f
Thank you, it means a lot. I will put the video link in the readme.
Problem description: When I open the notes in your References section, it shows that the note is missing or that I have no permission.
Can anyone help me? Thanks