yaswanth1701 / Trajectory-Optimization

This repo contains trajectory optimisation for some basic systems (e.g. pendulum, cartpole, quadrotor, and manipulator). Algorithms such as finite-horizon LQR, iLQR, and box DDP will be implemented.

HELP! Cannot get into your note #1

Closed ppap36 closed 7 months ago

ppap36 commented 7 months ago

Problem description: when I open the note linked in the Reference section, it says the note is missing or that I don't have permission to view it.

Can anyone help me? Thanks!

ppap36 commented 7 months ago

Oh, I found another bug: the mesh files referenced in your rrr_arm.urdf are missing! Where are they?

yaswanth1701 commented 7 months ago

Hi there, thanks for reaching out, and sorry for the trouble. I was in a hurry when I added the link, and it may have ended up private-only. Here is the new link. One thing: the notes aren't fully complete. While working on this repo I didn't expect it to get stars or issues from other people, as it was just a hobby project. A few commits, which contain the files you mentioned, are still left to be pushed from my local repo.

yaswanth1701 commented 7 months ago

I am not able to work on this project much given my time constraints. However, I am considering completing the notes for everything I have implemented so far. Let me know which missing part of the notes you want to see first (I will work on it this weekend).

yaswanth1701 commented 7 months ago

One last thing: if you want a detailed explanation of most of the optimal control material, you can refer to the Optimal Control lectures by Zachary Manchester (highly recommended). I am always open to discussion about optimal control and robotics, so please feel free to tag me whenever you want.

ppap36 commented 7 months ago

thank you so much!!!!

ppap36 commented 7 months ago

Maybe you can complete the manipulator's mesh file? I ran your cartpole example and it worked perfectly, and now I want to check the manipulator's results.

yaswanth1701 commented 7 months ago

I will update these files tonight and let you know once I'm done.

yaswanth1701 commented 7 months ago

Hi @ppap36, the issue with the URDF is fixed. You can try running it on your side and let me know if you face any problems. After pulling updates from the remote, run `git submodule update --init --recursive`. Also, to run the cartpole iLQR you have to modify the classic-control gym library files: by default there are constraints on the cartpole states which need to be removed for the nonlinear stuff. I will update those tomorrow.

ppap36 commented 7 months ago

Great! I have tried it and the three Python files run well, except `3R_manipulator.py`, but I don't think that needs fixing since it doesn't have much content.

My tutor recommended that I study your project; you are famous in our laboratory. Thank you so much!

yaswanth1701 commented 7 months ago

That's great! The files you mentioned weren't supposed to be there; I removed all the unnecessary ones in my recent commits.

ppap36 commented 7 months ago

Sorry to bother you, my friend. Did you take notes when deploying iLQR on the manipulator? If so, could you do me a favor and share them? Thank you very much.

yaswanth1701 commented 7 months ago

No, I haven't, but it's simple enough that I'll try to summarise it here.

I hope this helps. I'll try to write this up in the notes in more detail, but in the meantime you can refer to this blog on iLQR, which I think might help (here).
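To make the summary concrete, here is a minimal, self-contained sketch of the iLQR loop (backward Riccati-style pass, then a forward rollout with a backtracking line search) on a torque-driven pendulum rather than the 3R arm. The dynamics, cost weights, and all names here are illustrative placeholders, not taken from the repo:

```python
import numpy as np

# Hypothetical setup: state x = [theta, theta_dot], control u = [torque],
# swing the pendulum from hanging down (0) to upright (pi).
dt, g, l, N = 0.05, 9.81, 1.0, 100
Q = np.diag([10.0, 1.0])          # running state-cost weights
R = np.array([[0.1]])             # control-cost weight
x_goal = np.array([np.pi, 0.0])

def f(x, u):                      # one Euler step of the dynamics
    th, thd = x
    thdd = -(g / l) * np.sin(th) + u[0]
    return np.array([th + dt * thd, thd + dt * thdd])

def jacobians(x, u):              # analytic A = df/dx, B = df/du
    A = np.array([[1.0, dt],
                  [-dt * (g / l) * np.cos(x[0]), 1.0]])
    B = np.array([[0.0], [dt]])
    return A, B

def total_cost(X, U):
    c = sum(0.5 * (x - x_goal) @ Q @ (x - x_goal) + 0.5 * u @ R @ u
            for x, u in zip(X[:-1], U))
    e = X[-1] - x_goal
    return c + 0.5 * e @ Q @ e

def rollout(x0, U):
    X = [x0]
    for u in U:
        X.append(f(X[-1], u))
    return X

def ilqr(x0, U, iters=50):
    X = rollout(x0, U)
    for _ in range(iters):
        # backward pass: quadratic value-function recursion
        Vx, Vxx = Q @ (X[-1] - x_goal), Q.copy()
        ks, Ks = [], []
        for t in reversed(range(N)):
            A, B = jacobians(X[t], U[t])
            Qx  = Q @ (X[t] - x_goal) + A.T @ Vx
            Qu  = R @ U[t] + B.T @ Vx
            Qxx = Q + A.T @ Vxx @ A
            Quu = R + B.T @ Vxx @ B
            Qux = B.T @ Vxx @ A
            k = -np.linalg.solve(Quu, Qu)     # feedforward
            K = -np.linalg.solve(Quu, Qux)    # feedback gain
            ks.append(k); Ks.append(K)
            Vx  = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
            Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
        ks.reverse(); Ks.reverse()
        # forward pass: backtracking line search on the feedforward term
        alpha, J = 1.0, total_cost(X, U)
        for _ in range(10):
            Xn, Un = [x0], []
            for t in range(N):
                u = U[t] + alpha * ks[t] + Ks[t] @ (Xn[t] - X[t])
                Un.append(u)
                Xn.append(f(Xn[t], u))
            if total_cost(Xn, Un) < J:
                X, U = Xn, Un
                break
            alpha *= 0.5
    return X, U

X, U = ilqr(np.zeros(2), [np.zeros(1) for _ in range(N)])
```

For the manipulator the structure is the same; only `f`, the Jacobians, and the dimensions change.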

ppap36 commented 7 months ago

I almost understand your code's idea. However, I noticed that the final error of the manipulator may be too large (see the picture below). Screenshot from 2024-03-06 17-14-01. The 5th element of the final state error is over 1; is that too large?

yaswanth1701 commented 7 months ago

I have faced this problem many times while working on this. The best solution is to increase the cost weight for that particular state until you see good enough performance.
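In code, that tuning step just means scaling up one diagonal entry of the state-cost matrix and re-running the solver. The matrices and state ordering below are hypothetical placeholders for a 3R arm, not the repo's actual values:

```python
import numpy as np

# Illustrative weights for state = [q1, q2, q3, dq1, dq2, dq3]:
# positions weighted heavily, velocities lightly, torques cheap.
Q = np.diag([100.0, 100.0, 100.0, 1.0, 1.0, 1.0])
R = 0.01 * np.eye(3)

# If the 5th state (index 4, here the second joint's velocity) has too
# much final error, raise its weight and re-run the optimizer:
Q[4, 4] *= 10.0
```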

yaswanth1701 commented 7 months ago

Another issue with trajectory optimization that I ran into a lot is the state deviating from the desired state at the end of the horizon, in both the direct collocation method and iLQR. I think it might be due to some ill-conditioning at the end. You can try removing the last few time steps' states and control inputs and see how it works.

ppap36 commented 7 months ago

When I increase the cost on position, the position error gets smaller and the velocity error gets larger (obviously).

But why doesn't the velocity error show up in PyBullet? I would expect the arm to be moving too fast and overshoot the target position.

yaswanth1701 commented 7 months ago

Yeah, it will overshoot the desired state with that velocity, but I am running the simulation only for the planning-horizon length, which is why it's not visible in PyBullet.

yaswanth1701 commented 7 months ago

I just checked the issue you mentioned. The problem is that my code does not include a terminal cost, which is general practice in optimal control. You can see in the plot that the velocities slowly deviate from the desired values over the last 100-200 time steps, which leads to this error.
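The fix is one extra term in the objective: a separate terminal weight matrix (often written Q_f, typically much larger than the running Q) applied only to the final state, with no control input in it. A minimal sketch, with placeholder names and values:

```python
import numpy as np

def trajectory_cost(X, U, x_goal, Q, R, Qf):
    """Quadratic running cost plus a terminal cost on the final state.

    X: list of N+1 states, U: list of N controls. Qf penalizes
    end-of-horizon drift; note it contains no control term.
    """
    run = sum(0.5 * (x - x_goal) @ Q @ (x - x_goal) + 0.5 * u @ R @ u
              for x, u in zip(X[:-1], U))
    e = X[-1] - x_goal
    return run + 0.5 * e @ Qf @ e   # terminal term: states only

# Illustrative usage on a 2-state, 1-control toy trajectory:
x_goal = np.array([1.0, 0.0])
Q, R, Qf = np.eye(2), np.eye(1), 100.0 * np.eye(2)
X = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
U = [np.array([0.5])]
J = trajectory_cost(X, U, x_goal, Q, R, Qf)
```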

yaswanth1701 commented 7 months ago

(Attachments: Screenshot from 2024-03-06 15-39-00, Figure_2)

yaswanth1701 commented 7 months ago

These values of the Q and R matrices should solve the issue even without a terminal cost.

yaswanth1701 commented 7 months ago

As you can see, the second joint requires much more torque than the other two, so for the states to converge properly the cost on torque should be low.

ppap36 commented 7 months ago

Great! I knew you didn't have a terminal cost before, but I didn't know having one is so important.

ppap36 commented 7 months ago

By the way, I think your code is convenient for beginners, and I want to make a video about it on https://www.bilibili.com/. Maybe it will get you more stars. Are you interested?

yaswanth1701 commented 7 months ago

Yeah sure, that sounds great!!

By the way, if you don't mind, can I ask what you are working on?

yaswanth1701 commented 7 months ago

> Great! I knew you didn't have a terminal cost before, but I didn't know having one is so important.

Yeah, the terminal cost doesn't include the control inputs, so it tries to minimize the state error as much as possible, which avoids situations like these.

ppap36 commented 7 months ago

OK! I am a college student, and my research field is robot arms.

yaswanth1701 commented 7 months ago

Ohh I see, that's nice. In case you plan to use iLQR for MPC, my code is very slow for that. From my experience with ROS + MPC, you would need a node frequency of at least 10-20 Hz for something like a TurtleBot, and probably higher for fast dynamics like a manipulator arm. I would suggest using direct collocation (with an initial guess) for trajectory optimization instead; it would be much faster.
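For reference, direct collocation transcribes the whole trajectory into one nonlinear program: states and controls at every knot point become decision variables, and the dynamics enter as equality ("defect") constraints. A minimal sketch using trapezoidal collocation on a double integrator with SciPy's SLSQP solver; every name and value here is illustrative, not from the repo:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: move a double integrator (x = [pos, vel], u = accel)
# from rest at 0 to rest at 1 in T seconds, minimizing control effort.
N, T = 20, 2.0
h = T / (N - 1)
x0, xf = np.array([0.0, 0.0]), np.array([1.0, 0.0])

def dyn(x, u):
    return np.array([x[1], u])

def unpack(z):
    return z[:2 * N].reshape(N, 2), z[2 * N:]

def objective(z):
    _, U = unpack(z)
    return h * np.sum(U ** 2)

def defects(z):
    X, U = unpack(z)
    cons = []
    for k in range(N - 1):
        # trapezoidal rule: x_{k+1} - x_k = (h/2) * (f_k + f_{k+1})
        cons.append(X[k + 1] - X[k]
                    - 0.5 * h * (dyn(X[k], U[k]) + dyn(X[k + 1], U[k + 1])))
    cons.append(X[0] - x0)     # boundary conditions
    cons.append(X[-1] - xf)
    return np.concatenate(cons)

# Initial guess: linearly interpolated states, zero controls.
z0 = np.concatenate([np.linspace(x0, xf, N).ravel(), np.zeros(N)])
res = minimize(objective, z0, method='SLSQP',
               constraints={'type': 'eq', 'fun': defects})
X_opt, U_opt = unpack(res.x)
```

The initial guess is what makes this fast in practice: warm-starting each MPC solve from the previous solution usually converges in a few iterations.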

ppap36 commented 7 months ago

By the way, my dear friend, when you encounter problems with robotics algorithms, where do you usually go for forums or websites besides GitHub? Do you join any groups to ask questions? I am a beginner with https://leggedrobotics.github.io/ocs2/overview.html, which is used for solving optimal control problems, and the OCS2 developers do not have a complete tutorial (you know, just some robot examples; they don't explain how to use the functions or the structure of the code). I can only develop with it by mimicking the example code.

yaswanth1701 commented 7 months ago

Unfortunately, material on robotics algorithms is pretty scattered, especially for optimal control. I previously joined a Discord server for control theory discussion, but I haven't used it much; you can give it a try. Apart from that, there is the Drake library, which has very good tutorials in Python (Prof. Russ sometimes replies to clear up doubts). I don't think there are any forums or websites for optimal control that I specifically refer to.

yaswanth1701 commented 7 months ago

And yeah, it's kind of true. All the optimal control libraries I have seen so far are pretty bad at maintaining documentation on how to use them.

ppap36 commented 7 months ago

Hey bro! I have made a video about your project on Bilibili. I hope you get more stars! https://www.bilibili.com/video/BV1nK421t7Jg/?vd_source=5d5065c07632b817f1765a8d48bbdb4f

yaswanth1701 commented 7 months ago

Thank you, it means a lot. I will put the video link in the README.