urosolia / RacingLMPC

Implementation of the Learning Model Predictive Controller for autonomous racing

QP formulation and LMPC Relaxation in the paper #12

Closed yuwwang123 closed 3 years ago

yuwwang123 commented 4 years ago

Hi Ugo,

I was trying to implement your LMPC on the F1/10 autonomous car but had some trouble understanding the LMPC relaxation part in your paper. [image: screenshot of the relaxed formulation] I'm a bit confused about why the time index runs up to 4N. By doing this relaxation, are you effectively trying to convert the problem into a QP? I would highly appreciate it if you could share any more insight into the idea behind the relaxation. Thank you!

Yuwei

junzengx14 commented 4 years ago

I think the implementation just means taking some neighboring points from the previous iteration; it could be 3N, 4N, or some other appropriate number. I'll leave this one to @urosolia.

By the way, in the current codebase N = 12 and 32 + 12 = 44 neighboring points are used.
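To make the "neighboring points" idea concrete, here is a minimal sketch (illustrative only, not the repo's actual API) of how one might pick a window of stored states from a previous lap around the state closest to the current one:

```python
import numpy as np

def select_neighbors(ss_states, x_t, num_points=44):
    """Pick num_points stored states from one previous lap, starting at the
    stored state closest to the current state x_t.

    ss_states: (T, n) array of states recorded along one lap (hypothetical
    layout, for illustration).
    """
    # Index of the stored state closest to the current state
    dists = np.linalg.norm(ss_states - x_t, axis=1)
    i_min = int(np.argmin(dists))
    # Take a window of indices starting at the closest point, clipped to
    # the lap length so we never index past the recorded trajectory
    idx = np.clip(np.arange(i_min, i_min + num_points), 0, ss_states.shape[0] - 1)
    return ss_states[idx]
```

These selected points are the candidate terminal states the controller is allowed to steer toward at the end of its horizon.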

urosolia commented 4 years ago

Hi Yuwei,

I think that you are looking at the paper "Autonomous Racing using Learning Model Predictive Control", which describes the first implementation of the LMPC. The equation that you posted describes the old strategy that I used to approximate the safe set and value function.

I suggest you take a look at the paper "Learning How to Autonomously Race a Car: A Predictive Control Approach" (link), which describes the latest implementation (video of the experiments here). In particular, it describes the strategy implemented in this repo in the branch "devel-ugo".
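For readers following along: the key idea of the relaxation is that the discrete set of stored states is replaced by its convex hull, so the terminal constraint becomes linear in the multipliers and the finite-time problem stays a QP. A minimal sketch (function name and data layout are illustrative, not the repo's code):

```python
import numpy as np

def terminal_from_multipliers(ss_points, lam):
    """Relaxed terminal constraint: the terminal state must be a convex
    combination of stored safe-set states.

    ss_points: (K, n) stored safe-set states.
    lam: (K,) multipliers with lam >= 0 and sum(lam) == 1 (these become
    decision variables in the QP alongside the predicted states/inputs).
    """
    assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)
    # x_N = sum_i lam_i * x_i^ss  -- a linear equality constraint in (x_N, lam)
    return lam @ ss_points
```

Because both this constraint and the associated terminal cost are linear in lam, the relaxation preserves the QP structure of the problem.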

Please also feel free to reach out by email if you want more details on how to implement the strategy.

baihongzeng commented 4 years ago

Hi Urosolia,

Your reply is helpful, but in main.py the comment section at the top still refers to the old papers instead of the new one. Perhaps it could be updated. By the way, this really is a fantastic algorithm.

urosolia commented 4 years ago

Thanks, I have updated the reference. Also make sure to check out the branch "devel-ugo", which has the most recent implementation, and let me know if you have any questions!

yuwwang123 commented 4 years ago

Thank you, Ugo. That was extremely helpful! We'll look into the new paper and hope you don't mind us reaching out again if we have new questions.

Thanks, Yuwei

yuwwang123 commented 4 years ago

Hi Ugo,

Just a quick follow-up question. For the cost objective in the LMPC formulation in the new paper, [image: screenshot of the cost function] I understand the second term, which is linear in the decision variable lambda, but I'm not sure how h(x) goes into the QP, since it takes on a value of 1 or 0 depending on x (a discrete, nonlinear function of x). Thank you!

Yuwei

urosolia commented 4 years ago

Hi Yuwei,

In this repo I approximated h(x, u) as 1, and I added -1 to the cost of the states which have crossed the finish line here.

Here you can find the most recent implementation, where I change the cost after the prediction has crossed the finish line. Basically, once the prediction has crossed the finish line, the goal is to complete the upcoming lap, and the terminal cost is changed accordingly (with this strategy you still have h(x, u) = 1, as you always predict to reach the next finish line, which is outside your prediction horizon).
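A small sketch of this cost-to-go trick may help (names and layout are illustrative, not the repo's code). With h(x, u) = 1 the cost-to-go of a stored state is simply the number of steps left in the lap, and samples recorded past the finish line get an extra -1 so the cost keeps decreasing across the line:

```python
import numpy as np

def time_cost_to_go(num_steps, crossed_mask):
    """Cost-to-go under the h(x, u) = 1 approximation: steps remaining in
    the lap for each stored sample. Samples that have already crossed the
    finish line get -1 added, mimicking the trick described above.

    crossed_mask: (num_steps,) boolean array, True past the finish line.
    """
    J = np.arange(num_steps - 1, -1, -1, dtype=float)  # T-1, T-2, ..., 0
    J[crossed_mask] -= 1.0
    return J
```

The terminal cost in the QP is then the linear term `J @ lam` over the safe-set multipliers, so the indicator h(x) never has to appear in the optimization problem itself.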