yalmip / YALMIP

MATLAB toolbox for optimization modeling
https://yalmip.github.io/

When using YALMIP in MATLAB to parse a model, memory usage gets stuck at 347 GB and no results are ever computed. #1451

Open kgkhhx opened 2 weeks ago

kgkhhx commented 2 weeks ago

When using YALMIP in MATLAB to parse a model, memory usage gets stuck at 347 GB, no log output is produced, and the model is never solved. The solver being used is Gurobi. The model is very large, with a constraint matrix on the scale of tens of millions by tens of millions. The machine has 1024 GB of system memory. How should I address this problem?

johanlofberg commented 2 weeks ago

First, make sure you are running the develop branch, as I think there has been some fix that might be related.

Then, if the issue persists, a reproducible example is needed to see whether it is a YALMIP bottleneck or whether the model is simply too big.
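A reproducible example usually only needs the modeling pattern, not the real data. A minimal sketch along these lines (every size and data matrix below is a random placeholder, not the actual model) would typically be enough:

```matlab
% Minimal reproducible sketch -- all sizes and data are placeholders
n = 10;
x = sdpvar(n,1);                     % continuous variables
y = binvar(n,1);                     % binary variables
A = sprandn(5*n, 2*n, 0.1);          % random sparse stand-in for the real constraint data
b = rand(5*n, 1);

C = [A*[x; y] <= b, -10 <= x <= 10]; % same constraint structure as the real model
ops = sdpsettings('verbose', 2, 'solver', 'gurobi');
optimize(C, sum(x), ops);
```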

kgkhhx commented 2 weeks ago

> First, make sure you are running the develop branch, as I think there has been some fix that might be related.
>
> Then, if the issue persists, a reproducible example is needed to see whether it is a YALMIP bottleneck or whether the model is simply too big.

Thanks. I have tried separating some of the constraints into model structures such as model1, model2, etc., and then concatenating the A, sense, and rhs matrices of these structures. After that, I call result = gurobi(real_model, params) directly in Gurobi. Currently, when I use result = gurobi(model1, params) the computation works fine, but once I perform the matrix concatenation, an error occurs stating `model.A not representable`.
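A quick sanity check of the concatenated struct might narrow down which field Gurobi rejects. This is only a hedged diagnostic sketch; `real_model` is assumed to be the concatenated Gurobi struct built in the code further below, and `obj` is assumed to be the objective field produced by the export:

```matlab
% Print the properties Gurobi is most likely to complain about
fprintf('A is sparse: %d\n', issparse(real_model.A));
fprintf('size of A: %d x %d\n', size(real_model.A,1), size(real_model.A,2));
fprintf('non-finite entries in A:   %d\n', full(sum(~isfinite(nonzeros(real_model.A)))));
fprintf('non-finite entries in rhs: %d\n', sum(~isfinite(real_model.rhs)));
fprintf('rows of A / sense / rhs: %d / %d / %d\n', ...
    size(real_model.A,1), numel(real_model.sense), numel(real_model.rhs));
fprintf('columns of A / length of obj: %d / %d\n', ...
    size(real_model.A,2), numel(real_model.obj));
```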

johanlofberg commented 2 weeks ago

You would have to supply reproducible YALMIP code illustrating your issue.

kgkhhx commented 2 weeks ago

> You would have to supply reproducible YALMIP code illustrating your issue.

Thanks.

```matlab
ops2 = sdpsettings('verbose',2,'solver','gurobi','debug',1);
ops2.gurobi.TimeLimit = 120;
ops2.gurobi.MIPFocus = 0;
ops2.gurobi.TuneTimeLimit = 10;
ops2.gurobi.NoRelHeurTime = 120;
ops2.gurobi.MemLimit = 1000;
ops2.gurobi.NodefileStart = 1000;

C2 = [];
C3 = [];
for tem = 1:n
    tic;
    str = ['final_oral_threadmultipath',num2str(tem),'_1_611736.mat'];  % constraint file
    load(str);
    C2 = A_eq*[x_o(:); x_b(:)] == rhs_eq;
    C3 = A_heq*[x_o(:); x_b(:)] <= rhs_heq;
    C = [C0, C1, C2, C3];
    [model, recovery] = export(C, z, ops2);
    save(['model',num2str(tem),'.mat'],'model','-v7.3');
    toc;
    clearvars -except C0 C1 x_o x_b z z1 z2 z3 tem n ops2
end

for i = 1:n
    load(['model',num2str(i),'.mat']);
    if i == 1
        model.modelsense = 'min';
        real_model = model;
    else
        real_model.A     = [real_model.A; model.A];
        real_model.sense = [real_model.sense; model.sense];
        real_model.rhs   = [real_model.rhs; model.rhs];
    end
    clearvars -except real_model i
end

if ~issparse(real_model.A)
    real_model.A = sparse(real_model.A);
end

index_eq  = find(real_model.sense == '=');
index_heq = find(real_model.sense ~= '=');
real_model.A     = [real_model.A(index_eq,:);     real_model.A(index_heq,:)];
real_model.sense = [real_model.sense(index_eq,:); real_model.sense(index_heq,:)];
real_model.rhs   = [real_model.rhs(index_eq,:);   real_model.rhs(index_heq,:)];

try
    params.outputflag = 1;
    params.TimeLimit = 30;
    params.MIPFocus = 0;
    params.TuneTimeLimit = 10;
    params.NoRelHeurTime = 120;
    params.Threads = 32;
    params.MemLimit = 1000;
    params.Presolve = 2;

    result = gurobi(real_model, params);
catch ME
    disp('Error during model processing:');
    disp(ME.message);
end
```
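A hedged variant of the concatenation loop is sketched below (file names, `n`, and struct fields follow the snippet above; not verified against the actual data). It keeps every block explicitly sparse and checks that the column counts match before stacking:

```matlab
for i = 1:n
    S  = load(['model',num2str(i),'.mat'],'model');
    Ai = sparse(S.model.A);                          % force each block to sparse
    if i == 1
        real_model   = S.model;
        real_model.A = Ai;
        real_model.modelsense = 'min';
    else
        assert(size(Ai,2) == size(real_model.A,2), ...
            'block %d has %d columns, expected %d', ...
            i, size(Ai,2), size(real_model.A,2));
        real_model.A     = [real_model.A; Ai];
        real_model.sense = [real_model.sense(:); S.model.sense(:)];  % force column form
        real_model.rhs   = [real_model.rhs(:);   S.model.rhs(:)];
    end
    clear S Ai
end
```

This keeps the stacked `A` sparse throughout and fails early with a clear message if two exported blocks do not share the same column count, rather than handing an inconsistent struct to Gurobi.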