Added a control option -mid_dim: the intermediate layer dimension of the adapter, or the rank of the LoRA matrices, can now be specified on the command line with -mid_dim.
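A minimal sketch of how -mid_dim could be parsed and passed through; the default value, help text, and the train.py script name are assumptions for illustration, not the repository's actual settings.

```python
import argparse

# Sketch only: the default and help string are illustrative assumptions.
parser = argparse.ArgumentParser()
parser.add_argument('-mid_dim', type=int, default=8,
                    help='Adapter intermediate (bottleneck) dimension, or LoRA rank')
args = parser.parse_args()

# args.mid_dim would then be forwarded to the adapter / LoRA construction code.
print(args.mid_dim)
```

Example invocation (script name assumed): python train.py -mid_dim 8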
Fixed the previous bug so that the Position Embedding with pre-trained parameters can now be used correctly in SAM.
Changed how LoRA is used to fine-tune the model: following existing work, LoRA is now applied only to the q and v projections in the Attention Block, leaving the k projection unchanged.
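A minimal sketch of the q/v-only LoRA pattern described above, assuming separate per-projection linear layers; q_proj and v_proj are placeholder attribute names (if the attention block uses a fused qkv projection, only the corresponding q and v slices would be adapted instead).

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + B A x."""
    def __init__(self, base: nn.Linear, rank: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # keep the pre-trained weight frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)    # zero-init so training starts from the base model

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))

def add_lora_to_attention(attn, rank: int):
    # Only q and v get LoRA; k stays as the frozen pre-trained projection.
    attn.q_proj = LoRALinear(attn.q_proj, rank)   # placeholder attribute names
    attn.v_proj = LoRALinear(attn.v_proj, rank)
    return attn
```

Here rank would be the value supplied via -mid_dim.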