Hi @hunterzju,
Great question, and thanks for bringing this up. I just made a Jupyter notebook out of the linear-svm code here:
There is a constant in the loss function, `alpha`, which allows for the flexibility of the soft margin: the higher it is, the more tolerance there is for misclassified points. If you want hard-margin behaviour, you can set `alpha = 0`.
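For reference, here is a minimal NumPy sketch of the loss I have in mind (the variable names are illustrative, not the notebook's exact code): the mean hinge loss plus an `alpha * ||w||^2` penalty, so a larger `alpha` trades margin violations for a smaller-norm `w`.

```python
import numpy as np

def soft_margin_loss(w, b, X, y, alpha):
    """Soft-margin SVM loss: mean hinge loss plus alpha * ||w||^2.

    Minimal NumPy sketch; names and shapes are illustrative, not the
    notebook's exact code. Labels y are expected in {-1, +1}.
    """
    margins = y * (X @ w - b)                   # signed margins
    hinge = np.maximum(0.0, 1.0 - margins)      # zero for points beyond the margin
    return hinge.mean() + alpha * np.dot(w, w)  # alpha = 0 -> pure hard-margin hinge loss

# Two separable points; this w gives both a margin >= 1, so only the alpha term remains
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 0.0])
print(soft_margin_loss(w, 0.0, X, y, alpha=0.01))  # 0.01
```

Roughly speaking, `alpha` plays the inverse role of sklearn's `C`: a large `C` corresponds to a small `alpha`, and both push the solution towards the hard margin.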
Unfortunately I didn't illustrate it very well, because the dataset I chose is linearly separable, so the soft margin is hard to see here. I'm assuming that if you try a different combination of Iris features you will see the soft margin at work. Let me know if you have any more questions or if it isn't working for you.
I'm going to close the issue for now, but please reopen if you have any more questions. Thanks!
The example works well, but I'm wondering about the mathematical theory behind the code; could you show me how the code solves the SVM problem? To be specific, I tried the SVM in sklearn, but it is too slow. In sklearn there is a parameter "C" which controls the soft margin; I want to know whether this code supports the soft margin and how it works. Thanks!