Here we introduce the adversarial element. If we only minimized the reconstruction error in bits, i.e. trained Bob to recover the original message from the ciphertext and the key, Alice could simply pass the original message through unchanged as the ciphertext. To make sure that doesn't happen, we add a loss term that models Eve as an eavesdropper: the better Eve gets at breaking the encryption, the larger Alice and Bob's loss becomes.
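As a rough illustration, here is a minimal sketch of how such a combined loss could be wired up. The tensor names (`plaintext`, `bob_output`, `eve_output`), the PyTorch framing, and the way Eve's success is folded into the penalty are all assumptions for illustration, not the exact formulation used in training.

```python
import torch

def reconstruction_loss(plaintext, prediction):
    # Mean absolute error in bits between the true message and a reconstruction
    # (bits are assumed here to be encoded as 0/1-valued tensors).
    return torch.mean(torch.abs(plaintext - prediction))

def alice_bob_loss(plaintext, bob_output, eve_output):
    # Bob should reconstruct the message well (small error) ...
    bob_error = reconstruction_loss(plaintext, bob_output)
    # ... while Eve, who sees only the ciphertext, should do no better than
    # guessing. Alice and Bob are penalized as Eve's error shrinks.
    eve_error = reconstruction_loss(plaintext, eve_output)
    eve_penalty = torch.clamp(1.0 - eve_error, min=0.0)  # grows as Eve improves
    return bob_error + eve_penalty
```

In a setup like this, Alice and Bob minimize `alice_bob_loss` while Eve is trained separately to minimize her own reconstruction error, which is what gives the training its adversarial, minimax character.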
L1 and L2 are two loss functions commonly used in machine learning to measure this error. The L1 loss function, also known as Least Absolute Deviations (LAD), minimizes the error defined as the sum of all the absolute differences between the true values and the predicted values.
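Concretely, the L1 loss is just a sum of absolute differences. The function and variable names below are illustrative only.

```python
import numpy as np

def l1_loss(y_true, y_pred):
    # Sum of absolute differences between true and predicted values.
    return np.sum(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

# Example: true bits vs. a noisy reconstruction
print(l1_loss([1, 0, 1, 1], [0.9, 0.2, 0.8, 1.0]))  # 0.1 + 0.2 + 0.2 + 0.0 ≈ 0.5
```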