uzh-rpg / deep_uncertainty_estimation

This repository provides the code implementing the framework for equipping deep learning models with total uncertainty estimates, as described in "A General Framework for Uncertainty Estimation in Deep Learning" (Loquercio, Segù, Scaramuzza; RA-L 2020).
MIT License

About regression loss for steering prediction #4

Open godspeed1989 opened 4 years ago

godspeed1989 commented 4 years ago

Thanks for your great work. In your paper, to train DroNet for steering prediction, did you just use MSE for supervision?

Could you paste the part of the code that computes the training loss and evaluation metrics? The released code only contains SoftmaxHeteroscedasticLoss, for classification on CIFAR.

mattiasegu commented 3 years ago

Hi @godspeed1989

I am glad to know that you appreciate our work!

The loss to train our network is literally torch.nn.functional.mse_loss(outputs, targets), plus L2 regularization on the model weights. To evaluate the network, we use RMSE, EVA (explained variance), and NLL (negative log-likelihood).
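
In code, a minimal sketch of that objective (the optimizer and the weight-decay coefficient below are placeholders, not the exact values we used):

import torch
import torch.nn.functional as F

def training_loss(outputs, targets):
    # Supervision is plain MSE between predicted and ground-truth steering.
    return F.mse_loss(outputs, targets)

# The L2 regularization on the model weights enters through the optimizer's
# weight_decay argument (placeholder coefficient):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)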

EVA:

import numpy as np

def explained_variance_1d(ypred, y):
    """
    Explained variance: 1 - Var[y - ypred] / Var[y].
    https://www.quora.com/What-is-the-meaning-proportion-of-variance-explained-in-linear-regression
    """
    assert y.ndim == 1 and ypred.ndim == 1
    vary = np.var(y)
    return np.nan if vary == 0 else 1 - np.var(y - ypred) / vary

def compute_explained_variance(predictions, real_values):
    """
    Computes the explained variance between the predicted and the
    ground-truth steering angles.
    """
    assert predictions.shape == real_values.shape
    ex_variance = explained_variance_1d(predictions, real_values)
    print("EVA = {}".format(ex_variance))
    return ex_variance
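
As a sanity check (dummy values, made up here): EVA is 1.0 for perfect predictions and 0.0 when the predictions are just the mean of y.

y = np.array([0.1, -0.2, 0.3, 0.0])
compute_explained_variance(y.copy(), y)                   # EVA = 1.0
compute_explained_variance(np.full_like(y, y.mean()), y)  # EVA = 0.0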

RMSE:

def compute_rmse(predictions, real_values):
    assert predictions.shape == real_values.shape
    mse = np.mean(np.square(predictions - real_values))
    rmse = np.sqrt(mse)
    print("RMSE = {}".format(rmse))
    return rmse

Log-Likelihood:

import torch

def log_likelihood(y_pred, y_true, sigma):
    # Convert numpy arrays to torch tensors.
    y_true = torch.Tensor(y_true)
    y_pred = torch.Tensor(y_pred)
    sigma = torch.Tensor(sigma)

    # Average log-probability of the targets under the predicted
    # Gaussian distributions N(y_pred, sigma^2).
    dist = torch.distributions.normal.Normal(loc=y_pred, scale=sigma)
    ll = torch.mean(dist.log_prob(y_true))
    return ll.item()
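
For example, on dummy arrays (values made up), the three metrics go together like this:

y_true = np.array([0.10, -0.30, 0.25, 0.00])
y_pred = np.array([0.12, -0.28, 0.20, 0.05])
sigma = np.array([0.05, 0.05, 0.10, 0.05])  # predicted standard deviations (made up)

compute_explained_variance(y_pred, y_true)
compute_rmse(y_pred, y_true)
print("NLL = {}".format(-log_likelihood(y_pred, y_true, sigma)))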

I hope these functions can be helpful! Cheers

godspeed1989 commented 3 years ago

Hi @mattiasegu Thanks for your reply. One more question ;) I am still confused about how we can estimate aleatoric uncertainty (i.e., the output variance) without specific supervision. The output variance is used in SoftmaxHeteroscedasticLoss.

In my mind, steering prediction is a regression problem. In ADF's original paper, Lightweight Probabilistic Deep Networks, there is a probabilistic analog for regression that minimizes the Gaussian negative log-likelihood, -log p(y | μ, σ²) = (y − μ)² / (2σ²) + ½ log(2πσ²). So, why can't we add this as part of the learning target? A sketch of what I mean is below.
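
To make the question concrete (my own code, not from this repo; it drops the constant ½·log 2π term and clamps the variance for numerical stability):

import torch

def gaussian_nll_loss(mean, var, target, eps=1e-6):
    # Heteroscedastic regression NLL: the network predicts both a mean and a
    # variance, and the variance is learned without explicit labels because
    # the loss itself trades off the squared residual against log var.
    var = var.clamp(min=eps)
    return 0.5 * (torch.log(var) + (target - mean) ** 2 / var).mean()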