
Improving YOLOX: Modifying the Loss Functions (Part 1)

15 人参与  2022年10月13日 10:11  分类 : 《随便一记》  评论

点击全文阅读


Topic: how to modify the objectness (confidence) prediction loss in the official YOLOX code.

Environment: PyTorch 1.8

Loss-function modifications

(1) Objectness (confidence) prediction loss: replace binary cross-entropy with Focal Loss or Varifocal Loss.

(2) Localization loss: replace the IoU loss with GIoU, CIoU, EIoU, or the α-IoU family (covered in Part 2).

Tip: before applying these changes, it helps to be familiar with YOLOX and with how the losses above work.
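For quick reference, the standard forms of the two replacement losses, as given in their papers (notation follows the papers, not this repo's code: $p$ is the predicted probability, $y \in \{0,1\}$ the binary label, $q$ the IoU-aware target score; the code below uses fixed $\alpha$ and $\gamma = 2$):

$$\mathrm{FL}(p_t) = -\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t),\qquad p_t = \begin{cases} p, & y = 1 \\ 1 - p, & y = 0 \end{cases}$$

$$\mathrm{VFL}(p, q) = \begin{cases} -q\,\bigl(q \log p + (1 - q)\log(1 - p)\bigr), & q > 0 \\ -\alpha\, p^{\gamma} \log(1 - p), & q = 0 \end{cases}$$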

References

Official YOLOX repository: https://github.com/Megvii-BaseDetection/YOLOX

YOLOX explained (by Bubbliiiing): https://blog.csdn.net/weixin_44791964/article/details/120476949

Focal Loss explained: https://cyqhn.blog.csdn.net/article/details/87343004

Varifocal Loss explained: https://blog.csdn.net/weixin_42096202/article/details/108567189

GIoU, CIoU, EIoU, etc.: https://blog.csdn.net/neil3611244/article/details/113794197

α-IoU: https://blog.csdn.net/wjytbest/article/details/121513560

Usage: the code below is a drop-in replacement.

Code modification steps

1. Replacing the objectness loss with Focal Loss (no new .py file needed)

Usage: add a focal_loss method to the YOLOXHead class in YOLOX-main/yolox/models/yolo_head.py.

(1) First, locate where the objectness prediction loss loss_obj is computed (around lines 386-405 of yolo_head.py) and replace it:

# loss_iou: localization loss; loss_obj: objectness loss; loss_cls: classification loss
loss_iou = (
    self.iou_loss(bbox_preds.view(-1, 4)[fg_masks], reg_targets)
).sum() / num_fg
# loss_obj = (
#     self.bcewithlog_loss(obj_preds.view(-1, 1), obj_targets)
# ).sum() / num_fg
loss_obj = (
    self.focal_loss(obj_preds.sigmoid().view(-1, 1), obj_targets)
).sum() / num_fg
loss_cls = (
    self.bcewithlog_loss(
        cls_preds.view(-1, self.num_classes)[fg_masks], cls_targets
    )
).sum() / num_fg
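Note that obj_preds is passed through .sigmoid() before the call: unlike the BCEWithLogitsLoss it replaces, the focal_loss method defined next expects probabilities rather than raw logits.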

(2) Add the focal_loss method; placing it just before def get_l1_target(…) works fine:

def focal_loss(self, pred, gt):
    # pred: objectness probabilities after sigmoid, shape (N, 1)
    # gt:   binary targets, 1 for positive anchors, 0 for negatives
    pos_inds = gt.eq(1).float()
    neg_inds = gt.eq(0).float()
    # alpha weighting 0.75/0.25, gamma = 2; 1e-5 guards against log(0)
    pos_loss = torch.log(pred + 1e-5) * torch.pow(1 - pred, 2) * pos_inds * 0.75
    neg_loss = torch.log(1 - pred + 1e-5) * torch.pow(pred, 2) * neg_inds * 0.25
    loss = -(pos_loss + neg_loss)
    return loss
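To check the method behaves sensibly before training, here is a minimal standalone sanity check (the free function and dummy tensors are my own test scaffolding, not YOLOX code):

import torch

def focal_loss(pred, gt):  # same body as the method above, without self
    pos_inds = gt.eq(1).float()
    neg_inds = gt.eq(0).float()
    pos_loss = torch.log(pred + 1e-5) * torch.pow(1 - pred, 2) * pos_inds * 0.75
    neg_loss = torch.log(1 - pred + 1e-5) * torch.pow(pred, 2) * neg_inds * 0.25
    return -(pos_loss + neg_loss)

obj_preds = torch.randn(8, 1)                      # raw logits, as in yolo_head.py
obj_targets = torch.randint(0, 2, (8, 1)).float()  # 1 = positive anchor, 0 = negative
loss = focal_loss(obj_preds.sigmoid(), obj_targets)
print(loss.shape)                  # torch.Size([8, 1]) -- elementwise, summed later
num_fg = obj_targets.sum().clamp(min=1)  # stand-in for YOLOX's num_fg normalizer
print((loss.sum() / num_fg).item())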

2. Replacing the objectness loss with Varifocal Loss (more code, so it goes in a separate .py file)

Step 1: create varifocalloss.py in the YOLOX-main/yolox/models folder with the following content:

import torch.nn as nn
import torch.nn.functional as F


def reduce_loss(loss, reduction):
    """Reduce loss as specified.

    Args:
        loss (Tensor): Elementwise loss tensor.
        reduction (str): Options are "none", "mean" and "sum".

    Return:
        Tensor: Reduced loss tensor.
    """
    reduction_enum = F._Reduction.get_enum(reduction)
    # none: 0, elementwise_mean: 1, sum: 2
    if reduction_enum == 0:
        return loss
    elif reduction_enum == 1:
        return loss.mean()
    elif reduction_enum == 2:
        return loss.sum()


def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
    """Apply element-wise weight and reduce loss.

    Args:
        loss (Tensor): Element-wise loss.
        weight (Tensor): Element-wise weights.
        reduction (str): Same as built-in losses of PyTorch.
        avg_factor (float): Average factor when computing the mean of losses.

    Returns:
        Tensor: Processed loss values.
    """
    # if weight is specified, apply element-wise weight
    if weight is not None:
        loss = loss * weight
    # if avg_factor is not specified, just reduce the loss
    if avg_factor is None:
        loss = reduce_loss(loss, reduction)
    else:
        # if reduction is mean, then average the loss by avg_factor
        if reduction == 'mean':
            loss = loss.sum() / avg_factor
        # if reduction is 'none', then do nothing, otherwise raise an error
        elif reduction != 'none':
            raise ValueError('avg_factor can not be used with reduction="sum"')
    return loss


def varifocal_loss(pred,
                   target,
                   weight=None,
                   alpha=0.75,
                   gamma=2.0,
                   iou_weighted=True,
                   reduction='mean',
                   avg_factor=None):
    """`Varifocal Loss <https://arxiv.org/abs/2008.13367>`_

    Args:
        pred (torch.Tensor): The prediction with shape (N, C), C is the
            number of classes.
        target (torch.Tensor): The learning target of the iou-aware
            classification score with shape (N, C), C is the number of classes.
        weight (torch.Tensor, optional): The weight of loss for each
            prediction. Defaults to None.
        alpha (float, optional): A balance factor for the negative part of
            Varifocal Loss, which is different from the alpha of Focal Loss.
            Defaults to 0.75.
        gamma (float, optional): The gamma for calculating the modulating
            factor. Defaults to 2.0.
        iou_weighted (bool, optional): Whether to weight the loss of the
            positive example with the iou target. Defaults to True.
        reduction (str, optional): The method used to reduce the loss into
            a scalar. Defaults to 'mean'. Options are "none", "mean" and
            "sum".
        avg_factor (int, optional): Average factor that is used to average
            the loss. Defaults to None.
    """
    # pred and target should be of the same size
    assert pred.size() == target.size()
    pred_sigmoid = pred.sigmoid()
    target = target.type_as(pred)
    if iou_weighted:
        focal_weight = target * (target > 0.0).float() + \
            alpha * (pred_sigmoid - target).abs().pow(gamma) * \
            (target <= 0.0).float()
    else:
        focal_weight = (target > 0.0).float() + \
            alpha * (pred_sigmoid - target).abs().pow(gamma) * \
            (target <= 0.0).float()
    loss = F.binary_cross_entropy_with_logits(
        pred, target, reduction='none') * focal_weight
    loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
    return loss


class VarifocalLoss(nn.Module):

    def __init__(self,
                 use_sigmoid=True,
                 alpha=0.75,
                 gamma=2.0,
                 iou_weighted=True,
                 reduction='mean',
                 loss_weight=1.0):
        """`Varifocal Loss <https://arxiv.org/abs/2008.13367>`_

        Args:
            use_sigmoid (bool, optional): Whether the prediction is
                used for sigmoid or softmax. Defaults to True.
            alpha (float, optional): A balance factor for the negative part of
                Varifocal Loss, which is different from the alpha of Focal
                Loss. Defaults to 0.75.
            gamma (float, optional): The gamma for calculating the modulating
                factor. Defaults to 2.0.
            iou_weighted (bool, optional): Whether to weight the loss of the
                positive examples with the iou target. Defaults to True.
            reduction (str, optional): The method used to reduce the loss into
                a scalar. Defaults to 'mean'. Options are "none", "mean" and
                "sum".
            loss_weight (float, optional): Weight of loss. Defaults to 1.0.
        """
        super(VarifocalLoss, self).__init__()
        assert use_sigmoid is True, \
            'Only sigmoid varifocal loss supported now.'
        assert alpha >= 0.0
        self.use_sigmoid = use_sigmoid
        self.alpha = alpha
        self.gamma = gamma
        self.iou_weighted = iou_weighted
        self.reduction = reduction
        self.loss_weight = loss_weight

    def forward(self,
                pred,
                target,
                weight=None,
                avg_factor=None,
                reduction_override=None):
        """Forward function.

        Args:
            pred (torch.Tensor): The prediction.
            target (torch.Tensor): The learning target of the prediction.
            weight (torch.Tensor, optional): The weight of loss for each
                prediction. Defaults to None.
            avg_factor (int, optional): Average factor that is used to average
                the loss. Defaults to None.
            reduction_override (str, optional): The reduction method used to
                override the original reduction method of the loss.
                Options are "none", "mean" and "sum".

        Returns:
            torch.Tensor: The calculated loss
        """
        assert reduction_override in (None, 'none', 'mean', 'sum')
        reduction = (
            reduction_override if reduction_override else self.reduction)
        if self.use_sigmoid:
            loss_cls = self.loss_weight * varifocal_loss(
                pred,
                target,
                weight,
                alpha=self.alpha,
                gamma=self.gamma,
                iou_weighted=self.iou_weighted,
                reduction=reduction,
                avg_factor=avg_factor)
        else:
            raise NotImplementedError
        return loss_cls
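Before wiring it into the head, the file can be smoke-tested on its own; a minimal sketch, assuming it is run from the yolox/models directory (the dummy tensors are illustrative only):

import torch
from varifocalloss import VarifocalLoss  # adjust the import path as needed

vfl = VarifocalLoss(reduction='none')
obj_preds = torch.randn(8, 1)                      # raw logits: sigmoid is applied inside
obj_targets = torch.randint(0, 2, (8, 1)).float()  # binary objectness targets
loss = vfl(obj_preds, obj_targets)
print(loss.shape)  # torch.Size([8, 1]) -- elementwise, thanks to reduction='none'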

Step 2: call VarifocalLoss in YOLOX-main/yolox/models/yolo_head.py

(1) Import it:

from .varifocalloss import VarifocalLoss

(2) Instantiate it in __init__:

self.varifocal = VarifocalLoss(reduction='none')
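Note: reduction='none' is deliberate. As with the original self.bcewithlog_loss (which YOLOX constructs as nn.BCEWithLogitsLoss(reduction="none")), the loss must come back elementwise so that get_losses can sum it and normalize by num_fg.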

(3) Replace the original objectness prediction loss loss_obj:

# loss_iou: localization loss; loss_obj: objectness loss; loss_cls: classification loss
loss_iou = (
    self.iou_loss(bbox_preds.view(-1, 4)[fg_masks], reg_targets)
).sum() / num_fg
# loss_obj = (
#     self.bcewithlog_loss(obj_preds.view(-1, 1), obj_targets)
# ).sum() / num_fg
loss_obj = (
    self.varifocal(obj_preds.view(-1, 1), obj_targets)
).sum() / num_fg
loss_cls = (
    self.bcewithlog_loss(
        cls_preds.view(-1, self.num_classes)[fg_masks], cls_targets
    )
).sum() / num_fg
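Unlike the focal_loss variant in step 1, obj_preds is not passed through .sigmoid() here: varifocal_loss applies the sigmoid internally via F.binary_cross_entropy_with_logits, so it expects raw logits.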

Results: they depend on your dataset. On mine, both Focal Loss and Varifocal Loss improved accuracy, with larger models benefiting more. (On yolox-tiny, however, Focal Loss gave a lower AP50 than the baseline.)

Full code for the above:
Link: https://pan.baidu.com/s/1ee1sQ9Eulz_mUdHTOnBe7w
Access code: 8v8r

