PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (『飞桨』核心框架,深度学习&机器学习高性能单机、分布式训练和跨平台部署)
http://www.paddlepaddle.org/
Apache License 2.0

layers.hardsigmoid slope defaults to 0.2 #44820

Closed zhaoying9105 closed 2 years ago

zhaoying9105 commented 2 years ago

### Describe the Bug

  1. `paddle.nn.functional.hardsigmoid` uses a default slope of 1/6 (python/paddle/nn/functional/activation.py):

         def hardsigmoid(x, slope=0.1666667, offset=0.5, name=None):

  2. `fluid.layers.hard_sigmoid` uses a default slope of 0.2 (python/paddle/fluid/layers/nn.py):

         @templatedoc()
         def hard_sigmoid(x, slope=0.2, offset=0.5, name=None):
             ...

    In addition, the default slope in paddle/fluid/operators/activation_op.cc is also 0.2:

    
    class HardSigmoidOpMaker : public framework::OpProtoAndCheckerMaker {
     public:
      void Make() override {
        AddInput("X", "An N-D Tensor with data type float32, float64. ");
        AddOutput("Out", "A Tensor with the same shape as input. ");
        AddAttr<float>("slope",
                       "The slope of the linear approximation of sigmoid. Its "
                       "value MUST BE positive. Default is 0.2. ")
            .SetDefault(0.2f);
        AddAttr<float>(
            "offset",
            "The offset of the linear approximation of sigmoid. Default is 0.5. ")
            .SetDefault(0.5f);
        AddComment(R"DOC(
    HardSigmoid Activation Operator.

    A 3-part piecewise linear approximation of sigmoid(https://arxiv.org/abs/1603.00391), which is much faster than sigmoid.

    $$out = \max(0, \min(1, slope * x + offset))$$

    )DOC");
      }
    };


Shouldn't these defaults be unified?
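To see how much the two defaults actually diverge, here is a minimal NumPy sketch of the piecewise-linear formula above (a standalone reimplementation for illustration, not Paddle's own kernel):

```python
import numpy as np

def hard_sigmoid(x, slope, offset=0.5):
    # out = max(0, min(1, slope * x + offset))
    return np.clip(slope * x + offset, 0.0, 1.0)

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])

# Default used by paddle.nn.functional.hardsigmoid
out_new = hard_sigmoid(x, slope=1.0 / 6.0)
# Default used by fluid.layers.hard_sigmoid and the C++ op
out_old = hard_sigmoid(x, slope=0.2)

print(out_new)  # e.g. at x=1: 0.5 + 1/6 ≈ 0.6667
print(out_old)  # e.g. at x=1: 0.5 + 0.2  = 0.7
```

Both variants agree at x = 0 and in the saturated regions, but differ everywhere in between, so code migrated from `fluid.layers.hard_sigmoid` to `paddle.nn.functional.hardsigmoid` silently changes its output unless `slope` is passed explicitly.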

### Additional Supplementary Information

_No response_
xiaoxiaohehe001 commented 2 years ago

OK, we will align them as soon as possible.

Ligoml commented 2 years ago

Hi, the APIs under fluid are no longer recommended. If you haven't hit any other problems using paddle.nn.functional.hardsigmoid, our suggestion is to leave this unchanged~