Samsung / ONE

On-device Neural Engine

[Compiler][luci] Refactoring luci/Service for removing dependency with loco #4796

Closed llFreetimell closed 3 years ago

llFreetimell commented 3 years ago

Originated Issue : #4726


Backgrounds

Until now, CircleNode has carried two kinds of shape and dtype. One is the loco shape/dtype stored as AnnotatedItem, and the other is the shape/dtype of CircleNode itself. However, during ShapeInferencePass and TypeInferencePass, only the loco shape/dtype is updated; the shape/dtype of CircleNode is updated only at the final exporting step. This redundancy is not desirable, so refactoring to remove the dependency is required.

Goals

  1. Reorder and change phase mode
    • To keep shape/dtype consistent whenever passes change them.
  2. Infer shape/dtype using only the shape/dtype of the circle node, not loco.
    • To remove the redundant dependency between luci and loco.
  3. Support static/dynamic shape inference with CircleShapeSignatureInferenceHelper
    • To reduce duplicated code and logic.
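Goal 2 can be illustrated with a minimal sketch. All types below are simplified stand-ins for the real luci classes, and the no-broadcast assumption is purely for illustration; the point is that inference reads the shape stored on the circle node itself rather than querying loco::shape_get:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-ins for the real luci classes (illustrative only).
struct CircleNode
{
  std::vector<int32_t> shape; // shape stored directly on the node
};

struct CircleAdd : CircleNode
{
  CircleNode *x = nullptr;
  CircleNode *y = nullptr;
};

// Shape inference that reads the circle node's own shape instead of
// consulting the loco annotation via loco::shape_get().
std::vector<int32_t> infer_add_shape(const CircleAdd *node)
{
  // For this sketch, assume both inputs already have identical shapes
  // (no broadcasting), so the output shape is simply a copy.
  assert(node->x->shape == node->y->shape);
  return node->x->shape;
}
```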

ToDo

Draft

Draft PR : #4850

Review points

The draft PR contains many commits, grouped by related content. Please refer to each of them.

llFreetimell commented 3 years ago

@seanshpark @jinevening @mhs4670go @struss

I created a draft PR with seven separate commits. Each commit groups related modifications, so you can review them one by one. I know it will be really hard to review :( However, I would really appreciate any opinions you can give on this draft PR :)

jinevening commented 3 years ago

Do we have to re-implement the existing shape inference logic in New_ShapeInferencePass? Why not just copy the shape inferred by the existing ShapeInferenceAlgorithm to the circle node?

For example, how about copying the inferred shape (shape) to the circle node in the function below?

bool CircleShapeInferenceRule::infer(const loco::Node *node, loco::NodeShape &shape) const
{
  LOGGER(l);

  assert(node->dialect() == CircleDialect::get());

  ShapeInferenceAlgorithm alg;
  auto circle_node = loco::must_cast<const CircleNode *>(node);

  bool is_shape_undefined = (circle_node->shape_status() == ShapeStatus::UNDEFINED);
  bool is_shape_none = (circle_node->shape_status() == ShapeStatus::NOSHAPE);
  bool is_scalar = (circle_node->rank() == 0);

  if (is_shape_undefined)
    shape = circle_node->accept(&alg);
  else
  {
    if (is_shape_none || is_scalar)
      shape = own_shape(circle_node);
    else
      shape = circle_node->accept(&alg);
  }

  VERBOSE(l, 1) << "[luci] shape: " << circle_node->name();
  VERBOSE(l, 1) << "              own_shape: " << own_shape(circle_node)
                << " -> infer: " << shape.as<loco::TensorShape>();

  return true;
}
llFreetimell commented 3 years ago

@jinevening

Do we have to re-implement the existing shape inference logic in New_ShapeInferencePass?

I think most of the shape inference logic can be copied and pasted. (New shortcut functions can be added, but no re-implementation.) However, some of the logic uses loco::shape_get, and those parts will be modified slightly so that loco::shape_get is removed.

Why not just copy the shape inferred by the existing ShapeInferenceAlgorithm to the circle node?

bool CircleShapeInferenceRule::infer(const loco::Node *node, loco::NodeShape &shape) const

Because `node` has the `const` keyword, we cannot copy the inferred shape to the circle node :( Even worse, we cannot simply remove the `const` keyword, because the method is inherited from loco::ShapeInference, and loco requires the `const` qualifier. In short, if we want to remove `const` from the definition, we would have to change loco.
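The constraint can be reproduced in isolation. This toy example (the types are stand-ins, not the real loco/luci classes) shows how a `const` node parameter lets the rule compute a shape but blocks writing it back to the node without a `const_cast`:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy stand-ins, not the real loco/luci types.
struct Node
{
  virtual ~Node() = default;
};

struct CircleNodeToy : Node
{
  std::vector<int32_t> shape;
  void set_shape(const std::vector<int32_t> &s) { shape = s; } // non-const setter
};

// Mirrors the fixed interface: the node argument is const, so calling
// set_shape() on it directly will not compile. Writing back would
// require a const_cast, which the discussion above wants to avoid.
bool infer(const Node *node, std::vector<int32_t> &out)
{
  auto *circle = dynamic_cast<const CircleNodeToy *>(node);
  assert(circle != nullptr);
  out = {1, 2, 3};
  // circle->set_shape(out); // error: 'circle' points to const
  return true;
}
```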

jinevening commented 3 years ago

Because `node` has the `const` keyword, we cannot copy the inferred shape to the circle node :( Even worse, we cannot simply remove the `const` keyword, because the method is inherited from loco::ShapeInference, and loco requires the `const` qualifier. In short, if we want to remove `const` from the definition, we would have to change loco.

I see. Thanks :)

mhs4670go commented 3 years ago

@llFreetimell Just a small suggestion. Please ignore it if it causes you trouble :)

struct BigShape
{
  Shape s;
  ShapeSignature ss;
};

...
BigShapeInferenceRule->infer(node, bigshape);
node->shape(bigshape->s);
node->shapesig(bigshape->ss);

...
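A compilable version of the sketch might look like this. All names and types here are illustrative stand-ins, not from the actual codebase, and the -1-to-1 rule is only an assumed example of how one combined inference could fill both fields:

```cpp
#include <cstdint>
#include <vector>

using Shape = std::vector<int32_t>;          // -1 not allowed here
using ShapeSignature = std::vector<int32_t>; // -1 marks a dynamic dimension

// Both results of one combined inference, as in the suggestion above.
struct BigShape
{
  Shape s;
  ShapeSignature ss;
};

// Hypothetical node with both fields, set in one step.
struct NodeToy
{
  Shape shape;
  ShapeSignature shape_signature;
  void set(const BigShape &big)
  {
    shape = big.s;
    shape_signature = big.ss;
  }
};

// One rule infers both: the static shape replaces -1 with 1, while the
// signature keeps the -1 entries.
BigShape infer_big_shape(const ShapeSignature &input)
{
  BigShape result;
  result.ss = input;
  for (auto dim : input)
    result.s.push_back(dim == -1 ? 1 : dim);
  return result;
}
```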
llFreetimell commented 3 years ago

@mhs4670go

I think the idea itself is not bad :) However, for now, I am not sure whether I can agree with your suggestion, because:

  • There may be cases where only ShapeInference is needed, or only ShapeSignatureInference. If so, we may have to provide four services, which could be another burden.
  • There may be cases where the core inference logic for shape and signature differs. I think most operations have the same logic, but I am not sure about all of them.

So, once the refactoring is finished and those uncertainties are cleared, your suggestion can be applied :)

mhs4670go commented 3 years ago

@llFreetimell Thank you for checking my idea.

There may be cases where only ShapeInference is needed, or only ShapeSignatureInference. If so, we may have to provide four services, which could be another burden.

I don't quite follow the situation you are describing.. it would be more helpful if you could give me a small example.

To be clear, this is why I can't understand what you are saying. First, in the optimize section (CircleOptimizer.cpp), we already include two passes by default: ShapeInferencePass and ShapeSignatureInferencePass.

phase.emplace_back(std::make_unique<luci::ShapeInferencePass>());
phase.emplace_back(std::make_unique<luci::ShapeSignatureInferencePass>());

If BigShapeInferencePass (the name could be improved later) is introduced, it would look like this:

phase.emplace_back(std::make_unique<luci::BigShapeInferencePass>());

This wouldn't need four services, just one.


There may be cases where the core inference logic for shape and signature differs. I think most operations have the same logic, but I am not sure about all of them.

As you said, most operations probably have the same logic. Even if some don't, in my opinion, keeping them together would make the algorithm easier to implement and to read.

seanshpark commented 3 years ago

@mhs4670go 's idea seems nice, but it involves more changes at once than @llFreetimell 's current task. I'd say go step by step: first, finish @llFreetimell 's shape signature enabling; second, try @mhs4670go 's idea and see its benefit...

llFreetimell commented 3 years ago

@mhs4670go @seanshpark

Thanks for the suggestions and opinions :) In conclusion, I will

seanshpark commented 3 years ago

Um... I would like to use another name than BigShape... @mhs4670go, can you describe the properties of BigShape so that we can name it to reflect its nature?

mhs4670go commented 3 years ago

Well, there is nothing to elaborate on regarding the properties, because I thought of it very simply.

Once again, I think it would be better to merge the shape family into one, because they are very similar and don't really seem to need to be divided. A static shape and a dynamic shape differ in only one way: -1 versus 1. So I even think only one shape is needed. I'm not sure, but one of the reasons shape_signature was introduced may be backward compatibility.

To sum up, there are two candidates.

struct TensorShape
{
  Shape static_shape;
  Shape dynamic_shape;
  // shape_status?
};

Or, as I said, they have only one difference, so just one shape is enough:

Shape shape;

Speaking of which, there is one more thing to consider: whether we follow TensorFlow's inference results. Currently, TensorFlow's dynamic shape inference is inconsistent, and we don't even know whether the inconsistency was intended or just a mistake. Maybe it would be good to go our own way.

seanshpark commented 3 years ago

Maybe it would be good to go our own way.

👍🏼

seanshpark commented 3 years ago

To use instead of BigShape... Um... I couldn't catch any particular properties right now... but it's a CircleNode shape... CircleNodeShape is one that popped up... or CircleTensorShape. As it's for dynamic shape, DynamicShape. As there are two inside, DualShape.

CircleShape may get confused with Shape operator...

llFreetimell commented 3 years ago

Or, as I said, they have only one difference, so just one shape is enough.

I like the second candidate and I think it makes sense. My concern was that both the static shape and the signature (dynamic shape) have problems.

And one more thing: if we write any passes, we have to consider both shape and signature, which makes the code harder to write.

Nevertheless, the reason I wrote the current draft instead of introducing CircleTensorShape (name is not fixed) is that I was not sure it is really okay. Introducing CircleTensorShape may make writing code in luci easier, but it creates a difference between the circle schema and the circle IR. I thought the IR should stay as similar to the schema as possible.

I want to hear your opinions about this :) (Of course, if we decide to follow this, the current draft... will be re-written.)

Until then, I will work only on dtype and pause the work for shape :)


FYI, if we introduce CircleTensorShape (name is not fixed) in luci, the code may be written as follows.

// Definition of circle node
/*
struct CircleNode : public loco::Node,
                    public loco::NodeMixin<loco::NodeTrait::DataType>,
                    public loco::NodeMixin<loco::NodeTrait::TensorShape>
*/
struct CircleNode : public loco::Node,
                    public loco::NodeMixin<loco::NodeTrait::DataType>,
                    public CircleTensorShape

                luci IR            ------>    circle schema

(CircleTensorShape) [1, 2, 3, 4]  ------> (shape) [1, 2, 3, 4]
                                     +--> (signature) []

(CircleTensorShape) [-1, 2, 3, 4] ------> (shape) [1, 2, 3, 4]
                                     +--> (signature) [-1, 2, 3, 4]
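The mapping in the diagram could be sketched at export time roughly as follows. This is a hypothetical helper, not actual exporter code: if the unified shape contains no -1, the exported signature stays empty; otherwise the exported shape replaces -1 with 1 and the signature keeps the original values:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical export-time split of a unified luci shape into the two
// fields of the circle schema. Not actual exporter code.
struct ExportedTensor
{
  std::vector<int32_t> shape;
  std::vector<int32_t> signature; // empty when the shape is fully static
};

ExportedTensor export_shape(const std::vector<int32_t> &circle_tensor_shape)
{
  ExportedTensor out;
  bool dynamic = std::any_of(circle_tensor_shape.begin(), circle_tensor_shape.end(),
                             [](int32_t d) { return d == -1; });
  for (auto d : circle_tensor_shape)
    out.shape.push_back(d == -1 ? 1 : d); // -1 becomes 1 in the static shape
  if (dynamic)
    out.signature = circle_tensor_shape;  // keep -1 entries in the signature
  return out;
}
```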
llFreetimell commented 3 years ago

Speaking of which, there is one more thing to consider: whether we follow TensorFlow's inference results. Currently, TensorFlow's dynamic shape inference is inconsistent, and we don't even know whether the inconsistency was intended or just a mistake. Maybe it would be good to go our own way.

I agree :)

llFreetimell commented 3 years ago

Since the overall direction and many details have changed, creating a new issue seems better. Therefore, this issue will be continued at #5501.

Close this issue!