EntropyOrg / perl-AI-TensorFlow-Libtensorflow

https://p3rl.org/AI::TensorFlow::Libtensorflow

tags #22

Open pjfalbe opened 1 year ago

pjfalbe commented 1 year ago

I have successfully written a command-line script based on your notebook and matched its results. Now I am trying to use a saved container model created on GCP, and I see several differences from your saved model. If I run the following on your model I get these results:

saved_model_cli show --dir $PWD --tag_set serve --signature_def serving_default --list_ops

The MetaGraph with tag set ['serve'] contains the following ops: {'Pack', 'VarHandleOp', 'Identity', 'NoOp', 'Mul', 'BiasAdd', 'AssignVariableOp', 'ReadVariableOp', 'AddV2', 'StatefulPartitionedCall', 'Select', 'Shape', 'Softmax', 'StringJoin', 'DepthwiseConv2dNative', 'Reshape', 'Const', 'AvgPool', 'Relu6', 'MergeV2Checkpoints', 'Placeholder', 'Sub', 'Conv2D', 'ShardedFilename', 'RestoreV2', 'StaticRegexFullMatch', 'SaveV2', 'Squeeze', 'FusedBatchNormV3'}
The given SavedModel SignatureDef contains the following input(s):
  inputs['inputs'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 224, 224, 3)
      name: serving_default_inputs:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['logits'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1001)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict

If I run it on the GCP container's saved model I get:

The MetaGraph with tag set ['serve'] contains the following ops: {'Placeholder', 'Tile', 'Enter', 'LogicalAnd', 'Relu', 'Cast', 'TensorArrayReadV3', 'DepthwiseConv2dNative', 'TensorArrayWriteV3', 'Conv2D', 'NoOp', 'TensorArrayV3', 'LoopCond', 'ExpandDims', 'Slice', 'Const', 'Softmax', 'Shape', 'Rsqrt', 'FakeQuantWithMinMaxVars', 'ResizeBilinear', 'TensorArrayScatterV3', 'MatMul', 'Switch', 'Squeeze', 'Reshape', 'AddV2', 'Range', 'Merge', 'Mean', 'Exit', 'Identity', 'Mul', 'BiasAdd', 'Sub', 'Less', 'Pack', 'TensorArrayGatherV3', 'DecodeJpeg', 'NextIteration', 'RealDiv', 'StridedSlice'}
The given SavedModel SignatureDef contains the following input(s):
  inputs['image_bytes'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: Placeholder:0
  inputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: Placeholder_1:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['key'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: Identity:0
  outputs['labels'] tensor_info:
      dtype: DT_STRING
      shape: (-1, 2)
      name: Tile:0
  outputs['scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 2)
      name: scores:0
Method name is: tensorflow/serving/predict

I don't get a preferred input size, and I don't see an op named StatefulPartitionedCall, which I think is what makes these calls die:

# Node names from your model's SavedModel signature (the part of each
# tensor name before the ':0' output index).
my %ops = (
    in  => $graph->OperationByName('serving_default_inputs'),
    out => $graph->OperationByName('StatefulPartitionedCall'),
);

Any ideas?

zmughal commented 1 year ago

OK, so it looks like the input names are:

  • image_bytes and
  • key

and the output names are:

  • key,
  • labels, and
  • scores.
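
Going by those node names (the part of each tensor name before the ':0' output index), the lookups would presumably be something along these lines; this is an untested sketch, and some of the more generic node names (e.g. 'Identity') may not be unique in your graph:

# Untested: the hash keys are arbitrary labels; the OperationByName()
# arguments are the node names from the signature output above.
my %ops = (
    image_bytes => $graph->OperationByName('Placeholder'),
    key_in      => $graph->OperationByName('Placeholder_1'),
    key_out     => $graph->OperationByName('Identity'),
    labels      => $graph->OperationByName('Tile'),
    scores      => $graph->OperationByName('scores'),
);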

I currently do not have any code that tests the DT_STRING data type though, so this won't work quite yet. I didn't have a simple model to try it out on, which is why I left out support at the time. What is the underlying model (that is, what model does this derive from), so I can see the model documentation? I would like to add an example for that so you can get something working faster.

From searching around, I believe that the data that goes into image_bytes will need to be something like JPEG and the model will decode the data from that, but without the documentation, I can't try to reproduce it myself.
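
If/when DT_STRING tensors are supported, the input would presumably just be the raw, undecoded bytes of a JPEG file. A minimal sketch of that part (the file name here is a made-up placeholder):

use Path::Tiny qw(path);

# Raw, undecoded JPEG bytes; the graph's own DecodeJpeg op would do the decoding.
my $jpeg_bytes = path('example.jpg')->slurp_raw;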

pjfalbe commented 1 year ago

It is Vertex AI doing AutoML; is that what you mean? I could send the saved model to you.

zmughal commented 1 year ago

It is Vertex AI doing AutoML; is that what you mean?

What I mean: is there a model that this is a fine-tuning of, or is this directly one of the models that Google provides without any fine-tuning? I just need the name of the pre-trained foundation/base model or architecture this is based on, and I should be able to download it myself from Google.

I could send the saved model to you.

Yeah, that could work if necessary, but I would prefer something openly available from Google so that I can create another notebook around it.

pjfalbe commented 1 year ago

I will see if I can get the model name.

pjfalbe commented 1 year ago

From the data scientist: "This is not a Google model but one we created for you. This was not a model provided by Google." So it wouldn't be an openly available model.