Closed: inbra closed this issue 4 years ago
So camelCase is usually the native Java naming used by the API, and snake_case is the Python client's conversion of those names as interpreted from the API spec in the Swagger. Everything in registry/models/* is procedurally generated from the Swagger, so the problem may be there.
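To make that naming split concrete, here is a minimal, self-contained sketch of the pattern (illustrative only, not the actual generated code; the class and its helpers are hypothetical): a swagger-codegen style model keeps snake_case attributes in Python while the JSON wire format uses the API's camelCase names, bridged by an `attribute_map`.

```python
# Illustrative sketch only: a swagger-codegen style model keeps an
# attribute_map from the Python snake_case attribute name to the
# camelCase key used in the JSON wire format.
class VersionedFlowSketch:
    attribute_map = {
        'bucket_identifier': 'bucketIdentifier',  # python name -> wire name
        'bucket_name': 'bucketName',
    }

    def __init__(self, bucket_identifier=None, bucket_name=None):
        self.bucket_identifier = bucket_identifier
        self.bucket_name = bucket_name

    def to_dict(self):
        # to_dict() keeps the snake_case names, which is why a JSON file
        # produced from it differs from the camelCase wire format.
        return {k: getattr(self, k) for k in self.attribute_map}

    def to_wire(self):
        # Renames each attribute to its camelCase wire name.
        return {wire: getattr(self, py)
                for py, wire in self.attribute_map.items()}


flow = VersionedFlowSketch(bucket_identifier='abc-123', bucket_name='Releases')
print(flow.to_dict())  # {'bucket_identifier': 'abc-123', 'bucket_name': 'Releases'}
print(flow.to_wire())  # {'bucketIdentifier': 'abc-123', 'bucketName': 'Releases'}
```

The asymmetry between `to_dict()` output and the wire format is what the rest of this thread turns on.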
But something else is nagging at me - did you export the Flow using NiPy itself, or did you use the new Download Flow process either in the GUI or via a REST call? Also, I'll need some more details on how to reproduce your issue if you can help me set it up.
I have been using the nipyapi interface as follows:
```python
...
self._buckets_api = registry.apis.bucket_flows_api.BucketFlowsApi()
...

def export_flow(self, p_bucket_id, p_flow_id, p_file_path) -> bool:
    try:
        l_snapshot = self._buckets_api.get_latest_flow_version(p_bucket_id, p_flow_id)
    except ApiException as e:
        self._log.error('unable to retrieve latest version of flow {}: {} - {}'.format(p_flow_id, e.status, e.body))
        return False
    try:
        utils.fs_write(utils.dump(l_snapshot.to_dict()), p_file_path)
    except OSError as e:
        self._log.error('unable to open file for writing {}: {}'.format(p_file_path, e))
        return False
    return True
```
...pretty straightforward, I guess. The nifi-cli tool does its export without converting the keys to snake_case - which is why I have to use either nipyapi for both export and import, or nifi-cli for both. I am trying to automate as many steps in the process as possible, though. Import is simpler still:
```python
from nipyapi import versioning

@staticmethod
def import_flow(p_bucket_id: str, p_importfile: str, p_flow_id: str = None, p_flow_name: str = None) -> Any:  # pragma: no cover
    return versioning.import_flow_version(p_bucket_id, file_path=p_importfile, flow_name=p_flow_name, flow_id=p_flow_id)
```
Now that I'm looking at this side by side, I notice I used different endpoints for export and import. Could this be the cause? I'll try on Monday.
Ok, I have been checking with the export_flow_version function from the versioning package: here the re-conversion to camelCase is already done before writing to file. That makes it not only compatible with the nifi-cli tool, but also avoids the issue described above. And it makes my code a bit simpler:
```python
def export_flow(self, p_bucket_id, p_flow_id, p_file_path) -> bool:
    try:
        versioning.export_flow_version(p_bucket_id, p_flow_id, file_path=p_file_path, version=None)
        return True
    except Exception as e:
        self._log.error('Exception: {}'.format(e))
        return False
```
Problem solved, thanks anyway for the reflection! ;)
Sorry I haven't been more helpful through this - still juggling work / childcare during Covid here in the UK.
I am pleased that the solution was already in the codebase - nipyapi.versioning was written when the Registry first came to the project, and I do not remember solving this issue, but perhaps I did at the time. Too many beers between then and now, perhaps! Now that I think about it, though, you may see a method in nipyapi.utils for loading the objects back in using the DTO specified in the swagger spec, which is where I believe this is addressed: https://nipyapi.readthedocs.io/en/latest/nipyapi-docs/nipyapi.html#nipyapi.utils.load
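For anyone reading along, the idea behind such a DTO-aware loader can be sketched in a few lines. This is a conceptual illustration only, not the actual nipyapi.utils.load implementation: to rehydrate a dict produced by a model's to_dict(), the snake_case keys have to be matched against the keys of the model's attribute_map, not its camelCase values.

```python
# Conceptual sketch (not nipyapi's actual implementation): rehydrate a
# snake_case dict, as produced by a model's to_dict(), back into model
# attribute values by matching against the keys of attribute_map.
attribute_map = {
    'bucket_identifier': 'bucketIdentifier',
    'bucket_name': 'bucketName',
}

def rehydrate(data):
    # Accept either naming convention: try the snake_case key first,
    # then fall back to the camelCase wire name.
    out = {}
    for py_name, wire_name in attribute_map.items():
        if py_name in data:
            out[py_name] = data[py_name]
        elif wire_name in data:
            out[py_name] = data[wire_name]
    return out

print(rehydrate({'bucket_identifier': 'abc'}))  # from a to_dict() export
print(rehydrate({'bucketIdentifier': 'abc'}))   # from the wire format
```

Both calls yield `{'bucket_identifier': 'abc'}`, which is exactly the tolerance the deserializer described in this issue lacks.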
Description
Hi, thanks for the great work so far! However, I have stumbled upon a little problem that may be an issue: I'm trying to export flows from one NiFi Registry to another by saving the flow temporarily in a JSON file. Export works fine, and I have already worked around the problem with the nested versioned flows. While importing the file I get a deserialization error, though. It seems like trouble with the mapping between camelCase and snake_case conversion.
What I Did
Essentially trying to import a versionedFlowSnapshot:
I have been tracking this with the help of a debugger: the problem lies in `python3.8/site-packages/nipyapi/registry/api_client.py` and its method `__deserialize_model`, line 626 (`and klass.attribute_map[attr] in data \`), where the mapper looks up the original name of the property but cannot find it in the map provided, because it is looking at the values, but should be looking at the keys of that map.

`klass` is `<class 'nipyapi.registry.models.versioned_flow.VersionedFlow'>` and its `attribute_map` is:

```python
{'link': 'link', 'identifier': 'identifier', 'name': 'name', 'description': 'description', 'bucket_identifier': 'bucketIdentifier', 'bucket_name': 'bucketName', 'created_timestamp': 'createdTimestamp', 'modified_timestamp': 'modifiedTimestamp', 'type': 'type', 'permissions': 'permissions', 'version_count': 'versionCount'}
```

`attr` is `'bucket_identifier'`, so looking it up in the map returns `'bucketIdentifier'`. `data` is:

```python
{'bucket_identifier': '1adcd98a-69ac-48af-beb9-eb8006b8c860', 'bucket_name': 'Releases', 'created_timestamp': 1597229769204, 'description': 'RawData-Transfer v1.0.0', 'identifier': 'a2d3911c-0fbc-4030-a53b-fcea310e7645', 'link': {'href': 'buckets/1adcd98a-69ac-48af-beb9-eb8006b8c860/flows/a2d3911c-0fbc-4030-a53b-fcea310e7645', 'params': {'rel': 'self'}}, 'modified_timestamp': 1597229770042, 'name': 'RawData-Transfer v1.0.0', 'permissions': None, 'type': 'Flow', 'version_count': 1}
```
When looking for `'bucketIdentifier'` in `data` the result is False, so the (required) property is not set.

Urgency
This issue is at the moment blocking my way to using nipyapi to do the import. I'll have to resort to nifi-cli for the moment, but the ops guys won't be happy.
Thank you for looking into this!