ci-richard-mcelhinney / nhaystack

Niagara module for Project Haystack
https://www.project-haystack.org
Academic Free License v3.0

Create Entity and History Push to a Haystack server #12

Open patrickc77 opened 2 years ago

patrickc77 commented 2 years ago

@ci-richard-mcelhinney As discussed, we have a proposal to extend the Haystack client capabilities to include pushing history data to a Haystack server. It supports pushing to pre-defined entities, or automatic creation of point entities if they don't exist. If support is detected via the ops op, the client will use a create operation (createRec), which is a WideSky extension to the Project Haystack REST API standard.
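
As a rough illustration of that detection step, here is a minimal sketch using the org.projecthaystack Java toolkit (which nhaystack bundles); the server URI and credentials are placeholders, and this is not the driver's actual code:

```java
import org.projecthaystack.HGrid;
import org.projecthaystack.client.HClient;

public class ProbeCreateRec
{
  public static void main(String[] args)
  {
    // Placeholder URI and credentials; substitute your own server.
    HClient client = HClient.open("http://localhost:8080/haystack/", "user", "pass");

    // The 'ops' op lists the operations a server supports. createRec is
    // a WideSky extension, so most servers will not advertise it.
    HGrid ops = client.ops();
    boolean canCreate = false;
    for (int i = 0; i < ops.numRows(); ++i)
    {
      if ("createRec".equals(ops.row(i).getStr("name"))) { canCreate = true; break; }
    }
    System.out.println("createRec supported: " + canCreate);
  }
}
```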

Below is a proposed update to the docs, which adds to the existing section 7, "Using NHaystack as a client".


7.1. Exporting history to another Project Haystack server

The NHaystack client can be used to push history stored on the Niagara station to an upstream Project Haystack-compatible server. This feature has been successfully tested against both the NHaystack server and WideSky, but should work with any Project Haystack server.

The NHaystack History Export Manager can be found by right-clicking on Histories beneath your Project Haystack server instance and selecting Views → NHaystack History Export Manager. From here, clicking Discover will present you with a list of all the histories on the station.

After selecting the histories of interest, click Add, and you'll be presented with a dialogue box listing the histories and the options to be set.

From this form, each history must be associated with the Ref of the point to which it will write its historical data. There are three ways this can be done.

7.1.1 Manual association of points via the Add form

The lowest-common-denominator method is to use a standard Project Haystack client to get a list of valid points from your server, then manually copy the id of each point into the Id field on the Add form.

This should work for any Project Haystack server. The point is assumed to carry a his tag, be of the correct kind for the type of history being exported, and either carry a tz tag matching that of the history or reside on a server that supports time-zone conversion.
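
As a sketch of that manual workflow, the following uses the org.projecthaystack Java toolkit to list candidate points; the filter, URI, and credentials are illustrative assumptions:

```java
import org.projecthaystack.HDict;
import org.projecthaystack.HGrid;
import org.projecthaystack.client.HClient;

public class ListHisPoints
{
  public static void main(String[] args)
  {
    // Placeholder connection details.
    HClient client = HClient.open("http://localhost:8080/haystack/", "user", "pass");

    // Read every point tagged for history storage; the id column is what
    // gets copied into the Id field on the Add form.
    HGrid points = client.readAll("point and his");
    for (int i = 0; i < points.numRows(); ++i)
    {
      HDict rec = points.row(i);
      // kind and tz are assumed present, per the tagging rules above.
      System.out.println(rec.id() + "  kind=" + rec.getStr("kind")
          + "  tz=" + rec.getStr("tz"));
    }
  }
}
```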

For more detail on tagging assumptions, see section 7.1.2 below.

7.1.2 Automatic look-up using axStation and axHistoryId

Depending on the Project Haystack server, it may be easier to retrieve a list of histories from the station and tag the points so the station can find them. Such a list can be obtained from WorkPlace via the following procedure:

  1. Double-click your station's "History" space (this takes you to a chart view).
  2. Press CTRL+L to bring up the ORD dialogue.
  3. At the end of the text field (after history:), add bql:select id, recordType, timeZone from / (the assembled ORD is shown after this list).
  4. From the File menu, select Export.
  5. Choose "Text to CSV" and "Save to File".
  6. Enter the path where you want the file and click OK.
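
Assuming the default history space, the assembled ORD from steps 1–3 should look something like this:

```
history:|bql:select id, recordType, timeZone from /
```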

From that list, filter down to the histories you want to upload. To associate a history in this list with a point in your asset model, tag the point with the following two tags: axStation, set to the name of the Niagara station, and axHistoryId, set to the id of the history on that station.

The point will also need its kind and tz tags set to match the history being exported.
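
To illustrate the look-up, here is a hypothetical query a client could run to find the point for a given history; the station name (demo) and history id (/demo/OutdoorTemp) are invented for the example:

```java
import org.projecthaystack.HGrid;
import org.projecthaystack.client.HClient;

public class FindPointForHistory
{
  public static void main(String[] args)
  {
    // Placeholder connection details.
    HClient client = HClient.open("http://localhost:8080/haystack/", "user", "pass");

    // Match on the two association tags; the values here are hypothetical.
    HGrid res = client.readAll(
        "point and axStation == \"demo\" and axHistoryId == \"/demo/OutdoorTemp\"");
    if (res.numRows() == 1)
      System.out.println("found point: " + res.row(0).id());
    else
      System.out.println("no unique match (" + res.numRows() + " rows)");
  }
}
```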

7.1.3 Automatic creation of points

If the server supports the createRec call, points will be created automatically on the Project Haystack server. The station will issue an HTTP POST to the server's createRec endpoint with a single-row grid describing the new point, carrying the axStation and axHistoryId tags described above along with the point's kind and tz.

It is assumed that siteRef and equipRef are optional and can be filled in later by the end user via other CRUD ops. The axStation and axHistoryId tags can then be used to obtain listings of the orphaned point entities so they can be associated with the correct site and equip.
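
Because createRec is a WideSky extension rather than a standard op, the exact payload may vary between servers. The sketch below assumes a single-row grid carrying the tags discussed above, posted through the generic call mechanism of the org.projecthaystack client; the column set, tag values, and response handling are assumptions for illustration:

```java
import org.projecthaystack.HGrid;
import org.projecthaystack.HGridBuilder;
import org.projecthaystack.HMarker;
import org.projecthaystack.HStr;
import org.projecthaystack.HVal;
import org.projecthaystack.client.HClient;

public class CreatePoint
{
  public static void main(String[] args)
  {
    // Placeholder connection details.
    HClient client = HClient.open("http://localhost:8080/haystack/", "user", "pass");

    // Single-row request grid. siteRef and equipRef are omitted and
    // expected to be filled in later by the end user.
    HGridBuilder b = new HGridBuilder();
    b.addCol("dis");
    b.addCol("point");
    b.addCol("his");
    b.addCol("kind");
    b.addCol("tz");
    b.addCol("axStation");
    b.addCol("axHistoryId");
    b.addRow(new HVal[] {
        HStr.make("Outdoor Temp"),
        HMarker.VAL,
        HMarker.VAL,
        HStr.make("Number"),
        HStr.make("Sydney"),
        HStr.make("demo"),
        HStr.make("/demo/OutdoorTemp"),
    });

    // createRec is the WideSky extension op; a standard server will
    // reject this call. The response format is also an assumption.
    HGrid res = client.call("createRec", b.toGrid());
    System.out.println("created: " + res.row(0).id());
  }
}
```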

7.2 Fine-tuning payload sizes

The number of records exported at a time can be tuned via the Upload Size property. This is the number of rows used in each hisWrite request. It defaults to 10000, which, depending on the server, may work or may result in a "Request Entity Too Large" error from the server.

In such cases, the driver automatically halves this value and tries again, so if the server has difficulty processing the configured number of records, you'll see progressively smaller payloads attempted (5000, 2500, 1250, etc.). It will never go below 1 record.
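
The halving behaviour is built into the driver, but conceptually it amounts to something like the sketch below; the chunking helper and the assumption that any CallException means "payload too large" are simplifications, not the driver's actual code:

```java
import java.util.Arrays;

import org.projecthaystack.HHisItem;
import org.projecthaystack.HRef;
import org.projecthaystack.client.CallException;
import org.projecthaystack.client.HClient;

public class ChunkedHisWrite
{
  // Push items to the given point, halving the chunk size on failure.
  static void push(HClient client, HRef id, HHisItem[] items, int uploadSize)
  {
    int chunk = uploadSize;                       // e.g. the default of 10000
    int pos = 0;
    while (pos < items.length)
    {
      int end = Math.min(pos + chunk, items.length);
      try
      {
        client.hisWrite(id, Arrays.copyOfRange(items, pos, end));
        pos = end;                                // chunk accepted, advance
      }
      catch (CallException e)
      {
        // Treat the failure as "Request Entity Too Large": halve the
        // payload and retry, but never go below a single record.
        if (chunk == 1) throw e;
        chunk = chunk / 2;
      }
    }
  }
}
```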

You can manually set this value to whatever payload size suits your requirements.

7.3 Setting an alternate start point

By default, exports begin at the very first record in the history. This can be changed via the "Upload From Time" field, which specifies the starting point for the next export task.

It can be used to "skip over" invalid data captured during commissioning, or to re-upload data in the event that it is lost or corrupted on the upstream Project Haystack server.

ci-richard-mcelhinney commented 2 years ago

Thanks @patrickc77, I'll take a look and let you know if we have any further suggestions.

patrickc77 commented 2 years ago

How is the review going? Would you like me to submit some code? It would still need a rebase against the latest branch.

ci-richard-mcelhinney commented 2 years ago

@patrickc77 I have added the documentation you provided to the Wiki so we can collaborate on this more easily. Please let me know if you need to be added as a contributor to the project so you can edit where necessary. I've modified some of the tag names to make them a little more generic as well. I will review the docs further today.

We probably need to document the REST API calls for the extra Project Haystack ops somewhere; would you like to do that on this Wiki as well?

patrickc77 commented 2 years ago

I'm happy to be added as a contributor. We didn't add create, update, and delete capability to the nHaystack server; they are a WideSky extension at the moment, but I'm happy to document them. I'm not sure where in the Wiki they would go.

callum-rosel commented 2 years ago

A couple of comments:

ci-richard-mcelhinney commented 2 years ago

Any rules for required tags will be specific to the implementation of a Haystack server at the minute. I'm pretty sure there aren't any documented rules for required tags; they might be documented somewhere, but enforcement will be down to the specific implementation.

patrickc77 commented 2 years ago

@callum-rosel

  1. unit tag: if a specific Haystack implementation requires it, that's fine. We needed to document the minimum requirements for the nHaystack client.
  2. Data sync: is there anything in section 7.3 that's not covered?

patrickc77 commented 2 years ago

@ci-richard-mcelhinney There are slight differences between the original comment and the version in the wiki. Shall we treat the wiki as the design reference?

ci-richard-mcelhinney commented 2 years ago

@patrickc77 Agreed, let's edit and work on the wiki version; I think it tracks changes as well.

Do you need me to give you access to edit it?

patrickc77 commented 2 years ago

Yes please.