ConsumerDataStandardsAustralia / standards

Work space for data standards development in Australia under the Consumer Data Right regime

Decision Proposal 39 - Draft Standards #39

Closed JamesMBligh closed 5 years ago

JamesMBligh commented 6 years ago

This thread is for feedback on the full 2 November working draft. Overarching feedback on the draft can be made here, and complete responses to the draft in .doc format can also be uploaded.

Updated 11 November 2018: Thanks to everyone who is already adding feedback to this thread. With organisations having three weeks to respond to the working draft in full, we're going to hold off on deeply engaging with this thread until after 23 November. We are monitoring and absorbing feedback in the meantime, so please do keep it coming. Cheers, Ellen

perlboy commented 6 years ago

Could you please publish the source swagger.json used to generate the docs? The GitHub repository has the generated documentation but not the definition itself. At the moment we are "estimating" the swagger definition, so it would be good to get the official one as soon as possible, even if it is in draft.

In lieu of an official swagger.json we have published an approximated OpenAPI 3.0.x compatible version within our CDR Sandbox project: https://github.com/bizaio/cdr-sandbox/blob/develop/support/swagger.json

We have verified compilation using the online Swagger editor (https://editor.swagger.io) although we admit we could clean up some general ordering. Once things are locked down we will do further cleanup.

We will review changes following this thread and synchronise as quickly as possible as we work towards a development sandbox in support of the emerging CDR standard.

speedyps commented 6 years ago

Just a question on the Transaction responses as provided in the specification.

In existing data delivery in Australia (via files), and in the UK Banking Spec, the transaction type is supplied with every transaction. This is a key field of a transaction, and the draft seems to have excluded this field.

In the UK, the Transaction response must include either:

This seems to miss the requirement stated in 5.3.2 (Transaction Data) of the ACCC CDR Rules Framework.

BrianParkerAu commented 6 years ago

I have called out the same issue.

BrianParkerAu commented 6 years ago

By way of an intro to everyone, 86400 is a new entrant to the banking market and is currently applying for an ADI license. We are supporters of Open Banking and its potential use cases to enhance customer experiences.

The APIs that have been discussed in this open forum, and visible on the portal seem to be a very small subset of the APIs available in the UK model.

The information returned in a Transaction query is lacking key elements that would create consumer value, specifically the BankTransactionCode and MerchantDetails elements. As the specification is currently defined, the consumer of the data will need to parse the content to categorise transactions, and the quality is likely to be lower than if the information were provided directly. Parsing the contents of the description is problematic as the format lacks any industry-recognised structure. I would also advocate adding a BillerDetails element to structure BPAY payments, i.e. include the biller code, name and CRN.
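A hedged sketch of what a transaction with structured categorisation elements might look like, loosely modelled on the UK Open Banking shapes mentioned above. All field names here are illustrative, not part of the draft standard:

```python
# Hypothetical transaction payload with structured categorisation,
# loosely modelled on the UK Open Banking BankTransactionCode /
# MerchantDetails shapes. Field names are illustrative only.
transaction = {
    "transactionId": "txn-0001",
    "description": "WOOLWORTHS 1234 SYDNEY",  # free-form, hard to parse
    "bankTransactionCode": {                  # ISO 20022-style code pair
        "code": "PMNT",
        "subCode": "RCDT",
    },
    "merchantDetails": {                      # sourced from AS2805 fields
        "merchantName": "Woolworths",
        "merchantCategoryCode": "5411",
    },
    "billerDetails": {                        # proposed BPAY structure
        "billerCode": "123456",
        "billerName": "Example Biller",
        "crn": "987654321",
    },
}

# A data consumer can then categorise without parsing the description:
category = transaction["bankTransactionCode"]["code"]
```

With structured fields like these present, the free-form description becomes display text rather than the only source of categorisation data.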

I also note the lack of support for Standing Orders. I had expected this to be front of mind, as from a consumer perspective they need visibility of the payments they have scheduled. The UK model appears quite robust in this regard (OBReadStandingOrder1). For a customer looking to transfer their banking relationship to a new FI, the ability to access this information would make the process easier and less error prone.

Also missing is the concept of an Address Book. This is implemented in the UK model as OBReadBeneficiary. From a customer perspective having the ability to port their address book to a new provider creates a better experience.

The masking applied to accounts and PANs is inconsistent with industry standards. The last 4 digits should be displayed in the clear - not 3 as currently defined.
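The industry convention referred to above can be sketched as a simple masking helper that leaves the last four digits in the clear. This is an illustrative implementation, not anything defined by the standard:

```python
def mask_pan(pan: str, visible: int = 4) -> str:
    """Mask all but the last `visible` digits of a PAN or account
    number, preserving non-digit separators such as spaces."""
    digits_seen = 0
    out = []
    for ch in reversed(pan):
        if ch.isdigit():
            digits_seen += 1
            out.append(ch if digits_seen <= visible else "x")
        else:
            out.append(ch)  # keep separators in place
    return "".join(reversed(out))

print(mask_pan("4111 1111 1111 1234"))  # xxxx xxxx xxxx 1234
```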

As others have noted the Extended Transaction data for NPP is incomplete. NPP transactions can be delivered as "Single Credit Transfers" (SCT) that have no defined service levels, or as an Overlay service that defines service levels (eg OSKO payments). The Services enumeration currently doesn't address this.

Also missing from the NPP definition are "non-financial" transactions. These could include "request to pay" style messages delivered to a payer, who then has the ability to respond and send the funds to the payee using the contents of the message.

Transactions currently have defined states - PENDING or POSTED. Pending transactions normally relate to card based authorisations that are awaiting settlement. When settled the amounts may not match the values stored on the PENDING transaction and may carry FX rates and fees. These don't seem to be captured anywhere in the spec.

Also missing is a discussion on when PENDING transactions expire. Typically this is within 3-5 business days of the authorisation request but varies by institution. This suggests that all calls to retrieve transactions will need to select at least 3-5 days of transactions.... unless all current PENDING transactions are returned if the selection end-date is the current date. Currently the input parameters only allow selection of transactions POSTED between the selected dates.

speedyps commented 6 years ago

One other question relating to transactions.

Unlike the Direct Debits and Balances schemas, where the same response is used for both the bulk call and the specific-account call, the bulk and specific-account transaction calls use different responses (I'm excluding the Detailed call from this).

All include accountId,

But Get Transactions For Account has displayName and nickName, and doesn't flag if additional detail is available (isDetailAvailable).

Get Bulk Transactions however, doesn't have displayName and nickName, but has the isDetailAvailable flag.

Ideally, shouldn't these two calls just return the same information? I think they should both have the isDetailAvailable flag at the very least.

I personally wouldn't have the account displayName and nickName in a transaction call.

davidthornton commented 6 years ago

The single biggest issue facing this spec is the proper handling of pending and posted transactions, as they specifically relate to scheme authorisations.

The ability to connect each pending transaction to a posted journal entry once the transaction has settled is crucial to delivering value to consumers, because it directly affects a customer's currently available funds at any given time.

If a customer (or their nominee) isn't able to determine the customer's currently available funds, the vast multitude of realtime budgeting and value-added calculations, services and potential notifications falls out of sync.

In short, improper handling of account authorisations would render the spec almost useless.

speedyps commented 6 years ago

There is a mistake in the https://consumerdatastandardsaustralia.github.io/standards/#get-transactions-for-specific-accounts area.

The 200 response points to AccountsBalancesResponse not (I assume) AccountTransactionsResponse

speedyps commented 6 years ago

There are no end-date parameters on the Transaction requests, so it isn't possible to retrieve transactions within a specified range, only from a date to today.

Going back through the Decision 28, I couldn't see commentary suggesting that the end-date parameter was being removed.

BrianParkerAu commented 6 years ago

The extendedData element appears to be returned only when querying a specific transaction: accounts/{accountId}/transactions/{transactionId}.

A call to accounts/transactions does not return this data element - instead it returns isDetailAvailable = true/false.

It would be preferable to provide a parameter to control the return of the extendedData element in the following APIs:

accounts/{accountId}/transactions and accounts/transactions

Alternately always return the extendedData element

BrianParkerAu commented 6 years ago

@speedyps You are correct ... the end-date is no longer part of the filter.... although it is part of the UK standard.

@JamesMBligh Was this intentionally removed?

JamesMBligh commented 6 years ago

Hi all, a minimal response to some of the feedback to clarify. I won't attempt to respond to all feedback here, but there are some logistics to comment on...

Swagger: Yes, swagger is coming. It may take a few more days.

Discrepancies: There are some discrepancies between the decisions and the draft spec. There was a lot of content to transition and sometimes I made mistakes; apologies. This includes the transaction payloads, end-time filters, missing products, etc.

I have been reviewing since release and believe I have fixed these discrepancies. I suspect that some will still remain. Please note if something seems wrong. Also interested in whether the documentation is readable.

Transactions Hopefully there is still value in the spec if we don’t get transactions exactly correct in the first draft. Some use cases don’t even require transactions ;) At the moment I believe the transaction documentation matches the discussion on the transaction payload decision. If it doesn’t let me know, otherwise the feedback is noted and will be considered.

Missing Payloads

-JB-

BrianParkerAu commented 6 years ago

@JamesMBligh The Swagger doesn't show the banking/payees endpoints, although they are referenced in issue 32. Could I suggest that you provide a folder recording all of the final decisions? At present it requires trawling through all of the closed items to locate the PDFs.

BrianParkerAu commented 6 years ago

The latest Swagger defines the common field types "MaskedPANString" and "MaskedAccountString". Both of these show the last 4 digits in the clear.

The Get Account Detail response shows the account being returned as MaskedAccountNumber, with text indicating the last 3 digits are in the clear.

This appears to be an error. When details are returned, the account number should be returned in the clear, i.e. change the name from "maskedNumber" to "accountNumber" and set the type to String.

Consider removing the MaskedAccountNumber type completely as it conflicts with the common type "MaskedAccountString".

BrianParkerAu commented 6 years ago

When accountNumber is returned in the clear (see previous comment), guidance should be provided on formatting the BSB and account number. Specific reference needs to be made to handling 5-digit BSBs: unconditionally format as a 6-digit number, followed by a separator (- or /), followed by the account number, which should also have formatting rules (e.g. 9-digit numbers with leading zeroes). NPP participants will be well aware of the problems free-form account numbers have caused.

Alternately implement a BSBAccount type {"bsb":"12345", "accountNumber":"12345"} where leading zeroes don't need to be considered.
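A minimal sketch of the canonical formatting proposed above, assuming 5-digit BSBs are zero-padded to 6 digits and account numbers to 9. The padding rules are the commenter's suggestion, not a settled standard:

```python
def format_bsb_account(bsb: str, account: str) -> str:
    """Canonical rendering per the suggestion above: zero-pad the BSB
    to 6 digits and the account number to 9, joined with a hyphen.
    (Padding widths are illustrative, not a settled standard.)"""
    return f"{int(bsb):06d}-{int(account):09d}"

print(format_bsb_account("12345", "12345"))  # 012345-000012345
```

The alternate BSBAccount structure in the comment above avoids the padding question entirely by keeping the two components as separate fields.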

BrianParkerAu commented 6 years ago

Rule 5.3.2 requires the inclusion of "any identifier or categorisation of the transaction by the data holder (that is, debit, credit, fee, interest, etc.)".

This seems to be omitted from the Get Transactions and Get Transaction Details responses.

I had expected to see structures similar to the UK model: either the BankTransactionCode (which uses the ISO transaction code list), the ProprietaryBankTransactionCode, or both may be populated.

Parsing the description to determine the transaction type is problematic due to the proprietary formatting of this field by FIs.

BrianParkerAu commented 6 years ago

Rule 5.3.2 should also cover the identification of Card based transactions using the AS2805 fields supplied in the message.

The MerchantDetails structure as implemented in the UK model would satisfy this need. Populating the merchant name and merchant category code from the AS2805 fields will deliver a more accurate result than attempting to parse the message, and will add considerable value for consumers of the API.

jh-a commented 6 years ago

Virtually every use case of value to customers requires transactions and their detail. As transaction detail has to be requested individually, what would be one call to the transactions endpoint with ReadTransactionsDetail as the permission grant (UK model) could become 1,001 calls where each transaction's detail is pulled individually (Aus model). As soon as this approach is scaled to any number of customers/accounts, it becomes completely unsustainable for data providers and data consumers alike.

speedyps commented 6 years ago

Transactions Hopefully there is still value in the spec if we don’t get transactions exactly correct in the first draft. Some use cases don’t even require transactions ;) At the moment I believe the transaction documentation matches the discussion on the transaction payload decision. If it doesn’t let me know, otherwise the feedback is noted and will be considered.

@JamesMBligh, not having transaction types, i.e. things like Interest, Withdrawal, Deposit, Transfer, Direct Debit, ATM, ACH, Dividends, Check, etc., is a step backwards from what is available now. This has been a fundamental part of data delivery for the last 20 years. I would strongly suggest that the transaction record include a Type field. My recommendation would be to use the UK Open Banking spec's ProprietaryBankTransactionCode entry. This would match existing functionality and allow providers to use their own internal codes (and optionally specify if they adhere to an existing standard).

If we wanted to stretch that bit further (but it might incur too much work for providers), I would recommend including the BankTransactionCode as per the UK Open Banking spec, (mapped to ISO 20022). This would mean that all transactions will be being reported the same way across all providers. However, I accept that this is probably one step too far based on our current timelines.

perlboy commented 6 years ago

When accountNumber is returned in the clear (see previous comment) guidance should be provided on the formatting to include BSB and Account. Specific reference needs to be made to handling 5 digit BSB's (ie unconditionally format as a 6 digit number followed by a separator (- or /) followed by the account number (these also should have formatting rules (eg 9 digit numbers with leading zeroes).

This sounds like "a good idea" and can be solved with an OpenAPI format definition on the strings in question.

If we wanted to stretch that bit further (but it might incur too much work for providers), I would recommend including the BankTransactionCode as per the UK Open Banking spec, (mapped to ISO 20022). This would mean that all transactions will be being reported the same way across all providers. However, I accept that this is probably one step too far based on our current timelines.

I agree that transaction type records seem like a relevant component to add to the standard. With that said, I oppose the idea of adopting a proprietary record set because it's "too hard" to adopt an ISO standard. Provider implementation difficulty shouldn't trump the use of open standards, not to mention that data mappers are already used heavily within organisations (i.e. it isn't particularly difficult). If ISO 20022 meets the requirements it should be adopted.

Without wanting to politicise this process I dare say that organisations will come up with a myriad of reasons why they won't be able to hit the proposed timeline. While some of these will be valid it seems to me that the mere fact this standard is coming about through legislation indicates implementation challenges are far outweighed by protectionist attitudes in the industry generally. Happy to be refuted on this but maybe this is best taken offline (or integrated with a helpful response) to avoid poisoning @JamesMBligh's thread.

Virtually every use case of value to customers requires transactions and their detail.

I disagree, a mere listing of high level transaction data seems like a very common use case akin to the current typical transaction list within normal internet banking applications. With that said though I agree that making 1:N calls to obtain detailed information could represent a significant API scale demand.

Perhaps the middle ground here is that GET /accounts/{accountId}/transactions could include a detailed=true parameter which determines whether the returned object is BasicTransaction or TransactionDetail. The challenge of this approach is that the response is now softly typed with an anyOf between the two. I'd also point out that anyOf was not supported until OpenAPI 3.0.x so this makes older implementations incompatible. At this stage I don't believe the standard declares a minimum OpenAPI version, perhaps it should?

The alternate to the above would be the introduction of a GET /accounts/{accountId}/transactions/detailed or GET /accounts/{accountId}/transactions-detailed method which would allow hard response typing while still achieving the goal. I favour the /transactions/detailed approach as it allows further extension while facilitating load balancer based decisions. Of course such decisions are possible in the non nested approach but imply a pattern match (rather than a URI routing) which would add additional performance consideration.
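The detailed=true switch described above can be sketched as follows. The endpoint shape and field names are illustrative only, not the draft standard's:

```python
# Sketch of a single transactions endpoint whose handler returns either
# a basic or a detailed payload depending on a query parameter.
# Endpoint shape and field names are illustrative only.
def get_transactions(account_id: str, detailed: bool = False) -> dict:
    # Basic payload: a high-level transaction listing.
    response = {
        "accountId": account_id,
        "transactions": [
            {"transactionId": "t1", "amount": "-10.00"},
        ],
    }
    if detailed:
        # The detailed variant extends the basic one rather than
        # duplicating it, mirroring a oneOf/allOf composition in
        # OpenAPI 3 rather than a sparsely populated single model.
        for txn in response["transactions"]:
            txn["extendedData"] = {"service": "X2P1.01"}
    return response
```

The cost of this shape, as noted above, is that the response becomes softly typed: consumers must inspect which of the two variants they received.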

With regards to Product Categories, I'd highlight that the standard enforces a set list of types without a catch-all like OTHER_TYPE being available. With changes in financial products occurring regularly, it's quite possible that new (potentially implementation-specific) account types will exist, and not having a way to communicate these within the standard will result in misrepresentation for these cases and/or somewhat unnecessary adoption of vendor-specific variable overloading (which is supported, but a bit excessive?).

With regards to the Account schema, the explicit use of separate balance return types lends itself to nastiness if it extends past the current three. While this may be driven by the UK standard, the likely reason is that at the time OpenAPI 2.x (f.k.a. Swagger) didn't support nested oneOfs. I believe OpenAPI 3.x has added this support, allowing a more explicit oneOf across the three return types, i.e. the return variable can be "balance" with three different types. This could put a spanner in code-generation libraries though, so maybe we just accept that there will be if/else chains to determine which balance variable to read.

With regards to scope names, it's pedantic, but the traditional method of scope declaration is x:y:read/write, which lends itself to trivial pattern matching during security evaluation. The scope names as defined now seem more human readable than machine readable, and this will result in pick-list hardwiring and nested if/else during implementation.
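The kind of trivial pattern matching that x:y:read style scopes enable might look like this. The scope names are hypothetical, not the draft's actual scopes:

```python
from fnmatch import fnmatch

# Hypothetical machine-friendly scopes of the form
# resource:subresource:access. A security layer can evaluate them with
# a glob match instead of a hardwired pick list.
granted = {"accounts:transactions:read", "accounts:balances:read"}

def allowed(pattern, scopes):
    """Return True if any granted scope matches the required pattern."""
    return any(fnmatch(scope, pattern) for scope in scopes)

print(allowed("accounts:*:read", granted))   # True
print(allowed("payments:*:write", granted))  # False
```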

Finally, the use of POSTs to supply an array of values (as is the case in POST /accounts/transactions and POST /accounts/direct-debits) isn't "normal" REST nomenclature. Typically such requests would be placed inside GETs with query strings, although I acknowledge that doing so could introduce query-string length limitations and loose typing, as OpenAPI's GET support for arrays in query strings isn't particularly nice. All that aside, it means that scope assessment is once again being hardwired in a way that disregards the method type, so WAFs won't be able to do explicit filtering at the point of receipt. I accept I haven't provided much of a suggestion here, but I've always seen GET as read, POST as create and PUT as update. This breaks such typical conventions, but perhaps I just need to "get over it". :-)

BrianParkerAu commented 6 years ago

@jh-a I totally agree. Access to the detail should be controlled by a scope/permission to allow the additional structures to be populated by the provider when processing the Get Transactions call. Forcing the consumer to make additional calls to Get Transaction Detail will significantly degrade the end-user experience, place unnecessary load on the provider and provides no additional security on the detail data.

perlboy commented 6 years ago

@jh-a I totally agree. Access to the detail should be controlled by a scope/permission to allow the additional structures to be populated by the provider when processing the Get Transactions call. Forcing the consumer to make additional calls to Get Transaction Detail will significantly degrade the end-user experience, place unnecessary load on the provider and provides no additional security on the detail data.

This implies multiple response types with the same http code (or sparsely populated detailed transaction objects which, quite frankly, "sucks"). This was considered and rejected in 2015 (https://github.com/OAI/OpenAPI-Specification/issues/270) but an alternate has since become available via anyOf/oneOf/allOf in OpenAPI 3 (https://swagger.io/docs/specification/data-models/oneof-anyof-allof-not/).

With that said, using scope/permissions as the discriminator is, in my opinion, a "bad idea" as it ties the security implementation to the data generation layer. Channeling the KISS principle, security rules should be defined cleanly and separately from data generation.

It is very common practice to separate these layers (in code and/or infra and/or department) as it means security audits have a far smaller evaluation surface and can be coupled with response filtering (ie. inside the WAF) away from the data generation side.

I don't see any particular advantage over simply adding a boolean "enable detail" parameter or separate detailed list call which, assuming OpenAPI 3 is the minimum expectation, can be facilitated with a oneOf wrapper on the transaction data. This approach could be trivially inspected at the security layer against the token permissions while keeping the data generation layer cleanly defined based on the input parameters.

speedyps commented 6 years ago

I don't see any particular advantage over simply adding a boolean "enable detail" parameter or separate detailed list call which, assuming OpenAPI 3 is the minimum expectation, can be facilitated with a oneOf wrapper on the transaction data. This approach could be trivially inspected at the security layer against the token permissions while keeping the data generation layer cleanly defined based on the input parameters.

I agree with @perlboy on this. I think my preference would be for a flag to say enable detail, as that feels slightly cleaner.

BrianParkerAu commented 6 years ago

@speedyps Works for me... a flag to control what is generated, and scope to ensure the caller is authorised to set the flag.

JamesMBligh commented 6 years ago

Just a note. I have uploaded the swagger in JSON and YAML format to the documentation site. They can be found via links in the table of contents on the left hand side. OpenAPI v3 is better and more flexible but version 2 of Swagger has been used to maximise utility amongst vendor products.

No other changes have been made to the site.

-JB-

perlboy commented 6 years ago

Firstly, thank you @JamesMBligh for publishing the swagger. At least we have something to start with but beyond that my initial read of what has been published made me cringe. It is increasingly apparent that the definition has been hand coded from text documentation (rather than the reverse). This is the quintessential definition of “bad” when it comes to this stuff and I believe/hope Data61/CDSA can do better. The success of this initiative relies on the implementing customer (likely a developer) having a good experience, what is there now is making us developer types sob in a corner.

Secondly, while I understand the reasoning for utilising Swagger2 as the base definition, the OpenAPI 3 specification was alpha-released in January 2017 and officially released around July 2017. By the time this standard is adopted, OpenAPI 3 will have been released for at least two (probably three) years. OpenAPI 3 is structurally different to Swagger2, largely because OpenAPI 3 was the first release under the auspices of The Linux Foundation, whereas Swagger2 was governed by SmartBear. I can totally accept that it may be advisable to avoid OpenAPI3-specific features to allow for graceful degradation, but there are already tools available to perform this degradation (such as https://github.com/LucyBot-Inc/api-spec-converter) for toolsets which are not yet compatible.

Thirdly, and trying to put it nicely, the published spec is neither directly compatible with the published documentation nor adherent to quite a few API definition style guides. While there is no official coding style published by the OAI, the IBM Watson team has published most of the generally accepted best practices here: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md

Here's a Top 10 of immediate observations:

1. Poorly named operationIds. During code generation the operationId is used as the method name, typically with all - and _ characters stripped and the fragments camel-cased on those separators. This will result in basically illegible method names such as gBa, gBaD and pBabQ.
   Ref: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md#operationid
2. Model types should be prefixed (i.e. ResponseX, ErrorY), not suffixed. While this isn't explicitly specified within the Watson guide, the prefix is particularly relevant for ErrorY because SDKs treat error responses prefixed with the word Error differently (separated vs. inline).
   Ref: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md#error-response-models
3. Model names embed state, e.g. 200_GET_Banking_Account_Transactions. This not only kills effective reuse but also means models generated in code will be called 200GetBankingAccountTransactions. This is what OpenBankProject did, making the developer experience terrible. I don't believe the UK standard does this, although maybe I have wiped this blight from my mind during implementation of both standards. The generally accepted middle ground is a common prefix by type, such as RequestX/ResponseY/ErrorZ, which communicates enough without being method specific.
4. Use of embedded enumerations is prolific. These should have their own model with a reference, or at the very least an embedded schema reference that is dereferenced during generation. The written documentation implies this; the swagger ignores it. Either way, from a development perspective, it means duplicate validation code is now required.
5. Date types are declared as plain strings with no format validator, despite the description declaring DateTimeString as the input.
   Ref: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md#use-well-defined-property-types
6. None of the models extend each other, e.g. Transaction_Object vs. TransactionDetail_Object. It's even worse in BulkTransaction_Object where, in order to add accountId, the spec writer has embedded the previous fields. Using at least a schema extension would have been a start, although again, to avoid duplicate code, consider a transaction key with an explicit nested Transaction_Object.
   Ref: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md#combine-models-when-practical
7. Models should use consistent separators: either camel case (easier to read, though code generators will, incorrectly, break it) or underscore-separated. The published swagger uses both. The Watson guidelines say to use CamelCase.
   Ref: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md#model-names
8. Product-Category is not subtyped. It isn't even enumerated!
9. Gratuitous use of the word "Object" in model names. Not only does this not match the published documentation, it is completely pointless in the context of a model.
   Ref: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md#model-names
10. Use of required on fields with a default value of "". A field that defaults to an empty string (which in the /banking/accounts/{accountId} case it should never be) should not be classified as required.
    Ref: https://github.com/watson-developer-cloud/api-guidelines/blob/master/swagger-coding-style.md#proper-use-of-required
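To illustrate point 1, here is roughly how a typical code generator derives a method name from an operationId. This is a simplified sketch; real generators vary in the details:

```python
import re

# Separators are stripped and the fragments camel-cased; terse ids like
# "gBaD" survive unchanged and yield illegible method names.
def to_method_name(operation_id: str) -> str:
    parts = re.split(r"[-_\s]+", operation_id)
    return parts[0] + "".join(p.title() for p in parts[1:])

print(to_method_name("get-bulk-account-detail"))  # getBulkAccountDetail
print(to_method_name("gBaD"))                     # gBaD
```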

Honestly, I could keep going, but I really don't want this being perceived as a flame. While Biza.io wants to be one of the first to formally support this standard in full, the swagger as published means we now have three different standards: the HTML one, the one we derived from the HTML one, and the swagger one.

Oh, final note: there are no security definitions in the swagger. I suspect this is because things are still in flux, but it's worth pointing out, as the HTML spec seems to state a hybrid OAuth model. The HTML spec also states there is no client-side exposure, which is one of the very few justifications for utilising a hybrid model, whereby half the auth is used to authorise a client while the other half is used to authorise a middle man such as an ASP.

JamesMBligh commented 6 years ago

Hi @perlboy, I appreciate your desire not to flame so I will err on the side of assuming your robust feedback is constructive. Your observations are valuable and we will take them on board to improve the swagger files. The published files are a first draft so improvement was always going to be necessary. Some of the comments made had already been identified as improvements that need to be made.

The statement that we are creating the standard from text documentation is true and also fairly self evident. We have been using a methodology of decision proposal documents and, for traceability and accountability of decision making, these proposals have to be the source of truth for the standards. The swagger should, however, match the proposals and be usable for developers so we will strive to improve things in the coming weeks.

-JB-

perlboy commented 6 years ago

Hi @perlboy, I appreciate your desire not to flame so I will err on the side of assuming your robust feedback is constructive.

@JamesMBligh Please please do! We simply want the CDR standard to be a success and we understand there is likely a thousand stakeholders who all want things their way! :)

Your observations are valuable and we will take them on board to improve the swagger files. The published files are a first draft so improvement was always going to be necessary. Some of the comments made had already been identified as improvements that need to be made.

Totally understand and thanks!

The statement that we are creating the standard from text documentation is true and also fairly self evident. We have been using a methodology of decision proposal documents and, for traceability and accountability of decision making, these proposals have to be the source of truth for the standards. The swagger should, however, match the proposals and be usable for developers so we will strive to improve things in the coming weeks.

I'm totally on board with design proposals having a formal process, and they can continue in the current format. My main suggestion is simply that, once a design proposal is approved, you generate the swagger first, and then Slate can auto-generate most of the doc. Tools are here: https://github.com/lord/slate/wiki/Slate-Related-Tools

Doing it this way also significantly reduces workload, because no-one particularly likes maintaining documentation, so why not make it easier. :)

kiwisincebirth commented 6 years ago

Hi, I am new to this forum and not involved in the working group, so I apologise if this has already been discussed. I would like to comment on "Principle 4: APIs provide a good developer experience", specifically in regard to the "Payload Conventions" defined in the draft API.

The draft structure defines a top level set of objects in the payload, namely "data", "links", "meta", etc., with "data" containing the primary data payload. Thus, for a developer, access to most data requires constant (and repeated) references via the "data" object, e.g.

`response.data.accountId`, `response.data.displayName`, `response.data.nickname`, `response.data.maskedNumber`, etc.

Of course there are strategies around this, namely assigning "response.data" to a local variable. While this is a simple solution, it does add a line of code (agreed, not much), but this line must be added in many places, specifically whenever the "response" is passed through a function call.

As background to my proposed improvement, please see the following: http://stateless.co/hal_specification.html

I am not proposing adoption of this 'Specification' in a wholesale way. I am proposing a small change to the data structures, to align to the way HAL represents data:

  • "data" is removed and the attributes of "data" move to the containing (parent) object
  • All other top level objects e.g. "meta", "links", etc. are prefixed with "_" e.g. "_meta", "_links", etc.

Thus a developer can reference primary attributes of the response more simply, e.g.

`response.accountId`, `response.displayName`, `response.nickname`, `response.maskedNumber`, etc.

This would provide (IMHO) a more developer-friendly API ("Principle 4") and align with a wider standard, though in all fairness it is hard to gauge the level of adoption of that standard.

For disclosure: my background is as an Architect/Developer, currently working on a financial services application with a publicly consumed REST API using HATEOAS/HAL and more than 200 endpoints. So yes, I may be slightly opinionated.
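
To make the suggestion concrete, here is a sketch of the two payload shapes (field names loosely follow the draft's account schema; the values are hypothetical). Current draft envelope:

```json
{
  "data": {
    "accountId": "12345",
    "displayName": "Everyday Saver",
    "nickname": "Holiday fund"
  },
  "links": {
    "self": "/banking/accounts/12345"
  },
  "meta": {}
}
```

HAL-style flattening, as proposed:

```json
{
  "accountId": "12345",
  "displayName": "Everyday Saver",
  "nickname": "Holiday fund",
  "_links": {
    "self": "/banking/accounts/12345"
  },
  "_meta": {}
}
```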

spikejump commented 6 years ago

Hi,

I am also new to this forum and would like to add some comments on the currently defined spec and existing feedback.

The masking applied to accounts and PANs is inconsistent with industry standards. The last 4 digits should be displayed in the clear - not 3 as currently defined.

+1. This should be the last 4 digits instead of 3.
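
For illustration, here are the two conventions applied to a hypothetical 16-digit PAN (the key names are just illustrative labels, not schema fields):

```json
{
  "lastThreeVisible": "xxxxxxxxxxxxx234",
  "lastFourVisible": "xxxxxxxxxxxx1234"
}
```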

in the UK Banking Spec, the transaction type is supplied with every transaction. This is a key field of a transaction, and the draft seems to have excluded this field.

+1. Transaction type is required in the TransactionBasic schema to identify transactions properly.

There are no end-date parameters on the Transaction requests, so it isn't possible to retrieve transactions within a specified range, only from a date to today.

+1. Get Transactions For Account needs an end-date parameter to allow selection of transactions within a specific date range.

For Get Direct Debits For Account, there can be multiple direct debits for an account. There should be an array of 'authorisedEntity' objects defined for the response.

Similarly, for Get Bulk Direct Debits

  1. There can be multiple direct debits for an account. There should be an array of 'authorisedEntity' objects defined for the response.
  2. This is meant to fetch all direct debits of all accounts. The response as defined provides an array of directDebitAuthorisations with only an accountId to identify which account the direct debits belong to. Most use cases would include all direct debits of an account together. The response should really wrap all direct debits for an account inside an Account object.

Get Bulk Transactions is also meant to fetch all transactions of all accounts. Like above, the response as defined provides an array of transactions with only an accountId to identify which account the transactions belong to. Most use cases would want to bundle all transactions of an account together. The response should really wrap the transactions for an account inside an Account object.
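
A sketch of the suggested wrapped shape (field names approximate the draft's conventions; the values are hypothetical):

```json
{
  "data": {
    "accounts": [
      {
        "accountId": "12345",
        "transactions": [
          { "transactionId": "t-001", "description": "Coffee", "amount": "-4.50" },
          { "transactionId": "t-002", "description": "Salary", "amount": "3000.00" }
        ]
      },
      {
        "accountId": "67890",
        "transactions": [
          { "transactionId": "t-003", "description": "Rent", "amount": "-550.00" }
        ]
      }
    ]
  },
  "links": {},
  "meta": {}
}
```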

In general, it is good practice that whenever a query parameter is specified, that parameter is returned in the response object.

The MaskedAccountNumber object should probably be renamed to MaskedNumber to allow for any kind of numbers to be masked instead of just Account numbers.

The Get Bulk Balances API seems to be very similar to Get Accounts. Maybe this could be combined with Get Accounts using a query parameter "includeBalances=true/false" to indicate whether to return balances or not? In addition, should Get Bulk Balances support a query parameter for a particular balance$type?

Can there be a health-check endpoint? e.g. GET /healthcheck returns { data: { status: string; version: string } }. This will support basic connectivity testing.
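
A minimal sketch of what such a response could look like (the endpoint and its fields are the commenter's suggestion, not part of the draft):

```json
{
  "data": {
    "status": "OK",
    "version": "1.0.0"
  }
}
```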

How do we deal with the productCategory enum needing new values? Wait for another spec release? Different industries have different product categories. Does productCategory need to be well defined?

Cheers.

benkolera commented 6 years ago

Sorry to add to the pile here, but when will the PAF address schema make its way into the swagger json and published docs?

For clarity here so people don't have to code dive, right now it is just this in the JSON in the repo:

    "PAFAddress_Object": {
      "type": "object"
    },
ivanhosgood commented 6 years ago

Hi, as @perlboy has mentioned, at biza.io we are building out a development sandbox supporting this standard, so I've had the opportunity to create the model against the specification.

I'd like to add the following:

  1. Why is there inconsistency between how Account / AccountDetail and Transaction / TransactionDetail are defined? AccountDetail appears to be a superset of Account, while TransactionDetail contains the same properties as Transaction with a reference to the extended model. These inconsistencies just add another concept that the developer must learn in order to interact with the model.

  2. There is a lot of copy-and-paste specification happening here. As @perlboy has mentioned, Transaction vs TransactionDetail and Account vs AccountDetail. There is plenty of opportunity to refactor here and follow a DRY paradigm.

  3. Again, as mentioned by @perlboy, embedded enumerations are everywhere. This does not follow DRY, and upon code generation we end up with multiple declarations of these enums.

  4. AccountDetail requires "address$type"; however, this is not a property on the model.

  5. LoanAccount repaymentType and repaymentFrequency are the wrong way around.

  6. Documentation does not match swagger for TransactionBasic vs Transaction

In addition, can we please get some guidance on what the release schedule for each iteration is going to look like? Currently, we are making changes to our Swagger document in anticipation that these issues will be fixed. We'd obviously rather be working against the official document.

perlboy commented 6 years ago

I am not proposing adoption of this 'Specification' in a wholesale way. I am proposing a small change to the data structures, to align to the way HAL represents data

  • "data" is removed and the attributes of "data" move to the containing (parent object)
  • All other top level object e.g. "meta", "links", etc are prefixed with "_" e.g. "_meta", "_links", etc.

I'm in two minds on this suggestion. I totally agree that flattening would look nicer in the JSON, but the main issue I see is its impact on model reuse during code generation. Schema $refs are generally dereferenced as part of this generation, which means the JSON definition looks nice but the resultant code has model duplication.

From my perspective, there are generally two ways this (and most others) API standard can be implemented.

The first is hand coded template mashing on a per response basis. Historically this has been the normal thing to do and gives total control over the response at the expense of long term maintainability.

The second is automatic model generation. This has the effect that setters/getters are dynamically defined and recursive validation methods etc. can be implied. That is to say it's a "heavier" approach, but it means validations can be recursively descended and dynamic inspection can occur in an OO-abstracted way.

Implementing this suggestion makes the first easier but realistically the second is most likely in an enterprise, aided by an IDE or some nice (and expensive!) data mapping tools. On balance I think the existing proposal is probably more conducive to adoption within existing institutional systems. I guess the best bet would be to canvass the involved parties and go from there.

speedyps commented 6 years ago

Not having transaction types, i.e. things like Interest, Withdrawal, Deposit, Transfer, Direct Debit, ATM, ACH, Dividends, Check etc., is a step backwards from what is available now. This is a fundamental part of data delivery today (and has been for the last 20 years). I would strongly suggest that the transaction record include a Type field. My recommendation would be to use the UK Open Banking spec's ProprietaryBankTransactionCode entry. This would then match existing functionality and allow providers to use their own internal codes (and optionally specify if they adhere to an existing standard).
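
As a sketch, a transaction record carrying such a field might look like the following, loosely borrowing the UK Open Banking ProprietaryBankTransactionCode shape (the surrounding field names and values are illustrative assumptions, not the draft's schema):

```json
{
  "transactionId": "t-001",
  "description": "ATM withdrawal",
  "amount": "-200.00",
  "currency": "AUD",
  "proprietaryBankTransactionCode": {
    "code": "ATM",
    "issuer": "ExampleBank"
  }
}
```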

@JamesMBligh, just wondering if an opinion is being formed over my comment (above).

I currently work for SISS Data Services and have previously worked for BankLink and Intuit; Transaction Type is delivered in existing datasets to all of the above.

LiJiang-IndustrieCo commented 6 years ago

@JamesMBligh This is one of the points I brought up at the workshop last week. Although this seems like common sense, I think it's worth putting it in the standard.

Include the error object in the 400 and 422 responses for all endpoints to cover the exception request scenarios. Defining the detailed error scenarios will force the server implementation to validate the request parameters/objects and return meaningful responses, which eliminates ambiguity on the client side. Some examples are listed in the table below for your reference.

| Endpoint | Error Code | Error Description |
| --- | --- | --- |
| GET /banking/accounts | OBE0001 | Invalid product category |
| GET /banking/accounts | OBE0002 | Invalid status |
| GET /accounts/{accountId} | OBE0003 | Account ID does not exist |
| GET /accounts/{accountId} | OBE0004 | Not authorised to access the account |
| GET /banking/accounts/balances | OBE0005 | Invalid product category |
| POST /banking/accounts/balances | OBE0006 | Invalid account IDs provided |
| POST /banking/accounts/balances | OBE0007 | Not authorised to view some of the accounts |
| GET /banking/accounts/{accountId}/transactions | OBE0008 | Invalid account ID |
| GET /banking/accounts/{accountId}/transactions | OBE0009 | End time must be after the start time if both are present |
| GET /banking/accounts/{accountId}/transactions | OBE0010 | Max amount must be greater than the min amount if both are present |
| GET /banking/payees | OBE0011 | Invalid payee type |

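
Under the draft's errors-array convention, one of these scenarios might be returned as follows (a sketch only; the code/title/detail field names follow the draft's error object and the OBE code is from the table above):

```json
{
  "errors": [
    {
      "code": "OBE0009",
      "title": "Invalid date range",
      "detail": "End time must be after the start time if both are present"
    }
  ]
}
```
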
WestpacOpenBanking commented 6 years ago

Westpac has the following preliminary feedback in relation to the draft standards. We’re likely to provide additional feedback later this week.

General remarks on standard and process

Security standards

Pagination and extensibility guidelines

The standard should be changed to accommodate, or default to, cursor-based pagination instead of row-offset pagination. To accommodate it, the extensibility guidelines should allow additional query parameters so that the next and prev fields can support cursor-based pagination. Alternatively, an optional paginationCursor query parameter needs to be added to each endpoint.

More strongly than the above, we further recommend that cursor-based pagination be adopted in favour of row-offset pagination. This would be achieved by adding a paginationCursor query parameter as above and removing the page query parameter, as well as the totalRecords and totalPages parts of the response. This reduces implementation complexity for data consumers, as they don't need to worry so much about duplicate transactions (especially in the case where two transactions are identical and have no identifier; how does the data consumer know whether these are duplicates or not?). The method also better supports low latency, because infrastructure to support the outdated row-offset paradigm is not needed.

We believe that the UK open banking standard may have chosen to only require the linking approach because of these established best practices and the need to provide flexibility for disparate infrastructures across data holders.
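
A sketch of how cursor-based pagination could surface in the existing links object (the paginationCursor parameter is the proposal above; the cursor value is an opaque, hypothetical token):

```json
{
  "data": { "transactions": [] },
  "links": {
    "self": "/banking/accounts/12345/transactions?page-size=25",
    "next": "/banking/accounts/12345/transactions?page-size=25&paginationCursor=opaque-token-after-t-0025"
  },
  "meta": {}
}
```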

Closed accounts

Tactical design for representation of product information

Mandatory bulk endpoints

Direct debits

We reiterate our support for the positions put forward by @anzbankau and @NationalAustraliaBank on direct debits during feedback on the relevant decision proposal.

Other suggested minor edits

| Relevant DPs | Part of standards | Suggested change or note | Reasoning |
| --- | --- | --- | --- |
| DP-001 – API Principles | Principle 2: APIs use open standards - "In order to promote widespread adoption, open standards that are robust and widely used in the industry will be used wherever possible." | Insert 'or adapted' after 'will be used'. | This addition further serves the goals of widespread adoption and reduces implementation cost. |
| DP-002 – URI Structure | Provider Path: "The provider path is a base path set by the data provider. It can be any URI desired by the provider." | Append 'The base path for public endpoints may be different' OR 'The base path may vary depending on the endpoint'. | Per Westpac comment in DP-031: 'Westpac has a technical requirement to be able to serve public APIs from a different domains…' James Bligh's response in DP-030 was supportive: 'The ability to host unauthenticated APIs independently from authenticated APIs will be considered. It was not specifically the intention of the URI structure proposal to prevent this but the language does imply this.' |
| DP-010 – Standard HTTP Headers; DP-011 – Error Handling | HTTP Headers, HTTP Response Codes | Add a note allowing additional response codes as appropriate for caching and compression. For example, HTTP 304 Not Modified might be returned when a client uses If-Modified-Since, If-None-Match, or If-Unmodified-Since in conjunction with a server ETag. Alternatively, add additional HTTP response codes to support caching and compression. Modify the HTTP Additional Headers note if it is felt that it doesn't include the aforementioned headers. For clarity, add a cross reference to the errors object discussion below (next item in table). | Principle 9: APIs are performant |
| DP-011 – Error Handling; DP-012 – Payload Naming Conventions and Structures | "The errors object will be an array of zero or more unnamed objects. The fields in each of these objects will be as follows:" | Suggest adding a heading to the section containing this statement and a cross-referencing discussion to error-related HTTP response codes. | Readability suggestion. |
| N/A | Field Naming Conventions, Maps: "For JSON maps (i.e. key/value pairs) any Unicode character MAY be used as a field name and stylistic requirements do not apply." | Append comment on escaping of JSON special characters. | Readability suggestion. |
| DP-013 – Primitive Data Types | Common Field Types: "A natural number (ie. a positive integer inclusive of zero)" | Change to "A non-negative integer". | Pedantic suggestion based on the meaning of 'integer'. |
| DP-013 – Primitive Data Types | Common Field Types: PositiveInteger: "A positive integer (zero excluded)" | Change to NonPositiveInteger: A non-positive integer. | Align type name with meaning. |
| DP-013 – Primitive Data Types | Common Field Types: MaskedPanString: "xxxxxxxxxxxx1234"; MaskedAccountString: "xxxx xxxx xxxx 1234", "xxx-xxx xxxxx1234" | Be consistent with spaces for PANs as in both examples. Be prescriptive on BSB format (the dash is not always included). | Increases likelihood of data provider consistency. |
| DP-013 – Primitive Data Types | Common Field Types | Add an Invalid Examples column. | Increases likelihood of data provider consistency and eases understanding for data consumers. |
| DP-009 – ID Permanence | ID Permanence: "Within these standards resource IDs are REQUIRED to comply with the following" | Change to "Unless otherwise noted, within these standards resource IDs are REQUIRED to comply with the following". | Exception wording is for transaction and the public product endpoints. |
| DP-003 – Extensibility | Extensibility: "The three types of extension that the standards address are…" | Suggest setting extensibility guidelines for adding additional filter parameters. | |
| DP-003 – Extensibility | Extensibility: "The new end point MUST comply with the overall standards including naming conventions and data types." | Change to "The new end point MUST comply with standards principles including naming conventions and data types." | This is probably closer to the intent. |
| DP-003 – Extensibility | Extensibility: "Data providers seeking to extend the standards MUST nominate a prefix to identify all extensions. Extended fields and end points would use this prefix consistently." | Multiple brands for a data holder might have disparate systems and choose not to implement extensions for all of them. It might make sense to allow prefixes for each brand for this reason and to think about the means of negotiation. | |
| DP-005 – Authorization Granularity | Authorization Scopes: "Includes basic account information plus account identifiers and product information." | Insert: ", mailing addresses associated with the account". | Helps to make clear that this sensitive detail is included in this security scope. |
| DP-036 – OIDC userinfo Support | OIDC Scopes & Claims | Add OIDC versioning information and links to field documentation for the relevant version. | Helps to ensure implementation consistency across providers. |
| DP-027 – Basic Account Payloads | Account common schema | Add openStatus field. | Per our response to DP-027 and response from James Bligh: "The absence of the openStatus field in the payload was an oversight". Not present in decision or draft documentation. |
| DP-027 – Basic Account Payloads | Somewhere near BalanceType discussions | Add a note on intended behaviour when the available balance is negative and the current balance is positive, or vice versa (consumer's point of view). Which balance type should be returned? | |
| N/A | "All field names defined in either a request or response payload" | "All field names defined in a request or response payload" | Grammar. |
| DP-027 – Basic Account Payloads | Account common schema: "The unique type as defined by the account provider." | As per our question on the payload discussion we still aren't sure what this field means. Suggest rewording the description. | |
| DP-030 – Product Payloads; DP-027 – Basic Account Payloads | | Add guidelines for length of text fields if needed. | Some consistency between providers will support data consumer use cases like product comparison. It was previously suggested that this would be considered. |
| DP-028 – Transaction Payloads | Three instances of "defaults to today" in the start-time parameter | Should be changed to "defaults to current time". | 'today' might be interpreted as the ISO 8601 date without the time. |
| DP-028 – Transaction Payloads | "The value of the transaction. Negative values mean money was outgoing." | Change to: "The value of the transaction. Negative values means money was outgoing from the account." | Grammar, clarity. |
| DP-026 – Customer Payloads | Person Common Schema and Organisation Common Schema | Add lastUpdated field. | Was acknowledged as a good idea on the basis of arguments presented but not incorporated into the standard. |
| DP-026 – Customer Payloads | Person Common Schema | Add commentary on what to do for persons with a single name. | Acknowledgement here and arguments preceding. |
| DP-026 – Customer Payloads | Phone Number Common Schema | Required fields might be absent if enum is UNSPECIFIED. Change to not required with conditions? | Per the note. |
| DP-026 – Customer Payloads | CustomerResponse Organisation description | Add text from decision proposal explaining when organisation is to be returned. | Consistency with decision. |
| DP-026 – Customer Payloads | PersonDetail common schema uses PhysicalAddress type whereas AccountDetail uses 'object' | Align between the two. | Consistency. |
| DP-026 – Customer Payloads | PAFAddress | Add text and/or a URI which gives a definition of this. | See also questions and comments below. We're not sure how to format addresses this way without the added information. |
dlockqv commented 6 years ago

With reference to @JamesMBligh response to @BrianParkerAu comment on lack of support for Standing Orders.:

  • Standing orders is not included as it wasn’t referenced in the ACCC Rules Framework, which the standards are subordinate to.

Does the working group have the ability to challenge the scope of the ACCC Rules Framework? If one of the intents is to support consumers being able to move banking relationships, then support for Standing Orders adds a lot more value than support for a list of Direct Debits IMHO.

FinderOpenBanking commented 6 years ago

One element that Finder believes is missing from the current working draft is a specification of minimum performance for data transfer.

Our understanding from the UK example is that there have been some providers taking as long as 30 seconds to return a response. Slow response times will adversely impact user experience and uptake. We propose that a maximum response time of 300 milliseconds be set for all data providers. We also propose that API performance indicators be made public, as they have been on the UK Open Banking website.

DDobbing commented 6 years ago

Some feedback from my side in addition to that already provided. The UK Account and Transaction API specification, under Mapping to Schemes & Standards, states: "The Account Info API resources, where possible, have been borrowed from the ISO 20022 camt.052 XML standard. However - has been adapted for APIs based as per our design principles." I believe our draft would benefit from further alignment with ISO 20022, also noting that AU NPP, ASX and an increasing number of C2B developments are ISO 20022 ready or on the way to being so. In support, a few observations follow.

1) Common Field Types - AmountString. Defined in the draft API reference as: "A string representing an amount of currency."

anzbankau commented 6 years ago

ANZ has the following feedback in relation to the draft standards.

Scopes

  1. Please confirm that all scopes are explicit. A Data Recipient wanting to use an endpoint with a resource identifier (e.g. Get Account Detail) would need to explicitly include the scope for the endpoint that provides a list of resource identifiers. There are no implicit relationships between scopes, along the lines of 'inheritance' or 'hierarchies'.

ID permanence

  1. Please confirm that IDs are unique for a Data Recipient/Customer/Subject (e.g. Account, Transaction) combination.

Pagination

  1. ANZ agree with the recent comments by @WestpacOpenBanking and recommend the use of cursor-based pagination.

Get Customer - GET /common/customer

  1. ANZ accounts can be related to (including owned by) multiple legal entities (individuals and/or organisations) under the single digital identity (logon). Given the customer and consent structure the expectation is that we cannot mix customers under a single consent. Please confirm that a consent is per legal entity.
  2. ANZ have expressed concern to the ACCC around the lack of clarity and definition with regard to how the customer's relationship to the account (e.g. an accountant is a third party signatory) affects the way that data is shared. As discussed in the Data61 meeting on 16/11/2018, there was mention of adding an isOwned flag into the account interface. Our view is that this decision is highly dependent on new information in the rules framework and potentially requires a workshop with participants (similar to the products workshop) to finalise, as the definition of ownership is unlikely to be the same across participants. Without this clarification there would be ambiguity for the Data Recipient consuming the data.
  3. The Organisation party type covers the name etc for the single organisation entity but many companies transact in context of a division/geography/office etc. Could organisation office name or similar be included in the response?
  4. ANZ recommend including ARBN along with ABN and ACN.

Get Transaction Detail - GET /banking/accounts/{accountId}/transactions/{transactionId}

  1. During the Data61 meeting on the 16/11/2018 there was discussion of introducing the Biller code into the interface. ANZ suggest that if this is introduced it be made optional as this information may not be available for all scenarios.

Get Transactions For Specific Accounts - GET /banking/accounts/transactions

  1. Repeat of feedback given on 26/10/18 - Query Parameter text - A text search on description/reference fields across several transactions will/may impact response times. ANZ are recommending that this filter be removed or simplified to a single field search.
  2. It is ANZ’s view that Customer permission/consent to sharing of narrative and reference information in transaction information should be explicit as this data can contain personal or sensitive data.
  3. With regards to end-time the standard states "If absent defaults to start-time plus 100 day". Please confirm that this is 100 calendar days, not business days.
  4. Sort order for the bulk endpoints is not clear in the document. ANZ is making the assumption that sorting will be done at Account level first then transactions will be ordered under each account as per "get transactions for account endpoint" with the most recent transaction first. Please confirm if this assumption is incorrect.

Get Transactions For Account - GET /banking/accounts/{accountId}/transactions

  1. There may be a scenario where the customer does not have any transactions against a particular account; ANZ is expecting to respond with a 200 OK and include an empty transaction array. Please confirm this is in line with the standards.
  2. DisplayName and NickName at the account level are embedded in this call. It is the view of ANZ that this information should not be included in the response, as this is mixing resources in a single request; no other endpoint does this. If they are to be included, could they at least be changed to non-mandatory?
  3. The TransactionDetail documentation and sample JSON has amount and currency as separate members but the AccountTransactionResponse sample JSON has an object: "amount": {"amount": 300.56, "currency": "AUD"}.
  4. The Get Transaction Detail and Get Transaction Detail have sample JSON with member "extendedData"/"extensionType" (without "$type" suffix) whereas the referenced schema ExtendedTransactionData has the proper name "extension$type".
  5. min/max amount - according to the schema, a negative transaction amount implies an outgoing amount. For this filter, is this on the absolute value of the transaction amount? min/max for negatives may lead to confusion, if the search assumes non-negative then we suggest adding another filter of debit/credit which is to be used when the min/max amounts are used.

Get Product Details - GET /banking/products/{productId}

  1. Description is not correct for 'effectiveFrom' and 'lastUpdated' as it reads “A description of the product”.
  2. Bundles – Please provide a sample JSON
  3. Sample JSON has depositRates/discountType that is not a valid member (presumably just pasted from lendingRates and not removed). See https://consumerdatastandardsaustralia.github.io/standards/#schemaproductdepositrate.
  4. depositRates/depositRateType - This is an example of product complexity not supported by a simple array of generic objects (including rate tiers): enum "FIXED" has additionalValue = 'Duration (ISO8601)' but a tier applying to a range (e.g. 1 < 2 months) requires two fields or (less desirable) a concatenation of two fields with a standard format including delimiter character.
  5. Product Category descriptions have small spelling mistakes (more than these 2 occurrences) - "The product category an account aligns withs" and "The the product category an account aligns withs".
  6. Suggestion to include a "constraintType" Enum of "MAX_TERM" and "MIN_TERM" eg :Home Loan "additionalValue": "P30Y" and additionalValue": "P1Y" . This also may apply to other products eg. Term Deposits.
  7. Please clarify the difference between brand and brand name? An example would be helpful.

Get Bulk Balances - GET /banking/accounts/balances

  1. Sample has member "$balance$type". It should be "balance$type".

Get Accounts - GET /banking/accounts

  1. Account schema member "deposits" should be "deposit" as it is not an array. Member "purses" is plural because it is an array.
  2. LoanAccountType/minRedraw has Type = "number(date)" and maxRedraw has "number". Presumably these should be "AmountString" like originalLoanAmount in the same schema.

Get Account Detail - GET /banking/accounts/{accountId}

  1. Repeat of feedback given on Oct 24, 2018, as discussed during the ABA meeting on 24/10/18. Given James' explanation of the intention of 'providerType' in the same discussion, we also suggest that it be changed to 'productName' (not 'accountType' as discussed). Accounts are instances of a product, so the product represents its type. Since it's the name used by the provider (i.e. consistent with other channels but not consistently applied across providers, so not an enum) it's not 'productType'. The description should state that it's the provider's product name with no reliable relationship to the productCategory.
  2. Can you please confirm the definition of available balance? Most retail customers have a pretty clear definition but some customers (i.e organisations and institutions) can access a proportion of uncleared funds, is this expected to be represented in available balance?
  3. Given the discussions around data recency and potential for caching responses at the Data Recipient, adding an optional calculation timestamp on the balance response may assist with Data Recipient’s understanding the point of time the data was generated.
  4. Available balance is assumed to be positive or zero - overdrawn accounts would show negative amounts. Is the expectation for this negative value to be set to zero in this case?

Get Payee Detail - GET /banking/payees/{payeeId}

  1. Under DomesticPayeeType for the payId PayeeAccountType, Name is considered mandatory. ANZ suggest this be made optional as there will be cases where a customer has entered a PayeeId and Type but this has not been validated with the scheme because a payment has not yet been made, therefore the name will not be available. Note: This is something that was put in place as a security feature to stop account fishing.
  2. Under InternationalPayeeType\beneficiaryDetails the Country is considered mandatory, the assumption is that if it is not explicitly specified ANZ will default this value to the bankDetails country value.
  3. Scope bank_detailed_account would be more appropriate (rather than bank_basic_accounts). Inclusion in bank_basic_accounts scope appears inconsistent with exclusion of product information and transaction data.

Common Schemas & Field Types

  1. As a common field across all CDR market domains, "RateString" should be "PercentageRateString" as other domains may use it for the more general usage e.g. rate of a particular measure against time. Also, a rate is effectively a ratio and would normally be manipulated and stored as a true representation of the ratio, not multiplied by 100 for human readability.
  2. ASCIIString type is described as "Standard UTF-8 string but limited to the ASCII character set". The ASCII character set has some 33 non-printing characters. Please clarify.
  3. MaskedAccountString is specified in the common field types but never used, instead there is another type MaskedAccountNumber which is used. Suggest consolidating to one type.

General Comments

  1. Links to common schema work within the page but not when used as an external link or pasted into an address bar. They are remapped to the top-level ie. https://consumerdatastandardsaustralia.github.io/standards/#tocBankingCommonSchemas.
  2. Can we get any early visibility on HTTP Status responses that are expected to be given in the event of authorisations that have expired, or been revoked, or are otherwise inactive?
  3. As per our feedback on 26/10/2018 - Bulk endpoints - Our recommendation is to make this optional like the UK standard.
  4. Closed accounts - ANZ agree with the recent comments by @WestpacOpenBanking and recommend that closed accounts for former or current customers should not be in scope for day 1. At this stage we are uncertain of any implications, i.e. legal and privacy, which could come up in the context of sharing data for closed accounts. Another issue might arise if the closed account is the only account the customer has in their digital channels; this would disable their online access and limit their ability to provide and manage consent.
  5. How are life cycle events on the account expected to impact the consent? i.e. if the relationship to the account changes does this invalidate the consent.
darkedges commented 6 years ago

IMHO I feel that the security profile is lacking definition, and I agree with @WestpacOpenBanking that this is important and will delay implementation. As for how the overall spec is designed via swagger, it is a first pass and, like most first passes, it will evolve and improve.

I feel that permission vs discriminator needs to be addressed, as the level of detail of data should be covered by the Intent/Consent/Authorisation model. I can see both sides of the argument: one is that you may just want to get a basic level of data to present to the consumer and then allow them to deep dive into more detail if need be. This kind of interaction should still be covered by a permission, but it is about the level of data being requested. An ADR should not be able to request more data than consented to, just be able to control the size of data requested within the permission. If the permissions are basic vs detailed, then with either an ADR should be able to get a reduced data set via a discriminator, but not jump from a lower permission to a higher one via the same mechanism. The permission model works for me and I don't like to see it open to abuse.

I would like to see fewer requests coming to a gateway, as I foresee we are going to have to do a lot of work to validate the realtime state of the access token across a network, i.e. is the ADR still an active and valued member of the CDR eco-system. The more calls, the slower it becomes overall, and while I am not able to predict volumes, they will become a lot larger than what most Data Holders are used to. Caching becomes key, but it has problems, as the length held could be the difference between the consumer being protected and a complaint that data was accessed inappropriately. It also impacts dashboarding, audit and the user experience, so less is more. Any guidance on how to be conformant whilst being performant would be greatly appreciated.

I did notice that removing authorisation from the data holder is not mentioned. I am assuming that as this is a WIP it will come at a later stage, but it is an important piece of the consent model: one that is open to abuse and essential to the right to be forgotten, and the responses where authorisation is removed from the data holder perhaps need to be considered. Again, any guidance on how Consent/Authorisation removal is handled is appreciated.

I look forward to the next round of discussion.

DeloittePE commented 6 years ago

Overall, we have been pleased with the process of open collaboration on the CDR API standards. The value of many eyes and domain experience has been demonstrated in deep and healthy discussions about the data to be represented.

At this stage, with the first complete draft available for review we believe that the MVP API specification is on the right track. Part A below covers a few technical details about the API specifications and recommend minor changes which we believe will lead to a better developer experience and improved extensibility within the overall standard. Part B presents some of our banking domain observations.

Part A - Technical Considerations

Paucity of Hyperlinks in Payloads

Our experience is that hyperlinks are an important facility in REST APIs which support consumer concerns such as discovery, navigability and composition. We note that the draft standard makes little or no use of hyperlinks to represent relationships between resources. We believe that judicious use of hyperlinks would be beneficial to the standard to represent common relationships such as those between Account and Customer or Transaction and Account.

Hyperlinks for Transaction Detail

The Transaction payload has TransactionID as a conditional property as well as the boolean isDetailAvailable to indicate the availability of transaction detail. It is left to the consumer to construct the URL for that detail (using out-of-band information).

We propose that a conditional hyperlink to transaction detail is a better approach by improving the developer experience as well as being a more extensible approach.

Transaction Detail and the N+1 Problem

Some of the discussion in the Transaction Detail proposal raised the concern that consumers wanting to retrieve bulk transaction details would need to iterate over the array of transactions obtained from

GET /banking/accounts/{accountID}/transactions

and then fetch individual transaction details where available.

One way to address this concern would be to provide an optional "include" query parameter allowing the consumer to request embedded detail where it is available. This facility is discussed in the JSON:API protocol. For example, a consumer could request:

GET /banking/accounts/{accountID}/transactions?include=transactionDetails

The provider would then return an array of Transactions where those with additional details would have that information embedded as a sub-object. The "include" parameter is of general benefit in many resources where the consumer wishes to control the granularity of detail, or where it is common to request related resources. We suggest that the standard considers judicious use of an "include" parameter where it adds value.
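As a sketch of how such an "include" parameter could shape the response, with detail embedded as a sub-object where available (the payload fields below are illustrative, not drawn from the draft):

```json
{
  "data": {
    "transactions": [
      {
        "transactionId": "txn-001",
        "description": "Card purchase",
        "isDetailAvailable": true,
        "transactionDetail": {
          "extendedDescription": "Card purchase at Example Cafe, Sydney NSW"
        }
      },
      {
        "transactionId": "txn-002",
        "description": "Salary deposit",
        "isDetailAvailable": false
      }
    ]
  }
}
```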

Customer and Userinfo Endpoints

The proposal on Userinfo endpoints offered a number of options for support of Userinfo endpoints and/or Customer endpoints. Within the discussion on this proposal, a number of respondents noted the limitation with restricting Customer to only the currently authenticated Customer. In particular, see the following two comments:

We agree that conflating the Customer endpoint and the authenticated Userinfo endpoint may lead to confusion and limitations on the utility of the standard. We would prefer something along the lines of the proposed "option 3" with both a complete Customer endpoint and a minimal Userinfo endpoint available.

The draft standard provides a singleton Customer resource GET /common/customer which refers to the authenticated customer. This imposes limitations on how the URL might be used as a link to navigate from related resources.

Consider an account which has two customers associated with it as joint account holders. It would be useful to be able to link to those two customers from the Account payload. E.g.

GET /banking/accounts/{accountId}

    {
        "data": { … },
        "links": {
            "accountHolders": [
                "/common/customers/314e8f4f",
                "/common/customers/9f83c401"
            ]
        }
    }

Unfortunately, such linking is impossible because these customer endpoints are not supported in the proposal.

While the proposed Customer endpoint may offer a strong security stance we feel that this URL choice locks down the API to a specific use-case and closes off later opportunities.

The intended functionality may be retained by using a pseudo-ID for the customerId (e.g. “me”) to represent the currently logged in party. So:

GET /common/customers/me returns the currently authenticated customer.

GET /common/customers/9f83c401 returns a different customer that the authenticated party has authorization to see.

Error Payloads

The draft standard relies on 400 and 422 HTTP status codes to represent all consumer-side error conditions without any further elucidation.

We believe this will yield a poor developer experience. If the consumer sends an erroneous payload then all they will get back is a 422 with no indication as to which property or enumeration is detected as the problem. We strongly believe that any 400 or 422 status should be augmented with an error body providing more detail as to the specific error condition(s). The error body should provide a machine-readable error code as well as a human-readable message.

We note that something along these lines has been proposed in Issue Comment 440111013
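A minimal error body along the lines being suggested might look like the following (the code scheme and field names are hypothetical, loosely following the JSON:API error object convention):

```json
{
  "errors": [
    {
      "code": "urn:cds:error:invalid-field",
      "title": "Invalid Field",
      "detail": "open-status must be one of OPEN, CLOSED",
      "source": { "parameter": "open-status" }
    }
  ]
}
```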

OAS Specification

We note that the publication of the OAS Specification added greatly to our ability to scan and understand the details as well as the high-level structure of the draft standard. With hindsight, the earlier introduction of this artefact would have been extremely useful. We suggest that this specification should be the primary format for the discussion going forward and for future CDR domain standards.

Part B - Banking Domain Considerations

The following banking domain considerations have been identified; this feedback is provided per resource grouping.

Get Accounts

Account Status Coverage

With the request there is an optional open-status flag. The enumerations for this are OPEN and CLOSED, which does not cover all the status types for an account (e.g. Pending). Additionally, the Account and AccountDetail responses do not include this status.

Account & Account Details entities

Product Category Coverage

The accounts response payload & product category may not be held consistently by all ADIs; however, their portfolios generally are consistent (MLN, SAV, DDA, TDA, RCA). The mapping of account types doesn't seem to match 1:1 with the standard portfolio types, as we only have 3 account type entities showing:

BSBAccountType Object

Agree with Brian Parker's (Cuscal) feedback for a BSBAccountType object; since many ADIs have multiple BSBs, it makes sense to include BSB.

Open-status Property

We need to include the open-status property in the response payloads so it can be used in subsequent filters if required.

Eligibility Type Enumeration

Eligibility restrictions on people who may apply for a product is currently explicit to STAFF and STUDENT as an enumeration. Would this be better represented via an Eligibility Type which would be a restriction array of types of individuals who are eligible for that product. E.g Mutuals may have products specific to employment groups e.g. defence workers and emergency services.

Get Payees

NPP Payee Type

Can we extend the payee$type enumeration to include a flag to indicate NPP Aliases? We have the ability to view the NPP-enabled status of an account for the purpose of sending or receiving NPP payments, but not for viewing Payees that are an NPP Alias.

We understand that these are treated separately in most institutions however we feel the API should have a holistic view of Payees across the board.

Payee Creation Date

We propose an extension to the response schema to include creation date of the payee. This would allow sorting of payees by creation date (specifically in the presentation tier).
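The proposed extension could be as simple as one additional field on the payee object (the creationDate field name and date format here are hypothetical):

```json
{
  "payeeId": "pay-123",
  "nickname": "Rent",
  "type": "DOMESTIC",
  "creationDate": "2018-09-14"
}
```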

Promysys commented 6 years ago

Consent

The current standards framework and associated discussions are an opportunity to address one of the key and fundamental elements called out in Chapter 8 of the ACCC Rules Framework for the CDR, specifically the requirement for Consent.

Consent and new Consent Standards as yet haven’t been addressed explicitly within the Technical Standards or working groups and we feel strongly that in order to support the ACCC framework and broader Consumer Data Right (and Consumer expectations), it should be. New Consent standards need to be developed and included to manage and ensure that, per the Framework:

• Consent should be freely given by the consumer.
• The consumer's Consent should be express and explicit, not implied.
• Consumer Consent should be informed.
• The Consent obtained should be specific as to the purpose of sharing data, that is, the uses to which the data will be put.
• Consent should be time limited.
• Consent should be able to be easily withdrawn with near immediate effect.

At Priviti, we are helping to shape the global standard for Consent and our solution complies with the intention of Consent Management as it relates to Open Banking and more broadly within the CDR guidelines. Consent is NOT Identity & Access Management or Authorisation, which is how it has been traditionally handled in this discussion and other Open Banking implementations; it is essential and complementary to them.

This comment is an effort to call out the need for this group and the Data Standards bodies to ensure that Consent is addressed explicitly now and not, as happened in the UK, as an afterthought necessitating rework and further definition. We've shared an example of how it has been considered in the UK and also provide a working example of how we have created our Consent platform.

The request of the Standard bodies is that we have a separate Consent specific workshop to ensure that the considerations called out in the framework are addressed as currently they haven’t been, and we’re keen to help shape this standard. In order to protect our Australian and globally approved Patents for Consent, we are not going to post detailed flows and code in this forum and are requesting a Consent specific workshop with all/any interested parties willing to help shape the standard for Consent for Data Sharing in Australia.

Example flows and Consent Basics

Overview from the UK Open Banking Framework

The figure below provides a general outline of an account information request and flow using the Account Info APIs.

image

Steps

Step 1: Request Account Information
• This flow begins with a PSU consenting to allow an AISP to access account information data.

Step 2: Setup Account Request
• The AISP connects to the ASPSP that services the PSU's account(s) and creates an account-request resource. This informs the ASPSP that one of its PSUs is granting access to account and transaction information to an AISP. The ASPSP responds with an identifier for the resource (the AccountRequestId, which is the intent identifier).
• This step is carried out by making a POST request to the /account-requests endpoint.
• The setup payload will include these fields, which describe the data that the PSU has consented with the AISP:
    • Permissions: a list of data clusters that have been consented for access
    • Expiration Date: an optional expiration for when the AISP will no longer have access to the PSU's data
    • Transaction Validity Period: the From/To date range which specifies a transaction history period which can be accessed by the AISP
• An AISP may be a broker for data to other fourth parties, and so it is valid for a customer to have multiple account-requests for the same accounts, with different consent/authorisation parameters agreed.

Step 3: Authorise Consent
• The AISP redirects the PSU to the ASPSP. The redirect includes the AccountRequestId generated in the previous step. This allows the ASPSP to correlate the account-request that was set up. The ASPSP authenticates the PSU. The ASPSP updates the state of the account-request resource internally to indicate that the account request has been authorised.
• The principle we have agreed is that consent is managed between the PSU and the AISP, so the account-request details cannot be changed (with the ASPSP) in this step. The PSU will only be able to authorise or reject the account-request details in their entirety.
• During authorisation, the PSU selects the accounts that are authorised for the AISP request (in the ASPSP's banking interface).
• The PSU is redirected back to the AISP.

Step 4: Request Data
• This is carried out by making a GET request to the relevant resource.
• The unique AccountId(s) that are valid for the account-request will be returned with a call to GET /accounts. This will always be the first call once an AISP has a valid access token.

Overview of an implementation of the Priviti Consent Framework

We would acknowledge the UK is now providing adequate guidelines in the consideration of new standards for Consent. Actual adoption, however, has stagnated and been complex to implement, due to Consent and Consent standards not being prioritised (and, to be fair, being relatively immature) when the UK Open Banking standards were being defined. Systemised, fine-grained Consent wasn't the priority when compared with Open Banking and GDPR compliance. We believe there is an opportunity for Australia now to leapfrog the rest of the world with the CDR to ensure Consent is a forethought component of the new standards rather than an afterthought.

In Priviti we use the notion of a Consent Triangle which facilitates Consent between three parties: a Consent Requestor, a Consent Provider and a Credential Provider (who has a relationship with both parties and can share information or act on behalf of the Consent Provider). Our approach is helping to shape the current and emerging standards for Consent within Open Data (CDR) and Open Banking.

Priviti supports the emerging standard by allowing an individual to release credential access securely to permitted third parties, for a specific purpose, for a limited time. A unique combination of push-based dual channel authentication and matching functionality delivers a secure and versatile solution for sharing credentials.

Again in order to protect our Patents we don’t feel it is appropriate to post our code and schemas in this Open forum at this stage, so we are requesting a specific workshop on the concept. We have included the principal actors and flows to guide a future discussion on the concept.

There are 4 roles in the process as identified in the diagram and defined below:

image

  1. Priviti is a matching service which acts as an intermediary between the Credential Provider, Presenter and the Acceptor. Priviti validates that shared details sent from the Acceptor and the Presenter match. If the match is successful, Priviti forwards the relevant details to the Credential Provider who then releases credential access to the Acceptor.
  2. The Credential Provider holds the Presenter’s credential. The Credential Provider will use the Credential Reference LookUp service to add credential references during Presenter onboarding and retrieve these when a successful match has occurred. The Credential Provider is responsible for releasing the credential access, following the successful authentication of the request by Priviti. E.g. a bank who holds customer’s bank details.
  3. The Presenter is an individual, who has an existing relationship with a Credential Provider, such as a bank. The Presenter will be responsible for granting consent to allow an Acceptor to access their credential, using the Credential Provider's existing application. E.g. a customer who has a bank account, who wishes to use a third party service.
  4. The Acceptor is a third-party service, who requires consent to access a Presenter's credential. The Acceptor will be responsible for requesting consent by communicating directly with the Presenter. E.g. an aggregator who requires consent to access a customer’s bank details.

The following data set will be required:

Authorisation Request Token (ART)

The ART contains the following information:

image

image

image

tgrid-usa commented 6 years ago

Secure Logic would like to present the following feedback and suggestions of future work in respect of the draft Consumer Data Standards.

Security

  1. We note that all transactions in scope of the Open Banking API standards will only be conducted through backchannel communication between servers. For this specific leg of communication in the process, it is recommended that the standards specify a requirement for Cross Origin Resource Sharing (CORS) to be disabled to avoid the security risk.

  2. Will the standards cover authentication and authorisation requirements for multiple client modalities such as mobile and web? It is important to mention requirements for such measures in order to set a security baseline over all Accredited Data Recipient (ADR) client implementations, protecting the end-to-end integrity of an Open Banking transaction flow.

Performance

  1. In section 9. Authorisation and Authentication Process of the ACCC CDR Rules Framework, it is explicitly stated that “data holders must collect and maintain records and report on API performance, including response times against minimum service level benchmarks set out in the standards”.

    The draft Consumer Data Standards seems to lack concrete definition of the required API performance service levels, such as maximum response time. In addition, Secure Logic also recommends a convention for handling errors relating to failure to meet performance-related thresholds to be set out in the standards. This will aid in the multi-party integration between data holders and ADRs from simple to complex use cases.

  2. Caching can help data holders meet set performance service levels by reducing the amount of incoming traffic load for data which remains identical over a long period of time, such as personal details. Furthermore, it can also enable data holders to only provide incremental transaction history based on new / different data as opposed to the whole works.

    The standards should govern or guide the implementation of caching-related HTTP headers to promote uniform behaviour across the Open Banking ecosystem.

Future Work – Granular Authorisation Scope and Consent Taxonomy

  1. The first pass of the standards at providing authorisation scope is sufficient and practical in view of the July 2019 go-live timeframe. However, Secure Logic encourages review and adjustment of the structure to enable facilities for more finely-grained authorisation scopes such that each data attribute can be authorised atomically. An example would include consumers who only want to expose a combination of name and salary deposit transactions from their bank accounts to a smart budgeting app. In the current scheme, they will need to agree to Basic Customer Data and Bank Transaction Data which includes other superfluous information.

  2. Constraint parameters should be introduced to the authorisation scope so that consumers do not need to unnecessarily expose their personal or financial information in its entirety. Examples of this concept can be manifested in form of a date range for transaction history, a payer filter for transaction history, and the like.

  3. The concept of consent is still fluid in the context of the draft standards. We recommend future work to cover explicit consent data structure and storage for audit purposes. To deliver an effective consent data structure, the standards should first define a consent taxonomy to integrate consent into the workflow in a seamless and deterministic way.
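The constraint parameters suggested in point 2 could be sketched as qualifiers attached to a scope at authorisation time. This structure is entirely hypothetical and only intended to make the idea concrete:

```json
{
  "scope": "bank_transactions",
  "constraints": {
    "oldest-time": "2018-01-01T00:00:00Z",
    "newest-time": "2018-11-01T00:00:00Z",
    "payer-filter": "EXAMPLE EMPLOYER PTY LTD"
  }
}
```

Under such a scheme, the data holder would reject any request for transactions outside the consented window rather than relying on the ADR to filter client-side.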

dmcranston commented 6 years ago

Hi, I am not sure if this is the correct thread or not. The specification references components of the JSON:API specification rather than adopting the whole specification. Is there a reason the APIs are not fully compliant with that specification?

kennethleungswift commented 6 years ago

To supplement on my colleague @DDobbing 's feedback above, from SWIFT’s perspective, we highly recommend the consideration of using ISO20022 message elements and components where applicable.

Another point I want to make is to make sure the design of the payload has been considered through the lenses of both the consumer client and the corporate client. For instance, the response data's structure and completeness are more important for reconciliation purposes from a corporate client's perspective.

A few more suggestions:

  1. TransactionCode
     • It has been mentioned by other folks, but we would like to stress the importance of this for end-users to accurately identify and distinguish the transaction without additional parsing / guessing logic.
     • Consider using the ISO elements BankTransactionCode and/or ProprietaryBankTransactionCode, as in UK Open Banking.
  2. The following elements are missing which are essential and important for a transaction from a bank customer's perspective to perform reconciliation in a structural manner:
     • ChargeAmount (related bank charges on the transaction, if applicable)
     • CurrencyExchange, InstructedAmount (when it is an FX-related transaction)
     • CreditorAgent
     • CreditorAccount
     • DebtorAgent
     • DebtorAccount
     • MerchantDetails (if it is retail and card transaction related)
     • CardInstrument (if it is retail and card transaction related)
     All of the above are ISO20022 elements and are used in UK Open Banking as well as in wider standards in the existing bank account reporting space.
MacquarieBank commented 6 years ago

Our first feedback is to say well done on reaching the milestone of a first draft for Open Banking! Below is our feedback on the first draft. Please note, given this is the first draft, we have sought feedback from a wider group of internal stakeholders, so some feedback will be on earlier decision proposals. We felt it was important to include this feedback, as it relates to complexity in implementation.

HTTP Headers (MGL-1)

We support the requirement to return a minimum of set of standard HTTP response headers, to ensure strong security amongst all data holders. A good starting point for these headers is https://www.owasp.org/index.php/OWASP_Secure_Headers_Project#tab=Headers

URI Structure (MGL-2)

Although discussed in an earlier decision proposal, feedback from our engineers is a strong preference to avoid naming collisions in the URI structure.

As an example:

GET …/accounts/{id} returns the detail of a specific account
GET …/accounts/transactions returns the transactions of multiple accounts
GET …/accounts/{id}/transactions returns the transactions of a specific account

The second API makes it difficult, as you need to ensure there is no {id} = transactions. It would be preferable to have this as GET ../transactions

Pagination (MGL-3)

We don't support making the following attributes mandatory: page, last, totalRecords, totalPages. We do not currently provide the functionality to jump to a random page in our website or mobile app, and favour a standard 'cursor' approach, where you can simply fetch the next set of transactions. This is a common approach used by many applications (e.g Facebook, Twitter, Google), and is natively supported by many database technologies.

Having to implement page, last, totalRecords, totalPages requires architectural changes, and will have an impact on performance. Given the timeframe of 1st July 2019, this would be difficult to accommodate.
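A cursor-based scheme would only require opaque forward (and optionally backward) links rather than page counts, e.g. (the link shape and cursor encoding below are illustrative, not from the draft):

```json
{
  "data": { "transactions": [] },
  "links": {
    "self": "/banking/accounts/123/transactions?page-size=25",
    "next": "/banking/accounts/123/transactions?page-size=25&cursor=eyJvZmZzZXQiOjI1fQ"
  }
}
```

The absence of a "next" link would indicate the final page, with no need to compute totalRecords or totalPages up front.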

Security (MGL-4)

As noted in our previous feedback, we do not support using MTLS. See our original comment for rationale and preferred alternative.

Get Accounts (MGL-5)

(MGL-5.1) Could a more detailed description be given for the providerType field? It is not clear what we should provide for this field. Perhaps a more descriptive field name could also be given?

(MGL-5.2) Our preference would be for balances to be removed from this API, and only be included in the balances API. The rationale for this is:
• Easily allows a customer to only share account details (and not balances) if they choose
• Reduces maintenance and testing as the balance schema changes over time
• Improves performance by not having to retrieve balances (if not required by the data recipient)

Get Bulk Balances (MGL-6)

We have concerns about being able to meet API principle #9 (APIs are performant), particularly where the consumer may have thousands of accounts (e.g. a business, accountant, or financial adviser). For July 1 2019, we would like to see this made optional, and give time to explore if there are any better patterns for handling bulk balances. If bulk is required, a data consumer can still retrieve all accounts, and then retrieve balances for a set of accounts at a time.

Get Bulk Transactions (MGL-7)

We have concerns about being able to meet API principle #9 (APIs are performant), particularly where the consumer may have thousands of accounts (e.g. a business, accountant, or financial adviser). For July 1 2019, we would like to see this made optional, and give time to explore if there are any better patterns for handling bulk transactions. If bulk is required, a data consumer can still retrieve all accounts, and then retrieve transactions for a set of accounts at a time.

Get Direct Debits (MGL-8)

As has been communicated prior, direct debit authorisations are not held by banks, so this API cannot technically be implemented. Our recommendation is that this API is flagged in some way, so that all participants are aware that it is not feasible to implement.

Get Products (MGL-9)

  1. The current enumeration of feature types is insufficient to fairly compare products. Below is a list of additional feature types we would like to see added:
     a. Concierge Service
     b. Discount gift cards
     c. Entertainment Offers
     d. Airport services
     e. Discount Travel Bookings
     f. Bonus Reward points
     g. Chip enabled card
     h. Purchase notifications
     i. Mobile Banking - this is to differ from online banking (i.e. a purpose built mobile app has been created)
     j. BPAY
     k. Scheduled Payments
     l. Direct Debit
     m. International Transactions
     n. Cash Deposits
     o. Cheque Deposits
     p. Emergency Travel Assistance
     q. Government Guarantee
     r. Interest Redirection
     s. Cheque Book
     t. Integrated Online Trading
     u. Account Type Switching - ability to switch between account types (e.g. At-Call and Term Deposit) while retaining the same account number
     v. Business Segment Expertise
     w. Trust Accounting
     x. Dedicated Relationship Manager
     y. Third Party Integrations - describes the number of existing software integrations
     z. Bulk Payments
     aa. Merchant Payment Terminals
     bb. Payment Reconciliation
     cc. Customisable Interest payment frequency
     dd. Customisable TD Terms - you can specify a term deposit of any length you like (e.g. 65 day term deposit)
  2. We would also recommend a new Product Category to cover Regulated Trust Accounts (Deposits)
  3. We would recommend removing business/personal from Product Category, and leaving that to be part of Eligibility. We have some products that are applicable for both personal and business clients, so we cannot map this to a Product Category accurately.
  4. We would like to see the ability to define both tiered and stepped rates. Given there is an upcoming workshop on this, we will wait for the outcome of this.
spikejump commented 6 years ago

In addition to earlier feedback, we have the following additional comments.

+1 on the use of cursor-based pagination. Perhaps include examples of a bulk API call returning multiple accounts' transactions, with attention to pagination.

Perhaps include an additional customer$type of "delegate" to accommodate a delegated user who has access to an account as permitted by the account owner, e.g. financial advisers.

Maybe CreditCardAccountType needs a "balance$type"?

Both Loan and Credit Card Accounts should have an "autoInstallment" Boolean to indicate whether the bank will automatically direct debit the customer to pay the minimum amount.

There should be an Investment account type (for specificAccount$type) supporting brokerage, annuity, pre-tax retirement and post-tax retirement holding types and transactions to support personal financial management or investment advisor applications.

da-banking commented 6 years ago

Direct Debits

Repeating our comment on #029:

We do not have the data required to meet the proposed structure. If this is retained in scope in its current form, we would implement the directDebitAuthorisations as an empty array.

We understand that the current position of Data61 is that as the ACCC has this specifically in scope, then it will remain in the API. We consider this position bizarre. Having an endpoint that does nothing makes no sense. We hope the ACCC sees sense.

Security

There is no mention about the user opting into the sharing of specific accounts.

Our expectation is that the PSU will be able to consent explicitly to which accounts are available to the data consumer (as per UK).

Get Transaction Detail

Just a clarification on the ExtendedTransactionData.payee field - outbound payments via NPP need not be to a PayID - the value assigned to the payee field may simply be a name assigned by the user.

What's the intent of this structure?

"extension$type": "extendedDescription",
"extendedDescription": "string"

It appears that we are redirecting one string field to another.

We propose that extension$type be NPP, with extendedDescription and service both specific to that type.

As identified by others, need support for sct as well as x2p1.
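A minimal sketch of the shape this proposal implies, with a trivial validator. The type name "NPP" and the service enum values "X2P1" and "SCT" are assumptions drawn from the discussion above, not the draft standard:

```python
# Illustrative sketch only: extension$type identifies the NPP overlay, and
# extendedDescription/service are specific to that type. The literal values
# "NPP", "X2P1" and "SCT" are assumptions for discussion.

def validate_extended_data(ext):
    if ext.get("extension$type") != "NPP":
        return False
    # Both overlay services raised in this thread must be representable.
    return ext.get("service") in {"X2P1", "SCT"} and "extendedDescription" in ext

example = {
    "extension$type": "NPP",
    "service": "X2P1",
    "extendedDescription": "Payment for invoice 42 - thanks!",
}
```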

Pending transactions

There have been several mentions of pending transactions. These are not a thing from the perspective of a core banking system. They do not appear as a transaction on the account or the general ledger or a customer's statement. Pending transactions, which we call holds, are an entirely separate concept that can change the available balance of an account. They can be deleted, and they can auto-expire. Including holds, if in scope, should be a distinct endpoint to the transaction endpoints.
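To illustrate why holds do not fit the transaction schema: a hold reduces the available balance without ever posting to the account, and can expire or be deleted. A small sketch, with field names that are purely hypothetical:

```python
# Hypothetical illustration of holds as a concept separate from transactions.
# A hold affects available balance only; it never appears on a statement.
# All field names ("holdId", "status", "expiresAt") are assumptions.

def available_balance(current_balance, holds):
    """Available funds = current balance less the sum of active holds."""
    return current_balance - sum(h["amount"] for h in holds if h["status"] == "ACTIVE")

holds = [
    {"holdId": "h1", "amount": 50.0, "status": "ACTIVE",  "expiresAt": "2018-12-01"},
    {"holdId": "h2", "amount": 25.0, "status": "EXPIRED", "expiresAt": "2018-11-01"},
]
```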

Get Products

The effective bool parameter appears a little odd.

If true then only include products that are effective right now and exclude products that may be available at a future time. If false only include products effective in the future. If absent defaults to include all products.

Giving a specific meaning to effective being unspecified is inconsistent, counter-intuitive and inflexible. For consistency with other APIs, we recommend an enum instead: CURRENT, FUTURE, ALL
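A sketch of the semantics the recommended enum would give the Get Products filter. The parameter name and the product field `effectiveFrom` are illustrative assumptions:

```python
from datetime import date

# Sketch of the recommended CURRENT / FUTURE / ALL enum for Get Products.
# The "effective" parameter name and "effectiveFrom" field are assumptions.

def filter_products(products, effective="ALL", today=None):
    today = today or date.today()
    if effective == "ALL":
        return products
    if effective == "CURRENT":
        return [p for p in products if p["effectiveFrom"] <= today]
    if effective == "FUTURE":
        return [p for p in products if p["effectiveFrom"] > today]
    raise ValueError(f"unknown value: {effective}")
```

Note that with an explicit default of ALL there is no need to attach special meaning to an absent parameter.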

We would also re-iterate the need we stated here that discounts can be applied to customers, not just to products and accounts. This is a form of bundling. An example of this might be a regular customer gets $100 per month of waived transaction fees, rolled up across all of their accounts; where a high-value customer might get $200 per month of waived transaction fees.

The Product Discount Type enum is too limited. For example, a discount may be offered on the basis of age or employment status. Indeed most of the items described in Product Eligibility Types may also be a valid reason to provide a discount. That is, while access to the product may not be limited to specific eligibility criteria, discounts on fees for that product may be based on such criteria. Perhaps the list of Product Discount Types should include the values in Product Eligibility Types. At a minimum, OTHER is essential to capture novel scenarios.

Also, a swagger nitpick: the example code includes discountType as a field of features, but the documentation for ProductFeature does not include it.

Pagination

Others have suggested cursor based pagination is preferred. We have no strong preference either way, and will support either, but not both.

WestpacOpenBanking commented 6 years ago

Westpac has the following additional comments

Endpoint versioning and Swagger

The current proposed versioning strategy is problematic when viewed with regard to Swagger and automated tools for development (including automated code generation), quality assurance, operations and so on. In particular, there is no standardized way for swagger 2.0 to accommodate endpoint versioning with the proposed header negotiation; it can only expose all endpoints and fields for an API. We note that the versioning proposal was only able to be commented on by one bank and two individuals, and we feel that this issue needs to be revisited with wider input.

We strongly suggest a block versioning split by authenticated/unauthenticated endpoints and by industry (including a separate version for common endpoints) in alignment with the UK. This approach will support the development needs of both consumers and holders by allowing them to continue to use off-the-shelf and standardized development tools and welcome further discussion on the issue.
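To illustrate the distinction, the two approaches might look as follows. All paths and header names here are hypothetical, for discussion only:

```python
# Sketch contrasting the two versioning approaches discussed above.
# All paths and header names are hypothetical assumptions.

# Per-endpoint header negotiation: one URI, version carried in a header.
# A swagger 2.0 file cannot express which fields belong to which version.
header_style = {
    "path": "/banking/accounts",
    "headers": {"Accept": "application/json", "x-v": "2"},
}

# Block versioning (UK style): the version of the whole block (split by
# authenticated/unauthenticated and by industry) is part of the URI, so a
# single swagger 2.0 file describes exactly one version of that block.
block_style = {
    "path": "/cds-au/banking/v1/accounts",
    "headers": {"Accept": "application/json"},
}
```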

Other remarks on the Swagger specification

Sensitive PII data included

A number of endpoints, including the customer endpoints, the account detail endpoint, the payee endpoints and the transaction endpoints include sensitive data. We have concerns with sharing sensitive data without masking/hashing. We also note that the sensitive data shared may not be the data of the party who gave consent to share it. Sharing such information at scale puts at risk sensitive information of both customers and non-customers.

BillerCode

We are able to provide billerCode for payees. For transactions, the billerCode is part of the payment instruction which is not always possible to link to the transaction and may or may not be held by us.

Transaction Codes

Transaction codes in Australia depend on the payment type and are often proprietary (with less standardization than in the UK). We think that the most value to data consumers will be provided by providing an appropriate ISO 20022 code, but that doing so requires considerable mapping activity to occur. This would also be the case with a less complex scheme with fewer transaction types. For pragmatic reasons around the July 19 deadline and build complexity we suggest that codes are added to a later version of the standard.
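The mapping activity described above amounts to maintaining, per data holder, a table from each proprietary transaction code to an ISO 20022 bank transaction code (domain / family / sub-family). A sketch, in which the proprietary codes are invented and the ISO codes shown are illustrative examples only:

```python
# Sketch of proprietary-to-ISO 20022 bank transaction code mapping.
# Proprietary codes here are invented; the ISO 20022 triplets are
# illustrative examples of the domain/family/sub-family structure.

PROPRIETARY_TO_ISO20022 = {
    "TFR_OUT": ("PMNT", "ICDT", "DMCT"),  # payments / issued credit transfers / domestic
    "PAY_SAL": ("PMNT", "RCDT", "SALA"),  # payments / received credit transfers / salary
}

def to_iso20022(proprietary_code):
    try:
        domain, family, sub_family = PROPRIETARY_TO_ISO20022[proprietary_code]
    except KeyError:
        raise ValueError(f"no ISO 20022 mapping for {proprietary_code!r}")
    return f"{domain}.{family}.{sub_family}"
```

The size of this table, multiplied across payment types and data holders, is the build complexity referred to above.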

Data quality and data entry

Data quality issues are common with customer entered information. For example, customers might accidentally transpose or misspell names or may not understand the difference between a business name and a trading name and enter those fields incorrectly. We suggest that the descriptions of person, organisation and payee endpoints reflect how data has been collected and that it is subject to customer error. This would facilitate transparency around the accuracy obligations under the proposed Consumer Data Right Privacy Safeguard 11.

Minor inconsistencies between draft standard and decision proposals

In addition to those noted in our previous response, we have spotted the following inconsistencies between the draft standard and the decision proposals:

| Schema | Field name | Discrepancy |
| --- | --- | --- |
| Person | lastUpdateTime | Description missing |
| Person | firstName | Decision is optional, standard mandatory. Description partially missing |
| Person | middleNames | Description missing, including note that array can be empty |
| Person | prefix | Formatting example part of description missing |
| Person | suffix | Formatting example part of description missing |
| Person | organisationType | Standard has required = false, decision has the field being required |
| PAFAddress | N/A | Decision says that this will be defined in the draft standard, but there is only a placeholder |
| Account | productCategory | Called accountCategory in the decision. Optional in draft standard, mandatory in decision |
| Account | balance$type | String instead of enum, and permissible responses from decision missing |
| AccountDetail | termDeposit/creditCard/loan | Decision allows for none of these to be included for appropriate account types. Standard requires inclusion of exactly one of these objects |
| AccountDetail | specificAccount$type | Mandatory in decision, required is false in draft standard |
| AccountDetail | address | (Note our earlier feedback on security scopes in relation to this field.) Optional in the draft standard and mandatory in the decision |
| ProductFee | amount | Mandatory in decision, not required in standard |
| Many schemas | additionalValue | Wording in draft standard is confusing because the field isn’t labelled ‘Conditional’ |
| TransactionBasic | reference | Draft standard has required, decision has optional |
| Payee | type | Enumeration is of string type |

We agree with @anzbankau’s comments on the transaction, product and account endpoints, on common schemas, and with their general comments section. In particular, we note the comment about the inconsistent use of an amount object for storing amounts and currencies. We suggest alignment with the UK, and consistency.

NationalAustraliaBank commented 6 years ago

Summary

NAB welcomes the opportunity to respond to the 2 November 2018 Working Draft of the CDR/Open Banking Standards.

Summary of our feedback:

Related feedback

This response builds on NAB’s extensive contributions to the public policy debate on Open Banking. These include:

Sensitive Information

In all of NAB’s interactions regarding the development of the CDR we have emphasised that safety and security of customer data is absolutely paramount. NAB continues to have serious concerns regarding the security implications of some aspects of the framework. This includes the sharing of sensitive information including Personally Identifiable Information (PII) and information that supports banking identity verification, such as mobile numbers and email addresses. It also includes customer sensitive transaction and payee data.

NAB objects to the inclusion of customers' personal information as there are significant security risks associated with the sharing of such a level of confidential data.

Given that KYC data is excluded in Phase 1 and customers will not be able to switch providers via the CDR alone, NAB does not consider there is an appropriate use-case for the data. There is also a strong argument that any personal information can be supplied by the customer to the data recipient directly.

Security

There are significant security risks associated with the sharing of confidential data and therefore security should be our top priority as part of the Open Banking scheme. NAB welcomes the kick-start of the industry security forums but we strongly recommend Data61 dedicate more resources to carefully and dutifully analyse abuse-cases and design controls in collaboration with the industry to achieve a high security standard.

We have identified a number of security gaps that require further analysis and inclusion in the Standards, as follows:

The definition of these and other security gaps is essential to complete the solution's blueprint. While these matters remain uncertain, data holders' ability to build a solution is limited, adding potential delays and risk to the process.

Phasing of scope

We strongly recommend focusing on a smaller set of Minimum Viable Product (MVP) APIs for Phase 1 rather than attempting such an ambitious scope. It will be a major accomplishment to have the system go live by July 1st 2019 regardless of the number of APIs that are present in the scheme. Instead we should focus on de-risking the implementation by focusing our attention on the non-functional aspects of the scheme. We highly recommend following an Agile or Iterative methodology as this allows us to learn and adapt as we uncover the finer intricacies of what makes for a successful and secure system.

Now that we have visibility of the data scope, which consists of 17 APIs, we are confident that the timelines are unrealistic, especially given that industry testing will need to start well before July 1st 2019. We are also cognisant that the directory and administration APIs are unclear and will also require significant data holder implementation effort.

As a start, we propose the following APIs be removed from the MVP scope. These APIs either provide the same data as other APIs, expose sensitive data, or we as data holders do not have the data available.

GET /banking/accounts/transactions (bulk transactions)  
POST /banking/accounts/transactions (bulk transactions)  
GET /banking/accounts/{accountId}/direct-debits  
GET /banking/accounts/direct-debits  
POST /banking/accounts/direct-debits  
GET /banking/payees  
GET /banking/payees/{payeeId}  
GET /common/customer  
GET /common/customer/detail

We also propose that a plan is developed to include a review of, and updates to, the Standards based on the planned phasing, i.e. Phases 1, 2 and 3. Rather than treating the currently published Standards as the final end-state version, NAB considers that there should be an opportunity to review and revise the Standards after each phase is implemented, in order to improve them for the next phase.

Feedback on the process

We commend the progress made thus far, the openness of the process and the pace at which the various industry bodies are working in parallel. There are however, fundamental gaps:

General standards feedback

We have previously given feedback via DP30 that the definition of mandatory vs. optional vs. conditional for each field/object was too ambiguous e.g. sometimes within the standard Optional seemed to indicate a provider choice.

To minimise confusion and interpretation whilst implementing the APIs we would like to see the revised version of the Standards clearly distinguish:

Transactions APIs

The following feedback relates to the transactions APIs:

Overall

Detailed transaction remittance information within the transaction data APIs allows data recipients to infer and correlate information about our customers, their behaviour and lifestyle patterns (e.g. data about hospital visits, specialist treatments and conversations via NPP detailed and instant remittance information). The inclusion of this information may not be apparent to customers when giving consent, and once granted the sharing is irrevocable. Customers may not be aware of how this data could be used in ways they did not intend.

Therefore, we recommend additional and more granular authorisation scopes for transaction data as per the below list:

Query Parameters

The inclusion of complex query parameters such as transaction amount and free text searches across vast data sets and unbounded timescales is resource intensive for data holders. We recommend reducing the MVP scope and making these query parameters optional.

Description

We believe the definition "The transaction description as applied by the financial institution." is too broad and can be interpreted in many different ways. We believe this should be detailed further with some examples given.

Direct Debits APIs

The following feedback relates to the direct debits APIs:

As previously mentioned in DP29, banks are not the data holders for direct debit authorisations which are debited from their customers' accounts.

NAB strongly objects to the proposed approach which is a workaround to derive the data from the already processed direct debits. This will not be a trivial exercise, could lead to inconsistencies across implementations and exposes incomplete and misleading data. We also question whether this would deliver the intended user experience, particularly in the use case of account portability, as key data is missing e.g. frequency, amount per debit, expiry date. We also note that in Phase 1 account portability within the CDR will not be possible given KYC data is excluded.

Get Payee Detail

As previously mentioned above and in DP32, the API payload contains the personal information of other parties. NAB strongly objects to the inclusion of data that belongs to other people or organisations without their explicit consent. The transfer of this data would effectively involve the transfer of personal information where the individual to whom it relates has not explicitly consented to its transfer. In addition and as noted above, in the absence of a use-case for data portability in Phase 1 of the CDR we do not agree or endorse that this data should be included.

DomesticPayeeType - bsb and accountNumber

We note that the bsb and accountNumber for the customer's payee are not masked whereas we do expect the customer's account number and BSB to be masked. We believe this is inconsistent and masking should also apply to the payee's account number and BSB.

DomesticPayeeType - payId

The name field should not be included in the Standards. This field is mandatory and is defined as "The name assigned to the PayID by the owner of the PayID." However, this is not data required to make an NPP PayID payment nor is it always held or stored by the data holder. There are other ways to access this data held by the NPP PayID Central Addressing Service.

The identifier field is a payee's personal information e.g. mobile number, email address. The identifier is not the customer's data and the data holder does not have consent from the payee to share this data. The payee may not want the customer to share their personal information with a third party. The payee may have provided their PayID to the customer with the expectation that it would only be used to make a payment via the customer's banking channel.

Common Schemas

address within AccountDetail

Similar to our other concerns regarding the inclusion of a customer's personal or sensitive information within the payloads, we object to the inclusion of correspondence address information within the Account Detail payload. The correspondence address is considered PII and is particularly sensitive, especially as it potentially reveals the location to which the bank sends physical plastic cards. We also believe it is not intuitive for a customer that their detailed account information includes their address.

balance$type and associated structures

We believe the way Balance and Limit are represented is overly complex and suggest it should be simplified. The current proposal creates more complex processing rules for data holders and data recipients than is necessary.

A simpler representation is possible with only one balances object (allowing multiple balances and currencies) and a separate limit object. Our recommendation is summarised below:

This allows any account to have one or many balances in one or multiple currencies.

This simpler representation delivers on the intent and is simpler to understand, interrogate and implement and is the most accurate representation of the data. An example of how this might be represented is below:

| Name | Type | Required | Restrictions | Description |
| --- | --- | --- | --- | --- |
| balances | [AccountBalance] | true | none | Array of balances on the account |
| limit | [AccountLimit] | false | none | The credit limit details for the account |

AccountBalance Properties

| Name | Type | Required | Restrictions | Description |
| --- | --- | --- | --- | --- |
| currentBalance | CurrencyAmount | true | none | The current balance of the account at this time. Should align to the current balance available via other channels such as ATM balance enquiry or Internet Banking |
| availableBalance | CurrencyAmount | true | none | The available funds in an account. Assumed to be positive or zero |

AccountLimit Properties

| Name | Type | Required | Restrictions | Description |
| --- | --- | --- | --- | --- |
| creditLimit | CurrencyAmount | true | none | The maximum amount of credit that is available for this account. Assumed to be positive or zero |
| amortisedLimit | CurrencyAmount | false | none | The available limit amortised according to payment schedule |
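One way the simplified structure proposed above might serialise, shown as a Python dict. Field names follow the tables; the account id, amounts and the CurrencyAmount shape are illustrative assumptions:

```python
# Illustrative serialisation of the proposed simplified balances structure.
# The "accountId" value, amounts and CurrencyAmount shape are assumptions.

account = {
    "accountId": "acc-001",
    "balances": [
        {   # AUD balance
            "currentBalance":   {"amount": "1000.00", "currency": "AUD"},
            "availableBalance": {"amount": "800.00",  "currency": "AUD"},
        },
        {   # a second balance in another currency on the same account
            "currentBalance":   {"amount": "250.00", "currency": "USD"},
            "availableBalance": {"amount": "250.00", "currency": "USD"},
        },
    ],
    "limit": [
        {"creditLimit": {"amount": "5000.00", "currency": "AUD"}},
    ],
}
```

A single-currency account would simply carry a one-element balances array, so data recipients process every account the same way.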

ProductCategory

While we can work with the enumerations outlined in the standard, we think it is easier, and at times more accurate, to remove the Business and Personal separation from this classification and use eligibility criteria to determine whether a product is available to businesses, individuals, or both.

We also believe the Foreign Currency options can be treated as Features within existing product categories rather than mandating them as their own categories.

Given the above, we propose a simpler list that looks like the below.

maskedNumber

This definition of masking conflicts with the definition in the standards section under "Common Field Types". Also, the standards section defines two masked data types, one for credit cards and one for other account numbers, whereas within the Accounts payload only one type is used for both.
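For concreteness, here is a sketch of the two masked types the standards section describes. The exact masking rules are the point of contention above; the rules below (last four digits visible) are assumptions for discussion only:

```python
# Illustrative masking helpers for the two masked field types mentioned in
# the standards section. The "last 4 digits visible" rule is an assumption
# for discussion, not the draft standard's definition.

def mask_credit_card(pan):
    """Mask a card number, e.g. 'xxxx xxxx xxxx 3456'."""
    digits = pan.replace(" ", "")
    return "xxxx xxxx xxxx " + digits[-4:]

def mask_account_number(number):
    """Mask all but the last 4 characters of a generic account number."""
    return "x" * (len(number) - 4) + number[-4:]
```

Whichever rules are adopted, the point stands that the Accounts payload should distinguish the two types rather than applying one type to both.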

CreditCardAccountType

We don’t understand the difference between minPaymentAmount and paymentDueAmount; they appear to be the same field. The descriptions should be expanded to make the distinction clear.

LoanAccountType

loanEndDate, nextInstalmentDate, minInstalmentAmount and repaymentFrequency do not apply to most overdraft-type products, so we recommend they be optional rather than mandatory.

openStatus (missing)

We note that openStatus is missing in the response. This has previously been agreed to be included but has not made it into this version.

providerType / productName

We note the previous change of providerType to productName in this schema has been rolled back. We assume this is an oversight as this was fixed in final published versions of DP27 and DP31.

isNegotiable

As previously stated, we do not support inclusion of a field called isNegotiable, particularly as described. We propose the following replacement name and description:

"pricingOnRequest" - pricing for this product is not published because of the complexity of factors that drive the final price, and can only be finalised as part of the application process.

Product & Account Components

Overall

We are concerned that the split of the Account and Product Components into different groups / enumerations (for example Product Feature Types / Account Feature Types, but extends also to Fee Types, Discount Types and Rate Types) will lead to these diverging over time and this does not seem necessary. This is already happening with the ESTABLISHMENT fee type being available as an option in Product Fee but missing from Account Fee. We assume this is an oversight, but propose they are all merged to remove future / unnecessary anomalies or confusion. If these are to stay split, what controls will be in place to ensure they remain consistent? Feedback throughout is combined in the interest of not repeating ourselves.

Rate Type Feedback

As per our feedback on DP30, the modelling of interest rates is insufficient. The suggestion that complications for differences in rates should all be covered in the additionalValue field can be implemented by NAB, but the result is not easily machine-readable. Examples of the sort of information that would need to be placed in the additionalValue field are outlined at the links below, which we publish today. Note that the additionalValue field would need to include information about lending purpose, repayment type and fixed term in the home loan examples.

https://www.nab.com.au/personal/interest-rates-fees-and-charges/indicator-rates-selected-term-deposit-products

https://www.nab.com.au/personal/interest-rates-fees-and-charges/interest-rates-for-home-lending

Product Feature Types / Account Feature Types

We recommend the inclusion of the additional Feature Types listed below:

Product Fee Types / Account Fee Types

We recommend the removal of the fee types listed below from the enumeration as they are specific fee names, not fee types (i.e. these should be listed fees within the Fee name field mainly classified as OTHER_EVENT):

Product Discount Types / Account Discount Types

We recommend the inclusion of the additional Discount Types listed below:

Product Deposit Rate Types / Account Deposit Rate Types

As per previous feedback, we recommend the inclusion of the additional Deposit Rate Types listed below:

Product Eligibility Types

We recommend the inclusion of a type to indicate that a product is available to an individual / natural person.

Common (customer) APIs

Overall

As explained above and in DP26, we strongly believe that PII that is currently used for the purposes of identity verification, password recovery or multi-factor authentication should not be shared with data recipients under any condition.

lastUpdateTime

The exact intent of this field is unclear: it may refer to the last time the complete set of customer data was reviewed and updated, or to the last time any individual field was updated. Depending on the intent, NAB may or may not hold this data. Given the uncertainty regarding the intent and accuracy of the field, we recommend it be removed or made optional.

firstName, middleNames, prefix

These fields should be optional as they may not apply to all customers.

isPreferred (Phone)

This field should be optional, as it is for email. No preference may have been captured, so one cannot be inferred. Also, some customers may have requested (as a preference) not to be contacted by phone, so providing any phone number as a preferred contact number would be inconsistent with previous customer preferences.

occupationCode

NAB considers that this field should not be included within the customer payload, as it is likely to be inaccurate (i.e. it may be out of date) and that it is sensitive customer information for an individual. In addition, we note that this field was not specifically referred to within the Designation Instrument or Rules Framework.

abn, acn, isACNCregistered, industryCode, organisationType, registeredCountry, establishmentDate

This data was not specifically requested within the Designation Instrument or Rules Framework so we do not believe it should be included within the customer payload.

phoneNumbers, emailAddresses

NAB has previously provided detailed feedback regarding our concerns with respect to sharing sensitive PII data. NAB strongly believes this data should not be included in the payloads as it increases the risk of identity takeover in the event of a breach.

physicalAddresses

As above, NAB has serious concerns with sharing sensitive PII data via payloads. We also note that this data was not specifically requested within the Designation Instrument or Rules Framework.