ConsumerDataStandardsAustralia / standards

Work space for data standards development in Australia under the Consumer Data Right regime

Decision Proposal 288 - Non-Functional Requirements Revision #288

Closed CDR-API-Stream closed 1 year ago

CDR-API-Stream commented 1 year ago

This decision proposal contains a proposal for changes to Non-Functional Requirements and the Get Metrics end point based on feedback received through various channels.

The decision proposal is embedded below: Decision Proposal 288 - Non-Functional Requirements Revision.pdf

Consultation on this proposal will close on the 7th April 2023.

Note that the proposed changes have been republished in this comment and this comment.

Consultation will be extended to obtain feedback on these updated changes until the ~~5th May 2023~~ ~~12th May 2023~~ 19th May 2023.

CDR-API-Stream commented 1 year ago

The opening comment has been updated and the Decision Proposal is now attached.

The Data Standards Body welcomes your feedback.

ranjankg2000 commented 1 year ago

In the recommended “consent metrics”, we can potentially look at splitting the authorisation count for accounts between individual and non-individual entities. This can provide an indication of adoption among businesses.

AusBanking-Christos commented 1 year ago

On behalf of the ABA, can we please request an extra week for our Members to respond. Thank you.

CDR-API-Stream commented 1 year ago

Of course, no problem. We'll extend the consultation until the 7th April.

damircuca commented 1 year ago

We recommend making the “GET /admin/metrics” endpoint publicly accessible without any authentication or protection. This change would provide numerous benefits to the ecosystem.

Ref: https://consumerdatastandardsaustralia.github.io/standards/#get-metrics

We believe that restricting access to the “GET /admin/metrics” endpoint only to the ACCC and individual data holders limits the potential benefits to the ecosystem. By allowing public access, ADRs and other stakeholders can make better-informed decisions and plan their approach to each data holder more effectively.

nils-work commented 1 year ago

Hi @damircuca

Are you finding cases where the Get Status endpoint does not accurately represent implementation availability (i.e., because you encounter unexpected unavailability), or that there is not enough detail (on specific endpoint availability, for example) for it to be useful when initiating a series of requests?

jimbasiq commented 1 year ago

> Hi @damircuca
>
> Are you finding cases where the Get Status endpoint does not accurately represent implementation availability (i.e., because you encounter unexpected unavailability), or that there is not enough detail (on specific endpoint availability, for example) for it to be useful when initiating a series of requests?

Hi @nils-work

If you are able, please take a look at https://cdrservicemanagement.atlassian.net/servicedesk/customer/portal/2/CDR-3328 to see an example of when the Get Status endpoint has not worked sufficiently.

You can see the failure reflected in the CDR performance dashboard screengrab below. Availability is apparently 100%, but a 50% drop in API traffic is visible, i.e. APIs are down; specifically, HTTP 500 errors on the data retrieval APIs.

[Screenshot: CDR performance dashboard showing ~100% reported availability alongside an approximately 50% drop in API traffic]

nils-work commented 1 year ago

Thanks @jimbasiq, I'm not sure if I'll be able to access that ticket, but I'll check.

As a general comment, and it may not have been the case, but my initial thinking is that a scheduled outage may produce this effect (I note the drop in traffic appears to be over a weekend).

The Availability metric (at ~100%) would not be affected by a scheduled outage, but any invocations (resulting in 500s) may still be recorded and reported in Metrics (though there is not an expectation of this during an outage).

This makes it appear that either the Status SCHEDULED_OUTAGE was ignored by clients and about 50% of the average invocations were still being received (perhaps only some endpoints were affected), or the status was incorrectly reported as OK during an unexpected outage (of about 3 days) but only about 50% of invocations could actually be logged.

If it was an unexpected outage, the Status response should have been PARTIAL_FAILURE or UNAVAILABLE and Availability should have been about 90% for the month.
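For clarity, the arithmetic behind the ~90% figure can be sketched as follows (a minimal illustration only; the standards define the exact availability calculation and its treatment of scheduled outages):

```typescript
// An unplanned outage of about 3 days in a 30-day month leaves
// roughly 27/30 = 90% availability. Scheduled outages would be
// excluded from the calculation, keeping Availability at ~100%.
function monthlyAvailability(unplannedOutageHours: number, daysInMonth: number): number {
  const totalHours = daysInMonth * 24;
  return (totalHours - unplannedOutageHours) / totalHours;
}

console.log(monthlyAvailability(3 * 24, 30)); // 0.9
```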

damircuca commented 1 year ago

Hey @nils-work, generally the more data that is available, the more options we have for how to incorporate it within the delivery of CDR services.

The Get Status endpoint is very coarse and doesn't provide enough depth for us, whereas Metrics has a lot more detail that can be used to better support customers, implement a more refined CDR CX flow, and help with fault resolution.

Even now, whenever a support ticket is raised, we find ourselves going straight to the metrics report (available via the CDR site) to see the lay of the land before we respond to our customers.

Further to this, even with scraping we have been surfacing metrics such as performance metrics, stability percentages and more, which we find valuable for driving decisions and future enhancements.

We realise it may be a big ask, but it would be valuable, and would also raise transparency and accountability within the ecosystem, which is equally important to making the CDR a success.

CDRPhoenix commented 1 year ago

Rather than opening up the Get Metrics endpoint to the public, I think it is worthwhile to allow the public to sign up and download the raw Get Metrics data that the ACCC collects. This would still place the responsibility for chasing non-responding Data Holder brands with the ACCC and won't flood Data Holder brands with Get Metrics requests.

As a side note, stemming from ADRs hitting maximum TPS thresholds, I think it is also worthwhile to revisit batch CDR requests: whether there is a need for something like a Get Bulk Transactions endpoint for banking if there is a use case for fetching transactions periodically, and for the DSB to consider creating a best practice article on ZenDesk about ADR calling patterns, i.e. do you need to perform a Get Customer Detail and Get Account Detail every time you want to pull transactions down? Otherwise any increase in traffic threshold will be soaked up by "low value" calls and we will be forever chasing more and more bandwidth.

CDR-API-Stream commented 1 year ago

In response to @ranjankg2000:

> In the recommended “consent metrics”, we can potentially look at splitting the authorisation count for accounts between individual and non-individual entities. This can provide an indication of adoption among businesses.

Thank you for this feedback. This is a good idea to incorporate.

CDR-API-Stream commented 1 year ago

In response to @damircuca:

Making the metrics API public is a very interesting idea. The DSB will discuss this internally with Treasury and the ACCC to identify any policy reasons why this would not be possible. There are no real technical reasons why this would be an issue, provided there were low non-functional requirements to ensure Data Holder implementations didn't need to be over-scaled.

The other option, provided by @CDRPhoenix, where the data is made available from the Register, is also something that could be investigated.

damircuca commented 1 year ago

One thing to consider, which @CDRPhoenix touched on, is to open up the data that the ACCC collects rather than forcing the Data Holders to make changes on their end. Sorry for stating the obvious; you're likely considering this already 🤷🏻‍♂️

ACCC-CDR commented 1 year ago

The ACCC supports the changes outlined in Decision Proposal 288. These changes will improve the accuracy of the information captured through Get Metrics and better support the estimation of consumer uptake.

The ACCC suggests a further change to the PerformanceMetrics value. Currently, it is proposed that this value be split into unauthenticated and authenticated metrics. The ACCC suggests that splitting this value by performance tier (i.e. Unauthenticated, High Priority, Low Priority, Unattended, etc.) would better align these measures with the metrics reported for invocations and averageResponse. This change would assist the ACCC's monitoring of Data Holders' compliance with the performance requirements.

The ACCC notes suggestions by participants regarding the availability of Get Metrics data. As flagged by the DSB above, the ACCC will continue to collaborate with its regulatory partners to assess how Get Metrics data can most effectively enhance the CDR ecosystem but suggests that such measures should be considered separately from this decision.

cuctran-greatsouthernbank commented 1 year ago

Overall this will be a large-sized change for Great Southern Bank to implement. Given we have already planned work up until July 2023, it would be much appreciated if the obligation date for this change could be at least 6 months after the decision is made.

Issue: Error code mappings. Of the 2 options proposed, we prefer option 1: split the error counter by HTTP error code with corresponding counts. This will give a better understanding of which error codes the application returned; currently all 5xx errors are counted the same.
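For illustration, option 1 could produce a per-status-code error structure along these lines (the field names and shape are hypothetical; the final structure would be defined by the standards):

```typescript
// Hypothetical shape for option 1: error counts keyed by HTTP status code,
// rather than a single aggregated error counter.
interface ErrorMetricsByCode {
  [statusCode: string]: number;
}

const errorsForDay: ErrorMetricsByCode = {
  "500": 12, // internal server errors
  "502": 1,  // bad gateway
  "503": 3,  // service unavailable
};
```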

Issue: Lack of consent metrics. We need clarification on historical records: are we looking for counts (authorisation/revocation) from the beginning, or for the last 8 days? All other metrics carry the last 8 days' data, but the customer count and recipient count do not.

Issue: Scaling for large data holders. We prefer Option 2: tiered TPS and session count NFRs based on the number of active authorisations the data holder has. The customer base and the uptake of Open Banking at Great Southern Bank remain relatively small compared to the big banks. This option will help us reduce the cost of maintaining our infrastructure to meet the current TPS requirements. We can proactively manage active authorisations and scale up slowly as required. Depending on the tiered thresholds, we can potentially look at meeting the current tier plus the next tier up to cater for any sudden influx of registrations. Further consultation to define the tiered thresholds would be much appreciated.

anzbankau commented 1 year ago

We are broadly supportive of the proposal to uplift the CDR's non-functional requirements as outlined in Decision Proposal 288. This decision proposal covers a range of topics, and we suggest any proposed implementation schedule be priority driven, with careful consideration given to improved consumer outcomes, ecosystem value and impact to data holders (i.e., cost, time, and complexity of implementation).

Specific points of feedback as follows:

| Item | Feedback |
| -- | -- |
| Get Metrics Issues: authenticated and unauthenticated | We support the recommendation to "Split between authenticated and unauthenticated". |
| Get Metrics Issues: Granularity alignment to NFRs | We are not clear on the benefit, consumer or otherwise, of reporting on an hourly basis. We suggest the benefit be clearly articulated before changes are made. |
| Get Metrics Issues: Lack of consent metrics | As above, we suggest that the consumer benefit of this proposal be clearly articulated, as the effort to implement is likely to be significant. |
| NFR Issues: Scaling for large data holders | We support scaling at brand level and the uplift of the TPS ceiling in accordance with an appropriate demand forecast. We have yet to see evidence that the current TPS ceiling is inadequate and/or adversely affecting consumer outcomes. We suggest an evidence-based approach to forecasting future demand, so that holders can plan implementations with sufficient lead time. We note that open banking digital channels are not like-for-like with existing banking channels; the CDR rules for entitlements mean that there are additional performance considerations for data holders. We welcome the opportunity to work with the DSB to review the current demand on the system. We do not support removing site-wide NFRs. |
| NFR Issues: NFRs for incident response | We are broadly supportive of NFRs for incident response and endorse a more transparent approach to tracking issues and resolution. NFRs for incident resolution are problematic as there is no easy way to guarantee resolution times, particularly with complex issues which require interactions between ADRs, consumers and data holders; these interactions can be laboured and transactional, owing to the limited information which can be exchanged outside of the CDR. We are also unclear how issues can be objectively and consistently classified in terms of severity and prioritisation without independent mediation. |
| Implementation Considerations | Per our earlier point, implementation should be priority-based with appropriate consideration given to the ecosystem's capacity for change and demonstrable consumer benefit. A more predictable change cadence with sufficient lead time for implementation is recommended. |
| Get Metrics changes: Errors | We recommend counting by HTTP status code rather than URN. |

kristyTPGT commented 1 year ago

TPGT appreciates the opportunity to provide feedback in relation to Decision Proposal 288. Please find our feedback attached. DP-288 Final Response.pdf

johnAEMO commented 1 year ago

AEMO thanks you for the opportunity to respond to this Decision Proposal.

In terms of feedback on the getMetrics API, AEMO has the following comments:

  1. The definitions of each field could be made clearer by including the HTTP status codes applicable to each field (see the sketch at the end of this comment). That is:

  • Availability is presumed to be the % of all requests not returning a 5xx status code
  • Performance is presumed to be for all successful requests (200s) within thresholds
  • Invocations is presumed to be all requests (successful or not)
  • Average response is presumed to be for all successful requests (200s)
  • TPS fields are presumed to be all requests (successful or not)
  • Errors are presumed to be all 5xx status code responses
  • Rejections are presumed to be all 429 status code responses (traffic threshold limits)

  Missing:

  • Other 4xx errors should be reported. These are not currently reported in getMetrics and in some instances are significant in number; they would complete the overall picture of request quality
  • The 95th percentile is statistically a more useful indicator of the overall performance spread than the currently requested mean

  2. NFRs for AEMO as a secondary data holder: AEMO does not have access to the fields necessary to determine whether a request is Customer Present or Not Present. At this stage we have assumed the customer is present, except where multiple service points are requested (getServicePoints API only).

  3. Performance observations: AEMO currently has issues with the performance of its Usage APIs when providing large payloads and is undertaking a proof of concept to identify where and how to best address this. There are two changes in the industry that will increase payload size in the short and medium term, and both will likely impact performance:

  • In the short term, the tranche 2 obligation for complex requests is expected to include additional multiple service points in one request, and that will impact performance.
  • In the medium term, the industry is planning to accelerate the upgrade from basic meters to interval meters as part of the AEMC's 'Review of the regulatory framework for metering services'. The objective of the review is to replace all basic meters with 5 minute interval meters in the NEM; each meter will provide 288 interval reads per day per unit of measure.

While we accept that AEMO is obliged to service every request it receives, there are some observations we have already made that may improve the ADRs' experience of this service:

  • AEMO receives interval meter usage data from meter data providers at best the day after the reading is taken; multiple API requests within a day will not yield any more up-to-date data. Interval meter readings of 5-30 minutes are used by the energy industry to settle the market and to charge retailers for the energy their consumers have used during each interval (the retailers in turn use this to bill their consumers). While the meter might read every 5-30 minutes, this does not indicate the frequency with which usage data is circulated across the industry.
  • AEMO basic meter usage data is typically read on a 1-3 monthly basis, and it too is shared across the energy industry at best the day after reading. Similarly, multiple requests in a day will not yield more up-to-date data.
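AEMO's presumed mappings in point 1 can be summarised as a simple classification (a sketch of AEMO's presumptions only, not confirmed definitions from the standards):

```typescript
// Classify a response's HTTP status code into the Get Metrics buckets it
// would contribute to, per the presumptions listed in point 1 above.
function metricBuckets(status: number): string[] {
  const buckets = ["invocations", "tps"]; // all requests, successful or not
  if (status >= 500) buckets.push("errors"); // 5xx responses
  if (status === 429) buckets.push("rejections"); // traffic threshold limits
  if (status >= 200 && status < 300) buckets.push("performance", "averageResponse"); // 200s
  return buckets;
}
```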

AusBanking-Christos commented 1 year ago

6 April 2023

Submitted on Thursday, 6 April 2023 via: [Consumer Data Standards Australia - GitHub site](https://github.com/ConsumerDataStandardsAustralia/standards/issues/288)

Dear @CDR-API-Stream

ABA Response to Decision Proposal 288 – Non-Functional Requirements

The Australian Banking Association (ABA) welcomes the opportunity to respond on behalf of our Members regarding DP 288 Non-Functional Requirements.

The ABA has met with Members to discuss DP 288 in more detail and provides the following feedback.

A point raised by Members centred on the Dynamic Client Registration (DCR) Response Time NFR. Members noted that the current Consumer Data Standards (the Standards) response times for DCR can prove challenging to comply with, given the additional latency involved in undertaking the registration request JWT validation required by the Standards. This additional latency, which relates to Data Holders' (DHs') outbound connection whitelisting for Accredited Data Recipients (ADRs), is not split out from the times noted in the Standards. We propose that the DSB reconsiders amending response times to reflect this. As a point of reference, we include a link to last year's Dynamic Client Registration Response Time NFR #409.

We note DP 288 confirms that DCR will not be subject to change here but is reserved for a future direction of the Register Standards. We ask the DSB to reconsider our Members' position and address their concerns on this point as part of the development requirements in DP 288.

Equally, we thank the DSB for providing clarity on the origins of the six new Consent Metrics (new authorisations) introduced in DP 288. The DSB explained that the ACCC requested these specific new Consent Metrics so the ACCC can determine where customers are dropping off in the consent flow, and whether this is occurring at the ADR or DH end.

We propose an open discussion or workshop with the ACCC regarding their request for additional consent metrics as a way to understand and improve consent drop-off rates. The cost and effort to add these metrics, when aggregated across all DHs, is significant. We propose that a small number of DHs that between them cover most consent flow types (essentially covering the different OTP delivery mechanisms) volunteer to provide the requested metrics on a one-off basis, as input to a study into improving consent flow UX, which is presumably what the ACCC wants the metrics for in the first place. This would lead to a faster outcome and be cheaper not only for all DHs but also for the volunteers (as they would not be extending the Metrics API, only collating the data on a one-off basis). We also note that the consent flow is likely to change radically because of Action Initiation and the introduction of FAPI 2.0 and RAR.

Should the above volunteer proposal not be accepted, some Members have commented on the DP 288 section on Implementation Considerations, which includes the six new Consent Metrics. We note that the DSB acknowledged it was prepared to phase the implementation schedule over an extended period; an initial proposal raised by the DSB was five years as a potential implementation period. The ABA welcomes this proposal by the DSB, as it would allow our Members to better resource and budget for these and other priorities, including those planned for future CDR implementation (e.g., Action Initiation and Non-Bank Lending).

We would also ask that the DSB further considers how it would prefer to update the CX journey negative path. At any point along the customer journey the customer can decide to cancel, and there can be multiple reasons why (regardless of whether the customer is still on the ADR side or the DH side); it is not only about the customer hitting a technical issue and being unable to continue with the journey. Ideally, at the point of customer-initiated cancellation, data should be collected as to why the customer decided to cancel, against a "standardised" set of reasons that all Members can report on. Currently this data is not collected from the customer when they cancel (by either ADRs or DHs), as doing so is deemed to introduce friction and is not in the DSB's CX flow.

We generally understand the DSB's proposal to tie TPS ceiling obligations to the number of consents held by each bank. This is intended by the DSB to be a fairer allocation of investment across individual banks, as opposed to setting a fixed figure, which for some smaller Members may result in excessive systems costs to meet a TPS ceiling they are not likely to reach.

Members have expressed challenges with TPS thresholds around provisioning for peak times. Members have suggested further workshops be facilitated by the ACCC and DSB on how to address TPS and response time concerns and achieve a fair and reasonable model across all industries and emerging areas like Action Initiation. Members believe this approach could better serve reaching a resolution than direct feedback to a DP.

We would rather have a staged lift in TPS that is tied to a realistic industry consensus forecast. If the increase is staged over a number of years, we would also like a mechanism to periodically revise the required TPS as more data becomes available. Alternatively, if a formulaic approach tied to consents is taken, we would expect that the formula be deployed in a manner that gives Members enough time to budget for and implement system uplifts to cater for increased TPS NFRs, including systems changes for third party service providers.

We also propose that demand management is considered. For example, demand from ADRs could be spread across 24 hours rather than 3 hours in the early morning; this could be enforced through hourly quotas. Another consideration is restricting the number of times slow-moving data is queried: if a given data set is only updated daily, this could be flagged with a new metadata field that ADRs would have to respect, requesting that data at most once a day.
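To illustrate this suggestion (the field name is entirely hypothetical; no such field exists in the standards today):

```typescript
// Hypothetical response metadata flagging how often a data set is refreshed,
// which ADRs would respect by querying at most once per refresh period.
interface DataSetMeta {
  refreshFrequency: "REAL_TIME" | "DAILY" | "MONTHLY"; // hypothetical field
}

function shouldRefetch(meta: DataSetMeta, lastFetchedMs: number, nowMs: number): boolean {
  const dayMs = 24 * 60 * 60 * 1000;
  if (meta.refreshFrequency === "DAILY") return nowMs - lastFetchedMs >= dayMs;
  if (meta.refreshFrequency === "MONTHLY") return nowMs - lastFetchedMs >= 30 * dayMs;
  return true; // real-time data may be fetched on demand
}
```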

In conclusion, we note DP 288 raises challenges for smaller DHs around TPS and consents, with a few options raised by the DSB to remediate these under the heading 'Scaling for large DHs'. One proposal is an 'increase in the site wide TPS and Session Count NFRs'. Some Members have requested evidence-based data on what the DSB sees, or foresees, in the ecosystem to warrant a change, and on the types of changes being proposed by the DSB.

Further discussions or workshops with the ACCC and the DSB on these NFRs and other appropriate matters, to understand how this potential proposal could be applied efficiently, would benefit our Members, given that if it were applied by the DSB to accommodate a rise in TPS ceiling thresholds, it would likely require significant investment from affected ABA Members.

We thank the DSB again for the opportunity to respond on behalf of our Members, as we are equally thankful for the DSB extending our response date by a week.

We look forward to continuing our engagement and thank the DSB for its support in these matters.

Yours sincerely

Australian Banking Association

Telstra-CDR commented 1 year ago

Please find attached feedback from Telstra: DP288 - Feedback.pdf

CDR-API-Stream commented 1 year ago

Thanks everyone for all of the feedback. There was a lot that came in just before or over the Easter weekend. We are going through the feedback and will respond incrementally over the next couple of days.

We will leave this consultation open during this time and for a further couple of days so that everyone can respond to what we will be proposing to take to the chair.

CDR-API-Stream commented 1 year ago

Submitted via email on the 6th of April 2023

NAB Response to Decision Proposal 288 – Non-Functional Requirements

National Australia Bank Ltd (NAB) welcomes the opportunity to respond to Decision Proposal 288 Non-Functional Requirements. Due to technical issues, we have not been able to submit our response via GitHub. As such, we provide our response to certain items below.

Dynamic Client Registration

As per the previous GitHub issues listed below, we request that the DCR performance threshold be increased:

  • Dynamic Client Registration Response Time NFR · Issue #409 · ConsumerDataStandardsAustralia/standards-maintenance · GitHub
  • CDR Data Holders outbound connection whitelisting · Issue #418 · ConsumerDataStandardsAustralia/standards-maintenance · GitHub

Whilst we acknowledge that further consultation on DCR has been opened under Noting Paper 289 – Register Standards Revision, we request that an increase in the DCR performance threshold be implemented as a quick fix whilst a strategic path forward is discussed as part of Noting Paper 289.

Scaling NFRs for Large Data Holders

We suggest further workshops be facilitated by the DSB and ACCC on how to address the TPS issue and achieve a fair and reasonable model across all industries and emerging areas like Action Initiation. We prefer a staged lift in TPS that is tied to a realistic industry consensus forecast. We also suggest that ADRs factor TPS thresholds into their implementations, as Data Holders should not be forced to invest in expanding their capabilities due to ADR implementation choices, i.e. using heavy batch processes to request data in bulk. As the API availability threshold is set at 99.5% per month and the API performance requirements enable fast data sharing, the ecosystem should be moving towards real-time, on-demand data.

API Response Times

Based on the interesting points raised in GitHub issue #566 (Optionality of critical fields is facilitating data quality issues across Data Holder implementations · Issue #566 · ConsumerDataStandardsAustralia/standards-maintenance · GitHub), we believe that NFRs should enable the data sharing ecosystem rather than constrain it. The current NFRs were made binding without extensive consultation or consideration of the unique challenges presented by the legacy systems that hold CDR data. We believe the focus of CDR at this stage should be on data quality and adoption rather than on imposing arbitrary, restrictive performance requirements. As one of the CDR principles is that the experience should be commensurate with digital channels, API response times should also be aligned. We strongly recommend that each API performance threshold be increased by at least 1000 ms.

NFRs for incident response

NAB is strongly of the view that service level agreements for incident response must consider implementations where multiple data holders (and potentially third parties) are involved. Such incidents take a considerable amount of time, effort and coordination between all involved parties. The CDR service management portal should also be uplifted to allow multiple parties to work on an incident and have visibility into it.

Impracticality of Current API Performance Requirements for Complex White Label Implementations

Context

With the acquisition of Citigroup's consumer banking business, NAB is now the CDR Data Holder for white label credit cards issued under Card Services, Coles Financial Services, Kogan Money Credit Cards, Qantas Premier Credit Cards, Suncorp, Bank of Queensland, and Virgin Money Australia. Whilst some of these white label products are completely serviced by NAB (including CDR data sharing and data sharing consent), some are serviced in partnership with other institutions, including other ADIs that have their own separate CDR obligations. Adding further to the complexity, CDR data sharing was implemented using a third-party service provider. Figures 1 and 2 below visualise the current implementations:

[Figures 1 and 2: architecture diagrams of the current white label implementations (images not reproduced)]

When these solutions were implemented, the direction was to prioritise customer experience and consistency with existing digital (and non-digital) servicing models, with additional considerations including technical complexity, scalability, compliance deadlines and opportunities to improve existing channel integration. The understanding at the time was that the non-binding NFRs would undergo robust consultation prior to becoming binding, and that the consultation would factor in the complexity of white label arrangements, especially ones where multiple parties are involved to provide an optimal customer experience.

API Response Time Requirements

The current NFRs for API response times are not achievable for white label implementations where one ADR-facing party must integrate with multiple Data Holders to provide CDR data. The API response times measure individual API response times; however, in a complex white label implementation there are multiple steps that need to be completed in the background before a response can be returned to the ADR.

An additional consideration in this scenario is network latency, especially where the infrastructure of the involved parties is not in the same region or country.

This consequently means that even where each individual Data Holder meets the prescribed API response times, the nature of the implementation means that the ADR-facing API response time will be over the threshold.

NAB is of the view that the issue could be addressed by increasing the API response time thresholds across the board, which we believe would have a broader positive impact on the ecosystem. It would alleviate NFR pressures on Data Holders, who are often in a position where they must make trade-offs to remain compliant with NFRs. NAB believes that the focus of the CDR ecosystem should remain on customer experience and adoption.

Alternatively, the metrics reporting could be enhanced to allow ADR facing Data Holders to report metrics based on their own environment, with additional fields to report on data sharing metrics of another Data Holder that supplies CDR Data via a private integration. NAB would welcome the opportunity to contribute to a discussion regarding the development of new metrics applicable to complex white label arrangements.

jimbasiq commented 1 year ago

Considering the comments around the difficulty of Data Holder implementation whilst balancing other work and obligations, Basiq would be supportive of a phased delivery approach; further discussion is required to agree and prioritise the "most useful" and "easiest to implement" metrics. We would prefer to have several of the most useful metrics in three months rather than all metrics in 12 months.

jimbasiq commented 1 year ago

On the topic of a TPS metric: it is always going to be a challenge for Data Holders to "right size" their infrastructure to avoid negatively affecting consumers. For instance, crystal balls or true elastic scalability would be required to set TPS and session count NFRs based on the number of active authorisations the data holder has.

Can I suggest the TPS metric drive the ongoing obligation, i.e. Data Holders do not just report on TPS but on % utilisation of their current limit. If metrics show TPS regularly exceeding a defined threshold (e.g. 90%), the Data Holder should be obligated to raise their TPS.
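A sketch of this suggestion (the 90% figure comes from the comment above; what counts as "regularly exceeding" is an assumption made for illustration):

```typescript
// Decide whether a Data Holder should be obligated to raise its TPS limit,
// based on reported peak-TPS samples as a fraction of the current limit.
function mustRaiseTps(peakTpsSamples: number[], currentLimitTps: number, threshold = 0.9): boolean {
  const breaches = peakTpsSamples.filter((tps) => tps / currentLimitTps > threshold).length;
  // "Regularly exceeding" is undefined in the comment; assume a majority of samples here.
  return breaches > peakTpsSamples.length / 2;
}

// e.g. a holder with a 300 TPS limit peaking near its ceiling most days:
console.log(mustRaiseTps([285, 290, 275, 282], 300)); // true
```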

jimbasiq commented 1 year ago

One last comment on the Consent Metrics.

> abandonedConsentCount: The number of initiated consent authorisation flows abandoned (for any reason) during the period, reported on a daily basis

Could we get more granular than "for any reason"? A Data Holder should be able to detect the difference between the various reasons a flow is abandoned.

CDR-API-Stream commented 1 year ago

Please note the attached feedback provided via email by AGL: AGL - Non-Functional Requirements Revision.pdf

CDR-API-Stream commented 1 year ago

Thanks everyone for the volume of feedback, but also for the depth of analysis and insight it contained. It has taken a while to parse all of the feedback due to its quality, and some of the DSB's initial assumptions as to a likely proposal to the Chair have changed as a consequence.

For this reason the DSB has decided to do the following:

  1. Summarise our understanding and interpretation of the feedback received on each of the key topics in the consultation
  2. Present a proposal of the changes we intend to recommend to the Chair based on the feedback
  3. Leave the consultation open for a further two weeks to let the community respond to our proposal, clarify any misunderstandings we may have, and improve the proposed changes

CDR-API-Stream commented 1 year ago

Summary of feedback received on the key issues in the consultation:

Changes to existing Metrics

We think we will be able to incorporate this feedback into the proposed changes to the standards without too much difficulty.

New Metrics

We will be incorporating this feedback into the proposed changes noting that we need to maintain a balance between the usefulness of the data and the cost of implementation.

Great Southern Bank suggested that it would be helpful to have clarity over how far back metrics need to be provided at go live of any changes. This is helpful feedback and will be incorporated into the proposed changes.

Changes to NFRs

The feedback on the non-functional requirements was extensive and raised some real issues that were not canvassed in the Decision Proposal. General feedback will be responded to first with specific topics being addressed afterwards.

The ABA noted a proposal for a five year schedule, discussed in a bilateral meeting, that deserves to be more clearly articulated. The DSB understands that there is a long lead time for planning capital expenditure on infrastructure, so a long term, predictable plan for NFR changes would be very helpful. The DSB therefore suggested that a multi-year plan for changes to NFR thresholds could be published. This plan could then be reviewed annually in a workshop of participants and agencies. This aligns with feedback received from the ADR community about the need for NFRs to constantly evolve with the ecosystem. This approach will be considered in the proposed solution.

It was suggested by participants in the Telco and Energy sectors that there should be sectoral differences in NFRs. The NFRs are designed to provide a consistent and predictable service to ADRs so that they can operationally plan and manage their solutions, and they are therefore not specifically aligned to the sector of the Data Holder. Sector-specific NFRs would also become complex to manage and are not aligned with the cross-sectoral approach of the CDR data standards. That said, the reasons given for these sectoral differences align with the concerns raised in the banking sector about the same NFRs being applied to small banks as to large banks. For instance, a very small ADI in the banking sector is likely to have less technical capacity than a large telco or energy retailer. As a result, these issues will be addressed via tiering on data type and number of authorisations rather than on a sectoral basis.

It was suggested by one participant that the existing NFRs were not extensively consulted on. The DSB rejects this position. The existing NFRs were consulted on multiple times, and many months of active operation were allowed before the NFRs became enforceable so that participants could communicate concerns related to the NFRs and request amendments. The DSB was clear that this was the intended approach. This consultation is itself another step in that ongoing consultation process.

A number of participants called for an evidence-based approach to defining the NFRs. This is the approach preferred by the DSB, and we have been using all of the data we are able to obtain to develop the standards. We have access to the existing metrics data provided to the ACCC by Data Holders, and a number of ADRs have been forthcoming with data through various consultation channels. Any data from Data Holders that they feel would help the Chair make evidence-based decisions would be extremely welcome.

Tiering of TPS

There was consensus support for a tiered approach to TPS thresholds based on the number of active authorisations. In the absence of specific data provided by participants, the first proposal for such a tiering strategy will be based on existing metrics data.

The suggestion by Basiq that these tiers be set based on the existing average and peak TPS outcomes reported by Data Holders is very useful in this regard.

NFRs for Low Velocity Data

Multiple participants provided feedback that low velocity data sets (such as energy usage data from AEMO that only updates on a daily basis) are being requested far more often than necessary by ADRs. This was echoed by the ABA for slow moving data sets in the banking sector.

This is very useful feedback. It is currently addressed only at a high level by the NFRs applicable to ADRs.

There is currently no mechanism for Data Holders to respond to breaches of these requirements by ADRs with a 429 error code. To address this, the proposal will incorporate a tiering structure indicating the number of duplicate requests for a specific resource that Data Holders are required to service, beyond which they may expect the ADR to be caching the response provided.

White Labelled Brands

One participant raised specific concerns around NFRs related to white labelled brands. The DSB proposes to discuss these issues with the participant bilaterally to further understand the concerns raised. A first review of these concerns indicates, however, that the issues are specific to the implementation choices made inside the ADI and are not generalised issues faced by all white label solution providers. As a result, these issues will not be addressed in the proposed solution, but further consultation will be undertaken.

DCR NFRs

It was noted in the Decision Proposal that DCR specific NFRs would not be addressed in this consultation but would be raised in the consultation on the future of the Register Standards. This consultation is now open at: Noting Paper 289 - Register Standards Revision

NFRs for Incident Response

As noted in the Decision Proposal, requirements for specific response times for incidents will be considered separately via the existing incident management working group.

Implementation Considerations

There was competing feedback given for the implementation of the changes arising from this consultation. On one hand, there was reasonable feedback provided concerning the cost and time of delivery. On the other hand, there was an urgent need articulated for some of the issues to be addressed as soon as practical.

The DSB is also conscious of the need to provide clarity for the second tier energy retailers going live later this year.

It was suggested by Basiq that a phased approach be used to allow some of the more important changes to be incorporated faster than the less critical changes.

The proposed solution approach will take this feedback into account and seek to strike a balance between need and cost.

Additional Feedback and Comments

It was suggested that clarification of the terms used in NFRs and metrics would be very helpful to those implementing data holder solutions. The DSB agrees and has been weighing the benefit of expanding the detail in the standards, with the associated increase in the complexity of the standards, against providing more expansive guidance. We have opted to develop improved guidance to address this issue, as it allows us to provide clarity more rapidly when it is needed.

It was suggested that metrics data should be made public so that anyone can call a Data Holder's metrics API. The rules allow for metrics standards to be made for the purpose of public reporting, so it is understood that there is no regulatory barrier to this approach. This would, however, be a big change and would mean that the Get Metrics API would receive far more calls than originally intended. It was also suggested that the ACCC could make the data it collects available to participants via an API, which would address this concern. While this is a very helpful suggestion and a good topic to be considered, it was not the intended focus of this consultation, and the ACCC has requested that it be considered separately. As a result, a change of this nature will not be incorporated into changes resulting from this Decision Proposal.

It was raised by AEMO that they do not receive visibility of whether an invocation is Customer Present or Not Present and so cannot scale accordingly. This was an oversight, as they are not required to report on NFRs, but it is clear from the feedback received that this visibility would help AEMO manage their infrastructure. As a result, a solution to address this will be raised as a CR to be considered in maintenance iteration 15.

The ABA and others suggested that data around the CX flow would be helpful to improve how the authorisation process evolves. We would welcome such data. This would best be handled via other consultations and workshops related to authentication uplift.

CDR-API-Stream commented 1 year ago

At the DSAC on Wednesday the 19th April a presentation was provided by a member on operational concerns with the CDR ecosystem.

In the discussion arising from this presentation it was identified that the ability to understand the stage at which a customer abandons the authorisation flow would be helpful in identifying CX problems with specific Data Holders.

We will attempt to incorporate this feedback into the changes to the metrics API.

spikejump commented 1 year ago

> At the DSAC on Wednesday the 19th April a presentation was provided by a member on operational concerns with the CDR ecosystem.

Is this presentation shareable publicly, or was it a live presentation?

CDR-API-Stream commented 1 year ago

It was a live presentation. The DSAC member is free to publish the presentation if they wish but the practice of the DSAC is not to attribute contributions to specific members. If it is made public it will be via the minutes of the DSAC which take a while to process.

CDR-API-Stream commented 1 year ago

Note: edited on 2nd May to add a sample structure for the V4 and V5 APIs and to address feedback provided

Here are the proposed changes to Get Metrics for further feedback. These are candidate changes to be proposed to the Chair unless there is feedback indicating they should change:

Proposed Metrics Changes

There will be two additional versions of the Get Metrics API defined - v4 and v5 - with different obligation dates.

V4 will contain additions that are considered less impactful for implementors and will be targeted to align with the obligation dates for the remaining energy data holders in the latter half of 2023.

V5 will contain additional metrics that may require more instrumentation to be able to achieve and will be targeted for Q1 2024.

As there is only a single client for the Get Metrics API, data holders will not be required to maintain older versions of the Get Metrics API. They should note, however, that the ACCC, when calling this API, will likely call with an x-v value of 5 and an x-min-v value of 3, and implementations should successfully respond with their highest supported version based on the current requirements of the standards.
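A minimal sketch of this version negotiation, assuming a holder that has implemented v3 and v4 only (the helper name and shape are illustrative, not part of the standards):

```typescript
// Return the highest supported version within the requested x-min-v..x-v
// range, or null if none is supported (which would warrant a 406 response).
function negotiateVersion(supported: number[], xV: number, xMinV: number = xV): number | null {
  const candidates = supported.filter((v) => v >= xMinV && v <= xV);
  return candidates.length > 0 ? Math.max(...candidates) : null;
}

// An ACCC call with x-v=5 and x-min-v=3 against a v3/v4 implementation:
console.log(negotiateVersion([3, 4], 5, 3)); // 4
```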

V4 - Obligation Date Y23 No. 5 (13/11/2023)

Type changes:

Metric Changes:

New Metrics: A new authorisations field will be added to Get Metrics containing the following new metrics:

In addition, language will be added to the standards to clarify that the Get Metrics API is intended to be called at the Data Holder Brand level (rather than the level of legal entity).

V5 - Obligation Date Y24 No. 1 (11/03/2024)

The PerformanceMetrics model will be broken down into tiers similar to the InvocationMetricsV2 model. Each of the entries will, however, be an array of RateString values instead of a single value. Each element of the array will represent a 1 hour period of the represented day, with the 1st entry representing 12am-1am in the timezone used by the Data Holder.
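A rough illustration of the described shape (the tier and field names are assumptions pending the attached sample structures):

```typescript
// Illustrative only: each tier carries 24 hourly RateString values per day,
// the first entry covering 12am-1am in the Data Holder's timezone.
type RateString = string;

interface HourlyPerformance {
  currentDay: RateString[];     // up to 24 entries, one per elapsed hour
  previousDays: RateString[][]; // one 24-entry array per previous day
}

interface PerformanceMetricsV5 {
  unauthenticated: HourlyPerformance;
  highPriority: HourlyPerformance;
  lowPriority: HourlyPerformance;
  unattended: HourlyPerformance;
  largePayload: HourlyPerformance;
}
```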

A new abandonmentsByStage object will be added to authorisations. This object will have the following entries:

Sample Structures

Sample JSON files are attached below. Note that GitHub doesn't support JSON attachments so the files have a .TXT extension.

V4 Sample

v4_sample_json.txt

V5 Sample

v5_sample_json.txt

perlboy commented 1 year ago

A few questions as an initial gambit here; there is definitely plenty of ambiguity, which will make it quite difficult to commence implementation.

CDR-API-Stream commented 1 year ago

Thanks @perlboy for the comprehensive feedback. Much appreciated. Feedback is below…

  • A proposed "end state" structure of V5 with an annotated V4 highlight (or vice versa) would be useful to understand the intended outcome.

I’ll add a proposed structure to the comment with the proposal

  • For HTTP Status Code

    • Some clarity will be required as to where it is expected this will be reported from. Currently many (most?) implementations have instrumented metric generation from the (micro)service itself; this may not be the HTTP response experienced by the Data Recipient with respect to border control (which may simply time out, be responsible for throwing 429s, or return 200s with "Go Away") or load balancer routing behaviours (504 response codes). Indeed, some responses may never be observed by the service. Nearly all implementations will have almost zero visibility of status codes returned from DDoS protection providers. All of this means that deriving insights from this sort of data across holders is likely to be problematic.

We would probably best address this via guidance. Nils recently updated all of the metrics guidance and merged it into a single article, and we could extend that article as we go. There is potentially an endless set of clarifications for data holder specific issues that may arise, and these would be too granular to include in the standards. In general, however, we would align to the principle that errors should be reported as they are presented to the ADR.

  • Within the underlying OAuth2 specifications there are multiple HTTP status codes that are considered acceptable (i.e. 400 vs. 422), driven by HTTP 1.0 vs. HTTP 1.1 aligned behaviours. This means this metric won't be a consistent read across holders.

That is understood. There are already many implementation specific issues that make the metrics inconsistent across data holders. The trend for any specific data holder should still be consistent over time, however.

  • The domains of the V4 and V5 authorisation values seem to be overlapping in many places and it's unclear whether one is introduced and then replaced by the other, while there seems to be some confusion around "consents" being inside authorisations. Additionally, when considering amendments it's unclear what the abandonment stage would be; I can definitely see scenarios that are going to result in ambiguity and inquiry from the Regulator, who has historically not understood the difference between an authentication, authorisation, consent and arrangement. Clear guidelines, ideally with a breakdown of scenarios aligned with the CX Guidelines, would be highly beneficial.

We will look to provide this clarity by expanding guidance, but there is probably improved clarity that can be added to the standards as well. Specifically, I will try to address all of the questions around exactly when each phase cuts over in the proposal.

Regarding the measurement of these calculations during amendment: that isn't an issue that has come up before and is possibly something we should address. Do you think we should separate the metrics for amendments from those for new authorisations?

  • RateString is really looking out of place now and I suspect is going to be more and more confusing as more industries are added. Is there any reason why a gradual transition to PercentString as a copy paste with the example removed wouldn't be appropriate?

I could argue that, technically, a RateString doesn't mean an interest rate but the mathematical concept of a ratio of two measures over time. That would be pretty pedantic on my part though, as I'm sure most people simply assume the financial measure is implied.

Transitioning from RateString to PercentString is something that has been discussed before but has never seemed worth the effort. If the timing is right to discuss it, however, maybe we should raise a maintenance CR and tackle it directly. I'd rather not deal with it as a side effect of this consultation.

  • What is the expected migration plan? Many metric implementations have deliberately set up long-term data structures in alignment with the existing metrics definitions. At a minimum there would be a 7 day lag of probably inaccurate data to flush out the old collection and replace it with the new. While it is possible to maintain continuity, this is additional scope with non-trivial engineering effort purely to achieve a 7 day roll over.

The lead time for the proposed FDOs is intended to give space for a migration plan for each data holder. In most cases, existing metrics are unaffected. The type changes are really just API formatting and should not impact instrumentation and the new metrics do not have an existing set of data that will be changed.

The splitting of authenticated from unauthenticated metrics is the only real impact to an existing metric and the feedback has not indicated that this will be a problem as many implementations already have them separated.

Are there specific aspects of the proposal that will need further transition accommodation?

  • Will it be acceptable to leap frog to V5 immediately?

Yes. The phasing is to accommodate data holders that may have to make significant changes to instrument their platforms. If that isn’t a big deal then jumping to v5 should be acceptable.

  • Further on this, is it acceptable to assume that ACCC will be requesting x-v=5 by November and therefore a big bang cutover that resets counters to 0 at the new format will be reasonable?

Making the requests with x-v=5 and x-min-v=3 would be the goal, allowing data holders to upgrade their systems at their own pace. No resetting of counters is anticipated, however.

On the resetting of counters: while metrics that were not previously recorded cannot be returned for periods in which they weren't collected, there should be no need to reset counters for any of the existing metrics.

CDR-API-Stream commented 1 year ago

Here are the proposed changes to the Non-Functional Requirements for further feedback. These are candidate changes to be proposed to the Chair unless there is feedback indicating they should change:

Non-Functional Requirement Changes

Tiering of Traffic Thresholds

As there was consensus support for a tiered approach to traffic thresholds based on number of active authorisations the DSB is proposing amendments to the standards as outlined below. These thresholds have been developed from the data that the DSB has been able to obtain regarding actual TPS and authorisation metrics for existing data holders.

The following statements in the standards in the Traffic Thresholds section will be amended:

For secure traffic (both Customer Present and Unattended) the following traffic thresholds will apply:

  • 300 TPS total across all consumers

For Public traffic (i.e. traffic to unauthenticated end points) the following traffic thresholds will apply:

  • 300 TPS total across all consumers (additive to secure traffic)

These statements will be replaced with:

For secure traffic (both Customer Present and Unattended) the following traffic thresholds will apply:

  • For Data Holders with 0 to 2,000 active authorisations, 200 TPS total across all consumers
  • For Data Holders with 2,001 to 5,000 active authorisations, 300 TPS total across all consumers
  • For Data Holders with 5,001 to 10,000 active authorisations, 350 TPS total across all consumers
  • For Data Holders with 10,001 to 25,000 active authorisations, 400 TPS total across all consumers
  • For Data Holders with 25,001 to 50,000 active authorisations, 450 TPS total across all consumers
  • For Data Holders with more than 50,000 active authorisations, 500 TPS total across all consumers

For Public traffic (i.e. traffic to unauthenticated end points) the following traffic thresholds will apply:

  • 300 TPS total across all consumers (additive to secure traffic)

Note that this will be a reduction in expectation for the vast majority of existing Data Holders and will be an increase in expectation for a small number of the most active Data Holders.

It is proposed that these changes will be tied to a Future Dated Obligation of Obligation Date Y23 No. 5 (13/11/2023)
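For reference, the proposed tiers above translate directly into a threshold lookup (a restatement of the proposal, not normative text):

```typescript
// Map a Data Holder's active authorisation count to its proposed
// secure-traffic TPS threshold (Customer Present and Unattended combined).
function secureTrafficTpsThreshold(activeAuthorisations: number): number {
  if (activeAuthorisations <= 2_000) return 200;
  if (activeAuthorisations <= 5_000) return 300;
  if (activeAuthorisations <= 10_000) return 350;
  if (activeAuthorisations <= 25_000) return 400;
  if (activeAuthorisations <= 50_000) return 450;
  return 500; // more than 50,000 active authorisations
}

// Public (unauthenticated) traffic remains a flat 300 TPS, additive to the above.
```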

NFRs for Low Velocity Data

To differentiate the calling patterns expected of ADRs for low velocity data sets, the following text will be added to the Data Recipient Requirements sub-section in the Non-functional Requirements section of the standards.

Low Velocity Data Sets

For endpoints that provide access to data that is low velocity (i.e. the data does not change frequently), the Data Recipient is expected to cache the results of any data they receive and not request the same resource again until the data may reasonably have changed.

For low velocity data sets, if the same data is requested repeatedly a Data Holder may reject subsequent requests for the same data during a specified period.

Identified low velocity data sets are to be handled according to the following table noting that:

  • the Velocity Time Period is a continuous period of time in which calls beyond a specific threshold MAY be rejected by the Data Holder
  • the Allowable Call Volume is the threshold number of calls to the same resource for the same arrangement above which calls MAY be rejected by the Data Holder
| Data Set | Impacted Endpoints | Velocity Time Period | Allowable Call Volume |
| -- | -- | -- | -- |
| NMI Standing Data | Get Service Point Detail | 24 hours | 10 calls |
| Energy Usage Data | Get Usage For Service Point, Get Bulk Usage, Get Usage For Specific Service Points | 24 hours | 10 calls |
| DER Data | Get DER For Service Point, Get Bulk DER, Get DER For Specific Service Points | 24 hours | 10 calls |

As this change is really an expansion of the existing requirement that ADRs minimise traffic with Data Holders, most ADRs should already be minimising requests for highly cacheable data, so no future dated obligation will be placed on this change. Feedback on this aspect of the proposal is welcome.
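A sketch of how a Data Holder might apply the table above (illustrative only; the in-memory store and key scheme are assumptions, and a real implementation would persist state and prune old entries):

```typescript
// Count calls to the same low-velocity resource for the same arrangement
// within a rolling 24-hour window; beyond the Allowable Call Volume the
// Data Holder MAY reject the call with HTTP 429.
const WINDOW_MS = 24 * 60 * 60 * 1000;
const ALLOWABLE_CALL_VOLUME = 10;
const callLog = new Map<string, number[]>();

function mayReject(arrangementId: string, resource: string, now: number = Date.now()): boolean {
  const key = `${arrangementId}:${resource}`;
  const recent = (callLog.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  callLog.set(key, recent);
  return recent.length > ALLOWABLE_CALL_VOLUME; // true => MAY respond with 429
}
```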

anzbankau commented 1 year ago

Here are the proposed changes to the Non-Functional Requirements for further feedback. These are candidate changes to be proposed to the Chair unless there is feedback indicating they should change:

Non-Functional Requirement Changes

Tiering of Traffic Thresholds

As there was consensus support for a tiered approach to traffic thresholds based on number of active authorisations the DSB is proposing amendments to the standards as outlined below. These thresholds have been developed from the data that the DSB has been able to obtain regarding actual TPS and authorisation metrics for existing data holders.

The following statements in the standards in the Traffic Thresholds section will be amended:

For secure traffic (both Customer Present and Unattended) the following traffic thresholds will apply:

  • 300 TPS total across all consumers

For Public traffic (i.e. traffic to unauthenticated end points) the following traffic thresholds will apply:

  • 300 TPS total across all consumers (additive to secure traffic)

These statements will be replaced with:

For secure traffic (both Customer Present and Unattended) the following traffic thresholds will apply:

  • For Data Holders with 0 to 2,000 active authorisations, 200 TPS total across all consumers
  • For Data Holders with 2,001 to 5,000 active authorisations, 300 TPS total across all consumers
  • For Data Holders with 5,001 to 10,000 active authorisations, 350 TPS total across all consumers
  • For Data Holders with 10,001 to 25,000 active authorisations, 400 TPS total across all consumers
  • For Data Holders with 25,001 to 50,000 active authorisations, 450 TPS total across all consumers
  • For Data Holders with more than 50,000 active authorisations, 500 TPS total across all consumers
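To make the tier boundaries unambiguous, the following is a minimal sketch (an illustration only, not part of the proposed standards text; the function name is hypothetical) of the mapping from active authorisations to the proposed secure traffic threshold:

```python
# Sketch only; an illustration of the proposed tiers, not part of the
# standards. The name secure_traffic_tps_threshold is hypothetical.
TPS_TIERS = [
    (2_000, 200),    # 0 to 2,000 active authorisations
    (5_000, 300),    # 2,001 to 5,000
    (10_000, 350),   # 5,001 to 10,000
    (25_000, 400),   # 10,001 to 25,000
    (50_000, 450),   # 25,001 to 50,000
]

def secure_traffic_tps_threshold(active_authorisations: int) -> int:
    """Return the proposed secure traffic (Customer Present and Unattended)
    TPS threshold for a Data Holder's count of active authorisations."""
    for upper_bound, tps in TPS_TIERS:
        if active_authorisations <= upper_bound:
            return tps
    return 500  # more than 50,000 active authorisations
```

For example, secure_traffic_tps_threshold(12_000) returns 400, matching the 10,001 to 25,000 tier.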

We are supportive of tiering to allow lower thresholds for Data Holders with fewer users; however, we do not agree that the maximum TPS levels should change at this time.

Any uplift of TPS beyond 300 TPS has a large impact on Data Holders. Given this, the proposal to introduce new tiers of up to 500 TPS should be progressed through a dedicated Decision Proposal. This will allow Data Holders visibility of this impactful change, and time to assess implementation considerations.

Origin-Rachel commented 1 year ago

I think there needs to be a distinction between the types of consumer these non-functional requirements apply to. I refer in particular to the rules around response times for the "Get Bulk" APIs, which have no upper limit on the number of accounts expected in the response. This means, for example, that the expected response time for "Get Bulk Billing" for a consumer with 1 account is the same as for a consumer with 10, 50, or 100 accounts, when in practice increasing the number of accounts naturally increases the response time due to the amount of data requested.

I suggest reviewing the practicality of applying the same response time to every CDR customer. I note that these requirements seem written predominantly for retail/mass-market consumers. It is possible for business customers in energy to have more than 100 accounts.

AusBanking-Christos commented 1 year ago

Dear DSB,

In light of the discussion today with the DSB and our Members, who are seeking further opportunity to provide additional feedback, can we please request that the consultation be extended for another week, to 19 May 2023.

Kindest Regards, Australian Banking Association

cuctran-greatsouthernbank commented 1 year ago

We also agree with @AusBanking-Christos and would like to request an extension of the consultation period to 19 May 2023.

Kind regards, Great Southern Bank

CDR-API-Stream commented 1 year ago

This consultation will be extended until the 19th May as requested. The DSB would prefer not to extend this consultation any further beyond this date.

We understand the need for modifications to NFRs to be an ongoing process based on objective data. It would appear that this may require changes to the NFRs to be supported by a more specific, regular and engaged consultation process.

To that end we are planning a series of workshops specifically on NFRs for the ecosystem in late July or early August. These workshops will be used to work with the community to create an ongoing consultation process for NFRs that works for everyone, as well as to canvass the community about any issues and solutions related to the NFR standards that the community wishes to raise.

More details on these workshops will be announced in due course.

johnAEMO commented 1 year ago

Comment on the NFRs for low velocity data:

perlboy commented 1 year ago

The feedback for this DP is significant, making it difficult to comment further. What I'll note here is that the DP appears to discuss both NFRs and the Metrics endpoint simultaneously, when in reality they are two separate spheres. Essentially, NFRs set the thresholds and are more structural architecture components, while Metrics reports on them, which is more of an engineering activity.

I suggest a more focused pair of DPs be proposed so that feedback can be targeted more easily at the specific areas.

AusBanking-Christos commented 1 year ago

18 May 2023

Submitted on Thursday, 18 May 2023 via: Consumer Data Standards Australia - GitHub site

Dear @CDR-API-Stream

ABA Follow Up Response to Decision Proposal 288 – Non-Functional Requirements

ABA welcomes the plan for a series of workshops on NFRs for the CDR ecosystem. The proposed multi-stakeholder approach will lay the foundations for a shared and transparent capacity planning framework that balances the needs of all participants while ensuring appropriate customer outcomes.

The workshops will provide the opportunity for richer performance data to be assessed when setting NFR standards. ABA member banks commit to working with the DSB ahead of the meeting to identify a consistent data set that will be the most useful contribution to the workshop process.

We welcome the opportunity to contribute toward the development of NFR standards that will result in a sustainable and predictable capacity planning model for all CDR participants.

jimbasiq commented 1 year ago

Basiq's feedback on the proposed Traffic Thresholds amendment is that we are generally supportive but still concerned about the upper boundary. The highest limit dictated in

For Data Holders with more than 50,000 active authorisations, 500 TPS total across all consumers

seems low considering Basiq currently has considerably more than 50,000 active screen-scrape authorisations with each of the major banks, some in the hundreds of thousands. We intend to move all of these connections from screen scraping to open banking CDR connections.

If the CDR intends to move data sharing from screen scraping to CDR, it needs to both support the existing load and provide some overhead. I don't believe the current proposal does this.

AGL-CDR commented 1 year ago

Thank you for the opportunity to provide feedback on this area of discussion.

AGL does not support the tiering of thresholds for TPS for energy.

This is because:

AGL requests that these proposed changes be delayed and revisited (for energy) until at least twelve months of real-world traffic volumes have been observed following Tranche 3 Large Retailer Go Live (November 2024).

commbankoss commented 1 year ago

Regarding additional consent metrics, CBA suggests a similar outcome (i.e. improvements to the consumer experience of the consent flow to reduce drop-off rates) could be achieved through a consultative approach with ADRs and DHs. Our recommendation is that a sample of relevant consent metrics be amalgamated by participants and provided to the DSB as input. This approach would be more cost effective for the ecosystem, achieve a similar outcome and avoid regret spend if authentication and consent flows are matured to enable Action Initiation in the future.

anzbankau commented 1 year ago

In light of recent discussions, ANZ requests that the proposed tiering remain conditional upon the outcome of the forthcoming workshops. Given the complex nature of open banking systems, meeting the revised tiers is unlikely to be a simple scaling-out exercise. The workshops must consider that Data Holders will require extensive capacity planning, design and implementation activities.

NationalAustraliaBank commented 1 year ago

NAB welcomes the plan for further workshops on NFRs. As the topic appears to be of great interest to the CDR community, we recommend these workshops be scheduled sooner rather than later to maintain the positive momentum.

We also acknowledge DSB feedback regarding white label implementations and are keen to engage with the DSB and any other interested participants to explore the topic in detail. With regard to the proposed Get Metrics future dated obligations, we request that they be pushed back by one release cycle, i.e. that the v4 FDO be aligned with Y24 #1 (11/03/2024) and v5 with Y24 #2 (13/05/2024).

WestpacOpenBanking commented 1 year ago

Westpac welcomes the opportunity to respond to the additional proposals added to DP288.

Scaling NFRs for Large Data Holders

A tiered approach by activity is an improvement on the current standard. Nevertheless, Westpac suggests that the proposal needs to be evidence-based prior to structuring the tiering levels and thresholds. Our evidence suggests that current activity in the ecosystem does not warrant the unusually high thresholds in the current proposal. We welcome the opportunity to discuss the TPS proposal in the planned workshops for July-August, and we support earlier comments that this is not ready for presentation to the DSB Chair.

Westpac notes that it is difficult to set a fair and adequate TPS level without the context of the use cases the ecosystem wants to support, since some use cases require more load than others. We suggest that the focus should be on activity growth in the medium-term future only. Handling of larger volumes can be revisited as the ecosystem matures and a clearer pipeline of future use cases and activity types flows through the ecosystem. This would allow better allocation and direction of investment, aligned to the Government's intention as announced in the recent Budget.

Proposed changes to existing metrics

Westpac is broadly supportive of the proposed changes to existing metrics.

Proposed new metrics

Westpac notes from various comments above that there may be various uses for the 'abandonment by stage' statistics by different parties within the ecosystem (regulators, ADRs, DHs, incumbents, and prospects). We suggest the following improvements to increase the value of the new metrics prior to implementing changes:

Westpac also notes that there are many comments and questions around the definitions of the metrics that need to be discussed and resolved prior to presentation to the DSB Chair. Considering that the nature and size of the change vary depending on these definitions, it would be more appropriate to set the delivery timelines after the conclusion of the discussions or workshops. We ask that, in light of the current backlog of standards changes, a minimum of 9 months be provided to allow organisations to budget resources and deliver. The ecosystem cannot sustain ongoing urgent revisions to standards, as we have recently experienced with FAPI 1.0.

JohnMillsEnergyAustralia commented 1 year ago

Thank you for this opportunity to make a submission. EnergyAustralia submits the following:

With the energy sector having gone live only recently, the existing NFRs for us as a Data Holder remain untested by the ADR usage seen to date. The need to revise the NFRs so dramatically, and then to apply them to the energy sector, would therefore be premature.

We are aligned with the AGL submission made on this topic that is reflective of the energy sector.

It appears that a staged approach retaining the existing NFRs for Energy may well prove more suitable for supporting a nascent CDR sector, and would avoid the risk of over-funding capacity. A sectoral approach should be based on CDR usage statistics from the Energy sector, so that when it reaches the maturity of the banking sector it would move to the next stage of NFRs. This would provide more appropriate NFRs for mature sectors while retaining the existing NFRs for sectors new to the CDR, like Energy, for their first two years.

Publication of overall usage metrics remains of benefit. However, more detailed publication of NFR metrics based on such small usage volumes is of little industry benefit until the two-year point following a sector's CDR implementation, and only then if volumes increase. Such limited usage will skew the figures and potentially misrepresent any conclusions drawn.

Further, we specifically endorse the final paragraph of the AGL submission on AEMO performance, which concludes: “AGL considers that it would be appropriate for AEMO to establish its own service desk arrangement for the resolution of tickets directly with ADRs and reduce administrative pressures on data holders to manage these issues.”