OAI / OpenAPI-Specification

The OpenAPI Specification Repository
https://openapis.org
Apache License 2.0

Clarify spec wrt readOnly and writeOnly in referenced schemas #1622

Closed. tedepstein closed this issue 4 years ago.

tedepstein commented 6 years ago

The current spec says that readOnly and writeOnly are "relevant only for Schema properties definitions."

I have encountered at least two parsers, one new, one widely used, that interpret this in the most literal sense, meaning "discard readOnly and writeOnly if they occur in a top-level schema definition."

The parsers either have their own logic that discards these keywords, or they parse into an object graph that has no way to represent them on a top-level schema.

With this interpretation, reusable schemas can't be intrinsically readOnly or writeOnly by definition. The following won't work:

components:
  schemas:
    project:
      type: object
      properties:
        projectID:
          type: string
        projectName: 
          type: string
        created:
          $ref: "#/components/schemas/dataChangeEvent"
        updated: 
          $ref: "#/components/schemas/dataChangeEvent"

    dataChangeEvent:
      readOnly: true
      type: object
      properties:
        changedBy:
          type: string
        timestamp:
          type: string
          format: date-time
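
A minimal sketch of the intended behavior, with hypothetical helper names (`resolve_ref`, `is_read_only` are illustrative, not from any parser): a parser that resolves the `$ref` before checking `readOnly` would correctly see `created` and `updated` as read-only properties.

```python
# Hypothetical sketch: resolve a local $ref and honor readOnly declared
# on the referenced schema, rather than discarding it as "irrelevant".

def resolve_ref(document, ref):
    """Follow a local JSON Pointer like '#/components/schemas/dataChangeEvent'."""
    node = document
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node

def is_read_only(document, subschema):
    """A property is effectively readOnly if the keyword appears directly,
    or on the schema reached through $ref."""
    if "$ref" in subschema:
        subschema = resolve_ref(document, subschema["$ref"])
    return subschema.get("readOnly", False)

doc = {
    "components": {"schemas": {
        "project": {"type": "object", "properties": {
            "created": {"$ref": "#/components/schemas/dataChangeEvent"},
        }},
        "dataChangeEvent": {"readOnly": True, "type": "object"},
    }}
}

created = doc["components"]["schemas"]["project"]["properties"]["created"]
print(is_read_only(doc, created))  # True: readOnly survives the reference
```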

I would interpret "relevant" to mean that the readOnly and writeOnly annotations are only effective in the context of a property subschema. Not that they are disallowed, safe to ignore, or safe to discard when declared in other contexts.

I think my interpretation is probably consistent with the original intent, but I wasn't part of that discussion. The word "relevant" is ambiguous, and the spec only uses that word in the readOnly and writeOnly descriptions.

I think the simple answer is to clarify the meaning of the spec. The current spec for readOnly starts like this:

Relevant only for Schema "properties" definitions. Declares the property as "read only".

I'd propose changing it to:

MAY occur in any Schema Object. When used in a property subschema, either directly or by reference, a true value declares the property as "read only".

Similar change proposed for writeOnly. Happy to open a PR with this change if the TSC agrees.

MikeRalphson commented 6 years ago

Another instance of 'relevancy' but this time using the wording

This MAY be used only on properties schemas. It has no effect on root schemas.

Crops up with regard to the use of the xml object in #1435

More clarity and consistency in these (and other?) instances would be helpful.

darrelmiller commented 6 years ago

@OAI/tsc Agrees that wording should be clarified to allow the above scenario. @tedepstein will create PR with better words.

handrews commented 6 years ago

Note that we address readOnly and writeOnly in root schemas in the most recent JSON Schema Validation spec.

It is relevant in APIs as a protocol-independent way to document the behavior of an entire resource. In HTTP readOnly at the root is effectively the same as Allow: HEAD, GET, writeOnly in the root is the same as Allow: HEAD, PUT, DELETE (or something similar involving PATCH). I suppose OPTIONS could be allowed for either as well.
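
That equivalence can be stated as a toy mapping (illustrative only; the function name and the exact method lists are assumptions, not spec text):

```python
# Illustrative only: map root-schema readOnly/writeOnly to the HTTP methods
# a resource might advertise in an Allow header, per the analogy above.

def allowed_methods(schema):
    if schema.get("readOnly"):
        return ["HEAD", "GET", "OPTIONS"]             # resource can only be read
    if schema.get("writeOnly"):
        return ["HEAD", "PUT", "DELETE", "OPTIONS"]   # resource can only be written
    return ["HEAD", "GET", "PUT", "DELETE", "OPTIONS"]

print(allowed_methods({"readOnly": True}))   # ['HEAD', 'GET', 'OPTIONS']
print(allowed_methods({"writeOnly": True}))  # ['HEAD', 'PUT', 'DELETE', 'OPTIONS']
```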

tedepstein commented 6 years ago

@handrews, thanks. We have so far only agreed to clarify the spec, not to expand usage to root schemas. We could address root schemas later if there is a need, or if the TSC decides to formally align with a draft of JSON Schema that includes readOnly and writeOnly.

handrews commented 5 years ago

@darrelmiller @tedepstein Since we seem very likely to move to JSON Schema 2019-09 in OAS 3.1, this problem will go away since (as noted above) the issue is now addressed in the JSON Schema spec. We haven't had anyone complain about how we specified it so it seems to be working out OK.

handrews commented 4 years ago

@tedepstein @darrelmiller @webron the discussion in #2110 reminded me that in addition to covering the topic of these keywords in root schemas, JSON Schema is a bit more lax than OAS in terms of whether read-only and write-only values are to be included in requests and responses respectively. Here is the relevant part for both topics:

If "readOnly" has a value of boolean true, it indicates that the value of the instance is managed exclusively by the owning authority, and attempts by an application to modify the value of this property are expected to be ignored or rejected by that owning authority.

An instance document that is marked as "readOnly" for the entire document MAY be ignored if sent to the owning authority, or MAY result in an error, at the authority's discretion.

If "writeOnly" has a value of boolean true, it indicates that the value is never present when the instance is retrieved from the owning authority. It can be present when sent to the owning authority to update or create the document (or the resource it represents), but it will not be included in any updated or newly created version of the instance.

An instance document that is marked as "writeOnly" for the entire document MAY be returned as a blank document of some sort, or MAY produce an error upon retrieval, or have the retrieval request ignored, at the authority's discretion.

Note that since JSON Schema has more use cases than just APIs, it talks about an "owning authority" instead of a server; in the case of HTTP-based APIs, the owning authority is the HTTP server.

In our experience, it is particularly important to allow readOnly values to be sent in the request and ignored, especially if they are unchanged or auto-generated (e.g. a last-modified timestamp). This seems particularly critical in supporting a GET-modify-PUT approach. Having to strip out all of the readOnly fields before doing the PUT is potentially extremely burdensome in large, complex representations. Removing those fields from the PUT request could also be interpreted as expressing the intent that they be removed on the server, which of course violates the readOnly constraint.

Do OAS users actually strip out readOnly fields before sending requests? What I've seen more often is that as long as a non-auto-generated readOnly field is unchanged, it is ignored. Auto-generated readOnly fields are either always ignored or are only errors if you try to set them to the future (or if ETags are in use, if you tried to set it to a value other than the one that matched the ETag).
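
The "ignore rather than reject" behavior in a GET-modify-PUT cycle might be sketched like this (function and variable names are hypothetical, for illustration only):

```python
# Hypothetical sketch: a server applying a PUT that tolerates readOnly fields
# echoed back by a GET-modify-PUT client, ignoring rather than rejecting them.

def apply_put(schema, stored, payload):
    read_only = {name for name, sub in schema.get("properties", {}).items()
                 if sub.get("readOnly")}
    updated = dict(stored)
    for name, value in payload.items():
        if name in read_only:
            continue  # ignore readOnly values instead of erroring or deleting
        updated[name] = value
    return updated

schema = {"properties": {"id": {"readOnly": True}, "name": {}}}
stored = {"id": "42", "name": "old"}
# Client did GET, changed only `name`, and PUT the whole representation back:
print(apply_put(schema, stored, {"id": "42", "name": "new"}))
# {'id': '42', 'name': 'new'}: the echoed readOnly `id` was ignored, not an error
```

Note that the absence of `id` from the payload is also fine here; the server never lets the payload touch it either way.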

This issue should be added to @philsturgeon's tracking list in #2099.

philsturgeon commented 4 years ago

@tedepstein is this something you have time for, or shall I take a swing at it?

tedepstein commented 4 years ago

@philsturgeon , I can submit a PR, but need to verify a couple of things.

First, I assume this change should target 3.1. And in the current 3.1 spec, readOnly and writeOnly have been removed entirely. They just defer to the JSON Schema spec.

@handrews , I agree with you on this point:

In our experience, it is particularly important to allow readOnly values to be sent in the request and ignored, especially if they are unchanged or auto-generated (e.g. a last-modified timestamp). This seems particularly critical in supporting a GET-modify-PUT approach. Having to strip out all of the readOnly fields before doing the PUT is potentially extremely burdensome in large, complex representations. Removing those fields from the PUT request could also be interpreted as expressing the intent that they be removed on the server, which of course violates the readOnly constraint.

I think it's beneficial for OpenAPI to specify uniform treatment of readOnly and writeOnly:

  1. readOnly means the property value SHOULD NOT be sent in the request, and MUST be ignored by the server if present in the request.

  2. writeOnly means the property value SHOULD NOT be sent as part of the response, and MUST be ignored by the client if present in the response.

  3. Including a readOnly or writeOnly property in the required list only makes it required in the response or request, respectively. (Same as 3.0.)

  4. When a schema is used as the root schema for a request, response, parameter, or header, the readOnly and writeOnly keywords are ignored. Those keywords are only effective when the schema is used as a property subschema.

Points 1 and 2 build on OpenAPI 3.x and JSON Schema by specifying the expected behavior when a readOnly property is included in the request, or a writeOnly property in the response. I don't think this breaks anything.

Point 3 reinstates the OAS 3.0 semantics of required in combination with readOnly or writeOnly. But I don't think JSON Schema has this rule, and it may be a problem.

Thoughts...?

Point 4 could also be an issue, because JSON Schema says that a readOnly request or writeOnly response "MAY be ignored if sent to the owning authority, or MAY result in an error, at the authority's discretion." JSON Schema does not seem to give us the option to ignore readOnly or writeOnly when the schema is used as a root schema. But I don't think it's terribly important. I think we could just defer to JSON Schema on this one, because I don't know of any reason to use a readOnly or writeOnly schema to describe an entire message. Seems like a Bad Idea™.
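
The context-dependent required of point 3 could be sketched as a direction-aware filter (the helper name `effective_required` and the `direction` argument are illustrative, not part of any spec or library):

```python
# Sketch of point 3: `required` applies per direction. A readOnly property is
# only required in responses; a writeOnly property only in requests.

def effective_required(schema, direction):  # direction: "request" or "response"
    props = schema.get("properties", {})
    result = []
    for name in schema.get("required", []):
        sub = props.get(name, {})
        if direction == "request" and sub.get("readOnly"):
            continue
        if direction == "response" and sub.get("writeOnly"):
            continue
        result.append(name)
    return result

schema = {
    "required": ["id", "password", "name"],
    "properties": {"id": {"readOnly": True},
                   "password": {"writeOnly": True},
                   "name": {}},
}
print(effective_required(schema, "request"))   # ['password', 'name']
print(effective_required(schema, "response"))  # ['id', 'name']
```

As discussed below, a stock JSON Schema validator cannot apply this filter, because it has no way to know which direction an instance came from.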

handrews commented 4 years ago

@tedepstein thanks for working through the cases here!

Regarding points 1 and 2, I think it would be a really good idea to explicitly call out the GET-modify-PUT cycle as a place where the SHOULD should (SHOULD?) be ignored. I've encountered junior engineers who think these requirements mean the fields need to be stripped out before sending a PUT request. In general, people don't always have a good feel for when an exception to a SHOULD is appropriate. I think this one is clear enough, and fundamental enough to API design, that it warrants a call-out.


Regarding point 3, I view one of the goals of JSON Schema compatibility to be that validation can be handed off to a generic JSON Schema validator without knowledge of the API context. The validator might need to understand the OAS extension vocabulary, but understanding extensions is now within the JSON Schema specification. The validator cannot, however, know whether an instance is a "read" instance or "write" instance, so there's no way for it to selectively disable the required keyword in such situations.

In terms of required and readOnly fields on a write, this is another reason why I think it's more important to emphasize "allowed but ignored." Or possibly to discourage required in this case, which makes sense if the published OpenAPI document is primarily intended for clients. Clients don't necessarily need that required enforced, it could instead just be documented that the server always sends it, without validation-level enforcement.

I've left off required for readOnly when it's not that necessary for the client to know it. I've gone with required-but-ignored when it's trivial or at least easy for the client to fill in the field (again, most often with GET-modify-PUT workflows, which I tend to emphasize heavily).

If I absolutely have to, I allOf additional required fields into the GET and PUT representations separately.


Regarding point 4, in HTTP using these in a root schema is pretty irrelevant. If there's a GET you can read, if there's a PUT or DELETE you can write. You can use the Allow header and the 405 Method Not Allowed status to manage such things.

In other contexts, you don't have that extra information because you don't have as rich of a transport protocol. That readOnly or writeOnly may be the most convenient in-application way of figuring out whether you can interact with the backing store for an instance at all.

I think it's fine but redundant for people to use root schema readOnly or writeOnly to align with "you can't GET this resource" or "you can't PUT, PATCH, or DELETE this resource". I don't think we need to encourage it, though, and I would not object to discouraging it; it's probably more confusing than useful, as you can figure out the relevant information more clearly elsewhere in the OAS document. But that's how I would expect it to work.

tedepstein commented 4 years ago

Regarding points 1 and 2, I think it would be a really good idea to explicitly call out the GET-modify-PUT cycle as a place where the SHOULD should (SHOULD?) be ignored.

OK, I can work that scenario into the revision.

Regarding point 3, I view one of the goals of JSON Schema compatibility to be that validation can be handed off to a generic JSON Schema validator without knowledge of the API context. The validator might need to understand the OAS extension vocabulary, but understanding extensions is now within the JSON Schema specification. The validator cannot, however, know whether an instance is a "read" instance or "write" instance, so there's no way for it to selectively disable the required keyword in such situations.

In terms of required and readOnly fields on a write, this is another reason why I think it's more important to emphasize "allowed but ignored." Or possibly to discourage required in this case, which makes sense if the published OpenAPI document is primarily intended for clients. Clients don't necessarily need that required enforced, it could instead just be documented that the server always sends it, without validation-level enforcement.

This still has me a little concerned for a few reasons:

  1. There are some API specs that are not just of interest to clients; they are written to be implemented by multiple services.

  2. For clients, required properties of a response are guaranteed to be present, so it is meaningful to flag them as required.

  3. There's the case of writeOnly + required, where the required assertion is of direct concern to the client.

  4. Even if we discounted all of the above, we don't want to break backward compatibility in 3.1 (if we can avoid it).

I'm not sure what to suggest. Kicking around some thoughts, and I'll post back soon.

handrews commented 4 years ago

@tedepstein I'll be interested to hear what you come up with.

It might be worth getting clarity on the importance of being able to hand validation off to a totally vanilla (plus OAS extension module) JSON Schema validator. Because there is intentionally no way* to supply the validator with context to change its behavior. So having validation behavior change based on something like "is this instance from a request or response" is absolutely noncompliant. In general validation is not aware of application context- things that need application-awareness must be signaled using annotations.

*the exception being the partial ability to control format, but that is a legacy thing and a major problem, and we reduced it substantially in 2019-09 to put the control in the schema author's hands. Hopefully we'll eventually kill off at least that external control, if not the entire keyword, as people get more comfortable with vocabularies. (format fans, don't worry: if it's killed off, it will only be because we put in place a superior alternative.)

tedepstein commented 4 years ago

@tedepstein I'll be interested to hear what you come up with.

Prepare to be underwhelmed. :-)

It might be worth getting clarity on the importance of being able to hand validation off to a totally vanilla (plus OAS extension module) JSON Schema validator.

This is the only practical solution I've come up with -- just accept that the full semantics of readOnly and writeOnly, as specified in OpenAPI 3.x, are not compatible with JSON Schema. So if you have required properties that are readOnly or writeOnly, and you expect that the client or server will actually omit some of these properties, and you want to do correct schema validation on those messages, you'll need to use a specialized OpenAPI-aware validator or preprocessor.

Hopefully, there won't be many other cases like this in 3.1. But where we do have these incompatibilities, we should probably call them out explicitly in the 3.1 spec. And in 4.0, unless some better option presents itself, we can drop these special semantics and just go with standard JSON Schema.

@philsturgeon, how do you feel about this compromise? Do you see any other way around it?

...Because there is intentionally no way* to supply the validator with context to change its behavior.

This was the other idea I was batting around. Not a hard-coded, predefined notion of "request" or "response," but a user-defined context that is basically a runtime argument to the schema validator. Schemas could have certain properties that are only defined in a given context. Or maybe if...then...else gets extended to inspect the context.

If JSON Schema had that generic capability, we could use it to define "request" and "response" contexts, and specialize schema validations in that manner. But it sounds like you considered this, and you don't want to do it.

Barring that, some languages can model business domains at a level of abstraction above physical schemas, define context-specific variants, and validate against those variants (or generate physical schemas for them). RAPID-ML calls these variants realizations. But (full disclosure) the only implementation of RAPID-ML is in RepreZen API Studio. And it looks like alternative schemas won't be available in OpenAPI until at least a 4.0 release.

philsturgeon commented 4 years ago

I'm not sure I've really considered readOnly and writeOnly in the context of validation, but I should have.

For me it's usually documentation and mocking: creating example response bodies when given a schema, deciding whether to shove created_at into the request body, and so on.

If you try to update an "id" property... agh.

I don't want to create any technical differences from JSON Schema, but maybe we can provide advice for some types of tooling of how to handle these things instead?

tedepstein commented 4 years ago

Question: Would it be a breaking change if we softened the "SHOULD NOT be sent" language to say "MAY be omitted?"

  1. readOnly means the property value MAY be omitted from the request, and MUST be ignored by the server if present in the request.
  2. writeOnly means the property value MAY be omitted from the response, and MUST be ignored by the client if present in the response.

It doesn't really solve the problem of JSON Schema compatibility, but it does make it easier to pivot to the GET-modify-PUT scenario as a case where it might be better to leave the property in the request.

I don't think this should be a breaking change. "SHOULD NOT be sent" and "MAY be omitted" both allow for cases where the property is present or absent. "SHOULD" just adds guidance about which way is preferred, while "MAY" is neutral.

tedepstein commented 4 years ago

@philsturgeon,

For me it's usually documentation and mocking: creating example response bodies when given a schema, deciding whether to shove created_at into the request body, etc...

I don't want to create any technical differences from JSON Schema, but maybe we can provide advice for some types of tooling of how to handle these things instead?

It's less a case of creating technical differences, more just trying to cope with the technical difference we already have. OAS v3 makes required context-dependent, and JSON Schema doesn't support context-sensitive validation.

Practically speaking, you only hit this problem when all of the following are true:

  1. The API has required properties that are readOnly or writeOnly.
  2. The client or server actually omits some of these properties.
  3. You want to use a standard JSON Schema validator to enforce the contract.

So there are three corresponding solutions for API clients (including client libraries and API testing tools), servers (including mocks), code generators, example generators, and other tools:

  1. Discourage required + readOnly and required + writeOnly properties, e.g. by detecting these and warning of the limitations in such cases.

  2. Include all required properties in requests and responses, regardless of readOnly and writeOnly. They can have null values (if allowed by the schema), placeholder values, or whatever; and those values will be ignored on the receiving end. But they have to be present to make the schema validator happy. This guidance applies to mock services, code generators, example generators, and any other tool that needs to create request and/or response messages.

  3. Don't use a standard JSON Schema validator directly on a request or response. Instead, do one of the following:
    • Use an OAS-specialized message validator, which may be a specially modified JSON Schema validator.
    • Preprocess the OAS document to convert required + *Only to separate request and response schemas.
    • Preprocess request and response messages to inject missing required + *Only values.
    • Use a different validation technology altogether.
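
The "separate request and response schemas" preprocessing option above might be sketched like this (the helper `split_schema` and its shallow-copy approach are illustrative assumptions, not a published tool):

```python
# Sketch: derive request/response schema variants whose `required` lists drop
# readOnly/writeOnly property names, so a stock JSON Schema validator ends up
# enforcing the OAS 3.0 semantics without any context awareness.

def split_schema(schema):
    props = schema.get("properties", {})
    def variant(excluded_flag):
        v = dict(schema)  # shallow copy; only `required` is rewritten
        v["required"] = [n for n in schema.get("required", [])
                         if not props.get(n, {}).get(excluded_flag)]
        return v
    return variant("readOnly"), variant("writeOnly")

foo = {"type": "object", "required": ["id"],
       "properties": {"id": {"type": "string", "readOnly": True},
                      "name": {"type": "string"}}}
request_schema, response_schema = split_schema(foo)
print(request_schema["required"])   # []
print(response_schema["required"])  # ['id']
```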

In this particular corner case, I don't see a way to make validation fully compatible with JSON Schema without a breaking change to the spec. We've made a clear commitment to backward compatibility in minor spec releases. We've also set a goal of full JSON Schema compatibility, and we're very nearly 100% there, AFAICT. But IMO there's a little more wiggle room to compromise on JSON Schema compatibility than on backward compatibility. It does suck, I know.

If we can't stand the thought of compromising JSON Schema compatibility, I can still see two options:

  1. Plead with @handrews to build some kind of context mechanism into a new release of JSON Schema.
  2. Make this a 4.0 release instead of a 3.1 release.

I'll venture a guess that Option 1 is a nonstarter. Option 2 might seem radical, but it goes back to my argument in issue #2019:

...there is still a question of whether we should do minor releases in the future. I still think we should avoid them, because there are so many minor changes we'd like to make that would, strictly speaking, break backward compatibility, and/or consume extra bandwidth trying to solve those without breaking changes. But we agreed on a TSC call that we'd revisit this question as and when we're considering the possibility of another minor release.

Case in point: changing the OAS semantics of required + *Only would be a very small change, really a nominal compromise to backward compatibility, and IMO clearly a favorable tradeoff for the much greater benefit of 100% JSON Schema compatibility. Yet here we are, devoting considerable bandwidth to this issue, and still compromising JSON Schema compatibility, which I think none of us are happy about.

handrews commented 4 years ago

Plead with @handrews to build some kind of context mechanism into a new release of JSON Schema

Yeah that's a nonstarter for me. I could bring it up with other JSON Schema folks, but to me this would be a huge application-specific warping of the consistent JSON Schema architecture that we've spent the last year and a half putting together. It would be a breaking change on our side, not just in terms of a keyword behaving differently but in terms of changing the scope of what keywords and validators do in general.

MikeRalphson commented 4 years ago

I would be in favour of pointing people at the schema pre-processing solutions suggested by @tedepstein if/when this comes up (i.e. not in the spec itself)

handrews commented 4 years ago

@MikeRalphson @tedepstein I'm a bit confused on the pre-processing idea. I might be fine with it, but it's a little unclear where the "preprocessor" fits in and what the messaging is around this behavior. There are already at least two issues filed showing confusion over the use of readOnly and writeOnly even in OAS 3.0.

I think it will help to clarify what counts as a "JSON Schema implementation" vs an "application" that sits on top of JSON Schema. If OAS 3.1 is going to claim JSON Schema compatibility without any asterisks, then it has to set the expectation that anything handed as written in the OAS document to an implementation (most notably a validator) will behave according to the JSON Schema spec.

However, there are lots of things in the OAS ecosystem that should be considered "applications", including but not limited to:

  • Server side code generators
  • Client side code generators
  • Documentation renderers

If the code generators are aware of whether they are being generated for (or used in) the client vs server, then they can do different things, and documentation renderers can generate notes about client vs server.

But we can't claim "compatibility" if the schema gets changed from what's written into something else to be handed off to JSON Schema. Putting sleight of hand in so that it looks like required is only validated conditionally is going to be even more confusing. Because as soon as they take their "compatible" schema elsewhere it will suddenly behave differently.

In the "preprocessor" approach, what does the user see?

handrews commented 4 years ago

Oops, left junk at the end of that last one from edits; sorry about that, please read it on the web site. Adding this comment for email notification.

tedepstein commented 4 years ago

@handrews,

However, there are lots of things in the OAS ecosystem that should be considered "applications", including but not limited to:

  • Server side code generators
  • Client side code generators
  • Documentation renderers

The main scenario I'm thinking about is runtime request and/or response message validation on the client and/or server.

In any of these cases, we would ideally want the schemas embedded in or referenced from the OAS document to be pure JSON Schema, so a JSON Schema implementation can validate the messages.

To pass this test, some prerequisites intrinsic to JSON Schema must be met.

On top of these, the OpenAPI validation use case implies three additional criteria:

  1. The results of the validation should be fully conformant with the OpenAPI specification. Put another way: The OpenAPI specification must not state any validation semantics that contradict standard JSON Schema validation rules.
  2. The schema should be ready for use by the JSON Schema validator implementation, without modification.
  3. The message should also be ready for validation against that schema, without modification.

The problem is with the first rule: OpenAPI has this special interaction between *Only and required that contradicts JSON Schema validation rules. OpenAPI 3.0 breaks that first rule, and OpenAPI 3.1 cannot (AFAICT) fix that without a breaking change to the OpenAPI spec.

In the "preprocessor" approach, what does the user see?

The preprocessing ideas involve design-time or runtime adapters that address the JSON Schema incompatibility by compromising rule 2 or rule 3, above.

Adapting Schema and Schema Bindings

Assume an OpenAPI contract like this:

openapi: "3.0.3"
info:
  version: 1.0.0
  title: Example for OAI/OpenAPI-Specification issue 1622.
paths:
  /foo:
    post:
      requestBody:
        content: 
          "application/json":
            schema:
              $ref: "#/components/schemas/FooObject"
      responses:
        "201":
          description: Created Successfully
          content:
            "application/json":
              schema:
                $ref: "#/components/schemas/FooObject"
components:
  schemas:
    FooObject:
      type: object
      required:
      - id
      properties:
        id:
          type: string
          readOnly: true
        name: 
          type: string

According to OpenAPI, id is required in the response, optional (and maybe discouraged) in the request.

If the client sends this request:

{
  "name": "Ted"
}

OpenAPI says it's valid, but JSON Schema says it's not.

Let's assume the API server implementation wants to do request validation. If it just hands that request and the embedded schema to a JSON Schema validator, it will fail validation, as expected by JSON Schema, but not in compliance with OpenAPI 3.x.

The server API implementation could, on its own or with the help of an OpenAPI tool or framework, adapt the OpenAPI contract to make validation work the way OpenAPI says it should. It could do this dynamically at runtime, or by use of a code generator at design time.

An OpenAPI contract adapted at design time might look like this:

openapi: "3.0.3"
info:
  version: 1.0.0
  title: Example for OAI/OpenAPI-Specification issue 1622.
paths:
  /foo:
    post:
      requestBody:
        content: 
          "application/json":
            schema:
              $ref: "#/components/schemas/FooObject_REQUEST"
      responses:
        "201":
          description: Created Successfully
          content:
            "application/json":
              schema:
                $ref: "#/components/schemas/FooObject_RESPONSE"
components:
  schemas:
    FooObject:
      type: object
      properties:
        id:
          type: string
          readOnly: true
        name: 
          type: string
    FooObject_REQUEST:
      allOf:
      - $ref: '#/components/schemas/FooObject'
    FooObject_RESPONSE:
      allOf:
      - $ref: '#/components/schemas/FooObject'
      required:
      - id

So in this scenario, the API implementation invokes the standard JSON Schema validator with the request body (unmodified) and the FooObject_REQUEST schema, which does not require id.

Similarly, a client wanting to do response validation could use static code generation or runtime adaptation to generate the above schema, and validate the response against FooObject_RESPONSE, which does require id.

Adapting Request and Response Messages

Another approach is to modify the request or response message to inject required property values.

In the above scenario, the request message:

{
  "name": "Ted"
}

could be adapted to this:

{
  "id": "PLACEHOLDER_VALUE",
  "name": "Ted"
}

The server implementation would invoke a standard JSON Schema validator with the original schema and the modified request body. It could use hand-written code to do this adaptation, or could use an OpenAPI-aware runtime library with that behavior built-in.
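
That adaptation step might look like this (the helper name, the placeholder strategy, and the choice to only patch readOnly properties are all illustrative assumptions):

```python
# Sketch of the message-adaptation approach: before validating a request,
# inject placeholder values for required readOnly properties the client
# legitimately omitted, so a stock JSON Schema validator accepts the message.

def inject_placeholders(schema, message):
    props = schema.get("properties", {})
    patched = dict(message)
    for name in schema.get("required", []):
        if name not in patched and props.get(name, {}).get("readOnly"):
            patched[name] = "PLACEHOLDER_VALUE"  # ignored by the server anyway
    return patched

schema = {"type": "object", "required": ["id"],
          "properties": {"id": {"type": "string", "readOnly": True},
                         "name": {"type": "string"}}}
print(inject_placeholders(schema, {"name": "Ted"}))
# {'name': 'Ted', 'id': 'PLACEHOLDER_VALUE'}
```

The placeholder value must itself satisfy the property's schema (here, any string), which is one reason this approach is more fragile than splitting the schema.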

I think it will help to clarify what counts as a "JSON Schema implementation" vs an "application" that sits on top of JSON Schema. If OAS 3.1 is going to claim JSON Schema compatibility without any asterisks, then it has to set the expectation that anything handed as written in the OAS document to an implementation (most notably a validator) will behave according to the JSON Schema spec.

The above preprocessor/adapter ideas fail that test.

If the code generators are aware of whether they are being generated for (or used in) the client vs server, then they can do different things, and documentation renderers can generate notes about client vs server.

But we can't claim "compatibility" if the schema gets changed from what's written into something else to be handed off to JSON Schema. Putting sleight of hand in so that it looks like required is only validated conditionally is going to be even more confusing. Because as soon as they take their "compatible" schema elsewhere it will suddenly behave differently.

Yeah. Thus my earlier assessment: this kind of sucks. We can create an article somewhere with these adaptation/preprocessing ideas, along with other workarounds and guidelines. But these things are just Band-Aids over a JSON Schema incompatibility. They are not a way for us to claim 100% compatibility.

philsturgeon commented 4 years ago

@handrews @tedepstein @darrelmiller in the last TSC I confused two issues and said I was writing up something for this issue, but I am not. I confused this with #2017, which I am going to take a stab at now.

Henry, I think this might fall into your lap, and/or need more discussion on Thursday.

handrews commented 4 years ago

@darrelmiller this needs a TSC decision on whether we're going to break SemVer or whether we're going to break JSON Schema compatibility. Those are the two options, everything else is just details about how to document it.

At the last call where this was discussed, the question was raised about whether any tooling, particularly any validator, even handles this correctly at all right now. In practice, it's only a breaking change to change this if someone somewhere actually relies on it in code.

@philsturgeon there's nothing I can do here without that decision.

philsturgeon commented 4 years ago

This reply is you handling it so: great! 🥳

darrelmiller commented 4 years ago

This has been addressed in 3.1-rc0 by removing readOnly/writeOnly from the spec and dropping semantic versioning.

ChaitanyaBabar commented 3 years ago

@darrelmiller @tedepstein Since we seem very likely to move to JSON Schema 2019-09 in OAS 3.1, this problem will go away since (as noted above) the issue is now addressed in the JSON Schema spec. We haven't had anyone complain about how we specified it so it seems to be working out OK.

@handrews I had gone through the above discussion, and since the docs change by @tedepstein is not yet reflected in trunk, I just wanted to be sure that I have interpreted the discussion correctly, so I am asking a few questions below.

@handrews @philsturgeon @darrelmiller Can you please put in your thoughts/comments on the same?

  • Question 1: Does the above comment mean that readOnly would be supported in the root context in current OAS specifications?
  • Question 2: If so, what happens when readOnly is used alongside $ref? The official docs here say that sibling properties alongside $ref are ignored, so is the use of readOnly alongside $ref supported?
  • Question 3: Is there any difference in the use/interpretation of readOnly between OAS 3.0.x and OAS 3.1.x versions?

@handrews Can you please put in your thoughts on the above questions?

philsturgeon commented 3 years ago

@ChaitanyaBabar this is a closed issue, and Henry Andrews doesn't owe you anything; let's not pester contributors for answers, especially so long after their involvement has passed.

Please feel free to create a new issue with brand new context, and limit how many people you ping directly, as this affects everyone's inboxes. Thanks for understanding!