Azure / autorest.typescript

Extension for AutoRest (https://github.com/Azure/autorest) that generates TypeScript code. The transpiled JavaScript is isomorphic: it runs both in the browser and in Node.js.
MIT License

Issue with OData Discriminator #802

Closed · sarangan12 closed this 3 years ago

sarangan12 commented 3 years ago

For the following swagger:

"TokenFilter": {
  "discriminator": "@odata.type",
  "properties": {
    "@odata.type": {
      "type": "string",
      "description": "Identifies the concrete type of the token filter."
    },
    "name": {
      "type": "string",
      "externalDocs": {
        "url": "https://docs.microsoft.com/rest/api/searchservice/custom-analyzers-in-azure-search#index-attribute-reference"
      },
      "description": "The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters."
    }
  },
  "required": [
    "@odata.type",
    "name"
  ],
  "description": "Base type for token filters.",
  "externalDocs": {
    "url": "https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search"
  }
},
"AsciiFoldingTokenFilter": {
  "x-ms-discriminator-value": "#Microsoft.Azure.Search.AsciiFoldingTokenFilter",
  "allOf": [{
    "$ref": "#/definitions/TokenFilter"
  }],
  "properties": {
    "preserveOriginal": {
      "type": "boolean",
      "default": false,
      "description": "A value indicating whether the original token will be kept. Default is false."
    }
  },
  "description": "Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the \"Basic Latin\" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.",
  "externalDocs": {
    "url": "http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html"
  }
}
"CjkBigramTokenFilter": {
  "x-ms-discriminator-value": "#Microsoft.Azure.Search.CjkBigramTokenFilter",
  "allOf": [{
    "$ref": "#/definitions/TokenFilter"
  }],
  "properties": {
    "ignoreScripts": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/CjkBigramTokenFilterScripts",
        "x-nullable": false
      },
      "description": "The scripts to ignore."
    },
    "outputUnigrams": {
      "type": "boolean",
      "default": false,
      "description": "A value indicating whether to output both unigrams and bigrams (if true), or just bigrams (if false). Default is false."
    }
  },
  "description": "Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene.",
  "externalDocs": {
    "url": "http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html"
  }
}
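For context on how this discriminator is used: on the wire, the `@odata.type` property carries the `x-ms-discriminator-value` of the concrete subtype, which is how a deserializer picks the right model. A minimal sketch (the payload shape follows the swagger above; the dispatch logic is illustrative, not the generator's actual code):

```typescript
// A wire payload for an AsciiFoldingTokenFilter. Its "@odata.type" value
// matches that subtype's x-ms-discriminator-value in the swagger above.
const payload = JSON.parse(`{
  "@odata.type": "#Microsoft.Azure.Search.AsciiFoldingTokenFilter",
  "name": "my_ascii_filter",
  "preserveOriginal": true
}`);

// A deserializer dispatches on the discriminator to select the concrete model.
const kind: string = payload["@odata.type"];
console.log(kind === "#Microsoft.Azure.Search.AsciiFoldingTokenFilter"); // prints "true"
```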

For the above swagger, the new generator fails. What is the problem? The generated code looks like this:

```typescript
export interface TokenFilter {
  /**
   * Polymorphic discriminator, which specifies the different types this object can be
   */
  @odataType: "#Microsoft.Azure.Search.AsciiFoldingTokenFilter" | "#Microsoft.Azure.Search.CjkBigramTokenFilter";
  /**
   * Identifies the concrete type of the token filter.
   */
  odataType: string;
  /**
   * The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
   */
  name: string;
```

There are two problems with this output:

1. There is no closing brace.
2. The duplicated `@odataType` and `odataType` properties cause an error (`@odataType` is not even a valid TypeScript identifier).
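The duplication appears to come from the `@odata.type` discriminator being emitted twice: once verbatim and once under its normalized name. One plausible corrected shape (a sketch only, not the actual output of the fix) keeps a single normalized `odataType` property and narrows it to a literal in each subtype; names such as `TokenFilterUnion` and `describe()` are illustrative, not generator output:

```typescript
// Hypothetical corrected models: "@odata.type" is renamed to the valid
// identifier "odataType" and narrowed to a literal per subtype.
export interface TokenFilter {
  /** Identifies the concrete type of the token filter. */
  odataType: string;
  /** The name of the token filter. */
  name: string;
}

export interface AsciiFoldingTokenFilter extends TokenFilter {
  odataType: "#Microsoft.Azure.Search.AsciiFoldingTokenFilter";
  preserveOriginal?: boolean;
}

export interface CjkBigramTokenFilter extends TokenFilter {
  odataType: "#Microsoft.Azure.Search.CjkBigramTokenFilter";
  ignoreScripts?: string[];
  outputUnigrams?: boolean;
}

export type TokenFilterUnion = AsciiFoldingTokenFilter | CjkBigramTokenFilter;

// The literal odataType lets TypeScript narrow the union in a switch:
export function describe(filter: TokenFilterUnion): string {
  switch (filter.odataType) {
    case "#Microsoft.Azure.Search.AsciiFoldingTokenFilter":
      return `ascii folding (preserveOriginal=${filter.preserveOriginal ?? false})`;
    case "#Microsoft.Azure.Search.CjkBigramTokenFilter":
      return `cjk bigram (outputUnigrams=${filter.outputUnigrams ?? false})`;
    default:
      // Unreachable for TokenFilterUnion.
      throw new Error("unknown token filter");
  }
}
```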
sarangan12 commented 3 years ago

Completed with PR https://github.com/Azure/autorest.typescript/pull/804