Closed: renebrandel closed this issue 3 months ago.
Hi @renebrandel!
I have a couple of questions:
1. How would the GraphQL schema be generated? Will Amplify automatically detect that schema once I provide the connection details for my database?
2. After the schema is created, can I then use the `amplify codegen models` command to generate the type-safe models to use in my frontend app? Is that the workflow?

Hi @abdallahshaban557:

> How would the GraphQL schema be generated? Will Amplify automatically detect that schema once I provide the connection details for my database?

The `amplify api generate-schema` command (name not finalized, any suggestions?) will scan the existing SQL database tables and infer the GraphQL schema. Once inferred, the GraphQL models are available with resolvers for CRUD operations.

> So after the schema is created, I can then use the `amplify codegen models` command to generate the type-safe models to use in my frontend app? Is that the workflow?

Exactly! The goal with `amplify api generate-schema` is to create a GraphQL schema just like the ones customers create today. All subsequent commands to generate client-side configurations should "just work". So `amplify codegen models` (which also runs after `amplify push` today) will create .dart, .swift, .js, and .java files to consume with the Amplify libraries.
Got it! Thank you @renebrandel, this answers my questions! I like the command name `amplify api generate-schema`; that makes sense to me! Maybe providing a shorthand to generate my schema and push it in one go would be great as well, maybe with a conditional argument or something? Although that might be dangerous, since I'd want to review my generated schema BEFORE starting to spin up resources with amplify codegen. I am going back and forth on that!
What if we have something deployed in prod, but we do not want to build new features with the naming coming with Amplify? (The Python backend uses names like "daily_challenge" for tables.) Would there be a possibility to e.g. freeze table names? Or specify different naming for the `createdAt` / `updatedAt` columns? Or use `autoIncrement` for primary keys (the primary key would become 1, 2, 3, 4, 5 instead of just a UUID)?
Right now we are using the Sequelize framework (ORM) inside a Node.js Lambda to connect to the RDS DB instance, mainly to add more features to the platform or to fix things in a legacy backend written in Python.
Regarding the auth topic, please support IAM-based authentication, at least when using first-party RDS-based databases.
Also, instead of hardcoding credentials, consider supporting credentials stored in Secrets Manager. This way they can be rotated automatically by Secrets Manager.
Hi @renebrandel, I hope you are doing well. I have a question: will this be available for Oracle as well?
Hi @renebrandel, I look forward to this feature becoming a reality soon. I will leave some feedback.
Regarding Option D, I think the approach using the `@mapsTo` directive is best. I think it is better that the identifier of the `user` table is represented as `User` in the GraphQL schema. But when using `renameTo`, it is represented as `user` in the schema.
Which RDS engine type do you use most?
MySQL
Where do you host your RDS?
Amazon Aurora, Amazon RDS (MySQL, PostgreSQL)
Do you already have an RDS proxy configured for most of your databases to manage connection pooling?
Yes
@majirosstefan - Thanks for all the feedback! Let me work through some of these questions in more depth with the team over the next week to provide more actionable next steps!
@dreamorosi - Yes! We'll support IAM-based authentication built-in whenever it's possible. We'd like to reserve the username + password combination only for databases where IAM is not possible, for example DBs hosted externally outside of AWS.
This brings up an interesting point that we should probably discuss: how to surface to the customer that, if the database is within Aurora etc., we should prompt/warn them explicitly to use IAM.
@gnunez-dev - thanks for bringing Oracle up! For now, we're seeing the most interest in MySQL and Postgres. One main area of complexity is matching DB column types to GraphQL types. Once we've achieved a reasonable structure for MySQL and Postgres, we'll probably tackle Oracle next. Generally, when we ask our customers, MySQL seems to be the primary choice at this point, but we would love to hear others' feedback as well to verify that we're making the right prioritization.
@fossamagna - thanks so much for your feedback! Just to verify I understand correctly: you'd prefer if Amplify just auto-inferred/suggested these remappings (in Section D), and then customers apply edits on top of those?
@abdallahshaban557

> Maybe providing a shorthand as well to just generate my schema and push it in one-go would be great as well! maybe with a conditional argument or something?

I'd need to think through the lifecycle some more and really verify how often the database definitions change. What do you think of, on every `amplify pull` and `amplify push`, at least checking with the database whether the database schema has changed, and then throwing a warning? Something like:

amplify pull
...
⚠️ Your database (mydatabase.com)'s schema is out-of-sync with your GraphQL schema.
Run "amplify api generate-schema" to apply the latest database schema changes to your GraphQL API
@renebrandel

> @fossamagna - thanks so much for your feedback! Just to verify I understand correctly, you'd prefer if Amplify just auto-inferred/suggested these remappings in (Section D), then customers apply edits on those?

I prefer remapping (e.g. renaming fields) from the auto-inferred/suggested field names. In many cases the auto-inferred/suggested field names are enough, but when we want to map a field to a name completely different from the DB table column name, I think it is great that the customer can apply the change!
Hi @majirosstefan

> What about migrations support? Extending columns, adding default values, setting length constraints, etc.?

In the initial release, migrations aren't going to be supported, because the database is managed outside of Amplify. Ultimately, whoever owns the underlying database should be able to set these constraints directly on the database tables themselves. Potentially, we could have Amplify also "export" a desired SQL query to facilitate the migration and make the existing RDS database "match" the GraphQL schema.
Once we support "greenfield" RDS, i.e. creating RDS from scratch using Amplify, we can provide more of these constraints in-line within the schema and manage migrations. What are your top used constraints? We currently only support `@default` in Amplify but are looking to expand to more validations.
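As a sketch of the directive mentioned above, this is roughly how `@default` reads in an Amplify GraphQL schema today (the model and field names are illustrative):

```graphql
type Todo @model {
  content: String
  # Records created without an explicit value fall back to this default
  status: String @default(value: "PENDING")
}
```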
> Would there be any possibility to run a raw query (e.g. some Postgres-specific thing)?

A new RFC specifically on this is coming VERY soon! We're rethinking the way "custom business logic" is done, in a way that's idiomatic to the underlying data source but also allows for higher-level abstractions to exist.
> What about SQL triggers? // or e.g. connecting a Lambda to listen to changes in a given table (maybe as an alternative to DynamoDB streams?)

Do you mean SQL triggers in the sense of a) updating a record upon write of another record, or b) tying a SQL trigger event to invoking another Lambda function?
For a), that should still be managed by the underlying database. IMPORTANT: this does have implications for real-time capabilities. Subscriptions in AppSync can only be triggered if there was a mutation event on AppSync.
For b), we might need to identify a new workflow. Are you set on this happening in SQL, or would your use case also be served if, on the GraphQL API itself, we let you slot in your own Lambda function after a mutation?
> We are using a different engine (or extension) for Postgres that is deployed in RDS (it's something related to GEO / points on the map). It would be nice if types were generated for that too (or if we could add them manually).

Sounds good! We will try to do our best as we get to Postgres support. Our first supported engine will be MySQL, but Postgres is next on our list!
> What if we have something deployed in prod, but we do not want to build new features with the naming coming with Amplify? (Python is using something like "daily_challenge" for table names.) Would there be a possibility to e.g. freeze table names? Or specify different naming for the `createdAt` / `updatedAt` columns?

Can you tell me more about this? I'm not sure I fully understand the steps involved here. (It would be good if you could lay out step-by-step instructions for how you're iterating on this today.)
Intuitively speaking, I think it should be covered in Option D, but I wanted to check if I'm missing some context here.
> Would it be possible to use something like autoIncrement for primary keys? (The primary key would become 1, 2, 3, 4, 5 instead of just a UUID.)

We're likely going to stick with UUID/ULID for now. One thing we fear is where we should manage the auto-increment. If we manage it on the GraphQL API, it might go out of sync if there are requests coming from other endpoints. If we manage it on the database, we'd effectively be making two requests per create mutation. There might be some scalability concerns here. Though I'm curious how you manage this today and whether you've run into any concerns with your approach.
Is local mocking in scope for this RFC, or is that covered in another RFC?
Regarding limits: they are mostly length limits for strings (VARCHAR(100), etc.). Specifying a length for each field in a given model does not force you to run a sync // this is useful in our scenario, where we are connecting to an existing DB.
Perfect! Sometimes it's easier to run a raw query, parse the results into JS objects, and send them to the app (e.g. "batch" gets like SELECT * FROM products WHERE catid IN ('1', '2', '3', '4'), or something with GROUP BY, etc.).
a) No, that's not what I meant. b) Not sure about it, but I would expect something that mimics DynamoDB streams. c) I forgot to mention transactions; this is a feature we use: model A is updated together with model B, or not at all. d) On cascade delete: I am not sure if that's supported already or not.
Would be great if that were supported // or a scenario where a dev can edit (or add) custom types to support a different database engine (& types?). This is the first link that I found on Google in regard to that.
`type User @model` could be stored in the table "users_something_different" and not just "user".

@renebrandel Overall this looks great! A few thoughts below:
A. Configure existing relational database
`amplify import api` is less intuitive, as I'm not thinking that I would be importing an API, but a database. For consideration: `amplify import database`.
I like Option A.1, and ++1 for .env support!
B. Generate GraphQL schema from database
"amplify generate schema" is weird, as I'm not expecting it to generate my schema, but to connect to my database and retrieve the schema. My suggestion would be `amplify api update schema`. We may wanna use push/pull terminology, as when I update my schema in my RDS, I'm not thinking I'd run "generate-schema" to get the changes.
I like Option B.1; keep everything in one file.
C. Add authorization rules to GraphQL API with RDS data source
Love the use of @auth rules!
D. Rename RDS-inferred data model & fields
Option D.1.
Definitely NOT Option D.3; it's too confusing and has a DSL feel to it.
Note: At first, I thought I liked a combination of D.1 and D.2: reuse the existing `@mapsTo` only for fields, but not for tables (those would use `renameTo`). The more I thought about it, though, the more I feel the "mapping" concept applies to tables as well, and "rename" is confusing.
Awesome news about supporting databases on Amplify. It will solve GDPR-related issues. I'm looking forward to seeing this working. When will it arrive?
Hi @MarlonJD - what kind of GDPR-specific use cases are you looking for this to solve?
If we can connect to an RDB, that means I can use Amplify for big companies and universities that have IT rules requiring them to use their own server or database. If we can use any database like Postgres, MySQL, or MSSQL, it will create the opportunity to use Amplify in many different institutions. Right now, because of those specific IT rules, I cannot use Amplify there.
Hello @renebrandel! Really nice RFC! Do you have a timeline for releasing it? I'm more than keen to test it and provide feedback. I see an active branch, feature/rds-support.
Which RDS engine type do you use most? MySQL
Where do you host your RDS? Amazon Aurora
Do you already have an RDS proxy configured for most of your databases to manage connection pooling? No
Hello,
Is there any plan to have this functional any time soon?
Regards, David
⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️ THIS PREVIEW IS NOT PRODUCTION-READY. DO NOT USE THIS IN YOUR PRODUCTION ENVIRONMENT OR WITH PRODUCTION DATABASES ⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️⚠️
1. npm i -g @aws-amplify/cli@12.5.3-rds-3.0 --loglevel=error --force
2. Run `amplify version`. It should print 12.5.3-rds-3.0. THIS STEP IS IMPORTANT! If the version doesn't match, run `amplify uninstall` or `npm uninstall -g @aws-amplify/cli` and go back to step 1.
3. mkdir rds-project && cd rds-project && amplify init -y
4. Run `amplify import api` and answer the questions regarding your database endpoint.
5. amplify push -y (you can now go to the AppSync console and try some queries!)
To run custom SQL statements, use `@sql(statement: "")` in the `schema.rds.graphql` file and pass in parameters using the `:<variable>` notation:

type Query {
  listRestaurantsInState(state: String): Restaurants @sql(statement: "SELECT * FROM Restaurants WHERE state LIKE CONCAT('%', :state, '%')")
}
amplify push -y
Use the `@hasOne` and `@hasMany` directives to establish a 1:1 or 1:M relationship. Use the `@belongsTo` directive to create a bi-directional relationship back to the relationship parent.
# 1:M relationship example
type Blog @model {
id: String! @primaryKey
content: String
posts: [Post] @hasMany(references: ["blogId"])
}
type Post @model {
id: String! @primaryKey
content: String
blogId: String!
blog: Blog @belongsTo(references: ["blogId"])
}
# 1:1 relationship example
type User @model {
id: String! @primaryKey
name: String
profile: Profile @hasOne(references: ["userId"])
}
type Profile @model {
id: String! @primaryKey
details: String
userId: String!
user: User @belongsTo(references: ["userId"])
}
In the API schema, you can change a field or model name by using the `@refersTo` directive. If you don't provide the `@refersTo` directive, Amplify assumes the model name and field name exactly match the database table and column names.
Assume you have the following API schema:
type Todo @model @refersTo(name: "todos") {
content: String
done: Boolean
}
To change the "done" field name to "isCompleted", add the `@refersTo` directive with the original field name and rename the `done` field to the desired value.
type Todo @model @refersTo(name: "todos") {
content: String
isCompleted: Boolean @refersTo(name: "done")
}
CREATE TABLE Persons (
PersonID int PRIMARY KEY,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);
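Given the Persons table above, the inferred model in schema.rds.graphql would look roughly like this (a sketch; the exact generated output may differ):

```graphql
type Persons @model {
  PersonID: Int! @primaryKey
  LastName: String
  FirstName: String
  Address: String
  City: String
}
```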
amplify push -y
Post in this thread and mention me: "@renebrandel"
Sorry for the delayed response on this thread. I really want to give a special thanks to the folks in this thread for providing your feedback thus far:
amplify api import
amplify push
Deployment failed.
Deploying root stack webapp [ ====================-------------------- ] 1/2
amplify-amplifyc32f5ae0ce464-… AWS::CloudFormation::Stack UPDATE_ROLLBACK_COMPLETE Mon Oct 30 2023 20:52:06…
apiwebapp AWS::CloudFormation::Stack CREATE_FAILED Mon Oct 30 2023 20:51:31…
Deploying api webapp [ ====================-------------------- ] 2/4
GraphQLAPI AWS::AppSync::GraphQLApi CREATE_COMPLETE Mon Oct 30 2023 20:51:14…
GraphQLAPITransformerSchema3C… AWS::AppSync::GraphQLSchema CREATE_FAILED Mon Oct 30 2023 20:51:28…
GraphQLAPIDefaultApiKey215A6D… AWS::AppSync::ApiKey CREATE_COMPLETE Mon Oct 30 2023 20:51:16…
🛑 The following resources failed to deploy:
Resource Name: GraphQLAPITransformerSchema3CB2AE18 (AWS::AppSync::GraphQLSchema)
Event Type: create
Reason: Schema Creation Status is FAILED with details: Found 1 problem(s) with the schema:
There is no top level schema object defined.
amplify api generate-schema
amplify push
same outcome as above
Hi @connorwilloughby - it seems like you're connecting to a PostgreSQL database. In this preview build we only support MySQL. That'll likely be the root cause of the issue. We're going to publish a PostgreSQL-compatible version of the preview soon.
No worries. I did read that your rds-3.0 version only supported MySQL, but after seeing the flow updated in the 4.0 branch, I thought I'd give it a go.
@connorwilloughby - we just dropped a new tagged release. This preview should support Postgres as well:
npm i -g @aws-amplify/cli@12.8.0-rds-5.0
Hi @renebrandel,
Looks like im getting the same thing again;
PS C:\Users\Connor\Documents\Github\second_amplify> amplify push
/ Fetching updates to backend environment: staging from the cloud.- Building resource api/webapp
⚠️ WARNING: your GraphQL API currently allows public create, read, update, and delete access to all models via an API Key. To configure PRODUCTION-READY authorization rules, review: https://docs.amplify.aws/cli/graphql/authorization-rules
/ Fetching updates to backend environment: staging from the cloud.✅ GraphQL schema compiled successfully.
Edit your schema at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema.graphql or place .graphql files in a directory at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema
√ Successfully pulled backend environment staging from the cloud.
- Building resource api/webapp
⚠️ WARNING: your GraphQL API currently allows public create, read, update, and delete access to all models via an API Key. To configure PRODUCTION-READY authorization rules, review: https://docs.amplify.aws/cli/graphql/authorization-rules
\ Building resource api/webapp✅ GraphQL schema compiled successfully.
Edit your schema at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema.graphql or place .graphql files in a directory at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema
Current Environment: staging
┌──────────┬───────────────┬───────────┬───────────────────┐
│ Category │ Resource name │ Operation │ Provider plugin │
├──────────┼───────────────┼───────────┼───────────────────┤
│ Api │ webapp │ Create │ awscloudformation │
└──────────┴───────────────┴───────────┴───────────────────┘
√ Are you sure you want to continue? (Y/n) · yes
⚠️ WARNING: your GraphQL API currently allows public create, read, update, and delete access to all models via an API Key. To configure PRODUCTION-READY authorization rules, review: https://docs.amplify.aws/cli/graphql/authorization-rules
✅ GraphQL schema compiled successfully.
Edit your schema at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema.graphql or place .graphql files in a directory at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema
- Building resource api/webapp
⚠️ WARNING: your GraphQL API currently allows public create, read, update, and delete access to all models via an API Key. To configure PRODUCTION-READY authorization rules, review: https://docs.amplify.aws/cli/graphql/authorization-rules
- Building resource api/webapp✅ GraphQL schema compiled successfully.
Edit your schema at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema.graphql or place .graphql files in a directory at C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target typescript
? Enter the file name pattern of graphql queries, mutations and subscriptions src\graphql\**\*.ts
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2
? Enter the file name for the generated code src/API.ts
Deployment failed.
Deploying root stack webapp [ ====================-------------------- ] 1/2
amplify-amplifyc32f5ae0ce464-… AWS::CloudFormation::Stack UPDATE_ROLLBACK_COMPLETE Wed Nov 08 2023 19:43:25…
apiwebapp AWS::CloudFormation::Stack CREATE_FAILED Wed Nov 08 2023 19:42:50…
Deploying api webapp [ ====================-------------------- ] 2/4
GraphQLAPI AWS::AppSync::GraphQLApi CREATE_COMPLETE Wed Nov 08 2023 19:42:34…
GraphQLAPIDefaultApiKey215A6D… AWS::AppSync::ApiKey CREATE_COMPLETE Wed Nov 08 2023 19:42:36…
GraphQLAPITransformerSchema3C… AWS::AppSync::GraphQLSchema CREATE_FAILED Wed Nov 08 2023 19:42:47…
🛑 The following resources failed to deploy:
Resource Name: GraphQLAPITransformerSchema3CB2AE18 (AWS::AppSync::GraphQLSchema)
Event Type: create
Reason: Schema Creation Status is FAILED with details: Found 1 problem(s) with the schema:
There is no top level schema object defined.
🛑 Resource is not in the state stackUpdateComplete
Name: GraphQLAPITransformerSchema3CB2AE18 (AWS::AppSync::GraphQLSchema), Event Type: create, Reason: Schema Creation Status is FAILED with details: Found 1 problem(s) with the schema:
There is no top level schema object defined., IsCustomResource: false
input AMPLIFY {
engine: String = "postgres"
globalAuthRule: AuthRule = {allow: public}
}
input ModelStringInput {
ne: String
eq: String
le: String
lt: String
ge: String
gt: String
contains: String
notContains: String
between: [String]
beginsWith: String
attributeExists: Boolean
attributeType: ModelAttributeTypes
size: ModelSizeInput
}
input ModelIntInput {
ne: Int
eq: Int
le: Int
lt: Int
ge: Int
gt: Int
between: [Int]
attributeExists: Boolean
attributeType: ModelAttributeTypes
}
input ModelFloatInput {
ne: Float
eq: Float
le: Float
lt: Float
ge: Float
gt: Float
between: [Float]
attributeExists: Boolean
attributeType: ModelAttributeTypes
}
input ModelBooleanInput {
ne: Boolean
eq: Boolean
attributeExists: Boolean
attributeType: ModelAttributeTypes
}
input ModelIDInput {
ne: ID
eq: ID
le: ID
lt: ID
ge: ID
gt: ID
contains: ID
notContains: ID
between: [ID]
beginsWith: ID
attributeExists: Boolean
attributeType: ModelAttributeTypes
size: ModelSizeInput
}
input ModelSubscriptionStringInput {
ne: String
eq: String
le: String
lt: String
ge: String
gt: String
contains: String
notContains: String
between: [String]
beginsWith: String
in: [String]
notIn: [String]
}
input ModelSubscriptionIntInput {
ne: Int
eq: Int
le: Int
lt: Int
ge: Int
gt: Int
between: [Int]
in: [Int]
notIn: [Int]
}
input ModelSubscriptionFloatInput {
ne: Float
eq: Float
le: Float
lt: Float
ge: Float
gt: Float
between: [Float]
in: [Float]
notIn: [Float]
}
input ModelSubscriptionBooleanInput {
ne: Boolean
eq: Boolean
}
input ModelSubscriptionIDInput {
ne: ID
eq: ID
le: ID
lt: ID
ge: ID
gt: ID
contains: ID
notContains: ID
between: [ID]
beginsWith: ID
in: [ID]
notIn: [ID]
}
enum ModelAttributeTypes {
binary
binarySet
bool
list
map
number
numberSet
string
stringSet
_null
}
input ModelSizeInput {
ne: Int
eq: Int
le: Int
lt: Int
ge: Int
gt: Int
between: [Int]
}
enum ModelSortDirection {
ASC
DESC
}
Hi @connorwilloughby 👋
Can you please run `amplify version` and confirm that you are on version 12.8.0-rds-5.0?
Also, the schema you shared seems like a compiled schema. Is that file from the amplify/backend/api folder or the amplify/backend/api/build folder?
It looks like you only have the engine and globalAuthRule in the editable schema.rds.graphql file.
UPDATE: I edited your comment to fix the formatting.
Were you able to connect to your database and generate models from your database schema before running `amplify push`?
Hi @chrisbonifacio,
Thanks for reaching out. The result of `amplify version` is 12.8.0-rds-5.0. The result of `amplify push` is the same as the console output from my last comment.
PS C:\Users\Connor\Documents\Github\second_amplify> amplify import api
⚠️ This feature is in preview and is not recommended to use with production systems.
√ Select the database type: · PostgreSQL
Please provide the following database connection information:
√ Enter the database url or host name: · {AWS_HOSTNAME_REDACTED}
√ Enter the port number: · 5432
√ Enter the username: · {UN_REDACTED}
√ Enter the password: ·
√ Enter the database name: · {DB_NAME_REDACTED}
√ Successfully fetched the database schema.
Successfully imported the database schema into C:\Users\Connor\Documents\Github\second_amplify\amplify\backend\api\webapp\schema.rds.graphql.
input AMPLIFY {
engine: String = "postgres"
globalAuthRule: AuthRule = {allow: public}
}
amplify import api
amplify api generate-schema
amplify push

It seems to be deploying a generic GraphQL API which does not reference the database schema provided during the `amplify import api` step.
Deployed api webapp [ ======================================== ] 9/9
GraphQLAPI AWS::AppSync::GraphQLApi CREATE_COMPLETE Fri Nov 10 2023 17:10:07…
GraphQLAPINONEDS95A13CF0 AWS::AppSync::DataSource CREATE_COMPLETE Fri Nov 10 2023 17:10:09…
GraphQLAPIDefaultApiKey215A6D… AWS::AppSync::ApiKey CREATE_COMPLETE Fri Nov 10 2023 17:10:09…
GraphQLAPITransformerSchema3C… AWS::AppSync::GraphQLSchema CREATE_COMPLETE Fri Nov 10 2023 17:10:21…
Blog AWS::CloudFormation::Stack CREATE_COMPLETE Fri Nov 10 2023 17:11:09…
Comment AWS::CloudFormation::Stack CREATE_COMPLETE Fri Nov 10 2023 17:12:12…
Post AWS::CloudFormation::Stack CREATE_COMPLETE Fri Nov 10 2023 17:12:12…
ConnectionStack AWS::CloudFormation::Stack CREATE_COMPLETE Fri Nov 10 2023 17:12:24…
CustomResourcesjson AWS::CloudFormation::Stack CREATE_COMPLETE Fri Nov 10 2023 17:12:36…
Any questions please let me know!
@connorwilloughby did you ever run `amplify add api` before `amplify import api`?
It does seem that the API being deployed is using one of our starter schema templates.
Do you have both a schema.graphql and a schema.rds.graphql file in your amplify/backend/api folder?

> Did you ever run amplify add api before amplify import api?

Yes. However, I've since removed it and deployed over it. My `import api` call is still returning the template.

> Do you have both a schema.graphql and a schema.rds.graphql file in your amplify/backend/api folder?

No, I only have amplify\backend\api\webapp\schema.rds.graphql
This issue is now closed. Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one.
Amplify CLI’s GraphQL API lets you create a new GraphQL API powered by AWS AppSync and DynamoDB within 5 minutes! We’re looking to extend Amplify’s GraphQL capabilities to support customers with existing data sources, especially relational databases (RDS). We will also expose aggregation scenarios via these RDS data sources, such as sum, average, max, and min.
We are proposing a new developer experience to bring existing relational databases into your GraphQL API. This RFC breaks down into:
- Workflow summary
- A. Configure existing relational database
- B. Generate GraphQL schema from database
- C. Add authorization (@auth) rules to GraphQL API with RDS data source
- D. Rename RDS-inferred data model & fields
- E. Deploying and interactions with libraries
A. Configure existing relational database
The general gist: give customers the ability to input the connection information of their relational database, and Amplify CLI will discover all tables and columns to generate a representative GraphQL schema.
Customers can run `amplify import api` and get a new selection. This creates and opens a new `amplify/backend/api/schema.rds.graphql` file. (You can skip the CLI step and create this file manually in your filesystem as well!) This GraphQL file is pre-filled with a template to provide the database connection information:
Option A.1 - Provide your connection URL, use environment variables for login info:
Customers provide their database login information via environment variables or “.env” files. Amplify CLI will automatically add .env to .gitignore. Amplify Hosting customers can reuse the existing “Environment Variables” feature to inject the required values at build time.
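A hypothetical sketch of what such a template could look like. The later preview builds in this thread show an `input AMPLIFY` with an `engine` field; the `connectionUrl` field and env-var interpolation syntax below are illustrative assumptions, not the final format:

```graphql
input AMPLIFY {
  engine: String = "mysql"
  # Illustrative: connection URL resolved from an environment variable (e.g. a .env file)
  connectionUrl: String = "${env:DATABASE_URL}"
}
```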
Option A.2 - Just provide engine type and endpoint:
To limit any unintended exposure of critical information within the schema, we provide an expanded `engine` field with more specific connection information options. Upon `amplify push` or `amplify api generate-schema`, we prompt via the CLI, using secure input, for the username and password. (For Amplify Hosting, the flow remains the same.)
Option A.3: Enable both A.1 and A.2
The benefit of this approach is more flexibility, but that can potentially also lead to more confusion down the road, due to different paths leading to the same outcomes.
B. Generate GraphQL schema from database
Customers can run `amplify api generate-schema` and a user-editable GraphQL schema gets generated to allow you to add further features. (Note: the command name is NOT final. We would love your ideas in the comments!) The generated schema will infer directives including `@model`, `@primaryKey`, `@index`, and the relationship directives (`@hasOne`, `@hasMany`, `@belongsTo`).
).Option B.1: Re-use the same file where you configured the setup
In the `schema.rds.graphql` file where you configured the connection information, a new section is generated below, where we’ll allow you to apply “limited edits”. The benefit of this is that you have one file that defines the source of truth of your GraphQL schema for RDS. The drawback is that it might cause confusion about where the “line” is between the input vs. the output of Amplify.
Option B.2: Create a separate “.generated” file
In the same folder, we create a `schema.rds.generated.graphql` file with the auto-generated content. The edits that a customer can make are still limited, as we’ll see in the following sections. This allows you to cleanly separate the “input” config from the Amplify-generated output. The drawback of a solution like this is that multiple files can get messy, and version-controlling these files correctly might add mental overhead.
- schema.rds.graphql
- schema.rds.generated.graphql
If this is the preferred route, we could explore an alternative, non-file-based DX for data source input. Though traditionally, Amplify customers have loved that everything is defined “in-place” at the schema level; that was one of the major pieces of positive feedback we received on the GraphQL Transformer v2 update.
C. Add authorization rules to GraphQL API with RDS data source
While the generated schema is a good start, it still requires some authorization configuration to be fully functional. As said previously, customers can make limited edits to the auto-generated portion to “layer in” authorization rules, just as they would with DynamoDB today.
Note: Amplify’s GraphQL APIs are “deny-by-default”. It is a requirement to use `@auth` rules or a global authorization rule to open API access to clients. The goal is for these edits to feel natural if you’re already an Amplify GraphQL customer. The authorization rules’ strategy, provider, and operations should “just work” at the model level and field level. See the launch phases below for a better breakdown of feature availability.
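For illustration, layering an @auth rule onto an inferred model could look like this, using Amplify's existing @auth syntax (the model itself is hypothetical):

```graphql
type Post @model @auth(rules: [
  { allow: owner },                        # owners get full CRUD access
  { allow: public, operations: [read] }    # everyone else can only read
]) {
  id: String! @primaryKey
  title: String
}
```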
D. Rename RDS-inferred data model & fields
Not all database table names are “frontend-developer-friendly”. We want to give customers the ability to rename the inferred data models to introduce better ergonomics downstream.
For example, say there is a database table “users” with columns “id” and “email_address”. Two changes could make it more ergonomic: the table name “users” could be exposed as the singular, capitalized type User, and the snake_case column “email_address” could become “emailAddress”, since snake_case is not idiomatic for most frontend languages such as JavaScript (React), Java, and Swift.
Option D.1: Rename the types and map to the original name via a directive
Customers can rename the type to the updated name and use the `@mapsTo` directive to point to the original database name. This would be an extension of Amplify CLI’s existing `@mapsTo` directive. Amplify CLI can automatically add these mappings to make type and field names idiomatic by default. (We would love to hear whether you’d like a new directive for this use case or to re-use the existing directive. In addition, what are your thoughts on Amplify CLI automatically inferring this?)
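Using the “users” / “email_address” example, Option D.1 could look something like the sketch below. Note this is an assumption: whether `@mapsTo` would also apply at the field level is part of the open question above.

```graphql
type User @model @mapsTo(name: "users") {
  id: String! @primaryKey
  # Field-level mapping back to the snake_case column (hypothetical extension)
  emailAddress: String @mapsTo(name: "email_address")
}
```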
Option D.2: Rename the types and fields via a directive
Customers can rename the type to the updated name via the `@renameTo` directive, which points to the original database name.

This would be a departure from the existing pattern of the `@mapsTo` directive, but it allows customers to provide only “additive” updates. It also introduces additional mental overhead for the customer around the codegen outputs, because you don’t need to do this for the existing DynamoDB workflows.

(If you prefer this approach, we’d love to hear your suggestion for the directive name.)
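A sketch of the “additive” variant (`@renameTo` is a hypothetical directive whose name and shape are not finalized):

```graphql
# Hypothetical @renameTo sketch: the generated schema keeps the
# database names, and the customer additively supplies new names.
type users @model @renameTo(name: "User") {
  id: ID!
  email_address: String @renameTo(name: "emailAddress")
}
```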
Option D.3: Use a “mapping definition” in the `input AMPLIFY`
In the “input” section of the file, customers can provide mapping information in a centralized way.
This allows customers to centrally manage the type and field name-mappings. The drawback is that it creates a “duality” between where the customer is supposed to edit the inputs and where the customer is supposed to edit the “generated outputs”.
(If you prefer this option, we would love to hear your idea on what the actual format of the “mapping” could be.)
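One hypothetical shape for such a mapping (purely illustrative; neither the `NameMapping` input nor the dotted-path syntax is a finalized design):

```graphql
input NameMapping {
  original: String!
  renamed: String!
}

input AMPLIFY {
  engine: String = "mysql"
  mapping: [NameMapping] = [
    { original: "users", renamed: "User" }
    { original: "users.email_address", renamed: "User.emailAddress" }
  ]
}
```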
E. Deploying and interactions with libraries
Upon `amplify push`, Amplify CLI will generate an AppSync GraphQL API with the attached RDS data source and authorization rules via CloudFormation.

In addition, similar to the existing DynamoDB experience, code generation happens upon a successful push, and the Amplify libraries can consume the generated codegen files (.js, .swift, .java) just as they do today.
FAQs
How do I iterate on my data model?
Here’s a diagram of the end-to-end iteration workflow. The goal is to retain what we can from your schema edits. If you make “additive” changes to your schema, such as adding columns or tables, your existing edits (name mappings and authorization rules) will be maintained.
The Amplify CLI can throw a (non-blocking?) warning on the changes that were “discarded” between iterations. This gives customers a history of the discarded changes within the terminal to easily reference back to if needed. This could look something like:
(Note: This warning is intended to help you catch issues in pre-production environments before going to production. This warning will be “too late” if your app only has one environment (“prod”) as there is already a drift between the database and the GraphQL API.)
Can I have multiple GraphQL schema files for my GraphQL API?
Amplify CLI today already supports multiple GraphQL schema files for one GraphQL API. This launch will be purely additive to that existing feature.
Can I have multiple RDS data sources for my GraphQL API?
Yes, eventually. We will likely phase this in: initially we will allow only one data source while this feature is in its preview period, and then gauge whether multiple data sources are critical for the general availability launch based on customer feedback.
We also would love to learn if you have major use cases to support multiple RDS data sources within one GraphQL schema file.
Which directives are supported?
Our initial goal is to infer as many directives as possible from the data source, including: `@model`, `@primaryKey`, `@index`, and the relationship directives (`@hasOne`, `@hasMany`, `@belongsTo`).

At general availability launch, customers can add in `@auth` and any directives/changes required for remapping.

Do you have a major directive that you’d like to see support for? Let us know below!
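As a sketch, the inference might turn two related tables into something like the following (table names and key columns are assumptions for illustration):

```graphql
# Inferred from a "customers" table and an "orders" table with a
# foreign key "customerID" referencing "customers.id".
type Customer @model {
  id: ID! @primaryKey
  name: String
  orders: [Order] @hasMany(indexName: "byCustomer", fields: ["id"])
}

type Order @model {
  id: ID! @primaryKey
  customerID: ID! @index(name: "byCustomer")
  customer: Customer @belongsTo(fields: ["customerID"])
  total: Float
}
```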
How are we going to provide aggregation queries?
Our current plan is to add `sum`, `max`, `min`, and `average` to the selection set on list operations. To get the latest `signUpDate`, you could do:

We still need to iterate on this area to iron out the exact API definition.
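A sketch of such a query (the `listUsers` operation name and the placement of the aggregate fields are assumptions, since the API definition is not finalized):

```graphql
# Sketch only: "max" as a sibling of the list's items,
# selecting the column to aggregate over.
query LatestSignUp {
  listUsers {
    max {
      signUpDate
    }
  }
}
```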
How are we going to phase the launch of this feature?
We are going to roll this feature out in multiple preview phases. This is our tentative plan; based on your comments, we’ll surely move things around a bit:
Milestone I: Preview phases:

- `@auth` directive support on the model-level
- `@auth` directive support on the field-level

Milestone II: General availability launch

Milestone III: Support for multiple engine types (MySQL and PostgreSQL)
What is NOT in scope of this RFC:
`@model` directive. That’s covered in another RFC that we’re going to publish VERY soon!

Questions for the community