Hello!

I'm trying to use denoDB to create two tables which both have a primary key column named "id". But when I link and sync the database, I get the following error:
deno run -A example.ts
error: Uncaught (in promise) PostgresError: relation "id" already exists
  return new PostgresError(parseWarning(msg));
         ^
    at parseError (https://deno.land/x/postgres@v0.11.2/connection/warning.ts:48:10)
    at Connection.processError (https://deno.land/x/postgres@v0.11.2/connection/connection.ts:689:19)
    at Connection._simpleQuery (https://deno.land/x/postgres@v0.11.2/connection/connection.ts:591:22)
    at async Connection.query (https://deno.land/x/postgres@v0.11.2/connection/connection.ts:795:16)
    at async PostgresConnector.query (https://deno.land/x/denodb@v1.0.39/lib/connectors/postgres-connector.ts:75:22)
    at async Database.query (https://deno.land/x/denodb@v1.0.39/lib/database.ts:240:21)
    at async Function.createTable (https://deno.land/x/denodb@v1.0.39/lib/model.ts:172:5)
    at async Database.sync (https://deno.land/x/denodb@v1.0.39/lib/database.ts:210:7)
Here's the full example code:
import {
  Database,
  DataTypes,
  Model,
  PostgresConnector,
} from "https://deno.land/x/denodb@v1.0.39/mod.ts";

class User extends Model {
  static table = "users";
  static fields = {
    id: {
      type: DataTypes.INTEGER,
      primaryKey: true,
    },
  };
}

class Todo extends Model {
  static table = "todos";
  static fields = {
    id: {
      type: DataTypes.INTEGER,
      primaryKey: true,
    },
  };
}

const connector = new PostgresConnector({
  database: Deno.env.get("DATABASE_NAME") || "",
  host: Deno.env.get("DATABASE_HOST") || "",
  username: Deno.env.get("DATABASE_USERNAME") || "",
  password: Deno.env.get("DATABASE_PASSWORD") || "",
  port: parseInt(Deno.env.get("DATABASE_PORT") || "5432"),
});

const db = new Database(connector);

db.link([User, Todo]);
db.sync();
I can use psql to look at the users table that was created (the todos table never gets created):
\d users
Table "public.users"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+---------
id | integer | | not null |
Indexes:
"id" PRIMARY KEY, btree (id)
I can see here that the index that enforces the primary key constraint is just called "id", whereas if I had created the table using plain SQL like this:
CREATE TABLE users (id integer PRIMARY KEY);
then the table inspection shows me that the primary key index defaults to users_pkey:
\d users
          Table "public.users"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           | not null |
Indexes:
    "users_pkey" PRIMARY KEY, btree (id)
I can confirm that trying to create two indexes named "id" is the issue by using psql (against a clean database) to specify the primary key constraint name explicitly:
CREATE TABLE users (
  id integer,
  CONSTRAINT id PRIMARY KEY (id)
);
-- CREATE TABLE

CREATE TABLE todos (
  id integer,
  CONSTRAINT id PRIMARY KEY (id)
);
-- ERROR: relation "id" already exists
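Conversely, giving each table its own constraint name (the way Postgres's default <table>_pkey naming does) works fine, which is presumably what the DDL generated by db.sync() would need to do:

DROP TABLE IF EXISTS users, todos;

CREATE TABLE users (
  id integer,
  CONSTRAINT users_pkey PRIMARY KEY (id)
);
-- CREATE TABLE

CREATE TABLE todos (
  id integer,
  CONSTRAINT todos_pkey PRIMARY KEY (id)
);
-- CREATE TABLE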
TLDR: db.sync() seems to be giving primary key constraints the same name as the column they constrain. This makes it impossible (at least in PostgreSQL?) to use db.sync() to create multiple tables with primary key columns with the same name.
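In case it helps anyone else who hits this, one manual workaround I can think of (untested, and it assumes a second db.sync() will skip the users table that already exists rather than fail on it) is to rename the colliding constraint by hand between syncs:

-- Rename the constraint the first sync created on users, so that a later
-- attempt to create a constraint named "id" on todos no longer collides
ALTER TABLE users RENAME CONSTRAINT id TO users_pkey;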
Thanks for the hard work, and for reading my issue! :)