jpmens opened 9 years ago
Several metadata keys (TSIG-ALLOW-AXFR, FORWARD-DNSUPDATE, TSIG-ALLOW-DNSUPDATE, ALLOW-AXFR-FROM, possibly others) are currently allowed to exist multiple times. However, I can't think of a reason for any of them to exist multiple times with the same content, so I don't have an immediate objection to your suggestion.
As for what your example means - most code that expects a single metadata entry picks the first or last one (terms that are meaningless in an SQL context, of course - so it's best to say they pick an arbitrary one).
Your suggestion will not fix the situation where somebody manages to put SOA-EDIT=X, SOA-EDIT=Y in the database. I do not have a better suggestion in any case!
check-zone could possibly check this.
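As a sketch, such a check could be a simple GROUP BY over the metadata table. The snippet below is a hypothetical illustration using SQLite; the table layout is assumed to follow the generic SQL backend's `domainmetadata(id, domain_id, kind, content)` shape, and the list of legitimately multi-valued kinds is taken from the comment above:

```python
import sqlite3

# Hypothetical check-zone style duplicate scan (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE domainmetadata (
    id INTEGER PRIMARY KEY,
    domain_id INT NOT NULL,
    kind VARCHAR(32),
    content TEXT)""")
conn.executemany(
    "INSERT INTO domainmetadata (domain_id, kind, content) VALUES (?, ?, ?)",
    [(1, "SOA-EDIT", "INCEPTION-INCREMENT"),
     (1, "SOA-EDIT", "INCREMENT-WEEKS"),   # conflicting duplicate: which one wins is arbitrary
     (1, "TSIG-ALLOW-AXFR", "key-a")])

# Kinds that may legitimately occur multiple times are excluded;
# single-valued kinds such as SOA-EDIT must not repeat per domain.
dupes = conn.execute("""
    SELECT domain_id, kind, COUNT(*) AS n
    FROM domainmetadata
    WHERE kind NOT IN ('TSIG-ALLOW-AXFR', 'FORWARD-DNSUPDATE',
                       'TSIG-ALLOW-DNSUPDATE', 'ALLOW-AXFR-FROM')
    GROUP BY domain_id, kind
    HAVING COUNT(*) > 1""").fetchall()
print(dupes)  # [(1, 'SOA-EDIT', 2)]
```

The same query would work on any of the SQL backends, since it only uses standard aggregation.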
I've done some research.
SQLite and MySQL both support a REPLACE-like statement. Additionally, MySQL supports "ON DUPLICATE KEY UPDATE".
PostgreSQL 9.5 and later has "upsert" (INSERT ... ON CONFLICT) support.
With Oracle it can be achieved using a MERGE INTO statement.
See https://www.withdata.com/blog/postgresql/replace-update-or-insert-a-row-into-postgresql-table.html
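To make the comparison concrete, here is a minimal SQLite demonstration of the REPLACE behaviour; the roughly equivalent statements for the other engines are noted in comments. The unique index on (domain_id, kind) is an assumption for illustration only, as the real fix might also need to cover content:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE domainmetadata (
    id INTEGER PRIMARY KEY, domain_id INT, kind VARCHAR(32), content TEXT)""")
# REPLACE only helps once there is a uniqueness constraint to conflict with.
conn.execute("CREATE UNIQUE INDEX metadata_uidx ON domainmetadata (domain_id, kind)")

# SQLite/MySQL: REPLACE deletes the conflicting row and inserts the new one.
# MySQL alternative:  INSERT ... ON DUPLICATE KEY UPDATE content = VALUES(content)
# PostgreSQL >= 9.5:  INSERT ... ON CONFLICT (domain_id, kind) DO UPDATE SET content = ...
conn.execute("REPLACE INTO domainmetadata (domain_id, kind, content)"
             " VALUES (1, 'SOA-EDIT', 'X')")
conn.execute("REPLACE INTO domainmetadata (domain_id, kind, content)"
             " VALUES (1, 'SOA-EDIT', 'Y')")

rows = conn.execute("SELECT domain_id, kind, content FROM domainmetadata").fetchall()
print(rows)  # [(1, 'SOA-EDIT', 'Y')] -- the second write won; no duplicate row remains
```

Note that REPLACE is destructive (it deletes and re-inserts, changing the row id), which is why the ON CONFLICT / ON DUPLICATE KEY forms are usually preferred where available.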
With Oracle it can be achieved using a MERGE INTO statement.
We don't ship any Oracle :)
(but we do ship a MSSQL schema+config for godbc)
Funnily enough, the best MSSQL solution also seems to be MERGE-based, supported since SQL Server 2008.
So fixing the insertion queries to deal gracefully with duplicate entries should be fine.
Providing an automated migration SQL file will be impossible for this, though. Is it okay if the CREATE UNIQUE INDEX just fails? Or should we maybe provide some sort of script?
@RobinGeuze wrote:
Providing an automated migration SQL file will be impossible for this, though. Is it okay if the CREATE UNIQUE INDEX just fails? Or should we maybe provide some sort of script?
@zeha wrote, earlier:
check-zone could possibly check this.
Perhaps we can add the check to 4.3.0 and the create-unique-index-that-fails to 4.4.0?
That won't prevent new duplicate entries from being created in 4.3.0 though, unless we add specific code logic for that.
That won't prevent new duplicate entries from being created in 4.3.0 though, unless we add specific code logic for that.
Right, we can either add that logic, or pretend that having check-zone report on it is good enough.
For SQLite, the schema migration could do the dedup. For the other databases, I don't know what is feasible.
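A sketch of what such a dedup migration could look like in SQLite: keep one row of each exact duplicate group, drop the rest, then add the unique index so the problem cannot recur. Table and index names here are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE domainmetadata (
    id INTEGER PRIMARY KEY, domain_id INT, kind VARCHAR(32), content TEXT)""")
conn.executemany(
    "INSERT INTO domainmetadata (domain_id, kind, content) VALUES (?, ?, ?)",
    [(1, "SOA-EDIT", "X"),
     (1, "SOA-EDIT", "X"),   # exact duplicate rows
     (1, "SOA-EDIT", "X"),
     (2, "SOA-EDIT", "Y")])

# Keep the lowest id of each (domain_id, kind, content) group, delete the
# rest, then create the unique index so new duplicates are rejected.
conn.executescript("""
    DELETE FROM domainmetadata
    WHERE id NOT IN (SELECT MIN(id) FROM domainmetadata
                     GROUP BY domain_id, kind, content);
    CREATE UNIQUE INDEX domainmetadata_uidx
        ON domainmetadata (domain_id, kind, content);
""")

remaining = conn.execute("SELECT COUNT(*) FROM domainmetadata").fetchone()[0]
print(remaining)  # 2
```

This only removes rows that are identical in content; conflicting values for a single-valued kind (the SOA-EDIT=X vs SOA-EDIT=Y case above) would still need manual resolution before the index can be created.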
If #8813 makes it to 4.5, we can add constraints in 4.6.
If #8813 makes it to 4.5, we can add constraints in 4.6.
it did, but I'm postponing constraints until after 4.6.
Release 3.4.2
The database schema for MySQL's `domainmetadata` table allows rows with duplicate content, the effects of which are unknown to me. I think the index definition should encompass `content` and be unique.