Daursu / laravel-zero-downtime-migration

Zero downtime migrations with Laravel and pt-online-schema-change
MIT License

Possible solution for drop table? #39

Open bbrala opened 8 months ago

bbrala commented 8 months ago

We've been using the following 'script' to drop large databases (or any, really) on our cluster. We are planning to start using your package for our Laravel apps, since it would've saved us some downtime over the last few years ;)

But I noticed you cannot drop tables, so perhaps our procedure could help? It doesn't clean itself up right now, since we only use it for full databases and dropping the database removes the procedures anyway. Also, you'd need a user that can create procedures.

But for what it's worth:

```sql
DELIMITER $$

DROP PROCEDURE IF EXISTS delete_table_incrementally $$
CREATE PROCEDURE delete_table_incrementally(
    IN table_name_str VARCHAR(255),
    INOUT total_rows_deleted INT
)
    MODIFIES SQL DATA
BEGIN
    DECLARE row_count INT DEFAULT 0;

    -- Delete in fixed-size batches so each statement's locks and binlog writes stay short.
    SET @sql = CONCAT('DELETE FROM `', DATABASE(), '`.`', table_name_str, '` LIMIT 1000');
    PREPARE stmt FROM @sql;

    REPEAT
        EXECUTE stmt;
        -- ROW_COUNT() reports how many rows the last DELETE affected.
        SET row_count = ROW_COUNT();
        SET total_rows_deleted = total_rows_deleted + row_count;

        COMMIT;

        -- Brief pause between batches to let other writes through.
        DO SLEEP(0.01);

    UNTIL row_count = 0
    END REPEAT;

    DROP PREPARE stmt;
END $$

DROP PROCEDURE IF EXISTS drop_tables_incrementally $$
CREATE PROCEDURE drop_tables_incrementally(
    INOUT total_rows_deleted INT,
    INOUT total_tables_deleted INT
)
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE table_name_str VARCHAR(255);
    DECLARE table_names CURSOR FOR
        SELECT TABLE_NAME
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_SCHEMA = DATABASE()
          AND TABLE_TYPE = 'BASE TABLE'; -- skip views; DELETE and DROP TABLE need base tables

    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    OPEN table_names;

    table_loop: LOOP
        FETCH table_names INTO table_name_str;
        IF done THEN
            LEAVE table_loop;
        END IF;

        SET @rows_deleted = 0;

        CALL delete_table_incrementally(table_name_str, @rows_deleted);

        SET total_rows_deleted = total_rows_deleted + @rows_deleted;

        SET @sql = CONCAT('DROP TABLE `', DATABASE(), '`.`', table_name_str, '`');

        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DROP PREPARE stmt;

        SET total_tables_deleted = total_tables_deleted + 1;

        -- Progress report for each table as it is dropped.
        SELECT table_name_str AS table_deleted, @rows_deleted AS rows_deleted;
    END LOOP;

    CLOSE table_names;
END $$

DROP PROCEDURE IF EXISTS drop_database_incrementally $$
CREATE PROCEDURE drop_database_incrementally()
BEGIN
    SET @database_deleted = DATABASE();
    SET @total_rows_deleted = 0;
    SET @total_tables_deleted = 0;

    CALL drop_tables_incrementally(@total_rows_deleted, @total_tables_deleted);

    SET @sql = CONCAT('DROP DATABASE `', DATABASE(), '`');

    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DROP PREPARE stmt;

    SELECT
        @database_deleted AS database_deleted,
        @total_tables_deleted AS total_tables_deleted,
        @total_rows_deleted AS total_rows_deleted;
END $$

DELIMITER ;

-- Foreign key checks off for this session: tables are emptied one by one, so
-- child rows may temporarily reference already-emptied parents.
SET foreign_key_checks = 0;
CALL drop_database_incrementally();
```

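Since the issue itself is about dropping a single table, the same procedures could be used per table; a minimal sketch (the table name `my_large_table` is just an example):

```sql
SET foreign_key_checks = 0;
SET @rows_deleted = 0;

-- Empty the table in small batches first...
CALL delete_table_incrementally('my_large_table', @rows_deleted);

-- ...so the DROP itself only has an empty table to remove.
DROP TABLE `my_large_table`;

SELECT @rows_deleted AS rows_deleted;
```
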
Daursu commented 1 month ago

Thanks for sharing this procedure. For drops, the library proxies the call to the original Laravel Schema::drop method. Is there a reason this would not work for your use case?

I would be cautious about running DELETE FROM when replication is enabled, as that will grow the binlog and put pressure on the replicas.
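
For reference, binlog growth during the batched deletes can be watched with standard MySQL statements; a small sketch, assuming MySQL 8.0+ for the expiry variable:

```sql
-- List binary log files and their sizes while the batched deletes run.
SHOW BINARY LOGS;

-- MySQL 8.0+: how long binlog files are kept before automatic purging.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
```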

bbrala commented 1 month ago

It's a Percona XtraDB Cluster, and it locks the whole cluster when running DDL queries. Dropping a large database or table means that lock is held for a long time, stopping all other write queries until it's done.

Using this, you first delete the data in small batches, then drop the tables once they are empty, which is quick.

Percona XtraDB Cluster is not really replication; it effectively applies every write on all nodes. Great for consistency, but it means running as few (or as light) DDL queries as possible.
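
That DDL behaviour is configurable in Galera-based clusters; a minimal way to check which method a node uses:

```sql
-- TOI (Total Order Isolation, the default) runs DDL at the same logical
-- point on every node and blocks other writes until it finishes;
-- RSU (Rolling Schema Upgrade) applies it node by node instead.
SHOW VARIABLES LIKE 'wsrep_OSU_method';
```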