
MySQL SQL #190

Open bingoohuang opened 3 years ago

bingoohuang commented 3 years ago

Just some SQL snippets.

create table t_member (
    id bigint unsigned not null auto_increment primary key comment '会员id',
    name  varchar(20) not null comment '会员姓名',
    sex tinyint(1) unsigned not null default 0 comment '性别: 0:无,1:男,2:女',
    citizen_id char(18) null comment '身份证号码',
    update_time datetime on update current_timestamp not null default current_timestamp comment '更新时间',
    create_time datetime not null default current_timestamp comment '创建时间',
    remark varchar(100) null comment '备注'
) engine = innodb default charset = utf8mb4 comment = '会员基础信息表';

insert into t_member(name, sex, citizen_id, remark) values('唐僧', 1, '110822199010110011', '唐朝高僧');
insert into t_member(name, sex, citizen_id, remark) values('悟空', 1, '110822199010110012', '第一高徒');
insert into t_member(name, sex, citizen_id, remark) values('八戒', 1, '110822199010110022', '第一吃货');
insert into t_member(name, sex, citizen_id, remark) values('沙僧', 1, '110822199010110023', '最小师弟');
insert into t_member(name, sex, citizen_id, remark) values('嫦娥', 2, '110822199010110013', '第一美女');
insert into t_member(name, sex, citizen_id, remark) values('观音', 0, '110110100011000000', '法宝众多');
insert into t_member(name, sex, citizen_id, remark) values('如来', 1, '999999999999999999', '如来佛祖');

update t_member set remark  = '千手观音' where name = '观音';
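A quick check of the ON UPDATE CURRENT_TIMESTAMP behavior (a sketch against the table above): after the UPDATE, update_time has moved forward automatically while create_time is unchanged.

select name, remark, create_time, update_time from t_member where name = '观音';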
bingoohuang commented 3 years ago
DROP TABLE IF EXISTS `emp`;
CREATE TABLE `emp`  (
  `id` int(11) NOT NULL COMMENT '员工编号',
  `name` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '员工姓名',
  `dept_id` int(11) NULL DEFAULT NULL COMMENT '部门编号',
  `leader` int(11) NULL DEFAULT NULL COMMENT '直属领导id',
  `is_enable` int(11) NULL DEFAULT NULL COMMENT '是否在职 1在职 0离职',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB default charset = utf8mb4;

INSERT INTO `emp` VALUES (1, '张三丰', 1, 0, 1);
INSERT INTO `emp` VALUES (2, '张无忌', 1, 1, 1);
INSERT INTO `emp` VALUES (3, '小龙女', 1, 1, 1);
INSERT INTO `emp` VALUES (4, '小白菜', 1, 3, 1);
INSERT INTO `emp` VALUES (5, '韦小宝', 2, 0, 1);
INSERT INTO `emp` VALUES (6, '令狐冲', 2, 0, 1);
INSERT INTO `emp` VALUES (7, '东方不败', 0, 8, 1);
INSERT INTO `emp` VALUES (8, '任我行', 3, 0, 1);
INSERT INTO `emp` VALUES (9, '李寻欢', 0, 8, 1);

DROP TABLE IF EXISTS `dept`;
CREATE TABLE `dept`  (
  `id` int(11) NOT NULL COMMENT '部门id',
  `name` varchar(255)  COMMENT '部门名称',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB default charset = utf8mb4;

INSERT INTO `dept` VALUES (1, '销售部');
INSERT INTO `dept` VALUES (2, '信息技术部');
INSERT INTO `dept` VALUES (3, '财务部');
INSERT INTO `dept` VALUES (4, '有关部门');

-- Questions
-- 1. Query the info of employees surnamed Zhang together with their department info.
-- 2. How many employees report to 张三丰?
-- 3. Find all intern employees (interns have no department info).
-- 4. For each department, count its employees and list the department name and all employee names.

-- Answers
-- 1. Query the info of employees surnamed Zhang together with their department info.
select * from emp e left join dept d on e.dept_id = d.id where e.`name` like '张%';

-- 2. How many employees report to 张三丰?
select e1.name, count(1) as subordinate_count from emp e1 inner join emp e2 on e1.id = e2.leader where e1.`name` = '张三丰';

-- 3. Find all intern employees (interns have no department info).
select * from emp e where e.dept_id not in (select id from dept);
select * from emp e left join dept d on e.dept_id = d.id where d.name is null;

-- 4. For each department, count its employees and list the department name and all employee names.
select d.id, d.name, count(e.id), group_concat(e.name) from emp e right join dept d on e.dept_id = d.id group by d.id, d.name;
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for course
-- ----------------------------
DROP TABLE IF EXISTS `course`;
CREATE TABLE `course`  (
  `Cno` int(11) NOT NULL COMMENT '课程编号',
  `Cname` varchar(255) NULL DEFAULT NULL COMMENT '课程名称',
  `Tno` int(11) NULL DEFAULT NULL COMMENT '老师编号',
  PRIMARY KEY (`Cno`) USING BTREE
) ENGINE = InnoDB default charset = utf8mb4;

-- ----------------------------
-- Records of course
-- ----------------------------
INSERT INTO `course` VALUES (1, '数学', 1);
INSERT INTO `course` VALUES (2, '语文', 2);
INSERT INTO `course` VALUES (3, '英文', 1);

-- ----------------------------
-- Table structure for sc
-- ----------------------------
DROP TABLE IF EXISTS `sc`;
CREATE TABLE `sc`  (
  `Sno` int(11) NOT NULL COMMENT '学号',
  `Cno` int(11) NOT NULL COMMENT '课程编号',
  `score` int(255) NULL DEFAULT NULL COMMENT '分数',
  PRIMARY KEY (`Sno`, `Cno`) USING BTREE
) ENGINE = InnoDB default charset = utf8mb4;

-- ----------------------------
-- Records of sc
-- ----------------------------
INSERT INTO `sc` VALUES (1, 1, 99);
INSERT INTO `sc` VALUES (1, 2, 80);
INSERT INTO `sc` VALUES (1, 3, 50);
INSERT INTO `sc` VALUES (2, 1, 70);
INSERT INTO `sc` VALUES (2, 2, 90);
INSERT INTO `sc` VALUES (3, 1, 90);
INSERT INTO `sc` VALUES (4, 1, 60);
INSERT INTO `sc` VALUES (4, 2, 50);
INSERT INTO `sc` VALUES (4, 3, 40);

-- ----------------------------
-- Table structure for student
-- ----------------------------
DROP TABLE IF EXISTS `student`;
CREATE TABLE `student`  (
  `Sno` int(255) NOT NULL COMMENT '学号',
  `Sname` varchar(255) NULL DEFAULT NULL COMMENT '姓名',
  `Sage` int(255) NULL DEFAULT NULL COMMENT '年龄',
  `Ssex` varchar(255) NULL DEFAULT NULL COMMENT '性别',
  PRIMARY KEY (`Sno`) USING BTREE
) ENGINE = InnoDB default charset = utf8mb4;

-- ----------------------------
-- Records of student
-- ----------------------------
INSERT INTO `student` VALUES (1, '张三丰', 108, '男');
INSERT INTO `student` VALUES (2, '李小龙', 20, '男');
INSERT INTO `student` VALUES (3, '小龙女', 17, '女');
INSERT INTO `student` VALUES (4, '白发魔女', 18, '女');
INSERT INTO `student` VALUES (5, '韦小宝', 19, '男');

-- ----------------------------
-- Table structure for teacher
-- ----------------------------
DROP TABLE IF EXISTS `teacher`;
CREATE TABLE `teacher`  (
  `Tno` int(11) NOT NULL COMMENT '老师编号',
  `Tname` varchar(255) NULL DEFAULT NULL COMMENT '老师名称',
  PRIMARY KEY (`Tno`)
) ENGINE = InnoDB default charset = utf8mb4;

-- ----------------------------
-- Records of teacher
-- ----------------------------
INSERT INTO `teacher` VALUES (1, '无崖子');
INSERT INTO `teacher` VALUES (2, '孤独求败');
INSERT INTO `teacher` VALUES (3, '洪七公');

SET FOREIGN_KEY_CHECKS = 1;

-- Questions
-- 1. Find students younger than 18.
-- 2. List the courses taught by 无崖子.
-- 3. Find students who are not enrolled in any course.
-- 4. For each course taught by 无崖子, count the enrolled students (aggregation + grouping).
-- 5. Find 张三丰's 数学 (math) score.
-- 6. Find the highest 语文 (Chinese) score.
-- 7. Find students who did not take the 语文 exam.
-- 8. Find the average score of each of the three courses.
-- 9. Find students enrolled in courses taught by 孤独求败.
-- 10. Find students not enrolled in any course taught by 孤独求败.

-- Answers
-- #1 Find students younger than 18
select * from student where Sage < 18;

-- #2 List the courses taught by 无崖子
select * from teacher t join course c on c.Tno = t.Tno where t.Tname = '无崖子';

-- #3 Find students who are not enrolled in any course
select * from student s left join sc on s.Sno = sc.Sno where sc.Sno is null;

-- #4 For each course taught by 无崖子, count the enrolled students
select t.Tname, c.Cname, count(1) as student_count from teacher t join course c on t.Tno = c.Tno join sc on sc.Cno = c.Cno where t.Tname = '无崖子' group by t.Tname, c.Cname;

-- #5 Find 张三丰's 数学 score
select s.Sname, c.Cname, sc.score from student s join sc on s.Sno = sc.Sno join course c on c.Cno = sc.Cno where s.Sname = '张三丰' and c.Cname = '数学';

-- #6 Find the highest 语文 score
select s.Sname, sc.score from sc join course c on sc.Cno = c.Cno join student s on s.Sno = sc.Sno where c.Cname = '语文' order by sc.score desc limit 1;

-- #7 Find students who did not take the 语文 exam
select * from student s where s.Sno not in (select sc.Sno from sc join course c on c.Cno = sc.Cno where c.Cname = '语文');
select * from course c join sc on c.Cno = sc.Cno and c.Cname = '语文' right join student s on s.Sno = sc.Sno where sc.Sno is null;

-- #8 Find the average score of each course
select c.Cname, avg(sc.score) from course c join sc on c.Cno = sc.Cno group by c.Cno, c.Cname;

-- #9 Find students enrolled in courses taught by 孤独求败
select * from student s join sc on sc.Sno = s.Sno join course c on c.Cno = sc.Cno join teacher t on t.Tno = c.Tno where t.Tname = '孤独求败';
select * from teacher t join course c on c.Tno = t.Tno and t.Tname = '孤独求败' join sc on c.Cno = sc.Cno right join student s on s.Sno = sc.Sno where sc.Sno is not null;

-- #10 Find students not enrolled in any course taught by 孤独求败
select * from teacher t join course c on c.Tno = t.Tno and t.Tname = '孤独求败' join sc on c.Cno = sc.Cno right join student s on s.Sno = sc.Sno where sc.Sno is null;
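An alternative for #10 that avoids the right-join trick and reads more directly (a sketch against the same tables):

select * from student s
where s.Sno not in (select sc.Sno
                    from sc
                    join course c on c.Cno = sc.Cno
                    join teacher t on t.Tno = c.Tno
                    where t.Tname = '孤独求败');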
SET FOREIGN_KEY_CHECKS = 0;
DROP TABLE IF EXISTS dept_; -- department table
DROP TABLE IF EXISTS emp_;  -- employee table
SET FOREIGN_KEY_CHECKS = 1;

SELECT  @@FOREIGN_KEY_CHECKS;

CREATE TABLE dept_ ( 
   DEPTNO INT PRIMARY KEY, -- department number
   DNAME VARCHAR ( 14 ), -- department name
   LOC VARCHAR ( 13 ) -- department location
)ENGINE = InnoDB default charset = utf8mb4;

INSERT INTO dept_ VALUES( 10, 'ACCOUNTING', 'NEW YORK' );
INSERT INTO dept_ VALUES( 20, 'RESEARCH', 'DALLAS' );
INSERT INTO dept_ VALUES( 30, 'SALES', 'CHICAGO' );
INSERT INTO dept_ VALUES( 40, 'OPERATIONS', 'BOSTON' );

CREATE TABLE emp_ (
  emp_NO INT PRIMARY KEY,-- employee number
  ENAME VARCHAR ( 10 ),-- employee name
  JOB VARCHAR ( 9 ),-- job title
  MGR DOUBLE,-- direct manager's employee number
  HIREDATE DATE,-- hire date
  SAL DOUBLE,-- salary
  COMM DOUBLE,-- bonus
  DEPTNO INT,-- department number
  FOREIGN KEY ( DEPTNO ) REFERENCES dept_ ( DEPTNO ) 
)ENGINE = InnoDB default charset = utf8mb4;

INSERT INTO emp_ VALUES ( 7369, 'SMITH', 'CLERK', 7902, '1980-12-17', 800, NULL, 20 );
INSERT INTO emp_ VALUES ( 7499, 'ALLEN', 'SALESMAN', 7698, '1981-02-20', 1600, 300, 30 );
INSERT INTO emp_ VALUES ( 7521, 'WARD', 'SALESMAN', 7698, '1981-02-22', 1250, 500, 30 );
INSERT INTO emp_ VALUES ( 7566, 'JONES', 'MANAGER', 7839, '1981-04-02', 2975, NULL, 20 );
INSERT INTO emp_ VALUES ( 7654, 'MARTIN', 'SALESMAN', 7698, '1981-09-28', 1250, 1400, 30 );
INSERT INTO emp_ VALUES ( 7698, 'BLAKE', 'MANAGER', 7839, '1981-05-01', 2850, NULL, 30 );
INSERT INTO emp_ VALUES ( 7782, 'CLARK', 'MANAGER', 7839, '1981-06-09', 2450, NULL, 10 );
INSERT INTO emp_ VALUES ( 7788, 'SCOTT', 'ANALYST', 7566, '1987-07-13', 3000, NULL, 20 );
INSERT INTO emp_ VALUES ( 7839, 'KING', 'PRESIDENT', NULL, '1981-11-17', 5000, NULL, 10 );
INSERT INTO emp_ VALUES ( 7844, 'TURNER', 'SALESMAN', 7698, '1981-09-08', 1500, 0, 30 );
INSERT INTO emp_ VALUES ( 7876, 'ADAMS', 'CLERK', 7788, '1987-07-13', 1100, NULL, 20 );
INSERT INTO emp_ VALUES ( 7900, 'JAMES', 'CLERK', 7698, '1981-12-03', 950, NULL, 30 );
INSERT INTO emp_ VALUES ( 7902, 'FORD', 'ANALYST', 7566, '1981-12-03', 3000, NULL, 20 );
INSERT INTO emp_ VALUES ( 7934, 'MILLER', 'CLERK', 7782, '1982-01-23', 1300, NULL, 10 );

-- Questions
-- 1. List all departments (and their info) that have at least three employees.
-- 2. List the employee number, name, and department name of every employee hired earlier than their direct manager.
-- 3. List the name and department name of every employee whose job is "CLERK", together with the department headcount.

-- Answers
-- 1. List all departments (and their info) that have at least three employees.
select * from dept_ where DEPTNO in (select DEPTNO from emp_ group by DEPTNO having count(1) >= 3);

-- 2. List the employee number, name, and department name of every employee hired earlier than their direct manager.
select a.emp_NO, a.ENAME, (select DNAME from dept_ c where a.DEPTNO = c.DEPTNO) as dept_name from emp_ a join emp_ b on a.MGR = b.emp_NO where a.HIREDATE < b.HIREDATE;

-- 3. List the name and department name of every "CLERK", together with the department headcount.
select a.ENAME, b.DNAME, (select count(1) from emp_ x where x.DEPTNO = b.DEPTNO) as dept_headcount from emp_ a join dept_ b on a.DEPTNO = b.DEPTNO where a.JOB = 'CLERK';
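An equivalent form of #3 that computes each department's headcount once in a derived table instead of a correlated subquery (a sketch against the same tables):

select e.ENAME, d.DNAME, dc.headcount
from emp_ e
join dept_ d on e.DEPTNO = d.DEPTNO
join (select DEPTNO, count(*) as headcount from emp_ group by DEPTNO) dc on dc.DEPTNO = e.DEPTNO
where e.JOB = 'CLERK';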
bingoohuang commented 3 years ago

Writing a BLOB column

CREATE TABLE products(id BIGINT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(50), price DOUBLE, image BLOB) ENGINE = InnoDB default charset = utf8mb4;
MYSQL_PWD=root mysql -h 127.0.0.1 -u root -vvv -e "INSERT INTO products(name, price, image) VALUES('some', 12345, FROM_BASE64('`base64 -i stock-photo-1028699654.jpg`'))" bee
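Reading the image back out works the same way in reverse; a sketch assuming the same bee database and products table as above (the -N and -r/--raw flags keep the base64 text unmangled; on macOS the decode flag is -D instead of -d):

MYSQL_PWD=root mysql -h 127.0.0.1 -u root -N -r -e "SELECT TO_BASE64(image) FROM products WHERE name = 'some'" bee | base64 -d > restored.jpg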
bingoohuang commented 3 years ago

MySQL protocol: CRUD

https://www.programmersought.com/article/51583745379/

A ResultSet consists of many rows; each Row represents one record (the packet layouts are illustrated in the linked article).

bingoohuang commented 3 years ago

mysql binlog

https://www.cnblogs.com/martinzhang/p/3454358.html

binlog basics

MySQL's binary log is arguably its most important log. It records all DDL and DML statements (but not data query statements) as events, including how long each statement took to execute, and it is transaction-safe.

Generally, enabling the binary log costs roughly 1% in performance (see the MySQL reference manual, 5.1.24). The binary log has two major use cases:

  1. MySQL Replication: the master writes its binlog and ships it to the slaves so that master and slaves stay consistent.
  2. Data recovery, by replaying events with the mysqlbinlog tool.

The binary log consists of two kinds of files: the index file (suffix .index), which lists all binary log files, and the log files themselves (suffix .00000*), which record all of the database's DDL and DML statement events (excluding data queries).

Enable the binlog:

Open the MySQL config file with vi:

vi /usr/local/mysql/etc/my.cnf

In the [mysqld] section, set/add log-bin=mysql-bin and make sure it is enabled (the value mysql-bin is the base name / prefix of the binlog files);

Restart mysqld so the setting takes effect:

pkill mysqld

/usr/local/mysql/bin/mysqld_safe --user=mysql &

You can also log in to the MySQL server and check the server variables to confirm that the binary log is enabled:

# MYSQL_PWD=root mysql -h 127.0.0.1 -u root
mysql> show variables like '%bin%';
+--------------------------------------------+--------------------------------+
| Variable_name                              | Value                          |
+--------------------------------------------+--------------------------------+
| bind_address                               | *                              |
| binlog_cache_size                          | 32768                          |
| binlog_checksum                            | CRC32                          |
| binlog_direct_non_transactional_updates    | OFF                            |
| binlog_error_action                        | ABORT_SERVER                   |
| binlog_format                              | ROW                            |
| binlog_group_commit_sync_delay             | 0                              |
| binlog_group_commit_sync_no_delay_count    | 0                              |
| binlog_gtid_simple_recovery                | ON                             |
| binlog_max_flush_queue_time                | 0                              |
| binlog_order_commits                       | ON                             |
| binlog_row_image                           | FULL                           |
| binlog_rows_query_log_events               | OFF                            |
| binlog_stmt_cache_size                     | 32768                          |
| binlog_transaction_dependency_history_size | 25000                          |
| binlog_transaction_dependency_tracking     | COMMIT_ORDER                   |
| innodb_api_enable_binlog                   | OFF                            |
| innodb_locks_unsafe_for_binlog             | OFF                            |
| log_bin                                    | ON                             |
| log_bin_basename                           | /var/lib/mysql/mysql-bin       |
| log_bin_index                              | /var/lib/mysql/mysql-bin.index |
| log_bin_trust_function_creators            | OFF                            |
| log_bin_use_v1_row_events                  | OFF                            |
| log_statements_unsafe_for_binlog           | ON                             |
| max_binlog_cache_size                      | 18446744073709547520           |
| max_binlog_size                            | 1073741824                     |
| max_binlog_stmt_cache_size                 | 18446744073709547520           |
| sql_log_bin                                | ON                             |
| sync_binlog                                | 1                              |
+--------------------------------------------+--------------------------------+
29 rows in set (0.03 sec)

Common binlog commands

  1. List all binlog files: show master logs;
  2. Show master status: show master status; (the name of the latest binlog file and the end position, pos, of its last event)
  3. Rotate the logs: flush logs; (from this point a new, sequentially numbered binlog file is started)
    • This also happens automatically whenever mysqld restarts;
    • mysqldump with the -F option also rotates the binlog.
  4. Reset (purge) all binlog files: reset master;

Inspecting the contents of a binlog file

Method 1: the bundled mysqlbinlog command:

Note: binlogs are binary files; ordinary viewers such as cat, more or vi cannot read them, so use the bundled mysqlbinlog command. The binlogs sit in the same directory as the database files (in my installation that is /usr/local/mysql/data).

On MySQL versions below 5.5, if mysqlbinlog reports an error, add the "--no-defaults" option.

# mysqlbinlog /usr/local/mysql/data/mysql-bin.000013
An annotated excerpt:

...............................................................................
# at 552
#131128 17:50:46 server id 1  end_log_pos 665   Query   thread_id=11    exec_time=0     error_code=0 ----> executed at 17:50:46; pos: 665
SET TIMESTAMP=1385632246/*!*/;
update zyyshop.stu set name='李四' where id=4              ----> the SQL that was executed
/*!*/;
# at 665
#131128 17:50:46 server id 1  end_log_pos 692   Xid = 1454 ----> executed at 17:50:46; pos: 692
...............................................................................

Note: server id 1     the server id of the database host;
      end_log_pos 665 the pos point
      thread_id=11    the thread id

Reading the whole binlog this way produces a lot of output and makes it hard to pick out pos information. Here is a more convenient way to query it:

mysql> show binlog events [IN 'log_name'] [FROM pos] [LIMIT [offset,] row_count];

     Options:
       IN 'log_name'   the binlog file to query (defaults to the first binlog file)
       FROM pos        the pos point to start from (defaults to the first pos of the file)
       LIMIT [offset,] offset into the results (defaults to 0)
       row_count       number of rows to return (defaults to all rows)

             A snippet of the result:
             *************************** 20. row ***************************
                Log_name: mysql-bin.000021  ----------------------------------------------> the binlog file being queried
                     Pos: 11197 ----------------------------------------------------------> start pos
              Event_type: Query ----------------------------------------------------------> event type: Query
               Server_id: 1 --------------------------------------------------------------> id of the server that executed it
             End_log_pos: 11308 ----------------------------------------------------------> end pos: 11308 (i.e. the next row's start pos)
                    Info: use `zyyshop`; INSERT INTO `team2` VALUES (0,345,'asdf8er5') ---> the SQL statement executed
             *************************** 21. row ***************************
                Log_name: mysql-bin.000021
                     Pos: 11308 ----------------------------------------------------------> start pos: 11308 (i.e. the previous row's end pos)
              Event_type: Query
               Server_id: 1
             End_log_pos: 11417
                    Info: use `zyyshop`; /*!40000 ALTER TABLE `team2` ENABLE KEYS */
             *************************** 22. row ***************************
                Log_name: mysql-bin.000021
                     Pos: 11417
              Event_type: Query
               Server_id: 1
             End_log_pos: 11510
                    Info: use `zyyshop`; DROP TABLE IF EXISTS `type`

      This returns the specified binlog file broken down into event rows, and LIMIT can be used to specify an offset from a pos point and the number of rows to return;

Binlog recovery walkthrough (zyyshop is the database)

1. Suppose it is 4:00 AM and my scheduled job starts a full database backup:

  Back up the zyyshop database into /root/BAK.zyyshop.sql:
  # /usr/local/mysql/bin/mysqldump -uroot -p123456 -lF --log-error=/root/myDump.err -B zyyshop > /root/BAK.zyyshop.sql
    ......

    A few minutes later the backup is done; no need to worry about data loss any more, because now there is a backup.

  Because of the -F option, the logs are flushed right when the backup starts, so a fresh binlog records every insert/update/delete made after the backup. Check it:
  mysql> show master status;
  +------------------+----------+--------------+------------------+
  | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
  +------------------+----------+--------------+------------------+
  | mysql-bin.000023 |      120 |              |                  |
  +------------------+----------+--------------+------------------+
  In other words, mysql-bin.000023 records every insert/update/delete made to the database after 4:00.

2. At 9:00 the workday starts and the application runs all kinds of inserts/updates/deletes.
  @ For example: create a student table, then insert and modify some data:
    CREATE TABLE IF NOT EXISTS `tt` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `name` varchar(16) NOT NULL,
      `sex` enum('m','w') NOT NULL DEFAULT 'm',
      `age` tinyint(3) unsigned NOT NULL,
      `classid` char(6) DEFAULT NULL,
      PRIMARY KEY (`id`)
     ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

  Load some test data
  mysql> insert into zyyshop.tt(`name`,`sex`,`age`,`classid`) values('yiyi','w',20,'cls1'),('xiaoer','m',22,'cls3'),('zhangsan','w',21,'cls5'),('lisi','m',20,'cls4'),('wangwu','w',26,'cls6');

  Check the data
  mysql> select * from zyyshop.tt;
  +----+----------+-----+-----+---------+
  | id | name     | sex | age | classid |
  +----+----------+-----+-----+---------+
  |  1 | yiyi     | w   |  20 | cls1    |
  |  2 | xiaoer   | m   |  22 | cls3    |
  |  3 | zhangsan | w   |  21 | cls5    |
  |  4 | lisi     | m   |  20 | cls4    |
  |  5 | wangwu   | w   |  26 | cls6    |
  +----+----------+-----+-----+---------+

  Around noon some rows are updated
  mysql> update zyyshop.tt set name='李四' where id=4;
  mysql> update zyyshop.tt set name='小二' where id=2;

  The result after the updates:
  mysql> select * from zyyshop.tt;
  +----+----------+-----+-----+---------+
  | id | name     | sex | age | classid |
  +----+----------+-----+-----+---------+
  |  1 | yiyi     | w   |  20 | cls1    |
  |  2 | 小二     | m   |  22 | cls3    |
  |  3 | zhangsan | w   |  21 | cls5    |
  |  4 | 李四     | m   |  20 | cls4    |
  |  5 | wangwu   | w   |  26 | cls6    |
  +----+----------+-----+-----+---------+

  Suppose it is now 18:00 and, out of nowhere, a disastrous statement wipes out the whole database:
  mysql> drop database zyyshop;

3. Disaster. Don't panic: first examine the last binlog carefully and note the key pos points, to find which operation destroyed the database (usually among the last few events);

  Make a copy of the last binlog file:
  # ll /usr/local/mysql/data | grep mysql-bin
  # cp -v /usr/local/mysql/data/mysql-bin.000023 /root/

  Then flush the logs once so a new binlog file is started; in theory nothing more will be written to mysql-bin.000023 (which makes it easier to analyze and locate pos points), and all further operations go into the next log file;
  mysql> flush logs;
  mysql> show master status;

4. Read the binlog and analyze the problem
  Option 1: read it with mysqlbinlog:
    # /usr/local/mysql/bin/mysqlbinlog  /usr/local/mysql/data/mysql-bin.000023

  Option 2: log in to the server and inspect it there (recommended):
    mysql> show binlog events in 'mysql-bin.000023';

    The tail end of the output:
    +------------------+------+------------+-----------+-------------+------------------------------------------------------------+
    | Log_name         | Pos  | Event_type | Server_id | End_log_pos | Info                                                       |
    +------------------+------+------------+-----------+-------------+------------------------------------------------------------+
    | mysql-bin.000023 |  922 | Xid        |         1 |         953 | COMMIT /* xid=3820 */                                      |
    | mysql-bin.000023 |  953 | Query      |         1 |        1038 | BEGIN                                                      |
    | mysql-bin.000023 | 1038 | Query      |         1 |        1164 | use `zyyshop`; update zyyshop.tt set name='李四' where id=4|
    | mysql-bin.000023 | 1164 | Xid        |         1 |        1195 | COMMIT /* xid=3822 */                                      |
    | mysql-bin.000023 | 1195 | Query      |         1 |        1280 | BEGIN                                                      |
    | mysql-bin.000023 | 1280 | Query      |         1 |        1406 | use `zyyshop`; update zyyshop.tt set name='小二' where id=2|
    | mysql-bin.000023 | 1406 | Xid        |         1 |        1437 | COMMIT /* xid=3823 */                                      |
    | mysql-bin.000023 | 1437 | Query      |         1 |        1538 | drop database zyyshop                                      |
    +------------------+------+------------+-----------+-------------+------------------------------------------------------------+

    The analysis shows the destructive statement sits in the pos range 1437--1538, so we only need to recover up to pos 1437.

5. Now restore the early-morning backup:

  # /usr/local/mysql/bin/mysql -uroot -p123456 -v < /root/BAK.zyyshop.sql;

  Note: this restores everything up to this morning's backup (4:00).
      But what about the whole day's data (4:00--18:00)? That is where the mysql-bin.000023 log mentioned above comes in......

6. Recover data from the binlog

  Recovery syntax:
  # mysqlbinlog mysql-bin.0000xx | mysql -u<user> -p<password> <database>

    Common options:
      --start-position=953                   start pos
      --stop-position=1437                   stop pos
      --start-datetime="2013-11-29 13:18:54" start time
      --stop-datetime="2013-11-29 13:21:53"  stop time
      --database=zyyshop                     only recover the zyyshop database (a host often has several databases; applies to local logs only)

    Less common options:
      -u --user=name              Connect to the remote server as this user.
      -p --password[=name]        Password for connecting to the remote server.
      -h --host=name              Get the binlog from this server.
      --read-from-remote-server   Read binary logs from a MySQL server.

  In short: the binlog contents are read out and piped into the mysql command. Prefer absolute paths for these commands and files;

  A. Full replay (not viable in this example, because the final drop database zyyshop is also in the log; that destructive statement has to be excluded, i.e. a partial recovery is needed)
    # /usr/local/mysql/bin/mysqlbinlog  /usr/local/mysql/data/mysql-bin.000021 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop 

  B. Recover up to a stop pos (partial recovery):
    @ --stop-position=953 (stop pos)
    Note: this stop pos lies between "load some test data" and the name='李四' update, so it restores the test data as loaded, before that update was applied
    # /usr/local/mysql/bin/mysqlbinlog --stop-position=953 --database=zyyshop /usr/local/mysql/data/mysql-bin.000023 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop

    Check the result from another terminal (restored successfully):
    mysql> select * from zyyshop.tt;
    +----+----------+-----+-----+---------+
    | id | name     | sex | age | classid |
    +----+----------+-----+-----+---------+
    |  1 | yiyi     | w   |  20 | cls1    |
    |  2 | xiaoer   | m   |  22 | cls3    |
    |  3 | zhangsan | w   |  21 | cls5    |
    |  4 | lisi     | m   |  20 | cls4    |
    |  5 | wangwu   | w   |  26 | cls6    |
    +----+----------+-----+-----+---------+

  C. Recover a pos range (partial recovery):
    The name='李四' update spans Pos[1038] --> End_log_pos[1164]; as a whole transaction it spans Pos[953] --> End_log_pos[1195];

    The name='小二' update spans Pos[1280] --> End_log_pos[1406]; as a whole transaction it spans Pos[1195] --> End_log_pos[1437];

    c1. To replay only the name='李四' update:
       # /usr/local/mysql/bin/mysqlbinlog --start-position=1038 --stop-position=1164 --database=zyyshop  /usr/local/mysql/data/mysql-bin.000023 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop

       It can also be replayed by its transaction range, like this:
       # /usr/local/mysql/bin/mysqlbinlog --start-position=953 --stop-position=1195 --database=zyyshop  /usr/local/mysql/data/mysql-bin.000023 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop

    c2. To replay only the name='小二' update:
       # /usr/local/mysql/bin/mysqlbinlog --start-position=1280 --stop-position=1406 --database=zyyshop  /usr/local/mysql/data/mysql-bin.000023 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop

       It can also be replayed by its transaction range, like this:
       # /usr/local/mysql/bin/mysqlbinlog --start-position=1195 --stop-position=1437 --database=zyyshop  /usr/local/mysql/data/mysql-bin.000023 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop

    c3. To replay the name='李四' and name='小二' updates together, use the enclosing transaction range:
       # /usr/local/mysql/bin/mysqlbinlog --start-position=953 --stop-position=1437 --database=zyyshop  /usr/local/mysql/data/mysql-bin.000023 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop

  D. Check the current result from another terminal (both names are restored as well):
    mysql> select * from zyyshop.tt;
    +----+----------+-----+-----+---------+
    | id | name     | sex | age | classid |
    +----+----------+-----+-----+---------+
    |  1 | yiyi     | w   |  20 | cls1    |
    |  2 | 小二     | m   |  22 | cls3    |
    |  3 | zhangsan | w   |  21 | cls5    |
    |  4 | 李四     | m   |  20 | cls4    |
    |  5 | wangwu   | w   |  26 | cls6    |
    +----+----------+-----+-----+---------+

  E. Time-range recovery (partial recovery): besides pos points, you can also recover by a time range; for that, read the binlog with mysqlbinlog to find the relevant timestamps.
    For example, drop the just-restored tt table and recover it again using a time range:
    mysql> drop table tt;

    @ --start-datetime="2013-11-29 13:18:54"  start time
    @ --stop-datetime="2013-11-29 13:21:53"   stop time

    # /usr/local/mysql/bin/mysqlbinlog --start-datetime="2013-11-29 13:18:54" --stop-datetime="2013-11-29 13:21:53" --database=zyyshop /usr/local/mysql/data/mysql-bin.000021 | /usr/local/mysql/bin/mysql -uroot -p123456 -v zyyshop

  Summary: recovery simply means having MySQL re-execute, one by one, the SQL statements stored in the specified range of the binlog.
bingoohuang commented 3 years ago

Streaming MySQL tables in real-time to Kafka

https://engineeringblog.yelp.com/2016/08/streaming-mysql-tables-in-real-time-to-kafka.html

bingoohuang commented 3 years ago

MySQL BLOB size

Type of BLOB   Maximum data that can be stored   Overhead
TINYBLOB       up to 255 bytes                   1 byte
BLOB           up to 64 KB                       2 bytes
MEDIUMBLOB     up to 16 MB                       3 bytes
LONGBLOB       up to 4 GB                        4 bytes
bingoohuang commented 3 years ago

partition table

CREATE TABLE t1 (
    id INT,
    year_col INT
)
    PARTITION BY RANGE (year_col) (
    PARTITION p0 VALUES LESS THAN (1991),
    PARTITION p1 VALUES LESS THAN (1995),
    PARTITION p2 VALUES LESS THAN (1999)
);
ALTER TABLE t1 ADD PARTITION (PARTITION p3 VALUES LESS THAN (2002));
ALTER TABLE t1 DROP PARTITION p0, p1;
insert into t1(id, year_col) values(1,1990),(2,1991),(3,1994),(4,1995),(5,1998),(6,1999),(7,2001);
> SELECT  PARTITION_NAME, TABLE_ROWS, AVG_ROW_LENGTH, DATA_LENGTH FROM INFORMATION_SCHEMA.PARTITIONS WHERE  TABLE_NAME = 't1';
+----------------+------------+----------------+-------------+
| PARTITION_NAME | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH |
+----------------+------------+----------------+-------------+
| p0             |          4 |           4096 |       16384 |
| p1             |          1 |          16384 |       16384 |
| p2             |          2 |           8192 |       16384 |
| p3             |          2 |           8192 |       16384 |
+----------------+------------+----------------+-------------+

root@3aaaf11ee798:/var/lib/mysql/mydb# ls -lh t1*
-rw-r----- 1 mysql mysql  96K Feb  4 00:58 t1#P#p0.ibd
-rw-r----- 1 mysql mysql  96K Feb  4 00:57 t1#P#p1.ibd
-rw-r----- 1 mysql mysql  96K Feb  4 00:57 t1#P#p2.ibd
-rw-r----- 1 mysql mysql  96K Feb  4 00:57 t1#P#p3.ibd
-rw-r----- 1 mysql mysql 8.4K Feb  4 00:55 t1.frm
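A quick way to confirm that queries really prune partitions (a sketch; the table and partition names follow the example above):

-- read a single partition explicitly
SELECT * FROM t1 PARTITION (p2);

-- the "partitions" column of EXPLAIN shows which partitions a range query touches
EXPLAIN SELECT * FROM t1 WHERE year_col >= 1995 AND year_col < 1999;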
bingoohuang commented 3 years ago

find tables without primary keys

SELECT t.table_schema, t.table_name
FROM INFORMATION_SCHEMA.TABLES t
         LEFT JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS c
                   ON (
                           t.TABLE_NAME = c.TABLE_NAME
                           AND c.CONSTRAINT_SCHEMA = t.TABLE_SCHEMA
                           AND c.constraint_name = 'PRIMARY'
                       )
WHERE t.table_schema not in ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;
bingoohuang commented 3 years ago

Database dump script

export db1="t1 t2"
export db2="t3 t4"

echo $db1
echo $db2

# Adjust the database host (after -h), port (after -P), and password (after -p) below
# Verify connectivity first with `mysql -h127.0.0.1 -P3306 -uroot -proot`; sometimes root can only connect from 127.0.0.1 (localhost) and not via the external IP
export dump_cmd="mysqldump --force --single-transaction --set-gtid-purged=OFF --hex-blob -h127.0.0.1 -P3306 -uroot -proot"

$dump_cmd db1 $db1 | gzip -c > db.db1.$(date +%F.%H%M%S).sql.gz
$dump_cmd db2 $db2 | gzip -c > db.db2.$(date +%F.%H%M%S).sql.gz
  1. --single-transaction: avoids locking tables and guarantees consistent InnoDB data. It sets the isolation level to REPEATABLE READ and then issues a START TRANSACTION, so InnoDB data stays consistent for the whole dump without table locks. It does not guarantee consistency for MyISAM or MEMORY tables (add --lock-all-tables for those). For the dump to remain valid with --single-transaction, no ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE or TRUNCATE TABLE may run during the dump, because a consistent read cannot isolate those statements; otherwise the dumped data may be inconsistent or unusable.
  2. --default-character-set=utf8: the dump file is written in utf8; the file's character set can be checked with file -i.
  3. --set-gtid-purged=OFF: lets the import be written to the target database's binlog. Without it the import is not recorded in the target's binlog, so when GTID-based replication is in use, add --set-gtid-purged=OFF to mysqldump; otherwise restoring the dump on the master leaves no binlog for it.
  4. --force: ignore SQL errors that occur during the dump.
  5. --hex-blob: dump binary columns in hexadecimal. Required when there is binary data; affects BINARY, VARBINARY and BLOB columns.

Import

$ gunzip < db1.sql.gz | mysql -h 127.0.0.1 -uroot -proot -P3306 db1
bingoohuang commented 3 years ago

Find table sizes

SELECT TABLE_SCHEMA                                      AS `Database`,
       TABLE_NAME                                        AS `Table`,
       TABLE_ROWS                                        AS `Rows`,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS `Size (MB)`
FROM information_schema.TABLES
WHERE TABLE_ROWS > 0
AND TABLE_SCHEMA NOT IN ('mysql', 'sys', 'performance_schema')
ORDER BY (DATA_LENGTH + INDEX_LENGTH) DESC, TABLE_ROWS DESC;
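A variant that aggregates the size per database instead of per table (same information_schema source):

SELECT TABLE_SCHEMA                                          AS `Database`,
       ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS `Size (MB)`
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'sys', 'performance_schema', 'information_schema')
GROUP BY TABLE_SCHEMA
ORDER BY SUM(DATA_LENGTH + INDEX_LENGTH) DESC;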
bingoohuang commented 3 years ago

Batch-rename tables

Generate the rename statements

SELECT Concat('ALTER TABLE ', TABLE_SCHEMA, '.`', TABLE_NAME, '` RENAME TO ', TABLE_SCHEMA, '.`', TABLE_NAME, '_bck_20211013`;') AS RENAME_SQL
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'your_db'
  AND TABLE_NAME IN ( 't1', 't2' );
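Note that MySQL can also rename several tables in one atomic statement, so the generated ALTERs could be folded into a single RENAME TABLE (a sketch with the same example names):

RENAME TABLE your_db.t1 TO your_db.t1_bck_20211013,
             your_db.t2 TO your_db.t2_bck_20211013;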
bingoohuang commented 2 years ago

List the indexes of all tables

select index_schema,
       table_name,
       index_name,
       group_concat(column_name order by seq_in_index) as index_columns,
       index_type,
       IF(non_unique = 1, 'Not Unique', 'Unique')      as is_unique
from information_schema.statistics
where table_schema not in ('information_schema', 'mysql',
                           'performance_schema', 'sys')
--  and TABLE_NAME = 'tbl_authorization_context'
group by index_schema, table_name, index_name, index_type, non_unique;
SHOW INDEX FROM zz.tbl_authorization_context;
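On MySQL 5.7+ the bundled sys schema can also flag redundant/duplicate indexes directly (a sketch):

SELECT table_schema, table_name,
       redundant_index_name, redundant_index_columns,
       dominant_index_name, dominant_index_columns
FROM sys.schema_redundant_indexes
WHERE table_schema NOT IN ('mysql', 'sys');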
bingoohuang commented 2 years ago

Find duplicate table names

-- find table names that appear in more than one schema
select table_name, table_schema
from information_schema.tables
where table_name in (select table_name
                     from (select table_name, count(*) c
                           from information_schema.tables
                           where table_schema not in ('information_schema', 'performance_schema', 'sys', 'mysql')
                           group by table_name
                           having c > 1) t)
order by table_name;
bingoohuang commented 2 years ago
  1. docker pull mysql:5.7.36
  2. docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7.36
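To keep the data outside the container and force utf8mb4, roughly like this (a sketch; the host path /data/mysql is only an example, and the trailing arguments are passed straight to mysqld):

docker run --name mysql -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=root \
  -v /data/mysql:/var/lib/mysql \
  -d mysql:5.7.36 \
  --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci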
bingoohuang commented 2 years ago

A super-slow query that took over 36 minutes; after optimization it runs in 60 ms, roughly 30,000 times faster

Conclusions first:

  1. Strongly recommend enabling the slow query log: SET GLOBAL slow_query_log = 1; (a persistent my.cnf version is sketched after this list)
  2. A slow-query threshold of 3 seconds is a reasonable default: SET GLOBAL long_query_time = 3;
  3. Check the slow log file from time to time: show global variables like '%slow_query%';
  4. Find the slow SQL and improve either the statement itself or how it executes (add indexes, etc.).
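SET GLOBAL only lasts until the next restart; a minimal my.cnf sketch for making the same settings persistent (the log path is just an example):

[mysqld]
slow_query_log      = 1
long_query_time     = 3
slow_query_log_file = /var/lib/mysql/slow.log   # example path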

Slow-query demo from environment Z

The MySQL instance in environment Z could not be logged into and would not restart normally, so it had to be restarted the hard way (you guessed it, the famous kill -9). While at it, the slow query log was enabled, and it caught the number-one offender: a query that took 36 minutes, shown below (sanitized):

# Time: 2022-03-21T21:26:11.783246Z
# User@Host: bj[bj] @  [192.6.3.8]  Id:   381
# Query_time: 2183.175542  Lock_time: 0.000072 Rows_sent: 0  Rows_examined: 68550
SET timestamp=1647897971;
select li0_.ID as ID1_154_, li0_.CREATOR_ID as CREATOR_2_154_, li0_.CREATION_TIME as CREATION3_154_, li0_.LAST_MODIFICATOR_ID as LAST_MOD4_154_, li0_.LAST_MODIFICATION_TIME as LAST_MOD5_154_, li0_.FILE_DATA as FILE_DAT6_154_, li0_.FILE_NAME as FILE_NAM7_154_, li0_.FILE_PATH as FILE_PAT8_154_, li0_.FILE_TYPE as FILE_TYP9_154_, li0_.ERROR_COR_RECORD_ID as LICENSE10_154_, li0_.NAME as NAME11_154_, li0_.ORIGINAL_FILE_NAME as ORIGINA12_154_, li0_.SORT_ORDER as SORT_OR13_154_ from TBL_ERROR_COR_ATTA li0_ where li0_.ERROR_COR_RECORD_ID in ('DC4495E7-AB38-4462-9671-3050E31F89C7');

Script to grab the top 10: grep Query_time /var/lib/mysql/VM000006030-slow.log | sort -k 3 -nr | head -10

# grep Query_time /var/lib/mysql/VM000006030-slow.log | sort -k 3 -nr | head -10
# Query_time: 2183.175542  Lock_time: 0.000072 Rows_sent: 0  Rows_examined: 68550
# Query_time: 2154.709467  Lock_time: 0.000072 Rows_sent: 0  Rows_examined: 68550
# Query_time: 2136.858803  Lock_time: 0.000072 Rows_sent: 1  Rows_examined: 68555
# Query_time: 2128.993168  Lock_time: 0.000070 Rows_sent: 0  Rows_examined: 68550
# Query_time: 2113.997908  Lock_time: 0.000069 Rows_sent: 0  Rows_examined: 68550
# Query_time: 2081.212847  Lock_time: 0.000172 Rows_sent: 4  Rows_examined: 68544
# Query_time: 2076.821708  Lock_time: 0.000073 Rows_sent: 0  Rows_examined: 68550
# Query_time: 2063.169121  Lock_time: 0.000161 Rows_sent: 1  Rows_examined: 68599
# Query_time: 2050.307478  Lock_time: 0.000078 Rows_sent: 4  Rows_examined: 68544
# Query_time: 2049.958196  Lock_time: 0.000075 Rows_sent: 2  Rows_examined: 68582

Find which database the table lives in; along the way, note that it has 50,000+ rows:

mysql> select * from information_schema.TABLES where TABLE_NAME  = 'TBL_ERROR_COR_ATTA'\G
*************************** 1. row ***************************
  TABLE_CATALOG: def
   TABLE_SCHEMA: dc
     TABLE_NAME: tbl_error_cor_atta
     TABLE_TYPE: BASE TABLE
         ENGINE: InnoDB
        VERSION: 10
     ROW_FORMAT: Dynamic
     TABLE_ROWS: 56319
 AVG_ROW_LENGTH: 230228
    DATA_LENGTH: 12966248448
MAX_DATA_LENGTH: 0
   INDEX_LENGTH: 0
      DATA_FREE: 4194304
 AUTO_INCREMENT: NULL
    CREATE_TIME: 2020-09-28 19:10:02
    UPDATE_TIME: 2022-03-23 11:38:30
     CHECK_TIME: NULL
TABLE_COLLATION: utf8_general_ci
       CHECKSUM: NULL
 CREATE_OPTIONS: 
  TABLE_COMMENT: 
1 row in set (0.00 sec)

Look at the table structure (show create table TBL_ERROR_COR_ATTA); note the longblob column in the middle:

CREATE TABLE `tbl_error_cor_atta` (
  `ID` varchar(50) NOT NULL COMMENT '唯一标识',
  `CREATOR_ID` varchar(50) DEFAULT NULL COMMENT '唯一标识',
  `CREATION_TIME` datetime NOT NULL,
  `LAST_MODIFICATOR_ID` varchar(50) DEFAULT NULL,
  `LAST_MODIFICATION_TIME` datetime NOT NULL,
  `NAME` varchar(100) DEFAULT NULL COMMENT '文件名称',
  `FILE_NAME` varchar(200) DEFAULT NULL COMMENT '文件系统名称',
  `FILE_TYPE` varchar(20) DEFAULT NULL COMMENT '文件类型',
  `FILE_PATH` varchar(50) DEFAULT NULL COMMENT '文件路径',
  `FILE_DATA` longblob COMMENT '文件数据',
  `ORIGINAL_FILE_NAME` varchar(100) DEFAULT NULL COMMENT '原始名称',
  `SORT_ORDER` decimal(22,0) DEFAULT NULL COMMENT '排序字段',
  `ERROR_COR_RECORD_ID` varchar(50) DEFAULT NULL COMMENT '纠错记录ID',
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Now look at the execution plan of the slow SQL: explain {the slow SQL above}. It is clearly a full table scan (see the possible_keys and key columns):

id  select_type  table  partitions  type  possible_keys  key   key_len  ref   rows   filtered  Extra
1   SIMPLE       err0_  NULL        ALL   NULL           NULL  NULL     NULL  56319  10.00     Using where

Since ERROR_COR_RECORD_ID is itself an ID column, add an ordinary index on it (at ~50k rows this is safe to do and should be quick):

CREATE INDEX tbl_error_cor_atta_idx1 ON tbl_error_cor_atta (ERROR_COR_RECORD_ID);

With the index in place, run the execution plan again: the query immediately uses the new index and the execution time becomes essentially constant:

id  select_type  table  partitions  type  possible_keys            key                      key_len  ref    rows  filtered  Extra
1   SIMPLE       err0_  NULL        ref   tbl_error_cor_atta_idx1  tbl_error_cor_atta_idx1  153      const  2     100.00    NULL

Run the query again: it now takes only 0.06 seconds. Going from 2183 s to 0.06 s is roughly a 30,000x speedup.

2 rows in set (0.06 sec)

Looking back, this table is rarely queried by its primary key ID; most lookups go through ERROR_COR_RECORD_ID, and the table also carries a large longblob column, so ERROR_COR_RECORD_ID really should have been indexed from the start.
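On a busy instance the same index can usually be added without blocking writes by spelling out the online-DDL clauses (a sketch, MySQL 5.6+):

ALTER TABLE tbl_error_cor_atta
  ADD INDEX tbl_error_cor_atta_idx1 (ERROR_COR_RECORD_ID),
  ALGORITHM=INPLACE, LOCK=NONE;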

bingoohuang commented 2 years ago

mysql -uroot -p -A

On the first "use" command after login, the mysql client scans database, table, and column names for auto-completion. If you have many databases and tables, this can take a while.

Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

To avoid that, launch the client with the -A option (or --no-auto-rehash):

mysql -uroot -p -A

You can also add the disable_auto_rehash option to your my.cnf (in the [mysql] section) to turn it off permanently. This does not require a server restart, since it is a client option, not a server variable.
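A minimal my.cnf sketch of that client-side setting (no-auto-rehash is the option-file spelling of -A):

[mysql]
no-auto-rehash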

bingoohuang commented 2 years ago

ALTER TABLE table_name AUTO_INCREMENT = 1 allows the database to reset the AUTO_INCREMENT to:

MAX(auto_increment_column)+1

It does not reset it to 1.

This prevents any duplication of AUTO_INCREMENT values. Also, since AUTO_INCREMENT values are either primary/unique, duplication would never happen anyway. The method to do this is available for a reason. It will not alter any database records; simply the internal counter so that it points to the max value available. As stated earlier by someone, don't try to outsmart the database... just let it handle it. It handles the resetting of AUTO_INCREMENT very well.
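A small sketch of that behavior on InnoDB (table and values are illustrative):

CREATE TABLE seq_demo (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
INSERT INTO seq_demo(v) VALUES (1), (2), (3);   -- ids 1, 2, 3
DELETE FROM seq_demo WHERE id = 3;              -- internal counter still points at 4
ALTER TABLE seq_demo AUTO_INCREMENT = 1;        -- clamped up to MAX(id) + 1, i.e. 3
INSERT INTO seq_demo(v) VALUES (4);             -- gets id 3, not 1
SHOW TABLE STATUS LIKE 'seq_demo';              -- the Auto_increment column shows the counter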

bingoohuang commented 2 years ago

Logging all deadlocks

  1. Check whether the option is enabled: show variables like 'innodb_print_all_deadlocks';
  2. Enable innodb_print_all_deadlocks; it is a global variable and can be changed at runtime, but remember to also add it to the config file (a sketch follows this list): set global innodb_print_all_deadlocks = 1;
  3. Inspect deadlocks: the LATEST DETECTED DEADLOCK section of the show engine innodb status output
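A minimal sketch of the persistent setting and where the output lands (the error-log path is only an example):

[mysqld]
innodb_print_all_deadlocks = 1         # every deadlock is appended to the error log
log_error = /var/log/mysql/error.log   # example path; check it after a deadlock occurs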

https://mp.weixin.qq.com/s/EMESVM3VKHQWinoTC_3ZFA