Let's assume a table structure like:
CREATE TABLE `tbl` (
`id` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
);
and INSERT queries like:
INSERT INTO tbl(id) VALUES (NULL);
In the real code the table has other columns that also appear in the INSERT query, but we can safely ignore them because they are irrelevant to this specific issue.
When the id column reaches its maximum value, no more rows can be inserted into the table using the query above. The next INSERT fails with the error:
SQL Error (167): Out of range value for column 'id'.
If there are gaps in the values of the id column, you can still insert rows that use values not present in the table, but you have to specify the id value explicitly in the INSERT query.
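For example, a minimal sketch (the value 1000000 is hypothetical; any id not already present in the table works):
-- AUTO_INCREMENT can no longer generate a new value, but an explicit
-- id that falls into a gap is still accepted:
INSERT INTO tbl(id) VALUES (1000000);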
Anyway, if the type of your AUTO_INCREMENT column is BIGINT you don't have to worry.
Assuming the code inserts one million rows every second (which is a gross overestimate, not to say impossible), there are enough values in the id column for more than half a million years, or just 292,277 years if the column is not UNSIGNED.
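As a quick sanity check of those figures (a back-of-the-envelope calculation, assuming one million inserts per second and an average Gregorian year of 31,556,952 seconds):
-- BIGINT UNSIGNED maximum divided by inserts per year:
SELECT 18446744073709551615 / (1000000 * 31556952) AS unsigned_years; -- ~584,554
-- Signed BIGINT maximum divided by inserts per year:
SELECT 9223372036854775807 / (1000000 * 31556952) AS signed_years;    -- ~292,277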
I witnessed this behaviour on a live web server that used INT(11) (not UNSIGNED) as the AUTO_INCREMENTed PK for a table recording information about the visits to the web site. It failed in the middle of the night, after several years of running smoothly, when the number of visits reached 2^31 (a bit more than 2 billion).
Changing the column type from INT to BIGINT is not a solution on a table with 2 billion records (it takes ages to complete, and on a live system there is never enough time). The solution was to create a new table with the same structure but with BIGINT for the PK column and a suitable initial value for the AUTO_INCREMENT counter, and then switch the tables:
CREATE TABLE `tbl_new` (
`id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) AUTO_INCREMENT=2200000000;
RENAME TABLE `tbl` TO `tbl_old`, `tbl_new` TO `tbl`;
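The starting value only needs to be safely above anything already issued in the old table; 2200000000 works here because it is past the signed INT maximum of 2,147,483,647. A quick sketch of how to verify, assuming the old table is still available as tbl_old after the rename (the exact margin you leave is your choice):
-- Highest id ever issued in the old table:
SELECT MAX(id) FROM tbl_old;
-- If necessary, the counter can still be adjusted after creation:
-- ALTER TABLE `tbl` AUTO_INCREMENT = 2200000000;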