In the world of database administration, few things are more frustrating than a failed dump right when everything seemed to be under control. That’s the case with the dreaded Error 2013: Lost connection to MySQL server during query when dumping table, which appears in both mysqldump and mariadb-dump.
The error message is usually blunt:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table XXXX at row N
or in MariaDB’s variant:
mariadb-dump: Error 2013: Lost connection to MySQL server during query when dumping table XXXX at row N
Behind this warning lies a very specific technical problem: the tool loses its connection to the MySQL or MariaDB server while trying to extract data from a table that's too large or contains oversized records. It's not rare, and it often shows up in environments with tables holding millions of rows, heavy BLOB/TEXT fields, or where server and client are configured with overly restrictive packet-size and timeout limits.
The good news is that the community has long identified practical solutions. This article compiles those measures in a useful guide, explained in accessible terms, so that any professional or system administrator can face it with confidence.
Why Does Error 2013 Occur?
The error code 2013 corresponds to a lost connection between the client running mysqldump (or mariadb-dump) and the MySQL/MariaDB server. It usually happens in three main scenarios:
- Oversized rows: when a query returns a record larger than the value allowed by the max_allowed_packet variable, the connection is dropped.
- Timeouts: long-running dumps or slow connections may exceed the read/write timeouts.
- Infrastructure/network issues: firewalls, load balancers, or network drops in distributed environments can interrupt large-scale dumps.
Although the message often points to the specific row where the dump failed, it doesn’t necessarily mean that row is “corrupt.” What usually fails is the configuration of MySQL or MariaDB limits.
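Before changing any settings, you can check whether row size is really the culprit. Here is a minimal diagnostic sketch using hypothetical names; replace large_table and payload with your own table and BLOB/TEXT column:
-- Hypothetical table/column names; adjust to your schema.
-- Shows the five largest rows by payload size, in bytes.
SELECT id, LENGTH(payload) AS row_bytes
FROM large_table
ORDER BY row_bytes DESC
LIMIT 5;
If the largest values approach your max_allowed_packet setting, the first fix below is almost certainly the one you need.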
Step One: Adjust max_allowed_packet
The most immediate and common fix is to increase the maximum packet size allowed for client-server communication. By default, this value may be too low for operations involving large binary data or very long SQL statements.
On the client side, simply run:
$ mysqldump --opt --max_allowed_packet=1G database_name > db.sql
The option accepts the suffixes K, M, and G for kilobytes, megabytes, and gigabytes. A value of 1G, which is also the server-side maximum, is more than enough in most environments.
On the server side, you can check the current setting:
SHOW VARIABLES LIKE 'max_allowed_packet';
And increase it temporarily if necessary:
SET GLOBAL max_allowed_packet = 1073741824; -- 1G
Note that the new value only applies to connections opened after the change, so restart your dump client to pick it up.
To make it permanent, edit the my.cnf or my.ini configuration file:
[mysqld]
max_allowed_packet = 1G
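Since the limit applies on both ends of the connection, it can also be persisted for the client: mysqldump reads its own [mysqldump] group from the same configuration file:
[mysqldump]
max_allowed_packet = 1G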
The Power of --quick and --single-transaction
Another crucial measure is to optimize how mysqldump handles queries. Two options are highly recommended:
- --quick: reads and writes rows one by one instead of loading the entire table into memory.
- --single-transaction: creates a consistent snapshot for InnoDB engines, preventing locks and ensuring reliable dumps without halting activity.
Practical example:
mysqldump --single-transaction --quick --max_allowed_packet=512M database_name > backup.sql
For many administrators, these two flags alone are enough to prevent crashes in large-scale databases.
Avoiding Oversized Extended Inserts
By default, mysqldump generates extended insert statements, which combine many rows into a single INSERT. This is efficient during restoration, but it can also create statements hundreds of megabytes long, easily exceeding the packet size.
The --skip-extended-insert option forces each row to be written as a separate INSERT. The dump file will be larger and restores slightly slower, but the risk of hitting max_allowed_packet limits is drastically reduced.
Example:
mysqldump --single-transaction --quick --skip-extended-insert database_name > backup.sql
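If you would rather keep multi-row inserts but put a ceiling on their size, mysqldump also honors a --net_buffer_length option that caps how long each generated INSERT can grow (check your version's documentation for its default and limits). A middle-ground sketch:
mysqldump --single-transaction --quick --net_buffer_length=64K database_name > backup.sql
Each INSERT is then built up only to roughly 64 KB, keeping statements comfortably under any realistic max_allowed_packet.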
Increasing Timeouts
If the problem lies in timeout exhaustion, review the net_read_timeout and net_write_timeout variables. In environments with heavy operations, a low value may cut off valid connections.
Check current values:
SHOW VARIABLES LIKE 'net_read_timeout';
SHOW VARIABLES LIKE 'net_write_timeout';
Increase them on the fly:
SET GLOBAL net_read_timeout = 600;
SET GLOBAL net_write_timeout = 600;
This adjustment gives dumps ten minutes of breathing room instead of the defaults of 30 seconds for reads and 60 seconds for writes.
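As with max_allowed_packet, values changed with SET GLOBAL are lost on restart. To persist them, add the settings to the server's configuration file:
[mysqld]
net_read_timeout = 600
net_write_timeout = 600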
Dumping Large Tables in Parts
If the error always occurs on the same table, the most practical solution is to split the dump into smaller parts using the --where option. This requires a primary key or some other reference field.
Example:
mysqldump database_name large_table --where="id BETWEEN 1 AND 100000" > part1.sql
mysqldump database_name large_table --no-create-info --where="id BETWEEN 100001 AND 200000" > part2.sql
The --no-create-info flag on the later chunks matters: without it, each file would drop and recreate the table on restore, wiping out the rows loaded from the previous part.
This way, you avoid processing millions of records in a single go.
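If you split tables like this regularly, the chunking can be scripted. A minimal sketch in bash, assuming an integer primary key named id and credentials available via ~/.my.cnf (all names here are hypothetical):
#!/usr/bin/env bash
# Hypothetical helper: dump large_table in 100,000-row chunks keyed on id.
CHUNK=100000
MAX=$(mysql -N -B -e "SELECT MAX(id) FROM database_name.large_table")

for ((start = 1; start <= MAX; start += CHUNK)); do
  end=$((start + CHUNK - 1))
  # Only the first chunk keeps the CREATE TABLE statement; later chunks
  # use --no-create-info so restoring them will not drop earlier rows.
  opts=""
  [ "$start" -gt 1 ] && opts="--no-create-info"
  mysqldump --single-transaction --quick $opts database_name large_table \
    --where="id BETWEEN $start AND $end" > "large_table_${start}.sql"
done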
Using mysqlpump or MariaDB Parallelism
For those using newer versions of MySQL, the mysqlpump tool supports parallelism and native compression, making large dumps faster and easier to manage. In MariaDB, newer releases of mariadb-dump offer a --parallel option to process multiple tables simultaneously.
Example with MySQL:
mysqlpump --default-parallelism=4 --compress-output=ZLIB database_name > backup.sql.zlib
The valid algorithms for --compress-output are ZLIB and LZ4; the resulting file can be unpacked with the zlib_decompress utility that ships with MySQL before restoring.
Example with MariaDB:
mariadb-dump --single-transaction --quick --parallel=4 database_name > backup.sql
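Since --parallel only appears in newer MariaDB releases, it is worth confirming that your build actually supports it before scripting around it; a quick check:
mariadb-dump --help | grep -i -- --parallel || echo "this build has no --parallel option"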
Best Practices to Prevent Error 2013
Beyond technical tweaks, a few good practices help minimize the likelihood of hitting this error:
- Test in staging environments: always run trial dumps before attempting production.
- Monitor resources: keep an eye on CPU and memory usage during dumps to catch bottlenecks.
- Compress on the fly: use gzip or xz to reduce output file size and disk writes (see the example below).
- Schedule during off-peak hours: backups during low activity reduce contention and risk.
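Compressing on the fly is usually a one-liner, piping the dump straight into the compressor:
mysqldump --single-transaction --quick database_name | gzip > backup.sql.gz
To restore, decompress straight into the client:
gunzip < backup.sql.gz | mysql database_name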
Conclusion
The mysqldump/mariadb-dump error 2013 is an old nemesis for database administrators. While its sudden appearance is nerve-wracking, the silver lining is that it’s well-documented and has multiple proven fixes.
From increasing max_allowed_packet, to enabling --quick and --single-transaction, splitting large tables, or moving to more modern tools like mysqlpump, the alternatives are plentiful. The key lies in diagnosing whether the root cause is record size, memory limits, or connection timeouts.
At a time when databases form the beating heart of digital services, having reliable backups is non-negotiable. And knowing how to overcome errors like this makes any administrator an invaluable asset to their organization.
Frequently Asked Questions (FAQ)
1. What exactly does MySQL error 2013 mean?
It indicates that the client lost its connection to the server while executing a query, in this case during a database dump with mysqldump or mariadb-dump.
2. What is the recommended value for max_allowed_packet?
It depends on the size of your rows. For standard environments, 256M is often enough. For databases with BLOBs or massive tables, 512M or even 1G may be necessary.
3. Can I run a full dump without locking tables?
Yes, by using the --single-transaction option with InnoDB databases, which creates a consistent snapshot without locking.
4. What if the error always happens on the same table?
Split the dump into chunks using the --where option, so millions of rows aren't processed in one massive operation.