How to speed up database imports with MySQL and MariaDB

In this post, I want to share an approach to speed up importing data into a MySQL or MariaDB database from an exported SQL file.

mysql -u dbuser -pdbpass mydatabase < exported_data_file.sql
mysql --user=dbuser --password=dbpass mydatabase < exported_data_file.sql

Overview

Often, this command finishes in a few minutes if the data file is not too large. However, it can trap you when the file is huge, for example when some tables hold millions of records.

How to speed up the process

As a reminder: to export data from a MySQL database we use mysqldump, and to import the exported data back we use the mysql command, for example:

mysqldump -u dbuser -pdbpass mydatabase > exported_data_file.sql

Apart from reducing network latency, a simple trick is to disable the consistency checks during the import (much as you would disable auto-commit). Add the following at the beginning of the dump:
SET unique_checks = 0; SET foreign_key_checks = 0;
then re-enable them at its end:
SET foreign_key_checks = 1; SET unique_checks = 1; COMMIT;
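If you would rather not edit the dump file itself, the two statement pairs can be wrapped around it from the shell. This is a minimal sketch: the file names exported_data_file.sql and wrapped.sql mirror the example above, and the first line only creates a tiny stand-in for a real export.

```shell
# Demo stand-in for a real mysqldump export (in practice the file already exists)
printf 'INSERT INTO t VALUES (1);\n' > exported_data_file.sql

# Prepend the statements that disable the checks, append the ones that restore them
{
  printf 'SET unique_checks = 0;\nSET foreign_key_checks = 0;\n'
  cat exported_data_file.sql
  printf 'SET foreign_key_checks = 1;\nSET unique_checks = 1;\nCOMMIT;\n'
} > wrapped.sql
```

The result can then be fed to the server as before: mysql -u dbuser -pdbpass mydatabase < wrapped.sql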
 
When I installed MariaDB 10.11.11 on an AWS EC2 instance, I hadn't customised the database server for performance; all server variables were left at their defaults. I killed the import processes that were running too slowly and began tuning the variables shown in the configuration below. In addition, I turned off the checks as described above, and the next import completed in around 20-30 minutes. What performance! P/S: our EC2 instance has 32 GB of RAM.
 
cat /etc/my.cnf
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]

[mysqld]
# -- Source - https://stackoverflow.com/a/70864907
# -- Posted by Ahtisham, modified by community. See post 'Timeline' for change history
# -- Retrieved 2025-11-13, License - CC BY-SA 4.0
innodb_buffer_pool_size=16G
#innodb_buffer_pool_instances=16
innodb_log_buffer_size=512M
innodb_log_file_size=4G
innodb_write_io_threads=32
innodb_flush_log_at_trx_commit=0
innodb_doublewrite=0

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
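A rough way to pick innodb_buffer_pool_size is to take about half of the machine's RAM (the 16G above is half of the instance's 32 GB). Here is a small sketch of that rule of thumb, assuming a Linux host where total memory can be read from /proc/meminfo; treat the 50% figure as a common starting point, not a hard rule.

```shell
# Compute ~50% of total RAM in GB as a starting innodb_buffer_pool_size
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
pool_gb=$(( total_kb / 1024 / 1024 / 2 ))
echo "innodb_buffer_pool_size=${pool_gb}G"
```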
Someone might ask about the innodb_flush_log_at_trx_commit=0 setting, so I looked it up. For anyone interested:
  • 0: Write and flush the logs to disk once per second. < maximum performance, least durability
  • 1: Write and flush the logs to disk at every transaction commit. < less performance, fully ACID
  • 2: Write the logs to the log file at every commit, but flush to disk only once per second. < a middle ground
Voila!
Nguyen Vu Ngoc Tung

