In this post, I want to share an approach to speed up importing data into a MySQL or MariaDB database from an exported SQL file.
```shell
mysql -u dbuser -pdbpass mydatabase < exported_data_file.sql
mysql --user=dbuser --password=dbpass mydatabase < exported_data_file.sql
```

Note that with the short `-p` option there must be no space between `-p` and the password; otherwise `mysql` prompts for a password and treats the next word as the database name.
Overview
Often, this command finishes within minutes if the data file is not too large. However, it can trap you for hours when the file is large and the data is huge, for example when some tables contain millions of records.
How to speed up the process
As a reminder, we export the data from a MySQL database with mysqldump, and use the mysql command to import the exported data back into the database.
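For example, a typical export might look like this (`dbuser`, `dbpass`, and `mydatabase` are the placeholder names used in the commands above):

```shell
# Dump the whole database to a file; --single-transaction gives a
# consistent snapshot of InnoDB tables without locking them.
mysqldump --user=dbuser --password=dbpass --single-transaction mydatabase > exported_data_file.sql
```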
Two things help a lot here. First, disable unique and foreign key checks for the duration of the import, then re-enable them and commit at the end:

```sql
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- ... the import runs here ...
SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;
```

Second, tune the InnoDB settings in the server configuration:

```shell
cat /etc/my.cnf
```

```ini
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]

[mysqld]
# -- Source - https://stackoverflow.com/a/70864907
# -- Posted by Ahtisham, modified by community. See post 'Timeline' for change history
# -- Retrieved 2025-11-13, License - CC BY-SA 4.0
innodb_buffer_pool_size=16G
#innodb_buffer_pool_instances=16
innodb_log_buffer_size=512M
innodb_log_file_size=4G
innodb_write_io_threads=32
innodb_flush_log_at_trx_commit=0
innodb_doublewrite=0

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
```
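One convenient way to apply the SET statements is to wrap the dump file with them, so the whole import runs in a single pass. A minimal sketch (the dump contents and the `import_fast.sql` name are just illustrations):

```shell
# A stand-in dump file; in practice this is your real exported_data_file.sql
echo 'INSERT INTO t VALUES (1);' > exported_data_file.sql

# Wrap the dump so the checks stay disabled for the whole import
printf 'SET unique_checks = 0;\nSET foreign_key_checks = 0;\n' > import_fast.sql
cat exported_data_file.sql >> import_fast.sql
printf 'SET foreign_key_checks = 1;\nSET unique_checks = 1;\nCOMMIT;\n' >> import_fast.sql

# Then import the wrapped file in one pass (credentials are placeholders):
# mysql --user=dbuser --password=dbpass mydatabase < import_fast.sql
```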
For reference, `innodb_flush_log_at_trx_commit` accepts three values:

- 0: Write and flush the logs to disk once per second. Maximum performance, least durability.
- 1: Write and flush the logs to disk at every transaction commit. Less performance, fully ACID.
- 2: Write the logs to the log file at every commit, but flush them to disk only once per second. A middle ground.
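If you cannot edit `/etc/my.cnf`, the flush behaviour can also be relaxed at runtime with `SET GLOBAL` (this needs a privileged account; the `root` user here is a placeholder, and you should restore the value after the import):

```shell
# Relax log flushing only for the duration of the import
mysql -u root -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 0;"
mysql -u root -p mydatabase < exported_data_file.sql
mysql -u root -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"
```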
