I want to create a Docker image on top of the mysql one that already contains the necessary schema for my app.
I tried adding lines to the Dockerfile that import my schema from a SQL file. I did so like this (my Dockerfile):
FROM mysql
ENV MYSQL_ROOT_PASSWORD="bagabu"
ENV MYSQL_DATABASE="imhere"
ADD imhere.sql /tmp/imhere.sql
RUN "mysql -u root --password="bagabu" imhere < /tmp/imhere.sql"
To my understanding, that didn't work because the mysql Docker image does not contain a mysql client (best practices state "don't add things just because they will be nice to have"). Am I wrong about this?
What might be a good way to do this? I have had a few things in mind, but they all seem like messy workarounds:
- Install the mysql client, do what I have to do with it, then remove/purge it.
- Copy the mysql client binary into the image, do what I have to do, then remove it.
- Create the schema on another SQL server and copy the DB files themselves directly (this seems very messy and sounds to me like a contaminated pool of problems).
Any suggestions? Hopefully something that will be easy to maintain later and that conforms with best practices as well.
I had to do this for test purposes.
Here's how I did it, leveraging the actual MySQL/MariaDB images on Docker Hub and a multi-stage build:
Full working example here: https://github.com/lindycoder/prepopulated-mysql-container-example
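In outline, the trick is: in a builder stage, run the official entrypoint once at build time so it initializes a data directory with your schema, then copy that directory into a clean final image. A sketch of the idea (the sed line, paths, and version tag are assumptions based on the linked repo, not verbatim from it):

```dockerfile
FROM mariadb:10.4 AS builder

# The entrypoint normally initializes the datadir and then execs the server;
# neutering the final exec makes it initialize the database and exit instead.
RUN ["sed", "-i", "s/exec \"$@\"/echo \"skipping $@\"/", "/usr/local/bin/docker-entrypoint.sh"]

ENV MYSQL_ROOT_PASSWORD=root
COPY setup.sql /docker-entrypoint-initdb.d/

# Initialize the schema into a directory that is NOT declared as a VOLUME,
# so the data survives in the image layer.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]

FROM mariadb:10.4
COPY --from=builder /initialized-db /var/lib/mysql
```

The key design point is that `/initialized-db` is an ordinary directory, so the `RUN` step's output is baked into the builder layer and can be copied into the final stage.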
You should put your init script in a directory mounted as
/docker-entrypoint-initdb.d
— see the "Initializing a fresh instance" section in the MySQL Docker image docs.

Credits to @Martin Roy.
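Using the names from the question, the whole Dockerfile then collapses to a few lines (a sketch; the entrypoint executes any .sql file found in that directory when the data directory is first initialized, so no RUN step or mysql client is needed at build time):

```dockerfile
FROM mysql
ENV MYSQL_ROOT_PASSWORD="bagabu"
ENV MYSQL_DATABASE="imhere"
# Imported into the "imhere" database automatically on first container start
COPY imhere.sql /docker-entrypoint-initdb.d/
```

Note the difference from the multi-stage approach: here the import happens at container start, not at image build time.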
Made minor changes to work for mysql...
The Dockerfile and setup.sql contents are in the full working example here: https://github.com/iamdvr/prepopulated-mysql-container-example
This works, and it follows the interface of the mariadb entrypoint script. Files with settings, tables, data, and users are copied into the folders; when the container starts, the entrypoint script finds the files and creates a new database.
Here is my Dockerfile, working absolutely fine. /sql-scripts/ will contain your custom SQL files (containing your prepared DB), which are executed once the container runs. You may want to look at volume mounting, though.
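The Dockerfile itself was not included above; a minimal sketch of what such a file likely looks like (the image tag, database name, and password value are placeholder assumptions):

```dockerfile
FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD=changeme
ENV MYSQL_DATABASE=mydb
# Every .sql file in this directory is executed on first container start
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
```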
@Venkateswara Rao's and @Martin Roy's answers work great if you don't mind that changes to the dockerized DB still end up in a volume. In my case, I wanted a Docker image prepopulated with data, but in a project where we have frequent database migrations. Thus, I wanted to occasionally make a DB container, run migrations, then somehow push those changes back up so that the next user of the DB image would already have the migrations applied. Since the DB files live in a volume, they don't get rolled into an image with
docker commit my-migrated-db-container my-new-db-image
. My solution was to use the official MySQL (5.6 in my case) Dockerfile (and entrypoint file) to recreate their work after removing the
VOLUME /var/lib/mysql
line. From the resulting image, I then used Venkateswara's files to make a pre-loaded DB image where even all future DB data is stored in the container itself (not in a volume).

NOTE: The linked Dockerfile is old and refers to obsolete GPG keyservers (i.e. keys.openpgp.org). You can replace these server names with, say, keyserver.ubuntu.com.
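The resulting workflow might look like this (a sketch; the image and container names are placeholders, and the migration step is project-specific):

```shell
# Build a MySQL image whose Dockerfile has the "VOLUME /var/lib/mysql" line removed
docker build -t mysql-novolume:5.6 .

# Run it, then apply your migrations against the running container
docker run -d --name my-migrated-db-container -e MYSQL_ROOT_PASSWORD=root mysql-novolume:5.6
# ... run migrations here ...

# Because the datadir is not a volume, the migrated data is captured by the commit
docker commit my-migrated-db-container my-new-db-image
```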
Management software like Ansible can help you automate a MySQL import easily, without needing to install and reinstall a client. Ansible has great built-in features for managing Docker images, containers, and MySQL databases.
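For example, a playbook task using the community.mysql collection might look like this (a sketch; the host, credentials, and file path reuse the question's values and are assumptions about your setup):

```yaml
- name: Import schema into the running MySQL container
  community.mysql.mysql_db:
    name: imhere
    state: import
    target: /tmp/imhere.sql
    login_host: 127.0.0.1
    login_user: root
    login_password: bagabu
```

This runs the import from the Ansible control node against the container's published port, so no mysql client needs to live inside the image.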