Can I set up MySQL as a slave that only replicates structure changes (`CREATE TABLE` / `ALTER TABLE`, etc.)? I've got two reasons for this:
- Multiple developers with development branches of the code, which should always work against a 'fresh' data structure fetched from a 'main' development database.
- Multiple projects/clients that share (part of) the same code/scripts (think of a symlinked 'core' directory) but use different databases (one per client), whose structure should be updated simultaneously with code changes in the shared portion of the code, with a minimum of downtime.
If this isn't possible, what are other good ways to sync MySQL data structures, without the actual data, across sites/databases/servers? And no, structural changes don't happen all that often, but with enough different databases hanging around, it's quite some time spent syncing all instances, and I'd like to both limit downtime and spend my time on more interesting jobs ;).
I typically dump the schema using mysqldump. However, I believe the blackhole engine could be used as you describe.
You would `ALTER` all tables to `BLACKHOLE`, and `INSERT`, `UPDATE`, and `DELETE` statements would not affect the data, as BLACKHOLE does not store data. You can set `skip-innodb` and `default-storage-engine=BLACKHOLE`, and all tables created will then default to `BLACKHOLE`, except those created as `MyISAM`, since `MyISAM` cannot be disabled. You need to be aware of this as you create tables on the master, as you may need to later `ALTER` them back to `BLACKHOLE` on the slave.
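For illustration, a minimal sketch of that setup (the table name `customers` is just a placeholder):

```sql
-- On the slave: discard row data for this table while still
-- accepting DDL statements replicated from the master.
ALTER TABLE customers ENGINE = BLACKHOLE;
```

```ini
# my.cnf on the slave: make BLACKHOLE the default engine
# for newly created tables (skip-innodb as described above).
[mysqld]
skip-innodb
default-storage-engine = BLACKHOLE
```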
Nevertheless, what you describe is an administrative issue, not a technical one. I recommend configuring at least two development databases, maintained by the infrastructure team.
The first would be a development database. The developers have more access to this database, and it would be regularly refreshed with scrubbed data.
The second database would be a QA or testing environment, to which developers would not have RW access. This allows proper staging.
Both of these databases would be verified by the IT person making production changes to the database and would be included as part of the staging process to production, which ensures consistent schemas across environments.
If you want developers to have databases all over the place, they simply cannot be maintained, and it will have to be the developers' responsibility to ensure that their dataset is current enough for their needs. You could programmatically provide a schema dump on a shared drive, or perhaps a BLACKHOLE replication slave with RO access.
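For instance, the shared-drive schema dump could be as simple as a cron job (the host, database name, and mount point here are hypothetical; credentials are assumed to come from `~/.my.cnf`):

```sh
# Nightly at 02:00: dump structure only (-d = --no-data) to a shared mount
0 2 * * * mysqldump -d -h dev-master maindb > /mnt/shared/maindb-schema.sql
```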
Unfortunately, MySQL replication does not allow that. What you can do is regularly run `mysqldump -d`, which will dump the database and table structure but leave out the data. You can then import this structure into a new DB. From `man mysqldump`: "--no-data, -d: Do not write any table row information (that is, do not dump table contents)."

We do this where I work currently. As part of any changes to the DB, we run this command, save the output to a file, and push it into source control. Devs simply pull down the file and apply the changes.
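In practice, that workflow might look something like this (file paths and database names are illustrative, and git stands in for whatever source control you use):

```sh
# After a schema change on the main development database:
mysqldump -d maindb > schema/maindb.sql
git add schema/maindb.sql
git commit -m "Update schema"

# A developer refreshing a local database (the dump includes
# DROP TABLE IF EXISTS statements by default, so tables are recreated):
git pull
mysql devdb < schema/maindb.sql
```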
Since you mentioned symlinking, I'll assume the instances of MySQL, or multiple databases, are either on the same server or have access to the same drives.
Assuming this, I would try symlinking the master's `.frm` files to the slave's `.frm` files. This way, whenever the master is updated with structural changes, the slave is automatically updated too, but they won't share data. I haven't actually tried this, but it should work in theory.
"Each MyISAM table is stored on disk in three files. The files have names that begin with the table name and have an extension to indicate the file type. An .frm file stores the table format. The data file has an .MYD (MYData) extension. The index file has an .MYI (MYIndex) extension."