Can Multi-DB operate in production BEFORE tables are moved to new DBs?

I’m just now in the process of moving to multi-db… I understand the basic principle of how it works, although I admit I haven’t totally digested the code yet… so I apologize if the answer to this question is already in the code and I just need to read it… :slight_smile:

Can you run the multi-db code in production BEFORE you start to move the tables/blogs over to the multiple DBs?

It seems to me that you can’t really do this… my minimal testing/reading of the code seems to suggest that you really have to do the DB move before using this code.

But it also seems like you might be able to set up your “old” database as the global database, list every table as a global table, and run in that mode. As you copy things over to the new hashed databases, you could then remove them from the global list…

What’s the realistic best approach to do this?

Oh yeah… I have about 6000 blogs.

  • trent
    • Site Builder, Child of Zeus

    I think once you implement the solution it will start using the blog’s md5 hash to look for the proper database for each blog, and it won’t find it since the blog would still be in the global database. It might have been for other reasons that it didn’t work for me, but I actually tried that on a test install and it didn’t work.
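    To make the hash lookup trent describes concrete, here is a minimal Python sketch of md5-based routing. The 16/256/4096 database split and the “wp_” name prefix are assumptions for illustration, not the plugin’s literal code:

```python
import hashlib

def hashed_db_for_blog(blog_id, num_dbs=16):
    """Pick a database name suffix from the md5 hash of the blog ID.

    Hash-based routing as I understand it: with 16 databases you use the
    first hex character of the hash, with 256 the first two, with 4096
    the first three."""
    chars = {16: 1, 256: 2, 4096: 3}[num_dbs]
    digest = hashlib.md5(str(blog_id).encode()).hexdigest()
    return "wp_" + digest[:chars]  # "wp_" prefix is illustrative only

print(hashed_db_for_blog(1))  # blog 1 always resolves to the same DB
```

    The point of trent’s reply follows directly: once this resolver is active, a blog whose tables are still sitting in the global database will be looked for in its hashed database and not found.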


  • ZappoMan
    • Design Lord, Child of Thor

    Trent, thanks for the quick reply…

    Ok, another way to ask this question….

    Is there any way to get this running on a live production site with no downtime?

    Based on your reply (and my reading of the install process) I think the answer is no, and I assume the migration goes something like this….

    1) Prepare: do your setup steps of creating the new databases, setting up your configs, etc…

    2) Go “into maintenance” mode, taking your blogs offline

    3) Back up your database (of course this has already been done automatically, right?!)

    4) Copy the table files into the new databases (this seems best done outside of MySQL by just moving the files from the old directory to the new directory)

    5) Fire up the new blogs setup pointing to the new multi-db setup.

    I was hoping that there may be a way to leave the OLD tables in the global database until you migrate them over to their “proper” database… Then you wouldn’t really have to take the system offline: you just leave everything running in the global database, move blogs over, and take them out of the global table list.
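    A rough Python sketch of the interim lookup I have in mind (all names here are hypothetical; the real logic would live in db.php): blogs not yet migrated resolve to the old global database, migrated ones to their hashed home.

```python
import hashlib

def resolve_db(blog_id, unmigrated, num_dbs=16):
    # Hypothetical resolver for the interim mode: blogs still listed as
    # unmigrated stay in the old "global" database; everything else is
    # routed by the usual md5-of-blog-id hash.
    if blog_id in unmigrated:
        return "global"
    chars = {16: 1, 256: 2, 4096: 3}[num_dbs]
    return "wp_" + hashlib.md5(str(blog_id).encode()).hexdigest()[:chars]

unmigrated = {2, 3, 5}
print(resolve_db(2, unmigrated))  # still in the global DB
print(resolve_db(1, unmigrated))  # already moved, so hashed
```

    Each time a blog finishes migrating, you would drop its ID from the unmigrated set, and new lookups start going to the hashed database.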

  • Luke
    • The Crimson Coder

    Yes, this would be possible. Even on initial installation.

    You would have to make sure that you have everything squared away and ready to go, but other than that it should be fine.

    It’s also much easier to do it from the start, although with the move blogs script it isn’t too terribly painful. The downside to migration later is that you have to take the site completely offline, so that there isn’t anything missed during the migration process. This can take some time on large sites, which means late nights doing it.

    For me, I like putting the main blog (id 1) in its own VIP database. One thing I like about this latest version is that it makes that much easier, and you can add as many DB servers as you want for things like that, with special names to match.

    Let’s say you have a popular blog that’s paid for some extra lovin’ from you. One thing you can use as another feature/selling point is an exclusive database. It can sound real nice and important if you describe it correctly. If the blog’s name is “PopularBlog”, for example, you can add a database called “popularblog”, add their blog ID and dbname through the add VIP function, and that’s it.

    Definitely a good upgrade on this one. Less code included for config stuff, it’s easy to upgrade with it in place, and seems to be more efficient overall as well.

    Getting back on topic, a database is a database. As long as it calls the wpdb object correctly, it should add tables just fine in their proper location. The only problem I could see, and this is speculation, would be if the db.php file were not included in place of the normal wpdb.php. However, IIRC, right before wpdb is included, it looks for db.php first. That being the case, it should install just fine from scratch with this in place.

    I’ll have to give the new one a try, but I’ve tried it with the older version and it was OK.

  • trent
    • Site Builder, Child of Zeus

    I would have to say that you define the information in /wp-content/scripts/move_blogs.php and, when you think you have it, just run the script. It first tells you whether you have the configuration correct and forces you to manually follow another link to actually “run” the script. That gets you to the point where you know it will run (it checks for the new DBs, permissions, etc.).

    After that I would make sure you have db-config.php set up with all the database information and global tables. I managed to miss a few global tables, and they showed up instantly in my error logs; it only took a matter of minutes to add them to db-config.php.

    Once you are ready to run the move_blogs script, with 6000 blogs I would take the site offline. I have quite a few less, so I just locked the tables with read access only, ran the script (it runs quickly), and once it gave me the green light I had my FTP client ready to drop db-config.php and db.php into the /wp-content/ folder. The blogs on my small install were offline for about 2 minutes, since I had tested the db-config.php information on another server first and knew it was correct.

    If all is defined correctly in the config and move blogs script, your site will only be offline for as long as it takes to run the move blogs script unless I am missing something else.


  • Luke
    • The Crimson Coder

    I did a site with that many recently. It wasn’t difficult, but it was kind of a pain.

    Had to up the execution time in php.ini quite a bit (you don’t want the script to quit in the middle of it), then I kicked all the users out. For that, I just renamed index.php and put in a temporary one. I also renamed .htaccess and used a temporary one to send file requests to index.php, as well as wp-admin requests.

    The temporary index.php file kicked up a 503 header (service temporarily unavailable), with a notice for spiders to check back in X amount of time. It also spat out a quick visual note for users about the site being under maintenance, and it has an area where I can put my IP address; if the visitor’s IP matches, it fires up WP instead.

    It worked out really well, and covered all the bases. Users knew what was going on, and the database wasn’t touched.
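    Luke’s temporary index.php was of course PHP, but the gate logic is simple enough to sketch in Python (the IP address and retry window below are made up for the example):

```python
MAINTENANCE_IPS = {"203.0.113.7"}  # hypothetical admin address

def maintenance_response(client_ip, retry_after=3600):
    """503 gate for the maintenance window.

    Admins (matched by IP) get None, meaning "load the real WP index";
    everyone else, spiders included, gets a 503 with a Retry-After hint
    so search engines know to come back rather than de-index the site."""
    if client_ip in MAINTENANCE_IPS:
        return None
    headers = {"Retry-After": str(retry_after)}
    body = "Down for maintenance -- back shortly."
    return ("503 Service Unavailable", headers, body)
```

    The 503 status plus Retry-After is the part that keeps spiders happy; a plain 200 “under maintenance” page risks getting the maintenance notice indexed instead.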

    Moved the blogs over to their DBs (note that all the prep work was done beforehand, other than actually moving them), added db.php, checked the site for problems, then put the original index and htaccess back in place, and that was it.

    I did run into a problem with the 1.0.0 version on a test site I was playing with, but that was resolved (1.0.1 corrects it, if I’m not mistaken).

  • ZappoMan
    • Design Lord, Child of Thor

    Here’s the ugly part of the move that I’m seeing in my testing… basically, if you drop the new db class in and your new databases and tables aren’t set up, then you get a pretty much broken site, where the wpdb class can’t read the site table or the other important tables needed to make the basic wpmu loop work.

    So without bringing the site down, it seems like the move_blogs script…

    Ok, I’m thinking this through a little bit more…

    I’d prefer to have the site down for NO TIME at all… or the shortest possible window…

    So, I was just thinking about this a little more, and I’m considering the following strategy…

    Background: I use master/slave today to be able to do nearly continuous backups of the database. I have a slave sitting there happily keeping things in sync; I regularly stop the slave, back it up, and bring it back online… the master is live all the time and the world is happy.

    So here’s my crazy idea… Tell me if you think this is nuts…

    What if I set up the SLAVES to run in the new multi-db mode, but had them slaving off only the tables that should belong in their DB after the move?

    Then I can basically get the slaves running and in sync for the new split-DB model, and when I’m ready to make the switch, just drop in db.php…

    What do you think?

  • Andrew
    • Champion of Loops
      What do you think?

    I think that would work. Although I also think it will be more trouble than it’s worth.

    Just figure out what day/time you have the least visitors, put up a notice in the admin panel for a few days, and take the site down as needed. Switching to multiple databases is a one-time thing. Most users will understand the downtime if you explain it to them.



  • Luke
    • The Crimson Coder

    I wholly agree with Andrew.

    Test it on your local copy, take it down in the middle of the night for an hour or so, be like Nike and just do it.

    Trying to jack with all that’s involved while running live isn’t the brightest idea I’ve ever heard. Too much room for errors and data loss, not to mention more of a pain in the ass than it needs to be.

    Create your databases, create your configuration file, get everything ready. From there, take the site down, run the move script, upload the config and db.php, and be done with it.

  • ZappoMan
    • Design Lord, Child of Thor

    Yeah, it doesn’t help my case that I may have actually been drunk when I wrote that… (not really)… but what the heck was going on with my grammar…. that was a bunch of half written non-sense… wasn’t it?

  • ZappoMan
    • Design Lord, Child of Thor

    Ok, I know you guys are strongly encouraging me to simply do an outage, do the migration, and flip it on in full multi-db mode…

    But I will admit I am a glutton for punishment, and so I feel compelled to find the easiest hard solution. :wink:

    So what about this idea:

    1) Set up multi-db config so that ALL existing blogs are “VIP” blogs.

    2) Set up multi-db config so that the VIP db and the global db are the same.

    3) Back up the database!!!! — Make sure you are ready to go nuts —

    4) Turn on multi-db.

    5) — I think the following should happen —

    a) all new blogs will be placed in their happy new hash based home

    b) all old blogs will be served out of the global db

    c) site pretty much behaves the same as before

    — Now, it’s time for migration —

    6) Stepping through each blog (in whatever order makes sense):

    a) mark the blog as “archived” in wp_blogs

    b) do the cool blog move trick as described in the scripts

    c) remove the blog from the VIP list

    d) switch the blog back to “public” (assuming it was public to start)

    7) Sit back and watch the magic

    Once you’ve walked through all the blogs, you have successfully migrated to Multi-DB with no system-wide downtime…
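    Step 6 above, sketched as a loop in Python. The three callables are placeholders for whatever actually flips the wp_blogs flags, copies the tables, and edits the VIP list; only the ordering is the point:

```python
def migrate_blog(blog_id, set_status, copy_tables, remove_vip):
    # 6a: archive the blog so the core stops touching its tables,
    # remembering the previous status so it can be restored afterwards.
    previous = set_status(blog_id, "archived")
    copy_tables(blog_id)           # 6b: copy tables to the hashed database
    remove_vip(blog_id)            # 6c: drop the blog from the VIP list
    set_status(blog_id, previous)  # 6d: restore "public" (or whatever it was)
```

    The crucial property is that the archive flag goes up before the copy starts and comes down only after the VIP mapping is gone, so the blog is never served from two places at once.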

    Thoughts? Reactions?

    No, I wasn’t drunk or on crack when I came up with this idea.

  • Andrew
    • Champion of Loops
      You have to admit… that if I could pull it off… you’d all be impressed. :slight_smile:

    I’d really just be amazed that you went through all that trouble :wink:

    Seriously though, what you described should work just fine. However I still think it’s more trouble than it’s worth.

    Out of curiosity, how many blogs do you have?



  • Andrew
    • Champion of Loops
      6000 blogs

    Personally I wouldn’t go through that much trouble for 6000 blogs because the downtime would be minimal.

    If a majority of your users are in the US then this weekend is the perfect time for a bit of downtime. Very few people would notice the site being down for a couple of hours Sunday night.



  • Luke
    • The Crimson Coder

    Honestly, and being pretty frank, trying to migrate data like this would be irresponsible.

    Sure, it’s neat to discuss in theory, but actually putting end users in a position that could potentially compromise their data through loss is, IMHO, a reckless attempt at cleverness at the end users’ expense.

  • ZappoMan
    • Design Lord, Child of Thor

    Ok Luke, I guess I’ll take the bait…

    Not sure why you think it’s any more reckless than any other solution.

    Either way, you’ve got to have a roll back strategy.

    In some sense, I feel like the rollback with what I’m proposing is pretty clean… After all, if VIPs work, then I’ve got a cleanly running system that works out of the gate, and I’m only migrating 1 blog at a time. I can take a lot of care in moving each blog over, making sure things are going well, and then move to the next blog.

    There’s actually a pretty large school of thought in the IT world that intentionally choosing the more complicated approach will FORCE you to slow down, triple-check every step, and make sure you do it in an error-free way.

    The tone of my own self-deprecating humor may sound like “cleverness” and may even sound a little cavalier, but make no mistake, I absolutely believe in the sanctity of the users’ data.

  • Luke
    • The Crimson Coder

    Why I see it as reckless: at some point, a database has to be moved and then connected to. The potential loss of data comes from this; the window between the time the copy starts, finishes, and the connection is switched is too large to chance. Yeah, we could be talking seconds here, but that’s enough time for a post to get lost.

    I’ve heard of and seen that school of thought, and every IT circle I’ve traveled in has laughed it out of the room. Just because a solution is as simple as possible doesn’t render it ineffective or error-prone. It’s the work ethic that causes that, nothing more.

    I did say it was interesting, but in my own mind it just leaves too much to chance.

    Create all new DB’s.

    Disconnect user interaction.

    Copy all tables to new db’s.

    Upload config, then db.php.



    If something goes wrong, you haven’t damaged any data, as the move blogs file accompanying this script only copies tables. If something goes bad, you truncate the new DBs and try again.

    With users disconnected, there is no chance of an update slipping through that window between the copy and the connection switch and being missed.

    It’s like a mechanic working on a car. They wouldn’t try to do it while it was running down the road. They’d pull it over, put it on the lift, fix it, and carry on.

  • ZappoMan
    • Design Lord, Child of Thor

    I do like the image of the mechanic working on the car while it’s running down the road. Actually, they do this on the Tour de France all the time… the bike mechanic rolls along hanging out of the car, trying to adjust the brakes or some other mechanical aspect of the bike while the cyclist is racing along at 25mph trying not to get dropped by the pack… WOW, does it make for some SPECTACULAR CRASHES when something goes wrong!

    Ok, so I hear you loud and clear…

    And, simply for the sake of continuing the technical discussion of what is actually happening in the wpmu code and the multi-db code, one of the reasons I like my idea is this…

    When you set a blog to ARCHIVED (or spam), the core will never touch the DB tables for that blog. Basically, in wp-settings.php the archived or spam status of a blog is checked via wp_blogs, and then no other tables are touched. So you can be certain that the wpmu core isn’t doing any reads or writes to those blog tables.

    That being said, it certainly could be the case that a plugin doesn’t pay attention to the archived flag and does read/write from the blog… but that’s probably an issue that one would want to address anyway.
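    That claim, as a one-line Python sketch (the dict stands in for a wp_blogs row; treating the flags as the strings “0”/“1” is an assumption about the schema):

```python
def core_touches_blog_tables(blog_row):
    # Per the reading of wp-settings.php above: wp_blogs is consulted
    # first, and if the blog is archived or marked as spam the core
    # bails out before reading or writing any of that blog's own tables.
    return blog_row.get("archived") != "1" and blog_row.get("spam") != "1"

print(core_touches_blog_tables({"archived": "1", "spam": "0"}))  # False
```

    Which is exactly why archiving a blog during its copy window makes the per-blog migration safe, modulo plugins that ignore the flag.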
