InnoDB, given its (relatively) recent promotion to default engine in MySQL, can quickly catch an unfamiliar administrator off-guard if they’re not aware of how InnoDB works behind the scenes. With that in mind, there are a few key concepts you should be aware of if you’re working with any InnoDB tables – which, if you’re running cPanel on a recently updated system, you are by default:
1. Save for the transportable tablespace functionality introduced in MySQL 5.6, InnoDB table files should not be considered portable.
In MyISAM, the table file structure was fairly flexible: the engine would simply adapt when it detected that new table files or database folders were present. At a glance this sounds like a great feature, and in the context of MyISAM it is, but the reason it isn’t feasible in InnoDB is not that anyone is trying to tie you down or make things more difficult for you – it’s that InnoDB adheres to data consistency standards, specifically ACID compliance: Atomicity, Consistency, Isolation, and Durability.
In short, this means: Operations only succeed or fail (no in-between), data integrity is ensured, operations are kept from interfering with one another, and the data is stored in a way that minimizes impact from any environmental failures. To actually meet these standards, though, InnoDB must maintain a method of verifying the data. To do this, it stores information about your InnoDB tables and their respective databases in places other than the table files themselves. If something is moved out of place or changed by hand, InnoDB will notice and raise the alarm – because as far as it’s concerned, the data integrity has been compromised.
While 5.6’s transportable tablespace functionality is very useful, it’s still not as simple as dropping the table file into the database directory and firing up MySQL – though it’s much closer than we were before. If you’re interested in taking advantage of this feature, I’d certainly suggest looking through the documentation, as it can come in handy.
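For the curious, the 5.6 flow looks roughly like this – a minimal sketch, assuming a hypothetical InnoDB table `t1` that already exists with an identical definition on both the source and destination servers:

```sql
-- On the source server: quiesce the table and write out its .cfg metadata file
FLUSH TABLES t1 FOR EXPORT;
-- (while the lock is held, copy t1.ibd and t1.cfg out of the database directory)
UNLOCK TABLES;

-- On the destination server: detach the existing tablespace...
ALTER TABLE t1 DISCARD TABLESPACE;
-- (copy t1.ibd and t1.cfg into the destination's database directory)
-- ...then attach the copied one, letting InnoDB verify it against the .cfg data
ALTER TABLE t1 IMPORT TABLESPACE;
```

The `.cfg` file generated by the export step is what lets InnoDB validate the incoming tablespace – which is exactly the consistency checking that makes “just copy the .ibd over” a non-starter.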
2. Re-initializing the ibdata file is almost certainly not the answer.
There are very few situations where the data file must be re-created to resolve an issue. I’ve come across a number of guides and articles online recommending this, and I highly suggest not following their advice unless you are absolutely sure it is the right move for your situation. It’s often presented as a “fixes everything InnoDB-related” method, which – in a way – it is. If MySQL is crashing because of InnoDB, re-initializing the ibdata file will certainly bring MySQL back online. However, it will come back online without any of your data, and it will still see the now-orphaned table files and complain about them. So if the goal is simply to get MySQL online, and you’re not concerned about any of the data involved, this will do the trick – though it would be a good idea to at least move the old database folders out of your data directory first.
On the other hand, re-creating the log files (ib_logfile*) can actually be helpful in a few different situations, and is not nearly as risky (though of course backups should always be made first – be ready to restore the original files if you run into trouble). For example, in some versions of MySQL, corrupted log files can cause MySQL to segfault, producing a “Signal 11” error in the error logs (signal 11 is the POSIX signal for a generic segmentation fault, so this is not the only reason you might run into it). In situations like this, it makes complete sense to re-initialize the ib_logfiles. Afterwards, you’ll see log sequence number errors flood the error logs – don’t panic. That’s not a bad thing: the logs are fresh and don’t contain any data, and it will take a minute for InnoDB to run through the data and get everything up to speed. Keep in mind that the log files exist exclusively for crash recovery – their records are never used at any other time.
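The procedure itself is short. A sketch follows – the paths assume a default `/var/lib/mysql` data directory and an init-script-style service name, so adjust both for your system:

```shell
# 1. Shut MySQL down cleanly (if at all possible) so pending changes are flushed.
/etc/init.d/mysql stop

# 2. Move the existing redo logs aside – never delete them outright, so you can
#    restore the originals if the fresh logs cause new trouble.
mkdir -p /root/innodb-log-backup
mv /var/lib/mysql/ib_logfile* /root/innodb-log-backup/

# 3. Start MySQL; InnoDB will create fresh log files at the configured size.
/etc/init.d/mysql start
```

The key detail is step 2: moving rather than deleting is what makes this operation reversible.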
3. The innodb_force_recovery option should not be used as a way of keeping your server online.
I’ve come across too many servers where, upon investigating, it turns out that the my.cnf file has contained “innodb_force_recovery=6” unnecessarily, without the administrator’s knowledge – meaning it may have been enabled that way for a significant period of time. Granted, it’s difficult to go for any length of time in this recovery mode without noticing something, given the restrictions it involves, yet it does happen. On the surface, it may seem like a good idea to leave it this way, or in any of the other recovery modes – the important part is that your server stays online, right? The problem lies in how innodb_force_recovery works, and why it allows your MySQL server to start at all. Each mode, from the lowest severity (1) to the highest (6), disables core components of InnoDB – the very components that maintain the integrity and structure of your data. The higher you go, the more of the InnoDB engine you’re invalidating. The feature is included solely so that you can perform dumps or otherwise retrieve specific MySQL data for recovery, stop MySQL again, and continue with your primary recovery method.
Operating with innodb_force_recovery enabled, even at its lowest value, just puts your data at additional risk and can introduce new problems into the mix. The rule of thumb is that modes 1-3 can generally be enabled without much risk to your data (though stability and functionality will be affected), while modes 4 and above can permanently corrupt data and should only be used in extreme situations. I posted a guide a little while ago on the forums that includes a bit more detail on innodb_force_recovery, as well as some circumstances to use it in; however, there’s no substitute for going through the official documentation on this topic.
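When you genuinely do need it, the safe pattern is a strictly temporary my.cnf entry – a sketch, assuming the data is readable enough for a dump:

```ini
# /etc/my.cnf – TEMPORARY addition. Start at 1 and only increase the value
# if MySQL still fails to start; remove it as soon as the dump completes.
[mysqld]
innodb_force_recovery = 1
```

With that in place: start MySQL, take your dump (for example `mysqldump --all-databases > all.sql`), stop MySQL, and remove the option before doing anything else. The recovery mode exists to get data out, not to run under.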
4. Seeing an InnoDB crash recovery in the MySQL error logs, on its own, is a good thing.
Believe it or not, InnoDB is designed to crash. That’s a bit of an exaggeration, but its components function the way they do because there are fail-safes in place to save them if things go awry. One of the reasons InnoDB’s performance is highly regarded is its ability to perform sequential I/O, which in the context of InnoDB requires that writes be delayed so they can be grouped and re-ordered before being flushed to disk. The trade-off is that delaying writes means data changes sit in memory before they reach the disk, introducing the risk that a system crash will wipe out anything that hadn’t been flushed yet.
To solve this, InnoDB has the redo log files. By default, InnoDB stores two of these in your data directory, named “ib_logfile0” and “ib_logfile1” (the number of files, and their sizes, can be adjusted via startup parameters defined in my.cnf, as documented here). InnoDB treats them as a single, concatenated file that functions as a circular buffer: records are appended until the end is reached, at which point writing wraps around to the beginning, overwriting the oldest records. This makes the size of these files very relevant to the recovery procedure, its capabilities, and its performance.
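Those parameters look something like this in my.cnf – the sizes below are purely illustrative, not recommendations:

```ini
[mysqld]
# Two redo log files of 256M each, for a 512M combined redo log. Larger logs
# can absorb more un-flushed changes (at the cost of longer crash recovery);
# smaller logs force more frequent flushing.
innodb_log_files_in_group = 2
innodb_log_file_size      = 256M
```

Note that on older MySQL versions, changing innodb_log_file_size requires a clean shutdown and moving the existing ib_logfiles aside before restarting with the new size.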
When changes are made, a corresponding record is created within the log files which includes a log sequence number that is tracked by InnoDB. In the event of a crash, upon MySQL starting back up, InnoDB will be able to determine whether any changes exist that had not been flushed to disk prior to the crash, and it will automatically recover these. This is one of the ways that InnoDB maintains its “durability” side of the ACID concept.
With that in mind, if you see crash recoveries occurring – again, don’t panic. As long as everything comes back online, and there are no other, more severe errors occurring, you could be A-OK. However, if crash recoveries are frequent, or if they’re the result of MySQL being unexpectedly terminated and restarted over and over, then you could have another problem on your hands, and it would be a good idea to examine the system logs to try to determine what’s going on.
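As a starting point, something like the following can reveal whether the kernel’s OOM killer is behind the repeated restarts – log locations vary by distribution, so treat these paths as assumptions:

```shell
# Check the kernel ring buffer for out-of-memory kills
dmesg | grep -i -E 'out of memory|killed process'

# Check the syslog for recent mysqld-related messages
# (on some distributions this is /var/log/syslog instead)
grep -i mysqld /var/log/messages | tail -n 50
```

If mysqld shows up as an OOM victim, the crash recoveries are a symptom, and the memory pressure on the server is the actual problem to chase.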
These are just a few of the big issues I run into fairly frequently – issues that can easily be avoided, saving you from a lot more pain than a simple corrupt page or data dictionary error. If you have any questions about anything you run into, or if you have anything to add, feel free to leave a comment for us here!