Sunday, January 11, 2015

Backup and Restore MySQL Database

Back up and restore a MySQL database from the command line

Backing up via the command line:


Type the following at the prompt with the appropriate USERNAME and DATABASE name:
mysqldump -u USERNAME -p DATABASE > dump.sql

You will be prompted for your database password and then the DATABASE will be dumped to a plain-text file called dump.sql.
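For large databases you may want to compress the dump on the fly. A minimal sketch, assuming gzip is available (the file name dump.sql.gz is just an example):

# compressed backup
mysqldump -u USERNAME -p DATABASE | gzip > dump.sql.gz

# and the matching restore
gunzip < dump.sql.gz | mysql -u USERNAME -p DATABASE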

Restoring via the command line:

First drop and recreate the database as needed:

Drop the database
mysqladmin -u USERNAME -p drop DATABASE

Recreate the database
mysqladmin -u USERNAME -p create DATABASE
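Alternatively, both steps can be done in one line with the mysql client's -e option (a sketch; substitute your real database name for the DATABASE placeholder):

mysql -u USERNAME -p -e "DROP DATABASE IF EXISTS DATABASE; CREATE DATABASE DATABASE;"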

Import the backup data
mysql -u USERNAME -p DATABASE < dump.sql
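To sanity-check the restore, you can list the tables that came back (an optional quick check, not part of the original steps):

mysql -u USERNAME -p -e "SHOW TABLES" DATABASE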

Linux: Replacing A Failed Hard Drive In A Software RAID1 Array


1- In this example I have two hard drives, /dev/sda and /dev/sdb, with the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2.

/dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0.
/dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1.

/dev/sda1 + /dev/sdb1 = /dev/md0
/dev/sda2 + /dev/sdb2 = /dev/md1

/dev/sdb has failed, and we want to replace it.

2-  How Do I Tell If A Hard Disk Has Failed?

If a disk has failed, you will probably find a lot of error messages in the log files, e.g. /var/log/messages or /var/log/syslog.

You can also run

cat /proc/mdstat
and instead of the string [UU] you will see [U_] if you have a degraded RAID1 array.
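If the smartmontools package is installed, smartctl can also query the drive's own health data (a sketch; adjust the device name to the suspect disk):

# quick overall health verdict
smartctl -H /dev/sdb

# full SMART attributes and error log
smartctl -a /dev/sdb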

3-  Removing The Failed Disk

To remove /dev/sdb, we will mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1).

First we mark /dev/sdb1 as failed:

mdadm --manage /dev/md0 --fail /dev/sdb1

The output of cat /proc/mdstat should look like this:

techyfix:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[2](F)
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]
unused devices: <none>

Then we remove /dev/sdb1 from /dev/md0:

mdadm --manage /dev/md0 --remove /dev/sdb1

The output should be like this:

techyfix:~# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1

And cat /proc/mdstat should show this:

techyfix:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]
unused devices: <none>

Now we do the same steps again for /dev/sdb2 (which is part of /dev/md1):

mdadm --manage /dev/md1 --fail /dev/sdb2

The output of cat /proc/mdstat should look like this:

techyfix:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[2](F)
24418688 blocks [2/1] [U_]
unused devices: <none>

mdadm --manage /dev/md1 --remove /dev/sdb2

techyfix:~# mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2

The output of cat /proc/mdstat should look like this:

techyfix:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
24418688 blocks [2/1] [U_]
unused devices: <none>

Then power down the system:


shutdown -h now

and replace the old /dev/sdb hard drive with a new one (it must be at least the same size as the old one; if it is even a few MB smaller, rebuilding the arrays will fail).


4- Adding The New Hard Disk

After you have changed the hard disk /dev/sdb, boot the system.
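Before partitioning, it may be worth confirming that the new drive really is at least as large as the surviving one (see the size warning above). A minimal sketch using blockdev from util-linux:

# print each disk's size in bytes; /dev/sdb must not report a smaller number
blockdev --getsize64 /dev/sda
blockdev --getsize64 /dev/sdb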

The first thing we must do now is to create the exact same partitioning as on /dev/sda. We can do this with one simple command:

sfdisk -d /dev/sda | sfdisk /dev/sdb

You can run

fdisk -l
to check if both hard drives have the same partitioning now.
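lsblk (also part of util-linux on most modern distros) gives a compact tree view of the disks, their partitions, and the md arrays they belong to:

lsblk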

Next we add /dev/sdb1 to /dev/md0 and /dev/sdb2 to /dev/md1:

mdadm --manage /dev/md0 --add /dev/sdb1

techyfix:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1

mdadm --manage /dev/md1 --add /dev/sdb2

techyfix:~# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: re-added /dev/sdb2

Now both arrays (/dev/md0 and /dev/md1) will be synchronized. Run

cat /proc/mdstat
to see when it’s finished.
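Instead of re-running the command by hand, you can monitor the rebuild continuously; watch (from the procps package) refreshes the output every two seconds by default:

watch cat /proc/mdstat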

During the synchronization the output will look like this:

techyfix:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/1] [U_]
[=>...................]  recovery =  9.9% (2423168/24418688) finish=2.8min speed=127535K/sec
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/1] [U_]
[=>...................]  recovery =  6.4% (1572096/24418688) finish=1.9min speed=196512K/sec
unused devices: <none>

When the synchronization is finished, the output will look like this:

techyfix:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]
unused devices: <none>

That’s it, you have successfully replaced /dev/sdb!
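As a final check, mdadm can print the detailed state of each array; both should report a clean state with two active devices:

mdadm --detail /dev/md0
mdadm --detail /dev/md1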



Change TimeZone in Linux

Log in to the server through SSH:

[root@techyfix ~]#

Edit the clock file located at the path below:

[root@techyfix ~]# vi /etc/sysconfig/clock

# change the zone to your location
ZONE="Asia/Kolkata"
Save the file after changing the zone.

[root@techyfix ~]# source /etc/sysconfig/clock
# reload

Copy your time zone file from under "/usr/share/zoneinfo" to /etc/localtime as follows:

[root@techyfix ~]# cp -p /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

It will ask whether to overwrite the file "localtime"; confirm by typing y:

cp: overwrite `/etc/localtime'? y

Your server's time zone has now been changed as needed.
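You can verify the change with the date command, which should now print the time with your new zone's abbreviation (IST for Asia/Kolkata):

[root@techyfix ~]# date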