Upgrade MySQL on CentOS

Sometimes you may run across a scenario where you have to update MySQL. This is easy enough to do; however, you should always test the upgrade on a dev server before applying it to production, just in case you run into problems.

As a critical note: before performing the update, make sure you have a working mysqldump of all your databases. This cannot be stressed enough! There are many ways of taking a mysqldump, and you must be sure you can actually restore from those backups as well. One possible method, which backs up all the databases into a single large file (and locks the tables, creating possible downtime), would be:

[root@db01 ~]# mysqldump --all-databases --master-data | gzip -1 > /root/all.sql.gz
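
Since an untested backup is not really a backup, at minimum inspect the start of the dump, and ideally restore it into a throwaway instance. A quick sketch (test-db01 below is a hypothetical scratch server; never pipe a restore into production):

[root@db01 ~]# zcat /root/all.sql.gz | head -n 20
[root@db01 ~]# zcat /root/all.sql.gz | mysql -h test-db01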

On CentOS, I prefer to use the IUS repos as they are actively maintained, and, importantly, they do not overwrite stock packages.

So to get started, first set up the IUS repo if it isn’t already installed on your server:

# CentOS 6
[root@db01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/ius-release-1.0-14.ius.centos6.noarch.rpm

# CentOS 7
[root@db01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/7/x86_64/ius-release-1.0-14.ius.centos7.noarch.rpm

To upgrade MySQL, yum has a plugin called ‘replace’ (packaged as yum-plugin-replace), which will automatically replace one package with another of your choosing. This simplifies the process of upgrading MySQL.

First, confirm that you are not already running another custom version of MySQL:

[root@db01 ~]# rpm -qa |grep -i mysql
mysql55-server-5.5.45-1.ius.el6.6.z.x86_64
mysql55-5.5.45-1.ius.el6.6.z.x86_64
...

Using the output from above, it looks like we just have MySQL 5.5 installed. I want to upgrade from MySQL 5.5 to MySQL 5.6. Here is how you would run the replacement:

[root@db01 ~]# yum install yum-plugin-replace
[root@db01 ~]# yum replace mysql55 --replace-with mysql56u
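
Once the replace completes, confirm the new packages are in place and check the version:

[root@db01 ~]# rpm -qa | grep -i mysql
[root@db01 ~]# mysql -V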

During the upgrade process, I noticed that I could no longer log in with the root MySQL user. If that happens, reset the root MySQL password by starting MySQL with the grant tables disabled, updating the password, then restarting normally:

[root@db01 ~]# service mysqld stop
[root@db01 ~]# mysqld_safe --skip-grant-tables --skip-networking &
[root@db01 ~]# mysql -uroot
mysql> use mysql;
mysql> update user set password=PASSWORD("enternewpasswordhere") where User='root';
mysql> flush privileges;
mysql> quit
[root@db01 ~]# mysqladmin -uroot -p shutdown
[root@db01 ~]# service mysqld start

Once the version has been updated, be sure to run mysql_upgrade. mysql_upgrade examines all tables in all databases for incompatibilities with the current version of MySQL Server. mysql_upgrade also upgrades the system tables so that you can take advantage of new privileges or capabilities that might have been added.

[root@db01 ~]# mysql_upgrade

If you find that the upgrade is not going to work for your environment, you can roll back to the original version. Keep in mind that MySQL does not support downgrading data files in place between release series, so you may also need to restore from the mysqldump you took earlier:

[root@db01 ~]# yum replace mysql56u --replace-with mysql55
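
If the older packages will not start against the already-upgraded data files, restore from the dump taken at the beginning of this process. A quick sketch, assuming mysqld is running again with a freshly initialized data directory:

[root@db01 ~]# zcat /root/all.sql.gz | mysql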

The yum-replace plugin makes upgrading and downgrading MySQL very fast and simple. But just to reiterate an earlier statement: make sure you test this out on a development server before applying it to your production server! It is always possible that something may not be compatible with the new version of MySQL, so always test first so you know what to expect.

Upgrade PHP on CentOS

The version of PHP that ships with CentOS 6 and CentOS 7 is getting a bit outdated. Oftentimes, people will want to use a newer version of PHP, such as PHP 5.6. This is easy enough to do; however, you should always test this out on a dev server before applying to production, just in case you run into problems.

On CentOS, I prefer to use the IUS repos as they are actively maintained, and, importantly, they do not overwrite stock packages.

So to get started, first set up the IUS repo if it isn’t already installed on your server:

# CentOS 6
[root@web01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/ius-release-1.0-14.ius.centos6.noarch.rpm

# CentOS 7
[root@web01 ~]# rpm -ivh http://dl.iuscommunity.org/pub/ius/stable/CentOS/7/x86_64/ius-release-1.0-14.ius.centos7.noarch.rpm

To upgrade PHP, yum has a plugin called ‘replace’ (packaged as yum-plugin-replace), which will automatically replace one package with another of your choosing. This greatly simplifies the process of upgrading PHP.

First, confirm that you are not already running another custom version of PHP:

[root@web01 ~]# rpm -qa |grep -i php
php-tcpdf-dejavu-sans-fonts-6.2.11-1.el6.noarch
php-cli-5.3.3-46.el6_7.1.x86_64
php-pdo-5.3.3-46.el6_7.1.x86_64
...

Using the output from above, it looks like we just have the stock PHP version installed. I want to upgrade from PHP 5.3, which is the default package on CentOS 6, to PHP 5.6. Here is how you would run the replacement:

[root@web01 ~]# yum install yum-plugin-replace
[root@web01 ~]# yum replace php --replace-with php56u
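
Once the replace completes, confirm the new version is active; if you are serving PHP through Apache’s mod_php, restart Apache so the new module loads:

[root@web01 ~]# php -v
[root@web01 ~]# service httpd restart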

Perhaps you find that your application doesn’t work with PHP 5.6, so you want to try PHP 5.5 instead:

[root@web01 ~]# yum install yum-plugin-replace
[root@web01 ~]# yum replace php56u --replace-with php55u

Or maybe you find that the upgrade is not going to work for your environment, so you want to roll back to the original version:

[root@web01 ~]# yum replace php55u --replace-with php

The yum-replace plugin makes upgrading and downgrading PHP very fast and simple. But just to reiterate an earlier statement: make sure you test this out on a development server before applying it to your production server! It’s always possible that a module that worked in PHP 5.3 is deprecated in a newer version of PHP, or that your site code is using deprecated functions that no longer exist. So always test first so you know what to expect!

Upgrade PHP on Ubuntu

This guide will not cover Ubuntu 12.04 at this time, as the PPA from ondrej appears to upgrade Apache 2.2 to 2.4, and I have not been able to install it cleanly. If PHP 5.5 or 5.6 is needed, I suggest migrating to Ubuntu 14.04 for the time being.

The version of PHP that ships with Ubuntu 14.04 is PHP 5.5, which is starting to get a bit outdated. Oftentimes, people will want to use a newer version of PHP, such as PHP 5.6 or PHP 7.0. This is easy enough to do, however you should always test this out on a dev server before applying to production just in case you run into problems.

On Ubuntu, it looks like the preferred method is to use the PPA from ondrej. So to get started, first update your existing repos, then add the new PPA:

# Update PHP 5.5 on Ubuntu 14.04 to PHP 5.6
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get install software-properties-common
[root@web01 ~]# add-apt-repository ppa:ondrej/php
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get -y install php5.6 php5.6-cli php5.6-mysql php5.6-mcrypt php5.6-mbstring php5.6-curl php5.6-gd php5.6-intl php5.6-xsl php5.6-zip
[root@web01 ~]# a2dismod php5
[root@web01 ~]# a2enmod php5.6
[root@web01 ~]# service apache2 reload

# Update PHP 5.5 on Ubuntu 14.04 to PHP 7.0
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get install software-properties-common
[root@web01 ~]# add-apt-repository ppa:ondrej/php
[root@web01 ~]# apt-get -y update
[root@web01 ~]# apt-get -y install php7.0 php7.0-cli php7.0-mysql php7.0-mcrypt php7.0-mbstring php7.0-curl php7.0-gd php7.0-intl php7.0-xsl php7.0-zip
[root@web01 ~]# a2dismod php5
[root@web01 ~]# a2enmod php7.0
[root@web01 ~]# service apache2 restart
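
In either case, confirm which version the CLI now reports; for the Apache side, a phpinfo() page will confirm the module version:

[root@web01 ~]# php -v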

Perhaps you find that your application doesn’t work with PHP 5.6 or PHP 7.0, so you want to roll back to stock PHP 5.5 instead:

# Downgrade PHP 5.6 on Ubuntu 14.04 to PHP 5.5
[root@web01 ~]# apt-get install ppa-purge
[root@web01 ~]# ppa-purge ppa:ondrej/php
[root@web01 ~]# add-apt-repository -r ppa:ondrej/php
[root@web01 ~]# apt-get remove php5.6-common php5.6-cli
[root@web01 ~]# apt-get autoremove
[root@web01 ~]# apt-get install php5 php5-cli php5-common
[root@web01 ~]# a2enmod php5
[root@web01 ~]# service apache2 restart

# Downgrade PHP 7.0 on Ubuntu 14.04 to PHP 5.5
[root@web01 ~]# apt-get install ppa-purge
[root@web01 ~]# ppa-purge ppa:ondrej/php
[root@web01 ~]# add-apt-repository -r ppa:ondrej/php
[root@web01 ~]# apt-get remove php7.0-common php7.0-cli
[root@web01 ~]# apt-get autoremove
[root@web01 ~]# apt-get install php5 php5-cli php5-common
[root@web01 ~]# a2enmod php5
[root@web01 ~]# service apache2 restart

Logrotate examples

Logrotate is a useful application for automatically rotating your log files. If you choose to store certain logs in directories that logrotate doesn’t know about, you need to create a definition for this.

I have posted a few articles about this for various scenarios, but I wanted to include one that just contains examples for reference.

Typically, entries for logrotate should be stored inside of: /etc/logrotate.d/

To rotate out the MySQL slow query log after it reaches 125M in size, keeping 2 rotated logs, use the following:

[root@db01 ~]# vim /etc/logrotate.d/mysqllogs
/var/lib/mysql/slow-log {
        missingok
        rotate 2
        size 125M
        create 640 mysql mysql
}
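
To test a new definition without actually rotating anything, logrotate’s debug flag performs a dry run:

[root@db01 ~]# logrotate -d /etc/logrotate.d/mysqllogs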

To rotate a custom log file for your application daily, keep 7 days’ worth of logs, compress them, and ensure the files stay owned by apache:

[root@web01 ~]# vim /etc/logrotate.d/applicationname
/var/www/vhosts/example.com/application/logs/your_app.log {
        missingok
        daily
        rotate 7
        compress
        create 644 apache apache
}

If you would like to rotate your Holland backup logs weekly, keeping one month’s worth of logs, compressing them, and ensuring the files stay owned by root:

[root@db01 ~]# vim /etc/logrotate.d/holland
/var/log/holland/holland.log {
    rotate 4
    weekly
    compress
    missingok
    create root adm
}

If you would like to rotate out 2 logs using one definition, simply add the second path above the opening brace as shown below:

[root@db01 ~]# vim /etc/logrotate.d/holland
/var/log/holland.log
/var/log/holland/holland.log {
    rotate 4
    weekly
    compress
    missingok
    create root adm
}

Fixing invalid system activity with sar

Sometimes sar can return errors complaining about an invalid system activity file, such as:

sar
Invalid system activity file: /var/log/sysstat/sa03

This can be resolved by removing the corrupt data file and restarting the sysstat collector:

rm /var/log/sysstat/sa03
/etc/init.d/sysstat start
* Starting the system activity data collector sadc [ OK ]

Now if you rerun sar, you should see the stats output starting to populate:

sar
Linux 3.2.0-83-virtual (web01) 04/03/2014 _x86_64_ (8 CPU)
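
Once collection has resumed, sar can also read a specific day’s file directly, which is handy for reviewing historical stats:

sar -f /var/log/sysstat/sa03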

How to add a 2G swap file to a server

Some cloud servers do not come with swap. There are many arguments about why this is a good thing, and others as to why it is bad. However, sometimes you just want that extra buffer for one reason or another. So here is a quick way to add a 2G swap file to an existing cloud server:

Create the initial 2G file

mkdir /opt/swapfiles
touch /opt/swapfiles/2G-swap
dd if=/dev/zero of=/opt/swapfiles/2G-swap bs=1024 count=2097152

Now set it up to be swap, and enable it:

mkswap /opt/swapfiles/2G-swap
chmod 600 /opt/swapfiles/2G-swap
swapon /opt/swapfiles/2G-swap

Set it in /etc/fstab so it will persist across reboots

vi /etc/fstab
# Add
/opt/swapfiles/2G-swap none swap sw 0 0
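
To prove the fstab entry works without a reboot, deactivate the swap file and let swapon re-read fstab:

swapoff /opt/swapfiles/2G-swap
swapon -a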

Confirm your vm.swappiness is not set to 0:

vi /etc/sysctl.conf
# Change
vm.swappiness = 0
# To
vm.swappiness = 10

Finally, update the sysctl without having to reboot:

sysctl vm.swappiness=10
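
You can verify the running value with:

sysctl vm.swappiness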

You can confirm the swap space is now active by:

free -h
             total       used       free     shared    buffers     cached
Mem:          990M       905M        85M        43M       124M       527M
-/+ buffers/cache:       253M       736M 
Swap:         2.0G         0B       2.0G 

Scrutiny

Being asked at 9AM to determine what caused a system to have problems at 2:30AM can be a weary task. If the normal system logs do not give us any real hints about what may have caused the issue, we oftentimes are trapped giving the really poor answer of “We cannot replicate the issue that you experienced overnight, and the logs are not giving us enough information to go on, so we’ll have to watch for it tonight to see if it re-occurs.” Times like that make a sysadmin feel completely helpless.

What if you could see what processes were running on the system at prescribed intervals? And not just processes, but what queries were running, how many people were hitting Apache, what types of network connections you were getting, on top of other information that can be gathered from tools like vmstat, iostat, etc.? Now you can draw better conclusions, because you will know what was happening at any single point in time.

Welcome Scrutiny! Located over on GitHub, it is a tool based on recap, rewritten to suit my own needs for portability between Red Hat, Debian, and FreeBSD based systems, as well as to allow simple modification of which metrics are gathered to best suit your own environment.

Features

– Simple code base for quick customizations
– Ability to enable/disable groups of checks
– Easy to add/modify/remove individual metric gathering
– Uses tools such as ps, top, df, vmstat, iostat, netstat, mysqladmin, and Apache’s server-status module to help create a point-in-time snapshot of the system’s events.

Configuration

The currently configurable options and thresholds are listed below:

# Enable / Disable Statistics
process_log=on
resource_log=on
network_log=on
mysql_log=on
apache_log=on

# Retention Days
retension=2

# Logs
basedir=/var/log/scrutiny

Implementation

Clone the repository to the desired directory and set the script to be executable:

# Linux based systems
cd /root
git clone https://github.com/stephenlang/scrutiny
chmod 755 scrutiny/linux/scrutiny.sh

# FreeBSD based systems
cd /root
git clone https://github.com/stephenlang/scrutiny
chmod 755 scrutiny/freebsd/scrutiny.sh

After configuring the tunables in the script (see above), create a cron job to execute the script every 10 minutes:

crontab -e
*/10 * * * * /root/scrutiny/linux/scrutiny.sh
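
Before waiting on cron, you can run the script once by hand to confirm it executes cleanly:

/root/scrutiny/linux/scrutiny.sh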

Now, days later, if a problem was reported during the overnight and you were able to narrow it down to a specific timeframe, you will be able to look at the point-in-time snapshots of the system events that occurred:

ls /var/log/scrutiny

Keeping multiple web servers in sync with rsync

People looking to create a load balanced web server solution often ask, how can they keep their web servers in sync with each other? There are many ways to go about this: NFS, lsync, rsync, etc. This guide will discuss a technique using rsync that runs from a cron job every 10 minutes.

There will be two different options presented, pulling the updates from the master web server, and pushing the updates from the master web server down to the slave web servers.

Our example will consist of the following servers:

web01.example.com (192.168.1.1) # Master Web Server
web02.example.com (192.168.1.2) # Slave Web Server
web03.example.com (192.168.1.3) # Slave Web Server

Our master web server is going to be the single source of truth for the web content of our domain. Therefore, the web developers will only modify content on the master web server, and we will let rsync handle keeping all the slave nodes in sync with it.

There are a few prerequisites that must be in place:
1. Confirm that rsync is installed.
2. If pulling updates from the master web server, all slave servers must be able to SSH to the master server using an SSH key with no passphrase.
3. If pushing updates from the master down to the slave servers, the master server must be able to SSH to the slave web servers using a SSH key with no passphrase.

To be proactive about monitoring the status of the rsync job, both scripts posted below allow you to perform an HTTP content check against a status file to see if the string “SUCCESS” exists. If something other than SUCCESS is found, the rsync script may have failed and should be investigated. An example of the URL to monitor would be: 192.168.1.1/datasync.status

Please note that the assumption is being made that your web server will serve files that are placed in /var/www/html/. If not, please update the $status variable accordingly.
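
A minimal external check against that status URL (a sketch, assuming curl is available on your monitoring host) could look like:

curl -s http://192.168.1.1/datasync.status | grep -q SUCCESS || echo "datasync job needs attention"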

Using rsync to pull changes from the master web server:

This is especially useful if you are in a cloud environment and scale your environment by snapshotting an existing slave web server to provision a new one. When the new slave web server comes online, and assuming it already has the SSH key in place, it will automatically grab the latest content from the master server with no interaction needed on your part, other than testing and then enabling it in your load balancer.

The disadvantage of using the pull method for your rsync updates comes into play when you have multiple slave web servers all running the rsync job at the same time. This can put a strain on the master web server’s CPU, which can cause performance degradation. However, if you have under 10 servers, or if your site does not have a lot of content, the pull method should work fine.

Below is the procedure for setting this up:

1. Create SSH keys on each slave web server:

ssh-keygen -t dsa

2. Now copy the public key generated on the slave web server (/root/.ssh/id_dsa.pub) and append it to the master web server’s /root/.ssh/authorized_keys2 file.
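
One way to append the key in a single step (assuming password authentication is still enabled on the master at this point) is:

cat /root/.ssh/id_dsa.pub | ssh root@192.168.1.1 'cat >> /root/.ssh/authorized_keys2'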

3. Test SSH’ing in as root from the slave web server to the master web server:
# On web02

ssh root@192.168.1.1

4. Assuming you were able to log in to the master web server cleanly, it’s time to create the rsync script on each slave web server. Please note that I am assuming your sites’ document roots are stored in /var/www/vhosts. If not, change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/pull-datasync.sh

#!/bin/bash
# pull-datasync.sh : Pull site updates down from master to front end web servers via rsync

status="/var/www/html/datasync.status"

if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to pull from the master server. Will retry shortly" > $status
    exit 1
fi

/bin/mkdir /tmp/.rsync.lock

if [ $? -ne 0 ]; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

echo "===== Beginning rsync ====="

# Treat any non-zero rsync exit code as a failure
nice -n 20 /usr/bin/rsync -axvz --delete -e ssh root@192.168.1.1:/var/www/vhosts/ /var/www/vhosts/

if [ $? -ne 0 ]; then
    echo "FAILURE : rsync failed. Please refer to solution documentation" > $status
    exit 1
fi

echo "===== Completed rsync ====="

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/pull-datasync.sh
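
Before handing the script off to cron, run it once by hand and check the status file it writes:

/opt/scripts/pull-datasync.sh
cat /var/www/html/datasync.status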

Using rsync to push changes from the master web server down to slave web servers:

Using rsync to push changes from the master down to the slaves also has some important advantages. First off, the slave web servers will not have SSH access to the master server, which could become critical if one of the slave servers is ever compromised and tries to gain access to the master web server. The next advantage is that the push method does not cause serious CPU strain, because the master runs rsync against the slave servers one at a time.

The disadvantage here would be if you have a lot of web servers syncing content that changes often. It’s possible that your updates will not be pushed down to the web servers as quickly as expected, since the master server syncs the servers one at a time. So be sure to test this out to see if the results work for your solution. Also, if you are cloning your servers to create additional web servers, you will need to update the rsync configuration accordingly to include the new node.

Below is the procedure for setting this up:

1. To make administration easier, it’s recommended to set up the /etc/hosts file on the master web server with a list of all the servers’ hostnames and internal IPs.

vi /etc/hosts
192.168.1.1 web01 web01.example.com
192.168.1.2 web02 web02.example.com
192.168.1.3 web03 web03.example.com

2. Create SSH keys on the master web server:

ssh-keygen -t dsa

3. Now copy the public key generated on the master web server (/root/.ssh/id_dsa.pub) and append it to each slave web server’s /root/.ssh/authorized_keys2 file.

4. Test SSH’ing in as root from the master web server to each slave web server:
# On web01

ssh root@web02

5. Assuming you were able to log in to the slave web servers cleanly, it’s time to create the rsync script on the master web server. Please note that I am assuming your sites’ document roots are stored in /var/www/vhosts. If not, change the script accordingly and test!

mkdir -p /opt/scripts/
vi /opt/scripts/push-datasync.sh

#!/bin/bash
# push-datasync.sh - Push site updates from master server to front end web servers via rsync

# Only list the slave web servers here; the master should not push to itself
webservers=(web02 web03)
status="/var/www/html/datasync.status"

if [ -d /tmp/.rsync.lock ]; then
    echo "FAILURE : rsync lock exists : Perhaps there is a lot of new data to push to front end web servers. Will retry soon." > $status
    exit 1
fi

/bin/mkdir /tmp/.rsync.lock

if [ $? -ne 0 ]; then
    echo "FAILURE : can not create lock" > $status
    exit 1
else
    echo "SUCCESS : created lock" > $status
fi

for i in "${webservers[@]}"; do

    echo "===== Beginning rsync of $i ====="

    # Treat any non-zero rsync exit code as a failure
    nice -n 20 /usr/bin/rsync -avzx --delete -e ssh /var/www/vhosts/ root@$i:/var/www/vhosts/

    if [ $? -ne 0 ]; then
        echo "FAILURE : rsync failed. Please refer to the solution documentation" > $status
        exit 1
    fi

    echo "===== Completed rsync of $i ====="
done

/bin/rm -rf /tmp/.rsync.lock
echo "SUCCESS : rsync completed successfully" > $status

Be sure to set executable permissions on this script so cron can run it:

chmod 755 /opt/scripts/push-datasync.sh
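
As with the pull variant, run it once manually and spot-check that a slave actually received the content:

/opt/scripts/push-datasync.sh
ssh root@web02 'ls /var/www/vhosts/'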

Now that you have the script in place and tested, it’s time to set it up to run automatically via cron. For the example here, I am setting up cron to run the script every 10 minutes.

If using the push method, put the following into the master web server’s crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/push-datasync.sh

If using the pull method, put the following into each slave web server’s crontab:

crontab -e
# Datasync script
*/10 * * * * /opt/scripts/pull-datasync.sh

Using logrotate for custom log directories with wildcards

Logrotate is a useful application for automatically rotating your log files. If you choose to store certain logs in directories that logrotate doesn’t know about, you need to create a definition for this.

In the example listed near the bottom of this post, we show how to rotate logs stored in /home/sites/logs. These are Apache access and error logs, plus a third set of log files ending in .log.

If you ran through this quickly, you might simply create a definition with a bare wildcard, such as:
/home/sites/logs/*

Unfortunately, logrotate will take this literally and rotate compressed log files that were already rotated, leaving you with a mess in your log directories like this:

/home/sites/logs/http-access.log.1.gz
/home/sites/logs/http-access.log.1.gz.1
/home/sites/logs/http-access.log.1.gz.1.1
/home/sites/logs/http-access.log.1.gz.1.1.1
/home/sites/logs/http-access.log.1.gz.1.1.1.1

And it just goes downhill from there. This exact thing happened to me because I forgot to read the man page, which clearly states:

Please use wildcards with caution. If you specify *, logrotate will rotate all files, including previously rotated ones. A way around this is to use the olddir directive or a more exact wildcard (such as *.log).

So using wildcards is still acceptable, but use them with caution. As I had three types of files to rotate in this example, I found that I could string the paths together as follows:

/home/sites/logs/*.error /home/sites/logs/*.access /home/sites/logs/*.log

I’ll post the example configuration below, which was implemented on a FreeBSD solution for this custom log directory:

cat /usr/local/etc/logrotate.conf
# see "man logrotate" for details
# rotate log files daily
daily

# keep 30 days worth of backlogs
rotate 30

# create new (empty) log files after rotating old ones
create

# compress rotated log files
compress

# RPM packages drop log rotation information into this directory
# include /usr/local/etc/logrotate.d

# system-specific logs may be configured here
/home/sites/logs/*.error
/home/sites/logs/*.access
/home/sites/logs/*.log {
    daily
    rotate 30
    sharedscripts
    postrotate
        /usr/local/etc/rc.d/apache22 reload > /dev/null 2>/dev/null
    endscript
}

Now let’s test this out to confirm the logs will rotate properly by running logrotate a few times:

logrotate -f /usr/local/etc/logrotate.conf
logrotate -f /usr/local/etc/logrotate.conf
logrotate -f /usr/local/etc/logrotate.conf

When you check your logs directory, you should now see the files rotating out properly:

/home/sites/logs/http-access.log
/home/sites/logs/http-access.log.1.gz
/home/sites/logs/http-access.log.2.gz
/home/sites/logs/http-access.log.3.gz

RCS – Introduction

When there are 40+ admins logging into a client’s server, it can become difficult to keep track of who modified what. More importantly, in the event that a change creates an undesired result, you need to be able to find out exactly what was changed so it can be quickly rolled back. This also becomes a critical component of change control if the client has to meet specific security requirements such as PCI-DSS 2.0.

This system of revision control is a much cleaner way to track changes than creating a pile of files like apache2.bak, apache2.20120212, apache2.conf.031212, etc. Instead, you can view all the available versions of a file simply by running:
rlog /etc/apache2/apache2.conf

RCS offers the following features in a very easy-to-use CLI:

- Store and retrieve multiple revisions of text
- Maintain a complete history of changes
- Maintain a tree of revisions
- Automatically identify each revision with name, revision number, creation time, author, etc.
- And much more

For our specific use case, critical files to check into RCS would be configuration files such as /etc/sysctl.conf, /etc/ssh/sshd_config /etc/vsftpd/vsftpd.conf, /etc/httpd/conf/httpd.conf and stuff of that nature.

If RCS is not already installed, simply run the following, depending on your operating system:

# CentOS / Red Hat
yum install rcs

# Ubuntu / Debian
apt-get install rcs

Basic Use Case
The easiest way to learn RCS is to see it in action. So in the use case below, we are going to perform a series of changes to the apache2.conf file. Before making changes to the file, check it into RCS first so we have a starting point:

root@web01:/etc/apache2# ci -l -wjdoe /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  <--  /etc/apache2/apache2.conf
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> Original Apache Configuration File
>> .
initial revision: 1.1
done

Now we can make our change to the config. As an example, we are going to be making some tuning changes to Apache.

vi /etc/apache2/apache2.conf

Once our changes are made, we check the changes in:

root@web01:/etc/apache2# ci -l -wjdoe /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  <--  /etc/apache2/apache2.conf
new revision: 1.2; previous revision: 1.1
enter log message, terminated with single '.' or end of file:
>> Tuning changes per ticket #123456
>> .
done

Pretend a few days go by and you receive a call from the client reporting issues with Apache. You log into the server and check to see if anyone recently made changes to Apache:

root@web01:/etc/apache2# rlog /etc/apache2/apache2.conf

RCS file: /etc/apache2/apache2.conf,v
Working file: /etc/apache2/apache2.conf
head: 1.2
branch:
locks: strict
        root: 1.2
access list:
symbolic names:
keyword substitution: kv
total revisions: 2;     selected revisions: 2
description:
Original Apache Configuration File
----------------------------
revision 1.2    locked by: root;
date: 2012/03/19 15:44:06;  author: jdoe;  state: Exp;  lines: +3 -3
Tuning changes per ticket #123456
----------------------------
revision 1.1
date: 2012/03/19 15:28:38;  author: jdoe;  state: Exp;
Initial revision
=============================================================================

So this tells us that user jdoe made changes to apache2.conf on 3/19/2012 per ticket #123456. Let’s see what changes he made by comparing version 1.1 to version 1.2:

root@web01:/etc/apache2# rcsdiff -r1.1 -r1.2 /etc/apache2/apache2.conf
===================================================================
RCS file: /etc/apache2/apache2.conf,v
retrieving revision 1.1
retrieving revision 1.2
diff -r1.1 -r1.2
77c77
< KeepAlive On
---
> KeepAlive Off
105,106c105,106
<     MaxSpareServers      10
<     MaxClients          150
---
>     MaxSpareServers      1
>     MaxClients          15
root@web01:/etc/apache2#

From the looks of this, it appears he may have typoed the MaxClients and MaxSpareServers values when working that ticket. So let’s roll back the configuration file to version 1.1, since that was the last known working version:

root@web01:/etc/apache2# co -r1.1 /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  -->  /etc/apache2/apache2.conf
revision 1.1
writable /etc/apache2/apache2.conf exists; remove it? [ny](n): y
done

Then test Apache to confirm everything is working again. Be sure to commit your changes, as a rollback is still a change:

root@web01:/etc/apache2# ci -l -wmsmith /etc/apache2/apache2.conf
/etc/apache2/apache2.conf,v  <--  /etc/apache2/apache2.conf
new revision: 1.3; previous revision: 1.2
enter log message, terminated with single '.' or end of file:
>> Rolling back changes made in ticket #123456 due to problems
>> .
done

When the next person logs in to see what changes have been made to apache2.conf, they will see the following:

root@web01:/etc/apache2# rlog /etc/apache2/apache2.conf

RCS file: /etc/apache2/apache2.conf,v
Working file: /etc/apache2/apache2.conf
head: 1.3
branch:
locks: strict
        root: 1.3
access list:
symbolic names:
keyword substitution: kv
total revisions: 3;     selected revisions: 3
description:
Original Apache Configuration File
----------------------------
revision 1.3    locked by: root;
date: 2012/03/19 16:00:38;  author: msmith;  state: Exp;  lines: +3 -3
Rolling back changes made in ticket #123456 due to problems
----------------------------
revision 1.2
date: 2012/03/19 15:44:06;  author: jdoe;  state: Exp;  lines: +3 -3
Tuning changes per ticket #123456
----------------------------
revision 1.1
date: 2012/03/19 15:28:38;  author: jdoe;  state: Exp;
Initial revision
=============================================================================