Install the Sybase ASE Cluster Edition Software
With the new 15.5 version, you have the option to install the software either as a shared installation or a private installation. A shared installation requires a shared filesystem that is accessible from every node in the cluster. Although the shared installation is more convenient, it introduces a single point of failure: if the shared filesystem goes offline, every node loses access to the software.
The private installation separates the nodes' dependencies for added stability and protection. It installs the Sybase software locally on each node and does not require a shared filesystem. In return, you must maintain strict discipline in file structure and placement, because every node must access the software identically.
Once you have completed the prerequisites, shift your focus to preparing the databases and the database server.
- If you are upgrading Adaptive Server, the previously installed version of the server must be running. If you are upgrading Backup Server, Historical Server, Monitor Server, or XP Server, those servers must not be running.
- Stored procedure text in the syscomments table is required for the upgrade. If you deleted the text, you must add it back again.
Note: As a best practice, if you don’t want the text displayed, hide it with the sp_hide_text stored procedure instead of deleting it.
- Resolve reserved words using quoted identifiers. The check itself is simple: install the new version’s sp_checkreswords stored procedure and execute it.
Caution: This step is simple, but omitting it can lead to serious issues during the upgrade process.
- Perform some standard tasks that apply to any database server upgrade.
- Verify users are logged off.
- Check for database integrity. Run DBCC commands to complete this step.
- Back up the databases. As mentioned before, this will be your lifeline in case of a failed upgrade.
- Ensure that master is the default database for the “sa” user.
- Prepare the database and devices for upgrade by following these steps:
- Disable auditing
- Disable Job Scheduler by ensuring the “enable job scheduler” configuration parameter is off.
- Archive auditing data and truncate auditing tables.
- Disable disk mirroring.
Note: Sybase ASE Cluster Edition 15.5 does not support disk mirroring. This is important if you used the disk mirror approach to move your local database devices to the SAN. Please make sure that all device mirrors have been disabled.
- Verify that your $SYBASE environment variable points to the location of the new Adaptive Server software files you just installed.
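The database-side checks above can be gathered into a single isql batch for review before you run them. This is only a sketch: the server name MYSERVER and the database name mydb are placeholders, and the dbcc calls should be repeated for every database on the server.

```shell
#!/bin/sh
# Sketch of the pre-upgrade database checks, collected into one isql
# batch. MYSERVER and mydb are placeholders -- substitute the names
# from your environment.
cat > preupgrade_checks.sql <<'EOF'
-- master must be the default database for the sa login
exec sp_modifylogin sa, defdb, master
go
-- integrity checks; repeat for every user database
dbcc checkdb(mydb)
go
dbcc checkcatalog(mydb)
go
EOF
echo "Review preupgrade_checks.sql, then run:"
echo "  isql -Usa -SMYSERVER -i preupgrade_checks.sql"
```

Keeping the checks in a file gives you a repeatable record of exactly what was verified before the upgrade.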
Manual Upgrade of an Existing ASE Server
Your upgrade approach will differ considerably depending on which upgrade option you choose. I want to focus on the manual upgrade from a non-clustered ASE server to ASE Cluster Edition 15.5.
For the full details of the manual upgrade, please review the Sybase ASE Cluster Edition Upgrade manual. The summary of the steps is:
- In order for Sybase ASE Cluster Edition to work and communicate, the unified agent must be running on each node of the cluster.
Note: Now is a good time to get into the habit of starting and verifying the unified agent before starting any database server.
Start the Unified Agent:
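A minimal, guarded startup sketch follows. The UAF directory name (UAF-2_5 here) is an assumption; the version suffix varies between releases, so check your $SYBASE directory for the exact path.

```shell
#!/bin/sh
# Sketch: start the Unified Agent on this node before starting any
# instance. The UAF-2_5 directory name is an assumption -- the version
# suffix differs between releases.
AGENT="${SYBASE:-/sybase}/UAF-2_5/bin/uafstartup.sh"
if [ -x "$AGENT" ]; then
    "$AGENT" &              # run the agent in the background
    AGENT_STARTED=yes
else
    AGENT_STARTED=no
    echo "Unified Agent startup script not found at $AGENT" >&2
fi
```

Run this on every node of the cluster, and verify the agent is up before starting any instance.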
- Start your existing Sybase ASE server. Change the $SYBASE and $SYBASE_ASE variables to reflect the new location of the software. This process must be repeated when a restart of the existing Sybase ASE server is required.
- Execute the $SYBASE/$SYBASE_ASE/upgrade/preupgrade command from the new software location to prepare your server for the upgrade. If there are errors reported, correct them and restart your existing Sybase ASE server. Repeat this step until no errors are displayed.
- Check your existing Sybase ASE databases for new “reserved words” by installing and executing the sp_checkreswords stored procedure. Correct any errors prior to continuing the upgrade process.
Caution: Omitting this step can lead to serious problems during the upgrade process.
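One way to script this check is to generate an isql batch that runs sp_checkreswords in every database. In this sketch the database list is a placeholder, and the server name and output file are arbitrary.

```shell
#!/bin/sh
# Sketch: build a batch that runs sp_checkreswords in each database.
# Extend the database list to cover every database on the server.
: > checkreswords.sql
for db in master model mydb; do
    printf 'use %s\ngo\nexec sp_checkreswords\ngo\n' "$db" >> checkreswords.sql
done
echo "Run:  isql -Usa -SMYSERVER -i checkreswords.sql -o reswords.out"
```

Review reswords.out and correct every reported conflict before continuing.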
- One important requirement of a Sybase ASE Cluster Edition server is at least two network connections; three are even better. The additional connections let the instances interconnect over a primary private network and an optional secondary private network. In our example, we are using two private interconnects plus the public network.
- After shutting down the old server, proceed with the cluster preparation. The first step is creating a cluster input file that describes your cluster environment. The first instance of the cluster must carry the old server name. For this example, the filename mycluster.inp has been chosen.
In addition, the network interconnect must be working; it is the backbone connection between the cluster nodes.
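A quick way to confirm the interconnect is up is to ping each private address from every node. The hostnames below match the example cluster input file and are placeholders for your own.

```shell
#!/bin/sh
# Sketch: check that each private-interconnect hostname resolves and
# responds. Hostnames are placeholders matching the example input file.
for host in syb1-ppriv syb1-spriv syb2-ppriv syb2-spriv; do
    if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
        echo "$host reachable"
    else
        echo "$host NOT reachable"
    fi
done | tee interconnect_check.txt
```

Run the same check from each node; every private address must be reachable from every other node before the cluster can form.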
Here is an example of the mycluster.inp file, based on a shared installation:
#all input files must begin with a comment
[cluster]
name = mycluster
max instances = 2
master device = /dev/raw/raw1
interfaces path = /sybase/
traceflags =
primary protocol = udp
secondary protocol = udp
[management nodes]
hostname = syb1
hostname = syb2
[instance]
id = 1
name = syb1
node = syb1
primary address = syb1-ppriv
primary port start = 38456
secondary address = syb1-spriv
secondary port start = 38466
errorlog = /sybase/ASE-15_0/install/syb1.log
interfaces path = /sybase/
traceflags =
additional run parameters =
[instance]
id = 2
name = syb2
node = syb2
primary address = syb2-ppriv
primary port start = 38556
secondary address = syb2-spriv
secondary port start = 38566
errorlog = /sybase/ASE-15_0/install/syb2.log
interfaces path = /sybase/
traceflags =
additional run parameters =
- Create the quorum device with the input file created in the previous step. This is the core of the shared-disk cluster.
Start the new instance with the old master device:
$SYBASE/$SYBASE_ASE/bin/dataserver --instance=server_name --cluster-input=mycluster.inp --quorum-dev=/dev/raw/raw102 --buildquorum -M$SYBASE
- You’re ready to run the upgrade utility. instance_name is the first instance in your cluster that has the same name as the server from which you are upgrading:
$SYBASE/$SYBASE_ASE/upgrade/upgrade -S instance_name -Ppassword
- Create a tempdb for each instance in the cluster.
Note: This step is important. Without having the global temporary database for the second node in place, the cluster won’t start.
1> create system temporary database tempdb1 for instance syb1 on tempdb1 = 100
2> go
1> create system temporary database tempdb2 for instance syb2 on tempdb2 = 100
2> go
tempdb1 and tempdb2 are new raw devices on the SAN, accessible by both nodes. The 100 MB size shown here is arbitrary; size the temporary databases to match your workload.
- Restart the cluster with the quorum device in the run file:
$SYBASE/$SYBASE_ASE/bin/dataserver --instance=server_name --quorum-dev=/dev/raw/raw102 -M$SYBASE
- Finish the upgrade by running a few scripts, as described in the Sybase ASE Cluster Edition installation manual.
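As a sketch, the post-upgrade scripts live in the scripts directory of the new installation and are run through isql. installmaster and installcommit are typical examples, but treat this list as an assumption and take the definitive list from the installation manual for your release.

```shell
#!/bin/sh
# Sketch: print the isql commands for typical post-upgrade scripts.
# The script list is an example -- the definitive list comes from the
# installation manual for your release.
SCRIPTS="${SYBASE:-/sybase}/ASE-15_0/scripts"
for s in installmaster installcommit; do
    echo "isql -Usa -Sinstance_name -i $SCRIPTS/$s"
done | tee post_upgrade_cmds.txt
```

Review the generated command list, then run each script against the upgraded instance and check its output for errors.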
Note: This is an abbreviated version of the entire install procedure, but it demonstrates how straightforward the upgrade actually is. As always, please review the Sybase ASE Cluster Edition Upgrade manual for details, as the configuration may differ for your environment.
Once you have upgraded your existing Sybase ASE server, you can add new nodes and convert your non-clustered ASE server into a multi-node cluster with ease.
Upgrading your existing Sybase ASE server to Sybase ASE Cluster Edition is fairly straightforward, especially if your ASE server is already on release 15.x. Keep in mind that release 15.x introduced a new query optimizer, so extra steps may be needed to mitigate possible performance degradation. Once the upgrade is complete, you have access to new tools and methods for addressing availability and scalability challenges.
In my humble opinion, this is possibly the easiest upgrade path from a non-clustered database system to a shared-disk cluster. Sybase ASE Cluster Edition gives your organization better use of database resources on less hardware and strengthens the availability of your applications.