This guide walks you through deploying ClustrixDB in the Rackspace Cloud, including:
This section describes the required preparation steps on the Rackspace Cloud servers prior to installation of ClustrixDB.
Step 1: Create server instances
The Rackspace Cloud offers Performance 2 cloud servers, which are a great fit for ClustrixDB. The 15GB version has only 4 cores, which is enough for compatibility testing; for production environments, we recommend the 30GB 8-core version (or higher). This is the server type we will use in this example.
To build a ClustrixDB cluster, start with a minimum of 3 identical servers:
- Server name: Choose a name for your Clustrix node, for example CLXDB-01
- Image: Select CentOS or RedHat Enterprise Linux 6.4
- Flavor: Performance 2 30GB
- Network: Keep PublicNet and ServiceNet checked. We also recommend creating a dedicated network for the cluster's backend, so that inter-node traffic does not cross the busy ServiceNet network. Click "Create Network", enter the network name 'CLXDB-backend', and either note the preselected 192.168.x network or modify it to your liking, then click "Create Network". Your nodes will now have 3 network interfaces.
- Once ready, click "Create Server" and make note of the root password.
- Create 2 or more additional identical servers so you can form a cluster.
Step 2: Set the root password
All nodes should share the same root password for ease of administration. SSH into each node and set the root password:
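For example, from an administration machine that can already reach the nodes over SSH (the node names and password below are placeholders):

```shell
# Set the same root password on every node.
# chpasswd reads "user:password" pairs from stdin.
for node in CLXDB-01 CLXDB-02 CLXDB-03; do
  ssh root@"$node" 'echo "root:YourSharedPassword" | chpasswd'
done
```

Alternatively, run `passwd` interactively on each node.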
Step 3: Set Up the Storage
The Rackspace Performance 2 servers come with one or more RAID 10-protected SSD storage devices (data disks) attached. This step describes how to configure them to store ClustrixDB data. We also recommend creating a separate log partition on the data volume as a safeguard, so that logs and the database do not compete for space on the same partition.
In the case of a single data disk:
In the case of multiple data disks:
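As a sketch of both layouts (device names, partition sizes, and mount points below are assumptions; verify your devices with lsblk before running anything):

```shell
# WARNING: these commands destroy existing data on the target devices.
# /dev/xvde, /dev/xvdf and the mount points are placeholders.

# --- Single data disk: separate data and log partitions ---
parted -s /dev/xvde mklabel gpt
parted -s /dev/xvde mkpart data ext4 1MiB 90%
parted -s /dev/xvde mkpart log ext4 90% 100%
mkfs.ext4 /dev/xvde1
mkfs.ext4 /dev/xvde2
mkdir -p /data/clustrix
mount /dev/xvde1 /data/clustrix
mkdir -p /data/clustrix/log
mount /dev/xvde2 /data/clustrix/log

# --- Multiple data disks: stripe them into one volume with LVM ---
pvcreate /dev/xvde /dev/xvdf
vgcreate vg_clx /dev/xvde /dev/xvdf
lvcreate -i 2 -l 90%VG -n data vg_clx      # striped data volume
lvcreate -i 2 -l 100%FREE -n log vg_clx    # striped log volume
mkfs.ext4 /dev/vg_clx/data
mkfs.ext4 /dev/vg_clx/log
mkdir -p /data/clustrix
mount /dev/vg_clx/data /data/clustrix
mkdir -p /data/clustrix/log
mount /dev/vg_clx/log /data/clustrix/log
```

Add matching entries to /etc/fstab so the volumes mount at boot.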
Step 4: Configure Network security
The Rackspace cloud servers are protected via IPtables. If you have RackConnect enabled on your Rackspace cloud account, IPtables rules are dynamically updated by the portal's Firewall manager, which will overwrite any custom rules you add. To work around this issue, see this article from Rackspace's KB.
Cloud servers spin up with IPtables pre-configured with:
The IPtables configuration above:
- allows SSH and ICMP from any source
- allows RELATED and ESTABLISHED connections
- opens the loopback interface to all traffic
We need to edit the IPtables rules to open the correct ports, as described in the security considerations section of the getting started guide. Below is an example script (using bogus IPs) that will configure IPtables properly for ClustrixDB. The example assumes a multi-homed environment with 3 networks configured: eth0: public, eth1: ServiceNet, and eth2: private backend dedicated to the Clustrix cluster.
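A minimal sketch of such a script (IP ranges, interface roles, and the backend subnet are placeholders; adjust them to the networks you created):

```shell
#!/bin/bash
# Example iptables setup for a ClustrixDB node.
# eth0 = public, eth1 = ServiceNet, eth2 = dedicated Clustrix backend.
# All addresses below are bogus placeholders.

# Flush all existing rules
iptables -F

# Loopback and established/related traffic
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

# SSH and ICMP from anywhere
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT

# MySQL clients on ServiceNet (restrict the source range to your clients)
iptables -A INPUT -i eth1 -p tcp -s 10.0.0.0/8 --dport 3306 -j ACCEPT

# The backend network is dedicated to ClustrixDB, so allow all inter-node
# traffic on eth2; see the security considerations section for the specific
# ports if you need to lock this down further.
iptables -A INPUT -i eth2 -s 192.168.100.0/24 -j ACCEPT

# Drop everything else
iptables -A INPUT -j DROP

# Save; on CentOS/RHEL 6 this writes /etc/sysconfig/iptables
service iptables save
```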
Note that the script first flushes all rules, then adds the correct rules, saves the config and keeps a copy in /etc/sysconfig/iptables.
The IPtables script is self-explanatory: it opens the ports required in the security considerations section of the getting started guide. The load balancer rules are explained in the next section. Edit the script with the correct IPs and add rules to fit your environment. You can tighten security further by allowing port 3306 only on the ServiceNet NIC (eth1), if all clients are on ServiceNet. If the backend network isn't dedicated to ClustrixDB, you can lock down the clx ports per source IP.
Step 1: Run the installer:
Download the installer, then run it as root:
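A sketch of the commands, assuming you have the download link supplied by Clustrix (the URL and file names below are placeholders, not the real ones):

```shell
# Placeholders: substitute the actual download URL and package name
# supplied by Clustrix.
wget http://example.com/clustrix-installer.tar.bz2
tar xjf clustrix-installer.tar.bz2
cd clustrix-installer

# Run the installer as root (installer name is a placeholder)
sudo ./clxnode_install
```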
and follow the steps outlined in the command output. For more detailed steps on how to install ClustrixDB, visit Getting Started on ClustrixDB Software.
Step 2: Specify Installer Options:
Once the installer is running and you are prompted for configuration options, we recommend changing the following:
- Item 7: Private IP. Select the IP from the dedicated network we created for Clustrix when making the instances.
- Item 11: Allow ClustrixDB to modify sshd_config and /etc/hosts. This should be set to yes to allow for easy inter-node communication.
Once changes are made, type yes to accept the license terms and the installer will start ClustrixDB installation on that node.
Step 3: Run installer on the other nodes:
Once the installer completes on the first node, there are two ways to repeat that installation on the other nodes:
- Run the installer using the same method as above. Make sure to specify the same configuration as on your first node.
- The final step of the installer on node1 will display a large bash command you can run on the other nodes to get them configured with ClustrixDB in a single command. This will ensure that each node is configured the same and is running the same version of ClustrixDB. Be sure to replace the IP address for the backend interface in the command with the correct one for that node.
Step 4: Complete Cluster Creation using the Installation Wizard:
Once you have finished installing on all desired nodes, open your browser to one of the nodes' public IPs and configure your cluster using the installation wizard.
Step 5: Edit path:
Clustrix management tools are located in /opt/clustrix/bin. For easy execution of those tools, we recommend adding that directory to your PATH in your ~/.bash_profile.
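For example, in a root shell:

```shell
# Make the Clustrix tools available in the current session
export PATH="$PATH:/opt/clustrix/bin"

# Persist for future logins by appending to root's ~/.bash_profile
echo 'export PATH="$PATH:/opt/clustrix/bin"' >> ~/.bash_profile
```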
Step 6: Sanity check:
At this point, the cluster should be up and running. As a sanity check, verify the following:
- SSH to any node as root and open the mysql prompt by typing "mysql". You do not need to provide a password when accessing the database from a cluster node.
- Open the mysql prompt from an app server or administration machine with: mysql -h <node_ip> -u root -p (you will be prompted for the password)
- Test the internal tool clx:
- clx stat: gives an overall status of the cluster. All nodes should list as OK.
- clx cmd: runs a command on every node using ssh host authentication. Try: clx cmd date
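The checks above can be run from a shell on any node; the comments describe the expected behavior rather than captured output:

```shell
# From a cluster node, the local mysql client needs no password
mysql -e "SELECT 1;"

# Overall cluster status: every node should list OK
clx stat

# Run a command on every node over ssh; the dates should agree
clx cmd date
```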
Step 1: Set Up a Load Balancer
Next, set up a load balancer to distribute SQL connections across every node of the cluster. To do this we will use Rackspace Cloud Load Balancers (another option is Load Balancing Clustrix with HAProxy). Set up your cloud load balancer using the following settings (for more info on Cloud Load Balancers, check the Rackspace knowledge center):
- Identification: ClustrixLB or any other name.
- Configuration: We recommend using the Private Rackspace Network option for performance and security reasons.
- Protocol: MySQL
- Algorithm: We recommend using Least Connections or Round Robin
- Region: Select the same region as the Clustrix nodes.
- Nodes: Add every Clustrix node to the load balancer
- Click "Create Load Balancer", which will create the LB and open its details page.
- Enable Health Monitoring with the following settings: monitor type Connect, delay 30, timeout 5, attempts before deactivation 2.
- Enable Session Persistence
- Access control: allow all application clients and management servers to access the load balancer. Because it is an internal LB, use the local (ServiceNet) IPs in CIDR notation when adding new rules.
Note: IPtables on the nodes must allow connections from the Cloud Load Balancer. Because the Cloud Load Balancer's IP is subject to change at any time, it is recommended to allow the entire range of IPs where the Cloud Load Balancer could be provisioned:
# Load Balancer: Allow all IPs for cloud load balancer network at RAX ORD
iptables -A INPUT -p tcp -s 10.183.250.0/23 --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp -s 10.183.252.0/23 --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp -s 10.183.254.0/23 --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp -s 10.183.245.0/24 --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp -s 10.183.246.0/23 --dport 3306 -j ACCEPT
The IP ranges above are used by Rackspace in ORD, and we allow connections from them on port 3306. See this KB article from Rackspace on the subject. Note that the LB is not configured to balance port 80; it is fine to use any node's IP to access the webUI directly.
Step 2: Set Up NTP
The final step is to set up NTP, which is required for ClustrixDB to run properly. See Setting Up NTP for ClustrixDB on CentOS.
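On CentOS/RHEL 6, a minimal setup might look like the following sketch; see the linked page for the recommended configuration:

```shell
# Install and enable the NTP daemon on each node (CentOS/RHEL 6)
yum install -y ntp
service ntpd start
chkconfig ntpd on

# Verify that peers are reachable and the node is syncing
ntpq -p
```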