Core services, resilience, and continuity
This document outlines how core Triton services can be deployed in a high availability (HA), also known as clustered, configuration for resilience and continuity. The document describes high-level steps for deploying specific core services and summarizes the implications for recoverability.
Joyent provides this information as reference material. Please contact Joyent Support before enabling HA. Joyent will review your current architecture and configuration and provide recommendations on the best configuration for your environment.
Triton provides a service-oriented architecture. Each service performs a specific role and runs within a zone. In an initial deployment of Triton, each zone runs on the server designated as the head node. Compute nodes hold additional copies of services for clustered or redundant configurations. The minimum recommended production configuration has three servers: one head node and two compute nodes.
Most services are stateless. The only stateful service is manatee, a storage system built on top of Postgres. Manatee is designed to run in a three-instance configuration consisting of a primary, a synchronous slave, and an asynchronous slave.
Manatee uses Zookeeper (ZK) for leader election and state tracking. Manatee is designed to maintain read/write access if one of its instances fails. If two instances fail simultaneously, read-only access is maintained.
Access to manatee is provided to the other core services indirectly either via moray (a key/value store) or UFDS (ldap).
As a consequence of this architecture, resilience to failure and continuity of service can be achieved by implementing multiple instances of each service on different compute nodes.
The current state of each core service falls into one of three categories:
Services in this category can and should be deployed in a high availability or clustered configuration. Failures of these components should be communicated to Joyent Support prior to attempting recovery.
Services in this category can have multiple copies deployed. In the event of a failure, they can be recreated. Recreating a service instance takes about one to two minutes, depending on the size and speed of your hardware and network, and is done using the `sdcadm` command.
Services in this category should only have one instance running. This instance should be recreated after a failure.
The table below summarizes the services and the recommended deployment scheme for each. The Operator Restorable column indicates the failure modes that your operator can recover. However, customers with a current Triton support contract should contact Joyent Support for assistance with any recovery effort, regardless of the component involved.
| Service | HA/Cluster | Multiple Instances | Operator Restorable |
|---------|------------|--------------------|---------------------|
There are two design constraints for creating a HA cluster:
- You must have a minimum of three servers (one head node, and two compute nodes). You cannot run a cluster in a two-server Triton installation.
- You must use multiple compute nodes for the cluster members. Placing multiple instances of a service on one compute node will introduce a potential point of failure into the configuration.
Ideally, when you create HA clusters for multiple services, avoid placing them on the same nodes. For example, place the additional manatees on different compute nodes than the clustered ZK configuration. If you deploy additional morays, set them up on compute nodes that do not host manatee or zookeeper instances.
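The placement rule above can be checked mechanically: no service should appear twice on the same node. The sketch below is a hypothetical helper, not part of Triton; it assumes you can produce one "service hostname" pair per line from your own inventory (for example, from `sdcadm` instance listings).

```shell
# Hypothetical helper for spotting HA placement conflicts.
# Input: one "service hostname" pair per line (inventory format
# is an assumption for illustration).
find_placement_conflicts() {
  sort | uniq -c | awk '$1 > 1 { print $2, "has", $1, "instances on", $3 }'
}

# Example: two manatees landed on the same node CN1 -- a conflict.
printf 'manatee CN0\nmanatee CN1\nmanatee CN1\nbinder CN2\n' | find_placement_conflicts
# -> manatee has 2 instances on CN1
```

An empty result means every service instance is on its own node, which is the configuration this document recommends.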
Zookeeper is used inside the binder instances to manage leader election and cluster state in Triton, such as for manatee. You can create a zookeeper cluster (binder cluster) using the `sdcadm` command.

- First update the `sdcadm` component, using the `sdcadm self-update` command, to ensure that you are running the most recent version of this component.
- Set up and identify two compute nodes to use for the zookeeper cluster.
- Create the ZK cluster via `sdcadm post-setup ha-binder headnode SERVER1 SERVER2`, where `SERVER1` and `SERVER2` are the hostnames or UUIDs of the servers that will host the additional instances. Note that one zookeeper instance runs on the head node by default.
To validate that the zookeeper cluster is up and running properly:
From the head node, as the root user, obtain the IP addresses of the zookeepers:
```
ZK_IPS=$(sdc-vmapi '/vms?query=(%26(tags=*smartdc_role=binder*)(state=running))' | json -Ha nics.ip)
```
See if the zookeepers are reporting as up. You should see "imok" three times, once for each ZK:
```
for IP in $ZK_IPS; do echo ruok | nc $IP 2181; echo ""; done
```
See if they are a cluster:
```
for IP in $ZK_IPS; do echo stat | nc $IP 2181 | egrep "(leader|follower|standalone)"; done
```
You should see one leader and two followers. Customers with a current Triton support contract should contact Joyent Support through their normal channels if they run into any issues with these tests.
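The expected result of the checks above (one leader, two followers) can be tallied automatically. The function below is a hypothetical sketch, not a Triton tool; it assumes the `Mode:` lines emitted by zookeeper's `stat` command are collected on stdin, as the loop above produces.

```shell
# Hypothetical sketch: tally the roles reported by ZK's "stat"
# command and decide whether the ensemble looks healthy
# (exactly one leader and two followers, per this document).
zk_ensemble_health() {
  leaders=0; followers=0
  while read -r line; do
    case "$line" in
      *leader*)   leaders=$((leaders + 1)) ;;
      *follower*) followers=$((followers + 1)) ;;
    esac
  done
  if [ "$leaders" -eq 1 ] && [ "$followers" -eq 2 ]; then
    echo "healthy: 1 leader, 2 followers"
  else
    echo "unhealthy: $leaders leader(s), $followers follower(s)"
  fi
}

# Example with the output a healthy three-node binder cluster reports:
printf 'Mode: leader\nMode: follower\nMode: follower\n' | zk_ensemble_health
# -> healthy: 1 leader, 2 followers
```

Any other combination (for example, a node reporting `standalone`) indicates the instances have not formed a cluster and should be investigated before proceeding.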
You can run the `sdcadm` command to deploy the additional instances required to put manatee into HA mode.

- First update the `sdcadm` component, using the `sdcadm self-update` command, to ensure that you are running the most recent version.
- Identify two additional compute nodes to hold the additional manatees. If possible, place the manatees on different nodes than those used for the clustered zookeeper configuration.
- Create the HA manatee nodes by issuing the command `sdcadm post-setup ha-manatee -s SERVER1_UUID -s SERVER2_UUID`.
- Check that the manatee cluster is up and stable by logging into a manatee zone and checking the status via `manatee-adm status | json`.
You should see three manatee nodes. Verify that they are replicating properly, from primary to sync and from sync to async.
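The primary-to-sync-to-async chain described above can be verified by listing the roles present in the cluster and checking that all three appear. The helper below is a hypothetical sketch, not a manatee tool; it assumes you have extracted one role name per line from the `manatee-adm status` output yourself.

```shell
# Hypothetical sketch: given the roles present in a manatee cluster
# (one per line on stdin), confirm the expected primary/sync/async
# trio is complete. Extracting roles from `manatee-adm status` is
# left to the operator; this only checks the resulting list.
manatee_chain_ok() {
  roles=$(sort | tr '\n' ' ' | sed 's/ *$//')
  case "$roles" in
    "async primary sync") echo "chain complete" ;;
    *)                    echo "chain incomplete: $roles" ;;
  esac
}

# Example: a healthy three-node manatee shard.
printf 'primary\nsync\nasync\n' | manatee_chain_ok
# -> chain complete
```

A missing role (for example, no async after a compute node failure) means the shard is running without its full replication chain and should be reported to Joyent Support.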
To deploy multiple moray instances:
- Determine which compute nodes you will use for the deployment. Ideally, these nodes will not already contain manatee or zookeeper instances.
- Use the `sdcadm` command to add the additional moray instances. For example, to add two new moray instances to a new Triton installation, you can run something like:

```
headnode# sdcadm create moray -s SERVER1
headnode# sdcadm create moray -s SERVER2
```

`SERVER1` and `SERVER2` are the hostnames or UUIDs of the servers that will host the additional instances.
Note: The first moray created when you install Triton is named `moray0`. You can verify this by running the following command:

```
headnode# sdc-vmapi '/vms?query=(%26(tags=*smartdc_role=moray*)(state=running))' | json -Hag uuid alias nics.ip state
42184f34-638f-4e75-98a6-33c26d834d3d moray0 10.1.1.17 running
```
The output also enables you to verify that the additional morays are provisioned and running.
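A quick way to confirm that all expected morays came up is to count the running instances in that output. The function below is a hypothetical sketch; it assumes the "uuid alias ip state" line shape shown above, which is an illustration rather than a guaranteed format.

```shell
# Hypothetical sketch: count running instances of a service from
# inventory lines shaped like "uuid alias ip state" (field positions
# are an assumption based on the example output above).
count_running() {
  service="$1"
  awk -v svc="$service" '$2 ~ svc && $4 == "running" { n++ } END { print n+0 }'
}

# Example: two of three morays are running, one is still provisioning.
printf '42184f34 moray0 10.1.1.17 running\naaaa0001 moray1 10.1.1.18 running\nbbbb0002 moray2 10.1.1.19 provisioning\n' | count_running moray
# -> 2
```

After deploying two additional morays, a count of three running instances confirms the deployment succeeded.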
Follow the instructions on GitHub to create portolan HA instances.
You can find more information on installing and configuring Triton on the following pages: