Triton Elastic Container Infrastructure

Modified: 24 Jul 2015 16:12 UTC
Stability: Unknown

Triton Elastic Container Infrastructure (formerly "SDC" or "SmartDataCenter") is a complete cloud management solution for server and network virtualization, operations management, and customer self-service. It is the software that runs the Triton Elastic Container Service and can be used to provide public, private, and hybrid clouds on customer premises.

This page breaks down the initial concepts, terminology, and architecture of Triton to help Operators get started with planning, deployment, and operation. It contains almost no diagrams and is designed to be a quick read so you can move on quickly to the details of each topic. Links to the detailed sections are provided where appropriate.

If you are looking for documentation on the Triton Elastic Container Service, please see the Joyent public cloud section, available through the navigation pane on the left of this page.


A Triton installation consists of two or more servers. One server acts as the management server (the "head node" or "HN") and the remainder are "compute nodes" ("CNs"), which run the instances.

Triton is the cloud orchestration software, made up of the core services listed below.

Triton also configures and allocates network resources to instances, giving them both internal (intra-instance) and external internet connectivity.

The underlying hypervisor, SmartOS, is a powerful and lightweight container-based virtualization layer that natively supports both OS virtualization (containers) and hardware virtualization (KVM) for Linux, Windows, and FreeBSD guests.

The diagram below shows the basic architecture of Triton.



Core services

Triton uses a service oriented architecture. Each core service is instantiated from an image and works with other services as necessary to manage the instance, user, and networking capabilities of Triton.

The core services perform a variety of roles in support of instance management. They include a single data repository (Manatee), data access/management services, APIs, and communications services, as summarized below. Each core service runs in its own infrastructure container running SmartOS, and all of them except the Manatee data store are stateless and can operate as multiple redundant instances. (See the resilience and continuity documentation for more details.) The core services themselves are managed through the Services API (SAPI) and can easily be upgraded by re-provisioning from updated images.
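As a sketch of what this looks like in practice, the `sdcadm` administration tool (run from the head node's global zone) can inspect and upgrade core service instances; the service name below is illustrative:

```shell
# List the current core service instances and the image each runs.
sdcadm instances

# Update a single core service (here, VMAPI) to the latest available image.
# sdcadm re-provisions the service's zone from the new image.
sdcadm update vmapi
```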

Follow the link from each service for a detailed description:

Service Description
adminui The Operator's Management Interface / Operations Portal
amon Alerting and monitoring service
amonredis Redis data store for Amon
assets Manages storage of images on the head and compute nodes
binder Internal DNS service for Triton
ca Cloud Analytics
cnapi Compute Node API
cloudapi The end user Public API for managing customer instances
dapi Designation API, used during instance provisioning to determine which compute nodes are eligible for the instance
dhcpd Manages IP address allocation and creation of boot images for compute nodes
fwapi Firewall API
imgapi Image API
manatee High-availability Postgres-based data store for Triton
moray Key-value data store based on Manatee; most APIs store their data through Moray
napi Network API
papi Package API
rabbitmq Inter-server message management
redis Data store used for caching of data by NAPI, AdminUI, and VMAPI (data here is not persistent)
sapi Services API
sdc Triton tools
ufds Unified Foundational Directory Service, an LDAP implementation built on top of Moray
vmapi Virtual Machine API (management of instances)
workflow Workflow API


Triton provides role-based management of infrastructure. The Account owner can create sub-users who are assigned to Roles that allow them to perform infrastructure administration tasks on behalf of the Account holder. By defining Policies and allocating them to Roles, users can be granted the ability to perform tasks such as provisioning instances and managing firewall rules. This capability is described in detail in the section on user management.

Only very basic user information is stored in Triton; this includes name, login, email, phone number, and password.

Every Triton user must have an SSH key in their account. This key is used to authorize access to instances as described in the Instances section below. It is also used to authorize access to the publicly accessible CloudAPI.
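As a minimal sketch of the client-side setup (the file name is an arbitrary choice), a user generates a key pair locally and then adds the public half to their account:

```shell
# Generate an SSH key pair for use with Triton.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/triton_id_rsa

# The public key (~/.ssh/triton_id_rsa.pub) is added to the user's account.
# The private key then authenticates SSH logins to instances and signs
# CloudAPI requests.
```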



The Internal APIs are only accessible from the Admin VLAN and are mainly used by the pre-defined workflows during the execution of jobs (provision, destroy, etc). The Operations Portal provides an interface to the internal APIs and is the recommended method of managing Triton and obtaining information.

However, as Operators gain experience and system complexity grows, it is often expedient to use the APIs for reporting, searching and summarizing information. There is also some functionality of Triton that is not available in the Operations Portal and has to be executed through the appropriate API.

The APIs are RESTful and can be accessed with curl commands. Triton also comes with an easy-to-use Command Line Interface (CLI) for each API that simplifies its usage and syntax. Each API CLI is a command named sdc-xxxapi (e.g. sdc-vmapi, sdc-cnapi, etc.). Details of each API can be found by following the links in the Services list above.
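For example, a sketch of querying VMAPI from the head node (the query string is illustrative, and the curl form assumes a placeholder VMAPI admin IP):

```shell
# List all running instances via the VMAPI CLI wrapper.
sdc-vmapi /vms?state=running

# The equivalent raw request with curl against the VMAPI admin address;
# 10.0.0.10 is a placeholder for your VMAPI IP on the admin VLAN.
curl -s 'http://10.0.0.10/vms?state=running'
```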

Public API

The end user public API is called the CloudAPI. It is an internet-accessible RESTful API authenticated using end user credentials (SSH keys). It allows end users to create and manage their own instances using an industry standard interface. Joyent also provides a Command Line Interface (CLI) for the CloudAPI, written in Node.js, which can be installed on any client environment. As with the internal APIs, the CLI simplifies the usage and syntax of the API.

Information on installing the CloudAPI CLI and a detailed description of the API is available in
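As a sketch of client setup (the endpoint URL and account name below are placeholders), the node-smartdc package provides the CloudAPI CLI and reads its connection details from environment variables:

```shell
# Install the CloudAPI CLI on a client machine.
npm install -g smartdc

# Point the CLI at your CloudAPI endpoint and account (placeholder values).
export SDC_URL=https://cloudapi.example.com
export SDC_ACCOUNT=jill
export SDC_KEY_ID="$(ssh-keygen -l -f ~/.ssh/id_rsa.pub | awk '{print $2}')"

# List the account's instances.
sdc-listmachines
```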

Instances (containers and virtual machines)

Triton supports three types of instances: infrastructure containers, Docker containers, and hardware virtual machines (KVM).

Infrastructure containers function as completely isolated virtual servers within the single operating system instance of the host. The technology consolidates multiple sets of application services onto one system, placing each into an isolated instance. This approach delivers maximum utilization and reduced cost, while providing a wall of separation similar to that of separate physical machines.

The container hypervisor enjoys complete visibility into containers, both Docker and infrastructure. This visibility allows the container hypervisor to provide containers with as-needed access to a large pool of available resources, while still providing each instance with minimum guaranteed access to resources based on a pre-established fair share scheduling algorithm. In normal operating conditions, all RAM and CPU resources are fully utilized, either directly by applications or for data caching. The ZFS filesystem provides a write-back cache that increases I/O throughput significantly.

Instances of all types have one or more fixed IP addresses, allocated at the time the instance is provisioned or when additional network interfaces (NICs) are added.

Access to instances is initially by SSH only. As described above, each user of Triton must have at least one SSH key added to their account before provisioning any instances. The keys on the account are used to permit access to instances in the following ways:



Triton provides networking capability for instances, covering both internal (intra-instance) communication and external internet access. Networks are configured in Triton as Logical Networks, which map to the configuration of the networks in your organization's core switches and routers. Networks used by Triton must be dedicated to Triton and should not have external devices or servers attached to them other than network switches and routers.

Logical Network definitions in Triton must match the core network configuration in every respect:

When an instance is provisioned, the end user selects one or more logical networks to use with their instance (up to a maximum of 32). The instance will be created with a Virtual Network Interface (VNIC) for each selected network. Each VNIC will have a static IP address. VNICs can be added to and removed from the instance at any time after provisioning.

NIC tags

Networks used by Triton are connected to the physical NICs on the servers from the top-of-rack switches. The servers do not have plumbed interfaces for these networks (apart from the admin VLAN). The VNICs created for instances are attached to the correct physical NIC using a mechanism called NIC tags. A NIC tag is a simple label, such as external or internal, which is associated with both the physical NIC on the server and the Logical Network definition. When an instance is provisioned, Triton searches for servers with the required NIC tag(s) and assigns the instance to one of them. As the instance boots, its VNICs are created.
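As a sketch, the NIC tags and Logical Networks known to NAPI can be inspected from the head node via the NAPI CLI wrapper:

```shell
# List the NIC tags defined in NAPI.
sdc-napi /nic_tags

# List the Logical Networks and note the nic_tag each one references.
sdc-napi /networks
```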

Network pools

Network pools are used to group Logical Networks together for selection during provisioning. This is typically used when two or more subnets provide the full range of IPs available to Triton. Each of the subnets is defined as a separate Logical Network and then included in a network pool. End users then choose the network pool when creating an instance, and an IP will be assigned from the first Logical Network within the pool that has an available IP address.
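A sketch of creating such a pool through NAPI (the pool name and network UUIDs below are placeholders for Logical Networks already defined in NAPI):

```shell
# Group two existing Logical Networks into a pool that end users can
# select at provisioning time.
sdc-napi /network_pools -X POST -d '{
    "name": "customer-pool",
    "networks": ["aaaaaaaa-0000-0000-0000-000000000001",
                 "bbbbbbbb-0000-0000-0000-000000000002"]
}'
```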


Triton provides a built-in firewall capability that can be managed via both the internal APIs and the CloudAPI. End users can define a single set of firewall rules to be applied to any or all of their instances. Rules can be enabled, disabled, and reconfigured dynamically without rebooting an instance. Rule syntax is a simple, human-readable language, as shown in this example that allows any external system to access a specific instance on port 80.

FROM any TO vm 04128191-d2cb-43fc-a970-e4deefe970d8 ALLOW tcp PORT 80
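As a sketch, an Operator could create and enable this rule through FWAPI from the head node (the instance UUID is the one from the example above; the request shape assumes FWAPI's rule-creation endpoint):

```shell
# Create the example rule in FWAPI and enable it immediately.
sdc-fwapi /rules -X POST -d '{
    "rule": "FROM any TO vm 04128191-d2cb-43fc-a970-e4deefe970d8 ALLOW tcp PORT 80",
    "enabled": true
}'
```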