Triton CLI tool and CloudAPI

Modified: 06 Feb 2017 17:25 UTC

The Triton CLI tool uses CloudAPI to manage infrastructure in Triton data centers. Many of the tasks that you can perform through the portal are also possible through the triton CLI tool and CloudAPI.

On this page, you will learn how to install the triton command line tool. You can learn more about CloudAPI methods and resources in the CloudAPI documentation.

Need a visual reference? Watch the screencast below, which covers how to install the Triton CLI tool and use CloudAPI to manage infrastructure in Triton data centers.

Installation

The CloudAPI tools require Node.js. You can find the latest version of Node.js for your operating system and architecture at nodejs.org.

Once Node.js is installed, you can use npm to install the triton CLI tool:

$ sudo npm install -g triton
. . .
/usr/local/bin/triton -> /usr/local/lib/node_modules/triton/bin/triton
triton@4.11.0 /usr/local/lib/node_modules/triton
├── bigspinner@3.1.0
├── assert-plus@0.2.0
├── extsprintf@1.0.2
├── wordwrap@1.0.0
├── strsplit@1.0.0
├── node-uuid@1.4.3
├── read@1.0.7 (mute-stream@0.0.6)
├── semver@5.1.0
├── vasync@1.6.3
├── once@1.3.2 (wrappy@1.0.2)
├── backoff@2.4.1 (precond@0.2.3)
├── verror@1.6.0 (extsprintf@1.2.0)
├── which@1.2.4 (isexe@1.1.2, is-absolute@0.1.7)
├── cmdln@3.5.4 (extsprintf@1.3.0, dashdash@1.13.1)
├── lomstream@1.1.0 (assert-plus@0.1.5, extsprintf@1.3.0, vstream@0.1.0)
├── mkdirp@0.5.1 (minimist@0.0.8)
├── sshpk@1.7.4 (ecc-jsbn@0.1.1, jsbn@0.1.0, asn1@0.2.3, jodid25519@1.0.2, dashdash@1.13.1, tweetnacl@0.14.3)
├── rimraf@2.4.4 (glob@5.0.15)
├── tabula@1.7.0 (assert-plus@0.1.5, dashdash@1.13.1, lstream@0.0.4)
├── smartdc-auth@2.3.0 (assert-plus@0.1.2, once@1.3.0, clone@0.1.5, dashdash@1.10.1, sshpk@1.7.1, sshpk-agent@1.2.0, vasync@1.4.3, http-signature@1.1.1)
├── restify-errors@3.0.0 (assert-plus@0.1.5, lodash@3.10.1)
├── bunyan@1.5.1 (safe-json-stringify@1.0.3, mv@2.1.1, dtrace-provider@0.6.0)
└── restify-clients@1.1.0 (assert-plus@0.1.5, tunnel-agent@0.4.3, keep-alive-agent@0.0.1, lru-cache@2.7.3, mime@1.3.4, lodash@3.10.1, restify-errors@4.2.3, dtrace-provider@0.6.0)
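
To confirm that the CLI is installed and on your PATH, check the installed version (the exact version string will depend on what npm installed):

$ triton --version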

Configuration

The triton CLI uses "profiles" to store access information. A profile contains a data center's CloudAPI URL, your login name, and your SSH key fingerprint, so you can switch between them conveniently. Profiles make it easy to connect to different data centers, or to connect to the same data center as different users.

The triton profile create command steps through a series of questions to make profile setup and configuration easy:

$ triton profile create

A profile name. A short string to identify a CloudAPI endpoint to the `triton` CLI.
name: us-sw-1

The CloudAPI endpoint URL.
url: https://us-sw-1.api.joyent.com

Your account login name.
account: jill

The fingerprint of the SSH key you have registered for your account. You may enter a local path to a public or private key to have the fingerprint calculated for you.
keyId: ~/.ssh/<ssh key name>.id_rsa
Fingerprint: 2e:c9:f9:89:ec:78:04:5d:ff:fd:74:88:f3:a5:18:a5

Saved profile "us-sw-1"

Select a CloudAPI endpoint URL from any of our global data centers, or use a Triton-powered data center of your own (remember: it's open source).
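
If you're not sure which endpoints are available to your account, you can list them with triton datacenters once your first profile is configured. The output below is illustrative; the data centers you see will depend on your account:

$ triton datacenters
NAME       URL
eu-ams-1   https://eu-ams-1.api.joyent.com
us-east-1  https://us-east-1.api.joyent.com
us-sw-1    https://us-sw-1.api.joyent.com
us-west-1  https://us-west-1.api.joyent.com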

To test the installation and configuration, let's use triton info:

$ triton info
login: jill
name: Jill Example
email: jill@example.com
url: https://us-sw-1.api.joyent.com
totalDisk: 65.8 GiB
totalMemory: 2.0 GiB
instances: 2
running: 2

The triton info output above shows that Jill's account already has two instances running.

Using profiles

You can view all configured profiles with the triton profiles command:

$ triton profiles
NAME     CURR  ACCOUNT      USER  URL
env            jill         -     https://us-sw-1.api.joyent.com
us-sw-1  *     jill         -     https://us-sw-1.api.joyent.com

Next, let's make a profile for each data center. To do this we will use triton commands to make a copy of the us-sw-1 profile for each of the data center URLs. Copy the snippet below to add the new profiles:

triton datacenters -H -o name | while read -r dc; do triton profile get -j us-sw-1 | sed "s/us-sw-1/$dc/g" | triton profile create -f -; done

Okay, let's run triton profiles again to check that it worked. We should have a new profile for each data center listed in triton datacenters:

$ triton profiles
NAME       CURR  ACCOUNT      USER  URL
env              jill         -     https://us-sw-1.api.joyent.com
eu-ams-1         jill         -     https://eu-ams-1.api.joyent.com
us-east-1        jill         -     https://us-east-1.api.joyent.com
us-east-2        jill         -     https://us-east-2.api.joyent.com
us-east-3        jill         -     https://us-east-3.api.joyent.com
us-sw-1    *     jill         -     https://us-sw-1.api.joyent.com
us-west-1        jill         -     https://us-west-1.api.joyent.com

You can change the default profile with the triton profile set command:

$ triton profile set us-east-1
Set "us-east-1" as current profile

Completions

You can also configure bash completions with one of these commands, depending on your platform:

# Mac OS X
$ triton completion > /usr/local/etc/bash_completion.d/triton

# Linux
$ triton completion > /etc/bash_completion.d/triton

# Windows bash shell
$ triton completion >> ~/.bash_completion

Quick start: create an instance

With triton installed and configured, we can jump right into provisioning instances. Here's an example of provisioning an infrastructure container running Ubuntu. Think of infrastructure containers like virtual machines, only faster and more efficient. Let's run triton instance create, and we'll walk through the pieces afterward:

$ triton instance create -w --name=server-1 ubuntu-14.04 t4-standard-1G
Creating instance server-1 (e9314cd2-e727-4622-ad5b-e6a6cac047d4, ubuntu-14.04@20160114.5, t4-standard-1G)
Created instance server-1 (e9314cd2-e727-4622-ad5b-e6a6cac047d4) in 22s

Now that we have an instance, we can run triton ssh to connect to it. This is an awesome addition to our tools because it means that we don't need to copy SSH keys or even look up the IP address of the instance.

$ triton ssh server-1
Welcome to Ubuntu 14.04 (GNU/Linux 3.19.0 x86_64)

 * Documentation:  https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ;  Instance (Ubuntu 14.04 20151105)
                   `-'   https://docs.joyent.com/images/container-native-linux

root@8367b339-799b-cff5-a662-a211e1927797:~#
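
Just like plain ssh, triton ssh also accepts a command to run non-interactively, which is handy in scripts (we'll use this again in the Couchbase example below). For example:

$ triton ssh server-1 uname -a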

Instance creation options and details

In our quick start example, we ran triton instance create -w --name=server-1 ubuntu-14.04 t4-standard-1G. That command has four parts:

  1. We gave our instance a name using --name=server-1
  2. We used -w to wait for the instance to be created
  3. We used ubuntu-14.04 as our image
  4. We set t4-standard-1G as our package

Let's look at each of those in detail to see how you can set the options that will work best for your needs.

Specifying the instance name

Instance names can be up to 189 characters long and may include any alphanumeric character plus the _, -, and . characters.

Selecting an image

Finding our Ubuntu image is pretty easy. We use triton images to list the images, add name=~ubuntu to do a substring search for Ubuntu, and add type=lx-dataset to limit the results to Linux infrastructure container images. The list is sorted by publish date, so usually we'll pick the most recent. Today we'll choose 14.04 because it has wider support.

$ triton images name=~ubuntu type=lx-dataset
SHORTID   NAME          VERSION   FLAGS  OS     TYPE        PUBDATE
...
c8d68a9e  ubuntu-14.04  20150819  P      linux  lx-dataset  2015-08-19
52be84d0  ubuntu-14.04  20151005  P      linux  lx-dataset  2015-10-05
ffe82a0a  ubuntu-15.04  20151105  P      linux  lx-dataset  2015-11-05

Selecting a package

Next we'll use triton packages to search for a package with 1 gigabyte of RAM. We'll pick t4-standard-1G because it's the newest.

$ triton packages memory=1024
SHORTID   NAME                   DEFAULT  MEMORY  SWAP  DISK  VCPUS
d9396ca5  Small 1GB              true         1G    2G   30G      1
11a01166  g3-standard-1-smartos  false        1G    2G   33G      1
85284e54  g3-standard-1-kvm      false        1G    2G   33G      -
20e583d5  t4-standard-1G         false        1G    4G   25G      -

I've been trying to convince you of the magic of using the command line. However, we're missing an API that can fetch pricing details for our different packages, so you'll have to look up the prices in my.joyent.com or on our public pricing page. I recommend using the public pricing page because you can click on a box to learn its API name. Today we'll use t4-standard-1G, and the price is $0.026 per hour.

Bootstrapping an instance with a script

Our quick start example didn't include one of the most useful options for automating infrastructure on Triton: specifying a script for containers to run at startup.

We'll show how to use triton to run the examples from Casey's blog post on setting up Couchbase in infrastructure containers. The goal is only to show what the equivalent triton commands look like, so we'll skip over the details; you can read the original post to learn more.

The commands below set up a 16GB CentOS infrastructure container and install Couchbase. The --script file installs Couchbase at boot, and the triton ssh command runs cat /root/couchbase.txt to show the address of the Couchbase dashboard.

curl -sL -o couchbase-install-triton-centos.bash https://raw.githubusercontent.com/misterbisson/couchbase-benchmark/master/bin/install-triton-centos.bash

triton instance create \
    --name=couch-bench-1 \
    --wait \
    --script=./couchbase-install-triton-centos.bash \
    $(triton images name=~centos-6 type=lx-dataset -Ho id | tail -1) \
    'Large 16GB'

triton ssh couch-bench-1 'cat /root/couchbase.txt'

Working with instances

Of course, infrastructure management isn't just about creating instances, and triton offers some of its biggest improvements in this space.

List instances

$ triton instances
SHORTID    NAME           IMG                    STATE    PRIMARYIP         AGO
1fdc4b78   couch-bench-1  8a1dbc62               running  165.225.136.140   3m
8367b039   server-1       ubuntu-14.04@20151005  running  165.225.122.69    3m

Wait for tasks

By default, the triton tool does not wait for tasks to finish. This is great because it means that your commands return control back to you very quickly. However, sometimes you'll need to wait for a task to complete before you start the next one. When this happens, you can wait by using the --wait or -w flag, or the triton instance wait command. In the example above we used --wait so that the instance would be ready by the time the triton ssh command ran.
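
As a sketch of that last approach, you could create an instance without waiting, go do other work, and then block on triton instance wait before connecting (server-2 here is just a hypothetical instance name):

$ triton instance create --name=server-2 ubuntu-14.04 t4-standard-1G
$ triton instance wait server-2
$ triton ssh server-2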

Show instance details

Use triton instance get -j to view your instance's details as a JSON blob. To parse fields out of the blob, I recommend using json, although there are many other great tools out there.

$ triton instance get -j couch-bench-1
{
    "id": "1fdc4b78-62ec-cb97-d7ff-f99feb8b3d2a",
    "name": "couch-bench-1",
    "type": "smartmachine",
    "state": "running",
    "image": "82cf0a0a-6afc-11e5-8f79-273b6aea6443",
    "ips": [
        "165.225.136.140",
        "10.112.2.230"
    ],
    "memory": 16384,
    "disk": 409600,
    "metadata": {
        "user-script": "#!/bin/bash\n...\n\n",
        "root_authorized_keys": "ssh-rsa ..."
    },
    "tags": {},
    "created": "2015-12-18T03:44:42.314Z",
    "updated": "2015-12-18T03:45:10.000Z",
    "networks": [
        "65ae3604-7c5c-4255-9c9f-6248e5d78900",
        "56f0fd52-4df1-49bd-af0c-81c717ea8bce"
    ],
    "dataset": "82cf0a0a-6afc-11e5-8f79-273b6aea6443",
    "primaryIp": "165.225.136.140",
    "firewall_enabled": false,
    "compute_node": "44454c4c-4400-1059-804e-b5c04f383432",
    "package": "t4-standard-16G"
}

Above, you can see that the user-script we ran is part of the metadata.

You can pull out individual values by piping the output to json KEYNAME. For example you could get the IP address of an instance like this:

$ triton instance get -j couch-bench-1 | json primaryIp
165.225.136.140

Clean up

Let's wrap up with these containers. We'll delete them using the triton instance delete command:

$ triton instance delete server-1 couch-bench-1
Delete (async) instance server-1 (8367b039-759b-c6f5-a6c2-a210e1926798)
Delete (async) instance couch-bench-1 (1fdc4b78-62ec-cb97-d7ff-f99feb8b3d2a)

For something a bit more dangerous you can delete all your instances using this command:

$ triton instance delete $(triton instances -Ho shortid)

Be careful: this will delete all of your instances, regardless of whether they are running or stopped. If you use Docker, you'll notice that this is equivalent to using docker rm -f $(docker ps -aq) to forcefully delete all of your containers, although triton might be faster since it deletes the machines in parallel.
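
If you'd rather only remove instances that are already stopped, one option is to filter on state, using the same field=value filters we used with triton images and triton packages above (assuming your CLI version supports a state filter on triton instances):

$ triton instance delete $(triton instances state=stopped -Ho shortid)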

Watch the screencast

This screencast covers how to install the Triton CLI tool and use CloudAPI to manage infrastructure in Triton data centers.

If you skipped ahead to the video, you can go back and review the installation process for step-by-step instructions.

CloudAPI and Docker API commands together

In addition to CloudAPI and the Triton CLI tool, you can also create and manage bare metal Docker containers on Triton using the Docker API and Docker CLI tools. The two APIs work in parallel, though the Docker API can only create and manage bare metal Docker containers on Triton. CloudAPI and the Triton CLI tool can manage almost every aspect of Docker containers, with the exception of provisioning bare metal Docker containers on Triton. See our comparison table for full details.
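
As a rough sketch of using the two side by side (assuming you have already completed the Docker setup for your account and that your version of the CLI includes the triton env command), you can point the Docker CLI at the data center for your current profile and then see the same containers from either tool:

$ eval "$(triton env --docker)"
$ docker ps
$ triton instances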