Command-line Interface

The program is called like this:

$ ec3 [-l <file>] [-ll <level>] [-q] launch|list|show|templates|ssh|reconfigure|destroy|clone|migrate|stop|restart|transfer|update [args...]
-l <file>, --log-file <file>

Path to the file where logs are written. By default, logs are written to the standard error output.

-ll <level>, --log-level <level>

Write to the log file only messages with a level more severe than the indicated one: 1 for debug, 2 for info, 3 for warning and 4 for error.

-q, --quiet

Do not print any message to the console except the front-end IP.
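
For example, a hypothetical invocation that writes a detailed log to a file while keeping the console output minimal (the log path, cluster name and template names are illustrative):

./ec3 -l /tmp/ec3.log -ll 1 -q launch mycluster slurm ubuntu -a auth.dat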

Command launch

To deploy a cluster issue this command:

ec3 launch <clustername> <template_0> [<template_1> ...] [-a <file>] [-u <url>] [-y]
clustername

Name of the new cluster.

template_0 ...

Template names that will be used to deploy the cluster. ec3 tries to find files with these names and extension .radl in ~/.ec3/templates and /etc/ec3/templates. Templates are RADL descriptions of the virtual machines (e.g., instance type, disk images, networks, etc.) and contextualization scripts. See Command templates to list all available templates.

--add

Add a piece of RADL. This option is useful to set some features. The following example deploys a cluster with the Torque LRMS with up to four working nodes:

./ec3 launch mycluster torque ubuntu-ec2 --add "system wn ( ec3_max_instances = 4 )"
-u <url>, --restapi-url <url>

URL to the IM REST API service.

-a <file>, --auth-file <file>

Path to the authorization file, see Authorization file. This option is compulsory.

--dry-run

Validate options but do not launch the cluster.

-n, --not-store

The new cluster will not be stored in the local database.

-p, --print

Print the final RADL description of the cluster after it has been successfully configured.

--json

If option -p is indicated, print the RADL in JSON format instead.

--on-error-destroy

If the cluster deployment fails, try to destroy the infrastructure (and relinquish the resources).

-y, --yes

Do not ask for confirmation when the connection to IM is not secure. Proceed anyway.

-g, --golden-images

Generate a VMI from the first deployed node to accelerate the contextualization process of subsequent node deployments.
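
As an illustration, the following hypothetical invocation combines several of the options above, printing the resulting RADL in JSON and destroying the infrastructure if the deployment fails (cluster and template names are illustrative):

./ec3 launch mycluster slurm ubuntu -a auth.dat --on-error-destroy -p --json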

Command reconfigure

The command reconfigures a previously deployed cluster. It can be called after a failed deployment (the provisioned resources are kept and a new attempt to configure them takes place). It can also be used to apply a new configuration to a running cluster:

ec3 reconfigure <clustername>
-a <file>, --auth-file <file>

Append authorization entries in the provided file. See Authorization file.

--add

Add a piece of RADL. This option is useful to include additional features to a running cluster. The following example updates the maximum number of working nodes to four:

./ec3 reconfigure mycluster --add "system wn ( ec3_max_instances = 4 )"
-r, --reload

Reload templates used to launch the cluster and reconfigure it with them (useful if some templates were modified).

--template, -t

Add a new template/recipe. This option is useful to add new templates to a running cluster. The following example adds the docker recipe to the configuration of the cluster (i.e. installs Docker):

./ec3 reconfigure mycluster -r -t docker

Command ssh

The command opens an SSH session to the infrastructure front-end:

ec3 ssh <clustername>
--show-only

Print the command line to invoke SSH and exit.
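
For example, to print the SSH command for a cluster named mycluster (the name is illustrative) without opening the session:

./ec3 ssh mycluster --show-only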

Command destroy

The command undeploys the cluster and removes the associated information from the local database:

ec3 destroy <clustername> [--force]
--force

Remove the local information about the cluster even if the cluster could not be undeployed successfully.
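
For example, to remove the local information of a hypothetical cluster even if its resources cannot be released:

./ec3 destroy mycluster --force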

Command show

The command prints the RADL description of the cluster stored in the local database:

ec3 show <clustername> [-r] [--json]
-r, --refresh

Get the current state of the cluster before printing the information.

--json

Print RADL description in JSON format.
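
For example, to refresh the state of a hypothetical cluster and print its RADL in JSON format:

./ec3 show mycluster -r --json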

Command list

The command prints a table with information about the clusters that have been launched:

ec3 list [-r] [--json]
-r, --refresh

Get the current state of the clusters before printing the information.

--json

Print the information in JSON format.
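
For example, to refresh the state of all launched clusters and print the information in JSON format:

./ec3 list -r --json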

Command templates

The command displays basic information about the available templates, such as name, kind and a summary description:

ec3 templates [-s/--search <pattern>] [-n/--name <name>] [-f/--full-description] [--json]
-s, --search

Show only the templates whose description contains <pattern>.

-n, --name

Show only the template with that name.

-f, --full-description

Instead of the table, show the full information about the templates.

--json

Print the information in JSON format.
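
For example, to show the full description of the templates whose description mentions "slurm" (the pattern is illustrative):

./ec3 templates -s slurm -f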

If you want to see more information about templates and their kinds in EC3, visit Templates.

Command clone

The command clones an infrastructure front-end previously deployed from one provider to another:

ec3 clone <clustername> [-a/--auth-file <file>] [-u <url>] [-d/--destination <provider>] [-e]
-a <file>, --auth-file <file>

New authorization file to use to deploy the cloned cluster. See Authorization file.

-d <provider>, --destination <provider>

Provider ID; it must match the id provided in the auth file. See Authorization file.

-u <url>, --restapi-url <url>

URL to the IM REST API service. If not indicated, EC3 uses the default value.

-e, --eliminate

Destroy the original cluster at the end of the clone process. If not indicated, EC3 leaves the original cluster running.
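
For instance, assuming an auth file containing an entry with id = ost for the destination provider (the file name and provider id are illustrative), the cluster could be cloned and the original destroyed with:

./ec3 clone mycluster -a auth_ost.dat -d ost -e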

Command migrate

The command migrates a previously deployed cluster and its running tasks from one provider to another. It is mandatory that the original cluster has been deployed with SLURM and BLCR; otherwise, the migration process cannot be performed. Also, this operation only works with clusters whose images are selected by the VMRC; it does not work if the URL of the VMI/AMI is explicitly written in the system RADL:

ec3 migrate <clustername> [-b/--bucket <bucket_name>] [-a/--auth-file <file>] [-u <url>] [-d/--destination <provider>] [-e]
-b <bucket_name>, --bucket <bucket_name>

Name of an already created bucket in the S3 account specified in the auth file.

-a <file>, --auth-file <file>

New authorization file to use to deploy the migrated cluster. It is mandatory to have valid AWS credentials in this file to perform the migration operation, since it uses Amazon S3 to store checkpoint files from jobs running in the cluster. See Authorization file.

-d <provider>, --destination <provider>

Provider ID; it must match the id provided in the auth file. See Authorization file.

-u <url>, --restapi-url <url>

URL to the IM REST API service. If not indicated, EC3 uses the default value.

-e, --eliminate

Destroy the original cluster at the end of the migration process. If not indicated, EC3 leaves the original cluster running.
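
As an illustration, assuming an S3 bucket named mybucket and a destination provider entry with id = one in the auth file (all names are hypothetical), a migration could be launched with:

./ec3 migrate mycluster -b mybucket -a auth.dat -d one -e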

Command stop

To stop a cluster to later continue using it, issue this command:

ec3 stop <clustername> [-a <file>] [-u <url>] [-y]
clustername

Name of the cluster to stop.

-a <file>, --auth-file <file>

Path to the authorization file, see Authorization file.

-u <url>, --restapi-url <url>

URL to the IM REST API external service.

-y, --yes

Do not ask for confirmation to stop the cluster. Proceed anyway.
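
For example, to stop a hypothetical cluster without asking for confirmation:

./ec3 stop mycluster -a auth.dat -y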

Command restart

To restart an already stopped cluster, use this command:

ec3 restart <clustername> [-a <file>] [-u <url>]
clustername

Name of the cluster to restart.

-a <file>, --auth-file <file>

Path to the authorization file, see Authorization file.

-u <url>, --restapi-url <url>

URL to the IM REST API external service.
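
For example, to restart a previously stopped hypothetical cluster using the credentials in auth.dat:

./ec3 restart mycluster -a auth.dat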

Command transfer

To transfer an already launched cluster that has not been transferred to the internal IM, use this command:

ec3 transfer <clustername> [-a <file>] [-u <url>]
clustername

Name of the cluster to transfer.

-a <file>, --auth-file <file>

Path to the authorization file, see Authorization file.

-u <url>, --restapi-url <url>

URL to the IM REST API external service.
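
For example, to transfer a hypothetical cluster using the credentials in auth.dat:

./ec3 transfer mycluster -a auth.dat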

Command update

The command updates a previously deployed cluster. It can be called to update the RADL of the working nodes, enabling changes to some of their features (URL of the image, CPU, memory, etc.) that will be used in subsequent "power on" operations on the cluster:

ec3 update <clustername>
-a <file>, --auth-file <file>

Append authorization entries in the provided file. See Authorization file.

--add

Add a piece of RADL. This option makes it possible to include additional features in a running cluster. The following example updates the number of CPUs of the working nodes:

./ec3 update mycluster --add "system wn ( cpu.count = 2 )"

Configuration file

Default configuration values are read from ~/.ec3/config.yml. If this file doesn’t exist, it is generated with all the available options and their default values.

The file is formatted in YAML. The options that are related to files accept the following values:

  • a scalar: it will be treated as the content of the file, e.g.:

    auth_file: |
       type = OpenNebula; host = myone.com:9999; username = user; password = 1234
       type = EC2; username = AKIAAAAAAAAAAAAAAAAA; password = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    
  • a mapping with the key filename: it will be treated as the file path, e.g.:

    auth_file:
       filename: /home/user/auth.txt
    
  • a mapping with the key stream: it will select either standard output (stdout) or standard error (stderr), e.g.:

    log_file:
       stream: stdout
    

Authorization file

The authorization file stores in plain text the credentials to access the cloud providers, the IM service and the VMRC service. Each line of the file is composed of key-value pairs separated by semicolons and refers to a single credential. The key and value should be separated by " = ", that is, an equals sign preceded and followed by at least one white space, like this:

id = id_value ; type = value_of_type ; username = value_of_username ; password = value_of_password

Values can contain "=", and "\n" is replaced by a carriage return. The available keys are:

  • type indicates the service the credential refers to. The services supported are InfrastructureManager, VMRC, OpenNebula, EC2, OpenStack, OCCI, LibCloud, Docker, GCE, Azure, and LibVirt.
  • username indicates the user name associated to the credential. In EC2 it refers to the Access Key ID. In Azure it refers to the user Subscription ID. In GCE it refers to Service Account’s Email Address.
  • password indicates the password associated to the credential. In EC2 it refers to the Secret Access Key. In GCE it refers to the Service Private Key. See how to get it and how to extract the private key file here. In OpenStack sites using 3.x_oidc_access_token authentication it indicates the OIDC access token.
  • tenant indicates the tenant associated to the credential. This field is only used in the OpenStack plugin.
  • host indicates the address of the access point to the cloud provider. This field is not used in IM and EC2 credentials.
  • proxy indicates the content of the proxy file associated to the credential. To refer to a file you must use the function “file(/tmp/proxyfile.pem)” as shown in the example. This field is only used in the OCCI plugin.
  • project indicates the project name associated to the credential. This field is only used in the GCE plugin.
  • public_key indicates the content of the public key file associated to the credential. To refer to a file you must use the function “file(cert.pem)” as shown in the example. This field is only used in the Azure plugin. See how to get it here
  • private_key indicates the content of the private key file associated to the credential. To refer to a file you must use the function “file(key.pem)” as shown in the example. This field is only used in the Azure plugin. See how to get it here
  • id associates an identifier to the credential. The identifier should be used as the label in the deploy section in the RADL.
  • token indicates the OpenID token associated to the credential. This field is used in the OCCI plugin and also to authenticate with the InfrastructureManager. To refer to the output of a command you must use the function “command(command)” as shown in the examples.

An example of the auth file:

id = one; type = OpenNebula; host = oneserver:2633; username = user; password = pass
id = ost; type = OpenStack; host = ostserver:5000; username = user; password = pass; tenant = tenant
type = InfrastructureManager; username = user; password = pass
type = VMRC; host = http://server:8080/vmrc; username = user; password = pass
id = ec2; type = EC2; username = ACCESS_KEY; password = SECRET_KEY
id = gce; type = GCE; username = username.apps.googleusercontent.com; password = pass; project = projectname
id = docker; type = Docker; host = http://host:2375
id = occi; type = OCCI; proxy = file(/tmp/proxy.pem); host = https://fc-one.i3m.upv.es:11443
id = azure; type = Azure; username = subscription-id; public_key = file(cert.pem); private_key = file(key.pem)
id = kub; type = Kubernetes; host = http://server:8080; username = user; password = pass
type = InfrastructureManager; token = command(oidc-token OIDC_ACCOUNT)

Notice that the user credentials that you specify are only employed to provision the resources (Virtual Machines, security groups, keypairs, etc.) on your behalf. No other resources will be accessed/deleted. However, if you are concerned about specifying your credentials to EC3, note that you can (and should) create an additional set of credentials, perhaps with limited privileges, so that EC3 can access the Cloud on your behalf. In particular, if you are using Amazon Web Services, we suggest you use the Identity and Access Management (IAM) service to create a user with a new set of credentials. This way, you can rest assured that these credentials can be cancelled at any time.

Usage of Golden Images

Golden images are a mechanism to accelerate the contextualization process of the working nodes in the cluster. A golden image is created when the first node of the cluster is deployed and configured, providing a preconfigured AMI specially created for the cluster with no user interaction required. Each golden image has a unique id that relates it to its infrastructure. Golden images are also deleted when the cluster is destroyed.

There are two ways to indicate to EC3 the usage of this strategy:

  • Command option in the CLI interface: as explained before, the launch command offers the option -g, --golden-images to indicate to EC3 the usage of golden images, e.g.:

    ./ec3 launch mycluster slurm ubuntu -a auth.dat --golden-images
    
  • In the RADL: as an advanced mode, the user can also specify the usage of golden images in the RADL file that describes the system architecture of the working nodes, e.g.:

    system wn (
      cpu.arch = 'x86_64' and
      cpu.count >= 1 and
      memory.size >= 1024m and
      disk.0.os.name = 'linux' and
      disk.0.os.credentials.username = 'ubuntu' and
      disk.0.os.credentials.password = 'dsatrv' and
      ec3_golden_images = 'true'
    )
    

Currently this feature is only available in the command-line interface for the OpenNebula and Amazon Web Services providers. The list of supported providers will be updated soon.