Title: | R Interface with Google Compute Engine |
---|---|
Description: | Interact with the 'Google Compute Engine' API in R. Lets you create, start and stop instances in the 'Google Cloud'. Support for preconfigured instances, with templates for common R needs. |
Authors: | Mark Edmondson [aut, cre] , Scott Chamberlain [ctb], Winston Chang [ctb], Henrik Bengtsson [ctb], Jacki Novik [ctb] |
Maintainer: | Mark Edmondson <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.3.0.9000 |
Built: | 2024-11-18 04:24:03 UTC |
Source: | https://github.com/cloudyr/googlecomputeenginer |
S3 method for as.cluster() in the future package.
## S3 method for class 'gce_instance' as.cluster( x, project = gce_get_global_project(), zone = gce_get_global_zone(), rshopts = ssh_options(x), ..., recursive = FALSE )
x |
The instance to make a future cluster |
project |
The GCE project |
zone |
The GCE zone |
rshopts |
Options for the SSH |
... |
Other arguments passed to makeDockerClusterPSOCK |
recursive |
Not used. |
Only works for r-base containers created via gce_vm_template("r-base"), or for docker containers created using the --net=host argument flag.
A cluster object.
## Not run:
vm <- gce_vm("r-base", name = "future", predefined_type = "f1-micro")

plan(cluster, workers = vm)  ## equivalent to workers = as.cluster(vm)

x %<-% { Sys.info() }
print(x)
## End(Not run)
Retrieve logs for a container.
container_logs(container, timestamps = FALSE, follow = FALSE)
container |
A container object |
timestamps |
Show timestamps. |
follow |
Follow log output as it is happening. |
Winston Chang [email protected]
## Not run: container_logs(con) ## End(Not run)
Delete a container.
container_rm(container, force = FALSE)
container |
A container object |
force |
Force removal of a running container. |
Winston Chang [email protected]
## Not run: container_rm(con) ## End(Not run)
Report whether a container is currently running.
container_running(container)
container |
A container object |
Winston Chang [email protected]
## Not run: container_running(con) ## End(Not run)
This queries docker (on the host) for information about the container, and saves the returned information into a container object, which is returned. This does not use reference semantics, so if you want to store the updated information, you need to save the result.
container_update_info(container)
container |
A container object |
Winston Chang [email protected]
## Not run: con <- container_update_info(con) ## End(Not run)
Get list of all containers on a host.
containers(host = localhost, ...)
host |
A host object. |
... |
Other arguments passed to the SSH command for the host |
Winston Chang [email protected]
Uploads a folder with a Dockerfile
and supporting files to an instance and builds it
docker_build( host = localhost, dockerfolder, new_image, folder = "buildimage", wait = FALSE, ... )
host |
A host object. |
dockerfolder |
Local location of the build directory, including a valid Dockerfile |
new_image |
Name of the new image |
folder |
Where on host to build dockerfile |
wait |
Whether to block R console until finished build |
... |
Other arguments passed to the SSH command for the host |
Dockerfiles are best practice when creating your own docker images, rather than logging into a Docker container, making changes and committing.
A table of active images on the instance
Best practices for writing Dockerfiles
An example Dockerfile for rOpensci
General R Docker images found at rocker-org
## Not run:
docker_build(localhost, "/home/stuff/dockerfolder", "new_image", wait = TRUE)
docker_run(localhost, "new_image")
## End(Not run)
Run a docker command on a host.
docker_cmd( host, cmd = NULL, args = NULL, docker_opts = NULL, capture_text = FALSE, ... )
host |
A host object. |
cmd |
A docker command, such as "run" or "ps" |
args |
Arguments to pass to the docker command |
docker_opts |
Options to docker. These are things that come before the docker command, when run on the command line. |
capture_text |
If TRUE, capture and return the text output |
... |
Other arguments passed to the SSH command for the host |
Winston Chang [email protected]
## Not run: docker_cmd(localhost, "ps", "-a") ## End(Not run)
Docker S3 method for use with harbor package
## S3 method for class 'gce_instance' docker_cmd( host, cmd = NULL, args = NULL, docker_opts = NULL, capture_text = FALSE, nvidia = FALSE, ... )
host |
The GCE instance |
cmd |
The command to pass to docker |
args |
arguments to the command |
docker_opts |
options for docker |
capture_text |
whether to return the output |
nvidia |
If TRUE, will use nvidia-docker to run the command |
... |
other arguments passed to gce_ssh |
Instances launched in the google-containers image family automatically add your user to the docker group, but for others you will need to run sudo usermod -a -G docker ${USER} and log out and back in.
Inspect one or more containers, given name(s) or ID(s).
docker_inspect(host = localhost, names = NULL, ...)
host |
A host object. |
names |
Names of the containers |
... |
Other arguments passed to the SSH command for the host |
A list of lists, where each sublist represents one container. This is the output of 'docker inspect' translated directly from raw JSON to an R object.
Winston Chang [email protected]
## Not run:
docker_run(localhost, "debian:testing", "echo foo", name = "harbor-test")
docker_inspect(localhost, "harbor-test")
## End(Not run)
Pull a docker image onto a host.
docker_pull(host = localhost, image, ...)
host |
A host object. |
image |
The docker image to pull, e.g. rocker/r-base |
... |
Other arguments passed to the SSH command for the host |
The host object.
Winston Chang [email protected]
## Not run: docker_pull(localhost, "debian:testing") ## End(Not run)
Run a command in a new container on a host.
docker_run( host = localhost, image = NULL, cmd = NULL, name = NULL, rm = FALSE, detach = FALSE, docker_opts = NULL, ... )
host |
An object representing the host where the container will be run. |
image |
The name or ID of a docker image. |
cmd |
A command to run in the container. |
name |
A name for the container. If none is provided, a random name will be used. |
rm |
If TRUE, the container is removed after it has finished running |
detach |
If TRUE, the container is run in detached mode (in the background) |
docker_opts |
Options to docker. These are things that come before the docker command, when run on the command line. |
... |
Other arguments passed to the SSH command for the host |
A container object. When rm=TRUE, this function returns NULL instead of a container object, because the container no longer exists.
Winston Chang [email protected]
## Not run:
docker_run(localhost, "debian:testing", "echo foo")
#> foo

# Arguments will be concatenated
docker_run(localhost, "debian:testing", c("echo foo", "bar"))
#> foo bar

docker_run(localhost, "rocker/r-base", c("Rscript", "-e", "1+1"))
#> [1] 2
## End(Not run)
Attaches a Disk resource to an instance.
gce_attach_disk( instance, source = NULL, autoDelete = NULL, boot = NULL, deviceName = NULL, diskEncryptionKey = NULL, index = NULL, initializeParams = NULL, interface = NULL, licenses = NULL, mode = NULL, type = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
instance |
The instance name for this request |
source |
Specifies a valid partial or full URL to an existing Persistent Disk resource |
autoDelete |
Specifies whether the disk will be auto-deleted when the instance is deleted (but not when the disk is detached from the instance) |
boot |
Indicates that this is a boot disk |
deviceName |
Specifies a unique device name of your choice that is reflected into the /dev/disk/by-id/google-* tree of a Linux operating system running within the instance |
diskEncryptionKey |
Encrypts or decrypts a disk using a customer-supplied encryption key |
index |
Assigns a zero-based index to this disk, where 0 is reserved for the boot disk |
initializeParams |
A gce_make_boot_disk object for creating boot disks. Cannot be used with source |
interface |
Specifies the disk interface to use for attaching this disk, which is either SCSI or NVME |
licenses |
[Output Only] Any valid publicly visible licenses |
mode |
The mode in which to attach this disk, either READ_WRITE or READ_ONLY |
type |
Specifies the type of the disk, either SCRATCH or PERSISTENT |
project |
Project ID for this request |
zone |
The name of the zone for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
Other AttachedDisk functions:
AttachedDisk()
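A minimal usage sketch (not part of the original documentation); the instance and disk names are hypothetical, and the disk's selfLink is assumed to be the URL that the source argument expects:
## Not run:
# attach an existing persistent disk to an instance
disk <- gce_get_disk("my-data-disk")
gce_attach_disk("my-instance",
                source = disk$selfLink,   # assumed: full URL of the existing disk
                deviceName = "my-data-disk",
                mode = "READ_WRITE")
## End(Not run)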
No longer used. Authenticate by downloading a JSON file and pointing to it via an environment variable instead.
gce_auth(new_user = FALSE, no_auto = FALSE)
new_user |
If TRUE, reauthenticate via Google login screen |
no_auto |
Will ignore auto-authentication settings if TRUE |
Invisibly, the token that has been saved to the session
Check that the GPU installed OK
gce_check_gpu(vm)
vm |
The instance to check |
The NVIDIA-SMI output via ssh
https://cloud.google.com/compute/docs/gpus/add-gpus#verify-driver-install
Other GPU instances:
gce_list_gpus()
,
gce_vm_gpu()
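A hedged example (not from the original documentation); vm is assumed to be a GPU-enabled instance, for example one created via gce_vm_gpu():
## Not run:
# prints the nvidia-smi output over SSH if the driver installed correctly
gce_check_gpu(vm)
## End(Not run)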
Calls the API for the current SSH settings of an instance
gce_check_ssh(instance)
instance |
An instance to check |
A data.frame of SSH users and public keys
Check the docker logs of a container
gce_container_logs(instance, container)
gce_check_container(...)
instance |
The instance running docker |
container |
A running container to get logs of |
... |
Arguments passed to gce_container_logs |
logs
Deletes an access config, typically for an external IP address.
gce_delete_access_config( instance, access_config = "external-nat", network_interface = "nic0", project = gce_get_global_project(), zone = gce_get_global_zone() )
instance |
Name of the instance resource, or an instance object e.g. from gce_get_instance |
access_config |
The name of the access config to delete. |
network_interface |
The name of the network interface. |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
A list of operation objects with pending status
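A minimal sketch (not from the original documentation); the instance name is hypothetical and the default access config and network interface are used:
## Not run:
# remove the external IP from the default network interface
gce_delete_access_config("my-instance")
## End(Not run)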
Deleting a disk removes its data permanently and is irreversible.
gce_delete_disk( disk, project = gce_get_global_project(), zone = gce_get_global_zone() )
disk |
Name of the persistent disk to delete |
project |
Project ID for this request |
zone |
The name of the zone for this request |
However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots.
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
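A minimal sketch (not from the original documentation); the disk name is hypothetical:
## Not run:
# permanently deletes the disk's data; existing snapshots are not deleted
gce_delete_disk("my-data-disk")
## End(Not run)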
Deletes a firewall rule of the specified name
gce_delete_firewall_rule(name, project = gce_get_global_project())
name |
Name of the firewall rule |
project |
The Google Cloud project |
API Documentation https://cloud.google.com/compute/docs/reference/latest/firewalls/delete
Other firewall functions:
gce_get_firewall_rule()
,
gce_list_firewall_rules()
,
gce_make_firewall_rule()
,
gce_make_firewall_webports()
Deletes the specified Operations resource.
gce_delete_op(operation)
operation |
Name of the Operations resource to delete |
TRUE if successful
Deletes the specified global Operations resource.
## S3 method for class 'gce_global_operation' gce_delete_op(operation)
operation |
Name of the Operations resource to delete |
The deleted operation
Deletes the specified zone-specific Operations resource.
## S3 method for class 'gce_zone_operation' gce_delete_op(operation)
operation |
Name of the Operations resource to delete |
The deleted operation
Extract zone and project from an instance object
gce_extract_projectzone(instance)
instance |
The instance |
A list of $project and $zone
Returns a specified persistent disk.
gce_get_disk( disk, project = gce_get_global_project(), zone = gce_get_global_zone() )
disk |
Name of the persistent disk to return |
project |
Project ID for this request |
zone |
The name of the zone for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Get the external IP of an instance
gce_get_external_ip(instance, verbose = TRUE, ...)
instance |
Name or instance object to find the external IP for |
verbose |
Give a user message about the IP |
... |
passed to gce_get_instance |
This is a helper to extract the external IP of an instance.
The external IP
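A minimal sketch (not from the original documentation); the instance name is hypothetical:
## Not run:
ip <- gce_get_external_ip("my-instance")
## End(Not run)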
Get a firewall rule of the specified name
gce_get_firewall_rule(name, project = gce_get_global_project())
name |
Name of the firewall rule |
project |
The Google Cloud project |
API Documentation https://cloud.google.com/compute/docs/reference/latest/firewalls/get
Other firewall functions:
gce_delete_firewall_rule()
,
gce_list_firewall_rules()
,
gce_make_firewall_rule()
,
gce_make_firewall_webports()
Gets the project name set for this session to use by default
gce_get_global_project()
Set the project name via gce_global_project
Project name
Gets the zone name set for this session to use by default
gce_get_global_zone()
Set the zone name via gce_global_zone
zone name
Returns the specified image.
gce_get_image(image_project, image)
image_project |
Project ID of where the image lies |
image |
Name of the image resource to return |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
You may want to use gce_get_image_family instead to ensure the most up to date image is used.
Returns the latest image that is part of an image family and is not deprecated.
gce_get_image_family(image_project, family)
image_project |
Project ID for this request |
family |
Name of the image family to search for |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
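A hedged example (not from the original documentation); debian-cloud is a public image project and the family name is illustrative:
## Not run:
# fetch the latest non-deprecated image in a family
img <- gce_get_image_family("debian-cloud", family = "debian-11")
## End(Not run)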
Returns the specified Instance resource.
gce_get_instance( instance, project = gce_get_global_project(), zone = gce_get_global_zone() )
instance |
Name of the instance resource |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Returns the specified machine type.
gce_get_machinetype( machineType, project = gce_get_global_project(), zone = gce_get_global_zone() )
machineType |
Name of the machine type to return |
project |
Project ID for this request |
zone |
The name of the zone for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Extract metadata from an instance object
gce_get_metadata(instance, key = NULL)
instance |
instance to get metadata from |
key |
optional metadata key to filter metadata result |
data.frame $key and $value of metadata or NULL
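A minimal sketch (not from the original documentation); vm is an instance object and the key is illustrative:
## Not run:
meta <- gce_get_metadata(vm)                  # all metadata as a data.frame
gce_get_metadata(vm, key = "startup-script")  # filter to one key
## End(Not run)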
Get project wide metadata
gce_get_metadata_project(project = gce_global_project())
project |
The project to get the project-wide metadata from |
Returns the specified network.
gce_get_network(network, project = gce_get_global_project())
network |
Name of the network to return |
project |
Project ID for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
S3 method dispatcher for retrieving operation resources
gce_get_op(operation = .Last.value)
operation |
Name of the Operations resource to return |
S3 Methods for classes
gce_get_op.gce_zone_operation
gce_get_op.gce_global_operation
gce_get_op.gce_region_operation
Retrieves the specified global Operations resource.
## S3 method for class 'gce_global_operation' gce_get_op(operation)
operation |
Name of the Operations resource to return |
Retrieves the specified zone-specific Operations resource.
## S3 method for class 'gce_zone_operation' gce_get_op(operation)
operation |
Name of the Operations resource to return |
Returns the specified Project resource.
gce_get_project(project = gce_get_global_project())
project |
Project ID for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Returns the specified Zone resource. Get a list of available zones by making a list() request.
gce_get_zone(project, zone)
project |
Project ID for this request |
zone |
Name of the zone resource to return |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Set a project name used for this R session
gce_global_project(project = gce_get_global_project())
project |
project name you want this session to use by default, or a project object |
This sets a project to a global environment value so you don't need to supply the project argument to other API calls.
The project name (invisibly)
Set a zone name used for this R session
gce_global_zone(zone)
zone |
zone name you want this session to use by default, or a zone object |
This sets a zone to a global environment value so you don't need to supply the zone argument to other API calls.
The zone name (invisibly)
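A minimal sketch (not from the original documentation); the project and zone values are placeholders:
## Not run:
gce_global_project("my-project")
gce_global_zone("europe-west1-b")

# later calls can then omit the project/zone arguments
gce_get_global_project()
gce_get_global_zone()
## End(Not run)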
Retrieves a list of persistent disks contained within the specified zone.
gce_list_disks( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
zone |
The name of the zone for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Retrieves an aggregated list of persistent disks across all zones.
gce_list_disks_all( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Retrieves the list of firewall rules for a project
gce_list_firewall_rules( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
The Google Cloud project |
API Documentation https://cloud.google.com/compute/docs/reference/latest/firewalls/list
Other firewall functions:
gce_delete_firewall_rule()
,
gce_get_firewall_rule()
,
gce_make_firewall_rule()
,
gce_make_firewall_webports()
Retrieves a list of GPUs you can attach to an instance
gce_list_gpus( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
zone |
The name of the zone for this request |
To filter you need a single string in the form field_name eq|ne string, e.g. gce_list_instances("status eq RUNNING") where eq is 'equals' and ne is 'not-equals'.
Other GPU instances:
gce_check_gpu()
,
gce_vm_gpu()
Retrieves the list of private images available to the specified project.
gce_list_images( image_project, filter = NULL, maxResults = NULL, pageToken = NULL )
image_project |
Project ID for this request |
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
If you want to get a list of publicly-available images, use this method to make a request to the respective image project, such as debian-cloud, windows-cloud or google-containers.
Retrieves the list of instances contained within the specified zone.
gce_list_instances( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
zone |
The name of the zone for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
To filter you need a single string in the form field_name eq|ne string, e.g. gce_list_instances("status eq RUNNING") where eq is 'equals' and ne is 'not-equals'.
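A short example using the filter string from the details above (not part of the original documentation):
## Not run:
# only list instances that are currently running
gce_list_instances("status eq RUNNING")
## End(Not run)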
Retrieves a list of machine types available to the specified project.
gce_list_machinetype( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
zone |
The name of the zone for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Retrieves an aggregated list of machine types from all zones.
gce_list_machinetype_all( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Retrieves the list of networks available to the specified project.
gce_list_networks( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Retrieves a list of Operation resources contained within the specified zone.
gce_list_zone_op( filter = NULL, maxResults = NULL, pageToken = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
project |
Project ID for this request |
zone |
Name of the zone for request |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Retrieves the list of Zone resources available to the specified project.
gce_list_zones(project, filter = NULL, maxResults = NULL, pageToken = NULL)
project |
Project ID for this request |
filter |
Sets a filter expression for filtering listed resources, in the form filter=expression |
maxResults |
The maximum number of results per page that should be returned |
pageToken |
Specifies a page token to use |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/compute.readonly
Make a boot disk for attachment to an instance
gce_make_boot_disk( diskName = NULL, diskSizeGb = NULL, diskType = NULL, sourceImage = NULL, sourceImageEncryptionKey = NULL )
diskName |
Specifies the disk name |
diskSizeGb |
Specifies the size of the disk in base-2 GB |
diskType |
Specifies the disk type to use to create the instance |
sourceImage |
The source image used to create this disk |
sourceImageEncryptionKey |
The customer-supplied encryption key of the source image |
Specifies the parameters for a new disk that will be created alongside the new instance.
Use initialization parameters to create boot disks or local SSDs attached to the new instance.
This property is mutually exclusive with the source property; you can only define one or the other, but not both.
AttachedDiskInitializeParams object
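A hedged sketch (not from the original documentation); the disk name and image family are illustrative, and gce_make_image_source_url is assumed to return a URL suitable for sourceImage:
## Not run:
boot <- gce_make_boot_disk(
  diskName = "boot-disk-1",
  diskSizeGb = 20,
  sourceImage = gce_make_image_source_url("debian-cloud", family = "debian-11")
)
# pass via the initializeParams argument when attaching or creating an instance
## End(Not run)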
You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 500 GB data disk by omitting all properties.
gce_make_disk( name, sourceImage = NULL, sizeGb = NULL, description = NULL, diskEncryptionKey = NULL, licenses = NULL, sourceSnapshot = NULL, sourceImageEncryptionKey = NULL, sourceSnapshotEncryptionKey = NULL, type = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
name |
Name of the resource |
sourceImage |
The source image used to create this disk |
sizeGb |
Size of the persistent disk, specified in GB |
description |
An optional description of this resource |
diskEncryptionKey |
Encrypts the disk using a customer-supplied encryption key |
licenses |
Any applicable publicly visible licenses |
sourceSnapshot |
The source snapshot used to create this disk |
sourceImageEncryptionKey |
The customer-supplied encryption key of the source image |
sourceSnapshotEncryptionKey |
The customer-supplied encryption key of the source snapshot |
type |
URL of the disk type resource describing which disk type to use to create the disk |
project |
Project ID for this request |
zone |
The name of the zone for this request |
You can also create a disk that is larger than the default size by specifying the sizeGb property.
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
a zone operation
Use this to create firewall rules to apply to the network settings. Most commonly this is to set up web access (ports 80 and 443).
gce_make_firewall_rule( name, protocol, ports, sourceRanges = NULL, sourceTags = NULL, project = gce_get_global_project() )
name |
Name of the firewall rule |
protocol |
Protocol such as |
ports |
Port numbers to open |
sourceRanges |
From where to accept connections. If NULL, defaults to allowing connections from everywhere (0.0.0.0/0) |
sourceTags |
A list of instance tags this rule applies to. One or both of sourceRanges and sourceTags may be set |
project |
The Google Cloud project |
A global operation object
If both properties are set, an inbound connection is allowed if the range or the tag of the source matches the sourceRanges OR matches the sourceTags property; the connection does not need to match both properties.
API Documentation https://cloud.google.com/compute/docs/reference/latest/firewalls/insert
Other firewall functions:
gce_delete_firewall_rule()
,
gce_get_firewall_rule()
,
gce_list_firewall_rules()
,
gce_make_firewall_webports()
## Not run:
gce_make_firewall_rule("allow-http", protocol = "tcp", ports = 80)
gce_make_firewall_rule("allow-https", protocol = "tcp", ports = 443)
gce_make_firewall_rule("shiny", protocol = "tcp", ports = 3838)
gce_make_firewall_rule("rstudio", protocol = "tcp", ports = 8787)
## End(Not run)
Do the common use case of opening HTTP and HTTPS ports
gce_make_firewall_webports(project = gce_get_global_project())
project |
The project the firewall will open for |
This will invoke gce_make_firewall_rule and look for the rules named allow-http and allow-https. If not present, it will create them.
Vector of the firewall objects
Other firewall functions:
gce_delete_firewall_rule()
,
gce_get_firewall_rule()
,
gce_list_firewall_rules()
,
gce_make_firewall_rule()
Make initial disk image object
gce_make_image_source_url(image_project, image = NULL, family = NULL)
image_project |
Project ID of where the image lies |
image |
Name of the image resource to return |
family |
Name of the image family to search for |
The selfLink of the image object
Construct a machineType URL
gce_make_machinetype_url( predefined_type = NULL, cpus = NULL, memory = NULL, zone = gce_get_global_zone() )
predefined_type |
A predefined machine type from gce_list_machinetype |
cpus |
If not defining predefined_type, the number of CPUs |
memory |
If not defining predefined_type, the amount of memory |
zone |
zone for URL |
cpus must be in multiples of 2, up to 32. memory must be in multiples of 256.
A url for use in instance creation
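A minimal sketch (not from the original documentation) showing both ways of specifying a machine type:
## Not run:
# a predefined machine type
gce_make_machinetype_url(predefined_type = "n1-standard-1")

# or a custom type: cpus in multiples of 2, memory in multiples of 256
gce_make_machinetype_url(cpus = 2, memory = 2048)
## End(Not run)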
Make a network interface for instance creation
gce_make_network( network = "default", name = NULL, subnetwork = NULL, externalIP = NULL, project = gce_get_global_project() )
network |
Name of network resource |
name |
Name of the access config |
subnetwork |
A subnetwork name, if it exists. You need to provide accessConfig explicitly if you want an ephemeral IP assigned |
externalIP |
An external IP you have created previously, leave NULL to have one assigned or "none" for none |
project |
Project ID for this request |
A Network object
This turns instance metadata into an environment variable that R (and other software) can see. Only works on a running instance.
gce_metadata_env(key)
key |
The metadata key. Pass "" to list the keys |
The metadata key value, if successful
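A minimal sketch (not from the original documentation); this only works from code running on the instance itself, and the custom key is hypothetical:
## Not run:
gce_metadata_env("")            # list the available metadata keys
gce_metadata_env("GCS_BUCKET")  # read a custom key set via gce_set_metadata
## End(Not run)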
Load a previously saved private Google Container
gce_pull_registry( instance, container_name, container_url = "gcr.io", pull_only = FALSE, project = gce_get_global_project(), ... )
instance |
The VM to run within |
container_name |
The name of the saved container |
container_url |
The URL of where the container was saved |
pull_only |
If TRUE, will not run the container, only pull to the VM |
project |
Project ID for this request, default as set by gce_get_global_project |
... |
Other arguments passed to docker_run or docker_pull |
After starting a VM, you can load the container again using this command.
The instance
Other container registry functions:
gce_push_registry()
,
gce_tag_container()
Commit and save a running container or docker image to the Google Container Registry
gce_push_registry( instance, save_name, container_name = NULL, image_name = NULL, container_url = "gcr.io", project = gce_get_global_project(), wait = FALSE )
instance |
The VM to run within |
save_name |
The new name for the saved image |
container_name |
A running docker container. Can't be set if image_name is used |
image_name |
A docker image on the instance. Can't be set if container_name is used |
container_url |
The URL of where to save container |
project |
Project ID for this request, default as set by gce_get_global_project |
wait |
Will wait for the operation to finish on the instance if TRUE |
This will only work on the Google Container-optimised images of image_family google-containers; otherwise you will need to set up container registry authentication yourself (for now).
It will start the push, but it may take a long time to finish, especially the first time. This function will return whilst waiting, but don't turn off the VM until it has finished.
The tag the image was tagged with on GCE
Other container registry functions:
gce_pull_registry()
,
gce_tag_container()
RStudio has users based on unix user accounts
gce_rstudio_adduser( instance, username, password, admin = TRUE, container = "rstudio" )
instance |
An instance with RStudio installed via gce_vm_template |
username |
The user to create |
password |
The user password |
admin |
Default TRUE - Will the user be able to install packages and other sudo tasks? |
container |
The rstudio container to add the user to |
The instance
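A minimal sketch (not from the original documentation); vm is an RStudio instance created via gce_vm_template and the credentials are placeholders:
## Not run:
gce_rstudio_adduser(vm, username = "alice", password = "change-me", admin = FALSE)
## End(Not run)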
RStudio has users based on unix user accounts
gce_rstudio_password(instance, username, password, container = "rstudio")
instance |
An instance with RStudio installed via gce_vm_template |
username |
The user to change the password for |
password |
The user password |
container |
The rstudio container to add the user to |
The instance
Utility function to start a VM to run a docker container on a schedule. You will need to create and build the Dockerfile first.
gce_schedule_docker( docker_image, schedule = "53 4 * * *", vm = gce_vm_scheduler() )
docker_image |
the hosted docker image to run on a schedule |
schedule |
The schedule you want to run via cron |
vm |
A VM object to schedule the script upon that you can SSH into |
You may need to run gce_vm_scheduler yourself first, and then set up SSH details if not using the defaults, to pass to the vm argument.
You can create a Dockerfile with your R script installed by running it through containeRit::dockerfile, which also takes care of any dependencies.
It is recommended to create a script that is self-contained in output and input, e.g. don't save files to the VM; instead upload or download any files from Google Cloud Storage, authenticating via googleAuthR::gar_gce_auth() and then downloading and uploading data using library(googleCloudStorageR) or similar.
Once the script is working locally, build it and upload it to a repository so it can be reached via the docker_image argument.
You can build via Google Cloud repository build triggers, in which case the name can be created via gce_tag_container, or build via docker_build on another VM or locally, then push to a registry via gce_push_registry.
Any Docker image can be run, it does not have to be an R one.
The crontab schedule of the VM including your script
Other scheduler functions:
gce_vm_scheduler()
## Not run:
# create a Dockerfile of your script
if(!require(containeRit)){
  remotes::install_github("o2r-project/containerit")
  library(containeRit)
}

## create your scheduled script, example below named schedule.R
## it will run the script whilst making the dockerfile
container <- dockerfile("schedule.R",
                        copy = "script_dir",
                        cmd = CMD_Rscript("schedule.R"),
                        soft = TRUE)
write(container, file = "Dockerfile")

## upload created Dockerfile to GitHub, then use a Build Trigger to create Docker image "demoDockerScheduler"
## built trigger uses "demo-docker-scheduler" as must be lowercase

## After image is built:
## Create a VM to run the schedule
vm <- gce_vm_scheduler("my_scheduler")

## setup any SSH not on defaults
vm <- gce_vm_setup(vm, username = "mark")

## get the name of the just built Docker image that runs your script
docker_tag <- gce_tag_container("demo-docker-scheduler", project = "gcer-public")

## Schedule the docker_tag to run every day at 0453AM
gce_schedule_docker(docker_tag, schedule = "53 4 * * *", vm = vm)
## End(Not run)
Changes the machine type for a stopped instance to the machine type specified in the request.
gce_set_machinetype( predefined_type, cpus, memory, instance, project = gce_get_global_project(), zone = gce_get_global_zone() )
predefined_type |
A predefined machine type from gce_list_machinetype |
cpus |
If not defining predefined_type, the number of CPUs |
memory |
If not defining predefined_type, the amount of memory |
instance |
Name of the instance resource to change |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
A zone operation job
Set, change and append metadata for an instance.
gce_set_metadata( metadata, instance, project = gce_get_global_project(), zone = gce_get_global_zone() )
metadata |
A named list of metadata key/value pairs to assign to this instance |
instance |
Name of the instance scoping this request. If "project-wide" will set the metadata project wide, available to all instances |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
To append to existing metadata, pass a named list.
To change existing metadata, pass a named list with the same key and the modified value.
To delete metadata, pass an empty string "" with the same key.
Other Metadata functions:
Metadata()
## Not run:
# Use "project-wide" to set "enable-oslogin" = "TRUE" to take advantage of OS Login.
# But you won't be able to login via SSH if you do
gce_set_metadata(list("enable-oslogin" = "TRUE"), instance = "project-wide")

# enable google logging
gce_set_metadata(list("google-logging-enabled" = "True"), instance = "project-wide")
## End(Not run)
Set a minCPU platform on a stopped instance
gce_set_mincpuplatform(instance, minCpuPlatform)
instance |
The (stopped) instance to set a minimum CPU platform upon |
minCpuPlatform |
The platform to set |
Add a local Shiny app to a running Shiny VM (installed via gce_vm_template), using docker_build and gce_push_registry / gce_pull_registry.
gce_shiny_addapp(instance, app_image, dockerfolder = NULL)
instance |
The instance running Shiny |
app_image |
The name of the Docker image to create or use existing from Google Container Registry. Must be numbers, dashes or lowercase letters only. |
dockerfolder |
The folder location containing the Dockerfile and Shiny app |
To deploy a Shiny app, you first need to construct a Dockerfile which loads the R packages and dependencies, as well as copying over the Shiny app into the same folder.
This function will take the Dockerfile, build it into a Docker image and upload it to Google Container Registry for use later.
If already created, then the function will download the app_image from Google Container Registry and start it on the instance provided.
Any existing Shiny Docker containers are stopped and removed, so if you want multiple apps put them in the same Dockerfile.
The instance
Example Dockerfiles are found in system.file("dockerfiles", package = "googleComputeEngineR").
The Dockerfile is in the same folder as your Shiny app, which consists of a ui.R and server.R in a shiny subfolder. This is copied into the Dockerfile in the last line. Change the name of the subfolder to have that name appear in the final URL of the Shiny app.
This is then run using the R commands below:
The vignette entry called Shiny App has examples and a walk-through.
## Not run:
vm <- gce_vm("shiny-test", template = "shiny", predefined_type = "n1-standard-1")
vm <- gce_ssh_setup(vm)

app_dir <- system.file("dockerfiles", "shiny-googleAuthRdemo", package = "googleComputeEngineR")
gce_shiny_addapp(vm, app_image = "gceshinydemo", dockerfolder = app_dir)

# a new VM, it loads the Shiny docker image from before
gce_shiny_addapp(vm2, app_image = "gceshinydemo")
## End(Not run)
List shiny apps on the instance
gce_shiny_listapps(instance)
instance |
Instance with Shiny apps installed |
character vector
Get the latest shiny logs for a shinyapp
gce_shiny_logs(instance, shinyapp = NULL)
instance |
Instance with Shiny app installed |
shinyapp |
Name of shinyapp to see logs for. If NULL will return general shiny logs |
log printout
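A minimal sketch (not from the original documentation); vm is a Shiny instance and the app name is hypothetical:
## Not run:
gce_shiny_listapps(vm)                  # apps added via gce_shiny_addapp
gce_shiny_logs(vm)                      # general Shiny server logs
gce_shiny_logs(vm, shinyapp = "myapp")  # logs for one app
## End(Not run)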
Assumes that you have ssh & scp installed. If on Windows see website and examples for workarounds.
gce_ssh( instance, ..., key.pub = NULL, key.private = NULL, wait = TRUE, capture_text = "", username = Sys.info()[["user"]] )
gce_ssh_upload( instance, local, remote, username = Sys.info()[["user"]], key.pub = NULL, key.private = NULL, verbose = FALSE, wait = TRUE )
gce_ssh_download( instance, remote, local, username = Sys.info()[["user"]], key.pub = NULL, key.private = NULL, verbose = FALSE, overwrite = FALSE, wait = TRUE )
instance |
Name of the instance of run ssh command upon |
... |
Shell commands to run. Multiple commands are combined with
|
key.pub |
The filepath location of the public key |
key.private |
The filepath location of the private key |
wait |
Whether the SSH output should be waited for, or run asynchronously. |
capture_text |
Possible values are "", to the R console (the default), NULL or FALSE (discard output), TRUE (capture the output in a character vector) or a character string naming a file. |
username |
The username you used to generate the key-pair |
local, remote |
Local and remote paths. |
verbose |
If TRUE, will print command before executing it. |
overwrite |
If TRUE, will overwrite the local file if exists. |
Only works connecting to linux based instances.
On Windows you will need to install an ssh command line client - see examples for an example using RStudio's built in client.
You will need to generate a new SSH key-pair if you have not connected to the instance before via say the gcloud SDK.
To customise SSH connection see gce_ssh_setup
capture_text is passed to the stdout and stderr arguments of system2.
Otherwise, instructions for generating SSH keys can be found here: https://cloud.google.com/compute/docs/instances/connecting-to-instance.
Uploads and downloads are recursive, so if you specify a directory, everything inside the directory will also be downloaded.
https://cloud.google.com/compute/docs/instances/connecting-to-instance
Other ssh functions:
gce_ssh_addkeys()
,
gce_ssh_browser()
,
gce_ssh_setup()
## Not run:
vm <- gce_vm("my-instance")

## if you have already logged in via gcloud, the default keys will be used
## no need to run gce_ssh_addkeys
## run command on instance
gce_ssh(vm, "echo foo")
#> foo

## if running on Windows, use the RStudio default SSH client
## e.g. add C:\Program Files\RStudio\bin\msys-ssh-1000-18 to your PATH
## then run:
vm2 <- gce_vm("my-instance2")

## add SSH info to the VM object with custom info
vm2 <- gce_ssh_setup(vm2,
                     username = "mark",
                     key.pub = "C://.ssh/id_rsa.pub",
                     key.private = "C://.ssh/id_rsa")

## run command on instance
gce_ssh(vm2, "echo foo")
#> foo
## End(Not run)
Add SSH details to a gce_instance
gce_ssh_addkeys( instance, key.pub = NULL, key.private = NULL, username = Sys.info()[["user"]], overwrite = FALSE )
instance |
The gce_instance |
key.pub |
filepath to public SSH key |
key.private |
filepath to the private SSH key |
username |
SSH username to login with |
overwrite |
Overwrite existing SSH details if they exist |
You will only need to run this yourself if you save your SSH keys somewhere other than $HOME/.ssh/google_compute_engine.pub, or use a different username than your local username as found in Sys.info()[["user"]]; otherwise it will configure itself automatically the first time you use gce_ssh in an R session.
If key.pub is NULL then it will look for the default Google credentials at file.path(Sys.getenv("HOME"), ".ssh", "google_compute_engine.pub")
The instance with SSH details included in $ssh
Other ssh functions:
gce_ssh_browser()
,
gce_ssh_setup()
,
gce_ssh()
## Not run:
library(googleComputeEngineR)

vm <- gce_vm("my-instance")

## if you have already logged in via gcloud, the default keys will be used
## no need to run gce_ssh_addkeys
## run command on instance
gce_ssh(vm, "echo foo")

## if running on Windows, use the RStudio default SSH client
## e.g. add C:\Program Files\RStudio\bin\msys-ssh-1000-18 to your PATH
## then run:
vm2 <- gce_vm("my-instance2")

## add SSH info to the VM object with custom info
vm <- gce_ssh_setup(vm,
                    username = "mark",
                    key.pub = "C://.ssh/id_rsa.pub",
                    key.private = "C://.ssh/id_rsa")

## run command on instance
gce_ssh(vm, "echo foo")
#> foo

## example to check logs of rstudio docker container
gce_ssh(vm, "sudo journalctl -u rstudio")
## End(Not run)
This will open an SSH session from the browser if getOption("browser") is not NULL
gce_ssh_browser(instance)
instance |
the instance resource |
You will need to login the first time with an email that has access to the instance.
Opens a browser window to the SSH session, returns the SSH URL.
https://cloud.google.com/compute/docs/ssh-in-browser
Other ssh functions:
gce_ssh_addkeys()
,
gce_ssh_setup()
,
gce_ssh()
Uploads ssh-keys to an instance
gce_ssh_setup( instance, key.pub = NULL, key.private = NULL, ssh_overwrite = FALSE, username = Sys.info()[["user"]] )
instance |
Name of the instance of run ssh command upon |
key.pub |
The filepath location of the public key |
key.private |
The filepath location of the private key |
ssh_overwrite |
Will check if SSH settings already set and overwrite them if TRUE |
username |
The username you used to generate the key-pair |
This loads a public ssh-key to an instance's metadata. It does not use the project-wide SSH keys, which may be set separately.
You will need to generate a new SSH key-pair if you have not connected to an instance before.
Instructions for this can be found here: https://cloud.google.com/compute/docs/instances/connecting-to-instance. Once you have generated a key-pair, run this function once to initiate setup.
If you have historically connected via gcloud or some other means, ssh keys may have been generated automatically.
These will be looked for and used if found, at file.path(Sys.getenv("HOME"), ".ssh", "google_compute_engine.pub")
TRUE if successful
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
Other ssh functions:
gce_ssh_addkeys()
,
gce_ssh_browser()
,
gce_ssh()
## Not run: 
library(googleComputeEngineR)

vm <- gce_vm("my-instance")

## if you have already logged in via gcloud, the default keys will be used
## no need to run gce_ssh_addkeys
## run command on instance
gce_ssh(vm, "echo foo")

## if running on Windows, use the RStudio default SSH client
## e.g. add C:\Program Files\RStudio\bin\msys-ssh-1000-18 to your PATH
## then run:
vm2 <- gce_vm("my-instance2")

## add SSH info to the VM object
## custom info
vm <- gce_ssh_setup(vm,
                    username = "mark",
                    key.pub = "C://.ssh/id_rsa.pub",
                    key.private = "C://.ssh/id_rsa")

## run command on instance
gce_ssh(vm, "echo foo")
#> foo

## example to check logs of rstudio docker container
gce_ssh(vm, "sudo journalctl -u rstudio")

## End(Not run)
Get startup script logs
gce_startup_logs(instance, type = c("shell", "cloud-config", "nginx"))
instance |
The instance to get startup script logs from |
type |
The type of log to fetch. Uses SSH, so SSH must already be set up. |
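A minimal sketch for illustration; the instance name is a placeholder and SSH is assumed to be configured:
## Not run: 
vm <- gce_vm("my-instance")

## read the cloud-config startup logs over SSH
gce_startup_logs(vm, type = "cloud-config")

## End(Not run)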
Return a container tag for Google Container Registry
gce_tag_container( container_name, project = gce_get_global_project(), container_url = "gcr.io" )
container_name |
A running docker container. Can't be set if |
project |
Project ID for this request, default as set by gce_get_global_project |
container_url |
The URL of where to save the container |
This will only work on the Google Container optimised containers of image_family google_containers. Otherwise you will need to set up container authentication yourself (for now). The push will start but may take a long time to finish, especially the first time; the function returns whilst waiting, but don't turn off the VM until it has finished.
A tag for use in Google Container Registry
Other container registry functions:
gce_pull_registry()
,
gce_push_registry()
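A minimal sketch for illustration; the container name and resulting tag are placeholders:
## Not run: 
## returns a tag such as "gcr.io/<your-project>/my-rstudio"
tag <- gce_tag_container("my-rstudio")

## End(Not run)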
Pass in the instance name to fetch its object, or create the instance via gce_vm_create.
gce_vm( name, ..., project = gce_get_global_project(), zone = gce_get_global_zone(), open_webports = TRUE )
name |
The name of the instance |
... |
Arguments passed on to
|
project |
Project ID for this request |
zone |
The name of the zone for this request |
open_webports |
If TRUE, will open firewall ports 80 and 443 if not open already |
Will get or create the instance as specified. Will wait for instance to be created if necessary.
Make sure the instance is big enough to handle what you need;
for example, the default f1-micro
may hang when trying to install large R libraries.
A gce_instance
object
You need these parameters defined to call the right function for creation. Check the function definitions for more details.
If the VM name exists but is not running, it starts the VM and returns the VM object
If the VM is running, it will return the VM object
If you specify the argument template
it will call gce_vm_template
If you specify one of file
or cloud_init
it will call gce_vm_container
Otherwise it will call gce_vm_create
## Not run: 
library(googleComputeEngineR)

## auto auth, project and zone pre-set

## list your VMs in the project/zone
the_list <- gce_list_instances()

## start an existing instance
vm <- gce_vm("markdev")

## for rstudio, you also need to specify a username and password to login
vm <- gce_vm(template = "rstudio",
             name = "rstudio-server",
             username = "mark", password = "mark1234")

## specify your own cloud-init file and pass it into gce_vm_container()
vm <- gce_vm(cloud_init = "example.yml",
             name = "test-container",
             predefined_type = "f1-micro")

## specify disk size at creation
vm <- gce_vm('my-image3', disk_size_gb = 20)

## End(Not run)
This wraps the commands for creating a cluster suitable for future workloads.
gce_vm_cluster( vm_prefix = "r-cluster-", cluster_size = 3, docker_image = NULL, ..., ssh_args = NULL, project = gce_get_global_project(), zone = gce_get_global_zone() )
vm_prefix |
The prefix of the VMs you want to make. The cluster number will be appended to it. |
cluster_size |
The number of VMs in your cluster |
docker_image |
The docker image the jobs on the cluster will run on. Default NULL will use |
... |
Passed to gce_vm_template |
ssh_args |
A list of optional arguments that will be passed to gce_ssh_setup |
project |
The project to launch the cluster in |
zone |
The zone to launch the cluster in |
## Not run: 
library(future)
library(googleComputeEngineR)

vms <- gce_vm_cluster()

## make a future cluster
plan(cluster, workers = as.cluster(vms))

## End(Not run)
This lets you specify docker images when creating the VM. These are a special class of Google instances that are setup for running Docker containers.
gce_vm_container( file = NULL, cloud_init = NULL, shell_script = NULL, image_family = "cos-stable", image_project = "cos-cloud", ... )
file |
file location of a valid cloud-init or shell_script file.
One of |
cloud_init |
contents of a cloud-init file, for example read via |
shell_script |
contents of a shell_script file, for example read via |
image_family |
An image-family. It must come from the |
image_project |
An image-project, where the image-family resides. |
... |
Other arguments passed to gce_vm_create |
file
expects a filepath to a https://cloudinit.readthedocs.io/en/latest/topics/format.html configuration file or a valid bash script, i.e. it has #!/bin/
or #cloud-config
at the top of the file.
image_project
will be ignored if set, as it is overridden to cos-cloud.
If you want to set it, use the gce_vm_create function directly, which this function wraps with some defaults.
A zone operation
https://cloud.google.com/container-optimized-os/docs/how-to/create-configure-instance - help using cloud-init files
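A minimal sketch for illustration; the cloud-init file "example.yml" and the instance name are placeholders:
## Not run: 
## read the contents of a local cloud-init file
cloud_init <- readChar("example.yml", nchars = file.info("example.yml")$size)

vm <- gce_vm_container(cloud_init = cloud_init,
                       name = "test-container",
                       predefined_type = "f1-micro")

## End(Not run)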
Creates an instance resource in the specified project using the data included in the request.
gce_vm_create(
  name,
  predefined_type = "f1-micro",
  image_project = "debian-cloud",
  image_family = "debian-9",
  cpus = NULL,
  memory = NULL,
  image = "",
  disk_source = NULL,
  network = gce_make_network("default", project = project),
  externalIP = NULL,
  canIpForward = NULL,
  description = NULL,
  metadata = NULL,
  scheduling = NULL,
  serviceAccounts = NULL,
  tags = NULL,
  minCpuPlatform = NULL,
  project = gce_get_global_project(),
  zone = gce_get_global_zone(),
  dry_run = FALSE,
  disk_size_gb = NULL,
  use_beta = FALSE,
  acceleratorCount = NULL,
  acceleratorType = "nvidia-tesla-p4"
)
name |
The name of the resource, provided by the client when initially creating the resource |
predefined_type |
A predefined machine type from gce_list_machinetype |
image_project |
Project ID of where the image lies |
image_family |
Name of the image family to search for |
cpus |
If not defining |
memory |
If not defining |
image |
Name of the image resource to return |
disk_source |
Specifies a valid URL to an existing Persistent Disk resource. |
network |
A network object created by gce_make_network |
externalIP |
An external IP you have previously reserved, leave NULL to have one assigned or |
canIpForward |
Allows this instance to send and receive packets with non-matching destination or source IPs |
description |
An optional description of this resource |
metadata |
A named list of metadata key/value pairs assigned to this instance |
scheduling |
Scheduling options for this instance, such as preemptible instances |
serviceAccounts |
A list of service accounts, with their specified scopes, authorized for this instance |
tags |
A list of tags to apply to this instance |
minCpuPlatform |
Specify a minimum CPU platform as per https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform |
project |
Project ID for this request |
zone |
The name of the zone for this request |
dry_run |
whether to just create the request JSON |
disk_size_gb |
If not NULL, override default size of the boot disk (size in GB) |
use_beta |
If set to TRUE will use the beta version of the API. Should not be used for production purposes. |
acceleratorCount |
Number of GPUs to add to instance. If using this, you may want to instead use gce_vm_gpu which sets some defaults for GPU instances. |
acceleratorType |
Name of GPU to add, see gce_list_gpus |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
cpus
must be in multiples of 2 up to 32
memory
must be in multiples of 256
One of image
or image_family
must be supplied
To create an instance you need to specify:
Name
Project [if not default]
Zone [if not default]
Machine type - either a predefined type or custom CPU and memory
Network - usually default, specifies open ports etc.
Image - a source image containing the operating system
You can add metadata to the server such as startup-script
and shutdown-script
. Details available here: https://cloud.google.com/compute/docs/storing-retrieving-metadata
If you do not want an external IP, modify the instance afterwards
A zone operation, or if the name already exists the VM object from gce_get_instance
You can set preemptible VMs by passing this in the scheduling
arguments scheduling = list(preemptible = TRUE)
This creates a VM that may be shut down prematurely by Google; you will need to handle saving state if that happens, e.g. via a shutdown script. However, these are much cheaper.
Some defaults for launching GPU enabled VMs are available at gce_vm_gpu
You can add GPUs to your instance, but they must be present in the zone you have specified - use gce_list_gpus to see which are available. Refer to the Google Cloud GPU documentation for a list of current GPUs per zone.
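A minimal sketch of the options above; the instance names and sizes are illustrative only and assume a default project and zone have been set:
## Not run: 
## a predefined machine type
vm <- gce_vm_create(name = "my-vm", predefined_type = "n1-standard-1")

## a custom machine type (cpus in multiples of 2, memory in multiples of 256)
## launched as a preemptible instance
vm2 <- gce_vm_create(name = "my-preemptible-vm",
                     cpus = 2,
                     memory = 2048,
                     scheduling = list(preemptible = TRUE))

## End(Not run)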
Deletes the specified Instance resource.
gce_vm_delete( instances, project = gce_get_global_project(), zone = gce_get_global_zone() )
instances |
Name of the instance resource, or an instance object e.g. from gce_get_instance, or a list of instances to delete |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
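A minimal sketch for illustration; the instance names are placeholders:
## Not run: 
## delete a single VM, or pass a list of instances to delete several at once
gce_vm_delete("my-vm")

## End(Not run)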
Toggle deletion protection for existing instances
gce_vm_deletion_protection( instance, cmd = c("status", "true", "false"), project = gce_get_global_project(), zone = gce_get_global_zone() )
instance |
The vm to work with its deletion protection |
cmd |
Whether to get the status, or toggle "true" or "false" on deletion protection for this VM |
project |
The projectId |
zone |
The zone |
## Not run: 
# a workflow for deleting lots of VMs across zones that have deletion protection
zones <- gce_list_zones()
instances <- lapply(zones$name, function(x) gce_list_instances(zone = x))
instances_e <- lapply(instances, function(x) x$items$name)
names(instances_e) <- zones$name

status <- lapply(zones$name, function(x){
  lapply(instances_e[[x]], function(y) {
    gce_vm_deletion_protection(y, cmd = "false", zone = x)
  })
})

deletes <- lapply(zones$name, function(x){
  lapply(instances_e[[x]], function(y) {
    gce_vm_delete(y, zone = x)
  })
})

## End(Not run)
Helper function that fills in some defaults passed to gce_vm
gce_vm_gpu(..., return_dots = FALSE)
... |
arguments passed to gce_vm |
return_dots |
Only return the settings, do not call gce_vm |
If not specified, this function will set defaults to get a GPU instance up and running.
acceleratorCount: 1
acceleratorType: "nvidia-tesla-p4"
scheduling: list(onHostMaintenance = "TERMINATE", automaticRestart = TRUE)
image_project: "deeplearning-platform-release"
image_family: "tf2-ent-latest-gpu"
predefined_type: "n1-standard-8"
metadata: "install-nvidia-driver" = "True"
A VM object
https://cloud.google.com/deep-learning-vm/docs/quickstart-cli
Other GPU instances:
gce_check_gpu()
,
gce_list_gpus()
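A minimal sketch using the defaults listed above; the instance name is a placeholder:
## Not run: 
## launch a GPU VM with the defaults above
vm <- gce_vm_gpu(name = "gpu-vm")

## inspect the settings that would be used, without launching
settings <- gce_vm_gpu(name = "gpu-vm", return_dots = TRUE)

## End(Not run)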
Opens the VM's serial console output in the browser, saving a few clicks.
gce_vm_logs(instance, open_browser = TRUE)
instance |
The VM to see serial console output for |
open_browser |
Whether to return a URL or open the browser |
a URL
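A minimal sketch for illustration; the instance name is a placeholder:
## Not run: 
vm <- gce_vm("my-instance")

## open the logs in the browser, or set open_browser = FALSE to just get the URL
gce_vm_logs(vm)
log_url <- gce_vm_logs(vm, open_browser = FALSE)

## End(Not run)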
Performs a hard reset on the instance.
gce_vm_reset( instances, project = gce_get_global_project(), zone = gce_get_global_zone() )
instances |
Name of the instance resource, or an instance object e.g. from gce_get_instance |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
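A minimal sketch for illustration; the instance name is a placeholder:
## Not run: 
## perform a hard reset on the instance
gce_vm_reset("my-vm")

## End(Not run)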
This starts up a VM with cron and docker installed that can be used to schedule scripts
gce_vm_scheduler(vm_name = "scheduler", ...)
vm_name |
The name of the VM scheduler to create or return |
... |
Arguments passed on to
|
A VM object
Other scheduler functions:
gce_schedule_docker()
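A minimal sketch for illustration; the VM name and machine type are placeholders, and it is assumed the ... arguments forward predefined_type to the underlying VM creation:
## Not run: 
## create (or fetch) a small VM to schedule scripts on,
## then schedule docker images on it with gce_schedule_docker
scheduler <- gce_vm_scheduler("my-scheduler", predefined_type = "f1-micro")

## End(Not run)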
Starts an instance that was stopped using the stop method.
gce_vm_start( instances, project = gce_get_global_project(), zone = gce_get_global_zone() )
instances |
Name of the instance resource, or an instance object e.g. from gce_get_instance |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
A list of operation objects with pending status
Stops a running instance, shutting it down cleanly, and allows you to restart the instance at a later time.
gce_vm_stop( instances, project = gce_get_global_project(), zone = gce_get_global_zone() )
gce_vm_suspend( instances, project = gce_get_global_project(), zone = gce_get_global_zone() )
gce_vm_resume( instances, project = gce_get_global_project(), zone = gce_get_global_zone() )
instances |
Names of the instance resource, or an instance object e.g. from gce_get_instance |
project |
Project ID for this request, default as set by gce_get_global_project |
zone |
The name of the zone for this request, default as set by gce_get_global_zone |
Authentication scopes used by this function are:
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
Stopped instances do not incur per-minute, virtual machine usage charges while they are stopped, but any resources that the virtual machine is using, such as persistent disks and static IP addresses, will continue to be charged until they are deleted.
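A minimal sketch for illustration; the instance name is a placeholder:
## Not run: 
## stop a VM to avoid usage charges (disks and static IPs still bill)
gce_vm_stop("my-vm")

## ...later, start it again and fetch the updated instance object
gce_vm_start("my-vm")
vm <- gce_get_instance("my-vm")

## End(Not run)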
This lets you specify templates for the VM you want to launch. It passes the template on to gce_vm_container
gce_vm_template( template = c("rstudio", "shiny", "opencpu", "r-base", "r-parallel", "dynamic", "rstudio-gpu", "rstudio-shiny", "rstudio-noauth"), username = NULL, password = NULL, dynamic_image = NULL, image_family = "cos-stable", wait = TRUE, ... )
template |
The template available |
username |
username if needed (RStudio) |
password |
password if needed (RStudio) |
dynamic_image |
Supply an alternative to the default Docker image for the template |
image_family |
An image-family. It must come from the |
wait |
Whether to wait for the VM to launch before returning. Default |
... |
Arguments passed on to
|
Templates available are:
rstudio An RStudio server docker image with tidyverse and devtools
rstudio-gpu An RStudio server with popular R machine learning libraries and GPU driver. Will launch a GPU enabled VM.
rstudio-shiny An RStudio server with Shiny also installed, proxied to /shiny
shiny A Shiny docker image
opencpu An OpenCPU docker image
r-base Latest version of R stable
r-parallel Image with future enabled for parallel workloads
dynamic Supply your own docker image within dynamic_image
For dynamic
templates you will need to launch the docker image with any ports you want opened
and other settings via docker_run.
Use dynamic_image
to override the default rocker images e.g. rocker/shiny
for shiny, etc.
The VM object, or the VM startup operation if wait=FALSE
## Not run: 
library(googleComputeEngineR)

## make instance using R-base
vm <- gce_vm_template("r-base",
                      predefined_type = "f1-micro",
                      name = "rbase")

## run an R function on the instance within the R-base docker image
docker_run(vm, "rocker/r-base", c("Rscript", "-e", "1+1"), user = "mark")
#> [1] 2

## End(Not run)
Will periodically check an operation until its status is DONE
gce_wait(operation, wait = 3, verbose = TRUE, timeout_tries = 50)
operation |
The operation object |
wait |
Time in seconds between checks, default 3 seconds. |
verbose |
Whether to give user feedback |
timeout_tries |
Number of times to check before giving up |
The completed job object, invisibly
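A minimal sketch for illustration; the instance name is a placeholder, and gce_vm_create is documented above to return a zone operation:
## Not run: 
job <- gce_vm_create(name = "my-vm", predefined_type = "f1-micro")

## poll every 5 seconds until the operation status is DONE
gce_wait(job, wait = 5)

## End(Not run)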
This gets the folder location of available Dockerfile examples
get_dockerfolder(dockerfile_folder)
dockerfile_folder |
The folder containing |
file location
See demos and examples at https://cloudyr.github.io/googleComputeEngineR/.
An object representing the current computer that R is running on.
localhost
An object of class localhost
(inherits from host
) of length 0.
Called by as.cluster
makeDockerClusterPSOCK( workers, docker_image = "rocker/r-parallel", rscript = c("docker", "run", "--net=host", docker_image, "Rscript"), rscript_args = NULL, install_future = FALSE, ..., verbose = FALSE )
workers |
The VMs being called upon |
docker_image |
The docker image to use on the cluster |
rscript |
The Rscript command to run on the cluster |
rscript_args |
Arguments to the RScript |
install_future |
Whether to check if future is installed first (not needed if using a docker image derived from rocker/r-parallel, which is recommended) |
... |
Other arguments passed to makeClusterPSOCK |
verbose |
How much feedback to show |
Henrik Bengtsson [email protected]