Workspace Service TP1.3.1
Some cool new features:
On behalf of the workspace team, I am happy to announce the TP 1.3.1 release of the Workspace Service. You can download the new release from: http://workspace.globus.org/downloads/index.html
The main new feature in this release is the implementation of the workspace pilot which provides non-invasive adaptations to batch schedulers (such as PBS) enabling sites to run virtual machines alongside jobs. The details of this approach are described in: workspace-pilot-europar08.pdf
In addition, the release also contains the ensemble service that allows clients to create ensembles of heterogeneous virtual machines to be deployed and managed together, improvements to the client, and several bug fixes. The complete changelog can be found at: http://workspace.globus.org/vm/TP1.3.1/index.html#changelog
We welcome comments, feedback, and bug reports. Information about the project, software downloads, documentation and instructions on how to join the workspace-user mailing list for support questions can be found at: http://workspace.globus.org
Happy Valentine’s Day!
As you can read there, the main new feature is the pilot infrastructure. The paper Kate refers to in the announcement is a relatively short read and lays out the ideas (and a practical evaluation) in an organized way. But briefly: the pilot is a program the service submits to a local site resource manager in order to obtain time on the VMM nodes. When not allocated to the workspace service, these nodes are used for jobs as normal. Those jobs run in normal system accounts in Xen domain 0 with no guest VMs running.
Importantly, the approach leaves the site resource manager in full control of the nodes and requires no modifications to it, apart from any optional configuration changes you might like to make. For example, you can mark particular nodes as able to accommodate guest VMs: the workspace service supports sending pilot requests to particular LRM queues, or requesting a particular node property, etc. This lets you control not just when but where VMs can run.
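As a rough sketch of what that targeting could look like on a PBS-style LRM (the queue name `vmqueue`, the node property `xen`, and the pilot script path below are made up for illustration; the workspace service constructs its own submissions, and your site's names will differ):

```shell
# Hypothetical PBS setup: tag the VMM-capable nodes with a node
# property (here "xen") in the server's nodes file, e.g.:
#
#   node01 np=2 xen
#   node02 np=2 xen
#
# A pilot submission aimed only at those nodes could then use a
# dedicated queue plus the node property in the resource request:
qsub -q vmqueue -l nodes=1:xen workspace-pilot.sh
```

The point is simply that standard LRM mechanisms (queues and node properties) are enough to steer pilot jobs, which is why no scheduler modifications are needed.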
Several extra safeguards have been added to make sure the node is returned from VM hosting mode at the proper time, including support for:
- the workspace service being down or malfunctioning
- LRM preemption (including deliberate LRM job cancellation)
- node reboot/shutdown
Also included is a one-command "kill 9" facility for administrators as a "worst case scenario" contingency.
So as a buzzword experiment, I want to put in a particular keyword here and see how the search engine hits work out :-). I think you know what it may be…
Go make a cloud!
And with the workspace pilot, you won’t have to switch over all at once. Take it for a test run and tell us about it on workspace-user.
We’ve got some exciting stuff in the pipeline for the next few months, too (see the last release announcement and the self-configuring 100 node VM cluster news). I am really happy with where the project is going.