At TG07, Kate gave a talk, which is now online. The paper she presented discusses, among other things, contextualization: the structure and mechanisms by which an appliance/workspace is “told” what it needs in order to adapt to its deployed environment. This covers not just adaptation to site-specific services but also to other appliances that may be deployed with it, for example in a virtual cluster deployment.
Amidst the bustle we implemented a new backend for the Workspace Service that targets Amazon’s Elastic Compute Cloud (EC2). We’ve deployed it on the University of Chicago’s Teraport cluster, and for now we will pay for EC2 usage by selected collaborators.
Besides being somewhat fun to implement (including getting the Globus and Amazon secure message stacks on the same wavelength), I think it’s going to be interesting.
Because grid resources are only cautiously approaching the pioneering switch to virtualizing resources, even in part, it is going to be interesting and educational to see what people will be able to accomplish with workspaces when a large pool of resources is actually available on tap, starting today.
Because the same deployment protocols can be used for both native and EC2 resources, an obvious use case is capacity overflow: when local resources are full, requests can spill over to EC2. In the right situations, VMs are a good mechanism for providers to dynamically reach more consumers as the need arises.
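The overflow idea can be sketched in a few lines. This is an illustrative model only, under assumed names (`Backend`, `place`, the capacity numbers); it is not the Workspace Service's actual API, just the shape of the decision: try the local backend first, and fall through to EC2 when capacity runs out.

```python
# Hypothetical sketch of capacity overflow between a local cluster and EC2.
# All class and function names here are illustrative assumptions, not the
# Workspace Service's real interfaces.

class Backend:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # free VM slots on this backend

    def can_host(self, slots):
        return self.capacity >= slots

    def deploy(self, slots):
        # Claim the slots and report which backend served the request.
        self.capacity -= slots
        return self.name


def place(request_slots, backends):
    """Try backends in priority order; overflow to the next one when full."""
    for backend in backends:
        if backend.can_host(request_slots):
            return backend.deploy(request_slots)
    raise RuntimeError("no backend has capacity for %d slots" % request_slots)


local = Backend("teraport", capacity=4)
ec2 = Backend("ec2", capacity=100)

print(place(3, [local, ec2]))  # fits locally, prints "teraport"
print(place(3, [local, ec2]))  # only 1 local slot left, overflows: "ec2"
```

The key point the sketch illustrates is that the caller's request looks identical either way; only the placement decision changes, which is what a shared deployment protocol buys you.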
For a feature list and description, see What is the EC2 backend?
(It is a cautious, and some would say inevitable, switch, even with the performance costs. Consider also that “virtualizing resources” may mean physical node re-imaging; cf. Virtual Workspaces: Achieving Quality of Service and Quality of Life in the Grid.)