
Visions of Cloud Computing – PaaS Everywhere

Just came back from the CloudConnect conference in Santa Clara, and after visiting a very large number of vendor booths on the expo floor I was somewhat disappointed at the lack of progress in the solutions being offered. There was pretty much no differentiation. Every vendor had their “me too” management console focused on VM provisioning and monitoring, with some degree of AWS integration. There were some sparks of innovation in the storage space, but nothing radical that would really take us all to the next stage in cloud computing – getting rid of the computer (or at least the VM image). In fact some of the vendors present had pretty much nothing to offer specific to cloud computing, except perhaps that they use the cloud themselves to generate load to performance/stress test other sites (not necessarily in the cloud). There was a lot of talk, but its translation into cloud practice, and the resulting transformation in the engineering community, was largely absent. I don’t even recall seeing any PaaS signage on the expo floor.

Maybe CloudConnect is not the right conference for this, or maybe PaaS is viewed as a market that will eventually be dominated by a few very big vendors, none of whom were present except for Microsoft (who offered the nicest t-shirt). It’s a great conference to connect with others pushing ahead (in their minds and hallway chats), but beyond this point the conference itself has a looming identity crisis (at least for me), which is probably a result of the cloud encompassing so many technologies, processes, service delivery models, even ideologies (Ops => DevOps => NoOps).

On the long haul home I decided it was time to blog on some of the visions I have of cloud computing that I would like to eventually see become subject matter at such a conference. This first one looks briefly at PaaS, but not as most see it today. Today’s PaaS is largely focused on taking existing enterprise apps, deployed to a fixed number of application server targets in some data center, and pushing them out to the cloud with a potentially unlimited number of application server targets – in many cases switching in cloud-service-based versions of services that offer access to a resource pool of storage and computing.

After spending considerable time engineering CORBA and grid based solutions, the scalability problems the above approach presents (on many levels) are all too familiar. But what if, in designing such interfaces, engineers offered a means to see and touch more of the data (form and function) – the data hidden behind the remote interface because of matters related to this mode/style of interaction? What if there was a remote interface as well as a set of local interfaces which could be accessed by code pushed to (or pulled from) the remote service? The code would execute within some managed (and metered) shell (or container), collocated with the data and functions, and controlled by the context within its own payload. What if the code could transmit responses or signals back to its source whilst it migrated the flow of execution (or control) from one service endpoint to another? Every cloud service provider would then become a PaaS solution provider, but in a narrow, domain-specific way. Instead of the application (or web app) being the deployment unit, it would be an activity containing data, code, as well as context, which would expose metadata for the purpose of security, metering, quality of service (QoS), billing, routing and service delegation (a subject of the next vision blog).
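To make the idea a little more concrete, here is a minimal sketch in Python of what such an activity and metered shell might look like. Everything here is hypothetical – the names (`Activity`, `ActivityContext`, `MeteredShell`) and the envelope shape are illustrative assumptions, not an existing API:

```python
import time
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ActivityContext:
    """Metadata travelling with the activity: the basis for security,
    metering, QoS, billing and routing decisions at each service."""
    principal: str                  # identity, for security checks
    billing_account: str            # who pays for the consumed resources
    qos_class: str = "standard"     # quality-of-service hint
    route: list = field(default_factory=list)  # endpoints visited so far

@dataclass
class Activity:
    """The deployment unit: data + code + context, shipped to the service."""
    data: Any
    code: Callable[[dict], Any]
    context: ActivityContext

class MeteredShell:
    """A managed container collocated with the service's local data.
    It runs the activity's code next to the data and meters execution."""
    def __init__(self, endpoint: str, local_data: dict):
        self.endpoint = endpoint
        self.local_data = local_data

    def execute(self, activity: Activity) -> dict:
        start = time.perf_counter()
        activity.context.route.append(self.endpoint)  # record the hop
        # The code runs against the local data instead of pulling it
        # over the wire through a remote interface.
        result = activity.code({**self.local_data, "input": activity.data})
        elapsed = time.perf_counter() - start
        # The metering record is billed to the originating caller.
        return {"result": result,
                "billed_to": activity.context.billing_account,
                "seconds": elapsed}

# Hypothetical usage: ship a computation to data held behind the service.
shell = MeteredShell("svc://inventory", {"stock": [3, 5, 7]})
act = Activity(data=2,
               code=lambda env: sum(env["stock"]) * env["input"],
               context=ActivityContext("alice", "acct-42"))
receipt = shell.execute(act)   # → {"result": 30, "billed_to": "acct-42", ...}
```

The point of the sketch is the inversion: the code travels to the data, and the context in its own payload tells the hosting shell how to secure, meter, route and bill it.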

This is not as far fetched as one might initially think. We already have client browsers on end user devices performing some aspects of this today, as well as SaaS solutions offering ways to execute custom business logic under very controlled (governed) conditions. The future of cloud computing is not about provisioning VM instances…it’s about being able to move our (transient) data, code and workflow (process) from one computing/storage device to another…it’s about abstracting the consumption of computing power and in doing so expanding the potential capacity beyond that of a single public provider, even one as big as Amazon AWS. It’s about creating a large service economy in the cloud and applying service supply chain management in the design and deployment of applications and services.

If this could be realized we might actually come back to the situation in which software vendors don’t have to ship proprietary (black box) processors and storage devices with their software – which is kind of like what SaaS is today, minus the ability to hug such devices. In theory a software vendor would make their software available at some point on the grid, with resource consumption (or “provisioning”) done via one or more services (delegates) exposed in the activity context and owned by (or billed to) the calling client (or app or user). Do we really want every service provider needing to provision and be billed for computing capacity when the cause of such consumption rests elsewhere in the caller chain?
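As a toy illustration of that last point, here is a hedged sketch (all names hypothetical) of a billing context flowing down a delegate chain, so that each hop meters its consumption against the originating caller rather than provisioning capacity on its own account:

```python
from dataclasses import dataclass, field

@dataclass
class BillingContext:
    """Travels with the request: the originator pays at every hop."""
    account: str                               # the calling client's account
    chain: list = field(default_factory=list)  # services traversed

def service(name, downstream=None):
    """A service that records its hop, charges the context's account,
    and optionally delegates further consumption downstream."""
    def handle(units, ctx):
        ctx.chain.append(name)
        charges = [(name, units, ctx.account)]    # metered to the originator
        if downstream:
            # Delegation consumes more resources, still on the same account.
            charges += downstream(units * 2, ctx)
        return charges
    return handle

# Hypothetical caller chain: client -> compute -> storage.
storage = service("storage")
compute = service("compute", downstream=storage)

ctx = BillingContext(account="caller-7")
charges = compute(1, ctx)
# Every charge, at every hop, lands on caller-7's account:
# [("compute", 1, "caller-7"), ("storage", 2, "caller-7")]
```

Neither the compute nor the storage provider carries the cost of the capacity; the metadata in the context attributes consumption to where it actually originated.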

In forthcoming blogs I would like to explore in greater detail how this could be achieved, and the challenges the industry faces without some standardization of the service context metadata needed for safe, efficient and (cost) controlled execution and flow migration.
