Our vision will change the way software and systems engineers design, develop, deploy, and deliver applications and services to end users. It goes beyond being a truly intelligent, multi-dimensional measurement solution that passively observes the most important behavioral aspects of software: resource consumption, response time, business value, and cost.
It is a vision that sees activity-based metering as the cortex that connects context with cost, control with code, coordination with continuation. It powers the next generation of software applications, services, and platforms that are inherently self-adaptive. It sits on top of powerful, high-performance runtime engines such as the Java HotSpot VM and orchestrates execution based on policies and profiles assigned to the current activity context (user, code, (work)flow, application) by the application management team. It does not just measure behavior; it actively, and in real time, supervises, controls, and directs the execution of software that is instrumented and metered.
We see six significant stages in the maturity of an organization's management of its applications and services: profile, protect, police, prioritize, predict, and provision.
Profile: This is the starting point for most of our customers, who are looking for a solution that can instrument, measure, and collect performance data for a significant portion of their application code base, across development, testing, and production environments, and up and down the latency and frequency spectrum. No other technology has achieved this in practice. Without a comprehensive understanding and model of software execution behavior and resource consumption patterns, none of the later stages can realistically be achieved.
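As a minimal sketch of what measurement at a single execution point involves, the hypothetical `Probe` below wraps a unit of work and records its latency and invocation count. Real metering engines instrument code at load time rather than by explicit wrapping, so the class and method names here are purely illustrative:

```java
import java.util.function.Supplier;

// Illustrative probe: wraps a unit of work, recording elapsed time and
// invocation count for that execution point.
final class Probe {
    private long invocations;
    private long totalNanos;

    <T> T measure(Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            totalNanos += System.nanoTime() - start;
            invocations++;
        }
    }

    long invocations() { return invocations; }
    long totalNanos() { return totalNanos; }
}
```

Aggregating such observations per user, code site, and flow is what yields the behavioral model the later stages build on.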
Protect: A significant number of production problems result from the absence of adequate supervisory code routines embedded in the runtime to govern the execution flow and its consumption of system resources (and, in the cloud, the charges that follow). Our metering technology not only provides the hooks for such routines to intervene at the point of execution and/or consumption, but also provides the in-flight metering data needed to make the judgements that drive actions.
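A supervisory routine of this kind can be sketched as a hook that consults in-flight metering data before permitting further consumption. The `ActivityMeter` and `Supervisor` classes and the CPU-budget policy below are assumptions for illustration, not the product's actual API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical in-flight meter for one activity context.
final class ActivityMeter {
    private final AtomicLong cpuMillisUsed = new AtomicLong();
    private final long cpuMillisBudget;

    ActivityMeter(long cpuMillisBudget) { this.cpuMillisBudget = cpuMillisBudget; }

    // Called by instrumented code at each metered execution point.
    void charge(long cpuMillis) { cpuMillisUsed.addAndGet(cpuMillis); }

    long used() { return cpuMillisUsed.get(); }

    // The supervisory judgement: is this activity still within budget?
    boolean withinBudget() { return cpuMillisUsed.get() < cpuMillisBudget; }
}

final class Supervisor {
    // Intervene at the point of execution: run the task only if the
    // activity's in-flight metering says it is still within budget.
    static boolean permit(ActivityMeter meter, Runnable task) {
        if (!meter.withinBudget()) return false; // corrective action: deny
        task.run();
        return true;
    }
}
```

The essential point is that the judgement is made from data already present at the execution point, not fetched from a remote monitoring store.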
Police: Our innovative QoS for Apps extension actively polices activities, watching for non-compliant behavior and taking corrective measures to enforce the call flow contracts and agreements associated with one or more aspects of the activity context.
Prioritize: Following the delivery of self-stabilizing applications and systems, customers typically move on to differentiating the quality of service offered to applications, activities, and actors (users and clients). Here our QoS technology is used to introduce dynamic scheduling of execution flows within runtimes.
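One simple way to picture differentiated quality of service is a scheduler that orders pending work by a priority drawn from the activity context. The classes below are an illustrative sketch under that assumption, not a description of how our QoS technology schedules execution flows:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// A unit of work tagged with its activity context and priority.
final class QosTask {
    final String activity;
    final int priority;
    final Runnable work;

    QosTask(String activity, int priority, Runnable work) {
        this.activity = activity;
        this.priority = priority;
        this.work = work;
    }
}

final class QosScheduler {
    // Highest priority first.
    private final PriorityQueue<QosTask> queue =
        new PriorityQueue<>(Comparator.comparingInt((QosTask t) -> t.priority).reversed());

    void submit(QosTask task) { queue.add(task); }

    // Hand out the next task to run, most important activity first.
    QosTask next() { return queue.poll(); }
}
```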
Predict: With the ability to access current and historical metering observations, immediately and locally, at any point in the execution, customers can install and manage sophisticated process controllers built on the powerful feedback loop mechanisms afforded by our technology.
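A feedback loop of this sort can be sketched as a simple proportional controller: each metering observation is compared against a target, and the error adjusts an actuator such as a concurrency limit. The control law, gain, and names below are assumptions chosen for illustration:

```java
// Illustrative proportional controller: nudges a concurrency limit
// toward a target latency using each new metering observation.
final class LatencyController {
    private final double targetMillis;
    private final double gain;
    private double concurrencyLimit;

    LatencyController(double targetMillis, double gain, double initialLimit) {
        this.targetMillis = targetMillis;
        this.gain = gain;
        this.concurrencyLimit = initialLimit;
    }

    // Feed back one observation; the error drives the adjustment.
    double observe(double measuredMillis) {
        double error = targetMillis - measuredMillis; // positive => headroom
        concurrencyLimit = Math.max(1.0, concurrencyLimit + gain * error);
        return concurrencyLimit;
    }
}
```

In practice such controllers would run inside the metered runtime itself, driven by the local observations described above rather than by a remote monitoring feed.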
Provision: Applications need to be able to scale dynamically to meet unexpected workloads or peak volumes. This needs to be done in a smart and safe manner, not ignorant of (or blind to) the activities and context that drive the consumption of resources. Provisioning and starting up one or more VMs on a sudden spike in CPU will simply not cut it outside of a cloud demo. With our activity metering technology, customers are able to assess what has happened (recent consumption), what is currently happening (current consumption), and what is likely to happen (remaining consumption), and, more importantly, whether the activities or users driving such demand justify the additional capacity and cost.
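The judgement described above can be sketched as a policy that scales out only when recent, current, and predicted remaining consumption together exceed capacity, and the value of the driving activities justifies the extra cost. The fields and thresholds below are hypothetical, chosen only to make the shape of the decision concrete:

```java
// Hypothetical provisioning judgement combining past, present, and
// predicted consumption with a value-versus-cost check.
final class ProvisioningPolicy {
    private final double capacity;    // e.g. CPU-seconds per interval
    private final double costPerUnit; // cost of one extra unit of capacity

    ProvisioningPolicy(double capacity, double costPerUnit) {
        this.capacity = capacity;
        this.costPerUnit = costPerUnit;
    }

    boolean shouldScaleOut(double recentConsumption, double currentConsumption,
                           double predictedRemaining, double activityValue) {
        // Demand: the heavier of recent vs. current load, plus what is
        // predicted to remain for in-flight activities.
        double demand = Math.max(recentConsumption, currentConsumption)
                        + predictedRemaining;
        boolean overCapacity = demand > capacity;
        boolean justified = activityValue > costPerUnit; // value vs. cost
        return overCapacity && justified;
    }
}
```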
Data in Real-time (DIRT)
With environments becoming much more dynamic in terms of workload, capacity, code, and topology, as well as increasingly distributed, it seems futile to still be trying to manage the performance of applications the way it has always been done. What is needed is DIRT: data in real-time. Data that is accessible at the point of its creation (measurement and collection) and within its current execution context, be it a process, thread, transaction, or request. Data that informs the application of its immediate past, its current processing, and its predicted path. This data has far more value, but only if it can be acted on in near real-time at the resolution of the processing itself. Beyond that point the data is bound for a black hole, unless it can be mined for patterns that are then codified into controllers or supervisory routines.
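The idea of data living within its execution context can be sketched with a thread-local meter: observations are recorded and read in the same context that produced them, so code can act on them at the point of processing rather than after remote collection. The API shown is an assumption for illustration:

```java
// Sketch of "data in real-time": metering state held in the current
// execution context (here, per thread), readable at the point of use.
final class ThreadContextMeter {
    private static final ThreadLocal<long[]> elapsedNanos =
        ThreadLocal.withInitial(() -> new long[1]);

    // Called by instrumented code as work proceeds on this thread.
    static void record(long nanos) { elapsedNanos.get()[0] += nanos; }

    // Immediately accessible within the same execution context, with no
    // round trip to a remote monitoring store.
    static long elapsedSoFar() { return elapsedNanos.get()[0]; }

    static void reset() { elapsedNanos.get()[0] = 0L; }
}
```

A request handler could, for example, consult `elapsedSoFar()` mid-flight and shed optional work when its own processing is already running long.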
Being able to observe behavior remotely, with a time lag that a machine would consider years, is not being in control. The first step in scaling and managing the ever-increasing complexity of applications and runtimes starts with local (software) autonomy: self-observation (monitoring / awareness), self-judgement (analysis / evaluation), and self-reaction (change / adjustment).