JXInsight/Simz – Parallel Execution Analysis Simplified – Apache Hadoop

JXInsight/Simz makes it incredibly easy to get the big performance picture on BIG DATA: the metering data collected in local application runtimes, via dynamic bytecode instrumentation, is fed in near real-time to remote Simz services, in which the whole distributed and parallel execution (in terms of instrumentation calls in the JXInsight/OpenCore Probes API) is replayed by threads.

All that is required is to add a system property to the jxinsight.override.config file used by the application runtime agent.

jxinsight.server.probes.simz.enabled=true

To integrate the instrumentation agent with Apache Hadoop, simply add the following to the core-site.xml file, similar to the setup described here.
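A minimal sketch of what that entry might look like, assuming the agent is attached via the standard -javaagent JVM option; the jar path and name below are placeholders to be adjusted to your installation, and -Xmx200m is simply Hadoop's default child heap setting.

<property>
  <name>mapred.child.java.opts</name>
  <!-- placeholder agent path; point -javaagent at your JXInsight agent jar -->
  <value>-Xmx200m -javaagent:/opt/jxinsight/lib/jxinsight-agent.jar</value>
</property>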

Below is the metering analysis for the distributed parallel execution of the Pi estimation MapReduce sample application that ships with the default Apache Hadoop distribution (here run with 10 map tasks, each drawing 100 samples).

bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100

In total, 13 JVM processes are launched in executing the above command, excluding the actual job scheduling client runtime. The hierarchical metering view below, a snapshot of the Simz metering model, shows that the time spent in the examples package is extremely small relative to the time spent in the other packages used in executing the tasks scheduled by the job and in managing distributed state and output. The job took approximately 150 seconds, of which only 0.60538 seconds were spent performing map and reduce work within the application codebase. The combined clock time, across threads and processes, was 1058 seconds, roughly seven times the wall clock duration (think of this figure as the thread equivalent of man-years).

Ideally, the application runtime of the job client should also be covered by the instrumentation agent and metering measurement engine, especially if we want to track the end-user (client) job latency. This can be done by exporting the value of the mapred.child.java.opts property above as HADOOP_OPTS before executing the command.
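For example, assuming the same placeholder agent path used above:

export HADOOP_OPTS="-javaagent:/opt/jxinsight/lib/jxinsight-agent.jar"
bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100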

Here is the revised model that now includes the job scheduling client application runtime. Again, this distributed parallel execution analysis is obtained by taking a snapshot of the metering model within the Simz service, which simulates and replays the metered thread execution behavior of the connected application runtimes. Whilst the total (cumulative) clock.time for the examples package has increased, the inherent (self) total remains very low compared to everything else.

We can eliminate much of this noise (though its proportion here should be a concern in itself) from the metering model by metering instrumented methods only when they are called, directly or indirectly, by classes and methods in the examples package (within the scope of a thread's call stack). This configuration can be applied on either the client or the Simz service side (or on both).

jxinsight.server.probes.entrypoint.enabled=true
jxinsight.server.probes.entrypoint.name.groups=org.apache.hadoop.examples

Getting call stack dumps across the entire distributed job/task execution has never been easier. Simply enable the stack metering extension within the Simz service metered runtime.

jxinsight.server.probes.stack.enabled=true

Here is a snapshot of the metered thread stacks within the Simz service process during the execution of the job. The call stack of the job client is presented alongside the stacks of any tasks executing in parallel in remote application runtimes. This is extremely powerful, especially because the delay (the time resolution of the information) is minimized across the entire infrastructure (and it scales).

But the real power of our unique approach to parallel execution analysis shines when it comes to what can be done within the Simz simulated environment, such as adding plugins and extensions that diagnose performance issues across the entire infrastructure in real time, all greatly simplified by making everything appear as a single application.

The following is an extremely simple example that allows us to trace the distributed execution of instrumented and metered method invocations as it happens, within the context of a thread simulating the behavior of a real application thread. It logs ${client} [->|<-] ${probe} ${thread} at the start and end of each metered method invocation.
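A minimal, self-contained sketch of that logging logic is shown below. The real example plugs into the JXInsight Probes interceptor SPI via the InterceptorFactory class named in the configuration that follows; that SPI is not reproduced here, so the Interceptor interface, its method names, and the sample client/probe values are hypothetical stand-ins used only to illustrate the log format.

public final class DTraceLoggingSketch {

  // Hypothetical stand-in for the interceptor callbacks invoked at the
  // start and end of a metered method invocation replayed within Simz.
  interface Interceptor {
    void begin(String client, String probe);
    void end(String client, String probe);
  }

  // Emits "${client} -> ${probe} ${thread}" on entry and
  // "${client} <- ${probe} ${thread}" on exit.
  static final Interceptor LOGGER = new Interceptor() {
    @Override public void begin(String client, String probe) {
      System.out.println(client + " -> " + probe + " " + Thread.currentThread().getName());
    }
    @Override public void end(String client, String probe) {
      System.out.println(client + " <- " + probe + " " + Thread.currentThread().getName());
    }
  };

  public static void main(String[] args) {
    // Simulated begin/end pair for a single metered method (sample values only).
    LOGGER.begin("tasktracker-1", "org.apache.hadoop.examples.PiEstimator$PiMapper.map");
    LOGGER.end("tasktracker-1", "org.apache.hadoop.examples.PiEstimator$PiMapper.map");
  }
}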

The following configuration applies the above interceptor to only those methods within the examples package.

jxinsight.server.probes.interceptor.enabled=true
jxinsight.server.probes.interceptors=org.jinspired.jxinsight.probes.examples.dtrace.InterceptorFactory
jxinsight.server.probes.interceptor.filter.enabled=true
jxinsight.server.probes.interceptor.filter.include.name.groups=org.apache.hadoop.examples

Here is some sample output at the beginning of the job execution with 3 client application runtimes connected concurrently.

Below is the output at the tail end of the job execution.