JXInsight/OpenCore 6.4.EA.12 Released

The twelfth early access build of JXInsight/OpenCore 6.4 “ACE” has been published on our developer site. This release includes a development-only edition of JXInsight/Simz.

Instant Distributed Thread (Probe) Stack Dumps
Also included is an enhanced management probes provider extension that prints all thread probe stacks to System.out via a remote JMX operation invocation on the Probes ControlMBean registered with the local MBeanServer. Below is a sample output from a Simz service that was replaying, in near real time, the metered execution behavior of 3 connected JVM applications (a betting service, a processor and a client) running a Scala/Akka remote actor demo. This is in fact a distributed multi-JVM thread stack dump, output to a single shell/terminal window pane. Each thread name is prefixed with the server UUID, the host address, the client port, the process id and the thread id. For each instrumented and metered stack frame, the probe name (method) and its frame id are printed. The frame id is an auto-incrementing number, unique to each frame execution (stack push) within its thread execution context.
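For illustration, triggering such a dump programmatically could look roughly like the following self-contained sketch. The MBean interface, the ObjectName ("probes:type=Control") and the operation name are assumptions for the sake of the example, not the actual OpenCore names; a remote client would make the same invoke() call over a JMXConnector instead of against the platform MBeanServer directly.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ProbeDumpDemo {
    // Stand-in for the Probes ControlMBean; names are illustrative only.
    public interface StackDumpMBean { String dumpStacks(); }

    public static class StackDump implements StackDumpMBean {
        @Override public String dumpStacks() { return "<thread probe stacks>"; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("probes:type=Control"); // assumed name
        mbs.registerMBean(new StackDump(), name);
        // A remote JMX client would issue the same invoke() via a connector.
        String dump = (String) mbs.invoke(name, "dumpStacks", null, null);
        System.out.println(dump);
    }
}
```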


Comparison across multiple dumps is made incredibly easy and transparent with frame ids. If it looks the same, it is the same; without frame ids you can never tell whether you are looking at the same execution (stalled) or simply a repeat of the same execution path by the same thread.

With the frame id being an auto-incrementing number with thread scope, you can also calculate the number of instrumented probes (methods) that have been pushed between a caller (lower in the stack) and a callee (higher in the stack) by simply subtracting one frame id from the other. Subtract a further 1 from the difference if you don’t want to include the callee.
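As a worked example, with made-up frame ids for two frames in the same thread's stack, the arithmetic is just:

```java
public class FrameIdMath {
    public static void main(String[] args) {
        // Hypothetical frame ids read from one thread's stack in a dump.
        long callerFrameId = 1204; // caller, lower in the stack
        long calleeFrameId = 1217; // callee, higher in the stack

        // Probes pushed between caller and callee, including the callee itself.
        long pushedIncludingCallee = calleeFrameId - callerFrameId; // 13
        // Subtract 1 more to exclude the callee.
        long pushedExcludingCallee = pushedIncludingCallee - 1;     // 12

        System.out.println(pushedIncludingCallee + " " + pushedExcludingCallee);
    }
}
```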

Note: there is no need for post-dump filtering because the output only includes those methods that are instrumented and still being metered within both the application runtime and the Simz simulated runtime, as each environment can have its own distinct intelligent activity metering configuration.

The above is made possible because the Simz service is in fact a parallel universe for all connected JVM applications, in which there is only one JVM container but very many threads, each busy firing probes in step (but not blocking) with its counterpart in the “real” application universe.


Simz does not need to concern itself with state (heap) within the applications, at least not all of it. The execution essence of most software can be described by a model of (process) flows, which are threads; activities, which are (named) probes; resources, which are (named) meters; and consumption, modeled as (meter) readings. The simulated environment does not even have (or need) access to the application code.
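That four-part model can be sketched in a few lines; the type and field names below are illustrative only, not the Simz/OpenCore API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ExecutionModel {
    record Activity(String name) {}                  // a (named) probe: method, SQL, URL
    record Resource(String name) {}                  // a (named) meter, e.g. clock.time
    record Reading(Resource resource, long value) {} // metered consumption

    // A (process) flow, i.e. a thread: a stack of in-flight activities.
    static final class Flow {
        final String thread;
        final Deque<Activity> stack = new ArrayDeque<>();
        Flow(String thread) { this.thread = thread; }
        void begin(Activity a) { stack.push(a); }
        void end()             { stack.pop(); }
    }

    public static void main(String[] args) {
        Flow flow = new Flow("betting-service-1");        // hypothetical thread name
        flow.begin(new Activity("BetService.place"));
        flow.begin(new Activity("BetDao.insert"));
        Reading r = new Reading(new Resource("clock.time"), 42L); // made-up reading
        System.out.println(flow.stack.size() + " " + r.value());
        flow.end();
        flow.end();
    }
}
```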

Today we use high-level programming languages to construct execution behavior (code blocks) that is turned into low-level machine code at runtime. Code written in Scala, Java, JavaScript or Ruby is turned into class bytecode and then machine code – dynamically and adaptively. At each stage the number of (domain) instructions increases. Simz is, in a sense, the reverse. It instruments class bytecode and then transmits the executing activity identifier (method, SQL or URL) to a remote environment, where it is replayed by simulating threads. The number of instructions needed to replay an activity, which is basically a pair of begin() and end() calls, is far smaller than what is actually executed. This process is aided by intelligent activity metering within the actual application runtimes, which dynamically disables instrumented probes using various adaptive measurement extensions and strategies, much like how the HotSpot JVM decides which methods get compiled and optimized.
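The replay side can be pictured as a loop over (thread, activity, begin/end) events; the event shape and names here are a hypothetical sketch, not the Simz wire protocol:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class ReplaySketch {
    enum Kind { BEGIN, END }
    record Event(String threadId, String activity, Kind kind) {}

    // One simulated stack per application thread; replaying an activity is
    // just a push (BEGIN) or pop (END), far cheaper than the real execution.
    static final Map<String, Deque<String>> stacks = new HashMap<>();

    static void on(Event e) {
        Deque<String> s = stacks.computeIfAbsent(e.threadId(), k -> new ArrayDeque<>());
        if (e.kind() == Kind.BEGIN) s.push(e.activity()); else s.pop();
    }

    public static void main(String[] args) {
        // Made-up event stream from one application thread.
        on(new Event("t1", "BetService.place", Kind.BEGIN));
        on(new Event("t1", "BetDao.insert",    Kind.BEGIN));
        on(new Event("t1", "BetDao.insert",    Kind.END));
        System.out.println(stacks.get("t1")); // [BetService.place]
    }
}
```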

This is the closest thing to a Matrix for JVM applications. Big Data, make room for Big Activity aka Big Simz!