OverOps was built to run under extreme performance restrictions in production and staging environments. It’s designed to leverage the computing power of the cloud to do the heavy lifting of complicated analysis and processing tasks, so that your server is only marginally affected. This allows you to use OverOps where it matters most – on your production servers.
The OverOps installation on your server includes a JVM agent and an OS process (daemon). Both components were designed with CPU consumption firmly in mind: every action is monitored and measured to guarantee a minimal effect on the server’s CPU. This is achieved by writing extremely efficient code, focusing only on the data that is most relevant and necessary to correct the error, and utilizing CPU throttling.
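The throttling mentioned above can be illustrated generically: a worker measures its own work time and sleeps whenever it exceeds a fixed share of wall-clock time. This is a minimal Java sketch of the general technique, not OverOps’ actual implementation; the class name, method names, and budget value are all assumptions.

```java
// Illustrative sketch of CPU throttling under a fixed budget.
public class CpuThrottle {
    private final double budget;               // max fraction of one core, e.g. 0.03
    private final long start = System.nanoTime();
    private long workedNanos;                  // time spent doing real work

    public CpuThrottle(double budget) { this.budget = budget; }

    /** Runs one unit of work, then sleeps if the budget was exceeded. */
    public void run(Runnable task) throws InterruptedException {
        long t0 = System.nanoTime();
        task.run();
        workedNanos += System.nanoTime() - t0;

        // Sleep until workedNanos / elapsed drops back under the budget.
        long targetElapsed = (long) (workedNanos / budget);
        long sleepMs = (targetElapsed - (System.nanoTime() - start)) / 1_000_000;
        if (sleepMs > 0) Thread.sleep(sleepMs);
    }

    /** Fraction of wall-clock time spent working since construction. */
    public double cpuShare() {
        return (double) workedNanos / (System.nanoTime() - start);
    }
}
```

With a budget of 0.03, such a worker would spend at most roughly 3% of one core’s time doing work, sleeping the rest.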
The installed OverOps components use a small, pre-allocated block of memory during their continuous operation. This design ensures that the memory consumption of these two components does not grow uncontrollably and remains virtually unnoticeable. Understanding the significant impact that memory leaks can have on the performance of your server and application, OverOps makes it a first priority to tightly contain and control its components’ memory use on your server.
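A pre-allocated, fixed-size buffer of the kind described above can be sketched in a few lines of Java. This is a generic illustration under assumed names, not OverOps internals: the block is allocated once, never resized, and when it is full new records are dropped rather than allowed to grow memory use.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: a block of memory allocated once, up front.
public class PreallocatedBuffer {
    private final ByteBuffer buf;

    public PreallocatedBuffer(int capacityBytes) {
        // One allocation for the component's lifetime; never resized.
        this.buf = ByteBuffer.allocateDirect(capacityBytes);
    }

    /** Stores a record if it fits; drops it (rather than growing) if not. */
    public boolean offer(byte[] record) {
        if (buf.remaining() < record.length) return false;
        buf.put(record);
        return true;
    }

    public int usedBytes() { return buf.position(); }
}
```

Because the capacity is fixed at construction, memory consumption has a hard ceiling by design.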
OverOps turns the bytecode loaded on your server into an abstract graph structure and then uploads it to the cloud for further analysis. The majority of this activity takes place immediately after installation is completed, when your entire bytecode is introduced to OverOps. This outbound communication, although a function of the size of your bytecode, usually does not exceed a few hundred MB. Following this initial upload, the ongoing graph upload is differential and includes only new or changed code that OverOps has not yet seen. The inbound communication to your server includes only the specific segments of your code related to the exceptions that occurred, and is minimal. Overall, the network overhead introduced by OverOps will be nearly invisible to your server and environment.
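Differential upload of this kind is commonly implemented with content hashing: only code units whose digest has not been seen before are selected for sending. The following Java sketch illustrates that general idea; the class name, the `select` method, and the choice of SHA-256 are assumptions for illustration, not OverOps’ actual protocol.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Base64;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of differential upload via content hashing.
public class DiffUploader {
    private final Set<String> alreadySent = new HashSet<>();

    /** Returns only the code units whose content has not been sent before. */
    public List<byte[]> select(List<byte[]> units) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        List<byte[]> toSend = new ArrayList<>();
        for (byte[] unit : units) {
            String key = Base64.getEncoder().encodeToString(sha.digest(unit));
            if (alreadySent.add(key)) toSend.add(unit); // new or changed content only
        }
        return toSend;
    }
}
```

The first call sends everything; subsequent calls send only content whose digest is new, which matches the large-initial-upload-then-small-deltas pattern described above.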
OverOps hard-limits the number of exceptions and breakpoints it collects at any given time to support even your most extreme performance peaks. It does so by utilizing intelligent sampling, showing you only the most relevant representation of an event that occurs multiple times, thus ensuring that even when millions of exceptions are thrown by your JVM, OverOps will not affect your application’s performance while still showing you accurate data about your errors.
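One common way to implement capped, representative sampling of recurring events is to capture the first few occurrences of each distinct error in full and then only every Nth occurrence after that. The Java sketch below illustrates that policy; the policy itself, the class name, and the thresholds are assumptions for illustration, not OverOps’ documented behavior.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of capped, representative error sampling.
public class ErrorSampler {
    private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();
    private final long firstN;   // capture each of the first N occurrences
    private final long everyNth; // afterwards, capture only every Nth occurrence

    public ErrorSampler(long firstN, long everyNth) {
        this.firstN = firstN;
        this.everyNth = everyNth;
    }

    /** True when this occurrence of the error should be captured in full. */
    public boolean shouldCapture(String errorKey) {
        long n = counts.computeIfAbsent(errorKey, k -> new AtomicLong()).incrementAndGet();
        return n <= firstN || n % everyNth == 0;
    }
}
```

Even if the same exception is thrown millions of times, the work of capturing it stays bounded while the sampled occurrences remain representative.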
The OverOps JVM agent is designed to leave only a small footprint on your disk (a few MB). This design, combined with a daily maintenance job that clears unused files and a predefined disk space limit enforced by the OverOps daemon, ensures minimal disk space usage on your server.
OverOps doesn’t perform any IO calls (disk or network) from within your process. Data is placed directly in shared memory and is asynchronously encrypted and sent to the OverOps servers by the daemon process.
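The agent/daemon split described above can be modeled in miniature: the in-process side only places events into a shared buffer and never performs I/O, while a separate consumer ships them asynchronously. In this Java sketch, a bounded in-heap queue stands in for the real OS shared-memory segment, and `send` stands in for the daemon’s encrypt-and-upload step; all names are illustrative assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative model of the agent/daemon split. A bounded in-heap queue
// stands in for the OS shared-memory segment; send() stands in for the
// daemon's encrypt-and-upload step.
public class AsyncShipper {
    private final BlockingQueue<String> sharedBuffer = new ArrayBlockingQueue<>(1024);
    private final CopyOnWriteArrayList<String> shipped = new CopyOnWriteArrayList<>();

    public AsyncShipper() {
        Thread daemon = new Thread(this::drain, "shipper-daemon");
        daemon.setDaemon(true);
        daemon.start();
    }

    /** Agent side: no I/O, no blocking; drops the event when the buffer is full. */
    public boolean publish(String event) {
        return sharedBuffer.offer(event);
    }

    private void drain() {
        try {
            while (true) shipped.add(send(sharedBuffer.take()));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private String send(String event) { return "sent:" + event; }

    public int shippedCount() { return shipped.size(); }
}
```

The key property is that `publish` returns immediately whether or not the network is available, so the monitored process never waits on I/O.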
The OverOps JVM agent is written in native C++ and Assembly and does not allocate Java objects or capture any references to your own – helping ensure zero impact on your garbage collection.
OverOps does not require you to change your build in any way. You do not need to add annotations, or make any adjustments to your code deployment in order to have OverOps monitor your JVM.
If for some reason the OverOps service experiences technical difficulties (network issues or otherwise), this will under no circumstances affect the performance or behavior of your server. The OverOps agent and daemon are designed as stand-alone components, and neither your server nor your JVM depends on them.