OverOps Glossary

The following glossary describes the key terms used in the OverOps installation and product:

Agent (micro-agent)

The component installed alongside a JVM to monitor customer applications.

Collector

The endpoint to which Agents report. The Collector can be installed on the same machine as an Agent or remotely on a separate host.

Remote Collector

A Collector installed on a remote host, separate from the Agent.

SaaS Deployment

A Software-as-a-Service deployment option where only the Agent and Collector components are installed. All other components are administered by OverOps as a service.

Hybrid Deployment

An extension to the SaaS deployment option, designed to address data-locality concerns, in which a Storage Server is installed behind your company firewall. With a Hybrid deployment, both source code and variable state data are physically stored and managed by you, adding a layer of security to the SaaS deployment.

On-Premises Deployment

A deployment option in which the entire OverOps infrastructure is installed on-premises.

Installation Key

A unique identifier and a means of encrypting data to limit access to authorized individuals.

Dashboard

The OverOps Dashboard serves as the main hub to detect, prioritize, and fix critical errors in your staging and production applications.

Automated Root Cause (ARC)

The page that displays a single event, including the stack frames and variable state values that led to a specific error, as well as the distribution of that error over time.

Event (Snapshot)

An event instance captured by OverOps and displayed on the Dashboard and the Automated Root Cause page. OverOps records event information such as exceptions, log errors/warnings, and HTTP errors.

Process

In the context of OverOps, a process refers to an operating system process, such as one that appears in the output of the Linux ps command or the Windows process monitor.

Endpoint

Endpoints are HTTP URLs that can be polled at a regular interval to discern availability.
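As a rough sketch, polling an endpoint for availability might look like the following (the function names, URL, and interval are illustrative, not part of the OverOps product):

```python
import time
import urllib.request
import urllib.error

def is_available(url, timeout=5):
    """Return True if the endpoint responds with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def poll(url, interval_seconds=60, checks=3):
    """Poll the endpoint a fixed number of times and record each result."""
    results = []
    for _ in range(checks):
        results.append(is_available(url))
        time.sleep(interval_seconds)
    return results
```

A real monitor would poll indefinitely and alert on consecutive failures; this sketch just returns the raw availability samples.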

StatsD

StatsD is a simple protocol for sending application metrics via UDP. These metrics can be sent to a Telegraf instance, where they are aggregated and eventually flushed to Splunk or other output sinks that you have configured.
The StatsD daemon itself is a network daemon that runs on the Node.js platform; it listens for statistics, such as counters and timers, sent over UDP or TCP, and sends aggregates to one or more pluggable backend services.

Telegraf

Telegraf is an agent, written in Go, that accepts StatsD-protocol metrics over UDP and periodically forwards them to the defined output sinks.
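A minimal Telegraf configuration for this flow might look like the following sketch, assuming the standard statsd input plugin; the file output is used here purely for illustration, where a real deployment would configure Splunk or another sink:

```toml
# Listen for StatsD metrics over UDP on the default port.
[[inputs.statsd]]
  service_address = ":8125"
  protocol = "udp"

# Illustrative sink: write aggregated metrics to stdout.
[[outputs.file]]
  files = ["stdout"]
```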

Splunk

Splunk is software for searching, monitoring, and analyzing machine-generated big data, such as application log files, via a web-style interface.
