This document describes how to configure AWS Elastic Load Balancing (ELB) to distribute traffic across multiple OverOps Collectors.
AWS Elastic Load Balancing supports three types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers. This configuration requires a Network Load Balancer (TCP).
To configure a Network Load Balancer, see Amazon's Network Load Balancer tutorial.
This procedure consists of four basic steps:
- Configuring a load balancer and a listener
- Configuring a target group
- Registering targets with the target group
- Creating the load balancer
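As a sketch, the four steps above can be performed with the AWS CLI (`aws elbv2`); all names, ARNs, and subnet, VPC, and instance IDs below are placeholders for your own environment:

```shell
# Create the Network Load Balancer (TCP) in your subnets (placeholder IDs).
aws elbv2 create-load-balancer \
  --name overops-collector-nlb \
  --type network \
  --subnets subnet-0abc1234 subnet-0def5678

# Create a TCP target group for the Collector port (6060 recommended).
aws elbv2 create-target-group \
  --name overops-collectors \
  --protocol TCP --port 6060 \
  --vpc-id vpc-0abc1234

# Register the Linux instances running the OverOps Remote Collector.
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=i-0abc1234 Id=i-0def5678

# Add a TCP listener that forwards to the target group.
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol TCP --port 6060 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```

The ARNs in angle brackets are returned by the `create-load-balancer` and `create-target-group` calls above.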
Perform these steps to direct traffic to a Network Load Balancer (TCP) target group. The target group consists of a defined number of Linux hosts, each running the OverOps Remote Collector.
The Remote Collectors defined within the target group of the load balancer must share a common OverOps installation key.
To connect the Remote Collector to the load balancer:
- From the Remote Collector takipi.properties file, set the port on which the Collector listens. This is the port the load balancer uses when routing traffic to targets in the target group; 6060 is recommended when defining the target group in AWS.
- Restart the Collector.
In the next step, point the Agent to the AWS Network Load Balancer.
To configure load balancing in the Agent:
- From the Agent takipi.properties file, set masterHost to the load balancer's DNS name and masterPort to its listener port:
masterHost= <AWS Load Balancer DNS name>
masterPort= <AWS Load Balancer Port>
- AWS offers auto-scaling options that provide high availability by launching additional OverOps Remote Collectors when demand is high, and optimize costs by scaling the Collectors down when demand is lower. This happens automatically and in real time.
- Run the Agent with the JVM argument.
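The Agent-side changes might look like the following sketch. The load balancer DNS name, file path, and application jar are placeholders, and the `-agentlib:TakipiAgent` argument should be checked against your OverOps Agent installation:

```shell
# Placeholder path; the real file lives inside the Agent install directory.
PROPS=/tmp/takipi.properties

# Sample file for illustration only.
printf 'masterHost=localhost\nmasterPort=6060\n' > "$PROPS"

# Rewrite masterHost/masterPort to the load balancer's DNS name and port.
sed -i 's/^masterHost=.*/masterHost=my-nlb-0123.elb.us-east-1.amazonaws.com/' "$PROPS"
sed -i 's/^masterPort=.*/masterPort=6060/' "$PROPS"
cat "$PROPS"

# Then start the application with the OverOps Agent attached, e.g.:
# java -agentlib:TakipiAgent -jar myapp.jar
```

After restarting the application, the Agent resolves the load balancer's DNS name and is routed to one of the registered Collectors.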
- Stop the active Collector and verify that the Agent connects to the next Collector automatically.

AWS Auto-Scaling
The reference architecture below illustrates the use of Amazon’s EC2 Auto-Scaling feature that adds new instances of the OverOps Remote Collector from an Amazon Machine Image (AMI), and terminates them as necessary. This type of architecture is mostly applicable to organizations that deploy OverOps in SaaS or Hybrid models. The following architecture illustrates a SaaS deployment. The specific architecture may vary based on load-balancing and scaling requirements.
For information about auto-scaling options with AWS, see https://aws.amazon.com/autoscaling/faqs/.
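One way to wire EC2 Auto Scaling to the Collector target group is sketched below with the AWS CLI; the AMI ID, template name, subnets, capacity values, and CPU target are placeholder assumptions, not prescribed settings:

```shell
# Launch template built from the Remote Collector AMI (placeholder IDs).
aws ec2 create-launch-template \
  --launch-template-name overops-collector-lt \
  --launch-template-data '{"ImageId":"ami-0abc1234","InstanceType":"t3.medium"}'

# Auto Scaling group that registers new instances with the NLB target group.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name overops-collector-asg \
  --launch-template LaunchTemplateName=overops-collector-lt \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-0abc1234,subnet-0def5678" \
  --target-group-arns <target-group-arn>

# Scale on average CPU with a target-tracking policy.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name overops-collector-asg \
  --policy-name collector-cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'
```

Because the group is attached via `--target-group-arns`, new Collector instances are registered with the load balancer automatically and terminated instances are deregistered.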
The figure below describes the OverOps SaaS deployment with AWS load-balancing and auto-scaling in detail:
Figure 1: OverOps SaaS Deployment with AWS Load-Balancing and Auto-Scaling