For decades, Oracle Enterprise Manager (OEM) has been a central configuration management and governance tool for many enterprises. In the era of containerization, we are moving our Fusion Middleware/WebLogic environments to Kubernetes. Most organizations do not run just one FMW environment: they run environments for different use cases, for different departments, and across lifecycle stages (development, test, acceptance, production). In a complex landscape of multiple databases and Fusion Middleware environments, OEM can help monitor all those targets. However, due to its architecture, OEM requires agents with connectivity to each environment it monitors.
The blog Monitor WebLogic on Kubernetes using Oracle Enterprise Manager describes a setup that exposes a WebLogic domain on Kubernetes, managed by the WebLogic Kubernetes Operator, so that it can be discovered by Oracle Enterprise Manager. It suggests creating a Kubernetes Service of type LoadBalancer for every WebLogic server. This is workable for a single domain on Kubernetes, but it implies a separate load-balancer Service for each WebLogic server in the domain. Cloud providers often charge per load balancer, and each WebLogic server/load-balancer pair requires its own external IP address. Therefore, this setup may not be ideal.
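Such a per-server Service might look like the sketch below. The domain UID `sample-domain1`, namespace, server name, and port are illustrative; the `weblogic.domainUID` and `weblogic.serverName` labels are set by the WebLogic Kubernetes Operator on the server pods:

```yaml
# One Service of type LoadBalancer per WebLogic server pod.
# Each such Service gets its own external IP address and may be billed separately.
apiVersion: v1
kind: Service
metadata:
  name: sample-domain1-managed-server1-external
  namespace: sample-domain1-ns
spec:
  type: LoadBalancer
  selector:
    weblogic.domainUID: sample-domain1      # labels applied by the operator
    weblogic.serverName: managed-server1
  ports:
    - name: t3
      port: 8001
      targetPort: 8001
```

A domain with an admin server and, say, four managed servers would need five of these, which is where the cost and IP-address overhead comes from.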
To reduce the number of load-balancer Services, there are several options:
- Run an OEM Agent in a VM in the same service network as the FMW domains. This can be done using KubeVirt, which enables running a VM in a Kubernetes environment. However, this is complex and resource-intensive.
- Create a single load-balancer Service with multiple ports and route each port to a specific WebLogic server. This is hard to manage because ports must be opened and closed as environments are created and decommissioned, which makes the setup complex to automate.
- Use Istio to route T3 traffic from a single load-balancer Service to specific WebLogic servers. Since this is highly configurable, it is the preferred solution. However, since Istio does not understand proprietary application protocols such as T3, we need another mechanism that Istio can use to route the traffic.
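The Istio option from the list above can be sketched as follows (all names, namespaces, and port numbers are illustrative): a single ingress gateway exposes one TCP port per WebLogic server, and a VirtualService routes each port to the corresponding ClusterIP Service. Note that the TCP `match` can only use the port, not anything inside the T3 payload:

```yaml
# One ingress gateway, one TCP port per WebLogic server (illustrative sketch).
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: weblogic-t3-gateway
  namespace: sample-domain1-ns
spec:
  selector:
    istio: ingressgateway          # the default Istio ingress gateway deployment
  servers:
    - port:
        number: 30801
        name: tcp-managed-server1
        protocol: TCP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: weblogic-t3-managed-server1
  namespace: sample-domain1-ns
spec:
  hosts:
    - "*"
  gateways:
    - weblogic-t3-gateway
  tcp:
    - match:
        - port: 30801              # TCP routing can only match on the port
      route:
        - destination:
            host: sample-domain1-managed-server1   # ClusterIP Service for the server
            port:
              number: 8001
```

Only the ingress gateway itself needs a load-balancer Service, so adding a WebLogic server means adding a port and a route rather than another external load balancer.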
In our current solution, we use Istio to route HTTP traffic to the consoles and end-user applications based on the host name. To do so, the Istio ingress gateway performs the decryption, also known as TLS off-loading. For HTTP traffic, Istio VirtualServices can then use the hostname as a matching rule. For plain TCP traffic this is not possible: Istio can only match on the port.
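For comparison, the HTTP case looks like the sketch below (hostname, gateway, and Service names are illustrative). After TLS off-loading at the ingress gateway, the VirtualService can match on the host name, which is exactly what the TCP case above cannot do:

```yaml
# HTTP routing by host name after TLS off-loading (illustrative sketch).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: weblogic-console
  namespace: sample-domain1-ns
spec:
  hosts:
    - console.sample-domain1.example.com   # host-based matching, HTTP only
  gateways:
    - weblogic-http-gateway
  http:
    - route:
        - destination:
            host: sample-domain1-admin-server   # ClusterIP Service for the admin server
            port:
              number: 7001
```

This is why many hostnames can share a single HTTP(S) port on one gateway, whereas each plain-TCP backend needs its own port.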
Please read the complete article by Martien van den Akker.











