Overview
Microgateway can be run within a Kubernetes (k8s) environment. Kubernetes provides a platform for automating deployment, scaling, and operations of services. The basic scheduling unit in Kubernetes is a pod. It adds a higher level of abstraction by grouping containerized components. A pod consists of one or more containers that are co-located on the host machine and can share resources. A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application.
The Kubernetes support includes the following:
- Liveness check to support the Kubernetes pod lifecycle: This verifies that the Microgateway container is up and responding. The liveness check is performed by checking the Microgateway alive file (see the probe sketch after this list).
- Readiness check to support the Kubernetes pod lifecycle: This verifies that the Microgateway container is ready to serve requests. For details on the pod lifecycle, see the Kubernetes documentation.
- Prometheus metrics to support the monitoring of Microgateway pods: Microgateway exposes metrics in Prometheus format, providing information relevant to Microgateway operation. Use the metrics endpoint /rest/microgateway/metrics to gather the required metrics. The metrics gathered are of two types: server-level metrics and API-level metrics. For details of the server-level and API-level metrics collected, see Prometheus Microgateway Metrics. A sample scrape configuration follows this list.
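The following is a minimal sketch of how the liveness and readiness checks described above could be wired into a pod specification. The image name, container port, alive-file location, and readiness URL are assumptions for illustration only; adjust them to match your Microgateway installation.

```yaml
# Sketch of container probes for a Microgateway pod (values are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: microgateway
spec:
  containers:
  - name: microgateway
    image: microgateway:latest            # assumed image name
    ports:
    - containerPort: 9090                 # assumed Microgateway HTTP port
    livenessProbe:
      exec:
        command:                          # liveness: check that the alive file exists
        - cat
        - /opt/microgateway/alive.txt     # assumed path of the Microgateway alive file
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:                            # readiness: assumed HTTP endpoint returning 200 when ready
        path: /rest/microgateway/status   # hypothetical path; replace with your readiness check
        port: 9090
      initialDelaySeconds: 15
      periodSeconds: 10
```

With this configuration, Kubernetes restarts the container if the liveness probe fails and removes the pod from service endpoints while the readiness probe fails.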
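To collect the metrics exposed at /rest/microgateway/metrics, Prometheus needs a scrape job pointing at the Microgateway pods. The sketch below assumes pods are labeled app=microgateway and serve metrics on the container port declared above; only the metrics path itself comes from this documentation.

```yaml
# Minimal Prometheus scrape job for Microgateway pods (sketch; labels and port are assumptions).
scrape_configs:
- job_name: microgateway
  metrics_path: /rest/microgateway/metrics   # metrics endpoint exposed by Microgateway
  kubernetes_sd_configs:
  - role: pod                                # discover pods via the Kubernetes API
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: microgateway                      # assumes pods carry the label app=microgateway
    action: keep
```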
The following sections describe in detail the various ways of deploying Microgateway in Kubernetes. Each of the deployment models described requires an existing Kubernetes environment. For details on setting up a Kubernetes environment, see the Kubernetes documentation.