Kubernetes Monitoring

Requires Opsview Cloud or Opsview Monitor 6.7
Opsview Supported

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation.

What You Can Monitor

Opsview provides all-in-one monitoring for Kubernetes clusters hosted locally or in the cloud. Monitor live usage metrics such as CPU, memory, disk, and network status from the cluster level down to individual pods. Additionally, this Opspack collects other useful metrics such as HTTP statistics, file descriptors, and more.

Host Templates

The following Host Templates are provided within this Opspack. Click the name of each Host Template to be taken to the relevant information page, including a full Service Check description and usage instructions.

Kubernetes Monitoring Prerequisites

To access live usage metrics, you must install metrics-server on your cluster and follow the correct authentication setup for your host.
It is assumed that kubectl is installed and configured for use with your cluster.
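As a quick sanity check (a hedged sketch, not part of the official prerequisites), you can confirm that kubectl can reach your cluster before proceeding:

```shell
# Sanity check: confirm kubectl is installed and can reach the cluster.
# Safe to paste as-is; it prints a message instead of failing when kubectl
# is missing or not yet configured.
if command -v kubectl >/dev/null 2>&1; then
  kubectl cluster-info || echo "kubectl is installed but cannot reach a cluster"
else
  echo "kubectl not found; install and configure it before continuing"
fi
```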

Kubernetes Monitoring Setup

  • Install Metrics Server on the cluster
  • Retrieve the API Server address and port number for the cluster
  • Set up the appropriate authentication for your environment

Install Metrics Server

Local cluster

If you are using a local Kubernetes cluster, run the following commands from the location of your cluster:

git clone https://github.com/kubernetes-incubator/metrics-server.git

# deploy the latest metrics-server
cd metrics-server
kubectl create -f deploy/1.8+/
kubectl edit deploy -n kube-system metrics-server

When the edit window opens, add the following flags to the metrics-server container entry (under spec.template.spec.containers):

args:
- --kubelet-insecure-tls  # only required if using self-signed certificates
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
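
Once the deployment restarts, you can sanity-check that metrics are being served; `kubectl top` is the standard client for the metrics API (a hedged sketch; it may take a minute or two after installation for metrics to appear):

```shell
# Verify that metrics-server is serving node metrics. Falls back to a
# message rather than an error if kubectl is unavailable or metrics are
# not ready yet.
if command -v kubectl >/dev/null 2>&1; then
  kubectl top nodes || echo "metrics not available yet; metrics-server may still be starting"
else
  echo "kubectl not found"
fi
```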

AWS

If you are using a Kubernetes cluster hosted on AWS / EKS, refer to the Installing Metrics Server on AWS guide.

Google Cloud Platform (GCP) or Microsoft Azure

If you are using a GCP or Azure Kubernetes cluster, the Metrics Server is installed and configured by default. Ensure you have set up the read-only service account and role bindings shown in the steps below.

Retrieve the API Server address and port number

From the location of your cluster:

kubectl config view

This will give you a list of all the configuration information for your Kubernetes environment.

It will look something like:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://1.1.1.1:6443   # COPY THIS ADDRESS
  name: kubernetes

Note the server address under cluster in the output above. The port may or may not be present; copy the entire URL (including the port, if shown) into the API server address field of the KUBERNETES_CLUSTER_DETAILS variable.
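
Alternatively, the same jsonpath one-liner used in the AWS steps further down extracts just the server URL (assuming kubectl is configured for the target cluster):

```shell
# Extract only the API server URL from the active kubeconfig context.
# Prints a note instead of failing when kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
  echo
else
  echo "kubectl not found"
fi
```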

Setup an authentication mechanism

This Opspack supports client authentication through X509 Client Certs and Bearer Tokens.

For more details, refer to Kubernetes authentication strategies.

Client authentication using X509 Client Certs

Client certificate authentication is enabled by supplying the CA path, client certificate and client key arguments in the KUBERNETES_CERTIFICATES variable.
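
Before entering the paths into Opsview, you can check that the certificate pair is accepted by the API server with curl (a hedged sketch; the file names and the APISERVER value are placeholders for your own):

```shell
# Test X509 client-certificate authentication against the API server.
# ca.crt, client.crt, client.key and APISERVER are placeholders; replace
# them with your own values before running against a real cluster.
APISERVER="https://1.1.1.1:6443"
if command -v curl >/dev/null 2>&1 && [ -f ca.crt ]; then
  curl --cacert ca.crt --cert client.crt --key client.key "$APISERVER/api"
else
  echo "certificate files not present; populate the placeholders first"
fi
```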

Client authentication using Bearer Tokens

Set up a service account for authentication

To create a service account for authentication, copy and paste the following commands into your Kubernetes cluster terminal.

kubectl create sa opsview  # create the service account

# create the read only role
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: 'true'
  name: opsview-read-only
  namespace: default
rules:
- apiGroups: ['*']
  resources: ['*']
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources: ['*']
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources: ['*']
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  - /api/*
  verbs:
  - get
  - list
  - watch
EOF

# bind the role to the service account
cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opsview-binding
subjects:
- kind: ServiceAccount
  name: opsview
  namespace: default
roleRef:
  kind: ClusterRole
  name: opsview-read-only
  apiGroup: rbac.authorization.k8s.io
EOF
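
After applying the role and binding, you can confirm that the opsview service account has the intended read-only access (a hedged check using kubectl's built-in authorization query):

```shell
# Check that the opsview service account can read but not modify resources.
# Skips gracefully when kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
  kubectl auth can-i list pods --as=system:serviceaccount:default:opsview
  # the next command is expected to answer "no" for a read-only account
  kubectl auth can-i delete pods --as=system:serviceaccount:default:opsview || true
else
  echo "kubectl not found"
fi
```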

Retrieve the bearer token for authentication

Local

If your Kubernetes environment has been set up locally, you will need to run the following commands:

SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')

TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)

echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.
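
You can optionally verify the token against the API server before saving it in Opsview (a hedged sketch; APISERVER is a placeholder for the address retrieved earlier, and --insecure is only appropriate with self-signed certificates):

```shell
# Verify the bearer token by listing the API root. APISERVER is a
# placeholder; TOKEN is assumed from the commands above.
APISERVER="https://1.1.1.1:6443"
if [ -n "$TOKEN" ] && command -v curl >/dev/null 2>&1; then
  curl --insecure -H "Authorization: Bearer $TOKEN" "$APISERVER/api"
else
  echo "TOKEN is not set; run the commands above first"
fi
```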

AWS

If your Kubernetes environment has been set up on AWS, you will need to run the following commands:

Ensure you have the AWS CLI installed. For details on how to install the AWS CLI, refer to: Installing the AWS CLI

# update kubectl config with your AWS setup
aws eks --region YOUR_REGION update-kubeconfig --name YOUR_CLUSTER_NAME

# download the aws kubernetes config map
curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml

# edit the config map, replacing the rolearn variable with the Role ARN shown in your EKS dashboard
nano aws-auth-cm.yaml

# apply the config map
kubectl apply -f aws-auth-cm.yaml

APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)
echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.

To ensure communication between the cluster and its nodes, AWS requires you to add inbound and outbound rules to the node pool's security group allowing HTTPS connections on port 443 from source 0.0.0.0/0.
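
As an illustration only (the security group ID below is a placeholder, and the same rule can equally be added in the AWS console), the inbound rule can be created with the AWS CLI:

```shell
# Allow inbound HTTPS to the node pool's security group.
# sg-0123456789abcdef0 is a placeholder; substitute your node security
# group ID. An equivalent outbound rule can be added with
# "aws ec2 authorize-security-group-egress" if egress is restricted.
SG_ID="sg-0123456789abcdef0"
if command -v aws >/dev/null 2>&1; then
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 443 --cidr 0.0.0.0/0 \
    || echo "request failed; check your credentials and the group ID"
else
  echo "AWS CLI not installed; add the rule in the AWS console instead"
fi
```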

Google Cloud Platform (GCP)

If your Kubernetes environment has been set up on GCP, you will need to run the following commands:

SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')

TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)

echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.

Microsoft Azure

If your Kubernetes environment has been set up on Azure, you will need to run the following commands:

Ensure you have the Azure CLI installed. For details on how to install the Azure CLI, refer to: Installing the Azure CLI

# login to azure
az login

# get kube config for azure
az aks get-credentials --resource-group YOUR_RESOURCE_GROUP --name YOUR_CLUSTER_NAME

SECRET_NAME=$(kubectl get serviceaccount opsview -o jsonpath='{.secrets[0].name}')

TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)

echo $TOKEN

Copy the value of $TOKEN to your KUBERNETES_CLUSTER_DETAILS Opsview variable.

Importing this Opspack

Download the application-kubernetes.opspack file from the Releases section of this repository and import it into your Opsview Monitor instance. You can then add the Host Templates you need by following the links in the table at the top of this page.

For more information, refer to Opsview Knowledge Center - Importing an Opspack.