Monitoring kube-dns pods in GKE

Problem! I have deployed a GKE cluster and run our workloads in it. As the monitoring stack I deployed Prometheus and Grafana using the stable Helm charts. The problem is: how can we monitor the kube-dns pods? I can get the CPU and memory usage of these kube-dns pods, but other metrics such as skydns_skydns_dns_error_count_total are not scraped by Prometheus by default. The reason is that those metrics endpoints are not exposed yet....
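
A minimal sketch of one common workaround, assuming the stable Helm chart's default kubernetes-service-endpoints scrape job is in place: create a headless Service in kube-system that exposes the kube-dns metrics ports and carries the prometheus.io/scrape annotation. The Service name kube-dns-metrics and the ports 10054/10055 are assumptions, so check the kube-dns pod spec in your cluster before applying.

```sh
# Sketch only: expose the kube-dns metrics ports through an annotated headless
# Service so the Helm chart's "kubernetes-service-endpoints" job discovers them.
# Name and port numbers are assumptions; verify against your kube-dns pod spec.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-metrics          # hypothetical name
  namespace: kube-system
  labels:
    k8s-app: kube-dns
  annotations:
    prometheus.io/scrape: "true"  # picked up by the endpoints scrape job
spec:
  clusterIP: None                 # headless; only used for endpoint discovery
  selector:
    k8s-app: kube-dns
  ports:
    - name: metrics-sidecar
      port: 10054
      targetPort: 10054
    - name: metrics-kubedns
      port: 10055
      targetPort: 10055
EOF
```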

July 21, 2019 · 1 min · 205 words · Me

Prometheus with Openshift 3.2

Create service accounts for the project where you are deploying the Prometheus pod:

    oc create serviceaccount <name> -n <namespace>
    oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:<namespace>:<name>

    oc create serviceaccount metrics -n paas-prometheus
    oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:paas-prometheus:metrics

Creating the Docker image:

    #Base Image
    FROM docker.io/prom/prometheus
    #copy the config yaml file to the directory
    ADD prometheus.yml /etc/prometheus/
    #expose the port
    EXPOSE 9090

prometheus.yml file:

    global:
      scrape_interval: 10s
      evaluation_interval: 10s
    rule_files:
      - "*.rules"
    scrape_configs:
      - job_name: 'kubernetes-cluster'
        tls_config:
          ca_file: /var/run/secrets/kubernetes....
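
A rough sketch of how the image might then be built and rolled out on OpenShift 3.x, assuming the paas-prometheus project and the metrics service account from above; the image reference is hypothetical, and the oc patch step only binds the deployment config to that service account.

```sh
# Hypothetical image reference; replace with your own registry and namespace.
IMAGE=docker.io/example-org/prometheus-custom:latest

# Build the image from the Dockerfile above and push it to the registry.
docker build -t "$IMAGE" .
docker push "$IMAGE"

# Deploy it into the paas-prometheus project, run it under the "metrics"
# service account created earlier, and expose the Prometheus UI via a route.
oc project paas-prometheus
oc new-app "$IMAGE" --name=prometheus
oc patch dc/prometheus \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"metrics"}}}}'
oc expose svc/prometheus
```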

October 2, 2017 · 3 min · 469 words · Me