GCP DevOps Certification Training

This post is solely a learning aid to provide insight into areas you would want to study before taking the exam. Due to the lengthy content I have broken this up into two blog posts.

For Part Two, check back soon. Part Two will cover SRE, service incidents, and more.

Note that as an added bonus I have several downloads listed for you in the text.

Keep an eye out for a short course as well.

Google Cloud Professional Cloud DevOps is an exam not to be underestimated. It's up there on the difficulty level and not one exam I was able to breeze through. Even though the exam title is DevOps, it actually reaches far into GCP services and SRE as well.

What is a GCP DevOps Engineer?

From Google's certification site:

A Professional Cloud DevOps Engineer is responsible for efficient development operations that can balance service reliability and delivery speed.

They are skilled at using Google Cloud Platform to build software delivery pipelines, deploy and monitor services, and manage and learn from incidents.

The Professional Cloud DevOps Engineer exam assesses your ability to:

  • Apply site reliability engineering principles to a service
  • Optimize service performance
  • Implement service monitoring strategies
  • Build and implement CI/CD pipelines for a service
  • Manage service incidents

Exam Questions — 50

Exam Time — 2 Hours

FREE Practice Exams Here

Deep Dive Notes - Let's Get Started

The exam tested a wide area around DevOps, GCP services, third-party services such as Spinnaker and Jenkins, as well as SRE. Of course, Kubernetes/Docker was around 30% from what I saw on the exam.

I have broken down the deep dive notes into three sections.

  1. Deep Dive Notes
  2. Resources to Review
  3. Test Tips

Available Now on Amazon


DevOps 101

Just a quick reminder on understanding DevOps. If you're taking this exam, then I am assuming this is your game.

We will be diving into DevOps services for pipelines shortly.

Kubernetes and Kubernetes Engine

First and foremost is Kubernetes, a major part of the exam.

Kubernetes is “an open-source system for automating deployment, scaling, and management of containerized applications” (https://kubernetes.io). The word Kubernetes comes from Greek and means “pilot of the ship.”

  • Taking the exam you must have a significant background in Kubernetes Engine (kubectl/gcloud CLI commands). (See the link in Resources for commands to learn.)
  • You must know what the Error Codes 400/403 are in several contexts.
  • Know the complementary services around containers and Kubernetes Engine, but also how to monitor containers. More on this in the Stackdriver section.

What is Google Kubernetes Engine (GKE)?

  • Google Kubernetes Engine (GKE) is a managed, production-ready environment for running Docker containers on Google Cloud Platform. (Cloud Container Orchestrator)
  • Enables creation of multiple-node clusters while also providing access to all Kubernetes features you are familiar with.

  • Understand these complementary services. More on these coming up.

For example, we know that Kubernetes can run in many environments:

  • In a public cloud
  • On-premises or in a datacenter
  • Using minikube for testing and developing
  • Try minikube on Google Cloud Compute Engine (download instructions here)
  • Learn how to set up a Kubernetes cluster using kubeadm
  • Tip: for a quick online try without installation, use tryk8s.com

  • Terminology: brush up on the terminology around containers. Link here

Containers at Google – Projects

A container is a self-contained, ready-to-run application.

  • This is what makes it different from a virtual machine!
  • Containers have everything on board that is required to start the application
  • To start a container, a container runtime is required
  • The container runtime runs on a host platform and establishes communication between the local host kernel and the container
  • So all containers, no matter what they do, run on top of the same local host kernel

Google deploys 2 billion containers a week. Basically, every second of every minute of every hour of every day, Google is firing up on average some 3,300 containers. They have 10 plus years of experience with containers.

History of Containers at Google

  • Borg
  • Omega
  • Kubernetes

Kubernetes Engine Costing Model

Kubernetes Engine itself is free, but the resources it uses are the real costing model (pay as you go).

  • Compute Engine usage, based on the image used and the number of instances
  • The type of VM image used matters and varies by the region deployed
  • Persistent Disks Used
  • Load Balancing
  • Bandwidth (Egress)
  • Networking Requirements
  • APIs are a minor cost as well

In a nutshell GKE uses Google Compute Engine instances for nodes in the cluster.

You are billed for each of those instances according to Compute Engine’s pricing, until the nodes are deleted.

Note that Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost

Pricing Calculator: https://cloud.google.com/products/calculator

Building and implementing CI/CD pipelines for a service

Pipelines Services

With Kubernetes Engine you will be spending time in these services:

  • Cloud Build – Run your container image builds in a fast, consistent, and reliable environment. Builds Docker container images for deployment in various environments.
  • Container Registry – manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control for pipelines.
  • Cloud Repositories – Design, develop, and securely manage your code. Fully featured, scalable, and private Git repository. Extend Git workflow by connecting to other GCP tools.

For example, you would want to know how to deploy CI and CD with Cloud Build.
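As a sketch of what such a pipeline looks like, the Cloud Build configuration below builds an image, pushes it to Container Registry, and rolls it out to a GKE cluster. The image, deployment, cluster, and zone names are placeholders for illustration, not from the exam or this post.

```yaml
# cloudbuild.yaml - build, push, then deploy to GKE (names are placeholders).
steps:
  # Build the Docker image from the repository's Dockerfile.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/hello-app:$SHORT_SHA', '.']
  # Push the image to Container Registry.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/hello-app:$SHORT_SHA']
  # Point the running deployment at the new image.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/hello-web',
           'hello-web=gcr.io/$PROJECT_ID/hello-app:$SHORT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'
      - 'CLOUDSDK_CONTAINER_CLUSTER=hello-cluster'
images:
  - 'gcr.io/$PROJECT_ID/hello-app:$SHORT_SHA'
```

Triggering this from Cloud Source Repositories on each push is what turns it from CI into CD.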

Understanding YAML

The exam will have some expectations around being able to review a YAML template and understand what's happening. Remember, YAML templates are declarative.
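As an example of the declarative style, a Deployment like the sketch below describes the desired state (three replicas of an image) and Kubernetes works to make reality match it. The app and image names are hypothetical.

```yaml
# A declarative Deployment: describe the desired state, Kubernetes converges on it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                       # desired number of pods
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: gcr.io/my-project/hello-app:v1   # placeholder image
          ports:
            - containerPort: 8080
```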

Download the sheet here.

Basic gcloud Commands and kubectl

Download my command reference here for a full list of commands.

These are teasers.

– gcloud Container Commands

Remember to set project

gcloud config set project [PROJECT_ID]

Remember to set zone or region

gcloud config set compute/zone us-central1-b

Kube Login credentials

gcloud auth application-default login

Create container cluster with three nodes in US Central

gcloud container clusters create hello-cluster --num-nodes=3 --zone us-central1-b

Obtain credentials from cluster.

gcloud container clusters get-credentials mykubecluster

View context

kubectl config current-context

List Clusters

gcloud container clusters list

Describe cluster

gcloud container clusters describe cluster-name

Resize cluster to 4 nodes.

gcloud container clusters resize mygkecluster --num-nodes=4

– Kubectl Commands

kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port 8080

kubectl get pods

kubectl get nodes

kubectl expose deployment hello-web --type=LoadBalancer --port 8080

kubectl get services

Scale the deployment out to 3 replicas

kubectl scale deployment hello-web --replicas=3

Service Accounts & IAM with Kubernetes Engine

Service Accounts and GKE

  • You should create and use a minimally privileged service account to run your GKE cluster instead of using the Compute Engine default service account.
  • GKE requires, at a minimum, that the service account have the monitoring.viewer, monitoring.metricWriter, and logging.logWriter roles
  • With the launch of Workload Identity, we suggest a more limited use case for the node service account.


Understanding Roles

Know your roles for Kubernetes Engine and always use “Principle of Least Privilege”

Pod Security basics

  • PodSecurityPolicies specify a list of restrictions, requirements, and defaults for Pods created under the policy.
  • E.g., limiting the use of privileged containers and host networking, or setting default profiles
  • Three YAML files: the Pod Security Policy, the Cluster Role, and the Cluster Role Binding
  • Privileged Pods utilize Linux capabilities such as manipulating the host networking stack and accessing host resources and devices.
  • Processes running inside a privileged-mode container have the same capabilities as processes outside the container, which lets them leverage some management capabilities
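The first of those three YAML files might look like the restrictive sketch below, which forbids privileged containers and host networking and forces pods to run as non-root; the policy name and volume list are illustrative. It would be paired with a ClusterRole granting `use` on the policy and a ClusterRoleBinding for the relevant service accounts.

```yaml
# A restrictive PodSecurityPolicy: no privileged containers, no host
# networking, containers must run as a non-root user.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp          # placeholder name
spec:
  privileged: false             # block privileged containers
  hostNetwork: false            # block use of the host network stack
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                      # only allow common, safe volume types
    - 'configMap'
    - 'secret'
    - 'emptyDir'
```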

Kubernetes Private Clusters

  • A private cluster ensures workloads on Kubernetes are isolated from the public internet.
  • Private clusters give the nodes internal RFC 1918 IP addresses only – private nodes.
  • In a private cluster one can control access to the cluster master.
  • gcloud container clusters create private-cluster-0 …
  • gcloud container clusters delete -q private-cluster-0

Use HTTP(S) Load Balancing, which can use internal load balancing. Note as well that a private cluster requires VPC Network Peering, and the cluster master uses private and public endpoints.

Workload Identity

Workload Identity is the recommended way to access Google Cloud services from within GKE due to its improved security properties and manageability. Workloads running on GKE must authenticate to use Google Cloud APIs.

There are two alternative methods to access Cloud APIs from GKE:

  1. Export service account keys and store them as Kubernetes Secrets. Google service account keys expire after 10 years and are rotated manually.
  2. Use the Compute Engine default service account on your nodes. The Compute Engine default service account is shared by all workloads deployed on that node.
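With Workload Identity, the wiring is a Kubernetes service account annotated to point at a Google service account, as in the sketch below; the account and project names are placeholders. An IAM policy binding (roles/iam.workloadIdentityUser) on the Google service account completes the link.

```yaml
# Kubernetes service account bound to a Google service account via
# Workload Identity (account and project names are placeholders).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: my-gsa@my-project.iam.gserviceaccount.com
```

Pods that run as `my-ksa` then authenticate to Cloud APIs as `my-gsa`, with no exported keys to rotate.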

Secrets with GKE

Know your best practices. Hint:

Storing secrets best practices, in order:

  1. Storing secrets in code, encrypted with a key from Cloud KMS
  2. Storing secrets in a storage bucket in Cloud Storage, which is encrypted at rest
  3. Using a third-party secret management solution

NOTE!!! Storing your secrets directly in code is not a best practice on GCP.
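For reference, a Kubernetes Secret itself only base64-encodes its values, which is why the practices above matter: keep the real values out of source control. A minimal sketch, with a placeholder value:

```yaml
# A Kubernetes Secret. Note the value is base64-encoded, NOT encrypted -
# the plaintext should come from a KMS-encrypted source, never from code.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # placeholder name
type: Opaque
data:
  password: cGxhY2Vob2xkZXI=    # base64 of "placeholder"
```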

Key Management

Know your Options

  • Cloud HSM
  • Cloud KMS
  • HashiCorp Vault

Cloud IAM – Managing Permissions for secrets

The two ways of managing permissions are:

  • Without a service account. This is the recommended option.
  • With a service account

Cloud Audit Logs – This service consists of two log streams.

  • Admin Activity
  • Data Access (GCP services).

Note that these streams help you answer the question of “who did what, where, and when?” within your GCP projects.

Let's Knowledge Check

You would like to deploy your containers with your templates on Kubernetes Engine and remove manual mistakes. This time you want to ensure your secrets are safe.

What two options can you consider? (Select Two)

  • a. Metadata
  • b. Secret Container
  • c. Home Directory
  • d. Cloud Storage

If you answered A and B, great job.

Other security services to learn about before the exam:

  • Cloud Armor
  • Cloud IAP
  • Cloud Audit Logs
  • VPC Flow Logs
  • Cloud NAT

You want to ensure your personal SSH key works on every instance in your project, including your Kubernetes cluster. 

What would be the best option? (Select One)

  • a. Upload your public SSH key to each instance's metadata.
  • b. Upload your public SSH key to the project metadata.
  • c. Use gcloud compute ssh to automatically copy your public SSH key to the instance.
  • d. Use gcloud compute ssh to automatically copy your public SSH key to Cloud Storage.

If you answered B, you're amazing. If not, look more into the SSH links in the references.

Identity Aware Proxy (IAP)

With IAP we want to understand how to integrate it with GKE and why to use it.

  • IAP benefits, such as faster sign-in than a VPN
  • Know which services it supports, e.g., HTTP(S) Load Balancing
  • Cloud IAP works by verifying user identity and the context of the request to determine if a user should be allowed to access an application or a VM.
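On GKE, IAP is attached to a backend service through a BackendConfig resource, roughly as sketched below; the config and secret names are placeholders, and the OAuth client credentials live in a Kubernetes Secret you create beforehand.

```yaml
# BackendConfig enabling IAP on a GKE backend service; a Service references
# this config via the cloud.google.com/backend-config annotation.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-config              # placeholder name
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: iap-oauth-secret   # Secret holding client_id/client_secret
```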


OS Login simplifies SSH access management by linking your Linux user account to your Google identity. Use OS Login to manage SSH access to your instances using IAM without having to create and manage individual SSH keys. OS Login maintains a consistent Linux user identity across VM instances and is the recommended way to manage many users across multiple instances or projects.


Stackdriver. Yes, that's correct, Stackdriver…

Typically when I teach developers and speak about monitoring and logging, it is generally a big yawn. Developers really want to create, not manage or monitor.

Stackdriver Monitoring and Logging

  • Understand Monitoring and Logging in Stackdriver
  • Native Monitoring vs Legacy
  • Install Agents
  • Stackdriver Monitoring
  • Stackdriver Logging

However, Stackdriver is much more than just management or monitoring of your GCP Services.

Uptime Checks

Health Checks

Troubleshoot network issues (e.g., VPC flow logs, firewall logs, latency, view network details)

Let's Knowledge Check

Your company is running out of network capacity to run a critical application in the on-premises data center. You want to migrate the application to GCP. Secondly, the security team cannot lose its ability to monitor traffic to and from the Compute Engine instances hosting the containers. What products would be a good solution? (Select Two)

  • a. Cloud Audit Logs
  • b. VPN Logs
  • c. VPC Logs
  • d. Stackdriver Trace
  • e. Stackdriver Profiler

Load Balancing

Internal Load Balancing

  • Internal load balancing with Kubernetes Engine
  • Network Load Balancing
  • Network load balancing distributes incoming traffic across multiple instances
  • Supports non-HTTP(S) protocols (TCP/UDP)
  • Can be used for HTTPS traffic when you want to terminate the connection on your instances (not at the HTTPS load balancer)

HTTP(S) Load Balancing

  • HTTP(S) Load Balancing distributes HTTP(S) traffic among instance groups based on proximity to the user, the URL, or both
  • Autoscalers can be attached to HTTP(S) load balancers
  • Good fit for web and mobile apps

HTTP Load Balancing with Kubernetes Engine
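On GKE, the HTTP(S) load balancer is typically provisioned by an Ingress in front of a Service, roughly as in the sketch below; the names and port are placeholders.

```yaml
# An Ingress on GKE provisions a global HTTP(S) load balancer in front of
# the referenced Service (names and port are placeholders).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  defaultBackend:
    service:
      name: hello-web           # Service exposing the deployment
      port:
        number: 8080
```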

Let's Knowledge Check

The enterprise web application is currently hosted in the us-east1 region. Users experience high latency when traveling in the EU. You've configured a network load balancer, but users have not experienced a performance improvement. How can you decrease the latency? (Select One)

  • a. Configure an HTTP load balancer and direct the traffic to it.
  • b. Configure a Kubernetes Engine network policy and direct traffic to it.
  • c. Configure dynamic routing for the subnet hosting the application.
  • d. Configure an internal load balancer and direct the traffic to it.

If you answered A, great work.

Network Policies and GKE

Learn to Enable Network Policies

Create New Cluster with network policy

gcloud container clusters create [CLUSTER_NAME] --enable-network-policy

Update an existing cluster (two steps)

gcloud container clusters update [CLUSTER_NAME] --update-addons=NetworkPolicy=ENABLED

gcloud container clusters update [CLUSTER_NAME] --enable-network-policy

Disable the network policy

gcloud container clusters update [CLUSTER_NAME] --no-enable-network-policy

GKE currently supports only Tigera’s Calico implementation for network policy
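Once the add-on is enabled, policies themselves are ordinary Kubernetes NetworkPolicy resources. The sketch below allows only pods labeled app=frontend to reach pods labeled app=hello-web on port 8080; the labels and port are placeholders.

```yaml
# A NetworkPolicy restricting ingress: only frontend pods may reach
# hello-web pods, and only on port 8080 (labels are placeholders).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: hello-web            # policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only this app may connect
      ports:
        - port: 8080
```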


For Part Two, check back soon. Part Two will cover SRE, service incidents, and more.

Resources to Study

Install MiniKube on Compute Engine


Google Cloud Links

· Google Cloud Platform https://cloud.google.com/

· GCP Console https://console.cloud.google.com/

· GCP Storage https://cloud.google.com/products/storage/

· Documentation https://cloud.google.com/docs/

· Pricing https://cloud.google.com/pricing/

· Free Tier https://cloud.google.com/free/

· Code Labs https://codelabs.developers.google.com/

· Qwiklabs https://qwiklabs.com/dashboard

· Stackoverflow https://stackoverflow.com/

GCP Services


App Engine and Cloud Endpoints

· Google App Engine


· Google App Engine Flexible Environment


· Google App Engine Standard Environment


· Google Cloud Endpoints


· Apigee Edge



· Java Cloud Endpoints

· Python Cloud Endpoints

· JavaScript clients


Deploying and Managing Services

· Cloud Source Repositories


· Deployment Manager




· Google Stackdriver


Stackdriver Uptime Checks


Google Cloud Best Practices for Enterprises



· Google Site Reliability Book

https://landing.google.com/sre/book/index.html (Ebook)

https://amzn.to/2JDDJ6p (Amazon)

· GCP Diagram Templates


· GCP to AWS Services


· Kinsta Blogpost

Joe Holbrook, The Cloud Tech Guy