tech/google/cloud

CLOUD

Google Cloud Platform (GCP) skills. Use skills in this domain when working with the production gcloud CLI, bq, gsutil, the GCP SDK v3 (Node/Python/Go), or Terraform.
requires: tech/google
improves: tech/google

Google Cloud Platform (GCP)

GCP is 2nth.ai's platform of choice for everything past the edge.

The pattern is: Cloudflare at the edge → Cloud Run for the heavy lift → BigQuery / Vertex AI for data + AI.
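
A minimal sketch of the Cloud Run → BigQuery leg of that pattern, assuming the @google-cloud/bigquery Node client; the dataset, table, and field names are illustrative, not the production schema:

import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// Stream a usage event from a Cloud Run request handler into BigQuery.
// Dataset and table names are hypothetical.
export async function recordEvent(clientId: string, tokens: number): Promise<void> {
  await bigquery
    .dataset('analytics')
    .table('events')
    .insert([{ client_id: clientId, tokens, ts: new Date().toISOString() }]);
}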

2nth-specific usage map

| Service | 2nth usage |
| --- | --- |
| Cloud Run | On-demand ERPNext/Frappe instances (2nth_erp) — scale to zero between sessions |
| Cloud SQL (MySQL 8.0) | Frappe DB for ERPNext, client ERP databases |
| BigQuery | Token economy analytics, Penny briefing rollups, client activity warehouse |
| Vertex AI | Gemini endpoints + Claude via Model Garden for Penny briefings, embeddings for skill retrieval |
| Pub/Sub | Gmail Watch → Cloud Run / Worker push, webhook fanout, inter-service events |
| Cloud Scheduler | Cold-wake for 2nth_erp Cloud Run, daily token rollups into BQ, Gmail watch renewal |
| Secret Manager | Workspace-bridge SA keys, Frappe DB creds, third-party API keys |
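
Most of the Pub/Sub traffic in this map arrives as push deliveries into Cloud Run. A minimal sketch of a push endpoint that unwraps the envelope, assuming an Express app; the route name is illustrative:

import express from 'express';

const app = express();
app.use(express.json());

// Pub/Sub push delivery: the payload arrives base64-encoded inside
// { message: { data, attributes, messageId }, subscription }.
app.post('/pubsub/gmail-watch', (req, res) => {
  const msg = req.body?.message;
  if (!msg?.data) {
    res.status(400).send('missing Pub/Sub message');
    return;
  }
  const payload = JSON.parse(Buffer.from(msg.data, 'base64').toString('utf8'));
  console.log(JSON.stringify({ severity: 'INFO', message: 'gmail_watch_event', payload }));
  // Acknowledge by returning 2xx; any other status makes Pub/Sub redeliver.
  res.status(204).send('');
});

app.listen(Number(process.env.PORT) || 8080);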

Sub-skills

| Path | Focus | Status |
| --- | --- | --- |
| tech/google/cloud/compute | Cloud Run, Cloud Functions, GKE, Compute Engine | ✓ production |
| tech/google/cloud/ai | Vertex AI, Gemini, embeddings, AI Studio | ✓ production |
| tech/google/cloud/data | BigQuery, Pub/Sub, Dataflow, Dataproc | ✓ production |
| tech/google/cloud/security | IAM, Secret Manager, KMS, Cloud Armor, VPC-SC | stub |
| tech/google/cloud/storage | Cloud Storage, Filestore, Transfer Service | stub |
| tech/google/cloud/database | Cloud SQL, Spanner, Firestore, AlloyDB, Bigtable | stub |
| tech/google/cloud/networking | VPC, Cloud Load Balancing, Cloud CDN, Cloud DNS | stub |

gcloud project setup

# Create project
gcloud projects create my-app-prod --name "My App (prod)" --organization 123456789012

# Set active project
gcloud config set project my-app-prod

# Link billing account (required for most services)
gcloud billing projects link my-app-prod \
  --billing-account 0X0X0X-0X0X0X-0X0X0X

# Enable APIs you need (always explicit)
gcloud services enable \
  run.googleapis.com \
  cloudfunctions.googleapis.com \
  bigquery.googleapis.com \
  aiplatform.googleapis.com \
  pubsub.googleapis.com \
  secretmanager.googleapis.com \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com
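
With secretmanager.googleapis.com enabled, services pull credentials at startup rather than baking them into env vars. A minimal sketch using the @google-cloud/secret-manager Node client; the project and secret names are illustrative:

import { SecretManagerServiceClient } from '@google-cloud/secret-manager';

const secrets = new SecretManagerServiceClient();

// Read the latest version of a secret (e.g. the Frappe DB password).
export async function getSecret(name: string): Promise<string> {
  const [version] = await secrets.accessSecretVersion({
    name: `projects/my-app-prod/secrets/${name}/versions/latest`,
  });
  return version.payload?.data?.toString() ?? '';
}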

IAM: primitive, predefined, custom roles

| Role type | Example | When to use |
| --- | --- | --- |
| Primitive (basic) | roles/owner, roles/editor, roles/viewer | Never in production — too broad |
| Predefined | roles/run.invoker, roles/bigquery.dataViewer | Default — least privilege built-in |
| Custom | organizations/123/roles/customRoleId | When predefined doesn't fit; org-level definition |

# Grant Cloud Run invoker to a service account
gcloud run services add-iam-policy-binding my-service \
  --region africa-south1 \
  --member "serviceAccount:[email protected]" \
  --role "roles/run.invoker"

# Grant BigQuery job-user (can run queries) at project level
gcloud projects add-iam-policy-binding my-app-prod \
  --member "serviceAccount:[email protected]" \
  --role "roles/bigquery.jobUser"

# Grant BigQuery dataset-level read (not project-wide)
bq add-iam-policy-binding \
  --member "serviceAccount:[email protected]" \
  --role "roles/bigquery.dataViewer" \
  my-app-prod:my_dataset
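
Granting roles/run.invoker is only half the story: the caller still has to present an ID token minted for the target service. A minimal Node sketch using google-auth-library (the service URL is illustrative); on Cloud Run the library picks up the runtime service account from the metadata server, no key file needed:

import { GoogleAuth } from 'google-auth-library';

const auth = new GoogleAuth();

// Call a Cloud Run service that requires roles/run.invoker.
export async function callPrivateService(): Promise<unknown> {
  // The audience must be the target service's root URL.
  const client = await auth.getIdTokenClient('https://my-service-xyz.a.run.app');
  const res = await client.request({
    url: 'https://my-service-xyz.a.run.app/api/process',
    method: 'POST',
    data: { ping: true },
  });
  return res.data;
}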

Scale-to-zero pattern (2nth_erp / Frappe)

Cloud Run's scale-to-zero is the foundation of the 2nth_erp on-demand ERPNext pattern: an instance that costs nothing when idle and warms on first request. It pairs with Cloud SQL (MySQL 8.0 for Frappe) connected over the built-in Cloud SQL proxy (a Unix socket under /cloudsql) — no network plumbing.

# Frappe needs the Cloud SQL connection wired into the Cloud Run service
gcloud run services update 2nth-erp --region africa-south1 \
  --set-cloudsql-instances=my-project:africa-south1:frappe-db \
  --set-env-vars DB_HOST=/cloudsql/my-project:africa-south1:frappe-db \
  --min-instances 0 --max-instances 3 \
  --memory 2Gi --cpu 2

# Cold-wake via Cloud Scheduler before a scheduled session
gcloud scheduler jobs create http wake-2nth-erp \
  --schedule "55 7 * * 1-5" --time-zone "Africa/Johannesburg" \
  --uri "https://2nth-erp-xyz-ew.a.run.app/healthz" \
  --http-method GET

Idle timeout is ~15 min — the first request pays the cold-start cost; subsequent requests within the window hit a warm instance.
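
The /healthz target the scheduler hits can double as a warm-up hook. A minimal sketch, assuming an Express front and a hypothetical pingDb() check (the real Frappe wake path lives in the bench, so treat this as illustrative):

import express from 'express';

// Hypothetical DB check: in practice a SELECT 1 over the /cloudsql Unix socket.
async function pingDb(): Promise<void> {
  /* replace with a real query against the Frappe DB */
}

const app = express();

// Cheap endpoint for Cloud Scheduler cold-wakes: spinning up the instance is
// the point; touching the DB means the first real request is fully warm.
app.get('/healthz', async (_req, res) => {
  try {
    await pingDb();
    res.status(200).send('ok');
  } catch {
    res.status(503).send('db not ready');
  }
});

app.listen(Number(process.env.PORT) || 8080);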

Region reference

| Region | Code | Notes |
| --- | --- | --- |
| Johannesburg | africa-south1 | SA primary; POPIA data residency |
| London | europe-west2 | EU fallback for services not in africa-south1; Vertex AI |
| Belgium | europe-west1 | GDPR EU, broader service availability |
| Iowa | us-central1 | Most features, cheapest — avoid for SA personal data |
| Las Vegas | us-west4 | Vertex AI specific model availability |

Check cloud.google.com/about/locations before committing — africa-south1 is newer and some services land there last.
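
Region is an explicit parameter on most clients rather than a project-wide default. A minimal sketch pinning Vertex AI to europe-west2 when a model is not yet in africa-south1, assuming the @google-cloud/vertexai Node client; the model ID is illustrative:

import { VertexAI } from '@google-cloud/vertexai';

// Vertex AI is regional: pick the location per client, falling back to
// europe-west2 for models not yet available in africa-south1.
const vertex = new VertexAI({ project: 'my-app-prod', location: 'europe-west2' });

export async function briefing(prompt: string): Promise<string> {
  const model = vertex.getGenerativeModel({ model: 'gemini-1.5-pro' }); // illustrative model ID
  const result = await model.generateContent({
    contents: [{ role: 'user', parts: [{ text: prompt }] }],
  });
  return result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
}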

Cloud Build — container → deploy

# cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: [build, -t, europe-west2-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA, .]
  - name: gcr.io/cloud-builders/docker
    args: [push, europe-west2-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA]
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - my-service
      - --image=europe-west2-docker.pkg.dev/$PROJECT_ID/my-repo/my-service:$SHORT_SHA
      - --region=africa-south1
      - --platform=managed

# Submit a build
gcloud builds submit --config cloudbuild.yaml .

Artifact Registry (container + package hosting)

# Create a Docker repo (replaces deprecated Container Registry / gcr.io)
gcloud artifacts repositories create my-repo \
  --repository-format=docker \
  --location=europe-west2 \
  --description="Container images for my-app"

# Configure Docker auth
gcloud auth configure-docker europe-west2-docker.pkg.dev

# Push
docker tag my-service europe-west2-docker.pkg.dev/my-app-prod/my-repo/my-service:v1
docker push europe-west2-docker.pkg.dev/my-app-prod/my-repo/my-service:v1

Cloud Logging & Monitoring

GCP auto-collects structured JSON logs from Cloud Run, Functions, GKE, GCE. Write one-line JSON to stdout and Cloud Logging parses every field.

// Structured logging — Cloud Logging auto-parses
const log = (severity: 'INFO' | 'WARNING' | 'ERROR', message: string, extra?: object) => {
  console.log(JSON.stringify({
    severity,
    message,
    timestamp: new Date().toISOString(),
    ...extra,
  }));
};

log('INFO', 'request', { method: 'POST', path: '/api/process', userId: 'u_123' });
// e.g. in a catch block, where `err` is the thrown error and `job` the failed BigQuery job:
log('ERROR', 'bq_query_failed', { error: err.message, queryId: job.id });

# Read recent Cloud Run logs
gcloud run services logs read my-service --region africa-south1 --limit 50

# Structured log filter (Cloud Logging query language)
gcloud logging read '
  resource.type="cloud_run_revision"
  resource.labels.service_name="my-service"
  severity>=ERROR
' --limit 20 --format json

Cost controls

# Budget alert at $500 for the project
gcloud billing budgets create \
  --billing-account 0X0X0X-0X0X0X-0X0X0X \
  --display-name "my-app-prod monthly" \
  --budget-amount 500USD \
  --threshold-rule percent=0.5,basis=CURRENT_SPEND \
  --threshold-rule percent=0.9,basis=CURRENT_SPEND \
  --threshold-rule percent=1.0,basis=CURRENT_SPEND \
  --filter-projects projects/my-app-prod

Expect surprises from: egress (especially cross-region + to internet), BigQuery on-demand queries (scan bytes, not rows), Vertex AI endpoint provisioned capacity (billed idle), Cloud NAT (per-GB + hourly).
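
For the BigQuery line item, the scan cost is knowable before the query runs. A minimal dry-run sketch with the @google-cloud/bigquery Node client (the query is illustrative):

import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// Dry-run a query to see how many bytes it would scan:
// the number on-demand billing charges for.
export async function estimateScanBytes(query: string): Promise<number> {
  const [job] = await bigquery.createQueryJob({ query, dryRun: true });
  return Number(job.metadata.statistics.totalBytesProcessed);
}

// e.g. estimateScanBytes('SELECT * FROM `my-app-prod.analytics.events`') (note: SELECT * scans every column)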

Common gotchas

See Also