A Practical Introduction to Kubernetes - From Fundamentals to Container Orchestration Operations

Intermediate | 60 min read | 2025.12.02

Kubernetes is an open source platform for automating the deployment, scaling, and management of containerized applications. This article covers everything from Kubernetes fundamentals to hands-on operation.

Kubernetes Architecture

Cluster Structure

flowchart TB
    subgraph ControlPlane["Control Plane"]
        API["API Server"]
        Sched["Scheduler"]
        CM["Controller Manager"]
        etcd["etcd<br/>(Data Store)"]
    end

    subgraph WorkerNodes["Worker Nodes"]
        subgraph Node1["Node 1"]
            kubelet1["kubelet"]
            proxy1["kube-proxy"]
            runtime1["Container Runtime<br/>(containerd)"]
            Pod1["Pod"]
            Pod2["Pod"]
        end
        subgraph Node2["Node 2"]
            kubelet2["kubelet"]
            proxy2["kube-proxy"]
            runtime2["Container Runtime<br/>(containerd)"]
            Pod3["Pod"]
            Pod4["Pod"]
        end
    end

    ControlPlane -->|kubelet communication| WorkerNodes

Main Components

Component | Function
API Server | Handles all API requests to the cluster
etcd | Distributed key-value store that holds the cluster state
Scheduler | Assigns Pods to the appropriate nodes
Controller Manager | Runs the various controllers (ReplicaSet, Deployment, etc.)
kubelet | Manages Pods on its node
kube-proxy | Network proxy and Service load balancing
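
On a local cluster you can see most of these components directly: they usually run as Pods in the kube-system namespace (names vary by distribution), and the nodes report the kubelet version they run.

# Control Plane and node components typically run as Pods in kube-system
kubectl get pods -n kube-system

# Nodes and the kubelet version they report
kubectl get nodes -o wide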

Local Environment Setup

Installing minikube

# macOS (Homebrew)
brew install minikube

# Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Windows (winget)
winget install Kubernetes.minikube

# Start the cluster
minikube start --driver=docker --cpus=4 --memory=8192

# Check status
minikube status

# Launch the Kubernetes dashboard
minikube dashboard

Installing kind (Alternative)

# Kubernetes IN Docker - a lighter-weight option
# macOS/Linux
brew install kind

# Create a multi-node cluster
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# List clusters
kind get clusters

# Delete the cluster
kind delete cluster

Configuring kubectl

# Installation
brew install kubectl

# List contexts
kubectl config get-contexts

# Switch context
kubectl config use-context minikube

# Check cluster information
kubectl cluster-info
kubectl get nodes

Basic Resources

Pod

# pod.yaml - the smallest deployable unit
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
    environment: development
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
# Pod operations
kubectl apply -f pod.yaml
kubectl get pods
kubectl describe pod nginx-pod
kubectl logs nginx-pod
kubectl exec -it nginx-pod -- /bin/bash
kubectl delete pod nginx-pod

Deployment

# deployment.yaml - application Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: myapp:1.0.0
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: "production"
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 5
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: web-app
              topologyKey: kubernetes.io/hostname
# Deployment operations
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get pods -l app=web-app

# Scaling
kubectl scale deployment web-app --replicas=5

# Rolling Update
kubectl set image deployment/web-app web-app=myapp:2.0.0

# Rollback
kubectl rollout undo deployment/web-app
kubectl rollout history deployment/web-app
kubectl rollout status deployment/web-app

Service

# service.yaml - exposing the application
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  type: ClusterIP  # ClusterIP, NodePort, LoadBalancer
  selector:
    app: web-app
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
---
# NodePort Service (for external access)
apiVersion: v1
kind: Service
metadata:
  name: web-app-nodeport
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30080
---
# LoadBalancer Service (for cloud environments)
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 3000
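
To confirm a Service is actually routing traffic, one option is a throwaway Pod that calls the ClusterIP DNS name from inside the cluster; the sketch below assumes the manifests above were applied in the default namespace.

# Call the ClusterIP Service from a temporary Pod (deleted on exit)
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://web-app-service.default.svc.cluster.local

# On minikube, print a reachable URL for the NodePort Service
minikube service web-app-nodeport --url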

Ingress

# ingress.yaml - HTTP routing
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
# Install the Ingress Controller (minikube)
minikube addons enable ingress

# Check the Ingress
kubectl get ingress
kubectl describe ingress web-app-ingress
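
Without DNS set up, a quick way to exercise the routing rules is to send the Host header straight to the Ingress Controller. This is a sketch assuming minikube; some drivers require `minikube tunnel` before the controller is reachable, and `-k` skips verification of the self-signed certificate.

# Send test requests through the Ingress using the Host header
curl -k -H "Host: app.example.com" https://$(minikube ip)/
curl -k -H "Host: app.example.com" https://$(minikube ip)/api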

Configuration Management

ConfigMap

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # Simple key-value pairs
  LOG_LEVEL: "info"
  API_TIMEOUT: "30s"

  # Mounted as a file
  nginx.conf: |
    server {
      listen 80;
      server_name localhost;

      location / {
        proxy_pass http://backend:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }

  # JSON configuration
  config.json: |
    {
      "database": {
        "host": "postgres",
        "port": 5432
      },
      "cache": {
        "enabled": true,
        "ttl": 3600
      }
    }
# Using the ConfigMap in a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-with-config
  template:
    metadata:
      labels:
        app: app-with-config
    spec:
      containers:
      - name: app
        image: myapp:1.0
        # Inject all keys as environment variables
        envFrom:
        - configMapRef:
            name: app-config
        # Individual environment variables
        env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        # Mount as a volume
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: app-config
          items:
          - key: nginx.conf
            path: nginx.conf
          - key: config.json
            path: config.json
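
To confirm how the ConfigMap reaches the container, you can inspect the environment and the mounted files once the Deployment above is running; the commands below are illustrative and use the names defined in this example.

# Environment variable injected from the ConfigMap
kubectl exec deploy/app-with-config -- env | grep LOG_LEVEL

# Files mounted from the ConfigMap volume
kubectl exec deploy/app-with-config -- ls /etc/config
kubectl exec deploy/app-with-config -- cat /etc/config/config.json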

Secret

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  # Base64-encoded values
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc0Bsb2NhbGhvc3Q6NTQzMi9teWRi
  api-key: c3VwZXJzZWNyZXRhcGlrZXk=
stringData:
  # Plain text (automatically Base64-encoded)
  jwt-secret: my-super-secret-jwt-key
---
# Secret for Docker Registry authentication
apiVersion: v1
kind: Secret
metadata:
  name: docker-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6ey...
# Create a Secret (command line)
kubectl create secret generic app-secrets \
  --from-literal=database-url='postgresql://user:pass@localhost:5432/mydb' \
  --from-literal=api-key='supersecretapikey'

# Create a Secret from files
kubectl create secret generic tls-secret \
  --from-file=tls.crt=./server.crt \
  --from-file=tls.key=./server.key

# Inspect Secrets (values are not shown)
kubectl get secrets
kubectl describe secret app-secrets
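
Keep in mind that the values under `data` are only Base64-encoded, not encrypted: anyone with read access to the Secret can decode them, as the one-liner below shows. Encryption at rest and RBAC restrictions are the usual complements.

# Decode a Secret value - Base64 is an encoding, not encryption
kubectl get secret app-secrets -o jsonpath='{.data.api-key}' | base64 -d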

Persistent Storage

PersistentVolume and PersistentVolumeClaim

# storage.yaml
# PersistentVolume (created by the cluster administrator)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:  # For local development
    path: /data/postgres
---
# PersistentVolumeClaim (requested by the developer)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
---
# StorageClass (dynamic provisioning)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com  # EBS CSI driver (required for gp3 iops/throughput parameters)
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
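
After applying these manifests, it is worth checking whether the claim actually bound to a volume; with WaitForFirstConsumer the binding only happens once a Pod references the PVC.

# PVC status should be Bound once a matching PV (or provisioner) satisfies it
kubectl get pv,pvc
kubectl describe pvc postgres-pvc

# Available StorageClasses and their provisioners
kubectl get storageclass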

StatefulSet (for Databases)

# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: password
        - name: POSTGRES_DB
          value: myapp
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 20Gi
---
# Headless Service for the StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432
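
StatefulSet Pods get stable, ordinal names (postgres-0, postgres-1, ...) and, through the headless Service, stable per-Pod DNS entries. A quick sketch to verify this, assuming the default namespace:

# Pods created by the StatefulSet keep predictable names
kubectl get pods -l app=postgres

# Resolve the per-Pod DNS name exposed by the headless Service
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup postgres-0.postgres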

Practical Configuration Example

Complete Web Application Setup

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
---
# resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
---
# limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    type: Container
# complete-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: myapp/frontend:1.0
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      serviceAccountName: backend-sa  # assumes this ServiceAccount exists in the namespace
      containers:
      - name: backend
        image: myapp/backend:1.0
        ports:
        - containerPort: 3000
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
        - name: REDIS_URL
          value: "redis://redis:6379"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 5
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
---
# Services
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: production
spec:
  selector:
    app: frontend
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: production
spec:
  selector:
    app: backend
  ports:
  - port: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: production
spec:
  selector:
    app: redis
  ports:
  - port: 6379
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 3000
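
One possible order for applying and verifying this stack, using the file names from the comments above:

# Create the namespace and its guardrails first, then the workloads
kubectl apply -f namespace.yaml
kubectl apply -f resourcequota.yaml -f limitrange.yaml
kubectl apply -f complete-app.yaml

# Everything in the namespace, plus quota consumption
kubectl get all -n production
kubectl describe resourcequota production-quota -n production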

Monitoring and Debugging

kubectl Debugging Commands

# List Pods (detailed output)
kubectl get pods -o wide -n production

# Resource usage
kubectl top nodes
kubectl top pods -n production

# Pod details
kubectl describe pod <pod-name> -n production

# View logs
kubectl logs <pod-name> -n production
kubectl logs <pod-name> -c <container-name>  # Multi-container Pods
kubectl logs -f <pod-name>  # Follow in real time
kubectl logs --previous <pod-name>  # Previous container instance

# Run a command inside the Pod
kubectl exec -it <pod-name> -- /bin/sh
kubectl exec -it <pod-name> -c <container-name> -- /bin/sh

# Port forward
kubectl port-forward <pod-name> 8080:80
kubectl port-forward svc/<service-name> 8080:80

# Check events
kubectl get events -n production --sort-by='.lastTimestamp'

# Export a resource as YAML
kubectl get deployment <name> -o yaml
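
For images that ship without a shell, recent kubectl versions can attach an ephemeral debug container instead of `exec`; the placeholders below are illustrative.

# Attach an ephemeral busybox container sharing the target container's process namespace
kubectl debug -it <pod-name> --image=busybox --target=<container-name> -n production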

Horizontal Pod Autoscaler

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 4
        periodSeconds: 15
      selectPolicy: Max
# Check the HPA
kubectl get hpa -n production
kubectl describe hpa backend-hpa -n production
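
The HPA (and `kubectl top`) depends on the metrics-server, which minikube ships as an addon. A rough way to watch the autoscaler react is to generate load against the backend Service from the earlier example; names below follow that example.

# Enable the metrics pipeline required by the HPA and kubectl top
minikube addons enable metrics-server

# Generate load and watch the replica count change
kubectl run load-generator --rm -it --restart=Never --image=busybox -n production -- \
  /bin/sh -c "while true; do wget -q -O- http://backend:3000/; done"
kubectl get hpa backend-hpa -n production --watch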

Package Management with Helm

Helm Basics

# Install Helm
brew install helm

# Add a repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search for charts
helm search repo nginx

# Install a chart
helm install my-nginx bitnami/nginx

# Install with custom values
helm install my-nginx bitnami/nginx \
  --set service.type=ClusterIP \
  --set replicaCount=3

# Use a values.yaml file
helm install my-nginx bitnami/nginx -f values.yaml

# Upgrade
helm upgrade my-nginx bitnami/nginx -f values.yaml

# Rollback
helm rollback my-nginx 1

# Uninstall
helm uninstall my-nginx

# List releases
helm list

Creating a Custom Chart

# Scaffold a new chart
helm create myapp

# Generated directory structure
myapp/
├── Chart.yaml          # Chart metadata
├── values.yaml         # Default values
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── hpa.yaml
│   ├── _helpers.tpl    # Template helpers
│   └── NOTES.txt       # Post-install message
└── charts/             # Dependency charts
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: {{ .Values.service.port }}
        resources:
          {{- toYaml .Values.resources | nindent 12 }}
        {{- if .Values.env }}
        env:
          {{- range $key, $value := .Values.env }}
          - name: {{ $key }}
            value: {{ $value | quote }}
          {{- end }}
        {{- end }}
# values.yaml
replicaCount: 3

image:
  repository: myapp/backend
  tag: "1.0.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix

resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "1000m"

env:
  NODE_ENV: production
  LOG_LEVEL: info

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
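
Before installing, the chart can be validated and rendered locally; the release name `my-release` below is arbitrary.

# Static checks and a local render of the templates
helm lint ./myapp
helm template my-release ./myapp -f myapp/values.yaml

# Install / upgrade from the local chart directory
helm install my-release ./myapp -f myapp/values.yaml
helm upgrade my-release ./myapp --set image.tag=1.1.0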

Summary

Kubernetes has become the de facto standard for container orchestration.

Learning Path

Step | Topics
1. Fundamentals | Pod, Deployment, Service
2. Configuration | ConfigMap, Secret
3. Storage | PV, PVC, StatefulSet
4. Networking | Ingress, NetworkPolicy (sketch below)
5. Operations | HPA, Helm, Monitoring
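
NetworkPolicy is the only item in the table without an example earlier in this article. A minimal sketch that only lets the frontend Pods reach the backend on port 3000 (labels follow the production example above; enforcement requires a CNI plugin with policy support, such as Calico or Cilium):

cat <<EOF | kubectl apply -n production -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 3000
EOF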

Best Practices

  1. Set resource limits: always define requests/limits
  2. Health checks: implement livenessProbe/readinessProbe
  3. Use labels: adopt a consistent labeling strategy
  4. Namespace isolation: separate workloads by environment or team
  5. GitOps: keep manifests under version control (labels and declarative apply are sketched below)
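
A rough illustration of practices 3 and 5: a consistent label set makes selection trivial, and a versioned manifest directory can be diffed and applied declaratively. The `k8s/` directory and the `environment` label are hypothetical.

# Select workloads by a consistent label set
kubectl get pods -n production -l 'app=backend,environment=production'

# Preview and apply a versioned manifest directory
kubectl diff -f k8s/ --recursive
kubectl apply -f k8s/ --recursive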

Mastering Kubernetes lets you operate scalable, reliable applications.
