Why Manage K8s with Pulumi?
Kubernetes resources are traditionally described in YAML files and deployed with kubectl apply. This approach has problems: YAML is static, so conditional logic is hard to express; sharing configuration across environments requires Helm or Kustomize; and there is no type checking, so a typo only surfaces at apply time.
Pulumi's @pulumi/kubernetes SDK lets you write K8s resources in TypeScript/Python, with full type safety, conditional logic, and loop-driven resource creation, plus seamless integration with Pulumi's AWS/GCP resources (one program can create an EKS cluster and deploy applications into it).
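To make "conditional logic and loops" concrete, here is a minimal sketch (the team names and stack names are made up for illustration) that creates one Namespace per team and picks a replica count per stack; neither pattern is expressible in plain YAML:
import pulumi
import pulumi_kubernetes as k8s

env = pulumi.get_stack()  # e.g. "dev" or "production"

# Loop: one Namespace per team
for team in ["payments", "search", "ml"]:
    k8s.core.v1.Namespace(f"{team}-ns",
        metadata=k8s.meta.v1.ObjectMetaArgs(name=f"{team}-{env}"),
    )

# Conditional: replica count depends on the stack
replicas = 3 if env == "production" else 1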
Installing the Kubernetes SDK
# TypeScript
npm install @pulumi/kubernetes
# Python
pip install pulumi_kubernetes
# Create a K8s-focused project from a template
pulumi new kubernetes-python      # Python template
pulumi new kubernetes-typescript  # TypeScript template
Provider Configuration: Connecting to a Cluster
import pulumi
import pulumi_kubernetes as k8s

# Option 1: use the default kubeconfig (~/.kube/config)
# No explicit Provider needed; the SDK picks it up automatically

# Option 2: point at a specific kubeconfig file
provider = k8s.Provider("k8s-provider",
    kubeconfig="/path/to/kubeconfig",
    namespace="default",
)

# Option 3: use an EKS cluster's kubeconfig (ties K8s to AWS resources)
# eks_cluster.kubeconfig is an Output[str]
provider = k8s.Provider("eks-provider",
    kubeconfig=eks_cluster.kubeconfig,  # an Output can be passed directly
)

# Pass the provider to every k8s resource
opts = pulumi.ResourceOptions(provider=provider)
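Any resource created with these options is routed through the explicit provider instead of the default kubeconfig; a one-line usage sketch (the namespace name is a placeholder):
ns = k8s.core.v1.Namespace("demo-ns",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="demo"),
    opts=opts,  # deployed via the explicit provider
)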
Core Resources: Deployment + Service
import pulumi
import pulumi_kubernetes as k8s

app_name = "my-api"
app_labels = {"app": app_name}

# Namespace
ns = k8s.core.v1.Namespace("app-ns",
    metadata=k8s.meta.v1.ObjectMetaArgs(name="production"),
)

# ConfigMap
config_map = k8s.core.v1.ConfigMap("app-config",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name="app-config", namespace=ns.metadata.name
    ),
    data={
        "APP_ENV": "production",
        "LOG_LEVEL": "info",
        "PORT": "8080",
    },
)

# Secret (values must be base64-encoded; Pulumi handles that for you)
db_password = pulumi.Config().require_secret("dbPassword")  # Output[str] from stack config
secret = k8s.core.v1.Secret("app-secret",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name="app-secret", namespace=ns.metadata.name
    ),
    string_data={  # use stringData; Pulumi base64-encodes automatically
        "DB_PASSWORD": db_password,  # Output[str], resolved by Pulumi
        "API_KEY": "sk-xxx",
    },
)
# Deployment
deployment = k8s.apps.v1.Deployment("api-deployment",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name=app_name, namespace=ns.metadata.name,
        labels=app_labels,
    ),
    spec=k8s.apps.v1.DeploymentSpecArgs(
        replicas=3,
        selector=k8s.meta.v1.LabelSelectorArgs(
            match_labels=app_labels
        ),
        template=k8s.core.v1.PodTemplateSpecArgs(
            metadata=k8s.meta.v1.ObjectMetaArgs(labels=app_labels),
            spec=k8s.core.v1.PodSpecArgs(
                containers=[k8s.core.v1.ContainerArgs(
                    name=app_name,
                    image="myorg/my-api:v1.0.0",
                    ports=[k8s.core.v1.ContainerPortArgs(container_port=8080)],
                    env_from=[
                        k8s.core.v1.EnvFromSourceArgs(
                            config_map_ref=k8s.core.v1.ConfigMapEnvSourceArgs(
                                name=config_map.metadata.name
                            )
                        ),
                        k8s.core.v1.EnvFromSourceArgs(
                            secret_ref=k8s.core.v1.SecretEnvSourceArgs(
                                name=secret.metadata.name
                            )
                        ),
                    ],
                    resources=k8s.core.v1.ResourceRequirementsArgs(
                        requests={"cpu": "100m", "memory": "128Mi"},
                        limits={"cpu": "500m", "memory": "512Mi"},
                    ),
                    readiness_probe=k8s.core.v1.ProbeArgs(
                        http_get=k8s.core.v1.HTTPGetActionArgs(
                            path="/health", port=8080
                        ),
                        initial_delay_seconds=5,
                    ),
                )],
            ),
        ),
    ),
)
# Service (type LoadBalancer; it also gets a ClusterIP for in-cluster traffic)
service = k8s.core.v1.Service("api-service",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name=app_name, namespace=ns.metadata.name
    ),
    spec=k8s.core.v1.ServiceSpecArgs(
        selector=app_labels,
        ports=[k8s.core.v1.ServicePortArgs(port=80, target_port=8080)],
        type="LoadBalancer",
    ),
)
pulumi.export("service_ip", service.status.load_balancer.ingress[0].ip)
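One caveat on the export above: some clouds (AWS ELB in particular) report a hostname rather than an ip in the LoadBalancer ingress, leaving ip empty. A small sketch, continuing the example above, that works with either field:
def lb_address(ingress):
    # AWS ELBs populate hostname; GCP/Azure populate ip
    return ingress[0].ip or ingress[0].hostname

pulumi.export("service_address",
    service.status.load_balancer.ingress.apply(lb_address))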
Deploying Helm Charts
# Option 1: Chart (use the v3 API)
nginx_ingress = k8s.helm.v3.Chart("nginx-ingress",
    k8s.helm.v3.ChartOpts(
        chart="ingress-nginx",
        version="4.9.0",
        fetch_opts=k8s.helm.v3.FetchOpts(
            repo="https://kubernetes.github.io/ingress-nginx"
        ),
        namespace="ingress-nginx",
        values={
            "controller": {
                "replicaCount": 2,
                "service": {"type": "LoadBalancer"},
            }
        },
    ),
)
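A Chart expands into many child resources, and individual ones can be looked up with get_resource. In the sketch below, the Service name follows the chart's usual <release>-ingress-nginx-controller naming convention and is an assumption:
controller_svc = nginx_ingress.get_resource(
    "v1/Service",
    "nginx-ingress-ingress-nginx-controller",  # assumed generated name
    "ingress-nginx",
)
pulumi.export("ingress_lb",
    controller_svc.status.load_balancer.ingress[0].ip)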
# Option 2: Release (recommended for production; waits for the rollout to complete)
cert_manager = k8s.helm.v3.Release("cert-manager",
    k8s.helm.v3.ReleaseArgs(
        chart="cert-manager",
        version="v1.14.0",
        repository_opts=k8s.helm.v3.RepositoryOptsArgs(
            repo="https://charts.jetstack.io"
        ),
        namespace="cert-manager",
        create_namespace=True,
        values={
            "installCRDs": True,
            "replicaCount": 2,
        },
        skip_await=False,  # the default: wait until all resources are ready
    ),
)
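Because the Release waits for cert-manager's Pods and CRDs to be ready, dependent resources can be layered on top safely. A sketch, with placeholder issuer name and email, that creates a ClusterIssuer only after the Release has finished installing:
cluster_issuer = k8s.apiextensions.CustomResource("letsencrypt",
    api_version="cert-manager.io/v1",
    kind="ClusterIssuer",
    metadata={"name": "letsencrypt-prod"},
    spec={
        "acme": {
            "server": "https://acme-v02.api.letsencrypt.org/directory",
            "email": "ops@example.com",  # placeholder
            "privateKeySecretRef": {"name": "letsencrypt-key"},
            "solvers": [{"http01": {"ingress": {"class": "nginx"}}}],
        }
    },
    opts=pulumi.ResourceOptions(depends_on=[cert_manager]),
)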
EKS Integration: Create the Cluster and Deploy an App in One Program
import json
import pulumi
import pulumi_aws as aws
import pulumi_kubernetes as k8s

# 1. Create the EKS cluster (code from Chapter 4)
cluster = aws.eks.Cluster("my-cluster")  # role_arn, vpc_config, etc. omitted here

# 2. Build a kubeconfig from the EKS cluster's outputs
kubeconfig = pulumi.Output.all(
    cluster.name,
    cluster.endpoint,
    cluster.certificate_authority.data,
).apply(lambda args: {
    "apiVersion": "v1",
    "clusters": [{
        "cluster": {
            "server": args[1],
            "certificate-authority-data": args[2],
        },
        "name": args[0],
    }],
    "contexts": [{
        "context": {"cluster": args[0], "user": "aws"},
        "name": args[0],
    }],
    "current-context": args[0],
    "users": [{
        "name": "aws",
        "user": {
            "exec": {
                "apiVersion": "client.authentication.k8s.io/v1beta1",
                "command": "aws",
                "args": ["eks", "get-token", "--cluster-name", args[0]],
            }
        },
    }],
})

# 3. Create a K8s Provider that uses the EKS kubeconfig
k8s_provider = k8s.Provider("eks-k8s",
    kubeconfig=kubeconfig.apply(json.dumps)
)
k8s_opts = pulumi.ResourceOptions(provider=k8s_provider)

# 4. Deploy an application into the EKS cluster
app = k8s.apps.v1.Deployment("my-app",
    spec=k8s.apps.v1.DeploymentSpecArgs(
        replicas=3,
        selector=k8s.meta.v1.LabelSelectorArgs(match_labels={"app": "my-app"}),
        template=k8s.core.v1.PodTemplateSpecArgs(
            metadata=k8s.meta.v1.ObjectMetaArgs(labels={"app": "my-app"}),
            spec=k8s.core.v1.PodSpecArgs(
                containers=[k8s.core.v1.ContainerArgs(
                    name="my-app",
                    image="nginx:latest",
                )],
            ),
        ),
    ),
    opts=k8s_opts,  # pin to the EKS provider
)
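To run kubectl against the new cluster, it is convenient to also export the generated kubeconfig; wrapping it in Output.secret keeps it encrypted in state:
pulumi.export("kubeconfig",
    pulumi.Output.secret(kubeconfig.apply(json.dumps)))
Retrieve it later with pulumi stack output kubeconfig --show-secrets.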
Pulumi K8s vs kubectl YAML vs Helm
- kubectl YAML: static; fine for simple cases; hard to share across environments; not type-safe
- Helm Chart: templated, with a large ecosystem; but the Go template syntax is complex and hard to debug
- Pulumi: a full programming language; seamless integration with cloud resources (EKS/VPC/IAM); type-safe; testable (see the sketch below)
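To back up "testable": Pulumi programs can be unit-tested without touching a real cluster by mocking the engine. A minimal sketch, assuming the Deployment example above lives in a module named infra:
import pulumi

class Mocks(pulumi.runtime.Mocks):
    def new_resource(self, args: pulumi.runtime.MockResourceArgs):
        # Echo inputs back as resource state, with a fake ID
        return [args.name + "_id", args.inputs]
    def call(self, args: pulumi.runtime.MockCallArgs):
        return {}

pulumi.runtime.set_mocks(Mocks())
import infra  # must be imported after set_mocks

@pulumi.runtime.test
def test_deployment_has_three_replicas():
    def check(replicas):
        assert replicas == 3
    return infra.deployment.spec.replicas.apply(check)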
Chapter Summary
Key takeaways
- k8s.Provider: connects to a cluster via kubeconfig; when the kubeconfig is an Output (e.g. from EKS), Pulumi automatically connects only after the cluster exists.
- K8s resource structure: maps one-to-one onto YAML, but expressed as Python objects, with class names following the k8s.apps.v1.Deployment pattern.
- Helm Chart vs Release: Chart is faster (does not wait for rollout); Release is safer (waits for Pods to be ready); prefer Release in production.
- Cloud + K8s in one program: after creating the EKS cluster, pass its kubeconfig Output to k8s.Provider to manage infrastructure and the application layer together.
- stringData vs data: use stringData for K8s Secrets; Pulumi handles the base64 encoding, so no manual step is needed.