# v0.10 to v1.0
Follow the Regular Upgrading Process.
## Upgrading Notable Changes
### Introduced `karmada-aggregated-apiserver` component
In releases before v1.0.0, Karmada used a CRD to extend the Cluster API; starting with v1.0.0, it uses API Aggregation (AA) instead.
Based on the above change, perform the following operations during the upgrade:
#### Step 1: Stop karmada-apiserver
You can stop `karmada-apiserver` by scaling its replicas to 0.
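For example, assuming `karmada-apiserver` runs as a Deployment of that name in the `karmada-system` namespace of the host cluster:

```bash
# Record the current replica count first so it can be restored at the end of Step 3.
kubectl -n karmada-system get deployment karmada-apiserver
kubectl -n karmada-system scale deployment karmada-apiserver --replicas=0
```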
#### Step 2: Remove Cluster CRD from ETCD
Remove the `Cluster` CRD from ETCD directly by running the following command:
```bash
etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \
--key="/etc/kubernetes/pki/etcd/karmada.key" \
--cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \
del /registry/apiextensions.k8s.io/customresourcedefinitions/clusters.cluster.karmada.io
```
Note: This command only removes the CRD resource; all the CRs (`Cluster` objects) are left unchanged. That is why we don't remove the CRD through `karmada-apiserver`.
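If you want to double-check, the `Cluster` objects should still be stored under the CRD key prefix (the prefix below is an assumption about the default etcd key layout, not part of the original guide):

```bash
# List the remaining Cluster object keys; they are not affected by deleting the CRD key.
etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \
--key="/etc/kubernetes/pki/etcd/karmada.key" \
--cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \
get --prefix --keys-only /registry/cluster.karmada.io/clusters
```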
#### Step 3: Prepare the certificate for the karmada-aggregated-apiserver
To avoid CA reuse and conflicts, create a dedicated CA signer and sign a certificate with it to enable the aggregation layer.
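For example, a minimal sketch with openssl (file names and subjects are illustrative assumptions, not prescribed by this guide):

```bash
# Generate a dedicated front-proxy CA.
openssl genrsa -out front-proxy-ca.key 2048
openssl req -x509 -new -nodes -key front-proxy-ca.key -subj "/CN=front-proxy-ca" -days 3650 -out front-proxy-ca.crt
# Generate a client key and certificate signed by that CA.
openssl genrsa -out front-proxy-client.key 2048
openssl req -new -key front-proxy-client.key -subj "/CN=front-proxy-client" -out front-proxy-client.csr
openssl x509 -req -in front-proxy-client.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key \
  -CAcreateserial -days 3650 -out front-proxy-client.crt
```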
Update the `karmada-cert-secret` secret in the `karmada-system` namespace:
```diff
apiVersion: v1
kind: Secret
metadata:
  name: karmada-cert-secret
  namespace: karmada-system
type: Opaque
data:
  ...
+ front-proxy-ca.crt: |
+   {{front_proxy_ca_crt}}
+ front-proxy-client.crt: |
+   {{front_proxy_client_crt}}
+ front-proxy-client.key: |
+   {{front_proxy_client_key}}
```
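The `{{...}}` placeholders stand for base64-encoded file contents; for example, with GNU coreutils:

```bash
# Secret data values must be base64-encoded single-line strings.
base64 -w0 front-proxy-ca.crt
base64 -w0 front-proxy-client.crt
base64 -w0 front-proxy-client.key
```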
Then update the `karmada-apiserver` deployment's container command:
```diff
- - --proxy-client-cert-file=/etc/kubernetes/pki/karmada.crt
- - --proxy-client-key-file=/etc/kubernetes/pki/karmada.key
+ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
+ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- - --requestheader-client-ca-file=/etc/kubernetes/pki/server-ca.crt
+ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
```
After the update, restore the replicas of the `karmada-apiserver` instances.
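For example, with the assumed Deployment from Step 1:

```bash
# Scale karmada-apiserver back to its previous replica count (1 is used here as an example).
kubectl -n karmada-system scale deployment karmada-apiserver --replicas=1
```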
#### Step 4: Deploy karmada-aggregated-apiserver
Deploy a `karmada-aggregated-apiserver` instance to your host cluster with the following manifests:
<details>
<summary>unfold me to see the yaml</summary>
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  selector:
    matchLabels:
      app: karmada-aggregated-apiserver
      apiserver: "true"
  replicas: 1
  template:
    metadata:
      labels:
        app: karmada-aggregated-apiserver
        apiserver: "true"
    spec:
      automountServiceAccountToken: false
      containers:
        - name: karmada-aggregated-apiserver
          image: swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:v1.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: k8s-certs
              mountPath: /etc/kubernetes/pki
              readOnly: true
            - name: kubeconfig
              subPath: kubeconfig
              mountPath: /etc/kubeconfig
          command:
            - /bin/karmada-aggregated-apiserver
            - --kubeconfig=/etc/kubeconfig
            - --authentication-kubeconfig=/etc/kubeconfig
            - --authorization-kubeconfig=/etc/kubeconfig
            - --karmada-config=/etc/kubeconfig
            - --etcd-servers=https://etcd-client.karmada-system.svc.cluster.local:2379
            - --etcd-cafile=/etc/kubernetes/pki/server-ca.crt
            - --etcd-certfile=/etc/kubernetes/pki/karmada.crt
            - --etcd-keyfile=/etc/kubernetes/pki/karmada.key
            - --tls-cert-file=/etc/kubernetes/pki/karmada.crt
            - --tls-private-key-file=/etc/kubernetes/pki/karmada.key
            - --audit-log-path=-
            - --feature-gates=APIPriorityAndFairness=false
            - --audit-log-maxage=0
            - --audit-log-maxbackup=0
          resources:
            requests:
              cpu: 100m
      volumes:
        - name: k8s-certs
          secret:
            secretName: karmada-cert-secret
        - name: kubeconfig
          secret:
            secretName: kubeconfig
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: karmada-aggregated-apiserver
```
</details>
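One possible way to apply and verify this (the manifest file name and the host-cluster kubeconfig/context are assumptions):

```bash
# Save the manifests above, e.g. as karmada-aggregated-apiserver.yaml (name is illustrative),
# apply them to the host cluster, then wait for the rollout to finish.
kubectl apply -f karmada-aggregated-apiserver.yaml
kubectl -n karmada-system rollout status deployment/karmada-aggregated-apiserver
```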
Then, deploy the `APIService` to `karmada-apiserver` with the following manifests.
<details>
<summary>unfold me to see the yaml</summary>
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.cluster.karmada.io
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  insecureSkipTLSVerify: true
  group: cluster.karmada.io
  groupPriorityMinimum: 2000
  service:
    name: karmada-aggregated-apiserver
    namespace: karmada-system
  version: v1alpha1
  versionPriority: 10
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
spec:
  type: ExternalName
  externalName: karmada-aggregated-apiserver.karmada-system.svc.cluster.local
```
</details>
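Note that these manifests target `karmada-apiserver`, not the host cluster. A hedged verification sketch (the kubeconfig path and file name below are assumptions):

```bash
# Apply the APIService and ExternalName Service to karmada-apiserver,
# then confirm the APIService reports Available=True.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f cluster-apiservice.yaml
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get apiservice v1alpha1.cluster.karmada.io
```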
#### Step 5: Check clusters status
If everything goes well, you can see all your clusters just as before the upgrade.

```bash
kubectl get clusters
```
### `karmada-agent` requires an extra `impersonate` verb
In order to proxy users' requests, the `karmada-agent` now requires an extra `impersonate` verb.
Please check the `ClusterRole` configuration or apply the following manifest:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karmada-agent
rules:
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['*']
  - nonResourceURLs: ['*']
    verbs: ["get"]
```
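If you apply the manifest, one way to confirm the rules took effect (the kubeconfig path and file name are assumptions pointing at `karmada-apiserver`):

```bash
# Apply the ClusterRole and print it back; the wildcard verbs cover "impersonate".
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f karmada-agent-clusterrole.yaml
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusterrole karmada-agent -o yaml
```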
### MCS feature now supports Kubernetes v1.21+
Since `discovery.k8s.io/v1beta1` of `EndpointSlice` was deprecated in favor of `discovery.k8s.io/v1` in Kubernetes v1.21, Karmada adopted this change in release v1.0.0. The MCS feature now requires the member cluster version to be no less than v1.21.
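To check whether a member cluster meets this requirement, you can verify that it serves the `discovery.k8s.io/v1` API (run against each member cluster's kubeconfig; this check is a suggestion, not part of the original guide):

```bash
# The grep should match if EndpointSlice is available at discovery.k8s.io/v1 (Kubernetes v1.21+).
kubectl api-versions | grep '^discovery.k8s.io/v1$'
```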