Version: v1.12

Unified Authentication

For user subjects (users, groups, or service accounts) in a member cluster, we can import them into the Karmada control plane and grant them the clusters/proxy permission, so that we can access the member cluster through Karmada with the permissions of that user subject.

In this section, we use a serviceaccount named tom for the test.

Step 1: Create ServiceAccount in member1 cluster (optional)

If the ServiceAccount already exists in your environment, you can skip this step.

Create a ServiceAccount that does not have any permissions:

kubectl --kubeconfig $HOME/.kube/members.config --context member1 create serviceaccount tom

Step 2: Create ServiceAccount in Karmada control plane

kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver create serviceaccount tom

To grant the ServiceAccount the clusters/proxy permission, apply the following RBAC YAML file:

cluster-proxy-rbac.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-proxy-clusterrole
rules:
- apiGroups:
  - 'cluster.karmada.io'
  resources:
  - clusters/proxy
  resourceNames:
  - member1
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-proxy-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-proxy-clusterrole
subjects:
- kind: ServiceAccount
  name: tom
  namespace: default
# The token generated for the ServiceAccount carries group information, so the groups need to be bound here as well.
- kind: Group
  name: "system:serviceaccounts"
- kind: Group
  name: "system:serviceaccounts:default"
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f cluster-proxy-rbac.yaml
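The ClusterRole above scopes the proxy permission to member1 through resourceNames. If the same subject should reach additional member clusters, each cluster is listed there; member2 below is a hypothetical example:

```yaml
# Hypothetical extension of the rule above: proxy access to two member clusters.
rules:
- apiGroups:
  - 'cluster.karmada.io'
  resources:
  - clusters/proxy
  resourceNames:
  - member1
  - member2   # hypothetical second member cluster
  verbs:
  - '*'
```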

Step 3: Access member1 cluster

Manually create a long-lived API token for the ServiceAccount tom:

kubectl apply --kubeconfig ~/.kube/karmada.config -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: tom
  annotations:
    kubernetes.io/service-account.name: tom
type: kubernetes.io/service-account-token
EOF

Obtain the token of the ServiceAccount tom:

kubectl --kubeconfig ~/.kube/karmada.config get secret tom -oyaml | grep token: | awk '{print $2}' | base64 -d

Then construct a kubeconfig file tom.config for the tom ServiceAccount:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: {karmada-apiserver-address} # Replace {karmada-apiserver-address} with the address of karmada-apiserver. You can find it in the $HOME/.kube/karmada.config file.
  name: tom
contexts:
- context:
    cluster: tom
    user: tom
  name: tom
current-context: tom
kind: Config
users:
- name: tom
  user:
    token: {token} # Replace {token} with the token obtained above.
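The file above can also be generated in one step with a heredoc. Both values in the sketch below are placeholders: substitute your real karmada-apiserver address and the token obtained above.

```shell
# Sketch: generate tom.config programmatically. Both values are placeholders —
# replace them with your karmada-apiserver address and the token from the previous step.
KARMADA_APISERVER="https://192.168.10.1:5443"
TOKEN="replace-with-the-token-obtained-above"

cat > tom.config <<EOF
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: ${KARMADA_APISERVER}
  name: tom
contexts:
- context:
    cluster: tom
    user: tom
  name: tom
current-context: tom
kind: Config
users:
- name: tom
  user:
    token: ${TOKEN}
EOF
```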

Run the command below to access member1 cluster:

kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/apis

We can see that the access succeeds. However, if you run the command below:

kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes

it will fail, because the ServiceAccount tom does not have any permissions in the member1 cluster yet.

Step 4: Grant permissions to the ServiceAccount in member1 cluster

Apply the following YAML file:

member1-rbac.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tom
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tom
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tom
subjects:
- kind: ServiceAccount
  name: tom
  namespace: default
kubectl --kubeconfig $HOME/.kube/members.config --context member1 apply -f member1-rbac.yaml

Run the command that failed in the previous step again:

kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes

The access will be successful.

Alternatively, we can append /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy to the server address in tom.config, after which you can directly run:

kubectl --kubeconfig tom.config get node
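That rewrite of the server field can also be scripted. The sketch below is self-contained: it builds a stand-in kubeconfig fragment with a placeholder address so the sed expression can be seen in action; in practice you would run the same sed over the tom.config built in Step 3.

```shell
# Sketch: append the member1 proxy prefix to the server field of a kubeconfig.
# The fragment and address below are stand-ins for the real tom.config.
proxy="/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy"

printf '%s\n' \
  'clusters:' \
  '- cluster:' \
  '    server: https://192.168.10.1:5443' \
  '  name: tom' > tom.config

# Write a separate file so the original kubeconfig keeps working unmodified.
sed "s|^\( *server: .*\)\$|\1${proxy}|" tom.config > tom-member1.config
grep 'server:' tom-member1.config   # prints the rewritten server line
```

With the rewritten kubeconfig, plain commands such as `kubectl --kubeconfig tom-member1.config get node` work without spelling out the --raw proxy path.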

Note: For a member cluster that joins Karmada in pull mode and allows only cluster-to-Karmada access, we can deploy apiserver-network-proxy (ANP) to access it.