
Karmada Event Reference

This document provides a comprehensive reference of all Kubernetes events emitted by Karmada components. Events provide real-time visibility into operational state changes, errors, and important milestones in resource propagation and cluster management workflows.

About Karmada Events

Karmada uses the Kubernetes event mechanism to record significant state transitions and operations across its control plane. Events are ephemeral records that help operators:

  • Troubleshoot issues: Identify failure points in resource propagation
  • Monitor operations: Track the progress of scheduling, synchronization, and status aggregation
  • Audit changes: Review what actions Karmada has taken on resources
  • Understand workflows: Follow resources through the complete propagation lifecycle

Events are attached to the Kubernetes objects they describe (resource templates, bindings, works, clusters, etc.) and can be viewed using kubectl describe or kubectl get events.

For comprehensive monitoring and alerting, consider using Karmada metrics which provide quantitative operational data complementary to events.

Events by Propagation Stage

Karmada's resource propagation follows a multi-stage pipeline. Events are organized below by the stage where they occur.
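
To see where a resource has stalled, it can help to bucket its events by stage. The sketch below is illustrative only — the stage table and sample reasons are drawn from this document, not from any Karmada API:

```python
# Illustrative sketch: map Karmada event reasons to pipeline stages and
# report the furthest stage a resource reached, and whether it failed there.
# The stage table and sample events are assumptions for demonstration.
STAGES = [
    ("policy-matching", {"ApplyPolicySucceed", "ApplyPolicyFailed"}),
    ("scheduling", {"ScheduleBindingSucceed", "ScheduleBindingFailed"}),
    ("work-creation", {"SyncWorkSucceed", "SyncWorkFailed"}),
    ("work-execution", {"SyncSucceed", "SyncFailed"}),
    ("status-aggregation", {"AggregateStatusSucceed", "AggregateStatusFailed"}),
]

def last_stage(reasons):
    """Return (stage, failed) for the last stage with any event, or None."""
    result = None
    for stage, known in STAGES:
        hits = [r for r in reasons if r in known]
        if hits:
            result = (stage, any(r.endswith("Failed") for r in hits))
    return result

# Example: propagation reached work execution and failed there.
print(last_stage(["ApplyPolicySucceed", "ScheduleBindingSucceed",
                  "SyncWorkSucceed", "SyncFailed"]))
# ('work-execution', True)
```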

1. Policy Matching

These events occur when Karmada determines which propagation policy applies to a resource template and creates a ResourceBinding or ClusterResourceBinding.

ApplyPolicySucceed

  • Component: resource-detector
  • Type: ✅ Normal
  • Involved Object: Resource template
  • Description: Successfully matched and applied a PropagationPolicy or ClusterPropagationPolicy to the resource. A ResourceBinding or ClusterResourceBinding will be created.

ApplyPolicyFailed

  • Component: resource-detector
  • Type: ⚠️ Warning
  • Involved Object: Resource template
  • Description: Failed to apply propagation policy. Common causes: invalid policy configuration, conflicting policies, or internal errors. Resource will not be propagated until resolved.
  • Troubleshooting:
    • Verify the PropagationPolicy/ClusterPropagationPolicy exists and matches the resource
    • Check that policy selectors correctly target the resource
    • Ensure no conflicting policies with the same priority exist

PreemptPolicySucceed

  • Component: resource-detector
  • Type: ✅ Normal
  • Involved Object: Resource template
  • Description: Successfully preempted an existing policy with a higher-priority policy. The resource will be re-propagated according to the new policy.

PreemptPolicyFailed

  • Component: resource-detector
  • Type: ⚠️ Warning
  • Involved Object: Resource template
  • Description: Failed to preempt existing policy. The resource continues using the current policy. Check for policy conflicts or permission issues.

2. Scheduling

Scheduling determines which clusters should receive the resource and how replicas should be distributed.
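
When a policy divides replicas by static weights, the arithmetic is essentially largest-remainder apportionment. A minimal sketch of the idea (function and cluster names are illustrative, not the scheduler's actual code):

```python
import math

def divide_replicas(total, weights):
    """Split `total` replicas across clusters proportionally to static
    weights, handing leftover replicas to the largest fractional remainders."""
    weight_sum = sum(weights.values())
    exact = {c: total * w / weight_sum for c, w in weights.items()}
    result = {c: math.floor(v) for c, v in exact.items()}
    leftover = total - sum(result.values())
    # Distribute remaining replicas by descending fractional remainder.
    for c in sorted(exact, key=lambda c: exact[c] - result[c], reverse=True)[:leftover]:
        result[c] += 1
    return result

print(divide_replicas(5, {"member1": 2, "member2": 1}))
# {'member1': 3, 'member2': 2}
```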

ScheduleBindingSucceed

  • Component: karmada-scheduler
  • Type: ✅ Normal
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template
  • Description: Successfully selected target clusters for the binding. The scheduling result is recorded on the binding, and Work creation can proceed.

ScheduleBindingFailed

  • Component: karmada-scheduler
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template
  • Description: Failed to schedule the binding. Common causes: no cluster satisfies the placement constraints, all matching clusters are NotReady, or insufficient capacity.
  • Troubleshooting:
    • Check cluster status: kubectl get clusters
    • Review PropagationPolicy placement rules (affinity, tolerations, spread constraints)
    • Check karmada-scheduler logs

DescheduleBindingSucceed

  • Component: karmada-descheduler
  • Type: ✅ Normal
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template
  • Description: Successfully removed scheduling results due to descheduling policy or cluster conditions (e.g., cluster taints). Resources may be rescheduled to other clusters.

DescheduleBindingFailed

  • Component: karmada-descheduler
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template
  • Description: Failed to deschedule binding. Manual intervention may be required.

3. Dependency Propagation

The dependencies-distributor automatically propagates dependent resources (ConfigMaps, Secrets, ServiceAccounts) referenced by propagated workloads. This happens after scheduling because dependencies must be sent to the same clusters as the main resource.
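
Dependency discovery amounts to walking the workload's pod template for ConfigMap and Secret references. A rough sketch of the idea (field paths follow the core v1 PodSpec; this is not the dependencies-distributor's implementation):

```python
# Illustrative sketch of dependency discovery: collect ConfigMaps and
# Secrets referenced by a pod spec via volumes and environment variables.
def find_dependencies(pod_spec):
    deps = set()
    for vol in pod_spec.get("volumes", []):
        if "configMap" in vol:
            deps.add(("ConfigMap", vol["configMap"]["name"]))
        if "secret" in vol:
            deps.add(("Secret", vol["secret"]["secretName"]))
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            ref = env.get("valueFrom", {})
            if "configMapKeyRef" in ref:
                deps.add(("ConfigMap", ref["configMapKeyRef"]["name"]))
            if "secretKeyRef" in ref:
                deps.add(("Secret", ref["secretKeyRef"]["name"]))
    return deps

spec = {
    "volumes": [{"name": "cfg", "configMap": {"name": "app-config"}}],
    "containers": [{"name": "app", "env": [
        {"name": "TOKEN",
         "valueFrom": {"secretKeyRef": {"name": "app-secret", "key": "token"}}}]}],
}
print(sorted(find_dependencies(spec)))
# [('ConfigMap', 'app-config'), ('Secret', 'app-secret')]
```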

GetDependenciesSucceed

  • Component: dependencies-distributor
  • Type: ✅ Normal
  • Involved Object: Resource template
  • Description: Successfully identified dependent resources (ConfigMaps, Secrets, etc.) referenced by the workload.

GetDependenciesFailed

  • Component: dependencies-distributor
  • Type: ⚠️ Warning
  • Involved Object: Resource template
  • Description: Failed to identify dependencies. Dependent resources may not be propagated, causing workload failures in member clusters.
  • Troubleshooting:
    • Verify dependent resources exist in the same namespace
    • Check that dependent resources are readable by dependencies-distributor
    • Review ResourceInterpreterCustomization for custom dependency rules

SyncScheduleResultToDependenciesSucceed

  • Component: dependencies-distributor
  • Type: ✅ Normal
  • Involved Object: ResourceBinding, ClusterResourceBinding
  • Description: Successfully propagated dependencies to the same clusters as the main workload.

SyncScheduleResultToDependenciesFailed

  • Component: dependencies-distributor
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding
  • Description: Failed to propagate dependencies. Workloads may fail to start in member clusters due to missing ConfigMaps/Secrets.
  • Troubleshooting:
    • Verify dependent resources exist in the same namespace
    • Check that dependent resources are readable by dependencies-distributor
    • Review ResourceInterpreterCustomization for custom dependency rules

DependencyPolicyConflict

  • Component: dependencies-distributor
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding
  • Description: Detected a conflict between dependency propagation and an existing PropagationPolicy for a dependent resource. Manual policy adjustment may be needed.
  • Troubleshooting: Check if a conflicting PropagationPolicy exists for the dependency.

4. Work Creation & Override Application

After scheduling and dependency propagation, the binding controller creates Work objects for each target cluster. During this phase, OverridePolicy/ClusterOverridePolicy are applied to customize resources per cluster before creating Work objects.
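
As an illustration of what an override does, the sketch below applies a simple image-registry swap per target cluster, in the spirit of an ImageOverrider rule. The function and rule shape are assumptions for demonstration, not the override-manager's implementation:

```python
# Illustrative sketch: swap the image registry of a Deployment manifest
# for a specific target cluster before the Work object would be created.
def apply_image_override(manifest, cluster, registry_by_cluster):
    registry = registry_by_cluster.get(cluster)
    if registry is None:
        return manifest  # no override targets this cluster
    for container in manifest["spec"]["template"]["spec"]["containers"]:
        # Replace everything before the first "/" (the original registry).
        _, _, rest = container["image"].partition("/")
        container["image"] = f"{registry}/{rest}"
    return manifest

deployment = {"spec": {"template": {"spec": {"containers": [
    {"name": "app", "image": "docker.io/library/nginx:1.25"}]}}}}
patched = apply_image_override(deployment, "member1",
                               {"member1": "registry.example.com"})
print(patched["spec"]["template"]["spec"]["containers"][0]["image"])
# registry.example.com/library/nginx:1.25
```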

ApplyOverridePolicySucceed

  • Component: override-manager
  • Type: ✅ Normal
  • Involved Object: Resource template
  • Description: Successfully applied OverridePolicy or ClusterOverridePolicy to modify the resource for a specific cluster (e.g., different image registry, resource limits, labels).

ApplyOverridePolicyFailed

  • Component: override-manager
  • Type: ⚠️ Warning
  • Involved Object: Resource template
  • Description: Failed to apply override policy for a cluster. Resource may be propagated without intended overrides. Verify override policy syntax and selectors.

SyncWorkSucceed

  • Component: binding-controller, cluster-resource-binding-controller
  • Type: ✅ Normal
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota
  • Description: Successfully created or updated Work objects in execution namespaces. Each Work contains the resource manifest for one cluster.

SyncWorkFailed

  • Component: binding-controller, cluster-resource-binding-controller
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota
  • Description: Failed to create/update Work objects. Common causes: execution namespace doesn't exist, permission issues, or API server errors. Resources won't be propagated to clusters.
  • Troubleshooting:
    • Verify execution namespace exists: kubectl get ns karmada-es-<cluster-name>
    • Check binding controller logs
    • Verify RBAC permissions for the binding controller

CleanupWorkFailed

  • Component: binding-controller, cluster-resource-binding-controller
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding
  • Description: Failed to clean up Work objects after resource deletion or cluster removal. Orphaned Work objects may remain. Manual cleanup may be needed.

AggregateStatusSucceed

  • Component: resource-binding-status-controller, cluster-resource-binding-status-controller
  • Type: ✅ Normal
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota
  • Description: Successfully aggregated status from all member clusters back to the resource template. Status reflects combined state across clusters.

AggregateStatusFailed

  • Component: resource-binding-status-controller, cluster-resource-binding-status-controller
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota
  • Description: Failed to aggregate status from member clusters. Status information may be incomplete or outdated. Check Work status in execution namespaces.

EvictWorkloadFromClusterSucceed

  • Component: binding-controller, taint-manager, graceful-eviction-controller
  • Type: ✅ Normal
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template
  • Description: Successfully evicted workload from a cluster (graceful eviction due to cluster failure, taint, or rebalancing).

EvictWorkloadFromClusterFailed

  • Component: binding-controller, taint-manager, graceful-eviction-controller
  • Type: ⚠️ Warning
  • Involved Object: ResourceBinding, ClusterResourceBinding, Resource template
  • Description: Failed to evict workload from a cluster. The workload may remain on an unhealthy or tainted cluster. Check controller logs and the binding's eviction status.

5. Work Execution

The execution controller (in push mode) or karmada-agent (in pull mode) applies Work resources to member clusters.

WorkDispatching

  • Component: execution-controller, karmada-agent
  • Type: ✅ Normal
  • Involved Object: Work
  • Description: Work dispatching status has changed. Indicates the Work is being processed for deployment to the member cluster.

SyncSucceed

  • Component: execution-controller, karmada-agent
  • Type: ✅ Normal
  • Involved Object: Work
  • Description: Successfully applied the resource manifest in the Work to the member cluster.

SyncFailed

  • Component: execution-controller, karmada-agent
  • Type: ⚠️ Warning
  • Involved Object: Work
  • Description: Failed to apply resource to member cluster. Common causes: cluster unreachable, resource conflicts in member cluster, invalid manifest, or insufficient permissions.
  • Troubleshooting:
    • Check cluster connectivity and health: kubectl get cluster <name>
    • Verify the cluster has necessary CRDs for the resource type
    • Check for resource conflicts in the member cluster
    • Review execution controller or karmada-agent logs
    • For resource conflicts, check if resource already exists with different ownership

6. Status Aggregation

The work-status-controller monitors Work objects and reflects their status back through the propagation pipeline.
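
Status aggregation for a Deployment is, at its core, a field-wise sum of the per-cluster statuses reflected into Work objects. A minimal sketch (the status shape is a simplified assumption, not Karmada's aggregation code):

```python
# Illustrative sketch: sum per-cluster Deployment statuses into one
# aggregated status, as seen on the resource template after aggregation.
def aggregate_deployment_status(cluster_statuses):
    total = {"replicas": 0, "readyReplicas": 0, "availableReplicas": 0}
    for status in cluster_statuses.values():
        for key in total:
            total[key] += status.get(key, 0)
    return total

print(aggregate_deployment_status({
    "member1": {"replicas": 2, "readyReplicas": 2, "availableReplicas": 2},
    "member2": {"replicas": 3, "readyReplicas": 2, "availableReplicas": 2},
}))
# {'replicas': 5, 'readyReplicas': 4, 'availableReplicas': 4}
```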

ReflectStatusSucceed

  • Component: work-status-controller
  • Type: ✅ Normal
  • Involved Object: Work
  • Description: Successfully reflected the resource status from member cluster back to the Work object. Status is now available for aggregation.

ReflectStatusFailed

  • Component: work-status-controller
  • Type: ⚠️ Warning
  • Involved Object: Work
  • Description: Failed to reflect status from member cluster. Status information will be incomplete. Check member cluster connectivity.

InterpretHealthSucceed

  • Component: work-status-controller
  • Type: ✅ Normal
  • Involved Object: Work
  • Description: Successfully interpreted the health status of the resource in the member cluster using health interpretation rules.

InterpretHealthFailed

  • Component: work-status-controller
  • Type: ⚠️ Warning
  • Involved Object: Work
  • Description: Failed to interpret resource health. Default health rules will be used. Verify ResourceInterpreterCustomization if using custom health checks.
  • Troubleshooting:
    • Check if ResourceInterpreterCustomization exists for the resource type
    • Verify health interpretation rules are valid
    • Review work-status-controller logs

Cluster Lifecycle Events

These events track member cluster registration, health, and infrastructure operations.

CreateExecutionSpaceSucceed

  • Component: cluster-controller
  • Type: ✅ Normal
  • Involved Object: Cluster
  • Description: Successfully created the execution namespace (karmada-es-<cluster-name>) for the cluster. This namespace stores Work objects for the cluster.

CreateExecutionSpaceFailed

  • Component: cluster-controller
  • Type: ⚠️ Warning
  • Involved Object: Cluster
  • Description: Failed to create execution namespace. Work objects cannot be created for this cluster until resolved.

RemoveExecutionSpaceSucceed

  • Component: cluster-controller
  • Type: ✅ Normal
  • Involved Object: Cluster
  • Description: Successfully removed execution namespace during cluster de-registration. Clean uninstall completed.

RemoveExecutionSpaceFailed

  • Component: cluster-controller
  • Type: ⚠️ Warning
  • Involved Object: Cluster
  • Description: Failed to remove execution namespace. Manual cleanup may be required.

TaintClusterSucceed

  • Component: cluster-controller
  • Type: ✅ Normal
  • Involved Object: Cluster
  • Description: Successfully applied taint to cluster (e.g., due to cluster failure detection). Workloads will be evicted based on tolerations.
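
The eviction decision follows taint-and-toleration semantics similar to node taints. A simplified sketch, assuming NoExecute behavior (the taint key shown is illustrative):

```python
# Illustrative sketch: a workload stays on a tainted cluster only if it
# tolerates every NoExecute taint on that cluster.
def should_evict(cluster_taints, tolerations):
    def tolerated(taint):
        return any(
            t.get("key") == taint["key"]
            and t.get("effect") in (None, taint["effect"])  # no effect = match all
            for t in tolerations
        )
    return any(
        taint["effect"] == "NoExecute" and not tolerated(taint)
        for taint in cluster_taints
    )

taints = [{"key": "cluster.example.io/not-ready", "effect": "NoExecute"}]
print(should_evict(taints, []))                                         # True
print(should_evict(taints, [{"key": "cluster.example.io/not-ready"}]))  # False
```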

TaintClusterFailed

  • Component: cluster-controller
  • Type: ⚠️ Warning
  • Involved Object: Cluster
  • Description: Failed to apply or update taints on the cluster. Automatic eviction of workloads from the failing cluster may be delayed until resolved.

SyncImpersonationConfigSucceed

  • Component: unified-auth-controller
  • Type: ✅ Normal
  • Involved Object: Cluster
  • Description: Successfully synchronized impersonation configuration for cross-cluster authentication. Cluster is ready for pull mode or multi-cluster service discovery.

SyncImpersonationConfigFailed

  • Component: unified-auth-controller
  • Type: ⚠️ Warning
  • Involved Object: Cluster
  • Description: Failed to sync impersonation config. Cross-cluster operations may fail. Check service account and RBAC configuration.

Multi-Cluster Service Events

These events track multi-cluster service discovery and load balancing.

SyncDerivedServiceSucceed

  • Component: service-import-controller
  • Type: ✅ Normal
  • Involved Object: ServiceImport
  • Description: Successfully created or updated the derived Service for cross-cluster service discovery. The service is now discoverable across clusters.

SyncDerivedServiceFailed

  • Component: service-import-controller
  • Type: ⚠️ Warning
  • Involved Object: ServiceImport
  • Description: Failed to create derived Service. Cross-cluster service discovery will not work. Check ServiceImport configuration and permissions.

SyncServiceSucceed

  • Component: multiclusterservice-controller
  • Type: ✅ Normal
  • Involved Object: MultiClusterService
  • Description: Successfully synchronized the MultiClusterService configuration. Service exposure is configured correctly.

SyncServiceFailed

  • Component: multiclusterservice-controller
  • Type: ⚠️ Warning
  • Involved Object: MultiClusterService
  • Description: Failed to synchronize MultiClusterService. Cross-cluster load balancing may not work.

DispatchEndpointSliceSucceed

  • Component: endpointslice-dispatch-controller
  • Type: ✅ Normal
  • Involved Object: EndpointSlice
  • Description: Successfully dispatched EndpointSlice to consumer clusters. Cross-cluster endpoints are now available for load balancing.

DispatchEndpointSliceFailed

  • Component: endpointslice-dispatch-controller
  • Type: ⚠️ Warning
  • Involved Object: EndpointSlice
  • Description: Failed to dispatch EndpointSlice to consumer clusters. Traffic routing may be incomplete.
  • Troubleshooting: Check network connectivity and permissions.

ClusterNotFound

  • Component: endpointslice-dispatch-controller
  • Type: ⚠️ Warning
  • Involved Object: EndpointSlice
  • Description: Consumer cluster specified in MultiClusterService not found. Verify cluster name and registration.
  • Troubleshooting: Verify cluster exists with kubectl get clusters.

APIIncompatible

  • Component: endpointslice-dispatch-controller
  • Type: ⚠️ Warning
  • Involved Object: EndpointSlice
  • Description: Member cluster does not support EndpointSlice API (requires Kubernetes 1.19+). Consider upgrading the cluster.
  • Troubleshooting: Check member cluster Kubernetes version.

Resource Quota Events

These events track federated resource quota enforcement across clusters.

Note: FederatedResourceQuota controllers reuse some event reason strings that are also used by binding controllers. The context (involved object type) distinguishes them.
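
Because reason strings are reused, filtering by reason alone can mix unrelated failures; grouping by the involved object's kind separates them. An illustrative sketch over sample event records (the dicts mirror fields kubectl shows; this is not a client call):

```python
from collections import defaultdict

# Illustrative sketch: split events sharing one reason string by the
# kind of the object they are attached to.
def group_by_kind(events, reason):
    groups = defaultdict(list)
    for ev in events:
        if ev["reason"] == reason:
            groups[ev["involvedObject"]["kind"]].append(ev["involvedObject"]["name"])
    return dict(groups)

events = [
    {"reason": "SyncWorkFailed",
     "involvedObject": {"kind": "ResourceBinding", "name": "nginx-deployment"}},
    {"reason": "SyncWorkFailed",
     "involvedObject": {"kind": "FederatedResourceQuota", "name": "team-quota"}},
]
print(group_by_kind(events, "SyncWorkFailed"))
# {'ResourceBinding': ['nginx-deployment'], 'FederatedResourceQuota': ['team-quota']}
```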

SyncWorkSucceed (FederatedResourceQuota)

  • Component: federated-resource-quota-sync-controller
  • Type: ✅ Normal
  • Involved Object: FederatedResourceQuota
  • Description: Successfully synchronized the FederatedResourceQuota specification to member clusters. Quota limits are enforced.

SyncWorkFailed (FederatedResourceQuota)

  • Component: federated-resource-quota-sync-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedResourceQuota
  • Description: Failed to sync quota to member clusters. Quota enforcement may be incomplete.

AggregateStatusSucceed (FederatedResourceQuota)

  • Component: federated-resource-quota-status-controller
  • Type: ✅ Normal
  • Involved Object: FederatedResourceQuota
  • Description: Successfully collected resource usage from member clusters. Status reflects current quota utilization.

AggregateStatusFailed (FederatedResourceQuota)

  • Component: federated-resource-quota-status-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedResourceQuota
  • Description: Failed to collect status from member clusters. Usage information may be incomplete.

CollectOverallStatusSucceed

  • Component: federated-resource-quota-enforcement-controller
  • Type: ✅ Normal
  • Involved Object: FederatedResourceQuota
  • Description: Successfully collected and aggregated overall quota status across all clusters.

CollectOverallStatusFailed

  • Component: federated-resource-quota-enforcement-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedResourceQuota
  • Description: Failed to collect overall status. Quota enforcement decisions may be based on stale data.

Autoscaling Events

These events track FederatedHPA and CronFederatedHPA operations for multi-cluster autoscaling.

FederatedHPA Events

SuccessfulRescale

  • Component: federatedhpa-controller
  • Type: ✅ Normal
  • Involved Object: FederatedHPA
  • Description: Successfully scaled the target workload to the desired replica count computed from metrics.

FailedRescale

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: Failed to rescale the target workload. Common causes: invalid target reference, insufficient cluster capacity, or permission issues.
  • Troubleshooting:
    • Verify the scale target reference exists and is accessible
    • Check cluster capacity and resource availability
    • Review federatedhpa-controller logs
    • Ensure RBAC permissions allow scaling operations

FailedGetScale

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: Failed to get scale information for the target resource. The resource may not support scaling or may not exist.
  • Troubleshooting: Verify the target resource exists and supports the scale subresource.

FailedComputeMetricsReplicas

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: Failed to compute desired replicas based on metrics. Common causes: metrics unavailable, metrics server unreachable, or invalid metric queries.
  • Troubleshooting:
    • Verify the metrics source is running and reachable from the controller
    • Check that the referenced metric exists for the target pods
    • Review federatedhpa-controller logs
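
The replica target follows the standard HPA computation that FederatedHPA builds on: the desired count is the ceiling of current replicas scaled by the ratio of observed metric to target. A minimal sketch (metric values here are plain numbers; real metrics are per-pod averages):

```python
import math

# Minimal sketch of the standard HPA replica formula:
#   desired = ceil(current_replicas * current_metric / target_metric)
def desired_replicas(current_replicas, current_metric, target_metric):
    return math.ceil(current_replicas * current_metric / target_metric)

print(desired_replicas(3, 90, 60))  # average usage 90 vs target 60 -> 5
```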

FailedGetBindings

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: Failed to get ResourceBinding or ClusterResourceBinding for the target workload. The workload may not be propagated.
  • Troubleshooting: Verify the target workload has a corresponding binding and is being propagated.

FailedGetTargetClusters

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: Failed to determine target clusters for autoscaling. Cannot aggregate metrics or distribute replicas.
  • Troubleshooting: Check that the target workload is scheduled to clusters via PropagationPolicy.

FailedGetScaleTargetRef

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: Failed to get the scale target reference. The referenced resource may not exist.
  • Troubleshooting: Verify the scaleTargetRef in FederatedHPA spec points to an existing resource.

SelectorRequired

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: A selector is required for the metric but was not provided in the FederatedHPA spec.
  • Troubleshooting: Add the required selector to the metric specification.

InvalidSelector

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: The provided selector is invalid or malformed.
  • Troubleshooting: Verify selector syntax follows Kubernetes label selector format.

AmbiguousSelector

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: The selector matches multiple resources when only one is expected.
  • Troubleshooting: Make the selector more specific to match a single resource.

FailedUpdateStatus

  • Component: federatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: FederatedHPA
  • Description: Failed to update the FederatedHPA status. Status information may be outdated.
  • Troubleshooting: Check API server connectivity and controller permissions.

CronFederatedHPA Events

StartRuleFailed

  • Component: cronfederatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: CronFederatedHPA
  • Description: Failed to start a scheduled scaling rule. The rule will not trigger scaling until resolved.
  • Troubleshooting: Verify the cron expressions in the CronFederatedHPA spec are valid.

ScaleFailed

  • Component: cronfederatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: CronFederatedHPA
  • Description: Failed to scale workload based on cron schedule. The scheduled scaling operation did not complete successfully.
  • Troubleshooting:
    • Verify target workload exists and is scalable
    • Check cluster capacity for the desired replica count
    • Review scheduling rules and target replica counts

UpdateCronFederatedHPAFailed

  • Component: cronfederatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: CronFederatedHPA
  • Description: Failed to update the CronFederatedHPA object. Changes to the cron schedule may not be applied.
  • Troubleshooting: Check API server connectivity and controller permissions.

UpdateStatusFailed

  • Component: cronfederatedhpa-controller
  • Type: ⚠️ Warning
  • Involved Object: CronFederatedHPA
  • Description: Failed to update the CronFederatedHPA status. Status information may be outdated, including last execution time and active rules.
  • Troubleshooting: Check API server connectivity and controller permissions.

Event Reference Table

Quick reference of all unique event reasons organized alphabetically. Note that some event reasons are reused across different object types (e.g., SyncWorkFailed for both bindings and FederatedResourceQuota).

| Event Reason | Type | Component | Involved Objects |
|---|---|---|---|
| AggregateStatusFailed | Warning | resource-binding-status-controller, cluster-resource-binding-status-controller, federated-resource-quota-status-controller | ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota |
| AggregateStatusSucceed | Normal | resource-binding-status-controller, cluster-resource-binding-status-controller, federated-resource-quota-status-controller | ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota |
| AmbiguousSelector | Warning | federatedhpa-controller | FederatedHPA |
| APIIncompatible | Warning | endpointslice-dispatch-controller | EndpointSlice |
| ApplyOverridePolicyFailed | Warning | override-manager | Resource template |
| ApplyOverridePolicySucceed | Normal | override-manager | Resource template |
| ApplyPolicyFailed | Warning | detector | Resource template |
| ApplyPolicySucceed | Normal | detector | Resource template |
| CleanupWorkFailed | Warning | binding-controller | ResourceBinding, ClusterResourceBinding |
| ClusterNotFound | Warning | endpointslice-dispatch-controller | EndpointSlice |
| CollectOverallStatusFailed | Warning | federated-resource-quota-enforcement-controller | FederatedResourceQuota |
| CollectOverallStatusSucceed | Normal | federated-resource-quota-enforcement-controller | FederatedResourceQuota |
| CreateExecutionSpaceFailed | Warning | cluster-controller | Cluster |
| CreateExecutionSpaceSucceed | Normal | cluster-controller | Cluster |
| DependencyPolicyConflict | Warning | dependencies-distributor | ResourceBinding, ClusterResourceBinding |
| DescheduleBindingFailed | Warning | descheduler | ResourceBinding, ClusterResourceBinding, Resource template |
| DescheduleBindingSucceed | Normal | descheduler | ResourceBinding, ClusterResourceBinding, Resource template |
| DispatchEndpointSliceFailed | Warning | endpointslice-dispatch-controller | EndpointSlice |
| DispatchEndpointSliceSucceed | Normal | endpointslice-dispatch-controller | EndpointSlice |
| EvictWorkloadFromClusterFailed | Warning | binding-controller, taint-manager | ResourceBinding, ClusterResourceBinding, Resource template |
| EvictWorkloadFromClusterSucceed | Normal | binding-controller, taint-manager | ResourceBinding, ClusterResourceBinding, Resource template |
| FailedComputeMetricsReplicas | Warning | federatedhpa-controller | FederatedHPA |
| FailedGetBindings | Warning | federatedhpa-controller | FederatedHPA |
| FailedGetScale | Warning | federatedhpa-controller | FederatedHPA |
| FailedGetScaleTargetRef | Warning | federatedhpa-controller | FederatedHPA |
| FailedGetTargetClusters | Warning | federatedhpa-controller | FederatedHPA |
| FailedRescale | Warning | federatedhpa-controller | FederatedHPA |
| FailedUpdateStatus | Warning | federatedhpa-controller | FederatedHPA |
| GetDependenciesFailed | Warning | dependencies-distributor | Resource template |
| GetDependenciesSucceed | Normal | dependencies-distributor | Resource template |
| InterpretHealthFailed | Warning | work-status-controller | Work |
| InterpretHealthSucceed | Normal | work-status-controller | Work |
| InvalidSelector | Warning | federatedhpa-controller | FederatedHPA |
| PreemptPolicyFailed | Warning | detector | Resource template |
| PreemptPolicySucceed | Normal | detector | Resource template |
| ReflectStatusFailed | Warning | work-status-controller | Work |
| ReflectStatusSucceed | Normal | work-status-controller | Work |
| RemoveExecutionSpaceFailed | Warning | cluster-controller | Cluster |
| RemoveExecutionSpaceSucceed | Normal | cluster-controller | Cluster |
| ScaleFailed | Warning | cronfederatedhpa-controller | CronFederatedHPA |
| ScheduleBindingFailed | Warning | scheduler | ResourceBinding, ClusterResourceBinding, Resource template |
| ScheduleBindingSucceed | Normal | scheduler | ResourceBinding, ClusterResourceBinding, Resource template |
| SelectorRequired | Warning | federatedhpa-controller | FederatedHPA |
| StartRuleFailed | Warning | cronfederatedhpa-controller | CronFederatedHPA |
| SuccessfulRescale | Normal | federatedhpa-controller | FederatedHPA |
| SyncDerivedServiceFailed | Warning | service-import-controller | ServiceImport |
| SyncDerivedServiceSucceed | Normal | service-import-controller | ServiceImport |
| SyncFailed | Warning | execution-controller | Work |
| SyncImpersonationConfigFailed | Warning | unified-auth-controller | Cluster |
| SyncImpersonationConfigSucceed | Normal | unified-auth-controller | Cluster |
| SyncScheduleResultToDependenciesFailed | Warning | dependencies-distributor | ResourceBinding, ClusterResourceBinding |
| SyncScheduleResultToDependenciesSucceed | Normal | dependencies-distributor | ResourceBinding, ClusterResourceBinding |
| SyncServiceFailed | Warning | multiclusterservice-controller | MultiClusterService |
| SyncServiceSucceed | Normal | multiclusterservice-controller | MultiClusterService |
| SyncSucceed | Normal | execution-controller | Work |
| SyncWorkFailed | Warning | binding-controller, federated-resource-quota-sync-controller | ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota |
| SyncWorkSucceed | Normal | binding-controller, federated-resource-quota-sync-controller | ResourceBinding, ClusterResourceBinding, Resource template, FederatedResourceQuota |
| TaintClusterFailed | Warning | cluster-controller | Cluster |
| TaintClusterSucceed | Normal | cluster-controller | Cluster |
| UpdateCronFederatedHPAFailed | Warning | cronfederatedhpa-controller | CronFederatedHPA |
| UpdateStatusFailed | Warning | cronfederatedhpa-controller | CronFederatedHPA |
| WorkDispatching | Normal | execution-controller | Work |

Working with Events

Viewing Events

View events for a specific resource

# View events for a deployment
kubectl describe deployment my-app

# View events for a ResourceBinding
kubectl describe rb my-app-deployment

# View events for a Cluster
kubectl describe cluster member1

View all events in a namespace

# All events in default namespace
kubectl get events -n default

# All events in karmada-system
kubectl get events -n karmada-system

# Watch events in real-time
kubectl get events -n karmada-system --watch

Filter events by reason

# Show only scheduling failures
kubectl get events --field-selector reason=ScheduleBindingFailed

# Show all sync failures
kubectl get events --field-selector reason=SyncWorkFailed

Common Event Patterns and Solutions

Pattern 1: Propagation Stuck at Scheduling

Symptoms:

  • ApplyPolicySucceed event appears on resource template
  • No ScheduleBindingSucceed event
  • May see ScheduleBindingFailed event

Root Causes:

  • No clusters match scheduling constraints (affinity, tolerations)
  • All matching clusters are NotReady
  • Insufficient cluster capacity
  • Scheduler is not running

Resolution:

  1. Check cluster status: kubectl get clusters
  2. Review PropagationPolicy placement rules
  3. Check scheduler logs: kubectl logs -n karmada-system karmada-scheduler-xxx
  4. Review scheduler_pending_bindings metric

Pattern 2: Work Created but Not Applied

Symptoms:

  • SyncWorkSucceed event on binding
  • No SyncSucceed on Work
  • May see SyncFailed event

Root Causes:

  • Member cluster unreachable or NotReady
  • Resource conflicts in member cluster
  • Missing CRDs in member cluster
  • Permission issues

Resolution:

  1. Verify cluster health: kubectl get cluster <name> -o yaml
  2. Check Work status: kubectl get work -n karmada-es-<cluster> <work-name> -o yaml
  3. Review execution controller/agent logs
  4. Check for conflicts: kubectl get <resource> -n <namespace> --context=<member-cluster>

Pattern 3: Status Not Aggregating

Symptoms:

  • SyncSucceed on Work
  • No ReflectStatusSucceed or AggregateStatusSucceed
  • Resource status in Karmada is empty or outdated

Root Causes:

  • Work status controller not running
  • Network issues preventing status collection
  • Health interpretation errors

Resolution:

  1. Check work-status-controller logs
  2. Verify Work has status: kubectl get work -n karmada-es-<cluster> <work-name> -o yaml
  3. Review InterpretHealthFailed events if present
  4. Check ResourceInterpreterCustomization if using custom rules

Pattern 4: Dependencies Not Propagating

Symptoms:

  • GetDependenciesSucceed on resource template
  • SyncScheduleResultToDependenciesFailed on binding
  • Workloads fail in member clusters due to missing ConfigMaps/Secrets

Root Causes:

  • Dependent resources don't exist
  • Permission issues reading dependencies
  • Policy conflicts

Resolution:

  1. Verify dependent ConfigMaps/Secrets exist: kubectl get cm,secret -n <namespace>
  2. Check for DependencyPolicyConflict events
  3. Review dependencies-distributor logs
  4. Verify propagation policies don't conflict

Event Retention and History

Kubernetes events are ephemeral and automatically garbage collected:

  • Default TTL: 1 hour for most events
  • Storage: Events are stored in etcd like other Kubernetes objects
  • Limits: Kubernetes limits the number of events per object to prevent etcd exhaustion

For long-term event storage and analysis:

  1. Export to external systems: Use event exporters to send events to:

    • Elasticsearch + Kibana
    • Loki + Grafana
    • Cloud logging services (CloudWatch, Stackdriver, etc.)
  2. Use audit logs: Kubernetes audit logs provide more comprehensive history

  3. Monitor with alerts: Use related metrics to monitor with Prometheus alerts:

    # Example alert for scheduling failures
    - alert: HighSchedulingFailureRate
      expr: |
        rate(karmada_scheduler_schedule_attempts_total{result="error"}[5m])
          / rate(karmada_scheduler_schedule_attempts_total[5m]) > 0.1
      annotations:
        summary: High scheduling failure rate (>10%)