
Conversation

@jeff-phillips-18 (Member) commented Jan 12, 2026

Description

Add columns to the Nodes list view: Machine set, MachineConfigPools, and VMs.

Add filters for Machine set and MachineConfigPools.

Notes:

  • Machine set could be either a MachineSet or ControlPlaneMachineSet
  • VMs are shown only if the kubevirt plugin is installed and feature flagged
  • Updated the default Node columns to:
    Name
    Status
    Machine set
    VMs
    Pods
    Memory
    CPU

Screenshots

[screenshots]

/cc @kybaker

Summary by CodeRabbit

  • New Features

    • Added Machine Set and MachineConfigPool columns to the Nodes page and filters to search by them.
    • Added optional VMs column to show running VM counts when VM integration is enabled.
    • Enabled sorting/display by running VM counts.
  • Metrics

    • Node metrics now include running VM counts when VM integration is active.
  • Localization

    • Added English labels/translations for new columns and filter options.

✏️ Tip: You can customize this high-level summary in your review settings.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Jan 12, 2026
@openshift-ci-robot (Contributor) commented Jan 12, 2026

@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci bot (Contributor) commented Jan 12, 2026

@jeff-phillips-18: GitHub didn't allow me to request PR reviews from the following users: kybaker.

Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@coderabbitai bot commented Jan 12, 2026

Walkthrough

Adds i18n entries and feature-gated node table extensions for MachineSet (owner), MachineConfigPool, and VMs; introduces conditional watches for Machine/MachineSet/ControlPlaneMachineSet/MachineConfigPool; extends node row types, filters, columns, and metrics; VM metrics/loading gated by KubeVirt flag.
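
The flag-gated watching summarized above can be sketched as follows. This is a minimal, hypothetical illustration (buildWatchInit is not a name from the PR); it assumes the SDK convention that passing null to useK8sWatchResource sets up no watch.

```typescript
// Hypothetical sketch of flag-gated watching, assuming the convention that a
// null init passed to useK8sWatchResource skips the watch entirely.
type WatchInit = { kind: string; isList: boolean } | null;

// buildWatchInit returns a watch descriptor only when the gating flag is on,
// so a caller could write: useK8sWatchResource(buildWatchInit('MachineSet', ownerEnabled))
function buildWatchInit(kind: string, enabled: boolean): WatchInit {
  return enabled ? { kind, isList: true } : null;
}
```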

Changes

  • Localization: frontend/packages/console-app/locales/en/console-app.json
    Added five i18n entries: Machine set, VMs, MachineConfigPool, Filter by machine set, Filter by MachineConfigPool. Additive only.
  • Nodes page & types: frontend/packages/console-app/src/components/nodes/NodesPage.tsx
    Added watches for Machine, MachineSet, ControlPlaneMachineSet, and MachineConfigPool; extended exported types (NodeRowItem, NodeFilters, NodeListProps) with machineOwner, machineConfigPool, machineOwners, machineConfigPools, and the flags ownerEnabled, mcpEnabled, vmsEnabled; node rows are augmented with ResourceLinks and VM counts.
  • Columns, filtering & rendering: frontend/packages/console-app/src/components/nodes/...
    Changed the useNodesColumns signature to accept (ownerEnabled, mcpEnabled, vmsEnabled); added conditional columns for MachineSet owner, MachineConfigPool, and VMs; extended filters and initialFilters with machineOwners and machineConfigPools.
  • Watches & resource helpers: frontend/packages/console-app/src/components/nodes/...
    Introduced a generic useWatchResourcesIfAllowed hook and used it to conditionally watch the new resources; added logic to compute machineOwner and associate a machineConfigPool with each Node.
  • Metrics / actions: frontend/public/actions/ui.ts
    Extended NodeMetrics with runningVms: MetricValuesByName; updated the fetchNodeMetrics signature to accept vmsEnabled and optionally load VM metrics.
  • Sorting / utils: frontend/packages/console-shared/src/sorts/nodes.ts
    Added nodeRunningVms(node: NodeKind): number to surface the runningVms metric for sorting/display.
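
The new (ownerEnabled, mcpEnabled, vmsEnabled) signature suggests conditional array spreading for the optional columns. A minimal sketch, with an illustrative column shape rather than the console's actual types:

```typescript
// Illustrative column shape; the real console columns carry more fields.
type Column = { id: string; title: string; sort?: string };

// Feature-flagged columns are appended via conditional spreading, so the
// array stays type-safe and flag-off builds simply omit the entries.
function buildNodeColumns(
  ownerEnabled: boolean,
  mcpEnabled: boolean,
  vmsEnabled: boolean,
): Column[] {
  return [
    { id: 'name', title: 'Name', sort: 'metadata.name' },
    { id: 'status', title: 'Status' },
    ...(ownerEnabled ? [{ id: 'machineOwner', title: 'Machine set', sort: 'machineOwner.name' }] : []),
    ...(vmsEnabled ? [{ id: 'vms', title: 'VMs', sort: 'vmsCount' }] : []),
    ...(mcpEnabled
      ? [{ id: 'machineConfigPool', title: 'MachineConfigPool', sort: 'machineConfigPool.metadata.name' }]
      : []),
  ];
}
```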

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately and specifically describes the main change: adding three new columns (Machine Set, MachineConfigPool, and VMs) to the Nodes list view, which matches the changeset scope.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci bot added component/core Related to console core functionality kind/i18n Indicates issue or PR relates to internationalization or has content that needs to be translated labels Jan 12, 2026
@openshift-ci-robot (Contributor) commented Jan 12, 2026

@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.


@coderabbitai bot left a comment

Actionable comments posted: 5

🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 294-306: In NodesPage.tsx update the sort key for the
MachineConfigPool column: locate the column object created when mcpEnabled is
true (the entry using nodeColumnInfo.machineConfigPool.id) and fix the typo in
the sort string from "machineConfigPool.metadta.name" to
"machineConfigPool.metadata.name" so sorting works correctly for that column.
- Around line 204-215: The VM column sort key is wrong. In the column
definition that uses nodeColumnInfo.vms.id, change the sort property from
'vmCount' to 'vmsCount' so it matches the NodeRowItem field name and column
sorting works.
- Around line 798-802: The filter block reads filters.machineConfigPools and
accesses resource.machineConfigPool.metadata.name directly, but NodeRowItem
marks machineConfigPool as optional, so this can throw a TypeError. Guard that
resource.machineConfigPool (and its metadata) exists before checking
.metadata.name (optional chaining or an explicit null check), so rows without
a MachineConfigPool, such as CSR resources, do not throw when
filters.machineConfigPools is applied.
- Around line 971-979: nodeVMs is set to undefined when isKubevirtPluginActive
is false, yet nodeVMs.length is then accessed unconditionally, causing a
TypeError. Derive vmsCount safely in the returned object (e.g., vmsCount:
nodeVMs?.length ?? 0) so .length is never called on undefined.
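
The null-safety fixes requested above boil down to optional chaining and nullish coalescing. A self-contained sketch using the field names from the comments (the surrounding types here are assumptions for illustration, not the PR's actual definitions):

```typescript
// Illustrative slice of the row type; only the fields used below.
interface NodeRowItem {
  machineConfigPool?: { metadata: { name: string } };
}

// Guarded filter check: a row without a MachineConfigPool simply fails the
// active filter instead of throwing on a `.metadata` access.
function matchesMcpFilter(row: NodeRowItem, selected: string[]): boolean {
  if (selected.length === 0) return true;
  const name = row.machineConfigPool?.metadata.name;
  return name !== undefined && selected.includes(name);
}

// Safe VM count: nodeVMs is undefined when the kubevirt plugin is inactive,
// so default to 0 rather than calling .length on undefined.
function vmsCount(nodeVMs?: unknown[]): number {
  return nodeVMs?.length ?? 0;
}
```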
🧹 Nitpick comments (4)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (4)

985-993: Consider propagating additional resource load errors.

The loadError computation omits machinesLoadError, machineSetsLoadError, controlPlaneMachineSetsLoadError, and machineConfigPoolsLoadError. While ownerEnabled/mcpEnabled disable columns on error, users won't see why data is missing.

For better UX, consider logging these errors or showing a degraded state indicator when supplementary resources fail to load.

💡 Example enhancement
   const loadError = nodesLoadError || csrsLoadError || (isKubevirtPluginActive && vmisLoadError);
+  
+  // Log supplementary resource errors for debugging
+  useEffect(() => {
+    if (machinesLoadError) console.warn('Failed to load Machines:', machinesLoadError);
+    if (machineSetsLoadError) console.warn('Failed to load MachineSets:', machineSetsLoadError);
+    if (controlPlaneMachineSetsLoadError) console.warn('Failed to load ControlPlaneMachineSets:', controlPlaneMachineSetsLoadError);
+    if (machineConfigPoolsLoadError) console.warn('Failed to load MachineConfigPools:', machineConfigPoolsLoadError);
+  }, [machinesLoadError, machineSetsLoadError, controlPlaneMachineSetsLoadError, machineConfigPoolsLoadError]);

975-978: Consider filtering ownerReferences by expected kind.

Taking ownerReferences?.[0] assumes the first reference is always the MachineSet or ControlPlaneMachineSet. If a Machine has multiple owners, this could display the wrong resource. Consider filtering by kind:

♻️ Proposed improvement
         return {
           ...node,
-          machineOwner: owner?.metadata.ownerReferences?.[0],
+          machineOwner: owner?.metadata.ownerReferences?.find(
+            (ref) => ref.kind === 'MachineSet' || ref.kind === 'ControlPlaneMachineSet',
+          ),
           machineConfigPool,
           vmsCount: nodeVMs?.length,
         };

664-677: Add null safety for optional array props.

machineSets and controlPlaneMachineSets are typed as optional (?) in NodeListProps (lines 571-572), but .map() is called directly without null checks. While the current call site always provides arrays from useK8sWatchResource, the type contract should be honored for maintainability.

♻️ Proposed fix
   const machineSetFilterOptions = useMemo<DataViewFilterOption[]>(
     () =>
       [
-        ...machineSets.map((machineSet) => ({
+        ...(machineSets ?? []).map((machineSet) => ({
           value: machineSet.metadata.name,
           label: machineSet.metadata.name,
         })),
-        ...controlPlaneMachineSets.map((machineSet) => ({
+        ...(controlPlaneMachineSets ?? []).map((machineSet) => ({
           value: machineSet.metadata.name,
           label: machineSet.metadata.name,
         })),
       ].sort((a, b) => a.label.localeCompare(b.label)),
     [machineSets, controlPlaneMachineSets],
   );

679-686: Apply same null safety to machineConfigPools.

Similar to the previous comment, machineConfigPools is optional but .map() is called directly.

♻️ Proposed fix
   const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
     () =>
-      machineConfigPools.map((machineConfigPool) => ({
+      (machineConfigPools ?? []).map((machineConfigPool) => ({
         value: machineConfigPool.metadata.name,
         label: machineConfigPool.metadata.name,
       })),
     [machineConfigPools],
   );
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between a1355a3 and e74ec0b.

📒 Files selected for processing (2)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (3)
  • K8sGroupVersionKind (106-106)
  • NodeKind (822-847)
  • OwnerReference (29-36)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
  • sortWithCSRResource (38-48)
frontend/public/module/k8s/types.ts (4)
  • MachineSetKind (730-744)
  • ControlPlaneMachineSetKind (746-765)
  • MachineConfigPoolKind (842-850)
  • MachineKind (711-728)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  • ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1)
  • useK8sWatchResource (28-69)
frontend/public/models/index.ts (3)
  • MachineModel (894-909)
  • MachineSetModel (911-926)
  • ControlPlaneMachineSetModel (928-943)
frontend/public/module/k8s/label-selector.js (1)
  • LabelSelector (16-278)
🔇 Additional comments (3)
frontend/packages/console-app/locales/en/console-app.json (1)

409-423: LGTM!

New localization keys for "Machine set", "VMs", "MachineConfigPool", and their corresponding filter placeholders are correctly added. These align with the new columns and filters introduced in NodesPage.tsx.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)

101-106: Acknowledged temporary VMI integration.

The TODO comment correctly identifies this as a workaround until the kubevirt plugin can contribute its own column. The GVK definition is correct for kubevirt.io/v1/VirtualMachineInstance. Consider tracking this technical debt in an issue to ensure it gets addressed when the plugin extension mechanism supports column contributions.


466-477: getGroupVersionKindForResource is called with OwnerReference, but the function signature declares K8sResourceCommon.

OwnerReference (with required name, uid, apiVersion, kind) doesn't structurally match K8sResourceCommon (which has optional apiVersion, kind, and metadata). While the function implementation only accesses apiVersion and kind fields—so it works at runtime—this violates the type contract and may trigger strict TypeScript checking. Consider either:

  1. Widening the function signature to accept types with apiVersion and kind properties, or
  2. Casting: getGroupVersionKindForResource(machineOwner as K8sResourceCommon)

@openshift-ci-robot (Contributor) commented Jan 12, 2026

@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.


@coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)

664-686: Add defensive fallbacks for optional array props to prevent runtime errors.

The machineSets, controlPlaneMachineSets, and machineConfigPools props are typed as optional (?), but the code calls .map() on them directly. While useK8sWatchResource returns [] for unloaded lists, the type contract allows undefined, which would cause a TypeError.

♻️ Proposed defensive fix
   const machineSetFilterOptions = useMemo<DataViewFilterOption[]>(
     () =>
       [
-        ...machineSets.map((machineSet) => ({
+        ...(machineSets ?? []).map((machineSet) => ({
           value: machineSet.metadata.name,
           label: machineSet.metadata.name,
         })),
-        ...controlPlaneMachineSets.map((machineSet) => ({
+        ...(controlPlaneMachineSets ?? []).map((machineSet) => ({
           value: machineSet.metadata.name,
           label: machineSet.metadata.name,
         })),
       ].sort((a, b) => a.label.localeCompare(b.label)),
     [machineSets, controlPlaneMachineSets],
   );
 
   const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
     () =>
-      machineConfigPools.map((machineConfigPool) => ({
+      (machineConfigPools ?? []).map((machineConfigPool) => ({
         value: machineConfigPool.metadata.name,
         label: machineConfigPool.metadata.name,
       })),
     [machineConfigPools],
   );

970-973: Consider memoizing LabelSelector instances to reduce allocations.

A new LabelSelector is instantiated for every MCP during each node iteration, resulting in O(nodes × MCPs) object allocations on every render cycle. For large clusters this could impact performance.

♻️ Pre-compute MCP selectors outside the node loop
+  const mcpSelectors = useMemo(
+    () =>
+      machineConfigPools.map((mcp) => ({
+        mcp,
+        selector: new LabelSelector(mcp.spec?.nodeSelector || {}),
+      })),
+    [machineConfigPools],
+  );

   const data = useMemo(() => {
     // ...
     return [
       ...csrBundle,
       ...nodes.map((node) => {
         // ...
-        const machineConfigPool = machineConfigPools.find((mcp) => {
-          const labelSelector = new LabelSelector(mcp.spec.nodeSelector || {});
-          return labelSelector.matches(node);
-        });
+        const machineConfigPool = mcpSelectors.find(({ selector }) =>
+          selector.matches(node),
+        )?.mcp;
         // ...
       }),
     ];
-  }, [csrs, nodes, machines, machineConfigPools, vmis, isKubevirtPluginActive]);
+  }, [csrs, nodes, machines, mcpSelectors, vmis, isKubevirtPluginActive]);
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between e74ec0b and a620b03.

📒 Files selected for processing (2)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (6)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  • getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (4)
  • MachineSetKind (730-744)
  • ControlPlaneMachineSetKind (746-765)
  • MachineConfigPoolKind (842-850)
  • MachineKind (711-728)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  • ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1)
  • useK8sWatchResource (28-69)
frontend/public/models/index.ts (4)
  • MachineModel (894-909)
  • MachineSetModel (911-926)
  • ControlPlaneMachineSetModel (928-943)
  • MachineConfigPoolModel (962-977)
frontend/public/module/k8s/label-selector.js (1)
  • LabelSelector (16-278)
🔇 Additional comments (7)
frontend/packages/console-app/locales/en/console-app.json (1)

409-423: LGTM — i18n keys correctly added for new Node columns and filters.

The translation keys align with the UI elements introduced in NodesPage.tsx. Capitalization is consistent with existing conventions (e.g., MachineConfigPool matching line 81).

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (6)

101-106: Reasonable temporary integration for KubeVirt VMI awareness.

The GVK definition is correct for kubevirt.io/v1/VirtualMachineInstance. The TODO comment clearly signals this should be refactored once the kubevirt-plugin can contribute the column directly via extension points.


1011-1018: Conditional column enablement provides reasonable graceful degradation.

The logic correctly enables columns based on resource availability:

  • ownerEnabled requires machines and at least one of MachineSets/ControlPlaneMachineSets
  • mcpEnabled requires both machines and MCPs
  • vmsEnabled gates on KubeVirt feature flag

This is a solid UX pattern for optional functionality.


904-917: KubeVirt integration follows correct patterns.

The double-gate (KUBEVIRT_DYNAMIC flag + plugin presence check) is appropriately defensive. Conditionally passing undefined to useK8sWatchResource is the standard pattern to skip unnecessary watches.


167-248: Column definitions cleanly implement feature-flagged columns.

The conditional spreading pattern for machineOwner, vms, and machineConfigPool columns is idiomatic. Moving Roles to additional: true aligns with the updated default column set in the PR description.


466-490: Cell rendering correctly uses dynamic GVK resolution for ResourceLinks.

Using getGroupVersionKindForResource() is the right approach for rendering links to owner references and MCPs where the exact GVK is determined at runtime.


836-848: Type definitions accurately model the extended node data and filter state.

The optional fields on NodeRowItem and the new filter arrays in NodeFilters correctly reflect the runtime data model.

@openshift-ci-robot (Contributor) commented Jan 12, 2026

@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.


@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 664-686: The useMemo hooks assume machineSets,
controlPlaneMachineSets, and machineConfigPools are always arrays, but all
three are optional in NodeListProps. Update machineSetFilterOptions and
machineConfigPoolFilterOptions to fall back to an empty array before mapping,
e.g. (machineSets ?? []).map(...), so .map() and .sort() never run on
undefined.
🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)

904-908: Consider defensive check for window.SERVER_FLAGS.

The Array.isArray(window.SERVER_FLAGS.consolePlugins) check is good, but window.SERVER_FLAGS itself could theoretically be undefined in certain environments (unit tests, SSR edge cases). A more defensive approach would prevent potential test failures.

💡 Optional: Add null check for SERVER_FLAGS
   const isKubevirtPluginActive =
-    Array.isArray(window.SERVER_FLAGS.consolePlugins) &&
-    window.SERVER_FLAGS.consolePlugins.includes('kubevirt-plugin') &&
+    Array.isArray(window.SERVER_FLAGS?.consolePlugins) &&
+    window.SERVER_FLAGS?.consolePlugins.includes('kubevirt-plugin') &&
     kubevirtFeature;
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between a620b03 and 602755a.

📒 Files selected for processing (2)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (8)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  • K8sGroupVersionKind (106-106)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  • getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (3)
  • MachineSetKind (730-744)
  • ControlPlaneMachineSetKind (746-765)
  • MachineConfigPoolKind (842-850)
frontend/packages/console-app/src/components/data-view/ConsoleDataView.tsx (1)
  • initialFiltersDefault (30-30)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  • ResourceFilters (5-8)
frontend/public/models/index.ts (4)
  • MachineModel (894-909)
  • MachineSetModel (911-926)
  • ControlPlaneMachineSetModel (928-943)
  • MachineConfigPoolModel (962-977)
frontend/packages/console-dynamic-plugin-sdk/src/utils/flags.ts (1)
  • useFlag (11-12)
frontend/public/module/k8s/label-selector.js (1)
  • LabelSelector (16-278)
🔇 Additional comments (8)
frontend/packages/console-app/locales/en/console-app.json (1)

409-423: LGTM! New i18n keys are well-structured.

The new translation keys for Machine set, VMs, MachineConfigPool, and associated filter placeholders follow existing conventions and align with the NodesPage.tsx usage. Good to go.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)

101-106: Acknowledged: Temporary VMI GVK definition.

The hardcoded VirtualMachineInstanceGroupVersionKind with the TODO note is a reasonable interim approach. Once the KubeVirt plugin can contribute the VMs column via extension points, this can be removed. The GVK values are correct for the kubevirt.io API.


115-132: LGTM! Column info entries are well-defined.

The new column identifiers (machineOwner, vms, machineConfigPool) and the repositioned role column follow the existing pattern. These IDs correctly map to the cell rendering logic in getNodeDataViewRows.

Also applies to: 148-150


167-171: LGTM! Conditional column rendering is well-implemented.

The feature-flagged column pattern using conditional spreading (...(flag ? [...] : [])) is clean and maintains type safety. The sort paths correctly reference the augmented NodeRowItem properties. The additional: true marking on secondary columns aligns with the column management UX pattern.

Also applies to: 192-215, 240-248, 294-306, 343-343


792-802: LGTM! Filter logic handles optional properties correctly.

The optional chaining (resource.machineOwner?.name, resource.machineConfigPool?.metadata.name) ensures nodes without these associations are safely filtered out when the respective filters are active. This is the expected UX behavior.


994-1003: Graceful degradation pattern is well-designed.

The separation of concerns here is solid: machine-related load errors disable the associated columns/filters rather than failing the entire Nodes page. The explicit comment on line 1002 clarifies the intent.

One consideration: the loaded state waits for all resources including Machines/MachineSets/MCPs even when they'll be hidden due to errors. If these APIs are slow or unavailable on certain cluster configurations (e.g., non-OCP clusters), the initial load could be delayed. This is likely acceptable for the target environment (OpenShift Console).


1017-1024: Consider: Does mcpEnabled need to depend on machinesLoadError?

The mcpEnabled condition includes !machinesLoadError, but the MachineConfigPool association (lines 970-973) uses only LabelSelector matching against node labels—it doesn't require the Machines API.

If Machines fail to load but MachineConfigPools load successfully, the MCP column would be unnecessarily hidden.

Was the machinesLoadError dependency intentional for mcpEnabled? If not:

💡 Potential fix
-          mcpEnabled={!machinesLoadError && !machineConfigPoolsLoadError}
+          mcpEnabled={!machineConfigPoolsLoadError}

970-973: No action needed - empty nodeSelector handling is correct.

LabelSelector with an empty selector {} returns false when matching nodes, not true. When mcp.spec.nodeSelector is undefined, the fallback to {} results in zero conjuncts, causing isEmpty() to return true. Since emptySelectsAll defaults to false, the selector correctly returns false for all nodes. MCPs without explicit node selectors will not incorrectly match any nodes.

The .find() returning only the first matching MCP is acceptable for typical MachineConfigPool association patterns.
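
The empty-selector behavior this comment relies on can be mimicked in a generic form. This is a simplified sketch, not the console's actual LabelSelector class (which also supports match expressions): with emptySelectsAll left false, a selector with zero requirements matches nothing.

```typescript
// Generic sketch of equality-based label matching with the emptySelectsAll
// semantics described above; the real LabelSelector is richer than this.
function matchesLabels(
  selector: Record<string, string>,
  labels: Record<string, string>,
  emptySelectsAll = false,
): boolean {
  const entries = Object.entries(selector);
  // Zero requirements: match nothing unless emptySelectsAll is opted into.
  if (entries.length === 0) return emptySelectsAll;
  return entries.every(([key, value]) => labels[key] === value);
}
```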

@kybaker commented Jan 13, 2026

This part looks good, we might need to modify the default columns later.

Member

@rhamilto rhamilto left a comment


Nice work, @jeff-phillips-18! Claude highlighted a few areas to address.

}

if (filters.machineOwners.length > 0) {
if (!filters.machineOwners.includes(resource.machineOwner?.name)) {

Claude suggests the following to fix a potential null reference:

Suggested change
if (!filters.machineOwners.includes(resource.machineOwner?.name)) {
if (!resource.machineOwner || !filters.machineOwners.includes(resource.machineOwner.name)) {

}

if (filters.machineConfigPools.length > 0) {
if (!filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)) {

Claude suggests the following to fix a potential null reference:

Suggested change
if (!filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)) {
if (!resource.machineConfigPool || !filters.machineConfigPools.includes(resource.machineConfigPool.metadata.name)) {

const loadError = nodesLoadError || csrsLoadError;
return [
...csrBundle,
...nodes.map((node) => {

Claude highlights a performance concern that seems worth addressing:

  NodesPage.tsx:964-990
  ...nodes.map((node) => {
    const [machineName] = node ? getNodeMachineNameAndNamespace(node) : ['', ''];
    const owner = machines.find((machine) => machine.metadata.name === machineName);
    const machineConfigPool = machineConfigPools.find((mcp) => {
      const labelSelector = new LabelSelector(mcp.spec.nodeSelector || {});
      return labelSelector.matches(node);
    });
    const nodeVMs = isKubevirtPluginActive
      ? vmis.filter((vmi) => vmi.status?.nodeName === node.metadata.name)
      : undefined;
  This does O(m) or O(v) work per node, i.e. O(n×m) overall:
  - Finding the machine by name: an O(m) scan per node
  - Finding the MCP: an O(m) scan per node, with nested label-matching logic
  - Filtering VMIs: an O(v) scan per node

  For large clusters, consider building lookup maps first:
  const machineByName = useMemo(() =>
    new Map(machines.map(m => [m.metadata.name, m])),
    [machines]
  );

  const vmisByNode = useMemo(() => {
    const map = new Map();
    vmis.forEach(vmi => {
      const nodeName = vmi.status?.nodeName;
      if (nodeName) {
        map.set(nodeName, (map.get(nodeName) || []).concat(vmi));
      }
    });
    return map;
  }, [vmis]);

Member Author

I have made the suggested changes for machines.
I found a better way to get the VMs count via prometheus rather than fetching VMIs and matching by node name.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Jan 13, 2026

@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.

Details

In response to this:

Description

Add columns to the Nodes list view: Machine set, MachineConfigPools, and VMs.

Add filters for Machine set and MachineConfigPools.

Notes:

  • Machine set could be either a MachineSet or ControlPlaneMachineSet
  • VMs are shown only if the kubevirt plugin is installed and feature flagged
  • Updated the default Node columns to:
    Name
    Status
    Machine set
    VMs
    Pods
    Memory
    CPU

Screenshots


/cc @kybaker

Summary by CodeRabbit

  • New Features

  • Added Machine Set and MachineConfigPool columns to the Nodes page for improved cluster visibility.

  • Added an optional VM count column (shown when the VM integration is active) to display running VM totals.

  • Added filters to search nodes by Machine Set and MachineConfigPool.

  • Node metrics now include running VM counts when VM integration is enabled.

  • Localization

  • Added English translations for the new UI labels and filter options.

✏️ Tip: You can customize this high-level summary in your review settings.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 949-951: The machinesByName Map keys only use m.metadata.name
which can collide across namespaces; update the useMemo that creates
machinesByName to key each entry by a namespaced identifier like
`${m.metadata.namespace}/${m.metadata.name}` (refer to the machinesByName
variable and its useMemo), and then update any places that look up machines from
this Map (e.g., the data lookup that currently uses just the name) to construct
and use the same `${namespace}/${name}` key so machines in different namespaces
are distinguished.
- Around line 984-992: The machine-related watches in NodesPage.tsx must be
gated by RBAC like the CSRs pattern: add useAccessReview checks for the
Machines, MachineSets, ControlPlaneMachineSets, and MachineConfigPools resources
and, where you currently call useK8sWatchResource for those resources, pass
undefined when access is denied (same approach used around the csrs/CSR
accessReview and useK8sWatchResource calls). This ensures useK8sWatchResource
returns loaded=true when the user lacks access so machinesLoaded,
machineSetsLoaded, controlPlaneMachineSetsLoaded, and machineConfigPoolsLoaded
won’t block the overall loaded flag.
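The first fix above (namespaced map keys) comes down to a one-line change in how the Map is keyed; a sketch with a hypothetical Machine shape:

```typescript
type Machine = { metadata: { name: string; namespace: string } };

const machines: Machine[] = [
  { metadata: { name: 'worker-a', namespace: 'openshift-machine-api' } },
  { metadata: { name: 'worker-a', namespace: 'other-ns' } },
];

// Name-only keys collide across namespaces: the later entry silently wins.
const byNameOnly = new Map<string, Machine>(
  machines.map((m): [string, Machine] => [m.metadata.name, m]),
);

// Keying by "namespace/name" keeps same-named machines distinct.
const byNamespacedName = new Map<string, Machine>(
  machines.map((m): [string, Machine] => [`${m.metadata.namespace}/${m.metadata.name}`, m]),
);
```

Any lookup site must then build the same `${namespace}/${name}` key before calling `get`.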
🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)

373-373: Consider: Type consistency for runningVMs default.

Line 373 defaults to '0' (string) while the metric value is a number. This works for display but consider using 0 (number) and letting React coerce it, or explicitly String(runningVMs) for clarity. Minor nit.

Also applies to: 480-482


679-686: Minor: Consider sorting MachineConfigPool filter options.

machineSetFilterOptions is sorted alphabetically (line 675), but machineConfigPoolFilterOptions is not. Consider adding .sort((a, b) => a.label.localeCompare(b.label)) for consistency.

♻️ Suggested fix
   const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
     () =>
       machineConfigPools.map((machineConfigPool) => ({
         value: machineConfigPool.metadata.name,
         label: machineConfigPool.metadata.name,
-      })),
+      })).sort((a, b) => a.label.localeCompare(b.label)),
     [machineConfigPools],
   );
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 602755a and 6b123bb.

📒 Files selected for processing (3)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
  • frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/public/actions/ui.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  • MetricValuesByName (869-871)
🔇 Additional comments (13)
frontend/public/actions/ui.ts (1)

43-52: LGTM! Clean type extension for VM metrics.

The runningVms field follows the established MetricValuesByName pattern used by other node metrics. This additive change maintains backward compatibility.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (12)

42-60: LGTM! Well-organized imports for machine-related functionality.

The new model and kind imports are properly grouped and all appear to be utilized in the implementation below.


106-141: Clean column info structure for new columns.

The machineOwner, vms, and machineConfigPool column definitions follow the established pattern.


158-162: Good approach with feature flags for conditional columns.

The hook signature change to accept boolean flags enables clean conditional rendering of feature-gated columns. This aligns well with the KubeVirt plugin activation pattern.


231-239: Verify: Roles column demoted to additional.

The additional: true flag means Roles will no longer be shown by default, aligning with the PR objective stating default columns are: Name, Status, Machine set, VMs, Pods, Memory, CPU. Users can still enable it via column management.


470-479: LGTM! Correct handling for cluster-scoped MachineConfigPool.

MachineConfigPool is cluster-scoped, so omitting the namespace prop on ResourceLink is correct.


566-598: LGTM! Well-structured props for feature flags.

Clean separation of concerns with boolean enable flags and corresponding data arrays. Default empty arrays prevent undefined access errors.


792-803: LGTM! Filter logic handles optional properties correctly.

The optional chaining ensures CSR resources (which lack machineOwner and machineConfigPool) are properly handled. They'll be filtered out when these filters are active, which is appropriate since CSRs don't have machine associations.


904-908: Good: Dual-check for KubeVirt activation.

Requiring both the KUBEVIRT_DYNAMIC feature flag AND plugin presence in consolePlugins is a resilient pattern that prevents edge cases where one condition is true but not the other.
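That dual check reduces to a small predicate; a sketch with an assumed plugin id (the real plugin list and feature flag come from the console's Redux state and flag hook):

```typescript
// Returns true only when the plugin list is a real array containing the
// kubevirt plugin AND the feature flag is on ('kubevirt-plugin' is assumed).
const isKubevirtActive = (consolePlugins: unknown, kubevirtFlag: boolean): boolean =>
  Array.isArray(consolePlugins) && consolePlugins.includes('kubevirt-plugin') && kubevirtFlag;
```

The `Array.isArray` guard also covers the case where the plugin list has not loaded yet.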


953-982: LGTM! Correct association logic for machine ownership and MCPs.

The LabelSelector matching for MachineConfigPools and ownerReferences traversal for machine ownership are correct approaches. The O(n×m) complexity is acceptable given typical cluster sizes have few MCPs.


1006-1012: LGTM! Graceful degradation for machine API access.

The boolean logic correctly enables features only when required APIs are accessible. The OR condition for machineSetsLoadError || controlPlaneMachineSetsLoadError allows the Machine Set column to show even if one of the machine set APIs fails.


543-550: No action needed — KubeVirt metric correctly exposes node labels.

The kubevirt_vmi_phase_count metric includes a node label per KubeVirt's metric specification, so the query correctly returns per-node VM counts. The code's use of data.metric.instance || data.metric.node on line 557 will properly key results by node.


458-469: No issue—getGroupVersionKindForResource correctly handles OwnerReference.

The machineOwner is an OwnerReference from metadata.ownerReferences, which has both apiVersion and kind fields. The helper only extracts these two fields, so it works seamlessly with OwnerReference objects. The implementation is sound.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (3)

679-686: Consider sorting MachineConfigPool filter options for consistency.

machineSetFilterOptions sorts the combined list alphabetically (line 675), but machineConfigPoolFilterOptions does not. For consistent UX across filters, consider adding .sort((a, b) => a.label.localeCompare(b.label)).

♻️ Optional: Add sorting to MCP filter options
   const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
     () =>
       machineConfigPools.map((machineConfigPool) => ({
         value: machineConfigPool.metadata.name,
         label: machineConfigPool.metadata.name,
-      })),
+      })).sort((a, b) => a.label.localeCompare(b.label)),
     [machineConfigPools],
   );

839-843: vmsCount is declared but never populated on row items.

The NodeRowItem type includes vmsCount?: number, but the data mapping in useMemo (lines 961-984) doesn't actually set this field. VM counts are retrieved from nodeMetrics.runningVms in getNodeDataViewRows instead. Consider removing vmsCount from the type if it's intentionally not used on the row object, or document this design choice.

♻️ Remove unused vmsCount from NodeRowItem
 type NodeRowItem = (NodeKind | NodeCertificateSigningRequestKind) & {
   machineOwner?: OwnerReference;
   machineConfigPool?: MachineConfigPoolKind;
-  vmsCount?: number;
 };

1009-1016: Feature enablement logic is correct but could use a clarifying comment.

The boolean logic for ownerEnabled ensures the column/filter appears when machines load successfully AND at least one owner resource type (MachineSets OR ControlPlaneMachineSets) is available. This is correct—clusters may have only one type of machine set. Consider adding a brief inline comment explaining this OR logic for future maintainers.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 6b123bb and 49e3cb5.

📒 Files selected for processing (3)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
  • frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/public/actions/ui.ts
🔇 Additional comments (15)
frontend/packages/console-app/locales/en/console-app.json (2)

409-411: LGTM! New i18n keys for node columns are well-placed.

These localization entries for Machine set, VMs, and MachineConfigPool follow existing conventions and are positioned near related node terminology. Good to see i18n being handled properly from the start.


422-423: Filter labels follow established patterns.

The filter placeholder strings Filter by machine set and Filter by MachineConfigPool are consistent with existing filter patterns in this file (e.g., Filter by status, Filter by roles).

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (13)

42-48: Model imports align with the feature scope.

Importing MachineModel, MachineConfigPoolModel, MachineSetModel, and ControlPlaneMachineSetModel is appropriate for the new column data requirements. These are well-established OpenShift API models.


106-123: Column info structure follows established patterns.

The new machineOwner, vms, and later machineConfigPool entries in nodeColumnInfo are well-structured. The role column being marked as additional: true in useNodesColumns aligns with the PR objective of updating default columns.


158-162: Clean feature-flag-driven column composition.

The useNodesColumns signature accepting ownerEnabled, mcpEnabled, and vmsEnabled is a solid approach for conditional column rendering. This keeps the UI clean when backing resources aren't available or features are disabled.


470-479: MachineConfigPool cell correctly handles cluster-scoped resource.

Good attention to detail—MachineConfigPool is cluster-scoped, so no namespace prop is passed to ResourceLink. Using getGroupVersionKindForResource(machineConfigPool) on the full resource object is the right approach here.


688-698: Filter initialization is properly structured.

The initialFilters correctly initializes machineOwners and machineConfigPools as empty arrays, ensuring no filters are pre-selected. Good use of spreading initialFiltersDefault for base properties.


792-805: Filter matching logic is correct.

The machineOwners and machineConfigPools filter logic properly handles cases where the resource lacks these associations. Note: Line 801 has resource.machineConfigPool?.metadata.name but the outer condition already confirms resource.machineConfigPool exists—the optional chaining is harmless but redundant.


907-911: Robust KubeVirt plugin detection.

The three-way check (Array.isArray, includes, and feature flag) is appropriately defensive. This ensures VM columns/metrics only appear when KubeVirt is both installed and enabled, avoiding UI confusion on clusters without virtualization support.


961-985: Data composition is well-structured.

The augmentation logic correctly:

  1. Uses Map for O(1) machine lookups
  2. Matches MCPs using LabelSelector.matches(node)
  3. Finds owner references filtering for MachineSet or ControlPlaneMachineSet kinds

The O(nodes × MCPs) complexity for MCP matching is acceptable for typical cluster sizes.


987-995: Intentional graceful degradation for machine-related errors.

The comment at line 994 clarifies the design: machine load errors hide columns/filters rather than blocking the entire page. This is a good UX pattern—users can still see nodes even if machine-related data isn't available. The ownerEnabled and mcpEnabled props correctly reflect this at lines 1009-1014.


895-929: Six concurrent K8s watches may impact performance on large clusters.

The page now maintains watches for Nodes, Machines, MachineSets, ControlPlaneMachineSets, MachineConfigPools, and CSRs. While this is a common pattern, consider:

  1. Could any watches be made conditional based on feature flags (e.g., skip MCP watch if feature is known-disabled)?
  2. For very large clusters, this increases memory footprint from held resource lists.

This isn't a blocker—the pattern is established in console—but worth monitoring if users report sluggishness on large deployments.


933-949: Metrics polling implementation is solid.

The 30-second interval with proper cleanup prevents memory leaks. The dependency on isKubevirtPluginActive ensures VM metrics are fetched when the feature becomes available. Good defensive error handling with the console.error fallback.


458-469: No action needed—OwnerReference is fully compatible with getGroupVersionKindForResource.

The OwnerReference type includes apiVersion and kind as required fields, matching exactly what getGroupVersionKindForResource expects. The utility safely returns undefined if either property is missing. Additionally, the code already guards against null/undefined machineOwner with the conditional rendering (lines 460–468), falling back to DASH. This is sound.


543-550: The query is correct as-is. KubeVirt's kubevirt_vmi_phase_count metric natively exposes a node label for each VMI, so the reducer at line 557 will correctly receive data.metric.node without requiring an explicit node selector in the query. This is the right design: unlike kubelet_running_pods (which needs sum by(node) aggregation), KubeVirt metrics already carry per-VMI granularity with node attribution included.

Likely an incorrect or invalid review comment.

@openshift-ci openshift-ci bot added the component/shared Related to console-shared label Jan 13, 2026


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)

856-874: Consider improving type safety for the empty array fallback.

The hook logic is correct—it properly gates resource watching behind RBAC checks. The !checkIsLoading && loaded ensures we wait for the access review before reporting loaded state.

The type assertion [] as R on line 873 works because R extends K8sResourceCommon[], but a slightly safer pattern would be to use a type predicate or simply return the empty array without assertion since TypeScript infers correctly for array types.

♻️ Optional: Minor type improvement
-  return [resources || ([] as R), !checkIsLoading && loaded, loadError];
+  return [(resources ?? []) as R, !checkIsLoading && loaded, loadError];

Or if you want to avoid the assertion entirely:

const useWatchResourcesIfAllowed = <R extends K8sResourceCommon>(
  model: K8sModel,
): [R[], boolean, unknown] => {
  // ... implementation
  return [resources || [], !checkIsLoading && loaded, loadError];
};
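Outside React, the hook's return contract (empty-array fallback plus loaded gated on the access review) can be exercised as a pure function; the types and names here are assumptions for illustration, not the console API:

```typescript
type WatchState<T> = { resources?: T[]; loaded: boolean; loadError?: unknown };

// Mirrors the hook's return tuple: fall back to [] without a cast, and only
// report loaded once the RBAC access review has finished (checkIsLoading false).
const toWatchResult = <T>(
  state: WatchState<T>,
  checkIsLoading: boolean,
): [T[], boolean, unknown] => [state.resources ?? [], !checkIsLoading && state.loaded, state.loadError];

const pending = toWatchResult<{ name: string }>({ resources: undefined, loaded: true }, true);
const done = toWatchResult({ resources: [{ name: 'm-0' }], loaded: true }, false);
```

Keeping the fallback as a plain `[]` lets TypeScript infer `T[]` without the `as R` assertion flagged above.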
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 49e3cb5 and 70ab099.

📒 Files selected for processing (4)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
  • frontend/packages/console-shared/src/sorts/nodes.ts
  • frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • frontend/public/actions/ui.ts
  • frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  • NodeKind (822-847)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (9)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
  • nodeRunningVms (27-28)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  • getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (4)
  • MachineSetKind (730-744)
  • ControlPlaneMachineSetKind (746-765)
  • MachineConfigPoolKind (842-850)
  • MachineKind (711-728)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (5)
  • NodeKind (822-847)
  • NodeCertificateSigningRequestKind (865-867)
  • OwnerReference (29-36)
  • K8sResourceCommon (67-71)
  • CertificateSigningRequestKind (849-863)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  • ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/api/common-types.ts (1)
  • K8sModel (33-60)
frontend/public/models/index.ts (4)
  • MachineModel (894-909)
  • MachineSetModel (911-926)
  • ControlPlaneMachineSetModel (928-943)
  • MachineConfigPoolModel (962-977)
frontend/packages/console-dynamic-plugin-sdk/src/utils/flags.ts (1)
  • useFlag (11-12)
frontend/packages/console-app/src/components/nodes/csr.ts (1)
  • getNodeClientCSRs (36-75)
🔇 Additional comments (12)
frontend/packages/console-shared/src/sorts/nodes.ts (1)

27-28: LGTM!

Clean addition that follows the established pattern for metric accessors in this file (e.g., nodeCPU, nodePods). Consistent with the new runningVms metric being wired up in ui.ts.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (11)

103-158: LGTM!

Column info structure is well-organized and consistent with existing patterns. The new machineOwner, vms, and machineConfigPool entries follow the established convention.


162-340: LGTM!

The conditional column inclusion pattern using spread operators is clean and the useMemo dependencies correctly track the feature flags. This approach properly hides columns when their backing resources aren't accessible.


924-928: LGTM!

Good defensive check for KubeVirt integration. The Array.isArray guard prevents runtime errors if consolePlugins is undefined, and combining with the feature flag ensures the feature is properly gated.


963-971: LGTM on MCP matching logic.

Using LabelSelector.matches() is the correct approach for determining which MachineConfigPool applies to a node based on its labels. The owner reference lookup correctly filters for MachineSet and ControlPlaneMachineSet kinds.


1004-1011: LGTM on feature gating logic.

The ownerEnabled condition correctly allows the column when machines load AND at least one of MachineSet or ControlPlaneMachineSet is accessible. The mcpEnabled and vmsEnabled conditions are straightforward. This provides graceful degradation when RBAC restricts access to certain resources.


668-690: LGTM on filter options construction.

Combining MachineSets and ControlPlaneMachineSets into a single sorted list for the filter dropdown provides a consistent UX. The alphabetical sort is a nice touch for usability.


982-990: LGTM on error handling strategy.

The design to exclude machine-related errors from loadError while hiding their columns/filters is a solid resilience pattern. Users can still view core node information even if they lack RBAC access to machine resources.


843-854: LGTM on type definitions.

The NodeRowItem intersection type cleanly extends the base node types with optional machine-related fields. The NodeFilters extension properly models the new filter state.


190-202: All i18n keys for new columns and filters are properly defined.

The locale strings—Machine set, VMs, MachineConfigPool, Filter by machine set, and Filter by MachineConfigPool—are all present in frontend/packages/console-app/locales/en/console-app.json and translated across all supported locales (English, Spanish, French, Japanese, Korean, and Chinese). The namespace prefix convention (console-app~) is correctly applied.


547-554: Metric verification confirmed.

The kubevirt_vmi_phase_count{phase='running'} metric is correct per KubeVirt documentation, which confirms it reports VMI counts per phase and node. The code defensively accesses both instance and node labels (line 561), ensuring robustness across different metric exposures.


462-486: No action required—type compatibility verified.

OwnerReference is structurally compatible with K8sResourceCommon. Both apiVersion and kind are required fields on OwnerReference, satisfying the function's needs. The implementation uses optional chaining defensively, and this pattern is consistent across the codebase (VPA, exports, etc.). The cell renderers are correct.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 1006-1012: The mcpEnabled prop incorrectly gates MCP UI on
machinesLoadError; change the prop passed to mcpEnabled in NodesPage (currently
using machinesLoadError and machineConfigPoolsLoadError) so it only checks
!machineConfigPoolsLoadError. Locate the mcpEnabled usage in NodesPage.tsx and
update it to rely solely on machineConfigPoolsLoadError (since MCP assignments
are derived from node labels via LabelSelector(mcp.spec.nodeSelector) and do not
require Machines to be loaded), leaving machineConfigPools and any LabelSelector
logic untouched.
- Around line 547-554: The kubevirt query for the 'runningVms' series is missing
an explicit aggregation and may not yield a data.metric.node or
data.metric.instance label expected by the reducer (see reducer at line 561);
update the query string for the object with key 'runningVms' (inside the
vmsEnabled branch) to use an explicit aggregation such as sum
by(node)(kubevirt_vmi_phase_count{phase='running'}) so the returned series
include the node label and remain consistent with the other metrics.
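A sketch of the suggested aggregated query plus a reducer over the standard Prometheus instant-vector response shape; the query string follows the bot's suggestion, the `instance` fallback mirrors the reducer at line 561, and the helper names are assumptions:

```typescript
// Simplified Prometheus instant-vector sample: label set plus [timestamp, value].
type PromSample = { metric: Record<string, string>; value: [number, string] };

// Explicit aggregation so every returned series carries a node label.
const runningVmsQuery = "sum by(node)(kubevirt_vmi_phase_count{phase='running'})";

// Reduce samples into a MetricValuesByName-style { [nodeName]: count } map.
const toMetricValuesByName = (samples: PromSample[]): Record<string, number> =>
  samples.reduce<Record<string, number>>((acc, s) => {
    const node = s.metric.node || s.metric.instance;
    if (node) {
      acc[node] = Number(s.value[1]);
    }
    return acc;
  }, {});

const sample: PromSample[] = [
  { metric: { node: 'worker-0' }, value: [1736700000, '3'] },
  { metric: { node: 'worker-1' }, value: [1736700000, '0'] },
];
```

With the `sum by(node)` aggregation in place, every series is guaranteed a `node` label, so the `instance` fallback only matters for other metrics that share this reducer.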
🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)

858-876: Type cast on empty array may be too permissive.

The [] as R cast on line 875 bypasses type safety. While functionally fine for array types, consider using a more explicit type or adjusting the generic constraint:

-const useWatchResourcesIfAllowed = <R extends K8sResourceCommon[]>(
+const useWatchResourcesIfAllowed = <T extends K8sResourceCommon>(
   model: K8sModel,
-): [R, boolean, unknown] => {
+): [T[], boolean, unknown] => {
   // ...
-  return [resources || ([] as R), !checkIsLoading && loaded, loadError];
+  return [resources || [], !checkIsLoading && loaded, loadError];
 };

This avoids the explicit cast and makes the return type clearer. Low priority given the current usage is correct.


377-377: Inconsistent default value for VMs compared to other metrics.

The pods metric (line 382) defaults to DASH when unavailable, but runningVMs defaults to '0'. This could be misleading – a missing metric versus "0 VMs running" are different states:

-    const runningVMs = nodeMetrics?.runningVms?.[nodeName] ?? '0';
+    const runningVMs = nodeMetrics?.runningVms?.[nodeName] ?? DASH;

Alternatively, if 0 is intentional because VM-capable nodes with no VMs should show "0", consider adding a comment explaining the rationale.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 70ab099 and f652608.

📒 Files selected for processing (4)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
  • frontend/packages/console-shared/src/sorts/nodes.ts
  • frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (3)
frontend/public/actions/ui.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  • MetricValuesByName (869-871)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  • NodeKind (822-847)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (10)
frontend/public/components/factory/table.tsx (1)
  • TableColumn (662-669)
frontend/packages/console-shared/src/sorts/nodes.ts (2)
  • sortWithCSRResource (40-50)
  • nodeRunningVms (27-28)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  • getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (2)
  • MachineSetKind (730-744)
  • ControlPlaneMachineSetKind (746-765)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (4)
  • NodeKind (822-847)
  • NodeCertificateSigningRequestKind (865-867)
  • K8sResourceCommon (67-71)
  • CertificateSigningRequestKind (849-863)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  • ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/api/common-types.ts (1)
  • K8sModel (33-60)
frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1)
  • useK8sWatchResource (28-69)
frontend/public/models/index.ts (4)
  • MachineModel (894-909)
  • MachineSetModel (911-926)
  • ControlPlaneMachineSetModel (928-943)
  • MachineConfigPoolModel (962-977)
frontend/packages/console-app/src/components/nodes/csr.ts (1)
  • getNodeClientCSRs (36-75)
🔇 Additional comments (9)
frontend/public/actions/ui.ts (1)

43-52: LGTM – Type extension follows established patterns.

The new runningVms field aligns with the existing metric naming convention and type structure. Backward compatible addition.

frontend/packages/console-shared/src/sorts/nodes.ts (1)

27-28: LGTM – Consistent with existing metric accessors.

Follows the established pattern used by nodeCPU and nodePods. The Number() wrapper correctly handles the conversion from the metric store.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)

965-968: First-match behavior for MCP is acceptable but worth noting.

A node could theoretically match multiple MachineConfigPools based on label selectors. The current .find() returns only the first match. This is fine for the UI column display, but ensure this aligns with how OpenShift MCO actually assigns nodes to pools (typically nodes belong to one primary pool).
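The first-match behavior described above can be sketched with plain label matching. The shapes and names below are illustrative stand-ins for the console's `LabelSelector` and MCP types, and this sketch handles only `matchLabels` (real selectors also support `matchExpressions`):

```typescript
type Labels = Record<string, string>;

interface Pool {
  name: string;
  matchLabels: Labels; // stands in for mcp.spec.nodeSelector.matchLabels
}

// A node matches a pool when every key/value pair in the pool's selector
// appears in the node's labels.
const matches = (selector: Labels, labels: Labels): boolean =>
  Object.entries(selector).every(([k, v]) => labels[k] === v);

// `.find()` returns only the FIRST matching pool, mirroring the behavior
// noted in the comment above.
const findPoolForNode = (pools: Pool[], nodeLabels: Labels): Pool | undefined =>
  pools.find((p) => matches(p.matchLabels, nodeLabels));
```

If two pools select the same node, only the one earlier in the list is shown — fine for display, but worth keeping in mind if ordering of the watched MCP list ever changes.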


984-992: Loaded state aggregation is reasonable.

The decision to wait for all resources before rendering prevents partial/inconsistent data. The comment on line 991 correctly explains that machine-related load errors hide columns rather than fail the page. Good defensive approach.


187-210: i18n handling is correct for new columns.

All new column titles and filter labels are properly wrapped with t() using the console-app~ namespace. Follows established patterns.


926-930: Solid KubeVirt feature detection.

The dual check (plugin presence via SERVER_FLAGS.consolePlugins AND feature flag via useFlag('KUBEVIRT_DYNAMIC')) is a good defensive pattern. The Array.isArray guard handles undefined cases.


951-982: Data memoization and mapping looks correct.

Dependencies are properly tracked. The machinesByName Map optimization (lines 901-904) provides O(1) lookups instead of repeated array scans. Node enrichment with machineOwner and machineConfigPool is cleanly implemented.


922-924: CSR watching now properly RBAC-gated.

Using useWatchResourcesIfAllowed for CSRs ensures the watch is only attempted when the user has list permissions. This prevents console errors for users without CSR access.


462-473: No issue here. getGroupVersionKindForResource only requires apiVersion and kind properties—both of which OwnerReference provides. The function signature works with any resource type that includes these fields, not exclusively with K8sResourceCommon.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Jan 13, 2026

@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Line 377: The display variable runningVMs is given a string default ('0')
while nodeMetrics?.runningVms?.[nodeName] is a number, causing a mixed type;
change the default to a numeric 0 so runningVMs is always a number by replacing
the fallback from the string '0' to the number 0 (i.e., set runningVMs =
nodeMetrics?.runningVms?.[nodeName] ?? 0).
🧹 Nitpick comments (4)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (4)

858-876: Consider edge case when access check is still loading.

The hook looks solid overall, but there's a subtle timing consideration: when checkIsLoading is true, useK8sWatchResource receives undefined and immediately returns loaded: true. The combined !checkIsLoading && loaded guards against premature "loaded" states, which is correct.

However, for clarity and consistency, consider initializing the return tuple more explicitly when access check is pending:

💡 Optional improvement for readability
 const useWatchResourcesIfAllowed = <R extends K8sResourceCommon[]>(
   model: K8sModel,
 ): [R, boolean, unknown] => {
   const [isAllowed, checkIsLoading] = useAccessReview({
     group: model.apiGroup || '',
     resource: model.plural,
     verb: 'list',
   });
   const [resources, loaded, loadError] = useK8sWatchResource<R>(
     isAllowed
       ? {
           kind: referenceForModel(model),
           isList: true,
         }
       : undefined,
   );
 
-  return [resources || ([] as R), !checkIsLoading && loaded, loadError];
+  // When access denied or still checking, return empty array as loaded
+  const effectiveLoaded = !checkIsLoading && (isAllowed ? loaded : true);
+  return [resources || ([] as R), effectiveLoaded, isAllowed ? loadError : undefined];
 };

This makes it clearer that when access is denied, we're intentionally returning an empty array as "loaded" with no error.


963-973: Consider stricter owner reference matching by apiVersion.

The current logic matches owner references by kind only. While this works in practice (MachineSet/ControlPlaneMachineSet are unique to machine-api), stricter matching against apiVersion would be more robust:

💡 Optional enhancement for stricter matching
         const machineOwner = owner?.metadata.ownerReferences?.find(
           (ref) =>
-            ref.kind === MachineSetModel.kind || ref.kind === ControlPlaneMachineSetModel.kind,
+            (ref.kind === MachineSetModel.kind && 
+             ref.apiVersion?.startsWith(`${MachineSetModel.apiGroup}/`)) ||
+            (ref.kind === ControlPlaneMachineSetModel.kind && 
+             ref.apiVersion?.startsWith(`${ControlPlaneMachineSetModel.apiGroup}/`)),
         );

This is a minor hardening suggestion; the current implementation is functionally correct for the OpenShift machine-api ecosystem.


547-565: Consider resilient metric fetching for partial failures.

The current implementation uses Promise.all, meaning if any single metric query fails (e.g., the KubeVirt metric when the cluster has partial VMI issues), all node metrics fail to load.

For a more resilient UX, consider using Promise.allSettled to handle partial metric availability:

💡 Suggested improvement for resilience
-  return Promise.all(promises).then((data: any[]) => _.assign({}, ...data));
+  return Promise.allSettled(promises).then((results) => {
+    const successfulData = results
+      .filter((r): r is PromiseFulfilledResult<any> => r.status === 'fulfilled')
+      .map((r) => r.value);
+    return _.assign({}, ...successfulData);
+  });

This ensures node metrics still display even if the KubeVirt metric endpoint is temporarily unavailable. This is particularly relevant for clusters with intermittent Prometheus issues.
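A self-contained sketch of that `Promise.allSettled` approach is below. `fetchAllMetrics` is a hypothetical name; the real code builds its promises from Prometheus poll responses:

```typescript
type MetricMap = Record<string, number>;

// Merge whatever metric queries succeed and drop the failures, so one
// failing query (e.g. the KubeVirt series) does not blank out every
// node metric.
const fetchAllMetrics = (promises: Promise<MetricMap>[]): Promise<MetricMap> =>
  Promise.allSettled(promises).then((results) =>
    Object.assign(
      {},
      ...results
        .filter((r): r is PromiseFulfilledResult<MetricMap> => r.status === 'fulfilled')
        .map((r) => r.value),
    ),
  );
```

With `Promise.all`, the rejected query would have rejected the whole merge; here the surviving series still render.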


804-811: Minor: Redundant optional chaining.

Line 807 uses resource.machineConfigPool?.metadata.name but the preceding condition on line 806 already guards against !resource.machineConfigPool. The optional chaining is harmless but unnecessary:

🔧 Minor cleanup
     if (filters.machineConfigPools.length > 0) {
       if (
         !resource.machineConfigPool ||
-        !filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)
+        !filters.machineConfigPools.includes(resource.machineConfigPool.metadata.name)
       ) {
         return false;
       }
     }
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between f652608 and 1a3ed51.

📒 Files selected for processing (4)
  • frontend/packages/console-app/locales/en/console-app.json
  • frontend/packages/console-app/src/components/nodes/NodesPage.tsx
  • frontend/packages/console-shared/src/sorts/nodes.ts
  • frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/packages/console-shared/src/sorts/nodes.ts
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/public/actions/ui.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
  • MetricValuesByName (869-871)
🔇 Additional comments (10)
frontend/public/actions/ui.ts (1)

51-51: LGTM!

The runningVms metric field follows the established naming conventions and type patterns already present in NodeMetrics. Clean, additive change that integrates well with the existing metrics architecture.

frontend/packages/console-app/locales/en/console-app.json (1)

409-423: LGTM!

The new i18n entries follow the established patterns in this locale file. The display names are appropriately user-friendly ("Machine set" with space) while technical identifiers like "MachineConfigPool" remain as-is for consistency with K8s resource naming. Filter placeholders align with existing conventions.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (8)

926-930: LGTM - Solid feature gating for KubeVirt integration.

The dual check (plugin presence in consolePlugins array AND the KUBEVIRT_DYNAMIC flag) is appropriately defensive. This ensures VMs columns/metrics are only enabled when KubeVirt is genuinely active, preventing confusing empty columns or failed Prometheus queries.


965-968: LGTM - MachineConfigPool matching is correct.

Using LabelSelector.matches() with the MCP's nodeSelector is the proper approach. The || {} fallback handles MCPs without explicit nodeSelectors gracefully. The find behavior (returning first match) is appropriate since nodes typically belong to a single MCP.


1006-1008: Logic review: partial owner data scenario.

The ownerEnabled condition allows the "Machine set" column when at least one of machineSets/controlPlaneMachineSets loads successfully. However, if one fails and the other succeeds, users may see incomplete data:

  • MachineSet owners display correctly
  • ControlPlaneMachineSet owners show as DASH (or vice versa)

This could be confusing. Consider either:

  1. Requiring both to succeed: !machinesLoadError && !machineSetsLoadError && !controlPlaneMachineSetsLoadError
  2. Documenting this as acceptable degraded behavior

The current approach may be intentional (show what we can), but it's worth validating this UX decision with the team.


694-704: LGTM - Filter state initialization is correct.

Including machineOwners and machineConfigPools arrays in initialFilters regardless of feature enablement is the right approach. This ensures a stable filter state shape while the conditional filter UI handles visibility.


932-948: LGTM - Effect cleanup and dependencies are correct.

The metrics fetching effect properly:

  • Depends on isKubevirtPluginActive to re-fetch when KubeVirt availability changes
  • Cleans up the interval on unmount or dependency change
  • Uses a reasonable 30-second refresh interval

187-210: LGTM - Conditional column definitions are clean.

The spread-based conditional column inclusion is idiomatic and maintainable. Column properties (sort paths, modifiers) are correctly specified. The memoization dependencies at line 338 properly include all three flags.


984-992: Code handles access-denied resources correctly without blocking.

The useWatchResourcesIfAllowed hook (lines 858–876) returns loaded: true when access is denied because it passes undefined to useK8sWatchResource for unauthorized resources. This allows the loaded state to complete once all permission checks finish and core resources load, rather than blocking indefinitely on inaccessible machine-related resources.

The error handling properly gates machine owner and MCP columns via ownerEnabled and mcpEnabled flags (lines 1006–1011), which respect the load errors from those resources. The strategy is sound: permission-denied resources are treated as "loaded but unavailable," and their columns are gracefully hidden rather than failing the entire page render. The comment on line 991 documents this intentional behavior well.


462-473: getGroupVersionKindForResource works correctly with OwnerReference.

The function implementation only accesses apiVersion and kind via safe optional chaining, both of which are available on OwnerReference. While OwnerReference is a distinct type from K8sResourceCommon, the structural compatibility is sound, and TypeScript compilation confirms no type errors. No changes needed.
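For illustration, the structural requirement really is small: only `apiVersion` and `kind` are read. `parseGroupVersionKind` below is a hedged reimplementation of that parsing, not the SDK's actual helper:

```typescript
interface ApiVersionKind {
  apiVersion?: string;
  kind?: string;
}

// Split apiVersion into group/version; core resources ("v1") have no group.
// An OwnerReference satisfies this shape just as a full resource does.
const parseGroupVersionKind = (ref: ApiVersionKind) => {
  const [a, b] = (ref.apiVersion ?? '').split('/');
  return b !== undefined
    ? { group: a, version: b, kind: ref.kind }
    : { group: '', version: a, kind: ref.kind };
};
```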

@openshift-ci
Copy link
Contributor

openshift-ci bot commented Jan 14, 2026

@jeff-phillips-18: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Member

@rhamilto rhamilto left a comment


/lgtm

@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Jan 14, 2026
@jhadvig
Member

jhadvig commented Jan 15, 2026

QE Approver:
/assign @yapei
Docs Approver:
/assign @jseseCCS
PX Approver:
/assign @sferich888

@sg00dwin
Member

/jira refresh

@openshift-ci-robot
Contributor

openshift-ci-robot commented Jan 15, 2026

@sg00dwin: This pull request references CONSOLE-4943 which is a valid jira issue.

Details

In response to this:

/jira refresh


@sferich888
Copy link

/label px-approved

@openshift-ci openshift-ci bot added the px-approved Signifies that Product Support has signed off on this PR label Jan 15, 2026
@yapei
Contributor

yapei commented Jan 16, 2026

@XiyunZhao will work on PR testing

@XiyunZhao
Contributor

Hi @jeff-phillips-18, a small question: when a user only has watch permission (role: cloud-node-manager) on the Nodes page, VM list values are displayed as 0. I think displaying 0 is misleading; the VMs column should show - to indicate insufficient permissions.
This differs from the other resources (MachineSets, Pods, CPU/Memory, MachineConfigPools), which currently render - to indicate inaccessible data.
(screenshot: console4943_issue)

@jeff-phillips-18
Member Author

Hi @jeff-phillips-18, a small question, when a user only has watch permission (role: cloud-node-manager) on the Nodes page, VM list values are displayed as 0. Displaying 0 is misleading I think, the VM list should show - to indicate insufficient permissions Differs from other resource (MachineSets,Pods, CPU/Memory, MachineConfigPools), which render - to indicate inaccessible data now

@XiyunZhao OK, I'll add the permission checks.
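One minimal way to render the agreed-upon behavior — `DASH` and `formatVmCell` are hypothetical names, not the console's actual helpers:

```typescript
const DASH = '-';

// Render DASH rather than 0 when the metric is unavailable or the user
// lacks permission, so "no data" is distinguishable from "zero running VMs".
const formatVmCell = (canReadMetric: boolean, value?: number): string =>
  canReadMetric && value !== undefined ? String(value) : DASH;
```

With this shape, a permission-denied user sees `-` in the VMs column, consistent with the other columns noted above.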


@jseseCCS jseseCCS left a comment


Please carefully review my comments, but for expediency and because I trust you guys, I'll approve regardless.

"Kube-Proxy version": "Kube-Proxy version",
"Machine set": "Machine set",
"VMs": "VMs",
"MachineConfigPool": "MachineConfigPool",


Re this line and also 423: Strings "MachineConfigPool" and "Filter by MachineConfigPool" expose the internal API resource name directly in the UI, right? Please confirm whether this is intentionally user-facing. If not, update to a human-readable label like “Machine config pool” and ensure consistency with nearby labels like “Machine set” and “VMs”?

},
...(mcpEnabled
? [
{


thru 299: This column header uses internal resource name MachineConfigPool (title: t('console-app~MachineConfigPool')). Please confirm this term is intended to be user-facing. If not, replace with human-readable label (like “Machine config pool”) and keep it consistent with the related filter label/placeholder?

]
: []),
...(mcpEnabled
? [


thru 749: Filter title and placeholder expose internal resource name MachineConfigPool (title: t('console-app~MachineConfigPool'), placeholder: t('console-app~Filter by MachineConfigPool')), correct? Please confirm this is intentional. If not, update to a human-readable label and align it with column header wording?

modifier: 'nowrap',
},
},
...(ownerEnabled


to about 195: This column seems to show owning MachineSet or ControlPlaneMachineSet (id: nodeColumnInfo.machineOwner.id). Please confirm that “Machine set” accurately describes what users will see here. Or consider a more explicit label (“Owning machine set”?).

...(ownerEnabled
? [
<DataViewCheckboxFilter
key="machineOwners"


to 738: This filter corresponds to the owning MachineSet or ControlPlaneMachineSet? Please confirm that “Machine set” is the intended user-facing term for this ownership relationship and that it aligns with the column label.

},
]
: []),
...(vmsEnabled


Please confirm that “VMs” is the preferred user-facing abbreviation here (versus “Virtual machines”) and that it aligns with terminology used elsewhere in the console.

data={data}
loaded={loaded}
loadError={loadError}
ownerEnabled={


Columns and filters are disabled when related resources fail to load? Please confirm this is the intended UX, and that users receive an appropriate indication when filters/columns are unavailable because of load errors.

@openshift-ci
Contributor

openshift-ci bot commented Jan 16, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jeff-phillips-18, jseseCCS, rhamilto

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jseseCCS

/label docs-approved

@openshift-ci openshift-ci bot added the docs-approved Signifies that Docs has signed off on this PR label Jan 16, 2026

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. component/core Related to console core functionality component/shared Related to console-shared docs-approved Signifies that Docs has signed off on this PR jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. kind/i18n Indicates issue or PR relates to internationalization or has content that needs to be translated lgtm Indicates that a PR is ready to be merged. px-approved Signifies that Product Support has signed off on this PR

10 participants