CONSOLE-4943: Add Machine Set, MachineConfigPool, and VMs columns to Nodes list view #15909
Conversation
@jeff-phillips-18: This pull request references CONSOLE-4943, which is a valid Jira issue.

Warning: The referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@jeff-phillips-18: GitHub didn't allow me to request PR reviews from the following users: kybaker. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Walkthrough

Adds i18n entries and feature-gated node table extensions for MachineSet (owner), MachineConfigPool, and VMs; introduces conditional watches for Machine/MachineSet/ControlPlaneMachineSet/MachineConfigPool; extends node row types, filters, columns, and metrics; VM metrics/loading are gated by the KubeVirt flag.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 294-306: In NodesPage.tsx update the sort key for the
MachineConfigPool column: locate the column object created when mcpEnabled is
true (the entry using nodeColumnInfo.machineConfigPool.id) and fix the typo in
the sort string from "machineConfigPool.metadta.name" to
"machineConfigPool.metadata.name" so sorting works correctly for that column.
- Around line 204-215: The VM column sort key is wrong: in the node column
definition inside NodesPage (the object using nodeColumnInfo.vms.id) change the
sort property from 'vmCount' to 'vmsCount' to match the NodeRowItem field name
(vmsCount) so column sorting works correctly; update the sort string where the
VMs column is created in NodesPage to 'vmsCount'.
- Around line 798-802: The filter block in NodesPage.tsx uses
filters.machineConfigPools and directly accesses
resource.machineConfigPool.metadata.name but NodeRowItem marks
resource.machineConfigPool as optional, which can cause a TypeError; update the
condition to first guard that resource.machineConfigPool (and its metadata)
exists before checking .metadata.name (e.g., use optional chaining or an
explicit null check on resource.machineConfigPool) so resources without a
MachineConfigPool or CSR resources do not throw when applying
filters.machineConfigPools.
- Around line 971-979: The code sets nodeVMs to undefined when
isKubevirtPluginActive is false, then unconditionally accesses nodeVMs.length
causing a TypeError; update the return object in NodesPage.tsx so that vmsCount
is derived safely (use a conditional or nullish check on nodeVMs, e.g. vmsCount
= nodeVMs ? nodeVMs.length : 0 or vmsCount = nodeVMs?.length ?? 0) referencing
the nodeVMs variable and the vmsCount property in the returned object to avoid
calling .length on undefined.
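The null-safety fixes above share one idea: never dereference an optional field before guarding it. A standalone sketch of the guarded filter predicate and the safe count (the types are simplified stand-ins for the console's `NodeRowItem`, not the real interfaces):

```typescript
// Simplified stand-ins for the console's NodeRowItem fields (assumption:
// the real type carries many more fields; only the optional ones matter here).
type NodeRowItem = {
  machineConfigPool?: { metadata: { name: string } };
  vmsCount?: number;
};

// Guarded filter: rows without a MachineConfigPool (e.g. CSR rows) are
// dropped when the filter is active, instead of throwing a TypeError.
const passesMcpFilter = (resource: NodeRowItem, selected: string[]): boolean => {
  if (selected.length === 0) return true;
  const name = resource.machineConfigPool?.metadata.name;
  return name !== undefined && selected.includes(name);
};

// Safe count: an undefined VM list (KubeVirt plugin inactive) becomes 0.
const toVmsCount = (nodeVMs: unknown[] | undefined): number => nodeVMs?.length ?? 0;
```

The same `?? 0` / explicit-guard shape applies wherever an optional row property feeds a filter or a count.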
🧹 Nitpick comments (4)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (4)
985-993: Consider propagating additional resource load errors.

The `loadError` computation omits `machinesLoadError`, `machineSetsLoadError`, `controlPlaneMachineSetsLoadError`, and `machineConfigPoolsLoadError`. While `ownerEnabled`/`mcpEnabled` disable columns on error, users won't see why data is missing. For better UX, consider logging these errors or showing a degraded-state indicator when supplementary resources fail to load.

💡 Example enhancement

```diff
  const loadError =
    nodesLoadError || csrsLoadError || (isKubevirtPluginActive && vmisLoadError);
+
+ // Log supplementary resource errors for debugging
+ useEffect(() => {
+   if (machinesLoadError) console.warn('Failed to load Machines:', machinesLoadError);
+   if (machineSetsLoadError) console.warn('Failed to load MachineSets:', machineSetsLoadError);
+   if (controlPlaneMachineSetsLoadError)
+     console.warn('Failed to load ControlPlaneMachineSets:', controlPlaneMachineSetsLoadError);
+   if (machineConfigPoolsLoadError)
+     console.warn('Failed to load MachineConfigPools:', machineConfigPoolsLoadError);
+ }, [machinesLoadError, machineSetsLoadError, controlPlaneMachineSetsLoadError, machineConfigPoolsLoadError]);
```
975-978: Consider filtering ownerReferences by expected kind.

Taking `ownerReferences?.[0]` assumes the first reference is always the MachineSet or ControlPlaneMachineSet. If a Machine has multiple owners, this could display the wrong resource. Consider filtering by kind:

♻️ Proposed improvement

```diff
  return {
    ...node,
-   machineOwner: owner?.metadata.ownerReferences?.[0],
+   machineOwner: owner?.metadata.ownerReferences?.find(
+     (ref) => ref.kind === 'MachineSet' || ref.kind === 'ControlPlaneMachineSet',
+   ),
    machineConfigPool,
    vmsCount: nodeVMs?.length,
  };
```
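The kind-filtering approach can be checked in isolation; `OwnerReference` here is a pared-down local type, not the SDK import:

```typescript
// Pared-down local stand-in for the SDK's OwnerReference type.
type OwnerReference = { kind: string; name: string };

// Select the machine's controlling resource by kind instead of blindly
// taking ownerReferences[0], which may point at an unrelated owner.
const findMachineOwner = (refs: OwnerReference[] | undefined): OwnerReference | undefined =>
  refs?.find((ref) => ref.kind === 'MachineSet' || ref.kind === 'ControlPlaneMachineSet');
```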
664-677: Add null safety for optional array props.

`machineSets` and `controlPlaneMachineSets` are typed as optional (`?`) in `NodeListProps` (lines 571-572), but `.map()` is called directly without null checks. While the current call site always provides arrays from `useK8sWatchResource`, the type contract should be honored for maintainability.

♻️ Proposed fix

```diff
  const machineSetFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
      [
-       ...machineSets.map((machineSet) => ({
+       ...(machineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
-       ...controlPlaneMachineSets.map((machineSet) => ({
+       ...(controlPlaneMachineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
      ].sort((a, b) => a.label.localeCompare(b.label)),
    [machineSets, controlPlaneMachineSets],
  );
```

679-686: Apply the same null safety to `machineConfigPools`.

Similar to the previous comment, `machineConfigPools` is optional but `.map()` is called directly.

♻️ Proposed fix

```diff
  const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
-     machineConfigPools.map((machineConfigPool) => ({
+     (machineConfigPools ?? []).map((machineConfigPool) => ({
        value: machineConfigPool.metadata.name,
        label: machineConfigPool.metadata.name,
      })),
    [machineConfigPools],
  );
```
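Both nitpicks reduce to the same `(xs ?? []).map()` pattern. A self-contained option builder under that assumption (`DataViewFilterOption` is approximated by a local type):

```typescript
// Local approximation of DataViewFilterOption.
type FilterOption = { value: string; label: string };
type Named = { metadata: { name: string } };

// Tolerates undefined inputs (the optional props) and returns the merged,
// alphabetically sorted option list in one pass.
const buildFilterOptions = (...groups: (Named[] | undefined)[]): FilterOption[] =>
  groups
    .flatMap((group) =>
      (group ?? []).map((r) => ({ value: r.metadata.name, label: r.metadata.name })),
    )
    .sort((a, b) => a.label.localeCompare(b.label));
```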
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)
- frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (3): K8sGroupVersionKind (106), NodeKind (822-847), OwnerReference (29-36)
- frontend/packages/console-shared/src/sorts/nodes.ts (1): sortWithCSRResource (38-48)
- frontend/public/module/k8s/types.ts (4): MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765), MachineConfigPoolKind (842-850), MachineKind (711-728)
- frontend/packages/console-app/src/components/data-view/types.ts (1): ResourceFilters (5-8)
- frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1): useK8sWatchResource (28-69)
- frontend/public/models/index.ts (3): MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943)
- frontend/public/module/k8s/label-selector.js (1): LabelSelector (16-278)
🔇 Additional comments (3)

frontend/packages/console-app/locales/en/console-app.json (1)

409-423: LGTM!

New localization keys for "Machine set", "VMs", "MachineConfigPool", and their corresponding filter placeholders are correctly added. These align with the new columns and filters introduced in NodesPage.tsx.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)

101-106: Acknowledged temporary VMI integration.

The TODO comment correctly identifies this as a workaround until the kubevirt plugin can contribute its own column. The GVK definition is correct for `kubevirt.io/v1/VirtualMachineInstance`. Consider tracking this technical debt in an issue to ensure it gets addressed when the plugin extension mechanism supports column contributions.

466-477: `getGroupVersionKindForResource` is called with `OwnerReference`, but the function signature declares `K8sResourceCommon`.

`OwnerReference` (with required `name`, `uid`, `apiVersion`, `kind`) doesn't structurally match `K8sResourceCommon` (which has optional `apiVersion`, `kind`, and `metadata`). While the function implementation only accesses the `apiVersion` and `kind` fields, so it works at runtime, this violates the type contract and may trigger strict TypeScript checking. Consider either:

- Widening the function signature to accept types with `apiVersion` and `kind` properties, or
- Casting: `getGroupVersionKindForResource(machineOwner as K8sResourceCommon)`
Force-pushed e74ec0b to a620b03.
@jeff-phillips-18: This pull request references CONSOLE-4943, which is a valid Jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 2
🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)
664-686: Add defensive fallbacks for optional array props to prevent runtime errors.

The `machineSets`, `controlPlaneMachineSets`, and `machineConfigPools` props are typed as optional (`?`), but the code calls `.map()` on them directly. While `useK8sWatchResource` returns `[]` for unloaded lists, the type contract allows `undefined`, which would cause a TypeError.

♻️ Proposed defensive fix

```diff
  const machineSetFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
      [
-       ...machineSets.map((machineSet) => ({
+       ...(machineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
-       ...controlPlaneMachineSets.map((machineSet) => ({
+       ...(controlPlaneMachineSets ?? []).map((machineSet) => ({
          value: machineSet.metadata.name,
          label: machineSet.metadata.name,
        })),
      ].sort((a, b) => a.label.localeCompare(b.label)),
    [machineSets, controlPlaneMachineSets],
  );

  const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
-     machineConfigPools.map((machineConfigPool) => ({
+     (machineConfigPools ?? []).map((machineConfigPool) => ({
        value: machineConfigPool.metadata.name,
        label: machineConfigPool.metadata.name,
      })),
    [machineConfigPools],
  );
```
970-973: Consider memoizing LabelSelector instances to reduce allocations.

A new `LabelSelector` is instantiated for every MCP during each node iteration, resulting in O(nodes × MCPs) object allocations on every render cycle. For large clusters this could impact performance.

♻️ Pre-compute MCP selectors outside the node loop

```diff
+ const mcpSelectors = useMemo(
+   () =>
+     machineConfigPools.map((mcp) => ({
+       mcp,
+       selector: new LabelSelector(mcp.spec?.nodeSelector || {}),
+     })),
+   [machineConfigPools],
+ );

  const data = useMemo(() => {
    // ...
    return [
      ...csrBundle,
      ...nodes.map((node) => {
        // ...
-       const machineConfigPool = machineConfigPools.find((mcp) => {
-         const labelSelector = new LabelSelector(mcp.spec.nodeSelector || {});
-         return labelSelector.matches(node);
-       });
+       const machineConfigPool = mcpSelectors.find(({ selector }) =>
+         selector.matches(node),
+       )?.mcp;
        // ...
      }),
    ];
- }, [csrs, nodes, machines, machineConfigPools, vmis, isKubevirtPluginActive]);
+ }, [csrs, nodes, machines, mcpSelectors, vmis, isKubevirtPluginActive]);
```
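The memoization idea, stripped of React and of the console's `LabelSelector` class (`matchesLabels` is a simplified equality-only matcher, an assumption for illustration):

```typescript
type Labels = Record<string, string>;
type Mcp = { name: string; nodeSelector: Labels };
type NodeLike = { name: string; labels: Labels };

// Simplified matcher: every selector key must equal the node label.
// (The real LabelSelector also supports match expressions; omitted here.)
const matchesLabels = (selector: Labels, labels: Labels): boolean =>
  Object.entries(selector).every(([k, v]) => labels[k] === v);

// Build the (mcp, selector) pairs once when the MCP list changes...
const precomputeSelectors = (mcps: Mcp[]) =>
  mcps.map((mcp) => ({ mcp, selector: mcp.nodeSelector }));

// ...so each node lookup reuses them instead of re-parsing per node.
const findMcpForNode = (
  pairs: ReturnType<typeof precomputeSelectors>,
  node: NodeLike,
): Mcp | undefined => pairs.find(({ selector }) => matchesLabels(selector, node.labels))?.mcp;
```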
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (6)
- frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1): getGroupVersionKindForResource (413)
- frontend/public/module/k8s/types.ts (4): MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765), MachineConfigPoolKind (842-850), MachineKind (711-728)
- frontend/packages/console-app/src/components/data-view/types.ts (1): ResourceFilters (5-8)
- frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1): useK8sWatchResource (28-69)
- frontend/public/models/index.ts (4): MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943), MachineConfigPoolModel (962-977)
- frontend/public/module/k8s/label-selector.js (1): LabelSelector (16-278)
🔇 Additional comments (7)

frontend/packages/console-app/locales/en/console-app.json (1)

409-423: LGTM — i18n keys correctly added for new Node columns and filters.

The translation keys align with the UI elements introduced in NodesPage.tsx. Capitalization is consistent with existing conventions (e.g., `MachineConfigPool` matching line 81).

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (6)

101-106: Reasonable temporary integration for KubeVirt VMI awareness.

The GVK definition is correct for `kubevirt.io/v1/VirtualMachineInstance`. The TODO comment clearly signals this should be refactored once the kubevirt-plugin can contribute the column directly via extension points.

1011-1018: Conditional column enablement provides reasonable graceful degradation.

The logic correctly enables columns based on resource availability:

- `ownerEnabled` requires machines and at least one of MachineSets/ControlPlaneMachineSets
- `mcpEnabled` requires both machines and MCPs
- `vmsEnabled` gates on the KubeVirt feature flag

This is a solid UX pattern for optional functionality.

904-917: KubeVirt integration follows correct patterns.

The double gate (`KUBEVIRT_DYNAMIC` flag + plugin presence check) is appropriately defensive. Conditionally passing `undefined` to `useK8sWatchResource` is the standard pattern to skip unnecessary watches.

167-248: Column definitions cleanly implement feature-flagged columns.

The conditional spreading pattern for `machineOwner`, `vms`, and `machineConfigPool` columns is idiomatic. Moving Roles to `additional: true` aligns with the updated default column set in the PR description.

466-490: Cell rendering correctly uses dynamic GVK resolution for ResourceLinks.

Using `getGroupVersionKindForResource()` is the right approach for rendering links to owner references and MCPs where the exact GVK is determined at runtime.

836-848: Type definitions accurately model the extended node data and filter state.

The optional fields on `NodeRowItem` and the new filter arrays in `NodeFilters` correctly reflect the runtime data model.
Force-pushed a620b03 to 602755a.
@jeff-phillips-18: This pull request references CONSOLE-4943, which is a valid Jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 664-686: The useMemo hooks assume machineSets,
controlPlaneMachineSets, and machineConfigPools are always arrays but they are
optional in NodeListProps; update the two hooks (machineSetFilterOptions and
machineConfigPoolFilterOptions) to defensively handle undefined by using a
fallback empty array when mapping (e.g., use (machineSets ?? []) and
(controlPlaneMachineSets ?? []) and (machineConfigPools ?? []) before .map()),
so the map and sort operations never run on undefined and no runtime error
occurs.
🧹 Nitpick comments (1)

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)

904-908: Consider a defensive check for `window.SERVER_FLAGS`.

The `Array.isArray(window.SERVER_FLAGS.consolePlugins)` check is good, but `window.SERVER_FLAGS` itself could theoretically be undefined in certain environments (unit tests, SSR edge cases). A more defensive approach would prevent potential test failures.

💡 Optional: Add null check for SERVER_FLAGS

```diff
  const isKubevirtPluginActive =
-   Array.isArray(window.SERVER_FLAGS.consolePlugins) &&
-   window.SERVER_FLAGS.consolePlugins.includes('kubevirt-plugin') &&
+   Array.isArray(window.SERVER_FLAGS?.consolePlugins) &&
+   window.SERVER_FLAGS?.consolePlugins.includes('kubevirt-plugin') &&
    kubevirtFeature;
```
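Extracting the check into a pure helper makes the optional-chaining fix easy to unit-test; `WindowLike` below is a stand-in for the browser global, not a console type:

```typescript
// Stand-in for the slice of the window global the check touches.
type WindowLike = { SERVER_FLAGS?: { consolePlugins?: unknown } };

// Safe even when SERVER_FLAGS is entirely absent (unit tests, SSR):
// the optional chain yields undefined and Array.isArray rejects it.
const isPluginActive = (win: WindowLike, plugin: string, featureFlag: boolean): boolean => {
  const plugins = win.SERVER_FLAGS?.consolePlugins;
  return Array.isArray(plugins) && plugins.includes(plugin) && featureFlag;
};
```

Reading the property into a local also lets TypeScript narrow `plugins` to an array after the `Array.isArray` guard, avoiding repeated optional chains.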
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (8)
- frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1): K8sGroupVersionKind (106)
- frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1): getGroupVersionKindForResource (413)
- frontend/public/module/k8s/types.ts (3): MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765), MachineConfigPoolKind (842-850)
- frontend/packages/console-app/src/components/data-view/ConsoleDataView.tsx (1): initialFiltersDefault (30)
- frontend/packages/console-app/src/components/data-view/types.ts (1): ResourceFilters (5-8)
- frontend/public/models/index.ts (4): MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943), MachineConfigPoolModel (962-977)
- frontend/packages/console-dynamic-plugin-sdk/src/utils/flags.ts (1): useFlag (11-12)
- frontend/public/module/k8s/label-selector.js (1): LabelSelector (16-278)
🔇 Additional comments (8)

frontend/packages/console-app/locales/en/console-app.json (1)

409-423: LGTM! New i18n keys are well-structured.

The new translation keys for Machine set, VMs, MachineConfigPool, and associated filter placeholders follow existing conventions and align with the NodesPage.tsx usage. Good to go.

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)

101-106: Acknowledged: Temporary VMI GVK definition.

The hardcoded `VirtualMachineInstance` GroupVersionKind with the TODO note is a reasonable interim approach. Once the KubeVirt plugin can contribute the VMs column via extension points, this can be removed. The GVK values are correct for the kubevirt.io API.

115-132: LGTM! Column info entries are well-defined.

The new column identifiers (`machineOwner`, `vms`, `machineConfigPool`) and the repositioned `role` column follow the existing pattern. These IDs correctly map to the cell rendering logic in `getNodeDataViewRows`. Also applies to: 148-150.

167-171: LGTM! Conditional column rendering is well-implemented.

The feature-flagged column pattern using conditional spreading (`...(flag ? [...] : [])`) is clean and maintains type safety. The sort paths correctly reference the augmented `NodeRowItem` properties. The `additional: true` marking on secondary columns aligns with the column-management UX pattern. Also applies to: 192-215, 240-248, 294-306, 343.

792-802: LGTM! Filter logic handles optional properties correctly.

The optional chaining (`resource.machineOwner?.name`, `resource.machineConfigPool?.metadata.name`) ensures nodes without these associations are safely filtered out when the respective filters are active. This is the expected UX behavior.

994-1003: Graceful degradation pattern is well-designed.

The separation of concerns here is solid: machine-related load errors disable the associated columns/filters rather than failing the entire Nodes page. The explicit comment on line 1002 clarifies the intent.

One consideration: the `loaded` state waits for all resources including Machines/MachineSets/MCPs even when they'll be hidden due to errors. If these APIs are slow or unavailable on certain cluster configurations (e.g., non-OCP clusters), the initial load could be delayed. This is likely acceptable for the target environment (OpenShift Console).

1017-1024: Consider: Does `mcpEnabled` need to depend on `machinesLoadError`?

The `mcpEnabled` condition includes `!machinesLoadError`, but the MachineConfigPool association (lines 970-973) uses only `LabelSelector` matching against node labels; it doesn't require the Machines API. If Machines fail to load but MachineConfigPools load successfully, the MCP column would be unnecessarily hidden.

Was the `machinesLoadError` dependency intentional for `mcpEnabled`? If not:

💡 Potential fix

```diff
- mcpEnabled={!machinesLoadError && !machineConfigPoolsLoadError}
+ mcpEnabled={!machineConfigPoolsLoadError}
```

970-973: No action needed — empty `nodeSelector` handling is correct.

`LabelSelector` with an empty selector `{}` returns `false` when matching nodes, not `true`. When `mcp.spec.nodeSelector` is undefined, the fallback to `{}` results in zero conjuncts, causing `isEmpty()` to return `true`. Since `emptySelectsAll` defaults to `false`, the selector correctly returns `false` for all nodes. MCPs without explicit node selectors will not incorrectly match any nodes.

The `.find()` returning only the first matching MCP is acceptable for typical MachineConfigPool association patterns.
Force-pushed 602755a to 4581817.
This part looks good, we might need to modify the default columns later.
rhamilto left a comment:
Nice work, @jeff-phillips-18! Claude highlighted a few areas to address.
On the machineOwners filter:

```tsx
if (filters.machineOwners.length > 0) {
  if (!filters.machineOwners.includes(resource.machineOwner?.name)) {
```

Claude suggests the following to fix a potential null reference:

```diff
- if (!filters.machineOwners.includes(resource.machineOwner?.name)) {
+ if (!resource.machineOwner || !filters.machineOwners.includes(resource.machineOwner.name)) {
```
On the machineConfigPools filter:

```tsx
if (filters.machineConfigPools.length > 0) {
  if (!filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)) {
```

Claude suggests the following to fix a potential null reference:

```diff
- if (!filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)) {
+ if (!filters.machineConfigPools || !filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)) {
```
On building the row data:

```tsx
const loadError = nodesLoadError || csrsLoadError;
return [
  ...csrBundle,
  ...nodes.map((node) => {
```

Claude highlights a performance concern that seems worth addressing (NodesPage.tsx:964-990):
```tsx
...nodes.map((node) => {
  const [machineName] = node ? getNodeMachineNameAndNamespace(node) : ['', ''];
  const owner = machines.find((machine) => machine.metadata.name === machineName);
  const machineConfigPool = machineConfigPools.find((mcp) => {
    const labelSelector = new LabelSelector(mcp.spec.nodeSelector || {});
    return labelSelector.matches(node);
  });
  const nodeVMs = isKubevirtPluginActive
    ? vmis.filter((vmi) => vmi.status?.nodeName === node.metadata.name)
    : undefined;
```

This does O(n*m) operations for each node:

- Finding the machine by name: O(m) for each node
- Finding the MCP with label matching: O(m) with nested matching logic
- Filtering VMIs: O(v) for each node

For large clusters, consider building lookup maps first:

```tsx
const machineByName = useMemo(
  () => new Map(machines.map((m) => [m.metadata.name, m])),
  [machines],
);

const vmisByNode = useMemo(() => {
  const map = new Map();
  vmis.forEach((vmi) => {
    const nodeName = vmi.status?.nodeName;
    if (nodeName) {
      map.set(nodeName, (map.get(nodeName) || []).concat(vmi));
    }
  });
  return map;
}, [vmis]);
```
I have made the suggested changes for machines.
I found a better way to get the VMs count via prometheus rather than fetching VMIs and matching by node name.
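Though the author moved to a Prometheus query for VM counts, the lookup-map pattern Claude proposed is broadly useful. A React-free sketch of the VMI grouping (the `Vmi` type is a minimal stand-in, not the kubevirt type):

```typescript
// Minimal stand-in for a VirtualMachineInstance resource.
type Vmi = { name: string; status?: { nodeName?: string } };

// One O(v) pass builds a node -> VMIs index, replacing the per-node
// filter that made row building O(nodes x vmis).
const indexVmisByNode = (vmis: Vmi[]): Map<string, Vmi[]> => {
  const map = new Map<string, Vmi[]>();
  for (const vmi of vmis) {
    const nodeName = vmi.status?.nodeName;
    if (!nodeName) continue; // unscheduled VMIs have no node yet
    const bucket = map.get(nodeName) ?? [];
    bucket.push(vmi);
    map.set(nodeName, bucket);
  }
  return map;
};
```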
Force-pushed 4581817 to 6b123bb.
@jeff-phillips-18: This pull request references CONSOLE-4943, which is a valid Jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 949-951: The machinesByName Map keys only use m.metadata.name
which can collide across namespaces; update the useMemo that creates
machinesByName to key each entry by a namespaced identifier like
`${m.metadata.namespace}/${m.metadata.name}` (refer to the machinesByName
variable and its useMemo), and then update any places that look up machines from
this Map (e.g., the data lookup that currently uses just the name) to construct
and use the same `${namespace}/${name}` key so machines in different namespaces
are distinguished.
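The namespaced-key fix prevents silent collisions; a minimal sketch of the idea (field shapes are assumptions based on the review, not the console's actual `MachineKind`):

```typescript
// Minimal stand-in for a namespaced Machine resource.
type MachineLike = { metadata: { name: string; namespace?: string } };

// `${namespace}/${name}` distinguishes same-named Machines across
// namespaces; a bare name key would silently overwrite one of them.
const machineKey = (namespace: string | undefined, name: string): string =>
  `${namespace ?? ''}/${name}`;

const indexMachines = (machines: MachineLike[]): Map<string, MachineLike> =>
  new Map(machines.map((m) => [machineKey(m.metadata.namespace, m.metadata.name), m]));
```

Lookups must then build the key the same way from the node's machine annotation (namespace and name together) so both sides of the map agree.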
- Around line 984-992: The machine-related watches in NodesPage.tsx must be
gated by RBAC like the CSRs pattern: add useAccessReview checks for the
Machines, MachineSets, ControlPlaneMachineSets, and MachineConfigPools resources
and, where you currently call useK8sWatchResource for those resources, pass
undefined when access is denied (same approach used around the csrs/CSR
accessReview and useK8sWatchResource calls). This ensures useK8sWatchResource
returns loaded=true when the user lacks access so machinesLoaded,
machineSetsLoaded, controlPlaneMachineSetsLoaded, and machineConfigPoolsLoaded
won’t block the overall loaded flag.
🧹 Nitpick comments (2)

frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)

373: Consider: Type consistency for the `runningVMs` default.

Line 373 defaults to `'0'` (string) while the metric value is a number. This works for display, but consider using `0` (number) and letting React coerce it, or explicitly `String(runningVMs)` for clarity. Minor nit. Also applies to: 480-482.

679-686: Minor: Consider sorting MachineConfigPool filter options.

`machineSetFilterOptions` is sorted alphabetically (line 675), but `machineConfigPoolFilterOptions` is not. Consider adding `.sort((a, b) => a.label.localeCompare(b.label))` for consistency.

♻️ Suggested fix

```diff
  const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
    () =>
      machineConfigPools.map((machineConfigPool) => ({
        value: machineConfigPool.metadata.name,
        label: machineConfigPool.metadata.name,
-     })),
+     })).sort((a, b) => a.label.localeCompare(b.label)),
    [machineConfigPools],
  );
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
- frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/public/actions/ui.ts (1)
- frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1): MetricValuesByName (869-871)
🔇 Additional comments (13)
frontend/public/actions/ui.ts (1)
43-52: LGTM! Clean type extension for VM metrics.The
runningVmsfield follows the establishedMetricValuesByNamepattern used by other node metrics. This additive change maintains backward compatibility.frontend/packages/console-app/src/components/nodes/NodesPage.tsx (12)
42-60: LGTM! Well-organized imports for machine-related functionality.The new model and kind imports are properly grouped and all appear to be utilized in the implementation below.
106-141: Clean column info structure for new columns.The
machineOwner,vms, andmachineConfigPoolcolumn definitions follow the established pattern.
158-162: Good approach with feature flags for conditional columns.The hook signature change to accept boolean flags enables clean conditional rendering of feature-gated columns. This aligns well with the KubeVirt plugin activation pattern.
231-239: Verify: Roles column demoted to additional.The
additional: trueflag means Roles will no longer be shown by default, aligning with the PR objective stating default columns are: Name, Status, Machine set, VMs, Pods, Memory, CPU. Users can still enable it via column management.
470-479: LGTM! Correct handling for cluster-scoped MachineConfigPool.MachineConfigPool is cluster-scoped, so omitting the
namespaceprop onResourceLinkis correct.
566-598: LGTM! Well-structured props for feature flags.Clean separation of concerns with boolean enable flags and corresponding data arrays. Default empty arrays prevent undefined access errors.
792-803: LGTM! Filter logic handles optional properties correctly.The optional chaining ensures CSR resources (which lack
machineOwnerandmachineConfigPool) are properly handled. They'll be filtered out when these filters are active, which is appropriate since CSRs don't have machine associations.
904-908: Good: Dual-check for KubeVirt activation. Requiring both the KUBEVIRT_DYNAMIC feature flag AND plugin presence in consolePlugins is a resilient pattern that prevents edge cases where one condition is true but not the other.
953-982: LGTM! Correct association logic for machine ownership and MCPs. The LabelSelector matching for MachineConfigPools and ownerReferences traversal for machine ownership are correct approaches. The O(n×m) complexity is acceptable given typical cluster sizes have few MCPs.
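The association logic being reviewed can be sketched in isolation. The snippet below is an illustrative reimplementation with simplified stand-in types and matchLabels-only matching — the console's real LabelSelector class also supports matchExpressions:

```typescript
// Minimal sketch of matching a node against MachineConfigPool nodeSelectors.
// Types are simplified stand-ins, not the console's real NodeKind/MCP kinds.
type NodeLike = { metadata: { labels?: Record<string, string> } };
type PoolLike = {
  metadata: { name: string };
  spec: { nodeSelector?: { matchLabels?: Record<string, string> } };
};

const selectorMatches = (
  selector: { matchLabels?: Record<string, string> } | undefined,
  node: NodeLike,
): boolean => {
  const labels = node.metadata.labels ?? {};
  // Every matchLabels entry must exist on the node with the same value.
  return Object.entries(selector?.matchLabels ?? {}).every(
    ([key, value]) => labels[key] === value,
  );
};

const findPoolForNode = (pools: PoolLike[], node: NodeLike): PoolLike | undefined =>
  pools.find((pool) => selectorMatches(pool.spec.nodeSelector, node));
```

As in Kubernetes label-selector semantics, an empty matchLabels map matches every node, and a node lacking any required label fails the match.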
1006-1012: LGTM! Graceful degradation for machine API access. The boolean logic correctly enables features only when required APIs are accessible. The OR condition for machineSetsLoadError || controlPlaneMachineSetsLoadError allows the Machine Set column to show even if one of the machine set APIs fails.
543-550: No action needed — KubeVirt metric correctly exposes node labels. The kubevirt_vmi_phase_count metric includes a node label per KubeVirt's metric specification, so the query correctly returns per-node VM counts. The code's use of data.metric.instance || data.metric.node on line 557 will properly key results by node.
458-469: No issue — getGroupVersionKindForResource correctly handles OwnerReference. The machineOwner is an OwnerReference from metadata.ownerReferences, which has both apiVersion and kind fields. The helper only extracts these two fields, so it works seamlessly with OwnerReference objects. The implementation is sound.
Force-pushed from 6b123bb to 49e3cb5
@jeff-phillips-18: This pull request references CONSOLE-4943 which is a valid jira issue.
Actionable comments posted: 0
🧹 Nitpick comments (3)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (3)
679-686: Consider sorting MachineConfigPool filter options for consistency.
machineSetFilterOptions sorts the combined list alphabetically (line 675), but machineConfigPoolFilterOptions does not. For consistent UX across filters, consider adding .sort((a, b) => a.label.localeCompare(b.label)).
♻️ Optional: Add sorting to MCP filter options

     const machineConfigPoolFilterOptions = useMemo<DataViewFilterOption[]>(
       () =>
         machineConfigPools.map((machineConfigPool) => ({
           value: machineConfigPool.metadata.name,
           label: machineConfigPool.metadata.name,
    -    })),
    +    })).sort((a, b) => a.label.localeCompare(b.label)),
       [machineConfigPools],
     );
839-843: vmsCount is declared but never populated on row items. The NodeRowItem type includes vmsCount?: number, but the data mapping in useMemo (lines 961-984) doesn't actually set this field. VM counts are retrieved from nodeMetrics.runningVms in getNodeDataViewRows instead. Consider removing vmsCount from the type if it's intentionally not used on the row object, or document this design choice.
♻️ Remove unused vmsCount from NodeRowItem

     type NodeRowItem = (NodeKind | NodeCertificateSigningRequestKind) & {
       machineOwner?: OwnerReference;
       machineConfigPool?: MachineConfigPoolKind;
    -  vmsCount?: number;
     };
1009-1016: Feature enablement logic is correct but could use a clarifying comment. The boolean logic for ownerEnabled ensures the column/filter appears when machines load successfully AND at least one owner resource type (MachineSets OR ControlPlaneMachineSets) is available. This is correct—clusters may have only one type of machine set. Consider adding a brief inline comment explaining this OR logic for future maintainers.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
- frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/public/actions/ui.ts
🔇 Additional comments (15)
frontend/packages/console-app/locales/en/console-app.json (2)
409-411: LGTM! New i18n keys for node columns are well-placed. These localization entries for Machine set, VMs, and MachineConfigPool follow existing conventions and are positioned near related node terminology. Good to see i18n being handled properly from the start.
422-423: Filter labels follow established patterns. The filter placeholder strings Filter by machine set and Filter by MachineConfigPool are consistent with existing filter patterns in this file (e.g., Filter by status, Filter by roles).
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (13)
42-48: Model imports align with the feature scope. Importing MachineModel, MachineConfigPoolModel, MachineSetModel, and ControlPlaneMachineSetModel is appropriate for the new column data requirements. These are well-established OpenShift API models.
106-123: Column info structure follows established patterns. The new machineOwner, vms, and later machineConfigPool entries in nodeColumnInfo are well-structured. The role column being marked as additional: true in useNodesColumns aligns with the PR objective of updating default columns.
158-162: Clean feature-flag-driven column composition. The useNodesColumns signature accepting ownerEnabled, mcpEnabled, and vmsEnabled is a solid approach for conditional column rendering. This keeps the UI clean when backing resources aren't available or features are disabled.
470-479: MachineConfigPool cell correctly handles cluster-scoped resource. Good attention to detail—MachineConfigPool is cluster-scoped, so no namespace prop is passed to ResourceLink. Using getGroupVersionKindForResource(machineConfigPool) on the full resource object is the right approach here.
688-698: Filter initialization is properly structured. The initialFilters correctly initializes machineOwners and machineConfigPools as empty arrays, ensuring no filters are pre-selected. Good use of spreading initialFiltersDefault for base properties.
792-805: Filter matching logic is correct. The machineOwners and machineConfigPools filter logic properly handles cases where the resource lacks these associations. Note: Line 801 has resource.machineConfigPool?.metadata.name but the outer condition already confirms resource.machineConfigPool exists—the optional chaining is harmless but redundant.
907-911: Robust KubeVirt plugin detection. The three-way check (Array.isArray, includes, and feature flag) is appropriately defensive. This ensures VM columns/metrics only appear when KubeVirt is both installed and enabled, avoiding UI confusion on clusters without virtualization support.
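The gate being described can be reduced to a small predicate. This is an illustrative sketch — the plugin name and helper function are hypothetical stand-ins, not the console's exact API:

```typescript
// Sketch of the dual-check: VM columns are enabled only when the KubeVirt
// plugin appears in the enabled console plugins list AND its feature flag is
// on. KUBEVIRT_PLUGIN and isKubevirtActive are made-up names for illustration.
const KUBEVIRT_PLUGIN = 'kubevirt-plugin';

const isKubevirtActive = (consolePlugins: unknown, flagEnabled: boolean): boolean =>
  Array.isArray(consolePlugins) &&
  consolePlugins.includes(KUBEVIRT_PLUGIN) &&
  flagEnabled;
```

The Array.isArray guard means a missing or malformed plugin list simply disables the feature instead of throwing at render time.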
961-985: Data composition is well-structured. The augmentation logic correctly:
- Uses Map for O(1) machine lookups
- Matches MCPs using LabelSelector.matches(node)
- Finds owner references filtering for MachineSet or ControlPlaneMachineSet kinds

The O(nodes × MCPs) complexity for MCP matching is acceptable for typical cluster sizes.
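The machine-lookup steps described above can be sketched with simplified stand-in types (not the console's real kinds):

```typescript
// Sketch of the machine-ownership lookup: index machines by name once for
// O(1) access, then pick the MachineSet/ControlPlaneMachineSet owner
// reference. OwnerRef and MachineLike are simplified illustrative types.
type OwnerRef = { kind: string; name: string };
type MachineLike = { metadata: { name: string; ownerReferences?: OwnerRef[] } };

const indexByName = (machines: MachineLike[]): Map<string, MachineLike> =>
  new Map(machines.map((m) => [m.metadata.name, m]));

const findMachineOwner = (machine?: MachineLike): OwnerRef | undefined =>
  machine?.metadata.ownerReferences?.find(
    (ref) => ref.kind === 'MachineSet' || ref.kind === 'ControlPlaneMachineSet',
  );
```

Building the Map once turns a per-node array scan into a constant-time lookup, which is what keeps the node-list render cheap as clusters grow.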
987-995: Intentional graceful degradation for machine-related errors. The comment at line 994 clarifies the design: machine load errors hide columns/filters rather than blocking the entire page. This is a good UX pattern—users can still see nodes even if machine-related data isn't available. The ownerEnabled and mcpEnabled props correctly reflect this at lines 1009-1014.
895-929: Six concurrent K8s watches may impact performance on large clusters. The page now maintains watches for Nodes, Machines, MachineSets, ControlPlaneMachineSets, MachineConfigPools, and CSRs. While this is a common pattern, consider:
- Could any watches be made conditional based on feature flags (e.g., skip MCP watch if feature is known-disabled)?
- For very large clusters, this increases memory footprint from held resource lists.
This isn't a blocker—the pattern is established in console—but worth monitoring if users report sluggishness on large deployments.
933-949: Metrics polling implementation is solid. The 30-second interval with proper cleanup prevents memory leaks. The dependency on isKubevirtPluginActive ensures VM metrics are fetched when the feature becomes available. Good defensive error handling with the console.error fallback.
458-469: No action needed—OwnerReference is fully compatible with getGroupVersionKindForResource. The OwnerReference type includes apiVersion and kind as required fields, matching exactly what getGroupVersionKindForResource expects. The utility safely returns undefined if either property is missing. Additionally, the code already guards against null/undefined machineOwner with the conditional rendering (lines 460–468), falling back to DASH. This is sound.
543-550: The query is correct as-is. KubeVirt's kubevirt_vmi_phase_count metric natively exposes a node label for each VMI, so the reducer at line 561 will correctly receive data.metric.node without requiring an explicit node selector in the query. This is the right design: unlike kubelet_running_pods (which needs sum by(node) aggregation), KubeVirt metrics already carry per-VMI granularity with node attribution included.
Likely an incorrect or invalid review comment.
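As a side note, the shape of that reduction — turning a Prometheus instant-query vector into a per-node value map — can be sketched standalone. The sample shape below is the standard Prometheus vector response; the `instance || node` fallback mirrors the pattern discussed in the review:

```typescript
// Sketch of reducing a Prometheus instant-query vector into the
// { [nodeName]: value } shape used for per-node metrics.
type PromSample = {
  metric: { node?: string; instance?: string };
  value: [number, string]; // [timestamp, stringified number]
};

const toValuesByName = (result: PromSample[]): Record<string, number> =>
  result.reduce<Record<string, number>>((acc, data) => {
    // Prefer the instance label, fall back to node; skip unlabeled samples.
    const name = data.metric.instance || data.metric.node;
    if (name) {
      acc[name] = Number(data.value[1]);
    }
    return acc;
  }, {});
```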
Force-pushed from 49e3cb5 to 70ab099
Actionable comments posted: 0
🧹 Nitpick comments (1)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (1)
856-874: Consider improving type safety for the empty array fallback. The hook logic is correct—it properly gates resource watching behind RBAC checks. The !checkIsLoading && loaded ensures we wait for the access review before reporting loaded state.
The type assertion [] as R on line 873 works because R extends K8sResourceCommon[], but a slightly safer pattern would be to use a type predicate or simply return the empty array without assertion since TypeScript infers correctly for array types.
♻️ Optional: Minor type improvement

    - return [resources || ([] as R), !checkIsLoading && loaded, loadError];
    + return [(resources ?? []) as R, !checkIsLoading && loaded, loadError];

Or if you want to avoid the assertion entirely:

    const useWatchResourcesIfAllowed = <R extends K8sResourceCommon>(
      model: K8sModel,
    ): [R[], boolean, unknown] => {
      // ... implementation
      return [resources || [], !checkIsLoading && loaded, loadError];
    };
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
- frontend/packages/console-shared/src/sorts/nodes.ts
- frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/public/actions/ui.ts
- frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
NodeKind(822-847)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (9)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
  nodeRunningVms (27-28)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (4)
  MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765), MachineConfigPoolKind (842-850), MachineKind (711-728)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (5)
  NodeKind (822-847), NodeCertificateSigningRequestKind (865-867), OwnerReference (29-36), K8sResourceCommon (67-71), CertificateSigningRequestKind (849-863)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/api/common-types.ts (1)
  K8sModel (33-60)
frontend/public/models/index.ts (4)
  MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943), MachineConfigPoolModel (962-977)
frontend/packages/console-dynamic-plugin-sdk/src/utils/flags.ts (1)
  useFlag (11-12)
frontend/packages/console-app/src/components/nodes/csr.ts (1)
  getNodeClientCSRs (36-75)
🔇 Additional comments (12)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
27-28: LGTM! Clean addition that follows the established pattern for metric accessors in this file (e.g., nodeCPU, nodePods). Consistent with the new runningVms metric being wired up in ui.ts.
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (11)
103-158: LGTM! Column info structure is well-organized and consistent with existing patterns. The new machineOwner, vms, and machineConfigPool entries follow the established convention.
162-340: LGTM! The conditional column inclusion pattern using spread operators is clean and the useMemo dependencies correctly track the feature flags. This approach properly hides columns when their backing resources aren't accessible.
924-928: LGTM! Good defensive check for KubeVirt integration. The Array.isArray guard prevents runtime errors if consolePlugins is undefined, and combining with the feature flag ensures the feature is properly gated.
963-971: LGTM on MCP matching logic. Using LabelSelector.matches() is the correct approach for determining which MachineConfigPool applies to a node based on its labels. The owner reference lookup correctly filters for MachineSet and ControlPlaneMachineSet kinds.
1004-1011: LGTM on feature gating logic. The ownerEnabled condition correctly allows the column when machines load AND at least one of MachineSet or ControlPlaneMachineSet is accessible. The mcpEnabled and vmsEnabled conditions are straightforward. This provides graceful degradation when RBAC restricts access to certain resources.
668-690: LGTM on filter options construction. Combining MachineSets and ControlPlaneMachineSets into a single sorted list for the filter dropdown provides a consistent UX. The alphabetical sort is a nice touch for usability.
982-990: LGTM on error handling strategy. The design to exclude machine-related errors from loadError while hiding their columns/filters is a solid resilience pattern. Users can still view core node information even if they lack RBAC access to machine resources.
843-854: LGTM on type definitions. The NodeRowItem intersection type cleanly extends the base node types with optional machine-related fields. The NodeFilters extension properly models the new filter state.
190-202: All i18n keys for new columns and filters are properly defined. The locale strings—Machine set, VMs, MachineConfigPool, Filter by machine set, and Filter by MachineConfigPool—are all present in frontend/packages/console-app/locales/en/console-app.json and translated across all supported locales (English, Spanish, French, Japanese, Korean, and Chinese). The namespace prefix convention (console-app~) is correctly applied.
547-554: Metric verification confirmed. The kubevirt_vmi_phase_count{phase='running'} metric is correct per KubeVirt documentation, which confirms it reports VMI counts per phase and node. The code defensively accesses both instance and node labels (line 561), ensuring robustness across different metric exposures.
462-486: No action required—type compatibility verified. OwnerReference is structurally compatible with K8sResourceCommon. Both apiVersion and kind are required fields on OwnerReference, satisfying the function's needs. The implementation uses optional chaining defensively, and this pattern is consistent across the codebase (VPA, exports, etc.). The cell renderers are correct.
Force-pushed from 70ab099 to f652608
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Around line 1006-1012: The mcpEnabled prop incorrectly gates MCP UI on
machinesLoadError; change the prop passed to mcpEnabled in NodesPage (currently
using machinesLoadError and machineConfigPoolsLoadError) so it only checks
!machineConfigPoolsLoadError. Locate the mcpEnabled usage in NodesPage.tsx and
update it to rely solely on machineConfigPoolsLoadError (since MCP assignments
are derived from node labels via LabelSelector(mcp.spec.nodeSelector) and do not
require Machines to be loaded), leaving machineConfigPools and any LabelSelector
logic untouched.
- Around line 547-554: The kubevirt query for the 'runningVms' series is missing
an explicit aggregation and may not yield a data.metric.node or
data.metric.instance label expected by the reducer (see reducer at line 561);
update the query string for the object with key 'runningVms' (inside the
vmsEnabled branch) to use an explicit aggregation such as sum
by(node)(kubevirt_vmi_phase_count{phase='running'}) so the returned series
include the node label and remain consistent with the other metrics.
🧹 Nitpick comments (2)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (2)
858-876: Type cast on empty array may be too permissive. The [] as R cast on line 875 bypasses type safety. While functionally fine for array types, consider using a more explicit type or adjusting the generic constraint:

    -const useWatchResourcesIfAllowed = <R extends K8sResourceCommon[]>(
    +const useWatchResourcesIfAllowed = <T extends K8sResourceCommon>(
       model: K8sModel,
    -): [R, boolean, unknown] => {
    +): [T[], boolean, unknown] => {
       // ...
    -  return [resources || ([] as R), !checkIsLoading && loaded, loadError];
    +  return [resources || [], !checkIsLoading && loaded, loadError];
     };

This avoids the explicit cast and makes the return type clearer. Low priority given the current usage is correct.
377-377: Inconsistent default value for VMs compared to other metrics. The pods metric (line 382) defaults to DASH when unavailable, but runningVMs defaults to '0'. This could be misleading – a missing metric versus "0 VMs running" are different states:

    - const runningVMs = nodeMetrics?.runningVms?.[nodeName] ?? '0';
    + const runningVMs = nodeMetrics?.runningVms?.[nodeName] ?? DASH;

Alternatively, if 0 is intentional because VM-capable nodes with no VMs should show "0", consider adding a comment explaining the rationale.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
- frontend/packages/console-shared/src/sorts/nodes.ts
- frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/packages/console-app/locales/en/console-app.json
🧰 Additional context used
🧬 Code graph analysis (3)
frontend/public/actions/ui.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
MetricValuesByName(869-871)
frontend/packages/console-shared/src/sorts/nodes.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
NodeKind(822-847)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (10)
frontend/public/components/factory/table.tsx (1)
  TableColumn (662-669)
frontend/packages/console-shared/src/sorts/nodes.ts (2)
  sortWithCSRResource (40-50), nodeRunningVms (27-28)
frontend/packages/console-dynamic-plugin-sdk/src/api/dynamic-core-api.ts (1)
  getGroupVersionKindForResource (413-413)
frontend/public/module/k8s/types.ts (2)
  MachineSetKind (730-744), ControlPlaneMachineSetKind (746-765)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (4)
  NodeKind (822-847), NodeCertificateSigningRequestKind (865-867), K8sResourceCommon (67-71), CertificateSigningRequestKind (849-863)
frontend/packages/console-app/src/components/data-view/types.ts (1)
  ResourceFilters (5-8)
frontend/packages/console-dynamic-plugin-sdk/src/api/common-types.ts (1)
  K8sModel (33-60)
frontend/packages/console-dynamic-plugin-sdk/src/utils/k8s/hooks/useK8sWatchResource.ts (1)
  useK8sWatchResource (28-69)
frontend/public/models/index.ts (4)
  MachineModel (894-909), MachineSetModel (911-926), ControlPlaneMachineSetModel (928-943), MachineConfigPoolModel (962-977)
frontend/packages/console-app/src/components/nodes/csr.ts (1)
  getNodeClientCSRs (36-75)
🔇 Additional comments (9)
frontend/public/actions/ui.ts (1)
43-52: LGTM – Type extension follows established patterns. The new runningVms field aligns with the existing metric naming convention and type structure. Backward compatible addition.
frontend/packages/console-shared/src/sorts/nodes.ts (1)
27-28: LGTM – Consistent with existing metric accessors. Follows the established pattern used by nodeCPU and nodePods. The Number() wrapper correctly handles the conversion from the metric store.
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (7)
965-968: First-match behavior for MCP is acceptable but worth noting. A node could theoretically match multiple MachineConfigPools based on label selectors. The current .find() returns only the first match. This is fine for the UI column display, but ensure this aligns with how OpenShift MCO actually assigns nodes to pools (typically nodes belong to one primary pool).
984-992: Loaded state aggregation is reasonable. The decision to wait for all resources before rendering prevents partial/inconsistent data. The comment on line 991 correctly explains that machine-related load errors hide columns rather than fail the page. Good defensive approach.
187-210: i18n handling is correct for new columns. All new column titles and filter labels are properly wrapped with t() using the console-app~ namespace. Follows established patterns.
926-930: Solid KubeVirt feature detection. The dual check (plugin presence via SERVER_FLAGS.consolePlugins AND feature flag via useFlag('KUBEVIRT_DYNAMIC')) is a good defensive pattern. The Array.isArray guard handles undefined cases.
951-982: Data memoization and mapping looks correct. Dependencies are properly tracked. The machinesByName Map optimization (lines 901-904) provides O(1) lookups instead of repeated array scans. Node enrichment with machineOwner and machineConfigPool is cleanly implemented.
922-924: CSR watching now properly RBAC-gated. Using useWatchResourcesIfAllowed for CSRs ensures the watch is only attempted when the user has list permissions. This prevents console errors for users without CSR access.
462-473: No issue here. getGroupVersionKindForResource only requires apiVersion and kind properties—both of which OwnerReference provides. The function signature works with any resource type that includes these fields, not exclusively with K8sResourceCommon.
Force-pushed from f652608 to 1a3ed51
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @frontend/packages/console-app/src/components/nodes/NodesPage.tsx:
- Line 377: The display variable runningVMs is given a string default ('0')
while nodeMetrics?.runningVms?.[nodeName] is a number, causing a mixed type;
change the default to a numeric 0 so runningVMs is always a number by replacing
the fallback from the string '0' to the number 0 (i.e., set runningVMs =
nodeMetrics?.runningVms?.[nodeName] ?? 0).
🧹 Nitpick comments (4)
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (4)
858-876: Consider edge case when access check is still loading. The hook looks solid overall, but there's a subtle timing consideration: when checkIsLoading is true, useK8sWatchResource receives undefined and immediately returns loaded: true. The combined !checkIsLoading && loaded guards against premature "loaded" states, which is correct.
However, for clarity and consistency, consider initializing the return tuple more explicitly when access check is pending:
💡 Optional improvement for readability

    const useWatchResourcesIfAllowed = <R extends K8sResourceCommon[]>(
      model: K8sModel,
    ): [R, boolean, unknown] => {
      const [isAllowed, checkIsLoading] = useAccessReview({
        group: model.apiGroup || '',
        resource: model.plural,
        verb: 'list',
      });
      const [resources, loaded, loadError] = useK8sWatchResource<R>(
        isAllowed
          ? {
              kind: referenceForModel(model),
              isList: true,
            }
          : undefined,
      );
    -  return [resources || ([] as R), !checkIsLoading && loaded, loadError];
    +  // When access denied or still checking, return empty array as loaded
    +  const effectiveLoaded = !checkIsLoading && (isAllowed ? loaded : true);
    +  return [resources || ([] as R), effectiveLoaded, isAllowed ? loadError : undefined];
    };

This makes it clearer that when access is denied, we're intentionally returning an empty array as "loaded" with no error.
963-973: Consider stricter owner reference matching by apiVersion. The current logic matches owner references by kind only. While this works in practice (MachineSet/ControlPlaneMachineSet are unique to machine-api), stricter matching against apiVersion would be more robust:
💡 Optional enhancement for stricter matching

     const machineOwner = owner?.metadata.ownerReferences?.find(
       (ref) =>
    -    ref.kind === MachineSetModel.kind || ref.kind === ControlPlaneMachineSetModel.kind,
    +    (ref.kind === MachineSetModel.kind &&
    +      ref.apiVersion?.startsWith(`${MachineSetModel.apiGroup}/`)) ||
    +    (ref.kind === ControlPlaneMachineSetModel.kind &&
    +      ref.apiVersion?.startsWith(`${ControlPlaneMachineSetModel.apiGroup}/`)),
     );

This is a minor hardening suggestion; the current implementation is functionally correct for the OpenShift machine-api ecosystem.
547-565: Consider resilient metric fetching for partial failures. The current implementation uses Promise.all, meaning if any single metric query fails (e.g., the KubeVirt metric when the cluster has partial VMI issues), all node metrics fail to load.
For a more resilient UX, consider using Promise.allSettled to handle partial metric availability:
💡 Suggested improvement for resilience

    - return Promise.all(promises).then((data: any[]) => _.assign({}, ...data));
    + return Promise.allSettled(promises).then((results) => {
    +   const successfulData = results
    +     .filter((r): r is PromiseFulfilledResult<any> => r.status === 'fulfilled')
    +     .map((r) => r.value);
    +   return _.assign({}, ...successfulData);
    + });

This ensures node metrics still display even if the KubeVirt metric endpoint is temporarily unavailable. This is particularly relevant for clusters with intermittent Prometheus issues.
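A self-contained version of that Promise.allSettled pattern, with plain objects standing in for the console's Prometheus fetches:

```typescript
// Standalone sketch of the resilience pattern: merge only the metric queries
// that resolved, so one failed query does not wipe out all node metrics.
// The query results here are illustrative values, not real console data.
const mergeSettled = async (
  promises: Promise<Record<string, number>>[],
): Promise<Record<string, number>> => {
  const results = await Promise.allSettled(promises);
  return results
    .filter(
      (r): r is PromiseFulfilledResult<Record<string, number>> =>
        r.status === 'fulfilled',
    )
    .reduce<Record<string, number>>((acc, r) => Object.assign(acc, r.value), {});
};
```

Unlike Promise.all, allSettled never rejects; each entry reports its own fulfilled/rejected status, so partial data survives a single failing endpoint.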
804-811: Minor: Redundant optional chaining. Line 807 uses resource.machineConfigPool?.metadata.name but the preceding condition on line 806 already guards against !resource.machineConfigPool. The optional chaining is harmless but unnecessary:
🔧 Minor cleanup

     if (filters.machineConfigPools.length > 0) {
       if (
         !resource.machineConfigPool ||
    -    !filters.machineConfigPools.includes(resource.machineConfigPool?.metadata.name)
    +    !filters.machineConfigPools.includes(resource.machineConfigPool.metadata.name)
       ) {
         return false;
       }
     }
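The predicate under discussion reduces to a small standalone sketch (RowLike is a simplified stand-in for the console's row type):

```typescript
// Sketch of the MachineConfigPool filter predicate: rows with no MCP
// association (e.g. CSR rows) are excluded whenever the filter is active.
type RowLike = { machineConfigPool?: { metadata: { name: string } } };

const matchesMcpFilter = (resource: RowLike, selected: string[]): boolean => {
  if (selected.length === 0) {
    return true; // no filter applied, keep every row
  }
  return (
    !!resource.machineConfigPool &&
    selected.includes(resource.machineConfigPool.metadata.name)
  );
};
```

Note that once the truthiness of `resource.machineConfigPool` has been checked, plain property access suffices — which is exactly the redundancy the nitpick points out.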
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- frontend/packages/console-app/locales/en/console-app.json
- frontend/packages/console-app/src/components/nodes/NodesPage.tsx
- frontend/packages/console-shared/src/sorts/nodes.ts
- frontend/public/actions/ui.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/packages/console-shared/src/sorts/nodes.ts
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/public/actions/ui.ts (1)
frontend/packages/console-dynamic-plugin-sdk/src/extensions/console-types.ts (1)
MetricValuesByName(869-871)
🔇 Additional comments (10)
frontend/public/actions/ui.ts (1)
51-51: LGTM! The runningVms metric field follows the established naming conventions and type patterns already present in NodeMetrics. Clean, additive change that integrates well with the existing metrics architecture.
frontend/packages/console-app/locales/en/console-app.json (1)
409-423: LGTM! The new i18n entries follow the established patterns in this locale file. The display names are appropriately user-friendly ("Machine set" with space) while technical identifiers like "MachineConfigPool" remain as-is for consistency with K8s resource naming. Filter placeholders align with existing conventions.
frontend/packages/console-app/src/components/nodes/NodesPage.tsx (8)
926-930: LGTM - Solid feature gating for KubeVirt integration. The dual check (plugin presence in consolePlugins array AND the KUBEVIRT_DYNAMIC flag) is appropriately defensive. This ensures VMs columns/metrics are only enabled when KubeVirt is genuinely active, preventing confusing empty columns or failed Prometheus queries.
965-968: LGTM - MachineConfigPool matching is correct. Using LabelSelector.matches() with the MCP's nodeSelector is the proper approach. The || {} fallback handles MCPs without explicit nodeSelectors gracefully. The find behavior (returning first match) is appropriate since nodes typically belong to a single MCP.
1006-1008: Logic review: partial owner data scenario. The ownerEnabled condition allows the "Machine set" column when at least one of machineSets/controlPlaneMachineSets loads successfully. However, if one fails and the other succeeds, users may see incomplete data:
- MachineSet owners display correctly
- ControlPlaneMachineSet owners show as DASH (or vice versa)
This could be confusing. Consider either:
- Requiring both to succeed: !machinesLoadError && !machineSetsLoadError && !controlPlaneMachineSetsLoadError
- Documenting this as acceptable degraded behavior
The current approach may be intentional (show what we can), but it's worth validating this UX decision with the team.
694-704: LGTM - Filter state initialization is correct. Including machineOwners and machineConfigPools arrays in initialFilters regardless of feature enablement is the right approach. This ensures a stable filter state shape while the conditional filter UI handles visibility.
932-948: LGTM - Effect cleanup and dependencies are correct. The metrics fetching effect properly:
- Depends on isKubevirtPluginActive to re-fetch when KubeVirt availability changes
- Cleans up the interval on unmount or dependency change
- Uses a reasonable 30-second refresh interval
187-210: LGTM - Conditional column definitions are clean. The spread-based conditional column inclusion is idiomatic and maintainable. Column properties (sort paths, modifiers) are correctly specified. The memoization dependencies at line 338 properly include all three flags.
984-992: Code handles access-denied resources correctly without blocking. The `useWatchResourcesIfAllowed` hook (lines 858-876) returns `loaded: true` when access is denied because it passes `undefined` to `useK8sWatchResource` for unauthorized resources. This allows the `loaded` state to complete once all permission checks finish and core resources load, rather than blocking indefinitely on inaccessible machine-related resources.
The error handling properly gates machine owner and MCP columns via the `ownerEnabled` and `mcpEnabled` flags (lines 1006-1011), which respect the load errors from those resources. The strategy is sound: permission-denied resources are treated as "loaded but unavailable," and their columns are gracefully hidden rather than failing the entire page render. The comment on line 991 documents this intentional behavior well.
462-473: `getGroupVersionKindForResource` works correctly with `OwnerReference`. The function implementation only accesses `apiVersion` and `kind` via safe optional chaining, both of which are available on `OwnerReference`. While `OwnerReference` is a distinct type from `K8sResourceCommon`, the structural compatibility is sound, and TypeScript compilation confirms no type errors. No changes needed.
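A sketch of what such a function might do under the Kubernetes `apiVersion` convention (`group/version`, or a bare version for the core group). This is an assumed reimplementation, not the console's actual `getGroupVersionKindForResource`:

```typescript
// Structural subset of OwnerReference: only the fields the function reads.
interface OwnerReferenceLike {
  apiVersion?: string;
  kind?: string;
}

// Assumed reimplementation: split apiVersion into group/version, treating a
// bare version (no slash) as the core group, and pass kind through.
const getGroupVersionKind = (ref: OwnerReferenceLike) => {
  const parts = ref.apiVersion?.split('/') ?? [];
  const [group, version] = parts.length === 2 ? parts : ['core', parts[0]];
  return { group, version, kind: ref.kind };
};
```

Because only `apiVersion` and `kind` are touched, any structurally compatible object — an `OwnerReference` included — satisfies the parameter type, which is the point the review comment makes.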
|
@jeff-phillips-18: all tests passed! Full PR test history. Your PR dashboard. Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
rhamilto
left a comment
/lgtm
|
QE Approver: |
|
/jira refresh |
|
@sg00dwin: This pull request references CONSOLE-4943 which is a valid jira issue. Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
/label px-approved |
|
@XiyunZhao will work on PR testing |
|
Hi @jeff-phillips-18, a small question: when a user only has watch permission (role: cloud-node-manager) on the Nodes page, VM list values are displayed as 0. Displaying 0 is misleading, I think; the VM list should show `-` to indicate insufficient permissions |
@XiyunZhao OK, I'll add the permission checks. |
jseseCCS
left a comment
Please carefully review my comments, but for expediency and because I trust you guys, I'll approve regardless.
"Kube-Proxy version": "Kube-Proxy version",
"Machine set": "Machine set",
"VMs": "VMs",
"MachineConfigPool": "MachineConfigPool",
Re this line and also 423: The strings "MachineConfigPool" and "Filter by MachineConfigPool" expose the internal API resource name directly in the UI, right? Please confirm whether this is intentionally user-facing. If not, update to a human-readable label like "Machine config pool" and ensure consistency with nearby labels like "Machine set" and "VMs"?
},
...(mcpEnabled
  ? [
    {
thru 299: This column header uses internal resource name MachineConfigPool (title: t('console-app~MachineConfigPool')). Please confirm this term is intended to be user-facing. If not, replace with human-readable label (like “Machine config pool”) and keep it consistent with the related filter label/placeholder?
  ]
  : []),
...(mcpEnabled
  ? [
thru 749: Filter title and placeholder expose the internal resource name MachineConfigPool (title: t('console-app~MachineConfigPool'), placeholder: t('console-app~Filter by MachineConfigPool')), correct? Please confirm this is intentional. If not, update to a human-readable label and align it with the column header wording?
  modifier: 'nowrap',
},
},
...(ownerEnabled
to about 195: This column seems to show owning MachineSet or ControlPlaneMachineSet (id: nodeColumnInfo.machineOwner.id). Please confirm that “Machine set” accurately describes what users will see here. Or consider a more explicit label (“Owning machine set”?).
...(ownerEnabled
  ? [
    <DataViewCheckboxFilter
      key="machineOwners"
to 738: This filter corresponds to the owning MachineSet or ControlPlaneMachineSet? Please confirm that "Machine set" is the intended user-facing term for this ownership relationship and that it aligns with the column label.
},
  ]
  : []),
...(vmsEnabled
Please confirm that "VMs" is the preferred user-facing abbreviation here (versus "Virtual machines") and that it aligns with terminology used elsewhere in the console.
data={data}
loaded={loaded}
loadError={loadError}
ownerEnabled={
Columns and filters are disabled when related resources fail to load? Please confirm this is the intended UX and that users receive appropriate indication when filters/columns are unavailable because of load errors.
|
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: jeff-phillips-18, jseseCCS, rhamilto The full list of commands accepted by this bot can be found here. The pull request process is described here Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
|
/label docs-approved |



Description
Add columns to the Nodes list view:
`Machine set`, `MachineConfigPools`, and `VMs`.

Add filters for `Machine set` and `MachineConfigPools`.

Notes:
- The `Machine set` column shows the owning `MachineSet` or `ControlPlaneMachineSet`

Columns:
- Name
- Status
- Machine set
- VMs
- Pods
- Memory
- CPU
Screenshots
/cc @kybaker
Summary by CodeRabbit
New Features
Metrics
Localization