OCPBUGS-74027: AA: E2E: LLC: Add tests related to odd cpus #1458
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mrniranjan. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Adds tests related to Odd integer cpu alignment with full-pcpus-only: false
Assisted-by: Cursor v2.1.39
AI Attribution: AIA PAI Hin v1.0
Signed-off-by: Niranjan M.R <mniranja@redhat.com>
Force-pushed: ae53b6c to b9c4461
@mrniranjan: This pull request references Jira Issue OCPBUGS-74027, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug.
Requesting review from QA contact. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@openshift-ci-robot: GitHub didn't allow me to request PR reviews from the following users: mrniranjan. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/retest
Tal-or
left a comment
Overall looking good.
A few comments inside.
// Restore original policy with full-pcpus-only: true
perfProfile, err := profiles.GetByNodeLabels(testutils.NodeSelectorLabels)
Expect(err).ToNot(HaveOccurred())
prof := perfProfile.DeepCopy()
We already DeepCopy the original profile in the BeforeAll; we can use it for the revert.
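For illustration, a minimal sketch of the suggested revert, assuming initialProfile holds the DeepCopy taken in BeforeAll; the actual update and wait steps are left as a comment since the suite's own helpers would be reused:

// Assumption: initialProfile was captured in BeforeAll via perfProfile.DeepCopy().
AfterAll(func() {
    By("Reverting the performance profile to the copy saved in BeforeAll")
    prof := initialProfile.DeepCopy()
    // re-apply prof through the suite's existing profile update and MCP wait helpers
})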
Addressed in latest commit
testlog.TaggedInfof("Pod", "CPUs used by %q are: %q (requested: %d)", testpod.Name, podCpuset.String(), requestedCPUs)

// Verify the pod got exactly the requested number of CPUs
Expect(podCpuset.Size()).To(Equal(requestedCPUs), "Pod should have exactly %d CPUs, got %d", requestedCPUs, podCpuset.Size())
nit: checking that we got the exact number of requested CPUs is basically re-checking the kubelet's core logic.
I would say it's redundant.
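For context, a hedged sketch of the kind of LLC-oriented assertion this nit points toward. getLLCSiblings is a hypothetical helper (not part of this suite) standing in for whatever reads /sys/devices/system/cpu/cpu<N>/cache/index3/shared_cpu_list on the node the pod landed on; IsSubsetOf comes from k8s.io/utils/cpuset:

// Hypothetical helper: returns the cpuset that shares the last-level cache
// (index3) with the given CPU on the pod's node.
llcSiblings, err := getLLCSiblings(ctx, &targetNode, podCpuset.List()[0])
Expect(err).ToNot(HaveOccurred())
// Instead of counting CPUs, assert the pod's cpuset is confined to one LLC group.
Expect(podCpuset.IsSubsetOf(llcSiblings)).To(BeTrue(),
    "pod cpuset %q is not confined to a single LLC group %q", podCpuset.String(), llcSiblings.String())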
Addressed in latest commit
testlog.TaggedInfof("Pod", "CPUs used by %q are: %q (requested: %d)", testpod.Name, podCpuset.String(), requestedCPUs)

// Verify the pod got exactly the requested number of CPUs
Expect(podCpuset.Size()).To(Equal(requestedCPUs), "Pod should have exactly %d CPUs, got %d", requestedCPUs, podCpuset.Size())
ditto
Addressed in latest commit
Signed-off-by: Niranjan M.R <mniranja@redhat.com>
ffromani
left a comment
The general direction looks OK; I'll do a final pass once the minor-ish inline comments are addressed.
llcPolicyOddCPUs := `{"cpuManagerPolicyOptions":{"prefer-align-cpus-by-uncorecache":"true", "full-pcpus-only":"false"}}`
perfProfile, err := profiles.GetByNodeLabels(testutils.NodeSelectorLabels)
Expect(err).ToNot(HaveOccurred())
initialProfile = perfProfile.DeepCopy()
Do we want to use the top-level initial profile, or do we want to add a more localized variable?
I'm slightly leaning towards the latter, but I don't really have strong preferences.
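A minimal sketch of the more localized option, assuming the NTO performancev2 PerformanceProfile type used elsewhere in the suite; the variable name is illustrative:

// Keep the pristine profile in a variable scoped to this Describe block
// instead of reusing the suite-level initialProfile.
var localInitialProfile *performancev2.PerformanceProfile

BeforeAll(func() {
    perfProfile, err := profiles.GetByNodeLabels(testutils.NodeSelectorLabels)
    Expect(err).ToNot(HaveOccurred())
    localInitialProfile = perfProfile.DeepCopy()
})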
fixed in latest commit
BeforeEach(func() {
    hasBaremetal := false
    for i := range workerRTNodes {
        isVM, err := infrastructure.IsVM(context.Background(), &workerRTNodes[i])
        Expect(err).ToNot(HaveOccurred())
        if !isVM {
            hasBaremetal = true
            break
        }
    }
    if !hasBaremetal {
        Skip("Skipping test. All workers in this test setup are VMs, no baremetal node was found")
    }
})
This should probably be the first thing we do in BeforeAll. Furthermore, this only checks that we have at least 1 BM worker. Would it be useful to track the BM workers and use only them later on?
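A hedged sketch of that suggestion: collect the bare-metal workers once in BeforeAll so later specs draw only from them. bmWorkerNodes is an illustrative name, and corev1 is assumed to be k8s.io/api/core/v1 as used elsewhere in the suite:

var bmWorkerNodes []corev1.Node

BeforeAll(func() {
    // Filter out VM workers up front; keep only bare-metal nodes.
    for i := range workerRTNodes {
        isVM, err := infrastructure.IsVM(context.Background(), &workerRTNodes[i])
        Expect(err).ToNot(HaveOccurred())
        if !isVM {
            bmWorkerNodes = append(bmWorkerNodes, workerRTNodes[i])
        }
    }
    if len(bmWorkerNodes) == 0 {
        Skip("Skipping test. All workers in this test setup are VMs, no baremetal node was found")
    }
})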
Addressed in latest commit
It("[test_id:87072] Odd integer CPU request: 3 CPUs should prefer whole cores", func(ctx context.Context) {
    requestedCPUs := 3
    targetNode := workerRTNodes[0]
This picks an arbitrary worker node, which is OK as long as we pick it from among the BM ones.
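Building on the bmWorkerNodes slice sketched above (names illustrative), the spec could pick its target from the bare-metal workers rather than workerRTNodes[0]:

// Pick any bare-metal worker; math/rand is assumed to be imported.
targetNode := bmWorkerNodes[rand.Intn(len(bmWorkerNodes))]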
This test will run exclusively on hardware that supports uncore cache, and typically in our downstream CI we label worker-cnf only on those nodes that support this feature.
Signed-off-by: Niranjan M.R <mniranja@redhat.com>
/retest
@mrniranjan: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Adds tests related to Odd integer cpu alignment
with full-pcpus-only: false
Assisted-by: Cursor v2.1.39
AI Attribution: AIA PAI Hin v1.0 Model: Auto