fix topology-updater cpu report #1979
Conversation
Welcome @AllenXu93!
Hi @AllenXu93. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
✅ Deploy Preview for kubernetes-sigs-nfd ready!
Force-pushed from 15fda0b to 109cb7c
Force-pushed from 109cb7c to a694d91
/cc Need to review the code before commenting :)
/ok-to-test
We seem to have some problems, as the tests failed. But yes, we should have a specific test case for this scenario. @AllenXu93, could you fix and update the tests?
OK, I will fix it this week.
/test pull-node-feature-discovery-build-image-cross-generic
Force-pushed from 281bc7e to 17d374d
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: AllenXu93. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Now the unit tests fail.
Force-pushed from b41f3ad to 5d40f79
@AllenXu93 any thoughts on these comments?
@AllenXu93: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The latest code is updated as you suggested; I applied the rename from your suggestion.
Updated the code just now.
Do you think these changes are OK?
```diff
 if err != nil {
 	return false, false, err
 }

-isIntegralGuaranteed := hasExclusiveCPUs(pod)
+isPodHasIntegralCPUs := podHasIntegralCPUs(pod)
```
nit: not a blocker by any means, but the name isPodHas... is just funny. Some suggestions for renaming, as the PR needs to be updated in any case because of the build failures:

```diff
-isPodHasIntegralCPUs := podHasIntegralCPUs(pod)
+podHasExclusiveCPUs := checkPodExclusiveCPUs(pod)
```

OR

```diff
-isPodHasIntegralCPUs := podHasIntegralCPUs(pod)
+podHasExclusiveCPUs := hasExclusiveCPUs(pod)
```
Sorry it took me so long. I have mostly minor-ish comments; once addressed, I think LGTM.
```diff
@@ -19,13 +19,12 @@ package resourcemonitor
 import (
 	"context"
 	"fmt"
+	"k8s.io/kubernetes/pkg/apis/core/v1/helper/qos"
```
misplaced import
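Presumably the issue is import grouping: the k8s.io import should sit in its own block after the standard library, per the usual Go convention (a sketch, not the PR's final layout):

```go
import (
	"context"
	"fmt"

	"k8s.io/kubernetes/pkg/apis/core/v1/helper/qos"
)
```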
```go
// In Scan(), if watchable is false, this pod's scan will be skipped,
// so we can return directly if the pod's namespace is not watchable.
func (resMon *PodResourcesScanner) isWatchable(podResource *podresourcesapi.PodResources) (bool, bool, error) {
	if resMon.namespace != "*" && resMon.namespace != podResource.Namespace {
```
podResource.GetNamespace() handles a nil pointer gracefully, but I guess it's minor(ish)?

```go
func (m *PodResources) GetNamespace() string {
	if m != nil {
		return m.Namespace
	}
	return ""
}
```

(my preference is to always use GetNamespace, everything else being equal)
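Applied to the check in isWatchable above, the suggestion would read (a minimal sketch; the branch body is assumed, since the diff cuts off before it):

```go
if resMon.namespace != "*" && resMon.namespace != podResource.GetNamespace() {
	return false, false, nil // assumed early return; not shown in the diff
}
```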
```go
	},
}
cpuIDs := container.GetCpuIds()
if len(cpuIDs) > 0 && isExclusiveCPUs {
```
minor: isExclusiveCPUs should be true only if the pod does request some CPUs. IOW, I'd handle the corner case in which the container requests 0 CPUs (!!!!) inside the functions, not outside. Arguably, a container which requests 0 CPUs does not have any exclusive CPUs assigned.
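A minimal sketch of that idea, assuming a hypothetical per-container helper (the name and the integral-request check are illustrative, not the PR's actual code):

```go
package resourcemonitor

import corev1 "k8s.io/api/core/v1"

// containerHasExclusiveCPUs folds the zero-CPU corner case into the
// predicate itself: a container that requests zero CPUs has no exclusive
// CPUs, so callers no longer need a separate guard.
func containerHasExclusiveCPUs(container *corev1.Container) bool {
	cpuQuantity, ok := container.Resources.Requests[corev1.ResourceCPU]
	if !ok || cpuQuantity.IsZero() {
		return false
	}
	// Exclusive CPUs require an integral request (e.g. "2", not "1500m").
	return cpuQuantity.MilliValue()%1000 == 0
}
```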
```go
cpuIDs := container.GetCpuIds()
if len(cpuIDs) > 0 && isExclusiveCPUs {
	var resCPUs []string
	for _, cpuID := range container.GetCpuIds() {
```
(re)use cpuIDs computed on line 157 above?
(Yes, the original code called it twice unnecessarily)
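The fix is just to range over the slice computed earlier (a sketch; the string conversion in the loop body is assumed, since the diff doesn't show it):

```go
cpuIDs := container.GetCpuIds()
if len(cpuIDs) > 0 && isExclusiveCPUs {
	var resCPUs []string
	for _, cpuID := range cpuIDs { // reuse cpuIDs instead of calling GetCpuIds() again
		resCPUs = append(resCPUs, strconv.FormatInt(cpuID, 10)) // conversion assumed, not shown in the diff
	}
}
```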
Fix #1978