feat: kubelet proxy #42
Conversation
Walkthrough

Adds a TLS-enabled kubelet proxy and Pod log-forwarding handler, wires optional proxy startup into module-controller using new env vars and an in-cluster Kubernetes client, adds service IP discovery and pseudo-node IP support, introduces related constants, bumps dependencies, and adds unit tests for the proxy and handler.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Env as Env
    participant Main as module-controller
    participant K8s as Kubernetes API
    participant Proxy as Kubelet Proxy (TLS)
    participant VNode as VNodeController
    Env->>Main: Read envs (CLIENT_ID, NAMESPACE, KUBELET_PROXY_ENABLED, KUBELET_PROXY_PORT, ...)
    Main->>K8s: InClusterConfig -> Clientset
    alt Kubelet proxy enabled
        Main->>K8s: Lookup Service by label (kubelet-proxy-service)
        K8s-->>Main: ClusterIP
        Main->>Proxy: StartKubeletProxy(ctx, cert, key, addr, clientset)
        Proxy-->>Main: Initialized (goroutine)
    end
    Main->>VNode: BuildVNodeControllerConfig(PseudoNodeIP=serviceIP|podIP)
    Note over Main,VNode: Manager and healthz initialized
```

```mermaid
sequenceDiagram
    autonumber
    participant Client as kubectl/logs
    participant Proxy as Kubelet Proxy
    participant K8s as Kubernetes API
    Client->>Proxy: GET /containerLogs/{ns}/{vPod}/...
    Proxy->>K8s: Get vPod
    K8s-->>Proxy: vPod
    Proxy->>K8s: Get Node (vPod.spec.nodeName)
    K8s-->>Proxy: Node (labels include basePod)
    Proxy->>K8s: Get basePod (ns, name from node label or hostname)
    K8s-->>Proxy: basePod
    Proxy->>K8s: Stream PodLogs(basePod, container, PodLogOptions)
    K8s-->>Proxy: Log stream
    Proxy-->>Client: 200 OK (stream) / 404/5xx on errors
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
🧹 Nitpick comments (16)
staging/kubelet_proxy/proxy_test.go (3)
30-31: Avoid fixed port to prevent flakes; use an ephemeral port

Binding to :10250 can collide with a local kubelet or other tests. Use 127.0.0.1:0 to let the OS choose a free port.
```diff
- err := StartKubeletProxy(ctx, cert, key, ":10250", fake.NewClientset())
+ err := StartKubeletProxy(ctx, cert, key, "127.0.0.1:0", fake.NewClientset())
@@
- err := StartKubeletProxy(ctx, "", key, ":10250", fake.NewClientset())
+ err := StartKubeletProxy(ctx, "", key, "127.0.0.1:0", fake.NewClientset())
@@
- err := StartKubeletProxy(ctx, cert, "", ":10250", fake.NewClientset())
+ err := StartKubeletProxy(ctx, cert, "", "127.0.0.1:0", fake.NewClientset())
```

Also applies to: 39-41, 46-48
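As a standalone sketch of the ephemeral-port pattern, independent of the StartKubeletProxy signature under review (the `pickFreePort` helper name is illustrative, not from the PR), binding to port 0 and reading back the OS-assigned port looks like this:

```go
package main

import (
	"fmt"
	"net"
)

// pickFreePort binds to port 0 so the OS assigns an unused ephemeral port,
// then closes the listener and returns that port. Tests can hand the
// resulting address to the server under test instead of hard-coding :10250.
func pickFreePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := pickFreePort()
	if err != nil {
		panic(err)
	}
	fmt.Println(port > 0 && port <= 65535)
}
```

Note there is a small race between closing the listener and the server re-binding the port; for tests it is usually negligible, and passing the open listener itself to the server avoids it entirely.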
20-24: Use t.TempDir and avoid manual cleanup

Rely on testing’s temp directory lifecycle; also switch the helper to accept *testing.T and remove panics.
```diff
- cert, key := genTestCertPair()
- defer func() {
-     os.Remove(cert)
-     os.Remove(key)
- }()
+ cert, key := genTestCertPair(t)
```
52-90: Refactor cert helper: no panics, temp files, and test-scoped cleanup

Make the helper test-friendly: use t.Helper(), require for assertions, and CreateTemp.
```diff
-func genTestCertPair() (certFileLoc string, keyFileLoc string) {
-	certFileLoc = "test_cert.pem"
-	keyFileLoc = "test_key.pem"
-
-	priv, err := rsa.GenerateKey(rand.Reader, 2048)
-	if err != nil {
-		panic("failed to generate private key: " + err.Error())
-	}
-	template := x509.Certificate{
-		SerialNumber: big.NewInt(1),
-		Subject:      pkix.Name{CommonName: "localhost"},
-		NotBefore:    time.Now(),
-		NotAfter:     time.Now().Add(8760 * time.Hour), // 365 天
-		DNSNames:     []string{"localhost"},
-	}
-	cert, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
-	if err != nil {
-		panic("failed to create certificate: " + err.Error())
-	}
-	certFile, err := os.Create(certFileLoc)
-	if err != nil {
-		panic("failed to create cert file: " + err.Error())
-	}
-	defer certFile.Close()
-	err = pem.Encode(certFile, &pem.Block{Type: "CERTIFICATE", Bytes: cert})
-	if err != nil {
-		panic("failed to write cert to file: " + err.Error())
-	}
-	keyFile, err := os.Create(keyFileLoc)
-	if err != nil {
-		panic("failed to create key file: " + err.Error())
-	}
-	defer keyFile.Close()
-	err = pem.Encode(keyFile, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
-	if err != nil {
-		panic("failed to write key to file: " + err.Error())
-	}
-	return certFileLoc, keyFileLoc
-}
+func genTestCertPair(t *testing.T) (certFileLoc, keyFileLoc string) {
+	t.Helper()
+
+	priv, err := rsa.GenerateKey(rand.Reader, 2048)
+	require.NoError(t, err, "failed to generate private key")
+
+	template := x509.Certificate{
+		SerialNumber: big.NewInt(1),
+		Subject:      pkix.Name{CommonName: "localhost"},
+		NotBefore:    time.Now(),
+		NotAfter:     time.Now().Add(8760 * time.Hour), // 365 days
+		DNSNames:     []string{"localhost"},
+	}
+	certDER, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
+	require.NoError(t, err, "failed to create certificate")
+
+	certFile, err := os.CreateTemp("", "test_cert_*.pem")
+	require.NoError(t, err, "failed to create cert file")
+	t.Cleanup(func() { _ = os.Remove(certFile.Name()) })
+	defer certFile.Close()
+	require.NoError(t, pem.Encode(certFile, &pem.Block{Type: "CERTIFICATE", Bytes: certDER}))
+
+	keyFile, err := os.CreateTemp("", "test_key_*.pem")
+	require.NoError(t, err, "failed to create key file")
+	t.Cleanup(func() { _ = os.Remove(keyFile.Name()) })
+	defer keyFile.Close()
+	require.NoError(t, pem.Encode(keyFile, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}))
+
+	return certFile.Name(), keyFile.Name()
+}
```

staging/kubelet_proxy/pod_handler_test.go (1)
59-93: Sanity-check fake log streaming

These cases expect "fake logs" via the Kubernetes logs API using client-go’s fake client. Depending on client-go version/configuration, Pod logs streaming may not be automatically stubbed and could error or hang. If you see flakes or failures in CI, consider stubbing the log response via a custom RESTClient or reactor to deterministically return the expected bytes.
staging/kubelet_proxy/pod_handler.go (3)
102-103: Handle unscheduled vPod (empty NodeName)

If the vPod hasn’t been scheduled yet, NodeName is empty and the subsequent Node GET will fail; surface a clear NotFound early.
```diff
- vNodeName := vPod.Spec.NodeName
+ vNodeName := vPod.Spec.NodeName
+ if vNodeName == "" {
+     return nil, apierrors.NewNotFound(corev1.Resource("nodes"), vNodeName)
+ }
```
72-84: Ensure SinceTime and SinceSeconds are mutually exclusive

Kubernetes PodLogOptions expects only one of SinceTime or SinceSeconds. If clients set both, the current code may set both and get rejected by the API. Prefer SinceTime if provided.
```diff
- SinceTime: func() *metav1.Time {
- 	if !opts.SinceTime.IsZero() {
- 		return &metav1.Time{Time: opts.SinceTime}
- 	}
- 	return nil
- }(),
- SinceSeconds: func() *int64 {
- 	if opts.SinceSeconds > 0 {
- 		return ptr.To[int64](int64(opts.SinceSeconds))
- 	}
- 	return nil
- }(),
+ SinceTime: func() *metav1.Time {
+ 	if !opts.SinceTime.IsZero() {
+ 		return &metav1.Time{Time: opts.SinceTime}
+ 	}
+ 	return nil
+ }(),
+ SinceSeconds: func() *int64 {
+ 	if opts.SinceSeconds > 0 && opts.SinceTime.IsZero() {
+ 		return ptr.To[int64](int64(opts.SinceSeconds))
+ 	}
+ 	return nil
+ }(),
```
55-57: Optional: fall back to requested container name before defaulting to "base"

If the base pod isn’t labeled with LabelKeyOfBaseContainerName, consider using the requested containerName (route param) as a fallback before "base".
```diff
- Container: utils.OrElse(basePod.Labels[vkModel.LabelKeyOfBaseContainerName], "base"),
+ Container: utils.OrElse(basePod.Labels[vkModel.LabelKeyOfBaseContainerName], utils.OrElse(containerName, "base")),
```

staging/kubelet_proxy/proxy.go (4)
40-45: Do not call Fatalf from a library goroutine; also avoid redundant cert/key args

Exiting the whole process from inside a library goroutine makes it harder for callers to control the lifecycle. Log the error and let the caller decide. Also, since TLSConfig already includes the certificate, pass empty cert/key to ListenAndServeTLS to avoid redundancy.
Apply this diff:
```diff
- log.G(ctx).Infof("Starting kubelet proxy server on %s", listenAddr)
- if err := srv.ListenAndServeTLS(certFile, keyFile); err != nil && !errors.Is(err, http.ErrServerClosed) {
- 	log.G(ctx).Fatalf("Failed to start kubelet proxy server: %v", err)
- }
+ log.G(ctx).Infof("Starting kubelet proxy server on %s", listenAddr)
+ if err := srv.ListenAndServeTLS("", ""); err != nil && !errors.Is(err, http.ErrServerClosed) {
+ 	log.G(ctx).Errorf("Failed to start kubelet proxy server: %v", err)
+ }
```
47-53: Graceful shutdown can block forever; add a timeout

Shutdown with context.Background() may hang indefinitely if connections never close. Use a bounded timeout.
Apply this diff:
```diff
- <-ctx.Done()
- log.G(ctx).Info("Shutting down kubelet proxy server")
- if err := srv.Shutdown(context.Background()); err != nil {
- 	log.G(ctx).Errorf("Failed to gracefully shutdown kubelet proxy server: %v", err)
- }
+ <-ctx.Done()
+ log.G(ctx).Info("Shutting down kubelet proxy server")
+ shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+ defer cancel()
+ if err := srv.Shutdown(shutdownCtx); err != nil {
+ 	log.G(ctx).Errorf("Failed to gracefully shutdown kubelet proxy server: %v", err)
+ }
```

Add the missing import:
```go
// Add to the import block
"time"
```
58-62: Nit: polish error message wording

Minor wording improvement for clarity.
```diff
- return nil, fmt.Errorf("fail to load TLS certificate and key: %w", err)
+ return nil, fmt.Errorf("failed to load TLS certificate/key: %w", err)
```
64-67: Optional: consider supporting mTLS (client cert verification)

If this proxy should be accessed only by trusted in-cluster clients, consider adding optional ClientAuth=RequireAndVerifyClientCert and wiring in a ClientCAs pool via configuration.
cmd/module-controller/main.go (5)
55-59: Naming/comment clarity: it’s a default port, not a full address

The constant represents a port. Renaming improves clarity and avoids implying it includes host formatting. Update its usage accordingly.
```diff
-const (
-	certFilePath                 = "/etc/virtual-kubelet/tls/tls.crt"
-	keyFilePath                  = "/etc/virtual-kubelet/tls/tls.key"
-	DefaultKubeletHttpListenAddr = "10250" // Default listen address for the HTTP server
-)
+const (
+	certFilePath            = "/etc/virtual-kubelet/tls/tls.crt"
+	keyFilePath             = "/etc/virtual-kubelet/tls/tls.key"
+	DefaultKubeletHTTPSPort = "10250" // Default listen port for the kubelet proxy HTTPS server
+)
```

And:
```diff
- ":"+utils.GetEnv(model.EnvKeyOfKubeletProxyPort, DefaultKubeletHttpListenAddr),
+ ":"+utils.GetEnv(model.EnvKeyOfKubeletProxyPort, DefaultKubeletHTTPSPort),
```

Also applies to: 198-198
137-143: Consistency: use utils.GetEnv and avoid Fatalf in main path
- Use utils.GetEnv for consistency with the rest of the file.
- Prefer explicit os.Exit after logging (instead of log.G(ctx).Fatalf) to keep logging style consistent (zaplogger) and make control flow clear.
```diff
- if os.Getenv(model.EnvKeyOfKubeletProxyEnabled) == "true" {
+ if utils.GetEnv(model.EnvKeyOfKubeletProxyEnabled, "false") == "true" {
  	moduleControllerServiceIP, err = lookupProxyServiceIP(ctx, deployNamespace, k8sClientSet)
  	if err != nil {
- 		log.G(ctx).Fatalf("Failed to lookup kubelet proxy service IP: %v", err)
+ 		zlogger.Error(err, "Failed to lookup kubelet proxy service IP")
+ 		os.Exit(1)
  	}
  	kubeletProxyEnabled = true
  }
```
192-205: Lifecycle integration: consider running the kubelet proxy as a controller-runtime Runnable

Hooking the proxy into the manager via manager.Add(Runnable) ensures consistent start/stop ordering, readiness, and shared context with other components. It will also simplify graceful shutdown.
262-275: Typo in error string (“deteched”)

Fix the spelling in the error returned when multiple services are found.
```diff
- return "", fmt.Errorf("multiple kubelet proxy services deteched in namespace %s, expected only one", namespace)
+ return "", fmt.Errorf("multiple kubelet proxy services detected in namespace %s, expected only one", namespace)
```
278-280: Optional: use corev1.ClusterIPNone constant

To avoid magic strings, prefer corev1.ClusterIPNone instead of "None". This makes intent clearer and protects against typos.
If you choose to apply this, you’ll need to import:
- k8s.io/api/core/v1 as corev1
And change the comparison to:
firstSvc.Spec.ClusterIP == corev1.ClusterIPNone
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (7)
- cmd/module-controller/main.go (7 hunks)
- common/model/consts.go (1 hunks)
- go.mod (4 hunks)
- staging/kubelet_proxy/pod_handler.go (1 hunks)
- staging/kubelet_proxy/pod_handler_test.go (1 hunks)
- staging/kubelet_proxy/proxy.go (1 hunks)
- staging/kubelet_proxy/proxy_test.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (5)
staging/kubelet_proxy/proxy.go (1)
- staging/kubelet_proxy/pod_handler.go (1): NewPodHandler (21-26)

staging/kubelet_proxy/proxy_test.go (1)
- staging/kubelet_proxy/proxy.go (1): StartKubeletProxy (15-56)

staging/kubelet_proxy/pod_handler_test.go (1)
- staging/kubelet_proxy/pod_handler.go (1): NewPodHandler (21-26)

cmd/module-controller/main.go (4)
- common/model/consts.go (8): EnvKeyOfClientID (24-24), EnvKeyOfClusterModeEnabled (30-30), EnvKeyOfWorkloadMaxLevel (32-32), EnvKeyOfVNodeWorkerNum (34-34), EnvKeyOfNamespace (26-26), EnvKeyOfKubeletProxyEnabled (36-36), EnvKeyOfKubeletProxyPort (38-38), LabelKeyOfKubeletProxyService (18-18)
- common/zaplogger/logger.go (2): GetLogger (11-13), WithLogger (15-17)
- staging/kubelet_proxy/proxy.go (1): StartKubeletProxy (15-56)
- controller/module_deployment_controller/module_deployment_controller.go (5): moduleDeploymentController (263-303), moduleDeploymentController (206-210), env (32-40), moduleDeploymentController (200-203), moduleDeploymentController (58-155)

common/model/consts.go (1)
- controller/module_deployment_controller/module_deployment_controller.go (1): env (32-40)
🔇 Additional comments (1)
common/model/consts.go (1)
17-19: LGTM: new label/env constants are clear and scoped

The additions read well and align with the wiring described in the PR (proxy enablement, port, and service discovery). No issues from a typing or naming standpoint.
Also applies to: 21-39
```go
var kubeletProxyEnabled bool
if os.Getenv(model.EnvKeyOfKubeletProxyEnabled) == "true" {
	moduleControllerServiceIP, err = lookupProxyServiceIP(ctx, deployNamespace, k8sClientSet)
	if err != nil {
		log.G(ctx).Fatalf("Failed to lookup kubelet proxy service IP: %v", err)
	}
	kubeletProxyEnabled = true
}
```
💡 Verification agent
🧩 Analysis chain
Verify label exists on the Service manifest used for discovery
Ensure your kubelet proxy Service is labeled with virtual-kubelet.koupleless.io/kubelet-proxy-service=true in the deployNamespace; otherwise discovery will fail.
Run this script to confirm the label exists in your repo manifests:
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
echo "Searching for kubelet proxy Service label..."
rg -n -C2 'virtual-kubelet\.koupleless\.io/kubelet-proxy-service\s*:\s*"?true"?' -g '!**/vendor/**' -g '!**/node_modules/**'
echo "Checking Services in code (if generated in Go):"
rg -n -C3 'Labels:\s*map\[string\]string\{[^}]*virtual-kubelet\.koupleless\.io/kubelet-proxy-service' --type=go -g '!**/vendor/**'
```

Length of output: 245
Add missing discovery label to the kubelet proxy Service manifest
I ran the provided search across the repository and found no occurrences of
virtual-kubelet.koupleless.io/kubelet-proxy-service: "true"
in your YAML manifests or Go code. Without this label, lookupProxyServiceIP will be unable to find the Service when KUBELET_PROXY_ENABLED=true, causing runtime failures.
Please update the Service manifest used for discovery (e.g., under your deploy/ directory) to include:
```yaml
metadata:
  labels:
    virtual-kubelet.koupleless.io/kubelet-proxy-service: "true"
```

🤖 Prompt for AI Agents
In cmd/module-controller/main.go around lines 136 to 144, the code enables
kubelet proxy discovery but the Service manifest lacks the discovery label so
lookupProxyServiceIP cannot find it at runtime; update the Service manifest used
for deployment (e.g., deploy/ directory) to add the metadata.labels key
virtual-kubelet.koupleless.io/kubelet-proxy-service: "true" to the kubelet proxy
Service so lookupProxyServiceIP can discover the Service when
KUBELET_PROXY_ENABLED=true.
Force-pushed 5e391da to 985346d

Force-pushed 985346d to 8d5a548
Actionable comments posted: 1
♻️ Duplicate comments (1)
cmd/module-controller/main.go (1)
262-282: Service discovery helper: fix typo, verify RBAC and label availability
- Typo: “deteched” -> “detected”.
- Ensure RBAC allows listing Services in the deploy namespace; otherwise discovery will fail at runtime.
- Ensure the Service manifest includes the discovery label; without it, this function won’t find the Service.
Apply the typo fix:
```diff
- return "", fmt.Errorf("multiple kubelet proxy services deteched in namespace %s, expected only one", namespace)
+ return "", fmt.Errorf("multiple kubelet proxy services detected in namespace %s, expected only one", namespace)
```

Run this to verify RBAC and label presence:
```shell
#!/bin/bash
set -euo pipefail
echo "Checking RBAC for listing core services (apiGroup \"\")..."
rg -n -C3 -g '!**/vendor/**' -g '!**/node_modules/**' --type=yaml -P 'kind:\s*(?:ClusterRole|Role).*$(?s).*?rules:\s*\[\s*\]|\s*-\s*apiGroups:\s*\[\s*""\s*\]\s*(?s).*?resources:\s*\[.*"services".*\]\s*(?s).*?verbs:\s*\[.*(list|get|watch).*\]'
echo
echo "Searching for kubelet proxy Service discovery label..."
rg -n -C2 -g '!**/vendor/**' -g '!**/node_modules/**' --type=yaml 'virtual-kubelet\.koupleless\.io/kubelet-proxy-service\s*:\s*"?true"?'
```

Note: The label check duplicates an earlier review reminder; keeping it here for completeness because this helper strictly depends on it.
🧹 Nitpick comments (7)
Dockerfile (1)
2-2: Pin builder image to a patch version aligned with toolchain (reproducibility)

go.mod declares toolchain go1.24.5, but the builder uses an unpinned 1.24 tag. Pinning avoids drift and matches the chosen toolchain.
Apply:
```diff
-FROM golang:1.24 as builder
+FROM golang:1.24.5 as builder
```

debug.Dockerfile (1)
2-2: Pin builder image to a patch version for determinism and to match toolchain

Same rationale as the main Dockerfile; keeps CI reproducible with go1.24.5.
Apply:
```diff
-FROM golang:1.24 as builder
+FROM golang:1.24.5 as builder
```

go.mod (1)
97-97: Remove the commented local replace before merging

Although it’s commented out, it can confuse future maintainers and tooling. Safer to delete.
Apply:
```diff
-//replace github.com/koupleless/virtual-kubelet => /Users/youji.zzl/Documents/workspace/koupleless/virtual-kubelet
```

cmd/module-controller/main.go (4)
55-60: Nit: constant name/comment mislead; it’s an HTTPS port, not an “HTTP listen addr”

10250 is kubelet’s HTTPS port, and the value is a port string, not an address. Consider renaming for clarity and fixing the comment.
Example:
```diff
-const (
-	certFilePath                 = "/etc/virtual-kubelet/tls/tls.crt"
-	keyFilePath                  = "/etc/virtual-kubelet/tls/tls.key"
-	DefaultKubeletHttpListenAddr = "10250" // Default listen address for the HTTP server
-)
+const (
+	certFilePath            = "/etc/virtual-kubelet/tls/tls.crt"
+	keyFilePath             = "/etc/virtual-kubelet/tls/tls.key"
+	DefaultKubeletHTTPSPort = "10250" // Default HTTPS port for the kubelet proxy
+)
```

(Remember to update references accordingly.)
71-71: Nit: spacing in comment

Add a space: “server (if enabled)”.
```diff
-// 9. Starts the kubelet proxy server(if enabled)
+// 9. Starts the kubelet proxy server (if enabled)
```
135-144: Harden proxy enablement: consistent env access + avoid Fatalf + clearer variable name
- Use utils.GetEnv for consistency with the rest of the file.
- Prefer logging via zlogger + os.Exit for consistency (instead of log.G(ctx).Fatalf).
- Consider renaming moduleControllerServiceIP to kubeletProxyServiceIP for clarity.
Apply within this block:
```diff
- if os.Getenv(model.EnvKeyOfKubeletProxyEnabled) == "true" {
+ if utils.GetEnv(model.EnvKeyOfKubeletProxyEnabled, "false") == "true" {
  	moduleControllerServiceIP, err = lookupProxyServiceIP(ctx, deployNamespace, k8sClientSet)
  	if err != nil {
- 		log.G(ctx).Fatalf("Failed to lookup kubelet proxy service IP: %v", err)
+ 		zlogger.Error(err, "failed to lookup kubelet proxy service IP")
+ 		os.Exit(1)
  	}
  	kubeletProxyEnabled = true
  }
```

If you’d like, I can follow up with a small PR to rename moduleControllerServiceIP to kubeletProxyServiceIP across the file.
192-205: Validate KUBELET_PROXY_PORT and improve configurability
- Validate the port to fail fast to a safe default on bad input.
- Consider making cert/key paths overrideable via env in the future.
Apply within this block:
```diff
 if kubeletProxyEnabled {
 	zlogger.Info("starting kubelet proxy server")
-	err := kubelet_proxy.StartKubeletProxy(
-		ctx,
-		certFilePath,
-		keyFilePath,
-		":"+utils.GetEnv(model.EnvKeyOfKubeletProxyPort, DefaultKubeletHttpListenAddr),
-		k8sClientSet,
-	)
+	portStr := utils.GetEnv(model.EnvKeyOfKubeletProxyPort, DefaultKubeletHttpListenAddr)
+	if _, perr := strconv.Atoi(portStr); perr != nil {
+		zlogger.Error(perr, "invalid KUBELET_PROXY_PORT value, falling back to default", "value", portStr, "default", DefaultKubeletHttpListenAddr)
+		portStr = DefaultKubeletHttpListenAddr
+	}
+	listenAddr := ":" + portStr
+	err := kubelet_proxy.StartKubeletProxy(ctx, certFilePath, keyFilePath, listenAddr, k8sClientSet)
 	if err != nil {
 		zlogger.Error(err, "failed to start kubelet proxy server")
 		os.Exit(1)
 	}
 }
```

Optionally, consider env overrides (e.g., KUBELET_PROXY_CERT_FILE, KUBELET_PROXY_KEY_FILE) to avoid hard-coded paths.
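The fallback logic in the suggestion reduces to a small pure function. This sketch uses a local constant as a stand-in for DefaultKubeletHttpListenAddr and adds a range check, since valid TCP ports are 1-65535 (the `validatedPort` helper name is illustrative, not from the PR):

```go
package main

import (
	"fmt"
	"strconv"
)

const defaultKubeletPort = "10250" // stand-in for DefaultKubeletHttpListenAddr

// validatedPort returns raw if it parses as a usable TCP port number,
// otherwise it falls back to the default so startup can proceed safely.
func validatedPort(raw string) string {
	n, err := strconv.Atoi(raw)
	if err != nil || n < 1 || n > 65535 {
		return defaultKubeletPort
	}
	return raw
}

func main() {
	fmt.Println(validatedPort("8443"))       // valid: kept as-is
	fmt.Println(validatedPort("not-a-port")) // not numeric: default
	fmt.Println(validatedPort("70000"))      // out of range: default
}
```

Keeping the check pure makes it trivial to unit-test, unlike validation inlined into main.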
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (10)
- .golangci.yml (1 hunks)
- Dockerfile (2 hunks)
- cmd/module-controller/main.go (7 hunks)
- common/model/consts.go (1 hunks)
- debug.Dockerfile (2 hunks)
- go.mod (4 hunks)
- staging/kubelet_proxy/pod_handler.go (1 hunks)
- staging/kubelet_proxy/pod_handler_test.go (1 hunks)
- staging/kubelet_proxy/proxy.go (1 hunks)
- staging/kubelet_proxy/proxy_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- staging/kubelet_proxy/pod_handler_test.go
- staging/kubelet_proxy/proxy_test.go
- staging/kubelet_proxy/pod_handler.go
- staging/kubelet_proxy/proxy.go
🧰 Additional context used
🧬 Code Graph Analysis (2)
common/model/consts.go (1)
- controller/module_deployment_controller/module_deployment_controller.go (1): env (32-40)

cmd/module-controller/main.go (4)
- common/model/consts.go (8): EnvKeyOfClientID (24-24), EnvKeyOfClusterModeEnabled (30-30), EnvKeyOfWorkloadMaxLevel (32-32), EnvKeyOfVNodeWorkerNum (34-34), EnvKeyOfNamespace (26-26), EnvKeyOfKubeletProxyEnabled (36-36), EnvKeyOfKubeletProxyPort (38-38), LabelKeyOfKubeletProxyService (18-18)
- common/zaplogger/logger.go (2): GetLogger (11-13), WithLogger (15-17)
- staging/kubelet_proxy/proxy.go (1): StartKubeletProxy (15-56)
- controller/module_deployment_controller/module_deployment_controller.go (1): env (32-40)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: unit-test
🔇 Additional comments (11)
Dockerfile (1)
39-39: LGTM on ENTRYPOINT newline

No functional change; fine to keep.
debug.Dockerfile (1)
46-46: LGTM on CMD trailing newline

No behavioral impact for the debug image.
go.mod (3)
5-5: Toolchain bump looks good; ensure builder uses the same patch to prevent drift

go.mod pins toolchain go1.24.5. Please keep the Docker builder pinned to 1.24.5 as well (see Dockerfiles).
13-13: virtual-kubelet fork bump to v0.3.9 looks fine

No concerns from here; aligns with new kubelet proxy usage.
18-18: Double-check need for both upstream and forked virtual-kubelet modules

You depend on both:
- github.com/koupleless/virtual-kubelet v0.3.9
- github.com/virtual-kubelet/virtual-kubelet v1.11.0
If intentional, all good; otherwise it can increase binary size and risk import confusion. Consider consolidating.
If helpful, I can scan the repo to list which packages import each module to assess consolidation.

common/model/consts.go (2)
17-19: Label constant addition LGTM

Clear name and scope for kubelet proxy service selection.
21-38: Env key additions LGTM; consistent naming and docs

Nice consolidation of env keys; this improves discoverability and avoids stringly-typed duplication.
cmd/module-controller/main.go (4)
28-34: LGTM: imports and wiring look correct

Reintroducing the controller-runtime logger and health checks, and adding kube client, report_server, and kubelet_proxy integration all look consistent with usage below.
Also applies to: 38-38, 44-47, 49-49
88-109: LGTM: environment configuration via model keys and sane defaults

Using model.EnvKey* constants and defaulting the namespace is a good improvement for consistency and operability.
133-134: LGTM: in-cluster ClientSet initialization

Using NewForConfigOrDie with the same kubeConfig as the manager is appropriate here.
153-155: LGTM: PseudoNodeIP wiring

Passing the service IP to PseudoNodeIP (with fallback semantics) aligns with the intended behavior.
Force-pushed 0ec1184 to 0e7f60f

Force-pushed 0e7f60f to b79c8b0
Documentation is still incomplete; the TLS certificate approach needs to be discussed.
lylingzhen left a comment: reviewed
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main      #42      +/-   ##
==========================================
+ Coverage   65.60%   67.11%   +1.50%
==========================================
  Files          10       12       +2
  Lines        1314     1426     +112
==========================================
+ Hits          862      957      +95
- Misses        390      403      +13
- Partials       62       66       +4
```
Derrickers left a comment: LGTM
These are the module-controller-side changes for proposal koupleless/koupleless#412.