Add security vulnerability management #423
base: main
Conversation
…ity scanner, autonomous patcher, and scheduler. Implemented CLI commands for scanning, patching, and scheduling security tasks. Added documentation for usage and configuration.
📝 Walkthrough
Adds a new security stack: an OSV-backed VulnerabilityScanner with caching/history and CLI, an AutonomousPatcher with policy-driven planning/execution and CLI, a SecurityScheduler with scheduling/systemd helpers and CLI wiring, docs, and comprehensive unit tests.
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant User
    participant Scanner as VulnerabilityScanner
    participant Cache as SQLite Cache
    participant OSV as OSV API
    participant Result as ScanResult
    User->>Scanner: scan_all_packages()
    activate Scanner
    Scanner->>Scanner: _get_installed_packages()
    loop per package
        Scanner->>Cache: _check_cache(pkg, ver)
        alt cache hit
            Cache-->>Scanner: cached data
        else cache miss
            Scanner->>OSV: _query_osv(pkg, ver)
            OSV-->>Scanner: vuln data
            Scanner->>Cache: _save_cache(pkg, ver, data)
        end
        Scanner->>Scanner: aggregate
    end
    Scanner->>Result: build ScanResult
    Scanner->>Cache: _save_scan_history(result)
    Scanner-->>User: return ScanResult
    deactivate Scanner
```
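The OSV lookup step in the scanner diagram (`_query_osv`) targets the public OSV API, which accepts a POST to `/v1/query` with a package name, ecosystem, and version. The following is a standalone sketch of that interaction, not the PR's actual implementation; the helper names are illustrative, and the Debian ecosystem is assumed per the review comments:

```python
import json
import urllib.request

OSV_API_URL = "https://api.osv.dev/v1/query"  # public OSV query endpoint


def build_osv_query(package: str, version: str, ecosystem: str = "Debian") -> dict:
    """Build the JSON payload OSV expects for a package/version lookup."""
    return {"package": {"name": package, "ecosystem": ecosystem}, "version": version}


def query_osv(package: str, version: str, timeout: float = 10.0) -> list[dict]:
    """Return the raw vulnerability records OSV reports for this package/version."""
    data = json.dumps(build_osv_query(package, version)).encode()
    req = urllib.request.Request(
        OSV_API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        # An empty response body ({}) means no known vulnerabilities
        return json.load(resp).get("vulns", [])
```

A scanner would call `query_osv` once per cache miss and persist the (possibly empty) result, matching the cache-hit/cache-miss branches in the diagram.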
```mermaid
sequenceDiagram
    participant User
    participant Patcher as AutonomousPatcher
    participant Scanner as VulnerabilityScanner
    participant System as apt/dpkg
    participant History as InstallationHistory
    User->>Patcher: patch_vulnerabilities(optional vulns)
    activate Patcher
    Patcher->>Patcher: _load_config()
    alt vulnerabilities not provided
        Patcher->>Scanner: scan_all_packages()
        Scanner-->>Patcher: ScanResult
    end
    Patcher->>Patcher: create_patch_plan(vulns)
    loop per package in plan
        Patcher->>Patcher: _should_patch(vuln)
        alt approved
            Patcher->>Patcher: _check_package_update_available(pkg)
        end
    end
    Patcher->>Patcher: apply_patch_plan(plan)
    alt dry_run
        Patcher-->>User: simulated PatchResult
    else live apply
        Patcher->>System: apt-get install/upgrade
        System-->>Patcher: output
        Patcher->>History: record installations
        Patcher-->>User: PatchResult
    end
    deactivate Patcher
```
```mermaid
sequenceDiagram
    participant User
    participant Scheduler as SecurityScheduler
    participant Scanner as VulnerabilityScanner
    participant Patcher as AutonomousPatcher
    participant Systemd as systemd
    User->>Scheduler: create_schedule(id, freq, ...)
    activate Scheduler
    Scheduler->>Scheduler: _calculate_next_run()
    Scheduler->>Scheduler: _save_schedules()
    Scheduler-->>User: SecuritySchedule
    deactivate Scheduler
    User->>Scheduler: install_systemd_timer(id)
    activate Scheduler
    Scheduler->>Systemd: write service & timer units
    Systemd-->>Scheduler: enabled/started
    Scheduler-->>User: result
    deactivate Scheduler
    Systemd->>Scheduler: trigger run_schedule(id)
    activate Scheduler
    alt scan_enabled
        Scheduler->>Scanner: scan_all_packages()
        Scanner-->>Scheduler: ScanResult
    end
    alt patch_enabled
        Scheduler->>Patcher: patch_vulnerabilities(ScanResult)
        Patcher-->>Scheduler: PatchResult
    end
    Scheduler->>Scheduler: update last_run/next_run and _save_schedules()
    Scheduler-->>Systemd: aggregated results
    deactivate Scheduler
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 inconclusive)
Actionable comments posted: 12
🧹 Nitpick comments (6)
cortex/vulnerability_scanner.py (3)
408-418: Cache is not saved when no vulnerabilities are found.
When `_query_osv` returns an empty list, `_save_cache` is not called. This means packages with no known vulnerabilities will always hit the API on subsequent scans, defeating the purpose of the 24-hour cache. Consider caching empty results as well to reduce API calls.
🔎 Proposed fix to cache empty results
```diff
 def scan_package(self, package_name: str, version: str) -> list[Vulnerability]:
     cached = self._check_cache(package_name, version)
     if cached is not None:
         return cached
     vulnerabilities = self._query_osv(package_name, version)
-    if vulnerabilities:
-        self._save_cache(package_name, version, vulnerabilities)
+    # Cache results even if empty to avoid repeated API calls
+    self._save_cache(package_name, version, vulnerabilities)
     return vulnerabilities
```
Note: You'll also need to update `_check_cache` to distinguish between "not in cache" and "cached empty result" - currently both return `None`.
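One common way to make a cache distinguish a miss from a cached empty result is a module-level sentinel. The following is a self-contained sketch of that pattern, not the scanner's actual cache code; all names are illustrative:

```python
_MISS = object()  # sentinel: distinguishes "not in cache" from "cached empty list"


class VulnCache:
    """Toy in-memory cache keyed by (package, version)."""

    def __init__(self):
        self._store = {}

    def get(self, pkg, ver):
        # Returns the cached list (possibly empty) or the _MISS sentinel
        return self._store.get((pkg, ver), _MISS)

    def put(self, pkg, ver, vulns):
        self._store[(pkg, ver)] = list(vulns)


def scan_package(cache, query_fn, pkg, ver):
    cached = cache.get(pkg, ver)
    if cached is not _MISS:      # cache hit, even when the list is empty
        return cached
    vulns = query_fn(pkg, ver)
    cache.put(pkg, ver, vulns)   # cache empty results too
    return vulns
```

With this shape, a clean package only ever costs one API call per cache lifetime.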
80-91: Missing docstring for public class `VulnerabilityScanner`.
Per coding guidelines, docstrings are required for all public APIs. The class has a brief single-line docstring but lacks documentation of the constructor parameters and the class's public interface.
310-406: Add explicit timeout exception handling for clearer diagnostics.
The 10-second timeout is properly set, and the OSV API query format is correct for the Debian ecosystem. However, `requests.RequestException` catches all request errors generically. Since `requests.Timeout` (and its subclasses `ConnectTimeout` and `ReadTimeout`) is itself a subclass of `RequestException`, consider catching it explicitly before the broader exception handler to provide more specific logging and distinguish timeout failures from other request errors. Additionally, consider using separate connect and read timeouts (e.g., `timeout=(5, 15)`) rather than a single value for finer-grained control.
cortex/cli.py (1)
1030-1036: Accessing private method `_get_installed_packages` from CLI handler.
The CLI handler accesses `scanner._get_installed_packages()`, which is a private method (underscore prefix). Consider either making this method public or adding a public API to check if a package is installed.
cortex/security_scheduler.py (1)
249-254: Duplicate calculation of next_run is inefficient.
`_calculate_next_run` is called twice in succession. Store the result in a variable to avoid redundant computation.
🔎 Proposed fix
```diff
 # Update schedule
 schedule.last_run = datetime.now().isoformat()
-schedule.next_run = (
-    self._calculate_next_run(schedule.frequency, schedule.custom_cron).isoformat()
-    if self._calculate_next_run(schedule.frequency, schedule.custom_cron)
-    else None
-)
+next_run = self._calculate_next_run(schedule.frequency, schedule.custom_cron)
+schedule.next_run = next_run.isoformat() if next_run else None
 self._save_schedules()
```
cortex/autonomous_patcher.py (1)
134-142: Missing type hint for return value in `_run_command`.
Per coding guidelines, type hints are required. The method docstring mentions the return type, but consider adding it to the signature for clarity.
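A fully annotated signature for such a command runner might look like the following standalone sketch; this is a hypothetical helper illustrating the `tuple[bool, str, str]` return-type annotation, not the PR's actual `_run_command`:

```python
import subprocess


def run_command(cmd: list[str], timeout: float = 300.0) -> tuple[bool, str, str]:
    """Run a command and return (success, stdout, stderr)."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return proc.returncode == 0, proc.stdout, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "", f"command timed out after {timeout}s"
    except FileNotFoundError as exc:
        # Binary not on PATH; surface the error instead of raising
        return False, "", str(exc)
```

With the annotation in place, type checkers can verify every caller unpacks the triple correctly.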
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cortex/autonomous_patcher.py
- cortex/cli.py
- cortex/security_scheduler.py
- cortex/vulnerability_scanner.py
- docs/SECURITY_MANAGEMENT.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/security_scheduler.py
- cortex/vulnerability_scanner.py
- cortex/autonomous_patcher.py
- cortex/cli.py
🧬 Code graph analysis (2)
cortex/security_scheduler.py (2)
- cortex/autonomous_patcher.py (1): PatchStrategy (29-35)
- cortex/vulnerability_scanner.py (2): VulnerabilityScanner (80-576), scan_all_packages (420-485)
cortex/vulnerability_scanner.py (3)
- cortex/utils/db_pool.py (3): SQLiteConnectionPool (19-172), get_connection_pool (181-222), get_connection (99-136)
- cortex/logging_system.py (4): info (200-202), debug (196-198), warning (204-206), critical (212-214)
- cortex/cli.py (1): history (813-881)
🪛 GitHub Actions: CI
cortex/autonomous_patcher.py
[error] 318-318: Command 'ruff check . --output-format=github' failed with F541: f-string without any placeholders in cortex/autonomous_patcher.py:318.
🪛 GitHub Check: lint
cortex/security_scheduler.py
[failure] 475-475: Ruff (F541)
cortex/security_scheduler.py:475:23: F541 f-string without any placeholders
cortex/vulnerability_scanner.py
[failure] 176-176: Ruff (W293)
cortex/vulnerability_scanner.py:176:1: W293 Blank line contains whitespace
[failure] 173-173: Ruff (W293)
cortex/vulnerability_scanner.py:173:1: W293 Blank line contains whitespace
cortex/autonomous_patcher.py
[failure] 522-522: Ruff (F541)
cortex/autonomous_patcher.py:522:19: F541 f-string without any placeholders
[failure] 516-516: Ruff (F541)
cortex/autonomous_patcher.py:516:19: F541 f-string without any placeholders
[failure] 438-438: Ruff (F541)
cortex/autonomous_patcher.py:438:21: F541 f-string without any placeholders
[failure] 318-318: Ruff (F541)
cortex/autonomous_patcher.py:318:13: F541 f-string without any placeholders
cortex/cli.py
[failure] 1192-1192: Ruff (F541)
cortex/cli.py:1192:35: F541 f-string without any placeholders
[failure] 1069-1069: Ruff (F541)
cortex/cli.py:1069:31: F541 f-string without any placeholders
[failure] 1058-1058: Ruff (F541)
cortex/cli.py:1058:27: F541 f-string without any placeholders
🪛 GitHub Check: Lint
cortex/security_scheduler.py
[failure] 475-475: Ruff (F541)
cortex/security_scheduler.py:475:23: F541 f-string without any placeholders
cortex/vulnerability_scanner.py
[failure] 176-176: Ruff (W293)
cortex/vulnerability_scanner.py:176:1: W293 Blank line contains whitespace
[failure] 173-173: Ruff (W293)
cortex/vulnerability_scanner.py:173:1: W293 Blank line contains whitespace
cortex/autonomous_patcher.py
[failure] 522-522: Ruff (F541)
cortex/autonomous_patcher.py:522:19: F541 f-string without any placeholders
[failure] 516-516: Ruff (F541)
cortex/autonomous_patcher.py:516:19: F541 f-string without any placeholders
[failure] 438-438: Ruff (F541)
cortex/autonomous_patcher.py:438:21: F541 f-string without any placeholders
[failure] 318-318: Ruff (F541)
cortex/autonomous_patcher.py:318:13: F541 f-string without any placeholders
cortex/cli.py
[failure] 1192-1192: Ruff (F541)
cortex/cli.py:1192:35: F541 f-string without any placeholders
[failure] 1069-1069: Ruff (F541)
cortex/cli.py:1069:31: F541 f-string without any placeholders
[failure] 1058-1058: Ruff (F541)
cortex/cli.py:1058:27: F541 f-string without any placeholders
🪛 LanguageTool
docs/SECURITY_MANAGEMENT.md
[uncategorized] ~100-~100: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...| |--------|---------|-------| | OSV (Open Source Vulnerabilities) | Primary database, ...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🪛 markdownlint-cli2 (0.18.1)
docs/SECURITY_MANAGEMENT.md
117-117: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
165-165: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
215-215: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Agent
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
🔇 Additional comments (8)
cortex/vulnerability_scanner.py (1)
38-57: LGTM! Dataclass with mutable default handled correctly.
The `Vulnerability` dataclass properly handles the mutable default for `references` by using `None` as the default and initializing to an empty list in `__post_init__`. This avoids the common Python pitfall of shared mutable defaults.
docs/SECURITY_MANAGEMENT.md (1)
1-235: Documentation is comprehensive and well-structured.The documentation clearly explains the problem, solution, CLI commands, safety controls, and architecture. The acceptance criteria align with the implementation in the code modules.
cortex/cli.py (2)
979-1003: Good error handling structure with verbose traceback support.The security command handler follows the established pattern with proper exception handling and verbose mode support for debugging.
1991-2030: CLI argument structure for security commands is well-designed.The subcommand structure with scan, patch, and schedule actions follows the existing CLI patterns and provides clear help text. The default dry-run mode for patch is a good safety measure.
cortex/security_scheduler.py (1)
58-84: Schedule loading handles errors gracefully but silently continues.
The `_load_schedules` method logs a warning on failure but doesn't propagate the error. This is acceptable for resilience but could mask configuration corruption. Consider adding a more prominent warning when the config file exists but fails to parse.
cortex/autonomous_patcher.py (3)
63-93: Well-designed safety controls with sensible defaults.
The `AutonomousPatcher` class initializes with safe defaults: `dry_run=True`, `auto_approve=False`, and `min_severity=MEDIUM`. The whitelist/blacklist mechanism provides good flexibility for operators.
277-314: Dry-run mode correctly prevents actual package modifications.
The `apply_patch_plan` method properly checks `self.dry_run` before executing any apt commands and returns a result indicating what would have been done. This is a critical safety feature.
340-358: Package installation uses specific version pinning when available.
The code correctly uses the `package=version` format for apt to ensure the intended version is installed. This prevents accidental upgrades beyond the known-patched version.
Pull request overview
This PR implements a comprehensive security vulnerability management system for Cortex Linux with autonomous patching capabilities. The implementation adds vulnerability scanning against CVE databases (OSV), automated security patching with safety controls, and scheduled security maintenance through systemd timers.
Key Changes:
- Vulnerability scanning system querying OSV API for CVE data with 24-hour caching
- Autonomous patcher with configurable strategies (critical-only, high-and-above, automatic) and dry-run mode by default
- Security scheduler for automated scans/patches with systemd integration
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 25 comments.
Show a summary per file
| File | Description |
|---|---|
| docs/SECURITY_MANAGEMENT.md | Comprehensive documentation of security features, architecture, and usage examples |
| cortex/vulnerability_scanner.py | Core scanning engine querying CVE databases with caching and scan history tracking |
| cortex/autonomous_patcher.py | Automated patching system with whitelist/blacklist controls and rollback support |
| cortex/security_scheduler.py | Schedule management for recurring security tasks with systemd timer integration |
| cortex/cli.py | CLI integration adding cortex security command with scan/patch/schedule subcommands |
…r. Implemented ensure_apt_updated method to manage apt updates efficiently, reducing redundant calls and improving performance. Updated documentation for package update checks.
…nd addition. Remove unnecessary variable assignment for the 'list' command. Clean up imports in vulnerability_scanner.py by removing unused urllib.parse import.
…scheduler.py. Improved error handling for permission issues and updated method name for clarity.
…urity_scheduler.py to remove unnecessary f-strings. Clean up whitespace in vulnerability_scanner.py for improved readability.
…ggregate valid vulnerabilities and handle expired cache entries. Improved error handling and ensured proper deletion of outdated records.
…-patch to --enable-patching and adjust code block formatting to specify text type.
Actionable comments posted: 0
♻️ Duplicate comments (11)
cortex/cli.py (4)
1029-1033: Encapsulation violation: avoid calling private methods.
Line 1030 calls `scanner._get_installed_packages()`, which is a private method (indicated by the underscore prefix). Consider adding a public method to VulnerabilityScanner for checking package existence.
Suggested refactor
Add a public method to VulnerabilityScanner:
```python
def is_package_installed(self, package_name: str) -> bool:
    """Check if a package is installed."""
    packages = self._get_installed_packages()
    return package_name in packages

def get_package_version(self, package_name: str) -> str | None:
    """Get version of an installed package."""
    packages = self._get_installed_packages()
    return packages.get(package_name)
```
Then update CLI to use:
```diff
-        if package:
-            packages = scanner._get_installed_packages()
-            if package not in packages:
+        if package:
+            version = scanner.get_package_version(package)
+            if version is None:
                 self._print_error(f"Package {package} not found")
                 return 1
-
-            version = packages[package]
             vulns = scanner.scan_package(package, version)
```
1102-1102: Simplify dry-run logic.
The expression `getattr(args, "dry_run", True) and not getattr(args, "apply", False)` is confusing because argparse already sets the `dry_run` default to `True`. Simplify to just check the `--apply` flag.
```diff
-        dry_run = getattr(args, "dry_run", True) and not getattr(args, "apply", False)
+        dry_run = not getattr(args, "apply", False)
```
1188-1193: Improve error message specificity.
The error message on line 1192 should include the schedule ID for better context.
```diff
 elif schedule_action == "install-timer":
     if scheduler.install_systemd_timer(args.id):
         cx_print(f"✅ Installed systemd timer for {args.id}", "success")
     else:
-        self._print_error(f"Failed to install systemd timer for {args.id}")
+        self._print_error(f"Failed to install systemd timer for {args.id}")
         return 1
```
Note: The f-string is now correct with the placeholder, but this was flagged in past reviews.
2004-2004: Remove default value from boolean flag.
The `--dry-run` flag has `default=True`, which makes it always True even when not specified. For boolean flags with `action="store_true"`, omit the default parameter.
```diff
-    sec_patch_parser.add_argument("--dry-run", action="store_true", default=True, help="Dry run mode (default)")
+    sec_patch_parser.add_argument("--dry-run", action="store_true", help="Dry run mode (default)")
```
Then simplify the logic at line 1102 to just check the `--apply` flag.
cortex/security_scheduler.py (3)
168-170: Monthly calculation uses fixed 30-day interval.
The monthly frequency uses `timedelta(days=30)`, which doesn't account for months with different lengths (28, 29, 31 days). This will cause schedule drift over time.
Consider using `dateutil.relativedelta` or proper month arithmetic:
```python
# Option 1: Use dateutil
from dateutil.relativedelta import relativedelta
return now + relativedelta(months=1)

# Option 2: Manual calculation
next_month = now.month + 1 if now.month < 12 else 1
next_year = now.year if now.month < 12 else now.year + 1
return now.replace(year=next_year, month=next_month)
```
249-253: Avoid duplicate `_calculate_next_run` call.
Lines 250-251 call `_calculate_next_run` twice with the same parameters, which is inefficient and could produce different results if the calculation is time-sensitive.
```diff
 # Update schedule
 schedule.last_run = datetime.now().isoformat()
-schedule.next_run = (
-    self._calculate_next_run(schedule.frequency, schedule.custom_cron).isoformat()
-    if self._calculate_next_run(schedule.frequency, schedule.custom_cron)
-    else None
-)
+next_run_dt = self._calculate_next_run(schedule.frequency, schedule.custom_cron)
+schedule.next_run = next_run_dt.isoformat() if next_run_dt else None
 self._save_schedules()
```
287-287: Hardcoded cortex binary path will fail in many environments.
Line 287 hardcodes `ExecStart=/usr/bin/cortex`, which won't work for development installations, pip installs, or non-standard locations.
Use dynamic path detection similar to `mcp/cortex_mcp_server.py`:
```python
def _find_cortex_binary(self) -> str:
    """Find the cortex binary path."""
    import shutil
    import sys

    # Try which command
    cortex_path = shutil.which("cortex")
    if cortex_path:
        return cortex_path

    # Try relative to current Python executable
    python_dir = Path(sys.executable).parent
    cortex_in_venv = python_dir / "cortex"
    if cortex_in_venv.exists():
        return str(cortex_in_venv)

    # Fallback to /usr/bin/cortex
    return "/usr/bin/cortex"
```
Then use in service content:
```diff
+cortex_path = self._find_cortex_binary()
 service_content = f"""[Unit]
 Description=Cortex Security Scan/Patch - {schedule_id}
 After=network.target

 [Service]
 Type=oneshot
-ExecStart=/usr/bin/cortex security run {schedule_id}
+ExecStart={cortex_path} security run {schedule_id}
 User=root
 """
```
Based on learnings from `mcp/cortex_mcp_server.py`, which implements similar path detection logic.
cortex/autonomous_patcher.py (1)
75-87: Remove unused auto_approve parameter.
The `auto_approve` parameter is defined and stored but never used anywhere in the class. Remove it to avoid confusion, or implement the approval logic if it's intended for future use.
```diff
 def __init__(
     self,
     strategy: PatchStrategy = PatchStrategy.CRITICAL_ONLY,
     dry_run: bool = True,
-    auto_approve: bool = False,
 ):
     """
     Initialize the autonomous patcher.

     Args:
         strategy: Patching strategy
         dry_run: If True, only show what would be patched
-        auto_approve: If True, automatically approve patches (dangerous!)
     """
     self.strategy = strategy
     self.dry_run = dry_run
-    self.auto_approve = auto_approve
```
cortex/vulnerability_scanner.py (3)
156-188: Document CVSS approximation and fix whitespace.
The CVSS score calculation is a simplified approximation that doesn't match the official CVSS v3.1 specification. Document this in the docstring to set correct expectations.
```diff
 def _parse_cvss_vector(self, vector_string: str) -> float | None:
-    """Parse a CVSS vector string and estimate a severity score."""
+    """
+    Parse a CVSS vector string and estimate a severity score.
+
+    Note: This is a simplified approximation, not an official CVSS v3.1 calculation.
+    It provides a reasonable severity estimate but should not be used for compliance
+    or official security reporting. For accurate CVSS scores, use a dedicated library.
+    """
     if not vector_string or not vector_string.startswith("CVSS:"):
         return None
     try:
         parts = vector_string.split("/")
         metrics = {}
         for part in parts[1:]:
             if ":" in part:
                 key, value = part.split(":", 1)
                 metrics[key] = value
         score = 0.0
         av_scores = {"N": 2.5, "A": 2.0, "L": 1.5, "P": 1.0}
         score += av_scores.get(metrics.get("AV", "L"), 1.5)
         ac_scores = {"L": 1.5, "H": 0.5}
         score += ac_scores.get(metrics.get("AC", "L"), 1.0)
         pr_scores = {"N": 1.5, "L": 1.0, "H": 0.5}
         score += pr_scores.get(metrics.get("PR", "L"), 1.0)
-
+
         impact_scores = {"H": 1.5, "L": 0.75, "N": 0.0}
         score += impact_scores.get(metrics.get("C", "N"), 0.5)
```
150-154: Add lock for thread-safe rate limiting.
The rate limiting has a race condition where multiple threads could bypass the check simultaneously. Add a lock to protect the critical section.
```diff
+def __init__(self, db_path: str | None = None):
+    # ... existing init code ...
+    self._rate_limit_lock = threading.Lock()
+
 def _rate_limit(self):
-    elapsed = time.time() - self.last_api_call
-    if elapsed < self.min_api_interval:
-        time.sleep(self.min_api_interval - elapsed)
-    self.last_api_call = time.time()
+    with self._rate_limit_lock:
+        elapsed = time.time() - self.last_api_call
+        if elapsed < self.min_api_interval:
+            time.sleep(self.min_api_interval - elapsed)
+        self.last_api_call = time.time()
```
457-475: Consider using Rich Progress for cleaner output.
The progress output uses `print()` directly, which bypasses logging configuration and can interfere with other output. Since the codebase already uses the Rich library (via `cortex.branding`), consider using `rich.progress.Progress` for a more professional progress display.
This is optional because the current implementation works, but Rich Progress would provide:
- Better formatting and terminal control
- Automatic cleanup on completion
- No interference with logging output
- Spinners and progress bars
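A minimal sketch of what such a Rich-based scan loop could look like, with a plain-`print` fallback when Rich is unavailable. The function and parameter names here are illustrative, not the scanner's actual API:

```python
def scan_with_progress(packages: dict, scan_fn) -> dict:
    """Scan packages with a Rich progress bar, falling back to plain output."""
    results = {}
    try:
        from rich.progress import Progress

        with Progress() as progress:
            task = progress.add_task("Scanning packages...", total=len(packages))
            for pkg, ver in packages.items():
                results[pkg] = scan_fn(pkg, ver)
                progress.advance(task)  # one tick per package
    except ImportError:
        # Rich not installed: degrade to simple counters on stdout
        for i, (pkg, ver) in enumerate(packages.items(), 1):
            print(f"[{i}/{len(packages)}] {pkg}")
            results[pkg] = scan_fn(pkg, ver)
    return results
```

`Progress` handles terminal cleanup on exit of the `with` block, so it never leaves a half-drawn bar behind.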
🧹 Nitpick comments (1)
cortex/security_scheduler.py (1)
352-367: Remove trailing whitespace from blank lines.
Lines 355 and 359 contain trailing whitespace. Remove it to fix linting errors.
```diff
 def _has_root_privileges(self) -> bool:
     """Check if we have root privileges (running as root or have passwordless sudo)"""
     import os
-
+
     # Check if running as root
     if os.geteuid() == 0:
         return True
-
+
     # Check if we have passwordless sudo access
     try:
         result = subprocess.run(
             ["sudo", "-n", "true"],
             capture_output=True,
             timeout=2
         )
         return result.returncode == 0
     except Exception:
         return False
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cortex/autonomous_patcher.py
- cortex/cli.py
- cortex/security_scheduler.py
- cortex/vulnerability_scanner.py
- docs/SECURITY_MANAGEMENT.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- cortex/security_scheduler.py
- cortex/autonomous_patcher.py
- cortex/vulnerability_scanner.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
🧬 Code graph analysis (4)
cortex/cli.py (5)
- cortex/first_run_wizard.py (1): _print_error (746-748)
- cortex/vulnerability_scanner.py (5): Severity (27-34), VulnerabilityScanner (79-593), _get_installed_packages (190-212), scan_package (425-435), scan_all_packages (437-502)
- cortex/branding.py (1): cx_print (49-69)
- cortex/autonomous_patcher.py (3): AutonomousPatcher (68-509), PatchStrategy (34-40), patch_vulnerabilities (456-491)
- cortex/security_scheduler.py (6): ScheduleFrequency (25-31), SecurityScheduler (49-384), create_schedule (113-156), list_schedules (369-371), run_schedule (178-262), install_systemd_timer (264-339)
cortex/security_scheduler.py (2)
- cortex/autonomous_patcher.py (1): PatchStrategy (34-40)
- cortex/vulnerability_scanner.py (2): VulnerabilityScanner (79-593), scan_all_packages (437-502)
cortex/autonomous_patcher.py (2)
- cortex/installation_history.py (3): InstallationHistory (73-628), InstallationStatus (36-42), InstallationType (26-33)
- cortex/vulnerability_scanner.py (2): Severity (27-34), Vulnerability (38-55)
cortex/vulnerability_scanner.py (1)
- cortex/utils/db_pool.py (3): SQLiteConnectionPool (19-172), get_connection_pool (181-222), get_connection (99-136)
🪛 GitHub Actions: CI
cortex/autonomous_patcher.py
[error] 152-152: ruff check reported W293: Blank line contains whitespace. (Command failed: 'ruff check . --output-format=github')
🪛 GitHub Check: lint
cortex/security_scheduler.py
[failure] 359-359: Ruff (W293)
cortex/security_scheduler.py:359:1: W293 Blank line contains whitespace
[failure] 355-355: Ruff (W293)
cortex/security_scheduler.py:355:1: W293 Blank line contains whitespace
cortex/autonomous_patcher.py
[failure] 188-188: Ruff (W293)
cortex/autonomous_patcher.py:188:1: W293 Blank line contains whitespace
[failure] 174-174: Ruff (W293)
cortex/autonomous_patcher.py:174:1: W293 Blank line contains whitespace
[failure] 170-170: Ruff (W293)
cortex/autonomous_patcher.py:170:1: W293 Blank line contains whitespace
[failure] 163-163: Ruff (W293)
cortex/autonomous_patcher.py:163:1: W293 Blank line contains whitespace
[failure] 160-160: Ruff (W293)
cortex/autonomous_patcher.py:160:1: W293 Blank line contains whitespace
[failure] 155-155: Ruff (W293)
cortex/autonomous_patcher.py:155:1: W293 Blank line contains whitespace
[failure] 152-152: Ruff (W293)
cortex/autonomous_patcher.py:152:1: W293 Blank line contains whitespace
cortex/vulnerability_scanner.py
[failure] 178-178: Ruff (W293)
cortex/vulnerability_scanner.py:178:1: W293 Blank line contains whitespace
🪛 GitHub Check: Lint
cortex/security_scheduler.py
[failure] 359-359: Ruff (W293)
cortex/security_scheduler.py:359:1: W293 Blank line contains whitespace
[failure] 355-355: Ruff (W293)
cortex/security_scheduler.py:355:1: W293 Blank line contains whitespace
cortex/autonomous_patcher.py
[failure] 188-188: Ruff (W293)
cortex/autonomous_patcher.py:188:1: W293 Blank line contains whitespace
[failure] 174-174: Ruff (W293)
cortex/autonomous_patcher.py:174:1: W293 Blank line contains whitespace
[failure] 170-170: Ruff (W293)
cortex/autonomous_patcher.py:170:1: W293 Blank line contains whitespace
[failure] 163-163: Ruff (W293)
cortex/autonomous_patcher.py:163:1: W293 Blank line contains whitespace
[failure] 160-160: Ruff (W293)
cortex/autonomous_patcher.py:160:1: W293 Blank line contains whitespace
[failure] 155-155: Ruff (W293)
cortex/autonomous_patcher.py:155:1: W293 Blank line contains whitespace
[failure] 152-152: Ruff (W293)
cortex/autonomous_patcher.py:152:1: W293 Blank line contains whitespace
cortex/vulnerability_scanner.py
[failure] 178-178: Ruff (W293)
cortex/vulnerability_scanner.py:178:1: W293 Blank line contains whitespace
🪛 LanguageTool
docs/SECURITY_MANAGEMENT.md
[uncategorized] ~100-~100: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...| |--------|---------|-------| | OSV (Open Source Vulnerabilities) | Primary database, ...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🪛 markdownlint-cli2 (0.18.1)
docs/SECURITY_MANAGEMENT.md
165-165: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
215-215: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (7)
cortex/cli.py (1)
979-1003: LGTM!
The security command handler follows the established CLI patterns with proper error handling and subcommand routing.
cortex/security_scheduler.py (1)
305-309: Good privilege check before systemd file writes.
The code properly checks for root privileges upfront before attempting to write systemd files, with clear error messages guiding the user to run with sudo.
cortex/autonomous_patcher.py (4)
28-31: Good thread-safe apt update tracking.
The module-level tracking with a lock prevents redundant apt-get update calls across multiple patcher instances, addressing the performance concern from past reviews.
149-184: Excellent fix for redundant apt updates.
The `ensure_apt_updated()` method with thread-safe rate limiting directly addresses the performance issue flagged in past reviews. The implementation is solid with proper locking and clear logging.
However, remove trailing whitespace from lines 152, 155, 160, 163, 170, 174 to fix linting errors:
```diff
 def ensure_apt_updated(self, force: bool = False) -> bool:
     """
     Ensure apt package list is updated. Thread-safe and rate-limited.
-
+
     Args:
         force: If True, force update even if recently updated
-
+
     Returns:
         True if update succeeded or was recently done, False on failure
     """
     global _apt_last_updated
-
+
     with _apt_update_lock:
         now = datetime.now()
-
+
         # Check if we need to update
         if not force and _apt_last_updated is not None:
             elapsed = (now - _apt_last_updated).total_seconds()
             if elapsed < _APT_UPDATE_INTERVAL_SECONDS:
                 logger.debug(f"Apt cache still fresh ({elapsed:.0f}s old), skipping update")
                 return True
-
+
         # Run apt-get update
         logger.info("Updating apt package list...")
         success, stdout, stderr = self._run_command(["apt-get", "update", "-qq"])
-
+
         if success:
```
185-211: Good separation of concerns for apt update. The method now correctly defers apt updates to `ensure_apt_updated()`, and the docstring clearly documents this dependency. This fixes the performance issue from past reviews.

Remove trailing whitespace on line 188:

```diff
 def _check_package_update_available(self, package_name: str) -> str | None:
     """
     Check if an update is available for a package.
-    
+
     Note: Call ensure_apt_updated() before iterating over multiple
     packages to avoid repeated apt-get update calls.
     """
```
296-303: Correct usage of ensure_apt_updated. Line 297 correctly calls `ensure_apt_updated()` once before iterating through packages, ensuring efficient apt cache management. This is the proper implementation pattern.

cortex/vulnerability_scanner.py (1)
251-298: Excellent fix for cache retrieval. The cache logic now correctly uses `fetchall()` and aggregates all vulnerabilities for a package/version, fixing the issue where only the first CVE was returned. The expiry handling is also improved with proper cleanup.
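The difference between `fetchone()` and `fetchall()` matters whenever one package/version pair maps to several cached CVE rows. A minimal sketch with a hypothetical single-table schema (not the scanner's actual schema):

```python
import json
import sqlite3

# Hypothetical cache schema: one row per cached CVE for a (package, version) pair.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vuln_cache (package TEXT, version TEXT, data TEXT)")
conn.executemany(
    "INSERT INTO vuln_cache VALUES (?, ?, ?)",
    [
        ("openssl", "1.1.1", json.dumps({"cve_id": "CVE-2023-0001"})),
        ("openssl", "1.1.1", json.dumps({"cve_id": "CVE-2023-0002"})),
    ],
)

cur = conn.execute(
    "SELECT data FROM vuln_cache WHERE package = ? AND version = ?",
    ("openssl", "1.1.1"),
)
# fetchall() returns every cached row; fetchone() would silently
# drop the second CVE for this package/version.
vulns = [json.loads(row[0]) for row in cur.fetchall()]
print(len(vulns))  # 2
```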
…ability Scanner modules - Implemented comprehensive test cases for the Autonomous Patcher, covering patch strategies, plan creation, and command execution. - Developed tests for the Security Scheduler, including schedule creation, retrieval, deletion, and execution. - Added tests for the Vulnerability Scanner, focusing on vulnerability detection, caching, and API interactions. - Ensured proper validation of enums and dataclasses across all modules.
Actionable comments posted: 2
🧹 Nitpick comments (10)
tests/test_vulnerability_scanner.py (5)
6-6: Unused import `json`. The `json` module is imported but never used in this test file.

Proposed fix

```diff
-import json
```
343-352: Move the `requests` import to module level. The `requests` import on line 346 is inside the test method. While functional, imports should be at module level for consistency and to avoid repeated import overhead.

Proposed fix

Add to the imports at the top of the file:

```python
import requests
```

Then update the test:

```diff
 @patch("requests.post")
 def test_query_osv_network_error(self, mock_post):
     """Test OSV query handles network errors"""
-    import requests
-
     mock_post.side_effect = requests.RequestException("Network error")
     vulns = self.scanner._query_osv("test-pkg", "1.0.0")
     self.assertEqual(len(vulns), 0)  # Should return empty list on error
```
410-422: Timing-based test may be flaky in CI environments. This test relies on precise timing measurements, which can be unreliable on loaded CI systems. The loose threshold (0.05 vs 0.1) helps, but consider mocking `time.sleep` instead to make the test deterministic.

Also, the `import time` on line 412 should be at module level.

Alternative approach using mock

```python
@patch("time.sleep")
@patch("time.time")
def test_rate_limit_enforces_delay(self, mock_time, mock_sleep):
    """Test rate limiting enforces minimum delay"""
    self.scanner.min_api_interval = 0.1
    # Simulate last_api_call was 0.05 seconds ago
    mock_time.side_effect = [0.05, 0.05]  # current time checks
    self.scanner.last_api_call = 0.0
    self.scanner._rate_limit()
    # Should have slept for remaining time
    mock_sleep.assert_called()
```
398-408: Consider extracting shared test setup to a base class. The `setUp` and `tearDown` methods are duplicated between `TestVulnerabilityScanner` and `TestVulnerabilityScannerRateLimiting`. A shared base class would reduce duplication.

Proposed refactor

```python
class VulnerabilityScannerTestBase(unittest.TestCase):
    """Base class with shared setup for VulnerabilityScanner tests"""

    def setUp(self):
        """Set up test fixtures"""
        self.temp_db = tempfile.NamedTemporaryFile(delete=False, suffix=".db")
        self.temp_db.close()
        self.scanner = VulnerabilityScanner(db_path=self.temp_db.name)

    def tearDown(self):
        """Clean up temporary files"""
        if os.path.exists(self.temp_db.name):
            os.unlink(self.temp_db.name)


class TestVulnerabilityScanner(VulnerabilityScannerTestBase):
    """Test cases for VulnerabilityScanner"""
    # ... tests ...


class TestVulnerabilityScannerRateLimiting(VulnerabilityScannerTestBase):
    """Test rate limiting functionality"""
    # ... tests ...
```
103-396: Consider adding a test for the `scan_all_packages` method. The `scan_all_packages` method is a core feature per the PR objectives (scan all installed packages with optional filtering, aggregate vulnerabilities, compute severity counts). While individual components are well-tested, an integration test for this method would improve coverage of the end-to-end scan workflow.

Example test

```python
@patch.object(VulnerabilityScanner, "_get_installed_packages")
@patch.object(VulnerabilityScanner, "scan_package")
def test_scan_all_packages(self, mock_scan_package, mock_get_packages):
    """Test scanning all installed packages"""
    mock_get_packages.return_value = {"pkg1": "1.0", "pkg2": "2.0"}
    mock_scan_package.side_effect = [
        [Vulnerability(
            cve_id="CVE-1", package_name="pkg1", installed_version="1.0",
            affected_versions="all", severity=Severity.CRITICAL, description="Test"
        )],
        [],
    ]
    result = self.scanner.scan_all_packages()
    self.assertEqual(result.total_packages_scanned, 2)
    self.assertEqual(result.vulnerabilities_found, 1)
    self.assertEqual(result.critical_count, 1)
```

tests/test_security_scheduler.py (4)
99-115: Remove the unused `config_patcher` and `_init_scheduler`. The `config_patcher` is defined in `setUp` but never started via `config_patcher.start()`, and `_init_scheduler` is only referenced by this unused patcher. This is dead code that adds confusion.

🔎 Proposed fix

```diff
 def setUp(self):
     """Set up test fixtures"""
     self.temp_dir = tempfile.mkdtemp()
     self.config_path = os.path.join(self.temp_dir, "security_schedule.json")

-    # Patch the config path
-    self.config_patcher = patch.object(
-        SecurityScheduler,
-        '__init__',
-        lambda self_obj: self._init_scheduler(self_obj),
-    )
-
-def _init_scheduler(self, scheduler_obj):
-    """Custom init for testing with temp config path"""
-    from pathlib import Path
-    scheduler_obj.config_path = Path(self.config_path)
-    scheduler_obj.schedules = {}
-    # Don't call _load_schedules since config doesn't exist yet
```
133-149: Consider extracting repeated setup to a helper method. The pattern of patching `__init__`, creating a scheduler, importing `Path`, and setting `config_path`/`schedules` is repeated in nearly every test method. While acceptable for tests, a helper method would reduce duplication.

🔎 Example helper extraction

```python
def _create_test_scheduler(self):
    """Create a scheduler instance for testing."""
    from pathlib import Path
    scheduler = SecurityScheduler.__new__(SecurityScheduler)
    scheduler.config_path = Path(self.config_path)
    scheduler.schedules = {}
    return scheduler
```

This would simplify each test to:

```python
def test_create_schedule(self):
    """Test creating a schedule"""
    scheduler = self._create_test_scheduler()
    schedule = scheduler.create_schedule(...)
```
397-425: Consider adding a test for successful `install_systemd_timer` execution. The tests cover failure cases (no root privileges, nonexistent schedule) but don't test the success path where the timer is actually installed. This would improve coverage of the systemd integration.

🔎 Example test for the success case

```python
@patch("subprocess.run")
@patch("builtins.open", new_callable=MagicMock)
@patch.object(SecurityScheduler, "_has_root_privileges")
def test_install_systemd_timer_success(self, mock_has_root, mock_open, mock_run):
    """Test successfully installing systemd timer"""
    mock_has_root.return_value = True
    mock_run.return_value = MagicMock(returncode=0)

    with patch.object(SecurityScheduler, '__init__', lambda x: None):
        from pathlib import Path
        scheduler = SecurityScheduler()
        scheduler.config_path = Path(self.config_path)
        scheduler.schedules = {}
        scheduler.create_schedule(
            schedule_id="timer_success",
            frequency=ScheduleFrequency.DAILY,
            scan_enabled=True,
        )
        success = scheduler.install_systemd_timer("timer_success")
        self.assertTrue(success)
        # Verify systemctl commands were called
        self.assertEqual(mock_run.call_count, 3)  # daemon-reload, enable, start
```
597-610: Simplify the mocking in `test_load_schedules`. The `mock_home` patch is unused since `config_path` is set explicitly. The broad `Path.exists` patch could cause unintended side effects. Since the config file is created before loading, consider removing these unnecessary patches.

🔎 Simplified version

```diff
-# Create scheduler with patched home path
-with patch("pathlib.Path.home") as mock_home:
-    mock_home.return_value = Path(self.temp_dir)
-    with patch("pathlib.Path.exists", return_value=True):
-        # Manually load
-        scheduler = SecurityScheduler.__new__(SecurityScheduler)
-        scheduler.config_path = config_path
-        scheduler.schedules = {}
-        scheduler._load_schedules()
+# Manually load schedules
+scheduler = SecurityScheduler.__new__(SecurityScheduler)
+scheduler.config_path = config_path
+scheduler.schedules = {}
+scheduler._load_schedules()

 self.assertIn("loaded_schedule", scheduler.schedules)
 schedule = scheduler.schedules["loaded_schedule"]
 self.assertEqual(schedule.frequency, ScheduleFrequency.MONTHLY)
```

tests/test_autonomous_patcher.py (1)
447-447: Simplify the complex mock structure. The mock setup for `Path.home()` uses deeply nested anonymous classes that are difficult to read and maintain. This pattern makes the test harder to understand and debug.

🔎 Suggested approach using a clearer mock pattern

```diff
 @patch("pathlib.Path.home")
 def test_save_and_load_config(self, mock_home):
     """Test saving and loading configuration"""
-    mock_home.return_value = type("Path", (), {"__truediv__": lambda s, x: type("Path", (), {"exists": lambda s: False, "mkdir": lambda s, **k: None, "parent": type("Path", (), {"mkdir": lambda s, **k: None})(), "__truediv__": lambda s, x: s})()})()
+    # Create a mock path that doesn't exist and allows mkdir
+    mock_config_path = MagicMock()
+    mock_config_path.exists.return_value = False
+    mock_config_path.parent.mkdir = MagicMock()
+    mock_config_path.mkdir = MagicMock()
+
+    mock_cortex_dir = MagicMock()
+    mock_cortex_dir.__truediv__.return_value = mock_config_path
+
+    mock_home_path = MagicMock()
+    mock_home_path.__truediv__.return_value = mock_cortex_dir
+    mock_home.return_value = mock_home_path
```

Alternatively, use `@patch` on the config file path directly or create an actual temp file path for more realistic testing.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tests/test_autonomous_patcher.py
- tests/test_security_scheduler.py
- tests/test_vulnerability_scanner.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_vulnerability_scanner.py
- tests/test_security_scheduler.py
- tests/test_autonomous_patcher.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- tests/test_vulnerability_scanner.py
- tests/test_security_scheduler.py
- tests/test_autonomous_patcher.py
🧬 Code graph analysis (2)
tests/test_vulnerability_scanner.py (1)
cortex/vulnerability_scanner.py (15)
- ScanResult (59-76)
- Severity (27-34)
- Vulnerability (38-55)
- VulnerabilityScanner (79-593)
- _vulnerability_to_dict (214-227)
- _dict_to_vulnerability (229-249)
- _parse_cvss_vector (156-188)
- _get_installed_packages (190-212)
- _check_cache (251-298)
- _save_scan_history (504-537)
- get_scan_history (539-560)
- get_critical_vulnerabilities (562-593)
- _query_osv (327-423)
- scan_package (425-435)
- _rate_limit (150-154)
tests/test_security_scheduler.py (1)
cortex/security_scheduler.py (6)
- ScheduleFrequency (25-31)
- SecuritySchedule (35-46)
- _frequency_to_systemd (341-350)
- _has_root_privileges (352-367)
- install_systemd_timer (264-339)
- _load_schedules (58-83)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (9)
tests/test_vulnerability_scanner.py (4)
21-36: LGTM! The Severity enum tests correctly validate enum values and string construction. The coverage is adequate for verifying the enum behavior.
38-77: LGTM! The Vulnerability dataclass tests properly validate required fields and the `__post_init__` behavior for the `references` default. The optional fields test covers key scenarios.
79-101: LGTM! The ScanResult test verifies basic dataclass creation and the `__post_init__` behavior for the `errors` field default.
425-427: LGTM! Standard unittest entry point.
tests/test_security_scheduler.py (5)
1-19: LGTM! Imports are well-organized following PEP 8 conventions (standard library first, then internal modules). The module docstring is present.
21-33: LGTM! Concise tests for the `ScheduleFrequency` enum values. All four frequency options are covered with clear assertions.
35-94: LGTM! Comprehensive tests for the `SecuritySchedule` dataclass covering creation, default values, patch-enabled scenarios, and custom cron expressions. The assertions correctly verify expected field values.
612-614: LGTM! Standard test runner invocation following Python conventions.
502-530: No action needed. The test assertion is correct: `run_schedule` explicitly transforms `patch_result.packages_updated` into a count using `len()` at line 237 of `cortex/security_scheduler.py`, so the mock returning a list of 1 package and the assertion expecting the integer 1 are consistent with the implementation.

Likely an incorrect or invalid review comment.
by integrating a progress indicator for package scanning. Updated scan_all_packages method to accept a progress parameter and improved logging of vulnerabilities found during the scan.
…us Patcher - Introduced _compare_versions method to compare Debian package versions using dpkg. - Implemented _update_fixes_vulnerability method to check if a candidate version resolves a vulnerability. - Enhanced create_patch_plan to log skipped vulnerabilities and ensure only updates that fix vulnerabilities are included. - Added unit tests for version comparison and vulnerability fix verification to ensure functionality and correctness.
- Added optional config_path parameter to the AutonomousPatcher class for custom configuration file locations. - Updated _load_config and _save_config methods to utilize the specified config_path. - Modified unit tests to validate the new configuration handling, ensuring tests do not affect real user configurations.
Actionable comments posted: 2
♻️ Duplicate comments (5)
tests/test_autonomous_patcher.py (1)
6-10: Remove unused imports flagged by static analysis. `json` (line 6) and `datetime` (line 10) are imported but never used in this test file.

🔎 Proposed fix

```diff
 #!/usr/bin/env python3
 """
 Tests for Autonomous Patcher Module
 """

-import json
 import os
 import tempfile
 import unittest
-from datetime import datetime
 from unittest.mock import MagicMock, patch
```

cortex/vulnerability_scanner.py (3)
93-94: NVD API endpoint is defined but never used. The `nvd_api` endpoint is set but no code queries it. Either implement NVD as a fallback source (as documented in the module docstring mentioning "NVD, OSV, Safety DB") or remove this unused variable.

🔎 Proposed fix (if not implementing NVD fallback)

```diff
 # API endpoints
 self.osv_api = "https://api.osv.dev/v1/query"
-self.nvd_api = "https://services.nvd.nist.gov/rest/json/cves/2.0"
```
334-336: Ecosystem is hardcoded to "Debian". The OSV query hardcodes the ecosystem to "Debian". While Cortex Linux may be Debian-based, consider making this configurable or auto-detecting it from the system for portability.
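Auto-detection could read the distro ID from `/etc/os-release` and map it to an OSV ecosystem name. A minimal sketch under the assumption that only "Debian" and "Ubuntu" need distinguishing (the helper name is hypothetical):

```python
from pathlib import Path


def detect_osv_ecosystem(os_release: str = "/etc/os-release") -> str:
    """Map the host distro ID to an OSV ecosystem name, defaulting to Debian.

    /etc/os-release contains KEY=VALUE lines; the ID field holds the
    lowercase distro identifier (e.g. ID=ubuntu or ID="debian").
    """
    try:
        text = Path(os_release).read_text()
    except OSError:
        return "Debian"  # conservative default when the file is absent
    for line in text.splitlines():
        if line.startswith("ID="):
            distro = line.split("=", 1)[1].strip().strip('"')
            return {"debian": "Debian", "ubuntu": "Ubuntu"}.get(distro, "Debian")
    return "Debian"


print(detect_osv_ecosystem())
```

The query payload would then use `{"package": {"name": name, "ecosystem": detect_osv_ecosystem()}, "version": version}` instead of a literal string.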
151-155: Rate limiting has a potential race condition in multi-threaded scenarios. If multiple threads call `_rate_limit()` simultaneously, they could all pass the elapsed check before any of them update `last_api_call`, leading to burst API calls that violate rate limits.

🔎 Proposed fix using a lock

```diff
+import threading
+
 class VulnerabilityScanner:
     """Scans installed packages for security vulnerabilities"""

     def __init__(self, db_path: str | None = None):
         # ... existing init code ...
+        self._rate_limit_lock = threading.Lock()

     def _rate_limit(self):
-        elapsed = time.time() - self.last_api_call
-        if elapsed < self.min_api_interval:
-            time.sleep(self.min_api_interval - elapsed)
-        self.last_api_call = time.time()
+        with self._rate_limit_lock:
+            elapsed = time.time() - self.last_api_call
+            if elapsed < self.min_api_interval:
+                time.sleep(self.min_api_interval - elapsed)
+            self.last_api_call = time.time()
```

cortex/autonomous_patcher.py (1)
71-76: `auto_approve` parameter is accepted but never used. The `auto_approve` parameter is documented as "If True, automatically approve patches (dangerous!)" but there's no logic in the class that checks this value. Either implement the approval workflow or remove the parameter to avoid confusion.

🔎 Option 1: Remove unused parameter

```diff
 def __init__(
     self,
     strategy: PatchStrategy = PatchStrategy.CRITICAL_ONLY,
     dry_run: bool = True,
-    auto_approve: bool = False,
     config_path: str | Path | None = None,
 ):
     """
     Initialize the autonomous patcher.

     Args:
         strategy: Patching strategy
         dry_run: If True, only show what would be patched
-        auto_approve: If True, automatically approve patches (dangerous!)
         config_path: Optional path to config file (defaults to ~/.cortex/patcher_config.json)
     """
     self.strategy = strategy
     self.dry_run = dry_run
-    self.auto_approve = auto_approve
```
🧹 Nitpick comments (3)
tests/test_autonomous_patcher.py (1)
87-92: Consider extracting the `shutil` import to module level. The `import shutil` statement is repeated inside each `tearDown` method. While functionally correct, moving it to the module-level imports would be cleaner.

🔎 Proposed fix

Add to imports at the top:

```diff
 import os
+import shutil
 import tempfile
```

Then remove the inline imports from each `tearDown` method:

```diff
 def tearDown(self):
     """Clean up temporary files"""
-    import shutil
-
     if os.path.exists(self.temp_dir):
         shutil.rmtree(self.temp_dir)
```
tearDownmethod:def tearDown(self): """Clean up temporary files""" - import shutil - if os.path.exists(self.temp_dir): shutil.rmtree(self.temp_dir)Also applies to: 422-426, 482-486, 517-521
cortex/vulnerability_scanner.py (1)
426-436: Cache is only saved when vulnerabilities are found. When `scan_package` returns an empty list (no vulnerabilities found), nothing is cached. This means subsequent scans will re-query OSV for "clean" packages. Consider caching negative results as well to reduce API calls.

🔎 Proposed fix to cache negative results

```diff
 def scan_package(self, package_name: str, version: str) -> list[Vulnerability]:
     cached = self._check_cache(package_name, version)
     if cached is not None:
         return cached

     vulnerabilities = self._query_osv(package_name, version)

-    if vulnerabilities:
-        self._save_cache(package_name, version, vulnerabilities)
+    # Cache both positive and negative results to avoid redundant API calls
+    self._save_cache(package_name, version, vulnerabilities)

     return vulnerabilities
```

Note: You'd also need to update `_check_cache` to distinguish between "not cached" (None) and "cached as empty" ([]).

cortex/autonomous_patcher.py (1)
107-123: Import `json` at module level instead of inside methods. The `json` module is imported inside `_load_config` (line 111) and `_save_config` (line 130). Since JSON operations are core to config management, import it at the module level for consistency and a slight performance improvement.

🔎 Proposed fix

```diff
 import logging
+import json
 import subprocess
 import threading
```

Then remove the inline imports from `_load_config` and `_save_config`.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cortex/autonomous_patcher.py
- cortex/vulnerability_scanner.py
- tests/test_autonomous_patcher.py
- tests/test_vulnerability_scanner.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_vulnerability_scanner.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/autonomous_patcher.py
- tests/test_autonomous_patcher.py
- cortex/vulnerability_scanner.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_autonomous_patcher.py
🧠 Learnings (1)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/vulnerability_scanner.py
🧬 Code graph analysis (3)
cortex/autonomous_patcher.py (2)
cortex/installation_history.py (4)
- InstallationStatus (36-42)
- InstallationType (26-33)
- record_installation (254-312)
- update_installation (314-363)

cortex/vulnerability_scanner.py (3)

- Severity (28-35)
- Vulnerability (39-56)
- VulnerabilityScanner (80-598)
tests/test_autonomous_patcher.py (2)
cortex/autonomous_patcher.py (9)
- PatchPlan (44-51)
- PatchResult (55-65)
- PatchStrategy (34-40)
- _should_patch (276-319)
- _run_command (144-152)
- ensure_apt_updated (154-188)
- _check_package_update_available (190-216)
- _compare_versions (218-237)
- _update_fixes_vulnerability (239-274)

cortex/vulnerability_scanner.py (2)

- Severity (28-35)
- Vulnerability (39-56)
cortex/vulnerability_scanner.py (2)
cortex/progress_indicators.py (2)
- ProgressIndicator (177-378)
- get_progress_indicator (665-672)

cortex/utils/db_pool.py (3)

- SQLiteConnectionPool (19-172)
- get_connection_pool (181-222)
- get_connection (99-136)
🪛 GitHub Actions: CI
cortex/autonomous_patcher.py
[error] 157-157: ruff check failed with W293: Blank line contains whitespace. Command: 'ruff check . --output-format=github'.
🪛 GitHub Check: lint
cortex/autonomous_patcher.py
[failure] 244-244: Ruff (W293)
cortex/autonomous_patcher.py:244:1: W293 Blank line contains whitespace
[failure] 226-226: Ruff (W293)
cortex/autonomous_patcher.py:226:1: W293 Blank line contains whitespace
[failure] 221-221: Ruff (W293)
cortex/autonomous_patcher.py:221:1: W293 Blank line contains whitespace
[failure] 193-193: Ruff (W293)
cortex/autonomous_patcher.py:193:1: W293 Blank line contains whitespace
[failure] 179-179: Ruff (W293)
cortex/autonomous_patcher.py:179:1: W293 Blank line contains whitespace
[failure] 175-175: Ruff (W293)
cortex/autonomous_patcher.py:175:1: W293 Blank line contains whitespace
[failure] 168-168: Ruff (W293)
cortex/autonomous_patcher.py:168:1: W293 Blank line contains whitespace
[failure] 165-165: Ruff (W293)
cortex/autonomous_patcher.py:165:1: W293 Blank line contains whitespace
[failure] 160-160: Ruff (W293)
cortex/autonomous_patcher.py:160:1: W293 Blank line contains whitespace
[failure] 157-157: Ruff (W293)
cortex/autonomous_patcher.py:157:1: W293 Blank line contains whitespace
🪛 GitHub Check: Lint
cortex/autonomous_patcher.py
[failure] 244-244: Ruff (W293)
cortex/autonomous_patcher.py:244:1: W293 Blank line contains whitespace
[failure] 226-226: Ruff (W293)
cortex/autonomous_patcher.py:226:1: W293 Blank line contains whitespace
[failure] 221-221: Ruff (W293)
cortex/autonomous_patcher.py:221:1: W293 Blank line contains whitespace
[failure] 193-193: Ruff (W293)
cortex/autonomous_patcher.py:193:1: W293 Blank line contains whitespace
[failure] 179-179: Ruff (W293)
cortex/autonomous_patcher.py:179:1: W293 Blank line contains whitespace
[failure] 175-175: Ruff (W293)
cortex/autonomous_patcher.py:175:1: W293 Blank line contains whitespace
[failure] 168-168: Ruff (W293)
cortex/autonomous_patcher.py:168:1: W293 Blank line contains whitespace
[failure] 165-165: Ruff (W293)
cortex/autonomous_patcher.py:165:1: W293 Blank line contains whitespace
[failure] 160-160: Ruff (W293)
cortex/autonomous_patcher.py:160:1: W293 Blank line contains whitespace
[failure] 157-157: Ruff (W293)
cortex/autonomous_patcher.py:157:1: W293 Blank line contains whitespace
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (8)
tests/test_autonomous_patcher.py (3)
74-85: Test isolation issue has been resolved. The `config_path` parameter is now correctly passed to `AutonomousPatcher` initialization (line 84), ensuring tests use the temporary directory instead of the real user config at `~/.cortex/patcher_config.json`. This addresses the previous review concern about test interference and non-deterministic behavior.
488-506: Save/load configuration test now properly validates persistence. The test correctly:
- Creates a patcher with temp config path
- Sets whitelist/blacklist/min_severity values
- Calls `add_to_whitelist`, which triggers `_save_config`
- Verifies the config file was created on disk (line 500)
- Creates a new patcher instance and verifies loaded values match
This addresses the previous review comment about the test not actually verifying save/load operations.
592-606: Good edge case coverage for missing `fixed_version`. The test correctly validates that when a vulnerability has no `fixed_version` specified, `_update_fixes_vulnerability` returns `True` to allow the update. This matches the defensive behavior in the implementation that permits updates when fix verification isn't possible.

cortex/vulnerability_scanner.py (2)
252-294: Cache lookup now correctly aggregates all CVEs for a package/version. The implementation properly uses `fetchall()` (line 262) and iterates over all rows (lines 270-283), aggregating vulnerabilities from each row. Expired entries are cleaned up (lines 286-291). This addresses the previous review concern about only processing the first cached row.
459-480: Good integration with progress indicator for scan feedback. The scan loop properly uses the injected or global `ProgressIndicator` for progress bars (lines 466-468) and warning output (lines 473-475). This provides good UX during long-running scans.

cortex/autonomous_patcher.py (3)
28-31: Good thread-safe apt cache management implementation. The module-level tracking of apt updates with a lock (`_apt_update_lock`, `_apt_last_updated`) and rate limiting (`_APT_UPDATE_INTERVAL_SECONDS = 300`) is well designed. The `ensure_apt_updated` method properly:
- Uses a lock for thread safety
- Checks elapsed time to avoid redundant updates
- Sets timestamp even on failure to prevent hammering
This addresses the previous review concern about calling `apt-get update` per package.

Also applies to: 154-188
362-401: Thorough vulnerability fix verification before patching. The `create_patch_plan` method properly:

- Calls `ensure_apt_updated()` once before checking packages (line 360)
- Verifies each update actually fixes the vulnerability via `_update_fixes_vulnerability` (line 379)
- Tracks skipped vulnerabilities with reasons (lines 383-384)
- Only includes packages that fix at least one vulnerability (line 388)
- Logs skipped vulnerabilities for transparency (lines 396-401)
This addresses the previous concern about not validating if updates actually fix reported vulnerabilities.
476-494: Package installation uses list-based subprocess commands. The apt-get install commands are properly constructed using list format (lines 482, 484) rather than shell string concatenation, which prevents command injection vulnerabilities even with adversarial package names.
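The injection-safety property comes from the fact that each list element is passed to the child process as a single argv entry with no shell interpretation. A small demonstration (using `echo` as a harmless stand-in for `apt-get install`):

```python
import subprocess

# Adversarial "package name" containing shell metacharacters. With a
# list-based command there is no shell to interpret them, so the whole
# string is delivered to the child as one literal argument.
package = "vim; rm -rf /tmp/not-run"

cmd = ["echo", package]  # stand-in for ["apt-get", "install", "-y", package]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout.strip())  # the full string, echoed literally
```

Had the command been built as `subprocess.run(f"echo {package}", shell=True)`, the `;` would have started a second shell command instead.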
…leanliness and maintainability.
…lity in autonomous_patcher.py
Actionable comments posted: 1
♻️ Duplicate comments (2)
cortex/autonomous_patcher.py (2)
147-147: Consider making the command timeout configurable. The hardcoded 300-second timeout may be insufficient for large packages or slow network connections, especially during `apt-get install` operations that download and compile large dependencies.

🔎 Suggested improvement

Make the timeout configurable with a reasonable default:

```diff
 def __init__(
     self,
     strategy: PatchStrategy = PatchStrategy.CRITICAL_ONLY,
     dry_run: bool = True,
     auto_approve: bool = False,
     config_path: str | Path | None = None,
+    command_timeout: int = 600,  # 10 minutes default
 ):
     """
     Initialize the autonomous patcher.

     Args:
         strategy: Patching strategy
         dry_run: If True, only show what would be patched
         auto_approve: If True, automatically approve patches (dangerous!)
         config_path: Optional path to config file (defaults to ~/.cortex/patcher_config.json)
+        command_timeout: Timeout in seconds for package operations
     """
     # ... existing code ...
+    self.command_timeout = command_timeout
```

Then update `_run_command`:

```diff
 def _run_command(self, cmd: list[str]) -> tuple[bool, str, str]:
     """Execute command and return success, stdout, stderr"""
     try:
-        result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
+        result = subprocess.run(cmd, capture_output=True, text=True, timeout=self.command_timeout)
         return (result.returncode == 0, result.stdout, result.stderr)
     except subprocess.TimeoutExpired:
         return (False, "", "Command timed out")
```

Based on learnings from past review comments.
75-75: Remove the unused `auto_approve` parameter. The `auto_approve` parameter is accepted in `__init__` and stored in `self.auto_approve`, but it's never referenced anywhere in the class. This was previously flagged but remains unaddressed.

🔎 Proposed fix

If automatic approval is not needed for the current implementation, remove the parameter:

```diff
 def __init__(
     self,
     strategy: PatchStrategy = PatchStrategy.CRITICAL_ONLY,
     dry_run: bool = True,
-    auto_approve: bool = False,
     config_path: str | Path | None = None,
 ):
     """
     Initialize the autonomous patcher.

     Args:
         strategy: Patching strategy
         dry_run: If True, only show what would be patched
-        auto_approve: If True, automatically approve patches (dangerous!)
         config_path: Optional path to config file (defaults to ~/.cortex/patcher_config.json)
     """
     self.strategy = strategy
     self.dry_run = dry_run
-    self.auto_approve = auto_approve
```

If automatic approval will be implemented later, add a TODO comment explaining where it should be used (e.g., in `apply_patch_plan` to skip interactive prompts).

Based on learnings from past review comments.

Also applies to: 89-89
🧹 Nitpick comments (1)
cortex/autonomous_patcher.py (1)
25-25: Avoid module-level `basicConfig` to prevent logging conflicts. Calling `logging.basicConfig()` at module import time configures the root logger and can interfere with the application's logging setup. If the importing application has already configured logging, this call will have no effect (first call wins), leading to inconsistent behavior. If it hasn't, this call forces INFO level on all loggers.

🔎 Recommended approach

Remove the `basicConfig` call and let the application configure logging. The `logger = logging.getLogger(__name__)` on line 26 will work correctly regardless:

```diff
-logging.basicConfig(level=logging.INFO)
 logger = logging.getLogger(__name__)
```

Applications using this module can configure logging as needed, and the logger will inherit appropriate settings from the root logger.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cortex/autonomous_patcher.py
- tests/test_autonomous_patcher.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_autonomous_patcher.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/autonomous_patcher.py
🧬 Code graph analysis (1)
cortex/autonomous_patcher.py (2)
cortex/installation_history.py (4)
- InstallationStatus (36-42)
- InstallationType (26-33)
- record_installation (254-312)
- update_installation (314-363)

cortex/vulnerability_scanner.py (2)

- Severity (28-35)
- Vulnerability (39-56)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
…unctionality - Removed unnecessary logging imports from the CortexCLI class to streamline the code. - Updated the SecurityScheduler's constructor to accept a cortex_binary argument, enhancing flexibility for systemd service file paths. - Modified the patching logic in SecurityScheduler to allow AutonomousPatcher to handle its own strategy and severity filters. - Updated the vulnerability scanner documentation to clarify API endpoint usage and future support plans.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @cortex/cli.py:
- Around line 1289-1359: The create branch in _security_schedule currently
hardcodes dry_run=True; add a CLI flag (e.g. --no-dry-run or --apply) to the
"create" subparser and pass its value into SecurityScheduler.create_schedule()
so real patching can be enabled. Update the argparse setup for the create action
to define the flag (e.g. store_true on args.no_dry_run or args.apply), then
change the create_schedule call to use dry_run=not getattr(args, "no_dry_run",
False) (or dry_run=not getattr(args, "apply", False)) instead of the hardcoded
True. Ensure the flag defaults preserve existing behavior (dry_run by default)
and references are to _security_schedule and SecurityScheduler.create_schedule.
In @cortex/vulnerability_scanner.py:
- Around line 395-491: Add an explicit TODO noting the NVD fallback is not
implemented and linking to the tracking issue: insert a clear TODO comment in
the _query_osv method (or in scan_package where OSV is invoked) stating that NVD
fallback for critical packages is planned, include the issue/PR URL or tracker
ID (e.g., "TODO: implement NVD fallback - see ISSUE-XXXX"), or alternatively add
an entry in SECURITY_MANAGEMENT.md documenting that NVD support was
intentionally deferred with the same issue link.
🧹 Nitpick comments (7)
cortex/vulnerability_scanner.py (2)
33-35: Avoid `logging.basicConfig` in library modules

Calling `logging.basicConfig` at import time in a reusable module can override application logging configuration unexpectedly. Prefer configuring handlers in the top-level CLI (e.g., cortex/cli.py) and leaving this module to just use `logger = logging.getLogger(__name__)`.
691-725: Clarify or parameterize the 30-day window for "critical" view

`get_critical_vulnerabilities` filters to the last `days` (default 30), but the CLI `--critical` help text only says "Show only critical vulnerabilities". Users might reasonably expect "all critical" rather than "last 30 days".

Either:

- Expose `days` as a CLI option (with 30 as the default), or
- Explicitly mention the time window in the CLI help/SECURITY_MANAGEMENT docs.
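A sketch of the first option, exposing the window as a `--days` flag; the option name and parser wiring here are assumptions for illustration, not the actual cortex CLI:

```python
import argparse

# Hypothetical parser fragment: surface the look-back window that
# get_critical_vulnerabilities already accepts as a CLI option.
parser = argparse.ArgumentParser(prog="cortex security scan")
parser.add_argument(
    "--critical",
    action="store_true",
    help="Show only critical vulnerabilities from the last N days (see --days)",
)
parser.add_argument(
    "--days",
    type=int,
    default=30,
    help="Look-back window for --critical (default: 30 days)",
)

args = parser.parse_args(["--critical", "--days", "7"])
defaults = parser.parse_args([])
```

Tying the help text of `--critical` to `--days` keeps the 30-day behavior discoverable without changing the default.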
cortex/cli.py (2)
1107-1164: Avoid relying on private `_get_installed_packages` from CLI

`_security_scan` uses `scanner._get_installed_packages()` to resolve a single package before scanning. It works, but it couples the CLI to a private helper.

Consider exposing this via a small public method on `VulnerabilityScanner` (e.g., `get_installed_packages()` or `resolve_installed_version(pkg)`), and have the CLI use that instead.
1207-1287: `--dry-run` flag is redundant; dry-run is derived solely from `--apply`

In `_security_patch`:

- The parser defines `--dry-run` with `default=True`, but `dry_run` is computed as `not getattr(args, "apply", False)`, ignoring `args.dry_run`.

This makes `--dry-run` a no-op: patches are always dry-run unless `--apply` is passed, and specifying `--dry-run` doesn't change behavior.

To reduce confusion, either:

- Drop the `--dry-run` option and rely solely on `--apply` for switching to real patching, or
- Honor `args.dry_run` explicitly (e.g., `dry_run = args.dry_run and not args.apply`), and adjust defaults/help text accordingly.

cortex/security_scheduler.py (1)
21-23: Avoid `logging.basicConfig` in scheduler module

Similar to the scanner/patcher, configuring logging with `logging.basicConfig` at import time can interfere with applications embedding this module. Prefer leaving configuration to the top-level CLI and only using `logger = logging.getLogger(__name__)` here.

cortex/autonomous_patcher.py (2)
26-28: Avoid `logging.basicConfig` in patcher module

As with the other security modules, setting `logging.basicConfig` here can override application logging configuration. It's safer to remove this call and let the top-level CLI or embedding app configure logging.
447-578: `apply_patch_plan` integrates with InstallationHistory and respects dry-run defaults

`apply_patch_plan`:

- Returns a fast-success result when `packages_to_update` is empty.
- In dry-run mode, lists planned updates without executing anything.
- When executing, records the operation via `InstallationHistory`, runs `apt-get` commands per package (with version pinning when available), and returns a `PatchResult` containing a `rollback_id`.

This meets the rollback and safety requirements. One minor nit: the `auto_approve` constructor parameter is currently unused; if it's not planned for near-term use, consider removing it to keep the API lean.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- cortex/autonomous_patcher.py
- cortex/cli.py
- cortex/security_scheduler.py
- cortex/vulnerability_scanner.py
- tests/test_autonomous_patcher.py
- tests/test_security_scheduler.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_security_scheduler.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_autonomous_patcher.py
- cortex/cli.py
- cortex/autonomous_patcher.py
- cortex/security_scheduler.py
- cortex/vulnerability_scanner.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_autonomous_patcher.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/vulnerability_scanner.py
🧬 Code graph analysis (2)
cortex/security_scheduler.py (2)
cortex/autonomous_patcher.py (3)
- `AutonomousPatcher` (80-640)
- `PatchStrategy` (46-52)
- `patch_vulnerabilities` (579-622)

cortex/vulnerability_scanner.py (2)

- `VulnerabilityScanner` (89-725)
- `scan_all_packages` (527-626)
cortex/vulnerability_scanner.py (2)
cortex/progress_indicators.py (2)
- `ProgressIndicator` (177-378)
- `get_progress_indicator` (665-672)

cortex/utils/db_pool.py (3)

- `SQLiteConnectionPool` (19-172)
- `get_connection_pool` (181-222)
- `get_connection` (99-136)
🪛 GitHub Actions: CI
cortex/security_scheduler.py
[error] 1-1: Black formatting check failed. 1 file would be reformatted by Black (security_scheduler.py). Command 'black --check . --exclude "(venv|.venv|build|dist)"' exited with code 1. Run 'black --write' to fix.
🪛 GitHub Actions: Cortex Automation
cortex/security_scheduler.py
[error] 1-1: black --check formatting failed: 1 file would be reformatted; please run 'black .' to fix.
🔇 Additional comments (8)
tests/test_autonomous_patcher.py (1)
20-629: Comprehensive coverage of patcher behavior

These tests exercise the key behaviors of `AutonomousPatcher` (strategies, whitelist/blacklist, min_severity, apt/update flows, version comparisons, config persistence, dry-run vs apply). They're well-structured and should give good confidence in the new module.

cortex/cli.py (3)
1081-1106: Security command dispatcher is clean and consistent

The `security` method mirrors the patterns used elsewhere in the CLI (`env`, `notify`, etc.) with clear subcommand routing and consolidated error handling/traceback on `verbose`. This looks good.
2533-2603: Security subcommand parser wiring looks correct and complete

The new `security` -> `scan`/`patch`/`schedule` parser tree is wired consistently with other subcommands and provides the options described in the PR objectives (package vs all scan, critical filter, scan-and-patch, strategies, schedule create/list/run/install-timer). No structural issues spotted.
2865-2866: Main dispatch correctly routes `security` commands

Adding the `elif args.command == "security": return cli.security(args)` keeps the main routing cohesive and localized. Looks good.

cortex/security_scheduler.py (2)
183-260: Scheduler run path is straightforward and delegates strategy correctly

`run_schedule` cleanly wires `VulnerabilityScanner.scan_all_packages()` and `AutonomousPatcher` with the configured `patch_strategy` and `dry_run` flags, and updates `last_run`/`next_run` along with persisted config. The result dict is well-shaped for both CLI and programmatic use.
262-343: Systemd timer installation logic is reasonable, but ensure formatting passes Black

The systemd unit generation and root-privilege checks (`_has_root_privileges`) look safe and aligned with the "install systemd timers" requirement. However, CI indicates that `security_scheduler.py` fails `black --check` (one file would be reformatted).

Please run Black locally (using the same settings as CI) to reformat this file and clear the pipeline failure:

```shell
black cortex/security_scheduler.py
```

cortex/autonomous_patcher.py (2)
172-205: Apt update handling is thread-safe and rate-limited

Using a module-level `_apt_update_lock` plus an instance `_apt_last_updated` with a 5-minute window is a good balance between avoiding redundant `apt-get update` calls and preventing concurrent updates. The behavior on failure (log warning but still rate-limit retries) is sensible.
341-445: Patch plan creation correctly filters, verifies fixes, and flags reboot requirements

`create_patch_plan`:

- Filters vulnerabilities through `_should_patch` (strategy, min_severity, whitelist/blacklist).
- Ensures the apt cache is updated once before checking all packages.
- Verifies that candidate versions actually fix each vulnerability via `_update_fixes_vulnerability`.
- Tracks kernel-related packages to set `requires_reboot`.

This aligns well with the autonomous patching requirements and provides a solid basis for safe patch application.
```python
def _query_osv(self, package_name: str, version: str) -> list[Vulnerability]:
    vulnerabilities = []

    try:
        self._rate_limit()

        query = {
            "package": {"name": package_name, "ecosystem": "Debian"},
            "version": version,
        }

        response = requests.post(
            self.osv_api, json=query, timeout=10, headers={"Content-Type": "application/json"}
        )

        if response.status_code == 200:
            data = response.json()
            if "vulns" in data:
                for vuln in data["vulns"]:
                    severity = Severity.UNKNOWN
                    cvss_score = None

                    if "database_specific" in vuln:
                        db_spec = vuln["database_specific"]
                        if "severity" in db_spec:
                            sev_str = db_spec["severity"].upper()
                            if sev_str in ["CRITICAL", "CRIT"]:
                                severity = Severity.CRITICAL
                            elif sev_str == "HIGH":
                                severity = Severity.HIGH
                            elif sev_str == "MEDIUM":
                                severity = Severity.MEDIUM
                            elif sev_str == "LOW":
                                severity = Severity.LOW

                    if "severity" in vuln:
                        for sev in vuln["severity"]:
                            if sev["type"] == "CVSS_V3":
                                score_value = sev.get("score", "")
                                if isinstance(score_value, (int, float)):
                                    cvss_score = float(score_value)
                                elif isinstance(score_value, str):
                                    try:
                                        cvss_score = float(score_value)
                                    except ValueError:
                                        cvss_score = self._parse_cvss_vector(score_value)

                    if cvss_score is not None:
                        if cvss_score >= 9.0:
                            severity = Severity.CRITICAL
                        elif cvss_score >= 7.0:
                            severity = Severity.HIGH
                        elif cvss_score >= 4.0:
                            severity = Severity.MEDIUM
                        else:
                            severity = Severity.LOW

                    affected = "unknown"
                    fixed_version = None
                    if "affected" in vuln:
                        for affected_range in vuln["affected"]:
                            if "ranges" in affected_range:
                                for range_item in affected_range["ranges"]:
                                    if "events" in range_item:
                                        affected = str(range_item["events"])
                                        for event in range_item["events"]:
                                            if "fixed" in event:
                                                fixed_version = event["fixed"]

                    references = []
                    if "references" in vuln:
                        for ref in vuln["references"]:
                            if "url" in ref:
                                references.append(ref["url"])

                    vuln_obj = Vulnerability(
                        cve_id=vuln.get("id", "UNKNOWN"),
                        package_name=package_name,
                        installed_version=version,
                        affected_versions=affected,
                        severity=severity,
                        description=vuln.get("summary", "No description available"),
                        published_date=vuln.get("published", ""),
                        fixed_version=fixed_version,
                        cvss_score=cvss_score,
                        source="osv",
                        references=references,
                    )

                    vulnerabilities.append(vuln_obj)

    except requests.RequestException as e:
        logger.warning(f"OSV query failed for {package_name}: {e}")
    except Exception as e:
        logger.warning(f"Error processing OSV response: {e}")

    return vulnerabilities
```
🧩 Analysis chain
🏁 Script executed:
```shell
git ls-files | grep -E "(vulnerability_scanner|SECURITY_MANAGEMENT)" | head -20
```

Repository: cortexlinux/cortex
Length of output: 180
🏁 Script executed:
```shell
fd cortex/vulnerability_scanner.py
```

Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
```shell
# Get the file size and check context around the _query_osv method
wc -l cortex/vulnerability_scanner.py
```

Repository: cortexlinux/cortex
Length of output: 97
🏁 Script executed:
```shell
# Search for NVD references in the codebase
rg -i "nvd" --type py
```

Repository: cortexlinux/cortex
Length of output: 505
🏁 Script executed:
```shell
# Search for any TODO comments related to NVD or vulnerability queries
rg "TODO|FIXME" cortex/vulnerability_scanner.py -A 2
```

Repository: cortexlinux/cortex
Length of output: 44
🏁 Script executed:
```shell
# Check for SECURITY_MANAGEMENT.md or similar documentation
fd -e md | grep -i security
```

Repository: cortexlinux/cortex
Length of output: 101
🏁 Script executed:
```shell
# Read the vulnerability_scanner.py file to get full context
cat -n cortex/vulnerability_scanner.py
```

Repository: cortexlinux/cortex
Length of output: 36633
🏁 Script executed:
```shell
# Read the documentation about security management
cat -n docs/SECURITY_MANAGEMENT.md
```

Repository: cortexlinux/cortex
Length of output: 10268
🏁 Script executed:
```shell
# Check the test file to see what NVD expectations exist
cat -n tests/test_vulnerability_scanner.py | grep -B 5 -A 5 "nvd"
```

Repository: cortexlinux/cortex
Length of output: 743
Add explicit TODO pointing to NVD fallback implementation
The documentation specifies NVD as a fallback source for critical packages, but the current implementation only queries OSV. The module docstring notes NVD support is "planned for future releases," but no TODO comment exists at the relevant code location pointing to a tracking issue.
Add a clear TODO comment in _query_osv() or scan_package() referencing the NVD fallback work, or document in SECURITY_MANAGEMENT.md that NVD fallback was intentionally deferred with a link to the tracking issue.
🤖 Prompt for AI Agents
In @cortex/vulnerability_scanner.py around lines 395 - 491, Add an explicit TODO
noting the NVD fallback is not implemented and linking to the tracking issue:
insert a clear TODO comment in the _query_osv method (or in scan_package where
OSV is invoked) stating that NVD fallback for critical packages is planned,
include the issue/PR URL or tracker ID (e.g., "TODO: implement NVD fallback -
see ISSUE-XXXX"), or alternatively add an entry in SECURITY_MANAGEMENT.md
documenting that NVD support was intentionally deferred with the same issue
link.
- Simplified the call to patcher.patch_vulnerabilities by removing unnecessary line breaks, enhancing code clarity.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @cortex/security_scheduler.py:
- Around line 21-22: Remove the module-level logging configuration by deleting
the call to logging.basicConfig() and retain only the module logger creation
(logger = logging.getLogger(__name__)); ensure no other import-time logging
configuration remains in cortex/security_scheduler.py and document that logging
setup should be performed by the application/CLI entry point instead.
- Around line 342-351: The current _frequency_to_systemd returns "monthly" for
unknown frequencies which causes CUSTOM schedules to be mis-scheduled; update
install_systemd_timer to explicitly check for ScheduleFrequency.CUSTOM (use
schedule.frequency and the custom_cron field) and refuse installation by logging
an error (e.g., logger.error(...)) and returning False, and change
_frequency_to_systemd to raise a ValueError or similar for
ScheduleFrequency.CUSTOM (do not default to "monthly") so callers cannot
silently generate a wrong OnCalendar value; this prevents systemd timers from
being installed for CUSTOM schedules unless a proper cron->systemd conversion is
later implemented.
- Around line 163-181: In _calculate_next_run, the final "return None" is
unreachable because every ScheduleFrequency branch (DAILY, WEEKLY, MONTHLY,
CUSTOM) already returns; remove the trailing "return None" statement at the end
of the function (in method _calculate_next_run) so there is no redundant
unreachable return; optionally, if you want to handle unexpected enum values,
replace it with an explicit raise (e.g., ValueError) instead of the redundant
return.
🧹 Nitpick comments (3)
cortex/security_scheduler.py (3)
63-64: Add return type hint.

Private methods should still include return type hints per the coding guidelines.

📝 Suggested addition

```diff
-    def _load_schedules(self):
+    def _load_schedules(self) -> None:
         """Load schedules from configuration file"""
```
90-91: Add return type hint.

Private methods should still include return type hints per the coding guidelines.

📝 Suggested addition

```diff
-    def _save_schedules(self):
+    def _save_schedules(self) -> None:
         """Save schedules to configuration file"""
```
353-366: Move import to module level and add return type hint.

The `os` import should be at the top of the file with other imports, and the method should have a return type annotation.

📝 Suggested changes

At the top of the file, add to imports:

```python
import os
```

Then update the method signature:

```diff
     def _has_root_privileges(self) -> bool:
         """Check if we have root privileges (running as root or have passwordless sudo)"""
-        import os
-
         # Check if running as root
         if os.geteuid() == 0:
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cortex/security_scheduler.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/security_scheduler.py
🧬 Code graph analysis (1)
cortex/security_scheduler.py (2)
cortex/autonomous_patcher.py (3)
- `AutonomousPatcher` (80-640)
- `PatchStrategy` (46-52)
- `patch_vulnerabilities` (579-622)

cortex/vulnerability_scanner.py (2)

- `VulnerabilityScanner` (89-725)
- `scan_all_packages` (527-626)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (3)
cortex/security_scheduler.py (3)
183-258: LGTM! Well-structured schedule execution.

The method correctly orchestrates scanning and patching with proper error handling, result aggregation, and schedule updates. The integration with `VulnerabilityScanner` and `AutonomousPatcher` follows the expected patterns.
260-340: LGTM! Proper privilege checks and error handling for systemd integration.

The method correctly checks for root privileges before attempting to write to system directories, provides helpful error messages, and handles subprocess failures appropriately.

386-502: LGTM! Well-structured CLI interface.

The CLI provides a comprehensive interface for schedule management with clear subcommands, helpful output formatting, and appropriate error handling.
```python
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
```
Remove module-level logging configuration.
Calling logging.basicConfig() at module import time affects the global logging configuration and can interfere with the application's logging setup. Library modules should only create loggers, not configure them.
🔧 Proposed fix
```diff
-logging.basicConfig(level=logging.INFO)
 logger = logging.getLogger(__name__)
```

Let the application or CLI entry point handle logging configuration.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
logger = logging.getLogger(__name__)
```
🤖 Prompt for AI Agents
In @cortex/security_scheduler.py around lines 21 - 22, Remove the module-level
logging configuration by deleting the call to logging.basicConfig() and retain
only the module logger creation (logger = logging.getLogger(__name__)); ensure
no other import-time logging configuration remains in
cortex/security_scheduler.py and document that logging setup should be performed
by the application/CLI entry point instead.
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 17 comments.
```python
def _run_command(self, cmd: list[str]) -> tuple[bool, str, str]:
    """Execute command and return success, stdout, stderr"""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
```
Copilot AI · Jan 8, 2026
The command timeout is hardcoded to 300 seconds (5 minutes). Large package updates or slow network connections might exceed this timeout, causing legitimate updates to fail. Consider making this timeout configurable or increasing it for certain operations.
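A minimal sketch of making the timeout configurable via a constructor parameter; the parameter name `command_timeout` is an assumption for illustration, not the actual `AutonomousPatcher` API:

```python
import subprocess


class PatcherSketch:
    """Illustrative patcher fragment with a configurable command timeout."""

    def __init__(self, command_timeout: float = 300.0) -> None:
        # Callers on slow networks or large upgrades can raise this.
        self.command_timeout = command_timeout

    def _run_command(self, cmd: list[str]) -> tuple[bool, str, str]:
        """Execute command and return success, stdout, stderr."""
        try:
            result = subprocess.run(
                cmd, capture_output=True, text=True, timeout=self.command_timeout
            )
            return result.returncode == 0, result.stdout, result.stderr
        except subprocess.TimeoutExpired:
            return False, "", f"command timed out after {self.command_timeout}s"


ok, out, err = PatcherSketch(command_timeout=5.0)._run_command(["echo", "hello"])
```

The default preserves today's 5-minute behavior while letting long-running upgrades opt into a larger value.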
```python
# Still set timestamp to avoid hammering on repeated failures
self._apt_last_updated = now
```
Copilot AI · Jan 8, 2026
The timestamp is set after a failure, which means a failed apt update will prevent retries for the next 5 minutes. This could leave systems vulnerable if the update failure was transient. Consider not setting the timestamp on failure or using a shorter retry interval for failures.
```diff
-# Still set timestamp to avoid hammering on repeated failures
-self._apt_last_updated = now
+# Do not update _apt_last_updated on failure so transient errors can be retried promptly
```
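A middle ground between the two behaviors is a dual-interval rate limiter: a long window after success, a short one after failure. This is a sketch only, and both interval values are illustrative assumptions, not values from the codebase:

```python
class AptUpdateSketch:
    """Rate-limit apt-get update, retrying sooner after failures."""

    SUCCESS_INTERVAL = 300.0  # seconds to wait after a successful update
    FAILURE_INTERVAL = 30.0   # shorter window so transient errors recover fast

    def __init__(self) -> None:
        self._last_attempt = 0.0
        self._last_interval = 0.0  # 0 means "never attempted": always allowed

    def should_update(self, now: float) -> bool:
        return (now - self._last_attempt) >= self._last_interval

    def record_attempt(self, now: float, success: bool) -> None:
        self._last_attempt = now
        self._last_interval = (
            self.SUCCESS_INTERVAL if success else self.FAILURE_INTERVAL
        )


sketch = AptUpdateSketch()
t0 = 1000.0
sketch.record_attempt(t0, success=False)
retry_soon = sketch.should_update(t0 + 60)   # failure window (30s) has elapsed
sketch.record_attempt(t0 + 60, success=True)
blocked = sketch.should_update(t0 + 120)     # success window (300s) has not
```

This still prevents hammering the mirrors on repeated failures while avoiding a 5-minute blind spot after a single transient error.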
```python
# Estimate duration (rough: 1 minute per package)
estimated_duration = len(packages_to_update) * 1.0
```
Copilot AI · Jan 8, 2026
The estimated duration is calculated as simply "1 minute per package" which is quite simplistic. This doesn't account for package size, dependencies, or system load. While it's noted as "rough," consider adding a note in documentation that this is a very rough estimate, or improve the estimation by considering package metadata.
```python
)
sec_patch_parser.add_argument("--package", help="Patch specific package only")
sec_patch_parser.add_argument(
    "--dry-run", action="store_true", default=True, help="Dry run mode (default)"
```
Copilot AI · Jan 8, 2026
The --dry-run flag is set to default=True which combined with the logic in line 1219 makes the flag behavior confusing. Setting --dry-run explicitly would have no effect since it's already the default. The current implementation is correct (dry_run unless --apply is set), but the argparse definition is misleading. Consider removing default=True or changing the help text to clarify.
| "--dry-run", action="store_true", default=True, help="Dry run mode (default)" | |
| "--dry-run", | |
| action="store_true", | |
| default=True, | |
| help="Run in dry-run mode (default; use --apply to actually apply patches)", |
```python
try:
    result = subprocess.run(["sudo", "-n", "true"], capture_output=True, timeout=2)
    return result.returncode == 0
except Exception:
```
Copilot AI · Jan 8, 2026
The subprocess timeout is set to 2 seconds which might be too short for checking sudo privileges on busy systems or systems with slow authentication backends. Consider increasing this timeout to avoid false negatives.
- Added a new --apply argument to enable real patching, defaulting to dry-run mode. - Updated schedule creation logic to reflect the dry-run status in the output.
- Introduced a sentinel CVE ID to cache "no vulnerabilities found" results, preventing unnecessary re-querying for clean packages. - Updated cache check logic to handle valid entries and return appropriate results. - Ensured that both actual vulnerabilities and sentinel entries are stored in the cache for improved efficiency.
…nfigurability - Renamed the global apt update interval constant to a default instance-level attribute in Autonomous Patcher, allowing for customizable update intervals. - Updated the constructor of Autonomous Patcher to accept an `apt_update_interval` parameter, with documentation clarifying its usage. - Modified the Vulnerability Scanner to include a `cache_hours` parameter for controlling the caching duration of vulnerability results, enhancing flexibility for security-critical systems. - Improved documentation in SECURITY_MANAGEMENT.md to provide patching frequency guidelines based on system types.
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 10 comments.
```python
# Properly calculate next month, handling varying month lengths
year = now.year
month = now.month + 1
if month > 12:
    month = 1
    year += 1
# Clamp day to max days in target month (e.g., Jan 31 -> Feb 28)
max_day = calendar.monthrange(year, month)[1]
day = min(now.day, max_day)
return now.replace(year=year, month=month, day=day)
```
Copilot AI · Jan 8, 2026
The month calculation on lines 175-184 handles month rollover but doesn't preserve time components. If the schedule is created at 2:30 PM on January 31st, the next monthly run would be scheduled for February 28th (or 29th) but at the current time when _calculate_next_run is called, not at 2:30 PM. This may lead to inconsistent execution times. Consider preserving the time of day for scheduled runs.
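A sketch of pinning the time of day to a configured value so repeated recalculation cannot drift the execution time; the explicit `run_at` field is an assumption for illustration (the current code derives everything from `now`):

```python
import calendar
from datetime import datetime, time


def next_monthly_run(now: datetime, run_at: time) -> datetime:
    """Compute the next monthly run, clamping the day to the target month's
    length and pinning the hour/minute to the schedule's configured run_at."""
    year, month = now.year, now.month + 1
    if month > 12:
        month, year = 1, year + 1
    # Clamp day to max days in target month (e.g., Jan 31 -> Feb 28)
    day = min(now.day, calendar.monthrange(year, month)[1])
    return datetime(year, month, day, run_at.hour, run_at.minute)


# Schedule created Jan 31 at 09:17, but configured to run at 14:30.
nxt = next_monthly_run(datetime(2025, 1, 31, 9, 17), time(14, 30))
```

With this shape, the call time of `_calculate_next_run` no longer leaks into the scheduled time of day.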
```python
# Security patch
sec_patch_parser = security_subs.add_parser("patch", help="Patch vulnerabilities")
sec_patch_parser.add_argument(
    "--scan-and-patch", action="store_true", help="Scan and patch automatically"
)
sec_patch_parser.add_argument("--package", help="Patch specific package only")
sec_patch_parser.add_argument(
    "--dry-run", action="store_true", default=True, help="Dry run mode (default)"
)
sec_patch_parser.add_argument("--apply", action="store_true", help="Actually apply patches")
sec_patch_parser.add_argument(
    "--strategy",
    choices=["automatic", "critical_only", "high_and_above"],
    default="critical_only",
    help="Patching strategy",
)
```
Copilot AI · Jan 8, 2026
The --patch-strategy argument is missing from the patch parser but is used in line 1220. This will cause an AttributeError when trying to access args.strategy. The argument needs to be added to the sec_patch_parser.
```python
sec_schedule_create = sec_schedule_subs.add_parser("create", help="Create a schedule")
sec_schedule_create.add_argument("id", help="Schedule ID")
sec_schedule_create.add_argument(
    "--frequency",
    choices=["daily", "weekly", "monthly"],
    default="monthly",
    help="Schedule frequency",
)
sec_schedule_create.add_argument("--enable-patch", action="store_true", help="Enable patching")
sec_schedule_create.add_argument(
    "--apply",
    action="store_true",
    help="Enable real patching (default: dry-run mode)",
)
sec_schedule_subs.add_parser("list", help="List schedules")
sec_schedule_run = sec_schedule_subs.add_parser("run", help="Run a schedule")
sec_schedule_run.add_argument("id", help="Schedule ID")
sec_schedule_install = sec_schedule_subs.add_parser(
    "install-timer", help="Install systemd timer"
)
sec_schedule_install.add_argument("id", help="Schedule ID")
```
Copilot AI · Jan 8, 2026
The --patch-strategy argument is missing from the schedule create parser but is referenced in line 1307 with a default. While this won't cause an error due to the getattr with default, it's inconsistent with the autonomous_patcher.py CLI (lines 667-669) which has this argument. Users may expect this option to be available.
```ini
[Service]
Type=oneshot
ExecStart={self.cortex_binary} security schedule run {schedule_id}
User=root
```
Copilot AI · Jan 8, 2026
The systemd service files are generated with User=root (line 293) which means all security operations run as root. This is necessary for package management, but the generated service file should include additional security hardening directives like ProtectSystem=strict, ProtectHome=true, NoNewPrivileges=true, etc., to limit the blast radius if the service is compromised.
| User=root | |
| User=root | |
| NoNewPrivileges=true | |
| PrivateTmp=true | |
| PrivateDevices=true | |
| ProtectControlGroups=true | |
| ProtectKernelTunables=true | |
| ProtectKernelModules=true | |
| ProtectKernelLogs=true | |
| ProtectClock=true | |
| RestrictSUIDSGID=true | |
| RestrictRealtime=true | |
| LockPersonality=true | |
| RestrictNamespaces=true |
| def test_update_fixes_vulnerability_no_fixed_version_default(self): | ||
| """Test update verification rejects when no fixed_version is specified (default)""" | ||
| vuln = Vulnerability( | ||
| cve_id="CVE-2023-12345", | ||
| package_name="nginx", | ||
| installed_version="1.18.0", | ||
| affected_versions="< 1.20.0", | ||
| severity=Severity.HIGH, | ||
| description="Test vulnerability", | ||
| fixed_version=None, # No fixed version specified | ||
| ) | ||
|
|
||
| # Should return False by default when fixed_version is unknown (refuse unverified) | ||
| self.assertFalse(self.patcher.allow_unverified_patches) | ||
| result = self.patcher._update_fixes_vulnerability("1.20.0", vuln) | ||
| self.assertFalse(result) |
Copilot AI (Jan 8, 2026):
The test at line 589 uses a hardcoded approach to verify the update_fixes_vulnerability function rejects unverified patches by default. However, line 602 sets self.assertFalse(self.patcher.allow_unverified_patches) which tests the instance variable directly rather than the behavior. This test would pass even if the logic in _update_fixes_vulnerability were broken. The test should verify that the method returns False when fixed_version is None, not just that the flag is set correctly.
| ) | ||
| sec_patch_parser.add_argument("--package", help="Patch specific package only") | ||
| sec_patch_parser.add_argument( | ||
| "--dry-run", action="store_true", default=True, help="Dry run mode (default)" |
Copilot AI (Jan 8, 2026):
The comment on line 2576 states "--dry-run, action="store_true", default=True" which creates a confusing situation. The default=True for a store_true action doesn't make sense since store_true actions are False by default and become True when the flag is present. This creates ambiguity about whether dry-run is enabled by default. The actual logic at line 1219 correctly handles this (dry_run = not args.apply), but the parser definition is misleading.
| "--dry-run", action="store_true", default=True, help="Dry run mode (default)" | |
| "--dry-run", | |
| action="store_true", | |
| help="Explicitly run in dry-run mode (default is dry-run unless --apply is used)", |
cortex/vulnerability_scanner.py
Outdated
| def __init__(self, db_path: str | None = None, cache_hours: float = 24.0): | ||
| """ | ||
| Initialize the vulnerability scanner. | ||
| Args: | ||
| db_path: Path to the SQLite cache database. Defaults to ~/.cortex/vulnerability_cache.db | ||
| cache_hours: How long to cache vulnerability results in hours. Defaults to 24. | ||
| For security-critical systems, consider reducing this (e.g., 1-6 hours) | ||
| to catch newly disclosed vulnerabilities faster. | ||
| """ | ||
| if db_path is None: | ||
| db_path = str(Path.home() / ".cortex" / "vulnerability_cache.db") | ||
|
|
||
| self.db_path = db_path | ||
| self.cache_hours = cache_hours |
Copilot AI (Jan 8, 2026):
The cache_hours parameter defaults to 24.0 hours (docstring, line 98), consistent with the "24-hour caching" described in SECURITY_MANAGEMENT.md line 128. For security-critical systems this is quite long: a newly disclosed vulnerability can go unnoticed for up to a day. Either adopt a more aggressive default cache expiration (e.g., 1-6 hours), or have the documentation spell out the trade-offs more explicitly.
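The trade-off can be illustrated with the staleness check implied by `cache_hours` (a standalone sketch; the real scanner stores timestamps in its SQLite cache):

```python
from datetime import datetime, timedelta

# Sketch of the cache-expiry rule implied by cache_hours: an entry older
# than the window is refreshed from OSV. A shorter window surfaces newly
# disclosed CVEs sooner, at the cost of more API traffic.
def is_stale(cached_at: datetime, cache_hours: float, now: datetime) -> bool:
    return now - cached_at > timedelta(hours=cache_hours)

now = datetime(2026, 1, 8, 12, 0)
cached_at = datetime(2026, 1, 8, 6, 0)  # cached six hours ago
print(is_stale(cached_at, 24.0, now))  # False: still fresh with the default
print(is_stale(cached_at, 2.0, now))   # True: refreshed with a 2-hour window
```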
| def _query_osv(self, package_name: str, version: str) -> list[Vulnerability]: | ||
| vulnerabilities = [] | ||
|
|
||
| try: | ||
| self._rate_limit() | ||
|
|
||
| query = { | ||
| "package": {"name": package_name, "ecosystem": "Debian"}, | ||
| "version": version, | ||
| } | ||
|
|
||
| response = requests.post( | ||
| self.osv_api, json=query, timeout=10, headers={"Content-Type": "application/json"} | ||
| ) | ||
|
|
||
| if response.status_code == 200: | ||
| data = response.json() | ||
| if "vulns" in data: | ||
| for vuln in data["vulns"]: | ||
| severity = Severity.UNKNOWN | ||
| cvss_score = None | ||
|
|
||
| if "database_specific" in vuln: | ||
| db_spec = vuln["database_specific"] | ||
| if "severity" in db_spec: | ||
| sev_str = db_spec["severity"].upper() | ||
| if sev_str in ["CRITICAL", "CRIT"]: | ||
| severity = Severity.CRITICAL | ||
| elif sev_str == "HIGH": | ||
| severity = Severity.HIGH | ||
| elif sev_str == "MEDIUM": | ||
| severity = Severity.MEDIUM | ||
| elif sev_str == "LOW": | ||
| severity = Severity.LOW | ||
|
|
||
| if "severity" in vuln: | ||
| for sev in vuln["severity"]: | ||
| if sev["type"] == "CVSS_V3": | ||
| score_value = sev.get("score", "") | ||
| if isinstance(score_value, (int, float)): | ||
| cvss_score = float(score_value) | ||
| elif isinstance(score_value, str): | ||
| try: | ||
| cvss_score = float(score_value) | ||
| except ValueError: | ||
| cvss_score = self._parse_cvss_vector(score_value) | ||
|
|
||
| if cvss_score is not None: | ||
| if cvss_score >= 9.0: | ||
| severity = Severity.CRITICAL | ||
| elif cvss_score >= 7.0: | ||
| severity = Severity.HIGH | ||
| elif cvss_score >= 4.0: | ||
| severity = Severity.MEDIUM | ||
| else: | ||
| severity = Severity.LOW | ||
|
|
||
| affected = "unknown" | ||
| fixed_version = None | ||
| if "affected" in vuln: | ||
| for affected_range in vuln["affected"]: | ||
| if "ranges" in affected_range: | ||
| for range_item in affected_range["ranges"]: | ||
| if "events" in range_item: | ||
| affected = str(range_item["events"]) | ||
| for event in range_item["events"]: | ||
| if "fixed" in event: | ||
| fixed_version = event["fixed"] | ||
|
|
||
| references = [] | ||
| if "references" in vuln: | ||
| for ref in vuln["references"]: | ||
| if "url" in ref: | ||
| references.append(ref["url"]) | ||
|
|
||
| vuln_obj = Vulnerability( | ||
| cve_id=vuln.get("id", "UNKNOWN"), | ||
| package_name=package_name, | ||
| installed_version=version, | ||
| affected_versions=affected, | ||
| severity=severity, | ||
| description=vuln.get("summary", "No description available"), | ||
| published_date=vuln.get("published", ""), | ||
| fixed_version=fixed_version, | ||
| cvss_score=cvss_score, | ||
| source="osv", | ||
| references=references, | ||
| ) | ||
|
|
||
| vulnerabilities.append(vuln_obj) | ||
|
|
||
| except requests.RequestException as e: | ||
| logger.warning(f"OSV query failed for {package_name}: {e}") | ||
| except Exception as e: | ||
| logger.warning(f"Error processing OSV response: {e}") | ||
|
|
||
| return vulnerabilities |
Copilot AI (Jan 8, 2026):
The OSV API is being called without authentication, which means rate limiting may apply. The code has a 0.5 second rate limit (line 118), but there's no handling for HTTP 429 (Too Many Requests) responses or exponential backoff. If the rate limit is exceeded, the scanner will silently return empty results rather than informing the user of the issue.
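One hedged way to address this: treat 429 specially and back off, injecting the HTTP call so the retry loop is unit-testable without the network. The function and parameter names below are illustrative; in the scanner, `post` could be `functools.partial(requests.post, timeout=10)`:

```python
import time

# Sketch of exponential backoff for HTTP 429 responses from the OSV API.
def post_with_backoff(post, url, payload, max_retries=3, sleep=time.sleep):
    delay = 1.0
    for _ in range(max_retries + 1):
        resp = post(url, json=payload)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After when the server provides it.
        retry_after = resp.headers.get("Retry-After")
        sleep(float(retry_after) if retry_after else delay)
        delay *= 2  # exponential backoff between attempts
    return None  # surface exhaustion to the caller instead of empty results

# Fake client: 429 twice, then 200, to exercise the retry path.
class FakeClient:
    def __init__(self):
        self.calls = 0

    def __call__(self, url, json):
        self.calls += 1
        ok = self.calls >= 3
        return type("R", (), {"status_code": 200 if ok else 429, "headers": {}})()

fake = FakeClient()
resp = post_with_backoff(fake, "https://api.osv.dev/v1/query", {}, sleep=lambda s: None)
print(resp.status_code, fake.calls)  # 200 3
```

A `None` return can then be reported as a scan error rather than silently treated as "no vulnerabilities".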
| def _get_installed_packages(self) -> dict[str, str]: | ||
| packages = {} | ||
| skipped_count = 0 | ||
|
|
||
| try: | ||
| result = subprocess.run( | ||
| ["dpkg-query", "-W", "-f=${Package}|${Version}\n"], | ||
| capture_output=True, | ||
| text=True, | ||
| timeout=30, | ||
| ) | ||
|
|
||
| if result.returncode == 0: | ||
| for line in result.stdout.strip().split("\n"): | ||
| if "|" in line: | ||
| parts = line.split("|", 1) | ||
| if len(parts) == 2: | ||
| pkg_name = parts[0].strip() | ||
| pkg_version = parts[1].strip() | ||
|
|
||
| # Validate package name and version before use | ||
| if not self._is_valid_package_name(pkg_name): | ||
| logger.warning(f"Skipping package with invalid name: {pkg_name!r}") | ||
| skipped_count += 1 | ||
| continue | ||
|
|
||
| if not self._is_valid_version_string(pkg_version): | ||
| logger.warning( | ||
| f"Skipping {pkg_name} with invalid version: {pkg_version!r}" | ||
| ) | ||
| skipped_count += 1 | ||
| continue | ||
|
|
||
| packages[pkg_name] = pkg_version | ||
|
|
||
| if skipped_count > 0: | ||
| logger.warning(f"Skipped {skipped_count} packages with invalid names/versions") | ||
| logger.info(f"Found {len(packages)} installed packages") | ||
| except Exception as e: | ||
| logger.error(f"Failed to get installed packages: {e}") | ||
|
|
||
| return packages |
Copilot AI (Jan 8, 2026):
The subprocess commands use shell=False and pass commands as lists which is secure. However, the package_name and version strings are validated using regex patterns before being used in subprocess calls. This validation should occur immediately before subprocess calls to prevent time-of-check-time-of-use (TOCTOU) vulnerabilities if the validated data is stored and used later.
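A sketch of validate-at-point-of-use: re-check the name immediately before the subprocess call instead of relying on a check performed when the data was first read. The helper name and regex are illustrative (the pattern approximates dpkg's package-name rules):

```python
import re
import subprocess

# Debian package names: lowercase alphanumerics plus + - . , length >= 2,
# starting with an alphanumeric (an approximation of dpkg's rules).
_PKG_RE = re.compile(r"^[a-z0-9][a-z0-9+.\-]+$")

def safe_policy_query(pkg_name: str) -> str:
    # Validate at the point of use so stored data cannot bypass the check
    # (the TOCTOU concern above).
    if not _PKG_RE.fullmatch(pkg_name):
        raise ValueError(f"invalid package name: {pkg_name!r}")
    result = subprocess.run(
        ["apt-cache", "policy", pkg_name],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout
```

Only the rejection path is worth asserting in a unit test, since apt-cache may not exist on the test host.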
| commands = ["apt-get update"] | ||
| if packages_list: | ||
| commands.append("apt-get install -y " + " ".join(packages_list)) | ||
|
|
||
| install_id = self.history.record_installation( | ||
| InstallationType.UPGRADE, | ||
| packages_list, | ||
| commands, | ||
| start_time, | ||
| ) | ||
|
|
||
| # Execute patching | ||
| errors = [] | ||
| updated_packages = [] | ||
|
|
||
| try: | ||
| # Update package list | ||
| progress.print_info("Updating package list...") | ||
| success, stdout, stderr = self._run_command(["apt-get", "update", "-qq"]) | ||
| if not success: | ||
| errors.append(f"Failed to update package list: {stderr}") | ||
| progress.print_warning("Could not update package list") | ||
|
|
||
| # Install updates with progress | ||
| package_items = list(plan.packages_to_update.items()) | ||
| for package_name, target_version in progress.progress_bar( | ||
| package_items, description="📦 Installing updates" | ||
| ): | ||
| # Use apt-get install with specific version if available | ||
| if target_version: | ||
| cmd = ["apt-get", "install", "-y", f"{package_name}={target_version}"] | ||
| else: | ||
| cmd = ["apt-get", "install", "-y", package_name] |
Copilot AI (Jan 8, 2026):
The autonomous patcher runs apt-get install with -y flag (line 501, 529, 531) which auto-accepts prompts. While this is necessary for automation, if dry_run=False and the system requires user confirmation for certain package operations (e.g., removing essential packages as dependencies), this could lead to unintended system changes. Consider adding additional safety checks or logging warnings about auto-acceptance.
…VulnerabilityScanner initialization

- Added error handling for CUSTOM frequency schedules in SecurityScheduler, preventing automatic systemd timer installation and requiring manual configuration.
- Updated the _frequency_to_systemd method to raise a ValueError for CUSTOM frequencies, clarifying the need for manual conversion.
- Enhanced the VulnerabilityScanner constructor to accept a dpkg_timeout parameter, allowing for customizable timeout settings during package queries.
Actionable comments posted: 2
In @tests/test_security_scheduler.py:
- Around line 1-628: The test file tests/test_security_scheduler.py fails the
Black formatting check; run black on that file (e.g., black
tests/test_security_scheduler.py or black .) to reformat it, then re-run the CI;
no code logic changes are required—just apply Black formatting to the file
containing the test classes (TestScheduleFrequencyEnum, TestSecuritySchedule,
TestSecurityScheduler, etc.) so the pipeline passes.
🧹 Nitpick comments (3)
docs/SECURITY_MANAGEMENT.md (1)
126-126: Minor documentation formatting improvements.

The static analysis tools have flagged a few minor formatting issues that can be optionally addressed:

- Line 126: Consider hyphenating "Open Source" → "Open-Source" when used as a compound adjective before "Vulnerabilities"
- Line 191: The architecture diagram code block is missing a language specifier (consider adding text or ascii-art)
- Line 241: Consider using a proper heading (## Priority) instead of emphasis for the "Priority" section

These are optional improvements for documentation polish.

♻️ Optional formatting fixes

Line 126:

-| **OSV (Open Source Vulnerabilities)** | Primary database, comprehensive | Fast |
+| **OSV (Open-Source Vulnerabilities)** | Primary database, comprehensive | Fast |

Line 191 (add language hint):

-```
+```text
 ┌─────────────────────────────────────────────────────────────────┐

Line 241 (use proper heading):

-**🔴 Critical**
+## Priority
+
+🔴 Critical

Also applies to: 191-214, 241-241
cortex/vulnerability_scanner.py (2)
61-66: Use field(default_factory=list) for mutable defaults.

Using None as a default for a list field and converting it in __post_init__ works but is not idiomatic for dataclasses.

♻️ Proposed refactor

+from dataclasses import dataclass, field
+
 @dataclass
 class Vulnerability:
     """Represents a security vulnerability"""
     cve_id: str
     package_name: str
     installed_version: str
     affected_versions: str
     severity: Severity
     description: str
     published_date: str | None = None
     fixed_version: str | None = None
     cvss_score: float | None = None
     source: str = "unknown"
-    references: list[str] = None
-
-    def __post_init__(self):
-        if self.references is None:
-            self.references = []
+    references: list[str] = field(default_factory=list)
82-86: Use field(default_factory=list) for mutable defaults.

Same issue as in the Vulnerability dataclass.

♻️ Proposed refactor

 @dataclass
 class ScanResult:
     """Result of a vulnerability scan"""
     scan_id: str
     timestamp: str
     total_packages_scanned: int
     vulnerabilities_found: int
     critical_count: int
     high_count: int
     medium_count: int
     low_count: int
     vulnerabilities: list[Vulnerability]
     scan_duration_seconds: float
-    errors: list[str] = None
-
-    def __post_init__(self):
-        if self.errors is None:
-            self.errors = []
+    errors: list[str] = field(default_factory=list)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
cortex/autonomous_patcher.py
cortex/cli.py
cortex/security_scheduler.py
cortex/vulnerability_scanner.py
docs/SECURITY_MANAGEMENT.md
tests/test_security_scheduler.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
cortex/autonomous_patcher.py
cortex/vulnerability_scanner.py
cortex/cli.py
cortex/security_scheduler.py
tests/test_security_scheduler.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_security_scheduler.py
🧠 Learnings (5)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
cortex/autonomous_patcher.py
cortex/cli.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/autonomous_patcher.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/vulnerability_scanner.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*.py : Type hints required in Python code
Applied to files:
cortex/vulnerability_scanner.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to {setup.py,setup.cfg,pyproject.toml,**/__init__.py} : Use Python 3.10 or higher as the minimum supported version
Applied to files:
cortex/vulnerability_scanner.py
🧬 Code graph analysis (2)
cortex/vulnerability_scanner.py (3)
cortex/progress_indicators.py (2)
ProgressIndicator (177-378), get_progress_indicator (665-672)
cortex/utils/db_pool.py (3)
SQLiteConnectionPool (19-172), get_connection_pool (181-222), get_connection (99-136)
cortex/logging_system.py (3)
info (200-202), debug (196-198), critical (212-214)
cortex/security_scheduler.py (2)
cortex/autonomous_patcher.py (3)
AutonomousPatcher (82-647), PatchStrategy (48-54), patch_vulnerabilities (586-629)
cortex/vulnerability_scanner.py (2)
VulnerabilityScanner (89-774), scan_all_packages (576-675)
🪛 GitHub Actions: CI
tests/test_security_scheduler.py
[error] 1-1: Black formatting check failed. 1 file would be reformatted. Run 'black' to format. Command: 'black --check . --exclude "(venv|.venv|build|dist)"'
🪛 LanguageTool
docs/SECURITY_MANAGEMENT.md
[uncategorized] ~126-~126: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...| |--------|---------|-------| | OSV (Open Source Vulnerabilities) | Primary database, ...
(EN_COMPOUND_ADJECTIVE_INTERNAL)
🪛 markdownlint-cli2 (0.18.1)
docs/SECURITY_MANAGEMENT.md
191-191: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
241-241: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (19)
docs/SECURITY_MANAGEMENT.md (1)
1-261: Excellent documentation for the security management feature!

The documentation is comprehensive, well-structured, and provides clear guidance on:
- Problem framing with real-world impact statistics
- Core features and safety controls
- Practical usage examples
- Technical implementation details
- Clear acceptance criteria
The content aligns well with the implementation in the codebase and provides valuable context for users and developers.
cortex/autonomous_patcher.py (3)
41-46: Excellent thread-safety and configurability design!

The implementation includes:

- Module-level thread lock for apt update operations to prevent race conditions
- Configurable apt_update_interval (default 300s/5min) that balances freshness with avoiding excessive apt-get update calls
- Instance-level tracking of _apt_last_updated to avoid global state issues
- Proper initialization with safe defaults (dry_run=True, allow_unverified_patches=False)
This design allows users to tune the update interval for their specific use case (e.g., increase for frequent scans, decrease for time-sensitive patching).
Based on learnings, the dry-run default aligns with project safety standards.
Also applies to: 85-131
260-304: Robust vulnerability verification and patch planning!

The implementation correctly:

- Uses dpkg --compare-versions for Debian version comparison (Lines 252-254)
- Verifies that candidate versions actually fix vulnerabilities before including them in the patch plan
- Provides the allow_unverified_patches safety valve with clear warnings (Lines 277-288)
- Groups vulnerabilities by package and verifies each fix
- Detects kernel packages to set the requires_reboot flag (Lines 433-434)
- Provides detailed logging for skipped vulnerabilities

The single ensure_apt_updated() call before checking all packages (Line 394) is a good performance optimization.

Also applies to: 348-453
650-717: CLI correctly implements dry-run by default.

The CLI interface properly enforces the project's safety standard:

- Line 660: --dry-run defaults to True
- Line 663: --apply explicitly disables dry-run
- Line 681: dry_run = args.dry_run and not args.apply ensures apply overrides the default
- Lines 699-700: Clear indication when dry-run mode is active
This matches the learned project standard: "Dry-run by default for all installations in command execution."
cortex/cli.py (3)
1081-1106: Clean security command routing with proper error handling.

The security command handler follows the established patterns in the CLI:
- Action-based routing to specialized handlers (_security_scan, _security_patch, _security_schedule)
- Comprehensive exception handling with optional traceback in verbose mode
- Clear error messages for missing subcommands
1207-1288: Patch command correctly enforces dry-run by default.

The implementation properly handles the safety default:

# Line 1219: Dry run is the default; only disabled when --apply is explicitly specified
dry_run = not getattr(args, "apply", False)

This ensures:

- Patches are previewed by default (dry-run mode)
- Users must explicitly use --apply to perform actual patching
- Clear visual indication of the mode (Lines 1228-1233)
The progress indicators and summary output provide excellent user feedback.
2557-2612: Well-structured argument parser for security commands.

The argument parser setup includes:

- Separate subparsers for scan, patch, and schedule actions
- Clear help text for each option
- Appropriate defaults (e.g., --dry-run default on Line 2576, critical_only strategy on Line 2582)
- Consistent naming conventions
The structure makes the CLI intuitive and discoverable for users.
cortex/security_scheduler.py (3)
50-163: Robust schedule management with proper persistence.

The scheduler implementation includes:

- Automatic config directory creation with mkdir(parents=True, exist_ok=True) (Line 93)
- Comprehensive error handling for I/O operations (Lines 88-89, 116-117)
- Proper serialization/deserialization of enum values via .value attributes
- Logging for diagnostics (Lines 87, 161)
The persistence layer is well-designed and handles edge cases appropriately.
164-191: Correct handling of month boundaries in schedule calculation.

The _calculate_next_run method for MONTHLY schedules (Lines 174-184) properly handles edge cases:

# Lines 176-180: Handle year rollover
month = now.month + 1
if month > 12:
    month = 1
    year += 1

# Lines 182-184: Clamp day to valid range for target month
max_day = calendar.monthrange(year, month)[1]
day = min(now.day, max_day)

This prevents issues like scheduling Jan 31 → Feb 31 (invalid) by clamping to Feb 28/29.

For CUSTOM frequency, returning None is appropriate since cron parsing is not implemented (Line 188).
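The clamping behavior can be checked with a tiny standalone version of the same arithmetic (function name is illustrative; the logic mirrors the quoted snippet):

```python
import calendar
from datetime import datetime

# "Same day next month" from Jan 31 clamps to Feb 28 (29 in leap years),
# never producing an invalid date like Feb 31.
def next_monthly_run(now: datetime) -> datetime:
    year, month = now.year, now.month + 1
    if month > 12:
        month, year = 1, year + 1
    day = min(now.day, calendar.monthrange(year, month)[1])
    return now.replace(year=year, month=month, day=day)

print(next_monthly_run(datetime(2026, 1, 31, 3, 0)))  # 2026-02-28 03:00:00
```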
269-360: Proper handling of CUSTOM frequency systemd timer limitation.

The implementation correctly prevents automatic systemd timer installation for CUSTOM frequency schedules:

Lines 286-293: Explicit refusal with clear error message:

if schedule.frequency == ScheduleFrequency.CUSTOM:
    logger.error(
        f"Cannot install systemd timer for schedule '{schedule_id}': "
        f"CUSTOM frequency requires manual configuration. "
        f"Use the custom_cron field ('{schedule.custom_cron}') with cron or "
        f"create a manual systemd timer with the appropriate OnCalendar value."
    )
    return False

Lines 380-384: _frequency_to_systemd raises ValueError for CUSTOM, preventing accidental conversion.
The root privilege checking (Lines 320-327) is also well-implemented with helpful guidance for users.
Also applies to: 361-387
cortex/vulnerability_scanner.py (9)
1-34: LGTM! The module structure, imports, and package name validation pattern are well-organized. Good use of the project's utilities for progress indicators and connection pooling.

92-127: LGTM! Well-documented initialization with sensible defaults and helpful guidance on parameter tuning for different use cases.

227-256: LGTM! Excellent input validation following security best practices. The validation prevents potential injection attacks and ensures only valid Debian package names and version strings are processed.

257-298: LGTM! Robust package enumeration with proper validation, timeout handling, and informative logging. The defensive validation of package names and versions prevents malformed data from entering the system.

337-441: LGTM! Excellent caching strategy using a sentinel value to cache "no vulnerabilities found" results. This prevents unnecessary API calls for clean packages while maintaining proper cache expiry and cleanup.

442-538: LGTM! Comprehensive OSV API integration with proper rate limiting, robust parsing, and good error handling. The method correctly prioritizes numeric CVSS scores from OSV and handles various response formats gracefully.

540-675: LGTM! Well-designed scanning methods with excellent user feedback. The cache statistics and progress reporting provide valuable insights into scan performance, and per-package error handling ensures robustness.

677-774: LGTM! Clean implementation of scan history persistence and retrieval. The deduplication logic in get_critical_vulnerabilities ensures unique results across multiple scans.

777-865: LGTM! Well-structured CLI interface with clear output formatting and proper error handling. The visual indicators (emojis) enhance readability of scan results.
| def _parse_cvss_vector(self, vector_string: str) -> float | None: | ||
| """Parse a CVSS vector string and estimate a severity score.""" | ||
| if not vector_string or not vector_string.startswith("CVSS:"): | ||
| return None | ||
|
|
||
| try: | ||
| parts = vector_string.split("/") | ||
| metrics = {} | ||
| for part in parts[1:]: | ||
| if ":" in part: | ||
| key, value = part.split(":", 1) | ||
| metrics[key] = value | ||
|
|
||
| score = 0.0 | ||
| av_scores = {"N": 2.5, "A": 2.0, "L": 1.5, "P": 1.0} | ||
| score += av_scores.get(metrics.get("AV", "L"), 1.5) | ||
|
|
||
| ac_scores = {"L": 1.5, "H": 0.5} | ||
| score += ac_scores.get(metrics.get("AC", "L"), 1.0) | ||
|
|
||
| pr_scores = {"N": 1.5, "L": 1.0, "H": 0.5} | ||
| score += pr_scores.get(metrics.get("PR", "L"), 1.0) | ||
|
|
||
| impact_scores = {"H": 1.5, "L": 0.75, "N": 0.0} | ||
| score += impact_scores.get(metrics.get("C", "N"), 0.5) | ||
| score += impact_scores.get(metrics.get("I", "N"), 0.5) | ||
| score += impact_scores.get(metrics.get("A", "N"), 0.5) | ||
|
|
||
| return min(10.0, max(0.0, score)) | ||
|
|
||
| except Exception as e: | ||
| logger.debug(f"Failed to parse CVSS vector '{vector_string}': {e}") | ||
| return None |
Custom CVSS scoring deviates from the standard specification.
The _parse_cvss_vector method implements a simplified scoring algorithm that doesn't follow the official CVSS v3.x specification. The standard CVSS calculation involves complex formulas for Base Score, Impact Sub-Score, and Exploitability Sub-Score with specific coefficients and rounding rules.
While this appears to be a fallback (OSV typically provides numeric scores directly at lines 480-487), using non-standard scoring could lead to incorrect severity assessments when the fallback is triggered.
Consider one of these approaches:
- Use a well-established CVSS library (e.g., the cvss PyPI package) for accurate calculations
- If OSV always provides numeric scores, consider removing the vector parsing fallback entirely
Example using a standard library

# Add to requirements
# cvss==2.6
from cvss import CVSS3

def _parse_cvss_vector(self, vector_string: str) -> float | None:
    """Parse a CVSS vector string using standard CVSS v3 calculation."""
    if not vector_string or not vector_string.startswith("CVSS:"):
        return None
    try:
        c = CVSS3(vector_string)
        return c.base_score
    except Exception as e:
        logger.debug(f"Failed to parse CVSS vector '{vector_string}': {e}")
        return None

| #!/usr/bin/env python3 | ||
| """ | ||
| Tests for Security Scheduler Module | ||
| """ | ||
|
|
||
| import json | ||
| import os | ||
| import tempfile | ||
| import unittest | ||
| from datetime import datetime | ||
| from unittest.mock import MagicMock, patch | ||
|
|
||
| from cortex.autonomous_patcher import PatchStrategy | ||
| from cortex.security_scheduler import ( | ||
| ScheduleFrequency, | ||
| SecuritySchedule, | ||
| SecurityScheduler, | ||
| ) | ||
|
|
||
|
|
||
| class TestScheduleFrequencyEnum(unittest.TestCase): | ||
| """Test cases for ScheduleFrequency enum""" | ||
|
|
||
| def test_frequency_values(self): | ||
| """Test frequency enum has correct values""" | ||
| self.assertEqual(ScheduleFrequency.DAILY.value, "daily") | ||
| self.assertEqual(ScheduleFrequency.WEEKLY.value, "weekly") | ||
| self.assertEqual(ScheduleFrequency.MONTHLY.value, "monthly") | ||
|
|
||
| def test_custom_frequency(self): | ||
| """Test custom frequency value""" | ||
| self.assertEqual(ScheduleFrequency.CUSTOM.value, "custom") | ||
|
|
||
|
|
||
| class TestSecuritySchedule(unittest.TestCase): | ||
| """Test cases for SecuritySchedule dataclass""" | ||
|
|
||
| def test_schedule_creation(self): | ||
| """Test creating security schedule object""" | ||
| schedule = SecuritySchedule( | ||
| schedule_id="monthly_scan", | ||
| frequency=ScheduleFrequency.MONTHLY, | ||
| scan_enabled=True, | ||
| patch_enabled=False, | ||
| ) | ||
|
|
||
| self.assertEqual(schedule.schedule_id, "monthly_scan") | ||
| self.assertEqual(schedule.frequency, ScheduleFrequency.MONTHLY) | ||
| self.assertTrue(schedule.scan_enabled) | ||
| self.assertFalse(schedule.patch_enabled) | ||
| self.assertTrue(schedule.dry_run) # Default value | ||
|
|
||
| def test_schedule_defaults(self): | ||
| """Test schedule default values""" | ||
| schedule = SecuritySchedule( | ||
| schedule_id="test", | ||
| frequency=ScheduleFrequency.DAILY, | ||
| ) | ||
|
|
||
| self.assertTrue(schedule.scan_enabled) | ||
| self.assertFalse(schedule.patch_enabled) | ||
| self.assertEqual(schedule.patch_strategy, PatchStrategy.CRITICAL_ONLY) | ||
| self.assertTrue(schedule.dry_run) | ||
| self.assertIsNone(schedule.last_run) | ||
| self.assertIsNone(schedule.next_run) | ||
| self.assertIsNone(schedule.custom_cron) | ||
|
|
||
| def test_schedule_with_patch_enabled(self): | ||
| """Test schedule with patching enabled""" | ||
| schedule = SecuritySchedule( | ||
| schedule_id="auto_patch", | ||
| frequency=ScheduleFrequency.WEEKLY, | ||
| scan_enabled=True, | ||
| patch_enabled=True, | ||
| patch_strategy=PatchStrategy.HIGH_AND_ABOVE, | ||
| dry_run=False, | ||
| ) | ||
|
|
||
| self.assertTrue(schedule.patch_enabled) | ||
| self.assertEqual(schedule.patch_strategy, PatchStrategy.HIGH_AND_ABOVE) | ||
| self.assertFalse(schedule.dry_run) | ||
|
|
||
| def test_schedule_with_custom_cron(self): | ||
| """Test schedule with custom cron expression""" | ||
| schedule = SecuritySchedule( | ||
| schedule_id="custom_schedule", | ||
| frequency=ScheduleFrequency.CUSTOM, | ||
| scan_enabled=True, | ||
| custom_cron="0 3 * * 0", # Every Sunday at 3 AM | ||
| ) | ||
|
|
||
| self.assertEqual(schedule.frequency, ScheduleFrequency.CUSTOM) | ||
| self.assertEqual(schedule.custom_cron, "0 3 * * 0") | ||
|
|
||
|
|
||

class TestSecurityScheduler(unittest.TestCase):
    """Test cases for SecurityScheduler"""

    def setUp(self):
        """Set up test fixtures"""
        self.temp_dir = tempfile.mkdtemp()
        self.config_path = os.path.join(self.temp_dir, "security_schedule.json")
        # Patch the config path
        self.config_patcher = patch.object(
            SecurityScheduler,
            "__init__",
            lambda self_obj: self._init_scheduler(self_obj),
        )

    def _init_scheduler(self, scheduler_obj):
        """Custom init for testing with temp config path"""
        from pathlib import Path

        scheduler_obj.config_path = Path(self.config_path)
        scheduler_obj.schedules = {}
        # Don't call _load_schedules since config doesn't exist yet

    def tearDown(self):
        """Clean up temporary files"""
        import shutil

        if os.path.exists(self.temp_dir):
            shutil.rmtree(self.temp_dir)

    def test_initialization(self):
        """Test scheduler initializes correctly"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.config_path = None
            scheduler.schedules = {}
            self.assertIsNotNone(scheduler)
            self.assertIsInstance(scheduler.schedules, dict)

    def test_create_schedule(self):
        """Test creating a schedule"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            from pathlib import Path

            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            schedule = scheduler.create_schedule(
                schedule_id="test_schedule",
                frequency=ScheduleFrequency.WEEKLY,
                scan_enabled=True,
                patch_enabled=False,
            )

            self.assertEqual(schedule.schedule_id, "test_schedule")
            self.assertIn("test_schedule", scheduler.schedules)

    def test_create_schedule_with_patch(self):
        """Test creating schedule with patching enabled"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            from pathlib import Path

            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            schedule = scheduler.create_schedule(
                schedule_id="patch_schedule",
                frequency=ScheduleFrequency.MONTHLY,
                scan_enabled=True,
                patch_enabled=True,
                patch_strategy=PatchStrategy.HIGH_AND_ABOVE,
                dry_run=False,
            )

            self.assertTrue(schedule.patch_enabled)
            self.assertEqual(schedule.patch_strategy, PatchStrategy.HIGH_AND_ABOVE)
            self.assertFalse(schedule.dry_run)

    def test_get_schedule(self):
        """Test getting a schedule by ID"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            from pathlib import Path

            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            scheduler.create_schedule(
                schedule_id="get_test",
                frequency=ScheduleFrequency.DAILY,
                scan_enabled=True,
            )

            schedule = scheduler.get_schedule("get_test")

            self.assertIsNotNone(schedule)
            self.assertEqual(schedule.schedule_id, "get_test")

    def test_get_nonexistent_schedule(self):
        """Test getting non-existent schedule returns None"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            schedule = scheduler.get_schedule("nonexistent")
            self.assertIsNone(schedule)

    def test_delete_schedule(self):
        """Test deleting a schedule"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            from pathlib import Path

            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            scheduler.create_schedule(
                schedule_id="to_delete",
                frequency=ScheduleFrequency.DAILY,
                scan_enabled=True,
            )

            success = scheduler.delete_schedule("to_delete")

            self.assertTrue(success)
            self.assertNotIn("to_delete", scheduler.schedules)

    def test_delete_nonexistent_schedule(self):
        """Test deleting non-existent schedule returns False"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            from pathlib import Path

            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            success = scheduler.delete_schedule("nonexistent")
            self.assertFalse(success)

    def test_list_schedules(self):
        """Test listing all schedules"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            from pathlib import Path

            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            scheduler.create_schedule(
                schedule_id="schedule_1",
                frequency=ScheduleFrequency.DAILY,
                scan_enabled=True,
            )
            scheduler.create_schedule(
                schedule_id="schedule_2",
                frequency=ScheduleFrequency.WEEKLY,
                scan_enabled=True,
            )

            schedules = scheduler.list_schedules()

            self.assertEqual(len(schedules), 2)
            schedule_ids = [s.schedule_id for s in schedules]
            self.assertIn("schedule_1", schedule_ids)
            self.assertIn("schedule_2", schedule_ids)

    def test_calculate_next_run_daily(self):
        """Test calculating next run time for daily schedule"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            next_run = scheduler._calculate_next_run(ScheduleFrequency.DAILY)

            self.assertIsNotNone(next_run)
            # Should be roughly 1 day from now
            delta = next_run - datetime.now()
            self.assertGreater(delta.total_seconds(), 23 * 3600)  # At least 23 hours
            self.assertLess(delta.total_seconds(), 25 * 3600)  # Less than 25 hours

    def test_calculate_next_run_weekly(self):
        """Test calculating next run time for weekly schedule"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            next_run = scheduler._calculate_next_run(ScheduleFrequency.WEEKLY)

            self.assertIsNotNone(next_run)
            # Should be roughly 1 week from now
            delta = next_run - datetime.now()
            self.assertGreaterEqual(delta.days, 6)
            self.assertLessEqual(delta.days, 8)

    def test_calculate_next_run_monthly(self):
        """Test calculating next run time for monthly schedule"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            next_run = scheduler._calculate_next_run(ScheduleFrequency.MONTHLY)

            self.assertIsNotNone(next_run)
            # Should be roughly 30 days from now
            delta = next_run - datetime.now()
            self.assertGreaterEqual(delta.days, 29)
            self.assertLessEqual(delta.days, 31)

    def test_calculate_next_run_custom(self):
        """Test calculating next run for custom frequency returns None"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            next_run = scheduler._calculate_next_run(ScheduleFrequency.CUSTOM)
            self.assertIsNone(next_run)

class TestSecuritySchedulerSystemd(unittest.TestCase):
    """Test systemd timer generation"""

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.config_path = os.path.join(self.temp_dir, "security_schedule.json")

    def tearDown(self):
        import shutil

        if os.path.exists(self.temp_dir):
            shutil.rmtree(self.temp_dir)

    def test_frequency_to_systemd_daily(self):
        """Test converting daily frequency to systemd format"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            result = scheduler._frequency_to_systemd(ScheduleFrequency.DAILY)
            self.assertEqual(result, "daily")

    def test_frequency_to_systemd_weekly(self):
        """Test converting weekly frequency to systemd format"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            result = scheduler._frequency_to_systemd(ScheduleFrequency.WEEKLY)
            self.assertEqual(result, "weekly")

    def test_frequency_to_systemd_monthly(self):
        """Test converting monthly frequency to systemd format"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            result = scheduler._frequency_to_systemd(ScheduleFrequency.MONTHLY)
            self.assertEqual(result, "monthly")

    def test_frequency_to_systemd_custom(self):
        """Test custom frequency raises ValueError (cannot auto-convert to systemd)"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            with self.assertRaises(ValueError) as context:
                scheduler._frequency_to_systemd(ScheduleFrequency.CUSTOM)

            self.assertIn("CUSTOM frequency cannot be automatically converted", str(context.exception))

    @patch("os.geteuid")
    def test_has_root_privileges_as_root(self, mock_geteuid):
        """Test root privilege check when running as root"""
        mock_geteuid.return_value = 0

        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            has_root = scheduler._has_root_privileges()
            self.assertTrue(has_root)

    @patch("os.geteuid")
    @patch("subprocess.run")
    def test_has_root_privileges_with_sudo(self, mock_run, mock_geteuid):
        """Test root privilege check with passwordless sudo"""
        mock_geteuid.return_value = 1000  # Non-root
        mock_run.return_value = MagicMock(returncode=0)

        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            has_root = scheduler._has_root_privileges()
            self.assertTrue(has_root)

    @patch("os.geteuid")
    @patch("subprocess.run")
    def test_has_root_privileges_without_sudo(self, mock_run, mock_geteuid):
        """Test root privilege check without sudo access"""
        mock_geteuid.return_value = 1000  # Non-root
        mock_run.return_value = MagicMock(returncode=1)

        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            has_root = scheduler._has_root_privileges()
            self.assertFalse(has_root)

    @patch.object(SecurityScheduler, "_has_root_privileges")
    def test_install_systemd_timer_no_privileges(self, mock_has_root):
        """Test installing timer fails without root"""
        mock_has_root.return_value = False

        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            from pathlib import Path

            scheduler = SecurityScheduler()
            scheduler.config_path = Path(self.config_path)
            scheduler.cortex_binary = "/usr/bin/cortex"
            scheduler.schedules = {}

            scheduler.create_schedule(
                schedule_id="no_root_test",
                frequency=ScheduleFrequency.DAILY,
                scan_enabled=True,
            )

            success = scheduler.install_systemd_timer("no_root_test")
            self.assertFalse(success)

    def test_install_systemd_timer_nonexistent_schedule(self):
        """Test installing timer for non-existent schedule fails"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            success = scheduler.install_systemd_timer("nonexistent")
            self.assertFalse(success)

class TestSecuritySchedulerExecution(unittest.TestCase):
    """Test schedule execution"""

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.config_path = os.path.join(self.temp_dir, "security_schedule.json")

    def tearDown(self):
        import shutil

        if os.path.exists(self.temp_dir):
            shutil.rmtree(self.temp_dir)

    @patch("cortex.security_scheduler.VulnerabilityScanner")
    def test_run_schedule_scan_only(self, mock_scanner_class):
        """Test running schedule with scan only"""
        mock_scanner = MagicMock()
        mock_scan_result = MagicMock()
        mock_scan_result.vulnerabilities = []
        mock_scan_result.vulnerabilities_found = 0
        mock_scan_result.critical_count = 0
        mock_scan_result.high_count = 0
        mock_scan_result.medium_count = 0
        mock_scan_result.low_count = 0
        mock_scanner.scan_all_packages.return_value = mock_scan_result
        mock_scanner_class.return_value = mock_scanner

        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            from pathlib import Path

            scheduler = SecurityScheduler()
            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            scheduler.create_schedule(
                schedule_id="scan_only",
                frequency=ScheduleFrequency.DAILY,
                scan_enabled=True,
                patch_enabled=False,
            )

            result = scheduler.run_schedule("scan_only")

            self.assertTrue(result["success"])
            self.assertIsNotNone(result["scan_result"])
            mock_scanner.scan_all_packages.assert_called_once()

    def test_run_nonexistent_schedule(self):
        """Test running non-existent schedule raises error"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            scheduler = SecurityScheduler()
            scheduler.schedules = {}

            with self.assertRaises(ValueError) as context:
                scheduler.run_schedule("nonexistent")

            self.assertIn("not found", str(context.exception).lower())

    @patch("cortex.security_scheduler.VulnerabilityScanner")
    @patch("cortex.security_scheduler.AutonomousPatcher")
    def test_run_schedule_with_patching(self, mock_patcher_class, mock_scanner_class):
        """Test running schedule with patching enabled"""
        # Setup mock scanner
        mock_scanner = MagicMock()
        mock_vuln = MagicMock()
        mock_vuln.severity.value = "critical"
        mock_scan_result = MagicMock()
        mock_scan_result.vulnerabilities = [mock_vuln]
        mock_scan_result.vulnerabilities_found = 1
        mock_scan_result.critical_count = 1
        mock_scan_result.high_count = 0
        mock_scan_result.medium_count = 0
        mock_scan_result.low_count = 0
        mock_scanner.scan_all_packages.return_value = mock_scan_result
        mock_scanner_class.return_value = mock_scanner

        # Setup mock patcher
        mock_patcher = MagicMock()
        mock_patch_result = MagicMock()
        mock_patch_result.packages_updated = ["test-pkg"]
        mock_patch_result.vulnerabilities_patched = 1
        mock_patch_result.success = True
        mock_patch_result.errors = []
        mock_patcher.patch_vulnerabilities.return_value = mock_patch_result
        mock_patcher_class.return_value = mock_patcher

        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            from pathlib import Path

            scheduler = SecurityScheduler()
            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            scheduler.create_schedule(
                schedule_id="patch_test",
                frequency=ScheduleFrequency.DAILY,
                scan_enabled=True,
                patch_enabled=True,
            )

            result = scheduler.run_schedule("patch_test")

            self.assertTrue(result["success"])
            self.assertIsNotNone(result["patch_result"])
            self.assertEqual(result["patch_result"]["packages_updated"], 1)

class TestSecuritySchedulerSaveLoad(unittest.TestCase):
    """Test schedule persistence"""

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.config_path = os.path.join(self.temp_dir, "security_schedule.json")

    def tearDown(self):
        import shutil

        if os.path.exists(self.temp_dir):
            shutil.rmtree(self.temp_dir)

    def test_save_schedules(self):
        """Test saving schedules to file"""
        with patch.object(SecurityScheduler, "__init__", lambda x: None):
            from pathlib import Path

            scheduler = SecurityScheduler()
            scheduler.config_path = Path(self.config_path)
            scheduler.schedules = {}

            scheduler.create_schedule(
                schedule_id="save_test",
                frequency=ScheduleFrequency.WEEKLY,
                scan_enabled=True,
                patch_enabled=True,
            )

            # Verify file was created
            self.assertTrue(os.path.exists(self.config_path))

            # Verify content
            with open(self.config_path) as f:
                data = json.load(f)

            self.assertIn("schedules", data)
            self.assertEqual(len(data["schedules"]), 1)
            self.assertEqual(data["schedules"][0]["schedule_id"], "save_test")

    def test_load_schedules(self):
        """Test loading schedules from file"""
        # Create a config file manually
        from pathlib import Path

        config_path = Path(self.config_path)
        config_path.parent.mkdir(parents=True, exist_ok=True)

        config_data = {
            "schedules": [
                {
                    "schedule_id": "loaded_schedule",
                    "frequency": "monthly",
                    "scan_enabled": True,
                    "patch_enabled": False,
                    "patch_strategy": "critical_only",
                    "dry_run": True,
                    "last_run": None,
                    "next_run": None,
                    "custom_cron": None,
                }
            ]
        }

        with open(config_path, "w") as f:
            json.dump(config_data, f)

        # Create scheduler with patched home path
        with patch("pathlib.Path.home") as mock_home:
            mock_home.return_value = Path(self.temp_dir)
            with patch("pathlib.Path.exists", return_value=True):
                # Manually load
                scheduler = SecurityScheduler.__new__(SecurityScheduler)
                scheduler.config_path = config_path
                scheduler.schedules = {}
                scheduler._load_schedules()

        self.assertIn("loaded_schedule", scheduler.schedules)
        schedule = scheduler.schedules["loaded_schedule"]
        self.assertEqual(schedule.frequency, ScheduleFrequency.MONTHLY)


if __name__ == "__main__":
    unittest.main()
Critical: Black formatting check failed.

The pipeline failure indicates that this file needs to be reformatted with Black before merging:

[error] 1-1: Black formatting check failed. 1 file would be reformatted.
Run 'black' to format. Command: 'black --check . --exclude "(venv|\.venv|build|dist)"'

Please run the following command to fix the formatting:

#!/bin/bash
# Format the test file with Black
black tests/test_security_scheduler.py

The test coverage itself is excellent, with comprehensive validation of:

- Enum and dataclass behavior
- Schedule CRUD operations
- Systemd timer integration
- Schedule execution with mocked scanner/patcher
- Persistence (save/load)

Once formatted, this will be ready to merge.
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 20 comments.
        self.scanner = VulnerabilityScanner(db_path=self.temp_db.name)

    def tearDown(self):
        """Clean up temporary files"""
Copilot AI — Jan 8, 2026
The test creates a temporary database file but the VulnerabilityScanner instance may hold references to the connection pool after tearDown. Consider explicitly closing the scanner's connection pool before unlinking the database file to avoid potential file locking issues on some platforms.
| """Clean up temporary files""" | |
| """Clean up scanner resources and temporary files""" | |
| # Ensure any database connections/connection pools are closed before deleting the file | |
| scanner_close = getattr(getattr(self, "scanner", None), "close", None) | |
| if callable(scanner_close): | |
| scanner_close() | |
| self.scanner = None |
    def test_rate_limit_enforces_delay(self):
        """Test rate limiting enforces minimum delay"""
        import time

        self.scanner.min_api_interval = 0.1  # 100ms for testing
        self.scanner.last_api_call = time.time()

        start = time.time()
        self.scanner._rate_limit()
        elapsed = time.time() - start

        # Should have waited at least some time
        self.assertGreaterEqual(elapsed, 0.05)
Copilot AI — Jan 8, 2026
The test_rate_limit_enforces_delay test manually manipulates the scanner's last_api_call timestamp and expects a minimum delay. However, the assertion on line 421 checks for >= 0.05 seconds when the interval is set to 0.1 seconds. This is testing for "at least some time" but doesn't verify the actual rate limit is working correctly. Consider tightening this assertion to check for >= 0.09 seconds to ensure the rate limit is actually enforced.
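A tightened version of that assertion can be sketched against a minimal stand-in rate limiter. The attribute and method names here (`min_api_interval`, `last_api_call`, `_rate_limit`) mirror those used in the test, but the implementation below is illustrative, not the scanner's actual code:

```python
import time


class StubScanner:
    """Stand-in for the scanner's rate limiter (illustrative only)."""

    def __init__(self):
        self.min_api_interval = 0.1  # seconds between API calls
        self.last_api_call = 0.0

    def _rate_limit(self):
        # Sleep until at least min_api_interval has elapsed since the last call
        remaining = self.min_api_interval - (time.time() - self.last_api_call)
        if remaining > 0:
            time.sleep(remaining)
        self.last_api_call = time.time()


scanner = StubScanner()
scanner.last_api_call = time.time()  # pretend an API call just happened

start = time.time()
scanner._rate_limit()
waited = time.time() - start

# Tightened bound: nearly the full 100ms interval must have elapsed
assert waited >= 0.09, f"rate limit not enforced: waited only {waited:.3f}s"
```

The looser 0.05s bound would also pass if the limiter slept for only half the interval; the 0.09s bound catches that regression while still tolerating timer jitter.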
    def _init_database(self):
        try:
            self._pool = get_connection_pool(self.db_path, pool_size=3)

            with self._pool.get_connection() as conn:
                cursor = conn.cursor()

                cursor.execute(
                    """
                    CREATE TABLE IF NOT EXISTS vulnerability_cache (
                        package_name TEXT,
                        version TEXT,
                        cve_id TEXT,
                        severity TEXT,
                        cached_at TEXT,
                        expires_at TEXT,
                        data TEXT,
                        PRIMARY KEY (package_name, version, cve_id)
                    )
                    """
                )

                cursor.execute(
                    """
                    CREATE TABLE IF NOT EXISTS scan_history (
                        scan_id TEXT PRIMARY KEY,
                        timestamp TEXT NOT NULL,
                        total_packages INTEGER,
                        vulnerabilities_found INTEGER,
                        scan_duration REAL,
                        result_json TEXT
                    )
                    """
                )

                cursor.execute(
                    """
                    CREATE INDEX IF NOT EXISTS idx_cache_expires
                    ON vulnerability_cache(expires_at)
                    """
                )
                cursor.execute(
                    """
                    CREATE INDEX IF NOT EXISTS idx_scan_timestamp
                    ON scan_history(timestamp)
                    """
                )

                conn.commit()

            logger.info(f"Vulnerability database initialized at {self.db_path}")
        except Exception as e:
            logger.error(f"Failed to initialize database: {e}")
            raise
Copilot AI — Jan 8, 2026
The database connection pool is obtained from get_connection_pool but never explicitly closed or cleaned up when the VulnerabilityScanner instance is destroyed. Consider adding a `__del__` method or context-manager support to properly close the connection pool and release resources.
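A minimal sketch of that cleanup, assuming the pool object exposes a `close()` method — the `_Pool` class below is a stand-in for whatever `get_connection_pool` actually returns:

```python
import sqlite3
import tempfile


class _Pool:
    """Stand-in for the real connection pool (assumed close() API)."""

    def __init__(self, path, size=3):
        self._conns = [sqlite3.connect(path) for _ in range(size)]

    def close(self):
        for conn in self._conns:
            conn.close()
        self._conns = []


class VulnerabilityScanner:
    def __init__(self, db_path):
        self._pool = _Pool(db_path)

    def close(self):
        """Release pooled connections; safe to call more than once."""
        if self._pool is not None:
            self._pool.close()
            self._pool = None

    # Context-manager support lets tests write `with VulnerabilityScanner(...) as s:`
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False


with tempfile.NamedTemporaryFile(suffix=".db") as f:
    with VulnerabilityScanner(f.name) as scanner:
        assert scanner._pool is not None
    assert scanner._pool is None  # pool released deterministically on exit
```

An explicit `close()` plus context-manager protocol is generally preferable to relying on `__del__`, whose invocation timing is not guaranteed; it also lets tearDown release the SQLite file before unlinking it.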
    def _compare_versions(self, version1: str, operator: str, version2: str) -> bool:
        """
        Compare two Debian package versions using dpkg --compare-versions.

        Args:
            version1: First version string
            operator: Comparison operator (lt, le, eq, ne, ge, gt)
            version2: Second version string

        Returns:
            True if the comparison holds, False otherwise
        """
        try:
            success, _, _ = self._run_command(
                ["dpkg", "--compare-versions", version1, operator, version2]
            )
            return success
        except Exception as e:
            logger.warning(f"Version comparison failed: {e}")
            return False
Copilot AI — Jan 8, 2026
The version comparison relies on dpkg --compare-versions which may not be available on non-Debian systems. Since this is "Cortex Linux" which may support multiple distributions, consider documenting this dependency or providing fallback version comparison logic.
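If a fallback were wanted, a simplified pure-Python comparison could look like the sketch below. Note this implements only a crude subset of Debian's deb-version ordering (epoch handling plus digit-aware segment comparison); the real rules for `~`, letters sorting before digits, and revision separators are more involved, so `dpkg --compare-versions` should remain the primary path:

```python
import re


def _parse(version):
    # Split off an optional "epoch:" prefix, then tokenize the rest
    # into alternating digit / non-digit runs for piecewise comparison.
    epoch, sep, rest = version.partition(":")
    if not sep:
        epoch, rest = "0", version
    tokens = [int(t) if t.isdigit() else t for t in re.findall(r"\d+|\D+", rest)]
    return int(epoch), tokens


def _cmp(a, b):
    return (a > b) - (a < b)


def _cmp_tokens(t1, t2):
    for x, y in zip(t1, t2):
        if x == y:
            continue
        if isinstance(x, int) and isinstance(y, int):
            return _cmp(x, y)
        return _cmp(str(x), str(y))  # crude: real dpkg rules differ here
    return _cmp(len(t1), len(t2))


def compare_versions_fallback(v1, op, v2):
    """Fallback comparison when dpkg is unavailable (simplified semantics)."""
    e1, t1 = _parse(v1)
    e2, t2 = _parse(v2)
    c = _cmp(e1, e2) or _cmp_tokens(t1, t2)
    return {"lt": c < 0, "le": c <= 0, "eq": c == 0,
            "ne": c != 0, "ge": c >= 0, "gt": c > 0}[op]


print(compare_versions_fallback("1.2.10", "gt", "1.2.9"))  # → True (numeric-aware)
print(compare_versions_fallback("1:1.0", "gt", "2.5"))     # → True (epoch wins)
```

Digit-aware tokenization avoids the classic string-comparison trap where "1.2.10" sorts below "1.2.9"; a production fallback would need the full deb-version algorithm.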
        fixed_version = vulnerability.fixed_version

        # If no fixed version is specified, we can't verify the fix
        if not fixed_version:
            if self.allow_unverified_patches:
                logger.warning(
                    f"⚠️ Applying UNVERIFIED update for {vulnerability.cve_id} on "
                    f"{vulnerability.package_name} - no fixed_version available to verify fix"
                )
                return True
            else:
                logger.info(
                    f"Skipping {vulnerability.cve_id} on {vulnerability.package_name}: "
                    f"no fixed_version specified, cannot verify update fixes vulnerability "
                    f"(set allow_unverified_patches=True to override)"
                )
                return False
Copilot AI — Jan 8, 2026
When allow_unverified_patches is enabled and no fixed_version is available, patches are applied without verification. This is a security risk and should be more prominently documented with warnings. Consider adding a confirmation prompt when this mode is enabled in interactive contexts.
[Service]
Type=oneshot
ExecStart={self.cortex_binary} security schedule run {schedule_id}
Copilot AI — Jan 8, 2026
The systemd service file is configured to run as 'root' user (line 303), but there's no clear documentation or warning about the security implications. Since this will execute autonomous patching with root privileges, consider adding a security notice in the generated service file comments or documentation.
 ExecStart={self.cortex_binary} security schedule run {schedule_id}
+# SECURITY NOTICE:
+# This service runs the Cortex security scheduler with root privileges
+# to perform autonomous patching and system changes. Review the command
+# above, restrict access to this unit file, and ensure only trusted
+# operators can modify or trigger it.
# For servers/critical systems, use weekly with critical-only strategy
cortex security schedule create weekly-critical --frequency weekly --enable-patch
cortex security schedule install-timer monthly-patch
Copilot AI — Jan 8, 2026
The command example on line 86 is missing the '--apply' flag mentioned in the description on line 79. The example should include '--apply' to show how to actually apply patches in non-dry-run mode.
-cortex security schedule install-timer monthly-patch
+cortex security schedule install-timer monthly-patch --apply
    errors: list[str] = None

    def __post_init__(self):
        if self.errors is None:
            self.errors = []
Copilot AI — Jan 8, 2026
The 'errors' field in ScanResult has the same mutable-default issue as the Vulnerability dataclass. Use dataclasses.field(default_factory=list) instead of None with `__post_init__` conversion.
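The recommended pattern, shown on an illustrative subset of the dataclass:

```python
from dataclasses import dataclass, field


@dataclass
class ScanResult:
    # Illustrative subset of the real dataclass; only the errors field matters here
    scan_id: str
    errors: list[str] = field(default_factory=list)  # fresh list per instance


a = ScanResult("scan-1")
b = ScanResult("scan-2")
a.errors.append("timeout")

print(b.errors)  # → [] — instances no longer share state
```

`default_factory` also lets the dataclass reject the sentinel-`None` type lie (`list[str] = None`), which static checkers flag, and removes the need for `__post_init__` entirely when that was its only job.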
    def _run_command(self, cmd: list[str]) -> tuple[bool, str, str]:
        """Execute command and return success, stdout, stderr"""
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
            return (result.returncode == 0, result.stdout, result.stderr)
        except subprocess.TimeoutExpired:
            return (False, "", "Command timed out")
        except Exception as e:
            return (False, "", str(e))
Copilot AI — Jan 8, 2026
The timeout for subprocess commands is hardcoded to 300 seconds. For large package installations or slow systems, this may be insufficient. Consider making this configurable via a parameter or increasing it for package installation operations specifically.
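One way to make the timeout configurable without changing existing call sites is a defaulted parameter; this is a sketch of the shape, written as a free function mirroring the method:

```python
import subprocess


def _run_command(cmd, timeout=300):
    """Execute command; timeout is configurable per call (default kept at 300s)."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return (result.returncode == 0, result.stdout, result.stderr)
    except subprocess.TimeoutExpired:
        return (False, "", f"Command timed out after {timeout}s")
    except Exception as e:
        return (False, "", str(e))


# Callers performing long package installs can raise the ceiling explicitly
ok, out, err = _run_command(["echo", "hello"], timeout=10)
print(ok, out.strip())  # → True hello
```

Existing callers keep the 300-second default, while the install path can pass a larger value (or read one from the patcher's config).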
    def ensure_apt_updated(self, force: bool = False) -> bool:
        """
        Ensure apt package list is updated. Thread-safe and rate-limited.

        Args:
            force: If True, force update even if recently updated

        Returns:
            True if update succeeded or was recently done, False on failure
        """
        with _apt_update_lock:
            now = datetime.now()

            # Check if we need to update
            if not force and self._apt_last_updated is not None:
                elapsed = (now - self._apt_last_updated).total_seconds()
                if elapsed < self.apt_update_interval:
                    logger.debug(f"Apt cache still fresh ({elapsed:.0f}s old), skipping update")
                    return True

            # Run apt-get update
            logger.info("Updating apt package list...")
            success, stdout, stderr = self._run_command(["apt-get", "update", "-qq"])

            if success:
                self._apt_last_updated = now
                logger.info("Apt package list updated successfully")
                return True
            else:
                logger.warning(f"Failed to update apt package list: {stderr}")
                # Still set timestamp to avoid hammering on repeated failures
                self._apt_last_updated = now
                return False
Copilot
AI
Jan 8, 2026
The apt-get commands are executed without checking for root/sudo privileges. While apt-get will fail without proper permissions, it would be better to check permissions upfront and provide a clearer error message to the user before attempting the operations.
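One way to implement the upfront check the reviewer describes; the function names here are illustrative sketches, not the PR's actual implementation:

```python
import os
import subprocess


def has_root_privileges() -> bool:
    """Return True if running as root or passwordless sudo is available."""
    if hasattr(os, "geteuid") and os.geteuid() == 0:
        return True
    try:
        # `sudo -n` fails immediately instead of prompting for a password
        result = subprocess.run(["sudo", "-n", "true"], capture_output=True, timeout=5)
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False


def require_root(operation: str) -> None:
    """Raise a clear error before attempting a privileged operation."""
    if not has_root_privileges():
        raise PermissionError(f"{operation} requires root privileges; re-run with sudo")
```

Calling `require_root("apt-get update")` before the subprocess call surfaces a readable error rather than a cryptic apt permission failure.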
…nner
- Introduced a new method `get_package_version` in VulnerabilityScanner to retrieve the installed version of a specific package.
- Updated CortexCLI to utilize the new method for checking package existence and version, improving error handling and code clarity.
- Removed direct access to the internal method `_get_installed_packages` for better encapsulation.
- Introduced a new PrivilegeError exception to handle cases where root privileges are required.
- Implemented _has_root_privileges method to check for root access or passwordless sudo.
- Added _require_root_privileges method to enforce privilege checks before critical operations.
- Updated ensure_apt_updated and package installation logic to log warnings and handle errors when root privileges are not available.
- Enhanced documentation to clarify the need for root access during operations.
Actionable comments posted: 0
🧹 Nitpick comments (5)
cortex/vulnerability_scanner.py (3)
33-34: Module-level logging configuration may override application settings. Calling `logging.basicConfig()` at module import time can override any logging configuration the importing application has already set up. Consider removing this or making it conditional.
Suggested fix: delete the `logging.basicConfig(level=logging.INFO)` line and keep only `logger = logging.getLogger(__name__)`.
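The library-friendly convention is to attach only a `NullHandler` at import time and leave configuration to the application; a sketch of that pattern (the logger name is illustrative):

```python
import logging

# Library module: never call logging.basicConfig() at import time.
logger = logging.getLogger("cortex.vulnerability_scanner")
# NullHandler suppresses "no handlers could be found" warnings without
# overriding whatever configuration the application installs later.
logger.addHandler(logging.NullHandler())


def scan() -> None:
    # Emitted only if the application has configured logging.
    logger.info("scan started")
```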
309-320: Consider caching installed packages for repeated lookups. This method calls `_get_installed_packages()`, which runs `dpkg-query` each time. For repeated lookups, consider caching the package list at the instance level.
870-877: Sorting logic may not produce expected severity ordering. The sorting key `(v.severity == Severity.CRITICAL, v.severity == Severity.HIGH, v.cvss_score or 0)` produces tuples like `(True, False, score)` or `(False, True, score)`. Since `True > False`, this works, but it's not intuitive. Consider using a severity ordinal for clarity.
Suggested improvement: replace the boolean comparisons in the sort key with an explicit ordinal, e.g. `{Severity.CRITICAL: 4, Severity.HIGH: 3, Severity.MEDIUM: 2, Severity.LOW: 1, Severity.UNKNOWN: 0}.get(v.severity, 0)`, followed by `v.cvss_score or 0` as the tiebreaker, keeping `reverse=True`.
1147-1152: Duplicate severity color mapping. The severity-to-color mapping appears twice. Consider extracting it to a constant or helper function.
Suggested refactor: define a module-level `_SEVERITY_COLORS = {Severity.CRITICAL: "red", Severity.HIGH: "yellow", Severity.MEDIUM: "blue", Severity.LOW: "green"}` and use `severity_color = _SEVERITY_COLORS.get(vuln.severity, "white")`.
Also applies to: 1188-1193
cortex/autonomous_patcher.py (1)
27-28: Module-level logging configuration may override application settings. Same issue as in `vulnerability_scanner.py`. Consider removing `logging.basicConfig()` at module level.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cortex/autonomous_patcher.py
- cortex/cli.py
- cortex/vulnerability_scanner.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/cli.py
- cortex/autonomous_patcher.py
- cortex/vulnerability_scanner.py
🧠 Learnings (5)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Dry-run by default for all installations in command execution
Applied to files:
- cortex/cli.py
- cortex/autonomous_patcher.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : No silent sudo execution - require explicit user confirmation
Applied to files:
cortex/autonomous_patcher.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/vulnerability_scanner.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*.py : Type hints required in Python code
Applied to files:
cortex/vulnerability_scanner.py
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to {setup.py,setup.cfg,pyproject.toml,**/__init__.py} : Use Python 3.10 or higher as the minimum supported version
Applied to files:
cortex/vulnerability_scanner.py
🧬 Code graph analysis (2)
cortex/autonomous_patcher.py (2)
cortex/installation_history.py (3)
- InstallationHistory (74-632)
- record_installation (258-316)
- update_installation (318-367)

cortex/vulnerability_scanner.py (2)
- Severity (37-44)
- Vulnerability (48-65)
cortex/vulnerability_scanner.py (2)
cortex/progress_indicators.py (2)
- ProgressIndicator (177-378)
- get_progress_indicator (665-672)

cortex/utils/db_pool.py (3)
- SQLiteConnectionPool (19-172)
- get_connection_pool (181-222)
- get_connection (99-136)
🪛 GitHub Actions: CI
cortex/autonomous_patcher.py
[error] 1-1: Black formatting check failed. The file would be reformatted by Black.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: test (3.12)
🔇 Additional comments (28)
cortex/vulnerability_scanner.py (12)
47-66: LGTM! The `Vulnerability` dataclass is well-designed with proper type hints and correctly handles the mutable default for `references` using `__post_init__`.
68-87: LGTM! The `ScanResult` dataclass follows the same correct pattern for handling mutable defaults.
92-127: LGTM! Well-documented constructor with clear parameter descriptions, including guidance for security-critical systems and systems with many packages.
132-185: LGTM! Database initialization creates appropriate tables with indexes for cache expiration and scan history queries. The use of connection pooling is good for concurrent access.
193-225: LGTM! The CVSS vector parsing provides a reasonable severity estimate. The implementation correctly handles malformed vectors by returning `None`.
227-256: LGTM! Good security practice validating package names and versions before use. This prevents potential injection issues when these values are used in commands or queries.
257-298: LGTM! Safe subprocess handling with timeout and proper input validation. The logging of skipped packages helps with debugging.
359-462: LGTM! The caching implementation with the `_NO_VULNS_SENTINEL` pattern is well-designed. Caching "no vulnerabilities found" results prevents unnecessary API calls for clean packages.
464-560: LGTM! The OSV API integration is well-implemented with rate limiting, appropriate timeouts, and comprehensive parsing of vulnerability data including CVSS scores and fixed versions.
562-596: LGTM! The `scan_package` method efficiently uses cache-first lookup and properly caches all results, including empty ones, to avoid repeated API calls.
598-697: LGTM! Comprehensive scanning implementation with good progress reporting, cache statistics, and error collection. The integration with the progress indicator provides clear user feedback.
736-796: LGTM! History retrieval and critical vulnerability queries are well-implemented with proper ordering and deduplication.
cortex/cli.py (5)
1081-1105: LGTM! The `security()` method follows the established pattern for subcommand routing in this CLI, with appropriate error handling and verbose traceback support.
1205-1285: LGTM! The patching implementation correctly defaults to dry-run mode (line 1217), requiring explicit `--apply` to make changes. This aligns with the safety requirement for "Dry-run by default for all installations". Based on learnings.
1287-1361: LGTM! The scheduling implementation provides comprehensive management with safe defaults (dry-run mode) and clear status reporting.
2555-2610: LGTM! The argument parser setup for security commands is well-structured and matches the implementation. The `--dry-run` default with an explicit `--apply` override provides safe defaults.
2872-2873: LGTM! Command routing is correctly integrated with the existing CLI structure.
cortex/autonomous_patcher.py (11)
55-87: LGTM! Well-defined enums and dataclasses with proper type hints and documentation.
89-137: LGTM! The constructor implements safe defaults with `dry_run=True` and `auto_approve=False`, aligning with the requirement for "Dry-run by default for all installations". Based on learnings.
186-220: LGTM! The privilege checking correctly implements the "No silent sudo execution" requirement by only checking for passwordless sudo access (`sudo -n`), never prompting for a password. Based on learnings.
222-265: LGTM! Thread-safe, rate-limited apt update with proper privilege checking. Setting the timestamp even on failure (line 264) is a good defensive measure to prevent repeated failures.
314-357: LGTM! Proper verification that updates actually fix vulnerabilities using `dpkg --compare-versions`. The conservative default (skip unverified) with opt-in `allow_unverified_patches` is good security practice.
359-400: LGTM! The patch filtering logic correctly implements the priority order: blacklist blocks, whitelist allows, then severity and strategy checks apply.
402-506: LGTM! The patch plan creation is well-designed: a single apt update before checking packages, verification that updates actually fix vulnerabilities, and clear tracking of skipped items with reasons.
508-655: LGTM! The patch application correctly:
- Checks root privileges before executing (lines 551-566)
- Records operations in history for rollback capability
- Uses version pinning for precise updates
- Collects per-package errors
This follows the "No silent sudo execution" requirement. Based on learnings.
657-700: LGTM! Good convenience method that provides clear progress feedback and a plan summary before execution.
176-184: LGTM! Safe subprocess execution with timeout, captured output, and no `shell=True`.
1-1: Unable to verify the Black formatting claim—Black is not installed in the verification environment, and the original review references an external CI pipeline failure that cannot be independently validated here. The file (cortex/autonomous_patcher.py) passes Python syntax validation and contains visible proper formatting with docstrings and type hints, but definitive confirmation of whether Black would reformat it cannot be determined.
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 14 comments.
| ) | ||
|
|
||
| # Sentinel CVE ID used to cache "no vulnerabilities found" results | ||
| _NO_VULNS_SENTINEL = "__NO_VULNERABILITIES__" |
Copilot
AI
Jan 10, 2026
The sentinel value approach using `__NO_VULNERABILITIES__` as a fake CVE ID is fragile and could conflict with a real CVE ID if the format were to change. Consider using a separate boolean column in the database schema to explicitly mark "no vulnerabilities found" entries, or use a NULL CVE ID for sentinel entries rather than a string constant.
| _NO_VULNS_SENTINEL = "__NO_VULNERABILITIES__" | |
| _NO_VULNS_SENTINEL = None |
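A sketch of the boolean-column alternative, using an in-memory SQLite schema; the table and column names (`vuln_cache`, `no_vulns`) are illustrative, not the PR's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE vuln_cache (
        package TEXT NOT NULL,
        version TEXT NOT NULL,
        cve_id TEXT,                          -- NULL on "nothing found" rows
        no_vulns INTEGER NOT NULL DEFAULT 0   -- explicit clean-scan marker
    )
    """
)


def cache_clean_scan(package: str, version: str) -> None:
    """Record that a package/version was scanned and had no findings."""
    conn.execute(
        "DELETE FROM vuln_cache WHERE package = ? AND version = ?",
        (package, version),
    )
    conn.execute(
        "INSERT INTO vuln_cache (package, version, cve_id, no_vulns) "
        "VALUES (?, ?, NULL, 1)",
        (package, version),
    )


def is_known_clean(package: str, version: str) -> bool:
    """True if a cache row explicitly marks this package/version as clean."""
    row = conn.execute(
        "SELECT 1 FROM vuln_cache WHERE package = ? AND version = ? AND no_vulns = 1",
        (package, version),
    ).fetchone()
    return row is not None
```

With this shape, no string constant can ever collide with a real CVE identifier.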
| # API endpoints | ||
| self.osv_api = "https://api.osv.dev/v1/query" | ||
| # NVD API endpoint reserved for future fallback support | ||
| # self.nvd_api = "https://services.nvd.nist.gov/rest/json/cves/2.0" |
Copilot
AI
Jan 10, 2026
The documentation states "Note: NVD and Safety DB support are planned for future releases" but then includes commented-out NVD API endpoint. This creates maintenance debt and could confuse developers. Either implement the NVD fallback now, or remove the commented code and track it in a separate issue.
| # self.nvd_api = "https://services.nvd.nist.gov/rest/json/cves/2.0" |
| def _run_command(self, cmd: list[str]) -> tuple[bool, str, str]: | ||
| """Execute command and return success, stdout, stderr""" | ||
| try: | ||
| result = subprocess.run(cmd, capture_output=True, text=True, timeout=300) | ||
| return (result.returncode == 0, result.stdout, result.stderr) | ||
| except subprocess.TimeoutExpired: | ||
| return (False, "", "Command timed out") | ||
| except Exception as e: | ||
| return (False, "", str(e)) |
Copilot
AI
Jan 10, 2026
The method '_run_command' has a hard-coded timeout of 300 seconds (5 minutes). This might be insufficient for systems with slow network connections or when updating large packages. Consider making the timeout configurable through the constructor, similar to how 'dpkg_timeout' is configurable in VulnerabilityScanner.
| # Per Debian Policy: lowercase alphanumeric, plus, minus, period | ||
| # Must start with alphanumeric, minimum 2 characters |
Copilot
AI
Jan 10, 2026
The validation regex for package names uses a minimum length of 2 characters, but the comment states "Per Debian Policy: lowercase alphanumeric, plus, minus, period / Must start with alphanumeric, minimum 2 characters". However, according to Debian Policy Manual section 5.6.1, package names must be at least two characters long and start with an alphanumeric character (lowercase letter or digit), which this regex correctly enforces. The regex is correct, but for clarity, consider adding a reference to the specific Debian Policy section in the comment.
| # Per Debian Policy: lowercase alphanumeric, plus, minus, period | |
| # Must start with alphanumeric, minimum 2 characters | |
| # Per Debian Policy Manual §5.6.1: lowercase alphanumeric (a-z0-9), | |
| # plus (+), minus (-), and period (.) / must start with a lowercase letter | |
| # or digit and be at least 2 characters long |
| # Module-level apt update lock for thread safety | ||
| _apt_update_lock = threading.Lock() | ||
| # Default apt update interval: 5 minutes balances freshness with avoiding | ||
| # excessive apt-get update calls during batch operations | ||
| _DEFAULT_APT_UPDATE_INTERVAL_SECONDS = 300 |
Copilot
AI
Jan 10, 2026
Module-level global state '_apt_update_lock' and '_DEFAULT_APT_UPDATE_INTERVAL_SECONDS' can cause issues with testing and make the code harder to reason about. Since the lock is meant to prevent concurrent apt-get update calls, but each AutonomousPatcher instance has its own '_apt_last_updated' timestamp, there's a mismatch between the shared lock (module-level) and per-instance state. Consider moving the lock to be an instance variable or using a class-level lock if truly global coordination is needed.
| # Write service file | ||
| service_path = Path(f"/etc/systemd/system/cortex-security-{schedule_id}.service") | ||
| with open(service_path, "w") as f: | ||
| f.write(service_content) | ||
|
|
||
| # Write timer file | ||
| timer_path = Path(f"/etc/systemd/system/cortex-security-{schedule_id}.timer") |
Copilot
AI
Jan 10, 2026
The systemd timer installation writes files directly to /etc/systemd/system without checking if files with the same name already exist. If a user creates multiple schedules with the same ID or if files already exist from a previous installation, this could silently overwrite existing timer configurations without warning the user.
| # Write service file | |
| service_path = Path(f"/etc/systemd/system/cortex-security-{schedule_id}.service") | |
| with open(service_path, "w") as f: | |
| f.write(service_content) | |
| # Write timer file | |
| timer_path = Path(f"/etc/systemd/system/cortex-security-{schedule_id}.timer") | |
| # Determine target paths for service and timer | |
| service_path = Path(f"/etc/systemd/system/cortex-security-{schedule_id}.service") | |
| timer_path = Path(f"/etc/systemd/system/cortex-security-{schedule_id}.timer") | |
| # Prevent accidental overwrite of existing systemd units | |
| if service_path.exists() or timer_path.exists(): | |
| logger.error( | |
| "Systemd unit files for schedule '%s' already exist: %s%s%s. " | |
| "Aborting installation to avoid overwriting existing configuration.", | |
| schedule_id, | |
| service_path if service_path.exists() else "", | |
| " " if service_path.exists() and timer_path.exists() else "", | |
| timer_path if timer_path.exists() else "", | |
| ) | |
| logger.info( | |
| "Use a different schedule ID or manually remove the existing " | |
| "systemd units before installing this timer." | |
| ) | |
| return False | |
| # Write service file | |
| with open(service_path, "w") as f: | |
| f.write(service_content) | |
| # Write timer file |
| **Security vulnerabilities in dependencies are the #1 attack vector for Linux systems.** According to recent CVE data: | ||
|
|
||
| - **25,000+ new CVEs** are published annually | ||
| - **60% of breaches** exploit known, unpatched vulnerabilities | ||
| - Average time from CVE publication to exploit: **15 days** | ||
| - Average enterprise patching cycle: **102 days** ❌ |
Copilot
AI
Jan 10, 2026
The documentation references statistics like "Average enterprise patching cycle: 102 days" and "60% of breaches" but doesn't provide citations or sources for these claims. For credibility and to allow readers to verify these statistics, consider adding references to industry reports or research studies that support these numbers.
| class PrivilegeError(Exception): | ||
| """Raised when root/sudo privileges are required but not available.""" | ||
|
|
||
| pass |
Copilot
AI
Jan 10, 2026
The newly introduced PrivilegeError exception class is not tested in the test suite. Consider adding tests that verify this exception is raised when operations requiring root privileges are attempted without proper permissions, ensuring error handling works correctly.
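A test along those lines might stub the privilege check and assert the exception is raised. This self-contained sketch assumes the `_has_root_privileges` / `_require_root_privileges` method names from the commit message and uses a stand-in `Patcher` class rather than the real `AutonomousPatcher`:

```python
import unittest
from unittest.mock import patch


class PrivilegeError(Exception):
    """Raised when root/sudo privileges are required but not available."""


class Patcher:
    def _has_root_privileges(self) -> bool:
        return False  # real implementation checks euid / `sudo -n`

    def _require_root_privileges(self, operation: str) -> None:
        if not self._has_root_privileges():
            raise PrivilegeError(f"{operation} requires root privileges")


class TestPrivilegeError(unittest.TestCase):
    def test_raises_without_root(self):
        patcher = Patcher()
        with patch.object(Patcher, "_has_root_privileges", return_value=False):
            with self.assertRaises(PrivilegeError):
                patcher._require_root_privileges("apt-get install")

    def test_passes_with_root(self):
        patcher = Patcher()
        with patch.object(Patcher, "_has_root_privileges", return_value=True):
            # Should not raise when privileges are available.
            patcher._require_root_privileges("apt-get install")
```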
| service_content = f"""[Unit] | ||
| Description=Cortex Security Scan/Patch - {schedule_id} | ||
| After=network.target | ||
| [Service] | ||
| Type=oneshot | ||
| ExecStart={self.cortex_binary} security schedule run {schedule_id} | ||
| User=root |
Copilot
AI
Jan 10, 2026
The systemd service file uses 'User=root' hardcoded, which means all security scans and patches will run as root. For better security, consider allowing the user to configure this or using a dedicated service account with limited privileges for scanning operations (patches would still need root/sudo).
| self.config_patcher = patch.object( | ||
| SecurityScheduler, | ||
| "__init__", | ||
| lambda self_obj: self._init_scheduler(self_obj), |
Copilot
AI
Jan 10, 2026
This 'lambda' is just a simple wrapper around a callable object. Use that object directly.
| lambda self_obj: self._init_scheduler(self_obj), | |
| self._init_scheduler, |
|



Screencast.from.02-01-26.04.49.56.PM.IST.webm
Summary
Implements security vulnerability management with autonomous patching capabilities for Cortex Linux.
Closes #422
Changes
New Files
- cortex/vulnerability_scanner.py
- cortex/autonomous_patcher.py
- cortex/security_scheduler.py
- docs/SECURITY_MANAGEMENT.md

Modified Files
- cortex/cli.py: adds the `cortex security` command with subcommands

Features
Usage
Scan all packages for vulnerabilities
cortex security scan --all
Scan specific package
cortex security scan --package openssl
Patch vulnerabilities (dry-run by default)
cortex security patch --scan-and-patch --strategy critical_only
Actually apply patches
cortex security patch --scan-and-patch --apply
Set up monthly automated patching
cortex security schedule create monthly-patch --frequency monthly --enable-patch

Testing
Run the vulnerability scanner
cortex security scan --all
Check specific package
cortex security scan --package curl

Screenshots / Output
Summary by CodeRabbit