Removed the hardcoded Ollama Models and now works with any Ollama Model #435
Conversation
📝 Walkthrough

The changes remove hardcoded "llama3.2" Ollama model references and implement dynamic model resolution. A new shared utility, `get_ollama_model()` in `cortex/config_utils.py`, centralizes the resolution logic.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
CLA Verification Passed: All contributors have signed the CLA.
PR is good, but docs are missing. @sujay-d07
Pull request overview
This PR removes the hardcoded llama3.2 model from the Ollama integration and enables users to configure any Ollama model via environment variables or config file. This increases flexibility for users who want to use different models based on their hardware capabilities or preferences.
Key Changes:
- Added configurable Ollama model selection through the `OLLAMA_MODEL` environment variable and `~/.cortex/config.json`
- Enhanced first-run wizard to offer 5 model choices instead of forcing llama3.2
- Updated tests to auto-detect available models instead of assuming llama3.2 is installed
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| cortex/ask.py | Implements _get_ollama_model() method with precedence logic (env var → config file → default) |
| cortex/first_run_wizard.py | Adds interactive model selection menu with 5 preset choices plus custom option |
| tests/test_ollama_integration.py | Updates tests to use get_available_ollama_model() helper function for dynamic model detection |
| tests/test_ask.py | Adds comprehensive tests for environment variable and config file model selection |
| tests/test_cli.py | Updates test to properly mock Ollama fallback behavior |
| cortex/llm/interpreter.py | Minor formatting improvement (adds line break for better code style) |
| cortex/dependency_check.py | Minor formatting improvement (consistent quote style) |
Actionable comments posted: 1
🧹 Nitpick comments (5)
cortex/ask.py (1)
203-203: Add encoding parameter for portability. When opening the config file, specify `encoding='utf-8'` to ensure consistent behavior across different platforms and locales. As per coding guidelines for Python, explicit encoding is recommended.

🔎 Proposed fix

```diff
-with open(config_file) as f:
+with open(config_file, encoding='utf-8') as f:
```

cortex/first_run_wizard.py (1)
158-159: Consider adding encoding parameter for consistency. For consistency with best practices, consider specifying `encoding='utf-8'` when writing the config file. This ensures portability across different platforms and locales.

🔎 Proposed fix

```diff
 try:
-    with open(self.CONFIG_FILE, "w") as f:
+    with open(self.CONFIG_FILE, "w", encoding='utf-8') as f:
         json.dump(self.config, f, indent=2)
 except Exception as e:
```

tests/test_ollama_integration.py (3)
28-47: Add type hint and enhance docstring. The function is missing a return type hint, and the docstring could be more comprehensive. As per coding guidelines, type hints and complete docstrings are required for public APIs. Consider enhancing this helper function's documentation.

🔎 Suggested improvements

```diff
-def get_available_ollama_model():
-    """Get the first available Ollama model, or None if none available."""
+def get_available_ollama_model() -> str | None:
+    """Get the first available Ollama model.
+
+    Queries the Ollama installation via 'ollama list' and returns the first
+    available model name.
+
+    Returns:
+        str: The name of the first available model, or None if no models
+        are installed or Ollama cannot be queried.
+    """
```
50-55: Consider using `shutil.which` for portability. The current implementation uses the `which` command, which is not available on Windows. While this may be acceptable if the project targets Linux/Unix systems exclusively, using `shutil.which` would improve cross-platform compatibility.

🔎 Portable alternative

```diff
+import shutil
+
 # Mark all tests to skip if Ollama is not available or no models installed
 pytestmark = pytest.mark.skipif(
-    not subprocess.run(["which", "ollama"], capture_output=True).returncode == 0,
+    shutil.which("ollama") is None or get_available_ollama_model() is None,
     reason="Ollama is not installed or no models available. Install with: python scripts/setup_ollama.py",
 )
```
101-104: Consider reducing code duplication. The same 4-line pattern for obtaining and validating the test model appears in three test functions. This could be refactored into a pytest fixture or module-level helper to follow DRY principles.

🔎 Example using pytest fixture

Add near the top of the file after imports:

```python
@pytest.fixture
def ollama_test_model():
    """Get available Ollama model for testing."""
    test_model = os.environ.get("OLLAMA_MODEL") or get_available_ollama_model()
    if not test_model:
        pytest.skip("No Ollama models available")
    return test_model
```

Then update each test function signature:

```python
def test_llm_router(ollama_test_model):
    """Test LLMRouter with Ollama."""
    print("3. Testing LLM Router with Ollama...")
    try:
        # Initialize router with Ollama
        router = LLMRouter(
            ollama_base_url="http://localhost:11434",
            ollama_model=ollama_test_model,
            # ...
```

Also applies to: 149-151, 184-186
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- cortex/ask.py
- cortex/dependency_check.py
- cortex/first_run_wizard.py
- cortex/llm/interpreter.py
- tests/test_ask.py
- tests/test_cli.py
- tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/llm/interpreter.py
- cortex/ask.py
- cortex/dependency_check.py
- cortex/first_run_wizard.py
- tests/test_cli.py
- tests/test_ask.py
- tests/test_ollama_integration.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- tests/test_cli.py
- tests/test_ask.py
- tests/test_ollama_integration.py
🧬 Code graph analysis (3)
cortex/ask.py (2)
- cortex/llm/interpreter.py (1): `_get_ollama_model` (69-91)
- cortex/env_manager.py (1): `load` (571-595)
tests/test_cli.py (3)
- cortex/llm/interpreter.py (1): `parse` (320-378)
- cortex/packages.py (1): `parse` (383-427)
- cortex/cli.py (1): `install` (540-786)
tests/test_ollama_integration.py (1)
- scripts/setup_ollama.py (1): `test_model` (315-343)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Upload results
- GitHub Check: Build Package
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (7)
cortex/llm/interpreter.py (1)
115-116: LGTM! The comment relocation is a minor formatting adjustment with no functional impact.
cortex/dependency_check.py (1)
46-46: LGTM! The delimiter change from single to double quotes is cosmetic and doesn't affect functionality.
cortex/ask.py (1)
13-13: LGTM! The `Path` import and integration with `_get_ollama_model()` properly implement the configurable model selection as intended by the PR objectives.

Also applies to: 173-173
tests/test_cli.py (1)
61-71: LGTM! The test correctly validates that Cortex can successfully install packages using Ollama as a fallback when no API key is provided. The shift from testing an error case to testing successful Ollama fallback aligns with the PR objectives.
cortex/first_run_wizard.py (1)
374-407: LGTM! The interactive model selection implementation is well-structured and aligns perfectly with the PR objectives. The menu provides sensible defaults while allowing custom model selection, and the chosen model is properly persisted to the configuration.
tests/test_ask.py (1)
241-267: Well-structured test for Ollama model resolution! The test correctly validates both the environment variable override and the default behavior. The environment variable save/restore pattern is properly implemented, and the use of `tempfile.TemporaryDirectory` with `os.path.expanduser` patching ensures deterministic testing of the default model without side effects.

tests/test_ollama_integration.py (1)
110-110: Good parameterization of the Ollama model! Replacing the hardcoded `"llama3.2"` with the dynamic `test_model` variable enables tests to work with any available Ollama model, aligning perfectly with the PR objectives to make model selection configurable.

Also applies to: 156-156, 191-191
I have already added docstrings where it was required, and these changes don't require a separate doc file.
Actionable comments posted: 0
🧹 Nitpick comments (1)
cortex/first_run_wizard.py (1)
389-397: The invalid choice validation concern has been addressed. The else clause (lines 395-397) now handles invalid choices gracefully by defaulting to "llama3.2" and providing user feedback. This resolves the KeyError risk mentioned in the previous review.
However, consider adding validation for the custom model name input at line 392 to ensure it's not empty and doesn't contain unexpected characters.
🔎 Optional: Add validation for custom model input

```diff
 if choice == "5":
     model_name = self._prompt("Enter model name: ", default="llama3.2")
+    if not model_name or not model_name.strip():
+        print("Invalid model name, using default llama3.2")
+        model_name = "llama3.2"
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- cortex/ask.py
- cortex/config_utils.py
- cortex/first_run_wizard.py
- cortex/llm/interpreter.py
- tests/test_ask.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_ask.py
- cortex/ask.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/config_utils.py
- cortex/llm/interpreter.py
- cortex/first_run_wizard.py
🧬 Code graph analysis (2)
cortex/config_utils.py (1)
- cortex/env_manager.py (1): `load` (571-595)

cortex/llm/interpreter.py (1)
- cortex/config_utils.py (1): `get_ollama_model` (11-48)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test (Python 3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
🔇 Additional comments (5)
cortex/llm/interpreter.py (2)
7-8: LGTM: Clean import addition. The import follows Python conventions and integrates the shared Ollama model resolution utility.
71-76: Excellent refactoring to eliminate duplication. Delegating to the shared `get_ollama_model()` utility ensures consistent model resolution across the codebase while maintaining the same behavior. The docstring correctly documents the delegation.

cortex/first_run_wizard.py (1)
399-410: Well-structured model pulling and configuration storage. The selected model name is correctly used for both pulling the model and storing it in the configuration as `ollama_model`, which aligns with the resolution logic in `get_ollama_model()`. Error handling provides helpful feedback if the pull fails.

cortex/config_utils.py (2)
1-9: Clean module structure with appropriate imports. The module docstring is concise yet descriptive, and imports are minimal and necessary.
11-48: Excellent implementation of centralized Ollama model resolution. This function provides a clean, well-documented API for resolving the Ollama model name with clear precedence rules. Key strengths:
- Comprehensive docstring documenting all three resolution steps
- Proper type hint
- Correct handling of edge cases (empty strings are treated as falsy)
- Appropriate error handling (silently falls back to next step)
- Complies with coding guidelines (PEP 8, type hints, docstrings)
The implementation correctly handles empty strings from both environment variables and config file by using truthy checks, which prevents returning empty model names.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cortex/installation_history.py (1)
323-367: Critical: Database operations outside connection context. Lines 337-367 execute after the `with self._pool.get_connection() as conn:` block ends at line 336. The `cursor` and `conn` objects are used after the connection is closed/returned to the pool.

🐛 Proposed fix - extend the context block
```diff
     def update_installation(
         self, install_id: str, status: InstallationStatus, error_message: str | None = None
     ):
         """Update installation record after completion"""
         try:
             with self._pool.get_connection() as conn:
                 cursor = conn.cursor()

                 # Get packages from record
                 cursor.execute(
                     "SELECT packages, timestamp FROM installations WHERE id = ?", (install_id,)
                 )
                 result = cursor.fetchone()

                 if not result:
                     logger.error(f"Installation {install_id} not found")
                     return

                 packages = json.loads(result[0])
-            start_time = datetime.datetime.fromisoformat(result[1])
-            duration = (datetime.datetime.now() - start_time).total_seconds()
-
-            # Create after snapshot
-            after_snapshot = self._create_snapshot(packages)
-
-            # Update record
-            cursor.execute(
-                """
-                UPDATE installations
-                SET status = ?, after_snapshot = ?, error_message = ?, duration_seconds = ?
-                WHERE id = ?
-                """,
-                (
-                    status.value,
-                    json.dumps([asdict(s) for s in after_snapshot]),
-                    error_message,
-                    duration,
-                    install_id,
-                ),
-            )
-
-            conn.commit()
-
-            logger.info(f"Installation {install_id} updated: {status.value}")
+                start_time = datetime.datetime.fromisoformat(result[1])
+                duration = (datetime.datetime.now() - start_time).total_seconds()
+
+                # Create after snapshot
+                after_snapshot = self._create_snapshot(packages)
+
+                # Update record
+                cursor.execute(
+                    """
+                    UPDATE installations
+                    SET status = ?, after_snapshot = ?, error_message = ?, duration_seconds = ?
+                    WHERE id = ?
+                    """,
+                    (
+                        status.value,
+                        json.dumps([asdict(s) for s in after_snapshot]),
+                        error_message,
+                        duration,
+                        install_id,
+                    ),
+                )
+
+                conn.commit()
+
+                logger.info(f"Installation {install_id} updated: {status.value}")

         except Exception as e:
             logger.error(f"Failed to update installation: {e}")
             raise
```
🤖 Fix all issues with AI agents
In @.github/workflows/spam-protection.yml:
- Around line 65-79: The YAML workflow has a syntax error because the multi-line
string passed as the body to github.rest.issues.createComment is not properly
escaped for YAML; update the call in the github.rest.issues.createComment
invocation so body is constructed as a single properly-escaped string (e.g., use
a JS template literal or join an array of lines) and interpolate flags and
pr.number via concatenation or template placeholders; ensure the body value is a
valid YAML scalar (quoted or folded) so context.repo.owner, context.repo.repo,
issue_number: pr.number, and flags are preserved and the workflow parses
correctly.
- Around line 45-55: The current guard `if (authorPRs.length > 5)` prevents
burst detection for new accounts; remove that length check so the recent-window
logic always runs: compute `recent` from `authorPRs` (using the existing Date
math) and push `'burst-activity'` when `recent.length > 3`; optionally, if you
want to specifically target new accounts, add an additional check using the
user's account creation date (e.g., `user.created_at`) to treat short-lived
accounts differently, but do not rely on `authorPRs.length > 5` to trigger burst
detection.
In @docs/COMMANDS.md:
- Around line 323-359: Update the `cortex docker permissions` docs to document
the required --execute (-e) flag that switches from dry-run to applying
ownership changes: add a row for `--execute, -e` to the Options table (alongside
the existing `--yes, -y`), clarify in the Notes that the command is dry-run by
default and only applies changes when --execute is supplied, and update the
Examples to show a dry-run example (no flags), an interactive apply (`cortex
docker permissions --execute`), and a non-interactive apply (`cortex docker
permissions --execute --yes`), referencing the command name and flags exactly as
implemented.
🧹 Nitpick comments (1)
cortex/permission_manager.py (1)
99-126: Consider more specific exception handling in YAML parsing. The method works correctly but uses overly broad exception handling at line 124:

```python
except (ImportError, Exception):
```

This catches ALL exceptions, which can mask unexpected errors. Consider being more specific about what can fail during YAML parsing:

♻️ More specific exception handling

```diff
-        except (ImportError, Exception):
+        except (ImportError, yaml.YAMLError, OSError, KeyError, TypeError):
             # Silently fallback if PyYAML is missing or the file is malformed.
             pass
```

This explicitly handles:

- `ImportError`: PyYAML not installed
- `yaml.YAMLError`: Malformed YAML
- `OSError`: File read errors
- `KeyError`/`TypeError`: Unexpected YAML structure
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- .github/CODEOWNERS
- .github/scripts/cla_check.py
- .github/workflows/spam-protection.yml
- README.md
- cortex/api_key_detector.py
- cortex/cli.py
- cortex/dependency_importer.py
- cortex/installation_history.py
- cortex/permission_manager.py
- cortex/semantic_cache.py
- docs/COMMANDS.md
- docs/DOCKER_PERMISSIONS.md
- tests/test_dependency_importer.py
- tests/test_permission_manager.py
💤 Files with no reviewable changes (1)
- .github/CODEOWNERS
✅ Files skipped from review due to trivial changes (1)
- docs/DOCKER_PERMISSIONS.md
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- cortex/semantic_cache.py
- tests/test_permission_manager.py
- cortex/permission_manager.py
- cortex/api_key_detector.py
- cortex/dependency_importer.py
- cortex/installation_history.py
- cortex/cli.py
- tests/test_dependency_importer.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
- tests/test_permission_manager.py
- tests/test_dependency_importer.py
**/*install*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*install*.py: Dry-run by default for all installations in command execution
No silent sudo execution - require explicit user confirmation
Implement audit logging to ~/.cortex/history.db for all package operations
Files:
cortex/installation_history.py
🧠 Learnings (2)
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*sandbox*.py : Implement Firejail sandboxing for package operations
Applied to files:
README.md
📚 Learning: 2025-12-11T12:03:24.071Z
Learnt from: CR
Repo: cortexlinux/cortex PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-11T12:03:24.071Z
Learning: Applies to **/*install*.py : Implement audit logging to ~/.cortex/history.db for all package operations
Applied to files:
cortex/installation_history.py
🧬 Code graph analysis (4)
tests/test_permission_manager.py (1)
cortex/permission_manager.py (3)
- `diagnose` (65-85)
- `check_compose_config` (99-126)
- `fix_permissions` (128-173)
cortex/api_key_detector.py (1)
cortex/branding.py (1)
- `cx_print` (49-69)
cortex/cli.py (1)
cortex/permission_manager.py (4)
- `PermissionManager` (30-173)
- `check_compose_config` (99-126)
- `diagnose` (65-85)
- `fix_permissions` (128-173)
tests/test_dependency_importer.py (1)
cortex/dependency_importer.py (1)
- `parse` (136-184)
🪛 actionlint (1.7.9)
.github/workflows/spam-protection.yml
70-70: could not parse as YAML: did not find expected key
(syntax-check)
🪛 YAMLlint (1.37.1)
.github/workflows/spam-protection.yml
[error] 71-71: syntax error: expected , but found ','
(syntax)
🔇 Additional comments (17)
.github/scripts/cla_check.py (1)
347-347: LGTM! Adding the GitHub Copilot bot email to the allowlist is appropriate. The pattern matches the existing approach used for `noreply@github.com` on line 348.

cortex/api_key_detector.py (4)
23-23: Consider platform compatibility for the `fcntl` import. The `fcntl` module is Unix-only. While Cortex targets Linux, this will cause import errors on Windows/macOS development environments.

If cross-platform development is a concern, consider lazy-importing or using a try/except:

```python
try:
    import fcntl
except ImportError:
    fcntl = None  # Fallback for non-Unix systems
```
481-513: LGTM - File locking implementation is sound. The locked read-modify-write pattern correctly uses exclusive locking with proper cleanup in the `finally` block. The `.lock` file approach is a standard pattern.
116-131: LGTM! The Ollama provider detection from the config file is well-implemented with proper quote stripping and error handling.
232-255: LGTM! The Ollama preference saving flow is consistent with the existing `_ask_to_save_key` pattern and properly uses file locking for safe concurrent writes.

cortex/installation_history.py (1)
83-96: LGTM - Writability check is a good defensive measure. The added `os.access(db_dir, os.W_OK)` check ensures early detection of permission issues before attempting database operations, with proper fallback to the user directory.

Based on learnings, this file implements audit logging to `~/.cortex/history.db` as required for package operations.

cortex/semantic_cache.py (1)
83-89: LGTM! The writability check is consistent with the same pattern in `installation_history.py`, ensuring early detection of permission issues with proper fallback.

README.md (1)
70-71: LGTM! Documentation for the Docker Permission Fixer feature is consistent across the Features table, Command Reference, and Project Status sections.
Also applies to: 151-151, 341-341
cortex/dependency_importer.py (1)
354-367: LGTM - Robust URL parsing for Git repositories. The URL parsing correctly handles compound schemes like `git+https://` by detecting when the path starts with `//` and reconstructing the URL. The allowlist approach (`github.com`, `gitlab.com`, `bitbucket.org`) provides good security against arbitrary URL processing.

cortex/cli.py (2)
41-117: Well-implemented permission management with appropriate safety guards. The `docker_permissions` method follows good practices:

- Comprehensive docstring with Args, Returns, and error conditions
- Safety prompt before sudo operations unless `--yes` is provided
- Graceful handling of user cancellation (EOFError, KeyboardInterrupt)
- Specific exception handling for different error types
- Clear user feedback at each step

The two-flag design (`--execute` for dry-run vs apply, `--yes` for skipping prompts) provides good UX flexibility.
1719-1734: Clear argument parser setup for the Docker permissions feature. The argparse configuration is well-structured with:

- Dedicated subparser for `docker` commands
- Proper flag definitions with both short and long forms
- Clear help text for the `--execute` flag explaining dry-run default behavior
- Consistent naming conventions
tests/test_dependency_importer.py (1)
351-416: Comprehensive test coverage for Git URL parsing with security considerations. The new tests effectively cover:

- Multiple Git hosting providers (GitHub, GitLab, Bitbucket)
- Security: substring attack prevention to avoid malicious hostnames like `evilgithub.com` or `github.com.evil.com`
- Edge cases: branch specifiers, URLs without the `.git` extension

The security test at lines 378-397 is particularly valuable, ensuring the parser doesn't fall victim to hostname substring attacks.
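The substring-attack concern can be demonstrated with a minimal allowlist check. This is an illustrative sketch, not Cortex's actual implementation; the point is that an exact match on the parsed hostname defeats lookalike domains that a naive `"github.com" in url` test would accept:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"github.com", "gitlab.com", "bitbucket.org"}


def is_allowed_git_host(url: str) -> bool:
    """Return True only when the URL's exact hostname is allowlisted.

    A substring check would wrongly accept lookalikes such as
    evilgithub.com or github.com.evil.com; comparing the parsed
    hostname against the set does not.
    """
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS
```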
tests/test_permission_manager.py (1)
1-133: Excellent test coverage with proper environment isolation. The test suite demonstrates strong engineering:

- Proper fixture isolation (lines 10-22): the `manager` fixture mocks OS-level calls (`getuid`, `getgid`, `platform.system`) to create a consistent test environment regardless of the actual host OS
- Platform compatibility testing: validates both Windows rejection (lines 24-35) and WSL detection (lines 38-52)
- Mock usage: appropriate mocking of `os.walk`, `os.stat`, and YAML parsing to avoid filesystem dependencies
- Batch processing verification (lines 115-133): confirms the implementation batches sudo calls to avoid ARG_MAX limits, a subtle but important detail

The tests cover critical paths without relying on actual Docker containers or root permissions.
cortex/permission_manager.py (4)
1-27: Well-designed exclusion list for directory scanning. The `EXCLUDED_DIRS` constant appropriately covers common project directories that should be excluded from permission scans:

- Virtual environments (venv, .venv)
- Version control (.git)
- Build artifacts (dist, build)
- Package managers (node_modules)
- Tool caches (__pycache__, .pytest_cache, .tox, .mypy_cache)

This prevents unnecessary scanning and avoids false positives from ephemeral files.
39-63: Appropriate platform validation with clear error messaging. The initialization correctly:

- Detects native Windows and raises `NotImplementedError` with a helpful message explaining why the tool requires Linux/WSL
- Allows WSL by checking for "microsoft" in the release string (a reliable WSL indicator)
- Caches `host_uid` and `host_gid` to avoid repeated syscalls during batch operations

The error message clearly explains the limitation and the reason (Docker Desktop handles permissions differently on native Windows).
65-85: Efficient directory traversal with proper pruning. The `diagnose` method is well-implemented:

- In-place pruning (line 74): `dirs[:] = [...]` correctly modifies the list that `os.walk` uses for recursion, preventing descent into excluded directories
- Graceful error handling (lines 82-83): catches `PermissionError` and `FileNotFoundError` to skip inaccessible or deleted files during the scan
- Simple ownership check (line 80): compares `st_uid` against the cached host UID

This approach is both efficient (avoids scanning excluded dirs) and robust (handles race conditions).
128-173: Robust permission repair with proper batching and error handling. The `fix_permissions` method demonstrates careful engineering:

- ARG_MAX awareness (lines 159-168): batches file paths to avoid shell argument length limits, critical for large projects
- Timeout protection (line 167): a 60-second timeout prevents hanging on problematic sudo operations
- Specific exception handling (line 171): catches only the relevant exceptions (`CalledProcessError`, `TimeoutExpired`, `PermissionError`) rather than a broad `Exception`
- Clear user feedback: dry-run shows the first 5 files and counts the rest; execute mode shows progress

The batching logic is especially important and often overlooked in permission-fixing tools.
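The ARG_MAX batching idea, splitting a long path list into chunks whose combined argument length stays below a limit, can be sketched as follows. The function name and the limit value are illustrative, not Cortex's actual code:

```python
def batch_args(paths, max_chars=100_000):
    """Split `paths` into batches so each batch's total argument length
    (including separating spaces) stays under `max_chars`."""
    batch, size = [], 0
    for path in paths:
        cost = len(path) + 1  # +1 for the separating space
        if batch and size + cost > max_chars:
            yield batch
            batch, size = [], 0
        batch.append(path)
        size += cost
    if batch:
        yield batch
```

Each batch would then be passed to a single privileged `chown` invocation, keeping every command line safely under the kernel's argument-size limit.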
…lines in various test files and the first_run_wizard module.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
f643449 to 0d15f92 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @tests/test_ask.py:
- Around line 260-268: The test fails to ensure OLLAMA_MODEL isn't set in the
environment, causing get_ollama_model() to pick it up; before creating
AskHandler in the second block, explicitly remove or clear the OLLAMA_MODEL env
var (e.g., use patch.dict(os.environ, {}, clear=True) or pop OLLAMA_MODEL from
os.environ) so AskHandler(api_key="test", provider="ollama") exercises the
built-in default model lookup; update the test around the
TemporaryDirectory/patch("cortex.config_utils.Path.home"... ) block to ensure
OLLAMA_MODEL is absent for the assertion that handler2.model == "llama3.2".
🧹 Nitpick comments (1)
cortex/first_run_wizard.py (1)
391-392: Consider validating custom model name input. When users select option 5 (custom model), the input is not validated. While the user can manually pull the model later if invalid, validating the format (e.g., checking for a basic alphanumeric pattern) could improve the user experience.
💡 Optional validation pattern

```diff
 if choice == "5":
     model_name = self._prompt("Enter model name: ", default="llama3.2")
+    # Basic validation: non-empty and reasonable characters
+    if not model_name or not all(c.isalnum() or c in ".:_-" for c in model_name):
+        print("Invalid model name format, using default llama3.2")
+        model_name = "llama3.2"
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- cortex/ask.py
- cortex/config_utils.py
- cortex/dependency_check.py
- cortex/first_run_wizard.py
- cortex/llm/interpreter.py
- cortex/sandbox/docker_sandbox.py
- tests/test_ask.py
- tests/test_cli.py
- tests/test_ollama_integration.py
✅ Files skipped from review due to trivial changes (1)
- cortex/sandbox/docker_sandbox.py
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/test_cli.py
- cortex/llm/interpreter.py
- tests/test_ollama_integration.py
- cortex/dependency_check.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs
Files:
- tests/test_ask.py
- cortex/ask.py
- cortex/first_run_wizard.py
- cortex/config_utils.py
tests/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Maintain >80% test coverage for pull requests
Files:
tests/test_ask.py
🧬 Code graph analysis (2)
cortex/ask.py (2)
cortex/config_utils.py (1)
- `get_ollama_model` (11-48)

cortex/llm/interpreter.py (1)
- `_get_ollama_model` (71-76)
cortex/config_utils.py (2)
cortex/kernel_features/llm_device.py (1)
- `open` (158-159)

cortex/env_manager.py (1)
- `load` (571-595)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
🔇 Additional comments (4)
cortex/first_run_wizard.py (2)
389-397: Choice validation properly addresses the past review concern. The if/elif/else structure correctly handles all input cases:
- Valid choices "1"-"4" use the model_choices dictionary
- Choice "5" prompts for custom input
- Invalid choices default to "llama3.2" with user feedback
The past review concern about KeyError has been resolved.
399-412: Model pull and configuration handling look good. The implementation correctly:
- Pulls the selected model with appropriate user feedback
- Handles pull failures gracefully with a helpful error message
- Saves the user's model choice to config even if pull fails, allowing manual setup later
- Stores the selected model in `config["ollama_model"]` for use by the shared resolver

cortex/ask.py (1)
15-16: Clean integration of the shared Ollama model resolver. The implementation correctly:

- Imports the shared `get_ollama_model()` utility
- Adds a well-documented wrapper method `_get_ollama_model()`
- Integrates the resolver into `_default_model()` for the "ollama" provider
- Includes proper type hints and docstring
This eliminates code duplication and centralizes Ollama model resolution logic as intended by the PR objectives.
Also applies to: 174-174, 179-184
cortex/config_utils.py (1)
1-48: Well-implemented shared configuration utility. The implementation is clean and robust:
- Comprehensive docstring clearly explains the three-step precedence (env var → config file → default)
- Proper type hints on the return value
- Correct error handling: silently ignores I/O and JSON parsing errors, falling back to the next resolution step
- Uses `encoding="utf-8"` for portability
- Checks truthiness of values (lines 32, 42) to handle empty strings correctly
- Follows PEP 8 style guidelines
This utility successfully centralizes Ollama model resolution logic across the codebase and addresses the code duplication concern from the previous review.
…_MODEL environment variable
Suyashd999 left a comment:
LGTM, thanks @sujay-d07!



Related Issue
Closes #383
Summary
Removed hardcoded llama3.2 model from Ollama integration. Users can now configure any Ollama model via environment variables or config file.
Type of Change
Changes
Old Behaviour:
Screencast.from.2026-01-02.11-59-25.webm
New Behaviour:
Screencast.from.2026-01-02.12-02-57.webm
Checklist
Summary by CodeRabbit
New Features
Improvements
Bug Fixes