
Conversation

@sujay-d07
Collaborator

@sujay-d07 sujay-d07 commented Dec 27, 2025

Related Issue

Closes #383

Summary

Removed the hardcoded llama3.2 model from the Ollama integration. Users can now configure any Ollama model via an environment variable or a config file.

Changes

Old Behaviour:

  • Ollama model was hardcoded to llama3.2 in ask.py and tests
  • Users couldn't use alternative models like mistral, phi3, llama3.1:8b, etc.
  • First-run wizard forced llama3.2 installation
  • Tests failed if llama3.2 wasn't installed
Screencast.from.2026-01-02.11-59-25.webm

New Behaviour:

  • Ollama model reads from (priority order):
    • OLLAMA_MODEL environment variable
    • ~/.cortex/config.json → ollama_model field
    • Defaults to llama3.2 if neither set
  • First-run wizard offers 5 model choices (llama3.2, llama3.2:1b, mistral, phi3, custom)
  • Tests auto-detect available models or skip gracefully
  • Works with any installed Ollama model
Screencast.from.2026-01-02.12-02-57.webm
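For reference, a minimal standalone sketch of the resolution order described above. The resolve_ollama_model helper below is illustrative only; the actual logic lives in AskHandler._get_ollama_model in cortex/ask.py.

import json
import os
from pathlib import Path

def resolve_ollama_model() -> str:
    """Illustrative sketch: env var, then config file, then built-in default."""
    # 1. OLLAMA_MODEL environment variable wins if set
    env_model = os.environ.get("OLLAMA_MODEL")
    if env_model:
        return env_model
    # 2. Otherwise look for "ollama_model" in ~/.cortex/config.json
    config_file = Path.home() / ".cortex" / "config.json"
    try:
        if config_file.exists():
            model = json.loads(config_file.read_text()).get("ollama_model")
            if model:
                return model
    except (OSError, json.JSONDecodeError):
        pass  # A broken config file should not block resolution
    # 3. Fall back to the built-in default
    return "llama3.2"

Setting OLLAMA_MODEL=mistral in the environment overrides everything, while writing {"ollama_model": "phi3"} into ~/.cortex/config.json changes the default persistently.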

Summary by CodeRabbit

  • New Features

    • Ollama model selection is now configurable via environment or config file with a sensible default.
    • First-run wizard adds an interactive menu to choose or enter an Ollama model.
  • Tests

    • Tests updated to cover env-var/config overrides and dynamic Ollama model selection; integration tests detect available Ollama models.
  • Style

    • Minor user-facing message formatting adjusted for clearer package-install instructions.


Copilot AI review requested due to automatic review settings December 27, 2025 13:14
@coderabbitai
Contributor

coderabbitai bot commented Dec 27, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough


Adds dynamic Ollama model resolution: reads OLLAMA_MODEL env var or ~/.cortex/config.json, falls back to "llama3.2"; updates first-run wizard to let users pick an Ollama model; and adjusts tests and a few formatting lines to use the dynamic model selection.

Changes

Cohort / File(s) / Summary:

  • Core model resolution (cortex/ask.py): Adds AskHandler._get_ollama_model() to resolve the model from OLLAMA_MODEL, ~/.cortex/config.json, or the default "llama3.2"; _default_model() now calls this helper.
  • Setup wizard, interactive pull (cortex/first_run_wizard.py): Replaces the hard-coded Ollama pull with an interactive menu (predefined choices + Custom); uses the chosen model for the Ollama pull and saves ollama_model to config.
  • Tests, ask & integration (tests/test_ask.py, tests/test_ollama_integration.py): tests/test_ask.py updated to assert env-var override and deterministic fallback; tests/test_ollama_integration.py adds get_available_ollama_model() and selects the test model from env or available models (removes hardcoded "llama3.2").
  • Tests, CLI (tests/test_cli.py): test_install_no_api_key now patches cortex.cli.CommandInterpreter and injects a mock interpreter; the test signature is updated to accept the patched constructor.
  • Minor formatting / messages (cortex/dependency_check.py, cortex/llm/interpreter.py): Adjusts join/quote formatting in install instructions and reformats the OpenAI client init call (whitespace/comment only).

Sequence Diagram(s)

(omitted — changes do not introduce a multi-component sequential control flow requiring visualization)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

Suggested reviewers

  • mikejmorgan-ai
  • Suyashd999

Poem

🐰 I nibble code and sniff the breeze,
Env vars guide my model keys,
From config burrow or a prompt so spry,
Pick mistral, phi, or llama—hop high! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 65.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Title check (✅ Passed): The title accurately summarizes the main change: removing hardcoded Ollama models and enabling flexible model configuration.
  • Description check (✅ Passed): The description follows the template, includes the related issue (#383), provides a clear summary of old vs. new behavior, and covers the implementation details.
  • Linked Issues check (✅ Passed): All requirements from issue #383 are met: hardcoded llama3.2 removed, OLLAMA_MODEL environment variable respected, config file support added, first-run wizard updated with model choices, and tests updated to auto-detect models.
  • Out of Scope Changes check (✅ Passed): All changes are directly related to the linked issue #383. Formatting changes in dependency_check.py and interpreter.py are minor and support the overall refactoring.
✨ Finishing touches
  • 📝 Generate docstrings


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f01549 and a2e81e1.

📒 Files selected for processing (5)
  • cortex/ask.py
  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • cortex/ask.py
  • tests/test_ollama_integration.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🧬 Code graph analysis (1)
tests/test_ollama_integration.py (1)
scripts/setup_ollama.py (1)
  • test_model (315-343)
🪛 GitHub Actions: CI
cortex/first_run_wizard.py

[error] 390-390: W293 Blank line contains whitespace

🪛 GitHub Check: lint
cortex/first_run_wizard.py

[failure] 390-390: Ruff (W293)
cortex/first_run_wizard.py:390:1: W293 Blank line contains whitespace

tests/test_ask.py

[failure] 267-267: Ruff (W293)
tests/test_ask.py:267:1: W293 Blank line contains whitespace


[failure] 261-261: Ruff (W293)
tests/test_ask.py:261:1: W293 Blank line contains whitespace


[failure] 256-256: Ruff (W293)
tests/test_ask.py:256:1: W293 Blank line contains whitespace


[failure] 253-253: Ruff (W293)
tests/test_ask.py:253:1: W293 Blank line contains whitespace

tests/test_cli.py

[failure] 68-68: Ruff (W293)
tests/test_cli.py:68:1: W293 Blank line contains whitespace

🪛 GitHub Check: Lint
cortex/first_run_wizard.py

[failure] 390-390: Ruff (W293)
cortex/first_run_wizard.py:390:1: W293 Blank line contains whitespace

tests/test_ask.py

[failure] 267-267: Ruff (W293)
tests/test_ask.py:267:1: W293 Blank line contains whitespace


[failure] 261-261: Ruff (W293)
tests/test_ask.py:261:1: W293 Blank line contains whitespace


[failure] 256-256: Ruff (W293)
tests/test_ask.py:256:1: W293 Blank line contains whitespace


[failure] 253-253: Ruff (W293)
tests/test_ask.py:253:1: W293 Blank line contains whitespace

tests/test_cli.py

[failure] 68-68: Ruff (W293)
tests/test_cli.py:68:1: W293 Blank line contains whitespace

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Agent
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (6)
cortex/first_run_wizard.py (1)

374-407: LGTM! User-driven model selection aligns with PR objectives.

The implementation successfully replaces the hardcoded "llama3.2" default with a user-friendly selection menu. The chosen model is properly stored in config["ollama_model"] for later use by the dynamic model resolution logic in cortex/ask.py.

tests/test_ollama_integration.py (3)

27-46: LGTM! Robust helper for dynamic model detection.

The get_available_ollama_model() helper elegantly solves the problem of tests assuming specific models are installed. It parses ollama list output and returns the first available model, allowing tests to work with any Ollama installation.
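A rough sketch of what such a helper could look like; this is an assumption based on the description above, not the exact code in tests/test_ollama_integration.py.

import subprocess

def get_available_ollama_model() -> str | None:
    """Return the first locally installed Ollama model name, or None."""
    try:
        result = subprocess.run(
            ["ollama", "list"], capture_output=True, text=True, timeout=10
        )
    except (OSError, subprocess.TimeoutExpired):
        return None
    if result.returncode != 0:
        return None
    # Skip the header row; the first column of each data row is the model name
    for line in result.stdout.splitlines()[1:]:
        parts = line.split()
        if parts:
            return parts[0]
    return None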


49-53: LGTM! Enhanced skip condition prevents false test failures.

The updated pytestmark now checks both Ollama installation and model availability, ensuring tests skip gracefully when requirements aren't met rather than failing.
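Inside tests/test_ollama_integration.py, the skip condition presumably looks something like the following sketch, reusing the get_available_ollama_model() helper above; the actual predicate in the module may differ.

import shutil

import pytest

pytestmark = pytest.mark.skipif(
    shutil.which("ollama") is None or get_available_ollama_model() is None,
    reason="Ollama is not installed or no local models are available",
)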


95-111: LGTM! Test respects environment-driven model selection.

The test now uses OLLAMA_MODEL environment variable first, then falls back to get_available_ollama_model(), matching the priority established in cortex/ask.py. This ensures tests verify the actual production behavior.
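In other words, the test's model selection presumably reduces to something like the following line (a sketch, not the verbatim test code):

test_model = os.environ.get("OLLAMA_MODEL") or get_available_ollama_model()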

cortex/ask.py (2)

180-202: LGTM! Clean implementation of dynamic Ollama model resolution.

The _get_ollama_model() method correctly implements the three-tier priority described in the PR objectives:

  1. OLLAMA_MODEL environment variable
  2. ~/.cortex/config.json → ollama_model field
  3. Fallback to "llama3.2"

The silent exception handling (line 198-199) is appropriate for this fallback scenario, ensuring config read errors don't break the user experience.

Based on PR objectives and coding guidelines.


169-178: LGTM! Proper delegation to dynamic model resolver.

The change from hardcoded "llama3.2" to self._get_ollama_model() enables the flexible model selection that is the core objective of this PR.

Contributor

Copilot AI left a comment

Pull request overview

This PR removes the hardcoded llama3.2 model from the Ollama integration, enabling users to configure any Ollama model via environment variables or configuration files. This resolves issue #383 where users were unable to specify alternative models like mistral, phi3, or llama3.1:8b.

Key Changes

  • Added configurable Ollama model selection with priority: OLLAMA_MODEL environment variable → ~/.cortex/config.json → default llama3.2
  • Enhanced first-run wizard with 5 model options (llama3.2, llama3.2:1b, mistral, phi3, custom)
  • Updated tests to auto-detect available Ollama models instead of assuming llama3.2

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 10 comments.

Show a summary per file
  • cortex/ask.py: Added _get_ollama_model() method to read the model from an environment variable or config file with a fallback to llama3.2
  • cortex/first_run_wizard.py: Updated the wizard to offer 5 model choices instead of forcing llama3.2 installation
  • tests/test_ask.py: Modified test to verify model selection from the environment variable and config
  • tests/test_ollama_integration.py: Added get_available_ollama_model() helper and updated all tests to use detected models
  • tests/test_cli.py: Updated test expectation to reflect that the Ollama fallback allows install to succeed without an API key


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
cortex/first_run_wizard.py (1)

390-390: Fix trailing whitespace on line 390.

Line 390 contains trailing whitespace, causing pipeline failures.

tests/test_ask.py (1)

253-253: Fix trailing whitespace on blank lines.

Multiple blank lines contain trailing whitespace (lines 253, 256, 261, 267), causing linting failures. These were flagged in the previous review and need to be addressed.

As per pipeline failures and coding guidelines.

Also applies to: 256-256, 261-261, 267-267

🧹 Nitpick comments (2)
tests/test_ask.py (2)

252-252: Remove redundant import.

The os module is already imported at the top of the file (line 3), so this import statement is unnecessary.

🔎 Proposed fix
     def test_default_model_ollama(self):
         """Test default model for Ollama."""
         # Test with environment variable
-        import os

         # Save and clear any existing OLLAMA_MODEL
         original_model = os.environ.get("OLLAMA_MODEL")

As per PEP 8 coding guidelines.


268-272: Consider verifying the default fallback value.

The test correctly validates that a model is configured, but it doesn't verify the specific fallback default. According to the PR objectives, when neither OLLAMA_MODEL nor the config file specifies a model, it should default to "llama3.2".

Consider adding a test case that explicitly verifies this fallback behavior, possibly by temporarily moving/mocking the config file to ensure the default is tested in isolation.

💡 Example approach

You could add an additional test or expand this one to verify the default:

# Test with mocked config that returns None for ollama_model
with patch('cortex.ask.load_config', return_value={}):
    handler3 = AskHandler(api_key="test", provider="ollama")
    self.assertEqual(handler3.model, "llama3.2")

This would ensure the fallback default is properly tested per the PR objectives.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2e81e1 and d536865.

📒 Files selected for processing (4)
  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/first_run_wizard.py
  • tests/test_ask.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_ask.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (1)
cortex/first_run_wizard.py (1)

374-409: Good implementation of dynamic model selection.

The model selection menu provides a good user experience with multiple preset options and custom model input. The implementation correctly:

  • Offers five sensible model choices with size information
  • Allows custom model names
  • Uses the selected model throughout (pull command, error messages)
  • Stores the selection in config["ollama_model"] for dynamic resolution

This aligns well with the PR objectives to remove hardcoded "llama3.2" references.

@jaysurse
Collaborator

@Suyashd999 I have verified the changes made by this PR and it fulfills the requirements of issue #383.

@Suyashd999
Collaborator

@sujay-d07 please fix the conflicts

@github-actions

github-actions bot commented Dec 30, 2025

CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:

  • Copilot (175728472+Copilot@users.noreply.github.com)

How to Sign

  1. Read the CLA document
  2. Open a CLA signature request
  3. A maintainer will add you to the signers list
  4. Comment recheck on this PR to re-run verification

Verified Signers


This check runs automatically. Maintainers can update .github/cla-signers.json to add signers.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/test_cli.py (1)

62-82: Critical: Unresolved merge conflict markers.

The file contains Git merge conflict markers (<<<<<<< HEAD, =======, >>>>>>>), making it syntactically invalid Python. This is causing all pipeline failures and preventing tests from running.

You need to manually resolve this conflict by:

  1. Reviewing both versions of test_install_no_api_key (lines 63-70 vs 72-81)
  2. Deciding which implementation to keep or merging them
  3. Removing all conflict markers (<<<<<<<, =======, >>>>>>>)

Based on the PR objectives, the new version (lines 72-81) that mocks CommandInterpreter and expects success with Ollama appears to be the intended implementation.

🔎 Recommended resolution

Remove the HEAD version and conflict markers, keeping only the new implementation:

     @patch.dict(os.environ, {}, clear=True)
-<<<<<<< HEAD
-    def test_install_no_api_key(self):
-        # When no API key is set, the CLI falls back to Ollama.
-        # If Ollama is running, this should succeed. If not, it should fail.
-        # We'll mock Ollama to be unavailable to test the failure case.
-        with patch("cortex.llm.interpreter.CommandInterpreter.parse") as mock_parse:
-            mock_parse.side_effect = RuntimeError("Ollama not available")
-            result = self.cli.install("docker")
-            self.assertEqual(result, 1)
-=======
     @patch("cortex.cli.CommandInterpreter")
     def test_install_no_api_key(self, mock_interpreter_class):
         # Should work with Ollama (no API key needed)
         mock_interpreter = Mock()
         mock_interpreter.parse.return_value = ["apt update", "apt install docker"]
         mock_interpreter_class.return_value = mock_interpreter

         result = self.cli.install("docker")
         # Should succeed with Ollama as fallback provider
         self.assertEqual(result, 0)
->>>>>>> a2e81e1 (Removed the hardcoded Ollama Models and now works with any Ollama Model)
♻️ Duplicate comments (1)
cortex/first_run_wizard.py (1)

389-394: Critical: KeyError on invalid model choice.

Line 394 will raise KeyError if the user enters an invalid choice (e.g., "6", "abc", or any value not in "1"-"5"). The code only checks for "5" at line 391, but doesn't validate "1"-"4" before the dictionary lookup.

🔎 Proposed fix
         choice = self._prompt("\nEnter choice [1]: ", default="1")

         if choice == "5":
             model_name = self._prompt("Enter model name: ", default="llama3.2")
         else:
-            model_name = model_choices[choice]
+            model_name = model_choices.get(choice, "llama3.2")

Alternatively, add input validation:

         choice = self._prompt("\nEnter choice [1]: ", default="1")
+        
+        # Validate choice
+        while choice not in model_choices and choice != "5":
+            print("Invalid choice. Please enter a number between 1 and 5.")
+            choice = self._prompt("\nEnter choice [1]: ", default="1")

         if choice == "5":
             model_name = self._prompt("Enter model name: ", default="llama3.2")
         else:
             model_name = model_choices[choice]
🧹 Nitpick comments (1)
tests/test_ask.py (1)

245-257: Consider using @patch.dict for cleaner test isolation.

The test manually saves and restores the OLLAMA_MODEL environment variable. While this works, using @patch.dict(os.environ, ...) provides automatic cleanup even if the test fails and is more consistent with other tests in the file (e.g., test_install_no_api_key in test_cli.py).

🔎 Proposed refactor
-    def test_default_model_ollama(self):
+    @patch.dict(os.environ, {"OLLAMA_MODEL": "test-model"})
+    def test_default_model_ollama(self):
         """Test default model for Ollama."""
-        # Test with environment variable
-
-        # Save and clear any existing OLLAMA_MODEL
-        original_model = os.environ.get("OLLAMA_MODEL")
-
         # Test with custom env variable
-        os.environ["OLLAMA_MODEL"] = "test-model"
         handler = AskHandler(api_key="test", provider="ollama")
         self.assertEqual(handler.model, "test-model")
-
-        # Clean up
-        if original_model is not None:
-            os.environ["OLLAMA_MODEL"] = original_model
-        else:
-            os.environ.pop("OLLAMA_MODEL", None)

Note: You'll need to test the default behavior separately, potentially in a new test method without the env var set.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d536865 and 751edb8.

📒 Files selected for processing (5)
  • cortex/ask.py
  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • cortex/ask.py
  • tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/test_ask.py
  • cortex/first_run_wizard.py
  • tests/test_cli.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_ask.py
  • tests/test_cli.py
🧬 Code graph analysis (1)
tests/test_cli.py (1)
cortex/cli.py (1)
  • install (320-566)
🪛 GitHub Actions: Cortex Automation
tests/test_cli.py

[error] 62-62: IndentationError: unexpected unindent. Possible merge conflict markers (e.g., '<<<<<<< HEAD') in the file.

🪛 GitHub Check: lint
tests/test_cli.py

[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:7: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:5: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:3: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:1: invalid-syntax: Expected class, function definition or async function definition after decorator


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:7: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:5: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:3: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:1: invalid-syntax: Expected a statement

🪛 GitHub Check: Lint
tests/test_cli.py

[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:7: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:5: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:3: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:1: invalid-syntax: Expected class, function definition or async function definition after decorator


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:7: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:5: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:3: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:1: invalid-syntax: Expected a statement

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/scripts/cla_check.py (1)

1-389: Inconsistency: File changes unrelated to PR objectives.

This PR is titled "Removed the hardcoded Ollama Models" and the objectives describe changes to Ollama model configuration, environment variable resolution, and test updates. However, the file under review (.github/scripts/cla_check.py) is a CLA enforcement script with only formatting and type hint improvements—completely unrelated to Ollama.

Best practice: separate unrelated changes into distinct PRs to maintain clear scope, simplify reviews, and keep git history clean.

🧹 Nitpick comments (1)
tests/test_cli.py (1)

62-71: Test correctly validates Ollama fallback behavior.

The test properly mocks CommandInterpreter and verifies that installation succeeds with Ollama when no API key is available. The mocking pattern is consistent with other similar tests in this file.

💡 Optional: Consider adding assertions to verify provider selection

To make the test more explicit about Ollama fallback behavior, you could add assertions to verify the provider and API key:

@patch.dict(os.environ, {}, clear=True)
@patch("cortex.cli.CommandInterpreter")
def test_install_no_api_key(self, mock_interpreter_class):
    # Should work with Ollama (no API key needed)
    mock_interpreter = Mock()
    mock_interpreter.parse.return_value = ["apt update", "apt install docker"]
    mock_interpreter_class.return_value = mock_interpreter
    
    result = self.cli.install("docker")
    # Should succeed with Ollama as fallback provider
    self.assertEqual(result, 0)
    
    # Verify Ollama was used as the provider
    mock_interpreter_class.assert_called_once()
    call_kwargs = mock_interpreter_class.call_args[1]
    self.assertEqual(call_kwargs.get("provider"), "ollama")
    self.assertEqual(call_kwargs.get("api_key"), "ollama-local")

This would make the Ollama fallback behavior more explicit in the test verification.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 751edb8 and 0d058db.

📒 Files selected for processing (4)
  • .github/scripts/cla_check.py
  • cortex/dependency_check.py
  • cortex/llm/interpreter.py
  • tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/llm/interpreter.py
  • tests/test_cli.py
  • cortex/dependency_check.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (6)
cortex/llm/interpreter.py (1)

114-117: LGTM! Clean formatting improvement.

The multi-line argument formatting with trailing comma follows PEP 8 style guidelines and improves readability without any semantic changes.

cortex/dependency_check.py (1)

46-46: LGTM! Consistent formatting.

The comma-space separator for the package list improves readability in the error message.

.github/scripts/cla_check.py (4)

137-137: LGTM: Formatting improvements enhance readability.

The reformatting of API calls and JSON parameters to single lines improves consistency and readability without changing behavior.

Also applies to: 242-242, 260-260, 265-265, 291-291, 304-304, 312-312


339-348: LGTM: Multi-line bot list improves maintainability.

Reorganizing the bot patterns list to multi-line format enhances readability and makes future diffs cleaner when adding or removing bot patterns.


11-11: LGTM: Import grouping follows PEP 8.

The blank line correctly separates standard library imports from third-party imports, adhering to PEP 8 style guidelines.


89-89: No action needed. The project requires Python 3.10+ as the minimum supported version, which fully supports the union type syntax (str | None) used in the type hints. The from __future__ import annotations workaround is unnecessary.

@sujay-d07
Collaborator Author

recheck

Collaborator

@Anshgrover23 Anshgrover23 left a comment

@sujay-d07 Please update your branch and merge with main.

sujay-d07 and others added 10 commits January 1, 2026 16:19
…lines in various test files and the first_run_wizard module.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…pdate dependency check formatting; enhance CLI tests for Ollama fallback behavior
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
tests/test_ask.py (1)

261-264: Critical: Wrong patch target breaks test isolation.

Line 263 patches os.path.expanduser, but the implementation in cortex/ask.py uses Path.home() to locate the config file. Patching os.path.expanduser won't affect Path.home(), so the test will still read from the real ~/.cortex/config.json instead of the temporary directory, making the test non-deterministic and environment-dependent.

🔎 Proposed fix
+        from pathlib import Path
+        
         # Test deterministic default behavior when no env var or config file exists.
         # Point the home directory to a temporary location without ~/.cortex/config.json
         with (
             tempfile.TemporaryDirectory() as tmpdir,
-            patch("os.path.expanduser", return_value=tmpdir),
+            patch.object(Path, "home", return_value=Path(tmpdir)),
         ):
             handler2 = AskHandler(api_key="test", provider="ollama")
             # When no env var and no config file exist, AskHandler should use its built-in default.
             self.assertEqual(handler2.model, "llama3.2")

Based on learnings, this issue was previously flagged but remains unaddressed.

🧹 Nitpick comments (1)
tests/test_ask.py (1)

245-257: Consider using @patch.dict for better test isolation.

Direct manipulation of os.environ can affect test isolation. Using the @patch.dict(os.environ, ...) decorator ensures automatic cleanup and prevents potential interference with parallel or subsequent tests.

🔎 Proposed refactor using @patch.dict
-        # Save and clear any existing OLLAMA_MODEL
-        original_model = os.environ.get("OLLAMA_MODEL")
-
-        # Test with custom env variable
-        os.environ["OLLAMA_MODEL"] = "test-model"
-        handler = AskHandler(api_key="test", provider="ollama")
-        self.assertEqual(handler.model, "test-model")
-
-        # Clean up
-        if original_model is not None:
-            os.environ["OLLAMA_MODEL"] = original_model
-        else:
-            os.environ.pop("OLLAMA_MODEL", None)
+        # Test with custom env variable
+        with patch.dict(os.environ, {"OLLAMA_MODEL": "test-model"}):
+            handler = AskHandler(api_key="test", provider="ollama")
+            self.assertEqual(handler.model, "test-model")
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0d058db and 3c3af6a.

📒 Files selected for processing (7)
  • cortex/ask.py
  • cortex/dependency_check.py
  • cortex/first_run_wizard.py
  • cortex/llm/interpreter.py
  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🚧 Files skipped from review as they are similar to previous changes (6)
  • tests/test_cli.py
  • cortex/llm/interpreter.py
  • tests/test_ollama_integration.py
  • cortex/first_run_wizard.py
  • cortex/dependency_check.py
  • cortex/ask.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/test_ask.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_ask.py
🧬 Code graph analysis (1)
tests/test_ask.py (1)
cortex/ask.py (1)
  • AskHandler (135-379)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)

Collaborator

@Anshgrover23 Anshgrover23 left a comment

@sujay-d07 CLA check is failing, Kindly raise a new PR following the pattern of this PR #401

@sujay-d07
Collaborator Author

@Anshgrover23 actually my details are already in the CLA JSON file, but I don't know why it's failing. Either way, I have followed the procedure of PR #401 and created a new PR #414 adding the details again.
Please merge it so that this issue gets resolved.

@Anshgrover23
Collaborator

@sujay-d07 No need to worry about CLA, will fix that soon. Also, add a before/after video.

@Suyashd999 Suyashd999 self-requested a review January 1, 2026 21:12
Suyashd999
Suyashd999 previously approved these changes Jan 1, 2026
Collaborator

@Suyashd999 Suyashd999 left a comment

LGTM.

Collaborator

@Anshgrover23 Anshgrover23 left a comment

Where is the before/after video?

@sujay-d07 sujay-d07 requested a review from Anshgrover23 January 2, 2026 06:35
Collaborator

@Anshgrover23 Anshgrover23 left a comment

@sujay-d07 LGTM, just address a minor comment.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c3af6a and dbea5e0.

📒 Files selected for processing (1)
  • cortex/ask.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/ask.py
🧬 Code graph analysis (1)
cortex/ask.py (1)
cortex/llm/interpreter.py (1)
  • _get_ollama_model (69-91)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
🔇 Additional comments (3)
cortex/ask.py (3)

13-13: LGTM! Path import correctly placed at module level.

The import has been moved to the top of the file as requested in previous reviews, following Python best practices.


173-173: Excellent refactoring!

Delegating Ollama model resolution to a dedicated method improves maintainability and aligns with the PR objectives of making model selection configurable.


178-212: Well-implemented method with comprehensive documentation.

The implementation correctly addresses both past review comments:

  • ✓ Comprehensive docstring explaining the precedence order
  • ✓ Path import moved to module level

The logic is sound: proper use of pathlib, context managers, defensive exception handling, and clear precedence order (env var → config file → default).

Comment on lines +178 to +212
def _get_ollama_model(self) -> str:
    """Determine which Ollama model to use.

    The model name is resolved using the following precedence:

    1. If the ``OLLAMA_MODEL`` environment variable is set, its value is
       returned.
    2. Otherwise, if ``~/.cortex/config.json`` exists and contains an
       ``"ollama_model"`` key, that value is returned.
    3. If neither of the above sources provides a model name, the
       hard-coded default ``"llama3.2"`` is used.

    Any errors encountered while reading or parsing the configuration
    file are silently ignored, and the resolution continues to the next
    step in the precedence chain.
    """
    # Try environment variable first
    env_model = os.environ.get("OLLAMA_MODEL")
    if env_model:
        return env_model

    # Try config file
    try:
        config_file = Path.home() / ".cortex" / "config.json"
        if config_file.exists():
            with open(config_file) as f:
                config = json.load(f)
                model = config.get("ollama_model")
                if model:
                    return model
    except Exception:
        pass  # Ignore errors reading config

    # Default to llama3.2
    return "llama3.2"
Contributor

🛠️ Refactor suggestion | 🟠 Major

Extract duplicated _get_ollama_model logic to a shared utility module.

This method is duplicated identically in cortex/llm/interpreter.py (lines 68-90). Maintaining the same logic in multiple files violates DRY principles and creates maintenance risk—any bug fixes or enhancements must be applied in both locations.

Recommended refactoring approach

Create a shared utility module (e.g., cortex/config_utils.py) and extract this logic:

# cortex/config_utils.py
"""Configuration utilities for Cortex."""

import json
import os
from pathlib import Path


def get_ollama_model() -> str:
    """Determine which Ollama model to use.

    The model name is resolved using the following precedence:

    1. If the ``OLLAMA_MODEL`` environment variable is set, its value is
       returned.
    2. Otherwise, if ``~/.cortex/config.json`` exists and contains an
       ``"ollama_model"`` key, that value is returned.
    3. If neither of the above sources provides a model name, the
       hard-coded default ``"llama3.2"`` is used.

    Any errors encountered while reading or parsing the configuration
    file are silently ignored, and the resolution continues to the next
    step in the precedence chain.
    """
    # Try environment variable first
    env_model = os.environ.get("OLLAMA_MODEL")
    if env_model:
        return env_model

    # Try config file
    try:
        config_file = Path.home() / ".cortex" / "config.json"
        if config_file.exists():
            with open(config_file) as f:
                config = json.load(f)
                model = config.get("ollama_model")
                if model:
                    return model
    except Exception:
        pass  # Ignore errors reading config

    # Default to llama3.2
    return "llama3.2"

Then update both files to use the shared function:

# In cortex/ask.py
from cortex.config_utils import get_ollama_model

class AskHandler:
    def _get_ollama_model(self) -> str:
        """Determine which Ollama model to use."""
        return get_ollama_model()
# In cortex/llm/interpreter.py  
from cortex.config_utils import get_ollama_model

# Replace the _get_ollama_model method with:
def _get_ollama_model(self) -> str:
    """Get Ollama model from config file or environment."""
    return get_ollama_model()
🤖 Prompt for AI Agents
In cortex/ask.py around lines 178 to 212 the _get_ollama_model implementation is
duplicated elsewhere; extract this logic into a shared utility (e.g., create
cortex/config_utils.py with a get_ollama_model() function that implements the
same env → ~/.cortex/config.json → default resolution and silent error
handling), then in cortex/ask.py replace the method body to simply call and
return get_ollama_model() (add the import), and do the same replacement in
cortex/llm/interpreter.py (remove the duplicate implementation and import the
shared utility).

@sonarqubecloud

sonarqubecloud bot commented Jan 2, 2026

@sujay-d07
Collaborator Author

recheck

@Anshgrover23
Collaborator

Duplicate of #435

@Anshgrover23 Anshgrover23 marked this as a duplicate of #435 Jan 2, 2026

Development

Successfully merging this pull request may close these issues.

[BUG] Remove hardcoded llama3.2 model, allow configurable Ollama model selection

4 participants