feat: add project detach/reattach functionality #137
cabana8471-arch wants to merge 3 commits into AutoForgeAI:master
Conversation
📝 Walkthrough
Adds a new detach/reattach subsystem (new `detach.py` module plus API, CLI, and UI integration).
Changes
Sequence Diagram(s)
sequenceDiagram
actor User
participant Client as Web Client
participant API as Projects API
participant Detach as detach.py
participant FS as File System
participant Git as .gitignore
User->>Client: Click Detach
Client->>API: POST /projects/{name}/detach
API->>API: validate project & agent lock
API->>Detach: detach_project(name)
Detach->>Detach: acquire_detach_lock()
Detach->>FS: discover Autocoder files, compute checksums
Detach->>FS: create `.autocoder-backup/` and copy files
Detach->>Detach: write `manifest.json`
Detach->>Git: update `.gitignore`
Detach->>Detach: release_detach_lock()
Detach-->>API: DetachResponse
API-->>Client: 200 DetachResponse
sequenceDiagram
actor User
participant Client as Web Client
participant API as Projects API
participant Detach as detach.py
participant FS as File System
participant Git as .gitignore
User->>Client: Click Reattach
Client->>API: POST /projects/{name}/reattach
API->>API: validate project & backup exists
API->>Detach: reattach_project(name)
Detach->>Detach: acquire_detach_lock()
Detach->>FS: read `manifest.json`, verify entries
Detach->>FS: backup conflicting user files -> `.pre-reattach-backup/`
Detach->>FS: restore files to original paths (atomic replace)
Detach->>FS: remove `.autocoder-backup/` if complete
Detach->>Git: update `.gitignore`
Detach->>Detach: release_detach_lock()
Detach-->>API: ReattachResponse
API-->>Client: 200 ReattachResponse
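For orientation, both flows above reduce to a single POST each against the projects API. Here is a minimal client sketch; the base URL, port, and the use of the `requests` library are illustrative assumptions, not part of this PR:

```python
# Hypothetical client for the two flows diagrammed above.
# Assumes the API server is reachable at localhost:8000; adjust to your setup.
import requests

BASE_URL = "http://localhost:8000"

def detach(project: str) -> dict:
    # Mirrors: POST /projects/{name}/detach -> 200 DetachResponse
    resp = requests.post(f"{BASE_URL}/projects/{project}/detach")
    resp.raise_for_status()
    return resp.json()

def reattach(project: str) -> dict:
    # Mirrors: POST /projects/{name}/reattach -> 200 ReattachResponse
    resp = requests.post(f"{BASE_URL}/projects/{project}/reattach")
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(detach("demo-project"))
    print(reattach("demo-project"))
```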
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
start.py (1)
23-33: ⚠️ Potential issue | 🟠 Major
Resolve ruff import ordering failure.
CI reports an unsorted import block at lines 23-33. Reorder the local imports to place `import detach as detach_module` before the `from` imports.
🧼 Suggested ordering
```diff
-from prompts import (
-    get_project_prompts_dir,
-    has_project_prompts,
-    scaffold_project_prompts,
-)
-from registry import (
-    get_project_path,
-    list_registered_projects,
-    register_project,
-)
-import detach as detach_module
+import detach as detach_module
+from prompts import (
+    get_project_prompts_dir,
+    has_project_prompts,
+    scaffold_project_prompts,
+)
+from registry import (
+    get_project_path,
+    list_registered_projects,
+    register_project,
+)
```
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 310-333: The restore loop currently increments files_restored by 1
per manifest entry, undercounting when an entry is a directory; after performing
shutil.move(str(src_path), str(dest_path)) in the loop, detect whether the moved
item is a directory (dest_path.is_dir()) and, if so, compute the number of files
it contains (e.g., os.walk to sum files) and add that count to files_restored;
otherwise add 1 for a single file. Update the logger.debug("Restored %s",
entry["path"]) to reflect the count (or log both path and restored count) so the
reported files_restored matches the actual number of files restored for each
manifest entry.
- Around line 613-621: In detach.py remove the unnecessary f-string prefixes on
the two non-interpolated prints: replace the f-strings used for the static
status messages (the print with " Status: DETACHED" and the print with "
Status: attached (Autocoder files present)") so they are plain strings (use
print(" Status: DETACHED") and print(" Status: attached (Autocoder files
present)")), leaving all other interpolated f-strings (e.g., print(f"\nProject:
{args.project}"), detached_at, backup_size, file_count) unchanged.
- Around line 181-187: The current check-then-touch flow creates a race; replace
it with an atomic create by removing the lock_file.exists() check and calling
lock_file.touch(exist_ok=False) inside the try block (using the existing
lock_file and DETACH_LOCK/project_dir symbols), catch FileExistsError and return
False, return True on successful touch, and re-raise or handle other exceptions
(instead of broadly catching OSError) so only the "already exists" case is
treated as lock acquisition failure.
In `@server/routers/projects.py`:
- Around line 52-61: The import ordering violates lint rules: move the top-level
import "import detach as detach_module" before the other from-imports so the
module import precedes the from-imports; update the block where _detach_module,
_check_spec_exists, _scaffold_project_prompts, _get_project_prompts_dir, and
_count_passing_tests are assigned so they still reference detach_module,
check_spec_exists, scaffold_project_prompts, get_project_prompts_dir, and
count_passing_tests after reordering.
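To make the lock-acquisition item above concrete, here is a minimal sketch of atomic acquisition via `touch(exist_ok=False)`. The `DETACH_LOCK` filename comes from this PR; the function shape is illustrative:

```python
# Minimal sketch of atomic lock acquisition: check-and-create in one step.
from pathlib import Path

DETACH_LOCK = ".autocoder-detach.lock"  # lock filename used by this PR

def acquire_detach_lock(project_dir: Path) -> bool:
    lock_file = project_dir / DETACH_LOCK
    try:
        # touch(exist_ok=False) uses O_CREAT|O_EXCL under the hood, so the
        # "does it exist?" check and the creation happen atomically.
        lock_file.touch(exist_ok=False)
        return True
    except FileExistsError:
        # Another detach/reattach operation already holds the lock.
        return False
```

Only `FileExistsError` is treated as "lock held"; any other `OSError` propagates to the caller, matching the behavior the comment asks for.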
…toForgeAI#137

CI fixes (ruff linting):
- Remove extraneous f-string prefixes in detach.py
- Fix import ordering in server/routers/projects.py and start.py

CodeRabbit actionable fixes:
- Make lock acquisition atomic with touch(exist_ok=False) to prevent race conditions
- Fix restore file count to properly count directory contents using manifest file_count

CSS fix:
- Reorder @import rules in globals.css (url import before tailwindcss)

Test improvements:
- Add tests for SQLite WAL files (features.db-shm, features.db-wal)
- Add tests for new file patterns (test-*.sql, test-*.php, create-*-test*.php, rollback-*.json)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Hey @cabana8471-arch, thanks for putting this together! The detach/reattach concept is a great addition — it solves a real pain point when transitioning completed projects to unrestricted Claude Code usage. I've done a thorough review and found a few issues that should be addressed before merging. Here's a summary:

Critical (must fix before merge)
1. Path traversal in
Security fixes:
- Add path traversal protection in restore_backup (validates paths stay within project)
- Implement copy-then-delete backup approach (prevents data loss on partial failures)
- Add manifest version validation and structure checks
- Fix stale lock recovery with PID/timestamp tracking

API improvements:
- Remove TOCTOU race condition in detach endpoint (rely on detach module's check)
- Add agent lock check to reattach endpoint for consistency
- Change async def to def for blocking I/O endpoints (FastAPI threadpool)
- Return relative backup_path instead of absolute (security)

UI fixes:
- Make is_detached non-optional in ProjectSummary type
- Add loadingLabel prop to ConfirmDialog (fixes hardcoded "Deleting...")
- Add cache invalidations for project-specific queries in detach/reattach

CLI fixes:
- Use mutually exclusive group for --include-artifacts/--no-artifacts
- Add KeyboardInterrupt handling with proper exit code
- Fix import ordering (ruff I001)

Other:
- Fix gitignore substring check to use line-based matching
- Add 53 security tests (path traversal, manifest validation, lock recovery)
- Update CLAUDE.md documentation for detach feature

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 576-592: When reattaching, add the same guard used for detach to
block reattach if the agent is running: after resolving project_dir (around
get_project_path and project_dir = Path(project_dir).resolve()) check for the
agent lock file (e.g. project_dir / ".agent.lock") or call the existing helper
if present, and if it exists return False with a message like "Agent is running.
Stop the agent before reattaching." and code 0; keep this check before
has_backup and acquire_detach_lock so reattach cannot proceed while the agent is
active.
In `@start.py`:
- Around line 390-438: Wrap user input calls in handle_detach_reattach and
handle_project_detach_action with KeyboardInterrupt handling so Ctrl+C doesn't
raise a traceback; specifically catch KeyboardInterrupt around the input(...) in
handle_detach_reattach (where choice is read) and around the confirm input(...)
calls in handle_project_detach_action, then handle it gracefully by printing a
short newline/message and returning to the previous menu (or exiting the
function) instead of propagating the exception; ensure you still preserve
existing return behavior for 'b' and keep calls to detach_module.detach_project
and detach_module.reattach_project unchanged.
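A compact sketch covering both items above: guard reattach behind the agent lock, and treat Ctrl+C at a prompt as a cancel rather than a traceback. The `.agent.lock` filename is from this PR; the helper itself is illustrative:

```python
# Sketch of the two CLI fixes requested above.
from pathlib import Path

def confirm_reattach(project_dir: Path) -> bool:
    # Block reattach while the agent holds its lock, mirroring the detach path.
    if (project_dir / ".agent.lock").exists():
        print("Agent is running. Stop the agent before reattaching.")
        return False
    try:
        choice = input("Reattach project? [y/N]: ").strip().lower()
    except KeyboardInterrupt:
        # Return to the previous menu instead of propagating the exception.
        print("\nCancelled.")
        return False
    return choice == "y"
```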
- Block reattach when agent is running (check .agent.lock)
- Handle KeyboardInterrupt in CLI detach/reattach menu

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 448-455: The checksum check currently only logs a warning on
mismatch (using logger.warning) which allows restores to silently succeed;
change the behavior so that when verify_checksums is True and
compute_file_checksum(dest_path) != entry["checksum"] you log an error
(logger.error) and immediately return False from the function performing the
restore (i.e., the function containing the verify_checksums /
compute_file_checksum block), ensuring callers can detect and abort the overall
restore; also ensure any higher-level caller that expects a boolean handles this
False return appropriately.
- Around line 186-235: The lock logic in acquire_detach_lock still has a TOCTOU
window because write_text() isn't atomic; change to an atomic create-and-write
pattern: attempt to create the lock file atomically (use os.open with
O_CREAT|O_EXCL|O_WRONLY or pathlib.Path.touch(exist_ok=False)) and write the
JSON only if the atomic create succeeds, returning True; if atomic create fails
(FileExistsError), read the existing file (as you already do) to detect
stale/corrupted locks, unlink only if stale/corrupted, then retry the atomic
create once more and return True on success or False if creation still fails;
keep using DETACH_LOCK, LOCK_TIMEOUT_SECONDS and acquire_detach_lock names for
locating the code and preserve existing logging and error handling.
- Around line 425-430: Currently the code unconditionally deletes dest_path
before copying, which risks data loss if the copy fails; instead, write the copy
to a temporary path (e.g., dest_path.with_suffix or a tempfile in the same
directory) and only atomically replace dest_path after a successful copy using
an atomic rename/replace (os.replace or Path.replace), preserving behavior for
directories vs files; modify the logic around dest_path, the pre-copy removal
block and the copy block (the code surrounding dest_path.exists(), the copy
operation at lines ~436-446) to: create temp target, perform the copy to temp,
on success atomically replace existing dest_path, and cleanup temp on failure.
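Taken together, the checksum item and the atomic-replace item above suggest a restore step shaped roughly like this sketch. The function name and checksum helper are assumptions; `os.replace` is atomic for paths on the same filesystem:

```python
# Sketch combining the requested restore behavior: copy to a temp file in the
# destination directory, atomically replace, then fail hard on checksum mismatch.
import hashlib
import os
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:  # stand-in for compute_file_checksum
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_file(src: Path, dest: Path, expected: str | None) -> bool:
    fd, tmp = tempfile.mkstemp(dir=dest.parent)  # temp in same dir => same filesystem
    os.close(fd)
    try:
        shutil.copy2(src, tmp)
        os.replace(tmp, dest)  # atomic; never leaves a half-written dest
    except Exception:
        Path(tmp).unlink(missing_ok=True)  # clean up the temp file on failure
        raise
    if expected is not None and checksum(dest) != expected:
        return False  # fail the restore instead of just logging a warning
    return True
```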
🧹 Nitpick comments (3)
start.py (1)
417-456: Unused `success` variable from detach/reattach calls.
The `success` return value from `detach_module.reattach_project()` and `detach_module.detach_project()` is captured but never used to provide feedback. Consider displaying different messages based on success/failure.
♻️ Suggested improvement

```diff
         if confirm == 'y':
             print("\nReattaching...")
             success, message, _ = detach_module.reattach_project(project_name)
-            print(f"  {message}")
+            if success:
+                print(f"  ✓ {message}")
+            else:
+                print(f"  ✗ {message}")
     else:
         # Offer to detach
         print("\nThis project is attached. Autocoder files are present.")
         print("Detaching will move files to backup and allow Claude Code full access.")
         try:
             confirm = input("Detach project? [y/N]: ").strip().lower()
         except KeyboardInterrupt:
             print("\n\nCancelled.")
             return
         if confirm == 'y':
             print("\nDetaching...")
             success, message, manifest = detach_module.detach_project(project_name)
-            print(f"  {message}")
+            if success:
+                print(f"  ✓ {message}")
+            else:
+                print(f"  ✗ {message}")
```

detach.py (2)
100-106: MD5 is cryptographically weak for integrity verification.
MD5 is vulnerable to collision attacks. While this is acceptable for detecting accidental corruption, consider using SHA-256 for stronger integrity guarantees, especially since this is used for verifying file restoration.
♻️ Suggested change to SHA-256

```diff
 def compute_file_checksum(file_path: Path) -> str:
-    """Compute MD5 checksum for a file."""
-    md5 = hashlib.md5()
+    """Compute SHA-256 checksum for a file."""
+    sha256 = hashlib.sha256()
     with open(file_path, "rb") as f:
         for chunk in iter(lambda: f.read(8192), b""):
-            md5.update(chunk)
-    return md5.hexdigest()
+            sha256.update(chunk)
+    return sha256.hexdigest()
```
302-302: Hardcoded `autocoder_version` in manifest.
The version is hardcoded as `"1.0.0"`, but the PR comments mention this should ideally be dynamic. Consider reading from a central version file or `__version__` constant.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 706-711: The cleanup unconditionally deletes the backup directory
at the end of restore_pre_reattach_backup even if some files failed to restore;
update restore_pre_reattach_backup to track which entries were successfully
restored (e.g., maintain a restored_entries counter/set like in restore_backup)
and only call shutil.rmtree(backup_dir) when all expected entries were restored
or no non-traversal errors occurred; if any restore failures happen, skip
removal, log the failed entries and error details using logger (referencing
PRE_REATTACH_BACKUP_DIR, files_restored, restored_entries and the shutil.rmtree
call) so callers can inspect and retry safely.
In `@server/schemas.py`:
- Around line 73-90: Update the frontend TypeScript interfaces that mirror the
backend response schemas so they include the missing fields: add
user_files_restored: number to the DetachResponse interface, and add conflicts:
string[] and conflicts_backup_path: string | null (or optional
conflicts_backup_path?: string | null) to the ReattachResponse interface; ensure
the rest of the field types (success, files_moved/files_restored, backup_size,
backup_path, message) match the backend DetachResponse and ReattachResponse
definitions so the API responses are fully typed.
🧹 Nitpick comments (3)
detach.py (2)
46-57: Consider logging the exception in `get_autocoder_version()`.
The broad `except Exception` silently swallows all errors. While the fallback is reasonable, logging a debug message would help with troubleshooting when the version isn't being read correctly.
🔧 Suggested improvement

```diff
 def get_autocoder_version() -> str:
     """Get autocoder version from pyproject.toml, with fallback."""
     try:
         pyproject_path = Path(__file__).parent / "pyproject.toml"
         if pyproject_path.exists():
             with open(pyproject_path, "rb") as f:
                 data = tomllib.load(f)
             version = data.get("project", {}).get("version", "1.0.0")
             return str(version) if version is not None else "1.0.0"
-    except Exception:
-        pass
+    except Exception as e:
+        logger.debug("Failed to read version from pyproject.toml: %s", e)
     return "1.0.0"  # Fallback
```
84-95: Glob patterns may inadvertently match user files.
The patterns `test-*.json`, `test-*.py`, `test-*.html`, and `generate-*.py` are very broad and will match legitimate user test and generator files (e.g., `test-utils.py`, `test-helpers.json`) that exist in many projects. Since these matched files are moved to backup during detachment, users could lose their own files. Consider prefixing patterns with an Autocoder-specific identifier (e.g., `autocoder-test-*.py`) or using a subdirectory check to distinguish generated files from user files.

server/routers/projects.py (1)
607-613: Redundant agent lock check for reattach endpoint.
The router checks `.agent.lock` at lines 608-613, but `reattach_project()` in detach.py (lines 805-808) also checks the same lock. This creates redundant checks and slightly different error messages. Consider either:
- Removing the router check and relying on the module (like detach does)
- Or removing the module check and having the router be consistent with other lock checks
The inconsistency is documented in the comment for detach (line 555-556) but not mirrored here.
🔧 Suggested consistency fix
```diff
     # Check if agent is running (consistent with detach endpoint)
-    lock_file = project_dir / ".agent.lock"
-    if lock_file.exists():
-        raise HTTPException(
-            status_code=409,
-            detail="Cannot reattach while agent is running. Stop the agent first."
-        )
+    # Note: Agent lock check is handled inside reattach_project() to avoid TOCTOU race.
+    # The detach module will return an appropriate error message if agent is running.
     success, message, files_restored, conflicts = _detach_module.reattach_project(name)
```
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
ui/src/lib/types.ts (1)
13-24: ⚠️ Potential issue | 🟠 Major
ProjectDetail is missing `is_detached` — frontend requires it via inheritance, but backend doesn't provide it.
`ProjectDetail` extends `ProjectSummary` in the UI type system (which includes the required `is_detached` field), but the backend `ProjectDetail` schema in `server/schemas.py` (lines 52–59) omits it entirely. Both `GET /projects/{name}` and `PATCH /projects/{name}/settings` return `ProjectDetail` without this field, causing `is_detached` to be `undefined` at runtime despite the TypeScript type marking it as required.
Fix either by:
- Adding `is_detached` to the backend `ProjectDetail` response (retrieved from the detach module), or
- Adjusting the frontend type to make `is_detached` optional or explicitly exclude it from `ProjectDetail` (e.g., `Omit<ProjectSummary, 'is_detached'> & { prompts_dir: string }`).
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 615-668: The manifest paths in detect_conflicts and
backup_conflicts must be validated against path traversal: for each
entry["path"] first resolve project_dir_resolved = project_dir.resolve(), then
compute target = (project_dir_resolved / entry["path"]).resolve() and ensure
target.relative_to(project_dir_resolved) does not raise ValueError (skip/log and
continue if it does); in backup_conflicts also validate that the computed dest
is inside backup_dir.resolve() before creating parents or copying (and skip/log
on invalid paths). Mirror the same resolve()/relative_to() checks used in
restore_backup() and restore_pre_reattach_backup(), and reference
PRE_REATTACH_BACKUP_DIR when computing backup_dir.
- Around line 378-418: The current single try/except around phases 1–3 deletes
the backup on any failure; change to track phases and only remove the backup
when failure occurs before deletion (Phase 1 copy or Phase 2 manifest).
Concretely, wrap the copy + manifest steps (use variables/calls around
copied_files, manifest_path and MANIFEST_FILE) in their own try/except that on
exception logs and removes backup_dir, re-raising; then run Phase 3 (deleting
originals) in a separate try/except that on exception logs the error via
logger.error but does NOT delete backup_dir (so the backup is preserved) and
re-raises; ensure symlink handling, shutil.copytree, shutil.rmtree and
file_path.unlink logic remains unchanged and that exceptions from Phase 3 do not
trigger backup cleanup.
- Around line 491-588: The restore currently treats partial restores as success:
after the loop computing expected_count/restored_count it always returns True on
completion; change the final logic in the restore routine (use the
restored_entries list and manifest["files"] comparison) so that if
restored_count != expected_count you log the partial-restore warning (keep the
backup dir) and return False, files_restored, conflicts; only when
restored_count == expected_count should you remove backup_dir, log success, and
return True, files_restored, conflicts; ensure this affects the function that
reattach() relies on.
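The traversal guard the first item asks for can be as small as this sketch: resolve both sides, then let `relative_to` reject anything that escapes the project root. The helper name is an assumption for illustration:

```python
# Sketch of the path-traversal guard requested above: reject any manifest
# entry whose resolved target lands outside the project directory.
from pathlib import Path

def safe_target(project_dir: Path, entry_path: str) -> Path | None:
    root = project_dir.resolve()
    target = (root / entry_path).resolve()
    try:
        target.relative_to(root)  # raises ValueError if target escaped root
    except ValueError:
        return None  # caller should log and skip this entry
    return target
```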
🧹 Nitpick comments (1)
ui/src/lib/types.ts (1)
26-42: Align Detach/Reattach response fields with backend (currently marked optional).
Backend schemas always serialize `user_files_restored` and `conflicts` (defaults are provided), but the TS types mark them optional. This weakens client typing unnecessarily. Consider making them required to match the API contract.
💡 Suggested type tightening

```diff
 export interface DetachResponse {
   success: boolean
   files_moved: number
   backup_size: number
   backup_path: string
   message: string
-  user_files_restored?: number
+  user_files_restored: number
 }

 export interface ReattachResponse {
   success: boolean
   files_restored: number
   message: string
-  conflicts?: string[]
+  conflicts: string[]
   conflicts_backup_path?: string
 }
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 430-596: The symlink restore path in restore_backup currently
calls dest_path.unlink(), which fails if an existing destination is a directory;
update the dest_path removal logic in the src_path.is_symlink() branch of
restore_backup to mirror the directory removal used elsewhere: if
dest_path.exists() and dest_path.is_dir() and not dest_path.is_symlink() then
use shutil.rmtree(dest_path), otherwise call dest_path.unlink(); keep the
subsequent os.readlink(src_path) and dest_path.symlink_to(link_target) behavior
and ensure imports/util usage (shutil, dest_path.is_symlink) are consistent.
- Around line 83-95: The AUTOCODER_PATTERNS list is too broad and is used with
project_dir.glob(...) causing any top-level user files like test-foo.py to be
moved; update detach.py so these patterns are scoped to Autocoder-owned
directories or made explicit: either prepend directory prefixes to patterns
(e.g., ".autocoder/**/test-*.py", "prompts/**/test-*.json") or change the code
that iterates patterns (where project_dir.glob(...) is called) to iterate only
over a whitelist of Autocoder dirs (e.g., AUTOCODER_DIRS) and use
Path(...).rglob(pattern) inside those dirs; ensure AUTOCODER_PATTERNS values and
the glob invocation are kept consistent and consider adding a config toggle to
allow expanding patterns only when explicitly enabled.
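A sketch of the scoped scan the second item describes; the directory whitelist and pattern subset are illustrative placeholders, not the PR's full lists:

```python
# Sketch of scoping glob patterns to Autocoder-owned directories so a user's
# top-level test-foo.py is never swept into the backup.
from pathlib import Path

AUTOCODER_DIRS = [".autocoder", "prompts"]          # assumed whitelist
AUTOCODER_PATTERNS = ["test-*.json", "test-*.py"]   # subset for illustration

def scoped_matches(project_dir: Path) -> list[Path]:
    matches: list[Path] = []
    for dir_name in AUTOCODER_DIRS:
        base = project_dir / dir_name
        if not base.is_dir():
            continue
        for pattern in AUTOCODER_PATTERNS:
            matches.extend(base.rglob(pattern))  # rglob never leaves base
    return matches
```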
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 596-608: The restore-file path needs to remove an existing
directory at the destination before calling os.replace to avoid
IsADirectoryError; in the block that creates temp_path and is about to call
os.replace(temp_path, dest_path), add a guard that if dest_path.exists() and
dest_path.is_dir() (and not dest_path.is_symlink()), remove it (e.g.,
shutil.rmtree(dest_path)) so the atomic os.replace can succeed; reference the
temp_path, dest_path, and src_path variables and perform the directory-removal
check just before os.replace, keeping temp_path cleanup behavior unchanged.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 1082-1085: The current try/except around parser.parse_args()
swallows all SystemExit and forces a 1 exit code, which breaks correct behavior
for --help/--version; change the handler to catch SystemExit as e (around
parser.parse_args()), and re-raise (or return the original code) when e.code ==
0 so help/version still exit with 0, while preserving the existing non-zero
handling (e.g., return 1 or return e.code) for argument error exits.
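For the argparse item above, a minimal sketch that keeps `--help`/`--version` exiting 0 while mapping argument errors to a non-zero return (the CLI surface here is illustrative):

```python
# Sketch of the argparse fix requested above: preserve SystemExit(0) from
# --help/--version, convert only argument errors into a non-zero return code.
import argparse

def main(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(prog="detach")
    parser.add_argument("project", nargs="?")
    try:
        args = parser.parse_args(argv)
    except SystemExit as e:
        if e.code == 0:
            raise  # --help / --version: keep the success exit
        return int(e.code or 1)  # argument error: non-zero exit
    print(args.project)
    return 0
```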
🧹 Nitpick comments (2)
detach.py (2)
142-143: Consider adding `usedforsecurity=False` for FIPS mode compatibility.
When running on systems with FIPS mode enabled, calling `hashlib.md5()` without `usedforsecurity=False` may raise an error or warning since MD5 is not approved for security use in FIPS. Since this is used for data integrity verification (not security), explicitly marking it as such is appropriate.
♻️ Suggested fix

```diff
     if algorithm == "md5":
-        hasher = hashlib.md5()
+        hasher = hashlib.md5(usedforsecurity=False)
     else:
         hasher = hashlib.sha256()
```
432-454: Unused variable `copied_files`.
The `copied_files` list is populated during the copy phase but never used after successful completion. This appears to be dead code, possibly a remnant from a previous implementation. Consider removing it to reduce clutter.
🧹 Suggested cleanup

```diff
     # Create backup directory
     backup_dir.mkdir(parents=True, exist_ok=True)
-    copied_files: list[Path] = []
     phase = 1  # Track phase: 1=Copy, 2=Manifest, 3=Delete originals

     try:
         # Phase 1: Copy files to backup (preserves originals on failure)
         for file_path in files:
             relative_path = file_path.relative_to(project_dir)
             dest_path = backup_dir / relative_path

             # Ensure parent directory exists
             dest_path.parent.mkdir(parents=True, exist_ok=True)

             # Copy file/directory (handle symlinks explicitly)
             if file_path.is_symlink():
                 # Preserve symlinks as symlinks
                 link_target = os.readlink(file_path)
                 dest_path.symlink_to(link_target)
             elif file_path.is_dir():
                 shutil.copytree(file_path, dest_path, symlinks=True)
             else:
                 shutil.copy2(file_path, dest_path)

-            copied_files.append(file_path)
             logger.debug("Copied %s to backup", relative_path)
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 363-482: In create_backup, the manifest-building loop incorrectly
treats symlinks as directories because Path.is_dir() follows symlinks; change
the order to check file_path.is_symlink() first, and when a symlink use
file_path.lstat() (not stat()) to get size and do not call get_directory_info or
compute_file_checksum on the target; record the entry with "type": "symlink",
size from lstat().st_size, checksum or target info (e.g., readlink result) as
appropriate, and only call get_directory_info/compute_file_checksum for real
directories/files respectively so manifest_files entries accurately reflect
symlinks versus directories/files.
- Around line 184-228: The get_autocoder_files function scans
AUTOCODER_DIRECTORIES and AUTOCODER_PATTERNS but misses the include_artifacts
check during the pattern-scan loop and returns both parent directories and their
nested file entries causing delete-order errors; update the pattern-scan loop in
get_autocoder_files to skip the ".playwright-mcp" directory when
include_artifacts is False (same check used earlier: if not include_artifacts
and dir_name == ".playwright-mcp": continue) and, before returning, filter the
collected list to remove any paths that are children of an included parent
directory (i.e., drop items where any ancestor is already present in the files
list) so only top-level parent dirs remain alongside non-nested matches; use the
existing symbols AUTOCODER_DIRECTORIES, AUTOCODER_PATTERNS,
AUTOCODER_ROOT_PATTERNS and include_artifacts to locate and implement these
changes.
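The nested-entry filtering the second item calls for can look like this sketch: drop any path whose ancestor is already scheduled, so Phase 3 never unlinks something that an earlier `rmtree()` already removed. The helper name is an assumption:

```python
# Sketch of the de-duplication requested above: keep only top-level entries.
from pathlib import Path

def drop_nested(paths: list[Path]) -> list[Path]:
    keep: list[Path] = []
    for p in paths:
        # p.parents yields every ancestor of p; if any ancestor is already in
        # the list, the parent directory's backup/delete covers this entry.
        if not any(parent in paths for parent in p.parents):
            keep.append(p)
    return keep
```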
detach.py (Outdated)

```python
def get_autocoder_files(project_dir: Path, include_artifacts: bool = True) -> list[Path]:
    """
    Detect all Autocoder files in a project directory.

    Args:
        project_dir: Path to the project directory
        include_artifacts: Whether to include .playwright-mcp and other artifacts

    Returns:
        List of Path objects for Autocoder files/directories
    """
    files = []

    # Check directories
    for dir_name in AUTOCODER_DIRECTORIES:
        if not include_artifacts and dir_name == ".playwright-mcp":
            continue
        dir_path = project_dir / dir_name
        if dir_path.exists():
            files.append(dir_path)

    # Check exact files
    for file_name in AUTOCODER_FILES:
        file_path = project_dir / file_name
        if file_path.exists():
            files.append(file_path)

    # Check glob patterns ONLY in Autocoder-owned directories
    # to avoid accidentally moving user files like test-myfeature.py
    for dir_name in AUTOCODER_DIRECTORIES:
        dir_path = project_dir / dir_name
        if dir_path.exists() and dir_path.is_dir():
            for pattern in AUTOCODER_PATTERNS:
                for match in dir_path.rglob(pattern):
                    if match.exists() and match not in files:
                        files.append(match)

    # Check ROOT-safe patterns at project root level
    # These are more specific patterns to avoid false positives
    for pattern in AUTOCODER_ROOT_PATTERNS:
        for match in project_dir.glob(pattern):  # glob, not rglob - root level only
            if match.exists() and match not in files:
                files.append(match)

    return sorted(files, key=lambda p: p.name)
```
🧩 Analysis chain
🏁 Script executed:
# First, let's find and read the detach.py file
find . -name "detach.py" -type fRepository: leonvanzyl/autocoder
Length of output: 75
🏁 Script executed:
# Get the file size to determine how to read it
wc -l detach.py
Repository: leonvanzyl/autocoder
Length of output: 78
🏁 Script executed:
# Read the relevant section of the file
sed -n '150,250p' detach.py
Repository: leonvanzyl/autocoder
Length of output: 3628
🏁 Script executed:
# Also check for the constants referenced in the function
rg "AUTOCODER_DIRECTORIES|AUTOCODER_FILES|AUTOCODER_PATTERNS|AUTOCODER_ROOT_PATTERNS" -B2 -A2Repository: leonvanzyl/autocoder
Length of output: 2219
🏁 Script executed:
# Check how get_autocoder_files is used in the codebase
rg "get_autocoder_files" -B3 -A3Repository: leonvanzyl/autocoder
Length of output: 15201
🏁 Script executed:
# Search for create_backup function and see how it handles the files list
sed -n '290,400p' detach.py
Repository: leonvanzyl/autocoder
Length of output: 3579
🏁 Script executed:
# Look for where files are moved/deleted
rg "unlink|rmtree|remove" detach.py -B3 -A3Repository: leonvanzyl/autocoder
Length of output: 5348
🏁 Script executed:
# Find the test that checks include_artifacts behavior
rg "test_include_artifacts_excluded" -A 15 test_detach.pyRepository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Check what AUTOCODER_DIRECTORIES contains
sed -n '1,100p' detach.py | grep -A 10 "AUTOCODER_DIRECTORIES"
Repository: leonvanzyl/autocoder
Length of output: 710
🏁 Script executed:
# Search for include_artifacts test more broadly
rg "include_artifacts" test_detach.py -B3 -A10Repository: leonvanzyl/autocoder
Length of output: 1181
🏁 Script executed:
# Let me check if there's a test that actually tests file deletion with nested entries
rg "test_moves_files_to_backup" test_detach.py -A 20Repository: leonvanzyl/autocoder
Length of output: 1073
🏁 Script executed:
# Check if there are any test failures or if deletion is actually tested
rg "test.*backup.*delete\|test.*delete\|rmtree" test_detach.py -B2 -A5Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Find the setUp for TestCreateBackup
rg "class TestCreateBackup" test_detach.py -A 50 | head -60Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Let me check what files are created in setUp and in the tests that use get_autocoder_files
rg "class TestCreateBackup" -A 200 test_detach.py | grep -A 30 "def setUp"Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Let me see the full TestCreateBackup class
sed -n '1,1204p' test_detach.py | grep -A 100 "class TestCreateBackup"
Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Find where TestCreateBackup is defined
rg "class TestCreateBackup" test_detach.pyRepository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Get line numbers
grep -n "class TestCreateBackup" test_detach.pyRepository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Read the test setup for TestCreateBackup
awk '/class TestCreateBackup/,/def test_/' test_detach.py | head -40
Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Get the line number and surrounding context for TestCreateBackup
grep -n "class TestCreateBackup" test_detach.pyRepository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Read around that line
sed -n '620,680p' test_detach.py
Repository: leonvanzyl/autocoder
Length of output: 2481
🏁 Script executed:
# Find all test classes and line numbers
grep -n "^class Test" test_detach.pyRepository: leonvanzyl/autocoder
Length of output: 787
🏁 Script executed:
# Get TestCreateBackup's setUp
awk '/^class TestCreateBackup/,/^class [A-Z]/' test_detach.py | head -100
Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Get TestBackupCreation class (lines 260-350)
sed -n '260,350p' test_detach.py
Repository: leonvanzyl/autocoder
Length of output: 3937
🏁 Script executed:
# Check if there are any tests that create nested files with patterns in autocoder dirs
rg "test-.*\.json|test-.*\.py" test_detach.py | head -20Repository: leonvanzyl/autocoder
Length of output: 1352
🏁 Script executed:
# Let me look at the specific test that checks pattern matching with nested files
rg "test_detects_pattern_files" test_detach.py -A 15Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Find test that detects pattern files in autocoder directories
rg "test_detects.*pattern" test_detach.py -A 15Repository: leonvanzyl/autocoder
Length of output: 899
🏁 Script executed:
# Let me check create_backup phase 3 to see how deletion failures are handled
sed -n '400,450p' detach.py
Repository: leonvanzyl/autocoder
Length of output: 1871
🏁 Script executed:
# Check if there's error handling that masks the unlink failure
rg "Phase 3.*Deleting\|file_path.unlink" detach.py -B5 -A5Repository: leonvanzyl/autocoder
Length of output: 46
🏁 Script executed:
# Find Phase 3 deletion code
rg "Phase 3" detach.py -A 10Repository: leonvanzyl/autocoder
Length of output: 1022
🏁 Script executed:
# Create a simple test to verify the include_artifacts issue with nested files
python3 << 'EOF'
from pathlib import Path
import tempfile
import sys
sys.path.insert(0, '.')
import detach
# Create test structure
with tempfile.TemporaryDirectory() as tmpdir:
project_dir = Path(tmpdir)
# Create .playwright-mcp with nested files
mcp_dir = project_dir / ".playwright-mcp"
mcp_dir.mkdir()
(mcp_dir / "test-xss.php").touch()
# Also add features.db to verify it still shows up
(project_dir / "features.db").touch()
# Get files with artifacts=True
files_with = detach.get_autocoder_files(project_dir, include_artifacts=True)
names_with = {f.name for f in files_with}
print("With artifacts=True:")
print(f" .playwright-mcp in list: {'.playwright-mcp' in names_with}")
print(f" test-xss.php in list: {'test-xss.php' in names_with}")
print(f" All names: {names_with}")
# Get files with artifacts=False
files_without = detach.get_autocoder_files(project_dir, include_artifacts=False)
names_without = {f.name for f in files_without}
print("\nWith artifacts=False:")
print(f" .playwright-mcp in list: {'.playwright-mcp' in names_without}")
print(f" test-xss.php in list: {'test-xss.php' in names_without}")
print(f" All names: {names_without}")
print(f"\nBUG FOUND: test-xss.php should NOT be in list when include_artifacts=False!")
EOF
Repository: leonvanzyl/autocoder
Length of output: 458
Add include_artifacts check to pattern scanning loop and deduplicate nested entries.
The pattern scan loop (lines 209-216) iterates through all AUTOCODER_DIRECTORIES without checking include_artifacts. When include_artifacts=False, the .playwright-mcp directory is correctly excluded in the initial loop, but nested files matching AUTOCODER_PATTERNS are still added from the pattern scan.
Additionally, returning both parent directories and their nested files creates a deletion risk: if a parent (e.g., .autocoder) is deleted via shutil.rmtree() in Phase 3, the subsequent unlink() on nested children fails because they no longer exist, causing Phase 3 to raise an exception.
Apply the proposed fix:
- Add `if not include_artifacts and dir_name == ".playwright-mcp": continue` to the pattern scan loop (line 210)
- Filter the returned list to remove entries that are children of included parent directories (before the final return)
🤖 Prompt for AI Agents
In `@detach.py` around lines 184 - 228, The get_autocoder_files function scans
AUTOCODER_DIRECTORIES and AUTOCODER_PATTERNS but misses the include_artifacts
check during the pattern-scan loop and returns both parent directories and their
nested file entries causing delete-order errors; update the pattern-scan loop in
get_autocoder_files to skip the ".playwright-mcp" directory when
include_artifacts is False (same check used earlier: if not include_artifacts
and dir_name == ".playwright-mcp": continue) and, before returning, filter the
collected list to remove any paths that are children of an included parent
directory (i.e., drop items where any ancestor is already present in the files
list) so only top-level parent dirs remain alongside non-nested matches; use the
existing symbols AUTOCODER_DIRECTORIES, AUTOCODER_PATTERNS,
AUTOCODER_ROOT_PATTERNS and include_artifacts to locate and implement these
changes.
```python
def create_backup(
    project_dir: Path,
    project_name: str,
    files: list[Path],
    dry_run: bool = False
) -> Manifest:
    """
    Create backup of Autocoder files.

    Uses copy-then-delete approach to prevent data loss on partial failures.

    Args:
        project_dir: Path to project directory
        project_name: Name of the project
        files: List of files/directories to backup
        dry_run: If True, only simulate the operation

    Returns:
        Manifest describing the backup
    """
    backup_dir = project_dir / BACKUP_DIR

    # Build manifest
    manifest_files: list[FileEntry] = []
    total_size = 0
    total_file_count = 0

    for file_path in files:
        relative_path = file_path.relative_to(project_dir)

        if file_path.is_dir():
            size, count = get_directory_info(file_path)
            manifest_files.append({
                "path": str(relative_path),
                "type": "directory",
                "size": size,
                "checksum": None,
                "file_count": count,
            })
            total_size += size
            total_file_count += count
        else:
            size = file_path.stat().st_size
            checksum = compute_file_checksum(file_path) if not dry_run else "dry-run"
            manifest_files.append({
                "path": str(relative_path),
                "type": "file",
                "size": size,
                "checksum": checksum,
                "file_count": None,
            })
            total_size += size
            total_file_count += 1

    manifest: Manifest = {
        "version": MANIFEST_VERSION,
        "detached_at": datetime.now(timezone.utc).isoformat(),
        "project_name": project_name,
        "autocoder_version": get_autocoder_version(),
        "files": manifest_files,
        "total_size_bytes": total_size,
        "file_count": total_file_count,
    }

    if dry_run:
        return manifest

    # Create backup directory
    backup_dir.mkdir(parents=True, exist_ok=True)
    phase = 1  # Track phase: 1=Copy, 2=Manifest, 3=Delete originals

    try:
        # Phase 1: Copy files to backup (preserves originals on failure)
        for file_path in files:
            relative_path = file_path.relative_to(project_dir)
            dest_path = backup_dir / relative_path

            # Ensure parent directory exists
            dest_path.parent.mkdir(parents=True, exist_ok=True)

            # Copy file/directory (handle symlinks explicitly)
            if file_path.is_symlink():
                # Preserve symlinks as symlinks
                link_target = os.readlink(file_path)
                dest_path.symlink_to(link_target)
            elif file_path.is_dir():
                shutil.copytree(file_path, dest_path, symlinks=True)
            else:
                shutil.copy2(file_path, dest_path)

            logger.debug("Copied %s to backup", relative_path)

        # Phase 2: Write manifest (before deleting originals)
        phase = 2
        manifest_path = backup_dir / MANIFEST_FILE
        with open(manifest_path, "w", encoding="utf-8") as f:
            json.dump(manifest, f, indent=2)

        # Phase 3: Delete originals (only after successful copy + manifest)
        phase = 3
        logger.debug("Phase 3: Deleting %d original files", len(files))
        for file_path in files:
            if file_path.is_dir() and not file_path.is_symlink():
                shutil.rmtree(file_path)
            else:
                file_path.unlink()
            logger.debug("Removed original: %s", file_path.relative_to(project_dir))

    except Exception as e:
        # Cleanup partial backup on failure - but only for Phase 1/2
        # Phase 3 failure means backup is valid, keep it for recovery
        if phase < 3:
            logger.error("Backup failed in phase %d: %s - cleaning up partial backup", phase, e)
            if backup_dir.exists():
                shutil.rmtree(backup_dir)
        else:
            logger.error("Delete originals failed in phase 3: %s - backup preserved for recovery", e)
        raise

    return manifest
```
🧩 Analysis chain
🏁 Script executed:
cd detach.py 2>/dev/null || cd . && find . -name "detach.py" -type f
Repository: leonvanzyl/autocoder
Length of output: 75
🏁 Script executed:
# First, let's locate detach.py and examine the full context
find . -name "detach.py" -type f -exec head -500 {} +Repository: leonvanzyl/autocoder
Length of output: 16413
🏁 Script executed:
# Look for the get_directory_info and compute_file_checksum functions
rg -n "def get_directory_info|def compute_file_checksum" -A 5Repository: leonvanzyl/autocoder
Length of output: 700
🏁 Script executed:
python3 << 'EOF'
from pathlib import Path
import tempfile
import os
# Test Path.is_dir() and is_symlink() behavior with symlinks
with tempfile.TemporaryDirectory() as tmpdir:
tmpdir_path = Path(tmpdir)
# Create a real directory with a file
real_dir = tmpdir_path / "real_directory"
real_dir.mkdir()
(real_dir / "file.txt").write_text("test")
# Create a symlink to that directory
symlink_to_dir = tmpdir_path / "symlink_to_dir"
symlink_to_dir.symlink_to(real_dir)
# Create a symlink to a file
real_file = tmpdir_path / "real_file.txt"
real_file.write_text("test file")
symlink_to_file = tmpdir_path / "symlink_to_file"
symlink_to_file.symlink_to(real_file)
print("=== Testing symlink to directory ===")
print(f"symlink_to_dir.is_symlink(): {symlink_to_dir.is_symlink()}")
print(f"symlink_to_dir.is_dir(): {symlink_to_dir.is_dir()}")
print(f"symlink_to_dir.stat().st_size: {symlink_to_dir.stat().st_size}")
print(f"symlink_to_dir.lstat().st_size: {symlink_to_dir.lstat().st_size}")
print("\n=== Testing symlink to file ===")
print(f"symlink_to_file.is_symlink(): {symlink_to_file.is_symlink()}")
print(f"symlink_to_file.is_dir(): {symlink_to_file.is_dir()}")
print(f"symlink_to_file.is_file(): {symlink_to_file.is_file()}")
print(f"symlink_to_file.stat().st_size: {symlink_to_file.stat().st_size}")
print(f"symlink_to_file.lstat().st_size: {symlink_to_file.lstat().st_size}")
print("\n=== Key finding ===")
print("Explanation:")
print("- is_dir() follows symlinks: True for symlink pointing to dir")
print("- is_symlink() checks path itself: True for any symlink")
print("- stat() follows symlinks: size of target")
print("- lstat() doesn't follow symlinks: size of symlink itself")
EOF
Repository: leonvanzyl/autocoder
Length of output: 687
🏁 Script executed:
# Now check how get_directory_info would behave with a symlink
python3 << 'EOF'
from pathlib import Path
import tempfile
# Simulate the current get_directory_info logic
def get_directory_info(dir_path: Path) -> tuple[int, int]:
"""Get total size and file count for a directory."""
total_size = 0
file_count = 0
for item in dir_path.rglob("*"):
if item.is_file():
total_size += item.stat().st_size
file_count += 1
return total_size, file_count
with tempfile.TemporaryDirectory() as tmpdir:
tmpdir_path = Path(tmpdir)
# Create a directory with multiple files
real_dir = tmpdir_path / "real_directory"
real_dir.mkdir()
for i in range(3):
(real_dir / f"file{i}.txt").write_text(f"content {i}")
# Create a symlink to that directory
symlink_to_dir = tmpdir_path / "symlink_to_dir"
symlink_to_dir.symlink_to(real_dir)
print("=== Testing current manifest building logic ===")
print(f"symlink_to_dir.is_dir(): {symlink_to_dir.is_dir()}")
# This is what the current code would do
if symlink_to_dir.is_dir():
print("Would call get_directory_info() on symlink_to_dir")
size, count = get_directory_info(symlink_to_dir)
print(f"Result: size={size}, count={count}")
print("Problem: Symlink treated as real directory, traverses target content!")
print(f"\nWhat should happen:")
print(f"symlink_to_dir.is_symlink(): {symlink_to_dir.is_symlink()}")
print(f"symlink_to_dir.lstat().st_size: {symlink_to_dir.lstat().st_size}")
EOF
Repository: leonvanzyl/autocoder
Length of output: 376
🏁 Script executed:
file detach.py | head -1
Repository: leonvanzyl/autocoder
Length of output: 106
Fix symlink handling in manifest building to match copy phase logic.
Path.is_dir() follows symlinks, so a symlink pointing to a directory incorrectly triggers directory traversal and mislabels the entry. The manifest will contain wrong file counts/sizes and may traverse outside the project scope. Check is_symlink() first before checking is_dir(), and record symlinks with lstat() (not stat()) to get symlink size without following the target.
Fix
for file_path in files:
relative_path = file_path.relative_to(project_dir)
- if file_path.is_dir():
+ if file_path.is_symlink():
+ size = file_path.lstat().st_size
+ manifest_files.append({
+ "path": str(relative_path),
+ "type": "file",
+ "size": size,
+ "checksum": None,
+ "file_count": None,
+ })
+ total_size += size
+ total_file_count += 1
+ elif file_path.is_dir():
size, count = get_directory_info(file_path)
manifest_files.append({
"path": str(relative_path),
"type": "directory",
"size": size,
"checksum": None,
"file_count": count,
})
total_size += size
total_file_count += count
else:
size = file_path.stat().st_size
checksum = compute_file_checksum(file_path) if not dry_run else "dry-run"
            manifest_files.append({

🤖 Prompt for AI Agents
In `@detach.py` around lines 363 - 482, In create_backup, the manifest-building
loop incorrectly treats symlinks as directories because Path.is_dir() follows
symlinks; change the order to check file_path.is_symlink() first, and when a
symlink use file_path.lstat() (not stat()) to get size and do not call
get_directory_info or compute_file_checksum on the target; record the entry with
"type": "symlink", size from lstat().st_size, checksum or target info (e.g.,
readlink result) as appropriate, and only call
get_directory_info/compute_file_checksum for real directories/files respectively
so manifest_files entries accurately reflect symlinks versus directories/files.
a71f9bd to d6befed (force-pushed)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@detach.py`:
- Around line 15-27: The top-level import block in detach.py is unsorted and
triggers Ruff I001; reorder the imports at the top of the file so standard
library imports are grouped first and alphabetized (e.g., argparse, datetime,
hashlib, json, logging, os, shutil, sys, tempfile, tomllib, from pathlib import
Path, from datetime import datetime, timezone), followed by any third-party and
local imports, and ensure relative imports remain last; update the import
ordering in the existing import block (the lines with argparse, hashlib, json,
logging, os, shutil, sys, tempfile, tomllib, datetime/timezone, pathlib.Path,
typing.TypedDict) to be alphabetized and grouped correctly.
In `@server/routers/review.py`:
- Around line 145-157: The get_project_dir function currently returns the
registry path from get_project_path without verifying it exists; update
get_project_dir to validate the returned registry path by converting
project_path to a Path and checking .exists() and .is_dir() (same checks used
for the direct-path branch) before returning it; if the registry path is missing
or not a directory, fall through to the direct-path check and ultimately raise
the HTTPException if neither branch yields a valid directory.
- Around line 165-233: The run_code_review async endpoint currently calls
blocking CPU/file I/O functions (ReviewAgent.review and agent.save_report)
directly; change this and the other listed endpoints (/reports/{project_name},
/reports/{project_name}/{filename}, /create-features, /delete/{filename}) to
synchronous endpoints (convert async def run_code_review(...) to def
run_code_review(...), and likewise for the other handlers) so FastAPI will
offload them to the threadpool, or alternatively keep async but wrap blocking
calls with fastapi.concurrency.run_in_threadpool(); specifically update
run_code_review to call ReviewAgent.review and agent.save_report from a worker
thread (or make the handler def), update report-listing handlers to perform
glob/open/json.load synchronously (or via run_in_threadpool), update
create-features to run SQLAlchemy session.query/add/commit off the event loop,
and update delete handler to call Path.unlink from a thread — also add/adjust
any necessary imports (run_in_threadpool) and remove/await keywords as
appropriate.
In `@test_detach.py`:
- Around line 1586-1590: The imports in the try block are alphabetically
unsorted; swap the import order so HTTPException is imported before
validate_project_not_detached. Modify the try block to import HTTPException from
fastapi first, then import validate_project_not_detached from
server.dependencies, keeping the same exception handling and skipTest behavior.
🧹 Nitpick comments (1)
test_detach.py (1)
351-427: Consider adding checksum verification tests.
The PR review comments noted that "MD5 checksums computed but never verified on restore." While the manifest includes checksums (tested in `TestBackupCreation.test_manifest_contains_checksums`), there's no test verifying that `restore_backup` validates checksums before completing the restore. Consider adding a test that tampers with a backed-up file and verifies the restore detects the corruption.
💡 Suggested test for checksum verification

```python
def test_restore_detects_corrupted_file(self):
    """Should detect file corruption via checksum mismatch."""
    backup_dir = self.project_dir / detach.BACKUP_DIR
    # Corrupt a backed-up file
    (backup_dir / "features.db").write_bytes(b"corrupted content")
    success, files_restored, conflicts = detach.restore_backup(self.project_dir)
    # If checksums are verified, this should fail or warn
    # Adjust assertion based on expected behavior
```
```python
import argparse
import hashlib
import json
import logging
import os
import shutil
import sys
import tempfile
import tomllib
from datetime import datetime, timezone
from pathlib import Path
from typing import TypedDict
```
Fix import order to satisfy Ruff I001.
CI reports unsorted imports. Please reorder the stdlib imports to pass the lint step.
🧹 Suggested ordering
-import argparse
-import hashlib
-import json
-import logging
-import os
-import shutil
-import sys
-import tempfile
-import tomllib
-from datetime import datetime, timezone
-from pathlib import Path
-from typing import TypedDict
+import argparse
+from datetime import datetime, timezone
+import hashlib
+import json
+import logging
+import os
+from pathlib import Path
+import shutil
+import sys
+import tempfile
+import tomllib
+from typing import TypedDict

🤖 Prompt for AI Agents
In `@detach.py` around lines 15 - 27, The top-level import block in detach.py is
unsorted and triggers Ruff I001; reorder the imports at the top of the file so
standard library imports are grouped first and alphabetized (e.g., argparse,
datetime, hashlib, json, logging, os, shutil, sys, tempfile, tomllib, from
pathlib import Path, from datetime import datetime, timezone), followed by any
third-party and local imports, and ensure relative imports remain last; update
the import ordering in the existing import block (the lines with argparse,
hashlib, json, logging, os, shutil, sys, tempfile, tomllib, datetime/timezone,
pathlib.Path, typing.TypedDict) to be alphabetized and grouped correctly.
```python
try:
    from server.dependencies import validate_project_not_detached
    from fastapi import HTTPException
except ImportError:
    self.skipTest("Server dependencies not available")
```
Fix the unsorted imports flagged by the CI pipeline.
The Ruff linter (I001) has flagged that the imports are unsorted. The HTTPException import should come before validate_project_not_detached alphabetically.
🔧 Proposed fix for import ordering
try:
- from server.dependencies import validate_project_not_detached
from fastapi import HTTPException
+ from server.dependencies import validate_project_not_detached
except ImportError:
self.skipTest("Server dependencies not available")

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
try:
    from fastapi import HTTPException
    from server.dependencies import validate_project_not_detached
except ImportError:
    self.skipTest("Server dependencies not available")
```
🧰 Tools
🪛 GitHub Actions: CI
[error] 1587-1589: I001 Import block is unsorted or unformatted. Organize imports. Found 1 error. This can be fixed with 'ruff --fix'.
🤖 Prompt for AI Agents
In `@test_detach.py` around lines 1586 - 1590, The imports in the try block are
alphabetically unsorted; swap the import order so HTTPException is imported
before validate_project_not_detached. Modify the try block to import
HTTPException from fastapi first, then import validate_project_not_detached from
server.dependencies, keeping the same exception handling and skipTest behavior.
…geAI#137)

- Added spec_path parameter to create_agent_spec() CRUD function
- Updated API router responses to include spec_path in all 3 endpoints (create, get, update)
- Database migration, model column, and Pydantic schemas were added in prior commits
- Verified all 5 feature steps pass end-to-end

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add ability to detach projects (remove AutoForge tracking files while backing them up) and reattach them later. Includes bidirectional file backup for conflict handling, schedule disabling for detached projects, and comprehensive test coverage.

New files: detach.py, test_detach.py, server/dependencies.py, server/routers/review.py
Modified: server routers (agent, features, projects, schedules), schemas, UI components, start.py, CLAUDE.md, pyproject.toml
Update detach.py and test_detach.py to use .autoforge naming convention:
- BACKUP_DIR: .autocoder-backup -> .autoforge-backup
- DETACH_LOCK: .autocoder-detach.lock -> .autoforge-detach.lock
- Variable names: AUTOCODER_* -> AUTOFORGE_*
- Functions: get_autocoder_* -> get_autoforge_*
- Keep .autocoder in directory set as legacy fallback
def8421 to cc7fc1b (force-pushed)
Remove unused `get_project_path` imports from features.py and schedules.py (ruff F401).
Add symlink detection in detach manifest creation to prevent is_dir() from following symlinks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- Detach moves Autocoder files to a backup (`.autocoder-backup/`) so Claude Code can run without restrictions

Changes

Backend
- `detach.py` module with core detach/reattach logic
- `test_detach.py` with 37 unit tests (all passing)
- New endpoints: `POST /detach`, `POST /reattach`, `GET /detach-status`
- `is_detached` field added to the `ProjectSummary` schema

CLI
- `[3] Detach/Reattach` option added to the `start.py` menu

Frontend

Test plan
- `python detach.py <project-name>`
- `python detach.py --reattach <project-name>`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
UI
API / Backend
Tests
Docs