Metal backend: test modules #17076
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17076
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 12 Cancelled Jobs, 1 Unrelated Failure as of commit 0834659 with merge base ba6de95.
NEW FAILURES - The following jobs have failed:
CANCELLED JOBS - The following jobs were cancelled. Please retry:
BROKEN TRUNK - The following jobs failed but were also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull request overview
This PR adds comprehensive test coverage for the Metal backend by introducing module-level export and runtime execution tests.
Changes:
- Adds `test_modules.py` with 27 test modules covering operations like matrix multiplication, convolution, attention (SDPA), normalization, and composite blocks (a rough sketch of this style of test follows the list)
- Adds `run_metal_test.sh`, a build script to compile the Metal runtime and run inference tests
- Extends the CI workflow with a `test-metal-modules` job that builds the Metal runtime and runs module tests
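For readers unfamiliar with this kind of module-level test, the sketch below shows the general shape of an export-plus-consistency check using plain torch.export. It is illustrative only: the module, input shapes, and tolerances are hypothetical, and it omits the Metal-specific lowering and .pte runtime execution that the PR's tests actually exercise.

```python
# Minimal sketch of a module-level export/consistency check (not the PR's code).
# The module, input shapes, and tolerances here are hypothetical, and no Metal
# lowering or .pte runtime execution is performed.
import torch
from torch.export import export


class SimpleMatmul(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return torch.matmul(x, y)


def check_export_consistency(dtype: torch.dtype) -> None:
    module = SimpleMatmul().eval()
    inputs = (torch.randn(4, 8, dtype=dtype), torch.randn(8, 16, dtype=dtype))

    eager_out = module(*inputs)
    exported_program = export(module, inputs)           # export validation step
    exported_out = exported_program.module()(*inputs)   # re-run the exported graph

    tol = 1e-2 if dtype == torch.bfloat16 else 1e-5
    assert torch.allclose(eager_out, exported_out, atol=tol, rtol=tol)


if __name__ == "__main__":
    for dtype in (torch.float32, torch.bfloat16):
        check_export_consistency(dtype)
```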
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| backends/apple/metal/tests/test_modules.py | New comprehensive test suite with 27 model modules tested across float32 and bfloat16, including export validation and runtime output consistency checks |
| backends/apple/metal/tests/run_metal_test.sh | New bash script to build Metal runtime with CMake and run inference with executor_runner binary |
| .github/workflows/metal.yml | Adds new CI job to build Metal runtime and run module tests on macos-m2-stable runner |
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
```python
EXECUTOR_RUNNER = BUILD_DIR / "executor_runner"
RUN_METAL_TEST_SCRIPT = TESTS_DIR / "run_metal_test.sh"

# Test output directory - use current working directory in CI for reliable write access
```
Copilot AI commented on Jan 31, 2026:
The comment mentions the 'aoti_debug_data' directory but doesn't explain why this specific directory name is used or what 'AOTI' stands for (Ahead-Of-Time Inference). Consider adding a comment explaining that this directory name is used for Metal backend test outputs to maintain consistency with AOTI conventions.
Suggested change:
```diff
-# Test output directory - use current working directory in CI for reliable write access
+# Test output directory - use current working directory in CI for reliable write access.
+# We use the 'aoti_debug_data' directory name here to keep Metal backend test outputs
+# consistent with Ahead-Of-Time Inference (AOTI) debug data conventions used elsewhere.
```
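As a side note on the "current working directory in CI" point, one common way such a directory choice is implemented is sketched below. This is a hypothetical illustration, not the PR's actual code; the CI environment variable and the temp-directory fallback are assumptions.

```python
# Hypothetical sketch of resolving a test output directory (not the PR's code).
# Under CI the workspace CWD is assumed writable; locally we fall back to a
# temp directory. The 'aoti_debug_data' name mirrors AOTI debug-data conventions.
import os
import tempfile
from pathlib import Path


def resolve_test_output_dir() -> Path:
    base = Path.cwd() if os.environ.get("CI") else Path(tempfile.gettempdir())
    out_dir = base / "aoti_debug_data"
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir
```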
mergennachin left a comment:
What are your thoughts on reusing the Backend Test Harness?
https://github.com/pytorch/executorch/tree/main/backends/test/suite
test_modules.py looks a lot like this:
https://github.com/pytorch/executorch/tree/main/backends/test/suite/operators
I think we should try to reuse as much as possible and, if possible, contribute back to the test harness.
Every backend has its own tests, independent of our more comprehensive testing frameworks (e.g. xnnpack, vulkan, qualcomm, arm). The intent of frameworks like FACTO or the Backend Test Harness is to find unknown issues on mature operators/backends where a certain level of coverage and correctness is expected. On the other hand, the ability to add specific tests remains useful as we develop and want to verify that a given PR, intended to enable a certain feature, indeed provides the desired functionality. It is also a minimum bar that the backend/operator should pass, and that will be enforced in CI.
Discussed offline with @manuelcandales. Agreed that we need both approaches (backend-specific unit testing and the backend test harness). As discussed, let's remove unnecessary tests that aren't testing what you're developing right now and could be enabled by the backend test harness in the future.
```bash
CMAKE_ARGS="-DEXECUTORCH_BUILD_METAL=ON \
  -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
  -DEXECUTORCH_BUILD_EXECUTOR_RUNNER=ON \
  -DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \
  -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
  -DEXECUTORCH_BUILD_EXTENSION_NAMED_DATA_MAP=ON \
  -DAOTI_METAL=ON \
  -DEXECUTORCH_LOG_LEVEL=Info \
  -DCMAKE_BUILD_TYPE=Release"
```
Why not use a preset? `cmake --workflow --preset llm-metal-stats`
If this is not LLM-specific, I'd also strongly recommend adding a base preset `metal-release` and letting `llm-metal-stats` extend it.
| echo "Running inference..." | ||
| echo " PTE: $pte_path" | ||
|
|
||
| "$EXECUTOR_RUNNER" --model_path "$pte_path" |
It would be much better to add pybind and run the tests in a pure Python environment; that way you wouldn't need this script.
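For reference, a pybind-based flow could look roughly like the sketch below. It assumes the ExecuTorch Python bindings (executorch.extension.pybindings.portable_lib) are built with the Metal backend compiled in and that a .pte file already exists; the file name and inputs are placeholders, not taken from this PR.

```python
# Rough sketch of running a .pte through the ExecuTorch pybindings instead of the
# executor_runner binary. Assumes the bindings were built with the Metal backend
# compiled in; otherwise loading the program is expected to fail.
import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch


def run_pte(pte_path: str, example_inputs: tuple):
    program = _load_for_executorch(pte_path)
    # forward() takes a sequence of input tensors and returns a list of outputs
    return program.forward(example_inputs)


if __name__ == "__main__":
    # "mm_module.pte" is a placeholder path for an exported module
    outputs = run_pte("mm_module.pte", (torch.randn(4, 8), torch.randn(8, 16)))
    print(outputs)
```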
This pull request adds support for building and testing Metal backend modules on macOS, including CI workflow automation and a dedicated test runner script. The main changes are a reusable shell script for building and running Metal backend tests, and a new GitHub Actions job that automates module testing for the Metal backend.
CI/CD and Testing Automation:
- Adds a `test-metal-backend-modules` job to the `.github/workflows/metal.yml` workflow, which runs on a macOS M2 runner, sets up the environment, builds the Metal runtime, and runs Python unit tests for the Metal backend modules.

Tooling and Scripts:
- Adds `backends/apple/metal/tests/run_metal_test.sh`, which provides commands to build the Metal runtime, check whether it is built, and run inference on given model files. This script is used in the CI workflow to automate Metal backend testing.