Add OptimizationOrchestrator for hardware-guided kernel optimization #92
Open
kaiming-cheng wants to merge 39 commits into main from
Conversation
Consolidates previous kernel_benchmark.py and pytorch_benchmark.py into a streamlined 3-file architecture with clear separation of concerns.

Architecture:
- benchmark.py (299 lines): Main Benchmark class with simplified API
  - benchmark_kernel(): Always uses subprocess for crash protection
  - benchmark_pytorch(): Always uses direct mode for stable code
  - BenchmarkLockManager: GPU lock management for multi-worker scenarios
- timing.py (437 lines): Complete timing infrastructure
  - Timing: time_with_cuda_events(), time_with_triton_do_bench()
  - Loading: prepare_pytorch_model(), load_kernel_function()
  - Stats: compute_timing_stats() with essential metrics (mean/std/min/max)
- kernel_subprocess.py (442 lines): Subprocess runner for kernel isolation
  - Crash protection for potentially buggy kernels
  - Clean CUDA state between runs
  - Timeout handling

Key improvements:
- Eliminated string code generation (was generating Python as strings)
- Removed unnecessary statistics (median, p25/p75/p95/p99)
- Removed confusing use_subprocess parameter (behavior now deterministic)
- Fixed dtype bug causing incorrect speedup measurements
- Reduced from 5 files to 3 files with clearer naming
- Code reduction: ~1,400 lines → 1,178 lines

Simple API:

```python
bench = Benchmark(logger, temp_dir, lock, worker_id)
pytorch_result = bench.benchmark_pytorch(problem_file)
kernel_result = bench.benchmark_kernel(kernel_file, problem_file)
speedup = pytorch_result['stats']['mean'] / kernel_result['time_ms']
```
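For reference, here is a minimal sketch of the CUDA-event timing pattern that helpers like time_with_cuda_events() and compute_timing_stats() typically wrap. The function names come from the description above; the signatures and bodies below are illustrative assumptions, not the PR's implementation.

```python
import statistics
import torch

def time_with_cuda_events(fn, warmup=10, iters=100):
    """Time a CUDA callable with torch.cuda.Event; returns per-iteration times in ms."""
    # Warm up so compilation and caching do not skew the measurement.
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()

    times_ms = []
    for _ in range(iters):
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        fn()
        end.record()
        torch.cuda.synchronize()  # wait for the recorded events to complete
        times_ms.append(start.elapsed_time(end))  # elapsed_time() returns milliseconds
    return times_ms

def compute_timing_stats(times_ms):
    """Essential metrics only (mean/std/min/max), as described above."""
    return {
        "mean": statistics.fmean(times_ms),
        "std": statistics.pstdev(times_ms),
        "min": min(times_ms),
        "max": max(times_ms),
    }
```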
added 2 commits
February 1, 2026 23:42
Jack-Khuu approved these changes Feb 6, 2026

Jack-Khuu (Contributor) left a comment:
Reviewed just OptOrchest since I assume that is the main delta.
```python
from kernel_perf_agent.kernel_opt.roofline.ncu_roofline import RooflineAnalyzer
from triton_kernel_agent.prompt_manager import PromptManager
from triton_kernel_agent.worker import VerificationWorker
from triton_kernel_agent.worker_util import (
```
Contributor
Ditto, this file doesn't exist anymore.
Comment on lines +64 to +65

```python
# Fallback: return first kernel if no Triton kernel found
return next(iter(ncu_metrics.values()), {})
```
Contributor
Why would we return the first kernel if there are no Triton kernels? This seems like unexpected behavior?
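For context, a hypothetical reconstruction of the selection logic around the quoted fallback, assuming ncu_metrics maps kernel names to per-kernel metric dicts; only the two quoted lines are from the PR, the surrounding function is illustrative.

```python
def select_triton_kernel_metrics(ncu_metrics: dict) -> dict:
    # Prefer a kernel whose name marks it as Triton-generated (illustrative heuristic).
    for name, metrics in ncu_metrics.items():
        if "triton" in name.lower():
            return metrics
    # Fallback: return first kernel if no Triton kernel found
    # (the behavior questioned above; yields {} when ncu_metrics is empty)
    return next(iter(ncu_metrics.values()), {})
```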
Comment on lines +355 to +362

```python
if self.pytorch_baseline_time is not None:
    pytorch_baseline_time = self.pytorch_baseline_time
    if pytorch_baseline_time != float("inf"):
        self.logger.info(
            f"📊 PyTorch baseline: {pytorch_baseline_time:.4f} ms (pre-computed)"
        )
else:
    pytorch_baseline_time = None
```
Contributor
Suggested change

```diff
-if self.pytorch_baseline_time is not None:
-    pytorch_baseline_time = self.pytorch_baseline_time
-    if pytorch_baseline_time != float("inf"):
-        self.logger.info(
-            f"📊 PyTorch baseline: {pytorch_baseline_time:.4f} ms (pre-computed)"
-        )
-else:
-    pytorch_baseline_time = None
+if self.pytorch_baseline_time is not None and self.pytorch_baseline_time != float("inf"):
+    self.logger.info(
+        f"📊 PyTorch baseline: {pytorch_baseline_time:.4f} ms (pre-computed)"
+    )
+else:
+    pytorch_baseline_time = None
```
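A hypothetical usage sketch (not from the PR) of how a None baseline from the branch above might be consumed, reusing the Benchmark API shown in the PR description; benchmark_pytorch(), benchmark_kernel(), and the 'stats'/'mean'/'time_ms' keys come from that description, while the surrounding control flow is assumed.

```python
# Reuse the pre-computed baseline when valid; otherwise measure it now.
if pytorch_baseline_time is None:
    pytorch_result = bench.benchmark_pytorch(problem_file)
    pytorch_baseline_time = pytorch_result['stats']['mean']

kernel_result = bench.benchmark_kernel(kernel_file, problem_file)
speedup = pytorch_baseline_time / kernel_result['time_ms']
```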
Summary: