This directory contains the performance benchmarking setup for the IMAS Codex server using ASV (airspeed velocity).
["core_profiles", "equilibrium"]This ensures benchmarks measure tool performance, not embedding generation overhead.
Install benchmark dependencies:

```bash
make install-bench
# or manually:
uv sync --extra bench
asv machine --yes
```
Run the performance baseline:

```bash
make performance-baseline
```
Files in this directory:

- `asv.conf.json` - ASV configuration with uv integration
- `benchmarks.py` - Main benchmark suite with all MCP tool benchmarks
- `benchmark_runner.py` - Utility class for running and managing ASV benchmarks
- `performance_targets.py` - Performance targets and validation functions
- `__init__.py` - Package initialization

Run benchmarks against the current code:

```bash
make performance-current
# or:
asv run --python=3.12
```

Run a single benchmark by name:

```bash
asv run --python=3.12 -b SearchBenchmarks.time_search_imas_basic
```
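For reference, ASV determines the benchmark type from the method-name prefix: `time_*` methods report wall-clock time and `peakmem_*` methods report peak memory. A minimal illustration (the class name and body are placeholders, not the real export code):

```python
class ExportBenchmarks:
    """Sketch only: illustrates the peakmem_ naming convention."""

    def peakmem_export_ids_bulk_large(self):
        # ASV reports the peak resident memory reached while this body runs.
        return [list(range(10_000)) for _ in range(100)]  # placeholder workload
```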
Compare results against the previous commit:

```bash
make performance-compare
# or:
asv compare HEAD~1 HEAD
```
Generate the HTML report:

```bash
asv publish
# Results will be in .asv/html/
```
The suite includes the following benchmarks:

- `time_search_imas_basic` - Basic search performance
- `time_search_imas_with_ai` - Search with AI enhancement
- `time_search_imas_complex_query` - Complex query performance
- `time_search_imas_ids_filter` - Search with IDS filtering
- `peakmem_search_imas_basic` - Memory usage for basic search
- `time_explain_concept_basic` - Basic concept explanation
- `time_explain_concept_advanced` - Advanced concept explanation
- `time_analyze_ids_structure_small` - Small IDS structure analysis
- `time_analyze_ids_structure_large` - Large IDS structure analysis
- `time_export_ids_bulk_single` - Single IDS export
- `time_export_ids_bulk_multiple` - Multiple IDS export
- `time_export_ids_bulk_with_relationships` - Export with relationships
- `time_export_physics_domain` - Physics domain export
- `peakmem_export_ids_bulk_large` - Memory usage for large export
- `time_explore_relationships_depth_1` - Depth 1 relationship exploration
- `time_explore_relationships_depth_2` - Depth 2 relationship exploration
- `time_explore_relationships_depth_3` - Depth 3 relationship exploration

Current baseline targets are defined in `performance_targets.py`.
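Those targets are not reproduced here; as a rough idea of the shape such a module can take, the snippet below uses placeholder names and numbers rather than the actual targets and validation functions:

```python
# Placeholder shape only; the real values and helpers live in performance_targets.py.
PERFORMANCE_TARGETS = {
    "time_search_imas_basic": 0.5,           # seconds (placeholder)
    "peakmem_export_ids_bulk_large": 512.0,  # MB (placeholder)
}

def meets_target(benchmark: str, measured: float) -> bool:
    """Return True if the measured value is within the target for this benchmark."""
    target = PERFORMANCE_TARGETS.get(benchmark)
    return target is None or measured <= target
```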
The benchmarks can be integrated into GitHub Actions or other CI systems:
```yaml
- name: Run performance benchmarks
  run: |
    uv sync --extra bench
    asv machine --yes
    asv run --python=3.12
    asv publish
```
Useful asv commands during development:

- `asv run --quick` for faster development iterations
- `asv run --bench <pattern>` to run specific benchmark patterns
- `asv show <commit>` to view results for a specific commit
- `asv find` to find performance regressions between commits
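The `benchmark_runner.py` utility mentioned above plays a similar role programmatically; a simplified sketch of what such a wrapper can look like (class and method names are illustrative, not the actual API):

```python
import subprocess

class BenchmarkRunner:
    """Illustrative wrapper around the asv CLI, not the real benchmark_runner.py."""

    def run(self, bench_pattern: str | None = None, quick: bool = False) -> None:
        # Mirror the asv run flags shown above.
        cmd = ["asv", "run", "--python=3.12"]
        if quick:
            cmd.append("--quick")
        if bench_pattern:
            cmd += ["--bench", bench_pattern]
        subprocess.run(cmd, check=True)

    def compare(self, base: str = "HEAD~1", head: str = "HEAD") -> None:
        # Compare two commits, as in `asv compare HEAD~1 HEAD`.
        subprocess.run(["asv", "compare", base, head], check=True)
```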