This repository was archived by the owner on Apr 6, 2026. It is now read-only.


MLSys Submission Repository

Our submission for the Google Graph Scheduling Competition at MLSys 2026.

Quickstart

git clone https://github.com/ami2802/MLSys
cd MLSys

# Initialize third-party dependencies
git submodule update --init --recursive 

Single file inference

To run on a single file (output to stdout):

cargo run -- -i "./tests/testcases/example-1/input.json" -s naive

Run tests

To run example tests:

cargo test --test run_testcases

Tests are defined in ./tests/run_testcases.rs, and expect files to be placed in ./tests/testcases/test-name.

After adding a testcase directory, a new test can be defined by appending a line inside the generate_solver_tests macro in run_testcases.rs:

generate_solver_tests! {
    example_1, "example-1", naive, "always-spill.json";
    example_2, "example-2", naive, "always-spill.json";
    example_3, "example-3", naive, "spill.json";
}
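Judging from the macro rows above, each testcase directory appears to hold the input alongside the expected-output file named in its row. A hypothetical layout for the first row (the exact file set may differ):

```
tests/testcases/
  example-1/
    input.json          # problem instance passed to the solver
    always-spill.json   # expected output referenced by the macro row
```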

Solvers

You can define new solvers as separate files under the src/solvers directory, as structs with a solve impl:

pub fn solve(&self, problem: &Problem) -> Result<Solution>
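A minimal sketch of what such a solver might look like. The `Problem`, `Solution`, and `Result` definitions below are stand-ins invented for illustration; the real types live in the crate and will differ:

```rust
// Stand-in error alias -- the crate presumably has its own Result type.
pub type Result<T> = std::result::Result<T, String>;

// Hypothetical problem/solution shapes, for illustration only.
pub struct Problem {
    pub nodes: usize,
}

pub struct Solution {
    pub schedule: Vec<usize>,
}

/// A trivial baseline solver: schedules nodes in their original order.
pub struct IdentitySolver;

impl IdentitySolver {
    pub fn solve(&self, problem: &Problem) -> Result<Solution> {
        Ok(Solution {
            schedule: (0..problem.nodes).collect(),
        })
    }
}
```

The struct itself carries no state here; a real solver could hold configuration fields and use them inside solve.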

Once done, register the solver in src/solvers/mod.rs:

define_solvers! {
    Naive, "naive", naive::NaiveSolver;
}

Run benchmarks

The competition benchmarks are located in tests/benchmarks/. To run:

cargo test --release --test run_benchmarks -- --nocapture

To change the solver or benchmarks, modify the benchmarks variable in run_benchmarks.rs:

    let benchmarks = vec![
        ("benchmark-1", "dp"),
        ("benchmark-5", "dp"),
        ("benchmark-9", "dp"),
        ("benchmark-13", "dp"),
        ("benchmark-17", "dp"),
    ];

The output will appear in tests/output/ as [benchmark_name].json. Benchmarks have a standard timeout, which is currently doubled for debugging.