Multiobjective functionality for Brownie Bee #363

dk-teknologisk-mon merged 16 commits into develop from …
Conversation
- …ltiobjective sampling is reproducible
- …of 4 as the number of simulated Pareto front points
- …nt, in the sense of the point closest to the ideal solution in normalized objective coordinates
SRFU-NN left a comment:
Nice work on the reproducibility.
I have some comments on the functionality. They are mostly to make sure that we are clear about what we want to do, and that we do that.
- …ependency plots of the given model
- …ated points in the front to be adjusted
Updated this pull request with a third function. Among other things, this gives us more freedom in how we present plotting information, and it will also make it easier for us to create overlapping 1D plots in multiobjective optimization mode (instead of the double 1D plots in my mock-up above). The intent is for this function to be used in conjunction with the others:

```python
# A multiobjective optimizer is fitted
front_x_data, front_y_data, obj1_error, obj2_error = get_Brownie_Bee_Pareto(opt)
best_idx = get_Pareto_front_compromise(front_y_data)
# Some more code that generates the Pareto plot

# Get 1D graphs at the highlighted point on the Pareto front
obj1_1D_data = get_Brownie_Bee_1d_plot(result_list[0], x_eval=front_x_data[best_idx])
obj2_1D_data = get_Brownie_Bee_1d_plot(result_list[1], x_eval=front_x_data[best_idx])
# Some more code that generates overlapping 1D dependency graphs at the highlighted point
```
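For reference, the compromise selection described in the commit messages (the front point closest to the ideal solution in normalized objective coordinates) can be sketched as below. This is an illustrative reimplementation under assumed conventions (minimization, front passed as an `(n_points, n_objectives)` array), not the actual `get_Pareto_front_compromise` from this PR:

```python
import numpy as np


def pareto_front_compromise(front_y):
    """Index of the Pareto front point closest to the ideal point.

    front_y: (n_points, n_objectives) array of objective values,
    minimization assumed. Each objective is normalized to [0, 1]
    over the front, so the ideal solution sits at the origin.
    """
    front_y = np.asarray(front_y, dtype=float)
    span = front_y.max(axis=0) - front_y.min(axis=0)
    span[span == 0] = 1.0  # avoid division by zero for flat objectives
    normalized = (front_y - front_y.min(axis=0)) / span
    distances = np.linalg.norm(normalized, axis=1)
    return int(np.argmin(distances))
```

On a symmetric three-point front such as `[[0, 1], [0.5, 0.4], [1, 0]]`, the middle point is closest to the normalized ideal and gets selected.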
- …ownie_Bee_1d_plot
Also, I have no clue why the Python 3.9 test fails. This is the specific test that fails:

```python
def test_custom_model(sample_space):
    """Test the get_optimal_DOE function with a custom model."""
    custom_model = "x1 + x2 + x1:x2 + pow(x1, 2)"
    design, factor_names = get_optimal_DOE(
        sample_space, 5, res=5, model=custom_model, seed=42
    )
    design = np.asarray(design[:, :2], dtype=int)
    expected = np.array([[10, 5], [10, -5], [0, 5], [5, -5], [0, -5]])
    assert design.shape == (5, 2)
    assert np.all(design[:, 0] >= 0) and np.all(design[:, 0] <= 10)
    assert np.all(design[:, 1] >= -5) and np.all(design[:, 1] <= 5)
    assert factor_names == ["x1", "x2"]
    np.testing.assert_array_almost_equal(design, expected)
```

EDIT: And even more confusingly, the same test now passes...
That test is finicky; we have had problems with it before. @RuneChristensen-NN can perhaps comment? The failed run is here: https://github.com/novonordisk-research/ProcessOptimizer/actions/runs/19503686383/job/55824051220
Darn. I thought I already fixed that test. Will do the bad thing for now and just comment out the line that causes the failure, then revisit at a later point.
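One guess at the flakiness, stated as an assumption rather than something verified against the failed run: optimal-design solvers can return the same set of design points in a different row order from run to run, which would make the exact array comparison fail intermittently. A row-order-insensitive assertion would sidestep that. The helper below is a hypothetical sketch, not part of this PR:

```python
import numpy as np


def assert_same_rows(actual, expected):
    """Assert that two 2D arrays contain the same rows, ignoring row order."""
    actual = np.asarray(actual)
    expected = np.asarray(expected)
    assert actual.shape == expected.shape
    # Sort each array's rows lexicographically (first column is the
    # primary key), then compare elementwise.
    a_sorted = actual[np.lexsort(actual.T[::-1])]
    e_sorted = expected[np.lexsort(expected.T[::-1])]
    np.testing.assert_array_almost_equal(a_sorted, e_sorted)
```

In the test above, `assert_same_rows(design, expected)` could replace the failing `np.testing.assert_array_almost_equal(design, expected)` line instead of commenting it out.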
I also fixed the small bug in …; closes #357.
This pull request adds supporting functionality for multiobjective optimization in Brownie Bee. Specifically, Brownie Bee needs to be able to handle plotting of the Pareto front on its own, so I have built functions that deliver information about the Pareto front's location and its uncertainty in both objective values, plus a function that selects a default point on the front to highlight.
A side effect of doing this work is that I discovered several places where functionality related to multiobjective optimization was not reproducible. This was true both for multiobjective `opt.ask()` and for Pareto front calculations. I have therefore changed the RNG handling and seeding in a couple of places to fix this as well.
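For readers wondering what such a reproducibility fix typically looks like: the snippet below is a generic pattern, not the actual ProcessOptimizer diff. Deriving all randomness from a locally seeded generator, instead of the global NumPy state, makes repeated calls bit-identical:

```python
import numpy as np


def sample_candidates(n_points, n_dims, seed=None):
    """Draw candidate points reproducibly.

    Using np.random.default_rng(seed) rather than the global
    np.random state means two calls with the same seed return
    identical samples, regardless of what other code has done
    to the global RNG in between.
    """
    rng = np.random.default_rng(seed)
    return rng.uniform(size=(n_points, n_dims))
```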
Below follows a simple script demonstrating the intended use case that the code supports:
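The script itself did not survive in this conversation view; what follows is a minimal sketch of the intended flow using synthetic stand-in data in place of a fitted optimizer (the real arrays would come from `get_Brownie_Bee_Pareto(opt)`). All numeric values here are illustrative:

```python
import numpy as np

# Stand-ins for a fitted multiobjective optimizer's Pareto data.
n_front = 4  # the PR defaults to 4 simulated Pareto front points
front_x_data = np.array([[0.1, 0.9], [0.3, 0.6], [0.6, 0.3], [0.9, 0.1]])
front_y_data = np.array([[0.0, 1.0], [0.2, 0.5], [0.5, 0.2], [1.0, 0.0]])
obj1_error = np.full(n_front, 0.05)  # uncertainty on objective 1
obj2_error = np.full(n_front, 0.05)  # uncertainty on objective 2

# Default highlight: the front point closest to the ideal solution in
# normalized objective coordinates (what get_Pareto_front_compromise does).
norm = (front_y_data - front_y_data.min(axis=0)) / np.ptp(front_y_data, axis=0)
best_idx = int(np.argmin(np.linalg.norm(norm, axis=1)))

# In the real script these arrays feed an errorbar plot of the front,
# with front_y_data[best_idx] drawn highlighted.
print(best_idx, front_y_data[best_idx])
```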
The resulting figure looks like this:
