Decades of Optimization Research in a Click

How OASIS Works:

OASIS explores the full range of design possibilities, using intelligent sampling to pinpoint the regions that contain the most promising options, regardless of design complexity.

OASIS’ techniques yield the best designs found in less time, while generating meaningful data that helps users better understand their models.

[Figure: OASIS Optimization Core Explained]

The Design Iteration Loop

OASIS uses machine learning to build a working model of your design, which enables it to make educated guesses about how to improve it, while knowing nothing about that model except the names of its variables. The process looks like this:

  • The loop begins at the “Sample” stage by creating a few experimental designs with your CAE model.

  • Those experimental designs are simulated and their outputs are captured.

  • At the “Record” stage, the captured outputs are recorded in a table of performance results, indexed by the design that produced each result.

  • At the “Internalize” stage, the Optimization Engine builds meta-models and response surfaces from the dataset recorded so far.

  • At the “Refine” stage, the optimizer generates new designs intended to expand its understanding of your model.

  • At the next “Sample” stage, the Optimization Engine generates points intended to improve the design based on its working knowledge, and the loop starts again.
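The stages above can be sketched in miniature. The following Python snippet is a drastically simplified, hypothetical stand-in for the loop, not OASIS itself: `simulate` replaces a real CAE run, and shrinking the sampling bounds around the current best design stands in for the meta-model-driven Internalize and Refine stages.

```python
import random

def simulate(x):
    """Hypothetical stand-in for an expensive CAE simulation (1-D model)."""
    return (x - 3.0) ** 2 + 1.0

def design_iteration_loop(iterations=5, samples_per_iter=4, lo=0.0, hi=10.0):
    random.seed(0)
    records = []  # "Record" stage: (design, performance) pairs
    for _ in range(iterations):
        # Sample: create a few experimental designs within the current bounds.
        candidates = [random.uniform(lo, hi) for _ in range(samples_per_iter)]
        # Run each design through the simulation and record its output,
        # indexed by the design that produced it.
        for x in candidates:
            records.append((x, simulate(x)))
        # Internalize + Refine (crudely): instead of building a real
        # meta-model, narrow the sampling window around the best design yet.
        best_x, _ = min(records, key=lambda r: r[1])
        lo, hi = max(0.0, best_x - 2.0), min(10.0, best_x + 2.0)
    return min(records, key=lambda r: r[1])

best_design, best_performance = design_iteration_loop()
```

Each pass through the loop concentrates sampling where earlier results were best, which is the same exploit-while-exploring pattern the real Optimization Engine follows with its meta-models.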

Included Algorithms:

  • Single Objective Global Optimization (SOGO): solving global optimization problems with one objective and many inexpensive constraints

  • Multi-Objective Global Optimization (MOGO): solving global optimization problems with more than one objective and inexpensive constraints

  • SOGO for Constrained Problems (SOGO-C): solving global optimization problems with expensive constraints and/or tightly-constrained search spaces
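To make the inexpensive/expensive distinction concrete, here is a hypothetical Python sketch (the functions, values, and units are illustrative, not part of OASIS): an inexpensive constraint is a closed-form expression that can prune candidate designs essentially for free, while an expensive constraint would require a full simulation run, which is the case SOGO-C targets.

```python
def mass(x):
    # Inexpensive constraint: a closed-form expression, essentially free
    # to evaluate, so candidates can be screened before any simulation.
    return 2.0 * x[0] + 3.0 * x[1]        # kg, hypothetical

def max_stress(x):
    # Expensive constraint: in practice this would require a full FEA run;
    # this analytic expression is only a stand-in for that simulation.
    return 100.0 / (x[0] * x[1] + 1e-9)   # MPa, hypothetical

def is_cheaply_feasible(x, mass_limit=20.0):
    # Inexpensive constraints can reject a design without simulating it.
    return mass(x) <= mass_limit
```

When every constraint is as cheap as `mass`, SOGO or MOGO applies; when feasibility depends on quantities like `max_stress` that only a simulation can provide, SOGO-C is the intended fit.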

All three OASIS algorithms share common features:

  1. Solves linear/nonlinear, discrete/continuous, and unimodal/multimodal problems.

  2. Superior performance from small-scale (fewer than 10 variables) to large-scale problems.

  3. Direct integration with external analysis or simulation; no equations necessary.

  4. No algorithm picking.

  5. No algorithm parameter tuning.

  6. Effective optimization with fewer simulation calls.
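Direct integration with an external tool (feature 3) amounts to treating the solver as a black box that maps a design to a result. A minimal sketch, assuming a command-line solver; here a tiny Python one-liner stands in for a real CAE executable, and the design and output format are hypothetical:

```python
import subprocess
import sys

def run_external_simulation(design):
    """Evaluate a design by invoking an external process and parsing its output."""
    x, y = design
    # Stand-in external solver: in practice this command line would launch
    # your CAE tool with the design variables written to its input deck.
    cmd = [sys.executable, "-c",
           f"print(({x} - 1.0)**2 + ({y} - 2.0)**2)"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # Parse the single numeric response from the solver's stdout.
    return float(result.stdout)
```

Because the optimizer only sees designs in and numbers out, no analytic equations describing the model are ever required.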