While simulation is one of the most powerful tools in an engineer’s toolkit, it is often a bottleneck in product development. Setting up, running, and analyzing high-fidelity simulations is time-consuming and expensive. Design teams are forced to make early decisions with limited data, which leads to missed opportunities and costly rework when designs fail validation late in the process.
Physics AI offers a fundamentally new approach: fast, accessible performance predictions that require no meshing, no cleanup, and no traditional simulation pipeline. It delivers simulation-grade insight not just to CFD experts, but to a much broader set of users. Designers can now access the expertise of CFD engineers early in the concept phase—before constraints are locked in. Engineers can rapidly explore large design spaces with high-confidence performance data at their fingertips. And programmatic optimization tools can use these models to search for and refine optimal solutions, augmenting engineering workflows with scalable, AI-driven intelligence. In short, Physics AI brings high-fidelity insight to every phase of product development.
The question is no longer if Physics AI will transform engineering workflows. It’s how we make it scalable, usable, and real.
Physics AI Model Training Workflow with Luminary’s SDK
Luminary’s Physics AI Factory: From Concept to Inference
Luminary’s Physics AI Factory is the first platform designed to make the entire Physics AI lifecycle—from data generation to real-time inference and model re-training—work as a single, coherent process.
Under the hood, the Luminary Physics AI Factory runs like a production line. You start by ingesting geometry and operating conditions, then spin up high-fidelity simulations to generate training data at scale. That data flows into automated training pipelines, where models are trained, validated against evolving test suites, versioned, and pushed into the tools where designers work. Each stage is modular—you can use built-in components or integrate your own solvers, data sources, and validation stages. Retraining loops can be automated as new test or simulation results become available.
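The production-line idea above can be sketched as an ordered list of swappable stages, each a callable that transforms a shared state. This is a minimal illustration of the modular pattern, not the actual Luminary API; all stage names and payloads are hypothetical placeholders.

```python
# Minimal sketch of a modular, stage-based pipeline. Each stage is a
# plain callable, so swapping in your own solver or validation stage
# means replacing one list entry. Everything here is illustrative.
from typing import Callable

def ingest(state: dict) -> dict:
    # Ingest geometry variants and operating conditions.
    state["runs"] = [
        {"geometry": g, "aoa_deg": a}
        for g in ("baseline", "variant_a")
        for a in (0.0, 4.0)
    ]
    return state

def simulate(state: dict) -> dict:
    # Stand-in for high-fidelity simulation: attach a dummy drag metric.
    for run in state["runs"]:
        run["drag"] = 0.30 + 0.01 * run["aoa_deg"]
    return state

def train(state: dict) -> dict:
    # Stand-in for model training on the generated data.
    state["model"] = {"version": 1, "n_samples": len(state["runs"])}
    return state

def validate(state: dict) -> dict:
    # Stand-in for validation against a test suite.
    state["validated"] = state["model"]["n_samples"] >= 4
    return state

PIPELINE: list[Callable[[dict], dict]] = [ingest, simulate, train, validate]

def run_pipeline() -> dict:
    state: dict = {}
    for stage in PIPELINE:
        state = stage(state)
    return state
```

Retraining loops then amount to re-running the same stage list whenever new data lands in the ingest stage.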
Integrating all the components into a unified platform eliminates the bottlenecks that typically plague simulation and ML workflows. This co-location of simulation, training, and inference environments is critical for efficiently managing the massive amounts of compute and data required to build industrial-scale Physics AI models.
With Luminary, engineers can generate training datasets, build and refine AI models, and deploy them directly into the software environments where design decisions happen. What would typically require months of coordination and manual setup can now be done in days or even hours.
Getting started is simple. Luminary handles every step, including geometry variation, simulation orchestration, data management, model training, and deployment. There’s no need to manage cloud infrastructure, configure ML training environments, or manually move data between systems.
Luminary’s Physics AI Factory

Training Data
The foundation of Luminary’s Physics AI platform lies in its ability to generate and assemble a massive, rich dataset to fuel model training. Synthetic data—produced through high-fidelity simulations—is a critical enabler, providing the dense, structured coverage across design and operating spaces needed to train accurate 3D surrogate models.
These datasets can be drawn from existing legacy simulations as well as from new automated runs orchestrated by the Luminary SDK, which explore geometric and physical variations in a scalable, programmatic way. Luminary has developed its own GPU-native CFD solver with market-leading performance and scalability, along with automation at every stage of the simulation workflow, including geometry healing, mesh adaptation, and scalable simulation orchestration. The platform can also ingest data from a broad range of commercial and open-source simulation tools.
Real-world measurements such as wind tunnel tests and sensor data, while invaluable for grounding the models in physical truth, are typically too sparse or narrowly distributed to support full model development on their own. Instead, they serve as validation anchors and supplementary signals within the training process. Combining Luminary‑generated synthetic data, historical simulation data, and physical measurements into a centralized training repository enables an agile and iterative model development process.
Developing an industrial-scale Physics AI model demands a massive and diverse dataset that adequately spans the relevant design space. Today’s models cannot generalize to physical phenomena they have not encountered in training.
The process begins with defining a design space sampling strategy, encompassing both geometric variations and operating conditions. Geometry variations capturing the design space of interest are created using techniques such as parameterization, surface morphing, and configuration swapping. Luminary supports all major CAD and 3D modeling design tools through interchange formats, and has additionally developed direct integrations with a growing set of these tools (e.g., Onshape, Blender, nTop, SolidWorks).

Parametric Geometry Variations for the SHIFT-SUV Physics AI Model
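Two common sampling strategies for a parametric design space can be sketched in a few lines of Python: a full-factorial sweep over discrete parameter levels, and uniform random sampling over continuous ranges. The parameter names below are hypothetical, chosen only to echo the SUV example.

```python
# Sketch of two design-space sampling strategies. Parameter names and
# ranges are hypothetical, not a real Luminary schema.
import itertools
import random

def full_factorial(levels: dict) -> list:
    """Every combination of the discrete levels for each parameter."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in itertools.product(*levels.values())]

def random_samples(ranges: dict, n: int, seed: int = 0) -> list:
    """n design points drawn uniformly from continuous parameter ranges."""
    rng = random.Random(seed)  # seeded for reproducible sampling plans
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
        for _ in range(n)
    ]

# Hypothetical SUV-style geometry parameters.
levels = {"ride_height_mm": [140, 160, 180], "spoiler_angle_deg": [0, 5, 10]}
ranges = {"ride_height_mm": (140.0, 180.0), "spoiler_angle_deg": (0.0, 10.0)}

grid = full_factorial(levels)         # 3 x 3 = 9 designs
cloud = random_samples(ranges, n=50)  # 50 random designs
```

In practice, more sophisticated space-filling schemes (e.g., Latin hypercube sampling) are often preferred, but the interface idea is the same: a sampling strategy in, a list of design points out.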
Solver parameters such as discretization order, convergence criteria, and turbulence models are then specified to ensure accuracy and robustness of the solutions. Once these inputs are provided, the platform launches solvers across multiple combinations of mesh resolutions, boundary conditions, and physics. Each simulation produces high‑fidelity field data and derived performance metrics that are automatically annotated with the input metadata. All results and associated metadata flow into a centralized dataset, versioned for traceability, ensuring that a richly varied and fully documented simulation data lake is available for AI training.
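The annotation-and-traceability idea described above can be sketched as follows: each result record carries its full input metadata, and a deterministic ID derived from those inputs makes runs deduplicable and traceable. The field names and values are illustrative, not Luminary’s actual schema.

```python
# Sketch of annotating a simulation result with its input metadata and a
# deterministic run ID. All field names and values are illustrative.
import hashlib
import json

def make_run_record(geometry_id: str, mesh: str, boundary_conditions: dict,
                    solver: dict, outputs: dict) -> dict:
    inputs = {
        "geometry_id": geometry_id,
        "mesh": mesh,
        "boundary_conditions": boundary_conditions,
        "solver": solver,
    }
    # Hash of the canonical input JSON: identical inputs always map to
    # the same run ID, supporting deduplication and traceability.
    run_id = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()[:12]
    return {"run_id": run_id, "inputs": inputs, "outputs": outputs}

record = make_run_record(
    geometry_id="suv_variant_007",
    mesh="adaptive_l2",
    boundary_conditions={"inlet_velocity_mps": 30.0, "yaw_deg": 0.0},
    solver={"order": 2, "turbulence_model": "k_omega_sst"},
    outputs={"cd": 0.342, "cl": -0.08},
)
```

Because the ID is a pure function of the inputs, re-running an identical configuration produces the same ID, while any change to mesh, boundary conditions, or solver settings yields a new one.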
Once collected, the training data must be transformed for compatibility with the chosen model architecture, a critical step in the data preprocessing pipeline. Luminary handles this with automated pipelines that remove the guesswork and manual effort from the process.
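One of the most common transforms in such a preprocessing pipeline is standardizing each field to zero mean and unit variance, while keeping the statistics so model outputs can be mapped back to physical units. This is a generic sketch, not Luminary’s preprocessing implementation.

```python
# Sketch of a standard preprocessing transform: scale a field to zero
# mean and unit variance, retaining the statistics for the inverse map.
import math

def standardize(values: list) -> tuple:
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var) or 1.0  # guard against constant fields
    scaled = [(v - mean) / std for v in values]
    return scaled, {"mean": mean, "std": std}

def destandardize(scaled: list, stats: dict) -> list:
    # Inverse transform: recover values in physical units.
    return [v * stats["std"] + stats["mean"] for v in scaled]
```

The same pattern generalizes to per-field statistics computed once over the whole training set and stored alongside the model, so inference outputs can always be converted back to physical quantities.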
Model Configuration
With training data in place, users then specify the AI model configuration settings that determine the training process, i.e., hyperparameters. Luminary has created templates for a number of applications, and we expose full control so you can experiment with which architectures and hyperparameters work best for your use case. Importantly, the connection between data storage and the model’s input pipelines is fully managed by the platform, enabling rapid iteration and experimentation without manual I/O handling or data preprocessing.
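The template-plus-override pattern described above can be sketched with a frozen dataclass: start from an application template, then change only the hyperparameters you want to experiment with. The field names and default values here are hypothetical.

```python
# Sketch of template-based model configuration with per-experiment
# overrides. Names and defaults are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TrainingConfig:
    architecture: str = "domino"
    learning_rate: float = 1e-3
    batch_size: int = 8
    epochs: int = 200
    dropout: float = 0.1

# A hypothetical template for external aerodynamics.
EXTERNAL_AERO = TrainingConfig(architecture="domino", batch_size=4)

# Override just what you want to vary; everything else is inherited.
trial = replace(EXTERNAL_AERO, learning_rate=5e-4, epochs=300)
```

Freezing the dataclass makes each configuration immutable, so a training run’s config can be logged once and trusted not to drift.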
Model Architecture
A number of model architectures have been developed for physics prediction. Luminary supports a wide range of them, including traditional convolutional and graph-based neural networks as well as recurrent networks for time-series predictions. The model architecture defines how input fields, boundary conditions, and geometric descriptors are processed to predict quantities of interest such as pressure distributions, lift and drag coefficients, or flow separation metrics.
NVIDIA has developed PhysicsNeMo DoMINO, a local, multi-scale, point-cloud-based architecture tailored for modeling large-scale physics problems such as external aerodynamics. It leverages a learned global encoding from multi-scale point clouds, incorporating signed distance functions and positional encodings to capture both short and long-range geometric dependencies for excellent scalability. DoMINO operates directly on STL geometries and predicts flow quantities—including pressure and wall shear stress on surfaces, as well as velocity and pressure fields in the surrounding volume. Designed for scalability and accuracy, DoMINO serves as a high-performance surrogate model for industrial-scale CFD applications. DoMINO is available on Luminary’s platform as a recommended architecture for external aerodynamics, and was detailed in a recent joint webinar.

NVIDIA PhysicsNeMo DoMINO FNO-based Model Architecture; source: Ranade et al., “A Decomposable Multi-scale Iterative Neural Operator for Modeling Large Scale Engineering Simulations”, 2025, https://arxiv.org/pdf/2501.13350
Hyperparameters
Beyond selecting a model architecture, engineers can configure a suite of hyperparameters that govern training dynamics and model behavior. Learning rate schedules control how quickly model weights adapt to error signals, while batch sizes and optimizer choices impact convergence stability and computational efficiency. Depending on the prediction targets—whether scalar quantities like lift or drag coefficients, surface fields such as wall pressure distributions, or full 3D volumetric fields—you can adjust network depth, receptive field, or resolution handling strategies accordingly. Additional parameters such as regularization strength, dropout rates, and hidden layer widths help balance model capacity with generalization. All training configurations are captured in a structured metadata format, ensuring that each training run remains reproducible and its performance traceable.
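As one concrete example of a training-dynamics knob, a learning-rate schedule can be written as a pure function of the step number. The sketch below implements cosine decay with a linear warmup, a common choice; the constants are hypothetical and not tied to any Luminary default.

```python
# Sketch of a cosine-decay learning-rate schedule with linear warmup.
import math

def lr_at_step(step: int, total_steps: int, base_lr: float,
               min_lr: float = 0.0, warmup_steps: int = 0) -> float:
    if warmup_steps and step < warmup_steps:
        # Linear ramp from ~0 up to base_lr over the warmup phase.
        return base_lr * (step + 1) / warmup_steps
    # Cosine decay from base_lr down to min_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

Because the schedule is a pure function of `(step, total_steps, base_lr, ...)`, it serializes naturally into the structured run metadata described above, keeping training reproducible.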
SHIFT Models
Luminary’s SHIFT models—pretrained models built on large, high-quality datasets tailored to specific applications—can be leveraged for instant momentum in your Physics AI deployment. These models—and their associated training weights—can be used to initialize subsequent training, encoding relevant physical patterns and design priors. For example, SHIFT-SUV comes pretrained on thousands of SUV geometries, allowing accelerated development of new automotive Physics AI pipelines.
You can fine-tune a SHIFT model by continuing its training on a smaller set of custom data—allowing the model to adapt to the specifics of a new design space or operating regime without starting from scratch. This significantly boosts accuracy and generalization while dramatically reducing the amount of task-specific data needed. Since all SHIFT versions are hosted on-platform, there is zero data movement required—another advantage of Luminary’s unified infrastructure.

SHIFT-SUV Physics AI Model for SUV Aerodynamics
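The fine-tuning idea can be illustrated at toy scale: continue gradient descent from “pretrained” weights that are already close to the new task, rather than starting from random initialization. Real SHIFT fine-tuning involves deep networks and far more data; the one-dimensional linear model and all numbers below are purely illustrative.

```python
# Toy illustration of fine-tuning: resume gradient descent on a tiny
# linear model y = w*x + b from pretrained weights. Purely illustrative.

def mse(w: float, b: float, data: list) -> float:
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

def fine_tune(w: float, b: float, data: list,
              lr: float = 0.01, steps: int = 1000) -> tuple:
    n = len(data)
    for _ in range(steps):
        grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / n
        grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Pretrained" weights near the new task, plus a small custom dataset
# sampled from the target relationship y = 2x + 1.
pretrained_w, pretrained_b = 1.8, 0.9
data = [(x, 2.0 * x + 1.0) for x in (0.0, 1.0, 2.0, 3.0)]
tuned_w, tuned_b = fine_tune(pretrained_w, pretrained_b, data)
```

The key point carries over to the full-scale case: because the starting weights already encode useful structure, far fewer task-specific samples and training steps are needed than when training from scratch.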
Model Training
Once the model architecture and hyperparameters are finalized, the platform initiates training on cloud-based clusters of NVIDIA’s latest GPUs, leveraging model architectures specifically designed to scale efficiently across multiple GPUs using data and model parallelism. The training process is fully orchestrated, abstracting away infrastructure complexity so you don’t need to manage provisioning, job scheduling, or hardware configuration. Behind the scenes, the system manages distributed workloads, optimizes memory usage, and balances compute to minimize training time, even for models operating on petabyte-scale datasets. Throughout training, the system logs not just the evolving model weights, but also rich metadata: loss curves, validation metrics, and parameter distributions, all captured in real-time to support analysis and debugging.
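The structured run record described above (loss curve, validation metrics, and the configuration that produced them) can be sketched as a small logger that serializes everything for later analysis. The schema is hypothetical, not Luminary’s logging format.

```python
# Sketch of a structured training-run logger. The schema is illustrative.
import json

class RunLogger:
    def __init__(self, run_id: str, config: dict):
        self.record = {
            "run_id": run_id,
            "config": config,
            "loss_curve": [],
            "val_metrics": {},
        }

    def log_loss(self, epoch: int, loss: float) -> None:
        self.record["loss_curve"].append({"epoch": epoch, "loss": loss})

    def log_val_metric(self, name: str, value: float) -> None:
        self.record["val_metrics"][name] = value

    def to_json(self) -> str:
        # Sorted keys give a stable, diff-friendly serialization.
        return json.dumps(self.record, sort_keys=True)

logger = RunLogger("run-001", {"architecture": "domino", "lr": 1e-3})
for epoch, loss in enumerate([0.90, 0.45, 0.30]):
    logger.log_loss(epoch, loss)
logger.log_val_metric("cd_mae", 0.004)
```

Capturing the config alongside the metrics is what makes a run reproducible: the record alone is enough to rerun or audit the training job.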
During and after training, users can validate model performance on new datasets—including real-world measurements or synthetic test cases—on the fly. This allows engineers to interrogate the model’s applicability beyond its training distribution, building trust and robustness before deployment.
Physics AI Model Deployment
The workflow culminates in a trained, deployment-ready Physics AI Model, securely stored in your organization’s Model Registry. Each registry entry encapsulates the model architecture, learned weights, hyperparameters, and training metadata—including dataset versions and distribution characteristics—ensuring full reproducibility and traceability. Iteration is seamless: retraining with modified input data, architecture, or hyperparameters automatically produces a new, versioned model entry in the registry. Engineers can deploy any registered model to production via Luminary’s inference API or through custom (including on-premise) integrations.
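The registry behavior described above, where every retraining produces a new immutable, auto-versioned entry, can be sketched as follows. The interface is illustrative, not Luminary’s actual Model Registry API.

```python
# Sketch of an append-only, auto-versioned model registry. Illustrative
# interface only.
import copy

class ModelRegistry:
    def __init__(self):
        self._entries = {}  # model name -> list of versioned entries

    def register(self, name: str, weights: dict, metadata: dict) -> int:
        versions = self._entries.setdefault(name, [])
        version = len(versions) + 1
        versions.append({
            "version": version,
            # Deep-copy so later mutation of the caller's dicts cannot
            # silently change a registered entry.
            "weights": copy.deepcopy(weights),
            "metadata": copy.deepcopy(metadata),
        })
        return version

    def get(self, name, version=None) -> dict:
        # Latest version by default, or a specific historical version.
        versions = self._entries[name]
        return versions[-1] if version is None else versions[version - 1]

registry = ModelRegistry()
registry.register("suv_aero", {"w": [0.1]}, {"dataset": "v1"})
registry.register("suv_aero", {"w": [0.2]}, {"dataset": "v2"})
```

Because entries are append-only, any historical model can still be fetched by version for comparison or rollback, which is what makes retraining iterations safe.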
The deployed model can be used for a variety of powerful use cases, including rapid design iteration, design exploration, optimization, controls, and digital twins. In the example below, inference was deployed in nTop for inverse design of a conceptual aircraft.
Conceptual aircraft optimization using Physics AI inference in nTop
Physics AI is Here
Luminary’s Physics AI Factory makes industrial-scale Physics AI not just possible, but practical. By integrating simulation, model training, validation, and deployment into a unified platform, it turns what was once a complex, fragmented process into something engineering teams can actually use—at scale, in production. The potential to transform product design and development is tremendous.
If you’re ready to build your first model—or scale up your deployment—Luminary’s Physics AI Factory is ready now.
Get in touch to learn more about developing a Physics AI model for your use case.