It has been a little over three years since ChatGPT was released to the masses. In that time, hundreds if not thousands of open-source LLMs have appeared, hundreds of billions of dollars have been invested in AI startups, and a trillion-dollar data center buildout is underway worldwide, along with a push to find the energy to power those data centers.
To put things into perspective, the entire raw training set for GPT-3 was only around 45 terabytes. And while LLMs have enabled the rise of several useful agentic applications, their size and complexity pale in comparison to the data and compute needs of applications in the design and simulation world. As an example, our SHIFT-Wing model is the largest open-source model for transonic wing design, built from over 27,000 simulation samples comprising 729 TB of simulation data and 27 TB of AI/ML data; that works out to roughly 27 GB of simulation output per sample. And that is just the tip of the iceberg. Let’s come back to the implications of massive data for Luminary later.
Prior to the advent of Physics AI, the traditional approach to designing new products like aircraft, automobiles and turbomachinery (for clarity, I am discounting purely physical prototyping, which is infeasible for everything but the simplest of products) involved a combination of slow and expensive physical prototyping, experimentation and perhaps computer-aided engineering (CAE). CAE first requires creating geometric CAD models and sophisticated meshes, followed by intensive simulation cycles and extensive evaluation of results to determine how the product behaves under real-world conditions. The design cycle demanded highly specialized design and engineering skills, enormous compute and storage, and, most importantly, weeks or months to get through a single design.
As a result, only the largest companies building the most consequential products (like rockets and airplanes) could invest in these capabilities. This is not unlike the days when mainframes were so expensive that only the biggest companies could afford them, and even they had to ration time on the mainframe for the most critical workloads in the enterprise. And just as the client-server era brought computing to the masses, Physics AI promises to bring AI-led product design to the masses.
Generative AI has transformed how we write code, build presentations and create new SaaS applications. Applying the same deep learning techniques to the design and simulation market promises not just to transform the velocity of the design process, but also to remove the skill barriers that confined design to a chosen few products and highly trained professionals. Whether it is new cars, rockets that must climb through Earth’s atmosphere, landers that must touch down at precise locations on the Moon or Mars with varying payloads, or drones that operate in air or underwater, the ability to explore multiple design alternatives with different efficiency and operating characteristics is critical to arriving at the best outcome with the least risk.
Instead of building physical wind tunnels, or even digital wind tunnels that simulate a limited set of designs, Physics AI democratizes the design process by enabling design teams to confirm the feasibility of many design options. By compressing design validation from months to minutes, it greatly expands the worldwide engineering design market. Able to test many designs in a short period of time, companies can aspire not just to ship functional products but to broaden the options available to customers in style, price points and materials. And most importantly, Physics AI enables a whole new generation of companies to incorporate AI-led design into their product development lifecycle.
All of this makes Luminary one of the most interesting deep tech companies in the AI space today. Building differentiated models backed by highly performant and accurate solvers, Luminary aims to create some of the most generalizable models for the product design industry. Composed of hardcore professionals who have dedicated their careers to understanding and applying the laws of physics to fluid dynamics, structural mechanics, thermal physics and more, the company is solving hard problems that have been outside the realm of possibility until now. Coming back to data sizes: when a single design cycle requires you to deal with more than a petabyte of valid data, and when meshing and solving each design requires hundreds of thousands of GPU cores working in tandem, you recognize that the era of truly big data and truly big compute is upon us in the physical manufacturing world. Everything we know about scale, security, encryption, compression and throughput gets magnified manifold. It is impossible to overstate the value of a scalable platform that can tackle these challenges.

Building large-scale platforms that deliver continuous value to customers is what I have excelled at in the past, and this opportunity promises to be bigger and bolder in every dimension. I couldn’t be more excited about the possibilities it affords.