Inspur recently launched a new line for the NVIDIA OVX ecosystem. The Inspur MetaEngine is designed as the company's initial foray into Omniverse/metaverse offerings, but many of the applications are very tangible. I had the opportunity to chat with Inspur about the new platform.
Inspur MetaEngine for NVIDIA OVX
The new Inspur MetaEngine platform is a server powered by NVIDIA A40 GPUs. We have reviewed not just the NVIDIA A40, but also a number of Inspur GPU servers previously. For those wondering, the original NVIDIA OVX platform focuses on the A40 instead of the A100 because the A100 is more heavily geared toward training applications. Instead of simply focusing on the hardware, I wanted to focus on the why, and the bigger question of how this technology is being used. As such, I spoke to Gavin Wang, Senior AI Product Manager at Inspur, about the company's new offering and where the new systems are being used.
The areas where Gavin sees companies adopting the new technologies fall into a few categories. Inspur has a number of its cloud CSP customers, like Baidu and Alibaba, working to build their community platforms for the metaverse era. These large providers need to build out substantial infrastructure to host the new platforms. Being on the ground floor of a new platform has huge advantages for businesses; we saw that with search, online marketplaces, video sharing/streaming, gaming, and messaging platforms, to name a few examples. The large providers are working on metaverse platforms now.
Beyond that, there are a number of efforts around design and simulation. This can include the concept of digital twins, or replicating physical objects or environments digitally. It can also include robotics applications, which I was told are becoming very popular. A number of companies are using these platforms for design and simulation work to create next-generation products and even buildings and factories. The company's government and financial services customers are also using the platform for things like banking applications. There is even interest in these servers for NLP workloads such as text-to-speech and speech-to-text.
Inspur is not just using NVIDIA OVX/Omniverse software. It also has its own software, like Inspur AIStation, for management. There are even third-party applications that its customers are using, and Inspur is integrating a number of these so they can be used on the MetaEngine platform.
While NVIDIA made its push for OVX at GTC 2022, I asked, and Inspur already has customers and revenue for the new MetaEngine platform. Several other large OEMs we talked to had less of a clear vision on how they were bringing NVIDIA OVX platforms to market. It was quite different talking to Inspur about how it is building an OVX ecosystem for its customers with third-party software, whereas many other vendors treat OVX as simply a solution to wrap service and support contracts around.