The DeanBeat: Nvidia CEO Jensen Huang says AI will automatically populate the metaverse's 3D imagery

Nvidia CEO Jensen Huang said this week during a Q&A at the GTC22 online event that AI will auto-populate the metaverse's 3D imagery.

Nvidia has shown that AI can take a first pass at constructing the 3D objects that populate the vast virtual worlds of the metaverse.

Nvidia Research announced a new AI technology that could help the growing number of companies and creators building massive virtual worlds, making it easier to populate them with a variety of 3D buildings, vehicles, characters, and more.

Nvidia noted that the real world is full of variety: streets are lined with unique buildings, different vehicles pass by, and diverse crowds move through. Modeling a 3D virtual world that reflects this by hand is incredibly time-consuming.

Nvidia's Omniverse tools and cloud service aim to make developing metaverse applications easier. And auto-generating art, as we've seen with the likes of DALL-E and other AI models this year, is one way to alleviate the burden of building virtual worlds.

In a press briefing earlier this week, I asked Huang what might help the metaverse arrive faster. He pointed to Nvidia Research's work, although the company did not release the details until today.

“The metaverse is created by users,” Huang said. “And in the future, it is very likely that we'll describe some characteristic of a house or one characteristic of a city, or something like that. Or we can just keep hitting ‘enter’ until it automatically generates one we'd like to start from. And from there, from that world, we will modify it.”

Details on GET3D

Nvidia GET3D is a generative AI model trained only on 2D images. It generates 3D shapes with high-fidelity textures and complex geometric details, in the same format used by popular graphics software applications, allowing users to import the shapes into their tools for further editing.

For industries such as gaming, robotics, architecture, and social media, the generated objects may be used in 3D representations of structures, outdoor spaces, or entire cities.

GET3D can generate a virtually unlimited number of 3D shapes based on the knowledge it's trained on. Like an artist who turns a lump of clay into a detailed sculpture, the model transforms numbers into complex 3D shapes.
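As a loose illustration of that idea, here is a minimal sketch of what sampling from such a model could look like. The generator object, checkpoint name, and latent sizes are hypothetical assumptions for illustration, not Nvidia's released API; only the PyTorch and PyTorch3D calls are real.

```python
# Hypothetical sketch of GET3D-style sampling: random latent codes in,
# textured 3D meshes out. Model names and tensor shapes are assumptions.
import torch
from pytorch3d.io import save_obj  # real mesh exporter from PyTorch3D

generator = torch.load("get3d_cars.pt")  # assumed pretrained checkpoint
generator.eval()

with torch.no_grad():
    for i in range(10):
        # Assumed: one latent code controls geometry, another texture.
        z_geo = torch.randn(1, 512)
        z_tex = torch.randn(1, 512)
        vertices, faces, texture = generator(z_geo, z_tex)
        save_obj(f"car_{i:03d}.obj", vertices, faces)  # texture handled separately
```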

“It's at the core of what I was talking about a second ago, which is large language models,” he said. “And so from words, through a large language model, will come out someday, triangles, geometry, textures, and materials. And so all of this simulation of physics and all of the light simulation must be done in real time. That's why the latest technologies that we're developing with respect to RTX neural rendering are so important.”

For example, trained on 2D images of cars, it creates a large collection of sedans, trucks, race cars, and vans. Trained on other categories, it produces swivel chairs, dining chairs, and cozy recliners, as well as animals.

Sanja Fidler, Nvidia's vice president of AI research and a leader of the Toronto-based AI lab that created the tool, said: “Its ability to instantly generate textured 3D shapes might be a game-changer for developers, helping them quickly populate virtual worlds with varied and interesting objects.”

GET3D is one of more than 20 Nvidia-authored papers and workshops accepted to the NeurIPS AI conference, taking place in New Orleans and online from November 26 to December 4.

Nvidia says that earlier 3D generative AI models were limited in the level of detail they could produce, and that even recent inverse-rendering methods could only generate 3D objects from 2D images taken at various angles, building one shape at a time.

When running inference on a single Nvidia graphics processing unit (GPU), GET3D can instead churn out 20 shapes a second, working like a generative adversarial network for 2D images while generating 3D objects. The larger and more diverse the training dataset, the more varied and detailed the output.
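For context on that throughput figure, a back-of-the-envelope benchmark on one GPU (reusing the hypothetical generator from the earlier sketch) might look like this:

```python
# Rough shapes-per-second measurement on a single GPU.
# `generator` is the hypothetical model object from the earlier sketch.
import time
import torch

device = torch.device("cuda")
generator = torch.load("get3d_cars.pt", map_location=device)  # assumed checkpoint
generator.eval()

n_shapes = 100
torch.cuda.synchronize()  # ensure timing brackets actual GPU work
start = time.perf_counter()
with torch.no_grad():
    for _ in range(n_shapes):
        z_geo = torch.randn(1, 512, device=device)
        z_tex = torch.randn(1, 512, device=device)
        generator(z_geo, z_tex)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"{n_shapes / elapsed:.1f} shapes/second")  # Nvidia cites about 20
```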

Nvidia researchers trained GET3D on 2D images of 3D shapes captured from different camera angles. It took the team only two days to train the model on around a million images using Nvidia A100 Tensor Core GPUs.
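The general recipe implied by that setup, learning 3D structure from 2D views alone, is to render each generated mesh from a known camera pose with a differentiable renderer and score the rendering with an ordinary 2D image discriminator. Here is a schematic sketch of one such training step; every argument is a placeholder, not Nvidia's actual training code:

```python
import torch
import torch.nn.functional as F

def train_step(generator, render, discriminator, real_images, cameras,
               opt_g, opt_d, z_dim=512):
    """One adversarial step for learning 3D shapes from 2D images only.

    All arguments are placeholders: `generator` maps latent codes to a
    textured mesh, `render` is a differentiable rasterizer, and
    `discriminator` is a standard 2D image discriminator.
    """
    batch = real_images.shape[0]
    z_geo = torch.randn(batch, z_dim, device=real_images.device)
    z_tex = torch.randn(batch, z_dim, device=real_images.device)

    # Render generated meshes from the same camera poses as the real
    # training photos so real and fake images are directly comparable.
    fake_images = render(generator(z_geo, z_tex), cameras)

    # Discriminator update (non-saturating GAN loss): real vs. rendered.
    d_loss = (F.softplus(discriminator(fake_images.detach())).mean()
              + F.softplus(-discriminator(real_images)).mean())
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: gradients flow back through the differentiable
    # renderer into the networks that produce geometry and texture.
    g_loss = F.softplus(-discriminator(fake_images)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```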

GET3D takes its name from its capability to generate explicit textured 3D meshes. It creates shapes in the form of a triangle mesh, like a papier-mâché model, covered with a textured material, so users can import the objects into game engines, 3D modelers, and film renderers.
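Because the output is an ordinary textured triangle mesh, nothing Nvidia-specific is needed downstream. For instance, an exported file (the file name here is a hypothetical example) can be sanity-checked with the open-source trimesh library before being handed to a game engine:

```python
# Inspect an exported mesh with the open-source trimesh library.
# The file name is a hypothetical example.
import trimesh

mesh = trimesh.load("car_001.obj")
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} triangles")
print("watertight:", mesh.is_watertight)  # worth checking before physics use

# The same .obj imports directly into Blender, Unity, Unreal, and similar tools.
```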

Once a designer has exported GET3D-generated shapes to a graphics application, they can apply realistic lighting effects as the object moves or rotates in a scene. With another AI tool from Nvidia Research, StyleGAN-NADA, artists can use text prompts to restyle an image, such as modifying a rendered car to make it look burned.

Researchers suggest that a future version of GET3D might leverage camera pose estimation techniques to enable developers to train the model on actual data rather than synthetic datasets. It might also be enhanced to support universal generation, allowing developers to train GET3D on all kinds of 3D shapes at the same time.

Huang said AI will create worlds, not just animations. He envisions the need to construct a “new type of datacenter around the world”: a graphics delivery network, battle-tested through Nvidia's GeForce Now cloud gaming service. Nvidia will combine that network with Omniverse Cloud, a suite of tools that can be used to create Omniverse applications anytime and anywhere.

This kind of network might enable the real-time computing required for the metaverse.

Huang noted that this kind of interaction would be “instantaneous.”

Are there any game designers asking for this? I know one who is. Brendan Greene, creator of the battle royale game PlayerUnknown's Battlegrounds and head of the studio PlayerUnknown Productions, asked for this kind of technology this year when he announced Prologue and then revealed Project Artemis, an attempt to create a virtual world the size of the Earth that could only be built with a combination of game design and user-generated content.

Well, holy shit.
