Mixed Reality Dynamic Occlusion in Quest 3
While dynamic occlusion is currently supported by only a handful of Quest 3 apps, it now offers higher quality, lower CPU and GPU usage, and a somewhat easier implementation process for developers.
Occlusion, the ability of virtual objects to appear behind real objects, is an essential capability for mixed reality headsets. When only pre-scanned scenery can occlude, this is known as static occlusion; when occlusion accounts for changing scenery and moving objects, it is known as dynamic occlusion.
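To make the distinction concrete, static occlusion is often implemented by rendering the pre-scanned room mesh with a depth-only material, along these lines (a minimal Unity ShaderLab sketch; the shader name is illustrative):

```shaderlab
// Depth-only "invisible occluder" material for a pre-scanned room mesh.
Shader "Example/DepthOnlyOccluder"
{
    SubShader
    {
        Tags { "Queue" = "Geometry-1" }   // draw before ordinary geometry
        Pass
        {
            ZWrite On     // fill the depth buffer with the mesh's depth
            ColorMask 0   // but write no colour, leaving passthrough visible
        }
    }
}
```

Anything virtual behind the scanned mesh then fails the depth test. But because the scan never updates, a person walking through the room occludes nothing; dynamic occlusion needs a live, per-frame depth source instead.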
Quest 3 did not support dynamic occlusion at launch, though it did support static occlusion. Dynamic occlusion was first made available to developers a few days later as an "experimental" feature, meaning it couldn't ship in apps on the Quest Store or App Lab.
With Meta's Depth API, which is now generally available, developers can access a coarse per-frame depth map generated by the headset and implement dynamic occlusion on a per-app basis. There is no one-click solution: developers have to modify the shaders of every virtual object they want occluded. Because of this, relatively few Quest 3 mixed reality apps currently support dynamic occlusion.
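To give a sense of what that per-shader work looks like, here is a minimal sketch of the technique: sample the environment depth map in the fragment shader and hide any fragment the real world is in front of. The `_EnvironmentDepth` property name, the assumption that it stores linear depth in metres, and the hard cutoff are all illustrative, not the Depth API's actual interface.

```shaderlab
Shader "Example/OccludedUnlit"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 _Color;
            // Per-frame environment depth map from the headset. Name and
            // format (linear metres) are assumptions for this sketch.
            sampler2D _EnvironmentDepth;

            struct v2f
            {
                float4 pos       : SV_POSITION;
                float4 screenPos : TEXCOORD0;
                float  eyeDepth  : TEXCOORD1;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.pos);
                // Linear eye-space depth of this vertex.
                o.eyeDepth = -UnityObjectToViewPos(v.vertex).z;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float2 uv = i.screenPos.xy / i.screenPos.w;
                float envDepth = tex2D(_EnvironmentDepth, uv).r;

                // Hide the fragment wherever the real world is closer.
                float visible = i.eyeDepth <= envDepth ? 1.0 : 0.0;

                fixed4 col = _Color;
                col.a *= visible;
                return col;
            }
            ENDCG
        }
    }
}
```

Every material that should respect real-world geometry needs an equivalent change, which is why adoption has been slow.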
However, with version 67 of the Meta XR Core SDK, Meta has significantly improved the Depth API's efficiency and slightly improved its visual quality. The company claims this results in around 50% lower CPU usage and substantially lower GPU usage.
v67 also adds support for easily applying occlusion to shaders built with Unity's Shader Graph tool, making the feature simpler for developers to integrate. In addition, the Depth API's code has been refactored to make it easier to work with.
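In Shader Graph terms, an integration like this boils down to a node that performs the same depth comparison. The sketch below shows what the body of such a Custom Function node could look like; the function and parameter names are hypothetical, not the identifiers Meta's package actually ships.

```hlsl
// Hypothetical Custom Function node body (URP Shader Graph). Wire a Screen
// Position node into ScreenUV, the fragment's linear eye depth into
// FragmentEyeDepth, and multiply the material's alpha by Visibility.
void EnvironmentOcclusion_float(
    float2 ScreenUV,
    float FragmentEyeDepth,
    UnityTexture2D EnvironmentDepth,
    out float Visibility)
{
    // Assumed: the environment depth map stores linear depth in metres.
    float envDepth = SAMPLE_TEXTURE2D(EnvironmentDepth.tex,
                                      EnvironmentDepth.samplerstate,
                                      ScreenUV).r;
    Visibility = FragmentEyeDepth <= envDepth ? 1.0 : 0.0;
}
```

Wired into a material's alpha output, this hides graph-authored materials behind real objects the same way a hand-written shader would.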
That said, the quality improvement is marginal: occlusion with the Depth API remains relatively coarse. Using the Depth API's new exclusion capability, you can now mask your tracked hands out of the depth map and use the hand tracking mesh to occlude them instead. Some developers have achieved hand occlusion with the hand tracking mesh for quite some time; with v67, Meta provides an example of how to do this while occluding everything else with the Depth API.
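Sketching the logic of that combination, with `handMask` standing in for whatever hand-segmentation data the runtime actually exposes: wherever a pixel belongs to a tracked hand, the coarse depth-map test is skipped, and the hand tracking mesh, rendered as a depth-only occluder, handles those pixels instead.

```hlsl
// Illustrative only: handMask and its encoding are assumptions, not the
// Depth API's real interface.
float OcclusionVisibilityWithHandRemoval(
    float2 uv, float fragmentEyeDepth,
    sampler2D environmentDepth, sampler2D handMask)
{
    // Hand pixels: skip the noisy depth-map test here; the hand tracking
    // mesh, drawn as a depth-only occluder, covers these pixels crisply.
    if (tex2D(handMask, uv).r > 0.5)
        return 1.0;

    // Everywhere else: the usual environment depth test
    // (depth assumed to be linear metres).
    float envDepth = tex2D(environmentDepth, uv).r;
    return fragmentEyeDepth <= envDepth ? 1.0 : 0.0;
}
```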