Quest 3’s Mixed Reality Occlusion Is Now Higher Quality
Dynamic occlusion on Quest 3 is currently only supported in a handful of apps, but it's now higher quality, uses less CPU and GPU, and is slightly easier for developers to implement.
Occlusion refers to the ability of virtual objects to appear behind real objects, a crucial capability for mixed reality headsets. Doing this only for pre-scanned environments is called static occlusion, while if the system handles changing environments and moving objects it's known as dynamic occlusion.
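Conceptually, occlusion comes down to a per-pixel depth comparison: a virtual pixel is only drawn if the virtual object is closer to the viewer than the real-world surface at that pixel. Here's a minimal sketch of that test in plain C# (illustrative only, not Meta's actual shader code):

```csharp
// Conceptual sketch, not Meta's implementation: real systems run this
// test per pixel in a shader. A virtual fragment survives only if it is
// closer to the viewer than the real-world surface at the same pixel.
static bool IsVirtualPixelVisible(float virtualDepthMeters, float realDepthMeters)
{
    return virtualDepthMeters <= realDepthMeters;
}
```

Static occlusion runs this test against pre-scanned geometry; dynamic occlusion runs it against a live per-frame depth map, which is what lets it handle moving people and objects.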
Quest 3 launched with support for static occlusion but not dynamic occlusion. A few days later dynamic occlusion arrived as an “experimental” feature for developers, meaning it couldn't be shipped on the Quest Store or App Lab, and in December that restriction was dropped.
Developers implement dynamic occlusion on a per-app basis using Meta's Depth API, which provides a coarse per-frame depth map generated by the headset. Integrating it is a relatively involved process, though. It requires developers to modify the shaders of all virtual objects they want to be occluded, far from the ideal of a one-click solution. As such, very few Quest 3 mixed reality apps currently support dynamic occlusion.
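In Unity, the integration roughly amounts to adding Meta's environment depth component to the scene and switching materials to occlusion-aware shader variants. A rough sketch follows, assuming the `EnvironmentDepthManager` component and `OcclusionShadersMode` setting that the v67 Core SDK samples use; treat the exact names as assumptions and verify them against the documentation linked at the end of this article:

```csharp
using Meta.XR.EnvironmentDepth; // assumed Depth API namespace in Meta XR Core SDK v67
using UnityEngine;

public class DepthOcclusionSetup : MonoBehaviour
{
    void Start()
    {
        // Depth estimation requires Quest 3; bail out gracefully elsewhere.
        if (!EnvironmentDepthManager.IsSupported)
        {
            Debug.Log("Depth API not supported on this headset; skipping occlusion.");
            return;
        }

        // The manager generates the per-frame depth map and uploads it for
        // occlusion-enabled shaders to sample.
        var depthManager = FindObjectOfType<EnvironmentDepthManager>();

        // Soft occlusion feathers edges to mask the low-resolution depth map;
        // hard occlusion is a cheaper binary cutoff.
        depthManager.OcclusionShadersMode = OcclusionShadersMode.SoftOcclusion;
    }
}
```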
Another problem with dynamic occlusion on Quest 3 is that the depth map is very low resolution, so you'll see an empty gap around the edges of objects, and it won't pick up details like the spaces between your fingers.
With v67 of the Meta XR Core SDK, though, Meta has slightly improved the visual quality of the Depth API and significantly optimized its performance. The company says it now uses 80% less GPU and 50% less CPU, freeing up extra resources for developers.
To make the feature easier to integrate, v67 also adds support for easily adding occlusion to shaders built with Unity's Shader Graph tool, and refactors the Depth API code to make it easier to work with.
I tried out the Depth API with v67 and can confirm it delivers slightly higher quality occlusion, though it's still very rough. But v67 has another trick up its sleeve that's more significant than the raw quality improvement.
The Depth API now has an option to exclude your tracked hands from the depth map so they can be masked out using the hand tracking mesh instead. Some developers have been using the hand tracking mesh to do hands-only occlusion for a long time now, even on Quest Pro for example, and with v67 Meta provides a sample showing how to do this alongside the Depth API for occlusion of everything else.
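In practice this is a two-part setup: tell the Depth API to leave your hands out of the depth map, then render the tracked hand mesh into the depth buffer so it occludes virtual content with a crisp silhouette. A rough sketch under the same assumptions as above (the `RemoveHands` flag matches what Meta's v67 sample appears to expose, and the hand mesh wiring here is hypothetical):

```csharp
using Meta.XR.EnvironmentDepth; // assumed namespace, as above
using UnityEngine;

public class HandMaskingSetup : MonoBehaviour
{
    // Material with a depth-only shader (e.g. "ColorMask 0"), so the hand
    // mesh writes depth for occlusion without rendering visible pixels.
    [SerializeField] private Material depthOnlyHandMaterial;
    [SerializeField] private SkinnedMeshRenderer leftHandMesh;
    [SerializeField] private SkinnedMeshRenderer rightHandMesh;

    void Start()
    {
        var depthManager = FindObjectOfType<EnvironmentDepthManager>();
        if (depthManager == null)
            return;

        // Exclude tracked hands from the depth map so the coarse depth data
        // doesn't fight the crisp hand tracking mesh.
        depthManager.RemoveHands = true;

        // The hand tracking meshes then handle hand occlusion themselves.
        leftHandMesh.material = depthOnlyHandMaterial;
        rightHandMesh.material = depthOnlyHandMaterial;
    }
}
```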
I tested this hand masking approach and found it results in significantly higher quality occlusion for your hands, though it adds some visual inconsistency at the wrist, where the system transitions to occlusion powered by the depth map.
In comparison, Apple Vision Pro only has dynamic occlusion for your hands and arms, because it masks them out the same way Zoom masks you out rather than generating a depth map. That means the quality of hand and arm occlusion on Apple's headset is significantly higher, though you'll see peculiarities like objects you're holding appearing behind virtual objects and thus being invisible in VR.
Quest developers can find Depth API documentation for Unity here and for Unreal here.