Intertisement is no stranger to developing Augmented Reality (AR) solutions, but making them interact with the real world has always been a challenge.
With the rise of Physically Based Rendering (PBR) in modern 3D rendering solutions and middleware tools, it was obvious that we needed to look at how we could adapt some of its principles in our AR solutions and create products that just look and behave a little bit nicer.
The nice thing about PBR is that everything is grounded in an approximation of how real-life objects look and behave when reacting to light sources. This makes 3D surfaces look a lot more consistent across different light sources, as opposed to only looking nice under one lighting condition.
Every surface in a PBR workflow also has a value that indicates how metallic the object is supposed to be. This metallic value almost always indicates that the surface is either fully metallic or not metallic at all; in-between values tend not to be used.
The other part of a PBR material is either a roughness value or a specular value, which indicates how shiny a surface should be. The shinier a surface is, the more light bounces off it and creates reflections. Looking at the spheres in the images, we can see that they are all metallic, each with a different level of roughness.
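To make the metallic/roughness idea concrete, here is a minimal sketch of the standard metallic workflow. The `Material` fields and the ~4% dielectric reflectance constant are common PBR conventions, not our actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Material:
    base_color: tuple  # linear RGB albedo
    metallic: float    # 0.0 = not metallic, 1.0 = fully metallic (in-between values are rare)
    roughness: float   # 0.0 = mirror-like, 1.0 = fully diffuse

def reflectance_at_normal(material: Material) -> tuple:
    """Reflectance looking straight at the surface: dielectrics reflect a
    roughly constant 4% of light uncolored, while metals tint their
    reflections with the base color."""
    dielectric = 0.04
    return tuple(
        dielectric * (1.0 - material.metallic) + c * material.metallic
        for c in material.base_color
    )

gold = Material(base_color=(1.0, 0.77, 0.34), metallic=1.0, roughness=0.3)
plastic = Material(base_color=(0.8, 0.1, 0.1), metallic=0.0, roughness=0.5)

print(reflectance_at_normal(gold))     # metal: reflections take on the base color
print(reflectance_at_normal(plastic))  # dielectric: faint, uncolored reflections
```

Because the metallic value is effectively binary, artists mostly author the roughness channel, which controls how sharp or blurred those reflections appear.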
Now, in a 3D game you can easily calculate reflections from the surrounding geometry at every angle, but it gets a bit more complex when working with portable devices like a phone or a tablet, where we have no real knowledge of the device's surrounding environment.
So in order to create reflections, we have to guess what is in the spots the camera cannot see. By doing a custom spherical mapping of what we can see from the device camera, we can create real-time reflections that adjust to the lighting conditions the 3D content is being rendered in, creating a more consistent-looking experience rather than something with a very static lighting setup.
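The spherical-mapping step can be sketched roughly as follows. The equirectangular convention and the visible-versus-guessed split are illustrative assumptions on our part, not the actual implementation:

```python
import math

def direction_to_uv(dx: float, dy: float, dz: float) -> tuple:
    """Spherical (equirectangular) mapping: turn a reflection direction
    into texture coordinates in [0, 1]."""
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / length, dy / length, dz / length
    u = math.atan2(dx, -dz) / (2.0 * math.pi) + 0.5  # horizontal angle
    v = math.acos(max(-1.0, min(1.0, dy))) / math.pi  # vertical angle
    return u, v

def sample_environment(u: float, v: float, camera_fov_u: float = 0.25) -> str:
    """Directions that fall inside the camera's field of view can sample the
    live feed; everything else falls back to a guessed fill (for example an
    average of the visible colors)."""
    visible = abs(u - 0.5) < camera_fov_u / 2
    return "camera_feed" if visible else "guessed_fill"
```

A reflection ray pointing straight ahead (`direction_to_uv(0, 0, -1)`) lands in the middle of the map and samples the real feed; rays pointing behind the device land in the guessed region.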
The effect can work very well, but it needs some more time in production before we can release it to the entire world; the outlook is positive, though. The system is built to have very little impact on how we develop AR experiences, as only a single line of code is needed to fully implement it in an existing product.
The system also separates the 3D rendering from the 2D camera feed. This enables us to optimize by rendering fewer pixels and even dynamically scale the resolution if the application begins to perform poorly. While everything looks better at a higher resolution, mobile devices with high-density screens, especially older devices with good screens, can have difficulty rendering 3D content at high resolutions, so rendering 70% of the screen's pixels can deliver a very similar visual experience at a better frame rate.
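A dynamic scaler along these lines could look roughly like this. The target frame rate, step sizes, and the 0.7 floor are assumed values chosen to mirror the 70% figure above, not our production tuning:

```python
class ResolutionScaler:
    """Dynamically scale the 3D render target while the 2D camera feed
    stays at native resolution. Thresholds are illustrative assumptions."""

    def __init__(self, target_fps: float = 30.0,
                 min_scale: float = 0.7, max_scale: float = 1.0):
        self.target_fps = target_fps
        self.min_scale = min_scale
        self.max_scale = max_scale
        self.scale = max_scale

    def update(self, measured_fps: float) -> float:
        # Drop resolution quickly when we miss the target,
        # recover slowly when we comfortably exceed it.
        if measured_fps < self.target_fps * 0.9:
            self.scale = max(self.min_scale, self.scale - 0.05)
        elif measured_fps > self.target_fps * 1.1:
            self.scale = min(self.max_scale, self.scale + 0.01)
        return self.scale

    def render_size(self, width: int, height: int) -> tuple:
        """Size of the 3D render target; the camera feed is untouched."""
        return int(width * self.scale), int(height * self.scale)
```

Because only the 3D layer is scaled, the camera feed behind it remains sharp, which helps hide the reduced render resolution.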
This technology does pose some interesting issues that need to be addressed before we can use it in commercially available products, primarily on Google's Android platform. Due to the sheer variety of Android devices available, we cannot test them all; we can only do real-life tests on a small number of devices. So even though we have had great success running it on three different Android devices, we cannot count on it running this well on all devices and configurations, although this is a problem nearly all software developers face on Android when developing hardware-intensive applications.
In the end we will probably end up creating individual device configurations for iOS devices, and since we cannot test all Android devices, we will probably build a hidden benchmark utility that runs during the application's first launch on Android. This way we can deliver the best possible experience on all devices and focus on what matters most: the content we project into the real world.
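Such a first-launch benchmark could be sketched like this. The workload, the timing approach, and the tier cut-offs are hypothetical placeholders, not the utility we will actually ship:

```python
import time

def benchmark_device(workload, runs: int = 3) -> float:
    """Time a representative rendering-style workload a few times and keep
    the best result, to reduce noise from background activity."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

def pick_quality_tier(seconds: float, thresholds=(0.05, 0.15)) -> str:
    """Map the benchmark time to a quality tier that the app stores and
    reuses on later launches. Cut-off values are assumed for illustration."""
    if seconds < thresholds[0]:
        return "high"
    if seconds < thresholds[1]:
        return "medium"
    return "low"

# On first launch: run once, store the tier, never bother the user again.
tier = pick_quality_tier(benchmark_device(lambda: sum(range(100000))))
```

Running this once and caching the result keeps the cost invisible to the user while still adapting to hardware we have never tested directly.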