Welcome to the first installment of our developer blog. Here, we’ll provide periodic updates on key technical developments that segments of our engineering team have been focusing on. These updates will cover progress on developments that are ready to become public; for confidentiality reasons, however, they may not reflect all of the team’s current work.
Some of the topics we’ll discuss are works in progress and open research. Nevertheless, we’d still like to share our approach and explain our progress in addressing the broader technical challenges that will enable the earth-2 metaverse.
One of the most exciting development areas is our advanced terrain rendering engine, a major focus of our 3D open-world development roadmap. This engine will support real-time, photo-realistic rendering of terrain and environments at large scale while accurately reflecting real-world topography and locations.
To achieve this, we’ve developed a proprietary rendering pipeline that integrates heightmap data with clipmapping. However, when researching the use of satellite imagery, we discovered that directly using real-world satellite imagery would not be practical or of sufficient quality for the earth-2 metaverse, due to issues with image quality and resolution, inconsistent lighting and perspective, and the presence of man-made structures.
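To give a rough idea of how a clipmap-style pipeline consumes heightmap data, here is a minimal sketch. This is not our actual engine code: the function name, the per-level 255×255 grid size, and the snapping scheme are illustrative assumptions. The core idea is that each clipmap level covers twice the extent of the previous one at half the sample density, centered on the viewer and snapped to its own grid so vertices stay stable as the camera moves.

```python
import math

def clipmap_levels(cx, cy, base_extent, num_levels):
    """Compute nested square regions for a geometry-clipmap terrain.

    cx, cy       -- viewer position in world units
    base_extent  -- world-space size of the finest level
    num_levels   -- number of nested detail levels

    Each successive level doubles in extent (and therefore halves in
    sample density), so a handful of levels can cover a huge terrain
    with detail concentrated near the viewer.
    """
    levels = []
    for level in range(num_levels):
        extent = base_extent * (2 ** level)  # world size of this ring
        step = extent / 255.0                # assumed 255x255 grid per level
        # Snap the center to this level's grid spacing so vertices
        # don't "swim" as the camera translates.
        sx = math.floor(cx / step) * step
        sy = math.floor(cy / step) * step
        levels.append({"level": level, "extent": extent,
                       "step": step, "center": (sx, sy)})
    return levels
```

Each level's region would then be used to sample the heightmap at that level's step size; only the rings near the viewer touch the finest data.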
So, we’ve come up with a solution. What you see in this video is a prototype in which we synthesize satellite-style imagery on the fly, without relying on costly, oversized imagery datasets. Our parametric and procedural approach allows us to change the look of the terrain in real time, at scale, while relying only on heightmap data.
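To illustrate what "synthesizing satellite-style imagery from heightmap data" can mean in the simplest case, here is a toy biome ramp. This is a hypothetical sketch, not the prototype shown in the video: the thresholds, colors, and the `moisture` parameter are all assumptions. It maps a normalized height sample to an RGB texel, which is the kind of per-texel decision a procedural approach can make on the fly instead of storing imagery.

```python
def synth_texel(height, moisture):
    """Map a normalized heightmap sample to a satellite-style RGB color.

    height, moisture -- values in [0, 1]; a toy biome ramp assigns
    water below a sea level, then sand, vegetation, rock, and snow
    as elevation rises. Moisture shifts how green the vegetation is.
    """
    if height < 0.3:             # open water
        return (30, 60, 140)
    if height < 0.35:            # beach sand
        return (210, 195, 150)
    if height < 0.7:             # vegetation, greener when wetter
        g = int(100 + 100 * moisture)
        return (60, g, 50)
    if height < 0.85:            # bare rock
        return (120, 115, 110)
    return (240, 240, 245)       # snow caps
```

Because everything is derived from parameters rather than stored pixels, retuning the ramp changes the look of the entire terrain instantly, which is what makes a parametric approach attractive at planetary scale.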