A key challenge in building truly immersive experiences is creating environments detailed and realistic enough for a user to really buy in. By using drones to capture highly detailed 3D imagery, Britelite has been able to inform and shape some of our most expansive projects. We primarily use these models in two ways: to make accurate architectural models, and directly in virtual reality simulations.
Capturing 3D models with a drone
First, a quick note: make sure you meet all the legal requirements for flying a drone in an urban environment. In our case, we had an operator with a commercial license, insurance, and approval from the city.
Building accurate 3D models from drone imagery uses a process called photogrammetry. Multiple photographs of a site are taken in a regular pattern and from different angles. These photographs are uploaded to a service that scans them for features visible in multiple pictures, and uses those correspondences to reconstruct a 3D model.
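The core geometric step can be illustrated in miniature: once a feature has been matched between two photos taken from known camera positions, its 3D location can be recovered by triangulation. Below is a minimal sketch of that step using the direct linear transform; the camera matrices and point are invented for illustration, and real photogrammetry services automate this across thousands of features and photos.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coordinates in two views
    via the direct linear transform (DLT).
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixels."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical cameras 10 m apart, both looking down the z-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

X_true = np.array([2.0, 1.0, 50.0])   # a point 50 m from the cameras
x1 = P1 @ np.append(X_true, 1)        # project into each view
x2 = P2 @ np.append(X_true, 1)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))    # recovers approximately [2, 1, 50]
```

With noise-free projections the original point is recovered exactly; with real photos, the same least-squares machinery averages out pixel noise across many views.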
Building accurate cityscapes
In one case, we flew the drone over the site of a possible future building in San Francisco's SoMa neighborhood. In only 15 minutes, the drone took 370 pictures of the area.
For the SoMa project, a key value of the capture was to accurately establish the dimensions of a 3D site model. The software we used relies on the drone's extremely accurate flight path to export 3D models at real-world scale. It was then easy to take measurements on the model and use them to inform construction of a 3D model for fabrication.
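The practical payoff of a real-world-scale export is that a measurement is simply the distance between two vertices, with no conversion factor. A trivial sketch, with hypothetical coordinates (in meters) picked from an exported model:

```python
import numpy as np

# Hypothetical vertex coordinates (meters) picked off the exported model,
# e.g. the two ends of a parapet on the site.
parapet_a = np.array([12.4, 3.1, 31.0])
parapet_b = np.array([12.4, 18.6, 31.0])

# At real-world scale, Euclidean distance is already in meters.
width = np.linalg.norm(parapet_b - parapet_a)
print(f"{width:.2f} m")  # 15.50 m
```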
It was also extremely useful to have a detailed 3D model to understand aspects of the site, such as clearances between buildings, architectural finishes, and overall geometry.
Using imagery for renders
For another project, we flew the drone over a section of San Francisco's Golden Gate Park in order to explore options for an art project. Since this model would be used to produce imagery for renders, we captured at a higher level of detail, taking nearly 1,000 images over the course of an hour.
In addition to capturing images for photogrammetry, we also used the drone to capture a 360 spherical panorama from directly above the site. This can be used to build a “skybox” for CG renderings of the site, showing the scenery that extends beyond the model area.
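Building a skybox from a 360 panorama comes down to mapping each pixel of the equirectangular image onto a viewing direction on the sphere, which the rendering or game software then samples behind the 3D model. A minimal sketch of that forward mapping (the panorama dimensions here are hypothetical):

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit view direction.
    u spans longitude (-pi..pi), v spans latitude (pi/2..-pi/2)."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center of a hypothetical 4096x2048 panorama looks straight
# ahead along +z; the top row looks straight up along +y.
print(pixel_to_direction(2048, 1024, 4096, 2048))
```

Game engines and renderers apply exactly this kind of mapping internally when you assign an equirectangular image as a sky or environment texture.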
The Golden Gate Park imagery was used to create still and video renderings and VR simulations of a future public art project in the park. The resolution of the drone scan was not always high enough to produce detailed results; however, the scan was useful for the overall scaling and positioning of elements of the model.
The 3D model generated by the software was missing key details of buildings on the site. However, Google's 3D Warehouse contained detailed files of these buildings (the Cal Academy and de Young Museum), and these could easily be integrated with the drone capture.
The fact that the FBX files and the drone capture were at real world scale helped considerably in this process. Everything was imported into Cinema4D for rendering.
The trees that were captured via photogrammetry were not sufficiently detailed for close-up viewing, so the scanned trees were deleted and replaced with CG trees from a library. However, the 3D capture was a useful guide for placing the trees.
Using imagery for VR
As part of the Golden Gate Park project, we created a virtual reality simulation of the project. As with the renders, the drone imagery and detailed models of the buildings on the site were imported into a game development software package.
Since we only had a 3D model of part of the park, we needed a way to "fill in" the imagery beyond it. The 360 panoramic image taken during the scan could be used directly in the software to provide this backdrop.
Because VR has higher processing demands, we retained the scanned trees rather than swapping in CG trees from a library. It is always possible, however, to edit the model later and add more detailed, realistic trees.
Using 3D imagery generated by drones turned out to be a fast and effective way to make realistic, compelling virtual environments. The accuracy of the models, and the way they reflect real-world dimensions, made them particularly adaptable.
Moreover, the ability to generate 3D models and also 360 panoramic images during the same mission allowed for a very complete package, which could be used with minimal editing in common game design environments.
Using photogrammetry to capture props and buildings for use in games is a strong emerging application, and allows for very realistic settings.
A real application is capturing detailed 3D models of historical sites, to help with their reconstruction in case of damage or loss. An organization called CyArk uses both interior scans and drone flights to produce detailed 3D models of historical sites.
This incredibly high-resolution drone panorama of Notre Dame helped assess damage from the fire:
The speed with which it is possible to generate 3D models suggests that this could be done during an event, creating a model of an outdoor venue and making it available to attendees while it is happening.
Another great possibility is to build games and interactive experiences set in real world areas that may be otherwise dangerous or inaccessible. Here is a project by the USGS to use drones to produce 3D models of volcanoes:
The ability to capture detailed 3D imagery is a powerful addition to the digital creative agency toolkit – and a powerful complement to traditional video and photographic imagery.