Researchers have developed a system that takes photo collections from the internet and turns them into 3D reconstructions. They call it NeRF-W, short for Neural Radiance Fields "in the Wild," because it works on the kind of uncontrolled, in-the-wild photos found all over the internet. According to the researchers:
NeRF-W captures lighting and photometric post-processing in a low-dimensional latent embedding space. Interpolating between two embeddings smoothly captures variation in appearance without affecting 3D geometry.
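In plain terms: each photo gets a small appearance vector, and sliding between two photos' vectors changes lighting and color grading while the 3D geometry stays put. Here's a minimal sketch of that interpolation idea, assuming the embeddings are just fixed-size vectors (the 48-dimensional size and function names here are illustrative, not from the paper's code):

```python
import numpy as np

def interpolate_appearance(embedding_a: np.ndarray,
                           embedding_b: np.ndarray,
                           t: float) -> np.ndarray:
    """Linearly blend two appearance embeddings.

    t=0 returns embedding_a, t=1 returns embedding_b.
    Only this latent is blended; the scene geometry inputs to the
    model are untouched, which is why the 3D shape doesn't change.
    """
    return (1.0 - t) * embedding_a + t * embedding_b

# Two hypothetical per-image appearance embeddings (size is arbitrary here)
sunny_day = np.zeros(48)
night_shot = np.ones(48)

# Halfway between the two lighting conditions
dusk = interpolate_appearance(sunny_day, night_shot, 0.5)
```

In the real system, the blended vector would be fed into the radiance network alongside the 3D position and view direction, so rendering with `dusk` would produce the same landmark under intermediate lighting.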
With enough time and computing power, they could effectively crowdsource a 3D model of the entire world. Which would be great for me, since I haven't been outside in over eight years. Is that burning ball of fire still in the sky?
Keep going for a video explaining the process and demonstrating the mind-boggling results.