Advancing Urban Tree Monitoring with AI-Powered Digital Twins
The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”
What about AI-generated trees? They probably wouldn’t make a sound, but they will nonetheless be critical for applications such as adapting urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google's Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.
“We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”
Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the backs of trees, which aren’t visible in street-view photos.
The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.
A tree grows in Brooklyn (and many other places)
The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combo helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
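As a rough illustration of this two-stage idea, the sketch below pairs a stand-in “learned” crown envelope with a trivially simple procedural step that keeps only branch tips falling inside that envelope. The envelope profile, the growth rule, and every parameter here are invented for illustration; they are not the team’s actual models:

```python
import math
import random

def envelope_radius(height_frac):
    """Toy crown envelope: the radius of the tree's 3D hull as a function of
    normalized height (0 = ground, 1 = treetop). In Tree-D Fusion this shape
    is predicted by a deep network from a single image; here it is a
    hand-written profile, narrow at the trunk and widest at mid-crown."""
    if height_frac < 0.3:  # trunk region: very narrow
        return 0.05
    t = (height_frac - 0.3) / 0.7  # crown region, tapering to the tip
    return 0.5 * math.sin(math.pi * t) + 0.05

def grow_branches(n_candidates=200, tree_height=10.0, seed=0):
    """Stand-in procedural stage: scatter candidate branch tips, then keep
    only those inside the envelope, mimicking how a species-specific growth
    model can be constrained by a reconstructed 3D hull."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_candidates):
        h = rng.uniform(0.0, tree_height)
        angle = rng.uniform(0.0, 2.0 * math.pi)
        r = rng.uniform(0.0, 0.6 * tree_height)
        if r <= envelope_radius(h / tree_height) * tree_height:
            kept.append((r * math.cos(angle), r * math.sin(angle), h))
    return kept

branches = grow_branches()
print(f"{len(branches)} branch tips accepted inside the envelope")
```

The division of labor mirrors the paper's hybrid design: the learned component only has to get the coarse overall shape right, while the procedural component supplies botanically plausible fine structure within it.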
Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could transform sweltering city blocks into more naturally cooled neighborhoods.
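The team’s seasonal shade analysis is far richer than anything reproducible here, but the core geometric intuition, that a tree’s shadow footprint stretches and shrinks as the sun’s noon elevation changes through the year, fits in a few lines. Everything below (the latitude, the 10-meter tree, the textbook declination approximation) is illustrative and not part of the Tree-D Fusion system:

```python
import math

LATITUDE_DEG = 42.36  # hypothetical example value (roughly Boston, MA)

def solar_elevation_noon(day_of_year, latitude_deg=LATITUDE_DEG):
    """Approximate solar elevation angle at solar noon, in degrees, using
    the standard declination approximation. A crude stand-in for the
    detailed sun/shade modeling the researchers describe."""
    declination = 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    return 90.0 - latitude_deg + declination

def shadow_length(tree_height_m, elevation_deg):
    """Length of the shadow cast by a tree of the given height."""
    if elevation_deg <= 0:
        return float("inf")  # sun at or below the horizon: no finite shadow
    return tree_height_m / math.tan(math.radians(elevation_deg))

# A 10 m tree casts a far longer noon shadow in winter than in summer.
summer = shadow_length(10.0, solar_elevation_noon(172))  # ~June 21
winter = shadow_length(10.0, solar_elevation_noon(355))  # ~Dec 21
print(f"noon shadow: {summer:.1f} m in summer vs {winter:.1f} m in winter")
```

Even this crude model shows why shade-aware planting is a seasonal question: a spot that is cooled all summer may receive almost no shadow in December, which is exactly the kind of trade-off that simulation-ready tree models let planners evaluate per tree.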
“Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”
AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.
It’s a breeze
While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate a tree’s future shape, depending on the environmental conditions.
“What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”
The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.
The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.
“This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”
Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google); and four others from Purdue University: PhD student Bosheng Li, Professor and Dean's Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.