Issue 2025.04

When the Machine Sees the Landscape

Using Adobe Lightroom to Enhance Landscapes

Words and Images by Jason Bradley

I was editing a coastal landscape recently, one of those tricky ones where I needed to mask off the sky from the rest of the scene. Masking has always been challenging in Lightroom compared to Photoshop; the program is notorious for generating halos and other digital artifacts when you lean on its masking tools. This time, though, I reached for Lightroom’s new “Select Landscape” mask, expecting another questionable result.

Instead, the software nailed it. It separated the sky just as well as anything I could have done by hand, and it did it in a fraction of the time. Admittedly, it was a little spooky, but also kind of thrilling.

Artificial intelligence is quietly becoming one of the most transformative forces in photography, not just in the camera, but in the digital darkroom. For those of us who shoot nature, wildlife, or underwater subjects, that means help with the tedious stuff but also new ways to interact with our images. Smarter. Faster. And in some cases, surprisingly intuitive.

Two tools in Adobe’s latest Lightroom updates stand out: Landscape Masking and the Adaptive Color Profile. Together, they mark a turning point. AI is no longer just automating tasks; it’s beginning to interpret the visual world.

Landscape Masking: The Machine Learns the Terrain

Lightroom’s AI-powered masking tools aren’t exactly new. “Select Sky” and “Select Subject” have been around for a while. But the latest model — Select Landscape — is different. It’s trained to detect and separate terrain-specific features like sky, water, mountains, and foreground elements. In practice, that means you can hit one button and get several nuanced, editable regions ready to adjust, without a brush in sight.

With a single click, Lightroom can now detect and create masks for the distinct elements of your landscape, distinguishing sky from water from ground, and more.
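To see what that one click amounts to under the hood, here is a minimal sketch in Python with NumPy: once a model has produced a soft, per-region confidence map, an adjustment simply gets gated through it. Everything below (the mask values, the region names, the apply_exposure helper) is a hypothetical illustration, not Lightroom’s actual code or API.

```python
import numpy as np

def apply_exposure(image, mask, stops):
    """Brighten or darken only the pixels a mask covers.

    image: float RGB array in [0, 1], shape (H, W, 3)
    mask:  float confidence map in [0, 1], shape (H, W)
    stops: exposure change in photographic stops (negative darkens)
    """
    gain = 2.0 ** stops
    adjusted = np.clip(image * gain, 0.0, 1.0)
    # Blend per pixel: masked areas take the adjustment, the rest stay put.
    return image * (1.0 - mask[..., None]) + adjusted * mask[..., None]

# Stand-in for a "Select Landscape"-style model's output: one soft mask per
# detected region. Real masks would come from a segmentation network.
H, W = 4, 6  # tiny placeholder image
image = np.random.default_rng(0).random((H, W, 3))
masks = {
    "sky":   np.full((H, W), 0.9),
    "water": np.zeros((H, W)),
}

# One click, several editable regions: pull the sky down half a stop.
edited = apply_exposure(image, masks["sky"], stops=-0.5)
```

The soft mask is the point: because each region comes back as confidences rather than a hard cutout, adjustments feather into their surroundings instead of ending at a hard edge.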

For those of us who’ve spent hours carefully burning down bright skies or dodging shadowy tree lines, this is huge. It’s not about cutting corners, but rather, about arriving faster at the part of editing where creativity lives.

It’s also a little eerie. The software isn’t just reading tonal values anymore; it’s recognizing structure, telling water from sky from rock. It doesn’t always get it right, but when it does, it feels like magic.

Adaptive Color: A Smarter Starting Point

Then there’s the Adaptive Color Profile, a quieter feature that might be the bigger deal. This profile analyzes your RAW or DNG file and applies what Adobe calls “intelligent, image-specific tonal and color corrections.” In other words, it gives you an automated but non-destructive starting point. It doesn’t touch your sliders. It just applies its logic in the background, handing you a version of your image that’s already been balanced for dynamic range, color harmony, and mood.
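A rough mental model for that kind of baseline (emphatically not Adobe’s implementation) is a correction computed from the image itself and applied underneath the controls, so the user’s values stay at their defaults. In this hedged sketch, adaptive_baseline, its percentile stretch, and the sliders dictionary are all invented for illustration:

```python
import numpy as np

def adaptive_baseline(raw):
    """Derive an image-specific tonal correction from the file itself.

    A crude stand-in for an "adaptive" profile: stretch the histogram so the
    darkest and brightest percentiles land near black and white.
    """
    lo, hi = np.percentile(raw, [0.5, 99.5])
    return np.clip((raw - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def render(raw, sliders):
    """Profile first, user edits on top; the sliders themselves never move."""
    base = adaptive_baseline(raw)  # the profile's silent starting point
    return np.clip(base * (2.0 ** sliders["exposure"]), 0.0, 1.0)

raw = 0.2 + 0.6 * np.random.default_rng(1).random((4, 6, 3))  # flat, low-contrast file
sliders = {"exposure": 0.0}  # untouched: every control still reads default
preview = render(raw, sliders)
```

Because the correction lives below the sliders rather than in them, swapping the profile out later leaves every control exactly where the photographer set it, which is what “non-destructive starting point” means in practice.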


It’s Subtle. But It Works.

For photographers dealing with challenging light, say the murky gradients of an underwater scene or a golden-hour sky clipped just a little too hot, Adaptive Color can save real time. You’re not trying to fix the file. You’re already halfway there.

But it also raises a question: Whose judgment is it? Because let’s be honest — sometimes the Adaptive Color Profile gets it wrong. Or at least, not your version of right. And that’s part of the adjustment, too – learning when to accept the machine’s help and when to say, “Nah, not this time.”

The Creative Intent Hasn’t Changed

AI tools like these are a gift, but they’re also a responsibility, especially for photographers working in documentary or conservation contexts, where the line between enhancement and manipulation matters.
If AI masking makes a bird stand out more clearly against a background, is that helping tell the story or distorting it? If the Adaptive Profile shifts a scene’s mood subtly toward warmth or contrast, is that still faithful?

There’s no universal answer, but the question is worth asking loudly at a time when automation risks overtaking our own personal, creative, and even ethical judgment.

And yet, this isn’t new territory. What we’re doing today with sliders and algorithms, photographers have done for decades with paper, light, and chemistry. Dodging and burning were never just technical exercises, but rather, emotional ones. Ansel Adams, Minor White, Brett Weston … these weren’t just masters of exposure; they were interpreters of feeling. They used every tool available to draw out tone, shape, and mood from the raw material of a negative. Tools that, to them, made images feel and look like the moment the photographer remembered, not just what the lens recorded.

In that sense, today’s AI tools aren’t changing the why of editing; they’re merely changing the how. They make us faster. More efficient. Maybe even a little braver in how we shape our files. But the intention to guide the viewer, to evoke something real, hasn’t changed at all.

What Comes Next

The real power here isn’t in speed; it’s in insight. When the software starts recognizing not just pixels, but places, it invites us to edit differently. We make faster decisions. Or more confident ones. Or sometimes, we see something in the file we hadn’t noticed before.

This isn’t the end of craftsmanship. It’s a new chapter in the relationship between vision and tools. And like any new chapter, it asks us to stay awake, curious, and in control. That’s because the machine may see the landscape, but it’s still your story to tell.