Lockheed Martin successfully tested the ability of its F-35 fighter plane to use artificial intelligence-enhanced targeting in flight, the U.S. defense giant announced Monday.

The test, dubbed Project Overwatch, was conducted at Nellis Air Force Base in Nevada. It incorporated an AI machine learning model into the warplane’s information control system.

The AI model generated data based on the plane’s surroundings and analyzed the information to present the pilot with potential targets.

It marked the first time a tactical AI model suggested a combat target to a fighter pilot independently, according to the company.

Jake Wertz, vice president of F‑35 Combat Systems at Lockheed Martin Aeronautics, said that Lockheed would continue to pursue AI-driven decision-making models to allow pilots to identify combat targets faster.

“Equally important is our ability to re‑program the AI model on the ground and have those updates available for the next sortie — an essential step toward maintaining a tactical edge in a rapidly evolving threat environment,” Wertz said in a statement.

The F-35 features advanced electronic warfare capabilities, a low-observable stealth profile and flexible firepower delivery, and can fly at speeds of up to about 1,200 miles per hour.

The use of AI on the fighter plane follows the April 2025 release of U.S. Air Force doctrine stating that AI will be integrated throughout the service as a force multiplier.

“AI will supercharge Intelligence, Surveillance, and Reconnaissance (ISR) by providing networked sensors capable of identifying hidden ‘needles in a haystack’ without prior threat knowledge,” the document noted.

However, the Air Force noted that its approach would be to use AI as a tool to “augment the performance of Airmen,” as AI “lacks context sensitivity and reasoning.”

“Military discretion lies with Airmen, but AI can enable faster and superior operational decisions,” the service stated.

A 2025 report published by Georgetown’s Center for Security and Emerging Technology noted that AI can have practical appeal from a military perspective but must be used with caution — with skewed data, such as from spoofing or corrupted AI models, being a potential risk.

“When it comes to adversaries, some are more adept at avoiding intelligence collection than others and some may be adept at active deception or data manipulation. Available data on some adversaries may be scarcer than for others,” the report notes.

To properly leverage AI technology, the report recommends that commanders be aware of weaknesses within these systems and “prepare themselves and their teams to use them correctly.”

Zita Ballinger Fletcher previously served as editor of Military History Quarterly and Vietnam magazines and as the historian of the U.S. Drug Enforcement Administration. She holds an M.A. with distinction in military history.
