Horizon3.ai, the AI-native proactive security leader, announced new research that addresses one of the most critical barriers to adopting AI in cybersecurity: making autonomous defense systems predictable, controllable, and safe for real-world deployment.
AI-powered security tools have long promised speed and adaptability, but security teams have been reluctant to trust them in production. The issue is not intelligence. It is unpredictability. When facing adaptive attackers, even highly capable AI agents can behave in ways that introduce operational risk.
Horizon3.ai’s research addresses this challenge with a new tool-mediated architecture that makes stability a property of the system itself, not the underlying AI model. The AI remains responsible for strategy, but every action is constrained to a finite, pre-approved catalog and executed through deterministic, validated tools. This ensures the system remains controllable and stable, even under adversarial pressure.
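The release does not publish implementation details, but the core idea of a tool-mediated architecture can be sketched in a few lines. In this illustrative example (all names are hypothetical, not Horizon3.ai's actual code), the AI planner may only propose actions drawn from a finite, pre-approved catalog, and each approved action runs through a deterministic tool function:

```python
# Illustrative sketch of a tool-mediated action gate. The catalog and
# tool functions here are hypothetical stand-ins, not the vendor's API.
from typing import Callable, Dict

# Finite, pre-approved catalog: action name -> deterministic tool.
APPROVED_CATALOG: Dict[str, Callable[[dict], str]] = {
    "block_ip": lambda p: f"blocked {p['ip']}",
    "quarantine_host": lambda p: f"quarantined {p['host']}",
}

def execute(action: str, params: dict) -> str:
    """Reject any action outside the catalog; run approved tools deterministically."""
    if action not in APPROVED_CATALOG:
        raise PermissionError(f"action {action!r} is not pre-approved")
    return APPROVED_CATALOG[action](params)
```

Under this pattern, no matter what the model proposes, only catalog entries can ever execute, which is what makes stability a property of the system rather than of the model.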
Drawing on a training dataset derived from 250,000 real pentests, the research was tested and validated across 161 organizations in 25 industries, demonstrating measurable impact and repeatable behavior across production configurations.
The research includes formal mathematical proofs, grounded in control theory and game theory, showing that the system always gets stronger with every policy change, remains stable even when attackers try new tactics, and steadily builds a more accurate picture of the network.
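The proofs themselves are not reproduced in the announcement, but the first guarantee can be phrased as a monotone-improvement property familiar from control theory. The notation below is illustrative, not Horizon3.ai's own:

```latex
% V(\pi_t): a measure of defensive posture under policy \pi_t at step t.
% Monotone improvement: every accepted policy change is non-degrading.
V(\pi_{t+1}) \;\ge\; V(\pi_t) \quad \text{for all } t \ge 0.
```

Stability under new attacker tactics would then correspond to showing that this property holds for every admissible attacker strategy, not just the ones seen during training.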
Most importantly, this enables a new operational capability: safe, automatic tuning of critical defenses in live environments. The NodeZero® AI-native Proactive Security Platform can now autonomously adjust EDR policies, including Microsoft Defender, with the assurance that changes will not degrade the overall defensive posture.
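One simple way to guarantee that automatic tuning never degrades posture is to evaluate each candidate change against a posture metric and commit it only if the score does not decrease. The sketch below uses hypothetical names and a toy metric; it is an assumption about the mechanism, not the NodeZero implementation:

```python
# Illustrative non-degrading policy update: accept a change only if the
# posture score of the candidate policy is at least the current score.
from typing import Callable, Dict

def tune_policy(policy: Dict[str, bool],
                change: Dict[str, bool],
                score: Callable[[Dict[str, bool]], float]) -> Dict[str, bool]:
    """Return the updated policy if it scores at least as well, else keep the original."""
    candidate = {**policy, **change}
    return candidate if score(candidate) >= score(policy) else policy

# Toy posture metric: count of enabled protections.
def posture(p: Dict[str, bool]) -> float:
    return float(sum(p.values()))
```

With this guard, a helpful change (enabling tamper protection, say) is committed, while a change that would weaken the configuration is silently rejected, so the overall posture is non-decreasing by construction.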
“Security teams have been waiting for AI that can match an attacker’s creativity without introducing operational risk. We’ve delivered that. By combining powerful AI reasoning with tightly constrained, pre-approved actions, we’ve made autonomous defense not just intelligent, but predictable, controllable, and provably stable for live production environments. This changes the game: organizations can now safely let AI continuously tune and strengthen their defenses in real time,” said Snehal Antani, CEO of Horizon3.ai.
The findings also challenge the assumption that only the most advanced AI models can deliver reliable results. The research shows that while more capable models can improve performance, safety and stability come from the architecture itself. Even smaller, more cost-efficient models can operate safely within this framework, enabling organizations to deploy AI-driven defense without relying on expensive frontier models.
The work marks a significant step toward fully autonomous learning loops between AI attackers and AI defenders, grounded in real-world attack data rather than synthetic environments. It establishes a practical foundation for deploying AI-driven cyber defense systems that are both effective and trustworthy at scale.
Copyright 2026, AI Reporter America. All rights reserved.