Vehicle Aware AI is Halo’s differentiator: a proprietary intelligence layer that doesn’t just perceive the world but also understands the vehicle itself. By continuously monitoring factors such as payload, tyre condition, vehicle age, maintenance status, fuel efficiency, and energy consumption, the system builds a dynamic cost-per-mile and emissions profile for every journey. This allows fleets to optimize routes not only for speed but also for efficiency, asset longevity, and sustainability. From a commercial standpoint, Vehicle Aware AI transforms HALO from a driverless kit into a profit-optimizing platform. Every decision, from throttle modulation to braking patterns, is informed by a deep awareness of both external conditions and internal vehicle health.
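As an illustration of how vehicle-aware factors could feed a dynamic cost-per-mile figure, the sketch below combines payload, tyre wear, fuel efficiency, and maintenance status into a single per-mile estimate. The field names, coefficients, and fuel price are illustrative placeholders, not Halo’s actual model.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Snapshot of internal vehicle health and load (illustrative fields)."""
    payload_kg: float
    tyre_wear_pct: float      # 0 = new, 100 = fully worn
    fuel_l_per_100km: float   # baseline fuel efficiency
    maintenance_overdue: bool

def cost_per_mile(state: VehicleState,
                  fuel_price_per_l: float = 1.50,
                  base_wear_cost: float = 0.08) -> float:
    """Estimate a dynamic cost-per-mile from vehicle-aware factors.

    Payload and tyre wear scale fuel burn; overdue maintenance adds a
    small risk surcharge. All coefficients are placeholders.
    """
    # Heavier payloads and worn tyres increase rolling resistance.
    payload_factor = 1.0 + state.payload_kg / 10_000
    tyre_factor = 1.0 + state.tyre_wear_pct / 200
    litres_per_mile = state.fuel_l_per_100km / 100 * 1.609
    fuel_cost = litres_per_mile * fuel_price_per_l * payload_factor * tyre_factor
    surcharge = 0.05 if state.maintenance_overdue else 0.0
    return round(fuel_cost + base_wear_cost + surcharge, 4)
```

A route optimizer could then score candidate routes by distance multiplied by this per-mile cost rather than by travel time alone.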
The Halo Main Autonomy Piece (HMAP) is the brain of the HALO system — orchestrating perception, planning, and control into a unified Level 4 autonomy stack. HMAP integrates verified components including camera buffers, LiDAR point clouds, GNSS feeds, and IMU calibration — all of which are already tested and verified within our current prototype environment. Dependencies such as dataset readiness and repository integration have been resolved, with additional work underway on backbone training to further strengthen robustness. This tested foundation ensures HALO’s autonomy core is not just theoretical but proven in real-world conditions. By aligning perception with control logic through HMAP, fleets gain a reliable, high-performing autonomy solution designed to scale across multiple vehicle platforms.
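To make the orchestration idea concrete, here is a minimal sketch of a perception, planning, and control pipeline run as one cycle. The stage names and dict-based frame format are assumptions for illustration and do not reflect HMAP’s actual interfaces.

```python
from typing import Callable

class AutonomyPipeline:
    """Minimal sketch of a perception -> planning -> control cycle.

    Each stage receives the shared frame, enriches it, and passes it on;
    this mirrors the orchestration role described for HMAP, not its API.
    """
    def __init__(self):
        self.stages: list[tuple[str, Callable[[dict], dict]]] = []

    def add_stage(self, name, fn):
        self.stages.append((name, fn))
        return self

    def tick(self, frame: dict) -> dict:
        # Run one control cycle: stages execute in registration order.
        for name, fn in self.stages:
            frame = fn(frame)
        return frame

# Illustrative stages standing in for sensor fusion, trajectory
# planning, and drive-by-wire actuation.
pipeline = (AutonomyPipeline()
            .add_stage("perception", lambda f: {**f, "obstacles": 2})
            .add_stage("planning",   lambda f: {**f, "speed_mps": 5.0 if f["obstacles"] else 15.0})
            .add_stage("control",    lambda f: {**f, "throttle": f["speed_mps"] / 30}))

result = pipeline.tick({"gnss": (51.5, -0.12)})
```

The chained-stage design keeps perception decoupled from control logic, which is what lets the same stack be retargeted across vehicle platforms.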
RIGCONFIG is the commercialization enabler for Halo. It connects the autonomy software with real-world vehicle hardware through a proven integration layer. Modules including MCM configurations, steering calibrations, override thresholds, and CAN parsing are complete, while core drive-by-wire functions for brake, throttle, and steering have been successfully integrated. Even high-risk areas such as gear control timing have been patched and validated, reducing technical uncertainty. For fleet operators and OEM partners, this means HALO can be deployed onto existing vehicles with minimal engineering overhead.
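As a sketch of the kind of calibration record such an integration layer might carry, the example below models a drive-by-wire configuration with override thresholds and validates it before it would reach the vehicle. Field names and ranges are hypothetical, not RIGCONFIG’s actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RigConfig:
    """Illustrative drive-by-wire calibration record.

    Field names are assumptions for this sketch, not RIGCONFIG's schema.
    """
    steering_ratio: float        # steering-wheel angle per road-wheel angle
    brake_max_pct: float         # actuation ceiling, 0-100
    throttle_max_pct: float      # actuation ceiling, 0-100
    override_torque_nm: float    # driver torque that triggers handback

    def validate(self) -> None:
        # Reject out-of-range calibrations before they reach the MCM.
        if not (0 < self.brake_max_pct <= 100):
            raise ValueError("brake_max_pct out of range")
        if not (0 < self.throttle_max_pct <= 100):
            raise ValueError("throttle_max_pct out of range")
        if self.override_torque_nm <= 0:
            raise ValueError("override threshold must be positive")

cfg = RigConfig(steering_ratio=15.2, brake_max_pct=80.0,
                throttle_max_pct=60.0, override_torque_nm=3.5)
cfg.validate()
```

Validating at load time rather than at actuation time is one way to keep per-vehicle deployment a configuration task instead of an engineering one.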
Halo’s Sensor Fusion is already demonstrating significant maturity, with BEVFusion Transformer strategies implemented and datasets validated. The system actively combines multi-camera images, LiDAR point clouds, IMU, and GNSS signals to create a comprehensive 360° environmental model. Milestones such as multi-cam synchronization and LiDAR integration have been tested and verified, while dependencies including large-scale datasets like NuScenes (1.4TB) are fully prepared and integrated. Ongoing validation ensures backbone training continues to refine 3D object detection, even in edge-case scenarios. This level of progress moves sensor fusion beyond R&D: it is a proven, scalable technology that underpins HALO’s ability to deliver safe, reliable, and intelligent autonomy under all conditions.
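One concrete building block of multi-sensor fusion is time alignment: pairing each LiDAR sweep with the nearest camera frame before the modalities are combined. The sketch below shows a simple nearest-timestamp policy; the tolerance and matching rule are assumptions, not Halo’s implementation.

```python
import bisect

def sync_to_lidar(lidar_ts, cam_ts, tol_s=0.05):
    """Pair each LiDAR sweep timestamp with the nearest camera frame.

    A simplified stand-in for multi-cam/LiDAR synchronization: sweeps
    with no camera frame within `tol_s` seconds are dropped rather than
    matched to stale imagery.
    """
    cam_ts = sorted(cam_ts)
    pairs = []
    for t in lidar_ts:
        i = bisect.bisect_left(cam_ts, t)
        # Candidates: the frame just before and just after the sweep.
        candidates = cam_ts[max(i - 1, 0):i + 1]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= tol_s:
            pairs.append((t, best))
    return pairs
```

Only the matched pairs would be handed to the fusion backbone, so a dropped frame degrades coverage gracefully instead of corrupting the 360° model.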
Halo's Training programme is built on a rigorous foundation of both simulated and real-world datasets. Critical training milestones are underway, including LiDAR-only backbone training and multi-camera Swin image model support, both of which are progressing with dependencies resolved. CUDA-BEVFusion networks provide a library of models to benchmark and optimize HALO’s autonomy stack, with ongoing validation to ensure continuous improvement. Each training cycle sharpens object recognition, trajectory prediction, and control responsiveness, making Halo more adaptable to real-world challenges. By grounding its AI models in both proven datasets and proprietary fleet data, HALO delivers an evolving autonomy stack that consistently pushes performance, safety, and efficiency benchmarks for next-generation logistics.
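As a toy stand-in for a training cycle, the loop below fits a linear model by gradient descent and records the loss at each epoch, mirroring how successive cycles refine a model against a dataset. The data, model, and hyperparameters are illustrative only and bear no relation to the real backbone training.

```python
def train_cycle(data, epochs=50, lr=0.1):
    """Toy gradient-descent loop: fit y = w*x + b to (x, y) pairs.

    Returns the learned parameters and the per-epoch mean squared loss,
    so each "cycle" can be checked for measurable improvement.
    """
    w, b = 0.0, 0.0
    losses = []
    for _ in range(epochs):
        grad_w = grad_b = loss = 0.0
        for x, y in data:
            err = w * x + b - y
            loss += err * err
            grad_w += 2 * err * x
            grad_b += 2 * err
        n = len(data)
        # Record loss before the update, then step down the gradient.
        losses.append(loss / n)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b, losses
```

Tracking the loss curve per cycle is the same discipline, at miniature scale, as benchmarking candidate networks against a model library before promoting one into the stack.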
Visualisation is where Halo’s intelligence becomes tangible. It transforms complex streams of sensor data, vehicle telemetry, and AI decision-making into real-time dashboards that track performance, learning progress, and operational efficiency. Every data point — from camera perception and LiDAR mapping to payload influence, energy use, and cost-per-mile — is combined into a single, intuitive interface. This enables operators to monitor not just where a driverless vehicle is, but how the autonomy stack is learning, adapting, and improving in real time. For fleet managers, this means unprecedented visibility into the health of their operations, empowering faster decisions, predictive insights, and seamless integration with logistics workflows.
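As a small illustration of how mixed telemetry might be flattened into a single dashboard line, the sketch below formats position, energy use, cost-per-mile, and perception output into one row. The field names and layout are hypothetical, not Halo’s interface.

```python
def dashboard_row(telemetry: dict) -> str:
    """Flatten mixed telemetry into one human-readable dashboard line.

    Field names are illustrative stand-ins for the streams described
    above (GNSS position, energy use, cost-per-mile, perception output).
    """
    return (f"pos={telemetry['gnss']}  "
            f"energy={telemetry['kwh_per_mile']:.2f} kWh/mi  "
            f"cost={telemetry['cost_per_mile']:.2f}/mi  "
            f"objects={telemetry['detected_objects']}")
```

In practice each row would be one tick of a live stream, letting an operator watch cost and perception metrics move together as the vehicle drives.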