When we launched Cyberdefenseinsights on January 12, 2024, our premise was simple: modern conflict no longer cleanly separates cyberspace from the physical battlefield. That premise has guided our reporting, toolkits, and policy analyses over the past two years, and it remains the frame through which we evaluate risk and prescribe defensive steps.
Milestone one: focused beats, not scattershot coverage. From the first months we committed to three core beats that reflect the convergence of cyber and kinetic domains: network and firmware vulnerabilities that affect deployed systems, cyber-physical threats to aerial platforms and their ground infrastructures, and the role of artificial intelligence in threat detection and decision support. These beats let us produce deeper investigations rather than surface-level recaps.
Milestone two: investigative series that tied technical detail to operational risk. Our series on unmanned aircraft systems highlighted attack surfaces that go beyond open radio links. We emphasized that drones are also ICT devices with firmware and telemetry paths that can expose operators and supporting networks. Those concerns are consistent with public guidance from civil authorities that treat UAS as both physical and cyber risk vectors.
Milestone three: practical defense resources. Early on we published downloadable checklists and a hands-on laboratory guide for assessing UAS data hygiene and supply chain risk. The guidance we leaned on and amplified matches official efforts to raise awareness around foreign-manufactured UAS and supply chain weaknesses. The U.S. Cybersecurity and Infrastructure Security Agency has, for example, published materials intended to help operators and infrastructure owners reduce risk from insecure UAS and related peripherals.
Milestone four: coverage and critique of AI adoption in defense. We tracked policy moves and audits that signaled rapid AI adoption across defense institutions while also highlighting implementation gaps and workforce challenges. Public releases from the Department of Defense on its Data, Analytics, and AI adoption roadmap informed our view that adoption must be matched by governance, staffing, and acquisition reform. Independent oversight reports have also pointed to persistent gaps in workforce identification and acquisition guidance that will shape how safely and effectively AI gets fielded.
Milestone five: community building and ethical hacking outreach. We ran virtual workshops that connected ethical hackers with defense-focused operators and IR teams. Those workshops prioritized responsible disclosure, reproducible testing, and a constructive pipeline from vulnerability discovery to mitigation. Building practical bridges between researchers and operators remains a top priority, because real-world defenders rarely have the luxury of treating tradeoffs as merely theoretical.
Milestone six: policy engagement. Over two years we published multiple policy memos advocating layered security for aerial systems, clearer supplier vetting, and adoption of risk principles for AI in operational contexts. That advocacy is grounded in the reality that policy documents and executive guidance have been evolving. Public policy steps at the federal level, including executive guidance on AI and related interagency work, underscored the need for defense-specific guardrails as adoption accelerates.
What we learned and what we warn about. First, defense organizations need realistic threat models that include supply chain and telemetry exposure. Treating a drone only as an aircraft misses the point that its companion devices, software update processes, and telemetry flows are attractive targets. Second, rushing AI tools into operations without aligned acquisition and workforce practices increases the chance of brittle or unsafe outcomes. Public oversight and audit findings show there is much work to do to institutionalize AI responsibility inside defense organizations.
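To make the supply-chain point concrete: treating a drone's update process as part of the threat model means, at minimum, verifying a firmware image against a vendor-published digest before flashing it. The function names below are illustrative, not drawn from any specific vendor tooling; a minimal sketch:

```python
import hashlib
import hmac

def sha256_of(image: bytes) -> str:
    """Return the hex SHA-256 digest of a firmware image."""
    return hashlib.sha256(image).hexdigest()

def firmware_trusted(image: bytes, known_good_digest: str) -> bool:
    """Accept the image only if its digest matches the published value.

    A mismatch can mean corruption in transit or a tampered supply chain;
    either way, the image should not be flashed. compare_digest avoids
    timing side channels when comparing the two hex strings.
    """
    return hmac.compare_digest(sha256_of(image), known_good_digest.lower())
```

A real pipeline would also verify a cryptographic signature over the image, not just a hash, but even this check blocks the most common silent-substitution failure mode.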
Where we go from here. In year three we will double down on three priorities. First, more technical playbooks for operational teams showing how to test firmware, segregate telemetry, and lock down ground-side data flows. Second, deeper case studies of incidents and near-miss analyses to make lessons concrete and implementable. Third, a practical series on AI governance for small program offices that lack large acquisition teams but still field AI-assisted capabilities.
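The playbook item on segregating telemetry and locking down ground-side data flows can be reduced to a simple rule: ground-station software forwards telemetry only to destinations on a vetted allowlist. The network ranges below are hypothetical placeholders; a minimal sketch using only the standard library:

```python
import ipaddress

# Hypothetical allowlist: the only networks ground-side telemetry may reach.
TELEMETRY_ALLOWLIST = [
    ipaddress.ip_network("10.20.0.0/16"),    # segregated telemetry VLAN
    ipaddress.ip_network("192.168.50.0/24"), # on-site analysis enclave
]

def egress_permitted(dest_ip: str) -> bool:
    """Return True only if the destination sits inside an approved network."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in TELEMETRY_ALLOWLIST)
```

In practice this belongs in firewall policy rather than application code, but embedding the same check in ground-station software gives defense in depth and a loggable decision point.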
A final caution. Progress in tools and policy is real and important, but the asymmetry in cyberspace favors persistent, low-cost probing against complex systems. Defense professionals should assume probing will continue, increase detection and logging around air systems and their supporting networks, and fund exercises that validate containment and recovery. Cybersecurity is not a one-off fix. It is the steady, often tedious work of reducing windows of exposure and making recovery reliable.
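In the spirit of "assume probing will continue," even a crude rate check over connection logs will surface the persistent, low-cost scanning described above. The window and threshold below are illustrative placeholders, not tuned values; a minimal sketch:

```python
from collections import defaultdict

def probing_sources(events, window_s=60, threshold=20):
    """Flag source IPs exceeding `threshold` connection attempts within
    any `window_s`-second sliding window.

    `events` is an iterable of (timestamp_seconds, source_ip) tuples,
    assumed sorted by timestamp.
    """
    attempts = defaultdict(list)  # source_ip -> timestamps inside the window
    flagged = set()
    for ts, src in events:
        recent = attempts[src]
        recent.append(ts)
        # Drop attempts that have aged out of the window.
        while recent and ts - recent[0] > window_s:
            recent.pop(0)
        if len(recent) > threshold:
            flagged.add(src)
    return flagged
```

Production detection would use a SIEM with tuned baselines per sensor, but a check this simple, run continuously around air systems and their supporting networks, already turns "probing happened" from a surprise into a logged event.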
Thank you to the researchers, practitioners, and readers who have engaged with our work. If you have a lesson, incident study, or research project you want to see amplified, reach out. We intend to keep this publication practical, rigorous, and attuned to the messy way digital and kinetic threats intersect.