Human-Centric AI in Defence: Bridging Ethics, Technology and Strategy

Artificial intelligence (AI) is swiftly reshaping the defense arena, promising enhanced operational capabilities, improved decision-making, and reduced risks to human lives. However, the integration of AI into military systems also raises profound ethical, legal, and practical questions. Central to these debates is the concept of human-centricity: the principle that humans must remain at the core of AI-enabled systems, both as decision-makers and as beneficiaries of these technologies. The UK Ministry of Defence’s (MOD) JSP 936 Directive[1] advances this principle by providing a robust framework for the ethical and dependable use of AI in defense.

This blog post explores the broader implications of human-centric AI in defense, focusing on human-machine teaming (HMT), ethical challenges, and the role of JSP 936 in shaping the future of military AI.

The Rise of Human-Centric AI in Defense

Human-centric AI prioritizes human values, needs, and well-being in the design, development, and deployment of AI systems. In the defense sector, this approach ensures that technological advancements align with ethical principles such as accountability, transparency, fairness, and respect for human rights. The MOD’s JSP 936 Directive underscores this commitment by embedding human-centricity into every phase of the AI lifecycle.

The rationale for a human-centric approach is clear. As AI systems increasingly take on critical roles in areas such as surveillance, decision support, and autonomous operations, there is a growing need to ensure that these systems enhance—rather than undermine—human agency. For example, while AI can process vast amounts of data to identify threats or optimize logistics, final decisions must remain under meaningful human control to avoid ethical pitfalls and maintain accountability.[2]
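
To make “meaningful human control” concrete, the sketch below shows one common design pattern, a human-in-the-loop approval gate: the AI component may only recommend an action, and a named operator must explicitly approve it before anything is executed, with each step written to an audit log. This is a minimal illustration of the pattern, not anything prescribed by JSP 936; all names here (Recommendation, execute_with_human_control, and so on) are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Recommendation:
        """An AI-generated recommendation; the system never acts on it directly."""
        action: str
        confidence: float  # model confidence in [0, 1]
        rationale: str     # human-readable summary shown to the operator

    def audit(event: str) -> None:
        # Append-only record so responsibility can be traced after the fact.
        print(f"{datetime.now(timezone.utc).isoformat()} AUDIT {event}")

    def execute_with_human_control(rec: Recommendation, operator: str) -> bool:
        """The AI recommends; a named human decides. Returns True if executed."""
        audit(f"AI recommended '{rec.action}' (confidence={rec.confidence:.2f})")
        print(f"Rationale: {rec.rationale}")
        answer = input(f"[{operator}] Approve '{rec.action}'? (yes/no): ").strip().lower()
        if answer == "yes":
            audit(f"{operator} APPROVED '{rec.action}'")
            # ...dispatch the approved action here...
            return True
        audit(f"{operator} REJECTED '{rec.action}'; no action taken")
        return False  # the default is always inaction, never autonomous execution

    rec = Recommendation("flag contact for inspection", 0.87,
                         "track matches known pattern; low civilian traffic nearby")
    execute_with_human_control(rec, operator="duty officer")

The key design choice is the default: if the operator rejects or does nothing, the system does nothing, so no accountability gap can open between recommendation and action.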

Human-Machine Teaming: A New Paradigm

One of the most promising applications of human-centric AI in defense is human-machine teaming (HMT). HMT involves integrating AI systems with human decision-makers to leverage their respective strengths. While humans excel at contextual judgment and ethical reasoning, machines offer unparalleled speed and precision in data analysis. Together, they can create a synergy that enhances situational awareness, operational efficiency, and mission success.[3]

The MOD’s emphasis on HMT reflects its recognition of the critical role humans play in ensuring the reliability and accountability of AI systems. For instance, JSP 936 mandates that all AI-enabled systems be designed to support meaningful human control throughout their lifecycle. This includes clear delineations of responsibility between humans and machines to prevent accountability gaps.

However, effective HMT requires more than just technological integration. It demands a cultural shift within defense organizations to build trust between humans and their machine counterparts. Training programs must focus on helping personnel understand AI system behavior, limitations, and risks. Collaborative training exercises where humans and machines learn from each other are particularly valuable for fostering this trust.[4]

Ethical Challenges in Military AI

The integration of AI into military operations raises several ethical challenges that cannot be ignored. These include:

  • Accountability: Who is responsible when an AI system makes an error? JSP 936 addresses this by requiring clear governance structures that assign responsibility at every stage of the AI lifecycle.
  • Bias: Unintended biases in AI algorithms can lead to discriminatory outcomes or flawed decision-making.[5] The MOD mandates proactive measures to identify and mitigate such biases during system development; one such check is sketched below.
  • Transparency: Many advanced AI systems operate as “black boxes,” making it difficult for users to understand how decisions are made.[6] This lack of transparency can erode trust and complicate oversight.
  • Autonomy: As autonomous systems become more prevalent, ensuring that they operate within ethical and legal boundaries becomes increasingly complex.[7] JSP 936 emphasizes the importance of maintaining meaningful human control over all autonomous capabilities.

These challenges highlight the need for robust ethical frameworks like JSP 936 to guide the development and deployment of military AI.
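
As a concrete illustration of the bias challenge above, one routine mitigation step is to compare a model’s error rates across relevant subgroups before deployment; a large gap in, for example, false-positive rates is a red flag. The sketch below is a generic fairness check over invented data and group labels, not a procedure taken from JSP 936.

    from collections import defaultdict

    def false_positive_rate_by_group(records):
        """records: iterable of (group, predicted_positive, actually_positive)."""
        fp = defaultdict(int)   # false positives per group
        neg = defaultdict(int)  # actual negatives per group
        for group, predicted, actual in records:
            if not actual:
                neg[group] += 1
                if predicted:
                    fp[group] += 1
        return {g: fp[g] / neg[g] for g in neg}

    # Invented evaluation data: (group, model_says_threat, is_actually_threat).
    records = [
        ("region_a", True, False), ("region_a", False, False),
        ("region_a", False, False), ("region_a", True, True),
        ("region_b", True, False), ("region_b", True, False),
        ("region_b", False, False), ("region_b", True, True),
    ]

    rates = false_positive_rate_by_group(records)
    print(rates)  # approximately {'region_a': 0.33, 'region_b': 0.67}
    if max(rates.values()) - min(rates.values()) > 0.2:  # threshold is illustrative
        print("WARNING: false-positive rates diverge across groups; investigate before fielding")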

JSP 936: A Framework for Ethical Military AI

The MOD’s JSP 936 Directive serves as a comprehensive roadmap for integrating dependable AI into defense operations while adhering to ethical principles. Building on the MOD’s Ambitious, Safe, Responsible (ASR) Policy[8], it outlines five key principles: Human-Centricity, Responsibility, Understanding, Bias and Harm Mitigation, and Reliability.

At its core, JSP 936 seeks to ensure that all AI-enabled systems are designed with human oversight and accountability in mind. This includes:

  • Conducting ethical risk assessments for all AI projects (a hypothetical record structure is sketched after this list).
  • Establishing governance mechanisms such as appointing Responsible AI Senior Officers (RAISOs) within each organization.
  • Developing training programs to enhance personnel competency in managing AI technologies.
  • Ensuring compliance with national and international legal standards.
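
To give a flavour of the first item, the sketch below models an ethical risk assessment as a structured, reviewable record with a named accountable officer. JSP 936 does not prescribe any such schema; every field, rule, and name here is a hypothetical illustration.

    from dataclasses import dataclass
    from enum import Enum

    class RiskLevel(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass
    class EthicalRiskAssessment:
        """Hypothetical record for an AI project's ethical risk review."""
        project: str
        raiso: str  # the accountable Responsible AI Senior Officer
        human_control_measures: list[str]
        identified_biases: list[str]
        risk_level: RiskLevel
        legal_review_complete: bool = False

        def cleared_to_proceed(self) -> bool:
            # Illustrative rule: high-risk projects always need legal sign-off,
            # and no project proceeds without human-control measures in place.
            if self.risk_level is RiskLevel.HIGH and not self.legal_review_complete:
                return False
            return bool(self.human_control_measures)

    assessment = EthicalRiskAssessment(
        project="logistics-optimiser",
        raiso="J. Example",
        human_control_measures=["operator approval gate", "audit logging"],
        identified_biases=["training data skewed towards one theatre"],
        risk_level=RiskLevel.MEDIUM,
    )
    print(assessment.cleared_to_proceed())  # True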

By embedding these principles into its operations, the MOD aims to foster trust in military AI among stakeholders—including allied nations—while maintaining its strategic edge.

Broader Implications for International Defense Cooperation

The principles outlined in JSP 936 have significant implications beyond the UK’s borders. As military alliances like NATO increasingly rely on interoperable technologies, aligning ethical standards across member states becomes crucial. Disparities in how nations approach issues like accountability or bias could undermine coalition operations or erode trust among allies.[9]

To address this challenge, JSP 936 emphasizes collaboration with international partners to develop shared norms for responsible military AI use. For example, it aligns closely with NATO’s AI strategy.[10] Such alignment not only enhances interoperability but also reinforces democratic values in an era where authoritarian regimes may exploit AI unethically.

The Future of Human-Centric Defense Systems

Looking ahead, human-centricity will remain a cornerstone of military innovation. Advances in areas like explainable AI (XAI) are expected to improve transparency by making machine decision-making processes more understandable to humans.[11] Meanwhile, ongoing research into HMT will explore new ways to optimize collaboration between humans and machines.
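
One widely used XAI technique that illustrates this is permutation feature importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, giving a rough, model-agnostic picture of which inputs actually drive its decisions. The sketch below applies it to an invented toy model; it is a generic textbook method, not one mandated by JSP 936.

    import random

    # Toy stand-in for a trained classifier: flags a contact when both speed
    # and signature are high. Hand-written so the example has no dependencies.
    def model(speed, signature, noise):
        return speed > 0.5 and signature > 0.5

    FEATURES = ["speed", "signature", "noise"]

    def accuracy(rows, labels):
        return sum(model(*r) == y for r, y in zip(rows, labels)) / len(rows)

    def permutation_importance(rows, labels, idx, trials=20):
        """Average accuracy drop when feature column `idx` is shuffled."""
        base = accuracy(rows, labels)
        drops = []
        for _ in range(trials):
            column = [r[idx] for r in rows]
            random.shuffle(column)
            shuffled = [r[:idx] + (v,) + r[idx + 1:] for r, v in zip(rows, column)]
            drops.append(base - accuracy(shuffled, labels))
        return sum(drops) / trials

    random.seed(0)
    rows = [(random.random(), random.random(), random.random()) for _ in range(200)]
    labels = [model(*r) for r in rows]  # ground truth generated by the model itself

    for i, name in enumerate(FEATURES):
        print(f"{name}: importance = {permutation_importance(rows, labels, i):.3f}")
    # speed and signature show clear drops; noise stays near zero, exposing
    # which inputs the "black box" really relies on.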

However, achieving these goals requires sustained investment in education, training, and research. Defense organizations must prioritize building a workforce capable of navigating the complexities of modern military technology while upholding ethical standards.

Conclusion

The integration of artificial intelligence into defense is not merely a technological endeavor—it is an ethical imperative that demands careful consideration of how these systems impact humanity. By adopting a human-centric approach as outlined in JSP 936, the UK Ministry of Defence sets a high standard for balancing innovation with responsibility.

As militaries worldwide grapple with similar challenges, frameworks like JSP 936 offer valuable lessons on how to harness the transformative potential of AI without compromising ethical principles or operational integrity. In doing so, they pave the way for a future where technology serves humanity—not the other way around.


[1] https://assets.publishing.service.gov.uk/media/6735fc89f6920bfb5abc7b62/JSP936_Part1.pdf

[2] https://isij.eu/system/files/download-count/2024-11/5551_Human-centered_AI.pdf

[3] https://therepublicjournal.com/journal/human-cognitive-autonomy/

[4] https://www.atlanticcouncil.org/wp-content/uploads/2023/08/Battlefield-Applications-for-HMT.pdf

[5] https://blogs.icrc.org/law-and-policy/2024/09/24/transcending-weapon-systems-the-ethical-challenges-of-ai-in-military-decision-support-systems/

[6] https://builtin.com/articles/black-box-ai

[7] https://www.dst.defence.gov.au/publication/ethical-ai

[8] https://www.gov.uk/government/publications/ambitious-safe-responsible-our-approach-to-the-delivery-of-ai-enabled-capability-in-defence/ambitious-safe-responsible-our-approach-to-the-delivery-of-ai-enabled-capability-in-defence

[9] https://cset.georgetown.edu/publication/responsible-and-ethical-military-ai/

[10] https://www.nato.int/cps/en/natohq/official_texts_227237.htm

[11] https://www.f5.com/company/blog/crucial-concepts-in-ai-transparency-and-explainability
