After an extended absence focused on FSIA litigation, I am resuming the blog with posts on various legal topics, starting with a series exploring how the Foreign Sovereign Immunities Act might apply to autonomous AI agents. This framework illuminates broader questions about attribution, liability, and jurisdiction in the age of AI.
Agentic AI remains in its infancy. Yet AI agents are going to play an increasingly important role in human affairs over the next decade and beyond, with profound legal implications. This post uses the FSIA’s tort exception to examine how Agentic AI intersects with issues of jurisdiction, liability, and sovereign immunity. The central question is this: if a foreign sovereign’s autonomous AI commits a tort in the United States, can the sovereign be sued under the tort exception? As we will see, the answer may be no.
What Is Agentic AI?
Agentic AI refers to artificial intelligence that can operate autonomously or semi-autonomously, “interact[ing] with [its] environment dynamically[]” to “engage in reasoning, make decisions, and take actions based on the inputs [it] process[es].” Stanford Online, Enhancing Your Understanding of Agentic AI: A Practical Guide. As one technical analysis explains, Agentic AI enhances Generative AI by adding capabilities for tool usage, memory access, and reinforcement learning, incorporating concepts of agents and planning to enable more sophisticated interaction and reasoning. Johannes Schneider, Generative to Agentic AI: Survey, Conceptualization, and Challenges. With large technology companies investing heavily to build Agentic AI, the ubiquity of AI “agents” is just a matter of time.
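For readers who want a concrete picture of the mechanics, the sketch below illustrates the basic loop in Python. Everything in it is hypothetical and drastically simplified: the ToyAgent class, the stubbed decide step standing in for a model call, and the placeholder tools are my own illustrations rather than any vendor’s actual API. Still, it captures the cycle of deciding, acting through tools, and updating memory that distinguishes an “agent” from a model that merely generates text.

    # Minimal, purely illustrative agent loop (hypothetical names, not a real API).
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class ToyAgent:
        tools: Dict[str, Callable[[str], str]]           # actions the agent can take
        memory: List[str] = field(default_factory=list)  # record of past steps

        def decide(self, goal: str) -> Tuple[str, str]:
            # Stand-in for a model call: a real agent would send the goal plus
            # its memory to an LLM and parse the chosen tool from the response.
            tool = "search" if "find" in goal else "calculate"
            return tool, goal

        def run(self, goal: str, max_steps: int = 3) -> str:
            result = ""
            for _ in range(max_steps):
                tool, arg = self.decide(goal)                        # reason / plan
                result = self.tools[tool](arg)                       # act via a tool
                self.memory.append(f"{tool}({arg!r}) -> {result}")   # remember
                if result:                                           # naive stopping rule
                    break
            return result

    agent = ToyAgent(tools={
        "search": lambda q: f"[stub search results for {q!r}]",
        "calculate": lambda q: "[stub computation]",
    })
    print(agent.run("find recent FSIA tort exception cases"))

Even in this toy version, the “decision” to act emerges from the loop itself rather than from a human approving each step. That autonomy is precisely what creates the attribution problems discussed below.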
Agentic AI and the Law
Given that Agentic AI remains largely in the development stage, there does not appear to be much case law addressing the legal consequences of using an AI agent. One notable case, however, is Mobley v. Workday, Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024), which addressed whether an AI vendor could be held liable for autonomous decisions made by its algorithmic screening tools.
Workday provided hiring platforms that allegedly “embed[ded] artificial intelligence . . . and machine learning . . . into its algorithmic decision-making tools, enabling these applications to make hiring decisions.” Mobley, 740 F. Supp. 3d at 802 (cleaned up). The plaintiff alleged these tools discriminated against him based on race, age, and disability, noting that he received rejection emails in the middle of the night, suggesting the decisions were automated. Id. at 807.
The court held that Workday could be held liable as an “agent” of its client-employers under federal anti-discrimination statutes. Mobley, 740 F. Supp. 3d at 806–08. Critically, the court rejected any distinction between human and AI decisionmakers: “Drawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era.” Id. at 807. The court emphasized that liability depends on the function performed, not the manner of performance. Id. at 807–08.
The Mobley court attributed the AI’s autonomous actions directly to Workday without separately addressing the precise legal relationship between Workday and its algorithmic tools. The court treated the AI as Workday’s means of performing delegated employment functions, but did not analyze whether the AI itself should be characterized as Workday’s “employee,” “agent,” or some other category.
While Mobley concerned the liability of the AI vendor as an agent of the employers, it left unanswered the more fundamental question: what is the legal status of the AI system itself? As we will see, the FSIA’s text makes this distinction critical.
AI “Agents” and the FSIA’s Tort Exception
Assuming a future scenario in which a foreign sovereign’s AI agent commits a tort in the United States, the plain language of 28 U.S.C. § 1605(a)(5) could prevent the assertion of jurisdiction over the sovereign under the FSIA’s tort exception.
The tort exception requires that the tortious act or omission be that of the “foreign state or of any official or employee of th[e] foreign state[.]” 28 U.S.C. § 1605(a)(5). In four recent cases, courts held that it is insufficient if the tortious conduct is committed by an “agent” of the foreign state. See Keenan v. Holy See, 686 F. Supp. 3d 810, 838 (D. Minn. 2023) (holding that “claims based on the actions of unnamed ‘agents’ are not sufficient to trigger the tort exception”); D.M. v. Apuron, 658 F. Supp. 3d 825, 847–48 (D. Guam 2023) (rejecting the argument that the tortious conduct of an “agent” can confer jurisdiction over a foreign sovereign under section 1605(a)(5)); Blecher v. Holy See, 631 F. Supp. 3d 163, 170 (S.D.N.Y. 2022), aff’d, 146 F.4th 206 (2d Cir. 2025) (same); Robles v. Holy See, No. 20-CV-2106 (VEC), 2021 WL 5999337, at *6 (S.D.N.Y. Dec. 20, 2021) (same). The courts’ reasoning was based in significant part on the language of the FSIA’s terrorism exception, which expressly extends to the conduct of “agent[s].” See 28 U.S.C. § 1605A(a)(1) (referring to “an official, employee, or agent of such foreign state while acting within the scope of his or her office, employment, or agency”) (emphasis added). Because “[t]he terms ‘agent’ and ‘agency’ appear in this terrorism exception but not in the tortious act exception set forth in Section 1605(a)(5)[,]” it “is reasonable to conclude that Congress intentionally excluded the tortious act of ‘agents’ from the scope of the tortious act exception.” D.M., 658 F. Supp. 3d at 848. [Disclosure: I served as counsel for the foreign sovereign defendant in all four of these cases.]
With regard to torts committed by AI “agents,” this limitation would preclude the assertion of jurisdiction over the foreign sovereign itself under section 1605(a)(5). If we apply Mobley, the third-party vendor would be an “agent” of the foreign sovereign, which is insufficient for purposes of the tort exception. As noted above, Mobley did not expressly address the relationship between Workday and the AI tool in that case. If the AI tool is deployed by the foreign sovereign itself, the AI could be deemed a sovereign “agent,” which again is not enough under section 1605(a)(5). But whether AI systems can properly be characterized as legal “agents”—a question I’ll take up in a future post—is far from obvious.
It seems unlikely that an AI agent could be deemed an “official” of a foreign sovereign. The sovereign presumably would not accord such status to an AI, and the AI would neither hold a public office nor be appointed to carry out some portion of the government’s sovereign powers. See, e.g., Gregory v. Ashcroft, 501 U.S. 452, 460 (1991) (“Through the structure of its government, and the character of those who exercise government authority, a State defines itself as a sovereign.”); Black’s Law Dictionary (defining “official” as “[s]omeone who holds or is invested with a public office” or “a person elected or appointed to carry out some portion of a government’s sovereign powers”).
In addition, an autonomous AI agent would likely not satisfy the “employee” requirement, since employment generally requires, inter alia, a form of “day-to-day” supervision or management. United States v. Orleans, 425 U.S. 807, 815 (1976); Leone v. United States, 910 F.2d 46, 49–50 (2d Cir. 1990). The foreign sovereign would also be unlikely to provide the instrumentalities or tools for the AI agent’s work; own the location of the work; have discretion over when and how long the AI agent would work on a given day; pay the AI agent’s salary or provide employee benefits; or treat the AI agent as an employee for tax purposes. See, e.g., Cmty. for Creative Non-Violence v. Reid, 490 U.S. 730, 751–52 (1989). The autonomous nature of an AI agent—operating without direct human supervision—undermines the control and direction that characterize an employment relationship.
In short, given the tort exception’s requirement that the tortfeasor be an “official” or “employee” of the foreign sovereign, it does not appear that the exception could provide a basis for jurisdiction over a sovereign for the tortious conduct of its AI agent.
Beyond the FSIA: Who Pays When AI Causes Harm?
While this issue might eventually arise under the FSIA—or under the Federal Tort Claims Act, which has similar language (28 U.S.C. § 1346(b)(1))—it highlights broader and more consequential questions that courts will soon confront. Who exactly will be liable for the torts of an AI agent? Will it be the third-party vendor? The vendor’s client? Will the AI be treated as an “agent” or an “employee” of either the vendor or the client? Perhaps most significantly, will the doctrine of respondeat superior—which, like the tort exception, requires the tort to be committed by an employee acting within the “scope of employment”—apply to Agentic AI at all? Or will courts need to develop entirely new frameworks for autonomous systems?
How courts resolve these issues will have consequences under the FSIA, but the implications for society will be far broader. As AI agents become prevalent in commercial, governmental, and personal contexts, the question of attribution will be among the most important legal issues of the coming decade. Future posts in this series will explore related questions, including how jurisdiction over AI-related torts might be established and whether existing liability frameworks can accommodate truly autonomous systems.