Public administration increasingly involves artificial intelligence, that is, computational processes designed to ‘learn’ from their data inputs. In this paper I argue that, when it comes to imagining AI’s impact on administration, we ought not to treat it as an exogenous shock to the system. Instead, we ought to treat it as an increasingly endogenous feature of administrative work, and indeed as one that will become increasingly unnoticed and, for better or worse, taken for granted. I set out this argument by addressing the sources of accountability. Specifically, I focus on the ‘teams’ from which informal accountability relationships emerge. These teams increasingly feature algorithms as non-human members. Such agents, albeit not in the same ways that people do, contribute to people negotiating and navigating their commitments, meanings, intentions, and actions as ‘plural subjects,’ to use Margaret Gilbert’s term. Accountability itself takes on a ‘cyborg’ air as algorithms start to play a role in how plural subjecthood emerges: plural subjects are the repository of the collective agency that is in turn subject to scrutiny within the accountability forum. While the ‘robot on the team’ might fade into the background, its effects will be profound. People, teams, and regulatory approaches will come to be coded more and more. While AI will be navigated in the context of existing team commitments, it is more helpful to think of all agency as having an increasingly ‘cyborg’ air: as being increasingly infused with algorithmic sensibilities.