The Debate We Should Be Having: Ethics and Emerging Technologies
Introduction
Principia’s inaugural network debate examined some of the ethical challenges posed by emerging technologies, with implications for governance, accountability, and operational risk. Annabel Gillard, Michael Skerker and Patrick Taylor Smith brought perspectives from across philosophy, policy and practice to a topic that remains deeply contested.
Current debates surrounding emerging technologies have reached an impasse. Organizations and policymakers often find themselves caught between two positions: innovate rapidly and accept the risks, or govern heavily and fall behind.
This framing is prevalent across strategic and regulatory conversations, yet the panel argued that the dichotomy is false: not all technologies carry the same level or type of risk, and consequently not all require the same approach to regulation. The more pressing challenge is developing the institutional capacity to differentiate: to understand where caution is warranted and where innovation can proceed responsibly.
Until organizations can make these distinctions more reliably, and align governance structures accordingly, the gap between operational demands and meaningful integration of ethical considerations will continue to widen.
The Debate
The panel discussed a number of questions that exemplified these tensions:
Should society ever deliberately slow down the deployment of new technologies on ethical grounds?
The panel challenged the assumption that technological deployment is ever linear or inevitable. Historical precedents, including nuclear arms control frameworks, demonstrate how societal norms and institutional constraints can meaningfully shape the use of technologies even after they exist.
There was agreement that governance bodies have a legitimate role in regulating deployment. The more difficult question is knowing when to intervene. Current governance structures do this poorly: they are often too slow to constrain technologies that require scrutiny, yet too obstructive toward those that do not, such as renewable energy infrastructure or mRNA vaccines.
What is needed instead is more agile and representative governance, better equipped to differentiate between technologies and to make context-specific judgements about where intervention is most effective.
Who carries the greatest ethical responsibility – those who build, deploy, or regulate new technologies?
The panel agreed that responsibility is allocated in proportion to the possession of knowledge and power. Of the three groups, the deployer holds the greatest combination of both. Designers are often far removed from how a technology is actually used, while regulators can face structural barriers, such as limited access to technical knowledge, that inhibit their ability to scrutinize what they are overseeing. A broader structural concern was also identified: the dynamics of the current innovation system mean that a small number of companies capture disproportionate value, while costs are distributed across society. This asymmetry is not incidental: it weakens the chain of accountability, because those who benefit most are rarely those who bear the consequences.
Is it ethically acceptable for new technologies to benefit society overall, even if they disadvantage specific groups?
Trade-offs are an unavoidable feature of technological change, and disruption to established industries is not inherently unjust. It is more important to identify where negative impacts fall, as harms that exacerbate existing inequalities are more serious than disruptions to commercial interests.
A key distinction was drawn between interests and rights: while advancements can disrupt the former, it is not acceptable for them to violate the latter. The proliferation of personal data collection without meaningful consent and some of the exploitative labor conditions underpinning global AI infrastructure were two suggested examples of areas that remain largely unaddressed by existing frameworks.
Are these ethical risks problems of the technology itself, or the society deploying it?
Most emerging technologies are neither inherently harmful nor beneficial; rather, they are instruments whose impact is shaped by how they are used. LLMs, for example, are capable of positive contributions, including medical diagnostics and pharmaceutical research, but also of significant harm, such as manipulation, fraud, or the amplification of disinformation.
The structural environment does little to steer between these contrasting uses. Critical technological discourse has over-emphasized prevention, generating frameworks designed to stop negative outcomes rather than enable positive ones. This leaves responsible, value-driven development underarticulated, while attention remains fixed on restraining those who argue for deployment without oversight. The panel emphasized the need to reorient toward frameworks that treat ethics as a condition of effective deployment rather than a constraint on it. The immediate priority is proof of concept: demonstrating through concrete examples that ethical deployment produces better outcomes.
Principia’s Position
The debate emphasized a view that is central to Principia: ethics is most effective when it is practical and agile, and embedded in decision-making from the outset. Principia’s Good, Right and Fitting framework offers a basis for navigating the trade-offs that technological development inevitably produces.
Principia’s work is designed to give organizations the tools to deploy technology responsibly in ways that produce better outcomes in practice.
This includes working with firms to articulate a clear, coherent approach to responsible AI by translating values and principles into the governance structures and processes that stakeholders can understand and trust. By bridging the gap between ethical frameworks and operational reality, Principia helps organizations move beyond compliance to demonstrate genuine accountability in how they develop and deploy AI.