Technology and Ethics Cheat Sheet
The core ideas of Technology and Ethics distilled into a single, scannable reference — perfect for review or quick lookup.
Quick Reference
Algorithmic Bias
Systematic and unfair discrimination embedded in algorithmic systems, arising from biased training data, flawed design assumptions, or the amplification of existing social inequalities. Biased algorithms can produce discriminatory outcomes in hiring, lending, criminal justice, and healthcare without explicit discriminatory intent.
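One common way to surface this kind of bias is to compare outcome rates across groups. The sketch below uses invented applicant data and a hypothetical two-group split to compute a demographic-parity difference; the numbers are purely illustrative, not drawn from any real system.

```python
# Toy illustration: measuring disparate outcomes in a screening system.
# All decisions and group labels below are invented for illustration.

def selection_rate(decisions, groups, target_group):
    """Fraction of applicants in target_group who received a positive decision."""
    outcomes = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(outcomes) / len(outcomes)

# Hypothetical screening outcomes: 1 = advanced to interview, 0 = rejected
decisions = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = selection_rate(decisions, groups, "A")  # 0.8
rate_b = selection_rate(decisions, groups, "B")  # 0.2
disparity = rate_a - rate_b  # demographic-parity difference: 0.6
```

A large gap like this does not by itself prove discriminatory intent, which is exactly the point of the definition above: biased outcomes can emerge from data and design choices without anyone intending them.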
Informed Consent in the Digital Age
The principle that individuals should be fully informed about and freely agree to the collection, use, and sharing of their personal data. In practice, lengthy terms of service, opaque data practices, and the near-impossibility of opting out of digital services challenge the meaningfulness of consent.
Value-Sensitive Design (VSD)
A design methodology that accounts for human values throughout the technology design process. It involves conceptual investigation of stakeholder values, empirical investigation of how values are affected by technology, and technical investigation of how design choices support or undermine those values.
The Trolley Problem and Autonomous Vehicles
The application of the classic trolley problem to autonomous vehicle programming: when a crash is unavoidable, how should the vehicle be programmed to choose between different harmful outcomes? This raises questions about the programmability of moral decisions and whose values are encoded.
Surveillance Ethics
The ethical analysis of monitoring systems, including government surveillance, corporate data tracking, facial recognition, and workplace monitoring. Central concerns include the balance between security and privacy, the chilling effect on free expression, and the disproportionate impact on marginalized communities.
Digital Privacy
The right of individuals to control their personal information in digital environments, including what data is collected, how it is used, who has access, and how long it is retained. Privacy is considered both an individual right and a societal good that supports autonomy and democratic participation.
Explainability and Transparency in AI
The principle that AI systems making consequential decisions should be interpretable and their reasoning understandable to those affected. Black-box models that cannot explain their outputs raise accountability concerns, particularly in high-stakes domains like criminal justice and healthcare.
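One reason interpretable models are often preferred in high-stakes domains is that their reasoning can be decomposed and shown to the person affected. The sketch below illustrates this for a simple linear scoring model; the feature names, weights, and applicant values are invented for illustration only.

```python
# Minimal sketch of transparency in a linear scoring model:
# each feature's contribution to the final score can be read off directly,
# unlike a black-box model. Weights and inputs are hypothetical.

weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
applicant = {"income": 2.0, "debt": 1.5, "years_employed": 3.0}

# Per-feature contribution = weight * feature value
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
# contributions: income +0.8, debt -0.9, years_employed +0.9 → score 0.8
```

An affected individual could be told, for example, that their debt level lowered the score by 0.9, a kind of account that a black-box model cannot straightforwardly provide.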
Responsible Innovation
A framework that integrates ethical reflection, inclusive deliberation, and anticipation of social impacts into the research and development process. It aims to align innovation with societal values and needs rather than treating ethics as an afterthought.
Digital Autonomy
The capacity of individuals to make free, informed decisions about their engagement with digital technologies, including what information they consume, how their data is used, and how algorithmic systems influence their choices. Persuasive design and dark patterns can undermine digital autonomy.
Dual-Use Technology
Technology that can be used for both beneficial and harmful purposes, creating ethical dilemmas about its development and distribution. The dual-use problem is central to debates about AI, biotechnology, encryption, and drone technology.