Who’s Accountable? Addressing Liability Gaps in AI Systems
AI systems can cause unintended harm, yet who bears legal liability for that harm is often unclear. This article examines the resulting accountability gaps and potential solutions.
For AI to benefit society, it must work well for diverse populations. We discuss ideas for building inclusiveness into AI from the start.
Explore how large corporations are fostering AI innovation through in-house labs and startup collaborations.
Discover how to instill a strong sense of ethics and responsibility in your AI talent pool.
Responsible AI techniques enable personalized services while keeping user data private, secure, and used only in its intended context.
How can governments implement oversight for public sector AI systems? We review policy options and open questions.
Can we develop AI with a moral compass? We look at new techniques for aligning AI goals with ethical values.
Investigate how automated systems can discriminate due to biased data – and how to make them fairer.
Imbalanced training data fuels unfair AI – here’s how resampling methods such as oversampling and undersampling help (a minimal sketch follows below).
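To make the resampling idea concrete, here is a minimal sketch assuming plain NumPy arrays and a toy binary dataset; the helper names random_oversample and random_undersample, the class sizes, and the seed are illustrative, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_oversample(X, y, minority_label):
    """Duplicate minority-class rows (with replacement) until classes balance."""
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    keep = np.concatenate([majority, minority, extra])
    return X[keep], y[keep]

def random_undersample(X, y, minority_label):
    """Drop majority-class rows at random until classes balance."""
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    kept = np.concatenate([rng.choice(majority, size=len(minority), replace=False), minority])
    return X[kept], y[kept]

# Toy data: 90 examples of class 0, only 10 of class 1 (illustrative sizes).
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)

X_over, y_over = random_oversample(X, y, minority_label=1)
print(np.bincount(y_over))  # -> [90 90]: classes balanced by duplicating minority rows
```

Oversampling keeps every majority example at the cost of repeated minority rows, while undersampling discards majority data; which trade-off is acceptable depends on dataset size and how costly minority-class errors are.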
Uncover how implicit biases shape our judgments and actions – and what we can do to counteract them.