Study reveals AI systems lack training in empathy and ethics, and proposes a method to align AI with societal values.

Researchers from Purdue University found that AI systems are trained primarily on information and utility values, often overlooking prosocial, well-being, and civic values. The study examined three datasets used by major AI companies and found little coverage of empathy, justice, and human rights. The team pointed to reinforcement learning from human feedback (RLHF), an established technique for tuning models on curated human-preference data, as a way to steer AI systems toward ethical behavior and community values.
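
For intuition, here is a minimal sketch of the reward-modeling step that underlies RLHF: a small model is trained so that responses human annotators preferred score higher than rejected ones, via a Bradley-Terry style loss. The toy embeddings, dimensions, and hyperparameters below are hypothetical illustrations, not the study's actual pipeline.

```python
# Minimal RLHF reward-modeling sketch (assumed setup, not the study's code).
# Toy numeric "embeddings" stand in for real model representations of responses.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical preference data: paired embeddings of (preferred, rejected)
# responses, as might be judged by human annotators on a curated dataset.
DIM = 8
chosen = torch.randn(32, DIM)
rejected = torch.randn(32, DIM)

# A tiny reward model that assigns a scalar score to a response embedding.
reward_model = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(200):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Bradley-Terry style loss: the preferred response should score higher.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference loss: {loss.item():.4f}")
```

In a full RLHF pipeline, a reward model like this would then guide policy optimization (for example with PPO) over an actual language model, so that generated responses reflect the values encoded in the curated preference data.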
