The NAACP Legal Defense Fund (LDF) has submitted a response to the National Institute of Standards and Technology’s (NIST) proposal for “Identifying and Managing Bias in Artificial Intelligence.”

In its letter to NIST, LDF highlights that the proposal should:

  • include greater context on the historic and contemporary relationship between racial inequality and technological innovation;
  • be grounded in civil and human rights principles and law, and offer guidance for designing remedies in response to biased and discriminatory practices;
  • broaden its “Reject Development” parameters to include technologies that may result in discriminatory harm, particularly when used by law enforcement; and
  • draw upon and incorporate the expertise of a broad set of stakeholders, including impacted individuals and communities, civil and human rights organizations, and other agencies with relevant experience.

In all, LDF recommends:

  • NIST should (1) confront the historic and present-day manifestations of racial bias in the development and evaluation of AI and (2) center its framework on civil and human rights, developing supplemental guidance that specifically addresses the implications of AI for civil and human rights. This guidance should incorporate interdisciplinary insights from impacted communities, relevant public agencies, and civil and human rights experts.
  • NIST should unequivocally state that algorithmic discrimination is unlawful, and that AI developers and practitioners have legal obligations across the lifecycle of AI to ensure rigorous compliance with civil and human rights law.
  • NIST should create specific processes to determine whether the development or dissemination of AI tools and systems risks civil and human rights violations. These processes should apply across all contexts, with special consideration for areas with histories of racialized harm, such as law enforcement.
  • NIST should proactively seek out individuals with first-hand experience of AI bias and discrimination, as well as leaders, activists, organizers, and others within marginalized communities. In addition, NIST should conduct a series of targeted field visits to frontline communities to ensure that the voices of those impacted by algorithmic bias and discrimination are centered.
  • NIST should ensure that its future guidance incorporates guiding principles on the development of AI in the context of civil and human rights.

Read the full letter here.
