The purpose of the guidelines is to ensure that tech contractors adhere to the DoD's existing ethical principles for AI, Goodman says. The DoD announced those principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the US military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Lab.
Yet some critics question whether the work promises meaningful reform.
During the study, the board consulted a range of experts, including vocal critics of the military's use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.
Whittaker, who is now faculty director at New York University's AI Now Institute, was not available for comment. But according to a spokeswoman for the institute, Courtney Holsworth, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. "She was never meaningfully consulted," says Holsworth. "Claiming that she was could be read as a form of ethics washing, in which the presence of dissenting voices during a small part of a long process is used to suggest that a particular outcome has broad support from relevant stakeholders."
If the DoD's guidelines lack broad buy-in, can they still help build trust? "There will be people who will never be satisfied with any ethical guidelines that the DoD creates, because they find the idea paradoxical," says Goodman. "It's important to be realistic about what guidelines can and cannot do."
For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some activists argue should be banned. But Goodman points out that regulations governing such technology are decided higher up the chain. The aim of the guidelines is to make it easier to build AI that complies with those regulations. And part of that process is making any third-party concerns explicit. "One valid application of these guidelines is the decision not to pursue a particular system," says the DIU's Jared Dunnmon, who co-authored them. "You can decide it's not a good idea."