Highlights: The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.

Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for not taking steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.

  • krellor@kbin.social · 1 year ago

    I don’t think you have anything to worry about. All this requires is that any models used by the government are tested for bias. Which is a good thing.

    Go ask an early-generation AI image generator to make pictures of people cleaning and it will give you a bunch of pictures of women. There are all sorts of examples of racial, sex-based, and religious biases in the models because of the data they were trained on.

    Requiring the executive agencies to test for bias is a good thing.