Stovepiping and Malicious Software: A Critical Review of AGI Containment
attributed to: Jason M. Pittman, Jesus P. Espinoza, Courtney Crosby
Awareness of the possible impacts associated with artificial intelligence has
risen in proportion to progress in the field. While there are tremendous
benefits to society, many argue that there are just as many, if not more,
concerns related to advanced forms of artificial intelligence. Accordingly,
research into methods to develop artificial intelligence safely is increasingly
important. In this paper, we provide an overview of one such safety paradigm:
containment, with a critical lens aimed at generative adversarial networks
and potentially malicious artificial intelligence. Additionally, we illuminate
the potential for a developmental blind spot in the stovepiping of containment
mechanisms.
Vulnerabilities & Strengths