Ethical dilemmas of a long-time AI researcher

The 60-year development of artificial intelligence has been characterised by periods of ups and downs. As recently as 2005, artificial intelligence was considered a field of unfulfilled promises. Since 2010, however, the technical breakthroughs of machine learning have led to an explosion of successful new applications and a steady rise in the public standing of artificial intelligence. In 2013, for the first time, several artificial intelligence researchers, myself included, became aware that alongside great opportunities, technical development brings serious dangers, raising questions such as: does manipulating people with artificial intelligence methods lead to the end of democracy? If a self-driving car sees that an accident is inevitable, how should it decide whom to save and whom to sacrifice? Is it moral to leave the decision to kill to an autonomous weapon? Does doing so let us evade responsibility? Is the oversight of controversial artificial intelligence applications adequate? Legislation often lags behind technical development.

Some of the most successful artificial intelligence methods recommend decisions that humans cannot understand. How can this enable artificial intelligence to assume power de facto? How can the automation of governance lead to governance by automation?

festivalgrounded@gmail.com