AI’s black box problem: Why it remains indecipherable to researchers


When a neural network is running, even the most specialized researchers are often left in the dark about what is happening inside it. This is not a question of biology; it concerns artificial intelligence algorithms — specifically those based on deep learning, whose architecture loosely mimics the connections between neurons. These systems operate as a black box, remaining indecipherable even to data scientists, the brightest minds in academia, and engineers at companies like OpenAI and Google, some of whose researchers have recently been awarded Nobel Prizes.
