One thing that annoys me about LLMs is that a neural network of any substantial size is basically a black box, so figuring out why it produces the output it does ranges from difficult to impossible.
This is frustrating because I'd like to know whether the seemingly impressive connections these models draw come from actually grasping deep structure in the world or just from statistical correlation over a giant corpus.