Funny thing: there are actually attempts at modeling uncertainty in deep learning (MCMC, Bayesian neural networks), but they are rarely used because they are either very inaccurate or converge very slowly. The problem is essentially that learning algorithms cannot properly integrate over the posterior distributions, so only an approximation can be trained, which is often pretty slow.
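For what it's worth, one cheap approximation that does get used in practice is Monte Carlo dropout: keep dropout active at test time and average several stochastic forward passes to get a rough predictive distribution. A minimal sketch in PyTorch (my own illustration with made-up layer sizes, not anything from the comment above):

```python
import torch
import torch.nn as nn

# Toy regressor; the dropout layer is the source of stochasticity.
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)

x = torch.randn(32, 10)  # dummy batch of 32 inputs

model.train()  # keep dropout active even though we are "testing"
with torch.no_grad():
    # 100 stochastic forward passes stand in for posterior samples
    samples = torch.stack([model(x) for _ in range(100)])

mean = samples.mean(dim=0)  # approximate predictive mean
std = samples.std(dim=0)    # rough per-input uncertainty estimate
```

The spread of those samples is only a crude proxy for the true posterior, which is exactly the approximation-quality problem mentioned above.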
Hey, thank you guys for trying to help, although I have already figured it out. I feel this is not the place for support requests; my intention was rather just to share this funny error message.
This can also be used as a great example of proof by contradiction: there is no correct answer among the options. Proof: Assume there were a correct answer among the options. Then it must be either 25%, 50%, or 60%. Now we make a case distinction.
(A) Assume it was 25%. Then two of the four options would be correct, yielding a probability of 50%. Therefore 50% would have to be the correct answer. -> contradiction.
(B) Assume it was 50%. Then one of the four options would be correct, yielding a probability of 25%. Therefore 25% would have to be the correct answer. -> contradiction.
(C) Assume it was 60%. Since only 0, 1, 2, 3, or 4 of the options can be correct, the probability of picking a right answer at random must be one of 0%, 25%, 50%, 75%, or 100%. -> contradiction.
Because of (A), (B), and (C), it can be neither 25%, 50%, nor 60%. -> contradiction. So the assumption fails and no option is correct (the quick script below double-checks this).
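Since the option list is so small, the case distinction can also be checked mechanically. A quick sanity script in Python (my own sketch; the four printed options are reconstructed from the cases above: 25% twice, 50% once, 60% once):

```python
# The four printed options, as implied by cases (A), (B), and (C).
options = [25, 25, 50, 60]

for p in sorted(set(options)):
    # If p were the correct answer, a uniformly random pick would
    # hit it with probability (#options equal to p) / 4.
    prob = options.count(p) / len(options) * 100
    print(f"if {p}% were correct, a random pick is right {prob:.0f}% of the time")

# Prints 25% -> 50%, 50% -> 25%, 60% -> 25%: no value maps to itself,
# so no option can be the correct answer.
```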
How could she have known? Does she have some inside information from the Deep Empire?