As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
I don’t see any legal issue here.
When a person or a company publishes software that causes harm or damage, that person or company is fully liable and legally responsible.
Whether they themselves understand what the software does is completely irrelevant. If they can't control its output, they shouldn't have published it.