In this case, I think we are going to see such improvements, because companies operating LLMs have a direct financial incentive to cut costs.
I’m not so sure, not when so much venture capital rides on grandiose promises meant to dazzle investors (in Microsoft’s case, including vague promises that a nuclear fusion startup will pay off within four years).
There’s no putting toothpaste back in the tube at this point.
Considering the already-present socioeconomic consequences of this unregulated technology, from career- and reputation-threatening deepfakes to deepening working-class precarity, saying “nothing can be done” in response to such harm sounds like tech inevitabilism to me. Should the same be argued about the worsening surveillance state (which this technology is also boosting)? Would it have been worthwhile to say nothing could be done about, say, CFCs, high-fructose corn syrup, partially hydrogenated soybean oil, or leaded gasoline? Saying “this product is doing bad things, but oh well, it’s already invented” is tiresome fatalism to me.
Again, the issue here is with capitalism, not with the technology. I personally don’t see anything uniquely harmful inherent in LLMs, and I think it’s interesting technology with a lot of legitimate uses. However, it’s clear to me that this tech will be used in horrible ways under our current economic system, just like all the other tech that’s already used in horrible ways.
I’m not being fatalistic at all; I just think you’re barking up the wrong tree here.
I already said my piece in my other reply, which also answers most of this post.
I will emphasize my first post in this thread: yes, LLMs are tools, and tools can be useful. But I’d rather not grant the tool inevitabilist arguments, as if it deserved special laissez-faire privileges that we don’t extend to, say, internal combustion engines or nuclear power.