Honestly the most impressive part of LLMs is the tokenizer that breaks down the request, not the predictive text button masher that comes up with the response.
Yes, exactly! Its ability to parse the input is incredible. It’s the thing that has that “wow” factor, and it feels downright magical.
Unfortunately, that also makes people intuitively trust its output.