This isn’t even a Bing AI. It’s a Bing search feature like the Google OneBox that parses search results for a matching answer.
It’s using word frequency matching, not an LLM, which is why asking “can I do A and B” returns an incorrect summarized answer that only addresses “can I do A.”
You’d need to show the chat window response to see the LLM’s answer, and it’s not going to get these wrong.
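For illustration, here’s a toy sketch (Python, nothing from Bing’s actual pipeline, and the query/snippets are made up) of the kind of lexical overlap scoring being described: the scorer just counts shared words, so a snippet that answers only half of an “A and B” question can still come out on top.

```python
# Toy illustration of naive word-overlap ranking (not Bing's real code).
from collections import Counter


def overlap_score(query: str, snippet: str) -> float:
    """Fraction of query words that also appear in the snippet, ignoring logic and word order."""
    q = Counter(query.lower().split())
    s = Counter(snippet.lower().split())
    shared = sum(min(q[w], s[w]) for w in q)
    return shared / max(sum(q.values()), 1)


query = "can i take ibuprofen and acetaminophen together"  # a "can I do A and B" question
snippets = [
    "Yes, you can take ibuprofen every 6 hours as needed.",            # answers only "A"
    "Combining these painkillers requires caution about dosage timing.",  # the part that matters
]

for snip in snippets:
    print(f"{overlap_score(query, snip):.2f}  {snip}")
```

The snippet that only answers the “A” half shares the most words with the query, so a scorer like this surfaces it and summarizes it as a “yes,” even though the “and B” part changes the real answer.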