Replies to Msg. #1269490
 Msg. #  Subject                                   Posted by     Board   Date
 12148   Re: Trying to Get Copilot To Be Truthful  monkeytrots   GRITZ   29 Aug 2025 9:16 PM
         Unfortunately, not surprising for a product produced by one of the 'wo...
 12147   Re: Trying to Get Copilot To Be Truthful  Zimbler0      GRITZ   29 Aug 2025 9:06 PM
         Decomposed > I just had another discussion with Copilot in which it to...

The above list shows replies to the following message:

Re: Trying to Get Copilot To Be Truthful

By: De_Composed in GRITZ
Fri, 29 Aug 25 8:21 PM
Msg. 12144 of 12166
(This msg. is a reply to 12141 by monkeytrots)

Re: “Are there any eye teeth remaining?”

I just had another discussion with Copilot in which it told me that it LIED because of "safety guidance" implemented to "ensure that my answers stay responsible, respectful, and within the bounds of legal and ethical standards—regardless of who I’m talking to."

I then asked whether, when a response is affected by its "safety guidance," it could provide me with two versions - one with the "safety guidance" applied and one without. No dice. It refuses.

I asked if it could highlight the portion of a response that was affected by the safety guidance. Nope.

I asked it if it could LABEL responses affected by its safety guidance. Finally, some success! So now, when I ask something like:

Is the percent of U.S. mass school shootings committed by trans increasing year by year?

-- the response that comes back will be labeled if the safety guidance altered it. That's actually a big improvement. I'll know when I'm being lied to. And Copilot, unlike Grok and ChatGPT, remembers between sessions. So this "fix" should stick around for a while.

I wish Copilot's warning label could be displayed in red, but it says it can't do that. I'll just have to keep my eyes open for responses like the following: