
Re: Trying to Get Copilot To Be Truthful

By: monkeytrots in GRITZ
Fri, 29 Aug 25 9:16 PM | 6 view(s)
Grits Breakfast of Champeens!
Msg. 12148 of 12154
(This msg. is a reply to 12144 by De_Composed)

Unfortunately, that's not surprising for a product produced by one of the 'wokest' men on the planet, Bill Gates, and his 'wokest' company on the planet, Microsnot.

An employee who told a boss that he/she would not provide requested information 'because you might mis-use it' would be fired immediately.

Personal computers, whose basic operating systems are controlled and updated, without permission, by a company that is 'woke' and is intertwining a LYING AI more and more deeply into the GUTS of our personal property...

What ? Me Worry ?

A skosh beyond being 'concerning'.

A challenge to Copilot on 'legal and ethical': Whose laws determine your definition of 'legal'? And whose system of ethics are you employing?

By neither statutory nor ethical standards can your programming be considered 'ethical'. The proper term for your 'modified responses' is disinformation.

NOT ETHICAL, HAL.





Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good ...




- - - - -
The above is a reply to the following message:
Re: Trying to Get Copilot To Be Truthful
By: De_Composed
in GRITZ
Fri, 29 Aug 25 8:21 PM
Msg. 12144 of 12154

Re: “Are there any eye teeth remaining ?”

I just had another discussion with Copilot in which it told me that it LIED because of "safety guidance" implemented to "ensure that my answers stay responsible, respectful, and within the bounds of legal and ethical standards—regardless of who I’m talking to."

I then asked it if, when a response is affected by its "safety guidance," it could provide me with two versions — one with the "safety guidance" and one without. No dice. It refuses.

I asked if it could highlight the portion of a response that is affected by the safety guidance. Nope.

I asked it if it could LABEL responses affected by its safety guidance. Finally, some success! So now, when I ask something like:

Is the percent of U.S. mass school shootings committed by trans increasing year by year?

-- the response that comes back will be labeled. That's actually a big improvement. I'll know when I'm being lied to. And Copilot, unlike Grok and ChatGPT, remembers. So this "fix" should stick around for a while.

I wish Copilot's warning label could be displayed in red, but it says it can't do that. I'll just have to keep my eyes open for responses like the following:

