Ethics in AI and Advocacy

#Ethics in #AI and #advocacy is not an ivory tower problem.

If, before a meeting with a public official (an MEP or a Member of Congress, say), I feed their publicly available profile information into GPT and prompt "Tell me 5 ways I can present my arguments [on topic X] in a persuasive way that will resonate with this person's political views", I get helpful, relevant ideas that probably stay within ethical boundaries.
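
To make the workflow concrete, here is a minimal sketch of that benign version, assuming the official `openai` Python SDK and a model name (`gpt-4o`) that are my choices for illustration, not specified in the original post; the profile text and topic are placeholders.

```python
# Sketch: prompting GPT with *publicly available* profile information only.
# Assumes the `openai` Python SDK; model name and profile text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Publicly available information only, e.g. a bio from the official's
# parliamentary page or public LinkedIn profile (hypothetical text).
public_profile = "MEP, member of the environment committee, former mayor, ..."
topic = "topic X"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                f"Here is the public profile of a person I am meeting:\n"
                f"{public_profile}\n\n"
                f"Tell me 5 ways I can present my arguments on {topic} "
                f"in a persuasive way that will resonate with this "
                f"person's political views."
            ),
        }
    ],
)
print(response.choices[0].message.content)
```

The ethical line discussed below is drawn not by this code but by what goes into `public_profile`: the same call with private or rumoured material is a different act entirely.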

But suppose I add other background information I happen to know about that person: their personality type, their hobbies, their taste in food, or far more sensitive material, including rumours about them, e.g. "this person is vain, loses his temper quickly, loves Italian wine, and has had multiple harassment claims filed against him by his assistants, so adapt the suggested arguments in a way that considers his psychological profile". That prompt (which may then feed the model's training data, unless I deliberately opt out) can give me far more "relevant", that is, useful and effective, arguments, but it is hardly ethical.

Of course, I could do exactly the same thing anonymously, using a 'fictional' persona in my prompt while keeping all the input data the same. That removes the odium from the individual, but it doesn't change the moral dilemma these methods raise. I'm not sure what the takeaway is, but it's definitely an issue worth exploring more deeply.

Thoughts?

*This was originally posted on Andras Baneth's LinkedIn account.*
