Consilio Wealth Advisors

AI From a Philosophical Perspective

With Gemini’s botched AI image generator rollout, Google inadvertently pulled back the curtain on how AI guardrails are established and how human biases figure into them.

If you haven’t heard, image prompts fed to Gemini yielded embarrassing and inaccurate results. Google has since disabled image generation in Gemini while it works out the kinks.

The controversies we’ve seen from Gemini raise ethical questions about what AI can and should produce. Philosophical insights are needed to analyze the ethical implications of AI development and use, considering issues like bias, fairness, privacy, and accountability. They can help create ethical frameworks and guidelines for responsible AI development and implementation.

Google showed us that heavy-handed tweaks can have unintended consequences.

Ethnicity became a problem when Gemini users tried to generate historically accurate depictions of World War II soldiers.

Prabhakar Raghavan wrote in a blog post about the many shortcomings of the product rollout:

“When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people. And because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”

Preventing explicit and violent images makes sense, but the lines that filter these images can get fuzzy quickly. Who decides what’s inappropriate? Suppose a user prompts the AI to generate an image of an operating room where a doctor is performing surgery. The scene would include blood and someone being operated on. At what point do you dial down the image so that it is inoffensive but still accurate? An image that’s dialed down too far could be useless, defeating the purpose of using image generation in the first place.
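To make that tradeoff concrete, here is a minimal sketch of how a tunable moderation gate might work. The category names, scores, and thresholds are hypothetical, invented for illustration; real moderation systems use model-predicted scores and far more nuanced policies.

```python
# Hypothetical safety scores a classifier might assign to an accurate
# image of surgery in an operating room (all values are made up).
SURGERY_IMAGE_SCORES = {"violence": 0.55, "gore": 0.60, "sexual": 0.01}

def allow_image(scores: dict[str, float], threshold: float) -> bool:
    """Block the image if any category score exceeds the threshold."""
    return all(score <= threshold for score in scores.values())

# Dial the threshold down and the accurate surgical scene gets blocked;
# dial it up and the same rule would wave through genuinely graphic content.
for threshold in (0.5, 0.7, 0.9):
    verdict = "allowed" if allow_image(SURGERY_IMAGE_SCORES, threshold) else "blocked"
    print(f"threshold={threshold}: surgery image {verdict}")
```

A single dial like this is exactly the problem the Gemini episode exposed: wherever you set it, some legitimate request lands on the wrong side.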

Trying to accommodate every global norm makes this an impossible task. The American idea of what’s acceptable isn’t the same as the Canadian one, and those are the two culturally closest countries I could think of. Imagine reconciling Chinese standards with Brazilian ones.

On the flip side, you don’t want the guardrails so loose that you open the door to lawsuits or user abandonment. Take Twitter, for example, where advertisers are leaving the platform for fear of their brands appearing next to unacceptable content, and users are leaving because the content has become overwhelmingly toxic.

AI doesn’t distinguish right from wrong and will need human judgment to constantly slide the scale between appropriate and inappropriate. The world evolves, becoming more tolerant in some areas and less tolerant in others. Would you want to train an AI on 1950s American standards? How about 1990s Russian standards?

I don’t know if an AI can self-evolve in a way that is acceptable to all of us, or ultimately to advertisers. Content moderation is a requirement for companies relying on ad revenue.

Automated interactions need to be predefined, hence the need to train a bot. You don’t want it to respond with made-up or wrong answers. Setting up guardrails opens the training to bias, as does the choice of which data to train on. I can imagine very different outcomes for a bot trained on Fox News versus one trained on CNN.

Then there’s the issue of AI being used to manipulate entire populations. Take an election, for example, where a foreign power can bombard an electorate with disinformation or faked videos. Social media has been manipulated in past elections to swing votes, and I think AI makes it easier to generate and distribute that bad information.

It’ll be a matter of how quickly we can adapt to the ever-evolving world of AI.

DISCLOSURES:

The information provided is for educational and informational purposes only and does not constitute investment advice, and it should not be relied on as such. It should not be considered a solicitation to buy or an offer to sell a security. It does not take into account any investor's particular investment objectives, strategies, tax status or investment horizon. You should consult your attorney or tax advisor.

The views expressed in this commentary are subject to change based on market and other conditions. These documents may contain certain statements that may be deemed forward-looking statements. Please note that any such statements are not guarantees of any future performance and actual results or developments may differ materially from those projected. Any projections, market outlooks, or estimates are based upon certain assumptions and should not be construed as indicative of actual events that will occur.

This document is for your private and confidential use only, and not intended for broad usage or dissemination.

No investment strategy or risk management technique can guarantee returns or eliminate risk in any market environment. All investments include a risk of loss that clients should be prepared to bear. The principal risks of CWA strategies are disclosed in the publicly available Form ADV Part 2A.

Index returns are unmanaged and do not reflect the deduction of any fees or expenses. Index returns reflect all items of income, gain and loss and the reinvestment of dividends and other income. You cannot invest directly in an Index.

Past performance shown is not indicative of future results, which could differ substantially.

Consilio Wealth Advisors, LLC (“CWA”) is a registered investment advisor. Advisory services are only offered to clients or prospective clients where CWA and its representatives are properly licensed or exempt from licensure.