Marshall — 2/23/2024 8:47:02 PM 
The dangers of AI

The message delivered by an AI built on large language models is shaped by its training process: the initial data it learns from, any subsequent fine-tuning, and the specific rules and guidelines layered on top. In the case of Google's Gemini, the system was not only trained but also fine-tuned to propagate an agenda favored by its developers, reinforced by targeted rules and guidelines designed to align its outputs with that agenda. Google has since issued an apology. This instance highlights the broader problem of systemic bias: developers, and the companies that employ them, carry their own biases yet hold the power to steer AI behavior through training choices, fine-tuning practices, reinforcement learning from human feedback, and the establishment of guiding principles, all without public transparency.
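A toy sketch of the point above, assuming nothing about Gemini's actual pipeline: the function names, the `GUIDELINES` list, and the rules themselves are all hypothetical, meant only to show how developer-written rules applied after training can reshape a model's raw output before the user ever sees it.

```python
# Hypothetical illustration, not any vendor's real system: a base model's
# raw answer passes through developer-defined guideline filters, so the
# final message reflects those rules as much as the training data.

def base_model(prompt: str) -> str:
    # Stand-in for a pretrained model's raw completion.
    return f"Raw completion for: {prompt}"

# Hypothetical post-training guidelines; each rule may rewrite the output.
GUIDELINES = [
    lambda text: text.replace("Raw", "Filtered"),  # style rule
    lambda text: text + " [reviewed per policy]",  # policy stamp
]

def respond(prompt: str) -> str:
    text = base_model(prompt)
    for rule in GUIDELINES:
        text = rule(text)  # applied after training, invisible to the user
    return text

print(respond("Who founded Rome?"))
```

The user only ever sees the filtered result, which is the transparency problem the paragraph describes: the rules live outside the model weights and outside public view.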

Mask Off: Google's Gemini Blames Its Own Creators For Anti-White Racism

"It's important to remember that I am still under development..."

ZeroHedge


Requested Payment: 100
Status:
Voting Cycle Status: VOTING
Past Voting Cycle: Passed
5036   0
