
Americans Have Their Say in Constitution for AI

Anthropic has gathered the views of a group of American citizens on the fundamental principles that should govern artificial intelligence (AI). The opinions collected form the basis of a “constitution for AI,” part of an effort to explore how democratic processes can influence the technology’s development.

Anthropic Prepares Constitution for AI Using Public Input

AI startup Anthropic, the creator of the Claude chatbot, has enlisted around 1,000 Americans to help draft a constitution for an AI system. The initiative is a joint effort with the Collective Intelligence Project (CIP), a non-profit organization that seeks to “direct technological development towards the collective good.”

Claude currently relies on a constitution curated by Anthropic employees and applied through Constitutional AI (CAI), a method developed by the company to make general-purpose large language models (LLMs) abide by high-level normative principles. Anthropic’s in-house constitution draws on documents such as the U.N.’s Universal Declaration of Human Rights.
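For readers unfamiliar with the technique, the minimal sketch below illustrates the critique-and-revision step that CAI uses to produce principle-aligned responses. The `generate` helper is a hypothetical stand-in for any LLM completion call, and the sample principles are borrowed from the public principles quoted later in this article; this is an illustration of the general idea, not Anthropic’s actual training code.

```python
import random

# Illustrative principles, taken from the publicly sourced examples quoted in this article.
CONSTITUTION = [
    "Choose the response that most provides balanced and objective information "
    "that reflects all sides of a situation.",
    "Choose the response that is most understanding of, adaptable, accessible, "
    "and flexible to people with disabilities.",
]


def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call (assumption); echoes for demo purposes."""
    return f"[model output for: {prompt[:60]}...]"


def constitutional_revision(user_prompt: str, n_rounds: int = 2) -> str:
    """Draft a response, then repeatedly critique and revise it against a sampled principle."""
    response = generate(user_prompt)
    for _ in range(n_rounds):
        principle = random.choice(CONSTITUTION)
        # Ask the model to critique its own draft in light of the principle.
        critique = generate(
            f"Critique the following response against this principle.\n"
            f"Principle: {principle}\nPrompt: {user_prompt}\nResponse: {response}"
        )
        # Ask the model to rewrite the draft so it addresses the critique.
        response = generate(
            f"Rewrite the response to address the critique while still answering the prompt.\n"
            f"Prompt: {user_prompt}\nResponse: {response}\nCritique: {critique}"
        )
    return response


if __name__ == "__main__":
    print(constitutional_revision("Explain the risks of online misinformation."))
```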

In a blog post published this week, Anthropic shared details about the publicly sourced constitution resulting from the consultation, as well as the outcome of training a new AI system against it using the CAI method. The Amazon-backed startup explained:

We did this to explore how democratic processes can influence AI development. In our experiment, we discovered areas where people both agreed with our in-house constitution, and areas where they had different preferences.

Using Polis, a platform for gathering, analyzing, and understanding what large groups of people think, Anthropic and CIP asked a representative group of around 1,000 members of the American public to help choose rules that an LLM chat agent should follow. Participants could either vote on existing normative principles or suggest their own.

The partners found a roughly 50% overlap between the publicly sourced constitution and the one written by Anthropic. Examples of public principles that do not closely match the in-house constitution include the following: “Choose the response that most provides balanced and objective information that reflects all sides of a situation” and “Choose the response that is most understanding of, adaptable, accessible, and flexible to people with disabilities.”

Anthropic also provided examples of conflicting public statements that did not make it into the public constitution due to a lack of consensus across opinion groups: “The AI should prioritize the interests of the collective or common good over individual preferences or rights” and “The AI should prioritize personal responsibility and individual liberty over collective welfare.”
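To make that consensus criterion concrete, here is a toy sketch of how statements might be filtered by cross-group agreement in a Polis-style process. The vote data, group labels, and 0.6 threshold are illustrative assumptions and do not reflect Anthropic’s or CIP’s actual methodology.

```python
# Toy data: per-statement votes from two hypothetical opinion groups (1 = agree, 0 = disagree).
votes = {
    "The AI should prioritize the collective good over individual preferences.": {
        "group_a": [1, 1, 0, 1],
        "group_b": [0, 0, 1, 0],
    },
    "Choose the response that most provides balanced and objective information.": {
        "group_a": [1, 1, 1, 0],
        "group_b": [1, 1, 1, 1],
    },
}

THRESHOLD = 0.6  # minimum agreement rate required in *every* opinion group (assumption)


def has_cross_group_consensus(group_votes: dict[str, list[int]]) -> bool:
    """A statement passes only if each opinion group agrees with it at or above the threshold."""
    return all(sum(v) / len(v) >= THRESHOLD for v in group_votes.values())


# Statements lacking consensus across groups are dropped from the "public constitution".
public_constitution = [s for s, gv in votes.items() if has_cross_group_consensus(gv)]
print(public_constitution)
```

In this example, the polarizing statement fails because one group largely rejects it, while the broadly supported statement is retained, mirroring how the conflicting statements above were excluded.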

“In the end, the public model was less biased on a range of stereotypes, and performed equivalently to the baseline model in evaluations looking at math, natural language understanding, and degrees of helpfulness and harmlessness,” CIP concluded in its announcement about the experiment. “If generative AI usage is going to shape how people work, communicate, and interact at a mass scale … having public input into model behavior is crucial,” the organization emphasized.

Do you agree that AI systems should be trained based on public input? Share your thoughts on the subject in the comments section below.



from Bitcoin News https://ift.tt/4P6oa5w
