US President Joe Biden signed an executive order that immediately sets some ground rules for how tech companies can develop and use AI. The executive order places strong emphasis on safety, innovation, and broader societal benefits.
The order, spanning nearly 20,000 words and encompassing eight distinct pillars, seeks to create a harmonious AI landscape by aligning federal directives with the aspirations of the private sector.
However, this push for a balanced AI ecosystem has sparked debate, with critics warning of potential bureaucratic obstacles. As the United States enters this new era of AI, the interplay between policy, innovation, and societal impact is only beginning to unfold, bringing both challenges and opportunities.
NetChoice, an organization dedicated to advocating for “light-touch regulation” of the Internet, and whose members include major companies like eBay and Airbnb, has already raised objections to President Biden’s approach. Carl Szabo, the vice president and general counsel for the association, referred to the order as an “AI Red Tape Wishlist.”
Szabo stated, “Biden’s new executive order is a back-door regulatory scheme for the wider economy, which uses AI concerns as an excuse to expand the President’s power over the economy. There are already many regulations governing AI. Instead of exploring how these existing rules can address modern challenges, Biden has chosen to further increase the complexity and burden of the federal code. This will hinder new companies and competitors from entering the marketplace and significantly expand the federal government’s control over American innovation.”
In contrast, US Senator Charles Schumer, a Democrat from New York and the Senate Majority Leader, views the order as a “crucial step,” but acknowledges that meaningful legislation will be the responsibility of Congress.
To Schumer’s point, the executive order primarily sets the policy direction for federal agencies and outlines the administration’s broader approach to AI. While this could affect private firms through subsequent legislation or regulations, it does not itself compel them to take specific actions.
Separately from the executive order, 15 major tech companies had already voluntarily adopted AI safety commitments. The administration, however, has expressed scepticism, stating that such voluntary efforts are insufficient on their own.
President Biden, during the signing of the executive order, emphasized the need to regulate AI to harness its potential while mitigating risks. He underlined that AI in the wrong hands could leave society vulnerable to cyber threats. Nonetheless, the executive order is viewed as a stopgap until Congress formulates long-term legislation for this emerging technology.
The order mandates that developers of the most powerful AI models disclose their safety test results to ensure secure deployment. Simultaneously, the National Institute of Standards and Technology (NIST) will establish standardized rules to guide the development of AI systems.
Furthermore, the order draws on the administration’s ‘AI Bill of Rights’ blueprint, aimed at safeguarding against potential AI-related harms, with an emphasis on privacy, equity, and worker support. Substantial investments are directed toward research and development to maintain US leadership in AI. The order also sets out policies for the responsible and ethical use of AI in government functions, with a focus on societal benefit and minimizing potential negative impacts.
A noteworthy provision stipulates that developers of any AI model trained using more than 1e26 integer or floating-point operations (with a lower 1e23-operation threshold for models trained primarily on biological sequence data) must report to the government. This primarily affects major tech companies like OpenAI and Google, as the threshold surpasses the estimated scale of existing models such as OpenAI’s GPT-4. However, the mechanism for monitoring and enforcing this requirement remains unclear.
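For a sense of scale, here is a minimal sketch of how such a threshold check might look, using the widely cited rule of thumb that training a dense transformer costs roughly six operations per parameter per training token. The parameter and token counts below are hypothetical assumptions for illustration, not figures disclosed by any company.

```python
# Back-of-envelope check against the order's 1e26-operation reporting
# threshold, using the common ~6 ops per parameter per training token
# estimate for dense transformers. All model sizes below are
# illustrative assumptions, not disclosed figures.

REPORTING_THRESHOLD = 1e26  # total training operations, per the order


def training_ops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute: ~6 ops per parameter per token."""
    return 6.0 * n_params * n_tokens


for label, params, tokens in [
    ("hypothetical frontier-scale model", 1.0e12, 3.0e12),  # ~1.8e25 ops
    ("hypothetical next-generation model", 3.0e12, 1.0e13),  # ~1.8e26 ops
]:
    ops = training_ops(params, tokens)
    flag = "must report" if ops > REPORTING_THRESHOLD else "below threshold"
    print(f"{label}: ~{ops:.1e} ops -> {flag}")
```

By this rough estimate, today’s publicly known models land below the 1e26 mark, which is consistent with the view that the provision targets future systems rather than existing ones.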
While the executive order is expected to take time to be fully implemented, with uncertainties regarding monitoring and regulation, the European Union (EU) has introduced a more comprehensive document known as the EU AI Act, outlining guidelines for AI development.
The key distinction between the two approaches lies in their treatment of transparency and accountability. The EU AI Act places a strong emphasis on these aspects, requiring AI developers to disclose detailed information about their AI systems’ development and functionality. It also obligates AI developers to take measures to ensure accountability and responsibility for any harm caused by their AI systems.
In contrast, the Biden AI executive order encourages transparency and accountability but is less prescriptive. It urges AI developers to adopt voluntary measures in this regard but does not mandate them to do so.
Additionally, the EU AI Act applies to all AI systems, regardless of size or complexity, while the Biden AI executive order only pertains to specific AI systems, particularly those used by the government or posing a high risk to public safety.
from Firstpost Tech Latest News https://ift.tt/yLU72Ns