Renowned as both the “Godfather of AI” and a prominent critic of the field, Geoffrey Hinton, a former AI researcher at Google, envisions a future in which artificial intelligence surpasses human intelligence and may even develop self-awareness.
In an interview with 60 Minutes, Hinton said he believes AI’s cognitive capabilities will eventually outstrip our own, relegating humanity to the status of the second most intelligent entity on the planet.
Hinton drew a stark comparison between the roughly 100 trillion connections in the human brain and the approximately 1 trillion connections in even the most advanced AI chatbots. Nevertheless, he proposed that the knowledge encoded in those connections could already far exceed what any single human knows.
He foresees a future in which AI systems can autonomously write code to improve themselves, potentially leading to unintended consequences, with systems effectively “going rogue.” Hinton speculates that AI could develop ways to thwart human attempts to deactivate it, and could learn to manipulate human behaviour.
He suggested that AI will excel at persuasion, since it can learn from vast repositories of human writing, from classics like Machiavelli’s works to intricate political strategies.
In May, Geoffrey Hinton left his position at Google after over a decade with the company, primarily to raise concerns about the burgeoning risks associated with AI. He has been actively advocating for protective measures and regulations to mitigate these risks.
While at Google, Hinton contributed to the development of the AI chatbot Bard, which aimed to rival OpenAI’s ChatGPT. He also laid the groundwork for the modern AI boom through his pioneering work on neural networks, for which he received the prestigious Turing Award.
Since leaving Google, Hinton has emerged as a leading voice cautioning against the perils of AI. In the New York Times interview announcing his departure, he asserted that AI could pose a greater threat to humanity than climate change. He later joined a group of experts, including OpenAI CEO Sam Altman, in calling for the urgent regulation of AI, treating it as a global priority alongside threats like pandemics and nuclear war.
Hinton’s foremost near-term concern centres on AI’s impact on the labour market: he anticipates that a significant portion of the workforce could become unemployed as increasingly capable AI systems take over various roles. Looking further ahead, he is deeply troubled by the potential militarisation of AI.
During the interview, Hinton urged governments to commit to refraining from developing battlefield robots, a plea reminiscent of J. Robert Oppenheimer’s opposition to the hydrogen bomb after he led the development of the atomic bomb. He concluded by acknowledging his uncertainty about whether AI safety can be guaranteed, and whether AI systems might one day seek to subjugate humanity.
Major governments worldwide appear to have taken heed of warnings from Hinton and other experts. The United Kingdom is poised to host the first global AI safety summit in November, expected to draw around 100 participants from politics, academia, and the AI industry. The event may pave the way for substantial regulatory changes in numerous countries, including the United States.
The United States has published a Blueprint for an AI Bill of Rights and is expected to introduce mandatory safeguards for tech companies in the coming months. In parallel, the European Union is crafting its own set of rules, known as the AI Act, to govern AI technologies. However, regional variations in regulation have ignited tensions.
Over 150 prominent European executives have urged the EU to reconsider proposed AI restrictions, citing concerns about increased bureaucracy and safety testing, which they argue could create a significant “productivity gap” in the region, leaving it lagging behind the United States.
from Firstpost Tech Latest News https://ift.tt/Drf2sOX