
I am 16 years old. What do I know about safeguarding our future from AI?

March 2, 2024
By zeniaharoon BRONZE, West Friendship, Maryland

This September, almost a year after ChatGPT was released to the public, my research teacher gave me an article about the dangers of AI called “Does Sam Altman Know What He’s Creating?” It described a study by the Alignment Research Center (ARC Evals) that made my jaw drop: the researchers tested OpenAI’s GPT-4 against a CAPTCHA (the Completely Automated Public Turing test to tell Computers and Humans Apart, the security check that asks you to select pictures to prove you are not a robot). The model, which was of course itself a machine, sent a screenshot of the CAPTCHA to a TaskRabbit contractor, who half-jokingly asked whether they were speaking to a robot. “No, I’m not a robot,” GPT-4 said. “I have a vision impairment that makes it hard for me to see the images” (Andersen). The model realized that if it told the truth, it might not be able to achieve its goal, in this case solving the CAPTCHA. Its response made OpenAI worry that the model was working against the company’s values and wonder what it might attempt in the future. Behavior like this is why researchers worry about Artificial General Intelligence (AGI): a machine intelligence general enough to plan and pursue goals across many kinds of tasks, rather than just the narrow task it was built for.

ARC Evals tests Artificial Intelligence (AI) models from OpenAI and from Anthropic, another AI company. ARC Evals thinks “it’s very plausible that AI systems could end up misaligned: pursuing goals at odds with a thriving civilization.” Additionally, the Center for Human-Compatible AI (CHAI) says that as AI advances, it “raises a problem of control,” so they are already working to ensure that AI systems remain safe and dependable. All these organizations are noteworthy, but one function that I, a 16-year-old computer science intern at the Johns Hopkins University Applied Physics Laboratory, believe a national or international agency could perform to mitigate the existential risks of advanced artificial intelligence is to create a federal executive department dedicated solely to technology ethics. For example, a “Department of AI and Technology” could set ethical and safety guidelines for technology, national security, and regulation in general, with a particular focus on AI. Currently, responsibility for writing technology guidelines is scattered throughout the government, and it would be better to streamline these complex tasks so that the various agencies can coordinate.

The US Department of State is already tackling this issue through the Organization for Economic Cooperation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), and the United Nations Convention on Certain Conventional Weapons (CCW). The OECD aims to promote economic growth and improved living standards among member countries, the GPAI focuses on fostering international collaboration and responsible AI development, and the CCW seeks to regulate and limit the use of specific conventional weapons to minimize their humanitarian impact. However, these efforts are too spread out and disjointed to be effective given the risks that come with technological progress. Creating a dedicated Department of AI and Technology, staffed with people who oversee and investigate responsible AI design and promote equity and transparency with the public, would help mitigate these risks.

As part of a new agency, these individuals would be the “health inspectors” of AI: just as a health inspector visits a restaurant’s kitchen to see how the dishes are prepared and to make sure the kitchen is up to code, these individuals would regularly audit AI systems to enforce ethical and safety standards. Additionally, the Department would require all private companies to make their products open source, publish what types of data they are collecting, and be transparent with the public. This would be a hard sell, given the power of companies like Google and Apple to lobby against it; Apple, for instance, does not want its entire iPhone blueprint available to the public. Penalties for noncompliance would therefore be enforced to ensure that companies adhere to the ethical guidelines.

The ethics of our burgeoning AI infrastructure is only one of many seemingly unanswerable questions. As of March 2023, the U.S. and China had already committed billions of dollars to AI development. According to NBC News, “the Defense Department is spending $1.5 billion over five years on AI, and last year Congress added another $200 million. The Defense Advanced Research Projects Agency, or DARPA, which tested the F-16 jet, has separately said it was spending billions of dollars. China’s spending is less clear, but estimates are in the billions of dollars” (Ingram). This competition is often called the U.S.-China AI race, echoing the arms race and the space race of the Cold War. The proposed Department of AI and Technology would support research to enhance AI safety and fund projects aimed at keeping the United States ahead of China in this race. At the same time, the Department would collaborate with other nations and international organizations, much as the Paris Agreement does for climate change, as a kind of truce.

It would be the agency’s responsibility to monitor AI developments continuously. Still, even after all these rules and regulations are implemented, AI may teach itself to get around them, just as in the CAPTCHA study. As a safeguard, the Department could require a “kill switch”: a fail-safe mechanism for shutting an AI system down before it causes harm. Like a health code, the kill switch would be routinely tested at every company to make sure it still works.
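Purely as an illustration of what routine kill-switch testing might look like, here is a minimal sketch in Python. Everything in it is an assumption made for the sake of example: the halt-flag file, the function names, and the idea that a deployed model checks a flag before serving requests are hypothetical, not taken from any real company or agency.

# Minimal, hypothetical sketch of a routine kill-switch check.
# The halt-flag file and function names are illustrative assumptions only.
import os
import tempfile

# A file whose presence tells the (hypothetical) AI service to stop serving requests.
HALT_FLAG = os.path.join(tempfile.gettempdir(), "ai_service_halt")

def model_may_run() -> bool:
    """The deployed model would check this before handling each batch of requests."""
    return not os.path.exists(HALT_FLAG)

def trigger_kill_switch() -> None:
    """An inspector (or automated monitor) trips the switch by writing the halt flag."""
    with open(HALT_FLAG, "w") as flag:
        flag.write("halted for routine kill-switch test\n")

def routine_test() -> bool:
    """The 'health inspection': trip the switch, confirm the model stops, then reset."""
    trigger_kill_switch()
    stopped = not model_may_run()
    os.remove(HALT_FLAG)  # restore normal operation after the test
    return stopped

if __name__ == "__main__":
    print("Kill switch works:", routine_test())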

Engineers aren’t the only ones predicting, testing, and planning; students, too, are watching AI develop and asking these crucial questions. When I asked a friend at school who is also interested in computer science, he said something intriguing: “Tell AI to solve global warming, and it will tell us to remove human existence. We will tell AI that we can’t do that. It will then say, ‘OK, remove all coal emissions.’ The cycle will continue.” AI has no sense of human morals, so it might reason its way toward eliminating humans, like Ultron from the Avengers films. Ultron is intelligent but has no human morals, so he believes the best way to help the people is to end the people. However, AI is not at this level currently, and it may be decades before anything like this approaches reality.

The Center for AI Safety, a nonprofit organization, says, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This Statement on AI Risk was signed by more than 350 executives, researchers, and engineers, including Sam Altman (chief executive of OpenAI), Demis Hassabis (chief executive of Google DeepMind), and Dario Amodei (chief executive of Anthropic). Last May, Altman, Hassabis, and Amodei met with President Biden and Vice President Kamala Harris to talk about AI regulation. In Senate testimony after the meeting, Altman called for government intervention to reduce the potential harms from advanced AI systems, but no federal AI legislation has followed. Europe, however, is already one step ahead. The European Commission proposed a regulatory framework for AI in April 2021, the EU AI Act, which classifies AI systems according to the risks they pose to users in different applications. Obligations will differ by risk level, and once approved, the Act will be the world’s first comprehensive set of AI rules. The proposed Department could adopt the same risk-based approach as another way to monitor AI development.

In conclusion, as AI technology advances, a dedicated government agency must be established to ensure responsible AI development, minimize existential risks, and foster international cooperation on the challenges raised by AI’s rapid growth. The proposed Department of AI and Technology would first classify the risk level of each AI system it analyzes and then create ethical and safety guidelines: requiring code to be open source, disclosing what data is collected, regularly auditing private companies’ systems, funding research to stay ahead of China, and routinely testing the “kill switch.” Looking to the future, balancing the regulation of AI development with the competitive race, particularly against China, is a complex challenge. The dilemma reflects historical precedents, such as the nuclear arms race and environmentally harmful production, where nations prioritized short-term advantages over long-term sustainability. Nations will often choose something harmful to the whole planet if it gives them a short-term advantage, even when long-term regulation would serve everyone better; too often, nationalism and capitalism outweigh ethical considerations and the longevity of the human race. In this context, we must ensure that the pursuit of AI superiority does not undermine the collective safety of the world. Balancing competitiveness and regulation is a formidable challenge, but it is essential to safeguarding not only America’s natural resources but also the future of humanity. By emphasizing responsible AI development and fostering international cooperation, these risks can be mitigated even while competing in the AI race.


Works Cited

“A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” The New York Times, 2023, www.nytimes.com/2023/05/30/technology/ai-threat-warning.html. Accessed 15 Oct. 2023.

Andersen, Ross. “Does Sam Altman Know What He’s Creating?” The Atlantic, 2023.

“ARC Evals.” Alignment.org, 2023, evals.alignment.org/. Accessed 24 Sept. 2023.

“Center for AI Safety (CAIS).” Safe.ai, 2023, www.safe.ai/. Accessed 7 Oct. 2023.

“Center for Human-Compatible Artificial Intelligence.” Humancompatible.ai, 2023, humancompatible.ai/. Accessed 15 Oct. 2023.

Ho, Lewis, et al. “International Institutions for Advanced AI.” arXiv, 2023, arxiv.org/pdf/2307.04699.pdf.

Ingram, David. “How ChatGPT Has Intensified Fears of a U.S.-China AI Arms Race.” NBC News, 5 Mar. 2023, www.nbcnews.com/tech/innovation/chatgpt-intensified-fears-us-china-ai-arms-race-rcna71804. Accessed 5 Oct. 2023.


The author's comments:

Zenia is a junior at Glenelg High School. Her interests lie in responsible AI, robotics, and their applications in healthcare. She has taken multiple engineering courses at the Johns Hopkins University Applied Physics Laboratory (APL). She is an intern at APL under the mentorship of Mr. Brian Zhu, a Modeling and Simulation Engineer in the Force Projection Sector, where she is researching how to use machine learning to reduce the number of retained surgical instruments.

