Artificial Super Intelligence (ASI) is a hypothetical high-level form of artificial intelligence (AI). ASI would not only mimic or understand human intelligence and behavior, but also surpass human intelligence and capabilities.
The “big guys” invest in ASI
ASI could think, learn, judge, and act holistically and autonomously by learning from large-scale data on computing infrastructure capable of high-volume computation. AI is commonly classified into three levels of intelligence: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). ANI is widespread today, while AGI and ASI are more advanced forms still at the research stage. An important milestone will come when AI reaches the ASI level, enabling it to create new algorithms for extremely fast computation. By aggregating big data sources, ASI could quickly surpass humans and reach superhuman levels.
To date, leading technology giants around the world, such as Google, Facebook, Amazon, IBM, and Microsoft, have been investing heavily in AI technology in general and ASI research in particular.
Google acquired the AI startup DeepMind, now Google DeepMind, and applied AI to the London Underground system (UK). Google has also put its free TensorFlow machine learning system to work in voice recognition, image recognition, translation, and modeling the workings of the human brain. Facebook uses AI to help visually impaired people "see" photos through an iOS app, to create detailed maps of population and global Internet users, to study user behavior, and to recognize the faces of people in photos posted on the social network. IBM's famous Watson computer can answer questions posed in natural language, using AI to analyze the context and meaning behind photos, videos, messages, and dialogue. IBM is working to make Watson 1.7 times more powerful and to apply it to machine-based tutoring built on provided materials.
The AI Research Institute of LG Group recently announced in an online briefing that it will invest more than 100 million USD over the next three years to develop mainframe infrastructure that can process large volumes of data quickly. Specifically, the institute plans to build an AI computing infrastructure ranking among the global top three, capable of 95.7 quadrillion calculations per second. In the second half of 2021, LG plans to announce a large-scale AI model similar to GPT-3 but with 600 billion parameters, one that can communicate as naturally as humans and write essays and novels. The institute is also planning to develop a super-large ASI model with trillions of parameters in the first half of 2022.
According to market research firm ResearchAndMarkets.com, AI now appears in many areas, from data management to retail shopping. In the long term, solutions will involve many types of AI and ASI, integrated with other fields such as the Internet of Things (IoT) and data analytics.
Worries about ASI losing control
According to researchers, alongside the great achievements of AI technology in shaping a better world, there is growing concern that at some point machines will control humans, making humans dependent on the decisions of machines once they reach the AGI and ASI levels. In a 2017 speech, shortly before his death, physicist Stephen Hawking warned: "Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy." Although Hawking did not specify when AI could pose such a hazard, it would almost certainly happen after it reaches the ASI level.
Some believe that ASI may appear in the not-too-distant future and could lead to disaster for humanity, possibly even extinction. The idea of intelligent machines taking over the world, in one variation or another, has reached the silver screen many times through blockbusters such as The Terminator and The Matrix, both of which envision an apocalypse caused by machines that surpass human intelligence. Hawking is not the only influential figure to warn of AI as a global catastrophe. Tesla and SpaceX boss Elon Musk has predicted similarly dire consequences, arguing that AI is potentially more dangerous than nuclear warheads, and has repeatedly called for greater regulatory oversight of AI development. Oxford University professor Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, discusses a multitude of scenarios in which machine superiority could threaten humanity. The book focuses on the period when AI undergoes an intelligence explosion. Bostrom also argues that a badly designed ASI system would be hard to fix. "Once an unfriendly ASI machine exists, it will prevent humans from replacing it or changing its preferences, and it will determine our fate," he writes.
However, not all forecasts are pessimistic. Some emphasize that, at this stage, there is no evidence that ASI-powered robots are about to wipe out humanity. The AI research group at Stanford University (USA) has said that frightening visions of AI dominating humanity appear only in movies, novels, and the imagination. In reality, AI has been changing our daily lives almost entirely in ways that improve human health, safety, and productivity. Importantly, humans know how to guard against the abuse of AI technology for malicious purposes. Beneficial AI applications in schools, homes, and hospitals are growing rapidly.
Therefore, at this stage, we have reason to believe that when AI develops into ASI in the future, it too will be intended to serve human civilization.