
Artificial Intelligence: Reason to Fear or Embrace?

Updated: Mar 20

According to Grand View Research, the global AI market is predicted to reach $1,811.8 billion by 2030, up from $136.6 billion in 2022, a CAGR of 38.1%. While there are many misconceptions about AI and the developments it may lead to, it is worth examining why so many companies are investing in the technology. A study by IBM found that approximately 77 per cent of companies are currently either using AI or exploring its implementation. This editorial piece explores the debate surrounding the fear provoked by artificial intelligence and argues that the technology generates a net positive result.
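The projection above follows from the standard compound-growth formula, future value = present value × (1 + rate)^years. A quick sanity check with the figures quoted (the function name here is illustrative, not from any source):

```python
def project_market_size(present, cagr, years):
    """Compound a present value forward at a constant annual growth rate."""
    return present * (1 + cagr) ** years

# Grand View Research figures: $136.6B in 2022, 38.1% CAGR, horizon 2030.
size_2030 = project_market_size(136.6, 0.381, 2030 - 2022)
print(f"${size_2030:,.1f} billion")  # roughly $1,800 billion, in line with the forecast
```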

Origins of Artificial Intelligence

Criticism of AI is not new; it has accompanied the field since its advent in the 1940s. Alan Turing was the first to implicitly evoke the idea of machines improving and modifying their own programs, through his stored-program concept. The famous “Turing Test”, proposed in 1950, was essential in framing what intelligent behaviour might mean. In his paper, Turing described the experiment as an imitation game, in which a computer must provide answers that closely resemble those of a human being. An interrogator, seated in a separate room, was required to distinguish between the human and the computer solely from their responses to a set of posed questions. Scientists still debate whether passing the Turing Test should be considered a definitive measure of machine “intelligence.” The term “artificial intelligence” was coined in 1955 by John McCarthy, a computer and cognitive scientist. He would later rightly be known as the father of artificial intelligence, having also developed the first computer language for symbolic computation, used in a multitude of areas within the field. The period between 1974 and 1980, however, came to be known as the “AI Winter” as government funding collapsed, spurred by the oil crisis and a loss of faith in AI after it failed to meet expectations.

Use-Cases and Benefits

At present, with greater infrastructure, AI can be used to expedite tasks, improve coordination, and serve the demands of a new leisure society. The impact can be observed across value chains in manufacturing, retail, education, and healthcare. For example, McKinsey Global Institute analysis observed that the current use of AI as a prediction tool through machine learning has led to a 13 per cent improvement in EBIT for manufacturing, achieved mainly by automating procurement processes and applying AI to R&D. AI has also enriched the overall user experience by reducing costs, easing the burden on market prices. In transportation, for instance, optimised flight routes boosted fuel savings by 12 per cent.

In healthcare, AI software developed by researchers at the Houston Methodist Research Institute reviewed mammograms 30 times faster than a human doctor, with 99 per cent accuracy. The Chair of the Department of Systems Medicine and Bioengineering at the institute said, “This software intelligently reviews millions of records in a short amount of time, enabling us to determine breast cancer risk more efficiently using a patient's mammogram.” This is crucial because, as per the American Cancer Society, a high proportion of mammogram results prove to be false positives; AI can therefore spare patients unnecessary invasive procedures or biopsies. Healthcare will also need to cope with increasing workforce demands while remaining sustainable. According to the World Health Organization, there will be an estimated shortfall of 10 million physicians, nurses, and midwives globally by 2030, mostly in low- and lower-middle-income countries. Such gaps can be narrowed by adopting and scaling AI solutions that reduce the hours spent on administrative or routine tasks.

Companies such as Content Technologies and Carnegie Learning are examples of AI deployment in education. Their digital platforms are built around an individualised learning approach, customising material to each student's level of understanding. With further advances in this technology, such tailored styles could become the norm, especially aiding those with undiagnosed learning disabilities. In 2015, John F. Pane and his colleagues at RAND Corp. found that 11,000 students at 62 schools made greater gains in mathematics and reading when they used individualised learning plans (ILPs) than peers in more traditional settings. While this remains the most comprehensive study to date, greater focus on AI technology could remove barriers to more rounded research into such benefits.

Through video and image analytics, criminal justice can be improved by providing investigative assistance. Convolutional neural networks (CNNs), a class of deep learning algorithms, have been developed to increase accuracy in image classification. Beyond facial recognition, forensic laboratories can use AI in DNA testing to process evidence that has degraded over long periods of time. Recently, Forensic and National Security Sciences Institute (FNSSI) professors Michael Marciano and Jonathan Adelman invented a novel hybrid machine learning approach (MLA) that provides high-confidence results. Internet companies such as PayPal also rely on AI to identify fraud attempts, training their algorithms to detect anomalies or new patterns in transaction data.
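PayPal's actual fraud models are proprietary; the general idea of anomaly detection, though, can be sketched with a minimal statistical example. The code below flags any transaction amount that deviates sharply from the historical pattern, using a simple z-score test (the function name and threshold are illustrative assumptions, not from any source):

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Flag amounts more than `threshold` standard deviations
    from the mean of the batch (a basic z-score test)."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > threshold]

# A run of typical payments with one outlier.
history = [20, 25, 22, 19, 24, 21, 23, 20, 22, 5000]
print(flag_anomalies(history))  # [5000]
```

Production systems replace the z-score with learned models that also weigh device, location, and behavioural features, but the principle, scoring how far an event sits from the established pattern, is the same.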

Addressing Concerns

As AI takes on a greater decision-making role within institutions, there is fear surrounding the accompanying risks. Even with more complex programs, training data often reflects past inequities, thereby introducing unwanted biases. A study by Joy Buolamwini and Timnit Gebru at MIT concluded that facial analysis technology has a markedly lower success rate for women with darker skin tones, owing in part to a lack of representative training data. However, progress is being made to rectify such errors through pre-processing and post-processing techniques. For example, Silvia Chiappa, a research scientist at DeepMind, developed a path-specific counterfactual method that accounts for the effect of sensitive attributes on outcomes. Governing bodies are also acting: the European Commission proposed the Artificial Intelligence Act in 2021 to introduce a safety framework and prevent unethical prejudice.

Fear stemming from advances in AI can often be attributed to the myth of catastrophic superintelligence, usually fuelled by popular media. Human-like qualities instil the misconception that AI may one day replace mankind; however, such scenarios are irrational and impractical in the real world. AI can be considered a disruptive technology only in the progressive sense, as it streamlines processes and provides greater efficiency within organisations.
