Dailyhunt
The implications of Artificial Intelligence for the future

The Hans India 6 days ago

So much has been said and written about Artificial Intelligence (AI) and its impact on future productivity, the problem of job losses, and the consequent inequalities, both within and between nations.

Some studies indicate that AI may wipe out millions of jobs across various sectors, including manufacturing, inventory management, sales and logistics, as well as services such as software, legal and accounting services and insurance, to name a few.

According to Dr Kai-Fu Lee, Chairman and CEO of Sinovation Ventures, a leading technology-savvy investment firm, "Creating an AI superpower of the twenty-first century requires four main building blocks: abundant data, tenacious entrepreneurs, well-trained AI scientists and a supportive policy environment."

According to him, though hundreds of companies are pouring resources into AI research, a few giants, like Google, Facebook, Amazon, Microsoft, Baidu, Alibaba and Tencent, have invested their resources to become global leaders. There is stiff competition between the US and China to become the global leader in AI. According to Dr Lee, the way things stand today, China appears to have the edge in entrepreneurship, data and government support, and is rapidly catching up with the United States in expertise. Dr Lee believes the real underlying threat posed by AI is tremendous social disorder and political collapse stemming from widespread unemployment and gaping inequality.

PricewaterhouseCoopers estimates AI deployment will add US$15.7 trillion to global GDP by 2030. China is predicted to take home US$7 trillion of that total, nearly double North America's US$3.7 trillion in gains. The estimate suggests that the economic balance of power will tilt in favour of China, enhancing its political influence and soft power.

As of today, there is a race between the 'giants' and the 'startups' to develop AI products and train algorithms for specific tasks, including medical diagnosis, mortgage lending, insurance and logistics.

The 'grid' approach is being followed by the 'seven giants', turning the power of machine learning into a standardized service that can be marketed. AI research today relies on specialized high-performance computing (HPC) systems, often referred to as AI supercomputers. These machines are distinct from traditional supercomputers used for scientific simulations because they are optimized for massive parallel processing, primarily using graphics processing units (GPUs) or AI accelerators like NVIDIA H100/B200, AMD MI300X, or Google's Tensor Processing Units (TPUs).

Power Requirements from the Grid

AI supercomputers are enormously power-intensive, drawing far more electricity per rack and per facility than traditional data centers.

• A single NVIDIA H100 GPU consumes about 700 watts of power.

• A rack of 8 GPUs uses roughly 5.6 kilowatts, and an entire training hall with 1,000 racks can exceed 5-6 megawatts (MW) of compute power alone.

• A large-scale AI supercomputer with ~50,000 GPUs may require 200-500 MW continuously, comparable to the output of a mid-sized coal or nuclear power plant.

• An AI data center's overall draw ranges between 20 MW and 1 GW, depending on scale.

• The largest "gigawatt-class" supercomputing data centers (2026-class facilities by Amazon, Google and Microsoft) each require roughly the same power as a nuclear reactor, around 1 GW, to support their computing and cooling loads.
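The per-rack and per-hall figures above follow from simple multiplication of the article's per-GPU number. A minimal sketch of that arithmetic, assuming the 700-watt H100 figure and noting that facility-level totals (the 200-500 MW range) additionally cover CPUs, networking, storage and cooling, which a per-GPU tally does not capture:

```python
# Rough arithmetic behind the power figures above, using the
# article's own per-GPU number (NVIDIA H100, ~700 W).
GPU_WATTS = 700
GPUS_PER_RACK = 8
RACKS_PER_HALL = 1_000
CLUSTER_GPUS = 50_000

# One 8-GPU rack: 700 W x 8 = 5.6 kW
rack_kw = GPU_WATTS * GPUS_PER_RACK / 1_000

# A training hall of 1,000 such racks: 5.6 MW of GPU power alone
hall_mw = rack_kw * RACKS_PER_HALL / 1_000

# 50,000 GPUs, counting GPU silicon only: 35 MW. The article's
# 200-500 MW facility figure includes the rest of the system
# (CPUs, networking, storage) plus cooling overhead.
cluster_mw = GPU_WATTS * CLUSTER_GPUS / 1_000_000

print(rack_kw, hall_mw, cluster_mw)
```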

The supercomputers used for AI research are energy-intensive. GPU-accelerated systems require hundreds of megawatts of stable power and exabyte-scale storage. The biggest challenge is not computing hardware but grid power availability, which defines the limits of AI infrastructure expansion. Sustainable operation increasingly depends on direct renewable energy sourcing, on-site generation, and strategic geographic deployment close to power generation stations.

While availability of power is one challenge for developing AI, an equally important concomitant variable is the availability of high-performance chips, which are central to various tailor-made applications.

Next-generation AI chips are specialized processors designed to handle artificial intelligence workloads such as machine learning, deep learning, and real-time decision-making.

Unlike traditional CPUs, modern AI systems use:

GPUs (Graphics Processing Units) are used for large-scale parallel processing and are best for training large AI models, thanks to their capacity for massive parallel computation. However, their power consumption is very high. GPUs are mostly used in data centers, research and large-scale AI systems.

TPUs (Tensor Processing Units) are AI-specific high-speed chips used for deep learning and neural networks. They are highly efficient at matrix operations. These chips are less flexible and are designed specifically for AI workloads such as language processing, speech recognition and real-time translation.

NPUs (Neural Processing Units) are used in smartphones, surveillance cameras and similar devices, and are best for on-device AI. They consume little power; GPUs and TPUs, by contrast, are the chips suited to training large models.

ASICs (Application-Specific Integrated Circuits) - These chips are custom AI hardware designed for specific tasks.

FPGAs (Field-Programmable Gate Arrays) - These chips are re-programmable and flexible. They are used in specialized systems in defence, finance and autonomous machines. They are optimized for speed, efficiency and low latency, which are critical for real-world AI applications.

No single chip can handle all AI tasks efficiently. The future lies in combining different AI processors. The combination of NPUs, GPUs, TPUs, ASICs and FPGAs enables advanced applications like autonomous vehicles, smart factories and defence systems.

(The writer is former member, CBIC and DG, NCB)

Disclaimer: This content has not been generated, created or edited by Dailyhunt. Publisher: thehansindia