Artificial Intelligence: The Challenge of Human Development
What is Artificial Intelligence?
Today we have all kinds of “smart” devices, many of which can even be activated by voice alone and offer intelligent responses to our queries. This kind of cutting-edge technology may make us consider Artificial Intelligence (AI) to be a product of the 21st century. But it actually has much earlier roots, going all the way back to the middle of the 20th century.
The Roots
Science fiction writers imagined AI long before anyone else, but the real credit goes to the British mathematician Alan Turing, whose ideas about computational thinking laid the foundation for AI. Turing first presented the concept in a 1947 lecture. It was certainly something he continued to think about, for his written work includes a 1950 paper that explores the question, "Can machines think?" This is what gave rise to the famous Turing Test.
Even before that, in 1945, Vannevar Bush set out a vision of futuristic technology in an Atlantic magazine article entitled "As We May Think". Among the wonders he predicted was a machine able to rapidly process data to bring up people with specific characteristics or to find requested images.
The Emergence
Thorough as they were in their explanations, none of these visionary thinkers employed the term “artificial intelligence.” That only emerged in 1955 to represent the new field of research that was to be explored. It appeared in the title of "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence". The conference itself took place in the summer of 1956.
Poised at the beginning of a decade of optimism, researchers expressed confidence in the future and thought it would take just a generation for AI to become a reality. There was great support for AI in the United States during the sixties. With the Cold War in full swing, the United States did not want to fall behind the Russians on the tech front, so MIT benefited, receiving a $2.2 million grant from the U.S. government in 1963 to explore machine-aided cognition.
Progress continued with funding for a range of AI programs, including MIT's SHRDLU, David Marr's theories of machine vision, Marvin Minsky's frame theory, the PROLOG language, and the development of expert systems. That level of support for AI came to an end by the mid-1970s, though.
The First AI Winter!
Yes, AI also faced a rough winter. The period from 1974 to 1980 is considered the first "AI winter," a time when there was a shortage of funding for the field. This shift in attitude toward AI funding is largely attributed to two reports. In the U.S., it was "Language and Machines: Computers in Translation and Linguistics" by the Automatic Language Processing Advisory Committee (ALPAC), published in 1966. In the U.K., it was "Artificial Intelligence: A General Survey" by Professor Sir James Lighthill, FRS, published in 1973. It declared that "in no part of the field have discoveries made so far produced the major impact that was then promised." Lighthill's take corroborated the view that continued funding would be throwing good money after bad.
This doesn't mean that there was no progress at all, only that it happened under different names, as explained in "AI Winters and its Lessons". This is when terms such as "Machine Learning", "Informatics", "Knowledge-based Systems" and "Pattern Recognition" started to be used.
Changing Seasons in the Last Two Decades of the 20th Century
In the eighties, a form of AI identified as "knowledge-based systems", or so-called "expert systems", emerged. AI had finally hit the mainstream, as documented by sales figures in the U.S.: the market for "AI-related hardware and software" hit $425 million in 1986.
But AI hit a second winter in 1987, though this one lasted only until 1993. When desktop computers entered the picture, the far more expensive and specialized systems lost much of their appeal. DARPA, a major source of research funding, also decided that it was not seeing enough of a payoff.
At the end of the century, AI was once again in the limelight, particularly with the victory of IBM's Deep Blue over chess champion Garry Kasparov in 1997. But major corporate investment on a large scale would only happen in the next century.
Implementation
Artificial Intelligence (AI) is now a buzzword. It has been implemented and is delivering on its promise at many large companies, including Amazon, Facebook, Google, Uber and Netflix, and others are waiting in line. Airlines are using this tool to move luggage with near-zero error, retailers are using AI-powered robots in their warehouses, utilities use AI to forecast electricity demand, automakers are using AI for autonomous cars, and financial services companies are using AI to better understand the behavior of their customers, look for potential fraud, and identify new products and services customers will want. But as we look beyond the technology segment and these specific examples, AI adoption is still at an early stage.
The typical company that has successfully adopted AI is larger than its industry peers, has already embraced the digital transition and is well on the path to implementation, employs AI in the core of its value chain, adopts AI to increase revenue, and has the full support of executive leadership.
According to an estimate by the McKinsey Global Institute, the companies leading AI adoption have profit margins 3 to 15 percentage points higher than the industry averages in their market segments. So why isn’t everyone jumping into AI?
Challenges
A number of companies have implemented AI programs and understand that a full implementation touches all aspects of the business. The insights gained have led to changes that have fundamentally transformed their businesses.
Based on experience implementing predictive and prescriptive analytics solutions, the challenges fall into four major areas:
Cost – supporting not only the initial outlays for software and the costs of cloud support, but also the ongoing costs of training employees and retraining the AI system when business processes change.
Culture clash – any changes will be viewed with suspicion within the organization. For example, one company had employees whose primary job responsibility was to generate a weekly report that, after the implementation, could be produced instantly by the system. Also, because no one had really analyzed the data before, the company may find that certain markets and promotions really don’t work as commonly believed.
Technology options – there are so many that it is hard to narrow down the potential choices and select the best implementation path.
ROI uncertainty – the inability to precisely predict ROI, especially at the start of the project, will be a source of friction.
While these challenges must be addressed, it is important to keep the team focused on the long-term goal and the benefits AI will deliver. As with all transformative changes, clear and open communications will keep the team engaged and maintain management support. In addition, companies should understand that a re-training program will be valuable for those personnel affected by the business process changes.
Opportunities
As stated earlier, the profit margins of companies that adopt AI are much higher than those of their industry peers. For example, McKinsey predicts that a retail company could generate profit margins 9 percentage points higher than the industry average. In health care and financial services, the margins will be even higher.
Also, the additional benefits of higher customer engagement and satisfaction, smarter and more efficient R&D, optimized production and maintenance, and more efficient and effective sales and marketing campaigns will begin to create a sustainable competitive advantage. Competitors who delay adoption may find themselves in a very difficult position.
Data privacy and security
Most AI applications rely on huge volumes of data to learn and make intelligent decisions. Machine learning systems feast on data – often sensitive and personal in nature – to learn from it and improve themselves. This makes them vulnerable to serious issues like data breaches and identity theft. There is some good news: increasing awareness among consumers about the growing number of machine-made decisions based on their personal data has prompted the European Union (EU) to implement the General Data Protection Regulation (GDPR), designed to ensure the protection of personal data. In addition, an emerging method, 'Federated Learning', is set to disrupt the AI paradigm. It will allow data scientists to develop AI without compromising users’ data security and confidentiality.
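To make the idea concrete, here is a minimal sketch of the federated-averaging pattern, assuming only NumPy and a toy linear model: each client trains on its own private data and sends back only model weights, which a central server averages. The data and model here are invented for illustration, not any specific framework's API.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a simple linear model on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One round of federated averaging: raw data never leaves a client."""
    local_weights = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(local_weights, axis=0)  # the server only sees weights

# Toy example: three clients, each holding its own private dataset.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, clients)
print("Global weights after 10 federated rounds:", weights)
```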
Algorithm bias
An inherent problem with AI systems is that they are only as good – or as bad – as the data they are trained on. Bad data is often laced with racial, gender, communal or ethnic biases. Proprietary algorithms are used to determine who is called for a job interview, who is granted bail, or whose loan is sanctioned. If the bias lurking in the algorithms that make vital decisions goes unrecognized, it could lead to unethical and unfair consequences. For instance, Google's Photos service uses AI to identify people, objects and scenes. But there is a risk of it displaying wrong results, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
In the future, such biases will probably become more pronounced, as many AI systems will continue to be trained on bad data. Hence, the need of the hour is to train these systems on unbiased data and to develop algorithms that can be easily explained. Microsoft is developing a tool that can automatically identify bias in a series of AI algorithms, a significant step towards automating the detection of unfairness that may find its way into machine learning. It is a great opportunity for businesses to leverage AI without inadvertently discriminating against a specific group of people. Approaches like "Path-Specific Counterfactual Fairness", proposed by DeepMind researchers Silvia Chiappa and Thomas Gillam, can also be used to remove biases.
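Neither Microsoft's tool nor DeepMind's counterfactual method is reproduced here, but a much simpler fairness check can already flag problems. The sketch below computes a demographic-parity gap on hypothetical predictions; the data and group labels are entirely made up.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between two groups.
    A large gap suggests the model may treat the groups differently
    and warrants closer inspection; it is only a crude first check."""
    preds = np.asarray(predictions, dtype=float)
    groups = np.asarray(groups)
    return preds[groups == "A"].mean() - preds[groups == "B"].mean()

# Hypothetical loan-approval predictions (1 = approve) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print("Demographic parity gap:", demographic_parity_gap(preds, groups))  # 0.2
```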
Data scarcity
It is true that organisations have access to more data today than ever before. However, datasets that are relevant for AI applications to learn from are rare. The most powerful AI systems are those trained with supervised learning, and this training requires labeled data – data organised so that machines can ingest and learn from it. Labeled data is limited. In the not-so-distant future, the automated creation of increasingly complex algorithms, largely driven by deep learning, will only aggravate the problem. There is a ray of hope, though. As a fast-growing trend, organisations are investing in design methodologies to figure out how to make AI models learn despite the scarcity of labeled data. 'Transfer Learning', 'Unsupervised/Semi-Supervised Learning' and 'Active Learning' are just a few examples of the next-generation AI approaches that can help resolve this.
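As a rough illustration of how transfer learning eases the labeled-data problem, the sketch below (assuming TensorFlow/Keras is installed) freezes an ImageNet-pretrained backbone and trains only a small new head, so relatively few labeled examples are needed; the dataset itself is left as a placeholder.

```python
import tensorflow as tf

# Reuse an ImageNet-pretrained backbone so only a small labeled dataset is
# needed to train the task-specific head on top of it.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. two target classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(small_labeled_dataset, epochs=5)  # placeholder: your labeled data
```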
Will AI Eat Jobs in the Future?
One of the biggest “bottom-up” advances for artificial intelligence is the ability to be intuitive in planning and responding to tasks. Perhaps the biggest breakthrough in this regard came in 2016 when AlphaGo, a custom program developed by Google's DeepMind AI unit, beat the world’s best “Go” player.
The ancient Chinese board game had long been seen as one of AI's greatest challenges, the sheer variety of possible moves demanding that players evaluate and react in countless different ways at each turn. That a program was finally able to match this level of “humanity” was a real breakthrough, even more so than IBM’s Deep Blue's victory over chess champion Garry Kasparov in 1997.
Because of this leap forward in intelligence, experts from across the globe now predict we will see an AI program able to win the World Series of Poker in just two short years. Not only that, the same reactive technology is currently being investigated by the banking sector, with NatWest's "Cora" chatbot in particular tipped to replace all telephone banking by 2022.
What about other job sectors? Are they too under threat from the advancement of artificial intelligence? Recent research from the research firm Gartner suggests that 85% of customer interactions in retail will be AI-managed by 2020. The other 15%, mainly the human sales process, will take a fair while longer, with 2031 the closest estimate for full replacement.
What can be done?
Because automation has crept into modern society so slowly, it can be extremely difficult to predict how the job market will evolve as it gets ever more advanced. Perhaps the biggest challenge will be ensuring “artificial intelligence” does not lead to the mass wipe-out of several job sectors - almost certainly requiring new legislation to be passed, as well as a re-think of the employment market overall.
However, we have already seen shifts to incorporate the digital-driven advances in a variety of sectors, from banking to farming and beyond. Many predict that learning new skills early will be crucial for any affected sector - which looks set to be many of them. In short, the only way to beat the machines is to join them - or at the very least know how to use them.
AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways.
One of the reasons for the growing role of AI is the tremendous opportunity for economic development that it presents. A project undertaken by PricewaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030; by contrast, if you look honestly at India's foray into the area, the trend is not very encouraging.
Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.
Finance
Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” These advances are designed to take the emotion out of investing, to base decisions on analytical considerations, and to make these choices in a matter of minutes.
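As a purely illustrative sketch of the kind of software described, a loan applicant can be scored from several borrower features rather than a single credit score; the features, data and model choice below are hypothetical, not any lender's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, scaled borrower features:
# [credit_score/850, income/$100k, debt-to-income ratio, years_employed/40]
X = np.array([
    [0.82, 0.95, 0.20, 0.30],
    [0.55, 0.40, 0.65, 0.05],
    [0.74, 0.60, 0.35, 0.50],
    [0.48, 0.30, 0.80, 0.10],
])
y = np.array([1, 0, 1, 0])  # 1 = loan repaid in (invented) historical data

model = LogisticRegression().fit(X, y)
applicant = np.array([[0.70, 0.55, 0.40, 0.25]])
print("Estimated repayment probability:", model.predict_proba(applicant)[0, 1])
```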
A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decision making. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. Powered in some places by advanced quantum computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. That dramatically increases storage capacity and decreases processing times.
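The matching step itself can be sketched very simply (real exchange engines are vastly more elaborate, and the quantum-computing aside above concerns hardware rather than matching logic). The prices and quantities below are invented.

```python
import heapq

def match_orders(buys, sells):
    """Match limit orders by price: a toy engine over (price, quantity) tuples."""
    bids = [(-p, q) for p, q in buys]   # max-heap on buy price
    asks = list(sells)                  # min-heap on sell price
    heapq.heapify(bids)
    heapq.heapify(asks)
    trades = []
    while bids and asks and -bids[0][0] >= asks[0][0]:
        neg_bid, bid_qty = heapq.heappop(bids)
        ask_price, ask_qty = heapq.heappop(asks)
        qty = min(bid_qty, ask_qty)
        trades.append((ask_price, qty))  # execute at the ask price (one convention)
        if bid_qty > qty:
            heapq.heappush(bids, (neg_bid, bid_qty - qty))
        if ask_qty > qty:
            heapq.heappush(asks, (ask_price, ask_qty - qty))
    return trades

print(match_orders(buys=[(101, 10), (99, 5)], sells=[(100, 8), (102, 4)]))
```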
Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels.
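A minimal sketch of this kind of outlier flagging, assuming scikit-learn and entirely synthetic transaction data, might look like the following.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic transaction features: [amount_$, hour_of_day, distance_from_home_km]
rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 14, 5], scale=[20, 4, 3], size=(500, 3))
suspicious = np.array([[4800, 3, 900], [3500, 2, 1200]])  # invented outliers
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)            # -1 marks likely outliers
print("Indices flagged for review:", np.where(flags == -1)[0])
```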
National Security
AI is going to play a substantial role in national defense. I do not have access to information on how it is used in national-security-related areas in India. But in the USA, through Project Maven, the military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” According to U.S. Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.”
The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decision making to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyper-war.
While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyper-war speeds. The challenge in the West of where to position “humans in the loop” in a hyper-war scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict.
Just as AI will profoundly affect the speed of warfare, the proliferation of zero-day or zero-second cyber threats, as well as polymorphic malware, will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems are migrating, and will need to shift, to a layered approach to cybersecurity with cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.
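For contrast with the learning-based defenses described above, the conventional string-signature check they go beyond can be sketched in a few lines; the signatures below are invented, not real malware indicators.

```python
# Invented byte-string signatures, purely for illustration.
KNOWN_SIGNATURES = {
    b"EXAMPLE_DROPPER_MARKER": "example-family-A",
    b"EXAMPLE_RANSOM_NOTE_STRING": "example-family-B",
}

def scan(payload: bytes):
    """Return the names of any known signature strings found in the payload."""
    return [name for sig, name in KNOWN_SIGNATURES.items() if sig in payload]

print(scan(b"...header...EXAMPLE_RANSOM_NOTE_STRING...payload..."))
# Polymorphic or zero-day code that matches no stored string slips through,
# which is one reason layered, learning-based defenses are being adopted.
```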
Preparing for hyper-war and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well.
Health care
AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images." According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.
What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiology imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.
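As an illustrative, emphatically non-clinical sketch of such a labeled-image classifier, assuming TensorFlow/Keras and leaving the CT patch dataset as a placeholder:

```python
import tensorflow as tf

# Small binary classifier: "normal" vs "irregular" lymph-node patches.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "irregular"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(ct_patches, labels, epochs=10, validation_split=0.2)  # placeholder data
```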
AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.”
Judicial System
AI is being deployed in the criminal justice area in the USA. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity.
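Purely to illustrate the mechanics of a 0-to-500 scale (the actual list's features, weights and data are not public and are not reproduced here), a model's risk probability can be mapped to such a score as follows.

```python
def to_risk_score(probability: float) -> int:
    """Scale a model's probability in [0, 1] to an integer score in [0, 500]."""
    clipped = max(0.0, min(1.0, probability))
    return round(clipped * 500)

print(to_risk_score(0.37))  # -> 185
```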
Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:
Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates.
However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.
Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. Put differently, China has become the world’s leading AI-powered surveillance state.
Transportation
Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector.
Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps.
Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. They are mounted on the top of vehicles that use imaging in a 360-degree environment from a radar and light beams to measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
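The core ranging arithmetic is simple time-of-flight, sketched below with invented numbers; production sensors and perception stacks do far more.

```python
C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_seconds: float) -> float:
    """Time-of-flight ranging: distance = (c * round-trip time) / 2."""
    return C * round_trip_seconds / 2

d1 = distance_m(200e-9)            # pulse returns after 200 ns -> ~30 m
d2 = distance_m(199e-9)            # next pulse, sent 0.1 s later
closing_speed = (d1 - d2) / 0.1    # object approaching at ~1.5 m/s
print(round(d1, 2), round(d2, 2), round(closing_speed, 2))
```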
Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change.
Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service.
However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.
Smart cities
Metropolitan governments in the US and Europe are using AI to improve urban service delivery. For example, according to Rashmi Krishnamurthy, Kevin Desouza, and Gregory Dawson:
The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls.
Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.
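As a toy sketch of this style of triage recommendation (not the actual Cincinnati system; the features, codes and labels below are invented), a classifier trained on past calls can suggest on-site treatment versus transport:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [call_type_code, neighborhood_code, temperature_C, similar_calls_last_week]
X = np.array([
    [1, 3, 30, 2],
    [2, 1, 10, 0],
    [1, 2, 25, 5],
    [3, 3, -2, 1],
    [2, 2, 15, 3],
    [3, 1, 5, 0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 0 = treat on site, 1 = transport to hospital

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print("Recommendation for a new call:", clf.predict([[1, 3, 28, 4]])[0])
```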
Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards.
Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.”
AI: Indian Initiatives
The findings of a recently released LinkedIn study suggest that while changes driven by AI technologies may still be in their infancy, their impact is being felt across the global labour market in all sectors (https://bit.ly/2OroZdA). Moreover, industries with more AI skills present among their workforce are also the fastest-changing industries.
Further, even as AI – broadly defined as the effort to replicate human intelligence in machines – has no superpowers yet, it is undoubtedly becoming smarter with every passing day, thanks to advancements in machine learning and deep-learning algorithms, the humongous amounts of Big Data on which these algorithms can be trained, and the phenomenal increase in computing power.
These developments have, understandably, given rise to the fear that automation and AI will take away our jobs and eventually become smarter than us. That may, however, not necessarily be the case.
A study by EY and Nasscom predicts that in India by 2022, around 46% of the workforce will be engaged in entirely new jobs that do not exist today, or will be deployed in jobs with radically changed skill sets. This is also borne out by the new LinkedIn study. AI skills, for instance, are among the fastest-growing skills on LinkedIn, with a 190% increase from 2015 to 2017.
In 2017, the number of LinkedIn members adding AI skills to their profiles increased 190% from 2015. “When we talk about ‘AI skills’, we are referring to skills needed to create AI technologies, which include expertise in areas such as neural networks, deep learning, and machine learning, as well as actual ‘tools’ such as Weka and Scikit-Learn," according to the author of the report, Igor Perisic, chief data officer of LinkedIn.
Many of the changes in skills, according to the LinkedIn report, are due to three reasons: a rise in data and programming skills that are complementary to AI; skills to use products or services that are powered by data such as search engine optimization for marketers; and interpersonal skills.
While the software industry continues to stand out as the top field for professionals with AI skills, growth is also strong in sectors such as education and academia, hardware and networking, finance, and manufacturing, according to the report. According to a PricewaterhouseCoopers report, machine learning is the most popular AI-powered solution in the IT/ITeS industry (cited by 63% of participants). This also reinforces the understanding that IT/ITeS may be the sector most disrupted by machine learning solutions, as such solutions replace repetitive manual jobs.
This, even as it is recognized that the country lacks broad-based expertise in the research and application of AI, and that there is a pressing need for privacy and security safeguards, including formal regulations around the anonymization of data.
