The 2010–2019 decade can be called the decade of digital transformation. The progress made in these years in the technological field has been enormous, so much so that some enthusiasts speak of the advent of the “Fourth Industrial Revolution”. The technologies leading this change of era are Blockchain, Cloud Computing, the Internet of Things, Big Data Analytics and Artificial Intelligence. Among them, the latter has taken center stage, since it has shown great potential for innovation in multiple sectors of the economy and society.
What is it?
Artificial Intelligence (AI) is the branch of Information Technology aimed at creating intelligent machines, that is, machines able to act and react in ways similar to human reasoning. Today, AI has become an essential part of the technology industry.
Research and development in Artificial Intelligence includes programming computers to perform and refine tasks such as reasoning, problem solving, perception, learning, planning, and the manipulation and movement of objects.
Artificial Intelligence systems are critical for companies that seek to extract value from data by automating and optimizing processes or producing valuable and useful insights. Artificial Intelligence systems driven by machine learning allow companies to take advantage of their large amounts of available data to uncover insights and patterns that would be impossible for any person to find, allowing them to offer more specific and personalized communication, predict critical events, identify likely fraudulent transactions, and more.
Harvard Business Review offers key information about the importance of AI in today’s economic environment:
“The effects of AI will be magnified in the next decade, since manufacturing, retail, transportation, finance, health, law, advertising, insurance, entertainment, education and virtually every other industry will transform their main business processes and models to take advantage of machine learning”.
There are many classifications for Artificial Intelligence. Here we consider two of them:
The first way to classify it is to establish a division into two groups: weak AI and strong AI. Weak AI corresponds to systems designed and trained for a particular task. The virtual personal assistants offered by mobile phone operating systems are an example of this category.
Strong AI, on the other hand, corresponds to systems with generalized human cognitive abilities: when presented with an unfamiliar task, they have enough intelligence to find a solution.
The second classification establishes four types:
Reactive machines: The most basic AI systems are purely reactive. They analyze the data of the present situation and work out predictions and possible decisions from it alone. They have no capacity to form memories, nor can they use past experiences to inform current decisions.
Limited memory: These systems can use past experiences to inform future decisions. Some of the decision-making functions in autonomous vehicles have been designed this way. Recent observations are used to choose actions in the near future, such as noticing that a car has changed lanes and reducing speed accordingly. These observations are not stored permanently.
Theory of mind: It implies the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior. It is a basic element for social interaction. This type of AI does not yet exist.
Self-awareness: In this category, AI systems have a sense of themselves, they have consciousness. Self-aware machines understand their current state and can use the information to infer what others are feeling. This type of AI does not yet exist.
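The difference between the first two types above can be sketched in a few lines of code. This is a toy illustration, not a real autonomous-driving system; the class names, thresholds, and the shrinking-gap rule are all invented for the example.

```python
class ReactiveAgent:
    """Purely reactive: decides from the current observation alone, keeps no memory."""
    def act(self, current_speed_kmh):
        # The decision depends solely on the present input.
        return "brake" if current_speed_kmh > 100 else "hold"

class LimitedMemoryAgent:
    """Limited memory: keeps a short window of recent observations to inform decisions."""
    def __init__(self, window=3):
        self.window = window
        self.history = []

    def act(self, gap_to_car_ahead_m):
        # Remember only the last few observations; nothing is stored permanently.
        self.history = (self.history + [gap_to_car_ahead_m])[-self.window:]
        # If the gap to the car ahead has been shrinking, slow down.
        if len(self.history) == self.window and self.history[0] > self.history[-1]:
            return "slow_down"
        return "hold"

reactive = ReactiveAgent()
memory = LimitedMemoryAgent()
print(reactive.act(120))          # reacts to the present input only
for gap in (50, 40, 30):          # a car cut in: the gap keeps shrinking
    decision = memory.act(gap)
print(decision)
```

The limited-memory agent's window plays the role the text describes: observations inform near-future actions but are then discarded.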
Automation: the process of making a system or process function automatically. Robotic process automation (RPA), for example, can be programmed to perform high-volume repeatable tasks normally carried out by humans. RPA differs from plain IT automation in that it can adapt to changing circumstances.
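The RPA idea of a high-volume repeatable clerical task can be illustrated with a small sketch. The record fields, the invoice format, and the function name are all hypothetical; real RPA tools drive existing applications rather than plain scripts.

```python
def format_invoice(record):
    """Turn one raw record into the standard invoice line a clerk would otherwise type."""
    return f"INV-{record['id']:04d} | {record['customer']:<10} | ${record['amount']:.2f}"

# A batch of records processed without manual help.
records = [
    {"id": 1, "customer": "Acme", "amount": 120.5},
    {"id": 2, "customer": "Globex", "amount": 75.0},
]

for line in map(format_invoice, records):
    print(line)
```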
Machine Learning: These are systems in which the machine processes data sets, identifies patterns and records them as new information to incorporate into analysis and decision making. Deep Learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms: supervised learning, in which data sets are labeled so that patterns can be detected and used to tag new data sets; unsupervised learning, in which data sets are not labeled and are grouped according to similarities or differences; and reinforcement learning, in which data sets are not labeled, but the AI system receives feedback after performing one or several actions.
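A minimal sketch can make the supervised case concrete: labeled examples are used to detect a pattern (here, one mean point per label), which then tags new, unseen data. The data, labels and nearest-centroid approach are invented for illustration; real systems use far richer models.

```python
def train(samples, labels):
    """Supervised learning step: compute one centroid (mean point) per label."""
    centroids = {}
    for label in set(labels):
        points = [s for s, l in zip(samples, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(points) for c in zip(*points))
    return centroids

def predict(centroids, point):
    """Tag a new, unlabeled point with the label of the nearest centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], point))

# Labeled training set: two clusters of 2-D points.
X = [(1.0, 1.2), (0.8, 1.0), (5.0, 5.1), (5.2, 4.9)]
y = ["low", "low", "high", "high"]

model = train(X, y)
print(predict(model, (0.9, 1.1)))  # → low
print(predict(model, (5.1, 5.0)))  # → high
```

An unsupervised variant would run the same grouping without the `labels` argument, clustering points purely by similarity.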
Deep Learning: This is an important refinement of machine learning algorithms. Programs are built using logical structures that closely resemble the organization of the mammalian nervous system, with layers of processing units (artificial neurons) that specialize in detecting certain characteristics of perceived objects. These structures are called Neural Networks. Deep Learning represents a closer approach to the way the human nervous system works. Our brain has a highly complex microarchitecture, in which differentiated nuclei and areas have been discovered whose neural networks specialize in performing specific tasks.
Thanks to Neuroscience, the study of clinical cases of brain damage and advances in diagnostic imaging, we know, for example, that language has specific centers or that there are specialized networks to detect different aspects of vision, such as edges, inclination of lines, symmetry and even areas closely related to the recognition of faces and their emotional expression.
Deep Learning computational models mimic these architectural features of the nervous system, allowing networks of processing units within the global system to specialize in detecting certain hidden features in the data. This approach has produced better results in computational perception tasks than monolithic networks of artificial neurons.
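The layered structure described above can be shown with a tiny hand-wired network: two artificial neurons in a hidden layer each detect one feature of the input, and an output unit combines them to compute XOR. The weights here are chosen by hand for illustration; in deep learning they would be learned from data.

```python
def relu(x):
    # Each artificial neuron applies a simple nonlinearity to a weighted sum of inputs.
    return max(0.0, x)

def xor_network(x1, x2):
    # Hidden layer: two processing units, each specialized in one feature.
    h1 = relu(1.0 * x1 + 1.0 * x2)          # fires if either input is on
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)    # fires only if both inputs are on
    # Output layer combines the detected features.
    return 1.0 * h1 - 2.0 * h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))  # 1 only when exactly one input is on
```

No single neuron can compute XOR on its own; it is the division of labor between specialized units, stacked in layers, that gives the network its power.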
Computer vision: This consists of capturing and analyzing visual information using a camera, and processing the data for image recognition and object identification. It is often compared to human sight, but computer vision is not bound by biology and can be programmed to see through walls, for example. It is used in a wide range of applications, from signature identification to the analysis of medical images.
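One of the most basic building blocks of computer vision, mentioned earlier in connection with the visual cortex, is edge detection. The sketch below is a deliberately simplified, invented example: it finds vertical edges in a tiny grayscale image by taking differences between neighboring pixel values.

```python
def vertical_edges(image):
    """Return |left - right| pixel differences; large values mark vertical edges."""
    return [
        [abs(row[c] - row[c + 1]) for c in range(len(row) - 1)]
        for row in image
    ]

# 4x4 grayscale image: a dark region (0) on the left, a bright region (9) on the right.
img = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edges = vertical_edges(img)
print(edges[0])  # → [0, 9, 0]: the edge sits between columns 1 and 2
```

Real vision systems apply many such filters (learned, not hand-written) at every layer of a deep network.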
Natural Language Processing (NLP): Systems that can understand and use human languages. One of the oldest and best-known examples of NLP is spam detection, which examines the subject line and text of an email and decides whether it is junk. Current NLP approaches are based on machine learning. NLP tasks include text translation, sentiment analysis and speech recognition.
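In the spirit of the spam-detection example, here is a toy filter that scores an email by counting hand-picked spam keywords in its subject and body. The word list and threshold are invented for illustration; a machine-learning filter would learn these weights from labeled examples instead.

```python
SPAM_WORDS = {"winner", "free", "prize", "urgent", "click"}

def spam_score(subject, body):
    """Fraction of words in subject + body that match the spam-keyword list."""
    words = (subject + " " + body).lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)
    return hits / max(len(words), 1)

def is_spam(subject, body, threshold=0.2):
    return spam_score(subject, body) >= threshold

print(is_spam("URGENT! You are a winner", "Click here for your FREE prize"))
print(is_spam("Team meeting", "Agenda attached for tomorrow"))
```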
Robotics: this is an engineering field focused on the design and manufacture of robots. Robots are often used to perform tasks that are difficult for humans to perform consistently. They are used in assembly lines for the production of cars or to move large objects in space. More recently, researchers are using machine learning to build robots that can interact in social environments.
Autonomous vehicles. Companies like Google and Tesla have made progress in recent years in developing vehicles that can drive without human intervention. Although this has not yet been fully achieved, there have been important advances. Currently, automotive firms offer vehicles that can park by themselves, detect collisions, monitor blind spots, and support voice recognition and internet browsing. All these features use Artificial Intelligence algorithms to process incoming sensor data in real time.
Banking and finance. Due to the growing volume of financial data, many service providers have turned to Artificial Intelligence. Automated systems are much faster at analyzing market data to forecast changes in stock trends and manage finances, and they can even use algorithms to offer customers suggestions for decision making. Likewise, banks use AI to track their customer base, meet customers' needs, make suggestions on product schemes, and detect irregular behavior that may indicate fraud.
Health. Artificial Intelligence is being used by doctors to assist with diagnostic and treatment procedures. This reduces the need for multiple machines and pieces of equipment, which in turn reduces costs. There are experiences with administering anesthesia to patients automatically following standard procedures. In addition, there are systems designed to suggest different types of treatment based on patients' medical histories.
Manufacturing. Manufacturing is one of the first industries to have used AI. Robots are used in factories to assemble different parts and then package them without the need for manual help. From the handling of raw materials to the delivery of final products, automated mechanisms play a fundamental role.
Internet sales. Many websites offer customers a chat with a representative for queries or complaints. Most of the time, however, these are not humans but bots trained to respond by extracting the required information from the site and presenting it to the client. These bots use natural language processing to interpret the client's query, focusing on keywords, and then retrieve the necessary data in response.
Home. Smart home devices built on IoT technology also make use of Artificial Intelligence. The technique consists of learning the user's behavior and usage patterns; the device then begins to behave accordingly on its own, without needing instructions. Air conditioning units can set the temperature of your home the way you want at different times of the day. Similarly, lighting systems can adjust brightness at different times according to the user's preferences.
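The learn-the-pattern idea behind these devices can be sketched as a thermostat that averages the temperatures the user sets manually at each hour, then reproduces them on its own. The class, its methods and the numbers are all hypothetical; real smart-home products use more sophisticated models.

```python
from collections import defaultdict

class LearningThermostat:
    def __init__(self):
        self.observations = defaultdict(list)  # hour of day -> manual settings seen

    def observe(self, hour, temperature_c):
        """Record a temperature the user set manually at this hour."""
        self.observations[hour].append(temperature_c)

    def suggest(self, hour, default_c=21.0):
        """Reproduce the learned preference for this hour, or a default if none yet."""
        settings = self.observations.get(hour)
        return sum(settings) / len(settings) if settings else default_c

t = LearningThermostat()
for temp in (19.0, 20.0, 21.0):   # evenings: the user keeps choosing around 20 C
    t.observe(20, temp)
print(t.suggest(20))  # → 20.0 (learned evening preference)
print(t.suggest(6))   # → 21.0 (no data yet, falls back to the default)
```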
Smartphones. The most common application of AI can be seen on our mobile phones in the form of Virtual Personal Assistants. These assistants collect information to interpret what the user asks for and then obtain the data needed to adapt to the user's preferences. They use Machine Learning to process large amounts of data and thus improve their effectiveness.
Journalism. In today’s digital world, reading blogs and articles has become common practice for most of us, but we hardly realize that some of them are actually written by machines. Although AI cannot yet be used to write in-depth articles, it can easily prepare simple reports that do not require much analysis. Large media companies are using AI to create simple reports about sports or elections, which would take a long time if done manually. Likewise, companies whose business model is based on data, including real estate and e-commerce companies, are generating content in this way.
There are several clear market trends within the field of Artificial Intelligence that will boom in the coming years. One of them is “Edge AI”. While intelligence today, that is, AI algorithms, resides primarily in cloud services, more and more device manufacturers want to provide these services directly, without relying so heavily on third-party networks or infrastructure. The prevailing situation is that applications performing computer vision and natural language understanding tasks work by sending the data (images and audio) over the network, so that machines in the cloud carry out the processing. The trend is to move this processing to the edge, that is, to the devices themselves. This is what is known as “Edge Computing”. Delays caused by network communication are thus eliminated and a more responsive real-time service can be offered, although it requires much more computing capacity in the devices.
Another trend is “Hardware specialized in AI.” In line with the above, device manufacturers are incorporating new specialized microprocessors to execute AI algorithms. For example, we will see how it is increasingly common for mobile devices to have Neural Processing Units (NPU), capable of very efficiently executing the computations required by deep neural networks (deep learning). On the other hand, Google is also opting to develop new hardware specific to AI: these are TPUs (Tensor Processing Units), designed to run applications programmed with its development environment for AI (TensorFlow) much faster.
Finally, we have the “Democratization of AI”: there is a clear trend toward broader access to advanced Artificial Intelligence algorithms. Until recently, only highly specialized professionals, known as Data Scientists, were able to use these technologies. The aim now is to make them easier to use and more productive through friendlier tools available in the cloud.
Artificial Intelligence is at the center of the technological revolution currently underway. All major technology companies are investing heavily in research and innovation in this area. In a data-driven world, where the information generated by human activity and our growing use of network-connected electronic devices increases dramatically, the development of intelligent systems that feed on and learn from this data will advance in ways that could radically change the traditional ways in which humans relate to one another.