The potential and limitations of artificial intelligence

Everyone is excited about artificial intelligence. Great strides have been made in machine learning technology and techniques. However, at this early stage of its development, we may need to curb our enthusiasm a bit.

The value of AI can already be seen across a wide range of industries, including marketing and sales, business operations, insurance, banking and finance, and more. In short, it is well suited to a wide range of business activities, from human capital management and people analytics to recruiting. Its potential runs through the entire business ecosystem. It is now more than evident that AI could be worth trillions of dollars to the global economy.

Sometimes we can forget that AI is still a work in progress. Given its infancy, there are still technical limitations that must be overcome before we are truly in the brave new world of AI.

In a recent podcast from the McKinsey Global Institute, a firm that analyzes the global economy, Michael Chui, a partner at the institute, and James Manyika, its director, discussed the limitations of AI and what is being done to alleviate them.

Factors limiting the potential of AI

Manyika noted that some of AI’s limitations are “purely technical.” He framed them as questions: how do you explain what the algorithm is doing? Why does it make the decisions, produce the results, and generate the forecasts that it does? Then there are practical limitations involving the data itself and how it is used.

He explained that in the learning process, we feed data to computers not only to program them but also to train them. “We are teaching them,” he said. The teaching is done by providing labeled data. To teach a machine to identify objects in a photograph, or to recognize a variation in a data stream that may indicate a machine is about to break down, we feed it large amounts of labeled data: this batch of data means the machine is about to break, that batch means it is not, and from those examples the computer learns to recognize when a machine is about to fail.

Chui identified five limitations of AI that need to be overcome. The first is that, today, humans must label the data. For example, people review traffic footage, tracing cars and lane markers, to create the labeled data that autonomous vehicles use to build the algorithms needed to drive.
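The labeled-data idea described above can be illustrated with a deliberately tiny sketch (the sensor readings, labels, and nearest-centroid rule here are all hypothetical, not from the podcast): a handful of vibration readings labeled "about to break" or "healthy" are enough to train a trivial classifier.

```python
# Hypothetical sketch of learning from labeled data: readings labeled
# "about to break" vs "healthy" train a trivial nearest-centroid
# classifier on vibration levels.
healthy = [0.9, 1.1, 1.0, 0.8, 1.2]   # labeled "not about to break"
failing = [3.1, 2.8, 3.4, 2.9, 3.2]   # labeled "about to break"

centroid_healthy = sum(healthy) / len(healthy)
centroid_failing = sum(failing) / len(failing)

def about_to_break(reading):
    """Predict the label whose training centroid is closer."""
    return abs(reading - centroid_failing) < abs(reading - centroid_healthy)

print(about_to_break(3.0), about_to_break(1.0))  # True False
```

Real systems use far richer models, but the shape of the process is the same: labeled examples in, a decision rule out.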

Manyika noted that he knows of students who go to a public library to label art so that algorithms can be built that let the computer make predictions. For example, in the UK, groups of people identify photos of different breeds of dogs, producing labeled data that is used to create algorithms with which the computer can recognize the breeds itself.

This process is being used for medical purposes, he noted. People are labeling pictures of different types of tumors so that when a computer scans them, it can understand what a tumor is and what type of tumor it is.

The problem is that it takes an enormous amount of labeled data to teach the computer. The challenge is to find ways for the computer to learn from labeled data faster.

One tool now being used for this is the generative adversarial network (GAN). The technique uses two networks: one generates candidates, and the other discriminates whether what was generated looks real. The two networks compete with each other, pushing one another to improve. This technique allows a computer to generate art in the style of a particular artist, or architecture in the style of buildings it has observed.
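The two-network competition can be sketched in one dimension (this toy setup, its learning rates, and the N(4, 1) target are illustrative assumptions, not anything from the podcast): the generator shifts noise toward the real data, while the discriminator learns to tell the two apart.

```python
import numpy as np

# Minimal 1-D GAN sketch: the generator maps noise z to a*z + b; the
# discriminator is a logistic classifier D(x) = sigmoid(w*x + c).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0          # generator parameters (starts at N(0, 1))
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # "real" data: N(4, 1)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: label real=1, fake=0 (logistic loss)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator update: fool D (non-saturating loss -log D(fake))
    d_fake = sigmoid(w * (a * z + b) + c)
    gx = -(1 - d_fake) * w               # d(-log D)/dx, chained into a, b
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(float(np.mean(samples)))  # mean of generated samples, pulled toward 4
```

The adversarial pressure alone moves the generator's output toward the real distribution; no sample is ever labeled by a human.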

Manyika noted that people are currently experimenting with other machine learning techniques. For example, he said researchers at Microsoft Research Lab are developing in-stream labeling, a process that labels data through use. In other words, the computer tries to interpret the data based on how it is being used. Although in-stream labeling has been around for a while, it has recently made great strides. Still, according to Manyika, data labeling is a limitation that needs further development.

Another limitation of AI is the lack of data. To combat the problem, companies developing AI have been acquiring data over many years. To reduce the time it takes to collect data, companies are also turning to simulated environments. Creating a simulated environment inside a computer allows you to run many more trials, so the computer can learn much more, much faster.
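A simulated environment can stand in for years of real-world collection. The sketch below is hypothetical (the machine model, failure window, and numbers are invented for illustration): instead of waiting for real machines to fail, we simulate a thousand machine lifetimes in a fraction of a second, each one automatically labeled.

```python
import random

# Hypothetical sketch: simulate a machine whose vibration ramps up in
# the 10 steps before failure, and label each reading automatically --
# cheap, fast, pre-labeled training data.
def simulate_machine(steps, fail_at, rng):
    """Return a list of (vibration_reading, about_to_fail) pairs."""
    data = []
    for t in range(steps):
        reading = 1.0 + rng.gauss(0, 0.1)
        if fail_at - 10 <= t < fail_at:          # pre-failure ramp-up
            reading += (t - (fail_at - 10)) * 0.3
        data.append((reading, fail_at - 10 <= t < fail_at))
    return data

rng = random.Random(42)
dataset = []
for run in range(1000):                          # 1000 simulated lifetimes
    dataset.extend(simulate_machine(100, rng.randint(20, 99), rng))

labeled_failures = sum(1 for _, y in dataset if y)
print(len(dataset), labeled_failures)
```

A hundred thousand labeled readings, including ten thousand failure examples, are generated instantly; collecting the real-world equivalent could take years.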

Then there is the problem of explaining why the computer decided what it did. Known as explainability, this problem matters to regulators, who may investigate an algorithm’s decision. For example, if one person has been released from jail on bail and another has not, someone will want to know why. One could try to explain the decision, but it will certainly be difficult.

Chui explained that a technique is being developed to provide such explanations. Called LIME, which stands for Local Interpretable Model-agnostic Explanations, it involves perturbing parts of a model’s inputs and seeing whether that alters the result. For example, if you are looking at a photo and trying to determine whether the object in it is a truck or a car, changing the truck’s windshield or the rear of the car may change the prediction. That shows the model focuses on the rear of the car or the windshield of the truck to make its decision. In essence, experiments are run on the model to determine what makes a difference.
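The perturb-and-observe idea can be sketched without the LIME library itself (the stand-in model, its feature names, and its weights below are hypothetical; real LIME fits a local linear model rather than just zeroing features): hide one input at a time and see how much the black-box prediction moves.

```python
# Hypothetical sketch of the perturbation principle behind LIME:
# hide one input feature at a time and measure how much the
# black-box model's prediction changes.
def black_box(features):
    # Stand-in model: "truck score" driven mostly by windshield height
    windshield_height, rear_length, wheel_count = features
    return 0.8 * windshield_height + 0.1 * rear_length + 0.1 * wheel_count

def feature_importance(model, features, baseline=0.0):
    original = model(features)
    importance = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = baseline          # "remove" feature i
        importance.append(abs(original - model(perturbed)))
    return importance

scores = feature_importance(black_box, [2.0, 1.0, 4.0])
print([round(s, 2) for s in scores])  # windshield dominates
```

The feature whose removal moves the prediction the most is the one the model is relying on, which is exactly the kind of experiment Chui describes.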

Finally, biased data is also a limitation for AI. If the data fed into the computer is skewed, the result is skewed as well. For example, we know that some communities are subject to more police presence than others. If the computer is asked to determine whether a large police presence reduces crime, and the data comes mostly from a heavily policed neighborhood with little or none from a lightly policed one, then its conclusion rests on abundant data about one neighborhood and almost nothing about the other. The oversampled neighborhood can lead to a biased conclusion. Reliance on artificial intelligence can therefore mean reliance on the biases inherent in its data. The challenge is to find a way to “de-bias” the data.
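One simple de-biasing step is reweighting, so an oversampled neighborhood does not dominate the statistics. The sketch below uses invented numbers purely for illustration (200 records from a heavily patrolled neighborhood versus 10 from a lightly patrolled one):

```python
from collections import Counter

# Hypothetical sketch of reweighting oversampled data. Each record is
# (neighborhood, incident_reported); heavy_patrol is sampled 20x more.
records = (
    [("heavy_patrol", True)] * 80 + [("heavy_patrol", False)] * 120
    + [("light_patrol", True)] * 2 + [("light_patrol", False)] * 8
)

counts = Counter(hood for hood, _ in records)
# Inverse-frequency weights: each neighborhood contributes equally
weights = {hood: 1.0 / n for hood, n in counts.items()}

def weighted_rate(records, weights):
    num = sum(weights[h] for h, hit in records if hit)
    den = sum(weights[h] for h, _ in records)
    return num / den

raw_rate = sum(1 for _, hit in records if hit) / len(records)
adj_rate = weighted_rate(records, weights)
print(round(raw_rate, 2), round(adj_rate, 2))
```

The raw incident rate (about 0.39) is dragged toward the oversampled neighborhood's rate of 0.4; the reweighted rate (0.3) averages the two neighborhoods equally. Reweighting does not remove bias in what was measured, only in how much of it was measured, which is why de-biasing remains an open challenge.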

So while we can see the potential of AI, we must also recognize its limitations. But do not despair; AI researchers are working feverishly on these problems. Some things that were considered limitations of AI a few years ago no longer are, thanks to its rapid development. That is why you should check regularly with AI researchers on what is possible today.
