Artificial intelligence for Good, the artificial intelligence of the future.

Interview with Richard Benjamins, Chief AI & Data Strategist at Telefónica

— Besides the metaverse, artificial intelligence always appears as the big talking point when it comes to technological advances. How far do you think we can go in terms of artificial intelligence? What are the challenges and/or opportunities ahead? And which is it, AI for Good or Good AI? What would be the difference between these two concepts?

The quickest answer would be that there is no clear answer: right now nobody knows, nobody has a clear idea of how far artificial intelligence can go. What I can give you is my opinion. A book called “Architects of Intelligence: The Truth About AI from the People Building It” comes to mind, for example, in which leading figures in this field are interviewed and each one has a different vision. The issue is whether or not artificial intelligence can rise to the level of human intelligence.

In this regard we’re talking about estimates that span a period of up to 300 years: the most optimistic talk about 2049, while others say that artificial intelligence will never match human intelligence.

The big leap forward, and where we are today, is deep learning, which means more computing, more data, and more powerful learning algorithms and network architectures. The breakthrough came in 2010-2011, and that is the moment we are at: machine learning that requires a huge amount of data and looks for patterns in the past that allow us to predict, classify and, in fact, do a great many things, to the point where these systems can perform certain tasks much better than a person, for example with natural language: translations, summaries, interpretations, etc.

There is already parity between a machine and a human, but only in very specific tasks. This is narrow AI, so called because it is capable of doing one specific task, and it is still a long way from human intelligence, which is capable of doing thousands of different tasks. On the other hand, there are natural language programs like GPT-3 that are trained on vast amounts of data and are doing things no one ever envisaged they would be able to do.

— It’s hard to imagine the future. When you talk about the narrow AI concept, do you mean that the activities in which artificial intelligence will be most present will be those most likely to be automated?

No. What is needed is a direct correlation between input and output, backed by a lot of data. Nobody can build a program that predicts the stock market, because there are not enough events (data) to do it. It could perhaps be predicted in normal times, but not when there are unforeseen events like a pandemic or a war; such events (luckily) don’t happen frequently enough to train the machine and give it time to learn them. It is useful for predicting diseases, for example, because there is a pattern there.

At the end of the day it is a machine that interprets mathematics, patterns and statistics; it has no awareness of the type of data it is interpreting. What has changed in recent years are the large language models (LLMs) for natural language, because they are trained to do one task and then it turns out they can do much more.

Predicting the words that are missing in a text, generating summaries and stories, answering questions: that is no longer quite so narrow, even if it is still narrow AI. We have taken a small step, as with transfer learning, for example, where the machine first learns to recognise objects and that then serves it to learn faces or people; it does not have to start from scratch because it builds on a previously learned base.
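To make the idea of building on a previously learned base concrete, here is a minimal sketch of transfer learning, assuming a recent PyTorch and torchvision install; the new task and its 10 classes are placeholders, not anything from the interview:

```python
import torch.nn as nn
from torchvision import models

# Start from a network that has already learned general visual features
# (edges, textures, shapes) on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the previously learned base so it is not overwritten.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer: the new task (say, recognising a
# hypothetical 10 faces) starts from the learned base, not from scratch.
num_classes = 10  # placeholder for the new task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only model.fc.parameters() would now be trained on the new data.
```

Only the small new layer needs data and training time, which is exactly the point: the machine does not start from zero.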

Compared to human intelligence, we have common sense, context and a sense of reality, whereas the machine has no consciousness, no sense of reality, no sense of physical interaction. Compare it to how learning happens in a baby, who knows nothing at birth and acquires a physical model of the world within 18 months. In machine learning the machine has no such physical model: it does not understand whether we are dropping something or throwing it upwards, it has no idea what is really happening.

Much research is still needed going forward: doing many other things that start from deep learning, such as reinforcement learning. There is still a long way to go in artificial intelligence; we cannot yet say that reaching human-level intelligence is possible, but at least we can already say that it is not impossible.

Human beings, whose origins lie in atoms, cells and so on, and who do have consciousness and intelligence, have evolved to where we are now through a random process lasting billions of years. So I don’t see why that same story can’t be repeated: at worst it could take another few thousand years, or it could be much more targeted, which is what artificial intelligence is trying to do.

— Are we pursuing a goal? Can we bring together all our experience and ability to go in one sole direction, accelerate the process and make a quantum leap in evolution?

Take quantum computing, for example: in 20 years everything will be more mature and maybe suddenly things will be achieved that we can’t even imagine today. For example, we know that there is a lot of chemical and electrical activity in our brain, but we can’t quite see how a concept like “freedom” or “war” is formed. It’s something abstract, almost philosophical.

With deep learning, the same artificial intelligence that we have today, we can go much further simply by applying it to more things. In terms of application, we can make it far more extensive and use it massively; on the other hand, there is artificial intelligence itself as a discipline, which can still grow a lot.

That’s exactly where we are now: in the phase of expanding artificial intelligence to many more sectors, more massively, while at the same time doing academic research into new forms of artificial intelligence. And we link this to Good AI, to ethical and responsible use. We already know that algorithms learn from data, but they never get it 100% right; accuracy may be 90%, 85% or even less.

What we need to assess, above all, is the model’s error rate and whether it is acceptable for the application at hand. For example, if we use algorithms to make medical diagnoses, or facial recognition to identify criminals, an error rate of 15% is very high, since we are talking about putting people at risk; for these two examples we would need to reduce the error rate to a minimum. Algorithms are not infallible, because they learn from us and from the data we feed them.
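As a rough illustration of that assessment, here is a sketch assuming scikit-learn is available; the labels, predictions and the 15% threshold are invented purely to mirror the example above:

```python
from sklearn.metrics import confusion_matrix

# Ground truth and model output for a binary screening task (1 = positive).
# These values are invented for illustration only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

error_rate          = (fp + fn) / len(y_true)
false_negative_rate = fn / (fn + tp)   # e.g. missed diagnoses
false_positive_rate = fp / (fp + tn)   # e.g. people wrongly flagged

print(f"error rate: {error_rate:.0%}, "
      f"FN rate: {false_negative_rate:.0%}, FP rate: {false_positive_rate:.0%}")

# The acceptable threshold depends on the application: 15% may be fine for a
# product recommendation, but not for a diagnosis or for identifying a suspect.
MAX_ACCEPTABLE_ERROR = 0.15
print("acceptable for this use case?", error_rate <= MAX_ACCEPTABLE_ERROR)
```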

— In this regard, what is the effect of the type of data we feed the machine?

Reality has biases, so algorithms learn models that are imperfect and always have some percentage of error, even if it is small. This can be seen clearly in a case that happened to Amazon, which applied artificial intelligence to screen and select candidates for hiring.

The machine was trained on the CVs from the previous 10 years, in which there were many more men than women. Because it had learned that the selected candidates were mostly men, it dismissed women’s CVs. This can become a serious problem if vulnerable groups are discriminated against, which could even amount to a discrimination offence.

Data representativeness and biases have a big impact in natural language. For example, for the machine, nurses are always women and engineers are always men, for the simple reason that these models are statistics and the matches between “doctor” and “he” are more frequent than between “doctor” and “she”. The result is that, if the model has no other context, the machine associates the word “doctor” with men more than with women. This must be taken into account when interpreting what the machine does with natural language.
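The statistical effect described here can be reproduced with nothing more than co-occurrence counts; the tiny corpus below is invented for illustration and uses only the Python standard library:

```python
from collections import Counter
from itertools import product

# A tiny, invented corpus in which "doctor" happens to co-occur with "he"
# more often than with "she" -- the kind of imbalance real text exhibits.
sentences = [
    "the doctor said he would call back",
    "he asked the doctor for the results",
    "the doctor explained what he had found",
    "she thanked the doctor for the visit",
    "the nurse said she would come later",
    "she told the nurse about the pain",
]

cooc = Counter()
for s in sentences:
    words = set(s.split())
    for prof, pron in product(["doctor", "nurse"], ["he", "she"]):
        if prof in words and pron in words:
            cooc[(prof, pron)] += 1

for prof in ["doctor", "nurse"]:
    total = cooc[(prof, "he")] + cooc[(prof, "she")]
    if total:
        print(prof, "-> he:", cooc[(prof, "he")] / total,
              "she:", cooc[(prof, "she")] / total)

# With no other context, a purely statistical model trained on this text
# would associate "doctor" with "he" and "nurse" with "she".
```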

On the other hand there are the black boxes: many algorithms are so complex that they are incomprehensible to humans, and depending on the domain it is important to understand them.

For example, in medicine, if an algorithm tells a doctor that a patient has cancer, the doctor must have a very good understanding of the diagnosis the machine has made, and of the reasons for it, before stating it and communicating it to the patient, since, as we said before, there is a margin of error. This is why it is so important to make responsible use of algorithms when we apply them. You have to ask yourself some questions beforehand: look at the representativeness of the data and at the impact of false negatives and false positives, to avoid surprises. There are decisions that can be taken explicitly, for example using a white-box algorithm rather than a black-box one. It performs a little less well, but it avoids the problems I mentioned earlier.
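A sketch of that explicit choice, assuming scikit-learn and one of its bundled datasets; the point is only that the interpretable model's rules can be read and checked, while the more complex one usually scores a little higher:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "White box": a shallow decision tree whose rules a doctor can read.
white_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# "Black box": an ensemble of hundreds of trees, accurate but hard to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("white box accuracy:", white_box.score(X_test, y_test))
print("black box accuracy:", black_box.score(X_test, y_test))

# The white-box rules can be printed and reviewed before acting on a prediction.
feature_names = list(load_breast_cancer().feature_names)
print(export_text(white_box, feature_names=feature_names))
```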

As well as using the technology for business, it can also be used for social good. It can be applied throughout the disaster management cycle; during the pandemic, for example, we used it to help find vaccines, to predict how the virus was spreading, and in a preventive manner.
 

We have so much data that, taken together, it acts as a proxy for human activity. These proxies are like Plato’s cave, where people saw only shadows and had to interpret them. Big data is like this: the data are only a shadow of reality and have to be interpreted, keeping in mind that we can get this interpretation wrong, because it is not reality itself.


Another example would be data from insurance companies, which make it possible to monitor and predict natural disasters. This information is very useful for understanding the advance of climate change, and it gives them a very interesting view of the areas at high risk from meteorological phenomena.

— Can you tell us about any project you worked on that follows this line of AI for Good?

Yes, an example of this was during the pandemic, when we used data extracted from the mobile network, in an anonymised and aggregated manner, to generate mobility matrices. This information was relevant for governments in managing the pandemic: virus spread, monitoring lockdowns, and the effect on the economy. A mobility matrix says, for example, that around 10,000 trips were made in one day between Madrid and Barcelona, 50% fewer than before Covid. The same data served to improve predictions about healthcare system saturation and other problems that arose from the pandemic.
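A sketch of how such a matrix can be assembled once the records are already anonymised and aggregated; pandas is assumed, and the column names and figures are hypothetical rather than Telefónica's actual pipeline:

```python
import pandas as pd

# Hypothetical, already anonymised and aggregated daily trip counts.
trips = pd.DataFrame({
    "origin":      ["Madrid", "Madrid", "Barcelona", "Barcelona"],
    "destination": ["Barcelona", "Valencia", "Madrid", "Valencia"],
    "trips":       [10_000, 7_500, 9_800, 4_200],
})

# Pre-Covid baseline for the same origin-destination pairs (also invented).
baseline = pd.DataFrame({
    "origin":      ["Madrid", "Madrid", "Barcelona", "Barcelona"],
    "destination": ["Barcelona", "Valencia", "Madrid", "Valencia"],
    "trips":       [20_000, 9_000, 19_500, 5_000],
})

# Origin-destination (mobility) matrix for the current day.
od_matrix = trips.pivot_table(index="origin", columns="destination",
                              values="trips", fill_value=0)
print(od_matrix)

# Change versus the pre-Covid baseline, e.g. Madrid -> Barcelona at -50%.
merged = trips.merge(baseline, on=["origin", "destination"],
                     suffixes=("_now", "_before"))
merged["change"] = merged["trips_now"] / merged["trips_before"] - 1
print(merged[["origin", "destination", "change"]])
```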

Another project we carried out aimed to measure air quality in Madrid. In many cases, when we see air quality data for a city, we see them divided by district. These data come from a static sensor that gives information about the street where it is located, but to have a comprehensive view we should take into account whether the sensor is next to a park, which will always give an optimal reading, or next to a car park, which will show a red indicator to denote that the air in that area is less clean. Roughly 30% of the pollution comes from traffic, another 30% from buildings, and the rest from industry and other sources. We focus on traffic because we can make estimates based on the same mobility data used for managing the pandemic.

We cross these data with open data on vegetation, population, climate (temperature, wind) and the functions of buildings, and with data from a mobile pollution sensor mounted on an electric car that travelled along different streets around Madrid, giving us a reading of the air quality in each street. This serves to take and adapt measures according to the context of each specific area, for example detecting and acting on poor air quality near a school.
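A sketch of how those sources can be crossed, assuming pandas and scikit-learn; the street-level features and the NO2 readings below are invented placeholders for the kinds of data described, not the project's real variables:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# One row per street segment, with invented values: traffic estimated from
# mobility data, plus open data on vegetation, population and weather.
streets = pd.DataFrame({
    "traffic_intensity":  [850, 120, 430, 990, 60, 510],
    "vegetation_index":   [0.10, 0.65, 0.30, 0.05, 0.80, 0.25],
    "population_density": [210, 80, 150, 260, 40, 180],
    "wind_speed":         [2.1, 3.4, 1.8, 1.5, 4.0, 2.6],
    # NO2 readings from the mobile sensor on the electric car (µg/m3).
    "no2":                [62, 18, 41, 75, 12, 48],
})

X = streets.drop(columns="no2")
y = streets["no2"]

# Train on the streets the car actually drove through...
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# ...then estimate air quality for a street with no sensor reading,
# for example one next to a school (values again hypothetical).
school_street = pd.DataFrame([{
    "traffic_intensity": 700, "vegetation_index": 0.15,
    "population_density": 230, "wind_speed": 2.0,
}])
print("estimated NO2 near the school:", model.predict(school_street)[0])
```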

Having the information is key to taking decisions and making changes in regard to the issues we have mentioned.

"It’s important to have the data, but most importantly, when you have them, using them for improvement."

This requires a cultural, generational and also political transformation, because the data do not always come out in line with every ideology or interest. Having the data brings with it the responsibility to take decisions in a better way.

Richard Benjamins is the author of the book A Data-Driven Company: 21 lessons for large organizations to create value from AI, newly translated into Spanish.
https://www.lideditorial.com/libros/data-driven-company

