‘Artificial Intelligence has to be more human’
Artificial Intelligence (AI) can break through if it consumes less data and can make connections the way humans do. That’s what Mieke De Ketelaere says in the weekly magazine Trends. De Ketelaere has been working with artificial intelligence for more than twenty years.
Since May, she has been drawing up the strategy for artificial intelligence at IMEC. The research center is an authority in the research and development of computer chips and semiconductors, but in recent years it has also invested heavily in software and artificial intelligence.
AI has the potential to change the world radically. But to do so, we must raise the bar. “Most AI applications require a huge amount of data and computing power. That’s not feasible in the long term. Now we train software to recognize an object by showing it thousands of examples.”
“That consumes as much energy as flying from New York to Los Angeles. Approximately 2 percent of global CO2 emissions come from AI applications, and with the further breakthrough of the Internet of Things, that will rise to 20 percent if we continue to use the same techniques,” says De Ketelaere.
De Ketelaere advocates processing data as close to the source as possible, in contrast to the current cloud-based trend. In that model, information is sent to gigantic data centers, analyzed there, and the result is sent back to the device at the end of the chain.
“Not only does this consume energy, with all these data transfers we risk violating our privacy. At a crucial moment, communication can also be too slow or even lost, while autonomous cars and other advanced applications have to decide in a fraction of a second,” says De Ketelaere.
“Our answer to all these problems is described as TinyAI: everything happens as close as possible to the end device. That means smaller and more powerful hardware, with more economical and smarter algorithms that can handle a smaller amount of data.”
The gap between believers and non-believers slows down the research and use of AI. To remove that distrust, De Ketelaere argues for transparency: the algorithms should not be a black box.
Systems with artificial intelligence can do smart things without pre-programmed rules, but to get value out of them, you have to know and be able to follow the process.
She compares AI to cooking, but with data instead of ingredients: “Instead of a recipe, you have an algorithm. Variables are weighted like ingredients, and together they produce the result: a model.”
“We don’t have to explain every last detail to the general public, but we do have to give enough information about the method. We all use a microwave without knowing all the technical details,” says De Ketelaere.
Trust also requires that AI is used properly. To this end, De Ketelaere argues for external control: “I am an absolute believer in artificial intelligence, but we need independent bodies that check which data and algorithms are used and how decisions are automated.”
“At the moment, there are still too few technical standards and too few systems are tested against each other before we send them into the real world.”
Silicon Valley is convinced that autonomous car development is so fast because the government is not involved. But at the same time, according to De Ketelaere, these companies are pleading with the government not to allow the technology on public roads yet, because they know its limits.
Today’s limited AI performs well within a certain niche, but is less useful in complex situations. “A Tesla can drive autonomously very well if it only has to consider itself. But the real world is much more complex.”
“One of the recent accidents with autonomous cars happened because the car didn’t brake for a pedestrian who stepped off the curb. The system hadn’t been taught that. People and systems have to learn to cooperate and communicate. That is the basic principle of strong AI. The current form of artificial intelligence only has the level of a two-year-old,” says De Ketelaere.
Humans’ great advantage, in addition to the energy efficiency and speed of their brains, is their ability to interact and reason. “When cars come to an intersection at the same time, we can communicate with gestures or flashing lights. Computers are not yet able to interpret those intentions. They are good at pattern recognition, but they have difficulty anticipating someone else’s future actions,” says De Ketelaere.
Today, AI relies on computing power and large amounts of data. This used to be less the case: earlier AI relied on better knowledge of the data and on better-defined relationships within it. “So we have to look for combinations of the new data-driven techniques with knowledge-driven techniques. That fits in perfectly with our TinyAI strategy. With these hybrid systems, we can work more efficiently and make automated decisions, because the information is readable for computers,” says De Ketelaere.
According to her, the problem is that this necessary knowledge is often trapped in people’s heads or scattered across different systems within a company. That’s why companies have to invest heavily in applications that can make good connections.
According to De Ketelaere, passion for technology sometimes makes engineers blind to ethical and social complications: “I believe that ethics should become a compulsory subject in engineering education. We increasingly have a direct impact on people’s lives, such as whether or not they get a loan or a job. We should take our cue from doctors and their Hippocratic oath.”
Architects are liable for construction errors. De Ketelaere doesn’t think it’s a good idea to do the same for programmers when they create incorrect algorithms: “As an engineer, programmer, or data scientist, you don’t really know how and where your creations are used. You can have the best system in mind, but in the end, financial and other aspects can be given priority. That’s why you can’t hold them solely responsible.”