Voice of the Month: David Dutour, an AI expert at the service of businesses.
Welcome to the “Voice of the Month” series, where we showcase the experiences and insights of leaders in the tech and commercial real estate sectors. Today, we have the pleasure of introducing David Dutour, the founder and CEO of ENNO AI. With over 25 years of experience in the tech industry, David Dutour offers us a unique perspective on AI and its evolution. In this interview, we explore his background, his expertise, and his reflections on the past and future of technology. Without further ado, let’s dive into our conversation with David Dutour, our Voice of the Month.
Could you briefly introduce yourself?
My name is David Dutour and I trained as an AI engineer. I graduated in ’96, so I’ve been able to follow all the winters and summers of artificial intelligence.
When I left school, we were in the middle of the AI winter. In 2018, I took an Executive MBA to return to my first loves: AI and entrepreneurship.
My career has been divided into three parts:
- 10 years of consulting on IT topics
- 10 to 15 years as a project director in a major pharmaceutical laboratory
- Following on from my MBA, I taught forward-looking strategy at emLyon and set up my company Enno AI at the same time.
It’s true that I have a very tech-oriented profile for a manager. In fact, it’s almost a drawback, because I love coding and development, and I have less and less time for it. So, on the side, I test and prototype advanced technologies to assess their difficulties and implementation potential. I sort of have a double life: manager by day, AI coder by night. But that’s between us (laughs).
What is your job and what does it involve?
In practical terms, my day-to-day role consists of understanding the motivations of my teams and harnessing this energy to support the company’s strategy.
I work on the principle that a leader’s key role is to understand the teams and steer them so that they enjoy doing what they’re doing in service of the desired strategic direction. Chairman, CEO, Managing Director…
These are all fine titles, but when you’re in a start-up, they’re a bit of a catch-all… We are all CEOs of our own lives. There’s an important leadership aspect: our role is to bring people together and share our vision.
There’s also a monitoring aspect: another part of my job is to anticipate the future and see how we can identify and respond to needs, quickly and directly.
It’s one of the most difficult jobs, in my opinion, because you have to be generic enough to address everyone, yet specific enough to understand the other managers and the other problems you’re dealing with. My years as a consultant have served me well (laughs).
Can you tell us more about Enno AI?
Enno AI is a consulting and technology development company built around Artificial Intelligence. The company was officially created 2 years ago, but it had been in gestation for much longer. I myself am a BPI-certified data/AI expert.
We also have the support of the Region for the “Industrie du Futur” programs, aimed at small and medium-sized companies in the Region that want to invest in Industry 4.0 technologies. We help them obtain subsidies to create prototypes and launch Industry of the Future projects.
How can Enno AI support players in the retail sector (shopping centers, retailers, etc.)?
Our aim is to provide forward-looking thinking and to make sure that it’s feasible. We have worked with a number of players: marketing companies, e-commerce companies, manufacturers…
We work in two areas:
- Diagnosis: we set up a consulting and reflection phase to map out the data/AI landscape and to understand how to create value using innovative technologies…
- Implementation: we support our customers in practice, not just in theory. In the context of industrial projects, we mainly deal with undervalued corporate data: we have set up data-harvesting agents, analytics, language analysis (Large Language Models), and automatic segmentation to identify inconsistencies in corporate data or to optimize processes (a toy sketch of this kind of segmentation is shown below).
All this, while remaining attentive to our customers’ technological autonomy. We always keep technological continuity in mind and avoid dependency on dominant international players, which means avoiding lock-in to proprietary technologies.
Imagine: you create value with a tool, but you don’t own the engine. So we always try to implement solutions that can be mastered locally.
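To make the automatic-segmentation point mentioned above a little more concrete, here is a minimal, purely illustrative sketch of how clustering can group similar records into segments and flag the ones that fit nowhere as candidate inconsistencies. The columns, values and thresholds are invented for the example; this is not Enno AI’s actual pipeline.

```python
# Toy illustration: segment purchase records and flag the ones that fit no
# segment as candidate inconsistencies to review. Data and thresholds are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Hypothetical records: (unit price, quantity, lead time in days)
records = np.array([
    [10.0, 100, 5],
    [10.2, 110, 4],
    [9.8, 95, 6],
    [10.1, 105, 5],
    [48.0, 3, 60],   # an entry that matches no segment
])

X = StandardScaler().fit_transform(records)

# DBSCAN groups similar records into segments and labels records that fit
# no segment as -1; those are the ones a human should go and check.
labels = DBSCAN(eps=0.9, min_samples=2).fit_predict(X)

for row, label in zip(records, labels):
    print(row, "-> CHECK" if label == -1 else f"-> segment {label}")
```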
What sets us apart is our crash test method. Often, when you launch a project, you need to know if it’s going to work, otherwise everyone involved will run out of steam. At Enno AI, we give ourselves 3 months to release a first prototype and aim for initial success. Then we iterate. Over a year, 4 successes is pretty good!
The advantage of this approach is that it saves us money, keeps the attention of all the teams (customer and internal) and adds value to the project for everyone.
What is “success” in AI?
Success is user satisfaction. My objective is to answer the following question: “Is it relevant for the company?”
Declarative measures of satisfaction don’t always reflect reality. The real measure is frequency of use, or better still: has the company’s process been enriched by the natural adoption of the tool?
For example, some time ago, an internal modification to one of our customers’ IT systems rendered one of our robots mute. The very next day, our customer called us, astonished that his bot hadn’t pointed out any manufacturing inconsistencies. This showed how deeply the tool had been adopted in his business processes. Rest assured, the problem was solved before the customer even called us (laughs).
In the same way, ChatGPT is a success story, because it’s a tool used and adopted daily by a large part of the population (143 million active users per month)!
How do you feel about the tech market?
I think we’re living through a fantastic period of geopolitical and sociological upheaval. Tech is accelerating this upheaval.
There are technologies like quantum computing that are going to revolutionize a lot of things – and they’re advancing very rapidly. We tend to think of progress in linear terms, but technological progress is exponential!
At the same time, for all the apparent novelty, natural language models like ChatGPT use technologies developed over 5 years ago. The first such models (LSTMs) were released over 20 years ago. The work is very old on a tech timescale. One of the algorithms used in language analysis dates back to 1950!
To give you an idea, in 1954 we saw the first theories of automatic translation (translation from Russian into English).
Today, we are beginning to reach maturity in this market and to see concrete applications. There are still a lot of mysteries about what’s possible in business, especially with generative AI: imagination is moving faster than technology… and that’s for the best!
Today, technology is changing society, and that’s normal. That’s the way it works; it makes professions evolve. The farrier was out of work when the bicycle arrived. But he could have started selling bikes, if he considered that his job was to help people get around!
I think technology is a great tool for providing a service, saving time, increasing productivity, making work easier and letting us concentrate on things that add value. I think it’s having a big impact on ways of working: before, we were paid by the hour to file data or carry out tasks; today, AI can do it for us in a matter of seconds. This goes hand in hand with the challenges of robotization: machines provide for us, and the model for rewarding work is going to change completely. Many of the social changes brought about by AI were already addressed by Isaac Asimov in the 1950s (read the Robot series, and the rest!).
What do you think about AI?
We’re hearing a lot about ‘ChatGPT’ and ‘Low Tech’, and we’re also seeing a lot of ‘AI’ start-ups developing, as well as new ‘prompt engineer’ type jobs: what do you think of all this?
People lump a lot of nonsense under the term “Low Tech”. For me, Low Tech is technology with a purpose: there’s no point in doing something complicated if a piece of string will do. I find this very elegant, because we’re going back to basics: there’s no point in launching an AI engine with massive computing power if you can get the same result with a calculator or common sense.
Sometimes you can get surprising results if you think before you act. For example, the use of mirrors for lighting, or transistors made of wood. Not everything can be solved with Low Tech, but it’s always rewarding to ask yourself about the different routes to success.
At Enno.Ai, unless we’re forced to do otherwise, we develop AI models so that they can run on “small standard servers”. I’m exasperated by the extravagant waste of computing power. I applaud advances such as llama.cpp, which have shown that AI models can be used in acceptable conditions with resources that are accessible to everyone.
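For readers who want to see what that looks like in practice, here is a minimal sketch of running a quantized open-weight model on an ordinary CPU machine with the llama-cpp-python bindings around llama.cpp. The model path, prompt and parameters are placeholders, not an Enno.Ai deliverable.

```python
# Minimal sketch: local inference on a quantized GGUF model via llama.cpp,
# using the llama-cpp-python bindings. Runs on ordinary CPU cores, no GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model.Q4_K_M.gguf",  # placeholder GGUF file
    n_ctx=2048,    # context window
    n_threads=4,   # modest "small standard server" resources
)

result = llm(
    "Q: In one sentence, what is predictive maintenance?\nA:",
    max_tokens=64,
    stop=["Q:"],
)
print(result["choices"][0]["text"].strip())
```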
When it comes to AI, it’s like any other trend: lots of startups say they’re doing AI, few actually use it. With the amount of money available, many use grants to test projects. It’s true that after 3 months of training, some consider themselves AI experts. But there’s a paradox: the less you know, the more you think you know. It’s the Dunning-Kruger overconfidence effect. Serious experts remain pragmatic: the more they learn, the more they have to discover!
Fortunately, there are some extraordinary, brilliant AI communities made up of people who are very intelligent on the subject. But as with any subject, there are also people who bring no more value than the replication of a tutorial or a ChatGPT answer. This is one of the pitfalls of AI: there’s a big difference between following a tutorial and adapting it to a different context. For example, at Enno.Ai, we specialize lightweight language models so that they are relevant to our customers’ topics (answering questions on industrial park maintenance, intelligent bots that replace FAQs on a dedicated site, etc.).
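As a hedged illustration of what “specializing” a bot to a customer’s domain can mean, here is one common pattern: retrieve the closest question in a curated domain FAQ and answer from it. The FAQ entries and the threshold are invented for the sketch, and a production system would typically combine this kind of retrieval with a lightweight language model; this is not a description of Enno.Ai’s exact method.

```python
# Illustrative FAQ bot: match the user's question to the closest curated
# question and return its answer. Entries and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How often are the ventilation units in the industrial park inspected?":
        "Ventilation units are inspected every six months by the site team.",
    "Who do I contact to report a leak in building B?":
        "Report leaks in building B to facility maintenance via the ticket portal.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_question: str) -> str:
    # Find the curated question most similar to what the user asked.
    scores = cosine_similarity(
        vectorizer.transform([user_question]), question_vectors
    )[0]
    best = scores.argmax()
    if scores[best] < 0.2:  # arbitrary "no good match" threshold
        return "I don't know; please contact the maintenance team."
    return faq[questions[best]]

print(answer("When are the ventilation units checked?"))
```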
About the prompt engineer job: I think it’s another word for “oriented writer”.
Some say that the new programming language won’t be Python, C++ or Rust, but English! (laughs)
Tomorrow, we won’t need to code as much. By posing your problem in English to ChatGPT or an equivalent, you’ll get code, still to be debugged and adapted, that gives you the beginnings of a skeleton. For me, “prompt engineer” is a misnomer; it’s just a way of saying that someone is clever at posing a problem.
There’s a craze around language models because they’re new: they only reached the general public a few months ago.
When I see that a 12-year-old can be a “prompt engineer” for his history essay, I can’t help thinking that it’s not a job. On the other hand, being a good writer and understanding how to find the best terms is a real profession, especially when it comes to image-generating AI. On that subject, there are many more subtleties to take into account (coefficients, negative prompts, etc.) to steer the images you generate.
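For the curious, here is a hedged sketch of what those knobs look like with the Hugging Face diffusers library; the model identifier, prompts and values are illustrative examples, not something discussed in the interview.

```python
# Illustrative only: steering an image model with a negative prompt and a
# guidance coefficient, using the Hugging Face diffusers library (needs a GPU).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model identifier
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="isometric illustration of a small factory floor, clean lines",
    negative_prompt="blurry, text, watermark, distorted proportions",  # what to avoid
    guidance_scale=7.5,          # the "coefficient": how strongly to follow the prompt
    num_inference_steps=30,
).images[0]

image.save("factory.png")
```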
All in all, however, I’d say that “prompt engineer” is a passing trade, because some of the AIs already translate the problem for you: “Hop! A scribble… Hop! A website prototype!”
By the time training for this job arrives, we’ll all be prompt engineers.
Can you tell us about the main stages of ENNO AI?
There weren’t any clear-cut stages; it was a progression through iterations, because you only progress when you’re confronted with others and with reality. Typically, the classic pattern is this: you have an idea that isn’t necessarily shared by everyone, and the execution scenario isn’t necessarily the right one. The entrepreneur’s objective is to match the ideas and resources of the moment with the reality on the ground.
I’ve learned from experience that it’s not enough to have a good idea, you also need a good execution plan. Good ideas: everyone has them, but few manage to implement them smartly.
As an example, take Steve Jobs and the iPod: Apple didn’t invent anything, but the execution was exceptional. They did the same with the iPhone.
Nokia used to be the leader in smartphones. Apple started from an observation: the problem with the Mac ecosystem… These were very nice machines, but with a low adoption rate, partly due to a lack of software. So in 2007, they decided to facilitate the creation of a developer community around the iPhone to provide content. For the iPhone, this became the “apps” and the whole App Store developer ecosystem, which largely contributed to its success and gave rise to the slogan Apple trademarked in 2010: “There’s an app for that!”
We often hear about it, so here’s the famous question: can AI be autonomous?
Let’s get one thing straight: Artificial Intelligence is a great calculator. After that, the question is whether I prefer to do my calculations by hand or use the result to make the right decision. AI is a great tool, but the decision must always remain human. And I’ve been doing it long enough to confirm that! In short, don’t give AI autonomy without human validation of the decision to act in the real world.
If we study the subject in greater depth, AI has difficulty making coherent decisions, because it lacks data: it doesn’t take weak signals into account, whereas the human brain has an extraordinary ability to identify the unintelligible, what we don’t perceive concretely and consciously. For the time being, an AI can’t yet blend elements the way a human can, even with non-deterministic models.
For example, ChatGPT has read the equivalent of a row of books stretching from the Earth to the Moon, yet it misses the majority of weak signals. On the other hand, it will be better than the rest of us at completing a sentence based on what it has already read.
When it comes to power consumption and carbon impact, that’s another story! A human brain at full intensity consumes less energy than a light bulb!
Maintaining ChatGPT costs $700,000/day, whereas a human only needs 3 meals: morning, noon and evening.
For me, AI will replace repetitive tasks. I myself use ChatGPT or other home-made programs to support my thinking and gain productivity. We all stand to gain from using these technologies, even students. It’s going to totally change teaching. Now that we can access information quickly and even generate it automatically, the main role of teachers is to develop the ability to analyze and think critically. The magic of knowledge is that the intelligent combination of knowledge creates new knowledge. Creation ex nihilo is still a domain reserved for humans.
How do you feel about data governance?
For me, the subject of data governance is particularly complex. I’m going to talk about data governance specifically as it applies to AI. There are in fact two subjects: training and inference (that is, the model’s use).
The techniques used require a staggering amount of training data, and that’s a big responsibility: without careful consideration, the content that is available and selected can be biased. Sharing training data with the world is important, even essential! Today, there is little information available, and what exists lacks fairness and balance. For example, female content is under-represented, as is the female presence that would bring a complementary vision to the “mustachioed” men who are largely in the majority in the AI field. We also need many more female developers, because I find it abnormal that AIs that are supposed to represent humanity are developed mainly by men. This generates a lot of bias.
Typically, even OpenAI, which defined itself as open, is increasingly closing its models. At the same time, fortunately, many initiatives are opening up theirs. We need more open literature on the subject to enable progress. After that, as with all technologies, the negative side comes from the use we decide to make of them. This was the case for nuclear power, as it was for Facebook. In the same way, dynamite was originally designed to move mountains, but we’ve turned it into bombs. If you think about it, a car or a fork is only dangerous in the wrong hands; it all comes down to how you use them. In my opinion, this is why states should not dictate how things are done, as this limits innovation, but should define the legal framework for use.
Any final advice for players looking to integrate AI?
Surround yourself with reasonable people. Between the hype and all the promises that can’t be kept, you need to keep a cool head. Some things can’t be done, others can. That’s why at Enno AI, we believe that if you want to start an AI project, you have to do it in 3 months. Otherwise, it’s likely to cost a lot of money, time and energy.
This initial 3-month test is obviously not the whole experiment, but a first step. A 3-month prototype makes things concrete and lets you see results. Then you iterate.
Here’s an example: it’s easy enough to create a generic chatbot; ChatGPT is a generic chatbot. On the other hand, creating a chatbot for a specialized field requires more resources and qualified data. It can be done in 3 months, but you need to have the relevant data, and you need to surround yourself with the right people, internally and externally. You can’t let yourself be seduced by fashion; you have to challenge, be reasonable and go step by step. A success every 3 months, that’s 4 successes a year!
MINIBIO
David Dutour is a lecturer in forward-looking strategy at emLyon and a BPI-recognized Data/AI Expert Engineer, helping companies make the most of their data. His career as an IT Consultant and Project Director has given him a deep understanding of the realities of technology projects.
He founded Enno.Ai to give very small businesses and small & medium-sized companies easier access to Artificial Intelligence solutions, and to promote technological independence from the dominant international players.
Enno.Ai is a member of French Tech and of the ENE Industrie du Futur program, and specializes in the design and implementation of Data/AI projects.