
The deployment of artificial intelligence: Deeploy makes AI explainable.

AI is developing fast and has enormous potential. But how exactly can AI be applied? Startup Deeploy does not apply AI in its own business but ensures its proper deployment for businesses and government. As with all aspects of life: if you can no longer explain what you are doing, eventually it will go wrong.

As the previous interview in this series showed, developments in AI are moving at lightning speed, including the arrival of quantum computing and a new generation of algorithms. That makes it all the more important to pay attention to explainability: clarifying how AI models actually work and how they come to their decisions.

Bastiaan van de Rakt, one of the change-makers of the AI landscape and co-founder of Deeploy, is a true technologist. He grew up with technology, first as a student at TU Delft, and went on to gain over 20 years of experience as a consultant, CEO, and CTO at companies focusing on machine learning and data science. Bastiaan started his entrepreneurial career aiming to truly make a difference and has founded several companies, of which Deeploy and Enjins are the latest.

Deeploy makes AI models explainable and reproducible. What exactly does this mean? 

"Over the past 20 years, the world of AI deployment has made two things very clear to me. One: many people who are creating these models, find it difficult to actually get them in production. Meaning that the decision made by an AI model - running on a website, stock system, or hardware device (edge AI) - really is implemented automatically. This process of operationalizing machine learning is also called MLOps. Although the models themselves are often made by data scientists, this step also involves other skills (ML engineering) that are closer to software engineering. And two: whenever a model is running, there’s often a lack of maintenance and evaluation if the model keeps doing what it’s supposed to. It’s often just assumed that the model is working and will keep doing so. With increasingly complex models, even the maker is less and less aware of how the model works (we call this Explainability in AI, XAI for short).”

And how can companies discover this (in time)?

“The aspects mentioned above only come to light when someone asks for an explanation. For example: ‘Dear bank or healthcare institution: could you explain to me what made you decide to decline my loan application two years ago?’ To answer these kinds of questions, three things need to be in order. One: information about the model, i.e. the correct version number, the associated data, the maker of the model, and so on. Two: the ability to reproduce the decisions the model has made in the past. And three: an explanation of all the variables of the model: which variables led to the decision that was made? You also need to be able to explain the value of the decision to the end user.”
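As a sketch of what these three requirements could look like in practice (all field names and values are hypothetical; this is not Deeploy's data model), each automated decision could be logged together with the model metadata, the exact inputs, and the per-variable attributions, so it can be reproduced and explained years later:

```python
# One logged decision, carrying everything needed to answer "why was my
# application declined?" years later. All names and values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str         # which model made the decision
    model_version: str      # exact version, so the same model can be reloaded
    training_data_ref: str  # pointer to the data the model was trained on
    model_owner: str        # maker of the model
    inputs: dict            # the exact inputs, making the decision reproducible
    prediction: str         # the decision itself
    attributions: dict      # per-variable contribution to the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    model_name="loan_approval",
    model_version="1.4.2",
    training_data_ref="datasets/loans-2021-03.parquet",
    model_owner="data-science-team",
    inputs={"income": 42000, "loan_amount": 15000},
    prediction="declined",
    attributions={"income": -0.31, "loan_amount": -0.12},
)
print(record)
```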

Deeploy takes care of these three aspects. What does this mean for your customers?  

“Deeploy provides insight into machine learning (ML) implementations by integrating explainable AI at the center of ML operations (MLOps). With this, we give people the control to understand what models do and the ability to correct automated decisions. Without products like ours, it is practically impossible for companies and the government to keep track of their AI models and maintain insight and control. We offer them explainable AI by design.”

So, if I’m working at a company or the government, what would be the results of using your solution? 

“We have created a platform on which every data scientist can very easily (via a simple user interface) deploy their AI model and keep it running in such a way that everything is explainable and reproducible at all times. These explainable models give insight into how the model works and how decisions are being made. ‘Reproducible’ means that models that are working well can be used again, without unwanted errors or adjustments.”

In Europe – following the GDPR – more and more regulations on the application of AI are being developed. What are your expectations?

“Lately, there has been discussion about how far AI regulations will go and which (government) body will monitor their enforcement. These regulations will always contain components requiring a certain degree of explainability and reproducibility. The coming years will bring more clarity on what exactly they will look like. At our company, we make sure we stay ahead of the expected regulations and developments, and we help our customers to be prepared (as far as possible).”

Is every model fully explainable? 

“No, sometimes the explainer is harder to build than the model. But there is no need to be able to explain everything at all times. Even a partial explanation can create the transparency that is necessary for the adoption of and trust in the model, and it allows citizens to discuss or object to a decision made by the AI model of a certain organization. The degree of explanation depends on what the model is used for; it must be in proportion. Logically, explaining a product recommendation on a clothing website down to two decimal places matters less than being able to explain the decisions of an AI model supporting a radiologist ahead of your hip surgery.”

Deeploy is already active in the market. In what way do you work with organizations?

“We have different types of partnerships. We work with the major platform vendors (Microsoft Azure, AWS, GCP, etc.) to add to their tech stack, as well as with AI/ML consultancy companies for the implementation of our software. Sometimes we do this as a partner, and sometimes on assignment. You can use Deeploy for free for 14 days, directly via the website, but also via the AWS or Azure marketplace.”

What sectors should be interested in your solution? 

“Almost all of them, but right now we are focusing on fintech, mobility, healthcare, and public services. These are sectors where AI already plays a major role, which makes explainability of models crucial for adoption by the organizations involved and by consumers.”

What resources are most important for building your platform?

“First of all, our staff: software engineers with an interest in AI and consultants aiming to make AI more explainable are our most important resources. (Open-source) tools matter as well. Start-ups are interesting customers for us because they are the best testers of our latest functionalities. Our philosophy is to offer a ‘best-of-breed’ environment, so we integrate with other ML tools and open-source standards to be part of the ecosystem.”

Deeploy has just completed a first investment round of €1,000,000: what can we expect from your company in the future? 

“We are mainly targeting companies in Europe that use a variety of models, but we will also make a core version available to all AI developers. In addition, we want to be a frontrunner in the AI community in developing and extending explainer libraries (such as SHAP, LIME, and Anchor), and in building custom explainers when necessary.”
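For readers unfamiliar with the explainer libraries mentioned, here is a minimal sketch using SHAP (the model and data are made up for illustration): a tree-ensemble explainer breaks a single prediction down into per-feature contributions, which is exactly the kind of per-variable explanation discussed earlier in this interview.

```python
# Explaining a single prediction with SHAP (synthetic data, toy model).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain the first prediction

# Each SHAP value is one feature's contribution to this particular decision.
for name, value in zip(["feature_0", "feature_1", "feature_2"], shap_values[0]):
    print(f"{name}: {value:+.3f}")
```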

Do you have a final AI quote for the readers?

“Responsible AI starts with Explainable AI.”

Translated from the original interview published at: https://www.emerce.nl/interviews/ai-in-de-praktijk-deeploy-maakt-ai-uitlegbaar
