- You cannot have AI without an information architecture; any successful AI deployment in the enterprise needs to be underpinned by an AI information architecture that takes the entire AI flow into account
- Decision optimization is an essential component of AI; it complements machine learning and predictive analytics to facilitate optimal, business-relevant actions
- The operationalization of AI (getting models created by data scientists into production) and the infusion of AI-based analytical insight into consuming applications require platform interoperability and vendor transparency
- In recent years, AI has advanced from research to practice; it is now applied by a large number of companies and organizations to a stunning breadth of use cases, leveraging not only machine and deep learning, but also addressing AI governance and managing the lifecycle of AI models
- AI has a significant impact on existing domains, such as DevOps, which requires an AI DevOps model, and change management, which requires an AI change management framework
The book Deploying AI in the Enterprise by Eberhard Hechler, Martin Oberhofer, and Thomas Schaeck gives insight into the current state of AI related to themes like change management, DevOps, risk management, blockchain, and information governance. It discusses the possibilities, limitations, and challenges of AI and provides cases that show how AI is being applied.
InfoQ readers can download an extract of Deploying AI in the Enterprise.
InfoQ interviewed Eberhard Hechler, Martin Oberhofer, and Thomas Schaeck about deploying AI in the enterprise.
InfoQ: What made you decide to write this book?
Eberhard Hechler: Today’s enterprises clearly see the business need to embrace AI, but are struggling with how their journey to AI should be implemented, and what it means to deploy AI in the enterprise. What are meaningful entry points for a journey to AI? How does AI relate to themes that enterprises are already very familiar with, such as change management, DevOps, risk management, and information governance? What is the impact of AI on their existing information architecture and master data management implementations? Are there AI limitations that need to be taken into account? In working with many of IBM’s clients in recent years, we have gained confidence in the significance of addressing these challenges and questions holistically, which has led us to write this book.
InfoQ: For whom is this book intended?
Hechler: This book is for a reader who is looking for guidance and recommendations on how to overcome AI solution deployment and operationalization challenges in an enterprise, and who is furthermore interested in getting a comprehensive overview of how AI impacts other areas, such as design thinking, information architecture, DevOps, blockchain, and quantum computing, to name a few. The anticipated reader is looking for examples of how to leverage data to derive actionable insight and predictions, and wants to understand the current risks and limitations of AI and what these mean in an industry-relevant context. We are aiming at IT and business leaders, IT professionals, data scientists and software architects, and readers who have a general interest in gaining a holistic understanding of AI.
InfoQ: What is the current state of artificial intelligence? What has been accomplished?
Martin Oberhofer: AI is infused in many applications today, such as medical diagnosis, speech recognition in technologies like Alexa, or predicting the risk that a loan will default, to name just a few. This means that many people, whether they know it or not, interact with AI on a regular basis. And the trend to infuse AI into even more aspects of our daily lives is accelerating. The ubiquitous access and ease of use of data science tools like Watson Studio allow more and more people to easily develop and deploy AI capabilities wherever they are required. I consider this a huge accomplishment compared to the state of AI 10 or 15 years ago, when it was not as accessible and widely used as it is today.
However, there are still some outstanding challenges in the use of AI, namely around AI governance and using AI in an ethical fashion. In February 2020, the EU published a whitepaper indicating that there is a need to regulate and manage the use of AI in an ethical and meaningful way (see: On Artificial Intelligence – A European approach to excellence and trust). The complementary technology to manage the lifecycle of AI models, detect bias, and provide explainability is also in an early stage of maturity, and in an early stage of adoption in enterprises. In summary, there are great accomplishments and huge value gains with AI today, but there are still some outstanding challenges we need to address in the next few years.
InfoQ: Why do we need an information architecture for AI? What purpose does it serve?
Hechler: As organizations develop their journey to AI and increasingly use AI with machine learning and deep learning, the need to adjust and improve their existing information architecture becomes an obvious task. The lack of an enterprise-wide information architecture results in a fragmented AI infrastructure that makes enterprise-scale AI projects and use cases challenging and risky undertakings. Business users, data scientists, data engineers, and IT operations specialists have to collaborate effectively in order to exchange and govern new types of AI artifacts. Transforming data for new AI consumption patterns and new types of applications, and addressing AI model deployment and operationalization challenges, requires an information architecture for AI. The purpose of an AI information architecture is to facilitate collaboration across these various personas so they can develop and manage new AI artifacts.
InfoQ: What does an information architecture for AI look like?
Hechler: An information architecture in the context of AI needs to address several areas. It should incorporate machine learning and deep learning methods, and needs to address cataloging and governance of all AI artifacts. It should underpin the need to deploy and operationalize AI models, and facilitate the exchange of AI artifacts across IT platforms and business systems. Platform interoperability and vendor transparency represent key areas to be addressed by an AI information architecture. Furthermore, it should ensure model accuracy and precision over their entire lifecycle, and support the infusion of insights from AI models into business systems and also into traditional reporting tools. The book describes the AI information architecture in terms of six layers: (1) data sources, (2) source data access layer, (3) data preparation and quality layer, (4) analytics and AI layer, (5) deployment and operationalization layer, and (6) an information governance and information catalog, which provides services to all other five layers. In the book, we detail these six layers in terms of required capabilities.
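The six layers above can be sketched as a simple ordered structure. This is only an illustrative representation, not code from the book; the capability examples listed for each layer are assumptions for illustration.

```python
# The six layers of an AI information architecture, as described in the book.
# Layer names follow the book's enumeration; the example capabilities per
# layer are illustrative assumptions.
AI_INFORMATION_ARCHITECTURE = [
    ("data sources", ["databases", "streams", "files"]),
    ("source data access", ["connectors", "data virtualization"]),
    ("data preparation and quality", ["cleansing", "transformation"]),
    ("analytics and AI", ["ML training", "DL training", "decision optimization"]),
    ("deployment and operationalization", ["model serving", "model monitoring"]),
    ("information governance and catalog", ["lineage", "policies", "AI artifact catalog"]),
]

def cross_cutting_layer(architecture):
    """The governance/catalog layer provides services to all other layers."""
    return architecture[-1][0]
```

The governance and catalog layer is kept last to reflect its cross-cutting role: it serves the other five layers rather than sitting in the data flow.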
InfoQ: How can decision optimization help to progress from predictions to better actions?
Thomas Schaeck: When using ML to make predictions, the result can in many cases be a very large number of predictions. For example, a bank may use ML models to predict the product that each customer will likely be most interested in. If it has 1 million customers, it now has 1 million predictions. Running a large marketing campaign with brute force would be suboptimal in terms of cost versus benefit and risk. With decision optimization (DO), it is possible to determine the optimal actions based on input data and predictions. In this case, given constraints such as the marketing budget, the risks involved, and when the last offer was made to each customer, decision optimization can generate the optimal set of actions: to which subset of customers should which offers be made, to achieve the optimal profit without exceeding the marketing budget and acceptable risk? This is just one example; generally, decision optimization can be applied to a large breadth of use cases: by defining constraints and optimization targets based on input data and ML predictions, DO can compute the optimal actions.
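The campaign example above can be sketched as a tiny constrained-selection problem. The customer names, profits, and costs below are invented for illustration, and the brute-force search is only a stand-in for a real decision-optimization engine (such as the CPLEX-based DO solvers), which handles millions of candidates with proper optimization techniques rather than subset enumeration.

```python
from itertools import combinations

# Hypothetical campaign candidates: (customer, predicted_profit, campaign_cost).
# In practice the profits would come from ML model predictions.
CANDIDATES = [
    ("alice", 120.0, 40.0),
    ("bob",    80.0, 35.0),
    ("carol",  60.0, 30.0),
    ("dave",   50.0, 45.0),
]

def best_campaign(candidates, budget):
    """Pick the subset of customers that maximizes total expected profit
    while keeping total campaign cost within the marketing budget."""
    best, best_profit = (), 0.0
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            cost = sum(c[2] for c in subset)
            profit = sum(c[1] for c in subset)
            if cost <= budget and profit > best_profit:
                best, best_profit = subset, profit
    return [c[0] for c in best], best_profit
```

With a budget of 80, the optimizer targets Alice and Bob (cost 75, profit 200) rather than spending on all four customers, which the budget constraint rules out.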
InfoQ: What are the challenges of AI operationalization and how can we deal with them?
Schaeck: The main challenges in AI operationalization are getting models that are created by data scientists into production and, once they are in production, detecting when models start having issues and need to be revised by data scientists. It is essential to have a process in place that allows data scientists to build models on a “development system”, a CI/CD-based approach to propagating their work to test and then production environments, and monitoring of models for quality metrics, fairness, and drift, in order to alert business stakeholders when models have issues that warrant revision by data scientists.
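A minimal sketch of the monitoring step described above, assuming we compare the distribution of a model's input or score values at training time against what it sees in production. The population stability index (PSI) used here is one common drift metric; the 0.2 alert threshold is a widely used rule of thumb, not a recommendation from the book.

```python
import math

def psi(expected_counts, actual_counts):
    """Population stability index between a baseline (training-time)
    histogram and a production histogram over the same bins."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)  # floor avoids log(0) on empty bins
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

def needs_review(expected_counts, actual_counts, threshold=0.2):
    """Flag the model for data-scientist review when drift exceeds threshold."""
    return psi(expected_counts, actual_counts) > threshold
```

In a CI/CD setup, a check like this would run on a schedule against production scoring logs, raising an alert (rather than returning a boolean) so that stakeholders know a revised model needs to be propagated through test and into production again.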
InfoQ: How can we apply DevOps to factor in AI artifacts and take advantage of AI and data science methods?
Oberhofer: DevOps is the state-of-the-art methodology for developing and delivering software today. Modern DevOps toolchains use tools like Git, Jenkins, PagerDuty, etc. State-of-the-art data science tools like Watson Studio allow data scientists to benefit from the DevOps toolchain through out-of-the-box integration with these tools. For example, Watson Studio has a Git integration to manage versions of machine learning models. Through seamless integration of data science tools with the DevOps toolchain, data scientists benefit from source control and versioning, build and deploy automation, and efficient operation of the deployed AI models in production.
InfoQ: What can be done to leverage AI and ML for optimizing change management?
Oberhofer: Change management is a broad term; it can be divided into changing business processes for improved outcomes and optimizing the process of changing the applications that execute those business processes. Optimizing business processes using AI techniques is already commonplace today, for example by using what-if analysis models to look at the impact of various parameters on the outcome of a business process before committing to a change. For software delivered as a service in the public cloud, practitioners can use DevOps with feature-toggle techniques to explore the impact of a new AI model on a subset of the software's subscribers. Only if the results in practice with that subset of users confirm the predicted outcomes are such changes activated for all subscribers; otherwise, they might be refined or pulled out. Optimizing the software change process itself using AI is also commonplace today. Using AI to predict potential error situations in the software, and making adjustments to critical parameters in the deployment infrastructure to avoid negative impacts for end users, is used in many places today. Auto-healing, auto-tuning, auto-correction, all based on AI models, are infused in many software solutions today, eliminating the need for human-driven changes to software parameters, or even code changes, to address problems.
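The feature-toggle rollout described above can be sketched with deterministic hash-based bucketing, so that each subscriber consistently sees either the current or the new AI model while the rollout percentage is gradually increased. The toggle name, model names, and bucketing scheme below are assumptions for illustration, not a specific product's mechanism.

```python
import hashlib

def in_rollout(user_id, toggle_name, rollout_percent):
    """Deterministically assign a user to the rollout cohort: the same user
    always lands in the same bucket for a given toggle, so their experience
    is stable as long as the rollout percentage does not decrease."""
    digest = hashlib.sha256(f"{toggle_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_percent

def pick_model(user_id, rollout_percent=10):
    """Route a user to the new AI model only if they are in the cohort."""
    if in_rollout(user_id, "ai-model-v2", rollout_percent):
        return "new_model"
    return "current_model"
```

If monitoring confirms the predicted outcomes for the cohort, the percentage is raised to 100 to activate the change for all subscribers; if not, the toggle is set back to 0 and the model is refined.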
InfoQ: What will the future bring for AI?
Schaeck: Looking at this question from an enterprise point of view, I think there are two very different angles to it:
- AI will help existing enterprises evolve towards becoming faster, much more intelligent, and more efficient by automating and accelerating business decisions based on machine-learning predictions and decision optimization, deriving fast yet optimal decisions and resulting actions. This will quickly make an immense difference, and enterprises that drive forward aggressively and reap the benefits in speed, efficiency, and optimization can gain a meaningful competitive advantage.
- AI will enable entirely new businesses that would not be possible at all without AI. An example from the consumer space is new social media apps that go viral, growing from a few users to hundreds of millions in a short period of time, because they are able to observe user behavior, predict what individual users will like, present that to them, and learn more and more about their users in the process, ever improving the match between content and preferences. We can expect many new businesses to emerge in the future, based on new business models enabled by AI.
About the Book Authors
Eberhard Hechler is an executive architect at the IBM Germany R&D Lab. He has worked for IBM in Germany, the US, and Singapore. Hechler has studied in Germany and France and holds a master’s degree (Dipl.-Math.) in pure mathematics and a bachelor’s degree (Dipl.-Ing. (FH)) in electrical engineering. He has co-authored the following books: Enterprise MDM, The Art of Enterprise Information Architecture, and Beyond Big Data.
Martin Oberhofer is an IBM Distinguished Engineer and executive architect. He is a technologist and engineering leader with deep expertise in master data management, data governance, data integration, metadata and reference data management, artificial intelligence, and machine learning. He is a certified IBM Master Inventor with over 100 granted patents and numerous publications, including four books.
Thomas Schaeck is an IBM Distinguished Engineer at IBM Data and AI, leading Watson Studio on IBM Cloud. On a one-year assignment in the USA in 2013 – 2014, Schaeck led transformation of architecture, technical strategy, and DevOps processes for IBM OpenPages Governance Risk Compliance. On a two-year assignment in the USA in 2004 – 2006, Schaeck led collaboration software architecture, development, and performance for messaging and web conferencing.