April DLT Monthly Meeting Summary - "AI for Strategic Impact"
We had a rich conversation on Friday, April 11th, to discuss elevating AI projects for strategic impact. We invited the co-facilitator of the new Samudra AI Innovation Exchange (AIX) to share some of her initial thinking to get us started, and then heard from DLT members on their approaches:
1) Charlene Li - Charlene is an author and AI expert in her own right. She provided a deck and some key thoughts, including:
Use cases are not a strategy: “Think big, start small, and scale fast.”
There is a need for enterprise-level exposure to AI, including secure, simple, and safe AI access for all employees.
Departmental tools for efficiency and productivity are important, as well as strategic bets for long-term impact.
She shared a matrix for prioritizing AI applications that focuses on high impact and speed, and suggested a rolling six-quarter plan for ongoing, rapid strategic alignment.
2) Member Sharing:
i. Financial Services member
Education - Educating executives and employees about AI and the need for creating a competitive advantage with it is important.
Spotify CEO memo - One member mentioned the AI imperative that the Spotify CEO is driving.
Scaling - There are immediate challenges of scaling governance, increasing employee AI literacy, and focusing on operational efficiency.
Governance - Enhance existing governance channels and create an AI inventory management system.
Data - Having a data roadmap is foundational.
Use cases - One member commented that in some cases, like patient care, there is no room for error, so probabilistic outcomes will be a challenge. In other cases, it will be acceptable to work from less-than-perfect data, and the AI solution should be capable of self-repair when data conflicts or gaps are identified.
The next member shared the following in confidence (please do not distribute):
ii. Research/Non-Profit organization member
Middleware - With AI sprawl, there is a need for a middleware layer to manage multiple AI platforms (the market is still volatile, and many promising options exist).
APIs - Another participant commented that using APIs and middleware to ensure flexibility and orderly governance is one way to go.
Explainability - Another member indicated that having a middleware layer can help manage autonomous AI models and ensure explainability for regulated industries.
iii. Health insurance member:
Reimagining processes - Start small by identifying problems and reimagining processes to incorporate AI.
Selective governance and champions - There is a need for selective governance and the role of business champions in spreading AI literacy.
Enablement - One member suggested replacing the word "governance" with "enablement."
Zero back-office goal - AI has been successful in document processing, supporting the goal of achieving a zero back-office environment.
iv. Other input:
Transparency and interoperability - There is a need for transparency and interoperability in one’s AI architecture.
Change management - A misstep was not focusing enough on change management when implementing several AI apps. A comprehensive program addressing staff at all levels (C-level to individual contributor) is required.
Incubator - Creating an AI innovation incubator proved successful, with AI-based solutions predicting health events and improving image completion in a healthcare environment.
Experimentation - We have supported experimentation and educational journeys with light governance.
Ecosystem - Having an ecosystem of solution partners helps identify the most effective AI solutions.
Ongoing training - Providing ongoing training and education to ensure AI literacy across the organization has been important.
User groups - User groups within the AI Center of Excellence can help cross-educate and prevent unnecessary AI sprawl.
Reimagine the customer journey - AI can be used to improve the customer experience, and it can address both horizontal and vertical opportunities.
Human in the loop - Keeping a human in the loop can slow down execution, so there need to be risk-based deployments of AI/agent solutions in which humans are not checking every AI result but are instead fixing hallucinations when they are identified.
We appreciated all the rich discussion and sharing by the group! In a rapidly changing environment, there is so much we can learn from each other.