
Finding value with AI: Your roadmap to success

Juliana Schoettler

January 23, 2026

With so many options for AI tools on the market, the challenge is not so much what to use or even build, but how an AI solution will provide meaningful value. According to Gartner, only one in five AI initiatives is likely to deliver clear ROI.

Where AI projects fail

With the rise of AI, many companies are issuing directives to integrate AI into their processes or tech stacks. Those who become responsible for AI implementation or integration effectively become AI product managers for that solution. Often, AI solutions aren’t replacing an existing system or workflow. They are either an add-on feature to an application, like Salesforce’s Agentforce, or an entirely new paradigm for interacting with existing tools, like Microsoft Copilot.

Once you’ve understood the needs of the business, the next logical step is to purchase or develop and deploy the AI system. However, without proper interest or enablement, business users are unlikely to organically adopt a product, even if they “know” that it’s good for them.

Stakeholder buy-in

This is where the tension between AI’s potential and human nature can create friction. Though it’s based on decades of machine learning research, generative AI’s prevalence is a new paradigm for most. Asking someone who is likely already busy and overscheduled to build a new habit outside of their existing workflows is challenging, even if they know the new habit is good for them.

Keep in mind that fewer than 10% of people keep their New Year’s resolutions by the end of the year.

This means that every interaction with a new AI solution counts to get adoption across your target user base. First impressions are everything when it comes to getting immediate buy-in, and meaningful outcomes are essential to keep users coming back.

The AI value conversation

But what is a meaningful outcome? This is where AI value conversations begin to break down, as there often isn’t a one-size-fits-all solution for every organization. Generative AI tools like ChatGPT and Gemini can make it faster and easier to draft emails or research topics, but how much value they provide depends on a person’s immediate and long-term goals. While those tools function as catch-all solutions that can provide a great deal of general AI value, they don’t serve a clear, focused use case.

When I managed our internal AI adoption initiative, the question of AI ROI came up frequently with our leaders and executives. As the primary resource for enabling our workforce and for developing or recommending third-party internal AI tools, I was asked one critical question again and again: how do we measure success?

My recommendation was to look at success with three key phases in mind:  

  • Development
  • Enablement
  • Adoption

The first step is development. During this phase, you identify the job to be done with AI. This is also where you want to decide whether to build or buy. Once the tool is deployed, you have to enable your workforce on how and why to use it. Ideally, the why is defined well enough that it provides obvious value and organically flows into the third phase, adoption. However, as mentioned, if the tool creates an additional step for users, they may not naturally engage, and so further enablement, through additional training or incentive programs, may be necessary. More importantly, the adoption phase is where you can gather feedback to improve the user experience, feeding back into development. Each phase then serves as part of an iterative cycle that fuels the next.

At the very beginning of our organization’s AI journey, simple usage of our internal GPT interface provided sufficient value, as we were trying to encourage general usage of AI. Like many organizations, development came out of an immediate need to provide employees with a secure, internal ChatGPT interface. Enablement was a coordinated effort of establishing channels to share new tools, as well as the implementation of an executive-appointed AI Council. This group of cross-functional leaders was selected to ensure every part of the company was represented in their specific AI needs, and that employees were fully engaged through trainings and departmental reinforcement.

Once we were able to grow adoption, employees could provide feedback and ideas that fueled development and ignited this creative iterative process.

Unlocking AI success

Two factors were critical to the initiative’s success:

  • Executive sponsorship
  • User feedback

Executive sponsorship put a mandate around the AI initiative. Even those who were reluctant to adopt AI started to engage. Often I’ve found that once people start to use AI, they begin to recognize its potential and their imagination unlocks. As people see their ideas realized, they start to take ownership and feel excitement, fueling their use of the tools.

That being said, it’s important to evaluate new requests carefully. While it’s good to foster that excitement and creativity to propel engagement, adoption and, ultimately, success, building a one-off tool for a single business user may not provide the corresponding return on investment.

This is why your very first step when evaluating whether to pursue development should focus on what the return is for any project from a profit and loss perspective, or if there are simpler ways to accomplish the same outcome.

Beyond adoption

More than likely, AI value for your organization or team will extend beyond simple usage of an AI tool. Whether you build or buy, you’ll want to ensure that you can map a dollar value to the value the tool provides, whether that’s hours saved or revenue generated.
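As a rough illustration of that mapping, a back-of-the-envelope calculation is often enough to start the conversation. The figures below (hours saved, hourly rate, annual tool cost) are hypothetical assumptions, not benchmarks:

```python
# Hypothetical back-of-the-envelope ROI for an AI tool.
# All inputs are illustrative assumptions; substitute your own measurements.

def ai_roi(hours_saved_per_user: float, users: int,
           hourly_rate: float, annual_cost: float) -> tuple[float, float]:
    """Return (dollar value of time saved, ROI as a ratio of annual cost)."""
    value = hours_saved_per_user * users * hourly_rate
    roi = (value - annual_cost) / annual_cost
    return value, roi

# e.g. 50 hours saved per user per year, 200 users, $60/hour, $300k tool cost
value, roi = ai_roi(hours_saved_per_user=50, users=200,
                    hourly_rate=60, annual_cost=300_000)
print(f"Value: ${value:,.0f}, ROI: {roi:.0%}")  # Value: $600,000, ROI: 100%
```

Even a sketch like this forces the useful question: which of these numbers can you actually measure today, and which are guesses?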

During our internal enablement, I spoke to leaders about setting meaningful goals for AI. I explained that they needed to evaluate their overall goals at the department level and how those goals rolled up from, or down to, the individual level.

For HR, this meant using AI to evaluate how effectively managers were delivering performance review sessions, and how aligned those meetings were with what was in the review as written. The goal was to empower leaders to improve those sessions so that the organization could effectively develop our workforce, resulting in having the right people in the right jobs and better expected performance for our company overall.

To that end, we identified that we needed to develop a tool to measure the effectiveness of performance review conversations, and then enabled employees on how to use that tool through training sessions. A key component was making the process as effortless as possible for participants: they simply recorded the meetings, and the system did the rest.

The initial success metric was 100% of managers completing the performance review sessions for the first quarter, creating a baseline that ensured the meetings were happening in the first place. Each manager then received a rating for each of their meetings with feedback to empower them to improve their sessions going forward, leading to more meaningful employee and leadership development.

Success for our Legal team, on the other hand, was focused mainly on adoption as they were looking to proactively increase proficiency with AI. Through a series of dedicated trainings I delivered, I reinforced what tools were available to them, how to use them, and how those tools could provide value for their specific roles. Within a year, we had increased their monthly interactions with our internal GPT by 3900%.

In both scenarios, executive sponsorship reinforced engagement, but we also set up channels for participants to share feedback so that the tools and the experience provided personal value.

Do one thing well

Don’t expect success all at once, and understand that the AI implementation process, like any project, will include points of failure. It’s also important to recognize that, while AI is a powerful tool, it’s not always the right tool for every job. As you start your development journey, think about what the outcome is and the fastest path to accomplish it. It may require a simple Python script or automation instead of a full AI implementation.
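To make that concrete, here is one hypothetical case where a few lines of deterministic code beat an AI deployment: routing incoming messages to the right queue by keyword. The queue names and keywords are illustrative assumptions, not a real system:

```python
# Hypothetical example: keyword-based routing instead of an AI classifier.
# Queues and keywords are illustrative; a real system would tune these rules.

ROUTES = {
    "billing": ("invoice", "refund", "charge"),
    "security": ("password", "phishing", "breach"),
}

def route(subject: str) -> str:
    """Route a message to a queue by keyword match; default to general triage."""
    text = subject.lower()
    for queue, keywords in ROUTES.items():
        if any(word in text for word in keywords):
            return queue
    return "general"

print(route("Question about my invoice"))   # billing
print(route("Suspicious phishing link"))    # security
```

If simple rules like these cover most of the traffic, AI can be saved for the genuinely ambiguous cases down the line.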

When managing our AI Innovation Hub, I liked to remind my team that for each project, we should try to do one thing well. This meant that instead of trying to execute several features all at once, they should figure out the core job to be done, get it done well, and then move on to the next. This allowed my small team to execute multiple AI projects at once that delivered value to internal business users, while also supporting the development of our core AI product.

With your own AI journey, think about the one thing that you want done well. What is the best way to accomplish that goal?

  • Is it through the purchase of third-party AI tools, like Strategy’s agents?
  • Is it getting your data AI-ready with tools like Mosaic?
  • Or is it simply automating a process without AI and saving AI for a more appropriate task down the line?

Whatever that vision is, break it down for yourself into its simplest terms so that you understand what value truly looks like for you. Figure out the one thing you want to do well, do it, and then figure out the next.

Don’t forget, every road is traveled one step at a time.

Start your AI journey

Take the first step toward measurable AI value by making your data AI-ready with Strategy Mosaic.



Juliana Schoettler

Juliana Schoettler is a Senior Product Marketing Manager at Strategy. Since 2019, she has worked in Support, Engineering, and Product, leading the AI Hub and AI adoption initiatives. With an enablement background, she focuses on practical, data-driven use of AI.
