Monday, 10 a.m. Oman, a developer of AI (artificial intelligence) tools, is eagerly anticipating the launch of a new technology product today. A year earlier, the director of the ICU (intensive care unit) at Duke University Hospital had asked Oman and his colleagues to develop an AI tool to prevent ICU overcrowding. Studies have shown that certain cardiac patients do not need to be admitted to the ICU, so ICU leaders hoped an AI tool could help emergency physicians identify these patients and transfer them to non-critical care units, improving the quality of patient care and reducing unnecessary costs.
Oman and his team, made up of cardiologists, data scientists, computer scientists, and program managers, have developed an AI tool that lets clinicians easily identify these patients. The tool also inserts a note into the patient's electronic medical record explaining why ICU admission was not needed. After a year of hard work, the tool is finally complete and ready to be put into use.
Fast-forward three weeks: the rollout has failed. One emergency physician commented, “We don’t need tools to tell us how to do our jobs.” This is a typical frontline response to the introduction of AI decision support tools. In the fast-paced emergency room, busy clinicians are reluctant to take on the extra work of entering data into a system on top of their regular workflow, and they deeply resent outsiders with little knowledge of emergency operations intruding on their area of expertise.
Similar stories of failed AI implementations play out in other fields. While these new ways of working can help organizations improve product and service quality, reduce costs, and increase revenue, end users often refuse to adopt AI tools to guide their decisions: they feel they gain little from the tools, the tools can mean additional work, and they can cost users their autonomy.
Conflicts of interest between target end users and executives or stakeholders in other departments are nothing new in technology implementation. But in the age of AI, the problem has become more acute, because AI tools are both predictive and prescriptive, and their development requires many laborious exchanges between developers and end users.
So how can AI project leaders improve end-user acceptance and usage of AI tools? Over the past five years, we at the Duke Institute for Health Innovation have closely observed the design, development, and integration of 15 AI decision support tools, and we propose a set of best practices for balancing stakeholder interests. We found that to increase end-user acceptance and adoption of AI decision-making tools, organizations and AI project leaders need to increase the benefits end users gain from the tools, reduce the end-user workload of tool development, and protect end users' autonomy by safeguarding their core jobs.
In response to the challenges that implementing AI tools poses to hospital administrators, end users, and tool developers, we collected a large amount of data, paying particular attention to decision support tools that were successfully implemented. While this study focuses on AI decision support tools in healthcare, we find that the issues and dynamics it identifies also arise in other settings, such as technology, manufacturing, insurance, telecommunications, and retail.
Roots of end-user resistance
The disconnect between what the AI project team wants to implement and what the end user is willing to adopt stems from three major conflicts of interest.
1. The biggest beneficiaries of predictive AI tools tend to be organizations rather than end users. The predictions provided by AI tools allow organizations to intervene earlier in their value chain, potentially improving quality and reducing costs for both the organization and downstream stakeholders. But there is often no immediate benefit to the target end users: in the case above, emergency physicians were asked to use an AI tool whose benefits accrued to ICU clinicians.
An online retailer faced a similar situation: It developed an AI tool that flagged active applicants whose resumes matched the profiles of employees who had succeeded in the organization in the past. The tool's target end users were talent sourcers in the HR department, who had typically ignored these active applicants in favor of searching social platforms such as LinkedIn, because few people in the pool of active job seekers have the scarce skills the company needs. However, active job seekers are more likely to accept offers than candidates found through other channels, so the tool would benefit the organization as a whole, as well as the HR interviewers downstream in the talent search chain.
2. AI tools may require additional labor from end users who are not the tools' primary beneficiaries. Developing an AI tool requires many laborious exchanges between developers and end users. Technology developers have long practiced user-centered design, using task analysis, observation, user testing, and other means to incorporate end-user needs, but AI tools require far deeper end-user involvement.
Because building AI tools requires large amounts of high-quality data, developers rely on end users to identify and reconcile data discrepancies across groups and to unify reporting methods. Developers also rely on end users at every step to define, evaluate, and supplement machine inputs and outputs, and to validate the assumptions that guide end-user decisions.
If the primary beneficiaries of an AI tool are downstream stakeholders or executives, end users may have no incentive to join developers in this laborious back-and-forth. Emergency physicians, for example, had no interest in spending time and effort developing a tool to identify low-risk cardiac patients.
Researchers at Oxford University found a similar problem at a telecommunications company that developed an AI tool to help salespeople identify high-value customers. While executives were keen to give salespeople a pair of AI-enabled “eyes,” the salespeople themselves were focused on maintaining personal, trusting relationships with customers and using their intuition to spot sales opportunities. They had no interest in a laborious process of designing, developing, and integrating a tool they did not think would benefit them.
3. Prescriptive AI tools tend to diminish end-user autonomy. AI decision support tools are prescriptive by nature: they suggest a course of action to the end user, such as transferring a patient to the ICU. The instructions these tools provide enable internal third-party stakeholders (such as an organization's managers or stakeholders in other departments) to see, and to some extent control, the decisions of target end users. Previously, internal stakeholders such as executives could only set terms of action that end users would interpret and apply case by case, using their own judgment. AI tools can now inform those judgments, issue recommendations, and track whether end users follow them, so they have the potential to erode end-user autonomy.
For example, once Duke University Hospital adopted the AI tool for identifying low-risk cardiac patients, hospital executives and ICU clinicians could see the tool's recommendation whenever an emergency physician chose to admit a cardiac patient to the ICU, along with whether the physician had followed the advice. ER physicians do not like it when others barge into their domain to give advice and try to control their decisions without ever seeing the patient.
A study of AI adoption in retail found a similar situation. Stanford researchers examined the implementation of an algorithmic decision support tool designed for fashion buyers. These buyers had traditionally used their experience and intuition about fashion trends to predict future demand and make purchasing decisions. Buyers of men's jeans, for example, must choose among styles (skinny, flared, straight) and denim colors (light, medium, dark). Buyers enjoy considerable autonomy and are not used to explicitly modeling and evaluating the results of their own intuitive judgments.
Encouraging Frontline Embrace of AI
We found that to overcome these obstacles and smooth the implementation of AI tools, project leaders need to address the imbalance between end-user and organizational value capture. In practice, this means increasing the end-user benefits associated with using AI tools, reducing the end-user workload of AI tool development, and protecting end users' autonomy by safeguarding their core jobs. (See the sidebar "Overcoming Frontline User Resistance to AI Implementation.")
1. Increase end-user benefits
If end users believe they will clearly benefit from an AI tool, they are far more likely to adopt it. AI project leaders can employ several strategies to achieve this.
Identifying end-user pain points
While AI tool developers need to keep organizational goals in mind, they also need to focus on how the tool can help target end users solve the problems they face in their day-to-day work, or cope with the new workload that comes with using the tool. Cardiologists at Duke University Hospital, for example, asked the project team to create an AI tool to detect patients with low-risk pulmonary embolism (PE) so that they could be sent to outpatient care rather than stay in costly inpatient care. Upon receiving the request, the project team immediately contacted the tool's future end users, the emergency physicians. Team members learned that the physicians' pain point was how to quickly prepare low-risk PE patients for discharge and ensure they receive the outpatient care they need.
The project leader mentioned earlier, who was trying to use an AI tool to screen active job applicants, used the same tactic. The developers learned that talent sourcers often failed to schedule candidates for interviews quickly because the interviewers downstream lacked the capacity to handle them.
Clearly, AI project leaders should focus on how tools can help target end users solve the problems they face in their day-to-day work. So why do they so often fail to do so? Because the first people to reach out to them and fund AI tool development are usually the executives or downstream stakeholders who stand to benefit most from the tool. As a result, project leaders tend to treat these individuals as the key customers and neglect to engage the target end users.
Developing interventions to address end-user problems
The introduction of the pulmonary embolism identification tool at Duke University Hospital threatened to worsen the emergency physicians' problem: there was no simple way to ensure that low-risk PE patients, once identified, could easily and safely have follow-up outpatient care arranged for them. Knowing this, the project team focused on making it easy for emergency physicians to schedule follow-up care for these patients.
Similarly, when the developers of the HR screening tool noticed that talent sourcers had difficulty scheduling interviews for candidates flagged by the AI tool, they began considering how to increase the interviewers' bandwidth, and ultimately recommended hiring professionals to provide interview pre-screening services to reduce the workload of the current HR interviewers.
Strengthening end-user incentives to achieve the outcomes that AI tools are intended to improve
Organizations often fail to evaluate and reward the improved outcomes that end users achieve when they use AI tools to guide decisions. For example, emergency physicians at Duke University Hospital were evaluated on their ability to identify and treat common acute conditions, not rare ones such as low-risk pulmonary embolism. The AI project team worked with hospital leaders to revise the incentive system, adding the ability to identify and triage low-risk pulmonary embolism patients to the evaluation criteria for emergency physicians.
Similarly, the executives in the earlier case who wanted to introduce the HR screening tool realized that they needed to change end users' incentives to achieve the outcomes the AI tool was meant to improve. HR workers who use the AI tool can look ineffective if they are evaluated only on traditional performance metrics, such as the total number of candidates found with scarce skills. The executives recognized that assessments and incentives had to be adjusted so that employees were motivated not merely to find a large pool of candidates with scarce skills, but to find candidates who would ultimately accept job offers.
Of course, AI project leaders cannot easily change end-user incentives on their own, because the stakeholders who gain the most from AI tools are often not the ones who manage the performance and compensation of the target end users. AI project leaders typically need executive buy-in to revise these incentives.
2. Reduce end-user effort
There are a number of ways AI development teams can minimize how much they ask of end users.
In the tool design phase, minimizing the end-user effort associated with building the dataset
The data used to train an AI tool must be representative of the target population. That requires large amounts of training data, and pooling the data and reconciling differences between datasets is time-consuming. AI project leaders can bring in third-party stakeholders to participate in data construction, minimizing the end-user workload this entails. For example, the Duke University project team developed an AI tool to detect patients at high risk of advanced chronic kidney disease (CKD) earlier. The data the tool required was extracted from electronic medical records and from claims data, and the two sources were inconsistent with each other. Rather than burden the tool's target end users, the primary care physicians, with the data-cleaning work, the project team validated the data with the help of the tool's main beneficiaries, the nephrologists, and standardized the heterogeneous data.
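To make that reconciliation step concrete, here is a minimal sketch in Python of the kind of standardization involved: merging per-patient records from two inconsistent sources into one table. All field names, codes, and mapping rules are hypothetical illustrations, not the Duke team's actual pipeline.

```python
"""Minimal sketch (hypothetical fields and codes) of standardizing
records drawn from two inconsistent sources, electronic medical
records and claims data, into one table for model training."""

# Hypothetical extracts: the same patient, coded differently per source.
ehr_rows = [{"mrn": "1001", "egfr": 48.0, "dx": "CKD stage 3"}]
claims_rows = [{"member_id": "1001", "icd10": "N18.3"}]

# Mapping rule of the kind a nephrologist (the tool's main beneficiary)
# would validate: translate claims codes onto the EHR's labels.
ICD10_TO_STAGE = {"N18.3": "CKD stage 3", "N18.4": "CKD stage 4"}

def unify(ehr, claims):
    """Merge per-patient records from both sources, flagging conflicts."""
    claims_by_id = {r["member_id"]: r for r in claims}
    for row in ehr:
        claim = claims_by_id.get(row["mrn"])
        claim_dx = ICD10_TO_STAGE.get(claim["icd10"]) if claim else None
        yield {
            "patient_id": row["mrn"],
            "egfr": row["egfr"],
            "dx": row["dx"],
            "sources_agree": claim_dx == row["dx"],
        }

for record in unify(ehr_rows, claims_rows):
    print(record)
```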
AI project leaders can also choose a good-enough AI tool and train it on the relatively small dataset that is currently available. For example, one AI project leader wanted to develop a tool to help salespeople at a manufacturing company identify potential high-value customers, and he wanted to minimize the end-user labor of assembling the relevant datasets. Instead of asking salespeople to spend time refining log data for the various milestones in the sales process (such as leads, qualified leads, and demos), the AI team first built the system with a good-enough model that required less training data, so salespeople had less data to prepare.
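As a rough illustration of the good-enough-model tradeoff, the sketch below trains a simple logistic regression on two coarse features that salespeople might already log, rather than the detailed milestone data a richer model would demand. The features, data, and library choice are assumptions for illustration only.

```python
"""Sketch (hypothetical data and features) of the 'good enough model,
less training data' tradeoff: a simple logistic regression over two
coarse features salespeople already log, instead of a richer model
that would require them to curate detailed milestone data."""
from sklearn.linear_model import LogisticRegression

# Coarse features already on hand: [deal_size_k_usd, num_meetings].
X = [[10, 1], [80, 4], [15, 2], [120, 6], [30, 1], [95, 5]]
y = [0, 1, 0, 1, 0, 1]  # 1 = became a high-value customer

# A small, regularized model trains on what exists today, sparing
# salespeople the work of assembling a larger, cleaner dataset.
model = LogisticRegression().fit(X, y)

new_lead = [[60, 3]]
print(f"P(high-value) = {model.predict_proba(new_lead)[0][1]:.2f}")
```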
In the tool development process, minimizing the end-user workload related to testing and validation
Once a prototype of an AI tool is built, the development team needs time-consuming back-and-forth with end users to test and validate the tool's predictions and to tweak the tool to increase its practical utility. This effort can be minimized by bringing in third-party stakeholders to participate in the review. For example, the project team developing the AI tool to identify the best sales leads for a manufacturing company invited the leader of the process-improvement team, rather than salespeople, to conduct the initial evaluation. The process-improvement leader helped them define a success metric, customer conversion rate: the percentage of leads that subsequently convert into actual customers. He also helped them A/B test the leads identified by the tool against those identified through the regular sales process, comparing the two conversion rates.
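That conversion-rate comparison can be illustrated with a short sketch. The code below computes the conversion rate for each group of leads and a standard two-proportion z-statistic for the difference; the outcome data and function names are hypothetical.

```python
"""Sketch of the lead-conversion A/B test described above. Assumes
(hypothetically) that each lead is recorded as a 0/1 conversion
outcome: tool-identified leads in one group, leads from the regular
sales process in the other."""
import math

def conversion_rate(outcomes):
    """Fraction of leads that converted into actual customers."""
    return sum(outcomes) / len(outcomes)

def two_proportion_z(a, b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conversion_rate(a), conversion_rate(b)
    pooled = (sum(a) + sum(b)) / (len(a) + len(b))
    se = math.sqrt(pooled * (1 - pooled) * (1 / len(a) + 1 / len(b)))
    return (p_a - p_b) / se

# Hypothetical outcomes: 1 = the lead converted, 0 = it did not.
ai_leads = [1, 0, 1, 1, 0, 1, 0, 1]
regular_leads = [0, 0, 1, 0, 1, 0, 0, 0]

print(f"AI tool: {conversion_rate(ai_leads):.0%} conversion")
print(f"Regular: {conversion_rate(regular_leads):.0%} conversion")
print(f"z = {two_proportion_z(ai_leads, regular_leads):.2f}")
```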
AI project leaders can often do more to help end users evaluate models easily. For example, the Duke University project team developing the tool to detect patients at high risk of chronic kidney disease found that end users had difficulty determining the risk-score threshold for placing patients in the high-risk category. Team members used interactive graphs to show the percentage of patients with a given score who eventually developed chronic kidney disease, making it easier for end users to set thresholds separating high-risk from intermediate-risk patients.
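The computation behind such a graph might look something like the sketch below: for each candidate cutoff, it reports what share of patients at or above that risk score went on to develop chronic kidney disease. The patient records and score values are invented for illustration.

```python
"""Sketch of the threshold-setting view described above: for each
candidate risk-score cutoff, the share of flagged patients who later
developed chronic kidney disease (CKD). All data are hypothetical."""

# Hypothetical (risk_score, developed_ckd) pairs, one per patient.
patients = [(0.15, 0), (0.32, 0), (0.41, 1), (0.55, 0),
            (0.63, 1), (0.71, 1), (0.84, 1), (0.92, 1)]

for cutoff in (0.3, 0.5, 0.7):
    flagged = [ckd for score, ckd in patients if score >= cutoff]
    rate = sum(flagged) / len(flagged)
    print(f"score >= {cutoff:.1f}: {len(flagged)} flagged, "
          f"{rate:.0%} developed CKD")
```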
In the tool integration process, minimizing the end-user workload associated with tool use
Simplifying the user interface and automating related processes can reduce users' sense that the AI tool adds extra work. A rule of thumb: never ask users to enter data the system can retrieve automatically. Better still, anticipate what users want and prepare the interface for them.
Another example comes from the KIN Center for Digital Innovation at Vrije Universiteit Amsterdam, which developed an AI tool for screening job candidates. To make the tool more accessible to HR recruiters at a consumer goods company, the developers first color-coded each candidate's match against the profile of previously successful employees: candidates with a match of 72% or above were marked green; those below that threshold were marked orange. The developers then automated the process further, letting recruiters click a button to have the AI tool filter out all candidates with low predicted success rates.
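A minimal sketch of those two conveniences, the color coding and the one-click filter, might look like this. The 72% green threshold comes from the example above; the data structures and function names are hypothetical.

```python
"""Sketch of the color coding and one-click filter described above.
The 72% green threshold is from the article; the data structures and
function names are hypothetical."""

GREEN_THRESHOLD = 0.72  # match scores at or above this show green

def color_code(match_score):
    """Map a candidate's profile-match score to a display color."""
    return "green" if match_score >= GREEN_THRESHOLD else "orange"

def drop_low_scores(candidates):
    """The one-click action: keep only green-coded candidates."""
    return [c for c in candidates if color_code(c["match"]) == "green"]

candidates = [{"name": "A", "match": 0.81},
              {"name": "B", "match": 0.64},
              {"name": "C", "match": 0.75}]

for c in candidates:
    print(c["name"], color_code(c["match"]))
print("after filter:", [c["name"] for c in drop_low_scores(candidates)])
```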
Another strategy is to reallocate some of the extra work required to use the tool. For example, the primary care physicians at Duke University Hospital (the target end users of the chronic kidney disease AI tool) suffer from “alert fatigue,” because they also receive alerts from a variety of other automated clinical decision support tools. The head of Duke's AI program decided to create a new clinical role: a clinician who uses the AI tool to remotely monitor all patients under the care of Duke University Hospital's primary care physicians, more than 50,000 adults in total. When the AI tool flags a patient as being at high risk of chronic kidney disease, the monitoring clinician pre-screens the alert through chart review; if the patient is confirmed to be at high risk, a message is sent to the primary care physician. If that physician agrees that the patient may be at high risk, the patient is referred to a nephrologist for treatment.
3. Protect end-user autonomy
People have always cherished autonomy, and they derive self-esteem from the control and expertise they have accumulated over their work. Users are naturally upset when AI tools allow stakeholders from outside their domain to influence their decisions. A successful AI implementation requires a keen sense of how the tool might affect end users' relationship to their jobs. Here are some ways developers can handle this.
Protecting the tasks at the heart of end users' work
Sepsis, an infection that triggers a systemic inflammatory response and can eventually lead to organ failure, was the target of a Duke University project team that developed an AI tool to help detect and manage sepsis treatment. The tool met with combined resistance from its end users, the emergency physicians. Physicians wanted to remain in control of key tasks, such as having the final say in diagnosing a patient and in prescribing medications and blood tests. The project team adjusted the tool's settings so that its predictions would not affect those critical tasks, but would instead help emergency physicians complete important tasks that they do not particularly value.
In the case of the fashion buyers described earlier, the AI tool developers learned that buyers wanted to remain in control of tasks they deem creative or strategic, such as deciding the total volume of denim purchases or the percentage allocated to flared pants or red denim. The project team adjusted the tool's settings to match these expectations: if a buyer wanted to buy red denim, he could add that as an input to the AI tool's suggestion list, and the system would fill the red denim order first.
It may seem self-evident that a project team's AI tools should avoid interfering with tasks end users see as the core of their work, but AI project leaders risk falling into this trap because intervening in core tasks often promises the greatest payoff. For example, the AI project team initially built a tool for a retail organization to guide the decisions of fashion buyers. Mistakes in this link of the operation lead to two adverse consequences: goods are purchased and stockpiled that do not match market demand, forfeiting revenue opportunities; or the wrong products are bought and later have to be sold at a markdown, eroding gross margins. But when buyers rejected the tool, the developers turned to the other end of the process, creating a tool that helps merchandisers decide when, and by how much, to cut prices on slow-moving clothing.
The tool ended up capturing far less value than originally planned, as it addressed only the final stage of the retail process. But smart AI project leaders understand that an AI tool that actually gets implemented, even one that intervenes in only a limited set of tasks, is more effective than a tool that could theoretically provide more value but never gets implemented because it intrudes on end users' core tasks.
Allowing end users to help evaluate AI tools
Introducing a new AI decision support tool often means replacing an old tool embraced by the target end users, and the new tool can erode end-user autonomy. For example, the AI tool for sepsis detection threatened the autonomy of emergency physicians, a problem the existing rules-based sepsis detection tool did not have. To protect end-user autonomy, the project team invited key developers of the tool currently in use and asked them to help design an experiment to test the effectiveness of the new tool.
Researchers at Harvard Business School found a similar situation in their study of a retail organization that developed an AI tool to help fashion allocators decide which sizes and styles of shoes to assign to which stores, and in what quantities. The tool let managers outside the process see what was happening in allocation decisions, potentially threatening the allocators' autonomy, a problem the existing rules-based tool did not pose. To protect end-user autonomy, the project team enlisted the allocators' help in designing an A/B test to compare the performance of the existing tool with that of the new AI tool.
Giving target end users a voice in the evaluation process sounds perfectly reasonable. So why don't all AI project leaders do it? Because when end users choose which areas of their work to test, they invariably pick the hardest parts. But this step cannot be skipped: these are the very people who will need to act on the AI tool's recommendations.
Involving end users from the start
AI project leaders often deliberately keep quiet in the early stages of development to avoid user resistance. But if users are not invited to participate early, the project's chances of success drop sharply; users resent late invitations, and the resentment lingers. Unless an AI tool can fully automate a process, it needs to be embraced by end users in order to work. Successful AI project leaders understand that involving end users at the beginning of a project greatly increases the likelihood of success.
Behind the glittering promise of AI technology lies a stark reality: even the world's best AI tools are worthless if no one embraces them. To win buy-in from frontline users, leaders must first recognize the three main conflicts of interest in AI implementation: the intended end users of an AI tool may not see any benefit for themselves, they may need to take on additional work related to the tool's development or use, and they may lose valued autonomy. Only by acknowledging these facts can leaders address the imbalance between end-user and organizational value capture and lay the foundation for successful implementation. Success does not come from big data, flashy technology, and bold promises; it depends on the decisions frontline workers make in their day-to-day work. To turn the promise of AI into reality, leaders must take the needs of frontline workers into account so that AI can work in the real world.