Is “Murder by Machine Learning” the New “Death by PowerPoint”?




Software doesn’t always end up being the productivity panacea it promises to be. As its victims know all too well, “death by PowerPoint,” the poor use of presentation software, sucks the life and energy out of far too many meetings. And audit after enterprise audit reveals spreadsheets rife with errors and macro miscalculations. Email and chat foster similar dysfunction; inbox overload demonstrably hurts managerial performance and morale. No surprises here; this is sadly a global truth we’re all too familiar with.

So what makes artificial intelligence/machine learning (AI/ML) champions confident that their technologies will be immune to comparably counterproductive outcomes? They shouldn’t be so sure. Digital empowerment all too frequently leads to organizational mismanagement and abuse. The enterprise history of personal productivity tools offers a long and unhappy litany of unintended consequences. For too many managers, the technology’s costs often rival its benefits.

It’s precisely because machine learning and artificial intelligence platforms are supposed to be “smart” that they pose uniquely challenging organizational risks. They are likelier to inspire false and/or misplaced confidence in their findings; to amplify or further entrench data-based biases; and to reinforce, or even exacerbate, the very human flaws of the people who deploy them.


The problem is not that these innovative technologies don’t work; it’s that users will inadvertently make choices and take chances that undermine colleagues and customers. Ostensibly smarter software can perversely convert yesterday’s “death by PowerPoint” into tomorrow’s “murder by machine learning.” Nobody wants to produce boring presentations that waste everybody’s time, but they do; nobody wants to train machine learning algorithms that produce misleading predictions, but they will. The road to counterproductivity hell is wired with good intentions.

For example, as Gideon Mann and Cathy O’Neil astutely observe in their HBR article “Hiring Algorithms Are Not Neutral”: “Man-made algorithms are fallible and may inadvertently reinforce discrimination in hiring practices. Any HR manager using such a system needs to be aware of its limitations and have a plan for dealing with them…. Algorithms are, in part, our opinions embedded in code. They reflect human biases and prejudices that lead to machine learning mistakes and misinterpretations.”

These intrinsic biases, in data sets and algorithms alike, can be found wherever important data-driven decisions need to be made: in customer segmentation efforts, product feature designs, and project risk assessments. There may even be biases in detecting biases. In other words, there’s no escaping the truth that machine learning’s computational strengths inherently coexist with human beings’ cognitive weaknesses, and vice versa. But that’s more a leadership challenge than a technical issue. The harder question is: Who’s going to “own” this digital coevolution of talent and technology, and sustainably steer it to success?

To answer this question, consider the two modes of AI/ML that are most likely to dominate enterprise initiatives:

  • Active AI/ML means people directly determine the role of artificial intelligence or machine learning to get the job done. The humans are in charge; they tell the machines what to do. People rule.
  • Passive AI/ML, by contrast, means the algorithms largely determine people’s parameters and processes for getting the job done. The software is in charge; the machines tell the humans what to do. Machines rule.

Crudely put, where active machine learning has people training machines, passive machine learning has machines training people. With the rise of big data and the surge of smarter software, this duality will become one of the greatest strategic opportunities, and risks, confronting leadership worldwide.

Active AI/ML systems have the potential to digitally reincarnate, and proliferate, the productivity pathologies associated with existing presentation, spreadsheet, and communications software. Individuals with relatively limited training and knowledge of their tools are being asked to use them to get their jobs done. But most companies have very few reliable review mechanisms to assure or improve quality. So, despite the advanced technology, presentations continue to waste time, spreadsheet reconciliations consume weekends, and executives fall further behind responding to emails and chats.

Just as these tools turned knowledge workers into amateur presenters and financial analysts, the ongoing democratization of machine learning invites them to become amateur data scientists. But as data and smarter algorithms proliferate enterprise-wide, how sustainable will that be?

To be sure, talented power users will emerge, but overall, the inefficiencies, missed opportunities, and mistakes that can result have the potential to be organizationally staggering. To think that most managers will reap real financial value from AI/ML platforms with minimal training is to believe that most adults could, in their spare time, successfully turn litters of puppies into show dogs. This is delusional. Most likely, organizations will raise ill-trained software that demands inordinate amounts of attention, leaves unexpected messes, and occasionally bites.

For example, overfitting is a common machine learning blunder made by even experienced data scientists. An overfit model is, literally, too good to be true; it incorporates too much noise rather than focusing on the essential signal in the data. It fits the existing data set so well that it becomes wildly inaccurate and/or unreliable when processing new data. For businesses, the predicted results can therefore be complete nonsense, leading to negative outcomes such as bad hires, poor designs, or missed sales forecasts. Overfitting, like spreadsheet errors, can of course be caught and corrected. But what happens when dozens of machine learning amateurs are making flawed investments or projections based on what they thought were accurate models? That’s an algorithm for disaster.
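For readers who want to see the failure mode concretely, here is a minimal sketch of overfitting in Python using scikit-learn. The synthetic data set, model degrees, and thresholds are illustrative assumptions of mine, not anything described in the article; the point is simply that a model can score almost perfectly on the data it was trained on and still fail badly on data it has never seen.

```python
# Minimal sketch of overfitting: a model that fits its training data almost
# perfectly but falls apart on new data. The toy data and the degree-15
# polynomial are illustrative assumptions, not a recommendation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def make_data(n):
    # The "true" relationship is a simple sine curve plus noise.
    x = rng.uniform(0, 1, size=(n, 1))
    y = np.sin(2 * np.pi * x).ravel() + rng.normal(scale=0.2, size=n)
    return x, y

x_train, y_train = make_data(20)   # small training set
x_test, y_test = make_data(200)    # fresh data the model has never seen

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")

# Typical outcome: the degree-15 model shows near-zero training error but a
# much larger test error than the degree-3 model. It memorized the noise.
```

Holding out data the model never trained on, as in the sketch, is the simplest discipline for catching this kind of error before decisions get made on it.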


The more data resources organizations possess, the more disciplined the supervision and oversight that active AI/ML will need. Smarter algorithms require smarter risk management.

Passive AI/ML, on the other hand, presents a different design sensibility and poses different risks. For all intents and purposes, this software acts as manager and coach, setting goals and guidelines even as it offers data-driven advice on getting the job done. The personal productivity promise is compelling: texts and emails that write their own responses; daily schedules that reprioritize themselves when you’re running late; analytics that highlight their own most important findings; and presentations that make themselves more animated. Enterprise software innovators from Microsoft to Google to Salesforce to Slack seek to smarten their software with algorithms that reliably learn from users. So, what’s the problem?

The most obvious risk, of course, is whether the “smarter software” truly gives its people the right commands. But senior management should have that firmly under review. The subtler and more subversive risk is that passive AI/ML is too rooted in human compliance, adherence, and obedience. That is, workers are required to be subservient to the AI to make it succeed. This sort of disempowerment-by-design may invite employee resistance, perfunctory compliance, and subtle sabotage. For example, a customer service rep might tell an unhappy customer, “I’d love to help you, but the software forbids me from giving you any kind of refund.”

In other words, the value of the human touch is deliberately discounted by data-driven decisions. Workers are likely to subordinate their judgment to their algorithmic bosses, and the system will discipline them if they step out of line.

While there’s no single solution to these challenges, there are approaches that strike a healthy balance between the risks and opportunities. Certainly, the more successful organizations will embrace “data governance” and hire the best data scientists they can. But culturally and operationally, they’ll need to publicly enact three interrelated initiatives to mitigate risks:

1. Write a declaration of (machine) intelligence. Not unlike Thomas Paine’s Common Sense or the Declaration of Independence, a Declaration of (Machine) Intelligence would define and articulate principles related to how the organization expects to use smart algorithms to drive performance and productivity. The document typically describes use cases and scenarios to illustrate its points. It aims to give managers and workers a clearer sense of where AI/ML will augment their tasks and where it may replace or automate them. The declaration is very much about expectations management, and it should be required reading for anyone in the company.

2. Employ radical repository transparency. Review, verification, and validation are essential principles in data-rich, AI/ML enterprise environments. Sharing ideas, data, and models between communities of practice should be a best practice. Big corporations increasingly use repositories that encourage people and teams to post their data sets and models for review. At times, these repositories grow out of data governance initiatives. At others, they’re byproducts of data science teams trying to get greater visibility into what various groups are doing digitally. The clear aspiration is to expand enterprise-wide awareness without constraining bottom-up initiative.
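To make the repository idea more tangible, here is a minimal sketch, in Python, of the kind of metadata record a team might post alongside a model so that other groups can review, question, and reuse it. The field names and example values are hypothetical assumptions, not a standard the article prescribes.

```python
# A minimal sketch of a "model card" record that a team could post to a shared
# repository for review. Field names and example values are illustrative
# assumptions, not an established schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    owner_team: str
    training_data: str                 # where the data set lives
    intended_use: str                  # the decision the model supports
    known_limitations: list = field(default_factory=list)
    validation_metrics: dict = field(default_factory=dict)

card = ModelCard(
    name="churn-propensity-v2",                        # hypothetical model
    owner_team="customer-analytics",
    training_data="shared-warehouse/churn/2017-q4",    # hypothetical path
    intended_use="Flag accounts at risk of cancellation for retention offers",
    known_limitations=["Trained on postpaid customers only"],
    validation_metrics={"auc": 0.81, "holdout_period": "2018-01"},
)

# What gets posted for enterprise-wide review.
print(json.dumps(asdict(card), indent=2))
```

Even a record this simple makes it possible for other teams to see what models exist, what data they were trained on, and where their limits lie, without dictating how any one team does its work.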

3. Create a trade-off road map. Data science, artificial intelligence, and machine learning are dynamically innovative fields that evolve rapidly and opportunistically. Yesterday’s active machine learning implementation may become tomorrow’s passive AI/ML business process. As legacy organizations look to data, machine learning, and digital platforms to transform themselves, their road maps will suggest where management believes active AI/ML investments will be more valuable than passive ones. For example, customer-oriented AI/ML systems may merit different talent and trade-offs than systems focused on internal process efficiency.

Churn management makes an excellent case study: At one telecom giant, an analytics group explored using machine learning techniques to identify the customers most likely to leave and switch to another service provider. Successfully testing retention offers would be a big win for the enterprise, and having ML reduce customer churn would dramatically improve internal process efficiencies. But several of the more customer-centric analysts believed that simply keeping a customer wasn’t enough; they thought a portion of possible churners could be upsold to new and additional services if the offers were framed correctly. They wanted the data and machine learning algorithms trained to identify customers who could be upsold, not merely saved. It turned out this was a genuinely good data-driven, customer-centric idea.
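The article doesn’t describe how the telecom team built its models, but a minimal sketch of the two-score idea might look like the Python below: one model scores churn risk, a second scores upsell propensity, and the customer-centric list is the overlap of the two. The feature names, synthetic data, and thresholds are entirely my own assumptions for illustration.

```python
# Sketch of scoring customers on churn risk AND upsell propensity, rather than
# churn risk alone. Data, features, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical usage features: monthly spend, support calls, months on contract.
X = np.column_stack([
    rng.gamma(2.0, 30.0, n),     # monthly_spend
    rng.poisson(1.5, n),         # support_calls
    rng.integers(1, 60, n),      # tenure_months
])

# Synthetic labels standing in for historical outcomes.
churned = (rng.random(n) < 0.2).astype(int)
upsold = (rng.random(n) < 0.1).astype(int)

churn_model = GradientBoostingClassifier().fit(X, churned)
upsell_model = GradientBoostingClassifier().fit(X, upsold)

churn_risk = churn_model.predict_proba(X)[:, 1]
upsell_propensity = upsell_model.predict_proba(X)[:, 1]

# Retention-only view: the top decile of churn risk gets a save offer.
save_list = np.argsort(churn_risk)[-n // 10:]

# Customer-centric view: likely churners who also look receptive to an upsell.
upsell_list = np.where(
    (churn_risk > np.quantile(churn_risk, 0.8))
    & (upsell_propensity > np.quantile(upsell_propensity, 0.7))
)[0]

print(len(save_list), "customers flagged for retention offers")
print(len(upsell_list), "customers flagged for upsell-framed offers")
```

The design choice is the point: framing the problem as “who can we grow?” rather than only “who might we lose?” requires training on different labels and accepting different trade-offs, which is exactly what a trade-off road map should make explicit.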

Like the Declaration of (Machine) Intelligence, the road map of trade-offs is meant to manage expectations. But it looks to and draws on radical repository transparency to see what internal AI/ML capabilities exist and what new ones need to be cultivated or acquired.

Simply put, leaders who are serious about leading AI/ML transformations are investing not merely in innovative technical expertise but also in new organizational capabilities. As they do so, they’ll need to take great care not to recreate the productivity mistakes of the past.

