AI FOR MANAGEMENT

Act Like an Investor: How to Build Your AI Decision-Making Framework

AI promises to disrupt industries by giving management the ability to drastically improve company-wide decision making. When implementing AI to improve decision making, it’s useful to look to investing, where the difference between a good decision and a bad one can be billions of dollars. Each day, investors must sort through endless amounts of data to settle on what they believe is the best investment. Below we’ll explore two investment companies whose approaches give management teams in all industries a framework for making better decisions.

At an AI & Venture Capital (VC) event in January, Madison Elkhazin of Georgian Partners spoke about how her VC firm has not only invested in AI-driven companies but has also developed internal AI capabilities. This approach gives Georgian a two-fold boost in its investment strategy:

  1. They can use their AI capabilities to identify high-quality, high-potential companies that align with their investment mandate
  2. They have the ability to provide their AI expertise to portfolio companies, resulting in the potential for a greater return on investment

Elkhazin told the crowd that Georgian was one of the first VC firms to develop this approach, and it has become table-stakes in the industry as other firms rush to follow. Looking at Georgian’s strategy, they clearly have an advantage: it’s incredibly valuable to use large amounts of data and computing power to sort through potential investments faster than any number of human analysts could. Warren Buffett attributes much of Berkshire Hathaway’s success to its ability to immediately turn down investment opportunities that do not align with its mandate, which allows him to focus all of his time on the few opportunities that do. Good decisions are made when management can quiet the noise, limit the universe of potential options, and dig into the details of a small number of high-quality possibilities.

Another investment company with a track record of great decision making is Bridgewater Associates, widely considered the most successful hedge fund in the world. In a 2017 TED Talk, Bridgewater founder Ray Dalio spoke about the role of radical transparency in their corporate culture. Each employee is expected to give candid and direct feedback to every other employee at Bridgewater; they are even encouraged to give Dalio himself direct feedback. He tells the story of receiving an email after a meeting in which an employee rated his contributions a ‘D’. Dalio says this not only ensures that everyone is able to receive and adjust to feedback, but also creates space where ideas can be properly challenged.

These two methodologies, narrowing the scope of possibilities and radical transparency, are valuable ideas that management can combine to develop a potent AI strategy. Management must not only aim to make better decisions through the use of AI, but also provide an environment in which the algorithmic outputs can be challenged. To rise above the competition, companies will need to focus on finding experts within their industry who also possess a strong understanding of the technical details of AI. These individuals can identify where an AI model falls short and challenge its outputs the same way that Ray Dalio encourages employees to challenge him.

As AI becomes table-stakes in VC and across all industries, competitive advantage begins to diminish. Companies that focus their efforts on hiring a new type of leader, one who deeply understands both industry and AI, will set themselves up for the returns previously enjoyed only by the Bridgewaters and Georgians of the world.

How to Navigate the Unknown Future: Taking an Open-Source Approach to Digital Transformations


In 2017, Kaggle announced it had surpassed 1 million active monthly users. Kaggle, for those outside the data science world, is a subsidiary of Google where Machine Learning specialists and Data Scientists can hone their skills and contribute to open-source data science projects. The platform hosts competitions and allows teams to get together and come up with the best data science solutions for real-world problems. Recently the NFL hosted a competition that encouraged contributors to build a model for predicting how many yards a running back would gain on a given play. These competitions have become a breeding ground where amateurs build their skills, experts share their knowledge, and people from all backgrounds and experience levels come together for a common goal. While developing an AI Transformation Strategy, companies should take an open-source approach and include a plan to make their data available throughout the organization so all team members can contribute.
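
To make the NFL example concrete, a Kaggle-style entry ultimately boils down to fitting a model that maps play features to a predicted yardage. Here is a toy sketch; the features and data are invented for illustration and are far simpler than the competition’s real inputs:

```python
# Toy sketch of a Kaggle-style model: predict a running back's yards gained
# from play features. The features and data here are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Invented features: defenders in the box, rusher speed (yd/s), yards to go.
X = np.column_stack([
    rng.integers(5, 10, n),      # defenders_in_box
    rng.normal(8.0, 1.0, n),     # rusher_speed
    rng.integers(1, 11, n),      # yards_to_go
])
# Invented ground truth: fewer defenders and more speed -> more yards gained.
y = 12 - X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out plays: {model.score(X_test, y_test):.2f}")
```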

Open-source competitions have a history of producing surprising contributions to science and technology. One of the most compelling examples comes from long before the world of AI. In 1714 the British Parliament offered a 10,000-pound prize to anyone who designed a method to measure longitude. Some of the leading scientific minds focused their considerable talents on this problem, but the purse was eventually claimed by the self-taught clockmaker John Harrison. Harrison built increasingly accurate clocks that could help ships determine their east-west position while at sea. This was a huge advancement for navigation: sailors could use the stars to determine their north-south position, but there was no reliable way to determine longitude, which fixes east-west position, and ships regularly went off course as a result.

What Kaggle competitions and John Harrison show us is that by opening up a network and encouraging everyone to contribute, organizations have a much better chance of reaching meaningful solutions. The first step is putting the proper security and privacy controls in place to mitigate the risks of sharing data. Once this is complete and the data is safe to release, companies must arm their workforce with new AI experimentation tools that allow those without technical expertise to participate. Several companies are developing tools that let people with no programming experience apply Machine Learning models to data and derive useful insights. These platforms cannot replace the experience and expertise of highly educated data scientists, but they allow companies to unleash the creativity and problem-solving skills of their entire employee base and improve their chances of becoming a sophisticated, data-driven organization.

Many companies have decided to pursue digital transformations and are migrating data to sources that allow for the application of AI. The decision to make this large investment of dollars and time is a step in the right direction and will set these companies up for success. To go further, and gain a powerful advantage over the competition, management teams should embrace the open-source approach and arm their employees with the tools they need to test the waters of an unknown future.

Lessons From Combat: What Navy SEALs Can Teach Us About AI Transformations

Many Navy SEALs have made second careers writing management books and giving speeches at Fortune 500 companies. Whether it’s the self-discipline promoted by former SEAL Jocko Willink or the dedication to teamwork embedded in the culture of the armed forces, many of their teachings transfer to the world of business. There is one lesson, though, that stands out for managers contemplating an AI transformation.

When navigating an urban environment, special forces operators can be heard saying, “Slow is smooth and smooth is fast.” What they mean is that the squad needs to move at the ideal pace: slightly faster than a walk, but not quite a jog or a run. This speed allows for a complete 360-degree threat assessment and quick pivots and updates to the strategic plan. When thinking about how to proceed in the uncertain environment of AI, the same concept applies.

Applying the Navy SEALs’ strategy in business requires brave leadership. Management teams often get caught up in the hype and fear their company will be left behind. This causes them to make blind investments in AI that do not align with company values and do not provide an adequate return. Instead of charging ahead at full speed and risking casualties, a more measured approach is better.

Companies can use a three-step approach to move forward slowly and smoothly, resulting in quicker advancement in AI capabilities than their competitors.

Step 1: Create a Plan

Creating a plan for AI implementation is about more than just AI. It’s about dissecting your business model and taking a deep look at structural weaknesses and at where improved decision making could lead to better customer experiences, increased revenue, and reduced costs. This exercise does not need to mention AI or other technologies, but it must be honest, realistic, and exhaustive. AI has the ability to completely revamp business models if management allows it, but this will only happen for companies that put in the hard work up front.

Step 2: Experiment & Transform

Once the planning and internal reflection are complete, the company should have an idea of the action needed to begin its AI implementation efforts. For the majority of companies, this looks like a digital transformation: transferring legacy systems and paper processes to the digital world and creating unified data sources that can feed future AI applications. These digital transformations are often long, frustrating, and full of failures. Companies must lean in to the failure and be willing to dust themselves off and keep going.

Many companies that encounter failures and hurdles during their digital transformation react by putting their AI strategy on hold. Management believes it should wait until everything is digitized and the company has a perfect database with all the data available. Rather than focusing wholly on the digital transformation, companies should begin experiments and small test cases for AI applications. By building and testing through experiments, companies can improve their internal AI capabilities and grow their data science team from the inside. These companies will be several steps ahead when the shiny new database finally comes to life.

Step 3: Grow & Scale

With the digital transformation complete and several AI experiments under their belt, management can finally move ahead with large-scale AI implementation. This is when the technology can truly affect the operations and profitability of the company. The data is in a format that allows for large-scale application, and the company may already have several ideas and test cases ready to bring to market.

By moving at the ideal pace, management can avoid getting stuck standing still or ploughing forward too quickly. Moving too slowly puts you out of business as competitors capitalize on AI implementation; moving too quickly puts company money at risk as large investments are lost. When the world is screaming to innovate and get moving on AI, management would be best served to remain calm and remember: “Slow is smooth and smooth is fast.”

Data Reality Check: Two Questions to Ask Before a Digital Transformation

Almost all of the recent innovations in AI are due to the ability to amass large quantities of data. The discipline of Machine Learning takes this data and outputs insights never before available. AI can accurately identify images and individual faces, process and translate text, and reliably respond to speech. The companies with access to the most data, in the proper form, are able to build the most powerful AI applications. Think of Amazon processing millions of transactions, Google processing millions of searches, and Facebook processing millions of posts and pictures each day. All of this data is captured in a format that can be fed into algorithms, which then provide the insights and outputs valuable to businesses. As companies attempt to catch up to the big players, they need to avoid a mindless approach to data collection and ensure they build robust data-generating processes.

Companies have been collecting data for decades, if not centuries. Data-driven companies have the ability to move quickly and make informed decisions that drive profit. The problem is that companies running legacy IT systems are not collecting data in digital form, in the quantities needed, or in the format required to successfully train Machine Learning algorithms. This leaves management needing to ask two key questions:

  1. Should we go through a digital transformation?
  2. What data is essential to our decision making and AI applications?

Almost all companies can benefit from capturing insights, getting to know their clients better, driving down costs, and making better decisions. This makes the answer to the first question simple for those who wish to maintain market share as competitors with digital capabilities enter the market. The second question leads management to an analysis of what data to capture and what to focus on while going digital.

The time preceding a digital transformation is when management needs to fully analyze the current business model, understand use cases for AI and other technologies, and see where the transformation will take them. Earlier we noted that many companies were collecting data long before the establishment of the internet. It would be foolish not to take a step back and determine which data collection processes are essential and which are outdated and providing no value. Charlie Munger of Berkshire Hathaway tells a story of how poor data collection can lead to poor decision making:

The water system of California was designed looking at a fairly short period of weather history. If they’d been willing to take less perfect records and look an extra hundred years back, they’d have seen that they weren’t designing it right to handle drought conditions which were entirely likely. You see again and again that people have some information they can count well and they have other information much harder to count. So they make the decision based only on what they can count well. And they ignore much more important information because its quality in terms of numeracy is less, even though it’s very important in terms of reaching the right cognitive result. All I can tell you is that around Wesco and Berkshire, we try not to be like that. We have Lord Keynes’ attitude, which Warren quotes all the time: “We’d rather be roughly right than precisely wrong.” In other words, if something is terribly important, we’ll guess at it rather than just make our judgement based on what happens to be easily countable.

Although Charlie is speaking about a data collection process that did not involve Machine Learning techniques, his statements remain true today. Management can fool themselves by making decisions on available information rather than the most important information. In the new world of AI, management has an incredible opportunity to rethink the way they collect data and use it to make decisions. They mustn’t squander this opportunity by doing what is easy: collecting small sample sizes or simply duplicating old data collection processes. Those who take the time to reflect on the past will be better prepared for the future.

No Need to Cannonball: How to Wade into AI Innovation

A 2018 Gartner study found that 25% of companies are experimenting with AI (‘just dipping their toes in’) and 49% have no AI plans whatsoever. When a company’s decision makers lack technological expertise, it is difficult to craft credible strategies: management cannot properly weigh risks and rewards, respond to investor expectations, or adequately engage the workforce. To avoid abandoning the company’s existing business model too early and unnecessarily risking investor capital, there are several steps managers can take to move into the future.

Step 1: Rely on Internal Experts

Early in an AI transformation, there will not be widespread understanding of the technology. As the company’s AI capabilities mature, knowledge will disperse through the organization and proficiency will grow. Until this occurs, management will need to create a centre of expertise (COE) to aid in understanding and decision making. It makes sense for this centre to pull heavily from existing IT and analytics departments, as these team members may already understand AI or will have the baseline skills to quickly get up to speed. If required, the company can engage external consulting firms in the early stages, but it will want to make sure the consultants work alongside team members so the learnings remain within the company.

Step 2: Identify Opportunities

Although management may lack the understanding needed to craft a quality AI strategy, they have (hopefully) been crafting high-quality business strategies for years. Management must share their understanding of the company’s business model with their AI COE. In their explanation, management should focus on:

  • Knowledge Bottlenecks: areas of the business where there are only one or two subject matter experts
  • Knowledge Scaling: areas of the business where it is difficult to quickly train and deploy workers
  • Knowledge Gaps: areas of the business where decisions are made with incomplete or incorrect information

Step 3: The Innovation Lab

By completing the needs assessment before conceptualizing AI applications, the organization avoids inventing problems to fit preconceived solutions. With this approach, management can remain focused on creating value for the enterprise and improving the functioning of the business model. Once briefed on the areas of opportunity, the AI COE can conceptualize the ways in which AI can be applied. Once the COE identifies key AI technologies and use cases for their application, these need to be quickly tested and iterated upon. This builds employee proficiency and identifies which approaches fail to add value and which show promise. Quickly killing poor concepts is just as important as identifying promising ones; the success of the innovation lab depends on failing fast.

To get an idea of how to experiment with high-impact AI applications, we can look at the approach of Sompo Holdings, a Japanese insurance company that has taken an aggressive approach to its digital transformation. In its needs assessment, Sompo identified machine learning, autonomous vehicles, and cyber-security as key to the future of its business. Sompo went on to establish innovation labs in Tokyo for machine learning, in Silicon Valley for autonomous vehicles, and in Tel Aviv for cyber-security. By surrounding itself with world-class thinkers and experts in each area of innovation, Sompo has positioned itself to quickly experiment and craft its path forward. By slowly wading in and testing the waters, managers will be able to create strategies that make a big splash.

The Robot Next Door: Understanding the Impacts of Robotic Process Automation 

Many are worried that the implementation of AI in business will have detrimental effects on the workforce. There are warnings of jobs being completely eliminated and automated. There is even a US presidential candidate running on a platform of Universal Basic Income because he believes AI will put so many workers out of a job. Right now, the focus of this fear is Robotic Process Automation (RPA), which, technically speaking, is not really AI at all. By understanding exactly what RPA is, how it works, and how it’s being rolled out across organizations, managers can help calm the worries of their workforce and begin to reveal the positive impacts this change will allow.

To understand RPA, think of all the manual copying and pasting, the repetitive and boring work that inspires neither customers nor employees. Now imagine this work is eliminated by delegating it to back-end ‘robots’. This is exactly how RPA works: the enterprise, or a group of consultants, maps out manual, repetitive processes and simply writes computer code that mimics these actions. Examples include the following (a minimal sketch of the first appears after the list):

  • Taking data from an email and using it to update a customer address in the company database
  • Sending out a replacement credit card when a customer notifies the bank their card is lost
  • Filling out and sending accounts payable notices
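
To make the first example concrete, here is a minimal sketch, assuming hypothetical email wording and table names, and using an in-memory database as a stand-in for the real system of record; a production bot would read from a mail server and write to live systems:

```python
import re
import sqlite3

# Hypothetical email body; a real RPA bot would pull this from a mail API.
EMAIL = """Subject: Address change request
Customer ID: 1042
New address: 55 Elm Street, Toronto, ON
"""

# Set up a stand-in customer database (in memory, for illustration only).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, address TEXT)")
db.execute("INSERT INTO customers VALUES (1042, '12 Old Road, Ottawa, ON')")

# Step 1: extract the structured fields a human clerk would copy by hand.
customer_id = int(re.search(r"Customer ID:\s*(\d+)", EMAIL).group(1))
new_address = re.search(r"New address:\s*(.+)", EMAIL).group(1).strip()

# Step 2: paste them into the system of record, exactly as the clerk would.
db.execute("UPDATE customers SET address = ? WHERE id = ?", (new_address, customer_id))
db.commit()

print(db.execute("SELECT * FROM customers WHERE id = 1042").fetchone())
# (1042, '55 Elm Street, Toronto, ON')
```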

One can see how this type of technology may result in the replacement or elimination of back-office admin jobs. Although this may happen in the future, a 2018 survey of 71 RPA projects showed that only one project was used to eliminate jobs. Other projects did show some job loss, but these mainly replaced overseas contract workers tasked with completing manual, repetitive tasks for a domestic company.

While RPA can provide savings of cost and time, it also provides value in a few surprising ways. First, when managers look at automating back-end tasks, they are forced to understand why each task is being done and which steps could be eliminated. Many company processes have not been examined since their establishment, and a thorough investigation of a company’s procedures can save money by eliminating useless tasks and streamlining others. Second, by eliminating the mundane and menial tasks completed by employees, managers free these workers up to be more creative and have meaningful interactions with customers.

A survey conducted in 2019 showed that more than half of respondents reported an increase in employee engagement after implementing RPA. It seems counter-intuitive that employees become more engaged when robots take over parts of their work, but when one considers that the robots are taking over the least fulfilling tasks, it begins to make sense.

As AI becomes more prevalent in the workforce, management must support their staff and provide resources to help employees adapt. Humans must become more human. Emotional connection, relationship building, and customer-centricity will become the key responsibilities of an organization’s workforce. Instead of being out of work, humans may just need to get used to their dull, robotic co-workers.

The Tortoise or The Hare: Exploring the Ideal Pace for AI Transformations

Each day, managers of large companies are told they need to make an immediate and substantial investment in AI. “Innovate or die”, they are told. In the media they are shown the latest and greatest accomplishments in AI. At home they browse Amazon, and in one click they have a new gadget delivered to their door in a day (or sometimes less). With all of the hype, noise, and uncertainty, managers may be tempted to invest in AI moonshots and skip several elementary, but important, steps in their AI transformation.

There are several examples of this happening, and one of the most prominent comes from IBM. In March 2012, IBM entered into an agreement with a well-funded US hospital to begin developing an AI application that would help physicians diagnose and treat cancer. The announcement came shortly after IBM’s Watson had defeated the greatest Jeopardy! player of all time, and the hospital received a $50 million donation to pay for the project.

Four years later, an auditor’s report revealed that the project had cost $62 million to date and had not been used to treat a single patient. Even more worrisome, the project was not integrated with any of the hospital’s systems and could not be used. After the report emerged, the CEO of the hospital resigned and the project was cancelled. Although the project had a noble cause, the hospital had neither the project management capabilities nor the operational infrastructure to pull off such a difficult feat. There is another side to this story, though.

While the project was underway, another department in the hospital was also developing products using AI. This team was pursuing projects such as a ‘care concierge’ that makes recommendations to patient families, a supervised learning algorithm to predict which patients may require financial assistance, and an automated IT support system. None of these received the hype or attention of the larger-budget project, but all of them successfully added value to the patient experience and saved the hospital money.

With the success of these smaller AI projects, the hospital is now prepared to re-engage with larger, more aspirational goals. It has a strong model development infrastructure, technical and non-technical staff with AI project experience, and all of the learnings from its initial failures. This has led it to take on a new initiative that analyzes patient data alongside genomic profiles and suggests treatments.

The takeaway is not that AI moonshots or aspirational goals should be ignored and left unfunded. It is that while large-budget, aspirational projects may get the most hype and coverage, organizations early in their AI transformation may not be ready to deliver them. Companies that succeed in transforming their businesses have many less ‘sexy’ projects going on under the hood, and these are what slowly but surely lead to the revolutionary transformations AI makes possible.

Managers need to tune out the noise. They need to reduce their anxiety about moving too slowly and see value in moving at all. When it comes to successful AI transformations, it’s much better to be the tortoise than the hare.

Putting the Puzzle Pieces Together: A Brief Guide to Deep Learning

Many of the most recognizable applications of AI are due to recent advancements in a field of Machine Learning called Deep Learning. Personal assistants like Google Assistant, Amazon Alexa, and Siri are examples of AI built on this area of ML. Although these personal assistants are prone to error, their accuracy and usefulness have greatly improved over the past few years. With more data generated than ever, and computing power continuing to improve, Deep Learning algorithms can better classify images, recognize faces, and make increasingly accurate predictions.

To conceptualize Deep Learning, think of building a jigsaw puzzle. The builder dumps the pieces onto a table and begins to sort. First they start with the border pieces, picking them all out and building the perimeter of the puzzle. Once this is complete, they may identify pieces with similar colors or patterns and group them together. Using the information from the borders, the image begins to take shape.

Deep Learning, like building a jigsaw puzzle, involves several layers. Each layer helps to break down part of the problem and contributes to identifying the solution. Each layer is made up of units called neurons, and a stack of these layers is called a neural network. The name Deep Learning refers to the fact that these neural networks often contain many layers, or ‘steps’, for solving the problem.

When a neural network identifies an image, for example an image of the letter ‘A’, it receives this image as a collection of pixels, the tiny squares that together make up the overall image. The algorithm receives each pixel as a set of three numbers representing that pixel’s color. In one layer of the neural network, the Deep Learning algorithm looks for combinations of pixels that identify the edges of the image. The next layer may look for different colors or saturations within combinations of pixels. The algorithm then determines what the image ‘looks’ like and compares it with images previously identified as the letter ‘A’ (the images the algorithm has been trained on). Neural networks require a large number of examples to successfully learn how to identify images; this is why the increase in data availability is so important to the successful application of Deep Learning.
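
As a minimal sketch of this process, the example below trains a small neural network on scikit-learn’s built-in 8×8 digit images (grayscale, so each pixel is one number rather than three) instead of letters; the principle of learning from raw pixel values is the same:

```python
# A minimal sketch: a small neural network learns to recognize digit images
# from raw pixel values (scikit-learn's built-in dataset stands in for the
# letter 'A' example above).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 images, each 8x8 = 64 pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Two hidden layers, mirroring the layered 'steps' described above.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2%}")
```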

Deep Learning dates back to 1965, with Canadian Geoffrey Hinton making important contributions in 1986. Although the mathematical and algorithmic approaches to Deep Learning were discovered more than 40 years ago, a lack of computing power and data availability limited their use. Computing power has roughly doubled every two years since then, and the spread of digital technologies has resulted in an overwhelming amount of data collection. With these advancements, Deep Learning has made a comeback and is rapidly improving the abilities (such as seeing and communicating) we’ve come to expect from AI.

Some of the success of Deep Learning is apparent in the reduction of error rates: image recognition by 41%, facial recognition by 27%, and voice recognition by 25%. As error rates fall, companies have begun to apply the technology and make it available for everyday use. Smart speakers (like the Amazon Echo), Google Translate, and unlocking your phone just by looking at the camera are all examples of Deep Learning in action. The largest technology companies have been among the first to harness the power of Deep Learning because of their ability to collect data from consumers. Every time an email is sent in Gmail, an address is entered in Apple Maps, or a purchase is made on Amazon, data is collected and fed into these neural networks.

Some other applications of Deep Learning include:

  • Diagnosing diseases from medical scans
  • Detection of defective products on an assembly line
  • Identification of fraudulent credit card transactions

Companies that take advantage of these recent developments, and allow Deep Learning to put the puzzle pieces together, can greatly improve predictive accuracy and free up human resources to provide better customer experiences.

Slow Down to Speed Up: How to Stop Losing Money on your AI Transformation

As companies take the leap and begin making investments in AI, they often come up against roadblocks along the way. Learning from the evolution of software development in the early 2000s, in what became known as DevOps, will allow companies to overcome these obstacles and improve their AI and analytics processes. By investing in what is now being called ModelOps, companies can drastically cut the time it takes to go from data collection to AI deployment.

ModelOps is a framework for standardizing the process of collecting data, exploring and applying AI models, and ultimately deploying these models into business operations. Standardization can create incredible value for an organization by reducing time to deployment and allowing projects to fail fast. So the question is: how can this be achieved, and why are so few companies focused on implementing a ModelOps framework in their business?
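
What ‘standardizing’ might look like in practice: a minimal sketch in which cleaning, scaling, and the model are bundled into a single pipeline and saved as one deployable artifact. The steps and the file name are illustrative assumptions, not a prescribed ModelOps stack:

```python
# Minimal sketch: one standardized path from raw data to a deployable model.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every project reuses the same skeleton: clean -> scale -> model.
pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # handle missing values
    ("scale", StandardScaler()),                   # consistent preprocessing
    ("model", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(f"Holdout accuracy: {pipeline.score(X_test, y_test):.2%}")

# One artifact holds preprocessing and model together, ready for deployment.
# The file name below is a hypothetical example.
joblib.dump(pipeline, "churn_model_v1.joblib")
```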

Firstly, companies encounter endless issues with their data. They have unstructured data that is difficult to access and riddled with missing values, or data containing invalid values that cannot be relied upon. At the opposite end of the spectrum are companies that do not even have data capture capabilities, leaving them without the raw material needed to apply Machine Learning algorithms.

Second, there is a shortage of skilled data scientists who can understand the data and determine which models are best for a given business problem. Even when organizations have a team of data scientists, accessing data involves many time-consuming, manual processes. 

Lastly, companies lack the technology required to let their data science teams move quickly and efficiently from data collection to model deployment. Data science resources end up assigned to maintaining old models, which limits new innovation. Companies may also have legacy systems that only allow their data science staff to program in certain coding languages; this limits the number of qualified data scientists they can hire and worsens the talent shortage.

The three issues above are the largest barriers to achieving success in AI. To combat them, organizations must first focus on the data problem. Companies must seek out and identify the most important problems in their business, whether that is increasing customer satisfaction, reducing costs, or diversifying revenue streams. Only once the problems are identified should the company investigate which data sources may improve decision making or fuel Machine Learning algorithms. By seeking out problems first, companies avoid being driven toward whatever solutions their existing data happens to make available, and they increase the likelihood of achieving specific business goals. Companies can then invest in data storage that is easily accessed by data science teams and updated in real time without human intervention. With heavy focus on, and investment in, data structuring, companies can move forward and empower their data science teams.

Embedding automation into the data collection process allows the data science team to quickly apply their models and avoid emailing Excel files back and forth. Companies should also enable their data scientists to code in their language of choice; by doing so, HR can recruit from a larger pool of qualified data scientists. Businesses can go a step further and adopt platforms that are friendly to those who do not code, further increasing the number of staff capable of working on data science projects.

Finally, once the above is in place, companies will move quickly from data to deployment. As this happens, it is extremely important to coach data scientists to save their case studies and learnings in order to build a repository of model templates for common data science problems. AI rock stars may even embed Machine Learning within the ModelOps process itself, using algorithms to quickly identify other useful algorithms that solve common problems (mind-blowing, I know!).

The above steps are intensive, technical, confusing, and difficult. Thankfully, there are several firms with the skills and expertise to help companies set up their ModelOps infrastructure, including SAS, Avanade, and Accenture. With so much at stake, it may be worthwhile to take a step back, stop dumping resources into inefficient AI projects, and begin the hard work of establishing a ModelOps architecture. Slowing down today will allow companies to speed up tomorrow and 10x their AI investment.

The AI Diversity Problem: How to Build Algorithms for Everyone

When Harper Reed gave a speech on AI at a SAS conference last week, his main thesis was about using AI for good. Allowing AI to make critical decisions with full autonomy carries massive ethical implications. He used the story of a Chinese start-up to illustrate the effects AI systems can have when teams neglect to consider diversity and ethics.

The start-up was using AI to look at an image of a human face and, using only that image, predict the person’s age. They were very excited about the technology and boasted that it had an accuracy of over 90%. The team started their demo using the founders’ faces first, and sure enough, the AI predicted their ages perfectly. Next, they passed it over to Harper, and the AI predicted his age to be 140 years old. He tried again and got the same result.

The founders were confused: why did it work well in testing but not during the demo? After asking some questions about their dataset, Harper found that the training set contained only images of Asian males in their mid-twenties. The training set perfectly represented the co-founders because they had collected images from friends and a network of people they knew. This led the AI to work very well for young Asian males, and terribly for the red-haired Harper Reed.

There are many examples of AI outputs being misogynistic or racist. Joy Buolamwini, a researcher at the MIT Media Lab, has been fighting bias in algorithms since 2015. Joy was experimenting with AI facial recognition software and noticed that it was not working for her. She initially assumed the technology was still in its infancy and would have some bugs, but noticed that her white colleagues did not have any issues. Joy made a white mask (one that looks nothing like a human face), put it on, and the software detected the mask immediately.

Facial recognition for iPhones is one thing, but imagine if a self-driving car’s recognition software made it more likely to hit black pedestrians than white ones. Joy has created the Algorithmic Justice League to help companies and researchers avoid building bias into their algorithms and ensure that AI isn’t unfairly benefiting only those who are represented in the datasets.

The examples above do not show that AI researchers are inherently racist and only building solutions for those who look like them. They instead open up a conversation about how our unconscious biases can lead to programs and algorithms that serve only those who are like us. It’s important to build training datasets that represent a diverse group of people, and in order to build diverse datasets, it is important to have diverse AI and Machine Learning teams.
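
A simple habit that would have caught the age-prediction failure before the demo is to report a model’s error per demographic group rather than as one overall average. A minimal sketch, with invented predictions and group labels:

```python
# Minimal sketch: check a model's error separately per demographic group.
# The predictions and group labels below are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B"],
    "true_age": [24,  26,  25,  27,  38,  41],
    "pred_age": [25,  26,  24,  28,  90,  95],  # wildly off for group B
})

results["abs_error"] = (results["pred_age"] - results["true_age"]).abs()
print(results.groupby("group")["abs_error"].mean())
# Group A: under a year of average error; group B: ~53 years.
# The overall average would hide that the model never saw group B.
```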

Diversity is a complex and increasingly political subject. Companies are aware of the benefits of diversity, but often fall short on their commitments. When it comes to AI, diversity is not an option, it’s a requirement. The product will not work and companies will not succeed if their algorithms are biased against a large part of the customer base. 

In the examples above, the bias was immediately visible: the algorithms simply failed for people with different skin tones. With this feedback, we need leaders in all fields of AI to pivot towards diversity and ensure that organizations are focused on building a fair and free society for all.

Can Someone Tell Me What AI Actually Is?

Since the discussion of AI began showing up on social media platforms, in the news, and in general day-to-day conversations, using the term has become second nature. I’m sure you would get a blank stare if you paused a conversation to ask what AI actually is.

The textbooks tell us it’s “the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, and problem solving.”

This definition seems to suggest that computers and humans are similar in their information processing. The brain, in fact, acts nothing like a computer. It is a system that puts together patterns among neurons. Dr. Ralph Greenspan says:

Computers record, and computers have things stored in specific places that are stable. Our brains do none of that. We do pattern recognition. Even though we are capable of logic, our brain does not operate by the principles of logic. It operates by selection of pattern recognition. It’s a dynamic network. It’s not an “if-then” logic machine.

With this added insight we land on a more precise definition. Artificial Intelligence is the process a machine undertakes to solve problems that we associate with human cognition, including perceiving, reasoning, learning, and problem solving. The key difference is the method by which the computer solves these problems.

Humans and computers may take on similar problems and may come up with similar answers, but the process by which they go from question to answer is completely different. This also gives us a better idea of what AI is capable of. AI is very good at solving logical problems of a narrow scope. Games like chess are great for AI as there are set rules and a clear winner and loser. Problems that require improvisation, creativity, and have ambiguous outcomes prove much harder for computers to take on.

This difference leads to computers being better than humans at solving certain sets of problems, and unable to offer any clear solutions to others.

Stephen Hawking and Elon Musk have warned that AI may eventually achieve the capacities needed to enslave the human race, but from what we know so far, as long as world domination remains an exercise in improvisation and creativity, we humans remain safe.

How to Lead in the New World of AI

Over the past few decades (beginning with Peter Drucker), the study of management has steadily grown. Organizations began to realize that managers and leaders are not simply born; they need to be crafted, tested, and provided a culture that enables their success. Seth Godin, the entrepreneur and author, said it best: “Management involves very little in the way of shouting, hustling, or coercion. It’s a chance to serve, instead.”

At its core, management is about articulating a set of values for the organization to follow and communicating those values so they become apparent in everything the organization does. Management provides a context of values, and individuals respond. This dynamic operates best when there is a common understanding of those values and of the broad business goals of the organization. That held true for many years, when business goals and operations were well understood by both management and operational employees. With the emergence of artificial intelligence as a scalable tool, this dynamic has changed.

Without an in-depth understanding of the complex tools that AI enables, management is unable to communicate effectively with their highly technical operational employees. Management becomes siloed from the key operations of the business. Highly technical employees build products and processes they believe to be valuable but that are not aligned with the values of management. Management misunderstands the capabilities of AI and creates strategies the technology cannot support. The result is wasted time and resources, disappointed customers, low employee engagement, and the risk of being left behind as competitors successfully increase their technological capabilities.

To solve this problem, organizations now need an AI translator: a manager who oversees the entire team. The ideal manager knows quite a bit about analytics and AI and understands the technical capabilities of the organization’s software engineers and data scientists. These managers also have the business skills to interact with upper management and can translate business strategy into a data science solution.

Over the next 12 months I will be working to become this translator at Queen’s Smith School of Business in Toronto. The training covers the mathematical and programming aspects of AI, how to operate in the Agile framework, and how to apply the learnings to the strategic priorities of businesses in several sectors. Ultimately, the course provides the guidance necessary to become a catalyst in re-imagining the operations of an enterprise through the successful application of AI.

This blog is my way of sharing some of my learnings. My hope is that it helps shape some of your thinking around the practice of management and how you can remain an effective manager in the fast-changing world of AI. 

Thanks for joining me at the beginning of this journey!