AI FOR MANAGEMENT
AI promises to disrupt industries by giving management the ability to drastically improve company-wide decision making. As a guide for implementing AI to improve decision making, it's useful to look to investing, where the difference between a good decision and a bad one can be billions of dollars. Each day, investors must sort through endless amounts of data to settle on what they believe is the best investment. Below we'll explore two investment companies that provide management teams in all industries with a framework for making better decisions.
At an AI & Venture Capital (VC) event in January, Madison Elkhazin of Georgian Partners spoke about how her VC firm has not only taken the approach of investing in AI-driven companies, but has developed internal AI capabilities as well. This approach gives a two-fold boost to Georgian's investment strategy:
- They can use their AI capabilities to identify high-quality, high-potential companies that align with their investment mandate
- They have the ability to provide their AI expertise to portfolio companies, resulting in the potential for a greater return on investment
Elkhazin told the crowd that Georgian was one of the first VC firms to develop this approach, and it has become table-stakes in the industry as other firms rush to follow. Looking at Georgian's strategy, they clearly have an advantage. It's incredibly valuable to use large amounts of data and computing power to sort through potential investments faster than any number of human analysts. Warren Buffett attributes the success of Berkshire Hathaway to its ability to immediately turn down investment opportunities that do not align with its mandate. This allows him to focus all of his time on the few opportunities that do. Good decisions are made when management can quiet the noise, limit the universe of potential options, and dig into the details of a limited number of high-quality possibilities.
Another investment company with a track record of great decision making is Bridgewater Associates, widely considered the most successful hedge fund in the world. In a 2017 TED Talk, Bridgewater founder Ray Dalio spoke about the role of radical transparency in their corporate culture. Each employee is expected to give candid and direct feedback to every other employee at Bridgewater. They are even encouraged to give Dalio direct feedback. He tells the story of receiving an email after a meeting in which an employee rated him a 'D' on his contributions to the meeting. Dalio says this not only ensures that everyone is able to receive and adjust to feedback, but also creates space where ideas can be properly challenged.
These two methodologies, narrowing the scope of possibilities and radical transparency, are valuable ideas that management can combine to develop a potent AI strategy. Management must not only aim to make better decisions through the use of AI, but also provide an environment in which the algorithmic outputs can be challenged. To move above the competition, companies will need to focus on finding experts within their industry who also possess a strong understanding of the technical details of AI. Such an individual can identify where an AI model falls short and challenge its outputs the same way that Ray Dalio encourages employees to challenge him.
As AI becomes table-stakes in VC and across all industries, competitive advantage begins to diminish. Companies that focus their efforts on hiring a new type of leader, one who deeply understands both their industry and AI, will set themselves up for the returns previously enjoyed only by the Bridgewaters and Georgians of the world.
In 2017, Kaggle announced it had surpassed 1 million active monthly users. Kaggle, for those outside the data science world, is a subsidiary of Google where Machine Learning specialists and Data Scientists can hone their skills and contribute to open-source data science projects. The platform hosts competitions and allows teams to come together and devise the best data science solutions for real-world problems. Recently the NFL hosted a competition that encouraged contributors to build a model for predicting how many yards a running back would gain on a given play. These competitions have become a breeding ground where amateurs build their skills, experts share their knowledge, and people from all backgrounds and experience levels come together for a common goal. While developing an AI Transformation Strategy, companies should take an open-source approach and include a plan to make their data available throughout the organization, allowing all team members to contribute.
Open-source competitions have a history of producing surprising contributions to science and technology. One of the most compelling examples comes from long before the world of AI. In 1714 the British Parliament offered a £10,000 prize to anyone who could devise a method to measure longitude at sea. Some of the leading scientific minds focused their considerable talents on this problem, but the purse was eventually claimed by the self-taught clockmaker John Harrison. Harrison built increasingly accurate clocks that could keep the home port's time at sea, letting sailors compare it against local time to determine their East-West position. This was a huge advancement for navigation: sailors could use the stars to determine their North-South position, but with no reliable way to determine longitude, which fixes East-West position, ships routinely went off course.
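The arithmetic that made Harrison's clocks so valuable is simple enough to sketch. The following is illustrative and not part of the original story: the Earth rotates 360 degrees every 24 hours, so each hour of difference between local solar time and the home-port clock corresponds to 15 degrees of longitude.

```python
# Illustrative arithmetic (not from the text itself): the Earth rotates
# 360 degrees in 24 hours, i.e. 15 degrees per hour, so each hour of
# difference between local solar time and the home-port clock's time
# corresponds to 15 degrees of longitude.

def longitude_from_time_difference(hours_behind_home_port):
    """Degrees of longitude west of the home port, given how many hours
    local solar noon lags the home-port clock's noon."""
    degrees_per_hour = 360 / 24  # 15 degrees
    return hours_behind_home_port * degrees_per_hour

# A ship whose local noon arrives 3 hours after noon on the home-port
# clock is 45 degrees west of its home port.
print(longitude_from_time_difference(3))  # 45.0
```

An accurate clock was the hard part; once a ship had one, the longitude calculation itself was this simple.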
What Kaggle competitions and John Harrison show us is that by opening up a network and encouraging everyone to contribute, organizations have a much better chance of reaching meaningful solutions. The first step is putting the proper security and privacy controls in place to mitigate the risks of sharing data. Once this is complete and the data is safe to release, companies must arm their workforce with new AI experimentation tools that allow those without technical expertise to participate. Several companies are developing tools that let people with no programming expertise apply Machine Learning models to data and derive useful insights. These platforms cannot replace the experience and expertise of highly educated data scientists, but they allow companies to unleash the creativity and problem-solving skills of their entire employee base and improve their chances of becoming a sophisticated, data-driven organization in the future.
Many companies have decided to pursue digital transformations and are migrating data to sources that allow for the application of AI. The decision to make this large investment of dollars and time is a step in the right direction and will set these companies up for success in the future. To go further, and gain a powerful advantage over the competition, management teams should embrace open-source principles and arm their employees with the tools they need to test the waters of an unknown future.
Many Navy SEALs have made second careers writing management books and giving speeches at Fortune 500 companies. Whether it's the self-discipline promoted by former SEAL Jocko Willink or the dedication to teamwork embedded in the culture of the armed forces, many teachings can be transferred to the world of business. One lesson, though, stands out for managers contemplating an AI transformation.
When navigating an urban environment, special forces operators can be heard saying “Slow is smooth and smooth is fast.” What they mean is that the squad needs to be moving at the ideal pace. They must maintain a speed that is slightly over a walk, but not quite a jog or a run. Maintaining this speed allows for a complete 360-degree threat assessment and quick pivots and updates to their strategic plan. When thinking about how to proceed in the uncertain environment of AI, the same concept applies.
Applying the Navy SEALs' strategy in business requires brave leadership. Management teams are often caught up in the hype and fear that their company will be left behind. This causes them to make blind investments in AI that do not align with company values and do not provide an adequate return on investment. Instead of charging ahead at full speed and risking casualties, a more measured approach is better.
Companies can use a three-step approach to move forward slowly and smoothly, ultimately advancing their AI capabilities more quickly than their competitors.
Step 1: Create a Plan
Creating a plan for AI implementation is about more than just AI. It's about dissecting your business model and taking a deep look at structural weaknesses, and at where improved decision making could lead to better customer experiences, increased revenue, and reduced costs. This exercise does not need to mention AI or other technologies, but it must be honest, realistic, and exhaustive. AI has the ability to completely revamp business models if management allows it, but this will only happen for companies who put in the hard work up front.
Step 2: Experiment & Transform
Once the planning and internal reflection are complete, the company should have an idea of the actions needed to begin its AI implementation efforts. For the majority of companies, this looks like a digital transformation: transferring legacy systems and paper processes to the digital world and creating unified data sources that can be used for future AI applications. These digital transformations are often long, frustrating, and full of failures. Companies must lean in to the failure, dust themselves off, and keep going.
Many companies who encounter failures and hurdles during their digital transformation react by putting their AI strategy on hold. Management believes they should wait until everything is digitized and the company has a perfect database with all the data available. Rather than focusing wholly on the digital transformation, companies should begin experiments and small test cases for AI applications. By building and testing with experiments, companies can improve their internal AI capabilities and grow their data science teams from the inside. These companies will be several steps ahead when the shiny new database finally comes to life.
Step 3: Grow & Scale
With the digital transformation complete and several AI experiments under their belt, management can finally move ahead with large scale AI implementation. This is the time where the technology can truly affect the operations and profitability of the company. They have the data in a format that allows for large scale application and may already have several ideas and test-cases ready to bring to market.
By moving at the ideal pace, management can avoid getting stuck standing still or ploughing forward too quickly. Slow movement puts you out of business as competitors begin capitalizing on AI implementation. Moving too quickly puts company money at risk as large investments are lost. When the world is screaming to innovate and get moving on AI, management would be best served to remain calm and remember, “Slow is smooth and smooth is fast.”
Almost all of the recent innovations in AI are due to the ability to amass large quantities of data. The discipline of Machine Learning takes this data and outputs insights never available before. AI can accurately identify images and individual faces, it can process and translate text, and it can reliably respond to speech. The companies with access to the most data, in the proper form, are able to build the most powerful AI applications. Think of Amazon, which processes millions of transactions; Google, which processes millions of searches; and Facebook, which processes millions of posts and pictures each day. All of this data is captured in a format that can be successfully fed into algorithms, which then provide the insights and outputs valuable to businesses. As companies attempt to catch up to the big players, they need to avoid a mindless approach to data collection and ensure they build robust data-generating processes.
Companies have been collecting data for decades, if not centuries. Data-driven companies have the ability to move quickly and make informed decisions to drive profit. The problem is that companies running legacy IT systems are not collecting data in digital form, in the quantities needed, or in the format required to successfully train Machine Learning algorithms. This leaves management in a position where they need to ask two key questions:
- Should we go through a digital transformation?
- What data is essential to our decision making and AI applications?
The ability to capture insights, know your clients better, drive down costs, and make better decisions is something almost all companies can benefit from. This makes the answer to the first question simple for those who wish to maintain market share as competitors with digital capabilities enter the market. The second question leads management to an analysis of what data to capture and which items to focus on while going digital.
The time preceding a digital transformation is when management needs to fully analyze the current business model, understand use cases for AI and other technologies, and see where the transformation will take them. Earlier we noted that many companies were collecting data long before the establishment of the internet. It would be foolish not to take a step back and determine which data collection processes are essential, and which are outdated and provide no value. Charlie Munger of Berkshire Hathaway tells a story of how poor data collection can lead to poor decision making:
The water system of California was designed looking at a fairly short period of weather history. If they’d been willing to take less perfect records and look an extra hundred years back, they’d have seen that they weren’t designing it right to handle drought conditions which were entirely likely. You see again and again- that people have some information they can count well and they have other information much harder to count. So they make the decision based only on what they can count well. And they ignore much more important information because its quality in terms of numeracy is less- even though it’s very important in terms of reaching the right cognitive result. All I can tell you is that around Wesco and Berkshire, we try not to be like that. We have Lord Keynes’ attitude, which Warren quotes all the time: “We’d rather be roughly right than precisely wrong.” In other words, if something is terribly important, we’ll guess at it rather than just make our judgement based on what happens to be easily countable.
Although Charlie is speaking about a data collection process that did not involve Machine Learning techniques, his statements remain true today. Management can fool themselves by making decisions on available information rather than the most important information. In the new world of AI, management has an incredible opportunity to rethink the way they collect data and use it to make decisions. They mustn't squander this opportunity by doing what is easy: collecting only small sample sizes, or simply duplicating old data collection processes. Those who take the time to reflect on the past will be better prepared for the future.
A 2018 Gartner study found that 25% of companies are experimenting with AI, or 'just dipping their toes in', and 49% have no AI plans whatsoever. When a company's decision makers lack technological expertise, it is difficult to craft credible strategies: management cannot properly weigh risks and rewards, respond to investor expectations, or adequately engage the workforce. To avoid abandoning the company's existing business model too early, and unnecessarily risking investor capital, there are several steps managers can take to move into the future.
Step 1: Rely on Internal Experts
Early in an AI transformation, there will not be widespread understanding of the technology. As the company's AI capabilities grow, knowledge will disperse through the organization and proficiency will rise. Until this occurs, management will need to create a centre-of-expertise (COE) to aid in understanding and decision making. It makes sense for this centre to pull heavily from existing IT and analytics departments, as these team members may already understand AI or will have the baseline skills to get up to speed quickly. If required, the company can pull from external consulting firms in the early stages, but will want to make sure the consultants work with team members so the learnings remain within the company.
Step 2: Identify Opportunities
Although management may lack the understanding needed to craft a quality AI strategy, they have (hopefully) been crafting high-quality business strategies for years. Management must share their understanding of the company’s business model with their AI COE. In their explanation, management should focus on:
- Knowledge Bottlenecks: areas of the business where there are only one or two subject matter experts
- Knowledge Scaling: areas of the business where it is difficult to quickly train and deploy workers
- Knowledge Gaps: areas of the business where decisions are made with incomplete or incorrect information
Step 3: The Innovation Lab
By completing the needs assessment before conceptualizing AI applications, the organization avoids inventing problems to fit preconceived solutions. With this approach, management can stay focused on creating value for the enterprise and improving the functioning of the business model. Once briefed on the areas of opportunity, the AI COE can conceptualize the ways AI can be applied. When the COE identifies key AI technologies and use-cases for their application, these need to be tested and iterated upon quickly. This builds employee proficiency and identifies which approaches fail to add value and which show promise. Quickly killing poor concepts is just as important as identifying promising ones; the success of the innovation lab depends on failing fast.
To get an idea of how to experiment with high-impact AI applications, we can look at the approach of Sompo Holdings. Sompo is a Japanese insurance company that has taken an aggressive approach to its digital transformation. In its needs assessment, Sompo identified Machine Learning, autonomous vehicles, and cyber-security as key to the future of its business. Sompo went on to establish innovation labs in Tokyo for Machine Learning, in Silicon Valley for autonomous vehicles, and in Tel Aviv for cyber-security. By surrounding itself with world-class thinkers and experts in each area of innovation, Sompo has positioned itself to quickly experiment and craft its path forward. By slowly wading in and testing the waters, managers will be able to create strategies that make a big splash.
Many are worried that the implementation of AI in business will have detrimental effects on the workforce. There are warnings of jobs being completely eliminated and automated. There is even a US presidential candidate running on a platform of Universal Basic Income because he believes AI will put so many workers out of a job. Right now, the focus of this fear is Robotic Process Automation (RPA), which, technically speaking, is not really AI at all. By understanding exactly what RPA is, how it works, and how it's being rolled out across organizations, managers can help calm the worries of their workforce and begin to reveal the positive impacts this change will allow.
To understand RPA, think of all the manual copying and pasting, the repetitive and boring work that inspires neither customers nor employees. Now imagine this work is eliminated by delegating it to back-end 'robots'. This is exactly how RPA works. The enterprise, or a group of consultants, maps out manual, repetitive processes and writes computer code that mimics these actions. Examples include:
- Taking data from an email and using it to update a customer address in the company database
- Sending out a replacement credit card when a customer notifies the bank their card is lost
- Filling out and sending accounts payable notices
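As a minimal sketch of the first example above: the email format, field names, and in-memory 'database' here are all invented for illustration, but they show the shape of what an RPA 'robot' does, reading structured text and performing the update a human would otherwise copy and paste.

```python
import re

# Hypothetical sketch of an RPA 'robot' that reads an address-change email
# and updates a customer record. Field names and the email layout are
# invented for illustration; a real deployment would talk to actual systems.

customer_db = {"C-1001": {"name": "A. Customer", "address": "12 Old Road"}}

def update_address_from_email(email_body, db):
    """Mimic the manual copy-and-paste step: find the customer ID and the
    new address in the email, then write the address into the database."""
    customer_id = re.search(r"Customer ID:\s*(\S+)", email_body).group(1)
    new_address = re.search(r"New address:\s*(.+)", email_body).group(1).strip()
    db[customer_id]["address"] = new_address
    return customer_id

email = """Subject: Address change
Customer ID: C-1001
New address: 34 New Street
"""

update_address_from_email(email, customer_db)
print(customer_db["C-1001"]["address"])  # 34 New Street
```

The 'intelligence' here is just pattern matching on a predictable process, which is why RPA is best described as automation rather than AI.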
One can see how this type of technology may result in the replacement or elimination of back-office admin jobs. Although this may happen in the future, a 2018 survey of 71 RPA projects showed that only one project was used to eliminate jobs. Other projects did show some job loss, but these mainly replaced overseas contract workers tasked with completing manual, repetitive tasks for a domestic company.
While RPA can save cost and time, it also provides value in a few surprising ways. First, when managers look at automating some of their back-end tasks, they are forced to understand why each task is being done and which steps could be eliminated. Many company processes have not been investigated since their establishment, and a thorough investigation of a company's procedures can save money by eliminating useless tasks and streamlining others. Second, by eliminating the mundane and menial tasks completed by employees, managers can free up these workers to be more creative and have meaningful interactions with customers.
A survey conducted in 2019 showed that more than half of respondents reported an increase in employee engagement after implementing RPA. It seems counter-intuitive that employees become more engaged when robots take over parts of their work; when one considers that the robots are taking over the least fulfilling tasks, it begins to make sense.
As AI becomes more prevalent in the workforce, management must support their staff and provide resources to help employees adapt. Humans must become more human. Emotional connection, relationship building, and customer-centricity will become the key responsibilities of an organization’s workforce. Instead of being out of work, humans may just need to get used to their dull, robotic co-workers.
Each day, managers of large companies are told they need to make an immediate and substantial investment in AI. "Innovate or die", they are told. In the media they are shown the latest and greatest accomplishments in AI. At home they browse Amazon, and in one click a new gadget is delivered to their door in a day (or sometimes less). With all of the hype, noise, and uncertainty, managers may be tempted to invest in AI moonshots and skip several elementary, but important, steps in their AI transformation.
There are several examples of this happening. One of the most prominent comes from IBM. In March 2012, IBM entered an agreement with a well-funded US hospital to begin developing an AI application that would help physicians diagnose and treat cancer. The announcement came shortly after IBM's Watson had defeated the greatest Jeopardy! player of all time, and the hospital received a $50 million donation to pay for the project.
Four years later, an auditor's report revealed that the project had cost $62 million to date and had not been used to treat a single patient. Even more worrisome, the project was not integrated with any of the hospital's systems and could not be used. After the report emerged, the CEO of the hospital resigned and the project was cancelled. Although the project had a noble cause, the hospital had neither the project management capabilities nor the operational infrastructure to pull off such a difficult feat. There is another side to this story, though.
During the time the project was underway, another department in the hospital was also developing products using AI. This team pursued projects such as a 'care concierge' that makes recommendations to patient families, a supervised learning algorithm that predicts which patients might require financial assistance, and an automated IT support system. None of these received the hype or attention of the big-budget project, but all of them successfully added value to the patient experience and saved the hospital money.
With the success of these smaller AI projects, the hospital is now prepared to re-engage with larger, more aspirational goals. The hospital has a strong model development infrastructure, technical and non-technical staff with AI project experience, and all of the learnings from its initial failures. This has led it to take on a new initiative that analyzes patient data alongside genomic profiles and suggests treatments.
The takeaway is not that AI moonshots or aspirational goals should be ignored and left unfunded. Rather, while big-budget, aspirational projects may get the most hype and coverage, organizations early in their AI transformation may not be ready to deliver them. Companies that successfully transform their businesses have many less glamorous projects going on under the hood, and these are what slowly but surely lead to the revolutionary transformations AI makes possible.
Managers need to tune out the noise. They need to reduce their anxiety about moving too slowly and see value in moving at all. When it comes to successful AI transformations, it’s much better to be the tortoise than the hare.
Many of the most recognizable applications of AI are due to recent advancements in a field of Machine Learning called Deep Learning. Personal assistants like Google Assistant, Amazon Alexa, and Siri are examples of AI built on this area of ML. Although these assistants are prone to error, their accuracy and usefulness have greatly improved over the past few years. With more data being generated than ever, and computing power continuing to improve, Deep Learning algorithms can better classify images, recognize faces, and make increasingly accurate predictions.
To conceptualize Deep Learning, think of building a jigsaw puzzle. The builder dumps the pieces onto a table and begins to sort. First, they start with the border pieces; picking out all of these pieces and beginning to build the perimeter of the puzzle. Once this is complete, they may start identifying pieces that have similar colors or patterns and group them together. Using the information from the borders, the image begins to take shape.
Deep Learning, like building a jigsaw puzzle, involves several layers. Each layer helps break down part of the problem and contributes to identifying the solution. Each layer is made up of units called neurons, and connected layers of neurons form a neural network. The name Deep Learning refers to the fact that these neural networks often contain several layers, or 'steps', to solve the problem.
When a neural network identifies an image, for example an image of the letter 'A', it receives this image as a collection of pixels. Pixels are the tiny squares that together make up the overall image. The algorithm receives each pixel as a set of 3 numbers representing that pixel's color. In one layer of the neural network, the Deep Learning algorithm looks for combinations of pixels and tries to identify the edges of the image. The next layer may look for different colors or saturation within combinations of pixels. The algorithm then determines what the image 'looks' like and compares it with images that have previously been labeled as the letter 'A' (that is, images the algorithm has been trained on). Neural networks require a large number of examples to successfully learn how to identify images; this is why the increase in data availability is so important to the successful application of Deep Learning.
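The layered mechanics can be sketched in a few lines. The weights below are made-up numbers, so this 'network' is untrained: it only shows how pixel values flow through layers of neurons, not real letter recognition.

```python
# Toy illustration of the layered mechanics of a neural network.
# The weights are fixed, invented numbers, so this network is untrained;
# it demonstrates the plumbing, not genuine recognition.

def relu(x):
    """A common neuron activation: pass positive signals, zero out the rest."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One layer: each neuron computes a weighted combination of all inputs."""
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 3x3 black-and-white 'image', flattened into 9 pixel values.
image = [0, 1, 0,
         1, 0, 1,
         1, 1, 1]

# Layer 1 (2 neurons) might respond to edge-like pixel combinations;
# layer 2 (2 neurons) scores two candidate letters, 'A' and 'B'.
hidden = layer(image, weights=[[0.5] * 9, [-0.2] * 9], biases=[0.0, 1.0])
scores = layer(hidden, weights=[[1.0, -1.0], [0.3, 0.2]], biases=[0.0, 0.0])

prediction = "A" if scores[0] > scores[1] else "B"
print(prediction)  # A
```

Training is the process of adjusting those weights across many labeled examples until the scores reliably favor the correct letter, which is why so much data is needed.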
Deep Learning was invented in 1965, with the Canadian researcher Geoffrey Hinton making important contributions in 1986. Although the mathematical and algorithmic approaches to Deep Learning were discovered more than 40 years ago, a lack of computing power and data availability limited their use. Computing power has doubled roughly every two years since Deep Learning was invented, and the introduction of digital technologies has resulted in an overwhelming amount of data collection. With these advancements, Deep Learning has made a comeback and is rapidly improving the abilities, such as seeing and communicating, that we've come to expect from AI.
Some of the success of Deep Learning is apparent in the reduction of error rates: 41% in image recognition, 27% in facial recognition, and 25% in voice recognition. As error rates fall, companies have begun to apply the technology and make it available for everyday use. Smart speakers (like the Amazon Echo), Google Translate, and unlocking your phone just by looking at the camera are all examples of Deep Learning in action. The largest technology companies have been among the first to harness the power of Deep Learning because of their ability to collect data from consumers. Every time an email is sent in Gmail, an address is entered in Apple Maps, or a purchase is made on Amazon, data is being collected and fed into these neural networks.
Some other applications of Deep Learning include:
- Diagnosing diseases from medical scans
- Detection of defective products on assembly lines
- Identification of fraudulent credit card transactions
Companies who take advantage of these recent developments and allow Deep Learning to put the puzzle pieces together can greatly improve predictive accuracy and free up human resources to provide better customer experiences.
As companies take the leap and begin making investments in AI, they often come up against roadblocks along the way. Learning from the evolution of software development in the early 2000s, what became known as DevOps, will allow companies to overcome these obstacles and improve their AI and analytics process. By investing in what is now being called ModelOps, companies can drastically cut the time it takes to go from data collection to AI deployment.
ModelOps is a framework for standardizing the process of collecting data, exploring and applying AI models, and ultimately deploying these models into business operations. Standardization can lead to incredible value creation for an organization by reducing time to deployment and allowing projects to fail fast. So the question is: how can this be achieved, and why are so few companies focused on implementing a ModelOps framework in their business?
First, companies encounter endless issues with their data. They have unstructured data that is difficult to access and riddled with missing values. The data may also hold invalid values that cannot be relied upon. At the opposite end of the spectrum are companies that do not even have data capture capabilities, leaving them without the data they need to apply Machine Learning algorithms.
Second, there is a shortage of skilled data scientists who can understand the data and determine which models are best for a given business problem. Even when organizations have a team of data scientists, accessing data involves many time-consuming, manual processes.
Lastly, companies lack the technology required to allow their data science teams to go quickly and efficiently from data collection to model deployment. Scarce data science resources are then assigned to maintaining old models, which limits new innovation. Companies may also have legacy systems that only allow their data science staff to program in certain coding languages; this limits the number of qualified data scientists they can hire and worsens the talent shortage.
The three issues above are the largest barriers to achieving success in AI. To combat them, organizations must first focus on the data problem. Companies must seek out and identify the most important problems in their business, whether that is increasing customer satisfaction, reducing costs, or diversifying revenue streams. Only once the problems are identified should the company investigate which data sources may improve decision-making or fuel Machine Learning algorithms. By seeking out problems first, companies avoid being driven toward whatever solutions their existing data happens to make available, and increase the likelihood of achieving specific business goals. Companies can then invest in data storage that is easily accessed by data science teams and updated in real time without human intervention. With heavy focus and investment on data structuring, companies can move forward and empower their data science teams.
Embedding automation into the data collection process allows the data science team to quickly apply their models and avoid emailing excel files back and forth. Companies should also enable their data scientists to code in their language of choice. By doing this, HR can recruit from a larger pool of qualified and intelligent data scientists. Businesses can potentially go a step further and adopt platforms that are friendly to those who do not code. This further increases the number of staff capable of working on data science projects.
Finally, once the above is successful, companies will be going quickly from data to deployment. As this happens, it is extremely important to coach data scientists to save their case studies and learnings in order to build a repository of model templates for common data science problems. AI rock stars may even embed Machine Learning within the ModelOps process to have algorithms quickly identify other useful algorithms that solve common problems (mind-blowing, I know!).
The above steps are intensive, technical, confusing and difficult. Thankfully, there are several firms with the skills and expertise to assist companies in setting up their ModelOps infrastructure. These include: SAS, Avanade, and Accenture. With so much at stake it may be worthwhile to take a step-back, stop dumping resources into inefficient AI projects, and begin the hard work of establishing a ModelOps architecture. Slowing down today will allow companies to speed up tomorrow and 10x their AI investment.
When Harper Reed gave a speech on AI at a SAS conference last week, his main thesis was about using AI for good. When allowing AI to make critical decisions with full autonomy, there are massive ethical implications. He used a story of a Chinese start-up to illustrate the effects that AI systems can have when teams neglect to consider diversity and ethics.
The start-up was using AI to look at an image of a human face and, using only that image, predict the person's age. They were very excited about the technology and boasted that it had an accuracy of over 90%. The team started their demo using the founders' faces first. Sure enough, the AI predicted their ages perfectly. Next, they passed it over to Harper, and the AI predicted his age to be 140 years old. He tried again and got the same result.
The founders were so confused! Why did it work well in testing, but not during the demo? After asking some questions about their data set, Harper found that the training set only contained images of mid-twenties, Asian males. The training set perfectly represented the co-founders because they collected images from friends and a network of people they knew. This led the AI to work really well for young Asian males, and terribly for the red-haired Harper Reed.
There are many examples of AI outputs being misogynistic or racist. Joy Buolamwini is a researcher at the MIT Media Lab and has been fighting bias in algorithms since 2015. Joy was experimenting with an AI facial recognition software and noticed that it was not working for her. She initially assumed the technology was still in its infancy and would have some bugs, but noticed that her white colleagues did not have any issues. Joy made a white mask (that looks nothing like a human face), put it on, and the software detected the mask immediately.
Facial recognition for iPhones is one thing, but imagine if the recognition software of a self-driving car was more likely to hit black pedestrians than white. Joy has created the Algorithmic Justice League to help companies and researchers avoid building bias in their algorithms and ensure that AI isn’t unfairly benefiting those who are represented in the datasets.
The examples above do not show that AI researchers are inherently racist and only building solutions for those who look like them. They instead open up a conversation of how our unconscious biases can lead to creating programs and algorithms that serve only those who are like us. It’s important to build the training dataset to represent a diverse group of people. In order to build diverse datasets, it is important to have diverse AI and Machine Learning teams.
Diversity is a complex and increasingly political subject. Companies are aware of the benefits of diversity, but often fall short on their commitments. When it comes to AI, diversity is not an option, it’s a requirement. The product will not work and companies will not succeed if their algorithms are biased against a large part of the customer base.
In the examples above, the algorithms immediately show when they do not respond to people of different colors. With this feedback, we need leaders in all fields of AI to pivot towards diversity and ensure that organizations are focused on building a fair and free society for all.
In 2018, Venture Capital (VC) firms invested over $9 billion in AI related startups. It used to be that putting “Blockchain” in your startup name would guarantee your series A funding. Now switch the name to “Anything AI” and you’ll have similar success. Joking aside, the enthusiasm for AI-related startups is approaching that of internet and software companies in the early 2000’s and that should be cause for concern.
AI is an incredibly complicated topic. It is highly technical, and it often takes a PhD in mathematics to decipher the complex algorithms involved. Any time you pair a complex topic with enthusiasm from a non-technical investment community, a 'speculative bubble' can appear. Think back to the most recent financial crisis of 2008. It is partially attributed to investors placing large sums of money into financial derivatives that they did not fully understand. Once it was discovered that the underlying quality of the investments was poor, the bubble popped and the stock market spiralled out of control.
When investors see large dollars flowing to a single technology or sector they tend to follow along to avoid missing out on the opportunity. The most famous example of investors abandoning reason to make a quick buck is “Tulip Mania” of the 1600’s. The Dutch were buying and selling tulip bulbs like stocks. At its peak, a rare tulip bulb could go for several thousand dollars or about 10 times the annual salary of a skilled craftsman at the time. This bubble popped and many lost everything they owned. This is all to say that when investors see large sums of money flowing into AI, they may invest without understanding the technology or completing an assessment of whether it will provide investment returns over time.
The above is meant to be a warning, not a recommendation against investments in AI. $9 billion is a lot of money, and there are many savvy investors who understand AI and have completed the required due diligence before putting their capital at risk. Instead of avoiding AI investments altogether, VCs and other investors should be asking: "How can I de-risk my investment with an understanding of the technology and its potential for future returns?"
Reducing the risk of AI investments and making strong, research-based decisions should follow the same decision-making process as any other investment. The typical investment questions still apply: What is the product? How does it benefit customers? What are its competitive advantages? What are the risks of fast-paced growth? The difference is that the product is an algorithm, or a new way of providing a service with machine intelligence as the driving force, and it takes highly specialized domain expertise to properly assess the opportunity.
To understand an AI investment opportunity, VCs could attempt to hire a PhD in Computer Science or Machine Learning and leverage their expertise. Unfortunately, this task is becoming more and more difficult. Element AI, an AI startup, estimates that there are 144,000 AI-related job openings in the US and only 26,000 qualified workers applying for AI-related positions.
With firms seeking talent to help them maximize returns on AI investment, this is the time for business professionals to begin diving in and getting a more complete understanding of the technology. Many are worried about AI disrupting the workforce, and this is valid, but those who face the challenge head-on will be able to adapt, understand the technology, and make an impact on the new economy.
The most successful companies of the last 10 years have harnessed the capabilities of Machine Learning. Netflix and Spotify serve up movies and songs users want, when they want them. They've crafted an experience for each individual and have done so at scale. On the other hand, some companies have dramatically failed to launch ML-powered products. When starting a new ML project it is extremely important to determine the impact predictions will have on your customers, and to decide on a level of predictive accuracy customers will accept. To illustrate, we'll look at several examples of ML projects and how the measures of accuracy should differ to provide the best customer experience.
Let's consider a hypothetical classification algorithm which takes images and tells the user if the image is, or is not, a jar of peanut butter. The algorithm is tested on a set of 100 images. 95 of the images are of peanut butter and the other 5 are raspberry jelly. If the algorithm predicts that every image is peanut butter, it will be correct 95% of the time! This example makes it clear that managers cannot measure success solely on how often the algorithm gets the answer right. A good metric, in this case, must weigh performance on both classes evenly: the images classified correctly and those classified incorrectly. By establishing a measure of success that accounts for both positive and negative predictions (predicting peanut butter and not peanut butter), the manager gives the data scientist the direction they need to fine-tune the model.
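A quick sketch of this 'accuracy paradox', using the same 95/5 image split described above:

```python
# Toy evaluation of a "predict peanut butter every time" classifier,
# mirroring the 95 peanut butter / 5 jelly image set described above.
labels = ["pb"] * 95 + ["jelly"] * 5
predictions = ["pb"] * 100  # the lazy classifier

correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)  # 0.95 -- looks great!

# Balanced accuracy averages the per-class recall instead,
# so the ignored jelly class drags the score down.
def recall(cls):
    relevant = [(p, y) for p, y in zip(predictions, labels) if y == cls]
    return sum(p == y for p, y in relevant) / len(relevant)

balanced_accuracy = (recall("pb") + recall("jelly")) / 2  # (1.0 + 0.0) / 2 = 0.5

print(accuracy, balanced_accuracy)
```

The headline accuracy of 95% collapses to a balanced score of 50%, which is no better than a coin flip.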
Now consider a non-profit that wants to use ML to scan Twitter posts, predict if someone is in distress, and then alert emergency services. The manager needs to take a look at this project and establish the risks of incorrect predictions. After interacting with officials from the emergency services department, the manager was told they do not trust social media and want to minimize the number of times they are called incorrectly. With this knowledge in mind, the manager needs to establish a metric that focuses mainly on the accuracy of positive predictions. In other words, the model should be optimized to predict an individual is in distress only when they are truly in distress. Unlike the example above, the model does not need to be tuned to predict the negative case (not in distress) with high accuracy; it only needs to ensure that every call it does trigger to emergency services is a correct one.
A final example comes from the medical field. There has been discussion about using ML to identify cancer from images of patient scans. In this example, the worst case is sending home a patient who has been identified as being cancer-free when, in fact, they do have cancer. This is called a false negative: predicting 'no cancer' incorrectly. The data scientists would be less worried about predicting cancer for those who do not have cancer (those patients will engage in follow-up tests with relatively little harm done), and must focus on tuning the model to make sure doctors do not send home patients incorrectly. The stakes are high in this case, and the managers making the decision to roll out this ML model need to establish a proper metric for success early in the process.
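The scenarios above come down to choosing which error rate to optimize. A short illustration, with made-up confusion-matrix counts for the cancer screening example:

```python
# Illustrative (invented) confusion-matrix counts for a hypothetical cancer screen.
tp, fp = 80, 40   # predicted cancer: 80 truly have it, 40 do not
fn, tn = 5, 875   # predicted no-cancer: 5 of them actually have cancer

precision = tp / (tp + fp)            # of all "cancer" calls, how many were right
recall = tp / (tp + fn)               # of all true cancer cases, how many were caught
false_negative_rate = fn / (fn + tp)  # the dangerous "sent home sick" error

# The Twitter-distress model above should optimize precision;
# the cancer model should minimize the false negative rate (maximize recall).
print(round(precision, 3), round(recall, 3), round(false_negative_rate, 3))
```

With these numbers, precision is only 0.667 (many false alarms) while recall is 0.941; whether that trade-off is acceptable is exactly the decision the manager must make.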
KPIs (key performance indicators) and OKRs (objectives and key results) are nothing new in business. It has long been the practice of management to create the proper incentives and goals for company success. With the emergence of AI and ML, managers now need to arm themselves with an understanding of these new metrics and how they can be used to create incentives for their data scientists. By providing incentives that align with the project's goals, managers can create products that exceed expectations.
Andrew Ng, one of the world’s most prominent thinkers on AI, says the best way to implement AI into your business is to start small. In a recent talk, he gives the example of his own experience at Google and how he used Machine Learning (ML) to add value to speech recognition and Google Maps before tackling bigger business problems in the advertising department. Before managers even start thinking about AI, though, they must first identify customer problems and consider all possible solutions to solve them. Below are the 4 steps a manager should follow when using ML to solve business problems.
Step 1: Challenge your Assumptions
One of the biggest mistakes companies can make in the new world of AI is to look for or create problems because they think AI would do a good job of solving them. They create chatbots that nobody asked for or automate tasks that benefit from human touch. By speaking with customers and identifying their critical pain points, the company can avoid creating problems and start solving them. Only once proper customer discovery has been completed can the manager move on to identifying the best technologies to solve the problem.
Step 2: Consider All Solutions
Managers can add immense value with an understanding of the strengths and limitations of AI. Good leaders can craft strategy with AI in mind, but it takes great leaders to understand situations in which using AI would not be beneficial. By understanding problems that cannot be solved by AI, managers help save the company wasted time and money. Once the manager has decided that AI is (or isn't) a good fit, they need to identify available data sources.
Step 3: Find the Data
The assessment of solutions in step 2 should already include a preliminary look at the data available and whether or not there is enough of it to successfully train ML algorithms. In step 3, the manager must pin down exactly what data the ML algorithms will need and what insights they can provide.
For example, a bank is hoping to alert customers when they detect fraud in their accounts. In order to create an algorithm to detect fraud, it will need to train on examples of fraudulent transactions that occurred in the past. In this case, the data needed is transactions with a label of ‘fraud’ or ‘not fraud’ for each.
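A minimal sketch of what such a labeled dataset might look like; the field names and values are illustrative, not taken from any real banking system:

```python
# A minimal sketch of the labeled data a fraud-detection model trains on.
# Fields and values are invented for illustration.
transactions = [
    {"amount": 25.00, "country": "CA", "hour": 14, "fraud": 0},
    {"amount": 9800.0, "country": "RU", "hour": 3,  "fraud": 1},
    {"amount": 43.10, "country": "CA", "hour": 19, "fraud": 0},
    {"amount": 7200.0, "country": "NG", "hour": 2,  "fraud": 1},
]

# Supervised learning separates the features from the label ('fraud'),
# then learns a mapping from one to the other.
X = [{k: v for k, v in t.items() if k != "fraud"} for t in transactions]
y = [t["fraud"] for t in transactions]

print(len(X), y)  # 4 feature rows, labels [0, 1, 0, 1]
```

The key point for the manager is that the label column must exist: without historical transactions already marked 'fraud' or 'not fraud', there is nothing for the algorithm to learn from.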
Step 3 can go terribly wrong if steps 1 and 2 were not properly completed. Without a definitive customer problem to solve and an analysis of the best solution, it will be terribly difficult to identify the proper data sources needed. By completing the prior steps correctly, the manager will have a strong basis for discussion with the data scientist and can ensure that the whole team is working towards a solution that creates the greatest value for customers.
Step 4: Identify the Best Algorithm
Once the data scientist has collected all of the data and completed some preprocessing, managers can help determine which algorithm is the best fit. Managers do not need to know the specifics of the algorithms: what they are called, how they work, and so on. Instead, they only need to establish an acceptable level of error in the output and the required level of explainability of the algorithm.
Would it be acceptable to the business if the algorithm for detecting fraud was correct only 50% of the time? By establishing metrics and measurements for success, managers can ensure the data scientist is working towards a common business goal.
Leaders must also determine the importance of model explainability. Customers may not wish to know how the bank decided a transaction was fraudulent, but it may be very important to explain why an algorithm rejected their loan application.
By understanding customers' expectations, and how they interact with an algorithm's outputs, managers can greatly improve the customer experience and avoid creating problems with algorithms that are inaccurate or unexplainable. Carefully following the steps above will lead to high-value ML solutions that add direct value to the business.
To avoid losing substantial time and money on AI projects, managers must have a strong understanding of how data is processed. Data processing involves actions data scientists undertake to transform dirty, real world data, into clean, understandable data. Machine learning algorithms can only provide valid results and predictions if data is free from errors and is correctly formatted. As with many things in life, “Garbage in equals garbage out.”
Part of data processing includes searching for outliers or impossible data points and inspecting them. For example, suppose a dataset includes clients' dates of birth. Filtering this data to inspect ages over 90 or 100 years may reveal data points with birth years in the 1800's. The data scientist could then remove these data points to avoid confusing the model.
Another important part of data processing involves handling missing values. Datasets are often incomplete, and data scientists must decide whether to delete entire entries that contain missing elements or to insert a placeholder (perhaps the median or most frequent value). If the data scientist determines that there are too many missing values in the dataset, the manager must decide whether it is cost-effective to collect more data, proceed with the current data, or kill the project. This decision should be made with a solid understanding of the costs of collection or non-collection, as well as the likelihood that new data will have the completeness the current set is lacking.
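As a sketch of the median-placeholder option described above (pure Python, with `None` standing in for a missing value):

```python
import statistics

# Median imputation for a column with missing values (None),
# one common alternative to dropping the incomplete rows entirely.
ages = [34, None, 41, 29, None, 38]

observed = [a for a in ages if a is not None]
median_age = statistics.median(observed)  # median of 29, 34, 38, 41 -> 36.0

imputed = [a if a is not None else median_age for a in ages]
print(imputed)  # [34, 36.0, 41, 29, 36.0, 38]
```

The median is often preferred over the mean here because it is not dragged around by the very outliers the previous step was hunting for.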
Once the manager has communicated with the data scientist and determined the data quality is satisfactory to move forward, the training can begin! This is where machine learning terminology can cause confusion. The data scientist is not standing by their computer with a stopwatch and a whistle, shouting at the data to run faster. Instead, they write a few lines of code to split the dataset into two. One set is called the 'training set' and the second the 'test set'. It is common to use 80% of the data in the training set and 20% in the test set.
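The split really is just a few lines of code. A minimal sketch, using an illustrative dataset of 100 points:

```python
import random

# An 80/20 train/test split, as described above. The shuffle matters:
# without it, any ordering in the raw data (e.g. by date) leaks into the split.
data = list(range(100))  # stand-in for 100 data points

random.seed(42)          # fixed seed so the split is reproducible
random.shuffle(data)

cut = int(len(data) * 0.8)
train_set, test_set = data[:cut], data[cut:]

print(len(train_set), len(test_set))  # 80 20
```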
Training the model occurs, as one may think, on the training set. This means that the algorithm is run on all of the data points in the set and it outputs a formula or methodology that will be used to predict or classify future data points. Once the data scientist is satisfied with the outputs and fine tunes the model, it is time to put it to the test.
Since the data points in the test set are different from those in the training set, testing will determine how well the model generalizes to new data. By running the new model on the test set, the data scientist can compare the real output values (or labels) of the test set to the model's predicted output values. When the model's outputs are reasonable as compared to the actual outputs, there is low generalization error, and the model reacts well to new data.
With high generalization error, the model has learned the training data well but will not be useful in real life: it does not provide useful outputs for data that represents new situations not seen in the training set. To avoid training a model with high generalization error, the data scientist could use a more sophisticated split. They could run an algorithm to identify data points that are most like each other (called segments in the dataset) and make sure that a proportional number of data points from each segment lands in both the test set and the training set. This helps ensure that the data in both sets represents the overall data population.
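A sketch of such a stratified split, assuming the segments have already been identified (here as a made-up 'A'/'B' label):

```python
import random

# A stratified split: sample 80/20 within each segment so both sets
# mirror the overall population (70% segment A, 30% segment B here).
random.seed(0)
points = [{"id": i, "segment": "A" if i < 70 else "B"} for i in range(100)]

train_set, test_set = [], []
for seg in ("A", "B"):
    members = [p for p in points if p["segment"] == seg]
    random.shuffle(members)
    cut = int(len(members) * 0.8)
    train_set += members[:cut]
    test_set += members[cut:]

# Segment proportions now match in both sets.
frac_a = lambda s: sum(p["segment"] == "A" for p in s) / len(s)
print(frac_a(train_set), frac_a(test_set))  # 0.7 0.7
```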
Although much of the above is undertaken by the data scientist, managers can add incredible value to AI projects by understanding the high-level aspects of data processing and training. Ultimately, the manager is the decision maker, and a high-level understanding of the impact data has on model production can lead to better decisions.
Two major problems often arise when implementing AI in business. Projects run into ‘Bad Data’ and ‘Bad Algorithms’. Last week’s blog described some of the issues that come about when dealing with bad data. This week, we’ll use football to illustrate why bad algorithms can cause trouble for machine learning projects.
Imagine you are the coach of the Dallas Cowboys football team. Your team has struggled to win games and you’re worried about losing your job. You’ve seen what analytics and AI have done for baseball, so you decide to give it a shot. You hire a data scientist and give her all of the plays in your playbook and the outcomes of each play during the game. You tell her to create an algorithm that shows which plays you should run and in what order.
The data scientist takes the data and applies machine learning algorithms. Let’s imagine 3 resulting scenarios:
- A full script of plays for an entire game
- A recommendation to use the same play every time
- A list of plays that changes depending on the in-game situation
In scenario 1, we may think a full script of plays will work in theory, but it would go horribly wrong in a real game. As different situations arise in the game, the coach could not adapt the plays accordingly. This means the script of plays would work on the data used to train the model (the plays and outcomes provided by the coach), but would not respond well to uncertainty or to situations that arise in new games.
This is called overfitting the data. Overfitting happens when an overly complex model fits the training data so closely, noise included, that it cannot handle variation it has not seen. In games like football there are millions of sequences and outcomes, many of which may not have been captured in the data the coach provided. Therefore, any new situation not present in the original dataset would not be acknowledged in the 'perfect' script of plays.
Looking at scenario 2, we can see the data scientist has badly underfit the data. They have selected a model that does not learn the underlying complexity of the data and has output a single play it decided works best. A football game is much more complex than the model allows, so the predictions are not accurate even on the training data. Think of drawing a straight line through a scatter plot. It does not react to the ups and downs of the data, and does not provide any insights to changes in variables.
The best scenario is number 3. Here the coach scored an all-star data scientist who understands the concept of regularization. This fancy mathematics term means the data scientist has allowed enough complexity to properly understand the data, but has constrained the model's parameters to ensure it does not overfit. Regularization is as much an art as it is a science. It takes experience and understanding to properly limit a model and avoid issues with over/under fitting the data.
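The three scenarios can be simulated. The sketch below uses nearest-neighbour models on noisy toy data as stand-ins: a global mean for the underfit model, pure memorization (1 nearest neighbour) for the overfit one, and averaging over 5 neighbours as a simple form of regularization. All numbers are illustrative:

```python
import random

# Under- vs overfitting on noisy quadratic data (y = x^2 + noise).
random.seed(1)
def make_data(n, start):
    return [(x, x * x + random.gauss(0, 5))
            for x in [start + i * 0.1 for i in range(n)]]

train = make_data(50, 0.0)
test = make_data(50, 0.05)  # same range, different points

def knn_predict(x, k):
    # average the y-values of the k training points closest to x
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

mean_y = sum(y for _, y in train) / len(train)

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

err = {
    "underfit": (mse(lambda x: mean_y, train), mse(lambda x: mean_y, test)),
    "overfit": (mse(lambda x: knn_predict(x, 1), train),
                mse(lambda x: knn_predict(x, 1), test)),
    "regularized": (mse(lambda x: knn_predict(x, 5), train),
                    mse(lambda x: knn_predict(x, 5), test)),
}
# The overfit model is perfect on the training data (it memorized it),
# but that perfection does not carry over to the test set.
print({k: tuple(round(v, 1) for v in pair) for k, pair in err.items()})
```

The memorizing model scores a perfect zero error on its own training data, like the full game script in scenario 1, while the smoothed model gives up a little training accuracy in exchange for much better performance on points it has never seen.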
Managers need to be aware of issues that arise when the incorrect algorithms are applied to a dataset. Good managers will look at the dataset and ask:
- What are the expected outcomes?
- What are the expected relationships between variables?
With up front expectations, the manager can help identify models that over or under fit the data and avoid implementing models that provide incorrect results.
With the excitement surrounding opportunities that come with deploying Machine Learning (ML), it is easy to forget the downsides and risks. One of the major reasons that ML strategies lose money, miss the mark, or disappoint customers is ‘bad data’. Bad data can come in many forms and can lead to incorrect insights when improperly addressed and managed. Politics provide many examples of misuse and misunderstanding of data and will be used to help illustrate the concept.
Before diving in, it's useful to define a few common terms used in data science. A population is defined as every member or every data point of a certain group. If we are hoping to understand the voting intentions of Canadians in the upcoming election, we would say that all Canadians make up the population. A sample, on the other hand, is a smaller number of data points drawn from the overall population. Think of a sample as 3,000 Canadian citizens who were contacted for a poll and asked their preferred candidate.
The first common issue when training ML models is simply not having enough data. When the sample size is small there can be bias and error introduced by chance. The data may contain outliers that have a large effect on the results because of their relative importance to the rest of the sample. Think of scrolling through twitter and trying to discern the political views of users. A single user with radical views can throw off our assessment and cause us to think these views represent a large part of the population.
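A quick simulation of this small-sample effect, with an invented population where 52% support one candidate:

```python
import random

# Small samples swing more: simulate polling a population where 52%
# support candidate X, with sample sizes of 30 vs 3,000.
random.seed(7)

def poll(n):
    # fraction of n random respondents who support candidate X
    return sum(random.random() < 0.52 for _ in range(n)) / n

small = [poll(30) for _ in range(200)]
large = [poll(3000) for _ in range(200)]

spread = lambda xs: max(xs) - min(xs)
# The 30-person polls scatter far more widely around the true 52%.
print(round(spread(small), 3), round(spread(large), 3))
```

Every one of these simulated polls is drawn fairly from the same population; the wild swings in the small polls come purely from chance.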
Although it may seem intuitive that a small data-set may be unrepresentative, large data-sets can fool managers and their machine learning algorithms as well. Think of how many polls are created and published during election time. It seems like every day there is a new poll proclaiming to know voter sentiment at that moment, only to find that they were way off as the true results come in.
A famous example comes from the U.S. election of 1936. Leading up to the election, a magazine called Literary Digest collected 2.4 million answers from readers and predicted the challenger, Alf Landon, would unseat President Franklin D. Roosevelt. FDR ended up taking the highest percentage of the popular vote since 1820. So what happened?
Managers need to think about the source of the data and the bias that may be introduced through the collection process. Readers of Literary Digest were upper class and were more likely to oppose the policies introduced by FDR. Also, those who tend to answer polls may have different opinions than those who do not answer polls. These differences do not show up in the data and lead to insights that do not represent the population.
Prior to training models, managers need to make sure they contemplate the data collection process, identify potential biases, and determine whether the data-set is truly representative of the overall population they are hoping to derive insights about. Keep in mind that the data collection and analysis process is the job of data scientists. A manager's role is not to replace this expertise. Instead, the manager must ask critical questions about the data collection process and help identify potential sources of bias. Through this understanding, companies can avoid deploying models that contain harmful biases that affect the customers they hope to serve. This critical thinking will also help uncover the human biases present in organizations and may lead to constructive conversations on how machine learning can be created to benefit all.
On the TED stage in 2010, Tom Wujec introduced the "Marshmallow Experiment." The experiment involves twenty sticks of spaghetti, one yard of tape, one yard of string, and a marshmallow. From these materials the participants are told they have 18 minutes to build the largest free-standing structure they can. The structure is only deemed complete when it successfully holds the marshmallow on top. He has completed this experiment hundreds of times with different groups of people, from MBA graduates to recent graduates of kindergarten, and each of these groups tackles the problem a little bit differently.
MBAs will spend some time organizing themselves and negotiating for power. They will spend time thinking, sketching, and planning, and eventually get started on building the tower. They build it up until time is almost up and finally place the marshmallow on top. They stand back, admire their work, and the end result is often the tower crumbling under the weight of the marshmallow.
Kindergarten students complete the task completely differently. They don't spend any time planning and immediately start working and building. The key difference between strategies, though, is the continuous placement of the marshmallow on the tower throughout the experiment. The 5-year-olds build the tower up a little bit, place the marshmallow, get the feedback (did the tower fall or not), and continue iterating and building from there. This results in kindergartners building towers more than twice as tall, on average, as those of the MBA graduates.
Now I know what you’re thinking, what in the world does this have to do with machine learning? Past blogs described supervised machine learning and unsupervised machine learning, while this story is meant to illustrate the concept of reinforcement machine learning and how a computer uses feedback to learn.
In reinforcement learning, engineers will program the ‘rules’ of a game into a computer. In this case they would program the types of items, the rules of gravity, the load-bearing capability of spaghetti, the weight of the marshmallow, etc. Once the rules are programmed, the engineers then program feedback that ‘rewards’ the computer for taller towers. The computer then performs several iterations of the game and learns the most optimal path and configuration of all the items to receive the highest reward. Think of the computer as mimicking the kindergarten students who tried an iteration, placed the marshmallow, got the feedback, and then reconfigured their structure.
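A toy version of this loop can be written as tabular Q-learning. The sketch below is illustrative, not a real tower-building simulator: a 5-cell line where the agent is rewarded for reaching the rightmost cell, with invented hyperparameters.

```python
import random

# Tiny tabular Q-learning: an agent on a 5-cell line learns that
# walking right (toward the reward at cell 4) pays off.
random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # left, right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):  # episodes: try, get feedback, try again
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # the core Q-learning update: nudge the estimate toward
        # (reward received + discounted value of the best next move)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)  # every state should prefer +1 (move right)
```

No one ever tells the agent "go right"; like the kindergartners, it discovers the strategy purely from repeated attempts and the reward signal.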
A recent example of successful reinforcement learning came in 2017, when DeepMind's AlphaGo program was trained to play Go. Go is far more complicated than chess, with vastly more possible combinations of moves. The program was able to beat World Champion Ke Jie, signifying a huge step forward in reinforcement learning.
Reinforcement learning is one of the most exciting and revolutionary examples of AI. It is most useful when you are trying to determine an optimal set of actions in an environment with clear rules and an end-state that can be defined as ‘success’ or ‘failure’. Business cases include:
- Self-driving cars
- Optimizing an investment portfolio trading strategy
- Using robots to re-stock and pick inventory in warehouses
When companies are able to conceptualize and understand reinforcement learning, they move from simply being able to predict, as with other types of machine learning, towards an ability to optimize. While competitors have their MBAs jockeying for power, planning, and watching their tower crumble under the weight of a marshmallow, those who operationalize reinforcement learning have the blueprint to the tallest and strongest tower already printed out.
In order to take advantage of the insights and capabilities that AI enables, organizations need to be prepared on several fronts:
- The organization must have a solid understanding of the problem they want to solve.
- They must have access to the data that provides insights they are looking to exploit.
- They must hire individuals with technical capabilities (both data scientists and programmers) to access the data and implement the machine learning process.
It may seem counter-intuitive, but the step that most companies get wrong is the first. Managers try to apply machine learning to problems that cannot be answered by the data available or they misunderstand the capability of the algorithms and what questions they can answer. In order to increase managers’ ability to ask the correct questions and improve the effectiveness of AI decision-making, we need to craft a better understanding of the difference between supervised machine learning and unsupervised machine learning.
In a previous blog, I outlined the capabilities of supervised machine learning and provided some examples of business problems that can be solved with these algorithms. This method is only part of the story. Unsupervised machine learning can solve a whole different suite of business problems and obtain insights that are otherwise invisible to human analysis.
Unsupervised machine learning is best for instances when the data set is so large that humans cannot classify or define relationships between variables. The process allows decision makers to rely on an algorithm to review the data and come up with insights that are missed by human review.
Some business cases for unsupervised machine learning include:
- Recommend what product a customer may like based on the preferences of customers with similar attributes (Netflix uses this to suggest content you may like)
- Create segments of employees based on their likelihood of leaving the company
- Create micro-segmented groups of credit card users to determine which rewards programs to offer
To provide further understanding, let’s look at an example:
Imagine you are a manager at a large grocery store chain looking to address declining sales. Because you understand the capabilities of unsupervised machine learning, you suggest that the company use unsupervised algorithms to identify products often purchased together and provide personalized offers based on the results.
You collect the receipts for all purchases being made in your store and apply the algorithm to this dataset. The algorithm identifies several products that customers often buy together.
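A minimal sketch of the idea in Python, using a handful of invented receipts (real systems apply association-rule algorithms such as Apriori to millions of transactions, but the core of "products often purchased together" is co-occurrence counting):

```python
from collections import Counter
from itertools import combinations

# Hypothetical receipts: each is the set of products in one transaction
receipts = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "butter", "cereal"},
    {"milk", "cereal", "bread"},
]

# Count how often each pair of products appears on the same receipt
pair_counts = Counter()
for receipt in receipts:
    for pair in combinations(sorted(receipt), 2):
        pair_counts[pair] += 1

# The most frequent pairs are candidates for joint offers
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```

On this made-up data, bread and butter surface as the strongest pairing, which is exactly the kind of insight the manager would turn into a personalized offer.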
By understanding the applications of different machine learning algorithms the manager was able to:
- Identify a business problem and understand how machine learning can solve it.
- Understand the dataset needed and how the algorithm would be applied to this dataset.
- Receive specific insights that allow the manager to craft a data-driven strategy.
When managers have an understanding of machine learning they can create a data-driven strategy, make better decisions, and drive positive change.
Much of the hype surrounding AI involves the recent developments in machine learning. The terms “AI” and “machine learning” are often used interchangeably, so let’s define both. AI, as defined in a previous post, can be understood as computers performing tasks typically associated with human cognition. Machine learning is one of the six disciplines of AI. In other words, machine learning is one category of AI, but not all AI involves machine learning.
To clear things up, think of machine learning as a ‘special’ kind of AI that detects patterns within datasets, and uses these to make recommendations or predictions. Instead of having human coded instructions, machine learning algorithms rely on data to make their insights. Hence the ‘learning’ part of machine learning.
For simplicity, let’s think of an algorithm we rely on every day: weather prediction. Let’s say, for example, we believe that the weather tomorrow is based on the weather today, the weather yesterday, and the weather on this date last year. We can draw up an imaginary equation:
WeatherTomorrow = B0 + B1×Today + B2×Yesterday + B3×LastYear
Weather forecasters would then feed the machine learning algorithm all of these variables for as far back as we have data recorded. The most important thing to keep in mind is that the data includes the WeatherTomorrow variable all the way up until today. Based on this historical data, which includes the historical ‘answers’ to the question the equation asks, our algorithm tries to predict what the weather will be tomorrow. This is called supervised machine learning. In supervised machine learning, the algorithm is trained with data that includes the output variable, the ‘answer’ we are looking for.
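Under the hood, ‘training’ means estimating the B coefficients from historical examples. Here is a minimal sketch in plain Python, simplified to a single predictor (today’s temperature) and using made-up numbers that happen to follow an exact linear rule, so the fit recovers it perfectly:

```python
# Simplified model for illustration: WeatherTomorrow = B0 + B1 * Today
# Made-up history of (today's temp, next day's temp) in °C.
# These pairs follow the exact rule y = 2 + 0.9 * x, so the fit recovers it.
history = [(20.0, 20.0), (22.0, 21.8), (18.0, 18.2),
           (25.0, 24.5), (17.0, 17.3), (23.0, 22.7)]

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Closed-form least-squares fit: this is the supervised 'learning' step
b1 = (sum((x - mean_x) * (y - mean_y) for x, y in history)
      / sum((x - mean_x) ** 2 for x, _ in history))
b0 = mean_y - b1 * mean_x

# Predict tomorrow's temperature given today's reading of 21 °C
print(round(b0 + b1 * 21.0, 1))  # → 20.9
```

Real forecasting models fit many more variables (as in the full equation above) and far noisier data, but the principle is the same: historical inputs paired with historical ‘answers’ determine the coefficients.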
Another quick example: housing prices. You give the computer historical input data (e.g. interest rates, size in square feet, location of the property) and the corresponding output data (e.g. sale price of the home). The algorithm is then trained to make future predictions based on the inputs and outputs it has been given.
Supervised machine learning is best when you have a dataset with outputs that can be classified or collected by humans (this is called labelled data). Here are a few use cases for supervised machine learning:
- Provide a decision framework for hiring a new employee
- Forecast inventory levels
- Predict incoming call volume at call centres
The effectiveness of this type of algorithm depends on the quality of the data collected. Until very recently, data collection was difficult, expensive, and time-consuming. Today, with smartphones, social media, and constant access to the internet, there is a wealth of data available to feed supervised machine learning algorithms. The challenge for companies is to gather this data properly, understand what insights it can provide, and be agile enough to shift their business strategy in response to consistent real-time insights.
Understanding the capabilities of the algorithms, the data needed to feed them, and the type of business problems that can be solved is a good first step to making your company or business unit more data-driven.
It seems as though AI has a negative reputation in many workplaces. A recent survey suggests nearly 50 percent of Albertans aged 18 to 64 feel AI will replace more jobs than it will create. Just 16 percent believe AI is a net job-creator, while a fairly large proportion (27 percent) are on the fence.
Getting to know how businesses are using (or planning to use) AI can help calm employee fears and move your workforce towards embracing the change that AI enables. Peter Breuer, a senior partner at McKinsey & Company, says, “Transformation is 50 percent about AI and 50 percent about [changing employees’ mind-sets]. The second 50 percent, in many cases, is forgotten because everybody’s so excited about computers and robots.”
To help your organization better understand the technology, here are the six major disciplines of AI, with an example of each:
1. Knowledge Reasoning: This is all about representing information in a way that a computer understands so it can make associations and apply ‘reason’ to questions.
Example: When you type ‘Mona Lisa’ into Google the search engine can then, using knowledge reasoning, associate this search with other facts about the Mona Lisa. Things like who painted it, its significance, etc.
2. Planning: The program is given a starting state, a goal, and the possible actions for reaching it. Search and optimization algorithms are then applied, producing a sequence of actions from the start to the goal.
Example: This is one AI discipline that autonomous vehicles use to drive from point A to B. They are given a start state, the possible actions of braking, accelerating, and steering, and the goal of safely arriving at a destination.
3. Machine Learning: Instead of having humans write explicit rules for every situation, the computer is given data and learns patterns from it to produce its own outputs.
Example: A famous milestone is Deep Blue, the IBM system that defeated World Chess Champion Garry Kasparov. While Deep Blue relied mostly on brute-force search, its evaluation of positions was tuned using data from thousands of grandmaster games.
4. Computer Vision: Uses large datasets of images to become familiar with the positioning of the pixels (the small squares of color in images on a screen). The computer can then identify what an object is, as well as where an object is in an image.
Example: Computer vision is used in Google Photos to help identify your friends and family (which is also kind of creepy. We’ll get into ethics later…).
5. Robotics: This is the combination of hardware (physical metal parts) and any AI process to bring the computer’s cognitive functions into the real world.
Example: Watch this video from Boston Dynamics to see some of their robots at work.
6. Natural Language Processing: Similar to computer vision, this uses large datasets, in this case of text rather than images, to work with human language, for example predicting the next letter, word, or phrase in a conversation.
Example: This has many applications such as chatbots and suggesting what word you are most likely to be typing next.
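As a toy illustration of next-word suggestion, here is a tiny ‘bigram’ model in Python built from an invented corpus (real systems learn from billions of words):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on billions of words
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
words = corpus.split()

# Count which word follows each word (a 'bigram' model)
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def suggest(word):
    """Suggest the next word seen most often after this one in training data."""
    return following[word].most_common(1)[0][0]

print(suggest("the"))  # → 'cat', the word that most often follows 'the' here
```

The same counting idea, scaled up and generalized, is what powers the word suggestions on your phone keyboard.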
Aligning employees to any new corporate initiative is a key component of the process, and this alignment is even more important when it comes to something as revolutionary as AI. By understanding the basics of the six disciplines, you can get a head start on engaging your employees and making them allies in the organization’s change.