Month: February 2018

Blog

4 Areas of AI and Machine Learning

AI and machine learning have come a long way in recent years, opening up possibilities that could transform how businesses and industries operate at their core. AI can often perform tasks or solve problems faster and more efficiently than is humanly possible, allowing business owners to streamline, scale, and automate quantitative work so that resources inside the business can be freed up to focus on the more human, subjective areas of strategy, growth, and improvement.

Discussed below are four of the largest growing areas in AI and machine learning:

Robotics and autonomous vehicles
Computer vision
Natural language processing
Virtual agents

Robotics and Autonomous Vehicles
Through advances in robotics, computer vision, and machine learning, robots and vehicles can now demonstrate contextual and situational awareness. They can learn, and continue learning, behaviours that help them perform a given task better, such as playing video games or navigating complex terrain in an unseen environment. This breakthrough means robots can move for themselves without a human directing them via remote control, paving the way for autonomous vehicles and robotics.

In warehouses, robotics and automation can be used to increase productivity by working faster, minimising the chance of human error, and decreasing the risk of injury. Swisslog reduced stocking time by 30% with autonomous vehicles, while Ocado (an online supermarket in the UK) has a warehouse filled with conveyor belts, robots, and AI applications to move products from the store to shopping bags, to delivery vans, and on to the customer (Bughin et al., 2017).

Self-driving cars are fitted with cameras, sensors, and navigation systems to collect as much data as possible about their location and surroundings (Ashish, 2017). The car chooses a navigation path and follows it, avoiding any obstacles along the way. Unlike humans, sensors can monitor all angles around the car 100% of the time, eliminating blind spots and allowing fully informed decisions down to the millisecond. In the near future, travel will offer passengers new opportunities: they can work, or the car can recommend where to stop for coffee or a scenic lookout. This will also revolutionise the private driver industry. Eventually, services like Uber and Lyft will be able to release fleets of self-driving cars rather than paying human drivers, drastically reducing the cost to the end consumer, given that the driver accounts for around 50% of the cost of most rides (Ashish, 2017).
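To make the idea concrete, here is a minimal sketch in plain Python of the kind of decision loop that watches every angle around the car and chooses whether to continue, steer, or brake. The sector distances, threshold, and function names are invented for illustration; they stand in for fused camera, radar, and lidar readings rather than describing how any particular vehicle works.

```python
SAFE_DISTANCE_M = 5.0   # hypothetical clearance threshold in metres

def choose_action(sector_ranges):
    """Pick a driving action from all-around distance readings (one per sector)."""
    ahead = sector_ranges[0]                              # sector 0 = straight ahead
    clearest = max(range(len(sector_ranges)), key=lambda i: sector_ranges[i])
    if ahead >= SAFE_DISTANCE_M:
        return "continue"                                 # path ahead is clear
    if sector_ranges[clearest] >= SAFE_DISTANCE_M:
        return f"steer towards sector {clearest}"         # steer into the clearest sector
    return "brake"                                        # nothing is clear enough: stop

# Example: an obstacle 2 m ahead, but sector 2 (off to one side) is wide open.
print(choose_action([2.0, 3.5, 12.0, 6.0, 4.0, 3.0, 2.5, 2.2]))  # steer towards sector 2
```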
Computer Vision
Computer vision refers to the ability of an AI system to process visual information about the world, giving the computer the functionality of human eyes (Vittori, 2017). This opens up a whole new world of opportunity in health, travel, maintenance work, and education.

AI virtual agents are now reviewing MRI and CAT scans using image recognition technology to identify problems and irregularities. Computer vision systems can be more precise than the naked human eye, recognising significantly more subtle indicators of disease, fast-tracking recovery, and preventing problems from escalating, saving countless lives in the process.
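As a rough illustration of what image recognition on a scan involves, the sketch below uses PyTorch with a deliberately tiny, untrained network and a random tensor standing in for an MRI slice. It only shows the shape of the pipeline, pixels in and a normal-versus-irregular score out; a real diagnostic system would be trained and validated on large sets of labelled scans.

```python
import torch
from torch import nn

# Toy convolutional classifier: 1-channel greyscale scan in, two class scores out.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                      # two classes: normal / irregular
)

scan = torch.randn(1, 1, 256, 256)         # placeholder for a 256x256 scan slice
probabilities = torch.softmax(model(scan), dim=1)
print(probabilities)                       # e.g. tensor([[0.52, 0.48]]) before any training
```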

Google Translate has an innovative feature for travellers: it can look at a photo of foreign-language text and translate it directly into the user's native language, using augmented reality to overlay the translation on the original text. Amazon, meanwhile, has begun using delivery drones with computer vision; these can avoid obstacles and prevent damage to the drone or product, delivering small items faster and cheaper than large vans. Similarly, research is under way into bug-sized maintenance robots that can inspect aircraft internally to identify faults and engineering irregularities.

Computer vision can also be used in adaptive learning to work out the education methods best suited to each individual student. Systems can track students' eyes and movements during class to analyse whether they are engaged, confused, or bored, while also identifying their learning preferences (Bughin et al., 2017). This data is then used to direct lesson planning to maximise learning and engagement. All of this information can be synthesised to form effective learning groups based on student profiles, learning styles, and knowledge levels, further enhancing the learning experience.
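One simple way to picture the grouping step is clustering. The sketch below uses scikit-learn's k-means with invented engagement, confusion, and quiz-accuracy features; the features and the number of groups are illustrative assumptions, not a description of any particular product.

```python
from sklearn.cluster import KMeans

# Hypothetical per-student features derived from classroom tracking:
# [engagement score, confusion rate, quiz accuracy], all scaled to 0-1.
students = [
    [0.9, 0.1, 0.8],
    [0.4, 0.6, 0.5],
    [0.8, 0.2, 0.9],
    [0.3, 0.7, 0.4],
    [0.7, 0.3, 0.7],
    [0.2, 0.8, 0.3],
]

# Cluster students with similar profiles into learning groups.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(students)
print(groups)   # one group label per student, e.g. [0 1 0 1 0 1]
```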
Natural Language Processing
Natural language processing is the application of machine learning techniques to analyse and understand natural language and speech. The technology is already being used by schools to speed up basic administrative tasks such as marking and documentation: natural language processing systems can decipher student handwriting and mark objective questions. It is also readily applied to smart devices for speech recognition in virtual assistants such as Siri, Alexa, and Google Assistant. Interestingly, advances in natural language processing have enabled the synthetic generation of human-sounding speech. Google's Duplex system can book appointments and make reservations with businesses while exhibiting the minute characteristics of human speech, such as imperfect grammar, pauses, repetition, and filler words.
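For the marking example, the core idea can be shown in a few lines of plain Python: normalise each written response and compare it with an answer key. Real systems add handwriting recognition and far more forgiving matching, but the sketch below, with made-up questions and answers, captures the basic flow.

```python
import re

def normalise(text):
    """Lower-case an answer and strip punctuation and extra spaces before comparing."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def mark_objective(responses, answer_key):
    """Return a score by comparing each normalised response to the answer key."""
    return sum(
        normalise(responses.get(q, "")) == normalise(expected)
        for q, expected in answer_key.items()
    )

answer_key = {"q1": "Photosynthesis", "q2": "1945", "q3": "Canberra"}
responses = {"q1": "photosynthesis ", "q2": "1954", "q3": "canberra."}
print(mark_objective(responses, answer_key), "/", len(answer_key))   # 2 / 3
```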

Natural language processing systems can also draw meaning and semantic understanding from what is being said. AI systems can evaluate different tones of voice and understand the connotative meaning behind human speech. Businesses are currently using semantic systems to evaluate employee performance and to enhance the overall customer experience.
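A toy version of this kind of analysis is a lexicon-based sentiment score, sketched below in plain Python with an invented word list. Production systems use trained models that also account for context, negation, and tone of voice, but the idea of mapping language onto connotative scores is the same.

```python
import re

# Invented lexicon for illustration only.
POSITIVE = {"helpful", "friendly", "quick", "great", "resolved"}
NEGATIVE = {"rude", "slow", "unhelpful", "confusing", "unresolved"}

def sentiment_score(text):
    """Count positive words minus negative words in a piece of feedback."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = "The agent was friendly and quick, but the billing issue is still unresolved"
print(sentiment_score(feedback))   # 1  (two positive words, one negative)
```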
Virtual Agents
Virtual agents, although mostly driven by natural language processing techniques, are AI agents that act as online customer service representatives for companies and brands.

The most prevalent examples of virtual agents are chatbots. They make 24/7 assistance a reality without the need to maintain a large service team for on-demand calls. Chatbots also enable smaller companies to expand their customer service globally, overcoming time-zone and language differences. Conversation through chatbots makes it easier to collect customer data and insights while increasing customer engagement.
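At its simplest, the request-and-response loop behind a chatbot can be sketched as keyword-based intent matching, as below in plain Python with invented intents and canned replies. Commercial chatbots replace the keyword lookup with natural language processing models, but the overall structure is broadly similar.

```python
# Each intent maps trigger keywords to a canned reply (all invented for illustration).
INTENTS = {
    "opening_hours": (["hours", "open", "close"],
                      "We're open 9am-5pm, Monday to Friday."),
    "returns":       (["return", "refund", "exchange"],
                      "You can return any item within 30 days with proof of purchase."),
    "shipping":      (["shipping", "delivery", "track"],
                      "Standard delivery takes 3-5 business days."),
}

def reply(message):
    """Return the reply for the first intent whose keywords appear in the message."""
    words = set(message.lower().split())
    for keywords, response in INTENTS.values():
        if words & set(keywords):
            return response
    return "Sorry, I didn't catch that. A human agent will follow up with you."

print(reply("What time do you open on Monday?"))
```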

Virtual agents have also been built as receptionists in medical centres, with the aim of matching patients with the most appropriate doctor based on their history, symptoms, and the doctors' specialisations (Bughin et al., 2017). They are also used to analyse and summarise large amounts of information and medical literature to improve the accuracy and speed of diagnosis. Virtual agents can sift through endless pages of medical data in seconds, compare a patient's data with a large library of past cases, and suggest a well-informed treatment plan. Common symptoms generally point to a common health problem, and the agent can identify the most effective treatments used in the past.
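A stripped-down version of the matching step might score each doctor by how many of a patient's reported symptoms fall within that doctor's specialisations, as in the plain Python sketch below with invented doctors and symptoms. A real receptionist agent would also weigh patient history, urgency, and availability.

```python
# Invented doctors, specialisations, and symptom coverage for illustration only.
DOCTORS = {
    "Dr Shah":  {"cardiology":  {"chest pain", "palpitations"}},
    "Dr Ng":    {"dermatology": {"rash", "itching"}},
    "Dr Lopez": {"general":     {"fever", "fatigue", "headache"}},
}

def best_doctor(symptoms):
    """Return the doctor whose specialisations cover the most reported symptoms."""
    def score(doctor):
        return sum(len(set(symptoms) & covered) for covered in DOCTORS[doctor].values())
    return max(DOCTORS, key=score)

print(best_doctor({"fever", "headache"}))   # Dr Lopez
```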

References
Ashish 2017, 'Autonomous Cars and Artificial Intelligence', Codementor.

Bughin, J, Hazan, E, Ramaswamy, S, Chui, M, Allas, T, Dahlström, P, Henke, N & Trench, M 2017, Artificial Intelligence: The Next Digital Frontier?, McKinsey Global Institute.

Harris, T, 'How Robots Work', How Stuff Works, <https://science.howstuffworks.com/robot6.htm>.

Vittori, C 2017, 'What's the difference between AI and Computer Vision?', B&T, <http://www.bandt.com.au/opinion/whats-difference-ai-computer-vision-every-marketer-needs-know>.