Category: Blog

AWS Blog

re:Invent 2018 – Machine Learning and AI Take Centre Stage

This year’s AWS re:Invent was full of new releases and updates: cutting-edge tools, innovative products, and valuable resources to help take advantage of the rapid pace of growth in cloud services. For us at Intellify, the most exciting theme was the build-out of Machine Learning tools and environments. SageMaker received a raft of new updates, making optimisation, recommendation, reinforcement learning (RL), and forecasting models deployable by more users than ever. Tellingly, much of Andy Jassy’s keynote was dedicated to announcing AWS’ massive new focus on Machine Learning and AI, and we are gearing up to help our customers take full advantage of the benefits.

5 Key Takeaways

While we continue to digest the content coming from re:Invent – it is a huge week after all – and begin to work with the expansions to SageMaker, Reinforcement Learning, Elastic Inference, Marketplace, and other ML-based toolkits, here are some of the major takeaways we had from the keynote presentations:

New SageMaker offerings: Ground Truth, Neo, Reinforcement Learning. We have been developing and deploying on SageMaker for our customers (see here, and here) since its first release and have found it a powerful environment for ML projects. AWS is expanding SageMaker even further, including:

Ground Truth: the capability to outsource data labelling to human agents (via Mechanical Turk, third-party vendors, or an in-house workforce), or to a combination of human and automated labelling systems. Accurate labelling of data is crucial to the successful implementation and integration of data sets for ML, and we can see ourselves taking advantage of this new service to ensure solution accuracy.
Neo: a model compiler that optimises trained models to run efficiently on specific target hardware, from cloud instances down to edge devices.
Reinforcement Learning (RL): a new environment dedicated to RL algorithms and compute, allowing deployment of RL-based ML solutions, which have very different requirements from supervised and unsupervised learning techniques.

 

Allowing ML vendors to share their cutting-edge ML algorithms and solutions in AWS Marketplace. Not only will this greatly expand the options available to those seeking pre-built ML solutions, it will also enable us to share some of our expertise on the platform, and more rapidly deploy flexible customer solutions using an array of customisable tools.

Significant expansions on Elastic Inference, and the arrival of AWS’ specialised ML chip: Inferentia. It is particularly exciting for us to see this expansion of flexible compute power for ML. Inference requires significant compute power when running our models, but only for relatively sparse periods of time. The advantage of Elastic Inference is that it provides that high compute when it is needed, rather than charging for the availability of high compute all month. We’re sure this will make our future inference-heavy ML projects more cost-effective, while retaining the compute power we need for successful deployment.

The introduction of DeepRacer and the DeepRacer League. We loved this! DeepRacer is built on the new RL environment in AWS, so watch out for an Intellify team flexing their ML and data science muscles in the League!

Amazon Textract, Personalize, and Forecast. This past year, customers have shown a lot of interest in document recognition/parsing; recommender systems, especially in ecommerce and customer experience-focused businesses; and time series modelling and forecasting. There are so many vital applications of these ML-based tools, and we can’t wait to get on board with Textract, Personalize, and Forecast to take them to a whole new realm of customers seeking the benefits of AI/ML.

Our Thoughts

AI and ML are hugely on the rise for both organisations and individuals. AWS’ new suite of releases and expansions to SageMaker, RL, Elastic Inference, and Marketplace will only help the field grow at an even faster rate. We’ve been deploying on SageMaker since its release, and these tools will only expand what we can offer customers in terms of cutting-edge, competitive AI/ML solutions.

Get in Contact

If you or your organisation are looking to take advantage of AI/ML and its enormous opportunities to boost revenue, create operational efficiency, and enhance competitiveness, please get in contact via phone (02 8089 4073) or email (info@intellify.com.au). We are AWS Consulting partners for Machine Learning, with a range of projects already completed across SageMaker and AWS cloud services.


In the Media: [This is My Code] on AWS

Our very own Kale Temple was featured in an AWS segment on their YouTube channel. The video featured an in-depth discussion of Particle Swarm Optimisation using Amazon SageMaker. In an age where companies are looking for better ways to optimise pricing, discounts, and offers across their product portfolios, environments like SageMaker are game-changing. Take a look at the video below for a detailed look at this process.

We’re thrilled to be a part of the conversation involving machine learning and AWS tools, including SageMaker. We look forward to the exciting new things that will come out of re:Invent 2018. 


Executive Breakfast – AI & ML Opportunities for Australian Business

On Thursday 1st November we held our Executive Briefing breakfast session, a roundtable presentation and discussion with 20 senior managers and executives at the Four Seasons Sydney. Our discussion centred on the growing market and opportunities for enterprises looking to implement data science, machine learning, and AI. This session focused not just on the economic importance of implementing ML & AI now to secure competitive advantage, but also on some of the tools and techniques for getting AI projects successfully underway. With a presentation from AWS’ Koorosh and a sumptuous Four Seasons breakfast, attendees reported that they found the event very useful and informative.

We will be running our next Executive Briefing in March 2019. Whether you are at the stage of investigating AWS Machine Learning for data projects, or looking to leverage the current opportunities in machine learning, SageMaker, and artificial intelligence, we’ll be inviting selected CxOs and senior managers for another exclusive opportunity to network and discuss the future of ML & AI.


Amazon SageMaker vs. AWS Lambda for Machine Learning

Introduction
Advances in serverless compute have enabled applications and micro-services to scale out horizontally, absorbing volatile service demand. AWS Lambda is perhaps the most prominent serverless platform for developers and machine learning engineers alike. Its scalability and accessibility make it a top choice for serving and deploying modern machine learning models.

SageMaker, on the other hand, was introduced by Amazon at re:Invent 2017 and aims to make machine learning solutions more accessible to developers and data scientists. SageMaker is a fully managed platform that enables quick and easy building, training, and deployment of machine learning models at any scale. The platform is accessed through a Jupyter Notebook interface, which also makes it straightforward to perform exploratory data analysis and preprocessing on training data stored in Amazon S3 buckets. Moreover, SageMaker includes 12 common machine learning algorithms that are pre-installed and optimised for the underlying hardware – delivering up to 10x performance improvements compared to other frameworks.

In this article, we compare and contrast the advantages and disadvantages of SageMaker (server-based) and Lambda (serverless) for the machine learning and data science workflow, using the categories of pricing, model training, and model deployment to detail the characteristics of both services.
Pricing
The pricing model for SageMaker mirrors that of EC2 and ECS, albeit at a premium compared to the bare-bones virtual machines. Like most server deployments, serving a machine learning model on the SageMaker platform is more costly for sparse prediction jobs. Interestingly, SageMaker instance prices are divided into the segments of model building, training, and deployment. Below is an example from the US West – Oregon region; for more detail, see the SageMaker pricing page.
Model Building Instance Pricing (US West – Oregon)

Model Training Instance Pricing (US West – Oregon)

Model Hosting Instance Pricing (US West – Oregon)

 

AWS Lambda, in contrast, allows model predictions to scale horizontally with the workload. Lambda pricing is based on the number of requests per month and the GB-seconds consumed by Lambda executions (https://aws.amazon.com/lambda/pricing/). Lambda, and serverless compute in general, is more cost-effective for models with low or highly volatile interaction rates.
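
To make the GB-seconds component concrete, here is a back-of-the-envelope sketch. The per-request and per-GB-second dollar rates vary by region, so they are deliberately left out; the figures below are made up for illustration.

```python
def lambda_gb_seconds(memory_mb, avg_duration_ms, invocations):
    """Billable GB-seconds for a month of Lambda invocations:
    allocated memory (GB) x execution time (s) x invocation count."""
    return (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations

# A 512MB function running ~200ms per request, a million times a month:
monthly = lambda_gb_seconds(512, 200, 1_000_000)
```

Multiplying the result by the regional per-GB-second rate, and adding the per-request charge, gives the monthly bill; for sparse or bursty inference traffic this is typically far below the cost of an always-on instance.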
Model Training
For model training, SageMaker is reliant on the notebook interface and lifecycle while Lambda is agnostic to the model training environment – one can choose to train either locally or through EC2. This means that Lambda requires no changes to the model training process. On the other hand, SageMaker provides the added benefit of highly optimised native implementations of popular machine learning algorithms such as:

Linear regression – https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html
K-means – https://docs.aws.amazon.com/sagemaker/latest/dg/k-means.html
Principal component analysis (PCA) – https://docs.aws.amazon.com/sagemaker/latest/dg/pca.html
Latent Dirichlet Allocation (LDA) – https://docs.aws.amazon.com/sagemaker/latest/dg/lda.html
Factorisation machines – https://docs.aws.amazon.com/sagemaker/latest/dg/fact-machines.html
Neural topic modelling (NTM) – https://docs.aws.amazon.com/sagemaker/latest/dg/ntm.html
Sequence to sequence modelling (Based on Sockeye) – https://docs.aws.amazon.com/sagemaker/latest/dg/seq-2-seq.html
Boosted decision trees (XGBoost) – https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html
Image Classification (Based on ResNet for full training or transfer learning) – https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html
Recurrent Neural Network Forecasting – https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html
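
These managed containers add distribution and hardware tuning, but the algorithms themselves are the standard ones. As a refresher on what, for instance, a k-means trainer is doing at its core, here is a minimal plain-Python sketch (illustrative only, not AWS’s implementation):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Textbook k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialise from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Recompute each centroid; keep the old one if its cluster emptied.
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids
```

On SageMaker, the equivalent algorithm runs as a distributed, hardware-optimised container that you point at training data in S3.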

Using SageMaker C5 or P3 instances enables further optimisations for neural network training. C5 instances ship with Intel’s Advanced Vector Extensions (AVX) optimisations, while the CUDA 9 and cuDNN 7 drivers on P3 instances take advantage of mixed-precision training on the Volta V100 GPUs. Lastly, SageMaker’s formal deployment and serving module enables developers and data scientists to build models offline while still using the service for model training through the SDK.
Model Deployment
SageMaker makes model deployment easy and accessible. The platform can autoscale inference APIs in a manner similar to attaching application load scalers to EC2 instances (https://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-auto-scaling.html). Furthermore, SageMaker conveniently enables data scientists to A/B test candidate model deployments in real time. Below is a diagram from the official SageMaker documentation that outlines the model deployment process. When deploying a model, SageMaker first creates a new EC2 instance, downloads the serialised model data from S3, pulls the Docker container from ECR, and deploys the runtime required to run and serve the model. Finally, it mounts a volume with the model data to the runtime.

 

In comparison, AWS Lambda provides greater flexibility and cost efficiency for model deployments with low interaction rates. The main drawbacks of Lambda are the inherent cold-start delay and the lack of GPU support (though GPUs are rarely essential for serving single predictions). Moreover, Lambda deployments are constrained to a maximum of 5 minutes of computation time and a 50MB compressed deployment package (in zip or jar format). One can work around the package-size constraint by uploading large dependencies to S3, then downloading and caching them in the writable /tmp directory on the first request. Cold-start delays can be reduced by choosing model architectures with higher prediction throughput, applying driver optimisations, and/or continuously pinging the API every 5-10 minutes.
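
The /tmp caching pattern described above can be sketched as follows; the S3-download and deserialisation helpers here are hypothetical stand-ins for your boto3 and framework calls:

```python
import pathlib

MODEL_DIR = pathlib.Path("/tmp/model_cache")  # Lambda's writable scratch space
_model = None                                 # survives across warm invocations

def _download_from_s3(dest):
    # Hypothetical stand-in for a boto3 s3.download_file(...) call.
    dest.mkdir(parents=True, exist_ok=True)
    (dest / "model.bin").write_bytes(b"serialised-weights")

def _deserialise(src):
    # Hypothetical stand-in for your framework's model-loading call.
    return (src / "model.bin").read_bytes()

def load_model():
    """Download and cache the model on the first (cold) request;
    warm invocations reuse the in-process copy."""
    global _model
    if _model is None:
        if not (MODEL_DIR / "model.bin").exists():
            _download_from_s3(MODEL_DIR)
        _model = _deserialise(MODEL_DIR)
    return _model

def handler(event, context):
    model = load_model()          # cheap on every call after the first
    return {"model_bytes": len(model)}
```

Because Lambda reuses the same container between warm invocations, both the on-disk copy in /tmp and the module-level `_model` reference persist, so only the first (cold) request pays the download cost.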
Conclusion
Overall, both Amazon SageMaker and AWS Lambda provide many benefits for the machine learning workflow. SageMaker’s Jupyter notebook interface for model building and training makes it extremely accessible to developers and data scientists, and its optimised implementations of popular machine learning algorithms such as K-means and boosted trees make it attractive for model training. For machine learning models with variable demand, we recommend deploying on AWS Lambda. However, even though Lambda provides continuous horizontal scaling with the workload, we find that SageMaker’s ‘one-click’ deployment and built-in A/B testing enable us to iterate quickly across candidate model deployments, multiple minimum viable products at a time.
Further Reading

https://aws.amazon.com/sagemaker/
https://aws.amazon.com/lambda/

 


Accessing Jupyter Lab in Amazon SageMaker

Introduction
Amazon SageMaker makes it easy and accessible to build machine learning models using the familiar Jupyter Notebook interface. Given the prominence of Jupyter Notebooks in the data science and machine learning workflow, Jupyter Lab is worth knowing about: it is the next-generation user interface for Jupyter, and all notebook files are fully supported within the Lab interface. This tutorial details how to access Jupyter Lab on the Amazon SageMaker platform. Note, however, that Jupyter Lab is currently in beta, so more features (such as real-time collaboration) and improvements are still under development.
Accessing Jupyter Lab in Amazon SageMaker
Jupyter Lab is easily accessible on the Amazon SageMaker platform. First, start your SageMaker notebook instance from the management console. The platform is currently only available in Tokyo, North Virginia, Ohio, Ireland, and Oregon (as of June 2018), so you may have to change your region to one of these. After some time, open the notebook interface in your browser; you will see the familiar Jupyter notebook interface start up in the home directory. It takes only one step to change the user interface: simply replace the ‘tree’ at the end of the URL with ‘lab’.
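
The URL rewrite is a simple string substitution; as a sketch in Python (the hostnames below are made up):

```python
def to_lab_url(notebook_url):
    """Swap the classic notebook UI path segment ('tree') for the
    Jupyter Lab one ('lab'), keeping any query string intact."""
    head, sep, tail = notebook_url.partition("/tree")
    return head + "/lab" + tail if sep else notebook_url

# e.g. to_lab_url("https://example.notebook.sagemaker.aws/tree")
#      -> "https://example.notebook.sagemaker.aws/lab"
```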

3 Reasons to Use Jupyter Lab
Below we list three of our favourite features of Jupyter Lab in no particular order. For more details and features, visit the user documentation at http://jupyterlab.readthedocs.io/en/latest/.
Dark Theme
A dark theme is now built into the interface. We also expect custom themes and extensions from the community in the near future.

Multiple Notebooks and Windows
Having multiple notebooks snapped alongside a bash terminal or Python console facilitates a modular environment that is entirely customisable to the user or task.

Live Markdown Editor
A live markdown editor is included in Jupyter Lab, enabling users to preview their edits in real time.


GDPR and its Implications for Australia and Machine Learning

What is GDPR?
The General Data Protection Regulation (GDPR) is a regulation in EU law that was adopted by the European Parliament in April 2016 and became enforceable on May 25th, 2018. The regulation applies to the collection, processing, and movement of personal data for individuals residing in 32 European states. Non-compliance can result in a penalty of up to €20,000,000 or 4% of global revenue, whichever is higher. Furthermore, GDPR extends to all companies holding information on data subjects in the EU. This includes Australian businesses.

In short, with regard to machine learning, GDPR imposes a significant compliance burden on automated decision-making (i.e. machine learning) systems. It outlines three areas where automated decisions are permitted:

Where it is necessary for contractual reasons;
Where it is separately authorised by another law;
When the data subject (individual) has explicitly consented.

Moreover, the legislative text describes a general ‘right to explanation’ of automated decision-making processes. In greater detail, data subjects are entitled to an explanation of the automated decision-making systems after the decisions are made. Data subjects are further entitled to contest those decisions.
How Does it Affect Australian Businesses?
The GDPR will apply to Australian businesses that:

Have an establishment in the EU (regardless of whether they process personal data in the EU); or
Do not have an establishment in the EU, but offer goods and services, or monitor the behaviour of individuals (through sensitive personal data) in the EU.

For example, an Australian business will need to comply with GDPR if it:

Ships products to individuals in the EU;
Sells a health gadget that can monitor the behaviour of an individual in the EU;
Deals with the personal information of an individual in the EU (for example, an EU citizen living in Australia obtains tax advice from a local accountant).

What Does This Mean for Machine Learning?
One of the main legislative points affecting the application of machine learning in business is the aforementioned ‘right to explanation’ for all EU individuals affected by automated decision-making systems. In short, if a decision affecting an individual is made automatically from their data, they have a right to know why the machine learning model made that decision.

A common first reaction is that GDPR, and further data privacy legislation, will constrain the application of more accurate black-box machine learning models. This is true to a certain degree. Businesses and machine learning practitioners can no longer focus purely on model accuracy while neglecting a holistic, individual-level understanding of the algorithm, or ignoring the quirks and potential biases that a business’s dataset may carry.

Although the application of machine learning may initially be constrained by higher compliance costs, the need for greater model interpretability is an exciting shift for both academia and industry. Existing research and methodologies are able to facilitate both global interpretability (understanding the aggregate relationships between the factors and the recommended decision) and local interpretability (understanding the individual factors that led to a decision being made) for almost all areas of the supervised machine learning spectrum.
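
One simple, model-agnostic route to global interpretability is permutation importance: shuffle a single feature column and measure how much the model’s accuracy drops. Below is a minimal sketch on a toy model; the model, data, and thresholds are all invented for illustration, not a production method:

```python
import random

def permutation_importance(model, X, y, feature, trials=10, seed=0):
    """Average accuracy drop when one feature column is shuffled.
    A larger drop means the model leans more on that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)                      # break the feature/label link
        shuffled = [list(row) for row in X]
        for row, v in zip(shuffled, col):
            row[feature] = v
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

def clf(row):
    # Toy "model" that only ever looks at feature 0.
    return int(row[0] > 0)

X = [(1, 0), (-1, 1), (1, 1), (-1, 0)] * 5
y = [1, 0, 1, 0] * 5
```

Running this on the toy model shows a large accuracy drop when feature 0 is shuffled and none at all for feature 1, correctly revealing which input the model actually depends on.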

Model interpretability is not a new concept. Academia has long noted that the lack of understanding of black-box models is not something to simply accept. For example, a team of researchers probing their image classifier found that their convolutional neural network was distinguishing ‘dog’ from ‘wolf’ based solely on snow in the background, rather than the visual attributes of the animals.
Our View
Rather than constraining machine learning, GDPR will accelerate research in model interpretability and mature the overall data science industry.

 

Contact us today if you are interested in better understanding your machine learning algorithms and their role in data-driven decision making.

 


Machine Learning for Competitive Advantage

What Exactly is Machine Learning?
Machine learning is the next progression of big data and fast data analytics. Big data involves analysing large sets of data to reveal patterns and trends in order to build stronger relationships with customers and make business processes more efficient. One step on, fast data focuses on applying big data analytics in real time to proactively solve issues and monitor operational health.

Machine learning (the task of teaching a machine to learn a task algorithmically without explicitly programming it) relates to a variety of other fields such as deep learning (artificial neural networks inspired by the human brain) and statistical learning (using statistical algorithms to help machines formulate and validate hypotheses). In broader context, machine learning and data science facilitate predictive analytics, identifying trends and patterns in data and experience to help make more informed decisions. For example, an inventory optimisation system can use its information on stock levels and historical purchasing data to forecast future demand and subsequently optimise inventory holdings.
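
As a toy illustration of that inventory example (a real system would use a proper forecasting model; the moving-average rule and the numbers below are made up):

```python
def forecast_demand(history, window=3):
    """Naive moving-average forecast: next period's demand is the
    mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(history, on_hand, safety_stock=0, window=3):
    """Order enough stock to cover forecast demand plus a safety buffer."""
    needed = forecast_demand(history, window) + safety_stock - on_hand
    return max(0, needed)

weekly_sales = [10, 12, 14, 13, 15, 16]   # made-up unit sales per week
```

Even this crude rule captures the core loop: forecast demand from history, compare against stock on hand, and order the shortfall.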

A recent survey by MIT Technology Review Custom and Google Cloud (2017) revealed that 45% of implementers found they were able to extend their data analysis and insights with machine learning, with 26% feeling they had achieved a competitive advantage of measurable value in the marketplace. According to Chappell and van Loon (2016), companies using machine learning analytics are:

2x as likely to use data to make well-informed decisions
5x as likely to make decisions faster than competitors
3x as likely to execute decisions faster than without machine learning
2x as likely to have financial results in the top 25% of similar companies

Machine learning can be applied in almost any industry and enable companies to use their existing data to improve their business efficiency. According to MIT Technology Review Insights (2017), the most common applications include:

Image recognition, classification, and tagging (47%)
Emotion/behavioural analysis (47%)
Text classification and mining (47%)

How Machine Learning Can Benefit your Company
Machine learning can benefit a company in many ways, including:

When there is no available human expertise

Human expertise takes a great deal of time and effort to develop. Machine learning algorithms are able to learn from experience in a fraction of the time it takes a human.

When decision making is based on human intuition

Machine learning algorithms reduce key-person risk by consolidating intellectual property and expertise. Moreover, they are able to identify and capitalise on subtle patterns that would normally be missed or attributed to intuition.

When solutions should be personalised

With the prevalence of hyper-personalisation, machine learning enables businesses to effectively target and personalise brand engagements and solutions based on customer attributes and preferences, at massive scale.

When solutions are monotonous

Machine learning systems perform best when solutions are monotonous and repetitive. The adoption of this technology will enable employees to work on the things that matter more to them.

When scalability is required

Machine learning systems scale with the compute power of the organisation. In contrast, traditional systems are limited by human capital. For example, an automated chatbot agent offers accessibility and scalability far beyond even the largest customer service workforce.

Overall, it’s not hard to see how machine learning can help streamline processes and decision-making with superhuman accuracy. With the right strategy, adoption of machine learning processes will drastically boost a company’s competitive advantage.

 
References
Chappell, N & van Loon, R 2016, ‘Machine Learning Becomes Mainstream: How to Increase Your Competitive Advantage’, Data Science Central <http://www.7wdata.be/data-analysis/machine-learning-becomes-mainstream-how-to-increase-your-competitive-advantage-3/>

MIT Technology Review Insights 2017, ‘Machine Learning: The New Proving Ground for Competitive Advantage’, MIT Technology Review <https://www.technologyreview.com/s/603872/machine-learning-the-new-proving-ground-for-competitive-advantage/>

van Loon, R 2017, ‘Securing Competitive Advantage with Machine Learning’, Dataconomy <http://dataconomy.com/2017/09/competitive-advantage-machine-learning/>


4 Areas of AI and Machine Learning

AI and machine learning have come a long way in recent years, opening up vast possibilities and promising to transform the core of how businesses and industries operate. AI can often perform tasks or solve problems faster and more efficiently than is humanly possible, allowing business owners to streamline, scale, and automate quantitative work so that resources inside the business can be freed up to focus on the more human, subjective areas of business strategy, growth, and improvement.

Discussed below are four of the largest growing areas in AI and machine learning:

Robotics and autonomous vehicles
Computer vision
Natural language processing
Virtual agents

Robotics and Autonomous Vehicles
Through advances in robotics, computer vision, and machine learning, robots and vehicles can now demonstrate contextual and situational awareness. They can learn, and continue to learn, behaviours that help them perform a given task better, such as playing video games or navigating complex terrain in an unseen environment. This breakthrough means robots can move by themselves without a human directing them remotely, paving the way for autonomous vehicles and robotics.

In warehouses, robotics and automation can increase productivity by working faster, minimising the chance of human error, and decreasing the risk of injury. Swisslog reduced stocking time by 30% with autonomous vehicles, while Ocado (an online supermarket in the UK) runs a warehouse full of conveyor belts, robots, and AI applications that move products from the store, to shopping bags, to delivery vans, and on to the customer (Bughin et al., 2017).

Self-driving cars are fitted with an array of cameras, sensors, and navigation systems to collect as much data as possible about their location and surroundings (Ashish, 2017). The car chooses a navigation path and drives it, avoiding any obstacle along the way. Unlike humans, sensors can monitor all angles around the car 100% of the time, mitigating blind spots and enabling fully informed decisions within milliseconds. In the near future, travellers will have more options en route: they can work, or the car can recommend where to stop for coffee or a nice lookout. This will also revolutionise the private driver industry. Eventually, services like Uber and Lyft will be able to release fleets of self-driving cars rather than paying human drivers, drastically reducing the cost to the end consumer, given that the driver accounts for roughly 50% of the cost of most rides (Ashish, 2017).
Computer Vision
Computer vision relates to the ability of an AI system to process visual information about the world, giving the computer the functionality of the human eyes (Vittori, 2017). This opens up a whole new world of opportunity in health, travel, maintenance work, and study processes.

AI systems are now reviewing MRI and CAT scans using image recognition technology to identify problems and irregularities. Computer vision systems can be more precise than the naked human eye, recognising and diagnosing significantly subtler indicators of problems, fast-tracking the recovery process, and preventing problems from escalating, saving countless lives in the process.

Google Translate has an innovative feature for travellers: it can take a photo of foreign-language text and translate it directly into the user’s native language, using augmented reality to overlay the translation on the original text. Moreover, Amazon has begun trialling delivery drones with computer vision that can avoid obstacles and damage to the drone or product, delivering small items faster and more cheaply than large vans. Similarly, research is under way on bug-sized maintenance robots that can inspect aircraft internally to identify faults and engineering irregularities.

Computer vision can also be used in adaptive learning to work out the education methods best suited to each individual student – systems can track the eyes and movements of students as they are in a class to analyse whether they are engaged, confused, or bored, while also identifying their learning preferences (Bughin et al, 2017). This data is then used to direct lesson planning to maximise learning and engagement. All this information is synthesised to form effective learning groups based on student profiles, learning styles, and knowledge levels to further enhance the learning experience.
Natural Language Processing
Natural language processing is the application of machine learning techniques to analyse and understand natural language and speech. The technology is already being used by schools to speed up basic administrative tasks such as marking and documentation; natural language processing systems can decipher student handwriting and mark objective questions. The same techniques power speech recognition in virtual assistants such as Siri, Alexa, and Google Assistant. Interestingly, advancements in natural language processing have also enabled the synthetic generation of human-sounding speech: Google’s Duplex system can book appointments and make reservations with businesses while exhibiting the minute characteristics of human speech, such as imperfect grammar, pauses, repetition, and filler words.

Furthermore, natural language processing systems are able to draw meaning and semantic understanding from what is being said. AI systems are able to evaluate different tones of voice and understand the connotative meanings behind human speech. Businesses are currently using semantic systems to evaluate employee performance and to further enhance the overall customer experience.
Virtual Agents
Virtual agents, mostly driven by natural language processing techniques, are AI agents that act as online customer service representatives for company and brand engagements.

The most prevalent example of virtual agents are chatbots. They make 24/7 assistance a reality without having to maintain a large service team for on-demand calls. Moreover, chatbots enable smaller companies to expand their customer service access globally, overcoming time and language differences. Conversation through chatbots makes it easier to collect customer data and insights while increasing customer engagement.

Virtual agents have also been deployed as receptionists in medical centres, with the aim of matching patients to the most appropriate doctor based on their history, symptoms, and the doctors’ specialisations (Bughin et al., 2017). They are also used to analyse and summarise large volumes of information and medical journals to improve the accuracy and speed of diagnosis. A virtual agent can sift through endless pages of medical data in seconds, compare the patient’s data with a large library of past cases, and suggest a well-informed treatment plan: common symptoms generally point to a common health problem, and the agent can identify the most effective treatments used in the past.

References
Ashish 2017, ‘Autonomous Cars and Artificial Intelligence’, Codementor

Bughin, J, Hazan, E, Ramaswamy, S, Chui, M, Allas, T, Dahlström, P, Henke, N & Trench, M 2017, Artificial Intelligence: The Next Digital Frontier?, McKinsey Global Institute

Harris, T, ‘How Robots Work’, How Stuff Works <https://science.howstuffworks.com/robot6.htm>

Vittori, C, ‘What’s the Difference Between AI and Computer Vision?’, B&T <http://www.bandt.com.au/opinion/whats-difference-ai-computer-vision-every-marketer-needs-know>


What is AI?

What do Instant Messaging, Flat-Screen TVs, Credit Cards, and Artificial Intelligence All Have in Common?
Answer: They were all once foreign, imagined concepts in old sci-fi films!

It’s hard to imagine life without the first three in the list, but Artificial Intelligence is still shrouded in uncertainty. Simply put, AI is a computer system capable of performing tasks that would normally require human intelligence.

AI has endless real-world applications – it could be your phone recognising a face and suggesting you tag Billy in a photo from the weekend, a driverless car deciding when to accelerate or turn based on synthesised information, or a business management platform for consumer trends and predictive analytics.

What probably came to mind when you heard “Artificial Intelligence” is General AI, where a system can perform any task a human can, learning and communicating dynamically. General AI is still far from being realised, and most experts believe the first prototypes are decades away.

On the other hand, what earned AI the top spot in Gartner’s Top 10 Strategic Technology Trends for 2018 is Narrow AI, where a computer system is programmed to focus on and learn a specific task (e.g. transcribing spoken words or analysing dense legal documents) using processes such as machine learning, predictive modelling, and data science.

Most of the technologies behind Narrow AI, however, are not new. Only in the last decade or so have we seen explosive growth in the application of machine learning and AI. Exponential growth in computational power, paralleled by major advances in data acquisition and generation, has let technology catch up with academic theory, making AI applications practical.
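To make the idea of a system “learning a specific task” concrete, here is a deliberately tiny, hypothetical sketch (not from the original post): a nearest-centroid classifier, one of the simplest machine-learning techniques, learning a single narrow task from a handful of labelled examples. All names and data here are illustrative.

```python
# Toy "Narrow AI": a nearest-centroid classifier that learns exactly
# one narrow task -- separating two clusters of labelled 2-D points.
from statistics import mean

def train(examples):
    """examples: list of ((x, y), label). Returns one centroid per label."""
    by_label = {}
    for point, label in examples:
        by_label.setdefault(label, []).append(point)
    return {label: (mean(p[0] for p in pts), mean(p[1] for p in pts))
            for label, pts in by_label.items()}

def predict(centroids, point):
    """Assign the label of the nearest learned centroid."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Learn from a handful of labelled examples...
model = train([((0, 0), "cat"), ((1, 0), "cat"),
               ((9, 9), "dog"), ((10, 8), "dog")])
# ...then classify a point the system has never seen.
print(predict(model, (8, 9)))  # → dog
```

The system is excellent at this one task and utterly useless at anything else, which is precisely the distinction between Narrow AI and General AI drawn above.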


Blog

Top Strategic Technology Trends for 2018

Where Does Artificial Intelligence Fit In?
The world is ever-changing – new research constantly leads to more efficient and effective ways of doing things. Technological advancement and disruption, whether perceived as a threat or an opportunity, are the only constants.

Research shows Narrow Artificial Intelligence (AI) driving three of the top ten strategic technology trends shaping businesses in 2018 (Gartner, 2017), and it will be a primary game changer for businesses over the coming years. Gartner Research notes that AI will creep into business through three areas: AI Foundations, Intelligent Apps and Analytics, and Intelligent Things.
AI Foundations:
This involves applying AI to reinforce relevant and informed decision making within an organisation and to strengthen the customer experience. Approximately 59% of organisations are still gathering information to formulate their AI strategies (Gartner, 2017); the remainder are further ahead, already developing or implementing solutions.

These strategies use Narrow AI, where a computer system is trained to perform a specific task (such as recognising human speech) using processes such as machine learning and data science.
Intelligent Apps and Analytics:
Companies are incorporating AI and machine learning into their applications and services to make their products more intuitive, responsive, and relevant in an increasingly competitive business environment. Arthur C. Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.” The “magic” that businesses and consumers see, and will continue to see, in applications comes from the adoption of machine learning and AI.

These apps and analytics are applied as a layer of intelligence between people and the business, helping to augment and influence human decisions through information (David Cearley, Gartner 2017). When used effectively, AI makes the services offered by a company more personalised and helps the business perform at a higher standard – increasing customer value.
Intelligent Things:
AI and machine learning can also be applied to objects in everyday life to improve their performance and autonomous functionality. An object can operate semi-autonomously or fully autonomously with a set purpose and time frame, aided by the data it gathers as it does its tasks (e.g. a self-directing vacuum cleaner operating at set hours and familiarising itself with the space it works in; or a self-driving car deciding when to turn a corner based on the road and surrounding vehicles).

Brands are motivated to adopt these applications because intelligent enhancements make their products and services more intuitive and personalised to each individual and their surroundings, increasing their usability and functionality.

In short, the presence of AI is ever-increasing, and there is constant pressure on companies to integrate it into their products and user experiences as seamlessly as possible.

References
Panetta, K 2017, ‘Gartner Top 10 Strategic Technology Trends for 2018’, Gartner, <https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-technology-trends-for-2018/>.