Hexagon Technology Inc. - Oakville

Accelerating the Adoption of AI Models

In collaboration with industry, Peter Darveau, P.Eng., contributed to a project to develop an open-source machine learning (ML) model for identifying and analyzing root causes in plant operations at a facility. The project is part of Stage 2 of their AI maturity framework. Hexagon Technology, in Oakville, Ontario, educates about and develops transformative AI solutions that bridge human and machine collaboration. In this post, we discuss how time-consuming root cause analysis can be shortened by using a machine learning model as an alternative to the troubleshooting and analysis methods that come with legacy controllers and software.

Control systems in place today work mostly on binary logic (1s and 0s) or some basic form of linear control incorporating a proportional gain and possibly an offset. In some cases, such as process control, the rate of change of the output compared to the rate of change of the input is factored in to stabilize an otherwise unstable output. While the various operations and sequences perform their respective tasks well independently of each other, the function of the controller as a whole is ignored. Advances in neural network theory, IIoT and computing technology offer new ways of getting more out of legacy controllers.
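To make that concrete, here is a minimal sketch of such a linear control law in Python. The gains, offset and sample time are illustrative values only, not tuned parameters from any real controller.

```python
# One scan of a simple linear control law: proportional gain on the error,
# a fixed offset, and an optional term on the rate of change of the error
# to help stabilize an otherwise unstable output. Values are illustrative.

def linear_controller(setpoint, measurement, prev_error, kp=2.0, kd=0.5, offset=1.0, dt=0.1):
    error = setpoint - measurement
    rate_of_change = (error - prev_error) / dt  # change of the error per scan
    output = kp * error + kd * rate_of_change + offset
    return output, error

# Example scan: setpoint 50.0, measured value 47.5, previous error 2.0
out, err = linear_controller(50.0, 47.5, prev_error=2.0)
print(f"controller output: {out:.2f}")
```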

What is the advantage of looking at the controller as a whole? We can imagine a box with many inputs and outputs. By performing statistical analysis on what the controller controls, we can visualize how the controller is performing and make transactional parameter adjustments to get the desired output performance. This is generally what Statistical Process Control (SPC) does. But this approach relies on an external control method, and it in no way gives us an optimized model of all the control sequences that accounts for their sensitivities to each input.
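As an illustration of that external, SPC-style view, the short sketch below estimates control limits for one controller output from a baseline sample and flags new readings that fall outside them. The data is synthetic and the 3-sigma limits are simply the conventional SPC choice.

```python
import statistics

# Baseline readings of one controller output (synthetic data)
baseline = [74.8, 75.1, 75.0, 74.9, 75.3, 74.7, 75.2, 75.0, 74.8, 75.1]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # conventional 3-sigma control limits

# New readings are checked against the limits; out-of-control points are flagged
new_samples = [75.0, 74.9, 76.4]
flagged = [x for x in new_samples if not (lcl <= x <= ucl)]
print(f"mean={mean:.2f} UCL={ucl:.2f} LCL={lcl:.2f} flagged={flagged}")
```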

Developing an optimized model of the controller will only be beneficial if we can optimize the box with some method of error correction; that is, adjust it so that the errors between the expected outputs and the observed outputs are minimized. This is where neural networks come into play.

It can be shown that a series of control sequences can be represented as a neural network, and that this representation is related to the most iconic model of all: linear regression, given by the familiar algebraic formula y = mx + b, a straight line. By cascading the output of each linear regression layer (function) into the next, we obtain a simple network.

[Figure: the straight-line equation y = mx + b redrawn as a simple network of inputs and outputs]


With some added nomenclature to keep track of everything, we have turned a straight-line equation into a mathematical network. The diagram flows unidirectionally from the inputs to the outputs. Adding more functionality creates a more complex network, as shown below:

[Figure: a more complex network formed by cascading several linear layers]

It is important to note in this example that no matter how we analyze or modify this model, the output(s) will always behave according to a straight line, and this explains why the model represented by all the control sequences running in a typical controller cannot be optimized.
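A quick numeric check makes the point: no matter how many linear stages are cascaded, the result collapses back to a single straight line. The weights below are arbitrary illustrative values.

```python
import numpy as np

m1, b1 = 2.0, 1.0   # first linear "layer"
m2, b2 = -0.5, 3.0  # second linear "layer"

def cascade(x):
    h = m1 * x + b1        # output of the first stage
    return m2 * h + b2     # fed into the second stage

# Algebraically the cascade is one line: y = (m2*m1)*x + (m2*b1 + b2)
m_eq, b_eq = m2 * m1, m2 * b1 + b2

x = np.linspace(-5, 5, 11)
assert np.allclose(cascade(x), m_eq * x + b_eq)
print(f"equivalent single line: y = {m_eq}*x + {b_eq}")
```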

In another post, I will explain how this problem can be resolved with the introduction of an activation function in the network. It is this function that shapes the output in a non-linear (curved) fashion, allowing the network to perform error minimization by factoring in the sensitivity of the change in the outputs to the change in the inputs.
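As a brief preview of that idea, the sketch below inserts a tanh activation between two linear stages; the choice of tanh and the weights are assumptions for illustration only. With the activation in place, the cascade no longer reduces to a single straight line.

```python
import numpy as np

m1, b1 = 2.0, 1.0
m2, b2 = -0.5, 3.0

def cascade_with_activation(x):
    h = np.tanh(m1 * x + b1)  # non-linear activation applied to the first stage
    return m2 * h + b2

x = np.linspace(-5, 5, 11)
y = cascade_with_activation(x)

# The best straight-line fit now leaves a residual: the response is a curve.
m_fit, b_fit = np.polyfit(x, y, 1)
print("max straight-line fit error:", np.abs(y - (m_fit * x + b_fit)).max())
```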

AI systems perform at or above human level on many specialized tasks, including tasks that were never before possible with written rules or software, such as recognizing and categorizing millions of states. Advances in technology such as Intel's Xeon processors, the Movidius NCS and the oneAPI framework make this possible.

The key to AI maturity is envisioning what the end state could look like and seeking support to chart a clear path from the current state to that vision. At Hexagon Technology, we are inspired by the potential of AI. We have also been privileged to go on this transformational journey with business leaders, helping them understand the promise of AI to create the future of industry, services, customer experiences and the environment.


 

Plant Operations using IoT

 

An application demonstrating the capability of using an edge device to "call" data on a remote server or in the cloud. In this case, the user speaks to an iPhone to retrieve data at the level of detail asked for.
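A minimal sketch of that "call" pattern is shown below, assuming the spoken request has already been transcribed to text on the phone. The endpoint URL, query parameters and field names are hypothetical placeholders, not an actual service.

```python
import requests

def fetch_plant_data(transcribed_request: str, detail: str = "summary"):
    """Ask a (hypothetical) plant-data service for the value the user spoke about."""
    response = requests.get(
        "https://plant-data.example.com/api/v1/query",  # placeholder endpoint
        params={"q": transcribed_request, "detail": detail},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()

# e.g. the phone's speech-to-text produced: "what is the level in tank 3"
# print(fetch_plant_data("what is the level in tank 3", detail="full"))
```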

 

 


Verification & Validation of Machine Learned & AI Models

Machine-learned and AI models degrade over time in production. As the adoption of these models steadily increases, this problem is becoming so evident that some scientists, such as Dr. Allen of Rice University, are raising awareness of a reproducibility crisis, especially in biomedical research, that causes significant waste of time and money.

The chart below was put together to help explain a model's vulnerability to degradation in various types of applications. To reduce the risks arising from ML/AI model degradation, and thus make the model certifiable, there are two focus areas to consider: 1) the platform running the model, and 2) the requirement for verification & validation of ML and AI models. The first is a technical aspect with cost implications; the second deals more with responsible development as mandated by ethical engineering practices, quality requirements and key values such as those set out in "The Montreal Declaration for a Responsible Development of AI".

[Chart: ML/AI model vulnerability to degradation across application types]

Further elaboration on those two focus areas is required.

Platforms: How quickly a model degrades depends on the application and on the problem the model is designed to solve. Putting a platform in place to support the model is key to ensuring it can be tuned to suit the purpose it is meant to serve. Compare, for example, the platforms for a vision system and for a cyber security system. The model for a vision system is programmed with a set of crafted rules, tested against its purpose, and for the most part static from that point on. In contrast, a model protecting against cyber attacks needs frequent adjustment and tuning, so its platform must support active supervised and unsupervised learning capabilities to stay relevant. A static cyber security model (one based on a limited set of rules) would degrade very quickly and require constant dataset updating and redeployment, leading to wasted time and money.
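To illustrate the kind of platform capability meant here, the sketch below uses scikit-learn's partial_fit to update a classifier incrementally as new labelled batches arrive, rather than freezing it after an initial build. The data is synthetic and the model choice is only an example.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# Initial training on a first batch of labelled feature vectors
X0, y0 = rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# As behaviour drifts, later labelled batches update the same model in place
for _ in range(5):
    X_new, y_new = rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)
    model.partial_fit(X_new, y_new)

print(model.predict(X_new[:3]))
```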

Verification & Validation (V&V): Meeting the requirements of a quality management system (QMS) and incorporating good engineering control practices are the second key. Surprisingly absent from public ML/AI development lifecycle frameworks, V&V is the task users need to consider seriously when model degradation is a concern. Performing V&V, especially on models that will run on more sophisticated platforms, is not only favourable from an effort/cost perspective but can also provide safety benefits, notably in autonomous vehicle applications. On the right of the chart (vertical axis), we find that as the code for the ML/AI algorithm becomes more important to the ML/AI system and its surrounding infrastructure, so does the requirement for performing V&V.
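One simple form such a V&V step can take is an acceptance check against a stated requirement on held-out data, as in the sketch below. The dataset, model and 90% accuracy threshold are illustrative assumptions, not a prescribed acceptance criterion.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

REQUIRED_ACCURACY = 0.90  # example requirement, traceable back to the QMS

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

# Verification gate: the model is only released if the requirement is met
assert accuracy >= REQUIRED_ACCURACY, f"verification failed: {accuracy:.3f}"
print(f"verification passed: accuracy {accuracy:.3f} meets the {REQUIRED_ACCURACY} requirement")
```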

With all implementations of ML/AI, it is important to thoroughly define the requirements early in the project to ensure a high-quality model is delivered to the user. A good Data Science Lifecycle includes model evaluation with cross-validation, model reporting and A/B testing, but a great one will also include provisions for the platform requirements and thoughtful V&V testing, leading to a higher-quality, certifiable ML/AI model.

Thanks.

Peter Darveau, P. Eng.

 


Lessons Learned - Implementing Machine Learning Projects into Production


The Early Adoption of Machine Learning Projects

Machine Learning is in its early stages of adoption. While there are plenty of examples touting the benefits, putting a Data Science team together requires some preparation, and there are important steps that should be taken before any major ML project kicks off. Here is a summary of our findings:

Business' Role:

A major stumbling block can occur when the business problem isn't clearly articulated from the very beginning. The questions asked by the business need to be matched against the data that is actually available. Clients are well served by identifying a business champion when considering a data science strategy.

Leadership's Role:

Machine learning projects are highly experimental. The business's leadership must foster a strong culture of empowerment and experimentation, and there must be a willingness to include data in the overall vision of the business.

Talent Identification:

Many businesses assume that huge investments in new talent and skillsets are required to implement a data science strategy. This is rarely the case. It often works to identify employees who are genuinely interested and already have a skillset related to data science. Someone from Engineering is a great place to start, but it is advantageous if that person has multi-disciplinary experience.

Scoping Work:

As with any new initiative, there will be a lot of learning and education involved. A Pilot Project is typically identified, for which a lot of data gathering needs to occur. Expect 30-50% of the effort to go into properly scoping out an ML Pilot Project.

Bottlenecks to be aware of:

Some bottlenecks found include:

Ability to collect the right data. For example, in predictive maintenance, the unavailability of failure history could hinder the results
Leadership vision on company data and how to use it
Lack of incentive

 

