Determining optimal control settings for an industrial process can be challenging. For example, when there are interactions between the effects of the controls, adjusting one setting can require readjusting other settings. This article addresses the problem using genetic optimization.

While it is straightforward to invert a function like *y = mx* to produce the inverse, *x = y/m*, some functions can’t be easily inverted. One such function is the cumulative distribution function (CDF) of the normal probability distribution, where neither the CDF nor the inverse CDF (quantile function) can be expressed in closed form. This article presents a method for inverting a function using a neural network regardless of whether the problem can be solved analytically.
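The core idea can be sketched in a few lines: generate pairs (x, f(x)), then train the network on the *swapped* pairs, so it maps y = CDF(x) back to x. The tiny one-hidden-layer network below, its hidden size, learning rate, and epoch count are illustrative assumptions, not the article’s actual architecture:

```python
import math
import random

# Sketch: invert the normal CDF by training on swapped (y, x) pairs,
# where y = CDF(x). Pure-Python one-hidden-layer network; the hidden
# size, learning rate, and epoch count are assumed for illustration.

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Training pairs: input y = CDF(x), target x -- swapping in/out inverts f.
xs = [i / 100.0 for i in range(-300, 301)]
data = [(normal_cdf(x), x) for x in xs]

random.seed(0)
H = 16
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def forward(y):
    h = [math.tanh(w1[j] * y + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(y)[0] - x) ** 2 for y, x in data) / len(data)

lr = 0.03
loss_before = mse()
for _ in range(200):
    random.shuffle(data)
    for y, x in data:
        out, h = forward(y)
        err = out - x
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * y
            b1[j] -= lr * grad_h
        b2 -= lr * err
loss_after = mse()
```

After training, feeding the network a probability such as 0.5 should return a value near the corresponding quantile (0 for the standard normal), even though no closed-form inverse exists.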

This article presents a method for training a neural network to derive the integral of a function. The technique works not only with analytically solvable integrals but also with integrals that have no closed-form solution and are typically evaluated by numerical methods. An example is the normal distribution’s cumulative distribution function (CDF).
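When the integral has no closed form, one way to obtain supervision targets is to integrate the function numerically. The sketch below builds (x, F(x)) pairs for the normal CDF with the trapezoid rule; the grid range and step size are assumed for illustration:

```python
import math

# Sketch: generate (x, F(x)) training targets for the normal CDF by
# numerically integrating the density with the trapezoid rule.
# The grid [-4, 4] with step 0.01 is an assumed discretization.

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

xs = [-4.0 + 0.01 * i for i in range(801)]
F = [0.0]
for i in range(1, len(xs)):
    area = 0.01 * 0.5 * (normal_pdf(xs[i - 1]) + normal_pdf(xs[i]))
    F.append(F[-1] + area)
# F[i] approximates CDF(xs[i]) up to the tiny tail mass CDF(-4) ~ 3e-5

targets = list(zip(xs, F))  # (input, target) pairs for the network
```

The resulting pairs can then be fed to any regression network, exactly as with an analytically known integral.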

Many relationships in physics, biology, chemistry, economics, engineering, etc., are defined by differential equations. In general, a differential equation (DE) describes how variables are affected by the rates of change of other variables. For instance, a DE explains how the position of a mass vibrating on a spring changes with time in relation to the mass’s velocity and acceleration. A physics-informed neural network (PINN) produces responses that adhere to the relationship described by a DE (whether the subject is physics, engineering, economics, etc.). In contrast, an inverse physics-informed neural network (iPINN) acts on a response and determines the parameters of the DE that produced it. PINNs and iPINNs are trained by including a constraint during training that forces the relationship between the input and output of the neural network to conform to the DE being modeled.
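The training constraint can be pictured as an extra loss term that penalizes the residual of the DE at sample points. The sketch below evaluates that residual for the spring equation u″ + ω²u = 0 using finite differences on a trial response; a real PINN would differentiate the network itself, typically via automatic differentiation, and ω and the collocation grid here are assumed:

```python
import math

# Sketch of the DE constraint used in PINN training: for a mass on a
# spring, u'' + omega**2 * u = 0 must hold. The residual is checked by
# finite differences on a trial response; a real PINN differentiates
# the network itself. omega and the collocation points are assumed.

omega = 2.0

def u(t):
    # trial response: an exact solution of the spring equation
    return math.cos(omega * t)

def residual(t, h=1e-3):
    u2 = (u(t + h) - 2.0 * u(t) + u(t - h)) / (h * h)  # approximate u''(t)
    return u2 + omega * omega * u(t)

# physics loss term: mean squared residual over collocation points
ts = [0.1 * i for i in range(50)]
physics_loss = sum(residual(t) ** 2 for t in ts) / len(ts)
```

During training, this physics loss is added to the ordinary data-fitting loss, so the network is pulled toward responses that satisfy the equation everywhere, not just at the measured points.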

Part 1 described a reinforcement learning system used to find the optimal control settings for a reflow oven used for soldering electronic components to a circuit board. Part 2 presents the details of the oven simulator used to accelerate the training process.

Part 1 explores the ability of a model trained with reinforcement learning (RL) to generalize, i.e., produce acceptable results when presented with data it was not exposed to during training. The application in this study is an industrial process with multiple controls that determine the effect on a product as it transitions through the process.

Since considerable time is required to stabilize an oven’s temperature after changing the heater settings and to pass the product through the oven, an oven simulator is used to speed up the process. The simulator emulates a single pass of the product through the oven in a few seconds, compared to the minutes required by a physical oven.

The oven simulator has eight heating zones, each with a control for setting the temperature of the zone’s heater. After each pass, the simulator provides the temperature readings of the product recorded as it traveled through the oven.
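A minimal stand-in for such a simulator might model the product temperature as relaxing toward each zone’s heater setpoint with a first-order lag. Everything below — the zone dwell, lag coefficient, ambient start, and the name `simulate_pass` — is an assumption for illustration, not the simulator actually used:

```python
# Toy sketch of an eight-zone conveyor-oven simulator: the product
# temperature relaxes toward each zone's heater setpoint with a
# first-order lag. Zone dwell, lag rate, and ambient start are assumed.

def simulate_pass(setpoints, t_ambient=25.0, steps_per_zone=30, alpha=0.05):
    """Return simulated product temperature readings for one pass."""
    assert len(setpoints) == 8, "one setpoint per heating zone"
    temp = t_ambient
    readings = []
    for sp in setpoints:
        for _ in range(steps_per_zone):
            temp += alpha * (sp - temp)  # first-order approach to setpoint
            readings.append(temp)
    return readings

profile = simulate_pass([150, 160, 170, 180, 200, 220, 180, 120])
```

Each call returns a full temperature profile in microseconds, which is what makes simulator-based RL training practical compared with waiting minutes per pass on physical hardware.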

This paper explores the ability of a model trained with reinforcement learning (RL) to generalize, i.e., produce acceptable results when presented with data it was not exposed to during training. The application in this study is an industrial process with multiple controls that determine the effect on a product as it transitions through the process. Determining optimal control settings in this environment can be challenging. For example, when there are interactions between the controls, adjusting one setting can require the readjustment of other settings. Also, a complex relationship between a control and its effect complicates finding an optimal solution. The results presented here show that a model trained by an RL process performs well in this environment. Further, with proper definitions of the state and reward functions in the RL process, the trained model is able to generalize to conditions different from those used for training.

Determining optimal control settings for an industrial process can be tough. For instance, controls can interact, so that adjusting one setting requires readjusting other settings. Also, the relationship between a control and its effect can be very complex. Such complications make optimizing a process challenging. This article explores a reinforcement learning solution for controlling an industrial conveyor oven. An example of this type of equipment is a reflow oven used for soldering electronic components to a circuit board (Figure 1).
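To illustrate the RL loop in miniature, the sketch below reduces the problem to a single control and a bandit-style value update: choose a heater level, observe a reward, and nudge the value estimate toward it. The actual system has eight interacting controls and a richer state and reward design; every constant and name here is assumed:

```python
import random

# Minimal bandit-style sketch of the RL loop for a single oven control:
# choose a heater level, observe a reward, update the value estimate.
# The article's system has eight interacting controls and a richer
# state/reward design; every constant here is assumed.

random.seed(1)
ACTIONS = list(range(5))          # discrete heater levels 0..4
TARGET = 3                        # level assumed to give the best product

def reward(action):
    return -abs(action - TARGET)  # closer to target -> higher reward

q = {a: 0.0 for a in ACTIONS}
alpha, eps = 0.5, 0.2             # learning rate, exploration rate
for _ in range(500):
    if random.random() < eps:
        a = random.choice(ACTIONS)      # explore
    else:
        a = max(q, key=q.get)           # exploit current best estimate
    q[a] += alpha * (reward(a) - q[a])  # incremental value update

best = max(q, key=q.get)
```

After training, `best` settles on the level with the highest reward, which hints at how an RL agent can discover good settings without an explicit model of the control interactions.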
