1. Introduction
In mining operations, the primary energy consumer is the comminution system, responsible for more than half of the entire mine consumption [1]. Of all the equipment that integrates the comminution circuit, the semi-autogenous grinding (SAG) mill is perhaps the most important in the system. With an aspect ratio of 2:1 (diameter to length), these mills combine impact, attrition and abrasion to reduce the ore size. SAG mills are located at the beginning of the comminution circuit, after a primary crushing stage. Although there are small SAG mills, their size usually ranges from 9.8 × 4.3 m to 12.8 × 7.6 m, with nominal energy demands of 8.2 and 26 MW, respectively [2], which makes SAG mills the most relevant energy consumer within the concentrator. Modelling their consumption behaviour supports operational control and energy demand-side management [3].
Most theoretical and empirical models [4,5,6] demand input feed characteristics, such as hardness, size distribution and inflow rate, SAG characteristics, such as sizing and product size distribution, and operational variables such as bearing pressure, water addition and grinding charge level. Although they are suitable for providing adequate design guidelines, they lack accurate in-situ inference since most assume steady state and isolation from upstream and downstream processes. In response, model predictive control for SAG mills (SAG MPC) [7] combines those methods with real-time operational information. However, expert knowledge is required to model the SAG mill dynamics properly.
From a geometallurgical perspective, the integration of new predictive methods that account for space and time relationships over real-time attributes has been defined as a fundamental challenge [8,9] in mining operations, particularly in an integrated system such as comminution. In response, data-driven approaches have been proposed ranging from support vector machines [10] and gene expression programming [11] to hybrid models that combine genetic algorithms and neural networks [12] and recurrent neural networks [13]. As data-driven methods are sensitive to the context (available information) and representation (information workflow), the authors have studied the use of several machine learning and deep learning methods in modelling the SAG energy consumption behaviour based only on operational variables [14].
The energy consumed by a SAG mill is related to several factors such as expert operator decisions, charge volume, charge specific gravity and the hardness of the feed material. Knowing the hardness of the output material is relevant for the downstream stage in the primary grinding circuit. Ore hardness can be characterized at sample support by combining logged geological properties with the results of standardized comminution tests, which can then be used to predict the hardness of each block sent to the process. However, these attributes are not always available. In response, a qualitative characterization of the hardness of the ore processed at time t, relative to that of the ore processed at time t+1, can be made using only operational variables rather than a set of mineralogical characterizations. This qualitative characterization is referred to, and used here, as the operational relative-hardness (ORH).
We take advantage of previous work [14] showing that the Long Short-Term Memory (LSTM) [15] network outperforms other machine learning and deep learning techniques in inferring the SAG mill energy consumption. Section 2 presents the ORH and LSTM models, Section 3 establishes the SAG mill experimental framework, the results of which are presented in Section 4, and conclusions are drawn in Section 5.
2. Model

2.1. Operational Relative-Hardness Criteria

Of the several operational parameters that can be captured and associated with SAG mill operations, we consider the energy consumption (EC) and feed tonnage (FT) to build our operational relative-hardness criteria.
The pair {EC, FT}t is collected over a period of time T using a Δt discretization. By considering the one-step forward time differences of energy consumption (ΔECt = ECt+1 − ECt) and feed tonnage (ΔFTt = FTt+1 − FTt), a qualitative assessment of the operational relative-hardness can be made. For instance, if the energy consumption is increasing and the feed tonnage is constant, it can be interpreted as an increase in ore hardness relative to the previous period. Similarly, if the feed tonnage is constant and the energy decreases, a decrease in ore hardness relative to the previous period can be assumed. When both ΔECt and ΔFTt show the same behaviour, the SAG mill can either be processing ore with medium operational relative-hardness or be filling up or emptying. To avoid misclassification in this last case, the operational relative-hardness is labelled as undefined. Table 1 summarizes the nine combinations of states and the associated operational relative-hardness.
The qualitative labelling of ΔECt and ΔFTt as increasing, constant or decreasing can be established based on their global distributions over the period T as:
ΔECt is labelled as: Increasing, if ΔECt > λ·σΔEC; Constant, if |ΔECt| ≤ λ·σΔEC; Decreasing, if ΔECt < −λ·σΔEC.
ΔFTt is labelled as: Increasing, if ΔFTt > λ·σΔFT; Constant, if |ΔFTt| ≤ λ·σΔFT; Decreasing, if ΔFTt < −λ·σΔFT. (1)
where σΔEC and σΔFT represent the standard deviations of ΔEC and ΔFT over the period T, respectively, and λ is a scalar value that modulates the labelling distribution. Note that (i) a λ value above 1.5 would make the entire definition meaningless, since most values would be labelled as constant, and (ii) the λ value is an external model parameter whose definition can be guided either subjectively or via statistical meaning.
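As a concrete sketch of these criteria (a hypothetical helper, not the authors' code; the function name and the ±1 trend encoding are ours), the labelling of Equation (1) combined with the Table 1 mapping can be written in a few lines of numpy:

```python
import numpy as np

def orh_labels(ec, ft, lam=1.0):
    """Label each step with an operational relative-hardness category
    from energy-consumption (EC) and feed-tonnage (FT) series."""
    d_ec = np.diff(ec)  # one-step forward differences, dEC_t
    d_ft = np.diff(ft)  # dFT_t
    band_ec, band_ft = lam * d_ec.std(), lam * d_ft.std()

    def trend(d, band):
        # +1 increasing, 0 constant, -1 decreasing, relative to the lambda*sigma band
        return np.where(d > band, 1, np.where(d < -band, -1, 0))

    t_ec, t_ft = trend(d_ec, band_ec), trend(d_ft, band_ft)
    # Hard: EC trend exceeds FT trend; Soft: EC trend below FT trend;
    # Undefined whenever both move together (mill filling/emptying, or both constant).
    return np.where(t_ec > t_ft, "hard",
                    np.where(t_ec < t_ft, "soft", "undefined"))
```

The comparison `t_ec > t_ft` reproduces all nine rows of Table 1: any state where EC rises relative to FT is hard, the mirror states are soft, and the three same-behaviour states are undefined.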
2.2. Long Short-Term Memory
The Long Short-Term Memory (LSTM) [15] neural network architecture belongs to the family of recurrent neural networks in deep learning [16]. It is suited to capturing short- and long-term relationships in temporal datasets. Internally, the LSTM applies several combinations of affine transformations, element-wise multiplications and non-linear transfer functions, for which the building blocks are:
- xt: input vector at time t. Dimension (m, 1).
- Wf, Wi, Wc, Wo: weight matrices for xt. Dimensions (nH, m).
- ht: hidden state at time t. Dimension (nH, 1).
- Uf, Ui, Uc, Uo: weight matrices for ht−1. Dimensions (nH, nH).
- bf, bi, bc, bo: bias vectors. Dimensions (nH, 1).
- V: weight matrix for ht as output. Dimension (K, nH).
- c: bias vector for the output. Dimension (K, 1).
where m is the number of input variables, K is the number of output variables, and nH is the number of hidden units. Let τ ∈ N be a temporal window. At each time t ∈ {1, …, τ}, the LSTM receives the input xt, the previous hidden state ht−1 and the previous memory cell ct−1. The forget gate ft = σ(Wf xt + Uf ht−1 + bf) is the permissive barrier for the information carried by xt. The input gate it = σ(Wi xt + Ui ht−1 + bi) decides the relevance of the information carried by xt. Note that both ft and it use the sigmoid σ(x) = (1 + e−x)−1 as activation function over a linear combination of xt and ht−1.
By passing the combination of xt and ht−1 through a Tanh function, a candidate memory cell c̃t = Tanh(Wc xt + Uc ht−1 + bc) is computed. The final memory cell ct = ft ⊙ ct−1 + it ⊙ c̃t is computed as the sum of (i) what to forget from the past memory cell, as an element-wise multiplication (⊙) between ft and ct−1, and (ii) what to learn from the candidate memory cell, as an element-wise multiplication between it and c̃t.
Similar to it and ft, the output gate ot = σ(Wo xt + Uo ht−1 + bo) passes a linear combination of xt and ht−1 through a sigmoid function. It controls the information passing from the current memory cell ct to the final hidden state ht = Tanh(ct) ⊙ ot, an element-wise multiplication between ot and Tanh(ct). At the final step τ, the output is computed as yτ = V hτ + c. When dealing with more than one categorical prediction (K > 1), as in the present work for ORH forecasting, a softmax function is applied over yτ to obtain a normalized probability distribution, and category k has probability p̂(k) = exp(yτ,k) / Σc=1..K exp(yτ,c).
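The gate equations above can be condensed into a single numpy time step. This is an illustrative sketch under our own parameter naming (a dictionary `p` of weights), not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM time step following the gate equations in the text.
    p maps names (Wf, Uf, bf, ...) to parameter arrays."""
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h_prev + p["bf"])        # forget gate
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h_prev + p["bi"])        # input gate
    c_tilde = np.tanh(p["Wc"] @ x + p["Uc"] @ h_prev + p["bc"])  # candidate cell
    c = f * c_prev + i * c_tilde                                 # memory cell
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h_prev + p["bo"])        # output gate
    h = np.tanh(c) * o                                           # hidden state
    return h, c

def softmax(y):
    """Normalized probability distribution over the K output categories."""
    e = np.exp(y - y.max())
    return e / e.sum()
```

Unrolling `lstm_step` over t = 1, …, τ and applying `softmax(V @ h_tau + c_out)` at the last step yields the three-category ORH probabilities described next.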
An illustrative scheme of the internal connections at time step t inside an LSTM is shown in Figure 1 (left). The ORH prediction has three categories (hard, soft and undefined), and the probability is computed at the last unit, at time step τ, as shown in the unrolled LSTM in Figure 1 (right).
3. Experiment

3.1. Dataset
We used two datasets containing operational data for two independent SAG mills, recorded every half hour over a total of 340 and 331 days, respectively. Each SAG mill receives fresh feed and is connected in an open-circuit configuration (SABC-B), where the pebble crusher product is sent to the ball mills. At each time t, the dataset contains feed tonnage (FT) (ton/h), energy consumption (EC) (kWh), bearing pressure (BPr) (psi) and spindle speed (SSp) (rpm). Each dataset is split into two subsets, training and testing (Table 2); a validation dataset is not considered since the optimal LSTM architecture to train is drawn from previous work [14]. The division is arbitrary, seeking a proportion of approximately 50/50.
As can be seen in Table 2, the predictive methods are trained with the first 50% of the data and tested on the subsequent 50%, without being fed the previous 50% of historical data.
Note that the comminution properties of the ore, such as A×b or BWi, are not included in the datasets; therefore, the relationship between the forecasted ORH and comminution properties is not explored in this work. The results presented here, however, serve as a basis to examine such a relationship if those properties were known.
3.2. Assumptions

SAG mills are fundamental pieces of comminution circuits. As no information regarding downstream or upstream processes is available, recognizing bottlenecks in the dataset becomes subjective. We assume that the SAG mills may shift from steady state to under capacity, and vice versa, along the dataset. Stationarity of all operational variable distributions, including the ore grindability, is assumed throughout this work: the entire dataset is taken to belong to a known and planned combination of ore characteristics (geometallurgical units). By doing so, we limit the applicability of the present models beyond the temporal dataset without a proper training process.

As explained in the problem statement section, we use temporal averages of the energy consumption and feed tonnage as input for operational hardness prediction. We therefore assume an additivity property for those variables: since their units are kWh and ton/h over a constant temporal discretization, averaging adjacent data points is mathematically consistent.

In the operation from which the datasets were obtained, the SAG mill liners are replaced every 5–7 months. Since the datasets cover almost a year, the liners were replaced in each SAG mill at least once during the tested period, which may alter the relationship between energy consumption and other operational variables and induce a discontinuity in the temporal plots. However, since in this work the temporal window for ORH evaluation is at most eight hours, the local discontinuity associated with liner replacement is not expected to affect the forecast at that time frame: the ORH relates to what was happening in the corresponding mill within the last few hours, not to the mill behaviour prior to the last replacement of liners.

3.3. Problem Statement
The aim is to forecast the operational relative-hardness. To do so, we need to label the datasets with the associated ORH category at each data point. We know from Equation (1) that the ORH labelling process requires as input (i) the one-step forward differences of energy consumption (ΔECt) and feed tonnage (ΔFTt), and (ii) a lambda (λ) value. In addition, we are interested in forecasting the ORH at different time supports.
Since the information is collected every 30 min, the upcoming energy consumption and feed tonnage at 0.5 h support are denoted simply as ECt+1 and FTt+1, in reference to ECt+1(0.5h) and FTt+1(0.5h), respectively. The upcoming EC and FT at 1 h support, ECt+1(1h) and FTt+1(1h), are computed by averaging the next two energy consumption values, ECt+1 and ECt+2, and the next two feed tonnage values, FTt+1 and FTt+2. Similarly, by averaging the upcoming ECs and FTs, different supports can be computed. Let s be the time support in hours, representing the average over a temporal interval of that duration; then ECt+1(sh) and FTt+1(sh) are calculated as:
ECt+1(sh) = (ECt+1 + … + ECt+2s) / 2s,  FTt+1(sh) = (FTt+1 + … + FTt+2s) / 2s
In this experiment, three different supports (sh) are considered: 0.5, 2 and 8 h.
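This forward averaging over 2s half-hour samples can be sketched as follows (the helper name is ours, assuming the half-hour sampling grid of the datasets):

```python
import numpy as np

def support_average(series, s):
    """Forward average at a support of s hours (2s half-hour samples):
    the value at index t is the mean of the points t+1, ..., t+2s."""
    n = int(2 * s)
    # moving average: out[k] = mean(series[k : k + n])
    out = np.convolve(series, np.ones(n) / n, mode="valid")
    return out[1:]  # shift so index t averages the *upcoming* n points
```

For s = 0.5 this reduces to the next raw sample, and for s = 1 it averages the next two samples, matching the definition above.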
Figure 2 illustrates the ORH criteria using a half-hour time support on the SAG mill 1 dataset. From the daily graph of ECt(0.5h) and FTt(0.5h) at the top, the graphs of ΔECt(0.5h) and ΔFTt(0.5h) are extracted and presented at the centre and bottom, respectively. Three different bands, corresponding to λ: 0.5, 1.0 and 1.5, are shown. Values above the band are labelled as increasing, values below it as decreasing, and values inside it as constant. The resulting categories for EC and FT define the operational relative-hardness (as in Table 1). It can be seen that, as λ increases, the proportions of hard and soft instances decrease. Since λ is an arbitrary parameter, a sensitivity analysis is performed in the range [0.5, 1.5] to capture its influence on the ability of the resulting LSTM to learn to predict the ORH at the different time supports.
At each time t, the input variables considered to predict ORHt+1(sh) are FTt, BPrt and SSpt. To account for trends, and since FT and SSp are operational decisions, the differences FTt+1 − FTt and SSpt+1 − SSpt are also considered as inputs. Therefore, the dataset of predictors and output {X, Y} ∈ R5 × R, at each time support sh, has samples {xt, yt} ∈ {X, Y} with xt = {FTt, BPrt, SSpt, FTt+1 − FTt, SSpt+1 − SSpt} and yt = ORHt+1(sh). Several other combinations of input variables were also tried, but all led to results of lower quality. A temporal window of the previous four hours (eight consecutive data points) is used as input for training and testing the LSTM models.
3.4. Preprocessing Dataset

A preprocessing step is performed over the raw datasets to make them suitable for the deep neural network training and inference processes. The aim is to make all input attributes fall into suitable regions of the non-linear transfer functions via normalization, and to code the categories properly via one-hot encoding. Thus, we normalize the entire raw dataset with the mean and standard deviation of the training dataset.
Let xt(var) ∈ xt be one of the five input variables (var) at time t; its normalized expression is computed as xt(var) = (vart − mvar)/svar, where mvar and svar represent the mean and standard deviation of var in the training dataset. We normalize the first three attributes of xt (FTt, BPrt and SSpt), while for the last two attributes the differences between the original values, FTt+1 − FTt and SSpt+1 − SSpt, are replaced by the differences between the normalized values of FT and SSp.
The known operational relative-hardness at time t (yt) is one-hot encoded such that soft, undefined and hard become [1, 0, 0], [0, 1, 0] and [0, 0, 1], respectively.
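A sketch of this preprocessing, assuming the five-column feature order given above (the helper and dictionary names are hypothetical). Note that the difference of two normalized values equals the raw difference divided by the same training standard deviation, which is what the last two columns use:

```python
import numpy as np

# Assumed column order: FT, BPr, SSp, dFT (= FTt+1 - FTt), dSSp (= SSpt+1 - SSpt)
def preprocess(train, test):
    """Normalize using training-set statistics only (Section 3.4 sketch)."""
    m = train[:, :3].mean(axis=0)
    s = train[:, :3].std(axis=0)

    def norm(x):
        out = x.copy().astype(float)
        out[:, :3] = (x[:, :3] - m) / s   # z-score the level variables
        out[:, 3] = x[:, 3] / s[0]        # difference of normalized FT
        out[:, 4] = x[:, 4] / s[2]        # difference of normalized SSp
        return out

    return norm(train), norm(test)

ONE_HOT = {"soft": [1, 0, 0], "undefined": [0, 1, 0], "hard": [0, 0, 1]}
```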
3.5. Optimal LSTM Architecture
From the training dataset, sequences {x1, …, xτ} of length τ are extracted to train the LSTM model to forecast the operational relative-hardness at the next time step τ + 1, at different time supports. The chosen length is four hours (τ = 8).
The external hyper-parameter to be optimized in any LSTM architecture is the number of hidden units, nH. The optimal numbers of hidden units were found in previous work [14] and are used here; they are displayed in Table 3.
The Adam optimizer is used to train the LSTM with hyper-parameters ϵ = 1 × 10−8, β1 = 0.9 and β2 = 0.999, as recommended by [17].
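For reference, a single Adam parameter update with these hyper-parameters can be written as below. The learning rate is an assumption of this sketch; the paper only reports ϵ, β1 and β2:

```python
import numpy as np

def adam_step(w, g, mstate, vstate, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter w given gradient g at step t (1-indexed)."""
    mstate = b1 * mstate + (1 - b1) * g        # first-moment (mean) estimate
    vstate = b2 * vstate + (1 - b2) * g * g    # second-moment estimate
    m_hat = mstate / (1 - b1 ** t)             # bias corrections
    v_hat = vstate / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, mstate, vstate
```

Applied to every weight matrix and bias of the LSTM, with gradients from backpropagation through time, this is the update rule used during training.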
4. Results
Directly from the datasets, the real operational relative-hardness ORHR is calculated from Equation (1), varying λ in the set (0.5, 0.6, …, 1.4, 1.5), at each time t and for each time support. On the other hand, a probability vector over the soft, undefined and hard ORH states is predicted; taking the highest probability yields the predicted ORHP. Then, a confusion matrix, filled with the number of instances of each pair (ORHR, ORHP), is built for each time support and each λ value. Table 4 summarizes the cases of λ: 0.5, 1.0 and 1.5, and supports 0.5, 2 and 8 h, for SAG mill 1, while Table 5 summarizes the same results for SAG mill 2.
The accuracy of the model prediction, ORHP, defined as the percentage of correct predictions, is computed as:
ORH Accuracy = [#(softR, softP) + #(undR, undP) + #(hardR, hardP)] / #Total · 100
and it represents the percentage of elements on the confusion matrix diagonal. The relative percentage of predictions for each class (rows) is shown in Table 6 for SAG mill 1 and in Table 7 for SAG mill 2.
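The accuracy formula is the diagonal share of the confusion matrix. As a check (helper name is ours), applying it to the 0.5 h, λ = 0.5 matrix of SAG mill 1 from Table 4 reproduces the 77.3% reported in Table 6:

```python
import numpy as np

def orh_accuracy(conf):
    """Accuracy (%) from a 3x3 confusion matrix,
    rows = real class, columns = predicted class (soft, undefined, hard)."""
    conf = np.asarray(conf, dtype=float)
    return 100.0 * np.trace(conf) / conf.sum()

# SAG mill 1, 0.5 h support, lambda = 0.5 (values from Table 4)
conf = [[1515, 591, 16],
        [295, 3179, 390],
        [6, 555, 1606]]
print(round(orh_accuracy(conf), 1))  # -> 77.3
```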
As shown in Table 6 and Table 7, at 0.5 h time support the LSTM is able to predict the ORH with high confidence regardless of the value of λ. Nevertheless, as λ increases, the number of instances of soft and hard ORH decreases, improving the final accuracy: the higher the value of λ, the more data points are classified as undefined. In particular, for 0.5 h time support, increasing λ from 0.5 to 1.5 makes the undefined points increase from 4325 to 6577 (from 53.0% to 80.7%) in SAG mill 1 and from 3600 to 6469 (from 45.3% to 81.4%) in SAG mill 2. Therefore, increasing λ improves accuracy, but the price is resolution. On the other hand, the number of extreme cases (softR, hardP) and (hardR, softP) is close to zero. This is a valuable result, since predicting soft hardness when the ore is actually hard (or vice versa) may induce bad short-term decisions on how to operate the SAG mill, along with other downstream decisions.
The percentage of extreme cases ((softR, hardP) and (hardR, softP)) using λ = 0.5 increases when moving from 0.5 to 8 h time support, on both SAG mills, but decreases to a value close to zero when λ increases from 0.5 to 1.5, at all time supports. Nevertheless, the LSTM loses accuracy in predicting the relevant cases (softR, softP) and (hardR, hardP) as the time support increases, on both SAG mills.
The accuracy graph (Figure 3) shows the λ sensitivity at all time supports on both SAG mills. The lowest accuracy, 51%, occurs at 2 h time support with λ = 0.5 on SAG mill 1; it increases to 66% with λ = 1.0 and 81% with λ = 1.5. The best results are achieved at 0.5 h time support (the same support as the original data), where accuracies of 77%, 88% and 93% are obtained with λ = 0.5, 1.0 and 1.5, respectively, on SAG mill 1, and of 79%, 85% and 90% with λ = 0.5, 1.0 and 1.5 on SAG mill 2.
5. Conclusions

This work proposes the use of Long Short-Term Memory networks to forecast the operational relative-hardness in two SAG mills using operational data. We have presented the internal architecture of the deep networks, how to deal with raw operational datasets, and qualitative criteria to estimate the operational hardness of the material being processed inside the SAG mill, based on the consumed energy, the feed tonnage and a statistical labelling modulated by a lambda value. In particular, Long Short-Term Memory models have been trained to predict the operational relative-hardness based only on low-cost, rapidly acquired operational information (feed tonnage, spindle speed and bearing pressure).

The LSTM network shows strong results in predicting the operational relative-hardness at 30 min time support. On SAG mill 1, using a lambda value of 0.5, the obtained accuracy was 77.3%, while increasing lambda to 1.5 raised the accuracy to 93.1%. Similar results were found on the second SAG mill. As the time support increases to two and eight hours, the accuracy drops to around 52% using a lambda value of 0.5 and 78% with a lambda value of 1.5, on both SAG mills.

The rate at which the LSTM misclassifies extreme cases, such as predicting soft hardness when the ore is hard and vice versa, is very low: extreme misclassification is close to 1% at 0.5 h time support on both SAG mills regardless of the lambda value. Although it increases to around 20% when increasing the time support using a lambda value of 0.5, it rapidly decreases to around 1% as lambda increases.

Lastly, the proposed application can be extended to any crushing and grinding equipment, under a similar context of real-time data acquisition, in order to forecast categorical attributes that are relevant to downstream processes.
Energy Consumption | Feed Tonnage | Operational Relative-Hardness |
---|---|---|
Constant | Decreasing | Hard |
Increasing | Constant | Hard |
Increasing | Decreasing | Hard |
Decreasing | Decreasing | Undefined |
Increasing | Increasing | Undefined |
Constant | Constant | Undefined |
Constant | Increasing | Soft |
Decreasing | Constant | Soft |
Decreasing | Increasing | Soft |
SAG Mill 1 (values shown as training / testing dataset)
Variable | Min | Mean | Max | St Dev | Count
---|---|---|---|---|---
Feed Tonnage (ton/h) | 0 / 0 | 911 / 884 | 2111 / 1953 | 497 / 480 | 8170 / 8170
Energy Consumption (kWh) | 0 / 0 | 9927 / 8920 | 12,248 / 10,809 | 1245 / 959 | 8170 / 8170
Bearing Pressure (psi) | 0 / 0 | 12.7 / 11.9 | 13.7 / 13.7 | 2.2 / 2.2 | 8170 / 8170
Spindle Speed (rpm) | 0 / 0 | 9.2 / 9.1 | 10.3 / 10.7 | 0.7 / 0.7 | 8170 / 8170

SAG Mill 2 (values shown as training / testing dataset)
Variable | Min | Mean | Max | St Dev | Count
---|---|---|---|---|---
Feed Tonnage (ton/h) | 0 / 0 | 2077 / 2073 | 3477 / 3452 | 1136 / 1134 | 7953 / 7952
Energy Consumption (kWh) | 0 / 0 | 16,709 / 17,445 | 19,688 / 19,533 | 1504 / 1462 | 7953 / 7952
Bearing Pressure (psi) | 0 / 0 | 13.8 / 14.8 | 18.3 / 18.3 | 3.5 / 3.8 | 7953 / 7952
Spindle Speed (rpm) | 0 / 0 | 9.1 / 8.9 | 10.0 / 9.9 | 0.6 / 0.6 | 7953 / 7952
LSTM hidden units | SAG Mill 1 | | | SAG Mill 2 | |
---|---|---|---|---|---|---
Time support | ORH(0.5h) | ORH(2h) | ORH(8h) | ORH(0.5h) | ORH(2h) | ORH(8h)
Model (nH) | 280 | 240 | 516 | 596 | 576 | 488
Table 4. SAG mill 1: confusion matrices (number of instances). Rows: real ORH; columns: predicted ORH; the Total row gives the number of predictions per class.

0.5 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 1515 | 591 | 16 | 957 | 268 | 1 | 622 | 140 | 2
Und | 295 | 3179 | 390 | 242 | 5255 | 137 | 151 | 6298 | 130
Hard | 6 | 555 | 1606 | 0 | 362 | 931 | 4 | 139 | 667
Total | 1816 | 4325 | 2012 | 1199 | 5885 | 1069 | 777 | 6577 | 799
Accurate → | 6300 | | | 7143 | | | 7587 | |

2.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 1204 | 922 | 288 | 406 | 1120 | 15 | 290 | 609 | 2
Und | 731 | 1659 | 904 | 160 | 4793 | 102 | 170 | 6172 | 66
Hard | 230 | 914 | 1301 | 16 | 1348 | 193 | 3 | 740 | 101
Total | 2165 | 3495 | 2493 | 582 | 7261 | 310 | 463 | 7521 | 169
Accurate → | 4164 | | | 5392 | | | 6563 | |

8.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 1277 | 805 | 370 | 683 | 873 | 33 | 308 | 579 | 4
Und | 626 | 1457 | 1083 | 324 | 4135 | 504 | 286 | 6038 | 119
Hard | 202 | 814 | 1519 | 23 | 1053 | 525 | 4 | 656 | 159
Total | 2105 | 3076 | 2972 | 1030 | 6061 | 1062 | 598 | 7273 | 282
Accurate → | 4253 | | | 5343 | | | 6505 | |
Table 5. SAG mill 2: confusion matrices (number of instances). Rows: real ORH; columns: predicted ORH; the Total row gives the number of predictions per class.

0.5 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 1704 | 434 | 13 | 1085 | 334 | 2 | 640 | 274 | 8
Und | 330 | 2718 | 416 | 180 | 4485 | 360 | 111 | 5916 | 91
Hard | 5 | 448 | 1882 | 1 | 300 | 1203 | 2 | 279 | 629
Total | 2039 | 3600 | 2311 | 1266 | 5119 | 1565 | 753 | 6469 | 728
Accurate → | 6304 | | | 6773 | | | 7185 | |

2.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 1026 | 1049 | 149 | 676 | 768 | 47 | 338 | 593 | 12
Und | 460 | 2224 | 720 | 418 | 4178 | 395 | 228 | 5721 | 133
Hard | 128 | 1218 | 976 | 25 | 1066 | 377 | 1 | 787 | 137
Total | 1614 | 4491 | 1845 | 1119 | 6012 | 819 | 567 | 7101 | 282
Accurate → | 4226 | | | 5231 | | | 6196 | |

8.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 917 | 1151 | 196 | 789 | 735 | 29 | 325 | 641 | 10
Und | 361 | 2052 | 896 | 358 | 4118 | 353 | 273 | 5660 | 133
Hard | 90 | 1082 | 1205 | 22 | 1148 | 398 | 8 | 690 | 210
Total | 1368 | 4285 | 2297 | 1169 | 6001 | 780 | 606 | 6991 | 353
Accurate → | 4174 | | | 5305 | | | 6195 | |
Table 6. SAG mill 1: relative percentage of predictions per class (each row sums to 100%); the Total row gives the percentage of predictions per class.

0.5 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 71.4 | 27.9 | 0.8 | 78.1 | 21.9 | 0.1 | 81.4 | 18.3 | 0.3
Und | 7.6 | 82.3 | 10.1 | 4.3 | 93.3 | 2.4 | 2.3 | 95.7 | 2.0
Hard | 0.3 | 25.6 | 74.1 | 0.0 | 28.0 | 72.0 | 0.5 | 17.2 | 82.3
Total | 22.3 | 53.0 | 24.7 | 14.7 | 72.2 | 13.1 | 9.5 | 80.7 | 9.8
Accurate → | 77.3 | | | 87.6 | | | 93.1 | |

2.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 49.9 | 38.2 | 11.9 | 26.3 | 72.7 | 1.0 | 32.2 | 67.6 | 0.2
Und | 22.2 | 50.4 | 27.4 | 3.2 | 94.8 | 2.0 | 2.7 | 96.3 | 1.0
Hard | 9.4 | 37.4 | 53.2 | 1.0 | 86.6 | 12.4 | 0.4 | 87.7 | 12.0
Total | 26.6 | 42.9 | 30.6 | 7.1 | 89.1 | 3.8 | 5.7 | 92.2 | 2.1
Accurate → | 51.1 | | | 66.1 | | | 80.5 | |

8.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 52.1 | 32.8 | 15.1 | 43.0 | 54.9 | 2.1 | 34.6 | 65.0 | 0.4
Und | 19.8 | 46.0 | 34.2 | 6.5 | 83.3 | 10.2 | 4.4 | 93.7 | 1.8
Hard | 8.0 | 32.1 | 59.9 | 1.4 | 65.8 | 32.8 | 0.5 | 80.1 | 19.4
Total | 25.8 | 37.7 | 36.5 | 12.6 | 74.3 | 13.0 | 7.3 | 89.2 | 3.5
Accurate → | 52.2 | | | 65.5 | | | 79.8 | |
Table 7. SAG mill 2: relative percentage of predictions per class (each row sums to 100%); the Total row gives the percentage of predictions per class.

0.5 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 79.2 | 20.2 | 0.6 | 76.4 | 23.5 | 0.1 | 69.4 | 29.7 | 0.9
Und | 9.5 | 78.5 | 12.0 | 3.6 | 89.3 | 7.2 | 1.8 | 96.7 | 1.5
Hard | 0.2 | 19.2 | 80.6 | 0.1 | 19.9 | 80.0 | 0.2 | 30.7 | 69.1
Total | 25.6 | 45.3 | 29.1 | 15.9 | 64.4 | 19.7 | 9.5 | 81.4 | 9.2
Accurate → | 79.3 | | | 85.2 | | | 90.4 | |

2.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 46.1 | 47.2 | 6.7 | 45.3 | 51.5 | 3.2 | 35.8 | 62.9 | 1.3
Und | 13.5 | 65.3 | 21.2 | 8.4 | 83.7 | 7.9 | 3.7 | 94.1 | 2.2
Hard | 5.5 | 52.5 | 42.0 | 1.7 | 72.6 | 25.7 | 0.1 | 85.1 | 14.8
Total | 20.3 | 56.5 | 23.2 | 14.1 | 75.6 | 10.3 | 7.1 | 89.3 | 3.5
Accurate → | 53.2 | | | 65.8 | | | 77.9 | |

8.0 h time support | λ = 0.5 | | | λ = 1.0 | | | λ = 1.5 | |
Real \ Predicted | Soft | Und | Hard | Soft | Und | Hard | Soft | Und | Hard
---|---|---|---|---|---|---|---|---|---
Soft | 40.5 | 50.8 | 8.7 | 50.8 | 47.3 | 1.9 | 33.3 | 65.7 | 1.0
Und | 10.9 | 62.0 | 27.1 | 7.4 | 85.3 | 7.3 | 4.5 | 93.3 | 2.2
Hard | 3.8 | 45.5 | 50.7 | 1.4 | 73.2 | 25.4 | 0.9 | 76.0 | 23.1
Total | 17.2 | 53.9 | 28.9 | 14.7 | 75.5 | 9.8 | 7.6 | 87.9 | 4.4
Accurate → | 52.5 | | | 66.7 | | | 77.9 | |
Author Contributions
Conceptualization, S.A. and W.K.; methodology, S.A.; codes, S.A.; validation, S.A., W.K. and J.M.O.; formal analysis, S.A.; investigation, S.A.; resources, W.K.; data curation, S.A.; writing-original draft preparation, S.A.; visualization, S.A.; supervision, W.K. and J.M.O.; project administration, W.K.; funding acquisition, W.K. and J.M.O. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Natural Sciences and Engineering Council of Canada (NSERC) grant number RGPIN-2017-04200 and RGPAS-2017-507956, and by the Chilean National Commission for Scientific and Technological Research (CONICYT), through CONICYT/PIA Project AFB180004, and the CONICYT/FONDAP Project 15110019.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
LSTM | Long Short-Term Memory |
ORH | Operational relative-hardness |
FT | Feed tonnage |
BPr | Bearing pressure |
SSp | Spindle speed |
SAG | Semi-autogenous grinding |
© 2020. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Ore hardness plays a critical role in comminution circuits. It is usually characterized at sample support in order to populate geometallurgical block models. However, the required attributes are not always available and suffer from a lack of temporal resolution. We propose an operational relative-hardness definition and the use of real-time operational data to train a Long Short-Term Memory, a deep neural network architecture, to forecast the upcoming operational relative-hardness. We applied the proposed methodology to two SAG mill datasets, each covering a period of about one year. Results show accuracies above 80% on both SAG mills at short upcoming periods of time and around 1% of misclassifications between soft and hard characterizations. The proposed application can be extended to any crushing and grinding equipment to forecast categorical attributes that are relevant to downstream processes.