Determination of Mass Properties in Floor Slabs from the Dynamic Response Using Artificial Neural Networks

Most of the research on accidental eccentricity is directed at the evaluation of accidental-eccentricity design code recommendations and at the study of building torsional response. In contrast, this paper addresses how the mass properties of each level of a building can be determined from the dynamic response of the building. Using that dynamic response, this paper presents the application of multilayer feed forward artificial neural networks (ANNs) to determine the magnitude, the radial distance, and the polar moment of inertia of the mass for each level of reinforced concrete (RC) buildings. Analytical models were developed for three regular buildings. Live-load magnitude and mass position are considered as random variables. Seven load cases were generated for the 1-, 2-, and 4-story models using two excitations. Three different choices of input data to the ANNs were used. The developed ANN models are able to predict with adequate accuracy the radial position, magnitude, and polar moment of inertia of the masses of each level. The implementation of this ANN-based method would allow the monitoring, either permanently or temporarily, of changes in mass properties at each building floor slab.


Introduction
The seismic response of buildings depends on several factors, such as the soil-structure interaction, the characteristics of the seismic excitation, and the characteristics of the building itself, such as mass, stiffness, and damping. There are many variables to take into account, and all of them carry some degree of uncertainty. Irregularities in the spatial distribution of mass, stiffness, and lateral resistance of buildings lead to torsional effects, even in symmetric buildings, and thus make them more susceptible and vulnerable to damage in severe earthquakes [1][2][3]. Design standards recognize the importance of torsional contributions to the horizontal displacement response, and simplified procedures have been proposed to estimate these contributions [4][5][6]. Building codes consider two types of torsion: natural (or inherent) torsion and accidental torsion. Natural torsion occurs when the centers of mass (CM) do not coincide with the centers of stiffness at one or more levels of a structure [3,7,8].
Accidental eccentricity is associated with uncertainties in (1) the spatial distribution and variability of mass; (2) uncertainties of stiffness in structural elements, and (3) uncertainty in the strength [7,[9][10][11]. To account for these uncertainties, most building codes use a design accidental eccentricity as a percentage (5 to 10%) of the building dimension perpendicular to the direction of excitation.
Accidental eccentricity has been addressed in two main areas. In the first approach, the adequacy of the provisions given by current design standards has been evaluated. Examples of these works are those of De la Llera and Chopra (1994a, 1994b) [12,13]; Wong and Tso (1994) [14]; Shakib and Tohidi (2002) [15], among others. In the second approach, the behavior of multi-story buildings is studied, e.g., Stathopoulos and Anagnostopoulos (2005) [16]; De-la-Colina et al. (2016) [17]; among others. In contrast, this paper addresses how the mass properties of each of the levels of a building can be determined from its dynamic response. A study with some similarities to the present work is that of Badaoui et al. (2019) [18], in which the authors developed an ANN model to predict the accidental eccentricity at each floor of reinforced concrete buildings using the acceleration or displacement records at the base and on the structure, as well as the vibration frequencies of the structure. That study analyzes eight prototypes of reinforced concrete buildings in which the models are excited at the base by the NS component of the 1940 El Centro earthquake. The authors comment that their procedure can be used as a tool for estimating the actual eccentricities of buildings. Despite the great importance of their work, the study has some limitations, such as not being able to determine the magnitude of the mass of each level nor the polar moment of inertia produced by all the masses of each level. In addition, it is not clear how to discretize the mass of each level. In contrast, the objective of the present work is to monitor, temporarily or permanently, the changes in the mass properties acting on the floor slabs of the building and their evolution in time.
The applied loads to buildings are of a random nature, such that loads vary in their position, intensity, and duration throughout the useful life of the building. In the specific case of floor slab loads, live loads are of great interest since there is seldom sufficient certainty of their position and magnitude measured throughout the useful life of the structure, and even more when the building changes its use.
Live loads are divided into two components: sustained live loads and transitory or extraordinary live loads. In the case of sustained loads, they are constituted by furniture and human loads. Such weights are found in studies of building live loads, such as those presented by Andam (1986) [19]; Ruiz and Sampayo-Trujillo (1997) [20]; Ruiz and Soriano (1997) [21]; Kumar (2002) [22], among others. Extraordinary live loads represent all those loads produced by gatherings of people or furniture, which take place in an unusual way during the lifetime of the structure. Even when sustained loads act on the structure for long periods of time, they do not have a fixed magnitude or intensity over time.
In general, the problem of determining the magnitude and position of the live loads, as well as the probability distribution function to which these loads conform, has been studied by applying the inventory method. In this method, a list of weights of the different types of furniture (either office or residential) and people is made. Once this list has been made, an imaginary grid is created for each building level, and each of the objects observed is tabulated, indicating their weights and their position in the grid [19][20][21][22][23][24][25].
This paper addresses the problem from an approach based on the training of ANNs, assuming that it is possible to obtain the magnitude of the inertial mass, the radial distance with respect to the geometric center (GC), and the polar moment of inertia of the mass for each level of the building in question. This is based on the measurement of the dynamic response to an excitation. For this aim, mathematical models of the analyzed buildings are built and subjected to two types of dynamic excitations; using information from the dynamic response, the proposed ANNs are trained and their performance is evaluated. In theory, ANNs are able to emulate the computation of any (unknown) mathematical function given sufficient training data and sufficient training time [26,27].
In this paper, two dynamic excitations are used: white noise in a frequency band from 1 to 100 Hz and an impulsive force of very short duration. The reason for using this type of excitation is that most of the vibration frequencies of the building or system can be excited. It is expected to apply this methodology to real buildings to evaluate and monitor the accidental eccentricity due to live loads in buildings.

Neural Networks
ANNs are computational models developed in the 1950s and 1960s, sometime after the advent of computers. ANNs are inspired by the architecture of neurons and the functioning of the human brain [26,29]. This powerful tool has been used in multiple areas of civil engineering, for example in: damage detection in vehicular bridge girders [28], seismic risk analysis [30], mass location and estimation of accidental eccentricity [31,18], prediction of slab shear capacity [32], modeling of school completion times [33], design of reinforced concrete building structures [34], among others.
In ANNs, the input vector represents the input signals to the system, which emulate the dendrites' function in a biological neuron. These input signals are transmitted to the cell body through the synapse, which can accelerate or delay the incoming signal [35]. Such acceleration or delay of signals is modeled by synaptic weights, henceforth referred to simply as "weights". The bias, which has a unitary input, is an additional weight whose power lies in the ability to represent the relationships between inputs and outputs more easily [36]. Finally, the activation or transfer functions are responsible for modulating or filtering the output of the neuronal body and sending that signal to another neuron, or to the output of the network [28].
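As a concrete illustration of the neuron model described above (weighted inputs, a bias acting as a weight on a constant unitary input, and an activation function), the following sketch computes the output of a single artificial neuron; all names and numerical values are illustrative, not those of the networks trained in this study.

```python
import numpy as np

def logistic(x):
    # Sigmoid (logistic) activation: squashes the neuron output into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias, activation=np.tanh):
    # Weighted sum of the inputs plus the bias (the bias acts as a weight
    # on a constant unitary input), filtered by the activation function
    return activation(np.dot(weights, inputs) + bias)

# Example: a neuron with three inputs and a logistic activation
x = np.array([0.2, -0.5, 1.0])
w = np.array([0.4, 0.1, -0.3])
y = neuron_output(x, w, bias=0.05, activation=logistic)
```

Stacking layers of such neurons, each feeding the next, yields the multilayer feed forward architecture used throughout this work.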
The "knowledge" acquired by ANNs is obtained via a training or learning algorithm. This process is stopped when the weights and biases obtain their optimal value [29]. The backpropagation algorithm is widely used in the ANN literature. However, new techniques related to this method have been developed and applied, which allow faster convergence speeds and greater stability, such as backpropagation with momentum, backpropagation with a variable learning rate, resilient backpropagation, conjugate gradient, scaled conjugate gradient (SCG), quasi-Newton, Levenberg-Marquardt algorithm, among others. A detailed description of these algorithms is discussed in Demuth and Beale (2002) [36].
One of the major problems encountered in the training of ANNs is overtraining, i.e., the ANN has learned the training information to such a point that it has also modeled the noise present in it. The most commonly used methods to improve the generalization ability of networks are: (1) the pruning method, i.e., removing neurons and connections in such a way that the error of the network is not affected [37]; (2) early stopping, in which the available training data are divided into two subsets (one for training and one for validation); when the error begins to increase in the validation set, training is stopped, since from that point the network is presumed to generalize poorly to data not seen during training [29]; and (3) the regularization method, either L1 (Lasso) or L2 (Ridge).
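The early-stopping strategy can be sketched as a generic training loop; the `patience` parameter and the function names are assumptions for illustration, not details of the original study.

```python
import numpy as np

def train_with_early_stopping(step, val_error, max_epochs=1000, patience=20):
    """Generic early-stopping loop.

    step()      -- performs one training epoch on the training subset
    val_error() -- returns the current error on the validation subset
    Training stops when the validation error has not improved for
    `patience` consecutive epochs (the error has begun to increase).
    """
    best_err, best_epoch = np.inf, 0
    for epoch in range(max_epochs):
        step()
        err = val_error()
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break  # validation error is rising: presume overtraining
    return best_err, best_epoch
```

In practice `step` would update the weights (e.g., one Levenberg-Marquardt iteration) and `val_error` would evaluate the performance function on the validation subset.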
In the regularization method, the performance function (the mean squared error, Equation 1) is modified by adding a penalty term: either the mean of the absolute values of the weights and biases (L1) or the mean of their squares (L2, Equation 2). This term acts as a penalty on the error function, which does not allow the errors to disappear completely. Both terms are weighted by a regularization factor γ, so the function used to evaluate network performance is given by Equation 3:

MSE = (1/N) Σᵢ₌₁ᴺ (tᵢ − aᵢ)²   (Equation 1)

MSW = (1/n) Σⱼ₌₁ⁿ wⱼ²   (Equation 2)

F = γ·MSE + (1 − γ)·MSW   (Equation 3)

Here, F is the function that evaluates ANN performance, N is the number of training vectors, aᵢ is the output given by the network for the i-th training vector, tᵢ is the objective (target) value corresponding to the i-th training vector, wⱼ is a weight or bias, and n is the number of weights and biases of the ANN.
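A minimal sketch of the regularized performance function, assuming the L1/L2 penalty is taken as the mean over the n weights and biases; the function names and the default value of γ are illustrative assumptions.

```python
import numpy as np

def mse(targets, outputs):
    # Mean squared error over the N training vectors
    t, a = np.asarray(targets, float), np.asarray(outputs, float)
    return np.mean((t - a) ** 2)

def regularized_performance(targets, outputs, params, gamma=0.9, norm="L2"):
    # F = gamma * MSE + (1 - gamma) * mean penalty over the n weights
    # and biases (L1 uses |w|, L2 uses w**2)
    w = np.asarray(params, float)
    penalty = np.mean(np.abs(w)) if norm == "L1" else np.mean(w ** 2)
    return gamma * mse(targets, outputs) + (1.0 - gamma) * penalty
```

The penalty keeps the weights small, which smooths the network response and discourages fitting the noise in the training data.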
The problem with regularization as described above is that it is difficult to determine the optimal value of the parameter γ: making this parameter too large can result in overfitting, and if it is too small, the network does not fit the training data properly. This drawback led to the problem being addressed from a Bayesian perspective [38]. Thus, it is assumed that the weights and biases are random variables with some specific distribution (normally Gaussian). The function that evaluates the performance of the network then changes slightly with respect to Equation 3:

F = β₁·MSE + α₁·MSW   (Equation 4)

where the parameters β₁ and α₁ are related to the unknown variances associated with these distributions. Hence, they can be estimated using statistical techniques [13]. In this work, Bayesian regularization is employed in combination with the Levenberg-Marquardt training algorithm, which is advisable for small and medium-sized problems [39]. A detailed discussion of the Bayesian regularization method can be found in Foresee and Hagan (1997) [40]. Regarding the activation functions, the linear, hyperbolic tangent, and logistic (sigmoid) functions are used in this work.
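The Bayesian-regularized objective can be sketched as follows; the values of β₁ and α₁ are placeholders, since in practice they are re-estimated during training (e.g., as in Foresee and Hagan), a step omitted here for brevity.

```python
import numpy as np

def bayesian_regularized_performance(targets, outputs, params,
                                     beta1=1.0, alpha1=0.01):
    # F = beta1 * MSE + alpha1 * MSW, where beta1 and alpha1 are related
    # to the (unknown) variance of the data noise and to the variance of
    # the Gaussian distribution assumed for the weights and biases
    t, a, w = (np.asarray(v, dtype=float) for v in (targets, outputs, params))
    mse = np.mean((t - a) ** 2)
    msw = np.mean(w ** 2)
    return beta1 * mse + alpha1 * msw
```

A Bayesian training loop would alternate Levenberg-Marquardt steps minimizing this objective with statistical updates of β₁ and α₁.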

Description of the Proposed Method
The objective of the study is the implementation of an ANN-based method that allows monitoring, either permanently or temporarily, the properties of building slab masses. For this purpose, three mathematical models, representative of the buildings, were analyzed. This should not be interpreted as a limitation of the method, but simply as a simplification to test the proposed methodology. Regarding the analyzed ANNs, the training algorithms, activation functions, number of hidden layers, and number of neurons per hidden layer were selected following the guidelines given in the previous section [28,29,[35][36][37][38][39][40]. The method requires the dynamic response of the analyzed building to a given excitation. In previous tests, harmonic excitations with one and two frequencies were analyzed; in both cases the tested ANNs did not converge. This paper therefore presents the results for two types of excitations with a wide range of frequencies, which makes it possible to excite most of the vibration frequencies of the system. To assess the performance of the trained ANNs, the correlation coefficients between the response given by the ANN and the training and validation data were calculated. Values close to unity indicate that the trained ANN can predict the properties of the masses at each floor slab of a building with adequate accuracy.
In order to fulfill the above-mentioned purposes, the following methodology is applied: (1) build the mathematical model of the building to be analyzed or monitored; the mathematical model must be sufficiently representative of the building, such that its dynamic properties match those of the actual structure; (2) select an ANN such as those proposed here and train it with the load cases of interest or the most probable ones (e.g., Table 2); (3) apply a dynamic excitation (impulsive type or white noise) at the top of the building (at the GC) and record the dynamic response of the building at all levels (at the GC of each level); (4) feed the previously trained ANN with the data obtained at selected points of the building; and (5) obtain the mass properties, i.e., the radial position, magnitude, and polar moment of inertia of the masses of each floor. The methodology followed in the present work is briefly summarized in Figure 1.

Figure 1. Flowchart of the methodology
In geotechnical earthquake engineering and electrical engineering, seismic signals can be characterized in terms of four main aspects [41,42]: (1) signal amplitude, such as peak acceleration, peak velocity, or peak displacement; (2) frequency content, which describes how the amplitude of a system's response is distributed across a range of frequencies; (3) signal duration; and (4) parameters that combine amplitude, frequency content, and duration, such as the Arias intensity or the cumulative absolute velocity. To characterize the signal response, this study used the following parameters as inputs to the neural networks: maximum displacement, maximum acceleration, maximum velocity, cumulative absolute velocity, Arias intensity, the vibration frequencies of the system, and information from the rotation histories of the GC.

Description of the Buildings Analyzed and Input Forces
In this study, three regular reinforced concrete buildings of 1, 2, and 4 stories are analyzed (see Figure 2). Each building has three bays in both directions; the spacing between columns (along both horizontal directions) is 5.00 m. Table 1 shows the nominal dimensions of the column cross-sections, the inter-story heights, and the dead loads applied directly to the slab. All beams were assumed to be 0.25 m wide and 0.50 m deep. The stiffness matrix of the models was calculated from the nominal cross-sections, without taking into account the reinforcing steel or the possible cracking of the reinforced concrete elements. The floor slabs were modeled as 1.25×1.25 m shell-type finite elements. For the mathematical modeling of the buildings, Rayleigh damping is used, with a damping ratio of 0.05 for the first and second modes. In relation to live loads, 7 load cases were generated for the 1-, 2-, and 4-story models. For this purpose, each slab of each level was divided into 9 panels, as shown in the plan view of the analyzed models (Figure 2). For the first load case (base case), 2000 random scenarios were generated, in which each shell-type finite element of the slab of each level was subjected to a load with a uniform distribution between 0 and 2500 N. Table 2 shows the load cases analyzed, the panels subjected to a given load (within a range of values, uniformly distributed), and the number of scenarios analyzed. Analyzing more cases than those that make up the base case (case 1 with 2000 scenarios) gave greater generalization capacity to the neural networks (ANNs) to be trained later. Hence, it was necessary to create 8,000×3 building models, which were subjected to forced excitation at the top level. All the models (24,000 buildings) were subjected to two types of excitations: an impulsive-type excitation and one composed of white noise in a frequency band.
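The base-case scenario generation described above can be sketched as follows, assuming a 15×15 m slab (three 5.00 m bays per direction) discretized into 12×12 shell elements of 1.25×1.25 m, i.e., 144 elements per floor; the function and variable names are illustrative.

```python
import numpy as np

def generate_load_scenario(rng, n_elements=144, q_min=0.0, q_max=2500.0):
    # Base case: each shell finite element of a floor slab receives a
    # load drawn from a uniform distribution between q_min and q_max (N)
    return rng.uniform(q_min, q_max, size=n_elements)

rng = np.random.default_rng(seed=0)
# 2000 random scenarios for one slab discretized into 144 shell elements
scenarios = np.array([generate_load_scenario(rng) for _ in range(2000)])
```

The other load cases of Table 2 would restrict the loaded elements to specific panels and adjust the uniform range accordingly.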
Each of these forced excitations was applied at the GC of the top level of each of the buildings analyzed, in both the X and Y directions (Figure 3 shows an illustration). Once the 24,000 (8000×3) finite-element models of the three buildings have been "built", the next step is to determine which output and input parameters will be used in the ANNs to be trained. Regarding the output parameters, this work considered (1) the magnitude of the inertial mass, (2) the polar moment of inertia of the slab mass, and (3) the radial position of the center of mass (CM) for each level, r = √(xcm² + ycm²); that is, three output neurons for each level. It should be noted that previous studies used the coordinates of the position of the CM, i.e., xcm and ycm. However, by applying a pair of orthogonal excitations at the last level, it was not possible to reach the values of the correlation coefficients achieved in this work. Therefore, it was decided to determine only the radial position.
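The three output targets per level could be computed from a discretized load scenario as sketched below; the coordinate convention (element centroids measured from the GC) and the conversion of loads to masses through g are assumptions for illustration.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def mass_properties(loads, coords):
    """Output targets for one level.

    loads  -- array of element loads in N (one entry per shell element)
    coords -- (n, 2) array of element-centroid coordinates (m), measured
              from the geometric center (GC) of the slab
    Returns (total mass, polar moment of inertia about the GC,
    radial distance r of the CM from the GC).
    """
    m = np.asarray(loads, float) / G               # element masses (kg)
    total = m.sum()
    # Center of mass and its radial distance r = sqrt(xcm^2 + ycm^2)
    x_cm, y_cm = (m @ coords) / total
    r = np.hypot(x_cm, y_cm)
    # Polar moment of inertia of the lumped masses about the GC
    j = np.sum(m * (coords[:, 0] ** 2 + coords[:, 1] ** 2))
    return total, j, r
```

A symmetric load pattern yields r = 0, i.e., the CM coincides with the GC.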

Obtaining the Mass Properties of Each Floor
With respect to the input parameters of the ANNs, three options of input data to the network were used. Option 1 uses the maximum displacement, maximum acceleration, maximum velocity, cumulative absolute velocity (calculated from Equation 5), and Arias intensity (see Equation 6) in the X and Y directions and in rotation around the Z-axis (which is perpendicular to the building floor and passes through its GC), as well as the first three vibration frequencies for the one-story building, and the first four and five frequencies for the two-story and four-story models, respectively. It follows that the 1-level model has 18 input neurons and the 4-level model has 65 input neurons. Option 2 uses the same input data as option 1 except for the rotational data; thus, for the 4-level model there are 45 input neurons. Finally, option 3 is the same as option 2 except that only the first 3 vibration frequencies are used for the 2-level building and the first 4 frequencies for the 4-level building. The purpose of using these three input-data options was to verify whether the data concerning the rotations around the Z-axis are necessary for determining the radial position, magnitude, and polar moment of inertia of the mass for each level of the analyzed buildings, and to analyze whether the full number of vibration frequencies required in option 1 is necessary, which would make the input data set larger. The expressions needed to calculate the cumulative absolute velocity and the Arias intensity are presented below.
CAVx,y,Rz = Σᵢ₌₁ⁱᵐᵃˣ |äx,y,Rz,i| Δt   (Equation 5)

AIx,y,Rz = (π / 2g) Σᵢ₌₁ⁱᵐᵃˣ ä²x,y,Rz,i Δt   (Equation 6)

where CAV is the cumulative absolute velocity; |äx,y,Rz,i| is the absolute value of the acceleration history in the x and y directions and in rotation around the z-axis for the i-th recorded value; imax denotes the total number of time increments; Δt is the time increment; g is the acceleration of gravity; and AIx,y,Rz is the Arias intensity.
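Equations 5 and 6, together with the peak-response values, can be evaluated from a recorded response history as follows; the ordering of the features in `response_features` is an illustrative assumption.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cumulative_absolute_velocity(acc, dt):
    # Equation 5: CAV = sum over time steps of |acceleration| * dt
    return np.sum(np.abs(np.asarray(acc, float))) * dt

def arias_intensity(acc, dt):
    # Equation 6: AI = (pi / (2 g)) * sum of squared accelerations * dt
    return np.pi / (2.0 * G) * np.sum(np.asarray(acc, float) ** 2) * dt

def response_features(acc, vel, disp, dt):
    # One direction's contribution to the ANN input vector (option 1):
    # peak displacement, peak velocity, peak acceleration, CAV and AI
    return [np.max(np.abs(disp)), np.max(np.abs(vel)), np.max(np.abs(acc)),
            cumulative_absolute_velocity(acc, dt), arias_intensity(acc, dt)]
```

Concatenating these five values for X, Y, and the rotation about Z, plus the vibration frequencies, yields the 18-neuron input vector of option 1 for the 1-level model.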
In order to carry out the training of the analyzed ANNs, both the input data and the output data (targets) were normalized to values between 0 and 1. This choice was made because, if the sigmoid activation function is used, the node inputs of the next layer will lie between 0 and 1, whereas, if the hyperbolic tangent is used as the activation function, they will lie between -1 and 1. Table 3 describes the neural network architectures (ANNs) analyzed in this work (6 ANNs for the 1-level model, and 9 ANNs each for the 2-level and 4-level models), as well as the neural array used; note that the first number indicates the number of neurons in the input layer and the last one the number of neurons in the output layer; the remaining numbers are the numbers of neurons in the hidden layers. As can be seen in Table 3, four hidden layers were used for training the ANNs of the 1- and 2-level models, while three hidden layers were used for the 4-level model. A total of 48 ANNs were analyzed. To verify the generalization capacity of the ANNs, the data set was divided into two sets: one for training and the other for validation, i.e., evaluation of the network on scenarios never seen during training. In this study, 75% of the data was used for the training phase and the remaining 25% to validate the performance of the network. As for the stopping criterion, two criteria were used: the early-stopping method and ending the training after a certain number of processing hours (instead of a number of epochs, even though the two are intrinsically related).
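The normalization to [0, 1] and the 75/25 training/validation split described above can be sketched as follows; the column-wise min-max convention and the random shuffling are assumptions for illustration.

```python
import numpy as np

def minmax_normalize(data):
    # Scale each column (feature or target) to the [0, 1] range, as done
    # for both the ANN inputs and the outputs before training
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo)

def split_train_validation(data, train_fraction=0.75, seed=0):
    # 75% of the scenarios for training, 25% for validation
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_train = int(train_fraction * len(data))
    return data[idx[:n_train]], data[idx[n_train:]]
```

The stored minima and maxima would also be needed at prediction time to de-normalize the network outputs back to physical units.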
Once the ANNs were trained, regression analyses and the calculation of Pearson's correlation coefficients were performed in order to evaluate the quality of the results provided by each network, that is, to compare the response predicted by the trained network, both for the training scenarios and for the evaluation scenarios, against the real data of magnitude, position, and polar moment of inertia of the mass for each level, i.e., the data with which the network was fed. Table 4 presents the correlation coefficients for the 24 ANNs considering the impulsive excitation. Table 5 presents the corresponding figures for the other 24 ANNs considering a white noise excitation. Both tables also highlight the ANNs with the best and worst performances. Figure 4 shows only the ANNs for which the best performances were obtained, for both the impulsive signal and the white noise excitation, while Figures 5 and 6 show the regression analyses obtained for both the training scenarios and the network validation scenarios. Note that only the regression analyses for the best-performing ANNs are presented (see Tables 6 and 7). Figure 5 displays these analyses when an impulsive-type excitation applied at the last level of the building is used, while Figure 6 presents the corresponding regression analyses for the white noise excitation. The x-axis shows the normalized data ([0-1]) corresponding to the objective function with which the network was trained (targets), while the y-axis shows the data predicted by the trained network (outputs). It follows that, if the network response is equal to the training or validation data, then these data fall on the straight line represented in Figures 5 and 6 as A=T, with unit slope.
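Pearson's correlation coefficient between the network outputs (A) and the targets (T) can be computed as follows; a value near unity indicates the regression points lie close to the line A=T.

```python
import numpy as np

def pearson_correlation(targets, outputs):
    # Pearson's correlation coefficient between the normalized targets (T)
    # and the ANN predictions (A); values close to 1 indicate that the
    # points cluster around the line A = T with unit slope
    t, a = np.asarray(targets, float), np.asarray(outputs, float)
    return np.corrcoef(t, a)[0, 1]
```

Applying this per output neuron (mass magnitude, polar moment of inertia, radial distance, for each level) yields tables of coefficients analogous to Tables 6 and 7.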
These figures include the equation of the line, obtained by fitting the data to a linear model by least squares (shown as a solid line), as well as the correlation coefficient between the output predicted by the network and the objective function. Table 6 reports the correlation coefficients obtained for each of the output-layer neurons for the best performing ANNs, i.e., R6-1N, R2-2N, and R5-4N, considering the impulsive excitation. Table 7 shows the corresponding correlation coefficients for the R6-1N, R2-2N, and R5-4N ANNs, for both the training and validation scenarios, considering a white noise type excitation.

Results and Discussion
This section analyzes and discusses the results obtained for both training and validation scenarios of the neural networks, for impulsive and white noise excitations.
Regarding the results obtained when an impulsive-type excitation is used (Tables 4 and 6), the following can be observed. From Table 4 it can be seen that, for the 1-level model, the ANNs R2-1N, R3-1N, R4-1N, R5-1N, and R6-1N have a "very good" performance for both the training and validation scenarios, since the correlation coefficients are equal to unity. This indicates that it makes no difference whether 13 or 18 input neurons are used. When the complexity of the model increases (2 and 4 levels), a slight decrease in performance is observed. Thus, for the 2-level model and using the R2-2N network, ρ=0.9996 is obtained for the training scenarios and ρ=0.9988 for the validation scenarios. The difference with respect to the R4-2N network is not substantial, since that network yields ρ=0.9946 for the validation scenarios and an equal value of ρ for the training scenarios. Regarding the 4-level model (R5-4N), values of ρ=0.9973 and ρ=0.9906 were reached for the training and validation scenarios, respectively, which indicates good network performance. Nevertheless, poor performance (in the validation scenarios) was also observed for some networks, such as R9-2N (ρ=0.4742) and R9-4N (ρ=0.8931) for the impulsive excitation, and R8-2N (ρ=0.6126) and R9-4N (ρ=0.8642) for the white noise excitation. Once again, it can be seen that the rotational components are dispensable.
As for the number of frequencies needed to feed the network, i.e., the difference between input-data options 2 and 3, a difference can be seen in the 2- and 4-level models, although it is minimal (and very sensitive to the chosen network architecture). This shows that the information on the vibration frequencies of the system is important, even though the difference between options 2 and 3 is only one vibration frequency, i.e., one neuron in the input vector. Table 4 presented the correlation coefficients globally for the analyzed neural networks, for both training and evaluation scenarios. This table only allows us to discern the best and the worst architectures. However, this information is still insufficient, since it is not possible to determine the individual performance of each of the output neurons of the network, which are related to the output data of the neural network, i.e., the properties of the mass. For this reason, Table 6 reports the correlation coefficients obtained for each of the output-layer neurons for the best performing ANNs (R6-1N, R2-2N, and R5-4N). In this table, neuron 1 represents the magnitude of the mass of level 1; neuron 2 indicates the polar moment of inertia of the mass of level 1; neuron 3 shows the radial distance of the CM with respect to the GC; neuron 4 represents the magnitude of the mass of level 2, and so on. Thus, for the one-level model, a value of ρ=1 is obtained for all neurons, both for the training scenarios and for the validation scenarios. A slight decrease in ANN performance is observed when the complexity of the model increases. Thus, for the 2-level model in the training scenarios and considering the R2-2N architecture, the values of ρ are between 0.9921 (obtained at the neuron providing the polar moment of inertia) and 0.9999 (at the neuron providing the magnitude of the masses of each level).
In the validation scenarios, i.e., for scenarios never seen by the network, a decrease in the values of ρ is observed, which are between 0.9857 and 0.9999. As for the 4-level building model, for the training scenarios, the lowest value of ρ is 0.9565 (at the neuron that provides the polar moment of inertia of mass of level 1) and the highest value of ρ is 0.9981 (at the neuron that gives the radial distance of the CM with respect to the GC of level 4); in the validation scenarios the lowest value of ρ (0.9553) was obtained at the neuron providing the polar moment of inertia at level 1, and a maximum value of ρ = 0.9978, both in the magnitude of mass at level 2, and in the radial distance of the CM with respect to the GC at level 3. From the above it can be deduced that in all cases the values of the correlation coefficients were higher than 0.95 which is an indicator that the analyzed neural networks have an acceptable performance.
By performing an analysis similar to the one discussed previously, but now considering a white noise type excitation (see Tables 5 and 7), the following results are obtained. From Table 5, for the 1-level model, the R3-1N, R5-1N, and R6-1N architectures have correlation coefficients equal to unity, both for the training scenarios and for scenarios never seen before by the network, i.e., the same architectures as when an impulsive force is considered, excluding the R2-1N and R4-1N architectures. In the case of the 2-level model, the best overall performance is obtained with the R2-2N neural architecture, with ρ=0.9989 for the training scenarios and ρ=0.9985 for the validation scenarios, values slightly lower than those obtained considering an impulsive force. As in the previous case (impulsive force), the rotational components are not necessary to properly identify the radial position of the mass, the magnitude, or the polar moment of inertia of the mass of each level. Finally, for the 4-level model, the best overall performance is achieved with the R5-4N network architecture, with ρ=0.9950 for the training scenarios and ρ=0.9910 for the validation scenarios. Similar to the previous case, the "worst" performance is obtained with R9-4N, for which ρ=0.8642 was obtained for the validation scenarios. As in Table 4, Table 5 only allows us to identify which architecture is the best overall performer. Table 7 presents the correlation coefficients for each output neuron of the best performing ANNs. For the 1-level model, a value of ρ=1 is obtained for all neurons, both for the training scenarios and for the validation scenarios.
As for the 2-level model, in the training scenarios the values of ρ are between 0.9763 (obtained at the neuron that gives the polar moment of inertia of level 1) and 0.9999 (at the neuron that outputs the magnitude of the mass of level 2). For the validation scenarios of the 2-level model, the values of ρ are likewise between 0.9763 (at the neuron which provides the polar moment of inertia of level 1) and 0.9999 (at the neuron which yields the magnitude of the mass of level 2). For the 4-level model (training scenarios), the value of ρ is between 0.9565 (neuron providing the polar moment of inertia of the mass of level 1) and 0.9983 (neuron that gives the mass magnitude of level 1). Finally, for the 4-level model validation scenarios, the extreme values of ρ were 0.9512 (neuron providing the polar moment of inertia of the mass of level 1) and 0.9979 (neuron which yields the mass magnitude of level 1).
Figures 5 and 6 show that the best-performing ANNs achieve an adequate fit. That is, multilayer feed forward networks (with four hidden layers for the 1- and 2-level models and three hidden layers for the 4-level model), using linear, hyperbolic-tangent, and logistic activation functions and trained with Bayesian regularization in combination with the Levenberg-Marquardt algorithm, are able to determine the radial position, magnitude, and polar moment of inertia of the mass at each level of regular buildings.
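The networks described above were trained with Bayesian regularization and the Levenberg-Marquardt algorithm (as provided, for example, by MATLAB's Deep Learning Toolbox); the sketch below only illustrates the forward structure of such a multilayer feed-forward network with the activation functions named in the text. The layer sizes, weights, and input dimension are hypothetical, not the paper's trained parameters:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers):
    """Forward pass through a multilayer feed-forward network.

    layers: list of (W, b, activation) tuples. The activations mirror
    the paper's choices: tanh/logistic in the hidden layers and a
    linear output layer.
    """
    a = x
    for W, b, act in layers:
        a = act(W @ a + b)
    return a

rng = np.random.default_rng(0)
n_in = 12    # hypothetical number of measured response channels
n_out = 3    # mass magnitude, polar inertia, radial CM position (one level)
sizes = [n_in, 8, 8, 8, 8, n_out]   # four hidden layers (1- and 2-level case)
acts = [np.tanh, logistic, np.tanh, logistic, lambda z: z]  # linear output
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m), act)
          for n, m, act in zip(sizes[:-1], sizes[1:], acts)]

y = forward(rng.standard_normal(n_in), layers)  # one prediction vector
```

The linear output layer lets the network produce unbounded regression targets (masses and moments of inertia), while the bounded tanh/logistic hidden units provide the nonlinearity.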

Conclusions
In this study, a neural-network-based model was developed, applied, and evaluated to determine the magnitude, the radial position, and the polar moment of inertia of the masses at each level of building models, based on the measured dynamic response of each level. For this purpose, finite element models of three regular buildings with square geometry were developed. The models, with 1, 2, and 4 levels, were subjected to two excitations. A total of 48 ANNs were trained, 24 considering an impulsive excitation and 24 considering a white noise excitation. Three choices of input data were used to feed the networks. The output data selected for this study were: (1) the magnitude of the inertial mass, (2) the polar moment of inertia of the slab mass, and (3) the radial position of the CM for each level. Hence, three output neurons were required for each building level. The main conclusions of this study are the following:
- The proposed scheme of multilayer feed forward networks is able to estimate the magnitude, the radial position, and the polar moment of inertia of the mass of each level of low-rise regular buildings, using measurements of their dynamic response as input. For the 1- and 2-story models, four hidden layers led to the best ANN performance, while for the 4-story model, three hidden layers performed best;
- The neuron activation functions used were linear, hyperbolic tangent, and logistic. In all cases, Bayesian regularization was used along with the Levenberg-Marquardt training algorithm;
- As the complexity of the model increases, the accuracy of the ANNs in estimating the slab mass parameters decreases, although the correlation coefficients were larger than 0.95 in all cases;
- Slab rotations were not essential and therefore do not seem to be required for network training;
- No substantial differences were found between the impulsive and the white-noise-type excitations;
- The results indicate that the method can be applied to determine the radial position of the CM with respect to the GC, the magnitude of the mass, and the polar moment of inertia of the mass in real buildings. To achieve this, it is sufficient to apply the methodology suggested in Figure 1, which would allow the accidental eccentricity due to live loads to be evaluated and monitored in one or several buildings.
The two main limitations of this work are: (1) the necessary training time, since as the complexity of the model increases, the time required for the network to give acceptable results seems to increase exponentially; and (2) given how the excitation was taken into account (orthogonal), the proposed method is not able to determine the X, Y coordinates of the CM. For future work, it is recommended to analyze the performance attainable with evolutionary algorithms, such as genetic algorithms or swarm algorithms, or with other optimization algorithms, and to modify the proposed method to include the determination of the X, Y coordinates of the CM.
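For reference, the three target quantities the networks estimate (total slab mass, radial distance of the CM from the geometric center, and polar moment of inertia about that center) follow from elementary mechanics. A minimal sketch with hypothetical slab data, not taken from the paper's models:

```python
import numpy as np

def slab_mass_properties(masses, xy, gc=(0.0, 0.0)):
    """Target quantities for one floor slab, modeled as point masses.

    masses: per-component masses; xy: their (x, y) positions;
    gc: coordinates of the geometric center (GC).
    Returns (total mass M, radial CM distance r from GC,
    polar moment of inertia J about the GC).
    """
    m = np.asarray(masses, dtype=float)
    pos = np.asarray(xy, dtype=float) - np.asarray(gc, dtype=float)
    M = m.sum()
    cm = (m[:, None] * pos).sum(axis=0) / M          # CM relative to GC
    r_cm = float(np.hypot(*cm))                      # radial distance
    J = float((m * (pos ** 2).sum(axis=1)).sum())    # J = sum(m_i * r_i^2)
    return M, r_cm, J

# Hypothetical slab: 100 mass units at the GC plus a 10-unit live load
# placed 2 length units away, producing an accidental eccentricity
M, r, J = slab_mass_properties([100.0, 10.0], [[0.0, 0.0], [2.0, 0.0]])
```

With these numbers the live load shifts the CM by 20/110 ≈ 0.18 length units from the GC and contributes the entire polar moment of 40 mass·length² about it, which is exactly the kind of live-load eccentricity the monitoring scheme in Figure 1 would track.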

Author Contributions
Conceptualization, C.A.G.P. and J.D.L.C.; methodology, C.A.G.P.; validation, C.A.G.P. and J.D.L.C.; investigation, C.A.G.P.; writing-original draft preparation, C.A.G.P. and J.D.L.C.; writing-review and editing, C.A.G.P. and J.D.L.C. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement
The data presented in this study are available on request from the corresponding author.