Gabriel Walton, Sankhaneel Sinha
Department of Geology and Geological Engineering, Colorado School of Mines, Golden, CO, 80401, USA
Keywords: Back analysis, Numerical modelling, Inversion, Case studies
ABSTRACT: Numerical back analysis is a valuable tool available to rock mechanics researchers and practitioners. Recent studies related to back analysis methods have focused primarily on applications of increasingly sophisticated optimization algorithms (primarily machine learning algorithms) to rock mechanics problems. These methods have typically been applied to relatively simple problems; however, more complex back analyses continue to be conducted primarily through ad hoc manual trial-and-error processes. This paper provides a review of the basic concepts and recent developments in the field of numerical back analysis for rock mechanics, as well as some discussion of the relationship between back analysis and more broadly established frameworks for numerical modelling. The challenges of flexible constraints, non-uniqueness, material model limitations, and disparate data sources are considered, and representative case studies are presented to illustrate their impacts on back analyses. The role of back analysis (or “model calibration”) in bonded particle modelling (BPM), bonded block modelling (BBM), and synthetic rock mass (SRM) modelling is also considered, and suggestions are made for further studies on this topic.
Back analysis is the process of determining a set of input conditions that lead to outputs consistent with observed phenomena. Typically, this is performed using numerical models, and is also commonly referred to as model calibration. In the field of rock mechanics, back analysis has long been considered a critical component of the observational method of tunnelling (Terzaghi et al., 1948; Gioda and Maier, 1980; Cividini et al., 1981; Kaiser, 1995) and has more recently become a critical aspect of bonded particle modelling (BPM), bonded block modelling (BBM), and synthetic rock mass (SRM) modelling (Potyondy and Cundall, 2004; Ivars et al., 2011). In this paper, the current state of back analysis in the context of rock mechanics is reviewed; particular attention is paid to the challenges associated with back analysis in the context of rock mechanics problems, and illustrative examples from tunnelling and SRM modelling are presented.
Generally speaking, physical systems can be represented by (Aster et al., 2012):

d = G(m) + ξ + η    (1)
where d represents a set of observations (i.e. data), m is a set of parameters that define the physical model being used, and G, the “operator”, defines the mathematical model that relates m and d. In general, G can take a relatively simple form, such as a system of linear equations, or a more complex nonlinear form. As a simple example, for an individual rock element with homogeneous stress and strain, our observable variable (d) is the strain tensor, the physical material is defined by the compliance tensor (m), and G is the function that relates the strain tensor to the compliance tensor (the product of the compliance tensor and the stress tensor). Inversion of Eq. (1) in this case represents the standard method for stress “measurement” in rock mechanics (Amadei, 1996; Deb, 2010). In the context of most numerical models in rock mechanics, G represents a system of partial differential equations; inherent in the definition of G for a particular problem are the characteristics of the system which typically remain fixed for a given analysis, such as the choice of material model type. ξ represents any component of d not captured by the choice of G in representing the physical system. For example, if a measured response (d) contains a small time-dependent component but the selected material model neglects time-dependent material behavior, the associated discrepancy can be represented by ξ. η represents all other sources of error in d (Aster et al., 2012).
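To make the notation of Eq. (1) concrete, the rock element example can be written out explicitly; the following is a minimal sketch using standard elasticity notation (the symbols are generic and not drawn from a specific source):

```latex
% Forward problem, d = G(m): strain (data) from compliance (model parameters)
% for a prescribed stress state \sigma
\varepsilon_{ij} = S_{ijkl}\,\sigma_{kl}
% Stress "measurement" by inversion: with \varepsilon measured and the
% compliance S known (e.g. from laboratory testing), solve the linear
% system for the stress (written here in Voigt/matrix notation)
\boldsymbol{\sigma} = \mathbf{S}^{-1}\boldsymbol{\varepsilon}
```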
In the process of back analysis or model calibration, one aims to identify m such that G(m) provides an accurate representation of d. This can either be achieved using an inverse approach or a direct approach (Cividini et al., 1981). The inverse approach relies on the re-arrangement of the governing equations such that the model parameters, m, can be solved for; in the context of Eq. (1), this requires inversion of the operator, G. The so-called direct approach, in contrast, involves an iterative solution of the forward problem, G(m), where various combinations of model parameters are tested to identify an optimal set. As G is often quite complex for rock mechanics problems, the advantage of the direct approach is that it does not require formulation of the inverse problem, and can therefore be applied to nonlinear back analysis problems with relative ease. Accordingly, the direct approach is more commonly applied for geotechnical back analysis (Gioda and Maier, 1980; Cividini et al., 1981; Sakurai and Takeuchi, 1983). The direct approach requires three basic components (Oreste, 2005; Miranda et al., 2011):
(1) A forward model which can calculate a set of outputs for comparison with the observations of interest (d);
(2) An error function (also called an objective function), which serves as a measure of the quality of any given model iteration; by defining an error function for a given back analysis, one defines the quantity to be minimized during the back analysis process; and
(3) An optimization algorithm that intelligently selects sets of material parameters, for each model iteration, that are expected to reduce the value of the error function (all three components are illustrated in the sketch following this list).
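To ground these three components, the following minimal Python sketch performs a direct back analysis on a toy problem. The closed-form elastic solution for radial convergence around a circular opening under hydrostatic far-field stress stands in for a full numerical forward model, and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# --- Component 1: forward model ---------------------------------------
# Closed-form radial displacement u(r) = p * a^2 / (2 * G_shear * r) for
# a circular opening of radius a in an elastic medium under hydrostatic
# far-field stress p; a stand-in for a full numerical model G(m).
def forward_model(shear_modulus_gpa, radii_m, p_mpa=30.0, a_m=2.5):
    g_pa = shear_modulus_gpa * 1e9
    return (p_mpa * 1e6) * a_m**2 / (2.0 * g_pa * radii_m)  # metres

# --- Component 2: error (objective) function --------------------------
def error_function(params, radii_m, observed_u_m):
    predicted = forward_model(params[0], radii_m)
    return np.sum((predicted - observed_u_m) ** 2)  # least squares

# --- Component 3: optimization algorithm ------------------------------
# Synthetic "observations" from a true shear modulus of 12 GPa plus noise.
rng = np.random.default_rng(0)
radii = np.array([2.5, 4.0, 6.0, 10.0])
observed = forward_model(12.0, radii) + rng.normal(0, 1e-4, radii.size)

result = minimize(error_function, x0=[5.0], args=(radii, observed),
                  method="Nelder-Mead")
print(f"Back-analyzed shear modulus: {result.x[0]:.2f} GPa")
```

In practice, each call to the forward model would be a full numerical simulation, which is why the efficiency of the optimization algorithm (component 3) matters so much for complex models.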
When performing a back analysis using numerical models with ever-increasing complexity, it is important to consider the purpose of both numerical models themselves and the process of model calibration. The concepts introduced in the following paragraphs provide the context within which the challenges associated with back analysis presented later in the paper should be considered.
Starfield and Cundall (1988) recommended a rock mechanics modelling methodology, and many of their points remain highly relevant today. With reference to Holling’s (1978) classification of modelling problems based on the amount of data and fundamental understanding available to the modeller, Starfield and Cundall (1988) noted that rock mechanics problems tend to be relatively data-limited, with varying degrees of associated fundamental understanding. Although some advances in data collection have occurred over the past three decades, it appears that the amount of data available to rock mechanics modellers has been outpaced by the rate of growing model complexity. This means that the issue of having relatively minimal data to constrain numerical models remains as relevant today as it was in 1988.
The issue of model complexity can be related to the concept of regularization in inverse theory. The relationship between measures of model complexity (e.g. number of parameters) and the degree of misfit between a model and data (e.g. sum of squared errors) will have a shape similar to the curves shown in Fig. 1. When using a very simple model (lower on the plot), it may not be possible to achieve a low degree of misfit, whereas the addition of some additional complexity (moving upwards on the curve) can significantly reduce the misfit (moving leftwards on the curve). Once the misfit between model and data has been reduced to a relatively small value, however, additional increases in model complexity provide diminishing returns with respect to reduction of the misfit (steeper part of the curve). The exact amount of model complexity required to achieve a given degree of agreement with observational data will depend on the complexity of the physical scenario being represented (e.g. in situ behavior that is linear elastic versus time-dependent inelastic behaviors). This is represented in Fig. 1 as two separate curves, where increasingly complex models will be necessary to maintain a given level of model-data misfit as the complexity of the physical scenario being represented increases.
Fig. 1. Typical relationship between model complexity and the degree of model-data misfit (after Aster et al., 2012).
An ideal model should balance model complexity with model-data misfit, and so a model which lies at a point somewhere along the shoulder of the curve is preferred. Alternatively, we could say that because every parameter added to our model adds a new source of uncertainty, we must find a model with the best marginal improvement in model-data fit considered with respect to the marginal uncertainty added (Hammah and Curran, 2009). As noted above, the exact shape and position of the curve (and characteristics of the optimal model) will depend on the complexity of the physical situation being modelled. For example, to investigate stress flow around an excavation in a strong, massive, isotropic rock mass under low stress, there is no marginal benefit of adding complexity beyond an isotropic linear elastic material model, whereas the study of salt creep and damage around an excavation over a period of several years will require a more complex model to achieve a similarly good representation of observable phenomena (e.g. the curve in Fig. 1 will be shifted upwards in the latter scenario relative to the former).
In geophysical inversions, regularization is the method by which the optimal model is obtained (Aster et al., 2012). To obtain a regularized solution, rather than simply minimize an error function which captures the degree of model-data misfit, one solves the following optimization problem (Aster et al., 2012):

min_m [ε(m) + αμ(m)]    (2)

where ε represents the model-data misfit, μ is a measure of model complexity, and α is a weighting factor referred to as the regularization parameter.
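For linear problems, this trade-off is commonly written in the standard Tikhonov form, in which the misfit and complexity measures are squared Euclidean norms (squaring the regularization parameter is a common convention):

```latex
\min_{m} \; \underbrace{\lVert Gm - d \rVert_2^2}_{\varepsilon}
\; + \; \alpha^2 \underbrace{\lVert Lm \rVert_2^2}_{\mu}
```

where L is typically an identity or discrete differencing operator, so that μ penalizes the magnitude or spatial roughness of the model parameters.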
In a geophysical inversion, it is relatively straightforward to quantify μ, often as a measure of spatial heterogeneity for a physical parameter of interest. In rock mechanics models, however, a model’s complexity is often inherent in the formulation of the forward problem. Should the model use a continuum or discontinuum formulation? Should inelastic and time-dependent deformation components be incorporated, or not? These decisions are made before the optimization process of back analysis occurs. As such, model complexity must be carefully considered before starting a back analysis, as it is possible to obtain a sub-optimal back analyzed model simply by virtue of its complexity (or lack thereof), even if the error function has been minimized. This is perhaps best summarized by Oreste (2005): “minimization of the error function does not guarantee a correct back analysis”.
Unfortunately, it is never possible to truly know if a model is “correct”. Indeed, it is impossible to truly verify a numerical model. This is simply the nature of modelling natural systems, which are dynamic and can change in unanticipated ways. As a consequence, a calibrated model developed through back analysis cannot necessarily be considered reliable for predictive purposes (Oreskes et al., 1994).
Given these limitations, it is important to adopt an adaptive modelling approach. In such an approach, modelling results enhance our understanding of the physical problem at hand, which in turn can guide the development of additional models, perhaps of increasing complexity; in the long term, model-driven data collection activities can be incorporated into such a scheme. As suggested by Starfield and Cundall (1988), one’s approach to modelling “should be like that of a detective rather than a mathematician”. This philosophy, however, runs counter to many studies on back analysis in rock mechanics, which have focused on using automated “black box” approaches to model calibration. Although valuable information can be extracted from these processes (for example, the error function plots presented by Moreira et al., 2013), the adaptive nature of the modelling process is effectively bypassed. The following section provides a summary of recent trends in back analysis as applied to rock mechanics problems, including the role of automation.
Over the past four decades, back analysis in the field of rock mechanics has seen many advances. With respect to the available forward models, increasingly sophisticated approaches have been developed to represent rock behavior. These include models which capture time-dependent deformation (e.g. Van Sambeek, 1986; Ladanyi, 1993; Gioda and Cividini, 1996; Wang and Cai, 2020), continuum models for brittle deformation (e.g. Hajiabdolmajid et al., 2002; Lee et al., 2012; Walton and Diederichs, 2015b; Renani and Martin, 2018), and combined finite-discrete element method models that simultaneously simulate continuum and discontinuum behaviors of rocks (e.g. Munjiza, 2004; Elmo and Stead, 2010; Mahabadi et al., 2012; Li et al., 2019; Vazaios et al., 2019). Most of the advances specific to back analysis procedures, however, have focused on the advancement of optimization algorithms that improve the automation of back analysis procedures. In particular, there has been a movement away from the study of classical gradient-based methods (e.g. Gens et al., 1996; Ledesma et al., 1996; Lecampion et al., 2002; Calvello and Finno, 2004; Finno and Calvello, 2005) towards machine learning approaches (e.g. Haupt and Haupt, 1998; Hashash et al., 2004; Tawadrous et al., 2009; Miranda et al., 2011; Gutierrez et al., 2012; Sun et al., 2013; Moreira et al., 2013; Zhao et al., 2015). This shift has been largely due to the greater capability of machine learning approaches to identify global optima for highly nonlinear problems (Vardakos, 2007; Rao, 2009; Miranda et al., 2011). Unlike first- and second-order gradient-based approaches, most machine learning algorithms (e.g. particle swarm optimization (PSO), evolution strategies (ES)) only require the value of the objective function at discrete points and hence are suitable for discontinuous and mixed continuous-discrete problems (Rao, 2009; Miranda et al., 2011). Another advantage of these algorithms is that the parameter space is concurrently searched at multiple points, meaning that there is a lower probability of getting stranded at a local optimum.
However, these advances have not been translated into a broad adoption of formalized back analysis procedures by rock mechanics practitioners or researchers (Sakurai et al., 2003; Gutierrez et al., 2012). There are many factors contributing to this problem. One issue is that the proposed automated approaches have almost exclusively been applied to cases where the ground is represented as either an elastic, viscoelastic, or perfectly plastic Mohr-Coulomb material; if more complex constitutive models are used, then only a small subset of the total number of parameters are back-analyzed (Swoboda et al., 1999; Deng, 2001; Fakhimi et al., 2004; Oreste, 2005; Zhang et al., 2006, 2020; Miranda et al., 2011; Gutierrez et al., 2012; Moreira et al., 2013; Yang et al., 2017; Gan et al., 2017; Sun et al., 2018; Ghorbani et al., 2021). The majority of studies which have utilized more realistic models for rock behavior have tended to apply a manual trial-and-error approach to back analysis (Vardakos et al., 2007; Zhao et al., 2010; Walton et al., 2014, 2016; Lisjak et al., 2015; Walton and Diederichs, 2015b; Perras et al., 2015; Renani et al., 2016; Fazio et al., 2017; Renani and Martin, 2018; Sharrock and Chapula, 2020; Kabwe et al., 2020; Sinha and Walton, 2020a).
There have only been limited studies which applied automated optimization algorithms to back analysis in more complex cases, such as where a viscoelastic-perfectly-plastic material model (Sharifzadeh et al., 2013) or a discontinuum model (Tawadrous et al., 2009; Yazdani et al., 2012) was used to represent rock behavior. Yazdani et al. (2012) and Sharifzadeh et al. (2013) specifically utilized a univariate optimization approach which changes only one model parameter at a time. As noted by Oreste (2005) and Leroueil and Tavenas (1981), it is necessary to consider all parameters concurrently, as their interactions may have significant effects on model outputs, particularly when complex nonlinear behaviors are being modelled. Tawadrous et al. (2009), on the other hand, used an artificial neural network (ANN) to constrain discontinuum model parameters, but at the cost of running 3125 simulations. Given the computational demand of discontinuum models, it is often not practical to run such a large number of simulations, meaning that a manual back analysis requiring only a few dozen simulations is preferred in many cases.
There are also issues related to the proper definition of an error function for application in an automated optimization scheme. The two most commonly used approaches, both of which are sketched in code after the list below, are as follows (Vardakos, 2007; Miranda et al., 2011):
(1) A least-squares criterion where the error function is a linear combination of squared differences between observed and computed values (e.g. displacements), with or without normalization against the observed values; and
(2) A maximum likelihood approach, which is a probabilistic framework requiring probability distributions to be assigned to both the model parameters and observation errors.
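A minimal sketch of both error-function families is given below; the normalization scheme and the Gaussian noise assumption are illustrative choices rather than prescriptions from the cited studies:

```python
import numpy as np

def least_squares_error(observed, predicted, normalize=True):
    """Criterion (1): sum of squared residuals, optionally normalized by
    the observed values so that large and small measurements contribute
    comparably to the total error."""
    residuals = predicted - observed
    if normalize:
        residuals = residuals / observed
    return np.sum(residuals ** 2)

def negative_log_likelihood(observed, predicted, sigma):
    """Criterion (2) in its simplest form: assuming independent Gaussian
    observation errors with known standard deviations sigma, maximizing
    likelihood is equivalent to minimizing this weighted sum of squares
    (constant terms dropped)."""
    return np.sum(((predicted - observed) / sigma) ** 2)
```

Note that with a uniform sigma the two criteria select the same optimum; the probabilistic framing only adds value when credible, source-specific error distributions can be assigned.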
The vast majority of studies utilizing automated back analysis approaches use least-squares approaches intended to minimize the sum-of-squares between observed and modelled displacement values (Sakurai and Takeuchi, 1983; Yazdani et al., 2012; Sharifzadeh et al., 2012, 2013; Moreira et al., 2013; Zhao et al., 2015). Improvement of error functions for numerical back analyses has not been a major focus of studies in the literature.
The purpose of this paper is to summarize the current state of knowledge in numerical back analysis for rock mechanics problems, highlighting ongoing challenges unique to this field. Illustrative examples from the authors’ work are provided to demonstrate these specific challenges. Aspects of back analysis specific to discontinuum models of rock damage processes are also discussed.
While there are several challenges associated with rock mechanics modelling in general (see Section 1.1) and the process of back analysis outside the discipline of rock mechanics, there are some major challenges particularly significant for rock mechanics back analyses. Four of these challenges are: flexible constraints, non-uniqueness, material model limitations, and disparate data sources.
A back analysis requires an indicator of model quality to be defined; in the case of automated approaches, this must be explicitly defined in the form of an error function (e.g. Oreste, 2005; Sharifzadeh et al., 2013; Luo et al., 2018; Ghorbani et al., 2021). This can be challenging due to the fact that, in rock mechanics, one often does not have a well-defined set of constraints to impose on their model. Examples include the studies of Hajiabdolmajid et al. (2002), Diederichs (2007), Edelbro (2009), Edelbro et al. (2012), and Lan et al. (2013), who attempted to match the geometric characteristics of notch-shaped fractured zones around underground openings; of Damjanac and Zorica (2021), who simulated the low-angle shear fracture shape in the roof of Waste Isolation Pilot Plant rooms; and of Barla et al. (2012) and Zhao et al. (2016), who constrained model parameters based on landslide runout profiles and visual observations of damage evolution in the field. It is apparent that defining an error function in such cases is not straightforward, as characterizations of failure geometries are typically approximate both in the field and in the assessment of numerical model results.
It can also be said that rock mechanics practitioners and researchers typically deal with ill-posed problems (Starfield and Cundall, 1988). Some characteristics of these problems include (after Hammah and Curran, 2009):
(1) Under-specification or absence of crucial information;
(2) Existing pieces of information are unconnected, and require holistic understanding to be developed to determine what information is important and what must be ignored; and
(3) The available information may be conflicting or contradictory.
Ultimately, these factors combine to make it very difficult to properly define an error function except in the most idealized cases. Because the relative significance of individual observations may only become clear through the investigation associated with the modelling process, it may be appropriate or even necessary to modify how these observations are factored into any definition of model quality (e.g. by applying different weighting factors for different observations; Miranda et al., 2011). The process by which these adjustments are made through “expert judgement” has not been well-studied, and is therefore difficult to incorporate into automated back analysis approaches. An example can be found in Kaiser et al. (1990), where the authors omitted 3 (“erroneous”) out of 8 stress measurements during back analysis of the in situ stress state at the Underground Research Laboratory (Lac du Bonnet, Manitoba, Canada), based on numerical model results. An automated approach would have considered all 8 measurement points and led to different back-analyzed parameter values.
In general, back analysis studies considering multiple data sets from the same or different sources typically compare the field measurements with model results qualitatively (Vardakos et al., 2007; Esterhuizen et al., 2010, 2021; Walton et al., 2016; Mohamed et al., 2016, 2018; Bahrani and Hadjigeorgiou, 2018). This is because either some of the field data points are affected by local variability in geology and/or some physical processes occurring in reality are not accounted for in the model (see Section 2.3), and some “judgement” is therefore necessary.
Considering the ideal case of Eq. (1) where the “correct” forward model has been selected (ξ = 0) and “errorless” data have been collected (η = 0), it is still possible that an incorrect set of model parameters may be determined from a forward model which perfectly fits the measured data. This applies to the case where the problem is said to be “under-determined”, meaning that the number of independent data points is less than the number of model parameters. It should be noted that the number of “independent” data points may be less than the total number of data points. For example, two nearby multi-point borehole extensometers will provide more information about the ground conditions if they are oriented perpendicular to one another, as opposed to parallel (as two parallel extensometers provide some amount of redundant information). From a mathematical standpoint, the issue of parameter non-uniqueness arises when G has a non-trivial null space, as any model m0 lying in the null space can be linearly combined with a model that satisfies Eq. (1) without affecting the fit to the data (Aster et al., 2012).
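For a linear operator, this can be demonstrated in a few lines of Python (the numbers are purely illustrative): an under-determined G with two observations and three parameters has a non-trivial null space, and any multiple of a null-space vector can be added to a fitting model without changing the predicted data:

```python
import numpy as np
from scipy.linalg import null_space

# Under-determined linear operator: 2 observations, 3 model parameters
G = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])
d = np.array([4.0, 5.0])

m_fit = np.linalg.lstsq(G, d, rcond=None)[0]  # one model that fits d exactly
m0 = null_space(G)[:, 0]                      # G @ m0 = 0 by construction

print(G @ m_fit)               # [4. 5.]
print(G @ (m_fit + 2.5 * m0))  # also [4. 5.]: a different, equally good model
```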
Given that rock mechanics problems tend to be data-limited (Starfield and Cundall, 1988; Jing, 2003), this problem of multiple parameter sets leading to equally optimal solutions to the back analysis problem (referred to as “non-uniqueness”) is highly relevant (Oreskes et al., 1994; Rechea et al., 2008; Bahrani and Hadjigeorgiou, 2018). For example, differences can be observed in the back-analyzed slip surface friction angle values of Crosta et al. (2007), Paronuzzi et al. (2013), Wolter et al. (2013), and Boon et al. (2014), even though all these studies were simulating the same Vajont slide case study. Bahrani and Hadjigeorgiou (2018) reported obtaining similar drift behaviors using two different sets of input parameters in a 3D discontinuum model and stated: “This means that multiple combinations of strength and stiffness properties can lead to an equivalently well calibrated model in terms of field measurements, but not a unique model”. Studies like Oreskes et al. (1994), Lorig and Varona (2000), Asche et al. (2000), Christianson et al. (2006), Xu et al. (2010), Norouzi et al. (2013), and Markus (2019) have also acknowledged the non-uniqueness issue in numerical models. More recently, Sinha and Walton (2020a) demonstrated how two different sets of FLAC3D model parameters could match the deformation profile measured by an extensometer in a coal pillar rib.
Furthermore, the uncertainty added to the system by measurement errors (η ≠ 0) and features not captured by the forward model approximation of the physical system (ξ ≠ 0) means that for practical purposes, non-uniqueness exists even when the number of independent data points exceeds the number of model parameters. This non-uniqueness is compounded by the flexibility of the constraints used to determine which model is “best”. Even if parametric non-uniqueness can be constrained for a given forward model, it may be possible to obtain equally valid results using a different forward model (Sakurai and Akutagawa, 1995; Jing, 2003; Sakurai et al., 2003).
One of the challenges associated with inverse problems in general is the concept of existence: even in the case where one’s data are free of error (i.e. η = 0), there may be no forward model which can match the measured results (Aster et al., 2012). This occurs because the modelled physics are approximate (i.e. ξ ≠ 0 for all forward models), which is often the case in rock mechanics modelling (i.e. due to the limits of the selected modelling approach and/or constitutive model). As a result, all reasonable solutions to a back analysis problem will typically be approximate in nature (Bobet et al., 2009).
Given the physical approximations made in the selection of a modelling approach, phenomena may exist within data that cannot be replicated. These may be irrelevant to the original purpose of the numerical modelling activity, in which case they should not be strongly considered in determining a model’s quality of fit to the data, or they may be relevant to the purpose of the study, meaning that a different forward model should be used. A determination that such phenomena exist within one’s data, and the associated modifications to the forward model and/or error function used, may only become clear through the modelling process. An adaptive modelling approach is necessary to address this issue. Various illustrations of material model limitations can be found in the literature: Sakurai (1997) - elastic versus inelastic simulation of a shallow tunnel; Bahrani and Hadjigeorgiou (2018) and Sinha and Walton (2019) - realism of rock-support interaction in continuum versus discontinuum models; Lan et al. (2010) and Sinha and Walton (2020b) - homogeneous versus heterogeneous block representation in modelling granitic rocks using BBM; Hajiabdolmajid et al. (2002) - elastic versus elastic-brittle-plastic versus cohesion-weakening-frictional-strengthening (CWFS) constitutive models for simulating brittle fracturing; Munson et al. (1989) - Tresca versus Von Mises criteria for simulating creep deformation at the Waste Isolation Pilot Plant (Carlsbad, New Mexico, USA); etc. Each of these studies highlights how one forward model is more appropriate than the other in relation to a specific problem.
With respect to back analyses of in situ excavation behavior, most studies tend to use displacement measurements for purposes of comparison with model outputs. This is because such data are relatively easy to acquire (e.g. convergence points, borehole extensometers) and tend to be reliable (Martin et al., 1996; Cai et al., 2007). When using only displacement data, however, large amounts of data are necessary for the purposes of back analyzing cases where material models are defined by more than two or three parameters (Miranda et al., 2015). Given the limited availability of these data in practice and the associated challenges with non-uniqueness, it is necessary to utilize all available information to constrain one’s model as much as possible (Sjöberg, 2020). Aside from different forms of field measurements (e.g. stresses or acoustic emissions), it is also important to make use of relevant laboratory data, empirical correlations, and background knowledge. Herein lies another disadvantage of the typically applied least-squares error function: it is difficult to properly incorporate disparate sources of data. Although a weighted least-squares criterion can be used (e.g. Zhao et al., 2015) such that different weighting factors can be applied to the different data types, there is no well-established method for appropriately determining such weighting factors. As an alternative, authors like Pelizza et al. (2000) have simply justified higher data-model mismatch in stress values and placed more confidence in calibration against displacement data by stating that stress measurements are generally affected by errors more than displacement measurements.
A probabilistic approach (i.e. an error function based on maximum likelihood concepts) can be well-suited to incorporating disparate data sources in a consistent manner, but in this case the determination of probability distributions for each parameter is difficult (Miranda et al., 2011). Because no clear methodology exists in either case, additional uncertainty is simply added to the system due to the addition of extra parameters (i.e. the weighting factors or the parameters required to define probability distributions) (Hammah and Curran, 2009).
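One pragmatic (if imperfect) pattern for combining sources is a weighted sum of normalized per-source misfits. The sketch below illustrates the structure only; the source names and weight values are hypothetical, and, as noted above, no established method exists for choosing the weights:

```python
import numpy as np

def combined_error(predicted, observed, weights):
    """Weighted least-squares misfit across disparate data sources.
    predicted/observed: dicts of arrays keyed by source name;
    weights: user-assigned factors reflecting confidence in each source."""
    total = 0.0
    for source, obs in observed.items():
        # Normalize by observed magnitudes so that sources with very
        # different units and scales can be summed meaningfully
        total += weights[source] * np.mean(((predicted[source] - obs) / obs) ** 2)
    return total

error = combined_error(
    predicted={"displacement_mm": np.array([12.1, 7.9]), "stress_MPa": np.array([41.0])},
    observed={"displacement_mm": np.array([11.0, 8.5]), "stress_MPa": np.array([35.0])},
    weights={"displacement_mm": 1.0, "stress_MPa": 0.25},  # hypothetical values
)
```

A down-weighted stress term, as in this sketch, would encode the kind of judgement made by Pelizza et al. (2000) that stress measurements are less reliable than displacement measurements, but the specific factor remains an unconstrained modelling choice.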
The previous section has highlighted some of the major challenges associated with back analysis in rock mechanics, as well as how these challenges impact the applicability of recently developed optimization techniques. In the following, examples of each of these challenges are presented in the context of specific back analysis case studies.
Walton et al. (2018) presented a case study on a deep (almost 3 km) shaft excavated in an argillite unit with closely spaced foliation (see Fig. 2a). The foliation dips almost vertically; as a consequence, the relatively high in situ stresses tend to concentrate tangential to vertical excavations along the bedding strike, leading to a buckling behavior caused by both stress and strength anisotropy (see Fig. 2b).
As a result of shaft liner damage induced by excessive ground movements, it was decided to switch to an elliptical shaft geometry, and several extensometers were installed in both the circular and elliptical sections of the shaft to monitor long-term stability. A back analysis was commissioned to allow the following to be accomplished:
(1) Estimate reasonable parameters for the intact argillite material and discrete foliation planes that can approximately replicate observed ground deformation for both the circular and elliptical shaft shapes when unlined and lined;
(2) Use the calibrated model to estimate the sensitivity of ground displacements to the in situ stress condition; and
(3) Use the calibrated model to estimate the sensitivity of ground displacements to the selected primary support and final liner design.
Based on the extensometer measurements, models for the time-dependent deformation after primary support installation and liner installation were developed for both shaft shapes. Unfortunately, the initial deformation within three days of excavation was not measured in any case. Using this and all other information available, the constraints for the back analysis were set as follows (phrases highlighting the flexibility of the constraints are shown in italics):
(1) Typical values of unconfined compressive strength of the intact argillite are between 50 MPa and 175 MPa, with an average just above 100 MPa;
(2) Typical values of Young’s modulus (E) of the intact argillite range from 10 GPa to 55 GPa, with an average around 30 GPa;
(3) The in situ k-ratios for the maximum and minimum horizontal stresses are thought to be approximately 1.5 and 1.25, respectively;
(4) The equilibrium radial deformation associated with buckling in the circular portion of the shaft is expected to be on the order of 1/4 the shaft radius (visually estimated from the photo shown in Fig. 2b);
(5) The time-dependent component of buckling deformation at equilibrium in the unlined circular shaft should be at least 240 mm (the time-dependent deformation measurements, in this case, suggest 240 mm after one year of displacement without a clear indication of equilibration);
(6) The time-dependent component of buckling deformation at equilibrium in the unlined elliptical shaft should be approximately 24 mm;
Fig. 2. (a) Sub-vertical argillite beds with a spacing of approximately 15 mm; and (b) buckling of argillite beds observed in a bored raise (after Strickland et al., 2016).
(7) If installed after approximately 180 mm of time-dependent deformation has occurred, the liner should bring the ground to equilibrium in the circular shaft after approximately 31 mm of additional displacement at the ground-liner boundary;
(8) If installed after approximately 9 mm of time-dependent deformation has occurred, the liner should bring the ground to equilibrium in the elliptical shaft after approximately 4 mm of additional displacement at the ground-liner boundary; and
(9) The depth at which significant dilation begins to occur (i.e. where a change in the gradient of ground displacement is detected) should not change significantly over the course of the time-dependent buckling process, and should be in the range of 3-8 m behind the shaft wall for both the circular and elliptical shaft shapes.
Although the extensive extensometer monitoring performed in the shaft during construction allowed for development of several constraints on the displacement characteristics of the shaft, many of the available constraints were still relatively vague and flexible. For example, an ideal model might show a shift in the displacement gradient at a depth of 5-6 m (in the middle of the acceptable 3-8 m range), but it may not be possible to find a model which matches this condition as well as the displacement conditions, necessitating the acceptance of a model with this depth at ~3 m or ~8 m. Also, it is not immediately clear how to best handle the flexibility in constraint (5). Given the available information, is a model showing 240 mm of equilibrium displacement equally as plausible as one showing 480 mm of equilibrium displacement? This type of question cannot be easily answered without further investigation, likely involving some amount of iterative, user-guided modelling.
In the 1980s and 1990s, Atomic Energy of Canada Limited (AECL) owned and operated an Underground Research Laboratory excavated in the Lac du Bonnet Granite, approximately 120 km northeast of Winnipeg, Manitoba, Canada (Martin et al., 1996). A mine-by test tunnel constructed at the AECL laboratory served as the focal point for research on the strength characteristics of brittle rock (e.g. Martin and Chandler, 1994; Read and Martin, 1996; Martin, 1997). From this work, a CWFS approach to model the strength of brittle rock around excavations was developed (Martin, 1997; Hajiabdolmajid et al., 2002), and this approach has since been applied by many authors (e.g. Diederichs, 2007; Edelbro, 2009; Zhao et al., 2010; Walton et al., 2014; Walton and Diederichs, 2015b). The AECL mine-by test tunnel remains the best studied case using the CWFS strength model.
Fig. 3. Results from four finite difference models (gray regions representing modelled yield) showing reasonable agreement with the observed notch geometry at the AECL mine-by experiment (dashed red lines): (a) parameters from Hajiabdolmajid et al. (2002); (b) parameters which differ significantly from those of Hajiabdolmajid et al. (2002), including a lower in situ unconfined strength; (c) parameters selected based on the author’s experience modelling similar rocks; (d) an instantaneous CWFS approach using ϕpeak = 10° as per Edelbro (2009, 2010) and Edelbro et al. (2012) (from Walton, 2019). The input parameters of the four models presented can be found in Table 1.
In brittle rock, high stress concentrations around an excavation can lead to the development of spalling fractures sub-parallel to the excavation surface. As these fractures dilate and de-confine the rock further away from the excavation, characteristic V-shaped notches may form, which can constitute a stable excavation geometry (Diederichs, 2007). The shape of this notch, which can be roughly characterized by its depth and angular extent around the excavation, is dependent on both the stress and strength characteristics of the ground (Perras and Diederichs, 2014; Walton et al., 2015). Several authors have used notch shape as an observation against which to compare numerical modelling results obtained using a CWFS strength model (Hajiabdolmajid et al., 2002; Diederichs, 2007; Edelbro, 2009; Zhao et al., 2010; Lee et al., 2012; Edelbro et al., 2012; Renani and Martin, 2018; Dadashzadeh, 2020). Unfortunately, such back analyses are poorly constrained, as a numerical model using a CWFS strength model requires at least twelve parameters (three defining the in situ stress condition, two elastic parameters, six strength parameters, and a dilation angle). Even if some of these parameters (i.e. stresses and elastic constants) are constrained by independent information, the number of strength parameters (six) still far exceeds the number of independent pieces of information contained within a characterization of the notch geometry (two: depth and angular extent). This corresponds to a back analysis with a high degree of non-uniqueness. This non-uniqueness is further compounded due to the non-zero model and measurement errors, meaning that slightly differing modelled notch geometries can still be claimed to reasonably match the observed notch shape.
The non-uniqueness associated with this problem was illustrated in the context of the AECL mine-by experiment by Walton (2019). Fig. 3 shows the results from four different CWFS models, each of which matches the observed notch geometry satisfactorily. Table 1 contains a record of the strength parameters and dilation angles used to generate these models. There are significant variations in each of the model parameters, and yet the resulting model outputs would lead to any one of these models being considered as an appropriate back analysis result. The same conclusion can be drawn from the Kiirunavaara mine case study (simulation of brittle rock damage in a drift), where different CWFS parameters were identified by Edelbro et al. (2012), Sjöberg et al. (2015), and Tjäder (2018) as appropriate for the host rock. In such situations, determining which of the models is most appropriate would require additional data, such as displacements associated with spalling fracture dilatancy or laboratory test data that provide further information on some of the strength and/or dilation parameters. This example demonstrates that because of the problem of non-uniqueness, a satisfactory model-data fit cannot be taken as an indication of model reliability.
Table 1. CWFS parameters and dilation angle values used to obtain the results shown in Fig. 3 (modified from Walton, 2019).
Walton et al. (2016) showed the results of a back analysis experiment conducted at the Creighton Mine (Sudbury, Canada). A three-dimensional (3D) model was developed in FLAC3D (a finite difference modelling software), and a manual back analysis was conducted to determine reasonable stress, stiffness, strength, and dilation parameters and study the deformation mechanisms of hard rock pillars. Ultimately, a calibrated model was developed, but there were still two model locations (out of a total of twelve monitored locations) which did not match the full deformation history of the extensometer anchors at those points.
The anchor with the greatest degree of misfit was located 20 cm from the edge of the monitored pillar near the edge of the ore body, with the granitic host rock containing a higher than typical amount of sulphide minerals. As can be seen in Fig. 4, although the initial (elastic) portions of the measured and modelled curves agree, and the final displacement values obtained are approximately equal, there is a major deviation of the model from the measured displacement at the end of March 2014.
Fig. 4. Extensometer data, back analyzed model result, and hypothetical least-squares solution for an extensometer anchor located 20 cm away from the edge of the pillar being monitored. The large deviation that occurred in March 2014 is thought to be associated with time-dependent effects not accounted for in the model (after Walton et al., 2016).
Through the back analysis process (which ultimately included over 1600 user-defined individual forward model trials), it was determined that no reasonable combination of parameters could accurately capture the observed deformation of the anchor in question. This is because at the point of the greatest deviation between the model and data (late March), the excavation of approximately 4000 m³ of material from a stope occurred only ~40 m away from the monitoring point. Given this information, it is perhaps more surprising that the extensometer recorded no notable movement at this time. With this in mind, and knowing that a reasonable agreement was found between the model and data for ten out of twelve of the monitoring points, it seems plausible that the lack of an increase in extensometer displacement shown in Fig. 4 represents a phenomenon not captured by the forward model employed (i.e. ξ ≠ 0). Walton et al. (2016) suggested that this might represent a delay in the onset of brittle damage at low confinements (i.e. for the two shallow anchors which deviate from the data) due to the increased presence of relatively ductile sulphide lenses in the area of the extensometer located closer to the orebody. There could be other plausible explanations, such as the presence of a structural feature (or features) that was not incorporated into the model, which may have preferentially redistributed stresses away from the pillar being monitored.
In any case, the challenge of reconciling the inability of the model to replicate the full history of displacement from two of twelve extensometer anchors (even if the final displacements are similar) illustrates a challenge in developing an appropriate error function. This issue was not understood until a large number of models had been run as part of a manual back analysis process. The model ultimately selected through the manual back analysis process would not be considered optimal based on a conventional least-squares error function, which may have selected a model bisecting the extensometer data as shown in Fig. 4 (several models run through the back analysis process showed results similar to the dotted line in this figure), while making some sacrifices on fit quality for ten out of twelve measurement points considered.
Even when performing a back analysis using a relatively simple forward model, it can be valuable to consider data beyond in situ measurements (e.g. recorded displacements) to obtain a more realistic result. Consider, for example, the case of an elastic back analysis where the in situ stress condition and rock mass modulus are the parameters to be determined. While matching in situ displacements may be the primary goal of the back analysis, one should be wary if the results obtained differ significantly from the values that might be suggested by independent means. For example, if the obtained stress magnitudes are well outside the range of values suggested by previous studies, such a discrepancy must be scrutinized. With respect to the determination of rock mass modulus, one may have access to laboratory-based estimates of intact Young’s modulus which can be used to estimate the rock mass modulus using empirical means (e.g. Hoek and Diederichs, 2006). Again, a large discrepancy between a back analyzed value and an empirically estimated value is an unfavorable result.
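For reference, the empirical relationship of Hoek and Diederichs (2006) estimates the rock mass modulus E_rm from the intact modulus E_i, the geological strength index (GSI), and the disturbance factor D:

```latex
E_{rm} = E_i \left( 0.02 + \frac{1 - D/2}{1 + \exp\left[ (60 + 15D - \mathrm{GSI})/11 \right]} \right)
```

A back-analyzed modulus falling well outside the range implied by this relationship for plausible GSI values would be exactly the kind of discrepancy that warrants scrutiny.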
In the case of more complex forward models, consideration of all available data sources becomes even more critical. Walton and Diederichs (2015a) presented a case study of a shaft back analysis using the CWFS strength model. They demonstrated that equally acceptable back analysis results could be obtained using three distinct models for dilation (see Fig. 5):
(1) A confining stress and plastic shear strain dependent mobilized dilation angle model, here referred to as the W-D model (Walton and Diederichs, 2015b);
(2) A plastic shear strain dependent dilation angle model, which considers an initial dilation angle increase followed by a decay, with differing profiles for the low confinement inner zone immediately surrounding the excavation and the high confinement zone further into the rock mass, referred to here as Alternative 1; and
(3) A plastic shear strain dependent dilation angle model, which considers immediate decay, with differing profiles for the low confinement inner zone immediately surrounding the excavation and the high confinement zone further into the rock mass, referred to here as Alternative 2.
The associated non-dilation model parameters and two measures of model-data fit for each of these cases are shown in Table 2. Note that the quantitative measures of model quality provided reflect only the degree of agreement between modelled ground displacements and associated measurements obtained from the borehole extensometers. Nevertheless, the non-dilation model parameters in all cases were selected to be approximately consistent with existing information (in situ stress measurements, empirical correlations between in situ material parameters and laboratory test data). The key difference between these models is that the dilation model and parameters used for the W-D model are consistent with post-peak laboratory test data available for the rock type of interest, whereas Alternatives 1 and 2 were created independent of any consideration of laboratory data. This example illustrates the importance of incorporating distinct data types into a back analysis to better constrain the result and reduce the potential for non-uniqueness. Here, the lack of a clear framework through which to quantitatively combine disparate data sources into a coherent error function necessitated the use of a trial-and-error approach to back analysis.
Table 2. Non-dilation model parameters and measures of model quality for the three dilation models tested by Walton and Diederichs (2015a), as well as a separate back analysis conducted by Renani et al. (2016).
It is also interesting to consider that the same case study was separately studied by Renani et al. (2016). Rather than utilizing a CWFS strength model, Renani et al. (2016) employed a conventional strain-softening model. In this case, it is unknown how laboratory parameters were incorporated into the back analysis process. The resulting models, while showing “reasonable” agreement with the extensometer data, have larger errors than the models presented by Walton and Diederichs (2015a). The multiple models presented by Walton and Diederichs (2015a) and Renani et al. (2016) further highlight the importance of considering multiple data sources in the back analysis process, and additionally demonstrate the problem of non-uniqueness in back analysis.
Fig. 5. Dilation angle models used to match model displacements to extensometer measurements: (a) the “W-D” model (Walton and Diederichs, 2015b); (b) Alternative 1; and (c) Alternative 2 (after Walton and Diederichs, 2015a). γp is the maximum plastic shear strain.
A major trend in rock mechanics modelling is the increasing use of BPM or BBM to simulate intact rock behavior and SRM models to simulate rock mass behavior. The BPM/BBM approach to model intact rock behavior simulates the formation, growth, and interaction of micro-cracks using a bonded assembly of spherical particles or polygonal blocks. Through the appropriate selection of “micro-parameters” that govern the interactions of individual particles, such models can produce a set of emergent behaviors that closely match those of intact rocks (Potyondy and Cundall, 2004).
Following the initial introduction of the BPM approach to model intact rocks through several pioneering studies (e.g. Potyondy et al., 1996; Diederichs, 2000; Potyondy and Cundall, 2004), several further advancements were made to allow the method to better replicate observed features of intact rock behavior (e.g. Cho et al., 2007; Potyondy, 2012; Ghazvinian et al., 2014). For example, smooth joints can be introduced in BPM to represent grain boundaries, and such models are known as BPM-grain based models (BPM-GBM; Bahrani et al., 2014; Bewick et al., 2014; Zhou et al., 2019). By generating more block interlocking, BPM-GBM was able to overcome known limitations of BPM such as the inability to simultaneously match the compressive and tensile strengths of brittle rocks, the tendency to underestimate friction angles, and the inhibition of shear fracture propagation (Diederichs, 2000; Potyondy and Cundall, 2004; Cho et al., 2007). An alternate approach is BBM, which utilizes discrete elastic or inelastic polygonal blocks to represent the material domain. This approach has been found to successfully replicate small-scale damage processes in granitic (Lan et al., 2010; Ghazvinian et al., 2014; Nicksiar and Martin, 2014; Farahmand and Diederichs, 2015; Cai and Noorani, 2015; Sinha and Walton, 2020b; Sinha et al., 2020) and sedimentary rocks (Kazerani and Zhao, 2010), and has also shown potential in reproducing the behavior of field-scale structures (Preston et al., 2013; Sinha and Walton, 2018). In comparison to BPM, both BBM and BPM-GBM take more time to run and have larger numbers of input parameters (hence, increased parameter non-uniqueness potential).
More recently, the incorporation of discrete fracture networks into BPM and BBM to simulate rock mass-scale discontinuities has led to the development of so-called SRM models which can replicate the deformation mechanisms of rock masses associated with both their intact rock and structural discontinuity components (e.g. Ivars et al., 2011; Vallejos et al., 2016; Farahmand et al., 2018; Vazaios et al., 2018). SRM, therefore, has the potential not only to overcome the limitations of the conventional Hoek-Brown-geological strength index (GSI) approach in estimating rock mass strength (applicable to rock masses where deformation is mostly by joint slippage and/or block rotation; Marinos et al., 2005), but also to simulate complete stress-strain behaviors of rock masses at different scales and under different loading conditions. The key limitations of this approach are the inherent computational demand of conducting large-scale discontinuum simulations and the potential for difficulty in validating all aspects of model behavior in practical field-scale scenarios.
An element of back analysis is inherently incorporated into most uses of BPM, BBM, and SRM models. The use of such models can be most fundamentally broken into two steps:
(1) Iteratively test different combinations of micromechanical parameters until the emergent behaviors of numerically simulated laboratory tests match those observed in actual test data (model calibration); and
(2) Forward (i.e. predictive) model testing under conditions difficult to simulate in a physical laboratory.
Historically, BPM and BBM models have been calibrated based on a limited number of mechanical parameters. In order of decreasing prevalence, these are unconfined compressive strength (UCS), E, tensile strength, ν, triaxial strength data, crack initiation (CI) stress, crack damage (CD) stress, and fracture toughness measures. Typical studies calibrate their models to three to five of these parameters (Potyondy et al., 1996; Potyondy and Cundall, 2004; Cho et al., 2007; Tawadrous et al., 2009; Kazerani and Zhao, 2010; Ivars et al., 2011; Bahrani et al., 2012; Potyondy, 2012; Norouzi et al., 2013; Ghazvinian et al., 2014; Chen et al., 2015, 2016; Vallejos et al., 2016). By contrast, these models require on the order of ten or more material parameters to be defined, and as a result these models are subject to some degree of non-uniqueness (Potyondy and Cundall, 2004; Christianson et al., 2006; Ghazvinian et al., 2014).
The non-uniqueness in BPM and BBM calibration can be managed based on an understanding of reasonable limits for certain parameters that are well understood in terms of physical meaning (e.g. the friction angle between two particles), or simply by fixing some parameters. Additionally, an understanding of macro-property sensitivity to changes in micro-properties can allow the calibration process to be performed in stages (Ghazvinian et al., 2014; Fabjan et al., 2015). However, the non-uniqueness induced based on a given model’s formulation (i.e. the grain structure) rather than its micro-properties may be significant, given the large impact of the simulated grain geometry on the emergent behavior of the model (Ghazvinian et al., 2014; Azocar, 2016; Gui et al., 2016; Mayer and Stead, 2017). Although this particular aspect of model formulation and its impact on the calibration process has not been studied extensively, some recent studies indicate that different realizations of the same grain structure tend to produce similar behaviors, while changes in the block sphericity (or angularity) can significantly alter model results (Gui et al., 2016; Liu et al., 2018; Contreras Inga et al., 2020).
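Returning to the idea of staged calibration, the following Python sketch illustrates the workflow's logic. The forward-model stub, parameter names, sensitivities, and targets are entirely hypothetical stand-ins for BPM/BBM simulations that would each take minutes to hours to run:

```python
from scipy.optimize import minimize_scalar

# Hypothetical stand-in for a BPM/BBM run: maps micro-properties to the
# macro-properties of a simulated laboratory test. A real forward model
# would be a full discontinuum simulation.
def run_synthetic_test(contact_stiffness, bond_strength):
    youngs_modulus = 4.2 * contact_stiffness             # stiffness controls E
    ucs = 1.8 * bond_strength + 0.1 * contact_stiffness  # strength mostly bonds
    return youngs_modulus, ucs

TARGET_E, TARGET_UCS = 30.0, 100.0  # macro targets from laboratory data

# Stage 1: fix strength, calibrate stiffness to E (E insensitive to bonds)
stiffness = minimize_scalar(
    lambda k: (run_synthetic_test(k, 50.0)[0] - TARGET_E) ** 2,
    bounds=(1.0, 20.0), method="bounded").x

# Stage 2: hold the calibrated stiffness, calibrate bond strength to UCS
bond = minimize_scalar(
    lambda s: (run_synthetic_test(stiffness, s)[1] - TARGET_UCS) ** 2,
    bounds=(10.0, 100.0), method="bounded").x

print(f"stiffness = {stiffness:.2f}, bond strength = {bond:.2f}")
```

The staging works only because of the assumed sensitivity structure (stiffness controls E, bonds control UCS); in a real calibration, that structure must first be established through sensitivity analyses of the kind reported by Ghazvinian et al. (2014).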
While the goal of such models is to realistically capture the fundamental behaviors of rocks at a micro scale, the choice of model complexity necessary for that purpose is often not well understood. An over-simplified model, therefore, may not have the capability of capturing the entire range of target attributes. This was observed in a recent study (Sinha and Walton, 2018) where elastic Voronoi blocks were employed for developing a synthetic representation of a granite. The contact and block micro-properties were constrained against UCS, Brazilian tensile strength (BTS), E, ν, damage thresholds (CI and CD), and the shape of a representative stress-strain curve. Although the model exhibited realistic strength and dilatant behavior under unconfined conditions, it showed exceptionally high triaxial strengths in comparison to those obtained from laboratory testing (see Fig. 6). In the same study, the models were up-scaled to simulate rock pillars with width-to-height ratios ranging from 1 to 3, and it was found that different sets of micro-parameters were necessary to reproduce basic pillar failure mechanisms and realistic stress-strain responses (Sinha and Walton, 2018). The inability of the constrained synthetic model to replicate the entire range of expected behavior is attributed to the choice of elastic homogeneous block properties, which may be inappropriate for polycrystalline granitic rocks.
Identification of such material model limitations (e.g. inability to simulate certain phenomena with a given model representation) is difficult in BBM due to the added complexity of parameter non-uniqueness. In other words, during a trial-and-error back analysis process, a researcher may be confronted with the dilemma as to whether any model-data mismatch is due to an imperfect calibration, or a limitation of the modelling method or model representation. A practical solution is to model the target rock using a simplified approach (e.g. elastic Voronoi grains) and then add degrees of complexity until a large number of macro-properties are adequately captured (Sinha and Walton, 2020b).
While such an iterative approach is cumbersome, a review of existing literature can assist in selecting the most appropriate modelling method for a specific problem. Previous attempts to model the damage processes in Lac du Bonnet granite using the Voronoi approach have involved the use of 3D elastic homogeneous blocks, two-dimensional (2D) inelastic homogeneous blocks, and 2D elastic heterogeneous blocks. The increase in complexity of the modelling approach (from elastic homogeneous to elastic heterogeneous) led to better conformity with a larger number of expected macro-properties (see Table 3). A key caveat in using such complex heterogeneous models is the higher risk of encountering non-uniqueness issues, and this is evident from the significantly different micro-properties used by Lan et al. (2010), Farahmand and Diederichs (2015), and Chen et al. (2016) to model the same rock type with an identical model representation (2D elastic, heterogeneous blocks).
Fig. 6. (a) Stress-strain curve and other damage thresholds for Voronoi models; and (b) triaxial test results from laboratory and Voronoi models. Note that “HB” stands for “Hoek-Brown”.
Table 3. BBM micro-parameters and simulation results from Farahmand and Diederichs (2015), Cai and Noorani (2015), and Ghazvinian et al. (2014). The properties of Lan et al. (2010) and Chen et al. (2016) are not shown to avoid redundancy, and because not all macro-properties listed in the table were computed in those studies.
With all that in mind, the ability of a complex model to capture more target attributes is related not only to the increase in the number of calibration parameters, but also to the phenomenological improvement in the model’s capability. For example, an inelastic grain-scale model can approximate the intragranular fracturing process (i.e. grain yield) that is relevant at higher confinement, but a block model with elastic elements cannot; accordingly, the latter models are better suited for low confinement simulations (Sinha and Walton, 2020b). Similarly, heterogeneous models allow the development of local tensile stresses due to elastic mismatch and can faithfully reproduce the pre-peak damage mechanisms, but a homogeneous model cannot (Sinha and Walton, 2020b). It is therefore important to obtain a good understanding of a model’s capabilities before utilizing it for a given rock engineering problem.
Fig. 7. Examples of post-peak laboratory test data and associated BPM modelling results: (a) laboratory test data and simulated stress-strain curves from Kirchberg-II granite (after Chen et al., 2016); and (b) measured and modelled unconfined stress-strain curves for anisotropic Cobourg limestone with perpendicular and parallel bedding orientations relative to the loading direction (after Ghazvinian et al., 2014).
Whether or not model representation and/or parameter non-uniqueness has a strong effect on the process of calibrating these models to the pre-peak portion of a material’s stress-strain curve, there is a significant potential for calibrated models to provide inaccurate representations of the material’s post-peak behavior. Often the post-peak portions of BPM and BBM model results are omitted from published results, or are not directly compared with laboratory data. Even in studies with some of the best post-peak results (examples shown in Fig. 7), the model-to-data match is not particularly strong and is not quantified (Kazerani and Zhao, 2010; Ghazvinian et al., 2014; Chen et al., 2016). Additionally, there is a distinct lack of study on the post-peak dilatational behavior of BPM, BBM, and SRM models. These facts are troubling given the significant importance of post-peak strength in various rock mechanics applications such as pillar stability (e.g. Walton et al., 2016) and block caving assessment (e.g. Ivars et al., 2011). As noted by Potyondy and Cundall (2004), one of the challenges with calibrating BPM simulations to laboratory data is uncertainty with respect to which indicators from data should be used for the purposes of calibration. Given the recent development of models for post-peak dilatancy (e.g. Alejano and Alonso, 2005; Zhao and Cai, 2010; Walton and Diederichs, 2015b; Rahjoo, 2019), however, this challenge has been mitigated to some extent.
Given that the ultimate goal of BPM/BBM/SRM models is to allow for the development of behavioral predictions for rocks and rock masses outside the set of conditions which are reasonably observable in a laboratory setting, they are, by nature, pushing the philosophical limits of the capabilities of numerical models. By making concerted efforts to incorporate further data (e.g. post-peak response parameters) into calibration efforts, further studying the sensitivity of emergent macro-properties to different model formulations and parameters, and exercising prudence in the interpretation of results, however, these techniques have the potential to allow for the development of significant advances in the field of rock mechanics.
Back analysis represents an important tool in the field of rock mechanics. It can aid in developing understanding of phenomena that cannot be easily observed, and can be used to test hypotheses about specific aspects of rock behavior. It also constitutes a critical component of modern modelling methods which aim to represent rocks and rock masses as assemblies of micromechanical components. Despite its importance, the methods used to conduct back analyses have drawn minimal attention, and other than in the case of relatively simple problems, rock mechanics researchers and practitioners still use ad hoc manual trial-and-error approaches.
This paper outlined several complex and interconnected challenges that have made it difficult to develop a generally applicable, efficient framework for numerical back analysis in rock mechanics. Perhaps the greatest challenge moving forward will be to find a way to incorporate the spirit of adaptive modelling into semi-automated processes for back analysis. This will ultimately require a great deal of further study on how error functions can be designed in a flexible and reliable manner to incorporate disparate data sources and flexible constraints, and to better handle material model limitations and non-uniqueness.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
We gratefully acknowledge funding support from the National Institute for Occupational Safety and Health (NIOSH) (Grant No. 200-2016-90154).