## 1. Introduction

In data assimilation one seeks the best estimate of the state of a dynamical system, given a forecast model with possible model error and noisy observations at discrete observation intervals (Kalnay 2002). This process is complicated on the one hand by the often chaotic nature of the underlying nonlinear dynamics, which increases the variance of the forecast, and on the other hand by the fact that one often has only partial information on the observables. In this paper we address the latter issue. We consider situations whereby noisy observations are available for some variables but not for other, unresolved variables. For the latter, however, we assume that some prior knowledge about their statistical climatic behavior, such as their mean and variance, is available.

A particularly attractive framework for data assimilation is provided by ensemble Kalman filters (e.g., see Evensen 2006). These straightforwardly implemented filters distinguish themselves from other Kalman filters in that the spatially and temporally varying background error covariance is estimated from an ensemble of nonlinear forecasts. Despite the ease of implementation and the flow-dependent estimation of the error covariance, ensemble Kalman filters are subject to several errors and specific difficulties [see Ehrendorfer (2007) for a recent review]. Besides the problem of estimating model error, which is inherent to all filters, and inconsistencies between the filter assumptions and reality, such as non-Gaussianity, which render all Kalman filters suboptimal, ensemble-based Kalman filters suffer the specific problem of sampling errors due to an insufficient ensemble size. These sampling errors usually lead to an underestimation of the error covariances, which may ultimately lead to filter divergence, when the filter trusts its own forecast and ignores the information given by the observations.

Several techniques have been developed to counteract the associated small spread of the ensemble. To deal with sampling errors in ensemble filters we mention two of the main algorithms: covariance inflation and localization. To avoid filter divergence due to an underestimation of error covariances, the concept of covariance inflation was introduced, whereby the prior forecast error covariance is increased by an inflation factor (Anderson and Anderson 1999). This is usually done in a global fashion and involves careful and expensive tuning of the inflation factor; recently, however, methods have been devised to adaptively estimate the inflation factor from the innovation statistics (Anderson 2007, 2009; Li et al. 2009). Ensemble sizes that are too small also lead to spurious correlations associated with remote observations. To address this issue, the concept of localization was introduced (Houtekamer and Mitchell 1998, 2001; Hamill et al. 2001; Ott et al. 2004; Szunyogh et al. 2005), whereby only spatially close observations are used for the innovations.

To take into account the uncertainty in the model representation we mention here isotropic model error parameterization (Mitchell and Houtekamer 2000; Houtekamer et al. 2005), stochastic parameterizations (Buizza et al. 1999), and kinetic energy backscatter (Shutts 2005). A recent comparison between those methods is given in Houtekamer et al. (2009), Charron et al. (2010), and Hamill and Whitaker (2005). The problem of non-Gaussianity is for example discussed in Pires et al. (2010) and Bocquet et al. (2010).

Whereas the underestimation of error covariances has received much attention, relatively little has been done about a possible overestimation of error covariances. Overestimation of covariances is a finite-ensemble-size effect that typically occurs in sparse observation networks (e.g., see Liu et al. 2008; Whitaker et al. 2009). Uncontrolled growth of error covariances, which is not tempered by available observations, may progressively spoil the overall analysis. This effect is exacerbated when inflation is used; in regions where no observations influence the analysis, inflation can lead to unrealistically large ensemble variances that progressively degrade the overall analysis (e.g., see Whitaker et al. 2004). This is particularly problematic when inappropriate uniform inflation is used. Moreover, it is well known that covariance localization can be a significant source of imbalance in the analyzed fields (e.g., see Houtekamer and Mitchell 2005; Kepert 2009; Houtekamer et al. 2009). Localization artificially generates unwanted gravity wave activity, which in poorly resolved spatial regions may lead to an unrealistic overestimation of error covariances. Being able to control this should help filter performance considerably.

When assimilating current weather data in numerical schemes for the troposphere, the main problem is underestimation of error covariances rather than overestimation. This is due to the availability of radiosonde data, which assures wide observational coverage. However, in the preradiosonde era there were severe data voids, particularly in the Southern Hemisphere and in the vertical, since most observations were taken at the surface level in the Northern Hemisphere. There is an increased interest in so-called climate reanalysis (e.g., see Bengtsson et al. 2007; Whitaker et al. 2004), which faces the challenge of dealing with large unobserved regions. Historical atmospheric observations are reanalyzed with a fixed forecast scheme to provide a globally homogeneous dataset covering the troposphere and stratosphere over very long periods. A remarkable effort is the international Twentieth Century Reanalysis Project (20CR; Compo et al. 2011), which produced a global estimate of the atmosphere for the entire twentieth century (1871 to the present) using only synoptic surface pressure reports and monthly sea surface temperature and sea ice distributions. Such a dataset could help to analyze climate variations in the twentieth century or multidecadal variations in the behavior of the El Niño–Southern Oscillation. An obstacle for reanalysis is the overestimation of error covariances if one chooses to employ ensemble filters with multiplicative covariance inflation (Whitaker et al. 2004).

Overestimation of error covariances also occurs in modern numerical weather forecast schemes for which the upper lid of the vertical domain is constantly pushed toward higher and higher levels to incorporate the mesosphere, with the aim to better resolve processes in the polar stratosphere (e.g., see Polavarapu et al. 2005; Sankey et al. 2007; Eckermann et al. 2009). The energy spectrum in the mesosphere is, contrary to the troposphere, dominated by gravity waves. The high variability associated with these waves causes very large error covariances in the mesosphere which can be 2 orders of magnitude larger than at lower levels (Polavarapu et al. 2005), rendering the filter very sensitive to small uncertainties in the forecast covariances. Being able to control the variances of mesospheric gravity waves is therefore a big challenge.

The question we address in this work is how can the statistical information available for some data, which are otherwise not observable, be effectively incorporated in data assimilation to control the potentially high error covariances associated with the data void. We will develop a framework to modify the familiar Kalman filter (e.g., see Evensen 2006; Simon 2006) for partial observations with only limited information on the mean and variance, with the effect that the error covariance of the unresolved variables cannot exceed their climatic variance and their mean is controlled by driving it toward the climatological value.

The paper is organized as follows. In section 2 we will introduce the dynamical setting and briefly describe the ensemble transform Kalman filter (ETKF), a special form of an ensemble square root filter. In section 3 we will derive the variance-limiting Kalman filter (VLKF) in a variational setting. In section 4 we illustrate the VLKF with a simple linear toy model for which the filter can be analyzed analytically. We will extract the parameter regimes where we expect VLKF to yield optimal performance. In section 5 we apply the VLKF to the 40-dimensional Lorenz-96 system (Lorenz 1996) and present numerical results illustrating the advantage of such a variance-limiting filter. We conclude the paper with a discussion in section 6.

## 2. Setting

We consider an *N*-dimensional dynamical system whose dynamics is given by

d**z**/d*t* = **f**(**z**),   (1)

with the state variable **z** = (**x**, **y**), where dim(**x**) = *n*, dim(**y**) = *m*, and *n* + *m* = *N*. Here **x** shall denote those variables for which direct observations are available, and **y** shall denote those variables for which only some integrated or statistical information is available. We coin the former *observables* and the latter *pseudo-observables*. We do not incorporate model error here and assume that (1) describes the truth. We apply the notation of Ide et al. (1997) unless stated explicitly otherwise.

We first describe the observations of **x**. We assume that observations of the designated variables **x** are given at equally spaced discrete observation times *t*_{i} with the observation interval Δ*t*_{obs}. Since it is assumed that there is no model error, the observations at times *t*_{i} = *i*Δ*t*_{obs} are given by the truth contaminated with independent and identically distributed observational Gaussian noise with covariance **R**_{o}, extracted from the state by a linear observation operator **H**.

We further introduce an operator **w** that extracts the pseudo-observables **y** from the state **z**. We assume that the pseudo-observables have variance **A**_{clim} and mean **a**_{clim}, given by their climatology.

The model forecast state **z**_{f} at each observation interval is obtained by integrating the state variable with the full nonlinear dynamics (1) over the time interval Δ*t*_{obs}. The background (or forecast) involves an error with covariance **P**_{f}.

Data assimilation aims to find the best estimate of the current state given the forecast **z**_{f} with error covariance **P**_{f}, the observations **y**_{o} of the designated variables with error covariance **R**_{o}, and the pseudo-observations **a**_{clim} with error covariance **R**_{w}.

### Ensemble Kalman filter

An ensemble with *k* members **z**_{j}, *j* = 1, …, *k*, is propagated by the full nonlinear dynamics (1). The ensemble is split into its mean **z̄** = (1/*k*) Σ_{j} **z**_{j} and its deviations, collected in the matrix of ensemble anomalies **Z**′, scaled such that **P** = **Z**′**Z**′^{T}. Variables at times *t* = *t*_{i} − *ϵ* are evaluated before taking the observations (and/or pseudo-observations) into account in the analysis step, and variables at times *t* = *t*_{i} + *ϵ* are evaluated after the analysis step, when the observations (and/or pseudo-observations) have been taken into account. In the first step of the analysis the forecast mean **z̄**_{f} is updated to the analysis mean

**z̄**_{a} = **z̄**_{f} − **K**_{o}(**Hz̄**_{f} − **y**_{o}) − **K**_{w}(**wz̄**_{f} − **a**_{clim}),   (3)

where the Kalman gain matrices are defined as

**K**_{o} = **P**_{a}**H**^{T}**R**_{o}^{−1},  **K**_{w} = **P**_{a}**w**^{T}**R**_{w}^{−1},   (4)

with the analysis covariance

**P**_{a} = (**P**_{f}^{−1} + **H**^{T}**R**_{o}^{−1}**H** + **w**^{T}**R**_{w}^{−1}**w**)^{−1}.   (5)

To calculate an analysis ensemble consistent with **P**_{a}, the forecast anomalies are transformed via **Z**′_{a} = **Z**′_{f}**S**. Since *k* ≪ *N* we shall use the ETKF. Note that the transformation matrix **S** is not uniquely determined for *k* < *N*. It can be obtained either by using continuous Kalman filters (Bergemann et al. 2009) or directly (Wang et al. 2004) by

**S** = **C**(**I** + **Γ**)^{−1/2}**C**^{T}.

Here **C** contains the eigenvectors and **Γ** the nonzero eigenvalues of the ensemble-space matrix **Z**′_{f}^{T}(**H**^{T}**R**_{o}^{−1}**H** + **w**^{T}**R**_{w}^{−1}**w**)**Z**′_{f}, the latter given by the leading (*k* − 1) × (*k* − 1) block of the diagonal matrix of its eigenvalues.

A new forecast is then obtained by propagating the analysis ensemble with the full nonlinear dynamics (1) until the next observation time, when a new analysis cycle begins.

In the next section we will determine how the error covariance **R**_{w} of the variance constraint can be chosen to control the error covariance of the pseudo-observables.
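As a concrete illustration, the standard ETKF analysis step described above can be sketched in a few lines of NumPy. This is a minimal sketch for the observations-only case (the pseudo-observation term enters analogously through an additional operator and error covariance); the function name and interface are ours, not the paper's implementation.

```python
import numpy as np

def etkf_analysis(Z, y_o, H, R):
    """One ETKF analysis step (sketch).

    Z   : (N, k) forecast ensemble
    y_o : (p,)   observation vector
    H   : (p, N) observation operator
    R   : (p, p) observational error covariance
    """
    N, k = Z.shape
    z_mean = Z.mean(axis=1)
    A = (Z - z_mean[:, None]) / np.sqrt(k - 1)   # scaled anomalies, P_f = A A^T
    Rinv = np.linalg.inv(R)
    # ensemble-space matrix A^T H^T R^-1 H A = C Gamma C^T
    HA = H @ A
    Gamma, C = np.linalg.eigh(HA.T @ Rinv @ HA)
    # symmetric square-root transform S = C (I + Gamma)^(-1/2) C^T (Wang et al. 2004)
    S = C @ np.diag(1.0 / np.sqrt(1.0 + Gamma)) @ C.T
    # analysis covariance P_a = A S S^T A^T (S is symmetric)
    Pa = A @ S @ S @ A.T
    K = Pa @ H.T @ Rinv                          # Kalman gain
    z_a = z_mean + K @ (y_o - H @ z_mean)        # analysis mean
    Z_a = z_a[:, None] + np.sqrt(k - 1) * (A @ S)  # analysis ensemble
    return Z_a, z_a, Pa
```

Because the transform leaves the null direction of the anomalies invariant, the analysis ensemble reproduces the analysis mean and covariance exactly.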

## 3. Derivation of the variance-limiting Kalman filter

One may naively believe that the error covariance of the pseudo-observables could simply be capped at its climatic value; instead we derive a consistent filter in a variational setting. Given the forecast **z**_{f} and associated error covariance **P**_{f}, consider the cost function

*J*(**z**) = ½(**z** − **z**_{f})^{T}**P**_{f}^{−1}(**z** − **z**_{f}) + ½(**Hz** − **y**_{o})^{T}**R**_{o}^{−1}(**Hz** − **y**_{o}) + ½(**wz** − **a**_{clim})^{T}**R**_{w}^{−1}(**wz** − **a**_{clim}),

where **z** is the state variable at one observation time *t*_{i} = *i*Δ*t*_{obs}. Note that the part involving the pseudo-observables corresponds to the notion of weak constraints in variational data assimilation (Sasaki 1970; Zupanski 1997; Neef et al. 2006). The analysis is then propagated with the full nonlinear dynamics to *t*_{i+1} to yield the next forecast.

The minimum **∇**_{z}*J*(**z**) = 0 is readily evaluated and yields (3) for the analysis mean, with the analysis covariance (5). The variance-limiting filter is obtained by choosing **R**_{w} such that the analysis covariance of the pseudo-observables does not exceed the climatic covariance **A**_{clim}; this requires the associated diagonal entries **D**_{ii} ≥ 0, and the variance constraint is switched off for those components whose forecast covariance is already below the climatic value.
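The mechanism can be illustrated in the scalar case: choose the weight of the pseudo-observation term so that the analysis variance is capped at the climatic value, and switch the constraint off when the forecast variance is already below it. This is a simplified scalar sketch of the idea, not the paper's matrix-valued derivation; the function name is ours.

```python
import numpy as np

def limited_analysis_variance(p_f, a_clim):
    """Scalar sketch of the variance-limiting idea: if the forecast
    variance p_f of a pseudo-observable exceeds its climatic variance
    a_clim, choose the pseudo-observation error variance r_w so that the
    analysis variance (1/p_f + 1/r_w)^(-1) equals a_clim; otherwise the
    constraint is switched off (the D_ii >= 0 condition)."""
    if p_f <= a_clim:
        return p_f                                # constraint inactive
    r_w = 1.0 / (1.0 / a_clim - 1.0 / p_f)        # positive since p_f > a_clim
    return 1.0 / (1.0 / p_f + 1.0 / r_w)          # equals a_clim by construction
```

For example, a forecast variance of 4 with climatic variance 1 is limited to exactly 1, while a forecast variance of 0.5 is left untouched.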

## 4. Analytical linear toy model

To study analytically under what circumstances the variance-limiting filter improves skill, we consider a linear toy model of the form

d**x** = (−**Γ**_{x}**x** + **Λy**) d*t* + **σ**_{x} d**W**_{x},
d**y** = −**Γ**_{y}**y** d*t* + **σ**_{y} d**W**_{y},

where the coupling matrices **Λ** are all skew symmetric, and **σ**_{x,y} and **Γ**_{x,y} are symmetric. We assume here for simplicity that

**Γ**_{x} = *γ*_{x}**I**,  **Γ**_{y} = *γ*_{y}**I**,  **σ**_{x,y} = *σ*_{x,y}**I**,

with the identity matrix **I** and equal dimensions of **x** and **y**. We assume that we have access to observations of the variable **x** at discrete observation times *t*_{i} = *i*Δ*t*_{obs}, but have only statistical information about the variable **y**, namely knowledge of the climatic mean *μ*_{clim} and the climatic covariance of **y**. The noise is of Ornstein–Uhlenbeck type (Gardiner 2004), and may represent either model error or parameterize highly chaotic nonlinear dynamics. Without loss of generality, the coupling is chosen such that the **y** dynamics drives the **x** dynamics but not vice versa. The form of the coupling is not essential for our argument, and it may be oscillatory or damping with **Λ** = *λ***I**. We write this system in the more compact form

d**z** = **Az** d*t* + **σ** d**W**   (12)

for the state **z** = (**x**, **y**).

The solution of (12), for our choice of the matrices, is a Gaussian process with explicitly computable mean and covariance **Σ**(*t*). The climatic mean and the climatic covariance are obtained in the limit *t* → ∞ as the mean relaxes to zero and the covariance approaches its stationary value

**Σ**_{clim} = lim_{t→∞} **Σ**(*t*);   (13)

in particular, the climatic covariance of the pseudo-observables is *σ*_{y}^{2}/(2*γ*_{y})**I**. In order for the stochastic process (12) to have a stationary density and for **Σ**(*t*) to be a positive definite covariance matrix for all *t*, the coupling has to be sufficiently small, with *λ*^{2} < 4*γ*_{x}*γ*_{y}. Note that the skew product nature of the system (12) is not special, in the sense that a nonskew product structure where **x** couples back to **y** would simply lead to a renormalization of the parameters. Moreover, although there is no dynamical back coupling from **x** to **y**, the Kalman filter generically introduces back coupling of all variables through the inversion of the covariance matrices [cf. (5)].
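Assuming the unobserved variable follows the Ornstein–Uhlenbeck equation d**y** = −*γ*_{y}**y** d*t* + *σ*_{y} d**W** as sketched above, its climatic variance *σ*_{y}^{2}/(2*γ*_{y}) can be checked by direct simulation. The Euler–Maruyama discretization and the parameter values below are our own illustration.

```python
import numpy as np

# Sketch: simulate dy = -gamma_y * y dt + sigma_y dW with Euler-Maruyama
# and check that the long-time (climatic) variance approaches
# sigma_y**2 / (2 * gamma_y).
rng = np.random.default_rng(42)
gamma_y, sigma_y = 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 20_000, 2_000   # 20 time units >> 1/(2*gamma_y)

y = np.zeros(n_paths)
for _ in range(n_steps):
    y += -gamma_y * y * dt + sigma_y * np.sqrt(dt) * rng.standard_normal(n_paths)

var_clim = sigma_y**2 / (2.0 * gamma_y)      # = 0.125 for these parameters
print(y.var(), var_clim)
```

The empirical ensemble variance agrees with the analytical climatic value up to sampling error.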

We will now investigate the variance-limiting Kalman filter for this toy model. In particular, we will first analyze under what conditions the variance constraint is actually switched on. Here **z**_{j}(*t*_{i+1}) denotes the forecast of ensemble member *j* at time *t*_{i+1} = *t*_{i} + Δ*t*_{obs} = (*i* + 1)Δ*t*_{obs} before the analysis, propagated from its initial condition at the previous analysis. The covariance of the forecast ensemble differs from the covariance **Σ** of one trajectory with a nonrandom initial condition **z**_{0}. The difference is most pronounced for small observation intervals, when the covariance of the ensemble still retains the spread of the previous analysis whereas the single-trajectory covariance **Σ** starts from zero. In the long-time limit, both the ensemble covariance and **Σ** will approach the climatic covariance **Σ**_{clim} [cf. (13)].

In the following we restrict ourselves to the limit of small observation intervals Δ*t*_{obs} ≪ 1, in which we can Taylor expand the covariance **Σ**(Δ*t*_{obs}). This is consistent with our stationarity assumption. The lengthy algebra was performed with *Mathematica* (Wolfram Research, Inc. 2008), but is omitted from this paper.

Consider a constant inflation factor *δ* > 1 multiplying the forecast variance, and take the pseudo-observations of **y** to be **a**_{clim} = *μ*_{clim}. Then, upon using the definitions (10) and (11), we find that the error covariance of the pseudo-observables exceeds its climatic value, so that the variance constraint is switched on, provided the observation interval Δ*t*_{obs} is sufficiently large. In particular, in the limit of small observation intervals one can determine, for *δ* > 1, the critical Δ*t*_{obs} above which the constraint becomes active as a function of the model parameters; this is expression (16). If no inflation is applied (i.e., *δ* = 1), the expression simplifies. Because 4*γ*_{x}*γ*_{y} − *λ*^{2} > 0, the critical observation interval Δ*t*_{obs} is smaller for nontrivial inflation with *δ* > 1 than if no variance inflation is incorporated. This is intuitive, because the variance inflation increases the instances in which the forecast variance exceeds the climatic value. Note that for large coupling *λ* or sufficiently small values of *γ*_{x}, (16) may not be consistent with the assumption of small observation intervals Δ*t*_{obs} ≪ 1.

We have checked analytically that the excess of the forecast error covariance over the climatic value grows with Δ*t*_{obs}, indicating that the frequency with which the variance constraint is switched on increases monotonically with the observation interval Δ*t*_{obs}, in the limit of small Δ*t*_{obs}. This has been verified numerically with the application of the VLKF to (12), and is illustrated in Fig. 1.

At this stage it is important to mention effects due to finite-size ensembles. For large observation intervals Δ*t*_{obs} → ∞ and large observational noise, the forecast covariance approaches the climatic covariance **Σ**_{clim}, implying positive semidefinite values of the variance constraint; strictly, however, this holds only in the joint limits Δ*t*_{obs} → ∞ and ensemble size *k* → ∞.

The analytical results obtained above are for the ideal case *k* → ∞. As mentioned in the introduction, in sparse observation networks finite ensemble sizes cause an overestimation of error covariances (Liu et al. 2008; Whitaker et al. 2009), implying that the forecast error covariance of the pseudo-observables may exceed its climatic value. We illustrate this by plotting the projected error covariance as a function of the ensemble size *k* for different observational noise variances. Here we used no inflation (i.e., *δ* = 1) in order to focus on the effect of finite ensemble sizes. It is clearly seen that the projected covariance decreases for large enough ensemble sizes; the variance asymptotes from above to its limiting value as *k* → ∞. For sufficiently small observational noise, the filter corrects too-large forecast error covariances by incorporating the observations into the analysis, leading to a decrease in the analysis error covariance.
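The finite-ensemble overestimation of the largest covariance eigenvalue can be reproduced with a toy sampling experiment (our own illustration, not the paper's setup): draw *k* samples from a distribution with identity covariance, so the true eigenvalues are all 1, and inspect the largest eigenvalue of the sample covariance.

```python
import numpy as np

# Sketch: finite ensembles systematically overestimate the largest
# eigenvalue of a covariance matrix.  True covariance is I_n (all
# eigenvalues equal 1); we average the maximal sample-covariance
# eigenvalue over many trials for increasing ensemble size k.
rng = np.random.default_rng(1)
n, trials = 10, 200

def mean_max_eig(k):
    vals = []
    for _ in range(trials):
        X = rng.standard_normal((n, k))
        C = X @ X.T / (k - 1)                # sample covariance estimate
        vals.append(np.linalg.eigvalsh(C).max())
    return np.mean(vals)

small, large = mean_max_eig(15), mean_max_eig(2000)
print(small, large)
```

The maximal eigenvalue is strongly inflated for small *k* and decreases toward the true value of 1 from above as *k* grows, mirroring the behavior described in the text.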

For large Δ*t*_{obs}, when the ensemble will have acquired the climatic mean and covariance, the VLKF and the ETKF will have equal skill. We now turn to the question under what conditions the VLKF is expected to yield improved skill compared to the standard ETKF. To this end we introduce as skill indicator the (squared) RMS error

*E* = ⟨‖**z**_{truth} − **z̄**_{a}‖_{W}^{2}⟩

between the truth **z**_{truth} and the ensemble mean analysis **z̄**_{a}, where the weight matrix *W* defines the norm ‖·‖_{W} and the angle brackets denote the average over realizations. Writing the analysis in terms of the truth **z**_{truth}(*t*) and performing the average over the realizations, we finally arrive at the expressions (19) and (23), which involve mutually independent normally distributed random variables *ξ*_{t}. We have numerically verified the validity of our assumptions on the statistics of these variables. Without loss of generality we set **a**_{clim} = 0, and we use our stationarity assumption to calculate the required covariances. Note that in the limit *k* → ∞ the random variables *ξ*_{t} drop out of (19) and (23).

Since our analytical expressions are valid only for small observation intervals Δ*t*_{obs}, we expect to predict skill improvement reliably only for small Δ*t*_{obs}. We therefore perform again a Taylor expansion in small Δ*t*_{obs} of the skill improvement.

We found that there is indeed skill improvement for *γ*_{y} → ∞ or *γ*_{x} → 0. This suggests that the skill is controlled by the ratio of the time scales of the observed and the unobserved variables: if the pseudo-observables relax much faster than the observed variables, the VLKF will exhibit superior performance over the ETKF. This can be intuitively understood, since 1/(2*γ*_{y}) is the time scale on which equilibrium (i.e., the climatic state) is reached for the pseudo-observables **y**. If the pseudo-observables have relaxed toward equilibrium within the observation interval Δ*t*_{obs}, their variance has acquired the climatic covariance and the variance constraint is accurate.

Furthermore, we found analytically that the skill improvement increases with increasing observational noise *R*_{obs} (at least in the small observation interval approximation); in particular, there is no skill improvement for *R*_{obs} = 0. The increase of skill with increasing observational noise can be understood phenomenologically in the following way. For *R*_{obs} = 0 the filter trusts the observations, which as a time series carry the climatic covariance. This implies that there is a realization of the Wiener process such that the analysis can be reproduced by a model with the true values of *γ*_{x,y} and *σ*_{x,y}. Similarly, this is the case in the other extreme *R*_{obs} → ∞, where the filter trusts the model. For 0 < *R*_{obs} < ∞ the analysis-reproducing system would have a larger *σ*_{x} than the true value. This slowed-down relaxation toward equilibrium of the observed variables can be interpreted as an effective decrease of the damping coefficient *γ*_{x}. This effectively increases the time scale separation between the observed and the unobserved variables, which was conjectured above to be beneficial for skill improvement.

As expected, the skill improves with increasing inflation factor *δ* > 1; the improvement is exactly linear in *δ* in the limit of small Δ*t*_{obs}. This is due to the variance inflation leading to an increase of the instances in which the variance constraint is switched on.

In Fig. 3 we present a comparison of the analytical results (19) and (23) with results from a numerical implementation of the ETKF and the VLKF for varying damping coefficient *γ*_{y}. Since *γ*_{y} controls the time scale of the **y** process, we cannot use the same Δ*t*_{obs} for a wide range of *γ*_{y} without violating the small observation interval approximation used in our analytical expressions. We therefore choose Δ*t*_{obs} as a function of *γ*_{y} such that the singular values of the first-order approximation of the forecast variance remain a good approximation for this Δ*t*_{obs}. For Fig. 3 we have Δ*t*_{obs} ∈ (0.005, 0.01) to preserve the validity of the Taylor expansion. Besides the increase of the skill with *γ*_{y}, Fig. 3 confirms the increase of the skill improvement with the inflation factor *δ* > 1.

We will see in the next section that the results we obtained for the simple linear toy model (12) hold as well for a more complicated higher-dimensional model, where the dynamic Brownian driving noise is replaced by nonlinear chaotic dynamics.

## 5. Numerical results for the Lorenz-96 system

As a nonlinear model we consider the Lorenz-96 system (Lorenz 1996)

d*z*_{i}/d*t* = *z*_{i−1}(*z*_{i+1} − *z*_{i−2}) − *z*_{i} + *F*,  *i* = 1, …, *D*,   (24)

with **z** = (*z*_{1}, …, *z*_{D}) and periodic boundary conditions *z*_{i+D} = *z*_{i}. This system is a toy model for midlatitude atmospheric dynamics, incorporating linear damping, forcing, and nonlinear transport. The dynamical properties of the Lorenz-96 system have been investigated, for example, in Lorenz and Emanuel (1998), Orrell and Smith (2003), and Gottwald and Melbourne (2005), and it has also been studied in the context of data assimilation (e.g., Ott et al. 2004; Fisher et al. 2005; Harlim and Majda 2010). We use *D* = 40 modes and set the forcing to *F* = 8. These parameters correspond to a strongly chaotic regime (Lorenz 1996). For these parameters one unit of time corresponds to 5 days in the earth's atmosphere, as calculated by calibrating the *e*-folding time of the asymptotic growth rate of the most unstable mode with a time scale of 2.1 days (Lorenz 1996). Assuming the length of a midlatitude belt to be about 30 000 km, the spatial scale corresponding to a discretization of the circumference of the earth along the midlatitudes into *D* = 40 grid points corresponds to a spacing between adjacent grid points *z*_{i} of approximately 750 km, roughly equaling the Rossby radius of deformation at midlatitudes. We estimated from simulations the advection velocity to be approximately 10.4 m s^{−1}, which compares well with typical wind velocities in the midlatitudes.

In the following we will investigate the effect of using the VLKF on improving the analysis skill when compared to a standard ensemble transform Kalman filter, and on stabilizing the filter and avoiding blow-up as discussed in Ott et al. (2004), Kepert (2004), and Harlim and Majda (2010). We perform twin experiments using a *k* = 41-member ETKF and VLKF with the same truth time series, the same set of observations, and the same initial ensemble. We have chosen an ensemble with *k* > *D* in order to eliminate the effect that a finite-size ensemble can only fit as many observations as the number of its ensemble members (Lorenc 2003). Here we want to focus on the effect of limiting the variance.

The system is integrated using the implicit midpoint rule (e.g., see Leimkuhler and Reich 2005) to a time *T* = 30 with a time step *dt* = 1/240. The total time of integration corresponds to an equivalent of 150 days, and the integration time step *dt* corresponds to half an hour. We measured the approximate climatic mean and standard deviation, *μ*_{clim} and *σ*_{clim}, from a long simulation up to time *T* = 2000, which corresponds roughly to 27.5 yr. Because of the symmetry of the system (24), the mean and the standard deviation are the same for all variables *z*_{i}; they are measured to be *μ*_{clim} = 2.34 and *σ*_{clim} = 3.63.
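The implicit midpoint rule advances *z*_{n+1} = *z*_{n} + *dt* **f**((*z*_{n} + *z*_{n+1})/2); a minimal sketch solving the implicit equation by fixed-point iteration (the text does not specify the nonlinear solver, so this choice is our assumption; a Newton iteration would serve equally well):

```python
import numpy as np

def implicit_midpoint_step(f, z, dt, iters=10):
    """One step of the implicit midpoint rule
    z_new = z + dt * f((z + z_new) / 2),
    with the implicit equation solved by fixed-point iteration
    started from an explicit Euler predictor."""
    z_new = z + dt * f(z)                       # predictor
    for _ in range(iters):
        z_new = z + dt * f(0.5 * (z + z_new))   # contraction for small dt
    return z_new
```

For the linear test problem f(z) = −z the scheme reproduces the exact midpoint update z(1 − dt/2)/(1 + dt/2), and for a pure rotation it conserves the quadratic invariant ‖z‖², which is the symplectic-like property that motivates its use for long integrations.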

The initial ensemble at *t* = 0 is drawn from a distribution with the climatic variance *σ*_{clim}^{2}. We prescribe Gaussian observational noise with standard deviation 0.25*σ*_{clim} and set the observational error covariance matrix accordingly.

We study first the performance of the filter and its dependence on the time between observations Î”*t*_{obs} and the proportion of the system observed 1/*N*_{obs}. Here *N*_{obs} = 2 means only every second variable is observed, *N*_{obs} = 4 only every fourth, and so on.
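Observing every *N*_{obs}-th variable corresponds to a selection matrix **H**; a sketch (which site the observed set starts at is our assumption, as the text does not specify it):

```python
import numpy as np

def observation_operator(D, N_obs):
    """Build the (D/N_obs, D) selection matrix H that observes every
    N_obs-th variable, starting (by assumption) at site 0."""
    idx = np.arange(0, D, N_obs)        # observed sites 0, N_obs, 2*N_obs, ...
    H = np.zeros((len(idx), D))
    H[np.arange(len(idx)), idx] = 1.0
    return H
```

Applying **H** to a state vector simply picks out the observed components; e.g., for *D* = 40 and *N*_{obs} = 4 it returns the 10 observed values.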

We have used a constant variance inflation factor *δ* = 1.05 for both filters. We note that the optimal inflation factor, at which the RMS error of the analysis is minimized, depends on Δ*t*_{obs} and *N*_{obs}: for Δ*t*_{obs} = 5/120 (5 h) and *N*_{obs} = 4 we find that *δ* = 1.06 produces minimal RMS errors for the VLKF and *δ* = 1.04 produces minimal RMS errors for the ETKF. For *δ* < 1.04, filter divergence occurs in the ETKF, so we chose *δ* = 1.05 as a compromise between controlling filter divergence and minimizing the RMS errors of the analysis.

Figure 4 shows a sample analysis using the ETKF with *N*_{obs} = 5 and Δ*t*_{obs} = 0.15; for the VLKF we use **a**_{clim} = *μ*_{clim}**e** and the climatic variance as the variance constraint.

As for the linear toy model (12), finite ensemble sizes exacerbate the overestimation of error covariances. In Fig. 6 the maximal singular value of the forecast error covariance projected onto the pseudo-observables is shown as a function of the ensemble size *k*. Again we use no inflation (i.e., *δ* = 1) in order to focus on the effect of finite ensemble sizes. The projected covariance clearly decreases for large enough ensemble sizes. However, here the limit of the maximal singular value for *k* → ∞ underestimates the climatic variance.

To quantify the skill of the filters we use the RMS error between the truth **z**_{truth} and the ensemble mean **z̄**_{a}, evaluated over the *L* = ⌊*T*/Δ*t*_{obs}⌋ analysis cycles, where the average is taken over 500 different realizations, and *D*_{o} ≤ *D* denotes the length of the vectors entering the norm (the full state, the observed variables, or the pseudo-observables). Table 1 lists the RMS errors as a function of *N*_{obs} and Δ*t*_{obs}. The increased RMS error for larger observation intervals Δ*t*_{obs} can be linked to the increased variance of the chaotic nonlinear dynamics generated during longer integration times between analyses. Figure 7 shows the average proportional improvement of the VLKF over the ETKF, obtained from the values of Table 1. Figure 7 shows that the skill improvement is greatest when the system is observed frequently. For large observation intervals Δ*t*_{obs} the ETKF and the VLKF yield very similar RMS errors. We checked that for large observation intervals Δ*t*_{obs} both filters still produce tracking analyses. Note that the observation intervals Δ*t*_{obs} considered here are all much smaller than the *e*-folding time of 2.1 days. The most significant improvement occurs when one-quarter of the system is observed, that is, for *N*_{obs} = 4, and for small observation intervals Δ*t*_{obs}. The dependency of the skill of the VLKF on the observation interval is consistent with our analytical findings in section 4.
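The skill measure can be sketched as follows (a simplified version with one realization and equal weights; the text additionally averages over 500 realizations and distinguishes observed from unobserved components via *D*_{o}):

```python
import numpy as np

def rms_error(truth, analysis):
    """Time-mean RMS error between the truth and the ensemble-mean
    analysis, both of shape (L, D_o) with L analysis cycles.
    A sketch of the skill measure used in the text."""
    truth = np.asarray(truth)
    analysis = np.asarray(analysis)
    return np.sqrt(np.mean((truth - analysis) ** 2))
```

For instance, an analysis offset from the truth by a constant 2 at every site and cycle yields an RMS error of exactly 2.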

RMS errors for (top) ETKF and (bottom) VLKF for different values of *N*_{obs} and observation interval Δ*t*_{obs}, averaged over 500 simulations, with observational noise of standard deviation 0.25*σ*_{clim}.

We have checked that the increase in skill as depicted in Fig. 7 is not sensitive to incomplete knowledge of the statistical properties of the pseudo-observables, by perturbing the prescribed climatology and monitoring the change in RMS error. We performed simulations where we drew **a**_{clim} independently from uniform distributions on (0.9**a**_{clim}, 1.1**a**_{clim}). We found that for parameters *N*_{obs} = 2, 4, 6; *η* = 0.05, 0.25, 0.5 [with *η* measuring the amount of the climatic variance used in the variance constraint]; and Δ*t*_{obs} = 0.025, 0.05, 0.25 (corresponding to 3, 6, and 30 h), over a number of simulations there was on average no more than a 7% difference in the analysis mean and in the singular values of the covariance matrices between the control run, where **a**_{clim} = *μ*_{clim}**e** is used, and the runs where the climatic statistics are simultaneously perturbed.

An interesting question is how the relative skill improvement is distributed over the observed and unobserved variables. This is illustrated in Figs. 8 and 9. In Fig. 8 we show the proportional skill improvement of the VLKF over the ETKF for the observed variables and the pseudo-observables, respectively. Figure 8 shows that the skill improvement is larger for the pseudo-observables than for the observables, which is to be expected. In Fig. 9 we show the actual RMS errors as functions of Δ*t*_{obs}. The VLKF exhibits an improved skill for the observed variables either for small observation intervals for all values of *N*_{obs}, or for all observation intervals when *N*_{obs} = 4, 5. We have, however, checked that the analysis is still tracking the truth reasonably well, and the discrepancy with the ETKF is not due to the analysis no longer tracking the truth. As expected, the RMS error asymptotes for large observation intervals Δ*t*_{obs} (not shown) to the standard deviation of the observational noise 0.25*σ*_{clim} ≈ 0.910 for the observables, and to the climatic standard deviation *σ*_{clim} = 3.63 for the pseudo-observables (not shown), albeit slightly reduced for small values of *N*_{obs} due to the impact of the surrounding observed variables (see Fig. 10).

Note that there is an order of magnitude difference between the RMS errors for the observables and the pseudo-observables for large *N*_{obs} (cf. Fig. 9). This suggests that the information of the observed variables does not travel too far away from the observational sites. However, the nonlinear coupling in the Lorenz-96 system (24) allows information from the observed components to influence the error statistics of the unobserved components. Therefore the RMS errors of pseudo-observables adjacent to observables are better than those far away from observables. Moreover, the specific structure of the nonlinearity introduces a translational symmetry breaking (one may think of the nonlinearity as a finite-difference approximation of an advection term *z z*_{x}), which causes those pseudo-observables to the right of an observable to have a more reduced RMS error than those to the left of an observable. This is illustrated in Fig. 10, where the RMS error is shown for each site when only one site is observed. The advective time scale of the Lorenz-96 system is much smaller than Δ*t*_{obs}, which explains why the skill is not equally distributed over the sites and why, especially for large values of *N*_{obs}, we observe a big difference between the site-averaged skills of the observed and unobserved variables.

In Fig. 11 we show how the RMS error behaves as a function of the observational noise level. We see that for *N*_{obs} = 4, VLKF always has a smaller RMS error than ETKF.

The results confirm again the findings from our analysis of the toy model in section 4: the VLKF yields its best performance for small observation intervals Δ*t*_{obs} and for large noise levels. For large observation intervals the ETKF and the VLKF perform equally well, since then the chaotic model dynamics will have led the ensemble to acquire the climatic variance during the time of propagation.

In Ott et al. (2004) it was observed that if not all variables *z*_{i} are observed, the Kalman filter diverges, exhibiting blow-up. Similar behavior was observed in Harlim and Majda (2010). In Ott et al. (2004) the authors suggested that the sparsity of observations leads to an inhomogeneous background error, which causes an underestimation of the error covariance. Here we study this catastrophic blow-up divergence (as opposed to filter divergence, when the analysis diverges from the truth) and its dependence on the time between observations Δ*t*_{obs} and the proportion of the system observed, 1/*N*_{obs}. We note that blow-up divergence appears only in the case of sufficiently small observational noise and moderate values of Δ*t*_{obs}. Once Δ*t*_{obs} is large enough (in fact, larger than the *e*-folding time corresponding to the most unstable Lyapunov exponent, in our case 2.1 days) we notice that no catastrophic divergence occurs, independent of *N*_{obs}. This probably occurs because for large observation intervals the ensemble acquires enough variance through the nonlinear propagation. We prescribe Gaussian observational noise of the order of 5% of the climatological standard deviation *σ*_{clim}, and set the observational error covariance matrix accordingly. The initial ensemble at *t* = 0 is again drawn from a distribution with the climatic variance.

To study the performance of the VLKF when blow-up occurs in ETKF simulations, we count the number *N*_{b} of blow-ups that occur before a total of 100 simulations have terminated without blow-up. The proportion of blow-ups for the respective filter is then given by *N*_{b}/(*N*_{b} + 100). We tabulate this proportion in Table 2 for the ETKF and the VLKF, respectively, and the proportional improvement in Table 3. The ×s in the tables represent cases where no successful simulations could be obtained due to blow-up.

_{b}Proportion of catastrophically diverging simulations with (top) ETKF and (bottom) VLKF for different values of *N*_{obs} and observation interval Î”*t*_{obs}. Observational noise with

Table 3. Proportional improvement of VLKF over ETKF, calculated as the ratio of the corresponding values from Table 2.

Both filters suffer from severe filter instability for *N*_{obs} = 6 (i.e., for very sparse observational networks) at small observation intervals Δ*t*_{obs}. No blow-up occurs for either filter when every variable is observed. Note the reduction in occurrences of blow-ups for large observation intervals Δ*t*_{obs}, as discussed above. We have checked that for all *N*_{obs} there is no blow-up for ETKF (and VLKF) for sufficiently large Δ*t*_{obs} (not shown); the larger *N*_{obs}, the smaller the threshold value of Δ*t*_{obs} beyond which no blow-ups occur. Collapse is most prominent for ETKF (and for VLKF, but to a much lesser extent) for larger values of *N*_{obs} and at intermediate observation intervals that depend on *N*_{obs}. Tables 2 and 3 clearly show that incorporating information about the pseudo-observables strongly increases the stability of the filter and suppresses blow-up. However, we note that despite the gain in stability VLKF has a skill less than the purely observational skill in the cases when blow-up occurs for ETKF, because the solutions become nontracking. Further research is under way to improve on this in the VLKF framework.

The fact that incorporating information about the variance of the unobserved variables improves the stability of the filter is in accordance with the interpretation of filter divergence of sparse observational networks provided in Ott et al. (2004).

## 6. Discussion

We have developed a framework to include information about the variance of unobserved variables in a sparse observational network. The filter is designed to control overestimation of error covariances typical in sparse observation networks, and limits the posterior analysis covariance of the unresolved variables to stay below their climatic variance. We have done so in a variational setting and found a relationship between the error covariance of the variance constraint

We illustrated the beneficial effects of the variance-limiting filter in improving the analysis skill when compared to the standard ensemble square root Kalman filter. We expect the variance-limiting constraint to improve data assimilation for ensemble Kalman filters when finite-size effects of too small ensembles lead to overestimated error covariances, in particular in sparse observational networks. Specifically, we found that the skill improves for small observation intervals Δ*t*_{obs} and sufficiently large observational noise, with substantial skill improvement for both observed and unobserved variables. These effects can be understood with a simple linear toy model that allows for an analytical treatment. We further established numerically that VLKF reduces the probability of catastrophic filter divergence and improves the stability of the filter when compared to the standard ensemble square root Kalman filter.
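The core variance-limiting idea can be illustrated with a minimal sketch, under the simplifying assumption that the unresolved variables are treated coordinatewise, so the constraint reduces to capping each sample variance at its climatological value. The full filter of course imposes the constraint variationally on the covariance matrix; this is only a caricature of that step.

```python
import numpy as np

def limit_variance(ensemble, clim_var):
    """Rescale each coordinate of an analysis ensemble so its sample
    variance does not exceed the climatological variance.

    ensemble : (n_members, n_vars) array of analysis ensemble members
    clim_var : (n_vars,) climatological variances of the unresolved variables
    """
    mean = ensemble.mean(axis=0)
    var = ensemble.var(axis=0, ddof=1)
    # Shrink deviations only where the ensemble variance exceeds climatology;
    # coordinates already below the limit are left untouched (scale = 1).
    scale = np.sqrt(np.minimum(1.0, clim_var / np.maximum(var, 1e-12)))
    return mean + (ensemble - mean) * scale

rng = np.random.default_rng(1)
ens = rng.normal(0.0, 3.0, size=(50, 4))          # overdispersed ensemble
capped = limit_variance(ens, clim_var=np.ones(4))  # cap variance at 1
```

Because only the deviations from the ensemble mean are rescaled, the analysis mean is preserved exactly while the spread is limited, which mirrors the intent of the variance constraint described above.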

We remark that the idea of the variance-limiting Kalman filter is not restricted to ensemble Kalman filters, but can also be used to modify the extended Kalman filter. However, for the examples we used here the nonlinearities were too strong and the extended Kalman filter did not yield satisfactory results, even in the variance-limiting formulation.

The effect of the variance-limiting filter to control unrealistically large error covariances of the poorly resolved variables due to finite ensemble sizes may find useful applications. We mention here that the variance constraint is able to adaptively damp unrealistic excitation of ensemble spread in underresolved spatial regions due to inappropriate uniform inflation. This may be an alternative to the spatially adaptive schemes that were recently developed (Anderson 2007; Li et al. 2009). In addition, it is known that localization of covariance matrices in EnKF leads to imbalance in the analyzed fields (e.g., see Houtekamer and Mitchell 2005 and Kepert 2009 for recent studies). Filter localization typically excites unwanted gravity waves that, when uncontrolled, can substantially degrade filter performance. One may construct balance constraints as pseudo-observations and thereby potentially reduce this undesired aspect of covariance localization. As more specific applications, we mention climate reanalysis and data assimilation for the mesosphere. It would be interesting to see how the proposed variance-limiting filter can be used in climate reanalysis schemes to deal with the vertical sparsity of observational data and the less dense observation network in the Southern Hemisphere in the preradiosonde era (see Whitaker et al. 2004). One would need to establish, though, whether the historical observation intervals Δ*t*_{obs} are sufficiently small to allow for a skill improvement. Similarly, it may help to control the dynamically dominant gravity wave activity in the mesosphere as the upper lid of models is pushed farther and farther up (e.g., see Polavarapu et al. 2005). However, a word of caution is required here. In some atmospheric data assimilation problems, it is not at all uncommon to have an ensemble prior variance for certain variables that is significantly larger than the climatological variance, when the atmosphere is locally far away from equilibrium. One relevant example would be in the vicinity of strong fronts over the Southern Ocean. In such a case, it may not be appropriate to limit the variance to the climatological value.

In this work we have studied systems where for sufficiently large observation intervals Δ*t*_{obs} the variables acquire their true climatological mean and variance when the model is run. In particular we have not included model error. It would be interesting to see whether the variance-limiting filter can help to control model error in the case that the free-running model would produce unrealistically large forecast covariances. Usually numerical schemes underestimate error covariances, but this is often caused by severe divergence damping (Durran 1999), which is artificially introduced to the model to control unwanted gravity wave activity and to stabilize the numerical scheme. The stabilization may then be achieved with a much smaller amount of divergence damping by implementing the variance-limiting constraint in the data assimilation procedure. The VLKF would in this case act as an effective adaptive damping scheme, counteracting the model error.

## Acknowledgments

We thank Craig Bishop and Jeffrey Kepert for pointing us to the possible application of simulations involving the mesosphere. We also thank the editor and three anonymous reviewers for valuable comments. GAG acknowledges support by the ARC.

## REFERENCES

Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. *Mon. Wea. Rev.*, **129**, 2884–2903.

Anderson, J. L., 2007: An adaptive covariance inflation error correction algorithm for ensemble filters. *Tellus*, **59A**, 210–224.

Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. *Tellus*, **61A**, 72–83.

Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. *Mon. Wea. Rev.*, **127**, 2741–2758.

Bengtsson, L., and Coauthors, 2007: The need for a dynamical climate reanalysis. *Bull. Amer. Meteor. Soc.*, **88**, 495–501.

Bergemann, K., G. Gottwald, and S. Reich, 2009: Ensemble propagation and continuous matrix factorization algorithms. *Quart. J. Roy. Meteor. Soc.*, **135**, 1560–1572.

Bocquet, M., C. A. Pires, and L. Wu, 2010: Beyond Gaussian statistical modeling in geophysical data assimilation. *Mon. Wea. Rev.*, **138**, 2997–3023.

Buizza, R., M. Miller, and T. N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF Ensemble Prediction System. *Quart. J. Roy. Meteor. Soc.*, **125**, 2887–2908.

Charron, M., G. Pellerin, L. Spacek, P. L. Houtekamer, N. Gagnon, H. L. Mitchell, and L. Michelin, 2010: Toward random sampling of model error in the Canadian ensemble prediction system. *Mon. Wea. Rev.*, **138**, 1877–1901.

Compo, G. P., and Coauthors, 2011: The twentieth century reanalysis project. *Quart. J. Roy. Meteor. Soc.*, **137**, 1–28, doi:10.1002/qj.776.

Durran, D. R., 1999: *Numerical Methods for Wave Equations in Geophysical Fluid Dynamics*. Springer, 482 pp.

Eckermann, S. D., and Coauthors, 2009: High-altitude data assimilation system experiments for the northern summer mesosphere season of 2007. *J. Atmos. Sol.-Terr. Phys.*, **71**, 531–551.

Ehrendorfer, M., 2007: A review of issues in ensemble-based Kalman filtering. *Meteor. Z.*, **16**, 795–818.

Evensen, G., 2006: *Data Assimilation: The Ensemble Kalman Filter*. Springer, 280 pp.

Fisher, M., M. Leutbecher, and G. A. Kelly, 2005: On the equivalence between Kalman smoothing and weak-constraint four-dimensional variational data assimilation. *Quart. J. Roy. Meteor. Soc.*, **131**, 3235–3246.

Gardiner, C. W., 2004: *Handbook of Stochastic Methods for Physics, Chemistry, and the Natural Sciences*. 3rd ed. Springer, 415 pp.

Golub, G. H., and C. F. Van Loan, 1996: *Matrix Computations*. 3rd ed. The Johns Hopkins University Press, 728 pp.

Gottwald, G. A., and I. Melbourne, 2005: Testing for chaos in deterministic systems with noise. *Physica D*, **212**, 100–110.

Hamill, T. M., and J. S. Whitaker, 2005: Accounting for the error due to unresolved scales in ensemble data assimilation: A comparison of different approaches. *Mon. Wea. Rev.*, **133**, 3132–3147.

Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background covariance estimates in an ensemble Kalman filter. *Mon. Wea. Rev.*, **129**, 2776–2790.

Harlim, J., and A. J. Majda, 2010: Catastrophic filter divergence in filtering nonlinear dissipative systems. *Comm. Math. Sci.*, **8**, 27–43.

Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. *Mon. Wea. Rev.*, **126**, 796–811.

Houtekamer, P. L., and H. L. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. *Mon. Wea. Rev.*, **129**, 123–136.

Houtekamer, P. L., and H. L. Mitchell, 2005: Ensemble Kalman filtering. *Quart. J. Roy. Meteor. Soc.*, **131**, 3269–3289.

Houtekamer, P. L., H. L. Mitchell, G. Pellerin, M. Buehner, M. Charron, L. Spacek, and B. Hansen, 2005: Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations. *Mon. Wea. Rev.*, **133**, 604–620.

Houtekamer, P. L., H. L. Mitchell, and X. Deng, 2009: Model error representation in an operational ensemble Kalman filter. *Mon. Wea. Rev.*, **137**, 2126–2143.

Ide, K., P. Courtier, M. Ghil, and A. C. Lorenc, 1997: Unified notation for data assimilation: Operational, sequential and variational. *J. Meteor. Soc. Japan*, **75**, 181–189.

Kalnay, E., 2002: *Atmospheric Modeling, Data Assimilation and Predictability*. Cambridge University Press, 364 pp.

Kepert, J. D., 2004: On ensemble representation of the observation-error covariances in the ensemble Kalman filter. *Ocean Dyn.*, **54**, 561–569.

Kepert, J. D., 2009: Covariance localisation and balance in an ensemble Kalman filter. *Quart. J. Roy. Meteor. Soc.*, **135**, 1157–1176.

Leimkuhler, B., and S. Reich, 2005: *Simulating Hamiltonian Dynamics*. Cambridge University Press, 379 pp.

Li, H., E. Kalnay, and T. Miyoshi, 2009: Simultaneous estimation of covariance inflation and observation errors within an ensemble Kalman filter. *Quart. J. Roy. Meteor. Soc.*, **135**, 523–533.

Liu, J., E. J. Fertig, H. Li, E. Kalnay, B. R. Hunt, E. J. Kostelich, I. Szunyogh, and R. Todling, 2008: Comparison between local ensemble transform Kalman filter and PSAS in the NASA finite volume GCM: Perfect model experiments. *Nonlinear Processes Geophys.*, **15**, 645–659.

Lorenc, A. C., 2003: The potential of the ensemble Kalman filter for NWP: A comparison with 4DVAR. *Quart. J. Roy. Meteor. Soc.*, **129**, 3183–3203.

Lorenz, E. N., 1996: Predictability: A problem partly solved. *Predictability*, T. Palmer, Ed., European Centre for Medium-Range Weather Forecasts, 1–18.

Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. *J. Atmos. Sci.*, **55**, 399–414.

Mitchell, H. L., and P. L. Houtekamer, 2000: An adaptive ensemble Kalman filter. *Mon. Wea. Rev.*, **128**, 416–433.

Neef, L., S. M. Polavarapu, and T. G. Shepherd, 2006: Four-dimensional data assimilation and balanced dynamics. *J. Atmos. Sci.*, **63**, 1840–1850.

Orrell, D., and L. Smith, 2003: Visualising bifurcations in high dimensional systems: The spectral bifurcation diagram. *Int. J. Bifurcation Chaos*, **13**, 3015–3028.

Ott, E., B. Hunt, I. Szunyogh, A. Zimin, E. Kostelich, M. Corrazza, E. Kalnay, and J. Yorke, 2004: A local ensemble Kalman filter for atmospheric data assimilation. *Tellus*, **56A**, 415–428.

Pires, C. A., O. Talagrand, and M. Bocquet, 2010: Diagnosis and impacts of non-Gaussianity of innovations in data assimilation. *Physica D*, **239**, 1701–1717.

Polavarapu, S., T. G. Shepherd, Y. Rochon, and S. Ren, 2005: Some challenges of middle atmosphere data assimilation. *Quart. J. Roy. Meteor. Soc.*, **131**, 3513–3527.

Sankey, D., S. Ren, S. Polavarapu, Y. Rochon, Y. Nezlin, and S. Beagley, 2007: Impact of data assimilation filtering methods on the mesosphere. *J. Geophys. Res.*, **112**, D24104, doi:10.1029/2007JD008885.

Sasaki, Y., 1970: Some basic formalisms on numerical variational analysis. *Mon. Wea. Rev.*, **98**, 875–883.

Shutts, G. J., 2005: A stochastic kinetic energy backscatter algorithm for use in ensemble prediction systems. *Quart. J. Roy. Meteor. Soc.*, **131**, 3079–3102.

Simon, D. J., 2006: *Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches*. John Wiley & Sons, 552 pp.

Szunyogh, I., E. Kostelich, G. Gyarmati, D. J. Patil, B. Hunt, E. Kalnay, E. Ott, and J. Yorke, 2005: Assessing a local ensemble Kalman filter: Perfect model experiments with the National Centers for Environmental Prediction global model. *Tellus*, **57A**, 528–545.

Tippett, M. K., J. L. Anderson, C. H. Bishop, T. M. Hamill, and J. S. Whitaker, 2003: Ensemble square root filters. *Mon. Wea. Rev.*, **131**, 1485–1490.

Wang, X., C. H. Bishop, and S. J. Julier, 2004: Which is better, an ensemble of positive–negative pairs or a centered spherical simplex ensemble? *Mon. Wea. Rev.*, **132**, 1590–1605.

Whitaker, J. S., G. P. Compo, X. Wei, and T. M. Hamill, 2004: Reanalysis without radiosondes using ensemble data assimilation. *Mon. Wea. Rev.*, **132**, 1190–1200.

Whitaker, J. S., G. P. Compo, and J. N. Thépaut, 2009: A comparison of variational and ensemble-based data assimilation systems for reanalysis of sparse observations. *Mon. Wea. Rev.*, **137**, 1991–1999.

Wolfram Research, Inc., 2008: Mathematica Version 7.0. Wolfram Research, Inc., Champaign, IL. [Available online at http://www.wolfram.com/mathematica/.]

Zupanski, D., 1997: A general weak constraint applicable to operational 4DVar data assimilation systems. *Mon. Wea. Rev.*, **125**, 2274–2292.


^{2} We will use bold font for matrices and vectors, and regular font for scalars. It should be clear from the context whether a bold font refers to a matrix or a vector.
