HKU Scholars Hub (http://hub.hku.hk)
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Wed, 22 Jan 2020 00:04:02 GMT

- A Lagrangian dual approach to the single-source localization problem (http://hdl.handle.net/10722/250869)
Title: A Lagrangian dual approach to the single-source localization problem
Authors: Qi, Hou Duo; Xiu, Naihua; Yuan, Xiaoming
Abstract: The single-source localization problem (SSLP), which is nonconvex by its nature, appears in several important multidisciplinary fields such as signal processing and the global positioning system. In this paper, we cast SSLP as a Euclidean distance embedding problem and study a Lagrangian dual approach. It is proved that the Lagrangian dual problem must have an optimal solution under the generalized Slater condition. We provide a sufficient condition for the zero duality gap and establish the equivalence between the Lagrangian dual approach and the existing Generalized Trust-Region Subproblem (GTRS) approach studied by Beck ['Exact and Approximate Solutions of Source Localization Problems,' IEEE Trans. Signal Process., vol. 56, pp. 1770-1778, 2008]. We also reveal new implications of the assumptions made by the GTRS approach. Moreover, the Lagrangian dual approach has a straightforward extension to the multiple-source localization problem. Numerical simulations demonstrate that the Lagrangian dual approach can produce localizations of quality similar to the GTRS approach and can significantly outperform the well-known semidefinite programming solver SNLSDP for the multiple-source localization problem on the tested cases. © 1991-2012 IEEE.
Tue, 01 Jan 2013 00:00:00 GMT (http://hdl.handle.net/10722/250869)
- Forward-backward-based descent methods for composite variational inequalities (http://hdl.handle.net/10722/250868)
Title: Forward-backward-based descent methods for composite variational inequalities
Authors: He, Bingsheng; Yuan, Xiaoming
Abstract: We consider the monotone composite variational inequality (CVI) where the underlying mapping is formed as the sum of two monotone mappings. We combine the forward-backward and descent-direction ideas, and thus present a unified algorithmic framework of forward-backward-based descent methods for solving the CVI. A new iterate of such a method is generated in a prediction-correction fashion: the predictor is yielded by the forward-backward method, and the predictor is then corrected by a descent step. We derive some implementable forward-backward-based descent algorithms for some concrete cases of the CVI, and verify their numerical efficiency via preliminary numerical experiments. © 2013 Taylor and Francis Group, LLC.
Tue, 01 Jan 2013 00:00:00 GMT (http://hdl.handle.net/10722/250868)
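The prediction-correction idea described in this abstract can be illustrated with a minimal sketch. This is not the authors' exact scheme: the l1-regularized test problem, the step sizes `beta` and `alpha`, and all function names are illustrative assumptions. The predictor is one forward-backward step; the corrector moves along the direction from the current iterate to the predictor.

```python
import numpy as np

def soft_threshold(v, t):
    # backward (proximal) step for the l1 term: prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fb_descent(a, lam, beta=0.5, alpha=1.5, iters=200):
    # Illustrative CVI with mapping A + B, where A(x) = x - a is the smooth
    # (forward) part and B is the subdifferential of lam*||.||_1 (backward
    # part); the exact solution is the soft-thresholding of a.
    x = np.zeros_like(a)
    for _ in range(iters):
        # prediction: one forward-backward step
        x_tilde = soft_threshold(x - beta * (x - a), beta * lam)
        # correction: a descent step along the direction x - x_tilde
        x = x - alpha * (x - x_tilde)
    return x
```

For this separable test problem the iteration contracts coordinate-wise, so it reaches the known closed-form solution.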
- A customized proximal point algorithm for convex minimization with linear constraints (http://hdl.handle.net/10722/251261)
Title: A customized proximal point algorithm for convex minimization with linear constraints
Authors: He, Bingsheng; Yuan, Xiaoming; Zhang, Wenxing
Abstract: This paper demonstrates a customized application of the classical proximal point algorithm (PPA) to the convex minimization problem with linear constraints. We show that if the proximal parameter in metric form is chosen appropriately, the application of PPA can effectively exploit the simplicity of the objective function. The resulting subproblems can be easier than those of the augmented Lagrangian method (ALM), a benchmark method for the model under our consideration. The efficiency of the customized application of PPA is demonstrated by some image processing problems. © 2013 Springer Science+Business Media New York.
Tue, 01 Jan 2013 00:00:00 GMT (http://hdl.handle.net/10722/251261)
- On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm (http://hdl.handle.net/10722/251225)
Title: On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
Authors: Tao, Min; Yuan, Xiaoming
Abstract: © 2017 Springer Science+Business Media, LLC. The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero point of a maximal monotone operator, and show that the condition proposed by Rockafellar can also sufficiently ensure the linear convergence rate for this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, linear convergence results for this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar's condition to see how to ensure the linear convergence rate for some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, and the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251225)
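The generalized (relaxed) PPA studied in this abstract can be sketched on the simplest instance: finding a zero of an affine maximal monotone operator. The operator, the relaxation factor `gamma`, and the function names below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def relaxed_ppa(Q, b, lam=1.0, gamma=1.5, iters=200):
    # Relaxed PPA for finding a zero of the maximal monotone operator
    # F(x) = Q x - b with Q positive semidefinite; gamma in (0, 2) is the
    # relaxation factor (gamma = 1 recovers the classical PPA).
    n = Q.shape[0]
    x = np.zeros(n)
    R = np.linalg.inv(np.eye(n) + lam * Q)   # resolvent (I + lam*F)^{-1}, applied via R @ (x + lam*b)
    for _ in range(iters):
        p = R @ (x + lam * b)                # exact proximal (resolvent) step
        x = (1.0 - gamma) * x + gamma * p    # relaxation / correction step
    return x
```

Since the resolvent of a maximal monotone operator is firmly nonexpansive, any gamma in (0, 2) keeps the relaxed iteration convergent; here it converges linearly to the zero Q^{-1} b.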
- Convergence analysis of the direct extension of ADMM for multiple-block separable convex minimization (http://hdl.handle.net/10722/251247)
Title: Convergence analysis of the direct extension of ADMM for multiple-block separable convex minimization
Authors: Tao, Min; Yuan, Xiaoming
Abstract: © 2017 Springer Science+Business Media, LLC. Recently, the alternating direction method of multipliers (ADMM) has found many efficient applications in various areas; and it has been shown that its convergence is not guaranteed when it is directly extended to the multiple-block case of separable convex minimization problems where there are m ≥ 3 functions without coupled variables in the objective. This fact has given great impetus to investigating various conditions on both the model and the algorithm's parameters that can ensure the convergence of the direct extension of ADMM (abbreviated as "e-ADMM"). Apart from some results under very strong conditions (e.g., at least (m − 1) functions should be strongly convex) that are applicable to the generic case with a general m, some others concentrate on the special case of m = 3 under the relatively milder condition that only one function is assumed to be strongly convex. We focus on extending the convergence analysis from the case of m = 3 to the more general case of m ≥ 3. That is, we show the convergence of e-ADMM for the case of m ≥ 3 with the assumption that only (m − 2) functions are strongly convex, and establish its convergence rates in different scenarios, such as the worst-case convergence rates measured by iteration complexity and the globally linear convergence rate under stronger assumptions. Thus the convergence of e-ADMM for the general case of m ≥ 4 is proved; this result seems to be still unknown even though it is intuitive given the known result for the case of m = 3. Even for the special case of m = 3, our convergence results turn out to be more general than the existing results derived specifically for that case.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251247)
- Convergence rate analysis for the alternating direction method of multipliers with a substitution procedure for separable convex programming (http://hdl.handle.net/10722/251234)
Title: Convergence rate analysis for the alternating direction method of multipliers with a substitution procedure for separable convex programming
Authors: He, Bingsheng; Tao, Min; Yuan, Xiaoming
Abstract: © 2017 INFORMS. Recently, in He et al. [He BS, Tao M, Yuan XM (2012) Alternating direction method with Gaussian back substitution for separable convex programming. SIAM J. Optim. 22(2):313-340], we showed the first possibility of combining the Douglas-Rachford alternating direction method of multipliers (ADMM) with a Gaussian back substitution procedure for solving a convex minimization model with a general separable structure. This paper is a further study on this theme. We first derive a general algorithmic framework that combines ADMM with either a forward or a backward substitution procedure. Then, we show that the convergence of this framework can be easily proved from the contraction perspective, and that its local linear convergence rate is provable if a certain error bound condition is assumed. Without such an error bound assumption, we can estimate its worst-case convergence rate measured by the iteration complexity.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251234)
- Convergence analysis of Douglas-Rachford splitting method for "strongly + weakly" convex programming (http://hdl.handle.net/10722/251243)
Title: Convergence analysis of Douglas-Rachford splitting method for "strongly + weakly" convex programming
Authors: Guo, Ke; Han, Deren; Yuan, Xiaoming
Abstract: © 2017 Society for Industrial and Applied Mathematics. We consider the convergence of the Douglas-Rachford splitting method (DRSM) for minimizing the sum of a strongly convex function and a weakly convex function; this setting has various applications, especially in some sparsity-driven scenarios that aim to avoid the biased estimates which usually occur when convex penalties are used. Though the convergence of the DRSM has been well studied for the case where both functions are convex, results for cases involving nonconvex functions, including the "strongly + weakly" convex case, are still in their infancy. In this paper, we prove the convergence of the DRSM for the "strongly + weakly" convex setting under relatively mild assumptions compared with some existing work in the literature. Moreover, we establish the rate of asymptotic regularity and the local linear convergence rate in the asymptotic sense under some regularity conditions.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251243)
- Primal–dual hybrid gradient method for distributionally robust optimization problems (http://hdl.handle.net/10722/251250)
Title: Primal–dual hybrid gradient method for distributionally robust optimization problems
Authors: Liu, Yongchao; Yuan, Xiaoming; Zeng, Shangzhi; Zhang, Jin
Abstract: © 2017 Elsevier B.V. We focus on the discretization approach to distributionally robust optimization (DRO) problems and propose a numerical scheme originating from the primal–dual hybrid gradient (PDHG) method, which has recently been well studied in the convex optimization area. Specifically, we consider the cases where the ambiguity set of the discretized DRO model is defined through the moment condition and the Wasserstein metric, respectively. Moreover, we apply the PDHG to a portfolio selection problem modelled by DRO and verify its efficiency.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251250)
- An improved general extra-gradient method with refined step size for nonlinear monotone variational inequalities (http://hdl.handle.net/10722/250855)
Title: An improved general extra-gradient method with refined step size for nonlinear monotone variational inequalities
Authors: Xu, M. H.; Yuan, X. M.; Huang, Q. L.
Abstract: The extra-gradient method and its modified versions are direct methods for variational inequalities VI(Ω, F) that only need the value of the function F in the iterative process. This property makes extra-gradient-type methods very practical for some variational inequalities arising from the real world, in which the function F usually does not have any explicit expression and only its value can be observed and/or evaluated for a given variable. Generally, such observation and/or evaluation may be obtained via some costly experiments. From this point of view, reducing the number of times the value of F is observed in these methods is meaningful in practice. In this paper, a new strategy for computing the step size is proposed for the general extra-gradient method. With the new step-size strategy, the general extra-gradient method needs only a relatively minor amount of computation to obtain a new step size, and can achieve the purpose of saving evaluations of F in solving VI(Ω, F). Further, the convergence analysis of the new algorithm and the properties related to the step-size strategy are also discussed in this paper. Numerical experiments are given and show that the amount of computation spent on evaluating the function F in solving VI(Ω, F) can be reduced by about 12-25% with the new general extra-gradient method. © 2007 Springer Science+Business Media, Inc.
Mon, 01 Jan 2007 00:00:00 GMT (http://hdl.handle.net/10722/250855)
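The basic extra-gradient iteration that this abstract refines can be sketched as follows. The sketch uses a fixed step size rather than the paper's refined step-size strategy, and the test mapping, projection, and function names are illustrative assumptions.

```python
import numpy as np

def extragradient(F, proj, x0, tau=0.5, iters=300):
    # Basic extra-gradient method for VI(Omega, F): each iteration needs
    # two evaluations of F and two projections onto Omega.
    x = x0.copy()
    for _ in range(iters):
        x_pred = proj(x - tau * F(x))      # prediction (extra-gradient) step
        x = proj(x - tau * F(x_pred))      # correction step, using F at the predictor
    return x
```

The two F-evaluations per iteration are exactly the cost the paper's refined step-size rule tries to reduce when F is expensive to observe.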
- On the flexibility of block coordinate descent for large-scale optimization (http://hdl.handle.net/10722/251231)
Title: On the flexibility of block coordinate descent for large-scale optimization
Authors: Wang, Xiangfeng; Zhang, Wenjie; Yan, Junchi; Yuan, Xiaoming; Zha, Hongyuan
Abstract: © 2017 Elsevier B.V. We consider a large-scale minimization problem (not necessarily convex) with a non-smooth separable convex penalty. Problems of this form widely arise in many modern large-scale machine learning and signal processing applications. In this paper, we present a new perspective on parallel Block Coordinate Descent (BCD) methods. Specifically, we introduce the concept of a two-layered block-variable updating loop for parallel BCD methods in a modern computing environment comprised of multiple distributed computing nodes. The outer loop refers to the block-variable updating assigned to distributed nodes, and the inner loop involves the updating steps inside each node. Each loop can adopt either the Jacobi or the Gauss–Seidel update rule. In particular, we give detailed theoretical convergence analyses for two practical schemes, Jacobi/Gauss–Seidel and Gauss–Seidel/Jacobi, which embody two algorithms, respectively. Our new perspective and the underlying theoretical results help devise parallel BCD algorithms in a principled fashion, which in turn lends them a flexible implementation for BCD methods suited to the parallel computing environment. The effectiveness of the algorithmic framework is verified on the benchmark tasks of large-scale ℓ1-regularized sparse logistic regression and non-negative matrix factorization.
Mon, 01 Jan 2018 00:00:00 GMT (http://hdl.handle.net/10722/251231)
- An approximate proximal-extragradient type method for monotone variational inequalities (http://hdl.handle.net/10722/250900)
Title: An approximate proximal-extragradient type method for monotone variational inequalities
Authors: He, Bing Sheng; Yang, Zhen Hua; Yuan, Xiao Ming
Abstract: Proximal point algorithms (PPA) are attractive methods for monotone variational inequalities. The approximate versions of the PPA are more applicable in practice. A modified approximate proximal point algorithm (APPA) presented by Solodov and Svaiter [Math. Programming, Ser. B 88 (2000) 371-389] relaxes the inexactness criterion significantly. This paper presents an extended version of Solodov-Svaiter's APPA. Building the direction from the current iterate to the new iterate obtained by Solodov-Svaiter's APPA, the proposed method improves the progress made at each iteration by choosing the optimal step length along this direction. In addition, the inexactness restriction is relaxed further. A numerical example indicates the improvement of the proposed method. © 2004 Elsevier Inc. All rights reserved.
Thu, 01 Jan 2004 00:00:00 GMT (http://hdl.handle.net/10722/250900)
- Comparison of Two Kinds of Prediction-Correction Methods for Monotone Variational Inequalities (http://hdl.handle.net/10722/250847)
Title: Comparison of Two Kinds of Prediction-Correction Methods for Monotone Variational Inequalities
Authors: He, Bingsheng; Yuan, Xiaoming; Zhang, Jason J.Z.
Abstract: In this paper, we study the relationship between the forward-backward splitting method and the extra-gradient method for monotone variational inequalities. Both methods can be viewed as prediction-correction methods; the only difference is that they use different search directions in the correction step. Our analysis explains theoretically why the extra-gradient methods usually outperform the forward-backward splitting methods. We suggest some modifications of the two methods, and numerical results are given to verify the superiority of the modified methods.
Thu, 01 Jan 2004 00:00:00 GMT (http://hdl.handle.net/10722/250847)
- A sequential updating scheme of the Lagrange multiplier for separable convex programming (http://hdl.handle.net/10722/251221)
Title: A sequential updating scheme of the Lagrange multiplier for separable convex programming
Authors: Dai, Yu Hong; Han, Deren; Yuan, Xiaoming; Zhang, Wenxing
Abstract: © 2016 American Mathematical Society. The augmented Lagrangian method (ALM) is a benchmark for solving convex minimization problems with linear constraints. Solving the augmented subproblems over the primal variables can be regarded as sequentially providing inputs for updating the Lagrange multiplier (i.e., the dual variable). We consider the separable case of a convex minimization problem where its objective function is the sum of more than two functions without coupled variables. When applying the ALM to this case, at each iteration we can (sometimes must) split the resulting augmented subproblem in order to generate decomposed subproblems, which are often easy enough to have closed-form solutions. But the decomposition of the primal variables provides less accurate inputs for updating the Lagrange multiplier, and this accounts for the lack of convergence of such a decomposition scheme. To remedy this difficulty, we propose to update the Lagrange multiplier sequentially once each decomposed subproblem over the primal variables is solved. This scheme updates both the primal and dual variables in a Gauss-Seidel fashion. In addition to the exact version, which is useful enough for the case where the functions in the objective are all simple such that the decomposed subproblems all have closed-form solutions, we investigate an inexact version of this scheme which allows the decomposed subproblems to be solved approximately, subject to certain inexactness criteria. Some preliminary numerical results when the proposed scheme is applied to an image decomposition problem and an allocation problem, respectively, are reported.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251221)
- An Algorithmic Framework of Generalized Primal–Dual Hybrid Gradient Methods for Saddle Point Problems (http://hdl.handle.net/10722/251202)
Title: An Algorithmic Framework of Generalized Primal–Dual Hybrid Gradient Methods for Saddle Point Problems
Authors: He, Bingsheng; Ma, Feng; Yuan, Xiaoming
Abstract: © 2017, Springer Science+Business Media New York. The primal–dual hybrid gradient method (PDHG) originates from the Arrow–Hurwicz method, and it has been widely used to solve saddle point problems, particularly in image processing areas. With the introduction of a combination parameter, Chambolle and Pock proposed a generalized PDHG scheme with both theoretical and numerical advantages. It has been shown that, except for the special case where the combination parameter is 1, the PDHG cannot be cast into the proximal point algorithm framework because of the lack of symmetry in the matrix associated with the proximal regularization terms. The PDHG scheme is also nonsymmetric in the sense that one variable is updated twice while the other is updated only once at each iteration. These asymmetry features also explain why more theoretical issues remain challenging for generalized PDHG schemes; for example, the worst-case convergence rate of PDHG measured by the iteration complexity in a nonergodic sense is still missing. In this paper, we further consider how to generalize the PDHG and propose an algorithmic framework of generalized PDHG schemes for saddle point problems. This algorithmic framework allows the output of the PDHG subroutine to be further updated by correction steps with constant step sizes. We investigate the restrictions on these step sizes and conduct the convergence analysis for the algorithmic framework. The algorithmic framework turns out to include some existing PDHG schemes as special cases, and it immediately yields a class of new generalized PDHG schemes by choosing different step sizes for the correction steps. In particular, a completely symmetric PDHG scheme with golden-ratio step sizes is included. Theoretically, an advantage of the algorithmic framework is that the worst-case convergence rate measured by the iteration complexity can be established in both the ergodic and nonergodic senses.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251202)
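For reference, the PDHG scheme with combination parameter 1 (the Chambolle-Pock case mentioned in this abstract) can be sketched on a tiny l1-regularized model. The model, step sizes, and all names are illustrative assumptions, not the paper's generalized framework.

```python
import numpy as np

def pdhg(K, a, lam, sigma=0.5, tau=0.5, iters=2000):
    # PDHG (combination parameter 1) sketch for
    #   min_x 0.5*||x - a||^2 + lam*||K x||_1,
    # written as the saddle point problem
    #   min_x max_{||y||_inf <= lam} 0.5*||x - a||^2 + <K x, y>.
    # Step sizes must satisfy sigma * tau * ||K||^2 < 1.
    n, m = K.shape[1], K.shape[0]
    x, x_bar, y = np.zeros(n), np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # dual ascent step + projection onto the l_inf ball of radius lam
        y = np.clip(y + sigma * (K @ x_bar), -lam, lam)
        # primal descent step + proximal step of 0.5*||x - a||^2
        x_new = (x - tau * (K.T @ y) + tau * a) / (1.0 + tau)
        x_bar = 2.0 * x_new - x        # extrapolation (combination parameter 1)
        x = x_new
    return x
```

The asymmetry the abstract describes is visible here: the primal variable is used twice per iteration (x and the extrapolated x_bar), while the dual variable is used once.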
- Linearized primal-dual methods for linear inverse problems with total variation regularization and finite element discretization (http://hdl.handle.net/10722/251195)
Title: Linearized primal-dual methods for linear inverse problems with total variation regularization and finite element discretization
Authors: Tian, Wenyi; Yuan, Xiaoming
Abstract: © 2016 IOP Publishing Ltd. Linear inverse problems with total variation regularization can be reformulated as saddle-point problems; the primal and dual variables of such a saddle-point reformulation can be discretized in piecewise affine and constant finite element spaces, respectively. Thus, the well-developed primal-dual approach (a.k.a. the inexact Uzawa method) is conceptually applicable to such a regularized and discretized model. When the primal-dual approach is applied, the resulting subproblems may be highly nontrivial and it is necessary to discuss how to tackle them and thus make the primal-dual approach implementable. In this paper, we suggest linearizing the data-fidelity quadratic term of the hard subproblems so as to obtain easier ones. A linearized primal-dual method is thus proposed. Inspired by the fact that the linearized primal-dual method can be explained as an application of the proximal point algorithm, a relaxed version of the linearized primal-dual method, which can often accelerate the convergence numerically with the same order of computation, is also proposed. The global convergence and worst-case convergence rate measured by the iteration complexity are established for the new algorithms. Their efficiency is verified by some numerical results.
Fri, 01 Jan 2016 00:00:00 GMT (http://hdl.handle.net/10722/251195)
- Preferences for travel time under risk and ambiguity: Implications in path selection and network equilibrium (http://hdl.handle.net/10722/251183)
Title: Preferences for travel time under risk and ambiguity: Implications in path selection and network equilibrium
Authors: Qi, Jin; Sim, Melvyn; Sun, Defeng; Yuan, Xiaoming
Abstract: © 2016. In this paper, we study the preferences for uncertain travel times whose probability distributions may not be fully characterized. In evaluating an uncertain travel time, we explicitly distinguish between risk, where the probability distribution is precisely known, and ambiguity, where it is not. In particular, we propose a new criterion called ambiguity-aware CARA travel time (ACT) for evaluating uncertain travel times under various attitudes toward risk and ambiguity, which is a preference based on blending the Hurwicz criterion and Constant Absolute Risk Aversion (CARA). More importantly, we show that when the uncertain link travel times are independently distributed, finding the path that minimizes travel time under the ACT criterion is essentially a shortest path problem. We also study the implications for the Network Equilibrium (NE) model, where travelers on the traffic network are characterized by their knowledge of the network uncertainty as well as their risk and ambiguity attitudes under the ACT. We derive and analyze the existence and uniqueness of solutions under NE. Finally, we obtain the Price of Anarchy that characterizes the inefficiency of this new equilibrium. The computational study suggests that as uncertainty increases, the influence of selfishness on inefficiency diminishes.
Fri, 01 Jan 2016 00:00:00 GMT (http://hdl.handle.net/10722/251183)
- On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities (http://hdl.handle.net/10722/251193)
Title: On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities
Authors: Chen, Caihua; Fu, Xiaoling; He, Bingsheng; Yuan, Xiaoming
Abstract: © 2017, Springer Science+Business Media New York. Projection-type methods are important for solving monotone linear variational inequalities. In this paper, we analyze the iteration complexity of two projection methods and accordingly establish their worst-case sublinear convergence rates measured by the iteration complexity in both the ergodic and nonergodic senses. Our analysis does not require any error bound condition or the boundedness of the feasible set, and it is scalable to other methods of the same kind.
Sun, 01 Jan 2017 00:00:00 GMT (http://hdl.handle.net/10722/251193)
- Alternating Direction Method of Multipliers for Linear Programming (http://hdl.handle.net/10722/251185)
Title: Alternating Direction Method of Multipliers for Linear Programming
Authors: He, Bing Sheng; Yuan, Xiao Ming
Abstract: © 2016, Operations Research Society of China, Periodicals Agency of Shanghai University, Science Press, and Springer-Verlag Berlin Heidelberg. Linear programming is the core problem of various operational research problems. The dominant approaches for linear programming are the simplex and interior point methods. In this paper, we show that the alternating direction method of multipliers (ADMM), which was proposed a long time ago but has recently found more and more applications in a broad spectrum of areas, can also be easily used to solve the canonical linear programming model. The resulting per-iteration complexity is O(mn), where m is the number of constraints and n is the variable dimension. At each iteration, there are m subproblems that are eligible for parallel computation, each requiring only O(n) flops. There are no inner iterations either. We thus introduce the new ADMM approach to linear programming, which may inspire deeper research for more complicated scenarios with more sophisticated results.
Fri, 01 Jan 2016 00:00:00 GMT (http://hdl.handle.net/10722/251185)
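One way to realize "ADMM for the canonical LP" is the following generic splitting sketch; it is not necessarily the paper's m-parallel-subproblem scheme, and the tiny test LP, penalty parameter, and function names are illustrative assumptions.

```python
import numpy as np

def admm_lp(c, A, b, rho=1.0, iters=2000):
    # ADMM sketch for the canonical LP  min c^T x  s.t.  A x = b, x >= 0,
    # via the splitting x = z with
    #   f(x) = c^T x + indicator{A x = b},  g(z) = indicator{z >= 0}.
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)
    def proj_affine(v):                      # projection onto {x : A x = b}
        return v - A.T @ (AAt_inv @ (A @ v - b))
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = proj_affine(z - u - c / rho)     # x-subproblem: linear objective + affine constraint
        z = np.maximum(x + u, 0.0)           # z-subproblem: projection onto the nonnegative orthant
        u = u + x - z                        # scaled dual (multiplier) update
    return z
```

Both subproblems have closed-form solutions (an affine projection and a componentwise clipping), which is what makes the per-iteration cost low.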
- Some Goldstein-type methods for co-coercive variant variational inequalities (http://hdl.handle.net/10722/250957)
Title: Some Goldstein-type methods for co-coercive variant variational inequalities
Authors: Li, M.; Liao, L. Z.; Yuan, X. M.
Abstract: The classical Goldstein method has been well studied in the context of variational inequalities (VIs). In particular, it has been shown in the literature that Goldstein's method works well for co-coercive VIs, where the underlying mapping is co-coercive. In this paper, we show that Goldstein's method can be extended to solve co-coercive variant variational inequalities (VVIs). We first show that when Goldstein's method is applied to solve VVIs, the iterative scheme can be improved by identifying a refined step size if the involved co-coercive modulus is known. By doing so, the allowable range of the involved scaling parameter ensuring convergence is enlarged compared to that in the context of VVIs with Lipschitz and strongly monotone operators. Then, we show that for a VVI whose co-coercive modulus is unknown, Goldstein's method is still convergent provided that an easily implementable Armijo-type strategy of adjusting the scaling parameter self-adaptively is employed. Some numerical results are reported to verify that the proposed Goldstein-type methods are efficient for solving VVIs. © 2010 IMACS. Published by Elsevier B.V. All rights reserved.
Sat, 01 Jan 2011 00:00:00 GMT (http://hdl.handle.net/10722/250957)
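The classical Goldstein projection iteration for VI(Omega, F), the starting point that this paper extends to variant VIs, is a single projected step per iteration. The co-coercive test mapping, the box constraint, and the names below are illustrative assumptions.

```python
import numpy as np

def goldstein(F, proj, x0, beta=0.8, iters=200):
    # Classical Goldstein projection iteration for VI(Omega, F); for a
    # co-coercive F with modulus c, convergence requires the scaling
    # parameter beta to lie in (0, 2c).
    x = x0.copy()
    for _ in range(iters):
        x = proj(x - beta * F(x))
    return x
```

The paper's contribution concerns how large beta may be taken (and how to adapt it when the modulus is unknown); the iteration itself stays this simple.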
- An inexact parallel splitting augmented Lagrangian method for monotone variational inequalities with separable structures (http://hdl.handle.net/10722/250991)
Title: An inexact parallel splitting augmented Lagrangian method for monotone variational inequalities with separable structures
Authors: Tao, Min; Yuan, Xiaoming
Abstract: Splitting methods have been extensively studied in the context of convex programming and variational inequalities with separable structures. Recently, a parallel splitting method based on the augmented Lagrangian method (abbreviated as PSALM) was proposed in He (Comput. Optim. Appl. 42:195-212, 2009) for solving variational inequalities with separable structures. In this paper, we propose the inexact version of the PSALM approach, which solves the resulting subproblems of PSALM approximately by an inexact proximal point method. For the inexact PSALM, the resulting proximal subproblems have closed-form solutions when the proximal parameters and inexact terms are chosen appropriately. We show the efficiency of the inexact PSALM numerically by some preliminary numerical experiments. © 2011 Springer Science+Business Media, LLC.
Sun, 01 Jan 2012 00:00:00 GMT (http://hdl.handle.net/10722/250991)
- An LQP-based decomposition method for solving a class of variational inequalities (http://hdl.handle.net/10722/250993)
Title: An LQP-based decomposition method for solving a class of variational inequalities
Authors: Yuan, Xiaoming; Li, Min
Abstract: The alternating direction method (ADM) is an influential decomposition method for solving a class of variational inequalities with block-separable structures. In the literature, the subproblems of the ADM are usually regularized by quadratic proximal terms to ensure a more stable and attractive numerical performance. In this paper, we propose to apply logarithmic-quadratic proximal (LQP) terms to regularize the ADM subproblems, and thus develop an LQP-based decomposition method for solving a class of variational inequalities. Global convergence of the new method is proved under standard assumptions. © 2011 Society for Industrial and Applied Mathematics.
Sat, 01 Jan 2011 00:00:00 GMT (http://hdl.handle.net/10722/250993)
- Matrix completion via an alternating direction methodhttp://hdl.handle.net/10722/250994Title: Matrix completion via an alternating direction method
Authors: Chen, Caihua; He, Bingsheng; Yuan, Xiaoming
Abstract: The matrix completion problem is to complete an unknown matrix from a small number of entries, and it captures many applications in diversified areas. Recently, it was shown that completing a low-rank matrix can be successfully accomplished by solving its convex relaxation problem using the nuclear norm. This paper shows that the alternating direction method (ADM) is applicable for completing a low-rank matrix including the noiseless case, the noisy case and the positive semidefinite case. The ADM approach for the matrix completion problem is easily implementable and very efficient. Numerical comparisons of the ADM approach with some state-of-the-art methods are reported. Â© 2011 The author. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rightss reserved.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10722/2509942012-01-01T00:00:00Z
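(Editorial illustration, not from the record above.) The nuclear-norm convex relaxation that the abstract describes is typically driven, inside each ADM iteration, by a singular value thresholding step: the proximal operator of the nuclear norm. A minimal numpy sketch of that operator, with an arbitrarily chosen threshold tau, follows; it is a generic textbook step, not the authors' implementation:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * (nuclear norm), evaluated at the matrix M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold the singular values
    return U @ np.diag(s_shrunk) @ Vt

# Example: thresholding suppresses the small singular value of a
# nearly rank-one matrix, leaving a low-rank result.
A = np.outer([1.0, 2.0], [3.0, 4.0]) + 0.1 * np.outer([1.0, -1.0], [1.0, 1.0])
B = svt(A, tau=0.5)
```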
- Efficient neural networks for solving variational inequalitieshttp://hdl.handle.net/10722/250996Title: Efficient neural networks for solving variational inequalities
Authors: Jiang, Suoliang; Han, Deren; Yuan, Xiaoming
Abstract: In this paper, we propose efficient neural network models for solving a class of variational inequality problems. Our first model can be viewed as a generalization of the basic projection neural network proposed by Friesz et al. [3]. As the basic projection neural network, it only needs some function evaluations and projections onto the constraint set, which makes the model very easy to implement, especially when the constraint set has some special structure such as a box or a ball. Under the condition that the underlying mapping F is pseudo-monotone with respect to a solution, a condition that is much weaker than those required by the basic projection neural network, we prove the global convergence of the proposed neural network. If F is strongly pseudo-monotone, we prove its globally exponential stability. Then, to improve the efficiency of the neural network, we modify it by choosing a new direction that is bounded away from zero. Under the condition that the underlying mapping F is co-coercive, a condition that is a little stronger than pseudo-monotonicity but still weaker than those required by the basic projection neural network, we prove the exponential stability and global convergence of the improved model. We also report some computational results, which illustrate that the new method is more efficient than that of Friesz et al. [3]. © 2012 Elsevier B.V.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10722/2509962012-01-01T00:00:00Z
- A hybrid inexact Logarithmic-Quadratic Proximal method for nonlinear complementarity problemshttp://hdl.handle.net/10722/250909Title: A hybrid inexact Logarithmic-Quadratic Proximal method for nonlinear complementarity problems
Authors: Xu, Ya; He, Bingsheng; Yuan, Xiaoming
Abstract: Inspired by the Logarithmic-Quadratic Proximal method [A. Auslender, M. Teboulle, S. Ben-Tiba, A logarithmic-quadratic proximal method for variational inequalities, Comput. Optim. Appl. 12 (1999) 31-40], we present a new prediction-correction method for solving the nonlinear complementarity problems. In our method, an intermediate point is produced by approximately solving a nonlinear equation system based on the Logarithmic-Quadratic Proximal method; and the new iterate is obtained by convex combination of the previous point and the one generated by the improved extragradient method at each iteration. The proposed method allows for constant relative errors and this yields a more practical Logarithmic-Quadratic Proximal type method. The global convergence is established under mild conditions. Preliminary numerical results indicate that the method is effective for large-scale nonlinear complementarity problems. © 2005 Elsevier Inc. All rights reserved.
Sun, 01 Jan 2006 00:00:00 GMThttp://hdl.handle.net/10722/2509092006-01-01T00:00:00Z
- The prediction-correction approach to nonlinear complementarity problemshttp://hdl.handle.net/10722/250914Title: The prediction-correction approach to nonlinear complementarity problems
Authors: Yuan, Xiao Ming
Abstract: This paper presents a prediction-correction approach to solving the nonlinear complementarity problem (NCP). Each iteration of the new method consists of a prediction and a correction. The predictor is produced by an inexact Logarithmic-Quadratic Proximal method; and then it is corrected by the Proximal Point Algorithm. Convergence of the new method is proved under mild assumptions. Comparison to existing methods shows the superiority of the new method. Numerical experiments including the application to traffic equilibrium problems demonstrate that the new method is attractive in practice. © 2005 Elsevier B.V. All rights reserved.
Mon, 01 Jan 2007 00:00:00 GMThttp://hdl.handle.net/10722/2509142007-01-01T00:00:00Z
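(Editorial illustration, not from the record above.) The NCP mentioned in the abstract asks for x ≥ 0, F(x) ≥ 0 and x·F(x) = 0. A standard way to measure how close an iterate is to a solution, commonly used when reporting convergence of prediction-correction methods, is the natural residual x − Π₊(x − F(x)), which vanishes exactly at solutions. A small sketch, assuming F is supplied as a Python callable:

```python
import numpy as np

def natural_residual(x, F):
    """Natural residual of the NCP: x >= 0, F(x) >= 0, x . F(x) = 0.
    It is the zero vector exactly at solutions of the NCP."""
    return x - np.maximum(x - F(x), 0.0)

# Toy NCP with F(x) = x - 1 componentwise; its unique solution is x = 1.
F = lambda x: x - 1.0
print(np.linalg.norm(natural_residual(np.array([1.0, 1.0]), F)))  # 0 at the solution
```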
- A descent method for structured monotone variational inequalitieshttp://hdl.handle.net/10722/250915Title: A descent method for structured monotone variational inequalities
Authors: Ye, Cai Hong; Yuan, Xiao Ming
Abstract: This article presents a descent method for solving monotone variational inequalities with separable structures. The descent direction is derived from the well-known alternating directions method. The optimal step size along the descent direction further improves the efficiency of the new method. Some numerical results demonstrate that the new method is effective in practice.
Mon, 01 Jan 2007 00:00:00 GMThttp://hdl.handle.net/10722/2509152007-01-01T00:00:00Z
- Risk-taking path choice behaviors under ATIS in transportation networks with demand uncertaintyhttp://hdl.handle.net/10722/250919Title: Risk-taking path choice behaviors under ATIS in transportation networks with demand uncertainty
Authors: Shao, Hu; Tian, Qiong; Yuan, Xiaoming; Wang, Guodong
Abstract: A new travel time reliability-based traffic assignment model is proposed to investigate the effects of an advanced transportation information system (ATIS) on drivers' risk-taking path choice behaviours in transportation networks with demand uncertainty. In the model, drivers are divided into two classes. The first class is not equipped with ATIS, while the second class is equipped with ATIS. Different risk-taking path choice behaviours of the two classes are studied, respectively. A corresponding mixed equilibrium traffic assignment model is formulated as a variational inequality problem in terms of path flows, which is solved by a heuristic solution algorithm. Numerical results indicate that the ATIS can influence the drivers' risk-taking path choice behaviours and the total system travel time in transportation networks with demand uncertainty. It is also found that under higher demand levels, the benefits of ATIS for network performance enhancement may be more obvious.
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/10722/2509192008-01-01T00:00:00Z
- Solving large-scale least squares semidefinite programming by alternating direction methodshttp://hdl.handle.net/10722/250966Title: Solving large-scale least squares semidefinite programming by alternating direction methods
Authors: He, Bingsheng; Xu, Minghua; Yuan, Xiaoming
Abstract: The well-known least squares semidefinite programming (LSSDP) problem seeks the nearest adjustment of a given symmetric matrix in the intersection of the cone of positive semidefinite matrices and a set of linear constraints, and it captures many applications in diverse fields. The task of solving large-scale LSSDP with many linear constraints, however, is numerically challenging. This paper mainly shows the applicability of the classical alternating direction method (ADM) for solving LSSDP and demonstrates the efficiency of the ADM approach. We compare the ADM approach with some other existing approaches numerically, and we show the superiority of ADM for solving large-scale LSSDP. © 2011 Society for Industrial and Applied Mathematics.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10722/2509662011-01-01T00:00:00Z
- Solving a class of matrix minimization problems by linear variational inequality approacheshttp://hdl.handle.net/10722/250967Title: Solving a class of matrix minimization problems by linear variational inequality approaches
Authors: Tao, Min; Yuan, Xiao Ming; He, Bing Sheng
Abstract: A class of matrix optimization problems can be formulated as linear variational inequalities with special structures. For solving such problems, the projection and contraction method (PC method) is extended to variational inequalities with matrix variables. The main computational load in the PC method is then the projection onto the semi-definite cone. Exploiting the special structures of the relevant variational inequalities, the Levenberg-Marquardt type projection and contraction method is advantageous. Preliminary numerical tests up to 1000×1000 matrices indicate that the suggested approach is promising. © 2011 Published by Elsevier Inc.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10722/2509672011-01-01T00:00:00Z
- The efficiency analysis for oligopolistic games when cost functions are non-separablehttp://hdl.handle.net/10722/250969Title: The efficiency analysis for oligopolistic games when cost functions are non-separable
Authors: Han, Deren; Yang, Hai; Yuan, Xiaoming
Abstract: By deriving an upper bound of the so-called 'price of anarchy', this paper analyses the efficiency of oligopolistic games in networks with non-separable and asymmetric cost functions, splittable flows and fixed demands. The new bound is determined by the optimal objective function values of some optimisation problems. In particular, for some special cases, the bound turns out to be explicit in the sense that it is representable explicitly by the number of players, and the constants measuring the degree of asymmetry and non-linearity of the cost function. Copyright © 2010 Inderscience Enterprises Ltd.
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/10722/2509692010-01-01T00:00:00Z
- Recovering low-rank and sparse components of matrices from incomplete and noisy observationshttp://hdl.handle.net/10722/250971Title: Recovering low-rank and sparse components of matrices from incomplete and noisy observations
Authors: Tao, Min; Yuan, Xiaoming
Abstract: Many problems can be characterized by the task of recovering the low-rank and sparse components of a given matrix. Recently, it was discovered that this nondeterministic polynomial-time hard (NP-hard) task can be well accomplished, both theoretically and numerically, via heuristically solving a convex relaxation problem where the widely acknowledged nuclear norm and l1 norm are utilized to induce low-rank and sparsity. This paper studies the recovery task in the general settings that only a fraction of entries of the matrix can be observed and the observation is corrupted by both impulsive and Gaussian noise. We show that the resulting model falls into the applicable scope of the classical augmented Lagrangian method. Moreover, the separable structure of the new model enables us to solve the involved subproblems more efficiently by splitting the augmented Lagrangian function. Hence, some splitting numerical algorithms are developed for solving the new recovery model. Some preliminary numerical experiments verify that these augmented-Lagrangian-based splitting algorithms are easily implementable and surprisingly efficient for tackling the new recovery model. © 2011 Society for Industrial and Applied Mathematics.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10722/2509712011-01-01T00:00:00Z
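(Editorial illustration, not from the record above.) When the augmented Lagrangian is split as the abstract describes, the subproblem attached to the l1 term is solved by the elementwise soft-threshold, the proximal operator of the l1 norm that induces the sparse component. A generic sketch of that operator, with an arbitrary threshold tau:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||.||_1, applied elementwise:
    shrink every entry toward zero by tau, zeroing small entries."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Entries with magnitude below tau are zeroed; the rest shrink by tau.
S = soft_threshold(np.array([[3.0, -0.2], [-1.5, 0.05]]), tau=0.5)
```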
- Solving Over-production and Supply-guarantee Problems in Economic Equilibriahttp://hdl.handle.net/10722/251254Title: Solving Over-production and Supply-guarantee Problems in Economic Equilibria
Authors: He, Bing sheng; Xu, Wei; Yang, Hai; Yuan, Xiao Ming
Abstract: The classical Spatial Price Equilibrium of economic markets may result in over-production at supply markets and under-supply at demand markets. This paper considers the policy instruments of levying taxes at supply markets to avoid over-production and granting subsidy for traders to guarantee supply at demand markets. The decision process of determining appropriate rates of tax and subsidy is characterized by an implicit complementarity problem. Consequently, a direct type algorithm is applied to solve the complementarity model. Preliminary numerical results are also reported. Â© 2009 Springer Science+Business Media, LLC.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10722/2512542011-01-01T00:00:00Z
- Alternating direction method for covariance selection modelshttp://hdl.handle.net/10722/250997Title: Alternating direction method for covariance selection models
Authors: Yuan, Xiaoming
Abstract: The covariance selection problem captures many applications in various fields, and it has been well studied in the literature. Recently, an l1-norm penalized log-likelihood model has been developed for the covariance selection problem, and this novel model is capable of completing the model selection and parameter estimation simultaneously. With the rapidly increasing magnitude of data, efficient numerical algorithms are urgently needed for large-scale cases of the l1-norm penalized log-likelihood model. For this purpose, this paper develops the alternating direction method (ADM). Some preliminary numerical results show that the ADM approach is very efficient for large-scale cases of the l1-norm penalized log-likelihood model. © Springer Science+Business Media, LLC 2011.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10722/2509972012-01-01T00:00:00Z
- An Accelerated Inexact Proximal Point Algorithm for Convex Minimizationhttp://hdl.handle.net/10722/250998Title: An Accelerated Inexact Proximal Point Algorithm for Convex Minimization
Authors: He, Bingsheng; Yuan, Xiaoming
Abstract: The proximal point algorithm is classical and popular in the community of optimization. In practice, inexact proximal point algorithms which solve the involved proximal subproblems approximately subject to certain inexact criteria are truly implementable. In this paper, we first propose an inexact proximal point algorithm with a new inexact criterion for solving convex minimization, and show its O(1/k) iteration-complexity. Then we show that this inexact proximal point algorithm is eligible for being accelerated by some influential acceleration schemes proposed by Nesterov. Accordingly, an accelerated inexact proximal point algorithm with an iteration-complexity of O(1/k^2) is proposed. © 2011 Springer Science+Business Media, LLC.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10722/2509982012-01-01T00:00:00Z
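(Editorial illustration, not from the record above.) The exact proximal point iteration underlying the abstract is x_{k+1} = argmin f(x) + (1/(2λ))||x − x_k||²; inexact variants solve this subproblem only approximately. For the scalar f(x) = |x| the subproblem has a closed-form soft-threshold solution, which gives a tiny runnable sketch of the exact iteration:

```python
def ppa_abs(x0, lam=1.0, iters=10):
    """Exact proximal point algorithm for the scalar f(x) = |x|.
    Each step evaluates the prox of lam*|.|: soft-threshold by lam."""
    x = x0
    for _ in range(iters):
        x = max(abs(x) - lam, 0.0) * (1 if x >= 0 else -1)  # prox step
    return x

print(ppa_abs(5.3, lam=1.0, iters=10))  # reaches the minimizer 0
```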
- Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: A unified approachhttp://hdl.handle.net/10722/251269Title: Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: A unified approach
Authors: Gu, Guoyong; He, Bingsheng; Yuan, Xiaoming
Abstract: This paper focuses on some customized applications of the proximal point algorithm (PPA) to two classes of problems: the convex minimization problem with linear constraints and a generic or separable objective function, and a saddle-point problem. We treat these two classes of problems uniformly by a mixed variational inequality, and show how the application of PPA with customized metric proximal parameters can yield favorable algorithms which are able to make use of the models' structures effectively. Our customized PPA revisit turns out to unify some algorithms including some existing ones in the literature and some new ones to be proposed. From the PPA perspective, we establish the global convergence and a worst-case O(1/t) convergence rate for this series of algorithms in a unified way. © 2013 Springer Science+Business Media New York.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10722/2512692014-01-01T00:00:00Z
- Alternating direction method with Gaussian back substitution for separable convex programminghttp://hdl.handle.net/10722/251004Title: Alternating direction method with Gaussian back substitution for separable convex programming
Authors: He, Bingsheng; Tao, Min; Yuan, Xiaoming
Abstract: We consider the linearly constrained separable convex minimization problem whose objective function is separable into m individual convex functions with nonoverlapping variables. A Douglas-Rachford alternating direction method of multipliers (ADM) has been well studied in the literature for the special case of m = 2. But the convergence of extending ADM to the general case of m ≥ 3 is still open. In this paper, we show that the straightforward extension of ADM is valid for the general case of m ≥ 3 if it is combined with a Gaussian back substitution procedure. The resulting ADM with Gaussian back substitution is a novel approach towards the extension of ADM from m = 2 to m ≥ 3, and its algorithmic framework is new in the literature. For the ADM with Gaussian back substitution, we prove its convergence via the analytic framework of contractive-type methods, and we show its numerical efficiency by some application problems. © 2012 Society for Industrial and Applied Mathematics.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10722/2510042012-01-01T00:00:00Z
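(Editorial illustration, not from the record above.) As a concrete instance of the m = 2 case that the abstract starts from, here is a minimal two-block ADMM sketch for the toy problem min ½||x−a||² + ½||z−b||² s.t. x = z, whose solution is x = z = (a+b)/2. The toy setup and parameter choices are mine, not the paper's:

```python
import numpy as np

def admm_toy(a, b, rho=1.0, iters=200):
    """Two-block ADMM for: min 0.5||x-a||^2 + 0.5||z-b||^2  s.t. x = z.
    Both subproblems are quadratic and admit closed-form solutions."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)          # scaled dual variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # x-subproblem
        z = (b + rho * (x + u)) / (1.0 + rho)  # z-subproblem
        u = u + x - z                          # dual update
    return x, z

a, b = np.array([1.0, 3.0]), np.array([3.0, 5.0])
x, z = admm_toy(a, b)
# x and z both converge to (a + b) / 2
```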
- Alternating algorithms for total variation image reconstruction from random projectionshttp://hdl.handle.net/10722/251006Title: Alternating algorithms for total variation image reconstruction from random projections
Authors: Xiao, Yunhai; Yang, Junfeng; Yuan, Xiaoming
Abstract: Total variation (TV) regularization is popular in image reconstruction due to its edge-preserving property. In this paper, we extend the alternating minimization algorithm recently proposed in [37] to the case of recovering images from random projections. Specifically, we propose to solve the TV regularized least squares problem by alternating minimization algorithms based on the classical quadratic penalty technique and alternating minimization of the augmented Lagrangian function. The per-iteration cost of the proposed algorithms is dominated by two matrix-vector multiplications and two fast Fourier transforms. Convergence results, including finite convergence of certain variables and q-linear convergence rate, are established for the quadratic penalty method. Furthermore, we compare numerically the new algorithms with some state-of-the-art algorithms. Our experimental results indicate that the new algorithms are stable, efficient and competitive with the compared ones. © 2012 American Institute of Mathematical Sciences.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10722/2510062012-01-01T00:00:00Z
- A Note on the Alternating Direction Method of Multipliershttp://hdl.handle.net/10722/251009Title: A Note on the Alternating Direction Method of Multipliers
Authors: Han, Deren; Yuan, Xiaoming
Abstract: We consider the linearly constrained separable convex programming, whose objective function is separable into m individual convex functions without coupled variables. The alternating direction method of multipliers has been well studied in the literature for the special case m = 2, while it remains open whether its convergence can be extended to the general case m ≥ 3. This note shows the global convergence of this extension when the involved functions are further assumed to be strongly convex. © 2012 Springer Science+Business Media, LLC.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/10722/2510092012-01-01T00:00:00Z
- The GUS-property of second-order cone linear complementarity problemshttp://hdl.handle.net/10722/250876Title: The GUS-property of second-order cone linear complementarity problems
Authors: Yang, Wei Hong; Yuan, Xiaoming
Abstract: The globally uniquely solvable (GUS) property of the linear transformation of the linear complementarity problems over symmetric cones has been studied recently by Gowda et al. via the approach of Euclidean Jordan algebra. In this paper, we contribute a new approach to characterizing the GUS property of the linear transformation of the second-order cone linear complementarity problems (SOCLCP) via some basic linear algebra properties of the involved matrix of SOCLCP. Some more concrete and checkable sufficient and necessary conditions for the GUS property are thus derived. © 2012 Springer and Mathematical Optimization Society.
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/10722/2508762013-01-01T00:00:00Z
- Linearized alternating direction method of multipliers with Gaussian back substitution for separable convex programminghttp://hdl.handle.net/10722/251058Title: Linearized alternating direction method of multipliers with Gaussian back substitution for separable convex programming
Authors: He, Bingsheng; Yuan, Xiaoming
Abstract: Recently, we have proposed combining the alternating direction method of multipliers (ADMM) with a Gaussian back substitution procedure for solving the convex minimization model with linear constraints and a general separable objective function, i.e., the objective function is the sum of many functions without coupled variables. In this paper, we further study this topic and show that the decomposed subproblems in the ADMM procedure can be substantially alleviated by linearizing the involved quadratic terms arising from the augmented Lagrangian penalty. When the resolvent operators of the separable functions in the objective have closed-form representations, embedding the linearization into the ADMM subproblems becomes necessary to yield easy subproblems with closed-form solutions. We thus show theoretically that the blend of ADMM, Gaussian back substitution and linearization works effectively for the separable convex minimization model under consideration.
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/10722/2510582013-01-01T00:00:00Z
- A strictly contractive Peaceman-Rachford splitting method for convex programminghttp://hdl.handle.net/10722/250878Title: A strictly contractive Peaceman-Rachford splitting method for convex programming
Authors: He, Bingsheng; Liu, Han; Wang, Zhaoran; Yuan, Xiaoming
Abstract: © 2014 Society for Industrial and Applied Mathematics. In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10722/2508782014-01-01T00:00:00Z
- On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliershttp://hdl.handle.net/10722/250882Title: On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers
Authors: He, Bingsheng; Yuan, Xiaoming
Abstract: © 2014, Springer-Verlag Berlin Heidelberg. This note proposes a novel approach to derive a worst-case O(1/k) convergence rate measured by the iteration complexity in a non-ergodic sense for the Douglas–Rachford alternating direction method of multipliers proposed by Glowinski and Marrocco.
Thu, 01 Jan 2015 00:00:00 GMThttp://hdl.handle.net/10722/2508822015-01-01T00:00:00Z
- On the Proximal Jacobian Decomposition of ALM for Multiple-Block Separable Convex Minimization Problems and Its Relationship to ADMMhttp://hdl.handle.net/10722/251140Title: On the Proximal Jacobian Decomposition of ALM for Multiple-Block Separable Convex Minimization Problems and Its Relationship to ADMM
Authors: He, Bingsheng; Xu, Hong Kun; Yuan, Xiaoming
Abstract: © 2015, Springer Science+Business Media New York. The augmented Lagrangian method (ALM) is a benchmark for solving convex minimization problems with linear constraints. When the objective function of the model under consideration is representable as the sum of some functions without coupled variables, a Jacobian or Gauss–Seidel decomposition is often implemented to decompose the ALM subproblems so that the functions' properties could be used more effectively in algorithmic design. The Gauss–Seidel decomposition of ALM has resulted in the very popular alternating direction method of multipliers (ADMM) for two-block separable convex minimization models, and recently it was shown in He et al. (Optimization Online, 2013) that the Jacobian decomposition of ALM is not necessarily convergent. In this paper, we show that if each subproblem of the Jacobian decomposition of ALM is regularized by a proximal term and the proximal coefficient is sufficiently large, the resulting scheme, to be called the proximal Jacobian decomposition of ALM, is convergent. We also show that an interesting application of the ADMM in Wang et al. (Pac J Optim, to appear), which reformulates a multiple-block separable convex minimization model as a two-block counterpart first and then applies the original ADMM directly, is closely related to the proximal Jacobian decomposition of ALM. Our analysis is conducted in the variational inequality context and is rooted in a good understanding of the proximal point algorithm.
Fri, 01 Jan 2016 00:00:00 GMThttp://hdl.handle.net/10722/2511402016-01-01T00:00:00Z
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergenthttp://hdl.handle.net/10722/251133Title: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
Authors: Chen, Caihua; He, Bingsheng; Ye, Yinyu; Yuan, Xiaoming
Abstract: © 2014, Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society. The alternating direction method of multipliers (ADMM) is now widely used in many fields, and its convergence was proved when two blocks of variables are alternatively updated. It is strongly desirable and practically valuable to extend the ADMM directly to the case of a multi-block convex minimization problem where its objective function is the sum of more than two separable convex functions. However, the convergence of this extension has been missing for a long time: neither an affirmative convergence proof nor an example showing its divergence is known in the literature. In this paper we give a negative answer to this long-standing open question: The direct extension of ADMM is not necessarily convergent. We present a sufficient condition to ensure the convergence of the direct extension of ADMM, and give an example to show its divergence.
Fri, 01 Jan 2016 00:00:00 GMThttp://hdl.handle.net/10722/2511332016-01-01T00:00:00Z
- A phase model for point spread function estimation in ground-based astronomyhttp://hdl.handle.net/10722/251266Title: A phase model for point spread function estimation in ground-based astronomy
Authors: Chan, Raymond Honfu; Yuan, Xiao Ming; Zhang, Wen Xing
Abstract: In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally interfered by atmospheric turbulence and hence images so acquired are blurred with unknown point spread function (PSF). To restore the observed images, aberration of the wavefront at the telescope's aperture, i.e., the phase, is utilized to derive the PSF. However, the phase is not readily available. Instead, its gradients can be collected by wavefront sensors. Thus the usual approach is to use regularization methods to reconstruct high-resolution phase gradients and then use them to recover the phase in high accuracy. Here, we develop a model that reconstructs the phase directly. The proposed model uses the tight frame regularization and it can be solved efficiently by the Douglas-Rachford alternating direction method of multipliers whose convergence has been well established. Numerical results illustrate that our new model is efficient and gives more accurate estimation for the PSF. © 2013 Science China Press and Springer-Verlag Berlin Heidelberg.
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/10722/2512662013-01-01T00:00:00Z
- A customized Douglas-Rachford splitting algorithm for separable convex minimization with linear constraintshttp://hdl.handle.net/10722/251062Title: A customized Douglas-Rachford splitting algorithm for separable convex minimization with linear constraints
Authors: Han, Deren; He, Hongjin; Yang, Hai; Yuan, Xiaoming
Abstract: We consider applying the Douglas-Rachford splitting method (DRSM) to the convex minimization problem with linear constraints and a separable objective function. The dual application of DRSM has been well studied in the literature, resulting in the well known alternating direction method of multipliers (ADMM). In this paper, we show that the primal application of DRSM in combination with an appropriate decomposition can yield an efficient structure-exploiting algorithm for the model under consideration, whose subproblems could be easier than those of ADMM. Both the exact and inexact versions of this customized DRSM are studied; and their numerical efficiency is demonstrated by some preliminary numerical results. © 2013 Springer-Verlag Berlin Heidelberg.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10722/2510622014-01-01T00:00:00Z
- A generalized proximal point algorithm and its convergence ratehttp://hdl.handle.net/10722/251083Title: A generalized proximal point algorithm and its convergence rate
Authors: Corman, Etienne; Yuan, Xiaoming
Abstract: Copyright © by SIAM. Unauthorized reproduction of this article is prohibited. We propose a generalized proximal point algorithm (PPA) in the generic setting of finding a root of a maximal monotone operator. In addition to the classical PPA, a number of benchmark operator splitting methods in the PDE and optimization literatures can be retrieved by this generalized PPA scheme. We establish the convergence rate of this generalized PPA scheme under different conditions, including estimating its worst-case convergence rate measured by the iteration complexity under mild assumptions and deriving its linear convergence rate under certain stronger conditions. Throughout our discussion, we pay particular attention to the special case where the operator is the sum of two maximal monotone operators and specify our theoretical results in the generic setting to this special case. Our result turns out to be a general and unified study on the convergence rate of a number of existing methods and subsumes some existing results in the literature.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10722/2510832014-01-01T00:00:00Z
- Computing the nearest Euclidean distance matrix with low embedding dimensionshttp://hdl.handle.net/10722/251084Title: Computing the nearest Euclidean distance matrix with low embedding dimensions
Authors: Qi, Hou Duo; Yuan, Xiaoming
Abstract: © 2013, Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society. Euclidean distance embedding appears in many high-profile applications including wireless sensor network localization, where not all pairwise distances among sensors are known or accurate. The classical Multi-Dimensional Scaling (cMDS) generally works well when the partial or contaminated Euclidean Distance Matrix (EDM) is close to the true EDM, but otherwise performs poorly. A natural step preceding cMDS would be to calculate the nearest EDM to the known matrix. A crucial condition on the desired nearest EDM is for it to have a low embedding dimension and this makes the problem nonconvex. There exists a large body of publications that deal with this problem. Some try to solve the problem directly and some are the type of convex relaxations of it. In this paper, we propose a numerical method that aims to solve this problem directly. Our method is strongly motivated by the majorized penalty method of Gao and Sun for low-rank positive semi-definite matrix optimization problems. The basic geometric object in our study is the set of EDMs having a low embedding dimension. We establish a zero duality gap result between the problem and its Lagrangian dual problem, which also motivates the majorization approach adopted. Numerical results show that the method works well for the Euclidean embedding of Network coordinate systems and for a class of problems in large scale sensor network localization and molecular conformation.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10722/251084
- A splitting method for separable convex programming
http://hdl.handle.net/10722/251089
Title: A splitting method for separable convex programming
Authors: He, Bingsheng; Tao, Min; Yuan, Xiaoming
Abstract: © 2014 The Authors. We propose a splitting method for solving a separable convex minimization problem with linear constraints, where the objective function is expressed as the sum of m individual functions without coupled variables. Because it treats the functions in the objective separately, the new method belongs to the category of operator splitting methods. We show global convergence and estimate a worst-case convergence rate for the new method, and then illustrate its numerical efficiency through some applications.
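The operator-splitting idea of handling each function in the objective by its own easy subproblem can be illustrated on a minimal two-block instance (the paper treats m functions and proposes its own scheme; the sketch below is a standard two-block ADMM on a lasso-type problem, with illustrative names and parameters):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, mu, rho=1.0, iters=200):
    # Two-block splitting for min 0.5||Ax - b||^2 + mu||z||_1  s.t.  x - z = 0:
    # each term of the objective gets its own closed-form subproblem.
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-subproblem (least squares)
        z = soft_threshold(x + u, mu / rho)                # z-subproblem (shrinkage)
        u = u + x - z                                      # dual (multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
z = admm_lasso(A, b, mu=0.1)
```

The design point is that neither subproblem ever sees the other function: the smooth term is handled by a linear solve, the nonsmooth term by shrinkage.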
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10722/251089
- Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization
http://hdl.handle.net/10722/251019
Title: Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization
Authors: Yang, Junfeng; Yuan, Xiaoming
Abstract: The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are simple enough to have closed-form solutions. In this paper, we are interested in applying the ALM and the ADM to some minimization problems involving the nuclear norm. When the resulting subproblems do not have closed-form solutions, we propose to linearize them so that closed-form solutions of the linearized subproblems can be easily derived. Global convergence results for these linearized ALM and ADM schemes are established under standard assumptions. Finally, we verify the effectiveness and efficiency of the new methods by some numerical experiments. © 2012 American Mathematical Society.
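The closed-form solution that makes such linearized subproblems tractable is the proximal operator of the nuclear norm, i.e., singular value thresholding. A minimal sketch (the function name and test matrix are illustrative, not from the paper):

```python
import numpy as np

def svt(M, tau):
    # Proximal operator of tau * ||.||_* : soft-threshold the singular values.
    # It solves  min_X  tau*||X||_* + 0.5*||X - M||_F^2  in closed form,
    # which is the shape of a linearized ALM/ADM matrix subproblem.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_thr) @ Vt

M = np.outer([1.0, 2.0], [3.0, 1.0]) + 0.01 * np.eye(2)
X = svt(M, tau=0.5)  # small singular values are zeroed, reducing rank
```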
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/10722/251019