** The Weakly-Compressible Smoothed Particle Hydrodynamics (WCSPH) method is a Lagrangian method that is typically used for the simulation of incompressible fluids. While developing an SPH-based scheme or solver, researchers often verify their code with exact solutions, solutions from other numerical techniques, or experimental data. This typically requires a significant amount of computational effort and does not test the full capabilities of the solver. Furthermore, often this does not yield insights on the convergence of the solver. In this paper we introduce the method of manufactured solutions (MMS) to comprehensively test a WCSPH-based solver in a robust and efficient manner. The MMS is well established in the context of mesh-based numerical solvers. We show how the method can be applied in the context of Lagrangian WCSPH solvers to test the convergence and accuracy of the solver in two and three dimensions, systematically identify any problems with the solver, and test the boundary conditions in an efficient way. We demonstrate this for both a traditional WCSPH scheme as well as for some recently proposed second order convergent WCSPH schemes. Our code is open source and the results of the manuscript are reproducible. **
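
The core MMS workflow can be sketched as follows: postulate an exact ("manufactured") solution, symbolically derive the source term that makes it satisfy the governing equation, then run the solver with that source and measure the error against the known solution. Below is a generic illustration on a 1D diffusion model; the model equation and all symbols are our own, not the paper's WCSPH equations:

```python
import sympy as sp

# Manufactured solution: pick u(x, t) a priori.
x, t = sp.symbols("x t")
u = sp.sin(sp.pi * x) * sp.exp(-t)

# For the model equation u_t = nu * u_xx + s(x, t), the source term s
# is, by construction, whatever makes the chosen u an exact solution.
nu = sp.Rational(1, 10)
s = sp.simplify(sp.diff(u, t) - nu * sp.diff(u, x, 2))

# The solver is then run with this source term; the discrete error
# against the known u(x, t) measures accuracy and convergence order.
print(s)
```

Running the solver for a sequence of resolutions against the known solution then yields observed convergence rates directly, with no reference data required.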

** Nonnegative matrix factorization arises widely in machine learning and data analysis. In this paper, for a given factorization of rank r, we consider the sparse stochastic matrix factorization (SSMF) of decomposing a prescribed m-by-n stochastic matrix V into a product of an m-by-r stochastic matrix W and a sparse r-by-n stochastic matrix H. With the prescribed sparsity level, we reformulate the SSMF as an unconstrained nonconvex-nonsmooth minimization problem and introduce a row-wise update algorithm for solving the minimization problem. We show that the proposed algorithm converges globally and that the generated sequence converges to a special critical point of the cost function, which is a global minimizer over the W-factor as a whole and is nearly a global minimizer over each row vector of the H-factor. Numerical experiments on both synthetic and real data sets demonstrate the effectiveness of our proposed algorithm. **
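
A quick sanity check on the formulation: the product of a row-stochastic W and a row-stochastic H is itself row-stochastic, so V = WH is structurally consistent (assuming the row-stochastic convention; the column-stochastic case is analogous). The snippet below verifies this with random factors; the sparsification rule and all sizes are our own choices, and this is not the authors' row-wise update algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
m, r, n = 5, 3, 8

# Random row-stochastic W (m x r): nonnegative rows summing to 1.
W = rng.random((m, r))
W /= W.sum(axis=1, keepdims=True)

# Sparse row-stochastic H (r x n): zero out most entries, renormalize rows.
H = rng.random((r, n))
H[H < 0.6] = 0.0            # impose sparsity
H += 1e-12                  # guard against an all-zero row
H /= H.sum(axis=1, keepdims=True)

# The product V = W H is again nonnegative with rows summing to 1.
V = W @ H
assert np.allclose(V.sum(axis=1), 1.0)
assert np.all(V >= 0)
```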

** We present a convex formulation of compliant frictional contact and a robust, performant method to solve it in practice. By analytically eliminating contact constraints, we obtain an unconstrained convex problem. Our solver has proven global convergence and warm-starts effectively, enabling simulation at interactive rates. We develop compact analytical expressions of contact forces allowing us to describe our model in clear physical terms and to rigorously characterize our approximations. Moreover, this enables us not only to model point contact, but also to incorporate sophisticated models of compliant contact patches. Our time stepping scheme includes the midpoint rule, which we demonstrate achieves second order accuracy even with frictional contact. We introduce a number of accuracy metrics and show our method outperforms existing commercial and open source alternatives without sacrificing accuracy. Finally, we demonstrate robust simulation of robotic manipulation tasks at interactive rates, with accurately resolved stiction and contact transitions, as required for meaningful sim-to-real transfer. Our method is implemented in the open source robotics toolkit Drake. **
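
The second-order behavior of the midpoint rule can be seen on a scalar test problem: halving the step size reduces the endpoint error by roughly a factor of four. This is a generic ODE illustration, not the paper's contact-dynamics time stepper:

```python
import math

def implicit_midpoint_decay(h, T=1.0):
    """Implicit midpoint rule for x' = -x, x(0) = 1 (closed form per step)."""
    x = 1.0
    for _ in range(round(T / h)):
        # x_{n+1} = x_n + h * f((x_n + x_{n+1}) / 2) solved for x_{n+1}
        x *= (1 - h / 2) / (1 + h / 2)
    return x

exact = math.exp(-1.0)
e1 = abs(implicit_midpoint_decay(0.1) - exact)
e2 = abs(implicit_midpoint_decay(0.05) - exact)
# Halving the step size cuts the error by ~4x: second-order accuracy.
print(e1 / e2)
```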

** This paper considers the numerical treatment of the time-dependent Gross-Pitaevskii equation. In order to conserve the time invariants of the equation as accurately as possible, we propose a Crank-Nicolson-type time discretization that is combined with a suitable generalized finite element discretization in space. The space discretization is based on the technique of Localized Orthogonal Decomposition (LOD) and allows one to capture the time invariants with an accuracy of order $\mathcal{O}(H^6)$ with respect to the chosen mesh size $H$. This accuracy is preserved due to the conservation properties of the time stepping method. Furthermore, we prove that the resulting scheme approximates the exact solution in the $L^{\infty}(L^2)$-norm with order $\mathcal{O}(\tau^2 + H^4)$, where $\tau$ denotes the step size. The computational efficiency of the method is demonstrated in numerical experiments for a benchmark problem with a known exact solution. **
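
The conservation mechanism behind Crank-Nicolson-type schemes can be illustrated on a linear model: for $i u_t = A u$ with Hermitian $A$, the Crank-Nicolson update is a Cayley transform, hence unitary, so the discrete $L^2$ norm (mass) is preserved exactly. This is a linear toy example, not the paper's nonlinear Gross-Pitaevskii scheme or its LOD space discretization:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# A Hermitian "Hamiltonian" standing in for the discretized operator.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2

# Crank-Nicolson step for i u_t = A u:
#   (I + i tau/2 A) u_{k+1} = (I - i tau/2 A) u_k
tau = 0.01
I = np.eye(n)
step = np.linalg.solve(I + 1j * tau / 2 * A, I - 1j * tau / 2 * A)

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
norm0 = np.linalg.norm(u)
for _ in range(100):
    u = step @ u
# The Cayley transform is unitary: the discrete mass is conserved
# up to rounding error, independent of the step size.
assert abs(np.linalg.norm(u) - norm0) < 1e-10
```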

** We propose a new variant of Chubanov's method for solving the feasibility problem over the symmetric cone by extending Roos's method (2018) for the feasibility problem over the nonnegative orthant. The proposed method considers a feasibility problem associated with a norm induced by the maximum eigenvalue of an element and uses a rescaling focusing on the upper bound of the sum of eigenvalues of any feasible solution to the problem. Its computational bound is (i) equivalent to that of Roos's original method (2018) and superior to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is the nonnegative orthant, (ii) superior to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is a Cartesian product of second-order cones, and (iii) equivalent to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is the simple positive semidefinite cone, under the assumption that the costs of computing the spectral decomposition and the minimum eigenvalue are of the same order for any given symmetric matrix. We also conduct numerical experiments comparing the performance of our method with existing methods on three types of generated instances: (i) strongly (but ill-conditioned) feasible instances, (ii) weakly feasible instances, and (iii) infeasible instances. On all of these instances, the proposed method is more efficient than the existing methods in terms of accuracy and execution time. **

** We present a fast and approximate multifrontal solver for large-scale sparse linear systems arising from finite-difference, finite-volume or finite-element discretization of high-frequency wave equations. The proposed solver leverages the butterfly algorithm and its hierarchical matrix extension for compressing and factorizing large frontal matrices via graph-distance guided entry evaluation or randomized matrix-vector multiplication-based schemes. Complexity analysis and numerical experiments demonstrate $\mathcal{O}(N\log^2 N)$ computation and $\mathcal{O}(N)$ memory complexity when applied to an $N\times N$ sparse system arising from 3D high-frequency Helmholtz and Maxwell problems. **

** We consider least-squares variational kernel-based methods for the numerical solution of partial differential equations. Specifically, we use least-squares principles to develop meshfree methods for a general second-order ADN elliptic boundary value problem in a domain $\Omega \subset \mathbb{R}^d$ under Dirichlet boundary conditions. Most notably, these principles do not assume that the differential operator is self-adjoint or positive definite, as it would have to be in the Rayleigh-Ritz setting. Nevertheless, the new scheme leads to a symmetric and positive definite algebraic system, allowing us to circumvent the compatibility conditions arising in standard and mixed-Galerkin methods. In particular, the resulting method does not require trial subspaces to satisfy any boundary condition. The trial space for discretization is provided by standard kernels that reproduce $H^\tau(\Omega)$, $\tau>d/2$, as their native spaces. Therefore, the smoothness of the approximation functions can be arbitrarily increased without any additional effort. The solvability of the scheme is proved and error estimates are derived for functions in appropriate Sobolev spaces. For the weighted discrete least-squares principles, we show that the optimal rate of convergence in $L^2(\Omega)$ is attained. Furthermore, for $d \le 3$, the proposed method achieves the optimal rate of convergence in $H^k(\Omega)$ whenever $k \le \tau$. The condition number of the final linear system is estimated in terms of the discretization quality. Finally, the results of some computational experiments support the theoretical error bounds. **
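
The flavor of such a kernel-based least-squares discretization can be illustrated on a toy 1D Poisson problem: Gaussian kernels supply the trial space, the strong-form PDE residual and the Dirichlet conditions are collected into one overdetermined system, and the coefficients are found in the least-squares sense. This is only a sketch under our own assumptions (choice of kernel, shape parameter, point counts, boundary weight), not the paper's ADN framework or its weighted principles:

```python
import numpy as np

# Exact solution and data for -u'' = f on (0, 1), u(0) = u(1) = 0.
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

eps = 5.0                                 # kernel shape parameter (our choice)
centers = np.linspace(0, 1, 18)           # trial-space kernel centers

def phi(x, c):                            # Gaussian kernel basis
    return np.exp(-(eps * (x[:, None] - c[None, :]))**2)

def phi_xx(x, c):                         # its second derivative in x
    d = x[:, None] - c[None, :]
    return (4 * eps**4 * d**2 - 2 * eps**2) * np.exp(-(eps * d)**2)

# Overdetermined system: PDE residual at interior points plus weighted
# boundary conditions, solved in the least-squares sense.
xi = np.linspace(0.02, 0.98, 40)
xb = np.array([0.0, 1.0])
w = 10.0                                  # boundary weight (our choice)
A = np.vstack([-phi_xx(xi, centers), w * phi(xb, centers)])
b = np.concatenate([f(xi), w * np.zeros(2)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Check the numerical solution against the exact one on a fine grid.
xt = np.linspace(0, 1, 200)
err = np.max(np.abs(phi(xt, centers) @ coef - u_exact(xt)))
assert err < 1e-2
```

Note that no basis function vanishes on the boundary; the boundary conditions enter only as extra weighted rows, mirroring the abstract's point that the trial space need not satisfy any boundary condition.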

** Factorization of matrices where the rank of the two factors diverges linearly with their sizes has many applications in diverse areas such as unsupervised representation learning, dictionary learning or sparse coding. We consider a setting where the two factors are generated from known component-wise independent prior distributions, and the statistician observes a (possibly noisy) component-wise function of their matrix product. In the limit where the dimensions of the matrices tend to infinity, but their ratios remain fixed, we expect to be able to derive closed form expressions for the optimal mean squared error on the estimation of the two factors. However, this remains a very involved mathematical and algorithmic problem. A related, but simpler, problem is extensive-rank matrix denoising, where one aims to reconstruct a matrix with extensive but usually small rank from noisy measurements. In this paper, we approach both these problems using high-temperature expansions at fixed order parameters. This allows us to clarify how previous attempts at solving these problems failed at finding an asymptotically exact solution. We provide a systematic way to derive the corrections to these existing approximations, taking into account the structure of correlations particular to the problem. Finally, we illustrate our approach in detail on the case of extensive-rank matrix denoising. We compare our results with known optimal rotationally-invariant estimators, and show how exact asymptotic calculations of the minimal error can be performed using extensive-rank matrix integrals. **
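
For the matrix denoising side, a minimal example of a rotationally invariant estimator is one that keeps the eigenvectors of the observation and only shrinks its eigenvalues. The sketch below uses crude hard thresholding at the noise bulk edge rather than the optimal shrinkage discussed in the paper; all sizes and scales are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 200, 5

# Low-rank signal plus symmetric (Wigner-like) noise.
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
S = U @ np.diag(np.full(r, 3.0 * np.sqrt(n))) @ U.T
G = rng.standard_normal((n, n))
noise = (G + G.T) / np.sqrt(2 * n)        # spectrum roughly in [-2, 2]
Y = S + noise

# Rotationally invariant denoising: keep Y's eigenvectors, shrink its
# eigenvalues; here crude hard thresholding just above the bulk edge.
lam, V = np.linalg.eigh(Y)
lam[np.abs(lam) < 2.2] = 0.0
S_hat = V @ np.diag(lam) @ V.T

# The denoised estimate is closer to the signal than the raw observation.
assert np.linalg.norm(S_hat - S) < np.linalg.norm(Y - S)
```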

** Because of continuous advances in mathematical programming, Mixed Integer Optimization has become competitive with popular regularization methods for selecting features in regression problems. The approach exhibits unquestionable foundational appeal and versatility, but also poses important challenges. We tackle these challenges, reducing the computational burden of tuning the sparsity bound (a parameter which is critical for effectiveness) and improving performance in the presence of feature collinearity and of signals that vary in nature and strength. Importantly, we render the approach efficient and effective in applications of realistic size and complexity - without resorting to relaxations or heuristics in the optimization, or abandoning rigorous cross-validation tuning. Computational viability and improved performance in subtler scenarios are achieved with a multi-pronged blueprint, leveraging characteristics of the Mixed Integer Programming framework and employing whitening, a data pre-processing step. **
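
Whitening, the pre-processing step mentioned above, can be sketched as follows: center the design matrix, eigendecompose its empirical covariance, and rescale so the transformed features are uncorrelated with unit variance. This is a generic ZCA-whitening illustration, not necessarily the exact variant used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 6

# Correlated features: the kind of collinearity whitening is meant to remove.
L = rng.standard_normal((p, p))
X = rng.standard_normal((n, p)) @ L

Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n - 1)
lam, V = np.linalg.eigh(cov)

# ZCA whitening: rotate to principal axes, rescale, rotate back.
Wz = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T
Xw = Xc @ Wz

# After whitening, the empirical covariance is the identity (to roundoff).
cov_w = Xw.T @ Xw / (n - 1)
assert np.allclose(cov_w, np.eye(p), atol=1e-8)
```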