Hessian Calculator

Use variables x and y. Supported functions: sin, cos, tan, exp, log, ln, sqrt, abs. Use ^ for powers.
Enter a function and point, then click Calculate Hessian.

What Is the Hessian Matrix?

The Hessian matrix is a square matrix of second-order partial derivatives of a scalar-valued function. For a two-variable function f(x, y), the Hessian at a point (x₀, y₀) is:

H(x₀, y₀) = [[fxx, fxy], [fyx, fyy]]

Intuitively, first derivatives describe slope, while second derivatives describe curvature. So the Hessian tells you how the function bends in different directions near a point.
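As a concrete illustration (a hand-chosen example function, not part of the calculator itself), take f(x, y) = x²·y. Differentiating twice gives fxx = 2y, fxy = fyx = 2x, and fyy = 0, so the Hessian can be written down directly:

```python
# Hand-derived Hessian of the example function f(x, y) = x^2 * y.
# fxx = 2y, fxy = fyx = 2x, fyy = 0 (standard partial differentiation).
def hessian_x2y(x, y):
    return [[2.0 * y, 2.0 * x],
            [2.0 * x, 0.0]]

# At the point (1, 2) the Hessian is [[4, 2], [2, 0]].
print(hessian_x2y(1.0, 2.0))
```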

How This Hessian Calculator Works

This tool estimates derivatives numerically using central difference formulas. That makes it useful even when you do not want to differentiate the function by hand.

  • fxx and fyy: second partial derivatives along x and y directions.
  • fxy and fyx: mixed partial derivatives that measure cross-curvature.
  • Determinant D: D = fxx·fyy − (fxy)² (using averaged mixed partials).
  • Trace: fxx + fyy, useful in stability analysis.
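The quantities above can be sketched in a few lines of Python (a minimal version of the central-difference idea with an assumed step size h, not this tool's exact source):

```python
def numerical_hessian(f, x, y, h=1e-4):
    """Approximate the 2x2 Hessian of f at (x, y) with central differences."""
    fxx = (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h**2
    # Mixed partial fxy from the four diagonal neighbors.
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h**2)
    return fxx, fyy, fxy

f = lambda x, y: x**2 * y            # example function; fxx = 2y, fyy = 0, fxy = 2x
fxx, fyy, fxy = numerical_hessian(f, 1.0, 2.0)
D = fxx * fyy - fxy**2               # determinant
trace = fxx + fyy                    # trace
```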

Input Tips

  • Use explicit multiplication like 3*x*y.
  • Use ^ for powers, e.g., x^3.
  • ln(x) is accepted and treated as natural log.
  • If results look noisy, try adjusting the step size h slightly up or down.

Interpreting the Result: Min, Max, or Saddle?

At a critical point (where the gradient is near zero), the second derivative test can classify the point:

  • If D > 0 and fxx > 0, you likely have a local minimum.
  • If D > 0 and fxx < 0, you likely have a local maximum.
  • If D < 0, the point is a saddle point.
  • If D ≈ 0, the test is inconclusive.

If the gradient is not near zero, the point is generally not an extremum candidate, even if the Hessian is available.
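The classification rules above translate directly into code (a sketch; the tolerance tol used to decide "near zero" is an assumed threshold):

```python
def classify_critical_point(fxx, fyy, fxy, tol=1e-8):
    """Second derivative test for a critical point of f(x, y)."""
    D = fxx * fyy - fxy**2  # Hessian determinant
    if abs(D) < tol:
        return "inconclusive"
    if D < 0:
        return "saddle point"
    return "local minimum" if fxx > 0 else "local maximum"

# For f(x, y) = x^2 + y^2 at the origin: fxx = fyy = 2, fxy = 0.
print(classify_critical_point(2.0, 2.0, 0.0))   # → local minimum
```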

Why the Hessian Matters in Optimization and ML

In optimization, the Hessian plays a central role in Newton's method, quasi-Newton methods, and trust-region algorithms. In machine learning, it helps analyze loss surface curvature, conditioning, and training dynamics.
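To see why curvature helps, here is a sketch of a single Newton step in two variables: the update p ← p − H⁻¹∇f solves a 2×2 linear system (done here with Cramer's rule; the example function and point are illustrative assumptions):

```python
def newton_step(grad, H, p):
    """One Newton step: solve H d = grad (2x2, Cramer's rule), return p - d."""
    gx, gy = grad
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    dx = (gx * H[1][1] - gy * H[0][1]) / det
    dy = (H[0][0] * gy - H[1][0] * gx) / det
    return (p[0] - dx, p[1] - dy)

# For the quadratic f(x, y) = x^2 + y^2, grad = (2x, 2y) and H = [[2,0],[0,2]],
# so one step from (3, 4) lands exactly at the minimum (0, 0).
print(newton_step((6.0, 8.0), [[2.0, 0.0], [0.0, 2.0]], (3.0, 4.0)))
```

For non-quadratic functions the step is repeated until the gradient is near zero, which is where the curvature information pays off in convergence speed.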

Practical Uses

  • Checking local convexity or concavity near a candidate solution.
  • Classifying stationary points in multivariable calculus.
  • Studying stability in dynamical systems and economics models.
  • Improving convergence speed in second-order optimization routines.

Common Mistakes to Avoid

  • Using a point where the function is undefined (for example, log of a negative number).
  • Choosing an extremely tiny step size, which can amplify floating-point roundoff.
  • Forgetting that numerical derivatives are approximations, not exact symbolic values.
  • Interpreting second derivative test results when the gradient is clearly nonzero.

If you are doing high-precision work, compare several h values and confirm that the results agree.
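One way to run that comparison (a sketch with an assumed example function whose exact second partial is known for reference):

```python
import math

def fxx_central(f, x, y, h):
    """Central difference estimate of fxx at (x, y)."""
    return (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h**2

f = lambda x, y: math.sin(x) * y        # example; exact fxx = -sin(x) * y
exact = -math.sin(1.0) * 2.0
for h in (1e-2, 1e-3, 1e-4):
    est = fxx_central(f, 1.0, 2.0, h)
    print(f"h={h:g}  estimate={est:.6f}  error={abs(est - exact):.2e}")
```

If the estimates agree across h values, the result is trustworthy; if they drift apart as h shrinks, floating-point roundoff is starting to dominate the truncation error.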

🔗 Related Calculators