Partial Derivatives in Haskell
A while back, a friend wanted help with a program that could solve for the roots of functions using Newton's method. For that I naturally needed some way to calculate the derivative of a function numerically, and this is what I came up with:
deriv f x = (f (x+h) - f x) / h where h = 0.00001
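Newton's method was then a fairly easy thing to implement on top of this, and it works rather well. It looked something along these lines (a sketch: the name newton and the 1e-9 stopping tolerance here are just illustrative):

newton :: (Double -> Double) -> Double -> Double
newton f x
  | abs fx < 1e-9 = x                              -- close enough to a root
  | otherwise     = newton f (x - fx / deriv f x)  -- standard Newton step
 where fx = f x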
But now I've started to wonder: is there some way I could use this function to compute partial derivatives numerically, or is that something that would require a full-on CAS? I would post my attempts, but I have absolutely no clue what to do yet.
Please keep in mind that I am new to Haskell. Thank you!
2 answers

This is called automatic differentiation and there is a lot of really neat work in this area in Haskell, though I don't know how accessible it is.
From the wiki page:
- A paper, Beautiful Differentiation, and the corresponding talk.
- Forward-mode libraries: ad, fad, vector-space, Data.Ring.Module.AutomaticDifferentiation
- Reverse-mode libraries: also ad, rad (a quick taste of ad follows below)
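To give a taste of the ad package (a sketch; I believe the basic interface looks like this, though the exact module layout may differ between versions):

import Numeric.AD (diff, grad)

-- derivative of sin at 0, computed by forward-mode AD: no step size h involved
dSinAtZero :: Double
dSinAtZero = diff sin 0                                -- 1.0

-- gradient of f (x,y) = x*y + sin x at (1,2), via reverse mode;
-- the list pattern is partial, but fine for a demo
gradExample :: [Double]
gradExample = grad (\[x, y] -> x * y + sin x) [1, 2]   -- ≈ [2.5403, 1.0]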

You can certainly do much the same thing as you already implemented, just with multivariate perturbations. But first (as you should always do with top-level functions), add a type signature:
deriv :: (Double -> Double) -> Double -> Double
(That's not the most general possible signature, but it's probably sufficiently general for everything you'll need.) For brevity, I'll use the synonym
type ℝ = Double
in the following, i.e.
deriv :: (ℝ -> ℝ) -> ℝ -> ℝ
Now what you want is, for example in ℝ²:
grad :: ((ℝ,ℝ) -> ℝ) -> (ℝ,ℝ) -> (ℝ,ℝ)
grad f (x,y) = ( (f (x+h, y) - f (x,y)) / h
               , (f (x, y+h) - f (x,y)) / h )
 where h = 0.00001
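As a quick sanity check (my own example): for f (x,y) = x² + y the exact gradient at (1,2) is (2,1), and the forward difference lands off by about h in the first component:

GHCi> grad (\(x,y) -> x^2 + y) (1,2)
(2.00001,1.0)   -- approximately, up to floating-point noise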
It's awkward to have to write out the components individually, and it ties the definition to a vector space of one particular dimension. A generic way of doing it:
import Data.VectorSpace
import Data.Basis

grad :: (HasBasis v, Scalar v ~ ℝ) => (v -> ℝ) -> v -> v
grad f x = recompose [ (e, (f (x ^+^ h *^ basisValue e) - f x) ^/ h)
                     | (e,_) <- decompose x ]   -- one difference quotient per basis vector
 where h = 0.00001
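Since vector-space provides HasBasis instances for tuples, this generic grad should recover the pair version above directly (my example; output approximate):

GHCi> grad (\(x,y) -> x^2 + y) (1,2) :: (ℝ,ℝ)
(2.00001,1.0)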
Note that this finite differencing with a pre-chosen step is always a trade-off between inaccuracy from higher-order terms and inaccuracy from floating-point errors, so definitely check out automatic differentiation.
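A cheap mitigation worth knowing (my addition, not part of the answer above): a central difference has O(h²) truncation error instead of O(h), at the cost of one extra function evaluation per component:

derivC :: (ℝ -> ℝ) -> ℝ -> ℝ
derivC f x = (f (x+h) - f (x-h)) / (2*h)   -- symmetric difference quotient
 where h = 0.00001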