After becoming interested in various bits of functional programming provided by non-functional languages, I've recently decided to start learning Haskell. As I'm fairly experienced with conventional imperative programming languages, I decided to make my "hello world" something a little more complex - an implementation of gradient descent with 2 variables.
The idea is that you fill in the training set for the algorithm directly in the code, and then run something similar to this:
    descentFunc = singleDescend trainingSet 0.1
    deltas = descend descentFunc ( 100, 1 ) 100000
Here 0.1 is the learning rate (referred to as lr in the code), 100000 is the number of iterations for the loop, and ( 100, 1 ) is an initial guess for the coefficients.
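For reference, a full GHCi session looks roughly like this (Descent.hs is just the name I happen to save the code below under; since the training data lies exactly on the line y = 10x, I'd expect the result to end up very close to ( 0, 10 )):

    ghci> :load Descent.hs
    ghci> descentFunc = singleDescend trainingSet 0.1
    ghci> deltas = descend descentFunc ( 100, 1 ) 100000
    ghci> deltas   -- should print a pair close to (0.0, 10.0)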
The actual code that's running is below. As I'm entirely new to Haskell, I was wondering whether code such as this is acceptable, whether there are any obvious idioms I'm missing or misusing, and whether there are any glaring style errors I've made.
Any comments on my implementation of the algorithm itself are also welcome.
    import Data.List

    -- training examples as (x, y) pairs
    trainingSet = [ (1.0,10.0),(2.0,20.0),(3.0,30.0),(4.0,40.0) ]

    -- linear hypothesis: h(x) = d1 + d2 * x
    hypothesis (d1,d2) x = d1 + (d2 * x)

    squareDiff d (x,y) = diff * diff
        where diff = ( hypothesis d x ) - y

    -- mean squared error over the training set, halved
    costFunc ts d = scaleFactor * sum squares
        where scaleFactor = 1.0 / (2 * genericLength ts)
              squares = map (squareDiff d) ts

    diff d (x,y) = (hypothesis d x) - y

    -- one gradient-descent update for a single coefficient
    descendTheta thetaFunc deltaFunc ts lr deltas = deltaFunc deltas - (scaleFactor * sum scaledDiffs)
        where scaleFactor = lr / genericLength ts
              diffs = map (diff deltas) ts
              scaledDiffs = map (\(x,y) -> x * thetaFunc y) $ zip diffs ts

    descendThetaZero = descendTheta (\_ -> 1) fst
    descendThetaOne = descendTheta (fst) snd

    -- update both coefficients simultaneously from the current pair
    singleDescend ts lr deltas = (thetaZero,thetaOne)
        where thetaZero = descendThetaZero ts lr deltas
              thetaOne = descendThetaOne ts lr deltas

    -- apply the update function i times
    descend func deltas i
        | i == 0 = deltas
        | otherwise = descend func ( func deltas ) ( i - 1 )
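For completeness, this is the kind of quick sanity check I run when compiling it as a standalone program rather than using GHCi (the main below isn't part of the code I'm asking about; it's just there so the cost before and after descending can be compared):

    main :: IO ()
    main = do
        let descentFunc = singleDescend trainingSet 0.1
            start       = ( 100, 1 )
            fitted      = descend descentFunc start 100000
        -- the cost should drop from a large value to (nearly) zero
        putStrLn $ "initial cost: " ++ show (costFunc trainingSet start)
        putStrLn $ "fitted coefficients: " ++ show fitted
        putStrLn $ "final cost: " ++ show (costFunc trainingSet fitted)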