Benchmarking ODE solvers: GSL vs Boost odeint

For our neural simulator, MOOSE, we use the GNU Scientific Library (GSL) for random number generation, for solving systems of non-linear equations, and for solving ODE systems.

Recently I compared the performance of the GSL ODE solver with the Boost odeint solver, both using Runge-Kutta 4. Boost odeint outperformed GSL by roughly a factor of 4. The numerical results were the same, and both implementations were compiled with the -O3 switch.

Below are the numerical results. In the second subplot, a molecule (calcium) changes its concentration, and in the top subplot another molecule (CaMKII) is produced. The network has more than 30 reactions and approximately 20 molecules.

GSL took approximately 39 seconds to simulate the system for 1 year of model time, while Boost odeint took only 8.6 seconds. (This turns out to be a corner case.)

Update: If I let both solvers choose the step size themselves for a given accuracy, Boost usually outperforms the GSL solver by a factor of 1.2x to 2.9x. These tests were done during the development of a signaling network with a great many reactions. In no case I have tested was the Boost ODE solver slower than the GSL solver.

Computing the value of pi using Monte Carlo sampling

I used Haskell to do this. The code is here. I did not use the standard System.Random library to generate random numbers but the mwc-random package; I can't say whether the speed is better or worse.

I wanted to see how fast we converge to the value of pi. On the x-axis we have the number of samples (N), and for each N we compute the value of pi 50 times. This lets us talk about the mean and variance of the computed pi (plotted on the y-axis). It seems that for N >= 1000 we get a reasonable approximation of pi with low variance.

For a 2-D sphere (i.e. a circle, x_1^2 + x_2^2 = 1), the convergence is as follows.


For a 3-D sphere (that is, x_1^2 + x_2^2 + x_3^2 = 1), I get the following:


Now I am a bit more ambitious and use a 10-D sphere:

As you can see, we converge to a reasonable value of pi with low variance, but the convergence does not improve monotonically as in the previous cases. The mean starts from a very low value and gets to a respectable value only after N > 1000 (in the low-dimensional cases we were swinging around 3.1415 from the start). The lesson: we need to sample some minimum number of points from the hyper-cube to get a reasonable estimate.


How does the variance decrease with N (for significantly large N, with 100 repeats of each computation)? Since each sample is a Bernoulli trial, the variance of the hit fraction is p(1-p)/N, so the standard error falls as 1/\sqrt{N} in every dimension; only the constant changes. The following plots start from 2 dimensions and go up to 10.


A problem with the GiNaC library

While modeling a rat neuron's axon, I needed to do some symbolic computation. I first used SymPy, but it failed to return any solution. I tried playing with its options but did not pursue it very far.

Then I went to Wolfram Alpha and pasted my equation there. Click me to see the answer.

Then I thought of using the GiNaC C++ library. It took some work, but I managed to write a small program, available here. The file axon.cpp is the main file and compartment.cpp is the file which creates the equation. Run ./ to compile and run the program; it is a CMake-based project.

To my surprise, GiNaC failed to compute a solution.

-- Configuring done
-- Generating done
-- Build files have been written to: /home/dilawar/Work/GITHUB/courses/NeuroCourse/Assignments/Second/Problem01/cpp
Scanning dependencies of target Axon
[100%] Building CXX object CMakeFiles/Axon.dir/axon.cpp.o
Linking CXX executable Axon
[100%] Built target Axon
And the equation is (0.015915494309189535082+(3.1830988618379066628)*r)^(-1)*(3.5014087480216975145E-4-(0.12095775674984045018)*r)
 Puting wolfram answer back -4.2814382506543584682E-4
terminate called after throwing an instance of 'GiNaC::pole_error'
  what():  power::eval(): division by zero
./ line 2: 24873 Aborted                 (core dumped) ./Axon

This line in the output computes the expression after substituting the Wolfram answer. Ideally it should be zero, but that is not the case.

Puting wolfram answer back -4.2814382506543584682E-4

When I change this value slightly and recompute the expression, I see large changes in the output, i.e. the function changes rapidly around its solution. I am not sure what I am doing wrong with this library. If everything is fine with my program, then the capabilities of this library are rather limited, and I'd recommend not using it for any serious lab work.

Coding with float

Some people trust computers to the degree that they accept a computation as correct whenever the algorithm is right and the program has no bugs. Others mistrust them to the extent that they won't trust any calculation a computer returns.

There are serious limitations. Not only can one not represent 10/3 accurately in a finite decimal representation, one also cannot represent some finite decimal numbers, such as 0.1, exactly in binary format [1]. See [2] for a detailed commentary on the issues involved in computing with real numbers.

These issues have led many to explore alternative models of computation [3]. Until someone comes up with a nice model for computing with real numbers, we have to find ways to deal with them on a computer.

On a Windows machine, try computing \sqrt{4} - 2 or 2 - \sqrt{4} in the built-in calculator. If you find the answer surprising and think of it as a bug, like this guy, you should read [1].

Reference [1] explains why and when x = x + 1.0 is true, and when and why x = x is false, when dealing with floating-point numbers. It is a great reference for electrical engineering students who want to implement a processor with floating-point units in some HDL.

A rule of thumb: when adding two floating-point numbers, their relative magnitudes make a huge difference, and one should be prepared for surprises. One can start researching this with this wiki article. Ordering matters in such cases: adding float a into float b can be very different from adding b into a. Here is Haskell code which demonstrates it. It computes the function f(x) = (2 - \frac{2^3}{12}) - (x - \frac{x^3}{12}) and sums its values over a list in two different orders. One can translate it into another language as well; it also shows off the beauty of Haskell syntax.

-- This is type signature of function. It says that function calc takes an
-- argument of type Float and returns a Float 
calc :: Float -> Float

-- This is the definition of function. Quite readable.
calc x = (x - (x ^ 3) / 12)

-- Haskell can infer the type by itself, so we can choose not to write a type
-- declaration. This function subtracts (calc x) from (calc 2).
calc2 x = (calc 2) - (calc x)

-- This function takes a list of Float and apply function calc2 on all of its
-- elements and add them up. In python, equivalent is
-- def calcList1(l):
--    newlist = []
--    for a in l:
--       newlist.append(calc2(a))
--    return (0.0 + sum(newlist))
calcList1 :: [Float] -> Float
calcList1 l = foldl (+) 0.0 (map calc2 l)

-- This is like the previous function, but the sum is done in the opposite
-- order: foldl (+) 0.0 associates from the left (adding the elements front
-- to back), while foldr (+) 0.0 associates from the right (back to front).
-- In exact arithmetic the two would agree; in floating point they need not.

calcList2 :: [Float] -> Float
calcList2 l = foldr (+) 0.0 (map calc2 l)

-- Here we apply the same function to the elements of the list but add them
-- up in the two different orders. The input step constructs a list l where
-- each element is one step size away from the previous one,
-- e.g. [0.0,0.5..2.0] is [0.0, 0.5, 1.0, 1.5, 2.0].

test1 :: Float -> Float
test1 step = (calcList1 l) - (calcList2 l)
    where l = [0.0,step..2.0]

Let's see now what happens when we load this script into ghci (ghci filename.hs):

*Main> test1 0.1
*Main> test1 0.01
*Main> test1 0.001
*Main> test1 0.0001

One expects 0.0, but that is not what happens here. Life with floating-point numbers is not easy, and it can be terribly confusing. If one is writing a simulator, it pays to be aware of the potential traps.


  1. What Every Computer Scientist Should Know About Floating-Point Arithmetic
  2. How are real numbers specified in computation?
  3. Continuum Computing – The Next Step in the Theory of Computing