Multi-dimensional root finding using the Newton-Raphson method

In short, it does the same thing as the GNU GSL `gnewton` solver: it finds roots of a system of non-linear equations.

The standard caveats apply when finding multi-dimensional roots:
https://www.gnu.org/software/gsl/manual/html_node/Multidimensional-Root_002dFinding.html#Multidimensional-Root_002dFinding

The templated class must be compiled with `-std=c++11`. It uses the `boost::numeric::ublas` library.

The code is available here:

https://github.com/dilawar/algorithms/tree/master/RootFinding

The header file contains the class; the file `test_root_finding.cpp` shows how to use it.
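
This is not the repository's class; below is a minimal sketch of the core step such a solver performs, assuming the caller supplies the system `f` and its Jacobian `jac` as callables returning a ublas vector and matrix (the names `newton_step`, `f`, and `jac` are mine, not the repository's):

#include <stdexcept>
#include <boost/numeric/ublas/lu.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ub = boost::numeric::ublas;

// One Newton-Raphson step for F(x) = 0: solve J(x) dx = -F(x), return x + dx.
template <typename F, typename J>
ub::vector<double> newton_step(F f, J jac, const ub::vector<double> &x)
{
    ub::vector<double> dx = -f(x);                     // right-hand side, -F(x)
    ub::matrix<double> A = jac(x);                     // Jacobian evaluated at x
    ub::permutation_matrix<std::size_t> pm(A.size1());
    if (ub::lu_factorize(A, pm) != 0)                  // in-place LU decomposition
        throw std::runtime_error("singular Jacobian");
    ub::lu_substitute(A, pm, dx);                      // dx now solves J dx = -F(x)
    return x + dx;
}

Iterating this step until `boost::numeric::ublas::norm_2(f(x))` falls below a tolerance gives a basic multi-dimensional Newton solver.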

best,
Dilawar


Performance of random number generator (Mersenne Twister)

I compared 4 implementations of the Mersenne Twister (mt19937, 32-bit) algorithm: the C++11 standard library <random>, boost, the GNU Scientific Library (GSL), and our own implementation for the MOOSE simulator. The code is here (you can customize it to run on your platform; you will have to disable the MOOSE-related code). The -O3 flag was passed to the gcc-4.8 compiler, on OpenSUSE Leap 42.1.

I generated random numbers in a tight loop of size N. The numbers were stored in a pre-initialized vector, and for the benchmarking I subtracted the baseline time taken to put N values into a std::vector. I ran the benchmark 5 times for each implementation.
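
For illustration, here is a minimal sketch of such a timing harness (this is not the linked benchmark code; the structure, seed, and names are my own), shown for the STL mt19937 only:

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

int main()
{
    const std::size_t N = 10000000;
    std::vector<std::uint32_t> out(N);          // pre-initialized vector

    // Baseline: time taken just to store N values into the vector.
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i)
        out[i] = static_cast<std::uint32_t>(i);
    std::chrono::duration<double> baseline = std::chrono::steady_clock::now() - t0;

    // Generate N random numbers in a tight loop.
    std::mt19937 rng(5489u);                    // 5489 is mt19937's default seed
    t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i)
        out[i] = rng();
    std::chrono::duration<double> total = std::chrono::steady_clock::now() - t0;

    // Report the generation time with the storage baseline subtracted.
    std::cout << "STL mt19937: " << (total - baseline).count() << " s\n";
    return 0;
}

Here is a sample run: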

Wrote 1000000 sampled to sample_numbers.csv file
GSL random number generator: mt19937
MOOSE=3135507266, STL=2357136044,BOOST=2357136044,GSL=4293858116
MOOSE=1811477324, STL=2546248239,BOOST=2546248239,GSL=699692587
MOOSE=2095834071, STL=3071714933,BOOST=3071714933,GSL=1213834231
MOOSE=258599318, STL=3626093760,BOOST=3626093760,GSL=4068197670
MOOSE=1470212236, STL=2588848963,BOOST=2588848963,GSL=994957275
MOOSE=3009017253, STL=3684848379,BOOST=3684848379,GSL=2082945813
MOOSE=2280525416, STL=2340255427,BOOST=2340255427,GSL=4112332215
MOOSE=2165689929, STL=3638918503,BOOST=3638918503,GSL=3196767107
MOOSE=551388967, STL=1819583497,BOOST=1819583497,GSL=2319469851
MOOSE=368560217, STL=2678185683,BOOST=2678185683,GSL=3178073856
N = 100
 Baseline (vector storage time) ends. Time 2.46e-07
 MOOSE start.. ends. Time 1.309e-06
 STL starts .. ends. Time 1.6e-07
 BOOST starts .. ends. Time 1.54e-07
 GSL starts .. ends. Time 5.07e-07
N = 1000
 Baseline (vector storage time) ends. Time 2.26e-07
 MOOSE start.. ends. Time 6.63e-06
 STL starts .. ends. Time 3.883e-06
 BOOST starts .. ends. Time 2.3e-06
 GSL starts .. ends. Time 5.844e-06
N = 10000
 Baseline (vector storage time) ends. Time 1.213e-06
 MOOSE start.. ends. Time 6.1885e-05
 STL starts .. ends. Time 4.4721e-05
 BOOST starts .. ends. Time 2.4108e-05
 GSL starts .. ends. Time 6.3407e-05
N = 100000
 Baseline (vector storage time) ends. Time 1.3876e-05
 MOOSE start.. ends. Time 0.000616378
 STL starts .. ends. Time 0.000445375
 BOOST starts .. ends. Time 0.00023736
 GSL starts .. ends. Time 0.000630819
N = 1000000
 Baseline (vector storage time) ends. Time 0.000281644
 MOOSE start.. ends. Time 0.00591533
 STL starts .. ends. Time 0.00426978
 BOOST starts .. ends. Time 0.00222824
 GSL starts .. ends. Time 0.00612132
N = 10000000
 Baseline (vector storage time) ends. Time 0.00418973
 MOOSE start.. ends. Time 0.0574826
 STL starts .. ends. Time 0.0411453
 BOOST starts .. ends. Time 0.0204587
 GSL starts .. ends. Time 0.0590613
N = 100000000
 Baseline (vector storage time) ends. Time 0.0465758
 MOOSE start.. ends. Time 0.569934
 STL starts .. ends. Time 0.399875
 BOOST starts .. ends. Time 0.195795
 GSL starts .. ends. Time 0.598316
N = 1000000000
 Baseline (vector storage time) ends. Time 0.458415
 MOOSE start.. ends. Time 5.69154
 STL starts .. ends. Time 4.09148
 BOOST starts .. ends. Time 2.02454
 GSL starts .. ends. Time 5.85985
[100%] Built target run_randnum_benchmark

Here is the summary: the boost random number generator is approximately 2x faster than the C++ STL <random> implementation, and roughly 3x faster than GSL (which is about as fast as the MOOSE random number generator).

[Figure: moose_stl_boost_gsl. MOOSE, C++11 <random> (STL), BOOST, and GSL compared for the mt19937 generator; run times for N from 100 to 10^9, plotted on a log scale.]

[Figure: moose_stl_boost_gslratio. Comparison of speed: GSL and MOOSE are equally fast and the slowest; BOOST is the fastest (roughly 3x faster than MOOSE or GSL, and 2x faster than STL).]


Comparison of various matplotlib/pylab plotting styles, versions 1.4 and 1.5

Global font size is set to 8 in all figures.

Style 'ggplot' is my first preference, but the overlapping histograms do not have good contrast, so I settled for 'seaborn-darkgrid'.


Derivation of Cable equation

The cable equation has applications in computational neuroscience. The following document derives it.

Derivation of cable equation
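
For reference, the standard passive form that such a derivation arrives at (this is the textbook equation, not quoted from the linked document) is

\[
\lambda^2 \frac{\partial^2 V}{\partial x^2} = \tau \frac{\partial V}{\partial t} + V,
\qquad
\lambda = \sqrt{\frac{r_m}{r_i}}, \qquad \tau = r_m c_m,
\]

where V(x, t) is the membrane potential, and r_m, r_i, and c_m are the membrane resistance, axial resistance, and membrane capacitance per unit length of cable.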


Fix “unknown script ‘context.lua’ or ‘mtx-context.lua'”

  • Run `$ locate mtx-context.lua` in a Linux shell. If the file is found, then you probably need to export the `TEXMF` variable.
  • In my case, all I had to do was `$ export TEXMF=/usr/share/texmf`; this was on an OpenSUSE Leap system, where I had installed the `texlive-context` package using `zypper`.

Other solutions are here: http://tex.stackexchange.com/questions/53892/texlive-2011-context-problem


Computing the value of pi using Monte-Carlo sampling

I used Haskell to do this; the code is here. I did not use the standard System.Random library to generate random numbers but the mwc-random package instead. I can't say if the speed is better or worse.

I wanted to see how fast we converge to the value of pi. On the x-axis we have the number of samples (N), and for each N we compute the value of pi 50 times. This way we can talk about the mean and variance of the computed pi (which we plot on the y-axis). It seems that for N >= 1000 we get a reasonable approximation of pi with low variance.
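
The post does not spell out the estimator, so for completeness (this is standard Monte-Carlo reasoning, not taken from the linked code): draw N points uniformly from the cube [-1, 1]^d and let f be the fraction that lands inside the unit ball. Then

\[
f \approx \frac{V_d}{2^d}, \qquad V_d = \frac{\pi^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)},
\qquad \text{so} \qquad
\pi \approx \left( f \, 2^d \, \Gamma\left(\frac{d}{2}+1\right) \right)^{2/d}.
\]

For d = 2 this reduces to the familiar pi ≈ 4f.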

For a 2-D sphere (i.e., a circle: x_1^2 + x_2^2 = 1), the convergence is as follows.

[Figure: monte_carlo_pi]

For a 3-D sphere (that is, x_1^2 + x_2^2 + x_3^2 = 1), I get the following:

[Figure: four_d_sphere.png]

Now I am a bit more ambitious and use a 10-D sphere:

As you can see, we converge to a reasonable value of pi with low variance, but the convergence does not improve monotonically as it did in the previous cases. The mean starts from a very low value and gets to a respectable value only after N > 1000 (in the low-dimensional cases we were swinging around 3.1415 from the start). The lesson: we need to sample some minimum number of points from the hyper-cube to get a reasonable estimate.
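
To put a number on that (my own back-of-the-envelope arithmetic, using the volume formula above): for d = 10,

\[
\frac{V_{10}}{2^{10}} = \frac{\pi^{5}/\Gamma(6)}{1024} = \frac{\pi^{5}/120}{1024} \approx \frac{2.55}{1024} \approx 0.0025,
\]

so only about 0.25% of uniform samples from the hyper-cube land inside the unit ball; at small N the estimator sees almost no hits, which is why the mean starts so low.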

[Figure: ten_d_sphere_2]

How does the variance decrease with N (for significantly large N, with 100 repeats of each computation)? The following plots start from 2 dimensions and go up to 10.

[Data files: data_2.csv through data_12.csv]


Generating random numbers in Haskell (fast)

import qualified System.Random.MWC as R
import qualified Data.Vector.Unboxed as U

-- Generate a vector of n floats drawn from the uniform distribution on (0, 1).
random_floats :: Int -> IO (U.Vector Float)
random_floats n = R.createSystemRandom >>= \gen -> R.uniformVector gen n

-- Scale random floats from (0, 1) to (lo, hi): y = (hi - lo) * x + lo.
scale_samples :: (Float, Float) -> U.Vector Float -> U.Vector Float
scale_samples (lo, hi) = U.map (\v -> (hi - lo) * v + lo)

-- Sample a given space at n points. Each axis gets its own freshly
-- generated vector, so coordinates are independent across dimensions.
sampled_space :: [(Float, Float)] -> Int -> IO [U.Vector Float]
sampled_space axes n = mapM (\axis -> fmap (scale_samples axis) (random_floats n)) axes

main :: IO ()
main = do
    vs <- sampled_space [(1, 3), (-1, 1)] 100
    print vs
    putStrLn "Done"