The script is hosted at http://github.com/dilawar/playground/raw/master/Python/test_dict_sorting.py . It is based on the work described at https://writeonly.wordpress.com/2008/08/30/sorting-dictionaries-by-value-in-python-improved/ .
My script has been changed to accommodate Python 3: iteritems is gone and is replaced by items (I am not sure whether it is an entirely fair replacement). For the method names and how they are implemented, please refer to the script or the blog post.
The following chart shows the comparison. PyPy does not boost the performance, for the simple reason that the dictionary being sorted is not large enough. I have included it just to make the point that PyPy can slow things down on small computations.
The fastest method is sbv6, which is based on PEP 265 (https://www.python.org/dev/peps/pep-0265/). Python 3 consistently performs better than Python 2.
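For reference, a PEP 265 style sort-by-value looks roughly like the sketch below. This is a minimal illustration, not the actual sbv6 from the script; the helper name sort_by_value is mine.

from operator import itemgetter

def sort_by_value(d, reverse=True):
    # PEP 265 style: take the (key, value) pairs and sort them
    # by the value, i.e. by index 1 of each pair.
    return sorted(d.items(), key=itemgetter(1), reverse=reverse)

print(sort_by_value({"a": 3, "b": 1, "c": 2}))
# [('a', 3), ('c', 2), ('b', 1)]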
You want to write a Maxima expression to a file which can be read by another application, e.g. LaTeX. Let's say the expression is sys, which contains the variable RM, and you want RM to appear as R_m in the output; texput does that substitution. Be sure to load mactex-utilities if you have a matrix: without this module, the tex command generates plain TeX output, not LaTeX.
load( "mactex-utilities" )$
sys : RM * a / b * log( 10 )$
texput( RM, "R_m")$
sysTex : tex( sys, false)$
with_stdout( "outout.txt", display( sysTex ) )$
Other methods, such as write, put extra non-TeX characters in the file. After executing the above, the file output.txt contains the TeX form of the expression.
This is a common gotcha! Add UIC (use initial conditions) to your .TRAN line, e.g.
.TRAN 1ns 100ns UIC
Otherwise, the initial conditions will simply be ignored. See http://www.ngspice.com/spice3f5_doc/4.3.9.php
I implemented my own CSV reader using the cassava library. The reader from the MissingH library was taking too long (~17 seconds) for a file with 43200 lines. I compared the result with the python-pandas CSV reader. Below is a rough comparison.
|reader |time |
|cassava (ignore #) | |
|cassava (no support for ignoring #) | |
|MissingH (parsec-based) |> 10 sec |
As is obvious, pandas does really well at reading CSV files. I was hoping that my CSV reader would do better, but it didn't. Still, it beats the parsec-based reader hands down.
The code is here https://github.com/dilawar/HBatteries/blob/master/src/HBatteries/CSV.hs
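The pandas side of such a comparison can be reproduced along these lines. This is a sketch under my own assumptions: the file name data.csv and the timing code are illustrative, not from the original benchmark.

import time
import pandas as pd

start = time.time()
# comment='#' tells pandas to skip lines starting with '#',
# matching the "ignore #" behaviour of the cassava reader.
df = pd.read_csv("data.csv", comment="#")
print(len(df), "rows read in", time.time() - start, "seconds")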
Here is a test case:
>>> import numpy as np
>>> a = np.array( [ 0.0, 1, 2, 0.2, 0.0, 0.0, 2, 3] )
I want to turn all non-zero elements of this array into 1. I can do it using np.where and numpy indexing.
>>> a[ np.where( a != 0 ) ] = 1
>>> a
array([ 0., 1., 1., 1., 0., 0., 1., 1.])
np.where, with a single argument, returns the indices where the condition is true. For example, if you want to change all 0s to -1:
>>> a[ np.where( a == 0 ) ] = -1.0
That’s it. Check out np.clip as well.
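Two variations are worth knowing, sketched below with my own choice of example: a boolean mask can index the array directly without np.where, and the three-argument form of np.where builds a new array instead of modifying one in place.

>>> import numpy as np
>>> a = np.array( [ 0.0, 1, 2, 0.2, 0.0, 0.0, 2, 3] )
>>> a[ a != 0 ] = 1                # boolean mask, no np.where needed
>>> a
array([ 0., 1., 1., 1., 0., 0., 1., 1.])
>>> np.where( a == 0, -1.0, a )    # non-mutating: -1 where zero, a elsewhere
array([-1., 1., 1., 1., -1., -1., 1., 1.])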
For our neural simulator, MOOSE, we use the GNU Scientific Library (GSL) for random number generation, for solving systems of non-linear equations, and for solving ODE systems.
Recently I checked the performance of the GSL ODE solver vs. the Boost ODE solver (odeint), both using Runge-Kutta 4. boost-odeint outperformed GSL by approximately a factor of 4, and the numerical results were the same. Both implementations were compiled with the -O3 switch.
Below are the numerical results. In the second subplot, a molecule (calcium) is changing its concentration, and in the top one another molecule (CaMKII) is produced. This network has more than 30 reactions and approximately 20 molecules.
GSL took approximately 39 seconds to simulate the system for 1 year, while boost-odeint took only 8.6 seconds. (This turns out to be a corner case.)
Update: If I let both solvers choose the step size by themselves for a given accuracy, Boost usually outperforms the GSL solver by a factor of 1.2x to 2.9x. These tests were done during the development of a signaling network which has a great many reactions. In no case that I have tested was the Boost ODE solver slower than the GSL solver.
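The comparison above was done in C++ against GSL and boost-odeint. As an illustration of the fixed-step Runge-Kutta 4 scheme both solvers were running, here is a minimal Python sketch on a toy two-species reaction; the system, rate constants, and step size are my own, not from the benchmark.

import numpy as np

def rk4_step(f, t, y, dt):
    # One classical Runge-Kutta 4 step: four slope evaluations,
    # combined with weights 1/6, 1/3, 1/3, 1/6.
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def reaction(t, y):
    # Toy reversible reaction A <-> B with rates 0.1 and 0.05.
    a, b = y
    return np.array([-0.1 * a + 0.05 * b, 0.1 * a - 0.05 * b])

t, y, dt = 0.0, np.array([1.0, 0.0]), 0.01
while t < 100.0:
    y = rk4_step(reaction, t, y, dt)
    t += dt
print(y)   # approaches the equilibrium [1/3, 2/3]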