Smoldyn packages

The following description is from the Smoldyn website:

Smoldyn is a computer program for cell-scale biochemical simulations. It simulates each molecule of interest individually to capture natural stochasticity and to yield nanometer-scale spatial resolution. It treats other molecules implicitly, enabling it to simulate hundreds of thousands of molecules over several minutes of real time. Simulated molecules diffuse, react, are confined by surfaces, and bind to membranes much as they would in a real biological system.

Packages for various Linux flavors can be found here. The current version is 2.54. Please let me know if there are any problems with the packages.


Why Python’s `random.choice` is slow when the number of choices is small

Consider the following two snippets. Both pick 0 or 1 with equal probability.

a = random.choice([0,1])

Or,

a = 1 if random.random() < 0.5 else 0

The latter is much faster, roughly eight times faster in the benchmark below.

In [4]: %timeit random.choice([0,1])
834 ns ± 2.19 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [5]: %timeit 1 if random.random() < 0.5 else 0
101 ns ± 0.25 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
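Both expressions draw from the same distribution, so the speedup comes for free. A quick sanity check (the sample size and seed here are arbitrary):

```python
import random

random.seed(42)  # seeded only for reproducibility

n = 100_000  # number of samples per method
via_choice = sum(random.choice([0, 1]) for _ in range(n))
via_random = sum(1 if random.random() < 0.5 else 0 for _ in range(n))

# Both proportions of 1s should be close to 0.5.
print(via_choice / n, via_random / n)
```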

What about when the number of choices is large? Writing that many if/else branches is a waste of time. One can use the following instead.

a = int(random.random()*100)

For 100 choices, it generates an integer between 0 and 99 inclusive, since random.random() is always strictly less than 1. It is considerably faster than choice.

In [15]: %timeit random.choice( range(100) )
1 µs ± 1.38 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [16]: %timeit int(random.random()*100)
224 ns ± 1.23 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
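Since random.random() returns values in [0, 1), int(random.random() * n) always lands in 0..n-1; the stdlib’s random.randrange(n) draws from the same distribution, though with more call overhead. A quick check:

```python
import random

random.seed(1)  # seeded only for reproducibility

n = 100
samples = [int(random.random() * n) for _ in range(10_000)]

# random.random() is in [0, 1), so int(random.random() * n)
# is always in 0..n-1; the value n itself can never occur.
print(min(samples), max(samples))
```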

Surprise:

I thought that if my choices were already stored in a list, choice would probably be faster, but I was wrong.

In [20]: choices = list(range(100))

In [21]: %timeit random.choice( choices )
752 ns ± 1.56 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

More importantly, DO NOT do this.

In [17]: choices = range(100)

In [18]: %timeit random.choice( choices )
798 ns ± 6.09 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

Color alternate rows in a LaTeX table

The often mentioned solution is the following:

\usepackage[table]{xcolor} % loads also colortbl
\rowcolors{1}{gray!10}{white} % color rows with gray!10 then white and so on starting with row 1

When this works, great; when it doesn’t, it is not very straightforward to figure out why.

If you have run into an xcolor-related issue, something like “xcolor is already loaded”, then do the following at the very top of the document, even before \documentclass [2]:

\PassOptionsToPackage{table}{xcolor}

And do not use \usepackage[table]{xcolor} at all; then, somewhere in your document, before the table, use the \rowcolors statement.

\rowcolors{1}{gray!10}{white}

The xcolor package is heavily used by many other packages, and it is very likely that one of the packages you loaded earlier already loads xcolor; hence the error.
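Putting the pieces together, a minimal document looks like this (the class and table contents are just placeholders):

```latex
% Must come before \documentclass so that xcolor is loaded with
% the `table` option no matter which package pulls it in first.
\PassOptionsToPackage{table}{xcolor}
\documentclass{article}
\begin{document}
% Color rows gray!10, white, gray!10, ... starting with row 1.
\rowcolors{1}{gray!10}{white}
\begin{tabular}{ll}
A & B \\
C & D \\
E & F \\
\end{tabular}
\end{document}
```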

References:

  1. https://tex.stackexchange.com/a/5365/8087
  2. https://tex.stackexchange.com/a/5375/8087 (see comment by Bertram)

Sampling uniformly on unit simplex

The script is available here https://github.com/dilawar/Scripts/blob/master/simplex_generate.py

usage: simplex_generate.py [-h] [--dimension DIMENSION] [--number NUMBER]
                           [--output OUTPUT] [--method METHOD]

Generate x1,x2,...,xk such that sum(x1,x2,...,xk)=1

optional arguments:
  -h, --help            show this help message and exit
  --dimension DIMENSION, -d DIMENSION
                        Dimension of vector (default 3)
  --number NUMBER, -N NUMBER
                        Total number of vectors to generate (-1 for infinity).
  --output OUTPUT, -o OUTPUT
                        Output file
  --method METHOD, -m METHOD
                        Method (uniform sampling|TODO)

It writes the sampled numbers to stdout. By default, it samples from 3d space.
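The script itself isn’t reproduced here, but the standard trick for uniform sampling on the simplex is to draw k i.i.d. exponential variates and normalize (equivalently, sample from a flat Dirichlet distribution). A minimal sketch, independent of the script above:

```python
import random

def sample_simplex(k):
    """Return a point drawn uniformly from the (k-1)-simplex.

    Draw k i.i.d. Exp(1) variates and normalize; the result is
    distributed as Dirichlet(1, ..., 1), i.e. uniform on the simplex.
    """
    xs = [random.expovariate(1.0) for _ in range(k)]
    total = sum(xs)
    return [x / total for x in xs]

point = sample_simplex(3)
print(point)  # three non-negative numbers summing to 1
```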

Stationary distribution of Markov Chains using Python

I always forget to add one special condition to the transition matrix A when computing the stationary distribution: in addition to \pi = \pi A, one also has to ensure that \sum_i \pi_i = 1.

import numpy as np

def solveStationary( A ):
    """ x = xA where x is the answer
    x - xA = 0
    x( I - A ) = 0 and sum(x) = 1
    """
    n = A.shape[0]
    a = np.eye( n ) - A
    # Stack the normalization constraint sum(x) = 1 as an extra row.
    a = np.vstack( (a.T, np.ones( n )) )
    b = np.array( [0] * n + [1], dtype=float )
    # Overdetermined system: solve by least squares.
    return np.linalg.lstsq( a, b, rcond=None )[0]

And that’s it!
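A quick sanity check with a two-state chain whose stationary distribution is known in closed form (the transition matrix here is made up for illustration):

```python
import numpy as np

# Row-stochastic transition matrix of a two-state chain; its
# stationary distribution is (5/6, 1/6).
A = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Solve x(I - A) = 0 together with sum(x) = 1 by least squares,
# the same system solveStationary sets up.
n = A.shape[0]
a = np.vstack(((np.eye(n) - A).T, np.ones(n)))
b = np.array([0.0] * n + [1.0])
pi = np.linalg.lstsq(a, b, rcond=None)[0]

print(pi)  # approximately [0.8333, 0.1667]
```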

Peak detection in signal using Python

Usually people use scipy.signal to detect peaks in a signal:

scipy.signal.find_peaks_cwt( vec, widths )

which returns a list of indices where vec has maxima (note that find_peaks_cwt also requires a widths argument giving the expected peak widths). However, this does not work all the time, especially when the vector is noisy. I could not get it to work well enough to compute the modality of distributions.

This is the most sophisticated library (AFAIK): https://gist.github.com/sixtenbe/1178136

It can be installed via pip:

$ pip install analytic_wfm

To use,

>>> import analytic_wfm
>>> vec # this is the 1d array in which I want to detect peaks.
>>> maxPeaks, minPeaks = analytic_wfm.peakdetect( vec, lookahead = 10 )

This returns the maximum peaks and the minimum peaks. Various other functions are described in the readme.txt file.
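For reference, the lookahead idea behind peakdetect can be sketched in a few lines; this is a simplified stand-in, not the library’s actual implementation:

```python
import math

def find_local_maxima(vec, lookahead=10):
    """Return indices i where vec[i] is the largest value within
    +/- lookahead samples and a strict local maximum."""
    peaks = []
    for i in range(lookahead, len(vec) - lookahead):
        window = vec[i - lookahead : i + lookahead + 1]
        if vec[i] == max(window) and vec[i - 1] < vec[i] > vec[i + 1]:
            peaks.append(i)
    return peaks

# A sine wave with period 40 has peaks at indices 10, 50, 90, ...
vec = [math.sin(2 * math.pi * x / 40) for x in range(200)]
print(find_local_maxima(vec, lookahead=10))  # [10, 50, 90, 130, 170]
```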

Here is an example where I used this library to compute modality of a distribution:

[Figure: coupled_bis]