Python “threading”

Just a quick rant about threading.  After working with Java for so long, I’d gotten used to the idea that a thread is an independent entity, one that can work for you without slowing down the main body of your program.  In Python, that’s really not the case.

In Python (CPython, at least), a thread shares memory with the main “thread” of your code, but the Global Interpreter Lock (GIL) keeps it from running Python code independently.  In effect, you’re stuck with your code running as if on a single core, with the limitation that only one line of code can be processed at a time – meaning that all your threads take turns passing lines of code to the interpreter to be run. (Or, as a mental image, that’s how it works; the reality is a bit more subtle.)

Unfortunately, that means that Python threads don’t really speed up CPU-bound code, and if there are enough of them contending for the GIL, they can slow it down significantly.
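You can see this for yourself with a minimal sketch (the cpu_bound helper and the iteration count here are just made up for illustration) that runs the same CPU-bound work sequentially and then with two threads:

```python
import threading
import time

def cpu_bound(n):
    """Pure-Python busy loop; holds the GIL the whole time it runs."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# Run the same work twice, sequentially...
start = time.perf_counter()
results = [cpu_bound(2_000_000), cpu_bound(2_000_000)]
sequential = time.perf_counter() - start

# ...and twice "in parallel" with threads.
out = [None, None]

def worker(idx):
    out[idx] = cpu_bound(2_000_000)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# On CPython the threaded version is no faster (and often a bit slower),
# because the GIL lets only one thread execute bytecode at a time.
print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

The answers come out identical either way; only the wall time (doesn’t) change.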

The solution turned out to be Python’s “multiprocessing” module, which lets you spawn processes instead of threads.  Each process can run on a different CPU (if you have enough cores…), but shares no memory with the main process or thread of your code.  Thus, you have to work out a system of thread-safe (process-safe?) queues, where each process can dump information into a buffer, allowing other threads or processes to pick it up and work on it independently.  The worker processes consume the information in parallel, cutting down the running time (wall time) of your code.
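Here’s a minimal sketch of that queue setup (the worker/run names and the squaring task are placeholders of my own): workers pull jobs from an input queue and push results to an output queue, with a None sentinel to shut each worker down.

```python
import multiprocessing as mp

def worker(in_q, out_q):
    """Pull jobs until a sentinel arrives; push results back."""
    for n in iter(in_q.get, None):   # None is the stop sentinel
        out_q.put(n * n)

def run(jobs, n_workers=4):
    in_q, out_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(in_q, out_q))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for j in jobs:
        in_q.put(j)
    for _ in procs:                  # one sentinel per worker
        in_q.put(None)
    # Drain results BEFORE joining, so the queues never fill and block.
    results = [out_q.get() for _ in jobs]
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    # Results arrive in whatever order the workers finish.
    print(sorted(run(range(10))))
    # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```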

All in all, it’s actually a relatively elegant system, not much different from many other languages, with the exception of the terminology – processes vs. threads.  Python got it right, but it took me a while to figure out that I was using the terms incorrectly.

At any rate, without any other optimization, multiprocessing with 30 processes brings the wall time of my code down from about 3 hours to about 15 minutes.  It’s almost time to start looking into C code optimization…  Almost. (-;

7 thoughts on “Python “threading””

    • Thanks John! I’m glad to hear from you – I hope all is well at Invitae.

      I’ll definitely play with PyPy, it looks like a really useful project. It’s probably worth blogging about that as well, if I can get it to run my code.

  1. Pingback: Faster is better… |

  2. Queues are nice but I find multiprocessing.Pool to be a more intuitive interface.

    If you’re on Linux and need to send a lot of data to the process queues, you can take advantage of the fact that the processes inherit the main process’s memory via copy-on-write. That is, if you start a multiprocessing.Pool, each subprocess has full read access to any global variables.

    It’s a hack but can save the work of putting work in the queue. Instead, start your Pool after your matrix is in memory (as a global), then have the subprocesses read from the matrix and return their result. If the matrix is huge, you’ll get a lot of savings by not sticking it into the work queue (which causes the whole thing to be pickled).
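A rough sketch of the copy-on-write trick described in that comment (assuming Linux, where Pool workers are created by fork(); the MATRIX global and row_sum function are placeholder names of my own):

```python
import multiprocessing as mp

# Build the large structure BEFORE starting the Pool, as a global.
# On Linux, fork()ed workers see it via copy-on-write, so it is never
# pickled into the work queue.
MATRIX = [[i * j for j in range(1000)] for i in range(1000)]

def row_sum(i):
    # Workers read the inherited global; only the small index i (and
    # the returned sum) travel through the queues, not the matrix.
    return sum(MATRIX[i])

if __name__ == "__main__":
    with mp.Pool(4) as pool:
        sums = pool.map(row_sum, range(len(MATRIX)))
    print(sums[:3])  # [0, 499500, 999000]
```

Note this only works for *reading*: each worker that writes to MATRIX modifies its own copy, invisibly to the others.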
