My algorithm had to run as fast as possible, so I naturally ended up thinking about parallel processing. My application was well suited to parallelism: at each instant I received two images (from the left and right sides of my vehicle).
Most computers now have at least two processors, so multiprocessing was the simplest way to minimize computation time! In addition, since no data had to be passed from one task to the other (the left and right images are totally unrelated), threads were clearly not the most interesting solution for me.
Basically, the following example is perfect for applications in which the exact same task has to be performed several times on unrelated data:
Here is a simple multiprocessing example, coming from the Python documentation:
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
This example creates a process that runs the function f. The Process(...) call specifies the function to be run and its arguments, and start() launches the process.
Finally, join() waits for the process to finish properly before the program exits.
Creating two processes now appears to be pretty simple:
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p1 = Process(target=f, args=('la meute',))
    p2 = Process(target=f, args=('tout le monde',))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print('Exiting!')
In this second example, the final print is executed only once p1 and p2 have both finished.
In order to improve performance, a simple idea is to create as many processes as there are cores in your computer. Once again, Python has already done the work for you:
import multiprocessing

nb_processes = multiprocessing.cpu_count()
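Putting these two ideas together, the standard library's multiprocessing.Pool can spread one task over exactly that many worker processes. Here is a minimal sketch (the function f and its input strings are just placeholders for your own task and data):

from multiprocessing import Pool, cpu_count

def f(name):
    # The task to repeat on each unrelated piece of data.
    return 'hello ' + name

if __name__ == '__main__':
    # One worker process per core.
    with Pool(cpu_count()) as pool:
        # map() splits the inputs across the workers and
        # collects the results in the original order.
        results = pool.map(f, ['bob', 'la meute', 'tout le monde'])
    print(results)

pool.map blocks until every worker has finished, so it plays the same role as calling join() on each process by hand.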
Here it is: your application should now run noticeably faster!
In case you don’t know the difference between a process and a thread, you should have a look here.