python - Tornado webserver: unlimited subprocess forking


http://tornado.readthedocs.org/en/latest/tcpserver.html#tornado.tcpserver.TCPServer.start

http://tornado.readthedocs.org/en/latest/httpserver.html

server = HTTPServer(app)
server.bind(8888)
server.start(0)  # forks multiple sub-processes
IOLoop.instance().start()

When 0 is passed to server.start(), Tornado forks a maximum of x subprocesses (where x equals the number of machine cores; in my case, 4).

To test it, I have 2 controllers: controller (a) calls sleep(9999), and the other, a quick controller (b), returns "hello world".

When I make 3 concurrent requests to controller (a), plus 1 request to controller (b), everything works fine and "hello world" is returned.

But when I make 4 concurrent requests to controller (a), plus 1 request to controller (b), the (b) request waits.

How can I remove the limit on the number of forks?

Thanks!

There isn't an option to fork an unlimited number of subprocesses. The documentation states this:

If num_processes is None or <= 0, we detect the number of cores available on this machine and fork that number of child processes. If num_processes is given and > 1, we fork that specific number of sub-processes.
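The core detection described above is the same as the standard library's multiprocessing.cpu_count(); a minimal check of what start(0) would fork on your machine:

```python
import multiprocessing

# Number of worker processes Tornado forks for start(0), per the docs quoted above.
num_processes = multiprocessing.cpu_count()
print(num_processes)
```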

You could specify a higher number if you wanted, but I think you'd find that at some point not far above the number of cores on your system, it starts to hurt performance.
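If you do want more workers than cores, start() accepts an explicit count. A startup sketch (the port and the worker count of 16 are illustrative assumptions, and `app` is an existing tornado.web.Application):

```python
# Sketch: fork an explicit number of workers instead of one per core.
def serve(app, port=8888, workers=16):
    from tornado.httpserver import HTTPServer  # imported lazily; tornado assumed installed
    from tornado.ioloop import IOLoop

    server = HTTPServer(app)
    server.bind(port)
    server.start(workers)  # given and > 1: forks exactly `workers` sub-processes
    IOLoop.instance().start()
```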

Tornado isn't designed to be run in a way that requires forking many subprocesses. One of the major features of Tornado is asynchronous I/O, which allows it to handle many more concurrent connections than num_processes. For example, if you replace the call to sleep(9999) in your controller with a non-blocking sleep, you'd be able to handle connections to controller (b) instantly.
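To illustrate why a non-blocking sleep frees the process to serve other requests, here is a minimal sketch using the standard library's asyncio event loop (the same idea Tornado's IOLoop is built on; in Tornado itself you would use something like tornado.gen.sleep). The controller names and the 0.2-second delay are hypothetical:

```python
import asyncio
import time

async def slow_controller():
    # Non-blocking sleep: control returns to the event loop while we wait.
    await asyncio.sleep(0.2)
    return "slow done"

async def quick_controller():
    return "hello world"

async def main():
    start = time.monotonic()
    slow = asyncio.ensure_future(slow_controller())  # start the slow request
    quick = await quick_controller()                 # serve the quick request
    elapsed = time.monotonic() - start
    print(quick)             # the quick response arrives right away...
    assert elapsed < 0.2     # ...without waiting out the slow request's sleep
    await slow

asyncio.run(main())
```

With a blocking time.sleep(0.2) in slow_controller, the quick request would instead wait the full 0.2 seconds, which is exactly the behaviour described in the question.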

