uWSGI Documentation Release 2.0


October 07, 2014

Contents

1 Included components (updated to latest stable release)
2 Quickstarts
  2.1 Quickstart for Python/WSGI applications
  2.2 Quickstart for perl/PSGI applications
  2.3 Quickstart for ruby/Rack applications
  2.4 Snippets
3 Table of Contents
  3.1 Getting uWSGI
  3.2 Installing uWSGI
  3.3 The uWSGI build system
  3.4 Managing the uWSGI server
  3.5 Supported languages and platforms
  3.6 Supported Platforms/Systems
  3.7 Web server integration
  3.8 Frequently Asked Questions (FAQ)
  3.9 Things to know (best practices and "issues") READ IT !!!
  3.10 Configuring uWSGI
  3.11 Fallback configuration
  3.12 Configuration logic
  3.13 uWSGI Options
  3.14 Defining new options for your instances
  3.15 How uWSGI parses config files
  3.16 uwsgi protocol magic variables
  3.17 The uwsgi Protocol
  3.18 Managing external daemons/services
  3.19 The Master FIFO
  3.20 Socket activation with inetd/xinetd
  3.21 Running uWSGI via Upstart
  3.22 Systemd
  3.23 Running uWSGI instances with Circus
  3.24 Embedding an application in uWSGI
  3.25 Logging
  3.26 Formatting uWSGI requests logs
  3.27 Log encoders
  3.28 Hooks
  3.29 Glossary
  3.30 uWSGI third party plugins
4 Tutorials
  4.1 The uWSGI Caching Cookbook
  4.2 Setting up Django and your web server with uWSGI and nginx
  4.3 Running uWSGI on Dreamhost shared hosting
  4.4 Running python webapps on Heroku with uWSGI
  4.5 Running Ruby/Rack webapps on Heroku with uWSGI
  4.6 Reliably use FUSE filesystems for uWSGI vassals (with Linux)
  4.7 Build a dynamic proxy using RPC and internal routing
  4.8 Setting up Graphite on Ubuntu using the Metrics subsystem
5 Articles
  5.1 Serializing accept(), AKA Thundering Herd, AKA the Zeeg Problem
  5.2 The Art of Graceful Reloading
  5.3 Fun with Perl, Eyetoy and RaspberryPi
  5.4 Offloading Websockets and Server-Sent Events AKA "Combine them with Django safely"
6 uWSGI Subsystems
  6.1 The uWSGI alarm subsystem (from 1.3)
  6.2 The uWSGI caching framework
  6.3 WebCaching framework
  6.4 The uWSGI cron-like interface
  6.5 The uWSGI FastRouter
  6.6 uWSGI internal routing
  6.7 The uWSGI Legion subsystem
  6.8 Locks
  6.9 uWSGI Mules
  6.10 The uWSGI offloading subsystem
  6.11 The uWSGI queue framework
  6.12 uWSGI RPC Stack
  6.13 SharedArea – share memory pages between uWSGI components
  6.14 The uWSGI Signal Framework
  6.15 The uWSGI Spooler
  6.16 uWSGI Subscription Server
  6.17 Serving static files with uWSGI (updated to 1.9)
  6.18 SNI - Server Name Identification (virtual hosting for SSL nodes)
  6.19 The GeoIP plugin
  6.20 uWSGI Transformations
  6.21 WebSocket support
  6.22 The Metrics subsystem
  6.23 The Chunked input API
7 Scaling with uWSGI
  7.1 The uWSGI cheaper subsystem – adaptive process spawning
  7.2 The uWSGI Emperor – multi-app deployment
  7.3 Auto-scaling with Broodlord mode
  7.4 Zerg mode
  7.5 Adding applications dynamically
  7.6 Scaling SSL connections (uWSGI 1.9)
8 Securing uWSGI
  8.1 Setting POSIX Capabilities
  8.2 Running uWSGI in a Linux CGroup
  8.3 Using Linux KSM in uWSGI
  8.4 Jailing your apps using Linux Namespaces
  8.5 The old way: the --namespace option
  8.6 FreeBSD Jails
  8.7 The Forkpty Router
  8.8 The TunTap Router
9 Keeping an eye on your apps
  9.1 Monitoring uWSGI with Nagios
  9.2 The embedded SNMP server
  9.3 Pushing statistics (from 1.4)
  9.4 Integration with Graphite/Carbon
  9.5 The uWSGI Stats Server
  9.6 The Metrics subsystem
10 Async and loop engines
  10.1 uWSGI asynchronous/non-blocking modes (updated to uWSGI 1.9)
  10.2 The Gevent loop engine
  10.3 The Tornado loop engine
  10.4 uGreen – uWSGI Green Threads
  10.5 The asyncio loop engine (CPython >= 3.4, uWSGI >= 2.0.4)
11 Web Server support
  11.1 Apache support
  11.2 Cherokee support
  11.3 Native HTTP support
  11.4 HTTPS support (from 1.3)
  11.5 The SPDY router (uWSGI 1.9)
  11.6 Lighttpd support
  11.7 Attaching uWSGI to Mongrel2
  11.8 Nginx support
12 Language support
  12.1 Python support
  12.2 The PyPy plugin
  12.3 Running PHP scripts in uWSGI
  12.4 uWSGI Perl support (PSGI)
  12.5 Ruby support
  12.6 Using Lua/WSAPI with uWSGI
  12.7 JVM in the uWSGI server (updated to 1.9)
  12.8 The Mono ASP.NET plugin
  12.9 Running CGI scripts on uWSGI
  12.10 The GCCGO plugin
  12.11 The Symcall plugin
  12.12 The XSLT plugin
  12.13 SSI (Server Side Includes) plugin
  12.14 uWSGI V8 support
  12.15 The GridFS plugin
  12.16 The GlusterFS plugin
  12.17 The RADOS plugin
13 Other plugins
  13.1 The Pty plugin
  13.2 SPNEGO authentication
  13.3 Configuring uWSGI with LDAP
14 Broken/deprecated features
  14.1 Integrating uWSGI with Erlang
  14.2 Management Flags
  14.3 uWSGI Go support (1.4 only)
15 Release Notes
  15.1 Stable releases
  15.2 LTS releases
16 Contact
17 Commercial support
18 Donate
19 Indices and tables
Python Module Index

The uWSGI project aims at developing a full stack for building hosting services. Application servers (for various programming languages and protocols), proxies, process managers and monitors are all implemented using a common API and a common configuration style. Thanks to its pluggable architecture it can be extended to support more platforms and languages. Currently, you can write plugins in C, C++ and Objective-C.
The "WSGI" part in the name is a tribute to the namesake Python standard, as it was the first plugin developed for the project.

Versatility, performance, low resource usage and reliability are the strengths of the project (and the only rules followed).

CHAPTER 1
Included components (updated to latest stable release)

- The Core (implements configuration, process management, socket creation, monitoring, logging, shared memory areas, IPC, cluster membership and the uWSGI Subscription Server)
- Request plugins (implement application server interfaces for various languages and platforms: WSGI, PSGI, Rack, Lua WSAPI, CGI, PHP, Go ...)
- Gateways (implement load balancers, proxies and routers)
- The Emperor (implements massive instance management and monitoring)
- Loop engines (implement events and concurrency; components can be run in preforking, threaded, asynchronous/evented and green thread/coroutine modes. Various technologies are supported, including uGreen, Greenlet, Stackless, Gevent, Coro::AnyEvent, Tornado, Goroutines and Fibers)

Note: uWSGI is a very active project with a fast release cycle. For this reason the code and the documentation may not always be in sync. We try our best to keep the documentation good, but it is hard work. Sorry for that. If you are in trouble, the mailing list is the best source for help regarding uWSGI. Contributors for documentation (in addition to code) are always welcome.

CHAPTER 2
Quickstarts

2.1 Quickstart for Python/WSGI applications

This quickstart will show you how to deploy simple WSGI applications and common web frameworks. Python here means CPython; for PyPy you need to use the specific plugin (The PyPy plugin). Jython support is under construction.

Note: You need at least uWSGI 1.4 to follow the quickstart. Anything older is no longer maintained and is highly buggy!

2.1.1 Installing uWSGI with Python support

Tip: When you start learning uWSGI, try to build from official sources: using distribution-supplied packages may bring you plenty of headaches. When things are clear, you can use modular builds (like the ones available in your distribution).

uWSGI is a (big) C application, so you need a C compiler (like gcc or clang) and the Python development headers. On a Debian-based distro the following will be enough:

    apt-get install build-essential python-dev

You have various ways to install uWSGI for Python:

- via pip:

    pip install uwsgi

- using the network installer:

    curl http://uwsgi.it/install | bash -s default /tmp/uwsgi

  (this will install the uWSGI binary into /tmp/uwsgi, feel free to change it)

- via downloading a source tarball and "making" it:

    wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
    tar zxvf uwsgi-latest.tar.gz
    cd <dir>
    make

  (after the build you will have a uwsgi binary in the current directory)

Installing via your package distribution is not covered (it would be impossible to make everyone happy), but all of the general rules apply. One thing you may want to take into account when testing this quickstart with distro-supplied packages is that very probably your distribution has built uWSGI in a modular way (every feature is a different plugin that must be loaded). To complete this quickstart you have to prepend --plugin python,http to the first series of examples, and --plugin python when the HTTP router is removed (if this makes no sense to you, just continue reading).
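If you used the network installer or a source build, a quick sanity check of the resulting binary can be helpful. A minimal sketch (the /tmp/uwsgi path is just the one used by the network-installer example above; a pip build puts a plain "uwsgi" on your $PATH instead):

    # print the version of the freshly built binary (path is an assumption from the example above)
    /tmp/uwsgi --version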
2.1.2 The first WSGI application

Let's start with a simple "Hello World" example (this is for Python 2.x; Python 3.x requires the returned string to be bytes, see below):

    def application(env, start_response):
        start_response('200 OK', [('Content-Type', 'text/html')])
        return ["Hello World"]

(save it as foobar.py).

As you can see, it is composed of a single Python function. It is called "application" as this is the default function that the uWSGI Python loader will search for (but you can obviously customize it).

The Python 3.x version is the following:

    def application(env, start_response):
        start_response('200 OK', [('Content-Type', 'text/html')])
        return [b"Hello World"]

2.1.3 Deploy it on HTTP port 9090

Now start uWSGI to run an HTTP server/router passing requests to your WSGI application:

    uwsgi --http :9090 --wsgi-file foobar.py

That's all.

Note: Do not use --http when you have a frontend webserver, use --http-socket. Continue reading the quickstart to understand why.

2.1.4 Adding concurrency and monitoring

The first tuning you would like to make is adding concurrency (by default uWSGI starts with a single process and a single thread).

You can add more processes with the --processes option or more threads with the --threads option (or you can have both).

    uwsgi --http :9090 --wsgi-file foobar.py --master --processes 4 --threads 2

This will spawn 4 processes (each with 2 threads), a master process (which will respawn your processes when they die) and the HTTP router (seen before).

One important task is monitoring. Understanding what is going on is vital in production deployment. The stats subsystem allows you to export uWSGI's internal statistics as JSON:

    uwsgi --http :9090 --wsgi-file foobar.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191

Make some requests to your app and then telnet to port 9191: you'll get lots of fun information. You may want to use "uwsgitop" (just pip install it), which is a top-like tool for monitoring instances.

Attention: Bind the stats socket to a private address (unless you know what you are doing), otherwise everyone could access it!

2.1.5 Putting behind a full webserver

Even though the uWSGI HTTP router is solid and high-performance, you may want to put your application behind a fully-capable webserver.

uWSGI natively speaks HTTP, FastCGI, SCGI and its specific protocol named "uwsgi" (yes, wrong naming choice). The best performing protocol is obviously uwsgi, already supported by nginx and Cherokee (while various Apache modules are available).

A common nginx config is the following:

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }

This means "pass every request to the server bound to port 3031 speaking the uwsgi protocol". Now we can spawn uWSGI to natively speak the uwsgi protocol:

    uwsgi --socket 127.0.0.1:3031 --wsgi-file foobar.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191

If you run ps aux, you will see one process fewer. The HTTP router has been removed as our "workers" (the processes assigned to uWSGI) natively speak the uwsgi protocol.
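As a side note, when nginx and uWSGI run on the same host many deployments prefer a UNIX socket over a TCP port. A minimal sketch (the socket path and permissions below are assumptions, adjust them to your setup):

    [uwsgi]
    ; bind to a filesystem socket instead of 127.0.0.1:3031 (path is an assumption)
    socket = /tmp/foobar.sock
    ; make the socket accessible to the webserver user
    chmod-socket = 664
    wsgi-file = foobar.py
    master = true
    processes = 4
    threads = 2

On the nginx side the only change is pointing uwsgi_pass at the file, for example uwsgi_pass unix:/tmp/foobar.sock; instead of the TCP address.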
If your proxy/webserver/router speaks HTTP, you have to tell uWSGI to natively speak the http protocol (this is different from --http, which spawns a proxy by itself):

    uwsgi --http-socket 127.0.0.1:3031 --wsgi-file foobar.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191

2.1.6 Automatically starting uWSGI on boot

If you are thinking about firing up vi and writing an init.d script for spawning uWSGI, just sit (and calm) down and make sure your system doesn't offer a better (more modern) approach first.

Each distribution has chosen a startup system (Upstart, Systemd...) and there are tons of process managers available (supervisord, god, monit, circus...). uWSGI will integrate very well with all of them (we hope), but if you plan to deploy a big number of apps check the uWSGI Emperor - it is more or less the dream of every devops engineer.

2.1.7 Deploying Django

Django is very probably the most used Python web framework around. Deploying it is pretty easy (we continue our configuration with 4 processes with 2 threads each).

We suppose the Django project is in /home/foobar/myproject:

    uwsgi --socket 127.0.0.1:3031 --chdir /home/foobar/myproject/ --wsgi-file myproject/wsgi.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191

(with --chdir we move to a specific directory). In Django this is required to correctly load modules.

Argh! What the hell is this?! Yes, you're right, you're right... dealing with such long command lines is impractical, foolish and error-prone. Never fear! uWSGI supports various configuration styles. In this quickstart we will use .ini files.

    [uwsgi]
    socket = 127.0.0.1:3031
    chdir = /home/foobar/myproject/
    wsgi-file = myproject/wsgi.py
    processes = 4
    threads = 2
    stats = 127.0.0.1:9191

A lot better! Just run it:

    uwsgi yourfile.ini

If the file /home/foobar/myproject/myproject/wsgi.py (or whatever you have called your project) does not exist, you are very probably using an old (< 1.4) version of Django. In such a case you need a little bit more configuration:

    uwsgi --socket 127.0.0.1:3031 --chdir /home/foobar/myproject/ --pythonpath .. --env DJANGO_SETTINGS_MODULE=myproject.settings --module "django.core.handlers.wsgi:WSGIHandler()" --processes 4 --threads 2 --stats 127.0.0.1:9191

Or, using the .ini file:

    [uwsgi]
    socket = 127.0.0.1:3031
    chdir = /home/foobar/myproject/
    pythonpath = ..
    env = DJANGO_SETTINGS_MODULE=myproject.settings
    module = django.core.handlers.wsgi:WSGIHandler()
    processes = 4
    threads = 2
    stats = 127.0.0.1:9191

Older (< 1.4) Django releases need to set env, module and the pythonpath (.. allows us to reach the myproject.settings module).

2.1.8 Deploying Flask

Flask is a popular Python web microframework. Save the following example as myflaskapp.py:

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def index():
        return "I am app 1"

Flask exports its WSGI function (the one we called "application" at the beginning of this quickstart) as "app", so we need to instruct uWSGI to use it. We still continue to use the 4 processes/2 threads and the uwsgi socket as the base:

    uwsgi --socket 127.0.0.1:3031 --wsgi-file myflaskapp.py --callable app --processes 4 --threads 2 --stats 127.0.0.1:9191

(the only addition is the --callable option).
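For reference, the same Flask deployment can be expressed as an .ini file, following the Django example above. This is just a sketch that mirrors the command line (the master option is the one recommended earlier in this quickstart):

    [uwsgi]
    socket = 127.0.0.1:3031
    wsgi-file = myflaskapp.py
    ; Flask exposes its WSGI callable as "app"
    callable = app
    master = true
    processes = 4
    threads = 2
    stats = 127.0.0.1:9191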
2.1.9 Deploying web2py

Again a popular choice. Unzip the web2py source distribution into a directory of your choice and write a uWSGI config file:

    [uwsgi]
    http = :9090
    chdir = path_to_web2py
    module = wsgihandler
    master = true
    processes = 8

Note: On recent web2py releases you may need to copy the wsgihandler.py script out of the handlers directory.

We used the HTTP router again. Just go to port 9090 with your browser and you will see the web2py welcome page.

Click on the administrative interface and... oops, it does not work as it requires HTTPS. Do not worry, the uWSGI router is HTTPS-capable (be sure you have the OpenSSL development headers: install them and rebuild uWSGI, the build system will automatically detect them).

First of all generate your key and certificate:

    openssl genrsa -out foobar.key 2048
    openssl req -new -key foobar.key -out foobar.csr
    openssl x509 -req -days 365 -in foobar.csr -signkey foobar.key -out foobar.crt

Now you have 2 files (well 3, counting foobar.csr): foobar.key and foobar.crt. Change the uWSGI config:

    [uwsgi]
    https = :9090,foobar.crt,foobar.key
    chdir = path_to_web2py
    module = wsgihandler
    master = true
    processes = 8

Re-run uWSGI and connect to port 9090 using https:// with your browser.

2.1.10 A note on Python threads

If you start uWSGI without threads, the Python GIL will not be enabled, so threads generated by your application will never run. You may not like that choice, but remember that uWSGI is a language-independent server, so most of its choices are for keeping it "agnostic".

But do not worry, there are basically no choices made by the uWSGI developers that cannot be changed with an option.

If you want to maintain Python threads support without starting multiple threads for your application, just add the --enable-threads option (or enable-threads = true in ini style).

2.1.11 Virtualenvs

uWSGI can be configured to search for Python modules in a specific virtualenv.

Just add virtualenv = <path> to your options.

2.1.12 Security and availability

Always avoid running your uWSGI instances as root. You can drop privileges using the uid and gid options:

    [uwsgi]
    https = :9090,foobar.crt,foobar.key
    uid = foo
    gid = bar
    chdir = path_to_web2py
    module = wsgihandler
    master = true
    processes = 8

If you need to bind to privileged ports (like 443 for HTTPS), use shared sockets. They are created before dropping privileges and can be referenced with the =N syntax, where N is the socket number (starting from 0):

    [uwsgi]
    shared-socket = :443
    https = =0,foobar.crt,foobar.key
    uid = foo
    gid = bar
    chdir = path_to_web2py
    module = wsgihandler
    master = true
    processes = 8

A common problem with webapp deployment is "stuck requests". All of your threads/workers are stuck (blocked on a request) and your app cannot accept more requests.

To avoid that problem you can set a harakiri timer. It is a monitor (managed by the master process) that will destroy processes stuck for more than the specified number of seconds (choose the harakiri value carefully). For example, you may want to destroy workers blocked for more than 30 seconds:

    [uwsgi]
    shared-socket = :443
    https = =0,foobar.crt,foobar.key
    uid = foo
    gid = bar
    chdir = path_to_web2py
    module = wsgihandler
    master = true
    processes = 8
    harakiri = 30

In addition to this, since uWSGI 1.9, the stats server exports the whole set of request variables, so you can see (in realtime) what your instance is doing (for each worker, thread or async core).
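As a quick usage sketch (addresses taken from the examples above), you can dump the stats JSON with any TCP client, or use uwsgitop for a live view:

    # raw JSON dump of the stats server bound to 127.0.0.1:9191
    nc 127.0.0.1 9191 | python -m json.tool
    # top-like realtime view
    pip install uwsgitop
    uwsgitop 127.0.0.1:9191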
2.1.13 Offloading

The uWSGI offloading subsystem allows you to free your workers as soon as possible when some specific pattern matches and can be delegated to a pure-C thread. Examples are sending static files from the file system, transferring data from the network to the client and so on.

Offloading is very complex, but its use is transparent to the end user. If you want to try it just add --offload-threads <n>, where <n> is the number of threads to spawn (1 per CPU is a good value to start with).

When offload threads are enabled, all of the parts that can be optimized will be automatically detected.

2.1.14 Bonus: multiple Python versions for the same uWSGI binary

As we have seen, uWSGI is composed of a small core and various plugins. Plugins can be embedded in the binary or loaded dynamically. When you build uWSGI for Python, a series of plugins plus the Python one are embedded in the final binary.

This could be a problem if you want to support multiple Python versions without building a binary for each one. The best approach is to have a small binary with the language-independent features and one plugin for each Python version, loaded on demand.

From the uwsgi sources directory:

    make PROFILE=nolang

this will build a uwsgi binary with all the default plugins built in except the Python one.

Now, from the same directory, we start building Python plugins:

    PYTHON=python3.4 ./uwsgi --build-plugin "plugins/python python34"
    PYTHON=python2.7 ./uwsgi --build-plugin "plugins/python python27"
    PYTHON=python2.6 ./uwsgi --build-plugin "plugins/python python26"

You will end up with three files: python34_plugin.so, python27_plugin.so and python26_plugin.so, which you should copy into the directory you want (by default uWSGI searches for plugins in the current directory).

Now in your config files you can simply add (at the top):

    [uwsgi]
    plugins-dir = <target directory>
    plugin = python26

this will load the python26_plugin.so file from the directory into which you copied the plugins.

2.1.15 And now

You should already be able to go into production with these few concepts, but uWSGI is an enormous project with hundreds of features and configurations. If you want to be a better sysadmin, continue reading the full docs.

2.2 Quickstart for perl/PSGI applications

The following instructions will guide you through installing and running a Perl-based uWSGI distribution, aimed at running PSGI apps.

2.2.1 Installing uWSGI with Perl support

To build uWSGI you need a C compiler (gcc and clang are supported) and the Python binary (it will only run the uwsgiconfig.py script that executes the various compilation steps). As we are building a uWSGI binary with Perl support we need the Perl development headers too (the libperl-dev package on Debian-based distros).

You can build uWSGI manually:

    python uwsgiconfig.py --build psgi

which is the same as

    UWSGI_PROFILE=psgi make

or using the network installer:

    curl http://uwsgi.it/install | bash -s psgi /tmp/uwsgi

which will create a uWSGI binary in /tmp/uwsgi (feel free to change the path to whatever you want).

2.2.2 Note for distro packages

Your distribution very probably contains a uWSGI package set. Those uWSGI packages tend to be highly modular, so in addition to the core you need to install the required plugins. Plugins must be loaded in your configs. In the learning phase we strongly suggest not using distribution packages, so you can easily follow the documentation and tutorials.
Once you feel comfortable with the "uWSGI way" you can choose the best approach for your deployments.

2.2.3 Your first PSGI app

Save it to a file named myapp.pl:

    my $app = sub {
        my $env = shift;
        return [
            '200',
            [ 'Content-Type' => 'text/html' ],
            [ "<h1>Hello World</h1>" ],
        ];
    };

then run it via uWSGI in http mode:

    uwsgi --http :8080 --http-modifier1 5 --psgi myapp.pl

(remember to replace 'uwsgi' if it is not in your current $PATH)

or if you are using a modular build (like the one of your distro):

    uwsgi --plugins http,psgi --http :8080 --http-modifier1 5 --psgi myapp.pl

Note: Do not use --http when you have a frontend webserver, use --http-socket. Continue reading the quickstart to understand why.

2.2.4 What is that '--http-modifier1 5' thing ???

uWSGI supports various languages and platforms. When the server receives a request it has to know where to 'route' it.

Each uWSGI plugin has an assigned number (the modifier); the perl/psgi one has the 5. So --http-modifier1 5 means "route to the psgi plugin".

Although uWSGI has a more "human-friendly" internal routing system, using modifiers is the fastest way, so if possible always use them.

2.2.5 Using a full webserver: nginx

The supplied http router is (yes, incredible) only a router. You can use it as a load balancer or a proxy, but if you need a full webserver (for efficiently serving static files or all of those tasks a webserver is good at), you can get rid of the uwsgi http router (remember to change --plugins http,psgi to --plugins psgi if you are using a modular build) and put your app behind nginx.

To communicate with nginx, uWSGI can use various protocols: http, uwsgi, fastcgi, scgi... The most efficient one is the uwsgi one. Nginx includes uwsgi protocol support out of the box.

Run your psgi application on a uwsgi socket:

    uwsgi --socket 127.0.0.1:3031 --psgi myapp.pl

then add a location stanza in your nginx config:

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
        uwsgi_modifier1 5;
    }

Reload your nginx server, and it should start proxying requests to your uWSGI instance.

Note that you do not need to configure uWSGI to set a specific modifier, nginx will do it using the uwsgi_modifier1 5; directive.

If your proxy/webserver/router speaks HTTP, you have to tell uWSGI to natively speak the http protocol (this is different from --http, which spawns a proxy by itself):

    uwsgi --http-socket 127.0.0.1:3031 --http-socket-modifier1 5 --psgi myapp.pl

As you can see we needed to specify the modifier1 to use, as the http protocol cannot carry this kind of information.

2.2.6 Adding concurrency

You can add concurrency to your app via multiprocessing, multithreading or various async modes.

To spawn additional processes use the --processes option:

    uwsgi --socket 127.0.0.1:3031 --psgi myapp.pl --processes 4

To have additional threads use --threads:

    uwsgi --socket 127.0.0.1:3031 --psgi myapp.pl --threads 8

Or both, if you feel exotic:

    uwsgi --socket 127.0.0.1:3031 --psgi myapp.pl --threads 8 --processes 4

A very common non-blocking/coroutine library in the Perl world is Coro::AnyEvent. uWSGI can use it (even combined with multiprocessing) simply by including the coroae plugin.

To build a uWSGI binary with coroae support just run

    UWSGI_PROFILE=coroae make

or

    curl http://uwsgi.it/install | bash -s coroae /tmp/uwsgi

You will end up with a uWSGI binary including both the psgi and coroae plugins.

Now run your application in Coro::AnyEvent mode:

    uwsgi --socket 127.0.0.1:3031 --psgi myapp.pl --coroae 1000 --processes 4

It will run 4 processes, each able to manage up to 1000 coroutines (or Coro microthreads).
2.2.7 Adding robustness: the Master process

It is highly recommended to have the master process always running on production apps.

It will constantly monitor your processes/threads and will add fun features like The uWSGI Stats Server.

To enable the master simply add --master:

    uwsgi --socket 127.0.0.1:3031 --psgi myapp.pl --processes 4 --master

2.2.8 Using config files

uWSGI has literally hundreds of options. Dealing with them via command line is basically silly, so try to always use config files. uWSGI supports various standards (xml, .ini, json, yaml...). Moving from one to another is pretty simple. The same options you can use via command line can be used in config files simply by removing the -- prefix:

    [uwsgi]
    socket = 127.0.0.1:3031
    psgi = myapp.pl
    processes = 4
    master = true

or xml:

    <uwsgi>
        <socket>127.0.0.1:3031</socket>
        <psgi>myapp.pl</psgi>
        <processes>4</processes>
        <master/>
    </uwsgi>

To run uWSGI using a config file, just specify it as argument:

    uwsgi yourconfig.ini

If for some reason your config cannot end with the expected extension (.ini, .xml, .yml, .js) you can force the binary to use a specific parser in this way:

    uwsgi --ini yourconfig.foo
    uwsgi --xml yourconfig.foo
    uwsgi --yaml yourconfig.foo

and so on.

You can even pipe configs (using the dash to force reading from stdin):

    perl myjsonconfig_generator.pl | uwsgi --json -

2.2.9 Automatically starting uWSGI on boot

If you are thinking about writing some init.d script for spawning uWSGI, just sit (and calm) down and check if your system does not offer you a better (more modern) approach.

Each distribution has chosen a startup system (Upstart, Systemd...) and there are tons of process managers available (supervisord, god...). uWSGI will integrate very well with all of them (we hope), but if you plan to deploy a big number of apps check the uWSGI Emperor: it is the dream of every devops.

2.2.10 Security and availability

ALWAYS avoid running your uWSGI instances as root. You can drop privileges using the uid and gid options:

    [uwsgi]
    socket = 127.0.0.1:3031
    uid = foo
    gid = bar
    chdir = path_to_your_app
    psgi = myapp.pl
    master = true
    processes = 8

A common problem with webapp deployment is "stuck requests". All of your threads/workers are stuck blocked on a request and your app cannot accept more of them.

To avoid that problem you can set a harakiri timer. It is a monitor (managed by the master process) that will destroy processes stuck for more than the specified number of seconds:

    [uwsgi]
    socket = 127.0.0.1:3031
    uid = foo
    gid = bar
    chdir = path_to_your_app
    psgi = myapp.pl
    master = true
    processes = 8
    harakiri = 30

This will destroy workers blocked for more than 30 seconds. Choose the harakiri value carefully!

In addition to this, since uWSGI 1.9, the stats server exports the whole set of request variables, so you can see (in realtime) what your instance is doing (for each worker, thread or async core).

Enabling the stats server is easy:

    [uwsgi]
    socket = 127.0.0.1:3031
    uid = foo
    gid = bar
    chdir = path_to_your_app
    psgi = myapp.pl
    master = true
    processes = 8
    harakiri = 30
    stats = 127.0.0.1:5000

Just bind it to an address (UNIX or TCP) and connect to it (you can use telnet too) to receive a JSON representation of your instance.

The uwsgitop application (you can find it in the official github repository) is an example of using the stats server to build a top-like realtime monitoring tool (with colors !!!).
2.2.11 Offloading

The uWSGI offloading subsystem allows you to free your workers as soon as possible when some specific pattern matches and can be delegated to a pure-C thread. Examples are sending static files from the filesystem, transferring data from the network to the client and so on.

Offloading is very complex, but its use is transparent to the end user. If you want to try it just add --offload-threads <n>, where <n> is the number of threads to spawn (one per CPU is a good value).

When offload threads are enabled, all of the parts that can be optimized will be automatically detected.

2.2.12 And now

You should already be able to go into production with these few concepts, but uWSGI is an enormous project with hundreds of features and configurations. If you want to be a better sysadmin, continue reading the full docs.

2.3 Quickstart for ruby/Rack applications

The following instructions will guide you through installing and running a Ruby-based uWSGI distribution aimed at running Rack apps.

2.3.1 Installing uWSGI with Ruby support

To build uWSGI you need a C compiler (gcc and clang are supported) and the Python binary (to run the uwsgiconfig.py script that will execute the various compilation steps). As we are building a uWSGI binary with Ruby support we need the Ruby development headers too (the ruby-dev package on Debian-based distributions).

You can build uWSGI manually – all of these are equivalent:

    make rack
    UWSGI_PROFILE=rack make
    make PROFILE=rack
    python uwsgiconfig.py --build rack

But if you are lazy, you can download, build and install a uWSGI + Ruby binary in a single shot:

    curl http://uwsgi.it/install | bash -s rack /tmp/uwsgi

Or in a more "Ruby-friendly" way:

    gem install uwsgi

All of these methods build a "monolithic" uWSGI binary. The uWSGI project is composed of dozens of plugins. You can choose to build the server core and have a plugin for every feature (which you will load when needed), or you can build a single binary with all the features you need. This latter kind of build is called 'monolithic'.

This quickstart assumes a monolithic binary (so you do not need to load plugins). If you prefer to use your distribution's packages (instead of building uWSGI from official sources), see below.

2.3.2 Note for distro packages

Your distribution very probably contains a uWSGI package set. Those uWSGI packages tend to be highly modular (and occasionally highly outdated), so in addition to the core you need to install the required plugins. Plugins must be loaded in your uWSGI configuration. In the learning phase we strongly suggest not using distribution packages, so you can easily follow documentation and tutorials.

Once you feel comfortable with the "uWSGI way" you can choose the best approach for your deployments.

As an example, the tutorial makes use of the "http" and "rack" plugins. If you are using a modular build be sure to load them with the --plugins http,rack option.

2.3.3 Your first Rack app

Rack is the standard way for writing Ruby web apps.

This is a standard Rack Hello World script (call it app.ru):

    class App
      def call(environ)
        [200, {'Content-Type' => 'text/html'}, ['Hello']]
      end
    end

    run App.new

The .ru extension stands for "rackup", which is the deployment tool included in the Rack distribution.
Rackup uses a little DSL, so to use it with uWSGI you need to install the rack gem:

    gem install rack

Now we are ready to deploy with uWSGI:

    uwsgi --http :8080 --http-modifier1 7 --rack app.ru

(remember to replace 'uwsgi' if it is not in your current $PATH)

or if you are using a modular build (like the one of your distribution):

    uwsgi --plugins http,rack --http :8080 --http-modifier1 7 --rack app.ru

With this command line we've spawned an HTTP proxy routing each request to a process (named the 'worker') that manages it and sends back the response to the HTTP router (which sends it back to the client).

If you are asking yourself why spawn two processes, it is because this is the normal architecture you will use in production (a frontline web server with a backend application server).

If you do not want to spawn the HTTP proxy and want to directly force the worker to answer HTTP requests, just change the command line to:

    uwsgi --http-socket :8080 --http-socket-modifier1 7 --rack app.ru

Now you have a single process managing requests (but remember that directly exposing the application server to the public is generally dangerous and less versatile).

2.3.4 What is that '--http-modifier1 7' thing?

uWSGI supports various languages and platforms. When the server receives a request it has to know where to 'route' it.

Each uWSGI plugin has an assigned number (the modifier); the ruby/rack one has the 7. So --http-modifier1 7 means "route to the rack plugin".

Though uWSGI also has a more "human-friendly" internal routing system, using modifiers is the fastest way, so if at all possible always use them.

2.3.5 Using a full webserver: nginx

The supplied HTTP router is (yes, astoundingly enough) only a router. You can use it as a load balancer or a proxy, but if you need a full web server (for efficiently serving static files or all of those tasks a webserver is good at), you can get rid of the uwsgi HTTP router (remember to change --plugins http,rack to --plugins rack if you are using a modular build) and put your app behind Nginx.

To communicate with Nginx, uWSGI can use various protocols: HTTP, uwsgi, FastCGI, SCGI, etc. The most efficient one is the uwsgi one. Nginx includes uwsgi protocol support out of the box.

Run your rack application on a uwsgi socket:

    uwsgi --socket 127.0.0.1:3031 --rack app.ru

then add a location stanza in your nginx config:

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
        uwsgi_modifier1 7;
    }

Reload your nginx server, and it should start proxying requests to your uWSGI instance.

Note that you do not need to configure uWSGI to set a specific modifier, nginx will do it using the uwsgi_modifier1 7; directive.

2.3.6 Adding concurrency

With the previous example you deployed a stack able to serve a single request at a time.

To increase concurrency you need to add more processes. If you are hoping there is a magic math formula to find the right number of processes to spawn, well... we're sorry. You need to experiment and monitor your app to find the right value. Take into account that every single process is a complete copy of your app, so memory usage should be taken into account too.

To add more processes just use the --processes option:

    uwsgi --socket 127.0.0.1:3031 --rack app.ru --processes 8

will spawn 8 processes.

Ruby 1.9/2.0 introduced improved thread support and uWSGI supports it via the 'rbthreads' plugin.
This plugin is automatically built when you compile the uWSGI + Ruby (>= 1.9) monolithic binary.

To add more threads:

    uwsgi --socket 127.0.0.1:3031 --rack app.ru --rbthreads 4

or threads + processes:

    uwsgi --socket 127.0.0.1:3031 --rack app.ru --processes 4 --rbthreads 4

There are other (generally more advanced/complex) ways to increase concurrency (for example 'fibers'), but most of the time you will end up with plain old multi-process or multi-thread models. If you are interested, check the complete documentation over at Rack.

2.3.7 Adding robustness: the Master process

It is highly recommended to have the uWSGI master process always running on production apps.

It will constantly monitor your processes/threads and will add fun features like The uWSGI Stats Server.

To enable the master simply add --master:

    uwsgi --socket 127.0.0.1:3031 --rack app.ru --processes 4 --master

2.3.8 Using config files

uWSGI has literally hundreds of options (but generally you will not use more than a dozen of them). Dealing with them via command line is basically silly, so try to always use config files. uWSGI supports various standards (XML, INI, JSON, YAML, etc). Moving from one to another is pretty simple. The same options you can use via command line can be used with config files by simply removing the -- prefix:

    [uwsgi]
    socket = 127.0.0.1:3031
    rack = app.ru
    processes = 4
    master = true

or xml:

    <uwsgi>
        <socket>127.0.0.1:3031</socket>
        <rack>app.ru</rack>
        <processes>4</processes>
        <master/>
    </uwsgi>

To run uWSGI using a config file, just specify it as argument:

    uwsgi yourconfig.ini

If for some reason your config cannot end with the expected extension (.ini, .xml, .yml, .js) you can force the binary to use a specific parser in this way:

    uwsgi --ini yourconfig.foo
    uwsgi --xml yourconfig.foo
    uwsgi --yaml yourconfig.foo

and so on.

You can even pipe configs (using the dash to force reading from stdin):

    ruby myjsonconfig_generator.rb | uwsgi --json -

2.3.9 The fork() problem when you spawn multiple processes

uWSGI is "Perlish" in a way, there is nothing we can do to hide that. Most of its choices (starting from "There's more than one way to do it") came from the Perl world (and more generally from classical UNIX sysadmin approaches).

Sometimes this approach could lead to unexpected behaviors when applied to other languages/platforms.

One of the "problems" you can face when starting to learn uWSGI is its fork() usage.

By default uWSGI loads your application in the first spawned process and then fork()s itself multiple times. It means your app is loaded a single time and then copied.

While this approach speeds up the start of the server, some applications could have problems with this technique (especially those initializing DB connections on startup, as the file descriptor of the connection will be inherited in the subprocesses).

If you are unsure about the brutal preforking used by uWSGI, just disable it with the --lazy-apps option. It will force uWSGI to completely load your app once per worker.

2.3.10 Deploying Sinatra

Let's forget about fork(), and get back to fun things. This time we're deploying a Sinatra application:

    require 'sinatra'

    get '/hi' do
      "Hello World"
    end

    run Sinatra::Application

Save it as config.ru and run as seen before:

    [uwsgi]
    socket = 127.0.0.1:3031
    rack = config.ru
    master = true
    processes = 4
    lazy-apps = true

    uwsgi yourconf.ini
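For a quick local test without the nginx frontend, you can temporarily swap the uwsgi socket for a plain HTTP one. This is just a development-only sketch reusing options already shown in this quickstart:

    [uwsgi]
    ; development only: answer HTTP directly instead of speaking uwsgi to a frontend
    http-socket = :8080
    http-socket-modifier1 = 7
    rack = config.ru
    master = true
    processes = 4
    lazy-apps = true

With this running, curl http://localhost:8080/hi should answer with the "Hello World" string defined in the Sinatra route above.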
Well, maybe you have already noted that basically nothing changed from the previous app.ru examples.

That is because basically every modern Rack app exposes itself as a .ru file (generally called config.ru), so there is no need for multiple options for loading applications (like for example in the Python/WSGI world).

2.3.11 Deploying RubyOnRails >= 3

Starting from 3.0, Rails is fully Rack compliant, and exposes a config.ru file you can directly load (like we did with Sinatra).

The only difference from Sinatra is that your project has a specific layout/convention expecting your current working directory to be the one containing the project, so let's add a chdir option:

    [uwsgi]
    socket = 127.0.0.1:3031
    rack = config.ru
    master = true
    processes = 4
    lazy-apps = true
    chdir = <path_to_your_rails_app>
    env = RAILS_ENV=production

    uwsgi yourconf.ini

In addition to chdir we have added the 'env' option, which sets the RAILS_ENV environment variable.

Starting from 4.0, Rails supports multiple threads (only for Ruby 2.0):

    [uwsgi]
    socket = 127.0.0.1:3031
    rack = config.ru
    master = true
    processes = 4
    rbthreads = 2
    lazy-apps = true
    chdir = <path_to_your_rails_app>
    env = RAILS_ENV=production

2.3.12 Deploying older RubyOnRails

Older Rails versions are not fully Rack-compliant. For that reason a specific option is available in uWSGI to load older Rails apps (you will need the 'thin' gem too).

    [uwsgi]
    socket = 127.0.0.1:3031
    master = true
    processes = 4
    lazy-apps = true
    rails = <path_to_your_rails_app>
    env = RAILS_ENV=production

So, in short, specify the rails option, passing the Rails app directory as the argument, instead of a Rackup file.

2.3.13 Bundler and RVM

Bundler is the de-facto standard Ruby tool for managing dependencies. Basically you specify the gems needed by your app in the Gemfile text file and then you launch bundler to install them.

To allow uWSGI to honor bundler installations you only need to add:

    rbrequire = rubygems
    rbrequire = bundler/setup
    env = BUNDLE_GEMFILE=<path_to_your_Gemfile>

(The first require stanza is not required for Ruby 1.9/2.x.)

Basically those lines force uWSGI to load the bundler engine and to use the Gemfile specified in the BUNDLE_GEMFILE environment variable.

When using Bundler (like modern frameworks do) your common deployment configuration will be:

    [uwsgi]
    socket = 127.0.0.1:3031
    rack = config.ru
    master = true
    processes = 4
    lazy-apps = true
    rbrequire = rubygems
    rbrequire = bundler/setup
    env = BUNDLE_GEMFILE=<path_to_your_Gemfile>

In addition to Bundler, RVM is another common tool.

It allows you to have multiple (independent) Ruby installations (with their own gemsets) on a single system.

To instruct uWSGI to use the gemset of a specific RVM version just use the --gemset option:

    [uwsgi]
    socket = 127.0.0.1:3031
    rack = config.ru
    master = true
    processes = 4
    lazy-apps = true
    rbrequire = rubygems
    rbrequire = bundler/setup
    env = BUNDLE_GEMFILE=<path_to_your_Gemfile>
    gemset = ruby-2.0@foobar

Just pay attention: you need a uWSGI binary (or a plugin if you are using a modular build) for every Ruby version (that's Ruby version, not gemset!).

If you are interested, this is a list of commands to build the uWSGI core plus one plugin for every Ruby version installed in RVM:

    # build the core
    make nolang
    # build plugin for 1.8.7
    rvm use 1.8.7
    ./uwsgi --build-plugin "plugins/rack rack187"
    # build for 1.9.2
    rvm use 1.9.2
    ./uwsgi --build-plugin "plugins/rack rack192"
    # and so on...
Then if you want to use Ruby 1.9.2 with the @oops gemset:

    [uwsgi]
    plugins = rack192
    socket = 127.0.0.1:3031
    rack = config.ru
    master = true
    processes = 4
    lazy-apps = true
    rbrequire = rubygems
    rbrequire = bundler/setup
    env = BUNDLE_GEMFILE=<path_to_your_Gemfile>
    gemset = ruby-1.9.2@oops

2.3.14 Automatically starting uWSGI on boot

If you are thinking about firing up vi and writing an init.d script for spawning uWSGI, just sit (and calm) down and make sure your system doesn't offer a better (more modern) approach first.

Each distribution has chosen a startup system (Upstart, Systemd...) and there are tons of process managers available (supervisord, god, monit, circus...). uWSGI will integrate very well with all of them (we hope), but if you plan to deploy a big number of apps check the uWSGI Emperor - it is more or less the dream of every devops engineer.

2.3.15 Security and availability

ALWAYS avoid running your uWSGI instances as root. You can drop privileges using the uid and gid options.

    [uwsgi]
    socket = 127.0.0.1:3031
    uid = foo
    gid = bar
    chdir = path_to_your_app
    rack = app.ru
    master = true
    processes = 8

A common problem with webapp deployment is "stuck requests". All of your threads/workers are stuck blocked on a request and your app cannot accept more of them.

To avoid that problem you can set a harakiri timer. It is a monitor (managed by the master process) that will destroy processes stuck for more than the specified number of seconds.

    [uwsgi]
    socket = 127.0.0.1:3031
    uid = foo
    gid = bar
    chdir = path_to_your_app
    rack = app.ru
    master = true
    processes = 8
    harakiri = 30

This will destroy workers blocked for more than 30 seconds. Choose the harakiri value carefully!

In addition to this, since uWSGI 1.9, the stats server exports the whole set of request variables, so you can see (in real time) what your instance is doing (for each worker, thread or async core).

Enabling the stats server is easy:

    [uwsgi]
    socket = 127.0.0.1:3031
    uid = foo
    gid = bar
    chdir = path_to_your_app
    rack = app.ru
    master = true
    processes = 8
    harakiri = 30
    stats = 127.0.0.1:5000

Just bind it to an address (UNIX or TCP) and connect to it (you can use telnet too) to receive a JSON representation of your instance.

The uwsgitop application (you can find it in the official github repository) is an example of using the stats server to build a top-like realtime monitoring tool (with fancy colors!).

2.3.16 Memory usage

Low memory usage is one of the selling points of the whole uWSGI project.

Unfortunately, being aggressive with memory by default could (note: could) lead to some performance problems.

By default the uWSGI Rack plugin calls the Ruby GC (garbage collector) after every request. If you want to reduce this rate just add the --rb-gc-freq <n> option, where <n> is the number of requests after which the GC is called.

If you plan to run benchmarks of uWSGI (or compare it with other solutions) take into account its use of the GC.

Ruby can be a real memory devourer, so we prefer to be aggressive with memory by default instead of making hello-world benchmarkers happy.
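As a sketch of that tuning (the frequency used here is only an illustrative assumption, not a recommendation; measure your own app):

    [uwsgi]
    socket = 127.0.0.1:3031
    rack = app.ru
    master = true
    processes = 4
    ; call the Ruby GC every 10 requests instead of after every request (illustrative value)
    rb-gc-freq = 10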
2.3.17 Offloading

The uWSGI offloading subsystem allows you to free your workers as soon as possible when some specific pattern matches and can be delegated to a pure-C thread. Examples are sending static files from the file system, transferring data from the network to the client and so on.

Offloading is very complex, but its use is transparent to the end user. If you want to try it just add --offload-threads <n>, where <n> is the number of threads to spawn (1 per CPU is a good value to start with).

When offload threads are enabled, all of the parts that can be optimized will be automatically detected.

2.3.18 And now

You should already be able to go into production with these few concepts, but uWSGI is an enormous project with hundreds of features and configurations. If you want to be a better sysadmin, continue reading the full docs. Welcome!

2.4 Snippets

This is a collection of some of the most "fun" uses of uWSGI features.

2.4.1 X-Sendfile emulation

Even if your frontend proxy/webserver does not support X-Sendfile (or cannot access your static resources) you can emulate it using uWSGI's internal offloading (your process/thread will delegate the actual static file serving to offload threads).

    [uwsgi]
    ...
    ; load router_static plugin (compiled in by default in monolithic profiles)
    plugins = router_static
    ; spawn 2 offload threads
    offload-threads = 2
    ; files under /private can be safely served
    static-safe = /private
    ; collect the X-Sendfile response header as X_SENDFILE var
    collect-header = X-Sendfile X_SENDFILE
    ; if X_SENDFILE is not empty, pass its value to the "static" routing action (it will automatically use offloading if available)
    response-route-if-not = empty:${X_SENDFILE} static:${X_SENDFILE}

2.4.2 Force HTTPS

This will force HTTPS for the whole site.

    [uwsgi]
    ...
    ; load router_redirect plugin (compiled in by default in monolithic profiles)
    plugins = router_redirect
    route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}

And this only for /admin:

    [uwsgi]
    ...
    ; load router_redirect plugin (compiled in by default in monolithic profiles)
    plugins = router_redirect
    route = ^/admin goto:https
    ; stop the chain
    route-run = last:
    route-label = https
    route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}

Eventually you may want to send the HSTS (HTTP Strict Transport Security) header too.

    [uwsgi]
    ...
    ; load router_redirect plugin (compiled in by default in monolithic profiles)
    plugins = router_redirect
    route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}
    route-if = equal:${HTTPS};on addheader:Strict-Transport-Security: max-age=31536000

2.4.3 Python Auto-reloading (DEVELOPMENT ONLY!)

In production you can monitor file/directory changes for triggering reloads (touch-reload, fs-reload...).

During development having a monitor for all of the loaded/used Python modules can be handy. But please use it only during development.

The check is done by a thread that scans the modules list with the specified frequency:

    [uwsgi]
    ...
    py-autoreload = 2

will check for Python module changes every 2 seconds and eventually restart the instance.

And again:

Warning: Use this only in development.

2.4.4 Full-Stack CGI setup

This example spawned from a uWSGI mailing-list thread.

We have static files in /var/www and CGIs in /var/cgi. The CGIs will be accessed via the /cgi-bin mountpoint.
So /var/cgi/foo.lua will be run on a request for /cgi-bin/foo.lua.

    [uwsgi]
    workdir = /var
    ipaddress = 0.0.0.0
    ; start an http router on port 8080
    http = %(ipaddress):8080
    ; enable the stats server on port 9191
    stats = 127.0.0.1:9191
    ; spawn 2 threads in 4 processes (concurrency level: 8)
    processes = 4
    threads = 2
    ; drop privileges
    uid = nobody
    gid = nogroup
    ; serve static files in /var/www
    static-index = index.html
    static-index = index.htm
    check-static = %(workdir)/www
    ; skip serving static files ending with .lua
    static-skip-ext = .lua
    ; route requests to the CGI plugin
    http-modifier1 = 9
    ; map /cgi-bin requests to /var/cgi
    cgi = /cgi-bin=%(workdir)/cgi
    ; only .lua scripts can be executed
    cgi-allowed-ext = .lua
    ; .lua files are executed with the 'lua' command (it avoids the need of giving execute permission to files)
    cgi-helper = .lua=lua
    ; search for index.lua if a directory is requested
    cgi-index = index.lua

2.4.5 Multiple flask apps in different mountpoints

Let's write three flask apps:

    # app1.py
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello World! i am app1"

    # app2.py
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello World! i am app2"

    # app3.py
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello World! i am app3"

Each will be mounted respectively at /app1, /app2, /app3.

To mount an application with a specific "key" in uWSGI, you use the --mount option:

    --mount <mountpoint>=<application>

In our case we want to mount 3 Python apps, each keyed with what will become the WSGI SCRIPT_NAME variable:

    [uwsgi]
    plugin = python
    mount = /app1=app1.py
    mount = /app2=app2.py
    mount = /app3=app3.py
    ; generally flask apps expose the 'app' callable instead of 'application'
    callable = app
    ; tell uWSGI to rewrite PATH_INFO and SCRIPT_NAME according to mount-points
    manage-script-name = true
    ; bind to a socket
    socket = /var/run/uwsgi.sock

Now directly point your webserver/proxy to the instance socket (no additional configuration is needed).

Note: by default every app is loaded in a new Python interpreter (that means a pretty well isolated namespace for each app). If you want all of the apps to be loaded in the same Python VM, use the --single-interpreter option.

Another note: you may find references to an obscure "modifier1 30" trick. It is deprecated and extremely ugly. uWSGI is able to rewrite request variables in lots of more advanced ways.

Final note: by default, the first loaded app is mounted as the "default one". That app will be served when no mountpoint matches.

CHAPTER 3
Table of Contents

3.1 Getting uWSGI

These are the current versions of uWSGI.

    Release     Date         Link
    Unstable    -            https://github.com/unbit/uwsgi/
    Stable/LTS  2014-09-05   http://projects.unbit.it/downloads/uwsgi-2.0.7.tar.gz
    Old/LTS     2013-08-23   http://projects.unbit.it/downloads/uwsgi-1.4.10.tar.gz

uWSGI is also available as a package in several OS/distributions.

uWSGI has a really fast development cycle, so packages may not be up to date. Building it requires less than 30 seconds and very few dependencies (only the Python interpreter, a C compiler/linker and the libs/headers for your language of choice).

3.2 Installing uWSGI

3.2.1 Installing from a distribution package

See also: the Getting uWSGI page for a list of known distributions shipping uWSGI.

3.2.2 Installing from source

To build uWSGI you need Python and a C compiler (gcc and clang are supported).
Depending on the languages you wish to support you will need their development headers. On a Debian/Ubuntu system you can install them (and the rest of the infrastructure required to build software) with: apt-get install build-essential python And if you want to build a binary with python/wsgi support (as an example) apt-get install python-dev On a Fedora/Redhat system you can install them with: yum groupinstall "Development Tools" yum install python 29 uWSGI Documentation, Release 2.0 For python/wsgi support: yum install python-devel If you have a variant of make available in your system you can simply run make. If you do not have make (or want to have more control) simply run: python uwsgiconfig.py --build You can also use pip to install uWSGI (it will build a binary with python support). # Install the latest stable release: pip install uwsgi # ... or if you want to install the latest LTS (long term support) release, pip install http://projects.unbit.it/downloads/uwsgi-lts.tar.gz Or you can use ruby gems (it will build a binary with ruby/rack support). # Install the latest stable release: gem install uwsgi At the end of the build, you will get a report of the enabled features. If something you require is missing, just add the development headers and rerun the build. For example to build uWSGI with ssl and perl regexp support you need libssl-dev and pcre headers. 3.2.3 Alternative build profiles For historical reasons when you run ‘make’, uWSGI is built with Python as the only supported language. You can build customized uWSGI servers using build profiles, located in the buildconf/ directory. You can use a specific profile with: python uwsgiconfig --build Or you can pass it via an environment variable: UWSGI_PROFILE=lua make # ... or even ... UWSGI_PROFILE=gevent pip install uwsgi 3.2.4 Modular builds This is the approach your distribution should follow, and this is the approach you MUST follow if you want to build a commercial service over uWSGI (see below). The vast majority of uWSGI features are available as plugins. Plugins can be loaded using the –plugin option. If you want to give users the maximum amount of flexibility allowing them to use only the minimal amount of resources, just create a modular build. A build profile named “core” is available. python uwsgiconfig.py --build core This will build a uWSGi binary without plugins. This is called the “server core”. Now you can start building all of the plugins you need. Check the plugins/ directory in the source distribution for a full list. python uwsgiconfig.py --plugin plugins/psgi core python uwsgiconfig.py --plugin plugins/rack core python uwsgiconfig.py --plugin plugins/python core python uwsgiconfig.py --plugin plugins/lua core python uwsgiconfig.py --plugin plugins/corerouter core python uwsgiconfig.py --plugin plugins/http core ... 30 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 Remember to always pass the build profile (‘core’ in this case) as the third argument. 3.3 The uWSGI build system • This is updated to 1.9.13 This page describes how the uWSGI build system works and how it can be customized 3.3.1 uwsgiconfig.py This is the python script aimed at calling the various compile/link stage. During 2009, when uWSGI guidelines (and mantra) started to be defined, people agreed that autotools, cmake and friends was not loved by a lot of sysadmins. 
Although they are fairly standardized, the number of packages needed and the incompatibilities between them (especially in the autotools world) were a problem for a project with fast development/evolution, where "compile from sources" was, is, and very probably will remain the best way to get the most out of the product. In addition to this, the build procedure MUST BE fast (less than 1 minute on an entry-level x86 is the main rule).

For these reasons, to compile uWSGI you only need a C compiler suite (gcc, clang...) and a Python interpreter. Someone could argue that Perl could have been a better choice, and maybe that is true (it is installed by default in a lot of operating systems), but we decided to stay with Python mainly because when uWSGI started it was a Python-only application. (Obviously, if you want to develop an alternative build system you are free to do so.)

uwsgiconfig.py detects the features available on the system and builds a uwsgi binary (and optionally its plugins) using the so-called 'build profile'.

3.3.2 build profiles

3.3.3 First example

3.3.4 CC and CPP

These two environment variables tell uwsgiconfig.py to use an alternative C compiler and C preprocessor. If they are not defined, the procedure is the following:

For CC -> try to get the CC config_var from the Python binary running uwsgiconfig.py, falling back to 'gcc'
For CPP -> fall back to 'cpp'

As an example, on a system with both gcc and clang you will end up with:

CC=clang CPP=clang-cpp python uwsgiconfig.py --build

3.3.5 CPUCOUNT

In the spirit of "easy and fast builds even on production systems", uwsgiconfig.py tries to use all of your CPU cores, spawning multiple instances of the C compiler (one per core). You can override this using the CPUCOUNT environment variable, forcing the number of detected CPU cores (setting it to 1 will disable the parallel build).

CPUCOUNT=2 python uwsgiconfig.py --build

3.3.6 UWSGI_FORCE_REBUILD

3.3.7 Plugins and uwsgiplugin.py

A uWSGI plugin is a shared library exporting the <name>_plugin symbol, where <name> is the name of the plugin. As an example, the psgi plugin exports the psgi_plugin symbol, pypy exports the pypy_plugin symbol, and so on. This symbol is a uwsgi_plugin C struct defining the hooks of the plugin. When you ask uWSGI to load a plugin it simply calls dlopen() and gets the uwsgi_plugin struct via dlsym().

The vast majority of the uWSGI project is developed as plugins; this ensures a modular approach to configuration and an obviously saner development style. The sysadmin is free to embed each plugin in the server binary or to build each plugin as an external shared library.

Embedded plugins are defined in the 'embedded_plugins' directive of the build profile. You can add more embedded plugins from the command line using the UWSGI_EMBED_PLUGINS environment variable (see below).

Instead, if you want to build a plugin as a shared library, just run uwsgiconfig.py with the --plugin option:

python uwsgiconfig.py --plugin plugins/psgi

this will build the plugin in plugins/psgi into the psgi_plugin.so file.

To specify a build profile when you build a plugin, pass the profile as an additional argument:

python uwsgiconfig.py --plugin plugins/psgi mybuildprofile

3.3.8 UWSGI_INCLUDES

• this has been added in 1.9.13

On startup, the CPP binary is run to detect default include paths.
You can add more paths using the UWSGI_INCLUDES environment variable UWSGI_INCLUDES=/usr/local/include,/opt/dev/include python uwsgiconfig.py --build 3.3.9 UWSGI_EMBED_PLUGINS 3.3.10 UWSGI_EMBED_CONFIG Allows embedding the specified .ini file in the server binary (currently Linux only) On startup the server parses the embedded file as soon as possible. Custom options defined in the embedded config will be available as standard ones. 32 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.3.11 UWSGI_BIN_NAME 3.3.12 CFLAGS and LDFLAGS 3.3.13 UWSGICONFIG_* for plugins 3.3.14 libuwsgi.so 3.3.15 uwsgibuild.log 3.3.16 uwsgibuild.lastcflags 3.3.17 cflags and uwsgi.h magic 3.3.18 embedding files 3.3.19 The fake make 3.4 Managing the uWSGI server See also: If you are managing multiple apps or a high volume site, take a look at • The uWSGI Emperor – multi-app deployment • Zerg mode • uWSGI Subscription Server 3.4.1 Starting the server Starting an uWSGI server is the role of the system administrator, like starting the Web server. It should not be the role of the Web server to start the uWSGI server – though you can also do that if it fits your architecture. How to best start uWSGI services at boot depends on the operating system you use. On modern systems the following should hold true. On “classic” operating systems you can use init.d/rc.d scripts, or tools such as Supervisor, Daemontools or inetd/xinetd. Sys- tem Method Ubuntu Running uWSGI via Upstart (the official uwsgi package, available since Ubuntu 12.04 provides an init.d based solution. Read the README.) De- bian Running uWSGI via Upstart Fe- dora Systemd OSX launchd So- laris SMF 3.4. Managing the uWSGI server 33 uWSGI Documentation, Release 2.0 3.4.2 Signals for controlling uWSGI You can instruct uWSGI to write the master process PID to a file with the pidfile option. The uWSGI server responds to the following signals. Signal Description Convenience command SIGHUP gracefully reload all the workers and the master process –reload SIGTERM brutally reload all the workers and the master process SIGINT immediately kill the entire uWSGI stack –stop SIGQUIT immediately kill the entire uWSGI stack SIGUSR1 print statistics SIGUSR2 print worker status or wakeup the spooler SIGURG restore a snapshot SIGTSTP pause/suspend/resume an instance SIGWINCH wakeup a worker blocked in a syscall (internal use) 3.4.3 Reloading the server When running with the master process mode, the uWSGI server can be gracefully restarted without closing the main sockets. This functionality allows you patch/upgrade the uWSGI server without closing the connection with the web server and losing a single request. When you send the SIGHUP to the master process it will try to gracefully stop all the workers, waiting for the completion of any currently running requests. Then it closes all the eventually opened file descriptors not related to uWSGI. Lastly, it binary patches (using execve()) the uWSGI process image with a new one, inheriting all of the previous file descriptors. The server will know that it is a reloaded instance and will skip all the sockets initialization, reusing the previous ones. Note: Sending the SIGTERM signal will obtain the same result reload-wise but will not wait for the completion of running requests. There are several ways to make uWSGI gracefully restart. 
# using kill to send the signal kill -HUP‘cat /tmp/project-master.pid‘ # or the convenience option --reload uwsgi --reload /tmp/project-master.pid # or if uwsgi was started with touch-reload=/tmp/somefile touch /tmp/somefile Or from your application, in Python: uwsgi.reload() Or in Ruby, UWSGI.reload 3.4.4 Stopping the server If you have the uWSGI process running in the foreground for some reason, you can just hit CTRL+C to kill it off. 34 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 When dealing with background processes, you’ll need to use the master pidfile again. The SIGINT signal will kill uWSGI. kill -INT‘cat /tmp/project-master.pid‘ # or for convenience... uwsgi --stop /tmp/project-master.pid 3.4.5 The Master FIFO Starting from uWSGI 1.9.17, a new management system has been added using unix named pipes (fifo): The Master FIFO 3.4. Managing the uWSGI server 35 uWSGI Documentation, Release 2.0 3.5 Supported languages and platforms Technology Avail- able since Notes Status Python 0.9.1 The first available plugin, supports WSGI (PEP 333, PEP 3333), Web3 (from version 0.9.7-dev) and Pump (from 0.9.8.4). Works with Virtualenv, multiple Python interpreters, Python3 and has unique features like Aliasing Python modules, DynamicVirtualenv and uGreen – uWSGI Green Threads. A module exporting handy decorators for the uWSGI API is available in the source distribution. PyPy is supported since 1.3. The Python Tracebacker was added in 1.3. Stable, 100% uWSGI API support Lua 0.9.5 Supports LuaWSAPI, coroutines and threads Stable, 60% uWSGI API support Perl 0.9.5 uWSGI Perl support (PSGI) (PSGI) support. Multiple interpreters, threading and async modes supported Stable, 60% uWSGI API support Ruby 0.9.7- dev Ruby support support. A loop engine for Ruby 1.9 fibers is available as well as a handy DSL module. Stable, 80% uWSGI API support Integrating uWSGI with Erlang 0.9.5 Allows message exchanging between uWSGI and Erlang nodes. Stable, no uWSGI API support Running CGI scripts on uWSGI 1.0- dev Run CGI scripts Stable, no uWSGI API support Running PHP scripts in uWSGI 1.0- dev Run PHP scripts Stable from 1.1, 5% uWSGI API support uWSGI Go support (1.4 only) 1.4- dev Allows integration with the Go language 15% uWSGI API support JVM in the uWSGI server (updated to 1.9) 1.9- dev Allows integration between uWSGI and the Java Virtual Machine JWSGI and Clojure/Ring handlers are available. Stable The Mono ASP.NET plugin 0.9.7- dev Allows integration between uWSGI and Mono, and execution of ASP.NET applications. Stable uWSGI V8 support 1.9.4 Allows integration between uWSGI and the V8 JavaScript engine. Early stage of development 3.6 Supported Platforms/Systems This is the list of officially supported operating systems and platforms. • Linux 2.6/3.x • FreeBSD >= 7 • NetBSD • OpenBSD 36 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 • DragonFlyBSD • Windows Cygwin • Mac OSX • Solaris >= 10 • NexentaOS • SmartOS • OpenSolaris • OpenIndiana • OmniOS • Debian/kFreeBSD • GNU/Hurd 3.7 Web server integration uWSGI supports several methods of integrating with web servers. It is also capable of serving HTTP requests by itself. 3.7.1 Cherokee See also: Cherokee support The Cherokee webserver officially supports uWSGI. Cherokee is fast and lightweight, has a beautiful admin interface and a great community. Their support for uWSGI has been awesome since the beginning and we recommend its use in most situations. The userbase of the Cherokee uWSGI handler is probably the biggest of all. 
The Cherokee uWSGI handler is commercially supported by Unbit. 3.7.2 Nginx See also: Nginx support The uWSGI module is included in the official Nginx distribution since version 0.8.40. A version supporting Nginx 0.7.x is maintained in the uWSGI package. This is a stable handler commercially supported by Unbit. 3.7.3 Apache See also: Apache support The Apache2 mod_uwsgi module was the first web server integration module developed for uWSGI. It is stable but could be better integrated with the Apache API. It is commercially supported by Unbit. 3.7. Web server integration 37 uWSGI Documentation, Release 2.0 Since uWSGI 0.9.6-dev a second Apache2 module called mod_Ruwsgi is included. It’s more Apache API friendly. mod_Ruwsgi is not commercially supported by Unbit. During the 1.2 development cycle, another module called mod_proxy_uwsgi has been added. In the near future this should be the best choice for Apache based deployments. 3.7.4 Mongrel2 See also: Attaching uWSGI to Mongrel2 Support for the Mongrel2 Project has been available since 0.9.8-dev via the ZeroMQ protocol plugin. In our tests Mongrel2 survived practically all of the loads we sent. Very good and solid project. Try it :) 3.7.5 Lighttpd This module is the latest developed, but its inclusion in the official Lighttpd distribution has been rejected, as the main author considers the uwsgi protocol a “reinventing the wheel” technology while suggesting a FastCGI approach. We respect this position. The module will continue to reside in the uWSGI source tree, but it is currently unmaintained. There is currently no commercial support for this handler. We consider this module “experimental”. 3.7.6 Twisted This is a “commodity” handler, useful mainly for testing applications without installing a full web server. If you want to develop an uWSGI server, look at this module. Twisted. 3.7.7 Tomcat The included servlet can be used to forward requests from Tomcat to the uWSGI server. It is stable, but currently lacks documentation. There is currently no commercial support for this handler. 3.7.8 CGI The CGI handlers are for “lazy” installations. Their use in production environments is discouraged. 3.8 Frequently Asked Questions (FAQ) 3.8.1 Why should I choose uWSGI? Because you can! :) uWSGI wants to be a complete web application deployment solution with batteries included: • ProcessManagement • Management of long-running tasks • uWSGI RPC Stack 38 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 • Clustering • LoadBalancing • Monitoring • ResourceLimiting ... and many other annoying everyday tasks that you’d have to delegate to external scripts and manual sysadmin tasks. If you are searching for a simple server for your WSGI, PSGI or Rack app, uWSGI may not be for you. Though, if you are building an app which needs to be rock solid, fast, and easy to distribute and optimize for various loads, you will most likely find yourself needing uWSGI. The best definition for uWSGI is “Swiss Army Knife for your network applications”. 3.8.2 What about the protocol? The uwsgi (all lowercase) protocol is derived from SCGI but with binary string length representations and a 4-byte header that includes the size of the var block (16 bit length) and a couple of general-purpose bytes. We are not reinventing the wheel. Binary management is much easier and cheaper than string parsing, and every single bit of power is required for our projects. 
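As an illustration of how simple this format is to produce and parse, here is a minimal Python sketch (not part of uWSGI itself) that serializes a dictionary of request variables into a uwsgi packet: a 4-byte header carrying modifier1, the 16-bit little-endian size of the vars block and modifier2, followed by length-prefixed key/value strings. Treat it as a sketch of the idea only; the authoritative description is on The uwsgi Protocol page.

import struct

def uwsgi_packet(env, modifier1=0, modifier2=0):
    # vars block: each key and value is prefixed by its 16-bit little-endian length
    body = b""
    for key, value in env.items():
        k, v = key.encode("latin1"), value.encode("latin1")
        body += struct.pack("<H", len(k)) + k + struct.pack("<H", len(v)) + v
    # 4-byte header: modifier1, 16-bit little-endian size of the vars block, modifier2
    return struct.pack("<BHB", modifier1, len(body), modifier2) + body

# a request becomes a handful of cheap, length-prefixed copies: no string scanning needed
pkt = uwsgi_packet({"REQUEST_METHOD": "GET", "PATH_INFO": "/", "SERVER_NAME": "example.com"})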
If you need proof, look at the official protocol documentation and you will understand why a new protocol was needed. Obviously, you are free to use the other supported protocols. Remember, if you cannot use uWSGI in some scenario, it is a uWSGI bug. 3.8.3 Can I use it in cluster environments? Yes, this is one of the main features of the uWSGI stack. You can have multiple instances bound on different servers, and using the load balancing facilities of your webserver/proxy/router you can distribute your load. Systems like uWSGI RPC Stack allows you to fast call functions on remote nodes, and The uWSGI Legion subsystem allows you to elect a master in a multi-node setup. 3.8.4 So, why all those timeout configuration flags? Choosing sane timeouts is the key to high availability. Do not trust network applications that do not permit you to choose a timeout. 3.8.5 I need help! What do I do? Post a message on the uWSGI mailing list including your • Operating system version • CPU architecture • Webserver used (if any) • uWSGI version • uWSGI command line or config files You should add the –show-config option and post the output in the message. It will be very useful for finding out just what’s wrong with your uWSGI. You can also rebuild uWSGI with debug symbols and run it under a debugger like gdb. 3.8. Frequently Asked Questions (FAQ) 39 uWSGI Documentation, Release 2.0 uWSGI is an enormous project with hundreds of options. You should be prepared that not everything will go right at the first shot. Ask for help, ask for help and ask for help. If you are frustrated, do not waste time blaming and ranting - instead simply join the list and ask for help. This is open source, if you only rant you are doing nothing useful. 3.8.6 I am not a sysadmin, nor a UNIX guru. Can I use uWSGI? That’s a good question :) But sadly there is no simple answer. uWSGI has not been developed with simplicity in mind, but with versatility. You can try it by starting with one of the quickstarts and if you have problems, simply ask for help in the list or on the IRC channel. 3.8.7 How can I buy commercial support for my company? Send an email to info at unbit.it with the word “uWSGI” in the subject. The email you send should include your company information and your specific request. We will reply as soon as possible. 3.8.8 Will this allow me to run my awesome apps on my ancient close-minded ISP? Probably not. The uWSGI server requires a modern platform/environment. 3.8.9 Where are the benchmarks? Sorry, we only do “official” benchmarks for regression testing. If benchmarks are very important to you, you can search on the mailing list, make your own benchmarks or search on Google. uWSGI gives precedence to machine health, so do not expect your ab test with an unrealistic number of concurrent connections to be managed flawlessly without tuning. Some socket and networking knowledge is required if you want to make a valid benchmark (and avoid geek rage in your blog comments ;). Also remember that uWSGI can be run in various modes, so avoid comparing it configured in preforking mode with another server in non-blocking/async mode if you do not want to look ridiculous. Note: If you see your tests failing at higher concurrency rates you are probably hitting your OS socket backlog queue limit (maximum of 128 slots on Linux, tunable via /proc/sys/net/somaxconn and /proc/sys/net/ipv4/tcp_max_syn_backlog for TCP sockets). You can set this value in uWSGI with the listen configuration option. 3.8.10 Ha! Server XXX is faster than uWSGI! 
Take that! As already stated uWSGI is not a silver bullet, it is not meant to be liked by the whole world and it is obviously not the fastest server out there. It is a piece of software following an “approach” to problems you may not like or that you may conversely love. The approach taken will work better for certain cases than others, and each application should be analyzed on it’s own merits using appropriate and accruate real-world benchmarks. 3.8.11 What is ‘Harakiri mode’? At Unbit we host hundreds of unreliable web apps on our servers. All of them run on hardly constrained (at kernel level) environments where having processes block due to an implementation error will result in taking down an entire site. The harakiri mode has two operational modes: • one that we define as “raw and a bit unreliable” (used for simple setup without a process manager) 40 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 • and another one that we define as “reliable” that depends on the presence of the uWSGI process manager (see ProcessManagement). The first one sets a simple alarm at the start of every request. If the process gets a SIGALRM signal, it terminates itself. We call this unreliable, because your app or some module you use could overwrite or simply cancel the alarm with a simple call to alarm(). The second one uses a master process shared memory area (via mmap) that maintains statistics on every worker in the pool. At the start of every request, the worker sets a timestamp representing the time after which the process will be killed in its dedicated area. This timestamp is zeroed after every successful request. If the master process finds a worker with a timestamp in the past it will mercilessly kill it. 3.8.12 Will my app run faster with uWSGI? It’s unlikely. The biggest bottleneck in web app deployment is the application itself. If you want a faster environment, optimize your code or use techniques such as clustering or caching. We say that uWSGI is fast because it introduces a very little overhead in the deployment structure. 3.8.13 What are the most important options for performance and robustness in the uWSGI environment? By default, uWSGI is configured with sane “almost-good-for-all” values. But if and when things start going wild, tuning is a must. • Increasing (or decreasing) timeout is important, as is modifying the socket listen queue size. • Think about threading. If you do not need threads, do not enable them. • If you are running only a single application you can disable multiple interpreters. • Always remember to enable the master process in production environments. See ProcessManagement. • Adding workers does not mean “increasing performance”, so choose a good value for the workers option based on the nature of your app (IO bound, CPU bound, IO waiting...) 3.8.14 Why not simply use HTTP as the protocol? A good question with a simple answer: HTTP parsing is slow, really slow. Why should we do a complex task twice? The web server has already parsed the request! The uwsgi protocol is very simple to parse for a machine, while HTTP is very easy to parse for a human. As soon as humans are being used as servers, we will abandon the uwsgi protocol in favor of the HTTP protocol. All this said, you can use uWSGI via Native HTTP support, FastCGI, ZeroMQ and other protocols as well. 3.8.15 Why do you support multiple methods of configuration? System administration is all about skills and taste. 
uWSGI tries to give sysadmins as many choices as possible for integration with whatever infrastructure is already available. Having multiple methods of configuration is just one way we achieve this. 3.8.16 What is the best webserver handler? See Web server integration. 3.8. Frequently Asked Questions (FAQ) 41 uWSGI Documentation, Release 2.0 3.9 Things to know (best practices and “issues”) READ IT !!! • The http and http-socket options are entirely different beasts. The first one spawns an additional process forwarding requests to a series of workers (think about it as a form of shield, at the same level of apache or nginx), while the second one sets workers to natively speak the http protocol. TL/DR: if you plan to expose uWSGI directly to the public, use –http, if you want to proxy it behind a webserver speaking http with backends, use –http-socket. .. seealso:: Native HTTP support • By default, sending the SIGTERM signal to uWSGI means “brutally reload the stack” while the convention is to shut an application down on SIGTERM. To shutdown uWSGI use SIGINT or SIGQUIT instead. If you absolutely can not live with uWSGI being so disrespectful towards SIGTERM, by all means enable the die-on-term option. • If you plan to host multiple applications do yourself a favor and check the The uWSGI Emperor – multi-app deployment docs. • Always use uwsgitop, through The uWSGI Stats Server or something similar to monitor your apps’ health. • uWSGI can include features in the core or as loadable plugins. uWSGI packages supplied with OS distributions tend to be modular. In such setups, be sure to load the plugins you require with the plugins option. A good symptom to recognize an unloaded plugin is messages like “Unavailable modifier requested” in your logs. If you are using distribution supplied packages, double check that you have installed the plugin for your language of choice. • Config files support a limited form of inheritance, variables, if constructs and simple cycles. Check the Config- uration logic and How uWSGI parses config files pages. • To route requests to a specific plugin, the webserver needs to pass a magic number known as a modifier to the uWSGI instances. By default this number is set to 0, which is mapped to Python. As an example, routing a request to a PSGI app requires you to set the modifier to 5 - or optionally to load the PSGI plugin as modifier 0. (This will mean that all modifierless requests will be considered Perl.) • There is no magic rule for setting the number of processes or threads to use. It is very much application and system dependent. Simple math like processes = 2 * cpucores will not be enough. You need to experiment with various setups and be prepared to constantly monitor your apps. uwsgitop could be a great tool to find the best values. • If an HTTP request has a body (like a POST request generated by a form), you have to read (consume) it in your application. If you do not do this, the communication socket with your webserver may be clobbered. If you are lazy you can use the post-buffering option that will automatically read data for you. For Rack applications this is automatically enabled. • Always check the memory usage of your apps. The memory-report option could be your best friend. • If you plan to use UNIX sockets (as opposed to TCP), remember they are standard filesystem objects. This means they have permissions and as such your webserver must have write access to them. • Common sense: do not run uWSGI instances as root. 
You can start your uWSGIs as root, but be sure to drop privileges with the uid and gid options. • uWSGI tries to (ab)use the Copy On Write semantics of the fork() call whenever possible. By default it will fork after having loaded your applications to share as much of their memory as possible. If this behavior is undesirable for some reason, use the lazy option. This will instruct uWSGI to load the applications after each worker’s fork(). Lazy mode changes the way graceful reloading works: instead of reloading the whole instance, each worker is reloaded in chain. If you want “lazy app loading”, but want to maintain the standard uWSGI reloading behaviour, starting from 1.3 you can use the lazy-apps option. • By default the Python plugin does not initialize the GIL. This means your app-generated threads will not run. If you need threads, remember to enable them with enable-threads. Running uWSGI in multithreading mode 42 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 (with the threads options) will automatically enable threading support. This “strange” default behaviour is for performance reasons, no shame in that. • If you spawn a new process during a request it will inherit the file descriptors of the worker spawning it - includ- ing the socket connected with the webserver/router. If you do not want this behaviour set the close-on-exec option. • The Ruby garbage collector is configured by default to run after every request. This is an aggressive policy that may slow down your apps a bit – but CPU resources are cheaper than memory, and especially cheaper than running out of memory. To tune this frequency use the ruby-gc option. • On OpenBSD, NetBSD and FreeBSD < 9, SysV IPC semaphores are used as the locking subsystem. These operating systems tend to limit the number of allocable semaphores to fairly small values. You should raise the default limits if you plan to run more than one uWSGI instance. FreeBSD 9 has POSIX semaphores, so you do not need to bother with that. • Do not build plugins using a different config file than used to build the uWSGI binary itself – unless you like pain or know exactly what you are doing. • By default uWSGI allocates a very small buffer (4096 bytes) for the headers of each request. If you start receiving “invalid request block size” in your logs, it could mean you need a bigger buffer. Increase it (up to 65535) with the buffer-size option. Note: If you receive ‘21573’ as the request block size in your logs, it could mean you are using the HTTP protocol to speak with an instance speaking the uwsgi protocol. Don’t do this. • If your (Linux) server seems to have lots of idle workers, but performance is still sub-par, you may want to look at the value of the ip_conntrack_max system variable (/proc/sys/net/ipv4/ip_conntrack_max) and increase it to see if it helps. • Some Linux distributions (read: Debian Etch 4) make a mix of newer kernels with very old userspace. This kind of combination can make the uWSGI build system spit out errors (most notably on unshare(), pthread locking, inotify...). You can force uWSGI to configure itself for an older system prefixing the ‘make’ (or whatever way you use to build it) with CFLAGS="-DOBSOLETE_LINUX_KERNEL" • By default, stdin is remapped to /dev/null on uWSGI startup. If you need a valid stdin (for debugging, piping and so on) add --honour-stdin. • You can easily add non-existent options to your config files (as placeholders, custom options, or app-related configuration items). 
This is a really handy feature, but can lead to headaches on typos. The strict mode (–strict) will disable this feature, and only valid uWSGI options are tolerated. • Some plugins (most notably Python and Perl) have code auto-reloading facilities. Although they might sound very appealing, you MUST use them only under development as they are really heavy-weight. For example the Python –py-autoreload option will scan your whole module tree at every check cycle. 3.10 Configuring uWSGI uWSGI can be configured using several different methods. All configuration methods may be mixed and matched in the same invocation of uWSGI. Note: Some of the configuration methods may require a specific plugin (ie. sqlite and ldap). See also: Configuration logic 3.10. Configuring uWSGI 43 uWSGI Documentation, Release 2.0 The configuration system is unified, so each command line option maps 1:1 with entries in the config files. Example: uwsgi --http-socket :9090 --psgi myapp.pl can be written as [uwsgi] http-socket= :9090 psgi= myapp.pl 3.10.1 Loading configuration files uWSGI supports loading configuration files over several methods other than simple disk files: uwsgi --ini http://uwsgi.it/configs/myapp.ini # HTTP uwsgi --xml - # standard input uwsgi --yaml fd://0 # file descriptor uwsgi --json ’exec://nc 192.168.11.2:33000’ # arbitrary executable Note: More esoteric file sources, such as the Emperor, embedded configuration (in two flavors), dynamic library symbols and ELF sections could also be used. 3.10.2 Magic variables uWSGI configuration files can include “magic” variables, prefixed with a percent sign. Currently the following magic variables (you can access them in Python via uwsgi.magic_table) are defined. %v the vassals directory (pwd) %V the uWSGI version %h the hostname %o the original config filename, as specified on the command line %O same as %o but refer to the first non-template config file (version 1.9.18) %p the absolute path of the configuration file %P same as %p but refer to the first non-template config file (version 1.9.18) %s the filename of the configuration file %S same as %s but refer to the first non-template config file (version 1.9.18) %d the absolute path of the directory containing the configuration file %D same as %d but refer to the first non-template config file (version 1.9.18) %e the extension of the configuration file %E same as %e but refer to the first non-template config file (version 1.9.18) %n the filename without extension %N same as %n but refer to the first non-template config file (version 1.9.18) %c the name of the directory containing the config file (version 1.3+) %C same as %c but refer to the first non-template config file (version 1.9.18) %t unix time (in seconds, gathered at instance startup) (version 1.9.20-dev+) %T unix time (in microseconds, gathered at instance startup) (version 1.9.20-dev+) %x the current section identifier, eg. config.ini:section (version 1.9-dev+) %X same as %x but refer to the first non-template config file (version 1.9.18) %i inode number of the file (version 2.0.1) %I same as %i but refer to the first non-template config file Continued on next page 44 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 Table 3.1 – continued from previous page %0..%9 a specific component of the full path of the directory containing the config file (version 1.3+) %[ ANSI escape “\033” (useful for printing colors) %k detected cpu cores (version 1.9.20-dev+) %u uid of the user running the process (version 2.0) %U username (if available, otherwise fallback to uid) of the user running the process (version 2.0) %g gid of the user running the process (version 2.0) %G group name (if available, otherwise fallback to gid) of the user running the process (version 2.0) %j HEX representation of the djb33x hash of the full config path %J same as %j but refer to the first non-template config file Note that most of these refer to the file they appear in, even if that file is included from another file. An exception are most of the uppercase versions, which refer to the first non-template config file loaded. This means the first config file not loaded through --include or --inherit, but through for example --ini,--yaml or --config. These are intended to use with the emperor, to refer to the actual vassal config file instead of templates included with --vassals-include or --vassals-inherit. For example, here’s funnyapp.ini. [uwsgi] socket= /tmp/%n.sock module= werkzeug.testapp:test_app processes=4 master=1 %n will be replaced with the name of the config file, sans extension, so the result in this case will be [uwsgi] socket= /tmp/funnyapp.sock module= werkzeug.testapp:test_app processes=4 master=1 3.10.3 Placeholders Placeholders are custom magic variables defined during configuration time by setting a new configuration variable of your own devising. [uwsgi] ; These are placeholders... my_funny_domain= uwsgi.it set-ph= max_customer_address_space=64 set-placeholder= customers_base_dir=/var/www ; And these aren’t. socket= /tmp/sockets/%(my_funny_domain).sock chdir= %(customers_base_dir)/%(my_funny_domain) limit-as= %(max_customer_address_space) Placeholders can be assigned directly, or using the set-placeholder / set-ph option. These latter options can be useful to: • Make it more explicit that you’re setting placeholders instead of regular options. • Set options on the commandline, since unknown options like --foo=bar are rejected but --set-placeholder foo=bar is ok. • Set placeholders when strict mode is enabled. 3.10. Configuring uWSGI 45 uWSGI Documentation, Release 2.0 Placeholders are accessible, like any uWSGI option, in your application code via uwsgi.opt. import uwsgi print uwsgi.opt[’customers_base_dir’] This feature can be (ab)used to reduce the number of configuration files required by your application. Similarly, contents of evironment variables and external text files can be included using the $(ENV_VAR) and @(file_name) syntax. See also How uWSGI parses config files. 3.10.4 Placeholders math (from uWSGI 1.9.20-dev) You can apply math formulas to placeholders using this special syntax: [uwsgi] foo= 17 bar= 30 ; total will be 50 total= %(foo + bar + 3) Remember to not miss spaces between operations. Operations are executed in a pipeline (not in common math style): [uwsgi] foo= 17 bar= 30 total= %(foo + bar + 3 * 2) ‘total’ will be evaluated as 100: (((foo + bar) + 3) * 2) Incremental and decremental shortcuts are available [uwsgi] foo= 29 ; remember the space !!! bar= %(foo ++) bar will be 30 If you do not specify an operation between two items, ‘string concatenation’ is assumed: [uwsgi] foo=2 bar=9 ; remember the space !!! 
bar= %(foo bar ++)

the first two items will be evaluated as '29' (not 11, as no math operation has been specified)

3.10.5 The '@' magic

We have already seen that we can use the form @(filename) to include the contents of a file:

[uwsgi]
foo= @(/tmp/foobar)

the truth is that '@' can read from all of the supported uwsgi schemes:

[uwsgi]
; read from a symbol
foo= @(sym://uwsgi_funny_function)
; read from binary appended data
bar= @(data://0)
; read from http
test= @(http://example.com/hello)
; read from a file descriptor
content= @(fd://3)
; read from a process stdout
body= @(exec://foo.pl)
; call a function returning a char *
characters= @(call://uwsgi_func)

3.10.6 Command line arguments

Example:

uwsgi --socket /tmp/uwsgi.sock --socket 127.0.0.1:8000 --master --workers 3

3.10.7 Environment variables

When passed as environment variables, options are capitalized and prefixed with UWSGI_, and dashes are substituted with underscores.

Note: Several values for the same configuration variable are not supported with this method.

Example:

UWSGI_SOCKET=127.0.0.1 UWSGI_MASTER=1 UWSGI_WORKERS=3 uwsgi

3.10.8 INI files

.INI files are a de-facto standard configuration format used by many applications. They consist of [section]s and key=value pairs. An example uWSGI INI configuration:

[uwsgi]
socket= /tmp/uwsgi.sock
socket= 127.0.0.1:8000
workers=3
master= true

By default, uWSGI uses the [uwsgi] section, but you can specify another section name while loading the INI file with the syntax filename:section, that is:

uwsgi --ini myconf.ini:app1

Alternatively, you can load another section from the same file by omitting the filename and specifying just the section name. Note that technically, this loads the named section from the last .ini file loaded instead of the current one, so be careful when including other files.

[uwsgi]
# This will load the app1 section below
ini= :app1
# This will load the defaults.ini file
ini= defaults.ini
# This will load the app2 section from the defaults.ini file!
ini= :app2

[app1]
plugin= rack

[app2]
plugin= php

• Whitespace is insignificant within lines.
• Lines starting with a semicolon (;) or a hash/octothorpe (#) are ignored as comments.
• Boolean values may be set without the value part. Simply master is thus equivalent to master=true. This may not be compatible with other INI parsers such as paste.deploy.
• For convenience, uWSGI recognizes bare .ini arguments specially, so the invocation uwsgi myconf.ini is equal to uwsgi --ini myconf.ini.

3.10.9 XML files

The root node should be <uwsgi>, with each option expressed as a child element whose text is the option value. An example:

<uwsgi>
    <socket>/tmp/uwsgi.sock</socket>
    <socket>127.0.0.1:8000</socket>
    <workers>3</workers>
</uwsgi>

You can also have multiple <uwsgi> stanzas in your file, marked with different id attributes. To choose the stanza to use, specify its id after the filename in the xml option, using a colon as a separator. When using this id mode, the root node of the file may be anything you like. This will allow you to embed uwsgi configuration nodes in other XML files.

<mydocument>
    <uwsgi id="tg">
        <socket>/tmp/tg.sock</socket>
    </uwsgi>
    <uwsgi id="django">
        <socket>/tmp/django.sock</socket>
    </uwsgi>
</mydocument>

• Boolean values may be set without a text value.
• For convenience, uWSGI recognizes bare .xml arguments specially, so the invocation uwsgi myconf.xml is equal to uwsgi --xml myconf.xml.

3.10.10 JSON files

The JSON file should represent an object with one key-value pair, the key being "uwsgi" and the value an object of configuration variables. Native JSON lists, booleans and numbers are supported. An example:
Table of Contents uWSGI Documentation, Release 2.0 {"uwsgi":{ "socket":["/tmp/uwsgi.sock", "127.0.0.1:8000"], "master": true, "workers":3 }} Again, a named section can be loaded using a colon after the filename. {"app1":{ "plugin": "rack" }, "app2":{ "plugin": "php" }} And then load this using: uwsgi --json myconf.json:app2 Note: The Jansson library is required during uWSGI build time to enable JSON support. By default the presence of the library will be auto-detected and JSON support will be automatically enabled, but you can force JSON support to be enabled or disabled by editing your build configuration. See also: Installing uWSGI 3.10.11 YAML files The root element should be uwsgi. Boolean options may be set as true or 1. An example: uwsgi: socket: /tmp/uwsgi.sock socket: 127.0.0.1:8000 master: 1 workers: 3 Again, a named section can be loaded using a colon after the filename. app1: plugin: rack app2: plugin: php And then load this using: uwsgi --yaml myconf.yaml:app2 3.10.12 SQLite configuration Note: Under construction. 3.10. Configuring uWSGI 49 uWSGI Documentation, Release 2.0 3.10.13 LDAP configuration LDAP is a flexible way to centralize configuration of large clusters of uWSGI servers. Configuring it is a complex topic. See Configuring uWSGI with LDAP for more information. 3.11 Fallback configuration (available from 1.9.15-dev) If you need a “reset to factory defaults”, or “show a welcome page if the user has made mess with its config” scenario, fallback configuration is your silver bullet 3.11.1 Simple case A very common problem is screwing-up the port on which the instance is listening. To emulate this kind of error we try to bind on port 80 as an unprivileged user: uwsgi --uid 1000 --http-socket :80 uWSGI will exit with: bind(): Permission denied[core/socket.c line 755] Internally (from the kernel point of view) the instance exited with status 1 Now we want to allow the instance to automatically bind on port 8080 when the user supplied config fails. Let’s define a fallback config (you can save it as safe.ini): [uwsgi] print= Hello i am the fallback config !!! http-socket= :8080 wsgi-file= welcomeapp.wsgi Now we can re-run the (broken) instance: uwsgi --fallback-config safe.ini --uid 1000 --http-socket :80 Your error will be now something like: bind(): Permission denied[core/socket.c line 755] Thu Jul 25 21:55:39 2013 - !!! /home/roberto/uwsgi/uwsgi(pid: 7409) exited with status 1 !!! Thu Jul 25 21:55:39 2013 - !!! Fallback config to safe.ini !!! [uWSGI] getting INI configuration from safe.ini *** Starting uWSGI 1.9.15-dev-a0cb71c(64bit) on[Thu Jul 25 21:55:39 2013]*** ... As you can see, the instance has detected the exit code 1 and has binary patched itself with a new config (without changing the pid, or calling fork()) 3.11.2 Broken apps Another common problem is the inability to load an application, but instead of bringing down the whole site we want to load an alternate application: 50 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 uwsgi --fallback-config safe.ini --need-app --http-socket :8080 --wsgi-file brokenapp.py Here the key is –need-app. It will call exit(1) if the instance has not been able to load at least one application. 3.11.3 Multiple fallback levels Your fallback config file can specify a fallback-config directive too, allowing multiple fallback levels. BEWARE OF LOOPS!!! 
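As a sketch of what such a chain can look like (the file names and the static directory below are only illustrative), the safe.ini used in the examples above could itself declare a last-resort configuration; just make sure the last level does not point back to an earlier one:

; safe.ini (first fallback level)
[uwsgi]
; if this config also exits with status 1, re-exec with the next level
fallback-config= barebones.ini
print= Hello i am the fallback config !!!
http-socket= :8080
wsgi-file= welcomeapp.wsgi

; barebones.ini (last resort: it sets no fallback-config, so the chain ends here)
[uwsgi]
http-socket= :8081
; serve only a static maintenance page
check-static= /var/www/maintenance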
3.11.4 How it works The objective is catching the exit code of a process before the process itself is destroyed (we do not want to call another fork(), or destroy already opened file descriptors) uWSGI makes heavy usage of atexit() hooks, so we only need to register the fallback handler as the first one (hooks are executed in reverse order). In addition to this we need to get the exit code in our atexit() hook, something is not supported by default (the on_exit() function is now deprecated). The solution is “patching” exit(x) with uwsgi_exit(x) that is a simple wrapper setting uwsgi.last_exit_code memory pointer. Now the hook only needs to check for uwsgi.last_exit_code == 1 and eventually execve() the binary again passing the fallback config to it char *argv[3]; argv[0]= uwsgi.binary_path; argv[1]= uwsgi.fallback_config; argv[2]= NULL; execvp(uwsgi.binary_path, argv); 3.11.5 Notes Try to place –fallback-config as soon as possibile in your config tree. The various config parsers may fail (calling exit(1)) before the fallback file is registered 3.12 Configuration logic Starting from 1.1 certain logic constructs are available. The following statements are currently supported: • for .. endfor • if-dir / if-not-dir • if-env / if-not-env • if-exists / if-not-exists • if-file / if-not-file • if-opt / if-not-opt • if-reload / if-not-reload – undocumented 3.12. Configuration logic 51 uWSGI Documentation, Release 2.0 Each of these statements exports a context value you can access with the special placeholder %(_). For example, the “for” statement sets %(_) to the current iterated value. Warning: Recursive logic is not supported and will cause uWSGI to promptly exit. 3.12.1 for For iterates over space-separated strings. The following three code blocks are equivalent. [uwsgi] master= true ; iterate over a list of ports for= 3031 3032 3033 3034 3035 socket= 127.0.0.1:%(_) endfor= module= helloworld 3031 3032 3033 3034 3035 127.0.0.1:%(_) helloworld uwsgi --for="3031 3032 3033 3034 3035" --socket="127.0.0.1:%(_)" --endfor --module helloworld Note that the for-loop is applied to each line inside the block separately, not to the block as a whole. For example, this: [uwsgi] for=abc socket= /var/run/%(_).socket http-socket= /var/run/%(_)-http.socket endfor= is expanded to: [uwsgi] socket= /var/run/a.socket socket= /var/run/b.socket socket= /var/run/c.socket http-socket= /var/run/a-http.socket http-socket= /var/run/b-http.socket http-socket= /var/run/c-http.socket 3.12.2 if-env Check if an environment variable is defined, putting its value in the context placeholder. [uwsgi] if-env= PATH print= Your path is %(_) check-static= /var/www endif= socket= :3031 52 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.12.3 if-exists Check for the existence of a file or directory. The context placeholder is set to the filename found. [uwsgi] http= :9090 ; redirect all requests if a file exists if-exists= /tmp/maintainance.txt route=.* redirect:/offline endif= Note: The above example uses uWSGI internal routing. 3.12.4 if-file Check if the given path exists and is a regular file. The context placeholder is set to the filename found. python :8080 settings.py django.core.handlers.wsgi:WSGIHandler() 3.12.5 if-dir Check if the given path exists and is a directory. The context placeholder is set to the filename found. uwsgi: socket: 4040 processes: 2 if-dir: config.ru rack: %(_) endif: 3.12.6 if-opt Check if the given option is set, or has a given value. 
The context placeholder is set to the value of the option reference. To check if an option was set, pass just the option name to if-opt. uwsgi: cheaper: 3 if-opt: cheaper print: Running in cheaper mode, with initially %(_) processes endif: To check if an option was set to a specific value, pass option-name=value to if-opt. uwsgi: # Set busyness parameters if it was chosen if-opt: cheaper-algo=busyness cheaper-busyness-max: 25 3.12. Configuration logic 53 uWSGI Documentation, Release 2.0 cheaper-busyness-min: 10 endif: Due to the way uWSGI parses its configs, you can only refer to options that uWSGI has previously seen. In particular, this means: • Only options that are set above the if-opt option are taken into account. This includes any options set by previous include (or type specific includes like ini) options, but does not include options set by previous inherit options). • if-opt is processed after expanding magic variables, but before expanding placeholders and other variables. So if you use if-opt to compare the value of an option, check against the value as stated in the config file, with only the magic variables filled in. If you use the context placeholder %(_) inside the if-opt block, you should be ok: any placeholders will later be expanded. • If an option is specified multiple times, only the value of the first one will be seen by if-opt. • Only explicitly set values will be seen, not implicit defaults. See also: How uWSGI parses config files 3.13 uWSGI Options This is an automatically generated reference list of the uWSGI options. It is the same output you can get via the --help option. This page is probably the worst way to understand uWSGI for newbies. If you are still learning how the project works, you should read the various quickstarts and tutorials. Each option has the following attributes: • argument: it is the struct option (used by getopt()/getopt_long()) has_arg element. Can be ‘required’, ‘no_argument’ or ‘optional_argument’ • shortcut: some option can be specified with the short form (a dash followed by a single letter) • parser: this is how uWSGI parses the parameter. There are dozens of way, the most common are ‘uwsgi_opt_set_str’ when it takes a simple string, ‘uwsgi_opt_set_int’ when it takes a 32bit number, ‘uwsgi_opt_add_string_list’ when the parameter can be specified multiple times to build a list. • help: the help message, the same you get from uwsgi --help • reference: a link to a documentation page that gives better understanding and context of an option You can add more detailed infos to this page, editing https://github.com/unbit/uwsgi-docs/blob/master/optdefs.pl (please, double check it before sending a pull request) 3.13.1 uWSGI core socket argument: required_argument shortcut: -s 54 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using default protocol uwsgi-socket argument: required_argument shortcut: -s parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using uwsgi protocol suwsgi-socket argument: required_argument parser: uwsgi_opt_add_ssl_socket help: bind to the specified UNIX/TCP socket using uwsgi protocol over SSL ssl-socket argument: required_argument parser: uwsgi_opt_add_ssl_socket help: bind to the specified UNIX/TCP socket using uwsgi protocol over SSL http-socket argument: required_argument parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using HTTP protocol http-socket-modifier1 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier1 when using HTTP protocol http-socket-modifier2 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier2 when using HTTP protocol 3.13. uWSGI Options 55 uWSGI Documentation, Release 2.0 https-socket argument: required_argument parser: uwsgi_opt_add_ssl_socket help: bind to the specified UNIX/TCP socket using HTTPS protocol https-socket-modifier1 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier1 when using HTTPS protocol https-socket-modifier2 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier2 when using HTTPS protocol fastcgi-socket argument: required_argument parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using FastCGI protocol fastcgi-nph-socket argument: required_argument parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using FastCGI protocol (nph mode) fastcgi-modifier1 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier1 when using FastCGI protocol fastcgi-modifier2 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier2 when using FastCGI protocol 56 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 scgi-socket argument: required_argument parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using SCGI protocol scgi-nph-socket argument: required_argument parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using SCGI protocol (nph mode) scgi-modifier1 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier1 when using SCGI protocol scgi-modifier2 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier2 when using SCGI protocol raw-socket argument: required_argument parser: uwsgi_opt_add_socket_no_defer help: bind to the specified UNIX/TCP socket using RAW protocol raw-modifier1 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier1 when using RAW protocol raw-modifier2 argument: required_argument parser: uwsgi_opt_set_64bit help: force the specified modifier2 when using RAW protocol 3.13. 
uWSGI Options 57 uWSGI Documentation, Release 2.0 puwsgi-socket argument: required_argument parser: uwsgi_opt_add_socket help: bind to the specified UNIX/TCP socket using persistent uwsgi protocol (puwsgi) protocol argument: required_argument parser: uwsgi_opt_set_str help: force the specified protocol for default sockets socket-protocol argument: required_argument parser: uwsgi_opt_set_str help: force the specified protocol for default sockets shared-socket argument: required_argument parser: uwsgi_opt_add_shared_socket help: create a shared sacket for advanced jailing or ipc undeferred-shared-socket argument: required_argument parser: uwsgi_opt_add_shared_socket help: create a shared sacket for advanced jailing or ipc (undeferred mode) processes argument: required_argument shortcut: -p parser: uwsgi_opt_set_int help: spawn the specified number of workers/processes 58 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 workers argument: required_argument shortcut: -p parser: uwsgi_opt_set_int help: spawn the specified number of workers/processes thunder-lock argument: no_argument parser: uwsgi_opt_true help: serialize accept() usage (if possible) reference: Serializing accept(), AKA Thundering Herd, AKA the Zeeg Problem harakiri argument: required_argument shortcut: -t parser: uwsgi_opt_set_int help: set harakiri timeout harakiri-verbose argument: no_argument parser: uwsgi_opt_true help: enable verbose mode for harakiri harakiri-no-arh argument: no_argument parser: uwsgi_opt_true help: do not enable harakiri during after-request-hook no-harakiri-arh argument: no_argument parser: uwsgi_opt_true help: do not enable harakiri during after-request-hook 3.13. uWSGI Options 59 uWSGI Documentation, Release 2.0 no-harakiri-after-req-hook argument: no_argument parser: uwsgi_opt_true help: do not enable harakiri during after-request-hook backtrace-depth argument: required_argument parser: uwsgi_opt_set_int help: set backtrace depth mule-harakiri argument: required_argument parser: uwsgi_opt_set_int help: set harakiri timeout for mule tasks xmlconfig argument: required_argument shortcut: -x parser: uwsgi_opt_load_xml flags: UWSGI_OPT_IMMEDIATE help: load config from xml file xml argument: required_argument shortcut: -x parser: uwsgi_opt_load_xml flags: UWSGI_OPT_IMMEDIATE help: load config from xml file config argument: required_argument parser: uwsgi_opt_load_config flags: UWSGI_OPT_IMMEDIATE help: load configuration using the pluggable system 60 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 fallback-config argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_IMMEDIATE help: re-exec uwsgi with the specified config when exit code is 1 strict argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_IMMEDIATE help: enable strict mode (placeholder cannot be used) skip-zero argument: no_argument parser: uwsgi_opt_true help: skip check of file descriptor 0 skip-atexit argument: no_argument parser: uwsgi_opt_true help: skip atexit hooks (ignored by the master) set argument: required_argument shortcut: -S parser: uwsgi_opt_set_placeholder flags: UWSGI_OPT_IMMEDIATE help: set a placeholder or an option set-placeholder argument: required_argument parser: uwsgi_opt_set_placeholder flags: UWSGI_OPT_IMMEDIATE help: set a placeholder 3.13. 
set-ph
  argument: required_argument  parser: uwsgi_opt_set_placeholder  flags: UWSGI_OPT_IMMEDIATE
  help: set a placeholder
get
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_NO_INITIAL
  help: print the specified option value and exit
declare-option
  argument: required_argument  parser: uwsgi_opt_add_custom_option  flags: UWSGI_OPT_IMMEDIATE
  help: declare a new uWSGI custom option
  reference: Defining new options for your instances
declare-option2
  argument: required_argument  parser: uwsgi_opt_add_custom_option
  help: declare a new uWSGI custom option (non-immediate)
resolve
  argument: required_argument  parser: uwsgi_opt_resolve  flags: UWSGI_OPT_IMMEDIATE
  help: place the result of a dns query in the specified placeholder, syntax: placeholder=name (immediate option)
for
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) for cycle
for-glob
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) for cycle (expand glob)
for-times
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) for cycle (expand the specified num to a list starting from 1)
for-readline
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) for cycle (expand the specified file to a list of lines)
endfor
  argument: optional_argument  parser: uwsgi_opt_noop  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) end for cycle
end-for
  argument: optional_argument  parser: uwsgi_opt_noop  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) end for cycle
if-opt
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for option
if-not-opt
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for option
if-env
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for environment variable
if-not-env
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for environment variable
ifenv
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for environment variable
if-reload
  argument: no_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for reload
if-not-reload
  argument: no_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for reload
if-exists
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for file/directory existence
if-not-exists
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for file/directory existence
ifexists
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for file/directory existence
if-plugin
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for plugin
if-not-plugin
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for plugin
ifplugin
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for plugin
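The for/if options implement the configuration logic described in the "Configuration logic" chapter. A small sketch, assuming the %(_) placeholder holds the current iteration value and DEBUG is an environment variable you control:

    [uwsgi]
    ; bind one HTTP socket per port in the list
    for = 9091 9092 9093
      http-socket = 127.0.0.1:%(_)
    endfor =

    ; enable extra diagnostics only when DEBUG is set in the environment
    if-env = DEBUG
      harakiri-verbose = true
    endif =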
if-file
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for file existence
if-not-file
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for file existence
if-dir
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for directory existence
if-not-dir
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for directory existence
ifdir
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for directory existence
if-directory
  argument: required_argument  parser: uwsgi_opt_logic  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) check for directory existence
endif
  argument: optional_argument  parser: uwsgi_opt_noop  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) end if
end-if
  argument: optional_argument  parser: uwsgi_opt_noop  flags: UWSGI_OPT_IMMEDIATE
  help: (opt logic) end if
blacklist
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_IMMEDIATE
  help: set options blacklist context
end-blacklist
  argument: no_argument  parser: uwsgi_opt_set_null  flags: UWSGI_OPT_IMMEDIATE
  help: clear options blacklist context
whitelist
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_IMMEDIATE
  help: set options whitelist context
end-whitelist
  argument: no_argument  parser: uwsgi_opt_set_null  flags: UWSGI_OPT_IMMEDIATE
  help: clear options whitelist context
ignore-sigpipe
  argument: no_argument  parser: uwsgi_opt_true
  help: do not report (annoying) SIGPIPE
ignore-write-errors
  argument: no_argument  parser: uwsgi_opt_true
  help: do not report (annoying) write()/writev() errors
write-errors-tolerance
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: set the maximum number of allowed write errors (default: no tolerance)
write-errors-exception-only
  argument: no_argument  parser: uwsgi_opt_true
  help: only raise an exception on write errors, giving control to the app itself
disable-write-exception
  argument: no_argument  parser: uwsgi_opt_true
  help: disable exception generation on write()/writev()
inherit
  argument: required_argument  parser: uwsgi_opt_load
  help: use the specified file as config template
include
  argument: required_argument  parser: uwsgi_opt_load  flags: UWSGI_OPT_IMMEDIATE
  help: include the specified file as immediate configuration
inject-before
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_IMMEDIATE
  help: inject a text file before the config file (advanced templating)
inject-after
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_IMMEDIATE
  help: inject a text file after the config file (advanced templating)
daemonize
  argument: required_argument  shortcut: -d  parser: uwsgi_opt_set_str
  help: daemonize uWSGI
daemonize2
  argument: required_argument  parser: uwsgi_opt_set_str
  help: daemonize uWSGI after app loading
stop
  argument: required_argument  parser: uwsgi_opt_pidfile_signal  flags: UWSGI_OPT_IMMEDIATE
  help: stop an instance
reload
  argument: required_argument  parser: uwsgi_opt_pidfile_signal  flags: UWSGI_OPT_IMMEDIATE
  help: reload an instance
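A sketch of config templating and instance control with the options above (all paths are hypothetical, and the pidfile option documented later in this section is assumed to be set):

    [uwsgi]
    inherit = /etc/uwsgi/base.ini        ; pull in a shared template
    daemonize = /var/log/uwsgi/myapp.log ; detach and log to this file
    ; with a pidfile in place, the same binary can drive the instance:
    ;   uwsgi --reload /run/uwsgi/myapp.pid
    ;   uwsgi --stop   /run/uwsgi/myapp.pid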
pause
  argument: required_argument  parser: uwsgi_opt_pidfile_signal  flags: UWSGI_OPT_IMMEDIATE
  help: pause an instance
suspend
  argument: required_argument  parser: uwsgi_opt_pidfile_signal  flags: UWSGI_OPT_IMMEDIATE
  help: suspend an instance
resume
  argument: required_argument  parser: uwsgi_opt_pidfile_signal  flags: UWSGI_OPT_IMMEDIATE
  help: resume an instance
connect-and-read
  argument: required_argument  parser: uwsgi_opt_connect_and_read  flags: UWSGI_OPT_IMMEDIATE
  help: connect to a socket and wait for data from it
extract
  argument: required_argument  parser: uwsgi_opt_extract  flags: UWSGI_OPT_IMMEDIATE
  help: fetch/dump any supported address to stdout
listen
  argument: required_argument  shortcut: -l  parser: uwsgi_opt_set_int
  help: set the socket listen queue size
max-vars
  argument: required_argument  shortcut: -v  parser: uwsgi_opt_max_vars
  help: set the amount of internal iovec/vars structures
max-apps
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the maximum number of per-worker applications
buffer-size
  argument: required_argument  shortcut: -b  parser: uwsgi_opt_set_16bit
  help: set internal buffer size
memory-report
  argument: no_argument  shortcut: -m  parser: uwsgi_opt_true
  help: enable memory report
profiler
  argument: required_argument  parser: uwsgi_opt_set_str
  help: enable the specified profiler
cgi-mode
  argument: no_argument  shortcut: -c  parser: uwsgi_opt_true
  help: force CGI-mode for plugins supporting it
abstract-socket
  argument: no_argument  shortcut: -a  parser: uwsgi_opt_true
  help: force UNIX socket in abstract mode (Linux only)
chmod-socket
  argument: optional_argument  shortcut: -C  parser: uwsgi_opt_chmod_socket
  help: chmod-socket
chmod
  argument: optional_argument  shortcut: -C  parser: uwsgi_opt_chmod_socket
  help: chmod-socket
chown-socket
  argument: required_argument  parser: uwsgi_opt_set_str
  help: chown unix sockets
umask
  argument: required_argument  parser: uwsgi_opt_set_umask  flags: UWSGI_OPT_IMMEDIATE
  help: set umask
freebind
  argument: no_argument  parser: uwsgi_opt_true
  help: put socket in freebind mode
  Sets the IP_FREEBIND flag on every socket created by uWSGI. This kind of socket can bind to non-existent IP addresses. Its main purpose is high availability (this is Linux only).
map-socket
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: map sockets to specific workers
enable-threads
  argument: no_argument  shortcut: -T  parser: uwsgi_opt_true
  help: enable threads
no-threads-wait
  argument: no_argument  parser: uwsgi_opt_true
  help: do not wait for threads cancellation on quit/reload
auto-procname
  argument: no_argument  parser: uwsgi_opt_true
  help: automatically set processes name to something meaningful
procname-prefix
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_PROCNAME
  help: add a prefix to the process names
procname-prefix-spaced
  argument: required_argument  parser: uwsgi_opt_set_str_spaced  flags: UWSGI_OPT_PROCNAME
  help: add a spaced prefix to the process names
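A sketch tying together the socket-tuning and process-naming options above (socket path, sizes and prefix are illustrative, not recommendations):

    [uwsgi]
    socket = /run/uwsgi/myapp.sock
    chmod-socket = 660            ; let the web server group write to the socket
    listen = 1024                 ; larger listen queue (kernel limits still apply)
    buffer-size = 8192            ; room for bigger request headers
    enable-threads = true
    auto-procname = true
    procname-prefix-spaced = myapp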
procname-append
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_PROCNAME
  help: append a string to process names
procname
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_PROCNAME
  help: set process names
procname-master
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_PROCNAME
  help: set master process name
single-interpreter
  argument: no_argument  shortcut: -i  parser: uwsgi_opt_true
  help: do not use multiple interpreters (where available)
need-app
  argument: no_argument  parser: uwsgi_opt_true
  help: exit if no app can be loaded
master
  argument: no_argument  shortcut: -M  parser: uwsgi_opt_true
  help: enable master process
honour-stdin
  argument: no_argument  parser: uwsgi_opt_true
  help: do not remap stdin to /dev/null
emperor
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the Emperor
  reference: The uWSGI Emperor – multi-app deployment
  The Emperor is a special uWSGI instance aimed at governing other uWSGI instances (named vassals). By default it is configured to monitor a directory containing valid uWSGI config files: whenever a file is created, a new instance is spawned; when the file is touched, the instance is reloaded; when the file is removed, the instance is destroyed. It can be extended to support more paradigms.
emperor-proxy-socket
  argument: required_argument  parser: uwsgi_opt_set_str
  help: force the vassal to become an Emperor proxy
emperor-wrapper
  argument: required_argument  parser: uwsgi_opt_set_str
  help: set a binary wrapper for vassals
emperor-nofollow
  argument: no_argument  parser: uwsgi_opt_true
  help: do not follow symlinks when checking for mtime
emperor-procname
  argument: required_argument  parser: uwsgi_opt_set_str
  help: set the Emperor process name
emperor-freq
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the Emperor scan frequency (default 3 seconds)
emperor-required-heartbeat
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the Emperor tolerance about heartbeats
emperor-curse-tolerance
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the Emperor tolerance about cursed vassals
emperor-pidfile
  argument: required_argument  parser: uwsgi_opt_set_str
  help: write the Emperor pid in the specified file
emperor-tyrant
  argument: no_argument  parser: uwsgi_opt_true
  help: put the Emperor in Tyrant mode
emperor-tyrant-nofollow
  argument: no_argument  parser: uwsgi_opt_true
  help: do not follow symlinks when checking for uid/gid in Tyrant mode
emperor-stats
  argument: required_argument  parser: uwsgi_opt_set_str
  help: run the Emperor stats server
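A minimal Emperor sketch for the options above, assuming vassal config files live in /etc/uwsgi/vassals (paths and the stats address are hypothetical):

    [uwsgi]
    emperor = /etc/uwsgi/vassals        ; spawn/reload/destroy vassals as files change
    emperor-pidfile = /run/uwsgi/emperor.pid
    emperor-stats = 127.0.0.1:3031
    emperor-tyrant = true               ; run each vassal with the uid/gid of its config file
    emperor-freq = 3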
emperor-stats-server
  argument: required_argument  parser: uwsgi_opt_set_str
  help: run the Emperor stats server
early-emperor
  argument: no_argument  parser: uwsgi_opt_true
  help: spawn the emperor as soon as possible
emperor-broodlord
  argument: required_argument  parser: uwsgi_opt_set_int
  help: run the emperor in BroodLord mode
emperor-throttle
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set throttling level (in milliseconds) for bad behaving vassals (default 1000)
emperor-max-throttle
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set max throttling level (in milliseconds) for bad behaving vassals (default 3 minutes)
emperor-magic-exec
  argument: no_argument  parser: uwsgi_opt_true
  help: prefix vassals config files with exec:// if they have the executable bit
emperor-on-demand-extension
  argument: required_argument  parser: uwsgi_opt_set_str
  help: search for text file (vassal name + extension) containing the on demand socket name
emperor-on-demand-ext
  argument: required_argument  parser: uwsgi_opt_set_str
  help: search for text file (vassal name + extension) containing the on demand socket name
emperor-on-demand-directory
  argument: required_argument  parser: uwsgi_opt_set_str
  help: enable on demand mode binding to the unix socket in the specified directory named like the vassal + .socket
emperor-on-demand-dir
  argument: required_argument  parser: uwsgi_opt_set_str
  help: enable on demand mode binding to the unix socket in the specified directory named like the vassal + .socket
emperor-on-demand-exec
  argument: required_argument  parser: uwsgi_opt_set_str
  help: use the output of the specified command as on demand socket name (the vassal name is passed as the only argument)
emperor-extra-extension
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: allow the specified extension in the Emperor (vassal will be called with --config)
emperor-extra-ext
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: allow the specified extension in the Emperor (vassal will be called with --config)
emperor-no-blacklist
  argument: no_argument  parser: uwsgi_opt_true
  help: disable Emperor blacklisting subsystem
emperor-use-clone
  argument: required_argument  parser: uwsgi_opt_set_unshare
  help: use clone() instead of fork(), passing the specified unshare() flags
emperor-cap
  argument: required_argument  parser: uwsgi_opt_set_emperor_cap
  help: set vassals capability
vassals-cap
  argument: required_argument  parser: uwsgi_opt_set_emperor_cap
  help: set vassals capability
vassal-cap
  argument: required_argument  parser: uwsgi_opt_set_emperor_cap
  help: set vassals capability
imperial-monitor-list
  argument: no_argument  parser: uwsgi_opt_true
  help: list enabled imperial monitors
imperial-monitors-list
  argument: no_argument  parser: uwsgi_opt_true
  help: list enabled imperial monitors
vassals-inherit
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add config templates to vassals config (uses --inherit)
vassals-include
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: include config templates to vassals config (uses --include instead of --inherit)
vassals-inherit-before
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add config templates to vassals config (uses --inherit, parses before the vassal file)
vassals-include-before
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: include config templates to vassals config (uses --include instead of --inherit, parses before the vassal file)
vassals-start-hook
  argument: required_argument  parser: uwsgi_opt_set_str
  help: run the specified command before each vassal starts
vassals-stop-hook
  argument: required_argument  parser: uwsgi_opt_set_str
  help: run the specified command after vassal's death
vassal-sos-backlog
  argument: required_argument  parser: uwsgi_opt_set_int
  help: ask emperor for sos if backlog queue has more items than the value specified
vassals-set
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: automatically set the specified option (via --set) for every vassal
vassal-set
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: automatically set the specified option (via --set) for every vassal
heartbeat
  argument: required_argument  parser: uwsgi_opt_set_int
  help: announce healthiness to the emperor
reload-mercy
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the maximum time (in seconds) we wait for workers and other processes to die during reload/shutdown
worker-reload-mercy
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the maximum time (in seconds) a worker can take to reload/shutdown (default is 60)
mule-reload-mercy
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the maximum time (in seconds) a mule can take to reload/shutdown (default is 60)
exit-on-reload
  argument: no_argument  parser: uwsgi_opt_true
  help: force exit even if a reload is requested
die-on-term
  argument: no_argument  parser: uwsgi_opt_true
  help: exit instead of brutal reload on SIGTERM
force-gateway
  argument: no_argument  parser: uwsgi_opt_true
  help: force the spawn of the first registered gateway without a master
help
  argument: no_argument  shortcut: -h  parser: uwsgi_help  flags: UWSGI_OPT_IMMEDIATE
  help: show this help
usage
  argument: no_argument  shortcut: -h  parser: uwsgi_help  flags: UWSGI_OPT_IMMEDIATE
  help: show this help
print-sym
  argument: required_argument  parser: uwsgi_print_sym  flags: UWSGI_OPT_IMMEDIATE
  help: print content of the specified binary symbol
print-symbol
  argument: required_argument  parser: uwsgi_print_sym  flags: UWSGI_OPT_IMMEDIATE
  help: print content of the specified binary symbol
reaper
  argument: no_argument  shortcut: -r  parser: uwsgi_opt_true
  help: call waitpid(-1,...) after each request to get rid of zombies
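A hedged example of shaping vassals from the Emperor side, plus the reload-mercy/die-on-term behaviour (paths and values are illustrative):

    [uwsgi]
    emperor = /etc/uwsgi/vassals
    vassals-include = /etc/uwsgi/common.ini   ; merged into every vassal via --include
    vassal-set = die-on-term=true             ; force an option on every vassal via --set
    worker-reload-mercy = 30                  ; give each worker 30 seconds on reload
    reload-mercy = 60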
max-requests
  argument: required_argument  shortcut: -R  parser: uwsgi_opt_set_64bit
  help: reload workers after the specified amount of managed requests
min-worker-lifetime
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: number of seconds a worker must run before being reloaded (default is 60)
max-worker-lifetime
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: reload workers after the specified amount of seconds (default is disabled)
socket-timeout
  argument: required_argument  shortcut: -z  parser: uwsgi_opt_set_int
  help: set internal sockets timeout
no-fd-passing
  argument: no_argument  parser: uwsgi_opt_true
  help: disable file descriptor passing
locks
  argument: required_argument  parser: uwsgi_opt_set_int
  help: create the specified number of shared locks
lock-engine
  argument: required_argument  parser: uwsgi_opt_set_str
  help: set the lock engine
ftok
  argument: required_argument  parser: uwsgi_opt_set_str
  help: set the ipcsem key via ftok() for avoiding duplicates
persistent-ipcsem
  argument: no_argument  parser: uwsgi_opt_true
  help: do not remove ipcsems on shutdown
sharedarea
  argument: required_argument  shortcut: -A  parser: uwsgi_opt_add_string_list
  help: create a raw shared memory area of specified pages (note: it supports keyval too)
  reference: SharedArea – share memory pages between uWSGI components
safe-fd
  argument: required_argument  parser: uwsgi_opt_safe_fd
  help: do not close the specified file descriptor
fd-safe
  argument: required_argument  parser: uwsgi_opt_safe_fd
  help: do not close the specified file descriptor
cache
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: create a shared cache containing given elements
cache-blocksize
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: set cache blocksize
cache-store
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: enable persistent cache to disk
cache-store-sync
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set frequency of sync for persistent cache
cache-no-expire
  argument: no_argument  parser: uwsgi_opt_true
  help: disable auto sweep of expired items
cache-expire-freq
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the frequency of cache sweeper scans (default 3 seconds)
cache-report-freed-items
  argument: no_argument  parser: uwsgi_opt_true
  help: constantly report the cache items freed by the sweeper (use only for debug)
cache-udp-server
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: bind the cache udp server (used only for set/update/delete) to the specified socket
cache-udp-node
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: send cache update/deletion to the specified cache udp server
cache-sync
  argument: required_argument  parser: uwsgi_opt_set_str
  help: copy the whole content of another uWSGI cache server on server startup
cache-use-last-modified
  argument: no_argument  parser: uwsgi_opt_true
  help: update last_modified_at timestamp on every cache item modification (default is disabled)
add-cache-item
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add an item in the cache
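A sketch of worker recycling and of a basic shared cache using the options above (all sizes, counts and paths are arbitrary examples):

    [uwsgi]
    max-requests = 5000            ; recycle a worker after 5000 requests
    max-worker-lifetime = 3600     ; ... or after one hour
    min-worker-lifetime = 60
    cache = 1000                   ; 1000 cache items
    cache-blocksize = 4096
    cache-store = /var/lib/uwsgi/myapp.cache   ; hypothetical persistent store
    cache-store-sync = 60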
load-file-in-cache
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: load a static file in the cache
load-file-in-cache-gzip
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: load a static file in the cache with gzip compression
cache2
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: create a new generation shared cache (keyval syntax)
queue
  argument: required_argument  parser: uwsgi_opt_set_int
  help: enable shared queue
queue-blocksize
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set queue blocksize
queue-store
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: enable persistent queue to disk
queue-store-sync
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set frequency of sync for persistent queue
spooler
  argument: required_argument  shortcut: -Q  parser: uwsgi_opt_add_spooler  flags: UWSGI_OPT_MASTER
  help: run a spooler on the specified directory
spooler-external
  argument: required_argument  parser: uwsgi_opt_add_spooler  flags: UWSGI_OPT_MASTER
  help: map spoolers requests to a spooler directory managed by an external instance
spooler-ordered
  argument: no_argument  parser: uwsgi_opt_true
  help: try to order the execution of spooler tasks
spooler-chdir
  argument: required_argument  parser: uwsgi_opt_set_str
  help: chdir() to specified directory before each spooler task
spooler-processes
  argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_IMMEDIATE
  help: set the number of processes for spoolers
spooler-quiet
  argument: no_argument  parser: uwsgi_opt_true
  help: do not be verbose with spooler tasks
spooler-max-tasks
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the maximum number of tasks to run before recycling a spooler
spooler-harakiri
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set harakiri timeout for spooler tasks
spooler-frequency
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set spooler frequency
spooler-freq
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set spooler frequency
mule
  argument: optional_argument  parser: uwsgi_opt_add_mule  flags: UWSGI_OPT_MASTER
  help: add a mule
mules
  argument: required_argument  parser: uwsgi_opt_add_mules  flags: UWSGI_OPT_MASTER
  help: add the specified number of mules
farm
  argument: required_argument  parser: uwsgi_opt_add_farm  flags: UWSGI_OPT_MASTER
  help: add a mule farm
mule-msg-size
  argument: optional_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
  help: set mule message buffer size
signal
  argument: required_argument  parser: uwsgi_opt_signal  flags: UWSGI_OPT_IMMEDIATE
  help: send a uwsgi signal to a server
signal-bufsize
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set buffer size for signal queue
signals-bufsize
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set buffer size for signal queue
signal-timer
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: add a timer (syntax: )
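A sketch of the spooler and mule options above. The cache2 keyval fields shown (name, items) follow the caching framework chapter; the spool directory, mule count and farm name are placeholders:

    [uwsgi]
    cache2 = name=mycache,items=2000    ; new-generation cache, keyval syntax
    spooler = /var/spool/uwsgi/myapp    ; hypothetical spool directory
    spooler-processes = 2
    spooler-harakiri = 120
    mules = 2                           ; two generic mules
    farm = jobs:1,2                     ; hypothetical farm grouping mules 1 and 2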
timer
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: add a timer (syntax: )
signal-rbtimer
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: add a redblack timer (syntax: )
rbtimer
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: add a redblack timer (syntax: )
rpc-max
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: maximum number of rpc slots (default: 64)
disable-logging
  argument: no_argument  shortcut: -L  parser: uwsgi_opt_false
  help: disable request logging
flock
  argument: required_argument  parser: uwsgi_opt_flock  flags: UWSGI_OPT_IMMEDIATE
  help: lock the specified file before starting, exit if locked
flock-wait
  argument: required_argument  parser: uwsgi_opt_flock_wait  flags: UWSGI_OPT_IMMEDIATE
  help: lock the specified file before starting, wait if locked
flock2
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_IMMEDIATE
  help: lock the specified file after logging/daemon setup, exit if locked
flock-wait2
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_IMMEDIATE
  help: lock the specified file after logging/daemon setup, wait if locked
pidfile
  argument: required_argument  parser: uwsgi_opt_set_str
  help: create pidfile (before privileges drop)
pidfile2
  argument: required_argument  parser: uwsgi_opt_set_str
  help: create pidfile (after privileges drop)
chroot
  argument: required_argument  parser: uwsgi_opt_set_str
  help: chroot() to the specified directory
pivot-root
  argument: required_argument  parser: uwsgi_opt_set_str
  help: pivot_root() to the specified directories (new_root and put_old must be separated with a space)
pivot_root
  argument: required_argument  parser: uwsgi_opt_set_str
  help: pivot_root() to the specified directories (new_root and put_old must be separated with a space)
uid
  argument: required_argument  parser: uwsgi_opt_set_uid
  help: setuid to the specified user/uid
gid
  argument: required_argument  parser: uwsgi_opt_set_gid
  help: setgid to the specified group/gid
add-gid
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add the specified group id to the process credentials
immediate-uid
  argument: required_argument  parser: uwsgi_opt_set_immediate_uid  flags: UWSGI_OPT_IMMEDIATE
  help: setuid to the specified user/uid IMMEDIATELY
immediate-gid
  argument: required_argument  parser: uwsgi_opt_set_immediate_gid  flags: UWSGI_OPT_IMMEDIATE
  help: setgid to the specified group/gid IMMEDIATELY
no-initgroups
  argument: no_argument  parser: uwsgi_opt_true
  help: disable additional groups set via initgroups()
cap
  argument: required_argument  parser: uwsgi_opt_set_cap
  help: set process capability
unshare
  argument: required_argument  parser: uwsgi_opt_set_unshare
  help: unshare() part of the processes and put it in a new namespace
unshare2
  argument: required_argument  parser: uwsgi_opt_set_unshare
  help: unshare() part of the processes and put it in a new namespace after rootfs change
setns-socket
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: expose a unix socket returning namespace fds from /proc/self/ns
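A sketch of single-instance locking and privilege dropping with the options above (user, group and paths are hypothetical):

    [uwsgi]
    flock = /run/uwsgi/myapp.lock   ; refuse to start a second copy on the same config
    pidfile = /run/uwsgi/myapp.pid  ; written before privileges are dropped
    uid = www-data
    gid = www-data
    chroot = /srv/myapp-root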
setns-socket-skip
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: skip the specified entry when sending setns file descriptors
setns-skip
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: skip the specified entry when sending setns file descriptors
setns
  argument: required_argument  parser: uwsgi_opt_set_str
  help: join a namespace created by an external uWSGI instance
setns-preopen
  argument: no_argument  parser: uwsgi_opt_true
  help: open /proc/self/ns as soon as possible and cache fds
jailed
  argument: no_argument  parser: uwsgi_opt_true
  help: mark the instance as jailed (force the execution of post_jail hooks)
jail
  argument: required_argument  parser: uwsgi_opt_set_str
  help: put the instance in a FreeBSD jail
jail-ip4
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add an ipv4 address to the FreeBSD jail
jail-ip6
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add an ipv6 address to the FreeBSD jail
jidfile
  argument: required_argument  parser: uwsgi_opt_set_str
  help: save the jid of a FreeBSD jail in the specified file
jid-file
  argument: required_argument  parser: uwsgi_opt_set_str
  help: save the jid of a FreeBSD jail in the specified file
jail2
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add an option to the FreeBSD jail
libjail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: add an option to the FreeBSD jail
jail-attach
  argument: required_argument  parser: uwsgi_opt_set_str
  help: attach to the FreeBSD jail
refork
  argument: no_argument  parser: uwsgi_opt_true
  help: fork() again after privileges drop. Useful for jailing systems
re-fork
  argument: no_argument  parser: uwsgi_opt_true
  help: fork() again after privileges drop. Useful for jailing systems
refork-as-root
  argument: no_argument  parser: uwsgi_opt_true
  help: fork() again before privileges drop. Useful for jailing systems
re-fork-as-root
  argument: no_argument  parser: uwsgi_opt_true
  help: fork() again before privileges drop. Useful for jailing systems
refork-post-jail
  argument: no_argument  parser: uwsgi_opt_true
  help: fork() again after jailing. Useful for jailing systems
re-fork-post-jail
  argument: no_argument  parser: uwsgi_opt_true
  help: fork() again after jailing. Useful for jailing systems
hook-asap
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook as soon as possible
hook-pre-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook before jailing
hook-post-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook after jailing
hook-in-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook in jail after initialization
hook-as-root
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook before privileges drop
hook-as-user
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook after privileges drop
hook-as-user-atexit
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook before app exit and reload
hook-pre-app
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook before app loading
hook-post-app
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook after app loading
hook-accepting
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook after each worker enters the accepting phase
hook-accepting1
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook after the first worker enters the accepting phase
hook-accepting-once
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook after each worker enters the accepting phase (once per instance)
hook-accepting1-once
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook after the first worker enters the accepting phase (once per instance)
hook-master-start
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook when the Master starts
hook-touch
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook when the specified file is touched (syntax: <file> )
hook-emperor-start
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook when the Emperor starts
hook-emperor-stop
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook when the Emperor sends a stop message
hook-emperor-reload
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook when the Emperor sends a reload message
hook-emperor-lost
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook when the Emperor connection is lost
hook-as-vassal
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook before exec()ing the vassal
hook-as-emperor
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook in the emperor after the vassal has been started
hook-as-mule
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook in each mule
hook-as-gateway
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified hook in each gateway
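Hooks take an action string; the exec: prefix used below is one of the documented hook actions (see the Hooks chapter), while the commands and paths are hypothetical:

    [uwsgi]
    hook-asap = exec:mkdir -p /run/uwsgi/myapp
    hook-as-root = exec:echo "still running as root"
    hook-as-user = exec:touch /run/uwsgi/myapp/ready
    hook-post-app = exec:echo "application loaded"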
after-request-hook
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified function/symbol after each request
after-request-call
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified function/symbol after each request
exec-asap
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command as soon as possible
exec-pre-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command before jailing
exec-post-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command after jailing
exec-in-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command in jail after initialization
exec-as-root
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command before privileges drop
exec-as-user
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command after privileges drop
exec-as-user-atexit
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command before app exit and reload
exec-pre-app
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command before app loading
exec-post-app
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command after app loading
exec-as-vassal
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command before exec()ing the vassal
exec-as-emperor
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: run the specified command in the emperor after the vassal has been started
mount-asap
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: mount filesystem as soon as possible
mount-pre-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: mount filesystem before jailing
mount-post-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: mount filesystem after jailing
mount-in-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: mount filesystem in jail after initialization
mount-as-root
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: mount filesystem before privileges drop
mount-as-vassal
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: mount filesystem before exec()ing the vassal
mount-as-emperor
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: mount filesystem in the emperor after the vassal has been started
umount-asap
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: unmount filesystem as soon as possible
umount-pre-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: unmount filesystem before jailing
umount-post-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: unmount filesystem after jailing
umount-in-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: unmount filesystem in jail after initialization
umount-as-root
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: unmount filesystem before privileges drop
umount-as-vassal
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: unmount filesystem before exec()ing the vassal
umount-as-emperor
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: unmount filesystem in the emperor after the vassal has been started
wait-for-interface
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: wait for the specified network interface to come up before running root hooks
wait-for-interface-timeout
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the timeout for wait-for-interface
wait-interface
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: wait for the specified network interface to come up before running root hooks
wait-interface-timeout
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the timeout for wait-for-interface
wait-for-iface
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: wait for the specified network interface to come up before running root hooks
wait-for-iface-timeout
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the timeout for wait-for-interface
wait-iface
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: wait for the specified network interface to come up before running root hooks
wait-iface-timeout
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set the timeout for wait-for-interface
call-asap
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function as soon as possible
call-pre-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function before jailing
call-post-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function after jailing
call-in-jail
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function in jail after initialization
call-as-root
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function before privileges drop
call-as-user
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function after privileges drop
call-as-user-atexit
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function before app exit and reload
call-pre-app
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function before app loading
call-post-app
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function after app loading
call-as-vassal
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function() before exec()ing the vassal
call-as-vassal1
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function before exec()ing the vassal
call-as-vassal3
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function(char *, uid_t, gid_t) before exec()ing the vassal
call-as-emperor
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function() in the emperor after the vassal has been started
call-as-emperor1
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function in the emperor after the vassal has been started
call-as-emperor2
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function(char *, pid_t) in the emperor after the vassal has been started
call-as-emperor4
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: call the specified function(char *, pid_t, uid_t, gid_t) in the emperor after the vassal has been started
ini
  argument: required_argument  parser: uwsgi_opt_load_ini  flags: UWSGI_OPT_IMMEDIATE
  help: load config from ini file
yaml
  argument: required_argument  shortcut: -y  parser: uwsgi_opt_load_yml  flags: UWSGI_OPT_IMMEDIATE
  help: load config from yaml file
yml
  argument: required_argument  shortcut: -y  parser: uwsgi_opt_load_yml  flags: UWSGI_OPT_IMMEDIATE
  help: load config from yaml file
json
  argument: required_argument  shortcut: -j  parser: uwsgi_opt_load_json  flags: UWSGI_OPT_IMMEDIATE
  help: load config from json file
js
  argument: required_argument  shortcut: -j  parser: uwsgi_opt_load_json  flags: UWSGI_OPT_IMMEDIATE
  help: load config from json file
weight
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: weight of the instance (used by clustering/lb/subscriptions)
auto-weight
  argument: required_argument  parser: uwsgi_opt_true
  help: set weight of the instance (used by clustering/lb/subscriptions) automatically
no-server
  argument: no_argument  parser: uwsgi_opt_true
  help: force no-server mode
command-mode
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_IMMEDIATE
  help: force command mode
no-defer-accept
  argument: no_argument  parser: uwsgi_opt_true
  help: disable deferred-accept on sockets
tcp-nodelay
  argument: no_argument  parser: uwsgi_opt_true
  help: enable TCP NODELAY on each request
so-keepalive
  argument: no_argument  parser: uwsgi_opt_true
  help: enable TCP KEEPALIVEs
so-send-timeout
  argument: no_argument  parser: uwsgi_opt_set_int
  help: set SO_SNDTIMEO
socket-send-timeout
  argument: no_argument  parser: uwsgi_opt_set_int
  help: set SO_SNDTIMEO
so-write-timeout
  argument: no_argument  parser: uwsgi_opt_set_int
  help: set SO_SNDTIMEO
socket-write-timeout
  argument: no_argument  parser: uwsgi_opt_set_int
  help: set SO_SNDTIMEO
socket-sndbuf
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: set SO_SNDBUF
socket-rcvbuf
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: set SO_RCVBUF
limit-as
  argument: required_argument  parser: uwsgi_opt_set_megabytes
  help: limit processes address space/vsz
limit-nproc
  argument: required_argument  parser: uwsgi_opt_set_int
  help: limit the number of spawnable processes
reload-on-as
  argument: required_argument  parser: uwsgi_opt_set_megabytes  flags: UWSGI_OPT_MEMORY
  help: reload if address space is higher than specified megabytes
reload-on-rss
  argument: required_argument  parser: uwsgi_opt_set_megabytes  flags: UWSGI_OPT_MEMORY
  help: reload if rss memory is higher than specified megabytes
evil-reload-on-as
  argument: required_argument  parser: uwsgi_opt_set_megabytes  flags: UWSGI_OPT_MASTER | UWSGI_OPT_MEMORY
  help: force the master to reload a worker if its address space is higher than specified megabytes
evil-reload-on-rss
  argument: required_argument  parser: uwsgi_opt_set_megabytes  flags: UWSGI_OPT_MASTER | UWSGI_OPT_MEMORY
  help: force the master to reload a worker if its rss memory is higher than specified megabytes
reload-on-fd
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: reload if the specified file descriptor is ready
brutal-reload-on-fd
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: brutal reload if the specified file descriptor is ready
ksm
  argument: optional_argument  parser: uwsgi_opt_set_int
  help: enable Linux KSM
pcre-jit
  argument: no_argument  parser: uwsgi_opt_pcre_jit  flags: UWSGI_OPT_IMMEDIATE
  help: enable pcre jit (if available)
never-swap
  argument: no_argument  parser: uwsgi_opt_true
  help: lock all memory pages, avoiding swapping
touch-reload
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: reload uWSGI if the specified file is modified/touched
touch-workers-reload
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: trigger reload of (only) workers if the specified file is modified/touched
touch-chain-reload
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: trigger chain reload if the specified file is modified/touched
touch-logrotate
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
  help: trigger logrotation if the specified file is modified/touched
touch-logreopen
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
  help: trigger log reopen if the specified file is modified/touched
touch-exec
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: run command when the specified file is modified/touched (syntax: file command)
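A sketch of the memory-watch and touch-* options above (thresholds are in megabytes; values and paths are arbitrary):

    [uwsgi]
    master = true
    reload-on-rss = 256          ; a worker reloads itself above 256 MB of RSS
    evil-reload-on-rss = 512     ; the master forcefully reloads it above 512 MB
    touch-reload = /srv/myapp/reload.me        ; "touch" this file to reload the instance
    touch-logrotate = /srv/myapp/logrotate.me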
touch-signal
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: raise a signal when the specified file is modified/touched (syntax: file signal)
fs-reload
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: graceful reload when the specified filesystem object is modified
fs-brutal-reload
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: brutal reload when the specified filesystem object is modified
fs-signal
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: raise a uwsgi signal when the specified filesystem object is modified (syntax: file signal)
check-mountpoint
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: destroy the instance if a filesystem is no longer reachable (useful for reliable Fuse management)
mountpoint-check
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: destroy the instance if a filesystem is no longer reachable (useful for reliable Fuse management)
check-mount
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: destroy the instance if a filesystem is no longer reachable (useful for reliable Fuse management)
mount-check
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: destroy the instance if a filesystem is no longer reachable (useful for reliable Fuse management)
propagate-touch
  argument: no_argument  parser: uwsgi_opt_true
  help: over-engineering option for systems with flaky signal management
limit-post
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: limit request body
no-orphans
  argument: no_argument  parser: uwsgi_opt_true
  help: automatically kill workers if master dies (can be dangerous for availability)
prio
  argument: required_argument  parser: uwsgi_opt_set_rawint
  help: set processes/threads priority
cpu-affinity
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set cpu affinity
post-buffering
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: enable post buffering
post-buffering-bufsize
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: set buffer size for read() in post buffering mode
body-read-warning
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: set the amount of allowed memory allocation (in megabytes) for request body before starting to print a warning
upload-progress
  argument: required_argument  parser: uwsgi_opt_set_str
  help: enable creation of .json files in the specified directory during a file upload
no-default-app
  argument: no_argument  parser: uwsgi_opt_true
  help: do not fallback to default app
manage-script-name
  argument: no_argument  parser: uwsgi_opt_true
  help: automatically rewrite SCRIPT_NAME and PATH_INFO
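A sketch of request-body handling and filesystem watching with the options above (sizes and paths are examples only):

    [uwsgi]
    limit-post = 1048576            ; refuse request bodies larger than this
    post-buffering = 8192           ; enable post buffering
    post-buffering-bufsize = 65536
    check-mountpoint = /mnt/shared  ; hypothetical FUSE mount; destroy the instance if it disappears
    fs-reload = /srv/myapp/conf     ; graceful reload when this path changes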
ignore-script-name
  argument: no_argument  parser: uwsgi_opt_true
  help: ignore SCRIPT_NAME
catch-exceptions
  argument: no_argument  parser: uwsgi_opt_true
  help: report exception as http output (discouraged, use only for testing)
reload-on-exception
  argument: no_argument  parser: uwsgi_opt_true
  help: reload a worker when an exception is raised
reload-on-exception-type
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: reload a worker when a specific exception type is raised
reload-on-exception-value
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: reload a worker when a specific exception value is raised
reload-on-exception-repr
  argument: required_argument  parser: uwsgi_opt_add_string_list
  help: reload a worker when a specific exception type+value (language-specific) is raised
exception-handler
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: add an exception handler
enable-metrics
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER
  help: enable metrics subsystem
metric
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: add a custom metric
metric-threshold
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: add a metric threshold/alarm
metric-alarm
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: add a metric threshold/alarm
alarm-metric
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: add a metric threshold/alarm
metrics-dir
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: export metrics as text files to the specified directory
metrics-dir-restore
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: restore last value taken from the metrics dir
metric-dir
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: export metrics as text files to the specified directory
metric-dir-restore
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: restore last value taken from the metrics dir
metrics-no-cores
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_METRICS|UWSGI_OPT_MASTER
  help: disable generation of cores-related metrics
  reference: The Metrics subsystem
  Do not expose metrics of async cores.
udp
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: run the udp server on the specified address
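A hedged sketch of the metrics options above; the keyval fields used for metric (name, type) follow the Metrics subsystem chapter, while the metric name and directory are placeholders:

    [uwsgi]
    enable-metrics = true
    metric = name=myapp.logins,type=counter   ; hypothetical custom metric (keyval syntax)
    metrics-dir = /run/uwsgi/metrics          ; each metric exported as a text file here
    metrics-dir-restore = true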
stats
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: enable the stats server on the specified address
stats-server
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: enable the stats server on the specified address
stats-http
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER
  help: prefix stats server json output with http headers
stats-minified
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER
  help: minify statistics json output
stats-min
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER
  help: minify statistics json output
stats-push
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER|UWSGI_OPT_METRICS
  help: push the stats json to the specified destination
stats-pusher-default-freq
  argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
  help: set the default frequency of stats pushers
stats-pushers-default-freq
  argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
  help: set the default frequency of stats pushers
stats-no-cores
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER
  help: disable generation of cores-related stats
  reference: The Metrics subsystem
  Do not expose the information about cores in the stats server.
stats-no-metrics
  argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER
  help: do not include metrics in stats output
  reference: The Metrics subsystem
  Do not expose the metrics at all in the stats server.
multicast
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: subscribe to specified multicast group
multicast-ttl
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set multicast ttl
multicast-loop
  argument: required_argument  parser: uwsgi_opt_set_int
  help: set multicast loop (default 1)
master-fifo
  argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
  help: enable the master fifo
notify-socket
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: enable the notification socket
subscription-notify-socket
  argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
  help: set the notification socket for subscriptions
legion
  argument: required_argument  parser: uwsgi_opt_legion  flags: UWSGI_OPT_MASTER
  help: become a member of a legion
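A sketch of the stats server and master FIFO (the address and FIFO path are hypothetical):

    [uwsgi]
    master = true
    stats = 127.0.0.1:1717          ; JSON stats, readable e.g. with uwsgitop or curl
    stats-http = true               ; wrap the JSON in HTTP headers so a browser can read it
    master-fifo = /run/uwsgi/myapp.fifo   ; write single-character commands here (see The Master FIFO)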
legion-mcast
  argument: required_argument  parser: uwsgi_opt_legion_mcast  flags: UWSGI_OPT_MASTER
  help: become a member of a legion (shortcut for multicast)
legion-node
  argument: required_argument  parser: uwsgi_opt_legion_node  flags: UWSGI_OPT_MASTER
  help: add a node to a legion
legion-freq
  argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
  help: set the frequency of legion packets
legion-tolerance
  argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
  help: set the tolerance of legion subsystem
legion-death-on-lord-error
  argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
  help: declare itself as a dead node for the specified amount of seconds if one of the lord hooks fails
legion-skew-tolerance
  argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
  help: set the clock skew tolerance of legion subsystem (default 30 seconds)
legion-lord
  argument: required_argument  parser: uwsgi_opt_legion_hook  flags: UWSGI_OPT_MASTER
  help: action to call on Lord election
legion-unlord
  argument: required_argument  parser: uwsgi_opt_legion_hook  flags: UWSGI_OPT_MASTER
  help: action to call on Lord dismiss
legion-setup
  argument: required_argument  parser: uwsgi_opt_legion_hook  flags: UWSGI_OPT_MASTER
  help: action to call on legion setup
legion-death
  argument: required_argument  parser: uwsgi_opt_legion_hook  flags: UWSGI_OPT_MASTER
  help: action to call on legion death (shutdown of the instance)
legion-join
  argument: required_argument  parser: uwsgi_opt_legion_hook  flags: UWSGI_OPT_MASTER
  help: action to call on legion join (first time quorum is reached)
legion-node-joined
  argument: required_argument  parser: uwsgi_opt_legion_hook  flags: UWSGI_OPT_MASTER
  help: action to call on new node joining legion
legion-node-left
  argument: required_argument  parser: uwsgi_opt_legion_hook  flags: UWSGI_OPT_MASTER
  help: action to call on node leaving legion
legion-quorum
  argument: required_argument  parser: uwsgi_opt_legion_quorum  flags: UWSGI_OPT_MASTER
  help: set the quorum of a legion
legion-scroll
  argument: required_argument  parser: uwsgi_opt_legion_scroll  flags: UWSGI_OPT_MASTER
  help: set the scroll of a legion
legion-scroll-max-size
  argument: required_argument  parser: uwsgi_opt_set_16bit
  help: set max size of legion scroll buffer
legion-scroll-list-max-size
  argument: required_argument  parser: uwsgi_opt_set_64bit
  help: set max size of legion scroll list buffer
subscriptions-sign-check
  argument: required_argument  parser: uwsgi_opt_scd  flags: UWSGI_OPT_MASTER
  help: set digest algorithm and certificate directory for secured subscription system
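A hedged Legion sketch; the legion name, multicast address, valor and secret below are placeholders, and the cmd: prefix is one of the documented hook actions:

    [uwsgi]
    legion = mycluster 225.1.1.1:4242 100 bf-cbc:hypothetical-secret
    legion-node = mycluster 225.1.1.1:4242
    legion-tolerance = 30
    legion-lord = cmd:echo "this node is now the Lord"
    legion-unlord = cmd:echo "lordship lost"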
subscriptions-sign-check-tolerance
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
help: set the maximum tolerance (in seconds) of clock skew for secured subscription system

subscriptions-sign-skip-uid
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: skip signature check for the specified uid when using unix sockets credentials

subscriptions-credentials-check
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: add a directory to search for subscriptions key credentials

subscriptions-use-credentials
argument: no_argument  parser: uwsgi_opt_true
help: enable management of SCM_CREDENTIALS in subscriptions UNIX sockets

subscription-algo
argument: required_argument  parser: uwsgi_opt_ssa
help: set load balancing algorithm for the subscription system

subscription-dotsplit
argument: no_argument  parser: uwsgi_opt_true
help: try to fall back to the next part (dot based) in the subscription key

subscribe-to
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: subscribe to the specified subscription server

st
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: subscribe to the specified subscription server

subscribe
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: subscribe to the specified subscription server

subscribe2
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: subscribe to the specified subscription server using advanced keyval syntax

subscribe-freq
argument: required_argument  parser: uwsgi_opt_set_int
help: send subscription announce at the specified interval

subscription-tolerance
argument: required_argument  parser: uwsgi_opt_set_int
help: set tolerance for subscription servers

unsubscribe-on-graceful-reload
argument: no_argument  parser: uwsgi_opt_true
help: force unsubscribe request even during graceful reload

start-unsubscribed
argument: no_argument  parser: uwsgi_opt_true
help: configure subscriptions but do not send them (useful with master fifo)

snmp
argument: optional_argument  parser: uwsgi_opt_snmp
help: enable the embedded snmp server

snmp-community
argument: required_argument  parser: uwsgi_opt_snmp_community
help: set the snmp community string

ssl-verbose
argument: no_argument  parser: uwsgi_opt_true
help: be verbose about SSL errors

ssl-sessions-use-cache
argument: optional_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
help: use uWSGI cache for ssl sessions storage
ssl-session-use-cache
argument: optional_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
help: use uWSGI cache for ssl sessions storage

ssl-sessions-timeout
argument: required_argument  parser: uwsgi_opt_set_int
help: set SSL sessions timeout (default: 300 seconds)

ssl-session-timeout
argument: required_argument  parser: uwsgi_opt_set_int
help: set SSL sessions timeout (default: 300 seconds)

sni
argument: required_argument  parser: uwsgi_opt_sni
help: add an SNI-governed SSL context

sni-dir
argument: required_argument  parser: uwsgi_opt_set_str
help: check for cert/key/client_ca file in the specified directory and create a sni/ssl context on demand

sni-dir-ciphers
argument: required_argument  parser: uwsgi_opt_set_str
help: set ssl ciphers for sni-dir option

sni-regexp
argument: required_argument  parser: uwsgi_opt_sni
help: add an SNI-governed SSL context (the key is a regexp)

ssl-tmp-dir
argument: required_argument  parser: uwsgi_opt_set_str
help: store ssl-related temp files in the specified directory

check-interval
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
help: set the interval (in seconds) of master checks

forkbomb-delay
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
help: sleep for the specified number of seconds when a forkbomb is detected

binary-path
argument: required_argument  parser: uwsgi_opt_set_str
help: force binary path

privileged-binary-patch
argument: required_argument  parser: uwsgi_opt_set_str
help: patch the uwsgi binary with a new command (before privileges drop)

unprivileged-binary-patch
argument: required_argument  parser: uwsgi_opt_set_str
help: patch the uwsgi binary with a new command (after privileges drop)

privileged-binary-patch-arg
argument: required_argument  parser: uwsgi_opt_set_str
help: patch the uwsgi binary with a new command and arguments (before privileges drop)

unprivileged-binary-patch-arg
argument: required_argument  parser: uwsgi_opt_set_str
help: patch the uwsgi binary with a new command and arguments (after privileges drop)

async
argument: required_argument  parser: uwsgi_opt_set_int
help: enable async mode with specified cores

max-fd
argument: required_argument  parser: uwsgi_opt_set_int
help: set maximum number of file descriptors (requires root privileges)

logto
argument: required_argument  parser: uwsgi_opt_set_str
help: set logfile/udp address

logto2
argument: required_argument  parser: uwsgi_opt_set_str
help: log to specified file or udp address after privileges drop

log-format
argument: required_argument  parser: uwsgi_opt_set_str
help: set advanced format for request logging

logformat
argument: required_argument  parser: uwsgi_opt_set_str
help: set advanced format for request logging

logformat-strftime
argument: no_argument  parser: uwsgi_opt_true
help: apply strftime to logformat output

log-format-strftime
argument: no_argument  parser: uwsgi_opt_true
help: apply strftime to logformat output

logfile-chown
argument: no_argument  parser: uwsgi_opt_true
help: chown logfiles

logfile-chmod
argument: required_argument  parser: uwsgi_opt_logfile_chmod
help: chmod logfiles

log-syslog
argument: optional_argument  parser: uwsgi_opt_set_logger  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: log to syslog
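As an illustration of logto and log-format from the options above, a request-log line can be built from the logging variables described in the Formatting uWSGI requests logs chapter; the log path is a placeholder:

    [uwsgi]
    ; write logs to a file (or a UDP address) instead of stderr
    logto = /var/log/uwsgi/app.log
    ; custom request-log format built from logging variables
    log-format = %(addr) [%(ltime)] "%(method) %(uri)" %(status) %(size) %(msecs)ms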
log-socket
argument: required_argument  parser: uwsgi_opt_set_logger  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: send logs to the specified socket

req-logger
argument: required_argument  parser: uwsgi_opt_set_req_logger  flags: UWSGI_OPT_REQ_LOG_MASTER
help: set/append a request logger

logger-req
argument: required_argument  parser: uwsgi_opt_set_req_logger  flags: UWSGI_OPT_REQ_LOG_MASTER
help: set/append a request logger

logger
argument: required_argument  parser: uwsgi_opt_set_logger  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: set/append a logger

logger-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled loggers

loggers-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled loggers

threaded-logger
argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: offload log writing to a thread

log-encoder
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: add an item in the log encoder chain

log-req-encoder
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: add an item in the log req encoder chain

log-drain
argument: required_argument  parser: uwsgi_opt_add_regexp_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: drain (do not show) log lines matching the specified regexp

log-filter
argument: required_argument  parser: uwsgi_opt_add_regexp_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: show only log lines matching the specified regexp

log-route
argument: required_argument  parser: uwsgi_opt_add_regexp_custom_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: log to the specified named logger if the regexp applied on the logline matches

log-req-route
argument: required_argument  parser: uwsgi_opt_add_regexp_custom_list  flags: UWSGI_OPT_REQ_LOG_MASTER
help: log requests to the specified named logger if the regexp applied on the logline matches

use-abort
argument: no_argument  parser: uwsgi_opt_true
help: call abort() on segfault/fpe, could be useful for generating a core dump

alarm
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: create a new alarm, syntax:

alarm-cheap
argument: required_argument  parser: uwsgi_opt_true
help: use main alarm thread rather than create dedicated threads for curl-based alarms

alarm-freq
argument: required_argument  parser: uwsgi_opt_set_int
help: tune the anti-loop alarm system (default 3 seconds)

alarm-fd
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when an fd is ready for read (by default it reads 1 byte, set 8 for eventfd)
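A minimal sketch of the alarm options above, used together with the log-alarm option listed just below; it assumes the cmd alarm handler and a local mail command are available, and the alarm name, address and regexp are placeholders:

    [uwsgi]
    master = true
    ; define an alarm named "mailadmin" handled by the cmd alarm handler
    alarm = mailadmin cmd:mail -s 'uWSGI alarm' admin@example.com
    ; raise it whenever a log line matches the regexp
    log-alarm = mailadmin segmentation fault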
alarm-segfault
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the segmentation fault handler is executed

segfault-alarm
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the segmentation fault handler is executed

alarm-backlog
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the socket backlog queue is full

backlog-alarm
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the socket backlog queue is full

lq-alarm
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the socket backlog queue is full

alarm-lq
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the socket backlog queue is full

alarm-listen-queue
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the socket backlog queue is full

listen-queue-alarm
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: raise the specified alarm when the socket backlog queue is full

log-alarm
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: raise the specified alarm when a log line matches the specified regexp, syntax: [,alarm...]

alarm-log
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: raise the specified alarm when a log line matches the specified regexp, syntax: [,alarm...]

not-log-alarm
argument: required_argument  parser: uwsgi_opt_add_string_list_custom  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: skip the specified alarm when a log line matches the specified regexp, syntax: [,alarm...]

not-alarm-log
argument: required_argument  parser: uwsgi_opt_add_string_list_custom  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: skip the specified alarm when a log line matches the specified regexp, syntax: [,alarm...]

alarm-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled alarms

alarms-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled alarms

alarm-msg-size
argument: required_argument  parser: uwsgi_opt_set_64bit
help: set the max size of an alarm message (default 8192)

log-master
argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER|UWSGI_OPT_LOG_MASTER
help: delegate logging to master process

log-master-bufsize
argument: required_argument  parser: uwsgi_opt_set_64bit
help: set the buffer size for the master logger. bigger log messages will be truncated

log-master-stream
argument: no_argument  parser: uwsgi_opt_true
help: create the master logpipe as SOCK_STREAM

log-master-req-stream
argument: no_argument  parser: uwsgi_opt_true
help: create the master requests logpipe as SOCK_STREAM
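For example, the log-master family above can delegate all log writing to the master process and keep it off the workers' request path; the buffer size is only indicative:

    [uwsgi]
    master = true
    ; let the master write logs on behalf of the workers
    log-master = true
    ; perform the actual writes in a dedicated thread
    threaded-logger = true
    ; truncate single log messages bigger than 16 KB
    log-master-bufsize = 16384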
log-reopen
argument: no_argument  parser: uwsgi_opt_true
help: reopen log after reload

log-truncate
argument: no_argument  parser: uwsgi_opt_true
help: truncate log on startup

log-maxsize
argument: required_argument  parser: uwsgi_opt_set_64bit  flags: UWSGI_OPT_MASTER|UWSGI_OPT_LOG_MASTER
help: set maximum logfile size

log-backupname
argument: required_argument  parser: uwsgi_opt_set_str
help: set logfile name after rotation

logdate
argument: optional_argument  parser: uwsgi_opt_log_date
help: prefix logs with date or a strftime string

log-date
argument: optional_argument  parser: uwsgi_opt_log_date
help: prefix logs with date or a strftime string

log-prefix
argument: optional_argument  parser: uwsgi_opt_log_date
help: prefix logs with a string

log-zero
argument: no_argument  parser: uwsgi_opt_true
help: log responses without body

log-slow
argument: required_argument  parser: uwsgi_opt_set_int
help: log requests slower than the specified number of milliseconds

log-4xx
argument: no_argument  parser: uwsgi_opt_true
help: log requests with a 4xx response

log-5xx
argument: no_argument  parser: uwsgi_opt_true
help: log requests with a 5xx response

log-big
argument: required_argument  parser: uwsgi_opt_set_64bit
help: log requests bigger than the specified size

log-sendfile
argument: required_argument  parser: uwsgi_opt_true
help: log sendfile requests

log-ioerror
argument: required_argument  parser: uwsgi_opt_true
help: log requests with io errors

log-micros
argument: no_argument  parser: uwsgi_opt_true
help: report response time in microseconds instead of milliseconds

log-x-forwarded-for
argument: no_argument  parser: uwsgi_opt_true
help: use the ip from X-Forwarded-For header instead of REMOTE_ADDR

master-as-root
argument: no_argument  parser: uwsgi_opt_true
help: leave master process running as root

drop-after-init
argument: no_argument  parser: uwsgi_opt_true
help: run privileges drop after plugin initialization

drop-after-apps
argument: no_argument  parser: uwsgi_opt_true
help: run privileges drop after apps loading

force-cwd
argument: required_argument  parser: uwsgi_opt_set_str
help: force the initial working directory to the specified value

binsh
argument: required_argument  parser: uwsgi_opt_add_string_list
help: override /bin/sh (used by exec hooks, it always falls back to /bin/sh)

chdir
argument: required_argument  parser: uwsgi_opt_set_str
help: chdir to specified directory before apps loading

chdir2
argument: required_argument  parser: uwsgi_opt_set_str
help: chdir to specified directory after apps loading

lazy
argument: no_argument  parser: uwsgi_opt_true
help: set lazy mode (load apps in workers instead of master)

lazy-apps
argument: no_argument  parser: uwsgi_opt_true
help: load apps in each worker instead of the master

cheap
argument: no_argument  parser: uwsgi_opt_true  flags: UWSGI_OPT_MASTER
help: set cheap mode (spawn workers only after the first request)

cheaper
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER | UWSGI_OPT_CHEAPER
help: set cheaper mode (adaptive process spawning)
cheaper-initial
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER | UWSGI_OPT_CHEAPER
help: set the initial number of processes to spawn in cheaper mode

cheaper-algo
argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
help: choose the algorithm used for adaptive process spawning

cheaper-step
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER | UWSGI_OPT_CHEAPER
help: number of additional processes to spawn at each overload

cheaper-overload
argument: required_argument  parser: uwsgi_opt_set_64bit  flags: UWSGI_OPT_MASTER | UWSGI_OPT_CHEAPER
help: increase workers after specified overload

cheaper-algo-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled cheaper algorithms

cheaper-algos-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled cheaper algorithms

cheaper-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled cheaper algorithms

cheaper-rss-limit-soft
argument: required_argument  parser: uwsgi_opt_set_64bit  flags: UWSGI_OPT_MASTER | UWSGI_OPT_CHEAPER
help: don't spawn new workers if total resident memory usage of all workers is higher than this limit

cheaper-rss-limit-hard
argument: required_argument  parser: uwsgi_opt_set_64bit  flags: UWSGI_OPT_MASTER | UWSGI_OPT_CHEAPER
help: if the total resident memory usage of all workers is higher than this limit, try to stop workers

idle
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
help: set idle mode (put uWSGI in cheap mode after inactivity)

die-on-idle
argument: no_argument  parser: uwsgi_opt_true
help: shutdown uWSGI when idle

mount
argument: required_argument  parser: uwsgi_opt_add_string_list
help: load application under mountpoint

worker-mount
argument: required_argument  parser: uwsgi_opt_add_string_list
help: load application under mountpoint in the specified worker or after workers spawn

threads
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_THREADS
help: run each worker in prethreaded mode with the specified number of threads

thread-stacksize
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_THREADS
help: set threads stacksize

threads-stacksize
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_THREADS
help: set threads stacksize

thread-stack-size
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_THREADS
help: set threads stacksize

threads-stack-size
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_THREADS
help: set threads stacksize
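Tying the cheaper-* options above together, an adaptive-spawning setup in the spirit of the cheaper subsystem chapter looks roughly like this; the numbers are only indicative:

    [uwsgi]
    master = true
    socket = 127.0.0.1:3031
    ; never run more than 16 workers
    processes = 16
    ; keep at least 2 of them alive
    cheaper = 2
    ; start with 4 workers
    cheaper-initial = 4
    ; spawn at most 1 additional worker at a time when overloaded
    cheaper-step = 1
    ; use the default "spare" algorithm
    cheaper-algo = spare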
uWSGI Options 145 uWSGI Documentation, Release 2.0 vhost argument: no_argument parser: uwsgi_opt_true help: enable virtualhosting mode (based on SERVER_NAME variable) vhost-host argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_VHOST help: enable virtualhosting mode (based on HTTP_HOST variable) route argument: required_argument parser: uwsgi_opt_add_route help: add a route route-host argument: required_argument parser: uwsgi_opt_add_route help: add a route based on Host header route-uri argument: required_argument parser: uwsgi_opt_add_route help: add a route based on REQUEST_URI route-qs argument: required_argument parser: uwsgi_opt_add_route help: add a route based on QUERY_STRING route-remote-addr argument: required_argument parser: uwsgi_opt_add_route help: add a route based on REMOTE_ADDR 146 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 route-user-agent argument: required_argument parser: uwsgi_opt_add_route help: add a route based on HTTP_USER_AGENT route-remote-user argument: required_argument parser: uwsgi_opt_add_route help: add a route based on REMOTE_USER route-referer argument: required_argument parser: uwsgi_opt_add_route help: add a route based on HTTP_REFERER route-label argument: required_argument parser: uwsgi_opt_add_route help: add a routing label (for use with goto) route-if argument: required_argument parser: uwsgi_opt_add_route help: add a route based on condition route-if-not argument: required_argument parser: uwsgi_opt_add_route help: add a route based on condition (negate version) route-run argument: required_argument parser: uwsgi_opt_add_route help: always run the specified route action 3.13. uWSGI Options 147 uWSGI Documentation, Release 2.0 final-route argument: required_argument parser: uwsgi_opt_add_route help: add a final route final-route-status argument: required_argument parser: uwsgi_opt_add_route help: add a final route for the specified status final-route-host argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on Host header final-route-uri argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on REQUEST_URI final-route-qs argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on QUERY_STRING final-route-remote-addr argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on REMOTE_ADDR final-route-user-agent argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on HTTP_USER_AGENT 148 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 final-route-remote-user argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on REMOTE_USER final-route-referer argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on HTTP_REFERER final-route-label argument: required_argument parser: uwsgi_opt_add_route help: add a final routing label (for use with goto) final-route-if argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on condition final-route-if-not argument: required_argument parser: uwsgi_opt_add_route help: add a final route based on condition (negate version) final-route-run argument: required_argument parser: uwsgi_opt_add_route help: always run the specified final route action error-route argument: required_argument parser: uwsgi_opt_add_route help: add an error route 3.13. 
uWSGI Options 149 uWSGI Documentation, Release 2.0 error-route-status argument: required_argument parser: uwsgi_opt_add_route help: add an error route for the specified status error-route-host argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on Host header error-route-uri argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on REQUEST_URI error-route-qs argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on QUERY_STRING error-route-remote-addr argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on REMOTE_ADDR error-route-user-agent argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on HTTP_USER_AGENT error-route-remote-user argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on REMOTE_USER 150 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 error-route-referer argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on HTTP_REFERER error-route-label argument: required_argument parser: uwsgi_opt_add_route help: add an error routing label (for use with goto) error-route-if argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on condition error-route-if-not argument: required_argument parser: uwsgi_opt_add_route help: add an error route based on condition (negate version) error-route-run argument: required_argument parser: uwsgi_opt_add_route help: always run the specified error route action response-route argument: required_argument parser: uwsgi_opt_add_route help: add a response route response-route-status argument: required_argument parser: uwsgi_opt_add_route help: add a response route for the specified status 3.13. uWSGI Options 151 uWSGI Documentation, Release 2.0 response-route-host argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on Host header response-route-uri argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on REQUEST_URI response-route-qs argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on QUERY_STRING response-route-remote-addr argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on REMOTE_ADDR response-route-user-agent argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on HTTP_USER_AGENT response-route-remote-user argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on REMOTE_USER response-route-referer argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on HTTP_REFERER 152 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 response-route-label argument: required_argument parser: uwsgi_opt_add_route help: add a response routing label (for use with goto) response-route-if argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on condition response-route-if-not argument: required_argument parser: uwsgi_opt_add_route help: add a response route based on condition (negate version) response-route-run argument: required_argument parser: uwsgi_opt_add_route help: always run the specified response route action router-list argument: no_argument parser: uwsgi_opt_true help: list enabled routers routers-list argument: no_argument parser: uwsgi_opt_true help: list enabled routers error-page-403 argument: required_argument parser: uwsgi_opt_add_string_list help: add an error page (html) for managed 403 response 3.13. uWSGI Options 153 uWSGI Documentation, Release 2.0 error-page-404 argument: required_argument parser: uwsgi_opt_add_string_list help: add an error page (html) for managed 404 response error-page-500 argument: required_argument parser: uwsgi_opt_add_string_list help: add an error page (html) for managed 500 response websockets-ping-freq argument: required_argument parser: uwsgi_opt_set_int help: set the frequency (in seconds) of websockets automatic ping packets websocket-ping-freq argument: required_argument parser: uwsgi_opt_set_int help: set the frequency (in seconds) of websockets automatic ping packets websockets-pong-tolerance argument: required_argument parser: uwsgi_opt_set_int help: set the tolerance (in seconds) of websockets ping/pong subsystem websocket-pong-tolerance argument: required_argument parser: uwsgi_opt_set_int help: set the tolerance (in seconds) of websockets ping/pong subsystem websockets-max-size argument: required_argument parser: uwsgi_opt_set_64bit help: set the max allowed size of websocket messages (in Kbytes, default 1024) 154 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 websocket-max-size argument: required_argument parser: uwsgi_opt_set_64bit help: set the max allowed size of websocket messages (in Kbytes, default 1024) chunked-input-limit argument: required_argument parser: uwsgi_opt_set_64bit help: set the max size of a chunked input part (default 1MB, in bytes) chunked-input-timeout argument: required_argument parser: uwsgi_opt_set_int help: set default timeout for chunked input clock argument: required_argument parser: uwsgi_opt_set_str help: set a clock source clock-list argument: no_argument parser: uwsgi_opt_true help: list enabled clocks clocks-list argument: no_argument parser: uwsgi_opt_true help: list enabled clocks add-header argument: required_argument parser: uwsgi_opt_add_string_list help: automatically add HTTP headers to response 3.13. 
uWSGI Options 155 uWSGI Documentation, Release 2.0 rem-header argument: required_argument parser: uwsgi_opt_add_string_list help: automatically remove specified HTTP header from the response del-header argument: required_argument parser: uwsgi_opt_add_string_list help: automatically remove specified HTTP header from the response collect-header argument: required_argument parser: uwsgi_opt_add_string_list help: store the specified response header in a request var (syntax: header var) response-header-collect argument: required_argument parser: uwsgi_opt_add_string_list help: store the specified response header in a request var (syntax: header var) check-static argument: required_argument parser: uwsgi_opt_check_static flags: UWSGI_OPT_MIME help: check for static files in the specified directory check-static-docroot argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_MIME help: check for static files in the requested DOCUMENT_ROOT 156 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 static-check argument: required_argument parser: uwsgi_opt_check_static flags: UWSGI_OPT_MIME help: check for static files in the specified directory static-map argument: required_argument parser: uwsgi_opt_static_map flags: UWSGI_OPT_MIME help: map mountpoint to static directory (or file) static-map2 argument: required_argument parser: uwsgi_opt_static_map flags: UWSGI_OPT_MIME help: like static-map but completely appending the requested resource to the docroot static-skip-ext argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: skip specified extension from staticfile checks static-index argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: search for specified file if a directory is requested static-safe argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: skip security checks if the file is under the specified path 3.13. uWSGI Options 157 uWSGI Documentation, Release 2.0 static-cache-paths argument: required_argument parser: uwsgi_opt_set_int flags: UWSGI_OPT_MIME|UWSGI_OPT_MASTER help: put resolved paths in the uWSGI cache for the specified amount of seconds static-cache-paths-name argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_MIME|UWSGI_OPT_MASTER help: use the specified cache for static paths mimefile argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: set mime types file path (default /etc/apache2/mime.types) mime-file argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: set mime types file path (default /etc/apache2/mime.types) mimefile argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: set mime types file path (default /etc/mime.types) mime-file argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: set mime types file path (default /etc/mime.types) 158 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 static-expires-type argument: required_argument parser: uwsgi_opt_add_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on content type static-expires-type-mtime argument: required_argument parser: uwsgi_opt_add_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on content type and file mtime static-expires argument: required_argument parser: uwsgi_opt_add_regexp_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on filename regexp static-expires-mtime argument: required_argument parser: uwsgi_opt_add_regexp_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on filename regexp and file mtime static-expires-uri argument: required_argument parser: uwsgi_opt_add_regexp_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on REQUEST_URI regexp static-expires-uri-mtime argument: required_argument parser: uwsgi_opt_add_regexp_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on REQUEST_URI regexp and file mtime 3.13. uWSGI Options 159 uWSGI Documentation, Release 2.0 static-expires-path-info argument: required_argument parser: uwsgi_opt_add_regexp_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on PATH_INFO regexp static-expires-path-info-mtime argument: required_argument parser: uwsgi_opt_add_regexp_dyn_dict flags: UWSGI_OPT_MIME help: set the Expires header based on PATH_INFO regexp and file mtime static-gzip argument: required_argument parser: uwsgi_opt_add_regexp_list flags: UWSGI_OPT_MIME help: if the supplied regexp matches the static file translation it will search for a gzip version static-gzip-all argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_MIME help: check for a gzip version of all requested static files static-gzip-dir argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: check for a gzip version of all requested static files in the specified dir/prefix static-gzip-prefix argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: check for a gzip version of all requested static files in the specified dir/prefix 160 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 static-gzip-ext argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: check for a gzip version of all requested static files with the specified ext/suffix static-gzip-suffix argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: check for a gzip version of all requested static files with the specified ext/suffix honour-range argument: no_argument parser: uwsgi_opt_true help: enable support for the HTTP Range header offload-threads argument: required_argument parser: uwsgi_opt_set_int help: set the number of offload threads to spawn (per-worker, default 0) offload-thread argument: required_argument parser: uwsgi_opt_set_int help: set the number of offload threads to spawn (per-worker, default 0) file-serve-mode argument: required_argument parser: uwsgi_opt_fileserve_mode flags: UWSGI_OPT_MIME help: set static file serving mode 3.13. 
fileserve-mode
argument: required_argument  parser: uwsgi_opt_fileserve_mode  flags: UWSGI_OPT_MIME
help: set static file serving mode

disable-sendfile
argument: no_argument  parser: uwsgi_opt_true
help: disable sendfile() and rely on boring read()/write()

check-cache
argument: optional_argument  parser: uwsgi_opt_set_str
help: check for response data in the specified cache (empty for default cache)

close-on-exec
argument: no_argument  parser: uwsgi_opt_true
help: set close-on-exec on connection sockets (could be required for spawning processes in requests)

close-on-exec2
argument: no_argument  parser: uwsgi_opt_true
help: set close-on-exec on server sockets (could be required for spawning processes in requests)

mode
argument: required_argument  parser: uwsgi_opt_set_str
help: set uWSGI custom mode

env
argument: required_argument  parser: uwsgi_opt_set_env
help: set environment variable

envdir
argument: required_argument  parser: uwsgi_opt_add_string_list
help: load a daemontools compatible envdir

early-envdir
argument: required_argument  parser: uwsgi_opt_envdir  flags: UWSGI_OPT_IMMEDIATE
help: load a daemontools compatible envdir ASAP

unenv
argument: required_argument  parser: uwsgi_opt_unset_env
help: unset environment variable

vacuum
argument: no_argument  parser: uwsgi_opt_true
help: try to remove all of the generated files/sockets

file-write
argument: required_argument  parser: uwsgi_opt_add_string_list
help: write the specified content to the specified file (syntax: file=value) before privileges drop

cgroup
argument: required_argument  parser: uwsgi_opt_add_string_list
help: put the processes in the specified cgroup

cgroup-opt
argument: required_argument  parser: uwsgi_opt_add_string_list
help: set value in specified cgroup option

cgroup-dir-mode
argument: required_argument  parser: uwsgi_opt_set_str
help: set permission for cgroup directory (default is 700)

namespace
argument: required_argument  parser: uwsgi_opt_set_str
help: run in a new namespace under the specified rootfs

namespace-keep-mount
argument: required_argument  parser: uwsgi_opt_add_string_list
help: keep the specified mountpoint in your namespace

ns
argument: required_argument  parser: uwsgi_opt_set_str
help: run in a new namespace under the specified rootfs

namespace-net
argument: required_argument  parser: uwsgi_opt_set_str
help: add network namespace

ns-net
argument: required_argument  parser: uwsgi_opt_set_str
help: add network namespace

enable-proxy-protocol
argument: no_argument  parser: uwsgi_opt_true
help: enable PROXY1 protocol support (only for http parsers)

reuse-port
argument: no_argument  parser: uwsgi_opt_true
help: enable REUSE_PORT flag on socket (BSD only)

tcp-fast-open
argument: required_argument  parser: uwsgi_opt_set_int
help: enable TCP_FASTOPEN flag on TCP sockets with the specified qlen value

tcp-fastopen
argument: required_argument  parser: uwsgi_opt_set_int
help: enable TCP_FASTOPEN flag on TCP sockets with the specified qlen value

tcp-fast-open-client
argument: no_argument  parser: uwsgi_opt_true
help: use sendto(..., MSG_FASTOPEN, ...) instead of connect() if supported

tcp-fastopen-client
argument: no_argument  parser: uwsgi_opt_true
help: use sendto(..., MSG_FASTOPEN, ...) instead of connect() if supported
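A short sketch of the environment and housekeeping options above; the variable, paths and socket are placeholders:

    [uwsgi]
    socket = /run/uwsgi/app.sock
    ; export a variable to the application environment
    env = DJANGO_SETTINGS_MODULE=mysite.settings
    ; also load a daemontools-style envdir
    envdir = /etc/uwsgi/env.d
    ; remove the generated sockets/pidfiles on exit
    vacuum = true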
zerg
argument: required_argument  parser: uwsgi_opt_add_string_list
help: attach to a zerg server

zerg-fallback
argument: no_argument  parser: uwsgi_opt_true
help: fallback to normal sockets if the zerg server is not available

zerg-server
argument: required_argument  parser: uwsgi_opt_set_str  flags: UWSGI_OPT_MASTER
help: enable the zerg server on the specified UNIX socket

cron
argument: required_argument  parser: uwsgi_opt_add_cron  flags: UWSGI_OPT_MASTER
help: add a cron task

cron2
argument: required_argument  parser: uwsgi_opt_add_cron2  flags: UWSGI_OPT_MASTER
help: add a cron task (key=val syntax)

unique-cron
argument: required_argument  parser: uwsgi_opt_add_unique_cron  flags: UWSGI_OPT_MASTER
help: add a unique cron task

cron-harakiri
argument: required_argument  parser: uwsgi_opt_set_int
help: set the maximum time (in seconds) we wait for a cron command to complete

legion-cron
argument: required_argument  parser: uwsgi_opt_add_legion_cron  flags: UWSGI_OPT_MASTER
help: add a cron task runnable only when the instance is a lord of the specified legion

cron-legion
argument: required_argument  parser: uwsgi_opt_add_legion_cron  flags: UWSGI_OPT_MASTER
help: add a cron task runnable only when the instance is a lord of the specified legion

unique-legion-cron
argument: required_argument  parser: uwsgi_opt_add_unique_legion_cron  flags: UWSGI_OPT_MASTER
help: add a unique cron task runnable only when the instance is a lord of the specified legion

unique-cron-legion
argument: required_argument  parser: uwsgi_opt_add_unique_legion_cron  flags: UWSGI_OPT_MASTER
help: add a unique cron task runnable only when the instance is a lord of the specified legion

loop
argument: required_argument  parser: uwsgi_opt_set_str
help: select the uWSGI loop engine

loop-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled loop engines

loops-list
argument: no_argument  parser: uwsgi_opt_true
help: list enabled loop engines

worker-exec
argument: required_argument  parser: uwsgi_opt_set_str
help: run the specified command as worker

worker-exec2
argument: required_argument  parser: uwsgi_opt_set_str
help: run the specified command as worker (after post_fork hook)

attach-daemon
argument: required_argument  parser: uwsgi_opt_add_daemon  flags: UWSGI_OPT_MASTER
help: attach a command/daemon to the master process (the command must not go into the background)

attach-control-daemon
argument: required_argument  parser: uwsgi_opt_add_daemon  flags: UWSGI_OPT_MASTER
help: attach a command/daemon to the master process (the command must not go into the background); when the daemon dies, the master dies too

smart-attach-daemon
argument: required_argument  parser: uwsgi_opt_add_daemon  flags: UWSGI_OPT_MASTER
help: attach a command/daemon to the master process managed by a pidfile (the command has to daemonize)

smart-attach-daemon2
argument: required_argument  parser: uwsgi_opt_add_daemon  flags: UWSGI_OPT_MASTER
help: attach a command/daemon to the master process managed by a pidfile (the command has to NOT daemonize)
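As an illustrative example of the cron and attach-daemon families above (and continued below); the commands, paths and pidfile are placeholders:

    [uwsgi]
    master = true
    ; run a script every day at 02:59 (-1 means "any value" for that field)
    cron = 59 2 -1 -1 -1 /usr/local/bin/nightly-cleanup.sh
    ; keep a foreground memcached bound to the master lifecycle
    attach-daemon = memcached -p 11311
    ; manage a self-daemonizing service through its pidfile
    smart-attach-daemon = /run/myservice.pid /usr/local/bin/myservice --daemonize --pidfile /run/myservice.pid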
Table of Contents uWSGI Documentation, Release 2.0 legion-attach-daemon argument: required_argument parser: uwsgi_opt_add_daemon flags: UWSGI_OPT_MASTER help: same as –attach-daemon but daemon runs only on legion lord node legion-smart-attach-daemon argument: required_argument parser: uwsgi_opt_add_daemon flags: UWSGI_OPT_MASTER help: same as –smart-attach-daemon but daemon runs only on legion lord node legion-smart-attach-daemon2 argument: required_argument parser: uwsgi_opt_add_daemon flags: UWSGI_OPT_MASTER help: same as –smart-attach-daemon2 but daemon runs only on legion lord node daemons-honour-stdin argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_MASTER help: do not change the stdin of external daemons to /dev/null attach-daemon2 argument: required_argument parser: uwsgi_opt_add_daemon2 flags: UWSGI_OPT_MASTER help: attach-daemon keyval variant (supports smart modes too) plugins argument: required_argument parser: uwsgi_opt_load_plugin flags: UWSGI_OPT_IMMEDIATE help: load uWSGI plugins 3.13. uWSGI Options 169 uWSGI Documentation, Release 2.0 plugin argument: required_argument parser: uwsgi_opt_load_plugin flags: UWSGI_OPT_IMMEDIATE help: load uWSGI plugins need-plugins argument: required_argument parser: uwsgi_opt_load_plugin flags: UWSGI_OPT_IMMEDIATE help: load uWSGI plugins (exit on error) need-plugin argument: required_argument parser: uwsgi_opt_load_plugin flags: UWSGI_OPT_IMMEDIATE help: load uWSGI plugins (exit on error) plugins-dir argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_IMMEDIATE help: add a directory to uWSGI plugin search path plugin-dir argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_IMMEDIATE help: add a directory to uWSGI plugin search path plugins-list argument: no_argument parser: uwsgi_opt_true help: list enabled plugins 170 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 plugin-list argument: no_argument parser: uwsgi_opt_true help: list enabled plugins autoload argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_IMMEDIATE help: try to automatically load plugins when unknown options are found dlopen argument: required_argument parser: uwsgi_opt_load_dl flags: UWSGI_OPT_IMMEDIATE help: blindly load a shared library allowed-modifiers argument: required_argument parser: uwsgi_opt_set_str help: comma separated list of allowed modifiers remap-modifier argument: required_argument parser: uwsgi_opt_set_str help: remap request modifier from one id to another dump-options argument: no_argument parser: uwsgi_opt_true help: dump the full list of available options 3.13. uWSGI Options 171 uWSGI Documentation, Release 2.0 show-config argument: no_argument parser: uwsgi_opt_true help: show the current config reformatted as ini binary-append-data argument: required_argument parser: uwsgi_opt_binary_append_data flags: UWSGI_OPT_IMMEDIATE help: return the content of a resource to stdout for appending to a uwsgi binary (for data:// usage) print argument: required_argument parser: uwsgi_opt_print help: simple print iprint argument: required_argument parser: uwsgi_opt_print flags: UWSGI_OPT_IMMEDIATE help: simple print (immediate version) exit argument: optional_argument parser: uwsgi_opt_exit flags: UWSGI_OPT_IMMEDIATE help: force exit() of the instance cflags argument: no_argument parser: uwsgi_opt_cflags flags: UWSGI_OPT_IMMEDIATE help: report uWSGI CFLAGS (useful for building external plugins) 172 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 dot-h argument: no_argument parser: uwsgi_opt_dot_h flags: UWSGI_OPT_IMMEDIATE help: dump the uwsgi.h used for building the core (useful for building external plugins) config-py argument: no_argument parser: uwsgi_opt_config_py flags: UWSGI_OPT_IMMEDIATE help: dump the uwsgiconfig.py used for building the core (useful for building external plugins) build-plugin argument: required_argument parser: uwsgi_opt_build_plugin flags: UWSGI_OPT_IMMEDIATE help: build a uWSGI plugin for the current binary version argument: no_argument parser: uwsgi_opt_print help: print uWSGI version 3.13.2 plugin: router_access 3.13.3 plugin: ldap ldap argument: required_argument parser: uwsgi_opt_load_ldap flags: UWSGI_OPT_IMMEDIATE help: load configuration from ldap server 3.13. uWSGI Options 173 uWSGI Documentation, Release 2.0 ldap-schema argument: no_argument parser: uwsgi_opt_ldap_dump flags: UWSGI_OPT_IMMEDIATE help: dump uWSGI ldap schema ldap-schema-ldif argument: no_argument parser: uwsgi_opt_ldap_dump_ldif flags: UWSGI_OPT_IMMEDIATE help: dump uWSGI ldap schema in ldif format 3.13.4 plugin: graylog2 3.13.5 plugin: servlet 3.13.6 plugin: carbon carbon argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MASTER help: push statistics to the specified carbon server carbon-timeout argument: required_argument parser: uwsgi_opt_set_int help: set carbon connection timeout in seconds (default 3) carbon-freq argument: required_argument parser: uwsgi_opt_set_int help: set carbon push frequency in seconds (default 60) 174 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 carbon-id argument: required_argument parser: uwsgi_opt_set_str help: set carbon id carbon-no-workers argument: no_argument parser: uwsgi_opt_true help: disable generation of single worker metrics carbon-max-retry argument: required_argument parser: uwsgi_opt_set_int help: set maximum number of retries in case of connection errors (default 1) carbon-retry-delay argument: required_argument parser: uwsgi_opt_set_int help: set connection retry delay in seconds (default 7) carbon-root argument: required_argument parser: uwsgi_opt_set_str help: set carbon metrics root node (default ‘uwsgi’) carbon-hostname-dots argument: required_argument parser: uwsgi_opt_set_str help: set char to use as a replacement for dots in hostname (dots are not replaced by default) carbon-name-resolve argument: no_argument parser: uwsgi_opt_true help: allow using hostname as carbon server address (default disabled) 3.13. uWSGI Options 175 uWSGI Documentation, Release 2.0 carbon-resolve-names argument: no_argument parser: uwsgi_opt_true help: allow using hostname as carbon server address (default disabled) carbon-idle-avg argument: required_argument parser: uwsgi_opt_set_str help: average values source during idle period (no requests), can be “last”, “zero”, “none” (default is last) carbon-use-metrics argument: no_argument parser: uwsgi_opt_true help: don’t compute all statistics, use metrics subsystem data instead (warning! 
key names will be different) 3.13.7 plugin: mono mono-app argument: required_argument parser: uwsgi_opt_add_string_list help: load a Mono asp.net app from the specified directory mono-gc-freq argument: required_argument parser: uwsgi_opt_set_64bit help: run the Mono GC every requests (default: run after every request) mono-key argument: required_argument parser: uwsgi_opt_add_string_list help: select the ApplicationHost based on the specified CGI var mono-version argument: required_argument parser: uwsgi_opt_set_str help: set the Mono jit version 176 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 mono-config argument: required_argument parser: uwsgi_opt_set_str help: set the Mono config file mono-assembly argument: required_argument parser: uwsgi_opt_set_str help: load the specified main assembly (default: uwsgi.dll) mono-exec argument: required_argument parser: uwsgi_opt_add_string_list help: exec the specified assembly just before app loading mono-index argument: required_argument parser: uwsgi_opt_add_string_list help: add an asp.net index file 3.13.8 plugin: router_xmldir 3.13.9 plugin: cplusplus 3.13.10 plugin: msgpack 3.13.11 plugin: rbthreads rbthreads argument: no_argument parser: uwsgi_opt_true help: enable ruby native threads rb-threads argument: no_argument parser: uwsgi_opt_true help: enable ruby native threads 3.13. uWSGI Options 177 uWSGI Documentation, Release 2.0 rbthread argument: no_argument parser: uwsgi_opt_true help: enable ruby native threads rb-thread argument: no_argument parser: uwsgi_opt_true help: enable ruby native threads 3.13.12 plugin: rack rails argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_POST_BUFFERING help: load a rails <= 2.x app rack argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_POST_BUFFERING help: load a rack app ruby-gc-freq argument: required_argument parser: uwsgi_opt_set_int help: set ruby GC frequency rb-gc-freq argument: required_argument parser: uwsgi_opt_set_int help: set ruby GC frequency 178 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 rb-lib argument: required_argument parser: uwsgi_opt_add_string_list help: add a directory to the ruby libdir search path ruby-lib argument: required_argument parser: uwsgi_opt_add_string_list help: add a directory to the ruby libdir search path rb-require argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script ruby-require argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script rbrequire argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script rubyrequire argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script require argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script 3.13. 
uWSGI Options 179 uWSGI Documentation, Release 2.0 shared-rb-require argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script (shared) shared-ruby-require argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script (shared) shared-rbrequire argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script (shared) shared-rubyrequire argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script (shared) shared-require argument: required_argument parser: uwsgi_opt_add_string_list help: import/require a ruby module/script (shared) gemset argument: required_argument parser: uwsgi_opt_set_str help: load the specified gemset (rvm) rvm argument: required_argument parser: uwsgi_opt_set_str help: load the specified gemset (rvm) 180 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 rvm-path argument: required_argument parser: uwsgi_opt_add_string_list help: search for rvm in the specified directory rbshell argument: optional_argument parser: uwsgi_opt_rbshell help: run a ruby/irb shell rbshell-oneshot argument: no_argument parser: uwsgi_opt_rbshell help: set ruby/irb shell (one shot) 3.13.13 plugin: redislog 3.13.14 plugin: corerouter 3.13.15 plugin: router_redis 3.13.16 plugin: rados rados-mount argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: virtual mount the specified rados volume in a uri rados-timeout argument: required_argument parser: uwsgi_opt_set_int help: timeout for async operations 3.13. uWSGI Options 181 uWSGI Documentation, Release 2.0 3.13.17 plugin: transformation_template 3.13.18 plugin: router_http 3.13.19 plugin: v8 v8-load argument: required_argument parser: uwsgi_opt_add_string_list help: load a javascript file v8-preemptive argument: required_argument parser: uwsgi_opt_set_int help: put v8 in preemptive move (single isolate) with the specified frequency v8-gc-freq argument: required_argument parser: uwsgi_opt_set_64bit help: set the v8 garbage collection frequency v8-module-path argument: required_argument parser: uwsgi_opt_add_string_list help: set the v8 modules search path v8-jsgi argument: required_argument parser: uwsgi_opt_set_str help: load the specified JSGI 3.0 application 3.13.20 plugin: psgi psgi argument: required_argument parser: uwsgi_opt_set_str help: load a psgi app 182 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 psgi-enable-psgix-io argument: no_argument parser: uwsgi_opt_true help: enable psgix.io support perl-no-die-catch argument: no_argument parser: uwsgi_opt_true help: do not catch $SIG{__DIE__} perl-local-lib argument: required_argument parser: uwsgi_opt_set_str help: set perl locallib path perl-version argument: no_argument parser: uwsgi_opt_print flags: UWSGI_OPT_IMMEDIATE help: print perl version perl-args argument: required_argument parser: uwsgi_opt_set_str help: add items (space separated) to @ARGV perl-arg argument: required_argument parser: uwsgi_opt_add_string_list help: add an item to @ARGV perl-exec argument: required_argument parser: uwsgi_opt_add_string_list help: exec the specified perl file before fork() 3.13. 
perl-exec-post-fork
argument: required_argument  parser: uwsgi_opt_add_string_list
help: exec the specified perl file after fork()

perl-auto-reload
argument: required_argument  parser: uwsgi_opt_set_int  flags: UWSGI_OPT_MASTER
help: enable perl auto-reloader with the specified frequency

perl-auto-reload-ignore
argument: required_argument  parser: uwsgi_opt_add_string_list  flags: UWSGI_OPT_MASTER
help: ignore the specified files when auto-reload is enabled

plshell
argument: optional_argument  parser: uwsgi_opt_plshell
help: run a perl interactive shell

plshell-oneshot
argument: no_argument  parser: uwsgi_opt_plshell
help: run a perl interactive shell (one shot)

perl-no-plack
argument: no_argument  parser: uwsgi_opt_true
help: force the use of do instead of Plack::Util::load_psgi

3.13.21 plugin: transformation_chunked

3.13.22 plugin: lua

lua
argument: required_argument  parser: uwsgi_opt_set_str
help: load lua wsapi app

lua-load
argument: required_argument  parser: uwsgi_opt_add_string_list
help: load a lua file

lua-shell
argument: no_argument  parser: uwsgi_opt_luashell
help: run the lua interactive shell (debug.debug())

luashell
argument: no_argument  parser: uwsgi_opt_luashell
help: run the lua interactive shell (debug.debug())

lua-gc-freq
argument: no_argument  parser: uwsgi_opt_set_int
help: set the lua gc frequency (default: 0, runs after every request)

3.13.23 plugin: pyuwsgi

3.13.24 plugin: php

php-ini
argument: required_argument  parser: uwsgi_opt_php_ini
help: set php.ini path

php-config
argument: required_argument  parser: uwsgi_opt_php_ini
help: set php.ini path

php-ini-append
argument: required_argument  parser: uwsgi_opt_add_string_list
help: set php.ini path (append mode)

php-config-append
argument: required_argument  parser: uwsgi_opt_add_string_list
help: set php.ini path (append mode)

php-set
argument: required_argument  parser: uwsgi_opt_add_string_list
help: set a php config directive

php-index
argument: required_argument  parser: uwsgi_opt_add_string_list
help: list the php index files

php-docroot
argument: required_argument  parser: uwsgi_opt_set_str
help: force php DOCUMENT_ROOT

php-allowed-docroot
argument: required_argument  parser: uwsgi_opt_add_string_list
help: list the allowed document roots

php-allowed-ext
argument: required_argument  parser: uwsgi_opt_add_string_list
help: list the allowed php file extensions

php-allowed-script
argument: required_argument  parser: uwsgi_opt_add_string_list
help: list the allowed php scripts (require absolute path)

php-server-software
argument: required_argument  parser: uwsgi_opt_set_str
help: force php SERVER_SOFTWARE

php-app
argument: required_argument  parser: uwsgi_opt_set_str
help: force the php file to run at each request

php-app-qs
argument: required_argument  parser: uwsgi_opt_set_str
help: when in app mode force QUERY_STRING to the specified value + REQUEST_URI

php-fallback
argument: required_argument  parser: uwsgi_opt_set_str
help: run the specified php script when the requested one does not exist

php-app-bypass
argument: required_argument  parser: uwsgi_opt_add_regexp_list
help: if the regexp matches the uri the --php-app is bypassed
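The php options above (continued below) are typically combined with an HTTP socket routed to the PHP modifier (14). This is only a sketch, assuming a build where the php plugin is available (drop the plugins line on a monolithic build); the port, docroot and timezone are placeholders:

    [uwsgi]
    plugins = php
    ; speak HTTP directly and hand every request to the php plugin (modifier 14)
    http-socket = :8080
    http-socket-modifier1 = 14
    php-docroot = /var/www
    php-index = index.php
    ; tweak a php.ini directive without touching the system php.ini
    php-set = date.timezone=Europe/Rome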
php-var
argument: required_argument  parser: uwsgi_opt_add_string_list
help: add/overwrite a CGI variable at each request

php-dump-config
argument: no_argument  parser: uwsgi_opt_true
help: dump php config (if modified via --php-set or append options)

php-exec-before
argument: required_argument  parser: uwsgi_opt_add_string_list
help: run specified php code before the requested script

php-exec-begin
argument: required_argument  parser: uwsgi_opt_add_string_list
help: run specified php code before the requested script

php-exec-after
argument: required_argument  parser: uwsgi_opt_add_string_list
help: run specified php code after the requested script

php-exec-end
argument: required_argument  parser: uwsgi_opt_add_string_list
help: run specified php code after the requested script

php-sapi-name
argument: required_argument  parser: uwsgi_opt_set_str
help: hack the sapi name (required for enabling zend opcode cache)

3.13.25 plugin: router_expires

3.13.26 plugin: symcall

symcall
argument: required_argument  parser: uwsgi_opt_add_string_list
help: load the specified C symbol as the symcall request handler (supports too)

symcall-use-next
argument: no_argument  parser: uwsgi_opt_true
help: use RTLD_NEXT when searching for symbols

symcall-register-rpc
argument: required_argument  parser: uwsgi_opt_add_string_list
help: load the specified C symbol as an RPC function (syntax: name function)

symcall-post-fork
argument: required_argument  parser: uwsgi_opt_add_string_list
help: call the specified C symbol after each fork()

3.13.27 plugin: xslt

xslt-docroot
argument: required_argument  parser: uwsgi_opt_add_string_list
help: add a document_root for xslt processing

xslt-ext
argument: required_argument  parser: uwsgi_opt_add_string_list
help: search for xslt stylesheets with the specified extension

xslt-var
argument: required_argument  parser: uwsgi_opt_add_string_list
help: get the xslt stylesheet path from the specified request var

xslt-stylesheet
argument: required_argument  parser: uwsgi_opt_add_string_list
help: if no xslt stylesheet file can be found, use the specified one

xslt-content-type
argument: required_argument  parser: uwsgi_opt_set_str
help: set the content-type for the xslt result (default: text/html)

3.13.28 plugin: logsocket

3.13.29 plugin: dummy

3.13.30 plugin: alarm_curl

3.13.31 plugin: signal

3.13.32 plugin: notfound

notfound-log
argument: no_argument  parser: uwsgi_opt_true
help: log requests to the notfound plugin

3.13.33 plugin: logzmq

log-zeromq
argument: required_argument  parser: uwsgi_opt_set_logger  flags: UWSGI_OPT_MASTER | UWSGI_OPT_LOG_MASTER
help: send logs to a zeromq server
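A minimal sketch of the log-zeromq option above, assuming the logzmq plugin has been built or loaded; the endpoint address is a placeholder:

    [uwsgi]
    plugins = logzmq
    master = true
    log-master = true
    ; send every log line to the specified ZeroMQ endpoint
    log-zeromq = tcp://192.168.173.18:9404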
Table of Contents uWSGI Documentation, Release 2.0 3.13.34 plugin: fastrouter fastrouter argument: required_argument parser: uwsgi_opt_corerouter help: run the fastrouter on the specified port reference: The uWSGI FastRouter fastrouter-processes argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of fastrouter processes fastrouter-workers argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of fastrouter processes fastrouter-zerg argument: required_argument parser: uwsgi_opt_corerouter_zerg help: attach the fastrouter to a zerg server fastrouter-use-cache argument: optional_argument parser: uwsgi_opt_set_str help: use uWSGI cache as hostname->server mapper for the fastrouter fastrouter-use-pattern argument: required_argument parser: uwsgi_opt_corerouter_use_pattern help: use a pattern for fastrouter hostname->server mapping 3.13. uWSGI Options 191 uWSGI Documentation, Release 2.0 fastrouter-use-base argument: required_argument parser: uwsgi_opt_corerouter_use_base help: use a base dir for fastrouter hostname->server mapping fastrouter-fallback argument: required_argument parser: uwsgi_opt_add_string_list help: fallback to the specified node in case of error fastrouter-use-code-string argument: required_argument parser: uwsgi_opt_corerouter_cs help: use code string as hostname->server mapper for the fastrouter fastrouter-use-socket argument: optional_argument parser: uwsgi_opt_corerouter_use_socket help: forward request to the specified uwsgi socket fastrouter-to argument: required_argument parser: uwsgi_opt_add_string_list help: forward requests to the specified uwsgi server (you can specify it multiple times for load balancing) fastrouter-gracetime argument: required_argument parser: uwsgi_opt_set_int help: retry connections to dead static nodes after the specified amount of seconds fastrouter-events argument: required_argument parser: uwsgi_opt_set_int help: set the maximum number of concurrent events 192 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 fastrouter-quiet argument: required_argument parser: uwsgi_opt_true help: do not report failed connections to instances fastrouter-cheap argument: no_argument parser: uwsgi_opt_true help: run the fastrouter in cheap mode fastrouter-subscription-server argument: required_argument parser: uwsgi_opt_corerouter_ss help: run the fastrouter subscription server on the specified address fastrouter-subscription-slot argument: required_argument parser: uwsgi_opt_deprecated help:* deprecated * fastrouter-timeout argument: required_argument parser: uwsgi_opt_set_int help: set fastrouter timeout fastrouter-post-buffering argument: required_argument parser: uwsgi_opt_set_64bit help: enable fastrouter post buffering fastrouter-post-buffering-dir argument: required_argument parser: uwsgi_opt_set_str help: put fastrouter buffered files to the specified directory 3.13. 
uWSGI Options 193 uWSGI Documentation, Release 2.0 fastrouter-stats argument: required_argument parser: uwsgi_opt_set_str help: run the fastrouter stats server fastrouter-stats-server argument: required_argument parser: uwsgi_opt_set_str help: run the fastrouter stats server fastrouter-ss argument: required_argument parser: uwsgi_opt_set_str help: run the fastrouter stats server fastrouter-harakiri argument: required_argument parser: uwsgi_opt_set_int help: enable fastrouter harakiri fastrouter-uid argument: required_argument parser: uwsgi_opt_uid help: drop fastrouter privileges to the specified uid fastrouter-gid argument: required_argument parser: uwsgi_opt_gid help: drop fastrouter privileges to the specified gid fastrouter-resubscribe argument: required_argument parser: uwsgi_opt_add_string_list help: forward subscriptions to the specified subscription server 194 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 fastrouter-resubscribe-bind argument: required_argument parser: uwsgi_opt_set_str help: bind to the specified address when re-subscribing fastrouter-buffer-size argument: required_argument parser: uwsgi_opt_set_64bit help: set internal buffer size (default: page size) 3.13.35 plugin: pypy pypy-lib argument: required_argument parser: uwsgi_opt_set_str help: set the path/name of the pypy library pypy-setup argument: required_argument parser: uwsgi_opt_set_str help: set the path of the python setup script pypy-home argument: required_argument parser: uwsgi_opt_set_str help: set the home of pypy library pypy-wsgi argument: required_argument parser: uwsgi_opt_set_str help: load a WSGI module pypy-wsgi-file argument: required_argument parser: uwsgi_opt_set_str help: load a WSGI/mod_wsgi file 3.13. uWSGI Options 195 uWSGI Documentation, Release 2.0 pypy-ini-paste argument: required_argument parser: uwsgi_opt_pypy_ini_paste flags: UWSGI_OPT_IMMEDIATE help: load a paste.deploy config file containing uwsgi section pypy-paste argument: required_argument parser: uwsgi_opt_set_str help: load a paste.deploy config file pypy-eval argument: required_argument parser: uwsgi_opt_add_string_list help: evaluate pypy code before fork() pypy-eval-post-fork argument: required_argument parser: uwsgi_opt_add_string_list help: evaluate pypy code soon after fork() pypy-exec argument: required_argument parser: uwsgi_opt_add_string_list help: execute pypy code from file before fork() pypy-exec-post-fork argument: required_argument parser: uwsgi_opt_add_string_list help: execute pypy code from file soon after fork() pypy-pp argument: required_argument parser: uwsgi_opt_add_string_list help: add an item to the pythonpath 196 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 pypy-python-path argument: required_argument parser: uwsgi_opt_add_string_list help: add an item to the pythonpath pypy-pythonpath argument: required_argument parser: uwsgi_opt_add_string_list help: add an item to the pythonpath 3.13.36 plugin: stats_pusher_socket 3.13.37 plugin: xattr 3.13.38 plugin: router_spnego 3.13.39 plugin: alarm_xmpp 3.13.40 plugin: emperor_zeromq 3.13.41 plugin: pty pty-socket argument: required_argument parser: uwsgi_opt_set_str help: bind the pty server on the specified address pty-log argument: no_argument parser: uwsgi_opt_true help: send stdout/stderr to the log engine too pty-input argument: no_argument parser: uwsgi_opt_true help: read from original stdin in addition to pty 3.13. 
uWSGI Options 197 uWSGI Documentation, Release 2.0 pty-connect argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_NO_INITIAL help: connect the current terminal to a pty server pty-uconnect argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_NO_INITIAL help: connect the current terminal to a pty server (using uwsgi protocol) pty-no-isig argument: no_argument parser: uwsgi_opt_true help: disable ISIG terminal attribute in client mode pty-exec argument: required_argument parser: uwsgi_opt_set_str help: run the specified command soon after the pty thread is spawned 3.13.42 plugin: gridfs gridfs-mount argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: mount a gridfs db on the specified mountpoint gridfs-debug argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_MIME help: report gridfs mountpoint and itemname for each request (debug) 198 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.13.43 plugin: pam pam argument: required_argument parser: uwsgi_opt_set_str help: set the pam service name to use pam-user argument: required_argument parser: uwsgi_opt_set_str help: set a fake user for pam 3.13.44 plugin: stackless stackless argument: no_argument parser: uwsgi_opt_true help: use stackless as suspend engine 3.13.45 plugin: rawrouter rawrouter argument: required_argument parser: uwsgi_opt_undeferred_corerouter help: run the rawrouter on the specified port rawrouter-processes argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of rawrouter processes rawrouter-workers argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of rawrouter processes 3.13. uWSGI Options 199 uWSGI Documentation, Release 2.0 rawrouter-zerg argument: required_argument parser: uwsgi_opt_corerouter_zerg help: attach the rawrouter to a zerg server rawrouter-use-cache argument: optional_argument parser: uwsgi_opt_set_str help: use uWSGI cache as hostname->server mapper for the rawrouter rawrouter-use-pattern argument: required_argument parser: uwsgi_opt_corerouter_use_pattern help: use a pattern for rawrouter hostname->server mapping rawrouter-use-base argument: required_argument parser: uwsgi_opt_corerouter_use_base help: use a base dir for rawrouter hostname->server mapping rawrouter-fallback argument: required_argument parser: uwsgi_opt_add_string_list help: fallback to the specified node in case of error rawrouter-use-code-string argument: required_argument parser: uwsgi_opt_corerouter_cs help: use code string as hostname->server mapper for the rawrouter rawrouter-use-socket argument: optional_argument parser: uwsgi_opt_corerouter_use_socket help: forward request to the specified uwsgi socket 200 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 rawrouter-to argument: required_argument parser: uwsgi_opt_add_string_list help: forward requests to the specified uwsgi server (you can specify it multiple times for load balancing) rawrouter-gracetime argument: required_argument parser: uwsgi_opt_set_int help: retry connections to dead static nodes after the specified amount of seconds rawrouter-events argument: required_argument parser: uwsgi_opt_set_int help: set the maximum number of concurrent events rawrouter-max-retries argument: required_argument parser: uwsgi_opt_set_int help: set the maximum number of retries/fallbacks to other nodes rawrouter-quiet argument: required_argument parser: uwsgi_opt_true help: do not report failed connections to instances rawrouter-cheap argument: no_argument parser: uwsgi_opt_true help: run the rawrouter in cheap mode rawrouter-subscription-server argument: required_argument parser: uwsgi_opt_corerouter_ss help: run the rawrouter subscription server on the spcified address 3.13. uWSGI Options 201 uWSGI Documentation, Release 2.0 rawrouter-subscription-slot argument: required_argument parser: uwsgi_opt_deprecated help:* deprecated * rawrouter-timeout argument: required_argument parser: uwsgi_opt_set_int help: set rawrouter timeout rawrouter-stats argument: required_argument parser: uwsgi_opt_set_str help: run the rawrouter stats server rawrouter-stats-server argument: required_argument parser: uwsgi_opt_set_str help: run the rawrouter stats server rawrouter-ss argument: required_argument parser: uwsgi_opt_set_str help: run the rawrouter stats server rawrouter-harakiri argument: required_argument parser: uwsgi_opt_set_int help: enable rawrouter harakiri rawrouter-xclient argument: no_argument parser: uwsgi_opt_true help: use the xclient protocol to pass the client addres 202 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 rawrouter-buffer-size argument: required_argument parser: uwsgi_opt_set_64bit help: set internal buffer size (default: page size) 3.13.46 plugin: mongodblog 3.13.47 plugin: clock_realtime 3.13.48 plugin: ruby19 3.13.49 plugin: alarm_speech 3.13.50 plugin: tuntap tuntap-router argument: required_argument parser: uwsgi_opt_add_string_list help: run the tuntap router (syntax: [stats] [gateway]) tuntap-device argument: required_argument parser: uwsgi_opt_add_string_list help: add a tuntap device to the instance (syntax: [ ]) tuntap-use-credentials argument: optional_argument parser: uwsgi_opt_set_str help: enable check of SCM_CREDENTIALS for tuntap client/server tuntap-router-firewall-in argument: required_argument parser: uwsgi_tuntap_opt_firewall help: add a firewall rule to the tuntap router (syntax: ) 3.13. 
uWSGI Options 203 uWSGI Documentation, Release 2.0 tuntap-router-firewall-out argument: required_argument parser: uwsgi_tuntap_opt_firewall help: add a firewall rule to the tuntap router (syntax: ) tuntap-router-route argument: required_argument parser: uwsgi_tuntap_opt_route help: add a routing rule to the tuntap router (syntax: ) tuntap-router-stats argument: required_argument parser: uwsgi_opt_set_str help: run the tuntap router stats server tuntap-device-rule argument: required_argument parser: uwsgi_opt_add_string_list help: add a tuntap device rule (syntax: [target]) 3.13.51 plugin: jvm jvm-main-class argument: required_argument parser: uwsgi_opt_add_string_list help: load the specified class and call its main() function jvm-opt argument: required_argument parser: uwsgi_opt_add_string_list help: add the specified jvm option jvm-class argument: required_argument parser: uwsgi_opt_add_string_list help: load the specified class 204 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 jvm-classpath argument: required_argument parser: uwsgi_opt_add_string_list help: add the specified directory to the classpath 3.13.52 plugin: router_metrics 3.13.53 plugin: legion_cache_fetch 3.13.54 plugin: dumbloop dumbloop-modifier1 argument: required_argument parser: uwsgi_opt_set_int help: set the modifier1 for the code_string dumbloop-code argument: required_argument parser: uwsgi_opt_set_str help: set the script to load for the code_string dumbloop-function argument: required_argument parser: uwsgi_opt_set_str help: set the function to run for the code_string 3.13.55 plugin: logcrypto 3.13.56 plugin: http http argument: required_argument parser: uwsgi_opt_corerouter help: add an http router/server on the specified address 3.13. uWSGI Options 205 uWSGI Documentation, Release 2.0 httprouter argument: required_argument parser: uwsgi_opt_corerouter help: add an http router/server on the specified address https argument: required_argument parser: uwsgi_opt_https help: add an https router/server on the specified address with specified certificate and key https2 argument: required_argument parser: uwsgi_opt_https2 help: add an https/spdy router/server using keyval options https-export-cert argument: no_argument parser: uwsgi_opt_true help: export uwsgi variable HTTPS_CC containing the raw client certificate https-session-context argument: required_argument parser: uwsgi_opt_set_str help: set the session id context to the specified value http-to-https argument: required_argument parser: uwsgi_opt_http_to_https help: add an http router/server on the specified address and redirect all of the requests to https http-processes argument: required_argument parser: uwsgi_opt_set_int help: set the number of http processes to spawn 206 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 http-workers argument: required_argument parser: uwsgi_opt_set_int help: set the number of http processes to spawn http-var argument: required_argument parser: uwsgi_opt_add_string_list help: add a key=value item to the generated uwsgi packet http-to argument: required_argument parser: uwsgi_opt_add_string_list help: forward requests to the specified node (you can specify it multiple time for lb) http-zerg argument: required_argument parser: uwsgi_opt_corerouter_zerg help: attach the http router to a zerg server http-fallback argument: required_argument parser: uwsgi_opt_add_string_list help: fallback to the specified node in case of error http-modifier1 argument: required_argument parser: uwsgi_opt_set_int help: set uwsgi protocol modifier1 http-modifier2 argument: required_argument parser: uwsgi_opt_set_int help: set uwsgi protocol modifier2 3.13. uWSGI Options 207 uWSGI Documentation, Release 2.0 http-use-cache argument: optional_argument parser: uwsgi_opt_set_str help: use uWSGI cache as key->value virtualhost mapper http-use-pattern argument: required_argument parser: uwsgi_opt_corerouter_use_pattern help: use the specified pattern for mapping requests to unix sockets http-use-base argument: required_argument parser: uwsgi_opt_corerouter_use_base help: use the specified base for mapping requests to unix sockets http-events argument: required_argument parser: uwsgi_opt_set_int help: set the number of concurrent http async events http-subscription-server argument: required_argument parser: uwsgi_opt_corerouter_ss help: enable the subscription server http-timeout argument: required_argument parser: uwsgi_opt_set_int help: set internal http socket timeout http-manage-expect argument: optional_argument parser: uwsgi_opt_set_64bit help: manage the Expect HTTP request header (optionally checking for Content-Length) 208 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 http-keepalive argument: optional_argument parser: uwsgi_opt_set_int help: HTTP 1.1 keepalive support (non-pipelined) requests http-auto-chunked argument: no_argument parser: uwsgi_opt_true help: automatically transform output to chunked encoding during HTTP 1.1 keepalive (if needed) http-auto-gzip argument: no_argument parser: uwsgi_opt_true help: automatically gzip content if uWSGI-Encoding header is set to gzip, but content size (Content-Length/Transfer- Encoding) and Content-Encoding are not specified http-raw-body argument: no_argument parser: uwsgi_opt_true help: blindly send HTTP body to backends (required for WebSockets and Icecast support in backends) http-websockets argument: no_argument parser: uwsgi_opt_true help: automatically detect websockets connections and put the session in raw mode http-use-code-string argument: required_argument parser: uwsgi_opt_corerouter_cs help: use code string as hostname->server mapper for the http router http-use-socket argument: optional_argument parser: uwsgi_opt_corerouter_use_socket help: forward request to the specified uwsgi socket 3.13. 
uWSGI Options 209 uWSGI Documentation, Release 2.0 http-gracetime argument: required_argument parser: uwsgi_opt_set_int help: retry connections to dead static nodes after the specified amount of seconds http-quiet argument: required_argument parser: uwsgi_opt_true help: do not report failed connections to instances http-cheap argument: no_argument parser: uwsgi_opt_true help: run the http router in cheap mode http-stats argument: required_argument parser: uwsgi_opt_set_str help: run the http router stats server http-stats-server argument: required_argument parser: uwsgi_opt_set_str help: run the http router stats server http-ss argument: required_argument parser: uwsgi_opt_set_str help: run the http router stats server http-harakiri argument: required_argument parser: uwsgi_opt_set_int help: enable http router harakiri 210 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 http-stud-prefix argument: required_argument parser: uwsgi_opt_add_addr_list help: expect a stud prefix (1byte family + 4/16 bytes address) on connections from the specified address http-uid argument: required_argument parser: uwsgi_opt_uid help: drop http router privileges to the specified uid http-gid argument: required_argument parser: uwsgi_opt_gid help: drop http router privileges to the specified gid http-resubscribe argument: required_argument parser: uwsgi_opt_add_string_list help: forward subscriptions to the specified subscription server http-buffer-size argument: required_argument parser: uwsgi_opt_set_64bit help: set internal buffer size (default: page size) http-server-name-as-http-host argument: required_argument parser: uwsgi_opt_true help: force SERVER_NAME to HTTP_HOST http-headers-timeout argument: required_argument parser: uwsgi_opt_set_int help: set internal http socket timeout for headers 3.13. uWSGI Options 211 uWSGI Documentation, Release 2.0 http-connect-timeout argument: required_argument parser: uwsgi_opt_set_int help: set internal http socket timeout for backend connections http-manage-source argument: no_argument parser: uwsgi_opt_true help: manage the SOURCE HTTP method placing the session in raw mode http-enable-proxy-protocol argument: optional_argument parser: uwsgi_opt_true help: manage PROXY protocol requests 0x1f argument: 0x8b shortcut: -Z_DEFLATED help: 0 3.13.57 plugin: ring ring-load argument: required_argument parser: uwsgi_opt_add_string_list help: load the specified clojure script clojure-load argument: required_argument parser: uwsgi_opt_add_string_list help: load the specified clojure script ring-app argument: required_argument parser: uwsgi_opt_set_str help: map the specified ring application (syntax namespace:function) 212 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.13.58 plugin: spooler 3.13.59 plugin: fiber fiber argument: no_argument parser: uwsgi_opt_true help: enable ruby fiber as suspend engine 3.13.60 plugin: stats_pusher_mongodb 3.13.61 plugin: objc_gc 3.13.62 plugin: matheval 3.13.63 plugin: router_static 3.13.64 plugin: logfile 3.13.65 plugin: cgi cgi argument: required_argument parser: uwsgi_opt_add_cgi help: add a cgi mountpoint/directory/script cgi-map-helper argument: required_argument parser: uwsgi_opt_add_cgi_maphelper help: add a cgi map-helper cgi-helper argument: required_argument parser: uwsgi_opt_add_cgi_maphelper help: add a cgi map-helper cgi-from-docroot argument: no_argument parser: uwsgi_opt_true help: blindly enable cgi in DOCUMENT_ROOT 3.13. 
uWSGI Options 213 uWSGI Documentation, Release 2.0 cgi-buffer-size argument: required_argument parser: uwsgi_opt_set_64bit help: set cgi buffer size cgi-timeout argument: required_argument parser: uwsgi_opt_set_int help: set cgi script timeout cgi-index argument: required_argument parser: uwsgi_opt_add_string_list help: add a cgi index file cgi-allowed-ext argument: required_argument parser: uwsgi_opt_add_string_list help: cgi allowed extension cgi-unset argument: required_argument parser: uwsgi_opt_add_string_list help: unset specified environment variables cgi-loadlib argument: required_argument parser: uwsgi_opt_add_string_list help: load a cgi shared library/optimizer cgi-optimize argument: no_argument parser: uwsgi_opt_true help: enable cgi realpath() optimizer 214 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 cgi-optimized argument: no_argument parser: uwsgi_opt_true help: enable cgi realpath() optimizer cgi-path-info argument: no_argument parser: uwsgi_opt_true help: disable PATH_INFO management in cgi scripts cgi-do-not-kill-on-error argument: no_argument parser: uwsgi_opt_true help: do not send SIGKILL to cgi script on errors cgi-async-max-attempts argument: no_argument parser: uwsgi_opt_set_int help: max waitpid() attempts in cgi async mode (default 10) 3.13.66 plugin: rrdtool rrdtool argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MASTER|UWSGI_OPT_METRICS help: store rrd files in the specified directory rrdtool-freq argument: required_argument parser: uwsgi_opt_set_int help: set collect frequency 3.13. uWSGI Options 215 uWSGI Documentation, Release 2.0 rrdtool-lib argument: required_argument parser: uwsgi_opt_set_str help: set the name of rrd library (default: librrd.so) 3.13.67 plugin: transformation_gzip 3.13.68 plugin: geoip geoip-country argument: required_argument parser: uwsgi_opt_set_str help: load the specified geoip country database geoip-city argument: required_argument parser: uwsgi_opt_set_str help: load the specified geoip city database geoip-use-disk argument: no_argument parser: uwsgi_opt_true help: do not cache geoip databases in memory 3.13.69 plugin: systemd_logger 3.13.70 plugin: logpipe 3.13.71 plugin: cheaper_backlog2 3.13.72 plugin: cheaper_busyness 3.13.73 plugin: webdav webdav-mount argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: map a filesystem directory as a webdav store 216 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 webdav-css argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a css url for automatic webdav directory listing webdav-javascript argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a javascript url for automatic webdav directory listing webdav-js argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a javascript url for automatic webdav directory listing webdav-class-directory argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_MIME help: set the css directory class for automatic webdav directory listing webdav-div argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_MIME help: set the div id for automatic webdav directory listing webdav-lock-cache argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_MIME help: set the cache to use for webdav locking 3.13. 
uWSGI Options 217 uWSGI Documentation, Release 2.0 webdav-principal-base argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_MIME help: enable WebDAV Current Principal Extension using the specified base webdav-add-option argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV standard to the OPTIONS response webdav-add-prop argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all resources webdav-add-collection-prop argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all collections webdav-add-object-prop argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all objects webdav-add-prop-href argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all resources (href value) 218 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 webdav-add-collection-prop-href argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all collections (href value) webdav-add-object-prop-href argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all objects (href value) webdav-add-prop-comp argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all resources (xml value) webdav-add-collection-prop-comp argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all collections (xml value) webdav-add-object-prop-comp argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV property to all objects (xml value) webdav-add-rtype-prop argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV resourcetype property to all resources 3.13. uWSGI Options 219 uWSGI Documentation, Release 2.0 webdav-add-rtype-collection-prop argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV resourcetype property to all collections webdav-add-rtype-object-prop argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: add a WebDAV resourcetype property to all objects webdav-skip-prop argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_MIME help: do not add the specified prop if available in resource xattr 3.13.74 plugin: router_cache 3.13.75 plugin: rpc 3.13.76 plugin: mongrel2 zeromq argument: required_argument parser: uwsgi_opt_add_lazy_socket help: create a mongrel2/zeromq pub/sub pair zmq argument: required_argument parser: uwsgi_opt_add_lazy_socket help: create a mongrel2/zeromq pub/sub pair 220 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 zeromq-socket argument: required_argument parser: uwsgi_opt_add_lazy_socket help: create a mongrel2/zeromq pub/sub pair zmq-socket argument: required_argument parser: uwsgi_opt_add_lazy_socket help: create a mongrel2/zeromq pub/sub pair mongrel2 argument: required_argument parser: uwsgi_opt_add_lazy_socket help: create a mongrel2/zeromq pub/sub pair 3.13.77 plugin: router_uwsgi 3.13.78 plugin: ping ping argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_NO_INITIAL | UWSGI_OPT_NO_SERVER help: ping specified uwsgi host ping-timeout argument: required_argument parser: uwsgi_opt_set_int help: set ping timeout 3.13.79 plugin: stats_pusher_statsd 3.13.80 plugin: router_radius 3.13.81 plugin: tornado tornado argument: required_argument 3.13. uWSGI Options 221 uWSGI Documentation, Release 2.0 parser: uwsgi_opt_setup_tornado flags: UWSGI_OPT_THREADS help: a shortcut enabling tornado loop engine with the specified number of async cores and optimal parameters 3.13.82 plugin: zergpool zergpool argument: required_argument parser: uwsgi_opt_add_string_list help: start a zergpool on specified address for specified address zerg-pool argument: required_argument parser: uwsgi_opt_add_string_list help: start a zergpool on specified address for specified address 3.13.83 plugin: emperor_amqp 3.13.84 plugin: mongodb 3.13.85 plugin: curl_cron curl-cron argument: required_argument parser: uwsgi_opt_add_cron_curl flags: UWSGI_OPT_MASTER help: add a cron task invoking the specified url via CURL cron-curl argument: required_argument parser: uwsgi_opt_add_cron_curl flags: UWSGI_OPT_MASTER help: add a cron task invoking the specified url via CURL 222 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 legion-curl-cron argument: required_argument parser: uwsgi_opt_add_legion_cron_curl flags: UWSGI_OPT_MASTER help: add a cron task invoking the specified url via CURL runnable only when the instance is a lord of the specified legion legion-cron-curl argument: required_argument parser: uwsgi_opt_add_legion_cron_curl flags: UWSGI_OPT_MASTER help: add a cron task invoking the specified url via CURL runnable only when the instance is a lord of the specified legion curl-cron-legion argument: required_argument parser: uwsgi_opt_add_legion_cron_curl flags: UWSGI_OPT_MASTER help: add a cron task invoking the specified url via CURL runnable only when the instance is a lord of the specified legion cron-curl-legion argument: required_argument parser: uwsgi_opt_add_legion_cron_curl flags: UWSGI_OPT_MASTER help: add a cron task invoking the specified url via CURL runnable only when the instance is a lord of the specified legion 3.13.86 plugin: nagios nagios argument: no_argument parser: uwsgi_opt_true flags: UWSGI_OPT_NO_INITIAL help: nagios check 3.13. 
uWSGI Options 223 uWSGI Documentation, Release 2.0 3.13.87 plugin: exception_log 3.13.88 plugin: gccgo go-load argument: required_argument parser: uwsgi_opt_add_string_list help: load a go shared library in the process address space, eventually patching main.main and __go_init_main gccgo-load argument: required_argument parser: uwsgi_opt_add_string_list help: load a go shared library in the process address space, eventually patching main.main and __go_init_main go-args argument: required_argument parser: uwsgi_opt_set_str help: set go commandline arguments gccgo-args argument: required_argument parser: uwsgi_opt_set_str help: set go commandline arguments goroutines argument: required_argument parser: uwsgi_opt_setup_goroutines flags: UWSGI_OPT_THREADS help: a shortcut setting optimal options for goroutine-based apps, takes the number of max goroutines to spawn as argument 224 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.13.89 plugin: syslog 3.13.90 plugin: router_basicauth 3.13.91 plugin: libffi 3.13.92 plugin: zabbix zabbix-template argument: optional_argument parser: uwsgi_opt_zabbix_template flags: UWSGI_OPT_METRICS help: print (or store to a file) the zabbix template for the current metrics setup 3.13.93 plugin: router_rewrite 3.13.94 plugin: transformation_offload 3.13.95 plugin: ugreen ugreen argument: no_argument parser: uwsgi_opt_true help: enable ugreen coroutine subsystem ugreen-stacksize argument: required_argument parser: uwsgi_opt_set_int help: set ugreen stack size in pages 3.13.96 plugin: forkptyrouter forkptyrouter argument: required_argument parser: uwsgi_opt_undeferred_corerouter help: run the forkptyrouter on the specified address 3.13. uWSGI Options 225 uWSGI Documentation, Release 2.0 forkpty-router argument: required_argument parser: uwsgi_opt_undeferred_corerouter help: run the forkptyrouter on the specified address forkptyurouter argument: required_argument parser: uwsgi_opt_forkpty_urouter help: run the forkptyrouter on the specified address forkpty-urouter argument: required_argument parser: uwsgi_opt_forkpty_urouter help: run the forkptyrouter on the specified address forkptyrouter-command argument: required_argument parser: uwsgi_opt_set_str help: run the specified command on every connection (default: /bin/sh) forkpty-router-command argument: required_argument parser: uwsgi_opt_set_str help: run the specified command on every connection (default: /bin/sh) forkptyrouter-cmd argument: required_argument parser: uwsgi_opt_set_str help: run the specified command on every connection (default: /bin/sh) forkpty-router-cmd argument: required_argument parser: uwsgi_opt_set_str help: run the specified command on every connection (default: /bin/sh) 226 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 forkptyrouter-rows argument: required_argument parser: uwsgi_opt_set_16bit help: set forkptyrouter default pty window rows forkptyrouter-cols argument: required_argument parser: uwsgi_opt_set_16bit help: set forkptyrouter default pty window cols forkptyrouter-processes argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of forkptyrouter processes forkptyrouter-workers argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of forkptyrouter processes forkptyrouter-zerg argument: required_argument parser: uwsgi_opt_corerouter_zerg help: attach the forkptyrouter to a zerg server forkptyrouter-fallback argument: required_argument parser: uwsgi_opt_add_string_list help: fallback to the specified node in case of error forkptyrouter-events argument: required_argument parser: uwsgi_opt_set_int help: set the maximum number of concufptyent events 3.13. uWSGI Options 227 uWSGI Documentation, Release 2.0 forkptyrouter-cheap argument: no_argument parser: uwsgi_opt_true help: run the forkptyrouter in cheap mode forkptyrouter-timeout argument: required_argument parser: uwsgi_opt_set_int help: set forkptyrouter timeout forkptyrouter-stats argument: required_argument parser: uwsgi_opt_set_str help: run the forkptyrouter stats server forkptyrouter-stats-server argument: required_argument parser: uwsgi_opt_set_str help: run the forkptyrouter stats server forkptyrouter-ss argument: required_argument parser: uwsgi_opt_set_str help: run the forkptyrouter stats server forkptyrouter-harakiri argument: required_argument parser: uwsgi_opt_set_int help: enable forkptyrouter harakiri 3.13.97 plugin: rsyslog rsyslog-packet-size argument: required_argument parser: uwsgi_opt_set_int 228 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 help: set maximum packet size for syslog messages (default 1024) WARNING! using packets > 1024 breaks RFC 3164 (#4.1) rsyslog-split-messages argument: no_argument parser: uwsgi_opt_true help: split big messages into multiple chunks if they are bigger than allowed packet size (default is false) 3.13.98 plugin: echo 3.13.99 plugin: emperor_mongodb 3.13.100 plugin: transformation_tofile 3.13.101 plugin: sqlite3 sqlite3 argument: required_argument parser: uwsgi_opt_load_sqlite3 flags: UWSGI_OPT_IMMEDIATE help: load config from sqlite3 db sqlite argument: required_argument parser: uwsgi_opt_load_sqlite3 flags: UWSGI_OPT_IMMEDIATE help: load config from sqlite3 db 3.13.102 plugin: sslrouter sslrouter argument: required_argument parser: uwsgi_opt_sslrouter help: run the sslrouter on the specified port 3.13. 
uWSGI Options 229 uWSGI Documentation, Release 2.0 sslrouter2 argument: required_argument parser: uwsgi_opt_sslrouter2 help: run the sslrouter on the specified port (key-value based) sslrouter-session-context argument: required_argument parser: uwsgi_opt_set_str help: set the session id context to the specified value sslrouter-processes argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of sslrouter processes sslrouter-workers argument: required_argument parser: uwsgi_opt_set_int help: prefork the specified number of sslrouter processes sslrouter-zerg argument: required_argument parser: uwsgi_opt_corerouter_zerg help: attach the sslrouter to a zerg server sslrouter-use-cache argument: optional_argument parser: uwsgi_opt_set_str help: use uWSGI cache as hostname->server mapper for the sslrouter sslrouter-use-pattern argument: required_argument parser: uwsgi_opt_corerouter_use_pattern help: use a pattern for sslrouter hostname->server mapping 230 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 sslrouter-use-base argument: required_argument parser: uwsgi_opt_corerouter_use_base help: use a base dir for sslrouter hostname->server mapping sslrouter-fallback argument: required_argument parser: uwsgi_opt_add_string_list help: fallback to the specified node in case of error sslrouter-use-code-string argument: required_argument parser: uwsgi_opt_corerouter_cs help: use code string as hostname->server mapper for the sslrouter sslrouter-use-socket argument: optional_argument parser: uwsgi_opt_corerouter_use_socket help: forward request to the specified uwsgi socket sslrouter-to argument: required_argument parser: uwsgi_opt_add_string_list help: forward requests to the specified uwsgi server (you can specify it multiple times for load balancing) sslrouter-gracetime argument: required_argument parser: uwsgi_opt_set_int help: retry connections to dead static nodes after the specified amount of seconds sslrouter-events argument: required_argument parser: uwsgi_opt_set_int help: set the maximum number of concurrent events 3.13. uWSGI Options 231 uWSGI Documentation, Release 2.0 sslrouter-max-retries argument: required_argument parser: uwsgi_opt_set_int help: set the maximum number of retries/fallbacks to other nodes sslrouter-quiet argument: required_argument parser: uwsgi_opt_true help: do not report failed connections to instances sslrouter-cheap argument: no_argument parser: uwsgi_opt_true help: run the sslrouter in cheap mode sslrouter-subscription-server argument: required_argument parser: uwsgi_opt_corerouter_ss help: run the sslrouter subscription server on the spcified address sslrouter-timeout argument: required_argument parser: uwsgi_opt_set_int help: set sslrouter timeout sslrouter-stats argument: required_argument parser: uwsgi_opt_set_str help: run the sslrouter stats server sslrouter-stats-server argument: required_argument parser: uwsgi_opt_set_str help: run the sslrouter stats server 232 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 sslrouter-ss argument: required_argument parser: uwsgi_opt_set_str help: run the sslrouter stats server sslrouter-harakiri argument: required_argument parser: uwsgi_opt_set_int help: enable sslrouter harakiri sslrouter-sni argument: no_argument parser: uwsgi_opt_true help: use SNI to route requests sslrouter-buffer-size argument: required_argument parser: uwsgi_opt_set_64bit help: set internal buffer size (default: page size) 3.13.103 plugin: asyncio asyncio argument: required_argument parser: uwsgi_opt_setup_asyncio flags: UWSGI_OPT_THREADS help: a shortcut enabling asyncio loop engine with the specified number of async cores and optimal parameters 3.13. uWSGI Options 233 uWSGI Documentation, Release 2.0 3.13.104 plugin: ssi 3.13.105 plugin: clock_monotonic 3.13.106 plugin: router_memcached 3.13.107 plugin: router_redirect 3.13.108 plugin: emperor_pg 3.13.109 plugin: stats_pusher_file 3.13.110 plugin: jwsgi jwsgi argument: required_argument parser: uwsgi_opt_set_str help: load the specified JWSGI application (syntax class:method) 3.13.111 plugin: gevent gevent argument: required_argument parser: uwsgi_opt_setup_gevent flags: UWSGI_OPT_THREADS help: a shortcut enabling gevent loop engine with the specified number of async cores and optimal parameters gevent-monkey-patch argument: no_argument parser: uwsgi_opt_true help: call gevent.monkey.patch_all() automatically on startup gevent-wait-for-hub argument: no_argument parser: uwsgi_opt_true help: wait for gevent hub’s death instead of the control greenlet 234 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.13.112 plugin: python wsgi-file argument: required_argument parser: uwsgi_opt_set_str help: load .wsgi file file argument: required_argument parser: uwsgi_opt_set_str help: load .wsgi file eval argument: required_argument parser: uwsgi_opt_set_str help: eval python code module argument: required_argument shortcut: -w parser: uwsgi_opt_set_str help: load a WSGI module wsgi argument: required_argument shortcut: -w parser: uwsgi_opt_set_str help: load a WSGI module callable argument: required_argument parser: uwsgi_opt_set_str help: set default WSGI callable name 3.13. uWSGI Options 235 uWSGI Documentation, Release 2.0 test argument: required_argument shortcut: -J parser: uwsgi_opt_set_str help: test a mdule import home argument: required_argument shortcut: -H parser: uwsgi_opt_set_str help: set PYTHONHOME/virtualenv virtualenv argument: required_argument shortcut: -H parser: uwsgi_opt_set_str help: set PYTHONHOME/virtualenv venv argument: required_argument shortcut: -H parser: uwsgi_opt_set_str help: set PYTHONHOME/virtualenv pyhome argument: required_argument shortcut: -H parser: uwsgi_opt_set_str help: set PYTHONHOME/virtualenv py-programname argument: required_argument parser: uwsgi_opt_set_str help: set python program name 236 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 py-program-name argument: required_argument parser: uwsgi_opt_set_str help: set python program name pythonpath argument: required_argument parser: uwsgi_opt_pythonpath help: add directory (or glob) to pythonpath python-path argument: required_argument parser: uwsgi_opt_pythonpath help: add directory (or glob) to pythonpath pp argument: required_argument parser: uwsgi_opt_pythonpath help: add directory (or glob) to pythonpath pymodule-alias argument: required_argument parser: uwsgi_opt_add_string_list help: add a python alias module post-pymodule-alias argument: required_argument parser: uwsgi_opt_add_string_list help: add a python module alias after uwsgi module initialization import argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module 3.13. uWSGI Options 237 uWSGI Documentation, Release 2.0 pyimport argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module py-import argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module python-import argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module shared-import argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module in all of the processes shared-pyimport argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module in all of the processes shared-py-import argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module in all of the processes shared-python-import argument: required_argument parser: uwsgi_opt_add_string_list help: import a python module in all of the processes 238 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 pyargv argument: required_argument parser: uwsgi_opt_set_str help: manually set sys.argv optimize argument: required_argument shortcut: -O parser: uwsgi_opt_set_int help: set python optimization level pecan argument: required_argument parser: uwsgi_opt_set_str help: load a pecan config file paste argument: required_argument parser: uwsgi_opt_set_str help: load a paste.deploy config file paste-logger argument: no_argument parser: uwsgi_opt_true help: enable paste fileConfig logger web3 argument: required_argument parser: uwsgi_opt_set_str help: load a web3 app pump argument: required_argument parser: uwsgi_opt_set_str help: load a pump app 3.13. uWSGI Options 239 uWSGI Documentation, Release 2.0 wsgi-lite argument: required_argument parser: uwsgi_opt_set_str help: load a wsgi-lite app ini-paste argument: required_argument parser: uwsgi_opt_ini_paste flags: UWSGI_OPT_IMMEDIATE help: load a paste.deploy config file containing uwsgi section ini-paste-logged argument: required_argument parser: uwsgi_opt_ini_paste flags: UWSGI_OPT_IMMEDIATE help: load a paste.deploy config file containing uwsgi section (load loggers too) reload-os-env argument: no_argument parser: uwsgi_opt_true help: force reload of os.environ at each request no-site argument: no_argument parser: uwsgi_opt_true help: do not import site module pyshell argument: optional_argument parser: uwsgi_opt_pyshell help: run an interactive python shell in the uWSGI environment 240 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 pyshell-oneshot argument: optional_argument parser: uwsgi_opt_pyshell help: run an interactive python shell in the uWSGI environment (one-shot variant) python argument: required_argument parser: uwsgi_opt_pyrun help: run a python script in the uWSGI environment py argument: required_argument parser: uwsgi_opt_pyrun help: run a python script in the uWSGI environment pyrun argument: required_argument parser: uwsgi_opt_pyrun help: run a python script in the uWSGI environment py-tracebacker argument: required_argument parser: uwsgi_opt_set_str flags: UWSGI_OPT_THREADS|UWSGI_OPT_MASTER help: enable the uWSGI python tracebacker py-auto-reload argument: required_argument parser: uwsgi_opt_set_int flags: UWSGI_OPT_THREADS|UWSGI_OPT_MASTER help: monitor python modules mtime to trigger reload (use only in development) 3.13. uWSGI Options 241 uWSGI Documentation, Release 2.0 py-autoreload argument: required_argument parser: uwsgi_opt_set_int flags: UWSGI_OPT_THREADS|UWSGI_OPT_MASTER help: monitor python modules mtime to trigger reload (use only in development) python-auto-reload argument: required_argument parser: uwsgi_opt_set_int flags: UWSGI_OPT_THREADS|UWSGI_OPT_MASTER help: monitor python modules mtime to trigger reload (use only in development) python-autoreload argument: required_argument parser: uwsgi_opt_set_int flags: UWSGI_OPT_THREADS|UWSGI_OPT_MASTER help: monitor python modules mtime to trigger reload (use only in development) py-auto-reload-ignore argument: required_argument parser: uwsgi_opt_add_string_list flags: UWSGI_OPT_THREADS|UWSGI_OPT_MASTER help: ignore the specified module during auto-reload scan (can be specified multiple times) wsgi-env-behaviour argument: required_argument parser: uwsgi_opt_set_str help: set the strategy for allocating/deallocating the WSGI env wsgi-env-behavior argument: required_argument parser: uwsgi_opt_set_str help: set the strategy for allocating/deallocating the WSGI env 242 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 start_response-nodelay argument: no_argument parser: uwsgi_opt_true help: send WSGI http headers as soon as possible (PEP violation) wsgi-strict argument: no_argument parser: uwsgi_opt_true help: try to be fully PEP compliant disabling optimizations wsgi-accept-buffer argument: no_argument parser: uwsgi_opt_true help: accept CPython buffer-compliant objects as WSGI response in addition to string/bytes wsgi-accept-buffers argument: no_argument parser: uwsgi_opt_true help: accept CPython buffer-compliant objects as WSGI response in addition to string/bytes python-version argument: no_argument parser: uwsgi_opt_pyver flags: UWSGI_OPT_IMMEDIATE help: report python version python-raw argument: required_argument parser: uwsgi_opt_set_str help: load a python file for managing raw requests py-sharedarea argument: required_argument parser: uwsgi_opt_add_string_list help: create a sharedarea from a python bytearray object of the specified size 3.13. 
uWSGI Options

py-call-osafterfork
argument: no_argument
parser: uwsgi_opt_true
help: enable child processes running cpython to trap OS signals

3.13.113 plugin: cache

3.13.114 plugin: glusterfs

glusterfs-mount
argument: required_argument
parser: uwsgi_opt_add_string_list
flags: UWSGI_OPT_MIME
help: virtual mount the specified glusterfs volume in a uri

glusterfs-timeout
argument: required_argument
parser: uwsgi_opt_set_int
help: timeout for glusterfs async mode

3.13.115 plugin: greenlet

greenlet
argument: no_argument
parser: uwsgi_opt_true
help: enable greenlet as suspend engine

3.13.116 plugin: airbrake

3.13.117 plugin: libtcc

3.13.118 plugin: transformation_toupper

3.13.119 plugin: router_hash

3.13.120 plugin: example

3.13.121 plugin: coroae

coroae
argument: required_argument
parser: uwsgi_opt_setup_coroae
help: a shortcut enabling Coro::AnyEvent loop engine with the specified number of async cores and optimal parameters

3.14 Defining new options for your instances

Sometimes the built-in options are not enough. For example, you may need to give your customers custom options for configuring their apps on your platform, or you may need to configure so many instances that you want to simplify things with per-datacenter or per-server-type options. Declaring new options for your config files/command line is a good way of achieving these goals.

To define new options use --declare-option:

--declare-option <name>=<option1>[;<option2>;...]

A useful example could be defining a "redirect" option, using the redirect plugin of the InternalRouting subsystem:

--declare-option "redirect=route=\$1 redirect:\$2"

This will declare a new option called redirect that takes 2 arguments. Those arguments will be expanded using the $-prefixed variables. As in shell scripts, the backslash is required to keep your shell from expanding these values.

Now you will be able to define a redirect in your config files:

uwsgi --declare-option "redirect=route=\$1 redirect:\$2" --ini config.ini

config.ini:

[uwsgi]
socket = :3031
; define my redirects
redirect = ^/foo http://unbit.it
redirect = \.jpg$ http://uwsgi.it/test
redirect = ^/foo/bar/ /test

or directly on the command line:

uwsgi --declare-option "redirect=route=\$1 redirect:\$2" --socket :3031 --redirect "^/foo http://unbit.it" --redirect "\.jpg$ http://uwsgi.it/test" --redirect "^/foo/bar/ /test"

3.14.1 More fun: a bunch of shortcuts

Now we will define new options for frequently-used apps.

shortcuts.ini:

[uwsgi]
; let's define a shortcut for trac (new syntax: trac=<trac env>)
declare-option = trac=plugin=python;env=TRAC_ENV=$1;module=trac.web.main:dispatch_request
; one for web2py (new syntax: web2py=<app dir>)
declare-option = web2py=plugin=python;chdir=$1;module=wsgihandler
; another for flask (new syntax: flask=<wsgi file>)
declare-option = flask=plugin=python;wsgi-file=$1;callable=app

To hook up a Trac instance on /var/www/trac/fooenv:

[uwsgi]
; include new shortcuts
ini = shortcuts.ini
; classic options
http = :8080
master = true
threads = 4
; our new option
trac = /var/www/trac/fooenv

A config for Web2py, in XML:

<uwsgi>
    <ini>shortcuts.ini</ini>
    <https>:443,test.crt,test.key,HIGH</https>
    <processes>4</processes>
    <web2py>/var/www/we2py</web2py>
</uwsgi>
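For completeness, a hypothetical vassal using the flask shortcut declared above could look like this (the application path and port are purely illustrative):

[uwsgi]
; include new shortcuts
ini = shortcuts.ini
http = :9090
master = true
; our new option: expands to wsgi-file=/var/www/myapp/app.py and callable=app
flask = /var/www/myapp/app.py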
3.14.2 A trick for the Emperor: automatically import shortcuts for your vassals

If you manage your customers/users with the Emperor, you can configure it to automatically import your shortcuts in each vassal:

uwsgi --emperor /etc/uwsgi/vassals --vassals-include /etc/uwsgi/shortcuts.ini

For multiple shortcuts use:

uwsgi --emperor /etc/uwsgi/vassals --vassals-include /etc/uwsgi/shortcuts.ini --vassals-include /etc/uwsgi/shortcuts2.ini --vassals-include /etc/uwsgi/shortcuts3.ini

Or (with a bit of configuration logic magic):

[uwsgi]
emperor = /etc/uwsgi/vassals
for = shortcuts shortcuts2 shortcuts3
  vassals-include = /etc/uwsgi/%(_).ini
endfor =

3.14.3 An advanced trick: embedding shortcuts in your uWSGI binary

uWSGI's build system allows you to embed files, be they generic files or configuration, in the server binary. Abusing this feature will enable you to embed your new option shortcuts into the server binary, automagically allowing users to use them. To embed your shortcuts file, edit your build profile (like buildconf/base.ini) and set embed_config to the path of the shortcuts file. Rebuild your server and your new options will be available. See also: BuildConf

3.15 How uWSGI parses config files

Until uWSGI 1.1 the parsing order was neither 'stable' nor 'reliable'. Starting from uWSGI 1.1 (thanks to its new options subsystem) we have a general rule: top-bottom and expand asap. Top-bottom means options are internally ordered as they are parsed, while "expand asap" means to inject the options of a requested config file, interrupting the one currently being parsed.

Note that the inherit option behaves differently from the other include options: it is expanded after variable expansion, so any environment variables, external files and placeholders are not expanded. Magic variables (e.g. %n) are expanded normally.

file1.ini (the one requested from the command line):

[uwsgi]
socket = :3031
ini = file2.ini
socket = :3032
chdir = /var/www

file2.ini:

[uwsgi]
master = true
memory-report = true
processes = 4

will internally be assembled into:

[uwsgi]
socket = :3031
ini = file2.ini
master = true
memory-report = true
processes = 4
socket = :3032
chdir = /var/www

A more complex example. file1.ini (the one requested from the command line):

[uwsgi]
socket = :3031
ini = file2.ini
socket = :3032
chdir = /var/www

file2.ini:

[uwsgi]
master = true
xml = file3.xml
memory-report = true
processes = 4

file3.xml:

<uwsgi>
    <plugins>router_uwsgi</plugins>
    <route>^/foo uwsgi:127.0.0.1:4040,0,0</route>
</uwsgi>

will result in:

[uwsgi]
socket = :3031
ini = file2.ini
master = true
xml = file3.xml
plugins = router_uwsgi
route = ^/foo uwsgi:127.0.0.1:4040,0,0
memory-report = true
processes = 4
socket = :3032
chdir = /var/www

3.15.1 Expanding variables/placeholders

After the internal config tree is assembled, variable and placeholder substitution will be applied. The first step is substituting all of the $(VALUE) occurrences with the value of the environment variable VALUE:

[uwsgi]
foobar = $(PATH)

The value of foobar will be the content of the shell's PATH variable.

The second step will expand text files enclosed in @(FILENAME):

[uwsgi]
nodename = @(/etc/hostname)

The value of nodename will be the content of /etc/hostname.

The last step is placeholder substitution. A placeholder is a reference to another option:

[uwsgi]
socket = :3031
foobar = %(socket)

The content of foobar will be mapped to the content of socket.
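All three expansion mechanisms can be mixed in a single file. A quick, hypothetical recap (the homedir and nodename keys are only illustrative placeholders):

[uwsgi]
base = /var/www
socket = :3031
; expanded from the environment
homedir = $(HOME)
; expanded from the content of a file
nodename = @(/etc/hostname)
; expanded from another option (placeholder)
chdir = %(base)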
3.15.2 A note on magic variables

Config files support another form of variable, called 'magic' variables. As they refer to the config file itself, they are parsed as soon as possible:

[uwsgi]
my_config_file = %p

The content of my_config_file will be set to the value of %p (the current file's absolute path) as soon as it is parsed. That means %p (or whatever magic vars you need) will always be consistent within the config file currently being parsed.

3.16 uwsgi protocol magic variables

You can dynamically tune or configure various aspects of the uWSGI server using special variables passed by the web server (or, in general, by any uwsgi-compliant client).

• For Nginx, the uwsgi_param directive is used.
• For Apache, the SetEnv directive is used.

3.16.1 UWSGI_SCHEME

Set the URL scheme when it cannot be reliably determined. This may be used to force HTTPS (with the value https), for instance.

3.16.2 UWSGI_SCRIPT

Load the specified script as a new application mapped to SCRIPT_NAME. The app will obviously only be loaded once, not on each request.

uwsgi_param UWSGI_SCRIPT werkzeug.testapp:test_app;
uwsgi_param SCRIPT_NAME /testapp;

3.16.3 UWSGI_MODULE and UWSGI_CALLABLE

Load a new app (defined as module:callable) mapped into SCRIPT_NAME.

uwsgi_param UWSGI_MODULE werkzeug.testapp;
uwsgi_param UWSGI_CALLABLE test_app;
uwsgi_param SCRIPT_NAME /testapp;

3.16.4 UWSGI_PYHOME

Dynamically set the Python virtualenv for a dynamic application. See also: DynamicVirtualenv

3.16.5 UWSGI_CHDIR

chdir() to the specified directory before managing the request.

3.16.6 UWSGI_FILE

Load the specified file as a new dynamic app.

3.16.7 UWSGI_TOUCH_RELOAD

Reload the uWSGI stack when the specified file's modification time has changed since the last request.

location / {
    include uwsgi_params;
    uwsgi_param UWSGI_TOUCH_RELOAD /tmp/touchme.foo;
    uwsgi_pass /tmp/uwsgi.sock;
}

3.16.8 UWSGI_CACHE_GET

See also: The uWSGI caching framework

Check the uWSGI cache for the specified key. If the value is found, it will be returned as raw HTTP output instead of the usual processing of the request.

location / {
    include uwsgi_params;
    uwsgi_param UWSGI_CACHE_GET $request_uri;
    uwsgi_pass 127.0.0.1:3031;
}

3.16.9 UWSGI_SETENV

Set the specified environment variable for a new dynamic app.

Note: To allow this in Python applications you need to enable the reload-os-env uWSGI option.

Dynamically load a Django app without using a WSGI file/module:

location / {
    include uwsgi_params;
    uwsgi_param UWSGI_SCRIPT django.core.handlers.wsgi:WSGIHandler();
    uwsgi_param UWSGI_CHDIR /mydjangoapp_path;
    uwsgi_param UWSGI_SETENV DJANGO_SETTINGS_MODULE=myapp.settings;
}

3.16.10 UWSGI_APPID

Note: Available since 0.9.9.

Bypass SCRIPT_NAME and VirtualHosting to let the user choose the mountpoint without limitations (or headaches). The concept is very generic: UWSGI_APPID is the identifier of an application. If it is not found in the internal list of apps, it will be loaded.

server {
    server_name server001;
    location / {
        include uwsgi_params;
        uwsgi_param UWSGI_APPID myfunnyapp;
        uwsgi_param UWSGI_FILE /var/www/app1.py;
    }
}

server {
    server_name server002;
    location / {
        include uwsgi_params;
        uwsgi_param UWSGI_APPID myamazingapp;
        uwsgi_param UWSGI_FILE /var/www/app2.py;
    }
}
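The examples above all rely on Nginx's uwsgi_param. On Apache the same variables are passed with SetEnv, as noted at the top of this section. A minimal, hypothetical sketch, assuming a uwsgi-capable handler (such as mod_uwsgi or mod_proxy_uwsgi) is already configured for the location:

<Location /testapp>
    # hypothetical: force the https scheme for this app
    SetEnv UWSGI_SCHEME https
</Location>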
Every uwsgi request generates a response in the uwsgi format. Even the web server handlers obey this rule, as an HTTP response is a valid uwsgi packet (look at the modifier1 = 72). The protocol works mainly via TCP but the master process can bind to a UDP Unicast/Multicast for The embedded SNMP server or cluster management/messaging requests. SCTP support is being worked on. 3.17.1 uwsgi packet header struct uwsgi_packet_header { uint8_t modifier1; uint16_t datasize; uint8_t modifier2; }; Unless otherwise specified the datasize value contains the size (16-bit little endian) of the packet body. 3.17.2 Packet descriptions 3.17. The uwsgi Protocol 251 uWSGI Documentation, Release 2.0 modifier1 datasize modifier2 packet type 0 size of WSGI block vars (HTTP request body excluded) 0 Standard WSGI request followed by the HTTP request body 1 reserved for UNBIT 2 reserved for UNBIT 3 reserved for UNBIT 5 size of PSGI block vars (HTTP request body excluded) 0 Standard PSGI request followed by the HTTP request body 6 size of LUA WSAPI block vars (HTTP request body excluded) 0 Standard LUA/WSAPI request followed by the HTTP request body 7 size of RACK block vars (HTTP request body excluded) 0 Standard RACK request followed by the HTTP request body 8 size of JWSGI/Ring block vars (HTTP request body excluded) 0 Standard JVM request for The JWSGI interface and The Clojure/Ring JVM request handler followed by the HTTP request body 9 size of CGI block vars (HTTP request body excluded) 0 Standard Running CGI scripts on uWSGI request followed by the HTTP request body 10 size of block vars 0- 255 Management interface request: setup flag specified by modifier2. For a list of management flag look at ManagementFlag 14 size of CGI block vars (HTTP request body excluded) 0 Standard Running PHP scripts in uWSGI request followed by the HTTP request body 15 size of Mono ASP.NET block vars (HTTP request body excluded) 0 Standard The Mono ASP.NET plugin request followed by the HTTP request body 17 size of Spooler block vars 0- 255 The uWSGI Spooler request, the block vars is converted to a dictionary/hash/table and passed to the spooler callable. The second modifier is currently ignored. 18 size of CGI block vars 0-255 direct call to c-like symbols 22 size of code string 0- 255 Raw Code evaluation. The interpreter is choosen by the modifier2. 0 is Python, 5 is Perl. It does not return a valid uwsgi response, but a raw string (that may be an HTTP response) 23 size of CGI vars 0- 255 invoke the The XSLT plugin 24 size of CGI vars 0- 255 invoke the uWSGI V8 support 25 size of CGI vars 0- 255 invoke the The GridFS plugin 26 size of CGI vars 0- 255 invoke the The GlusterFS plugin 27 0 0- 255 call the FastFuncs specified by the modifier2 field 28 0 0- 255 invoke the The RADOS plugin 30 size of WSGI block vars (HTTP request body excluded) 0 (if defined the size of the block vars is 24bit le, for now none of the webserver handlers support this feature) Standard WSGI request followed by the HTTP request body. The PATH_INFO is automatically modified, removing the SCRIPT_NAME from it 31 size of block vars 0- 255 Generic message passing (reserved) 32 size of char array 0- 255 array of char passing (reserved) 33 size of marshal object 0- 255 marshalled/serialzed object passing (reserved) 48 snmp specific snmp specific identify a SNMP request/response (mainly via UDP) 72 chr(TT) chr(P) Corresponds to the ‘HTTP’ string and signals that this is a raw HTTP response. 
73 announce message size (for sanity check) announce type (0 = hostname) announce message 74 multicast message size (for sanity check) 0 array of chars; a custom multicast message managed by uwsgi.multicast_manager 95 cluster membership dict size action add/remove/enable/disable node from a cluster. Action may be 0 = add, 1 = remove, 2 = enable, 3 = disable. Add action requires a dict of at least 3 keys: hostname, address and workers 96 log message size 0 Remote logging (clustering/multicast/unicast) 97 0 0, 1 brutal reload request (0 request - 1 confirmation) 98 0 0, 1 graceful reload request (0 request - 1 confirmation) 99 size of options dictionary (if response) 0, 1 request configuration data from a uwsgi node (even via multicast) 100 0 0, 1 PING- PONG if modifier2 is 0 it is a PING request otherwise it is a PONG (a response). Useful for cluster health- check 101 size of packet 0 ECHO service 109 size of clean payload 0 to 255 legion msg (UDP, the body is encrypted) 110 size of payload 0 to 255 uwsgi_signal framework (payload is optional), modifier2 is the signal num 111 size of packet 0, 1, 2, 3 Cache operations. 0: read, 1: write, 2: delete, 3: dict_based 173 size of packet 0, 1 RPC. The packet is an uwsgi array where the first item is the name of the function and the following are the args (if modifier2 is 1 the RPC will be ‘raw’ and all of the response will be returned to the app, uwsgi header included, if available. 200 0 0 Close mark for persistent connections 224 size of packet 0 Subscription packet. see SubscriptionServer 255 0 0- 255 Generic response. Request dependent. For example a spooler response set 0 for a failed spool or 1 for a successful one 3.17.3 The uwsgi vars The uwsgi block vars represent a dictionary/hash. Every key-value is encoded in this way: struct uwsgi_var { uint16_t key_size; uint8_t key[key_size]; uint16_t val_size; 252 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 uint8_t val[val_size]; } 3.18 Managing external daemons/services uWSGI can easily monitor external processes, allowing you to increase reliability and usability of your multi-tier apps. For example you can manage services like Memcached, Redis, Celery, Ruby delayed_job or even dedicated PostgreSQL instances. 3.18.1 Kinds of services Currently uWSGI supports 3 categories of processes: •--attach-daemon – directly attached non daemonized processes •--smart-attach-daemon – pidfile governed (both foreground and daemonized) •--smart-attach-daemon2 – pidfile governed with daemonization management The first category allows you to directly attach processes to the uWSGI master. When the master dies or is reloaded these processes are destroyed. This is the best choice for services that must be flushed whenever the app is restarted. Pidfile governed processes can survive death or reload of the master so long as their pidfiles are available and the pid contained therein matches a running pid. This is the best choice for processes requiring longer persistence, and for which a brutal kill could mean loss of data such as a database. The last category is a superset of the second one. If your process does not support daemonization or writing to pidfile, you can let the master do the management. Very few daemons/applications require this feature, but it could be useful for tiny prototype applications or simply poorly designed ones. Since uWSGI 2.0 a fourth option, --attach-daemon2 has been added for advanced configurations (see below). 3.18.2 Examples Managing a memcached instance in ‘dumb’ mode. 
Whenever uWSGI is stopped or reloaded, memcached is de- stroyed. [uwsgi] master= true socket= :3031 attach-daemon= memcached -p 11311 -u roberto Managing a memcached instance in ‘smart’ mode. Memcached survives uWSGI stop and reload. [uwsgi] master= true socket= :3031 smart-attach-daemon= /tmp/memcached.pid memcached -p 11311 -d -P /tmp/memcached.pid -u roberto Managing 2 mongodb instances in smart mode: [uwsgi] master= true socket= :3031 smart-attach-daemon= /tmp/mongo1.pid mongod --pidfilepath /tmp/mongo1.pid --dbpath foo1 --port 50001 smart-attach-daemon= /tmp/mongo2.pid mongod --pidfilepath /tmp/mongo2.pid --dbpath foo2 --port 50002 3.18. Managing external daemons/services 253 uWSGI Documentation, Release 2.0 Managing PostgreSQL dedicated-instance (cluster in /db/foobar1): [uwsgi] master= true socket= :3031 smart-attach-daemon= /db/foobar1/postmaster.pid /usr/lib/postgresql/9.1/bin/postgres -D /db/foobar1 Managing celery: [uwsgi] master= true socket= :3031 smart-attach-daemon= /tmp/celery.pid celery -A tasks worker --pidfile=/tmp/celery.pid Managing delayed_job: [uwsgi] master= true socket= :3031 env= RAILS_ENV=production rbrequire= bundler/setup rack= config.ru chdir= /var/apps/foobar smart-attach-daemon= %(chdir)/tmp/pids/delayed_job.pid %(chdir)/script/delayed_job start Managing dropbear: [uwsgi] namespace= /ns/001/:testns namespace-keep-mount= /dev/pts socket= :3031 exec-as-root= chown -R www-data /etc/dropbear attach-daemon= /usr/sbin/dropbear -j -k -p 1022 -E -F -I 300 When using the namespace option you can attach a dropbear daemon to allow direct access to the system inside the specified namespace. This requires the /dev/pts filesystem to be mounted inside the namespace, and the user your workers will be running as have access to the /etc/dropbear directory inside the namespace. 3.18.3 Legion support Starting with uWSGI 1.9.9 it’s possible to use the The uWSGI Legion subsystem subsystem for daemon management. Legion daemons will be executed only on the legion lord node, so there will always be a single daemon instance running in each legion. Once the lord dies a daemon will be spawned on another node. To add a legion daemon use –legion-attach-daemon, –legion-smart-attach-daemon and –legion-smart-attach-daemon2 options, they have the same syntax as normal daemon options. The difference is the need to add legion name as first argument. Example: Managing celery beat: [uwsgi] master= true socket= :3031 legion-mcast= mylegion 225.1.1.1:9191 90 bf-cbc:mysecret legion-smart-attach-daemon= mylegion /tmp/celery-beat.pid celery beat --pidfile=/tmp/celery-beat.pid 254 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.18.4 –attach-daemon2 This option has been added in uWSGI 2.0 and allows advanced configurations. 
It is a keyval option, and it accepts the following keys: • command/cmd/exec: the command line to execute • freq: maximum attempts before considering a daemon “broken” • pidfile: the pidfile to check (enable smart mode) • control: if set, the daemon becomes a ‘control’ one: if it dies the whole uWSGI instance dies • daemonize/daemon: daemonize the process (enable smart2 mode) • touch semicolon separated list of files to check: whenever they are ‘touched’, the daemon is restarted • stopsignal/stop_signal: the signal number to send to the daemon when uWSGI is stopped • reloadsignal/reload_signal: the signal to send to the daemon when uWSGI is reloaded • stdin: if set the file descriptor zero is not remapped to /dev/null • uid: drop privileges to the specified uid (requires master running as root) • gid: drop privileges to the specified gid (requires master running as root) • ns_pid: spawn the process in a new pid namespace (requires master running as root, Linux only) • chdir: chdir() to the specified directory before running the command (added in uWSGI 2.0.6) Example: [uwsgi] attach-daemon2= cmd=my_daemon.sh,pidfile=/tmp/my.pid,uid=33,gid=33,stopsignal=3 3.19 The Master FIFO Available from uWSGI 1.9.17. Generally you use UNIX signals to manage the master, but we are running out of signal numbers and (more impor- tantly) not needing to mess with PIDs greatly simplifies the implementation of external management scripts. So, instead of signals, you can tell the master to create a UNIX named pipe (FIFO) that you may use to issue commands to the master. To create a FIFO just add --master-fifo then start issuing commands to it. echo r > /tmp/yourfifo You can send multiple commands in one shot. # add 3 workers and print stats echo +++s > /tmp/yourfifo 3.19.1 Available commands • ‘0’ to ‘9’ - set the fifo slot (see below) • ‘+’ - increase the number of workers when in cheaper mode (add --cheaper-algo manual for full control) • ‘-‘ - decrease the number of workers when in cheaper mode (add --cheaper-algo manual for full control) 3.19. The Master FIFO 255 uWSGI Documentation, Release 2.0 • ‘B’ - ask Emperor for reinforcement (broodlord mode, requires uWSGI >= 2.0.7) • ‘C’ - set cheap mode • ‘c’ - trigger chain reload • ‘E’ - trigger an Emperor rescan • ‘f’ - re-fork the master (dangerous, but very powerful) • ‘l’ - reopen log file (need –log-master and –logto/–logto2) • ‘L’ - trigger log rotation (need –log-master and –logto/–logto2) • ‘p’ - pause/resume the instance • ‘P’ - update pidfiles (can be useful after master re-fork) • ‘Q’ - brutally shutdown the instance • ‘q’ - gracefully shutdown the instance • ‘R’ - send brutal reload • ‘r’ - send graceful reload • ‘S’ - block/unblock subscriptions • ‘s’ - print stats in the logs • ‘W’ - brutally reload workers • ‘w’ - gracefully reload workers 3.19.2 FIFO slots uWSGI supports up to 10 different FIFO files. By default the first specified is bound (mapped as ‘0’). During the instance’s lifetime you can change from one FIFO to another by simply sending the number of the FIFO slot to use. [uwsgi] master-fifo = /tmp/fifo0 master-fifo = /tmp/fifo1 master-fifo = /var/run/foofifo processes = 2 ... By default /tmp/fifo0 will be allocated, but after sending: echo 1 > /tmp/fifo0 the /tmp/fifo1 file will be bound. This is very useful to map FIFO files to specific instance when you (ab)use the ‘fork the master’ command (the ‘f’ one). 
echo 1fp > /tmp/fifo0 After sending this command, a new uWSGI instance (inheriting all of the bound sockets) will be spawned, the old one will be put in “paused” mode (the ‘p’ command). As we have sent the ‘1’ command before ‘f’ and ‘p’ the old instance will now accept commands on /tmp/fifo1 (the slot 1), and the new one will use the default one (‘0’). 256 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 There are lot of tricks you can accomplish, and lots of ways to abuse the forking of the master. Just take into account that corner-case problems can occur all over the place, especially if you use the most complex features of uWSGI. 3.19.3 Notes • The FIFO is created in non-blocking modes and recreated by the master every time a client disconnects. • You can override (or add) commands using the global array uwsgi_fifo_table via plugins or C hooks. • Only the uid running the master has write access to the fifo. 3.20 Socket activation with inetd/xinetd Inetd and Xinetd are two daemons used to start network processes on demand. You can use this in uWSGI too. 3.20.1 Inetd 127.0.0.1:3031 stream tcp wait root /usr/bin/uwsgi uwsgi -M -p 4 --wsgi-file /root/uwsgi/welcome.py --log-syslog=uwsgi With this config you will run uWSGI on port 3031 as soon as the first connection is made. Note: the first argument (the one soon after /usr/bin/uwsgi) is mapped to argv[0]. Do not forget this – always set it to uwsgi if you want to be sure. 3.20.2 Xinetd service uwsgi { disable = no id = uwsgi-000 type = UNLISTED socket_type = stream server = /root/uwsgi/uwsgi server_args = --chdir /root/uwsgi/ --module welcome --logto /tmp/uwsgi.log port = 3031 bind = 127.0.0.1 user = root wait = yes } Again, you do not need to specify the socket in uWSGI, as it will be passed to the server by xinetd. 3.21 Running uWSGI via Upstart Upstart is the init system of Ubuntu-like distributions. It is based on declarative configuration files – not shell scripts of yore – that are put in the /etc/init directory. 3.20. Socket activation with inetd/xinetd 257 uWSGI Documentation, Release 2.0 3.21.1 A simple script (/etc/init/uwsgi.conf) # simple uWSGI script description "uwsgi tiny instance" start on runlevel [2345] stop on runlevel [06] exec uwsgi --master --processes 4 --die-on-term --socket :3031 --wsgi-file /var/www/myapp.wsgi 3.21.2 Using the Emperor See also: The uWSGI Emperor – multi-app deployment A better approach than init files for each app would be to only start an Emperor via Upstart and let it deal with the rest. # Emperor uWSGI script description "uWSGI Emperor" start on runlevel [2345] stop on runlevel [06] exec uwsgi --emperor /etc/uwsgi If you want to run the Emperor under the master process (for accessing advanced features) remember to add –die-on- term # Emperor uWSGI script description "uWSGI Emperor" start on runlevel [2345] stop on runlevel [06] exec uwsgi --master --die-on-term --emperor /etc/uwsgi 3.21.3 What is –die-on-term? By default uWSGI maps the SIGTERM signal to “a brutal reload procedure”. However, Upstart uses SIGTERM to completely shutdown processes. die-on-term inverts the meanings of SIGTERM and SIGQUIT to uWSGI. The first will shutdown the whole stack, the second one will brutally reload it. 3.21.4 Socket activation (from Ubuntu 12.04) Newer Upstart releases have an Inetd-like feature that lets processes start when connections are made to specific sockets. You can use this feature to start uWSGI only when a client (or the webserver) first connects to it. 
The ‘start on socket’ directive will trigger the behaviour. You do not need to specify the socket in uWSGI as it will be passed to it by Upstart itself. 258 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 # simple uWSGI script description "uwsgi tiny instance" start on socket PROTO=inet PORT=3031 stop on runlevel [06] exec uwsgi --master --processes 4 --die-on-term --wsgi-file /var/www/myapp.wsgi 3.22 Systemd uWSGI is a new-style daemon for systemd. It can notify status change and readyness. When uWSGI detects it is running under systemd, the notification system is enabled. 3.22.1 Adding the Emperor to systemd The best approach to integrate uWSGI apps with your init system is using the Emperor. Your init system will talk only with the Emperor that will rule all of the apps itself. Create a systemd service file (you can save it as /etc/systemd/system/emperor.uwsgi.service) [Unit] Description=uWSGI Emperor After=syslog.target [Service] ExecStart=/root/uwsgi/uwsgi --ini /etc/uwsgi/emperor.ini Restart=always KillSignal=SIGQUIT Type=notify StandardError=syslog NotifyAccess=main [Install] WantedBy=multi-user.target Then run it systemctl start emperor.uwsgi.service And check its status. systemctl status emperor.uwsgi.service You will see the Emperor reporting the number of governed vassals to systemd (and to you). emperor.uwsgi.service - uWSGI Emperor Loaded: loaded(/etc/systemd/system/emperor.uwsgi.service) Active: active(running) since Tue, 17 May 2011 08:51:31 +0200; 5s ago Main PID: 30567(uwsgi) Status: "The Emperor is governing 1 vassals" CGroup: name=systemd:/system/emperor.uwsgi.service 30567 /root/uwsgi/uwsgi --ini /etc/uwsgi/emperor.ini 3.22. Systemd 259 uWSGI Documentation, Release 2.0 30568 /root/uwsgi/uwsgi --ini werkzeug.ini 30569 /root/uwsgi/uwsgi --ini werkzeug.ini You can stop the Emperor (and all the apps it governs) with systemctl stop emperor.uwsgi.service A simple emperor.ini could look like this (www-data is just an anonymous user) NOTE: DO NOT daemonize the Emperor (or the master) unless you know what you are doing!!! [uwsgi] emperor= /etc/uwsgi/vassals uid= www-data gid= www-data If you want to allow each vassal to run under different privileges, remove the uid and gid options from the emperor configuration (and please read the Emperor docs!) 3.22.2 Logging Using the previous service file all of the Emperor messages go to the syslog. You can avoid it by removing the StandardError=syslog directive. If you do that, be sure to set a --logto option in your Emperor configuration, otherwise all of your logs will be lost! 3.22.3 Putting sockets in /run/ On a modern system, /run/ is mounted as a tmpfs and is the right place to put sockets and pidfiles into. You can have systemd create a uwsgi directory to put them into by creating a systemd-tmpfiles configuration file (you can save it as /etc/tmpfiles.d/emperor.uwsgi.conf): d /run/uwsgi 0755 www-data www-data - 3.22.4 Socket activation Starting from uWSGI 0.9.8.3 socket activation is available. You can setup systemd to spawn uWSGI instances only after the first socket connection. Create the required emperor.uwsgi.socket (in /etc/systemd/system/emperor.uwsgi.socket). Note that the *.socket file name must match the *.service file name. [Unit] Description=Socket for uWSGI Emperor [Socket] # Change this to your uwsgi application port or unix socket location ListenStream=/tmp/uwsgid.sock [Install] WantedBy=sockets.target Then disable the service and enable the socket unit. 
# systemctl disable emperor.uwsgi.service # systemctl enable emperor.uwsgi.socket 260 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.23 Running uWSGI instances with Circus Circus (http://circus.readthedocs.org/en/0.7/) is a process manager written in Python. It is very similar to projects like Supervisor, but with several additional features. Although most, if not all, of it’s functionalities have a counterpart in uWSGI, Circus can be used as a library allowing you to build dynamic configurations (and extend uWSGI patterns). This aspect is very important and may be the real selling point of Circus. 3.23.1 Socket activation Based on the venerable inetd pattern, Circus can bind to sockets and pass them to children. Start with a simple Circus config (call it circus.ini): [circus] endpoint= tcp://127.0.0.1:5555 pubsub_endpoint= tcp://127.0.0.1:5556 stats_endpoint= tcp://127.0.0.1:5557 [watcher:dummy] cmd= uwsgi --http-socket fd://$(circus.sockets.foo) --wsgi-file yourapp.wsgi use_sockets= True send_hup= True stop_signal= QUIT [socket:foo] host= 0.0.0.0 port= 8888 run it with circusd circus.ini 3.23.2 (Better) Socket activation If you want to spawn instances on demand, you will likely want to shut them down when they are no longer used. To accomplish that use the –idle uWSGI option. [circus] check_delay=5 endpoint= tcp://127.0.0.1:5555 pubsub_endpoint= tcp://127.0.0.1:5556 stats_endpoint= tcp://127.0.0.1:5557 [watcher:dummy] cmd= uwsgi --master --idle 60 --http-socket fd://$(circus.sockets.foo) --wsgi-file yourapp.wsgi use_sockets= True warmup_delay=0 send_hup= True stop_signal= QUIT [socket:foo] host= 0.0.0.0 port= 8888 This time we have enabled the master process. It will manage the –idle option, shutting down the instance if it is inactive for more than 60 seconds. 3.23. Running uWSGI instances with Circus 261 uWSGI Documentation, Release 2.0 3.24 Embedding an application in uWSGI Starting from uWSGI 0.9.8.2, you can embed files in the server binary. These can be any file type, including config- uration files. You can embed directories too, so by hooking the Python module loader you can transparently import packages, too. In this example we’ll be embedding a full Flask project. 3.24.1 Step 1: creating the build profile We’re assuming you have your uWSGI source at the ready. In the buildconf directory, define your profile – let’s call it flask.ini: [uwsgi] inherit= default bin_name= myapp embed_files= bootstrap.py,myapp.py myapp.py is a simple flask app. from flask import Flask app= Flask(__name__) app.debug= True @app.route(’/’) def index(): return "Hello World" bootstrap.py is included in the source distribution. It will extend the python import subsystem to use files em- bedded in uWSGI. Now compile your app-inclusive server. Files will be embedded as symbols in the executable. Dots and dashes, etc. in filenames are thus transformed to underscores. python uwsgiconfig.py --build flask As bin_name was myapp, you can now run ./myapp --socket :3031 --import sym://bootstrap_py --module myapp:app The sym:// pseudoprotocol enables uWSGI to access the binary’s embedded symbols and data, in this case importing bootstrap.py directly from the binary image. 3.24.2 Step 2: embedding the config file We want our binary to automatically load our Flask app without having to pass a long command line. Let’s create the configuration – flaskconfig.ini: [uwsgi] socket= 127.0.0.1:3031 import= sym://bootstrap_py module= myapp:app And add it to the build profile as a config file. 262 Chapter 3. 
Table of Contents uWSGI Documentation, Release 2.0 [uwsgi] inherit= default bin_name= myapp embed_files= bootstrap.py,myapp.py embed_config= flaskconfig.ini Then, after you rebuild the server python uwsgiconfig.py --build flask you can now simply launch ./myapp # Remember that this new binary continues to be able to take parameters and config files: ./myapp --master --processes 4 3.24.3 Step 3: embedding flask itself Now, we are ready to kick asses with uWSGI ninja awesomeness. We want a single binary embedding all of the Flask modules, including Werkzeug and Jinja2, Flask’s dependencies. We need to have these packages’ directories and then specify them in the build profile. [uwsgi] inherit= default bin_name= myapp embed_files= bootstrap.py,myapp.py,werkzeug=site-packages/werkzeug,jinja2=site-packages/jinja2,flask=site-packages/flask embed_config= flaskconfig.ini Note: This time we have used the form “name=directory” to force symbols to a specific names to avoid ending up with a clusterfuck like site_packages_flask___init___py. Rebuild and re-run. We’re adding –no-site when running to show you that the embedded modules are being loaded. python uwsgiconfig.py --build flask ./myapp --no-site --master --processes 4 3.24.4 Step 4: adding templates Still not satisfied? WELL YOU SHOULDN’T BE. [uwsgi] inherit= default bin_name= myapp embed_files= bootstrap.py,myapp.py,werkzeug=site-packages/werkzeug,jinja2=site-packages/jinja2,flask=site-packages/flask,templates embed_config= flaskconfig.ini Templates will be added to the binary... but we’ll need to instruct Flask on how to load templates from the binary image by creating a custom Jinja2 template loader. from flask import Flask, render_template from flask.templating import DispatchingJinjaLoader class SymTemplateLoader(DispatchingJinjaLoader): def symbolize(self, name): 3.24. Embedding an application in uWSGI 263 uWSGI Documentation, Release 2.0 return name.replace(’.’,’_’).replace(’/’,’_’).replace(’-’,’_’) def get_source(self, environment, template): try: import uwsgi source= uwsgi.embedded_data("templates_ %s"% self.symbolize(template)) return source, None, lambda: True except: pass return super(SymTemplateLoader, self).get_source(environment, template) app= Flask(__name__) app.debug= True app.jinja_env.loader= SymTemplateLoader(app) @app.route(’/’) def index(): return render_template(’hello.html’) @app.route(’/foo’) def foo(): return render_template(’bar/foo.html’) POW! BIFF! NINJA AWESOMENESS. 3.25 Logging See also: Formatting uWSGI requests logs 3.25.1 Basic logging The most basic form of logging in uWSGI is writing requests, errors, and informational messages to stdout/stderr. This happens in the default configuration. The most basic form of log redirection is the --logto /--logto2 / --daemonize options which allow you to redirect logs to files. Basic logging to files To log to files instead of stdout/stderr, use --logto, or to simultaneously daemonize uWSGI, --daemonize. ./uwsgi -s :3031 -w simple_app --daemonize /tmp/mylog.log ./uwsgi -s :3031 -w simple_app --logto /tmp/mylog.log # logto2 only opens the log file after privileges have been dropped to the specified uid/gid. ./uwsgi -s :3031 -w simple_app --uid 1001 --gid 1002 --logto2 /tmp/mylog.log Basic logging (connected UDP mode) With UDP logging you can centralize cluster logging or redirect the persistence of logs to another machine to offload disk I/O. UDP logging works in both daemonized and interactive modes. 
UDP logging operates in connected-socket mode, so the UDP server must be available before uWSGI starts. For a more raw approach (working in unconnected mode) see the section on socket logging below.

To enable connected UDP mode, pass the address of a UDP server to the --daemonize/--logto option:

./uwsgi -s :3031 -w simple_app --daemonize 192.168.0.100:1717
./uwsgi -s :3031 -w simple_app --logto 192.168.0.100:1717

This will redirect all the stdout/stderr data to the UDP socket on 192.168.0.100, port 1717. Now you need a UDP server that will manage your UDP messages. You could use netcat, or even uWSGI:

nc -u -p 1717 -s 192.168.0.100 -l
./uwsgi --udp 192.168.0.100:1717

The second way is a bit more useful as it will print the source (ip:port) of every message. When multiple uWSGI servers log to the same UDP server, this lets you tell one server from another. Naturally you can write your own apps to manage/filter/save the logs received via UDP.

3.25.2 Pluggable loggers

uWSGI also supports pluggable loggers, which give you more flexibility on where and what to log. Depending on the configuration of your uWSGI build, some loggers may or may not be available. Some may need to be loaded as plugins. To find out what loggers are available in your build, invoke uWSGI with --logger-list.

To set up a pluggable logger, use the --logger or --req-logger options. --logger will set up a logger for every message while --req-logger will set up a logger for request information messages. This is the syntax:

--logger <plugin>[:options]
--logger "<plugin>[:options]" # The quotes are only required on the command line -- config files don't use them

You may set up as many loggers as you like. Named loggers are used for log routing. A very simple example of split request/error logging using plain text files follows.

[uwsgi]
req-logger= file:/tmp/reqlog
logger= file:/tmp/errlog

3.25.3 Log routing

By default all log lines are sent to all declared loggers. If this is not what you want, you can use --log-route (and --log-req-route for request loggers) to specify a regular expression to route certain log messages to different destinations. For instance:

[uwsgi]
logger= mylogger1 syslog
logger= theredisone redislog:127.0.0.1:6269
logger= theredistwo redislog:127.0.0.1:6270
logger= file:/tmp/foobar # This logger will log everything else, as it's not named
logger= internalservererror file:/tmp/errors
# ...
log-route= internalservererror (HTTP/1.\d 500)
log-route= mylogger1 uWSGI listen queue of socket .* full

This will log each 500-level error to /tmp/errors, while listen queue full errors will end up in /tmp/foobar. This is somewhat similar to The uWSGI alarm subsystem (from 1.3), though alarms are usually heavier and should only be used for critical situations.

3.25.4 Logging to files

logfile plugin – embedded by default.

3.25.5 Logging to sockets

logsocket plugin – embedded by default. You can log to an unconnected UNIX or UDP socket using --logger socket:... (or --log-socket ...).

uwsgi --socket :3031 --logger socket:/tmp/uwsgi.logsock

will send log entries to the Unix socket /tmp/uwsgi.logsock.

uwsgi --socket :3031 --logger socket:192.168.173.19:5050

will send log datagrams to the UDP address 192.168.173.19 on port 5050.
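As with plain UDP logging, you need something listening on the other end; netcat or another uWSGI instance work, but a few lines of Python are enough too. This is only a sketch (the address and port match the example above):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("192.168.173.19", 5050))
while True:
    data, addr = sock.recvfrom(65535)
    # each datagram carries a log line; prefix it with its source so multiple
    # instances logging to the same server can be told apart
    print("%s:%d %s" % (addr[0], addr[1], data.decode("utf-8", "replace").rstrip()))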
You may also multicast logs to multiple log servers by passing the multicast address:

uwsgi --socket :3031 --logger socket:225.1.1.1:1717

3.25.6 Logging to syslog

logsyslog plugin – embedded by default. The logsyslog plugin routes logs to Unix standard syslog. You may pass an optional ID to send, and the "facility" for the log entry.

uwsgi --socket :3031 --logger syslog:uwsgi1234

or

uwsgi --socket :3031 --logger syslog:uwsgi1234,local6

to send to the local6 facility.

3.25.7 Logging to remote syslog

logrsyslog plugin – embedded by default. The logrsyslog plugin routes logs to Unix standard syslog residing on a remote server. In addition to the address and port of the remote syslog server, you may pass an optional ID to send as the "facility" parameter for the log entry.

uwsgi --socket :3031 --logger rsyslog:12.34.56.78:12345,uwsgi1234

3.25.8 Redis logger

redislog plugin – embedded by default. By default the redislog plugin will 'publish' each logline to a redis pub/sub queue. The logger plugin syntax is:

--logger redislog[:<host>,<command>,<prefix>]

By default host is mapped to 127.0.0.1:6379, command is mapped to "publish uwsgi" and prefix is empty. To publish to a queue called foobar, use redislog:127.0.0.1:6379,publish foobar. Redis logging is not limited to pub/sub. You could for instance push items into a list, as in the next example.

--logger redislog:/tmp/redis.sock,rpush foo,example.com

As error situations could cause the master to block while writing a log line to a remote server, it's a good idea to use --threaded-logger to offload log writes to a secondary thread.

3.25.9 MongoDB logger

mongodblog plugin – embedded by default. The logger syntax for MongoDB logging (mongodblog) is

--logger mongodblog[:<host>,<collection>,<node>]

where host is the address of the MongoDB instance (default 127.0.0.1:27017), collection names the collection to write log lines into (default uwsgi.logs) and node is an identification string for the instance sending logs (default: server hostname).

--logger mongodblog

will run the logger with default values, while

--logger mongodblog:127.0.0.1:9090,foo.bar

will write logs to the mongodb server 127.0.0.1:9090 in the collection foo.bar using the default node name. As with the Redis logger, offloading log writes to a dedicated thread is a good choice.

[uwsgi]
threaded-logger= true
logger= mongodblog:127.0.0.1:27017,uwsgi.logs_of_foobar
# As usual, you could have multiple loggers:
# logger = mongodblog:192.168.173.22:27017,uwsgi.logs_of_foobar
socket= :3031

3.25.10 ZeroMQ logging

As with UDP logging you can centralize/distribute logging via ZeroMQ. Build your logger daemon using a ZMQ_PULL socket:

import zmq
ctx = zmq.Context()
puller = ctx.socket(zmq.PULL)
puller.bind("tcp://192.168.173.18:9191")
while True:
    message = puller.recv()
    print message,

Now run your uWSGI server:

uwsgi --logger zeromq:tcp://192.168.173.18:9191 --socket :3031 --module werkzeug.testapp:test_app

(--log-zeromq is an alias for this logger.)

3.25.11 Crypto logger (plugin)

If you host your applications on cloud services without persistent storage you may want to send your logs to external systems. However, logs often contain sensitive information that should not be transferred in the clear. The logcrypto plugin logger attempts to solve this issue by encrypting each log packet before sending it over UDP to a server able to decrypt it.
The next example will send each log packet to a UDP server available at 192.168.173.22:1717, encrypting the text with the secret key ciaociao with Blowfish in CBC mode.

uwsgi --plugin logcrypto --logger crypto:addr=192.168.173.22:1717,algo=bf-cbc,secret=ciaociao -M -p 4 -s :3031

An example server is available at https://github.com/unbit/uwsgi/blob/master/contrib/cryptologger.rb

3.25.12 Graylog2 logger (plugin)

graylog2 plugin – not compiled by default. This plugin will send logs to a Graylog2 server in Graylog2's native GELF format.

uwsgi --plugin graylog2 --logger graylog2:127.0.0.1:1234,dsfargeg

3.25.13 Systemd logger (plugin)

systemd_logger plugin – not compiled by default. This plugin will write log entries into the Systemd journal.

uwsgi --plugin systemd_logger --logger systemd

3.25.14 Writing your own logger plugins

This plugin, foolog.c, will write your messages to the file specified with --logto/--daemonize, adding a simple prefix, using vector IO.

#include <uwsgi.h>

ssize_t uwsgi_foolog_logger(struct uwsgi_logger *ul, char *message, size_t len) {
    struct iovec iov[2];
    iov[0].iov_base = "[foo] ";
    iov[0].iov_len = 6;
    iov[1].iov_base = message;
    iov[1].iov_len = len;
    return writev(uwsgi.original_log_fd, iov, 2);
}

void uwsgi_foolog_register() {
    uwsgi_register_logger("foolog", uwsgi_foolog_logger);
}

struct uwsgi_plugin foolog_plugin = {
    .name = "foolog",
    .on_load = uwsgi_foolog_register,
};

3.26 Formatting uWSGI requests logs

uWSGI has a --logformat option for building custom request loglines. The syntax is simple:

[uwsgi]
logformat= i am a logline reporting "%(method) %(uri) %(proto)" returning with status %(status)

All of the variables marked with %() are substituted using specific rules. Three kinds of logvars are defined ("offsetof", functions, and user-defined).

3.26.1 offsetof

These are taken blindly from the internal wsgi_request structure of the current request.

• %(uri) -> REQUEST_URI
• %(method) -> REQUEST_METHOD
• %(user) -> REMOTE_USER
• %(addr) -> REMOTE_ADDR
• %(host) -> HTTP_HOST
• %(proto) -> SERVER_PROTOCOL
• %(uagent) -> HTTP_USER_AGENT (starting from 1.4.5)
• %(referer) -> HTTP_REFERER (starting from 1.4.5)

3.26.2 functions

These are simple functions called for generating the logvar value:

• %(status) -> HTTP response status code
• %(micros) -> response time in microseconds
• %(msecs) -> response time in milliseconds
• %(time) -> timestamp of the start of the request
• %(ctime) -> ctime of the start of the request
• %(epoch) -> the current time in Unix format
• %(size) -> response body size + response headers size (since 1.4.5)
• %(ltime) -> human-formatted (Apache style) request time (since 1.4.5)
• %(hsize) -> response headers size (since 1.4.5)
• %(rsize) -> response body size (since 1.4.5)
• %(cl) -> request content body size (since 1.4.5)
• %(pid) -> pid of the worker handling the request (since 1.4.6)
Formatting uWSGI requests logs 269 uWSGI Documentation, Release 2.0 •%(wid) -> id of the worker handling the request (since 1.4.6) •%(switches) -> number of async switches (since 1.4.6) •%(vars) -> number of CGI vars in the request (since 1.4.6) •%(headers) -> number of generated response headers (since 1.4.6) •%(core) -> the core running the request (since 1.4.6) •%(vsz) -> address space/virtual memory usage (in bytes) (since 1.4.6) •%(rss) -> RSS memory usage (in bytes) (since 1.4.6) •%(vszM) -> address space/virtual memory usage (in megabytes) (since 1.4.6) •%(rssM) -> RSS memory usage (in megabytes) (since 1.4.6) •%(pktsize) -> size of the internal request uwsgi packet (since 1.4.6) •%(modifier1) -> modifier1 of the request (since 1.4.6) •%(modifier2) -> modifier2 of the request (since 1.4.6) •%(metric.XXX) -> access the XXX metric value (see The Metrics subsystem) •%(rerr) -> number of read errors for the request (since 1.9.21) •%(werr) -> number of write errors for the request (since 1.9.21) •%(ioerr) -> number of write and read errors for the request (since 1.9.21) •%(tmsecs) -> timestamp of the start of the request in milliseconds since the epoch (since 1.9.21) •%(tmicros) -> timestamp of the start of the request in microseconds since the epoch (since 1.9.21) •%(var.XXX) -> the content of request variable XXX (like var.PATH_INFO, available from 1.9.21) 3.26.3 User-defined logvars You can define logvars within your request handler. These variables live only per-request. import uwsgi def application(env, start_response): uwsgi.set_logvar(’foo’,’bar’) # returns ’bar’ print uwsgi.get_logvar(’foo’) uwsgi.set_logvar(’worker_id’, str(uwsgi.worker_id())) ... With the following log format you will be able to access code-defined logvars: uwsgi --logformat ’worker id = %(worker_id) for request "%(method) %(uri) %(proto)" test = %(foo)’ 3.26.4 Apache-style combined request logging To generate Apache-compatible logs: [uwsgi] ... log-format = %(addr) - %(user) [%(ltime)] "%(method) %(uri) %(proto)" %(status) %(size) "%(referer)" "%(uagent)" ... 270 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.26.5 Hacking logformat (Updated to 1.9.21) You can register new “logchunk” (the function to call for each logformat symbol) with struct uwsgi_logchunk *uwsgi_register_logchunk(char *name, ssize_t (*func)(struct wsgi_request *, char **), int need_free); • name – the name of the symbol • need_free – if 1, means the pointer set by func must be free()d • func – the function to call in the log handler static ssize_t uwsgi_lf_foobar(struct wsgi_request *wsgi_req, char **buf) { *buf= uwsgi_num2str(wsgi_req->status); return strlen(*buf); } static void register_logchunks() { uwsgi_register_logchunk("foobar", uwsgi_lf_foobar,1); } struct uwsgi_plugin foobar_plugin={ .name="foobar", .on_load= register_logchunks, }; Now if you load the foobar plugin, you will be able to use the %(foobar) request logging variable (that would report the request status). 3.27 Log encoders uWSGI 1.9.16 got the “log encoding” feature. An encoder receives a logline and give back a “transformation” of it. Encoders can be added by plugins, and can be enabled in chain (the output of an encoder will be the input of the following one and so on). [uwsgi] ; send logs to udp address 192.168.173.13:1717 logger = socket:192.168.173.13:1717 ; before sending a logline to the logger encode it in gzip log-encoder = gzip ; after gzip add a ’clear’ prefix to easy decode log-encoder = prefix i am gzip encoded ... 
With this configuration the log server will receive the "i am gzip encoded" string followed by the true log message encoded in gzip. The log encoder syntax is the following:

log-encoder = <encoder>[ args]

so args (if any) are separated by a single space.

3.27.1 Request logs VS stdout/stderr

The --log-encoder option encodes only the stdout/stderr logs. If you want to encode request logs, use --log-req-encoder:

[uwsgi]
; send request logs to udp address 192.168.173.13:1717
req-logger = socket:192.168.173.13:1717
; before sending a logline to the logger encode it in gzip
log-req-encoder = gzip
; after gzip add a 'clear' prefix for easy decoding
log-req-encoder = prefix i am gzip encoded
...

3.27.2 Routing encoders

Log routing allows sending each logline to a different log engine based on regexps. You can use the same system with encoders too:

[uwsgi]
; by default send logs to udp address 192.168.173.13:1717
logger = socket:192.168.173.13:1717
; an alternative logger using the same address
logger = secondlogger socket:192.168.173.13:1717
; use 'secondlogger' for the logline containing 'uWSGI'
log-route = secondlogger uWSGI
; before sending a logline to the 'secondlogger' logger encode it in gzip
log-encoder = gzip:secondlogger
...

3.27.3 Core encoders

The following encoders are available in the uwsgi 'core':

prefix - add a raw prefix to each log msg
suffix - add a raw suffix to each log msg
nl - add a newline char to each log msg
gzip - compress each msg with gzip (requires zlib)
compress - compress each msg with zlib compress (requires zlib)
format - apply the specified format to each log msg:

[uwsgi]
...
log-encoder = format [FOO ${msg} BAR]
...

json - like format, but each variable is json escaped:

[uwsgi]
...
log-encoder = json {"unix":${unix}, "msg":"${msg}"}
...

The following variables (for format and json) are available:

${msg} - the raw log message (newline stripped)
${msgnl} - the raw log message (with newline)
${unix} - the current unix time
${micros} - the current unix time in microseconds
${strftime:xxx} - strftime using the xxx format:

[uwsgi]
...
; we need to escape % to avoid magic vars nameclash
log-encoder = json {"unix":${unix}, "msg":"${msg}", "date":"${strftime:%%d/%%m/%%Y %%H:%%M:%%S}"}
...

3.27.4 The msgpack encoder

This is the first log-encoder plugin officially added to uWSGI sources. It allows encoding of loglines in msgpack (http://msgpack.org/) format. The syntax is pretty versatile, as it has been developed for adding any information to a single packet:

log-encoder = msgpack <format>

The format is pretty complex, as it is a list of the single items in the whole packet. For example, if you want to encode the {'foo':'bar', 'test':17} dictionary you need to read it as: a map of 2 items | the string foo | the string bar | the string test | the integer 17, for a total of 5 items. A more complex structure, {'boo':30, 'foo':'bar', 'test': [1,3,3,17.30,nil,true,false]}, will be: a map of 3 items | the string boo | the number 30 | the string foo | the string bar | the string test | an array of 7 items | the integer 1 | the integer 3 | the integer 3 | the float 17.30 | a nil | a true | a false. The string representation of the first example is:

map:2|str:foo|str:bar|str:test|int:17

The pipe is the separator of each item.
The string before the colon is the type of item, followed by the optional argument. The following item types are supported:

map - a dictionary, the argument is the number of items
array - an array, the argument is the number of items
str - a string, the argument is the string itself
bin - a byte array, the argument is the binary stream itself
int - an integer, the argument is the number
float - a float, the argument is the number
nil - undefined/NULL
true - boolean TRUE
false - boolean FALSE

In addition to msgpack types, a series of dynamic types are available:

msg - translate the logline to a msgpack string with the newline chopped
msgbin - translate the logline to a msgpack byte array with the newline chopped
msgnl - translate the logline to a msgpack string (newline included)
msgbin - translate the logline to a msgpack byte array (newline included)
unix - translate to an integer of the unix time
micros - translate to an integer of the unix time in microseconds
strftime - translate to a string using strftime syntax. The strftime format is the argument.

As an example you can send loglines to a logstash server via UDP (logstash debug configuration):

input {
  udp {
    codec => msgpack {}
    port => 1717
  }
}
output {
  stdout { debug => true }
  elasticsearch { embedded => true }
}

[uwsgi]
logger = socket:192.168.173.13:1717
log-encoder = msgpack map:4|str:message|msg|str:hostname|str:%h|str:version|str:%V|str:appname|str:myapp
...

This will generate the following structure:

{
  "message": "*** Starting uWSGI 1.9.16-dev-29d80ce (64bit) on [Sat Sep 7 15:04:32 2013] ***",
  "hostname": "unbit.it",
  "version": "1.9.16-dev",
  "appname": "myapp"
}

that will be stored in elasticsearch.

3.27.5 Notes

Encoders automatically enable --log-master.
For best performance consider allocating a thread for log sending with --threaded-logger.

3.28 Hooks (updated to uWSGI 1.9.16)

uWSGI's main directive is being "modular". The vast majority of its features are exposed as plugins, both to allow users to optimize their build and to encourage developers to extend it. Writing plugins can be an annoying task, especially if you only need to change/implement a single function. For simple tasks, uWSGI exposes a hook API you can abuse to modify uWSGI's internal behaviors.

3.28.1 The "hookable" uWSGI phases

Before being ready to manage requests, uWSGI goes through various "phases". You can attach one or more "hooks" to these phases. Each phase can be "fatal"; if so, a failing hook will mean the failure of the whole uWSGI instance (generally calling exit(1)). Currently (September 2013) the following phases are available:

• asap - run directly after the configuration file has been parsed, before anything else is done. It is fatal.
• pre-jail - run before any attempt to drop privileges or put the process in some form of jail. It is fatal.
• post-jail - run soon after any jailing, but before privileges drop. If jailing requires fork(), the parent process runs this phase. It is fatal.
• in-jail - run soon after jailing, but after post-jail. If jailing requires fork(), the children run this phase. It is fatal.
• as-root - run soon before privileges drop (last chance to run something as root). It is fatal.
• as-user - run soon after privileges drop. It is fatal.
• pre-app - run before applications are loaded. It is fatal.
• post-app - run after applications are loaded. It is fatal.
• accepting - run before each worker starts accepting requests (available from uWSGI 1.9.21).
• accepting1 - run before the first worker starts accepting requests (available from uWSGI 1.9.21).
• accepting-once - run before each worker starts accepting requests (available from uWSGI 1.9.21, runs one time per instance).
• accepting1-once - run before the first worker starts accepting requests (available from uWSGI 1.9.21, runs one time per instance).
• as-user-atexit - run before shutdown of the instance. It is non-fatal.
• as-emperor - run soon after the spawn of a vassal in the Emperor process. It is non-fatal.
• as-vassal - run in the vassal before executing the uwsgi binary. It is fatal.

3.28.2 The "hardcoded" hooks

As said before, the purpose of the hook subsystem is to allow attaching "hooks" to the various uWSGI phases. There are two kinds of hooks. The simple ones are the so-called "hardcoded" ones. They expose common patterns at the cost of versatility. Currently (September 2013) the following "hardcoded" hooks are available (they run in the order they are shown below):

mount – mount filesystems

Arguments: <filesystem> <src> <mountpoint> [flags]

The exposed flags are the ones available for the operating system. As an example, on Linux you will have options like bind, recursive, readonly etc.

umount – unmount filesystems

Arguments: <mountpoint> [flags]

exec – run shell commands

Arguments: <command> [args...]

Run the command under /bin/sh. If for some reason you do not want to use /bin/sh as the running shell, you can override it with the --binsh option. You can specify multiple --binsh options; they will be tried until one valid shell is found.

call – call functions in the current process address space

Arguments: <symbol> [args...]

Generally the arguments are ignored (the only exceptions are the emperor/vassal phases, see below) as the system expects to call the symbol without arguments. <symbol> can be any symbol currently available in the process's address space. This allows some interesting tricks when combined with the --dlopen uWSGI option:

// foo.c
#include <stdio.h>

void foo_hello() {
    printf("I am the foo_hello function called by a hook!\n");
}

Build this as a shared library:

gcc -o foo.so -shared -fPIC foo.c

and load it into the uWSGI symbol table:

uwsgi --dlopen ./foo.so ...

From now on, the "foo_hello" symbol is available in the uWSGI symbol table, ready to be called by the 'call' hooks.

Warning: As --dlopen is a wrapper for the dlopen() function, beware of absolute paths and library search paths. If you do not want headaches, always use absolute paths when dealing with shared libraries.

3.28.3 Attaching "hardcoded" hooks

Each hardcoded hook exposes a set of options for each phase (with some exceptions). Each option is composed of the name of the hook and its phase, so to run a command in the as-root phase you will use --exec-as-root, or --exec-as-user for the as-user phase. Remember, you can attach all of the hooks you need to a hook-phase pair.

[uwsgi]
...
exec-as-root = cat /proc/cpuinfo
exec-as-root = echo 1 > /proc/sys/net/ipv4/ip_forward
exec-as-user = ls /tmp
exec-as-user-at-exit = rm /tmp/foobar
dlopen = ./foo.so
call-as-user = foo_hello
...

The only exceptions to the rule are the as-emperor and as-vassal phases. For various reasons they expose a bunch of handy variants (see below).

3.28.4 The "advanced" hooks

A problem that limits the versatility of hardcoded hooks (a big no-no in the uWSGI state of mind) is that you cannot control the order of the whole chain (each phase executes its hooks grouped by type).
If you want more control, “advanced” hooks are the best choice. Each phase has a single chain in which you specify the hook the call and which handler. Handlers specify how to run hooks. New handlers can be registered by plugins. Currently the handlers exposed by the core are: • exec - same as the ‘exec’ hardcoded options • call - call the specified symbol ignoring return value • callret - call the specified symbol expecting an int return. anything != 0 means failure • callint - call the specified symbol parsing the argument as an int • callintret - call the specified symbol parsing the argument as an int and expecting an int return. • mount - same as ‘mount’ hardcoded options • umount - same as ‘umount’ hardcoded options • cd - convenience handler, same as call:chdir • exit - convenience handler, same as callint:exit [num] • print - convenience handler, same as calling the uwsgi_log symbol • write - (from uWSGI 1.9.21), write a string to the specified file using write:<file> • writefifo - (from uWSGI 1.9.21), write a string to the specified FIFO using writefifo:<file> • unlink - (from uWSGI 1.9.21), unlink the specified file [uwsgi] ... hook-as-root = mount:proc none /proc hook-as-root = exec:cat /proc/self/mounts hook-pre-app = callint:putenv PATH=bin:$(PATH) hook-post-app = call:uwsgi_log application has been loaded hook-as-user-atexit = print:goodbye cruel world ... 3.28. Hooks 277 uWSGI Documentation, Release 2.0 3.29 Glossary harakiri A feature of uWSGI that aborts workers that are serving requests for an excessively long time. Configured using the harakiri family of options. Every request that will take longer than the seconds specified in the harakiri timeout will be dropped and the corresponding worker recycled. master uWSGI’s built-in prefork+threading multi-worker management mode, activated by flicking the master switch on. For all practical serving deployments it’s not really a good idea not to use master mode. 3.30 uWSGI third party plugins The following plugins (unless otherwise specified) are not commercially supported. Feel free to add your plugin to the list by sending a pull request to the uwsgi-docs project. 3.30.1 uwsgi-capture • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-capture Allows gathering video4linux frames in a sharedarea. 3.30.2 uwsgi-wstcp • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-wstcp Maps websockets to TCP connections (useful for proxying via javascript). 3.30.3 uwsgi-pgnotify • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-pgnotify Integrates the PostgreSQL notification system with the uWSGI signal framework. 3.30.4 uwsgi-quota • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-quota Allows to set and monitor filesystem quotas. 278 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.30.5 uwsgi-eventfd • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-eventfd Allows to monitor eventfd() objects (like events sent by the cgroup system). 3.30.6 uwsgi-console-broadcast • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-console-broadcast Exposes hooks for sending broadcast messages to user terminals. 3.30.7 uwsgi-strophe • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-strophe Integration with the libstrophe library (xmpp). 
3.30.8 uwsgi-alarm-chain • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-alarm-chain Virtual alarm handler combining multiple alarms into a single one. 3.30.9 uwsgi-netlink • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-netlink Integration with the Linux netlink subsystem. 3.30.10 uwsgi-pushover • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-pushover Integration with Pushover.net services. 3.30. uWSGI third party plugins 279 uWSGI Documentation, Release 2.0 3.30.11 uwsgi-consul • License: MIT • Author: unbit, ultrabug • Website: https://github.com/unbit/uwsgi-consul Integration with consul agents (consul.io) 3.30.12 uwsgi-influxdb • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-influxdb Allows sending metrics to influxdb 3.30.13 uwsgi-opentsdb • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-opentsdb Allows sending metrics to opentsdb 3.30.14 uwsgi-cares • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-cares exposes non-blocking dns query via the cares library 3.30.15 uwsgi-ganglia • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-ganglia Allows sending metrics to ganglia 3.30.16 uwsgi-bonjour • License: MIT • Author: unbit, 20tab • Website: https://github.com/unbit/uwsgi-bonjour Automatically register domain names in OSX bonjour subsystem 280 Chapter 3. Table of Contents uWSGI Documentation, Release 2.0 3.30.17 uwsgi-avahi • License: MIT • Author: 20tab • Website: https://github.com/20tab/uwsgi-avahi Automatically register domain names in avahi subsystem 3.30.18 uwsgi-datadog • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-datadog Automatically send metrics to datadog (https://www.datadoghq.com/) 3.30.19 uwsgi-apparmor • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-apparmor Allows setting apparmor profiles for instances 3.30.20 uwsgi-docker • License: MIT • Author: unbit • Website: https://github.com/unbit/uwsgi-docker Allows running dockerized (https://docker.io) vassals 3.30. uWSGI third party plugins 281 uWSGI Documentation, Release 2.0 282 Chapter 3. Table of Contents CHAPTER 4 Tutorials 4.1 The uWSGI Caching Cookbook This is a cookbook of various caching techniques using uWSGI internal routing, The uWSGI caching framework and uWSGI Transformations The examples assume a modular uWSGI build. You can ignore the ‘plugins’ option, if you are using a monolithic build. Recipes are tested over uWSGI 1.9.7. Older versions may not work. 4.1.1 Let’start This is a simple perl/PSGI Dancer app we deploy on an http-socket with 4 processes use Dancer; get ’/’=> sub { "Hello World!" }; dance; This is the uWSGI config, pay attention to the log-micros directive. The objective of uWSGI in-memory caching is generating a response in less than 1 millisecond (yes, this is true), so we want to get the response time logging in microseconds. 
CHAPTER 4
Tutorials

4.1 The uWSGI Caching Cookbook

This is a cookbook of various caching techniques using uWSGI internal routing, The uWSGI caching framework and uWSGI Transformations. The examples assume a modular uWSGI build; you can ignore the 'plugins' option if you are using a monolithic build. The recipes have been tested on uWSGI 1.9.7; older versions may not work.

4.1.1 Let's start

This is a simple Perl/PSGI Dancer app we deploy on an HTTP socket with 4 processes:

use Dancer;
get '/' => sub {
    "Hello World!"
};
dance;

This is the uWSGI config; pay attention to the log-micros directive. The objective of uWSGI in-memory caching is generating a response in less than 1 millisecond (yes, this is true), so we want response times logged with microsecond resolution.

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true

Run the uWSGI instance in your terminal and just make a bunch of requests to it:

curl -D /dev/stdout http://localhost:9090/

If all goes well you should see something similar in your uWSGI logs:

[pid: 26586|app: 0|req: 1/1] 192.168.173.14 () {24 vars in 327 bytes} [Wed Apr 17 09:06:58 2013] GET / => generated 12 bytes in 3497 micros (HTTP/1.1 200) 4 headers in 126 bytes (0 switches on core 0)
[pid: 26586|app: 0|req: 2/2] 192.168.173.14 () {24 vars in 327 bytes} [Wed Apr 17 09:07:14 2013] GET / => generated 12 bytes in 1134 micros (HTTP/1.1 200) 4 headers in 126 bytes (0 switches on core 0)
[pid: 26586|app: 0|req: 3/3] 192.168.173.14 () {24 vars in 327 bytes} [Wed Apr 17 09:07:16 2013] GET / => generated 12 bytes in 1249 micros (HTTP/1.1 200) 4 headers in 126 bytes (0 switches on core 0)
[pid: 26586|app: 0|req: 4/4] 192.168.173.14 () {24 vars in 327 bytes} [Wed Apr 17 09:07:17 2013] GET / => generated 12 bytes in 953 micros (HTTP/1.1 200) 4 headers in 126 bytes (0 switches on core 0)
[pid: 26586|app: 0|req: 5/5] 192.168.173.14 () {24 vars in 327 bytes} [Wed Apr 17 09:07:18 2013] GET / => generated 12 bytes in 1016 micros (HTTP/1.1 200) 4 headers in 126 bytes (0 switches on core 0)

while curl will return:

HTTP/1.1 200 OK
Server: Perl Dancer 1.3112
Content-Length: 12
Content-Type: text/html
X-Powered-By: Perl Dancer 1.3112

Hello World!

The first request on a process took about 3 milliseconds (this is normal, as a lot of code is executed the first time), but the following ones ran in about 1 millisecond. Now we want to store the response in the uWSGI cache.

4.1.2 The first recipe

We first create a uWSGI cache named 'mycache' with 100 slots of 64k each (the new options are at the end of the config) and, at each request for '/', we search it for a specific item named 'myhome'. This time we load the router_cache plugin too (it is built in by default in monolithic servers).

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,router_cache
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; at each request for / check for a 'myhome' item in the 'mycache' cache
; 'route' applies a regexp to the PATH_INFO request var
route = ^/$ cache:key=myhome,name=mycache

Restart uWSGI and re-run the previous test with curl. Sadly nothing will change. Why? Because you did not instruct uWSGI to store the plugin response in the cache. You need to use the cachestore routing action:
[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,router_cache
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; at each request for / check for a 'myhome' item in the 'mycache' cache
; 'route' applies a regexp to the PATH_INFO request var
route = ^/$ cache:key=myhome,name=mycache
; store each successful request (200 HTTP status code) for '/' in the 'myhome' item
route = ^/$ cachestore:key=myhome,name=mycache

Now re-run the test: you should see requests going down to a range of 100-300 microseconds (it depends on various factors, but you should gain at least 60% in response time).

The log line reports -1 as the app id:

[pid: 26703|app: -1|req: -1/2] 192.168.173.14 () {24 vars in 327 bytes} [Wed Apr 17 09:24:52 2013] GET / => generated 12 bytes in 122 micros (HTTP/1.1 200) 2 headers in 64 bytes (0 switches on core 0)

This is because when a response is served from the cache your app/plugin is not touched (in this case, no Perl call is involved). You will notice fewer headers too:

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 12

Hello World!

This is because only the body of a response is cached. By default the generated response is served as text/html, but you can change that or let the mime types engine do the work for you (see later).

4.1.3 Cache them all !!!

We want to cache all of our requests. Some of them return images and CSS, while the others are always text/html:

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,router_cache
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; load the mime types engine
mime-file = /etc/mime.types
; at each request starting with /img check it in the cache (use the mime types engine for the content type)
route = ^/img/(.+) cache:key=/img/$1,name=mycache,mime=1
; at each request ending with .css check it in the cache
route = \.css$ cache:key=${REQUEST_URI},name=mycache,content_type=text/css
; fall back to text/html for all of the other requests
route = .* cache:key=${REQUEST_URI},name=mycache
; store each successful request (200 HTTP status code) in the 'mycache' cache using the REQUEST_URI as the key
route = .* cachestore:key=${REQUEST_URI},name=mycache
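If some paths must never be served from (or stored into) the cache, you can short-circuit the routing chain before the rules above ever match. A sketch, assuming your build includes the 'continue' routing action (the /admin prefix is just an example):

; placed before the cache/cachestore rules:
; stop scanning the routing table and hand /admin straight to the app
route = ^/admin continue: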
4.1.4 Multiple caches

You may want/need to store items in different caches. We can change the previous recipe to use three different caches for images, CSS and HTML responses.

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,router_cache
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; create a cache for images with dynamic size (images can be big, so do not waste memory)
cache2 = name=images,items=20,bitmap=1,blocks=100
; a cache for css (20k per-item is more than enough)
cache2 = name=stylesheets,items=30,blocksize=20000
; load the mime types engine
mime-file = /etc/mime.types
; at each request starting with /img check it in the 'images' cache (use the mime types engine for the content type)
route = ^/img/(.+) cache:key=/img/$1,name=images,mime=1
; at each request ending with .css check it in the 'stylesheets' cache
route = \.css$ cache:key=${REQUEST_URI},name=stylesheets,content_type=text/css
; fall back to text/html for all of the other requests
route = .* cache:key=${REQUEST_URI},name=mycache
; store each successful request (200 HTTP status code) in the 'mycache' cache using the REQUEST_URI as the key
route = .* cachestore:key=${REQUEST_URI},name=mycache
; store images and stylesheets in the corresponding caches
route = ^/img/ cachestore:key=${REQUEST_URI},name=images
route = ^/css/ cachestore:key=${REQUEST_URI},name=stylesheets

Important: every matching 'cachestore' overwrites the previous one, which is why we put .* as the first rule.

4.1.5 Being more aggressive, the Expires HTTP header

You can set an expiry for each cache item. If an item has an expiry, it is also translated to an HTTP Expires header. This means that, once you have sent a cache item to the browser, it will not request it again until it expires!

We use the previous recipe, simply adding different expiry times to the items:

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,router_cache
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; create a cache for images with dynamic size (images can be big, so do not waste memory)
cache2 = name=images,items=20,bitmap=1,blocks=100
; a cache for css (20k per-item is more than enough)
cache2 = name=stylesheets,items=30,blocksize=20000
; load the mime types engine
mime-file = /etc/mime.types
; at each request starting with /img check it in the 'images' cache (use the mime types engine for the content type)
route = ^/img/(.+) cache:key=/img/$1,name=images,mime=1
; at each request ending with .css check it in the 'stylesheets' cache
route = \.css$ cache:key=${REQUEST_URI},name=stylesheets,content_type=text/css
; fall back to text/html for all of the other requests
route = .* cache:key=${REQUEST_URI},name=mycache
; store each successful request (200 HTTP status code) in the 'mycache' cache using the REQUEST_URI as the key
route = .* cachestore:key=${REQUEST_URI},name=mycache,expires=60
; store images and stylesheets in the corresponding caches
route = ^/img/ cachestore:key=${REQUEST_URI},name=images,expires=3600
route = ^/css/ cachestore:key=${REQUEST_URI},name=stylesheets,expires=3600

Images and stylesheets are cached for 1 hour, while HTML responses are cached for 1 minute.

4.1.6 Monitoring Caches

The stats server exposes cache information. There is an ncurses-based tool (https://pypi.python.org/pypi/uwsgicachetop) using that info.
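To make use of it with these recipes, expose the stats server from the same instance; a minimal addition (the address is an arbitrary local choice):

[uwsgi]
; ... the rest of the recipe ...
; expose instance stats (including per-cache counters) as JSON
stats = 127.0.0.1:9191

You can then point uwsgitop or uwsgicachetop at 127.0.0.1:9191 (check the tool's documentation for the exact invocation).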
4.1.7 Storing a GZIP variant of an object

Back to the first recipe. We may want to store two copies of a response: the "clean" one and a gzipped one for clients supporting gzip encoding. To enable the gzip copy you only need to choose a name for the item and pass it as the 'gzip' option of the cachestore action.

Then check the HTTP_ACCEPT_ENCODING request header: if it contains the word 'gzip' you can send the gzip variant.

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,router_cache
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; if the client supports GZIP give it the gzip body
route-if = contains:${HTTP_ACCEPT_ENCODING};gzip cache:key=gzipped_myhome,name=mycache,content_encoding=gzip
; else give it the clear version
route = ^/$ cache:key=myhome,name=mycache
; store each successful request (200 HTTP status code) for '/' in the 'myhome' item, in gzip too
route = ^/$ cachestore:key=myhome,gzip=gzipped_myhome,name=mycache
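Since the same URL can now be served with two different encodings, it is usually wise to tell intermediate caches about it. A hedged sketch, assuming your build includes the addheader routing action (verify that the header is actually emitted for cached responses in your uWSGI version):

; placed before the cache rules above:
; advertise that the representation depends on Accept-Encoding
route-run = addheader:Vary: Accept-Encoding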
4.1.8 Storing static files in the cache for fast serving

You can populate a uWSGI cache on server startup with static files, to serve them quickly. The --load-file-in-cache option is the right tool for the job:

[uwsgi]
plugins = 0:notfound,router_cache
http-socket = :9090
cache2 = name=files,bitmap=1,items=1000,blocksize=10000,blocks=2000
load-file-in-cache = files /usr/share/doc/socat/index.html
route-run = cache:key=${REQUEST_URI},name=files

You can specify as many --load-file-in-cache directives as you need, but a better approach would be:

[uwsgi]
plugins = router_cache
http-socket = :9090
cache2 = name=files,bitmap=1,items=1000,blocksize=10000,blocks=2000
for-glob = /usr/share/doc/socat/*.html
load-file-in-cache = files %(_)
endfor =
route-run = cache:key=${REQUEST_URI},name=files

This will store all of the HTML files in /usr/share/doc/socat. Items are stored with the file path as the key.

When a non-existent item is requested, the connection is closed and you should get an ugly

-- unavailable modifier requested: 0 --

This is because the internal routing system failed to manage the request and no request plugin is available to handle it.

You can build a better infrastructure using the simple 'notfound' plugin (it will always return a 404):

[uwsgi]
plugins = 0:notfound,router_cache
http-socket = :9090
cache2 = name=files,bitmap=1,items=1000,blocksize=10000,blocks=2000
for-glob = /usr/share/doc/socat/*.html
load-file-in-cache = files %(_)
endfor =
route-run = cache:key=${REQUEST_URI},name=files

You can store gzipped copies of files in the cache too, using --load-file-in-cache-gzip. This option does not allow setting the name of the cache item, so to support clients with and without gzip support we can use two different caches:

[uwsgi]
plugins = 0:notfound,router_cache
http-socket = :9090
cache2 = name=files,bitmap=1,items=1000,blocksize=10000,blocks=2000
cache2 = name=compressedfiles,bitmap=1,items=1000,blocksize=10000,blocks=2000
for-glob = /usr/share/doc/socat/*.html
load-file-in-cache = files %(_)
load-file-in-cache-gzip = compressedfiles %(_)
endfor =
; take the item from the compressed cache
route-if = contains:${HTTP_ACCEPT_ENCODING};gzip cache:key=${REQUEST_URI},name=compressedfiles,content_encoding=gzip
; fall back to the uncompressed one
route-run = cache:key=${REQUEST_URI},name=files
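If you prefer non-matching requests to skip the cache lookup entirely and go straight to the notfound plugin, you can use a regexp-based route instead of route-run; a sketch along the lines of the configs above:

[uwsgi]
plugins = 0:notfound,router_cache
http-socket = :9090
cache2 = name=files,bitmap=1,items=1000,blocksize=10000,blocks=2000
for-glob = /usr/share/doc/socat/*.html
load-file-in-cache = files %(_)
endfor =
; only .html requests are looked up in the 'files' cache,
; everything else falls through to the notfound plugin (404)
route = \.html$ cache:key=${REQUEST_URI},name=files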
4.1.9 Caching for authenticated users

If you authenticate users with HTTP basic auth, you can differentiate caching for each of them using the ${REMOTE_USER} request variable:

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,router_cache
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; check if the user is authenticated
route-if-not = empty:${REMOTE_USER} goto:cacheme
route-run = break:
; the following rules are executed only if REMOTE_USER is defined
route-label = cacheme
route = ^/$ cache:key=myhome_for_${REMOTE_USER},name=mycache
; store each successful request (200 HTTP status code) for '/'
route = ^/$ cachestore:key=myhome_for_${REMOTE_USER},name=mycache

Cookie-based authentication is generally more complex, but the vast majority of the time a session id is passed as a cookie. You may want to use this session id as the key:

[uwsgi]
; load the PHP plugin as the default one
plugins = 0:php,router_cache
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; check if the user is authenticated
route-if-not = empty:${cookie[PHPSESSID]} goto:cacheme
route-run = break:
; the following rules are executed only if the PHPSESSID cookie is defined
route-label = cacheme
route = ^/$ cache:key=myhome_for_${cookie[PHPSESSID]},name=mycache
; store each successful request (200 HTTP status code) for '/'
route = ^/$ cachestore:key=myhome_for_${cookie[PHPSESSID]},name=mycache

Obviously a malicious user could build a fake session id and potentially fill your cache, so you should always validate the session id. There is no single solution, but a good example for file-based PHP sessions is the following one:

[uwsgi]
; load the PHP plugin as the default one
plugins = 0:php,router_cache
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; create a cache with 100 items (default size per-item is 64k)
cache2 = name=mycache,items=100
; check if the user is authenticated
route-if-not = empty:${cookie[PHPSESSID]} goto:cacheme
route-run = break:
; the following rules are executed only if the PHPSESSID cookie is defined
route-label = cacheme
; stop if the session file does not exist
route-if-not = isfile:/var/lib/php5/sessions/sess_${cookie[PHPSESSID]} break:
route = ^/$ cache:key=myhome_for_${cookie[PHPSESSID]},name=mycache
; store each successful request (200 HTTP status code) for '/'
route = ^/$ cachestore:key=myhome_for_${cookie[PHPSESSID]},name=mycache

4.1.10 Caching to files

Sometimes, instead of caching in memory, you want to store static files. The transformation_tofile plugin allows you to store responses in files:

[uwsgi]
; load the PSGI plugin as the default one
plugins = 0:psgi,transformation_tofile,router_static
; load the Dancer app
psgi = myapp.pl
; enable the master process
master = true
; spawn 4 processes
processes = 4
; bind an http socket to port 9090
http-socket = :9090
; log response time with microseconds resolution
log-micros = true
; check if a file exists
route-if = isfile:/var/www/cache/${hex[PATH_INFO]}.html static:/var/www/cache/${hex[PATH_INFO]}.html
; otherwise store the response in it
route-run = tofile:/var/www/cache/${hex[PATH_INFO]}.html

The hex[] routing var takes the content of a request variable and encodes it in hexadecimal. As PATH_INFO tends to contain / characters, this is a better approach than storing full path names (or using another encoding scheme, like base64, that can include slashes too).

4.2 Setting up Django and your web server with uWSGI and nginx

This tutorial is aimed at the Django user who wants to set up a production web server. It takes you through the steps required to set up Django so that it works nicely with uWSGI and nginx. It covers all three components, providing a complete stack of web application and server software.

Django
Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.

nginx (pronounced engine-x) is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.

4.2.1 Some notes about this tutorial

Note: This is a tutorial. It is not intended to provide a reference guide, never mind an exhaustive reference, to the subject of deployment. nginx and uWSGI are good choices for Django deployment, but they are not the only ones, or the 'official' ones. There are excellent alternatives to both, and you are encouraged to investigate them.

The way we deploy Django here is a good way, but it is not the only way; for some purposes it is probably not even the best way. It is, however, a reliable and easy way, and the material covered here will introduce you to concepts and procedures you will need to be familiar with whatever software you use for deploying Django. By providing you with a working setup, and rehearsing the steps you must take to get there, it will offer you a basis for exploring other ways to achieve this.

Note: This tutorial makes some assumptions about the system you are using. It is assumed that you are using a Unix-like system, and that it features an aptitude-like package manager.
However if you need to ask questions like “What’s the equivalent of aptitude on Mac OS X?”, you’ll be able to find that kind of help fairly easily. While this tutorial assumes Django 1.4 or later, which will automatically create a wsgi module in your new project, the instructions will work with earlier versions. You will though need to obtain that Django wsgi module yourself, and you may find that the Django project directory structure is slightly different. 4.2.2 Concept A web server faces the outside world. It can serve files (HTML, images, CSS, etc) directly from the file system. However, it can’t talk directly to Django applications; it needs something that will run the application, feed it requests from web clients (such as browsers) and return responses. A Web Server Gateway Interface - WSGI - does this job. WSGI is a Python standard. uWSGI is a WSGI implementation. In this tutorial we will set up uWSGI so that it creates a Unix socket, and serves responses to the web server via the WSGI protocol. At the end, our complete stack of components will look like this: the web client <-> the web server <-> the socket <-> uwsgi <-> Django 4.2.3 Before you start setting up uWSGI virtualenv Make sure you are in a virtualenv for the software we need to install (we will describe how to install a system-wide uwsgi later): 292 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 virtualenv uwsgi-tutorial cd uwsgi-tutorial source bin/activate Django Install Django into your virtualenv, create a new project, and cd into the project: pip install Django django-admin.py startproject mysite cd mysite About the domain and port In this tutorial we will call your domain example.com. Substitute your own FQDN or IP address. Throughout, we’ll be using port 8000 for the web server to publish on, just like the Django runserver does by default. You can use whatever port you want of course, but I have chosen this one so it doesn’t conflict with anything a web server might be doing already. 4.2.4 Basic uWSGI installation and configuration Install uWSGI into your virtualenv pip install uwsgi Of course there are other ways to install uWSGI, but this one is as good as any. Remember that you will need to have Python development packages installed. In the case of Debian, or Debian-derived systems such as Ubuntu, what you need to have installed is pythonX.Y-dev, where X.Y is your version of Python. Basic test Create a file called test.py: # test.py def application(env, start_response): start_response(’200 OK’,[(’Content-Type’,’text/html’)]) return [b"Hello World"]# python3 #return ["Hello World"] # python2 Note: Take into account that Python 3 requires bytes(). Run uWSGI: uwsgi --http :8000 --wsgi-file test.py The options mean: • http :8000: use protocol http, port 8000 • wsgi-file test.py: load the specified file, test.py This should serve a ‘hello world’ message directly to the browser on port 8000. Visit: 4.2. Setting up Django and your web server with uWSGI and nginx 293 uWSGI Documentation, Release 2.0 http://example.com:8000 to check. If so, it means the following stack of components works: the web client <-> uWSGI <-> Python Test your Django project Now we want uWSGI to do the same thing, but to run a Django site instead of the test.py module. 
If you haven’t already done so, make sure that your mysite project actually works: python manage.py runserver 0.0.0.0:8000 And if it that works, run it using uWSGI: uwsgi --http :8000 --module mysite.wsgi • module mysite.wsgi: load the specified wsgi module Point your browser at the server; if the site appears, it means uWSGI is able to serve your Django application from your virtualenv, and this stack operates correctly: the web client <-> uWSGI <-> Django Now normally we won’t have the browser speaking directly to uWSGI. That’s a job for the webserver, which will act as a go-between. 4.2.5 Basic nginx Install nginx sudo apt-get install nginx sudo /etc/init.d/nginx start # start nginx And now check that the nginx is serving by visiting it in a web browser on port 80 - you should get a message from nginx: “Welcome to nginx!”. That means these components of the full stack are working together: the web client <-> the web server If something else is already serving on port 80 and you want to use nginx there, you’ll have to reconfigure nginx to serve on a different port. For this tutorial though, we’re going to be using port 8000. Configure nginx for your site You will need the uwsgi_params file, which is available in the nginx directory of the uWSGI distribution, or from https://github.com/nginx/nginx/blob/master/conf/uwsgi_params Copy it into your project directory. In a moment we will tell nginx to refer to it. Now create a file called mysite_nginx.conf, and put this in it: # mysite_nginx.conf # the upstream component nginx needs to connect to upstream django{ # server unix:///path/to/your/mysite/mysite.sock; # for a file socket 294 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 server 127.0.0.1:8001; # for a web port socket (we’ll use this first) } # configuration of the server server{ # the port your site will be served on listen 8000; # the domain name it will serve for server_name .example.com; # substitute your machine’s IP address or FQDN charset utf-8; # max upload size client_max_body_size 75M; # adjust to taste # Django media location /media{ alias /path/to/your/mysite/media; # your Django project’s media files - amend as required } location /static{ alias /path/to/your/mysite/static; # your Django project’s static files - amend as required } # Finally, send all non-media requests to the Django server. location /{ uwsgi_pass django; include /path/to/your/mysite/uwsgi_params; # the uwsgi_params file you installed } } This conf file tells nginx to serve up media and static files from the filesystem, as well as handle requests that require Django’s intervention. For a large deployment it is considered good practice to let one server handle static/media files, and another handle Django applications, but for now, this will do just fine. Symlink to this file from /etc/nginx/sites-enabled so nginx can see it: sudo ln -s ~/path/to/your/mysite/mysite_nginx.conf /etc/nginx/sites-enabled/ Deploying static files Before running nginx, you have to collect all Django static files in the static folder. First of all you have to edit mysite/settings.py adding: STATIC_ROOT = os.path.join(BASE_DIR, "static/") and then run python manage.py collectstatic Basic nginx test Restart nginx: sudo /etc/init.d/nginx restart To check that media files are being served correctly, add an image called media.png to the /path/to/your/project/project/media directory, then visit 4.2. 
Setting up Django and your web server with uWSGI and nginx 295 uWSGI Documentation, Release 2.0 http://example.com:8000/media/media.png - if this works, you’ll know at least that nginx is serving files cor- rectly. It is worth not just restarting nginx, but actually stopping and then starting it again, which will inform you if there is a problem, and where it is. 4.2.6 nginx and uWSGI and test.py Let’s get nginx to speak to the “hello world” test.py application. uwsgi --socket :8001 --wsgi-file test.py This is nearly the same as before, except this time one of the options is different: • socket :8001: use protocol uwsgi, port 8001 nginx meanwhile has been configured to communicate with uWSGI on that port, and with the outside world on port 8000. Visit: http://example.com:8000/ to check. And this is our stack: the web client <-> the web server <-> the socket <-> uWSGI <-> Python Meanwhile, you can try to have a look at the uswgi output at http://example.com:8001 - but quite probably, it won’t work because your browser speaks http, not uWSGI, though you should see output from uWSGI in your terminal. 4.2.7 Using Unix sockets instead of ports So far we have used a TCP port socket, because it’s simpler, but in fact it’s better to use Unix sockets than ports - there’s less overhead. Edit mysite_nginx.conf, changing it to match: server unix:///path/to/your/mysite/mysite.sock; # for a file socket # server 127.0.0.1:8001; # for a web port socket (we’ll use this first) and restart nginx. Run uWSGI again: uwsgi --socket mysite.sock --wsgi-file test.py This time the socket option tells uWSGI which file to use. Try http://example.com:8000/ in the browser. If that doesn’t work Check your nginx error log(/var/log/nginx/error.log). If you see something like: connect() to unix:///path/to/your/mysite/mysite.sock failed(13: Permission denied) then probably you need to manage the permissions on the socket so that nginx is allowed to use it. Try: 296 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 uwsgi --socket mysite.sock --wsgi-file test.py --chmod-socket=666 # (very permissive) or: uwsgi --socket mysite.sock --wsgi-file test.py --chmod-socket=664 # (more sensible) You may also have to add your user to nginx’s group (which is probably www-data), or vice-versa, so that nginx can read and write to your socket properly. It’s worth keeping the output of the nginx log running in a terminal window so you can easily refer to it while trou- bleshooting. 4.2.8 Running the Django application with uswgi and nginx Let’s run our Django application: uwsgi --socket mysite.sock --module mysite.wsgi --chmod-socket=664 Now uWSGI and nginx should be serving up not just a “Hello World” module, but your Django project. 4.2.9 Configuring uWSGI to run with a .ini file We can put the same options that we used with uWSGI into a file, and then ask uWSGI to run with that file. It makes it easier to manage configurations. Create a file called ‘mysite_uwsgi.ini‘: # mysite_uwsgi.ini file [uwsgi] # Django-related settings # the base directory (full path) chdir= /path/to/your/project # Django’s wsgi file module= project.wsgi # the virtualenv (full path) home= /path/to/virtualenv # process-related settings # master master= true # maximum number of worker processes processes= 10 # the socket (use the full path to be safe socket= /path/to/your/project/mysite.sock # ... 
with appropriate permissions - may be needed # chmod-socket = 664 # clear environment on exit vacuum= true And run uswgi using this file: uwsgi --ini mysite_uwsgi.ini # the --ini option is used to specify a file Once again, test that the Django site works as expected. 4.2. Setting up Django and your web server with uWSGI and nginx 297 uWSGI Documentation, Release 2.0 4.2.10 Install uWSGI system-wide So far, uWSGI is only installed in our virtualenv; we’ll need it installed system-wide for deployment purposes. Deactivate your virtualenv: deactivate and install uWSGI system-wide: sudo pip install uwsgi # Or install LTS (long term support). pip install http://projects.unbit.it/downloads/uwsgi-lts.tar.gz The uWSGI wiki describes several installation procedures. Before installing uWSGI system-wide, it’s worth consid- ering which version to choose and the most apppropriate way of installing it. Check again that you can still run uWSGI just like you did before: uwsgi --ini mysite_uwsgi.ini # the --ini option is used to specify a file 4.2.11 Emperor mode uWSGI can run in ‘emperor’ mode. In this mode it keeps an eye on a directory of uWSGI config files, and will spawn instances (‘vassals’) for each one it finds. Whenever a config file is amended, the emperor will automatically restart the vassal. # create a directory for the vassals sudo mkdir /etc/uwsgi sudo mkdir /etc/uwsgi/vassals # symlink from the default config directory to your config file sudo ln -s /path/to/your/mysite/mysite_uwsgi.ini /etc/uwsgi/vassals/ # run the emperor uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data You may need to run uWSGI with sudo: sudo uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data The options mean: • emperor: where to look for vassals (config files) • uid: the user id of the process once it’s started • gid: the group id of the process once it’s started Check the site; it should be running. 4.2.12 Make uWSGI startup when the system boots The last step is to make it all happen automatically at system startup time. Edit /etc/rc.local and add: /usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data 298 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 before the line “exit 0”. And that should be it! 4.2.13 Further configuration It is important to understand that this has been a tutorial, to get you started. You do need to read the nginx and uWSGI documentation, and study the options available before deployment in a production environment. Both nginx and uWSGI benefit from friendly communities, who are able to offer invaluable advice about configuration and usage. nginx General configuration of nginx is not within the scope of this tutorial though you’ll probably want it to listen on port 80, not 8000, for a production website. You also ought to consider at having a separate server for non-Django serving, of static files for example. uWSGI uWSGI supports multiple ways to configure it. See uWSGI’s documentation and examples. 
Some uWSGI options have been mentioned in this tutorial; others you ought to look at for a deployment in production include (listed here with example settings): env= DJANGO_SETTINGS_MODULE=mysite.settings # set an environment variable pidfile= /tmp/project-master.pid # create a pidfile harakiri= 20 # respawn processes taking more than 20 seconds limit-as= 128 # limit the project to 128 MB max-requests= 5000 # respawn processes after serving 5000 requests daemonize= /var/log/uwsgi/yourproject.log # background the process & log 4.3 Running uWSGI on Dreamhost shared hosting Note: the following tutorial gives suggestions on how to name files with the objective of hosting multiple applications on your account. You are obviously free to change naming schemes. The tutorial assumes a shared hosting account, but it works on the VPS offer too (even if on such a system you have lot more freedom and you could use better techniques to accomplish the result) 4.3.1 Preparing the environment Log in via ssh to your account and move to the home (well, you should be already there after login). Download a uWSGI tarball (anything >= 1.4 is good, but for maximum performance use >= 1.9), explode it and build it normally (run make). At the end of the procedure copy the resulting uwsgi binary to your home (just to avoid writing longer paths later). Now move to the document root of your domain (it should be named like the domain) and put a file named uwsgi.fcgi in it with that content: 4.3. Running uWSGI on Dreamhost shared hosting 299 uWSGI Documentation, Release 2.0 #!/bin/sh /home/XXX/uwsgi /home/XXX/YYY.ini change XXX with your account name and YYY with your domain name (it is only a convention, if you know what you are doing feel free to change it) Give the file ‘execute’ permission chmod +x uwsgi.fcgi Now in your home create a YYY.ini (remember to change YYY with your domain name) with that content [uwsgi] flock= /home/XXX/YYY.ini account= XXX domain= YYY protocol= fastcgi master= true processes=3 logto= /home/%(account)/%(domain).uwsgi.log virtualenv= /home/%(account)/venv module= werkzeug.testapp:test_app touch-reload= %p auto-procname= true procname-prefix-spaced= [%(domain)] change the first three lines accordingly. 4.3.2 Preparing the python virtualenv As we want to run the werkzeug test app, we need to install its package in a virtualenv. Move to the home: virtualenv venv venv/bin/easy_install werkzeug 4.3.3 The .htaccess Move again to the document root to create the .htaccess file that will instruct Apache to forward request to uWSGI RewriteEngine On RewriteBase / RewriteRule ^uwsgi.fcgi/ -[L] RewriteRule ^(.*)$ uwsgi.fcgi/$1[L] 4.3.4 Ready Go to your domain and you should see the Werkzeug test page. If it does not show you can check uWSGI logs in the file you specified with the logto option. 300 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 4.3.5 The flock trick As the apache mod_fcgi/mod_fastcgi/mod_fcgid implemenetations are very flaky on process management, you can easily end with lot of copies of the same process running. The flock trick avoid that. Just remember that the flock option is very special as you cannot use placeholder or other advanced techniques with it. You can only specify the absolute path of the file to lock. 
4.3.6 Statistics As always remember to use uWSGI internal stats system first, install uwsgitop venv/bin/easy_install uwsgitop Enable the stats server on the uWSGI config [uwsgi] flock= /home/XXX/YYY.ini account= XXX domain= YYY protocol= fastcgi master= true processes=3 logto= /home/%(account)/%(domain).uwsgi.log virtualenv= /home/%(account)/venv module= werkzeug.testapp:test_app touch-reload= %p auto-procname= true procname-prefix-spaced= [%(domain)] stats= /home/%(account)/stats_%(domain).sock (as we have touch-reload in place, as soon as you update the ini file your instance is reloaded, and you will be able to suddenly use uwsgitop) venv/bin/uwsgitop /home/WWW/stats_YYY.sock (remember to change XXX and YYY accordingly) 4.3.7 Running Perl/PSGI apps (requires uWSGI >= 1.9) Older uWSGI versions does not work well with plugins other than the python one, as the fastcgi implementation has lot of limits. Starting from 1.9, fastCGI is a first-class citizen in the uWSGI project, so all of the plugins work with it. As before, compile the uWSGI sources but this time we will build a PSGI monolithic binary: UWSGI_PROFILE=psgi make copy the resulting binary in the home as uwsgi_perl Now edit the previously created uwsgi.fcgi file changing it to 4.3. Running uWSGI on Dreamhost shared hosting 301 uWSGI Documentation, Release 2.0 #!/bin/sh /home/XXX/uwsgi_perl /home/XXX/YYY.ini (again, change XXX and YYY accordingly) Now upload an app.psgi file in the document root (this is your app) my $app= sub { my $env= shift; return [ ’200’, [ ’Content-Type’=> ’text/plain’], [ "Hello World"] ]; }; and change the uWSGI ini file accordingly [uwsgi] flock= /home/XXX/YYY.ini account= XXX domain= YYY psgi= /home/%(account)/%(domain)/app.psgi fastcgi-modifier1=5 protocol= fastcgi master= true processes=3 logto= /home/%(account)/%(domain).uwsgi.log virtualenv= /home/%(account)/venv touch-reload= %p auto-procname= true procname-prefix-spaced= [%(domain)] stats= /home/%(account)/stats_%(domain).sock The only difference from the python one, is the usage of ‘psgi’ instead of ‘module’ and the addition of fastcgi-modifier1 that set the uWSGI modifier to the perl/psgi one 4.3.8 Running Ruby/Rack apps (requires uWSGI >= 1.9) By default you can use passenger on Dreamhost servers to host ruby/rack applications, but you may need a more advanced application servers for your work (or you may need simply more control over the deployment process) As the PSGI one you need a uWSGI version >= 1.9 to get better (and faster) fastcgi support Build a new uWSGI binary with rack support UWSGI_PROFILE=rack make and copy it in the home as ‘’uwsgi_ruby’‘ Edit (again) the uwsgi.fcgi file changing it to #!/bin/sh /home/XXX/uwsgi_rack /home/XXX/YYY.ini and create a Rack application in the document root (call it app.ru) 302 Chapter 4. 
class RackFoo
  def call(env)
    [200, { 'Content-Type' => 'text/plain' }, ['ciao']]
  end
end
run RackFoo.new

Finally change the uWSGI .ini file for a Rack app:

[uwsgi]
flock = /home/XXX/YYY.ini
account = XXX
domain = YYY
rack = /home/%(account)/%(domain)/app.ru
fastcgi-modifier1 = 7
protocol = fastcgi
master = true
processes = 3
logto = /home/%(account)/%(domain).uwsgi.log
virtualenv = /home/%(account)/venv
touch-reload = %p
auto-procname = true
procname-prefix-spaced = [%(domain)]
stats = /home/%(account)/stats_%(domain).sock

The only differences from the PSGI one are the use of 'rack' instead of 'psgi', and the modifier1 mapped to 7 (the ruby/rack one).

4.3.9 Serving static files

It is unlikely you will need to serve static files with uWSGI on a Dreamhost account: you can use Apache directly for that (if you do, remember to change the .htaccess file accordingly).

4.4 Running python webapps on Heroku with uWSGI

Prerequisites: a Heroku account (on the cedar platform), git (on the local system) and the heroku toolbelt.

Note: you need a uWSGI version >= 1.4.6 to correctly run python apps. Older versions may work, but are not supported.

4.4.1 Preparing the environment

On your local system prepare a directory for your project:

mkdir uwsgi-heroku
cd uwsgi-heroku
git init .
heroku create

The last command will create a new heroku application (you can check it on the web dashboard).

For our example we will run the Werkzeug WSGI testapp, so we need to install the werkzeug package in addition to uWSGI. The first step is creating a requirements.txt file and tracking it with git. The content of the file will simply be:

uwsgi
werkzeug

Let's track it with git:

git add requirements.txt

4.4.2 Creating the uWSGI config file

Now we can create our uWSGI configuration file. Basically all of the uWSGI features can be used on Heroku:

[uwsgi]
http-socket = :$(PORT)
master = true
processes = 4
die-on-term = true
module = werkzeug.testapp:test_app
memory-report = true

As you can see, this is a pretty standard configuration. The only Heroku-required options are --http-socket and --die-on-term.

The first is required to bind the uWSGI socket to the port requested by the Heroku system (exported via the environment variable PORT, which we can access with $(PORT)).

The second one (--die-on-term) is required to change the default behaviour of uWSGI when it receives a SIGTERM (a brutal reload, while Heroku expects a shutdown).

The memory-report option (as we are in a memory-constrained environment) is a good thing.

Remember to track the file:

git add uwsgi.ini

4.4.3 Preparing for the first commit/push

We now need the last step: creating the Procfile. The Procfile is a file describing which commands to start. Generally (with other deployment systems) you would use it for every additional process required by your app (like memcached, redis, celery...), but under uWSGI you can continue using its advanced facilities to manage them.

So the Procfile only needs to start your uWSGI instance:

web: uwsgi uwsgi.ini

Track it:

git add Procfile

And finally let's commit it all:

git commit -a -m "first commit"

and push it (read: deploy) to Heroku:

git push heroku master

The first push will require a couple of minutes, as it needs to prepare your virtualenv and compile uWSGI. The following pushes will be much faster.

4.4.4 Checking your app

Running heroku logs you will be able to access the uWSGI logs. You should find all of your familiar information there, and eventually some hints in case of problems.

4.4.5 Using another version of python

Heroku supports different python versions. By default (currently, February 2013), Python 2.7.3 is enabled. If you need another version, just create a runtime.txt in your repository with a string like:

python-2.7.2

to use Python 2.7.2.

Remember to add/commit that to the repository. Every time you change the python version, a new uWSGI binary is built.

4.4.6 Multiprocess or Multithread?

It obviously depends on your app. But as we are in a memory-limited environment, you can expect better memory usage with threads.

In addition to this, if you plan to put production apps on Heroku, be sure to understand how dynos and their proxy work (it is very important, really).
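A minimal variation of the earlier config showing the threaded setup (the 2x4 split is only an example; the right numbers are app-dependent):

[uwsgi]
http-socket = :$(PORT)
master = true
die-on-term = true
module = werkzeug.testapp:test_app
memory-report = true
; fewer processes with several threads each is usually a better fit
; for a memory-constrained dyno than many full processes
processes = 2
threads = 4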
4.4.7 Async/Greenthreads/Coroutines?

As always, do not trust people suggesting that you should ALWAYS use some kind of async mode (like gevent). If your app is async-friendly you can obviously use gevent (it is built in by default in recent uWSGI releases), but if you are not sure, stay with multiprocess (or multithread).

4.4.8 Harakiri

As said previously, if you plan to put production apps on Heroku, be sure to understand how dynos and their proxy work. Based on that, try to always set the harakiri parameter to a good value for your app (do not ask for a default value, IT IS APP-DEPENDENT).

4.4.9 Static files

Generally, serving static files on Heroku is not a good idea (mainly from a design point of view). You could obviously have that need. In such a case remember to use the uWSGI facilities for that; in particular, offloading is the best way to leave your workers free while you serve big files (in addition to this, remember that your static files must be tracked with git).

4.4.10 Adaptive process spawning

None of the supported algorithms are good for the Heroku approach and, very probably, it makes little sense to use a dynamic process number on such a platform.

4.4.11 Logging

If you plan to use Heroku in production, remember to send your logs (via UDP, for example) to an external server (with persistent storage). Check the available uWSGI loggers: surely one will fit your needs. (Pay attention to security, as logs will travel in clear text.)

UPDATE: a UDP logger with crypto features is in the works.

4.4.12 Alarms

All of the alarm plugins should work without problems.

4.4.13 The Spooler

As your app runs on a non-persistent filesystem, using the Spooler is a bad idea (you will easily lose tasks).

4.4.14 Mules

They can be used without problems.

4.4.15 Signals (timers, filemonitors, crons...)

They all work, but do not rely on cron facilities, as Heroku can kill/destroy/restart your instances at any moment.

4.4.16 External daemons

The --attach-daemon option and its --smart variants work without problems. Just remember that you are on a volatile filesystem and you are not free to bind on the ports/addresses you may wish.
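For example, a background worker the app relies on can be supervised by the uWSGI master itself; a sketch (the celery command line is purely hypothetical, adapt it to whatever daemon you actually need):

[uwsgi]
; ... the rest of your config ...
; the master starts this command and respawns it if it dies
attach-daemon = celery -A mysite worker --loglevel=info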
4.4.17 Monitoring your app (advanced/hacky)

Albeit Heroku works really well with the newrelic services, you always need to monitor the internals of your uWSGI instance. Generally you enable the stats subsystem with a tool like uwsgitop as the client.

You can simply add uwsgitop to your requirements.txt:

uwsgi
uwsgitop
werkzeug

and enable the stats server on a TCP port (unix sockets will not work, as the instance running uwsgitop is not on the same server!):

[uwsgi]
http-socket = :$(PORT)
master = true
processes = 4
die-on-term = true
module = werkzeug.testapp:test_app
memory-report = true
stats = :22222

Now we have a problem: how do we reach our instance? We need to know the LAN address of the machine where our instance is physically running. To accomplish that, a raw trick is running ifconfig on uWSGI startup:

[uwsgi]
http-socket = :$(PORT)
master = true
processes = 4
die-on-term = true
module = werkzeug.testapp:test_app
memory-report = true
stats = :22222
exec-pre-app = /sbin/ifconfig eth0

Now, thanks to the heroku logs command, you can find out where your stats server is:

heroku run uwsgitop 10.x.x.x:22222

Change x.x.x to the discovered address, and remember that you may not be able to bind on port 22222, so change it accordingly.

Is it worth making such a mess just to get monitoring? If you are testing your app before going to production, it could be a good idea; but if you plan to buy more dynos, everything becomes so complex that you are probably better off with some Heroku-blessed technique (if any).

4.5 Running Ruby/Rack webapps on Heroku with uWSGI

Prerequisites: a Heroku account (on the cedar platform), git (on the local system) and the heroku toolbelt (or the old/deprecated heroku gem).

Note: you need a uWSGI version >= 1.4.8 to correctly run ruby/rack apps. Older versions may work, but are not supported.

4.5.1 Preparing the environment (a Sinatra application)

On your local system prepare the structure for your Sinatra application:

mkdir uwsgi-heroku
cd uwsgi-heroku
git init .
heroku create --stack cedar

The last command will create a new heroku application (you can check it on the web dashboard).

The next step is creating our Gemfile (this file contains the gems required by the application):

source 'https://rubygems.org'
gem "uwsgi"
gem "sinatra"

We now need to run bundle install to create the Gemfile.lock file. Let's track the two with git:

git add Gemfile
git add Gemfile.lock

Finally create a config.ru file containing the Sinatra sample app:

require 'sinatra'

get '/hi' do
  return "ciao"
end

run Sinatra::Application

and track it:

git add config.ru

4.5.2 Creating the uWSGI config file

We are now ready to create the uWSGI configuration (we will use the .ini format in a file called uwsgi.ini). The minimal setup for Heroku is the following (check the comments in the file for an explanation):

[uwsgi]
; bind to the heroku required port
http-socket = :$(PORT)
; force the usage of the ruby/rack plugin for every request (7 is the official number for ruby/rack)
http-socket-modifier1 = 7
; load the bundler subsystem
rbrequire = bundler/setup
; load the application
rack = config.ru
; when the app receives the TERM signal, destroy it (instead of brutally reloading)
die-on-term = true

but a better setup would be
[uwsgi]
; bind to the heroku required port
http-socket = :$(PORT)
; force the usage of the ruby/rack plugin for every request (7 is the official number for ruby/rack)
http-socket-modifier1 = 7
; load the bundler subsystem
rbrequire = bundler/setup
; load the application
rack = config.ru
; when the app receives the TERM signal, destroy it (instead of brutally reloading)
die-on-term = true
; enable the master process
master = true
; spawn 4 processes to increase concurrency
processes = 4
; report memory usage after each request
memory-report = true
; reload a worker if its rss memory is higher than 100M
reload-on-rss = 100

Let's track it:

git add uwsgi.ini

4.5.3 Deploying to Heroku

We need to create the last file (required by Heroku): the Procfile, which instructs the Heroku system on which process to start for a web application. We want to spawn uwsgi (installed as a gem via bundler) using the uwsgi.ini config file:

web: bundle exec uwsgi uwsgi.ini

Track it:

git add Procfile

And let's commit it all:

git commit -a -m "first attempt"

And push to Heroku:

git push heroku master

If all goes well, you will see your page under your app URL at the /hi path. Remember to run heroku logs to check that everything is OK.

4.5.4 fork() for dummies

uWSGI allows you to choose how to abuse the fork() syscall in your app. By default the approach is loading the application in the master process and then fork()-ing the workers, which inherit a copy of the master process. This approach speeds up startup and can potentially consume less memory. The truth is that often (because of the way ruby garbage collection works) you will gain little memory. The real advantage is in performance, as the vast majority of the time spent during application startup goes into (slowly) searching for files; with the early-fork() approach you avoid repeating that slow procedure once per worker.

Obviously the uWSGI mantra is "do whatever you need, and if you can't, it is a uWSGI bug", so if your app is not fork()-friendly you can add the lazy-apps = true option, which will load your app once per worker.

4.5.5 The ruby GC

By default uWSGI calls the ruby garbage collector after each request. This ensures an optimal use of memory (remember that on Heroku your memory is limited). You should not touch the default approach, but if you experience a drop in performance you may want to tune it using the ruby-gc-freq = n option, where n is the number of requests after which the GC is called.

4.5.6 Concurrency

Albeit uWSGI supports lots of different paradigms for concurrency, the multiprocess one is suggested for the vast majority of ruby/rack apps. Basically all popular ruby frameworks rely on it. Remember that your memory is limited, so spawn a number of processes that can fit in your Heroku dyno.

Starting from uWSGI 1.9.14, native ruby 1.9/2.x threads support has been added. Rails4 (only in production mode !!!) supports them:

[uwsgi]
...
; spawn 8 threads per-process
threads = 8
; map them as ruby threads
rbthreads = true
; do not forget to set production mode for rails4 apps !!!
env = RAILS_ENV=production
...

4.5.7 Harakiri

If you plan to put production apps on Heroku, be sure to understand how dynos and their proxy work. Based on that, try to always set the harakiri parameter to a good value for your app.
(do not ask for a default value, IT IS APP-DEPENDENT) Harakiri, is the maximum time a single request can run, before being destroyed by the master 4.5.8 Static files Generally, serving static files on Heroku is not a good idea (mainly from a design point of view). You could obviously have that need. In such a case remember to use uWSGI facilities for that, in particular offloading is the best way to leave your workers free while you serve big files (in addition to this remember that your static files must be tracked with git) Try to avoid serving static files from your ruby/rack code. It will be extremely slow (compared to the uWSGI facilities) and can hold your worker busy for the whole transfer of the file 310 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 4.5.9 Adaptive process spawning None of the supported algorithms are good for the Heroku approach and, very probably, it makes little sense to use a dynamic process number on such a platform. 4.5.10 Logging If you plan to use heroku on production, remember to send your logs (via udp for example) on an external server (with persistent storage). Check the uWSGI available loggers. Surely one will fit your need. (pay attention to security, as logs will fly in clear). UPDATE: a udp logger with crypto features is on work. 4.5.11 Alarms All of the alarms plugin should work without problems 4.5.12 The Spooler As your app runs on a non-persistent filesystem, using the Spooler is a bad idea (you will easily lose tasks). 4.5.13 Mules They can be used without problems 4.5.14 Signals (timers, filemonitors, crons...) They all works, but do not rely on cron facilities, as heroku can kill/destroy/restarts your instances in every moment. 4.5.15 External daemons The –attach-daemon option and its –smart variants work without problems. Just remember you are on a volatile filesystem and you are not free to bind on port/addresses as you may wish 4.6 Reliably use FUSE filesystems for uWSGI vassals (with Linux) Requirements: uWSGI 1.9.18, Linux kernel with FUSE and namespaces support. FUSE is a technology allowing the implementation of filesystems in user space (hence the name: Filesystem in Userspace). There are hundreds of high-quality FUSE filesystems, so having your application relying on them is a common situation. FUSE filesystems are normal system processes, so as any process in the system, they can crash (or you may involuntar- ily kill them). In addition to this, if you host multiple applications, each one requiring a FUSE mount point, you may want to avoid polluting the main mount points namespace and, more important, avoid having unused mount points in your system (i.e. an instance is completely removed and you do not want its FUSE mount point to be still available in the system). The purpose of this tutorial is to configure an Emperor and a series of vassals, each one mounting a FUSE filesystem. 4.6. Reliably use FUSE filesystems for uWSGI vassals (with Linux) 311 uWSGI Documentation, Release 2.0 4.6.1 A Zip filesystem fuse-zip is a FUSE process exposing a zip file as a filesystem. Our objective is to store whole app in a zip archive and instruct uWSGI to mount it as a filesystem (via FUSE) under /app. The Emperor [uwsgi] emperor= /etc/uwsgi/vassals emperor-use-clone= fs,pid The trick here is to use Linux namespaces to create vassals in a new pid and filesystem namespace. 
The first one (fs) allows mount point created by the vassal to be available only to the vassal (without messing with the main system), while the pid allows the uWSGI master to be the “init” process (pid 1) of the vassal. Being “pid 1” means that when you die all your children die too. In our scenario (where our vassal launches a FUSE process on startup) it means that when the vassal is destroyed, the FUSE process is destroyed too, as well as its mount point. A Vassal [uwsgi] uid= user001 gid= user001 ; mount FUSE filesystem under /app (but only if it is not a reload) if-not-reload= exec-as-user = fuse-zip -r /var/www/app001.zip /app endif= http-socket= :9090 psgi= /app/myapp.pl Here we use the -r option of the fuse-zip command for a read-only mount. Monitoring mount points The problem with the current setup is that if the fuse-zip process dies, the instance will no more be able to access /app until it is respawned. uWSGI 1.9.18 added the --mountpoint-check option. It forces the master to constantly verify the specified filesystem. If it fails, the whole instance will be brutally destroyed. As we are under The Emperor, soon after the vassal is destroyed it will be restarted in a clean state (allowing the FUSE mount point to be started again). [uwsgi] uid= user001 gid= user001 ; mount FUSE filesystem under /app (but only if it is not a reload) if-not-reload= exec-as-user = fuse-zip -r /var/www/app001.zip /app endif= http-socket= :9090 312 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 psgi= /app/myapp.pl mountpoint-check= /app 4.6.2 Going Heavy Metal: A CoW rootfs (unionfs-fuse) unionfs-fuse is a user-space implementation of a union filesystem. A union filesystem is a stack of multiple filesystems, so directories with same name are merged into a single view. Union filesystems are more than this and one of the most useful features is copy-on-write (COW or CoW). Enabling CoWs means you will have an immutable/read-only mount point base and all of the modifications to it will go to another mount point. Our objective is to have a read-only rootfs shared by all of our customers, and a writable mount point (configured as CoW) for each customer, in which every modification will be stored. The Emperor Previous Emperor configuration can be used, but we need to prepare our filesystems. The layout will be: /ufs (where we initially mount our unionfs for each vassal) /ns /ns/precise (the shared rootfs, based on Ubuntu Precise Pangolin) /ns/lucid (an alternative rootfs for old-fashioned customers, based on Ubuntu Lucid Lynx) /ns/saucy (another shared rootfs, based on Ubuntu Saucy Salamander) /ns/cow (the customers’ writable areas) /ns/cow/user001 /ns/cow/user002 /ns/cow/userXXX ... We create our rootfs: debootstrap precise /ns/precise debootstrap lucid /ns/lucid debootstrap saucy /ns/saucy And we create the .old_root directory in each one (it is required for pivot_root, see below): mkdir /ns/precise/.old_root mkdir /ns/lucid/.old_root mkdir /ns/saucy/.old_root Be sure to install the required libraries into each of them (especially the libraries required for your language). The uwsgi binary must be executable in this rootfs, so you have to invest a bit of time in it (a good approach is having a language plugin compiled for each distribution and placed into a common directory, for example, each rootfs could have an /opt/uwsgi/plugins/psgi_plugin.so file and so on). A Vassal Here things get a bit more complicated. 
We need to launch the unionfs process (as root, as it must become our new rootfs) and then call pivot_root (a more advanced chroot available on Linux). Hooks are the best way to run custom commands (or functions) at various uWSGI startup phases. In our example we will run FUSE processes at the "pre-jail" phase, and deal with mount points at the "as-root" phase (which happens after pivot_root). [uwsgi] ; choose the approach that suits you best (plugins loading) ; this will be used for the first run ... plugins-dir= /ns/precise/opt/uwsgi/plugins ; and this after a reload (where our rootfs is already /ns/precise) plugins-dir= /opt/uwsgi/plugins plugin= psgi ; drop privileges uid= user001 gid= user001 ; chdir to / to avoid problems after pivot_root hook-pre-jail= callret:chdir / ; run unionfs-fuse using chroot (it is required to avoid deadlocks) and cow (we mount it under /ufs) hook-pre-jail= exec:unionfs-fuse -ocow,chroot=/ns,default_permissions,allow_other /precise=RO:/cow/%(uid)=RW /ufs ; change the rootfs to the unionfs one ; the .old_root directory is where the old rootfs is still available pivot_root= /ufs /ufs/.old_root ; now we are in the new rootfs and in the 'as-root' phase ; remount the /proc filesystem hook-as-root= mount:proc none /proc ; bind mount the original /dev in the new rootfs (simplifies things a lot) hook-as-root= mount:none /.old_root/dev /dev bind ; recursively un-mount the old rootfs hook-as-root= umount:/.old_root rec,detach ; common bind http-socket= :9090 ; load the app (fix it according to your requirements) psgi= /var/www/myapp.pl ; constantly check the rootfs (seems odd but it is very useful) mountpoint-check=/ If your app tries to write to its filesystem, you will see that all of the created/updated files are available in its /cow directory. 4.6.3 Notes Some FUSE filesystems do not commit writes until they are unmounted. In such a case, unmounting on vassal shutdown is a good trick: [uwsgi] ; vassal options ... ... ; umount on exit exec-as-user-atexit = fusermount -u /app 4.7 Build a dynamic proxy using RPC and internal routing Work in progress (requires uWSGI 1.9.14, we use PyPy as the engine) 4.7.1 step 1: build your mapping function We use the hostname as the mapping key (you can use whatever you need): import uwsgi def my_mapper(hostname): return "127.0.0.1:3031" uwsgi.register_rpc('the_mapper', my_mapper) Save it as myfuncs.py 4.7.2 step 2: building a routing table [uwsgi] ; enable the pypy engine pypy-home= /opt/pypy ; execute the myfuncs.py file (the 'the_mapper' rpc function will be registered) pypy-exec= myfuncs.py ; bind to a port http-socket= :9090 ; let's define our routing table ; at every request (route-run executes the action without making checks, use it instead of --route .*) run the_mapper passing HTTP_HOST as argument ; and place the result in the MYNODE variable route-run= rpcvar:MYNODE the_mapper ${HTTP_HOST} ; print the MYNODE variable (just for fun) route-run= log:${MYNODE} ; proxy the request to the chosen backend node route-run= http:${MYNODE} ; enable offloading for automagic non-blocking behaviour ; a good value for offloading is the number of cpu cores offload-threads=2 4.8 Setting up Graphite on Ubuntu using the Metrics subsystem This tutorial will guide you through installing a multi-app server, with each application sending metrics to a central graphite/carbon server.
Graphite is available here: http://graphite.wikidot.com/ The uWSGI Metrics subsystem is documented here The Metrics subsystem The tutorial assumes an Ubuntu Saucy (13.10) release on amd64 While for Graphite we will use Ubuntu official packages, uWSGI core and plugins will be downloaded and installed from official sources 4.7. Build a dynamic proxy using RPC and internal routing 315 uWSGI Documentation, Release 2.0 4.8.1 Installing Graphite and the others needed packages sudo apt-get install python-dev ruby-dev bundler build-essential libpcre3-dev graphite-carbon graphite-web python-dev and ruby-dev are required as we want to support both WSGI and Rack apps. pcre development headers allow you to build uWSGI with internal routing support (something you always want) 4.8.2 Initializing Graphite The first step will be enabling th Carbon server. The Graphite project is composed by three subsystems: whisper, carbon and the web frontend Whisper is a data storage format (similar to rrdtool) Carbon is the server gathering metrics and storing them in whisper files (well it does more, but this is its main purpose) The web frontend visualize the charts/graphs built from the data gathered by the carbon server. To enable the carbon server edit /etc/default/graphite-carbon and set CARBON_CACHE_ENABLED to true Before starting the carbon server we need to build its search index. Just run: sudo /usr/bin/graphite-build-search-index Then start the carbon server (at the next reboot it will be automatically started) sudo /etc/init.d/carbon-cache start 4.8.3 Building and Installing uWSGI Download latest stable uWSGI tarball wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz explode it, and from the created directory run: python uwsgiconfig.py --build core this will build the uWSGI “core” binary. We now want to build the python, rack and carbon plugins: python uwsgiconfig.py --plugin plugins/python core python uwsgiconfig.py --plugin plugins/rack core python uwsgiconfig.py --plugin plugins/carbon core now we have uwsgi, python_plugin.so, rack_plugin.so and carbon_plugin.so let’s copy it to system directories: sudo mkdir /etc/uwsgi sudo mkdir /usr/lib/uwsgi sudo cp uwsgi /usr/bin/uwsgi sudo cp python_plugin.so /usr/lib/uwsgi sudo cp rack_plugin.so /usr/lib/uwsgi sudo cp carbon_plugin.so /usr/lib/uwsgi 316 Chapter 4. Tutorials uWSGI Documentation, Release 2.0 4.8.4 Setting up the uWSGI Emperor Create an upstart config file for starting The uWSGI Emperor – multi-app deployment # Emperor uWSGI script description "uWSGI Emperor" start on runlevel[2345] stop on runlevel[06] exec /usr/bin/uwsgi --emperor /etc/uwsgi save it as /etc/init/emperor.conf and start the Emperor: start emperor From now on, to start uWSGI instances just drop their config files into /etc/uwsgi 4.8.5 Spawning the Graphite web interface Before starting the graphite web interface (that is a Django app) we need to initialize its database. Just run: sudo graphite-manage syncdb this is the standard django syncdb command for manage.py. Just answer the questions to create an admin user. Now we are ready to create a uWSGI vassal: [uwsgi] plugins-dir= /usr/lib/uwsgi plugins= python uid= _graphite gid= _graphite wsgi-file= /usr/share/graphite-web/graphite.wsgi http-socket= :8080 Save it as /etc/uwsgi/graphite.ini the _graphite user (and group) is created by the graphite ubuntu package. Our uWSGI vassal will run under this privileges. The web interface will be available on the port 8080 of your server natively speaking HTTP. 
If you prefer to proxy it, just change http-socket to http, or place it behind a full webserver like nginx (this step is not covered in this tutorial). 4.8.6 Spawning vassals sending metrics to Graphite We are now ready to send application metrics to the carbon/graphite server. For every vassal file in /etc/uwsgi just be sure to add the following options: [uwsgi] ... plugins = carbon enable-metrics = true carbon-use-metrics = true carbon-id = %n carbon = 127.0.0.1:2003 ... The carbon-id option sets a meaningful prefix for each metric (%n automatically translates to the name, without extension, of the vassal file). The carbon option sets the address of the carbon server to send metrics to (by default the carbon server binds on port 2003, but you can change it by editing /etc/carbon/carbon.conf and restarting the carbon server). 4.8.7 Using Graphiti (Ruby/Sinatra based) as alternative frontend Graphiti is an alternative dashboard/frontend for Graphite written in Sinatra (a Ruby/Rack framework). Graphiti requires redis, so be sure a redis server is running on your system. Running: sudo apt-get install redis-server will be enough. The first step is cloning the graphiti app (place it where you want/need): git clone https://github.com/paperlesspost/graphiti.git then run the bundler tool (if you are not confident with the ruby world, it is a tool for managing dependencies): bundle install Note: if the eventmachine gem installation fails, add gem 'eventmachine' to the Gemfile as the first gem and run bundle update. This will ensure the latest eventmachine version is installed. After bundle has installed all of the gems, you have to copy the graphiti example configuration: cp config/settings.yml.example config/settings.yml edit it and set graphite_base_url to the url where the graphite web interface (the django one) is running. Now we can deploy it on uWSGI: [uwsgi] plugins-dir= /usr/lib/uwsgi plugins= rack chdir= rack= config.ru rbrequire= bundler/setup http-socket= :9191 uid= _graphite gid= _graphite Save it as /etc/uwsgi/graphiti.ini to let the Emperor deploy it. You can now connect to port 9191 to manage your gathered metrics. As always, you are free to place the instance under a proxy. 4.8.8 Notes By default the carbon server listens on a public address. Unless you know what you are doing, you should point it to a local one (like 127.0.0.1). uWSGI exports a gazillion metrics (and more are planned), do not be afraid to use them. There is no security between apps and the carbon server, any app can write metrics to it. If you are hosting untrusted apps you'd better use other approaches (like giving a graphite instance to every user in the system). The same is true for redis: if you run untrusted apps, a shared redis instance is absolutely not a good choice from a security point of view. CHAPTER 5 Articles 5.1 Serializing accept(), AKA Thundering Herd, AKA the Zeeg Problem One of the historical problems in the UNIX world is the "thundering herd". What is it? Take a process binding to a networking address (it could be AF_INET, AF_UNIX or whatever you want) and then forking itself: int s= socket(...) bind(s, ...) listen(s, ...)
fork() After having forked itself a bunch of times, each process will generally start blocking on accept() for(;;) { int client= accept(...); if (client<0) continue; ... } The funny problem is that on older/classic UNIX, accept() is woken up in each process blocked on it whenever a connection is attempted on the socket. Only one of those processes will be able to truly accept the connection, the others will get a boring EAGAIN. This results in a vast number of wasted cpu cycles (the kernel scheduler has to give control to all of the sleeping processes waiting on that socket). This behaviour (for various reasons) is amplified when instead of processes you use threads (so, you have multiple threads blocked on accept()). The de facto solution was placing a lock before the accept() call to serialize its usage: for(;;) { lock(); int client= accept(...); unlock(); if (client<0) continue; ... } 321 uWSGI Documentation, Release 2.0 For threads, dealing with locks is generally easier but for processes you have to fight with system-specific solutions or fall back to the venerable SysV ipc subsystem (more on this later). In modern times, the vast majority of UNIX systems have evolved, and now the kernel ensures (more or less) only one process/thread is woken up on a connection event. Ok, problem solved, what we are talking about? 5.1.1 select()/poll()/kqueue()/epoll()/... In the pre-1.0 era, uWSGI was a lot simpler (and less interesting) than the current form. It did not have the sig- nal framework and it was not able to listen to multiple addresses; for this reason its loop engine was only calling accept() in each process/thread, and thundering herd (thanks to modern kernels) was not a problem. Evolution has a price, so after a while the standard loop engine of a uWSGI process/thread moved from: for(;;) { int client= accept(s, ...); if (client<0) continue; ... } to a more complex: for(;;) { int interesting_fd= wait_for_fds(); if (fd_need_accept(interesting_fd)) { int client= accept(interesting_fd, ...); if (client<0) continue; } else if (fd_is_a_signal(interesting_fd)) { manage_uwsgi_signal(interesting_fd); } ... } The problem is now the wait_for_fds() example function: it will call something like select(), poll() or the more modern epoll() and kqueue(). These kinds of system calls are “monitors” for file descriptors, and they are woken up in all of the processes/threads waiting for the same file descriptor. Before you start blaming your kernel developers, this is the right approach, as the kernel cannot know if you are waiting for those file descriptors to call accept() or to make something funnier. So, welcome again to the thundering herd. 5.1.2 Application Servers VS WebServers The popular, battle tested, solid, multiprocess reference webserver is Apache HTTPD. It survived decades of IT evolutions and it’s still one of the most important technologies powering the whole Internet. Born as multiprocess-only, Apache had to always deal with the thundering herd problem and they solved it using SysV ipc semaphores. (Note: Apache is really smart about that, when it only needs to wait on a single file descriptor, it only calls accept() taking advantage of modern kernels anti-thundering herd policies) 322 Chapter 5. 
Articles uWSGI Documentation, Release 2.0 (Update: Apache 2.x even allows you to choose which lock technique to use, included flock/fcntl for very ancient systems, but on the vast majority of the system, when in multiprocess mode it will use the sysv semaphores) Even on modern Apache releases, stracing one of its process (bound to multiple interfaces) you will see something like that (it is a Linux system): semop(...); // lock epoll_wait(...); accept(...); semop(...); // unlock ... // manage the request the SysV semaphore protect your epoll_wait from thundering herd. So, another problem solved, the world is a such a beatiful place... but .... SysV IPC is not good for application servers :(* The definition of “application server” is pretty generic, in this case we refer to one or more process/processes gen- erated by an unprivileged (non-root) user binding on one or more network address and running custom, highly non- deterministic code. Even if you had a minimal/basic knowledge on how SysV IPC works, you will know each of its components is a limited resource in the system (and in modern BSDs these limits are set to ridiculously low values, PostgreSQL FreeBSD users know this problem very well). Just run ‘ipcs’ in your terminal to get a list of the allocated objects in your kernel. Yes, in your kernel. SysV ipc objects are persistent resources, they need to be removed manually by the user. The same user that could allocate hundreds of those objects and fill your limited SysV IPC memory. One of the most common problems in the Apache world caused by the SysV ipc usage is the leakage when you brutally kills Apache instances (yes, you should never do it, but you don’t have a choice if you are so brave/fool to host unreliable PHP apps in your webserver process). To better understand it, spawn Apache and killall -9 apache2. Respawn it and run ‘ipcs’ you will get a new semaphore object every time. Do you see the problem? (to Apache gurus: yes I know there are hacky tricks to avoid that, but this is the default behaviour) Apache is generally a system service, managed by a conscious sysadmin, so except few cases you can continue trusting it for more decades, even if it decides to use more SysV ipc objects :) Your application server, sadly, is managed by different kind of users, from the most skilled one to the one who should change job as soon as possible to the one with the site cracked by a moron wanting to take control of your server. Application servers are not dangerous, users are. And application servers are run by users. The world is an ugly place. 5.1.3 How application server developers solved it Fast answer: they generally do not solve/care it Note: we are talking about multiprocessing, we have already seen multithreading is easy to solve. Serving static files or proxying (the main activities of a webserver) is generally a fast, non-blocking (very deterministic under various points of view) activity. Instead, a web application is way slower and heavier, so, even on moderately loaded sites, the amount of sleeping processes is generally low. On highly loaded sites you will pray for a free process, and in non-loaded sites the thundering herd problem is com- pletely irrelevant (unless you are running your site on a 386). Given the relatively low number of processes you generally allocate for an application server, we can say thundering herd is a no-problem. 5.1. 
Serializing accept(), AKA Thundering Herd, AKA the Zeeg Problem 323 uWSGI Documentation, Release 2.0 Another approach is dynamic process spawning. If you ensure your application server has always the minimum required number of processes running you will highly reduce the thundering herd problem. (check the family of –cheaper uWSGI options) 5.1.4 No-problem ??? So, again, what we are talking about ? We are talking about “common cases”, and for common cases there are a plethora of valid choices (instead of uWSGI, obviously) and the vast majority of problems we are talking about are non-existent. Since the beginning of the uWSGI project, being developed by a hosting company where “common cases” do not exist, we cared a lot about corner-case problems, bizarre setups and those problems the vast majority of users never need to care about. In addition to this, uWSGI supports operational modes only common/available in general-purpose webservers like Apache (I have to say Apache is probably the only general purpose webserver as it allows basically anything in its process space in a relatively safe and solid way), so lot of new problems combined with user bad-behaviour arise. One of the most challenging development phase of uWSGI was adding multithreading. Threads are powerful, but are really hard to manage in the right way. Threads are way cheaper than processes, so you generally allocate dozens of them for your app (remember, not used memory is wasted memory). Dozens (or hundreds) of threads waiting for the same set of file descriptors bring us back to a thundering herd problem (unless all of your threads are constantly used). For such a reason when you enable multiple threads in uWSGI a pthread mutex is allocated, serializing epoll()/kqueue()/poll()/select()... usage in each thread. Another problem solved (and strange for uWSGI, without the need of an option ;) But... 5.1.5 The Zeeg problem: Multiple processes with multiple threads On June 27, 2013, David Cramer wrote an interesting blog post (you may not agree with its conclusions, but it does not matter now, you can continue hating uWSGI safely or making funny jokes about its naming choices or the number of options). http://justcramer.com/2013/06/27/serving-python-web-applications/ The problem David faced was such a strong thundering herd that its response time was damaged by it (non constant performance was the main result of its tests). Why did it happen? Wasn’t the mutex allocated by uWSGI solving it? David is (was) running uWSGI with 10 process and each of them with 10 threads: uwsgi --processes 10 --threads 10 ... While the mutex protects each thread in a single process to call accept() on the same request, there is no such mechanism (or better, it is not enabled by default, see below) to protect multiple processes from doing it, so given the number of threads (100) available for managing requests, it is unlikely that a single process is completely blocked (read: with all of its 10 threads blocked in a request) so welcome back to the thundering herd. 324 Chapter 5. Articles uWSGI Documentation, Release 2.0 5.1.6 How David solved it ? uWSGI is a controversial piece of software, no shame in that. There are users fiercely hating it and others morbidly loving it, but all agree that docs could be way better ([OT] it is good when all the people agree on something, but pull requests on uwsgi-docs are embarrassingly low and all from the same people.... come on, help us !!!) 
David used an empirical approach: he spotted his problem and decided to solve it by running independent uwsgi processes bound on different sockets, and configured nginx to round robin between them. It is a very elegant approach, but it has a problem: nginx cannot know if the process to which it is sending the request has all of its threads busy. It is a working but suboptimal solution. The best way would be having inter-process locking (like Apache), serializing all of the accept() calls among both threads and processes. 5.1.7 uWSGI docs suck: --thunder-lock Michael Hood (you will find his name in the comments of David's post, too) signalled the problem in the uWSGI mailing-list/issue tracker some time ago; he even came up with an initial patch that ended up as the --thunder-lock option (this is why open source is better ;) --thunder-lock has been available since uWSGI 1.4.6 but never got documentation (of any kind). Only the people following the mailing-list (or facing the specific problem) know about it. 5.1.8 SysV IPC semaphores are bad, how did you solve it? Interprocess locking has been an issue since uWSGI 0.0.0.0.0.1, but we solved it in the first public release of the project (in 2009). We basically checked each operating system's capabilities and chose the best/fastest IPC locking it could offer, filling our code with dozens of #ifdef. When you start uWSGI you should see in its logs which "lock engine" has been chosen. There is support for a lot of them: • pthread mutexes with _PROCESS_SHARED and _ROBUST attributes (modern Linux and Solaris) • pthread mutexes with _PROCESS_SHARED (older Linux) • OSX Spinlocks (MacOSX, Darwin) • Posix semaphores (FreeBSD >= 9) • Windows mutexes (Windows/Cygwin) • SysV IPC semaphores (fallback for all the other systems) Their usage is required for uWSGI-specific features like caching, rpc and all of those features requiring changes to shared memory structures (allocated with mmap() + _SHARED). Each of these engines is different from the others, and dealing with them has been a pain and (more importantly) some of them are not "ROBUST". The "ROBUST" term is pthread-borrowed. If a lock is "robust", it means that if the process holding it dies, the lock is released. You would expect this from all of the lock engines, but sadly only a few of them work reliably. For this reason the uWSGI master process has to allocate an additional thread (the 'deadlock' detector) constantly checking for non-robust unreleased locks mapped to dead processes. It is a pain; however, anyone telling you IPC locking is easy should be accepted into a JEDI school... 5.1.9 uWSGI developers are fu*!ing cowards Both David Cramer and Graham Dumpleton (yes, he is the mod_wsgi author but has heavily contributed to uWSGI development as well as to the other WSGI servers; this is another reason why open source is better) asked why --thunder-lock is not the default when multiprocess + multithread is requested. This is a good question with a simple answer: we are cowards who only care about money. uWSGI is completely open source, but its development is sponsored (in various ways) by the companies using it and by Unbit.it customers. Enabling "risky" features by default for a "common" usage (like multiprocess+multithread) is too much for us, and in addition to this, the situation (especially on Linux) of library/kernel incompatibilities is a real pain.
As an example, for having ROBUST pthread mutexes you need a modern kernel with a modern glibc, but commonly used distros (like the CentOS family) have a mix of older kernels with newer glibc, and the opposite too. This leads to the inability to correctly detect which is the best locking engine for a platform, and so, when the uwsgiconfig.py script is in doubt it falls back to the safest approach (like non-robust pthread mutexes on Linux). The deadlock-detector should save you from most of the problems, but the "should" word is the key. Making a test suite (or even a single unit test) for this kind of code is basically impossible (well, at least for me), so we cannot be sure all is in the right place (and reporting threading bugs is hard for users as well as skilled developers, unless you work on pypy ;) Linux pthread robust mutexes are solid, we are "pretty" sure about that, so you should be able to enable --thunder-lock on modern Linux systems with a 99.999999% success rate, but we prefer (for now) that users consciously enable it. 5.1.10 When SysV IPC semaphores are a better choice Yes, there are cases in which SysV IPC semaphores give you better results than system-specific features. Marcin Deranek of Booking.com has been battle-testing uWSGI for months and helped us with fixing corner-case situations even in the locking area. He noted that system-specific lock-engines tend to favour the kernel scheduler (when choosing which process wins the next lock after an unlock) instead of a round-robin distribution. As their specific need is an equal distribution of requests among processes (they use uWSGI with perl, so no threading is in place, but they spawn a lot of processes), they (currently) choose to use the "ipcsem" lock engine with: uwsgi --lock-engine ipcsem --thunder-lock --processes 100 --psgi .... The funny thing (this time) is that you can easily test whether the lock is working well. Just start blasting the server and you will see in the request logs how the reported pid is different each time, while with system-specific locking the pids are pretty random, with a pretty heavy tendency to favour the last used process. Funnily enough, the first problem they faced was ipcsem leakage (when you are in an emergency, graceful reload/stop is your enemy and kill -9 will be your silver bullet). To fix it, the --ftok option is available, allowing you to give a unique id to the semaphore object and to reuse it if it is available from a previous run: uwsgi --lock-engine ipcsem --thunder-lock --processes 100 --ftok /tmp/foobar --psgi .... --ftok takes a file as an argument, it will use it to build the unique id. A common pattern is using the pidfile for it. 5.1.11 What about other portable lock engines? In addition to "ipcsem", uWSGI (where available) adds "posixsem" too. They are used by default only on FreeBSD >= 9, but are available on Linux too. They are not "ROBUST", but they do not need shared kernel resources, so if you trust our deadlock detector they are a pretty good approach. (Note: Graham Dumpleton pointed me to the fact they can be enabled on Apache 2.x too) 5.1.12 Conclusions You can have the best (or the worst) software of the whole universe, but without docs it does not exist. The Apache team still slams the face of the vast majority of us trying to touch their market share :) 5.1.13 Bonus chapter: using the Zeeg approach in a uWSGI friendly way I have to admit, I am not a big fan of supervisord.
It is a good software without doubts, but I consider the Emperor and the –attach-daemon facilities a better approach to the deployment problems. In addition to this, if you want to have a “scriptable”/”extendable” process supervisor I think Circus (http://circus.readthedocs.org/) is a lot more fun and capable (the first thing I have done after implementing socket activation in the uWSGI Emperor was making a pull request [merged, if you care] for the same feature in Circus). Obviously supervisord works and is used by lot of people, but as a heavy uWSGI user I tend to abuse its features to accomplish a result. The first approach I would use is binding to 10 different ports and mapping each of them to a specific process: [uwsgi] processes=5 threads=5 ; create 5 sockets socket= :9091 socket= :9092 socket= :9093 socket= :9094 socket= :9095 ; map each socket (zero-indexed) to the specific worker map-socket= 0:1 map-socket= 1:2 map-socket= 2:3 map-socket= 3:4 map-socket= 4:5 Now you have a master monitoring 5 processes, each one bound to a different address (no --thunder-lock needed) For the Emperor fanboys you can make such a template (call it foo.template): [uwsgi] processes=1 threads= 10 socket= :%n Now make a symbolic link for each instance+port you want to spawn: 5.1. Serializing accept(), AKA Thundering Herd, AKA the Zeeg Problem 327 uWSGI Documentation, Release 2.0 ln -s foo.template 9091.ini ln -s foo.template 9092.ini ln -s foo.template 9093.ini ln -s foo.template 9094.ini ln -s foo.template 9095.ini ln -s foo.template 9096.ini 5.1.14 Bonus chapter 2: securing SysV IPC semaphores My company hosting platform in heavily based on Linux cgroups and namespaces. The first (cgroups) are used to limit/account resource usage, while the second (namespaces) are used to give an “iso- lated” system view to users (like seeing a dedicated hostname or root filesystem). As we allow users to spawn PostgreSQL instances in their accounts we need to limit SysV objects. Luckily, modern Linux kernels have a namespace for IPC, so calling unshare(CLONE_NEWIPC) will create a whole new set (detached from the others) of IPC objects. Calling --unshare ipc in customer-dedicated Emperors is a common approach. When combined with memory cgroup you will end with a pretty secure setup. 5.1.15 Credits: Author: Roberto De Ioris Fixed by: Honza Pokorny 5.2 The Art of Graceful Reloading Author: Roberto De Ioris The following article is language-agnostic, and albeit uWSGI-specific, some of its initial considerations apply to other application servers and platforms too. All of the described techniques assume a modern (>= 1.4) uWSGI release with the master process enabled. 5.2.1 What is a “graceful reload”? During the life-cycle of your webapp you will reload it hundreds of time. You need reloading for code updates, you need reloading for changes in the uWSGI configuration, you need reloading to reset the state of your app. Basically, reloading is one of the most simple, frequent and dangerous operation you do every time. So, why “graceful”? Take a traditional (and highly suggested) architecture: a proxy/load balancer (like nginx) forwards requests to one or more uWSGI daemons listening on various addresses. If you manage your reloads as “stop the instance, start the instance”, the time slice between two phases will result in a brutal disservice for your customers. 
The main trick for avoiding it is: not closing the file descriptors mapped to the uWSGI daemon addresses, and abusing the Unix fork() behaviour (read: file descriptors are inherited by default) to exec() the uwsgi binary again. The result is your proxy enqueuing requests to the socket until the latter is able to accept() them again, with the user/customer only seeing a little slowdown in the first response (the time required for the app to be fully loaded again). Another important step of graceful reload is to avoid destroying workers/threads that are still managing requests. Obviously requests could be stuck, so you should have a timeout for running workers (in uWSGI it is called the "worker's mercy" and it has a default value of 60 seconds). These kinds of tricks are pretty easy to accomplish and basically all of the modern servers/application servers do it (more or less). But, as always, the world is an ugly place, a lot of problems arise, and the "inherited sockets" approach is often not enough. 5.2.2 Things go wrong We have seen that holding the uWSGI sockets alive allows the proxy webserver to enqueue requests without spitting out errors to the clients. This is true only if your app restarts fast, and, sadly, this may not always happen. Frameworks like Ruby on Rails or Zope start up really slowly by default, your app could start up slowly by itself, or your machine could be so overloaded that every process spawn (fork()) takes ages. In addition to this, your site could be so famous that even if your app restarts in a couple of seconds, the queue of your sockets could fill up, forcing the proxy server to raise an error. Do not forget, your workers/threads that are still running requests could block the reload (for various reasons) for more seconds than your proxy server can tolerate. Finally, you could have made an application error in your just-committed code, so uWSGI will not start, or will start sending wrong things or errors... Reloads (brutal or graceful) can easily fail. 5.2.3 The listen queue Let's start with the dream of every webapp developer: success. Your app is visited by thousands of clients and you obviously make money with it. Unfortunately, it is a very complex app and requires 10 seconds to warm up. During graceful reloads, you expect new clients to wait 10 seconds (best case) to start seeing contents, but, unfortunately, you have hundreds of concurrent requests, so the first 100 customers will wait during the server warm-up, while the others will get an error from the proxy. This happens because the default size of uWSGI's listen queue is 100 slots. Before you ask, it is an average value chosen based on the maximum value allowed by default by your kernel. Each operating system has a default limit (Linux has 128, for example), so before increasing it you need to increase your kernel limit too. So, once your kernel is ready, you can increase the listen queue to the maximum number of users you expect to enqueue during a reload. To increase the listen queue you use the --listen <n> option, where <n> is the maximum number of slots (see the sketch at the end of this section). To raise kernel limits, you should check your OS docs. Some examples: • sysctl kern.ipc.somaxconn on FreeBSD • /proc/sys/net/core/somaxconn on Linux. Note: This is only one of the reasons to tune the listen queue, but do not blindly set it to huge values as a way to increase availability.
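As a minimal sketch (the socket, application file and queue size here are hypothetical, pick values that match your own setup and kernel limits), raising the listen queue is a single-option change:

[uwsgi]
; hypothetical application, only here to give the option some context
http-socket = :9090
wsgi-file = app.py
processes = 4
; enlarge the listen queue: the kernel limit (e.g. /proc/sys/net/core/somaxconn on Linux)
; must be raised to at least this value first
listen = 4096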
5.2.4 Proxy timeouts This is another thing you need to check if your reloads take a lot of time. Generally, proxies allow you to set two timeouts: connect Maximum amount of time the proxy will wait for a successful connection. read Maximum amount of time the server will be able to wait for data before giving up. When tuning the reloads, only the “connection” timeout matters. This timeout enters the game in the time slice between uWSGI’s bind to an interface (or inheritance of it) and the call to accept(). 5.2.5 Waiting instead of errors is good, no errors and no waiting is even better This is the focus of this article. We have seen how to increase the tolerance of your proxy during application server reloading. The customers will wait instead of getting scary errors, but we all want to make money, so why force them to wait? We want zero-downtime and zero-wait. 5.2.6 Preforking VS lazy-apps VS lazy This is one of the controversial choices of the uWSGI project. By default uWSGI loads the whole application in the first process and after the app is loaded it does fork() itself multiple times. This is the common Unix pattern, it may highly reduce the memory usage of your app, allows lot of funny tricks and on some languages may bring you a lot of headaches. Albeit its name, uWSGI was born as a Perl application server (it was not called uWSGI and it was not open source), and in the Perl world preforking is generally the blessed way. This is not true for a lot of other languages, platforms and frameworks, so before starting dealing with uWSGI you should choose how to manage fork() in your stack. Seeing it from the “graceful reloading” point of view, preforking extremely speeds up things: your app is loaded only one time, and spawning additional workers will be really fast. Avoiding disk access for each worker of your stack will increase startup times, expecially for frameworks or languages doing a lot of disk access to find modules. Unfortunately, the preforking approach forces you to reload the whole stack whenever you make code changes instead of reloading only the workers. In addition to this, your app could need preforking, or could completely crash due to it because of the way it has been developed. lazy-apps mode instead loads your application one time per worker. It will require about O(n) time to load it (where n is the number of workers), will very probably consume more memory, but will run in a more consistent and clean environment. Remember: lazy-apps is different from lazy, the first one only instructs uWSGI to load the application one time per worker, while the second is more invasive (and generally discouraged) as it changes a lot of internal defaults. The following approaches will show you how to accomplish zero-downtime/wait reloads in both preforking and lazy modes. 330 Chapter 5. Articles uWSGI Documentation, Release 2.0 Note: Each approach has pros and cons, choose carefully. 5.2.7 Standard (default/boring) graceful reload (aka SIGHUP) To trigger it, you can: • send SIGHUP to the master • write r to The Master FIFO • use --touch-reload option • call uwsgi.reload() API. In preforking and lazy-apps mode, it will: 1. Wait for running workers. 2. Close all of the file descriptors except the ones mapped to sockets. 3. Call exec() on itself. In lazy mode, it will: 1. Wait for running workers. 2. Restart all of them (this means you cannot change uWSGI options during this kind of reload). Warning: lazy is discouraged! 
Pros: • easy to manage • no corner-case problems • no inconsistent states • basically full reset of the instance. Cons: • the ones we seen before • listen queue filling up • stuck workers • potentially long waiting times. 5.2.8 Workers reloading in lazy-apps mode Requires --lazy-apps option. To trigger it: • write w to The Master FIFO • use --touch-workers-reload option. It will wait for running workers and then restart each of them. Pros: 5.2. The Art of Graceful Reloading 331 uWSGI Documentation, Release 2.0 • avoids restarting the whole instance. Cons: • no user-experience improvements over standard graceful reload, it is only a shortcut for situation when code updates do not imply instance reconfiguration. 5.2.9 Chain reloading (lazy apps) Requires --lazy-apps option. To trigger it: • write c to The Master FIFO • use --touch-chain-reload option. This is the first approach that improves user experience. When triggered, it will restart one worker at time, and the following worker is not reloaded until the previous one is ready to accept new requests. Pros: • potentially highly reduces waiting time for clients • reduces the load of the machine during reloads (no multiple processes loading the same code). Cons: • only useful for code updates • you need a good amount of workers to get a better user experience. 5.2.10 Zerg mode Requires a zerg server or a zerg pool. To trigger it, run the instance in zerg mode. This is the first approach that uses multiple instances of the same application to increase user experience. Zerg mode works by making use of the venerable “fd passing over Unix sockets” technique. Basically, an external process (the zerg server/pool) binds to the various sockets required by your app. Your uWSGI instance, instead of binding by itself, asks the zerg server/pool to pass it the file descriptor. This means multiple unrelated instances can ask for the same file descriptors and work together. Zerg mode was born to improve auto-scalability, but soon became one of the most loved approaches for zero-downtime reloading. Now, examples. Spawn a zerg pool exposing 127.0.0.1:3031 to the Unix socket /var/run/pool1: [uwsgi] master= true zerg-pool= /var/run/pool1:127.0.0.1:3031 Now spawn one or more instances attached to the zerg pool: [uwsgi] ; this will give access to 127.0.0.1:3031 to the instance zerg= /var/run/pool1 332 Chapter 5. Articles uWSGI Documentation, Release 2.0 When you want to make update of code or options, just spawn a new instance attached to the zerg, and shut down the old one when the new one is ready to accept requests. The so-called “zerg dance” is a trick for automation of this kind of reload. There are various ways to accomplish it, the objective is to automatically “pause” or “destroy” the old instance when the new one is fully ready and able to accept requests. More on this below. Pros: • potentially the silver bullet • allows instances with different options to cooperate for the same app. Cons: • requires an additional process • can be hard to master • reload requires copy of the whole uWSGI stack. 5.2.11 The Zerg Dance: Pausing instances We all make mistakes, sysadmins must improve their skill of fast disaster recovery. Focusing on avoiding them is a waste of time. Unfortunately, we are all humans. Rolling back deployments could be your life-safer. We have seen how zerg mode allows us to have multiple instances asking on the same socket. In the previous section we used it to spawn a new instance working together with the old one. 
Now, instead of shutting down the old instance, why not "pause" it? A paused instance is like the standby mode of your TV. It consumes very few resources, but you can bring it back very quickly. "Zerg Dance" is the battle-name for the procedure of continuously swapping instances during reloads. Every reload results in a "sleeping" instance and a running one. Following reloads destroy the old sleeping instance and turn the old running one into the sleeping one, and so on. There are literally dozens of ways to accomplish the "Zerg Dance"; the fact that you can easily use scripts in your reloading procedures makes this approach extremely powerful and customizable. Here we will see the one that requires zero scripting. It may be the least versatile (and requires at least uWSGI 1.9.21), but it should be a good starting point for improvements. The Master FIFO is the best way to manage instances instead of relying on Unix signals. Basically, you write single-char commands to govern the instance. The funny thing about Master FIFOs is that you can have many of them configured for your instance and swap one with another very easily. An example will clarify things. We spawn an instance with 3 Master FIFOs: new (the default one), running and sleeping: [uwsgi] ; fifo '0' master-fifo= /var/run/new.fifo ; fifo '1' master-fifo= /var/run/running.fifo ; fifo '2' master-fifo= /var/run/sleeping.fifo ; attach to zerg zerg= /var/run/pool1 ; other options ... By default the "new" one will be active (read: will be able to process commands). Now we want to spawn a new instance that, once ready to accept requests, will put the old one in sleeping mode. To do it, we will use uWSGI's advanced hooks. Hooks allow you to "make things" at various phases of uWSGI's life cycle. When the new instance is ready, we want to force the old instance to start working on the sleeping FIFO and to be in "pause" mode: [uwsgi] ; fifo '0' master-fifo= /var/run/new.fifo ; fifo '1' master-fifo= /var/run/running.fifo ; fifo '2' master-fifo= /var/run/sleeping.fifo ; attach to zerg zerg= /var/run/pool1 ; hooks ; destroy the currently sleeping instance if-exists= /var/run/sleeping.fifo hook-accepting1-once = writefifo:/var/run/sleeping.fifo Q endif= ; force the currently running instance to become the sleeping one (slot 2) and place it in pause mode if-exists= /var/run/running.fifo hook-accepting1-once = writefifo:/var/run/running.fifo 2p endif= ; force this instance to become the running one (slot 1) hook-accepting1-once= writefifo:/var/run/new.fifo 1 The hook-accepting1-once phase is run one time per instance, soon after the first worker is ready to accept requests. The writefifo command allows writing to FIFOs without failing if the other peer is not connected (this is different from a simple write command, which would fail or completely block when dealing with bad FIFOs). Note: Both features have been added only in uWSGI 1.9.21; with older releases you can use the --hook-post-app option instead of --hook-accepting1-once, but you will lose the "once" feature, so it will work reliably only in preforking mode. Instead of writefifo you can use the shell variant: exec:echo > . Now start running instances with the same config files over and over again. If all goes well, you should always end up with two instances, one sleeping and one running.
Finally, if you want to bring back a sleeping instance, just do: # destroy the running instance echo Q > /var/run/running.fifo # unpause the sleeping instance and set it as the running one echo p1 > /var/run/sleeping.fifo Pros: • truly zero-downtime reload. Cons: • requires high-level uWSGI and Unix skills. 5.2.12 SO_REUSEPORT (Linux >= 3.9 and BSDs) On recent Linux kernels and modern BSDs you may try the --reuse-port option. This option allows multiple unrelated instances to bind on the same network address. You may see it as a kernel-level zerg mode. Basically, all of the Zerg approaches can be followed. Once you add --reuse-port to your instance, all of the sockets will have the SO_REUSEPORT flag set. Pros: • similar to zerg mode, could be even easier to manage. Cons: • requires kernel support • could lead to inconsistent states • you lose the ability to use TCP addresses as a way to avoid accidental multiple instances running. 5.2.13 The Black Art (for rich and brave people): master forking To trigger it, write f to The Master FIFO. This is the most dangerous of the ways to reload, but once mastered, it can lead to pretty cool results. The approach is: call fork() in the master, close all of the file descriptors except the socket-related ones, and exec() a new uWSGI instance. You will end up with two specular uWSGI instances working on the same set of sockets. The scary thing about it is how easy it is to trigger (just write a single char to the master FIFO)... With a bit of mastery you can implement the zerg dance on top of it. Pros: • does not require kernel support nor an additional process • pretty fast. Cons: • a whole copy for each reload • inconsistent states all over the place (pidfiles, logging, etc.: the master FIFO commands could help fix them). 5.2.14 Subscription system This is probably the best approach when you can count on multiple servers. You add the "fastrouter" between your proxy server (e.g., nginx) and your instances. Instances will "subscribe" to the fastrouter, which will pass requests from the proxy server (nginx) to them while load balancing and constantly monitoring all of them. Subscriptions are simple UDP packets that instruct the fastrouter which domain maps to which instance or instances. As you can subscribe, you can unsubscribe too, and this is where the magic happens: [uwsgi] subscribe-to= 192.168.0.1:4040:unbit.it unsubscribe-on-graceful-reload= true ; all of the required options ... Adding unsubscribe-on-graceful-reload will force the instance to send an "unsubscribe" packet to the fastrouter, so until it is back no requests will be sent to it. Pros: • low-cost zero-downtime • a KISS approach (finally). Cons: • requires a subscription server (like the fastrouter) that introduces overhead (even if we are talking about microseconds). 5.2.15 Inconsistent states Sadly, most of the approaches involving copies of the whole instance (like the Zerg Dance or master forking) lead to inconsistent states. Take, for example, an instance writing pidfiles: when starting a copy of it, that pidfile will be overwritten. If you carefully plan your configurations, you can avoid inconsistent states, but thanks to The Master FIFO you can manage some of them (read: the most common ones): • the l command will reopen logfiles • the P command will update all of the instance pidfiles.
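As a minimal sketch of how this fits together (all paths here are hypothetical), give each instance its Master FIFO along with the pidfile and logfile it writes, so that after a master fork or a zerg swap you can push the single-char commands above to fix the copied state:

[uwsgi]
; hypothetical paths
master-fifo = /var/run/app.fifo
pidfile = /var/run/app.pid
logto = /var/log/app.log
; ... the rest of your options ...

; after a master fork (or a zerg swap) you would then write to the FIFO:
;   l  -> reopen /var/log/app.log
;   P  -> rewrite /var/run/app.pid with the new pids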
5.2.16 Fighting inconsistent states with the Emperor If you manage your instances with the Emperor, you can use its features to avoid (or reduce the number of) inconsistent states. Giving each instance a different symbolic link name will allow you to map files (like pidfiles or logs) to different paths: [uwsgi] logto= /var/log/%n.log pidfile= /var/run/%n.pid ; and so on ... 5.2.17 Dealing with ultra-lazy apps (like Django) Some applications or frameworks (like Django) may load the vast majority of their code only at the first request. This means that customers will continue to experience slowdowns during reload even when using things like zerg mode or similar. This problem is hard to solve (impossible?) in the application server itself, so you should find a way to force your app to load itself ASAP. A good trick (read: works with Django) is to call the entry-point function (like the WSGI callable) in the app itself: def application(environ, sr): sr('200 OK', [('Content-Type', 'text/plain')]) yield "Hello" application({}, lambda x, y: None)  # call the entry-point function You may need to pass CGI vars in the environ to make a true request: it depends on the WSGI app. 5.2.18 Finally: Do not blindly copy & paste! Please, turn on your brain and try to adapt the shown configs to your needs, or invent new ones. Each app and system is different from the others. Experiment before making a choice. 5.2.19 References The Master FIFO Hooks Zerg mode The uWSGI FastRouter uWSGI Subscription Server 5.3 Fun with Perl, Eyetoy and RaspberryPi Author: Roberto De Ioris Date: 2013-12-07 5.3.1 Intro This article is the result of various experiments aimed at improving uWSGI performance and usability in various areas before the 2.0 release. To follow the article you need: • a Raspberry Pi (any model) with a Linux distribution installed (I used standard Raspbian) • a PS3 Eyetoy webcam • a websocket-enabled browser (basically any serious browser) • a bit of Perl knowledge (really only a bit, there's less than 10 lines of Perl ;) • Patience (building uWSGI + PSGI + coroae on the RPI requires 13 minutes) 5.3.2 uWSGI subsystems and plugins The project makes use of the following uWSGI subsystems and plugins: • WebSocket support • SharedArea – share memory pages between uWSGI components (for storing frames) • uWSGI Mules (for gathering frames) • The Symcall plugin • uWSGI Perl support (PSGI) • uWSGI asynchronous/non-blocking modes (updated to uWSGI 1.9) (optional, we use Coro::AnyEvent but you can rely on standard processes, though you'll need way more memory)
The translation between YUYV and RGBA is pretty heavy for the RPI (especially if you need to do it for every connected client) so we will do it in the browser using Javascript. (There are other approaches we could follow, just check the end of the article for them.) The uWSGI stack is composed by a mule gathering frames from the Eyetoy and writing them to the uWSGI SharedArea. Workers constantly read from that SharedArea and send frames as binary websocket messages. 5.3.5 Let’s start: the uwsgi-capture plugin uWSGI 1.9.21 introduced a simplified (and safe) procedure to build uWSGI plugins. (Expect more third party plugins soon!) The project at: https://github.com/unbit/uwsgi-capture shows a very simple plugin using the Video4Linux 2 API to gather frames. Each frame is written in a shared area initialized by the plugin itself. The first step is getting uWSGI and building it with the ‘coroae’ profile: sudo apt-get install git build-essential libperl-dev libcoro-perl git clone https://github.com/unbit/uwsgi cd uwsgi make coroae The procedure requires about 13 minutes. If all goes well you can clone the uwsgi-capture plugin and build it. git clone https://github.com/unbit/uwsgi-capture ./uwsgi --build-plugin uwsgi-capture You now have the capture_plugin.so file in your uwsgi directory. Plug your Eyetoy into an USB port on your RPI and check if it works: ./uwsgi --plugin capture --v4l-capture /dev/video0 (the --v4l-capture option is exposed by the capture plugin) If all goes well you should see the following lines in uWSGI startup logs: /dev/video0 detected width= 640 /dev/video0 detected height= 480 /dev/video0 detected format= YUYV 338 Chapter 5. Articles uWSGI Documentation, Release 2.0 sharedarea 0 created at 0xb6935000(150 pages, area at 0xb6936000) /dev/video0 started streaming frames to sharedarea 0 (the sharedarea memory pointers will obviously probably be different) The uWSGI process will exit soon after this as we did not tell it what to do. :) The uwsgi-capture plugin exposes 2 functions: • captureinit(), mapped as the init() hook of the plugin, will be called automatically by uWSGI. If the –v4l-capture option is specified, this function will initialize the specified device and will map it to a uWSGI sharedarea. • captureloop() is the function gathering frames and writing them to the sharedarea. This function should constantly run (even if there are no clients reading frames) We want a mule to run the captureloop() function. ./uwsgi --plugin capture --v4l-capture /dev/video0 --mule="captureloop()" --http-socket :9090 This time we have bound uWSGI to HTTP port 9090 with a mule mapped to the “captureloop()” function. This mule syntax is exposed by the symcall plugin that takes control of every mule argument ending with “()” (the quoting is required to avoid the shell making a mess of the parentheses). If all goes well you should see your uWSGI server spawning a master, a mule and a worker. 5.3.6 Step 2: the PSGI app Time to write our websocket server sending Eyetoy frames (you can find sources for the example here: https://github.com/unbit/uwsgi-capture/tree/master/rpi-examples). 
The PSGI app will be very simple: use IO::File; use File::Basename; my $app = sub { my $env = shift; # websockets connection happens on /eyetoy if ($env->{PATH_INFO} eq '/eyetoy') { # complete the handshake uwsgi::websocket_handshake($env->{HTTP_SEC_WEBSOCKET_KEY}, $env->{HTTP_ORIGIN}); while(1) { # wait for updates in the sharedarea uwsgi::sharedarea_wait(0, 50); # send a binary websocket message directly from the sharedarea uwsgi::websocket_send_binary_from_sharedarea(0, 0) } } # other requests generate the html else { return [200, ['Content-Type' => 'text/html'], new IO::File(dirname(__FILE__).'/eyetoy.html')]; } } The only interesting parts are: uwsgi::sharedarea_wait(0, 50); This function suspends the current request until the specified shared area (the 'zero' one) gets an update. As this function is basically a busy-loop poll, the second argument specifies the polling frequency in milliseconds. 50 milliseconds gave us good results (feel free to try other values). uwsgi::websocket_send_binary_from_sharedarea(0, 0) This is a special utility function sending a websocket binary message directly from the sharedarea (yep, zero-copy). The first argument is the sharedarea id (the 'zero' one) and the second is the position in the sharedarea to start reading from (zero again, as we want a full frame). 5.3.7 Step 3: HTML5 The HTML part (well, it would be better to say the Javascript part) is very easy, aside from the YUYV to RGB(A) transform voodoo. Nothing special here. The vast majority of the code is related to the YUYV->RGBA conversion. Pay attention to set the websocket communication to 'binary' mode (binaryType = 'arraybuffer' is enough) and be sure to use a Uint8ClampedArray (otherwise performance will be terribly bad). 5.3.8 Ready to watch ./uwsgi --plugin capture --v4l-capture /dev/video0 --http-socket :9090 --psgi uwsgi-capture/rpi-examples/eyetoy.pl --mule="captureloop()" Connect with your browser to TCP port 9090 of your Raspberry Pi and start watching. 5.3.9 Concurrency While you watch your websocket stream, you may want to open another browser window to see a second copy of your video. Unfortunately you spawned uWSGI with a single worker, so only a single client can get the stream. You can add multiple workers easily: ./uwsgi --plugin capture --v4l-capture /dev/video0 --http-socket :9090 --psgi uwsgi-capture/rpi-examples/eyetoy.pl --mule="captureloop()" --processes 10 This way, up to 10 people will be able to watch the stream. But coroutines are way better (and cheaper) for I/O bound applications such as this: ./uwsgi --plugin capture --v4l-capture /dev/video0 --http-socket :9090 --psgi uwsgi-capture/rpi-examples/eyetoy.pl --mule="captureloop()" --coroae 10 Now, magically, we are able to manage 10 clients with but a single process! The memory on the RPI will be grateful to you. 5.3.10 Zero-copy all the things Why are we using the SharedArea? The SharedArea is one of the most advanced uWSGI features. If you take a look at the uwsgi-capture plugin you will see how easily it creates a sharedarea pointing to a mmap()'ed region. Basically each worker, thread (but please do not use threads with Perl) or coroutine will have access to that memory in a concurrently safe way.
In addition to this, thanks to the websocket/sharedarea cooperation API, you can directly send websocket packets from a sharedarea without copying memory (except for the resulting websocket packet). This is way faster than something like:

my $chunk = uwsgi::sharedarea_read(0, 0);
uwsgi::websocket_send_binary($chunk);

Here we would need to allocate memory for $chunk at every iteration, copy the sharedarea content into it and finally encapsulate it in a websocket message. With the sharedarea you remove the need to constantly allocate (and free) memory and to copy it from the sharedarea to the Perl VM.

5.3.11 Alternative approaches

There are obviously other approaches you can follow.

You could hack uwsgi-capture to allocate a second sharedarea into which it will directly write RGBA frames.

JPEG encoding is relatively fast, so you could try encoding frames on the RPI and sending them as MJPEG frames (instead of using websockets):

my $writer = $responder->([200, ['Content-Type' => 'multipart/x-mixed-replace; boundary=uwsgi_mjpeg_frame']]);
$writer->write("--uwsgi_mjpeg_frame\r\n");
while(1) {
    uwsgi::sharedarea_wait(0);
    my $chunk = uwsgi::sharedarea_read(0, 0);
    $writer->write("Content-Type: image/jpeg\r\n");
    $writer->write("Content-Length: ".length($chunk)."\r\n\r\n");
    $writer->write($chunk);
    $writer->write("\r\n--uwsgi_mjpeg_frame\r\n");
}

5.3.12 Other languages

At the time of writing, the uWSGI PSGI plugin is the only one exposing the additional API for websockets+sharedarea. The other language plugins will be updated soon.

5.3.13 More hacking

The RPI board is really fun to tinker with and uWSGI is a great companion for it (especially its lower-level API functions).

Note: As an exercise left to the reader: remember you can mmap() the address 0x20200000 to access the Raspberry Pi GPIO controller... ready to write a uwsgi-gpio plugin?

5.4 Offloading Websockets and Server-Sent Events AKA "Combine them with Django safely"

Author: Roberto De Ioris

Date: 20140315

5.4.1 Disclaimer

This article shows a pretty advanced way of combining websocket (or SSE) apps with Django in a "safe way". It will not show you how cool websockets and SSE are, or how to write better apps with them; it is an attempt to avoid bad practices with them.

In my opinion the Python web-oriented world is facing a communication/marketing problem: there is a huge number of people running heavily blocking apps (like Django) on non-blocking technologies (like gevent) only because someone told them it is cool and will solve all of their scaling issues.

This is completely WRONG, DANGEROUS and EVIL: you cannot mix blocking apps with non-blocking engines; even a single, ultra-tiny blocking part can potentially destroy your whole stack. As I have already said dozens of times, if your app is 99.9999999% non-blocking, it is still blocking. And no, monkey patching your Django app is not magic. Unless you are using pretty customized database adapters, tuned for working in a non-blocking way, you are doing it wrong.

At the cost of looking like an uber-asshole, I strongly suggest you completely ignore people telling you to move your Django app to gevent, eventlet, Tornado or whatever, without warning you about the hundreds of problems you may encounter.

Having said that, I love gevent. It is probably the best supported loop engine in the uWSGI project (together with Perl's Coro::AnyEvent).
So in this article I will use gevent for managing websocket/SSE traffic and plain multiprocessing for the Django part. If this last sentence looks like nonsense to you, you probably do not know what uWSGI offloading is...

5.4.2 uWSGI offloading

The concept is not a new thing, nor a uWSGI-specific one. Projects like node.js or Twisted have used it for ages.

Note: an example of a webapp serving a static file is not very interesting, nor the best thing to show, but it will be useful later, when presenting a real-world scenario with X-Sendfile.

Imagine this simple WSGI app:

def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    f = open('/etc/services')
    # do not do it, if the file is 4GB it will allocate 4GB of memory !!!
    yield f.read()

It simply returns the content of /etc/services. It is a pretty tiny file, so in a few milliseconds your process will be ready to process another request.

What if /etc/services were 4 gigabytes? Your process (or thread) would be blocked for several seconds (even minutes), and would not be able to manage another request until the file is completely transferred.

Wouldn't it be cool if you could tell another thread to send the file for you, so that you would be free to manage another request?

Offloading is exactly this: it gives you one or more threads for doing simple but slow tasks for you. Which kind of tasks? All of those that can be managed in a non-blocking way, so that a single thread can manage thousands of transfers for you.

You can see it as the DMA engine in your computer: your CPU programs the DMA to transfer memory from a controller to the RAM, and is then freed to accomplish other tasks while the DMA works in the background.

To enable offloading in uWSGI you only need to add the --offload-threads <n> option, where <n> is the number of threads per-process to spawn. (Generally a single thread will be more than enough, but if you want to use/abuse your multiple CPU cores feel free to increase it.)

Once offloading is enabled, uWSGI will automatically use it whenever it detects that an operation can be offloaded safely. In the Python/WSGI case the use of wsgi.file_wrapper will be offloaded automatically, as will the uWSGI proxy features used for passing requests to other servers speaking the uwsgi or HTTP protocol.

A cool example (shown in the Snippets page of the uWSGI docs too) is implementing an offload-powered X-Sendfile feature:

[uwsgi]
; load router_static plugin (compiled in by default in monolithic profiles)
plugins = router_static
; spawn 2 offload threads
offload-threads = 2
; files under /etc can be safely served (DANGEROUS !!!)
static-safe = /etc
; collect the X-Sendfile response header as X_SENDFILE var
collect-header = X-Sendfile X_SENDFILE
; if X_SENDFILE is not empty, pass its value to the "static" routing action (it will automatically use offloading if available)
response-route-if-not = empty:${X_SENDFILE} static:${X_SENDFILE}
; now the classic options
plugins = python
; bind to HTTP port 8080
http-socket = :8080
; load a simple wsgi-app
wsgi-file = myapp.py

Now in our app we can use X-Sendfile to send static files without blocking:

def application(env, start_response):
    start_response('200 OK', [('X-Sendfile', '/etc/services')])
    return []

A very similar concept will be used in this article: we will use a normal Django app to set up our session, authorize the user and do whatever else (that is fast) you want, then we will return a special header that instructs uWSGI to offload the connection to another uWSGI instance (listening on a private socket) that will manage the websocket/SSE transaction using gevent in a non-blocking way.

5.4.3 Our SSE app

The SSE part will be very simple: a gevent-based WSGI app will send the current time every second:

from sse import Sse
import time

def application(e, start_response):
    print e
    # create the SSE session
    session = Sse()
    # prepare HTTP headers
    headers = []
    headers.append(('Content-Type', 'text/event-stream'))
    headers.append(('Cache-Control', 'no-cache'))
    start_response('200 OK', headers)
    # enter the loop
    while True:
        # monkey patching will prevent sleep() from blocking
        time.sleep(1)
        # add the message
        session.add_message('message', str(time.time()))
        # send to the client
        yield str(session)

Let's run it on the /tmp/foo UNIX socket (save the app as sseapp.py):

uwsgi --wsgi-file sseapp.py --socket /tmp/foo --gevent 1000 --gevent-monkey-patch

(monkey patching is required for time.sleep(); feel free to use gevent primitives for sleeping if you prefer)

5.4.4 The (boring) HTML/Javascript

The page is a minimal document titled "Server sent events" that subscribes to the event stream.
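A sketch of such a page (markup and element names are purely illustrative; the only behavior taken from the article is opening an EventSource on /subscribe and printing each received event):

<html>
  <body>
    <h1>Server sent events</h1>
    <div id="event"></div>
    <script type="text/javascript">
      var source = new EventSource('/subscribe');
      source.onmessage = function(e) {
        // prepend every received message to the page
        var d = document.getElementById('event');
        d.innerHTML = e.data + '<br/>' + d.innerHTML;
      };
    </script>
  </body>
</html>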

It is very simple: it connects to /subscribe and starts waiting for events.

5.4.5 The Django view

Our Django view will be very simple: it will simply generate a special response header (we will call it X-Offload-to-SSE) with the username of the logged-in user as its value:

def subscribe(request):
    response = HttpResponse()
    response['X-Offload-to-SSE'] = request.user
    return response

Now we are ready for the "advanced" part.

5.4.6 Let's offload the SSE transaction

The configuration could look a bit complex, but it is the same concept as the X-Sendfile seen before:

[uwsgi]
; the boring part
http-socket = :9090
offload-threads = 2
wsgi-file = sseproject/wsgi.py

; collect the X-Offload-to-SSE header and store it in the var X_OFFLOAD
collect-header = X-Offload-to-SSE X_OFFLOAD
; if X_OFFLOAD is defined, do not send the headers generated by Django
response-route-if-not = empty:${X_OFFLOAD} disableheaders:
; if X_OFFLOAD is defined, offload the request to the app running on /tmp/foo
response-route-if-not = empty:${X_OFFLOAD} uwsgi:/tmp/foo,0,0

The only "new" part is the use of the 'disableheaders' routing action. It is required, otherwise the headers generated by Django would be sent along with the ones generated by the gevent-based app.

You could avoid it (remember that disableheaders has been added only in 2.0.3) by removing the call to start_response() in the gevent app (at the risk of being cursed by some WSGI god) and changing the Django view to set the right headers:

def subscribe(request):
    response = HttpResponse()
    response['Content-Type'] = 'text/event-stream'
    response['X-Offload-to-SSE'] = request.user
    return response

Eventually you may want to be more "streamlined" and simply detect the presence of the 'text/event-stream' content type:

[uwsgi]
; the boring part
http-socket = :9090
offload-threads = 2
wsgi-file = sseproject/wsgi.py

; collect the Content-Type header and store it in the var CONTENT_TYPE
collect-header = Content-Type CONTENT_TYPE
; if CONTENT_TYPE is 'text/event-stream', forward the request
response-route-if = equal:${CONTENT_TYPE};text/event-stream uwsgi:/tmp/foo,0,0

Now, how do you access the username of the Django-logged user in the gevent app?

You should have noted that the gevent app prints the content of the WSGI environ on each request. That environment is the same as the Django app's, plus the collected headers. So accessing environ['X_OFFLOAD'] will return the logged username. (Obviously in the second example, where the content type is used, the variable with the username is no longer collected, so you should fix it.)

You can pass all of the info you need using the same approach: collect all of the vars you need, and so on.
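For instance, the gevent app could read the collected variable and include it in the stream. A minimal sketch, assuming the first configuration above (where the header is collected into X_OFFLOAD); raw 'data:' framing is used here instead of the Sse helper just to keep it short:

import time

def application(e, start_response):
    # X_OFFLOAD was collected by uWSGI from the X-Offload-to-SSE response header
    username = e.get('X_OFFLOAD', 'anonymous')
    start_response('200 OK', [('Content-Type', 'text/event-stream'), ('Cache-Control', 'no-cache')])
    while True:
        # monkey patching keeps sleep() non-blocking under gevent
        time.sleep(1)
        yield 'data: hello %s, it is %s\n\n' % (username, time.time())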
You can even add variables at runtime:

[uwsgi]
; the boring part
http-socket = :9090
offload-threads = 2
wsgi-file = sseproject/wsgi.py

; collect the Content-Type header and store it in the var CONTENT_TYPE
collect-header = Content-Type CONTENT_TYPE
response-route-if = equal:${CONTENT_TYPE};text/event-stream addvar:FOO=BAR
response-route-if = equal:${CONTENT_TYPE};text/event-stream addvar:TEST1=TEST2
; if CONTENT_TYPE is 'text/event-stream', forward the request
response-route-if = equal:${CONTENT_TYPE};text/event-stream uwsgi:/tmp/foo,0,0

or (using goto for better readability):

[uwsgi]
; the boring part
http-socket = :9090
offload-threads = 2
wsgi-file = sseproject/wsgi.py

; collect the Content-Type header and store it in the var CONTENT_TYPE
collect-header = Content-Type CONTENT_TYPE
response-route-if = equal:${CONTENT_TYPE};text/event-stream goto:offload
response-route-run = last:

response-route-label = offload
response-route-run = addvar:FOO=BAR
response-route-run = addvar:TEST1=TEST2
response-route-run = uwsgi:/tmp/foo,0,0

5.4.7 Simplifying things using the uwsgi api (>= uWSGI 2.0.3)

While dealing with headers is pretty HTTP-friendly, uWSGI 2.0.3 added the possibility to define per-request variables directly in your code. This allows a more "elegant" approach (even if highly non-portable):

import uwsgi

def subscribe(request):
    uwsgi.add_var("LOGGED_IN_USER", request.user)
    uwsgi.add_var("USER_IS_UGLY", "probably")
    uwsgi.add_var("OFFLOAD_TO_SSE", "y")
    uwsgi.add_var("OFFLOAD_SERVER", "/tmp/foo")
    return HttpResponse()

Now the config can be reduced to a gentler:

[uwsgi]
; the boring part
http-socket = :9090
offload-threads = 2
wsgi-file = sseproject/wsgi.py

; if OFFLOAD_TO_SSE is 'y', do not send the headers generated by Django
response-route-if = equal:${OFFLOAD_TO_SSE};y disableheaders:
; if OFFLOAD_TO_SSE is 'y', offload the request to the app running on 'OFFLOAD_SERVER'
response-route-if = equal:${OFFLOAD_TO_SSE};y uwsgi:${OFFLOAD_SERVER},0,0

Have you noted how we allowed the Django app to choose the backend server via a request variable?

Now we can go even further. We will not use the routing framework at all (except for disabling headers generation):

import uwsgi

def subscribe(request):
    uwsgi.add_var("LOGGED_IN_USER", request.user)
    uwsgi.add_var("USER_IS_UGLY", "probably")
    uwsgi.route("uwsgi", "/tmp/foo,0,0")
    return HttpResponse()

and a simple:

[uwsgi]
; the boring part
http-socket = :9090
offload-threads = 2
wsgi-file = sseproject/wsgi.py

response-route = ^/subscribe disableheaders:

5.4.8 What about Websockets?

We have seen how to offload SSE (which is mono-directional); we can offload websockets too (which are bidirectional). The concept is the same: you only need to ensure (as before) that no headers are sent by Django (otherwise the websocket handshake will fail), and then you can change your gevent app:

import time
import uwsgi

def application(e, start_response):
    print e
    uwsgi.websocket_handshake()
    # enter the loop
    while True:
        # monkey patching will prevent sleep() from blocking
        time.sleep(1)
        # send to the client
        uwsgi.websocket_send(str(time.time()))

5.4.9 Using redis or the uWSGI caching framework

Request vars are handy (and funny), but they are limited (see below). If you need to pass a big amount of data between Django and the SSE/websocket app, Redis is a great way (and works perfectly with gevent). Basically, you store the info from Django in Redis and then pass only the hash key (via request vars) to the SSE/websocket app.
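A minimal sketch of that flow (the key layout, the REDIS_KEY var name and the use of the redis-py client are illustrative choices, not something mandated by uWSGI):

import json
import uuid

import redis
import uwsgi
from django.http import HttpResponse

r = redis.StrictRedis()

def subscribe(request):
    # store everything the offloaded app will need under a random key
    key = uuid.uuid4().hex
    r.set(key, json.dumps({'user': str(request.user)}))
    # pass only the key to the sse/websocket app, then offload
    uwsgi.add_var("REDIS_KEY", key)
    uwsgi.route("uwsgi", "/tmp/foo,0,0")
    return HttpResponse()

On the gevent side the app can then recover the data with something like json.loads(r.get(environ['REDIS_KEY'])).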
The same can be accomplished with the uWSGI caching framework, but take into account that Redis has a lot of data primitives, while uWSGI only supports key->value items.

5.4.10 Common pitfalls

• The number of variables you can add per-request is limited by the uwsgi packet buffer (default 4k). You can increase it up to 64k with the --buffer-size option.
• This is the whole point of this article: do not use the Django ORM in your gevent apps unless you know what you are doing!!! (Read: you have a Django database adapter that supports gevent and does not suck compared to the standard ones...)
• Forget about finding a way to disable headers generation in Django. This is a "limit/feature" of its WSGI adapter; use the uWSGI facilities (if available) or do not generate headers in your gevent app. Eventually you can modify wsgi.py in this way:

"""
WSGI config for sseproject project.

It exposes the WSGI callable as a module-level variable named ``application``.

For more information on this file, see
https://docs.djangoproject.com/en/1.6/howto/deployment/wsgi/
"""

import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "sseproject.settings")

from django.core.wsgi import get_wsgi_application
django_application = get_wsgi_application()

def fake_start_response(status, headers, exc_info=None):
    pass

def application(environ, start_response):
    if environ['PATH_INFO'] == '/subscribe':
        return django_application(environ, fake_start_response)
    return django_application(environ, start_response)

CHAPTER 6

uWSGI Subsystems

6.1 The uWSGI alarm subsystem (from 1.3)

As of 1.3, uWSGI includes an alarm system. This subsystem allows the developer/sysadmin to 'announce' special conditions of an app via various channels. For example, you may want to get notified via Jabber/XMPP of a full listen queue, or a harakiri condition.

The alarm subsystem is based on two components: an event monitor and an event action. An event monitor is something waiting for a specific condition (like an event on a file descriptor or a specific log message). As soon as the condition is true, an action (like sending an email) is triggered.

6.1.1 Embedded event monitors

Event monitors can be added via plugins; the uWSGI core includes the following:

• log-alarm triggers an alarm when a specific regexp matches a log line
• alarm-fd triggers an alarm when the specified file descriptor is ready (which is pretty low-level and the basis of most of the alarm plugins)
• alarm-backlog triggers an alarm when the socket backlog queue is full
• alarm-segfault (since 1.9.9) triggers an alarm when uWSGI segfaults
• alarm-cheap uses the main alarm thread rather than creating a dedicated thread for each curl-based alarm

6.1.2 Defining an alarm

You can define an unlimited number of alarms. Each alarm has a unique name.

Currently the following alarm actions are available in the main distribution:

'cmd' - run a command, passing the log line to its stdin
'signal' - generate a uWSGI signal
'mule' - send the log line to a mule
'curl' - pass the log line to a curl url (http, https and smtp are supported)
'xmpp' - send the log line via XMPP/jabber

To define an alarm, use the option --alarm:

--alarm "<name> <plugin:args>"

Remember to quote ONLY when you are defining alarms on the command line.
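For example, on the command line the whole value has to be a single quoted argument (a sketch; the ini file name and the mail command are illustrative):

uwsgi --ini myconf.ini --alarm "mailme cmd:mail -s 'uWSGI alarm' admin@example.com"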
[uwsgi]
alarm = mailme cmd:mail -s 'uWSGI alarm' -a 'From: foobar@example.com' admin@example.com
alarm = cachefull signal:17

Here we define two alarms: mailme and cachefull. The first one invokes the mail binary to send the log line to a mail address; the second one generates a uWSGI signal.

We now need to add rules to trigger the alarms:

[uwsgi]
alarm = mailme cmd:mail -s 'uWSGI alarm' -a 'From: foobar@example.com' admin@example.com
alarm = cachefull signal:17

log-alarm = cachefull,mailme uWSGI listen queue of socket
log-alarm = mailme HARAKIRI ON WORKER

The syntax of log-alarm is:

--log-alarm "<alarm1[,alarm2...]> <regexp>"

In our previous example we defined two conditions using regexps applied to log lines. The first one will trigger both alarms when the listen queue is full, while the second will only invoke 'mailme' when a worker commits harakiri.

6.1.3 Damnit, this... this is the rawest thing I've seen...

You may be right. But if you throw away your "being a cool programmer with a lot of friends and zero money" book for a moment, you will realize just how many things you can do with such a simple system.

Want an example?

[uwsgi]
alarm = jabber xmpp:foobar@jabber.xxx;mysecretpassword;admin@jabber.xxx,admin2@jabber.xxx
log-alarm = jabber ^TERRIBLE ALARM

Now in your app you only need to add

print "TERRIBLE ALARM! The world exploded!!!"

to send a Jabber message to admin@jabber.xxx and admin2@jabber.xxx, without adding any significant overhead to your app (as alarms are triggered by one or more threads in the master process, without bothering the workers).

How about another example? Check this Rack middleware:

class UploadCheck
  def initialize(app)
    @app = app
  end

  def call(env)
    if env['REQUEST_METHOD'] == 'POST' and env['PATH_INFO'] == '/upload'
      puts "TERRIBLE ALARM! An upload has been made!"
    end
    @app.call(env)
  end
end

6.1.4 Protecting from bad rules

Such a versatile system could be open to a lot of ugly bugs, mainly infinite loops. Thus, try to build your regexps carefully. The embedded anti-loop subsystem should protect against log lines wrongly generated by alarm plugins. This system is not perfect, so please double-check your regexps.

If you are building a plugin, be sure to prepend your log messages with the '[uwsgi-alarm' string. These lines will be skipped and directly passed to the log subsystem. A convenience API function is available: uwsgi_log_alarm().

6.1.5 How does log-alarm work?

Enabling log-alarm automatically puts the uWSGI instance in log-master mode, delegating log writes to the master. The alarm subsystem is executed by the master just before passing the log line to the log plugin. Blocking alarm plugins should run in a thread (like the curl and xmpp ones), while the simple ones (like signal and cmd) may run directly in the master.

6.1.6 Available plugins and their syntax

cmd

Run a shell command, passing the log line to its stdin:

cmd:<command>

signal

Raise a uWSGI signal.

signal:[signum]

See also: The uWSGI Signal Framework

mule

Send the log line to a mule waiting for messages.

mule:[mule_id]

See also: uWSGI Mules

curl

Send the log line to a cURL-able URL. This alarm plugin is not compiled in by default, so if you need it just run:

python uwsgiconfig.py --plugin plugins/alarm_curl

curl:<url>[;opt1=val1;opt2=val2]

url is any standard cURL URL, while the options currently exposed are:
• "auth_pass"
• "auth_user"
• "conn_timeout"
• "mail_from"
• "mail_to"
• "method"
• "ssl"
• "subject"
• "timeout"
• "url"
• "ssl_insecure"

So, for sending mail via SMTP AUTH:

[uwsgi]
plugins = alarm_curl
alarm = test curl:smtp://mail.example.com;mail_to=admin@example.com;mail_from=uwsgi@example.com;auth_user=uwsgi;auth_pass=secret;subject=alarm from uWSGI !!!

Or to use Gmail to send alarms:

[uwsgi]
plugins = alarm_curl
alarm = gmail curl:smtps://smtp.gmail.com;mail_to=admin@example.com;auth_user=uwsgi@gmail.com;auth_pass=secret;subject=alarm from uWSGI !!!

Or to PUT the log line to an HTTP server protected with basic authentication:

[uwsgi]
plugins = alarm_curl
alarm = test2 curl:http://192.168.173.6:9191/argh;auth_user=topogigio;auth_pass=foobar

Or to POST the log line to an HTTPS server with a self-generated SSL certificate:

[uwsgi]
plugins = alarm_curl
alarm = test3 curl:https://192.168.173.6/argh;method=POST;ssl_insecure=true

xmpp

Probably the most interesting one of the built-in bunch. You need the libgloox package to build the XMPP alarm plugin (on Debian/Ubuntu, apt-get install gloox-dev).

python uwsgiconfig.py --plugin plugins/alarm_xmpp

xmpp:<jid>;<password>;<recipients>

You can set multiple recipients using ',' as the delimiter.

[uwsgi]
plugins = alarm_xmpp
alarm = jabber xmpp:app@example.it;secret1;foo1@foo.it,foo2@foo.it

An even more interesting thing about the XMPP plugin is that you will see the Jabber account of your app going down when your app dies. :-)

Some XMPP servers (most notably the OSX Server one) require you to bind to a resource. You can do that by appending /resource to the JID.

[uwsgi]
plugins = alarm_xmpp
alarm = jabber xmpp:max@server.local/uWSGI;secret1;foo1@foo.it,foo2@foo.it

speech

A toy plugin for OSX, used mainly for showing off Objective-C integration with uWSGI. It simply uses the OSX speech synthesizer to 'announce' the alarm.

python uwsgiconfig.py --plugin plugins/alarm_speech

[uwsgi]
plugins = alarm_speech
http-socket = :8080
alarm = say speech:
log-alarm = say .*

Turn on your speakers, run uWSGI and start listening...

airbrake

Starting with 1.9.9, uWSGI includes the --alarm-segfault option to raise an alarm when uWSGI segfaults. The airbrake plugin can be used to send segfault backtraces to Airbrake-compatible servers, like Airbrake itself and its open source clone errbit (https://github.com/errbit/errbit). Airbrake support is experimental and it might not fully work in all cases.

plugins = airbrake
alarm = errbit airbrake:http://errbit.domain.com/notifier_api/v2/notices;apikey=APIKEY;subject=uWSGI segfault
alarm-segfault = errbit

Note that alarm-segfault does not require the airbrake plugin. A backtrace can just as well be sent using any other alarm plugin.

6.2 The uWSGI caching framework

Note: This page is about the "new-generation" cache introduced in uWSGI 1.9. For the old-style cache (now simply named "web caching") check the WebCaching framework.

uWSGI includes a very fast, all-in-memory, zero-IPC, SMP-safe, constantly-optimizing, highly-tunable, key-value store simply called "the caching framework". A single uWSGI instance can create an unlimited number of "caches", each one with a different setup and purpose.

6.2.1 Creating a "cache"

To create a cache you use the --cache2 option. It takes a dictionary of arguments specifying the cache configuration. To have a valid cache you need to specify its name and the maximum number of items it can contain:
uwsgi --cache2 name=mycache,items=100 --socket :3031

This will create a cache named "mycache" with a maximum of 100 items. Each item can be at most 64k.

6.2.2 A sad/weird/strange/bad note about "the maximum number of items"

If you start with a 100-item cache you will suddenly note that the true maximum number of items you can use is indeed 99. This is because the first item of the cache is always internally used as the "NULL/None/undef" item. Remember this when you plan your cache configuration.

6.2.3 Configuring the cache (how it works)

The uWSGI cache works like a file system. You have an area for storing keys (metadata) followed by a series of fixed-size blocks in which to store the content of each key. Another memory area, the hash table, is allocated for fast searching of keys. When you request a key, it is first hashed over the hash table. Each hash points to a key in the metadata area. Keys can be linked to manage hash collisions. Each key has a reference to the block containing its value.

6.2.4 Single block (faster) vs. bitmaps (slower)

Warning: Bitmap mode is considered production ready only from uWSGI 2.0.2! (That is, it was buggy before that.)

In the standard ("single block") configuration a key can only map to a single block. Thus, if you have a cache block size of 64k, your items can be at most 65,535 bytes long. Conversely, items smaller than that will still consume 64k of memory. The advantage of this approach is its simplicity and speed: the system does not need to scan the memory for free blocks every time you insert an object in the cache.

If you need a more versatile (but relatively slower) approach, you can enable the "bitmap" mode. Another memory area will be created, containing a map of all of the used and free blocks of the cache. When you insert an item the bitmap is scanned for contiguous free blocks. Blocks must be contiguous; this could lead to a bit of fragmentation, but it is not as big a problem as with disk storage, and you can always tune the block size to reduce fragmentation.

6.2.5 Persistent storage

You can store cache data in a backing store file to implement persistence. As this is managed by mmap() it is almost transparent to the user. You should not rely on this for data safety (disk syncing is managed asynchronously); use it only for performance purposes.

6.2.6 Network access

All of your caches can be accessed over the network. A request plugin named "cache" (modifier1 111) manages requests from external nodes. On a standard monolithic build of uWSGI the cache plugin is always enabled. The cache plugin works in a fully non-blocking way, and it is greenthread/coroutine friendly, so you can use technologies like gevent or Coro::AnyEvent with it safely.

6.2.7 UDP sync

This technique has been inspired by the STUD project, which uses something like this for SSL session scaling (and coincidentally the same approach can be used with the uWSGI SSL/HTTPS routers). Basically, whenever you set/update/delete an item in the cache, the operation is propagated to remote nodes via simple UDP packets. There are no built-in guarantees with UDP syncing, so use it only for very specific purposes, like Scaling SSL connections (uWSGI 1.9).

6.2.8 --cache2 options

This is the list of all of the options (and their aliases) of --cache2.

name

Set the name of the cache. Must be unique in an instance.
max-items || maxitems || items

Set the maximum number of cache items.

blocksize

Set the size (in bytes) of a single block.

blocks

Set the number of blocks in the cache. Useful only in bitmap mode, otherwise the number of blocks is equal to the maximum number of items.

hash

Set the hash algorithm used in the hash table. Current options are "djb33x" (the default) and "murmur2".

hashsize || hash_size

This is the size of the hash table in bytes. Generally 65536 (the default) is a good value. Change it only if you know what you are doing or if you have a lot of collisions in your cache.

keysize || key_size

Set the maximum size of a key, in bytes (default 2048).

store

Set the filename for the persistent storage. If it doesn't exist, the system assumes an empty cache and the file will be created.

store_sync || storesync

Set the number of seconds after which msync() is called to flush the memory cache to disk when in persistent mode. By default it is disabled, leaving the decision-making to the kernel.

store_delete || storedelete

uWSGI will not start if the existing cache store file does not match the configured items/blocksize. If this option is set, uWSGI will delete the existing file and create a new one.

node || nodes

A semicolon-separated list of UDP servers which will receive UDP cache updates.

sync

A semicolon-separated list of uwsgi addresses which the cache subsystem will connect to for getting a full dump of the cache. It can be used for initial cache synchronization. The first node sending a valid dump will stop the procedure.

udp || udp_servers || udp_server || udpserver

A semicolon-separated list of UDP addresses on which to bind the cache to wait for UDP updates.

bitmap

Set to 1 to enable bitmap mode.

lastmod

Setting lastmod to 1 will update the last_modified_at timestamp of the cache on every cache item modification. Enable it if you want to track this value or if other features depend on it. This value is then accessible via the stats socket.

ignore_full

By default uWSGI will print a warning message on every cache set operation if the cache is full. To disable this warning, set this option. Available since 2.0.4.

purge_lru

This option allows the caching framework to evict the Least Recently Used (LRU) item when you try to add a new item to a cache storage that is full. The expires argument described below will be ignored. An item is considered used when it is accessed, added or updated by cache_get(), cache_set() and cache_update(); the existence check done by cache_exists() does not count.

6.2.9 Accessing the cache from your applications using the cache api

You can access the various caches in your instance (or on remote instances) by using the cache API. Currently the following functions are exposed (each language might name them a bit differently from the standard):

• cache_get(key[,cache])
• cache_set(key,value[,expires,cache])
• cache_update(key,value[,expires,cache])
• cache_exists(key[,cache])
• cache_del(key[,cache])
• cache_clear([cache])

If the language/platform calling the cache API differentiates between strings and bytes (like Python 3 and Java) you have to assume that keys are strings and values are bytes (or bytearray in the Java way). Otherwise keys and values are both strings in no specific encoding, as internally the cache values and keys are simple binary blobs.
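As a quick sketch of the Python flavour of this API (the cache name "mycache" matches the --cache2 example above, while the remote address is illustrative; the expires and cache arguments are described just below):

import uwsgi

# store a value in the local cache "mycache" for 60 seconds
uwsgi.cache_set('counter', b'1', 60, 'mycache')

# read it back (None if missing or expired)
value = uwsgi.cache_get('counter', 'mycache')

# overwrite an existing key
uwsgi.cache_update('counter', b'2', 60, 'mycache')

# query a cache hosted by a remote uWSGI instance
remote_value = uwsgi.cache_get('counter', 'mycache@192.168.173.22:4040')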
The expires argument (default 0, meaning disabled) is the number of seconds after which the object is no longer valid (it will be removed by the cache sweeper when purge_lru is not set, see below).

The cache argument is the so-called "magic identifier". Its syntax is cache[@node]. To operate on the local cache "mycache" you set it as "mycache", while to operate on "yourcache" on the uWSGI server at 192.168.173.22 port 4040 the value will be yourcache@192.168.173.22:4040. An empty cache value means the default cache, which is generally the first one initialized. The default value is empty.

All of the network operations are transparent, fully non-blocking, and threads/greenthreads friendly.

6.2.10 The cache sweeper thread

When at least one cache is configured without purge_lru and the master is enabled, a thread named "the cache sweeper" is started. Its main purpose is deleting expired keys from the cache. So, if you want auto-expiring, you need to enable the master.

6.2.11 Web caching

In its first incarnation the uWSGI caching framework was meant only for caching of web pages. The old system has been rebuilt and is now named the WebCaching framework. Enabling the old-style --cache option will create a cache named "default".

6.2.12 Monitoring caches

The stats server exposes cache information. There is an ncurses-based tool (https://pypi.python.org/pypi/uwsgicachetop) using that info for real-time monitoring.

6.3 WebCaching framework

Note: This is a port of the old caching subsystem to the new uWSGI caching API documented in The uWSGI caching framework. Using the options here will create a new-style cache named "default".

To enable web caching, allocate slots for your items using the cache option. The following command line would create a cache that can contain at most 1000 items:

./uwsgi --socket 127.0.0.1:3031 --module mysimpleapp --master --processes 4 --cache 1000

To use the cache in your application:

uwsgi.cache_set("foo_key", "foo_value")  # set a key
value = uwsgi.cache_get("foo_key")  # get a key

6.3.1 Persistent storage

You can store cache data in a backing store file to implement persistence. Simply add the cache-store <filename> option. Every kernel will commit data to the disk at a different rate. You can set if/when to force this with cache-store-sync <n>, where <n> is the number of master cycles to wait before each disk sync.

6.3.2 Cache sweeper

Since uWSGI 1.2, cache item expiration is managed by a thread in the master process, to reduce the risk of deadlock. This thread can be disabled (making item expiry a no-op) with the cache-no-expire option. The frequency of the cache sweeper thread can be set with cache-expire-freq <seconds>. You can make the sweeper log the number of freed items with cache-report-freed-items.

6.3.3 Directly accessing the cache from your web server

location / {
    uwsgi_pass 127.0.0.1:3031;
    uwsgi_modifier1 111;
    uwsgi_modifier2 3;
    uwsgi_param key $request_uri;
}

That's it! Nginx will now get HTTP responses from a remote uwsgi protocol compliant server. Although honestly this is not very useful on its own, as on a cache miss you will see a blank page. A better system, which falls back to a real uwsgi request, would be:

location / {
    uwsgi_pass 192.168.173.3:3032;
    uwsgi_modifier1 111;
    uwsgi_modifier2 3;
    uwsgi_param key $request_uri;
    uwsgi_pass_request_headers off;
    error_page 502 504 = @real;
}

location @real {
    uwsgi_pass 192.168.173.3:3032;
    uwsgi_modifier1 0;
    uwsgi_modifier2 0;
    include uwsgi_params;
}
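For the cache to actually have something to serve, the application has to populate the "default" cache using the same key nginx sends, here $request_uri. A minimal sketch (the page body is illustrative):

import uwsgi

def application(env, start_response):
    body = "<h1>Hello, cached world</h1>"
    # store the body under the request URI so the nginx config above gets a cache hit next time
    uwsgi.cache_set(env['REQUEST_URI'], body)
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [body]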
6.3.4 Django cache backend

If you are running Django, there is a ready-to-use application called django-uwsgi-cache. It is maintained by Ionel Cristian Mărieș at https://github.com/ionelmc/django-uwsgi-cache and is available on PyPI.

6.4 The uWSGI cron-like interface

uWSGI's master has an internal cron-like facility that can generate events at predefined times. You can use it:

• via the uWSGI API, in which case cron events will generate uWSGI signals
• directly via options, in which case events will run shell commands

6.4.1 uWSGI signal based

The uwsgi.add_cron() function is the interface to the uWSGI signal cron facility. The syntax is:

uwsgi.add_cron(signal, minute, hour, day, month, weekday)

The last 5 arguments work similarly to a standard crontab, but instead of "*" use -1, and instead of "/2", "/3", etc. use -2, -3, etc.

import uwsgi

def five_o_clock_on_the_first_day_of_the_month(signum):
    print "It's 5 o'clock of the first day of the month."

uwsgi.register_signal(99, "", five_o_clock_on_the_first_day_of_the_month)
uwsgi.add_cron(99, 0, 5, 1, -1, -1)

6.4.2 Timers vs. cron

Recurring events not related to specific dates should use timers/rb_timers. When you are interested in a specific date/hour, use cron. For example:

uwsgi.add_cron(99, -1, -1, -1, -1, -1)  # ugly, bad and inefficient way to run signal 99 every minute :(
uwsgi.add_timer(99, 60)  # much better

6.4.3 Notes

• day and weekday are ORed, as in the original crontab specification.
• By default, you can define up to 64 signal-based cron jobs per master. This value may be increased in uwsgi.h.

6.4.4 Option-based cron

You can define cron tasks directly in your configuration with the cron option. You can specify an unlimited number of option-based cron records. The syntax is the same as the signal-based ones. For example:

[uwsgi]
cron = 59 2 -1 -1 -1 /usr/bin/backup_my_home --recursive
cron = 9 11 -1 -1 2 /opt/dem/bin/send_reminders

[uwsgi]
; every two hours
cron = -1 -2 -1 -1 -1 /usr/bin/backup_my_home --recursive

Legion crons

When your instance is part of a Legion (see The uWSGI Legion subsystem), you can configure it to run crons only if it is the Lord of the specified Legion:

[uwsgi]
legion = mycluster 225.1.1.1:1717 100 bf-cbc:hello
legion-node = mycluster 225.1.1.1:1717
; every two hours
legion-cron = mycluster -1 -2 -1 -1 -1 /usr/bin/backup_my_home --recursive

Unique crons

Note: This feature is available since 1.9.11.

Some commands can take a long time to finish or just hang doing their thing. Sometimes this is okay, but there are also cases when running multiple instances of the same command can be dangerous. For such cases the unique-cron and unique-legion-cron options were added. The syntax is the same as with cron and legion-cron, but the difference is that uWSGI will keep track of the execution state and not execute the cronjob again until it is complete.

Example:

[uwsgi]
cron = -1 -1 -1 -1 -1 sleep 70

This would execute sleep 70 every minute, but since the sleep command runs longer than our execution interval, we would end up with a growing number of sleep processes. To fix this we can simply replace cron with unique-cron and uWSGI will make sure that only a single sleep process is running. A new process will be started right after the previous one finishes.

Harakiri

Note: Available since 1.9.11.
--cron-harakiri will enforce a time limit on executed commands. If any command takes longer, it will be killed.

[uwsgi]
cron = sleep 30
cron-harakiri = 10

This will kill the cron command after 10 seconds. Note that cron-harakiri is a global limit: it affects all cron commands. To set a per-command time limit, use the cron2 option (see below).

New syntax for cron options

Note: Available since 1.9.11.

To allow better control over crons, a new option was added to uWSGI:

[uwsgi]
cron2 = option1=value,option2=value command to execute

Example:

[uwsgi]
cron2 = minute=-2,unique=1 sleep 130

This will spawn a unique cron command sleep 130 every 2 minutes. The option list is optional; the available options for every cron are:

• minute - minute part of the crontab entry, default is -1 (interpreted as *)
• hour - hour part of the crontab entry, default is -1 (interpreted as *)
• day - day part of the crontab entry, default is -1 (interpreted as *)
• month - month part of the crontab entry, default is -1 (interpreted as *)
• week - week part of the crontab entry, default is -1 (interpreted as *)
• unique - marks the cron command as unique (see above), default is 0 (not unique)
• harakiri - set a harakiri timeout (in seconds) for this cron command, default is 0 (no harakiri)
• legion - set the legion name to use for this cron command; cron legions are only executed on the legion lord node

Note that you cannot use spaces in the option list (minute=1, hour=2 will not work, but minute=1,hour=2 will work just fine). If any option is missing, its default value is used.

[uwsgi]
# execute "my command" every minute (-1 -1 -1 -1 -1 crontab).
cron2 = my command
# execute the unique command "/usr/local/bin/backup.sh" at 5:30 every day.
cron2 = minute=30,hour=5,unique=1 /usr/local/bin/backup.sh

[uwsgi]
legion = mycluster 225.1.1.1:1717 100 bf-cbc:hello
legion-node = mycluster 225.1.1.1:1717
cron2 = minute=-10,legion=mycluster my command

This will run my command every 10 minutes, only on the Legion lord node.

The following will disable harakiri for my command, but other cron commands will still be killed after 10 seconds:

[uwsgi]
cron-harakiri = 10
cron2 = harakiri=0 my command
cron2 = my second command

6.5 The uWSGI FastRouter

For advanced setups uWSGI includes the "fastrouter" plugin, a proxy/load-balancer/router speaking the uwsgi protocol. It is built in by default. You can put it between your webserver and your real uWSGI instances to have more control over the routing of HTTP requests to your application servers.

6.5.1 Getting started

First of all you have to run the fastrouter, binding it to a specific address. Multiple addresses are supported as well.

uwsgi --fastrouter 127.0.0.1:3017 --fastrouter /tmp/uwsgi.sock --fastrouter @foobar

Note: This is the most useless Fastrouter setup in the world.

Congratulations! You have just run the most useless Fastrouter setup in the world. Simply binding the fastrouter to a couple of addresses will not instruct it on how to route requests. To give it intelligence you have to tell it how to route requests.

6.5.2 Way 1: --fastrouter-use-base

This option tells the fastrouter to connect to a UNIX socket with the same name as the requested host, in a specified directory.

uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-base /tmp/sockets/

If you receive a request for example.com, the fastrouter will forward the request to /tmp/sockets/example.com.
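For this to work, each backend instance just binds to the matching socket name, for example (a sketch; the application file is illustrative):

uwsgi --socket /tmp/sockets/example.com --wsgi-file myapp.py --master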
6.5.3 Way 2: --fastrouter-use-pattern

Same as the previous setup, but you will be able to use a pattern, with %s mapping to the requested key/hostname.

uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-pattern /tmp/sockets/%s/uwsgi.sock

Requests for example.com will be mapped to /tmp/sockets/example.com/uwsgi.sock.

6.5.4 Way 3: --fastrouter-use-cache

You can store the key/value mappings in the uWSGI cache. Choose a way to fill the cache, for instance a Python script like this...

import uwsgi

# Requests for example.com on port 8000 will go to 127.0.0.1:4040
uwsgi.cache_set("example.com:8000", "127.0.0.1:4040")

# Requests for unbit.it will go to 127.0.0.1:4040 with the modifier1 set to 5 (perl/PSGI)
uwsgi.cache_set("unbit.it", "127.0.0.1:4040,5")

Then run your Fastrouter-enabled server, telling it to run the script first:

uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-cache --cache 100 --file foobar.py

6.5.5 Way 4: --fastrouter-subscription-server

This is probably one of the best ways for massive auto-scaling hosting. It uses the subscription server to allow instances to announce themselves and subscribe to the fastrouter.

uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-subscription-server 192.168.0.100:7000

This will spawn a subscription server on address 192.168.0.100, port 7000.

Now you can spawn your instances, subscribing to the fastrouter:

uwsgi --socket :3031 -M --subscribe-to 192.168.0.100:7000:example.com
uwsgi --socket :3032 -M --subscribe-to 192.168.0.100:7000:unbit.it,5 --subscribe-to 192.168.0.100:7000:uwsgi.it

As you probably noted, you can subscribe to multiple fastrouters, with multiple keys. Multiple instances subscribing to the same fastrouter with the same key will automatically get load balanced and monitored. Handy, isn't it? Like with the caching key/value store, modifier1 can be set with a comma (the ,5 above).

Another feature of the subscription system is avoiding having to choose ports. You can bind instances to a random port and the subscription system will send the real value to the subscription server.

uwsgi --socket 192.168.0.100:0 -M --subscribe-to 192.168.0.100:7000:example.com

Mapping files

If you need to specify a massive amount of keys, you can use a mapping file instead.

# mappings.txt
unbit.it
unbit.it:8000,5
uwsgi.it
projects.unbit.it

uwsgi --socket :3031 -M --subscribe-to 192.168.0.100:7000:@mappings.txt

6.5.6 Way 5: --fastrouter-use-code-string

If Darth Vader wears a t-shirt with your face (and in some other corner cases too), you can customize the fastrouter with code-driven mappings. Choose a uWSGI-supported language (like Python or Lua) and define your mapping function.

def get(key):
    return '127.0.0.1:3031'

uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-code-string 0:mapper.py:get

This will instruct the fastrouter to load the script mapper.py using plugin (modifier1) 0 and call the 'get' global, passing it the key. In the previous example you will always route requests to 127.0.0.1:3031. Let's create a more advanced system, for fun!
domains = {}
domains['example.com'] = {'nodes': ('127.0.0.1:3031', '192.168.0.100:3032'), 'node': 0}
domains['unbit.it'] = {'nodes': ('127.0.0.1:3035,5', '192.168.0.100:3035,5'), 'node': 0}

DEFAULT_NODE = '192.168.0.1:1717'

def get(key):
    if key not in domains:
        return DEFAULT_NODE
    # get the node to forward requests to
    nodes = domains[key]['nodes']
    current_node = domains[key]['node']
    value = nodes[current_node]
    # round robin :P
    next_node = current_node + 1
    if next_node >= len(nodes):
        next_node = 0
    domains[key]['node'] = next_node
    return value

uwsgi --fastrouter 127.0.0.1:3017 --fastrouter-use-code-string 0:megamapper.py:get

With only a few lines we have implemented round-robin load balancing with a fallback node. Pow! You could add some form of node monitoring, start threads in the script, or other insane things. (Be sure to add them to the docs!)

Attention: Remember not to put blocking code in your functions. The fastrouter is totally non-blocking, do not ruin it!

6.5.7 Cheap mode and shared sockets

A common setup is having a webserver/proxy connected to a fastrouter and a series of uWSGI instances subscribed to it. Normally you'd use the webserver node as a uWSGI instance node too. This node would subscribe to the local fastrouter. Well... don't waste cycles on that! Shared sockets are a way to share sockets among various uWSGI components. Let's use one to share a socket between the fastrouter and a uWSGI instance.

[uwsgi]
; create a shared socket (the webserver will connect to it)
shared-socket = 127.0.0.1:3031
; bind the fastrouter to the shared socket
fastrouter = =0
; bind an instance to the same socket
socket = =0
; having a master is always a good thing...
master = true
; our subscription server
fastrouter-subscription-server = 192.168.0.100:4040
; our app
wsgi-file = /var/www/myheavyapp.wsgi
; a bunch of processes
processes = 4
; and put the fastrouter in cheap mode
fastrouter-cheap = true

With this setup your requests will go directly to your app (no proxy overhead) or to the fastrouter (to pass requests to remote nodes). When the fastrouter is in cheap mode, it will not respond to requests until a node is available. This means that when there are no nodes subscribed, only your local app will respond. When all of the nodes go down, the fastrouter will return to cheap mode. Seeing a pattern? Another step towards awesome autoscaling.

6.5.8 Notes

• The fastrouter uses the following vars (in order of precedence) to choose the key to use:
  – UWSGI_FASTROUTER_KEY - the most versatile, as it doesn't depend on the request in any way
  – HTTP_HOST
  – SERVER_NAME
• You can increase the number of async events the fastrouter can manage (by default it is system-dependent) using --fastrouter-events
• You can change the default timeout with --fastrouter-timeout
• By default the fastrouter will set up fd socket passing when used over unix sockets. If you do not want it, add --no-fd-passing

6.6 uWSGI internal routing

Updated to 1.9

As of uWSGI 1.9, a programmable internal routing subsystem is available (older releases after 1.1 have a less featureful version). You can use the internal routing subsystem to dynamically alter the way requests are handled. For example, you can use it to trigger a 301 redirect on specific URLs, or to serve content from the cache under specific conditions.

The internal routing subsystem is inspired by Apache's mod_rewrite and Linux's iptables command.
Please, before blasting it for being messy, not elegant nor Turing-complete, remember that it must be FAST and only FAST. If you need elegance and more complexity, do that in your code.

6.6.1 The routing chains

During the request cycle, various "chains" are traversed. Each chain contains a routing table (see below). Chains can be "recursive": a "recursive" chain can be called multiple times in a request cycle.

This is the order of chains:

request: applied before the request is passed to the plugin
error: applied as soon as an HTTP status code is generated (recursive chain)
response: applied after the last response header has been generated (just before sending the body)
final: applied after the response has been sent to the client

The request chain is (by convention) the 'default' one, so its options are not prefixed, while the others require a prefix. Example:

route-user-agent -> happens in the request chain

while

response-route-uri -> happens in the response chain

6.6.2 The internal routing table

The internal routing table is a sequence of "rules" executed one after another (forward jumps are allowed too). Each rule is composed of a "subject", a "condition" and an "action". The "condition" is generally a PCRE regexp applied to the subject: if it matches, the action is triggered. Subjects are request variables. Currently the following subjects are supported:

• host (check HTTP_HOST)
• uri (check REQUEST_URI)
• qs (check QUERY_STRING)
• remote-addr (check REMOTE_ADDR)
• remote-user (check REMOTE_USER)
• referer (check HTTP_REFERER)
• user-agent (check HTTP_USER_AGENT)
• status (check the HTTP response status code, not available in the request chain)
• default (the default subject, maps to PATH_INFO)

In addition to this, a pluggable system of lower-level conditions is available. You can access this system using the --route-if option. Currently the following checks are supported:

• exists (check if the subject exists in the filesystem)
• isfile (check if the subject is a file)
• isdir (check if the subject is a directory)
• isexec (check if the subject is an executable file)
• equal/isequal/eq/== (check if the subject is equal to the specified pattern)
• ishigherequal/>=
• ishigher/>
• islower/<
• islowerequal/<=
• startswith (check if the subject starts with the specified pattern)
• endswith (check if the subject ends with the specified pattern)
• regexp/re (check if the subject matches the specified regexp)
• empty (check if the subject is empty)
• contains

When a check requires a pattern (like with 'equal' or 'regexp') you split it from the subject with a semicolon:

; never matches
route-if = equal:FOO;BAR log:never here
; matches
route-if = regexp:FOO;^F log:starts with F

Actions are the functions to run if a rule matches. Actions are exported by plugins and have a return value.

6.6.3 Action return values

Each action has a return value which tells the routing engine what to do next. The following return codes are supported:

• NEXT (continue to the next rule)
• CONTINUE (stop scanning the internal routing table and run the request)
• BREAK (stop scanning the internal routing table and close the request)
• GOTO x (go to rule x)

When a rule does not match, NEXT is assumed.
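Putting conditions, actions and return values together, here is a sketch of a filesystem check combined with the static action (which returns BREAK, so matching requests never reach the following rules; the /var/www docroot is illustrative):

[uwsgi]
; if the requested path exists as a file under /var/www, serve it directly and stop routing
route-if = isfile:/var/www${PATH_INFO} static:/var/www${PATH_INFO}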
6.6.4 The first example

[uwsgi]
route-user-agent = .*curl.* redirect:http://uwsgi.it
route-remote-addr = ^127\.0\.0\.1$ break:403 Forbidden
route = ^/test log:someone called /test
route = \.php$ rewrite:/index.php
route = .* addheader:Server: my uWSGI server
route-host = ^localhost$ logvar:local=1
route-uri = ^/foo/(.*)\.jpg$ cache:key=$1.jpg
route-if = equal:${PATH_INFO};/bad break:500 Internal Server Error

The previous rules build the following table:

• if the HTTP_USER_AGENT var contains 'curl', redirect the request to http://uwsgi.it (code 302, action returns BREAK)
• if REMOTE_ADDR is '127.0.0.1', return a 403 Forbidden (action returns BREAK)
• if PATH_INFO starts with /test, print the string 'someone called /test' in the logs (action returns NEXT)
• if PATH_INFO ends with '.php', rewrite it to /index.php (action returns NEXT)
• for every PATH_INFO, add the HTTP header 'Server: my uWSGI server' to the response (action returns NEXT)
• if HTTP_HOST is localhost, add the logvar 'local' setting it to '1'
• if REQUEST_URI starts with /foo and ends with .jpg, get it from the uWSGI cache using the supplied key (built from the regexp group) (action returns BREAK)
• if PATH_INFO is equal to /bad, throw a 500 error

6.6.5 Accessing request vars

In addition to PCRE placeholders/groups (using $1 to $9) you can access request variables (PATH_INFO, SCRIPT_NAME, REQUEST_METHOD...) using the ${VAR} syntax.

[uwsgi]
route-user-agent = .*curl.* redirect:http://uwsgi.it${REQUEST_URI}

6.6.6 Accessing cookies

You can access a cookie value using the ${cookie[name]} syntax:

[uwsgi]
route = ^/foo log:${cookie[foobar]}

This will log the content of the 'foobar' cookie of the current request.

6.6.7 Accessing query string items

You can access the values of the HTTP query string using the ${qs[name]} syntax:

[uwsgi]
route = ^/foo log:${qs[foobar]}

This will log the content of the 'foobar' item of the current request's query string.

6.6.8 Pluggable routing variables

Both the cookie and qs vars are so-called "routing vars". They are pluggable, so external plugins can add new vars to add new features to your application. (Check the The GeoIP plugin page for an example of this.) A number of embedded routing variables are also available.

• mime – returns the mime type of the specified var: ${mime[REQUEST_URI]}

[uwsgi]
route = ^/images/(.+) addvar:MYFILE=$1.jpg
route = ^/images/ addheader:Content-Type: ${mime[MYFILE]}

• time – returns the time/date in various forms. The only one supported (for now) is time[unix], returning the epoch
• httptime – returns an HTTP date, adding the numeric argument (if specified) to the current time (use an empty arg for the current server time)

[uwsgi]
; add Date header
route-run = addheader:Date ${httptime[]}

• math – requires matheval support. Example: math[CONTENT_LENGTH+1]
• base64 – encodes the specified var in base64
• hex – encodes the specified var in hex
• uwsgi – returns internal uWSGI information; uwsgi[wid], uwsgi[pid], uwsgi[uuid] and uwsgi[status] are currently supported

6.6.9 Is --route-if not enough? Why --route-uri and friends?

This is a good question. You just need to always remember that uWSGI is about versatility and performance. Gaining cycles is always good. The --route-if option, while versatile, cannot be optimized, as all of its parts have to be recomputed at every request.
This is obviously very fast, but the --route-uri option (and friends) can be pre-optimized (during startup) to map directly to the request memory areas, so if you can use them, you definitely should. :)

6.6.10 GOTO

Yes, the most controversial construct of the whole information technology industry (and history) is here. You can make forward (only forward!) jumps to specific points of the internal routing table. You can set labels to mark specific points of the table, or, if you are brave (or foolish), jump directly to a rule number. Rule numbers are printed on server startup, but please use labels.

[uwsgi]
route-host = ^localhost$ goto:localhost
route-host = ^sid\.local$ goto:sid.local
route = .* last:

route-label = sid.local
route-user-agent = .*curl.* redirect:http://uwsgi.it
route-remote-addr = ^192\.168\..* break:403 Forbidden
route = ^/test log:someone called /test
route = \.php$ rewrite:/index.php
route = .* addheader:Server: my sid.local server
route = .* logvar:local=0
route-uri = ^/foo/(.*)\.jpg$ cache:key=$1.jpg
route = .* last:

route-label = localhost
route-user-agent = .*curl.* redirect:http://uwsgi.it
route-remote-addr = ^127\.0\.0\.1$ break:403 Forbidden
route = ^/test log:someone called /test
route = \.php$ rewrite:/index.php
route = .* addheader:Server: my uWSGI server
route = .* logvar:local=1
route-uri = ^/foo/(.*)\.jpg$ cache:key=$1.jpg
route = .* last:

The example is like the previous one, but with some differences between domains. Check the use of "last:" to interrupt the routing table scan.

You can rewrite the first 2 rules as one:

[uwsgi]
route-host = (.*) goto:$1

6.6.11 Collecting response headers

As we have already seen, each uWSGI request has a set of variables associated with it. They are generally the CGI vars passed by the webserver, but you can extend them with other variables too (check the 'addvar' action). uWSGI 1.9.16 added a new feature allowing you to store the content of a response header in a request var. This simplifies the writing of more advanced rules. For example, you may want to gzip all of the text/html responses:

[uwsgi]
; store the Content-Type response header in the MY_CONTENT_TYPE var
collect-header = Content-Type MY_CONTENT_TYPE
; if the response is text/html, and the client supports it, gzip it
response-route-if = equal:${MY_CONTENT_TYPE};text/html goto:gzipme
response-route-run = last:

response-route-label = gzipme
; gzip only if the client supports it
response-route-if = contains:${HTTP_ACCEPT_ENCODING};gzip gzip:

6.6.12 The available actions

continue/last

Return value: CONTINUE

Stop the scanning of the internal routing table and continue to the selected request handler.

break

Return value: BREAK

Stop scanning the internal routing table and close the request. Can optionally return the specified HTTP status code:

[uwsgi]
route = ^/notfound break:404 Not Found
route = ^/bad break:
route = ^/error break:500

Note: break doesn't support request vars, because it is intended to notify the browser about the error, not the end user. That said, the following rule will send exactly what it reads to the browser (i.e. without ${REMOTE_ADDR} being translated to the remote IP address):

[uwsgi]
route-remote-addr = ^127\.0\.0\.1$ break:403 Forbidden for ip ${REMOTE_ADDR}

If you really want to do wacky stuff, see clearheaders.

return/break-with-status

Return value: BREAK

return uses uWSGI built-in status codes and returns both the status code and a message body.
It’s similar to break but as mentioned above break doesn’t have the error message body. return:403 is equivalent to following: [uwsgi] route-run= clearheaders:403 Forbidden route-run = addheader:Content-Type: text/plain route-run = addheader:Content-Length: 9 route-run = send:Forbidden route-run = break: log Return value: NEXT Print the specified message in the logs. [uwsgi] route= ^/logme/(.) log:hey i am printing $1 372 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 logvar Return value: NEXT Add the specified logvar. [uwsgi] route= ^/logme/(.) logvar:item=$1 goto Return value: NEXT Make a forward jump to the specified label or rule position addvar Return value: NEXT Add the specified CGI (environment) variable to the request. [uwsgi] route= ^/foo/(.) addvar:FOOVAR=prefix$1suffix addheader Return value: NEXT Add the specified HTTP header to the response. [uwsgi] route= ^/foo/(.) addheader:Foo: Bar delheader//remheader Return value: NEXT Remove the specified HTTP header from the response. [uwsgi] route= ^/foo/(.) delheader:Foo signal Return value: NEXT Raise the specified uwsgi signal. send Return value: NEXT Extremely advanced (and dangerous) function allowing you to add raw data to the response. 6.6. uWSGI internal routing 373 uWSGI Documentation, Release 2.0 [uwsgi] route= ^/foo/(.) send:destroy the world send-crnl Return value: NEXT Extremely advanced (and dangerous) function allowing you to add raw data to the response, suffixed with rn. [uwsgi] route= ^/foo/(.) send-crnl:HTTP/1.0 100 Continue redirect/redirect-302 Return value: BREAK Plugin: router_redirect Return a HTTP 302 Redirect to the specified URL. redirect-permanent/redirect-301 Return value: BREAK Plugin: router_redirect Return a HTTP 301 Permanent Redirect to the specified URL. rewrite Return value: NEXT Plugin: router_rewrite A rewriting engine inspired by Apache mod_rewrite. Rebuild PATH_INFO and QUERY_STRING according to the specified rules before the request is dispatched to the request handler. [uwsgi] route-uri= ^/foo/(. *) rewrite:/index.php?page=$1.php rewrite-last Alias for rewrite but with a return value of CONTINUE, directly passing the request to the request handler next. uwsgi Return value: BREAK Plugin: router_uwsgi Rewrite the modifier1, modifier2 and optionally UWSGI_APPID values of a request or route the request to an external uwsgi server. 374 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 [uwsgi] route= ^/psgi uwsgi:127.0.0.1:3031,5,0 This configuration routes all of the requests starting with /psgi to the uwsgi server running on 127.0.0.1:3031 setting modifier1 to 5 and modifier2 to 0. If you only want to change the modifiers without routing the request to an external server, use the following syntax. [uwsgi] route= ^/psgi uwsgi:,5,0 To set a specific UWSGI_APPID value, append it. [uwsgi] route= ^/psgi uwsgi:127.0.0.1:3031,5,0,fooapp The subrequest is async-friendly (engines such as gevent or ugreen are supported) and if offload threads are available they will be used. http Return value: BREAK Plugin: router_http Route the request to an external HTTP server. [uwsgi] route= ^/zope http:127.0.0.1:8181 You can substitute an alternative Host header with the following syntax: [uwsgi] route= ^/zope http:127.0.0.1:8181,myzope.uwsgi.it static Return value: BREAK Plugin: router_static Serve a static file from the specified physical path. 
[uwsgi] route= ^/logo static:/var/www/logo.png basicauth Return value: NEXT or BREAK 401 on failed authentication Plugin: router_basicauth Four syntaxes are supported. • basicauth:realm,user:password – a simple user:password mapping • basicauth:realm,user: – only authenticates username 6.6. uWSGI internal routing 375 uWSGI Documentation, Release 2.0 • basicauth:realm,htpasswd – use a htpasswd-like file. All POSIX crypt() algorithms are supported. This is _not_ the same behavior as Apache’s traditional htpasswd files, so use the -d flag of the htpasswd utility to create compatible files. • basicauth:realm, – Useful to cause a HTTP 401 response immediately. As routes are parsed top-bottom, you may want to raise that to avoid bypassing rules. Example: [uwsgi] route= ^/foo basicauth-next:My Realm,foo:bar route= ^/foo basicauth:My Realm,foo2:bar2 route= ^/bar basicauth:Another Realm,kratos: Example: using basicauth for Trac [uwsgi] ; load plugins (if required) plugins= python,router_basicauth ; bind to port 9090 using http protocol http-socket= :9090 ; set trac instance path env= TRAC_ENV=myinstance ; load trac module= trac.web.main:dispatch_request ; trigger authentication on /login route= ^/login basicauth-next:Trac Realm,pippo:pluto route= ^/login basicauth:Trac Realm,foo:bar ;high performance file serving static-map= /chrome/common=/usr/local/lib/python2.7/dist-packages/trac/htdocs basicauth-next same as basicauth but returns NEXT on failed authentication. ldapauth Return value: NEXT or BREAK 401 on failed authentication Plugin: ldap This auth router is part of the LDAP plugin, so it has to be loaded in order for this to be available. It’s like the basicauth router, but uses an LDAP server for authentication, syntax: ldapauth:realm,options Available options: • url - LDAP server URI (required) • binddn - DN used for binding. Required if the LDAP server does not allow anonymous searches. • bindpw - password for the binddn user. • basedn - base DN used when searching for users (required) • filter - filter used when searching for users (default is “(objectClass=*)”) • attr - LDAP attribute that holds user login (default is “uid”) 376 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 • loglevel - 0 - don’t log any binds, 1 - log authentication errors, 2 - log both successful and failed binds Example: route= ^/protected ldapauth:LDAP auth realm,url=ldap://ldap.domain.com;basedn=ou=users,dc=domain;binddn=uid=proxy,ou=users,dc=domain;bindpw=password;loglevel=1;filter=(objectClass=posixAccount) ldapauth-next Same as ldapauth but returns NEXT on failed authentication. cache Return value: BREAK Plugin: router_cache cachestore/cache-store cachevar cacheset memcached rpc The “rpc” routing instruction allows you to call uWSGI RPC functions directly from the routing subsystem and forward their output to the client. [uwsgi] http-socket= :9090 route= ^/foo addheader:Content-Type: text/html route= ^/foo rpc:hello ${REQUEST_URI} ${HTTP_USER_AGENT} route= ^/bar/(.+)$ rpc:test $1 ${REMOTE_ADDR} uWSGI %V route= ^/pippo/(.+)$ rpc:test@127.0.0.1:4141 $1 ${REMOTE_ADDR} uWSGI %V import= funcs.py call Plugin: rpc rpcret Plugin: rpc rpcret calls the specified rpc function and uses its return value as the action return code (next, continue, goto, etc) rpcblob//rpcnext Plugin: rpc rpcnext/rpcblob calls the specified RPC function, sends the response to the client and continues to the next rule. 6.6. uWSGI internal routing 377 uWSGI Documentation, Release 2.0 rpcraw Plugin: rpc access spnego In development... 
radius In development... xslt See also: The XSLT plugin ssi See also: SSI (Server Side Includes) plugin gridfs See also: The GridFS plugin 378 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 donotlog chdir setapp setuser sethome setfile setscriptname setprocname alarm flush fixcl cgi Plugin: cgi cgihelper Plugin: cgi access Plugin: router_access cache-continue Plugin: router_cache cachevar Plugin: router_cache cacheinc Plugin: router_cache cachedec Plugin: router_cache 6.6. uWSGI internal routing 379 uWSGI Documentation, Release 2.0 cachemul Plugin: router_cache cachediv Plugin: router_cache proxyhttp Plugin: router_http memcached Plugin: router_memcached memcached-continue Plugin: router_memcached memcachedstore Plugin: router_memcached memcached-store Plugin: router_memcached redis Plugin: router_redis redis-continue Plugin: router_redis redisstore Plugin: router_redis redis-store Plugin: router_redis proxyuwsgi Plugin: router_uwsgi 380 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 harakiri Set harakiri for the current request. file Directly transfer the specified filename without using acceleration (sendfile, offloading, etc.). [uwsgi] http-socket= :9090 route-run= file:filename=/var/www/${PATH_INFO} clearheaders clear the response headers, setting a new HTTP status code, useful for resetting a response [uwsgi] http-socket= :9090 response-route= ^/foo goto:foobar response-route-run= last: response-route-label= foobar response-route-run= clearheaders:404 Not Found response-route-run= addheader:Content-Type: text/html resetheaders alias for clearheaders 6.7 The uWSGI Legion subsystem As of uWSGI 1.9-dev a new subsystem for clustering has been added: The Legion subsystem. A Legion is a group of uWSGI nodes constantly fighting for domination. Each node has a valor value (different from the others, if possible). The node with the highest valor is the Lord of the Legion (or if you like a less gaming nerd, more engineer-friendly term: the master). This constant fight generates 7 kinds of events: 1. setup - when the legion subsystem is started on a node 2. join - the first time quorum is reached, only on the newly joined node 3. lord - when this node becomes the lord 4. unlord - when this node loses the lord title 5. death - when the legion subsystem is shutting down 6. node-joined - when any new node joins our legion 7. node-left - when any node leaves our legion You can trigger actions every time such an event rises. Note: openssl headers must be installed to build uWSGI with Legion support. 6.7. The uWSGI Legion subsystem 381 uWSGI Documentation, Release 2.0 6.7.1 IP takeover This is a very common configuration for clustered environments. The IP address is a resource that must be owned by only one node. For this example, that node is our Lord. If we configure a Legion right (remember, a single uWSGI instances can be a member of all of the legions you need) we could easily implement IP takeover. [uwsgi] legion= clusterip 225.1.1.1:4242 98 bf-cbc:hello legion-node= clusterip 225.1.1.1:4242 legion-lord= clusterip cmd:ip addr add 192.168.173.111/24 dev eth0 legion-lord= clusterip cmd:arping -c 3 -S 192.168.173.111 192.168.173.1 legion-setup= clusterip cmd:ip addr del 192.168.173.111/24 dev eth0 legion-unlord= clusterip cmd:ip addr del 192.168.173.111/24 dev eth0 legion-death= clusterip cmd:ip addr del 192.168.173.111/24 dev eth0 In this example we join a legion named clusterip. To receive messages from the other nodes we bind on the multicast address 225.1.1.1:4242. 
The valor of this node will be 98 and each message will be encrypted using Blowfish in CBC with the shared secret hello. The legion-node option specifies the destination of our announce messages. As we are using multicast we only need to specify a single “node”. The last options are the actions to trigger on the various states of the cluster. For an IP takeover solution we simply rely on the Linux iproute commands to set/unset ip addresses and to send an extra ARP message to announce the change. Obviously this specific example requires root privileges or the CAP_NET_ADMIN Linux capability, so be sure to not run untrusted applications on the same uWSGI instance managing IP takeover. The Quorum To choose a Lord each member of the legion has to cast a vote. When all of the active members of a legion agree on a Lord, the Lord is elected and the old Lord is demoted. Every time a new node joins or leaves a legion the quorum is re-computed and logged to the whole cluster. Choosing the Lord Generally the node with the higher valor is chosen as the Lord, but there can be cases where multiple nodes have the same valor. When a node is started a UUID is assigned to it. If two nodes with same valor are found the one with the lexicographically higher UUID wins. Split brain Even though each member of the Legion has to send a checksum of its internal cluster-membership, the system is still vulnerable to the split brain problem. If a node loses network connectivity with the cluster, it could believe it is the only node available and starts going in Lord mode. For many scenarios this is not optimal. If you have more than 2 nodes in a legion you may want to consider tun- ing the quorum level. The quorum level is the amount of votes (as opposed to nodes) needed to elect a lord. legion-quorum is the option for the job. You can reduce the split brain problem asking the Legion subsystem to check for at least 2 votes: [uwsgi] legion= clusterip 225.1.1.1:4242 98 bf-cbc:hello legion-node= clusterip 225.1.1.1:4242 382 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 legion-quorum= clusterip 2 legion-lord= clusterip cmd:ip addr add 192.168.173.111/24 dev eth0 legion-lord= clusterip cmd:arping -c 3 -S 192.168.173.111 192.168.173.1 legion-setup= clusterip cmd:ip addr del 192.168.173.111/24 dev eth0 legion-unlord= clusterip cmd:ip addr del 192.168.173.111/24 dev eth0 legion-death= clusterip cmd:ip addr del 192.168.173.111/24 dev eth0 As of 1.9.7 you can use nodes with valor 0 (concept similar to MongoDB’s Arbiter Nodes), such nodes will be counted when checking for quorum but may never become The Lord. This is useful when you only need a couple nodes while protecting against split-brain. Actions Each one of the four phases of a legion can trigger an action. The actions system is modular so you can add new kinds of actions. Currently the supported actions are: 6.7.2 cmd: Run a shell command. 6.7.3 signal: Raise a uWSGI signal. 6.7.4 log: Log a message. For example you could combine the log action with the alarm subsystem to have cluster monitoring for free. Multicast, broadcast and unicast Even if multicast is probably the easiest way to implement clustering it is not available in all networks. If multicast is not an option, you can rely on normal IP addresses. 
Just bind to an address and add all of the legion-node options you need: [uwsgi] legion= mycluster 192.168.173.17:4242 98 bf-cbc:hello legion-node= mycluster 192.168.173.22:4242 legion-node= mycluster 192.168.173.30:4242 legion-node= mycluster 192.168.173.5:4242 This is for a cluster of 4 nodes (this node + 3 other nodes) Multiple Legions You can join multiple legions in the same instance. Just remember to use different addresses (ports in case of multicast) for each legion. 6.7. The uWSGI Legion subsystem 383 uWSGI Documentation, Release 2.0 [uwsgi] legion= mycluster 192.168.173.17:4242 98 bf-cbc:hello legion-node= mycluster 192.168.173.22:4242 legion-node= mycluster 192.168.173.30:4242 legion-node= mycluster 192.168.173.5:4242 legion= mycluster2 225.1.1.1:4243 99 aes-128-cbc:secret legion-node= mycluster2 225.1.1.1:4243 legion= anothercluster 225.1.1.1:4244 91 aes-256-cbc:secret2 legion-node= anothercluster 225.1.1.1:4244 Security Each packet sent by the Legion subsystem is encrypted using a specified cipher, a preshared secret, and an optional IV (initialization vector). Depending on cipher, the IV may be a required parameter. To get the list of supported ciphers, run openssl enc -h. Important: Each node of a Legion has to use the same encryption parameters. To specify the IV just add another parameter to the legion option. [uwsgi] legion= mycluster 192.168.173.17:4242 98 bf-cbc:hello thisistheiv legion-node= mycluster 192.168.173.22:4242 legion-node= mycluster 192.168.173.30:4242 legion-node= mycluster 192.168.173.5:4242 To reduce the impact of replay-based attacks, packets with a timestamp lower than 30 seconds are rejected. This is a tunable parameter. If you have no control on the time of all of the nodes you can increase the clock skew tolerance. Tuning and Clock Skew Currently there are three parameters you can tune. These tuables affect all Legions in the system. The frequency (in seconds) at which each packet is sent (legion-freq ), the amount of seconds after a node not sending packets is considered dead (legion-tolerance ), and the amount of clock skew between nodes (legion-skew-tolerance ). The Legion subsystem requires tight time synchronization, so the use of NTP or similar is highly recom- mended. By default each packet is sent every 3 seconds, a node is considered dead after 15 seconds, and a clock skew of 30 seconds is tolerated. Decreasing skew tolerance should increase security against replay attacks. Lord scroll (coming soon) The Legion subsystem can be used for a variety of purposes ranging from master election to node autodiscovery or simple monitoring. One example is to assign a “blob of data” (a scroll) to every node, One use of this is to pass reconfiguration parameters to your app, or to log specific messages. Currently the scroll system is being improved upon, so if you have ideas join our mailing list or IRC channel. Legion API You can know if the instance is a lord of a Legion by simply calling 384 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 int uwsgi_legion_i_am_the_lord(char *legion_name); It returns 1 if the current instance is the lord for the specified Legion. • The Python plugin exposes it as uwsgi.i_am_the_lord(name) • The PSGI plugin exposes it as uwsgi::i_am_the_lord(name) • The Rack plugin exposes it as UWSGI::i_am_the_lord(name) Obviously more API functions will be added in the future, feel free to expose your ideas. Stats The Legion information is available in the The uWSGI Stats Server. 
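As a sketch, you can peek at the Legion state by reading the JSON document served by the stats socket. The snippet below assumes an instance started with --stats 127.0.0.1:1717 (a hypothetical address); the exact layout of the legion-related entries may vary between uWSGI versions.
import json
import socket

# connect to the (hypothetical) stats socket and read the whole JSON document
s = socket.create_connection(('127.0.0.1', 1717))
data = ''
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    data += chunk
s.close()

stats = json.loads(data)
# the legion-related keys (layout may vary) report the known members and the current lord
print stats.get('legions')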
Be sure to understand the difference between “nodes” and “members”. Nodes are the peer you configure with the legion-node option while members are the effective nodes that joined the cluster. The old clustering subsystem During 0.9 development cycle a clustering subsystem (based on multicast) was added. It was very raw, unreliable and very probably no-one used it seriously. The new method is transforming it in a general API that can use different backends. The Legion subsystem can be one of those backends, as well as projects like corosync or the redhat cluster suite. 6.8 Locks uWSGI supports a configurable number of locks you can use to synchronize worker processes. Lock 0 (zero) is always available, but you can add more with the locks option. If your app has a lot of critical areas, holding and releasing the same lock over and over again can kill performance. def use_lock_zero_for_important_things(): uwsgi.lock() # Implicit parameter 0 # Critical section uwsgi.unlock() # Implicit parameter 0 def use_another_lock(): uwsgi.lock(1) time.sleep(1)# Take that, performance! Ha! uwsgi.unlock(1) 6.9 uWSGI Mules Mules are worker processes living in the uWSGI stack but not reachable via socket connections, that can be used as a generic subsystem to offload tasks. You can see them as a more primitive spooler. They can access the entire uWSGI API and can manage signals and be communicated with through a simple string- based message system. To start a mule (you can start an unlimited number of them), use the mule option as many times as you need. Mules have two modes, 6.8. Locks 385 uWSGI Documentation, Release 2.0 • Signal only mode (the default). In this mode the mules load your application as normal workers would. They can only respond to uWSGI signals. • Programmed mode. In this mode mules load a program separate from your application. See ProgrammedMules. By default each mule starts in signal-only mode. uwsgi --socket :3031 --mule --mule --mule --mule :3031 6.9.1 Basic usage import uwsgi from uwsgidecorators import timer, signal, filemon # run a timer in the first available mule @timer(30, target=’mule’) def hello(signum): print "Hi! I am responding to signal %d, running on mule %d"% (signum, uwsgi.mule_id()) # map signal 17 to mule 2 @signal(17, target=’mule2’) def i_am_mule2(signum): print "Greetings! I am running in mule number two." # monitor /tmp and arouse all of the mules on modifications @filemon(’/tmp’, target=’mules’) def tmp_modified(signum): print "/tmp has been modified. I am mule %d!"% uwsgi.mule_id() 6.9.2 Giving a brain to mules As mentioned before, mules can be programmed. To give custom logic to a mule, pass the name of a script to the mule option. uwsgi --socket :3031 --mule=somaro.py --mule --mule --mule This will run 4 mules, 3 in signal-only mode and one running somaro.py. # somaro.py from threading import Thread import time def loop1(): while True: print "loop1: Waiting for messages... yawn." message= uwsgi.mule_get_msg() print message 386 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 def loop2(): print "Hi! I am loop2." while True: time.sleep(2) print "This is a thread!" t= Thread(target=loop2) t.daemon= True t.start() if __name__ ==’__main__’: loop1() So as you can see from the example, you can use mule_get_msg() to receive messages in a programmed mule. Multiple threads in the same programmed mule can wait for messages. If you want to block a mule to wait on an uWSGI signal instead of a message you can use uwsgi.signal_wait(). 
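A minimal sketch of a programmed mule that blocks on uWSGI signals instead of messages (the file name is hypothetical; it assumes a signal has been registered elsewhere with this mule as its target):
# mule_wait.py -- start it with --mule=mule_wait.py
import uwsgi

while True:
    # block until a signal routed to this mule is raised
    uwsgi.signal_wait()
    print "mule %d woke up for signal %d" % (uwsgi.mule_id(), uwsgi.signal_received())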
Use uwsgi.mule_msg() to send a message to a programmed mule. Mule messages can be sent from anywhere in the uWSGI stack, including but not limited to workers, the spoolers, another mule. # Send the string "ciuchino" to mule1. # If you do not specify a mule ID, the message will be processed by the first available programmed mule. uwsgi.mule_msg("ciuchino",1) As you can spawn an unlimited number of mules, you may need some form of synchronization – for example if you are developing a task management subsystem and do not want two mules to be able to start the same task simultaneously. You’re in luck – see Locks. 6.10 The uWSGI offloading subsystem Offloading is a way to optimize tiny tasks, delegating them to one or more threads. This threads run such tasks in a non-blocking/evented way allowing for a huge amount of concurrency. Various component of the uWSGI stack has been made offload-friendly, and the long-term target is to allow application code to abuse it. To start the offloading subsystem just add –offload-threads , where is the number of threads (per-worker) to spawn. They are native threads, they are lock-free (no shared resources), thundering-herd free (requests to the system are made in round-robin) and they are the best way to abuse your CPU cores. The number of offloaded requests is accounted in the “offloaded_requests” metric of the stats subsystem. 6.10.1 Offloading static files The first component made offload-aware has been the static file serving system. When offload threads are available, the whole transfer of the file is delegated to one of those threads, freeing your worker suddenly (so it will be ready to accept new requests) Example: [uwsgi] socket= :3031 check-static= /var/www offload-threads=4 6.10. The uWSGI offloading subsystem 387 uWSGI Documentation, Release 2.0 6.10.2 Offloading internal routing The router_uwsgi and router_http plugins are offload-friendly. You can route requests to external uwsgi/HTTP servers without being worried about having a blocked worker during the response generation. Example: [uwsgi] socket= :3031 offload-threads=8 route= ^/foo http:127.0.0.1:8080 route= ^/bar http:127.0.0.1:8181 route= ^/node http:127.0.0.1:9090 Since 1.9.11 the cache router is offload friendly too. [uwsgi] socket= :3031 offload-threads=8 route-run= cache:key=${REQUEST_URI} As soon as the object is retrieved from the cache, it will be transferred in one of the offload threads. 6.10.3 The Future The offloading subsystem has a great potential, you can think of it as a software DMA: you program it, and then it goes alone. Currently it is pretty monolithic, but the idea is to allow more complex plugins (a redis one is in the works). Next step is allowing the user to “program” it via the uwsgi api. 6.11 The uWSGI queue framework In addition to the caching framework, uWSGI includes a shared queue. At the low level it is a simple block-based shared array, with two optional counters, one for stack-style, LIFO usage, the other one for FIFO. The array is circular, so when one of the two pointers reaches the end (or the beginning), it is reset. Remember this! To enable the queue, use the queue option. Queue blocks are 8 KiB by default. Use queue-blocksize to change this. # 100 slots, 8 KiB of data each uwsgi --socket :3031 --queue 100 # 42 slots, 128 KiB of data each uwsgi --socket :3031 --queue 42 --queue-blocksize 131072 6.11.1 Using the queue as a shared array 388 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 # Put a binary string in slot 17. 
uwsgi.queue_set(17,"Hello, uWSGI queue!") # Get it back. print uwsgi.queue_get(17) 6.11.2 Using the queue as a shared stack Warning: Remember that uwsgi.queue_pop() and uwsgi.queue_last() will remove the item or items from the queue. # Push a value onto the end of the stack. uwsgi.queue_push("Hello, uWSGI stack!") # Pop it back print uwsgi.queue_pop() # Get the number of the next available slot in the stack print uwsgi.queue_slot() # Pop the last N items from the stack items= uwsgi.queue_last(3) 6.11.3 Using the queue as a FIFO queue Note: Currently you can only pull, not push. To enqueue an item, use uwsgi.queue_set(). # Grab an item from the queue uwsgi.queue_pull() # Get the current pull/slot position (this is independent from the stack-based one) print uwsgi.queue_pull_slot() 6.11.4 Notes • You can get the queue size with uwsgi.queue_size. • Use the queue-store option to persist the queue on disk. Use queue-store-sync (in master cycles – usually seconds) to force disk syncing of the queue. • The tests/queue.py application is a fully working example. 6.12 uWSGI RPC Stack uWSGI contains a fast, simple, pan-and-cross-language RPC stack. Although you may fall in love with this subsystem, try to use it only when you need it. There are plenty of higher-level RPC technologies better suited for the vast majority of situations. 6.12. uWSGI RPC Stack 389 uWSGI Documentation, Release 2.0 That said, the uWSGI RPC subsystem shines with its performance and memory usage. As an example, if you need to split the load of a request to multiple servers, the uWSGI RPC is a great choice, as it allows you to offload tasks with very little effort. Its biggest limit is in its “typeless” approach. RPC functions can take up to 254 args. Each argument has to be a string with a 16 bit maximum size (65535 bytes), while the return value has to be a string (this time 64-bit, so that’s not a practical limit). Warning: 64 bit response length has been implemented only in uWSGI 1.9.20, older releases have 16 bit response length limit. Note: RPC functions receive arguments in the form of binary strings, so every RPC exportable function must assume that each argument is a string. Every RPC function returns a binary string of 0 or more characters. So, if you need “elegance” or strong typing, just look in another place (or roll your own typing on top of uWSGI RPC, maybe...). Since 1.9 the RPC subsystem is fully async-friendly, so you can use it with gevent and Coro::AnyEvent etc. 6.12.1 Learning by example Let’s start with a simple RPC call from 10.0.0.1:3031 to 10.0.0.2:3031. So let’s export a “hello” function on .2. import uwsgi def hello_world(): return "Hello World" uwsgi.register_rpc("hello", hello_world) This uses uwsgi.register_rpc() to declare a function called “hello” to be exported. We’ll start the server with --socket :3031. On the caller’s side, on 10.0.0.1, let’s declare the world’s (second) simplest WSGI app. import uwsgi def application(env, start_response): start_response(’200 Ok’, [(’Content-Type’,’text/html’)]) return uwsgi.rpc(’10.0.0.2:3031’,’hello’) That’s it! What about, let’s say, Lua? Glad you asked. If you want to export functions in Lua, simply do: function hello_with_args(arg1, arg2) return ’argsare’..arg1..’’..arg2 end uwsgi.register_rpc(’hellolua’, hello_with_args) And in your Python WSGI app: 390 Chapter 6. 
import uwsgi

def application(env, start_response):
    start_response('200 Ok', [('Content-Type', 'text/html')])
    return uwsgi.rpc('10.0.0.2:3031', 'hellolua', 'foo', 'bar')

And other languages/platforms? Check the language-specific docs; basically all of them should support registering and calling RPC functions. You can build multi-language apps with really no effort at all and will be happily surprised at how easy it is to call Java functions from Perl, JavaScript from Python and so on.

6.12.2 Doing RPC locally

Doing RPC locally may sound a little silly, but if you need to call a Lua function from Python with the absolute least possible overhead, uWSGI RPC is your man. If you want to call an RPC function defined in the same server (governed by the same master, etc.), simply set the first parameter of uwsgi.rpc to None or nil, or use the convenience function uwsgi.call().

6.12.3 Doing RPC from the internal routing subsystem

The RPC plugin exports a bunch of internal routing actions:
• rpc calls the specified RPC function and sends the response to the client
• rpcnext/rpcblob calls the specified RPC function, sends the response to the client and continues to the next rule
• rpcret calls the specified RPC function and uses its return value as the action return code (next, continue, goto ...)

[uwsgi]
route= ^/foo rpc:hello ${REQUEST_URI} ${REMOTE_ADDR}
; call on remote nodes
route= ^/multi rpcnext:part1@192.168.173.100:3031
route= ^/multi rpcnext:part2@192.168.173.100:3031
route= ^/multi rpcnext:part3@192.168.173.100:3031

6.12.4 Doing RPC from nginx

As Nginx supports low-level manipulation of the uwsgi packets sent to upstream uWSGI servers, you can do RPC directly through it. Madness!

location /call {
    uwsgi_modifier1 173;
    uwsgi_modifier2 1;
    uwsgi_param hellolua foo;
    uwsgi_param bar "";
    uwsgi_pass 10.0.0.2:3031;
    uwsgi_pass_request_headers off;
    uwsgi_pass_request_body off;
}

Zero-size strings are ignored by the uWSGI array parser, so you can safely use them when the number of parameters + function_name is not even. Modifier2 is set to 1 so that raw strings (HTTP responses in this case) are returned. Otherwise the RPC subsystem would encapsulate the output in a uwsgi protocol packet, and nginx isn't smart enough to read those.

6.12.5 HTTP PATH_INFO -> RPC bridge

6.12.6 XML-RPC -> RPC bridge

6.13 SharedArea – share memory pages between uWSGI components

Warning: SharedArea is a very low-level mechanism. For an easier-to-use alternative, see the Caching and Queue frameworks.

Warning: This page refers to the "new generation" sharedarea introduced in uWSGI 1.9.21; the older API is no longer supported.

The sharedarea subsystem allows you to share pages of memory between your uWSGI components (workers, spoolers, mules, etc.) in a very fast (and safe) way. Contrary to the higher-level caching framework, sharedarea operations are way faster (a single copy instead of the double copy required by caching) and offer various optimizations for specific needs. Each sharedarea (yes, you can have multiple areas) has a size (generally specified in number of pages), so if you need an 8 KiB shared area on a system with 4 KiB pages, you would use sharedarea=2. The sharedarea subsystem is fully thread-safe.

6.13.1 Simple option VS keyval

The sharedarea subsystem exposes (for now) a single option: --sharedarea. It takes two kinds of arguments: the number of pages (simple approach) or a keyval arg (for advanced tuning).
The following keyval keys are available: • pages – set the number of pages • file – create the sharedarea from a file that will be mmaped • fd – create the sharedarea from a file descriptor that will be mmaped • size – mainly useful with the fd and ptr keys to specify the size of the map (can be used as a shortcut to avoid calculation of the pages value too) • ptr – directly map the area to the specified memory pointer. 6.13.2 The API The API is pretty big, the sharedarea will be the de-facto toy for writing highly optimized web apps (especially for embedded systems). 392 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 Most of the documented uses make sense on systems with slow CPUs or very small amounts of memory. sharedarea_read(id, pos[, len]) Read len bytes from the specified sharedarea starting at offset pos. If len is not specified, the memory will be read til the end (starting from pos). sharedarea_write(id, pos, string) Write the specified string (it is language-dependent, obviously) to the specified sharedarea at offset pos. sharedarea_read8|16|32|64(id, pos) Read a signed integer (8, 16, 32 or 64 bit) from the specified po- sition. sharedarea_write8|16|32|64(id, pos) Write a signed integer (8, 16, 32 or 64 bit) to the specified posi- tion. sharedarea_inc8|16|32|64(id, pos) Increment the signed integer (8, 16, 32 or 64 bit) at the specified position. sharedarea_dec8|16|32|64(id, pos) Decrement the signed integer (8, 16, 32 or 64 bit) at the specified position. sharedarea_wait(id[, freq, timeout]) Wait for modifications of the specified sharedarea (see below). sharedarea_rlock(id) lock a shared area for read (use only if you know what you are doing, generally the sharedarea api functions implement locking by themselves) sharedarea_wlock(id) lock a shared area for write (use only if you know what you are doing, generally the sharedarea api functions implement locking by themselves) sharedarea_unlock(id) unlock a shared area (use only if you know what you are doing, generally the sharedarea api functions implement locking by themselves) 6.13.3 Waiting for updates One of the most powerful features of sharedareas (compared to caching) is “waiting for updates”. Your worker/thread/async_core can be suspended until a sharedarea is modified. Technically, a millisecond-resolution timer is triggered, constantly checking for updates (the operation is very fast, as the sharedarea object has an update counter, so we only need to check that value for changes). 6.13.4 Optional API The following functions require specific features from the language, so not all of the language plugins are able to support them. sharedarea_readfast(id, pos, object, [, len]) Read len bytes from the specified sharedarea starting at offset pos to the specified object. If len is not specified, the memory will be read til the end (starting from pos). Currently is implemented only for Perl. sharedarea_memoryview(id) returns python memoryview object you can directly manipulate (works only on CPython) sharedarea_object(id) some plugin exposes an alternative way to create sharedareas from internal objects. This functions returns the original object (currently implemented only on CPython on top of bytearrays using --py-sharedarea option) 6.13.5 Websockets integration API This is currently supported only in the psgi/perl plugin: 6.13. 
SharedArea – share memory pages between uWSGI components 393 uWSGI Documentation, Release 2.0 websocket_send_from_sharedarea(id, pos) send a websocket message directly from the specified sharedarea websocket_send_binary_from_sharedarea(id, pos) send a websocket binary message directly from the specified sharedarea 6.13.6 Advanced usage (from C) Work in progress. Check https://github.com/unbit/uwsgi-capture for an example of sharedarea managed from C 6.14 The uWSGI Signal Framework Warning: Raw usage of uwsgi signals is for advanced users only. You should see uWSGI API - Python decorators for a more elegant abstraction. Note: uWSGI Signals have _nothing_ in common with UNIX/Posix signals (if you are looking for those, Managing the uWSGI server is your page). Over time, your uWSGI stack is growing, you add spoolers, more processes, more plugins, whatever. The more features you add the more you need all of these components to speak to each other. Another important task for today’s rich/advanced web apps is to respond to different events. An event could be a file modification, a new cluster node popping up, another one (sadly) dying, a timer having elapsed... whatever you can imagine. Communication and event management are all managed by the same subsystem – the uWSGI signal framework. uWSGI signals are managed with sockets, so they are fully reliable. When you send an uWSGI signal, you can be sure that it will be delivered. 6.14.1 The Signals table Signals are simple 1 byte messages that are routed by the master process to workers and spoolers. When a worker receives a signal it searches the signals table for the corresponding handler to execute. The signal table is shared by all workers (and protected against race conditions by a shared lock). Every uWSGI process (mainly the master though) can write into it to set signal handlers and recipient processes. Warning: Always pay attention to who will run the signal handler. It must have access to the handler itself. This means that if you define a new function in worker1 and register it as a signal handler, only worker1 can run it. The best way to register signals is defining them in the master, so (thanks to fork()) all workers see them. 6.14.2 Defining signal handlers To manage the signals table the uWSGI API exposes one simple function, uwsgi.register_signal(). These are two simple examples of defining signal table items, in Python and Lua. 394 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 import uwsgi def hello_signal(num): print "i am the signal %d"% num def hello_signal2(num): print "Hi, i am the signal %d"% num # define 2 signal table items (30 and 22) uwsgi.register_signal(30,"worker", hello_signal) uwsgi.register_signal(22,"workers", hello_signal2) function hello_signal(sig) print("iamLua,receivedsignal".. sig..) end # define a single signal table item (signal1) uwsgi.register_signal(1,"worker", hello_signal) 6.14.3 Raising signals Signals may be raised using uwsgi.signal(). When you send a signal, it is copied into the master’s queue. The master will then check the signal table and dispatch the messages. 6.14.4 External events The most useful feature of uWSGI signals is that they can be used to announce external events. At the time of writing the available external events are • filesystem modifications • timers/rb_timers • cron Filesystem modifications To map a specific file/directory modification event to a signal you can use uwsgi.add_file_monitor(). 
An example: import uwsgi def hello_file(num): print "/tmp has been modified !!!" uwsgi.register_signal(17,"worker", hello_file) uwsgi.add_file_monitor(17,"/tmp") From now on, every time /tmp is modified, signal 17 will be raised and hello_file will be run by the first available worker. 6.14. The uWSGI Signal Framework 395 uWSGI Documentation, Release 2.0 Timers Timers are another useful feature in web programming – for instance to clear sessions and shopping carts and what- have-you. Timers are implemented using kernel facilities (most notably kqueue on BSD systems and timerfd() on modern Linux kernels). uWSGI also contains support for rb_timers, timers implemented in user space using red-black trees. To register a timer, use uwsgi.add_timer(). To register an rb_timer, use uwsgi.add_rb_timer(). import uwsgi def hello_timer(num): print "2 seconds elapsed, signal %d raised"% num def oneshot_timer(num): print "40 seconds elapsed, signal %d raised. You will never see me again."% num uwsgi.register_signal(26,"worker", hello_timer) uwsgi.register_signal(30,"", oneshot_timer) uwsgi.add_timer(26,2)# never-ending timer every 2 seconds uwsgi.add_timer(30, 40,1)# one shot timer after 40 seconds Signal 26 will be raised every 2 seconds and handled by the first available worker. Signal 30 will be raised after 40 seconds and executed only once. 6.14.5 signal_wait and signal_received Unregistered signals (those without an handler associated) will be routed to the first available worker to use the uwsgi.signal_wait() function. uwsgi.signal_wait() signum = uwsgi.signal_received() You can combine external events (file monitors, timers...) with this technique to implement event-based apps. A good example is a chat server where every core waits for text sent by users. You can also wait for specific (even registered) signals by passing a signal number to signal_wait. 6.14.6 Todo • Signal table entry cannot be removed (this will be fixed soon) • Iterations works only with rb_timers • uwsgi.signal_wait() does not work in async mode (will be fixed) • Cluster nodes popup/die signals are still not implemented. • Bonjour/avahi/MDNS event will be implemented in 0.9.9 • PostgreSQL notifications will be implemented in 0.9.9 • Add iterations to file monitoring (to allow one-shot event as timers) 396 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 6.15 The uWSGI Spooler Updated to uWSGI 2.0.1 Supported on: Perl, Python, Ruby The Spooler is a queue manager built into uWSGI that works like a printing/mail system. You can enqueue massive sending of emails, image processing, video encoding, etc. and let the spooler do the hard work in background while your users get their requests served by normal workers. A spooler works by defining a directory in which “spool files” will be written, every time the spooler find a file in its directory it will parse it and will run a specific function. You can have multiple spoolers mapped to different directories and even multiple spoolers mapped to the same one. The --spooler option allows you to generate a spooler process, while the --spooler-processes allows you to set how many processes to spawn for every spooler. The spooler is able to manage uWSGI signals too, so you can use it as a target for your handlers. This configuration will generate a spooler for your instance (myspool directory must exists) [uwsgi] spooler = myspool ... while this one will create two spoolers: [uwsgi] spooler = myspool spooler = myspool2 ... 
having multiple spoolers allows you to prioritize tasks (and eventually parallelize them) 6.15.1 Spool files Spool files are serialized hashes/dictionaries of strings. The spooler will parse them and pass the resulting hash/dictionary to the spooler function (see below). The serialization format is the same used for the ‘uwsgi’ protocol, so you are limited to 64k (even if there is a trick for passing bigger values, see the ‘body’ magic key below). The modifier1 for spooler packets is the 17, so a {‘hello’ => ‘world’} hash will be encoded as: header key1 value1 17|14|0|0 |5|0|h|e|l|l|o |5|0|w|o|r|l|d A locking system allows you to safely manually remove spool files if something goes wrong, or to move them between spooler directories. Spool dirs over NFS are allowed, but if you do not have proper NFS locking in place, avoid mapping the same spooler NFS directory to spooler on different machines. 6.15.2 Setting the spooler function/callable Because there are dozens of different ways to enqueue spooler requests, we’re going to cover receiving the requests first. 6.15. The uWSGI Spooler 397 uWSGI Documentation, Release 2.0 To have a fully operational spooler you need to define a “spooler function/callable” to process the requests. Regardless of the the number of configured spoolers, the same function will be executed. It is up to the developer to instruct it to recognize tasks. If you don’t process requests, the spool directory will just fill up. This function must returns an integer value: • -2 (SPOOL_OK) – the task has been completed, the spool file will be removed • -1 (SPOOL_RETRY) – something is temporarily wrong, the task will be retried at the next spooler iteration • 0 (SPOOL_IGNORE) – ignore this task, if multiple languages are loaded in the instance all of them will fight for managing the task. This return values allows you to skip a task in specific languages. Any other value will be interpreted as -1 (retry). Each language plugin has its own way to define the spooler function: Perl: uwsgi::spooler( sub { my ($env)=@_; print $env->{foobar}; return uwsgi::SPOOL_OK; } ); # hint - uwsgi:: is available when running using perl-exec= or psgi= # no don’t need to use "use" or "require" it, it’s already there. Python: import uwsgi def my_spooler(env): print env[’foobar’] return uwsgi.SPOOL_OK uwsgi.spooler= my_spooler Ruby: module UWSGI module_function def spooler(env) puts env.inspect return UWSGI::SPOOL_OK end end Spooler functions must be defined in the master process, so if you are in lazy-apps mode, be sure to place it in a file that is parsed early in the server setup. (in Python you can use –shared-import, in Ruby –shared-require, in Perl –perl-exec). Python has support for importing code directly in the spooler with the --spooler-python-import option. 6.15.3 Enqueueing requests to a spooler The ‘spool’ api function allows you to enqueue a hash/dictionary into the spooler specified by the instance: 398 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 # add this to your instance .ini file spooler=/path/to/spooler # that’s it! now use one of the code blocks below to send requests # note: you’ll still need to register some sort of receiving function (specified above) # python import uwsgi uwsgi.spool({’foo’:’bar’,’name’:’Kratos’,’surname’:’the same of Zeus’}) # or uwsgi.spool(foo=’bar’, name=’Kratos’, surname=’the same of Zeus’) # for python3 use bytes instead of strings !!! 
# perl uwsgi::spool({foo=> ’bar’, name=> ’Kratos’, surname=> ’the same of Zeus’}) # the uwsgi:: functions are available when executed within psgi or perl-exec # ruby UWSGI.spool(foo=> ’bar’, name=> ’Kratos’, surname=> ’the same of Zeus’) Some keys have a special meaning: • ‘spooler’ => specify the ABSOLUTE path of the spooler that has to manage this task • ‘at’ => unix time at which the task must be executed (read: the task will not be run until the ‘at’ time is passed) • ‘priority’ => this will be the subdirectory in the spooler directory in which the task will be placed, you can use that trick to give a good-enough prioritization to tasks (for better approach use multiple spoolers) • ‘body’ => use this key for objects bigger than 64k, the blob will be appended to the serialzed uwsgi packet and passed back to the spooler function as the ‘body’ argument Note: Spool arguments must be strings (or bytes for python3). The API functions will try to cast non-string values to strings/bytes, but do not rely on that functionality! 6.15.4 External spoolers You could want to implement a centralized spooler for your server across many uWSGI instances. A single instance will manage all of the tasks enqueued by multiple uWSGI instances. To accomplish this setup, each uWSGI instance has to know which spooler directories are valid (consider it a form of security). To add an external spooler directory use the --spooler-external option, then add to it using the spool function. The spooler locking subsystem will avoid any messes that you might think could occur. 6.15.5 Networked spoolers You can even enqueue tasks over the network (be sure the ‘spooler’ plugin is loaded in your instance, but generally it is built in by default). As we have already seen, spooler packets have modifier1 17, you can directly send those packets to an uWSGI socket of an instance with a spooler enabled. We will use the Perl Net::uwsgi module (exposing a handy uwsgi_spool function) in this example (but feel free to use whatever you want to write the spool files). 6.15. The uWSGI Spooler 399 uWSGI Documentation, Release 2.0 #!/usr/bin/perl use Net::uwsgi; uwsgi_spool(’localhost:3031’,{’test’=>’test001’,’argh’=>’boh’,’foo’=>’bar’}); uwsgi_spool(’/path/to/my.sock’,{’test’=>’test001’,’argh’=>’boh’,’foo’=>’bar’}); [uwsgi] socket = /path/to/my.sock socket = localhost:3031 spooler = /path/for/files spooler-processes=1 perl-exec = /path/for/script-which-registers-spooler-sub.pl ... (thanks brianhorakh for the example) 6.15.6 Priorities We have already seen that you can use the ‘priority’ key to give order in spooler parsing. While having multiple spoolers would be an extremely better approach, on system with few resources ‘priorities’ are a good trick. They works only if you enable the --spooler-ordered option. This option allows the spooler to scan directories entry in alphabetical order. If during the scan a directory with a ‘number’ name is found, the scan is suspended and the content of this subdirectory will be explored for tasks. /spool /spool/ztask /spool/xtask /spool/1/task1 /spool/1/task0 /spool/2/foo With this layout the order in which files will be parsed is: /spool/1/task0 /spool/1/task1 /spool/2/foo /spool/xtask /spool/ztask Remember, priorities only work for subdirectories named as ‘numbers’ and you need the --spooler-ordered option. The uWSGI spooler gives special names to tasks so the ordering of enqueuing is always respected. 
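As a sketch, enqueuing a delayed, prioritized task using the special keys described above could look like this (the task payload keys and the delay are made up; remember that all values must be strings):
import time
import uwsgi

# place the spool file in the "1" priority subdirectory and
# do not run it before 5 minutes from now
uwsgi.spool({
    'task': 'resize_image',
    'file': '/tmp/upload42.png',
    'priority': '1',
    'at': str(int(time.time()) + 300),
})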
6.15.7 Options spooler=directory run a spooler on the specified directory spooler-external=directory map spoolers requests to a spooler directory managed by an external instance spooler-ordered try to order the execution of spooler tasks (uses scandir instead of readdir) spooler-chdir=directory call chdir() to specified directory before each spooler task spooler-processes=## set the number of processes for spoolers 400 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 spooler-quiet do not be verbose with spooler tasks spooler-max-tasks=## set the maximum number of tasks to run before recycling a spooler (to help alleviate memory leaks) spooler-harakiri=## set harakiri timeout for spooler tasks, see [harakiri] for more information. spooler-frequency=## set the spooler frequency spooler-python-import=??? import a python module directly in the spooler 6.15.8 Tips and tricks You can re-enqueue a spooler request by returning uwsgi.SPOOL_RETRY in your callable: def call_me_again_and_again(env): return uwsgi.SPOOL_RETRY You can set the spooler poll frequency using the --spooler-frequency option (default 30 seconds). You could use the The uWSGI caching framework or SharedArea – share memory pages between uWSGI components to exchange memory structures between spoolers and workers. Python (uwsgidecorators.py) and Ruby (uwsgidsl.rb) exposes higher-level facilities to manage the spooler, try to use them instead of the low-level approach described here. When using a spooler as a target for a uWSGI signal handler you can specify which one to route signal to using its ABSOLUTE directory name. 6.16 uWSGI Subscription Server Some components of the uWSGI stack require a key-value mapping system. For example the The uWSGI FastRouter needs to know which server to contact for a specific request. In big networks with a lot of nodes manually managing this configuration could be a real hell. uWSGI implements a subscription system where the node itself announces its presence to Subscription Servers, which will in turn populate their internal dictionaries. uwsgi --fastrouter :1717 --fastrouter-subscription-server 192.168.0.100:2626 This will run an uWSGI fastrouter on port 1717 and create an empty dictionary where the hostname is the key and the uwsgi address is the value. To populate this dictionary you can contact 192.168.0.100:2626, the address of the subscription server. For every key multiple addresses can exist, enabling load balancing (various algorithms are available). A node can announce its presence to a Subscription Server using the subscribe-to or subscribe2 options. uwsgi --socket 192.168.0.10:3031 --wsgi myapp -M --subscribe-to 192.168.0.100:2626:uwsgi.it The FastRouter will map every request for uwsgi.it to 192.168.0.10:3031. To now add a second node for uwsgi.it simply run it and subscribe: uwsgi --socket 192.168.0.11:3031 --wsgi myapp --master --subscribe-to 192.168.0.100:2626:uwsgi.it 6.16. uWSGI Subscription Server 401 uWSGI Documentation, Release 2.0 Dead nodes are automatically removed from the pool. The syntax for subscribe2 is similar but it allows far more control since it allows to specify additional options like the address to which all requests should be forwarded. Its value syntax is a string with “key=value” pairs, each separated by a comma. uwsgi -s 192.168.0.10:3031 --wsgi myapp --master --subscribe2 server=192.168.0.100:2626,key=uwsgi.it,addr=192.168.0.10:3031 For possibile subscribe2 keys, see below. 
The subscription system is currently available for cluster joining (when multicast/broadcast is not available), the Fas- trouter, the HTTP/HTTPS/SPDY router, the rawrouter and the sslrouter. That said, you can create an evented/fast_as_hell HTTP load balancer in no time. uwsgi --http :80 --http-subscription-server 192.168.0.100:2626 --master Now simply subscribe your nodes to the HTTP subscription server. 6.16.1 Securing the Subscription System The subscription system is meant for “trusted” networks. All of the nodes in your network can potentially make a total mess with it. If you are building an infrastructure for untrusted users or you simply need more control over who can subscribe to a Subscription Server you can use openssl rsa public/private key pairs for “signing” you subscription requests. # First, create the private key for the subscriber. DO NOT SET A PASSPHRASE FOR THIS KEY. openssl genrsa -out private.pem # Generate the public key for the subscription server: openssl rsa -pubout -out test.uwsgi.it_8000.pem -in private.pem The keys must be named after the domain/key we are subscribing to serve, plus the .pem extension. Note: If you’re subscribing to a pool for an application listening on a specified port you need to use the domain_port.pem scheme for your key files. Generally all of the DNS-allowed chars are supported, all of the others are mapped to an underscore. An example of an RSA protected server looks like this: [uwsgi] master=1 http= :8000 http-subscription-server= 127.0.0.1:2626 subscriptions-sign-check= SHA1:/etc/uwsgi/keys The last line tells uWSGI that public key files will be stored in /etc/uwsgi/keys. At each subscription request the server will check for the availability of the public key file and use it, if available, to verify the signature of the packet. Packets that do not correctly verify are rejected. On the client side you need to pass your private key along with other subscribe-to options. Here’s an example: [uwsgi] socket= 127.0.0.1:8080 subscribe-to= 127.0.0.1:2626:test.uwsgi.it:8000,5,SHA1:/home/foobar/private.pem psgi= test.psgi Let’s analyze the subscribe-to usage: 402 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 • 127.0.0.1:2626 is the subscription server we want to subscribe to. • test.uwsgi.it:8000 is the subscription key. • 5 is the modifier1 value for our psgi app • SHA1:/home/private/test.uwsgi.it_8000.pem is the : couple for authenticating to the server (the field is the private key path). Note: Please make sure you’re using the same digest method (SHA1 in the examples above) both on the server and on the client. To avoid replay attacks, each subscription packet has an increasing number (normally the unix time) avoiding the allowance of duplicated packets. Even if an attacker manages to sniff a subscription packet it will be unusable as it is already processed previously. Obviously if someone manages to steal your private key he will be able to build forged packets. Using SSH keys SSH-formatted keys are generally loved by developers (well, more than classic PEM files). Both –subscribe-to and –subscribe2 (see below) support SSH private keys, while for the server part you have the encode the public key in pkcs8: ssh-keygen -f chiavessh001.pub -e -m pkcs8 6.16.2 –subscribe2 This is the keyval version of –subscribe-to. 
It supports more tricks and a (generally) more readable syntax: uwsgi --socket 127.*:0 --subscribe2 server=127.0.0.1:7171,key=ubuntu64.local:9090,sign=SHA1:chiavessh001 Supported fields are: • server the address of the subscription server • key the key to subscribe (generally the domain name) • addr the address to subscribe (the value of the item) • socket the socket number (zero-based), this is like ‘addr’ by take the uWSGI internal socket number • weight the load balancing value • modifier1 and modifier2 • sign :<file> the signature for the secured system • check it takes a file as argument. If it exists the packet is sent, otherwise it is skipped • sni_key set the keyfile to use for SNI proxy management • sni_crt set the crt file to use for SNI proxy management • sni_ca set the ca file to use for SNI proxy management • algo (uWSGI 2.1) set the load balancing algorithm to use (they are pluggable, included are wrr, lrc, wlrc and iphash) • proto (uWSGI 2.1) the protocol to use, by default it is ‘uwsgi’ • backup (uWSGI 2.1) set the backup level (change meaning based on algo) 6.16. uWSGI Subscription Server 403 uWSGI Documentation, Release 2.0 6.16.3 Notifications When you subscribe to a server, you can ask it to “acknowledge” the acceptance of your request. Just add --subscription-notify-socket pointing to a datagram (unix or udp) address, on which your instance will bind and the subscription server will send ack to. 6.16.4 Mountpoints (uWSGI 2.1) Generally you subscribe your apps to specific domains. Thanks to the mountpoints support introduced in uWSGI 2.1, you can now subscribe each node to specific directory (only one level after the domain name is allowed): First of all you need to tell the subscription server to accept (and manage) mountpoint requests: uwsgi --master --http :8080 --http-subscription-server 127.0.0.1:4040 --subscription-mountpoints then you can start subscribing to mountpoints uwsgi --socket 127.0.0.1:0 --subscribe2 server=127.0.0.1:4040,key=mydomain.it/foo uwsgi --socket 127.0.0.1:0 --subscribe2 server=127.0.0.1:4040,key=mydomain.it/bar uwsgi --socket 127.0.0.1:0 --subscribe2 server=127.0.0.1:4040,key=mydomain.it/foo uwsgi --socket 127.0.0.1:0 --subscribe2 server=127.0.0.1:4040,key=mydomain.it the first and the third instance will answer to all of the requests for /foo, the second will answer for /bar and the last one will manage all of the others For the secured subscription system, you only need to use the domain key (you do not need to generate a certificate for each mountpoint) 6.17 Serving static files with uWSGI (updated to 1.9) Unfortunately you cannot live without serving static files via some protocol (HTTP, SPDY or something else). Fortunately uWSGI has a wide series of options and micro-optimizations for serving static files. Generally your webserver of choice (Nginx, Mongrel2, etc. will serve static files efficiently and quickly and will simply forward dynamic requests to uWSGI backend nodes. The uWSGI project has ISPs and PaaS (that is, the hosting market) as the main target, where generally you would want to avoid generating disk I/O on a central server and have each user-dedicated area handle (and account for) that itself. More importantly still, you want to allow customers to customize the way they serve static assets without bothering your system administrator(s). 6.17.1 Mode 1: Check for a static resource before passing the request to your app This a fairly common way of managing static files in web apps. 
Frameworks like Ruby on Rails and many PHP apps have used this method for ages. Suppose your static assets are under /customers/foobar/app001/public. You want to check each request has a corresponding file in that directory before passing the request to your dynamic app. The --check-static option is for you: 404 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 --check-static /customers/foobar/app001/public If uWSGI receives a request for /foo.png will first check for the existence of /customers/foobar/app001/public/foo.png and if it is not found it will invoke your app. You can specify --check-static multiple times to specify multiple possible root paths. --check-static /customers/foobar/app001/public --check-static /customers/foobar/app001/static uWSGI will first check for /customers/foobar/app001/public/foo.png; if it does not find it it will try /customers/foobar/app001/static/foo.png before finally delegating the request to your app. 6.17.2 Mode 2: trust frontend’s DOCUMENT_ROOT If your frontend (a webserver, a uWSGI corerouters...) set the DOCUMENT_ROOT value, you can instruct uWSGI to trust it as a valid directory for checking for static files with the --check-static-docroot option. 6.17.3 Mode 3: using static file mount points A more general approach is “mapping” specific request prefixes to physical directories on your file system. The --static-map mountpoint=path is the option for you. --static-map /images=/var/www/img If you get a request for /images/logo.png and /var/www/img/logo.png exists, it will be served. Otherwise your app will manage the request. You can specify multiple --static-map options, even for the same mountpoint. --static-map /images=/var/www/img --static-map /images=/var/www/img2 --static-map /images=/var/www/img3 The file will be searched in each directory until it’s found, or if it’s not, the request will be managed by your app. In some specific cases you may want to build the internal path in a different way, retaining the original path portion of the request. The --static-map2 option will do this. --static-map2 /images=/var/www/img A request for /images/logo.png will be looked for as /var/www/img/images/logo.png. You can map (or map2) both directories and files. --static-map /images/logo.gif=/tmp/oldlogo.gif # (psst: put favicons here) 6.17.4 Mode 4: using advanced internal routing When mappings are not enough, advanced internal routing (available from 1.9) will be your last resort. Thanks to the power of regular expressions you will be able to build very complex mappings. [uwsgi] route= /static/(. *)\.png static:/var/www/images/pngs/$1/highres.png route= *\.jpg static:/var/www/always_the_same_photo.jpg 6.17. Serving static files with uWSGI (updated to 1.9) 405 uWSGI Documentation, Release 2.0 6.17.5 Setting the index page By default, requests for a “directory” (like / or /foo) are bypassed (if advanced internal routing is not in place). If you want to map specific files to a “directory” request (like the venerable index.html) just use the --static-index option. --static-index index.html --static-index index.htm --static-index home.html As with the other options, the first one matching will stop the chain. 6.17.6 MIME types Your HTTP/SPDY/whateveryouwant responses for static files should always return the correct mime type for the specific file to let user agents handle them correctly. By default uWSGI builds its list of MIME types from the /etc/mime.types file. You can load additional files with the --mime-file option. 
--mime-file /etc/alternatives.types --mime-file /etc/apache2/mime.types All of the files will be combined into a single auto-optimizing linked list. 6.17.7 Skipping specific extensions Some platforms/languages, most-notably CGI based ones, like PHP are deployed in a very simple manner. You simply drop them in the document root and they are executed whenever you call them. This approach, when combined with static file serving, requires a bit of attention for avoiding your CGI/PHP/whatever to be served like static files. The --static-skip-ext will do the trick. A very common pattern on CGI and PHP deployment is this: --static-skip-ext .php --static-skip-ext .cgi --static-skip-ext .php4 6.17.8 Setting the Expires headers When serving static files, abusing client browser caching is the path to wisdom. By default uWSGI will add a Last-Modified header to all static responses, and will honor the If-Modified-Since request header. This might be not enough for high traffic sites. You can add automatic Expires headers using one of the following options: •--static-expires-type will set the Expires header to the specified number of seconds for the specified MIME type. •--static-expires-type-mtime is similar, but based on file modification time, not the current time. •--static-expires (and -mtime) will set Expires header for all of the filenames (after finishing mapping to the filesystem) matching the specified regexp. •--static-expires-uri (and -mtime) match regexps against REQUEST_URI •--static-expires-path-info (and -mtime) match regexps against PATH_INFO 406 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 # Expire an hour from now --static-expires-type text/html=3600 # Expire an hour from the file’s modification time --static-expires-type-mtime text/html=3600 # Same as static-expires-type, but based on a regexp: --static-expires /var/www/static/foo*\.jpg 3600 6.17.9 Transfer modes If you have developed an asynchronous/nonblocking application, serving static files directly from uWSGI is not a big problem. All of the transfers are managed in the async way, so your app will not block during them. In multi-process/multi-threaded modes, your processes (or threads) will be blocked during the whole transfer of the file. For smaller files this is not a problem, but for the bigger one it’s a great idea to offload their transfer to something else. You have various ways to do this: X-Sendfile If your web server supports the X-Sendfile header and has access to the file you want to send (for example it is on the same machine of your application or can access it via NFS) you can avoid the transfer of the file from your app with the --file-serve-mode x-sendfile option. With this, uWSGI will only generate response headers and the web server will be delegated to transferring the physical file. X-Accel-Redirect This is currently (January 2013) supported only on Nginx. Works in the same way as X-Sendfile, the only difference is in the option argument. --file-serve-mode x-accel-redirect Offloading This is the best approach if your frontend server has no access to the static files. It uses the The uWSGI offloading subsystem to delegate the file transfer to a pool of non-blocking threads. Each one of these threads can manage thousands of file transfers concurrently. To enable file transfer offloading just use the option --offload-threads specifying the number of threads to spawn (try to set it to the number of CPU cores to take advantage of SMP). 
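As a concrete illustration, a minimal configuration combining a static mapping with offloading might look like the sketch below (the module name and paths are hypothetical; only --static-map and --offload-threads are the options being demonstrated):

[uwsgi]
socket = 127.0.0.1:3031
module = myapp:application
; serve /media/* straight from disk
static-map = /media=/var/www/media
; delegate static file transfers to 4 non-blocking threads (roughly one per CPU core)
offload-threads = 4

With something like this, workers return to serving dynamic requests immediately while the offload threads push the file bytes to the client.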
6.17.10 GZIP (uWSGI 1.9) uWSGI 1.9 can check for a *.gz variant of a static file. Many users/sysadmins underestimate the CPU impact of on-the-fly Gzip encoding. 6.17. Serving static files with uWSGI (updated to 1.9) 407 uWSGI Documentation, Release 2.0 Compressing files every time (unless your webservers is caching them in some way) will use CPU and you will not be able to use advanced (zero-copy) techniques like sendfile(). For a very loaded site (or network) this could be a problem (especially when gzip encoding is a need for a better, more responsive user experience). Although uWSGI is able to compress contents on the fly (this is used in the HTTP/HTTPS/SPDY router for example), the best approach for serving gzipped static files is generating them “manually” (but please use a script, not an intern to do this), and let uWSGI choose if it is best to serve the uncompressed or the compressed one every time. In this way serving gzip content will be no different from serving standard static files (sendfile, offloading...) To trigger this behavior you have various options: • static-gzip checks for .gz variant for all of the requested files matching the specified regexp (the regexp is applied to the full filesystem path of the file) • static-gzip-dir /static-gzip-prefix checks for .gz variant for all of the files under the specified directory • static-gzip-ext /static-gzip-suffix check for .gz variant for all of the files with the specified extension/suffix • static-gzip-all check for .gz variant for all requested static files So basically if you have /var/www/uwsgi.c and /var/www/uwsgi.c.gz, clients accepting gzip as their Content-Encoding will be transparently served the gzipped version instead. 6.17.11 Security Every static mapping is fully translated to the “real” path (so symbolic links are translated too). If the resulting path is not under the one specified in the option, a security error will be triggered and the request refused. If you trust your UNIX skills and know what you are doing, you can add a list of “safe” paths. If a translated path is not under a configured directory but it is under a safe one, it will be served nevertheless. Example: --static-map /foo=/var/www/ /var/www/test.png is a symlink to /tmp/foo.png After the translation of /foo/test.png, uWSGI will raise a security error as /tmp/foo.png is not under /var/www/. Using --static-map /foo=/var/www/ --static-safe /tmp will bypass that limit. You can specify multiple --static-safe options. 6.17.12 Caching paths mappings/resolutions One of the bottlenecks in static file serving is the constant massive amount of stat() syscalls. You can use the uWSGI caching subsystem to store mappings from URI to filesystem paths. --static-cache-paths 30 408 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 will cache each static file translation for 30 seconds in the uWSGI cache. From uWSGI 1.9 an updated caching subsystem has been added, allowing you to create multiple caches. If you want to store translations in a specific cache you can use --static-cache-paths-name . 6.17.13 Bonus trick: storing static files in the cache You can directly store a static file in the uWSGI cache during startup using the option --load-file-in-cache (you can specify it multiple times). The content of the file will be stored under the key <filename>. So please pay attention – load-file-in-cache ./foo.png will store the item as ./foo.png, not its full path. 
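Putting the last two tricks together, a rough sketch could look like the following (the cache size, timeout and file names are purely illustrative, and it assumes the default cache is the one targeted by --load-file-in-cache):

[uwsgi]
http-socket = :9090
; a uWSGI cache to hold path translations and preloaded items
cache2 = name=default,items=1000
; map /static/* to the filesystem
static-map = /static=/var/www/static
; remember each URI -> filesystem translation for 60 seconds
static-cache-paths = 60
; preload the logo at startup; note the key will be exactly "./logo.png"
load-file-in-cache = ./logo.png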
6.17.14 Notes

• The static file serving subsystem automatically honours the If-Modified-Since HTTP request header.

6.18 SNI - Server Name Identification (virtual hosting for SSL nodes)

uWSGI 1.9 (codenamed "ssl as p0rn") added support for SNI (Server Name Identification) throughout the whole SSL subsystem. The HTTPS router, the SPDY router and the SSL router can all use it transparently. SNI is an extension to the SSL standard which allows a client to specify a "name" for the resource it wants. That name is generally the requested hostname, so you can implement virtual hosting-like behavior like you do using the HTTP Host: header without requiring extra IP addresses etc. In uWSGI an SNI object is composed of a name and a value. The name is the servername/hostname while the value is the "SSL context" (you can think of it as the sum of certificates, key and ciphers for a particular domain).

6.18.1 Adding SNI objects

To add an SNI object just use the --sni option:

--sni <name> crt,key[,ciphers,client_ca]

For example:

--sni "unbit.com unbit.crt,unbit.key"

or, for client-based SSL authentication and OpenSSL HIGH cipher levels:

--sni "secure.unbit.com unbit.crt,unbit.key,HIGH,unbit.ca"

6.18.2 Adding complex SNI objects

Sometimes you need more complex keys for your SNI objects (like when using wildcard certificates). If you have built uWSGI with PCRE/regexp support (as you should) you can use the --sni-regexp option.

--sni-regexp "*.unbit.com unbit.crt,unbit.key,HIGH,unbit.ca"

6.18.3 Massive SNI hosting

One of uWSGI's main purposes is massive hosting, so SSL without support for that would be pretty annoying. If you have dozens (or hundreds, for that matter) of certificates mapped to the same IP address you can simply put them in a directory (following a simple convention we'll elaborate in a bit) and let uWSGI scan it whenever it needs to find a context for a domain. To add a directory just use --sni-dir like

--sni-dir /etc/customers/certificates

Now, if you have unbit.com and example.com certificates (.crt) and keys (.key) just drop them in there following these naming rules:

• /etc/customers/certificates/unbit.com.crt
• /etc/customers/certificates/unbit.com.key
• /etc/customers/certificates/unbit.com.ca
• /etc/customers/certificates/example.com.crt
• /etc/customers/certificates/example.com.key

As you can see, example.com has no .ca file, so client authentication will be disabled for it. If you want to force a default cipher set for the SNI contexts, use --sni-dir-ciphers HIGH (or whatever other value you need).

Note: Unloading SNI objects is not supported. Once they are loaded into memory they will be held onto until reload.

6.18.4 Subscription system and SNI

uWSGI 2.0 added support for SNI in the subscription system. The https/spdy router and the sslrouter can dynamically load certificates and keys from the paths specified in a subscription packet:

uwsgi --subscribe2 key=mydomain.it,socket=0,sni_key=/foo/bar.key,sni_crt=/foo/bar.crt

The router will create a new SSL context based on the specified files (be sure the router can reach them) and will destroy it when the last node disconnects. This is useful for massive hosting where customers keep their certificates in their home directories and you want to let them change/update those files without bothering you.
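For reference, the router side accepting such subscriptions could look roughly like this sketch (addresses and certificate paths are hypothetical; the certificate given to --https only acts as the fallback for names without a subscribed SNI context):

[uwsgi]
master = true
; HTTPS router with a fallback certificate
https = :8443,default.crt,default.key
; accept subscriptions (carrying sni_key/sni_crt) from the backends
http-subscription-server = 127.0.0.1:4040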
Note: We understand that directly encapsulating keys and cert in the subscription packets will be much more useful, but network transfer of keys is something really foolish from a security point of view. We are investigating if combining it with the secured subscription system (where each packet is encrypted) could be a solution. 410 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 6.19 The GeoIP plugin The geoip plugin adds new routing vars to your internal routing subsystem. GeoIP’s vars are prefixed with the “geoip” tag. To build the geoip plugin you need the official GeoIP C library and its headers. The supported databases are the country and city one, and they are completely loaded on memory at startup. The country database give access to the following variables: • ${geoip[country_code]} • ${geoip[country_code3]} • ${geoip[country_name]} while the city one offers a lot more at the cost of increased memory usage for storing the database • ${geoip[continent]} • ${geoip[country_code]} • ${geoip[country_code3]} • ${geoip[country_name]} • ${geoip[region]} • ${geoip[region_name]} • ${geoip[city]} • ${geoip[postal_code]} • ${geoip[latitude]} (${geoip[lat]}) • ${geoip[longitude]} (${geoip[lon]}) • ${geoip[dma]} • ${geoip[area]} 6.19.1 Enabling geoip lookup To enable the GeoIP lookup system you need to load at least one database. After having loaded the geoip plugin you will get 2 new options: •--geoip-country specifies a country database •--geoip-city specifies a city database If you do not specify at least one of them, the system will always return empty strings. 6.19.2 An example [uwsgi] plugin= geoip http-socket= :9090 ; load the geoip city database geoip-city= GeoLiteCity.dat module= werkzeug.testapp:test_app ; first some debug info (addvar will ad WSGI variables you will see in the werkzeug testapp) route-run= log:${geoip[country_name]}/${geoip[country_code3]} route-run= addvar:COUNTRY=${geoip[country_name]} 6.19. The GeoIP plugin 411 uWSGI Documentation, Release 2.0 route-run= log:${geoip[city]}/${geoip[region]}/${geoip[continent]} route-run= addvar:COORDS=${geoip[lon]}/${geoip[lat]} route-run= log:${geoip[region_name]} route-run= log:${geoip[dma]}/${geoip[area]} ; then something more useful ; block access to all of the italians (hey i am italian do not start blasting me...) route-if= equal:${geoip[country_name]};Italy break:403 Italians cannot see this site :P ; try to serve a specific page translation route= ^/foo/bar/test.html static:/var/www/${geoip[country_code]}/test.html 6.19.3 Memory usage The country database is tiny so you will generally have no problem in using it. Instead, the city database can be huge (from 20MB to more than 40MB). If you have lot of instances using the GeoIP city database and you are on a recent Linux system, consider using Using Linux KSM in uWSGI to reduce memory usage. All of the memory used by the GeoIP database can be shared by all instances with it. 6.20 uWSGI Transformations Starting from uWSGI 1.9.7, a “transformations” API has been added to uWSGI internal routing. A transformation is like a filter applied to the response generated by your application. Transformations can be chained (the output of a transformation will be the input of the following one) and can com- pletely overwrite response headers. The most common example of transformation is gzip encoding. The output of your application is passed to a function compressing it with gzip and setting the Content-Encoding header. 
This feature rely on 2 external packages: libpcre3- dev, libz-dev on Ubuntu. [uwsgi] plugin= python,transformation_gzip http-socket= :9090 ; load the werkzeug test app module= werkzeug.testapp:test_app ; if the client supports gzip encoding goto to the gzipper route-if= contains:${HTTP_ACCEPT_ENCODING};gzip goto:mygzipper route-run= last: route-label= mygzipper ; pass the response to the gzip transformation route= ^/$ gzip: The cachestore routing instruction is a transformation too, so you can cache various states of the response. [uwsgi] plugin= python,transformation_gzip http-socket= :9090 ; load the werkezeug test app module= werkzeug.testapp:test_app ; create a cache of 100 items cache= 100 ; if the client support gzip encoding goto to the gzipper route-if= contains:${HTTP_ACCEPT_ENCODING};gzip goto:mygzipper 412 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 route= ^/$ cache:key=werkzeug_homepage route= ^/$ cachestore:key=werkzeug_homepage route-run= last: route-label= mygzipper route= ^/$ cache:key=werkzeug_homepage.gz ; first cache the ’clean’ response (for client not supporting gzip) route= ^/$ cachestore:key=werkzeug_homepage ; then pass the response to the gzip transformation route= ^/$ gzip: ; and cache it again in another item (gzipped) route= ^/$ cachestore:key=werkzeug_homepage.gz Another common transformation is applying stylesheets to XML files. (see The XSLT plugin) The toxslt transformation is exposed by the xslt plugin: uwsgi --plugin xslt --http-socket :9090 -w mycd --route-run "toxslt:stylesheet=t/xslt/cd.xml.xslt,params=foobar=test&agent=\${HTTP_USER_AGENT}" The mycd module here is a simple XML generator. Its output is then passed to the XSLT transformation. 6.20.1 Streaming vs. buffering Each transformation announces itself as a “streaming” one or a “buffering” one. Streaming ones are transformations that can be applied to response chunks (parts). An example of a streaming trans- formation is gzip (you do not need the whole body to begin compressing it). Buffering transformations are those requiring the full body before applying something to it. XSLT is an example of buffering transformation. Another example of buffering transformations are those used for storing response in some kind of cache. If your whole pipeline is composed by only “streaming” transformations, your client will receive the output chunk by chunk. On the other hand a single buffering transformation will make the whole pipeline buffered, so your client will get the output only at the end. An often using streaming functionality is gzip + chunked: [uwsgi] plugins = transformation_gzip,transformation_chunked route-run = gzip: route-run = chunked: ... The whole transformation pipeline is composed by streaming plugins, so you will get each HTTP chunk in realtime. 6.20.2 Flushing magic The “flush” transformation is a special one. It allows you to send the current contents of the transformation buffer to the client (without clearing the buffer). You can use it for implementing streaming mode when buffering will be applied. A common example is having streaming + caching: [uwsgi] plugins = transformation_toupper,transform_tofile ; convert each char to uppercase route-run = toupper: ; after each chunk converted to upper case, flush to the client route-run = flush: 6.20. uWSGI Transformations 413 uWSGI Documentation, Release 2.0 ; buffer the whole response in memory for finally storing it in a file route-run = tofile:filename=/tmp/mycache ... 
You can call flush multiple times and in various parts of the chain. Experiment a bit with it... 6.20.3 Available transformations (last update 20130504) • gzip, exposed by the transformation_gzip plugin (encode the response buffer to gzip) • toupper, exposed by the transformation_toupper plugin (example plugin transforming each charac- ter in uppercase) • tofile, exposed by the transformation_tofile plugin (used for caching to response buffer to a static file) • toxslt, exposed by the xslt plugin (apply xslt stylesheet to an XML response buffer) • cachestore, exposed by the router_cache plugin (cache the response buffer in the uWSGI cache) • chunked, encode the output in HTTP chunked • flush, flush the current buffer to the client • memcachedstore, store the response buffer in a memcached object • redisstore, store the response buffer in a redis object • template, apply routing translations to each chunk 6.20.4 Working on • rpc, allows applying rpc functions to a response buffer (limit 64k size) • lua, apply a lua function to a response buffer (no limit in size) 6.21 WebSocket support In uWSGI 1.9, a high performance websocket (RFC 6455) implementation has been added. Although many different solutions exist for WebSockets, most of them rely on a higher-level language implementation, that rarely is good enough for topics like gaming or streaming. The uWSGI websockets implementation is compiled in by default. Websocket support is sponsored by 20Tab S.r.l. http://20tab.com/ They released a full game (a bomberman clone based on uWSGI websockets api): https://github.com/20tab/Bombertab 6.21.1 An echo server This is how a uWSGI websockets application looks like: def application(env, start_response): # complete the handshake uwsgi.websocket_handshake(env[’HTTP_SEC_WEBSOCKET_KEY’], env.get(’HTTP_ORIGIN’,’’)) while True: 414 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 msg= uwsgi.websocket_recv() uwsgi.websocket_send(msg) You do not need to worry about keeping the connection alive or reject dead peers. The uwsgi.websocket_recv() function will do all of the dirty work for you in background. 6.21.2 Handshaking Handshaking is the first phase of a websocket connection. To send a full handshake response you can use the uwsgi.websocket_handshake([key,origin, proto]) function. Without a correct handshake the connection will never complete. In the 1.9 series, the key parameter is required. In 2.0+ you can call websocket_handshake without arguments (the response will be built automatically from request’s data). 6.21.3 Sending Sending data to the browser is really easy. uwsgi.websocket_send(msg) – nothing more. 6.21.4 Receiving This is the real core of the whole implementation. This function actually lies about its real purpose. It does return a websocket message, but it really also holds the connection opened (using the ping/pong subsystem) and monitors the stream’s status. msg = uwsgi.websocket_recv() The function can receive messages from a named channel (see below) and automatically forward them to your web- socket connection. It will always return only websocket messages sent from the browser – any other communication happens in the background. There is a non-blocking variant too – msg = uwsgi.websocket_recv_nb(). See: https://github.com/unbit/uwsgi/blob/master/tests/websockets_chat_async.py 6.21.5 PING/PONG To keep a websocket connection opened, you should constantly send ping (or pong, see later) to the browser and expect a response from it. 
If the response from the browser/client does not arrive in a timely fashion the connection is closed (uwsgi.websocket_recv() will raise an exception). In addition to ping, the uwsgi.websocket_recv() function send the so called ‘gratuitous pong’. They are used to inform the client of server availability. All of these tasks happen in background. YOU DO NOT NEED TO MANAGE THEM! 6.21.6 Available proxies Unfortunately not all of the HTTP webserver/proxies work flawlessly with websockets. • The uWSGI HTTP/HTTPS/SPDY router supports them without problems. Just remember to add the --http-websockets option. 6.21. WebSocket support 415 uWSGI Documentation, Release 2.0 uwsgi --http :8080 --http-websockets --wsgi-file myapp.py or uwsgi --http :8080 --http-raw-body --wsgi-file myapp.py This is slightly more “raw”, but supports things like chunked input. • Haproxy works fine. • nginx >= 1.4 works fine and without additional configuration. 6.21.7 Language support • Python https://github.com/unbit/uwsgi/blob/master/tests/websockets_echo.py • Perl https://github.com/unbit/uwsgi/blob/master/tests/websockets_echo.pl • PyPy https://github.com/unbit/uwsgi/blob/master/tests/websockets_chat_async.py • Ruby https://github.com/unbit/uwsgi/blob/master/tests/websockets_echo.ru • Lua https://github.com/unbit/uwsgi/blob/master/tests/websockets_echo.lua 6.21.8 Supported concurrency models • Multiprocess • Multithreaded • uWSGI native async api • Coro::AnyEvent • gevent • Ruby fibers + uWSGI async • Ruby threads • greenlet + uWSGI async • uGreen + uWSGI async • PyPy continulets 6.21.9 wss:// (websockets over https) The uWSGI HTTPS router works without problems with websockets. Just remember to use wss:// as the connection scheme in your client code. 6.21.10 Websockets over SPDY n/a 416 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 6.21.11 Routing The http proxy internal router supports websocket out of the box (assuming your front-line proxy already supports them) [uwsgi] route= ^/websocket uwsgi:127.0.0.1:3032,0,0 or [uwsgi] route= ^/websocket http:127.0.0.1:8080 6.21.12 Api uwsgi.websocket_handshake([key, origin, proto]) uwsgi.websocket_recv() uwsgi.websocket_send(msg) uwsgi.websocket_send_binary(msg) (added in 1.9.21 to support binary messages) uwsgi.websocket_recv_nb() uwsgi.websocket_send_from_sharedarea(id, pos) (added in 1.9.21, allows sending directly from a SharedArea – share memory pages between uWSGI components) uwsgi.websocket_send_binary_from_sharedarea(id, pos) (added in 1.9.21, allows sending directly from a SharedArea – share memory pages between uWSGI components) 6.22 The Metrics subsystem Available from 1.9.19. The uWSGI metrics subsystem allows you to manage “numbers” from your app. While the caching subsystem got some math capabilities during the 1.9 development cycle, the metrics subsystem is optimized by design for storing numbers and applying functions over them. So, compared to the caching subsystem it’s way faster and requires a fraction of the memory. When enabled, the metric subsystem configures a vast amount of metrics (like requests per-core, memory usage, etc) but, in addition to this, you can configure your own metrics, such as the number of active users or, say, hits of a particular URL, as well as the memory consumption of your app or the whole server. To enable the metrics subsystem just add --enable-metrics to your options, or configure a stats pusher (see below). The metrics subsystem is completely thread-safe. 
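As a quick sketch (addresses and the module name are arbitrary), enabling the subsystem next to the stats server makes the built-in metrics immediately visible:

[uwsgi]
http-socket = :9090
module = myapp:application
enable-metrics = true
; the JSON stats blob (including the metrics list) is served here
stats = 127.0.0.1:1717

You can then inspect the values with any JSON-aware client, for example by pointing uwsgitop at 127.0.0.1:1717.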
By default uWSGI creates a lot of metrics (and more are planned), so before adding your own be sure uWSGI does not already expose the one(s) you need. 6.22.1 Metric names and oids Each metric must have a name (containing only numbers, letters, underscores, dashes and dots) and an optional oid (required for mapping a metric to The embedded SNMP server). 6.22. The Metrics subsystem 417 uWSGI Documentation, Release 2.0 6.22.2 Metric types Before dealing with metrics you need to understand the various types represented by each metric: COUNTER (type 0) This is a generally-growing up number (like the number of requests). GAUGE (type 1) This is a number that can increase or decrease dynamically (like the memory used by a worker, or CPU load). ABSOLUTE (type 2) This is an absolute number, like the memory of the whole server, or the size of the hard disk. ALIAS (type 3) This is a virtual metric pointing to another one . You can use it to give different names to already existing metrics. 6.22.3 Metric collectors Once you define a metric type, you need to tell uWSGI how to ‘collect’ the specific metric. There are various collectors available (and more can be added via plugins). • ptr – The value is collected from a memory pointer • file – the value is collected from a file • sum – the value is the sum of other metrics • avg – compute the algebraic average of the children (added in 1.9.20) • accumulator – always add the sum of children to the final value. See below for an example. Round 1: child1 = 22, child2 = 17 -> metric_value = 39 Round 2: child1 = 26, child2 = 30 -> metric_value += 56 • multiplier - Multiply the sum of children by the specified argument (arg1n). child1 = 22, child2 = 17, arg1n = 3 -> metric_value = (22+17)*3 • func - the value is computed calling a specific function every time • manual - the NULL collector. The value must be updated manually from applications using the metrics API. 6.22.4 Custom metrics You can define additional metrics to manage from your app. The --metric option allows you to add more metrics. It has two syntaxes: “simplified” and “keyval”. 418 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 uwsgi --http-socket :9090 --metric foobar will create a metric ‘foobar’ with type ‘counter’, manual collector and no oid. For creating advanced metrics you need the keyval way: uwsgi --http-socket :9090 --metric name=foobar,type=gauge,oid=100.100.100 The following keys are available: • name – set the metric name • oid – set the metric oid • type – set the metric type, can be counter, gauge, absolute, alias • initial_value – set the metric to a specific value on startup • freq – set the collection frequency in seconds (default to 1) • reset_after_push – reset the metric to zero (or the configured initial_value) after it’s been pushed to the backend (so every freq seconds) • children – maps children to the metric (see below) • alias – the metric will be a simple alias for the specified one (–metric name=foobar,alias=worker.0.requests,type=alias) • arg1 to arg3 – string based arguments (see below) • arg1n to arg3n – number based arguments (see below) • collector set the collector, can be ptr, file, sum, func or anything exposed by plugins. Not specifying a collector means the metric is manual (your app needs to update it). The ptr is currently unimplemented, while the other collector requires a bit of additional configuration: collector=file requires arg1 for the filename and an optional arg1n for the so-called split value. 
uwsgi --metric name=loadavg,type=gauge,collector=file,arg1=/proc/loadavg,arg1n=1,freq=3 This will add a ‘loadavg‘ metric, of type gauge, updated every 3 seconds with the content of /proc/loadavg. The content is split (using \n, \t, spaces, \r and zero as separator) and the item 1 (the returned array is zero-based) used as the return value. The splitter is very powerful, making it possible to gather information from more complex files, such as /proc/meminfo. uwsgi --metric name=memory,type=gauge,collector=file,arg1=/proc/meminfo,arg1n=4,freq=3 Once split, /proc/meminfo has the MemFree value in the 4th slot. collector=sum requires the list of metrics that must be summed up. Each metric has the concept of ‘children’. The sum collector will sum the values of all of its children: uwsgi --metric name=reqs,collector=sum,children=worker.1.requests;worker.2.requests This will sum the value of worker.1.requests and worker.2.requests every second. collector=func is a convenience collector avoiding you to write a whole plugin for adding a new collector. Let’s define a C function (call the file mycollector.c or whatever you want): int64_t my_collector(void *metric) { return 173; } 6.22. The Metrics subsystem 419 uWSGI Documentation, Release 2.0 and build it as a shared library... gcc -shared -o mycollector.so mycollector.c now run uWSGI loading the library... uwsgi --dlopen ./mycollector.so --metric name=mine,collector=func,arg1=my_collector,freq=10 this will call the C function my_collector every 10 seconds and will set the value of the metric ‘mine’ to its return value. The function must returns an int64_t value. The argument it takes is a uwsgi_metric pointer. You generally do not need to parse the metric, so just casting to void will avoid headaches. 6.22.5 The metrics directory UNIX sysadmins love text files. They are generally the things they have to work on most of the time. If you want to make a UNIX sysadmin happy, just give him or her some text file to play with. (Or some coffee, or whiskey maybe, depending on their tastes. But generally, text files should do just fine.) The metrics subsystem can expose all of its metrics in the form of text files in a directory: uwsgi --metrics-dir mymetrics ... The directory must exist in advance. This will create a text file for each metric in the ‘mymetrics’ directory. The content of each file is the value of the metric (updated in real time). Each file is mapped into the process address space, so do not worry if your virtual memory increases slightly. 6.22.6 Restoring metrics (persistent metrics) When you restart a uWSGI instance, all of its metrics are reset. This is generally the best thing to do, but if you want, you can restore the previous situation using the values stored in the metrics directory defined before. Just add the --metrics-dir-restore option to force the metric subsystem to read-back the values from the metric directory before starting to collect values. 6.22.7 API Your language plugins should expose at least the following api functions. Currently they are implemented in Perl, CPython, PyPy and Ruby • metric_get(name) • metric_set(name, value) • metric_set_max(name, value) – only set the metric name if the give value is greater than the one currently stored • metric_set_min(name, value) – only set the metric name if the give value is lower than the one currently stored metric_set_max and metric_set_min can be used to avoid having to call metric_get when you need a metric to be set at a maximal or minimal value. 
Another simple use case is to use the avg collector to calculate an average between some max and min set metrics. 420 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 • metric_inc(name[, delta]) • metric_dec(name[, delta]) • metric_mul(name[, delta]) • metric_div(name[, delta]) • metrics (tuple/array of metric keys, should be immutable and not-callable, currently unimplemented) 6.22.8 Stats pushers Collected metrics can be sent to external systems for analysis or chart generation. Stats pushers are plugins aimed at sending metrics to those systems. There are two kinds of stats pushers at the moment: JSON and raw. The JSON stats pusher send the whole JSON stats blob (the same you get from the stats server), while ‘raw’ ones send the metrics list. Currently available stats pushers: rrdtool • Type: raw • Plugin: rrdtool (builtin by default) • Requires (during runtime): librrd.so • Syntax: --stats-push rrdtool:my_rrds ... This will store an rrd file for each metric in the specified directory. Each rrd file has a single data source named ‘metric’. Usage: uwsgi --rrdtool my_rrds ... # or uwsgi --stats-push rrdtool:my_rrds ... By default the RRD files are updated every 300 seconds. You can tune this value with --rrdtool-freq The librrd.so library is detected at runtime. If you need you can specify its absolute path with --rrdtool-lib. statsd • Type: raw • Plugin: stats_pusher_statsd • Syntax: --stats-push statsd:address[,prefix] Push metrics to a statsd server. Usage: uwsgi --stats-push statsd:127.0.0.1:8125,myinstance ... 6.22. The Metrics subsystem 421 uWSGI Documentation, Release 2.0 carbon • Type: raw • Plugin: carbon (built-in by default) • See: Integration with Graphite/Carbon zabbix • Type: raw • Plugin: zabbix • Syntax: --stats-push zabbix:address[,prefix] Push metrics to a zabbix server. The plugin exposes a --zabbix-template option that will generate a zabbix template (on stdout or in the specified file) containing all of the exposed metrics as trapper items. Note: On some Zabbix versions you will need to authorize the IP addresses allowed to push items. Usage: uwsgi --stats-push zabbix:127.0.0.1:10051,myinstance ... mongodb • Type: json • Plugin: stats_pusher_mongodb • Required (build time): libmongoclient.so • Syntax (keyval): --stats-push mongodb:addr=,collection=,freq= Push statistics (as JSON) the the specified MongoDB database. file • Type: json • Plugin: stats_pusher_file Example plugin storing stats JSON in a file. socket • Type: raw • Plugin: stats_pusher_socket (builtin by default) • Syntax: --stats-push socket:address[,prefix] Push metrics to a UDP server with the following format: ( is in the numeric form previously reported). Example: 422 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 uwsgi --stats-push socket:127.0.0.1:8125,myinstance ... 6.22.9 Alarms/Thresholds You can configure one or more “thresholds” for each metric. Once this limit is reached the specified alarm (see The uWSGI alarm subsystem (from 1.3)) is triggered. Once the alarm is delivered you may choose to reset the counter to a specific value (generally 0), or continue triggering alarms with a specified rate. [uwsgi] ... metric-alarm = key=worker.0.avg_response_time,value=2000,alarm=overload,rate=30 metric-alarm = key=loadavg,value=3,alarm=overload,rate=120 metric-threshold = key=mycounter,value=1000,reset=0 ... Specifying an alarm is not required. Using the threshold value to automatically reset a metric is perfectly valid. 
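Note that the "overload" alarm referenced above has to be registered separately via the alarm subsystem. A rough sketch (the alarm name and the shell command are illustrative; cmd: is one of the standard alarm handlers, which receives the alarm message on stdin):

[uwsgi]
; register an alarm named "overload" that pipes the alarm message to a shell command
alarm = overload cmd:mail -s 'uWSGI overload' sysop@example.com
; fire it when worker 0's average response time metric crosses 2000,
; with the rate value limiting how often the alarm is repeated
metric-alarm = key=worker.0.avg_response_time,value=2000,alarm=overload,rate=30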
Note: --metric-threshold and --metric-alarm are aliases for the same option. 6.22.10 SNMP integration The The embedded SNMP server server exposes metrics starting from the 1.3.6.1.4.1.35156.17.3 OID. For example to get the value of worker.0.requests: snmpget -v2c -c : 1.3.6.1.4.1.35156.17.3.0.1 Remember: only metrics with an associated OID can be used via SNMP. 6.22.11 Internal Routing integration The ‘’router_metrics” plugin (builtin by default) adds a series of actions to the internal routing subsystem. • metricinc:[,value] increase the • metricdec:[,value] decrease the • metricmul:[,value] multiply the • metricdiv:[,value] divide the • metricset:, set to In addition to action, a route var named “metric” is added. Example: [uwsgi] metric= mymetric route= ^/foo metricinc:mymetric route-run= log:the value of the metric ’mymetric’ is ${metric[mymetric]} log-format= %(time) - %(metric.mymetric) 6.22. The Metrics subsystem 423 uWSGI Documentation, Release 2.0 6.22.12 Request logging You can access metrics values from your request logging format using the %(metric.xxx) placeholder: [uwsgi] log-format= [hello] %(time) %(metric.worker.0.requests) 6.22.13 Officially Registered Metrics This is a work in progress. The best way to know which default metrics are exposed is enabling the stats server and querying it (or adding the --metrics-dir option). • worker/3 (exports information about workers, example worker.1.requests [or 3.1.1] reports the number of re- quests served by worker 1) • plugin/4 (namespace for metrics automatically added by plugins, example plugins.foo.bar) • core/5 (namespace for general instance informations) • router/6 (namespace for corerouters, example router.http.active_sessions) • socket/7 (namespace for sockets, example socket.0.listen_queue) • mule/8 (namespace for mules, example mule.1.signals) • spooler/9 (namespace for spoolers, example spooler.1.signals) • system/10 (namespace for system metrics, like loadavg or free memory) 6.22.14 OID assigment for plugins If you want to write a plugin that will expose metrics, please add the OID namespace that you are going to use to the list below and make a pull request first. This will ensure that all plugins are using unique OID namespaces. Prefix all plugin metric names with plugin name to ensure no conflicts if same keys are used in multiple plugins (example plugin.myplugin.foo.bar, worker.1.plugin.myplugin.foo.bar) • (3|4).100.1 - cheaper_busyness 6.22.15 External tools Check: https://github.com/unbit/unbit-bars 6.23 The Chunked input API An API for managing HTTP chunked input requests has been added in uWSGI 1.9.13. The API is very low-level to allow easy integration with standard apps. There are only two functions exposed: • chunked_read([timeout]) • chunked_read_nb() 424 Chapter 6. uWSGI Subsystems uWSGI Documentation, Release 2.0 This API is supported (from uWSGI 1.9.20) on CPython, PyPy and Perl. 6.23.1 Reading chunks To read a chunk (blocking) just run my $msg= uwsgi::chunked_read If no timeout is specified, the default one will be used. If you do not get a chunk in time, the function will croak (or raise an exception when under Python). Under non-blocking/async engines you may want to use my $msg= uwsgi::chunked_read_nb The function will soon return undef (or None on Python) if no chunks are available (and croak/raise an exception on error). 
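The same pattern under CPython, as a minimal sketch (it assumes the request body really is chunked and that the app runs inside uWSGI, where the uwsgi module is importable), shown before the full Perl example below:

import uwsgi

def application(env, start_response):
    # echo back each chunk of a chunked request body
    start_response('200 OK', [('Content-Type', 'text/plain')])
    while True:
        msg = uwsgi.chunked_read()
        # an empty chunk marks the end of the stream
        if not msg:
            break
        yield msg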
A full PSGI streaming echo example: # simple PSGI echo app reading chunked input sub streamer{ $responder= shift; # generate the headers and start streaming the response my $writer=$responder->([200,[’Content-Type’=> ’text/plain’]]); while(1){ my $msg= uwsgi::chunked_read; last unless $msg; $writer->write($msg); } $writer->close; } my $app= sub { return \&streamer; }; 6.23.2 Tuning the chunks buffer Before starting to read chunks, uWSGI allocates a fixed buffer for storing chunks. All of the messages are always stored in the same buffer. If a message bigger than the buffer is received, an exception will be raised. By default the buffer is limited to 1 MB. You can tune it with the --chunked-input-limit option (bytes). 6.23.3 Integration with proxies If you plan to put uWSGI behind a proxy/router be sure it supports chunked input requests (or generally raw HTTP requests). When using the uWSGI HTTP router just add –http-raw-body to support chunked input. HAProxy works out of the box. Nginx >= 1.4 supports chunked input. 6.23. The Chunked input API 425 uWSGI Documentation, Release 2.0 6.23.4 Options •--chunked-input-limit: the limit (in bytes) of a chunk message (default 1MB) •--chunked-input-timeout: the default timeout (in seconds) for blocking chunked_read (default to the same –socket-timeout value, 4 seconds) 6.23.5 Notes • Calling chunked API functions after having consumed even a single byte of the request body is wrong (this includes --post-buffering). • Chunked API functions can be called independently by the presence of “Transfer-Encoding: chunked” header. 426 Chapter 6. uWSGI Subsystems CHAPTER 7 Scaling with uWSGI 7.1 The uWSGI cheaper subsystem – adaptive process spawning uWSGI provides the ability to dynamically scale the number of running workers via pluggable algorithms. Use uwsgi --cheaper-algos-list to get the list of available algorithms. 7.1.1 Usage To enable cheaper mode add the cheaper = N option to the uWSGI configuration file, where N is the minimum number of workers uWSGI can run. The cheaper value must be lower than the maximum number of configured workers (workers or processes option). # set cheaper algorithm to use, if not set default will be used cheaper-algo= spare # minimum number of workers to keep at all times cheaper=2 # number of workers to spawn at startup cheaper-initial=5 # maximum number of workers that can be spawned workers= 10 # how many workers should be spawned at a time cheaper-step=1 This configuration will tell uWSGI to run up to 10 workers under load. If the app is idle uWSGI will stop workers but it will always leave at least 2 of them running. With cheaper-initial you can control how many workers should be spawned at startup. If your average load requires more than minimum number of workers you can have them spawned right away and then “cheaped” (killed off) if load is low enough. When the cheaper algorithm decides that it needs more workers it will spawn cheaper-step of them. This is useful if you have a high maximum number of workers – in the event of a sudden load spike it would otherwise take a lot of time to spawn enough workers one by one. 7.1.2 Setting memory limits Starting with 1.9.16 rss memory limits can be set to stop cheaper spawning new workers if process count limit was not reached, but total sum of rss memory used by all workers reached given limit. 
# soft limit will prevent cheaper from spawning new workers
# if workers' total rss memory is equal or higher
# we use a 128MB soft limit below (values are in bytes)
cheaper-rss-limit-soft= 134217728
# hard limit will force cheaper to cheap a single worker
# if workers' total rss memory is equal or higher
# we use a 160MB hard limit below (values are in bytes)
cheaper-rss-limit-hard= 167772160

Notes:
• The hard limit is optional; the soft limit alone can be used.
• The hard value must be higher than the soft value, and the two values shouldn't be too close to each other.
• The hard value should be the soft value plus at least the average memory usage of one worker for the given app.
• The soft value only limits cheaper: it won't spawn more workers, but the memory usage of already running workers might still grow. To handle that, reload-on-rss can be set as well. To set an unbreakable barrier for app memory usage, cgroups are recommended.

7.1.3 spare cheaper algorithm

This is the default algorithm. If all workers are busy for cheaper_overload seconds then uWSGI will spawn new workers. When the load is gone it will begin stopping processes one at a time.

7.1.4 backlog cheaper algorithm

Note: backlog is only available on Linux and only on TCP sockets (not UNIX domain sockets).

If the socket's listen queue has more than cheaper_overload requests waiting to be processed, uWSGI will spawn new workers. If the backlog is lower it will begin killing processes one at a time.

7.1.5 cheaper busyness algorithm

Note: This algorithm is optional, it is only available if the cheaper_busyness plugin is compiled and loaded.

This plugin implements an algorithm which adds or removes workers based on average utilization over a given time period. Its goal is to keep more workers than the minimum needed available at any given time, so the app will always have capacity for new requests. If you want to run only the minimum number of workers then use the spare or backlog algorithms.

This plugin is used primarily because the way the spare and backlog algorithms work causes very aggressive scaling behavior. If you set a low cheaper value (for example 1), then uWSGI will keep only 1 worker running and spawn new workers only when that running worker is overloaded. If an app requires more workers, then uWSGI will be spawning and stopping workers all the time. Only during times of very low load would the minimum number of workers be enough.

The Busyness algorithm tries to do the opposite: spawn as many workers as needed and stop some of them only when there is a good chance that they are not needed. This should lead to a more stable worker count and far fewer respawns. Since most of the time we have more worker capacity than actually needed, average application response times should be lower than with other plugins.

Options:

cheaper-overload
Specifies the window, in seconds, for tracking the average busyness of workers. Example:

cheaper-overload= 30

This option will check busyness every 30 seconds. If during the last 30 seconds all workers were running for 3 seconds and idle for the remaining 27 seconds, the calculated busyness will be 10% (3/30). This value decides how fast uWSGI can respond to load spikes. New workers will be spawned at most every cheaper-overload seconds (unless you are running uWSGI on Linux – see cheaper-busyness-backlog-alert for details). If you want to react to load spikes faster, keep this value low so busyness is calculated more often.
Keep in mind this may cause workers to be started/stopped more often than required, since every minor spike may spawn new workers. With a high cheaper-overload value the worker count will change much less, since longer cycles will absorb short spikes of load and extreme values. The default is 3; for the busyness plugin it's best to use a higher value (10-30).

cheaper-step
How many workers to spawn when the algorithm decides they are needed. Default is 1.

cheaper-initial
The number of workers to be started when starting the application. After the app is started the algorithm can stop or start workers if needed.

cheaper-busyness-max
This is the maximum busyness we allow. Every time the calculated busyness for the last cheaper-overload seconds is higher than this value, uWSGI will spawn cheaper-step new workers. Default is 50.

cheaper-busyness-min
This is the minimum busyness. If current busyness is below this value, the app is considered to be in an "idle cycle" and uWSGI will start counting them. Once we reach the needed number of idle cycles uWSGI will kill one worker. Default is 25.

cheaper-busyness-multiplier
This option tells uWSGI how many idle cycles we need before stopping a worker. After reaching this limit uWSGI will stop a worker and reset this counter. For example:

cheaper-overload= 10
cheaper-busyness-multiplier= 20
cheaper-busyness-min= 25

If average worker busyness is under 25% for 20 checks in a row, executed every 10 seconds (200 seconds in total), one worker will be stopped. The idle cycles counter will be reset if average busyness jumps above cheaper-busyness-max and we spawn new workers. If during idle cycle counting the average busyness jumps above cheaper-busyness-min but stays below cheaper-busyness-max, the idle cycles counter is adjusted and we need to wait one extra idle cycle. If during idle cycle counting the average busyness jumps above cheaper-busyness-min but stays below cheaper-busyness-max three times in a row, the idle cycles counter is reset.

cheaper-busyness-penalty
uWSGI will automatically tune the number of idle cycles needed to stop a worker: when a worker is stopped due to enough idle cycles and then a new one is spawned back too quickly (in less time than it took to cheap that worker), the cheaper-busyness-multiplier value will be incremented by this value. Default is 1. Example:

cheaper-overload= 10
cheaper-busyness-multiplier= 20
cheaper-busyness-min= 25
cheaper-busyness-penalty= 2

If average worker busyness is under 25% for 20 checks in a row, executed every 10 seconds (200 seconds in total), one worker will be stopped. If a new worker is spawned in less than 200 seconds (counting from the time when we spawned the last worker before it), the cheaper-busyness-multiplier value will be incremented to 22 (20+2). Now we will need to wait 220 seconds (22*10) to cheap another worker. This option is used to prevent workers from being started and stopped all the time, since once we stop one worker busyness might jump up enough to hit cheaper-busyness-max. Without this, or if it is tuned poorly, we can get into a stop/start feedback loop.

cheaper-busyness-verbose
This option enables debug logs from the cheaper_busyness plugin.

cheaper-busyness-backlog-alert
This option is only available on Linux. It is used to allow quick response to load spikes even with high cheaper-overload values. On every uWSGI master cycle (default 1 second) the current listen queue is checked.
If it is higher than this value, an emergency worker is spawned. When using this option it is safe to use high cheaper-overload values to have smoother scaling of the worker count. Default is 33.

cheaper-busyness-backlog-multiplier
This option is only available on Linux. It works just like cheaper-busyness-multiplier, except it is used only for emergency workers spawned when the listen queue was higher than cheaper-busyness-backlog-alert. Emergency workers are spawned in case of a big load spike to prevent the currently running workers from being overloaded. Sometimes load spikes are random and short, which can spawn a lot of emergency workers. In such cases we would need to wait several cycles before reaping those workers. This provides an alternate multiplier to reap these processes faster. Default is 3.

cheaper-busyness-backlog-step
This option is only available on Linux. It sets the number of emergency workers spawned when the listen queue is higher than cheaper-busyness-backlog-alert. Defaults to 1.

cheaper-busyness-backlog-nonzero
This option is only available on Linux. It will spawn new emergency workers if the request listen queue is > 0 for more than N seconds. It is used to protect the server from the corner case where there is only a single worker running and that worker is handling a long running request. If uWSGI receives new requests they would stay in the request queue until that long running request is completed. With this option we can detect such a condition and spawn a new worker to prevent queued requests from being timed out. Default is 60.

Notes regarding Busyness

• Experiment with settings, there is no one golden rule of what values should be used for everyone. Test and pick values that are best for you. Monitoring uWSGI stats (via Carbon, for instance) will make it easy to decide on good values.
• Don't expect busyness to be constant. It will change frequently. In the end, real users interact with your apps in a very random way. It's recommended to use longer --cheaper-overload values (>=30) to have fewer spikes.
• If you want to run some benchmarks with this plugin, you should use tools that add randomness to the workload.
• With a low number of workers (2-3), starting a new worker or stopping one might affect busyness a lot: if you have 2 workers with a busyness of 50%, then stopping one of them will increase busyness to 100%. Keep that in mind when picking min and max levels; with only a few workers running, most of the time max should be more than double of min, otherwise every time one worker is stopped busyness might rise above the max level.
• With a low number of workers (1-4) and default settings, expect this plugin to keep average busyness below the minimum level; adjust levels to compensate for this.
• With a higher number of workers required to handle the load, the worker count should stabilize somewhere near the minimum busyness level, jumping a little bit around this value.
• When experimenting with this plugin it is advised to enable --cheaper-busyness-verbose to get an idea of what it is doing. An example log follows.
# These messages are logged at startup to show current settings [busyness] settings: min=20%, max=60%, overload=20, multiplier=15, respawn penalty=3 [busyness] backlog alert is set to 33 request(s) # With --cheaper-busyness-verbose enabled You can monitor calculated busyness [busyness] worker nr1 20s average busyness is at 11% [busyness] worker nr2 20s average busyness is at 11% [busyness] worker nr3 20s average busyness is at 20% [busyness] 20s average busyness of3 worker(s) is at 14% # Average busyness is under 20%, we start counting idle cycles # we have overload=20 and multiplier=15 so we need to wait 300 seconds before we can stop worker # cycle we just had was counted as idle so we need to wait another 280 seconds # 1 missing second below is just from rounding, master cycle is every 1 second but it also takes some time, this is normal [busyness] need to wait 279 more second(s) to cheap worker # We waited long enough and we can stop one worker [busyness] worker nr1 20s average busyness is at6% [busyness] worker nr2 20s average busyness is at 22% [busyness] worker nr3 20s average busyness is at 19% [busyness] 20s average busyness of3 worker(s) is at 15% [busyness] 20s average busyness is at 15%, cheap one of3 running workers # After stopping one worker average busyness is now higher, which is no surprise 7.1. The uWSGI cheaper subsystem – adaptive process spawning 431 uWSGI Documentation, Release 2.0 [busyness] worker nr2 20s average busyness is at 36% [busyness] worker nr3 20s average busyness is at 24% [busyness] 20s average busyness of2 worker(s) is at 30% # 30% is above our minimum (20%), but it’s still far from our maximum (60%) # since this is not idle cycle uWSGI will ignore it when counting when to stop worker [busyness] 20s average busyness is at 30%,1 non-idle cycle(s), adjusting cheaper timer # After a while our average busyness is still low enough, so we stop another worker [busyness] 20s average busyness is at3%, cheap one of2 running workers # With only one worker running we won’t see per worker busyness since it’s the same as total average [busyness] 20s average busyness of1 worker(s) is at 16% [busyness] 20s average busyness of1 worker(s) is at 17% # Shortly after stopping second worker and with only one running we have load spike that is enough to hit our maximum level # this was just few cycles after stopping worker so uWSGI will increase multiplier # now we need to wait extra 3 cycles before stopping worker [busyness] worker(s) respawned to fast, increasing cheaper multiplier to 18(+3) # Initially we needed to wait only 300 seconds, now we need to have 360 subsequent seconds when workers busyness is below minimum level # 10*20 + 3*20 = 360 [busyness] worker nr1 20s average busyness is at9% [busyness] worker nr2 20s average busyness is at 17% [busyness] worker nr3 20s average busyness is at 17% [busyness] worker nr4 20s average busyness is at 21% [busyness] 20s average busyness of4 worker(s) is at 16% [busyness] need to wait 339 more second(s) to cheap worker 7.2 The uWSGI Emperor – multi-app deployment If you need to deploy a big number of apps on a single server, or a group of servers, the Emperor mode is just the ticket. It is a special uWSGI instance that will monitor specific events and will spawn/stop/reload instances (known as vassals, when managed by an Emperor) on demand. By default the Emperor will scan specific directories for supported (.ini, .xml, .yml, .json, etc.) uWSGI configuration files, but it is extensible using imperial monitor plugins. 
The dir:// and glob:// plugins are embedded in the core, so they need not be loaded, and are automatically detected. The dir:// plugin is the default.
• Whenever an imperial monitor detects a new configuration file, a new uWSGI instance will be spawned with that configuration.
• Whenever a configuration file is modified (its modification time changed, so touch --no-dereference may be your friend), the corresponding app will be reloaded.
• Whenever a config file is removed, the corresponding app will be stopped.
• If the emperor dies, all the vassals die.
• If a vassal dies for any reason, the emperor will respawn it.
Multiple sources of configuration may be monitored by specifying --emperor multiple times.
See also: See Imperial monitors for a list of the Imperial Monitor plugins shipped with uWSGI and how to use them.

7.2.1 Imperial monitors

dir:// – scan a directory for uWSGI config files

Simply put all of your config files in a directory, then point the uWSGI Emperor to it. The Emperor will start scanning this directory; when it finds a valid config file it will spawn a new uWSGI instance. For our example, we're deploying a Werkzeug test app, a Trac instance, a Ruby on Rails app and a Django app.

werkzeug.xml
<uwsgi>
    <module>werkzeug.testapp:test_app</module>
    <processes>4</processes>
    <socket>127.0.0.1:3031</socket>
</uwsgi>

trac.ini
[uwsgi]
master = true
processes = 2
module = trac.web.main:dispatch_request
env = TRAC_ENV=/opt/project001
socket = 127.0.0.1:3032

rails.yml
uwsgi:
    plugins: rack
    rack: config.ru
    master: 1
    processes: 8
    socket: 127.0.0.1:3033
    post-buffering: 4096
    chdir: /opt/railsapp001

django.ini
[uwsgi]
socket = 127.0.0.1:3034
threads = 40
master = 1
env = DJANGO_SETTINGS_MODULE=myapp.settings
module = django.core.handlers.wsgi:WSGIHandler()
chdir = /opt/djangoapp001

Put these 4 files in a directory, for instance /etc/uwsgi/vassals in our example, then spawn the Emperor:

uwsgi --emperor /etc/uwsgi/vassals

The Emperor will find the uWSGI instance configuration files in that directory (the dir:// plugin declaration is implicit) and start the daemons needed to run them.

glob:// – monitor a shell pattern

glob:// is similar to dir://, but a glob expression must be specified:

uwsgi --emperor "/etc/vassals/domains/*/conf/uwsgi.xml"
uwsgi --emperor "/etc/vassals/*.ini"

Note: Remember to quote the pattern, otherwise your shell will most likely interpret it and expand it at invocation time, which is not what you want.

As the Emperor can search for configuration files in subdirectory hierarchies, you could have a structure like this:

/opt/apps/app1/app1.xml
/opt/apps/app1/...all the app files...
/opt/apps/app2/app2.ini
/opt/apps/app2/...all the app files...

and run uWSGI with:

uwsgi --emperor /opt/apps/app*/app*.*

pg:// – scan a PostgreSQL table for configuration

You can specify a query to run against a PostgreSQL database. Its result must be a list of 3 to 5 fields defining a vassal:
1. The instance name, including a valid uWSGI config file extension (such as django-001.ini).
2. A TEXT blob containing the vassal configuration, in the format implied by the extension in field 1.
3. A number representing the modification time of this row in UNIX format (seconds since the epoch).
4. The UID of the vassal instance. Required in Tyrant mode (secure multi-user hosting) only.
5. The GID of the vassal instance. Required in Tyrant mode (secure multi-user hosting) only.
uwsgi --plugin emperor_pg --emperor "pg://host=127.0.0.1 user=foobar dbname=emperor;SELECT name,config,ts FROM vassals" • Whenever a new tuple is added a new instance is created and spawned with the config specified in the second field. • Whenever the modification time field changes, the instance is reloaded. • If a tuple is removed, the corresponding vassal will be destroyed. mongodb:// – Scan MongoDB collections for configuration uwsgi --plugin emperor_mongodb --emperor "mongodb://127.0.0.1:27107,emperor.vassals,{enabled:1}" This will scan all of the documents in the emperor.vassals collection having the field enabled set to 1. An Emperor-compliant document must define three fields: name, config and ts. In Tyrant mode (secure multi-user hosting) mode, 2 more fields are required. • name (string) is the name of the vassal (remember to give it a valid extension, like .ini) • config (multiline string) is the vassal config in the format described by the name‘s extension. • ts (date) is the timestamp of the config (Note: MongoDB internally stores the timestamp in milliseconds.) • uid (number) is the UID to run the vassal as. Required in Tyrant mode (secure multi-user hosting) mode only. • gid (number) is the GID to run the vassal as. Required in Tyrant mode (secure multi-user hosting) mode only. 434 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 amqp:// – Use an AMQP compliant message queue to announce events Set your AMQP (RabbitMQ, for instance) server address as the –emperor argument: uwsgi --plugin emperor_amqp --emperor amqp://192.168.0.1:5672 Now the Emperor will wait for messages in the uwsgi.emperor exchange. This should be a fanout type exchange, but you can use other systems for your specific needs. Messages are simple strings containing the absolute path of a valid uWSGI config file. # The pika module is used in this example, but you’re free to use whatever adapter you like. import pika # connect to RabbitMQ server connection= pika.BlockingConnection(pika.ConnectionParameters(’192.168.0.1’)) # get the channel channel= connection.channel() # create the exchange (if not already available) channel.exchange_declare(exchange=’uwsgi.emperor’, type=’fanout’) # publish a new config file channel.basic_publish(exchange=’uwsgi.emperor’, routing_key=’’, body=’/etc/vassals/mydjangoapp.xml’) The first time you launch the script, the emperor will add the new instance (if the config file is available). From now on every time you re-publish the message the app will be reloaded. When you remove the config file the app is removed too. Tip: You can subscribe all of your emperors in the various servers to this exchange to allow cluster-synchronized reloading/deploy. AMQP with HTTP uWSGI is capable of loading configuration files over HTTP. This is a very handy way to dynamically generate config- uration files for massive hosting. Simply declare the HTTP URL of the config file in the AMQP message. Remember that it must end with one of the valid config extensions, but under the hood it can be generated by a script. If the HTTP URL returns a non-200 status code, the instance will be removed. channel.basic_publish(exchange=’uwsgi.emperor’, routing_key=’’, body=’http://example.com/confs/trac.ini’) Direct AMQP configuration Configuration files may also be served directly over AMQP. The routing_key will be the (virtual) config filename, and the message will be the content of the config file. 
channel.basic_publish( exchange=’uwsgi.emperor’, routing_key=’mydomain_trac_config.ini’, body=""" [uwsgi] socket=:3031 env = TRAC_ENV=/accounts/unbit/trac/uwsgi module = trac.web.main:dispatch_request processes = 4""") The same reloading rules of previous modes are valid. When you want to remove an instance simply set an empty body as the “configuration”. 7.2. The uWSGI Emperor – multi-app deployment 435 uWSGI Documentation, Release 2.0 channel.basic_publish(exchange=’uwsgi.emperor’, routing_key=’mydomain_trac_config.ini’, body=’’) zmq:// – ZeroMQ The Emperor binds itself to a ZeroMQ PULL socket, ready to receive commands. uwsgi --plugin emperor_zeromq --emperor zmq://tcp://127.0.0.1:5252 Each command is a multipart message sent over a PUSH zmq socket. A command is composed by at least 2 parts: command and name command is the action to execute, while name is the name of the vassal. 3 optional parts can be specified. • config (a string containing the vassal config) • uid (the user id to drop priviliges to in case of tyrant mode) • gid (the group id to drop priviliges to in case of tyrant mode) There are 2 kind of commands (for now): • touch • destroy The first one is used for creating and reloading instances while the second is for destroying. If you do not specify a config string, the Emperor will assume you are referring to a static file available in the Emperor current directory. import zmq c= zmq.Context() s= zmq.Socket(c, zmq.PUSH) s.connect(’tcp://127.0.0.1:5252’) s.send_multipart([’touch’,’foo.ini’,"[uwsgi]\nsocket=:4142"]) zoo:// – Zookeeper Currently in development. ldap:// – LDAP Currently in development. 7.2.2 The Emperor protocol As of 1.3 you can spawn custom applications via the Emperor. Non-uWSGI Vassals should never daemonize, to maintain a link with the Emperor. If you want/need better integration with the Emperor, implement the Emperor protocol. The protocol An environment variable UWSGI_EMPEROR_FD is passed to every vassal, containing a file descriptor number. 436 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 import os has_emperor= os.environ.get(’UWSGI_EMPEROR_FD’) if has_emperor: print "I’m a vassal snake!" Or in Perl, my $has_emperor = $ENV{’UWSGI_EMPEROR_FD’} if ($has_emperor) { print "I am a vassal.\n" } Or in C, int emperor_fd=-1; char *has_emperor= getenv("UWSGI_EMPEROR_FD"); if (has_emperor) { emperor_fd= atoi(has_emperor); fprintf(stderr,"I am a vassal. \n"); } From now you can receive (and send) messages from (and to) the Emperor over this file descriptor. Messages are byte sized (0-255), and each number (byte) has a meaning. 0 Sent by the Emperor to stop a vassal 1 Sent by the Emperor to reload a vassal / sent by a vassal when it has been spawned 2 Sent by a vassal to ask the Emperor for configuration chunk 5 Sent by a vassal when it is ready to accept requests 17 Sent by a vassal after the first request to announce loyalty 22 Sent by a vassal to notify the Emperor of voluntary shutdown 26 Heartbeat sent by the vassal. After the first received heartbeat, the Emperor will expect more of them from the vassal. 30 Sent by the vassal to ask for Auto-scaling with Broodlord mode mode. 7.2.3 Special configuration variables Using Placeholders and Magic variables in conjunction with the Emperor will probably save you a lot of time and make your configuration more DRY. Suppose that in /opt/apps there are only Django apps. 
/opt/apps/app.skel (the .skel extension is not a known configuration file type to uWSGI and will be skipped) [uwsgi] chdir= /opt/apps/%n master= true threads= 20 socket= /tmp/sockets/%n.sock env= DJANGO_SETTINGS_MODULE=%n.settings module= django.core.handlers.wsgi:WSGIHandler() And then for each app create a symlink: ln-s/opt/apps/app.skel/opt/apps/app1.ini ln-s/opt/apps/app.skel/opt/apps/app2.ini Finally, start the Emperor with the --emperor-nofollow option. Now you can reload each vassal separately with the command: 7.2. The uWSGI Emperor – multi-app deployment 437 uWSGI Documentation, Release 2.0 touch --no-dereference $INI_FILE 7.2.4 Passing configuration parameters to all vassals Starting from 1.9.19 you can pass options using the --vassal-set facility [uwsgi] emperor= /etc/uwsgi/vassals vassal-set= processes=8 vassal-set= enable-metrics=1 this will add --set processes=8 and --set enable-metrics=1 to each vassal You can force the Emperor to pass options to uWSGI instances using environment variables too. Every environment variable of the form UWSGI_VASSAL_xxx will be rewritten in the new instance as UWSGI_xxx, with the usual configuration implications. For example: UWSGI_VASSAL_SOCKET=/tmp/%n.sock uwsgi --emperor /opt/apps will let you avoid specifying the socket option in configuration files. Alternatively, you can use the --vassals-include option let each vassal automatically include a complete config file: uwsgi--emperor/opt/apps--vassals-include/etc/uwsgi/vassals-default.ini Note that if you do this, %n (and other magic variables) in the included file will resolve to the name of the included file, not the original vassal configuration file. If you want to set options in the included file using the vassal name, you’ll have to use placeholders. For example, in the vassal config, you write: [uwsgi] vassal_name = %n ... more options In the vassal-defaults.ini, you write: [uwsgi] socket = /tmp/sockets/%(vassal_name).sock 7.2.5 Tyrant mode (secure multi-user hosting) The emperor is normally be run as root, setting the UID and GID in each instance’s config. The vassal instance then drops privileges before serving requests. In this mode, if your users have access to their own uWSGI configuration files, you can’t trust them to set the correct uid and gid. You could run the emperor as unprivileged user (with uid and gid) but all of the vassals would then run under the same user, as unprivileged users are not able to promote themselves to other users. For this case the Tyrant mode is available – just add the emperor-tyrant option. In Tyrant mode the Emperor will run the vassal with the UID/GID of its configuration file (or for other Imperial Monitors, by some other method of configuration). If Tyrant mode is used, the vassal configuration files must have UID/GID > 0. An error will occur if the UID or GID is zero, or if the UID or GID of the configuration of an already running vassal changes. 438 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 Tyrant mode for paranoid sysadmins (Linux only) If you have built a uWSGI version with Setting POSIX Capabilities options enabled, you can run the Emperor as unprivileged user but maintaining the minimal amount of root-capabilities needed to apply the tyrant mode [uwsgi] uid= 10000 gid= 10000 emperor= /tmp emperor-tyrant= true cap= setgid,setuid 7.2.6 On demand vassals (socket activation) Inspired by the venerable xinetd/inetd approach, you can spawn your vassals only after the first connection to a specific socket. 
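A rough sketch of the usual setup follows; the emperor-on-demand option name and its exact behaviour are assumptions based on the 1.9.1 changelog referenced next, so verify them against your uWSGI version:

[uwsgi]
emperor = /etc/uwsgi/vassals
; (assumed behaviour) for a vassal named foo.ini the Emperor itself binds
; /var/run/on-demand/foo.socket and only spawns the instance at the first
; connection to that socket
emperor-on-demand-directory = /var/run/on-demand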
This feature is available as of 1.9.1. Check the changelog for more information: uWSGI 1.9.1 7.2.7 Loyalty As soon as a vassal manages a request it will became “loyal”. This status is used by the Emperor to identify bad- behaving vassals and punish them. 7.2.8 Throttling Whenever two or more vassals are spawned in the same second, the Emperor will start a throttling subsystem to avoid fork bombing. The system adds a throttle delta (specified in milliseconds via the OptionEmperorThrottle option) whenever it happens, and waits for that duration before spawning a new vassal. Every time a new vassal spawns without triggering throttling, the current throttling duration is halved. 7.2.9 Blacklist system Whenever a non-loyal vassal dies, it is put in a shameful blacklist. When in a blacklist, that vassal will be throttled up to a maximum value (tunable via OptionEmperorMaxThrottle), starting from the default throttle delta of 3. Whenever a blacklisted vassal dies, its throttling value is increased by the delta (OptionEmperorThrottle). 7.2.10 Heartbeat system Vassals can voluntarily ask the Emperor to monitor their status. Workers of heartbeat-enabled vassals will send “heart- beat” messages to the Emperor. If the Emperor does not receive heartbeats from an instance for more than N (default 30, OptionEmperorRequiredHeartbeat) seconds, that instance will be considered hung and thus reloaded. To enable sending of heartbeat packet in a vassal, add the OptionHeartbeat option. Important: If all of your workers are stuck handling perfectly legal requests such as slow, large file uploads, the Emperor will trigger a reload as if the workers are hung. The reload triggered is a graceful one, so you can be able to tune your config/timeout/mercy for sane behaviour. 7.2. The uWSGI Emperor – multi-app deployment 439 uWSGI Documentation, Release 2.0 7.2.11 Using Linux namespaces for vassals On Linux you can tell the Emperor to run vassals in “unshared” contexts. That means you can run each vassal with a dedicated view of the filesystems, ipc, uts, networking, pids and uids. Things you generally do with tools like lxc or its abstractions like docker are native in uWSGI. For example if you want to run each vassals in a new namespace: [uwsgi] emperor= /etc/uwsgi/vassals emperor-use-clone= fs,net,ipc,pid,uts now each vassal will be able to modify the filesystem layout, networking, hostname and so on without damaging the main system. A couple of helper daemons are included in the uWSGI distribution to simplify management of jailed vassals. Most notably The TunTap Router allows full user-space networking in jails, while the forkpty router allows allocation of pseudoterminals in jails It is not needed to unshare all of the subsystem in your vassals, sometimes you only want to give dedicated ipc and hostname to a vassal and hide from the processes list: [uwsgi] emperor= /etc/uwsgi/vassals emperor-use-clone= fs,ipc,pid,uts a vassal could be: [uwsgi] ; set the hostname exec-as-root= hostname foobar ; umount /proc and remount to hide processes ; as we are in the ’fs’ namespace umounting /proc does not interfere with the main one exec-as-root= umount /proc exec-as-root= mount -t proc none /proc ; drop privileges uid= foobar gid= foobar ; bind to the socket socket= /tmp/myapp.socket psgi= myapp.pl 7.2.12 The Imperial Bureau of Statistics You can enable a statistics/status service for the Emperor by adding the OptionEmperorStats option with a TCP ad- dress. 
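For instance, assuming the option referenced above maps to emperor-stats (treat the exact spelling as something to confirm for your build):

[uwsgi]
emperor = /etc/uwsgi/vassals
; serve Emperor statistics as JSON on this TCP address
emperor-stats = 127.0.0.1:5555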
By connecting to that address, you’ll get a JSON-format blob of statistics. 7.2.13 Running non-uWSGI apps or using alternative uWSGIs as vassals You can exec() a different binary as your vassal using the privileged-binary-patch/unprivileged-binary-patch options. The first one patches the binary after socket inheritance and shared socket initialization (so you will be able to use uWSGI-defined sockets). The second one patches the binary after privileges drop. In this way you will be able to use uWSGI’s UID/GID/chroot/namespace/jailing options. The binary is called with the same arguments that were passed to the vassal by the Emperor. 440 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 ; i am a special vassal calling a different binary in a new linux network namespace [uwsgi] uid= 1000 gid= 1000 unshare= net unprivileged-binary-patch= /usr/bin/myfunnyserver Important: DO NOT DAEMONIZE your apps. If you do so, the Emperor will lose its connection with them. The uWSGI arguments are passed to the new binary. If you do not like that behaviour (or need to pass custom arguments) add -arg to the binary patch option, yielding: ; i am a special vassal calling a different binary in a new linux network namespace ; with custom options [uwsgi] uid= 1000 gid= 1000 unshare= net unprivileged-binary-patch-arg= ps aux or: ;nginx example [uwsgi] privileged-binary-patch-arg= nginx -g "daemon off;" See also: Your custom vassal apps can also speak with the emperor using the emperor protocol. 7.2.14 Integrating the Emperor with the FastRouter The FastRouter is a proxy/load-balancer/router speaking The uwsgi Protocol. Yann Malet from Lincoln Loop has released a draft about massive Emperor + Fastrouter deployment (PDF) using The uWSGI caching framework as a hostname to socket mapping storage. 7.2.15 Notes • At startup, the emperor chdir() to the vassal dir. All vassal instances will start from here. • If the uwsgi binary is not in your system path you can force its path with binary-path: ./uwsgi --emperor /opt/apps --binary-path /opt/uwsgi/uwsgi • Sending SIGUSR1 to the emperor will print vassal status in its log. • Stopping (SIGINT/SIGTERM/SIGQUIT) the Emperor will invoke Ragnarok and kill all the vassals. • Sending SIGHUP to the Emperor will reload all vassals. • The emperor should generally not be run with --master, unless master features like advanced logging are specifically needed. • The emperor should generally be started at server boot time and left alone, not reloaded/restarted except for uWSGI upgrades; emperor reloads are a bit drastic, reloading all vassals at once. Instead vassals should be reloaded individually when needed, in the manner of the imperial monitor in use. 7.2. The uWSGI Emperor – multi-app deployment 441 uWSGI Documentation, Release 2.0 7.2.16 Todo • Docs-TODO: Clarify what the “chdir-on-startup” behavior does with non-filesystem monitors. • Export more magic vars • Add support for multiple sections in xml/ini/yaml files (this will allow to have a single config file for multiple instances) 7.3 Auto-scaling with Broodlord mode Broodlord (taken from Starcraft, like Zerg mode mode) is a way for vassals to ask for more workers from the Emperor. Broodlord mode alone is not very useful. However, when combined with Zerg mode, Idle and The uWSGI Emperor – multi-app deployment it can be used to implement auto-scaling for your apps. 7.3.1 A ‘simple’ example We’ll start apps with a single worker, adding resources on demand. 
Broodlord mode expects an additional stanza in your config file to be used for zergs. [uwsgi] socket= :3031 master= true vassal-sos-backlog= 10 module= werkzeug.testapp:test_app processes=1 zerg-server= /tmp/broodlord.sock disable-logging= true [zerg] zerg= /tmp/broodlord.sock master= true module= werkzeug.testapp:test_app processes=1 disable-logging= true idle= 30 die-on-idle= true The vassal-sos-backlog option (supported only on Linux and TCP sockets) will ask the Emperor for zergs when the listen queue is higher than the given value. By default the value is 10. More “vassal-sos-” options will be added in the future to allow for more specific detect-overload systems. The [zerg] stanza is the config the Emperor will run when a vassal requires resources. The die-on-idle option will completely destroy the zerg when inactive for more than 30 seconds. This configuration shows how to combine the various uWSGI features to implement different means of scaling. To run the Emperor we need to specify how many zerg instances can be run: uwsgi --emperor /etc/vassals --emperor-broodlord 40 This will allow you to run up to 40 additional zerg workers for your apps. 7.3.2 –vassal-sos This has been added in 2.0.7, and allows the vassal to ask for reinforcement as soon as all of its workers are busy. 442 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 The option takes a value (integer) that is the number of seconds to wait before asking for a new reinforcement. 7.3.3 Manually asking for reinforcement You can use the master fifo (with command ‘B’) To force an instance to ask for reinforcement by the Emperor echo B > /var/run/master.fifo 7.3.4 Under the hood (aka: hacking broodlord mode) Technically broodlord mode is a simple message sent by a vassal to force the Emperor to spawn another vassal with ‘:zerg’ suffix as the instance name. Albeit the suffix is ‘:zerg’ this does not mean you need to use zerg mode. A ‘zerg’ instance could be a completely independent one simply subscribing to a router, or binding to a SO_REUSEPORT socket. This is an example with subscription system [uwsgi] socket= 127.0.0.1:0 subscribe2= server=127.0.0.1:4040,key=foobar.it psgi= app.pl processes=4 vassal-sos=3 [zerg] socket= 127.0.0.1:0 subscribe2= server=127.0.0.1:4040,key=foobar.it psgi= app.pl idle= 60 processes=1 7.4 Zerg mode Note: Yes, that’s Zerg as in the “quantity-over-quality” Starcraft race. If you haven’t played Starcraft, be prepared for some nonsense. Note: Also note that this nonsense is mostly limited to the nomenclature. Zerg Mode is serious business. When your site load is variable, it would be nice to be able to add workers dynamically. You can obviously edit your configuration to hike up workers and reload your uWSGI instance, but for very loaded apps this is undesirable, and frankly – who wants to do manual work like that to scale an app? Enabling Zerg mode you can allow “uwsgi-zerg” instances to attach to your already running server and help it in the work. Zerg mode is obviously local only. You cannot use it to add remote instances – this is a job better done by the The uWSGI FastRouter, the HTTP plugin or your web server’s load balancer. 7.4. Zerg mode 443 uWSGI Documentation, Release 2.0 7.4.1 Enabling the zerg server If you want an uWSGI instance to be rushed by zerg, you have to enable the Zerg server. It will be bound to an UNIX socket and will pass uwsgi socket file descriptors to the Zerg workers connecting to it. 
Warning: The socket must be an UNIX socket because it must be capable of passing through file descriptors. A TCP socket simply will not work. For security reasons the UNIX socket does not inherit the chmod-socket option, but will always use the current umask. If you have filesystem permission issues, on Linux you can use the UNIX sockets in abstract namespace, by prepending an @ to the socket name. • A normal UNIX socket: ./uwsgi -M -p 8 --module welcome --zerg-server /var/run/mutalisk • A socket in a Linux abstract namespace: ./uwsgi -M -p 8 --module welcome --zerg-server @nydus 7.4.2 Attaching zergs to the zerg server To add a new instance to your zerg pool, simply use the –zerg option ./uwsgi --zerg /var/run/mutalisk --master --processes 4 --module welcome # (or --zerg @nydus, following the example above) In this way 4 new workers will start serving requests. When your load returns to normal values, you can simply shutdown all of the uwsgi-zerg instances without problems. You can attach an unlimited number of uwsgi-zerg instances. 7.4.3 Fallback if a zerg server is not available By default a Zerg client will not run if the Zerg server is not available. Thus, if your zerg server dies, and you reload the zerg client, it will simply shutdown. If you want to avoid that behaviour, add a --socket directive mapping to the required socket (the one that should be managed by the zerg server) and add the --zerg-fallback option. With this setup, if a Zerg server is not available, the Zerg client will continue binding normally to the specified socket(s). 7.4.4 Using Zerg as testers A good trick you can use, is suspending the main instance with the SIGTSTP signal and loading a new version of your app in a Zerg. If the code is not ok you can simply shutdown the Zerg and resume the main instance. 444 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 7.4.5 Zerg Pools Zergpools are special Zerg servers that only serve Zerg clients, nothing more. You can use them to build high-availability systems that reduce downtime during tests/reloads. You can run an unlimited number of zerg pools (on several UNIX sockets) and map an unlimited number of sockets to them. [uwsgi] master= true zergpool= /tmp/zergpool_1:127.0.0.1:3031,127.0.0.1:3032 zergpool= /tmp/zergpool_2:192.168.173.22:3031,192.168.173.22:3032 With a config like this, you will have two zergpools, each serving two sockets. You can now attach instances to them. uwsgi --zerg /tmp/zergpool_1 --wsgi-file myapp.wsgi --master --processes 8 uwsgi --zerg /tmp/zergpool_2 --rails /var/www/myapp --master --processes 4 or you can attach a single instance to multiple Zerg servers. uwsgi --zerg /tmp/zergpool_1 --zerg /tmp/zergpool_2 --wsgi-file myapp.wsgi --master --processes 8 7.5 Adding applications dynamically NOTE: this is not the best approach for hosting multiple applications. You’d better to run a uWSGI instance for each app. You can start the uWSGI server without configuring an application. To load a new application you can use these variables in the uwsgi packet: • UWSGI_SCRIPT – pass the name of a WSGI script defining an application callable • or UWSGI_MODULE and UWSGI_CALLABLE – the module name (importable path) and the name of the callable to invoke from that module Dynamic apps are officially supported on Cherokee, Nginx, Apache, cgi_dynamic. They are easily addable to the Tomcat and Twisted handlers. 7.5.1 Defining VirtualEnv with dynamic apps Virtualenvs are based on the Py_SetPythonHome() function. 
This function has effect only if called before Py_Initialize() so it can’t be used with dynamic apps. To define a VirtualEnv with DynamicApps, a hack is the only solution. First you have to tell python to not import the site module. This module adds all site-packages to sys.path. To emulate virtualenvs, we must load the site module only after subinterpreter initialization. Skipping the first import site, we can now simply set sys.prefix and sys.exec_prefix on dynamic app loading and call PyImport_ImportModule("site"); // Some users would want to not disable initial site module loading, so the site module must be reloaded: PyImport_ReloadModule(site_module); Now we can set the VirtualEnv dynamically using the UWSGI_PYHOME var: 7.5. Adding applications dynamically 445 uWSGI Documentation, Release 2.0 location/{ uwsgi_pass 192.168.173.5:3031; include uwsgi_params; uwsgi_param UWSGI_SCRIPT mytrac; uwsgi_param UWSGI_PYHOME/Users/roberto/uwsgi/VENV2; } 7.6 Scaling SSL connections (uWSGI 1.9) Distributing SSL servers in a cluster is a hard topic. The biggest problem is sharing SSL sessions between different nodes. The problem is amplified in non-blocking servers due to OpenSSL’s limits in the way sessions are managed. For example, you cannot share sessions in Memcached servers and access them in a non-blocking way. A common solution (well, a compromise, maybe) until now has been to use a single SSL terminator balancing requests to multiple non-encrypted backends. This solution kinda works, but obviously it does not scale. Starting from uWSGI 1.9-dev an implementation (based on the stud project) of distributed caching has been added. 7.6.1 Setup 1: using the uWSGI cache for storing SSL sessions You can configure the SSL subsystem of uWSGI to use the shared cache. The SSL sessions will time out according to the expiry value of the cache item. This way the cache sweeper thread (managed by the master) will destroy sessions in the cache. Important: The order of the options is important. cache options must be specified BEFORE ssl-sessions-use-cache and https options. [uwsgi] ; spawn the master process (it will run the cache sweeper thread) master= true ; store up to 20k sessions cache= 20000 ; 4k per object is enough for SSL sessions cache-blocksize= 4096 ; force the SSL subsystem to use the uWSGI cache as session storage ssl-sessions-use-cache= true ; set SSL session timeout (in seconds) ssl-sessions-timeout= 300 ; set the session context string (see later) https-session-context= foobar ; spawn an HTTPS router https= 192.168.173.1:8443,foobar.crt,foobar.key ; spawn 8 processes for the HTTPS router (all sharing the same session cache) http-processes=8 ; add a bunch of uwsgi nodes to relay traffic to http-to= 192.168.173.10:3031 http-to= 192.168.173.11:3031 http-to= 192.168.173.12:3031 ; add stats stats= 127.0.0.1:5001 446 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 Now start blasting your HTTPS router and then telnet to port 5001. Under the “cache” object of the JSON output you should see the values “items” and “hits” increasing. The value “miss” is increased every time a session is not found in the cache. It is a good metric of the SSL performance users can expect. 7.6.2 Setup 2: synchronize caches of different HTTPS routers The objective is to synchronize each new session in each distributed cache. To accomplish that you have to spawn a special thread (cache-udp-server) in each instance and list all of the remote servers that should be synchronized. 
A pure-TCP load balancer (like HAProxy or uWSGI’s Rawrouter) can be used to load balance between the various HTTPS routers. Here’s a possible Rawrouter config. [uwsgi] master= true rawrouter= 192.168.173.99:443 rawrouter-to= 192.168.173.1:8443 rawrouter-to= 192.168.173.2:8443 rawrouter-to= 192.168.173.3:8443 Now you can configure the first node (the new options are at the end of the .ini config) [uwsgi] ; spawn the master process (it will run the cache sweeper thread) master= true ; store up to 20k sessions cache= 20000 ; 4k per object is enough for SSL sessions cache-blocksize= 4096 ; force the SSL subsystem to use the uWSGI cache as session storage ssl-sessions-use-cache= true ; set SSL session timeout (in seconds) ssl-sessions-timeout= 300 ; set the session context string (see later) https-session-context= foobar ; spawn an HTTPS router https= 192.168.173.1:8443,foobar.crt,foobar.key ; spawn 8 processes for the HTTPS router (all sharing the same session cache) http-processes=8 ; add a bunch of uwsgi nodes to relay traffic to http-to= 192.168.173.10:3031 http-to= 192.168.173.11:3031 http-to= 192.168.173.12:3031 ; add stats stats= 127.0.0.1:5001 ; spawn the cache-udp-server cache-udp-server= 192.168.173.1:7171 ; propagate updates to the other nodes cache-udp-node= 192.168.173.2:7171 cache-udp-node= 192.168.173.3:7171 and the other two... [uwsgi] ; spawn the master process (it will run the cache sweeper thread) master= true ; store up to 20k sessions 7.6. Scaling SSL connections (uWSGI 1.9) 447 uWSGI Documentation, Release 2.0 cache= 20000 ; 4k per object is enough for SSL sessions cache-blocksize= 4096 ; force the SSL subsystem to use the uWSGI cache as session storage ssl-sessions-use-cache= true ; set SSL session timeout (in seconds) ssl-sessions-timeout= 300 ; set the session context string (see later) https-session-context= foobar ; spawn an HTTPS router https= 192.168.173.1:8443,foobar.crt,foobar.key ; spawn 8 processes for the HTTPS router (all sharing the same session cache) http-processes=8 ; add a bunch of uwsgi nodes to relay traffic to http-to= 192.168.173.10:3031 http-to= 192.168.173.11:3031 http-to= 192.168.173.12:3031 ; add stats stats= 127.0.0.1:5001 ; spawn the cache-udp-server cache-udp-server= 192.168.173.2:7171 ; propagate updates to the other nodes cache-udp-node= 192.168.173.1:7171 cache-udp-node= 192.168.173.3:7171 [uwsgi] ; spawn the master process (it will run the cache sweeper thread) master= true ; store up to 20k sessions cache= 20000 ; 4k per object is enough for SSL sessions cache-blocksize= 4096 ; force the SSL subsystem to use the uWSGI cache as session storage ssl-sessions-use-cache= true ; set SSL session timeout (in seconds) ssl-sessions-timeout= 300 ; set the session context string (see later) https-session-context= foobar ; spawn an HTTPS router https= 192.168.173.1:8443,foobar.crt,foobar.key ; spawn 8 processes for the HTTPS router (all sharing the same session cache) http-processes=8 ; add a bunch of uwsgi nodes to relay traffic to http-to= 192.168.173.10:3031 http-to= 192.168.173.11:3031 http-to= 192.168.173.12:3031 ; add stats stats= 127.0.0.1:5001 ; spawn the cache-udp-server cache-udp-server= 192.168.173.3:7171 ; propagate updates to the other nodes cache-udp-node= 192.168.173.1:7171 cache-udp-node= 192.168.173.2:7171 Start hammering the Rawrouter (remember to use a client supporting persistent SSL sessions, like your browser) and get cache statistics from the stats server of each HTTPS terminator node. 
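A quick way to pull those statistics on each node (plain shell, assuming the stats = 127.0.0.1:5001 address configured above and a Python interpreter for pretty-printing):

nc 127.0.0.1 5001 | python -m json.tool

Inspect the "cache" object in the output of every HTTPS terminator.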
If the count of “hits” is a lot higher than the 448 Chapter 7. Scaling with uWSGI uWSGI Documentation, Release 2.0 “miss” value the system is working well and your load is distributed and in awesome hyper high performance mode. So, what is https-session-context, you ask? Basically each SSL session before being used is checked against a fixed string (the session context). If the session does not match that string, it is rejected. By default the session context is initialized to a value built from the HTTP server address. Forcing it to a shared value will avoid a session created in a node being rejected in another one. 7.6.3 Using named caches Starting from uWSGI 1.9 you can have multiple caches. This is a setup with 2 nodes using a new generation cache named “ssl”. The cache2 option allows also to set a custom key size. Since SSL session keys are not very long, we can use it to optimize memory usage. In this example we use 128 byte key size limit, which should be enough for session IDs. [uwsgi] ; spawn the master process (it will run the cache sweeper thread) master= true ; store up to 20k sessions cache2= name=ssl,items=20000,keysize=128,blocksize=4096,node=127.0.0.1:4242,udp=127.0.0.1:4141 ; force the SSL subsystem to use the uWSGI cache as session storage ssl-sessions-use-cache= ssl ; set sessions timeout (in seconds) ssl-sessions-timeout= 300 ; set the session context string https-session-context= foobar ; spawn an HTTPS router https= :8443,foobar.crt,foobar.key ; spawn 8 processes for the HTTPS router (all sharing the same session cache) http-processes=8 module= werkzeug.testapp:test_app ; add stats stats= :5001 and the second node... [uwsgi] ; spawn the master process (it will run the cache sweeper thread) master= true ; store up to 20k sessions cache2= name=ssl,items=20000,blocksize=4096,node=127.0.0.1:4141,udp=127.0.0.1:4242 ; force the SSL subsystem to use the uWSGI cache as session storage ssl-sessions-use-cache= ssl ; set session timeout ssl-sessions-timeout= 300 ; set the session context string https-session-context= foobar ; spawn an HTTPS router https= :8444,foobar.crt,foobar.key ; spawn 8 processes for the HTTPS router (all sharing the same sessions cache) http-processes=8 module= werkzeug.testapp:test_app ; add stats stats= :5002 7.6. Scaling SSL connections (uWSGI 1.9) 449 uWSGI Documentation, Release 2.0 7.6.4 Notes If you do not want to manually configure the cache UDP nodes and your network configuration supports it, you can use UDP multicast. [uwsgi] ... cache-udp-server = 225.1.1.1:7171 cache-udp-node = 225.1.1.1:7171 • A new gateway server is in development, named “udprepeater”. It will basically forward all of UDP packets it receives to the subscribed back-end nodes. It will allow you to maintain the zero-config style of the subscription system (basically you only need to configure a single cache UDP node pointing to the repeater). • Currently there is no security between the cache nodes. For some users this may be a huge problem, so a security mode (encrypting the packets) is in development. 450 Chapter 7. Scaling with uWSGI CHAPTER 8 Securing uWSGI 8.1 Setting POSIX Capabilities POSIX capabilities allow fine-grained permissions for processes. In addition to the standard UNIX permission scheme, they define a new set of privileges for system resources. To enable capabilities support (Linux Only) you have to install the libcap headers (libcap-dev on Debian-based distros) before building uWSGI. 
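For example, on a Debian-based system (assuming you build uWSGI from source with the standard build script):

apt-get install libcap-dev
python uwsgiconfig.py --build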
As usual your processes will lose practically all of the capabilities after a setuid call. The uWSGI cap option allows you to define a list of capabilities to maintain through the call. For example, to allow your unprivileged app to bind on privileged ports and set the system clock, you will use the following options. uwsgi --socket :1000 --uid 5000 --gid 5000 --cap net_bind_service,sys_time All of the processes generated by uWSGI will then inherit this behaviour. If your system supports capabailities not available in the uWSGI list you can simply specify the number of the constant: uwsgi --socket :1000 --uid 5000 --gid 5000 --cap net_bind_service,sys_time,42 In addition to net_bind_service and sys_time, a new capability numbered ‘42’ is added. 8.1.1 Available capabilities This is the list of available capabilities. audit_control CAP_AUDIT_CONTROL audit_write CAP_AUDIT_WRITE chown CAP_CHOWN dac_override CAP_DAC_OVERRIDE dac_read_search CAP_DAC_READ_SEARCH fowner CAP_FOWNER fsetid CAP_FSETID ipc_lock CAP_IPC_LOCK ipc_owner CAP_IPC_OWNER kill CAP_KILL lease CAP_LEASE linux_immutable CAP_LINUX_IMMUTABLE mac_admin CAP_MAC_ADMIN mac_override CAP_MAC_OVERRIDE Continued on next page 451 uWSGI Documentation, Release 2.0 Table 8.1 – continued from previous page mknod CAP_MKNOD net_admin CAP_NET_ADMIN net_bind_service CAP_NET_BIND_SERVICE net_broadcast CAP_NET_BROADCAST net_raw CAP_NET_RAW setfcap CAP_SETFCAP setgid CAP_SETGID setpcap CAP_SETPCAP setuid CAP_SETUID sys_admin CAP_SYS_ADMIN sys_boot CAP_SYS_BOOT sys_chroot CAP_SYS_CHROOT sys_module CAP_SYS_MODULE sys_nice CAP_SYS_NICE sys_pacct CAP_SYS_PACCT sys_ptrace CAP_SYS_PTRACE sys_rawio CAP_SYS_RAWIO sys_resource CAP_SYS_RESOURCE sys_time CAP_SYS_TIME sys_tty_config CAP_SYS_TTY_CONFIG syslog CAP_SYSLOG wake_alarm CAP_WAKE_ALARM 8.2 Running uWSGI in a Linux CGroup Linux cgroups are an amazing feature available in recent Linux kernels. They allow you to “jail” your processes in constrained environments with limited CPU, memory, scheduling priority, IO, etc.. Note: uWSGI has to be run as root to use cgroups. uid and gid are very, very necessary. 8.2.1 Enabling cgroups First you need to enable cgroup support in your system. Create the /cgroup directory and add this to your /etc/fstab: none /cgroup cgroup cpu,cpuacct,memory Then mount /cgroup and you’ll have jails with controlled CPU and memory usage. There are other Cgroup subsystems, but CPU and memory usage are the most useful to constrain. Let’s run uWSGI in a cgroup: ./uwsgi -M -p 8 --cgroup /cgroup/jail001 -w simple_app -m --http :9090 Cgroups are simple directories. With this command your uWSGI server and its workers are “jailed” in the ‘cgroup/jail001’ cgroup. If you make a bunch of requests to the server, you will see usage counters – cpuacct.* and memoryfiles.* in the cgroup directory growing. You can also use pre-existing cgroups by specifying a directory that already exists. 452 Chapter 8. Securing uWSGI uWSGI Documentation, Release 2.0 8.2.2 A real world example: Scheduling QoS for your customers Suppose you’re hosting apps for 4 customers. Two of them are paying you $100 a month, one is paying $200, and the last is paying $400. To have a good Quality of Service implementation, the $100 apps should get 1/8, or 12.5% of your CPU power, the $200 app should get 1/4 (25%) and the last should get 50%. To implement this, we have to create 4 cgroups, one for each app, and limit their scheduling weights. 
./uwsgi --uid 1001 --gid 1001 -s /tmp/app1 -w app1 --cgroup /cgroup/app1 --cgroup-opt cpu.shares=125 ./uwsgi --uid 1002 --gid 1002 -s /tmp/app2 -w app1 --cgroup /cgroup/app2 --cgroup-opt cpu.shares=125 ./uwsgi --uid 1003 --gid 1003 -s /tmp/app3 -w app1 --cgroup /cgroup/app3 --cgroup-opt cpu.shares=250 ./uwsgi --uid 1004 --gid 1004 -s /tmp/app4 -w app1 --cgroup /cgroup/app4 --cgroup-opt cpu.shares=500 The cpu.shares values are simply computed relative to each other, so you can use whatever scheme you like, such as (125, 125, 250, 500) or even (1, 1, 2, 4). With CPU handled, we turn to limiting memory. Let’s use the same scheme as before, with a maximum of 2 GB for all apps altogether. ./uwsgi --uid 1001 --gid 1001 -s /tmp/app1 -w app1 --cgroup /cgroup/app1 --cgroup-opt cpu.shares=125 --cgroup-opt memory.limit_in_bytes=268435456 ./uwsgi --uid 1002 --gid 1002 -s /tmp/app2 -w app1 --cgroup /cgroup/app2 --cgroup-opt cpu.shares=125 --cgroup-opt memory.limit_in_bytes=268435456 ./uwsgi --uid 1003 --gid 1003 -s /tmp/app3 -w app1 --cgroup /cgroup/app3 --cgroup-opt cpu.shares=250 --cgroup-opt memory.limit_in_bytes=536870912 ./uwsgi --uid 1004 --gid 1004 -s /tmp/app4 -w app1 --cgroup /cgroup/app4 --cgroup-opt cpu.shares=500 --cgroup-opt memory.limit_in_bytes=1067459584 8.3 Using Linux KSM in uWSGI Kernel Samepage Merging is a feature of Linux kernels >= 2.6.32 which allows processes to share pages of memory with the same content. This is accomplished by a kernel daemon that periodically performs scans, comparisons, and, if possible, merges of specific memory areas. Born as an enhancement for KVM it can be used for processes that use common data (such as uWSGI processes with language interpreters and standard libraries). If you are lucky, using KSM may exponentially reduce the memory usage of your uWSGI instances. Especially in massive Emperor deployments: enabling KSM for each vassal may result in massive memory savings. KSM in uWSGI was the idea of Giacomo Bagnoli of Asidev s.r.l.. Many thanks to him. 8.3.1 Enabling the KSM daemon To enable the KSM daemon (ksmd), simply set /sys/kernel/mm/ksm/run to 1, like so: echo 1 > /sys/kernel/mm/ksm/run Note: Remember to do this on machine startup, as the KSM daemon does not run by default. Note: KSM is an opt-in feature that has to be explicitly requested by processes, so just enabling KSM will not be a savior for everything on your machine. 8.3.2 Enabling KSM support in uWSGI If you have compiled uWSGI on a kernel with KSM support, you will be able to use the ksm option. This option will instruct uWSGI to register process memory mappings (via madvice syscall) after each request or master cycle. If no page mapping has changed from the last scan, no expensive syscalls are used. 8.3. Using Linux KSM in uWSGI 453 uWSGI Documentation, Release 2.0 8.3.3 Performance impact Checking for process mappings requires parsing the /proc/self/maps file after each request. In some setups this may hurt performance. You can tune the frequency of the uWSGI page scanner by passing an argument to the ksm option. # Scan for process mappings every 10 requests (or 10 master cycles) ./uwsgi -s :3031 -M -p 8 -w myapp --ksm=10 8.3.4 Check if KSM is working well The /sys/kernel/mm/ksm/pages_shared and /sys/kernel/mm/ksm/pages_sharing files contain statistics regarding KSM’s efficiency. The higher values, the less memory consumption for your uWSGI instances. 
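A quick way to eyeball those counters from a shell (nothing uWSGI-specific here):

grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing

If pages_sharing stays at 0 under real traffic, KSM is most likely not merging anything for your instances.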
KSM statistics with collectd A simple Bash script like this is useful for keeping an eye on KSM’s efficiency: #!/bin/bash export LC_ALL=C if [ -e /sys/kernel/mm/ksm/pages_sharing]; then pages_sharing=‘cat /sys/kernel/mm/ksm/pages_sharing‘; page_size=‘getconf PAGESIZE‘; saved=$(echo "scale=0;$pages_sharing * $page_size"|bc); echo "PUTVAL <%= cn %>/ksm/gauge-saved interval=60 N:$saved" fi In your collectd configuration, add something like this: LoadPlugin exec Exec "nobody" "/usr/local/bin/ksm_stats.sh" 8.4 Jailing your apps using Linux Namespaces If you have a recent Linux kernel (>2.6.26) you can use its support for namespaces. 8.4.1 What are namespaces? They are an elegant (more elegant than most of the jailing systems you might find in other operating systems) way to “detach” your processes from a specific layer of the kernel and assign them to a new one. The ‘chroot’ system available on UNIX/Posix systems is a primal form of namespaces: a process sees a completely new file system root and has no access to the original one. Linux extends this concept to the other OS layers (PIDs, users, IPC, networking etc.), so a specific process can live in a “virtual OS” with a new group of pids, a new set of users, a completely unshared IPC system (semaphores, shared memory etc.), a dedicated network interface and its own hostname. uWSGI got full namespaces support in 1.9/2.0 development cycle. 454 Chapter 8. Securing uWSGI uWSGI Documentation, Release 2.0 8.4.2 clone() vs unshare() To place the current process in a new namespace you have two syscalls: the venerable clone(), that will create a new process in the specified namespace and the new kid on the block, unshare(), that changes namespaces for the current running process. clone() can be used by the Emperor to directly spawn vassals in new namespaces: [uwsgi] emperor= /etc/uwsgi/vassals emperor-use-clone= fs,net,ipc,uts,pid will run each vassal with a dedicated filesystem, networking, SysV IPC and UTS view. [uwsgi] unshare = ipc,uts ... will run the current instance in the specified namespaces. Some namespace subsystems require additional steps for sane usage (see below). 8.4.3 Supported namespaces • fs -> CLONE_NEWNS, filesystems • ipc -> CLONE_NEWIPC, sysv ipc • pid -> CLONE_NEWPID, when used with unshare() requires an additional fork(). Use one of the –refork-* options. • uts -> CLONE_NEWUTS, hostname • net -> CLONE_NEWNET, new networking, UNIX sockets from different namespaces are still usable, they are a good way for inter-namespaces communications • user -> CLONE_NEWUSER, still complex to manage (and has differences in behaviours between kernel versions) use with caution 8.4.4 setns() In addition to creating new namespaces for a process you can attach to already running ones using the setns() call. Each process exposes its namespaces via the /proc/self/ns directory. The setns() syscall uses the file descriptors obtained from the files in that directory to attach to namespaces. As we have already seen, UNIX sockets are a good way to communicate between namespaces, the uWSGI setns() feature works by creating an UNIX socket that receives requests from processes wanting to join its namespace. As UNIX sockets allow file descriptors passing, the “client” only need to call setns() on them. 
• setns-socket exposes /proc/self/ns on the specified unix socket address • setns connect to the specified unix socket address, get the filedescriptors and use setns() on them • setns-preopen if enabled the /proc/self/ns files are opened on startup (before privileges drop) and cached. This is useful for avoiding running the main instance as root. • setns-socket-skip some file in /proc/self/ns can create problems (mostly the ‘user’ one). You can skip them specifying the name. (you can specify this option multiple times) 8.4. Jailing your apps using Linux Namespaces 455 uWSGI Documentation, Release 2.0 8.4.5 pivot_root This option allows you to change the rootfs of your currently running instance. It is better than chroot as it allows you to access the old file system tree before (manually) unmounting it. It is a bit complex to master correctly as it requires a couple of assumptions: pivot_root is the directory to mount as the new rootfs and is where to access the old tree. must be a mounted file system, and must be under this file system. A common pattern is: [uwsgi] unshare = fs hook-post-jail = mount:none /distros/precise /ns bind pivot_root = /ns /ns/.old_root ... (Remember to create /ns and /distro/precise/.old_root.) When you have created the new file system layout you can umount /.old_root recursively: [uwsgi] unshare= fs hook-post-jail= mount:none /distros/precise /ns bind pivot_root= /ns /ns/.old_root ; bind mount some useful fs like /dev and /proc hook-as-root= mount:proc none /proc nodev hidepid=2 hook-as-root= mount:none /.old_root/dev /dev bind hook-as-root= mount:none /.old_root/dev/pts /dev/pts bind ; umount the old tree hook-as-root= umount:/.old_root rec,detach 8.4.6 Why not lxc? LXC (LinuX Containers) is a project allowing you to build full subsystems using Linux namespaces. You may ask why “reinvent the wheel” while LXC implements a fully “virtualized” system. Apples and oranges... LXC’s objective is giving users the view of a virtual server. uWSGI namespaces support is lower level – you can use it to detach single components (for example you may only want to unshare IPC) to increase security and isolation. Not all the scenario requires a full system-like view (and in lot of case is suboptimal, while in other is the best approach), try to see namespaces as a way to increase security and isolation, when you need/can isolate a component do it with clone/unshare. When you want to give users a full system-like access go with LXC. 8.5 The old way: the –namespace option Before 1.9/2.0 a full featured system-like namespace support was added. It works as a chroot() on steroids. It should be moved as an external plugin pretty soon, but will be always part of the main distribution, as it is used by lot of people for its simplicity. You basically need to set a root filesystem and an hostname to start your instance in a new namespace: 456 Chapter 8. Securing uWSGI uWSGI Documentation, Release 2.0 Let’s start by creating a new root filesystem for our jail. You’ll need debootstrap (or an equivalent package for your distribution). We’re placing our rootfs in /ns/001, and then create a ‘uwsgi’ user that will run the uWSGI server. We will use the chroot command to ‘adduser’ in the new rootfs, and we will install the Flask package, required by uwsgicc. 
(All this needs to be executed as root) mkdir -p /ns/001 debootstrap maverick /ns/001 chroot /ns/001 # in the chroot jail now adduser uwsgi apt-get install mercurial python-flask su - uwsgi # as uwsgi now git clone https://github.com/unbit/uwsgicc.git . exit # out of su - uwsgi exit # out of the jail Now on your real system run uwsgi --socket 127.0.0.1:3031 --chdir /home/uwsgi/uwsgi --uid uwsgi --gid uwsgi --module uwsgicc --master --processes 4 --namespace /ns/001:mybeautifulhostname If all goes well, uWSGI will set /ns/001 as the new root filesystem, assign mybeautifulhostname as the hostname and hide the PIDs and IPC of the host system. The first thing you should note is the uWSGI master becoming PID 1 (the “init” process) in the new namespace. All processes generated by the uWSGI stack will be reparented to it if something goes wrong. If the master dies, all jailed processes die. Now point your web browser to your web server and you should see the uWSGI Control Center interface. Pay attention to the information area. The node name (used by cluster subsystem) matches the real hostname as it does not make sense to have multiple jail in the same cluster group. In the hostname field instead you will see the hostname you have set. Another important thing is that you can see all the jail processes from your real system (they will have a different set of PIDs), so if you want to take control of the jail you can easily do it. Note: A good way to limit hardware usage of jails is to combine them with the cgroups subsystem. See also: Running uWSGI in a Linux CGroup 8.5.1 Reloading uWSGI When running in a jail, uWSGI uses another system for reloading: it’ll simply tell workers to bugger off and then exit. The parent process living outside the namespace will see this and respawn the stack in a new jail. 8.5.2 How secure is this sort of jailing? Hard to say! All software tends to be secure until a hole is found. 8.5. The old way: the –namespace option 457 uWSGI Documentation, Release 2.0 8.5.3 Additional filesystems When app is jailed to namespace it only has access to its virtual jail root filesystem. If there is any other filesystem mounted inside the jail directory, it won’t be accessible, unless you use namespace-keep-mount. # app1 jail is located here namespace= /apps/app1 # nfs share mounted on the host side namespace-keep-mount= /apps/app1/nfs This will bind /apps/app1/nfs to jail, so that jailed app can access it under /nfs directory # app1 jail is located here namespace= /apps/app1 # nfs share mounted on the host side namespace-keep-mount= /mnt/nfs1:/nfs If the filesystem that we want to bind is mounted in path not contained inside our jail, than we can use “:” syntax for –namespace-keep-mount. In this case the /mnt/nfs1 will be binded to /nfs directory inside the jail. 8.6 FreeBSD Jails uWSGI 1.9.16 introduced native FreeBSD jails support. FreeBSD jails can be seen as new-generation chroot() with fine-grained tuning of what this “jail” can see. They are very similar to Linux namespaces even if a bit higher-level (from the API point of view). Jails are available since FreeBSD 4 8.6.1 Why managing jails with uWSGI ? Generally jails are managed using the system tool “jail” and its utilities. Til now running uWSGI in FreeBSD jails was pretty common, but for really massive setups (read: hosting business) where an Emperor (for example) manages hundreds of unrelated uWSGI instances, the setup could be really overkill. 
Managing jails directly in uWSGI config files greatly reduces sysadmin costs and helps keep the whole infrastructure better organized. 8.6.2 Old-style jails (FreeBSD < 8) FreeBSD exposes two main APIs for managing jails. The old (and easier) one is based on the jail() function. It is available since FreeBSD 4 and allows you to set the rootfs, the hostname and one or more IPv4/IPv6 addresses. Two options are needed for running a uWSGI instance in a jail: --jail and --jail-ip4/--jail-ip6 (effectively three if you use IPv6): --jail <rootfs> [hostname] [jailname] --jail-ip4 <address>
(can be specified multiple times) --jail-ip6 <address>
(can be specified multiple times) 458 Chapter 8. Securing uWSGI uWSGI Documentation, Release 2.0 Showing how to create the rootfs for your jail is not the objective of this document, but personally i hate rebuilding from sources, so generally i simply explode the base.tgz file from an official repository and chroot() to it to make the fine tuning. An important thing you have to remember is that the ip addresses you attach to a jail must be available in the system (as aliases). As always we tend to abuse uWSGI facilities. In our case the –exec-pre-jail hook will do the trick [uwsgi] ; create the jail with /jails/001 as rootfs and ’foobar’ as hostname jail= /jails/001 foobar ; create the alias on ’em0’ exec-pre-jail= ifconfig em0 192.168.0.40 alias ; attach the alias to the jail jail-ip4= 192.168.0.40 ; bind the http-socket (we are now in the jail) http-socket= 192.168.0.40:8080 ; load the application (remember we are in the jail) wsgi-file= myapp.wsgi ; drop privileges uid= kratos gid= kratos ; common options master= true processes=2 8.6.3 New style jails (FreeBSD >= 8) FreeBSD 8 introdiced a new advanced api for managing jails. Based on the jail_set() syscall, libjail exposes dozens of features and allows fine-tuning of your jails. To use the new api you need the –jail2 option (aliased as –libjail) --jail2 [=value] Each –jail2 option maps 1:1 with a jail attribute so you can basically tune everything ! [uwsgi] ; create the jail with /jails/001 as rootfs jail2= path=/jails/001 ; set hostname to ’foobar’ jail2= host.hostname=foobar ; create the alias on ’em0’ exec-pre-jail= ifconfig em0 192.168.0.40 alias ; attach the alias to the jail jail2= ip4.addr=192.168.0.40 ; bind the http-socket (we are now in the jail) http-socket= 192.168.0.40:8080 ; load the application (remember we are in the jail) wsgi-file= myapp.wsgi ; drop privileges uid= kratos gid= kratos 8.6. FreeBSD Jails 459 uWSGI Documentation, Release 2.0 ; common options master= true processes=2 Note for FreeBSD >= 8.4 but < 9.0 uWSGI uses ipc semaphores on FreeBSD < 9 (newer FreeBSD releases have POSIX semaphores support). Since FreeBSD 8.4 you need to explicitely allows sysvipc in jails. So be sure to have [uwsgi] ... jail2 = allow.sysvipc=1 ... 8.6.4 DevFS The DevFS virtual filesystem manages the /dev directory on FreeBSD. The /dev filesystem is not mounted in the jail, but you can need it for literally hundreds of reasons. Two main approaches are available: mounting it in the /dev/ directory of the roots before creating the jail, or allowing the jail to mount it [uwsgi] ; avoid re-mounting the file system every time if-not-exists= /jails/001/dev/zero exec-pre-jail = mount -t devfs devfs /jails/001/dev endif= ; create the jail with /jails/001 as rootfs jail2= path=/jails/001 ; set hostname to ’foobar’ jail2= host.hostname=foobar ; create the alias on ’em0’ exec-pre-jail= ifconfig em0 192.168.0.40 alias ; attach the alias to the jail jail2= ip4.addr=192.168.0.40 ; bind the http-socket (we are now in the jail) http-socket= 192.168.0.40:8080 ; load the application (remember we are in the jail) wsgi-file= myapp.wsgi ; drop privileges uid= kratos gid= kratos ; common options master= true processes=2 or (allow the jail itself to mount it) [uwsgi] ; create the jail with /jails/001 as rootfs jail2= path=/jails/001 460 Chapter 8. 
Securing uWSGI uWSGI Documentation, Release 2.0 ; set hostname to ’foobar’ jail2= host.hostname=foobar ; create the alias on ’em0’ exec-pre-jail= ifconfig em0 192.168.0.40 alias ; attach the alias to the jail jail2= ip4.addr=192.168.0.40 ; allows mount of devfs in the jail jail2= enforce_statfs=1 jail2= allow.mount jail2= allow.mount.devfs ; ... and mount it if-not-exists= /dev/zero exec-post-jail = mount -t devfs devfs /dev endif= ; bind the http-socket (we are now in the jail) http-socket= 192.168.0.40:8080 ; load the application (remember we are in the jail) wsgi-file= myapp.wsgi ; drop privileges uid= kratos gid= kratos ; common options master= true processes=2 8.6.5 Reloading Reloading (or binary patching) is a bit annoying to manage as uWSGI need to re-exec itself, so you need a copy of the binary, plugins and the config file in your jail (unless you can sacrifice graceful reload and simply delegate the Emperor to respawn the instance) Another approach is (like with devfs) mounting the directory with the uwsgi binary (and the eventual plugins) in the jail itself and instruct uWSGI to use this new path with –binary-path 8.6.6 The jidfile Each jail can be referenced by a unique name (optional) or its “jid”. This is similar to a “pid”, as you can use it to send commands (and updates) to an already running jail. The –jidfile <file> option allows you to store the jid in a file for use with external applications. 8.6.7 Attaching to a jail You can attach uWSGI instances to already running jails (they can be standard persistent jail too) using –jail-attach The id argument can be a jid or the name of the jail. This feature requires FreeBSD 8 8.6. FreeBSD Jails 461 uWSGI Documentation, Release 2.0 8.6.8 Debian/kFreeBSD This is an official Debian project aiming at building an os with FreeBSD kernel and common Debian userspace. It works really well, and it has support for jails too. Let’s create a jail with debootstrap debootstrap wheezy /jails/wheezy add a network alias ifconfig em0 192.168.173.105 netmask 255.255.255.0 alias (change em0 with your network interface name) and run it uwsgi --http-socket 192.168.173.105:8080 --jail /jails/wheezy -jail-ip4 192.168.173.105 8.6.9 Jails with Forkpty Router You can easily attach to FreeBSD jails with The Forkpty Router Just remember to have /dev (well, /dev/ptmx) mounted in your jail to allow the forkpty() call Learn how to deal with devfs_ruleset to increase security of your devfs 8.6.10 Notes A jail is destroyed when the last process running in it dies By default everything mounted under the rootfs (before entering the jail) will be seen by the jail it self (we have seen it before when dealing with devfs) 8.7 The Forkpty Router Dealing with containers is now a common deployment pattern. One of the most annoying tasks when dealing with jails/namespaces is ‘attaching’ to already running instances. The forkpty router aims at simplifyng the process giving a pseudoterminal server to your uWSGI instances. A client connect to the socket exposed by the forkpty router and get a new pseudoterminal connected to a process (generally a shell, but can be whatever you want) 8.7.1 uwsgi mode VS raw mode Clients connecting to the forkpty router can use two protocols for data exchange: uwsgi and raw mode. The raw mode simply maps the socket to the pty, for such a reason you will not be able to resize your terminal or send specific signals. The advantage of this mode is in performance: no overhead for each char. 
The uwsgi mode encapsulates every instruction (stdin, signals, window changes) in a uwsgi packet. This is very similar to how ssh works, so if you plan to use the forkpty router for shell sessions the uwsgi mode is the best choice (in terms of user experience). The overhead of the uwsgi protocol (worst case) is 5 bytes for each stdin event (single char) 462 Chapter 8. Securing uWSGI uWSGI Documentation, Release 2.0 8.7.2 Running the forkpty router The plugin is not builtin by default, so you have to compile it: uwsgi --build-plugin plugins/forkptyrouter or, using the old plugin build system: python uwsgiconfig.py --plugin plugins/forkptyrouter generally compiling the pty plugin is required too (for client access) uwsgi --build-plugin plugins/pty or again, using the old build system: python uwsgiconfig.py --plugin plugins/pty Alternatively, you can build all in one shot with: UWSGI_EMBED_PLUGINS=pty,forkptyrouter make Now you can run the forkptyrouter as a standard gateway (we use UNIX socket as we want a communication channel with jails, and we unshare the uts namespace to give a new hostname) [uwsgi] master= true unshare= uts exec-as-root= hostname iaminajail uid= kratos gid= kratos forkpty-router= /tmp/fpty.socket and connect with the pty client: uwsgi --pty-connect /tmp/fpty.socket now you have a shell (/bin/sh by default) in the uWSGI instance. Running hostname will give you ‘iaminajail’ Eventually you can avoid using uWSGI to attacj to the pty and instead you can rely on this simple python script: import socket import sys import os import select import copy from termios import * import atexit s= socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) s.connect(sys.argv[1]) tcattr= tcgetattr(0) orig_tcattr= copy.copy(tcattr) atexit.register(tcsetattr,0, TCSANOW, orig_tcattr) tcattr[0]|= IGNPAR tcattr[0]&=~(ISTRIP| IMAXBEL| BRKINT| INLCR| IGNCR| ICRNL| IXON| IXANY| IXOFF); tcattr[0]&=~IUCLC; tcattr[3]&=~(ICANON| ECHO| ECHOE| ECHOK| ECHONL); tcattr[3]&=~IEXTEN; 8.7. The Forkpty Router 463 uWSGI Documentation, Release 2.0 tcattr[1]&=~OPOST; tcattr[6][VMIN]=1; tcattr[6][VTIME]=0; tcsetattr(0, TCSANOW, tcattr); while True: (rl, wl, xl)= select.select([0, s], [], []) if s in rl: buf=s.recv(4096) if not buf: break os.write(1, buf) if 0 in rl: buf= os.read(0, 4096) if not buf: break s.send(buf) The previous example uses raw mode, if you resize the client terminal you will se no updates. To use the ‘uwsgi’ mode add a ‘u’: [uwsgi] master= true unshare= uts exec-as-root= hostname iaminajail uid= kratos gid= kratos forkpty-urouter= /tmp/fpty.socket uwsgi --pty-uconnect /tmp/fpty.socket a single instance can expose both protocols on different sockets [uwsgi] master= true unshare= uts exec-as-root= hostname iaminajail uid= kratos gid= kratos forkpty-router= /tmp/raw.socket forkpty-urouter= /tmp/uwsgi.socket 8.7.3 Changing the default command By default the forkpty router run /bin/sh on new connections. You can change the command using the –forkptyrouter-command [uwsgi] master= true unshare= uts exec-as-root= hostname iaminajail uid= kratos gid= kratos forkpty-router= /tmp/raw.socket forkpty-urouter= /tmp/uwsgi.socket forkptyrouter-command= /bin/zsh 464 Chapter 8. 
8.8 The TunTap Router

The TunTap router is an ad-hoc solution for giving network connectivity to Linux processes running in a dedicated network namespace (it obviously has other uses, but this is very probably the most interesting one, and the one for which it was developed).

The TunTap router is not compiled in by default. To build it in one shot:

UWSGI_EMBED_PLUGINS=tuntap make

(The plugin is named simply 'tuntap' because it effectively exposes various tuntap device features.)

The best way to use it is binding it to a UNIX socket, allowing processes in new namespaces to reach it (UNIX sockets are generally the best communication channel for Linux namespaces).

8.8.1 The first config

We want our vassals to live in the 192.168.0.0/24 network, with 192.168.0.1 as the default gateway. The default gateway (read: the tuntap router) is managed by the Emperor itself:

[uwsgi]
; create the tun device 'emperor0' and bind it to a unix socket
tuntap-router = emperor0 /tmp/tuntap.socket
; give it an ip address
exec-as-root = ifconfig emperor0 192.168.0.1 netmask 255.255.255.0 up
; setup nat
exec-as-root = iptables -t nat -F
exec-as-root = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
; enable linux ip forwarding
exec-as-root = echo 1 >/proc/sys/net/ipv4/ip_forward
; force vassals to be created in a new network namespace
emperor-use-clone = net
emperor = /etc/vassals

The vassals spawned by this Emperor are born without network connectivity. To give them access to the public network we create a new tun device in each vassal (it will exist only in the vassal's network namespace), instructing it to route traffic to the Emperor's tuntap unix socket:

[uwsgi]
; create uwsgi0 tun interface and force it to connect to the Emperor exposed unix socket
tuntap-device = uwsgi0 /tmp/tuntap.socket
; bring up loopback
exec-as-root = ifconfig lo up
; bring up interface uwsgi0
exec-as-root = ifconfig uwsgi0 192.168.0.2 netmask 255.255.255.0 up
; and set the default gateway
exec-as-root = route add default gw 192.168.0.1
; classic options
uid = customer001
gid = customer001
socket = /var/www/foobar.socket
psgi-file = foobar.pl
...
The TunTap Router 465 uWSGI Documentation, Release 2.0 8.8.2 The embedded firewall The TunTap router includes a very simple firewall for governing vassal’s traffic Firewalling is based on 2 chains (in and out), and each rule is formed by 3 parameters: The firewall is applied to traffic from the clients to the tuntap device (out) and the opposite (in) The first matching rule stops the chain, if no rule applies, the policy is “allow” the following rules allows access from vassals to the internet, but block vassals intercommunication [uwsgi] tuntap-router= emperor0 /tmp/tuntap.socket tuntap-router-firewall-out= allow 192.168.0.0/24 192.168.0.1 tuntap-router-firewall-out= deny 192.168.0.0/24 192.168.0.0/24 tuntap-router-firewall-out= allow 192.168.0.0/24 0.0.0.0 tuntap-router-firewall-out= deny tuntap-router-firewall-in= allow 192.168.0.1 192.168.0.0/24 tuntap-router-firewall-in= deny 192.168.0.0/24 192.168.0.0/24 tuntap-router-firewall-in= allow 0.0.0.0 192.168.0.0/24 tuntap-router-firewall-in= deny exec-as-root= ifconfig emperor0 192.168.0.1 netmask 255.255.255.0 up ; setup nat exec-as-root= iptables -t nat -F exec-as-root= iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE ; enable linux ip forwarding exec-as-root= echo 1 >/proc/sys/net/ipv4/ip_forward ; force vassals to be created in a new network namespace emperor-use-clone= net emperor= /etc/vassals 8.8.3 Security The “switching” part of the TunTap router (read: mapping ip addresses to vassals) is pretty simple: the first packet received from a vassal by the TunTap router register the vassal for that ip address. A good approach (from a security point of view) is sending a ping packet soon after network setup in the vassal: [uwsgi] ; create uwsgi0 tun interface and force it to connect to the Emperor exposed unix socket tuntap-device = uwsgi0 /tmp/tuntap.socket ; bring up loopback exec-as-root = ifconfig lo up ; bring up interface uwsgi0 exec-as-root = ifconfig uwsgi0 192.168.0.2 netmask 255.255.255.0 up ; and set the default gateway exec-as-root = route add default gw 192.168.0.1 ; ping something to register exec-as-root = ping -c 1 192.168.0.1 ; classic options ... after a vassal/ip pair is registered, only that combo will be valid (so other vassals will not be able to use that address until the one holding it dies) 466 Chapter 8. Securing uWSGI uWSGI Documentation, Release 2.0 8.8.4 The Future This is becoming a very important part of the unbit.it networking stack. We are currently working on: • dynamic firewall rules (luajit resulted a great tool for writing fast networking rules) • federation/proxy of tuntap router (the tuntaprouter can multiplex vassals networking over a tcp connection to an external tuntap router [that is why you can bind a tuntap router to a tcp address]) • authentication of vassals (maybe the old UNIX ancillary credentials could be enough) • a stats server for network statistics (rx/tx/errors) • a bandwidth shaper based on the blastbeat project 8.8. The TunTap Router 467 uWSGI Documentation, Release 2.0 468 Chapter 8. Securing uWSGI CHAPTER 9 Keeping an eye on your apps 9.1 Monitoring uWSGI with Nagios The official uWSGI distribution includes a plugin adding Nagios-friendly output. To monitor, and eventually get warning messages, via Nagios, launch the following command, where node is the socket (UNIX or TCP) to monitor. uwsgi --socket --nagios 9.1.1 Setting warning messages You can set a warning message directly from your app with the uwsgi.set_warning_message() function. 
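From a Python app this is a one-liner; the following is only a small sketch (the 100000-request threshold and the use of uwsgi.total_requests() as the trigger are our own assumptions, not part of the Nagios plugin):

import uwsgi

def application(environ, start_response):
    # hypothetical condition: warn once the instance has served many requests;
    # set_warning_message() is the API call mentioned above
    if uwsgi.total_requests() > 100000:
        uwsgi.set_warning_message("over 100000 requests served, consider a reload")
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b"OK"]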
All ping responses (used by Nagios too) will report this message. 9.2 The embedded SNMP server The uWSGI server embeds a tiny SNMP server that you can use to integrate your web apps with your monitoring infrastructure. To enable SNMP support, you must run the uWSGI UDP server and choose a SNMP community string (which is the rudimentary authentication system used by SNMP). ./uwsgi -s :3031 -w staticfilesnmp --udp 192.168.0.1:2222 --snmp --snmp-community foo # or the following. Using the SNMP option to pass the UDP address is a lot more elegant. ;) ./uwsgi -s :3031 -w myapp --master --processes 4 --snmp=192.168.0.1:2222 --snmp-community foo This will run the uWSGI server on TCP port 3031 and UDP port 2222 with SNMP enabled with “foo” as the commu- nity string. Please note that the SNMP server is started in the master process after dropping the privileges. If you want it to listen on a privileged port, you can either use Capabilities on Linux, or use the master-as-root option to run the master process as root. The staticfilesnmp.py file is included in the distribution and is a simple app that exports a counter via SNMP. The uWSGI SNMP server exports 2 group of information: • General information is managed by the uWSGI server itself. The base OID to access uWSGI SNMP information is 1.3.6.1.4.1.35156.17 469 uWSGI Documentation, Release 2.0 (iso.org.dod.internet.private.enterprise.unbit.uwsgi). General options are mapped to 1.3.6.1.4.1.35156.17.1.x. • Custom information is managed by the apps and accessed via 1.3.6.1.4.1.35156.17.2.x So, to get the number of requests managed by the uWSGI server, you could do snmpget -v2c -c foo 192.168.0.1:2222 1.3.6.1.4.1.35156.17.1.1 # 1.1 corresponds to ‘‘general.requests‘‘ 9.2.1 Exporting custom values To manage custom values from your app you have these Python functions, • uwsgi.snmp_set_counter32() • uwsgi.snmp_set_counter64() • uwsgi.snmp_set_gauge() • uwsgi.snmp_incr_counter32() • uwsgi.snmp_incr_counter64() • uwsgi.snmp_incr_gauge() • uwsgi.snmp_decr_counter32() • uwsgi.snmp_decr_counter64() • uwsgi.snmp_decr_gauge() So if you wanted to export the number of users currently logged in (this is a gauge as it can lower) as custom OID 40, you’d call users_logged_in= random.randint(0, 1024)# a more predictable source of information would be better. uwsgi.snmp_set_gauge(40, users_logged_in) and to look it up, snmpget -v2c -c foo 192.168.0.1:2222 1.3.6.1.4.1.35156.17.2.40 The system snmp daemon (net-snmp) can be configured to proxy SNMP requests to uwsgi. This allows you to run the system daemon and uwsgi at the same time, and runs all SNMP requests through the system daemon first. To configure the system snmp daemon (net-snmp) to proxy connections to uwsgi, add these lines to the bottom of /etc/snmp/snmpd.conf and restart the daemon: proxy -v 2c -c foo 127.0.0.1:2222 .1.3.6.1.4.1.35156.17 view systemview included .1.3.6.1.4.1.35156.17 Replace ‘foo’ and ‘2222’ with the community and port configured in uwsgi. 9.3 Pushing statistics (from 1.4) IMPORTANT: the Metrics subsystem offers a better introduction to the following concepts. See The Metrics subsystem Starting from uWSGI 1.4 you can push statistics (the same JSON blob you get with the The uWSGI Stats Server) via various systems (called stats pushers). Statistics are pushed at regular intervals (default 3 seconds). 470 Chapter 9. Keeping an eye on your apps uWSGI Documentation, Release 2.0 9.3.1 The ‘file’ stats pusher By default the ‘file’ stats pusher is available up to 1.9.18. 
Starting from 1.9.19 it is available as a plugin (stats_pusher_file). It allows you to save JSON chunks to a file (opened in append mode):

[uwsgi]
socket = :3031
module = foobar
master = true
stats-push = file:path=/tmp/foobar,freq=10

This config will append JSON to the /tmp/foobar file every 10 seconds.

9.3.2 The 'mongodb' stats pusher

This was the first stats pusher plugin developed, allowing you to store JSON data directly in a mongodb collection:

[uwsgi]
plugins = stats_pusher_mongodb
socket = :3031
module = foobar
master = true
stats-push = mongodb:addr=127.0.0.1:5151,collection=uwsgi.mystats,freq=4

This config will insert JSON data into the collection uwsgi.mystats on the mongodb server 127.0.0.1:5151 every 4 seconds. To build the plugin you need the mongodb development headers (mongodb-dev on Debian/Ubuntu):

python uwsgiconfig.py --plugin plugins/stats_pusher_mongodb

will do the trick.

9.3.3 Notes

You can configure all of the stats pushers you need, just specify multiple stats-push options:

[uwsgi]
plugins = stats_pusher_mongodb
socket = :3031
module = foobar
master = true
stats-push = mongodb:addr=127.0.0.1:5151,collection=uwsgi.mystats,freq=4
stats-push = mongodb:addr=127.0.0.1:5152,collection=uwsgi.mystats,freq=4
stats-push = mongodb:addr=127.0.0.1:5153,collection=uwsgi.mystats,freq=4
stats-push = mongodb:addr=127.0.0.1:5154,collection=uwsgi.mystats,freq=4

9.4 Integration with Graphite/Carbon

Graphite is a kick-ass realtime graphing application built on top of three components:

• Whisper – a data storage system
• Carbon – a server for receiving data
• a Python web application for graph rendering and management.

The uWSGI Carbon plugin allows you to send uWSGI's internal statistics to one or more Carbon servers. It is compiled in by default as of uWSGI 1.0, though it can also be built as a plugin.

9.4.1 Quickstart

For the sake of illustration, let's say your Carbon server is listening on 127.0.0.1:2003 and your uWSGI instance is on the machine debian32, listening on 127.0.0.1:3031 with 4 processes. By adding the --carbon option to your uWSGI instance you'll instruct it to send its statistics to the Carbon server periodically. The default period is 60 seconds.

uwsgi --socket 127.0.0.1:3031 --carbon 127.0.0.1:2003 --processes 4

Metrics are named like uwsgi.<hostname>.<id>.requests and uwsgi.<hostname>.<id>.worker<n>.requests, where:

• hostname – the machine's hostname
• id – the name of the first uWSGI socket (with dots replaced by underscores)
• n – the number of the worker process (1-based).

Examples of names of Carbon metrics generated by uWSGI:

• uwsgi.debian32.127_0_0_1:3031.requests (uwsgi.<hostname>.<id>.requests)
• uwsgi.debian32.127_0_0_1:3031.worker1.requests (uwsgi.<hostname>.<id>.worker<n>.requests)
• uwsgi.debian32.127_0_0_1:3031.worker2.requests (uwsgi.<hostname>.<id>.worker<n>.requests)
• uwsgi.debian32.127_0_0_1:3031.worker3.requests (uwsgi.<hostname>.<id>.worker<n>.requests)
• uwsgi.debian32.127_0_0_1:3031.worker4.requests (uwsgi.<hostname>.<id>.worker<n>.requests).

See also: Setting up Graphite on Ubuntu using the Metrics subsystem

9.5 The uWSGI Stats Server

In addition to SNMP, uWSGI also supports a Stats Server mechanism which exports the uWSGI state as a JSON object to a socket. Simply use the stats option followed by a valid socket address:

--stats 127.0.0.1:1717
--stats /tmp/statsock
--stats :5050
--stats @foobar

If a client connects to the specified socket it will get a JSON object containing uWSGI internal statistics before the connection ends.
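Any language can consume that blob; here is a minimal Python sketch of a stats client, assuming the stats server from the example below is bound to 127.0.0.1:1717 (it does the same job as nc or uwsgi --connect-and-read):

import json
import socket

# connect to the stats server, read until EOF, then parse the JSON blob
s = socket.create_connection(("127.0.0.1", 1717))
chunks = []
while True:
    data = s.recv(4096)
    if not data:
        break
    chunks.append(data)
s.close()

stats = json.loads(b"".join(chunks).decode("utf-8"))
for worker in stats.get("workers", []):
    print(worker["id"], worker["status"], worker["requests"])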
uwsgi --socket :3031 --stats :1717 --module welcome --master --processes 8 then nc 127.0.0.1 1717 # or for convenience... uwsgi --connect-and-read 127.0.0.1:1717 472 Chapter 9. Keeping an eye on your apps uWSGI Documentation, Release 2.0 will return something like this: { "workers":[{ "id":1, "pid": 31759, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }, { "id":2, "pid": 31760, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }, { "id":3, "pid": 31761, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ 9.5. The uWSGI Stats Server 473 uWSGI Documentation, Release 2.0 "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }, { "id":4, "pid": 31762, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }, { "id":5, "pid": 31763, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }, { "id":6, "pid": 31764, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, 474 Chapter 9. Keeping an eye on your apps uWSGI Documentation, Release 2.0 "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }, { "id":7, "pid": 31765, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }, { "id":8, "pid": 31766, "requests":0, "exceptions":0, "status": "idle", "rss":0, "vsz":0, "running_time":0, "last_spawn": 1317235041, "respawn_count":1, "tx":0, "avg_rt":0, "apps":[{ "id":0, "modifier1":0, "mountpoint":"", "requests":0, "exceptions":0, "chdir":"" }] }] } 9.5. The uWSGI Stats Server 475 uWSGI Documentation, Release 2.0 9.5.1 uwsgitop uwsgitop is a top-like command that uses the stats server. It is available on PyPI, so use easy_install or pip to install it (package name uwsgitop, naturally). The sources are available on Github. https://github.com/unbit/uwsgitop 9.6 The Metrics subsystem Available from 1.9.19. The uWSGI metrics subsystem allows you to manage “numbers” from your app. While the caching subsystem got some math capabilities during the 1.9 development cycle, the metrics subsystem is optimized by design for storing numbers and applying functions over them. So, compared to the caching subsystem it’s way faster and requires a fraction of the memory. 
When enabled, the metric subsystem configures a vast amount of metrics (like requests per-core, memory usage, etc) but, in addition to this, you can configure your own metrics, such as the number of active users or, say, hits of a particular URL, as well as the memory consumption of your app or the whole server. To enable the metrics subsystem just add --enable-metrics to your options, or configure a stats pusher (see below). The metrics subsystem is completely thread-safe. By default uWSGI creates a lot of metrics (and more are planned), so before adding your own be sure uWSGI does not already expose the one(s) you need. 9.6.1 Metric names and oids Each metric must have a name (containing only numbers, letters, underscores, dashes and dots) and an optional oid (required for mapping a metric to The embedded SNMP server). 9.6.2 Metric types Before dealing with metrics you need to understand the various types represented by each metric: COUNTER (type 0) This is a generally-growing up number (like the number of requests). GAUGE (type 1) This is a number that can increase or decrease dynamically (like the memory used by a worker, or CPU load). ABSOLUTE (type 2) This is an absolute number, like the memory of the whole server, or the size of the hard disk. 476 Chapter 9. Keeping an eye on your apps uWSGI Documentation, Release 2.0 ALIAS (type 3) This is a virtual metric pointing to another one . You can use it to give different names to already existing metrics. 9.6.3 Metric collectors Once you define a metric type, you need to tell uWSGI how to ‘collect’ the specific metric. There are various collectors available (and more can be added via plugins). • ptr – The value is collected from a memory pointer • file – the value is collected from a file • sum – the value is the sum of other metrics • avg – compute the algebraic average of the children (added in 1.9.20) • accumulator – always add the sum of children to the final value. See below for an example. Round 1: child1 = 22, child2 = 17 -> metric_value = 39 Round 2: child1 = 26, child2 = 30 -> metric_value += 56 • multiplier - Multiply the sum of children by the specified argument (arg1n). child1 = 22, child2 = 17, arg1n = 3 -> metric_value = (22+17)*3 • func - the value is computed calling a specific function every time • manual - the NULL collector. The value must be updated manually from applications using the metrics API. 9.6.4 Custom metrics You can define additional metrics to manage from your app. The --metric option allows you to add more metrics. It has two syntaxes: “simplified” and “keyval”. uwsgi --http-socket :9090 --metric foobar will create a metric ‘foobar’ with type ‘counter’, manual collector and no oid. For creating advanced metrics you need the keyval way: uwsgi --http-socket :9090 --metric name=foobar,type=gauge,oid=100.100.100 The following keys are available: • name – set the metric name • oid – set the metric oid • type – set the metric type, can be counter, gauge, absolute, alias • initial_value – set the metric to a specific value on startup • freq – set the collection frequency in seconds (default to 1) • reset_after_push – reset the metric to zero (or the configured initial_value) after it’s been pushed to the backend (so every freq seconds) • children – maps children to the metric (see below) 9.6. 
The Metrics subsystem 477 uWSGI Documentation, Release 2.0 • alias – the metric will be a simple alias for the specified one (–metric name=foobar,alias=worker.0.requests,type=alias) • arg1 to arg3 – string based arguments (see below) • arg1n to arg3n – number based arguments (see below) • collector set the collector, can be ptr, file, sum, func or anything exposed by plugins. Not specifying a collector means the metric is manual (your app needs to update it). The ptr is currently unimplemented, while the other collector requires a bit of additional configuration: collector=file requires arg1 for the filename and an optional arg1n for the so-called split value. uwsgi --metric name=loadavg,type=gauge,collector=file,arg1=/proc/loadavg,arg1n=1,freq=3 This will add a ‘loadavg‘ metric, of type gauge, updated every 3 seconds with the content of /proc/loadavg. The content is split (using \n, \t, spaces, \r and zero as separator) and the item 1 (the returned array is zero-based) used as the return value. The splitter is very powerful, making it possible to gather information from more complex files, such as /proc/meminfo. uwsgi --metric name=memory,type=gauge,collector=file,arg1=/proc/meminfo,arg1n=4,freq=3 Once split, /proc/meminfo has the MemFree value in the 4th slot. collector=sum requires the list of metrics that must be summed up. Each metric has the concept of ‘children’. The sum collector will sum the values of all of its children: uwsgi --metric name=reqs,collector=sum,children=worker.1.requests;worker.2.requests This will sum the value of worker.1.requests and worker.2.requests every second. collector=func is a convenience collector avoiding you to write a whole plugin for adding a new collector. Let’s define a C function (call the file mycollector.c or whatever you want): int64_t my_collector(void *metric) { return 173; } and build it as a shared library... gcc -shared -o mycollector.so mycollector.c now run uWSGI loading the library... uwsgi --dlopen ./mycollector.so --metric name=mine,collector=func,arg1=my_collector,freq=10 this will call the C function my_collector every 10 seconds and will set the value of the metric ‘mine’ to its return value. The function must returns an int64_t value. The argument it takes is a uwsgi_metric pointer. You generally do not need to parse the metric, so just casting to void will avoid headaches. 9.6.5 The metrics directory UNIX sysadmins love text files. They are generally the things they have to work on most of the time. If you want to make a UNIX sysadmin happy, just give him or her some text file to play with. (Or some coffee, or whiskey maybe, depending on their tastes. But generally, text files should do just fine.) 478 Chapter 9. Keeping an eye on your apps uWSGI Documentation, Release 2.0 The metrics subsystem can expose all of its metrics in the form of text files in a directory: uwsgi --metrics-dir mymetrics ... The directory must exist in advance. This will create a text file for each metric in the ‘mymetrics’ directory. The content of each file is the value of the metric (updated in real time). Each file is mapped into the process address space, so do not worry if your virtual memory increases slightly. 9.6.6 Restoring metrics (persistent metrics) When you restart a uWSGI instance, all of its metrics are reset. This is generally the best thing to do, but if you want, you can restore the previous situation using the values stored in the metrics directory defined before. 
Just add the --metrics-dir-restore option to force the metric subsystem to read-back the values from the metric directory before starting to collect values. 9.6.7 API Your language plugins should expose at least the following api functions. Currently they are implemented in Perl, CPython, PyPy and Ruby • metric_get(name) • metric_set(name, value) • metric_set_max(name, value) – only set the metric name if the give value is greater than the one currently stored • metric_set_min(name, value) – only set the metric name if the give value is lower than the one currently stored metric_set_max and metric_set_min can be used to avoid having to call metric_get when you need a metric to be set at a maximal or minimal value. Another simple use case is to use the avg collector to calculate an average between some max and min set metrics. • metric_inc(name[, delta]) • metric_dec(name[, delta]) • metric_mul(name[, delta]) • metric_div(name[, delta]) • metrics (tuple/array of metric keys, should be immutable and not-callable, currently unimplemented) 9.6.8 Stats pushers Collected metrics can be sent to external systems for analysis or chart generation. Stats pushers are plugins aimed at sending metrics to those systems. There are two kinds of stats pushers at the moment: JSON and raw. The JSON stats pusher send the whole JSON stats blob (the same you get from the stats server), while ‘raw’ ones send the metrics list. 9.6. The Metrics subsystem 479 uWSGI Documentation, Release 2.0 Currently available stats pushers: rrdtool • Type: raw • Plugin: rrdtool (builtin by default) • Requires (during runtime): librrd.so • Syntax: --stats-push rrdtool:my_rrds ... This will store an rrd file for each metric in the specified directory. Each rrd file has a single data source named ‘metric’. Usage: uwsgi --rrdtool my_rrds ... # or uwsgi --stats-push rrdtool:my_rrds ... By default the RRD files are updated every 300 seconds. You can tune this value with --rrdtool-freq The librrd.so library is detected at runtime. If you need you can specify its absolute path with --rrdtool-lib. statsd • Type: raw • Plugin: stats_pusher_statsd • Syntax: --stats-push statsd:address[,prefix] Push metrics to a statsd server. Usage: uwsgi --stats-push statsd:127.0.0.1:8125,myinstance ... carbon • Type: raw • Plugin: carbon (built-in by default) • See: Integration with Graphite/Carbon zabbix • Type: raw • Plugin: zabbix • Syntax: --stats-push zabbix:address[,prefix] Push metrics to a zabbix server. The plugin exposes a --zabbix-template option that will generate a zabbix template (on stdout or in the specified file) containing all of the exposed metrics as trapper items. 480 Chapter 9. Keeping an eye on your apps uWSGI Documentation, Release 2.0 Note: On some Zabbix versions you will need to authorize the IP addresses allowed to push items. Usage: uwsgi --stats-push zabbix:127.0.0.1:10051,myinstance ... mongodb • Type: json • Plugin: stats_pusher_mongodb • Required (build time): libmongoclient.so • Syntax (keyval): --stats-push mongodb:addr=,collection=,freq= Push statistics (as JSON) the the specified MongoDB database. file • Type: json • Plugin: stats_pusher_file Example plugin storing stats JSON in a file. socket • Type: raw • Plugin: stats_pusher_socket (builtin by default) • Syntax: --stats-push socket:address[,prefix] Push metrics to a UDP server with the following format: ( is in the numeric form previously reported). Example: uwsgi --stats-push socket:127.0.0.1:8125,myinstance ... 
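Before moving on to alarms, here is a minimal Python sketch of the per-language metrics API listed earlier in this section. The metric names 'mycounter' and 'mypeak' are assumptions and must have been registered at startup (for example with --metric mycounter --metric name=mypeak,type=gauge):

import uwsgi

def application(environ, start_response):
    # count every request in the manually-collected metric 'mycounter'
    uwsgi.metric_inc("mycounter")
    # keep 'mypeak' at the highest value seen so far
    uwsgi.metric_set_max("mypeak", uwsgi.metric_get("mycounter"))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [str(uwsgi.metric_get("mycounter")).encode("utf-8")]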
9.6.9 Alarms/Thresholds You can configure one or more “thresholds” for each metric. Once this limit is reached the specified alarm (see The uWSGI alarm subsystem (from 1.3)) is triggered. Once the alarm is delivered you may choose to reset the counter to a specific value (generally 0), or continue triggering alarms with a specified rate. [uwsgi] ... metric-alarm = key=worker.0.avg_response_time,value=2000,alarm=overload,rate=30 metric-alarm = key=loadavg,value=3,alarm=overload,rate=120 metric-threshold = key=mycounter,value=1000,reset=0 ... 9.6. The Metrics subsystem 481 uWSGI Documentation, Release 2.0 Specifying an alarm is not required. Using the threshold value to automatically reset a metric is perfectly valid. Note: --metric-threshold and --metric-alarm are aliases for the same option. 9.6.10 SNMP integration The The embedded SNMP server server exposes metrics starting from the 1.3.6.1.4.1.35156.17.3 OID. For example to get the value of worker.0.requests: snmpget -v2c -c : 1.3.6.1.4.1.35156.17.3.0.1 Remember: only metrics with an associated OID can be used via SNMP. 9.6.11 Internal Routing integration The ‘’router_metrics” plugin (builtin by default) adds a series of actions to the internal routing subsystem. • metricinc:[,value] increase the • metricdec:[,value] decrease the • metricmul:[,value] multiply the • metricdiv:[,value] divide the • metricset:, set to In addition to action, a route var named “metric” is added. Example: [uwsgi] metric= mymetric route= ^/foo metricinc:mymetric route-run= log:the value of the metric ’mymetric’ is ${metric[mymetric]} log-format= %(time) - %(metric.mymetric) 9.6.12 Request logging You can access metrics values from your request logging format using the %(metric.xxx) placeholder: [uwsgi] log-format= [hello] %(time) %(metric.worker.0.requests) 9.6.13 Officially Registered Metrics This is a work in progress. The best way to know which default metrics are exposed is enabling the stats server and querying it (or adding the --metrics-dir option). • worker/3 (exports information about workers, example worker.1.requests [or 3.1.1] reports the number of re- quests served by worker 1) • plugin/4 (namespace for metrics automatically added by plugins, example plugins.foo.bar) • core/5 (namespace for general instance informations) 482 Chapter 9. Keeping an eye on your apps uWSGI Documentation, Release 2.0 • router/6 (namespace for corerouters, example router.http.active_sessions) • socket/7 (namespace for sockets, example socket.0.listen_queue) • mule/8 (namespace for mules, example mule.1.signals) • spooler/9 (namespace for spoolers, example spooler.1.signals) • system/10 (namespace for system metrics, like loadavg or free memory) 9.6.14 OID assigment for plugins If you want to write a plugin that will expose metrics, please add the OID namespace that you are going to use to the list below and make a pull request first. This will ensure that all plugins are using unique OID namespaces. Prefix all plugin metric names with plugin name to ensure no conflicts if same keys are used in multiple plugins (example plugin.myplugin.foo.bar, worker.1.plugin.myplugin.foo.bar) • (3|4).100.1 - cheaper_busyness 9.6.15 External tools Check: https://github.com/unbit/unbit-bars 9.6. The Metrics subsystem 483 uWSGI Documentation, Release 2.0 484 Chapter 9. Keeping an eye on your apps CHAPTER 10 Async and loop engines 10.1 uWSGI asynchronous/non-blocking modes (updated to uWSGI 1.9) Warning: Beware! 
Async modes will not speed up your app, they are aimed at improving concurrency. Do not expect that enabling some of the modes will work flawlessly, asynchronous/evented/non-blocking systems require app cooperation, so if your app is developed without taking specific async engine rules into consideration, you are doing it wrong. Do not trust people suggesting you to blindly use async/evented/non-blocking systems! 10.1.1 Glossary uWSGI, following its modular approach, splits async engines into two families. Suspend/Resume engines They simply implement coroutine/green threads techniques. They have no event engine, so you have to use the one supplied by uWSGI. An Event engine is generally a library exporting primitives for platform-independent non- blocking I/O (libevent, libev, libuv, etc.). The uWSGI event engine is enabled using the --async option. Currently the uWSGI distribution includes the following suspend/resume engines: • uGreen - Unbit’s green thread implementation (based on swapcontext()) • Greenlet - Python greenlet module • Stackless - Stackless Python • Fiber - Ruby 1.9 fibers Running the uWSGI async mode without a proper suspend/resume engine will raise a warning, so for a minimal non-blocking app you will need something like that: uwsgi --async 100 --ugreen --socket :3031 An important aspect of suspend/resume engines is that they can easily destroy your process if it is not aware of them. Some of the language plugins (most notably Python) have hooks to cooperate flawlessly with coroutines/green threads. Other languages may fail miserably. Always check the uWSGI mailing list or IRC channel for updated information. Older uWSGI releases supported an additional system: callbacks. Callbacks is the approach used by popular systems like node.js. This approach requires heavy app cooperation, and for complex projects like uWSGI dealing with this is 485 uWSGI Documentation, Release 2.0 extremely complex. For that reason, callback approach is not supported (even if technically possible) Software based on callbacks (like The Tornado loop engine) can be used to combine them with some form of suspend engine. I/O engines (or event systems) uWSGI includes an highly optimized evented technology, but can use alternative approaches too. I/O engines always require some suspend/resume engine, otherwise ugly things happen (the whole uWSGI codebase is coroutine-friendly, so you can play with stacks pretty easily). Currently supported I/O engines are: • The Tornado loop engine • libuv (work in progress) • libev (work in progress) Loop engines Loop engines are packages/libraries exporting both suspend/resume techniques and an event system. When loaded, they override the way uWSGI manages connections and signal handlers (uWSGI signals, not POSIX signals). Currently uWSGI supports the following loop engines: • Gevent (Python, libev, greenlet) • Coro::AnyEvent (Perl, coro, anyevent) Although they are generally used by a specific language, pure-C uWSGI plugins (like the CGI one) can use them to increase concurrency without problems. 10.1.2 Async switches To enable async mode, you use the --async option (or some shortcut for it, exported by loop engine plugins). The argument of the --async option is the number of “cores” to initialize. Each core can manage a single request, so the more core you spawn, more requests you will be able to manage (and more memory you will use). 
The job of the suspend/resume engines is to stop the current request management, move to another core, and eventually come back to the old one (and so on). Technically, cores are simple memory structures holding the request's data, but to give the user the illusion of a multithreaded system we use that term.

The switch between cores needs app cooperation. There are various ways to accomplish that, and generally, if you are using a loop engine, everything is automagic (or requires very little effort).

Warning: If you are in doubt, do not use async mode.

10.1.3 Running uWSGI in Async mode

To start uWSGI in async mode, pass the --async option with the number of "async cores" you want.

./uwsgi --socket :3031 -w tests.cpubound_async --async 10

This will start uWSGI with 10 async cores. Each async core can manage a request, so with this setup you can accept 10 concurrent requests with only one process. You can also start more processes (with the --processes option); each will have its own pool of async cores.

When using harakiri mode, every time an async core accepts a request, the harakiri timer is reset. So even if a request blocks the async system, harakiri will save you.

The tests.cpubound_async app is included in the source distribution. It's very simple:

def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    for i in range(1, 10000):
        yield "<h1>%s</h1>" % i

Every time the application yields from the response function, the execution of the app is stopped, and a new request or a previously suspended request on another async core will take over. This means the number of async cores is the number of requests that can be queued.

If you run the tests.cpubound_async app on a non-async server, it will block all processing: it will not accept other requests until the heavy cycle of 10000 <h1>
s is done. 10.1.4 Waiting for I/O If you are not under a loop engine, you can use the uWSGI API to wait for I/O events. Currently only 2 functions are exported: • uwsgi.wait_fd_read() • uwsgi.wait_fd_write() These functions may be called in succession to wait for multiple file descriptors: uwsgi.wait_fd_read(fd0) uwsgi.wait_fd_read(fd1) uwsgi.wait_fd_read(fd2) yield ""# yield the app, let uWSGI do its magic 10.1.5 Sleeping On occasion you might want to sleep in your app, for example to throttle bandwidth. Instead of using the blocking time.sleep(N) function, use uwsgi.async_sleep(N) to yield control for N seconds. See also: See tests/sleeping_async.py for an example. 10.1.6 Suspend/Resume Yielding from the main application routine is not very practical, as most of the time your app is more advanced than a simple callable and is formed of tons of functions and various levels of call depth. Worry not! You can force a suspend (using coroutine/green thread) by simply calling uwsgi.suspend(): uwsgi.wait_fd_read(fd0) uwsgi.suspend() 10.1. uWSGI asynchronous/non-blocking modes (updated to uWSGI 1.9) 487 uWSGI Documentation, Release 2.0 uwsgi.suspend() will automatically call the chosen suspend engine (uGreen, greenlet, etc.). 10.1.7 Static files Static file server will automatically use the loaded async engine. 10.2 The Gevent loop engine Gevent is an amazing non-blocking Python network library built on top of libev and greenlet. Even though uWSGI supports Greenlet as suspend-resume/greenthread/coroutine library, it requires a lot of effort and code modi- fications to work with gevent. The gevent plugin requires gevent 1.0.0 and uWSGI asynchronous/non-blocking modes (updated to uWSGI 1.9) mode. 10.2.1 Notes • The SignalFramework is fully working with Gevent mode. Each handler will be executed in a dedicated greenlet. Look at tests/ugevent.py for an example. • uWSGI multithread mode (threads option) will not work with Gevent. Running Python threads in your apps is supported. • Mixing uWSGI’s Async API with gevent’s is EXPLICITLY FORBIDDEN. 10.2.2 Building the plugin (uWSGI >= 1.4) The gevent plugin is compiled in by default when the default profile is used. Doing the following will install the python plugin as well as the gevent one: pip install uwsgi 10.2.3 Building the plugin (uWSGI < 1.4) A ‘gevent’ build profile can be found in the buildconf directory. python uwsgiconfig --build gevent # or... UWSGI_PROFILE=gevent make # or... UWSGI_PROFILE=gevent pip install git+git://github.com/unbit/uwsgi.git # or... python uwsgiconfig --plugin plugins/gevent # external plugin 10.2.4 Running uWSGI in gevent mode uwsgi --gevent 100 --socket :3031 --module myapp or for a modular build: uwsgi --plugins python,gevent --gevent 100 --socket :3031 --module myapp the argument of –gevent is the number of async cores to spawn 488 Chapter 10. Async and loop engines uWSGI Documentation, Release 2.0 10.2.5 A crazy example The following example shows how to sleep in a request, how to make asynchronous network requests and how to continue doing logic after a request has been closed. import gevent import gevent.socket def bg_task(): for i in range(1,10): print "background task", i gevent.sleep(2) def long_task(): for i in range(1,10): print i gevent.sleep() def application(e, sr): sr(’200 OK’, [(’Content-Type’,’text/html’)]) t= gevent.spawn(long_task) t.join() yield "sleeping for 3 seconds...
<br/>" gevent.sleep(3) yield "done<br/>" yield "getting some ips...<br/>" urls = ['www.google.com', 'www.example.com', 'www.python.org', 'projects.unbit.it'] jobs = [gevent.spawn(gevent.socket.gethostbyname, url) for url in urls] gevent.joinall(jobs, timeout=2) for j in jobs: yield "ip = %s<br/>
"%j.value gevent.spawn(bg_task) # this task will go on after request end 10.2.6 Monkey patching uWSGI uses native gevent api, so it does not need monkey patching. That said, your code may need it, so remem- ber to call gevent.monkey.patch_all() at the start of your app. As of uWSGI 1.9, the convenience option --gevent-monkey-patch will do that for you. A common example is using psycopg2_gevent with django. Django will make a connection to postgres for each thread (storing it in thread locals). As the uWSGI gevent plugin runs on a single thread this approach will lead to a deadlock in psycopg. Enabling monkey patch will allow you to map thread locals to greenlets (though you could avoid full monkey patching and only call gevent.monkey.patch_thread()) and solves the issue: import gevent.monkey gevent.monkey.patch_thread() import gevent_psycopg2 gevent_psycopg2.monkey_patch() or (to monkey patch everything) import gevent.monkey gevent.monkey.patch_all() 10.2. The Gevent loop engine 489 uWSGI Documentation, Release 2.0 import gevent_psycopg2 gevent_psycopg2.monkey_patch() 10.2.7 Notes on clients and frontends • If you’re testing a WSGI application that generates a stream of data, you should know that curl by default buffers data until a newline. So make sure you either disable curl’s buffering with the -N flag or have regular newlines in your output. • If you are using Nginx in front of uWSGI and wish to stream data from your app, you’ll probably want to disable Nginx’s buffering. uwsgi_buffering off; 10.3 The Tornado loop engine Available from: ‘uWSGI 1.9.19-dev‘ Supported suspend engines: ‘greenlet‘ Supported CPython versions: ‘all of tornado supported versions‘ The tornado loop engine allows you to integrate your uWSGI stack with the Tornado IOLoop class. Basically every I/O operation of the server is mapped to a tornado IOLoop callback. Making RPC, remote caching, or simply writing responses is managed by the Tornado engine. As uWSGI is not written with a callback-based programming approach, integrating with those kind of libraries requires some form of “suspend” engine (green threads/coroutines) Currently the only supported suspend engine is the “greenlet” one. Stackless python could work too (needs testing). PyPy is currently not supported (albeit technically possibile thanks to continulets). Drop a mail to Unbit staff if you are interested. 10.3.1 Why ? The Tornado project includes a simple WSGI server by itself. In the same spirit of the Gevent plugin, the purpose of Loop engines is allowing external prejects to use (and abuse) the uWSGI api, for better performance, versatility and (maybe the most important thing) resource usage. All of the uWSGI subsystems are available (from caching, to websockets, to metrics) in your tornado apps, and the WSGI engine is the battle-tested uWSGI one. 10.3.2 Installation The tornado plugin is currently not built-in by default. To have both tornado and greenlet in a single binary you can do UWSGI_EMBED_PLUGINS=tornado,greenlet pip install tornado greenlet uwsgi or (from uWSGI sources, if you already have tornado and greenlet installed) UWSGI_EMBED_PLUGINS=tornado,greenlet make 490 Chapter 10. 
Async and loop engines uWSGI Documentation, Release 2.0 10.3.3 Running it The --tornado option is exposed by the tornado plugin, allowing you to set optimal parameters: uwsgi --http-socket :9090 --wsgi-file myapp.py --tornado 100 --greenlet this will run a uWSGI instance on http port 9090 using tornado as I/O (and time) management and greenlet as suspend engine 100 async cores are allocated, allowing you to manage up to 100 concurrent requests 10.3.4 Integrating WSGI with the tornado api For the way WSGI works, dealing with callback based programming is pretty hard (if not impossible). Thanks to greenlet we can suspend the execution of our WSGI callable until a tornado IOLoop event is available: from tornado.httpclient import AsyncHTTPClient import greenlet import functools # this gives us access to the main IOLoop (the same used by uWSGI) from tornado.ioloop import IOLoop io_loop= IOLoop.instance() # this is called at the end of the external HTTP request def handle_request(me, response): if response.error: print("Error:", response.error) else: me.result= response.body # back to the WSGI callable me.switch() def application(e, sr): me= greenlet.getcurrent() http_client= AsyncHTTPClient() http_client.fetch("http://localhost:9191/services", functools.partial(handle_request, me)) # suspend the execution until an IOLoop event is available me.parent.switch() sr(’200 OK’, [(’Content-Type’,’text/plain’)]) return me.result 10.3.5 Welcome to Callback-Hell As always, it is not the job of uWSGI to judge programming approaches. It is a tool for sysadmins, and sysadmins should be tolerant with developers choices. One of the things you will pretty soon experiment with this approach to programming is the callback-hell. Let’s extend the previous example to wait 10 seconds before sending back the response to the client from tornado.httpclient import AsyncHTTPClient import greenlet import functools # this gives us access to the main IOLoop (the same used by uWSGI) from tornado.ioloop import IOLoop 10.3. The Tornado loop engine 491 uWSGI Documentation, Release 2.0 io_loop= IOLoop.instance() def sleeper(me): #TIMED OUT # finally come back to WSGI callable me.switch() # this is called at the end of the external HTTP request def handle_request(me, response): if response.error: print("Error:", response.error) else: me.result= response.body # add another callback in the chain me.timeout= io_loop.add_timeout(time.time()+ 10, functools.partial(sleeper, me)) def application(e, sr): me= greenlet.getcurrent() http_client= AsyncHTTPClient() http_client.fetch("http://localhost:9191/services", functools.partial(handle_request, me)) # suspend the execution until an IOLoop event is available me.parent.switch() # unregister the timer io_loop.remove_timeout(me.timeout) sr(’200 OK’, [(’Content-Type’,’text/plain’)]) return me.result here we have chained two callbacks, with the last one being responsable for giving back control to the WSGI callable The code could looks ugly or overcomplex (compared to other approaches like gevent) but this is basically the most efficient way to increase concurrency (both in terms of memory usage and performance). Technologies like node.js are becoming popular thanks to the results they allow to accomplish. 
10.3.6 WSGI generators (aka yield all over the place) Take the following WSGI app: def application(e, sr): sr(’200 OK’, [(’Content-Type’,’text/html’)]) yield "one" yield "two" yield "three" if you have already played with uWSGI async mode, you knows that every yield internally calls the used suspend engine (greenlet.switch() in our case). That means we will enter the tornado IOLoop engine soon after having called “application()”. How we can give the control back to our callable if we are not waiting for events ? The uWSGI async api has been extended to support the “schedule_fix” hook. It allows you to call a hook soon after the suspend engine has been called. In the tornado’s case this hook is mapped to something like: io_loop.add_callback(me.switch) in this way after every yield a me.switch() function is called allowing the resume of the callable. Thanks to this hook you can transparently host standard WSGI applications without changing them. 492 Chapter 10. Async and loop engines uWSGI Documentation, Release 2.0 10.3.7 Binding and listening with Tornado The Tornado IOLoop is executed after fork() in every worker. If you want to bind to network addresses with Tornado, remember to use different ports for each workers: from uwsgidecorators import * import tornado.web # this is our Tornado-managed app class MainHandler(tornado.web.RequestHandler): def get(self): self.write("Hello, world") t_application= tornado.web.Application([ (r"/", MainHandler), ]) # here happens the magic, we bind after every fork() @postfork def start_the_tornado_servers(): application.listen(8000+ uwsgi.worker_id()) # this is our WSGI callable managed by uWSGI def application(e, sr): ... Remember: do no start the IOLoop class. uWSGI will do it by itself as soon as the setup is complete 10.4 uGreen – uWSGI Green Threads uGreen is an implementation of green threads on top of the uWSGI async platform. It is very similar to Python’s greenlet but built on top of the POSIX swapcontext() function. To take advantage of uGreen you have to set the number of async cores that will be mapped to green threads. For example if you want to spawn 30 green threads: ./uwsgi -w tests.cpubound_green -s :3031 --async 30 --ugreen The ugreen option will enable uGreen on top of async mode. Now when you call uwsgi.suspend() in your app, you’ll be switched off to another green thread. 10.4.1 Security and performance To ensure (relative) isolation of green threads, every stack area is protected by so called “guard pages”. An attempt to write out of the stack area of a green thread will result in a segmentation fault/bus error (and the process manager, if enabled, will respawn the worker without too much damage). The context switch is very fast, we can see it as: • On switch 1. Save the Python Frame pointer 2. Save the recursion depth of the Python environment (it is simply an int) 3. Switch to the main stack 10.4. uGreen – uWSGI Green Threads 493 uWSGI Documentation, Release 2.0 • On return 1. Re-set the uGreen stack 2. Re-set the recursion depth 3. Re-set the frame pointer The stack/registers switch is done by the POSIX swapcontext() call and we don’t have to worry about it. 10.4.2 Async I/O For managing async I/O you can use the Async mode FD wait functions uwsgi.wait_fd_read() and uwsgi.wait_fd_write(). 10.4.3 Stack size You can choose the uGreen stack size using the ugreen-stacksize option. The argument is in pages, not bytes. 10.4.4 Is this better than Greenlet or Stackless Python? Weeeeelll... it depends. 
uGreen is faster (the stack is preallocated) but requires more memory (to allocate a stack area for every core). Stackless and Greenlet probably require less memory... but Stackless requires a heavily patched version of Python. If you’re heavily invested in making your app as async-snappy as possible, it’s always best to do some tests to choose the best one for you. As far as uWSGI is concerned, you can move from async engine to another without changing your code. 10.4.5 What about python-coev? Lots of uGreen has been inspired by it. The author’s way to map Python threads to their implementation allows python-coev to be a little more “trustworthy” than Stackless Python. However, like Stackless, it requires a patched version of Python... :( 10.4.6 Can I use uGreen to write Comet apps? Yeah! Sure! Go ahead. In the distribution you will find the ugreenchat.py script. It is a simple/dumb multiuser Comet chat. If you want to test it (for example 30 users) run it with ./uwsgi -s :3031 -w ugreenchat --async 30 --ugreen The code has comments for every ugreen-related line. You’ll need Bottle, an amazing Python web micro framework to use it. 10.4.7 Psycopg2 improvements uGreen can benefit from the new psycopg2 async extensions and the psycogreen project. See the tests/psycopg2_green.py and tests/psycogreen_green.py files for examples. 494 Chapter 10. Async and loop engines uWSGI Documentation, Release 2.0 10.5 The asyncio loop engine (CPython >= 3.4, uWSGI >= 2.0.4) Warning: Status: EXPERIMENTAL, lot of implications, especially in respect to the WSGI standard The asyncio plugin exposes a loop engine built on top of the asyncio CPython API (https://docs.python.org/3.4/library/asyncio.html#module-asyncio). As uWSGI is not callback based, you need a suspend engine (currently only the ‘greenlet’ one is supported) to manage the WSGI callable. 10.5.1 Why not map the WSGI callable to a coroutine? The reason is pretty simple: this would break WSGI in every possible way. (Let’s not go into the details here.) For this reason each uWSGI core is mapped to a greenlet (running the WSGI callable). This greenlet registers events and coroutines in the asyncio event loop. 10.5.2 Callback vs. coroutines When starting to playing with asyncio you may get confused between callbacks and coroutines. Callbacks are executed when a specific event raises (for example when a file descriptor is ready for read). They are basically standard functions executed in the main greenlet (and eventually they can switch back control to a specific uWSGI core). Coroutines are more complex: they are pretty close to a greenlet, but internally they work on Python frames instead of C stacks. From a Python programmer point of view, coroutines are very special generators. Your WSGI callable can spawn coroutines. 10.5.3 Building uWSGI with asyncio support An ‘asyncio’ build profile is available in the official source tree (it will build greenlet support too). CFLAGS="-I/usr/local/include/python3.4" make PYTHON=python3.4 asyncio or CFLAGS="-I/usr/local/include/python3.4" UWSGI_PROFILE="asyncio" pip3 install uwsgi be sure to use Python 3.4+ as the Python version and to add the greenlet include directory to CFLAGS (this may not be needed if you installed greenlet support from your distribution’s packages). 10.5.4 The first example: a simple callback Let’s start with a simple WSGI callable triggering a function 2 seconds after the callable has returned (magic!). 
import asyncio

def two_seconds_elapsed():
    print("Hello 2 seconds elapsed")

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    asyncio.get_event_loop().call_later(2, two_seconds_elapsed)
    return [b"Hello World"]

Once called, the application function will register a callable in the asyncio event loop and then return to the client. After two seconds the event loop will run the function.

You can run the example with:

uwsgi --asyncio 10 --http-socket :9090 --greenlet --wsgi-file app.py

--asyncio is a shortcut enabling 10 uWSGI async cores, allowing you to manage up to 10 concurrent requests with a single process.

But how do we wait for a callback to complete in the WSGI callable? We can suspend our WSGI function using greenlets (remember, our WSGI callable is wrapped in a greenlet):

import asyncio
import greenlet

def two_seconds_elapsed(me):
    print("Hello 2 seconds elapsed")
    # back to the WSGI callable
    me.switch()

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    myself = greenlet.getcurrent()
    asyncio.get_event_loop().call_later(2, two_seconds_elapsed, myself)
    # back to the event loop
    myself.parent.switch()
    return [b"Hello World"]

And we can go even further, abusing the uWSGI support for WSGI generators:

import asyncio
import greenlet

def two_seconds_elapsed(me):
    print("Hello 2 seconds elapsed")
    me.switch()

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    myself = greenlet.getcurrent()
    asyncio.get_event_loop().call_later(2, two_seconds_elapsed, myself)
    myself.parent.switch()
    yield b"One"
    asyncio.get_event_loop().call_later(2, two_seconds_elapsed, myself)
    myself.parent.switch()
    yield b"Two"

10.5.5 Another example: Futures and coroutines

You can spawn coroutines from your WSGI callable using the asyncio.Task facility:

import asyncio
import greenlet

@asyncio.coroutine
def sleeping(me):
    yield from asyncio.sleep(2)
    # back to the callable
    me.switch()

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    myself = greenlet.getcurrent()
    # enqueue the coroutine
    asyncio.Task(sleeping(myself))
    # suspend to the event loop
    myself.parent.switch()
    # back from the event loop
    return [b"Hello World"]

Thanks to Futures we can even get results back from coroutines:

import asyncio
import greenlet

@asyncio.coroutine
def sleeping(me, f):
    yield from asyncio.sleep(2)
    f.set_result(b"Hello World")
    # back to the callable
    me.switch()

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    myself = greenlet.getcurrent()
    future = asyncio.Future()
    # enqueue the coroutine with a Future
    asyncio.Task(sleeping(myself, future))
    # suspend to the event loop
    myself.parent.switch()
    # back from the event loop
    return [future.result()]

A more advanced example using the aiohttp module (remember to pip install aiohttp, it is not a standard library module):

import asyncio
import greenlet
import aiohttp

@asyncio.coroutine
def sleeping(me, f):
    yield from asyncio.sleep(2)
    response = yield from aiohttp.request('GET', 'http://python.org')
    body = yield from response.read_and_close()
    # body is a bytearray!
    f.set_result(body)
    me.switch()

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    myself = greenlet.getcurrent()
    future = asyncio.Future()
    asyncio.Task(sleeping(myself, future))
    myself.parent.switch()
    # this time we use yield, just for fun...
    yield bytes(future.result())
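If the Task/Future/switch dance gets repetitive, it can be factored into a tiny helper. The following is only a sketch built from the calls shown above; the helper name run_in_loop is ours and is not part of the uWSGI or asyncio API:

import asyncio
import greenlet

def run_in_loop(coro):
    # run an asyncio coroutine from a uWSGI core greenlet and return its result
    me = greenlet.getcurrent()
    future = asyncio.Future()

    @asyncio.coroutine
    def wrapper():
        result = yield from coro
        future.set_result(result)
        # hand control back to the WSGI callable
        me.switch()

    asyncio.Task(wrapper())
    # hand control to the asyncio event loop until the coroutine is done
    me.parent.switch()
    return future.result()

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    body = run_in_loop(asyncio.sleep(2, result=b"Hello after 2 seconds"))
    return [body]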
10.5.6 Status

• The plugin is considered experimental (the implications of asyncio with WSGI are currently unclear). In the future it could be built by default when Python >= 3.4 is detected.
• While (more or less) technically possible, mapping a WSGI callable to a Python 3 coroutine is not expected in the near future.
• The plugin registers hooks for non-blocking reads/writes and timers. This means you can automagically use the uWSGI API with asyncio. Check the https://github.com/unbit/uwsgi/blob/master/tests/websockets_chat_asyncio.py example.

CHAPTER 11 Web Server support

11.1 Apache support

Currently there are three uwsgi-protocol related Apache2 modules available.

11.1.1 mod_uwsgi

This is the original module. It is solid, but incredibly ugly and does not follow Apache coding conventions very closely.

mod_uwsgi can be used in two ways:

• The "assbackwards" way (the default one). It is the fastest but somewhat far from the Apache2 API. If you do not use Apache2 filters (including gzip) for content generated by uWSGI, use this mode.
• The "cgi" mode. This one is somewhat slower but better integrated with Apache. To use the CGI mode, pass -C to the uWSGI server.

Options

Note: All of the options can be set per-host or per-location.

uWSGISocket [timeout]: Absolute path and optional timeout in seconds of the uwsgi server socket.
uWSGISocket2: Absolute path of the failover uwsgi server socket.
uWSGIServer: Address and port of a uwsgi server (e.g. localhost:4000).
uWSGIModifier1: Set uWSGI modifier1.
uWSGIModifier2: Set uWSGI modifier2.
uWSGIForceScriptName: Force SCRIPT_NAME (app name).
uWSGIForceCGIMode: Force uWSGI CGI mode for perfect integration with Apache filters.
uWSGIForceWSGIScheme: Force the WSGI scheme var (set by default to "http").
uWSGIMaxVars: Set the maximum allowed number of uwsgi protocol variables (default 128).

To pass custom variables use the SetEnv directive:

SetEnv UWSGI_SCRIPT yourapp

11.1.2 mod_proxy_uwsgi

This is the latest module and probably the best bet for the future. It is a "proxy" module, so you will get all of the features exported by mod_proxy. It is fully Apache-API compliant, so it should be easy to integrate with the available modules. Using it is easy; just remember to load the mod_proxy and mod_proxy_uwsgi modules in your Apache config.

ProxyPass /foo uwsgi://127.0.0.1:3032/
ProxyPass /bar uwsgi://127.0.0.1:3033/
ProxyPass / uwsgi://127.0.0.1:3031/

The first two forms set SCRIPT_NAME to /foo and /bar respectively, while the last one uses an empty SCRIPT_NAME. You can set additional uwsgi vars using the SetEnv directive (a small sketch of reading them from the WSGI environ follows below) and load-balance requests using mod_proxy_balancer:

<Proxy balancer://mycluster>
    BalancerMember uwsgi://192.168.1.50:3031/
    BalancerMember uwsgi://192.168.1.51:3031/
</Proxy>
ProxyPass / balancer://mycluster

Pay attention to the last slash in the member/node definition. It is optional for non-empty SCRIPT_NAME/mountpoints, but required for apps mounted in the root of the domain.

Currently the module lacks the ability to set modifiers, though this will be fixed soon.
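Variables set with SetEnv end up as plain keys in the WSGI environ of your workers. As a small, hypothetical illustration (the MYAPP_CONFIG name is made up for this sketch), assuming something like SetEnv MYAPP_CONFIG /etc/myapp.ini in the Apache configuration:

def application(environ, start_response):
    # read the value injected by Apache via SetEnv (hypothetical variable name)
    config_path = environ.get('MYAPP_CONFIG', '/tmp/default.ini')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('using config: %s\n' % config_path).encode('utf-8')]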
Note: mod_proxy_uwsgi is considered stable starting from uWSGI 2.0.6.

Note: If you want to use this module (and help the uWSGI project), report any bugs you find, rather than falling back to the ancient (and ugly) mod_uwsgi.

Starting from Apache 2.4.9, support for UNIX sockets has been added. The syntax is pretty simple:

ProxyPass / unix:/tmp/uwsgi.sock|uwsgi:

11.1.3 mod_Ruwsgi

This module is based on the SCGI module written by Roger Florkowski.

Note: This module is currently undocumented.

11.2 Cherokee support

Note: Recent official versions of Cherokee have an uWSGI configuration wizard. If you want to use it, you have to install uWSGI in a directory included in your system PATH.

• Set the UWSGI handler for your target.
• If you are using the default target (/), remember to uncheck the check_file property.
• Configure an "information source" of type "Remote", specifying the socket name of uWSGI.

If your uWSGI has TCP support, you can build a cluster by spawning the uWSGI server on a different machine.

Note: Remember to add a target for all of your URIs containing static files (e.g. /media, /images, ...) using an appropriate handler.

11.2.1 Dynamic apps

If you want to hot-add apps, specify the UWSGI_SCRIPT var in the uWSGI handler options:

• In the "Add new custom environment variable" section, specify UWSGI_SCRIPT as the name and the name of your WSGI script (without the .py extension) as the value.

Your app will be loaded automatically at the first request.

11.3 Native HTTP support

11.3.1 HTTPS support (from 1.3)

Use the https option, whose argument has the form address,certificate,key. This option may be specified multiple times. First generate your server key, certificate signing request, and self-sign the certificate using the OpenSSL toolset:

Note: You'll want a real SSL certificate for production use.

openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -out foobar.csr
openssl x509 -req -days 365 -in foobar.csr -signkey foobar.key -out foobar.crt

Then start the server using the SSL certificate and key just generated:

uwsgi --master --https 0.0.0.0:8443,foobar.crt,foobar.key

As port 443, the port normally used by HTTPS, is privileged (i.e. non-root processes may not bind to it), you can use the shared socket mechanism and drop privileges after binding, like this:

uwsgi --shared-socket 0.0.0.0:443 --uid roberto --gid roberto --https =0,foobar.crt,foobar.key

uWSGI will bind to port 443 on any IP, then drop privileges to those of roberto, and use the shared socket 0 (=0) for HTTPS.

Note: The =0 syntax is currently undocumented.

Setting SSL/TLS ciphers

The https option takes an optional fourth argument you can use to specify the OpenSSL cipher suite.

[uwsgi]
master = true
shared-socket = 0.0.0.0:443
uid = www-data
gid = www-data
https = =0,foobar.crt,foobar.key,HIGH
http-to = /tmp/uwsgi.sock

This will set all of the HIGHest ciphers (whenever possible) for your SSL/TLS transactions.

Client certificate authentication

The https option can also take an optional fifth argument. You can use it to specify a CA certificate to authenticate your clients with.
Generate your CA key and certificate (this time the key will be 4096 bits and password-protected):

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

Generate the server key and CSR (as before):

openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -out foobar.csr

Sign the server certificate with your new CA:

openssl x509 -req -days 365 -in foobar.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out foobar.crt

Create a key and a CSR for your client, sign it with your CA and package it as PKCS#12. Repeat these steps for each client.

openssl genrsa -des3 -out client.key 2048
openssl req -new -key client.key -out client.csr
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
openssl pkcs12 -export -in client.crt -inkey client.key -name "Client 01" -out client.p12

Then configure uWSGI for client certificate authentication:

[uwsgi]
master = true
shared-socket = 0.0.0.0:443
uid = www-data
gid = www-data
https = =0,foobar.crt,foobar.key,HIGH,!ca.crt
http-to = /tmp/uwsgi.sock

Note: If you don't want the client certificate authentication to be mandatory, remove the '!' before ca.crt in the https options.

11.3.2 HTTP sockets

The http-socket option will make uWSGI natively speak HTTP. If your web server does not support the uwsgi protocol but is able to speak to upstream HTTP proxies, or if you are using a service like Webfaction or Heroku to host your application, you can use http-socket. If you plan to expose your app to the world with uWSGI only, use the http option instead, as the router/proxy/load-balancer will then be your shield.

11.3.3 The uWSGI HTTP/HTTPS router

uWSGI includes an HTTP/HTTPS router/proxy/load-balancer that can forward requests to uWSGI workers. The server can be used in two ways: embedded and standalone. In embedded mode, it will automatically spawn workers and set up the communication socket. In standalone mode you have to specify the address of a uwsgi socket to connect to.

Embedded mode:

./uwsgi --http 127.0.0.1:8080 --master --module mywsgiapp --processes 4

This will spawn an HTTP server on port 8080 that forwards requests to a pool of 4 uWSGI workers managed by the master process.

Standalone mode:

./uwsgi --master --http 127.0.0.1:8080 --http-to /tmp/uwsgi.sock

This will spawn an HTTP router (governed by a master for your safety) that forwards requests to the uwsgi socket /tmp/uwsgi.sock.

You can bind to multiple addresses/ports:

[uwsgi]
http = 0.0.0.0:8080
http = 192.168.173.17:8181
http = 127.0.0.1:9090
master = true
http-to = /tmp/uwsgi.sock

And load-balance to multiple nodes:

[uwsgi]
http = 0.0.0.0:8080
http = 192.168.173.17:8181
http = 127.0.0.1:9090
master = true
http-to = /tmp/uwsgi.sock
http-to = 192.168.173.1:3031
http-to = 192.168.173.2:3031
http-to = 192.168.173.3:3031

• If you want to go massive (virtualhosting and zero-conf scaling), combine the HTTP router with the uWSGI Subscription Server.
• You can make the HTTP server pass custom uwsgi variables to workers with the http-var KEY=VALUE option.
• You can use the http-modifier1 option to pass a custom modifier1 value to workers.

11.3.4 HTTPS support

See HTTPS support (from 1.3).

11.3.5 HTTP Keep-Alive

If your backends set the correct HTTP headers, you can use the http-keepalive option. Your backends will need to set a valid Content-Length in each response or use chunked encoding; simply setting "Connection: close" is not enough. Also remember to set "Connection: Keep-Alive" in your response. You can automate that using the add-header "Connection: Keep-Alive" option.
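As a minimal sketch of what the backend side looks like (not an official recipe), a WSGI app can satisfy both requirements by building the body first and declaring its length; the Connection header could equally be injected globally with the add-header option:

def application(environ, start_response):
    body = b"Hello, keep-alive world!\n"
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        # a valid Content-Length is required for --http-keepalive
        ('Content-Length', str(len(body))),
        # alternatively, add this header globally with --add-header
        ('Connection', 'Keep-Alive'),
    ])
    return [body]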
11.3.6 Can I use uWSGI's HTTP capabilities in production?

If you need a load balancer/proxy, it can be a very good idea. It will automatically find new uWSGI instances and can load-balance in various ways. If you want to use it as a real webserver, you should take into account that serving static files from uWSGI instances is possible, but not as good as using a dedicated full-featured web server. If you host static assets in the cloud or on a CDN, using uWSGI's HTTP capabilities you can definitely avoid configuring a full webserver.

Note: If you use Amazon's ELB (Elastic Load Balancer) in HTTP mode in front of uWSGI in HTTP mode, a valid Content-Length must be set by the backend.

11.4 HTTPS support (from 1.3)

Use the https option, whose argument has the form address,certificate,key. This option may be specified multiple times. First generate your server key, certificate signing request, and self-sign the certificate using the OpenSSL toolset:

Note: You'll want a real SSL certificate for production use.

openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -out foobar.csr
openssl x509 -req -days 365 -in foobar.csr -signkey foobar.key -out foobar.crt

Then start the server using the SSL certificate and key just generated:

uwsgi --master --https 0.0.0.0:8443,foobar.crt,foobar.key

As port 443, the port normally used by HTTPS, is privileged (i.e. non-root processes may not bind to it), you can use the shared socket mechanism and drop privileges after binding, like this:

uwsgi --shared-socket 0.0.0.0:443 --uid roberto --gid roberto --https =0,foobar.crt,foobar.key

uWSGI will bind to port 443 on any IP, then drop privileges to those of roberto, and use the shared socket 0 (=0) for HTTPS.

Note: The =0 syntax is currently undocumented.

11.4.1 Setting SSL/TLS ciphers

The https option takes an optional fourth argument you can use to specify the OpenSSL cipher suite.

[uwsgi]
master = true
shared-socket = 0.0.0.0:443
uid = www-data
gid = www-data
https = =0,foobar.crt,foobar.key,HIGH
http-to = /tmp/uwsgi.sock

This will set all of the HIGHest ciphers (whenever possible) for your SSL/TLS transactions.

11.4.2 Client certificate authentication

The https option can also take an optional fifth argument. You can use it to specify a CA certificate to authenticate your clients with.

Generate your CA key and certificate (this time the key will be 4096 bits and password-protected):

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

Generate the server key and CSR (as before):

openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -out foobar.csr

Sign the server certificate with your new CA:

openssl x509 -req -days 365 -in foobar.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out foobar.crt

Create a key and a CSR for your client, sign it with your CA and package it as PKCS#12. Repeat these steps for each client.
openssl genrsa -des3 -out client.key 2048 openssl req -new -key client.key -out client.csr openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt openssl pkcs12 -export -in client.crt -inkey client.key -name "Client 01" -out client.p12 Then configure uWSGI for certificate client authentication [uwsgi] master= true shared-socket= 0.0.0.0:443 uid= www-data gid= www-data https= =0,foobar.crt,foobar.key,HIGH,!ca.crt http-to= /tmp/uwsgi.sock Note: If you don’t want the client certificate authentication to be mandatory, remove the ‘!’ before ca.crt in the https options. 11.5 The SPDY router (uWSGI 1.9) Starting from uWSGI 1.9 the HTTPS router has been extended to support version 3 of the SPDY protocol. To run the HTTPS router with SPDY support, use the --https2 option: uwsgi --https2 addr=0.0.0.0:8443,cert=foobart.crt,key=foobar.key,spdy=1 --module werkzeug.testapp:test_app This will start an HTTPS router on port 8443 with SPDY support, forwarding requests to the Werkzeug’s test app the instance is running. If you’ll go to https://address:8443/ with a SPDY-enabled browser, you will see additional WSGI variables reported by Werkzeug: • SPDY – on • SPDY.version – protocol version (generally 3) • SPDY.stream – stream identifier (an odd number). Opening privileged ports as non-root user will require shared-socket and a slightly different syntax: 11.5. The SPDY router (uWSGI 1.9) 505 uWSGI Documentation, Release 2.0 uwsgi --shared-socket :443 --https2 addr==0,cert=foobart.crt,key=foobar.key,spdy=1 --module werkzeug.testapp:test_app --uid user Both HTTP and HTTPS can be used at the same time (=0 and =1 are references to the privileged ports opened by shared-socket commands): uwsgi --shared-socket :80 --shared-socket :443 --http=0 --https2 addr==1,cert=foobart.crt,key=foobar.key,spdy=1 --module werkzeug.testapp:test_app --uid user 11.5.1 Notes • You need at least OpenSSL 1.x to use SPDY (all of the modern Linux distros should have it). • During uploads, the window size is constantly updated. • The --http-timeout directive is used to set the SPDY timeout. This is the maximum amount of inactivity after the SPDY connection is closed. • PING requests from the browsers are all acknowledged. • On connect, the SPDY router sends a settings packet to the client with optimal values. • If a stream fails in some catastrophic way, the whole connection is closed hard. • RST messages are always honoured. 11.5.2 TODO • Add old SPDY v2 support (is it worth it?) • Allow PUSHing of resources from the uWSGI cache • Allow tuning internal buffers 11.6 Lighttpd support Note: Lighttpd support is experimental. The uwsgi handler for Lighttpd lives in the /lighttpd directory of the uWSGI distribution. 11.6.1 Building the module First download the source of lighttpd and uncompress it. Copy the lighttpd/mod_uwsgi.c file from the uWSGI distribution into Lighttpd’s /src directory. Add the following to the lighttpd src/Makefile.am file, after the accesslog block: lib_LTLIBRARIES += mod_uwsgi.la mod_uwsgi_la_SOURCES = mod_uwsgi.c mod_uwsgi_la_LDFLAGS = -module -export-dynamic -avoid-version -no-undefined mod_uwsgi_la_LIBADD = $(common_libadd) Then launch autoreconf-fi and as usual, 506 Chapter 11. Web Server support uWSGI Documentation, Release 2.0 ./configure && make && make install 11.6.2 Configuring Lighttpd Modify your configuration file: server.modules = ( ... "mod_uwsgi", ... ) # ... 
uwsgi.server = (
    "/pippo" => ((
        "host" => "192.168.173.15",
        "port" => 3033
    )),
    "/" => ((
        "host" => "127.0.0.1",
        "port" => 3031
    )),
)

If you specify multiple hosts under the same virtual path/URI, load balancing will be activated with the "Fair" algorithm.

11.7 Attaching uWSGI to Mongrel2

Mongrel2 is a next-next-generation webserver that focuses on modern webapps. Just like uWSGI, it is fully language-agnostic, cluster-friendly and delightfully controversial :)

It uses the amazing ZeroMQ library for communication, allowing reliable, easy message queueing and configuration-free scalability. Starting from version 0.9.8-dev, uWSGI can be used as a Mongrel2 handler.

11.7.1 Requirements

To enable ZeroMQ/Mongrel2 support in uWSGI you need the zeromq library (2.1+) and the uuid library.

Mongrel2 can use JSON or tnetstring to pass data (such as headers and various other information) to handlers. uWSGI supports tnetstring out of the box, but requires the Jansson library to parse JSON data. If you don't install Jansson or do not want to use JSON, make sure you specify protocol='tnetstring' in the Handler in the Mongrel2 configuration, as the default is to use JSON. Otherwise you would get a rather obscure "JSON support not enabled. Skip request" message in the uWSGI log.

11.7.2 Configuring Mongrel2

You can find mongrel2-uwsgi.conf shipped with the uWSGI source. You can use this file as a base to configure Mongrel2.

main = Server(
    uuid="f400bf85-4538-4f7a-8908-67e313d515c2",
    access_log="/logs/access.log",
    error_log="/logs/error.log",
    chroot="./",
    default_host="192.168.173.11",
    name="test",
    pid_file="/run/mongrel2.pid",
    port=6767,
    hosts=[
        Host(name="192.168.173.11", routes={
            '/': Handler(send_spec='tcp://192.168.173.11:9999',
                         send_ident='54c6755b-9628-40a4-9a2d-cc82a816345e',
                         recv_spec='tcp://192.168.173.11:9998',
                         recv_ident='',
                         protocol='tnetstring')
        })
    ]
)

settings = {'upload.temp_store': 'tmp/mongrel2.upload.XXXXXX'}

servers = [main]

It is a pretty standard Mongrel2 configuration with upload streaming enabled.

11.7.3 Configuring uWSGI for Mongrel2

To attach uWSGI to Mongrel2, simply use the zeromq option:

uwsgi --zeromq tcp://192.168.173.11:9999,tcp://192.168.173.11:9998

You can spawn multiple processes (each one will subscribe to Mongrel2 with a different uuid):

uwsgi --zeromq tcp://192.168.173.11:9999,tcp://192.168.173.11:9998 -p 4

You can use threads too. Each thread will subscribe to the Mongrel2 queue, but the responder socket will be shared by all the threads and protected by a mutex.

# This will spawn 4 processes with 8 threads each, totaling 32 threads.
uwsgi --zeromq tcp://192.168.173.11:9999,tcp://192.168.173.11:9998 -p 4 --threads 8

11.7.4 Test them all

Add an application to uWSGI (we will use werkzeug.testapp as always):

uwsgi --zeromq tcp://192.168.173.11:9999,tcp://192.168.173.11:9998 -p 4 --threads 8 --module werkzeug.testapp:test_app

Now launch the command on all the servers you want; Mongrel2 will distribute requests to them automagically.

11.7.5 Async mode

Warning: Async support for ZeroMQ is still under development, as ZeroMQ uses edge-triggered events that complicate things in the uWSGI async architecture.

11.7.6 Chroot

By default Mongrel2 will chroot(). This is a good thing for security, but can cause headaches regarding file upload streaming. Remember that Mongrel2 will save the uploaded file in its own chroot jail, so if your uWSGI instance does not live in the same chroot jail, you will have to choose the paths carefully. In the example Mongrel2 configuration file we have used a relative path to easily allow uWSGI to reach the file.
11.7.7 Performance

Mongrel2 is extremely fast and reliable even under huge loads. tnetstring and JSON are text-based (so they are a little less efficient than the binary uwsgi protocol). However, as Mongrel2 does not require the expensive one-connection-per-request approach, you should get pretty much the same (if not higher) results compared to, for example, an Nginx + uWSGI setup.

11.7.8 uWSGI clustering + ZeroMQ

You can easily mix uWSGI clustering with ZeroMQ. Choose the main node and run:

uwsgi --zeromq tcp://192.168.173.11:9999,tcp://192.168.173.11:9998 -p 4 --threads 8 --module werkzeug.testapp:test_app --cluster 225.1.1.1:1717

And on all the other nodes simply run:

uwsgi --cluster 225.1.1.1:1717

11.7.9 Mixing standard sockets with ZeroMQ

You can add uwsgi/HTTP/FastCGI/... sockets to your uWSGI server in addition to ZeroMQ, but if you do, remember to disable threads! This limitation will probably be fixed in the future.

11.7.10 Logging via ZeroMQ

See also: ZeroMQLogging

11.8 Nginx support

Nginx natively includes support for upstream servers speaking the uwsgi protocol since version 0.8.40. If you are unfortunate enough to use an older version (that nevertheless is 0.7.63 or newer), you can find a module in the nginx directory of the uWSGI distribution.

11.8.1 Building the module (Nginx 0.8.39 and older)

Download a >= 0.7.63 release of nginx and untar it at the same level as your uWSGI distribution directory. Move into the nginx-0.7.x directory and ./configure nginx to add the uwsgi handler to its module list:

./configure --add-module=../uwsgi/nginx/

Then make and make install it. If all goes well, you can now configure Nginx to pass requests to the uWSGI server.

11.8.2 Configuring Nginx

First of all, copy the uwsgi_params file (available in the nginx directory of the uWSGI distribution) into your Nginx configuration directory, then add the following to a location directive in your Nginx configuration:

uwsgi_pass unix:///tmp/uwsgi.sock;
include uwsgi_params;

or, if you are using TCP sockets:

uwsgi_pass 127.0.0.1:3031;
include uwsgi_params;

Then simply reload Nginx and you are ready to rock your uWSGI-powered applications through Nginx.

What is the uwsgi_params file? It's convenience, nothing more! For your reading pleasure, the contents of the file as of uWSGI 1.3:

uwsgi_param QUERY_STRING $query_string;
uwsgi_param REQUEST_METHOD $request_method;
uwsgi_param CONTENT_TYPE $content_type;
uwsgi_param CONTENT_LENGTH $content_length;
uwsgi_param REQUEST_URI $request_uri;
uwsgi_param PATH_INFO $document_uri;
uwsgi_param DOCUMENT_ROOT $document_root;
uwsgi_param SERVER_PROTOCOL $server_protocol;
uwsgi_param REMOTE_ADDR $remote_addr;
uwsgi_param REMOTE_PORT $remote_port;
uwsgi_param SERVER_ADDR $server_addr;
uwsgi_param SERVER_PORT $server_port;
uwsgi_param SERVER_NAME $server_name;

See also: uwsgi protocol magic variables
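Each uwsgi_param entry becomes a key in the WSGI environ your application receives. A quick way to check what nginx is actually sending, as a throwaway sketch (any module name will do when you mount it), is an app that echoes the interesting keys back:

def application(environ, start_response):
    # echo the CGI-style variables passed by nginx through the uwsgi protocol
    keys = ('REQUEST_METHOD', 'REQUEST_URI', 'PATH_INFO',
            'REMOTE_ADDR', 'SERVER_NAME', 'SERVER_PORT')
    body = ''.join('%s = %s\n' % (k, environ.get(k, '')) for k in keys)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body.encode('utf-8')]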
11.8.3 Clustering

Nginx has beautiful integrated cluster support for all the upstream handlers. Add an upstream directive outside the server configuration block:

upstream uwsgicluster {
    server unix:///tmp/uwsgi.sock;
    server 192.168.1.235:3031;
    server 10.0.0.17:3017;
}

Then modify your uwsgi_pass directive:

uwsgi_pass uwsgicluster;

Your requests will be balanced between the uWSGI servers configured.

11.8.4 Dynamic apps

The uWSGI server can load applications on demand when passed special vars. uWSGI can be launched without passing it any application configuration:

./uwsgi -s /tmp/uwsgi.sock

If a request sets the UWSGI_SCRIPT var, the server will load the specified module:

location / {
    root html;
    uwsgi_pass uwsgicluster;
    uwsgi_param UWSGI_SCRIPT testapp;
    include uwsgi_params;
}

You can even configure multiple apps per-location:

location / {
    root html;
    uwsgi_pass uwsgicluster;
    uwsgi_param UWSGI_SCRIPT testapp;
    include uwsgi_params;
}

location /django {
    uwsgi_pass uwsgicluster;
    include uwsgi_params;
    uwsgi_param UWSGI_SCRIPT django_wsgi;
    uwsgi_param SCRIPT_NAME /django;
    uwsgi_modifier1 30;
}

The WSGI standard dictates that SCRIPT_NAME is the variable used to select a specific application. The uwsgi_modifier1 30 option sets the uWSGI modifier UWSGI_MODIFIER_MANAGE_PATH_INFO. This per-request modifier instructs the uWSGI server to rewrite the PATH_INFO value, removing the SCRIPT_NAME from it.

11.8.5 Static files

For best performance and security, remember to configure Nginx to serve static files instead of letting your poor application handle that. The uWSGI server can serve static files flawlessly, but not as quickly and efficiently as a dedicated web server like Nginx. For example, the Django /media path could be mapped like this:

location /media {
    alias /var/lib/python-support/python2.6/django/contrib/admin/media;
}

Some applications need to pass control to the uWSGI server only if the requested filename does not exist:

if (!-f $request_filename) {
    uwsgi_pass uwsgicluster;
}

WARNING: If used incorrectly, a configuration like this may cause security problems. For your sanity's sake, double-triple-quadruple check that your application files, configuration files and any other sensitive files are outside of the root of the static files.

11.8.6 Virtual Hosting

You can use Nginx's virtual hosting without particular problems.
If you run “untrusted” web apps (such as those of your clients if you happen to be an ISP) you should limit their memory/address space usage and use a different uid for each host/application: server { listen 80; server_name customersite1.com; access_log /var/log/customersite1/access_log; location / { root /var/www/customersite1; uwsgi_pass 127.0.0.1:3031; include uwsgi_params; } } server { listen 80; server_name customersite2.it; access_log /var/log/customersite2/access_log; location / { root /var/www/customersite2; uwsgi_pass 127.0.0.1:3032; include uwsgi_params; } } server { listen 80; server_name sivusto3.fi; access_log /var/log/customersite3/access_log; location / { root /var/www/customersite3; uwsgi_pass 127.0.0.1:3033; include uwsgi_params; } } The customers’ applications can now be run (using the process manager of your choice, such as rc.local, Running uWSGI via Upstart, Supervisord or whatever strikes your fancy) with a different uid and a limited (if you want) address space for each socket: uwsgi --uid 1001 -w customer1app --limit-as 128 -p 3 -M -s 127.0.0.1:3031 uwsgi --uid 1002 -w customer2app --limit-as 128 -p 3 -M -s 127.0.0.1:3032 uwsgi --uid 1003 -w django3app --limit-as 96 -p 6 -M -s 127.0.0.1:3033 512 Chapter 11. Web Server support CHAPTER 12 Language support 12.1 Python support 12.1.1 The uwsgi Python module The uWSGI server automagically adds a uwsgi module into your Python apps. This is useful for configuring the uWSGI server, use its internal functions and get statistics (as well as detecting whether you’re actually running under uWSGI). Note: Many of these functions are currently woefully undocumented. Module-level globals uwsgi.numproc The number of processes/workers currently running. uwsgi.buffer_size The current configured buffer size in bytes. uwsgi.started_on(int) The Unix timestamp of uWSGI’s startup. uwsgi.fastfuncs This is the dictionary used to define FastFuncs. uwsgi.applist This is the list of applications currently configured. uwsgi.applications This is the dynamic applications dictionary. See also: Application dictionary uwsgi.message_manager_marshal The callable to run when the uWSGI server receives a marshalled message. uwsgi.magic_table The magic table of configuration placeholders. 513 uWSGI Documentation, Release 2.0 uwsgi.opt The current configuration options, including any custom placeholders. Cache functions uwsgi.cache_get(key[, cache_server ]) Get a value from the cache. Parameters • key – The cache key to read. • cache_server – The UNIX/TCP socket where the cache server is listening. Optional. uwsgi.cache_set(key, value[, expire, cache_server ]) Set a value in the cache. Parameters • key – The cache key to write. • value – The cache value to write. • expire – Expiry time of the value, in seconds. • cache_server – The UNIX/TCP socket where the cache server is listening. Optional. uwsgi.cache_update(key, value[, expire, cache_server ]) uwsgi.cache_del(key[, cache_server ]) Delete the given cached value from the cache. Parameters • key – The cache key to delete. • cache_server – The UNIX/TCP socket where the cache server is listening. Optional. uwsgi.cache_exists(key[, cache_server ]) Quickly check whether there is a value in the cache associated with the given key. Parameters • key – The cache key to check. • cache_server – The UNIX/TCP socket where the cache server is listening. Optional. 
uwsgi.cache_clear() Queue functions uwsgi.queue_get() uwsgi.queue_set() uwsgi.queue_last() uwsgi.queue_push() uwsgi.queue_pull() uwsgi.queue_pop() uwsgi.queue_slot() 514 Chapter 12. Language support uWSGI Documentation, Release 2.0 uwsgi.queue_pull_slot() SNMP functions uwsgi.snmp_set_community(str) Parameters str – The string containing the new community value. Sets the SNMP community string. uwsgi.snmp_set_counter32(oidnum, value) uwsgi.snmp_set_counter64(oidnum, value) uwsgi.snmp_set_gauge(oidnum, value) Parameters • oidnum – An integer containing the oid number target. • value – An integer containing the new value of the counter or gauge. Sets the counter or gauge to a specific value. uwsgi.snmp_incr_counter32(oidnum, value) uwsgi.snmp_incr_counter64(oidnum, value) uwsgi.snmp_incr_gauge(oidnum, value) uwsgi.snmp_decr_counter32(oidnum, value) uwsgi.snmp_decr_counter64(oidnum, value) uwsgi.snmp_decr_gauge(oidnum, value) Parameters • oidnum – An integer containing the oid number target. • value – An integer containing the amount to increase or decrease the counter or gauge. If not specified the default is 1. Increases or decreases the counter or gauge by a specific amount. Note: uWSGI OID tree starts at 1.3.6.1.4.1.35156.17 Spooler functions uwsgi.send_to_spooler(message_dict=None, spooler=None, priority=None, at=None, body=None, **kwargs) Parameters • message_dict – The message (string keys, string values) to spool. Either this, or **kwargs may be set. • spooler – The spooler (id or directory) to use. • priority – The priority of the message. Larger = less important. • at – The minimum UNIX timestamp at which this message should be processed. • body – A binary (bytestring) body to add to the message, in addition to the message dictio- nary itself. Its value will be available in the key body in the message. 12.1. Python support 515 uWSGI Documentation, Release 2.0 Send data to the The uWSGI Spooler. Also known as spool(). Note: Any of the keyword arguments may also be passed in the message dictionary. This means they’re reserved words, in a way... uwsgi.set_spooler_frequency(seconds) Set how often the spooler runs. uwsgi.spooler_jobs() uwsgi.spooler_pid() Advanced methods uwsgi.send_message() Send a generic message using The uwsgi Protocol. Note: Until version 2f970ce58543278c851ff30e52758fd6d6e69fdc this function was called send_uwsgi_message(). uwsgi.route() uwsgi.send_multi_message() Send a generic message to multiple recipients using The uwsgi Protocol. Note: Until version 2f970ce58543278c851ff30e52758fd6d6e69fdc this function was called send_multi_uwsgi_message(). See also: Clustering for examples uwsgi.reload() Gracefully reload the uWSGI server stack. See also: Reload uwsgi.stop() uwsgi.workers() → dict Get a statistics dictionary of all the workers for the current server. A dictionary is returned. uwsgi.masterpid() → int Return the process identifier (PID) of the uWSGI master process. uwsgi.total_requests() → int Returns the total number of requests managed so far by the pool of uWSGI workers. uwsgi.get_option() Also available as getoption(). uwsgi.set_option() Also available as setoption(). uwsgi.sorry_i_need_to_block() uwsgi.request_id() uwsgi.worker_id() 516 Chapter 12. Language support uWSGI Documentation, Release 2.0 uwsgi.mule_id() uwsgi.log() uwsgi.log_this_request() uwsgi.set_logvar() uwsgi.get_logvar() uwsgi.disconnect() uwsgi.grunt() uwsgi.lock(locknum=0) Parameters locknum – The lock number to lock. Lock 0 is always available. 
uwsgi.is_locked() uwsgi.unlock(locknum=0) Parameters locknum – The lock number to unlock. Lock 0 is always available. uwsgi.cl() uwsgi.setprocname() uwsgi.listen_queue() uwsgi.register_signal(num, who, function) Parameters • num – the signal number to configure • who – a magic string that will set which process/processes receive the signal. – worker/worker0 will send the signal to the first available worker. This is the default if you specify an empty string. – workers will send the signal to every worker. – workerN (N > 0) will send the signal to worker N. – mule/mule0 will send the signal to the first available mule. (See uWSGI Mules) – mules will send the signal to all mules – muleN (N > 0) will send the signal to mule N. – cluster will send the signal to all the nodes in the cluster. Warning: not implemented. – subscribed will send the signal to all subscribed nodes. Warning: not implemented. – spooler will send the signal to the spooler. cluster and subscribed are special, as they will send the signal to the master of all cluster/subscribed nodes. The other nodes will have to define a local handler though, to avoid a terrible signal storm loop. • function – A callable that takes a single numeric argument. uwsgi.signal(num) Parameters num – the signal number to raise 12.1. Python support 517 uWSGI Documentation, Release 2.0 uwsgi.signal_wait([signum]) Block the process/thread/async core until a signal is received. Use signal_received to get the number of the signal received. If a registered handler handles a signal, signal_wait will be interrupted and the actual handler will handle the signal. Parameters signum – Optional - the signal to wait for uwsgi.signal_registered() uwsgi.signal_received() Get the number of the last signal received. Used in conjunction with signal_wait. uwsgi.add_file_monitor() uwsgi.add_timer(signum, seconds[, iterations=0]) Parameters • signum – The signal number to raise. • seconds – The interval at which to raise the signal. • iterations – How many times to raise the signal. 0 (the default) means infinity. uwsgi.add_probe() uwsgi.add_rb_timer(signum, seconds[, iterations=0]) Add an user-space (red-black tree backed) timer. Parameters • signum – The signal number to raise. • seconds – The interval at which to raise the signal. • iterations – How many times to raise the signal. 0 (the default) means infinity. uwsgi.add_cron(signal, minute, hour, day, month, weekday) For the time parameters, you may use the syntax -n to denote “every n”. For instance hour=-2 would declare the signal to be sent every other hour. Parameters • signal – The signal number to raise. • minute – The minute on which to run this event. • hour – The hour on which to run this event. • day – The day on which to run this event. This is “OR”ed with weekday. • month – The month on which to run this event. • weekday – The weekday on which to run this event. This is “OR”ed with day. (In accor- dance with the POSIX standard, 0 is Sunday, 6 is Monday) uwsgi.register_rpc() uwsgi.rpc() uwsgi.rpc_list() uwsgi.call() uwsgi.sendfile() uwsgi.set_warning_message() uwsgi.mem() 518 Chapter 12. 
Language support uWSGI Documentation, Release 2.0 uwsgi.has_hook() uwsgi.logsize() uwsgi.send_multicast_message() uwsgi.cluster_nodes() uwsgi.cluster_node_name() uwsgi.cluster() uwsgi.cluster_best_node() uwsgi.connect() uwsgi.connection_fd() uwsgi.is_connected() uwsgi.send() uwsgi.recv() uwsgi.recv_block() uwsgi.recv_frame() uwsgi.close() uwsgi.i_am_the_spooler() uwsgi.fcgi() uwsgi.parsefile() uwsgi.embedded_data(symbol_name) Parameters string – The symbol name to extract. Extracts a symbol from the uWSGI binary image. See also: Embedding an application in uWSGI uwsgi.extract() uwsgi.mule_msg(string[, id ]) Parameters • string – The bytestring message to send. • id – Optional - the mule ID to receive the message. If you do not specify an ID, the message will go to the first available programmed mule. Send a message to a mule. uwsgi.farm_msg() uwsgi.mule_get_msg() Returns A mule message, once one is received. Block until a mule message is received and return it. This can be called from multiple threads in the same programmed mule. uwsgi.farm_get_msg() uwsgi.in_farm() 12.1. Python support 519 uWSGI Documentation, Release 2.0 uwsgi.ready() uwsgi.set_user_harakiri() Async functions uwsgi.async_sleep(seconds) Suspend handling the current request for seconds seconds and pass control to the next async core. Parameters seconds – Sleep time, in seconds. uwsgi.async_connect() uwsgi.async_send_message() uwsgi.green_schedule() uwsgi.suspend() Suspend handling the current request and pass control to the next async core clamoring for attention. uwsgi.wait_fd_read(fd[, timeout ]) Suspend handling the current request until there is something to be read on file descriptor fd. May be called several times before yielding/suspending to add more file descriptors to the set to be watched. Parameters • fd – File descriptor number. • timeout – Optional timeout (infinite if omitted). uwsgi.wait_fd_write(fd[, timeout ]) Suspend handling the current request until there is nothing more to be written on file descriptor fd. May be called several times to add more file descriptors to the set to be watched. Parameters • fd – File descriptor number. • timeout – Optional timeout (infinite if omitted). SharedArea functions See also: SharedArea – share memory pages between uWSGI components uwsgi.sharedarea_read(pos, len) → bytes Read a byte string from the uWSGI SharedArea – share memory pages between uWSGI components. Parameters • pos – Starting position to read from. • len – Number of bytes to read. Returns Bytes read, or None if the shared area is not enabled or the read request is invalid. uwsgi.sharedarea_write(pos, str) → long Write a byte string into the uWSGI SharedArea – share memory pages between uWSGI components. Parameters • pos – Starting position to write to. • str – Bytestring to write. 520 Chapter 12. Language support uWSGI Documentation, Release 2.0 Returns Number of bytes written, or None if the shared area is not enabled or the write could not be fully finished. uwsgi.sharedarea_readbyte(pos) → int Read a single byte from the uWSGI SharedArea – share memory pages between uWSGI components. Parameters pos – The position to read from. Returns Bytes read, or None if the shared area is not enabled or the read request is invalid. uwsgi.sharedarea_writebyte(pos, val) → int Write a single byte into the uWSGI SharedArea – share memory pages between uWSGI components. Parameters • pos – The position to write the value to. • val (integer) – The value to write. 
Returns The byte written, or None if the shared area is not enabled or the write request is invalid. uwsgi.sharedarea_readlong(pos) → int Read a 64-bit (8-byte) long from the uWSGI SharedArea – share memory pages between uWSGI components. Parameters pos – The position to read from. Returns The value read, or None if the shared area is not enabled or the read request is invalid. uwsgi.sharedarea_writelong(pos, val) → int Write a 64-bit (8-byte) long into the uWSGI SharedArea – share memory pages between uWSGI components. Parameters • pos – The position to write the value to. • val (long) – The value to write. Returns The value written, or None if the shared area is not enabled or the write request is invalid. uwsgi.sharedarea_inclong(pos) → int Atomically increment a 64-bit long value in the uWSGI SharedArea – share memory pages between uWSGI components. Parameters pos – The position of the value. Returns The new value at the given position, or None if the shared area is not enabled or the read request is invalid. Erlang functions uwsgi.erlang_send_message(node, process_name, message) uwsgi.erlang_register_process(process_name, callable) uwsgi.erlang_recv_message(node) uwsgi.erlang_connect(address) Returns File descriptor or -1 on error uwsgi.erlang_rpc(node, module, function, argument) 12.1. Python support 521 uWSGI Documentation, Release 2.0 12.1.2 uWSGI API - Python decorators The uWSGI API is very low-level, as it must be language-independent. That said, being too low-level is not a Good Thing for many languages, such as Python. Decorators are, in our humble opinion, one of the more kick-ass features of Python, so in the uWSGI source tree you will find a module exporting a bunch of decorators that cover a good part of the uWSGI API. Notes Signal-based decorators execute the signal handler in the first available worker. If you have enabled the spooler you can execute the signal handlers in it, leaving workers free to manage normal requests. Simply pass target=’spooler’ to the decorator. @timer(3, target=’spooler’) def hello(signum): print("hello") Example: a Django session cleaner and video encoder Let’s define a task.py module and put it in the Django project directory. from uwsgidecorators import * from django.contrib.sessions.models import Session import os @cron(40,2,-1,-1,-1) def clear_django_session(num): print("it’s 2:40 in the morning: clearing django sessions") Session.objects.all().delete() @spool def encode_video(arguments): os.system("ffmpeg -i \"%s\" image%%d.jpg"% arguments[’filename’]) The session cleaner will be executed every day at 2:40, to enqueue a video encoding we simply need to spool it from somewhere else. from task import encode_video def index(request): # launching video encoding encode_video.spool(filename=request.POST[’video_filename’]) return render_to_response(’enqueued.html’) Now run uWSGI with the spooler enabled: [uwsgi] ; a couple of placeholder django_projects_dir= /var/www/apps my_project= foobar ; chdir to app project dir and set pythonpath chdir= %(django_projects_dir)/%(my_project) pythonpath= %(django_projects_dir) ; load django module= django.core.handlers:WSGIHandler() 522 Chapter 12. Language support uWSGI Documentation, Release 2.0 env= DJANGO_SETTINGS_MODULE=%(my_project).settings ; enable master master= true ; 4 processes should be enough processes=4 ; enable the spooler (the mytasks dir must exist!) 
spooler= %(chdir)/mytasks ; load the task.py module import= task ; bind on a tcp socket socket= 127.0.0.1:3031 The only especially relevant option is the import one. It works in the same way as module but skips the WSGI callable search. You can use it to preload modules before the loading of WSGI apps. You can specify an unlimited number of ‘’‘import’‘’ directives. Example: web2py + spooler + timer First of all define your spooler and timer functions (we will call it :file:mytasks.py) from uwsgidecorators import * @spool def a_long_task(args): print(args) @spool def a_longer_task(args) print("longer.....") @timer(3) def three_seconds(signum): print("3 seconds elapsed") @timer(10, target=’spooler’) def ten_seconds_in_the_spooler(signum): print("10 seconds elapsed in the spooler") Now run web2py. uwsgi --socket :3031 --spooler myspool --master --processes 4 --import mytasks --module web2py.wsgihandler As soon as the application is loaded, you will see the 2 timers running in your logs. Now we want to enqueue tasks from our web2py controllers. Edit one of them and add import mytasks # be sure mytasks is importable! def index(): # this is a web2py action mytasks.a_long_task.spool(foo=’bar’) return "Task enqueued" uwsgidecorators API reference uwsgidecorators.postfork(func) uWSGI is a preforking (or “fork-abusing”) server, so you might need to execute a fixup task after each fork(). 12.1. Python support 523 uWSGI Documentation, Release 2.0 The postfork decorator is just the ticket. You can declare multiple postfork tasks. Each decorated function will be executed in sequence after each fork(). @postfork def reconnect_to_db(): myfoodb.connect() @postfork def hello_world(): print("Hello World") uwsgidecorators.spool(func) The uWSGI spooler can be very useful. Compared to Celery or other queues it is very “raw”. The spool decorator will help! @spool def a_long_long_task(arguments): print(arguments) for i in xrange(0, 10000000): time.sleep(0.1) @spool def a_longer_task(args): print(args) for i in xrange(0, 10000000): time.sleep(0.5) # enqueue the tasks a_long_long_task.spool(foo=’bar’,hello=’world’) a_longer_task.spool({’pippo’:’pluto’}) The functions will automatically return uwsgi.SPOOL_OK so they will be executed one time independently by their return status. uwsgidecorators.spoolforever(func) Use spoolforever when you want to continuously execute a spool task. A @spoolforever task will always return uwsgi.SPOOL_RETRY. @spoolforever def a_longer_task(args): print(args) for i in xrange(0, 10000000): time.sleep(0.5) # enqueue the task a_longer_task.spool({’pippo’:’pluto’}) uwsgidecorators.spoolraw(func) Advanced users may want to control the return value of a task. @spoolraw def a_controlled_task(args): if args[’foo’] ==’bar’: return uwsgi.SPOOL_OK return uwsgi.SPOOL_RETRY a_controlled_task.spool(foo=’bar’) uwsgidecorators.rpc(“name”, func) 524 Chapter 12. Language support uWSGI Documentation, Release 2.0 uWSGI uWSGI RPC Stack is the fastest way to remotely call functions in applications hosted in uWSGI in- stances. You can easily define exported functions with the @rpc decorator. @rpc(’helloworld’) def ciao_mondo_function(): return "Hello World" uwsgidecorators.signal(num)(func) You can register signals for the signal framework in one shot. @signal(17) def my_signal(num): print("i am signal %d"% num) uwsgidecorators.timer(interval, func) Execute a function at regular intervals. 
@timer(3) def three_seconds(num): print("3 seconds elapsed") uwsgidecorators.rbtimer(interval, func) Works like @timer but using red black timers. uwsgidecorators.cron(min, hour, day, mon, wday, func) Easily register functions for the CronInterface. @cron(59,3,-1,-1,-1) def execute_me_at_three_and_fiftynine(num): print("it’s 3:59 in the morning") Since 1.2, a new syntax is supported to simulate crontab-like intervals (every Nth minute, etc.). */5 ** ** can be specified in uWSGI like thus: @cron(-5,-1,-1,-1,-1) def execute_me_every_five_min(num): print("5 minutes, what a long time!") uwsgidecorators.filemon(path, func) Execute a function every time a file/directory is modified. @filemon("/tmp") def tmp_has_been_modified(num): print("/tmp directory has been modified. Great magic is afoot") uwsgidecorators.erlang(process_name, func) Map a function as an Erlang process. @erlang(’foobar’) def hello(): return "Hello" uwsgidecorators.thread(func) Mark function to be executed in a separate thread. Important: Threading must be enabled in uWSGI with the enable-threads or threads option. @thread def a_running_thread(): while True: 12.1. Python support 525 uWSGI Documentation, Release 2.0 time.sleep(2) print("i am a no-args thread") @thread def a_running_thread_with_args(who): while True: time.sleep(2) print("Hello %s (from arged-thread)"% who) a_running_thread() a_running_thread_with_args("uWSGI") You may also combine @thread with @postfork to spawn the postfork handler in a new thread in the freshly spawned worker. @postfork @thread def a_post_fork_thread(): while True: time.sleep(3) print("Hello from a thread in worker %d"% uwsgi.worker_id()) uwsgidecorators.lock(func) This decorator will execute a function in fully locked environment, making it impossible for other workers or threads (or the master, if you’re foolish or brave enough) to run it simultaneously. Obviously this may be combined with @postfork. @lock def dangerous_op(): print("Concurrency is for fools!") uwsgidecorators.mulefunc([mulespec], func) Offload the execution of the function to a mule. When the offloaded function is called, it will return immediately and execution is delegated to a mule. @mulefunc def i_am_an_offloaded_function(argument1, argument2): print argument1,argument2 You may also specify a mule ID or mule farm to run the function on. Please remember to register your function with a uwsgi import configuration option. @mulefunc(3) def on_three(): print "I’m running on mule 3." @mulefunc(’old_mcdonalds_farm’) def on_mcd(): print "I’m running on a mule on Old McDonalds’ farm." uwsgidecorators.harakiri(time, func) Starting from uWSGI 1.3-dev, a customizable secondary harakiri subsystem has been added. You can use this decorator to kill a worker if the given call is taking too long. @harakiri(10) def slow_function(foo, bar): for i in range(0, 10000): for y in range(0, 10000): pass 526 Chapter 12. Language support uWSGI Documentation, Release 2.0 # or the alternative lower level api uwsgi.set_user_harakiri(30)# you have 30 seconds. fight! slow_func() uwsgi.set_user_harakiri(0)# clear the timer, all is well 12.1.3 Pump support Note: Pump is not a PEP nor a standard. Pump is a new project aiming at a “better” WSGI. An example Pump app, for your convenience: def app(req): return { "status": 200, "headers":{"content_type":"text/html"}, "body":"
<h1>Hello!</h1>
" } To load a Pump app simply use the pump option to declare the callable. uwsgi --http-socket :8080 -M -p 4 --pump myapp:app myapp is the name of the module (that must be importable!) and app is the callable. The callable part is optional – by default uWSGI will search for a callable named ‘application’. 12.1.4 Python Tracebacker New in version 1.3-dev. Usually if you want to get a real-time traceback from your app you’d have to modify your code to add a hook or entry point for that as described on the TipsAndTricks page. Starting from 1.3-dev, uWSGI includes a similar technique allowing you to get realtime traceback via a UNIX socket. To enable the tracebacker, add the option py-tracebacker= where is the _basename_ for the created UNIX sockets. If you have 4 uWSGI workers and you add py-tracebacker=/tmp/tbsocket, four sockets named /tmp/tbsocket1 through /tmp/tbsocket4 will be created. Connecting to one of them will return the current traceback of the threads running in the worker. To connect to those sockets you can use whatever application or method you like the best, but uWSGI includes a convenience option connect-and-read you can use: uwsgi --connect-and-read /tmp/tbsocket1 An example Let’s write a silly test application called slow.py: 12.1. Python support 527 uWSGI Documentation, Release 2.0 import time def dormi(): time.sleep(60) def dormi2(): dormi() def dormi3(): dormi2() def dormi4(): dormi3() def dormi5(): dormi4() def application(e, start_response): start_response(’200 OK’, [(’Content-Type’,’text/html’)]) dormi5() return "hello" And then run it: uwsgi --http :8080 -w slow --master --processes 2 --threads 4 --py-tracebacker /tmp/tbsocket. Then make a bunch of requests into it: curl http://localhost:8080 & curl http://localhost:8080 & curl http://localhost:8080 & curl http://localhost:8080 & Now, while these requests are running (they’ll take pretty much exactly a minute to complete each), you can retrieve the traceback for, let’s say, the two first workers: ./uwsgi --connect-and-read /tmp/tbsocket.1 ./uwsgi --connect-and-read /tmp/tbsocket.2 The tracebacker output will be something like this: *** uWSGI Python tracebacker output *** thread_id = uWSGIWorker1Core1 filename = ./slow.py lineno = 22 function = application line = dormi5() thread_id = uWSGIWorker1Core1 filename = ./slow.py lineno = 14 function = dormi5 line = def dormi5(): dormi4() thread_id = uWSGIWorker1Core1 filename = ./slow.py lineno = 13 function = dormi4 line = def dormi4(): dormi3() thread_id = uWSGIWorker1Core1 filename = ./slow.py lineno = 12 function = dormi3 line = def dormi3(): dormi2() thread_id = uWSGIWorker1Core1 filename = ./slow.py lineno = 11 function = dormi2 line = def dormi2(): dormi() thread_id = uWSGIWorker1Core1 filename = ./slow.py lineno = 9 function = dormi line = time.sleep(60) thread_id = uWSGIWorker1Core3 filename = ./slow.py lineno = 22 function = application line = dormi5() thread_id = uWSGIWorker1Core3 filename = ./slow.py lineno = 14 function = dormi5 line = def dormi5(): dormi4() thread_id = uWSGIWorker1Core3 filename = ./slow.py lineno = 13 function = dormi4 line = def dormi4(): dormi3() thread_id = uWSGIWorker1Core3 filename = ./slow.py lineno = 12 function = dormi3 line = def dormi3(): dormi2() thread_id = uWSGIWorker1Core3 filename = ./slow.py lineno = 11 function = dormi2 line = def dormi2(): dormi() thread_id = uWSGIWorker1Core3 filename = ./slow.py lineno = 9 function = dormi line = time.sleep(60) thread_id = MainThread filename = ./slow.py lineno = 22 function = application 
line = dormi5() thread_id = MainThread filename = ./slow.py lineno = 14 function = dormi5 line = def dormi5(): dormi4() 528 Chapter 12. Language support uWSGI Documentation, Release 2.0 thread_id = MainThread filename = ./slow.py lineno = 13 function = dormi4 line = def dormi4(): dormi3() thread_id = MainThread filename = ./slow.py lineno = 12 function = dormi3 line = def dormi3(): dormi2() thread_id = MainThread filename = ./slow.py lineno = 11 function = dormi2 line = def dormi2(): dormi() thread_id = MainThread filename = ./slow.py lineno = 9 function = dormi line = time.sleep(60) Combining the tracebacker with Harakiri If a request is killed by the harakiri feature, a traceback is automatically logged during the Harakiri phase. 12.1.5 Aliasing Python modules Having multiple version of a Python package/module/file is very common. Manipulating PYTHONPATH or using virtualenvs are a way to use various versions without changing your code. But hey, why not have an aliasing system that lets you arbitrarily map module names to files? That’s why we have the pymodule-alias option! Case 1 - Mapping a simple file to a virtual module Let’s say we have swissknife.py that contains lots of useful classes and functions. It’s imported in gazillions of places in your app. Now, we’ll want to modify it, but keep the original file intact for whichever reason, and call it swissknife_mk2. Your options would be 1. to modify all of your code to import and use swissknife_mk2 instead of swissknife. Yeah, no, not’s going to happen. 2. modify the first line of all your files to read import swissknife_mk2 as swissknife. A lot better but you make software for money... and time is money, so why the fuck not use something more powerful? So don’t touch your files – just remap! ./uwsgi -s :3031 -w myproject --pymodule-alias swissknife=swissknife_mk2 # Kapow! uWSGI one-two ninja punch right there! # You can put the module wherever you like, too: ./uwsgi -s :3031 -w myproject --pymodule-alias swissknife=/mnt/floppy/KNIFEFAC/SWISSK~1.PY # Or hey, why not use HTTP? ./uwsgi -s :3031 -w myproject --pymodule-alias swissknife=http://uwsgi.it/modules/swissknife_extreme.py You can specify multiple pymodule-alias directives. uwsgi: socket: :3031 module: myproject pymodule-alias: funnymodule=/opt/foo/experimentalfunnymodule.py pymodule-alias: uglymodule=/opt/foo/experimentaluglymodule.py Case 2 - mapping a packages to directories You have this shiny, beautiful Django project and something occurs to you: Would it work with Django trunk? On to set up a new virtualenv... nah. Let’s just use pymodule-alias! 12.1. Python support 529 uWSGI Documentation, Release 2.0 ./uwsgi-s :3031-w django_uwsgi--pymodule-alias django=django-trunk/django Case 3 - override specific submodules You have a Werkzeug project where you want to override - for whichever reason - werkzeug.test_app with one of your own devising. Easy, of course! ./uwsgi-s :3031-w werkzeug.testapp:test_app()--pymodule-alias werkzeug.testapp=mytestapp See also: Python configuration options 12.1.6 Application dictionary You can use the application dictionary mechanism to avoid setting up your application in your configuration. 
12.1.6 Application dictionary

You can use the application dictionary mechanism to avoid setting up your application in your configuration.

    import uwsgi
    import django.core.handlers.wsgi

    application = django.core.handlers.wsgi.WSGIHandler()

    def myapp(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        yield 'Hello World\n'

    uwsgi.applications = {
        '': application,
        '/django': 'application',
        '/myapp': myapp
    }

Pass this Python module name (that is, it should be importable and without the .py extension) to uWSGI's module / wsgi option, and uWSGI will search the uwsgi.applications dictionary for the URL prefix/callable mappings. The value of every item can be a callable, or its name as a string.

12.1.7 Virtualenv support

virtualenv is a mechanism that lets you isolate one (or more) Python applications' libraries (and interpreters, when not using uWSGI) from each other. Virtualenvs should be used by any respectable modern Python application.

Quickstart

1. Create your virtualenv:

    $ virtualenv myenv
    New python executable in myenv/bin/python
    Installing setuptools...............done.
    Installing pip.........done.

2. Install all the modules you need (using Flask as an example):

    $ ./myenv/bin/pip install flask
    $ # Many modern Python projects ship with a requirements.txt file that you can use with pip like this:
    $ ./myenv/bin/pip install -r requirements.txt

3. Copy your WSGI module into this new environment (under lib/python2.x if you do not want to modify your PYTHONPATH).

Note: In many deployments the application will live outside the virtualenv. How to configure this is not quite documented yet, but it is probably very easy.

Run the uWSGI server using the home/virtualenv option (-H for short):

    $ uwsgi -H myenv -s 127.0.0.1:3031 -M -w envapp
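The command above points uWSGI at a module called envapp, which the quickstart never shows. A minimal sketch of what it might contain, assuming the Flask install from step 2 (the module name and route are illustrative, not from the original documentation):

    # envapp.py -- hypothetical WSGI module for the quickstart above.
    # With -w envapp, uWSGI looks for a callable named "application" by default,
    # so we expose the Flask app object under that name.
    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'Hello from the virtualenv!'

    # Flask application objects are WSGI callables, so this is all uWSGI needs.
    application = app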
12.1.8 Python 3

The WSGI specification was updated for Python 3 as PEP 3333. One major change is that applications may hand only bytes instances, not (Unicode) strings, back to the WSGI stack. You should encode strings or use bytes literals:

    def application(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        yield 'Hello'.encode('utf-8')
        yield b'World\n'

12.1.9 Paste support

If you are a user or developer of Paste-compatible frameworks, such as Pylons and Turbogears, or of applications using them, you can use the uWSGI --paste option to conveniently deploy your application.

For example, if you have a virtualenv in /opt/tg2env containing a Turbogears app called addressbook configured in /opt/tg2env/addressbook/development.ini:

    uwsgi --paste config:/opt/tg2env/addressbook/development.ini --socket :3031 -H /opt/tg2env

That's it! No additional configuration or Python modules to write.

Warning: If you set up multiple processes/workers (master mode) you will receive an error:

    AssertionError: The EvalException middleware is not usable in a multi-process environment

in which case you'll have to set the debug option in your Paste configuration file to False, or revert to a single-process environment.

12.1.10 Pecan support

If you are a user or developer of the Pecan WSGI framework, you can use the uWSGI --pecan option to conveniently deploy your application.

For example, if you have a virtualenv in /opt/pecanenv containing a Pecan app called addressbook configured in /opt/pecanenv/addressbook/development.py:

    uwsgi --pecan /opt/pecanenv/addressbook/development.py --socket :3031 -H /opt/pecanenv

Warning: If you set up multiple processes/workers (master mode) you will receive an error:

    AssertionError: The DebugMiddleware middleware is not usable in a multi-process environment

in which case you'll have to set the debug option in your Pecan configuration file to False, or revert to a single-process environment.

12.1.11 Using the uwsgi_admin Django app

First of all, you need to get the uwsgi_admin app from https://github.com/unbit/uwsgi_django (it was once in the django directory of the distribution).

It plugs into Django's admin app, so if uwsgi_admin is importable, just add it to your INSTALLED_APPS:

    INSTALLED_APPS = (
        # ...
        'django.contrib.admin',
        'uwsgi_admin',
        # ...
    )

Then modify your urls.py accordingly. For example:

    # ...
    url(r'^admin/uwsgi/', include('mysite.uwsgi_admin.urls')),
    url(r'^admin/', include(admin.site.urls)),
    # ...

Be sure to place the URL pattern for uwsgi_admin before the one for the admin site, or it will never match. /admin/uwsgi/ will then serve uWSGI statistics and provide a button for gracefully reloading the server (when running under a master). Note that memory usage is reported only when the memory-report option is enabled.

12.2 The PyPy plugin

12.2.1 Benchmarks for the PyPy plugin

Note: As of November 2013, this benchmark is very outdated. Most of the numbers here have changed for the better with newer PyPy releases.

This is mainly targeted at PyPy developers, to spot slow paths or to fix corner-case bugs. uWSGI stresses a lot of areas of PyPy (most of them rarely used in pure-Python apps), so making these benchmarks is good both for uWSGI and for PyPy.

• Results are rounded for ease of reading. Each test is executed 10 times on an 8-core Intel i7-3615QM CPU @ 2.30GHz.
• The CPython version is 2.7.5; PyPy is the latest tip as of 2013-05-23.
• Tests are run with logging disabled.
• Tests are run without thunder locking.
• The client suite introduces ad-hoc errors and disconnections, so numbers are way lower than what you can get with 'ab' or 'httperf'.

Generally the command lines are:

    uwsgi --http-socket :9090 --wsgi hello --disable-logging
    uwsgi --http-socket :9090 --pypy-home /opt/pypy --pypy-wsgi hello --disable-logging

Simple Hello World

The most useless of the tests (as it shows only how uWSGI performs rather than the chosen Python engine).

    def application(e, sr):
        sr('200 Ok', [('Content-Type', 'text/html')])
        return "ciao"

CPython: 6500 RPS, memory used 7MB (no leak detected)

Syscalls used:

    0.000403 gettimeofday({1369293059, 218207}, NULL) = 0
    0.000405 read(5, "GET / HTTP/1.1\r\nUser-Agent: curl/7.30.0\r\nHost: ubuntu64.local:9090\r\nAccept: */*\r\n\r\n", 4096) = 83
    0.000638 write(5, "HTTP/1.1 200 Ok\r\nContent-Type: text/html\r\n\r\n", 44) = 44
    0.000678 write(5, "ciao", 4) = 4
    0.000528 gettimeofday({1369293059, 220477}, NULL) = 0
    0.000394 close(5)

PyPy: 6560 RPS, memory used 71MB (no leak detected)

Syscalls: no differences from CPython.

Considerations:

• PyPy is only slightly (read: irrelevantly) faster here.
• Memory usage is 10x higher with PyPy. This is caused by the difference in binary size (about 4 MB for libpython, about 50 MB for a stripped libpypy-c). It is important to note that this 10x increase applies only at startup; after the app is loaded, memory allocations are really different. It looks like the PyPy team is working on reducing the binary size too.
CPU bound test (fibonacci)

    def fib(n):
        if n == 0:
            return 0
        if n == 1:
            return 1
        return fib(n-1) + fib(n-2)

    def application(e, sr):
        sr('200 Ok', [('Content-Type', 'text/html')])
        fib(36)
        return "ciao"

This is where PyPy shines.

• CPython: time-to-complete 6400 milliseconds, memory used 65 MB
• PyPy: time-to-complete 900 milliseconds, memory used 71 MB
• The response time here is astonishing; there is no debate about how much better PyPy can be for CPU-intensive (and/or highly recursive) tasks.
• More interesting is how the memory usage of PyPy remains the same as in the simple hello world, while CPython's increases tenfold.
• Syscall usage is again the same.

Werkzeug testapp

You may think this is not very different from the Hello World example, but this specific application actually calls lots of Python functions and inspects the entire WSGI environ dictionary. It is very close to a standard application without I/O.

CPython: 600 RPS, memory usage 13MB

Syscalls:

    0.000363 gettimeofday({1369294531, 360307}, NULL) = 0
    0.000421 read(5, "GET / HTTP/1.1\r\nUser-Agent: curl/7.30.0\r\nHost: ubuntu64.local:9090\r\nAccept: */*\r\n\r\n", 4096) = 83
    0.002046 getcwd("/root/uwsgi", 1024) = 12
    0.000483 stat("/root/uwsgi/.", {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
    0.000602 stat("/usr/local/lib/python2.7/dist-packages/greenlet-0.4.0-py2.7-linux-x86_64.egg", {st_mode=S_IFDIR|S_ISGID|0755, st_size=4096, ...}) = 0
    0.000530 stat("/usr/local/lib/python2.7/dist-packages/gevent-1.0dev-py2.7-linux-x86_64.egg", {st_mode=S_IFDIR|S_ISGID|0755, st_size=4096, ...}) = 0
    0.000506 stat("/usr/lib/python2.7", {st_mode=S_IFDIR|0755, st_size=28672, ...}) = 0
    0.000440 stat("/usr/lib/python2.7/plat-x86_64-linux-gnu", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
    0.000463 stat("/usr/lib/python2.7/lib-tk", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
    0.000624 stat("/usr/lib/python2.7/lib-old", 0x7fffb70da6a0) = -1 ENOENT (No such file or directory)
    0.000434 stat("/usr/lib/python2.7/lib-dynload", {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
    0.000515 stat("/usr/local/lib/python2.7/dist-packages", {st_mode=S_IFDIR|S_ISGID|0775, st_size=4096, ...}) = 0
    0.000569 stat("/usr/lib/python2.7/dist-packages", {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
    0.000387 stat("/usr/lib/python2.7/dist-packages/gtk-2.0", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
    0.000347 stat("/usr/lib/pymodules/python2.7", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
    0.000675 write(5, "HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=utf-8\r\nContent-Length: 7554\r\n\r\n", 81) = 81
    0.000575 write(5, "\nWSGI Information\n