FastAPI + SQLAlchemy: Asynchronous IO and Back Pressure

I wanted an asynchronous SQLAlchemy API but ended up building a synchronous SQLAlchemy API.

4 June 2021

APIs are becoming more and more important. Companies want to share their data with customers, or enable third parties to build applications on top of their APIs.

A few months ago I created an API with Flask, SQLAlchemy, Marshmallow and APISpec; it was not really difficult and works fine. Then I read more about FastAPI, an API framework that also supports Python asyncio out of the box. It is based on Starlette, a lightweight ASGI framework for building high-performance asyncio services.

This post is about why I built a FastAPI synchronous SQLAlchemy demo API application and not an asynchronous SQLAlchemy API. The subject is complex. Below I quote some texts from the FastAPI website and GitHub to avoid telling you the wrong story.

In case you are interested, you can see the demo application here:

API with asynchronous database access

In general, asynchronous requests are not faster than synchronous ones; they do the same thing. A synchronous request blocks the caller: you must wait for the task to complete. An asynchronous request is queued and processed some time later; you do not wait for the result. This gives you the ability to do other things while waiting for task completion.
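This point can be illustrated with a toy sketch in plain asyncio (no web framework involved): each simulated I/O call still takes 0.1 seconds, but awaiting them concurrently overlaps the waits.

```python
import asyncio
import time

async def fake_io():
    await asyncio.sleep(0.1)   # stands in for a network or database call
    return "done"

async def sequential():
    await fake_io()
    await fake_io()            # total is roughly 0.2 s

async def concurrent():
    # Both waits overlap; total is roughly 0.1 s, but each call
    # individually is no faster than before.
    await asyncio.gather(fake_io(), fake_io())

start = time.perf_counter()
asyncio.run(sequential())
t_seq = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(concurrent())
t_conc = time.perf_counter() - start

print(f"sequential {t_seq:.2f}s, concurrent {t_conc:.2f}s")
```

The concurrent version finishes sooner only because the waits overlap; the individual requests take exactly as long as before.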

Asynchronous operation and Back Pressure

I have always been very careful with asynchronous operations. A trivial example is a web page with a button that fires an AJAX call: I allowed a new request only when the previous one had finished. Very conservative, but I just wanted to avoid problems.

Armin Ronacher, creator of the Flask web framework, wrote an interesting article about back pressure, see the links below, please read it. In the synchronous case, access to our database blocks when we run out of database sessions; new requests are blocked until sessions become available again.

In the asynchronous case, all requests are added to a queue, waiting to be processed. When the incoming rate is higher than the rate at which the system can process them, the server will eventually run out of memory and crash. You can prevent this by limiting the maximum number of requests that can be queued. That sounds easy, but it is not.
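A minimal sketch of the difference: an unbounded queue accepts everything and grows without limit, while a bounded queue forces you to reject excess work (the HTTP analogue of the rejection would be a 503 response). The queue size of 3 is an arbitrary assumption for illustration.

```python
import asyncio

async def main():
    queue: asyncio.Queue = asyncio.Queue(maxsize=3)  # hard capacity limit
    accepted, rejected = 0, 0
    for _ in range(10):        # 10 incoming "requests", no consumer running yet
        try:
            queue.put_nowait("request")
            accepted += 1
        except asyncio.QueueFull:
            rejected += 1      # shed load instead of growing memory
    return accepted, rejected

accepted, rejected = asyncio.run(main())
print(f"accepted={accepted} rejected={rejected}")  # accepted=3 rejected=7
```

The hard part in a real system is choosing the limit and deciding what to do with the rejected requests, which is what the quotes below discuss.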

Preventing Back Pressure issues

FastAPI Github issue #857 - @dmontagu wrote:

While it would be better if there was a straightforward way to handle this in the application layer, even if you have very heavy load, in practice you can usually prevent Back Pressure issues by just using a rate-limiting load balancer and a good auto-scaling policy. This won't handle all cases, and won't save you from particularly bad design choices, but for 99% of deployed python applications this would already be overkill.

Starlette Github issue #802 - @tomchristie wrote:

Uvicorn currently allows you to set a --limit-concurrency <int> which hard-limits the maximum number of allowable tasks which may be running before 503's will be returned. In the most constrained case you could just set this based on the number of available database connections. Actually we could be doing better there, tho - it'd be nicer if Uvicorn queued requests beyond that limit, and only responded with a 503 after a timeout period.

The other thing that we could do that'd be smart is defaulting timeout exceptions to 503 responses. So... supposing you've got a resource limit on: acquiring database connections, connection to the cache, sending outgoing HTTP requests, sending emails.

Those kinds of components always ought to have some kind of pool limiting available, and should raise timeouts if the resource cannot be acquired for a period of time. If Starlette is configured to map those exceptions onto 503 responses, then you'll get graceful failure behavior when a server is overloaded.
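As a concrete, hypothetical example of the first suggestion: if your SQLAlchemy pool allows 40 connections (an assumed number, not a recommendation), you could start Uvicorn with a matching concurrency cap, so requests beyond it get a 503 instead of piling up.

```shell
# Cap concurrent tasks at roughly the database pool size (40 is assumed).
uvicorn main:app --limit-concurrency 40
```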

Synchronous vs asynchronous SQLAlchemy

I use SQLAlchemy ORM where possible. SQLAlchemy 1.4 was released recently and has support for Python asyncio. But there are limitations. One is that you cannot use lazy loading, because related objects can expire and accessing them could raise an asyncio exception. A way around this is the function run_sync(), which makes it possible to run synchronous functions under asyncio.

Pydantic's orm_mode

From the FastAPI 'SQL (Relational) Databases' page:

Without orm_mode, if you returned a SQLAlchemy model from your path operation, it wouldn't include the relationship data. Even if you declared those relationships in your Pydantic models. But with ORM mode, as Pydantic itself will try to access the data it needs from attributes (instead of assuming a dict), you can declare the specific data you want to return and it will be able to go and get it, even from ORMs.

This means that if you use a Pydantic model for the response, you also get the relationship data. I do not know whether this will always retrieve the relationship data reliably; I must try this.

I do not use asynchronous IO (yet)

In short, async IO introduces extra complexity and, in my case, performance probably will not improve. Many applications can benefit, but a standard database-driven API does not. Then why still use FastAPI? The main reason for me is that in the near future I may want to add functionality that can benefit from asyncio.

Synchronous SQLAlchemy: one database session object during a request

I am using the method and code described in the FastAPI section 'Alternative DB session with middleware': 'The middleware we'll add (just a function) will create a new SQLAlchemy SessionLocal for each request, add it to the request and then close it once the request is finished.'

The code shown in this section is:

async def db_session_middleware(request: Request, call_next):
    response = Response("Internal server error", status_code=500)
    try:
        request.state.db = SessionLocal()
        response = await call_next(request)
    finally:
        request.state.db.close()
    return response

def get_db(request: Request):
    return request.state.db

All requests are assigned a database session object. If we ever want to add some asynchronous function(s) we can exclude these routes from db_session_middleware().

Dependencies that return database objects

If you are using synchronous SQLAlchemy and use Dependency Injection that returns a database object, you MUST use a database session object that is valid for the whole request. Otherwise, the retrieved object can expire. That sounds obvious, but the FastAPI documentation is not very clear about this. For example, assume we have a function read_items() with a dependency get_current_user():

def get_current_user(
        db: Session = Depends(get_db)):
    # get the User object by the user_id in the JWT
    return db_user

def read_items(
        db_user: schemas.User = Depends(get_current_user),
        db: Session = Depends(get_db)):
    # do something
    items = crud.get_items(db, db_user)
    return items

Assume we do not use the db_session_middleware() function as mentioned above but instead would use:

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

In this case we can have multiple database sessions during a request, and the db_user object can expire in the read_items() function, because the database session is closed after get_current_user() has been processed.

If we were using asynchronous SQLAlchemy, the returned db_user object would be detached from the session and remain available. At least, that is what I would expect.
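The expiry problem described above can be reproduced outside FastAPI with a small standalone script (assuming SQLAlchemy 1.4+ is installed; the User/Item models are hypothetical stand-ins for the application's models). Once the short-lived session is closed, accessing the unloaded lazy relationship fails.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker
from sqlalchemy.orm.exc import DetachedInstanceError

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    items = relationship("Item", back_populates="user")

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    user = relationship("User", back_populates="items")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine)

# Simulate get_current_user() using its own short-lived session.
db = SessionLocal()
db.add(User(name="alice"))
db.commit()
db_user = db.query(User).first()   # 'items' is lazy, not loaded yet
db.close()                         # session gone, like the per-dependency get_db()

# Later, in read_items(), the lazy relationship cannot load any more.
try:
    _ = db_user.items
    expired = False
except DetachedInstanceError:
    expired = True

print("lazy load failed:", expired)  # lazy load failed: True
```

With one session per request (the middleware approach above), db_user would still be attached and the lazy load would simply run a query.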


Summary

I wanted to create an API based on FastAPI, SQLAlchemy and PostgreSQL. The first thing I was confronted with was the question of whether to use synchronous or asynchronous SQLAlchemy. I already knew that asynchronous operations can cause problems if not handled properly. It was informative to read what others have said about asynchronous IO (and FastAPI). To prevent problems I decided to go for synchronous SQLAlchemy first.

Links / credits

Alternative DB session with middleware

Asynchronous I/O (asyncio)

Asynchronous Python and Databases

asyncio support for SQLAlchemy (and Flask, and any other blocking-IO library)

Back pressure? #802

Backpressure explained — the resisted flow of data through software

I'm not feeling the async pressure

