fastapi/docs/en/docs/tutorial/sql-databases.md
Sebastián Ramírez 0976185af9 Add support for Pydantic v2 (#9816)
2023-07-07 19:12:13 +02:00


SQL (Relational) Databases

!!! info
    These docs are about to be updated. 🎉

    The current version assumes Pydantic v1, and SQLAlchemy versions less than 2.0.

    The new docs will include Pydantic v2 and will use <a href="https://sqlmodel.tiangolo.com/" class="external-link" target="_blank">SQLModel</a> (which is also based on SQLAlchemy) once it is updated to use Pydantic v2 as well.

FastAPI doesn't require you to use a SQL (relational) database.

But you can use any relational database that you want.

Here we'll see an example using SQLAlchemy.

You can easily adapt it to any database supported by SQLAlchemy, like:

  • PostgreSQL
  • MySQL
  • SQLite
  • Oracle
  • Microsoft SQL Server, etc.

In this example, we'll use SQLite, because it uses a single file and Python has integrated support. So, you can copy this example and run it as is.

Later, for your production application, you might want to use a database server like PostgreSQL.

!!! tip
    There is an official project generator with FastAPI and PostgreSQL, all based on Docker, including a frontend and more tools: https://github.com/tiangolo/full-stack-fastapi-postgresql

!!! note
    Notice that most of the code is the standard SQLAlchemy code you would use with any framework.

    The **FastAPI** specific code is as small as always.

ORMs

FastAPI works with any database and any style of library to talk to the database.

A common pattern is to use an "ORM": an "object-relational mapping" library.

An ORM has tools to convert ("map") between objects in code and database tables ("relations").

With an ORM, you normally create a class that represents a table in a SQL database, each attribute of the class represents a column, with a name and a type.

For example a class Pet could represent a SQL table pets.

And each instance object of that class represents a row in the database.

For example an object orion_cat (an instance of Pet) could have an attribute orion_cat.type, for the column type. And the value of that attribute could be, e.g. "cat".

These ORMs also have tools to make the connections or relations between tables or entities.

This way, you could also have an attribute orion_cat.owner and the owner would contain the data for this pet's owner, taken from the table owners.

So, orion_cat.owner.name could be the name (from the name column in the owners table) of this pet's owner.

It could have a value like "Arquilian".

And the ORM will do all the work to get the information from the corresponding table owners when you try to access it from your pet object.

Common ORMs are for example: Django-ORM (part of the Django framework), SQLAlchemy ORM (part of SQLAlchemy, independent of framework) and Peewee (independent of framework), among others.

Here we will see how to work with SQLAlchemy ORM.

In a similar way you could use any other ORM.

!!! tip
    There's an equivalent article using Peewee here in the docs.

File structure

For these examples, let's say you have a directory named my_super_project that contains a sub-directory called sql_app with a structure like this:

.
└── sql_app
    ├── __init__.py
    ├── crud.py
    ├── database.py
    ├── main.py
    ├── models.py
    └── schemas.py

The file __init__.py is just an empty file, but it tells Python that sql_app with all its modules (Python files) is a package.

Now let's see what each file/module does.

Install SQLAlchemy

First you need to install SQLAlchemy:

$ pip install sqlalchemy

---> 100%

Create the SQLAlchemy parts

Let's refer to the file sql_app/database.py.

Import the SQLAlchemy parts

{!../../../docs_src/sql_databases/sql_app/database.py!}

Create a database URL for SQLAlchemy

{!../../../docs_src/sql_databases/sql_app/database.py!}

In this example, we are "connecting" to a SQLite database (opening a file with the SQLite database).

The file will be located in the same directory, in the file sql_app.db.

That's why the last part is ./sql_app.db.

If you were using a PostgreSQL database instead, you would just have to uncomment the line:

SQLALCHEMY_DATABASE_URL = "postgresql://user:password@postgresserver/db"

...and adapt it with your database data and credentials (equivalently for MySQL, MariaDB or any other).

!!! tip
    This is the main line that you would have to modify if you wanted to use a different database.
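Concretely, the database URL part of sql_app/database.py looks like this (a sketch; adapt the commented PostgreSQL line to your own server and credentials):

```python
# sql_app/database.py (the relevant lines)

# SQLite database stored in the file ./sql_app.db, next to where you run the app
SQLALCHEMY_DATABASE_URL = "sqlite:///./sql_app.db"

# For PostgreSQL you would use something like this instead,
# adapted to your own server, user, and password:
# SQLALCHEMY_DATABASE_URL = "postgresql://user:password@postgresserver/db"
```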

Create the SQLAlchemy engine

The first step is to create a SQLAlchemy "engine".

We will later use this engine in other places.

{!../../../docs_src/sql_databases/sql_app/database.py!}

!!! note
    The argument:

    `connect_args={"check_same_thread": False}`

    ...is needed only for SQLite. It's not needed for other databases.

!!! info "Technical Details"

    By default SQLite will only allow one thread to communicate with it, assuming that each thread would handle an independent request.

    This is to prevent accidentally sharing the same connection for different things (for different requests).

    But in FastAPI, using normal functions (`def`) more than one thread could interact with the database for the same request, so we need to make SQLite know that it should allow that with `connect_args={"check_same_thread": False}`.

    Also, we will make sure each request gets its own database connection session in a dependency, so there's no need for that default mechanism.

Create a SessionLocal class

Each instance of the SessionLocal class will be a database session. The class itself is not a database session yet.

But once we create an instance of the SessionLocal class, this instance will be the actual database session.

We name it SessionLocal to distinguish it from the Session we are importing from SQLAlchemy.

We will use Session (the one imported from SQLAlchemy) later.

To create the SessionLocal class, use the function sessionmaker:

{!../../../docs_src/sql_databases/sql_app/database.py!}

Create a Base class

Now we will use the function declarative_base() that returns a class.

Later we will inherit from this class to create each of the database models or classes (the ORM models):

{!../../../docs_src/sql_databases/sql_app/database.py!}
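Putting these pieces together, sql_app/database.py amounts to roughly the following sketch. Note two small deviations from the tutorial's own file: `declarative_base` is imported from `sqlalchemy.orm` (available since SQLAlchemy 1.4), and the `autocommit=False` argument to `sessionmaker` is omitted because that parameter was removed in SQLAlchemy 2.0:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

SQLALCHEMY_DATABASE_URL = "sqlite:///./sql_app.db"

# connect_args is needed only for SQLite, to allow using the same
# connection from more than one thread
engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)

# Each instance of SessionLocal will be a database session
SessionLocal = sessionmaker(autoflush=False, bind=engine)

# The ORM models will inherit from this class
Base = declarative_base()
```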

Create the database models

Let's now see the file sql_app/models.py.

Create SQLAlchemy models from the Base class

We will use this Base class we created before to create the SQLAlchemy models.

!!! tip
    SQLAlchemy uses the term "**model**" to refer to these classes and instances that interact with the database.

    But Pydantic also uses the term "**model**" to refer to something different, the data validation, conversion, and documentation classes and instances.

Import Base from database (the file database.py from above).

Create classes that inherit from it.

These classes are the SQLAlchemy models.

{!../../../docs_src/sql_databases/sql_app/models.py!}

The __tablename__ attribute tells SQLAlchemy the name of the table to use in the database for each of these models.

Create model attributes/columns

Now create all the model (class) attributes.

Each of these attributes represents a column in its corresponding database table.

We use Column from SQLAlchemy as the default value.

And we pass a SQLAlchemy class "type", such as Integer, String, and Boolean, that defines the type in the database, as an argument.

{!../../../docs_src/sql_databases/sql_app/models.py!}

Create the relationships

Now create the relationships.

For this, we use relationship provided by SQLAlchemy ORM.

This will become, more or less, a "magic" attribute that will contain the values from other tables related to this one.

{!../../../docs_src/sql_databases/sql_app/models.py!}

When accessing the attribute items in a User, as in my_user.items, it will have a list of Item SQLAlchemy models (from the items table) that have a foreign key pointing to this record in the users table.

When you access my_user.items, SQLAlchemy will actually go and fetch the items from the database in the items table and populate them here.

And when accessing the attribute owner in an Item, it will contain a User SQLAlchemy model from the users table. It will use the owner_id attribute/column with its foreign key to know which record to get from the users table.
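A sketch of sql_app/models.py following the description above (users and items tables, linked through owner_id; Base is defined inline here instead of being imported from database.py, so the snippet runs on its own):

```python
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()  # in the real app, imported from database.py


class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True, index=True)
    email = Column(String, unique=True, index=True)
    hashed_password = Column(String)
    is_active = Column(Boolean, default=True)

    # "magic" attribute: the Item rows whose owner_id points to this user
    items = relationship("Item", back_populates="owner")


class Item(Base):
    __tablename__ = "items"

    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, index=True)
    description = Column(String, index=True)
    owner_id = Column(Integer, ForeignKey("users.id"))

    # the User row this item's owner_id points to
    owner = relationship("User", back_populates="items")
```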

Create the Pydantic models

Now let's check the file sql_app/schemas.py.

!!! tip
    To avoid confusion between the SQLAlchemy models and the Pydantic models, we will have the file models.py with the SQLAlchemy models, and the file schemas.py with the Pydantic models.

    These Pydantic models define more or less a "schema" (a valid data shape).

    So this will help us avoid confusion while using both.

Create initial Pydantic models / schemas

Create ItemBase and UserBase Pydantic models (or let's say "schemas") to have common attributes while creating or reading data.

And create an ItemCreate and UserCreate that inherit from them (so they will have the same attributes), plus any additional data (attributes) needed for creation.

So, the user will also have a password when creating it.

But for security, the password won't be in other Pydantic models, for example, it won't be sent from the API when reading a user.

=== "Python 3.10+"

```Python hl_lines="1  4-6  9-10  21-22  25-26"
{!> ../../../docs_src/sql_databases/sql_app_py310/schemas.py!}
```

=== "Python 3.9+"

```Python hl_lines="3  6-8  11-12  23-24  27-28"
{!> ../../../docs_src/sql_databases/sql_app_py39/schemas.py!}
```

=== "Python 3.6+"

```Python hl_lines="3  6-8  11-12  23-24  27-28"
{!> ../../../docs_src/sql_databases/sql_app/schemas.py!}
```
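A sketch of those base and create schemas (Pydantic v1 style, which is what these docs assume; `Optional` is used so it also runs on Python versions before 3.10):

```python
from typing import Optional

from pydantic import BaseModel


class ItemBase(BaseModel):
    title: str
    description: Optional[str] = None


class ItemCreate(ItemBase):
    pass


class UserBase(BaseModel):
    email: str


class UserCreate(UserBase):
    # only present when creating a user, never returned by the API
    password: str
```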

SQLAlchemy style and Pydantic style

Notice that SQLAlchemy models define attributes using =, and pass the type as a parameter to Column, like in:

name = Column(String)

while Pydantic models declare the types using :, the new type annotation syntax/type hints:

name: str

Keep this in mind, so you don't get confused when using = and : with them.

Create Pydantic models / schemas for reading / returning

Now create Pydantic models (schemas) that will be used when reading data, when returning it from the API.

For example, before creating an item, we don't know what will be the ID assigned to it, but when reading it (when returning it from the API) we will already know its ID.

The same way, when reading a user, we can now declare that items will contain the items that belong to this user.

Not only the IDs of those items, but all the data that we defined in the Pydantic model for reading items: Item.

=== "Python 3.10+"

```Python hl_lines="13-15  29-32"
{!> ../../../docs_src/sql_databases/sql_app_py310/schemas.py!}
```

=== "Python 3.9+"

```Python hl_lines="15-17  31-34"
{!> ../../../docs_src/sql_databases/sql_app_py39/schemas.py!}
```

=== "Python 3.6+"

```Python hl_lines="15-17  31-34"
{!> ../../../docs_src/sql_databases/sql_app/schemas.py!}
```

!!! tip
    Notice that the User, the Pydantic model that will be used when reading a user (returning it from the API), doesn't include the password.

Use Pydantic's orm_mode

Now, in the Pydantic models for reading, Item and User, add an internal Config class.

This Config class is used to provide configurations to Pydantic.

In the Config class, set the attribute orm_mode = True.

=== "Python 3.10+"

```Python hl_lines="13  17-18  29  34-35"
{!> ../../../docs_src/sql_databases/sql_app_py310/schemas.py!}
```

=== "Python 3.9+"

```Python hl_lines="15  19-20  31  36-37"
{!> ../../../docs_src/sql_databases/sql_app_py39/schemas.py!}
```

=== "Python 3.6+"

```Python hl_lines="15  19-20  31  36-37"
{!> ../../../docs_src/sql_databases/sql_app/schemas.py!}
```

!!! tip
    Notice it's assigning a value with =, like:

    `orm_mode = True`

    It doesn't use `:` as for the type declarations before.

    This is setting a config value, not declaring a type.

Pydantic's orm_mode will tell the Pydantic model to read the data even if it is not a dict, but an ORM model (or any other arbitrary object with attributes).

This way, instead of only trying to get the id value from a dict, as in:

id = data["id"]

it will also try to get it from an attribute, as in:

id = data.id

And with this, the Pydantic model is compatible with ORMs, and you can just declare it in the response_model argument in your path operations.

You will be able to return a database model and it will read the data from it.

Technical Details about ORM mode

SQLAlchemy and many others are by default "lazy loading".

That means, for example, that they don't fetch the data for relationships from the database unless you try to access the attribute that would contain that data.

For example, accessing the attribute items:

current_user.items

would make SQLAlchemy go to the items table and get the items for this user, but not before.

Without orm_mode, if you returned a SQLAlchemy model from your path operation, it wouldn't include the relationship data.

Even if you declared those relationships in your Pydantic models.

But with ORM mode, as Pydantic itself will try to access the data it needs from attributes (instead of assuming a dict), you can declare the specific data you want to return and it will be able to go and get it, even from ORMs.

CRUD utils

Now let's see the file sql_app/crud.py.

In this file we will have reusable functions to interact with the data in the database.

CRUD comes from: Create, Read, Update, and Delete.

...although in this example we are only creating and reading.

Read data

Import Session from sqlalchemy.orm, this will allow you to declare the type of the db parameters and have better type checks and completion in your functions.

Import models (the SQLAlchemy models) and schemas (the Pydantic models / schemas).

Create utility functions to:

  • Read a single user by ID and by email.
  • Read multiple users.
  • Read multiple items.
{!../../../docs_src/sql_databases/sql_app/crud.py!}

!!! tip
    By creating functions that are only dedicated to interacting with the database (get a user or an item) independent of your path operation function, you can more easily reuse them in multiple parts and also add unit tests for them.
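A self-contained sketch of those read functions (the model and Base are defined inline here; in the real sql_app/crud.py they would be imported from models.py and database.py):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()  # in the real app, imported from database.py


class User(Base):  # in the real app, imported from models.py
    __tablename__ = "users"

    id = Column(Integer, primary_key=True, index=True)
    email = Column(String, unique=True, index=True)


def get_user(db: Session, user_id: int):
    # Read a single user by ID
    return db.query(User).filter(User.id == user_id).first()


def get_user_by_email(db: Session, email: str):
    # Read a single user by email
    return db.query(User).filter(User.email == email).first()


def get_users(db: Session, skip: int = 0, limit: int = 100):
    # Read multiple users, with pagination
    return db.query(User).offset(skip).limit(limit).all()
```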

Create data

Now create utility functions to create data.

The steps are:

  • Create a SQLAlchemy model instance with your data.
  • `add` that instance object to your database session.
  • `commit` the changes to the database (so that they are saved).
  • `refresh` your instance (so that it contains any new data from the database, like the generated ID).
{!../../../docs_src/sql_databases/sql_app/crud.py!}

!!! tip
    The SQLAlchemy model for User contains a hashed_password that should contain a secure hashed version of the password.

    But as what the API client provides is the original password, you need to extract it and generate the hashed password in your application.

    And then pass the `hashed_password` argument with the value to save.

!!! warning
    This example is not secure, the password is not hashed.

    In a real life application you would need to hash the password and never save it in plaintext.

    For more details, go back to the Security section in the tutorial.

    Here we are focusing only on the tools and mechanics of databases.

!!! tip
    Instead of passing each of the keyword arguments to Item and reading each one of them from the Pydantic model, we are generating a dict with the Pydantic model's data with:

    `item.dict()`

    and then we are passing the `dict`'s key-value pairs as the keyword arguments to the SQLAlchemy `Item`, with:

    `Item(**item.dict())`

    And then we pass the extra keyword argument `owner_id` that is not provided by the Pydantic *model*, with:

    `Item(**item.dict(), owner_id=user_id)`
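The dict-unpacking mechanic can be shown in isolation with a plain stand-in class instead of the SQLAlchemy model (FakeDBItem is a hypothetical name, just for illustration; .dict() is the Pydantic v1 spelling):

```python
from typing import Optional

from pydantic import BaseModel


class ItemCreate(BaseModel):
    title: str
    description: Optional[str] = None


class FakeDBItem:  # stand-in for the SQLAlchemy Item model
    def __init__(self, title: str, description: Optional[str], owner_id: int):
        self.title = title
        self.description = description
        self.owner_id = owner_id


item = ItemCreate(title="Wand", description="Elder wood")

# The Pydantic model's data as a dict...
data = item.dict()

# ...unpacked as keyword arguments, plus the extra owner_id
db_item = FakeDBItem(**data, owner_id=42)
```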

Main FastAPI app

And now in the file sql_app/main.py let's integrate and use all the other parts we created before.

Create the database tables

In a very simplistic way create the database tables:

=== "Python 3.9+"

```Python hl_lines="7"
{!> ../../../docs_src/sql_databases/sql_app_py39/main.py!}
```

=== "Python 3.6+"

```Python hl_lines="9"
{!> ../../../docs_src/sql_databases/sql_app/main.py!}
```
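What that single highlighted line does can be checked in isolation: `create_all()` emits the `CREATE TABLE` statements for every model registered on the `Base`. A sketch with a minimal model and an in-memory SQLite engine:

```Python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String)

engine = create_engine("sqlite://")

# Before create_all, the table doesn't exist; after, it does
assert not inspect(engine).has_table("users")
Base.metadata.create_all(bind=engine)
print(inspect(engine).has_table("users"))
```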

#### Alembic Note

Normally you would probably initialize your database (create tables, etc) with Alembic.

And you would also use Alembic for "migrations" (that's its main job).

A "migration" is the set of steps needed whenever you change the structure of your SQLAlchemy models (add a new attribute, etc.), to replicate those changes in the database (add a new column, a new table, etc.).

You can find an example of Alembic in a FastAPI project in the templates from Project Generation - Template{.internal-link target=_blank}. Specifically in the alembic directory in the source code.

### Create a dependency

Now use the `SessionLocal` class we created in the `sql_app/database.py` file to create a dependency.

We need to have an independent database session/connection (`SessionLocal`) per request, use the same session throughout the request, and then close it after the request is finished.

And then a new session will be created for the next request.

For that, we will create a new dependency with yield, as explained before in the section about Dependencies with yield{.internal-link target=_blank}.

Our dependency will create a new SQLAlchemy SessionLocal that will be used in a single request, and then close it once the request is finished.

=== "Python 3.9+"

```Python hl_lines="13-18"
{!> ../../../docs_src/sql_databases/sql_app_py39/main.py!}
```

=== "Python 3.6+"

```Python hl_lines="15-20"
{!> ../../../docs_src/sql_databases/sql_app/main.py!}
```

!!! info
    We put the creation of the `SessionLocal()` and handling of the requests in a `try` block.

    And then we close it in the `finally` block.

    This way we make sure the database session is always closed after the request. Even if there was an exception while processing the request.

    But you can't raise another exception from the exit code (after `yield`). See more in [Dependencies with `yield` and `HTTPException`](./dependencies/dependencies-with-yield.md#dependencies-with-yield-and-httpexception){.internal-link target=_blank}
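The `try`/`finally` behavior of a dependency with `yield` can be seen with plain Python generators, which is how FastAPI drives it under the hood. A standalone sketch, where `FakeSession` stands in for `SessionLocal`:

```Python
class FakeSession:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def get_db():
    db = FakeSession()
    try:
        yield db
    finally:
        db.close()  # runs even if handling the request raised an exception

gen = get_db()
db = next(gen)  # FastAPI injects this value as the dependency
# ...the path operation function runs here...
gen.close()     # when the request is done, FastAPI finishes the generator
print(db.closed)
```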

And then, when using the dependency in a path operation function, we declare it with the type Session we imported directly from SQLAlchemy.

This will then give us better editor support inside the path operation function, because the editor will know that the db parameter is of type Session:

=== "Python 3.9+"

```Python hl_lines="22  30  36  45  51"
{!> ../../../docs_src/sql_databases/sql_app_py39/main.py!}
```

=== "Python 3.6+"

```Python hl_lines="24  32  38  47  53"
{!> ../../../docs_src/sql_databases/sql_app/main.py!}
```

!!! info "Technical Details"
    The parameter `db` is actually of type `SessionLocal`, but this class (created with `sessionmaker()`) is a "proxy" of a SQLAlchemy `Session`, so, the editor doesn't really know what methods are provided.

    But by declaring the type as `Session`, the editor now can know the available methods (`.add()`, `.query()`, `.commit()`, etc) and can provide better support (like completion). The type declaration doesn't affect the actual object.

### Create your **FastAPI** *path operations*

Now, finally, here's the standard FastAPI path operations code.

=== "Python 3.9+"

```Python hl_lines="21-26  29-32  35-40  43-47  50-53"
{!> ../../../docs_src/sql_databases/sql_app_py39/main.py!}
```

=== "Python 3.6+"

```Python hl_lines="23-28  31-34  37-42  45-49  52-55"
{!> ../../../docs_src/sql_databases/sql_app/main.py!}
```

We are creating the database session before each request in the dependency with yield, and then closing it afterwards.

And then we can create the required dependency in the path operation function, to get that session directly.

With that, we can just call crud.get_user directly from inside of the path operation function and use that session.

!!! tip
    Notice that the values you return are SQLAlchemy models, or lists of SQLAlchemy models.

    But as all the *path operations* have a `response_model` with Pydantic *models* / schemas using `orm_mode`, the data declared in your Pydantic models will be extracted from them and returned to the client, with all the normal filtering and validation.

!!! tip
    Also notice that there are `response_model`s that have standard Python types like `List[schemas.Item]`.

    But as the content/parameter of that `List` is a Pydantic *model* with `orm_mode`, the data will be retrieved and returned to the client as normally, without problems.

### About `def` vs `async def`

Here we are using SQLAlchemy code inside of the path operation function and in the dependency, and, in turn, it will go and communicate with an external database.

That could potentially require some "waiting".

But as SQLAlchemy doesn't have compatibility for using `await` directly, as would be with something like:

```Python
user = await db.query(User).first()
```

...and instead we are using:

```Python
user = db.query(User).first()
```

Then we should declare the *path operation functions* and the dependency without `async def`, just with a normal `def`, as:

```Python
@app.get("/users/{user_id}", response_model=schemas.User)
def read_user(user_id: int, db: Session = Depends(get_db)):
    db_user = crud.get_user(db, user_id=user_id)
    ...
```

!!! info
    If you need to connect to your relational database asynchronously, see Async SQL (Relational) Databases{.internal-link target=_blank}.

!!! note "Very Technical Details"
    If you are curious and have a deep technical knowledge, you can check the very technical details of how this `async def` vs `def` is handled in the Async{.internal-link target=_blank} docs.

## Migrations

Because we are using SQLAlchemy directly and we don't require any kind of plug-in for it to work with FastAPI, we could integrate database migrations with Alembic directly.

And as the code related to SQLAlchemy and the SQLAlchemy models lives in separate independent files, you would even be able to perform the migrations with Alembic without having to install FastAPI, Pydantic, or anything else.

The same way, you would be able to use the same SQLAlchemy models and utilities in other parts of your code that are not related to FastAPI.

For example, in a background task worker with Celery, RQ, or ARQ.

## Review all the files

Remember you should have a directory named `my_super_project` that contains a sub-directory called `sql_app`.

`sql_app` should have the following files:

* `sql_app/__init__.py`: is an empty file.

* `sql_app/database.py`:

```Python
{!../../../docs_src/sql_databases/sql_app/database.py!}
```

* `sql_app/models.py`:

```Python
{!../../../docs_src/sql_databases/sql_app/models.py!}
```
* `sql_app/schemas.py`:

=== "Python 3.10+"

```Python
{!> ../../../docs_src/sql_databases/sql_app_py310/schemas.py!}
```

=== "Python 3.9+"

```Python
{!> ../../../docs_src/sql_databases/sql_app_py39/schemas.py!}
```

=== "Python 3.6+"

```Python
{!> ../../../docs_src/sql_databases/sql_app/schemas.py!}
```
* `sql_app/crud.py`:

```Python
{!../../../docs_src/sql_databases/sql_app/crud.py!}
```
* `sql_app/main.py`:

=== "Python 3.9+"

```Python
{!> ../../../docs_src/sql_databases/sql_app_py39/main.py!}
```

=== "Python 3.6+"

```Python
{!> ../../../docs_src/sql_databases/sql_app/main.py!}
```

## Check it

You can copy this code and use it as is.

!!! info
    In fact, the code shown here is part of the tests, as is most of the code in these docs.

Then you can run it with Uvicorn:

<div class="termy">

```console
$ uvicorn sql_app.main:app --reload

<span style="color: green;">INFO</span>:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```

</div>

And then, you can open your browser at http://127.0.0.1:8000/docs.

And you will be able to interact with your FastAPI application, reading data from a real database:

## Interact with the database directly

If you want to explore the SQLite database (file) directly, independently of FastAPI, to debug its contents, add tables, columns, records, modify data, etc. you can use DB Browser for SQLite.

It will look like this:

You can also use an online SQLite browser like SQLite Viewer or ExtendsClass.

## Alternative DB session with middleware

If you can't use dependencies with yield -- for example, if you are not using Python 3.7 and can't install the "backports" mentioned above for Python 3.6 -- you can set up the session in a "middleware" in a similar way.

A "middleware" is basically a function that is always executed for each request, with some code executed before, and some code executed after the endpoint function.

### Create a middleware

The middleware we'll add (just a function) will create a new SQLAlchemy SessionLocal for each request, add it to the request and then close it once the request is finished.

=== "Python 3.9+"

```Python hl_lines="12-20"
{!> ../../../docs_src/sql_databases/sql_app_py39/alt_main.py!}
```

=== "Python 3.6+"

```Python hl_lines="14-22"
{!> ../../../docs_src/sql_databases/sql_app/alt_main.py!}
```

!!! info
    We put the creation of the `SessionLocal()` and handling of the requests in a `try` block.

    And then we close it in the `finally` block.

    This way we make sure the database session is always closed after the request. Even if there was an exception while processing the request.

### About `request.state`

`request.state` is a property of each `Request` object. It is there to store arbitrary objects attached to the request itself, like the database session in this case. You can read more about it in Starlette's docs about `Request` state.

For us in this case, it helps us ensure a single database session is used through all the request, and then closed afterwards (in the middleware).

### Dependencies with `yield` or middleware

Adding a middleware here is similar to what a dependency with yield does, with some differences:

* It requires more code and is a bit more complex.
* The middleware has to be an `async` function.
    * If there is code in it that has to "wait" for the network, it could "block" your application there and degrade performance a bit.
    * Although it's probably not very problematic here with the way SQLAlchemy works.
    * But if you added more code to the middleware that had a lot of I/O waiting, it could then be problematic.
* A middleware is run for every request.
    * So, a connection will be created for every request.
    * Even when the *path operation* that handles that request didn't need the DB.

!!! tip
    It's probably better to use dependencies with `yield` when they are enough for the use case.

!!! info
    Dependencies with `yield` were added recently to **FastAPI**.

    A previous version of this tutorial only had the examples with a middleware and there are probably several applications using the middleware for database session management.