https://pyloong.github.io/pythonic-project-guidelines/practices/web/#5

Setup Workflow

Development Environment

FastAPI

FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.8+ based on standard Python type hints.

FastAPI Documentation

Poetry

Poetry is a tool for dependency management and packaging in Python. It allows you to declare the libraries your project depends on and it will manage (install/update) them for you. Poetry offers a lockfile to ensure repeatable installs, and can build your project for distribution.

Poetry Documentation

Click

Click is a Python package for creating beautiful command line interfaces in a composable way with as little code as necessary. It’s the “Command Line Interface Creation Kit”. It’s highly configurable but comes with sensible defaults out of the box.

It aims to make the process of writing command line tools quick and fun while also preventing any frustration caused by the inability to implement an intended CLI API.

Click in three points:

  • arbitrary nesting of commands
  • automatic help page generation
  • supports lazy loading of subcommands at runtime

Click Documentation

Dynaconf

Dynaconf

  • Inspired by the 12-factor application guide
  • Settings management (default values, validation, parsing, templating)
  • Protection of sensitive information (passwords/tokens)
  • Multiple file formats toml|yaml|json|ini|py and also customizable loaders.
  • Full support for environment variables to override existing settings (dotenv support included).
  • Optional layered system for multi environments [default, development, testing, production] (also called multi profiles)
  • Built-in support for Hashicorp Vault and Redis as settings and secrets storage.
  • Built-in extensions for Django and Flask web frameworks.
  • CLI for common operations such as init, list, write, validate, export.

SQLAlchemy

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL.

It provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.

SQLAlchemy Documentation

Pydantic

Pydantic

Pydantic is the most widely used data validation library for Python.

Fast and extensible, Pydantic plays nicely with your linters/IDE/brain. Define how data should be in pure, canonical Python 3.8+; validate it with Pydantic.

FastAPI

FastAPI

FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.8+ based on standard Python type hints.

The key features are:

  • Fast: Very high performance, on par with NodeJS and Go (thanks to Starlette and Pydantic). One of the fastest Python frameworks available.
  • Fast to code: Increase the speed to develop features by about 200% to 300%. *
  • Fewer bugs: Reduce about 40% of human (developer) induced errors. *
  • Intuitive: Great editor support. Completion everywhere. Less time debugging.
  • Easy: Designed to be easy to use and learn. Less time reading docs.
  • Short: Minimize code duplication. Multiple features from each parameter declaration. Fewer bugs.
  • Robust: Get production-ready code. With automatic interactive documentation.
  • Standards-based: Based on (and fully compatible with) the open standards for APIs: OpenAPI (previously known as Swagger) and JSON Schema.

Uvicorn

Uvicorn

Uvicorn is an ASGI web server implementation for Python.

Until recently Python has lacked a minimal low-level server/application interface for async frameworks. The ASGI specification fills this gap, and means we’re now able to start building a common set of tooling usable across all async frameworks.

Uvicorn currently supports HTTP/1.1 and WebSockets.

Alembic

Alembic is a lightweight database migration tool for usage with the SQLAlchemy Database Toolkit for Python.

Pytest

Pytest Documentation

The pytest framework makes it easy to write small, readable tests, and can scale to support complex functional testing for applications and libraries.

Project Initialization

Project Structure

The project uses the src directory layout; see pypa/sampleproject.

.
├── README.md
├── src
│   └── example_blog
│       └── __init__.py
└── tests
    └── __init__.py

Technically, you can also create Python packages without an __init__.py file, but those are called namespace packages and considered an advanced topic (not covered in this tutorial). If you are only getting started with Python packaging, it is recommended to stick with regular packages and __init__.py (even if the file is empty).

Initialize the project's virtual environment:

poetry init

Follow the interactive prompts and fill in the requested information. When it finishes, a pyproject.toml file is generated in the project directory.

Project Metadata

Edit the pyproject.toml file to configure the project's description:

[tool.poetry]
name = "example_blog"
version = "0.1.0"
description = "This is example blog system."
authors = ["huagang517 <huagang517@126.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.10"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

Every dependency added later will also appear in this file.

Project README

Write the README.md file:

# A simple blog system example

This project is a simple blog system that provides basic user management and blog article management. Its purpose is to demonstrate how to build a more Pythonic project.

If you have any comments or suggestions, feel free to open an ISSUE for discussion. We look forward to building a better Python example together.

## Collaborative development

- Fork the repository
- Write code, test, and commit
- Open a PR
- After review, the PR is merged and the collaboration is complete

Configure .gitignore:

# Created by .ignore support plugin (hsz.mobi)
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

### Windows template
# Windows thumbnail cache files
Thumbs.db
Thumbs.db:encryptable
ehthumbs.db
ehthumbs_vista.db

# Dump file
*.stackdump

# Folder config file
[Dd]esktop.ini

# Recycle Bin used on file shares
$RECYCLE.BIN/

# Windows Installer files
*.cab
*.msi
*.msix
*.msm
*.msp

# Windows shortcuts
*.lnk

### Linux template
*~

# temporary files which can be created if a process still has a handle open of a deleted file
.fuse_hidden*

# KDE directory preferences
.directory

# Linux trash folder which might appear on any partition or disk
.Trash-*

# .nfs files are created when an open file is removed but is still being accessed
.nfs*

### macOS template
# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon

# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk

.vscode
.idea

Install the development dependencies:

poetry install

Initial Git Commit

git init
git config user.name example
git config user.email example@example.com
git add .
git commit -m "feat: First commit!"

Project Feature Development

Command-Line Entry Point

The command-line entry point is the main entry for starting the project. A common approach is a __main__ module that calls the start-up code and is run with the python command, but this gets awkward once multi-level commands and arguments are involved, so it is recommended to write the entry logic with click.

Install the dependency:

poetry add click

Create the src/example_blog/cmdline.py file:

import click

from example_blog import __version__  # assumes __version__ is defined in src/example_blog/__init__.py


@click.group(invoke_without_command=True)
@click.pass_context
@click.option('-V', '--version', is_flag=True, help='Show version and exit.')
def main(ctx, version):
    if version:
        click.echo(__version__)
    elif ctx.invoked_subcommand is None:
        click.echo(ctx.get_help())

A decorator is used here to declare a command group. The invoke_without_command=True argument means the group function runs even when no subcommand is given, so the command can act both as a parent group that dispatches subcommands and as a standalone command.

The @click.pass_context decorator lets the function receive Click's context object as its first argument. The context object carries state about the command-line invocation, such as whether a subcommand was invoked.

The @click.option() decorator adds an option to the main command. It can be triggered with -V or --version; is_flag=True marks it as a boolean flag that takes no value. When the flag is set, it changes the command's behaviour.

def main(ctx, version) is the group's main function. It receives the context object ctx and the value of the version option. Subcommands are attached to this group later; a sketch follows below.

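To illustrate how subcommands attach to this group before the real ones are added later, here is a small hypothetical sketch (the hello command is not part of the project):

# Hypothetical subcommand attached to the group above, for illustration only.
@main.command()
@click.argument('name')
def hello(name):
    """Say hello."""
    click.echo(f'Hello, {name}!')

# `example_blog hello world` would then print "Hello, world!",
# while `example_blog -V` still prints the version via the group function.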

Edit pyproject.toml to register the command-line entry point in the project description file:

[tool.poetry.scripts]
example_blog = "example_blog.cmdline:main"

Commit the code:

git add .
git commit -m "feat: Add cmdline."

Project Configuration System

The configuration system is the core driver of a project. It keeps otherwise scattered configuration parameters in one place and makes it easy to change the system's behaviour by adjusting configuration before start-up.

Dynaconf is a highly flexible configuration management tool that supports layered multi-environment setups and loading configuration from many sources.

Install the dependency:

poetry add dynaconf

Check pyproject.toml; the new dependency has been added:

[tool.poetry.dependencies]
click = "^8.1.3"
dynaconf = "^3.1.11"

Create the configuration package and configuration file:

mkdir src/example_blog/config
touch src/example_blog/config/__init__.py
touch src/example_blog/config/settings.yml

Edit src/example_blog/config/__init__.py to initialize the global settings object:

import os
import sys
from pathlib import Path

from dynaconf import Dynaconf

_BASE_DIR = Path(__file__).parent.parent

settings_files = [
    Path(__file__).parent / 'settings.yml',
]  # Load the default settings file from an absolute path

settings = Dynaconf(
    envvar_prefix="EXAMPLE_BLOG",  # Environment variable prefix: set `EXAMPLE_BLOG_FOO='bar'`, read it as `settings.FOO`
    settings_files=settings_files,
    environments=False,  # Multi-environment layering (development, production, ...) is disabled here
    load_dotenv=True,  # Load .env files
    env_switcher="EXAMPLE_BLOG_ENV",  # Environment variable used to switch environments, e.g. EXAMPLE_BLOG_ENV=production
    lowercase_read=False,  # Disable lowercase access; `settings.name` is not allowed
    includes=[os.path.join(sys.prefix, 'etc', 'example_blog', 'settings.yml')],  # Custom settings that override the defaults
    base_dir=_BASE_DIR,  # Extra value injected into the settings from code
)

Path handling in Python needs special care, and the code above is an example of doing it explicitly: pathlib is used to locate the default settings file relative to the package itself rather than relying on the current working directory.

Edit src/example_blog/config/settings.yml to initialize the configuration:

LOG_LEVEL: INFO

Edit src/example_blog/config/settings.local.yml to add local development configuration:

LOG_LEVEL: DEBUG

By Dynaconf's rules, settings.local.yml is a local settings file: it is loaded after settings.yml and therefore overrides the earlier values (a quick check of the override order is sketched below).
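Environment variables with the EXAMPLE_BLOG_ prefix override both files. A minimal sketch, assuming the settings files above are in place:

# Sketch of Dynaconf's override order: env var > settings.local.yml > settings.yml.
import os

os.environ['EXAMPLE_BLOG_LOG_LEVEL'] = 'WARNING'  # hypothetical override, set before import

from example_blog.config import settings

print(settings.LOG_LEVEL)  # WARNING, beating DEBUG (local) and INFO (default)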

Edit .gitignore to keep all local configuration files out of version control:

**/settings.local.yml

Commit the code:

git add .
git commit -m "feat: Add config."

Adding Logging

Create src/example_blog/log.py and initialize logging:

from logging.config import dictConfig

from example_blog.config import settings


def init_log():
    log_config = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'sample': {'format': '%(asctime)s %(levelname)s %(message)s'},
            'verbose': {'format': '%(asctime)s %(levelname)s %(name)s %(process)d %(thread)d %(message)s'},
            "access": {
                "()": "uvicorn.logging.AccessFormatter",
                "fmt": '%(asctime)s %(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s',
            },
        },
        'handlers': {
            "console": {
                "formatter": 'verbose',
                'level': 'DEBUG',
                "class": "logging.StreamHandler",
            },
        },
        'loggers': {
            '': {'level': settings.LOG_LEVEL, 'handlers': ['console']},
        },
    }

    dictConfig(log_config)

Commit the code:

git add .
git commit -m "feat: Add log"

Explanation of the logging module:

Imports

  1. from logging.config import dictConfig:
    • Imports the dictConfig function from the logging.config module, which configures the logging system from a dictionary.
  2. from example_blog.config import settings:
    • Imports the settings object from example_blog.config; here it is mainly used for the log level (settings.LOG_LEVEL).

The initialization function

  • def init_log():
    • Defines the init_log function, which applies the logging configuration.

The configuration dictionary

  • The logging configuration is given as a dictionary named log_config, whose keys control different aspects of the logging system:
    1. 'version': 1:
      • The version of the configuration schema; only version 1 is currently defined.
    2. 'disable_existing_loggers': False:
      • Loggers that existed before this configuration was applied are not disabled.
    3. 'formatters':
      • Defines how log records are formatted. Three formatters are declared:
        • 'sample': a basic format with timestamp, level and message.
        • 'verbose': a more detailed format with timestamp, level, logger name, process ID, thread ID and message.
        • 'access': a web-access format using the uvicorn.logging.AccessFormatter class, with timestamp, client address, request line and status code.
    4. 'handlers':
      • Defines the log handlers:
        • 'console': writes to the console using the 'verbose' formatter at level 'DEBUG'.
    5. 'loggers':
      • Configures the loggers:
        • The empty-string key '' is the root logger, configured with the level from settings.LOG_LEVEL and the 'console' handler.

Applying the configuration

  • dictConfig(log_config):
    • Applies the log_config dictionary defined above with dictConfig, configuring the logging system.
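Once init_log() has been called, any module can log through the root logger configured above. A minimal sketch:

# Minimal usage sketch of the logging setup above.
import logging

from example_blog.log import init_log

init_log()
logger = logging.getLogger(__name__)
logger.info('service starting')   # shown when LOG_LEVEL is INFO or lower
logger.debug('verbose details')   # shown only when LOG_LEVEL is DEBUG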

Data Access

The data layer is the lowest layer of the application and talks to the database. sqlalchemy is used for the underlying data modelling and data access.

Install the dependencies:

poetry add sqlalchemy mysqlclient

Edit src/example_blog/config/settings.yml to add the database configuration:

# ####################################################
# # https://docs.sqlalchemy.org/en/13/core/engines.html
DATABASE:
  DRIVER: mysql
  NAME: example_blog
  HOST: 127.0.0.1
  PORT: 3306
  USERNAME: root
  PASSWORD: root
  QUERY:
    charset: utf8mb4

settings.yml is the default configuration and is tracked by git, so do not put real database connection details in it. Real connection details can go in settings.local.yml, which overrides the defaults (see the sketch below).
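For local development you could, for example, point the project at SQLite so no MySQL server is needed. A hypothetical settings.local.yml (the keys mirror the DATABASE block above; SQLite ignores host, port and credentials):

LOG_LEVEL: DEBUG

DATABASE:
  DRIVER: sqlite
  NAME: example_blog.db
  HOST:
  PORT:
  USERNAME:
  PASSWORD:
  QUERY: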

Create src/example_blog/db.py with the sqlalchemy access objects:

"""Database connections"""

from sqlalchemy.engine import create_engine
from sqlalchemy.engine.base import Engine
from sqlalchemy.engine.url import URL
from sqlalchemy.orm import scoped_session, sessionmaker

from example_blog.config import settings

# Build the database URL from the connection settings in the config file:
# driver, username, password, host, port, database name and extra query parameters.
url = URL(
    drivername=settings.DATABASE.DRIVER,
    username=settings.DATABASE.get('USERNAME', None),
    password=settings.DATABASE.get('PASSWORD', None),
    host=settings.DATABASE.get('HOST', None),
    port=settings.DATABASE.get('PORT', None),
    database=settings.DATABASE.get('NAME', None),
    query=settings.DATABASE.get('QUERY', None),
)

# Create the engine object from the URL.
# echo=True logs every SQL statement the engine executes to the console.
engine: Engine = create_engine(url, echo=True)

# The session factory creates sessions and configures their behaviour.
SessionFactory = sessionmaker(bind=engine, autocommit=False, autoflush=True)

# A thread-local (scoped) session that tracks the current thread,
# ensuring the same session object is used within one thread.
ScopedSession = scoped_session(SessionFactory)

Create src/example_blog/models.py with the data models:

"""Models"""

from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base, declared_attr


class CustomBase:
    """https://docs.sqlalchemy.org/en/13/orm/extensions/declarative/mixins.html"""

    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    __table_args__ = {
        'mysql_engine': 'InnoDB',
        'mysql_collate': 'utf8mb4_general_ci'
    }

    id = Column(Integer, primary_key=True, autoincrement=True)


BaseModel = declarative_base(cls=CustomBase)


class Article(BaseModel):
    """Article table"""
    title = Column(String(500))
    body = Column(Text(), nullable=True)
    create_time = Column(DateTime, default=datetime.now, nullable=False)
    update_time = Column(DateTime, default=datetime.now, onupdate=datetime.now, nullable=False)

This module also defines the structure of the tables in the database.
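If you want to try the models out before the migration tooling is introduced later, the tables can be created directly from the SQLAlchemy metadata. A throwaway sketch (the project itself manages the schema with Alembic below):

# Quick local experiment: create all declared tables from the metadata.
from example_blog.db import engine
from example_blog.models import BaseModel

BaseModel.metadata.create_all(engine)  # issues CREATE TABLE for every model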

To make the data model objects easier to work with in the application, pydantic is introduced to define schema models for them.

poetry add pydantic

Create src/example_blog/schemas.py, which defines the Pydantic models used to validate and serialize article data, with separate models for creating, updating and retrieving articles.

Create the schema models:

from datetime import datetime
from typing import Optional, TypeVar

from pydantic import BaseModel, constr

from example_blog.models import BaseModel as DBModel

ModelType = TypeVar('ModelType', bound=DBModel)
CreateSchema = TypeVar('CreateSchema', bound=BaseModel)
UpdateSchema = TypeVar('UpdateSchema', bound=BaseModel)


# Represents a record stored in the database
class InDBMixin(BaseModel):
    id: int

    class Config:
        orm_mode = True


# Base attributes shared by all article schemas
class BaseArticle(BaseModel):
    title: constr(max_length=500)
    body: Optional[str] = None


# Article as retrieved from the database
class ArticleSchema(BaseArticle, InDBMixin):
    create_time: datetime
    update_time: datetime


# Schema used to create an article
class CreateArticleSchema(BaseArticle):
    pass


# Schema used to update an article
class UpdateArticleSchema(BaseArticle):
    title: Optional[constr(max_length=500)] = None

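A short sketch of how these schemas behave (pydantic v1 API, matching the orm_mode setting above):

# Sketch: validating input with the schemas defined above.
from pydantic import ValidationError

from example_blog.schemas import CreateArticleSchema, UpdateArticleSchema

obj = CreateArticleSchema(title='Hello', body='First post')
print(obj.dict())  # {'title': 'Hello', 'body': 'First post'}

# UpdateArticleSchema makes every field optional, so partial updates validate too
patch = UpdateArticleSchema(body='Edited body')
print(patch.dict(exclude_unset=True))  # {'body': 'Edited body'}

try:
    CreateArticleSchema(body='missing title')  # title is required
except ValidationError as exc:
    print(exc.errors()[0]['loc'])  # ('title',)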
Create src/example_blog/dao.py, the data access layer:

from typing import Generic, List

from fastapi.encoders import jsonable_encoder
from sqlalchemy.orm import Session

from example_blog.models import Article
from example_blog.schemas import CreateSchema, ModelType, UpdateSchema, CreateArticleSchema, UpdateArticleSchema


# Generic class that defines the common DAO operations
class BaseDAO(Generic[ModelType, CreateSchema, UpdateSchema]):
    model: ModelType

    # Common CRUD operations
    def get(self, session: Session, offset=0, limit=10) -> List[ModelType]:
        result = session.query(self.model).offset(offset).limit(limit).all()
        return result

    def get_by_id(self, session: Session, pk: int) -> ModelType:
        return session.query(self.model).get(pk)

    def create(self, session: Session, obj_in: CreateSchema) -> ModelType:
        """Create"""
        obj = self.model(**jsonable_encoder(obj_in))
        session.add(obj)
        session.commit()
        return obj

    def patch(self, session: Session, pk: int, obj_in: UpdateSchema) -> ModelType:
        """Patch"""
        obj = self.get_by_id(session, pk)
        update_data = obj_in.dict(exclude_unset=True)
        for key, val in update_data.items():
            setattr(obj, key, val)
        session.add(obj)
        session.commit()
        session.refresh(obj)
        return obj

    def delete(self, session: Session, pk: int) -> None:
        """Delete"""
        obj = self.get_by_id(session, pk)
        session.delete(obj)
        session.commit()

    def count(self, session: Session):
        return session.query(self.model).count()


# Concrete BaseDAO implementation for the Article model
class ArticleDAO(BaseDAO[Article, CreateArticleSchema, UpdateArticleSchema]):
    # model tells the DAO which model class it operates on
    model = Article

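A short usage sketch of the DAO, assuming the database and the article table already exist:

# Sketch: exercising ArticleDAO with a session from the factory in db.py.
from example_blog.dao import ArticleDAO
from example_blog.db import SessionFactory
from example_blog.schemas import CreateArticleSchema, UpdateArticleSchema

dao = ArticleDAO()
session = SessionFactory()

article = dao.create(session, CreateArticleSchema(title='Hello', body='First post'))
dao.patch(session, article.id, UpdateArticleSchema(body='Edited'))
print(dao.count(session), [a.title for a in dao.get(session, limit=5)])

session.close()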
Commit the code:

git add .
git commit -m "feat: Add models and DAO"

Service Layer

Create src/example_blog/services.py with the service classes:

"""Service"""
from typing import Generic, List

from sqlalchemy.orm import Session

from example_blog.dao import ArticleDAO, BaseDAO
from example_blog.models import Article
from example_blog.schemas import CreateSchema, ModelType, UpdateSchema


# Generic BaseService used as the base class for the concrete services
# (articles, comments, ...) in different parts of the application.
class BaseService(Generic[ModelType, CreateSchema, UpdateSchema]):
    dao: BaseDAO

    """
    Methods provided by BaseService:
    get(session, offset, limit): fetch a paginated list of items from the database.
    total(session): return the number of items in the database.
    get_by_id(session, pk): retrieve a single item by primary key.
    create(session, obj_in): add a new item to the database.
    patch(session, pk, obj_in): update an existing item.
    delete(session, pk): delete an item from the database.
    """
    def get(self, session: Session, offset=0, limit=10) -> List[ModelType]:
        """Get a paginated list"""
        return self.dao.get(session, offset=offset, limit=limit)

    def total(self, session: Session) -> int:
        return self.dao.count(session)

    def get_by_id(self, session: Session, pk: int) -> ModelType:
        """Get by id"""
        return self.dao.get_by_id(session, pk)

    def create(self, session: Session, obj_in: CreateSchema) -> ModelType:
        """Create an object"""
        return self.dao.create(session, obj_in)

    def patch(self, session: Session, pk: int, obj_in: UpdateSchema) -> ModelType:
        """Update"""
        return self.dao.patch(session, pk, obj_in)

    def delete(self, session: Session, pk: int) -> None:
        """Delete an object"""
        return self.dao.delete(session, pk)


class ArticleService(BaseService[Article, CreateSchema, UpdateSchema]):
    dao = ArticleDAO()

Commit the code:

git add .
git commit -m "feat: Add services."

FastAPI

Introduce the FastAPI framework as the API layer.

Install the dependencies:

poetry add fastapi uvicorn

Create src/example_blog/views.py with the views:

from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from example_blog.dependencies import CommonQueryParams, get_db
from example_blog.schemas import (ArticleSchema, CreateArticleSchema,
                                  UpdateArticleSchema)
from example_blog.services import ArticleService

# APIRouter instance that the route handlers below are registered on
router = APIRouter()

# ArticleService instance the handlers use for the actual data operations
_service = ArticleService()


@router.get('/articles')
def get(
        session: Session = Depends(get_db),
        commons: CommonQueryParams = Depends()
):
    return _service.get(session, offset=commons.offset, limit=commons.limit)


@router.get('/articles/{pk}')
def get_by_id(
        pk: int,
        session: Session = Depends(get_db)
):
    return _service.get_by_id(session, pk)


@router.post('/articles', response_model=ArticleSchema)
def create(
        obj_in: CreateArticleSchema,
        session: Session = Depends(get_db),
):
    return _service.create(session, obj_in)


@router.patch('/articles/{pk}', response_model=ArticleSchema)
def patch(
        pk: int,
        obj_in: UpdateArticleSchema,
        session: Session = Depends(get_db)
):
    return _service.patch(session, pk, obj_in)


@router.delete('/articles/{pk}')
def delete(
        pk: int,
        session: Session = Depends(get_db)
):
    return _service.delete(session, pk)

Imported modules and classes

  • APIRouter: imported from FastAPI; declares a router object that can hold multiple routes.
  • Depends: also from FastAPI; used for dependency injection, so shared logic (such as database connections or parameter validation) can be injected into route handlers.
  • Session: from SQLAlchemy; represents a database session used to perform database operations.
  • Project imports:
    • CommonQueryParams: a class for parsing and validating query parameters (such as pagination).
    • get_db: a dependency function that provides the database session.
    • Schema classes (ArticleSchema, CreateArticleSchema, UpdateArticleSchema): used for request and response validation and serialization.
    • ArticleService: the service class that handles article-related business logic.

Router and handlers

  • router: an APIRouter instance on which the route handlers are registered.
  • _service: an ArticleService instance through which the handlers call the actual data operations.

The routes in detail

  1. List articles (paginated):
    • Path: /articles
    • Handler: get, HTTP GET; returns the article list with pagination support.
    • Parameters: a Session object and a CommonQueryParams object, both provided via dependency injection.
    • Behaviour: calls _service.get to fetch the article list from the database using the given offset and limit.
  2. Get a single article by ID:
    • Path: /articles/{pk}
    • Handler: get_by_id, HTTP GET; fetches a single article by its primary key (pk).
    • Parameters: the article ID and the injected Session object.
    • Behaviour: calls _service.get_by_id to retrieve the article from the database.
  3. Create an article:
    • Path: /articles
    • Handler: create, HTTP POST; creates a new article.
    • Parameters: the injected Session object and a request body matching CreateArticleSchema.
    • Response model: ArticleSchema, ensuring the returned data matches the defined schema.
    • Behaviour: calls _service.create to add the new article to the database.
  4. Update an article:
    • Path: /articles/{pk}
    • Handler: patch, HTTP PATCH; partially updates an existing article.
    • Parameters: the article ID, the injected Session object and a request body matching UpdateArticleSchema.
    • Response model: ArticleSchema.
    • Behaviour: calls _service.patch to update the article with the given ID.
  5. Delete an article:
    • Path: /articles/{pk}
    • Handler: delete, HTTP DELETE; deletes the article with the given ID.
    • Parameters: the article ID and the injected Session object.
    • Behaviour: calls _service.delete to remove the article from the database.

A usage sketch of these endpoints follows below.
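Once the server is running (see the start-up steps below), the endpoints can be exercised with a short script. This sketch uses the requests library, which is installed later as a test dependency:

# Sketch: calling the API once the server is listening on 127.0.0.1:8000.
import requests

base = 'http://127.0.0.1:8000/api/v1/articles'

created = requests.post(base, json={'title': 'Hello', 'body': 'First post'}).json()
print(created['id'], created['title'])

print(requests.get(base, params={'offset': 1, 'limit': 10}).json())               # paginated list
print(requests.patch(f"{base}/{created['id']}", json={'body': 'Edited'}).json())  # partial update
requests.delete(f"{base}/{created['id']}")                                        # remove it again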

Create src/example_blog/middlewares.py with the database session middleware:

from typing import Callable

from fastapi import FastAPI, Request, Response

from example_blog.db import SessionFactory


async def db_session_middleware(request: Request, call_next: Callable) -> Response:
    # Fallback error response in case the request handler raises
    response = Response('Internal server error', status_code=500)
    try:
        # Open a database session for the current request
        request.state.db = SessionFactory()
        # Await call_next to hand control to the next middleware
        # or to the endpoint that finally handles the request
        response = await call_next(request)
    finally:
        # Close the session opened in the try block
        request.state.db.close()

    return response


def init_middleware(app: FastAPI) -> None:
    app.middleware('http')(db_session_middleware)

This is an asynchronous FastAPI middleware that manages a database session for each request; middleware exists to run the necessary work before and after each request is handled.
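An equivalent pattern often seen in FastAPI projects, though not the one used here, is a yield-based dependency instead of middleware. A minimal sketch for comparison:

# Alternative sketch: open/close the session per request with a yield dependency
# instead of the middleware above.
from example_blog.db import SessionFactory


def get_db_session():
    session = SessionFactory()
    try:
        yield session       # FastAPI injects this session into the endpoint
    finally:
        session.close()     # always runs after the response has been produced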


Create src/example_blog/dependencies.py with the FastAPI dependencies:

from fastapi import Request
from sqlalchemy.orm import Session


# Extract the database session from the FastAPI Request object
def get_db(request: Request) -> Session:
    return request.state.db


# A common query-parameter model used for pagination in database queries
class CommonQueryParams:
    def __init__(self, offset: int = 1, limit: int = 10):
        self.offset = offset - 1
        if self.offset < 0:
            self.offset = 0
        self.limit = limit

        if self.limit < 0:
            self.limit = 10

Create src/example_blog/routes.py with the routes:

from fastapi import APIRouter, FastAPI

from example_blog import views


def router_v1():
    router = APIRouter()
    # Attach the routes defined in the views module to this router
    router.include_router(views.router, tags=['Article'])
    return router


# Takes a FastAPI application instance and initializes its routes
def init_routers(app: FastAPI):
    app.include_router(router_v1(), prefix='/api/v1', tags=['v1'])

Create src/example_blog/server.py with the server start-up logic:

"""server"""
import uvicorn
from fastapi import FastAPI

from example_blog import middlewares, routes
from example_blog.config import settings
from example_blog.log import init_log


class Server:
    """Server class used to configure and run the FastAPI application.

    It wraps application initialization and start-up: logging setup,
    middleware and route configuration.

    Methods:
        __init__: initialize the Server instance.
        init_app: configure the FastAPI app, including middleware and routes.
        run: start the ASGI server that serves the app.
    """
    def __init__(self):
        """Initialize the Server instance: set up logging and create the FastAPI app."""
        init_log()
        self.app = FastAPI()

    def init_app(self):
        """Configure the FastAPI app by adding middleware and routes."""
        middlewares.init_middleware(self.app)
        routes.init_routers(self.app)

    def run(self):
        """Start the application with uvicorn as the ASGI server,
        using the host and port read from the configuration."""
        self.init_app()
        uvicorn.run(
            app=self.app,
            host=settings.HOST,
            port=settings.PORT,
        )

Edit src/example_blog/config/settings.yml to add the server configuration:

HOST: 127.0.0.1
PORT: 8000

Commit the code:

git add .
git commit -m "feat: Add api service."

Start Command

Edit src/example_blog/cmdline.py to add the logic that starts the Server:

# Appended to src/example_blog/cmdline.py; also requires these imports at the top:
# from example_blog.config import settings
# from example_blog.server import Server

@main.command()
@click.option('-h', '--host', show_default=True, help=f'Host IP. Default: {settings.HOST}')
@click.option('-p', '--port', show_default=True, type=int, help=f'Port. Default: {settings.PORT}')
@click.option('--level', help='Log level')
def server(host, port, level):
    """Start server."""
    kwargs = {
        'LOG_LEVEL': level,  # key matches the settings.LOG_LEVEL read by init_log()
        'HOST': host,
        'PORT': port,
    }
    for name, value in kwargs.items():
        if value:
            settings.set(name, value)

    Server().run()

Commit the code:

git add .
git commit -m "feat: Add server cmdline."

Start the Server

Install the project into the current Python environment in editable mode:

pip install -e .

Run it from the command line:

example_blog server

You should see output like the following:

INFO:     Started server process [21687]
2020-12-28 18:11:56,341 INFO uvicorn.error 21687 139772921304768 Started server process [21687]
INFO: Waiting for application startup.
2020-12-28 18:11:56,341 INFO uvicorn.error 21687 139772921304768 Waiting for application startup.
INFO: Application startup complete.
2020-12-28 18:11:56,341 INFO uvicorn.error 21687 139772921304768 Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
2020-12-28 18:11:56,341 INFO uvicorn.error 21687 139772921304768 Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)

Open http://127.0.0.1:8000/docs in a browser to view the interactive API documentation.

Introducing a Migration Tool

Note: in the author's testing, none of the alembic-related commands (migrate) ran successfully; the error reported a wrong database connection password, and the exact cause is unknown.

To make data model changes easier to manage, alembic is introduced for database migrations.

Install the dependency:

poetry add alembic

Initialize alembic:

alembic init migration
mv alembic.ini src/example_blog/migration

Move all of alembic's files into the src/example_blog/migration directory.

Edit src/example_blog/migration/alembic.ini:

# A generic, single database configuration.

[alembic]
# path to migration scripts
;script_location = src/example_blog/migration
script_location = .

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =

# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; this defaults
# to src/example_blog/migration/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat src/example_blog/migration/versions

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

;sqlalchemy.url = driver://user:pass@localhost/dbname


[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks=black
# black.type=console_scripts
# black.entrypoint=black
# black.options=-l 79

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

Edit src/example_blog/migration/env.py:

from logging.config import fileConfig

from alembic import context
from sqlalchemy import engine_from_config, pool

from example_blog import db
from example_blog.models import BaseModel

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
# target_metadata = None

target_metadata = BaseModel.metadata


# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    context.configure(
        url=db.url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    configuration = config.get_section(config.config_ini_section)
    configuration['sqlalchemy.url'] = str(db.url)
    connectable = engine_from_config(
        configuration,
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()

Edit src/example_blog/cmdline.py to add the migration command:

# Appended to src/example_blog/cmdline.py; also requires: from example_blog import utils
from pathlib import Path

from alembic import config
from click import Context


@main.command()
@click.pass_context
@click.option('-h', '--help', is_flag=True)
@click.argument('args', nargs=-1)
def migrate(ctx: Context, help, args):
    """Usage: migrate -- [alembic arguments]"""
    with utils.chdir(Path(__file__).parent / 'migration'):
        argv = list(args)
        if help:
            argv.append('--help')
        config.main(prog=ctx.command_path, argv=argv)

Create src/example_blog/utils.py:

"""Utils"""

import contextlib
import os
from os import PathLike
from typing import Union


@contextlib.contextmanager
def chdir(path: Union[str, PathLike]):
    cwd = os.getcwd()
    os.chdir(path)
    yield
    os.chdir(cwd)

Because the alembic command is wrapped with click, usage differs slightly: the remaining alembic arguments should follow migrate --, otherwise multiple arguments will not be recognized.

To have src/example_blog/migration packaged with the project, it needs to be turned into a Python package.

Create src/example_blog/migration/__init__.py and src/example_blog/migration/versions/__init__.py.

Create an empty migration revision:

example_blog migrate -- revision -m "init"

Run the migration:

example_blog migrate -- upgrade head

Create the first real migration revision from the models:

example_blog migrate -- revision --autogenerate -m "init_table"

Run the migration:

example_blog migrate -- upgrade head

Commit the code:

git add .
git commit -m "Add alembic migrate."

Testing and Code Quality

Testing is an important part of software development; it catches more of the possible failure modes before release.

pytest is chosen as the test framework: it is widely used, powerful and well supported.

Install the dependency:

poetry add -D pytest

Create tests/settings.yml to initialize the test configuration:

DEBUG: false
LOG_LEVEL: INFO

HOST: 127.0.0.1
PORT: 8000

DATABASE:
  DRIVER: mysql
  NAME: example_blog
  HOST: 127.0.0.1
  PORT: 3306
  USERNAME: root
  PASSWORD: root
  QUERY:
    charset: utf8mb4

Edit tests/__init__.py to load the test configuration:

import os

from example_blog.config import settings

settings.load_file(os.path.join(os.path.dirname(__file__), 'settings.yml'))
settings.load_file(os.path.join(os.path.dirname(__file__), 'settings.local.yml'))

Local development configuration can be tweaked on the fly, but the development and test environments still differ somewhat, so the code above loads two test configuration files. Mirroring Dynaconf's convention, settings.local.yml is a local configuration that is not tracked by version control; the only difference is that here it is loaded manually (an example follows below).
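For instance, a hypothetical tests/settings.local.yml could point the test suite at a local SQLite file, which also matches the sqlite clean-up branch in the conftest.py fixture shown below:

DATABASE:
  DRIVER: sqlite
  NAME: test_example_blog.db
  HOST:
  PORT:
  USERNAME:
  PASSWORD:
  QUERY: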

Commit the code:

git add .
git commit -m "test: Init test."

Testing the DAO

Write the test fixtures:

Create tests/conftest.py with the test configuration:

"""Test config"""

import os
from pathlib import Path

import pytest
from alembic import command, config
from sqlalchemy.orm import Session

from example_blog import migration
from example_blog.config import settings
from example_blog.db import SessionFactory
from example_blog.models import Article


@pytest.fixture()
def migrate():
    """Re-init database when run a test."""
    os.chdir(Path(migration.__file__).parent)
    alembic_config = config.Config('./alembic.ini')
    alembic_config.set_main_option('script_location', os.getcwd())
    print('\n----- RUN ALEMBIC MIGRATION: -----\n')
    command.downgrade(alembic_config, 'base')
    command.upgrade(alembic_config, 'head')
    try:
        yield
    finally:
        command.downgrade(alembic_config, 'base')
        db_name = settings.DATABASE.get('NAME')
        if settings.DATABASE.DRIVER == 'sqlite' and os.path.isfile(db_name):
            try:
                os.remove(db_name)
            except FileNotFoundError:
                pass


@pytest.fixture()
def session(migrate) -> Session:
    """session fixture"""
    _s = SessionFactory()
    yield _s
    _s.close()


@pytest.fixture()
def init_article(session):
    """Init article"""
    a_1 = Article(title='Hello world', body='Hello world, can you see me?')
    a_2 = Article(title='Love baby', body='I love you everyday, and i want with you.')
    a_3 = Article(title='Tomorrow', body='When the sun rises, this day is fine day, cheer up.')
    session.add_all([a_1, a_2, a_3])
    session.commit()

Write the data-access-layer test cases in tests/test_dao.py:

import pytest

from example_blog.dao import ArticleDAO
from example_blog.models import Article
from example_blog.schemas import CreateArticleSchema, UpdateArticleSchema


class TestArticle:

    @pytest.fixture()
    def dao(self, init_article):
        yield ArticleDAO()

    def test_get(self, dao, session):
        users = dao.get(session)
        assert len(users) == 3
        users = dao.get(session, limit=2)
        assert len(users) == 2
        users = dao.get(session, offset=4)
        assert not users

    def test_get_by_id(self, dao, session):
        user = dao.get_by_id(session, 1)
        assert user.id == 1

    def test_create(self, dao, session):
        origin_count = session.query(dao.model).count()
        obj_in = CreateArticleSchema(title='test')
        dao.create(session, obj_in)
        count = session.query(dao.model).count()
        assert origin_count + 1 == count

    def test_patch(self, dao, session):
        obj: Article = session.query(dao.model).first()
        body = obj.body
        obj_in = UpdateArticleSchema(body='test')
        updated_obj: Article = dao.patch(session, obj.id, obj_in)
        assert body != updated_obj.body

    def test_delete(self, dao, session):
        origin_count = session.query(dao.model).count()
        dao.delete(session, 1)
        count = session.query(dao.model).count()
        assert origin_count - 1 == count

    def test_count(self, dao, session):
        count = dao.count(session)
        assert count == 3

Run the tests:

pytest tests/test_dao.py

If the run succeeds, the tests pass.

Commit the code:

git add .
git commit -m "test: Add dao test."

Testing the Service Layer

Create tests/test_services.py with the test cases:

import pytest

from example_blog.schemas import CreateArticleSchema, UpdateArticleSchema
from example_blog.services import ArticleService


class TestArticleService:

    @pytest.fixture()
    def service(self, init_article):
        yield ArticleService()

    def test_get(self, service, session):
        objs = service.get(session)
        assert len(objs) == 3
        objs = service.get(session, limit=2)
        assert len(objs) == 2
        objs = service.get(session, offset=5)
        assert not objs

    def test_total(self, service, session):
        total = service.total(session)
        assert total == 3

    def test_by_id(self, service, session):
        __obj = session.query(service.dao.model).first()
        obj = service.get_by_id(session, __obj.id)
        assert obj.id == __obj.id

    def test_create(self, service, session):
        origin_count = service.total(session)
        obj_in = CreateArticleSchema(title='test')
        service.create(session, obj_in)
        count = service.total(session)
        assert origin_count + 1 == count

    def test_patch(self, service, session):
        origin_obj = session.query(service.dao.model).first()
        body = origin_obj.body
        obj_in = UpdateArticleSchema(body='test')
        obj = service.patch(session, origin_obj.id, obj_in)
        assert body != obj.body

    def test_delete(self, service, session):
        origin_count = service.total(session)
        obj = session.query(service.dao.model).first()
        service.delete(session, obj.id)
        count = service.total(session)
        assert origin_count - 1 == count

Run the tests:

pytest tests/test_services.py

If the run succeeds, the tests pass.

Commit the code:

git add .
git commit -m "test: Add service test."

Testing the View Layer

Edit tests/conftest.py to add the test client fixture:

from fastapi.testclient import TestClient

from example_blog import migration, server


@pytest.fixture
def client():
    """Fast api test client factory"""
    _s = server.Server()
    _s.init_app()
    _c = TestClient(app=_s.app)
    yield _c

Because FastAPI's TestClient depends on requests, install it first:

poetry add -D requests

Create tests/test_views.py to test the views:

import pytest
from fastapi.encoders import jsonable_encoder
from fastapi.responses import Response

from example_blog.models import Article
from example_blog.schemas import ModelType


def test_docs(client):
    """Test view"""
    response = client.get('/docs')
    assert response.status_code == 200


class BaseTest:
    version = 'v1'
    base_url: str
    model: ModelType

    @pytest.fixture()
    def init_data(self):
        pass

    def url(self, pk: int = None) -> str:
        url_split = ['api', self.version, self.base_url]
        if pk:
            url_split.append(str(pk))
        return '/'.join(url_split)

    def assert_response_ok(self, response: Response):
        assert response.status_code == 200

    def test_get(self, client, session, init_data):
        count = session.query(self.model).count()
        response = client.get(self.url())
        self.assert_response_ok(response)
        assert count == len(response.json())

    def test_get_by_id(self, client, session, init_data):
        obj = session.query(self.model).first()
        response = client.get(self.url(obj.id))
        self.assert_response_ok(response)
        assert jsonable_encoder(obj) == response.json()

    def test_delete(self, client, session, init_data):
        count = session.query(self.model).count()
        session.close()
        response = client.delete(self.url(1))
        self.assert_response_ok(response)
        after_count = session.query(self.model).count()
        assert after_count == 2
        assert count - 1 == after_count


class TestArticle(BaseTest):
    model = Article
    base_url = 'articles'

    @pytest.fixture()
    def init_data(self, init_article):
        pass

    def test_create(self, client, session, init_data):
        response = client.post(
            self.url(),
            json={'title': 'xxx'}
        )
        self.assert_response_ok(response)
        assert response.json().get('title') == 'xxx'

    def test_patch(self, client, session, init_data):
        obj = session.query(Article).first()
        response = client.patch(self.url(obj.id), json={'body': 'xxx'})
        self.assert_response_ok(response)
        assert response.json().get('body') != obj.body

Run the tests:

pytest tests/test_views.py

If the run succeeds, the tests pass.

Commit the code:

git add .
git commit -m "test: Add view test."

Testing the Command Line

Edit tests/conftest.py to add the CLI runner fixture:

from click.testing import CliRunner


@pytest.fixture
def cli():
    runner = CliRunner(echo_stdin=True, mix_stderr=False)
    yield runner

Create tests/test_cmdline.py with the test cases:

import uvicorn
from alembic import config

import example_blog
from example_blog import cmdline


def test_main(cli):
    result = cli.invoke(cmdline.main)
    assert result.exit_code == 0
    result = cli.invoke(cmdline.main, '-V')
    assert result.exit_code == 0
    assert str(result.output).strip() == example_blog.__version__


def test_run(cli, mocker):
    mock_run = mocker.patch.object(uvicorn, 'run')
    result = cli.invoke(cmdline.main, ['server', '-h', '127.0.0.1', '-p', '8080'])
    assert result.exit_code == 0
    mock_run.assert_called_once_with(app=mocker.ANY, host='127.0.0.1', port=8080)


def test_migrate(cli, mocker):
    mock_main = mocker.patch.object(config, 'main')
    cli.invoke(cmdline.main, ['migrate', '--help'])
    mock_main.assert_called_once()

Because these unit tests use mocking, install pytest-mock, which integrates mock with pytest:

poetry add -D pytest-mock

Run the tests:

pytest tests/test_cmdline.py

If the run succeeds, the tests pass.

Commit the code:

git add .
git commit -m "test: Add cmdline test."

Other Tests

Create tests/test_dependencies.py with the test cases:

import pytest

from example_blog.dependencies import CommonQueryParams


@pytest.mark.parametrize(
    ['args', 'expect_value'],
    [
        ((), (0, 10)),
        ((0,), (0, 10)),
        ((-10, -10), (0, 10)),
        ((5, 100), (4, 100)),
    ]
)
def test_common_query_params(args, expect_value):
    params = CommonQueryParams(*args)
    assert params.offset == expect_value[0]
    assert params.limit == expect_value[1]

Create tests/test_utils.py with the test cases:

import os

from example_blog.utils import chdir


def test_chdir():
    path = '/tmp'
    cwd = os.getcwd()
    with chdir(path):
        assert path == os.getcwd()
    assert cwd == os.getcwd()

Run the tests:

pytest

If the run succeeds, the tests pass.

Commit the code:

git add .
git commit -m "test: Add other test."

At this point all of the tests run, and every package except src/example_blog/migration is covered.

Improving the Code

Code style and conventions reflect a developer's craftsmanship, and good code is a pleasure to read. The community has built many tools for checking code against these conventions.

Organizing Imports

isort is a tool that automatically formats import statements.

Install the dependency:

poetry add -D isort

Format the code:

isort .

There is no need to commit just yet; the style checks below may reformat the code again.

Checking Code Style

flake8 checks code against the PEP 8 conventions and reports anything that does not comply.

Install the dependency:

poetry add -D flake8

Check the code:

flake8

Adjust the code according to the reported flake8 rules until it passes cleanly (a configuration example follows below).
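If some defaults need adjusting (line length is a common one), flake8 can be configured through a file such as setup.cfg; a hypothetical snippet:

# setup.cfg (hypothetical) -- note that flake8 does not read pyproject.toml by default
[flake8]
max-line-length = 120
exclude = .venv,dist,src/example_blog/migration/versions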

Commit the code:

git add .
git commit -m "feat: Lint code"

Packaging and Publishing

By this point the pyproject.toml file should look like this:

[tool.poetry]
name = "example_blog"
version = "0.1.0"
description = "This is example blog system."
authors = ["huagang <huagang517@126.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.10"
click = "^8.1.3"
sqlalchemy = "^1.4.44"
mysqlclient = "^2.1.1"
pydantic = "^1.10.2"
dynaconf = "^3.1.11"
fastapi = "^0.88.0"
uvicorn = "^0.20.0"
alembic = "^1.8.1"

[tool.poetry.group.dev.dependencies]
pytest = "^7.2.0"
isort = "^5.10.1"
requests = "^2.28.1"
pytest-mock = "^3.10.0"
flake8 = "^6.0.0"

[tool.poetry.scripts]
example_blog = "example_blog.cmdline:main"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

This file has been built up gradually over the course of development. It is the project's description file and contains the packaging configuration.

Build

poetry build

The dist directory now contains two files: a .tar.gz source distribution and a .whl binary wheel.

Publish

Publish the finished project to a package index, or to an internal private repository.

Upload with poetry:

poetry publish
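By default poetry publish uploads to PyPI and prompts for credentials. For an internal repository, the index can be registered first; a sketch with placeholder values:

poetry config repositories.private https://pypi.example.com/   # hypothetical internal index URL
poetry config http-basic.private myuser mypass                 # hypothetical credentials
poetry publish -r private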