Recommendations
This document contains a set of recommendations when using Fastify.
Use A Reverse Proxy
Node.js is an early adopter of frameworks shipping with an easy-to-use web server within the standard library. Previously, with languages like PHP or Python, one would need either a web server with specific support for the language, or the ability to set up some sort of CGI gateway that works with the language. With Node.js, one can write an application that directly handles HTTP requests. As a result, the temptation is to write applications that handle requests for multiple domains, listen on multiple ports (i.e. HTTP and HTTPS), and then expose these applications directly to the Internet to handle requests.
The Fastify team strongly considers this to be an anti-pattern and extremely bad practice:
- It adds unnecessary complexity to the application by diluting its focus.
- It prevents horizontal scalability.
See Why should I use a Reverse Proxy if Node.js is Production-Ready? for a more thorough discussion of why one should opt to use a reverse proxy.
For a concrete example, consider the situation where:
- The application needs multiple instances to handle load.
- The application needs TLS termination.
- The application needs to redirect HTTP requests to HTTPS.
- The application needs to serve multiple domains.
- The application needs to serve static resources, e.g. jpeg files.
There are many reverse proxy solutions available, and your environment may dictate the solution to use, e.g. AWS or GCP. Given the above, we could use HAProxy or Nginx to solve these requirements:
HAProxy
# The global section defines base HAProxy (engine) instance configuration.
global
  log /dev/log syslog
  maxconn 4096
  chroot /var/lib/haproxy
  user haproxy
  group haproxy

  # Set some baseline TLS options.
  tune.ssl.default-dh-param 2048
  ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
  ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
  ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
  ssl-default-server-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS

# Each defaults section defines options that will apply to each subsequent
# subsection until another defaults section is encountered.
defaults
  log global
  mode http
  option httplog
  option dontlognull
  retries 3
  option redispatch
  # The following option makes haproxy close connections to backend servers
  # instead of keeping them open. This can alleviate unexpected connection
  # reset errors in the Node process.
  option http-server-close
  maxconn 2000
  timeout connect 5000
  timeout client 50000
  timeout server 50000

  # Enable content compression for specific content types.
  compression algo gzip
  compression type text/html text/plain text/css application/javascript

# A "frontend" section defines a public listener, i.e. an "http server"
# as far as clients are concerned.
frontend proxy
  # The IP address here would be the _public_ IP address of the server.
  # Here, we use a private address as an example.
  bind 10.0.0.10:80
  # This redirect rule will redirect all traffic that is not TLS traffic
  # to the same incoming request URL on the HTTPS port.
  redirect scheme https code 308 if !{ ssl_fc }
  # Technically this use_backend directive is useless since we are simply
  # redirecting all traffic to this frontend to the HTTPS frontend. It is
  # merely included here for completeness' sake.
  use_backend default-server

# This frontend defines our primary, TLS only, listener. It is here where
# we will define the TLS certificates to expose and how to direct incoming
# requests.
frontend proxy-ssl
  # The `/etc/haproxy/certs` directory in this example contains a set of
  # certificate PEM files that are named for the domains the certificates are
  # issued for. When HAProxy starts, it will read this directory, load all of
  # the certificates it finds here, and use SNI matching to apply the correct
  # certificate to the connection.
  bind 10.0.0.10:443 ssl crt /etc/haproxy/certs

  # Here we define rule pairs to handle static resources. Any incoming request
  # that has a path starting with `/static`, e.g.
  # `https://one.example.com/static/foo.jpeg`, will be directed to the
  # static resources server.
  acl is_static path -i -m beg /static
  use_backend static-backend if is_static

  # Here we define rule pairs to direct requests to appropriate Node.js
  # servers based on the requested domain. The `acl` line is used to match
  # the incoming hostname and define a boolean indicating if it is a match.
  # The `use_backend` line is used to direct the traffic if the boolean is
  # true.
  acl example1 hdr_sub(Host) one.example.com
  use_backend example1-backend if example1

  acl example2 hdr_sub(Host) two.example.com
  use_backend example2-backend if example2

  # Finally, we have a fallback backend if none of the requested hosts
  # match the above rules.
  default_backend default-server

# A "backend" is used to tell HAProxy where to request information for the
# proxied request. These sections are where we will define where our Node.js
# apps live and any other servers for things like static assets.
backend default-server
  # In this example we are defaulting unmatched domain requests to a single
  # backend server for all requests. Notice that the backend server does not
  # have to be serving TLS requests. This is called "TLS termination": the TLS
  # connection is "terminated" at the reverse proxy.
  # It is possible to also proxy to backend servers that are themselves serving
  # requests over TLS, but that is outside the scope of this example.
  server server1 10.10.10.2:80
# This backend configuration will serve requests for `https://one.example.com`
# by proxying requests to three backend servers in a round-robin manner.
backend example1-backend
  server example1-1 10.10.11.2:80
  server example1-2 10.10.11.3:80
  server example1-3 10.10.11.4:80

# This one serves requests for `https://two.example.com`
backend example2-backend
  server example2-1 10.10.12.2:80
  server example2-2 10.10.12.3:80
  server example2-3 10.10.12.4:80

# This backend handles the static resources requests.
backend static-backend
  server static-server1 10.10.9.2:80
Nginx
# This upstream block groups 3 servers into one named backend fastify_app,
# with 2 primary servers distributed via round-robin
# and one backup which is used when the first 2 are not reachable.
# This also assumes your fastify servers are listening on port 80.
# more info: https://nginx.org/en/docs/http/ngx_http_upstream_module.html
upstream fastify_app {
  server 10.10.11.1:80;
  server 10.10.11.2:80;
  server 10.10.11.3:80 backup;
}

# This server block asks NGINX to respond to requests on port 80
# (typically plain HTTP) with a redirect to the same request URL,
# but with HTTPS as the protocol.
# This block is optional, and usually used if you are handling
# SSL termination in NGINX, like in the example here.
server {
  # default_server is a special parameter that asks NGINX to
  # set this server block as the default for this address/port,
  # which in this case is any address and port 80.
  listen 80 default_server;
  listen [::]:80 default_server;

  # With a server_name directive you can also ask NGINX to
  # use this server block only with matching server name(s)
  # listen 80;
  # listen [::]:80;
  # server_name example.tld;

  # This matches all paths from the request and responds with
  # the redirect mentioned above.
  location / {
    return 301 https://$host$request_uri;
  }
}

# This server block asks NGINX to respond to requests on
# port 443 with SSL enabled and accept HTTP/2 connections.
# This is where the request is then proxied to the fastify_app
# server group defined above.
server {
  # This listen directive asks NGINX to accept requests
  # coming to any address, port 443, with SSL.
  listen 443 ssl default_server;
  listen [::]:443 ssl default_server;

  # With a server_name directive you can also ask NGINX to
  # use this server block only with matching server name(s)
  # listen 443 ssl;
  # listen [::]:443 ssl;
  # server_name example.tld;

  # Enable HTTP/2 support
  http2 on;

  # Your SSL/TLS certificate (chain) and secret key in the PEM format
  ssl_certificate /path/to/fullchain.pem;
  ssl_certificate_key /path/to/private.pem;

  # A generic best-practice baseline based on
  # https://ssl-config.mozilla.org/
  ssl_session_timeout 1d;
  ssl_session_cache shared:FastifyApp:10m;
  ssl_session_tickets off;

  # This tells NGINX to only accept TLS 1.3, which should be fine
  # with most modern browsers including IE 11 with certain updates.
  # If you want to support older browsers you might need to add
  # additional fallback protocols.
  ssl_protocols TLSv1.3;
  ssl_prefer_server_ciphers off;

  # This adds a header that tells browsers to only ever use HTTPS
  # with this server.
  add_header Strict-Transport-Security "max-age=63072000" always;

  # The following directives are only necessary if you want to
  # enable OCSP Stapling.
  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_trusted_certificate /path/to/chain.pem;

  # Custom nameserver to resolve upstream server names
  # resolver 127.0.0.1;

  # This section matches all paths and proxies them to the backend server
  # group specified above. Note the additional headers that forward
  # information about the original request. You might want to set
  # trustProxy to the address of your NGINX server so the X-Forwarded
  # fields are used by fastify.
  location / {
    # more info: https://nginx.org/en/docs/http/ngx_http_proxy_module.html
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # This is the directive that proxies requests to the specified server.
    # If you are using an upstream group, then you do not need to specify a
    # port. If you are directly proxying to a server, e.g.
    # proxy_pass http://127.0.0.1:3000, then specify a port.
    proxy_pass http://fastify_app;
  }
}
Kubernetes
readinessProbe
By default, the readinessProbe uses the pod IP as the hostname. Fastify listens on 127.0.0.1 by default, so the probe will not be able to reach the application in this case. To make it work, the application must listen on 0.0.0.0 or specify a custom hostname in the readinessProbe.httpGet spec, as in the following example:
readinessProbe:
  httpGet:
    path: /health
    port: 4000
  initialDelaySeconds: 30
  periodSeconds: 30
  timeoutSeconds: 3
  successThreshold: 1
  failureThreshold: 5
Capacity Planning For Production
In order to rightsize the production environment for your Fastify application, it is highly recommended that you perform your own measurements against different configurations of the environment, which may use real CPU cores, virtual CPU cores (vCPU), or even fractional vCPU cores. We will use the term vCPU throughout this recommendation to represent any CPU type.
Tools such as k6 or autocannon can be used for conducting the necessary performance tests.
That said, you may also consider the following as a rule of thumb:
- To have the lowest possible latency, 2 vCPU are recommended per app instance (e.g., a k8s pod). The second vCPU will mostly be used by the garbage collector (GC) and libuv threadpool. This will minimize the latency for your users, as well as the memory usage, as the GC will be run more frequently. Also, the main thread won't have to stop to let the GC run.
- To optimize for throughput (handling the largest possible amount of requests per second per vCPU available), consider using a smaller amount of vCPUs per app instance. It is totally fine to run a Node.js application with 1 vCPU.
- You may experiment with an even smaller amount of vCPU, which may provide even better throughput in certain use-cases. There are reports of API gateway solutions working well with 100m-200m vCPU in Kubernetes.
See Node's Event Loop From the Inside Out to understand the workings of Node.js in greater detail and make a better determination about what your specific application needs.
Running Multiple Instances
There are several use-cases where running multiple Fastify apps on the same server might be considered. A common example would be exposing metrics endpoints on a separate port, to prevent public access, when using a reverse proxy or an ingress firewall is not an option.
It is perfectly fine to spin up several Fastify instances within the same Node.js process and run them concurrently, even in high-load systems. Each Fastify instance only generates as much load as the traffic it receives, plus the memory used for that Fastify instance.