Using ByConity as the Storage Engine
# 1. Introduction
ByConity is ByteDance's fork of ClickHouse (most recently synced with ClickHouse v23.3) that supports separation of storage and compute.
Starting from version 6.6, DeepFlow lets you choose between ClickHouse and ByConity through a deployment parameter. ClickHouse is used by default and can be switched to ByConity.
Tip
ByConity runs a total of 17 Pods: 9 Pods have Request and Limit of 1.1C 1280M, 1 Pod has Request and Limit of 1C 1G, and 1 Pod has Request and Limit of 1C 512M. The local Disk Cache of the `byconity-server`, `vw-default`, and `vw-writer` components can be changed via the `lru_max_size` setting, and the cap on stored log data via the `size` and `count` settings.
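For example, the following values-custom.yaml fragment lowers the `byconity-server` Disk Cache to 20Gi and caps log retention. This is a minimal sketch assuming the `configOverwrite` paths shown in the full example in section 1.1; the `size` and `count` fields are assumed to follow the ClickHouse-style logger rotation settings, and the same override applies to `vw-default` and `vw-writer` through `defaultWorker`:

```yaml
byconity:
  byconity:
    server:
      configOverwrite:
        logger:
          size: 1000M                 # assumed: max size of a single log file before rotation
          count: 10                   # assumed: number of rotated log files to keep
        disk_cache_strategies:
          simple:
            lru_max_size: 21474836480 # 20Gi local Disk Cache
```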
Resource requirements:
- CPU: the Kubernetes cluster should have at least 12C of allocatable capacity remaining; actual consumption can be higher.
- Memory: the Kubernetes cluster should have at least 14G of allocatable capacity remaining; actual consumption can be higher.
- Disk: each data node should have more than 180G of disk capacity: 40G of local Disk Cache each for `byconity-server`, `vw-default`, and `vw-writer`, plus 20G of log data each for the same three components (3 × 40G + 3 × 20G = 180G).
# 1.1 Deployment Parameters
ByConity connects to object storage by default; see the official documentation for the environment requirements. To deploy it, add a byconity section to your custom values-custom.yaml file.
Note: the configuration below uses Alibaba Cloud OSS as an example. Replace `endpoint`, `region`, `bucket`, `path`, `ak_id`, and `ak_secret` with the correct parameters for your object storage, and it is recommended to set the replica counts of `byconity-server`, `vw-default`, and `vw-writer` to match `deepflow-server` or the number of nodes, as sketched right after this note.
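For instance, on a cluster running three `deepflow-server` replicas, the relevant replica fields of the full example below would look like this (an illustrative excerpt, not a standalone configuration; in the full example the `virtualWarehouses` entries additionally merge `*defaultWorker`):

```yaml
byconity:
  byconity:
    server:
      replicas: 3        # match deepflow-server replicas or node count
    virtualWarehouses:
      - name: vw_default
        replicas: 3
      - name: vw_write
        replicas: 3
```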
```yaml
global:
  storageEngine: byconity
clickhouse:
  enabled: false
byconity:
  enabled: true
  nameOverride: ""
  fullnameOverride: ""
  image:
    repository: "{{ .Values.global.image.repository }}/byconity"
    tag: 1.0.0
  imagePullPolicy: IfNotPresent
  fdbShell:
    image:
      repository: "{{ .Values.global.image.repository }}"
  byconity:
    configOverwrite:
      storage_configuration:
        cnch_default_policy: cnch_default_s3
        disks:
          server_s3_disk_0: # FIXME
            path: byconity0
            endpoint: https://oss-cn-beijing-internal.aliyuncs.com
            region: cn-beijing
            bucket: byconity
            ak_id: XXXXXXX
            ak_secret: XXXXXXX
            type: bytes3
            is_virtual_hosted_style: true
        policies:
          cnch_default_s3:
            volumes:
              bytes3:
                default: server_s3_disk_0
                disk: server_s3_disk_0
    ports:
      tcp: 9000
      http: 8123
      rpc: 8124
      tcpSecure: 9100
      https: 9123
      exchange: 9410
      exchangeStatus: 9510
    usersOverwrite:
      users:
        default:
          password: ""
        probe:
          password: probe
      profiles:
        default:
          allow_experimental_live_view: 1
          enable_multiple_tables_for_cnch_parts: 1
    server:
      replicas: 1 # FIXME
      image: ""
      podAnnotations: {}
      resources: {}
      hostNetwork: false
      nodeSelector: {}
      tolerations: []
      affinity:
        nodeAffinity: {}
      imagePullSecrets: []
      securityContext: {}
      storage:
        localDisk:
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 30Gi
            storageClassName: openebs-hostpath # FIXME: replace to your storageClassName
        log:
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
            storageClassName: openebs-hostpath # FIXME: replace to your storageClassName
      configOverwrite:
        logger:
          level: trace
        disk_cache_strategies:
          simple:
            lru_max_size: 429496729600 # 400Gi
        # timezone: Etc/UTC
    tso:
      replicas: 1
      image: ""
      podAnnotations: {}
      resources: {}
      hostNetwork: false
      nodeSelector: {}
      tolerations: []
      affinity: {}
      imagePullSecrets: []
      securityContext: {}
      configOverwrite: {}
      additionalVolumes: {}
      storage:
        localDisk:
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: openebs-hostpath # FIXME: replace to your storageClassName
        log:
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: openebs-hostpath # FIXME: replace to your storageClassName
    daemonManager:
      replicas: 1 # Please keep single instance now, daemon manager HA is WIP
      image: ""
      podAnnotations: {}
      resources: {}
      hostNetwork: false
      nodeSelector: {}
      tolerations: []
      affinity: {}
      imagePullSecrets: []
      securityContext: {}
      configOverwrite: {}
    resourceManager:
      replicas: 1
      image: ""
      podAnnotations: {}
      resources: {}
      hostNetwork: false
      nodeSelector: {}
      tolerations: []
      affinity: {}
      imagePullSecrets: []
      securityContext: {}
      configOverwrite: {}
    defaultWorker: &defaultWorker
      replicas: 1
      image: ""
      podAnnotations: {}
      resources: {}
      hostNetwork: false
      nodeSelector: {}
      tolerations: []
      affinity: {}
      imagePullSecrets: []
      securityContext: {}
      livenessProbe:
        exec:
          command: ["/opt/byconity/scripts/lifecycle/liveness"]
        failureThreshold: 6
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 20
      readinessProbe:
        exec:
          command: ["/opt/byconity/scripts/lifecycle/readiness"]
        failureThreshold: 5
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 10
      storage:
        localDisk:
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 50Gi
            storageClassName: openebs-hostpath # replace to your storageClassName
        log:
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: openebs-hostpath # replace to your storageClassName
      configOverwrite:
        logger:
          level: trace
        disk_cache_strategies:
          simple:
            lru_max_size: 42949672960 # 40Gi
        # timezone: Etc/UTC
    virtualWarehouses:
      - <<: *defaultWorker
        name: vw_default
        replicas: 1 # FIXME
      - <<: *defaultWorker
        name: vw_write
        replicas: 1 # FIXME
    commonEnvs:
      - name: MY_POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: "metadata.namespace"
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: "metadata.name"
      - name: MY_UID
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: "metadata.uid"
      - name: MY_POD_IP
        valueFrom:
          fieldRef:
            fieldPath: "status.podIP"
      - name: MY_HOST_IP
        valueFrom:
          fieldRef:
            # fieldPath: "status.hostIP"
            fieldPath: "status.podIP"
      - name: CONSUL_HTTP_HOST
        valueFrom:
          fieldRef:
            fieldPath: "status.hostIP"
    additionalEnvs: []
    additionalVolumes:
      volumes: []
      volumeMounts: []
    postStart: ""
    preStop: ""
    livenessProbe: ""
    readinessProbe: ""
    ingress:
      enabled: false
  # For more detailed usage, please check fdb-kubernetes-operator API doc: https://github.com/FoundationDB/fdb-kubernetes-operator/blob/main/docs/cluster_spec.md
  fdb:
    enabled: true
    enableCliPod: true
    version: 7.1.15
    clusterSpec:
      mainContainer:
        imageConfigs:
          - version: 7.1.15
            baseImage: "{{ .Values.global.image.repository }}/foundationdb"
            tag: 7.1.15
      sidecarContainer:
        imageConfigs:
          - version: 7.1.15
            baseImage: "{{ .Values.global.image.repository }}/foundationdb-kubernetes-sidecar"
            tag: 7.1.15-1
      processCounts:
        stateless: 3
        log: 3
        storage: 3
      processes:
        general:
          volumeClaimTemplate:
            spec:
              storageClassName: openebs-hostpath # replace to your storageClassName
              resources:
                requests:
                  storage: 20Gi
  fdb-operator:
    enabled: true
    resources:
      limits:
        cpu: 1
        memory: 512Mi
      requests:
        cpu: 1
        memory: 512Mi
    affinity: {}
    image:
      repository: "{{ .Values.global.image.repository }}/fdb-kubernetes-operator"
      tag: v1.9.0
      pullPolicy: IfNotPresent
    initContainerImage:
      repository: "{{ $.Values.global.image.repository }}/foundationdb-kubernetes-sidecar"
    initContainers:
      6.2:
        image:
          repository: "{{ $.Values.global.image.repository }}/foundationdb/foundationdb-kubernetes-sidecar"
          tag: 6.2.30-1
          pullPolicy: IfNotPresent
      6.3:
        image:
          repository: "{{ $.Values.global.image.repository }}/foundationdb/foundationdb-kubernetes-sidecar"
          tag: 6.3.23-1
          pullPolicy: IfNotPresent
      7.1:
        image:
          repository: "{{ $.Values.global.image.repository }}/foundationdb/foundationdb-kubernetes-sidecar"
          tag: 7.1.15-1
          pullPolicy: IfNotPresent
  hdfs:
    enabled: false
```
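Note that `defaultWorker` defines a YAML anchor (`&defaultWorker`) whose content each `virtualWarehouses` entry pulls in through the merge key `<<: *defaultWorker`, so `vw_default` and `vw_write` inherit all the worker defaults and only override `name` and `replicas`.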
Redeploy DeepFlow:
```bash
helm del deepflow -n deepflow
helm install deepflow -n deepflow -f values-custom.yaml deepflow/deepflow
```
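After redeployment, you can confirm that the ByConity components come up by filtering on Pod names (a quick check, assuming the default deepflow namespace):

```bash
# All byconity-* and *fdb* Pods should eventually become Running/Ready
kubectl get pods -n deepflow | grep -E 'byconity|fdb'
```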
# 1.2 Notes
- ByConity supports only the AMD64 architecture.
- If some `byconity-fdb-storage` Pods fail to start, raise the inotify kernel parameters (a persistence sketch follows this list):

```bash
sudo sysctl -w fs.inotify.max_user_watches=2099999999
sudo sysctl -w fs.inotify.max_user_instances=2099999999
sudo sysctl -w fs.inotify.max_queued_events=2099999999
```
- ByConity depends on a FoundationDB (FDB) cluster, which stores ByConity's metadata. Deleting or rebuilding the FDB cluster loses the FDB data and, with it, ByConity's data. For this reason, uninstalling ByConity does not remove the FDB components. If you really need to delete them, run:

```bash
kubectl delete FoundationDBCluster --all -n deepflow
```
- If some FDB components cannot pull images because you use a private registry, resolve it with the following commands (a secret-creation sketch follows this list):

```bash
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' -n deepflow
kubectl delete pod -n deepflow -l foundationdb.org/fdb-cluster-name=deepflow-byconity-fdb
```
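The `sysctl -w` settings above do not survive a reboot. A minimal sketch for making them persistent, assuming a distribution that reads `/etc/sysctl.d/` (the file name is arbitrary):

```bash
# Persist the inotify limits across reboots
cat <<'EOF' | sudo tee /etc/sysctl.d/99-byconity-fdb.conf
fs.inotify.max_user_watches=2099999999
fs.inotify.max_user_instances=2099999999
fs.inotify.max_queued_events=2099999999
EOF
sudo sysctl --system   # reload all sysctl configuration files
```

Likewise, the `myregistrykey` secret referenced by the service-account patch must already exist in the namespace; a hedged example of creating it, with placeholder registry address and credentials:

```bash
# Create the image pull secret referenced above (placeholder values)
kubectl create secret docker-registry myregistrykey \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  -n deepflow
```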