I'm new to fluentd. I've put together the basic Fluentd configuration I need and deployed it to my Kubernetes cluster as a DaemonSet. I can see the logs arriving in my third-party logging solution. However, I now want to handle certain logs that come through as multiple entries when they should really be a single one. The logs coming from the node appear to be JSON and are formatted as follows:
{"log":"2019-09-23 18:54:42,102 [INFO] some message \n","stream":"stderr","time":"2019-09-23T18:54:42.102Z"}
{"log": "another message \n","stream":"stderr","time":"2019-09-23T18:54:42.102Z"}
I have a ConfigMap that looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-config-map
  namespace: logging
  labels:
    k8s-app: fluentd-logzio
data:
  fluent.conf: |-
    @include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
    @include kubernetes.conf
    @include conf.d/*.conf

    <match fluent.**>
      # this tells fluentd to not output its log on stdout
      @type null
    </match>

    # here we read the logs from Docker's containers and parse them
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag raw.kubernetes.*
      format json
      read_from_head true
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    <match kubernetes.**>
      @type logzio_buffered
      @id out_logzio
      endpoint_url "https://listener-ca.logz.io?token=####"
      output_include_time true
      output_include_tags true
      <buffer>
        # Set the buffer type to file to improve the reliability and reduce the memory consumption
        @type file
        path /var/log/fluentd-buffers/stackdriver.buffer
        # Set queue_full action to block because we want to pause gracefully
        # in case of the off-the-limits load instead of throwing an exception
        overflow_action block
        # Set the chunk limit conservatively to avoid exceeding the GCL limit
        # of 10MiB per write request.
        chunk_limit_size 2M
        # Cap the combined memory usage of this buffer and the one below to
        # 2MiB/chunk * (6 + 2) chunks = 16 MiB
        queue_limit_length 6
        # Never wait more than 5 seconds before flushing logs in the non-error case.
        flush_interval 5s
        # Never wait longer than 30 seconds between retries.
        retry_max_interval 30
        # Disable the limit on the number of retries (retry forever).
        retry_forever true
        # Use multiple threads for processing.
        flush_thread_count 2
      </buffer>
    </match>
My question is: how do I get these messages sent as a single entry instead of several?
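For example, would something like the fluent-plugin-concat filter, placed between the tail source and the detect_exceptions match, be the right approach? A rough sketch of what I have in mind (assuming the plugin is installed in the image, and assuming every new record starts with a "YYYY-MM-DD HH:MM:SS,mmm" timestamp like the one above):

<filter raw.kubernetes.**>
  @type concat
  # concatenate the "log" field of consecutive records
  key log
  # hypothetical regexp: a new record begins with a timestamp,
  # so lines without one are treated as continuations
  multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}/
  # flush a pending record if no continuation arrives within 5 seconds
  flush_interval 5
</filter>

Or, if that's the wrong plugin or the wrong place in the pipeline, what should I use instead?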