====== Differences ======

Below are the differences between two revisions of the page.

  * systemes:belk [2017/10/05 16:06] william [Part 1: Requirements]
  * systemes:belk [2019/02/06 14:03]
====== The Elastic Stack: Beats, Elasticsearch, Logstash, Kibana (BELK) ======

<WRAP center round important 60%>
</WRAP>

====== ELK Project: The Elastic Stack ======

Elastic Stack: https://

Experience reports:
  * http://
  * INSA: http://
===== Part 1: Requirements =====

  * Inventory
  * Indexing
  * Correlation
  * Actions
  * Data security
  * Continuous service
  * Authentication
  * Backups
  * XMPP alerting? https://
===== Part 2: The Tools =====

  * Comparison table of the free and paid tools: https://

  * The free components:
    * **Elasticsearch**: search, analyze and store your data. https://
    * **Logstash**: ingest data. https://
      * A pipeline that ingests and processes data from a multitude of sources at once, then transforms it. We will prefer the Beats approach for shipping.
    * **Kibana**: visualize your data. https://
    * **Beats**: ship data. https://
      * Filebeat: log files
      * Metricbeat: metrics
      * Packetbeat: network data
      * Winlogbeat: Windows event logs
      * Heartbeat: availability probes
    * X-Pack: Search Profiler
    * X-Pack: Monitoring

  * To work around the LDAP module being available only in the paid tier, set up authentication at the Apache level instead.
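The Apache-level workaround can be sketched as a basic-auth fragment in the reverse-proxy vhost. This is only an illustration: the file path, realm name and user file below are assumptions, not taken from this page.

<code apache>
# Hypothetical fragment: protect the proxied UI with htpasswd users.
# Create a user first with: htpasswd -c /etc/httpd/htpasswd.kibana someuser
<Location "/">
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /etc/httpd/htpasswd.kibana
    Require valid-user
</Location>
</code>

The same `<Location>` block can carry an LDAP or any other Apache auth module instead of htpasswd.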
===== Tools =====

==== Libbeat ====

  * Community Beats: https://
    * systemd/journald / http / apache / mysql / ping / openconfig / nagios
  * journald example: https://

==== Kafka ====

  * Kafka as a complement to Elasticsearch.
    * https://
  * kafka-manager: to monitor and administer the Kafka cluster.

==== elasticsearch-HQ ====

  * elasticsearch-HQ is a web tool for monitoring and managing Elasticsearch clusters.
    * https://
===== Architecture =====

  * kafka1.domaine.fr
  * kafka2.domaine.fr
  * elasticstack.domaine.fr / elasticsearch.domaine.fr / kibana.domaine.fr (same machine)
  * clientweb1.domaine.fr (filebeat)
  * clientdns1.domaine.fr (logstash)
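As a rough sketch, inferred from the host list and the sections below rather than stated explicitly on this page, the data flows as:

<code>
filebeat / logstash clients --> kafka1 / kafka2 --> logstash --> elasticsearch --> kibana
</code>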
===== Installation =====

==== Prerequisites ====

  * 4 vCPU; 6 GB RAM
  * selinux: disabled
  * firewalld: disabled

=== OpenJDK ===

  * OpenJDK must be installed on the Elasticsearch nodes and on the Kafka nodes.

=== RPM keys & folder ===

  * <code bash>rpm --import https://
mkdir -p /local/rpm
cd /local/rpm</code>

----
==== ELASTICSEARCH ====

  * <code bash>
rpm --install elasticsearch-5.2.1.rpm</code>

=== Configuration ===
  * (https://
  * Edit the Elasticsearch configuration file (elasticsearch.yml): <code bash>
cluster.name: ...
node.name: ${HOSTNAME}
bootstrap.memory_lock: ...
path.data: /...
path.logs: /...
network.host: ...
http.port: 9200
#...
</code>
  * Edit the systemd unit for Elasticsearch: <code bash>
#...
LimitMEMLOCK=infinity
# Remove the option:
--quiet
</code>
  * Edit the Elasticsearch sysconfig file: <code bash>
#...
MAX_LOCKED_MEMORY=unlimited
</code>
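Not covered above: on Elasticsearch 5.x the JVM heap size is set in a jvm.options file (typically /etc/elasticsearch/jvm.options on RPM installs). A sketch, assuming roughly half of this 6 GB machine goes to the heap; the 3 GB value is an assumption to adapt:

<code bash>
# Heap min and max should be equal; about half the RAM, assumed 3 GB here.
-Xms3g
-Xmx3g
</code>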

=== Start ===
  * <code bash>
systemctl enable elasticsearch
systemctl start elasticsearch</code>

=== Check ===
  * Check the listening socket and query the API (output abridged): <code bash>
tcp6 ...

curl -XGET '...'
nodes: {......}

curl -XGET '...'
{
  "..." : "...",
  ...
}
</code>

----
==== Kafka ====

  * <code bash>
cd /...
wget http://
tar -xvf kafka_2.12-0.11.0.0.tgz
cd kafka_2.12-0.11.0.0
groupadd kafka
useradd kafka -d "/..."
</code>
=== Configuration ===

  * Import the certificates: <code bash>
# Convert to pkcs12 format
openssl pkcs12 -export -in /...
# Import into the keystore
keytool -importkeystore -deststorepass hhjjkk -destkeystore server.keystore.jks -srckeystore elasticstack.p12 -srcstoretype PKCS12
# List the keystore:
keytool -list -keystore server.keystore.jks
# Certificate authority:
keytool -keystore server.truststore.jks -alias CARoot -import -file /...
</code>
  * vim config/server.properties <code bash>
# Listening port + fix for the fqdn/hostname problem
listeners=PLAINTEXT://:...
advertised.host.name=kafka1.domaine.fr
advertised.listeners=PLAINTEXT://...

# Replication across the two nodes
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
default.replication.factor=2

# SSL
ssl.keystore.location=/...
ssl.keystore.password=hhjjkk
ssl.key.password=hhjjkk
ssl.truststore.location=/...
ssl.truststore.password=hhjjkk
</code>
  * vim config/zookeeper.properties <code bash>
dataDir=/...
clientPort=2181
tickTime=2000
initLimit=10
syncLimit=5
server.1=kafka1.domaine.fr:...
server.2=kafka2.domaine.fr:...
</code>
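One ZooKeeper step that is easy to forget: each node needs a `myid` file inside `dataDir` whose content matches its `server.N` line above (1 on kafka1, 2 on kafka2). A minimal sketch; the demo directory below is hypothetical, use the real `dataDir` from zookeeper.properties:

```shell
# Write this node's ZooKeeper id into dataDir/myid.
# On kafka1 (server.1) the id is 1; on kafka2 (server.2) it would be 2.
DATADIR=/tmp/zookeeper-data-demo   # hypothetical; use the real dataDir
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"
cat "$DATADIR/myid"                # prints 1
```

Without this file a multi-node ZooKeeper ensemble refuses to start.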

=== Creating the systemd services ===
  * vim /etc/systemd/system/kafka-zookeeper.service <code bash>
[Unit]
Description=Apache Zookeeper server (Kafka)
Documentation=http://
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
Group=kafka
Environment=JAVA_HOME=/...
ExecStart=/...
ExecStop=/...

[Install]
WantedBy=multi-user.target
</code>

  * vi /etc/systemd/system/kafka.service <code bash>
[Unit]
Description=Apache Kafka server (broker)
Documentation=http://
Requires=network.target remote-fs.target
After=network.target remote-fs.target kafka-zookeeper.service

[Service]
Type=simple
User=kafka
Group=kafka
Environment=JAVA_HOME=/...
ExecStart=/...
ExecStop=/...

[Install]
WantedBy=multi-user.target
</code>
=== Start ===

  * <code bash>
systemctl daemon-reload
systemctl start kafka-zookeeper.service
systemctl start kafka.service
</code>
=== Cluster monitoring ===

  * <code bash>
git clone https://
cd kafka-manager/
./sbt clean dist
cd target/...
unzip kafka-manager-1.3.3.13.zip
cd kafka-manager-1.3.3.13
ZK_HOSTS=localhost:...
</code>
  * Go to http://

----
==== LOGSTASH ====

  * <code bash>
rpm -ivh logstash-5.5.2.rpm
</code>
=== Configuration ===

  * Import the certificate into /...
  * Convert the private key so that Logstash can read it. Error observed: <code bash>
[2017-09-04T15:...] ...
</code>
  * Solution: convert the key to PKCS#8 with openssl.
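A sketch of the PKCS#8 conversion, using a throwaway key so the command can be tried anywhere; the file names are hypothetical, point the real command at the key Logstash uses:

```shell
# Generate a disposable RSA key just to demonstrate the conversion.
openssl genrsa -out /tmp/demo.key 2048 2>/dev/null
# Re-encode it as an unencrypted PKCS#8 key, the format the Beats input reads.
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt \
  -in /tmp/demo.key -out /tmp/demo-pkcs8.key
head -n 1 /tmp/demo-pkcs8.key   # prints -----BEGIN PRIVATE KEY-----
```

A PKCS#8 key starts with `-----BEGIN PRIVATE KEY-----` instead of `-----BEGIN RSA PRIVATE KEY-----`, which is a quick way to check the result.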

  * Create the pipeline file in the Logstash configuration directory (conf.d).
  * webtest.conf (Apache logs): <code bash>
input {
  kafka {
    bootstrap_servers => '...'
    topics => ["..."]
    auto_offset_reset => "..."
    codec => json {}
  }
}
filter {
  grok {
    match => { "..." => "..." }
    remove_field => "..."
  }
  mutate {
    add_field => { "..." => "..." }
  }
  date {
    match => [ "..." ]
    remove_field => "..."
  }
  useragent {
    source => "..."
    target => "..."
    remove_field => "..."
  }
  geoip {
    source => "..."
    target => "..."
  }
}
output {
  elasticsearch {
    index => "..."
    hosts => ["..."]
    sniffing => false
  }
  stdout {
    codec => rubydebug
  }
}
</code>
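To make the grok stage concrete: it splits each Apache log line into named fields such as the client IP and the HTTP status. A rough awk illustration on a made-up line; the field names follow the common COMBINEDAPACHELOG convention, which is an assumption about the pattern elided above:

```shell
# A made-up Apache combined-log line.
line='192.0.2.10 - - [04/Sep/2017:15:45:45 +0200] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0"'
# Whitespace field 1 is what grok would call clientip, field 9 the response code.
clientip=$(printf '%s\n' "$line" | awk '{print $1}')
status=$(printf '%s\n' "$line" | awk '{print $9}')
echo "clientip=$clientip status=$status"   # prints clientip=192.0.2.10 status=200
```

The useragent and geoip filters then enrich those extracted fields (browser details from the agent string, location from the client IP).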

  * Test the configuration and syntax: <code bash>
/...
</code>

=== Start ===
  * <code bash>
systemctl start logstash
</code>

----

==== FILEBEAT ====

  * <code bash>
rpm -vi filebeat-5.5.2-x86_64.rpm</code>
=== Configuration ===

  * Import the certificate into /...
  * <code bash>
filebeat.prospectors:
- input_type: log
  paths:
    - /...
  document_type: ...

- input_type: log
  paths:
    - /...

.............
</code>

  * Example output to Logstash: <code bash>
output.logstash:
  # The Logstash hosts
  hosts: ["..."]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["..."]

  # Certificate for SSL client authentication
  #ssl.certificate: "..."

  # Client Certificate Key
  #ssl.key: "/..."

template.name: "..."
template.path: "..."
template.overwrite: ...
</code>

  * Example output to Kafka: <code bash>
output.kafka:
  # initial brokers for reading cluster metadata
  #...
  ...

  # message topic selection + partitioning
  ...
</code>
=== Start ===
  * <code bash>
systemctl start filebeat
</code>

----
==== KIBANA ====

  * <code bash>
rpm -ivh kibana-5.5.2-x86_64.rpm</code>

=== Configuration ===

  * Edit the Kibana configuration file (kibana.yml): <code bash>
server.port: ...
server.host: ...
elasticsearch.url: ...
</code>
=== Start ===
  * <code bash>
systemctl start kibana
</code>
=== ProxyPass ===

  * <code bash>yum install httpd
vim /...
<...>
ProxyPass "..." "..."
ProxyPassReverse "..." "..."
# Add the authentication of your choice (htpasswd, ldap, ...)
</...>
</code>
=== Usage ===

  * Select the indices (to do this after the initial configuration: Management > Index Patterns)
    * logstash example: **filebeat-***
    * kafka example: **webtest-logs-***
  * Time Filter field name: @timestamp

----
==== ELASTICSEARCH-HQ ====

  * <code bash>cd /local/
git clone https://
</code>

=== Configuration ===

  * Enable CORS in the Elasticsearch configuration file (elasticsearch.yml): <code bash>
.....
http.cors.allow-origin: ...
http.cors.enabled: ...
</code>
  * vim /... <code bash>
<...>
ProxyPass "..." "..."
</...>
<Directory ...>
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
</code>
==== OTHER ====

=== packetbeat ===

Warning! High CPU usage when there are many requests. <code bash>
yum install libpcap
wget https://
rpm -vi packetbeat-5.6.0-x86_64.rpm
</code>

  * Import the dashboards into Kibana: <code bash>
/...
</code>
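One way to keep the CPU cost mentioned above in check is to restrict the sniffed interface and protocols in packetbeat.yml. A minimal sketch, not from this page; the device name and port lists are assumptions to adapt:

<code bash>
# Hypothetical minimal packetbeat.yml fragment.
packetbeat.interfaces.device: any
packetbeat.protocols.http:
  ports: [80, 8080]
packetbeat.protocols.dns:
  ports: [53]
</code>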

----

grok debugger: http://

[[systemes:...]]