IT Cloud. Eugeny Shtoltc
{
host => localhost
index => "log-%{+YYYY.MM.dd}"
}
}
}
Here the Elasticsearch index (a database, if we draw an analogy with SQL) changes every day. You do not need to create a new index explicitly – this is how NoSQL databases work, since there is no strict requirement to describe the structure – the properties and their types. It is still recommended to describe the structure, though, otherwise all fields will be stored as string values unless a number was supplied. To display Elasticsearch data, Kibana is used – a WEB-ui plugin written in AngularJS. To plot a timeline on its charts, at least one field must be described with the date type, and aggregate functions need a numeric one, be it an integer or a floating-point number. Also, if new fields are added, indexing and displaying them requires re-indexing the entire index, so the most complete description of the structure up front helps to avoid the very time-consuming reindexing operation.
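As a sketch of what such an explicit structure might look like, the mapping below declares a date field for the timeline and a numeric field for aggregations; the index and field names here are illustrative, not taken from the original text.

```python
import json

# Hypothetical explicit mapping for a daily log index: without it,
# Elasticsearch would infer types itself and treat unrecognized
# values as strings.
mapping = {
    "mappings": {
        "properties": {
            "@timestamp": {"type": "date"},    # needed for Kibana timelines
            "duration_ms": {"type": "float"},  # numeric field for aggregations
            "message": {"type": "text"},
        }
    }
}

# The equivalent HTTP call would be roughly:
#   curl -X PUT localhost:9200/log-2021.03.05 \
#        -H 'Content-Type: application/json' -d "$BODY"
body = json.dumps(mapping)
print(body)
```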
The index is split by day to speed up Elasticsearch; in Kibana several indices can be selected by a pattern, here log-*, and splitting also removes the limitation of one million documents per index.
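A minimal sketch of both sides of this convention: how the daily name that Logstash's "log-%{+YYYY.MM.dd}" sprintf reference expands to can be built, and how the log-* pattern matches it (a simplified re-implementation, not Logstash's actual code).

```python
from datetime import datetime, timezone
from fnmatch import fnmatch

def daily_index(prefix="log-", when=None):
    """Build a daily index name, e.g. log-2021.03.05."""
    when = when or datetime.now(timezone.utc)
    return prefix + when.strftime("%Y.%m.%d")

name = daily_index(when=datetime(2021, 3, 5, tzinfo=timezone.utc))
print(name)                    # log-2021.03.05
print(fnmatch(name, "log-*"))  # True: matches the Kibana index pattern
```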
Consider a more detailed Logstash output plugin:
output {
if [type] == "Info" {
elasticsearch {
cluster => elasticsearch
action => "create"
hosts => ["localhost:9200"]
index => "log-%{+YYYY.MM.dd}"
document_type => ....
document_id => "%{id}"
}
}
}
Interaction with ElasticSearch is carried out through the JSON REST API, for which drivers exist for most modern languages. But in order not to write code, we will use the Logstash utility, which also knows how to convert text data to JSON based on regular expressions. There are also predefined templates (like character classes in regular expressions), such as %{IP:client} and others, which can be viewed at https://github.com/elastic/logstash/tree/v1.1.9/patterns. For standard services with standard settings there are many ready-made configs on the Internet, for example, for NGINX – https://github.com/zooniverse/static/blob/master/logstash-Nginx.conf. This is described in more detail in the article https://habr.com/post/165059/.
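Under the hood, a grok template like %{IP:client} compiles down to a named-group regular expression. A rough Python equivalent for a pattern such as %{IP:client} %{WORD:method} %{URIPATH:path} might look like this (the log line format is made up for illustration):

```python
import re

# Simplified stand-ins for the IP, WORD and URIPATH grok patterns,
# each captured into a named group just as grok names its fields.
LINE = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r"(?P<method>\w+)\s+"
    r"(?P<path>/\S*)"
)

m = LINE.match("203.0.113.7 GET /index.html")
print(m.groupdict())
```

The resulting dict of named groups is what Logstash would emit as JSON fields for Elasticsearch.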
ElasticSearch is a NoSQL database, so you don't need to specify a format (a set of fields and their types). It still needs one for searching, though, so it infers it itself, and with each format change re-indexing occurs, during which the index cannot be used. To maintain a unified structure, the Serilog logger (.NET) provides an EventType field in which you can encode a set of fields and their types; for other loggers you will have to implement this yourself. To analyze the logs of a microservice application, it is important to set an ID while the request is being executed – a request ID that stays unchanged and is passed from microservice to microservice, so that the entire path of the request can be traced.
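A minimal sketch of such request-ID propagation, assuming the common convention of an X-Request-ID HTTP header (the header name and function are illustrative, not from the original text):

```python
import uuid

def outgoing_headers(incoming: dict) -> dict:
    """Reuse an incoming X-Request-ID if present, otherwise mint one,
    so every downstream microservice logs the same id."""
    request_id = incoming.get("X-Request-ID") or str(uuid.uuid4())
    return {"X-Request-ID": request_id}

first = outgoing_headers({})      # entry point mints a new id
second = outgoing_headers(first)  # downstream service passes it on
print(first["X-Request-ID"] == second["X-Request-ID"])  # True
```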
Install ElasticSearch (https://habr.com/post/280488/) and check that curl -X GET localhost:9200 works:
sudo sysctl -w vm.max_map_count=262144
$ curl 'localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open graylog_0 h2NICPMTQlqQRZhfkvsXRw 4 0 0 0 1kb 1kb
green open .kibana_1 iMJl7vyOTuu1eG8DlWl1OQ 1 0 3 0 11.9kb 11.9kb
yellow open indexname le87KQZwT22lFll8LSRdjw 5 1 1 0 4.5kb 4.5kb
yellow open db i6I2DmplQ7O40AUzyA-a6A 5 1 0 0 1.2kb 1.2kb
Create an entry in the blog database and post table: curl -X PUT "$ES_URL/blog/post/1?pretty" -d '
ElasticSearch search engine
In the previous section, we looked at the ELK stack, which ElasticSearch, Logstash, and Kibana make up. The full set is often extended with Filebeat, an agent tailored to collecting text logs and shipping them to Logstash. Although Logstash performs its task quickly, when it is not needed it is omitted, and logs in JSON format are sent via the bulk upload API directly to ElasticSearch.
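A sketch of the newline-delimited body that Elasticsearch's _bulk API expects when logs are shipped directly: an action line followed by a source line per document (the index and document contents here are illustrative).

```python
import json

def bulk_body(index: str, docs: list) -> str:
    """Build an NDJSON payload for POST /_bulk: one action line plus
    one source line per document, with a required trailing newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = bulk_body("log-2021.03.05", [{"level": "Info", "msg": "started"}])
print(body)
```

The payload would then be sent with something like curl -XPOST localhost:9200/_bulk -H 'Content-Type: application/x-ndjson' --data-binary "@payload".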
If we have an application, pure ElasticSearch is used as a search engine, and Kibana serves as a tool for writing and debugging queries – the Dev Tools block. Although relational databases have a long history of development, the principle remains that the more normalized the data, the slower it gets, because the tables have to be joined on every request. This problem is solved by creating a View that stores the resulting selection. But although modern databases have acquired impressive functionality, up to full-text search, they still cannot compare with search engines in search efficiency and functionality. I will give an example from work: several tables with metrics are combined in a query into one, and a search is performed by the parameters selected in the admin panel, such as a date range, a page in pagination, and a term in a text column. This is not a lot: at the output we get a table of half a million rows, and the search by date and by part of a row fits into milliseconds. But pagination slows down: on the initial pages its request takes about two minutes, on the final pages – more than four. At the same time, combining the query for the data itself with the pagination count head-on will not work. And the same query, while not yet optimized, is executed in ElasticSearch in 22 milliseconds and returns both the data and the total count for pagination.
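A sketch of a search request body for the scenario described above: a date-range filter, a text match, and from/size pagination; the response's hits.total then gives the total count in the same round trip. The field names and page numbers are illustrative.

```python
# Hypothetical admin-panel query: January 2021, pages of 50 rows,
# currently on page 3, matching "error" in the message column.
page, page_size = 3, 50
query = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"@timestamp": {"gte": "2021-01-01", "lt": "2021-02-01"}}}
            ],
            "must": [{"match": {"message": "error"}}],
        }
    },
    "from": (page - 1) * page_size,  # offset of the first hit to return
    "size": page_size,
}
print(query["from"])  # 100
```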
It is worth warning the reader against rashly abandoning a relational database: although ElasticSearch contains a NoSQL database, it is intended solely for search and does not contain full-fledged tools for normalization and recovery.
ElasticSearch does not ship a console client in the standard delivery – all interaction is carried out via HTTP calls: GET, PUT and DELETE. Here is an example using the curl program (command) from the Linux BASH shell:
# Create records (table and database are created automatically)
curl -XPUT mydb/mytable/1 -d '{
....
}'
# Retrieve a value by id
curl -XGET mydb/mytable/1
# Simple search
curl -XGET mydb/_search -d '{
"query": {
"match": {
"name": "my"
}
}
}'
# Delete the database
curl -XDELETE mydb
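For reference, a sketch of how the search response is typically unpacked: documents live under hits.hits[]._source, and hits.total carries the match count. The response below is hand-written sample data, not output from a live server.

```python
# Sample Elasticsearch search response (illustrative, not real output).
response = {
    "hits": {
        "total": {"value": 2, "relation": "eq"},
        "hits": [
            {"_id": "1", "_source": {"name": "my first post"}},
            {"_id": "2", "_source": {"name": "my second post"}},
        ],
    }
}

# Extract the documents and the total hit count from the envelope.
docs = [hit["_source"] for hit in response["hits"]["hits"]]
total = response["hits"]["total"]["value"]
print(total, docs[0]["name"])
```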
Cloud systems as a source of continuous scaling: Google Cloud and Amazon AWS
In addition to hosting and renting a server, in particular a virtual VPS, you can use cloud (SaaS, Software as a Service) solutions, that is, run our WEB application(s) only through the control panel using a ready-made infrastructure. This approach has both pros and cons, which depend on the customer's business. While from the technical side the server is simply remote – we can connect to it, and as a bonus we get an administration panel – for the developer the differences are more significant. We will divide projects into three groups according to the place of deployment: on hosting, in your own data center (or on a VPS), and in the cloud. Companies using hosting, due to the significant restrictions imposed on development – the inability to install their own software and the instability and limited size of the provided capacity – mainly specialize in custom (streaming) development of sites and stores; due to the small requirements for the qualifications of developers and the undemanding knowledge of infrastructure, the market is ready to pay for their labor at a minimum. The second group includes companies that implement completed projects, but whose developers are excluded from working with the infrastructure by the presence of system administrators, build engineers, DevOps