Time to timestamp issue

Hi all,
I use a date filter to extract the @Malahari timestamp into a field of its own called "@Praveena".

message ==> "@Malahari": "2017-12-04T17:44:34"

"@Praveena" => 2017-12-28T12:29:22.096Z

.conf file content:

input {
  file {
    path => "/home/sdc/PycharmProjects/Kibana_Pro/utility/MAYOPETMR01_2017-12-04.gz.log"
    type => "log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  date {
    match => ["@Malahari", "EEE MMM dd HH:mm:ss YYYY"]
    target => "@Praveena"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "gesyslog_test"
    document_type => "log"
  }
  stdout { codec => rubydebug }
}

Output:

{
  "type" => "log",
  "message" => "{"@Malahari": "2017-12-04T17:44:34", "@Krishang": "2212884484", "text": "Exception Class: Unknown Severity: Unknown\nFunction: ", "@Ashna": "MA"NSP SCP:RfHubCanHWO::RfBias 5462", "detail": {"view_Level": "4", "seq_Num": "0", "name": null, "format": "1", "h_Name": "prtte1_Seq": "4767637676"}, "@Neeha": "log"}",
  "@Nainisha" => "1",
  "path" => "/home/sdc/PycharmProjects/Kibana_Pro/utility/4444444_2017-12-04.gz.log",
  "@Praveena" => 2017-12-28T12:29:22.096Z,
  "host" => "sdc-VirtualBox"
}

The date filter is not replacing it; @Praveena still shows the current time instead of the @Malahari time. Please help me.

The @Malahari field obviously doesn't match the "EEE MMM dd HH:mm:ss YYYY" pattern you've given. Try "ISO8601" instead.
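A minimal sketch of the suggested change, reusing the field and target names from the config above:

```
filter {
  date {
    match => ["@Malahari", "ISO8601"]
    target => "@Praveena"
  }
}
```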

Thanks for replying. The same issue is occurring after changing to "ISO8601".

Log file test_site.log

{"@Neeha": "Log", "@Joshnav": "yeryryryr", "@Ashna": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "162", "host_Name": "test", "format": "1", "seq_Num": "1"}, "@Malahari": "2017-12-06T05:01:13", "text": "Signal 15 was received, causing a system shutdown.", "@Krishang": "501370064"}
{"@Neeha": "LOG", "@Joshnav": "Uyryryryry", "@Ashna": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "130", "host_Name": "test", "format": "1", "seq_Num": "2"}, "@Malahari": "2017-12-06T05:01:13", "text": "start script failed", "@Krishang": "0"}
{"@Neeha": "LOG", "@Joshnav":" yyyy", "@Ashna": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "225", "host_Name": "test", "format": "1", "seq_Num": "3"}, "@Malahari": "2017-12-06T05:01:13", "text": "Exception Class: Unknown\nFunction: yryryyr", "@Krishang": "200002379"}
{"@Neeha": "Log", "@Joshnav": "testts1", "@Ashna": "666666", "detail": {"view_Level": "4", "time_Seq": "1512536473", "suit_Name": null, "tag": "221", "host_Name": "test", "format": "1", "seq_Num": "4"}, "@Malahari": "2017-12-06T05:01:13", "text": "Exception Class: Unknown\nFunction", "@Krishang": "200002379"}
.conf file content:

sdc@sdc-VirtualBox:~/PycharmProjects/Kibana_Pro/utility$ cat /home/sdc/PycharmProjects/Kibana_Pro/utility/logstash.conf
input {
  file {
    path => "/home/sdc/PycharmProjects/Kibana_Pro/utility/test_site.log"
    type => "log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  date {
    match => ["@Malahari", "ISO8601"]
    #match => ["@Malahari", "yyyy-MM-dd HH:mm:ss"]
    target => "@Praveena"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "gesyslog_test"
    document_type => "log"
  }
  stdout { codec => rubydebug }
}
Output:

root@sdc-VirtualBox:~# /usr/share/logstash/bin/logstash --path.settings=/etc/logstash -f /home/sdc/PycharmProjects/Kibana_Pro/utility/logstash.conf --path.data /usr/share/logstash/data
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
{
  "type" => "log",
  "@Nainisha" => "1",
  "path" => "/home/sdc/PycharmProjects/Kibana_Pro/utility/test_site.log",
  "host" => "sdc-VirtualBox",
  "@Praveena" => 2017-12-29T11:42:23.968Z,
  "message" => .....................
}

Could you please suggest what needs to be corrected? Thanks,
Subash

Can someone look at this issue?

It doesn't look like you're parsing the JSON input in any way (hence, there is no @Malahari field to parse). What does the full output from the stdout plugin look like?

Full output for one entry:
{
  "type" => "log",
  "message" => "{"@Malahari": "2017-12-04T17:44:34", "@Krishang": "2212884484", "text": "Exception Class: Unknown Severity: Unknown\nFunction: ", "@Ashna": "MA"NSP SCP:RfHubCanHWO::RfBias 5462", "detail": {"view_Level": "4", "seq_Num": "0", "name": null, "format": "1", "h_Name": "prtte1_Seq": "4767637676"}, "@Neeha": "log"}",
  "@Nainisha" => "1",
  "path" => "/home/sdc/PycharmProjects/Kibana_Pro/utility/4444444_2017-12-04.gz.log",
  "@Praveena" => 2017-12-28T12:29:22.096Z,
  "host" => "sdc-VirtualBox"
}

Right, no @Malahari field. Your event only has type, message, @Nainisha, path, @Praveena, and host fields. Use a json or json_lines codec in your file input, or process the message field with a json filter.
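A sketch of the json-filter variant of this suggestion, assuming the one-JSON-object-per-line log format shown above. The json filter parses the message field so @Malahari becomes a real event field that the date filter can then match:

```
filter {
  # Parse the JSON document in the message field into top-level event fields.
  json {
    source => "message"
  }
  # Now @Malahari exists and can be converted into @Praveena.
  date {
    match => ["@Malahari", "ISO8601"]
    target => "@Praveena"
  }
}
```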

Thanks, Magnus.

I have updated it accordingly. Now I am getting the error below in /var/log/logstash/logstash-plain.log, and Logstash is stuck. Please advise.
[2018-01-12T18:14:41,753][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[gesyslog_test][3] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[gesyslog_test][3]] containing [2] requests]"})

[2018-01-12T18:14:41,753][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>4}

Hi @Omisha, try this. I don't know whether it will work in your case, but I hope it does!

date {
  match => ["@Malahari", "UNIX_MS"]
  target => "Time"
}

Thanks, Krunal.
Now I am getting the error below in /var/log/logstash/logstash-plain.log:
===============================================================================

[2018-01-12T19:13:43,359][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[gesyslog_test][4] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[gesyslog_test][4]] containing [29] requests]"})

[2018-01-12T19:13:43,359][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>29}

[2018-01-12T19:13:43,401][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>500, :url=>"http://localhost:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}

Your ES cluster is in bad health. Look in the ES logs to find out more.
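As a generic starting point (these commands need a running Elasticsearch on localhost:9200, as in the config above), the cluster and shard state behind an "unavailable_shards_exception" can be inspected with the standard health and cat APIs:

```
# Overall cluster status (green/yellow/red) and counts of unassigned shards.
curl 'localhost:9200/_cluster/health?pretty'

# Per-shard state for the index the bulk requests are failing against.
curl 'localhost:9200/_cat/shards/gesyslog_test?v'
```

A red status or UNASSIGNED primary shards here would match the "primary shard is not active" errors in the log above.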

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.