I'm using a Logstash plug-in in a Java app (with logback) to forward logs to my Logstash server. I've set up a filter definition as follows:
input {
  tcp {
    port => 2856
    codec => json_lines
  }
}

filter {
  mutate {
    convert => {
      "tenantId" => "integer"
      "userId" => "integer"
    }
  }
}
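To illustrate what the convert does, here is a hypothetical event as the json_lines codec might receive it (field names and values made up), assuming the app serializes the IDs as strings, followed by the same event after the mutate filter:

Before the filter:
{ "message" : "user signed in", "tenantId" : "42", "userId" : "1001" }

After convert:
{ "message" : "user signed in", "tenantId" : 42, "userId" : 1001 }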
Logs are forwarded on to Elasticsearch using the following config:
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    user => "user"
    password => "secure"
  }
}
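Since no index option is given, the elasticsearch output falls back to its default daily index pattern; pinning it explicitly would look like the sketch below (the pattern shown is just the default):

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    user => "user"
    password => "secure"
    # explicit index name, same as the plugin's default
    index => "logstash-%{+YYYY.MM.dd}"
  }
}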
The index is just the default logstash- pattern, and when I inspect its mappings in Elasticsearch, I see the following:
"logstash-2016.04.25" : {
"mappings" : {
"logs" : {
"_all" : {
"omit_norms" : true,
"enabled" : true
},
"properties" : {
...
"userId" : {
"type" : "long"
},
"tenantId" : {
"type" : "long"
},
...
}
}
}
}
So I can see that the fields are being mapped with an appropriate type, but when I check Kibana they show up as neither analyzed nor typed as long. What am I missing?
Assuming these are new fields in a given index, you'll need to tell Kibana to refresh its field listing.
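Depending on your Kibana version, that refresh lives under Settings > Indices (Kibana 4) or Management > Index Patterns (Kibana 5+): open your logstash-* index pattern and click the refresh fields button. Also keep in mind that the mapping of a field in an existing index can't be changed, so the converted type will only appear in indices created after the filter change (for example, the next day's logstash- index).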