Elasticsearch - DateTime mapping for 'Day of Week'


I have the following property in my class:

public DateTime InsertedTimestamp { get; set; }

with the following mapping in ES:

"insertedTimestamp": {
    "type": "date",
    "format": "yyyy-MM-dd'T'HH:mm:ssZ"
},

I want to run an aggregation that returns data grouped by 'day of week', i.e. 'Monday', 'Tuesday', etc.

I understand I can use a 'script' in the aggregation call to do this (see here). However, as I understand it, using a script has a not-insignificant performance impact when there are a lot of documents (which is anticipated here; think analytics logging).
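For reference, a script-based version of that aggregation would look roughly like the sketch below. The index name `myindex` is made up for illustration, and the `doc['...'].date.dayOfWeek` accessor is the Joda-time shortcut available in older (1.x/2.x) Elasticsearch script contexts; the exact script syntax varies by version and scripting language:

```json
POST /myindex/_search
{
  "size": 0,
  "aggs": {
    "by_day_of_week": {
      "terms": {
        "script": "doc['insertedTimestamp'].date.dayOfWeek"
      }
    }
  }
}
```

This is the approach the question is trying to avoid, since the script runs once per document at query time.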

Is there a way I can map the property with 'sub-properties'? E.g. with a string I can do:

"someString": {
    "type": "string",
    "analyzer": "full_word",
    "fields": {
        "partial": {
            "search_analyzer": "full_word",
            "analyzer": "partial_word",
            "type": "string"
        },
        "partial_back": {
            "search_analyzer": "full_word",
            "analyzer": "partial_word_back",
            "type": "string"
        },
        "partial_middle": {
            "search_analyzer": "full_word",
            "analyzer": "partial_word_name",
            "type": "string"
        }
    }
},

while it all remains a single property in the class in my .NET code.

Can I do something similar to store the 'full date', 'year', 'month', 'day', etc. separately (via some sort of 'script' at index time), or do I need to add more properties to my class and map them individually? Is a transform what did this? (Transforms are deprecated, which seems to indicate separate fields are needed...)

It is possible at indexing time using the pattern_capture token filter.

You'd first define one analyzer + token filter combo per date part and assign each one to a sub-field of your date field. Each token filter captures only the group it is interested in.

{
  "settings": {
    "analysis": {
      "analyzer": {
        "year_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "year" ]
        },
        "month_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "month" ]
        },
        "day_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "day" ]
        },
        "hour_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "hour" ]
        },
        "minute_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "minute" ]
        },
        "second_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "second" ]
        }
      },
      "filter": {
        "year": {
          "type": "pattern_capture",
          "preserve_original": false,
          "patterns": [ "(\\d{4})-\\d{2}-\\d{2}[tT]\\d{2}:\\d{2}:\\d{2}[zZ]" ]
        },
        "month": {
          "type": "pattern_capture",
          "preserve_original": false,
          "patterns": [ "\\d{4}-(\\d{2})-\\d{2}[tT]\\d{2}:\\d{2}:\\d{2}[zZ]" ]
        },
        "day": {
          "type": "pattern_capture",
          "preserve_original": false,
          "patterns": [ "\\d{4}-\\d{2}-(\\d{2})[tT]\\d{2}:\\d{2}:\\d{2}[zZ]" ]
        },
        "hour": {
          "type": "pattern_capture",
          "preserve_original": false,
          "patterns": [ "\\d{4}-\\d{2}-\\d{2}[tT](\\d{2}):\\d{2}:\\d{2}[zZ]" ]
        },
        "minute": {
          "type": "pattern_capture",
          "preserve_original": false,
          "patterns": [ "\\d{4}-\\d{2}-\\d{2}[tT]\\d{2}:(\\d{2}):\\d{2}[zZ]" ]
        },
        "second": {
          "type": "pattern_capture",
          "preserve_original": false,
          "patterns": [ "\\d{4}-\\d{2}-\\d{2}[tT]\\d{2}:\\d{2}:(\\d{2})[zZ]" ]
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "date": {
          "type": "date",
          "format": "yyyy-MM-dd'T'HH:mm:ssZ",
          "fields": {
            "year": { "type": "string", "analyzer": "year_analyzer" },
            "month": { "type": "string", "analyzer": "month_analyzer" },
            "day": { "type": "string", "analyzer": "day_analyzer" },
            "hour": { "type": "string", "analyzer": "hour_analyzer" },
            "minute": { "type": "string", "analyzer": "minute_analyzer" },
            "second": { "type": "string", "analyzer": "second_analyzer" }
          }
        }
      }
    }
  }
}
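The capture groups used by those pattern_capture filters can be sanity-checked outside Elasticsearch with plain regular expressions; this is a stand-alone Python check (not Elasticsearch code), using the same six patterns against a sample timestamp:

```python
import re

# The same capture groups as the pattern_capture filters above,
# one pattern per date part.
patterns = {
    "year":   r"(\d{4})-\d{2}-\d{2}[tT]\d{2}:\d{2}:\d{2}[zZ]",
    "month":  r"\d{4}-(\d{2})-\d{2}[tT]\d{2}:\d{2}:\d{2}[zZ]",
    "day":    r"\d{4}-\d{2}-(\d{2})[tT]\d{2}:\d{2}:\d{2}[zZ]",
    "hour":   r"\d{4}-\d{2}-\d{2}[tT](\d{2}):\d{2}:\d{2}[zZ]",
    "minute": r"\d{4}-\d{2}-\d{2}[tT]\d{2}:(\d{2}):\d{2}[zZ]",
    "second": r"\d{4}-\d{2}-\d{2}[tT]\d{2}:\d{2}:(\d{2})[zZ]",
}

def extract_parts(timestamp):
    """Return the single group each pattern captures from the timestamp."""
    return {name: re.match(p, timestamp).group(1) for name, p in patterns.items()}

print(extract_parts("2016-01-22T10:01:23Z"))
# {'year': '2016', 'month': '01', 'day': '22', 'hour': '10', 'minute': '01', 'second': '23'}
```

Each filter keeps only its captured group as the token (preserve_original is false), which is exactly what the dictionary above reproduces.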

Then, when you index a date such as 2016-01-22T10:01:23Z, each of the date sub-fields will be populated with the relevant part, i.e.:

  • date: 2016-01-22T10:01:23Z
  • date.year: 2016
  • date.month: 01
  • date.day: 22
  • date.hour: 10
  • date.minute: 01
  • date.second: 23

You're then free to aggregate on whichever of those sub-fields you want.
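For example, a terms aggregation on one of the sub-fields might look like the following sketch (assuming the index built from the mapping above is called `test`; index name and aggregation name are illustrative):

```json
POST /test/_search
{
  "size": 0,
  "aggs": {
    "per_month": {
      "terms": {
        "field": "date.month"
      }
    }
  }
}
```

Since the sub-fields are indexed as ordinary string tokens, no script is evaluated at query time; the grouping cost was paid once at index time.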

