Tokenizers

The tokenizer is the component of Nominatim that is responsible for analysing names of OSM objects and queries. Nominatim provides different tokenizers that use different strategies for normalisation. This page describes how tokenizers are expected to work and the public API that needs to be implemented when creating a new tokenizer. For information on how to configure a specific tokenizer for a database, see the tokenizer chapter in the Customization Guide.

Generic Architecture

About Search Tokens

Search in Nominatim is organised around search tokens. Such a token represents a string that can be part of a search query. Tokens are used so that the search index does not need to be organised around strings. Instead, the database saves for each place which tokens match this place's name, address, house number etc. To be able to distinguish between these different types of information stored with the place, a search token also always has a certain type: name, house number, postcode etc.

During search an incoming query is transformed into an ordered list of such search tokens (or rather many lists, see below) and this list is then converted into a database query to find the right place.
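
To make this concrete, here is a purely hypothetical illustration of the idea; the token IDs, types and variable names below are invented for this example and are not part of any Nominatim API:

# Tokens stored for one place, grouped by token type (IDs are made up).
place_token_index = {
    "name": [101, 102],        # tokens matching the name "Great Russell Street"
    "housenumber": [55],       # token for the house number "21"
    "postcode": [730],         # token for the postcode "WC1B 3DG"
}

# One possible interpretation of an incoming query as an ordered token list.
query = "21 great russell street, london"
query_tokens = [
    ("housenumber", 55),
    ("name", 101),
    ("name", 102),
    ("name", 940),             # token for "london", used as address context
]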

It is the core task of the tokenizer to create, manage and assign the search tokens. The tokenizer is involved in two distinct operations:

  • at import time: scanning names of OSM objects, normalizing them and building up the list of search tokens.
  • at query time: scanning the query and returning the appropriate search tokens.

Importing

The indexer is responsible for enriching an OSM object (or place) with all data required for geocoding. It is split into two parts: the controller collects the places that require updating, enriches the place information as required and hands the place to PostgreSQL. This controller is part of the Nominatim library and written in Python. Within PostgreSQL, the placex_update trigger is responsible for filling out all secondary tables with extra geocoding information. This part is written in PL/pgSQL.

The tokenizer is involved in both parts. When the indexer prepares a place, it hands it over to the tokenizer to inspect the names and create all the search tokens applicable for the place. This usually involves updating the tokenizer's internal token lists and creating a list of all token IDs for the specific place. This list is later needed in the PL/pgSQL part where the indexer needs to add the token IDs to the appropriate search tables. To be able to communicate the list between the Python part and the PL/pgSQL trigger, the placex table contains a special JSONB column token_info, which is reserved for the exclusive use of the tokenizer.

The Python part of the tokenizer returns structured information about the tokens of a place to the indexer, which converts it to JSON and inserts it into the token_info column. The content of the column is then handed to the PL/pgSQL callbacks of the tokenizer, which extract the required information. Usually the tokenizer then removes all information from the token_info structure, so that no information is ever persistently saved in the table. All information that went in should have been processed and put into secondary tables anyway. This is, however, not a hard requirement. If the tokenizer needs to store additional information about a place permanently, it may do so in the token_info column. It must never run searches over it, though, and consequently must not create any special indexes on it.
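
As a sketch of this hand-over: the structure returned by the analyzer is entirely tokenizer-specific, so the dictionary below is only an assumed example, and the UPDATE statement merely shows where the data ends up (the real indexer uses its own update machinery):

import json

# Hypothetical return value of AbstractAnalyzer.process_place() for one place.
# Only the JSON round trip into placex.token_info is fixed by the architecture;
# the keys and values are private to each tokenizer.
token_info = {
    "names": [101, 102],     # search token IDs derived from the name tags
    "hnr_tokens": [55],      # house number token IDs
    "hnr": "21",             # normalised house number text
}

# The indexer serialises the structure and stores it in the JSONB column,
# where the tokenizer's token_*() SQL functions can read it again.
sql = "UPDATE placex SET token_info = %s::jsonb WHERE place_id = %s"
params = (json.dumps(token_info), 4711)
# cursor.execute(sql, params)   # executed as part of the indexer's update, not shown here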

Querying

At query time, Nominatim builds up multiple interpretations of the search query. Each of these interpretations is tried against the database in order of the likelihood with which they match the search query. The first interpretation that yields results wins.

The interpretations are encapsulated in the SearchDescription class. An instance of this class is created by applying a sequence of search tokens to an initially empty SearchDescription. It is the responsibility of the tokenizer to parse the search query and derive all possible sequences of search tokens. To that end, the tokenizer needs to look up matching words in its own data structures.

Tokenizer API

The following section describes the functions that need to be implemented for a custom tokenizer implementation.

Warning

This API is currently in early alpha status. It is meant to become a public API on which other tokenizers may be implemented, but it is still far from stable.

Directory Structure

Nominatim expects two files for a tokenizer:

  • nominatim/tokenizer/<NAME>_tokenizer.py containing the Python part of the implementation
  • lib-php/tokenizer/<NAME>_tokenizer.php with the PHP part of the implementation

where <NAME> is a unique name for the tokenizer consisting only of lower-case letters, digits and underscores. A tokenizer also needs to install some SQL functions. By convention, these should be placed in lib-sql/tokenizer.

If the tokenizer has a default configuration file, this should be saved as settings/<NAME>_tokenizer.<SUFFIX>.
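
For a hypothetical tokenizer called mytok, the resulting layout might look like this (the SQL file name and the .yaml suffix are just examples, not requirements):

nominatim/tokenizer/mytok_tokenizer.py      # Python part of the implementation
lib-php/tokenizer/mytok_tokenizer.php       # PHP part of the implementation
lib-sql/tokenizer/mytok_tokenizer.sql       # SQL functions (conventional location)
settings/mytok_tokenizer.yaml               # optional default configuration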

Configuration and Persistence

Tokenizers may define custom settings for their configuration. All settings must be prefixed with NOMINATIM_TOKENIZER_. Settings may be transient or persistent. Transient settings are loaded from the configuration file when Nominatim is started and may thus be changed at any time. Persistent settings are tied to a database installation and must only be read during installation time. If they are needed at runtime, they must be saved in the nominatim_properties table and loaded from there later.
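
As an illustration, a persistent setting could be written and read roughly as follows. This is only a sketch using psycopg2 directly; it assumes the nominatim_properties table has property and value columns, and a real tokenizer would typically use Nominatim's internal helpers instead:

from typing import Optional

import psycopg2

def save_persistent_setting(dsn: str, name: str, value: str) -> None:
    """ Persist a tokenizer property so that later runs see the value
        that was used at import time. The table layout is an assumption here.
    """
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("DELETE FROM nominatim_properties WHERE property = %s", (name,))
            cur.execute("INSERT INTO nominatim_properties (property, value) VALUES (%s, %s)",
                        (name, value))

def load_persistent_setting(dsn: str, name: str) -> Optional[str]:
    """ Read a previously saved property; returns None if it was never set. """
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT value FROM nominatim_properties WHERE property = %s", (name,))
            row = cur.fetchone()
            return row[0] if row else None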

The Python module

The Python module is expected to export a single factory function:

def create(dsn: str, data_dir: Path) -> AbstractTokenizer

The dsn parameter contains the DSN of the Nominatim database. The data_dir is a directory in the project directory that the tokenizer may use to save database-specific data. The function must return an instance of the tokenizer class as defined below.
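
A minimal module skeleton might therefore look like the following. MyTokenizer is a hypothetical class; it must implement every abstract method of AbstractTokenizer before Python allows it to be instantiated, the method bodies are simply elided here:

# nominatim/tokenizer/mytok_tokenizer.py -- hypothetical skeleton
from pathlib import Path

from nominatim.tokenizer.base import AbstractTokenizer

class MyTokenizer(AbstractTokenizer):
    """ Sketch of a custom tokenizer. All abstract methods of
        AbstractTokenizer still need real implementations.
    """

    def __init__(self, dsn: str, data_dir: Path) -> None:
        self.dsn = dsn            # database connection string
        self.data_dir = data_dir  # project directory for tokenizer-specific data

    # init_new_db(), init_from_project(), finalize_import(),
    # update_sql_functions(), check_database(), update_statistics(),
    # update_word_tokens() and name_analyzer() go here ...

def create(dsn: str, data_dir: Path) -> AbstractTokenizer:
    """ Factory function called by Nominatim to obtain the tokenizer instance. """
    return MyTokenizer(dsn, data_dir)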

Python Tokenizer Class

All tokenizers must inherit from nominatim.tokenizer.base.AbstractTokenizer and implement the abstract functions defined there.

Bases: ABC

The tokenizer instance is the central instance of the tokenizer in the system. There will only be a single instance of the tokenizer active at any time.

Source code in nominatim/tokenizer/base.py
class AbstractTokenizer(ABC):
    """ The tokenizer instance is the central instance of the tokenizer in
        the system. There will only be a single instance of the tokenizer
        active at any time.
    """

    @abstractmethod
    def init_new_db(self, config: Configuration, init_db: bool = True) -> None:
        """ Set up a new tokenizer for the database.

            The function should copy all necessary data into the project
            directory or save it in the property table to make sure that
            the tokenizer remains stable over updates.

            Arguments:
              config: Read-only object with configuration options.

              init_db: When set to False, then initialisation of database
                tables should be skipped. This option is only required for
                migration purposes and can be safely ignored by custom
                tokenizers.

            TODO: can we move the init_db parameter somewhere else?
        """


    @abstractmethod
    def init_from_project(self, config: Configuration) -> None:
        """ Initialise the tokenizer from an existing database setup.

            The function should load all previously saved configuration from
            the project directory and/or the property table.

            Arguments:
              config: Read-only object with configuration options.
        """


    @abstractmethod
    def finalize_import(self, config: Configuration) -> None:
        """ This function is called at the very end of an import when all
            data has been imported and indexed. The tokenizer may create
            at this point any additional indexes and data structures needed
            during query time.

            Arguments:
              config: Read-only object with configuration options.
        """


    @abstractmethod
    def update_sql_functions(self, config: Configuration) -> None:
        """ Update the SQL part of the tokenizer. This function is called
            automatically on migrations or may be called explicitly by the
            user through the `nominatim refresh --functions` command.

            The tokenizer must only update the code of the tokenizer. The
            data structures or data itself must not be changed by this function.

            Arguments:
              config: Read-only object with configuration options.
        """


    @abstractmethod
    def check_database(self, config: Configuration) -> Optional[str]:
        """ Check that the database is set up correctly and ready for being
            queried.

            Arguments:
              config: Read-only object with configuration options.

            Returns:
              If an issue was found, return an error message with the
              description of the issue as well as hints for the user on
              how to resolve the issue. If everything is okay, return `None`.
        """


    @abstractmethod
    def update_statistics(self) -> None:
        """ Recompute any tokenizer statistics necessary for efficient lookup.
            This function is meant to be called from time to time by the user
            to improve performance. However, the tokenizer must not depend on
            it to be called in order to work.
        """


    @abstractmethod
    def update_word_tokens(self) -> None:
        """ Do house-keeping on the tokenizers internal data structures.
            Remove unused word tokens, resort data etc.
        """


    @abstractmethod
    def name_analyzer(self) -> AbstractAnalyzer:
        """ Create a new analyzer for tokenizing names and queries
            using this tokenizer. Analyzers are context managers and should
            be used accordingly:

            ```
            with tokenizer.name_analyzer() as analyzer:
                analyzer.tokenize()
            ```

            When used outside the with construct, the caller must ensure to
            call the close() function before destructing the analyzer.
        """

check_database(config) abstractmethod

Check that the database is set up correctly and ready for being queried.

Parameters:

  • config (Configuration): Read-only object with configuration options. Required.

Returns:

  Optional[str]: If an issue was found, an error message with a description of the issue as well as hints for the user on how to resolve it; None if everything is okay.

finalize_import(config) abstractmethod

This function is called at the very end of an import when all data has been imported and indexed. The tokenizer may create at this point any additional indexes and data structures needed during query time.

Parameters:

  • config (Configuration): Read-only object with configuration options. Required.

init_from_project(config) abstractmethod

Initialise the tokenizer from an existing database setup.

The function should load all previously saved configuration from the project directory and/or the property table.

Parameters:

  • config (Configuration): Read-only object with configuration options. Required.

init_new_db(config, init_db=True) abstractmethod

Set up a new tokenizer for the database.

The function should copy all necessary data into the project directory or save it in the property table to make sure that the tokenizer remains stable over updates.

Parameters:

  • config (Configuration): Read-only object with configuration options. Required.
  • init_db (bool): When set to False, initialisation of database tables should be skipped. This option is only required for migration purposes and can be safely ignored by custom tokenizers. Default: True.

TODO: can we move the init_db parameter somewhere else?

name_analyzer() abstractmethod

Create a new analyzer for tokenizing names and queries using this tokenizer. Analyzers are context managers and should be used accordingly:

with tokenizer.name_analyzer() as analyzer:
    analyzer.tokenize()

When used outside of a with construct, the caller must make sure to call the close() function before the analyzer is destroyed.

update_sql_functions(config) abstractmethod

Update the SQL part of the tokenizer. This function is called automatically on migrations or may be called explicitly by the user through the nominatim refresh --functions command.

The tokenizer must only update the code of the tokenizer. The data structures or data itself must not be changed by this function.

Parameters:

  • config (Configuration): Read-only object with configuration options. Required.

update_statistics() abstractmethod

Recompute any tokenizer statistics necessary for efficient lookup. This function is meant to be called from time to time by the user to improve performance. However, the tokenizer must not depend on it to be called in order to work.

update_word_tokens() abstractmethod

Do house-keeping on the tokenizer's internal data structures. Remove unused word tokens, re-sort data, etc.

Python Analyzer Class

Bases: ABC

The analyzer provides the functions for analysing names and building the token database.

Analyzers are instantiated on a per-thread base. Access to global data structures must be synchronised accordingly.

Source code in nominatim/tokenizer/base.py
class AbstractAnalyzer(ABC):
    """ The analyzer provides the functions for analysing names and building
        the token database.

        Analyzers are instantiated on a per-thread base. Access to global data
        structures must be synchronised accordingly.
    """

    def __enter__(self) -> 'AbstractAnalyzer':
        return self


    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        self.close()


    @abstractmethod
    def close(self) -> None:
        """ Free all resources used by the analyzer.
        """


    @abstractmethod
    def get_word_token_info(self, words: List[str]) -> List[Tuple[str, str, int]]:
        """ Return token information for the given list of words.

            The function is used for testing and debugging only
            and does not need to be particularly efficient.

            Arguments:
                words: A list of words to look up the tokens for.
                       If a word starts with # it is assumed to be a full name
                       otherwise is a partial term.

            Returns:
                The function returns the list of all tuples that could be
                found for the given words. Each list entry is a tuple of
                (original word, word token, word id).
        """


    @abstractmethod
    def normalize_postcode(self, postcode: str) -> str:
        """ Convert the postcode to its standardized form.

            This function must yield exactly the same result as the SQL function
            `token_normalized_postcode()`.

            Arguments:
                postcode: The postcode to be normalized.

            Returns:
                The given postcode after normalization.
        """


    @abstractmethod
    def update_postcodes_from_db(self) -> None:
        """ Update the tokenizer's postcode tokens from the current content
            of the `location_postcode` table.
        """


    @abstractmethod
    def update_special_phrases(self,
                               phrases: Iterable[Tuple[str, str, str, str]],
                               should_replace: bool) -> None:
        """ Update the tokenizer's special phrase tokens from the given
            list of special phrases.

            Arguments:
                phrases: The new list of special phrases. Each entry is
                         a tuple of (phrase, class, type, operator).
                should_replace: If true, replace the current list of phrases.
                                When false, just add the given phrases to the
                                ones that already exist.
        """


    @abstractmethod
    def add_country_names(self, country_code: str, names: Dict[str, str]) -> None:
        """ Add the given names to the tokenizer's list of country tokens.

            Arguments:
                country_code: two-letter country code for the country the names
                              refer to.
                names: Dictionary of name type to name.
        """


    @abstractmethod
    def process_place(self, place: PlaceInfo) -> Any:
        """ Extract tokens for the given place and compute the
            information to be handed to the PL/pgSQL processor for building
            the search index.

            Arguments:
                place: Place information retrieved from the database.

            Returns:
                A JSON-serialisable structure that will be handed into
                the database via the `token_info` field.
        """

add_country_names(country_code, names) abstractmethod

Add the given names to the tokenizer's list of country tokens.

Parameters:

  • country_code (str): Two-letter country code for the country the names refer to. Required.
  • names (Dict[str, str]): Dictionary of name type to name. Required.

close() abstractmethod

Free all resources used by the analyzer.

get_word_token_info(words) abstractmethod

Return token information for the given list of words.

The function is used for testing and debugging only and does not need to be particularly efficient.

Parameters:

  • words (List[str]): A list of words to look up the tokens for. If a word starts with #, it is assumed to be a full name, otherwise a partial term. Required.

Returns:

  List[Tuple[str, str, int]]: The list of all tuples that could be found for the given words. Each list entry is a tuple of (original word, word token, word id).

normalize_postcode(postcode) abstractmethod

Convert the postcode to its standardized form.

This function must yield exactly the same result as the SQL function token_normalized_postcode().

Parameters:

  • postcode (str): The postcode to be normalized. Required.

Returns:

  str: The given postcode after normalization.

process_place(place) abstractmethod

Extract tokens for the given place and compute the information to be handed to the PL/pgSQL processor for building the search index.

Parameters:

  • place (PlaceInfo): Place information retrieved from the database. Required.

Returns:

  Any: A JSON-serialisable structure that will be handed into the database via the token_info field.

update_postcodes_from_db() abstractmethod

Update the tokenizer's postcode tokens from the current content of the location_postcode table.

update_special_phrases(phrases, should_replace) abstractmethod

Update the tokenizer's special phrase tokens from the given list of special phrases.

Parameters:

  • phrases (Iterable[Tuple[str, str, str, str]]): The new list of special phrases. Each entry is a tuple of (phrase, class, type, operator). Required.
  • should_replace (bool): If true, replace the current list of phrases. When false, just add the given phrases to the ones that already exist. Required.

PL/pgSQL Functions

The tokenizer must provide access functions for the token_info column to the indexer, which extracts the necessary information for the global search tables. If the tokenizer needs additional SQL functions for private use, then these functions must be prefixed with token_ in order to ensure that there are no naming conflicts with the SQL indexer code.

The following functions are expected:

FUNCTION token_get_name_search_tokens(info JSONB) RETURNS INTEGER[]

Return an array of token IDs of search terms that should match the name(s) for the given place. These tokens are used to look up the place by name and, where the place functions as part of an address for another place, by address. Must return NULL when the place has no name.

FUNCTION token_get_name_match_tokens(info JSONB) RETURNS INTEGER[]

Return an array of token IDs of full names of the place that should be used to match addresses. The list of match tokens is usually more strict than search tokens as it is used to find a match between two OSM tag values which are expected to contain matching full names. Partial terms should not be used for match tokens. Must return NULL when the place has no name.

FUNCTION token_get_housenumber_search_tokens(info JSONB) RETURNS INTEGER[]

Return an array of token IDs of house number tokens that apply to the place. Note that a place may have multiple house numbers, for example when apartments each have their own number. Must be NULL when the place has no house numbers.

FUNCTION token_normalized_housenumber(info JSONB) RETURNS TEXT

Return the house number(s) in the normalized form that can be matched against a house number token text. If a place has multiple house numbers they must be listed with a semicolon as delimiter. Must be NULL when the place has no house numbers.

FUNCTION token_matches_street(info JSONB, street_tokens INTEGER[]) RETURNS BOOLEAN

Check if the given tokens (previously saved from token_get_name_match_tokens()) match against the addr:street tag name. Must return either NULL or FALSE when the place has no addr:street tag.

FUNCTION token_matches_place(info JSONB, place_tokens INTEGER[]) RETURNS BOOLEAN

Check if the given tokens (previously saved from token_get_name_match_tokens()) match against the addr:place tag name. Must return either NULL or FALSE when the place has no addr:place tag.

FUNCTION token_addr_place_search_tokens(info JSONB) RETURNS INTEGER[]

Return the search token IDs extracted from the addr:place tag. These tokens are used for searches by address when no matching place can be found in the database. Must be NULL when the place has no addr:place tag.

FUNCTION token_get_address_keys(info JSONB) RETURNS SETOF TEXT

Return the set of keys for which address information is provided. This should correspond to the list of (relevant) addr:* tags with the addr: prefix removed or the keys used in the address dictionary of the place info.

FUNCTION token_get_address_search_tokens(info JSONB, key TEXT) RETURNS INTEGER[]

Return the array of search tokens for the given address part. key can be expected to be one of those returned with token_get_address_keys(). The search tokens are added to the address search vector of the place, when no corresponding OSM object could be found for the given address part from which to copy the name information.

FUNCTION token_matches_address(info JSONB, key TEXT, tokens INTEGER[])

Check if the given tokens match against the address part key.

Warning: the tokens that are handed in are the lists previously saved from token_get_name_search_tokens(), not from the match token list. This is an historical oddity which will be fixed at some point in the future. Currently, tokenizers are encouraged to make sure that matching works against both the search token list and the match token list.

FUNCTION token_get_postcode(info JSONB) RETURNS TEXT

Return the postcode for the object, if any exists. The postcode must be in the form that should also be presented to the end-user.

FUNCTION token_strip_info(info JSONB) RETURNS JSONB

Return the part of the token_info field that should be stored in the database permanently. The indexer calls this function when all processing is done and replaces the content of the token_info column with the returned value before the trigger stores the information in the database. May return NULL if no information should be stored permanently.

PHP Tokenizer class

The PHP tokenizer class is instantiated once per request and responsible for analyzing the incoming query. Multiple requests may be in flight in parallel.

The class is expected to be found under the name of \Nominatim\Tokenizer. To find the class, the PHP code includes the file tokenizer/tokenizer.php in the project directory. This file must be created when the tokenizer is first set up on import. The file should initialize any configuration variables by setting PHP constants and then require the file with the actual implementation of the tokenizer.

The tokenizer class must implement the following functions:

public function __construct(object &$oDB)

The constructor of the class receives a database connection that can be used to query persistent data in the database.

public function checkStatus()

Check that the tokenizer can access its persistent data structures. If there is an issue, throw an \Exception.

public function normalizeString(string $sTerm) : string

Normalize a string to a form to be used for comparisons when reordering results. Nominatim reweighs results by how well the final display string matches the actual query. Before comparing result and query, names and query are normalised with this function. The tokenizer can thus remove all properties that should not be taken into account for reweighing, e.g. special characters or case.

public function tokensForSpecialTerm(string $sTerm) : array

Return the list of special term tokens that match the given term.

public function extractTokensFromPhrases(array &$aPhrases) : TokenList

Parse the given phrases, splitting them into word lists and retrieve the matching tokens.

The phrase array may take on two forms. In unstructured searches (using the q= parameter) the search query is split at the commas and the elements are put into a sorted list. For structured searches the phrase array is an associative array where the key designates the type of the term (street, city, county, etc.). The tokenizer may ignore the phrase type at this stage of parsing. Matching the phrase type with the appropriate search token type is done later when the SearchDescription is built.

For each phrase in the list of phrases, the function must analyse the phrase string and then call setWordSets() to communicate the result of the analysis. A word set is a list of strings, where each string refers to a search token. A phrase may have multiple interpretations. Therefore a list of word sets is usually attached to the phrase. The search tokens themselves are returned by the function in an associative array, where the key corresponds to the strings given in the word sets. The value is a list of search tokens. Thus a single string in the list of word sets may refer to multiple search tokens.