# Writing custom sanitizer and token analysis modules for the ICU tokenizer
The ICU tokenizer provides a highly customizable method to pre-process and normalize the name information of the input data before it is added to the search index. It comes with a selection of sanitizers and token analyzers which you can use to adapt your installation to your needs. If the provided modules are not enough, you can also provide your own implementations. This section describes the API of sanitizers and token analysis.
**Warning:** This API is currently in early alpha status. While this API is meant to be a public API on which other sanitizers and token analyzers may be implemented, it is not guaranteed to be stable at the moment.
## Using non-standard sanitizers and token analyzers
Sanitizer names (in the `step` property) and token analysis names (in the `analyzer` property) may refer to externally supplied modules. There are two ways to include external modules: through a library or from the project directory.

To include a module from a library, use the absolute import path as name and make sure the library can be found in your `PYTHONPATH`.

To use a custom module without creating a library, you can put the module somewhere in your project directory and then use the relative path to the file. Include the whole name of the file including the `.py` ending.
## Custom sanitizer modules
A sanitizer module must export a single factory function `create` with the following signature:

```python
def create(config: SanitizerConfig) -> Callable[[ProcessInfo], None]
```
The function receives the custom configuration for the sanitizer and must return a callable (function or class) that transforms the name and address terms of a place. When a place is processed, a `ProcessInfo` object is created from the information that was queried from the database. This object is sequentially handed to each configured sanitizer, so that each sanitizer receives the result of processing from the previous sanitizer. After the last sanitizer is finished, the resulting name and address lists are forwarded to the token analysis module.
Sanitizer functions are instantiated once and then called for each place that is imported or updated. They don't need to be thread-safe. If multi-threading is used, each thread creates its own instance of the function.
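For orientation, here is a minimal sketch of that factory pattern. The lower-casing transform is invented purely for illustration and is not an existing Nominatim sanitizer:

```python
# Minimal sanitizer module sketch: create() builds the callable once,
# the callable is then invoked for every place.
def create(config):
    def _sanitize(obj):
        # 'obj' is the ProcessInfo described above. Lower-casing the
        # names is a placeholder transformation, not a real sanitizer.
        for name in obj.names:
            name.name = name.name.lower()

    return _sanitize
```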
### Sanitizer configuration
The `SanitizerConfig` class is a read-only dictionary with configuration options for the sanitizer. In addition to the usual dictionary functions, the class provides accessors to standard sanitizer options that are used by many of the sanitizers.
#### `get_bool(param, default=None)`

Extract a configuration parameter as a boolean.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `param` | `str` | Name of the configuration parameter. The parameter must contain one of the YAML boolean values or a `UsageError` will be raised. | *required* |
| `default` | `Optional[bool]` | Value to return when the parameter is missing. When set to `None`, the parameter must be present. | `None` |

Returns:

| Type | Description |
|---|---|
| `bool` | Boolean value of the given parameter. |
#### `get_delimiter(default=',;')`

Return the 'delimiters' parameter in the configuration as a compiled regular expression that can be used to split strings on these delimiters.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `default` | `str` | Delimiters to be used when the 'delimiters' parameter is not explicitly configured. | `',;'` |

Returns:

| Type | Description |
|---|---|
| `Pattern[str]` | A regular expression pattern which can be used to split a string. The regular expression makes sure that the resulting names are stripped and that repeated delimiters are ignored. It may still create empty fields on occasion. The code needs to filter those. |
#### `get_filter(param, default='PASS_ALL')`

Return a filter function for the given parameter of the sanitizer configuration.

The value provided for the parameter in the sanitizer configuration should be a string or a list of strings, where each string is a regular expression. These regular expressions will later be used by the filter function to filter strings.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `param` | `str` | The parameter for which the filter function will be created. | *required* |
| `default` | `Union[str, Sequence[str]]` | Defines the behaviour of the filter function if the parameter is missing in the sanitizer configuration. Takes the string `PASS_ALL` or `FAIL_ALL`, or a list of strings; any other string or an empty list raises a `ValueError`. With `PASS_ALL` the filter function lets all strings pass, with `FAIL_ALL` it lets no strings pass. If a list of strings is provided, each string is treated as a regular expression and used by the filter function. By default all strings pass. | `'PASS_ALL'` |

Returns:

| Type | Description |
|---|---|
| `Callable[[str], bool]` | A filter function that takes a target string as the argument and returns True if it fully matches any of the regular expressions, otherwise returns False. |
#### `get_string_list(param, default=tuple())`

Extract a configuration parameter as a string list.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `param` | `str` | Name of the configuration parameter. | *required* |
| `default` | `Sequence[str]` | Takes a tuple or list of strings which will be returned if the parameter is missing in the sanitizer configuration. Note that if this default parameter is not provided, an empty list is returned. | `tuple()` |

Returns:

| Type | Description |
|---|---|
| `Sequence[str]` | If the parameter value is a simple string, it is returned as a one-item list. If the parameter value does not exist, the given default is returned. If the parameter value is a list, it is checked to contain only strings before being returned. |
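The following sketch shows how these accessors might be combined in a sanitizer. The parameter names (`enabled`, `kinds`, `drop-suffixes`) and the overall behaviour are invented for this example:

```python
def create(config):
    # All parameter names below are hypothetical.
    enabled = config.get_bool('enabled', default=True)
    split_re = config.get_delimiter()            # compiled regex, default ',;'
    keep_kind = config.get_filter('kinds')       # lets everything pass by default
    drop_suffixes = config.get_string_list('drop-suffixes')

    def _sanitize(obj):
        if not enabled:
            return

        new_names = []
        for name in obj.names:
            if name.suffix in drop_suffixes:
                continue                         # drop this name entirely
            if not keep_kind(name.kind):
                new_names.append(name)           # keep untouched
                continue
            # Split multi-valued names and filter out the empty fields
            # that the delimiter pattern may still produce.
            for part in split_re.split(name.name):
                if part:
                    new_names.append(name.clone(name=part))
        obj.names = new_names

    return _sanitize
```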
### The main filter function of the sanitizer

The filter function receives a single object of type `ProcessInfo` which has three members:

* `place: PlaceInfo`: read-only information about the place being processed. See PlaceInfo below.
* `names: List[PlaceName]`: the current list of names for the place.
* `address: List[PlaceName]`: the current list of address names for the place.

While the `place` member is provided for information only, the `names` and `address` lists are meant to be manipulated by the sanitizer. It may add and remove entries, change information within a single entry (for example by adding extra attributes) or completely replace the list with a different one.
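As a sketch of such a manipulation (with invented semantics), the following filter function copies address parts of kind `place` into the name list so that they are indexed as names as well:

```python
def create(config):
    def _sanitize(obj):
        # Purely illustrative: additionally treat 'place' address parts
        # as names of the place.
        for addr in obj.address:
            if addr.kind == 'place':
                obj.names.append(addr.clone(kind='name'))

    return _sanitize
```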
### PlaceInfo - information about the place

This data class contains all information the tokenizer can access about a place.
#### `address: Optional[Mapping[str, str]]` *(property)*

A dictionary with the address elements of the place. The key usually corresponds to the suffix part of the key of an OSM 'addr:' or 'isin:' tag. There are also some special keys like `country` or `country_code` which merge OSM keys that contain the same information. See Import Styles for details.

The property may be None if the place has no address information.
#### `centroid: Optional[Tuple[float, float]]` *(property)*

A center point of the place in WGS84. May be None when the geometry of the place is unknown.
#### `country_code: Optional[str]` *(property)*

The country code of the country the place is in. Guaranteed to be a two-letter lower-case string. If the place is not inside any country, the property is set to None.
#### `name: Optional[Mapping[str, str]]` *(property)*

A dictionary with the names of the place. Keys and values represent the full key and value of the corresponding OSM tag. Which tags are saved as names is determined by the import style. The property may be None if the place has no names.
#### `rank_address: int` *(property)*

The address rank of the place before any rank correction is applied.
#### `is_a(key, value)`

Return True when the place's primary tag corresponds to the given key and value.

#### `is_country()`

Return True when the place is a valid country boundary.
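The following sketch shows how these accessors might typically be used to decide whether a place is of interest; the selection criteria are arbitrary examples:

```python
def create(config):
    def _sanitize(obj):
        place = obj.place
        # Skip places without country context, country boundaries and
        # anything that is not a bus stop of street-level rank.
        if place.country_code is None or place.is_country():
            return
        if not place.is_a('highway', 'bus_stop') or place.rank_address < 26:
            return
        for name in obj.names:
            name.name = name.name.strip()

    return _sanitize
```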
### PlaceName - extended naming information

Each name and address part of a place is encapsulated in an object of this class. It saves not only the name proper but also describes the kind of name with two properties:

* `kind` describes the name of the OSM key used without any suffixes (i.e. with the part after the colon removed)
* `suffix` contains the suffix of the OSM tag, if any. The suffix is the part of the key after the first colon.

In addition to that, a name may have arbitrary additional attributes. How attributes are used depends on the sanitizers and token analysers. The exception is the 'analyzer' attribute. This attribute determines which token analysis module will be used to finalize the treatment of names.
#### `clone(name=None, kind=None, suffix=None, attr=None)`

Create a deep copy of the place name, optionally with the given parameters replaced. In the attribute list only the given keys are updated. The list is not replaced completely. In particular, the function cannot be used to remove an attribute from a place name.
#### `get_attr(key, default=None)`

Return the given property or the value of 'default' if it is not set.
#### `has_attr(key)`

Check if the given attribute is set.
#### `set_attr(key, value)`

Add the given property to the name. If the property was already set, then the value is overwritten.
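To illustrate the attribute helpers, this sketch marks all names carrying a suffix with a made-up analyzer name, leaving already-tagged names alone:

```python
def create(config):
    def _sanitize(obj):
        for name in obj.names:
            # 'my-analyzer' is a placeholder; it would have to match a
            # token analysis entry in the tokenizer configuration.
            if name.suffix and not name.has_attr('analyzer'):
                name.set_attr('analyzer', 'my-analyzer')

    return _sanitize
```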
### Example: Filter for US street prefixes

The following sanitizer removes the directional prefixes from street names in the US:

```python
import re

def _filter_function(obj):
    if obj.place.country_code == 'us' \
       and obj.place.rank_address >= 26 and obj.place.rank_address <= 27:
        for name in obj.names:
            name.name = re.sub(r'^(north|south|west|east) ',
                               '',
                               name.name,
                               flags=re.IGNORECASE)

def create(config):
    return _filter_function
```
This is the most simple form of a sanitizer module. It defines a single filter function and implements the required `create()` function by returning the filter.

The filter function first checks if the object is interesting for the sanitizer. Namely it checks if the place is in the US (through `country_code`) and if the place is a street (a `rank_address` of 26 or 27). If the conditions are met, then it goes through all available names and removes any leading directional prefix using a simple regular expression.
Save the source code in a file in your project directory, for example as `us_streets.py`. Then you can use the sanitizer in your `icu_tokenizer.yaml`:

```yaml
...
sanitizers:
    - step: us_streets.py
...
```
**Warning:** This example is just a simplified showcase on how to create a sanitizer. It is not really ready for real-world use: while the sanitizer would correctly transform `West 5th Street` into `5th Street`, it would also shorten a simple `North Street` to `Street`.
For more sanitizer examples, have a look at the sanitizers provided by Nominatim. They can be found in the directory `nominatim/tokenizer/sanitizers`.
## Custom token analysis module

The setup of the token analysis is split into two parts: configuration and analyser factory. A token analysis module must therefore implement the two functions described below.
#### `configure(rules, normalizer, transliterator)`

Prepare the configuration of the analysis module. This function should prepare all data that can be shared between instances of this analyser.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `rules` | `Mapping[str, Any]` | A dictionary with the additional configuration options as specified in the tokenizer configuration. | *required* |
| `normalizer` | `Any` | An ICU Transliterator with the compiled global normalization rules. | *required* |
| `transliterator` | `Any` | An ICU Transliterator with the compiled global transliteration rules. | *required* |

Returns:

| Type | Description |
|---|---|
| `Any` | A data object with configuration data. This will be handed as is into the `create()` function. |
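A hedged sketch of what such a `configure()` function might look like; the `replacements` option and its structure are invented for this example:

```python
def configure(rules, normalizer, transliterator):
    # Normalize the configured source strings once, so that the
    # per-thread analyser instances can share the prepared list.
    return [(normalizer.transliterate(src), repl)
            for src, repl in rules.get('replacements', [])]
```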
#### `create(normalizer, transliterator, config)`

Create a new instance of the analyser. A separate instance of the analyser is created for each thread when used in a multi-threading context.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `normalizer` | `Any` | An ICU Transliterator with the compiled normalization rules. | *required* |
| `transliterator` | `Any` | An ICU Transliterator with the compiled transliteration rules. | *required* |
| `config` | `Any` | The object that was returned by the call to configure(). | *required* |

Returns:

| Type | Description |
|---|---|
| `Analyzer` | A new analyzer instance. This must be an object that implements the Analyzer protocol. |
The `create()` function of an analysis module needs to return an object that implements the following functions.
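For orientation, the expected interface can be written down as a `typing.Protocol` sketch. This class is only an illustration of the functions described below; an analysis module does not have to import or subclass it:

```python
from typing import Any, List, Protocol

class Analyzer(Protocol):
    def get_canonical_id(self, name: Any) -> str:
        """'name' is a PlaceName as prepared by the sanitizers."""
        ...

    def compute_variants(self, canonical_id: str) -> List[str]:
        ...
```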
#### `compute_variants(canonical_id)`

Compute the transliterated spelling variants for the given canonical ID.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `canonical_id` | `str` | ID string previously computed with `get_canonical_id()`. | *required* |

Returns:

| Type | Description |
|---|---|
| `List[str]` | A list of possible spelling variants. All strings must have been transformed with the global normalizer and transliterator ICU rules. Otherwise they cannot be matched against the input by the query frontend. The list may be empty when there are no useful spelling variants. This may happen when an analyzer usually only outputs additional variants to the canonical spelling and there are no such variants. |
#### `get_canonical_id(name)`

Return the canonical form of the given name. The canonical ID must be unique (the same ID must always yield the same variants) and must be a form from which the variants can be derived.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `PlaceName` | Extended place name description as prepared by the sanitizers. | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | ID string with a canonical form of the name. The string may be empty when the analyzer cannot analyze the name at all, for example because the character set in use does not match. |
### Example: Creating acronym variants for long names

The following example of a token analysis module creates acronyms from very long names and adds them as a variant:

```python
class AcronymMaker:
    """ This class is the actual analyzer.
    """
    def __init__(self, norm, trans):
        self.norm = norm
        self.trans = trans

    def get_canonical_id(self, name):
        # In simple cases, the normalized name can be used as a canonical id.
        return self.norm.transliterate(name.name).strip()

    def compute_variants(self, name):
        # The transliterated form of the name always makes up a variant.
        variants = [self.trans.transliterate(name)]

        # Only create acronyms from very long words.
        if len(name) > 20:
            # Take the first letter from each word to form the acronym.
            acronym = ''.join(w[0] for w in name.split())
            # If that leads to an acronym with at least three letters,
            # add the resulting acronym as a variant.
            if len(acronym) > 2:
                # Never forget to transliterate the variants before returning them.
                variants.append(self.trans.transliterate(acronym))

        return variants


# The following two functions are the module interface.

def configure(rules, normalizer, transliterator):
    # There is no configuration to parse and no data to set up.
    # Just return an empty configuration.
    return None


def create(normalizer, transliterator, config):
    # Return a new instance of our token analysis class above.
    return AcronymMaker(normalizer, transliterator)
```
Given the name `Trans-Siberian Railway`, the code above would return the full name `Trans-Siberian Railway` and the acronym `TSR` as variants, so that searching would work for both.
## Sanitizers vs. Token analysis - what to use for variants?
It is not always clear when to implement variations in the sanitizer and when to write a token analysis module. Just take the acronym example above: it would also have been possible to write a sanitizer which adds the acronym as an additional name to the name list. The result would have been similar. So which should be used when?
The most important thing to keep in mind is that variants created by the token analysis are only saved in the word lookup table. They do not need extra space in the search index. If there are many spelling variations, this can mean quite a significant amount of space is saved.
When creating additional names with a sanitizer, these names are completely independent. In particular, they can be fed into different token analysis modules. This gives a much greater flexibility but at the price that the additional names increase the size of the search index.